Automated seizure diagnosis system based on feature extraction and channel selection using EEG signals
Athar A. Ein Shoka1,
Monagi H. Alkinani2,
A. S. El-Sherbeny3,
Ayman El-Sayed1 &
Mohamed M. Dessouky ORCID: orcid.org/0000-0003-2609-2225 1,2
Seizure is an abnormal electrical activity of the brain. Neurologists can diagnose seizures using several methods such as neurological examination, blood tests, computerized tomography (CT), magnetic resonance imaging (MRI) and electroencephalogram (EEG). Medical data, such as the EEG signal, usually include a number of features and attributes that do not contain important information. This paper proposes an automatic seizure classification system based on extracting the most significant EEG features for seizure diagnosis. The proposed algorithm consists of five steps. The first step is channel selection, which reduces dimensionality by selecting the most affected channels using the variance parameter. The second step is feature extraction, which extracts the most relevant features, 11 features, from the selected channels. The third step is to average the 11 features extracted from each channel. The fourth step is the classification of the averaged features. Finally, the fifth step is cross-validation and testing of the proposed algorithm by dividing the dataset into training and testing sets. This paper presents a comparative study of seven classifiers. These classifiers were tested using two different methods: random case testing and continuous case testing. In the random case testing, the KNN classifier had higher accuracy, specificity and positive predictivity than the other classifiers, but the ensemble classifier had a higher sensitivity and a lower miss rate (2.3%) than the other classifiers. For the continuous case testing method, the ensemble classifier had higher metric parameters than the other classifiers. In addition, the ensemble classifier was able to detect all seizure cases without any mistake.
Epilepsy is a central nervous system (neurological) condition that causes irregular brain activity, seizures or periods of unusual behavior, sensations and sometimes loss of consciousness. Seizure symptoms may vary greatly. Some people with seizures simply stare blankly for a few moments during a seizure, while others repeatedly move their arms or legs. Having a single seizure does not mean you have epilepsy. The diagnosis of epilepsy usually requires at least two unprovoked seizures.
Neurologists can diagnose the seizure using several methods such as neurological examination, blood testing, electroencephalogram (EEG), computerized tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and single-photon emission computerized tomography (SPECT) [1]
Neurological exam: focuses on the patient's behavior, motor skills and mental activity to assess the patient's brain and nervous system.
Blood test: checks for signs of infection, medical defects and blood sugar levels.
Electroencephalogram (EEG) test: the neurologist attaches electrodes to the patient's scalp using a paste-like material. The electrical activity of the brain is recorded by the electrodes.
Computerized tomography (CT) scan: can show anomalies in the patient's brain that may cause seizures, including tumors, bleeding and cysts.
Magnetic resonance imaging (MRI) test: detects lesions or defects in the patient's brain that may induce seizures.
Positron emission tomography (PET) scan: helps visualize the brain's active areas and detect anomalies.
Single-photon emission computerized tomography (SPECT): can provide even more accurate results.
Early Seizure detection is critical in the medical field. Humans are undergoing various kinds of stress in their daily lives, and many of them are suffering from different neurological disorders. The World Health Organization (WHO) has announced that epilepsy is one of the most common diseases of nearly 50 million people worldwide, with more than 75% living in developing countries with little or no access to scientific services or treatment [2]. Recurrent seizures related to sudden sporadic neuronal releases in the cerebrum are described as epilepsy [3, 4].
Epileptic seizures are among the most common disorders of the central nervous system. They result from sudden and unexpected electrical disturbances of the brain, an electrical discharge produced by a group of brain cells. Individuals suffering from epilepsy have different symptoms, such as unusual sensations, twitching of the arms, changes in vision, hearing or smell, or unexpectedly seeing things, so that they are unable to perform regular tasks, although often patients do not have any physical symptoms between seizures [5, 6].
Epilepsy leads to shivering and sudden movements, and can even cause patients to lose their lives. Accurate automatic detection of epileptic seizures is therefore exceptionally important. Epilepsy requires a reliable and accurate strategy to predict seizure events in order to make patients' lives less complicated [7, 8].
The main contribution of this work can be summarized as follows:
Building an automated Seizure detection system based on classifying the most significant extracted features using EEG signals.
Selecting the most affected channels from the CHB-MIT EEG signal dataset [9] and extracting the most relevant features from the selected channels.
Evaluating the performance of seven classifiers on the CHB-MIT dataset.
Testing the seven classifiers using cross-validation.
Calculating the performance metric parameters (accuracy, sensitivity, specificity, F1-score, fall-out and miss rate) for the seven classifiers.
The proposed model is based on a machine learning approach to achieve the objectives of this study. The main objective of this paper is automatic seizure detection by extracting the most significant features from the CHB-MIT EEG signal dataset and classifying the EEG signal as seizure or normal using seven classifiers.
This paper is organized as follows: Sect. 2 presents the previous and related work. Section 3 provides a description of the proposed algorithm. The EEG signal data set is described in Sect. 4. Section 5 lists the evaluation of performance metrics. The results will be presented in Sect. 6. The end of the paper is the conclusion and the references.
Electroencephalography (EEG) is one of the primary modalities regularly used for remote epileptic seizure detection. It had become an inexpensive and non-invasive stage to investigate the inconspicuous quality of the disease. Seizure is a characterizing property of epilepsy that reflects abnormal periods of activity in the EEG [10].
Machine learning (ML) is the fastest-growing area of computer science and, in particular, of health informatics. ML's goal is to develop algorithms that can learn and improve over time and can be used for predictions. The overall aim is to build and develop algorithms that can automatically learn from data and therefore improve with experience over time without any human in the loop [11].
ML is a very practical field of AI, with the purpose of producing software that can automatically learn from existing data, gain experience and keep improving its decisions when presented with new data. ML can be seen as the workhorse of AI, and the deployment of data-intensive ML algorithms can now be observed everywhere, across science, engineering and business, leading to much more evidence-based decision-making. There is a massive demand in medicine for AI algorithms that not only perform excellently, but are also reliable, consistent, easy to interpret and understandable to a human expert [12].
Explainability of AI may greatly improve the confidence of healthcare experts in future AI systems. Designing explainable AI systems for use in medicine requires expertise across a variety of ML and human–computer interaction methods. There is an intrinsic trade-off between the performance of ML (predictive accuracy) and the ability to explain it. Often the best-performing methods are the least transparent, and those that offer a simple explanation are less accurate [12].
Several algorithms were designed for early diagnosis of epilepsy using an EEG signal. The problem here is that a number of features may be irrelevant and dispensable. For example, different algorithms were used to extract critical features such as auto-regressive (AR) modelling [13], principal component analysis (PCA) [14], empirical mode decomposition (EMD) [15], and statistical feature techniques [16]. Statistical feature extraction, in particular, has been widely used in several algorithms to improve performance [17, 18].
The EEG signal consists of multi-channel signals that carry a lot of repetitive data, with an additional source of noise that may reduce the accuracy of the classification. Channel selection is an important step that effectively avoids redundant channels, eliminates calculations, particularly in real-time applications, and selects the ideal classification channels. Channel selection is a significant method for reducing the number of channels, not including distinguishing information, and also for reducing noise [19, 20].
Several algorithms have used the concept of channel selection in different ways. One algorithm combined the advantages of both feature enhancement and channel selection to improve the performance of the detector [21]. Another algorithm investigated reducing the electrode montage by using only nine electrodes instead of all 23 electrodes [22]. Various algorithms selected EEG channels to reduce power consumption in the detection process without affecting accuracy. Criteria such as variance, variance difference, entropy, random selection and extra-focal channels, as well as physician choice, are also used and yield valid selections; variance is one of the most commonly used channel selection criteria [23, 24].
This paper focuses on the collection and extraction of features widely used in a number of previous works. These features include the standard deviation, mean, variance, median, kurtosis, skewness, entropy, moment, power, maximum and minimum of the EEG signal. All of these features are divided into three categories: the first category is statistical features such as mean, standard deviation, variance, skewness, median and kurtosis. The second category is amplitude-related features that include energy, power, maximum and minimum of the EEG signal, and the third category is the entropy-related feature. These features can be classified on the basis of their description or the domain in which the attributes are determined. Many researchers have found a basic group of attributes appropriate to their suggested classification system, while some have introduced different groups of variables obtained from the time, frequency and time–frequency domains [25,26,27,28,29,30].
Classification is the process of identifying groups or classes based on similarities between them. This step is essential to distinguish between seizure itself—the ictal period—and the normal non-ictal period. Several algorithms have been used as a classifier such as artificial neural network (ANN) [31], support vector machine (SVM) [32], ensemble [33], K-nearest neighbors (KNN) [34, 35], linear discriminant analysis (LDA) [36, 37], logistic regression [38], decision tree [39], and Naïve Bayes [40, 41].
Several previous algorithms used only one classifier to classify seizure activity from the EEG signal. Others used more than one algorithm and compared the results of each classifier. In [31], the authors extracted 12 features from the EEG seizure signal and fed them into four different classifiers: artificial neural network (ANN), least squares support vector machine (LS-SVM), random forest and Naïve Bayes. In [32], the authors classified the EEG seizure signal using two levels of classifiers: the first level is an SVM classifier and the second level is a Naïve Bayes classifier. The ensemble algorithm was used for seizure classification in [33]. In [37], the authors compared the results of three classifiers, quadratic discriminant analysis (QDA), K-nearest neighbors (KNN) and linear discriminant analysis (LDA), to classify the EEG seizure signals. In [38], the authors first extracted the significant seizure features from the EEG signal using the wavelet transform, and then these features were classified using an artificial neural network (ANN) and logistic regression (LR). The decision tree classifier was used for the classification of epilepsy in [39].
The cross-validation step is an essential step before classification, which provides an accurate indication of the performance of the classifier. Cross-validation divides the extracted features into training and test sets. In the first step, the dataset information is grouped into two classes (seizure and normal) for model learning. In the second step, the model of the preceding step is used for classification [42].
The efficiency of the proposed seizure diagnostic system was measured by calculating multiple output metric parameters. Numerous previous models tested the effects of their algorithm by measuring accuracy, sensitivity, specificity and computational complexity, the latter calculated as relative values using the computational cost needed to produce each feature [25]. Several performance metric parameters were calculated in this paper, such as accuracy, sensitivity, specificity, F1-score, fall-out and miss rate.
Proposed approach
The main objective of this paper is to automatically predict epileptic seizures from EEG signals. The EEG records both ictal and non-ictal cases, and the proposed approach should detect every seizure state; misclassifying a normal state as a seizure in limited cases is acceptable, since this ensures that complications are avoided. The opposite case, missing a seizure, is not acceptable.
Essentially, the proposed approach relies on the recognition of epilepsy from a short-term EEG signal. The proposed approach consists of five stages: channel selection, feature extraction, average, cross-validation and classification. All of these steps are shown in Fig. 1.
Block diagram shows the steps of the proposed algorithm
The EEG signal consists of 23 channels generated by the electrodes which are attached to the scalp. These channels make the calculations more complex and increase the system load. Due to these limitations, the channel selection step is very important. The selected channels are used as input to the feature extraction step. The third step is the averaging step, where the features extracted from the selected channels are averaged. Finally, the averaged features are used as the classifier input. Several performance metric parameters were measured to evaluate the performance of the seven classifiers, to compare them and to determine which one has the best performance.
The main steps of the proposed algorithm are:
Variance channel selection, which is used for dimensionality reduction by selecting the most affected channels using the variance parameter.
Feature extraction and averaging, which is used to extract the most significant features, eleven features, from the selected channels. Then the extracted features are averaged across the selected channels.
Classifying the averaged features to distinguish between normal and seizure signals for better detection and diagnosis of seizures. This step assigns groups or classes based on similarities between them and is important to distinguish between the seizure itself—the ictal period—and the normal non-ictal period.
Cross-validation to divide the extracted features from the CHB-MIT EEG signal dataset [9] into training and testing sets. This step is an important step which gives a precise indication of the performance of the classifier. Cross-validation consists of two stages; training and testing. During the training process, the dataset information is grouped into two classes (seizure and normal) to learn the model. During the testing process, the trained model is used to assess new signals to identify them as seizure or normal.
Performance evaluation of the proposed approach against existing algorithms. In this step, the proposed approach is applied to two separate methods, random case testing and continuous case testing, to measure the accuracy, sensitivity, specificity, F1-score, fall-out and miss rate of the proposed approach.
Variance channel selection
The channel selection step is designed to select the channels most affected by the seizure. Variance is chosen as the channel selection method, because experiments show that automatic seizure detection can be performed using only three channels, selected on the basis of maximum variance, without loss of performance. The variance is calculated for all channels; according to this feature, the channels are selected, and the other features are then calculated for the selected channels only.
This step is essential to reduce the processing load and time. Each channel generates 11 features, so without selection the total number of input nodes for each model would be 11 × 23, which makes training and testing take a long time.
A simple method for selecting channels for feature extraction and classification is the variance of the EEG signal amplitude, since automatic seizure detection can be performed using only three channels without loss of performance. These channels are selected on the basis of the maximum variance.
The variance (V) of sample i in channel c of the training data t is calculated using Eq. (1) [24]:
$${V}_{ict}\left(c\right)=\frac{1}{k}\sum_{i=1}^{k}{\left({X}_{c}\left(i\right)-{\mu }_{c}\right)}^{2},$$
where \(c\) is the channel, \({X}_{\mathrm{c}}\) is the data on seizure training, \({\mu }_{\mathrm{c}}\) is the mean of seizure training data, k is the number of samples of seizure training data.
The selection of channels based on the highest values of \({V}_{ict}(c)\) is calculated using Eq. (2) [24]:
$$\mathrm{chosen}\, \mathrm{channel}=\mathrm{max}({V}_{ict}(c))$$
At the end of this step, the highest three channels with maximum variance are selected to extract the significant features from only those three channels.
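A minimal sketch of this variance-based selection is shown below, assuming one 10-s window is held as a NumPy array of 23 channels by samples; the array contents here are random stand-ins for real EEG data.

```python
import numpy as np

def select_channels_by_variance(window: np.ndarray, n_keep: int = 3) -> np.ndarray:
    """Return the indices of the n_keep channels with the largest variance (Eqs. 1-2)."""
    variances = window.var(axis=1)                  # Eq. (1): per-channel variance
    return np.argsort(variances)[::-1][:n_keep]     # Eq. (2): channels with maximum variance

# Example: one 10-s window of 23 channels sampled at 256 Hz
rng = np.random.default_rng(0)
window = rng.normal(size=(23, 2560))
print("Selected channels:", select_channels_by_variance(window))
```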
Feature extraction and averaging
Feature extraction is a specific form of dimensionality reduction. Feature extraction is a general term for the methods used to construct combinations of variables. In this step, some distinctive features are extracted from the selected EEG signal channels. These features have the most influence on the shape of the signal and are extracted from a ten-second segment of the EEG signal. Various types of features can be extracted from the EEG signal.
These features are the standard deviation, mean, variance, median, kurtosis, skewness, entropy, moment, power, maximum and minimum EEG signals defined as [17, 18, 30]:
Standard deviation: measures the spread of the EEG signal around its mean and is the square root of the variance; it is calculated from the following equation, where D is the signal, N is the number of samples and µ is the mean.
$$\sigma =\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}{\left({D}_{i}-\mu \right)}^{2}}$$
Mean: is the basic statistical measure and is calculated from the following equation, where \(i=1, 2, 3,\dots\) and D is the signal.
$${\mu }_{i}=\frac{1}{N}\sum_{j=1}^{N}{D}_{ij}.$$
Variance: is obtained by squaring the standard deviation.
$$v={\sigma }^{2}$$
Median: is a simple measure of the central tendency, obtained by sorting the N samples of the signal and taking the middle value (or the average of the two middle values when N is even).
Kurtosis: measures the height of the probability density function (PDF) of the time series.
$$k=\frac{E(x-\mu {)}^{4}}{{\sigma }^{4}}$$
Skewness: represents the symmetry of the PDF of the amplitude of the time series.
$$s=\frac{E(x-\mu {)}^{3}}{{\sigma }^{3}}.$$
Entropy: is a numerical measure of the randomness of the signal.
$$E\left(s\right)={\sum }_{i}E({s}_{i})$$
Moment: the central moment of the EEG signal of a given order, written here in MATLAB-style notation.
$$m=\mathrm{moment}\left(x,\mathrm{ order}\right)$$
Maximum EEG signal: returns the max point in the signal.
$$M= \mathit{max} (D)$$
Minimum EEG signal: returns the min point in the signal.
$$m=\mathrm{min }(D)$$
EEG signal power: is calculated from these equations.
$$f=fft(s)$$
$$\mathrm{pow}=\mathrm{sum }\left({f}^{\prime}*\mathrm{conj}\left(f\right)\right).$$
Each of the three channels selected produces 11 features, so that the input for each model would be 11 × 3 for each case, which would affect the calculations in real time and could prolong the classification time. Averaging the values of the extracted features would reduce the number of input nodes of the model and eliminate the processing load and the time of classification [8].
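As an illustration, the sketch below computes the eleven features for each selected channel and averages them, assuming the three selected channels are the rows of a NumPy array. The histogram-based Shannon entropy, the third-order central moment and the bin count are assumptions, since the paper only gives the feature names and MATLAB-style formulas.

```python
import numpy as np
from scipy.stats import skew, kurtosis, entropy

def extract_features(x: np.ndarray) -> np.ndarray:
    """Compute the 11 features of one channel (1-D signal x)."""
    hist, _ = np.histogram(x, bins=64)                 # bin count is an assumption
    p = hist / hist.sum()                              # empirical amplitude distribution
    f = np.fft.fft(x)
    power = np.sum(f * np.conj(f)).real                # pow = sum(f .* conj(f))
    central_moment_3 = np.mean((x - x.mean()) ** 3)    # assumed order of the "moment" feature
    return np.array([
        np.std(x, ddof=1),                             # standard deviation (1/(N-1) form)
        np.mean(x), np.var(x, ddof=1), np.median(x),
        kurtosis(x, fisher=False), skew(x),
        entropy(p),                                    # Shannon entropy of the histogram
        central_moment_3, power, np.max(x), np.min(x),
    ])

def averaged_features(channels: np.ndarray) -> np.ndarray:
    """Average the 11-feature vectors of the selected channels (shape (3, n))."""
    return np.mean([extract_features(ch) for ch in channels], axis=0)

# Example with random data standing in for three selected channels
rng = np.random.default_rng(0)
print(averaged_features(rng.normal(size=(3, 2560))))
```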
Classification is a technique in which the data are assigned to a set of classes. The key purpose of classification is to identify the class to which the data belong. Classification algorithms are divided into supervised, unsupervised and semi-supervised algorithms. Supervised algorithms are based on training and testing the data. The training data are labelled, and the labels are passed to the model during learning; this labelled dataset is used to train the model to produce meaningful outputs when new data are processed. Unsupervised algorithms classify data without training the classifier: there are no labels, no prior training and no separate test data, and instead of predicting known targets they rely on clustering. Semi-supervised algorithms are a mix of supervised and unsupervised algorithms, so some data are labelled and others are not; such algorithms can be applied to both labelled and unlabelled data and can learn a classifier from complete or partially missing training labels.
Seven different classifiers were tested in this paper to find the classifier with the highest performance. The classifiers used are support vector machine (SVM), ensemble, K-nearest neighbors (KNN), linear discriminant analysis (LDA), logistic regression, decision tree and Naïve Bayes. These classifiers were chosen for their high classification speed, small or medium memory usage and ease of interpretation.
Support vector machine (SVM)
In recent years the support vector machine (SVM) classifier has become increasingly popular in many applications as a result of its superior performance. The goal of a two-class SVM classifier is to create a hyperplane that maximizes the margin, i.e., the distance between the nearest points on either side of the boundary; these points are known as support vectors. The SVM algorithm can be a linear classifier, where the class separation is a straight line, or a nonlinear classifier, where the class boundary is a curve; a soft-margin formulation may be used in cases where there is no hyperplane capable of separating the data exactly [42, 43].
Ensemble
Ensemble classifiers combine a variety of classifiers to boost classification performance and are well suited to multi-class, time-varying EEG signal grouping. Ensemble methods are suitable for EEG classification for the following two reasons. First, the EEG signal dimension is always large and one of the preconditions is to train the classifier as quickly as possible, so the training set must be kept small. Second, EEG is a time-varying signal, and it is therefore unsafe to use a single trained classifier to identify the classes of unseen (incoming) objects. Despite these benefits, ensemble methods have not yet gained a strong foothold in this area and relatively few studies exist [33].
K-nearest neighbor (KNN)
The K-nearest neighbor algorithm is a technique that assigns a signal to one of several groups. The result is determined by the votes of its K nearest neighbors, which is the origin of the name K-nearest neighbor [34].
Linear discriminant analysis (LDA)
Linear discriminant analysis (LDA) is used to find a particular combination of features that can help distinguish two or more groups. LDA chooses a direction that offers optimum linear class separation and groups objects of the same category on the basis of their characteristics. The purpose of this analysis is to find the appropriate discriminant function which divides the classes. If the number of classes is 2, the discriminant function is a line; when the number of classes is 3, the function is a plane; and for more than 3 classes the discriminant function is a hyperplane. The training set is used to determine the parameters of the discriminant function [44].
Logistic regression (LR)
Logistic regression (LR) is a commonly applied predictive modeling method in which the probability of a dichotomous outcome is linked to a number of explanatory variables. Logistic regression imposes less strict requirements than ordinary linear regression (OLR): it does not assume a linear relationship between the explanatory variables and the response variable, and it does not require Gaussian-distributed independent variables. Logistic regression models the variation in the logarithm of the odds of the response variable, rather than the variation of the dependent variable itself, as OLR does. Although the logarithm of the odds is linear in the explanatory variables, the relationship between the outcome and the explanatory variables is not linear [38].
Decision tree (DT)
The key goal of decision tree (DT) is to integrate the interpretations of the risk level of epilepsy with maximum recognition accuracy. It also has benefits such as ambiguity management, reputation and comprehensibility. These types of trees are widely recommended for post-classification and processing. The basic representation of DT optimization is clarified with the initial statement, W = [Pi, j] as a co-occurrence matrix where (i, j) is the overall set of items representing the dimensionally decreased values of a specific epoch containing (20 × 16) items [39].
Naïve Bayes (NB)
A Naïve Bayes (NB) classifier is a probabilistic algorithm based on Bayes' theorem which assumes that each feature of a given class is independent of every other feature. Predictions of the occurrence or omission of a particular class are determined by the highest probability. The NB algorithm needs less training data for classification [41].
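For concreteness, a hedged sketch of how the seven classifiers could be instantiated with scikit-learn is shown below. The hyper-parameters are library defaults and the random forest is only a stand-in for the (unspecified) ensemble method; the paper does not report the exact implementations used.

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier  # stand-in for the ensemble classifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

classifiers = {
    "SVM": SVC(),
    "Ensemble": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Decision tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
}
```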
Cross-validation and testing
The cross-validation step is essential for validating the performance of the learning algorithm. Cross-validation is used to rate the performance of the classifier by dividing the full dataset into a training set and a test set. The classifier is trained on the training set, and the trained model is then tested on the test set. K-fold cross-validation is one of the most common methods; it divides the dataset into K subsets of equal size. For each validation, K − 1 folds are used for training and the remaining fold is used for testing. The procedure loops K times, and at each iteration a different subset is chosen as the new test set, ensuring that all samples are included at least once in the test set. If K equals the size of the training set, only one sample is left out at each validation run; this case is called leave-one-out cross-validation (LOOCV). The proposed algorithm uses the K-fold method with five subsets [45, 46].
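A minimal sketch of the 5-fold cross-validation step is given below, using synthetic data in place of the 250 training samples of 11 averaged features; any of the classifiers listed above can be substituted for the KNN model.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 11))      # 250 samples x 11 averaged features (synthetic stand-in)
y = rng.integers(0, 2, size=250)    # 0 = normal, 1 = seizure (synthetic labels)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(KNeighborsClassifier(), X, y, cv=cv)
print("Fold accuracies:", scores, "mean:", scores.mean())
```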
Evaluation metrics parameters
This section presents the metric parameters that will be used to measure the performance of the classifier. Each classifier is tested using 30 samples (containing both seizures and normal samples). To test the results, the true positive, the true negative, the false positive and the false negative are defined as [47]:
True positive (TP): positive (patient) samples correctly classified as positive (patient) samples.
False positive (FP): negative (normal) samples incorrectly classified as positive (patient) samples.
True negative (TN): negative (normal) samples correctly classified as negative (normal) samples.
False negative (FN): positive (patient) samples incorrectly classified as negative (normal) samples.
The parameters TP, TN, FP, and FN will be used to calculate the metric parameters that will be used to measure the performance of the different classifiers. These parameters are: [47, 48]
$$\mathrm{Accuracy}= \frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{FP}+\mathrm{TN}+\mathrm{FN}}\times 100\mathrm{\%}$$
$$\mathrm{sensitivity}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}\times 100\mathrm{\%}$$
$$\mathrm{specificity}=\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}}\times 100\mathrm{\%}$$
$$\mathrm{fall}{\text -}\mathrm{Out}=\frac{\mathrm{FP}}{\mathrm{TN}+\mathrm{FP}}$$
$$\mathrm{Miss}{\text -}\mathrm{Rate}=\frac{\mathrm{FN}}{\mathrm{TP}+\mathrm{FN}}$$
$$\mathrm{positive\,predictivity}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$$
$$\mathrm{Recall}=\frac{{T}_{P}}{\left({T}_{P}+{F}_{N}\right)}$$
$$\mathrm{Precision}=\frac{{T}_{P}}{\left({T}_{P}+{F}_{p}\right)}$$
$$F{\text -}{\rm Measure}=2 \times \frac{(\mathrm{Precision }\times \mathrm{ Recall})}{(\mathrm{Precision }+\mathrm{ Recall})}$$
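The metric parameters above can be computed directly from the four confusion-matrix counts; the sketch below does this, using the KNN counts quoted later from Fig. 3 as example inputs.

```python
def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the metric parameters defined above from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # also called recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)              # also called positive predictivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "positive predictivity": precision,
        "fall-out": fp / (tn + fp),
        "miss rate": fn / (tp + fn),
        "F1-score": 2 * precision * sensitivity / (precision + sensitivity),
    }

# Example: the KNN confusion-matrix counts reported in the results section
print(evaluation_metrics(tp=118, tn=91, fp=22, fn=20))
```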
Data set description
The database used in this study is the CHB-MIT EEG dataset. The CHB-MIT scalp EEG database was collected at Boston Children's Hospital in December 2010. The dataset consists of 23 cases from 22 patients: 5 males between 3 and 22 years of age and 17 females between 1.5 and 19 years of age. The signals were sampled at 256 samples per second with 16-bit resolution [9, 49].
The model presented consists of two steps. The first step is to train the classifier, and the second step is to test. In the first step, 250 samples were used to train all seven classifiers.
During the testing phase, the seven classifiers were tested using two methods. In the first method, the classifiers were tested using a randomized dataset taken from several patients, consisting of 80 samples, each a 10-s window. The proposed algorithm takes only a 10-s window to speed up the prediction process and improve the accuracy of the detection process, instead of taking the whole duration of the seizure and normal EEG signal. In addition, this reduces the number of features, since the whole EEG signal contains far more data than a 10-s window.
The second test method is a continuous dataset test, taken from only one patient at different times. In total, 82 samples were taken from only one patient over a variety of seizure and normal events, for example:
If a seizure event extends from 2996 to 3026 s, short periods of 10 s are taken from 2996 to 3006, 3006 to 3016, and 3016 to 3026 s; these are three samples.
If a seizure event extends from 1862 to 1902 s, short periods of 10 s are taken from 1862 to 1872, 1872 to 1882, 1882 to 1892, and 1892 to 1902 s; these are four samples, and so on (see the sketch below).
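As referenced above, a minimal sketch of slicing a seizure event into consecutive 10-s windows is shown below; the 256 Hz sampling rate is the CHB-MIT rate, while the array shapes are placeholder assumptions.

```python
import numpy as np

FS = 256            # CHB-MIT sampling rate (samples per second)
WIN = 10 * FS       # 10-s window length in samples

def seizure_windows(eeg: np.ndarray, start_s: int, end_s: int):
    """Yield successive 10-s windows of a (channels, samples) recording covering [start_s, end_s) seconds."""
    for t in range(start_s, end_s, 10):
        yield eeg[:, t * FS : t * FS + WIN]

# e.g. a seizure from 2996 s to 3026 s gives three 10-s samples
eeg = np.zeros((23, 3030 * FS), dtype=np.float32)   # placeholder recording
print(len(list(seizure_windows(eeg, 2996, 3026))))  # -> 3
```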
The algorithm proposed consists of five steps, as described above. The first step is the selection of the channel, then the extraction of the features; the third step is the average, the fourth step is the cross-validation and, finally, the classification step.
The sample of 10 s consists of 23 channels. The variance parameter will be calculated for each channel. Only three channels with the highest variance parameter will be selected in this step. Only one sample, including 23 channels, is shown in Table 1 and the variance parameter for each sample has been calculated. The three highest variance parameters are for channels 2, 6, and 21. The selected channels will be on channels 2, 6 and 21. The same step is applied to all other samples.
Table 1 Calculation of the variance for the 23 channels and channel selections
After selecting the three highest channels, the features will be extracted from these three channels. Eleven features with standard deviation, mean, variance, median, kurtosis, skewness, entropy, moment, power, maximum and minimum EEG signals will be extracted from the three channels. Table 2 shows the extracted features of the three channels selected from the previous step. The same step shall be applied to all other samples.
Table 2 Extracted features of selected channels
The extracted features of the three channels will be averaged into only one value per feature. This method is used to reduce the number of features in each sample, so each sample is represented with only 11 features rather than the 33 features of the preceding step. Table 3 presents the averaging step for the sample shown in Table 2.
Table 3 Averaged extracted features of the three selected channels
The same step will be applied to all other samples. So, each sample will be represented by only 11 features. This step had been applied to the training samples and also the testing samples. First, 250 samples will be used for the training of the seven classifiers. Some of these training samples are shown in Table 4.
Table 4 Some of the training samples after averaging step
The extracted and averaged features will be used as an input to the seven classifiers that are support vector machine (SVM), ensemble, K-nearest neighbors (KNN), linear discriminant analysis (LDA), logistic regression, decision tree, and Naïve Bayes. The performance of each classifier will be calculated by measuring several performance metric parameters.
In this step, the data will be divided into training and testing sets. Two hundred and fifty samples will be used to train all classifiers. The testing will be carried out using two methods, random and continuous.
A vital data observation step should be performed before starting the training of the classifiers; this gives an indication of the spread of normal samples and seizure samples, and the data can be observed using any of the 11 features. The mean feature was chosen to plot this data observation, as shown in Fig. 2.
Dataset distribution a dataset with abnormal points, b dataset without abnormal points
From Fig. 2a, the normal blue samples are concentrated on the left side, and the red Seizure samples are concentrated on the right side.
Two normal samples were located on the right side between seizure samples, as indicated by the arrow in Fig. 2a. These abnormal samples may be genuine samples or noise samples. Figure 2b shows that they are noise samples, which are removed from the training samples. Table 5 shows the accuracy of the seven classifiers (in column 1) with the abnormal samples included (column 2) and removed (column 3).
Table 5 Training results accuracy with filtered and unfiltered samples
After training, the seven classifiers will be tested using two methods: random and continuous test sets. The randomized test dataset consists of samples from different patients whose EEG segments are not contiguous; this means that the samples are taken from different periods of time.
Figure 3 shows the confusion matrix for each classifier. The x-axis presents the classes predicted by the classifier and the y-axis the true values. The confusion matrix is essential for the representation of the samples. For example, the KNN classifier correctly predicted 118 samples as seizure and incorrectly predicted 20 seizure samples as normal. In addition, the KNN correctly predicted 91 samples as normal and incorrectly predicted 22 normal samples as seizure. The confusion matrices of all classifiers are presented in Fig. 3.
Confusion matrices for all classifiers
Table 6 and Fig. 4 present 11 metric evaluation parameters, which are true positive (TP), true negative (TN), false positive (FP), false negative (FN), accuracy, sensitivity, specificity, positive predictivity, F1 score, Fall-Out, and Mis-Rate, which were measured by all seven classifiers.
Table 6 Evaluation of the metric parameters for the seven classifiers of the random case testing method
Evaluation of the metric parameters for the seven classifiers of the random case testing method
The second testing method is continuous testing data. Continuous means that the test samples are connected to the same person. Data were collected from only one patient with connected EEG signals. Table 7 and Fig. 5 show 11 metric evaluation parameters that were measured for all seven classifiers.
Table 7 Evaluation of the metric parameters for the seven classifiers of the continuous case testing method
Evaluation of the metric parameters for the seven classifiers of the continuous case testing method
The performance of the classifiers was graphically depicted to display the efficiency and accuracy of each classifier using a continuous test method. Figure 6 displays this graphical representation of the proposed algorithm based on ensemble classifier. In this figure, different samples are evaluated by the ensemble classifier. If the samples are shown above zero, this means that the samples are Seizure and the normal samples are shown below zero. The blue samples are the actual results of the samples and the red samples are the predicted samples of the proposed ensemble classifier approach.
Actual and predicted samples of the ensemble classifier in continuous test
The proposed approach based on the ensemble classifier correctly predicted the samples with numbers 1, 2, 3, 4, 5, 6, 11, 12, 13, 14, 15, 16 and 17 as seizure and they were also actual Seizure. In addition, the proposed approach based on the ensemble classifier correctly predicted the samples with numbers 7, 8, 9, 10, and 20 as normal and they were also normal. The ensemble incorrectly predicted that the two samples with numbers 18 and 19 as a seizure sample, but they were normal. Figure 6 shows all of the other samples.
The results obtained from Tables 6 and 7 and Figs. 4 and 5 present 11 metric evaluation parameters, namely true positive (TP), true negative (TN), false positive (FP), false negative (FN), accuracy, sensitivity, specificity, positive predictivity, F1 score, fall-out and miss rate, for the seven classifiers: support vector machine (SVM), ensemble, K-nearest neighbors (KNN), linear discriminant analysis (LDA), logistic regression, decision tree and Naïve Bayes. Table 6 and Fig. 4 show the metric evaluation parameters for the seven classifiers in random case testing: the KNN classifier is better than the others in accuracy, specificity and positive predictivity, but the ensemble is better than the others in sensitivity and miss rate. The proposed algorithm based on the KNN classifier has a high miss rate (13.9%) compared to the proposed algorithm based on the ensemble classifier (2.3%). Table 7 and Fig. 5 present the metric evaluation parameters for the proposed algorithm based on each of the seven classifiers in the continuous case testing method; here the proposed algorithm based on the ensemble classifier is better than the proposed algorithm based on the other classifiers in all metric parameters. Figure 6 shows that the proposed algorithm based on the ensemble classifier detected all seizure cases without any error, but there is some difficulty in detecting all normal cases.
This paper proposed a computer-aided seizure diagnosis and classification system based on feature extraction and channel selection using EEG signals. The proposed approach is evaluated under different experimental circumstances on the CHB-MIT dataset. The proposed approach is based on five steps. The first step is channel selection, calculating the variance parameter for each channel, as each sample consists of 23 channels; the three channels with the highest variance are selected. The second step is to extract eleven features from the three selected channels and then average these extracted features over the three channels to only one value per feature, so that each sample is represented by only 11 features. The third step is classification, where seven classifiers were used and tested: support vector machine (SVM), ensemble, K-nearest neighbors (KNN), linear discriminant analysis (LDA), logistic regression, decision tree and Naïve Bayes. The fourth step is cross-validation and testing, where the data are divided into training and testing sets using fivefold cross-validation. The training set consisted of 250 samples, each represented by the 11 features generated from the first two steps. The testing was carried out using two different methods: the first was random case testing, in which the EEG samples were collected from different patients, and the second was continuous case testing, in which the EEG samples were collected from only one patient. The last step is the evaluation of the classifiers and measuring their performance. In the first, random case testing method, the proposed approach based on the KNN classifier is better than the other classifiers in accuracy, specificity and positive predictivity, but the proposed approach based on the ensemble classifier is better in other metric parameters such as sensitivity and miss rate. The KNN has a high miss rate (13.9%) compared to the ensemble (2.3%). In the second, continuous case testing method, the proposed approach based on the ensemble classifier performs better than the other classifiers in all metric parameters.
The proposed approach based on the ensemble classifier predicted all seizure cases without any error, but there is some difficulty in predicting all normal cases; this will be addressed in future work and compared with several previous algorithms. In addition, an IoT system based on a cloud framework will be proposed to collect, store and analyze data from patient wearable devices with scalability to millions of users. Finally, an approach based on deep learning will be proposed for accurate detection of seizures over the different available datasets.
Mayo Clinic (2020) Seizures. https://www.mayoclinic.org/diseases-conditions/seizure/diagnosis-treatment/drc-20365730. Accessed 26 Aug 2020
World Health Organization (2010) Epilepsy in the WHO Eastern Mediterranean region: bridging the gap
Yuan Y, Xun G, Jia K, Zhang A (2019) A multi-view deep learning framework for EEG seizure detection. IEEE J Biomed Heal Inform 23(1):83–94
Rajaei H, Cabrerizo M, Janwattanapong P, Pinzon-Ardila A, Gonzalez-Arias S, Adjouadi M (2016) Connectivity maps of different types of epileptogenic patterns. In: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1018–1021
Li F et al (2019) Transition of brain networks from an interictal to a preictal state preceding a seizure revealed by scalp EEG network analysis. Cogn Neurodyn 13(2):175–181
Shoka A, Dessouky M, El-Sherbeny A, El-Sayed A (2019) Literature review on EEG preprocessing, feature extraction, and classifications techniques. Menoufia J Electron Eng Res 28(ICEEM2019-Special Issue): 292–299
Ibrahim F et al (2019) A statistical framework for EEG channel selection and seizure prediction on mobile. Int J Speech Technol 22(1):191–203
Garces A, Orosco L, Diez P, Laciar E (2019) Adaptive filtering for epileptic event detection in the EEG. J Med Biol Eng
Shoeb AH, Guttag JV (2010) Application of machine learning to epileptic seizure detection. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 975–982
Abualsaud K, Mohamed A, Khattab T, Yaacoub E, Hasna M, Guizani M (2018) Classification for imperfect EEG epileptic seizure in IoT applications: a comparative study. In: 2018 14th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 364–369
Holzinger A (2016) Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Inform 3(2):119–131
Holzinger A, Langs G, Denk H, Zatloukal K, Müller H (2019) Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Mining Knowl Discov 9(4):e1312
Zhang Y, Ji X, Liu B, Huang D, Xie F, Zhang Y (2017) Combined feature extraction method for classification of EEG signals. Neural Comput Appl 28(11):3153–3161
Imah EM, Widodo A (2017) A comparative study of machine learning algorithms for epileptic seizure classification on EEG signals. In: 2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp. 401–408
Riaz F, Hassan A, Rehman S, Niazi IK, Dremstrup K (2016) EMD-based temporal and spectral features for the classification of EEG signals using supervised learning. IEEE Trans Neural Syst Rehabil Eng 24(1):28–35
Rafiuddin N, Khan YU, O Farooq (2011) Feature extraction and classification of EEG for automatic seizure detection. In: 2011 International Conference on Multimedia, Signal Processing and Communication Technologies. pp. 184–187
Manisha Chandani AK (2017) Classification of EEG physiological signal for the detection of epileptic seizure by using DWT feature extraction and neural network. Int J Neurol Phys Ther 3(5):38–43
Hamad A, Houssein EH, Hassanien AE, Fahmy AA (2016) Feature extraction of epilepsy EEG using discrete wavelet transform. In: 2016 12th International Computer Engineering Conference (ICENCO). pp. 190–195
Birjandtalab J, Baran Pouyan M, Cogan D, Nourani M, Harvey J (2017) Automated seizure detection using limited-channel EEG and non-linear dimension reduction. Comput Biol Med 82:49–58
Chen G, Xie W, Bui TD, Krzyżak A (2017) Automatic epileptic seizure detection in EEG using nonsubsampled wavelet-Fourier features. J Med Biol Eng 37(1):123–131
Qaraqe M, Ismail M, Abbasi Q, Serpedin E (2015) Channel selection and feature enhancement for improved epileptic seizure onset detector. In: International Conference on Wireless Mobile Communication and Healthcare. pp. 258–262
Tekgul H, Bourgeois BFD, Gauvreau K, Bergin AM (2005) Electroencephalography in neonatal seizures: comparison of a reduced and a full 10/20 montage. Pediatr Neurol 32:155–161
Faul S, Marnane W (2012) Dynamic, location-based channel selection for power consumption reduction in EEG analysis. Comput Methods Progr Biomed 108(3):1206–1215
Duun-Henriksen J, Kjaer TW, Madsen RE, Remvig LS, Thomsen CE, Sorensen HBD (2012) Channel selection for automatic seizure detection. Clin Neurophysiol 123(1):84–92
Logesparan L, Casson AJ, Rodriguez-Villegas E (2012) Optimal features for online seizure detection. Med Biol Eng Comput 50:659–669. https://doi.org/10.1007/s11517-012-0904-x
Siddiqui MK, Morales-Menendez R, Huang X et al (2020) A review of epileptic seizure detection using machine learning classifiers. Brain Inf 7:5. https://doi.org/10.1186/s40708-020-00105-1
Logesparan L, Rodriguez-Villegas E, Casson AJ (2015) The impact of signal normalization on seizure detection using line length features. Med Biol Eng Comput 53:929–942. https://doi.org/10.1007/s11517-015-1303-x
Boonyakitanont P, Lek-Uthai A, Chomtho K, Songsiri J (2020) A review of feature extraction and performance evaluation in epileptic seizure detection using EEG. Biomed Signal Process Control 1(57):101702
Siddiqui MK, Islam MZ, Kabir MA (2019) A novel quick seizure detection and localization through brain data mining on ECoG dataset. Neural Comput Appl 31:5595–5608. https://doi.org/10.1007/s00521-018-3381-9
Dessouky MM, Elrashidy MA, Taha TE, Abdelkader HM (2015) Statistical Analysis of Alzheimer's disease images. Minufiya J Electr Eng Res (MJEER). 24(12)
Kaur M, Singh G (2017) Classification of seizure prone EEG signal using amplitude and frequency based parameters of intrinsic mode functions. J Med Biol Eng 37(4):540–553
Selvakumari RS, Mahalakshmi M (2019) RETRACTED ARTICLE: epileptic seizure detection by analyzing high dimensional phase space via Poincaré section. Multidimens Syst Signal Process 30(2):1029
Bhattacharyya S, Konar A, Tibarewala DN, Khasnobish A, Janarthanan R (2014) Performance analysis of ensemble methods for multi-class classification of motor imagery EEG signal. In: Proceedings of The 2014 International Conference on Control, Instrumentation, Energy and Communication (CIEC), pp. 712–716
Awan UI, Rajput UH, Syed G, Iqbal R, Sabat I, Mansoor M (2016) Effective classification of EEG signals using K-nearest neighbor algorithm. Intern Conf Front Inform Technol (FIT) 2016:120–124
Jaiswal AK, Banka H (2018) Local transformed features for epileptic seizure detection in EEG signal. J Med Biol Eng 38(2):222–235
Lin J-W et al (2018) Visualization and sonification of long-term epilepsy electroencephalogram monitoring. J Med Biol Eng 38(6):943–952
Tessy E, Shanir PPM, Manafuddin S (2016) Time domain analysis of epileptic EEG for seizure detection In: 2016 International Conference on Next Generation Intelligent Systems (ICNGIS), pp. 1–4
Subasi A, Erc E (2005) Classification of EEG signals using neural network and logistic regression. Comput Methods Programs Biomed 78(2):87–99
Rajaguru H (2017) Sparse PCA and soft decision tree classifiers for epilepsy classification from EEG signals. Int Conf Electron Commun Aerosp Technol ICECA, pp. 581–584
Xiao C, Wang S, Iasemidis L, Wong S, Chaovalitwongse WA (2018) An adaptive pattern learning framework to personalize online seizure prediction. IEEE Trans Big Data 1–13
Sharmila A, Geethanjali P (2016) DWT based detection of epileptic seizure from EEG signals using naive Bayes and k-NN classifiers. IEEE Access 4:7716–7727
Dessouky MM, Elrashidy MA, Taha TE, Abdelkader HM (2013) Selecting and extracting effective features for automated diagnosis of Alzheimer's disease. Intern J Comput Appl 81(4):17–28
Dessouky MM, Elrashidy MA (2016) Feature extraction of the Alzheimer's disease images using different optimization algorithms. J Alzheimers Dis Parkinsonism 6:230. https://doi.org/10.4172/2161-0460.1000230
Delgado Saa JF, Gutierrez MS (2010) EEG signal classification using power spectral features and linear discriminant analysis: a brain computer interface application. In: Eighth Latin American and Caribbean Conference for Engineering and Technology. Arequipa: LACCEI, pp. 1–7
Rodríguez J, Pérez A, Lozano JA (2010) Sensitivity analysis of k-fold cross validation in prediction error estimation. Pattern Anal Mach Intell IEEE Trans 32:569–575
Fushiki T (2011) Estimation of prediction error by using K-fold cross-validation. Stat Comput 21(2):137–146
Dessouky MM, Elrashidy MA, Taha TE, Abdelkader HM (2015) Computer aided diagnosis system feature extraction of Alzheimer disease using MFCC. Intern J Intell Comput Med Sci Image Process Taylor Frances 6(2):65–78
Garcés Correa A, Orosco LL, Diez P, Laciar Leber E (2019) Adaptive filtering for epileptic event detection in the EEG. J Med Biol Eng 39:1–7
Goldberger AL et al (2000) PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23):E215–E220
This work was funded by the University of Jeddah, Saudi Arabia, under Grant No. (UJ-04-18-ICP). The authors therefore acknowledge with thanks the University's technical and financial support.
Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
Athar A. Ein Shoka, Ayman El-Sayed & Mohamed M. Dessouky
Department of Computer Science and Artificial Intelligence, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
Monagi H. Alkinani & Mohamed M. Dessouky
Department of Industrial Electronics and Control Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
A. S. El-Sherbeny
Correspondence to Mohamed M. Dessouky.
Ein Shoka, A.A., Alkinani, M.H., El-Sherbeny, A.S. et al. Automated seizure diagnosis system based on feature extraction and channel selection using EEG signals. Brain Inf. 8, 1 (2021). https://doi.org/10.1186/s40708-021-00123-7
Keywords: Channel selection, Cross-validation, Seizure classification
Machine Learning Techniques for Neuroscience Big Data
Gauge and global symmetries in Chern-Simons/WZW correspondence
I am trying to understand how bulk gauge symmetry in 3d Chern-Simons theory becomes a global symmetry in the boundary 2d WZW theory. In particular, I am trying to understand the papers by Elitzur et al. and Moore and Seiberg. In these references, 3d nonabelian Chern-Simons theory with the classical action $$ S=\frac{k}{4\pi}\int_{Y}\textrm{Tr}(A\wedge dA+\frac{2}{3}A\wedge A\wedge A) $$
is studied on a manifold $Y$ with boundary. In reference 2 (last paragraph of first page), Moore and Seiberg explain that to ensure the equations of motion are free from boundary terms, one needs to set one of the boundary components of the gauge field to be zero, since $$ \delta S= \frac{k}{4\pi}\int_{\partial Y} \textrm{Tr}(\delta A\wedge A)+\frac{k}{2\pi}\int_Y\textrm{Tr}(\delta A\wedge F). $$ They then claim that "With these boundary conditions the functional integral is invariant only under gauge transformations which are one at the boundary." My first question is, why is this the case?
As a first step to getting WZW theory on the boundary, both references split $Y$ as $\Sigma \times \mathbb{R}$, and pick the boundary condition $A_0=0$, i.e., the component of $A$ along the time direction $\mathbb{R}$ vanishes.
Reference 1 then gives more detail (on page 110); they explain that "The symmetry of the theory IS the group of gauge transformations which do not change the boundary conditions. These are gauge transformations which are independent of time on the boundary. Only a subgroup of this group should be viewed as a gauge symmetry. This is the set of transformations which are one at the boundary. Time independent transformations on the boundary should be viewed as a global symmetry." My second question is, why should time independent transformations be viewed as a global symmetry?
Added note: If the time independent $g$ was independent of other coordinates on the boundary, it would be global, but this is not necessarily the case. Moreover, in the references, they do not require that the transformation be fixed to be considered global. For $\Sigma=D$ the boundary chiral WZW theory with group valued fields $U$ has a symmetry $U\rightarrow \widetilde{V}(\varphi)UV(t)$, where $\widetilde{V}(\varphi)$ is referred to as a global symmetry (see e.g. reference 1, equation 2.7). The reason given is that $\widetilde{V}$ does not go to one in the past and future, which I also am unclear about.
Attempted solution: Let us choose the simple case of $\Sigma=D$, the disk, with polar coordinates $r$ and $\varphi$. A variation of $S$ under a large gauge transformation generated by $g\in SU(N)$, i.e., $A\rightarrow g A g^{-1}-dg g ^{-1}$ gives \begin{equation} S\rightarrow S+ \frac{k}{4\pi}\int d^3x\bigg(\epsilon^{ijk}\partial_j\textrm{Tr}(\partial_igg^{-1}A_k)+\frac{1}{3}\epsilon^{ijk}\textrm{Tr}(g^{-1}\partial_{i}gg^{-1}\partial_{j}gg^{-1}\partial_{k}g)\bigg). \end{equation} Now, using the boundary condition $A_0=0$ and restricting $g$ such that $\partial_0g=0$ at the boundary, the first term vanishes via Stoke's theorem. The second term does not vanish in general, but I believe if we choose the gauge transformations to approach the same value at the boundary of $D\times \mathbb{R}$ (which makes it equivalent to the $Y=S^3$ case), then since $\pi_3(SU(N))=\mathbb{Z}$, the second term integrates to $2\pi k n$ for $n\in \mathbb{Z}$, which leads to a trivial phase in the path integral. (I've referred to page 182 of Tong's notes extensively here.)
However, I do not see why we must choose the gauge transformations to approach one at the boundary, but not some other fixed value. Furthermore, I do not see why general time independent transformations should be viewed as a global symmetry.
quantum-field-theory gauge-theory holographic-principle chern-simons-theory
Mtheorist
$\begingroup$ If the only allowed transformations are some fixed time independent $g$ on the boundary, isn't that basically the definition of a global symmetry (on the boundary)? $\endgroup$ – octonion Mar 30 at 21:02
$\begingroup$ If the time independent $g$ was independent of other coordinates, then yes, it would be global, but this is not necessarily the case. In the references, they do not require that the transformation be fixed to be considered global. For $\Sigma=D$ the boundary chiral WZW theory with group valued fields $U$ has a symmetry $U\rightarrow \widetilde{V}(\varphi)UV(t)$, where $\widetilde{V}(\varphi)$ is referred to as a global symmetry. $\endgroup$ – Mtheorist Mar 31 at 4:05
$\begingroup$ Might consider this paper. $\endgroup$ – Cosmas Zachos Apr 2 at 19:03
Faber-Schauder system
A system of functions $\{\phi_n(t)\}_{n=1}^\infty$ on an interval $[a,b]$ constructed as follows using an arbitrary countable sequence of points $\{w_n\}_{n=1}^\infty$, $w_1=a,w_2=b$, that is everywhere dense in this interval. Set $\phi_1(t)\equiv1$ on $[a,b]$. The function $\phi_2(t)$ is linear on $[a,b]$ such that $\phi_2(a)=0$, $\phi_2(b)=1$. If $n>2$, then one divides $[a,b]$ into $n-2$ parts by the points $w_1,\dots,w_{n-1}$ and one chooses the interval $[w_i,w_k]$, $w_i<w_k$, that contains $w_n$. Then one sets $\phi_n(w_i)=\phi_n(w_k)=0$, $\phi_n(w_n)=1$, and extends $\phi_n(t)$ linearly to $[w_i,w_n]$ and $[w_n,w_k]$. Outside $(w_i,w_k)$ one sets $\phi_n(t)$ equal to zero.
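A minimal numerical sketch of this construction (an illustration added here, not part of the original article), in Python/NumPy: it builds the hat functions $\phi_n$ and the Schauder coefficients of a continuous $f$ by interpolation at the previously used nodes, using the dyadic ordering discussed below as the dense sequence.

```python
import numpy as np

def schauder_system(points):
    """Faber-Schauder functions for a dense sequence `points` on [a, b],
    with points[0] = a and points[1] = b, following the construction above."""
    a, b = points[0], points[1]
    phis = [lambda t: np.ones_like(np.asarray(t, dtype=float)),      # phi_1 == 1
            lambda t: np.interp(t, [a, b], [0.0, 1.0])]              # phi_2: 0 at a, 1 at b
    used = [a, b]
    for w in points[2:]:
        lo = max(p for p in used if p < w)    # w_i: nearest used point below w_n
        hi = min(p for p in used if p > w)    # w_k: nearest used point above w_n
        # hat function: 0 at lo and hi, 1 at w, linear in between, 0 outside (lo, hi)
        phis.append(lambda t, lo=lo, w=w, hi=hi: np.interp(t, [lo, w, hi], [0.0, 1.0, 0.0]))
        used.append(w)
    return phis

def schauder_coefficients(f, points):
    """Coefficients of f in the Faber-Schauder expansion: the partial sums
    interpolate f at the nodes already used."""
    a, b = points[0], points[1]
    coeffs = [f(a), f(b) - f(a)]
    used = [a, b]
    for w in points[2:]:
        lo = max(p for p in used if p < w)
        hi = min(p for p in used if p > w)
        linear = f(lo) + (f(hi) - f(lo)) * (w - lo) / (hi - lo)   # previous partial sum at w
        coeffs.append(f(w) - linear)
        used.append(w)
    return coeffs

# Demo on [0, 1] with the dyadic ordering 0, 1, 1/2, 1/4, 3/4, 1/8, 3/8, ...
dyadic = [0.0, 1.0] + [k / 2 ** m for m in range(1, 8) for k in range(1, 2 ** m, 2)]
f = lambda t: np.sin(2 * np.pi * t)
phis, coeffs = schauder_system(dyadic), schauder_coefficients(f, dyadic)
t = np.linspace(0.0, 1.0, 1001)
partial_sum = sum(c * phi(t) for c, phi in zip(coeffs, phis))
print("max |f - S_N| =", float(np.max(np.abs(f(t) - partial_sum))))   # decreases as N grows
```

For the dyadic midpoints the coefficient reduces to $f(w_n)-\tfrac12\big(f(w_i)+f(w_k)\big)$, and uniform convergence of the partial sums to $f$ follows from the uniform continuity of $f$.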
In the case when $a=0$, $b=1$, and $\{w_n\}$ is the sequence of all dyadic rational points in $[0,1]$, enumerated in the natural way (that is, in the order $0,1,1/2,1/4,3/4,\dots,1/2^m,3/2^m,\dots,(2^m-1)/2^m,\dots$), the system $\{\phi_n(t)\}$ (denoted by $\{F_n(t)\}$) first appeared in the work of G. Faber [1]. He considered it (with another normalization) as the system of indefinite integrals of the Haar system supplemented by the function that is identically equal to one. In the general case, the construction of $\{\phi_n(t)\}$ was carried out by J. Schauder, and so a Faber–Schauder system is also called a Schauder system.
The system $\{\phi_n(t)\}$ is a basis of the space $C[a,b]$ of all continuous functions $f$ on $[a,b]$ with norm $\|f\|=\max_{a\leq t\leq b}|f(t)|$ (see [1], [2] or [3]). If one applies the Schmidt orthogonalization process to the Faber system $\{F_n(t)\}$ on $[0,1]$, the Franklin system is obtained.
The Faber–Schauder system was the first example of a basis of the space of continuous functions.
[1] G. Faber, "Ueber die Orthogonalfunktionen des Herrn Haar" Jahresber. Deutsch. Math. Verein. , 19 (1910) pp. 104–112
[2] J. Schauder, "Eine Eigenschaft des Haarschen Orthogonalsystem" Math. Z. , 28 (1928) pp. 317–320
[3] S. Kaczmarz, H. Steinhaus, "Theorie der Orthogonalreihen" , Chelsea, reprint (1951)
For the (Gram–)Schmidt orthogonalization process cf. Orthogonalization; Orthogonalization method.
This article was adapted from an original article by B.I. Golubov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
EMIS 2018
16 Sep 2018, 16:00 → 21 Sep 2018, 18:00 Europe/Zurich
500/1-001 - Main Auditorium (CERN)
Gerda Neyens (KU Leuven (BE)), Richard Catherall (CERN)
The International Conference on Electromagnetic Isotope Separators and Related Topics (EMIS) will be held at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland, from 16th to 21st September 2018. The EMIS 2018 conference will be hosted by ISOLDE, CERN. EMIS is the flagship conference series on techniques in the field of low-energy nuclear science. EMIS 2018 is the 18th conference in this series.
EMIS2018 is endorsed by the International Union of Pure and Applied Physics (IUPAP).
The EMIS2018 organisers have agreed to abide by IUPAP Policies on Conferences.
Adam Robert Vernon
Adrian Valverde
Agi Koszorus
Ailin Zhang
Alan Matthew Amthor
Alberto Monetti
Alexander Gottberg
Alexander Yeremin
Ali Mollaebrahimi
Andrea Teigelhöfer
Andreas Stolz
Andres Vieitez Suarez
Annie Moberg
Antonietta Donzella
Antonio Domenico Russo
Antonio Villari
Antti Saastamoinen
Aurelia Laxdal
Baoqun Cui
Barry Davids
Bruce Marsh
Bum-Sik Park
Byounghwi Kang
Camilo Granados
Carla Babcock
Carlos Munoz Pequeno
Carmen Angulo
Chandana Sumithrarachchi
Charles-Olivier Bacri
Christoph Düllmann
Christoph Scheidenberger
Christopher Ricketts
David Leimbach
Dmitrii Nesterenko
Dominik Studer
Emma Haettner
Enrique Minaya Ramirez
Erich Leistenschneider
Ernest Grodner
Fernando Maldonado
Ferran Boix Pamies
Francesca Giacoppo
Frank Herfurth
Frank Wienholtz
Georg Bollen
Gerda Neyens
Giovanni Bruni
Grzegorz Kaminski
Guy Savard
Haik SIMON
Hanna Franberg Delahaye
Helen Barminova
Hiroyuki Takeda
Hongwei Zhao
Iain Moore
Ilkka Pohjalainen
Ivan Kulikov
J J Gomez Cadenas
Jacklyn Gates
James Cruikshank
James Cubiss
Jens Dilling
Jeongsu Ha
Jiban Jyoti Das
Jinwen Cao
Joakim Cederkall
Joao Pedro Ramos
Jochen Ballof
Johanna Pitters
Joonas Konki
Juha Uusitalo
Julia Even
Juliana Schell
Jörg Krämer
Karl Johnston
Karsten Riisager
Kasey Lund
Katerina Chrysalidis
Klaus Wendt
Kristof Dockx
Laszlo Stuhl
Laura Lambert
Leigh Graham
Luca Egoriti
Lucia Popescu
Luis Mario Fraile
Maher Cheikh Mhamed
Manoel Couder
Manuela Cavallaro
Marco Marchetto
Marco Rosenbusch
Marek Karny
Markus Vilén
Marla Cervantes
Martin Ashford
Maryam Mostamand
María J G. Borge
Maxim Seliverstov
Maxime Brodeur
Michael Block
Michael Dion
Mikael Reponen
Moazam Mehmood Khan
Momo Mukai
Monika Stachura
Moritz Pascal Reiter
Navin Alahari
Nhat-Tan Vuong
Oliver Kaleja
Olof Tengblad
Pascal Jardin
Patrick O'Malley
Paul Van den Bergh
Peter Reiter
Peter Thirolf
Philip Creemers
Pierre Chauveau
Pierre Delahaye
Piet Van Duppen
Predrag Ujic
R. Formento Cavaier
Reinhard Heinke
Richard Catherall
Rodney Orford
Ruben de Groote
Ruohong Li
Ryan Ringle
Sarina Geldhof
Sebastian Raeder
Sebastian Rothe
Shane Wilkins
Shin'ichiro Michimasa
Shunichiro Omika
Simon Lechner
Simon Sels
Simon Stegemann
Stefan Schwarz
Stephan Malbrunot
Stuart Warren
Taeksu Shin
Tetsuya Ohnishi
Thierry Stora
Thomas Day Goodacre
Thomas Elias Cocolios
Tianjue Zhang
Tim Giles
Timo Pascal Steinsberger
Tor Bjoernstad
Toshiyuki Sumikama
Ulli Koester
Vadim Gadelshin
Valentine Fedosseev
Varvara Lagaki
Venkateswarlu Kuchi
Volker Sonnenschein
Vyacheslav Shchepunov
Wenxue Huang
Wilfried Nörtershäuser
Xing XU
Xiuyan Ren
Yasuhiro Togano
Yohei Shimizu
Yoshikazu Hirayama
Yoshitaka Yamaguchi
Yosuke Kondo
Young Jin Kim
Yulin Tian
Yung Hee KIM
Yuta Ito
Sunday, 16 September
Registration Desk open 2h 80/1-001 - Globe of Science and Innovation - 1st Floor
Welcome Reception 2h 80/1-001 - Globe of Science and Innovation - 1st Floor
Introduction 10m 500/1-001 - Main Auditorium
Session 1- Target and Ion Source Techniques 500/1-001 - Main Auditorium
Convener: Richard Catherall (CERN)
New exotic beams from the SPIRAL 1 upgrade 30m
Since 2001, the SPIRAL 1 facility has been one of the pioneering facilities in ISOL techniques for reaccelerating radioactive ion beams: the fragmentation of the heavy ion beams of GANIL on graphite targets and subsequent ionization in the Nanogan ECR ion source has permitted the delivery of beams of gaseous elements (He, N, O, F, Ne, Ar, Kr) to numerous experiments. Thanks to the CIME cyclotron, energies up to 20 MeV/u could be obtained. In 2014, the facility was stopped to undertake a major upgrade, with the aim of extending the production capabilities of SPIRAL 1 to a number of new elements. This upgrade, which will become operational this year, consists of the integration of an ECR booster in the SPIRAL 1 beam line to charge breed the beams of different 1+ sources. A FEBIAD source (the so-called VADIS from ISOLDE) was chosen to be the future workhorse for producing many metallic ion beams. This source was coupled to the SPIRAL 1 graphite targets and tested on-line with different beams at GANIL. The charge breeder is an upgraded version of the Phoenix booster which was previously tested at ISOLDE. It was later commissioned at LPSC and more recently in the SPIRAL 1 beam lines with stable beams. The upgrade will additionally permit, in the longer term, the use of target materials other than graphite. In particular, the use of fragmentation targets will permit the production of higher intensities than projectile fragmentation, and thin targets of high Z will be used for producing beams by fusion-evaporation [1]. The performances of the aforementioned ingredients of the upgrade (targets, 1+ source and charge breeder) have been and are still being optimized in the frame of different European projects (EMILIE, ENSAR and ENSAR2). This year, the upgraded SPIRAL 1 facility will provide its first new beams for physics and further beam development will be undertaken to prepare for the next AGATA campaign. This invited contribution will describe the R&D which was undertaken for the upgrade, focusing on the radioactive ion beam and ion source R&D, and the results obtained during the on-line commissioning period.
[1]: see contributions of V. Kuchi and of P. Jardin to this conference.
Speaker: Pierre Delahaye (Grand Accelerateur National d'Ions Lourds (FR))
Thick solid targets for the production and online release of radioisotopes: the importance of the material characteristics 30m
In ISOL (Isotope Separator OnLine) facilities around the world, high-energy particle beams are accelerated towards a thick target to produce radioactive isotopes through nuclear reactions. Though different driver beam particles and energies can be used, as well as converter targets (e.g. proton to neutron, electron to gamma), once the isotope of interest is produced in the main target it has to be extracted from the target bulk. After thermalization, the release from the target consists of diffusion out of the material crystal structure, through the material porosity and finally from the target material envelope to the ion source. This phenomenon is highly dependent on the combination of matrix and element to be released, on the chemistry, the microstructure and the surface properties, which are all influenced by the operation temperature.
The chemical reactivity of the element of interest with the target material (contaminations or reactive gas introduced intentionally) and respective formed compounds play a substantial role. This created compound can either be a volatile molecule, which promotes release but might distribute the isotopes over different masses, or form a refractory compound which can partially or totally hinder the release. The surface properties, namely adsorption, play an important role after the isotope leaves the material bulk. The isotope atoms or molecules have to diffuse through the material porosity by colliding with the material pore surfaces and then the target structural materials until they reach the ion source, where sticking times and possible re-diffusion into the bulk are critical.
The microstructure characteristics (grain and pore size distributions, agglomeration factor, pore volume and resulting specific surface area) will have a large impact on the discussed phenomena, while the macrostructure is of relatively low importance. As such, the engineering and high-temperature stability (sintering, sublimation, phase-change phenomena) of micro- and nanostructures are of vital importance for any ISOL facility to deliver exotic beams. To add to the complexity, these phenomena take place in a high-radiation environment where impurity and crystalline-defect creation and annealing are constant, both of which can change bulk diffusion rates (and implicitly sintering) by orders of magnitude.
This talk will focus on the latest target-related material developments, mostly in terms of microstructure. It will also review and discuss the complex release phenomena and the influence of the material characteristics on them. The complexity of the release phenomenon makes it nearly impossible to predict isotope yields through modeling, so the community depends heavily on empirical data and extrapolations.
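As a rough illustration of the competition between release and decay described above (a toy model added here, not from this contribution), one can lump the whole diffusion/effusion chain into a single effective release time constant and compare it with the nuclear half-life:

```python
import math

def release_efficiency(halflife_s, release_time_s):
    """Fraction of atoms released before decaying, for exponential release with an
    effective time constant release_time_s competing with radioactive decay."""
    decay_rate = math.log(2) / halflife_s
    release_rate = 1.0 / release_time_s   # diffusion + effusion lumped into one constant
    return release_rate / (release_rate + decay_rate)

# Hypothetical: a 1 s effective release time, for isotopes of different half-lives
for t_half in (0.1, 1.0, 10.0, 100.0):
    print(f"T1/2 = {t_half:6.1f} s -> released fraction ~ {release_efficiency(t_half, 1.0):.2f}")
```

Within this crude picture the released fraction drops quickly once the half-life becomes shorter than the effective release time, which is why slowly released elements are effectively unavailable as short-lived beams.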
Speaker: Dr Joao Pedro Ramos (CERN)
Nuclear spectroscopy of r-process nuclei using KEK isotope separation system 20m
The study of the $\beta$-decay half-lives of waiting-point nuclei with $N=$ 126 is crucial to understand the explosive astrophysical environment for the formation of the third peak in the observed solar abundance pattern, which is produced by a rapid neutron capture process (r-process). However, the half-life measurements of the waiting-point nuclei remain impracticable due to the difficulty in the production of the nuclei. Therefore, accurate theoretical predictions for the half-lives are required for investigations of astrophysical environments. In order to improve and establish nuclear theoretical models, it is essential to perform nuclear spectroscopy for investigating $\beta$-decay schemes including spin-parity values, nuclear wave-functions and interactions, and nuclear masses in this heavy region.
For the nuclear spectroscopy, we have developed KEK Isotope Separation System (KISS), which is an argon-gas-cell-based laser ion source combined with an on-line isotope separator, installed in the RIKEN Nishina center [1-2]. The nuclei around $N=$ 126 are produced by multi-nucleon transfer reactions (MNT) [3] of $^{136}$Xe beam (10.75 MeV/A) impinging upon a $^{198}$Pt target. Thanks to newly developed doughnut-shaped gas cell [2], the extraction yields of the reaction products increased by more than one order of magnitude. This enabled us to successfully perform in-gas-cell laser ionization spectroscopy of $^{199g, 199m}$Pt [4] and $^{196,197,198}$Ir for evaluating the magnetic moments and the trend of the charge-radii (deformation parameters), and $\beta$-$\gamma$ spectroscopy of $^{195, 196, 197, 198}$Os for the half-life measurements and study of $\beta$-decay schemes.
For further nuclear spectroscopy, we have been developing a new narrow-band laser system for the precise in-gas-jet laser ionization spectroscopy, an MR-TOF system for mass measurement, and high-efficiency and low-background 3D tracking gas counters for $\beta$-decay spectroscopy.
In the presentation, we will report the present status of KISS, experimental results of nuclear spectroscopy in the heavy region, and future plan of KISS activities.
[1] Y. Hirayama et al., Nucl. Instrum. Methods B 353 (2015) 4.; B 376 (2016) 52.
[2] Y. Hirayama et al., Nucl. Instrum. Methods Phys. Res. B 412 (2017) 11.
[3] Y.X. Watanabe et al., Phys. Rev. Lett. 115 (2015) 172503.
[4] Y. Hirayama et al., Phys. Rev. C 96 (2017) 014307.
Speaker: Dr Yoshikazu HIRAYAMA (KEK, WNSC)
Coffee Break 30m 500/1-201 - Mezzanine
Convener: Thierry Stora (CERN)
High-intensity highly charged ion beam production by superconducting ECR ion sources at IMP 30m
Accelerator facilities for rare isotope beam production require a high-power primary ion beam, which in turn depends very much on the performance of the front-end ion source. Over the past years, superconducting ECR ion sources with higher magnetic fields and higher microwave frequencies have been the most straightforward path to high beam intensity and high charge state. SECRAL is a superconducting-magnet-based ECRIS (Electron Cyclotron Resonance Ion Source) for the production of intense highly-charged heavy ion beams. It is one of the best performing ECRISs worldwide and the first superconducting ECRIS built with an innovative magnet to generate a high strength Minimum-B field for operation with heating microwaves up to 24-28 GHz. SECRAL has so far produced a good number of CW (Continuous Wave) intensity records for highly-charged ion beams, in which the beam intensities of 40Ar12-14+, 86Kr18+ and 129Xe26+ have exceeded 1 emA for the first time for any ion source. The SECRAL source has been in operation delivering highly charged ion beams to the HIRFL accelerator for more than 9 years, with a total beam time of more than 30000 hours, which has demonstrated its excellent stability and reliability. SECRAL-II, an upgraded version of SECRAL, was built successfully in less than 3 years, and has recently been commissioned at full power of a 28 GHz gyrotron and with three-frequency heating (28+45+18 GHz). New record beam intensities for highly charged ion production have been achieved by SECRAL-II, such as 620 eμA 40Ar16+, 15 eμA 40Ar18+, 53 eμA 129Xe38+ and 17 eμA 129Xe42+. A 45 GHz superconducting ECR ion source FECR (a first fourth-generation ECR ion source) is being built at IMP. FECR will be the world's first Nb3Sn superconducting-magnet-based ECR ion source with 6.5 Tesla axial mirror field, 3.5 Tesla sextupole field on the plasma chamber inner wall and a 20 kW@45 GHz microwave coupling system. This talk will focus on high-intensity highly-charged ion beam production by SECRAL and SECRAL-II at 24-28 GHz and the technical design of the 45 GHz FECR, which demonstrates a technical path for highly charged ion beam production from 24-28 GHz SECRAL to 45 GHz FECR.
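As a back-of-the-envelope check (added here, not part of the abstract) of why higher heating frequency requires a stronger magnet, the cold-electron resonance condition $2\pi f = eB/m_e$ gives the field at which the plasma electrons are resonantly heated:

```python
import math

E_CHARGE = 1.602176634e-19      # C
M_ELECTRON = 9.1093837015e-31   # kg

def ecr_resonance_field(f_hz):
    """Magnetic field at which electrons gyrate at the heating frequency: B = 2*pi*f*m_e/e."""
    return 2 * math.pi * f_hz * M_ELECTRON / E_CHARGE

for f_ghz in (18, 24, 28, 45):
    print(f"{f_ghz} GHz -> B_ECR ~ {ecr_resonance_field(f_ghz * 1e9):.2f} T")
```

The 6.5 T axial mirror field planned for FECR then sits roughly a factor of four above the ~1.6 T resonance field at 45 GHz, as expected for a minimum-B trap with a high mirror ratio.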
Speaker: Prof. Hongwei Zhao (Institute of Modern Physics (IMP), CAS)
Radioactive Beam Production at TRIUMF – Present and Future 20m
ISAC-TRIUMF is the only ISOL facility worldwide that is routinely operating targets under particle irradiation in the high-power regime in excess of 10 kW. TRIUMF's current flagship project ARIEL, the Advanced Rare IsotopE Laboratory, will add two new target stations providing isotopes to the existing experimental stations in ISAC I and ISAC II at keV and MeV energies, respectively. In addition to the operating 500 MeV, 50 kW proton driver from TRIUMF's cyclotron, ARIEL will make use of a 35 MeV, 100 kW electron beam from a newly installed superconducting linear accelerator. Together with an additional 200 m of RIB beamlines within the radioisotope distribution complex, this will give TRIUMF the unprecedented capability of delivering three RIB beams to different experiments while simultaneously producing radioisotopes for medical applications, enhancing the scientific output of the laboratory significantly. General characteristics of the high-power target stations, remote handling and beam production technology at ISAC and ARIEL will be presented, showing the opportunities and limitations. Moreover, the current status of the facilities as well as the path to completion and ramp-up of ARIEL will be discussed.
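For orientation (simple arithmetic added here, not from the talk), the driver-beam powers quoted above follow from beam energy times current; the 100 kW electron-beam figure then implies an average current of roughly 3 mA at 35 MeV:

```python
def beam_power_kw(energy_mev, current_ma):
    """Beam power in kW for singly charged particles: P[kW] = E[MeV] * I[mA]."""
    return energy_mev * current_ma

print(beam_power_kw(500, 0.100))   # 500 MeV protons at 100 uA -> 50 kW (ISAC / ARIEL proton target)
print(100 / 35)                    # average current in mA implied by a 100 kW, 35 MeV electron beam (~2.9 mA)
```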
Speaker: Dr Alexander Gottberg (TRIUMF)
Research and development for the SPES target ion source system 20m
In the facilities for the production of radioactive ion beams based on the isotope separation on line (ISOL) technique, the target ion source (TIS) system is surely the most critical object. In the specific case of the selective production of exotic species (SPES) facility, a multifoil uranium carbide target is impinged by a 40 MeV, 200 µA proton beam produced by a cyclotron proton driver. Under these conditions, a fission rate of approximately 10^13 fissions per second is expected in the target. The radioactive isotopes produced by the 238U fissions are delivered to the 1+ ion source by means of a tubular transfer line. Here they can be ionized and subsequently accelerated toward the facility experimental areas. In ISOL facilities the target system can be combined with different types of ion sources in order to optimize the production of specific ion beams. In this work the SPES target and the related 1+ ion sources are accurately described, presenting their characterization and testing, together with the main research and development activities. A detailed electrical-thermal-structural study is also reported, with some considerations on long term operation at high temperature.
Speaker: Alberto Monetti (I)
Isolde V 20m
The Isolde facility was established in 1967 and since then has been rebuilt three times, in 1976, 1983 and in 1992. The fourth and current incarnation is 26 years old, and there is now a strong case for another major upgrade to address increasing demands on the targets, the isotope separators, and the experimental hall.
The existing target areas are well designed and have already been upgraded with new frontends in 2010 and 2011. The beam-dumps and surrounding concrete are at the end of their lives and will be replaced in 2024, and the frontends will be upgraded once again at the same time. However the geometry of the building, the beam-lines and the surrounding services limits how much can be changed. Furthermore the radiation levels and the schedule requirements make large modifications difficult and risky, even during the long shutdown periods. Even with upgrades the target stations are reaching the limits of their capabilities in terms of proton beam capacity, maintainability, and compatibility with prototype targets.
The performance of the isotope separators is largely determined by their geometry. Upgrade of their performance -- to improve isobar separation, to improve acceptance of beams from new ion-sources, or to improve background suppression for ultra-sensitive experiments -- is not possible without moving the permanent shielding and the downstream beam-lines, which is not practical.
The current ion beam delivery system has a severe bottle-neck, in that beam from only one target at a time may be delivered into the experimental hall. There are ideas to switch rapidly between the two target stations, but this is of limited usefulness.
Thus there is a strong case to build new target stations and a new beam preparation system to circumvent these limitations and to expand and modernise Isolde's capabilities. Constructing a new isotope production area would minimise perturbation of the running facility, whilst simultaneously permitting radically improved designs.
This paper explores the possibilities and makes a proposal for two new target stations, new isotope separators, and a beam transport system designed along modern principles. Connection of the new beam-lines to the existing facility is discussed, as well as a layout for a completely new experimental area. The layout of the proton beam-lines, radiation shielding, and the impact on the surrounding infrastructure is considered.
A possible layout will be presented with two new target areas, pre-separators, and a beam-switching system which can deliver multiple beams into the existing experimental hall. Design concepts for the new target areas and beam-lines will be shown, compatible with up-to-date handling techniques. The integration of beam preparation systems will be considered, including beam cooling and bunching and isobar separation. Finally a possible expansion of the facility with a new experimental hall will be shown, with space for new experiments and a sophisticated and flexible beam delivery system.
Speaker: Dr Tim Giles (CERN)
High efficiency ISOL system to produce neutron deficient short-lived alkali RIBs on GANIL/SPIRAL 1 facility 15m
SPIRAL1 (Système de Production d'Ions Radioactifs Accélérés en Ligne) facility at GANIL (Grand Accélérateur National d'Ions Lourds) is developing new techniques to access nuclei in the neutron deficient isotope region far from the stability-valley, with Z ranging from 30 to 60. The availability of different primary beams, ranging from carbon to uranium with energies up to 100 MeV/A, gives an opportunity to produce a large variety of radioactive ion beams. The production of neutron deficient short-lived alkalis by fusion-evaporation reactions is the focus of this work. A design of simple and compact target ion source system is developed to produce isotopes of 74Rb (τ_(1⁄2) = 65 ms) and 114Cs (τ_(1⁄2) = 570 ms). The radioactive recoils are produced by interaction of heavy-ion beams, respectively 20Ne at 10^13 pps and 58Ni at 10^12 pps, with a thin 58Ni target and are subsequently stopped in a catcher. The implanted recoils diffuse and effuse into the target ion source cavity, where they are ionized by surface ionization process. By applying an electric field in the cavity, the ions are guided towards the exit hole.
This system should offer an enhanced atom-to-ion transformation efficiency (e.g. higher than 75% and 95% for the 74Rb and 114Cs nuclei respectively). The intensity of RIBs is estimated to attain about 10^4 pps. The different aspects of the design and of the technical principles will be described: effusion, thermal, electrical and mechanical studies. The first off-line measurements of the thermal properties and response time will finally be presented.
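A minimal sketch (with hypothetical target thickness and cross-section, added here) of how such an intensity estimate is typically built up from beam intensity, target areal density, fusion-evaporation cross-section and the atom-to-ion transformation efficiency quoted above:

```python
N_AVOGADRO = 6.02214076e23

def in_target_rate(beam_pps, areal_density_mg_cm2, molar_mass_g, sigma_mb, efficiency=1.0):
    """Rate = beam intensity * target atoms per cm^2 * cross-section * transformation efficiency."""
    atoms_per_cm2 = areal_density_mg_cm2 * 1e-3 / molar_mass_g * N_AVOGADRO
    sigma_cm2 = sigma_mb * 1e-27        # 1 mb = 1e-27 cm^2
    return beam_pps * atoms_per_cm2 * sigma_cm2 * efficiency

# Hypothetical: 10^13 pps of 20Ne on a 1 mg/cm^2 58Ni target, a 1 mb evaporation-residue
# cross-section and the 75% atom-to-ion transformation efficiency quoted above
print(f"{in_target_rate(1e13, 1.0, 58.0, 1.0, 0.75):.1e} ions/s")
```

With these assumed inputs the estimate lands in the 10^4-10^5 pps range, i.e. the order of magnitude quoted in the abstract.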
Key words: neutron-deficient isotopes, fusion-evaporation reaction, ISOL technique, surface ionization, Target Ion Source System, SPIRAL1.
Presentation: oral
Category: Isotope production, Target and ion source techniques
Speaker: Mr Venkateswarlu Kuchi
On-line results from ISOLDE's Laser Ion Source and Trap LIST 15m
The method of laser resonance ionization [1] today is a well-established core technique for efficient and chemically selective radioactive ion beam production at the worldwide leading ISOL facilities such as ISAC-TRIUMF or CERN-ISOLDE. In addition, these devices allow for direct in-source laser spectroscopic investigations of exotic nuclei with lowest production yields. Nevertheless, in experiments demanding highest beam purity, suppression of beam contamination arising from competing ionization processes inside the hot cavity is essential. Corresponding techniques therefore rely on spatial separation of the high-temperature atomization from a clean and cold laser ionization volume inside an RFQ ion-guiding structure. Namely, these are TRIUMF's IG-LIS [2] or ISOLDE's LIST, which was used for hyperfine structure spectroscopy on neutron-rich polonium previously inaccessible due to an overwhelming fraction of surface-ionized francium [3, 4].
Derived from operation experience, systematic off-line studies and simulations, a next generation of the LIST has been developed to go on-line at ISOLDE in 2018, providing highly pure 22Mg beams for measurements on its super-allowed branching ratio and half-life (IS614). The overall geometric design has been adapted to minimize deposition, while a second repelling electrode ensures additional suppression by inhibiting electron impact ionization inside the RFQ structure. Moreover, the unit undergoes additional tests to eventually further increase its performance: A DC voltage offset mode shifts the produced ions to a different mass regime, sidestepping isobaric contamination. Using high-resistance cavity materials and the LIST of matched length as field-free drift volume also enables a time-of-flight based operation mode for shortest ion bunches and subsequent purification methods by laser pulse synchronized ion beam gating [5, 6].
The presentation will show results and operation characteristics from the on-line application of the "LIST 2.0", as well as the status of ongoing developments and future directions.
[1] V. Fedosseev et al., J. Phys. G: Nucl. Part. Phys. 44 084006 (2017)
[2] S. Raeder et al., Rev. Sci. Instr. 85, 033309 (2014)
[3] D. Fink et al., Nucl. Instr. Meth. B, 317 B, 417-421 (2013)
[4] D. Fink et al., Phys. Rev. X 5, 011018 (2015)
[5] V.I. Mishin et al., AIP (2009)
[6] S. Rothe et al., Nucl. Instr. Meth. B, 376, 86-90 (2016)
Speaker: Reinhard Matthias Heinke (Johannes Gutenberg Universitaet Mainz (DE))
Lunch break 1h 15m
Convener: Hanna Franberg Delahaye (GANIL)
The laser ionisation toolkit for ion beam production at thick-target ISOL facilities 20m
Multi-step resonance photo-ionisation is an essential component of radioactive ion beam production at most of the existing and planned thick-target ISOL facilities. At ISOLDE, the Resonance Ionisation Laser Ion Source (RILIS) is capable of ionising 40 elements. Its unmatched combination of selectivity and efficiency ensures its place as the most commonly used ion source for ISOLDE physics.
Since its initial implementation the RILIS has developed from the original copper-vapour laser pumped dye laser system into a much more versatile dual Dye and Ti:Sapphire system pumped by modern industrial solid-state lasers. Furthermore, the RILIS technique, originally exclusively applied within the hot cavity surface ion source, has been further developed to enable specific modes of operation or to exploit alternative laser atom interaction regions. The performance can now be tailored to prioritise efficiency, selectivity or versatility, depending on the requirements of the experiment. This is thanks to the multitude of laser ionisation options at our disposal: the Laser Ion Source Trap (LIST) and low work-function cavity for enhanced selectivity; and the Versatile Arc Discharge and Laser Ion Source (VADLIS), which is a multi-functional ion source for a variety of applications.
A status update on the ISOLDE-RILIS installation will be presented, including a selection of 'use case' highlights for each of the laser ion source configurations mentioned here.
Finally, an outlook towards the planned next stages of laser ion source R&D (such as the PI-LIST, ToFLIS and next-generation VADLIS), with a view to the possible interest for existing and next-generation ISOL facilities will be provided.
Speaker: Dr Bruce Marsh (CERN)
TRIUMF resonance ionization laser ion source operation lessons & highlights 20m
TRIUMF's isotope separator and accelerator facility is an ISOL facility based on a 500 MeV proton driver beam with a beam intensity of up to 100 µA on target. The ion sources in use are TRIUMF's FEBIAD, surface ion source and resonance ionization laser ion source (TRILIS). The TRILIS operational experience (delivering more than 50% of all beams and scheduled shifts) with all-solid-state laser systems, operated by a small operations team, will be critically discussed and analyzed, and the achievements of the past years presented.
This analysis is essential to the ongoing facility upgrade to the Advanced Rare Isotope Laboratory (ARIEL). ARIEL is going to add two additional RIB target stations, one based on an additional 500 MeV, up to 100 µA proton driver beam, and one based on photo-fission from a 30 MeV, up to 10 mA electron driver beam, to simultaneously deliver RIB to the ISAC experimental infrastructure, which can be separated into "low energy", "medium energy" and "high energy" experimental areas, with the "medium" and "high energy" areas using post-accelerated RIB.
In this scenario, it is envisioned to operate the two additional RILIS alongside TRILIS – without major resource increases - to provide the RIB for the experimental nuclear and particle physics programs.
Speakers: Jens Lassen (TRIUMF), Dr Ruohong Li (TRIUMF)
EBIS debuncher performances 15m
Charge breeding by Electron Beam Ion Source (EBIS) is an important technique for preparation of radioactive beams for further post-acceleration. The most efficient mode of the EBIS is pulsed mode. Depending on different parameters the characteristic time of the charge breeding process is of order of ~10 ms to ~100 ms, while the extraction time is 10 µs – 100 µs.
However, from the experimental point of view, continuous wave (CW) beams are preferred since bunched beams of the same average intensity tend to have a larger pile-up probability, dead time, and more random coincidences in the detectors due to the higher instantaneous counting rate at the moment of the beam bunch arrival. One of the goals of the Innovative Charge Breeding Task (ICBT) of the EURISOL JRA within ENSAR2 is the development of a debuncher device for CW ion beam formation at future ISOL facilities using the EBIS charge breeding technique.
The EBIS debuncher was developed and commissioned at LPC Caen within the EMILIE project [1]. It has been lately tested, thoroughly and successfully, with stable 7Li+1 beam on the LPC Caen test bench. Trapping lifetimes well beyond 1 s could be measured, and continuous extracted beams with intensity variations of ±20% could be obtained for extraction times as long as 800 ms. Projections for the use of such device with an operational EBIS, i.e. for HIE-ISOLDE or for a future EBIS at GANIL, are therefore encouraging. This contribution will describe the results of the test and the possible opportunities it offers for future EBIS setups.
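To illustrate quantitatively why debunching helps the detectors (an estimate added here, with hypothetical rates), the instantaneous rate during an EBIS extraction pulse exceeds the average rate by the inverse duty factor, which drives the pile-up probability towards one:

```python
import math

def pileup_probability(rate_hz, resolving_time_s):
    """Probability that at least one extra event falls within the resolving time (Poisson)."""
    return 1.0 - math.exp(-rate_hz * resolving_time_s)

avg_rate = 1e4                        # average rate on the detector, pps (hypothetical)
breeding, extraction = 50e-3, 50e-6   # s: cycle values within the ranges quoted above
duty_factor = extraction / breeding   # ~1e-3
inst_rate = avg_rate / duty_factor    # instantaneous rate during the extraction pulse

for label, rate in (("CW (debunched)", avg_rate), ("pulsed EBIS extraction", inst_rate)):
    print(f"{label:22s}: {rate:.1e} /s, pile-up within 1 us ~ {pileup_probability(rate, 1e-6):.4f}")
```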
We acknowledge the support of ENSAR2 under grant agreement number 654002.
[1] Optimizing charge breeding techniques for ISOL facilities in Europe: Conclusions from the EMILIE project, P. Delahaye, A. Galatà, J. Angot, J. F. Cam, E. Traykov, G. Ban, L. Celona, J. Choinski, P. Gmaj, P. Jardin, H. Koivisto, V. Kolhinen, T. Lamy, L. Maunoury, G. Patti, T. Thuillier, O. Tarvainen, R. Vondrasek, and F. Wenander, Rev. Sci. Instrum., 87, 02B510 (2016)
Speaker: Predrag Ujic (GANIL, Caen, France)
Molecular beams in the ISOL process 15m
Radioactive ion beam facilities exploiting thick targets, which are irradiated by a high-energy driver beam, allow the production of intense beams of many chemical elements. For example at CERN-ISOLDE more than 1000 isotopes of 73 different chemical elements are available for delivery to a large spectrum of experimental setups for investigations in nuclear physics, structure and applications.
While thick targets benefit from high in-target production rates, the release of the generated nuclides strongly depends on the chemical and physical nature of the element. Some elements (like Li, Na, K) are easily released. In contrast, it is still not possible to release many elements with high boiling points, the refractory metals (e.g. Mo, W, Os). The in-situ volatilization by molecule formation has proven to be a key concept for the extraction of such difficult elements [1].
Recently, exotic boron beams could be produced for the first time upon injection of sulphur hexafluoride gas into a carbon nanotube target [2]. Besides helping the volatilization of the isotope of interest, molecular beams can be used as a means to purify the beam from isobaric contamination, as already shown for example for selenium beams, extracted as SeCO ions [3], or more recently also with germanium sulphide ions.
Within this contribution, we summarize novel developments in molecular beam formation and show an original target concept based on the fission recoil effect for the release and ionization of the most challenging refractory elements, which are still not available at any ISOL facility despite the long history of the technique. Detailed numerical and experimental data will be presented to prepare for an online prototype test.
[1] U. Köster et al., (Im-)possible ISOL beams, Eur. Phys. J. Special Topics 150,
285-291 (2007).
[2] C. Seiffert, Production of radioactive molecular beams for CERN-ISOLDE,
Doctoral dissertation, Technische Universität Darmstadt (2014) + cds web link for CERN thesis
[3] U. Köster et. al, Oxide fiber targets at ISOLDE, Nucl, Instr. Meth. B 204, 303-313 (2003)
Speaker: Jochen Ballof (Johannes Gutenberg Universitaet Mainz (DE))
Session 4 -Instrumentation for radioactive ion beam experiments 500/1-001 - Main Auditorium
Convener: Navin Alahari (GANIL)
Status of the Super-FRS project at FAIR 30m
The Super-FRS will serve as a separator for a wide range of secondary beams at relativistic velocities as well as an experimental device in the future FAIR facility. The system is based on large-aperture superconducting magnets in conjunction with a high-rate detection system, serving both for identification at high rates and as an integral part of running experiments at the separator or experiments in the different branches of the Super-FRS.
In my talk I will discuss the status of the project and will also put some focus on the interspersed use of equipment in a campus-wide large detection system.
Speaker: Dr Haik Simon (GSI Darmstadt, Germany)
The SECAR System for Nuclear Astrophysics Measurements at FRIB 20m
The Separator for Capture Reactions (SECAR), under construction at Michigan State University, is a next-generation recoil separator system optimized for nuclear astrophysics measurements with radioactive ion beams at the National Superconducting Cyclotron Laboratory (NSCL) and at the Facility for Rare Isotope Beams (FRIB). SECAR will enable the measurement of critical proton and alpha radiative capture reactions on proton-rich unstable nuclei that are needed to improve our understanding of stellar explosions such as novae, supernovae, and X-ray bursts. Two +/-300 kV Wien filters with carefully matched electric and magnetic effective field length are used to achieve beam rejection compatible with the high radioactive beam intensities expected from FRIB/ReA.
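A short numerical illustration of the Wien-filter principle used for the beam rejection (hypothetical field and gap values added here, not the SECAR settings): only ions with velocity $v = E/B$ pass the crossed fields undeflected, so recoils and unreacted beam of different velocity are rejected.

```python
def selected_velocity(e_field_v_per_m, b_field_t):
    """Velocity (m/s) passed undeflected by crossed E and B fields: v = E / B."""
    return e_field_v_per_m / b_field_t

# Hypothetical: +/-300 kV electrodes across an assumed 30 cm gap -> 2 MV/m, with B = 0.1 T
e_field = 600e3 / 0.30
b_field = 0.10
v = selected_velocity(e_field, b_field)
print(f"selected velocity ~ {v:.2e} m/s (~{v / 2.998e8:.3f} c)")   # of order a few MeV per nucleon
```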
The design philosophy, status of SECAR construction, the early commissioning plans and the first commissioning measurements at MSU NSCL/FRIB will be presented.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Number DE-SC0014384 and by the National Science Foundation under grant No. PHY 1624942, PHY 08-22648 and PHY-1430152 (Joint Institute for Nuclear Astrophysics and JINA-CEE).
Speaker: Prof. Couder Manoel (University of Notre Dame)
Development of a multi-segmented proportional gas counter for $\beta$-decay spectroscopy at KISS 15m
We have developed a new multi-segmented proportional gas counter (MSPGC) [1] for $\beta$-decay spectroscopy of nuclei with neutron number $N\sim126$ relevant to the 3rd peak in the r-process. These nuclei are produced by multi-nucleon transfer (MNT) reactions of $^{136}$Xe beam and $^{198}$Pt target [2], and can be extracted from KEK Isotope Separation System (KISS) [3]. KISS is an argon-gas-cell based laser ion source combined with an on-line isotope separator, and therefore it can select mass and atomic numbers. The extracted nuclei are implanted into a tape in the KISS detector system. In order to perform the $\beta$-decay spectroscopy precisely and efficiently, the background event rate of a $\beta$-ray detector should be less than 0.1 cps considering the typical extraction yield of neutron-rich nuclei of a few pps, and detection efficiency should be as high as possible.
The MSPGC comprises a pair of 16-segmented proportional gas counters in 2-cylindrical layers (total 32 ch) in order to identify $\beta$-ray events with high-efficiency and eliminate background events such as cosmic-rays by two-dimensional tracking effectively. The small energy losses in detector gas of argon (90%) + CH$_4$ (10%) and the cathode made of aluminized Mylar foils have allowed us to realize an absolute detection efficiency of 45% at $Q_\beta=1$ MeV along with detection of low-energy conversion electrons. We successfully achieved our desired background event rate of 0.1 cps.
We performed the hyperfine structure measurements of neutron-rich nuclei by in-gas-cell laser ionization spectroscopy and $\beta$-$\gamma$ spectroscopy including identification of isomeric states through the detection of conversion electrons. In the presentation, we will discuss the properties of the MSPGC and the experimental results. We will also present the current status of three-dimensional tracking in the MSPGC to realize the background event rate of 0.01 cps by applying resistive carbon wire anodes.
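A quick sanity check (simple arithmetic added here, not from the contribution) of why the 0.1 cps background target matters at the few-pps extraction yields quoted above:

```python
def signal_to_background(yield_pps, efficiency, background_cps):
    """Ratio of detected beta rate to detector background rate."""
    return yield_pps * efficiency / background_cps

for y in (0.1, 1.0, 10.0):   # extracted yield, particles per second
    print(f"yield {y:5.1f} pps -> S/B ~ {signal_to_background(y, 0.45, 0.1):5.1f}")
```

At the lowest yields the detected β rate is comparable to the background, so a further reduction to 0.01 cps translates almost directly into shorter measurement times.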
[1] M. Mukai et al., Nucl. Instrum. and Meth. A 884 (2018) 1.
[3] Y. Hirayama et al., Nucl. Instrum. and Meth. B 353 (2015) 4.; B 376 (2016) 52.; B 412 (2017) 11.
Speaker: Momo Mukai (University of Tsukuba)
Current Status of Experimental Facilities at RAON 15m
The Rare Isotope Science Project (RISP) was established in December 2011 to build the accelerator complex (Rare isotope Accelerator complex for ON-line experiments; RAON) for rare isotope science in Korea. The rare isotope accelerator at RAON will provide both stable and rare isotope (RI) beams with energies ranging from a few keV to a few hundred MeV per nucleon for research in the fields of basic and applied science.
At the moment, there are 7 experimental facilities considered at RAON: KOrea Broad acceptance Recoil spectrometer and Apparatus (KOBRA) and Large Acceptance Multi-Purpose Spectrometer (LAMPS) for nuclear physics, High Precision Mass Measurement System (HPMMS) with Multi-Reflection Time-of-Flight (MR-ToF) and Collinear Laser Spectroscopy (CLS) for atomic physics, Nuclear Data Production System (NDPS) for nuclear reaction data, Muon Spin Rotation/Relaxation/Resonance (muSR) for material science, Beam Irradiation System (BIS) for bio-medical science.
In this talk, the current status of the 7 experimental facilities at RAON, including their detailed design and research goals, will be discussed.
Speaker: Young Jin Kim (Institute for Basic Science/Rare Isotope Science Project)
Poster Session 1 500/1-001 - Main Auditorium
Development of gas filled dipole magnet for FIPPS phase 2 1m
FIPPS (FIssion Product Prompt gamma-ray Spectrometer) is a new instrument of ILL for the gamma-ray spectroscopy of nuclei produced by thermal neutron induced reactions. In the current stage, FIPPS consists of an array of 8 HPGe clover detectors and a pencil-like intense thermal neutron beam.
The next phase of FIPPS aims to study i) Nuclear structure of neutron-rich nuclei far from stability produced in neutron induced fission. ii) Fission of heavy elements to explore the dynamics of the fission process. To study these under optimum conditions, ancillary devices are required to increase the sensitivity and selectivity of fission fragment detection with a good efficiency.
In FIPPS phase 2 the existing FIPPS HPGe array will be complemented by an anti-Compton shield. Moreover, a Gas-Filled Magnet (GFM) with a moderate mass separation (< 4 amu) [1] and a large acceptance (> 50 msr) for fission fragments will be installed. The conventional homogeneous-field magnet has been compared with an innovative design based on a 1/r magnetic field with arc-shaped pole edges to assure point-to-point focusing of fission fragments over a very wide acceptance. Thus the requirements for tracking of ions are strongly relaxed.
The design of the GFM spectrometer, with the magnetic field calculation of the dipole magnet, will be presented. Characteristics of the GFM for detecting fission fragments, studied by Monte-Carlo simulation using GEANT4 and based on test experiments at LOHENGRIN [2], will also be presented.
[1] H. Lawin et al., Nucl. Inst. and Meth. 137 (1976) 103-117
[2] A. Chebboubi et al., Nucl. Inst. and Meth. B 376 (2017) 120-124
Speaker: Dr Yung Hee Kim (Institut Laue-Langevin)
Tuning of an 81.25 MHz Four-vane RFQ with a Ramped Field Profile at RISP 1m
A radio frequency quadrupole (RFQ) linear accelerator has been developed and tuned for the heavy ion accelerator facility at RISP (Rare Isotope Science Project). The RISP RFQ has the 81.25 MHz operational frequency and a four-vane structure for a continuous wave (CW) operation despite the fabrication difficulties of the huge cavity due to the brazing technology. The cavity is inherently insensitive to perturbations due to low frequency and a short cavity length. The linearly increasing profile of the inter-vane voltage has been tuned for all quadrants through not only the movable slug tuners but also the modification of the end plate. In this study, a low-frequency RFQ with a novel ramped field profile has been tuned and the commissioning tests have been conducted with a new tuning method compatible with the modification of end region geometry.
Speaker: Bum-Sik Park (IBS)
Improvement of the β-ion correlation efficiency in decay spectroscopy 1m
β-decay spectroscopy is a useful method for understanding physics of nuclear structure. In decay spectroscopy experiments, Double-Sided Silicon Strip Detectors (DSSSDs) have often been used because of their detection capability on ions and β-rays. In order to identify β-ray events in the DSSSDs, it is necessary to correlate a β-ray and a corresponding, implanted ion using time and position information.
This process of the β-ion correlation should be carried out carefully, because the correlation efficiency depends on the positions and the energy losses of the implanted ions and the emitted β-rays in the DSSSDs. In this analysis, a new algorithm has been introduced to improve the β-ion correlation efficiency with the DSSSD, WAS3ABi [1]. In the new approach, hit patterns of β-rays recorded in the WAS3ABi are categorized to determine the initial position of the β-rays. When the β-rays were detected by the plastic scintillators installed at upstream/downstream of the WAS3ABi, the directions of the β-rays were also deduced. Furthermore, some ions stopped at the surface of the DSSSD layers have also been analyzed [2], finally improving the β-ion correlation efficiency. We demonstrate that this method can successfully reduce the background from random β-ion correlations while collecting more correlated β-ion events, thus improving signal to background ratio.
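A schematic sketch of the basic correlation step described above (illustrative Python added here; the actual WAS3ABi analysis with hit-pattern categories is more involved): each β event is matched to the most recent ion implanted in the same or a neighbouring strip within a time window.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float   # time (s)
    x: int     # DSSSD strip index (x)
    y: int     # DSSSD strip index (y)

def correlate(ions, betas, time_window=10.0, max_strip_offset=1):
    """Pair each beta with the most recent ion implanted nearby within the time window."""
    pairs = []
    for b in betas:
        candidates = [i for i in ions
                      if 0.0 <= b.t - i.t <= time_window
                      and abs(b.x - i.x) <= max_strip_offset
                      and abs(b.y - i.y) <= max_strip_offset]
        if candidates:
            pairs.append((max(candidates, key=lambda i: i.t), b))   # most recent implant wins
    return pairs

ions = [Event(0.0, 10, 20), Event(5.0, 40, 41)]
betas = [Event(2.5, 10, 21), Event(30.0, 40, 41)]   # second beta falls outside the window
print(correlate(ions, betas))
```

Tightening the spatial window or using hit-pattern/direction information, as described above, reduces the number of random candidates per β and hence the random-correlation background.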
[1] P. -A. Söderström et al., Nucl. Instrum. Methods Phys. Res. B 317, 649 (2013)
[2] I. Nishizuka et al., JPS Conf. Proc. 6, 030062 (2015)
Speaker: Mr Jeongsu Ha (Seoul National University)
A novel method for in-trap nuclear decay spectroscopy and level lifetime measurement using a double Penning trap 1m
MLL-Trap is a double Penning-Trap for high precision mass measurement of exotic nuclei, built and commissioned off-line at the Maier-Leibnitz Laboratory in Garching, Germany [1] and currently installed at the ALTO facility at IPN in Orsay. A new double trap geometry is being studied, in which the central electrode of the second trap has been replaced by an arrangement of four silicon strip detectors [2]. An ion cloud stored in a Penning Trap is indeed an ideal source for decay spectroscopy, since it is very well localized and backing free. Moreover, the ion bunch can be purified from contaminants and in the case of alpha emitters, the strong magnetic field spatially separates the alpha particles and coincident conversion electrons, allowing a clean spectroscopy of both. Such a setup enables direct in-situ observation of decaying very heavy alpha emitters.
In addition, once coupled with a position-sensitive electron detector, this spectroscopic trap will allow for indirect measurement of lifetimes of first excited states or 0+ states in the region of heavy and super heavy nuclei, via a recoil-distance measurement [2]. Measuring the lifetime of the 2+ level in very-heavy even-even nuclei can lead to the derivation of the corresponding quadrupole moment and gives insight into its deformation and degree of collectivity. Also, the lifetime measurement of a low-lying 0+ state could allow to quantify the shape mixing with the ground state.
Simulations performed with SIMION8.1 confirm the feasibility of the method, while the expected uncertainties are still being investigated. Candidate nuclei for both offline and on-line commissioning at ALTO have been identified.
[1] V.S. Kolhinen et al., NIM A 600 (2009) 391-397
[2] C. Weber et al., IJMS 349-350 (2013) 270-276
Speaker: Pierre Chauveau (CSNSM)
Velocity filter SHELS: performance and experimental results. 1m
In recent years α-, β- and γ-spectroscopy of heavy nuclei at the focal plane of recoil separators ("decay spectroscopy") has been developed very intensively. Combining α-decay spectroscopy with γ- and β-decay spectroscopy allows the investigation of the behaviour of single-particle states as well as the structure of little-known elements in the Z = 100-104 and N = 152-162 region.
In the past, using the GABRIELA (Gamma Alpha Beta Recoil Investigations with the ELectromagnetic Analyser) set-up and the VASSILISSA electrostatic separator, experiments aimed at the gamma and electron spectroscopy of the Fm – Lr isotopes, formed in the complete fusion reactions 48Ca+207,208Pb→ 255,256No, 48Ca+209Bi→ 257Lr, 22Ne + 238U → 260No, were performed.
The accumulated experience allowed us to perform ion-optical calculations and to design a new experimental set-up combining the best parameters of the existing separators and of the complex detector systems used at the focal planes of these installations. The new experimental set-up SHELS (Separator for Heavy ELement Spectroscopy), based on the existing VASSILISSA separator, was developed for the synthesis and study of the decay properties of heavy nuclei [1,2]. The ion-optical scheme of the new separator can be described as Q-Q-Q-E-D-D-E-Q-Q-Q-D, where Q denotes quadrupole lenses, E electrostatic deflectors and D dipole magnets. Test experiments showed that the transmission efficiency for slow evaporation residues formed in asymmetric target-projectile combinations (22Ne induced reactions) increased by a factor of 3 – 4, while for more symmetric combinations (48Ca and 50Ti induced reactions) the background conditions at the focal plane improved.
During the last experimental campaigns (years 2016 – 2018) the new double sided silicon detector (DSSD) was used at the focal plane of the SHELS separator (128x128 strips, 100x100 mm2). The detector demonstrated high stability and ensured a high resolution (0.2 %) of alpha particle registration. GABRIELA detector set up was modernized too, now it consists of 5 Ge gamma detectors (1 Clover and 4 single crystal).
During the last two years we performed experiments to study the decay properties of 255,257Rf and 256,257Db in the reactions 50Ti + 207,208Pb → 257,258Rf* and 50Ti + 209Bi → 259Db*.
[1] A.V. Yeremin et al., PEPAN Letters, 12, 35 (2015)
Speaker: Dr Alexander Yeremin (FLNR JINR)
An optimized plasma ion source for difficult ISOL beams 1m
The ionization by radial electron neat adaptation (IRENA) ion source has been designed to operate under extreme radiation conditions. Based on the electron-beam-generated plasma concept, the ion source is specifically adapted for thick-target exploitation under intense irradiation. A validation prototype has already been designed and tested off-line. The design of a new optimized prototype for the on-line production of difficult beams at ISOL facilities will be presented. In particular, simulation studies of thermionic emission, ion confinement and extraction will be presented and the results discussed.
This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 654002.
Speaker: Ailin ZHANG (Institut de Physique Nucléaire Orsay, CNRS-IN2P3)
Production of theranostic Tb isotopes: electromagnetic isotope separation before or after irradiation ? 1m
Terbium has a quadruplet of so-called theranostic isotopes useful for the preparation of radiopharmaceuticals: 152Tb (PET imaging), 155Tb (SPECT imaging), 161Tb (beta- therapy) and 149Tb (alpha therapy). All isotopes belong to the same element, thus assuring identical pharmacokinetics, an essential requirement for theranostics. 149,152,155Tb with high radioisotopic purity is so far only available from spallation of Ta targets combined with on-line mass separation at CERN-ISOLDE or TRIUMF-ISAC.
Additional production at cyclotrons is urgently required to satisfy the great demand for medical applications. These isotopes could in principle also be produced by 155Gd(p,n)155Tb, 152Gd(p,n)152Tb and 152Gd(p,4n)149Tb reactions respectively, provided targets of sufficient isotopic enrichment become available. Commercially available 152Gd reaches only 30% enrichment, but >>90% enrichment is required to minimize co-production of longer-lived Tb isotopes in (p,n) reactions.
We present a demonstration experiment performed at the tandem accelerator of the MLL Garching, where 152Tb was produced by irradiating a unique ion-implanted 152Gd target (>99% enriched) with 8 MeV and 12 MeV protons. At these energies only 152Tb was observed, while upper limits are derived for the co-production of other Tb isotopes. This radioisotopic purity would enable direct use for human applications, requiring only a chemical Tb/Gd separation from the target material.
We will discuss prospects to efficiently separate more 152Gd and 155Gd with the SIDONIE mass separator at CSNSM Orsay and thus prepare cyclotron targets suited for high current irradiations.
We thank the MLL staff for smooth operation of the tandem accelerator.
Speaker: Dr Ulli Köster (Institut Laue-Langevin)
SIPT - An Ultrasensitive Mass Spectrometer for Rare Isotopes 1m
Over the last few decades, advances in radioactive beam facilities like the Coupled Cyclotron Facility at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University (MSU) have made short-lived, rare-isotope beams available for study in various science areas, and new facilities, like the Facility for Rare Isotope Beams (FRIB) under construction at MSU, will provide even more exotic rare isotopes. The determination of the masses of these rare isotopes is of utmost importance since it provides a direct measurement of the binding energy of the nucleons in the atomic nucleus. For this purpose we are currently developing a dedicated Single-Ion Penning Trap (SIPT) mass spectrometer at NSCL to handle the specific challenges posed by rare isotopes. These challenges, which include short half-lives and extremely low production rates, are dealt with by employing the narrowband FT-ICR detection method under cryogenic conditions. Used in concert with the 9.4-T time-of-flight mass spectrometer, the 7-T SIPT system will ensure that the LEBIT mass measurement program at MSU will make optimal use of the wide range of rare isotope beams provided by the future FRIB facility, addressing such topics as nuclear structure, nuclear astrophysics, and fundamental interactions.
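The Penning-trap measurement ultimately relies on determining the ion's cyclotron frequency, $\nu_c = qB/(2\pi m)$. As an illustrative order-of-magnitude estimate (not a quoted SIPT specification), a singly charged $A = 100$ ion in the 7-T field has $\nu_c \approx (1.60\times10^{-19}\,\mathrm{C})(7\,\mathrm{T}) / [2\pi\,(100)(1.66\times10^{-27}\,\mathrm{kg})] \approx 1.1\,\mathrm{MHz}$, which sets the scale of the image-current signal detected by the narrowband FT-ICR method.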
Speaker: Ryan Ringle (Michigan State University)
Current Status of Rare Isotope Science Project (RISP) of IBS 1m
The construction of the heavy-ion accelerator complex RAON (Rare isotope Accelerator complex for ON-line experiments) has been carried out under the RISP of IBS since 2011. The major accelerator systems, the QWR (quarter-wave resonator) and HWR (half-wave resonator) of the low-energy LINAC, are in mass production after functional readiness was demonstrated in the 2016 SCL (superconducting linac) demo runs, which also included the superconducting ECR-IS (electron cyclotron resonance ion source) and the RFQ (radio-frequency quadrupole) of the injection system. KOBRA (Korea broad acceptance recoil spectrometer and apparatus), the low-energy experimental system for the day-1 experiments of the early-phase runs, is also well into construction, together with the RISP rare-isotope production systems, the ISOL (Isotope Separation On-Line) and IF (in-flight) fragmentation systems and others. The current integrated RISP effort and the status of the construction of RAON will be reported.
Speaker: Dr Taeksu Shin (RISP/IBS)
TITAN's Next Generation of Experimental Setup for Mass Spectrometry of Highly Charged Ions 1m
Atomic masses, if known with sufficient precision, are key tools to understand the nature of nuclear forces and structure, fundamental symmetries and astrophysical processes. With the availability of beams of increasingly exotic species, mass spectrometry techniques have become more challenging. They need to be faster for shorter lifetimes, more sensitive for lower intensities, and still sufficiently precise to be of scientific interest.
The TITAN facility at TRIUMF has been successfully performing precision mass measurements of radioactive nuclei for over a decade. Its mass Measurement Penning Trap (MPET) is designed to probe atomic masses of ions living as short as 10 ms, with low production yields, in the $10^{-7}$ - $10^{-9}$ precision range. A powerful way to boost this precision is to charge breed the inspected ion, which is done at TITAN through electron impact ionization in an Electron Beam Ion Trap (EBIT).
The implementation of TITAN's next-generation capability of performing mass spectrometry on highly charged ions (HCIs) is currently in its final stages. The EBIT has been upgraded to deliver electron-beam energies up to 60 keV, which can provide access to bare ions up to Z=65. On the other hand, MPET is being redesigned to perform mass measurements of ions at charge states well beyond +20. It will be integrated into a new cryogenic vacuum system to prevent electron recombination due to the ions' interaction with background gas.
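As a rough order-of-magnitude check (non-relativistic hydrogenic estimate), the last electron of a hydrogen-like ion is bound by about $Z^{2}\times 13.6\,\mathrm{eV}$, i.e. roughly $57\,\mathrm{keV}$ for $Z = 65$, which is comparable to the 60-keV electron-beam energy quoted above for producing bare ions up to that Z.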
We will present our most recent mass spectrometry results employing highly charged ions, as well as details of TITAN's new CryoMPET and EBIT high voltage upgrades, their design concepts, status and future plans.
Speaker: Mr Erich Leistenschneider (TRIUMF, UBC)
Development of Direct Temperature Measurements of ISAC and ARIEL Targets at TRIUMF 1m
To improve the thermal model of high-power Isotope Separation On-Line (ISOL) targets at TRIUMF, an optical technique is being developed which allows direct off-line and on-line temperature measurements of targets for radioactive isotope production. In this set-up, the light emitted by the hot target through the ionizer opening is collected via a set of optics into a spectrometer, and the target temperature is deduced from the emission spectrum and the Planck (black-body) distribution. In off-line tests, tantalum targets were heated up to the vacuum-pressure-limited temperature (2700 K), corresponding to a minimum black-body peak wavelength of 1.07 μm. These preliminary temperature measurements confirm the correlation between the spectrum of the radiation emitted from the target and the currents used to resistively heat the targets. The final goal is to apply this technique to on-line targets and correlate the isotope releases with the target temperatures, for a better understanding of the diffusion and effusion processes happening in the target and for optimizing the delivery of short-lived species.
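For orientation, the quoted peak wavelength is consistent with Wien's displacement law: at $T = 2700\,\mathrm{K}$, $\lambda_{max} = b/T \approx 2898\,\mu\mathrm{m\,K}/2700\,\mathrm{K} \approx 1.07\,\mu\mathrm{m}$.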
Speaker: Aurelia Laxdal (TRIUMF)
New gamma-ray detector CATANA for in-beam gamma-ray spectroscopy with fast RI beams 1m
The $\gamma$-ray detector CATANA (Caesium iodide Array for $\gamma$-ray Transitions in Atomic Nuclei at high isospin Asymmetry) is designed to detect in-flight $\gamma$-rays from fast RI beams at RIBF, RIKEN Nishina Center. CATANA consists of 200 square-frustum-shaped CsI(Na) crystals coupled to photomultiplier tubes. The total active mass of the scintillator material is 270 kg. The scintillators are arranged to minimize the distance between them so as to obtain better calorimetric properties.
50% of the detectors were constructed and commissioned in 2016. We have performed several experiments by combining CATANA with the SAMURAI spectrometer [1]. The talk will give an overview of CATANA and results from these experiments.
[1] T. Kobayashi et al., Nucl. Instr. Meth. B 317, 294 (2013).
Speaker: Yasuhiro Togano (Rikkyo University)
Development of a prototype ion source for RIB production in reactor 1m
A prototype ion source for RIB production in a reactor has been developed at the China Institute of Atomic Energy (CIAE) to demonstrate the feasibility. The ion source has to be compact enough to fit into the neutron tunnel of the reactor and has to withstand the tens of kW of heat from target fission. An electric heater is used to simulate the fission heat; at the same time the cathode of the ion source is heated to emit electrons, which are accelerated between anode and cathode to ionize the fission products. The details of the ion source and its preliminary test results will be presented.
Speaker: Baoqun Cui (China Institute of Atomic Energy)
Recent developments of ISOLTRAP's MR-ToF MS 1m
The Multi-Reflection Time-of-Flight Mass Spectrometer (MR-ToF MS) of ISOLTRAP has been used successfully for several years for precision mass measurements and ion purification. Nevertheless, further improvements are still possible concerning, e.g., the ion optics, beam preparation and stability of the system. All these issues were addressed in a series of systematic studies reported here. High-precision mass measurements require a pulsed ion beam with a narrow spread in time and energy. Therefore, the ions are cooled and bunched in a gas-filled radiofrequency linear quadrupole trap. The effect of the buncher radiofrequency field on the beam quality and time of flight was studied with simulations and experimentally using an off-line alkali source. The energy width of the ion bunch was studied under different experimental conditions. This led to a reduction of the systematic mass-dependent shifts and to more symmetric peak shapes. The stability of the MR-ToF MS was addressed in order to determine and reduce the impact of voltage fluctuations and to compensate for voltage drifts during data analysis. This made it possible to improve and estimate the accuracy of MR-ToF MS measurements using off-line references. In addition, a new einzel lens was simulated, designed and implemented in order to improve the injection efficiency into the MR-ToF MS.
Speaker: Timo Pascal Steinsberger (Max-Planck-Gesellschaft (DE))
Database of radioactive isotopes produced at the BigRIPS separator 1m
A new-generation radioactive isotope (RI) beam facility, the RI Beam Factory (RIBF), has been operating at the RIKEN Nishina Center since 2007. A wide variety of RI beams have been produced using the BigRIPS in-flight separator to perform various studies of exotic nuclei far from stability. Not only the projectile fragmentation of heavy-ion beams, such as $^{14}$N, $^{18}$O, $^{48}$Ca, $^{70}$Zn, $^{78}$Kr, and $^{124}$Xe beams, but also the in-flight fission of a $^{238}$U beam has been employed for the production of RI beams.
A total of 159 experiments using RI beams at the BigRIPS separator have been performed so far. The number of RI beams produced amounts to approximately 1600, and the number of new isotopes has reached 132. Production cross sections for more than 1000 isotopes were obtained. In order to gather and manage this large amount of experimental data, we have been developing a database of RI beams produced at the BigRIPS separator. The RI database includes the production cross sections and yields together with detailed experimental conditions. Information on isomeric nuclei, such as gamma-ray energies, half-lives and sample gamma-ray energy spectra, is also included. The RI database is synchronized with a web site.
The RI database system is a powerful tool for setting up RI beams quickly and accurately. During BigRIPS tuning, we can obtain the production cross sections and gamma-ray energies quickly and easily. By comparing with these values, we can confirm whether the present measurement has been carried out successfully. The RI database system thus helps us to confirm the validity of the setting and to shorten the tuning time. Furthermore, this system assists RIBF users in designing RI-beam experiments using the BigRIPS separator.
Speaker: Dr Yohei Shimizu (RIKEN)
Study on laser resonance photoionization of Molybdenum atoms 1m
In the framework of the research and development activities of the SPES project, and with regard to the optimization of radioactive beam production, the hollow cathode lamp spectroscopic technique is nowadays a well-established tool for studying resonant laser ionization.
By means of this instrument, it is possible to test resonant laser ionization processes of stable species, and in this work, the study is applied to Molybdenum atoms.
Three-step, two-color ionization schemes have been tested. The "slow" and the "fast" optogalvanic signals were detected and averaged with an oscilloscope as proof of laser ionization inside the lamp.
As a result, several wavelength scans across the resonances of the ionization schemes were collected with the "fast" optogalvanic signal. Comparisons of the ionization efficiency of the different ionization schemes were made. Furthermore, saturation curves of the first excitation levels have been obtained.
The molybdenum isotope 99Mo is used to produce 99mTc, the paramount radionuclide for diagnostics in modern nuclear medicine; hence the interest of this study, even though molybdenum is not a SPES element.
In this framework, the MOLAS project (Molybdenum production with Laser technique at SPES) has recently been introduced, and this study will be the first milestone of the project.
Speaker: Alberto Monetti (lnl infn)
The development of a FEBIAD ion source for BRISOL 1m
The Beijing Radioactive Ion beam facility Isotope Separator On-Line (BRISOL) is a radioactive ion beam facility based on a 100 MeV cyclotron providing a 100 μA proton beam that bombards a thick target to produce radioactive nuclei, which are then singly ionized in an ion source. A new FEBIAD ion source has been developed to fulfil the requirements of BRISOL for producing radioactive ion beams. A series of structural optimizations has been adopted to make maintenance of the ion source easier. Results from this ion source will be presented in this paper.
Speakers: Bing Tang (China Institute of Atomic Energy), Baoqun Cui (China Institute of Atomic Energy), Dr Lihua Chen (China Institute of Atomic Energy)
Media Board – Low-cost interface for remote handling of beam instrumentation devices at the Super-FRS 1m
At the Super-FRS, the new in-flight separator under construction at FAIR [1], many beam instrumentation devices such as detector drives, degrader and slit systems, etc. have to be implemented. These devices are installed as insertions in the diagnostic vacuum chambers at the various focal planes of the Super-FRS. Due to the highly activated environment, the insertions have to be remotely handled by means of a fully autonomous industrial robot system.
In order to automatically connect and disconnect utilities such as electrical power, cooling water, compressed air and electrical signals, a low-cost mechanical interface called the media board was developed at GSI. In this contribution we will present this development.
[1] https://www.gsi.de/en/research/fair.htm
Speaker: Simon Haik (GSI)
The NSCL Cyclotron Gas Stopper - preparing to go 'online' 1m
Rare isotopes are produced at the NSCL by projectile fragmentation at energies of ~100 MeV/u. The NSCL has successfully used linear gas stopping cells for more than a decade to thermalize projectile fragments and extract them at 10's of keV energies; first for experiments at low energy and later for reacceleration to Coulomb barrier energies. In order to stop and rapidly extract light and medium-mass ions, which are difficult to efficiently thermalize in linear gas cells, a gas-filled, reverse cyclotron has been constructed [1]. The device uses a $\le$2.6T field superconducting cyclotron-type magnet and helium gas in a LN-cooled stopping chamber to confine and slow down the injected beam. The thermalized beam is transported to the center of the magnet by a traveling-wave RF-carpet system [2], extracted through the central bore with an ion conveyor [3] and accelerated to <60 keV energy for delivery to the users.
For magnet commissioning and low-energy ion tests, the cyclotron gas stopper has been constructed in a location not connected to NSCL high-energy beamlines. The magnet has been energized to its nominal strength and the measured field is in excellent agreement with predictions. The RF ion-guiding components have been installed inside the magnet. Efficient ion transport has been demonstrated with ions from a movable alkali source with the magnet off. The tests are currently being repeated with the magnet energized and preparations are underway to cool the gas to LN temperature.
With offline tests coming to an end, an experimental vault is being prepared to allow connecting the cyclotron gas stopper to the NSCL beamline. The design for a dedicated momentum-compression beam line, similar to the ones feeding the linear gas cells, is essentially complete and the components are under construction. A summary of the offline tests, the layout of the cyc-stopper's new online location, the ion-optical design of the beamline and plans for the move of the device will be presented.
This work is supported by NSF under grants PHY-09-58726, PHY-11-02511 and PHY-15-65546.
[1] S.Schwarz et al., NIM B, 376, 2016, 256
[2] A.Gehring et al., NIM B, 376, 2016, 221
[3] M. Brodeur et al., NIM B, 317, 2013, 468
Speaker: Stefan Schwarz (NSCL/MSU)
The thermal finite element analysis of the high-power rotating target for BigRIPS separator 1m
The RIKEN RI Beam Factory (RIBF) cyclotrons can accelerate very heavy ions up to 345 MeV/nucleon, such as uranium. The goal beam intensity is as high as 1 pμA (6.2 $\times$ 10$^{12}$ particles/s), which corresponds to a beam power of 82 kW in the case of $^{238}$U. An important aspect in increasing beam intensity is to limit the maximal temperature due to the beam energy loss in the material. The control of this absorbed power is proving to be one of the key challenges. Therefore, the water-cooled rotational disk targets and ladder-shaped fixed targets were designed and constructed for the BigRIPS separator [1,2,3]. For low power deposition and low power density, the fixed ladder-shaped target is sufficient to dissipate the heat. For high power density, the rotating disk target is used for all primary beams up to uranium.
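For reference, the quoted beam power follows directly from the beam energy and intensity: $P \approx (345\,\mathrm{MeV/u})(238\,\mathrm{u})(6.2\times10^{12}\,\mathrm{s^{-1}})(1.6\times10^{-13}\,\mathrm{J/MeV}) \approx 82\,\mathrm{kW}$.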
Although the present primary beam intensity is lower than the goal value, the beam-spot temperature was measured under various conditions and compared with thermal simulations to examine the beam-power tolerance and evaluate the cooling capacity of the high-power rotating disk target. The finite-element thermal analysis code ANSYS was used to model the thermal distributions in the targets. Calculations of the beam-spot temperature on the rotating disk target were performed for the different primary beams. The design of the high-power rotating disk target and the details of the ANSYS simulation will be reported, and the calculated beam-spot temperatures will be presented.
[1] A. Yoshida et al., Nucl. Instr. Meth. A 521, 65 (2004).
[2] A. Yoshida et al., Nucl. Instr. Meth. A 590, 204 (2008).
[3] T. Kubo, Nucl. Instr. Meth. B 204, 97 (2003).
Speaker: Dr Zeren Korkulu (RIKEN Nishina Center)
Position sensitive resonant Schottky cavities 1m
Resonant Schottky pick-up cavities are sensitive beam monitors. They are indispensable for beam diagnostics in storage rings. Apart from their applications in the measurement of beam parameters, they can be used in non-destructive in-ring decay studies of radioactive ion beams [1]. In addition, position-sensitive Schottky pick-up cavities enhance the precision of the isochronous mass measurement technique.
The goal of this work is to construct and test such a position-sensitive cavity (Schottky detector) based on previous theoretical calculations and simulations. These cavities will allow measurement of a particle's horizontal position using the monopole mode in a non-circular (elliptic) geometry [2]. This information can be further analyzed to increase the performance of isochronous mass spectrometry [3-4]. A brief description of the detector and its application in mass and lifetime measurements will be provided in this contribution.
Keywords: storage rings, Schottky detector, ion beam measurement
[1] M. S. Sanjari et al., "A resonant Schottky pickup for the study of highly charged ions in storage rings," Phys. Scripta T156 (2013) 014088
[2] M. S. Sanjari et al., "Conceptual design of elliptical cavities for intensity and position sensitive beam measurements in storage rings," Phys. Scripta T166 (2015)
[3] X. Chen et al., "Accuracy improvement in the isochronous mass measurement using a cavity doublet," Hyperfine Interact. 235 (2015) 1-3, pp. 51-59
[4] X. Chen et al., "Intensity-sensitive and position-resolving cavity for heavy-ion storage rings," Nucl. Instrum. and Meth. A826 (2016) 39–47
Speaker: Ivan Kulikov (GSI)
Exploratory study for the production of Sc beams at the ISOL facility of MYRRHA 1m
The design of high-power targets for the production of Radioactive Ion Beams (RIBs) at an Isotope Separation On-Line (ISOL) facility requires a full overview of the physical processes occurring in the target: nuclear reactions, thermal effects, isotope diffusion and effusion. Such high-power targets are nowadays a prerequisite, as they constitute one of the means to significantly increase the yields of certain RIBs to the levels required by the users. In the first phase of the MYRRHA project, the ISOL@MYRRHA facility will make use of a high-power proton beam (100 MeV, 0.5 mA) in combination with high-power targets in order to produce high-intensity RIBs of various isotopes. These high-power targets require specific R&D to tackle engineering challenges such as heat dissipation while maintaining the high isotope yields that are obtained with thick targets.
For this, an algorithmic method is being developed that will combine particle transport calculations, thermo-mechanical simulations and an isotope release model, in order to determine the optimal target design for the production of a specific isotope. In this contribution, the exploratory study for the production of Sc beams at the ISOL facility of MYRRHA will be presented. Short-lived isotopes such as 41Sc would be of interest for beta-decay spectroscopy, while longer-lived ones such as 44,47Sc are useful for medical applications.
Speaker: Martin Ashford (SCK•CEN)
ToF and molecular beam studies of the on-line beam with the Isolde RFQ beam-cooler 1m
A new high-sensitivity time-of-flight detector has been designed and installed in the Isolde beamline, permitting study of the time structure of the on-line beam for the first time. The detector uses secondary electron emission and an MCP read-out to create a robust but highly-sensitive detector with a response time of 0.5 ns.
The detector is 10 m downstream of the RFQ extraction point, allowing us to measure the mass composition of the RFQ beam. This allows us to study cooled molecular beams, which may suffer collisional decomposition during the cooling process. We present the results of the first systematic study of the effects of the RFQ on molecular beams under varying conditions.
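As an illustrative estimate (assuming, for example, a singly charged $A = 100$ ion at a 30-keV transport energy, not a specific measured case), the flight time over the 10-m path is $t = L\sqrt{m/2qU} \approx 10\,\mathrm{m}/(2.4\times10^{5}\,\mathrm{m/s}) \approx 42\,\mu\mathrm{s}$, so neighbouring masses arrive roughly 200 ns apart, comfortably resolved with the 0.5-ns detector response.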
The new detector also allows us to adapt the RFQ bunching to the particular needs of the downstream user. We present the effects of different RFQ tunes, optimised for low energy-spread or for short bunch widths.
Speaker: Annie Ringvall Moberg (CERN)
Measurement of spallation cross sections for the production of terbium radioisotopes for medical applications from tantalum targets 1m
Terbium has 4 interesting isotopes for usage in the context of nuclear medicine: $^{149}$Tb, $^{152}$Tb, $^{155}$Tb and $^{161}$Tb, sometimes referred to as the Swiss army knife of nuclear medicine [1]. Their chemical identity means that radiopharmaceuticals for imaging and therapy respectively will have identical pharmacokinetics and pharmacodynamics, an important advantage for so-called theranostics applications.
$^{161}$Tb is best produced by irradiating $^{160}$Gd with thermal neutrons to form $^{161}$Gd which quickly decays into $^{161}$Tb. For the neutron deficient isotopes mentioned above, one of the most promising production methods is high-energy proton-induced spallation of tantalum foil targets, coupled with isotope separation on-line or off-line [2]. However, the collection of isobaric contaminants is unavoidable, which includes pseudo-isobars such as monoxide ions with the same total mass [3]. For example for $^{155}$Tb, it was found that the main impurity was $^{139}$Ce in the form of $^{139}$CeO+ [4]. Often these byproducts need to be chemically removed before the terbium isotopes can be used. It is therefore beneficial to optimize the production protocol such that these isobaric contaminants are minimized. One way is to select the most appropriate proton energy for the isotopes of interest, while minimizing molecular sidebands. Indeed a lower proton energy reduces the number of nucleons evaporated in the spallation process and limits production of Ce isotopes with respect to Tb isotopes. Unfortunately the cumulative spallation cross sections for some of the isotopes of interest are not well known or conflicting data exist in literature, e.g. for $^{149}$Tb [5,6] and $^{152}$Tb [6,7].
Here we present new measurements of cumulative cross sections for production of $^{149}$Tb, $^{152}$Tb, $^{155}$Tb and other nuclides from A=100 to 180 by proton-induced spallation of tantalum foil targets at different proton energies between 300 and 1700 MeV, using the COSY synchrotron at FZ Jülich.
[1] C Müller et al. J Nucl Med 2012;53:1951.
[2] RM dos Santos Augusto et al. Appl Sci 2014;4:265.
[3] S Kreim et al. Nucl Instrum Meth B 2013;317:492.
[4] C Müller et al. Nucl Med Biology 2014;41 Suppl:e58.
[5] L Winsberg et al. Phys Rev 1964;135:B1105.
[6] YuE Titarenko et al. Phys At Nucl 2011;74:551.
[7] R Michel et al. J Nucl Sci Technol 2002;39:242.
Speaker: Hannelore Verhoeven (KU Leuven (BE))
A NEW OFF-LINE ION SOURCE FACILITY AT IGISOL 1m
A new beamline for off-line ion sources has been commissioned at the IGISOL [1] (Ion Guide Isotope Separator On-Line) facility at the University of Jyväskylä, Finland. It allows parallel operation of off-line ion sources and production of radioactive ion beams while offering a flexible platform for producing a variety of stable ion beams. Parallel operation opens up a range of new possibilities for measurements at IGISOL. The new system has been used to provide doubly charged $^{89}\mathrm{Y}^{2+}$ ions for laser spectroscopy measurements during on-line operations [2] and singly charged $^{133}\mathrm{Cs}^{+}$ ions for off-line testing of a magneto-optical trap under development at IGISOL [3]. Ions for these measurements were produced using a glow discharge ion source and a surface ion source, respectively. The system will also be used to provide reference ions for on-line Penning trap mass measurements with JYFLTRAP [4] in the near future.
While the off-line ion source station is operational and has been used in several measurements, technical development of the system is still ongoing with the aim of increasing ion yields and the number of ion species available for experiments. The development effort has been mainly focused on the glow discharge ion source with the construction of a buffer gas purification system and presently ongoing design work of a new vacuum system.
In this contribution, the layout and technical details of the offline ion source facility at IGISOL will be given together with examples of its applications and future prospects.
[1] I.D. Moore, T. Eronen, D. Gorelov, J. Hakala, A. Jokinen, A. Kankainen, V.S. Kolhinen, J. Koponen, H. Penttilä, I. Pohjalainen, M. Reponen, J. Rissanen, A. Saastamoinen, S. Rinta-Antila, V. Sonnenschein, and J. Äystö. Towards commissioning the new IGISOL-4 facility. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 317:208 – 213, 2013. XVIth International Conference on ElectroMagnetic Isotope Separators and Techniques Related to their Applications, December 2–7, 2012 at Matsue, Japan.
[2] L. J. Vormawah, M. Vilén, et al. Isotope shifts from collinear laser spectroscopy of doubly-charged yttrium isotopes. Manuscript submitted for publication to Physical Review A, 2018.
[3] Luca Marmugi, Philip M. Walker, and Ferruccio Renzoni. Coherent gamma photon generation in a Bose–Einstein condensate of 135mCs. Physics Letters B, 777:281 – 285, 2018.
[4] T. Eronen, V. S. Kolhinen, V. V. Elomaa, D. Gorelov, U. Hager, J. Hakala, A. Jokinen, A. Kankainen, P. Karvonen, S. Kopecky, I. D. Moore, H. Penttilä, S. Rahaman, S. Rinta-Antila, J. Rissanen, A. Saastamoinen, J. Szerypo, C. Weber, and J. Äystö. JYFLTRAP: a Penning trap for precision mass spectroscopy and isobaric purification. The European Physical Journal A, 48(4):46, Apr 2012
Speaker: Markus Vilen (University of Jyväskylä)
Electron beam ion source for the re-acceleration of rare-isotope ion beams at TRIUMF 1m
TRIUMF is enhancing its rare isotope production capabilities by creating a new scientific infrastructure known as the Advanced Rare IsotopE Laboratory (ARIEL). A critical part of this expansion is the CANadian Rare-isotope facility with Electron-Beam ion source (CANREB) project, which combines a high-resolution separator, a gas-filled radiofrequency quadrupole (RFQ) cooler and buncher, a pulsed drift tube (PDT), an electron beam ion source (EBIS) charge-breeder, and a Nier-type magnetic spectrometer to deliver pure rare isotope beams for post-acceleration.
The CANREB-EBIS was developed at the Max-Planck-Institut für Kernphysik (MPIK) in Heidelberg and uses electron-beam-driven ionisation to produce highly charged ions (HCI) in a few well-defined charge states. Singly charged ions from the RFQ are injected into a longitudinal electrostatic trap and are then tightly confined radially by the space-charge potential of a maximally focussed electron beam. To date, the maximum electron beam current achieved is 1 A, with a density in excess of 5000 A cm$^{-2}$ obtained by means of a 6 T axial magnetic field. It is expected that during operation HCI bunches of up to 10$^{7}$ ions will be extracted at a repetition rate of 100 Hz with an A/Q in the range 4-7, as required for re-acceleration at the ARIEL or ISAC facility. We present here the CANREB-EBIS design and results from the commissioning runs at MPIK and TRIUMF, including X-ray diagnostics of the electron beam and charge-breeding process, as well as ion injection and HCI-extraction measurements.
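As a simple illustration of the A/Q requirement, an $A = 100$ ion (an arbitrary example mass) would have to be bred to charge states of roughly 14+ to 25+ to fall within the A/Q = 4-7 acceptance of the post-accelerator.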
Speaker: Dr Leigh Graham (TRIUMF)
First success of RI-beam separation and particle identification for nuclei with atomic number Z > 82 at RIKEN RI beam factory 1m
A wide variety of RI beams can be produced from the 345 MeV/u $^{238}$U primary beam at RIBF. The production of RI beams of heavy isotopes, especially those with Z > 82, becomes complicated and difficult because the charge state can change in any beam-line material at this energy. The RI-beam separation in the fragment separator is affected not only by Z and A, but also by the charge state in the separator. The purification of the RI beam deteriorates because a different RI beam is selected for each charge state, e.g. fully stripped or hydrogen-like ions.
We considered the RI-beam separation for the case in which many charge-state combinations are possible. It was found that the RI beam can be well purified against the main contaminants, the fission fragments and the primary beam, when the proper charge-state combination is selected.
The RI beams around $^{208}$Rn were produced from the $^{238}$U primary beam to verify this approach. The main contaminants were surprisingly well eliminated. Particle identification was also achieved from the measured information without a total kinetic-energy detector; the RI beams are thus ready for use in secondary-reaction studies. In this contribution, the principle of the RI-beam separation and the experimental details will be presented.
Speaker: Toshiyuki Sumikama (RIKEN)
Control systems for the CRIS experiment 1m
The collinear resonance ionization spectroscopy (CRIS) experiment at ISOLDE has grown over the years to include a multitude of devices and hardware. Control of these devices, and logging of the data that they produce, requires software that communicates across several computers, in different locations (beam line, laser laboratory, data center). In this poster, a schematic overview of the CRISTAL (CRIS Tuning, Acquisition and Logging) software will be presented.
The CRISTAL software is responsible for tuning and recording of the laser wavelengths, recording of transmission fringes produced by Fabry-Perot interferometers, control of high voltages for ion optics, logging of information regarding the proton-synchrotron booster, readout of Faraday cups and other charged-particle detectors, etc. Since these various activities are performed by different computers, the CRISTAL software is built upon a network communication protocol that ensures time synchronization and centralized data storage. Furthermore it features rich graphical user interfaces that allow for e.g. on-the-fly configuration and control.
New hardware is added easily with small plugin scripts, which means the CRIS experiment can be continuously upgraded. For example, a recent addition to CRIS was a new time-discrimination card with a 500 ps resolution. Using this card, the precise arrival time of laser-ionized ions can be recorded, which has added a new dimension to the datasets produced by CRIS. By exploiting this new information, dramatic improvements in resolution were obtained when using the (also new) laser ablation source. Examples of these time-of-flight lineshape reconstructions will be presented.
Speaker: Ruben Pieter De Groote (University of Jyvaskyla (FI))
Characterization of the AstroBox2 detector in online conditions 1m
The AstroBox2 detector [1] is a gas-filled calorimetric detector for almost background-free low-energy beta-delayed particle spectroscopy. It is an upgraded version of the original AstroBox proof-of-concept detector [2] based on Micro Pattern Gas Amplifier Detector (MPGAD) technology. After the initial commissioning described in [1], some extensive upgrades have been made in conjunction with the first physics experiments. A new gating grid covering the whole detector has been built and instrumented with a dedicated fast HV switch. The setup has been further instrumented with two different high-purity Ge detector setups for particle-delayed gamma detection. So far the beta decays of $^{20}$Na, $^{23}$Al, $^{25}$Si, $^{31}$Cl, $^{32}$Cl, and $^{35}$K have been studied with the setup. The diverse chemical nature of the studied isotopes, the beam rates, and the stability of the laboratory environment have been observed to influence the measured decay spectra. The physics results will be discussed elsewhere; here we present some of the results that can have an influence on other similar experiments with stopped rare-isotope beams and detectors relying on gas amplification.
[1] A. Saastamoinen et al., Nucl. Instrum. and Meth. in Phys. Res. B 376, 357 (2016).
[2] E. Pollaco et al., Nucl. Instrum. and Meth. in Phys. Res. A 723, 102 (2013).
Speaker: Dr Antti Saastamoinen (Cyclotron Insitute, Texas A&M University)
Control Systems for improved Laser Ion Sources 1m
Over the past decades, laser ion sources have proven to be selective and efficient ion sources for high-purity radioactive isotope and isomer beam research. Advanced control systems are a necessary tool for high-resolution measurements and easy control of the laser ion source. In this framework, a new control and data acquisition system has been developed at KU Leuven. Furthermore, accurate control of the frequency-selective elements in the laser is a requirement for a reproducible and reliable laser ion source.
The Heavy Elements Laser Ionization and Spectroscopy (HELIOS) project at KU Leuven has the goal of performing In-Gas Laser Ionization and Spectroscopy (IGLIS) measurements on the actinide and superheavy (transfermium) elements and around the $^{100}$Sn region. These studies will make it possible to deduce atomic and nuclear properties with high precision owing to an improved spectral resolution, down to 150 MHz (FWHM) for these elements. In these spectroscopic measurements, step-wise laser ionization of the isotopes of interest takes place in the supersonic jet formed by a de Laval nozzle installed at the gas cell exit.$^{1,2,3}$ The measurements also include isomeric beams, making use of the laser ionization mechanism.
A complete characterization of the in-gas-jet method can only be achieved when factors such as frequency and power instabilities of the lasers as well as the spectral linewidths are minimized, and when the timing for data acquisition of multiple systematic measurements can be synchronized. Therefore, a dedicated control system, the IGLIS Control System, has been developed at KU Leuven. The program enables the stabilization of the laser wavelength, reducing the laser frequency fluctuations from 50 MHz down to 7 MHz, limited only by the precision of the employed wavelength meter. This reduction in frequency fluctuations is necessary to accurately perform spectroscopy on resonance peaks with a Full Width at Half Maximum (FWHM) of the order of tens of MHz. Furthermore, the control program synchronizes the full command chain for several types of data acquisition, e.g. time-of-flight measurements for isotope separation in an Atomic Beam Unit (ABU), image acquisitions for Planar Laser Induced Fluorescence (PLIF) spectroscopy of the seeded atoms in the supersonic jet, and beam-line diagnostics. This synchronization makes it possible to increase the signal-to-noise ratio and to study systematic effects by comparing the results of PLIF spectroscopy with those obtained in the ABU. Recently, the IGLIS Control Software allowed us to perform first preliminary in-gas-jet laser ionization spectroscopy measurements on $^{63,65}$Cu.
In the laser ion source, the stability of the frequency-selective elements in the laser, e.g. etalons, strongly influences its reliability. Therefore, a full characterization of different types of motorized mounts for these frequency-selective elements was performed at RILIS$^{4}$, CERN, comparing a stepper motor, a galvanometer motor, an indirect piezo-controlled mount and a closed-loop direct-drive piezo mount. The presence of hysteresis in the movement of such mounts can result in non-reproducibility of the laser ion source. It has been found that the closed-loop direct-drive piezo mount ensures the most reliable and reproducible control of the frequency-selective elements, contributing to a more stable and reliable laser ion source.
In this presentation we discuss the improvements in reliability and accuracy that were achieved with the IGLIS Control software for the in-gas-jet laser ion source. Furthermore, the characterization of the improved mounts for the frequency-selective elements in lasers is discussed.
[1] Raeder, S. et al. (2016). Developments towards in-gas-jet laser spectroscopy studies of actinium isotopes at LISOL. Nuclear Inst. and Methods in Physics Research, B, 376, 382–387.
[2] Ferrer, R. et al. (2013). In gas laser ionization and spectroscopy experiments at the Superconducting Separator Spectrometer ( S 3 ): Conceptual studies and preliminary design. Nuclear Inst. and Methods in Physics Research, B, 317, 570–581.
[3] Kudryavtsev, Y. et al. (2016). A new in-gas-laser ionization and spectroscopy laboratory for off-line studies at KU Leuven. Nuclear Inst. and Methods in Physics Research, B, 376, 345–352.
[4] Valentin Fedosseev et al. (2017). Ion beam production and study of radioactive isotopes with the laser ion source at ISOLDE. J. Phys. G: Nucl. Part. Phys. 44 084006
Speaker: Mr Kristof Dockx (KU Leuven)
Narrowband pulsed dye amplification system for nuclear structure studies. 1m
Laser spectroscopy is a powerful and versatile technique for the study of nuclear ground-state properties [1]. The precision with which these nuclear properties can be extracted from the isotopic shifts and hyperfine structure of optical transitions is defined by the observable spectral line width. The latter depends on the different line-broadening mechanisms present under the conditions of the experiment, as well as on the laser line width. Lasers with extremely narrow emission line widths are therefore often required for the spectroscopic techniques used to study exotic nuclei [2]. Pulsed Ti:Sapphire (Ti:Sa) lasers with a ring-cavity design [3], seeded by a continuous-wave (cw) single-mode laser, are able to fulfill these requirements. However, the tunable range of the Ti:Sa lasing medium is limited to ~700-950 nm, which can be extended to blue and UV wavelengths using common higher-harmonic generation techniques. Complementing this are dye lasers, whose emission spectra, when pumped with 532-nm laser light, can cover the range of 540-900 nm and can also be extended to UV ranges in a similar fashion to Ti:Sa lasers, but with higher power. Pulsed amplification of a cw single-frequency dye laser in a dye cell pumped by a copper vapor laser has been demonstrated and successfully applied for resonance photoionization spectroscopy of radioisotopes in [4]. In this approach, the pulse length of the pumping laser determines the spectral width of the amplified radiation according to the Heisenberg uncertainty principle. Preliminary studies performed with Nd:YAG laser pumping suggest that narrowband pulsed-dye amplification suffers from sidebands in the amplified light due to the amplitude-modulation nature of the multimode pumping light. The characteristics of the system in terms of design, power and spectral width, as well as its application to the in-gas-jet laser ionization and spectroscopy technique [5], will be discussed. Future applications for two-photon spectroscopy and electron-affinity measurements will be presented.
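For orientation, the transform limit ties the achievable linewidth to the pump-pulse duration: for a Gaussian pulse $\Delta\nu \approx 0.44/\Delta t$, so an illustrative 30-ns amplification window (an assumed value, not a measured parameter of this system) corresponds to $\Delta\nu \approx 15\,\mathrm{MHz}$.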
[1] G. Neyens. Reports on Progress in Physics, 66:1251 (2003).
[2] Klaus Blaum, Jens Dilling, and Wilfried Nörtershäuser. Physica Scripta, T152 (2013).
[3] Volker Sonnenschein, PhD thesis, Jyväskylä (2015).
[4] V.I. Mishin et al., Ultrasensitive resonance laser photoionization spectroscopy of the radioisotope chain 157-172Tm produced by a proton accelerator, Sov. Phys. JETP 66, 235-242 (1987).
[5] R. Ferrer et. al. Nature Communications volume 8, 14520 (2017)
Speaker: Camilo Andres Granados Buitrago (KU Leuven (BE))
TOF/FFT Hybrid Mass Analysers 1m
As is well known, isochronous periodic structures (electric or magnetic) are used for mass measurements either as Time-of-Flight (TOF) mass analysers (MA) or as Fast Fourier Transform (FFT) mass analysers with image-charge detection. In this study we demonstrate that both operational modes can be implemented in a single, compact hybrid mass analyser. Such an instrument can be run in one of two complementary modes: multi-pass TOF with lower m/dm but faster mass analysis, or FFT mode with higher m/dm and slower analysis. Two examples are presented: (i) a multi-reflection coaxial mirror analyser, and (ii) a rotationally symmetric multi-turn sector-field analyser.
Analysers of the 1st type are widely used in nuclear physics experiments as MR-TOF instruments [1-5]. Many authors have also used similar systems as electrostatic ion traps with image-charge detection and FFT analysis [6-9]. In this work we describe a 400 mm long MR-TOF, which can work in two complementary modes: as an MR-TOF instrument with m/dm ~100 k (fwhm), or as an electrostatic ion trap with m/dm > 600 k (fwhm).
Analysers of the 2nd type [10] comprise a pair of polar-toroidal sectors S1 and S3, a toroidal sector S2 located at the mid-plane of the system, and lens electrodes for longitudinal and lateral focusing, each set of electrodes being mirror-symmetric with respect to the mid-plane. In the multi-turn TOF operational mode, drift-focusing segments are additionally used to provide focusing in the drift direction. It was demonstrated earlier that in the multi-turn TOF mode the analyser achieves at least ~200 k (fwhm) of m/dm [11]. In this work we present three similar analysers, with 500 mm, 250 mm and 120 mm diameters of the external electrode. The largest of the three is the most appropriate for use in the multi-turn TOF mode. Its simulated m/dm for 5 keV 400 Th ions is ~400 k (fwhm) at typical flight times of about 2.2 ms. The large size, however, makes it rather slow for running in the FFT mode. On the contrary, the smallest analyser is the fastest of the three and the most appropriate for use in the FFT-only mode. The 5th harmonic of the FFT signal provides m/dm of ~800 k (fwhm) after ~1 s of measurement time. In the multi-turn TOF mode its estimated m/dm is only ~15-20 k. The intermediate-size (hybrid) analyser demonstrates m/dm ~100 k (fwhm) in the multi-turn TOF mode and m/dm of ~800 k (fwhm) after ~2.1 s measurement time. It can be used in one of the two complementary modes, multi-turn TOF or FFT.
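These figures can be related through the usual time-of-flight expression $m/\Delta m \approx t/(2\Delta t)$: for the 500-mm analyser, a flight time of about 2.2 ms and $m/\Delta m \approx 400\,\mathrm{k}$ imply an effective peak width of $\Delta t \approx 2.2\,\mathrm{ms}/(2\times4\times10^{5}) \approx 2.8\,\mathrm{ns}$ (fwhm).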
Keywords: TOF mass spectrometry, FFT mass spectrometry, Mass spectrometers, Ion optics, Aberrations
[1] H. Wollnik and M. Przewloka. Int. J Mass Spectrom. Ion Process. 96 (1990) 267-74.
[2] W. R. Plaß, T. Dickel and C. Scheidenberger. Int. J. Mass Spectr. Ion Process. 349 (2013) 134-44.
[3] P. Schury, K. Okada, S. Shchepunov, T. Sonoda, A. Takamine, M. Wada, H. Wollnik and Y. Yamazaki. Eur. Phys. J. A 42 (2009) 343-349.
[4] R. N. Wolf, M. Errit, G. Marx and L. Schweikhard. Hyperfine Interact. 199 (2011) 115-22.
[5] A. Piechaczek, V. Shchepunov, H. K. Carter, J. C. Batchelder, E. F. Zganjar, S. N. Liddick, H. Wollnik, Y. Hu and B.O. Griffith. Nucl. Instr. and Meth. B 266 (2008) 4510-4514.
[6] W.H. Benner. Patent US005880466, 2 June 1997.
[7] C.D. Hanson. Patent US006013913, 6 February 1998.
[8] D. Zajfman, O. Heber, H. Pedersen, Y. Rudich, I. Sagi and M. Rappaport. Patent US2002190200 (A1), 18 June 2001.
[9] D. Zajfman, O. Heber, L. Vejby-Christensen, I. Ben-Itzhak, M. Rappaport, R. Fishman and M. Dahan. Phys. Rev. A 55 (1997) R1577.
[10] V. Shchepunov and R. Giles. Patent GB201118279A, 21 October 2011.
[11] V. Shchepunov, M. Rignall, R. Giles and H. Nakanishi. Shimadzu Review Vol. 72, No. 3・4(2015) 141.
Speaker: Dr Vyacheslav Shchepunov (Shimadzu Research Laboratory (Europe) Ltd)
First tests of a stabilised cw Ti:sapphire laser and new charge-exchange cell for collinear laser spectroscopy at IGISOL 1m
Collinear laser spectroscopy is a powerful tool for the study of fundamental properties of exotic nuclei via the measurement of the hyperfine structure and isotope shift of electronic transitions. This technique has been in use at the IGISOL facility, University of Jyväskylä, for over 20 years [1]. During this time, spectroscopic studies were primarily focused on singly-charged ions, and laser radiation was generated using a continuous-wave (cw) dye laser. To expand the region of elements that can be accessed, a new charge-exchange cell and a cw Ti:sapphire Matisse laser have recently been taken into use. This will allow access to atomic transitions and wavelengths not easily accessible to the cw dye laser.
To find the best way to achieve long-term frequency stabilisation of the cw Matisse laser, a saturated absorption spectroscopy setup using Rb or Cs as a reference frequency standard, a scanning Fabry-Perot interferometer (FPI) and a new WSU10 wavemeter (precision of 10 MHz) have been used. The setup was originally built to precisely determine the Free Spectral Range (FSR) of several FPIs [2]. This was motivated by the need to address systematic uncertainties in wavelength determination, initially identified in earlier resonance ionization spectroscopy studies of stable copper isotopes [3]. Stabilisation of the cw laser to a Rb hyperfine component and, separately, to the wavemeter has been performed and will be presented in this contribution. The results from saturated absorption spectroscopy on Rb and Cs will also be compared to the first collinear laser spectroscopy tests using the charge-exchange cell on these alkali elements.
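For reference, the FSR of a plane-parallel FPI is $\mathrm{FSR} = c/(2nL)$; for an illustrative 150-mm air-spaced etalon (an assumed length, not one of the FPIs studied here) this is about 1 GHz, and since a relative frequency scale is reconstructed as $\nu = \nu_{ref} + N\cdot\mathrm{FSR}$, any error in the FSR accumulates N times, which motivates the precise FSR determination mentioned above.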
[1] D.H. Forest and B. Cheal, Hyp. Int. 223 (2014) 207.
[2] S. Geldhof et al., Hyp. Int. 238 (2017) 7.
[3] V. Sonnenschein et al., Hyp. Int. 227 (2014) 113.
Speaker: Sarina Geldhof (University of Jyvaskyla (FI))
New Central module for the Modular Total Absorption Spectrometer 1m
The Modular Total Absorption Spectrometer (MTAS) has been in use at Oak Ridge since 2012. It consists of 18 NaI(Tl) hexagonal modules, each 21" long and 6.93" wide (side-to-side). There is also one central module of the same length and cross section, but with a 2.5" hole drilled through it. The crystals are arranged in a honeycomb-like structure. The radioactive samples to be measured are placed between two 1 mm thick silicon detectors in the geometrical center of the detector. The total active NaI(Tl) mass is approximately one ton, making MTAS the largest and most efficient detector of this type currently in use [1].
Apart from its large efficiency, the main advantage of MTAS is its modularity, which allows not only analysis of the summed gamma-energy signals (the standard total absorption data evaluation [2,3]), but also study of the intensities of individual gamma rays to confirm the decay-scheme assumptions made. Most of the individual gamma-ray analysis is based on the signals from all but the central detector. Unfortunately, this functionality is only efficient for higher-energy gamma transitions. Low-energy gamma rays are efficiently absorbed in the central detector and do not reach the other modules. Due to the almost $4\pi$ geometry of the central module, the energies deposited by the multiple gammas in a cascade are summed up, creating a TAS-like spectrum.
In order to overcome this limitation of MTAS a new central detector has been designed. The new module will be optically segmented into 6 independent pieces to allow for a more efficient analysis of low-energy gammas. This presentation will discuss the simulated impact of the new module on the efficiency of the detector as well as on the data analysis. If available, real performance data from the completed new central module will also be presented.
[1] M. Karny et al., Nucl. Instr. and Meth. A 836 (2016) 83-90.
[2] D. Cano-Ott et al., Nucl. Instr. and Meth. A430 (1999) 333-347,
[3] J. L. Tain et al., Nucl. Instr. and Meth. A571 (2007) 719-727,
Speaker: Marek Karny (University of Warsaw)
Application of the PyCAMFT code for the multi-component ion beam separation modeling 1m
The ion beams extracted from modern ion sources are usually characterized by complicated charge- and mass-state distributions of the particles. To accurately predict the behavior of an ion bunch with such a complicated structure in the magnetic field of the separator, the PyCAMFT code has been developed. The 3D code, written in Python, can treat various particle density and charge distributions, various bunch geometries, arbitrary initial bunch phase volumes and various field geometries. To provide high accuracy and a high calculation rate, parallel computing based on CUDA technology is implemented. These features also allow the code to be applied in the experiment automation system. The code has different built-in tools for 2D and 3D visualization. In this report the simulation of multi-component beam separation with PyCAMFT is discussed, and the calculated bunch parameters as well as integral radiation-dose distributions are presented.
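As an illustration of the kind of particle tracking involved (a minimal sketch only, not the actual PyCAMFT implementation; the field, ion species and bunch parameters below are assumed purely for the example), a Boris pusher applied to a two-component bunch in a uniform dipole field already shows how species with different mass-to-charge ratios separate:

    # Minimal sketch (not PyCAMFT): track a multi-component ion bunch in a
    # uniform magnetic field with a Boris pusher. All numerical values below
    # (field, energy, species, bunch size) are illustrative assumptions.
    import numpy as np

    AMU = 1.66053906660e-27   # kg
    QE  = 1.602176634e-19     # C

    def boris_push(r, v, q_over_m, B, dt, steps):
        """Advance positions r and velocities v (N x 3 arrays) in a static field B."""
        for _ in range(steps):
            t = 0.5 * dt * q_over_m[:, None] * B                  # rotation vector
            s = 2.0 * t / (1.0 + np.sum(t * t, axis=1))[:, None]
            v_prime = v + np.cross(v, t)
            v = v + np.cross(v_prime, s)                           # rotated velocity
            r = r + v * dt
        return r, v

    # Two-component bunch: hypothetical A/q = 100/1 and 102/1 ions
    N = 1000
    A = np.repeat([100.0, 102.0], N // 2)
    q_over_m = QE / (A * AMU)                                      # singly charged

    E_kin = 30e3 * QE                                              # assumed 30 keV
    speed = np.sqrt(2.0 * E_kin / (A * AMU))
    rng = np.random.default_rng(0)
    r0 = rng.normal(scale=1e-3, size=(N, 3))                       # 1 mm rms bunch
    v0 = np.zeros((N, 3)); v0[:, 0] = speed                        # launched along x

    B = np.array([0.0, 0.0, 0.1])                                  # assumed 0.1 T field
    r1, v1 = boris_push(r0, v0, q_over_m, B, dt=1e-9, steps=5000)

    # The two species end up at different mean transverse positions (mass dispersion)
    for mass in (100.0, 102.0):
        sel = A == mass
        print(f"A = {mass:.0f}: mean y = {r1[sel, 1].mean()*1e3:7.2f} mm")

The production code described above extends this basic loop to realistic field geometries, arbitrary bunch distributions and CUDA-parallel evaluation.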
Speaker: Dr Helen Barminova (NRNU MEPhI)
On-line and off-line EMIS for production of medical and industrial radionuclide and radiotracer generators 1m
Radionuclides are extensively used in the medical field both for diagnostic (tracer) and therapeutic purposes. The requirements concerning half-life vary from a few seconds to a few days, and the desired radiation properties range from simple low-energy gamma emitters and positron emitters for diagnostic purposes to beta, auger electron and alpha emitters for therapy.
Industrial use includes both beta and gamma emitters mainly for tracing purposes. Half-lives and radiation characteristics may be different from those of the medical nuclides: Some applications require half-lives of months to years (extended reservoir examinations) while other applications can only utilize short-lived radiotracers with half-lives of minutes to hours (industrial process monitoring). Further in industrial applications, higher-energy gamma radiation (> 1 MeV) as well as multi-gamma emission is useful, especially in process monitoring.
Some of the interesting radionuclides can be produced in reactors and small-size particle accelerators via low-energy fission, simple absorption, transfer or knock-on reactions, while others require high-energy fission, spallation and fragmentation reactions. To extend the region of use beyond the immediate surroundings of such production facilities (due to half-life limitations), radionuclide and radiotracer generators based on a long-lived mother and a shorter-lived daughter are now being developed extensively. The short-lived radiotracer can thereby be produced on site and on demand.
Both the medical and industrial application area require high radiochemical purity. One of the best ways to avoid cumbersome work-up and purification procedures is to make use of EMIS after (or during) irradiation of a suitable target material. In the best cases, isotopically pure products may be collected for direct labelling of various defined chemical or biochemical compounds.
This presentation will describe mother-daughter nuclear relationships of interest to these two application areas. Additionally, examples are given on how these may be produced in an affordable way by selecting a proper target material and involving EMIS in the process. Furthermore, examples are sketched of some possible generator types and systems and how they may be operated.
Speaker: Prof. Sunniva Siem (University of Oslo)
Low energy nuclear structure spectrometer specific to multinucleon transfer reactions at HIAF 1m
The study of the nuclear structure and exotic decay properties of neutron-rich isotopes is nowadays an important subject in nuclear physics research. To date, using nuclear fusion-evaporation reactions, projectile fragmentation, proton (neutron)-induced fission, and spontaneous fission, we can only produce neutron-rich isotopes with a small charge number Z. For significantly more neutron-rich isotopes with higher Z > 70, there is no appropriate production method except the multinucleon transfer reaction, which is believed to be the most promising way to produce those neutron-rich isotopes.
At the ongoing large-scale scientific project HIAF (High Intensity heavy-ion Accelerator Facility), a low-energy nuclear structure spectrometer dedicated to multinucleon transfer reactions is being designed and constructed. With this spectrometer, the research will be concentrated on the synthesis and identification of new neutron-rich nuclides, and on the study of their nuclear structure and decay properties. Unlike fusion-evaporation and projectile-fragmentation products, which are emitted near 0$^\circ$ in the forward direction in the laboratory frame, the outgoing angles of the products from multinucleon transfer reactions cover a wide range of 25$^\circ$ - 80$^\circ$, making it very difficult to collect and separate the products of interest.
In the conference, the motivation, conceptual design and working principle of this spectrometer will be introduced. Computer simulation results and mechanical considerations will also be presented.
Speaker: Prof. Wenxue Huang (Institute of Modern Physics, Chinese Academy of Sciences)
Prospects for the production of 100Sn ISOL beams at HIE-ISOLDE 1m
The region around doubly magic isotopes, such as 100Sn and 132Sn, has attracted large interest in nuclear structure and physics studies, for which intense and high-quality beams are still required, as documented in the Long Range Plan published by NuPECC [1]. While 132Sn beams and beyond have been available at ISOL facilities for many years, both at low energy and as post-accelerated beams, 100Sn has proven to be much more challenging, with only a few 101Sn/min being produced at GSI-ISOL [2]. In-flight fragmentation facilities at GANIL, GSI and RIKEN provide relativistic 100Sn beams at rates between less than one and a few ions per hour. In the future, up to a few ions/s is foreseen at FRIB [3]. ISOLDE has so far been limited to 104Sn, produced from LaCx targets with RILIS ionization and measured at a rate of 2000 ions/s in 2017.
The production of 100Sn beams by the ISOL technique has not been possible due to the lack of a suitable primary beam driver and target-ion source unit for any of the present-day facilities.
We review here the techniques suitable for the production of 100Sn beams at HIE-ISOLDE and propose an option based on a high-power molten lanthanum target combined with molecular tin formation and a FEBIAD ion source. The envisaged options take into consideration upgrade scenarios of the primary beam at HIE-ISOLDE, going from a 1.4 GeV - 2 μA to a 2 GeV - 6 μA pulsed proton beam [4]. Details on achievable 100Sn beam intensities and purities will be provided, based on in-target production rates simulated with ABRABLA and FLUKA, tin release characteristics and molecular tin compound formation available from past experimental investigations. Progress in the development of a high-power molten metal target for the production of ISOL beams will finally be described, completing the set of data required to trigger the development of an ISOL beam of 100Sn [5].
[1] NuPECC Long Range Plan 2017 Perspectives in nuclear physics, http://www.esf.org/fileadmin/user_upload/esf/Nupecc-LRP2017.pdf, accessed March 2018.
[2] U. Koester et al., NIM B 266, 4229 (2008).
[3] https://groups.nscl.msu.edu/frib/rates/2017/, accessed March 2018
[4] R. Catherall et al., J. Phys. G: Nucl. Part. Phys. 44, 094002 (2017).
[5] T. M. Mendonca, High Power Molten Targets for Radioactive Ion Beam Production: from Particle Physics to Medical Applications, CERN-ACC-2014-0183 (2014).
Speaker: Thierry Stora (CERN)
The LIEBE high-power target: Offline commissioning results. 1m
With the aim of increasing the primary beam intensity in the next generation of Radioactive Ion Beam facilities, a major challenge is the production of targets capable of dissipating the high deposited beam power. In that context, LIEBE is a high-power target dedicated to the production of short-lived isotopes.
The design consists of a loop of molten lead-bismuth eutectic, in which the deposited primary beam power is dissipated by a water-cooled heat exchanger. The circulation of the liquid metal is achieved by an electromagnetic pump coupled to the loop. Additionally, the target includes a diffusion chamber next to the irradiation chamber to promote the creation of droplets through a grid. The extraction of short-lived isotopes is then enhanced by the shorter diffusion paths of the droplets compared to the ones of a liquid bath.
The LIEBE prototype is now fully assembled and, before operating the target online at ISOLDE, the safety and operating conditions have to be reviewed. An offline commissioning phase has started, in which several non-conformities could be identified and solved. The flow established by the electromagnetic pump has been evaluated in a LIEBE replica, the stability of the target/pump coupling has been assessed through alignment and vibration measurements, and the thermal control system has been tested. The final test foresees full operation of the prototype on the offline isotope separator.
Speaker: Ferran Boix Pamies (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
Target materials for the ARIEL era at TRIUMF 1m
The Advanced Rare IsotopE Laboratory (ARIEL) is under construction at TRIUMF. ARIEL will add two additional ISOL target stations; one will accept a 100 kW electron driver beam and the other a 50 kW proton beam. These target stations are in addition to the two that are currently operated at TRIUMF's ISAC facility. Once ARIEL is fully operational, an estimated 9000 Radioactive Ion Beam hours will be available to experimental users at TRIUMF each year.
To meet the demands of the ARIEL era, a fourfold increase in target material production is required. Additionally, a target material development program is needed to optimize the target materials for photofission at the target station for the electron driver beam.
Tests have been performed using a modified methodology to accelerate the current uranium carbide target material production. The resultant target material has been characterized by XRD and SEM. From these analyses, we have found that the composition and morphology of the target material obtained with the new methodology are in agreement with those of the targets used on-line. Additional tests are ongoing, with a planned on-line test at the end of this year. The latest results from these developments will be presented.
Micro-structured uranium carbide pellets are planned to be developed for the photofission target material. Lanthanum carbide pellets were produced to investigate production methods, the next step is to perform tests with uranium carbide to characterize the resultant material. The development plan will be outlined together with the results from the pellet tests with lanthanum carbide.
Speaker: Marla Cervantes (UVIC/TRIUMF)
FRS Ion Catcher: Results and Perspectives 1m
The FRS Ion Catcher experiment at GSI enables precision experiments with projectile and fission fragments. The fragments are produced at relativistic energies in the target at the entrance of the fragment separator FRS, spatially separated and energy-bunched in the FRS, and slowed down and thermalized in a cryogenic stopping cell (CSC). A versatile RFQ beamline and diagnostics unit and a high-performance multiple-reflection time-of-flight mass spectrometer (MR-TOF-MS) enable a variety of experiments, including high-precision mass measurements, isomer measurements and mass-selected decay spectroscopy. At the same time, the FRS Ion Catcher serves as a test facility for the Low-Energy Branch of the Super-FRS at FAIR.
In five experiments with $^{238}$U and $^{124}$Xe projectile and fission fragments produced at energies in the range from 300 to 1000 MeV/u the performance of the CSC has been characterized. The stopping and extraction efficiencies, the extraction times and the rate capability have been determined, and the charge states and the purity of the extracted ions have been investigated. Based on these studies, a novel concept for the CSC for the LEB has been developed. High-accuracy mass measurements of more than 40 projectile and fission fragments have been performed at mass resolving powers up to 450,000 with production cross-sections down to the microbarn-level and at rates down to a few ions per hour. A novel data analysis method for MR-TOF-MS measurements on rare nuclides has been developed, achieving mass accuracies as good as $6 \cdot 10^{-8}$. Access to millisecond nuclides has been demonstrated by the first direct mass measurement and mass-selected half-life measurement of $^{215}$Po (half-life: 1.78 ms). The versatility of the MR-TOF-MS for isomer research has been demonstrated by the measurements of 15 isomers, determination of excitation energies and the production of an isomeric beam. The isotope-dependence of proton-rich indium isomers has been measured. The determination of isomeric ratios gives access to the study of the mechanisms of projectile fragmentation and fission.
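As a back-of-the-envelope illustration of the figures quoted above (only the order of the resolving power is taken from the abstract; all other numbers are assumed), the mass resolving power of a time-of-flight measurement and a single-reference mass determination can be sketched as follows:

    def resolving_power(t_flight, fwhm_t):
        # Mass resolving power of a TOF measurement: R = m/dm = t / (2*dt).
        return t_flight / (2.0 * fwhm_t)

    def mass_from_tof(t, t_ref, m_ref, t0=0.0):
        # Single-reference TOF mass relation m = m_ref * ((t - t0)/(t_ref - t0))**2,
        # assuming the same charge state and number of turns, with t0 a known constant offset.
        return m_ref * ((t - t0) / (t_ref - t0)) ** 2

    # A ~10 ms flight time with an ~11 ns peak width gives the order of R quoted above:
    print(f"R ~ {resolving_power(10e-3, 11e-9):,.0f}")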
An overview of the latest results and proposed experiments to be carried out with the FRS Ion Catcher during the upcoming beam time period 2018 - 2019 covering mass measurements, beta-delayed neutron emission probabilities and reaction studies with multi-nucleon transfer will be presented.
Speaker: Emma Haettner (GSI Helmholtzzentrum für Schwerionenforschung GmbH)
Development of an Ba++ ion source for the Barium Tagging Program of the NEXT experiment 1m
Double beta decay in Xe-136 results in the production of a barium ion. In the gas phase, a Ba++ ion is expected to be produced. Tagging Ba++ thus becomes an unmistakable signature of the decay and can lead to a background-free neutrinoless double beta decay experiment. In this poster a Ba++ ion source based on a fs laser is presented. Such a source can be used as part of the Barium Tagging program of the NEXT experiment.
Speaker: Dr Juan Jose Gomez Cadenas (DIPC)
Session 5 - Instrumentation for radioactive ion beam experiments 500/1-001 - Main Auditorium
Convener: Hideyuki Sakai (RIKEN)
Recent progress and developments for experimental studies with the SAMURAI spectrometer 30m
The SAMURAI spectrometer has been designed for various types of experimental studies using highly intense beams of exotic nuclei provided by the BigRIPS fragment separator at the RI Beam Factory (RIBF). SAMURAI consists of a large-gap superconducting dipole magnet equipped with heavy-ion detectors, a large-volume neutron detector array NEBULA, and proton detectors. Since the construction was completed, many experimental studies and developments have been carried out. In addition to the standard detectors, several other experimental devices have been installed. For instance, a prototype of the large neutron detector array NeuLAND developed at GSI, called the NeuLAND demonstrator, was installed at the SAMURAI experimental area to improve the neutron detection efficiency in combination with NEBULA. Thanks to the high neutron detection efficiency with the intense RI beams at RIBF, the setup enabled us to carry out several pioneering studies such as invariant-mass spectroscopy of the unbound nucleus 28O (Z=8, N=20), which requires detection of four neutrons in coincidence. Other detectors have also been developed. In the presentation, recent progress of the SAMURAI spectrometer, developments of experimental devices, and future prospects will be shown and discussed.
Speaker: Yosuke Kondo
Study of spin-isospin responses of radioactive nuclei with background free neutron spectrometer, PANDORA 20m
The $(p,n)$ reactions in inverse kinematics provide a unique tool to study the spin-isospin responses of radioactive nuclei, including their giant resonances, over a wide excitation energy region. In particular, high luminosity can be achieved using a thick hydrogen target without losing information on the recoil neutron momentum used for the missing-mass reconstruction [1]. As a side effect in these measurements, a background of gamma rays overlaps with the low-energy neutrons, making it difficult to separate and efficiently tag the reaction channel. The existing neutron spectrometers used for measuring the time-of-flight (ToF) of recoil neutrons are not able to provide online particle identification. A new low-energy neutron spectrometer with digital readout, PANDORA (Particle Analyzer Neutron Detector Of Real-time Acquisition), was developed [2] for real-time neutron-gamma discrimination. PANDORA consists of plastic scintillator bars with pulse shape discrimination capability coupled to photomultiplier tubes.
After an overview of the pulse shape discrimination method, the evaluation of our programmed digital pulse-processing mode will be presented. The quality (figure-of-merit) of the neutron and gamma peak separation of PANDORA will also be discussed. Using PANDORA, the gamma-ray background is reduced by one order of magnitude.
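For orientation, the figure-of-merit mentioned above is conventionally defined as the separation of the neutron and gamma PSD peaks divided by the sum of their widths; a minimal sketch with toy (assumed) charge-comparison distributions, not PANDORA data:

    import numpy as np

    def figure_of_merit(psd_neutrons, psd_gammas):
        # FoM = |mean_n - mean_g| / (FWHM_n + FWHM_g), with FWHM = 2.355 * sigma for Gaussian peaks.
        fwhm = lambda x: 2.355 * np.std(x)
        return abs(np.mean(psd_neutrons) - np.mean(psd_gammas)) / (fwhm(psd_neutrons) + fwhm(psd_gammas))

    rng = np.random.default_rng(0)
    gammas   = rng.normal(0.15, 0.02, 10_000)   # toy tail-to-total PSD values
    neutrons = rng.normal(0.30, 0.03, 10_000)
    print(f"FoM = {figure_of_merit(neutrons, gammas):.2f}")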
PANDORA and the digital data acquisition were commissioned in December 2017 at the HIMAC facility in Chiba. We successfully identified the Gamow-Teller transitions of $^6$He in the inverse-kinematics $(p,n)$ reaction at 123 MeV/nucleon incident energy using a polyethylene target. In this talk, the properties of PANDORA, details of the experimental setup and the intelligent triggering will be reported, as well as a brief overview of our whole experimental program [3] at RIKEN RIBF aiming to study the spin-isospin responses of light nuclei along the neutron drip line.
[1] M. Sasano et al., Phys. Rev. Lett. 107, 202501 (2011).
[2] L. Stuhl et al., Nucl. Instr. Meth. A 866, 164 (2017).
[3] L. Stuhl et al., RIKEN Accelerator Progress Report 48, 54 (2015).
Speaker: Dr Laszlo Stuhl (Center for Nuclear Study, University of Tokyo)
New energy-degrading beam line for in-flight RI beams, OEDO 20m
The OEDO system was proposed to produce focused slow-down radioactive-ion (RI) beams in RIBF, and has been installed in the High-Resolution Beamline (HRB) in the end of fiscal year 2016.
Generally, at a momentum-dispersive focal plane there is a strong correlation between the position and the timing of the beam. The OEDO system was designed to tune energy degrading and beam focusing of the RI beams separately by exploiting this property of the dispersive focus. To obtain a mono-energetic beam, a wedge-shaped degrader at a dispersive focus is an efficient tool. However, the wide beam size at the dispersive focus becomes a drawback when a small spot is required at a downstream focus for experimental measurements. We developed a new ion-optical scheme in which the time-of-flight difference at the dispersive focus is utilized for beam focusing, in parallel with the use of a mono-energetic degrader for beam-energy compression.
The main components of the OEDO system are a radio-frequency deflector (RFD) synchronized with the accelerator cyclotron's RF and two sets of superconducting triplet quadrupole (STQ) magnets. The OEDO configures STQ-RFD-STQ on the straight beamline. At the entrance of the OEDO, the ion optics of the HRB is tuned to a momentum-dispersive focus of approximately 10 mm/%, and a mono-energetic Al degrader is located there to slow down the RIs to less than 50 MeV/u. The first STQ provides point-to-parallel transport, resulting in a strong correlation between the angular and time components of the beam. The second STQ works as the inverse transformation of the first one. The RFD, located between the two STQs, periodically changes the RIs' horizontal angles in order to align them in parallel. The aligned RIs are focused at the exit of the OEDO system through the parallel-to-point optics of the second half of the system.
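A rough, first-order sketch of the wedge-degrader matching condition behind the mono-energetic degrader mentioned above; only the 345 MeV/u primary energy and the ~10 mm/% dispersion are taken from the abstract, and the change of stopping power through the degrader is ignored:

    # Dispersion maps momentum deviation to position, x = D * (dp/p in %).
    D = 10.0                                  # mm per % of momentum deviation
    T = 345.0                                 # MeV/u kinetic energy before the degrader
    gamma = 1.0 + T / 931.494
    dT_per_percent_dp = T * (gamma + 1.0) / gamma / 100.0   # MeV/u per 1 % of dp/p

    # The energy-position correlation cancels when the wedge removes this much extra
    # energy per mm of displacement at the dispersive focus:
    required_slope = dT_per_percent_dp / D
    print(f"required wedge gradient ~ {required_slope:.2f} MeV/u per mm")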
The commissioning of the OEDO beamline was performed in June 2017, and we successfully confirmed energy-degraded RI beams focused by the OEDO scheme.
In the commissioning run, we have produced long-lived fission products $^{79}$Se and $^{107}$Pd at around 40 MeV/u from a 345-MeV/u $^{238}$U beam. The slow-down $^{79}$Se beam obtained by the OEDO was at $45 \pm 2$ MeV/u, and its spot size at the secondary target position was 15 mm in FWHM.
In this presentation, we will show the details of the ion-optical design and the achieved performance of the OEDO system. We will also discuss upcoming physics experiments and physics plans developed at the OEDO beamline.
This work was funded by ImPACT Program of Council for Science, Technology and Innovation (Cabinet Office, Government of Japan).
Speaker: Dr Shin'ichiro Michimasa (Center for Nuclear Study, the Univ. of Tokyo)
Status and future plans for MRTOF mass measurements at RIKEN-RIBF 20m
The Wako nuclear science center (WNSC), a collaboration between RIKEN and KEK, has directly measured the masses of more than 80 isotopes. We have recently performed the first mass measurements of several Md isotopes [1] along with other rare species such as Ac/Ra isotopes [2] using a multi-reflection time-of-flight spectrograph (MRTOF-MS) coupled to the gas-filled recoil ion separator GARIS-II [3]. With the MRTOF-MS coupled to GARIS-II at a new location (RRC accelerator) we will next aim to directly determine atomic numbers and masses of $^{284}$Nh and $^{288}$Mc. Additionally, we are developing several more MRTOF-MS devices to perform mass measurements of the most exotic species. As part of the SLOWRI facility we will implement MRTOF for both mass measurement and beta-delayed neutron multiplicity studies of value to r-process studies. As part of the KEK Isotope Separation System we are implementing a miniature MRTOF-MS for mass measurements of N$\approx$162 isotopes below Pt. A new MRTOF-MS behind the zero-degree spectrometer at RIBF is also being planned for use in symbiotic operation with other experiments focussing on neutron-rich nuclides. In this contribution an overview of the status and the future plans for low-energy precision mass measurements by WNSC will be provided.
[1] Y. Ito et al., Phys. Rev. Lett., accepted.
[2] M. Rosenbusch et al., Phys. Rev. C, under review
[3] P. Schury et al., Nucl. Instr. Meth. B 335, 39 (2014)
Speaker: M. Rosenbusch
Session 6 - Ion traps and laser techniques 500/1-001 - Main Auditorium
Convener: Georg Bollen (Michigan State University)
The elusive 229-Thorium isomer: On the road towards a nuclear clock 30m
Today's most precise time and frequency measurements are performed with optical atomic clocks. However, it has been proposed that they could potentially be outperformed by a nuclear clock, which employs a nuclear transition instead of an atomic shell transition. There is only one known nuclear state that could serve as a nuclear frequency standard using currently available technology, namely the isomeric first excited state of 229Th. For more than 40 years, nuclear physicists have targeted the identification and characterization of the elusive isomer-to-ground-state transition of 229mTh. Until recently, evidence for its existence could only be inferred from indirect measurements, suggesting an excitation energy of 7.8(5) eV. Thus the first excited state in 229Th represents the lowest nuclear excitation reported so far in the whole landscape of known isotopes. Recently, the first direct detection of this nuclear state was realized via its internal conversion decay branch [1], which confirms the isomer's existence and lays the foundation for precise studies of its properties. Subsequently, the half-life of neutral 229mTh could be measured [2] and its hyperfine structure was resolved via collinear laser spectroscopy [3]. An optical excitation scheme based on existing laser technology [4] as well as a measurement scheme for the isomeric excitation energy [5] have been developed. This would pave the way towards all-optical control and thus the development of an ultra-precise nuclear frequency standard. Moreover, a nuclear clock promises intriguing applications in applied as well as fundamental physics, ranging from geodesy and seismology to the investigation of possible time variations of fundamental constants.
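For orientation only, converting the quoted 7.8(5) eV excitation energy into a photon wavelength shows why this transition is considered within reach of existing laser technology:

    HC_EV_NM = 1239.84198   # h*c in eV*nm

    def wavelength_nm(energy_eV):
        # Photon wavelength corresponding to a given transition energy.
        return HC_EV_NM / energy_eV

    print(f"{wavelength_nm(7.8):.0f} nm")   # ~159 nm, i.e. in the vacuum ultraviolet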
[1] L. v.d. Wense et al., Nature 533, 47-51 (2016).
[2] B. Seiferle, L. v.d. Wense, P.G. Thirolf, Phys. Rev. Lett. 118, 042501 (2017).
[3] J. Thielking et al., Nature, in press (2018).
[4] L. v.d. Wense et al., Phys. Rev. Lett. 119, 132503 (2017)
[5] B. Seiferle, L. v.d. Wense, P.G. Thirolf, Eur. Phys. Jour. A 53, 108, (2017).
Speaker: Dr Peter Thirolf (Ludwig-Maximilians-Universität München)
Phase-Imaging Ion-Cyclotron-Resonance measurements at JYFLTRAP 20m
The studies of short-lived nuclides, far from the valley of stability, require fast and precise mass measurements to elucidate fundamental nuclear properties related to the nuclear mass and binding energy. Many exotic nuclides have isomeric states; therefore, it is necessary to have a high resolving power, sufficient for their separation. The Phase-Imaging Ion-Cyclotron-Resonance (PI-ICR) technique, where the radial ion motion in a Penning trap is projected onto a position-sensitive detector [1], can be used for the separation of states with an energy difference of a few tens of keV in singly-charged ions with half-lives of several hundred milliseconds. The PI-ICR method, implemented at the Penning-trap mass spectrometer JYFLTRAP [2], in combination with the conventional Time-of-Flight Ion-Cyclotron-Resonance (ToF-ICR) technique, allows the exploration of short-lived nuclides for the purposes of nuclear physics, astrophysics, fundamental tests for physics beyond the Standard Model and for rare or weak decays. The PI-ICR method has been used for the identification of isomeric states in $^{88}$Tc and $^{76}$Cu, and for mass measurements of $^{88m}$Tc and $^{48}$Mn at JYFLTRAP. The phase-dependent cleaning method for preparing isomerically pure beams was developed at JYFLTRAP and demonstrated for the ions $^{127m}$Cd$^+$ and $^{127}$Cd$^+$. This newly developed technique provides new opportunities for post-trap decay spectroscopy measurements. Isotopic yield ratio (IYR) measurements in proton-induced fission of natural uranium using the PI-ICR technique at JYFLTRAP have been performed for the first time. The advantage of the PI-ICR method in the IYR determination is that the measurement is done through direct ion counting, which makes it chemically independent and independent of the knowledge of the decay scheme.
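As a schematic illustration of the PI-ICR principle (assumed numbers, not JYFLTRAP data), the cyclotron frequency follows from the phase accumulated during a known time, and the mass then follows from the ideal-trap relation, in practice calibrated with a reference ion:

    import math

    Q_E = 1.602176634e-19   # C
    U   = 1.66053906660e-27 # kg per atomic mass unit

    def cyclotron_frequency(phase_rad, n_full_turns, t_acc):
        # PI-ICR: nu_c = (n + phi/2pi) / t_acc, with n the known integer number of turns.
        return (n_full_turns + phase_rad / (2 * math.pi)) / t_acc

    def mass_from_frequency(nu_c, B, q=Q_E):
        # Ideal Penning-trap relation nu_c = q*B/(2*pi*m); real measurements replace B by a reference ion.
        return q * B / (2 * math.pi * nu_c)

    nu = cyclotron_frequency(phase_rad=2.1, n_full_turns=535, t_acc=500e-6)
    print(f"nu_c ~ {nu/1e6:.3f} MHz -> m ~ {mass_from_frequency(nu, B=7.0)/U:.1f} u")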
[1] S. Eliseev et al., Phys. Rev. Lett. 110, 082501 (2013).
[2] T. Eronen, et al., Eur. Phys. J. A 48, 46 (2012).
Speaker: Dr Dmitrii Nesterenko (University of Jyväskylä)
First application of TITAN's newly installed MR-TOF-MS: Investigating the N = 32 neutron shell closure 20m
TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN) located at the Isotope Separator and Accelerator (ISAC) facility, TRIUMF, Vancouver, Canada is a multiple ion trap system capable of performing high-precision mass measurements and in-trap decay spectroscopy. In particular TITAN has specialised in fast Penning trap mass spectrometry of singly-charged, short-lived exotic nuclei using its Measurement Penning Trap (MPET). Although ISAC can deliver high yields for some of the most exotic species, many measurements suffer from strong isobaric background. In order to overcome this limitation an isobar separator based on the Multiple-Reflection Time-Of-Flight Mass Spectrometry (MR-TOF-MS) technique has been developed and recently installed at TITAN. Mass selection is achieved using dynamic re-trapping of the ions of interest after a time-of-flight analysis in an electrostatic isochronous reflector system.
After a first commissioning with stable beam from ISAC in mid-2017, the MR-TOF-MS was employed in a measurement campaign aiming to investigate the evolution of the N = 32 neutron shell closure. This shell closure forms several neutrons away from stability and has been established in neutron-rich K, Ca and Sc isotopes, whereas in V and Cr no shell effects are found, leaving the intermediate Ti isotopes as the ideal test case for state-of-the-art ab-initio shell model calculations. High-precision mass measurements with TITAN's MPET, and for the first time with the MR-TOF-MS, were able to prove the existence of a weak shell closure in Ti and a quenching of the shell in V. These findings challenge modern ab initio theories, which overpredicted the strength and extent of this weak N = 32 shell closure.
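The connection between measured masses and the shell closure discussed above is usually made through two-neutron separation energies and the empirical shell gap; a minimal sketch, where the mass-excess table M is a placeholder rather than the TITAN data:

    NEUTRON_MASS_EXCESS = 8.07131806   # MeV

    def s2n(M, Z, N):
        # Two-neutron separation energy from mass excesses (MeV): S_2n = 2*D_n + D(Z, N-2) - D(Z, N).
        return 2 * NEUTRON_MASS_EXCESS + M[(Z, N - 2)] - M[(Z, N)]

    def empirical_shell_gap(M, Z, N):
        # D_2n(Z, N) = S_2n(Z, N) - S_2n(Z, N+2); a local maximum at N signals a (sub)shell closure.
        return s2n(M, Z, N) - s2n(M, Z, N + 2)

    # M would map (Z, N) to measured mass excesses in MeV, e.g. for the Ti (Z = 22) chain around N = 32.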
Being able to resolve all isobars at the same time, the new MR-TOF-MS has become a routine device during TITAN beam times; it is used for real-time determination of the radioactive beam composition and optimization of the ISAC mass separator, for precision mass measurements, and soon for isobar separation.
We will discuss our recent mass measurements of singly charged ions making use of MPET and the new MR-TOF-MS as well as technical and operational details of the new device and perspectives for future mass measurements of short-lived isotopes at TITAN.
Speaker: Moritz Pascal Reiter (TRIUMF, JLU-Giessen)
Penning-Trap Mass Spectrometry of the Heaviest Elements with SHIPTRAP 15m
The quest for the heaviest element is at the forefront of nuclear physics. Superheavy elements (SHE), with 104 protons (Z) or more, owe their very existence to an enhanced stability resulting from nuclear shell effects. High-precision Penning-trap mass spectrometry (PTMS) is an established tool for investigations of nuclear structure-related properties, reflected in binding energy differences, for example two-nucleon separation energies [1]. Although elements up to oganesson (Z = 118) have been discovered, detailed studies of the elements with Z > 110 are hampered by low statistics due to low production cross sections on the order of picobarns. However, the use of PTMS in the region of Z > 100 provides indispensable knowledge on single-particle orbitals and pairing correlations affecting the properties of the heaviest elements. Furthermore, masses of anchor points for alpha-decay chains and benchmarks for theoretical models are obtained.
Pioneering experiments with SHIPTRAP, located behind the velocity filter SHIP at GSI in Darmstadt, Germany, have demonstrated that direct measurements of the heaviest elements are feasible at the lowest yields [2,3]; in the case of $^{256}$Lr (Z = 103), with a cross section of 60 nb, only about one $^{256}$Lr ion every two hours was detected behind the trap. Recent developments of the setup allow pushing these limits to even heavier and more exotic nuclei in the upcoming beam time periods at GSI in 2018/19. The implementation of a cryogenic gas-catcher [4] increases the stopping, thermalization and extraction efficiency by almost one order of magnitude and was recently integrated in the relocated experimental setup. This will allow directly measuring $^{254}$Lr and the SHE isotope $^{257}$Rf (Z = 104) for the first time and extending the nuclear shell evolution studies at N = 152 [3]. In addition, anchor points in odd-A and odd-odd nuclides in this mass region will be obtained, affecting the masses of elements up to darmstadtium (Z = 110).
The development of the Phase-Imaging Ion-Cyclotron-Resonance technique at SHIPTRAP [5], the new standard in online PTMS worldwide, significantly increases the mass resolving power, precision and detection sensitivity compared to the previously used techniques. This will allow simultaneous measurements of ground and low-lying isomeric states of the heaviest elements, which are difficult to access by any other method. The precise determination of their excitation energies allows studying pairing correlations and single-particle energies that are responsible for the spherical shell gap at Z = 114, and thus gives significant input to nuclear models predicting the so-called island of stability.
To reach the ultimate goal to perform direct mass spectrometry on heavier elements for which the yields are smaller and only single ions are available, a second dedicated setup is being developed in parallel to the ongoing online mass measurement activities to adapt the non-destructive Fourier-Transform Ion-Cyclotron detection technique to this mass region.
Recent results and the status of the technical developments will be presented.
[1] K. Blaum, Phys. Rep. 425 (1) (2006) 1-78.
[2] M. Block et al., Nature 463 (2010) 785-788.
[3] E. Minaya Ramirez et al., Science 337 (2012) 1207-1210.
[4] C. Droese, et al., Nucl. Instrum. and Meth. B 338 (2014) 126-138.
[5] S. Eliseev et al., Phys. Rev. Lett. 110 (8) (2013) 082501.
Speaker: Oliver Kaleja (MPIK Heidelberg; JGU Mainz; GSI Darmstadt)
New program for measuring masses of silver isotopes near the N=82 shell closure with MLLTRAP at ALTO 15m
The ISOL facility ALTO, located at Orsay in France, provides stable ion beams from a 15 MV tandem accelerator and neutron-rich radioactive ion beams from the interaction of a γ-flux, induced by a 50 MeV, 10 µA electron beam, in a uranium carbide target. A magnetic dipole mass separator and a resonance ionization laser ion source allow selecting the ions of interest. New setups are under preparation to extend the range of fundamental properties of ground and excited states of exotic nuclei measured at ALTO, for example high-precision mass measurements for an accurate determination of the nuclear binding energy. To perform these measurements, two devices will be hosted at ALTO: a radiofrequency quadrupole to cool and bunch the continuous radioactive beam and the double Penning trap mass spectrometer MLLTRAP, commissioned off-line at the Maier-Leibnitz Laboratory (MLL) in Garching, Germany. The unique production mechanism using photo-fission at the ALTO facility allows mass measurements in a neutron-rich area of major interest around 132Sn with less isobaric contamination than using proton drivers. In this context, we plan to measure neutron-rich silver isotopes (Z = 47, A > 121) to explore the possible weakening of the shell gap for Z < 50 and its impact on the A = 130 r-process nucleosynthesis. The well-known silver masses (A < 121) will be used for the on-line commissioning of MLLTRAP and to characterize the performance of the detection system. The status and timeline of the novel setup will be presented.
Speaker: E. Minaya Ramirez (Institut de Physique nucléaire Orsay, 91406 Orsay, France)
Improving the sensitivity of the Canadian Penning Trap mass spectrometer with PI-ICR 15m
Nuclear masses provide a direct probe of nuclear structure effects and are necessary inputs for studies of nuclear astrophysics. Measuring the masses of neutron-rich nuclei far from stability, which are relevant to heavy element nucleosynthesis, is difficult due to low production rates in the laboratory, and short lifetimes. Over the past three decades, Penning trap mass spectrometry has been the preferred mass measurement method due to its proven accuracy and precision, and is employed at several rare isotope beam facilities around the world. At CARIBU, intense beams of radioactive neutron-rich nuclei are produced from the spontaneous fission of $^{252}$Cf. Using the MR-TOF, high-purity beams ($R = m/\Delta m > 100,000$) are rapidly prepared and efficiently transported to the experimental area where the Canadian Penning Trap mass spectrometer (CPT) resides. To take advantage of these clean beams, the CPT has pivoted from using TOF-ICR to a phase-imaging measurement technique (PI-ICR), which has improved the overall sensitivity of the device by more than two orders of magnitude. In the PI-ICR method, masses can be determined with fewer ions and with a shorter measurement cycle without loss in precision, making it well-suited for studying the most weakly produced isotopes at CARIBU. I will present details of the PI-ICR technique used by the CPT and highlight several recent results.
Speaker: Rodney Orford (McGill University)
Convener: Prof. Christoph Scheidenberger (GSI, JLU-Giessen)
Gamma-ray tracking with AGATA: A new perspective for spectroscopy at RIB facilities 30m
The Advanced GAmma Tracking Array is a next-generation high-resolution gamma-ray spectrometer for nuclear structure studies based on the novel principle of gamma-ray tracking. It is built from a novel type of high-fold segmented germanium detectors which operate in position-sensitive mode by employing digital electronics and pulse-shape decomposition algorithms. The unique combination of highest detection efficiency and position sensitivity allows sensitive spectroscopy studies with unstable beams of the lowest intensity. The first implementation of the array consisted of five AGATA modules; it was operated at INFN Legnaro. A larger array of AGATA modules was used at GSI for experiments with unstable ion beams at relativistic energies. At the moment the spectrometer is hosted by GANIL. In the near future AGATA will be employed at the leading infrastructures for nuclear structure studies in Europe. The presentation will illustrate the potential of the novel gamma-ray tracking method with physics cases from the different exploitation sites. Perspectives and opportunities for future RIB facilities given by the new spectrometer will be presented.
Speaker: Peter Reiter (University Cologne, Nuclear Physics Institut)
The ISOLDE Decay Station - a Swiss Army knife for nuclear physics 20m
On behalf of the IDS-Windmill-RILIS collaboration
The ISOLDE Decay Station (IDS) [1] has been a permanent experiment used for studies of low-energy nuclear physics at the CERN-ISOLDE facility since 2014. The core of the setup consists of four high-efficiency, clover-type germanium detectors and a tape transportation system. These can be coupled to a number of ancillary detector arrays, used for alpha/beta/gamma spectroscopy, neutron time-of-flight studies, or fast-timing measurements, making IDS a powerful and versatile tool for studying the wide range of radioactive species that are readily produced at ISOLDE.
In this contribution, an overview of the IDS system and its detectors will be presented, along with preliminary results from recent experiments performed at IDS [2]. In particular, results from an in-source laser spectroscopy study of bismuth isotopes [3] will be shown, in which a new high-spin isomer was identified and studied in 214Bi, thanks to the high gamma-ray detection efficiency of IDS. Plans for studying the low-lying excited states in 182,184,186Hg, and the incorporation of a new SPEDE conversion electron detector [4] at IDS, will also be revealed [5].
[1] http://isolde-ids.web.cern.ch/isolde-ids/
[2] A. Illana et al., IS622 experiment; L. Fraile et al., IS610 experiment; R. Lica et al., IS650 experiment; O. Tengblad et al., IS633 experiment, (2017-2018)
[3] A. Andreyev et al., IS608 experiment, (2016-2018)
[4] P. Papadakis et al., Eur. Phys. J. A 54, 42 (2017)
[5] K. Rezynkina et al., IS641 experiment, (2018)
Speaker: James Cubiss (University of York (GB))
Advanced scintillators for fast-timing applications 20m
Fast scintillator detectors such as LaBr$_3$(Ce) are changing the landscape in several research fields including experimental nuclear physics and medical imaging systems. This is due to the combination of excellent time response, good energy resolution and high effective Z. Advanced instrumentation for radioactive ion beam experiments takes advantage of these novel scintillator crystals and enables fast-timing experiments that allow the measurement of nuclear state lifetimes down to tens of picoseconds. On the other hand, faster scintillators allow replacing the present generation of LSO- or LYSO-based PET scanners and improving the achievable time resolution for TOF-PET. Moreover, short decay times will be able to sustain higher rates, enhancing the sensitivity of modern preclinical scanners.
In this contribution we report on the experimental investigation of the time and energy response of detectors based on inorganic scintillators with strong potential for fast-timing and imaging applications. The selected crystals are LaBr$_3$(Ce), CeBr$_3$ and co-doped LaBr$_3$(Ce+Sr) scintillators. An intercomparison of the energy resolution and time response of cylindrical crystals, 1 inch in height and 1 inch in diameter, coupled to optimized photomultiplier tubes is provided. The performance of custom crystals, specially designed for timing measurements, is also described.
Secondly, alternative readouts based on Silicon Photomultipliers (SiPMs) are discussed. These photosensors exhibit high photon detection efficiency, are insensitive to magnetic fields, have a small size and are relatively easy to use with simple read-outs. They are also intrinsically fast. In this work we investigate the time and energy resolution achieved with the relatively large scintillator crystals coupled to suited SiPMs and compare them to those obtained with photomultiplier-tube readout.
Finally, we discuss digital signal processing for the fast signals from the scintillator detectors. Digital processing has become a standard in data acquisition for multi-parameter set-ups, since it provides good performance in terms of energy resolution, dead time and flexibility. Nevertheless digital methods able to recover the excellent intrinsic time resolution of fast scintillators are still not widely available. We present results of digital acquisition and processing strategies, and compare them to analogue electronics. We show that digital processing is a competitive technique for fast scintillators and holds a strong potential for its implementation in standard set-ups.
Speaker: Luis Mario Fraile (Universidad Complutense (ES))
Coffee Break 30m 500/1-001 - Main Auditorium
Session 8 - Ion guide, gas catcher, and beam manipulation techniques 500/1-001 - Main Auditorium
Convener: Wilfried Noertershaeuser (Technische Universitaet Darmstadt (DE))
Ionic, atomic and optical manipulation techniques at radioactive ion beam facilities 30m
This contribution will present an overview of some of the most recent developments at radioactive ion beam facilities including methods to enhance the purity of ion beams both at the ion source, whether this be at an ISOL-type hot cavity facility or gas-cell-based facility, as well as the manipulation of beams prior to delivery to experimental setups. I will focus primarily on the application of laser techniques applied for purification (in-source, laser ion source trap methods, in-gas jet), but also how they may be used to gain an understanding of recently identified limitations in efficiency when one compares the application of similar resonant laser ionization schemes in gas cells vs hot cavity ion sources. This latter aspect becomes relevant in atomic systems with a high density of atomic levels, and I will use studies on the actinide elements and beyond as an example.
Further manipulation and "cleaning" of the ion beam is done using a combination of radiofrequency cooler-bunchers and traps. Other contributions to this conference include a variety of tools for low-energy purification, from Multi-Reflection Time-of-Flight devices to novel phase-imaging cleaning applied in Penning traps. Optical manipulation in cooler-bunchers serves to move electronic state population into ionic levels which are more suited for laser spectroscopy, and such methods may be extended to resonantly ionize singly-charged ions for additional Z-selectivity. Extensions to optical manipulation may be found via polarization techniques, and in the near future this will be applied in combination with ion trapping for tests of Physics beyond the Standard Model. I will also present the status of new atom trapping applications at JYFL, which combine the selectivity of laser cooling and magneto-optical trapping to uniquely study isomeric or ground states. Such "cold atom" techniques are well known; however, they are now being applied on-line to fission fragments for future applications.
Speaker: Iain Moore (University of Jyväskylä)
Accurate High Voltage measurements based on laser spectroscopy 15m
J. Krämer$^1$, K. König$^1$, Ch. Geppert$^2$, P. Imgram$^1$, B. Maaß$^1$, J. Meisner$^3$, E. W. Otten$^4$, S. Passon$^3$, T. Ratajczyk$^1$, J. Ullmann$^1,5$ and W. Nörtershäuser$^1$
$^1$ Institut für Kernphysik, TU Darmstadt
$^2$ Institut für Kernchemie, Johannes Gutenberg-Universität Mainz
$^3$ Physikalisch-Technische Bundesanstalt (PTB), Braunschweig
$^4$ Institut für Physik, Johannes Gutenberg-Universität Mainz
$^5$ Institut für Kernphysik, Westfälische Wilhelms-Universität Münster
Contact: [email protected], [email protected]
The ALIVE experiment at the TU Darmstadt is a collinear laser spectroscopy setup that has been developed for the measurement of high voltages in the range of 10 to 100 kV with the highest precision and accuracy. Here, ions with a well-known mass and transition frequency are accelerated with the voltage that is to be measured, and their Doppler shift is examined precisely with laser spectroscopic methods. An accuracy of at least 1 ppm is targeted, which is of interest for metrology as well as scientific applications such as the KATRIN experiment. Furthermore, this opens the opportunity to define a quantum standard for absolute high-voltage determination, since only direct frequency measurements are involved.
Earlier attempts with this technique were limited by the uncertainty of the optical frequency measurement [1] or the uncertainty of the real starting potential of the ions in the ion source [2]. In the ALIVE (Accurate Laser Involved Voltage Evaluation) experiment a two-stage laser interaction for optical pumping and probing is combined with a highly accurate frequency determination with a frequency comb [3] to overcome these limitations.
We will present the results achieved with 40Ca+ ions, where the well-known 4s$_{1/2}$ → 4p$_{3/2}$ and 3d$_{3/2}$ → 4p$_{3/2}$ transitions were used to identify the ion velocities before and after the acceleration. We have performed a measurement series with voltages between -5 kV and -19 kV in parallel to two high-precision voltage dividers and were able to demonstrate a 20-fold improvement compared to the previous approaches, reaching an accuracy almost comparable to the best state-of-the-art high-voltage dividers. To improve this further, indium ions from a liquid metal ion source and an alternative pump-and-probe approach will be used in the next stage of the experiment. With these improvements we expect to be able to reach sub-ppm accuracy.
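As a schematic illustration of the principle described above (with made-up numbers, not the published ALIVE results), the acceleration voltage follows from the relativistic Doppler shift of a transition with a well-known rest-frame frequency:

    import math

    AMU_EV = 931.494102e6    # eV/c^2 per atomic mass unit

    def beta_from_collinear_ratio(nu_laser, nu_rest):
        # Collinear geometry (laser co-propagating with the ions): nu_laser = nu_rest * gamma * (1 + beta).
        r = nu_laser / nu_rest
        return (r**2 - 1.0) / (r**2 + 1.0)

    def acceleration_voltage(beta, mass_u, charge=1):
        # q*U = (gamma - 1)*m*c^2 for ions starting (approximately) at rest.
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        return (gamma - 1.0) * mass_u * AMU_EV / charge

    # Hypothetical numbers of the right order for 40Ca+ (4s -> 4p3/2 near 393 nm):
    nu_rest = 762.0e12                                         # Hz
    beta = beta_from_collinear_ratio(nu_rest * 1.000967, nu_rest)
    print(f"beta = {beta:.2e}, U = {acceleration_voltage(beta, 39.962):.0f} V")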
[1] O. Poulsen, E.Riis, Metrologia, 25 (1988) 147
[2] S. Götte, et al., Rev. Sci. Instr., 75 (2004) 1039
[3] T. Udem, et al., Nature, 416 (2002) 233
[4] J. Krämer, K. König, et al., Metrologia 55 (2018) 268
Speaker: Dr Jörg Krämer (Institut für Kernphysik, TU Darmstadt )
Poster Session 2 500/1-201 - Mezzanine
Benchtop Isolation of Radioactive and Stable Isotopes by ICP-MS 1m
Inductively Coupled Plasma Mass Spectrometry (ICP-MS) has become a routine instrument for elemental and isotopic analysis. PNNL has made simple modifications to such instruments to collect single or multiple isotopes. Commercial instruments are now widely available, and the latest benchtop systems offer isotope isolation performance of >99.999%. While mass-selected ion currents are still relatively low, ~10-100 nA, the approach is well suited for preparing ultra-pure isotopes on the ng-µg scale. The atmospheric-pressure RF Ar plasma readily accommodates source materials in any form (gas, liquid or solid) and will ionize most elements quite efficiently with little or no changes to the ICP ion source. We will present experimental results illustrating the application of ICP-MS isotope purification utilizing quadrupole and magnetic sector based systems. This will include examples of the excellent isotopic purity that can be obtained for stable and radioactive isotopes. Additionally, benefits to α, γ and β spectrometry will be presented, where the very low ion energy deposition and high isotopic purity improve the subsequent radiometric measurements.
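A quick worked example (with assumed beam parameters) of why a 10-100 nA mass-selected current corresponds to the ng-µg scale quoted above:

    E_CHARGE = 1.602176634e-19    # C
    AMU_G    = 1.66053906660e-24  # g

    def collected_ug_per_hour(current_nA, mass_u, charge_state=1):
        # Mass deposited per hour by a mass-selected ion current, assuming 100 % collection.
        ions_per_s = current_nA * 1e-9 / (charge_state * E_CHARGE)
        return ions_per_s * mass_u * AMU_G * 3600.0 * 1e6

    # A 100 nA beam of a singly charged A = 100 isotope collects roughly 0.4 micrograms per hour:
    print(f"{collected_ug_per_hour(100, 100):.2f} ug/h")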
Speaker: Dr Michael P. Dion (Pacific Northwest National Laboratory)
Ongoing progress of the MARA low-energy branch 1m
The MARA low-energy branch (MARA-LEB) [1,2] is a novel facility currently under development at the University of Jyväskylä. Its main focus will be the study of ground-state properties of exotic proton-rich nuclei employing in-gas-cell and in-gas-jet resonance ionisation spectroscopy, and mass measurements of nuclei at the N=Z line of particular interest to the astrophysical rp process.
MARA-LEB will combine the MARA vacuum-mode mass separator [3] with a gas cell, an ion guide system and a dipole mass separator for stopping, thermalising and transporting reaction products to the experimental stations. The gas cell is based on a concept developed at KU Leuven [4] and designed for the REGLIS facility, GANIL. It will be able to use both Ar and He buffer gases to allow for more efficient neutralisation or faster extraction times respectively.
Laser ionisation will be possible either in the gas cell or in the gas jet using a dedicated Ti:Sapphire laser system. Following extraction from the cell the ions will be transferred by radiofrequency ion guides and accelerated towards a magnetic dipole for further mass separation before transportation to the experimental setups. The mass selectivity of MARA, combined with the elemental selectivity achieved through laser ionisation, will open the way to the study of nuclei with production cross-sections several orders of magnitude smaller than isobars produced in the same nuclear reaction. For example, isotopes at or close to the N=Z line, e.g. light Ag and Sn isotopes, will be of key interest.
A radiofrequency quadrupole cooler and buncher and an MR-TOF-MS [5] will be combined with the facility. These devices, which will be developed in Jyväskylä, will allow mass measurements of several isotopes close to the N=Z line, providing significant information on the rp process and serving as a testing ground for nuclear models.
In this presentation we will give an update of the current status of the MARA-LEB facility.
[1] P. Papadakis et al., Hyperfine Interact 237:152 (2016)
[2] P. Papadakis et al., International Conference on Ion Sources 2017, AIP conference proceedings, article in press.
[3] J. Sarén, PhD thesis, University of Jyväskylä (2011)
[4] Yu. Kudryavtsev et al., Nucl. Instr. and Meth. B 376, 345 (2016)
[5] R.N. Wolf et al., Nucl. Instr. and Meth. A 686, 82 (2012)
Using radioactive beams to unravel local phenomena in ferroic and multiferroic materials 1m
The increasing interest in using ferroic and multiferroic materials in high-tech applications requires that the underlying physical phenomena are studied on an atomic scale. Time-differential perturbed angular correlation (TDPAC) measurements have a local character and can provide important information concerning combined magnetic dipole and electric quadrupole interactions in ferroic and multiferroic systems. With the application of characterization techniques and radioactive beams, this method has become very powerful, especially for the determination of the temperature dependence of the hyperfine parameters, even at elevated temperatures. Such measurements lead to a better understanding of phase transitions, including observations of local environments in low fractions of different phases. Several facilities are used at ISOLDE-CERN, benefiting from the multitude of available beams adequate for the use and development of the TDPAC technique. Moreover, the concentration of required TDPAC probes is so small that the probes negligibly affect the observed transition temperatures. The polarization of the TDPAC probe nucleus during the measurements of ferroic systems is due to the transferred spin density. This phenomenon gives rise to the so-called "super transference" of the magnetic hyperfine field in perovskites. An overview of prior literature, interleaved with a discussion of measurement conditions and isotopes, is presented. Particular emphasis is given to the important case of measurements carried out at ISOLDE-CERN employing 111mCd as a probe.
Speaker: Juliana Schell (Institut für Materialwissenschaft, Universität Duisburg-Essen (D))
Long-term research and development for the SPIRAL1 facility 1m
After 4 years of upgrade, the SPIRAL1 (Système de Production d'Ions Radioactifs Accélérés en Ligne) facility situated at GANIL (Grand Accélérateur National d'Ions Lourds) is again on-line. Its capability of hosting target ion-source systems (TISS) using ionization techniques other than electron cyclotron resonance allows the production of radioactive ion beams (RIBs) to be extended to sticky chemical species. The in-target production variety will be further enlarged in the future owing to the range of primary beams in terms of elements and energies, and to a new license authorizing targets other than graphite. The increased number of target-primary beam combinations gives the possibility to optimize the yields using the best reaction among fusion-evaporation, transfer or fragmentation. Optimized TISS must be developed to make the most of these new possibilities. The list of the most interesting RIBs for the nuclear community, which will guide the short- and long-term R&D plan, will therefore have to be enriched taking into account these new possibilities. So far, the efforts have mainly been focused on the nuclide chart region of "light isotopes" with masses lower than Nb for target fragmentation induced by a carbon beam at 95 MeV/A, and on isotopes with masses up to U for beam fragmentation on a graphite target. Neutron-deficient isotopes ranging from A~70 to ~130 produced by fusion-evaporation reactions are our next objective. A new principle developed over the last 3 years aims at producing high yields of alkali elements by optimizing the atom-to-ion transformation efficiency within the TISS to balance low in-target production. The parameters involved in the efficiency, i.e. target structure, stickiness, diffusion and effusion release, and thermal properties of materials, are under study. Estimates give yields rarely obtained previously in this region, which is hard to explore at other facilities. If the principle of the first prototype is validated, the technical principle will be transposed to the production of neutron-deficient metallic isotopes within the next 3 years.
The status of these developments is presented.
Speaker: Pascal JARDIN (CNRS)
Development of new Ti:sapphire based laser sources for selective ionization and spectroscopy applications 1m
Laser spectroscopy and ionization are already well established tools for the analysis or production of radioactive ion beams. However, to best suit the needs of specific applications, new or modified laser systems are required. We present our recent progress and several applications of these new systems.
Two-photon transitions require high pulse energy and short pulse duration for efficient excitation. A simple approach for the generation of such pulses is the use of a reduced laser cavity length. A 3.5 cm Ti:sa laser cavity with a two-prism tuner for wavelength selection is demonstrated. Pulses of $>$ 1 mJ energy with pulse durations below 3$\,$ns at 905 nm were produced using an old flashlamp-pumped YAG laser as pump source. In a recent experiment (2018) at the J-PARC facility the system was used for fluorescence studies in He$_2$$^*$ excimer clusters, which were generated by recoils of the neutron-induced $^3$He(n,p)$^3$T reaction [1]. The fluorescence from these clusters may in the future allow 3D particle tracking velocimetry to investigate the superfluid phase in liquid helium.
Multi-element studies require either multiple expensive laser systems or the ability to quickly switch the wavelength of the laser system from one element of interest to another. A widely-tunable grating Ti:sapphire laser system with intra-cavity frequency doubling and motorized wavelength selection was developed. The system was applied to Secondary Neutral Mass Spectrometry (SNMS)[2] of Zr and Cs.
High resolution resonance ionization spectroscopy for the analysis of isotope shifts and hyperfine structure is possible with an injection-locked Ti:sapphire laser [3]. For increased wavelength flexibility we have started development of a continuous-wave direct diode pumped Ti:sapphire (DDPTS) laser to be used as master-laser source for generating the seed radiation. The use of inexpensive diodes as compared to frequency doubled YAG lasers as pump source will make this solution very cost-efficient.
[1] W. Guo et al., J. Instrum.(2012) 7 01 P01002
[2] T. Sakamoto (2018) Laser Ionization SNMS. In: Compendium of Surface and Interface Analysis. Springer, Singapore
[3] V. Sonnenschein et al. Las. Phys., (2017), 27(8):085701
Speaker: Dr Volker Sonnenschein (University of Nagoya)
Design and commissioning of an RFQ ion guide device for in-gas-laser-ionization studies at KU Leuven 1m
At the SPIRAL2 facility (GANIL) the LEB-REGLIS3 set-up [1] is being developed, which will allow laser spectroscopy studies of rare, unstable nuclei to be performed using the "In Gas Laser Ionization and Spectroscopy" (IGLIS) technique [2,3]. After separation with the S3 spectrometer, the fusion-evaporation reaction products are thermalized in a buffer gas cell and transported towards a de Laval type nozzle where they are embedded in a cold, homogeneous gas jet. Laser resonant ionization is subsequently performed and the photo-ions are captured in a radiofrequency quadrupole structure, efficiently transported from a high- to a low-pressure region and subsequently transferred towards different detection systems.
In this contribution we report on the preparatory work that is being performed at the IGLIS laboratory at KU Leuven [4]. As part of this laboratory, an ion-guide system consisting of three RFQ-structures has been simulated using the software packages SIMION [5] and IonCool [6].
A prototype has been constructed and commissioned. An overview of the commissioning tests will be discussed.
[1] F. Dechery, et al., NIM B 376, 125-130 (2016)
[2] Y. Kudryavtsev, et al., NIM B 297, 7-22 (2013)
[3] R. Ferrer, et al., Nature Commun., 8, 14520 (2017)
[4] Y. Kudryavtsev, et al., NIM B 376, 354-352 (2016)
[5] D. Manura, SIMION® 8.0, simion.com (2008)
[6] S. Schwarz, NIM A 566, 233-243 (2006)
Speaker: Simon Mark C Sels (KU Leuven (BE))
A compact linear Paul trap cooler buncher for CRIS 1m
Collinear resonance ionisation spectroscopy (CRIS) combines ionisation spectroscopy with a collinear geometry to provide Doppler-free measurements of atomic hyperfine structure, used to determine changes in root mean square charge radii, nuclear ground state spins and nuclear ground state electromagnetic moments. In the technique, an atomic beam is collinearly overlapped with multiple laser fields to resonantly excite then ionise the atoms of interest for deflection and detection.
As the high-power pulsed lasers required are only available with relatively low repetition rates (<200 Hz), the ion beam must arrive in bunches to avoid duty-cycle losses [1]. This requirement for a bunched beam necessitates the use of an ion trap. The CRIS experiment at ISOLDE, CERN currently makes use of the shared linear Paul trap, ISCOOL [2]. Installing a cooler buncher after the independent ion source at CRIS would allow for continual optimisation of the beam transport and quality. This would reduce the setup times needed before time-pressured experimental runs studying radioactive isotopes and would simplify rapid switching to a stable reference isotope.
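A simple numerical illustration of the duty-cycle argument above; the few-microsecond temporal acceptance window is an assumed value, not a CRIS specification:

    def cw_beam_duty_cycle(rep_rate_hz, acceptance_window_s):
        # Fraction of a continuous (unbunched) beam that overlaps in time with the pulsed lasers.
        return rep_rate_hz * acceptance_window_s

    # With a 200 Hz laser system and a few-microsecond acceptance window, a continuous beam
    # would be exploited at only ~0.1 %, hence the need for a cooler-buncher:
    print(f"{cw_beam_duty_cycle(200, 5e-6):.1%}")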
This poster presents the work completed towards a compact linear Paul trap cooler buncher for CRIS measurements with the Artemis project at The University of Manchester. The project also acts as an initial prototype for a future ion trap at CRIS, ISOLDE. The design incorporates many 3D printed and PCB based DC optics and mounting pieces, greatly increasing the speed of manufacture. Initial vacuum tests have demonstrated the vacuum compatibility of these plastics, reaching pressures below 1 x 10$^{-8}$ mbar.
[1] K. T. Flanagan et al. Phys. Rev. Lett., 111:212501, Nov 2013.
[2] I. Podadera-Aliseda et al. CERN-THESIS-2006-034, Jul 2006.
Speaker: Christopher Malden Ricketts (University of Manchester (GB))
An RFQ Cooler-Buncher for the $N=126$ factory at Argonne National Laboratory 1m
The properties of nuclei near the neutron $N=126$ shell, in particular their atomic masses, are critical to the understanding of the production of elements via the astrophysical r-process pathway [1]. Unfortunately, such nuclei cannot be produced in sufficient quantities using common particle-fragmentation, target-fragmentation, or fission production techniques. However, multi-nucleon transfer reactions between two heavy ions provide a method to access and study these nuclei [2]. The $N=126$ factory currently under construction at Argonne National Laboratory's ATLAS facility will make use of these reactions to allow for the study of these nuclei through, for example, high-precision mass measurements with Penning trap mass spectrometry. This new facility will include a large-volume gas catcher to stop reaction products, followed by a mass-analyzing magnet of resolution $R\sim10^3$ to provide initial separation, a radio-frequency quadrupole (RFQ) buncher to cool and accumulate the beam, and injection into a multi-reflection time-of-flight mass spectrometer (MR-ToF) to provide high mass resolution ($R\sim10^5$) and suppress isobaric contaminants. The construction and commissioning of the RFQ buncher, based on the design used at the National Superconducting Cyclotron Laboratory's BECOLA [3] and EBIT cooler-bunchers, will be presented.
[1] M.R. Mumpower et al., Prog. Part. Nucl. Phys., 86, 86 (2016)
[2] V. Zagrebaev and W. Greiner, Phys. Rev. Lett., 101, 122701 (2008)
[3] B.R. Barquest et al., Nucl. Instrum. Methods A, 866, 18 (2017)
Speaker: Mr Adrian Valverde (University of Notre Dame)
Developments of the Collinear Resonance Ionisation Spectroscopy (CRIS) experiment at CERN-ISOLDE 1m
Significant improvements have been made to the Collinear Resonance Ionization Spectroscopy (CRIS) experiment at CERN-ISOLDE in recent years.
A versatile ion source setup has been developed to support the range of ionization properties of the elements under investigation at CRIS. This has required combining surface, plasma and laser ablation sources with compatible ion optics and has allowed atomic studies independent of the ISOL facility's limited beamtime.
The beamline itself has also been upgraded on the road towards truly collision-free background conditions, needed for measurements of the lowest-yield isotope cases. The vacuum in the interaction region now reaches 1$\times$10$^{-10}$ mbar, a factor of 200 improvement over previous years.
This was achieved by additional vacuum pumping technologies, adjustable differential pumping apertures, as well as a 3-axis adjustable charge-exchange cell, which has mutually improved the atom-laser beam overlap. Remote actuation of systems such as valves and Faraday cups and automation of beam-tune optimization have also been incorporated.
These developments and relevant results will be presented in the talk, in addition to future prospects such as field-ionisation, ion-ionisation and anion-neutralisation.
Speaker: Adam Robert Vernon (University of Manchester (GB))
Beam Cooling and Bunching for the CANREB Project at TRIUMF 1m
The CANadian Rare isotope facility with Electron Beam ion source (CANREB) will aid in the delivery of pure, intense rare isotope beams (RIBs) from ISAC and ARIEL to further the nuclear science research programs at TRIUMF. CANREB will include a high resolution magnetic spectrometer (HRS) for beam purification, and a charge breeding system consisting of an ion beam cooler and buncher (BCB), a pulsed drift tube (PDT), an electron beam ion source (EBIS), and a Nier-type magnetic spectrometer to charge breed the RIB for post-acceleration. The BCB will accept continuous RIB beam and efficiently deliver bunched beam to the EBIS with intensities of up to 10$^{7}$ ions per bunch, with bunch frequencies of up to 100 Hz. The PDT will be used to match the energy of the bunched beam from the BCB to that of the EBIS acceptance in the range of 10-14 keV. The EBIS, developed at MPIK in Heidelberg, has been delivered to TRIUMF, and installation of CANREB equipment is underway. Design features of the BCB and PDT will be described, and a summary of installation and testing progress will be given.
Speaker: Brad Barquest (TRIUMF)
Polarization-dependent laser resonance ionization of beryllium and cadmium for optimized laser ion source performance 1m
In a multistep photoexcitation process, the excitation efficiency from the ground state to a final state depends on the polarization of the excitation photons and on the angular momenta of the intermediate and final states. Most experimental work on the polarization dependence in resonance ionization has been performed in atomic beams, where polarization relaxation due to collisional interaction is negligible. For ISOL-type hot-cavity laser ion sources, however, these ideal conditions do not necessarily exist.
Using TRIUMF's off-line laser ion source test stand with a system of tunable titanium sapphire lasers, we investigated the polarization dependence of laser resonance ionization in Be and Cd, using our preferred laser ionization schemes. A significant polarization dependence of the ion signal was confirmed in the typical excitation ladder $^1$S$_0$→$^1$P$_1$→$^1$S$_0$ for alkaline-earth and alkaline-earth-like elements. Polarization as an important parameter in optimizing laser ion source operation will be discussed. The use of polarization spectroscopy to determine the J values of newly found states in the spectra of elements with complex electronic structure or with only radioactive isotopes, $\textit{e.g.}$, Ra, Sm, Yb, Pu and No, will be explained.
Speaker: Dr Ruohong Li (TRIUMF)
Direct mass measurements of heavy/superheavy nuclei with an MRTOF-MS coupled with the GARIS-II 1m
The initial phase of the SHE-Mass project -- precision mass measurements of superheavy nuclei with a multi-reflection time-of-flight mass spectrograph (MRTOF-MS) coupled with the gas-filled recoil ion separator GARIS-II -- has successfully been carried out. In a series of experiments, masses of a wide variety of heavy/superheavy nuclei have been measured [1,2,3]. In particular, masses of mendelevium ($Z=101$) isotopes in the vicinity of the $N=152$ deformed neutron shell closure have been directly measured for the first time [3]. For the project, dedicated experimental devices to produce low-energy ion beams, such as a cryogenic gas catcher and ion traps, as well as a sophisticated measurement scheme for the MRTOF-MS, have also been developed. The details of the measurements and developments will be presented.
[1] P. Schury et al., Phys. Rev. C 95 (2017) 011305
[2] M. Rosenbusch et al., Phys. Rev. C (submitted)
[3] Y. Ito et al., Phys. Rev. Lett. (accepted)
Speaker: Dr Yuta Ito (RIKEN)
Rotating Proton Beam onto TRIUMF ISAC Targets for Higher RIB Yields' Releases 1m
A raster magnet was installed to rotate the 500 MeV proton beam onto the TRIUMF ISAC target. Rotating the proton beam produces a more uniform average power deposition, which increases the amount of beam power that targets can take. The magnet system is a pair of ferrite AC magnets which rotates the proton beam at frequencies up to 400 Hz, for various deflection angles. A new tune was developed to produce a controllable spot size, while providing an approximately parallel beam on target and a 90-degree phase advance between the raster magnets and the target. A set of diagnostics was developed to monitor the rotating beam. Online tests have shown that we can increase the RIB yields from a target for the same maximum temperature. In addition, this method simplifies beam delivery by making beam size adjustments more predictable and straightforward.
Present status of ERIS (Electron-beam-driven RI separator for SCRIT) at the SCRIT electron scattering facility 1m
ERIS (Electron-beam-driven RI separator for SCRIT) [1] at the SCRIT (Self-Confined Radioactive isotope Ion Target) electron scattering facility [2] is an online isotope separator system to produce low energy radioactive isotope (RI) beams, used for electron scattering experiments of short-lived unstable nuclei. ERIS consists of a production target, a forced electron beam induced arc discharge (FEBIAD) ion source [3], and a beam-analyzing transport line. In ERIS, RIs are produced in photo fission reaction of uranium and we prepared our own uranium carbide disks as the production target. The produced RIs are ionized in the FEBIAD ion source. They are extracted and transported to the SCRIT system [2] through FRAC (Fringing-RF-field-activated ion beam compressor) [4]. In FRAC, continuous beams are converted into pulsed beams with an appropriate stacking time.
In the commissioning experiment of the RI production, 23 uranium carbide target disks of 0.8 mm thickness and 18 mm diameter were used and the total amount of uranium was about 15g. They were irradiated with the 10-W electron beam. The observed rates for $^{132}$Sn and $^{138}$Xe were 2.6$\times$10$^5$ and 3.9$\times$10$^6$ atoms s$^{-1}$, respectively. Details are reported in Ref. [5].
Recently, ion stacking and pulse extraction at ERIS were developed to shorten the opening period of the FRAC's entrance and inject the same number of ions as in the continuous injection. In order to stack ions inside the ionization chamber, entrance and exit grids are connected to the ionization chamber through an insulator, and the applied voltages of these grids are slightly higher than that of the ionization chamber. Then, ions are trapped in the longitudinal direction.
As a result, with a 1-ms stacking time and 300-$\mu$s pulse width, the measured pulse height is about 5 times larger than that of the continuous beam, and the total number of ions in the pulsed beam is the same as that of a 1-ms continuous injection. Using this scheme, the number of accumulated ions inside FRAC is 2--3 times larger than with continuous injection.
As a further development, a surface ionization system will be introduced in order to extend the variety of ion beams, and the commissioning experiment will be performed soon.
In this paper, we would like to report the present status of ERIS and recent results.
[1] T. Ohnishi et al., Nucl. Instr. and Meth. B 317 (2013) 357.
[2] M. Wakasugi et al., Nucl. Instr. and Meth. B 317 (2013) 668.
[3] R. Kirchner et al., Nucl. Instr. and Meth. 133 (1976) 133.
[4] M. Togasaki et al., Proceedings of HIAT2015 (2015) WEPB25 and M. Wakasugi et al. submitted to Rev. Sci. Instrum.
[5] T. Ohnishi et al., Phys. Scr. T166 (2015) 014071.
Speaker: Tetsuya Ohnishi (RIKEN)
High-power converters for RIB production 1m
TRIUMF is developing two target assemblies for radioisotope production based on the conversion of primary charged particle beams into neutral particle fluxes, which consequently induce fission in a uranium carbide (UCx) target.
One is a proton-to-neutron converter made out of a 2 cm thick tungsten core clamped by copper brackets to dissipate up to 7.5 kW deposited by a 500 MeV, 100 µA proton beam. The high-energy isotropic neutrons will then induce cold fission in an annular UCx target material upstream of the converter.
The other is an electron-to-gamma converter made out of a thin tantalum layer deposited on a water-cooled aluminum backing. A 35 MeV electron beam of up to 100 kW will impinge on the tantalum surface and produce a gamma-ray flux, principally in the forward direction towards a downstream UCx target. This contribution focuses on some of the design challenges resulting from the extreme conditions in terms of power density, temperature and radiation.
Speaker: Mr Luca Egoriti (TRIUMF)
The design and tests for the new CERN-ISOLDE spallation source: an integrated tungsten converter surrounded by an annular UCx target operated at 2000 °C 1m
Neutron-rich fission fragments are currently of great interest for the physics community. These neutron-rich fission fragments are readily available at CERN-ISOLDE using the ISOL (Isotope Separator OnLine) method. However, if produced by direct irradiation (1.4 GeV protons) of uranium carbide (UCx) targets, commonly used at ISOLDE, the desired isotopes come with very high isobaric contamination from neutron-deficient fission fragments. Since the year 2000 at ISOLDE, a tungsten/tantalum spallation source positioned close to the UCx target has been irradiated instead. The spallation neutrons produced are emitted isotropically and interact with the target, producing neutron-rich fission fragments of very high purity. However, protons scattered from the bombardment of the W bar still hit the target, causing the undesired impurities.
An ISOLDE-CERN converter design optimization has been proposed before [1,2] and a simplified version has been tested under proton beam irradiation. In both the current and the tested prototype designs, the converter is placed just below the target. In order to use the full solid angle of the emitted neutrons and obtain the highest possible neutron flux, a solution is being studied in which the W converter is positioned inside the target. While this solution presents large gains in both production rates and purity of the desired beams, it also presents many engineering challenges. By positioning the W converter in the center of the UCx target, normally operated at 2000°C or higher, a larger-diameter target oven has to be developed. Furthermore, the chemical compatibility between all the target/converter components has to be guaranteed. In addition, from the 1.4 GeV pulsed proton beam – 2.8 kW (1.2 GW instantaneous, 2.4 μs pulse length) – up to 700 W are deposited in the target, subjecting the W to large power depositions in very short times. Since the W converter sits inside the target oven, it acts as an internal heat source for the target, which needs to be controlled with some precision to avoid target degradation and promote isotope release. To perform such optimization studies, simulations of isotope production and power deposition (FLUKA) and of the thermo-mechanical behaviour (ANSYS) of the target oven have been carried out.
[1] R. Luis, et al., EPJ A 48 (2012) 90.
[2] A. Gottberg, et al., NIMB 336 (2014) 143–148.
Development and applications of tunable solid-state laser techniques for the CERN-ISOLDE-RILIS 1m
The Resonance Ionization Laser Ion Source (RILIS) relies on a versatile, reliable and easy to use laser system to enable selective and efficient multi-step resonance photo-ionization of radioisotopes, for the majority of experiments at CERN-ISOLDE. A set of titanium sapphire (Ti:Sa) lasers complements the dye laser system of the ISOLDE RILIS installation [1], providing convenient access to the near infrared and blue parts of the optical spectrum.
Since their first use at ISOLDE [2], the Ti:Sa lasers have been under continuous development, extending their performance in terms of ease of use, spectral resolution, output power and beam quality. Intra-cavity frequency doubling has been achieved by introducing a non-linear BiBO crystal at the phase matching angle into the cavity. High efficiency and output power, together with a Gaussian profile of the generated second harmonic beam, have enabled us to easily saturate atomic transitions used for resonance ionization of elements of interest. A technique for scanning the frequency-doubled laser wavelength without additional beam steering has been developed. Subsequent frequency conversions to third and fourth harmonics have become more efficient due to the improved beam shape quality, leading to generation of high power UV laser light.
Here we report on the advanced performance of the RILIS Ti:Sa lasers as well as their applications in resonance ionization spectroscopy of stable and radioactive isotopes. An outlook for continued development activities, aiming at closing the existing gap in spectral range between dye and Ti:Sa lasers, will be presented.
[1] V. Fedosseev et al., https://doi.org/10.1088/1361-6471/aa78e0
[2] S. Rothe et al., https://doi.org/10.1088/1742-6596/312/5/052020
Speaker: Katerina Chrysalidis (Johannes Gutenberg Universitaet Mainz (DE))
Helium-Jet Ion-Source development for commensal operation at NSCL/FRIB. 1m
NSCL is a national user facility with a mission to provide beams of rare isotopes for researchers from around the world. Presently, a rare-isotope beam can only be delivered to one experimental end station. The Helium-Jet Ion Guide System (HJ-IGS) project is aimed at delivering a second radioactive ion beam to another end station by collecting rare isotopes that are not delivered to the primary user. This will be done by thermalizing rare isotopes in a stopping cell placed at suitable focal plane(s) off the ion-optical axis of the A1900 fragment separator. The cell is filled with high pressure helium gas mixed with aerosols. The gas/aerosol mixture is then transported through a capillary to a high temperature plasma ion source, where rare isotopes are separated from Helium, then ionized and accelerated to produce low energy ion beams. Subsequently, these beams will be mass-separated using an isotope separator and delivered to various experimental systems. Essential for the implementation of this concept is that the thermalizing cell and the extraction mechanisms are compact and compatible with existing fragment separator infrastructure.
A unique feature of the HeJet stopping technique compared to other techniques is the absence of space charge limitations, as stopping and ionization regions are physically separate. Stopping efficiencies that are independent of the incident ion rate are expected even at the highest rates to be available at FRIB.
The proof of principle of this concept was tested using 252Cf fission fragments at HRIBF, ORNL. Several dozen n-rich isotopes were thermalized, extracted from the cell and identified from decay gamma rays after being transported over a distance of about 100 ft. Subsequently, a high-voltage system and optics were developed, and neutron-rich rare isotopes were identified in the extracted low-energy ion beam.
At NSCL, a new isotope separator with matching optics will be added for producing mass separated ion beams. The eventual goal is to then cool these beams using a RFQ cooler and transport the rare isotopes to one of the low-energy experimental end stations or the NSCL re-accelerator. The installation and the initial testing of stopping and transport efficiencies have been completed and preparation for a beam test is in progress.
Acknowledgement: This work is supported by the US National Science Foundation through the MRI Grant No. 1531199.
Speaker: Dr Jiban Jyoti Das (National Superconducting Cyclotron Laboratory)
Molybdenum production with Laser technique at SPES: MOLAS Project 1m
The MOLAS project (Molybdenum production with Laser technique at SPES) calls for the production of 99Mo radioactive ions by means of the method that has been employed for the generation of ion beams studied in nuclear physics experiments.
The hypothetical system includes a commercial cyclotron with energy in the range 10 MeV to 20 MeV and a production target.
The target is a Molybdenum disk or a multi-foil structure, like the UCx SPES target, activated by the highly energetic protons coming from the cyclotron.
Once activated, the target undergoes a laser ablation process and the evaporated atoms are then available for subsequent ionization, which is necessary to select 99Mo through a mass spectrometer and collect the selected atoms.
The laser ablation, laser photoionization and mass separation process chain is the key aspect of the MOLAS idea and allows several problems to be avoided:
Laser ablation overcomes the high evaporation temperature of this refractory element; laser photoionization is an ideal technique to couple with a time-of-flight mass separation system, and together they allow the delivery of an isotopically pure beam of the element of interest, possibly without requiring an isotopically pure Mo target in the first place.
Furthermore, laser resonant photoionization could be itself the starting point for isotope separation using different excitation and ionization levels for different isotopes.
The MOLAS project could thus be a cost-effective method to produce high-purity 99Mo for use in the current 99mTc production chain.
Speaker: Monetti Alberto (lnl infn)
Laser ion source development for the CERN-MEDICIS facility 1m
The new CERN-MEDICIS facility aims for production of medical radioisotopes. It is foreseen to use two production routes. The first one implies the use of the 1.4 GeV proton beam coming from the CERN Proton Booster for irradiation of a target material with subsequent radionuclide extraction at the dedicated off-line MEDICIS Mass Separator. However, during short and long shutdowns, this production route is not available. The second way is based on the extraction of radioisotopes from targets pre-irradiated and provided by external institutions: nuclear reactors, medical accelerators and cyclotrons, nuclear waste repositories. This unique feature guarantees a continuous radionuclide supply to the end users, as well as operation of the Mass Separator independently of CERN shutdowns.
The MEDICIS facility is based on the ISOL technology. The off-line Mass Separator uses conventional electromagnetic separation technology, which requires the substance of interest to be in an ionic state. A traditional surface ionization method does not provide sufficient efficiency for ionizing the radionuclides targeted by MEDICIS, and is accompanied by undesired isobaric contamination. The presence of isobaric or other radionuclide impurities is not acceptable for personalized nuclear medicine, because it can cause unintended irradiation of living tissues as well as contamination with long-lived radioisotopes.
Using the laser resonant ionization method, we are able to ionize only radioisotopes of a desired chemical element. Therefore, the resonance ionization laser ion source (RILIS technology) allows us to combine the benefit of element selectivity with mass selectivity of electromagnetic separation. As a result, we can produce a pure desired radionuclide. Moreover, the high ionization efficiency of the laser ion source ensures a high radioisotope production rate.
The MEDICIS Laser Ion Source Setup (MELISSA) will use a solid-state laser system. It is based on tuneable Ti:Sapphire lasers, the most reliable and flexible choice for continuous operation of a separation facility. The use of Ti:Sapphire lasers requires spectroscopic development of a suitable multi-step laser ionization scheme for every chemical element of interest. In the report, the current status of the laser ion source development for the CERN-MEDICIS facility will be presented. The newest results of laser resonant ionization spectroscopy will be demonstrated for several lanthanides, whose medical radioisotopes are among the most interesting for the theranostic approach. In particular, various laser ionization schemes will be considered in order to identify the optimal one for the efficient production of innovative radiopharmaceuticals.
Speaker: Mr Vadim Gadelshin (Johannes Gutenberg University of Mainz (DE))
An injection-locked Titanium:Sapphire laser system for a high-resolution resonance ionization spectroscopy. 1m
Hyperfine structures and isotope shifts in electronic transitions contain readily available model-free information on the single-particle and bulk properties of exotic nuclei, namely the nuclear spin, magnetic dipole and electric quadrupole moments as well as changes in root-mean-square charge radii [1]. Recently, resonance ionization spectroscopy (RIS) in a low-temperature supersonic gas jet utilizing a narrowband first step excitation [2] has been demonstrated to be a powerful tool for probing exotic nuclei [3]. An optimal solution to combine high pulse powers, required for efficient RIS, with a narrow bandwidth is the pulsed amplification of a narrow-band continuous wave (CW) laser. In a regenerative Titanium:Sapphire amplifier, the cavity length is locked to a multiple of the seed wavelength allowing lasers to reach a final output power of several kW (during the pulse) from the few mW of CW input.
We present a pulsed injection-locked Titanium:Sapphire laser [5] designed with an emphasis on stability and reproducibility. The laser design couples low vibration sensitivity with stability via FEM simulation optimized feet positions and by integrating the injection and cavity optic mounts onto the baseplate. In addition, the laser can be configured for different cavity round-trip lengths and intra-cavity second harmonic generation.
The laser has been commissioned in the PALIS laser laboratory [4] at the RIKEN Nishina Center with laser spectroscopy of $^{93}$Nb, motivated by the possibility of separating the $^{93m}$Nb isomer from the ground state [6, 7]. These measurements yielded a total FWHM of ~400 MHz and hyperfine A coefficients of 1866 ± 8 MHz for the ground state and 1536 ± 7 MHz for the first excited state, in good agreement with the literature values [8].
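For orientation, the quoted A coefficients can be related to the observed line positions through the standard magnetic-dipole hyperfine-structure relation (a textbook expression, not specific to this contribution): the shift of a hyperfine level with total angular momentum $F$ is $E_F = \frac{A}{2}\,K$ with $K = F(F+1) - I(I+1) - J(J+1)$, where $I = 9/2$ for $^{93}$Nb and $J$ is the electronic angular momentum of the respective state, so the measured A values of the two levels fix the spacings of the hyperfine components seen in the recorded spectra.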
In conclusion, the laser has been demonstrated to perform as expected and is ready to be applied to in-gas-jet spectroscopy at PALIS. Furthermore, a similar laser is under construction at the University of Jyväskylä to be utilized at the IGISOL and MARA facilities.
[1] P. Campbell, I.D. Moore, and M.R. Pearson. Prog. Part. and Nucl. Phys., (2016), 86:127.
[2] Yu. Kudryavtsev et al. Nucl. Instr. and Meth. B (2013), 297:7, 201.
[3] R. Ferrer et al., Nature Communications 8 (2017) 14520.
[4] T. Sonoda et al. Nucl. Instr. and Meth. A, (2018), 877:118.
[5] V. Sonnenschein et al. Las. Phys., (2017), 27(8):085701.
[6] H. M. Lauranto et al. Appl. Phys. B, (1990), 50(4):323.
[7] H. Tomita et al., EPJ Web of Conferences 106 (2016) 05002.
[8] A. Bouzed et al. The Euro. Phys. J. D, (2003), 23(1):57.
Speaker: Dr Mikael Reponen (University of Jyväskylä)
The new Offline-2 laboratory for Isolde 1m
Offline-2 is an entirely new laboratory for the Isolde facility, to serve a wide range of R&D needs. Whilst the existing off-line lab is sufficient for target conditioning and for limited ion-source developments, it is now rather old and not always suitable for testing and development of more sophisticated target units. In addition there is an increasing need for beam dynamics studies, which the older off-line separator is entirely unsuitable for. Including infrastructure the new laboratory represents an investment of roughly 1M CHF.
The new separator comprises a standard Isolde frontend (target handling and beam-preparation system), so beam conditions are extremely realistic. After the separator magnet there is a matching section and an RFQ beam-cooler. The cooler is similar to the on-line cooler, and will permit a wide range of cooler upgrades to be studied including longitudinal emittance studies, alternative beam-capture schemes, and improved buffer-gas distribution inside the RFQ volume. After the RFQ there is a transverse emittance meter, and space for other advanced beam instrumentation including time profiling detectors and energy spread measurement (see abstract #104).
In general modifications to the on-line facility are difficult and risky, and with Offline-2 for the first time we will have a general-purpose test-bed to validate new equipment and new designs under realistic conditions prior to installation. A major project which will benefit from this is the planned replacement and upgrade of the Isolde GPS and HRS frontends.
Offline-2 will be ideal for development of ambitious new targets and ion-sources. It will allow for much more comprehensive testing before taking a prototype on-line, reducing failure risks and minimising the proton beam-time needed. A dedicated laser laboratory will allow far more laser-ionisation studies to be carried out, including work that currently has to be done with the on-line facility.
The infrastructure of the Offline-2 lab is now complete and commissioning of the source, separator and RFQ has started. We will present the measurements of the first beams, the construction status, and discuss plans for the exploitation of this new laboratory.
Speaker: Dr Stuart Warren (CERN)
A novel setup to develop Chemical Isobaric SEparation (CISE) 1m
Gas catchers are widely used to slow down nuclear reaction products and extract them for precision measurements. However, it is known that impurities in the inert stopping gas can chemically react with the ions and thus influence the extraction efficiency. So far, chemical reactions in the gas catcher have not been investigated in detail. We want to understand the chemistry inside the gas-catcher and explore its potential as a new technique for separation of isobars. Therefore, we are currently building a new setup to develop Chemical Isobaric SEparation (CISE).
The CISE-Setup consists of a gas-catcher which can either be used in online experiments or in combination with a laser ablation source, for chemical studies with stable nuclides. It is coupled to an octupole ion-guide and a quadrupole mass filter combined with a linear Time-Of-Flight (TOF) spectrometer. Different chemical reactions for separation of isobars will be tested inside the gas-catcher filled by helium and reactive gases. An overview of the CISE-Setup, ion-optical simulations and technical design together with the status of the project will be presented in this contribution.
Speaker: Mr A. Mollaebrahimi (KVI-Center for Advanced Radiation Technology, University of Groningen )
Laser ionization scheme development and high resolution spectroscopy of promethium 1m
Promethium (Z = 61) is an exclusively radioactive element with short half-lives of up to 17 years. Consequently, Pm sample amounts that can be safely handled in off-line laboratories are small and data on atomic transitions is scarce.
In order to make Pm accessible at RIB facilities, extensive laser ionization scheme development was carried out at JGU Mainz. More than 1000 new optical transitions were recorded in the spectral ranges 415 - 470 nm and 800 - 910 nm using pulsed Ti:sapphire lasers. From the obtained spectra, several two- and three-step ionization schemes were identified. In the course of this work, the ionization potential of Pm was experimentally determined for the first time via field ionization of weakly bound states, improving its precision by three orders of magnitude [1].
For high-resolution spectroscopy of Pm isotopes, first tests on $^{147}$Pm were carried out with the PI-LIST ion source unit, an RF quadrupole structure separating the hot atomizer region from a cold laser interaction region. In this way, surface-ionized ions can be suppressed while the species of interest is resonantly ionized in a crossed laser beam geometry, significantly reducing spectral Doppler broadening. Hyperfine spectra for two subsequent transitions in a newly developed RIS scheme were measured with linewidths of $\approx$ 120 MHz. Off-line spectroscopy of the isotopes $^{143,144,145,146}$Pm at JGU Mainz is envisaged for 2018.
[1] K. Wendt et al. Hyperfine Interact. 227, 55 (2014)
Speaker: Dominik Studer (Johannes Gutenberg Universitaet Mainz (DE))
Testing the Weak Interaction using the NSLtrap of the University of Notre Dame 1m
The standard model of physics provides a description of matter in the universe. However, it fails to account for a number of observed features, and so there has been a search for physics beyond the standard model. One avenue is via the precise determination of the V$_{ud}$ matrix element of the Cabibbo-Kobayashi-Maskawa (CKM) matrix from the ft-value of superallowed mixed beta-decay transitions. A violation of the CKM matrix unitarity could be the consequence of a missing quark generation, new bosons, or even supersymmetry. However, the determination of V$_{ud}$ from mirror transitions requires the measurement of the Fermi-to-Gamow-Teller mixing ratio ρ. At the Nuclear Science Lab (NSL) of the University of Notre Dame, a project is underway to develop a Paul trap devoted to the measurement of this elusive quantity. It will receive radioactive ion beams produced in-flight with TwinSol, a coupled pair of superconducting solenoids. The NSLtrap will consist of a gas catcher to stop the 1-3 MeV/A secondary beams from TwinSol. This will be followed by a radio-frequency quadrupole to cool and bunch the thermalized ions before their injection into a Paul trap. The design will be presented and the planned initial measurements will be discussed.
Project supported by NSF MRI: PHY-1725711
Speaker: Dr Patrick O'Malley (University of Notre Dame)
Lessons learned from the success of ISOLTRAP's MRTOF for a future general-purpose device 1m
The multi-reflection time-of-flight mass separator/spectrometer (MR-ToF MS) [1, 2] installed at the ISOLTRAP experiment [3] at ISOLDE at CERN has proven to be a valuable asset, allowing fast identification of the incoming ion beams [4] and selection and transfer of only a certain species to either the Penning-trap section [5], or to other experimental components [6]. The time-of-flight information can also be used to determine the masses of the beam constituents with sufficient precision for many physics topics, such as nuclear structure [7,8,9,10] and astrophysics [5, 11]. In addition to mass spectrometry, the MR-ToF can also provide purified samples for decay spectroscopy [6] or, in combination with the Resonant Ionization Laser Ion Source (RILIS) of ISOLDE, for measurements of nuclear moments and charge radii with background suppression [12, 13]. The fast-identification capabilities also make the MR-ToF a very attractive tool for target and ion source optimization and ion yield determination [14].
The ISOLTRAP MR-ToF MS will be discussed in detail and the idea of a future 30-kV general-purpose MR-ToF MS at ISOLDE will be presented.
[1] R.N. Wolf et al., Int. J. Mass Spectrom. 349-350, 123 (2013).
[2] R.N. Wolf et al., Nucl. Instrum. Meth. A 686, 82 (2012).
[3] M. Mukherjee et al., Eur. Phys. J. A 35, 1 (2008).
[4] S. Kreim et al., Nucl. Instrum. Meth. B 317, 492 (2013).
[5] R.N. Wolf et al., Phys. Rev. Lett. 110, 041101 (2013).
[6] N.A. Althubiti et al., Phys. Rev. C 96, 044325 (2017).
[7] F. Wienholtz et al., Nature 498 346 (2013).
[8] A. Welker et al., Phys. Rev. Lett. 119, 192502 (2017).
[9] V. Manea et al., Phys. Rev. C 88, 054322 (2013).
[10] M. Rosenbusch et al., Phys. Rev. Lett. 114, 202501 (2015).
[11] D. Atanasov et al., Phys. Rev. Lett. 115, 232501 (2015).
[12] B.A. Marsh et al., Nucl. Instrum. Meth. B 317, 550-556 (2013).
[13] J. Cubiss et al., submitted to Phys. Rev. C (2018).
[14] A. Gottberg et al., Nucl. Instrum. Meth. B 336, 143-148 (2014).
Speaker: Frank Wienholtz (CERN)
Ion Source Research and Development on Behalf of the TRIUMF ARIEL Development Team 1m
TRIUMF's flagship project is the Advanced Rare IsotopE Laboratory (ARIEL), which will operate alongside the existing ISOL facility, ISAC, to increase the number of shifts available to experimental users by a factor of three. In order to deliver not only more experimental hours but also better beams, a dedicated research and development program is required. There is scope for significant improvements to the performance of the target ion source systems currently used. The ion sources used at TRIUMF, such as surface ion sources and especially the FEBIAD (Forced Electron Beam Induced Arc Discharge) ion source, will benefit from this dedicated research and development. ISAC will serve as a starting point to validate simulations and experimental methodology that will be applied to the ARIEL design.
Experimental characterization and calibration of the ion source parameters have been performed to gain a better understanding of the ion source performance. Additionally, the simulations are capable of coupling different physics aspects of the source. This information allows a visualization of ionization maps that serve for a more realistic ion generation. Finally, the software tracks the ion extraction and exports data from which observables, such as the emittance, are obtained. Preliminary simulation results show a behavior similar to the experimental ion current as a function of the varying magnetic field. Through the multiphysics approach of the simulations and the experimental validation, it is hoped that a better understanding will lead to possibilities for the optimization of the current ion sources used at TRIUMF.
Speaker: Fernando Maldonado (University of Victoria/TRIUMF)
Commissioning a Multi-Reflection Time-of-Flight Mass Spectrometer at the University of Notre Dame 1m
A multi-reflection time-of-flight mass spectrometer (MR-TOF) will be a critical component for quickly removing radioactive contaminants produced at the future "N = 126 beam factory" addition to ATLAS at Argonne National Laboratory. This unique thermalized ion beam facility will employ deep-inelastic reactions to produce very neutron-rich isotopes relevant to the astrophysical r-process. This production method entails high levels of isobaric contaminants, but precision measurements of such rare isotopes typically require highly purified samples. With this problem in mind, an MR-TOF has been built and commissioned in an off-line test setup at the University of Notre Dame. These devices can accommodate low production yields and short half-lives of desired radionuclides, and can separate isobars with resolving powers $>10^5$ in a non-scanning operation. A series of simulations performed in concert with the off-line commissioning, as well as a summary of the MR-TOF's performance, will be presented. This work is supported by the National Science Foundation and the University of Notre Dame.
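As background (a standard time-of-flight relation, assumed here rather than quoted from the contribution), the mass resolving power of such a device scales as $R = m/\Delta m \approx t/(2\,\Delta t)$, where $t$ is the total flight time accumulated over many reflections and $\Delta t$ is the temporal width of the detected peak; for example, $t \approx 10$ ms and $\Delta t \approx 50$ ns would already correspond to $R \approx 10^{5}$, which is why extending the flight path by multiple reflections, rather than scanning, yields the quoted isobaric resolving powers.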
Speaker: Dr Maxime Brodeur (University of Notre Dame)
State-of-the-art industrial laser technology for laser ion source applications at ISOL facilities 1m
The unrivaled combination of efficiency and selectivity of the resonance-ionization process has made laser ion sources a mainstay of Isotope Separator On-Line (ISOL) facilities. The growing demand for laser-ionized beams has necessitated the use of increasingly robust laser systems, which are capable of operating continuously, and possess a long mean time between failures.
Such stringent reliability requirements are commonplace in the industrial sector, where lasers used for machining applications typically operate around-the-clock. To meet our industrial-level demand, we have therefore taken advantage of the range of machining lasers that have emerged in recent years [1][2]. Whilst these systems typically satisfy the reliability requirements, only a few satisfy the particular performance characteristics needed for laser-ion-source applications. At ISOL facilities, the industry-grade lasers are extensively used for tunable-laser pumping and non-resonant ionization [3]. Laser-induced breakup of molecular species released from targets is currently under investigation. Optimal performance for each foreseeable ISOL application requires a range of specific sets of laser-pulse characteristics: pulse width, energy, repetition rate, beam quality, and linewidth. This contribution will present an overview of our current practical experience of industrial lasers used for laser-ion-source applications.
[1] B. Marsh et al., https://doi.org/10.17181/CERN.F65D.P3NR
[2] B. Marsh et al., https://doi.org/10.1007/s10751-010-0168-5
[3] S. Rothe et al., https://doi.org/10.1088/1742-6596/312/5/052020
Speaker: Shane Wilkins (University of Manchester (GB))
Preliminary design of the new FRAgment In-flight SEparator (FRAISE) 1m
Within the framework of the LNS-INFN plan to increase the ion beam power delivered by the existing superconducting cyclotron up to 10 kW, a new FRAgment In-flight SEparator (FRAISE) has been proposed. Due to the constraints of the experimental hall, and despite an increase in the thickness of the shielding walls, we expect to run the facility at a power no higher than 3 kW. The mass of the ions used as primary beams will be lower than 70 amu.
FRAISE consists of four bending magnets arranged in a symmetric configuration to produce an achromatic beam transport line with a momentum acceptance of ±1%.
Although the room available for FRAISE is limited, a maximum dispersion of 5.2 m is achieved at the symmetry point of the system, where a degrader will be installed.
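To connect these two numbers (a simple first-order estimate, assumptions ours): a ray with fractional momentum deviation $\delta = \Delta p/p$ is displaced at the dispersive focus by $x \simeq D\,\delta$, so with $D = 5.2$ m the full $\pm1\%$ momentum acceptance corresponds to a horizontal spread of about $\pm52$ mm at the degrader position, which sets the minimum useful aperture of the degrader and slit system there.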
FRAISE could deliver radioactive ion beams to three experimental rooms, where the magnetic spectrometer MAGNEX, the multi-detector CHIMERA and the general-purpose scattering chamber CICLOPE are respectively installed. Moreover, the superconducting solenoid SOLE will be relocated to take advantage of the new RIBs.
The features and the performances of FRAISE will be presented.
Speaker: Antonio Domenico Russo (INFN - National Institute for Nuclear Physics)
Laser resonance ionization scheme development and laser spectroscopy with Ti:Sa lasers on Tm 1m
Spectroscopic studies to develop laser ionization schemes suitable for titanium:sapphire (Ti:Sa) lasers were carried out in Tm. While efficient ionization schemes exist for dye laser based RILIS, development is needed for Ti:Sa laser based RILIS. The spectroscopic studies were performed at TRIUMF's off-line laser ion source test stand with a system of tunable Ti:Sa lasers with the focus on developing simple, two-step resonant laser ionization schemes utilizing auto-ionizing (AI) states. Three different two-step resonance ionization schemes were developed using strong AI resonances which were determined with a measurement accuracy of about ±0.05 cm$^{-1}$. The resulting laser ionization schemes for Tm are well suited for Ti:Sa laser based RILIS and will be used for on-line yield measurements in August 2018.
Speaker: Ms Maryam Mostamand (TRIUMF, University of Manitoba)
Offline Target Studies for the ARIEL Facility 1m
The ARIEL facility, currently under construction at TRIUMF, brings two new target stations and a new electron driver beam from ARIEL's e-linac to the TRIUMF facility. With these upgrades comes a suite of new technologies and new methodologies specific to the challenges posed by the ARIEL target station design. Several offline test stands, becoming increasingly integrated as the tests progress, have been designed and constructed. They will validate the new concepts and designs, and facilitate studies of particular aspects of the ARIEL design. The use of an electron driver beam necessitates a redesign of the typical ISOL target and target vessel geometry used with a proton driver beam. Increased air activation also imposes stringent requirements on the system for remotely coupling and decoupling the target vessel. In addition, the target vessel must be coupled to both the driver and RIB beam lines, unlike the case for a proton driver beam. ARIEL target stations will also deliver a large number of services through the target vessel, and the TRIUMF facility as a whole faces unique coupling and alignment challenges by inserting and removing parts of the target station vertically. Investigations into the performance and reliability of these enhancements are crucial to successful online operation and studies focusing on the main challenges are underway. The results from these ongoing tests will be presented with an emphasis on the unique features present in the ARIEL era.
Speaker: Dr Carla Babcock (TRIUMF)
Upgrades of the GANDALPH photodetachment detector - towards the determination of the electron affinity of astatine 1m
The Gothenburg ANion Detector for Affinity measurements by Laser PHotodetachment (GANDALPH) has been designed to determine electron affinities (EA) of radioisotopes. A first goal is the determination of the EA of astatine, the rarest naturally occurring element on earth [1]. The EA of astatine, together with the previously measured first ionization potential (IP) [2] gives valuable benchmarks for quantum chemical calculations predicting the chemical properties of this element and its compounds. As a milestone, the first ever photodetachment measurement of a radioisotope was successfully conducted with a negative 128-iodine beam produced at CERN-ISOLDE [3].
In order to improve the suitability to study ion beams with low intensity (<1pA), we have upgraded GANDALPH in several aspects. First, a dedicated off-line negative ion source has been constructed and attached to GANDALPH. This will facilitate off-line tests and fine tuning of the neutral atom detector as well as the electrostatic elements in the beam-line. Second, a new particle detector has been installed, which will be used to measure the ion beam currents that are expected to be very small. Finally, we have installed segmented apertures that will allow a more efficient beam tuning through GANDALPH.
In this paper the GANDALPH beam-line and its upgrades will be presented. Further, the off-line ion source will be introduced and off- and on-line results will be discussed.
[1] I. Asimov. J. Chem. Educ. (1953)
[2] S. Rothe et al. Nature Communications (2013)
[3] S. Rothe et al. Journal of Physics G: Nuclear and Particle Physics 44 (2017)
Speaker: David Leimbach (Johannes Gutenberg Universitaet Mainz (DE))
High beam intensities and long trapping times of molecular beams in the REX/HIE-ISOLDE charge breeding system 1m
Within the framework of MEDICIS-Promed, possibilities for hadron therapy with radioactive ion beams are investigated. For preparation of a radioactive $^{11}$C treatment beam, the use of an ISOL-production stage followed by a charge breeding system is considered. In order to better understand the limitations of the charge breeding system, we have performed tests with stable high-intensity CO beams at the REX/HIE-ISOLDE low energy system, consisting of a Penning trap and an Electron Beam Ion Source. Efficiencies for high current throughput are presented along with data on molecular breakup in the buffer gas and ion losses for long holding times of the CO beam in the Penning trap.
Speakers: Johanna Pitters (CERN), Fredrik John Carl Wenander (CERN)
Monte Carlo shielding calculations for the SPES target handling system 1m
The Selective Production of Exotic Species (SPES) is a nuclear facility currently under construction at the National Laboratories of Legnaro (LNL) of the Italian Institute of Nuclear Physics (INFN), aiming at the production of Radioactive Ion Beams.
In the first SPES production phase, low-energy and low-intensity ion beams are planned to be produced using different targets. A continuous proton beam with an energy of 40 MeV and a current of 20 µA will impinge on SiC and UCx targets, whose operational temperature is 2000 °C. The life cycle of the Target and Ion Source (TIS) unit is 15 days. After this time, the TIS unit will be removed from the Front-End.
A semi-automatic handling system for this kind of target units is being designed at the SPES laboratories. Such a system picks up the TIS unit and takes it to a temporary storage, where it will be hosted until the final disposal. The system foresees the presence of an operator. Due to the residual activity of the irradiated target, shielding calculations have to be performed based on the frequency and on the duration of the planned operations.
The ambient dose equivalent rate has been calculated with the FLUKA Monte Carlo code for the two different target compositions, SiC and UCx, during the TIS removal operations. Different shielding conditions have been analyzed. Shielding calculations performed for both the semi-automatic handling system and for the exhausted target unit temporary storage represent mandatory inputs for the design of the SPES project.
Speaker: Dr Antonietta Donzella (University of Brescia)
Development of a new in-ring beam monitor in the Rare-RI Ring 1m
The precise masses of neutron-rich nuclei are important for the study of the r-process nucleosynthesis as well as nuclear structure far from stability. The newly constructed storage ring, Rare-RI Ring, is a device dedicated for the precise mass measurements for short-lived nuclei [1][2]. The masses are determined by comparing the revolution time of a reference particle with known mass and that of a particle of interest with unknown mass, based on the isochronous mass spectrometry.
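For context, the principle can be summarized by the standard first-order isochronous-mass-spectrometry relation (a textbook expression, not taken from this contribution): $\frac{\Delta T}{T} \simeq \frac{1}{\gamma_t^{2}}\,\frac{\Delta(m/q)}{(m/q)} + \left(\frac{\gamma^{2}}{\gamma_t^{2}} - 1\right)\frac{\Delta v}{v}$, where $T$ is the revolution time, $m/q$ the mass-to-charge ratio, $v$ and $\gamma$ the ion velocity and Lorentz factor, and $\gamma_t$ the transition point of the ring; when the isochronous condition $\gamma \simeq \gamma_t$ is fulfilled, the velocity spread drops out to first order and the revolution time of a single stored ion directly yields its $m/q$ relative to that of the reference particle.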
To adjust several magnets in the injection orbit properly, we need a detector to confirm the circulation of the stored particle. The detector should be sensitive to a single ion because the Rare-RI Ring handles only one particle at each injection. In addition, it is necessary to measure the revolution time to adjust the isochronous magnetic field precisely using a narrow-band Schottky pick-up [3]. Therefore, we developed a new in-ring beam monitor which consists of a thin foil, a scintillator, and multi-pixel photon counters (MPPCs) [4].
The operation principle is based on the secondary electrons including delta-rays which are generated when the stored particle passes through the foil at each revolution. The secondary electrons are detected by a scintillator coupled with MPPCs without any guiding field. We carried out beam experiments to verify the principle for a prototype detector at the Heavy-Ion Medical Accelerator in Chiba (HIMAC) synchrotron facility.
After verification, we installed the detector in the ring. The detector consists of a 3-$\mu$m-thick aluminum foil, one large plastic scintillator (100 × 100 mm$^{2}$ with 3-mm thickness) and two small ones (80 × 50 mm$^{2}$ with 3-mm thickness), and 10 MPPCs (S12572-100C) for scintillation light readout. In November 2016, we conducted a machine study of the Rare-RI Ring and successfully measured revolution times using the present detector. The result showed that the revolution time was determined with a precision of 8.0 × 10$^{-4}$.
In this contribution, we will present the details of the experiments, analysis, and results.
[1] A. Ozawa et al., Prog. Theor. Exp. Phys. 2012, 03C009 (2012).
[2] Y. Yamaguchi et al., Nucl. Instrum. Methods. Phys. Res. B 317, 629 (2013).
[3] F. Suzaki et al., RIKEN Accel. Prog. Rep. 49, 179 (2016).
[4] S. Omika et al., RIKEN Accel. Prog. Rep. 50, in press.
Speaker: Mr Shunichiro Omika (RIKEN, Saitama Univ.)
Infrastructure for the Production and Development of Targets at CERN-ISOLDE 1m
ISOLDE is a radioactive ion beam facility within CERN's proton accelerator complex. Ion beams of more than 70 different elements can be produced using a selected combination of the ion source types and target materials available. In 2017, a total of 36 target and ion source units (targets) were constructed, tested and irradiated at ISOLDE; they were used for scheduled physics experiments, for the commissioning of the new MEDICIS facility, or for pure machine development.
Both development studies and routine production share the same infrastructure, including an off-line mass separator and a thermal calibration test stand. Since the target production schedule has to be prioritized, this creates a bottleneck for the development work required to meet the ISOL community's demand for new and more exotic beams.
While this is being addressed by ongoing projects envisaging the construction of additional facilities, with off-line-2 currently in its commissioning phase and a future off-line-3 planned for the Class A laboratory, the existing off-line-1 mass separator has undergone several upgrades and additions.
Here we will give an overview of the infrastructure and procedures involved in ISOLDE target production and documentation. We will discuss the ongoing upgrade programme to improve and extend our capabilities, such as upgrades of the control software, the addition of residual gas analysers, automation and data logging. A second part will be focussed on the dedicated infrastructure for the development and study of new ion sources and the formation of molecular beams. We will conclude with an overview of the development programme foreseen during the LHC Long Shutdown 2 period (LS2), when no protons will be delivered to ISOLDE.
Speaker: Sebastian Rothe (CERN)
Status of MIRACLS' proof-of-principle experiment 1m
Collinear laser spectroscopy (CLS) is a very effective tool to measure nuclear spins, magnetic moments, quadrupole moments and mean-square charge radii of short-lived isotopes far from stability with high precision and accuracy [1]. Conventional CLS relies on the optical detection of fluorescence photons from laser-excited ions or atoms. Depending on the specific case and spectroscopic transition, it is limited to radioactive ion beams (RIB) with yields of more than 100 to 10,000 ions/s. As a consequence, it is essential to develop more sensitive experimental methods for the study of more exotic nuclei.
Complementary to the Collinear Resonance Ionization Spectroscopy (CRIS) technique [2], the MIRACLS project at ISOLDE/CERN aims to preserve the high resolution of conventional CLS and at the same time to enhance its sensitivity by a factor of 20 to 600, depending on the specific nuclide's mass and lifetime. This will be achieved by extending the effective observation time. The novel MIRACLS concept is based on an Electrostatic Ion Beam Trap/Multi-Reflection Time-of-Flight (MR-ToF) device which confines a 30-keV ion beam between two electrostatic mirrors [3].
In order to demonstrate the potential of this novel approach, a proof-of-principle
experiment for MIRACLS is being set up around an existing MR-ToF device [4] which is modified for the purpose of CLS. Mg ions are extracted from an offline electron-impact ionization source, subsequently accelerated by a 250 V potential difference and injected into a linear Paul trap which allows for beam accumulation, bunching and cooling. After extraction from this buncher, the ions are accelerated to 2 keV and then trapped in the MR-ToF device, which hosts an optical detection region to register fluorescence photons from laser-excited Mg ions.
This poster contribution will introduce the MIRACLS proof-of-principle experiment and will present the first observation of photons in the MR-ToF device.
[1] K. Blaum, et al., Phys. Scr. T152, 014017 (2013)
P. Campbell et al., Prog. Part. and Nucl. Phys. 86, 127-180 (2016)
R. Neugart, J. Phys. G: Nucl. Part. Phys. 44 (2017)
[2] K. T. Flanagan, Nuclear Physics News 23, no. 2, 24 (2013)
T. E. Cocolios et al., Nucl. Instrum. Methods Phys. Res. B 317, 565 (2013)
[3] H. Wollnik and M. Przewloka, Int. J. Mass Spectrom. 96, 267 (1990)
D. Zajfman et al, Phys. Rev. A 55, 1577 (1997)
W.H. Benner, Anal. Chem. 69, 4162 (1997)
W.R. Plass et al., Nucl. Instrum. Meth. B 266 4560 (2008)
A. Piechaczek et al., Nucl. Instr. Meth. B 266, 4510 (2008)
P. Schury et al., Eur. Phys. J. A 42, 343–349 (2009)
J.D. Alexander et al., J. Phys. B 42, 154027 (2009)
M. Lange et al., Rev. Sci. Instrum. 81, 055105 (2010)
R. N. Wolf et al., Int. J. Mass Spectrom. 349-350, 123-133 (2013)
F. Wienholtz et al., Nature 498, 346-349 (2013)
[4] M. Rosenbusch et al., AIP Conf. Proc. 1521, 53 (2013); AIP Conf. Proc. 1668, 050001 (2015)
Speaker: Varvara Lagaki (Ernst Moritz Arndt Universitaet (DE))
The upgraded ISOLDE yield database: a new tool to predict beam intensities 1m
Developed more than 10 years ago, the ISOLDE yield database [1] serves as a valuable source for experiment planning. At the moment, it contains in total 2445 yield entries for 1333 isotopes of 74 elements and 55 different target materials. In addition, information about the time structure of the release is available for 427 yields [2].
With the increasing demand for more and more exotic beams, needs arise to extend the functionality of the database and website not only to provide information about yields determined experimentally, but also to predict yields of isotopes, which can only be measured with sophisticated setups. The individual prediction of yields is a time-consuming process in which several parameters have to be considered. The rate of radionuclides generated inside the target by the driver beam (in-target production) is the first parameter that has to be addressed by means of costly simulations. This theoretical rate is obtained with two codes that are well-benchmarked at ISOLDE: ABRABLA [3] and FLUKA [4]. Due to the limited lifetime of the radioactive species, a certain fraction of isotopes will already have decayed before having reached the ion source. This partial yield can be obtained by mathematical operations from a release curve of a longer-lived isotope of the chemical element of interest [5].
Currently, release curves for 427 yield entries are available in the yield database. Comparing the in-target production multiplied by the release fraction with the measured yield of the same isotope allows a combined parameter to be extracted that accounts for ionization efficiency, chemical efficiency and other losses. For cases in which the release is well understood, the yield database has the capability to store all the necessary data to predict yields automatically.
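Schematically, and in our own notation introduced only to illustrate the procedure described above, the prediction can be written as $Y_{\mathrm{pred}} = P \times f_{\mathrm{rel}}(T_{1/2}) \times \varepsilon$, where $P$ is the simulated in-target production rate (ABRABLA/FLUKA), $f_{\mathrm{rel}}(T_{1/2})$ is the release fraction surviving radioactive decay, derived from the release curve of a longer-lived isotope of the same element, and $\varepsilon$ is the combined ionization, chemical and transport efficiency obtained by inverting the same relation for an isotope with a measured yield, $\varepsilon = Y_{\mathrm{meas}}/(P\,f_{\mathrm{rel}})$.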
The website of the ISOLDE yield database is now being further developed to present this additional data. A campaign of simulations has been launched to obtain in-target production values for all the target materials in the database, including the different proton driver beam energies (0.6, 1.0 and 1.4 GeV) and also a possible 2.0 GeV upgrade, increasing the prediction capabilities of the database. We also contribute to the CRIBE (Chart of Radioactive Beams in Europe) project [6] aiming to establish a common yield database for operational and planned European ISOL-facilities.
Within this contribution, we present the yield predicting algorithms, including the in-target production simulations, and the new version of the ISOLDE yield database website.
[1] Turrión, M. et al., Nucl. Instr. and Meth. B, 266 (2008), 4674.
[2] www.cern.ch/isolde, accessed Mar 2018
[3] Kelic A. et al., Proc. Jt. ICTP-IAEA Adv. Work. Model Codes Spallation React. (2009) 181–221.
[4] Ferrari A. et al., FLUKA : A multi-particle transport code, Geneva, 2005. doi:10.5170/CERN-2005-010.
[5] R. Kirchner, Nucl. Instr. and Meth. B 70, 186 (1992); T. Cocolios, et al. Nucl Instr and Meth B: 266 (2008) 4403
[6] M. Fadil, CRIBE – Task 3 of JRA Eurisol (WP14) in EURISOL JRA WP 14 First Periodic Scientific Report, https://eurisol-jra.in2p3.fr/?page_id=79, accessed Mar 2018
High resolution isomer separation using collinear resonant laser ionization 1m
A contamination-free isotope beam is one of the key requirements for the success of many experiments located at radioactive ion beam facilities. Resonant ionization laser ion sources (RILIS), which rely on the unique optical spectrum of each element, are well known for their ability to provide radioactive ion beams with reduced isobaric background. However, even though the optical spectrum differs for atoms with a nucleus in the ground or an isomeric state, the difference in the hyperfine structure spectrum is in most cases too small to be resolved with a RILIS.
Therefore, an ionization method based on a spectroscopy technique with higher resolution is required, such as collinear fast-beam laser spectroscopy. By overlapping the continuous-wave (cw) light of a 1.5 W blue diode laser with that of the resonant excitation transition along the polarizer beam line at TRIUMF, 85Rb was ionized with a laser-induced ionization enhancement of 8.3(15)%. Especially for nuclear structure experiments this additional purification could be of great benefit.
Speaker: A. Teigelhöfer (TRIUMF)
Highly Efficient Ion Source for Surface and Laser Ionization 1m
The application of mass-separators for the production of high-purity radionuclides used for diagnostics and therapy is a promising and extensively developed method. Among the most widely used medical radionuclides are isotopes with relatively low ionization potential, for example the generator isotope 82Sr, which is utilized for PET diagnostics of heart and brain diseases, the alpha-decaying 223,224Ra used for the therapy of different malignant tumors at a very early stage of their formation, and some Tl and In isotopes. The simplest way to ionize these isotopes for subsequent delivery by a mass-separator to its collector is the surface ionization technique. As a rule, the ionizer is made of a tungsten or tantalum tube at a temperature of about 2300 ℃ to produce the surface-ionized species. However, the ionization efficiency of such a type of ionizer for non-alkalis is low (for example, about 1% for Tl). On the other hand, surface ionization is the simplest and cheapest ionization method.
To increase the surface ionization efficiency, a new type of surface ion source has been proposed and put into operation at the mass-separator IRIS working on-line with the 1-GeV proton synchrocyclotron of the Petersburg Nuclear Physics Institute. The main part of this ion source is a tube made from a single crystal of tungsten with the tube axis oriented along the crystallographic [111] direction. The work function of the tube's internal surface is about 5 eV.
The ionization efficiency is determined as the ratio of the yield of the chosen long-lived isotope at the exit of the mass-separator to its in-target production rate. The latter is readily calculated from the known target and proton beam parameters along with the cross-sections of the reaction (p 1 GeV, 238U)A, where A is the isotope in question [1]. The yields have been deduced from the measurement of the characteristic gamma- or alpha-line intensities. The half-lives of the chosen isotopes were sufficiently long (larger than several hours) to neglect the decay losses due to the release time. For elements with moderate ionization potentials, IP, the ionization efficiencies prove to be sufficiently high to consider the new ion source as part of the future installation for medical isotope production. The ionization efficiencies for Tl (IP = 6.1 eV), In (IP = 5.8 eV) and Ra (IP = 5.3 eV) are 21(8), 33(8) and 38(10)%, respectively.
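For orientation, the efficiency quoted here is simply $\varepsilon = Y_{\mathrm{exit}}/P_{\mathrm{in\text{-}target}}$, and the benefit of the high work function can be illustrated with the standard Saha-Langmuir estimate for the ionized fraction at a hot surface (a textbook relation, not a measurement from this work):
$$ \frac{n^{+}}{n^{0}} \;=\; \frac{g^{+}}{g^{0}}\,\exp\!\left[\frac{e\,(\varphi-\mathrm{IP})}{k_{B}T}\right], $$
where $\varphi$ is the surface work function, IP the ionization potential and $T$ the surface temperature. At $T\approx2600$ K ($k_{B}T\approx0.22$ eV), raising $\varphi$ by a few tenths of an eV, as with the [111] tungsten surface, increases the ionized fraction per wall collision by roughly $\exp(\Delta\varphi/k_{B}T)$, i.e. up to an order of magnitude.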
This ion source tube was also successfully used as a hot cavity of a laser ion source during the experiments on in-source laser spectroscopy of the short-lived Bi isotopes [2]. The increase of the laser ionization efficiency in comparison with the efficiency obtained with the conventional tungsten ion-source tube at the same temperature was observed.
1. M. Bernas et al., Nucl. Phys. A 765, 197 (2006).
2. A. Barzakh et al., Phys. Rev. C 95, 044324 (2017).
Speaker: Dr Maxim Seliverstov (NRC "Kurchatov Institute" PNPI)
Public Lecture 80/1-001 - Globe of Science and Innovation - 1st Floor
Wednesday, 19 September
Session 9 - Low-energy and in-flight separators 500/1-001 - Main Auditorium
Convener: David Morrissey (Michigan State University)
MARA and RITU, in-flight separators for nuclear structure studies at JYFL 15m
A new separator, MARA (Mass Analyzing Recoil Apparatus) [1], has recently been constructed at the Accelerator Laboratory of the University of Jyväskylä (ACCLAB). MARA is a vacuum-mode double-focusing mass separator. The ion-optical configuration is QQQDEDM. MARA went through an extensive commissioning program during 2016, and already during 2017 it was used in spectroscopic studies at and beyond the proton drip line. In these studies, for example, three new isotopes have been identified, which is in itself strong proof that MARA delivers the required performance.
RITU (Recoil Ion Transport Unit) [2, 3] is a gas-filled recoil separator used to preferentially select recoiling nuclei from primary-beam-like products after fusion-evaporation reactions. RITU is based on a standard DMQQ magnetic configuration, with an extra vertically focusing quadrupole magnet added in front of the dispersive element, thus giving it a QDMQQ configuration. RITU has already been in operation for almost 25 years and a wide-ranging experimental program has been performed during these years.
MARA and RITU represent two different kinds of separators, each with its own pros and cons. They are complementary devices and together they give the freedom to substantially extend the experimental program performed by the Nuclear Spectroscopy Group.
In this work a status report of the JYFL in-flight separators RITU and MARA will be given.
[1] J. Sarén et al., Research Report No. 7/2011, University of Jyväskylä
[2] M. Leino et al., NIMB 99 (1995), 653
[3] J. Sarén et al., NIMA 654 (2011), 508
Speaker: Dr Juha Uusitalo (University of Jyväskylä)
1EMIS2018JuhaUu.pdf
Status of the new Fragment Separator ACCULINNA-2 and first experiments 15m
In 2017 the first set of radioactive ion beams (RIBs) was obtained from the new in-flight fragment separator ACCULINNA-2 [1] operating at the primary beam line of the U-400M cyclotron [2]. The observed RIB characteristics (intensity, purity, beam spots in all focal planes) were in agreement with estimates. The new separator provides high-quality secondary beams and opens new opportunities for experiments with RIBs in the intermediate energy range of 10–50 AMeV [3].
The $^6$He + $d$ experiment, aimed at the study of elastic and inelastic scattering in a wide angular range, was chosen for the first run. The data obtained on the $^6$He + $d$ scattering, and in the subsequent measurements of the $^8$He + $d$ scattering, are necessary to complete the MC simulation of the flagship experiment: the search for the enigmatic nucleus $^7$H in the reactions $d$($^8$He,$^3$He)$^7$H and $p$($^8$He,$pp$)$^7$H.
Opportunities for day-two experiments with RIBs using additional heavy equipment (radio-frequency filter, zero-angle spectrometer, cryogenic tritium target) will also be reported. In particular, the study of several exotic nuclei ($^{16}$Be, $^{24}$O, $^{17}$Ne, $^{26}$S) and their decay schemes is foreseen.
http://aculina.jinr.ru/acc-2.php
http://flerovlab.jinr.ru/flnr/u400m.html
L.V. Grigorenko et al., Physics – Uspekhi 59, 321 (2016).
Speaker: Dr G. Kaminski (Flerov Laboratory of Nuclear Reactions, JINR, Dubna, Russia)
Grzegorz Kaminski EMIS 2018.pdf
New control method of slowed-down RI beam and new particle-identification method of secondary-reaction fragments at RIKEN RI beam factory 15m
The energy of the RI beam is one of the most important parameters for reaction studies. At the RIKEN RI Beam Factory (RIBF), energies around 200 MeV/u are easily available because the primary-beam energy is 345 MeV/u. Spectrometers and beam-line detectors are optimized for this energy region. On the other hand, for transfer reactions, multi-nucleon transfer reactions, and fusion reactions, energies around the Coulomb barrier are required. Lower energies are also needed to determine the energy dependence of the spallation cross sections for the long-lived fission products (LLFPs) in the search for a nuclear transmutation scheme of LLFPs [1].
A first experiment with a $^{82}$Ge beam demonstrated the production of a slowed-down RI beam using the BigRIPS fragment separator at RIBF [2]. A beam energy of 13 MeV/u was successfully achieved. However, a method to control the absolute beam energy and the energy, position, and angle distributions was not established, and the particle-identification method designed for higher energies could not be applied to fragments with such low energy.
In the present study, we established new control methods by changing the material thickness, the momentum selection and the slit settings, coupled with a new ion-optics mode. For example, a beam energy of 20.2 MeV/u was obtained for the 20 MeV/u setting of the $^{93}$Zr beam. A new particle-identification (PID) method was also developed: the combination Bρ-TOF-ΔE-E-range was obtained by using the momentum-dispersive mode of the ZeroDegree spectrometer coupled with the multi-sampling ionization chamber. The demonstration of PID with the 50 MeV/u $^{93}$Zr beam will be presented. The capability to reach lower energies will be discussed.
[1] H. Wang et al., Phys. Lett. B 754, 104 (2016).
[2] T. Sumikama et al., Nucl. Instr. Meth. B 376, 180 (2016).
EMIS_SlowedDownBeam.pdf
Recent Results from the FIONA Separator at LBNL 15m
Recently, the Berkeley Gas-filled Separator (BGS) at the Lawrence Berkeley National Laboratory (LBNL) was coupled to a new mass analyzer, FIONA. The goal of BGS+FIONA is to provide a M/ΔM separation of ~300 and to transport nuclear reaction products to a shielded detector station on the tens-of-milliseconds timescale. These upgrades will allow for direct A and Z identification of i) superheavy nuclei such as those produced in the 48Ca + actinide reactions and ii) new actinide and transactinide isotopes with ambiguous decay signatures such as electron capture or spontaneous fission decay.
Nuclear reaction products recoil from the target and are separated from the beam and unwanted reaction products in the BGS. They then pass through a window and into a radio-frequency gas catcher where they are thermalized and extracted into a radio-frequency quadrupole (RFQ) trap. The nuclear reaction products are cooled and bunched in the RFQ trap, where they maintain a +1 or +2 charge, and are injected into the mass analyzer. The mass analyzer consists of crossed electric and magnetic fields such that the ions take trochoidal trajectories. Here we will present recent results from the FIONA commissioning experiments.
Financial Support was provided by the Office of High Energy and Nuclear Physics, Nuclear Physics Division, and by the Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences of the U.S. Department of Energy, under Contract No. DE-AC02-05CH11231
Speaker: Jacklyn Gates (Lawrence Berkeley National Laboratory)
Coffee Break and Conference Photograph 30m 500/1-201 - Mezzanine
Session 10 - Laser techniques 500/1-001 - Main Auditorium
Convener: Block Michael (GSI)
High-resolution Laser Ionization Spectroscopy of Heavy Elements in Supersonic Gas Jet Expansion 30m
Resonant laser ionization and spectroscopy are widely used techniques at radioactive ion beam facilities to produce pure beams of exotic nuclei and measure the shape, size, spin and electromagnetic multipole moments of these nuclei. In such measurements, however, it is difficult to combine a high efficiency with a high spectral resolution. A significant improvement in the spectral resolution by more than one order of magnitude has recently been demonstrated without loss in efficiency by performing laser ionization spectroscopy of actinium isotopes in a supersonic gas jet [1], a new spectroscopic method [2] that is suited for high-precision studies of the ground- and isomeric-state properties of nuclei located at the extremes of stability.
Spatial constraints and limitations of the pumping system in the present setup prevented the formation of a high-quality jet and, as a consequence, an optimal spatial and temporal laser-atom overlap. Offline characterization studies at the In-Gas Laser Ionization and Spectroscopy (IGLIS) laboratory at KU Leuven [3] are being carried out to overcome these limitations in future experiments when dedicated IGLIS setups are in operation at new-generation radioactive beam facilities [4]. These characterization studies include: the flow dynamics and the formation of supersonic jets produced by different gas-cell exit nozzles using the Planar Laser Induced Fluorescence (PLIF) technique on copper isotopes, gas-cell designs with better transport and extraction characteristics, an ion guide system for efficient transport of the photo-ions and a high-power, high-repetition-rate laser system.
Extrapolation of the online results on the actinium isotopes shows that the performance of the technique under optimum conditions can reach a final spectral resolution of about 100 MHz (FWHM) and an overall efficiency of 10% when applied in the actinide region.
In this presentation I will summarize a number of on-line results and mainly will focus on the characterization studies and future prospects of the in-gas-jet resonance ionization method applied on very-heavy elements.
[1] R. Ferrer et al., Nat. Commun. 8, 14520, doi: 10.1038/ncomms14520 (2017).
[2] Yu. Kudryavtsev, R. Ferrer, M. Huyse, P. Van den Bergh, P. Van Duppen, Nucl. Instr. Meth. B 297,7 (2013).
[3] Yu. Kudryavtsev et al., Nucl. Instr. Meth. B 376, 345 (2016).
[4] R. Ferrer et al., Nucl. Instr. Meth. B 317, 570 (2013).
Speaker: Rafael Ferrer Garcia (KU Leuven (BE))
Resonance ionization schemes for high resolution and high efficiency study of exotic nuclei at the CRIS experiment 20m
Laser spectroscopy of exotic isotopes requires a technique that combines high spectral resolution with high efficiency. At the Collinear Resonance Ionization Spectroscopy (CRIS) experiment at ISOLDE [1], significant effort has been invested in improving both aspects. These developments resulted in, e.g., linewidths of 20 MHz in radioactive francium [2], and in successful high-resolution measurements on beams with yields as low as 20 pps [3]. This contribution presents an in-depth study of how to achieve high-resolution and high-efficiency RIS on radioactive ion beams. These developments will pave the way for current and future experiments on radioactive beams, with many different groups exploring ways to achieve high-resolution RIS [4,5].
Resonance ionization scheme developments have been performed for a number of elements (${}_{19}$K, ${}_{29}$Cu, ${}_{31}$Ga, ${}_{49}$In, ${}_{50}$Sn, ${}_{87}$Fr, ${}_{88}$Ra). The significance of these studies lies in achieving high resolution without introducing efficiency losses and line shape distortion in the observed hyperfine spectra. Interesting and unexpected effects have been identified related to the role of laser powers, temporal laser pulse lengths and relative firing delay of pulsed lasers. In particular, the use of "chopped" continuous light [2,6] as the first excitation step has been investigated in comparison with the use of an injection-locked laser [3,7]. The complementarity of these two approaches is such that both will continue to see use at CRIS.
In this contribution, the laser systems installed at CRIS will be presented, along with experimental results, demonstrating the advantages and opportunities that come with having such a versatile laser system, and the necessity of a solid understanding of the interaction of lasers and atoms for high resolution resonance ionization laser spectroscopy.
[1] K. T. Flanagan et al., Physical Rev. Letters 111, 212301 (2013).
[2] R. P. de Groote et al., Physical Rev. Letters 115, 132501 (2015).
[3] R. P. de Groote et al., Phys. Rev. C 96, 041302(R) (2017).
[4] R. Ferrer et al., Nature Communications 8, 14520 (2017).
[5] R. Heinke, T. Kron, S. Raeder et al. Hyperfine Interactions, 238: 6 (2017).
[6] R. P. de Groote et al., Physical Rev. A 95, 032502 (2017).
[7] V. Sonnenschein et al., Laser Physics 27, 085701 (2017).
Speaker: Agota Koszorus (KU Leuven (BE))
EMIS_Agi_Koszorus_pres.pdf
Laser Spectroscopy of the Heaviest Elements at SHIP / GSI 20m
Laser spectroscopy is a versatile tool to unveil fundamental atomic properties of an element and information on the atomic nucleus. The heaviest elements are of particular interest as their electron shell is strongly influenced by electron-electron correlations and relativistic effects changing the electron configuration and thus the chemical behavior [1,2]. The elements beyond fermium ($Z>100$) are only accessible through fusion-evaporation reactions at minute quantities and at high energies, hampering so far their optical spectroscopy. Only recently we were able to identify optical transitions in nobelium ($Z=102$) in a pioneering experiment employing the RAdiation Detected Resonance Ionization Spectroscopy (RADRIS) technique [3,4]. Nobelium ions are produced in the fusion-evaporation reaction by a $^{48}$Ca beam impinging on a lead target. The primary beam is deflected by the velocity filter SHIP and the transmitted recoils are stopped in high-purity argon gas and collected onto a thin tantalum filament. After re-evaporation the neutral atoms were probed by two-step resonance ionization. The photo-ions thus created were then guided to a detector where they were identified by their characteristic $\alpha$-decay. With this technique a first identification and characterization of a strong $^1$S$_0$$\rightarrow$$^1$P$_1$ ground state transition in nobelium was possible. The resonances for the isotopes $^{252-254}$No were measured as well as the hyperfine splitting in $^{253}$No. In combination with atomic calculations, we determined the evolution of the deformation of the nobelium isotopes in the vicinity of the deformed shell closure at neutron number $N=152$ and extracted the magnetic moment and the spectroscopic quadrupole moment of $^{253}$No.
Next steps include the extension of the RADRIS method to more exotic nobelium isotopes and to the next heavier element lawrencium ($Z=103$) as well as developments for higher resolution spectroscopy. For the latter a dedicated setup is being developed combining the efficient stopping and neutralization from the RADRIS experiment with the high resolution of in-gas-jet spectroscopy [5]. Here, the stopped ions are guided by electric fields to a heated filament, which efficiently neutralizes the nobelium ions, as demonstrated in the RADRIS experiment. The neutral atoms are then extracted through a de Laval nozzle forming a collimated gas jet. Laser spectroscopy in this low density and low temperature regime will enable an improved resolution in laser spectroscopy and furthermore will allow us to address shorter-lived isotopes and isomers as, e.g., the $K$-isomer in $^{254}$No.
1. E. Eliav, S. Fritzsche, U. Kaldor, Nucl. Phys. A 944, 518 (2015).
2. P. Schwerdtfeger, et al., Nucl. Phys. A 944, 551 (2015).
3. H. Backe, W. Lauth, M. Block, M. Laatiaoui, Nucl. Phys. A 944, 492 (2015).
4. M. Laatiaoui, et al., Nature 538, 495 (2016).
5. R. Ferrer, et al., Nature Commun., 8, 14520 (2017).
Speaker: Sebastian Raeder (Helmholtz Institut Mainz, GSI Darmstadt)
MIRACLS: Multi Ion Reflection Apparatus for Collinear Laser Spectroscopy 15m
Laser spectroscopy is a powerful tool for studying nuclear ground-state properties in a model-independent way. It provides access to the charge radii and electromagnetic moments of the nuclear ground state as well as of isomers by observing the isotope shifts and hyperfine structures of the atoms' spectral lines [1, 2]. While in-source laser spectroscopy in a hot cavity is a very sensitive method that is able to measure rare isotopes with production rates below one particle per second [3], the spectral resolution of this method is limited by Doppler broadening to ~5 GHz. Collinear laser spectroscopy (CLS) on the other hand, provides an excellent spectral resolution of ~10 MHz [1] which is of the order of the natural line widths of allowed optical dipole transitions. However, CLS requires yields of more than 100 or even 10,000 ions/s depending on the specific case and spectroscopic transition [4].
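The resolution advantage of the collinear geometry follows from the usual kinematic-compression argument (a textbook estimate, independent of the MIRACLS specifics): electrostatic acceleration through a potential $U$ conserves the energy spread $\delta E$ of the ion ensemble, so
$$ \delta E = m\,v\,\delta v = \mathrm{const} \;\;\Rightarrow\;\; \delta\nu_{D} = \nu_{0}\,\frac{\delta v}{c} = \frac{\nu_{0}\,\delta E}{c\,\sqrt{2\,e\,U\,m}}, $$
i.e. the residual Doppler width shrinks with increasing beam energy, which is why fast-beam CLS reaches the ~10 MHz level while in-source spectroscopy in a hot cavity remains Doppler-limited at the GHz scale.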
Complementary to the Collinear Resonance Ionization Spectroscopy (CRIS) technique [5], the MIRACLS project at CERN aims to develop a laser spectroscopy technique that combines the high spectral resolution of conventional fluorescence CLS with a sensitivity enhanced by a factor of 20-600. The sensitivity increase is derived from an extended observation time provided by trapping ion bunches in an Electrostatic Ion Beam Trap / Multi-Reflection Time-of-Flight device [6] where they can be probed several hundred thousand times.
This talk will introduce the MIRACLS concept and will present the current status of the project as well as the outlook towards further developments.
[2] P. Campbell et al., Prog. Part. and Nucl. Phys. 86, 127-180 (2016)
[3] H. De Witte et al., PRL 98, 112502 (2007)
[4] R. Neugart, J. Phys. G: Nucl. Part. Phys. 44 (2017)
[5] K. T. Flanagan. Nuclear Physics News 23, 2 24 (2013)
[6] D. Zajfman et al., Phys. Rev. A 55, 1577 (1997),
W.H. Benner, Anal. Chem. 69, 4162 (1997),
W.R. Plass et al., Nucl. Instrum. Meth. B 266 4560 (2008),
A. Piechaczek et al., Nucl. Instr. Meth. B 266, 4510 (2008),
P. Schury et al., Eur. Phys. J. A 42, 343–349 (2009),
J.D. Alexander et al., J. Phys. B 42, 154027 (2009),
M. Lange et al., Rev. Sci. Instrum. 81, 055105 (2010),
R. N. Wolf et al., Int. J. Mass Spectrom. 349-350, 123-133 (2013),
Speaker: Simon Mark C Sels (CERN)
Sels_MIRACLS_EMIS2018_3.pdf
Collinear laser spectroscopy at the IGISOL facility: upgrades and new opportunities 15m
Collinear laser spectroscopy is an established tool for the study of electromagnetic moments, charge radii and nuclear spins. With a history that now spans four decades, the technique has been successfully applied in laboratories all over the world. Recently, several upgrades were performed at the Ion Guide Isotope Separator On-Line (IGISOL) facility in Jyväskylä. Chief among these upgrades are a new event-by-event data acquisition system and a new charge-exchange cell. These developments will expand the applicability of the method significantly, and will in particular enable studies of the late d-shell species like Tc-Pd. No measurements on radioactive isotopes of these elements have been reported so far, which reflects the challenge of producing such refractory species at ISOL-based facilities.
In parallel to the developments at the collinear laser spectroscopy station, modifications of the radiofrequency cooler-buncher at the IGISOL are underway. The goal of these upgrades is to reduce the temporal length of the ion bunches. This is required to reach optimal mass-resolving power with the new Multi-Reflection Time-of-Flight (MR-TOF) device, which is also currently being built and commissioned. Since an increase in the energy spread of the ions will result in a broadening of the resonance lines, collinear laser spectroscopy presents a unique tool to investigate the time and energy spread of the bunches produced with this upgraded cooler-buncher.
In this contribution, the aforementioned upgrades will be discussed in detail. The performance of the upgraded cooler-buncher, evaluated using collinear laser spectroscopy, will be summarized. The implications of all these upgrades for the future physics program will be explored.
collinear_IGISOL.pdf
First demonstration of Doppler-free two-photon in-source laser spectroscopy at the ISOLDE-RILIS 15m
The ISOLDE on-line mass separator facility at CERN has offered radioactive beams of a multitude of elements for over 50 years [1]. Fundamental research on nuclear structure, masses and decay modes is carried out by the various experimental installations inside the hall. To complement these, several measurement campaigns throughout the past years have been conducted by the in-source laser ionization spectroscopy collaboration between teams of the Resonance Ionization Laser Ion Source (RILIS), the Windmill alpha-detector setup, ISOLTRAP and, more recently, the ISOLDE Decay Station (IDS). Studies performed by this collaboration have made a great impact on our knowledge of nuclear shape evolution along isotopic chains in the gold-astatine region [2]. The high sensitivity of the in-source resonance laser ionization method has enabled us to extend our reach towards the most exotic isotopes ever studied at ISOLDE, close to the proton drip line.
Doppler-broadening inside the hot cavity ion source environment remains the biggest drawback of this method, limiting the achievable resolution and thereby restricting its use to the study of the heavier isotopes, or to specific cases where the isotope shift or hyperfine structures are sufficiently large. A Doppler-free approach has been tested for the first time at ISOLDE, making use of two counter propagating laser beams, which are both required to excite a two-photon transition. The resolution is then limited by the laser linewidth because the Doppler-shifts seen by two photons are of equal magnitude and opposite direction, thereby summing to zero. This approach will open the door to high-resolution and high sensitivity in-source laser spectroscopy studies across the nuclear chart. Since we are restricted to ionization pathways involving two-photon transitions, this technique is complementary to the other laser spectroscopy experiments at ISOLDE.
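The cancellation can be made explicit with a first-order estimate: an atom moving with velocity component $v$ along the beam axis sees the two counter-propagating photons of laboratory frequency $\nu_{L}$ shifted in opposite directions,
$$ \nu_{L}\left(1+\frac{v}{c}\right) + \nu_{L}\left(1-\frac{v}{c}\right) = 2\,\nu_{L}, $$
so the two-photon resonance condition $2\nu_{L} = \nu_{0}$ is independent of $v$ to first order. What remains are the laser linewidth and the much smaller second-order (relativistic) Doppler shift of order $v^{2}/c^{2}$.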
Here we report on the first demonstration of this method inside the ISOLDE target-ion-source assembly using a newly developed injection-seeded ring Ti:Sa laser cavity for reduced laser linewidth. The Doppler-free spectra of stable silicon isotopes have been obtained, repeating a historical study that was carried out in an independent atomic beam apparatus [3]. By making use of the same ionization scheme, we were able to provide a benchmark for comparison of the resolution and extracted nuclear data. Based on this initial feasibility study, the scope of applicability of the Doppler-free two-photon spectroscopy technique at ISOL facilities will be discussed and presented.
[1] M. J. G. Borge and K. Blaum, https://doi.org/10.1103/PhysRevA.88.052510
[2] See references in: V. Fedosseev et al., https://doi.org/10.1088/1361-6471/aa78e0
[3] K. Wendt et al., https://doi.org/10.1103/PhysRevA.88.052510
6 EMIS_2018_KC.pdf
Thursday, 20 September
Session 11 - Storage rings 500/1-001 - Main Auditorium
Convener: Jens Dilling (triumf/UBC)
Status of the low energy storage ring CRYRING@ESR 30m
Heavy, highly charged ions stored at low energy are ideal probes for various questions of modern physics that range from tests of QED, especially at high fields, to detailed investigations of nuclear reactions. An early installation of the low-energy storage ring LSR, a Swedish in-kind contribution to the FAIR project in Darmstadt/Germany, provides the necessary environment for precise experiments with slow, highly charged ions. During the last years, the immensely successful storage ring CRYRING has been connected to the powerful source of heavy, highly charged ions that is GSI/FAIR to form the CRYRING@ESR facility.
The ring can store ions ranging from a few 100 keV/nucleon to a few MeV/nucleon. Heavy, highly charged ions up to bare or hydrogen-like uranium are produced at the GSI accelerator facility at about 400 MeV/nucleon, decelerated and cooled in the experimental storage ring ESR to about 4 MeV/nucleon, and then transported into CRYRING@ESR. There the ion beam can be decelerated further, cooled with an electron cooler, and stored for experiments, or extracted. An in-ring gas target will be set up, as well as a number of single-particle detectors and laser-ion-beam interaction zones. Three experiments have already been accepted by the GSI/FAIR general program advisory committee for beams from the ESR and from the local source for the running period 2018/19. Additionally, 17 letters of intent have been received for experiments and tests with the local injector.
The ring installation has been finished. The local injector successfully produced H$_2$$^+$ ions accelerated to 300 keV/nucleon that were injected into the ring, stored, accelerated and cooled. Hence, all basic functions have been demonstrated. The remaining commissioning is dedicated to turning first signals into routine operation and to getting ready to accept ESR beam.
Speaker: Frank HERFURTH (GSI Helmholtzzentrum für Schwerionenforschung GmbH)
1 - CRYRING_EMIS2018.pdf
Rare-RI Ring in cyclotron facility RIBF 30m
The Rare-RI Ring (R3) is located at the RIKEN RI Beam Factory for the purpose of systematic measurements of basic properties of nuclei, such as mass and lifetime, including the r-process region. R3 is an isochronous ring of an unprecedented concept that can inject, circulate, and quickly extract ions one by one. An isochronous field can be formed precisely over a wide momentum range by trim coils attached to R3, as in a cyclotron. This mechanism makes possible precise mass measurements of extremely short-lived (~1 ms) Radioactive Isotopes (RIs) that are rarely produced (several particles/day). In the commissioning using RIs whose masses are well known, we demonstrated that the masses of several RIs within a wide range of m/q (~5%) can be derived relative to one reference RI by measuring the flight time under precise isochronous optics. An accuracy on the order of $10^{-6}$ can be reached by performing a beta- or rigidity-correction. In the presentation, the features of R3 will be described together with the results of the commissioning, and future prospects will be mentioned.
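As a simplified sketch of this relative mass determination (a first-order form consistent with the standard isochronous mass spectrometry relation; the exact R3 analysis includes higher-order and velocity corrections), a small deviation of the mass-to-charge ratio from the reference species translates into a revolution-time shift as
$$ \frac{\Delta(m/q)}{(m/q)_{\mathrm{ref}}} \;\approx\; \gamma_t^{2}\,\frac{\Delta T}{T_{\mathrm{ref}}}, $$
where $\gamma_t$ is the transition point of the isochronous optics; the residual velocity dependence across the ~5% m/q acceptance is what the beta- or rigidity-correction mentioned above removes.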
Speaker: Yoshitaka Yamaguchi
EMIS2018yyamaguchi.pdf
Application of In-ring Slit on Isochronous Mass Spectrometery 15m
Nuclear mass measurements provide valuable information on the nuclear binding energy which reflects the summed result of all interactions among its constituent protons and neutrons. The systematic and accurate knowledge of nuclear masses have wide application in many areas of subatomic physics ranging from nuclear structure and astrophysics to the fundamental interactions and symmetries depending on the achieved mass precision [1].
A storage ring coupled with a radioactive beam line has been proven to be a powerful tool for mass measurements of exotic nuclides. This kind of mass spectrometry was pioneered at the ESR-GSI in Darmstadt in the 1990s and then successfully established at the CSRe-IMP in Lanzhou. For the ions stored in the CSRe, their revolution times $T$ are, to first order, a function of their mass-over-charge ratios $m/q$ and their momenta $p$ as follows:
$\frac{\Delta{T}}{T} \approx \frac{1}{\gamma_t^2} \frac{\Delta(m/q)}{m/q}-(\frac{1}{\gamma^2}-\frac{1}{\gamma_t^2})\frac{\Delta{p}}{p},$
where $\gamma_t$ is the so-called transition point of the ring and $\gamma$ is the relativistic Lorentz factor.
Numerous efforts based on this principle have been made to improve the mass resolving power of storage-ring-based mass spectrometry. According to the equation, the mass resolving power $\frac{m}{\Delta m} \propto \frac{T}{\Delta T}$ for a specific nuclide is limited by two parameters. One is the phase-slip factor $\eta$, defined as $\eta=\frac{1}{\gamma^2}-\frac{1}{\gamma_t^2}$, representing how well the isochronous condition is fulfilled for this nuclide. The other is the momentum spread $\frac{\Delta{p}}{p}$, in other words the magnetic rigidity acceptance of the storage ring, which is almost the same for all nuclides. In this contribution, we will report on recent developments of the Isochronous Mass Spectrometry (IMS) based on the storage ring CSRe.
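Setting $\eta \to 0$ in the expression above for a nuclide at the isochronous condition gives a compact estimate of the attainable resolving power (a rearrangement of the stated relation, not an additional result):
$$ \frac{m/q}{\Delta(m/q)} \;\approx\; \frac{1}{\gamma_t^{2}}\,\frac{T}{\Delta T}, $$
so for $\gamma_t \approx 1.4$ the mass resolving power is roughly half of the revolution-time resolving power; any residual phase slip or momentum spread adds to $\Delta T$ and degrades it, hence the isochronicity optimization and the momentum-defining slit discussed below.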
In the experiment, the transition point $\gamma_t$ of the CSRe was found to be about 1.396 after 12 hours of data accumulation. Nuclides with revolution times around 616 ns were under the best isochronous condition, while the revolution time of $^{52}$Co$^{27+}$ is about 614 ns. This deviation from the anticipated setting was mainly caused by imperfections of the electromagnetic field. Based on these conditions, a first-order isochronicity optimization was made via modifications of the quadrupole and sextupole magnetic field strengths [2], and thus the transition point $\gamma_t$ of the CSRe was corrected to 1.400. In this way the phase-slip factor of $^{52}$Co$^{27+}$ was significantly reduced. The success of this isochronicity optimization was confirmed via 8 hours of data accumulation.
To make a further improvement in separating $^{52}$Co from its low-lying isomer $^{52m}$Co (the excitation energy is about 380 keV, inferred from its mirror nucleus $^{52}$Mn neglecting isospin symmetry breaking, and the corresponding difference in revolution time is about 2.4 ps), the momentum acceptance $\frac{\Delta{p}}{p}$ was limited via a slit installed at the dispersive section of the straight part of the storage ring CSRe. The slit opening was 60 mm, corresponding to a momentum acceptance of the CSRe of $\frac{\Delta{p}}{p}$ $\sim$ $4\times10^{-4}$ (sigma), while in previous experiments under the same optical setting but without the slit it was $\frac{\Delta{p}}{p}$ $\sim$ $8\times10^{-4}$ (sigma) [2]. With the help of the slit, the mass resolving powers for all nuclides were notably improved step by step. Despite the decrease in statistics, a high-resolution revolution-time spectrum for $^{52}$Co and $^{52m}$Co was finally obtained, and the mass resolving power of IMS reached the $4 \times 10^{5}$ (sigma) region for the first time [3].
[1] K. Blaum, Physics Report 425 (2006) 1.
[2] X. Gao et al., Nuclear Instruments and Methods in Physics Research Section A 763 (2014) 53.
[3] X. Xu et al., Physics Review Letters 117 (2016) 182503.
Speaker: Xing XU (Institute of Modern Physics)
2018-Sep-20-XU_-EMIS.pptx
Design, optimization and construction of multi-reflection time-of-flight mass analyzer for Lanzhou Penning Trap 15m
The multi-reflection time-of-flight mass spectrometer (MRTOF-MS) was first proposed by Wollnik and Przewloka. It has been developed in recent years as a new device for nuclear mass spectrometry and isobaric separation with ion bunches at kinetic energies ranging from a few hundred electron-volts to a few kilo-electron-volts. By extending the flight path using multiple reflections between electrostatic ion mirrors, an MRTOF-MS can reach a very high mass resolving power of >100,000 in a compact structure. Moreover, it also has other unique advantages, such as an extremely short measurement time, a large mass range, very high sensitivity and non-scanning operation. Up to now, many MRTOF-MSs for mass measurements and isobaric separation have been commissioned or are under construction.
A multi-reflection time-of-flight mass analyzer is being constructed for isobaric separation for the Lanzhou Penning Trap (LPT). A new method comprising two sub-procedures, a global search and a local refinement, has been developed for the design of the MRTOF-MS. The method can be used to optimize the parameters of an MRTOF-MS operating both in mirror-switching mode and in in-trap-lift mode. Using this method, an MRTOF mass analyzer, in which each mirror consists of five cylindrical electrodes, has been designed. The optimal potential parameters of the electrodes have been obtained and compared directly for our MRTOF mass analyzer operating in both modes. In the mirror-switching mode, a maximal resolving power of 1.3$\times$10$^5$ has been achieved with a total time-of-flight of 6.5 ms for the ion species $^{40}$Ar$^{1+}$, and 7.3$\times$10$^4$ with a total time-of-flight of 4.3 ms in the in-trap-lift mode. The simulation also reveals the relationships between the resolving power and the potentials applied to the mirror electrodes, the lens electrode and the drift tube.
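For orientation, the quoted figures are consistent with the usual time-of-flight scaling $m/\Delta m \approx t/(2\,\Delta t)$ (a standard estimate; the peak width below is inferred from it rather than taken from the simulation):
$$ \Delta t \;\approx\; \frac{t}{2\,(m/\Delta m)} \;=\; \frac{6.5\ \mathrm{ms}}{2\times1.3\times10^{5}} \;\approx\; 25\ \mathrm{ns}, $$
i.e. the simulated resolving power corresponds to a time-of-flight peak width of a few tens of nanoseconds after a 6.5 ms flight.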
In this conference, we will present the design details, optimization method and the results obtained. The status and progress of MRTOF mass analyzer for LPT will also be reported.
Speaker: Dr Yulin TIAN (Institute of Modern Physics, Chinese Academy of Sciences)
EMIS2018_YulinTian.pdf
Session 12 - Ion guide, gas catcher, and beam manipulation techniques 500/1-001 - Main Auditorium
Convener: Piet Van Duppen (KU Leuven (BE))
Re-acceleration of Rare Isotope Beams at Heavy-Ion Fragmentation Facilities 30m
Heavy-ion fragmentation facilities provide a wide range of rare isotope beams of most chemical elements, as the in-flight production is fast and chemistry independent. Rare isotopes are delivered at half the speed of light and are used for a wide range of nuclear science experiments. In order to leverage the advantages of the production mechanism for experiments that require lower energies and high-quality beams, beam stopping and re-acceleration need to be employed. The first re-acceleration system at a heavy-ion fragmentation facility in the world is ReA3 at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University. Beams of rare isotopes are produced and separated in-flight in NSCL's Coupled Cyclotron Facility (CCF) at energies of typically 100 MeV/u and subsequently stopped in a gas cell. The rare isotopes are then continuously extracted as 1+ ions and transported into a beam cooler and buncher, followed by a charge breeder based on an Electron Beam Ion Trap (EBIT). In the charge breeder, the ions are ionized to a charge state suitable for acceleration in a superconducting radiofrequency (SRF) linac and then extracted in a pulsed mode and mass analyzed. The extracted beam is bunched to 80.5 MHz and then accelerated to energies ranging from 300 keV/u up to 6 MeV/u, depending on the charge-to-mass ratio. This contribution will present the state of the art of this technique, its advantages and disadvantages, and the results obtained so far, and will discuss ongoing developments.
Speaker: Dr Antonio Villari (Facility for Rare Isotope Beams)
1 - Villari-EMIS-2018.pdf
The N=126 factory: a new facility to produce the very-heavy neutron-rich isotopes 20m
A new facility, the N=126 factory, is currently under construction at Argonne National Laboratory. It will use multi-nucleon transfer reactions to create neutron-rich isotopes of the heaviest elements for studies of interest to the formation of the last abundance peak in the r-process. This region of the nuclear chart is difficult to access by standard fragmentation or spallation reactions and as a result has remained mostly unexplored. The nuclei of interest, very neutron-rich isotopes around Z=70-95, will be produced by multi-nucleon exchange of a high-intensity 10 MeV/u heavy-ion beam on the most neutron-rich stable isotopes of heavy elements such as 198Pt and 238U. This reaction mechanism can transfer a large number of neutrons and create very neutron-rich isotopes with cross-sections larger than a mb. The reaction mechanism is a nuclear surface process and the reaction products come out at around the grazing angle, which makes them very difficult to collect. The N=126 factory circumvents this difficulty by using a unique large high-intensity gas catcher, similar to the one currently in operation at CARIBU, to collect the target-like reaction products and turn them into a low-energy beam that will then be mass separated with a medium-resolution electromagnetic separator (ΔM/M ~ 1/1500), followed by an RFQ buncher and an MR-TOF (ΔM/M ~ 1/100000) system. The extracted radioactive beams will be essentially pure and available at low energy for mass measurements with the CPT mass spectrometer or decay studies with the X-array. The status of the overall facility construction will be presented, together with commissioning results of the novel front end and the observed yields.
This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357 and used resources of ANL's ATLAS facility, an Office of Science User Facility.
Speaker: Prof. Guy Savard (Argonne National Laboratory)
EMIS2018_Savard_N_126_final_small.pdf
Ba-ion extraction from high pressure Xe gas for double-beta decay studies with EXO 20m
An RF-only ion funnel has been developed to efficiently extract single Ba ions from a high-pressure (10 bar) xenon gas into vacuum. Gas is injected into the funnel where ions are radially confined by an RF field while the neutral gas escapes. Residual gas flow alone (without any DC drag potential) transports the ions longitudinally through the funnel. In the downstream chamber the ions are captured by ion guides and delivered to an ion identification device. The xenon gas is captured by a cryopump and then recovered back into storage cylinders for future use.
With the current setup ions were extracted from xenon gas of up to 10 bar. This was one of the highest gas pressures ions have been extracted from so far. The ions were produced by a 252Cf-ion source placed in the high-pressure gas. The ion transmission has been studied in detail for various operating parameters and initial ion-identification has been achieved with a commercial mass spectrometer. An improved mass-to-charge identification via a multi-reflection time-of-flight mass spectrometer is currently being developed to further investigate the properties of the funnel and to measure the Ba-ion extraction efficiency of this setup.
This approach of ion extraction is intended for application in a future large-scale 136Xe neutrinoless double-beta decay (0nbb) experiment. The technique aims to extract the bb-decay product, 136Ba, from the xenon volume of a gaseous time-projection chamber and detect it unambiguously and efficiently. This individual identification of the decay product allows for an ideally background-free measurement of 0nbb by vetoing naturally occurring backgrounds, enabling a higher sensitivity to the 0nbb decay half-life and thus a more sensitive probe of the nature of the neutrino.
Speaker: Dr Thomas Brunner (McGill and TRIUMF)
20180916-EMIS-Brunner-final.pdf
Beam Thermalization at the National Superconducting Cyclotron Laboratory 15m
Thermalization of projectile fragment beams provides access to a wide range of low-energy rare isotope beams at projectile fragmentation facilities. The thermalization process includes slowing down the fast exotic beams in solid degraders, combined with momentum compression and removal of the remaining kinetic energy by collisions with helium buffer gas. The second-generation National Superconducting Cyclotron Laboratory (NSCL) beam thermalization facility includes a momentum-compression beam line with degraders, a large radio-frequency gas catcher constructed by Argonne National Laboratory and a low-energy transport system. A number of experiments have been carried out to characterize the behaviour of the gas catcher for capturing and extracting a variety of fast beams. The stopping and extraction efficiency as a function of incoming particle rate, the effect of range focusing on extracted beam rates, the drift time in the gas catcher and the chemical forms of extracted ions have been studied for a variety of chemical elements. The combined stopping and extraction efficiencies were found to vary from 0.05% to 40% for fragments ranging from O-14 to Ga-76. Careful selection of degraders and dispersion matching of the fast beam to the wedge angle increases the extracted beam rate significantly. Since different rare ion beams require different wedge angles, a variable wedge-angle device has been constructed. The properties of the gas catcher, the techniques used to thermalize radioactive beams and the performance of the whole system will be presented.
This work was supported in part by the National Science Foundation under Grant PHY-11-002511 and by the Office of Nuclear Physics Contract DE-AC02-06CH11357.
Speaker: Dr Chandana Sumithrarachchi (National Superconducting Cyclotron Laboratory)
EMIS-2018-V3.pdf
Actinide ion beams by in-gas-cell laser resonance ionization, recoil sources, and on-line production at IGISOL 15m
The production of actinide ion beams has become a focus of recent efforts at the IGISOL facility of the Accelerator Laboratory, University of Jyväskylä, aimed at the measurement of nuclear properties of heavy elements using high-resolution optical spectroscopy [1]. Recently, off-line ion beam production of plutonium and thorium using laser resonance ionization combined with filament dispensers in a gas cell has been the subject of extensive studies. Additionally for thorium, which is of interest mainly because of the $^{229}$Th isotope and its extremely low-lying isomeric state [2], development of a $^{233}$U alpha-recoil source and on-line production activities have now commenced.
Both plutonium [3] and thorium [4] show unexpected phenomena during laser resonance ionization in a gaseous environment. A plutonium ionization scheme that has been reported to have high efficiency in vacuum (hot cavity) performed poorly in the gas cell due to significant collisional quenching of states. The high density of atomic states in actinide elements has also complicated the understanding of the laser ionization process. Therefore, the selective ionization of plutonium was investigated further with a tunable, grating-based Ti:sapphire laser developed by the Applied Quantum Beam Engineering group from Nagoya University [5]. For the filament dispensers of $^{229}$Th, an additional challenge has been the low volatility of thorium, contaminants and scarcity of $^{229}$Th material.
A gas cell with $^{233}$U alpha-recoil sources is also a viable approach towards the production of a low-energy $^{229}$Th ion beam. Two different sources have been characterized at IGISOL with gamma- and alpha-ray spectroscopy by taking measurements from the sources directly and via implantation foils. The Rutherford backscattering spectrometer of the local ion beam analysis facility was also used to characterize the sources. The findings of these studies emphasize the importance of having control over the source quality, thickness and contaminants.
The first on-line experiment for the production of $^{229}$Th from a light-ion fusion-evaporation reaction on $^{232}$Th targets has also been performed. Although the identification of $^{229}$Th was not directly possible due to the long half-life (7932 years), several alpha-active reaction products were detected and a yield of about 400 ions/s/µA for $^{229}$Th was deduced from the $^{227}$Pa yield, the known detection efficiency and cross-section estimates. The challenge of on-line production is the competing (and overwhelming) fission channel, which produces a large number of fission fragments that are expected to cause strong ionization of the buffer gas. Also, significant target damage was seen to be a problem because the targets were kept as thin as possible. This has prompted new target manufacturing concepts which are currently being considered.
[1] A. Voss et al., Phys. Rev. A 95 (2017) 032506.
[2] L. von der Wense et al., Nature, 533 (2016) 47.
[3] I. Pohjalainen et al., Nucl. Instr. Meth. B 376 (2016) 233.
[4] I. Pohjalainen et al., to be submitted (2018).
[5] H. Tomita et al., Progress in Nuclear Science and Technology, 5, in press.
Speaker: Ilkka Pohjalainen (University of Jyväskylä)
Online Tests of the Advanced Cryogenic Gas Stopper at NSCL 15m
Linear gas stoppers filled with helium have become a common tool to convert high-energy rare isotope beams into low-energy beams. The National Superconducting Cyclotron Laboratory (NSCL) has designed and fabricated a new cryogenic gas stopper to maximize efficiency and beam-rate capability in order to increase the scientific reach at the facility. Compared to earlier designs, the Advanced Cryogenic Gas Stopper (ACGS) is expected to have increased extraction efficiency, reduced transport time, reduced molecular contamination of the isotope of interest, and minimized space-charge effects. A novel 4-phase radio-frequency wire-carpet generates a traveling electrical wave for fast ion transport, cryogenic cooling of the helium gas chamber reduces unwanted molecular formation, and a new planar geometry with a bare wire-carpet in the mid-plane of the stopper alleviates space-charge effects. Prototype testing of the ACGS components has shown wire-carpet transport efficiencies greater than 95% and transport speeds up to 100 m/s. First online tests of the ACGS with radioactive beams have been performed and the results will be presented.
Speaker: Kasey Lund (NSCL/MSU)
6 - Lund-The_Advanced_Cryogenic_Gas_Stopper_EMIS2018.pdf
Session 13 - Applications of radioactive ion beams 500/1-001 - Main Auditorium
Convener: Magdalena Kowalska (CERN)
EMIS for Health 30m
Electromagnetic isotope separation is an essential technology for the production of radionuclides with high radionuclidic purity, including those that are the "fuel" for nuclear medicine applications. Radionuclides for imaging and therapy are produced by charged-particle-induced reactions at accelerators or by neutron-induced reactions in nuclear reactors. Yet both methods require as a prior step the preparation of a suitable target that often has to be isotopically enriched. I will discuss present needs and methods for isotope enrichment and possible synergies with new techniques that were initially developed for nuclear physics experiments.
Speaker: Ulli Köster (Institut Laue-Langevin)
Ulli-Emis-pdf.pdf
Beta-detected Nuclear Magnetic Resonance: From nuclear physics to biology 15m
β-NMR is a powerful tool which takes advantage of the anisotropic nature of β decay to obtain information about the environment in which the radioisotope is implanted or to study the properties of the radioisotope itself. Nuclei are first polarized, then implanted into a crystal or sample of interest, from which β-decay intensities are measured in opposing directions. The relevant information is extracted from an excitation radio frequency applied to the system, which resonantly destroys the nuclear polarization. This technique has the advantage of being significantly more sensitive (up to 10 orders of magnitude) than traditional NMR. The new VITO beamline has been developed over the last two years at the ISOLDE facility (CERN) to provide beams of spin-polarized radioactive nuclei for study. This culminated in a successful commissioning experiment measuring the β-decay asymmetry of 26Na and 28Na in a crystal of NaF, the results of which were published in March of 2017 [1].
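Schematically, and in standard β-NMR notation rather than that of the VITO experiment itself, the measured quantity is the count-rate asymmetry between the two opposing detectors, which follows the β-decay angular distribution
$$ W(\theta) \;\propto\; 1 + \frac{v}{c}\,a\,P\,\cos\theta, \qquad \mathcal{A} \;=\; \frac{N(0^{\circ}) - N(180^{\circ})}{N(0^{\circ}) + N(180^{\circ})} \;\approx\; \frac{v}{c}\,a\,P, $$
where $a$ is the β-decay asymmetry parameter and $P$ the nuclear polarization; when the applied radio frequency matches the resonance, the polarization is destroyed and the asymmetry collapses, which is how the resonance is detected.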
A recent proposal is to apply this powerful technique to study biological systems. One such example is to observe how Na+ cations interact with DNA G-quadruplex structures in solution, the subject of campaign IS645 [2]. This contribution will focus on the results from these initial studies.
[1] M. Kowalska et al., J. Phys. G: Nucl. Part. Phys. 44, 084005 (2017)
[2] M. Kowalska, V. Araujo Escalona et al. Interaction of Na ions with DNA G-quadruplex structures studied directly with Na b-NMR spectroscopy. INTC-P-521 https://cds.cern.ch/record/2299798/files/INTC-P-521-ADD-1.pdf
Speaker: Robert Dale Harding (University of York (GB))
Production of intense mass separated 11C beams for PET-aided hadron therapy 15m
We will present a novel production system based on the ISOL method (Isotope Separation On-Line) for intense mass-separated $^{11}$C beams for PET-aided hadron therapy. Hadron therapy, and particularly carbon therapy, is a very precise treatment for localized tumors in which the tumor is irradiated with a pure, monoenergetic and high-intensity particle beam. Carbon therapy significantly reduces the dose exposure to healthy tissue compared to conventional photon therapy. However, the verification of the actual dose deposition in the human body remains difficult. Complex treatment planning systems are required that simulate the beam trajectory and thus calculate the dose distribution of the particle beam in the human body. Such treatment planning systems suffer from uncertainties that originate, for instance, from range deviations and from moving organs due to the patient's breathing. Therefore, within the Marie Skłodowska-Curie innovative training network MEDICIS-Promed, a $^{11}$C based carbon therapy protocol is being developed. $^{11}$C is a $\beta^+$-emitter (T$_{1/2}$ = 20.4 min) widely used in PET imaging. Consequently, by replacing the stable $^{12}$C beam with its radioactive isotope $^{11}$C, therapy can be combined with on-line PET imaging. The PET images recorded simultaneously with the treatment represent a 3D dose distribution map of the irradiation field and thus provide an on-line dose verification.
While the advantages of a $^{11}$C based hadron therapy are obvious, the challenge remains to produce a radioactive particle beam of sufficient intensity. Effective treatments require $4\cdot10^8$ ions/spill delivered to the patient. This calls for a radioactive ion beam production system capable of producing a $^{11}$C beam of high intensity. Therefore, we propose a production system based on the ISOL method, which is capable of producing pure and intense radioactive ion beams. This technique includes the irradiation of a solid target with a particle beam. The isotope of interest is produced, among many other isotopes, via a nuclear reaction inside the target. The isotopes then have to be released from the target and effuse to an ion source, where the atoms are ionized. Subsequently, the ions can be accelerated and mass separated by a deflecting magnet that bends the ions onto trajectories according to their mass-over-charge ratio, producing (radioactive) ion beams. To ensure the highest possible intensity, optimization of the different steps from target irradiation to mass separation and beam formation is essential. We present our proposed production system, consisting of a solid boron nitride target, a cyclotron for low-energy proton irradiation and an ECR ion source. Optimization of important aspects, such as the isotope release, transport and ionization efficiency, will be discussed.
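The optimization task outlined above can be summarized by a simple efficiency chain (illustrative notation; the individual factors are the quantities to be optimized, not measured values):
$$ I_{^{11}\mathrm{C}} \;=\; P_{\mathrm{in\text{-}target}} \cdot \varepsilon_{\mathrm{release}} \cdot \varepsilon_{\mathrm{ionization}} \cdot \varepsilon_{\mathrm{transport}}, $$
where the product must reach the $4\cdot10^{8}$ ions per spill required for treatment; for a given in-target production rate this fixes the minimum acceptable value of each efficiency factor.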
Speaker: Simon Thomas Stegemann (KU Leuven)
Very high specific activity Er-169 production 15m
The new facility CERN-MEDICIS produces isotopes using the CERN proton beam at 1.4 GeV coming from the CERN Proton Booster. The produced radioisotopes are dedicated for medical applications. A wide range of innovative radionuclides can be produced through its off-line mass separator. Indeed the mass separation allows the production of radionuclides which are not available at sufficient specific activity using conventional chemical separation methods only, in particular in the lanthanides region.
One radiolanthanide with very promising decay properties for targeted radionuclide therapy is Er-169. It shows favorable nuclear decay characteristics among β- emitters with a half-life of about one week, as it has the lowest beta energy and no disturbing gamma rays. More elaborate dosimetry calculations have validated that Er-169 would provide one of the best ratios of absorbed dose to the tumor versus normal tissue [1]. Unfortunately, Er-169 cannot be produced directly with high specific activity. Reactor irradiation of highly enriched Er-168 leads to specific activities of 0.4–4 GBq/mg (for thermal neutron fluxes of 10$^{14}$–10$^{15}$ cm$^{-2}$s$^{-1}$ and irradiation for one half-life). This corresponds to a "dilution" of the radioactive Er-169 with 8000 or 800 times more stable Er-168, respectively. This low-specific-activity, carrier-added Er-169 is actually in clinical use, but only for radiosynovectomy of finger joints in the therapeutic management of arthritis [2,3], where the low specific activity is acceptable. Higher specific activities would therefore allow no-carrier-added Er-169 to be considered for receptor-targeted therapies.
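The quoted dilution factors can be checked with a short back-of-the-envelope estimate (illustrative only, using $T_{1/2} \approx 9.4$ d for Er-169): the carrier-free specific activity is
$$ A_{\mathrm{spec}} \;=\; \frac{\ln 2}{T_{1/2}}\cdot\frac{N_{A}}{M} \;\approx\; \frac{0.693}{9.4\times86400\ \mathrm{s}}\times\frac{6.02\times10^{23}\ \mathrm{mol^{-1}}}{169\ \mathrm{g/mol}} \;\approx\; 3\times10^{15}\ \mathrm{Bq/g} \;\approx\; 3\times10^{3}\ \mathrm{GBq/mg}, $$
so reactor-produced material at 0.4–4 GBq/mg is indeed diluted by roughly a factor of 8000 to 800 relative to carrier-free Er-169, consistent with the numbers above.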
The production of Er-169 with the off-line mass separator allows a significant increase of the specific activity and broadens the potential applications of this radionuclide in medicine. For this reason, in spring 2018 a highly enriched Er-168 target will be irradiated in the ILL reactor and shipped to CERN, where the mass separation will be performed. We will present the experimental results of the irradiation and separation in comparison to the yields estimated from off-line experiments. Future improvements of the overall process efficiency will be discussed.
[1] H. Uusijärvi et al., Electron- and positron-emitting radiolanthanides for therapy: aspects of dosimetry and production. J Nucl Med 2006;47:807–814.
[2] K. Liepe, Radiosynovectomy in the therapeutic management of arthritis. World J Nucl Med 2015;14:10–15.
[3] R. Chakravarty et al., Reactor production and electrochemical purification of 169Er: a potential step forward for its utilization in in vivo therapeutic applications. Nucl Med Biol 2014;41:163–170.
Speaker: Roberto Formento Cavaier (Advanced Accelerator Application, GIP Arronax, Subatech, CERN)
Convener: Valentine Fedosseev (CERN)
Laser Isotope Separation revisited – radioisotope purification by resonance ionization mass spectrometry at Mainz University 20m
Klaus Wendt, Reinhard Heinke, Tom Kieck, Nina Kneip, Pascal Naubereit, Dominik Studer
As a spin-off of the on-going development of on-line laser ion sources, and based on the suitability and reliability of present-day laser systems for this specific task, resonance ionization mass spectrometry has demonstrated its great versatility in the field of radioisotope separation and ion beam purification. This application primarily profits from the universality of the technique, the high overall efficiency of the process and the unrivaled suppression of isobaric and other backgrounds in the final sample. For the elimination of disturbances from neighboring isotopes, specific techniques of ion source operation or ion-beam gating have been developed.
In the field of long-lived radioisotopes, as accessible at the ion source development set-up and off-line radioactive ion beam facility of the RISIKO laser mass separator at Mainz University, a number of applications in fundamental and applied research are carried out in addition to spectroscopic activities, e.g. the preparation of nuclear-medical species for CERN-MEDICIS [1].
One example of specific radioisotope purification concerns the isotope 163-Ho and its efficient implantation into the magnetic metallic calorimeter chips of the ECHo collaboration for the investigation of the neutrino mass [2]. Purification of the isotope 53-Mn, delivered from beam-dump recovery as part of the MeaNCoRN project at PSI [3], is carried out to support lifetime measurements. A further activity concerns a radiometrically clean implantation of the isotope 225-Ra as a standard for the PTB.
Advances and limitations of the technique as realized at the Mainz University RISIKO mass separator will be discussed, concerning the spectroscopic background, the laser and ion source optimization and, finally, the optimization of the collection and implantation unit.
[1] R. M. dos Santos Augusto et al., CERN MEDICIS – A New Facility, Appl. Sci. 4, 265-281 (2014)
[2] L. Gastaldo et al., The electron capture in 163Ho experiment – ECHo, Eur. Phys. J. Special Topics 226, 1623–1694 (2017)
[3] R. Dressler et al., MeaNCoRN – Measurement of Neutron capture cross sections and determination of half-lives of short-lived Cosmogenic Radio-Nuclides, https://www.psi.ch/lrc/meancorn
Speaker: Klaus Wendt (Johannes Gutenberg Universitaet Mainz (DE))
Applications of β-radiation detected NMR in wet chemistry, biochemistry and medicine 20m
Many processes in nature are governed by the interaction of biomolecules with metal ions. Some biologically highly relevant metal ions, such as Mg2+, Cu+ and Zn2+, are silent in most spectroscopic techniques, rendering characterization of their biological function difficult. Therefore, there is a demand for new experimental approaches to directly study these metal ions.
Recently, β-radiation detected nuclear magnetic resonance (β-NMR) spectroscopy was successfully applied to liquid samples at the ISAC facility at TRIUMF [1], Canada's particle accelerator center. In contrast to any previously reported measurements, the resonance spectra recorded for 31Mg+ implanted into solutions of different ionic liquids displayed well-resolved resonances originating from oxygen and nitrogen coordinating Mg2+ ions in typical Mg2+ complexes, illustrating that β-NMR can in fact discriminate between different structures. Furthermore, the recorded resonance line widths are very narrow and in some cases surpass those reported for conventional NMR spectroscopy on similar systems, underlining the complementary advantages of β-NMR. This achievement marks a milestone in applications of β-NMR to liquid samples and opens new opportunities in the fields of wet chemistry, biochemistry and medicine.
Results from the recent β-NMR experiments with 31Mg+ ions performed at TRIUMF [1,2] and the future plans will be presented and discussed.
[1] D. Szunyogh et al. Direct observation of Mg2+ complexes in ionic liquid solutions by 31Mg β-NMR spectroscopy. Manuscript submitted to JACS.
[2] R. McFadden et al. On the use of 31Mg for β-detected NMR studies of solids. JPS Conf. Proc. 21, 011047 (2018).
Speaker: Dr Monika Stachura (TRIUMF)
Friday, 21 September
Session 15 - Techniques related to high-power radioactive ion beam production 500/1-001 - Main Auditorium
Convener: Carmen Angulo (SCK-CEN)
High-power target development for the next-generation of ISOL facilities 30m
The production of high-purity radioactive ion beams (RIB) through the isotope separation on-line (ISOL) method makes possible unique research programmes in several fields of science. The demand for beam time continues to be high, while the study of more and more exotic isotopes, difficult to produce in sufficient quantities, is of primary interest for many of the currently defined research projects. At the same time, the growing interest from the medical field cannot pass unnoticed, the ISOL technique giving access to the most innovative medical isotopes with extremely high specific activity. Increasing the RIB intensity is therefore a priority, and it is carefully addressed through several R&D programmes worldwide.
One method to increase the RIB intensity is to increase the intensity of the primary beam on target. The ISAC facility at TRIUMF is capable of operating with high-intensity (up to 0.1 mA) 500-MeV proton beams, making it the highest-power ISOL facility in operation worldwide. To reach this level, composite target materials have been developed and integrated in a high-power target container capable of dissipating up to 20 kW through radiative cooling. Next-generation ISOL facilities plan to increase this power even further, which calls for innovative target designs. One such example is the LIEBE target (LIquid lEad Bismuth eutectic loop target for EURISOL), where, for the first time, the concept of a liquid target material circulating in a loop is being put forward. This loop-type target allows the incorporation of a heat exchanger for the necessary heat removal.
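As a purely illustrative order-of-magnitude estimate of what radiating 20 kW implies (the emissivity and radiating areas below are assumed example values, not actual ISAC target-container parameters), the Stefan-Boltzmann law gives the equilibrium surface temperature required:

# Order-of-magnitude estimate of the surface temperature needed to radiate
# a given power. Emissivity and areas are assumed example values only.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiating_temperature(power_W, area_m2, emissivity, ambient_K=300.0):
    """Surface temperature at which the net radiated power equals power_W."""
    t4 = power_W / (emissivity * SIGMA * area_m2) + ambient_K ** 4
    return t4 ** 0.25

if __name__ == "__main__":
    for area in (0.02, 0.05, 0.10):  # m^2, assumed radiating surface areas
        T = radiating_temperature(20e3, area, emissivity=0.9)
        print(f"area = {area:.2f} m^2 -> T ~ {T:.0f} K")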
Similar or even greater heat-management challenges are faced by ISOL targets at high-intensity but lower-energy primary beam facilities. Examples are SPES and ISOL@MYRRHA-phase 1, where, even if the primary-beam power does not exceed the level of the ISAC facility, the lower energy of the protons (40/70 MeV and 100 MeV, respectively) increases the power deposited in the target. The concept of these targets therefore departs from that of the high-power targets of ISAC.
Finally, the presentation will discuss high-power converter targets, used to produce secondary particles (e.g., neutrons) that irradiate a fissile material. Dealing with the power deposition in the converter instead of in the ISOL target represents an advantage. However, the design of this target system requires detailed R&D for optimized RIB production.
Speaker: Dr Lucia Popescu (SCK-CEN, Mol, Belgium)
20180921_LPopescu_High_power_targetry_presentation_EMIS2018final.pdf
Towards 100 kW targets for electron driver beams at the TRIUMF-ARIEL Facility 20m
The TRIUMF ARIEL Facility will add two new target stations for Radioactive Ion Beam (RIB) production at TRIUMF, one of which will be capable of accepting a 100 kW electron "driver beam". TRIUMF is already a world leader in the operation of "high power" (50 kW) targets for proton driver beams; however, in many aspects, and particularly for the target, the exploitation of an electron driver beam presents a fresh set of challenges.
An electron-gamma (e-$\gamma$) converter is required upstream of the target, with the resulting $\gamma$-rays used to irradiate the target materials. The spatial profile of the $\gamma$-rays necessitates significant changes to the dimensions and the orientation of the target with respect to the driver beam, compared to targets on the proton stations. The resulting asymmetric power deposition from irradiation, together with the proximity to the converter, results in new requirements both for the target heating and for methods of increasing the effective emissivity to facilitate power dissipation. The ARIEL era will also introduce hermetic target vessels at TRIUMF, enabling the use of new types of target materials. In addition to the significant opportunities this may bring in the range and the yield of RIB production, there is also the potential for a significant increase in the ion/neutral load on the ion sources coupled to the targets.
The latest results from the developments to meet the heating and thermal dissipation requirements for targets for use with 100 kW electron driver beams will be presented, together with options to mitigate the effects of increased ion/neutral loads on the ion sources.
Speaker: Dr Thomas Day Goodacre (TRIUMF)
BRIF: from the first proton beam to RIB production 20m
Various technologies for high-current compact H- cyclotrons have been developed since the 1990s [1,2]. The energy of compact-style machines was first raised to 100 MeV for radioactive ion beam production [3,4]. The BRIF project (Beijing Radioactive Ion-beam Facility) was approved to start construction in 2011, and the first proton beam was extracted from the 100 MeV compact cyclotron CYCIAE-100 on July 4, 2014 [5]. This paper will present the progress on BRIF since the first proton beam, including the cyclotron improvements for stable operation and mA-level beam acceleration, the RIB production and the implementation of a mass resolution of 20000, and future developments.
[1] Fan Mingwu, Zhang Tianjue, Initial Operation of CIAE Medically Used Cyclotron, Proc. of the 1997 Particle Accelerator Conference, Vancouver, 1997, 3834-3836
[2] Bruce Milton et al., A 30 MeV H- Cyclotron for Isotope Production, Proc. of the 12th International Conference on Cyclotrons and Their Applications, Berlin, Germany
[3] Tianjue Zhang et al., A New Project of Cyclotron Based Radioactive Ion Beam Facility, Proc. of 3rd APAC, 2004, Gyeongju, Korea
[4] Tianjue Zhang, Zhenguo Li, Yinlong Lu, Progress on Construction of CYCIAE-100, Proc. of 19th International Conference on Cyclotrons and Their Applications, 2010, Lanzhou, China (invited)
[5] The Project Team of BRIF (written by Tianjue Zhang and Jianjun Yang), The Beam Commissioning of BRIF and Future Cyclotron Development at CIAE, Nuclear Instruments and Methods in Physics Research B 376 (2016) 434–439, doi:10.1016/j.nimb.2016.01.022
Speaker: Prof. Tianjue Zhang (China Institute of Atomic Energy)
BRIF - Tianjue Zhang 20180921 V2.pdf
Current status of Isotope Separation On-Line (ISOL) facility at RAON 20m
Two types of advanced high-power rare isotope production facilities, based on Isotope Separation On-Line (ISOL) and an In-Flight (IF) fragment separator, are being developed by the Rare Isotope Science Project (RISP). The installation of the ISOL facility will start in 2019 and its commissioning will be finished by 2021 at the Rare isotope Accelerator complex for ON-line experiments (RAON) in Korea. The main systems of the ISOL facility consist of a 70 MeV proton cyclotron driver, a Target/Ion Source (TIS) including a remote handling system, a beam separation and transport system, and an EBIS-type charge breeder. The first goal of the TIS development in RISP is to provide about 10^8 pps of Sn isotopes to the very-low-energy experimental hall using a high-power U target. Here, the current status of the development of the ISOL facility will be presented.
Speaker: Dr BH KANG (Institute for Basic Science)
EMIS2018 (RAON ISOL)-BHKang.pdf
Session 16 - Ion optics and spectrometers 500/1-001 - Main Auditorium
Convener: Prof. Hans Geissel (GSI Helmholtzzentrum für Schwerionenforschung GmbH, University of Giessen)
The Magnex spectrometer for double charge exchange reactions 30m
The physics of neutrinoless double beta decay has tremendous implications for particle physics, cosmology and fundamental physics. In particular, the nuclear matrix elements entering the expression of the half-life of this process play a crucial role.
The possibility of using heavy-ion induced reactions, and in particular double charge exchange reactions, as tools for extracting information on matrix elements will be presented at the conference. The basic point is that the initial and final state wave functions in the two processes are the same and the transition operators are similar. The strengths and the limits of the proposed methodology will be discussed. The experimental difficulties that limited the exploration of this kind of reaction in the past, and the advantages of using the MAGNEX large acceptance spectrometer at INFN-LNS (Italy), will be stressed.
New experimental data regarding the 40Ca(18O,18Ne)40Ar and 116Cd(20Ne,20O)116Sn double charge exchange reactions and competing channels involving the same projectiles and targets at 15 MeV/u incident energy will be shown.
Speaker: Manuela Cavallaro (INFN - National Institute for Nuclear Physics)
EMIS2018_Cavallaro_web.pdf
New ion-optical modes of the BigRIPS and ZeroDegree Spectrometer for the production of high-quality RI beams 15m
The BigRIPS projectile fragment separator$^{1,2}$ is presently the most powerful device for the research of exotic nuclei separated in flight. The scientific merits and potential of BigRIPS and of its combination with the ZeroDegree spectrometer$^2$ have been demonstrated in many different experiments for more than 10 years$^3$.

The intensity of the primary beam provided by the Superconducting Ring Cyclotron (SRC)$^4$ has been increased in recent years by more than 2 orders of magnitude, which directly yields higher intensities of spatially separated exotic nuclei at the final focal plane of BigRIPS, but inevitably also causes higher background. The spatial separation of fragments is performed by a two-fold $B\rho -\Delta E-B\rho$ method using two wedge-shaped degraders placed at the central focal planes of the first stage (F0 to F2) and the second stage (F3 to F7). In the standard operating mode of BigRIPS, the two $B\rho -\Delta E-B\rho$ spatial separations are subtractive in resolving power; here we present an additive mode which has been developed and realized in first machine tests. The calculated, significantly increased spatial separation power has been demonstrated in measurements, and examples of experiments where the additive mode is essential will be presented in this contribution. In addition to the higher separation power, the additive mode has a favorable image condition at F6, which allows for a background reduction via the application of slits and diagnostics. Higher ion-optical resolving power modes, at the expense of slightly lower transmission, are also investigated and discussed. The latter will give access to heavier elements. The coupling of BigRIPS and ZeroDegree is presently realized via two independent achromatic systems. A dispersion-matched mode and also a higher angular acceptance of the ZeroDegree are also presented in this report.
(1) T. Kubo: Nucl. Instr. Meth. B204, 97 (2003).
(2) T. Kubo et al.: Prog. Theor. Exp. Phys. 2012, 03C003 (2012).
(3) T. Kubo: Nucl. Instr. Meth. B376, 102 (2016).
(4) Y. Yano: Nucl. Instr. Meth. B261, 1009 (2007).
Speaker: Dr Hiroyuki Takeda (RIKEN Nishina Center)
EMIS2018-takeda-final.pdf
ISLA, an Isochronous Separator with Large Acceptance for Experiments with Reaccelerated Beams at FRIB 15m
The Isochronous Separator with Large Acceptance (ISLA) has been identified by the ReA12 Recoil Separator working group of the FRIB Users Organization as the single device that meets the needs of all the physics cases proposed by the community for studies with reaccelerated rare isotope beams from ReA at FRIB. ReA will reaccelerate stopped FRIB beams to energies ideal for transfer reactions, multiple Coulomb excitation, fusion, and deep inelastic scattering. ISLA will provide efficient rejection of unreacted beam; large acceptances in momentum ($\pm 10\%$), angle (64 msr), and charge state ($\pm 10\%$) distributions; and high M/Q resolving power (>400) for reaction products. This purely magnetic system will accept magnetic rigidities up to 2.6 Tm, to match incoming rigidities expected from the fully upgraded ReA12, and will not be limited by electric rigidity. M/Q separation in time-of-flight and a long preceding drift will allow efficient detection at ISLA's compact focal plane, facilitating multi-physics measurements (e.g. implantation-decay coupled to $\gamma$-ray spectroscopy). Space at the target is sufficient for coupled operation with GRETA, and a beam swinger will allow incoming beam angles up to 50 degrees. Recent work will be presented on the magnetostatic design of ISLA's four large dipoles and the results of updated ion optical models.
Speaker: A Matthew Amthor (Bucknell University)
EMIS2018_ISLA_Amthor_post.pdf
New high-resolution and high-transmission modes of the FRS open up new perspectives for FAIR phase-0 experiments 15m
The FRagment Separator FRS at GSI is primarily a powerful in-flight separator for short-lived exotic nuclei based on multiple magnetic rigidity analysis (B$\rho_{max}$ = 18 Tm) and atomic energy loss in shaped matter. The quadrupole magnets determine the focal-plane conditions, and the hexapole magnets can be used to correct aberrations, which play a crucial role for projectile fragments characterized by a large phase space. The ion-optical system of the FRS can also be operated as a high-resolution spectrometer for precise momentum measurements. For example, the investigation of the influence of the tensor force as a part of the nuclear force has been performed by adding the resolving powers of the four dispersive dipole magnet stages. Dispersion-matched ion-optical settings, specially shaped degraders and tools to reduce the fragment energy spread are methods applied in high-resolution experiments. Other experiments favor high rates, which are enabled by thick targets and high optical transmission. In this contribution, we report on new ion-optical developments which are essential for the planned FAIR phase-0 experiments.
Speaker: Dr Emma Haettner (GSI Helmholtzzentrum für Schwerionenforschung GmbH)
Status of the CANREB high resolution separator at TRIUMF 15m
A new ISOL rare isotope beam production facility, ARIEL, is under construction at TRIUMF. ARIEL aims at increasing the delivery of radioactive beams threefold with respect to the present capability of the ISAC facility. Part of ARIEL is the new CANREB equipment, which can be described by its two main functionalities: a charge-breeding system that includes an RFQ cooler, an EBIS and a Nier separator, and a high-resolution mass separator system. The latter is designed to achieve a resolving power of twenty thousand for a transmitted emittance of three micrometers. The separator optics has been designed with symmetry in order to minimize high-order aberrations. The dispersion of the system is created by two identical ninety-degree magnetic dipoles with a field flatness of one part in one hundred thousand. The dipoles are tested and their magnetic field characterized before being installed on-line for operation. High-order aberrations can also be corrected by an electrostatic multipole; this features a novel design as well as a new tuning technique. In this paper we will present the latest results from the field characterization and discuss the high-level application used to tune the multipole.
Speaker: Jens Lassen (TRIUMF)
5 Status of the CANREB high resolution separator at TRIUMF - Marchetto jl_fin.pdf
Prize Presentations and Closing Remarks 30m 500/1-001 - Main Auditorium
conference closure.pdf conference closure.pptx
Lunch Break 1h
Visits of ISOLDE and SC / Satellite meetings
https://indico.cern.ch/e/EMIS2018
Citation: Lin-bao Luo, Xiu-xing Zhang, Chen Li, Jia-xiang Li, Xing-yuan Zhao, Zhi-xiang Zhang, Hong-yun Chen, Di Wu, Feng-xia Liang. Fabrication of PdSe2/GaAs Heterojunction for Sensitive Near-Infrared Photovoltaic Detector and Image Sensor Application[J]. Chinese Journal of Chemical Physics , 2020, 33(6): 733-742. doi: 10.1063/1674-0068/cjcp2005066
Fabrication of PdSe2/GaAs Heterojunction for Sensitive Near-Infrared Photovoltaic Detector and Image Sensor Application
Lin-bao Luo$^a$, Xiu-xing Zhang$^a$, Chen Li$^a$, Jia-xiang Li$^a$, Xing-yuan Zhao$^b$, Zhi-xiang Zhang$^a$, Hong-yun Chen$^a$, Di Wu$^c$, Feng-xia Liang$^b$
a. School of Electronic Science and Applied Physics, Hefei University of Technology, Hefei 230009, China
b. School of Materials Sciences and Engineering, Hefei University of Technology, Hefei 230009, China
c. School of Physics and Microelectronics, Zhengzhou University, Zhengzhou 450052, China
Corresponding author: Lin-bao Luo, E-mail: [email protected]; Feng-xia Liang, E-mail: [email protected]
In this study, we have developed a high-sensitivity, near-infrared photodetector based on PdSe$_2$/GaAs heterojunction, which was made by transferring a multilayered PdSe$_2$ film onto a planar GaAs. The as-fabricated PdSe$_2$/GaAs heterojunction device exhibited obvious photovoltaic behavior to 808 nm illumination, indicating that the near-infrared photodetector can be used as a self-driven device without external power supply. Further device analysis showed that the hybrid heterojunction exhibited a high on/off ratio of 1.16$\times$10$^5$ measured at 808 nm under zero bias voltage. The responsivity and specific detectivity of photodetector were estimated to be 171.34 mA/W and 2.36$\times$10$^{11}$ Jones, respectively. Moreover, the device showed excellent stability and reliable repeatability. After 2 months, the photoelectric characteristics of the near-infrared photodetector hardly degrade in air, attributable to the good stability of the PdSe$_2$. Finally, the PdSe$_2$/GaAs-based heterojunction device can also function as a near-infrared light sensor.
Keywords: van der Waals heterojunction, two dimensional materials, near-infrared light photodetector, image sensor, responsivity
[1] J. H. Li, L. Y. Niu, Z. J. Zheng, and F. Yan, Adv. Mater. 26, 5239 (2014).
[2] T. Mueller, F. N. Xia, and P. Avouris, Nat. Photonics 4, 297 (2010).
[3] F. H. L. Koppens, T. Mueller, P. Avouris, A. C. Ferrari, M. S. Vitiello, and M. Polini, Nat. Nanotechnol. 9, 780 (2014).
[4] F. X. Liang, J. Z. Wang, Z. P. Li, and L. B. Luo, Adv. Opt. Mater. 5, 1700081 (2017).
[5] J. S. Miao, W. D. Hu, N. Guo, Z. Y. Lu, X. M. Zou, L. Liao, S. X. Shi, P. P. Chen, Z. Y. Fan, J. C. Ho, T. X. Li, X. S. Chen, and W. Lu, ACS Nano 8, 3628 (2014).
[6] D. S. Zheng, H. H. Fang, M. S. Long, F. Wu, P. Wang, F. Gong, X. Wu, J. C. Ho, L. Liao, and W. D. Hu, ACS Nano 12, 7239 (2018).
[7] M. Rezaei, M. S. Park, C. Rabinowitz, C. L. Tan, S. Wheaton, M. Ulmer, and H. Mohseni, Appl. Phys. Lett. 114, 161101 (2019).
[8] M. Horstmann, M. Marso, A. Fox, F. Rüders, M. Hollfelder, H. Hardtdegen, P. Kordos, and H. Lüth, Appl. Phys. Lett. 67, 106 (1995).
[9] M. Piccardo, N. A. Rubin, L. Meadowcroft, P. Chevalier, H. Yuan, J. Kimchi, and F. Capasso, Appl. Phys. Lett. 112, 041106 (2018).
[10] R. Avrahamy, M. Zohar, M. Auslender, Z. Fradkin, B. Milgrom, R. Shikler, and S. Hava, Appl. Opt. 58, F1 (2019).
[11] J. Y. Zhou, M. A. Raihan Miah, Y. G. Yu, A. C. Zhang, Z. J. Zeng, S. Damle, I. A. Niaz, Y. Zhang, and Y. H. Lo, Opt. Express 27, 37056 (2019).
[12] N. Killilea, M. J. Wu, M. Sytnyk, A. A. Yousefi Amin, O. Mashkov, E. Spiecker, and W. Heiss, Adv. Funct. Mater. 29, 1807964 (2019).
[13] J. C. Tong, F. Suo, W. Zhou, Y. Qu, N. J. Yao, T. Hu, Z. M. Huang, and D. H. Zhang, Opt. Express 27, 30763 (2019).
[14] B. Das, N. S. Das, S. Sarkar, B. K. Chatterjee, and K. K. Chattopadhyay, ACS Appl. Mater. Interfaces 9, 22788 (2017).
[15] L. He, J. R. Yang, S. L. Wang, Y. Wu, and W. Z. Fang, Adv. Mater. 11, 1115 (1999).
[16] D. C. Elias, I. Shafir, T. Meir, O. Sinai, D. Memram, S. S. Shusterman, and M. Katz, Infrared Phys. Technol. 95, 199 (2018).
[17] C. Wang, T. Hu, and E. J. Kan, Chin. J. Chem. Phys. 32, 327 (2019).
[18] R. Nashed, C. Pan, K. Brenner, and A. Naeemi, IEEE J. Electron Devices Soc. 4, 466 (2016).
[19] L. Tao, H. Li, M. X. Sun, D. Xie, X. M. Li, and J. B. Xu, IEEE Electron Device Lett. 39, 987 (2018).
[20] D. H. Zhang, J. Zhou, C. L. Liu, S. K. Guo, J. N. Deng, Q. Y. Cai, Z. F. Li, Y. F. Zhang, W. Q. Zhang, and X. S. Chen, J. Appl. Phys. 126, 074301 (2019).
[21] S. Rumyantsev, G. X. Liu, M. S. Shur, R. A. Potyrailo, and A. A. Balandin, Nano Lett. 12, 2294 (2012).
[22] X. M. Wang, S. G. Wu, H. Liu, L. Zhou, and Q. P. Zhao, Chin. J. Chem. Phys. 26, 590 (2013).
[23] X. M. Wang, Z. Z. Cheng, K. Xu, H. K. Tsang, and J. B. Xu, Nat. Photonics 7, 888 (2013).
[24] F. Yang, H. Cong, K. Yu, L. Zhou, N. Wang, Z. Liu, C. B. Li, Q. M. Wang, and B. W. Cheng, ACS Appl. Mater. Interfaces 9, 13422 (2017).
[25] P. Lv, X. J. Zhang, X. W. Zhang, W. Deng, and J. S. Jie, IEEE Electron Device Lett. 34, 1337 (2013).
[26] L. B. Luo, H. Hu, X. H. Wang, R. Lu, Y. F. Zou, Y. Q. Yu, and F. X. Liang, J. Mater. Chem. C 3, 4723 (2015).
[27] Y. R. Lim, W. Song, J. K. Han, Y. B. Lee, S. J. Kim, S. Myung, S. S. Lee, K. S. An, C. J. Choi, and J. Lim, Adv. Mater. 28, 5025 (2016).
[28] A. S. Aji, P. S. Fernandez, H. G. Ji, K. Fukuda, and H. Ago, Adv. Funct. Mater. 27, 1703448 (2017).
[29] V. Dhyani, M. Das, W. Uddin, P. K. Muduli, and S. Das, Appl. Phys. Lett. 114, 121101 (2019).
[30] I. G. Lezama, A. Arora, A. Ubaldini, C. Barreteau, E. Giannini, M. Potemski, and A. F. Morpurgo, Nano Lett. 15, 2336 (2015).
[31] A. N. Hoffman, Y. Y. Gu, L. B. Liang, J. D. Fowlkes, K. Xiao, and P. D. Rack, NPJ 2D Mater. Applicat. 3, 1 (2019).
[32] S. H. Zhang and B. G. Liu, J. Mater. Chem. C 6, 6792 (2018).
[33] D. Wu, J. W. Guo, J. Du, C. X. Xia, L. H. Zeng, Y. Z. Tian, Z. F. Shi, Y. T. Tian, X. J. Li, Y. H. Tsang, and J. S. Jie, ACS Nano 13, 9907 (2019).
[34] L. H. Zeng, D. Wu, S. H. Lin, C. Xie, H. Y. Yuan, W. Lu, S. P. Lau, Y. Chai, L. B. Luo, Z. J. Li, and Y. H. Tsang, Adv. Funct. Mater. 29, 1806878 (2019).
[35] Q. J. Liang, Q. X. Wang, Q. Zhang, J. X. Wei, S. X. D. Lim, R. Zhu, J. X. Hu, W. Wei, C. K. Lee, C. H. Sow, W. J. Zhang, and A. T. S. Wee, Adv. Mater. 31, e1807609 (2019).
[36] F. X. Liang, X. Y. Zhao, J. J. Jiang, J. G. Hu, W. Q. Xie, J. Lv, Z. X. Zhang, D. Wu, and L. B. Luo, Small 15, 1903831 (2019).
[37] M. S. Long, Y. Wang, P. Wang, X. H. Zhou, H. Xia, C. Luo, S. Y. Huang, G. W. Zhang, H. G. Yan, Z. Y. Fan, X. Wu, X. S. Chen, W. Lu, and W. D. Hu, ACS Nano 13, 2511 (2019).
[38] J. H. Wu, Z. W. Yang, C. Y. Qiu, Y. J. Zhang, Z. Q. Wu, J. Z. Yang, Y. H. Lu, J. F. Li, D. X. Yang, R. Hao, E. P. Li, G. L. Yu, and S. S. Lin, Nanoscale 10, 8023 (2018).
[39] C. Jia, D. Wu, E. P. Wu, J. W. Guo, Z. H. Zhao, Z. F. Shi, T. T. Xu, X. W. Huang, Y. T. Tian, and X. J. Li, J. Mater. Chem. C 7, 3817 (2019).
[40] Q. S. Lv, F. G. Yan, X. Wei, and K. Y. Wang, Adv. Opt. Mater. 6, 1700490 (2018).
[41] W. W. Tang, C. L. Liu, L. Wang, X. S. Chen, M. Luo, W. L. Guo, S. W. Wang, and W. Lu, Appl. Phys. Lett. 111, 153502 (2017).
[42] C. Y. Zhang, S. Wang, L. J. Yang, Y. Liu, T. T. Xu, Z. Y. Ning, A. Zak, Z. Y. Zhang, R. Tenne, and Q. Chen, Appl. Phys. Lett. 100, 243101 (2012).
[43] Y. Y. Wang, Y. D. Wu, W. Peng, Y. H. Song, B. Wang, C. Y. Wu, and Y. Lu, Nanoscale 10, 18502 (2018).
[44] L. H. Zeng, M. Z. Wang, H. Hu, B. Nie, Y. Q. Yu, C. Y. Wu, L. Wang, J. G. Hu, C. Xie, F. X. Liang, and L. B. Luo, ACS Appl. Mater. Interfaces 5, 9362 (2013).
[45] W. Deng, L. M. Huang, X. Z. Xu, X. J. Zhang, X. C. Jin, S. T. Lee, and J. S. Jie, Nano Lett. 17, 2482 (2017).
[46] J. Mao, Y. Q. Yu, L. Wang, X. J. Zhang, Y. M. Wang, Z. B. Shao, and J. S. Jie, Adv. Sci. 3, 1600018 (2016).
[47] X. F. Wang, H. M. Zhao, S. H. Shen, Y. Pang, P. Z. Shao, Y. T. Li, N. Q. Deng, Y. X. Li, Y. Yang, and T. L. Ren, Appl. Phys. Lett. 109, 201904 (2016).
[48] B. Nie, J. G. Hu, L. B. Luo, C. Xie, L. H. Zeng, P. Lv, F. Z. Li, J. S. Jie, M. Feng, C. Y. Wu, Y. Q. Yu, and S. H. Yu, Small 9, 2872 (2013).
[49] J. C. Carrano, D. L. Brown, P. A. Grudowski, C. J. Eiting, R. D. Dupuis, and J. C. Campbell, Appl. Phys. Lett. 73, 2405 (1998).
[50] Z. H. Sun, Z. K. Liu, J. H. Li, G. A. Tai, S. P. Lau, and F. Yan, Adv. Mater. 24, 5878 (2012).
[51] A. Rogalski, J. Antoszewski, and L. Faraone, J. Appl. Phys. 105, 091101 (2009).
Ⅰ. INTRODUCTION
High-performance and low-cost photodetectors are a kind of optoelectronic device that transforms light signals into electrical ones. They have demonstrated a broad range of applications in both modern technology and industry [1-3]. Compared with photodetectors operating in the visible range, near-infrared (NIR, corresponding to electromagnetic radiation with typical wavelengths ranging from 0.78 $\mu$m to 3.0 $\mu$m) devices have attracted significant research interest worldwide in the past several decades, due to their potential importance in military and civilian fields such as target detection, night vision, surveillance, security inspection, environmental monitoring, video and biomedical imaging, etc. [4-6]. Currently, the sensing of NIR light in the market is largely dominated by high-performance photodetectors made of crystalline silicon. Nonetheless, owing to its cut-off wavelength at about 1100 nm, silicon NIR photodetectors are often characterized by a relatively narrow photoresponse wavelength range, which constitutes a bottleneck for their practical application. Recently, people have resorted to other compound semiconductor materials such as InGaAs and GeSi alloys with tunable bandgaps in the NIR region to detect longer-wavelength infrared light [7-9]. In addition to InGaAs and GeSi, a variety of NIR photodetectors with different device geometries have been fabricated using other narrow-bandgap semiconductors (e.g., HgCdTe, Ge, PbS, InSb) [10-13], and even topological insulators have been proposed [14]. Although these NIR photodetectors (NIRPDs) with superior performance have been successfully achieved, it is undeniable that these devices still have shortcomings that cannot be ignored. For instance, the fabrication of these compounds usually involves very complicated growth instruments and expensive, complex manufacturing processes [15, 16]. Therefore, the development of simple, low-cost and highly efficient NIRPDs still remains a great challenge.
As a new class of materials with unique atomic structures and excellent electrical and optoelectronic properties, two-dimensional (2D) layered materials have also provided reliable solutions for the manufacture of various high-performance NIRPDs. For example, graphene is the earliest discovered 2D material with ultra-high mobility [17, 18]. Thanks to its excellent electrical properties, it has been widely used in optoelectronic devices [19, 20] and chemical biosensors [21, 22]. When metallic graphene is combined with narrow-bandgap semiconductors [23-26], the as-formed graphene–narrow-bandgap-semiconductor hybrid structures, which are often characterized by a strong built-in electric field at the interface, are ideal building blocks for efficient NIR light detection. During the light detection process, the narrow-bandgap semiconductors are mainly used as light-absorbing layers, and the photogenerated electron-hole pairs can be easily separated by the built-in electric field and then transported to separate electrodes. In spite of this extensive progress, it is undeniable that, due to the lack of an intrinsic band gap, the graphene in most of these hybrid-structure NIRPDs mainly acts as a transparent electrode material, which can hardly contribute to the photoresponse in the longer wavelength region. In light of this, some 2D materials with adjustable band gaps have also been reported, including MoS$_2$ [27], WS$_2$ [28], MoSe$_2$ [29], MoTe$_2$ [30], etc. Very recently, the group-10 transition metal dichalcogenide palladium diselenide (PdSe$_2$), with superior optoelectronic properties, an adjustable band gap and a crystal structure that is stable in air, has been discovered [31-33]. Its bandgap can be easily tailored from 1.1 eV for a single layer to zero for multilayers [34]. With these advantages, it has been used for the assembly of a variety of high-performance broadband photodetectors with different device configurations [35, 36].
Inspired by this, we herein report the development of a high-performance hybrid heterojunction NIRPD made of a planar GaAs wafer and multilayer PdSe$_2$, the latter fabricated by a simple selenization approach. Device analysis found that the as-assembled PdSe$_2$/GaAs hybrid heterojunction displayed apparent photovoltaic behavior under 808 nm NIR light irradiation, with very good reproducibility and excellent ambient stability. What is more, the capability of the PdSe$_2$/GaAs device for recording a simple NIR light image was also evaluated, which showed good potential for image sensor applications.
Ⅱ. EXPERIMENTS
A. Material preparation
The 2D PdSe$_2$ nanofilm was synthesized by a simple CVD selenization method. In short, a $\sim$8 nm Pd film was first deposited by electron-beam evaporation onto a SiO$_2$/Si substrate (300 nm SiO$_2$ thickness) pre-cleaned with acetone, alcohol and deionized water. Then, the SiO$_2$/Si substrate loaded with the Pd film was placed in the heating center area of the furnace. The selenium powder (99.99% purity) was placed in the upstream center area and Ar/H$_2$ (50 sccm, standard cubic centimeters per minute) was used as a protective gas. For the selenization, the temperatures of the selenium and Pd film zones were increased to 200 ℃ and 357 ℃, respectively. After 2 h of selenization, the color of the Si/SiO$_2$ wafer changed slightly from navy to light gray.
B. Characterization
The topography of the as-selenized PdSe$_2$ nanofilm was examined using an AFM (Benyuan Nanotech Co., CSPM-4000). The morphology of the PdSe$_2$ was characterized by field-emission scanning electron microscopy (FESEM, SIRION 200 FEG) and energy-dispersive X-ray spectroscopy (EDS). The structure of the PdSe$_2$ film was analyzed by an HR (Horiba Jobin Yvon) Raman spectrometer equipped with a 532 nm laser and an X-ray diffractometer (Rigaku D/max-rB). The absorption spectra of the PdSe$_2$ film, the GaAs substrate and the PdSe$_2$/GaAs hybrid were recorded using a Shimadzu UV-2550 UV-Vis spectrophotometer.
C. Device fabrication and analysis
To construct the PdSe$_2$/GaAs heterojunction device, 50 nm of gold was deposited by electron beam evaporation on the back of an $n$-type GaAs substrate (resistivity: 8$\times$10$^{-4}$$-$9$\times$10$^{-3}$ $\Omega\cdot$cm) that had been pre-cleaned with acetone, alcohol, and deionized water. Then a thin layer of PDMS was spin-coated onto the substrate as an insulating layer. After heating, a window was carved into the PDMS with a tool knife to define the effective area (0.2 cm$\times$0.2 cm). A 5 wt% layer of polymethyl methacrylate (PMMA) was spin-coated on the PdSe$_2$ film, which was then immersed into NaOH solution (4 mol/L) and deionized water, respectively, to remove the underlying SiO$_2$/Si and residual ions. Finally, the as-selenized PdSe$_2$ multilayer was transferred onto the window. The electrical measurements of the PdSe$_2$/GaAs hybrid heterojunction were performed using a semiconductor characterization system (Keithley 2400). To further study the light response of the device, a number of NIR and UV laser diodes with different wavelengths (Thorlabs, M808L3, M970L3, M1300L3, M265L3 and M365L3) were employed. In order to measure the response speed, a signal generator (Tektronix, TDS2022B) was employed to drive the laser diode to produce pulsed irradiation at different frequencies, and an oscilloscope (Tektronix, TDS2012B) was used to record the electrical output. All tests were carried out at room temperature under ambient conditions.
Ⅲ. RESULTS AND DISCUSSION
The schematic geometry in FIG. 1(a) shows that the proposed device consists of the multilayered PdSe$_2$ film and an n-type GaAs wafer. The detailed device fabrication process is illustrated in FIG. S1(a) (supplementary materials). In short, a 50 nm thick gold electrode was first coated on the backside of the underlying GaAs by electron beam evaporation, and then a thin layer of polydimethylsiloxane (PDMS) was spin-coated as an insulating layer, followed by heating to solidify it. Afterwards, through a solution transfer method, multiple layers of PdSe$_2$, prepared by a simple selenization process, were transferred onto a pre-defined window. It can be clearly seen from FIG. S1(b) of the supplementary materials that once the Pd nanofilm is converted to the PdSe$_2$ multilayer, the color changes from orchid to light gray. FIG. 1(b) displays a typical field emission scanning electron microscopy (FESEM) image of the PdSe$_2$ film. It is clear that the PdSe$_2$ sample has a relatively continuous and smooth surface. The thickness of the as-selenized PdSe$_2$ film was estimated to be $\sim$30.7 nm from the atomic force microscopy (AFM) image illustrated in FIG. 1(c). The Raman spectrum of the PdSe$_2$ nanofilm in FIG. 1(d) shows four distinct peaks, labeled A$_{\rm{g}}^1$, A$_{\rm{g}}^2$, B$_{\rm{1g}}$ and A$_{\rm{g}}^3$, located at $\sim$143.5, $\sim$206, $\sim$222, and $\sim$256 cm$^{-1}$, respectively [37]. Among these peaks, the three with relatively low Raman shifts are associated with the motion modes of the Se atoms, while the remaining peak (A$_{\rm{g}}^3$) is ascribed to the relative motion between Se and Pd atoms. A further X-ray diffraction (XRD) study of the sample reveals a pentagonal structure with $a$ = 5.756 Å, $b$ = 5.874 Å, and $c$ = 7.698 Å, close to the values of the standard card (PDF No.11-0453). The chemical composition of the PdSe$_2$ was also confirmed by the X-ray photoemission spectroscopy in FIG. S2 of the supplementary materials and by the energy-dispersive X-ray spectroscopy (EDS) spectrum in FIG. 1(f), according to which the elemental ratio is determined to be Se:Pd$\approx$1.98:1. Notably, both Se and Pd atoms are uniformly distributed over the whole PdSe$_2$ film (FIG. S3 (a) and (b) in the supplementary materials). This result, along with the clear 2D lattice fringe image (FIG. S3(c) in the supplementary materials), suggests the good quality of the sample.
Figure 1. (a) Schematic illustration of the PdSe$ _2 $/GaAs heterojunction photodetector. (b) FESEM image of PdSe$ _2 $ layer on a SiO$ _2 $/Si wafer, the inset shows the obvious contrast between PdSe$ _2 $ layer and SiO$ _2 $ substrate. (c) AFM image of the PdSe$ _2 $ film on the SiO$ _2 $/Si substrate, the inset shows the corresponding height profile. (d) Raman spectrum of the PdSe$ _2 $ film. (e) XRD pattern of PdSe$ _2 $ sample. (f) The corresponding EDS analysis of the PdSe$ _2 $ sample. For color image, see the online version.
FIG. 2(a) depicts the current-voltage ($I$-$V$) characteristics of a PdSe$_2$/GaAs heterojunction device in the dark. Obviously, the device showed a typical one-way conduction behavior, with a relatively low rectification ratio of $\sim$7 at $\pm$2 V. Such a rectifying characteristic, which has been observed in many other 2D-based hybrid structures [38, 39], should stem from the PdSe$_2$/GaAs interface, in view of the fact that good contact was formed between Ag/PdSe$_2$/Ag and between Au/GaAs/Au (FIG. S4 in supplementary materials). It is easy to find in FIG. 2(b) that, when the hybrid structure was irradiated by 808 nm NIR light, obvious photovoltaic characteristics were observed, with an open-circuit voltage ($V_{\rm{oc}}$) of 0.51 V and a short-circuit current ($I_{\rm{sc}}$) of 4.05 $\mu$A. Even though the energy conversion efficiency was rather low, this heterojunction device can function as a self-driven photodetector that is able to sense NIR illumination at 0 V bias. This photovoltaic characteristic can be interpreted by the energy band diagram in FIG. 2(c). Due to the difference in work function (the work function of the PdSe$_2$ multilayer is 5.49 eV, which is higher than that of n-type GaAs), once the PdSe$_2$ is in contact with GaAs, electrons will diffuse from GaAs to PdSe$_2$ while holes diffuse from PdSe$_2$ to GaAs, since the Fermi level of GaAs is higher than that of PdSe$_2$. As a result, the energy levels near the GaAs surface will bend upwards, while those of PdSe$_2$ will bend downwards, forming a built-in electric field at the interface with a direction pointing from GaAs to PdSe$_2$. When irradiated by NIR light, the GaAs absorbs photons. It should be noted that the PdSe$_2$ acts as a transparent electrode: it functions like the metal electrode in conventional metal-semiconductor devices, while at the same time, like a graphene layer, it allows the incident light to penetrate into the underlying GaAs. The electron-hole pairs generated at the interface are then separated by the built-in electric field, forming a photocurrent or photovoltage in the circuit [40]. FIG. 2(d) shows the time-dependent photoresponse at a bias voltage of 0 V under 808 nm illumination (45.7 mW/cm$^2$). Clearly, our device was very sensitive to NIR light with good reproducibility even after hundreds of cycles (FIG. 2(e)); the photocurrent/dark-current ratio is estimated to be 1.16$\times$10$^{5}$. In addition, steep rising and falling edges were also found, indicative of fast separation of photo-generated carriers in the depletion region.
Figure 2. (a) $ I $-$ V $ curves of the PdSe$ _2 $/GaAs heterojunction device without light illumination, the inset shows a representative camera picture of the device. (b) $ I $-$ V $ characteristics of the device in the dark and illuminated with 808 nm light (45.7 mW/cm$ ^2 $). (c) Energy band diagram under illumination. (d) Switchable photoresponse of the device under 808 nm light illumination at zero bias. (e) Photoresponse of the device for about 500 cycles of operation. For color image, see the online version.
Like most other 2D-material-based photodetectors, the photocurrent of the present device is found to depend strongly on the illumination intensity. As can be seen from the $I$-$V$ and $I$-$t$ curves at various intensities in FIG. 3 (a) and (b), when the 808 nm NIR light intensity increases from 0.183 mW/cm$^2$ to 62.20 mW/cm$^2$, the photocurrent gradually increases accordingly. The photovoltage displays a similar trend. This is reasonable because the number of photo-generated carriers increases at higher light intensities (FIG. 3(c)). However, when the light intensity reached a certain value (about 9 mW/cm$^2$), both the photovoltage and $I_{\rm{light}}$/$I_{\rm{dark}}$ began to saturate, irrespective of any further increase in light intensity. To better understand the dependence of the photocurrent on the light intensity, the photoresponse of the device under different light intensities was studied. As plotted in FIG. 3(d), their relationship can be numerically described by the power law $I_{\rm{ph}}$$\propto$$P^\theta$, where $I_{\rm{ph}}$ is the current under light illumination, $P$ is the light intensity, and $\theta$ is the response index of the photocurrent to the NIR light. For convenience, the light intensity range is divided into two parts: a weak-intensity region ranging from 16.92 $\mu$W/cm$^2$ to 182.5 $\mu$W/cm$^2$ and a strong-intensity region ranging from 0.602 mW/cm$^2$ to 22.34 mW/cm$^2$. Fitting the experimental results yields two $\theta$ values: $\theta$ = 0.86 in the weak-intensity region and $\theta$ = 0.24 in the high-intensity region; due to recombination loss, the deviation of these two fitted values from the ideal value $\theta$ = 1 is reasonable. On the other hand, it is also observed in FIG. 3(e) that both the dark current and the photocurrent depend on the bias voltage as well. In order to reveal how the bias voltage affects the photoelectric properties of the hybrid structure device, the $I_{\rm{light}}$/$I_{\rm{dark}}$ ratio at different bias voltages was examined. As can be seen from FIG. 3(f), with increasing negative bias, the $I_{\rm{light}}$/$I_{\rm{dark}}$ ratio gradually decreased; the maximum value, at 0 V, was $I_{\rm{light}}$/$I_{\rm{dark}}$$\approx$0.37$\times$10$^{5}$, and the minimum value, at $-$2 V, was $I_{\rm{light}}$/$I_{\rm{dark}}$$\approx$29. This is because the dark current increased faster than the photocurrent as the bias voltage increased at the same light intensity [36].
Figure 3. (a) $ I $-$ V $ curves of the heterojunction device under 808 nm light with different intensities. (b) Photoresponse of the heterojunction device under 808 nm light with different intensities. (c) The dependence of photovoltage and $ I_{\rm{light}} $/$ I_{\rm{dark}} $ ratio on light intensity. (d) Corresponding fitting curve of the photosensing behavior with various incident light intensities. (e) Repeated photoresponse of the device at zero and different reverse bias voltages. (f) $ I_{\rm{light}} $/$ I_{\rm{dark}} $ ratio of the device as a function of different bias voltages. For color image, see the online version.
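To illustrate how the response index $\theta$ can be extracted from intensity-dependent photocurrent data, a minimal log-log fitting sketch is given below; the arrays are placeholder values for illustration, not the measured data of FIG. 3(d).

import numpy as np

# Minimal sketch of extracting the exponent theta in I_ph ∝ P^theta from a
# log-log linear fit. The arrays below are placeholder values, not measured data.
P = np.array([16.92e-6, 50e-6, 100e-6, 182.5e-6])    # light intensity, W/cm^2
I_ph = np.array([1.2e-7, 3.0e-7, 5.5e-7, 9.0e-7])    # photocurrent, A

theta, log_prefactor = np.polyfit(np.log(P), np.log(I_ph), 1)
print(f"fitted response index theta = {theta:.2f}")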
For a quantitative evaluation of the photoresponse of the PdSe$_2$/GaAs heterojunction device to 808 nm NIR light, the following two formulas are used to calculate both $R$ and the external quantum efficiency (EQE) [41, 42]. $R$ represents the responsivity, which can be calculated from the photocurrent ($I_{\rm{light}}$), dark current ($I_{\rm{dark}}$), light intensity ($P$), and effective area ($S$) in Eq.(1):

$R = \frac{I_{\rm{light}}-I_{\rm{dark}}}{PS}$   (1)
EQE represents the number of effective carriers generated per incident NIR photon, which can be described by Eq.(2):

${\rm{EQE}} = \frac{hc}{e\lambda}R$   (2)
where $h$, $c$, $e$, and $\lambda$ are the Planck constant, the speed of light, the elementary charge, and the incident wavelength, respectively. FIG. 4 (a) and (b) summarize the responsivity and EQE under a variety of light intensities. Obviously, both parameters decrease substantially with increasing light intensity. Specifically, when the light intensity is as weak as 16.92 $\mu$W/cm$^2$, the responsivity and EQE reach their highest values of 171.34 mA/W and 26.3%, respectively (see the supplementary materials for detailed information about the calculation). In addition to the responsivity, the photodetector has another important performance metric, the specific detectivity ($D^*$). This parameter is usually used to measure the capability of a NIRPD to detect weak optical signals against noise, and can be calculated by using the noise equivalent power (NEP) from the following two formulas [43, 44]:

$D^* = \frac{(S\Delta f)^{1/2}}{\rm{NEP}}$   (3)

${\rm{NEP}} = \frac{{\overline {{i_n}^2} ^{1/2}}}{R}$   (4)
Figure 4. (a) Calculated responsivity and detectivity as a function of the light intensity. (b) The EQE of the PdSe$_2$/GaAs heterojunction under 808 nm light with varied light intensities at zero bias. (c) Analysis of the noise spectral density of the device, obtained from the dark current noise using a Fourier transform. For color image, see the online version.
where $S$, $\Delta f$, and ${\overline {{i_n}^2} ^{1/2}}$ represent the effective area (0.04 cm$^2$), the bandwidth, and the root mean square value of the noise current of the hybrid structure device, respectively. As described by FIG. 4(c), ${\overline {{i_n}^2} ^{1/2}}$ was obtained by measuring the hybrid structure device, equipped with a preamplifier, under dark conditions with a lock-in technique, recording its noise current at different frequencies [45]. The noise level ${\overline {{i_n}^2} ^{1/2}}$ was determined at a frequency of 1 Hz and is approximately 1.45$\times$10$^{-13}$ A$\cdot$Hz$^{-1/2}$ in this work. FIG. 4(a) shows the trend of the specific detectivity as a function of light intensity. Obviously, at the weakest light intensity of 16.92 $\mu$W/cm$^2$, the maximum specific detectivity was as high as 2.36$\times$10$^{11}$ Jones.
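As a consistency check of Eqs.(1)-(4), the short script below recomputes the photocurrent, EQE and $D^*$ from the responsivity, effective area, wavelength and noise current quoted above; it is only an illustrative sketch of the arithmetic, not part of the original analysis.

import math

# Recompute the figures of merit from Eqs.(1)-(4) using values quoted in the text.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
e = 1.602176634e-19   # elementary charge, C

wavelength = 808e-9   # m
P = 16.92e-6          # light intensity, W/cm^2 (weakest intensity used)
S = 0.04              # effective device area, cm^2
R = 0.17134           # responsivity, A/W (as reported)
i_noise = 1.45e-13    # noise current density, A/Hz^0.5
delta_f = 1.0         # bandwidth, Hz

I_ph = R * P * S                         # Eq.(1) rearranged: photocurrent, A
EQE = h * c * R / (e * wavelength)       # Eq.(2)
NEP = i_noise / R                        # Eq.(4)
D_star = math.sqrt(S * delta_f) / NEP    # Eq.(3), in Jones (cm Hz^1/2 W^-1)

print(f"I_ph ~ {I_ph:.2e} A, EQE ~ {EQE*100:.1f} %, D* ~ {D_star:.2e} Jones")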
Next, the changes in both the responsivity and the specific detectivity of the PdSe$_2$/GaAs device at different wavelengths were investigated. FIG. 5(a) displays the continuous spectral response in the range of 200 nm to 1200 nm. It is apparent that the device has a very good spectral selectivity, with a peak response at around 800 nm, which is why 808 nm light emitting diodes (LEDs) were chosen as the light source in the device performance characterization. This spectral selectivity is in good agreement with the absorption spectrum. FIG. 5(b) plots the absorption spectra of the pure GaAs wafer, the pure PdSe$_2$ and the multilayer PdSe$_2$ on the GaAs wafer. One can easily observe that the absorption of the hybrid structure is concentrated at around 800 nm. Although the hybrid structure also absorbs UV light and NIR light at wavelengths longer than 900 nm, this absorption is rather weak, which explains why the responsivity and specific detectivity are smaller in both the UV and longer-wavelength NIR regions. Apart from the pronounced photoresponse to 800 nm NIR illumination, the present NIRPD exhibits a weak but repeatable photoresponse to ultraviolet C (UVC), ultraviolet A (UVA) and NIR illumination with wavelengths longer than 808 nm. FIG. 5 (c-e) plot the main photoresponse parameters under irradiation with 265, 365 and 970 nm light. The PdSe$_2$/GaAs heterojunction device can be readily switched between high- and low-conduction states. In comparison, the photocurrent under 265 nm illumination was relatively higher than that under 365 and 970 nm. Remarkably, for all three illuminations, the $I_{\rm{light}}$/$I_{\rm{dark}}$ ratio of the NIRPD gradually increases with increasing light intensity (FIG. 5(d)). It is also interesting to find that the present NIRPD is even sensitive to NIR illumination at a wavelength of 1300 nm, although the response current was on the scale of nA, as shown in FIG. 5(f).
Figure 5. (a) Wavelength-dependent responsivity and detectivity of the PdSe$ _2 $/GaAs device. (b) Absorption spectrum of the PdSe$ _2 $ nanofilm, GaAs substrate, and PdSe$ _2 $/GaAs hybrid structure. (c) Time-dependent photoresponse of the device under illumination of different light sources (4.28 mW/cm$ ^2 $) at zero bias. (d) $ I_{\rm{light}} $/$ I_{\rm{dark}} $ ratios as a function of the light intensity for different wavelengths. (e) The relationship between the responsivity and the light intensity for 265, 365, 970 nm. (f) Photoresponse of the device under 1300 nm light illumination (5.52 mW/cm$ ^2 $) at zero bias. For color image, see the online version.
The response speed is a typical parameter that reflects the ability of a photodetector to follow a fast-varying optical signal [46, 47]. In this work, the as-fabricated PdSe$_2$/GaAs heterojunction device can detect high-frequency optical pulses with good reproducibility. To study the response speed, an experimental setup composed of a digital oscilloscope was employed to record the temporal photoresponse signal (photovoltage as a function of time) at switching frequencies ranging from 200 Hz to 5 kHz (FIG. 6(a)). FIG. 6 (b$-$d) depict the photoresponse of the hybrid heterojunction device to pulsed illumination at different frequencies. Obviously, the NIRPD can be repeatedly switched between on- and off-states, even when the frequency is as high as 4 kHz. The response speed of the device can be determined from an individual cycle of the response curve in FIG. 6(e). In the time domain, the rise and fall times, defined as the time required for the signal to increase from 10% to 90% of the peak value (and vice versa), directly determine the speed of the photodetector [48]. According to the above definition, the rise time and fall time of our device were 43.76 and 89.98 $\mu$s, respectively. Furthermore, the relative balance ($V_\max$$-$$V_\min$)/$V_\max$ of the photovoltage as a function of frequency was calculated and plotted in FIG. 6(f), from which the $f_{\rm{3dB}}$ bandwidth (the frequency at which ($V_\max$$-$$V_\min$)/$V_\max$ = 70%) is estimated to be $\sim$50 Hz [49]. The main parameters, including $R$, $D^*$, the $I_{\rm{light}}$/$I_{\rm{dark}}$ ratio and $\tau_{\rm{r}}/\tau_{\rm{f}}$, of the present NIRPD and of other similar devices are compared in Table Ⅰ. It can be clearly seen that although the $I_{\rm{light}}$/$I_{\rm{dark}}$ ratio is slightly lower than that of the PdSe$_2$/pyramid Si heterojunction device, it is at least an order of magnitude higher than those of the other geometries. In addition, the responsivity and specific detectivity are higher than those of most of the devices included in Table Ⅰ.
Figure 6. (a) The experiment setup for studying the photoresponse of the heterojunction device. Photoresponse of the device under pulsed light irradiation (808 nm) with a frequency of 200 Hz (b), 2 kHz (c), and 4 kHz (d), respectively. (e) A single normalized cycle measured at the frequency of 4 kHz to calculate both rise time ($ \tau_{\rm{r}} $) and fall time ($ \tau_{\rm{f}} $). (f) The relative balance ($ V_\max $$ - $$ V_\min $)/$ V_\max $ versus switching frequency, the 3 dB cutoff frequency is estimated to be $ \sim $50 Hz. For color image, see the online version.
Table Ⅰ. Summary of some of the key parameters of the PdSe$ _2 $/GaAs photodetector and other similar devices.
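The 10%-90% rise and fall times quoted above can be extracted from a digitized pulse in a straightforward way; the minimal sketch below uses a synthetic exponential edge in place of the actual oscilloscope trace.

import numpy as np

# Sketch: extract the 10%-90% rise time from a digitized photoresponse pulse.
# A synthetic exponential edge stands in for the real oscilloscope trace.
t = np.linspace(0.0, 500e-6, 5001)        # time axis, s
v = 1.0 - np.exp(-t / 20e-6)              # synthetic rising edge, arbitrary units

def rise_time_10_90(time, signal):
    """Time for the signal to go from 10% to 90% of its final value."""
    lo, hi = 0.1 * signal[-1], 0.9 * signal[-1]
    t10 = time[np.argmax(signal >= lo)]   # first sample above 10%
    t90 = time[np.argmax(signal >= hi)]   # first sample above 90%
    return t90 - t10

print(f"rise time ~ {rise_time_10_90(t, v)*1e6:.1f} us")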
The above results have demonstrated that the present PdSe$_2$/GaAs device exhibits a good responsivity to NIR illumination. However, from the perspective of practical applications, the long-term stability is also very important and deserves further study. In order to check the stability of the PdSe$_2$/GaAs heterojunction devices under ambient conditions, we compared the device performance after two months of storage in air without any encapsulation. It is worth noting that the device almost completely retained its photoresponse, and no obvious degradation of the photocurrent was observed. The high stability of the device is without doubt due to the good stability of PdSe$_2$ and GaAs in air, as verified by the Raman analysis of the multilayered PdSe$_2$ in FIG. 7(b).
Figure 7. (a) Comparison of the photoresponse of the NIRPD with and without 2 months' storage in air. (b) Comparison of the Raman spectra of the PdSe$ _2 $ before and after two months.
As an important optoelectronic device, the infrared image sensor has been widely used in night vision, missile warning systems, video and biomedical imaging, and even fire monitoring [50, 51]. By utilizing the setup illustrated in FIG. 8(a), the capability of the present photodetector for NIR imaging was then explored. For the sake of mapping the photocurrent, the NIRPD was mounted on an automatic positioning system below the pattern, and a simple "table lamp" pattern, as shown in FIG. 8(b), was illuminated vertically with an 808 nm LED source. A computer pre-installed with customized software then manipulated the detector to scan horizontally ($X$-axis) and vertically ($Y$-axis) at a step of 1 mm. During the image scanning process, the channel current of each pixel was measured and recorded, and then incorporated into a 2D contrast mapping system. FIG. 8(c) shows the image sensing results in the 2D contrast map. Obviously, the positions where the NIR light can transmit show relatively high photocurrent while the blocked areas display low current, leading to the formation of the NIR profile of the "table lamp" pattern. Although there are some glitches at the edges of the picture, the pattern of the table lamp can be clearly distinguished from the background. Such relatively good imaging quality indicates that the present hybrid-heterojunction-based NIRPD may find potential applications in future NIR optoelectronic and image sensing systems.
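As a rough illustration of the bookkeeping behind this scanning scheme (a sketch for illustration only; the read_current callable, scan ranges, and toy pattern are assumptions, not the actual instrument-control code), the following shows how a 2D photocurrent map is assembled pixel by pixel from a raster scan.

import numpy as np

def acquire_photocurrent_map(read_current, x_steps=40, y_steps=40, step_mm=1.0):
    """Raster-scan the detector under the illuminated pattern and build a
    2D photocurrent map: one measurement per (x, y) pixel, 1 mm per step."""
    image = np.zeros((y_steps, x_steps))
    for iy in range(y_steps):
        for ix in range(x_steps):
            # In the real setup the stage moves to (ix*step_mm, iy*step_mm);
            # here read_current is any callable returning the channel current.
            image[iy, ix] = read_current(ix * step_mm, iy * step_mm)
    return image

# Toy stand-in for the measurement: bright where the "pattern" transmits light.
pattern = lambda x, y: 1e-7 if (10 <= x <= 30 and 10 <= y <= 30) else 1e-10
current_map = acquire_photocurrent_map(pattern)
print(current_map.shape, current_map.max(), current_map.min())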
Figure 8. (a) Schematic illustration of the setup for the NIR image sensing application. (b) Digital photograph of the template "table lamp". (c) The corresponding 2D current mapping of image "table lamp" under 808 nm illumination. For color image, see the online version.
Ⅳ. CONCLUSION
In summary, we have developed a highly sensitive hybrid heterojunction NIRPD by simply transferring a PdSe$_2$ thin film onto a GaAs wafer. The as-fabricated device exhibited obvious photovoltaic behavior under 808 nm light, which indicates that the device can be used as a self-driven NIRPD without an external power supply. Further analysis reveals that the device was very sensitive to 808 nm NIR illumination at zero bias voltage, with excellent reproducibility, ambient stability, and an $I_{\rm{light}}$/$I_{\rm{dark}}$ ratio as high as 1.16$\times$10$^{5}$, which is higher than that of most other Pt/Pd chalcogenide photodetectors reported to date. Specifically, the responsivity and specific detectivity of the device were estimated to be 171.34 mA/W and 2.36$\times$10$^{11}$ Jones, respectively. Finally, the PdSe$_2$/GaAs-based heterojunction device can easily record an NIR "table lamp" image generated by 808 nm illumination, suggesting the potential application of the present device in future infrared optoelectronic systems.
Supplementary materials: The device fabrication process, the XPS of PdSe$_2$, the elemental mapping of Pd and Se, the 2D lattice fringe image, as well as the $I$-$V$ curves of Ag/PdSe$_2$/Ag and Au/GaAs/Au are provided.
Ⅴ. ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China (No.61575059, No.61675062, No.21501038) and the Fundamental Research Funds for the Central Universities (No.JZ2018HGPB0275, No.JZ2018HGTA0220, and No.JZ2018HGXC0001).
Figure S1. (a) Schematic diagram of the fabrication steps of the PdSe2/GaAs heterojunction based photodetector. (b) Photographs of the SiO2/Si wafer, Pd on the SiO2/Si wafer, and the PdSe2/SiO2/Si wafer.
Figure S2. (a and b) The XPS spectra of Se 3d and Pd 3d.
Figure S3. (a, b) The EDS element mapping of Pd and Se at the selected square area. (c) The HRTEM image of PdSe2.
Figure S4. (a) I-V characteristics of the Ag/PdSe2 measured at room temperature without light irradiation. (b) I-V characteristics of the Au/GaAs measured at room temperature without light irradiation.
Calculation of the Responsivity, EQE, and D*.
Here $I_{\rm{light}}$ is the photocurrent ($1.16\times10^{-7}$ A), $I_{\rm{dark}}$ the current in the dark ($3.63\times10^{-11}$ A), $P$ the incident light intensity (16.92 μW cm$^{-2}$), and $S$ the effective area of the photodetector (0.04 cm$^{2}$); the responsivity is therefore $R = (1.16\times10^{-7}-3.63\times10^{-11})/(16.92\times10^{-6}\times0.04) = 0.1713$ A/W. With $h$ the Planck constant ($6.626\times10^{-34}$ J s), $c$ the speed of light ($3.0\times10^{8}$ m s$^{-1}$), $e$ the elementary charge ($1.602\times10^{-19}$ C), and $\lambda$ the incident wavelength ($0.808\times10^{-6}$ m), the EQE is equal to $0.1713\times6.626\times10^{-34}\times3.0\times10^{8}/(1.602\times10^{-19}\times0.808\times10^{-6}) = 26.3\%$. The specific detectivity ($D^*$) is given by $D^* = \sqrt{S\Delta f}/{\rm{NEP}}$ with ${\rm{NEP}} = {\overline{i_n^2}}^{1/2}/R$,
where $S$ is the effective area of the device (0.04 cm$^{2}$), $\Delta f$ is the bandwidth (1 Hz), NEP is the noise equivalent power, ${\overline{i_n^2}}^{1/2}$ is the root-mean-square value of the noise current of the device, and $R$ is the responsivity of the device. The noise level ${\overline{i_n^2}}^{1/2}$ is determined at the frequency of 1 Hz and is approximately $1.45\times10^{-13}$ A Hz$^{-1/2}$. Therefore, the NEP and the specific detectivity ($D^*$) were $8.465\times10^{-13}$ W Hz$^{-1/2}$ and $2.36\times10^{11}$ Jones, respectively.
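The numbers above can be checked with a few lines of code; the sketch below simply re-evaluates $R = (I_{\rm{light}}-I_{\rm{dark}})/(PS)$, EQE $= Rhc/(e\lambda)$, NEP $= {\overline{i_n^2}}^{1/2}/R$ and $D^* = \sqrt{S\Delta f}/{\rm{NEP}}$ with the values quoted in the text. It is a verification aid, not part of the original supplementary material.

# Reproduce the responsivity, EQE, NEP and D* quoted in the text.
I_light = 1.16e-7        # A, photocurrent
I_dark  = 3.63e-11       # A, dark current
P       = 16.92e-6       # W/cm^2, incident light intensity
S       = 0.04           # cm^2, effective device area
h, c, e = 6.626e-34, 3.0e8, 1.602e-19
lam     = 0.808e-6       # m, wavelength
i_noise = 1.45e-13       # A/Hz^(1/2), noise current at 1 Hz
df      = 1.0            # Hz, bandwidth

R     = (I_light - I_dark) / (P * S)      # ~0.171 A/W
EQE   = R * h * c / (e * lam)             # ~0.263 (26.3 %)
NEP   = i_noise / R                       # ~8.5e-13 W/Hz^(1/2)
Dstar = (S * df) ** 0.5 / NEP             # ~2.4e11 Jones
print(R, EQE, NEP, Dstar)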
How do I make Reduce yield all solutions explicitly?
Say I want to do the following:
Reduce[ x+y+z==1, {x,y,z}, Modulus -> 7 ]
then I get a solution with parameters C[1], C[2]:
x == 1 + 6 C[1] + 6 C[2] && y == C[2] && z == C[1]
Now, for non-linear equations, such as
Reduce[x + y^2 + z == 1, {x, y, z}, Modulus -> 7]
I get a complete list of solutions; here are the first 5 of them:
%[[ ;; 5]]
(x == 0 && y == 0 && z == 1) || (x == 0 && y == 1 && z == 0) ||
(x == 0 && y == 2 && z == 4) || (x == 0 && y == 3 && z == 6) || (x == 0 && y == 4 && z == 6)
Is there a way to make the system yield this type of output also in the first case? (Of course, I can just loop over all possibilities of the parameter and get a complete list, but I'd prefer to make Mathematica do it for me if it is possible.)
equation-solving finite-fields
Per Alexandersson
This tutorial Diophantine Polynomial Systems collects useful methods for tackling similar problems. There one finds e.g. :
Mathematica enumerates the solutions explicitly only if the number of integer solutions of the system does not exceed the maximum of the $p^{th}$ power of the value of the system option DiscreteSolutionBound, where p is the dimension of the solution lattice of the equations...
That is a necessary but not sufficient condition. Increasing DiscreteSolutionBound (by default 10) doesn't help here e.g.
SetSystemOptions["ReduceOptions" -> "DiscreteSolutionBound" -> 1000];
On the other hand there are instances of limitations of Reduce when we work with the Modulus option (see e.g. : Solving/Reducing equations in Z/pZ) and most likely this is the case here.
Nonetheless one can make Reduce write all solutions explicitly. A straightforward way is to substitute all generated parameters (since they are protected) with other symbols, and then use Table to write out all cases. We then have to apply Mod[#, 7]& since Reduce[..., Modulus -> 7] is inside Table.
Flatten[
Mod[ Table[{x, y, z} /.ToRules[
Reduce[ x + y + z == 1, {x, y, z}, Modulus -> 7] ] /. {C[1] -> a, C[2] -> b},
{a, 0, 6}, {b, 0, 6}], 7], 1]
{{1, 0, 0}, {0, 1, 0}, {6, 2, 0}, {5, 3, 0}, {4, 4, 0}, {3, 5, 0}, {2, 6, 0},
{0, 0, 1}, {6, 1, 1}, {5, 2, 1}, {4, 3, 1}, {3, 4, 1}, {2, 5, 1}, {1, 6, 1},
{2, 0, 6}, {1, 1, 6}, {0, 2, 6}, {6, 3, 6}, {5, 4, 6}, {4, 5, 6}, {3, 6, 6}}
or we can use the Mod function more extensively in the integers with appropriate bounds on the variables x, y, z e.g.
Mod[{x, y, z} /. {ToRules[ Reduce[x + y + z == 1 && -10 < x < 10 &&
-10 < y < 10 && -10 < z < 10, {x, y, z}, Integers]]
}, 7] // DeleteDuplicates
Reduce is recommended when we have non-linear equations. In the special case of linear equations, namely the Frobenius equations $a_1 x_1 + \dots + a_n x_n = b$ (where the $a_i$ are positive integers, the $x_i$ nonnegative integers, and $b$ is an integer), instead of working with Reduce over the integers we can use FrobeniusSolve for a much more efficient approach (see e.g. Finding the number of solutions to a diophantine equation):
Flatten[ Mod[ Table[ FrobeniusSolve[ {1, 1, 1}, 1 + 7 k], {k, 2}], 7], 1]//
DeleteDuplicates
All these methods yield identical results with respect to the ordering.
But what if I have more than one linear equation, and/or do not know if the equations are linear? – Per Alexandersson Jan 31 '13 at 18:15
Really only have to take k up to (1+1+1)-1, that is, 2. For coeffs {a,b,c} it would be (a+b+c)-1. – Daniel Lichtblau Jan 31 '13 at 19:29
@Paxinum Then you can use Reduce, see edit. – Artes Jan 31 '13 at 21:43
Allright, Thanks! This is the solution I went for, but it seems a bit hackish. So, I suspected that it might be possible to do it in a more natural way, but apparently not... – Per Alexandersson Feb 1 '13 at 10:17
One can also use Block[{C}, Table[..., {C[1], 0, 6}, {C[2], 0, 6}]], instead of replacing the Cs with a, b -- not an important difference, but it might suit someone's style. – Michael E2 May 13 '13 at 12:44
Making it a non-linear equation works :)
Reduce[(x + y + z - 1)^2 == 0, {x, y, z}, Modulus -> 7]
$$ (x=0\land y=0\land z=1)\lor (x=0\land y=1\land z=0)\lor (x=0\land y=2\land z=6)\lor (x=0\land y=3\land z=5)\lor (x=0\land y=4\land z=4)\lor (x=0\land y=5\land z=3)\lor (x=0\land y=6\land z=2)\lor (x=1\land y=0\land z=0)\lor (x=1\land y=1\land z=6)\lor (x=1\land y=2\land z=5)\lor (x=1\land y=3\land z=4)\lor (x=1\land y=4\land z=3)\lor (x=1\land y=5\land z=2)\lor (x=1\land y=6\land z=1)\lor (x=2\land y=0\land z=6)\lor (x=2\land y=1\land z=5)\lor (x=2\land y=2\land z=4)\lor (x=2\land y=3\land z=3)\lor (x=2\land y=4\land z=2)\lor (x=2\land y=5\land z=1)\lor (x=2\land y=6\land z=0)\lor (x=3\land y=0\land z=5)\lor (x=3\land y=1\land z=4)\lor (x=3\land y=2\land z=3)\lor (x=3\land y=3\land z=2)\lor (x=3\land y=4\land z=1)\lor (x=3\land y=5\land z=0)\lor (x=3\land y=6\land z=6)\lor (x=4\land y=0\land z=4)\lor (x=4\land y=1\land z=3)\lor (x=4\land y=2\land z=2)\lor (x=4\land y=3\land z=1)\lor (x=4\land y=4\land z=0)\lor (x=4\land y=5\land z=6)\lor (x=4\land y=6\land z=5)\lor (x=5\land y=0\land z=3)\lor (x=5\land y=1\land z=2)\lor (x=5\land y=2\land z=1)\lor (x=5\land y=3\land z=0)\lor (x=5\land y=4\land z=6)\lor (x=5\land y=5\land z=5)\lor (x=5\land y=6\land z=4)\lor (x=6\land y=0\land z=2)\lor (x=6\land y=1\land z=1)\lor (x=6\land y=2\land z=0)\lor (x=6\land y=3\land z=6)\lor (x=6\land y=4\land z=5)\lor (x=6\land y=5\land z=4)\lor (x=6\land y=6\land z=3) $$
Andrew
Interesting (+1), have you got any justification of the issue? – Artes May 13 '13 at 11:56
This will work for a prime modulus, but not for one under which there is a non-trivial square-root of 0 (e.g. Mod[6^2, 12] == 0). – Michael E2 May 13 '13 at 12:41
@Artes May be Mma explicitly checks for linearity, where it knows how to write parametrized solutions, and in non-linear case uses some other algorithms. – Andrew May 13 '13 at 16:21
@MichaelE2 It can be taken into account Reduce[Reduce[(x + y + z - 1)^2 == 0, {x, y, z}, Modulus -> 36], Modulus -> 6] – Andrew May 13 '13 at 16:38
Here's another way to expand the solutions to a linear equation:
Evaluate@Reduce[x + y + z == 1, {x, y, z}, Modulus -> 7,
GeneratedParameters -> Slot] & @@@ Tuples[Range[0, 6], 2]
(* {x == 1 && y == 0 && z == 0, x == 7 && y == 1 && z == 0, ...,
x == 73 && y == 6 && z == 6} *)
There are two differences from the normal output of Reduce: the output is a list instead of a logical expression, and the values are not the least nonnegative residues modulo 7. One can process the output to suit one's needs: Apply Or, reduce each Integer with % /. n_Integer :> Mod[n, 7], convert to rules with ToRules /@ %, and so on.
Michael E2
Visualized definition of cohomology
I cannot imagine how cohomology is related to graph theory. Actually, I read the formal definition on the wiki and, to be honest, I cannot understand it. For example, I know what homotopy is (in simple terms): a family of functions such that I can continuously convert each of them into another one, and I think this is useful for understanding homology. But is there a similar way to visualize cohomology? (I'm not looking for the exact definition, I want to imagine it, ideally in a graph-theoretic setting.) For more information see the introduction of this paper. I want to understand it in this paper: how is it useful? How should I imagine it?
P.S.1: My field is not related to group theory, and as the author wrote in the introduction, this paper doesn't require deep group-theoretic definitions! I don't want to go deep into group theory; I'm just looking for a simple way to understand these notions.
P.S.2: I think I can imagine what a free group is (which appears in the introduction of the paper); at least via its Cayley graph it seems easy to imagine.
P.S.3: I also asked this on math.stackexchange, but I think this sits between the two fields, so I may get a mathematical answer there and other answers here (from a CS point of view) to understand it well.
graphs group-theory
user742
$\begingroup$ "Wanna" is not an English word. $\endgroup$
– Dave Clarke
$\begingroup$ @DaveClarke, may be its origin is from other languages, but is also english: oxforddictionaries.com/definition/english/wanna?q=wanna also see this :dictionary.reference.com/browse/wanna $\endgroup$
$\begingroup$ Should this be in math.SE? $\endgroup$
– Peter Shor
$\begingroup$ As far as the use of "wanna" is concerned. It may be fine in informal conversational English, but is never used in written form. $\endgroup$
– Nicholas Mancuso
$\begingroup$ @NicholasMancuso: Utter nonsense. The word is absolutely used in informal writing, a fact which is easily verified empirically, and is perfectly acceptable and clearly understood in that context. The (inconsistent and mostly arbitrary, but expected on this site) demands of formal writing style have little to do with the English language as a whole. $\endgroup$
– C. A. McCann
Apparently all algebraic topology is useful for is earning imaginary internet points. More than I expected, I guess... (Actually now that I've finished writing I expect to lose points on this...)
tl;dr: Honestly, I don't think anyone here can give you an easy way to really understand what homology and cohomology are, with just a short description. I made an attempt below, but I took a whole course on the subject and I still don't really know what they are. Particularly if you don't know, or care to know, what a group is. These things are groups, that's sort of the whole point. As they relate to graph theory, you can treat a graph as a simplicial complex of dimension 1. Thus you can consider the homology and cohomology groups of the graph and use them to understand the topology of the graph. Here are some notes by Herbert Edelsbrunner on homology and cohomology, the latter of which provides a useful example.
Before I can define cohomology I must first make sure you understand the definition of homology. For simplicity I'll describe everything using simplicial complexes, but (co)homology can be, and usually are, defined for more complex complexes (see what I did there?). Simplicial complexes are enough for graph theory, at least as far as I've seen.
It turns out that it's really difficult to try to create homeomorphisms between two topological spaces to show that they're topologically equivalent. It's even harder to try and show that no such homeomorphism exists. So the idea is to establish topological invariants: properties that must be equal for two topological spaces if they are topologically equivalent. If the invariant isn't the same for both spaces, then they can't possibly be topologically equivalent. Homology and cohomology groups are two such invariants. They attempt to put a group structure on a topological space, so that we can work with groups and homomorphisms instead of topological spaces and homeomorphisms. Computer scientists like homology groups because they are easy to compute and lead to fast algorithms. The downside is they're much more difficult to visualize.
So how do we describe a topological space in such a way that we can place a group structure on it? Well, we build our topological space with a set of simplices, called a simplicial complex. A 0-dimensional simplex is a vertex, a 1-dim simplex is an edge, a 2-simplex is a triangle, a 3-simplex is a tetrahedron, etc. A valid simplicial complex must obey certain rules about how its simplices connect to one another: the intersection of two simplices must also be a simplex in the set and all subsimplices must be in the set. Notice that graphs can be thought about in this way.
It makes sense to talk about sums of simplices of the same dimension $p$, what we call $p$-chains. So a $p$-chain can be written as $\sum_{i} a_i \sigma_i$ where $\sigma_i$ is a $p$-simplex and $a_i$ is an integer. With this operation we can define the group of $p$-chains $(C_p,+)$, or just $C_p$.
We also have one more operation called the boundary operator, which takes a chain to its boundary. So for an edge $(v,u)$ in some graph, the boundary is just the sum of its vertices $u+v$. The boundary operator of a $p$-simplex $\sigma = [u_0,\ldots, u_n]$ is defined as $\partial \sigma = \sum_{i}[u_0, \ldots, \hat{u_i}, \ldots, u_n]$ where $\hat{u_i}$ means that we remove the $u_i$ vertex, creating a sum of $(p-1)$-simplices. To apply the boundary operator to a $p$-chain $c$, we just apply it to each of its $p$-simplices, $\partial c = \sum_{i}a_i \partial \sigma_i$.
Now there are two very important types of chains which we use to construct homology groups, cycles and boundaries. A $p$-cycle is a $p$-chain $c$ with no boundary, meaning $\partial c = 0$. A $p$-boundary $c$ is a $p$-chain that is the boundary of some $(p+1)$-chain $d$, $c = \partial d$. Once again we can define groups $Z_p$ (our group of $p$-cycles) and $B_p$ (our group of $p$-boundaries). The group $B_p$ is a subgroup of $Z_p$, which is a subgroup of $C_p$.
Homology groups are not a group of functions where one element can be deformed into another. Intuitively, what the homology group is trying to do is characterize the different loops. Think of a torus (donut) which has two distinct loops, colored in the image below. I can move the blue loop anywhere around the torus, but it's still the same loop because it differs only by a boundary.
The elements of a homology group are equivalence classes, where two $p$-cycles in the simplicial complex are in the same equivalence class if and only if they differ only by a boundary chain. That is, if you take two cycles $c,d \in Z_p$, and there exist boundaries $a,b \in B_p$ such that $c+a = d+b$, then $c$ and $d$ are in the same equivalence class. They are constructed as the quotient groups $H_p = Z_p/B_p$.
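To tie this back to graphs concretely, here is a minimal computational sketch (an illustration added here, not part of the original answer; working over the rationals in Python is an assumption): for a graph seen as a 1-dimensional simplicial complex, $\partial_1$ is the vertex-edge incidence matrix, and the ranks of $H_0$ and $H_1$ (the Betti numbers) can be read off its rank.

import numpy as np
from numpy.linalg import matrix_rank

def graph_betti_numbers(n_vertices, edges):
    """Betti numbers of a graph viewed as a 1-dimensional simplicial complex.
    The boundary operator d1 sends an edge (u, v) to v - u; its matrix has
    one row per vertex and one column per edge.  Over the rationals,
    beta_0 = n - rank(d1)  (number of connected components) and
    beta_1 = m - rank(d1)  (number of independent cycles)."""
    m = len(edges)
    d1 = np.zeros((n_vertices, m))
    for j, (u, v) in enumerate(edges):
        d1[u, j] = -1.0   # boundary of edge j is  v - u
        d1[v, j] = 1.0
    r = matrix_rank(d1) if m > 0 else 0
    return n_vertices - r, m - r

# Example: a triangle has one connected component and one independent cycle.
print(graph_betti_numbers(3, [(0, 1), (1, 2), (0, 2)]))   # (1, 1)

For the triangle in the example, $\beta_0 = 1$ (one connected component) and $\beta_1 = 1$ (one independent loop), matching the intuition that homology counts components and holes.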
Now cohomology is much less geometrically intuitive and motivated by algebraic considerations.
We define cohomology groups in terms of cochains. A $p$-cochain is a homomorphism $\psi: C_p \rightarrow G$, where $G$ is the group used for the coefficients $a_i$ (usually $\mathbb{Z}$ or $\mathbb{Z_2}$). Instead of considering a group of chains, we consider the group of cochains, all homomorphisms between $C_p$ and $G$ denoted as $C^p=Hom(C_p, G)$.
Similarly we can define a boundary operator on cochains, as the dual homomorphism of $\partial$, which we denote $\delta: C^{p} \rightarrow C^{p+1}$. Notice that since the boundary operator $\partial$ took $(p+1)$-chains to $p$-chains, the dual homomorphism takes $p$-cochains to $(p+1)$-cochains. Now consider a $(p-1)$-cochain $\psi$ and a $p$-chain $c$, so that $\partial c$ is a $(p-1)$-chain. The coboundary operator $\delta$ requires that $\psi$ applied to $\partial c$ is the same as $\delta \psi$ applied to $c$.
Once we have the coboundary operator, we can define cocycle and coboundary groups, denoted $Z^{p}$ and $B^{p}$, just as we did before. Then the $p$th cohomology group is a set of equivalence classes where two cocycles are equivalent iff they differ by a coboundary. They are constructed as the quotient groups $H^{p}=Z^{p}/B^{p}$. Do you see why this is much less geometrically intuitive?
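For graphs one can still make the coboundary concrete (a small illustrative sketch, again not from the original answer, using integer coefficients): a 0-cochain is just a number attached to every vertex, and its coboundary is the 1-cochain that records the difference of those numbers across each oriented edge, $(\delta\psi)([u,v]) = \psi(v) - \psi(u)$, which is exactly $\psi(\partial [u,v])$.

def coboundary_0(psi, edges):
    """Coboundary of a 0-cochain psi (a dict vertex -> integer): the resulting
    1-cochain assigns psi(v) - psi(u) to each oriented edge (u, v)."""
    return {(u, v): psi[v] - psi[u] for (u, v) in edges}

psi = {0: 3, 1: 5, 2: 5}
edges = [(0, 1), (1, 2), (0, 2)]
print(coboundary_0(psi, edges))   # {(0, 1): 2, (1, 2): 0, (0, 2): 2}

Summing $\delta\psi$ around any cycle gives 0, since $(\delta\psi)(z) = \psi(\partial z) = \psi(0) = 0$; this is the cochain-level counterpart of "differing only by a boundary".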
Marc Khoury
Thank you very much, thoughtful answer, I read it right now, but I think I should read it again tomorrow (with good look to your references).
Towards inhomogeneous loop quantum cosmology: triangulating Bianchi IX with perturbations
Antonino Marciano
Physics , 2010,
Abstract: This brief article sums up results obtained in arXiv:0911.2653, which develops a constrained SU(2) lattice gauge theory in the "dipole" approximation. This is a further step toward an (inhomogeneous) loop quantum cosmology and its merging into loop quantum gravity.
Symplectic geometry and Noether charges for Hopf algebra space-time symmetries
Michele Arzano,Antonino Marciano
Physics , 2007, DOI: 10.1103/PhysRevD.75.081701
Abstract: There has been a certain interest in some recent works in the derivation of Noether charges for Hopf-algebra space-time symmetries. Such analyses relied rather heavily on delicate manipulations of the fields of non-commuting coordinates whose charges were under study. Here we derive the same charges in a "coordinate-independent" symplectic-geometry type of approach and find results that are consistent with the ones of hep-th/0607221.
Big Bounce in Dipole Cosmology
Marco Valerio Battisti,Antonino Marciano
Abstract: We derive the cosmological Big Bounce scenario from the dipole approximation of Loop Quantum Gravity. We show that a non-singular evolution takes place for any matter field and that, by considering a massless scalar field as a relational clock for the dynamics, the semi-classical properties of an initial state are preserved on the other side of the bounce. This model thus enhances the relation between Loop Quantum Cosmology and the full theory.
Triangulated Loop Quantum Cosmology: Bianchi IX and inhomogenous perturbations
Marco Valerio Battisti,Antonino Marciano,Carlo Rovelli
Abstract: We develop the "triangulated" version of loop quantum cosmology, recently introduced in the literature. We focus on the "dipole" cosmology, where space is a three-sphere and the triangulation is formed by two tetrahedra. We show that the discrete fiducial connection has a simple and appealing geometrical interpretation and we correct the ansatz on the relation between the model variables and the Friedmann-Robertson-Walker scale factor. The modified ansatz leads to the convergence of the Hamiltonian constraint to the continuum one. We then ask which degrees of freedom are captured by this model. We show that the model is rich enough to describe the (anisotropic) Bianchi IX Universe, and give the explicit relation between the Bianchi IX variables and the variables of the model. We discuss the possibility of using this path in order to define the quantization of the Bianchi IX Universe. The model contains more degrees of freedom than Bianchi IX, and therefore captures some inhomogeneous degrees of freedom as well. Inhomogeneous degrees of freedom can be expanded in representations of the SU(2) Bianchi IX isometry group, and the dipole model captures the lowest integer representation of these, connected to hyper-spherical harmonic of angular momentum j=1.
Chern-Simons Inflation and Baryogenesis
Stephon Alexander,Antonino Marciano,David Spergel
Abstract: We present a model of inflation based on the interaction between a homogeneous and isotropic configuration of a U(1) gauge field and fermionic charge density $\mathcal{J}_{0}$. The regulated fermionic charge density is generated from a Bunch-Davies vacuum state using the methods of Koksma and Prokopec \cite{Koksma:2009tc}, and is found to redshift as $1/a(\eta)$. The time-like component of the gauge field is sourced by the fermionic charge, leading to a growth in the gauge field $A(\eta)_{0}\sim a(\eta)$. As a result, inflation is dominated by the energy density contained in the gauge field and fermionic charge interaction, $A_{0}\, \mathcal{J}^{0}$, which remains constant during inflation. We also provide a mechanism to generate a net lepton asymmetry. The coupling of a pseudo scalar to the Chern-Simons term converts the gauge field fluctuations into lepton number and all three Sakharov conditions are satisfied during inflation. Finally, the rapid oscillation of the pseudo scalar field near its minimum thermalizes the gauge field and ends inflation. We provide the necessary initial condition on the gauge field and fermionic charge to simultaneously generate enough e-folds and baryon asymmetry index.
Towards a Loop Quantum Gravity and Yang-Mills Unification
Stephon Alexander,Antonino Marciano,Ruggero Altair Tacchi
Physics , 2011, DOI: 10.1016/j.physletb.2012.07.034
Abstract: We propose a new method of unifying gravity and the Standard Model by introducing a spin-foam model. We realize a unification between an SU(2) Yang-Mills interaction and 3D general relativity by considering a Spin(4) Plebanski action. The theory is quantized a la spin-foam by implementing the analogue of the simplicial constraints for the broken phase of the Spin(4) SO(4) symmetry. A natural 4D extension of the theory is shown. We also present a way to recover 2-point correlation functions between the connections as a first way to implement scattering amplitudes between particle states, aiming to connect Loop Quantum Gravity to new physical predictions.
The Hidden Quantum Groups Symmetry of Super-renormalizable Gravity
Stephon Alexander,Antonino Marciano,Leonardo Modesto
Abstract: In this paper we consider the relation between the super-renormalizable theories of quantum gravity (SRQG) studied in [arXiv:1110.5249v2, arXiv:1202.0008] and an underlying non-commutativity of spacetime. For one particular super-renormalizable theory we show that at linear level (quadratic in the Lagrangian) the propagator of the theory is the same as the one we obtain starting from a theory of gravity endowed with θ-Poincaré quantum groups of symmetry. Such a theory is over the so-called θ-Minkowski non-commutative spacetime. We shed new light on this link and show that among the theories considered in [arXiv:1110.5249v2, arXiv:1202.0008], there exists only one non-local and Lorentz invariant super-renormalizable theory of quantum gravity that can be described in terms of a quantum group symmetry structure. We also emphasize contact with pre-existing works in the literature and discuss preservation of the equivalence principle in our framework.
Horava-Lifshitz theory as a Fermionic Aether in Ashtekar gravity
Stephon Alexander,Joao Magueijo,Antonino Marciano
Abstract: We show how Ho\v{r}ava-Lifshitz (HL) theory appears naturally in the Ashtekar formulation of relativity if one postulates the existence of a fermionic field playing the role of aether. The spatial currents associated with this field must be switched off for the equivalence to work. Therefore the field supplies the preferred frame associated with breaking refoliation (time diffeomorphism) invariance, but obviously the symmetry is only spontaneously broken if the field is dynamic. When Dirac fermions couple to the gravitational field via the Ashtekar variables, the low energy limit of HL gravity, recast in the language of Ashtekar variables, naturally emerges (provided the spatial fermion current identically vanishes). HL gravity can therefore be interpreted as a time-like current, or a Fermi aether, that fills space-time, with the Immirzi parameter, a chiral fermionic coupling, and the fermionic charge density fixing the value of the parameter $\lambda$ determining HL theory. This reinterpretation sheds light on some features of HL theory, namely its good convergence properties.
Gravitational origin of the weak interaction's chirality
Stephon Alexander,Antonino Marciano,Lee Smolin
Abstract: We present a new unification of the electro-weak and gravitational interactions based on the joining the weak SU(2) gauge fields with the left handed part of the space-time connection, into a single gauge field valued in the complexification of the local Lorentz group. Hence, the weak interactions emerge as the right handed chiral half of the space-time connection, which explains the chirality of the weak interaction. This is possible, because, as shown by Plebanski, Ashtekar, and others, the other chiral half of the space-time connection is enough to code the dynamics of the gravitational degrees of freedom. This unification is achieved within an extension of the Plebanski action previously proposed by one of us. The theory has two phases. A parity symmetric phase yields, as shown by Speziale, a bi-metric theory with eight degrees of freedom: the massless graviton, a massive spin two field and a scalar ghost. Because of the latter this phase is unstable. Parity is broken in a stable phase where the eight degrees of freedom arrange themselves as the massless graviton coupled to an SU(2) triplet of chirally coupled Yang-Mills fields. It is also shown that under this breaking a Dirac fermion expresses itself as a chiral neutrino paired with a scalar field with the quantum numbers of the Higgs.
Coherent states for FLRW space-times in loop quantum gravity
Elena Magliaro,Antonino Marciano,Claudio Perini
Abstract: We construct a class of coherent spin-network states that capture properties of curved space-times of the Friedmann-Lemaître-Robertson-Walker type on which they are peaked. The data coded by a coherent state are associated to a cellular decomposition of a spatial ($t=$const.) section with dual graph given by the complete five-vertex graph, though the construction can be easily generalized to other graphs. The labels of coherent states are complex $SL(2, \mathbb{C})$ variables, one for each link of the graph, and are computed through a smearing process starting from a continuum extrinsic and intrinsic geometry of the canonical surface. The construction covers both Euclidean and Lorentzian signatures; in the Euclidean case and in the limit of flat space we reproduce the simplicial 4-simplex semiclassical states used in Spin Foams.
Geometrical quench and dynamical quantum phase transition in the $\alpha-T_3$ lattice
28 Apr 2020 • Gulácsi Balázs • Heyl Markus • Dóra Balázs
We investigate quantum quenches and the Loschmidt echo in the two-dimensional, three-band $\alpha-T_3$ model, a close descendant of the dice lattice. By adding a chemical potential to the central site, the integral of the Berry curvature of the bands in different valleys is continuously tunable by the ratio of the hopping integrals between the sublattices... By investigating one and two filled bands, we find that a dynamical quantum phase transition (DQPT), i.e. nonanalytical temporal behaviour in the rate function of the return amplitude, occurs for a certain range of parameters, independent of the band filling. By focusing on the effective low energy description of the model, we find that DQPTs happen not only in the time derivative of the rate function, which is a common feature in two dimensional models, but in the rate function itself. This feature is not related to the change of topological properties of the system during the quench, but rather follows from population inversion for all momenta. This is accompanied by the appearance of dynamical vortices in the time-momentum space of the Pancharatnam geometric phase. The positions of the vortices form an infinite vortex ladder, i.e. a macroscopic phase structure, which allows us to identify the dynamical phases that are separated by the DQPT.
QUANTUM GASES
MESOSCALE AND NANOSCALE PHYSICS
September 2021, 20(9): 3161-3192. doi: 10.3934/cpaa.2021101
Riesz-type representation formulas for subharmonic functions in sub-Riemannian settings
Beatrice Abbondanza 1 and Stefano Biagi 2
Free researcher
Dipartimento di Matematica, Politecnico di Milano, Via Bonardi 9, 20133 Milano, Italy
Received February 2021 Revised May 2021 Published September 2021 Early access June 2021
Fund Project: The second author is member of INdAM and is partially supported by the INdAM-GNAMPA project Metodi topologici per problemi al contorno associati a certe classi di equazioni alle derivate parziali
In this paper we use a potential-theoretic approach to establish various representation theorems and Poisson-Jensen-type formulas for subharmonic functions in sub-Riemannian settings. We also characterize the Radon measures in $ \mathbb{R}^N $ which are the Riesz-measures of bounded-above subharmonic functions in the whole space $ \mathbb{R}^N $.
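For orientation, the classical Euclidean prototype behind such formulas is the Riesz decomposition theorem; the statement below is recalled only as background (it is not quoted from the paper, and sign and normalization conventions vary between authors). If $u$ is subharmonic on an open set $\Omega \subseteq \mathbb{R}^N$ and $D \Subset \Omega$ is a bounded open set with Green function $G_D$, then
$$ u(x) \;=\; h(x) - \int_D G_D(x,y)\, \mathrm{d}\mu(y), \qquad x \in D, $$
where $h$ is the least harmonic majorant of $u$ on $D$ and $\mu$ is the Riesz measure of $u$, a positive Radon measure proportional to the distributional Laplacian $\Delta u$. The paper establishes analogues of formulas of this type, and of the Poisson-Jensen formula, when the classical Laplacian is replaced by operators arising in sub-Riemannian settings.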
Keywords: Representation theorems for subharmonic functions, Poisson-Jensen-type formulas, bounded-above subharmonic functions, Green functions and potentials.
Mathematics Subject Classification: Primary: 35C15, 35H10, 31B05.
Citation: Beatrice Abbondanza, Stefano Biagi. Riesz-type representation formulas for subharmonic functions in sub-Riemannian settings. Communications on Pure & Applied Analysis, 2021, 20 (9) : 3161-3192. doi: 10.3934/cpaa.2021101
Takashi Yamakawa
Round-Optimal Blind Signatures in the Plain Model from Classical and Quantum Standard Assumptions
Shuichi Katsumata Ryo Nishimaki Shota Yamada Takashi Yamakawa
Blind signatures, introduced by Chaum (Crypto'82), allow a user to obtain a signature on a message without revealing the message itself to the signer. Thus far, all existing constructions of round-optimal blind signatures are known to require one of the following: a trusted setup, an interactive assumption, or complexity leveraging. This state of affairs is somewhat justified by the few known impossibility results on constructions of round-optimal blind signatures in the plain model (i.e., without trusted setup) from standard assumptions. However, since all of these impossibility results only hold \emph{under some conditions}, fully (dis)proving the existence of such round-optimal blind signatures has remained open. In this work, we provide an affirmative answer to this problem and construct the first round-optimal blind signature scheme in the plain model from standard polynomial-time assumptions. Our construction is based on various standard cryptographic primitives and also on new primitives that we introduce in this work, all of which are instantiable from __classical and post-quantum__ standard polynomial-time assumptions. The main building block of our scheme is a new primitive called a blind-signature-conforming zero-knowledge (ZK) argument system. The distinguishing feature is that the ZK property holds by using a quantum polynomial-time simulator against non-uniform classical polynomial-time adversaries. Syntactically one can view this as a delayed-input three-move ZK argument with a reusable first message, and we believe it would be of independent interest.
Classical vs Quantum Random Oracles
Takashi Yamakawa Mark Zhandry
In this paper, we study the relationship between security of cryptographic schemes in the random oracle model (ROM) and quantum random oracle model (QROM). First, we introduce a notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol to prove the capability to quantumly access a random oracle to a classical verifier. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on them, we construct digital signature and public key encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that separate the ROM and QROM under a standard cryptographic assumption. On the other hand, we give lifting theorems from security in the ROM to that in the QROM for certain types of cryptographic schemes and security notions. For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures, etc. We also discuss applications of our lifting theorems to quantum query complexity.
A Black-Box Approach to Post-Quantum Zero-Knowledge in Constant Rounds
Nai-Hui Chia Kai-Min Chung Takashi Yamakawa
In a recent seminal work, Bitansky and Shmueli (STOC '20) gave the first construction of a constant round zero-knowledge argument for NP secure against quantum attacks. However, their construction has several drawbacks compared to the classical counterparts. Specifically, their construction only achieves computational soundness, requires strong assumptions of quantum hardness of learning with errors (QLWE assumption) and the existence of quantum fully homomorphic encryption (QFHE), and relies on non-black-box simulation. In this paper, we resolve these issues at the cost of weakening the notion of zero-knowledge to what is called ϵ-zero-knowledge. Concretely, we construct the following protocols: - We construct a constant round interactive proof for NP that satisfies statistical soundness and black-box ϵ-zero-knowledge against quantum attacks assuming the existence of collapsing hash functions, which is a quantum counterpart of collision-resistant hash functions. Interestingly, this construction is just an adapted version of the classical protocol by Goldreich and Kahan (JoC '96) though the proof of ϵ-zero-knowledge property against quantum adversaries requires novel ideas. - We construct a constant round interactive argument for NP that satisfies computational soundness and black-box ϵ-zero-knowledge against quantum attacks only assuming the existence of post-quantum one-way functions. At the heart of our results is a new quantum rewinding technique that enables a simulator to extract a committed message of a malicious verifier while simulating verifier's internal state in an appropriate sense.
Quantum Encryption with Certified Deletion, Revisited: Public Key, Attribute-Based, and Classical Communication
Taiga Hiroka Tomoyuki Morimae Ryo Nishimaki Takashi Yamakawa
Broadbent and Islam (TCC '20) proposed a quantum cryptographic primitive called quantum encryption with certified deletion. In this primitive, a receiver in possession of a quantum ciphertext can generate a classical certificate that the encrypted message is deleted. Although their construction is information-theoretically secure, it is limited to the setting of one-time symmetric key encryption (SKE), where a sender and receiver have to share a common key in advance and the key can be used only once. Moreover, the sender has to generate a quantum state and send it to the receiver over a quantum channel in their construction. Deletion certificates are privately verifiable, which means a verification key for a certificate must be kept secret, in the definition by Broadbent and Islam. However, we can also consider public verifiability. In this work, we present various constructions of encryption with certified deletion. - Quantum communication case: We achieve (reusable-key) public key encryption (PKE) and attribute-based encryption (ABE) with certified deletion. Our PKE scheme with certified deletion is constructed assuming the existence of IND-CPA secure PKE, and our ABE scheme with certified deletion is constructed assuming the existence of indistinguishability obfuscation and one-way function. These two schemes are privately verifiable. - Classical communication case: We also achieve interactive encryption with certified deletion that uses only classical communication. We give two schemes, a privately verifiable one and a publicly verifiable one. The former is constructed assuming the LWE assumption in the quantum random oracle model. The latter is constructed assuming the existence of one-shot signatures and extractable witness encryption.
Secure Software Leasing from Standard Assumptions
Fuyuki Kitagawa Ryo Nishimaki Takashi Yamakawa
Secure software leasing (SSL) is a quantum cryptographic primitive that enables an authority to lease software to a user by encoding it into a quantum state. SSL prevents users from generating authenticated pirated copies of leased software, where authenticated copies indicate those run on legitimate platforms. Although SSL is a relaxed variant of quantum copy protection that prevents users from generating any copy of leased software, it is still meaningful and attractive. Recently, Ananth and La Placa proposed the first SSL scheme. It satisfies a strong security notion called infinite-term security. On the other hand, it has the drawback that it is based on public key quantum money, which has not been instantiated from standard cryptographic assumptions so far. Moreover, their scheme only supports a subclass of evasive functions. In this work, we present SSL schemes that satisfy a security notion called finite-term security based on the learning with errors (LWE) assumption. Finite-term security is weaker than infinite-term security, but it still provides a reasonable security guarantee. Specifically, our contributions consist of the following. - We construct a finite-term secure SSL scheme for pseudorandom functions from the LWE assumption against quantum adversaries. - We construct a finite-term secure SSL scheme for a subclass of evasive functions from the LWE assumption against sub-exponential quantum adversaries. - We construct finite-term secure SSL schemes for the functionalities above with classical communication from the LWE assumption against (sub-exponential) quantum adversaries. SSL with classical communication means that entities exchange only classical information though they run quantum computation locally. Our crucial tool is two-tier quantum lightning, which is introduced in this work and is a relaxed version of quantum lightning. In two-tier quantum lightning schemes, we have a public verification algorithm called semi-verification and a private verification algorithm called full-verification. An adversary cannot generate two possibly entangled quantum states with the same serial number such that one passes the semi-verification and the other passes the full-verification. We show that we can construct a two-tier quantum lightning scheme from the LWE assumption.
Compact Designated Verifier NIZKs from the CDH Assumption Without Pairings
In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. A useful relaxation of NIZK is a designated verifier NIZK (DV-NIZK) proof, where proofs are verifiable only by a designated party in possession of a verification key. A crucial security requirement of DV-NIZKs is unbounded-soundness, which guarantees soundness even if the verification key is reused for multiple statements. Most known DV-NIZKs (except standard NIZKs) for $\mathbf{NP}$ do not have unbounded-soundness. Existing DV-NIZKs for $\mathbf{NP}$ satisfying unbounded-soundness are based on assumptions which are already known to imply standard NIZKs. In particular, it is an open problem to construct (DV-)NIZKs from weak pairing-free group assumptions such as decisional Diffie–Hellman (DH). As a further matter, all constructions of (DV-)NIZKs from DH type assumptions (regardless of whether they are over a pairing-free or pairing group) require the proof size to have a multiplicative overhead $|C| \cdot \mathsf{poly}(\kappa)$, where $|C|$ is the size of the circuit that computes the $\mathbf{NP}$ relation. In this work, we make progress on constructing DV-NIZKs from DH-type assumptions that are not known to imply standard NIZKs. Our results are summarized as follows: - DV-NIZKs for $\mathbf{NP}$ from the computational DH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO'18). - DV-NIZKs for $\mathbf{NP}$ with proof size $|C| + \mathsf{poly}(\kappa)$ from the computational DH assumption over specific pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH type assumption. Moreover, if we further assume the $\mathbf{NP}$ relation to be computable in $\mathbf{NC}^1$ and assume hardness of a (non-static) falsifiable DH type assumption over specific pairing-free groups, the proof size can be made as small as $|w| + \mathsf{poly}(\kappa)$.
Tighter Security Proofs for GPV-IBE in the Quantum Random Oracle Model
Shuichi Katsumata Shota Yamada Takashi Yamakawa
In (STOC, 2008), Gentry, Peikert, and Vaikuntanathan proposed the first identity-based encryption (GPV-IBE) scheme based on a post-quantum assumption, namely the learning with errors assumption. Since their proof was only made in the random oracle model (ROM) instead of the quantum random oracle model (QROM), it remained unclear whether the scheme was truly post-quantum or not. In (CRYPTO, 2012), Zhandry developed new techniques to be used in the QROM and proved security of GPV-IBE in the QROM, hence answering in the affirmative that GPV-IBE is indeed post-quantum. However, since the general technique developed by Zhandry incurred a large reduction loss, there was a wide gap between the concrete efficiency and security level provided by GPV-IBE in the ROM and QROM. Furthermore, regardless of being in the ROM or QROM, GPV-IBE is not known to have a tight reduction in the multi-challenge setting. Considering that in the real-world an adversary can obtain many ciphertexts, it is desirable to have a security proof that does not degrade with the number of challenge ciphertext. In this paper, we provide a much tighter proof for the GPV-IBE in the QROM in the single-challenge setting. In addition, we show that a slight variant of the GPV-IBE has an almost tight reduction in the multi-challenge setting both in the ROM and QROM, where the reduction loss is independent of the number of challenge ciphertext. Our proof departs from the traditional partitioning technique and resembles the approach used in the public key encryption scheme of Cramer and Shoup (CRYPTO, 1998). Our proof strategy allows the reduction algorithm to program the random oracle the same way for all identities and naturally fits the QROM setting where an adversary may query a superposition of all identities in one random oracle query. Notably, our proofs are much simpler than the one by Zhandry and conceptually much easier to follow for cryptographers not familiar with quantum computation. Although at a high level, the techniques used for the single- and multi-challenge setting are similar, the technical details are quite different. For the multi-challenge setting, we rely on the Katz–Wang technique (CCS, 2003) to overcome some obstacles regarding the leftover hash lemma.
Compact NIZKs from Standard Assumptions on Bilinear Maps
A non-interactive zero-knowledge (NIZK) protocol enables a prover to convince a verifier of the truth of a statement without leaking any other information by sending a single message. The main focus of this work is on exploring short pairing-based NIZKs for all NP languages based on standard assumptions. In this regime, the seminal work of Groth, Ostrovsky, and Sahai (J.ACM'12) (GOS-NIZK) is still considered to be the state-of-the-art. Although fairly efficient, one drawback of GOS-NIZK is that the proof size is multiplicative in the circuit size computing the NP relation. That is, the proof size grows by $O(|C|k)$, where $C$ is the circuit for the NP relation and $k$ is the security parameter. By now, there have been numerous follow-up works focusing on shortening the proof size of pairing-based NIZKs, however, thus far, all works come at the cost of relying either on a non-standard knowledge-type assumption or a non-static $q$-type assumption. Specifically, improving the proof size of the original GOS-NIZK under the same standard assumption has remained as an open problem. Our main result is a construction of a pairing-based NIZK for all of NP whose proof size is additive in $|C|$, that is, the proof size only grows by $|C| +poly(k)$, based on the decisional linear (DLIN) assumption. Since the DLIN assumption is the same assumption underlying GOS-NIZK, our NIZK is a strict improvement on their proof size. As by-products of our main result, we also obtain the following two results: (1) We construct a perfectly zero-knowledge NIZK (NIPZK) for NP relations computable in NC1 with proof size $|w|poly(k)$ where $|w|$ is the witness length based on the DLIN assumption. This is the first pairing-based NIPZK for a non-trivial class of NP languages whose proof size is independent of $|C|$ based on a standard assumption. (2) We construct a universally composable (UC) NIZK for NP relations computable in NC1 in the erasure-free adaptive setting whose proof size is $|w|poly(k)$ from the DLIN assumption. This is an improvement over the recent result of Katsumata, Nishimaki, Yamada, and Yamakawa (CRYPTO'19), which gave a similar scheme based on a non-static $q$-type assumption. The main building block for all of our NIZKs is a constrained signature scheme with decomposable online-offline efficiency. This is a property which we newly introduce in this paper and construct from the DLIN assumption. We believe this construction is of an independent interest.
Adaptively Secure Constrained Pseudorandom Functions in the Standard Model
Alex Davidson Shuichi Katsumata Ryo Nishimaki Shota Yamada Takashi Yamakawa
Constrained pseudorandom functions (CPRFs) allow learning "constrained" PRF keys that can evaluate the PRF on a subset of the input space, or based on some predicate. First introduced by Boneh and Waters [AC'13], Kiayias et al. [CCS'13] and Boyle et al. [PKC'14], they have shown to be a useful cryptographic primitive with many applications. These applications often require CPRFs to be adaptively secure, which allows the adversary to learn PRF values and constrained keys in an arbitrary order. However, there is no known construction of adaptively secure CPRFs based on a standard assumption in the standard model for any non-trivial class of predicates. Moreover, even if we rely on strong tools such as indistinguishability obfuscation (IO), the state-of-the-art construction of adaptively secure CPRFs in the standard model only supports the limited class of NC1 predicates. In this work, we develop new adaptively secure CPRFs for various predicates from different types of assumptions in the standard model. Our results are summarized below. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for t-conjunctive normal form (t-CNF) predicates from one-way functions (OWFs) where t is a constant. Here, O(1)-collusion-resistance means that we can allow the adversary to obtain a constant number of constrained keys. Note that t-CNF includes bit-fixing predicates as a special case. - We construct adaptively secure and single-key CPRFs for inner-product predicates from the learning with errors (LWE) assumption. Here, single-key means that we only allow the adversary to learn one constrained key. Note that inner-product predicates include t-CNF predicates for a constant t as a special case. Thus, this construction supports a more expressive class of predicates than that supported by the first construction though it loses the collusion-resistance and relies on a stronger assumption. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for all circuits from the LWE assumption and indistinguishability obfuscation (IO). The first and second constructions are the first CPRFs for any non-trivial predicates to achieve adaptive security outside of the random oracle model or relying on strong cryptographic assumptions. Moreover, the first construction is also the first to achieve any notion of collusion-resistance in this setting. Besides, we prove that the first and second constructions satisfy weak 1-key privacy, which roughly means that a constrained key does not reveal the corresponding constraint. The third construction is an improvement over previous adaptively secure CPRFs for less expressive predicates based on IO in the standard model.
NIZK from SNARG
Fuyuki Kitagawa Takahiro Matsuda Takashi Yamakawa
We give a construction of a non-interactive zero-knowledge (NIZK) argument for all NP languages based on a succinct non-interactive argument (SNARG) for all NP languages and a one-way function. The succinctness requirement for the SNARG is rather mild: We only require that the proof size be $|\pi|=\mathsf{poly}(\lambda)(|x|+|w|)^c$ for some constant $c<1/2$, where $|x|$ is the statement length, $|w|$ is the witness length, and $\lambda$ is the security parameter. Especially, we do not require anything about the efficiency of the verification. Based on this result, we also give a generic conversion from a SNARG to a zero-knowledge SNARG assuming the existence of CPA secure public-key encryption. For this conversion, we require a SNARG to have efficient verification, i.e., the computational complexity of the verification algorithm is $\mathsf{poly}(\lambda)(|x|+|w|)^{o(1)}$. Before this work, such a conversion was only known if we additionally assume the existence of a NIZK. Along the way of obtaining our result, we give a generic compiler to upgrade a NIZK for all NP languages with non-adaptive zero-knowledge to one with adaptive zero-knowledge. Though this can be shown by carefully combining known results, to the best of our knowledge, no explicit proof of this generic conversion has been presented.
Classical Verification of Quantum Computations with Efficient Verifier
In this paper, we extend the protocol of classical verification of quantum computations (CVQC) recently proposed by Mahadev to make the verification efficient. Our result is obtained in the following three steps: - We show that parallel repetition of Mahadev's protocol has negligible soundness error. This gives the first constant-round CVQC protocol with negligible soundness error. In this part, we only assume the quantum hardness of the learning with errors (LWE) problem, similarly to Mahadev's work. - We construct a two-round CVQC protocol in the quantum random oracle model (QROM), where a cryptographic hash function is idealized to be a random function. This is obtained by applying the Fiat-Shamir transform to the parallel repetition version of Mahadev's protocol. - We construct a two-round CVQC protocol with an efficient verifier in the CRS+QRO model, where both prover and verifier can access a (classical) common reference string generated by a trusted third party in addition to quantum access to the QRO. Specifically, the verifier can verify a $\mathsf{QTIME}(T)$ computation in time $\mathsf{poly}(\lambda,\log T)$, where $\lambda$ is the security parameter. For proving soundness, we assume that a standard-model instantiation of our two-round protocol with a concrete hash function (say, SHA-3) is sound, and assume the existence of post-quantum indistinguishability obfuscation and post-quantum fully homomorphic encryption in addition to the quantum hardness of the LWE problem.
Finding Collisions in a Quantum World: Quantum Black-Box Separation of Collision-Resistance and One-Wayness
Akinori Hosoyamada Takashi Yamakawa
Since the celebrated work of Impagliazzo and Rudich (STOC 1989), a number of black-box impossibility results have been established. However, these works only ruled out classical black-box reductions among cryptographic primitives. Therefore it may be possible to overcome these impossibility results by using quantum reductions. To exclude such a possibility, we have to extend these impossibility results to the quantum setting. In this paper, we study black-box impossibility in the quantum setting. We first formalize a quantum counterpart of fully-black-box reduction following the formalization by Reingold, Trevisan and Vadhan (TCC 2004). Then we prove that there is no quantum fully-black-box reduction from collision-resistant hash functions to one-way permutations (or even trapdoor permutations). We take both of classical and quantum implementations of primitives into account. This is an extension to the quantum setting of the work of Simon (Eurocrypt 1998) who showed a similar result in the classical setting.
Adaptively Secure Inner Product Encryption from LWE
Attribute-based encryption (ABE) is an advanced form of encryption scheme allowing for access policies to be embedded within the secret keys and ciphertexts. By now, we have ABEs supporting numerous types of policies based on hardness assumptions over bilinear maps and lattices. However, one of the distinguishing differences between ABEs based on these two breeds of assumptions is that the former can achieve adaptive security for quite expressible policies (e.g., inner-products, boolean formula) while the latter can not. Recently, two adaptively secure lattice-based ABEs have appeared and changed the state of affairs: a non-zero inner-product (NIPE) encryption by Katsumata and Yamada (PKC'19) and an ABE for t-CNF policies by Tsabary (CRYPTO'19). However, the policies supported by these ABEs are still quite limited and do not embrace the more interesting policies that have been studied in the literature. Notably, constructing an adaptively secure inner-product encryption (IPE) based on lattices still remains open. In this work, we propose the first adaptively secure IPE based on the learning with errors (LWE) assumption with sub-exponential modulus size (without resorting to complexity leveraging). Concretely, our IPE supports inner-products over the integers Z with polynomial sized entries and satisfies adaptively weakly-attribute-hiding security. We also show how to convert such an IPE to an IPE supporting inner-products over Z_p for a polynomial-sized p and a fuzzy identity-based encryption (FIBE) for small and large universes. Our result builds on the ideas presented in Tsabary (CRYPTO'19), which uses constrained pseudorandom functions (CPRF) in a semi-generic way to achieve adaptively secure ABEs, and the recent lattice-based adaptively secure CPRF for inner-products by Davidson et al. (CRYPTO'20). Our main observation is realizing how to weaken the conforming CPRF property introduced in Tsabary (CRYPTO'19) by taking advantage of the specific linearity property enjoyed by the lattice evaluation algorithms by Boneh et al. (EUROCRYPT'14).
Leakage-Resilient Identity-Based Encryption in Bounded Retrieval Model with Nearly Optimal Leakage-Ratio
Ryo Nishimaki Takashi Yamakawa
We propose new constructions of leakage-resilient public-key encryption (PKE) and identity-based encryption (IBE) schemes in the bounded retrieval model (BRM). In the BRM, adversaries are allowed to obtain at most $$\ell$$-bit leakage from a secret key, and we can increase $$\ell$$ only by increasing the size of secret keys without losing efficiency in any other performance measure. We call $$\ell /|\mathsf{sk}|$$ the leakage-ratio, where $$|\mathsf{sk}|$$ denotes the bit-length of a secret key. Several PKE/IBE schemes in the BRM are known. However, none of these constructions achieve a constant leakage-ratio under a standard assumption in the standard model. Our PKE/IBE schemes are the first schemes in the BRM that achieve leakage-ratio $$1-\epsilon$$ for any constant $$\epsilon >0$$ under standard assumptions in the standard model. As in previous works, we use identity-based hash proof systems (IB-HPS) to construct IBE schemes in the BRM. It is known that a parameter for IB-HPS called the universality-ratio is translated into the leakage-ratio of the resulting IBE scheme in the BRM. We construct an IB-HPS with universality-ratio $$1-\epsilon$$ for any constant $$\epsilon >0$$ based on any inner-product predicate encryption (IPE) scheme with compact secret keys. Such IPE schemes exist under the d-linear, subgroup decision, learning with errors, or computational bilinear Diffie-Hellman assumptions. As a result, we obtain IBE schemes in the BRM with leakage-ratio $$1-\epsilon$$ under any of these assumptions. Our PKE schemes are immediately obtained from our IBE schemes.
Adaptively Single-Key Secure Constrained PRFs for $\mathrm{NC}^1$
Nuttapong Attrapadung Takahiro Matsuda Ryo Nishimaki Shota Yamada Takashi Yamakawa
We present a construction of an adaptively single-key secure constrained PRF (CPRF) for $$\mathbf{NC}^1$$ assuming the existence of indistinguishability obfuscation (IO) and the subgroup hiding assumption over a (pairing-free) composite-order group. This is the first construction of such a CPRF in the standard model without relying on a complexity leveraging argument. To achieve this, we first introduce the notion of a partitionable CPRF, which is a CPRF that accommodates partitioning techniques, and combine it with shadow copy techniques often used in the dual system encryption methodology. We present a construction of a partitionable CPRF for $$\mathbf{NC}^1$$ based on IO and the subgroup hiding assumption over a (pairing-free) group. We finally prove that an adaptively single-key secure CPRF for $$\mathbf{NC}^1$$ can be obtained from a partitionable CPRF for $$\mathbf{NC}^1$$ and IO.
Designated Verifier/Prover and Preprocessing NIZKs from Diffie-Hellman Assumptions
In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. Thus far, numerous constructions of NIZKs have been provided in the common reference string (CRS) model (CRS-NIZK) from various assumptions; however, it still remains a long-standing open problem to construct them from tools such as pairing-free groups or lattices. Recently, Kim and Wu (CRYPTO'18) made great progress regarding this problem and constructed the first lattice-based NIZK in a relaxed model called NIZKs in the preprocessing model (PP-NIZKs). In this model, there is a trusted statement-independent preprocessing phase where secret information is generated for the prover and verifier. Depending on whether that secret information can be made public, PP-NIZK captures CRS-NIZK, designated-verifier NIZK (DV-NIZK), and designated-prover NIZK (DP-NIZK) as special cases. It was left as an open problem by Kim and Wu whether we can construct such NIZKs from weak pairing-free group assumptions such as DDH. Furthermore, all constructions of NIZKs from Diffie-Hellman (DH) type assumptions (regardless of whether they are over pairing-free or pairing groups) require the proof size to have a multiplicative overhead $$|C| \cdot \mathsf{poly}(\kappa)$$, where $$|C|$$ is the size of the circuit that computes the $$\mathbf{NP}$$ relation. In this work, we make progress in constructing (DV, DP, PP)-NIZKs with varying flavors from DH-type assumptions. Our results are summarized as follows: (1) DV-NIZKs for $$\mathbf{NP}$$ from the CDH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO'18). (2) DP-NIZKs for $$\mathbf{NP}$$ with short proof size from a DH-type assumption over pairing groups. Here, the proof size has an additive overhead $$|C| + \mathsf{poly}(\kappa)$$ rather than a multiplicative overhead $$|C| \cdot \mathsf{poly}(\kappa)$$. This is the first construction of such NIZKs (including CRS-NIZKs) that does not rely on the LWE assumption, fully homomorphic encryption, indistinguishability obfuscation, or non-falsifiable assumptions. (3) PP-NIZKs for $$\mathbf{NP}$$ with short proof size from the DDH assumption over pairing-free groups. This is the first PP-NIZK that achieves a short proof size from a weak and static DH-type assumption such as DDH. Similarly to the above DP-NIZK, the proof size is $$|C| + \mathsf{poly}(\kappa)$$. This too serves as a solution to the open problem posed by Kim and Wu (CRYPTO'18). Along the way, we construct two new homomorphic authentication (HomAuth) schemes which may be of independent interest.
Adaptively Secure and Succinct Functional Encryption: Improving Security and Efficiency, Simultaneously
Fuyuki Kitagawa Ryo Nishimaki Keisuke Tanaka Takashi Yamakawa
Functional encryption (FE) is advanced encryption that enables us to issue functional decryption keys where functions are hardwired. When we decrypt a ciphertext of a message m by a functional decryption key where a function f is hardwired, we can obtain f(m) and nothing else. We say FE is selectively or adaptively secure when target messages are chosen at the beginning or after function queries are sent, respectively. In the weakly-selective setting, function queries are also chosen at the beginning. We say FE is single-key/collusion-resistant when it is secure against adversaries that are given only-one/polynomially-many functional decryption keys, respectively. We say FE is sublinearly-succinct/succinct when the running time of an encryption algorithm is sublinear/poly-logarithmic in the function description size, respectively. In this study, we propose a generic transformation from weakly-selectively secure, single-key, and sublinearly-succinct (we call "building block") PKFE for circuits into adaptively secure, collusion-resistant, and succinct (we call "fully-equipped") one for circuits. Our transformation relies on neither concrete assumptions such as learning with errors nor indistinguishability obfuscation (IO). This is the first generic construction of fully-equipped PKFE that does not rely on IO. As side-benefits of our results, we obtain the following primitives from the building block PKFE for circuits: (1) laconic oblivious transfer, (2) a succinct garbling scheme for Turing machines, (3) selectively secure, collusion-resistant, and succinct PKFE for Turing machines, (4) low-overhead adaptively secure traitor tracing, and (5) key-dependent message secure and leakage-resilient public-key encryption. We also obtain a generic transformation from simulation-based adaptively secure garbling schemes that satisfy a natural decomposability property into adaptively indistinguishable garbling schemes whose online complexity does not depend on the output length.
Exploring Constructions of Compact NIZKs from Various Assumptions
A non-interactive zero-knowledge (NIZK) protocol allows a prover to non-interactively convince a verifier of the truth of a statement without leaking any other information. In this study, we explore shorter NIZK proofs for all $$\mathbf{NP}$$ languages. Our primary interest is NIZK proofs from falsifiable pairing/pairing-free group-based assumptions. Thus far, NIZKs in the common reference string model (CRS-NIZKs) for $$\mathbf{NP}$$ based on falsifiable pairing-based assumptions all require a proof size at least as large as $$O(|C| \kappa)$$, where C is a circuit computing the $$\mathbf{NP}$$ relation and $$\kappa$$ is the security parameter. This holds true even for the weaker designated-verifier NIZKs (DV-NIZKs). Notably, constructing a (CRS, DV)-NIZK with proof size achieving an additive overhead $$O(|C|) + \mathsf{poly}(\kappa)$$, rather than a multiplicative overhead $$|C| \cdot \mathsf{poly}(\kappa)$$, based on any falsifiable pairing-based assumption is an open problem. In this work, we present various techniques for constructing NIZKs with compact proofs, i.e., proofs smaller than $$O(|C|) + \mathsf{poly}(\kappa)$$, and make progress regarding the above situation. Our results are summarized below. We construct CRS-NIZK for all $$\mathbf{NP}$$ with proof size $$|C| + \mathsf{poly}(\kappa)$$ from a (non-static) falsifiable Diffie-Hellman (DH) type assumption over pairing groups. This is the first CRS-NIZK to achieve a compact proof without relying on either lattice-based assumptions or non-falsifiable assumptions. Moreover, a variant of our CRS-NIZK satisfies universal composability (UC) in the erasure-free adaptive setting. Although it is limited to $$\mathbf{NP}$$ relations in $$\mathbf{NC}^1$$, the proof size is $$|w| \cdot \mathsf{poly}(\kappa)$$ where w is the witness, and in particular, it matches the state-of-the-art UC-NIZK proposed by Cohen, shelat, and Wichs (CRYPTO'19) based on lattices. We construct (multi-theorem) DV-NIZKs for $$\mathbf{NP}$$ with proof size $$|C| + \mathsf{poly}(\kappa)$$ from the computational DH assumption over pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH-type assumption. Moreover, if we further assume the $$\mathbf{NP}$$ relation to be computable in $$\mathbf{NC}^1$$ and assume hardness of a (non-static) falsifiable DH-type assumption over pairing-free groups, the proof size can be made as small as $$|w| + \mathsf{poly}(\kappa)$$. Another related but independent issue is that all (CRS, DV)-NIZKs require the running time of the prover to be at least $$|C| \cdot \mathsf{poly}(\kappa)$$. Considering that there exist NIZKs with efficient verifiers whose running time is strictly smaller than |C|, it is an interesting problem whether we can construct prover-efficient NIZKs. To this end, we construct prover-efficient CRS-NIZKs for $$\mathbf{NP}$$ with compact proofs through a generic construction using laconic functional evaluation schemes (Quach, Wee, and Wichs (FOCS'18)). This is the first NIZK in any model where the running time of the prover is strictly smaller than the time it takes to compute the circuit C computing the $$\mathbf{NP}$$ relation. Finally, perhaps of independent interest, we formalize the notion of homomorphic equivocal commitments, which we use as building blocks to obtain the first result, and show how to construct them from pairing-based assumptions.
Quantum Random Oracle Model with Auxiliary Input
Minki Hhan Keita Xagawa Takashi Yamakawa
The random oracle model (ROM) is an idealized model where hash functions are modeled as random functions that are only accessible as oracles. Although the ROM has been used for proving many cryptographic schemes, it has (at least) two problems. First, the ROM does not capture quantum adversaries. Second, it does not capture non-uniform adversaries that perform preprocessing. To deal with these problems, Boneh et al. (Asiacrypt'11) proposed using the quantum ROM (QROM) to argue post-quantum security, and Unruh (CRYPTO'07) proposed the ROM with auxiliary input (ROM-AI) to argue security against preprocessing attacks. However, to the best of our knowledge, no work has dealt with the above two problems simultaneously. In this paper, we consider a model that we call the QROM with (classical) auxiliary input (QROM-AI), which deals with the above two problems simultaneously, and study the security of cryptographic primitives in this model. That is, we give security bounds for one-way functions, pseudorandom generators, (post-quantum) pseudorandom functions, and (post-quantum) message authentication codes in the QROM-AI. We also study security bounds in the presence of quantum auxiliary inputs. In other words, we show a security bound for one-wayness of random permutations (instead of random functions) in the presence of quantum auxiliary inputs. This resolves an open problem posed by Nayebi et al. (QIC'15). In the context of complexity theory, this implies $$\mathsf{NP} \cap \mathsf{coNP} \not\subseteq \mathsf{BQP/qpoly}$$ relative to a random permutation oracle, which also answers an open problem posed by Aaronson (ToC'05).
Tightly-Secure Key-Encapsulation Mechanism in the Quantum Random Oracle Model
Tsunekazu Saito Keita Xagawa Takashi Yamakawa
Constrained PRFs for $\mathrm{NC}^1$ in Traditional Groups
We propose new constrained pseudorandom functions (CPRFs) in traditional groups. Traditional groups mean cyclic and multiplicative groups of prime order that were widely used in the 1980s and 1990s (sometimes called "pairing-free" groups). Our main constructions are as follows. We propose a selectively single-key secure CPRF for circuits with depth $$O(\log n)$$ (that is, $$\mathbf{NC}^1$$ circuits) in traditional groups, where n is the input size. It is secure under the L-decisional Diffie-Hellman inversion (L-DDHI) assumption in the group of quadratic residues $$\mathbb{QR}_q$$ and the decisional Diffie-Hellman (DDH) assumption in a traditional group of order q in the standard model. We propose a selectively single-key private bit-fixing CPRF in traditional groups. It is secure under the DDH assumption in any prime-order cyclic group in the standard model. We propose adaptively single-key secure CPRFs for $$\mathbf{NC}^1$$ and private bit-fixing CPRFs in the random oracle model. To achieve security in the standard model, we develop a new technique using correlated-input secure hash functions.
Adversary-Dependent Lossy Trapdoor Function from Hardness of Factoring Semi-smooth RSA Subgroup Moduli
Takashi Yamakawa Shota Yamada Goichiro Hanaoka Noboru Kunihiro
Self-bilinear Map on Unknown Order Groups from Indistinguishability Obfuscation and Its Applications
MC-SleepNet: Large-scale Sleep Stage Scoring in Mice by Deep Neural Networks
Masato Yamabe1,
Kazumasa Horie2,
Hiroaki Shiokawa2,
Hiromasa Funato ORCID: orcid.org/0000-0002-2787-97003,
Masashi Yanagisawa ORCID: orcid.org/0000-0002-7358-40223 &
Hiroyuki Kitagawa2
Automated sleep stage scoring for mice is in high demand for sleep research, since manual scoring requires considerable human expertise and efforts. The existing automated scoring methods do not provide the scoring accuracy required for practical use. In addition, the performance of such methods has generally been evaluated using rather small-scale datasets, and their robustness against individual differences and noise has not been adequately verified. This research proposes a novel automated scoring method named "MC-SleepNet", which combines two types of deep neural networks. Then, we evaluate its performance using a large-scale dataset that contains 4,200 biological signal records of mice. The experimental results show that MC-SleepNet can automatically score sleep stages with an accuracy of 96.6% and kappa statistic of 0.94. In addition, we confirm that the scoring accuracy does not significantly decrease even if the target biological signals are noisy. These results suggest that MC-SleepNet is very robust against individual differences and noise. To the best of our knowledge, evaluations using such a large-scale dataset (containing 4,200 records) and high scoring accuracy (96.6%) have not been reported in previous related studies.
Sleep in mice consists of three stages: WAKE, non-REM (non-rapid eye movement sleep), and REM (rapid eye movement sleep). These stages play different roles in the lives of mice, and can be identified by inspecting electroencephalogram (EEG) and electromyogram (EMG) signals. Identifying the sleep stages of mice from their EEG and EMG signals (hereinafter referred to as "sleep stage scoring") is one of the most fundamental inspections in sleep research. For example, the symptoms of sleep disorders and the effects of sleeping pills can easily be verified by analyzing the rates/transitions of sleep stages.
Traditionally, sleep stage scoring has been conducted through manual inspection by human experts; therefore this task requires considerable time and specialized knowledge about sleep in mice. Thus, manual sleep stage scoring is a serious bottleneck in sleep research. To resolve this problem, several automated sleep stage scoring methods have been proposed1,2,3,4,5,6,7,8. Unfortunately, these methods have the following limitations and shortcomings, when used in practical sleep research.
First, these methods cannot achieve the accuracy level required for practical research use. Generally, the inter-rater agreement rate of manual sleep stage scoring in mice is reported to be approximately 95% or greater9. To replace manual scoring, automated methods need to achieve the same accuracy level. The existing methods cannot achieve this level1,2,3,4,5,6,7,8 (the state-of-the-art method achieves nearly 95%4,6).
Second, their robustness against individual differences and noise has not been adequately verified. In practical sleep research environments, sleep records of various mice need to be analyzed (a record is a pair of EEG and EMG time-series signals recorded for approximately 24–72 hours from a mouse during one measurement). In addition, these records often include noise, such as motion artifacts and power supply noise. Therefore, the scoring methods should be robust against such noise. However, the existing methods are generally evaluated using rather small-scale datasets, such as those consisting of several dozen mouse records, which often contain only clear signals. Therefore, their robustness has not been adequately verified, and it is unclear whether these methods are actually effective in practical sleep research.
To address this problem, this study aims to propose a novel automated sleep stage scoring method that can achieve more accurate and more robust results than the existing methods and to verify its practical performance by experiments using a large-scale dataset.
Our proposed method is named "MC-SleepNet", and it employs two types of deep neural networks: a convolutional neural network (CNN) and long short-term memory (LSTM)10,11.
CNN is a neural network model used for locating effective features of EEG and EMG signals and extracting them. Several studies have recently employed CNNs for biological signal processing, such as waveform detection for ECG/EEG signals of humans12,13,14,15, and showed that CNNs can extract effective features in biological signals. By using a CNN, MC-SleepNet can automatically extract more effective features than the hand-engineered features often used in conventional automated scoring methods. For example, it is very difficult to manually hand-craft filters to capture features to identify individual differences and noise. In contrast, CNNs can locate these features through the training phase, which will lead to improvements in the overall scoring accuracy and robustness.
LSTM is a type of recurrent neural network and is an effective model for long time-series data. In contrast to typical recurrent neural networks, the LSTM model has a "forget gate" structure for adjusting the period to maintain the information. Since the forget gate widens the usage of past information, LSTM can handle long time-series data well. This property will be helpful for modeling the sleep cycles and sleep stage transition rules.
In addition, an extension of the LSTM model, bidirectional LSTM (bi-LSTM), has also been proposed16. This model consists of two LSTM layers with different roles. One layer processes the data in temporal order, similar to the typical LSTM, and the other layer processes it in reverse order (from the future to the past). By employing two different LSTM layers, bi-LSTM can consider both past and future states. MC-SleepNet uses bi-LSTM.
Another feature of this study is the performance evaluation of MC-SleepNet using a large-scale dataset containing the sleep records of 4,200 phenotypically wild-type healthy mice obtained through a large-scale genetic screen17, which are used in practical sleep research. In addition, some of these records include considerable noise. Using this large-scale, real-world dataset, we verified the robustness of MC-SleepNet against individual differences and noise as well as its accuracy. To demonstrate the advantages of MC-SleepNet, we compare its performance with that of the existing automated sleep-scoring methods.
Sleep Stages of Mice
The sleep/wake state of mice can be divided into three sleep stages: WAKE, non-REM, and REM. Each stage is characterized by its own features in EEG and EMG signals as follows. In particular, the peak frequency of EEG signals and the amplitude of EMG signals are the key features.
In WAKE stages, mice are awake or drowsy, and both their brains and bodies are active. Therefore, the EEG signals show mixed frequencies, and the amplitude of EMG signals tends to be large (Fig. 1A).
Examples of EEG/EMG signals in each stage. (A) WAKE, (B) non-REM, and (C) REM.
Non-REM
In non-REM stages, brains exhibit cortical synchronization, and bodies are resting. Thus, the peak EEG frequencies become lower and the amplitude of EEG signals becomes higher than in the WAKE stages, and the amplitude of EMG signals tends to be smaller (Fig. 1B). Non-REM is the major sleep stage and accounts for approximately 90% of total sleep.
In REM stages, brains are as active as in WAKE stages, whereas their bodies exhibit "REM atonia" such that the EMG amplitudes reach their lowest levels (Fig. 1C). Note that healthy mice in REM stages remain completely stationary, except for breathing, eye movement, and several muscle twitches.
Generally, the sleep stages change cyclically, and several rules govern sleep stage transitions. For example, it is known that REM stages occur at regular intervals during sleep, and persist for the duration of several tens of seconds to several minutes. Furthermore, REM stages do not occur immediately after WAKE stages in healthy mice. Understanding these transition rules will be helpful for improving the scoring accuracy.
Manual sleep stage scoring typically consists of two steps: biological signal acquisition and sleep stage identification. In the biological signal acquisition step, experts measure time-series EEG and EMG signals from the brain surface and neck muscles of mice for hours. The measured signals are divided into constant time intervals of 4 to 20 seconds, which are often called epochs. In the sleep stage identification step, the experts visually check the frequency components and amplitudes of the EEG and EMG signals, and assign one sleep stage label to each epoch. They essentially decide the label based on both the dominant features in each epoch and the sleep stage transition rules.
This section introduces our proposed sleep stage scoring method, "MC-SleepNet", and presents the experimental evaluation results using a large-scale dataset.
MC-SleepNet
MC-SleepNet is an automated sleep stage scoring method that identifies the sleep stage of each epoch. Here, we describe the structure and processing of MC-SleepNet through an implementation example that we used in the experiment. The hyperparameters were chosen by referring to a deep learning model for scoring human sleep stages18. MC-SleepNet consists of three phases: signal preprocessing, feature extraction, and scoring.
Signal preprocessing
As mentioned above, in EMG signals, the amplitude is more informative than the frequency-domain features for identifying sleep stages. For this reason, we preprocess EMG signals using a moving root-mean-square (RMS) filter. We adopt a filter width of 1 second (Eq. 1) to emphasize the amplitude.
$$y_{t}=\frac{1}{F_{s}-1}\sqrt{\sum _{i=1}^{F_{s}}(x_{t-i})^{2}},$$
where \(x_t\) and \(y_t\) are the input and output signals at time t, respectively, and \(F_s\) is the sampling frequency of the input signal.
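The filter can be written down directly from this definition. The sketch below is a direct transcription of Eq. 1 in NumPy; the loop-based form is chosen for clarity rather than speed, and the sampling frequency is passed in as a parameter.

```python
import numpy as np

def moving_rms(x, fs):
    """Moving root-mean-square filter with a 1-second window (Eq. 1).

    x  : raw EMG signal (1-D array)
    fs : sampling frequency, i.e., the number of samples in the 1-second window
    """
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for t in range(fs, len(x)):
        window = x[t - fs:t]                     # the fs samples x_{t-1}, ..., x_{t-fs}
        y[t] = np.sqrt(np.sum(window ** 2)) / (fs - 1)
    return y
```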
MC-SleepNet does not preprocess EEG signals; rather, it analyzes the measured signals directly. EEG signals have both frequency-domain features and time-domain features. Therefore, we do not employ preprocessing, such as fast Fourier transform (FFT) to avoid disturbing CNN feature extraction.
In the preprocessing phase, EEG and preprocessed EMG signals are split into 20-seconds epochs, and a series of epochs is input to the feature extraction phase.
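As a small illustration of this step, the sketch below splits a continuous signal into non-overlapping 20-second epochs; the 250 Hz sampling rate is an assumption used only for illustration, not a value stated in the text.

```python
import numpy as np

def split_into_epochs(signal, fs=250, epoch_sec=20):
    """Split a 1-D signal into non-overlapping epochs of epoch_sec seconds."""
    samples_per_epoch = fs * epoch_sec
    n_epochs = len(signal) // samples_per_epoch
    # drop any trailing partial epoch and reshape to (n_epochs, samples_per_epoch)
    return np.asarray(signal)[:n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)
```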
MC-SleepNet uses a CNN to locate effective features automatically from EEG and EMG signals (Fig. 2). The feature extraction module utilizes three CNN blocks to extract different types of features in EEG and EMG signals.
Structure of MC-SleepNet. MC-SleepNet uses eight types of layers: convolution, max-pooling, dropout, concatenate, element-wise add, bi-LSTM, full-connection, and softmax layers. The parameters of each layer are illustrated in the boxes.
In CNNs, the width of a filter is closely related to the main frequency of features extracted by the filter. For example, wide filters are effective in extracting low-frequency features such as the amplitude of signals, whereas narrow filters are good for locating high-frequency features. Thus, to extract both features from EEG signals, MC-SleepNet combines two CNN blocks that have wide and narrow filters, respectively. In contrast, EMG signals lack high-frequency feature components due to the preprocessing phase; therefore, MC-SleepNet uses only one CNN block of wide filters for EMG signals.
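A minimal PyTorch sketch of this dual-branch feature extractor is shown below. The channel counts, concrete filter widths, pooling sizes, dropout rate, and the 250 Hz sampling rate are illustrative assumptions; only the overall structure (a narrow-filter and a wide-filter block for EEG, and one wide-filter block for the RMS-preprocessed EMG) follows the description above.

```python
import torch
import torch.nn as nn

def conv_block(out_ch, kernel, pool):
    """One CNN block: Conv1d -> ReLU -> MaxPool -> Dropout (sizes are assumptions)."""
    return nn.Sequential(
        nn.Conv1d(1, out_ch, kernel_size=kernel, padding=kernel // 2),
        nn.ReLU(),
        nn.MaxPool1d(pool),
        nn.Dropout(0.5),
    )

class FeatureExtractor(nn.Module):
    def __init__(self, fs=250):                                   # fs: assumed sampling rate (Hz)
        super().__init__()
        self.eeg_narrow = conv_block(64, kernel=fs // 4, pool=8)  # high-frequency EEG features
        self.eeg_wide   = conv_block(64, kernel=fs * 2,  pool=8)  # low-frequency EEG features
        self.emg_wide   = conv_block(64, kernel=fs * 2,  pool=8)  # EMG amplitude features

    def forward(self, eeg, emg):
        # eeg, emg: (batch, 1, fs * 20), i.e., one 20-second epoch per sample
        feats = [self.eeg_narrow(eeg), self.eeg_wide(eeg), self.emg_wide(emg)]
        return torch.cat([f.flatten(1) for f in feats], dim=1)    # concatenated feature vector
```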
The scoring module consists of a bi-LSTM block, fully connected (FC) block, and softmax layer. This module models the relationship between the extracted features and sleep stages considering sleep stage transition rules.
These blocks have different roles in scoring. The bi-LSTM block maintains the information of 25 consecutive epochs including the target epoch, and models the sleep stage transitions. In contrast, the FC block focuses on the target epoch, and detects "isolated" epochs whose stages differ from those of the consecutive neighboring epochs. MC-SleepNet uses two-layered bi-LSTM blocks. Each block has 1024 LSTM units and uses half of these units to model the forward sleep stage transitions and the other half to model the backward transitions. The FC block has one FC layer with 1024 artificial neurons.
The outputs of both the bi-LSTM and FC blocks are input into the softmax layer. This layer consists of three artificial neurons, and calculates the certainty of each of the three stages (WAKE, non-REM, and REM). Finally, the scoring module outputs the most likely stage label.
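A minimal sketch of the scoring module in PyTorch follows. The two-layer bi-LSTM with 1024 units per block (512 per direction), the 1024-unit FC layer, the element-wise addition of the two branches, and the three-way softmax follow the description above; the input feature dimension and the way the target epoch is selected from the 25-epoch window are assumptions.

```python
import torch
import torch.nn as nn

class ScoringModule(nn.Module):
    def __init__(self, feat_dim, n_stages=3):
        super().__init__()
        # two-layer bi-LSTM: 1024 units per block = 512 per direction
        self.bilstm = nn.LSTM(feat_dim, 512, num_layers=2,
                              batch_first=True, bidirectional=True)
        # FC block that focuses on the target epoch only
        self.fc = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU())
        self.out = nn.Linear(1024, n_stages)

    def forward(self, feats):
        # feats: (batch, 25, feat_dim) -- 25 consecutive epochs, target epoch in the middle
        seq_out, _ = self.bilstm(feats)                     # (batch, 25, 1024)
        mid = feats.size(1) // 2
        merged = seq_out[:, mid] + self.fc(feats[:, mid])   # element-wise add of both branches
        return torch.softmax(self.out(merged), dim=-1)      # certainty of WAKE / non-REM / REM
```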
The training of MC-SleepNet consists of two steps: pretraining and fine-tuning. The main objective of the two-step training approach is to optimize a different module of MC-SleepNet in each step. The pretraining step will mainly optimize the feature extraction (CNN) module, while the fine-tuning step will optimize the scoring module of MC-SleepNet.
In the pretraining step, the scoring module (Fig. 2) is temporarily replaced with a softmax layer, which plays the same roles as the original scoring module. Then, we optimize the parameters of the feature extraction module by typical backpropagation. The loss function is categorical cross entropy, which is shown as follows.
$$L(Y,\hat{Y})=-\frac{1}{N}\sum _{i=1}^{N}\mathbf{y}_{i}\,\log \,\hat{\mathbf{y}}_{i},$$
where Y and \(\hat{{Y}}\) are sets of scored and actual sleep stage label vectors: y and \(\hat{{\bf{y}}}\). They are three-dimensional vectors whose elements represent the certainty level of each sleep stage. N is the number of training samples. To reduce the effect of imbalanced training samples, we conduct undersampling to equalize the numbers of epochs for each stage. By conducting undersampling in the pretraining step, the feature extraction module can locate appropriate features of each sleep stage.
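The undersampling can be done at the epoch level as in the sketch below; the helper function and the random seed are assumptions used only to illustrate equalizing the number of epochs per stage before pretraining.

```python
import numpy as np

def undersample(features, labels, seed=0):
    """Keep the same number of epochs for each stage (size of the rarest stage)."""
    rng = np.random.default_rng(seed)
    stages, counts = np.unique(labels, return_counts=True)
    n = counts.min()                                    # typically limited by the REM epochs
    keep = np.concatenate([
        rng.choice(np.where(labels == s)[0], size=n, replace=False)
        for s in stages
    ])
    rng.shuffle(keep)
    return features[keep], labels[keep]
```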
In the fine-tuning step, the softmax layer is replaced by the original scoring module. Then, the entire system is trained again using the same training dataset, and the parameters are optimized by backpropagation using the same loss function (Eq. 2).
In our experiments, the pretraining and fine-tuning steps were repeated 10 and 20 times for each training dataset, respectively. These training processes were conducted using the Adam method19 with learning rates of 1e−4 and 1e−6 and batch sizes of 100 and 10, respectively. We used a smaller learning rate in the fine-tuning step to prevent overfitting.
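A compact sketch of this two-step optimization is given below. The learning rates and numbers of passes follow the values quoted above, the batch sizes are assumed to be set in the data loaders, the model is assumed to return unnormalized stage scores (logits), and the loader and model construction are not shown.

```python
import torch

def train_two_step(model, pretrain_loader, finetune_loader, device="cuda"):
    loss_fn = torch.nn.CrossEntropyLoss()        # categorical cross entropy (Eq. 2)
    model.to(device)

    def run(loader, lr, n_passes):
        opt = torch.optim.Adam(model.parameters(), lr=lr)   # Adam optimizer
        for _ in range(n_passes):
            for eeg, emg, labels in loader:
                eeg, emg, labels = eeg.to(device), emg.to(device), labels.to(device)
                opt.zero_grad()
                loss = loss_fn(model(eeg, emg), labels)      # model outputs stage logits
                loss.backward()
                opt.step()

    run(pretrain_loader, lr=1e-4, n_passes=10)   # pretraining (batch size 100 in the loader)
    run(finetune_loader, lr=1e-6, n_passes=20)   # fine-tuning (batch size 10 in the loader)
```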
Rescoring
Although MC-SleepNet has sufficient performance for practical use, the recall for the REM stage is relatively lower than that for the other stages (Table 1). This could cause a problem in sleep research, especially in studies related to REM sleep in mice.
Table 1 Scoring performance on large-scale and small-scale datasets.
The relatively low recall for the REM stage could be due to different factors. The first factor is "isolated" REM epochs, where the stage labels of the neighboring epochs are not REM. These epochs tend to be given the same stage labels as their neighboring epochs. Another factor is the imbalanced sleep stage distribution. Generally, REM stages are rare in all records. Consequently, in the training process, MC-SleepNet builds stricter criteria for REM stages than for the other sleep stages, and tends to hesitate to output REM stage labels. To address this problem, we developed a rescoring model as an optional method of MC-SleepNet, which carefully examines the possibility of REM stages.
In the rescoring method, we collect epochs that are given non-REM labels by MC-SleepNet and whose non-REM certainty values are lower than 95% and re-examine those epochs by the rescoring model (please refer to the "REM scoring by MC-SleepNet" section for details). This model has almost the same architecture as MC-SleepNet, except for the scoring module. It omits the bi-LSTM block from the original MC-SleepNet such that the model can output stage labels that are different from those of neighboring epochs more easily. The model was trained with epochs that met the above conditions. Therefore, the imbalance problem of training samples was alleviated.
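The rescoring rule itself is a simple filter over the MC-SleepNet outputs, as in the sketch below; the 95% threshold follows the text, while the non-REM label index and the rescoring model's predict interface are assumptions.

```python
import numpy as np

def rescore(certainties, labels, features, rescoring_model, nrem=1, threshold=0.95):
    """Re-examine epochs labeled non-REM whose non-REM certainty is below the threshold."""
    # certainties: (n_epochs, 3) softmax outputs of MC-SleepNet
    # labels:      (n_epochs,)   stage labels output by MC-SleepNet
    doubtful = (labels == nrem) & (certainties[:, nrem] < threshold)
    labels = labels.copy()
    if doubtful.any():
        labels[doubtful] = rescoring_model.predict(features[doubtful])
    return labels
```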
Evaluation with the large-scale dataset
We verified the scoring accuracy of MC-SleepNet using a large-scale dataset containing sleep records of 4,200 mice. This dataset is associated with the label dataset which contains "correct" sleep stage labels for all the epochs. These sleep stage labels were obtained through manual sleep stage scoring by human experts. Therefore, we can use the labeled dataset for training MC-SleepNet and validating the accuracy of the outputs of MC-SleepNet. In the accuracy validation, we measured the agreement rates of the outputs of MC-SleepNet with the "correct" labels.
Our sleep record dataset includes many noisy records. Therefore, the robustness of MC-SleepNet against individual differences and noise can also be evaluated with the dataset.
To compare the performance of MC-SleepNet with the state-of-the-art method, we measured the performance of the random forest model. It is known that the random forest model can handle large-scale datasets with reasonable computational costs. Please refer to the Methods section for more details of the dataset and random forest model.
In this experiment, we conducted five-fold cross validation. The 4,200 records were randomly split into five partitions: four partitions (3,360 records) were used as training samples, and the remaining partition (840 records) was used as test data. The experimental result (Table 1) shows that MC-SleepNet can score sleep stages with an accuracy of 96.6%, which is significantly higher than that of the random forest model (p < 0.01: paired t-test). In addition, its κ statistic is 0.91; therefore, the scoring results of MC-SleepNet and experts match very well20.
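The record-level split described at the start of this experiment can be reproduced with a standard five-fold partition, for example with scikit-learn's KFold as sketched below (the shuffling seed is an assumption).

```python
import numpy as np
from sklearn.model_selection import KFold

record_ids = np.arange(4200)                       # one id per mouse record
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(record_ids)):
    train_records = record_ids[train_idx]          # 3,360 training records
    test_records = record_ids[test_idx]            # 840 test records
    # ... train MC-SleepNet on train_records and evaluate on test_records ...
```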
In addition, this result shows that the rescoring model improved the recall for the REM stage by approximately 9%. Although the overall accuracy and the recall for the non-REM stage decreased slightly, the risk of missing REM epochs was greatly reduced. Therefore, the rescoring model will be effective, especially in research focusing on REM sleep.
An example of the scoring results is shown in Fig. 3. The horizontal axis shows a sequence of epochs, and the vertical axis corresponds to the three sleep stages. The orange and blue lines represent the scoring results of MC-SleepNet and human experts, respectively. This figure shows that both scoring results are very close to each other and almost consistent. These experimental results suggest that MC-SleepNet can achieve high accuracy in sleep stage scoring.
Example of sleep stage scoring result by MC-SleepNet.
As we mentioned, the dataset contains many noisy sleep records from a number of mice. Thus, these results strongly suggest that MC-SleepNet is very robust against individual differences and noise. In the next section, we directly analyze the robustness against noise.
Evaluation with noisy records
To directly analyze the robustness of MC-SleepNet against noise, we apply MC-SleepNet that was trained in the above experiment to noisy records and check the scoring accuracy. Note that noisy records were identified by human experts, and four records were chosen for this analysis.
As shown in Table 1, although the accuracy decreases from 96.6% to 95.0%, MC-SleepNet can score sleep stages with sufficient accuracy even if the sleep records include considerable noise. Comparing this result with the case of the whole dataset, MC-SleepNet tends to output non-REM stage labels more frequently. Therefore, the recall of the WAKE and REM stages and the precision of the non-REM stage decreased. The four records contain several types of noise that have low-frequency components such as motion artifact noise (0.5–2 Hz). This type of noise might be extracted as a delta-wave, causing an incorrect scoring.
REM scoring by MC-SleepNet
Table 2 shows the confusion matrix of MC-SleepNet for the case of the large-scale dataset. Figure 4 plots the histogram showing the distribution of non-REM certainty values of epochs which are finally given non-REM stage labels by MC-SleepNet. (Their actual stage labels are shown in colors). The horizontal axis corresponds to the non-REM certainty values, and the vertical axis shows the frequencies. The right-hand graph magnifies the frequency range up to 300000 of the left-hand histogram. We can make the following observations.
Table 2 suggests that most of incorrectly scored REM epochs (REM epochs but scored as other stages by MC-SleepNet) are given non-REM labels.
Figure 4 suggests that most of the true non-REM epochs have very high non-REM certainty values (more than 95%).
Table 2 Confusion matrix for large-scale dataset.
Histogram showing distribution of stages labeled as Non-REM.
Based on these observations, we collect epochs that are given non-REM labels by MC-SleepNet and whose non-REM certainty values are lower than 95% and re-examine those epochs by the aforementioned rescoring model.
Comparison with other existing methods
We compare the scoring accuracy of MC-SleepNet with that of other existing methods: the FASTER, MASC, and LSTM models. The experimental setting is the same as that described above, except for the number of training iterations. The pretraining and fine-tuning steps were repeated 100 and 200 times in this experiment. In addition, oversampling was conducted to equalize the numbers of epochs for each stage. Unfortunately, these existing methods have a high computational cost for the training process. For this reason, we use another small-scale dataset used in the existing studies4,5. This dataset contains extremely clear sleep records obtained from 14 mice. Thus, the models do not have to locate/extract complicated noise features. Note that the numbers of training/testing samples (epochs) for MC-SleepNet are smaller than those for the other existing methods. Since MC-SleepNet uses longer biological signals including past and future epochs, we could not use some epochs near the start and end of the measurement.
In this experiment, we conducted seven-fold cross validation and used 12 records as training samples. After the training process, MC-SleepNet achieved a scoring accuracy of 96.4% and kappa statistic of 0.94, which are higher than those of the existing methods (Table 1). This result proves that MC-SleepNet outperforms the existing methods.
The greatest feature of MC-SleepNet is the use of a CNN in the feature extraction phase. As mentioned above, the CNN can locate effective features for sleep stage scoring. To verify the located features, we illustrate several features extracted by the feature extraction module of MC-SleepNet (Fig. 5A). These figures show that MC-SleepNet optimizes the filters to extract several frequency components in EEG signals and the instantaneous amplitude of EMG signals. These features are also used in manual sleep stage scoring. Therefore, we can state that MC-SleepNet can locate the effective features for sleep stage scoring through the training process. Interestingly, MC-SleepNet extracted several features that are not directly related to sleep stages. Specifically, some filters were optimized to extract waves that are neither theta waves nor delta waves. These waves might be related to individual differences and noise in EEG signals. By using such features, MC-SleepNet has achieved the high scoring accuracy and high robustness against individual differences and noise.
Example of extracted features by feature extraction module of MC-SleepNet. (left) EEG features extracted by narrow CNN. (center) EEG features extracted by wide CNN. (right) EMG features.
Due to these advantages, MC-SleepNet is effective, particularly in large-scale sleep research. The high accuracy and robustness become more valuable as the number of sleep records to score increases. In addition, its scoring process is not time-consuming (approximately 15 s per 1-day sleep record). This is another advantage in large-scale research.
MC-SleepNet achieved more practical scoring than existing methods; however, it still has some practical limitations.
Cost for training
Although deep learning techniques enhance the scoring accuracy and robustness, they also cause some practical problems in the training process. First, to locate the effective features, MC-SleepNet requires a vast amount of training samples. Figure 5B illustrates the features in the case of only 500 sleep records used for training (note that the scoring accuracy was only 80.5% in this case). This result suggests that the feature extraction module of MC-SleepNet was not sufficiently optimized and they could not locate the effective features. To optimize MC-SleepNet to achieve a high scoring accuracy, a vast amount of training samples is required, making application of MC-SleepNet difficult in some situations.
In addition, the computational cost of the training process is another problem. For example, it took over 160 hours to optimize MC-SleepNet for the large-scale dataset in our experiment using a high-performance computer. This result suggests that researchers who cannot afford to use a high-performance computer cannot optimize MC-SleepNet for their datasets. Differences in measurement environments can affect the scoring accuracy, which may be one of the factors that users of MC-SleepNet need to be aware of.
Width of scoring epoch
Scoring mice sleep/wake with 20-second epochs tends to miss very short events such as transient arousals. This results in larger values of average "episode duration" statistics in general (which describes how long individual episodes of WAKE/non-REM/REM state last on average), as compared with scoring with 4-second or 10-second epochs. However, in our experience, relative changes in sleep/wake statistics and the recall in detecting such changes induced by, for example, genetic mutations such as Sleepy, Dreamless17,21, and orexin knockout22, are unaffected by scoring epoch lengths from 4–20 seconds. In addition, MC-SleepNet can be readily modified to accommodate shorter epoch lengths. Needless to say, an entirely new set of large-scale, high-quality training dataset with shorter epoch lengths will be needed.
All of the procedures were conducted in accordance with the Guidelines for Animal Experiments of the University of Tsukuba and were approved by the Institutional Animal Care and Use Committee of University of Tsukuba (Approved protocol ID # 18-164).
Two datasets were used to evaluate the performance of MC-SleepNet. Both datasets contain EEG and EMG signals split into 20-second epochs and are associated with label datasets containing the "correct" sleep stage labels for all epochs. The biological signals were obtained from wild-type healthy mice, and their "correct" sleep stages were scored by 13 experts from the International Institute for Integrative Sleep Medicine, University of Tsukuba. The average inter-rater reliability of these experts is 98.5%, with a standard deviation of 1.3%. Note that each biological signal record was inspected by one expert, and his/her scoring results were used as the "correct" sleep stages.
The datasets are different in the number of epochs and the clarity of the contained sleep records. The large-scale dataset contains 4,200 sleep records (35,700,000 epochs, where WAKE, non-REM and REM stages account for 49.7%, 45.4% and 4.9% of the total epochs, respectively), most of which include noise such as motion artifacts and power supply noises. Therefore, the robustness against the individual differences and noise of MC-SleepNet can easily be evaluated using this dataset. In contrast, the small dataset contains only 14 sleep records (238,000 epochs, where WAKE, non-REM and REM stages account for 42.6%, 52.7% and 4.7% of the total epochs, respectively), which are extremely clear. Thus, the records in this dataset can be scored relatively easily.
In our experiment, the noisiness of biological signals was inspected visually. Due to technical difficulties, we cannot provide quantitative measurements, such as the percentage of noisy parts in each record. Although it is a subjective evaluation, the noisy signals mainly contain noise due to motion artifacts and EMG contamination. Figure 6 shows a typical example of noisy mice signals in the large-scale dataset.
Figure 6. Typical example of noisy signals in the large-scale dataset.
Performance indices
To evaluate the performance of MC-SleepNet, the following indices are used:
$${\mathrm{Recall}}_{s}=\frac{E_{(s,s)}}{\sum_{u\in S}E_{(s,u)}},\qquad {\mathrm{Precision}}_{s}=\frac{E_{(s,s)}}{\sum_{u\in S}E_{(u,s)}},\qquad \mathrm{Accuracy}=\frac{\sum_{s\in S}E_{(s,s)}}{M},$$
$$\kappa =\frac{\mathrm{Accuracy}-p_{e}}{1-p_{e}},\qquad p_{e}=\sum_{s\in S}\frac{\sum_{u\in S}E_{(s,u)}}{M}\cdot \frac{\sum_{u\in S}E_{(u,s)}}{M},\qquad S=\{\mathrm{WAKE},\ \text{non-REM},\ \mathrm{REM}\},$$
where E(s,u) is the number of epochs scored as stage s and u by the experts and MC-SleepNet, respectively. In addition, M is the number of all the test epochs. Note that the "correct" sleep stage labels were given by human experts in this evaluation, and the "Accuracy" equals the inter-rater agreement rate between human experts and MC-SleepNet.
$\mathrm{Recall}_s$ and $\mathrm{Precision}_s$ are the ratios of correctly scored epochs over a certain set of epochs. For example, $\mathrm{Precision}_s$ denotes the ratio of correctly scored epochs over the epochs labeled as stage s by MC-SleepNet, while $\mathrm{Recall}_s$ is taken over the epochs labeled as stage s by the experts. κ (the kappa statistic) is a standard agreement statistic that accounts for the distribution of sleep stages and denotes how well the scoring results agree. In general, when the kappa statistic is over 0.8, the scoring results are considered to be in nearly perfect agreement.
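For concreteness, all of the above indices can be computed directly from a 3x3 confusion matrix. The following Python sketch (not the authors' code) uses purely hypothetical epoch counts for illustration:

import numpy as np

stages = ["WAKE", "non-REM", "REM"]
# E[s, u]: epochs labeled s by the expert and u by the model (hypothetical counts).
E = np.array([[9500, 400, 100],
              [300, 8700, 200],
              [50, 150, 600]], dtype=float)
M = E.sum()

recall = {s: E[i, i] / E[i, :].sum() for i, s in enumerate(stages)}
precision = {s: E[i, i] / E[:, i].sum() for i, s in enumerate(stages)}
accuracy = np.trace(E) / M

# Chance agreement p_e, then Cohen's kappa.
p_e = sum((E[i, :].sum() / M) * (E[:, i].sum() / M) for i in range(len(stages)))
kappa = (accuracy - p_e) / (1.0 - p_e)

print(recall, precision, accuracy, kappa)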
Random forest model
In the evaluation experiments, we used the random forest model for comparison. The random forest model consists of feature extraction and scoring phases. First, the EEG and EMG signals are converted into the following six feature values in the feature extraction phase:
$$P_{\mathrm{EEG}}=\sum_{t=1}^{5000}|\mathrm{eeg}_{t}|,\qquad P_{\mathrm{EMG}}=\sum_{t=1}^{5000}|\mathrm{emg}_{t}|,$$
$$W=\sum_{i=1}^{30}\|\mathrm{EEG}_{i}\|,\qquad D=\sum_{i=1}^{6}\|\mathrm{EEG}_{i}\|,$$
$$T=\sum_{i=7}^{11}\|\mathrm{EEG}_{i}\|,\qquad E=\sum_{i=30}^{100}\|\mathrm{EMG}_{i}\|,$$
where $\mathrm{eeg}_t$ and $\mathrm{emg}_t$ are the voltage values of the EEG and EMG signals at time t, respectively, and $\mathrm{EEG}_i$ and $\mathrm{EMG}_i$ denote the power of the i Hz frequency component of the EEG and EMG signals, respectively.
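As an illustration, the six features above could be computed as follows. This is a minimal sketch, not the implementation used in the paper; the 250 Hz sampling rate (5000 samples per 20-second epoch) and the use of a one-sided FFT power spectrum with 1-Hz bands are assumptions made for readability:

import numpy as np

def extract_features(eeg, emg, fs=250.0):
    # eeg and emg are 1-D arrays of equal length (one 20-second epoch each).
    # Time-domain features: summed absolute voltage over the epoch.
    p_eeg = np.sum(np.abs(eeg))
    p_emg = np.sum(np.abs(emg))

    # Frequency-domain power via the squared FFT magnitude.
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    eeg_pow = np.abs(np.fft.rfft(eeg)) ** 2
    emg_pow = np.abs(np.fft.rfft(emg)) ** 2

    def band(power, lo, hi):
        return power[(freqs >= lo) & (freqs <= hi)].sum()

    w = band(eeg_pow, 1, 30)    # broad EEG power
    d = band(eeg_pow, 1, 6)     # delta band
    t = band(eeg_pow, 7, 11)    # theta band
    e = band(emg_pow, 30, 100)  # EMG band
    return np.array([p_eeg, p_emg, w, d, t, e])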
Then, the random forest identifies the sleep stages from the extracted features. In the experiments, the number of decision trees, the maximum depth of each tree, and the maximum number of features were set to 20, 10, and 2, respectively. In this study, the scikit-learn library23 was used to implement the random forest model.
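A minimal scikit-learn sketch of this comparison model, using the hyperparameters stated above on a purely synthetic feature matrix, might look as follows (an illustration rather than the authors' implementation):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: one row of the six features per 20-second epoch,
# with toy labels 0=WAKE, 1=non-REM, 2=REM.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 3, size=1000)

clf = RandomForestClassifier(n_estimators=20, max_depth=10, max_features=2)
clf.fit(X, y)
predicted_stages = clf.predict(X)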
All experiments were conducted on a computer running CentOS Linux release 7.5.1804, Python version 2.7 and TensorFlow24. The hardware components are as follows:
CPU Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40 GHz * 1
GPU Tesla P100 PCIe 16 GB * 1
Memory 64 GB
Automated sleep stage scoring methods for mice
Several existing sleep stage scoring methods for mice have been proposed1,2,3,4,5,6,7,8. As mentioned above, performance evaluation using the large-scale dataset containing 4,200 mouse records is a contribution of this study. Here, we provide a brief introduction to the existing methods and describe the technical originality of this study.
Although the existing sleep stage scoring methods consist of the feature extraction and scoring phases, the employed techniques/models are different.
Most conventional sleep stage scoring methods employ FFT for extracting frequency-domain features2,4,5,6,7. For example, FASTER2 and MASC4 use FFT to extract several frequency components of EEG and EMG signals, which are effective in manual sleep stage scoring. Moreover, the scoring method employing CNN has also been proposed8. Please see the SPINDLE section for more details.
To model the relationship between the features and sleep stages, the existing methods employ various classification models, such as nonparametric density estimation clustering2, support vector machine4,6, LSTM model5, and hidden Markov model7,8. Generally, the models that can handle time-series data and consider sleep transition rules tend to achieve high scoring accuracy. For example, the LSTM model5 achieves scoring accuracy that is almost the same level as the existing state-of-the-art method MASC.
In contrast to these methods, we have adopted a CNN and bi-LSTM for the feature extraction and scoring phases. The CNN can locate the effective features automatically, and bi-LSTM can capture the sleep stage transition rules and relationship between the target epoch and its neighboring epochs. By combining these deep learning models, MC-SleepNet achieves high accuracy and high robustness against individual differences and noise.
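To make this architecture concrete, the sketch below shows the general CNN-plus-bi-LSTM pattern in tf.keras. The epoch length, filter sizes, channel counts, and layer widths are illustrative assumptions and do not reproduce the actual MC-SleepNet configuration:

import tensorflow as tf
from tensorflow.keras import layers

epoch_len = 5000  # one 20-second epoch at an assumed 250 Hz sampling rate

# Per-epoch feature extractor: parallel narrow and wide temporal filters
# applied jointly to the EEG and EMG channels.
epoch_in = layers.Input(shape=(epoch_len, 2))
narrow = layers.Conv1D(32, kernel_size=50, strides=6, activation="relu")(epoch_in)
narrow = layers.GlobalMaxPooling1D()(narrow)
wide = layers.Conv1D(32, kernel_size=400, strides=50, activation="relu")(epoch_in)
wide = layers.GlobalMaxPooling1D()(wide)
feature_extractor = tf.keras.Model(epoch_in, layers.concatenate([narrow, wide]))

# Sequence model: the bi-LSTM sees one feature vector per epoch and can
# exploit neighboring epochs and stage-transition regularities when scoring.
seq_in = layers.Input(shape=(None, epoch_len, 2))  # (epochs, samples, channels)
seq_feat = layers.TimeDistributed(feature_extractor)(seq_in)
hidden = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(seq_feat)
scores = layers.Dense(3, activation="softmax")(hidden)  # WAKE / non-REM / REM

model = tf.keras.Model(seq_in, scores)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")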
In addition, we have also developed a rescoring model to improve the recall of REM. The other existing methods cannot adjust their accuracy according to the purpose of the research. Thus, the rescoring model is another feature of MC-SleepNet.
MASC4 is one of the state-of-the-art methods for mice sleep stage scoring, proposed by Suzuki et al. in 20174. By using the sleep stages of consecutive neighboring epochs as features and employing the rescoring phase for uncertain epochs, MASC achieves high scoring accuracy of 94.9%.
However, the authors have reported that MASC is weak against noise in EEG and EMG signals4. In addition, MASC is not practical for large-scale scoring tasks due to the high computational complexity of the support vector machine, which is employed as a scoring model.
SPINDLE8 is a scoring method that employs a CNN for feature extraction and achieves a high accuracy of 96.8%. However, its authors introduced "Artifact" as an additional sleep stage and excluded such epochs from the accuracy calculation (when "Artifact" epochs are included, the accuracy decreases to 88.6%).
Moreover, the number of training samples was too small to train a CNN: they used sleep records obtained from only 4–8 mice/rats. Due to this shortage of training samples, the CNN could not learn the characteristics of individual differences or noise. Thus, the robustness of SPINDLE against them is quite limited.
Employing a CNN for feature extraction and training it with sufficient training samples are essential to making MC-SleepNet robust against individual differences and noise. This is likely the main reason why MC-SleepNet can score sleep stages more accurately than other existing methods.
Automated sleep stage scoring methods for humans
When we designed MC-SleepNet, we referred to several automated methods for scoring human sleep stages18,25,26,27. In particular, some of our ideas were inspired by "DeepSleepNet" by A. Supratak et al.18. For example, their model also employs multiple CNN blocks with different filter sizes. However, its purpose and the modeled input/output relationship are quite different from those of MC-SleepNet. In addition, it uses only a single-channel EEG signal, while MC-SleepNet uses both EEG and EMG signals.
In this paper, we have proposed a novel sleep stage scoring method named "MC-SleepNet", which employs a convolutional neural network (CNN) and a long short-term memory (LSTM) network as its feature extraction and scoring modules, respectively. MC-SleepNet can automatically locate effective features that are difficult to extract with handcrafted filters and can model the relationship between features and sleep stages while taking sleep stage transition rules into account. Consequently, MC-SleepNet achieves both high scoring accuracy and high robustness against individual differences and noise.
The experimental results using the large-scale dataset of 4,200 sleep records of mice achieved an accuracy of 96.6% and κ statistic of 0.94. These values are higher than the inter-rater agreement rate among human experts and exceed the accuracy of conventional scoring methods. In addition, we also developed a rescoring model using certainty values of scored stages. Although the "vanilla" MC-SleepNet tends to output fewer REM stage labels, this problem can be resolved by employing the rescoring model.
An important future research issue is the reasoning mechanism to show the rationale for the automatic sleep scoring results. Finding optimal hyperparameter values is another important and interesting topic.
We hope that MC-SleepNet enhances the efficiency and quality of sleep stage scoring and contributes to sleep research.
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
Brankack, J., Kukushka, V. I., Vyssotski, A. L. & Draguhn, A. EEG gamma frequency and sleep-wake scoring in mice: Comparing two types of supervised classifiers. BRAIN RESEARCH. 1322, 59–71, https://doi.org/10.1016/j.brainres.2010.01.069 (2010).
Sunagawa, G. A., Séi, H., Shimba, S., Urade, Y. & Ueda, H. R. FASTER: an unsupervised fully automated sleep staging method for mice. Genes to Cells. 18, 502–518, https://doi.org/10.1111/gtc.12053 (2013).
Rempe, M. J., Clegern, W. C. & Wisor, J. P. An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters. Nature and Science of Sleep. 7, 85–99, https://doi.org/10.2147/NSS.S84548 (2015).
Suzuki, Y., Sato, M., Shiokawa, H., Yanagisawa, M. & Kitagawa, H. MASC: Automatic sleep stage scoring based on brain and myoelectric signals. Proceeding of 33rd IEEE International Conference on Data Engineering Workshops (ICDE Workshops 2017). 1489–1496, https://doi.org/10.1109/ICDE.2017.218 (2017).
Yamabe, M., Kitagawa, H., Shiokawa, H., Yanagisaawa, M. & Sato, M. Sleep stage analysis of mice using deep learning. Proceeding of the 79th National Convention of Information Processing Society of Japan (In Japanese). 497–498 (2017).
Gao, V., Turek, F. & Vitaterna, M. Multiple classifier systems for automatic sleep scoring in mice. Journal of Neuroscience Methods 264, 33–39, https://doi.org/10.1016/j.jneumeth.2016.02.016 (2016).
Yaghouby, F., O'Hara, B. F. & Sunderam, S. Unsupervised Estimation of Mouse Sleep Scores and Dynamics Using a Graphical Model of Electrophysiological Measurements. International Journal of Neural Systems, 26(4), 1650017, 15 pages, https://doi.org/10.1142/S0129065716500179 (2016).
Miladinović, D. et al. SPINDLE: End-to-end learning from EEG/EMG to extrapolate animal sleep scoring across experimental settings, labs and species. PLOS Computational Biology 15(4), e1006968, 30 pages, https://doi.org/10.1371/journal.pcbi.1006968 (2018).
Rytkönen, K. M., Zitting, J. & Porkka, H. T. Automated sleep scoring in rats and mice using the naive bayes classifier. Journal of Neuroscience Methods 202(1), 60–64, https://doi.org/10.1016/j.jneumeth.2011.08.023 (2011).
Hochreiter, S. & Schmidhuber, J. Long short-term memory. Journal of Neural Computation 9(8), 1735–1780, https://doi.org/10.1162/neco.1997.9.8.1735 (1997).
Gers, F. A., Schmidhuber, J. & Cummins, F. A. Learning to forget: Continual prediction with lstm. Neural Computation 12, 2451–2471, https://doi.org/10.1162/089976600300015015 (2000).
Kiranyaz, S., Ince, T. & Gabbouj, M. Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Transaction on Biomedical Engineering. 63(3), 664–675, https://doi.org/10.1109/TBME.2015.2468589 (2016).
Fan, X. et al. Multiscaled fusion of deep convolutional neural networks for screening atrial fibrillation from single lead short ECG recordings. IEEE Journal of Biomedical and Health Informatics. 22(6), 1744–1753, https://doi.org/10.1109/JBHI.2018.2858789 (2018).
Wang, F. et al. Analysis for early seizure detection system based on deep learning algorithm. Proceeding of 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). 2382–2389, https://doi.org/10.1109/BIBM.2018.8621089 (2018).
Cecotti, H. & Graser, A. Convolutional neural networks for P300 detection with application to brain-Computer interfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence. 33(3), 433–445, https://doi.org/10.1109/TPAMI.2010.125 (2011).
Graves, A. & Schmidhuber, J. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks. 18(5), 602–610, https://doi.org/10.1016/j.neunet.2005.06.042 (2005).
Funato, H. et al. Forward-genetics analysis of sleep in randomly mutagenized mice. Nature. 539, 378–383, https://doi.org/10.1038/nature20142 (2016).
Supratak, A., Dong, H., Wu, C. & Guo, Y. Deepsleepnet: A model for automatic sleep stage scoring based on raw single-channel eeg. IEEE transactions on Neural System and Rehabilitation Engineering. 25(11), 1998–2008, https://doi.org/10.1109/TNSRE.2017.2721116 (2017).
Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. Published as a conference paper at the 3rd International Conference for Learning Representations, 15 (2015).
Landis, J. R. & Koch, G. G. The measurement of observer agreement for categorical data. Biometrics. 33(1), 159–174 (1977).
Honda, T. et al. A single phosphorylation site of SIK3 regulates daily sleep amounts and sleep need in mice. Proceedings of the National Academy of Sciences of the United States of America. 115(41), 10458–10463, https://doi.org/10.1073/pnas.1810823115 (2018).
Chemelli, R. M. et al. Narcolepsy in orexin knockout mice. Cell. 98(4), 437–451, https://doi.org/10.1016/S0092-8674(00)81973-X (1999).
Pedregosa, F. et al. Scikit-learn: Machine learning in python. Journal of Machine Learning Research 12, 2825–2830 (2011).
Abadi, M. et al. TensorFlow: Large-scale machine learning on heterogeneous systems. https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45166.pdf Software available from tensorflow.org (2015).
Tsinalis, O., Matthews, P. M., Guo, Y. & Zafeiriou, S. Automatic sleep stage scoring with single-channel EEG using convolutional neural networks. Preprint at https://arxiv.org/abs/1610.01683 (2016).
Biswal, S. et al. Expert-level sleep scoring with deep neural networks. Journal of the American Medical Informatics Association, 25(12), 1643–1650, https://doi.org/10.1093/jamia/ocy131 (2018).
Aggarwal, K., Khadanga, S., Joty, S. R., Kazaglis, L. & Srivastava, J. A structured learning approach with neural conditional random fields for sleep staging. Proceeding of 2018 IEEE International Conference on Big Data (Big Data). 1318–1327, https://doi.org/10.1109/BigData.2018.8622286 (2018).
This work was partly supported by the MEXT Program for Building Regional Innovation Ecosystems, MEXT Grant-in-Aid for Scientific Research on Innovative Areas, Grant Number 15H05942Y "Living in Space", the WPI program from Japan's MEXT, JSPS KAKENHI Grant Number 17H06095, and FIRST program from JSPS. The authors would like to thank C. Miyoshi, N. Hotta-Hirashima, A. Ikkyu, and S. Kanno for assisting in the measurement of EEG and EMG signals of mice. In addition, we are grateful to L. Ota for reproducing the experiments.
Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba, Japan
Masato Yamabe
Center for Computational Sciences, University of Tsukuba, Tsukuba, Japan
Kazumasa Horie, Hiroaki Shiokawa & Hiroyuki Kitagawa
International Institute for Integrative Sleep Medicine, University of Tsukuba, Tsukuba, Japan
Hiromasa Funato & Masashi Yanagisawa
K. Horie and H. Kitagawa provided ideas for developing MC-SleepNet; M. Yamabe implemented MC-SleepNet; M. Yamabe, K. Horie, H. Shiokawa, and H. Kitagawa designed evaluation experiments; M. Yamabe performed the experiments; M. Yamabe, K. Horie, H. Shiokawa, and H. Kitagawa analyzed the experimental results; H. Funato and M. Yanagisawa provided the dataset and background knowledge about mice sleep; and M. Yamabe, K. Horie, H. Kitagawa wrote the paper.
Correspondence to Kazumasa Horie.
Yamabe, M., Horie, K., Shiokawa, H. et al. MC-SleepNet: Large-scale Sleep Stage Scoring in Mice by Deep Neural Networks. Sci Rep 9, 15793 (2019). https://doi.org/10.1038/s41598-019-51269-8
Horocycle flows for laminations by hyperbolic Riemann surfaces and Hedlund's theorem
Matilde Martínez 1, Shigenori Matsumoto 2 and Alberto Verjovsky 3
Instituto de Matemática y Estadística Rafael Laguardia, Facultad de Ingeniería, Universidad de la República, J. Herrera y Reissig 565, C.P. 11300 Montevideo, Uruguay
Department of Mathematics, College of Science and Technology, Nihon University, 1-8-14 Kanda, Surugadai, Chiyoda-ku, Tokyo, 101-8308
Universidad Nacional Autónoma de México, Apartado Postal 273, Admon. de correos #3, C.P. 62251 Cuernavaca, Morelos, Mexico
Received April 2015 Published May 2016
We study the dynamics of the geodesic and horocycle flows of the unit tangent bundle $(\hat M, T^1\mathfrak{F})$ of a compact minimal lamination $(M,\mathfrak{F})$ by negatively curved surfaces. We give conditions under which the action of the affine group generated by the joint action of these flows is minimal, and examples where this action is not minimal. In the first case, we prove that if $\mathfrak{F}$ has a leaf which is not simply connected, the horocycle flow is topologically transitive.
Keywords: Hyperbolic surfaces, horocycle and geodesic flows, hyperbolic laminations, minimality.
Mathematics Subject Classification: Primary: 37C85, 37D40, 57R3.
Citation: Matilde Martínez, Shigenori Matsumoto, Alberto Verjovsky. Horocycle flows for laminations by hyperbolic Riemann surfaces and Hedlund's theorem. Journal of Modern Dynamics, 2016, 10: 113-134. doi: 10.3934/jmd.2016.10.113
The Complete Redshift Distribution of Dusty Star-forming Galaxies from the SPT-SZ Survey
C. Reuter, J. D. Vieira, J. S. Spilker, A. Weiss, M. Aravena, M. Archipley, M. Béthermin, S. C. Chapman, C. de Breuck, C. Dong, W. B. Everett, J. Fu, T. R. Greve, C. C. Hayward, R. Hill, Y. Hezaveh, S. Jarugula, K. Litke, M. Malkan, D. P. Marrone, D. Narayanan, K. A. Phadke, A. A. Stark, M. L. Strandet
Laboratoire d'Astrophysique de Marseille (LAM)
The South Pole Telescope (SPT) has systematically identified 81 high-redshift, strongly gravitationally lensed, dusty star-forming galaxies (DSFGs) in a 2500 square degree cosmological millimeter-wave survey. We present the final spectroscopic redshift survey of this flux-limited (S870 μm > 25 mJy) sample, initially selected at 1.4 mm. The redshift survey was conducted with the Atacama Large Millimeter/submillimeter Array across the 3 mm spectral window, targeting carbon monoxide line emission. By combining these measurements with ancillary data, the SPT sample is now spectroscopically complete, with redshifts spanning 1.9 < z < 6.9 and a median of $z=3.9\pm 0.2$. We present the millimeter through far-infrared photometry and spectral energy distribution fits for all sources, along with their inferred intrinsic properties. Comparing the properties of the SPT sources to the unlensed DSFG population, we demonstrate that the SPT-selected DSFGs represent the most extreme infrared-luminous galaxies, even after accounting for strong gravitational lensing. The SPT sources have a median star formation rate of $2.3(2)\times {10}^{3}\,{M}_{\odot }\,{\mathrm{yr}}^{-1}$ and a median dust mass of $1.4(1)\times {10}^{9}\,{M}_{\odot }$. However, the inferred gas depletion timescales of the SPT sources are comparable to those of unlensed DSFGs, once redshift is taken into account. This SPT sample contains roughly half of the known spectroscopically confirmed DSFGs at z > 5, making this the largest sample of high-redshift DSFGs to date, and enabling the "high-redshift tail" of extremely luminous DSFGs to be measured. Though galaxy formation models struggle to account for the SPT redshift distribution, the larger sample statistics from this complete and well-defined survey will help inform future theoretical efforts.
Keywords: Observational cosmology; Early universe; High-redshift galaxies; Galaxy evolution; Interstellar molecules. Subject: Astrophysics - Astrophysics of Galaxies
BIBCODE : 2020ApJ...902...78R
DOI : 10.3847/1538-4357/abb599
C. Reuter, J. D. Vieira, J. S. Spilker, A. Weiss, M. Aravena, et al.. The Complete Redshift Distribution of Dusty Star-forming Galaxies from the SPT-SZ Survey. The Astrophysical Journal, 2020, 902, ⟨10.3847/1538-4357/abb599⟩. ⟨insu-03667055⟩
Why do ionic compounds dissociate whereas coordinate complexes won't?
An ionic bond is the bonding between a non-metal and a metal that occurs when one atom loses one or more of its electrons to the other, and the resulting charged atoms (ions) attract each other, for example sodium and chlorine. This makes the bond strong and hard to break.
In other words, an ionic bond is the electrostatic force of attraction between two oppositely charged ions. The positive ion is called cation, and the negative ion is the anion. It is like the north and south poles of a magnet.
A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. The stable balance of attractive and repulsive forces between atoms when they share electrons is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full outer shell, corresponding to a stable electronic configuration.
A dipolar bond, also known as a dative covalent bond or coordinate bond is a kind of 2-center, 2-electron covalent bond in which the two electrons derive from the same atom.
NOTE: It is important to recognize that pure ionic bonding - in which one atom "steals" an electron from another - cannot exist: all ionic compounds have some degree of covalent bonding, or electron sharing. Thus, the term "ionic bond" is given to a bond in which the ionic character is greater than the covalent character - that is, a bond in which a large electronegativity difference exists between the two atoms, causing the bond to be more polar (ionic) than other forms of covalent bonding where electrons are shared more equally. Bonds with partially ionic and partially covalent character are called polar covalent bonds. Nevertheless, ionic bonding is considered to be a form of noncovalent bonding.
$$\begin{equation}\tag{1}F=\frac{1}{4\pi\epsilon}\cdot\frac{q_1q_2}{r^2}\end{equation} $$ According to Coulomb's law, equation $(1)$, the force between two charges depends on the medium in which they are placed. The relative permittivity $\epsilon_r$ of air is approximately $1$, whereas the relative permittivity of water is approximately $80$ at room temperature.
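As a quick numerical check of this argument (a rough sketch with approximate, illustrative values for an ion pair), the Coulomb force drops by a factor of about 80 when the medium changes from air to water:

import math

e = 1.602e-19     # elementary charge, C
eps0 = 8.854e-12  # vacuum permittivity, F/m
r = 2.8e-10       # rough Na-Cl separation in m (illustrative value)

def coulomb_force(q1, q2, r, eps_r):
    return q1 * q2 / (4 * math.pi * eps0 * eps_r * r ** 2)

f_air = coulomb_force(e, e, r, 1.0)     # relative permittivity of air ~ 1
f_water = coulomb_force(e, e, r, 80.0)  # relative permittivity of water ~ 80
print(f_air, f_water, f_air / f_water)  # ratio is ~ 80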
I thought Coulomb's law could explain why ionic compounds dissociate into ions while compounds with other bond types do not. Since ionic compounds are held together by the electrostatic attraction of two oppositely charged ions (which I treated as point charges), they are stable in air, where the relative permittivity barely weakens the attraction. When the same compound is dissolved in water, the much larger relative permittivity weakens the attraction, so the ions dissociate. That seemed to explain why ionic compounds dissociate into ions when dissolved in water. In the case of a coordinate bond, I reasoned that the atom donating the electron pair could be regarded as a cation and the acceptor as an anion, so there should again be an electrostatic interaction which, by the same argument, should break up in water. Yet coordination complexes apparently do not dissociate this way. Have I misunderstood something, or is there some piece of chemical theory that explains why a coordination complex does not dissociate?
Even a covalent bond can be viewed as arising from a balance of electrostatic attraction and repulsion between electrons and protons, so I expected covalent bonds to break in water as well, since the electrostatic interaction would be affected. There must be some other forces keeping the atoms bonded. If so, what are they?
I don't know whether I am right or wrong; in either case, please explain.
inorganic-chemistry covalent-compounds ionic-compounds
Sensebe
$\begingroup$ In both cases, try to sketch the result of "bond breaking reaction". Additionally, you cannot treat water just as dielectric, it is also protic solvent (forming hydrogen bonds), so it interacts specifically with the solute. $\endgroup$
– ssavec
$\begingroup$ I am thinking that some other forces are acting between them, because all these bond types are something we assume to be present on the basis of such forces; we have to work out what they actually are. $\endgroup$
$\begingroup$ @sathish.R I have converted your answer to a comment. It is really a comment, not an answer. With a bit more rep, you will be able to post comments on any post. $\endgroup$
– Martin - マーチン ♦
First of all, who says coordination complexes don't dissociate? The interesting chemistry happens right there, and without dissociation no catalytic coordination compounds would be possible!
Now, the way I see it, when you move your compound, be it ionic, covalent or coordination, from air to a condensed phase, it will automatically interact with said phase. I think that you assume this phase is water, but it might as well be dichloromethane or ammonia or diethyl ether or anything else that is liquid (basically).
What makes a bond happen (or not) is the overall energy of the system. See, in chemistry (as in physics) the energy minimum is the goal of the system, since it is the most stable. Using this line of reasoning we can derive that bonds are formed because the overall energy (of the bonding electrons) is lower than if they were not bonded. Note that it doesn't matter where these electrons come from (they might be shared by both bonding partners or made available by only one of them). A good and easy way to visualize this is if you scratch the surface of Molecular Orbital Theory.
So why do ionic compounds dissolve? Mostly due to the solvent-compound interactions. See more here.
(I also think that your approach using Coulomb theory doesn't work that well, since the change in the dielectric constant around the compound says little about what lies between the bonding partners, which is what would actually affect the attractive force between them.)
tschoppi
'Ionic bonds dissociate but coordination complexes don't' is an oversimplification in a number of ways.
As tschoppi already mentioned, ligand exchange, i.e. complex dissociation and reassociation is a very interesting branch of chemistry and the reason why we can actually make different complexes.
Even an ionic bond would not dissociate if there was no driving force. Ionic bonds are formed because forming them releases energy when compared to naked atoms and/or ions in space. When dissolving ionic compounds in (say) water, the driving force is, in fact, the formation of coordination complexes between the solvent molecules and both types of ions.
For most ions, the driving force of dissociation is actually an entropic, not an enthalpic one (just as an aside). So it cannot just be explained by enthalpy alone.
Fault-tolerant anti-synchronization control for chaotic switched neural networks with time delay and reaction diffusion
Jianping Zhou 1, Yamin Liu 1, Ju H. Park 2, Qingkai Kong 2 and Zhen Wang 3
School of Computer Science and Technology, Anhui University of Technology, Ma'anshan 243032, China
Department of Electrical Engineering, Yeungnam University, 280 Daehak-Ro, Kyongsan 38541, Republic of Korea
College of Mathematics and Systems Science, Shandong University of Science & Technology, Qingdao 266590, China
* Corresponding author: Ju H. Park
Received August 2019 Revised December 2019 Published April 2021 Early access May 2020
Fund Project: The first author is supported by Open Project of Anhui Province Key Laboratory of Special and Heavy Load Robot under Grant TZJQR005-2020 and the Excellent Youth Talent Support Program of Universities in Anhui Province under Grant GXYQZD2019021
This paper is concerned with the issue of fault-tolerant anti-synchronization control for chaotic switched neural networks with time delay and reaction-diffusion terms under the drive-response scheme, where the response system is assumed to be disturbed by stochastic noise. Both an arbitrary switching signal and an average dwell-time limited switching signal are taken into account. With the aid of the Lyapunov-Krasovskii functional approach, combined with the generalized Itô formula, sufficient conditions for the mean-square exponential stability of the anti-synchronization error system are presented. Then, by utilizing some decoupling methods, constructive design strategies for the desired fault-tolerant anti-synchronization controller are proposed. Finally, an example is given to demonstrate the effectiveness of our design strategies.
Keywords: Time delay, reaction diffusion, neural networks, fault-tolerant control, anti-synchronization.
Mathematics Subject Classification: Primary: 92B20, 34D06; Secondary: 34H10.
Citation: Jianping Zhou, Yamin Liu, Ju H. Park, Qingkai Kong, Zhen Wang. Fault-tolerant anti-synchronization control for chaotic switched neural networks with time delay and reaction diffusion. Discrete & Continuous Dynamical Systems - S, 2021, 14 (4) : 1569-1589. doi: 10.3934/dcdss.2020357
Figure 1. ADT switching signal
Figure 2. Phase plane plot of the drive system at $ x = 0.5 $
Figure 3. State evolution of the unforced drive-response systems at $ x = 0.5 $
Figure 4. State evolution of the unforced error system
Figure 5. State evolution of the controlled drive-response systems at $ x = 0.5 $
Figure 6. State evolution of the error system under control
Xuefeng Zhang, Yingbo Zhang. Fault-tolerant control against actuator failures for uncertain singular fractional order systems. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 1-12. doi: 10.3934/naco.2020011
Hongru Ren, Shubo Li, Changxin Lu. Event-triggered adaptive fault-tolerant control for multi-agent systems with unknown disturbances. Discrete & Continuous Dynamical Systems - S, 2021, 14 (4) : 1395-1414. doi: 10.3934/dcdss.2020379
Surface Code Threshold Calculation and Flux Qubit Coupling
Groszkowski, Peter
Building a quantum computer is a formidable challenge. In this thesis, we focus on two projects, which tackle very different aspects of quantum computation, and yet still share a common goal in hopefully getting us closer to implementing a quantum computer on a large scale. The first project involves a numerical error threshold calculation of a quantum error correcting code called a surface code. These are local check codes, which means that only nearest neighbour interaction is required to determine where errors occurred. This is an important advantage over other approaches, as in many physical systems, doing operations on arbitrarily spaced qubits is often very difficult. An error threshold is a measure of how well a given error correcting scheme performs. It gives the experimentalists an idea of which approaches to error correction hold greater promise. We simulate both toric and planar variations of a surface code, and numerically calculate a threshold value of approximately $6.0 \times 10^{-3}$, which is comparable to similar calculations done by others (Raussendorf 2006; Raussendorf 2007; Wang 2009). The second project deals with coupling superconducting flux qubits together. It expands the scheme presented in Plourde et al. (2004) to a three-qubit, two-coupler scenario. We study L-shaped and line-shaped coupler geometries, and show how the coupling strength changes in terms of the dimensions of the couplers. We explore two cases: one where the interaction energy between two nearest neighbour qubits is high while the coupling to the third qubit is as negligible as possible, and one where all the coupling energies are as small as possible. Although only an initial step, a similar scheme can in principle be extended further to implement a lattice required for computation on a surface code.
Peter Groszkowski (2009). Surface Code Threshold Calculation and Flux Qubit Coupling. UWSpace. http://hdl.handle.net/10012/4795
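In a threshold study of this kind, the figure of merit is the crossing point of logical-error-rate curves for increasing code distance: below threshold, a larger code suppresses logical errors, above it a larger code makes things worse. The snippet below is only a toy illustration of that crossing-point idea, using a distance-d repetition code under independent bit-flip noise with majority-vote decoding as a stand-in; it is not the stabilizer simulation carried out in the thesis, and the 6.0e-3 figure does not come from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, d, trials=20000):
    """Monte Carlo logical error rate of a distance-d repetition code
    under i.i.d. bit-flip noise with majority-vote decoding."""
    flips = rng.random((trials, d)) < p          # which physical bits flipped
    logical_errors = flips.sum(axis=1) > d // 2  # majority flipped -> logical error
    return logical_errors.mean()

# Sweep the physical error rate for a few distances and look for the crossing:
# below threshold, larger d gives a lower logical error rate.
physical_rates = np.linspace(0.3, 0.7, 9)
for d in (3, 5, 7):
    curve = [logical_error_rate(p, d) for p in physical_rates]
    print(f"d={d}:", np.round(curve, 3))
# For this toy code the curves cross at p = 0.5; a surface-code study performs
# the same kind of sweep with a full error-correction simulation and reads off
# a much smaller crossing point (here, about 6.0e-3).
```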
Can wing-tip vortices be reduced/eliminated with a rear-facing propeller near the wing-tip?
Let's say we have a propeller-driven aircraft in the pusher configuration (rear-facing propellers). If we could somehow put an engine at the wing tip (ignoring structural concerns), and maybe make the propeller rotate in the opposite direction of the way the wing-tip vortex wants to spin, would this eliminate or at least drastically reduce the wing-tip vortex?
DrZ214
It's been tried before. – fooot♦ Jan 22 '16 at 4:05
@fooot a fascinating aircraft that seems to imply the answer is yes. However I envisioned the propellers to be a little proportionally smaller and the wing shape to be much more conventional. Hopefully the vortex-cancelling effect will be the same. – DrZ214 Jan 22 '16 at 4:29
Vmca on that aircraft had to be pretty high! – Ralph J Jan 22 '16 at 6:22
@RalphJ I think the idea was specifically to have more lift and control at low speeds by using the propellers to boost the wing efficiency. The article suggests that was actually achieved. – Ville Niemi Jan 22 '16 at 7:05
That might actually be an interesting design to revive for hobbyists. It looks to me that it was designed to optimize space usage (hangar and deck) and robustness for carrier operations. I think there are people who'd like to have a compact and robust design. Although I kind of wonder what happens if you have engine or propeller problems... – Ville Niemi Jan 22 '16 at 7:10
Yes. From an aerodynamic standpoint it makes some sense to use the wingtip vortex in combination with a propeller.
In this article by Sinnige et al., the researchers modeled and tested a propeller in various positions along a semispan, concluding that there were real gains to be had due to the increased span efficiency. Up to 15% less drag was measured when the propeller was positioned at the wingtip compared to a conventional mounting:
By positioning the propeller at the tip of the wing, the slipstream interacts with the flow around the wingtip, thus affecting the roll-up and downstream behavior of the wingtip vortex. PIV measurements downstream of a propeller–wing model showed that this leads to a reduction in overall swirl with inboard-up rotation, for which the swirl in the slipstream is opposite to that associated with the wingtip vortex. At the same time, the system performance was found to improve due to a reduction of the wing induced drag, leading to the conclusion that the decrease in swirl causes a reduction in downwash experienced by the wing.
Apart from the change in drag, the interaction of the wing with the propeller slipstream also modifies the wing lift. The locally enhanced dynamic pressure increases the lift over the spanwise part of the wing washed by the slipstream, which is amplified by the induced swirl for the case with inboard-up rotation. As a result, a strong spanwise variation in lift occurs with the propeller on. The induced velocities caused by this lift gradient lead to a spanwise shearing of the slipstream. With outboard-up rotation, the swirl in the slipstream acts to locally oppose the increase in wing lift due to the propeller-induced dynamic-pressure rise. Compared to the inboard-up rotation case, this leads to a reduction in wing lift at a given angle of attack, thus also a reduction in maximum lift coefficient. Furthermore, the direction of the spanwise shearing of the propeller slipstream is inverted on both sides of the wing.
To quantify the potential aerodynamic benefits of the wingtip-mounted configuration, a direct comparison was made with a conventional configuration, with the propeller mounted on the inboard part of the wing. The increase in wing lift due to the interaction with the propeller was 1 - 4% smaller for the wingtip-mounted configuration than for the conventional configuration. For the latter, the enhanced dynamic pressure and swirl in the slipstream acts over double the spanwise extent, and on a part of the wing where the section lift is higher than for the wingtip-mounted configuration. At higher angles of attack, the lift advantage for the conventional configuration could be further enhanced by the local angle-of-attack increase in proximity of both sides of the nacelle.
In terms of drag performance, on the other hand, the wingtip-mounted configuration showed superior performance. At a wing lift coefficient of $C_L = 0.5$ and a thrust coefficient of $0.09 < C_T < 0.13$, the drag reduction amounted to about 15 - 40 counts (5 - 15%) compared to the conventional configuration. The aerodynamic benefit of the wingtip-mounted configuration further increases with increasing wing lift coefficient and propeller thrust coefficient, leading to a drag reduction of 100 - 170 counts (25 - 50%) at $C_L = 0.7$ and $0.14 < C_T < 0.17$. An analysis of the wing performance confirmed that this drag benefit is mostly due to a reduction of the wing-induced drag. Compared to the conventional configuration, a relative increase in span efficiency of up to 40% was measured for the wingtip-mounted configuration. Although the exact drag benefit will be specific to vehicle design and operating conditions, it is concluded that the interaction between the propeller slipstream and the wingtip vortex leads to a clear drag reduction for the wingtip-mounted configuration.
Keep in mind this is purely aerodynamical, of course, and from a structural standpoint things might not be so clear-cut. While weight in the wings helps alleviate the bending stresses generated by lift, a large mass at the tip can lower the first bending eigenfrequency to unacceptable levels, risking a coupling of the wing oscillations with some other mode in flight or on the ground.
As far as I am aware wingtip fuel tanks have mostly been used on relatively low aspect ratio wings, which lends some credence to the magnitude of this problem.
AEhere supports Monica
Well done for spotting that paper! Also, putting the propeller before the wing makes a ton of sense. An additional problem with having this weight at the wing tip: Landing. Passenger aircraft must be able to land at constant 3° glide slope (in case the pilot doesn't pull up in time) -- with nice slender wings, that means unless your engine is very light, you have a problem. So I would still not expect to see this in practice, at least not without a host of other smart ideas to make it work. – Zak Aug 13 '19 at 1:05
That paper had been sitting in my "to read" pile for half a year, this was more of a happy coincidence. Also, good point about wingtip clearance; there is also the issue of FOD damage for engines hanging outside the paved width of a runway. – AEhere supports Monica Aug 13 '19 at 14:37
Eliminate, no. The wingtip vortices are an inherent part of lift generation. There is no lift without wingtip vortices. The wake vortices carry the momentum that was given to the air to produce the lift, and to cancel them, you'd have to give the air the opposite momentum, which would negate the produced lift. See How does an aircraft form wake turbulence?
Improve efficiency, yes, a little bit. The propellers would cause downwash in their span outside of the wing tips, effectively extending the wing span, and a longer wing span means slightly less induced drag.
However, such a design would have extremely poor single-engine handling, not only because the thrust would become very asymmetric, but also because the lift would partially depend on the engines, so the side with the stopped engine would also lose some lift, and deflecting aileron to compensate would generate more drag, making the thrust even more asymmetric. It would have poor all-engine-out performance too, due to the loss of lift. This does not sound like an optimal approach when similar benefits can be obtained without all these problems by using longer wings.
"The wingtip vortices is what generates the lift." Wrong, wingtip vortices are a by-product of lift generated by the wings, but they generate drag by themselves. Otherwise why add winglets? – Michael Hall Aug 5 '19 at 21:33
I agree with Michael Hall. Otherwise, why not put some vortex generators on the wingtip to increase vorticity? Would putting propellers behind the wingtips to cancel the vortex out destroy the lift? The wingtip vortices are a side effect of having a lifting vortex in your wing (precisely: the spanwise change of strength of the lifting vortex) -- which itself is as much a mathematical construct as an actual thing. – Zak Aug 5 '19 at 23:22
Correct: single-engine handling would be horrible (and also putting such masses at the wingtips is a recipe for aeroelastic catastrophes), and ailerons interacting with propellers must be a nightmare in terms of stability and control (and for the propeller, too) -- but the propellers wouldn't generate very much lift. Also: making the wings longer isn't always an option. – Zak Aug 5 '19 at 23:27
In case the wing-tip engine is providing a substantial amount of thrust, the problems indicated by Jan Hudec and Zak hold. For a distributed propulsion system, such as the configuration used by the NASA X-57, many of these drawbacks are prevented. – Bram Aug 6 '19 at 8:33
@JanHudec You are correct in saying lift cannot exist without vortices. Michael and Zak are also correct in saying wingtip vortices do not contribute to additional lift. Wingtip vortices are what contribute to the induced drag, a finite-span 3D phenomenon. In infinite span, induced drag would disappear and you would get the 2D result, and you will still have bound vortices. – JZYL Aug 7 '19 at 12:47
Doug MacLean has a great synopsis of wingtip devices that is worth reading, and gives a good intuition as to why propellers could reduce, but likely not eliminate, the wingtip vortex.
To summarize: the wingtip vortex (really, the rolled-up vortex wake sheet) creates a downwash at the wing which effectively reduces the angle of attack of the wing relative to the freestream. This downwash increases with the strength of the wingtip vortex, which is proportional to the lift of the wing. The induced downwash angle $\alpha_i$ can be compactly given as $\alpha_i = \frac{C_L}{\pi e AR}$.
To compensate for this downwash, the plane has to fly at a higher angle of attack to achieve some amount of lift. A component of the lift ($C_L\sin\alpha_i$) is now pointing in the streamwise direction. This is induced drag, and if $\alpha_i$ is small you get the conventional induced drag expression $C_{D_i} = C_L\sin\alpha_i = \frac{C_L^2}{\pi e AR}$.
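To put rough numbers on these expressions (the values below are assumed for illustration, not taken from any of the references):

```python
import math

# Assumed, illustrative values: cruise lift coefficient, span (Oswald)
# efficiency factor and wing aspect ratio.
C_L = 0.5
e   = 0.80
AR  = 9.0

alpha_i = C_L / (math.pi * e * AR)       # induced downwash angle [rad]
C_Di    = C_L**2 / (math.pi * e * AR)    # induced drag coefficient

print(f"induced angle: {math.degrees(alpha_i):.2f} deg")
print(f"induced drag : {C_Di*1e4:.0f} counts (1 count = 1e-4 in C_D)")

# A 40% higher span efficiency (the order of the relative gain quoted for the
# wingtip-mounted propeller earlier on this page) cuts induced drag by the
# same factor:
C_Di_tip = C_L**2 / (math.pi * (1.4 * e) * AR)
print(f"with 1.4*e   : {C_Di_tip*1e4:.0f} counts, "
      f"about {(C_Di - C_Di_tip)*1e4:.0f} counts saved")
```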
So how does this change if we add another source of vorticity? It won't reduce the lift of the wing; the vorticity generated by the lifting surface will still be present. To remove the induced drag, ideally our new source of vorticity will have the opposite sign and equal magnitude, so as to create a corresponding upwash at the wing, which would rotate the lift vector back towards the freestream direction.
Can the propeller do that? It's worth thinking about where the vorticity (swirl) comes from in the propeller wake. Swirl is the tangential velocity imparted to the wake; it's essentially due to viscous losses in the blades. Well-designed propellers try to, among other things, minimize this type of loss. If you had a propeller well-designed to provide forward thrust, you would need to put in a huge amount of energy for the viscous losses to counteract the entirety of the wingtip vortex; likely much more than you need for forward thrust. In a typical jetliner wing, the lifting force in cruise is something like 20x the propulsive force. Swirl losses, like tip vorticity, are proportional to the thrust produced by the propeller, which is an order of magnitude smaller than the wing lift.
In theory you could design a propeller to add more swirl to the wake, counteracting the tip vortex more efficiently. But this would reduce the efficiency of the propeller at creating forward thrust, so is unlikely to give a net benefit.
As the testing shows there is some benefit to wingtip mounting. You can't design a propeller with no swirl, so you might as well get some credit for it by putting it at the wingtips (if you can accept the weight and OEI tradeoffs associated with it). But it's not going to entirely (or even substantially) eliminate induced drag.
TL;DR: Propeller swirl and wingtip vortices are both due to inefficiencies in the two systems. These inefficiencies can cancel to some extent, but they are both proportional to the amount of force being produced by the wing/propeller. Since the wing is producing ~20x the force of the propeller in cruise, the effect of its inefficiency will dominate.
Welcome to Av.SE! – Ralph J Aug 7 '19 at 18:46
The wingtip vortices carry some energy, and not leaving it behind seems like a good idea. That's why winglets are a thing, after all.
So, what if you put a propeller at the wingtip, aligning the axis with the vortex core? The propeller would "see" the incoming vorticity, and the blades would get accordingly larger local incidences, thus generating more forwards force -- a little like putting guide vanes in front of it. Alternatively, the propeller could rotate a little slower or reduce blade incidence a little to get thrust back to where it was. The swirl which the propeller produces would be accordingly reduced, so there'd be overall less vorticity and swirl in the flow behind the aircraft.
That all sounds pretty good so far.
However, there are a few drawbacks, from least to most severe:
1: The wingtip vortex is pretty strong at its core, but angular velocity reduces quickly as you move away -- this means that the innermost section of your propeller gains the most additional incidence, but since it's also moving slowest and the blades are thickest there, it doesn't produce a lot of thrust anyway. The outer bits won't see much of an effect, as their own circumferential speed will be much higher than that of the vortex at that position.
2: When mounting the propeller directly behind the wing, the blades will pass through the wing profile wake on the inboard side, where airflow is significantly slower. In the worst case, if the flow on the wing separates, propeller blades could go through a significant zone of "dead water", which means less thrust and more mechanical load on the blades. Also more noise. Most existing pusher configurations have the propeller mounted at some distance from the wing to reduce this effect. But if you did that with wingtip pusher propellers, it would just reinforce an even worse problem...
3: The wingtip vortex doesn't align nicely with the trailing edge of the wingtip, let alone the propeller axis. Depending on flight condition, the vortex will be stronger or weaker, and at large incidence it becomes more of a smeared-out vortex sheet -- imagine lots of small vortices released from points along the outer wing edge, propagating in the downstream direction. This means that at many flight conditions you'd have the vortex not really matching the propeller, thus diminishing the desired effect, while in some other conditions you'd have a strong, well-focussed vortex hitting the propeller somewhere off-center; having your propeller blades rotate through that causes bad vibrations and might also cause separations on the blades, which means lost thrust and horrible noise, coupled with either having to reinforce the entire drive train or facing much higher wear and tear. You could make a propeller aerodynamically more robust to such things, but that will always come at the expense of efficiency, and that's what we set out to gain in the first place...
Pusher propellers are not very efficient to begin with (because putting the whole aircraft into the swirl coming from the propeller is still more efficient than exposing the propeller to the wake of the aircraft), and they are mostly used for stability reasons (this has to do with the pitching and yawing moment produced by a propeller in inclined flow) -- so although aligning a propeller with a vortex does, in isolation, indeed make sense, there are too many real effects keeping this from improving efficiency over a regular old front-mounted prop.
So, can't we do anything with that vortex? Oh yes, you can! You could place a little wing inside the upwash just outside the wingspan, also known as "increase the wingspan" -- more wing makes more lift, but the vortex does not get stronger. This is why sailplanes have such long, thin wings. Or, if you can't make the wing any longer (wing root bending moment too large, size restrictions...), add the little bit at an angle! The classical vertical winglet works like this: It redirects the inwards-headed flow above the wingtip to go straight downstream, and this produces mostly an inwards-facing force but also a forward facing component ==> this means it does exactly what the propeller can't do efficiently, which is weakening the vortex and deriving a little forwards force from it. These days, most winglets are some sort of blend between a wingspan extension and the classical winglet.
How would a turbine engine placed on the wing tip receive the tip vortex in its entirety? – Muze the good Troll. Aug 5 '19 at 23:24
@Muze: I typed "Turbofan vortex ingestion" into duckduckgo, and wouldn't you know it, this is the first result: ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19750018896.pdf -- somebody testing what happens when a wingtip vortex gets into an engine. It's not good. Stall and surge margin are reduced, and so is efficiency -- unless you hit exactly straight on, but you virtually cannot. Ground vortex ingestion is also a longstanding problem for turbofan engines: aviation.stackexchange.com/questions/21219/… – Zak Aug 5 '19 at 23:35
"Pusher propellers are not very efficient to begin with" – are you sure? Most evidence points to the contrary. – Peter Kämpf Aug 7 '19 at 18:42
@Peter -- I have no proper literature to hand right now, but the fact that very few aircraft use pusher props (and that most pusher props are seen in photos from the Age of Experimentation) does point that way. The way I learned it: Pusher props avoid penalties for having the fuselage sit in the accelerated, swirling air from the prop, but degrade prop performance, which is (usually) worse. They're also audibly louder (can confirm from personal experience). If they are used, it's mostly due to stability reasons or to clean up the flow on the rear. Reality is likely more complex than that ... – Zak Aug 8 '19 at 14:08
October 2011, 4(5): 1119-1128. doi: 10.3934/dcdss.2011.4.1119
Mechanisms of recovery of radiation damage based on the interaction of quodons with crystal defects
Vladimir Dubinko
NSC Kharkov Institute of Physics and Technology, Akademicheskaya Str.1, Kharkov 61108, Ukraine
Received: September 2009; Revised: November 2009; Published: December 2010.
A majority of radiation effects studies are concerned with the creation of radiation-induced defects in the crystal bulk, which causes the observed degradation of material properties, called radiation damage. In the present paper we consider mechanisms of recovery of the radiation damage, based on the radiation-induced formation of quodons (energetic, mobile, highly localized lattice excitations that propagate great distances along close-packed crystal directions) and their interaction with crystal defects such as voids and dislocations. The rate theory of microstructure evolution in solids, modified to account for quodon-induced reactions, is applied to describe the radiation-induced annealing of voids observed under low-temperature ion irradiation of nickel. Comparison of the theory with experimental data is used for a quantitative estimate of the propagation range of quodons in metals. Some other related phenomena in the radiation physics of crystals are discussed, including void lattice formation and the electron-plastic effect.
Keywords: Discrete breathers, quodons, radiation damage, recovery.
Mathematics Subject Classification: Primary: 35Q51; Secondary: 34A3.
Citation: Vladimir Dubinko. Mechanisms of recovery of radiation damage based on the interaction of quodons with crystal defects. Discrete & Continuous Dynamical Systems - S, 2011, 4 (5) : 1119-1128. doi: 10.3934/dcdss.2011.4.1119
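The abstract does not reproduce the modified rate equations themselves, but the general shape of such a calculation can be illustrated with a deliberately simplified toy model: assume a void shrinks because quodons arriving at it cause vacancies to be emitted from its surface, and that the arrival rate scales with the quodon generation rate, the void cross-section and the quodon propagation range. Every functional form and number below is an assumption made purely for illustration; this is not the model of the paper.

```python
import math

# Toy parameters (all assumed for illustration, not taken from the paper)
OMEGA  = 1.1e-29   # atomic volume [m^3]
G_Q    = 1.0e24    # quodon generation rate per unit volume [1/(m^3 s)]
L_Q    = 1.0e-6    # assumed quodon propagation range [m]
N_EMIT = 1.0       # vacancies emitted from the void surface per quodon hit

def shrink_rate(R):
    """Toy void shrinkage rate dR/dt [m/s].

    Quodons generated within range L_Q of the void and aimed at its
    cross-section pi*R^2 are assumed to reach it, each emitting N_EMIT
    vacancies and reducing the void volume by N_EMIT*OMEGA.
    (The R^2 factors cancel, so dR/dt is constant in this toy model.)"""
    if R <= 0.0:
        return 0.0
    hits_per_second = G_Q * math.pi * R**2 * L_Q
    dV_dt = -N_EMIT * OMEGA * hits_per_second
    return dV_dt / (4.0 * math.pi * R**2)   # dV = 4*pi*R^2 dR

# Forward-Euler integration of the void radius under irradiation
R, dt = 5e-9, 1.0   # initial radius 5 nm, 1 s time step
for step in range(2001):
    if step % 500 == 0:
        print(f"t = {step:5d} s   R = {R*1e9:6.3f} nm")
    R = max(R + shrink_rate(R) * dt, 0.0)
```

In a calculation of this type, such emission terms compete with ordinary vacancy and interstitial absorption, and fitting the predicted annealing rate to the measured void shrinkage is what constrains the propagation range.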
In linear algebra, the determinant is a scalar value computed from a square matrix, written $\det(A)$ or $|A|$. For a $1 \times 1$ matrix the determinant is just the single entry itself; for a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ it is $ad - bc$. Determinants arise naturally from linear systems: eliminating $x, y, z$ from the homogeneous equations $a_1x + b_1y + c_1z = 0$, $a_2x + b_2y + c_2z = 0$ and $a_3x + b_3y + c_3z = 0$ gives the condition $a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2) = 0$, which says exactly that the $3 \times 3$ determinant of the coefficients vanishes.

For larger matrices the determinant is built from minors and cofactors. The minor of an entry $a_{ij}$ is the determinant of the submatrix obtained by deleting row $i$ and column $j$ (the matrix of all such minors is called the minor matrix), and the cofactor is the signed minor, $A_{ij} = (-1)^{i+j}$ times the minor. The signs follow a checkerboard pattern: for a $3 \times 3$ matrix, $+\,-\,+$ on the first row, $-\,+\,-$ on the second, and so on. The determinant is then the sum, along any one row or column, of each entry times its cofactor (Laplace expansion):

$$|A| = \sum_{j=1}^{n} a_{ij}\,A_{ij} \qquad \text{for any fixed row } i.$$

Applied to a $4 \times 4$ matrix this reduces the computation to four $3 \times 3$ determinants, each of which reduces to $2 \times 2$ determinants, so the expansion works for matrices of any size, although it quickly becomes tedious by hand and expensive on a computer.
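A minimal sketch of the cofactor expansion in code, written directly from the definition above (not tied to any particular library):

```python
def det(matrix):
    """Determinant by Laplace (cofactor) expansion along the first row.

    Clear but slow (factorial cost), so it is only practical for small
    matrices or for cross-checking other methods."""
    n = len(matrix)
    if n == 1:                      # base case: a 1x1 determinant is the entry itself
        return matrix[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in matrix[1:]]
        cofactor = (-1) ** j * det(minor)     # checkerboard sign (-1)^(0+j)
        total += matrix[0][j] * cofactor
    return total

print(det([[1, 2], [3, 4]]))                     # ad - bc = 1*4 - 2*3 = -2
print(det([[2, 0, 1], [1, 3, 2], [1, 1, 1]]))    # 2*(3-2) - 0 + 1*(1-3) = 0 (singular)
```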
Several properties make determinants easier to compute and to use. The determinant of a diagonal or triangular (upper or lower triangular) matrix is the product of its diagonal entries, and the determinant of the identity matrix is 1. Interchanging two rows (or columns) flips the sign of the determinant, so a matrix with two equal rows has determinant zero; a row or column of zeros likewise gives determinant zero. A common factor of a row can be taken outside the determinant, and if every element of a row is a sum of two terms, the determinant can be written as a sum of two determinants of the same order. For square matrices of the same size, $\det(AB) = \det(A)\det(B)$. Finally, a square matrix $A$ is invertible exactly when $\det(A) \neq 0$, i.e. when there exists $A^{-1}$ with $AA^{-1} = I$.

These properties lead to the standard computational approach: reduce the matrix to row echelon (triangular) form using elementary row operations, keeping track of row swaps and scalings, and multiply the diagonal entries; library routines such as NumPy's numpy.linalg.det work essentially this way, via an LU factorization. Determinants are useful for solving systems of linear equations (Cramer's rule), for computing the inverse through the adjoint (cofactor) matrix, and in calculus, where the Jacobian determinant (the determinant of the matrix of first-order partial derivatives of a vector-valued function with as many outputs as inputs) appears in the change-of-variables rule for integrals of functions of several variables. One caveat for numerical work: in floating-point arithmetic the computed determinant of an exactly singular matrix is usually a tiny nonzero number rather than exactly zero, and a determinant close to zero does not by itself mean the matrix is nearly singular, so the determinant is a poor measure of singularity.
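And a sketch of the row-reduction route, together with the floating-point caveat mentioned above (NumPy is assumed to be available; numpy.linalg.det is used only as a cross-check):

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting.

    Reduce to upper-triangular form, tracking row swaps (each swap flips
    the sign); the determinant is the product of the diagonal entries."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        pivot = k + np.argmax(np.abs(U[k:, k]))   # partial pivoting
        if U[pivot, k] == 0.0:
            return 0.0                            # whole pivot column is zero: singular
        if pivot != k:
            U[[k, pivot]] = U[[pivot, k]]         # row swap flips the sign
            sign = -sign
        U[k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k])
    return sign * float(np.prod(np.diag(U)))

A = [[2, 0, 1], [1, 3, 2], [0, 1, 4]]
print(det_by_elimination(A), np.linalg.det(A))    # both ~ 21.0

# Floating-point caveat: this matrix is exactly singular (row3 = 2*row2 - row1),
# yet the computed determinant is typically a tiny nonzero number (~1e-16).
S = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
print(np.linalg.det(S))
```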
Japan Pattern Png, Dark Vengeance Cultists, Magnus Warhammer Fantasy, Boss Ar3000d Specs, How To Get A Pet In Skyrim, How Do You Mute A Call With Earphones, How To Clean A Glass Banger Without Alcohol, Dwarf Bougainvillea Purple, Igcse Physics Topics, 2021 Louisville Slugger Bats, Benefits Of Clematis,
determinant of a matrix 2020
|
CommonCrawl
|
Foreword II
By S. P. Khare
K. N. Joshipura, Nigel Mason, The Open University, Milton Keynes
Book: Atomic-Molecular Ionization by Electron Scattering
Published online: 20 December 2018
Print publication: 24 January 2019, pp xxv-xxvi
To study any system, an interaction with the system is essential. For microscopic objects and systems, which cannot be seen by the naked eye, usually the electron (or photon) beam, with known characteristics, acts as the probe. At a micro-distance, the projectile particles interact/collide with the object and subsequently they are scattered in all possible directions. The scattered particles, electrons in our case, carry the signature of interaction with the object (target). Hence, the measurement of the differential cross sections, $I(\theta, E_i)$, over all possible directions yields the total collisional cross section $Q(E_i)$ as a function of the incident energy $E_i$ of the electrons. For the atomic targets, the collisions are elastic as well as inelastic, including ionization. For the molecules, additional processes like dissociation and dissociative ionization, etc., are also possible. Besides, the dissociative components may be in an excited state. The myriad phenomena arising out of electron scattering make the study very interesting from the manifold view-points of theory, experiment, as well as applications.
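In symbols (the notation $I(\theta, E_i)$ for the differential cross section and $Q(E_i)$ for the total cross section is assumed here, since the original symbols did not survive extraction), the relation described is $Q(E_i) = \int I(\theta, E_i)\, d\Omega = 2\pi \int_0^{\pi} I(\theta, E_i)\, \sin\theta\, d\theta$, i.e. the integral of the differential cross section over all scattering directions.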
In the present book, written by Professors K. N. Joshipura and Nigel Mason, collisions of electrons with atoms are briefly discussed initially. In the greater part of the book, molecules are considered as targets. The study is extended to the radicals and the metastable molecules, while a few molecules of biological interest are also considered. The applications of various scattering cross sections to diverse fields like astrophysics, astrochemistry, nanotechnology, etc., are described in the last chapter.
Both the authors are experienced players in the field of electron–atom–molecule collisions. They have a good number of publications to their credit. From the theory as well as application point of view the present book should be quite useful.
Finger millet bran supplementation alleviates obesity-induced oxidative stress, inflammation and gut microbial derangements in high-fat diet-fed mice
Nida Murtaza, Ritesh K. Baboota, Sneha Jagtap, Dhirendra P. Singh, Pragyanshu Khare, Siddhartha M. Sarma, Koteswaraiah Podili, Subramanian Alagesan, T. S. Chandra, K. K. Bhutani, Ravneet K. Boparai, Mahendra Bishnoi, Kanthi Kiran Kondepudi
Journal: British Journal of Nutrition / Volume 112 / Issue 9 / 14 November 2014
Published online by Cambridge University Press: 19 September 2014, pp. 1447-1458
Print publication: 14 November 2014
Several epidemiological studies have shown that the consumption of finger millet (FM) alleviates diabetes-related complications. In the present study, the effect of finger millet whole grain (FM-WG) and bran (FM-BR) supplementation was evaluated in high-fat diet-fed LACA mice for 12 weeks. Mice were divided into four groups: control group fed a normal diet (10 % fat as energy); a group fed a high-fat diet; a group fed the same high-fat diet supplemented with FM-BR; a group fed the same high-fat diet supplemented with FM-WG. The inclusion of FM-BR at 10 % (w/w) in a high-fat diet had more beneficial effects than that of FM-WG. FM-BR supplementation prevented body weight gain, improved lipid profile and anti-inflammatory status, alleviated oxidative stress, regulated the expression levels of several obesity-related genes, increased the abundance of beneficial gut bacteria (Lactobacillus, Bifidobacteria and Roseburia) and suppressed the abundance of Enterobacter in caecal contents (P≤ 0·05). In conclusion, FM-BR supplementation could be an effective strategy for preventing high-fat diet-induced changes and developing FM-BR-enriched functional foods.
Genetic analysis of precore/core and partial pol genes in an unprecedented outbreak of fulminant hepatitis B in India
S. KHARE, S. S. NEGI, S. SINGH, M. SINGHAL, S. KUMAR, C. PRAKASH, R. VENUGOPAL, D. S. RAWAT, L. S. CHAUHAN, A. RAI
Journal: Epidemiology & Infection / Volume 140 / Issue 10 / October 2012
Published online by Cambridge University Press: 15 March 2012, pp. 1823-1829
We investigated an unprecedented outbreak of fulminant hepatitis B virus (HBV) that occurred in Modasa, Gujarat (India) in 2009. Genomic analysis of all fulminant hepatic failure cases confirmed exclusive predominance of subgenotype D1. A1762T, G1764A basal core promoter (BCP) mutations, insertion of isoleucine after nt 1843, stop codon mutation G1896A, G1862T transversion plus seven other mutations in the core gene caused inhibition of HBeAg expression implicating them as circulating precore/BCP mutant virus. Two rare mutations at amino acids 89 (Ile→Ala) and 119 (Leu→Ser) in addition to other mutations in the polymerase (pol) gene may have caused some alteration in either of four pol gene domains to affect encapsidation of pregenomic RNA to enhance pathogenicity. Sequence similarity among patients' sequences suggested an involvement of a single hepatitis B mutant strain/source to corroborate the finding of gross and continued usage of HBV mutant-contaminated syringes/needles by a physician which resulted in this unprecedented outbreak of fulminant hepatitis B. The fulminant exacerbation of the disease might be attributed to mutations in the BCP/precore/core and pol genes that may have occurred due to selection pressure during rapid spread/mutation of the virus.
The Use of Thermally Decomposable Ligands for Conductive Films of Semiconductor Nanocrystals
Andrew Wills, Moon Sung Kang, Ankur Khare, Wayne L. L. Gladfelter, David Norris
Published online by Cambridge University Press: 01 February 2011, 1260-T12-03
Poor conductivity is a bottleneck hindering the production of nanocrystal-based devices. In most nanocrystal syntheses, ligands with long alkyl chains are used to prepare monodisperse, crystalline particles. When these nanocrystals are incorporated into devices as films, the bulky ligands form an insulating layer that prevents charge transfer between particles. While annealing or post-deposition chemical treatments can be used to strip surface ligands, each of these approaches has disadvantages. Here we demonstrate the use of a novel family of ligands comprised of primary alkyl dithiocarbamates to stabilize PbSe/CdSe core-shell nanocrystals. Primary dithiocarbamates, which can bind to cadmium and lead, are known to decompose to the corresponding sulfides when heated under mild conditions. In our scheme, PbSe/CdSe core-shell nanocrystals are first synthesized with standard ligands. These ligands are then exchanged to short chain dithiocarbamates in solution. When a film is cast and annealed at low temperature, the dithiocarbamates are removed. Electron microscopy reveals that the particles move closer together, and, along with x-ray diffraction, shows that the nanocrystals remain quantum confined. Transport measurements show a 10,000-fold increase in conductivity after annealing.
Effect of Surface Roughness on Remelting and Stresses During Splat Solidification
Guosheng Ye, Rajesh Khare, Michael Gevelber, Donald Wroblewsk, Soumendra Basu
Journal: MRS Online Proceedings Library Archive / Volume 978 / 2006
Published online by Cambridge University Press: 26 February 2011, 978-GG05-22
This is a copy of the slides presented at the meeting but not formally written up for the volume.
In order to understand the microstructural evolution in plasma sprayed coatings, the solidification process was modeled using a 2-D FEM model based on an enthalpy formulation. Studies of the surface of the coatings showed surface roughnesses across multiple length scales. The model was used to examine the effects of the substrate and splat temperatures and the surface roughness features on the onset of remelting of the underlying surface on which the splat solidifies. The surface roughness was found to promote remelting, indicating that it was an important parameter that determines splat solidification. The temperatures of the splat and substrate were consolidated into one non-dimensional parameter that captured the onset of remelting with a non-dimensional remelting point. A fully coupled thermo-mechanical finite element model was also run for a single splat case, to provide more insight into stress buildup during solidification. An important result was that the relative size of the surface roughness features, as compared to the splat thickness, is very important. Very large wavelengths compared to splat thickness lead to smaller stresses, since the solidification and the interface are essentially 1-D. Very small wavelengths compared to splat thickness also lead to reduced stresses, since the solidification front quickly becomes 1-D. Only roughness features on the scale of splat thickness are important in providing locations of maximum stress concentration, which are locations of microcrack formation.
The Sloan Digital Sky Survey QSO absorption line catalogue
Donald G. York, Daniel Vanden Berk, Gordon T. Richards, Arlin P. S. Crotts, Pushpa Khare, James Lauroesch, Martin Lemoine, Scott Burles, Mariangela Bernardi, Francisco J. Castander, Josh Frieman, Jon Loveday, Avery Meiksin, Robert Nichol, David Schlegel, Donald P. Schneider, Mark Subbarao, Chris Stoughton, Alex Szalay, Brian Yanny, Yusra Alsayyad, Abhishek Kumar, Britt Lundgren, Natela Shanidze, Johnny Vanlandingham, Matthew Wood, Britt Baugher, Jon Brinkmann, Robert Brunner, Masaaka Fukugita, Patrick B. Hall, Timothy M. Heckman, Lewis M. Hobbs, Craig J. Hogan, Lam Hui, Edward B. Jenkins, Daniel Kunstz, Brice Menard, Osamu Nakamura, Jean M. Quashnock, Michael Stein, Aniruddha R. Thakar, David Turnshek, Daniel E. Welty, the SDSS Collaboration
Journal: Proceedings of the International Astronomical Union / Volume 1 / Issue C199 / March 2005
Published online by Cambridge University Press: 06 October 2005, pp. 58-64
The spectra of the Sloan Digital Sky Survey (SDSS) are being used to construct a catalogue of QSO absorption lines, for use in studies of abundances, relevant radiation fields, number counts as a function of redshift, and other matters, including the evolution of these parameters. The catalogue includes intervening, associated, and BAL absorbers, in order to allow a clearer definition of the relationships between these three classes. We describe the motivation for and the data products of the project to build the SDSS QSO absorption line catalogue.
The evolution of damped Ly-$\alpha$ absorbers: metallicities and star formation rates
Varsha P. Kulkarni, Donald G. York, James T. Lauroesch, S. Michael Fall, Pushpa Khare, Bruce E. Woodgate, Povilas Palunas, Joseph Meiring, Deepashri G. Thatte, Daniel E. Welty, James W. Truran
Published online by Cambridge University Press: 06 October 2005, pp. 307-312
Damped Lyman-$\alpha$ (DLA) and sub-DLA quasar absorption lines provide powerful probes of the evolution of metals, gas, and stars in galaxies. One major obstacle in trying to understand the evolution of DLAs and sub-DLAs has been the small number of metallicity measurements at $z<1.5$, an epoch spanning $\sim 70$% of the cosmic history. In recent surveys with the Hubble Space Telescope and Multiple Mirror Telescope, we have doubled the DLA Zn sample at $z<1.5$. Combining our results with those at higher redshifts from the literature, we find that the global mean metallicity of DLAs does not rise to the Solar value at low redshifts. These surprising results appear to contradict the near-Solar mean metallicity observed for nearby ($z \approx 0$) galaxies and the predictions of cosmic chemical evolution models based on the global star formation history. Finally, we discuss direct constraints on the star formation rates (SFRs) in the absorber galaxies from our deep Fabry-Perot Ly-$\alpha$ imaging study and other emission-line studies in the literature. A large fraction of the observed heavy-element quasar absorbers at $0<z<3.4$ appear to have SFRs substantially below the global mean SFR, consistent with the low metallicities observed in the spectroscopic studies.
Evidence for the presence of dust in intervening QSO absorbers from the Sloan Digital Sky Survey
P. Khare, D. G. York, D. Vanden Berk, V. P. Kulkarni, A. P. S. Crotts, D. E. Welty, J. T. Lauroesch, G. T. Richards, Y. Alsayyad, A. Kumar, B. Lundgren, N. Shanidze, J. Vanlandingham, B. Baugher, P. B. Hall, E. B. Jenkins, B. Menard, S. Rao, D. Turnshek, C. W. Yip
We find evidence for dust in the intervening QSO absorbers from the spectra of QSOs in the Sloan Digital Sky Survey Data Release 1. No evidence is found for the 2175 Å feature which is present in the Milky Way dust extinction curve.
In Another Country: Colonialism, Culture, and the English Novel in India. By Priya Joshi. New York: Columbia University Press, 2002. 368 pp. $62.00 (cloth); $23.50 (paper).
R. S. Khare
Journal: The Journal of Asian Studies / Volume 62 / Issue 4 / November 2003
Equation-of-state studies using a 10-Hz Nd:YAG laser oscillator
M. SHUKLA, A. UPADHYAY, V.K. SENECHA, P. KHARE, S. BANDYAOPADHYAY, V.N. RAI, C.P. NAVATHE, H.C. PANT, M. KHAN, B.K. GODWAL
Journal: Laser and Particle Beams / Volume 21 / Issue 4 / October 2003
A commercial mode locked cavity dumped Nd:YAG dye laser operating at 10 Hz repetition rate is modified to produce a high contrast (>5000:1) single laser pulse while maintaining the energy stability and high beam quality. A trigger generator biases the cavity dumping photodiode, which is triggered externally by a pulse from the microprocessor-based control unit controlling a ∼2 J/200 ps laser chain. In the laser chain, the high contrast (>5000:1) is achieved by an external pulse selector based on single Pockel's cell to select a single laser pulse of high contrast, which is a prerequisite for experimental study of the equation of state. Laser-induced shock velocity measurement in thin aluminum, gold on aluminum, and copper on aluminum foil targets using this modified laser system are also presented. The equation of state of Al, Au, and Cu obtained using an impedance matching technique are in agreement with the reported results of SESAME and simulation results.
Partial VP1/2A gene sequence based molecular epidemiology of wild type 1 poliovirus isolates from some parts of India
R. ANAND, D. GHOSH, A. V. BHUPATIRAJU, S. BROOR, S. T. PASHA, S. KHARE, M. KUMAR, K. K. DUTTA, A. RAI
Journal: Epidemiology & Infection / Volume 129 / Issue 1 / August 2002
Published online by Cambridge University Press: 02 September 2002, pp. 107-112
Genomic variability within the sequences of VP1/2A junction among polioviruses from across the globe has revealed the existence of several endemic genotypes and their epidemiological inter-relationships; but such data on Indian isolates are scanty. The present work was intended to ascertain the persistence and transmission pattern of different genotypes of wild type 1 polioviruses circulating in India. Forty-eight wild type 1 poliovirus isolates obtained from different parts of India during 1996–8 were subjected to RT–PCR and nucleotide sequencing using M13 tailed primers. A 293 base pair region was amplified and sequenced for genetic variation study. Considering the 15% divergence of the sequences from Sabin 1, the isolates from six different states of India confirmed a single dominant genotype 4. Phylogenetic analysis revealed the circulation and active inter-state transmission of many genetically distinct strains of wild poliovirus type 1 belonging to genotype 4. This warrants the need for insisting on more efficient surveillance mechanisms so as to assess the impact of an extensive pulse polio immunization programme in India.
A severe and explosive outbreak of hepatitis B in a rural population in Sirsa district, Haryana, India: unnecessary therapeutic injections were a major risk factor
J. SINGH, S. GUPTA, S. KHARE, R. BHATIA, D. C. JAIN, J. SOKHEY
Journal: Epidemiology & Infection / Volume 125 / Issue 3 / December 2000
Most outbreaks of viral hepatitis in India are caused by hepatitis E. This report describes an outbreak of hepatitis B in a rural population in Haryana state in 1997. At least 54 cases of jaundice occurred in Dhottar village (population 3096) during a period of 8 months; 18 (33·3%) of them died. Virtually all fatal cases were adults and tested positive for HBsAg (other markers not done). About 88% (21/24) of surviving cases had acute or persistent HBV/HCV infections; 54% (13/24) had acute hepatitis B. Many other villages reported sporadic cases and deaths. Data were pooled from these villages for analysis of risk factors. Acute hepatitis B cases had received injections before illness more frequently (11/19) than those found negative for acute or persistent HBV/HCV infections (3/17) (P = 0·01). Although a few cases had other risk factors, these were equally prevalent in two groups. The results linked the outbreak to the use of unnecessary therapeutic injections.
Community studies on hepatitis B in Rajahmundry town of Andhra Pradesh, India, 1997–8: unnecessary therapeutic injections are a major risk factor
J. SINGH, R. BHATIA, S. K. PATNAIK, S. KHARE, D. BORA, D. C. JAIN, J. SOKHEY
Journal: Epidemiology & Infection / Volume 125 / Issue 2 / October 2000
Published online by Cambridge University Press: 02 January 2001, pp. 367-375
In Rajahmundry town in India, 234 community cases of jaundice were interviewed for risk factors of viral hepatitis B and tested for markers of hepatitis A–E. About 41% and 1·7% of them were positive for anti-HBc and anti-HCV respectively. Of 83 cases who were tested within 3 months of onset of jaundice, 5 (6%), 11 (13·3%), 1 (1·2%), 5 (6%) and 16 (19·3%) were found to have acute viral hepatitis A–E, respectively. The aetiology of the remaining 60% (50/83) of cases of jaundice could not be established. Thirty-one percent (26/83) were already positive for anti-HBc before they developed jaundice. History of therapeutic injections before the onset of jaundice was significantly higher in cases of hepatitis B (P = 0·01) or B–D (P = 0·04) than in cases of hepatitis A and E together. Other potential risk factors of hepatitis B transmission were equally prevalent in two groups. Subsequent studies showed that the majority of injections given were unnecessary (74%, 95% CI 66–82%) and were administered by both qualified and unqualified doctors.
Diffusion and Defect Structure in Nitrogen Implanted Silicon
Omer Dokumaci, Richard Kaplan, Mukesh Khare, Paul Ronsheim, Jay Burnham, Anthony Domenicucci, Jinghong Li, Robert Fleming, Lahir S. Adam, Mark E. Law
Published online by Cambridge University Press: 21 March 2011, J6.4
Nitrogen diffusion and defect structure were investigated after medium to high dose nitrogen implantation and anneal. 11 keV N₂⁺ was implanted into silicon at doses ranging from 2×10¹⁴ to 2×10¹⁵ cm⁻². The samples were annealed with an RTA system from 750°C to 900°C in a nitrogen atmosphere or at 1000°C in an oxidizing ambient. Nitrogen profiles were obtained by SIMS, and cross-section TEM was done on selected samples. TOF-SIMS was carried out on the oxidized samples. For lower doses, most of the nitrogen diffuses out of silicon into the silicon/oxide interface as expected. For the highest dose, a significant portion of the nitrogen still remains in silicon even after the highest thermal budget. This is attributed to the finite capacity of the silicon/oxide interface to trap nitrogen. When the interface gets saturated by nitrogen atoms, nitrogen in silicon cannot escape into the interface. Implant doses above 7×10¹⁴ cm⁻² create continuous amorphous layers from the surface. For the 2×10¹⁵ cm⁻² case, there is residual amorphous silicon at the surface even after a 750°C 2 min anneal. After the 900°C 2 min anneal, the silicon fully recrystallizes, leaving behind stacking faults at the surface and residual end-of-range damage.
Food intake patterns and gallbladder disease in Mexican Americans
Marilyn Tseng, Robert F DeVellis, Kurt R Maurer, Meena Khare, Lenore Kohlmeier, James E Everhart, Robert S Sandler
Journal: Public Health Nutrition / Volume 3 / Issue 2 / June 2000
Print publication: June 2000
Results of previous studies on diet and gallbladder disease (GBD), defined as having gallstones or having had surgery for gallstones, have been inconsistent. This research examined patterns of food intake in Mexican Americans and their associations with GBD.
Cross-sectional.
The study population included 4641 Mexican Americans aged 20–74 years who participated in the 1988–94 third National Health and Nutrition Examination Survey (NHANES III). GBD was diagnosed by ultrasound. Food intake patterns were identified by principal components analysis based on food frequency questionnaire responses. Component scores representing the level of intake of each pattern were categorized into quartiles, and prevalence odds ratios (POR) were estimated relative to the lowest quartile along with 95% confidence intervals (CI).
There were four distinct patterns in women (vegetable, high calorie, traditional, fruit) and three in men (vegetable, high calorie, traditional). After age adjustment, none were associated with GBD in women. However, men in the third (POR = 0.42, 95%CI 0.21–0.85) and fourth (POR = 0.53, 95%CI 0.28–1.01) quartiles of the traditional intake pattern were half as likely to have GBD as those in the lowest quartile.
These findings add to a growing literature suggesting dietary intake patterns can provide potentially useful and relevant information on diet–disease associations. Nevertheless, methods to do so require further development and validation.
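The analysis pipeline described above (principal components of the food-frequency items, quartiles of the component scores, and age-adjusted prevalence odds ratios) can be sketched as follows. This is a hypothetical illustration on simulated stand-in data, not the authors' code; the names ffq, gbd and age are invented for the example.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(0)
ffq = pd.DataFrame(rng.poisson(2.0, size=(500, 12)),
                   columns=[f"food_{i}" for i in range(12)])   # stand-in FFQ responses
gbd = rng.integers(0, 2, size=500)                             # stand-in GBD outcome (0/1)
age = rng.uniform(20, 74, size=500)

# 1. Derive intake patterns as principal components of the standardized FFQ items.
scores = PCA(n_components=3).fit_transform((ffq - ffq.mean()) / ffq.std())

# 2. Categorize the first pattern's component score into quartiles.
quartile = pd.qcut(scores[:, 0], 4, labels=False)

# 3. Age-adjusted prevalence odds ratios relative to the lowest quartile.
X = pd.get_dummies(pd.Series(quartile, name="q"), prefix="q", drop_first=True).astype(float)
X["age"] = age
X = sm.add_constant(X)
fit = sm.Logit(gbd, X).fit(disp=0)
print(np.exp(fit.params))       # prevalence odds ratios (the intercept exponent is not an OR)
print(np.exp(fit.conf_int()))   # 95% confidence intervals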
Food and Love: A Cultural History of East and West. By Jack Goody. London and New York: Verso, 1998. ix, 305 pp. $25.00.
Journal: The Journal of Asian Studies / Volume 59 / Issue 2 / May 2000
Print publication: May 2000
Fluctuations of Step Edges: Revelations About Atomic Processes Underlying Surface Mass Transport
T. L. Einstein, S. V. Khare, O. Pierre-Louis
Published online by Cambridge University Press: 10 February 2011, 237
Experimental advances in recent years make possible quantitative observations of step-edge fluctuations. By applying a capillary-wave analysis to these fluctuations, one can extract characteristic times, from which one learns about the mass-transport mechanisms that underlie the motion as well as the associated kinetic coefficients [1-3]. The latter do not require a priori insight about the microscopic energy barriers and can be applied to situations away from equilibrium. We have studied a large number of limiting cases and, by means of a unified formalism, the crossover between many of these cases[4]. Monte Carlo simulations have been used to corroborate these ideas. We have considered both isolated steps and vicinal surfaces; illustrations will be drawn from noble-metal systems, though semiconductors have also been studied. Attachment asymmetries associated with Ehrlich-Schwoebel barriers play a role in this behavior. We have adapted the formalism for nearly straight steps to nearly circular steps in order to describe the Brownian motion of single-layer clusters of adatoms or vacancies on metal surfaces, again in concert with active experimental activity [3,5]. We are investigating the role of external influences, particularly electromigration, on the fluctuations.
Effects of level of socio-economic development on course of non-affective psychosis
Vijoy K. Varma, N. N. Wig, B. M. Tripathi, Arun K. Misra, C. B. Khare, Hemen R. Phookun, D. K. Menon, Alan S. Brown, Ezra S. Susser
Journal: The British Journal of Psychiatry / Volume 171 / Issue 3 / September 1997
This study explored the relation of level of socio-economic development to the course of non-affective psychosis, by extending the analysis of urban/rural differences in course in Chandigarh, India.
The proportions of 'best outcome' cases in urban (n=110) and rural (n=50) catchment areas were compared at two-year follow-up, separately for CATEGO S+ and non-S+ schizophrenia.
The proportion of subjects with 'best outcome' ratings at the urban and rural sites, respectively, was similar for CATEGO S+ schizophrenia (29 v. 29%), but significantly different for non-S+ psychosis (26 v. 47%).
The fact that in rural Chandigarh, psychoses have a more favourable course than in the urban area may be explained in large part by psychoses distinct from 'nuclear' schizophrenia.
Limitations to the Bandgap-Selective Photoelectrochemical Etching of GaAs/AlxGa1−xAs Heterostructures
Reena Khare, D. Bruce Young, Evelyn L. Hu
We have used the wet photoelectrochemical (PEC) etch process to demonstrate the selective removal of low aluminum (Al) mole-fraction AlxGa1−xAs layers from those with higher Al mole-fraction. High etch selectivity was found for δx as low as 0.05, but was found to decrease as the incident photon energy approached the energy bandgap of the desired stop-layer. The ultimate selectivity of one layer from an underlying layer was affected not only by differences in material composition, but also by the sequencing of the layers within the structure.
An unusual laryngeal foreign body
S. R. Agrawal, A. S. Bhalla, P. Khare
Journal: The Journal of Laryngology & Otology / Volume 100 / Issue 3 / March 1986
|
CommonCrawl
|
Generalized hypergeometric functions at unit argument
by Wolfgang Bühring
The analytic continuation near $z = 1$ of the hypergeometric function ${}_{p+1}F_p(z)$ is obtained for arbitrary $p = 2, 3, \ldots$, including the exceptional cases when the sum of the denominator parameters minus the sum of the numerator parameters is equal to an integer.
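For orientation, the simplest instance of behavior at unit argument is the classical Gauss summation theorem for $p = 1$: ${}_2F_1(a,b;c;1) = \frac{\Gamma(c)\,\Gamma(c-a-b)}{\Gamma(c-a)\,\Gamma(c-b)}$ whenever $\operatorname{Re}(c-a-b) > 0$. The exceptional cases mentioned above are the ${}_{p+1}F_p$ analogue of $c-a-b$ degenerating to an integer, where the continuation typically acquires logarithmic terms.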
|
CommonCrawl
|
Will Our Sun Become A Black Hole
Our Sun will not become a black hole, because it simply does not contain enough mass. Stars need roughly 8 solar masses or more to end their lives in a supernova explosion, and something like 20 to 25 times the mass of the Sun to leave a black hole behind. When such a massive star exhausts its nuclear fuel, the core collapses; if the collapsed core exceeds roughly three solar masses, no known force can halt the collapse and a black hole forms, an object whose gravity is so strong that nothing, not even light, can escape its immediate vicinity. Collapsed cores between about 1.4 solar masses (the Chandrasekhar limit) and roughly three solar masses are instead compressed into neutron stars only about 20 km across. The resulting "stellar-mass" black holes are typically some 10 to 24 times as massive as the Sun; dozens are known and there are probably millions more in the Milky Way alone.
The Sun, about 4.5 to 5 billion years old and roughly halfway through its life, faces a much quieter end. In around 5 billion years it will exhaust the hydrogen in its core, swell into a red giant large enough to engulf Mercury and Venus and possibly the Earth, burn helium into carbon and oxygen for a time, and then shed its outer layers. What remains will be a white dwarf, which will slowly cool, over an enormously long time on the order of 10^15 (a quadrillion) years or more, into a cold black dwarf. At no point in this sequence does a black hole form.
Contrary to popular belief, even if the Sun were suddenly replaced by a black hole of exactly the same mass, the Earth would not be sucked in. Outside the event horizon a black hole's gravity is no stronger than that of a star of the same mass, so the Earth's orbit would be essentially unchanged; the planet would simply become cold and dark. A black hole with the mass of the Sun would have a radius of only about 3 km, and one with the mass of the Earth would fit in the palm of your hand. The radius grows in direct proportion to the mass: a 2-solar-mass black hole has a radius of about 6 km, a 10-solar-mass black hole about 30 km, and a 3-million-solar-mass black hole about 9 million km, roughly 13 times the size of the Sun. Because of this scaling, very large black holes are not very dense: a solar-mass black hole has a mean density of about $1.85 \times 10^{19}$ kg/m$^3$, whereas one a billion times as massive as the Sun, such as is thought to exist in the centers of some galaxies, has an average density only a few tens of times that of air.
Black holes cannot be seen directly, so the evidence for them is indirect: gas spiraling inwards forms a hot accretion disc that radiates strongly, notably in X-rays, and black holes that are actively feeding in this way can become the brightest continuously emitting objects in the Universe. Supermassive black holes, millions to billions of times the mass of the Sun, have been found at the centers of nearly every large galaxy, including our own; the one at the heart of the Milky Way, Sagittarius A*, is more than four million times the mass of the Sun, but it is distant and largely dormant and poses no danger to the Solar System. Finally, as Stephen Hawking showed, black holes are not entirely permanent either: they evaporate extremely slowly, radiating their mass back into the universe over unimaginably long timescales.
|
CommonCrawl
|
Article | Open | Published: 29 April 2019
The material properties of naked mole-rat hyaluronan
Yavuz Kulaberoglu ORCID: orcid.org/0000-0003-2372-184X1,
Bharat Bhushan2,
Fazal Hadi ORCID: orcid.org/0000-0003-3553-817X1,
Sampurna Chakrabarti ORCID: orcid.org/0000-0002-2750-38771,
Walid T. Khaled1,
Kenneth S. Rankin ORCID: orcid.org/0000-0001-6302-02693,
Ewan St. John Smith ORCID: orcid.org/0000-0002-2699-19791 &
Daniel Frankel2
Scientific Reports volume 9, Article number: 6632 (2019)
Nanoscale biophysics
Hyaluronan (HA) is a key component of the extracellular matrix. Given the fundamental role of HA in the cancer resistance of the naked mole-rat (NMR), we undertook to explore the structural and soft matter properties of this species-specific variant, a necessary step for its development as a biomaterial. We examined HA extracted from NMR brain, lung, and skin, as well as that isolated from the medium of immortalised cells. In common with mouse HA, NMR HA forms a range of assemblies corresponding to a wide distribution of molecular weights. However, unique to the NMR, are highly folded structures, whose characteristic morphology is dependent on the tissue type. Skin HA forms tightly packed assemblies that have spring-like mechanical properties in addition to a strong affinity for water. Brain HA forms three dimensional folded structures similar to the macroscopic appearance of the gyri and sulci of the human brain. Lung HA forms an impenetrable mesh of interwoven folds in a morphology that can only be described as resembling a snowman. Unlike HA that is commercially available, NMR HA readily forms robust gels without the need for chemical cross-linking. NMR HA gels sharply transition from viscoelastic to elastic like properties upon dehydration or repeated loading. In addition, NMR HA can form ordered thin films with an underlying semi-crystalline structure. Given the role of HA in maintaining hydration in the skin it is plausible that the folded structures contribute to both the elasticity and youthfulness of NMR skin. It is also possible that such densely folded materials could present a considerable barrier to cell invasion throughout the tissues, a useful characteristic for a biomaterial.
Hyaluronan (HA), also known1,2 as hyaluronic acid, is an extracellular matrix (ECM) polymer found in most tissues. Three key enzymes are involved in HA synthesis, HA synthase 1, 2 and 3 (HAS1-3), and it is subsequently broken down by hyaluronidase3. The regulation of HA turnover is highly important with there being a clear link between HA metabolism and cancer progression, an abundance of HA being an indicator of poor patient prognosis for a number of tumours4,5,6,7,8. HA's role in disease progression is perhaps unsurprising given its function in cell-ECM interactions9,10,11. The main HA receptor is CD44, a cell surface adhesion receptor that binds a range of ligands and is itself associated with metastasis12. The biological function of HA is often related to its molecular weight with a range of polymer sizes found depending on tissue of origin13,14. Given the importance of ECM-cell interactions in disease progression and HA-cell interaction in particular, it would be intriguing to examine the structure/assembly of HA in a cancer resistant species.
The naked mole-rat (NMR, Heterocephalus glaber) has a remarkable biology. NMRs can live up to ten times longer than similarly sized rodents and rarely develop cancer15,16,17. In 2013, a paper reported that HA contributed to the NMR's cancer resistance18. Moreover, the authors reported that it was possible that the ultrahigh molecular weight (greater than 6,100 kDa) form of HA found in the NMR contributed to the elasticity of its characteristic wrinkly skin. With such unusual functions attributed to NMR HA, the question arises of whether there is anything in its structure or soft matter properties that could explain this functionality.
The majority of data on the structure/assembly behaviour of HA come almost exclusively from material obtained from the fermentation of bacteria (commercially available)19,20. Less commonly it is sourced from rooster combs or human fluids and tissues. Whether imaged under aqueous conditions or in air, HA is found to form planar branched networks of fibres21. Scott et al. used rotary shadow electron microscopy to show that HA (extracted from rooster combs, Streptococci and mesothelioma fluid) forms planar networks in solution, and that the longer the HA molecule, the more branching occurs22. More recent studies have used atomic force microscopy (AFM) in air (as opposed to in aqueous solution) to interrogate the morphology of molecular scale nanostructures. Both loose coils of individual molecular chains and extended structures were observed to assemble, with characteristic planar branched network structures common19,21,23,24. Using AFM-based force spectroscopy it has been possible to uncoil and stretch single HA chains and confirm network structures25. At the macroscopic scale and in isolation, HA only forms a weak gel and needs to be either chemically crosslinked or combined in a composite to be useful as a biomaterial26.
It is well established that HA structural changes over time are fundamental to the ageing of human skin, with its affinity for water thought to be critical for maintaining skin elasticity27. To date there has been no attempt to look at the structure or soft matter properties of HA from the NMR. Such an understanding of the material properties of NMR HA is a prerequisite for its development as a potential biomaterial for the treatment of cancer. Therefore, the aim of this work was to purify NMR HA and examine its material and structural properties, with particular emphasis on differences and similarities with HA from other species.
The first tissue examined was skin as the characteristic wrinkly, yet stretchy skin of the NMR has been speculatively attributed to the presence of a high molecular weight HA18. Using a biotinylated version of the very specific and tightly binding hyaluronan binding protein (HABP)28, histological analysis of the plantar surface, forepaw skin demonstrated that HA was present in both mouse and NMR skin. However, HA was observed to a much greater extent in the NMR epidermis compared to the mouse where, as others have shown29, HA was largely confined to the dermis (Fig. 1a,b). HA extracted from the culture medium of immortalised NMR skin fibroblasts presented a range of assembled conformations, whereby individual chains could clearly be resolved (Fig. 1c,d).
NMR skin HA. (a) Mouse forepaw, plantar skin section stained for HA using HABP and streptavidin-Alexa488 (green) and DAPI nuclear stain (blue). (b) NMR forepaw, plantar skin section stained for HA using HABP and streptavidin-Alexa488 (green) and DAPI nuclear stain (blue). Unlike in the mouse, the NMR HA extends deep into the epidermis and is significantly more abundant. (c) AFM topography image of HA chains obtained from HA extracted from the media of NMR skin fibroblasts. (d) AFM topography image of another type of HA chain assembly from NMR skin fibroblasts.
Dimensions of these chains were consistent with a bundle of a few polymer molecules, with an average width of 33.6 ± 7.71 nm (n = 100) and a height several times greater than the couple of nanometres characteristic of individual molecules. The most abundant structures within a sample were well defined, densely packed supercoils (Fig. 2a,b). Such densely folded entities appear to be the basic unit of HA in NMR skin, with a characteristic "cauliflower" appearance when visualised with the SEM (Fig. 2c). NMR HA extracted from skin tissue (rather than from the medium of cultured skin fibroblasts) also formed the same supercoiled folded structures, although they were generally larger than those of HA extracted from culture medium (Fig. 2d,e). Individual chain networks can also be observed for the NMR skin tissue extracted samples (Fig. 2f), demonstrating a consistency between HA extracted from cell media and HA extracted from tissue. Using a conical AFM tip it was possible to address individual supercoils and determine their soft matter properties via indentation. Young's modulus values could be obtained by fitting the indentation curve with the Sneddon model (Eq. 1):
$${F}_{Sneddon}=\frac{2}{\pi }\frac{{E}_{surface}}{(1-{v}_{surface}^{2})}\tan (\alpha ){({s}_{0}-s)}^{2}$$
FSneddon = force
Esurface = Young's modulus of the surface
vsurface = Surface Poisson's ratio
α = Tip half cone opening angle
s0 = point of zero indentation
s0 - s = indentation depth
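As an illustration only (this is not the authors' analysis pipeline), the following minimal Python sketch shows how a force-indentation curve can be fitted to the Sneddon relation above with a non-linear least-squares routine; the half-cone angle, Poisson's ratio and the synthetic data are assumed values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed probe and sample parameters (illustrative, not taken from the paper)
ALPHA = np.deg2rad(20.0)   # tip half-cone opening angle, alpha
NU = 0.5                   # Poisson's ratio of the surface (incompressibility assumption)

def sneddon(delta, E_surface):
    """Sneddon force for a conical indenter: F = (2/pi) * E/(1 - nu^2) * tan(alpha) * delta^2."""
    return (2.0 / np.pi) * (E_surface / (1.0 - NU**2)) * np.tan(ALPHA) * delta**2

# Synthetic force-indentation data standing in for an AFM approach curve
delta = np.linspace(0.0, 200e-9, 100)                                # indentation depth s0 - s (m)
force = sneddon(delta, 14e6) + 1e-9 * np.random.randn(delta.size)    # "measured" force (N)

E_fit, _ = curve_fit(sneddon, delta, force, p0=[1e6])
print(f"Fitted Young's modulus: {E_fit[0] / 1e6:.2f} MPa")
```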
NMR skin HA. (a) Scanning electron microscopy (SEM) of NMR HA extracted from NMR skin fibroblast media. (b) Supercoils linked together with a large fibre. (c) High resolution SEM showing the high density of folds. (d) AFM of HA extracted from NMR skin fibroblast media. (e) AFM of NMR HA extracted from skin tissue, again exhibiting the highly folded structures seen for HA extracted from NMR fibroblasts. (f) AFM of individual HA chains extracted from NMR skin tissue.
Figure 3a shows an AFM topographic image of an individual NMR HA supercoil, with Fig. 3b presenting the individual positions within the supercoil where indentation measurements were taken (green squares). A typical force-separation curve for NMR HA extracted from skin is presented in Fig. 3c. Of particular note are the peaks in the red retract curve that represent the unfolding of polymer chains. This is not a uniform sawtooth pattern; rather, the widths of the peaks are irregular, likely reflecting the different sized folded domains. An unusual feature of this pattern is the peaks in the indentation profile, shown in blue, an interpretation of which would be an unfolding of the structure under compression. From these indentation curves within skin NMR HA supercoils, a Young's modulus of 13.77 ± 2.44 MPa was obtained (Fig. 3d).
As a comparison, HA was extracted and purified from human skin tissue. Unlike the NMR skin HA, there were no supercoiled structures present in human skin HA, but instead large, flat networks were observed, as shown with SEM (Fig. 4a) and AFM (Fig. 4b). The indentation profiles were typical of a soft elastic material (Fig. 4c) with no unfolding peaks in either approach or retract curve. By addressing the AFM tip across the network, the Young's modulus was determined to be 820.00 MPa ± 14.00 MPa, significantly stiffer than the NMR HA in supercoil form.
HA was also extracted from NMR brain and lung tissue. HA extracted from whole brain formed large (at least several microns in diameter) supercoils, but this time the folds were less dense and resembled the macroscopic appearance of the gyri and sulci of the human brain (Fig. 5a,b). HA extracted from NMR lung also formed supercoils, but with a characteristic folded structure that can only be described as resembling a snowman (Fig. 5c). Higher resolution images revealed these unusual morphologies to be composed of a network of woven chains at high density (Fig. 5d). Both brain and lung HA structures were very tightly folded with few visible gaps. This was in stark contrast to HA extracted from human skin, whereby supercoils were not formed and random networks of chains were the dominant structures. Commercially available HA (high molecular weight, produced by microbial fermentation of Streptococcus pyogenes, R&D systems) formed branched networks.
Indentation profiles with peaks in the indentation curve were only seen for the skin supercoils as in Fig. 3c. Upon submerging the lung HA supercoils in water (room temperature) a transition occurs in their mechanical properties. After 10 minutes submerged in water the supercoil retains its elastic characteristics (Fig. 5e), but then after 20 minutes the force-separation curve takes the characteristic form of a viscoelastic material (Fig. 5f). This was the case for HA from all NMR tissues/cells examined. In order to accurately measure the Young's modulus of supercoils before and after hydration a spherical probe was used. The transition between viscoelastic and elastic, that is from Fig. 5f to Fig. 5e, can be obtained by repeated indentation into the same position. This repeated loading (sometimes up to 150 indents at a force of up to 50 nN) appears to squeeze out the water. The transition is sudden with the force distance curve changing form between two consecutive indentations. The purely elastic state can also be reached by dehydrating the HA.
For HA extracted from all NMR tissues, that is brain, lung and skin, robust macroscopic gels would spontaneously form by leaving 30 µl of HA solution (HA extract in water) on the mica surface for 15 minutes to dry, leaving a circular gel up to 1 cm in diameter (Fig. 6a). HA extracted from the culture medium of immortalised NMR skin fibroblasts showed the same gelling behaviour, but no such gelling behaviour was observed for HA from mouse or human skin. As the gel dehydrates the constituent microgels are clearly visible (Fig. 6b). The gels have well defined borders (Fig. 6c,d) and the constituent supercoils are recovered upon subsequent dehydration. Indentation curves for the gel particles (Fig. 6e) were fitted with the Hertz model for a sphere on a flat surface, which assumes that the indentation depth is much smaller than the diameter of the probe. Using a spherical probe and fitting the indentation curve to the Hertz model (Eq. 2), the Young's modulus of the microgels was determined.
$${F}_{Hertz}=\frac{3}{4}\frac{{E}_{surface}}{(1-{v}_{surface}^{2})}\sqrt{{R}_{tip}}{({s}_{0}-s)}^{3/2}$$
FHertz = force
Rtip = ball radius
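Analogously, a brief sketch of the spherical-probe fit used for the microgels is given below; it assumes the 5 µm-diameter borosilicate sphere described in the Methods and synthetic data, and it uses the 3/4 prefactor exactly as quoted in Eq. 2 (note that the Hertz relation is more commonly written with a 4/3 prefactor).

```python
import numpy as np
from scipy.optimize import curve_fit

NU = 0.5          # assumed Poisson's ratio of the gel
R_TIP = 2.5e-6    # radius (m) of the 5 um-diameter borosilicate sphere used in the Methods

def hertz(delta, E_surface):
    """Spherical-indenter force with the prefactor quoted in Eq. 2."""
    return 0.75 * (E_surface / (1.0 - NU**2)) * np.sqrt(R_TIP) * delta**1.5

# Synthetic force-indentation data standing in for a measurement on a microgel particle
delta = np.linspace(0.0, 1.0e-6, 200)                              # indentation depth s0 - s (m)
force = hertz(delta, 8e3) + 2e-11 * np.random.randn(delta.size)    # "measured" force (N)

E_fit, _ = curve_fit(hertz, delta, force, p0=[1e3])
print(f"Fitted gel modulus: {E_fit[0] / 1e3:.2f} kPa")
```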
(a) AFM topography image of a supercoiled NMR HA structure. These are highly folded structures up to a micron in height and with some folds/twists half a micron in diameter. (b) The same image with a force map superimposed. The green dots indicate where individual indentation measurements are taken. Multiple measurements can be taken at each point and precise control of the cantilever is achieved using a closed loop scanner. (c) Force-separation curve (pyramidal tip) for an NMR skin HA supercoil measured in air. The retraction curve (red) of the AFM tip exhibits several peaks over a length of microns suggesting the unfolding of large HA chains. Unusually for a force-separation curve, the approach curve (blue) also exhibits a number of force peaks. We attribute these to the force required to push the supercoiled chains out of the way as the tip penetrates through the structure. (d) Histogram of Young's modulus values for indentations into NMR skin HA supercoils, taken over 5 coils with 112 curves fitted with the Sneddon model, giving a mean Young's Modulus value of 13.67 MPa ± 2.44 MPa.
These were 10.21 ± 1.68 kPa, 10.44 ± 1.31 kPa, and 7.86 ± 0.84 kPa for NMR skin, brain and lung tissue respectively. A typical modulus distribution, taken from close to 1000 indentation curves, is shown in Fig. 6f. Elastic supercoils (after dehydration) are far stiffer than the microgels; for example, NMR brain supercoils have a Young's modulus of 30.31 ± 1.54 MPa (n = 1499).
Human skin HA. (a) SEM reveals large planar networks, with well-defined holes up to a couple of microns in diameter. (b) AFM revealing the same form of HA, consistent with the SEM images. (c) Force - separation curve showing indentation into human skin HA network, exhibiting elastic behaviour. (d) Histogram of Young's modulus values for indentations into human skin HA, taken over large networks. Curves fitted with the Sneddon model, giving a mean Young's Modulus value of 820.00 MPa ± 14.00 MPa for 3000 curves.
NMR HA extracted from brain and lung. (a) AFM Topographic image of folded NMR HA extracted from whole brain (b) High resolution topographic AFM image exhibiting characteristic "brain-like" folds of HA extracted from NMR whole brain. (c) HA extracted from NMR lung forms characteristic "snowman" structures. (d) Higher resolution scan of HA from NMR lung where the densely folded chains can be resolved, forming an impenetrable network. (e) Indentation curve for NMR lung HA, having been indented multiple times (more than 100) in the same place. The water has been squeezed out and the supercoil now has the characteristics of a purely elastic material. (f) The indentation curve for lung NMR HA after having been hydrated for an hour. It has the characteristic form for a hydrogel.
(a) Macroscopic gel formed from NMR HA extracted from skin tissue. (b) Edge of macroscopic gel formed from brain NMR HA. (c) NMR HA gel particles (derived from brain tissue). (d) Edge of macroscopic gel formed from NMR lung HA. (e) Indentation curve of gel made from NMR skin HA. Spherical probes were used to make these measurements and the Hertz model for a sphere indenting into a surface was fitted to the indentation curve to extract the Young's modulus. (f) Histogram of Young's modulus values for indentations into NMR lung HA gel particles using spherical probes. Mean Young's Modulus value of 7.86 kPa ± 0.83 kPa for 999 curves.
Mouse HA (derived from the medium of skin fibroblasts) does not form supercoils, although less folded spherical particles do exist (Fig. 7a). The most common morphology observed is dense, large scale networks in which individual fibres cannot be resolved (Fig. 7b,c). Individual fibres are more abundant for the mouse specimens (Fig. 7d,e).
HA extracted from the medium of mouse skin fibroblasts. (a) SEM showing typical spherical particle without a supercoiled structure. (b) SEM showing mouse HA fibres which are more likely to form than for NMR samples. (c,d) AFM topographic images of mouse HA random networks. (e) High resolution SEM of mouse HA fibre.
Although supercoils are the dominant morphology for NMR HA (Fig. 8a,b), fibres occasionally form. The rare fibres that can be found (Fig. 8c) are stiffer than the supercoils when probed by indentation, although mechanical heterogeneity along the fibre is clear from the distribution of moduli values. Mouse HA fibres appear to be softer, but absolute modulus values obtained using conical tips can be unreliable due to uncertainties in the contact area between tip and sample. What is clear is that NMR fibres are stiffer than the supercoils.
(a) AFM cantilever addressing NMR HA supercoils. (b) Large field SEM showing abundance/surface coverage of NMR HA supercoils. (c) Histogram of Young's modulus values taken from an NMR HA fibre with a mean value of 0.28 GPa ± 0.0165 GPa for 100 curves. (d) SEM of an NMR HA fibre. (e) Histogram of Young's modulus values taken from a mouse HA fibre with a mean value of 78.025 MPa ± 7.533 MPa for 100 curves.
At higher concentrations adsorbed NMR HA covers the surface and forms a thin film, up to several microns in thickness. Occasionally, the film folds up on itself, presenting the opportunity to observe its internal structure (Fig. 9a) and to determine its thickness of up to 1 micron. At higher resolution a lamellar structure, that is regularly spaced sheets, in this case stacked in pairs, is resolvable (Fig. 9b). The structure within each stack comprises repeatedly folded chains (Fig. 9c) of similar dimensions to those observed in the supercoiled structures. However, the film does not form a gel upon hydration, most likely due to its semi-crystalline nature whereby adjacent polymer chains are forced to interact with each other rather than water.
Naked mole-rat hyaluronan thin film. (a) AFM topographic image showing a thin film folded up on itself. The thickness of the film is revealed to be of the order of a micron. (b) AFM topographic image presenting the lamellar structure of the thin film. Ledges composed of ordered polymer chains are stacked in pairs, each with similar inter-planar spacings and morphology. (c) Chains folded along the lamellae of an NMR HA thin film.
The inability of commercially available HA to spontaneously form robust gels is extensively reported in the literature, with chemical crosslinking or incorporation into a composite being a necessity for practical use as a biomaterial26,30,31. By contrast, the nature of NMR HA folding is unusual. It forms a conformation similar to a ball of rubber bands, and for the case of HA extracted from skin this leads to a unique indentation profile. Moreover, although the exact concentration could not be determined for such small amounts of biopolymer, the ability of NMR HA to form gels without the need for chemical crosslinking is interesting. A large body of work has been performed on the indentation and unfolding of soft biological matter including proteins, fibres such as collagen, cell membranes, lipid bilayers, and carbohydrates. However, it is rare to find a material that has a sawtooth in an indentation curve (as opposed to a retraction curve).
One example where this does occur is in amyloid fibrils found in natural adhesives. Such an ability to deform elastically in multiple directions would be advantageous for a material that is subjected to shear forces such as those experienced by the skin whilst an animal is squeezing up and round littermates in tunnels. Given the presence of relatively high quantities of HA in NMR skin (Fig. 1b)18, it is plausible that these supercoils contribute to its elastic nature. These supercoils with their characteristic cauliflower-like structure are absent from HA extracted from human and mouse skin, as well as being absent from commercially available, bacterially produced HA. In terms of HA structure-function relationships it is well established that the molecular weight of HA can relate to function in the tissue of origin32.
It is clear that the micron scale morphology of NMR HA is related to its tissue of origin. How these different morphologies arise can best be understood in terms of their nanoscale hierarchical architecture. We can consider the basic unit to be a folded polymer chain, and it is the degree of folding that determines the final assembly morphology. In the case of NMR lung HA, the chains are tightly folded (Fig. 5d), leading to a dense micron scale morphology without pores (Fig. 5c). Contrast this with the less tightly packed polymer chains (Fig. 5b) of brain NMR HA, resulting in a more three dimensional and pore containing micron scale assembly (Fig. 5a). It could be hypothesised that it is these densely folded structures, with their scarcity of cell sized pores, that might form an impenetrable barrier to the invasion required for cancer metastasis and tumour growth. However, it is important not to forget the role that hyaluronan plays in cell signalling via its interaction with the cell membrane bound CD44 receptor, and that cancer cells will cleave this receptor utilising matrix metalloproteinases. The NMR HA supercoiled structure may prevent such cleavage. It is also plausible that it is the enhanced water retaining properties of NMR HA that keep the ECM "young", much in the same manner as hyaluronan keeps skin hydrated. It has recently been shown that an aged extracellular matrix is more permissive to cancer cell invasion than ECM from younger individuals33. Thus one could hypothesise that the NMR extracellular matrix would show fewer signs of ageing than that of other, shorter lived organisms and is therefore less permissive to invasive cancer.
For the case of the NMR, it is likely that the ultrahigh molecular weight contributes via supercoiling to the elastic properties of the skin. In addition, the increased surface area of exposed HA will improve water retaining ability, another essential contributor to the anti-ageing properties of NMR skin. This ability to trap water is evidenced by the formation of microgels with the cross linking usually required for robust gels replaced by self-interactions through entangled polymer; importantly, these microgels do not occur for HA extracted from mouse or human skin. It is these unique water retention properties of HA that most likely contribute to the unusual properties of NMR skin rather than the low stiffness of NMR HA compared to human skin HA. This is because HA is not the main structural component of skin with collagen and elastin being the most likely to contribute to tensile/compressive properties. Another important feature of the coiled NMR HA is the density of the polymer network. There are few visible gaps due to the folded packing. This is particularly striking when compared to the morphology of human and mouse HA networks. Mimicking the supercoiled structure of NMR HA in either synthetic or engineered natural polymers would be a strategy to explore in recreating the elasticity and anti-ageing effects in tissue engineered skin as well as biomaterials for preventing cell invasion.
All animal experiments were conducted in accordance with the United Kingdom Animal (Scientific Procedures) Act 1986 Amendment Regulations 2012 under a Project License (70/7705) granted to E. St. J. S. by the Home Office; the University of Cambridge Animal Welfare Ethical Review Body also approved procedures. Young adult NMRs (both male and female) were used in this study. Animals were maintained in a custom-made caging system with conventional mouse/rat cages connected by different lengths of tunnel. Bedding and nesting material were provided along with a running wheel. The room was warmed to 28 °C, with a heat cable to provide extra warmth running under 2–3 cages, and red lighting (08:00–16:00) was used. NMRs were killed by exposure to a rising concentration of CO2 followed by decapitation. C57Bl6J mice were housed in a temperature controlled (21 °C) room on a 12-hour light/dark cycle, with access to food and water ad libitum in groups of up to five. Female mice not older than 12 weeks were used for tissue isolation.
Immunohistochemistry of NMR and mouse skin samples
Skin samples were collected from the footpad of 6–8 week old female mice (n = 2) and NMR (n = 2), post fixed in 4% paraformaldehyde (PFA) for 1 hr and incubated overnight at 4 °C in 30% sucrose (w/v) for cryo-protection. The skin samples were embedded (epidermal side up) in Shandon M-1 Embedding Matrix (Thermo Fisher Scientific), snap frozen in 2-methylbutane (Honeywell International) on dry ice and stored at −80 °C until further processing. On the day of immunostaining, the embedded skin samples were cut into 12 μm sections using a Leica Cryostat (CM3000; Nussloch, Germany) and mounted on Superfrost Plus slides (Thermo Fisher Scientific). For immunohistochemistry, randomly picked slides were washed two times with PBS-Tween, then blocked with an antibody diluent solution, made up of 0.2% (v/v) Triton X-100 and 2% (v/v) bovine serum albumin in PBS, for two hours at room temperature (~22 °C). Biotinylated hyaluronan binding protein (b-HABP) antibody (amsbio, AMS.HKD-BC41, 1:200 in antibody diluent) was added to the slides and incubated overnight at 4 °C. The next day, slides were washed three times using PBS-Tween and incubated for two hours at room temperature with Alexa Fluor 488 Streptavidin (Invitrogen, S11223, 1:1000 in PBS). After the binding of biotin to streptavidin was complete, slides were washed three times in PBS-Tween, and incubated with the nuclear dye, DAPI (Sigma, D9452, 1:1000 in PBS), for 10 min. Slides were further washed once with PBS-Tween, mounted and imaged with an Olympus BX51 microscope (Tokyo, Japan) and QImaging camera (Surrey, Canada). All slides were imaged with the same exposure levels at each wavelength (488 nm for b-HABP and 350 nm for DAPI), and the same contrast enhancements were made to all slides in ImageJ. Slides that were not incubated with streptavidin conjugate (negative controls) did not show fluorescence.
Generation of immortalized cell lines
After the animal had been killed, skin was taken from either the underarm or underbelly area, cleared of any fat or muscle tissue, generously sprayed with 70% ethanol and finely minced with sterile scalpels. Minced skin was then mixed with 5 ml of NMR Cell Isolation Medium (high glucose DMEM (Gibco #11965092) supplemented with 100 units ml−1 Penicillin and 100 μg ml−1 Streptomycin (Gibco # 15140122)) containing 500 µl of Cell Dissociation Enzyme Mix (10 mg ml−1 Collagenase (Roche # 11088793001), 1000 Units ml−1 Hyaluronidase (Sigma # H3506) in DMEM high glucose (Gibco # 11965092)) and incubated at 37 °C for 3–5 hours. Skin was briefly vortexed every 30 minutes to aid cell dissociation and manually inspected for cell dissociation. After complete dissociation, cells were pelleted by centrifuging at 500 g for 5 minutes and resuspended in NMR Cell Culture Medium (DMEM high glucose (Gibco # 11965092) supplemented with 15% fetal bovine serum (Gibco), non-essential amino acids (Gibco # 11140050), 1 mM sodium pyruvate (Gibco # 11360039), 100 units ml−1 Penicillin, 100 µg ml−1 Streptomycin (Gibco # 15140122) and 100 µg ml–1 Primocin (InvivoGen # ant-pm-2)), at 32 °C, 5% CO2, 3% O2, except for mouse cells which were incubated at 37 °C, 5% CO2. To immortalize NMR skin fibroblasts, a lentiviral plasmid carrying SV40LT (SV40LT-hRASg12v was used for mouse cells) was packaged using HEK293FTs as packaging cells. The supernatant of the HEK293FT packaging cells, containing SV40 LT-packaged viral particles, was collected. When primary NMR skin cells reached 40% confluency, the supernatant containing SV40-packaged viral particles was added on to the primary NMR skin cells. 48 hours after adding the supernatant, cells were treated with 2 µg/ml of puromycin to kill off uninfected cells. Stable immortalized cells were maintained in DMEM supplemented with 15% fetal bovine serum, non-essential amino acids, sodium pyruvate, 100 units ml−1 penicillin, and 100 µg ml−1 streptomycin.
HA extraction from NMR tissues
Following decapitation, tissues were removed, weighed and stored at −80 °C until HA extraction was conducted. Tissues were digested at 50 °C overnight in the digestion buffer (10 mM Tris-Cl, 25 mM EDTA, 100 mM NaCl, 0.5% SDS and 0.1 mg/ml of Proteinase K). The next day, samples were centrifuged at 18,000 × g for 10 minutes to remove residual tissue particles and the supernatant was transferred to a new tube. Four volumes of ethanol were added to the supernatant, followed by incubation overnight at −20 °C. The following day, samples were cold-centrifuged at 18,000 × g for 10 minutes. Pellets were then washed in four volumes of 75% ethanol by centrifugation. The supernatant was discarded and pellets were incubated at room temperature for 20 minutes to remove any residual ethanol. The pellets were then resuspended in 200 µl of autoclaved distilled water. 500 units of benzonase endonuclease were added and the samples then incubated overnight at 37 °C to remove nucleic acids. The next day, samples were precipitated overnight with one volume of 100% ethanol. The following day, after cold centrifugation at 18,000 × g for 10 minutes, pellets were resuspended in 400 µl of autoclaved distilled water.
HA extraction from conditioned medium
Immortalized skin fibroblasts were seeded and maintained for 7 days without changing the medium. After 7 days, the media from each flask was collected and centrifuged at 4,000 × g for 10 minutes to remove residual cells. After centrifugation, 2 ml of conditioned media were incubated overnight with 500 µg of proteinase K at 50 °C to remove proteins. The following day, 2 volumes of 100% ethanol were added to the samples for precipitation and incubated overnight at −20 °C. The next day, samples were cold-centrifuged at 18,000 × g for 10 minutes. After centrifugation, the supernatant was discarded and pellets were incubated at room temperature for 20 minutes to allow any residual ethanol to evaporate. After the incubation, the pellet was resuspended in 400 µl of autoclaved distilled water.
HA extraction from human skin
Ethical approval to obtain human skin was gained from the Research Ethics Committee, North East Newcastle 1, Reference Number: REC 12/NE/0395. Following informed consent, skin samples were obtained from patients undergoing orthopaedic surgery. All experiments were performed in accordance with the United Kingdom Human Tissue Act (2004) regulations. Human skin was chopped and a weighed amount of skin tissue was digested overnight at 50 °C in the digestion buffer (10 mM Tris-Cl, 25 mM EDTA, 100 mM NaCl, 0.5% SDS and 0.1 mg ml−1 proteinase K). After centrifugation, the clear supernatant obtained was mixed with four volumes of prechilled ethanol and incubated overnight at −20 °C. The precipitate was centrifuged, washed with ethanol and air dried. The pellet was resuspended in 100 mM ammonium acetate solution and incubated with benzonase endonuclease overnight at 37 °C to remove nucleic acids. The solution was mixed with four volumes of ethanol and incubated overnight at −20 °C. The pellet obtained after centrifugation was again washed, air dried and resuspended in 100 mM ammonium acetate for further use.
HA adsorption onto mica
30 µl of purified NMR HA in water was deposited on freshly cleaved muscovite mica and allowed to dry in air for 45 minutes. This is the method reported by Cowman et al. for the AFM imaging of HA24. They noted that HA adsorbs weakly to mica and cannot be examined under liquid conditions due to HA's tendency to remain in solution. For examining hydrated HA, the sample was allowed to dry for 15 minutes so that visible water had evaporated but the sample was still wet.
Atomic force microscopy
All images were obtained on an Agilent 5500 microscope equipped with closed loop scanners. Contact mode imaging was employed for topographic imaging using silicon tips with a nominal force constant of 0.02–0.77 N/m. Forces were minimized during scanning at a level below 1 nN. Scan rates were between 0.5–1 kHz and all images were recorded at 512 pixel resolution. Measurements were carried out in ultrapure water at room temperature (~20 °C). Processing and analysis of images was performed using version 6.3.3 of SPIP software (Image Metrology, Lyngby, Denmark).
Atomic force microscopy based nanoindentation
Two different types of tip indentation geometry were used. The first were conical tips with a radius of curvature less than 15 nm. These were used to look at unfolding events within a supercoil. Accurate determination of the spring constant was obtained using the equipartition theorem as proposed by Hutter and Bechhoefer34. Inaccuracies in terms of knowing the contact area between tip and sample in the Sneddon model for conical tips can be compensated for by using a large spherical probe; therefore, spherical probes were used to determine Young's modulus values. Borosilicate spherical probes 5 μm in diameter (NovaScan Technologies) were used to indent supercoils, both wet and dry. Spring constants were determined using the thermal K method.
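For reference, the Hutter–Bechhoefer calibration mentioned above rests on the equipartition theorem: the thermal energy of the fundamental cantilever mode is equated with its mean-square deflection, so that (neglecting higher-mode correction factors) the spring constant follows directly from a thermal noise measurement:

$$\frac{1}{2}k\langle {x}^{2}\rangle =\frac{1}{2}{k}_{B}T\Rightarrow k=\frac{{k}_{B}T}{\langle {x}^{2}\rangle }$$

where kB is Boltzmann's constant, T the absolute temperature and ⟨x2⟩ the mean-square thermal deflection of the free cantilever.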
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Girish, K. S. & Kemparaju, K. The magic glue hyaluronan and its eraser hyaluronidase: A biological overview. Life Sciences. 80, 1921–1943 (2007).
Hargittai, I. & Hargittai, M. Molecular structure of hyaluronan: an introduction. Struct. Chem. 19, 697–717 (2008).
Rankin, K. S. & Frankel, D. Hyaluronan in cancer-from the naked mole rat to nanoparticle therapy. Soft Matter 12, 3841–3848 (2016).
Cheng, X. B., Sato, N., Kohi, S. & Yamaguchi, K. Prognostic impact of hyaluronan and its regulators in pancreatic ductal adenocarcinoma. PLoS One 8, 1–7 (2013).
Auvinen, P. et al. Increased hyaluronan content and stromal cell CD44 associate with HER2 positivity and poor prognosis in human breast cancer. Int. J. Cancer 132, 531–539 (2013).
Sato, N., Kohi, S., Hirata, K. & Goggins, M. Role of hyaluronan in pancreatic cancer biology and therapy: Once again in the spotlight. Cancer Sci. 107, 569–575 (2016).
Tammi, R. H. et al. Hyaluronan in human tumours: Pathobiological and prognostic messages from cell-associated and stromal hyaluronan. Semin. Cancer Biol. 18, 288–95 (2008).
Ricciardelli, C. et al. Formation of hyaluronan-and versican-rich pericellular matrix by prostate cancer cells promotes cell motility. J. Biol. Chem. 282, 10814–10825 (2007).
Qhattal, H. S. S. & Liu, X. Characterization of CD44-mediated cancer cell uptake and intracellular distribution of hyaluronan-grafted liposomes. Mol. Pharm. 8, 1233–1246 (2011).
Clark, R. A., Alon, R. & Springer, T. A. CD44 and hyaluronan-dependent rolling interactions of lymphocytes on tonsillar stroma. J. Cell Biol. 134, 1075–1087 (1996).
Richter, U., Wicklein, D., Geleff, S. & Schumacher, U. The interaction between CD44 on tumour cells and hyaluronan under physiologic flow conditions: implications for metastasis formation. Histochem. Cell Biol. 137, 687–695 (2012).
Senbanjo, L. T. & Chellaiah, M. A. CD44: A multifunctional cell surface adhesion receptor is a regulator of progression and metastasis of cancer cells. Front. Cell Dev. Biol. 5, (2017).
Cyphert, J. M., Trempus, C. S. & Garantziotis, S. Size matters: molecular weight specificity of hyaluronan effects in cell biology. Int. J. Cell Biol. 2015, https://doi.org/10.1155/2015/563818 (2015).
Fraser, J. R. E., Laurent, T. C. & Laurent, U. B. G. Hyaluronan: its nature, distribution, functions and turnover. J. Intern. Med. 242, 27–33 (1997).
Ruby, J. G., Smith, M. & Buffenstein, R. Naked mole-rat mortality rates defy Gompertzian laws by not increasing with age. Elife 7, 1–18 (2018).
Schuhmacher, L.-N., Husson, Z. & St. John Smith, E. The naked mole rat as an animal model in biomedical research: current perspectives. Open Access Anim. Physiol. 7, 137–148 (2015).
Buffenstein, R. Negligible senescence in the longest living rodent, the naked mole-rat: insights from a successfully aging species. J. Comp. Physiol. B Biochem. Syst. Environ. Physiol. 178, 439–445 (2008).
Tian, X. et al. High-molecular-mass hyaluronan mediates the cancer resistance of the naked mole rat. Nature 499, 346–349 (2013).
Cowman, M. K. et al. Extended, relaxed, and condensed conformations of hyaluronan observed by atomic force microscopy. Biophys. J. 88, 590–602 (2005).
Liu, C., Wang, M., An, J., Thormann, E. & Dėdinaitė, A. Hyaluronan and phospholipids in boundary lubrication. Soft Matter 8, 10241–10244 (2012).
Murai, T., Hokonohara, H., Takagi, A. & Kawai, T. Atomic force microscopy imaging of supramolecular organisation of hyaluronan and its receptor CD44. IEEE Trans. Nanobioscience 8, 294–9 (2009).
Scott, J. E., Cummings, C., Brass, A. & Chen, Y. Secondary and tertiary structures of hyaluronan in aqueous solution, investigated by rotary shadowing-electron microscopy and compute simulation. Biochem. J. 274, 699–705 (1991).
Spagnoli, C. et al. Hyaluronan conformations on surfaces: effect of surface charge and hydrophobicity. Carbohydr. Res. 340, 929–41 (2005).
Cowman, M. K., Li, M. & Balazs, E. A. Tapping mode atomic force microscopy of hyaluronan: extended and intramolecularly interacting chains. Biophys. J. 75, 2030–2037 (1998).
Giannotti, M. I., Rinaudo, M. & Vancso, G. J. Force spectroscopy of hyaluronan by atomic force microscopy: from hydrogen-bonded networks toward single-chain behaviour. Biomacromolecules 8, 2648–2652 (2007).
Luan, T., Wu, L., Zhang, H. & Wang, Y. A study on the nature of intermolecular links in the crytropic weak gels of hyaluronan. Carbohydr. Polym. 87, 2076–2085 (2012).
Papakonstantinou, E., Roth, M. & Karakiulakis, G. Hyaluronic acid: A key molecule in skin ageing. Dermatoendocrinol. 253–258, https://doi.org/10.4161/derm.21923 (2012).
Ripellino, J. A., Klinger, M. M., Margolis, R. U. & Margolis, R. K. The hyaluronic acid binding region as as specific probe for the localization of hyaluronic acid in tissue sections. J. Histochem. Cytochem. 33, 1060–1066 (1985).
Lee, S. E., Jun, J. E., Choi, E. H., Ahn, S. K. & Lee, S. H. Stimulation of epidermal calcium gradient loss increases the expression of hyaluronan and CD44 in mouse skin. Clin. Exp. Dermatol. 35, 650–657 (2010).
Crescenzi, V., Francescangeli, A., Renier, D. & Bellini, D. New cross-linked and sulfated derivatives of partially deacetylated hyaluronan: synthesis and preliminary characterization. Biopolymers 64, 86–94 (2002).
Yang, Y. L. & Kaufman, L. J. Rheology and confocal reflectance microscopy as probes of mechanical properties and structure during collagen and collagen/hyaluronan self-assembly. Biophys. J. 96, 1566–1585 (2009).
Cowman, M. K., Lee, H.-G., Schwertfeger, K. L., McCarthy, J. B. & Turley, E. A. The content and size of hyaluronan in biological fluids and tissues. Front. Immunol. 6, 261, https://doi.org/10.3389/fimmu.2015.00261 (2015).
Kaur, A. et al. Remodelling of the collagen matrix in ageing skin promotes melanoma metastasis and affects immune cell motility. Cancer Discovery. https://doi.org/10.1158/2159-8290.CD-18-0193 (2019).
Hutter, J. L. & Bechhoefer, J. Calibration of atomic force microscopy tips. Rev. Sci. Instrum. 64, 1868–1873 (1993).
This work was supported by a Cancer Research UK/RCUK Multidisciplinary Project Award (C56829/A22053) to K.R., E.S. and D.F. and a Cancer Research UK Career Establishment Award (C47525/A17348) to W.T.K. F.H. and S.C. were supported by Gates Cambridge Trust scholarships.
Department of Pharmacology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1PD, UK
Yavuz Kulaberoglu, Fazal Hadi, Sampurna Chakrabarti, Walid T. Khaled & Ewan St. John Smith
School of Engineering, Newcastle University, Merz Court, Newcastle upon Tyne, NE1 7RU, UK
Bharat Bhushan & Daniel Frankel
Northern Institute for Cancer Research, Medical School, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
Kenneth S. Rankin
E.S., K.R. and D.F. conceived the study, analysed data and wrote the manuscript. Y.K., B.B. and D.F. carried out the experiments extracting and characterising HA. S.C. carried out the immunohistochemistry experiments. W.K. and F.H. were responsible for cell line construction.
Correspondence to Kenneth S. Rankin or Ewan St. John Smith or Daniel Frankel.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
CommonCrawl
|
Computational storage: an efficient and scalable platform for big data and HPC applications
Mahdi Torabzadehkashi ORCID: orcid.org/0000-0002-7765-30641,2,
Siavash Rezaei1,2,
Ali HeydariGorji1,2,
Hosein Bobarshad2,
Vladimir Alves2 &
Nader Bagherzadeh1
Journal of Big Data volume 6, Article number: 100 (2019)
In the era of big data applications, the demand for more sophisticated data centers and high-performance data processing mechanisms is increasing drastically. Data are originally stored in storage systems. To process data, application servers need to fetch them from storage devices, which imposes the cost of moving data to the system. This cost has a direct relation with the distance of processing engines from the data. This is the key motivation for the emergence of distributed processing platforms such as Hadoop, which move process closer to data. Computational storage devices (CSDs) push the "move process to data" paradigm to its ultimate boundaries by deploying embedded processing engines inside storage devices to process data. In this paper, we introduce Catalina, an efficient and flexible computational storage platform, that provides a seamless environment to process data in-place. Catalina is the first CSD equipped with a dedicated application processor running a full-fledged operating system that provides filesystem-level data access for the applications. Thus, a vast spectrum of applications can be ported for running on Catalina CSDs. Due to these unique features, to the best of our knowledge, Catalina CSD is the only in-storage processing platform that can be seamlessly deployed in clusters to run distributed applications such as Hadoop MapReduce and HPC applications in-place without any modifications on the underlying distributed processing framework. For the proof of concept, we build a fully functional Catalina prototype and a CSD-equipped platform using 16 Catalina CSDs to run Intel HiBench Hadoop and HPC benchmarks to investigate the benefits of deploying Catalina CSDs in the distributed processing environments. The experimental results show up to 2.2× improvement in performance and 4.3× reduction in energy consumption, respectively, for running Hadoop MapReduce benchmarks. Additionally, thanks to the Neon SIMD engines, the performance and energy efficiency of DFT algorithms are improved up to 5.4× and 8.9×, respectively.
The modern human's life has been technologized, and nowadays, people rely on big data applications to receive services such as healthcare, entertainment, government services, and transportation in their day-to-day lives. As the usage of these services becomes universal, people generate more unprocessed data, which increases the demand for more sophisticated data centers and big data applications. At the current pace, 2.5 quintillion bytes of data are created each day [1]. Due to the huge volume of data created every second, new concepts such as the age of information (AoI) have been established to access the latest version of data [2]. According to the well-known 4V's characteristics of big data, big data applications need to deal with very large Volumes of data of Various types, whose Velocity is higher than that of conventional data, while the data Veracity is not confirmed [3].
To process data with the aforementioned characteristics, data should frequently move between storage systems and the memory units of the application servers. This high-cost data movement increases energy consumption and degrades the performance of big data applications. To overcome this issue, data processing has moved toward a new paradigm: "move process to data" rather than moving high volumes of data. Figure 1 compares the traditional "data move to process" concept versus the "move process near data" paradigm.
In the modern clusters, nodes are connected through a low-latency and power-hungry interconnect network such as InfiniBand [4] or Myrinet [5]. In such systems, moving data can be more expensive than processing data [6], and moving the data through an interconnect network is much more costly. In fact, accessing and transferring stored data from storage systems is a huge barrier toward reaching compelling performance and energy efficiency. To deal with this issue, some frameworks such as Hadoop [7] provide mechanisms to process data near where they reside. In other words, these frameworks push process closer to data to avoid massive data movements between storage systems and application servers.
In-storage processing (ISP) is the concept of pushing the process closer to data in its ultimate boundaries. This technology proposes utilizing embedded processing engines inside the storage devices to make them capable of running user applications in-place, so data do not need to leave the device to be processed. This technology has been around for a few years. However, the modern solid-state drives (SSDs) architecture, as well as the availability of powerful embedded processors, make it more appealing to run user applications in-place. SSDs deliver higher data throughput in comparison to hard disk drives (HDDs). Additionally, in contrast to the HDDs, the SSDs can handle multiple I/O commands at the same time. These differences have motivated researchers to modify applications based on the modern SSD architectures to improve the performance of the applications significantly [8].
"Data move to process" and "move process near data" comparison
The SSDs contain a considerable amount of processing horsepower for managing flash memory array and providing a high-speed interface to host machines. These processing capabilities can provide an environment to run user applications in-place. Based on the reasons mentioned above, this paper focuses on the modern SSD architecture, and in the rest of the paper, computational storage device (CSD) refers to an SSD capable of running user applications in-place.
In an efficient CSD architecture, the embedded ISP engine has access to the data stored in flash memory array through a low-power and high-speed link. Thus, the deployment of such CSDs in clusters can increase the overall performance and efficiency of big data and high-performance computing (HPC) applications. The CSDs and the Hadoop platform both emerged based on the concept of "move process closer to data," and they both can be deployed simultaneously in a cluster. This combination can minimize the data movement and increase efficiency for running big data applications in a Hadoop cluster. However, HPC clusters that are not developed based on the Hadoop platform can still utilize CSDs to improve performance and energy consumption.
Processing user applications inside storage units without sending data to the host processor seems appealing; however, proposing a flexible and efficient CSD architecture involves the following challenges.
ISP engine: SSDs come with multiple processing cores to run the conventional SSD controller routines. These cores can be utilized for running user applications as well. However, there are two major problems in utilizing the existing SSD cores for in-storage processing. First, these cores are usually busy doing normal SSD operations and using them for running user applications can negatively affect the I/O performance of the drive. Second, these processing engines are usually real-time cores such as ARM Cortex-R series, which limits the category of the applications that can efficiently run on these cores; also, user applications need major modifications to be able to run on these cores.
Host-CSD communication: In a CSD architecture, there should be a mechanism for communication between the host and the CSD, so that the host can submit ISP commands and receive the results. Conventional SSDs have one physical link to the host, which is designed for transferring data. There are many protocols for sending data through this link, such as SATA [9], SAS [10], and NVMe over PCIe [11], but none of them are designed for sending ISP commands and results. Thus, it is the responsibility of the CSD designer to provide an ISP communication protocol between host and CSD (a minimal, hypothetical host-side sketch of one possible command path is given after this list).
Block-level or filesystem-level: An embedded processing engine inside a CSD has access to the raw data stored on the flash, but the filesystem metadata is in control of the host. Therefore, data access inside the storage unit is limited to the block-level data, and any application running in-place should not expect to be able to access the filesystem-level data. This limits the type of programming models available for developing ISP-enabled applications, and also the reuse of other applications. Hence, the CSD designer should provide a mechanism to access the filesystem metadata inside the ISP engine so that applications that are running in-place can open files, process data, and finally create output files to write back the results.
Host-CSD data synchronization: In a CSD-equipped host, both host and ISP engine have access to the same flash memory array. In such a system, without a synchronization mechanism, these two machines may not be able to see each other's modifications and could result in data corruption.
CSD as an augmentable resource: Adding CSDs to a host machine should not limit the host from accessing the data and processing it. The processing horsepower of the CSDs should be an augmentable resource so that the host and CSDs could process data simultaneously. If processing an application in CSD interferes with the host's access to the data, this would dramatically decrease the utilization of the host and the efficiency of the whole system. A well-designed CSD architecture allows the host to access data stored in the flash memory at any time.
Adaptability: CSDs should provide a flexible environment for running different types of applications in-place. If the ISP engine of a CSD supports very limited programming languages or needs users to rewrite the application based on a specific programming model, this can significantly limit adoption of the CSD.
Distributed in-storage processing: A single CSD with limited processing horsepower may not be able to enhance an application's performance significantly, so in many cases, there should be multiple CSDs orchestrating together to deliver compelling performance improvement. For doing such distributed processing, CSD designers need to provide the required tools for implementing a distributed processing environment among multiple CSDs.
ISP for high-performance computing: Highly demanding applications such as HPC algorithms can potentially run inside CSDs. However, to serve this class of applications properly, CSDs should be able to boost their performance for some specific applications. In other words, the CSD architecture should be customizable to run some applications in an accelerated mode. Hence, CSD designers are required to provide ASIC- or FPGA-based accelerators to run highly demanding applications satisfactorily.
In this paper, we propose an efficient CSD platform, named Catalina, which addresses all the challenges mentioned above and is flexible enough to run HPC applications in-place or to play the role of an efficient DataNode in a Hadoop cluster. Catalina is equipped with a full-fledged Linux operating system (OS) running on a quad-core ARM 64-bit application processor which is dedicated to running user applications in-place. Catalina utilizes a Xilinx Zynq Ultrascale+ MPSoC [12] as the main processing engine and contains ASIC-based processing engines such as Neon single instruction multiple data (SIMD) engines which can be utilized to accelerate HPC applications that run in-place. Catalina is an augmentable resource which can be deployed in Hadoop clusters so that MapReduce applications can simultaneously run on both conventional nodes as well as Catalina CSDs [13]. This property makes it easier to adopt this technology in Hadoop clusters. For the proof of concept, we developed a fully functional Catalina CSD prototype and built a platform equipped with 16 Catalina CSDs.
The contributions of this paper can be summarized as follows:
Proposing the deployment of computational storage devices in clusters, and end-to-end integration of CSDs in clusters for running Hadoop MapReduce and HPC applications.
Describing the hardware and software architecture of Catalina as the first computational storage device which is seamlessly adoptable in Hadoop and MPI-based clusters without any modification in the cluster's software or hardware architecture.
Prototyping Catalina CSD and developing a platform equipped with 16 Catalina CSDs to practically evaluate the benefits of deployment of computational storage devices in the clusters.
Exploring the performance and energy consumption of CSD-equipped clusters using Intel HiBench Hadoop benchmark suite as well as 1D-, 2D-, and 3D-DFT operations on large datasets as the HPC benchmarks.
The rest of this paper is organized as follows: "Background" section reviews subjects covered in this paper, such as the modern SSD architecture, ISP technology, and Hadoop platform architecture. In "Method" section, we present the hardware and software architecture of Catalina CSD and describe the unique features that make it capable of seamlessly running a wide spectrum of applications in-place. This section also illustrates the developed Catalina prototype and how it can be deployed in clusters to run Hadoop MapReduce and HPC applications. "Results and discussion" section investigates the benefits of deploying the proposed solution for running Hadoop MapReduce and HPC applications on clusters. This section demonstrates how increasing the number of Catalina CSDs improves energy consumption and performance of the applications. "Related works" section reviews the related works in the field of in-storage processing, and "Conclusion" section concludes the paper with some final remarks.
The storage system, where data originally reside, plays a crucial role in the performance of applications. In a cluster, the data should be read from the storage system to the memory units of the application servers to be processed. As the size of data increases, the role of the storage system becomes more important, since the nodes need to talk to the storage units more frequently to fetch data and write back the results. Recently, cluster architects have considered solid-state drives (SSDs) over hard disk drives (HDDs) as the major storage units in modern clusters due to better power efficiency and higher data transfer rates [14].
SSDs use NAND flash memory as storage media. NAND memory units are faster and more power-efficient than the magnetic disks used in HDDs, so SSDs are considered more efficient than HDDs. However, this efficiency comes with complexity in the design and implementation of SSDs, where a multi-core controller is needed to manage the flash memory array. On the other hand, SSDs usually provide a high-speed interface to communicate with the host, such as NVMe over PCIe [11]. Implementing such interfaces requires embedding more processing horsepower inside SSDs. A modern SSD controller is composed of two main parts: 1—a front-end (FE) processing engine providing a high-speed host interface protocol such as NVMe/PCIe, and 2—a back-end (BE) processing engine which deals with flash management routines. These two engines talk to each other to accomplish the host's I/O commands.
A NAND flash memory chip is a package containing multiple dies. A die is the smallest unit of flash memory that can independently execute I/O commands and report status. Each die is composed of a few planes, and each plane contains multiple blocks. Erasing is performed at the block-level, so a flash block is the smallest unit that can be erased. Inside each block, there are several pages, which are the smallest units that can be programmed and written. The key point in this hierarchical architecture is the programmable unit versus the erase unit. The NAND flash memory can be programmed at the page-level, which is usually 4 to 16 kB, while the erase operation cannot be done on a smaller segment than a block, which is a few megabytes of memory. Also, each flash block can be erased only a finite number of times, and flash blocks wear as erase operations take place, so it is important to balance the number of erase operations among all the flash blocks of an SSD. The process of leveling the number of erase operations is called wear leveling. In addition, the logical addresses exposed to the host are different from the physical block addresses, so there are multiple tables for logical-to-physical address translation. The flash translation layer (FTL) is composed of all the routines needed to manage flash memory arrays, such as logical block mapping, wear leveling, and garbage collection. Overall, the BE processing subsystem of a modern SSD architecture handles the FTL, while the FE subsystem provides protocols to communicate with the host. The details of the FTL processes and the NVMe protocol are out of the scope of this paper. The high-level architecture of a modern SSD is demonstrated in Fig. 2.
Modern SSD architecture and data transfer bottlenecks
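To make the FTL bookkeeping described above concrete, the following minimal Python sketch (purely illustrative; not the firmware of Catalina or any real SSD) shows page-level logical-to-physical mapping with out-of-place writes and per-block erase counters used for wear leveling. The geometry constants are hypothetical.

```python
# Illustrative page-level FTL sketch (not real firmware): logical-to-physical
# mapping, out-of-place updates, and per-block erase counters for wear leveling.
PAGES_PER_BLOCK = 256   # hypothetical geometry
NUM_BLOCKS = 1024

class ToyFTL:
    def __init__(self):
        self.l2p = {}                            # logical page -> (block, page)
        self.erase_count = [0] * NUM_BLOCKS      # wear-leveling statistics
        self.next_free = (0, 0)                  # next free (block, page) to program

    def _advance(self):
        block, page = self.next_free
        if page + 1 < PAGES_PER_BLOCK:
            self.next_free = (block, page + 1)
        else:
            # Simplified wear leveling: continue in the least-erased block.
            self.next_free = (self.erase_count.index(min(self.erase_count)), 0)

    def write(self, logical_page, data):
        # NAND pages cannot be overwritten in place: a fresh physical page is
        # programmed and remapped; the stale page becomes garbage for later GC.
        # (The payload itself is omitted in this sketch.)
        self.l2p[logical_page] = self.next_free
        self._advance()

    def erase_block(self, block):
        self.erase_count[block] += 1             # blocks wear out as they are erased

ftl = ToyFTL()
ftl.write(42, b"payload")
print(ftl.l2p[42])   # physical (block, page) currently holding logical page 42
```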
The Webscale data center designers have been trying to develop storage architectures that favor high-capacity hosts, and this fact is highlighted at OpenCompute (OCP) by Microsoft Azure and Facebook that call for up to 64 SSDs attached to each host [15]. In Fig. 2, such a storage system is shown where 64 SSDs are attached to a host. For the sake of simplicity, only the details of one SSD are demonstrated. Modern SSDs usually contain 16 or more flash memory channels which can be utilized concurrently for flash array I/O operations. Considering 512 MBps bandwidth per channel, the internal bandwidth of an SSD with 16 flash memory channels is 8 GBps. This huge bandwidth decreases to about 1 GBps due to the complexity of the host interface software and hardware architecture. In other words, the accumulated internal bandwidth of all 64 SSDs is the number of SSDs multiplied by the number of channels per SSD and the 512 MBps bandwidth of each channel, i.e., 512 GBps, while the accumulated bandwidth of the SSDs' external interfaces is 64 × 1 GBps = 64 GBps. Moreover, in order to talk to the host, all SSDs have to be connected to a PCIe switch, so the bandwidth available to the host is limited to 32 GBps.
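Putting these numbers together, the aggregate bandwidths in such a 64-SSD storage system are:

$$\begin{aligned} BW_{internal}&= 64 \times 16 \times 512\, \text {MBps} = 512\, \text {GBps} \\ BW_{external}&= 64 \times 1\, \text {GBps} = 64\, \text {GBps} \\ BW_{host}&= 32\, \text {GBps}, \quad \frac{BW_{internal}}{BW_{host}} = 16 \end{aligned}$$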
Overall, there is a 16× gap between the accumulated internal bandwidth of all SSDs and the bandwidth available to the host. In other words, for reading 32 TB of data, the host needs 16 min, while the internal components of the SSDs can read the same amount of data in about 1 min. Additionally, in such storage systems, data need to continuously move through the complex hardware and software stack between hosts and storage units, which wastes a considerable amount of energy and dramatically decreases the energy efficiency of large data centers. Hence, storage architects need to develop techniques to decrease data movement, and ISP technology has been introduced to overcome the aforementioned challenges by moving process to data.
In a traditional CPU-centric scheme, data always move from storage devices to the CPU to be processed, and this mechanism, which is inherently limited by the von Neumann bottleneck, is the root cause of the challenges above, especially when many SSDs are connected to a host. ISP technology takes the opposite approach and pushes "move process to data" to its ultimate boundary: a processing engine inside the storage unit takes advantage of high-bandwidth and low-power internal data links and processes data in-place. In fact, "move process to data" is the concept that led to the emergence of both ISP and distributed processing platforms such as Hadoop. Later in this section, it will be described how the Hadoop platform and ISP technology can work together simultaneously in a cluster.
The ISP technology minimizes the data movements in a cluster and also increases the processing horsepower of the cluster by augmenting power-efficient processing engines to the whole system. This technology can potentially be applied to both HDDs and SSDs; however, the modern SSD architecture provides better tools for developing such technologies. The SSDs which can run user applications in-place are called computational storage devices (CSDs). These storage units are augmentable processing resources, which means they are not designed to replace the high-end processors of modern servers. Instead, they can collaborate with the host's CPU and augment their efficient processing horsepower to the system.
It is noteworthy that CSDs are fundamentally different from object-based storage systems such as Seagate Kinetic HDDs [16], which transfer data at the object-level instead of the block-level. The object-based storage units can receive objects (i.e., image files) from a host, store them, and at a later time, retrieve the objects back to the host using an object ID. Consequently, the host does not need to maintain metadata of the block addresses of the objects. On the other hand, CSDs can run user applications in-place without sending data to a host. There is a vast literature in this field that proposes different CSD architectures and investigates the benefits and challenges of deploying CSDs for running applications in-place. In "Related works" section, we will review the important works in this field.
In the past, when the cost of data movement was insignificant in comparison with the computational cost, the storage system could be centralized, with other hosts sending requests to it to fetch data blocks. With this mechanism and today's volume of data, a data-intensive application requires large amounts of data to be fetched from the storage system, and such huge data movements drastically increase energy consumption. With the emergence of big data, the storage system can no longer be centralized, and the traditional approaches fall short of satisfying super-scale applications' demands, which call for scalable processing platforms. To answer these demands, distributed processing platforms such as Hadoop are proposed to process data near where they reside [17].
Hadoop has emerged as the leading computing platform for big data analytics and is the backbone of hyperscale data centers [18], where hundreds to thousands of commodity servers are connected to provide service to clients. The Hadoop distributed processing platform consists of two main parts, namely the Hadoop filesystem (HDFS) and the MapReduce engine. HDFS is the underlying filesystem of the Hadoop platform, and it is responsible for partitioning the data into blocks and distributing the data blocks among the nodes. HDFS also generates a certain number of replicas of each block to make the system resilient against storage or node failures. It consists of a NameNode host, which takes care of filesystem metadata such as the locations of the data blocks and the status of the other nodes, and multiple DataNode hosts that store the data blocks.
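As a toy illustration of the partitioning and replication idea only (not the actual HDFS code; the block size and round-robin placement are simplifications), the sketch below splits a file into fixed-size blocks and assigns each block and its replicas to DataNodes:

```python
# Toy illustration of HDFS-style partitioning and replication
# (not the actual HDFS implementation; real HDFS placement is rack-aware).
BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB blocks
REPLICATION = 3                  # replicas per block

def place_blocks(file_size, datanodes):
    """Return a mapping: block index -> DataNodes holding a replica."""
    num_blocks = (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE
    return {
        b: [datanodes[(b + r) % len(datanodes)] for r in range(REPLICATION)]
        for b in range(num_blocks)
    }

# A 500 MB file becomes 4 blocks, each replicated on 3 of the 4 DataNodes.
print(place_blocks(500 * 1024 * 1024, ["dn1", "dn2", "dn3", "dn4"]))
```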
On top of HDFS, the MapReduce platform takes advantage of the partitioned data, runs the map and reduce functions, and orchestrates the cluster nodes to run distributed applications while data movements are minimized. The Apache MapReduce 2 (YARN) is one of the well-known MapReduce platforms [19]. YARN agents manage the procedure of running map tasks, performing shuffle and sort, and running the reduce functions to generate the output of the MapReduce application. The most important agents in the YARN framework are a global resource manager (RM), one node manager (NM) per processing node, and an application master (AM) per MapReduce application.
The RM has a list of all the resources available in the cluster and manages the high-level resource allocations to MapReduce applications. On the other hand, the NMs that run on the processing nodes manage the local hosts' resources. The RM regularly talks to the NMs to manage all the resources and poll the status of the nodes. For each MapReduce application, there is an AM to observe and manage the progress of the application. Typically, the RM runs on the same node as the HDFS NameNode (the head node), and the NMs run together with the HDFS DataNodes. Figure 3 shows a high-level overview of a MapReduce application on a Hadoop platform.
MapReduce process on Hadoop cluster
In the Hadoop environment, data should initially be imported to the HDFS. This process includes partitioning the input data to data blocks and storing them in the DataNodes. At this point, the data blocks are ready to be processed in a distributed fashion. Since map functions preferably process the local data blocks, the MapReduce framework is known for moving process closer to data in order to improve the power efficiency and performance of the applications.
A MapReduce application targets a set of input data blocks as well as the user-defined map and reduce functions. The procedure starts by running the map function on the targeted data blocks. Map function instances run concurrently on the DataNodes, consume data blocks, and produce a set of key/value pairs to be used as the input of the reduce function. These intermediate key/value pairs are stored locally on the DataNodes that execute the map function and should be shuffled, sorted, and transferred to the nodes which run the reduce tasks. The Hadoop framework stores the output of the reduce function in HDFS, and subsequently, it can be imported to a host's local filesystem.
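For example, a word-count job expressed for the Hadoop Streaming interface can be sketched as follows (illustrative only; the HiBench benchmarks used later in this paper are the suite's own implementations). The mapper emits key/value pairs, and the reducer aggregates the key-sorted pairs it receives after the shuffle:

```python
# Sketch of a Hadoop Streaming word count; in practice the mapper and reducer
# would be two separate executables passed to the streaming jar.
import sys

def mapper():
    # Emit a ("word", 1) pair for every word; Hadoop shuffles and sorts by key.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so counts can be summed per word in one pass.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current and current is not None:
            print(f"{current}\t{count}")
            count = 0
        current = word
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)()
```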
The Hadoop MapReduce platform provides an efficient environment for processing large datasets in a distributed fashion. However, there are research papers in the literature that optimize the Hadoop MapReduce platform for different workloads. Generally, HDFS partitions data into fixed-size 64 MB or 128 MB data blocks. The data-intensive MapReduce applications send numerous I/O requests to the Hadoop I/O scheduler, and this leads to a long queue of I/O requests in the scheduler and, consequently, a substantial increase in the data latency. Bulk I/O dispatch (BID) is proposed as a Hadoop I/O scheduler that is optimized for data-intensive MapReduce applications and can improve the access time of such applications significantly [20].
In some big data applications, the data that are generated in a processing node will be consumed by the same node in the near future. The lineage-aware data management (LDM) is proposed for Hadoop-based platforms to exploit this data locality to decrease the network footprint [21]. LDM develops a data block graph which is used to characterize the future data usage pattern. Using this information, it defines tier-aware storage policies for the Hadoop nodes to mitigate the impact of data dependency.
The Hadoop strategy of "processing data close to where they reside" is completely aligned with the ISP paradigm [22]. So, they can fortify each other's benefits when both are deployed concurrently in a cluster. In other words, Hadoop-enabled CSDs can play both roles of fast storage units for conventional Hadoop DataNodes and ISP-enabled DataNodes simultaneously, resulting in augmentation of processing horsepower of the CSDs to the Hadoop cluster.
Although CSDs can improve the overall performance of a MapReduce application by augmenting their processing engine to the Hadoop framework, this is not the primary advantage of deploying CSDs in the clusters. Potentially, increasing the total horsepower of a cluster can be achieved by adding more commodity nodes to a cluster. What makes CSDs distinguishable is, in fact, utilizing the high-performance and power-efficient internal data links of the modern SSD architecture for running Hadoop MapReduce applications.
On the other hand, well-designed CSDs can be deployed to run HPC applications in-place. However, CSDs need to deliver compelling performance when running HPC applications; otherwise, the complexity of deploying CSDs in the clusters is hard to justify. In this paper, we argue that CSDs can considerably improve the performance of HPC applications when they utilize ASIC-based accelerators such as Neon advanced SIMD engines.
Methods: Catalina CSD for MapReduce and HPC applications
We mentioned the challenges of proposing a well-designed CSD architecture in "Introduction" section. Based on these challenges, we set eight design goals for an efficient and flexible CSD architecture:
1. A desired CSD architecture should avoid using the real-time processors that are originally intended to run conventional flash management routines for running user applications in-place. Instead, it should contain an ISP-dedicated application processor.
2. There should be a TCP/IP link between the host and the ISP engine, allowing the applications running on the host and the CSD to communicate with each other.
3. The ISP engine should have access to the filesystem metadata.
4. Since both the host and the ISP engine have access to the flash memory concurrently, there should be a synchronization mechanism.
5. Both the host and the ISP engine should be able to interact with the flash storage simultaneously.
6. There should be an OS running inside the CSD. This OS provides a flexible environment to run a vast spectrum of applications in-place.
7. The desired CSD should support distributed processing platforms such as Hadoop and the message passing interface (MPI).
8. The CSD should have the potential to implement ASIC- and FPGA-based accelerator engines to run CPU-intensive applications in-place with compelling performance.
In this section, we describe the hardware and software architecture of Catalina, which is designed to satisfy all the design goals mentioned above. This section is composed of four subsections. In the first subsection, different hardware components of Catalina are described, and we discuss how they work together. The second subsection defines Catalina software layers that make it possible to send ISP commands, processing data in-place, and writing the results back. The third subsection demonstrates the fully functional Catalina prototype used to investigate the benefits of deploying CSDs in the clusters. The last subsection demonstrates how multiple Catalina CSDs can be deployed to run Hadoop MapReduce and HPC applications.
Hardware architecture
Catalina CSD is developed based on Xilinx Zynq Ultrascale+ MPSoC [12]. This device is composed of two subsystems, namely programmable logic (PL) and processing system (PS). The PS is an ASIC-based processing subsystem, including a quad-core ARM Cortex-A53 64-bit processor equipped with Neon SIMD engines and floating-point units, two ARM Cortex-R5 real-time processors, DRAM controller, as well as other interconnect and data movement components. Adjacent to PS, there is the PL subsystem which is an FPGA that can be utilized for implementing different components of the CSD controller such as the host and flash memory array interfaces. These two subsystems are packaged in one chip with multiple data links connecting them for high-performance and power-efficient intra-chip data transfers. These two subsystems together provide a proper platform for implementing conventional SSD routines as well as running user applications in-place.
Catalina CSD hardware architecture
Figure 4 shows the Catalina architecture, implemented using a Xilinx Zynq Ultrascale+ MPSoC. On the PL subsystem, there are three conventional components of the controller, which are the host NVMe/PCIe interface, the error correction unit, and the flash memory interface. The host interface is responsible for sending and receiving the NVMe/PCIe packets from the host and checking the integrity of the payloads. Soft errors can potentially happen in different parts of a device, and SSDs are no exception. These errors can happen in the SSD controller and the flash memory packages. There are research works that investigate the effect of soft errors on the SSD controller and the availability and reliability of storage systems [23]. On the other hand, for addressing the soft errors on the flash memory packages, the error correction unit is utilized to correct the data errors.
The flash memory interface controls the flash memory channels. In Fig. 4, each flash channel is connected to a set of flash memory packages. Design and implementation of these three components require a considerable amount of engineering resources; however, they are common among all conventional SSDs. We skip the detailed architectures of these components since they are out of the scope of this paper.
On the PS subsystem, there are two ARM Cortex-R5 real-time processors that are used for controlling the components implemented in the PL subsystem as well as running the flash translation layer (FTL) routines. In fact, the conventional firmware routines run on these two real-time processors. One of the real-time processors runs the FE firmware, which controls the host interface module and interprets the host's I/O commands. The other ARM Cortex-R5 processor runs the BE firmware, which is responsible for controlling the error correction and flash interface units. The BE firmware also runs other essential FTL routines such as garbage collection and wear leveling. The FE and BE firmware work together to receive the host's I/O commands, interpret them, and execute flash read/write operations.
All of the components mentioned above are common among conventional SSDs; however, Catalina is equipped with a unique ISP subsystem. This subsystem is dedicated to running user applications in-place. It contains a quad-core ARM Cortex-A53 processor which is equipped with Neon SIMD engines and floating-point units (FPUs). The quad-core processor is capable of running a vast spectrum of applications, while Neon SIMD engines can increase the performance of HPC applications. Overall, the quad-core Cortex-A53 processor is the main ISP engine of Catalina, and Neon SIMD engines and the FPUs can accelerate user applications that run in-place.
As Fig. 4 demonstrates, both the ISP engine and the two Cortex-R5 real-time processors which run the conventional flash management routines are connected via an internal Xilinx advanced extensible interface (AXI) bus. The shared AXI bus makes it possible to transfer data between the BE firmware and the ISP engine efficiently. In other words, the ISP engine can bypass the whole NVMe hardware and software stack and access the data stored in the flash memory array directly by communicating to the BE firmware. There is also an 8 GB DRAM memory connected to the AXI bus, which is shared among all the processing units.
The Catalina software stack is designed to achieve the goals we set earlier in this section. The most important part of the software components is an OS running inside the ISP engine. Therefore, we have ported a full-fledged Linux OS to run on the quad-core ARM Cortex-A53 processor. Each core of the quad-core processor has a level-one 32 kB instruction cache and a 32 kB data cache. The ARM processor also contains a 1 MB level-two cache, which is used by all the cores. These caches, together with the memory unit and the non-volatile flash memory, compose a memory hierarchy. The Catalina OS governs the memory hierarchy for data coherency and integrity. Overall, the OS is a flexible environment for running user applications in-place as well as implementing other layers of the software stack.
Running an OS inside the CSD comes with complexity and processing overhead. The first challenge of porting an OS inside the CSD is to boot the OS together with the conventional flash management firmware. At boot time, the OS and the firmware should boot up concurrently and complete a handshake phase. This procedure adds a considerable amount of complexity to the design and manufacturing of the CSD. We have successfully implemented this dual-boot procedure, so users do not need to deal with this complexity.
On the other hand, the ISP subsystem consumes a portion of the processing budget for running the OS routines. Since the processing budget of the ISP engine is limited, the OS should not consume a big portion of the resources. We have measured the OS routines overhead in different states of the ISP subsystem, and our results showed that the processing overhead of running the OS is at most 2% of the total processing budget of the Catalina CSD. Figure 5 demonstrates the architecture of the software layers and how they make it possible to run distributed applications in-place.
Catalina CSD software stack
Our first design, named Compstor [24], had a custom-developed software stack and could not be utilized in distributed processing environments. This problem is addressed in the Catalina design by adopting standard software stacks such as MPI and Hadoop MapReduce [25]. In Fig. 5 there is a cluster of M hosts connected to a TCP/IP interconnect, and host 1 is attached to N Catalina CSDs via a PCIe switch. In this figure, the lowest layer of the software stack is the BE firmware, which implements the FTL procedures. The BE firmware serves both the FE firmware, which talks to the host via the NVMe protocol, and a block device driver implemented in the kernel space of the ISP engine's OS. The block device driver issues flash I/O commands directly to the BE firmware, so the data link through the block device driver bypasses the NVMe/PCIe software and hardware stack. The block device driver also makes it possible to mount the flash storage inside the Catalina OS. It allows a user application running in-place to have filesystem-level access to the data stored in the flash memory array via a high-performance and low-power internal data link.
On the other hand, the ISP engine should also provide a link between applications that run in-place and applications running on the host, so in addition to the block device driver, we have implemented a TCP/IP tunnel through NVMe protocol to transfer TCP/IP packets between the applications running on the host and the applications running inside Catalina. We have utilized NVMe vendor-specific commands to packetize TCP/IP payloads inside the NVMe packets (TCP/IP tunnels through NVMe are demonstrated in Fig. 5 by dashed lines). A software layer implemented on both host OS and the Catalina OS provides the tunneling functionality. Since distributed platforms such as Hadoop MapReduce [17] and MPI [26] are based on TCP/IP connection, this link plays a crucial role in running distributed applications. As shown in Fig. 5, all the N Catalina CSDs that are attached to the host 1 can concurrently communicate with applications running on the host.
It is worth mentioning that by using Linux TCP/IP packet routing tools, we can create an internal network in the host OS and reroute the packets sent or received by the Catalina CSDs to the other hosts attached to the TCP/IP interconnect (see Fig. 5). In other words, given several hosts connected via a TCP/IP interconnect, each equipped with multiple Catalina CSDs, the hosts as well as all the attached CSDs can communicate with each other over a TCP/IP network. Such a CSD-equipped cluster architecture benefits from the efficient ISP capabilities of Catalina CSDs to run distributed applications. In fact, the proposed CSD architecture is an augmentable processing resource that is adoptable in the cluster without any modifications in the underlying Hadoop or HPC platforms.
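From an application developer's point of view, the tunnel simply makes a Catalina CSD reachable as an ordinary TCP/IP peer. The sketch below is illustrative only: the tunnel IP address, the port, and the command format are hypothetical and not part of the Catalina software stack.

```python
import socket

CSD_ADDR = ("10.0.0.11", 5000)   # hypothetical tunnel IP and port of one CSD

def csd_service():
    # Runs inside the Catalina OS: accept one command and reply with a result.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", CSD_ADDR[1]))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"done: {request}".encode())

def host_client():
    # Runs on the host: submit an ISP task to the in-CSD service over the tunnel.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(CSD_ADDR)
        cli.sendall(b"process /mnt/flash/input.bin")
        print(cli.recv(1024).decode())
```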
Also, the user applications that run in-place have access to Neon SIMD engines via a set of application programming interfaces (APIs) provided by the Catalina OS. Using these APIs, user applications can potentially be accelerated by the Neon SIMD engine. Overall, the user applications have access to three unique tools, including 1—a high-speed and low-power internal link to the data stored in the flash memory, 2—a TCP/IP link to the applications running on the host, 3—a set of APIs to utilize Neon SIMD engines.
The last layer of the software stack is the synchronization layer between the host and Catalina OSs. These two OSs can access the data stored in the flash memory array at the filesystem-level and concurrently mount the same storage media, which is a problematic behavior without a synchronization mechanism. Since Catalina contains a full-fledged Linux OS and there is a TCP/IP connection to the host, to address the synchronization issue, we have utilized the Oracle cluster filesystem 2nd version (OCFS2) [27] between the host and the CSD. Using OCFS2, both the host and Catalina CSD can issue flash I/O commands and mount the shared flash memory natively. This is the main difference between OCFS2 and network filesystem (NFS). In NFS, only one node mounts the shared storage natively, and other nodes use a network connection to access the shared storage, so NFS limits the data throughput and also suffers from the single point of failure problem. On the other hand, using OCFS2, all nodes can mount the storage natively.
Catalina prototype
To prove the feasibility of the proposed computational storage solution and also investigate the benefits of deploying Catalina CSDs in clusters, we have designed and manufactured a fully functional prototype of Catalina which completely aligns with the hardware and software architecture that has been described in the previous subsections. Figure 6 shows the Catalina CSD prototype. The CSD controller implemented on a Xilinx Zynq Ultrascale+ MPSoC, as well as the NAND flash packages, are shown in this figure.
The CSD controller is composed of the PS and PL subsystems that implement the two processing engines of the Catalina CSD, namely the conventional FTL and host interface engine and the ISP engine. The hardware specifications of the Catalina prototype, separated for these two engines, are listed in Table 1.
Catalina CSD prototype
Table 1 Catalina prototype hardware specifications
The prototype of Catalina is able to execute the host's I/O commands and also provides a straightforward mechanism for offloading user applications to the CSD via a TCP/IP tunnel through NVMe/PCIe. Considering multiple Catalina CSD prototypes attached to a host, an administrative application on the host can initiate user applications on CSDs while the host and the CSDs' OSs are being synchronized by the OCFS2 filesystem. The user applications can be developed in any language supported by Linux OS, and they can interact with the flash memory at the filesystem-level, similar to when they run on a conventional host machine.
Despite all of the projected benefits of deploying the CSDs, they should be cost-effective to be adoptable in the clusters. After prototyping Catalina, a sensible cost analysis of manufacturing CSDs can be presented. Compared with a regular SSD based on a conventional controller, a CSD should be equipped with more processors to run applications in-place efficiently. Interestingly, according to our observations and also the SSD bill of material analysis [28, 29], the difference between SSD and CSD manufacturing costs would be insignificant, since the SSD manufacturing cost is largely dominated by the NAND flash memory. The cost of flash memory chips is about 75% of the SSD price (according to data from https://www.dramexchange.com). With other miscellaneous costs (such as DRAM, miscellaneous components, and manufacturing costs) that would account for 20–25% of the SSD price, the controller would account for at most 5% of the SSD price.
Deploying Catalina in clusters
Catalina is designed for straightforward deployment in clusters. Since it has all the required features to play the role of a regular processing node, the cluster architects do not have to make major modifications in the underlying platforms to deploy the Catalina CSDs. Figures 7, 8 illustrate CSD-equipped Hadoop and MPI-based clusters, respectively, where a head node is connected to M host machines, and each of the hosts is equipped with N Catalina CSDs. In such a cluster, all CSDs and conventional nodes orchestrate together to improve the performance and efficiency of distributed applications.
Catalina CSD-equipped Hadoop cluster
Catalina CSD-equipped MPI-based cluster
In the CSD-equipped Hadoop cluster, the head node runs the Hadoop NameNode and the YARN resource manager (RM), while the hosts and the Catalina CSDs run DataNodes and node managers (NMs). In fact, Catalina CSDs play both roles of storage units as well as efficient DataNodes. Since Hadoop implements its own filesystem synchronization mechanism, we do not need the OCFS2 filesystem for running Hadoop. However, OCFS2 plays an important role in running MPI-based applications.
Figure 8 illustrates a CSD-equipped cluster based on MPI for running HPC applications. In this figure, the head node runs an MPI coordinator while the conventional hosts, and the Catalina CSDs, run the MPI workers. In this MPI-based cluster, each host is attached to N CSDs, and the data stored on the CSDs are shared between the host and CSDs, so the MPI workers on the host and the CSDs have access to the shared data. Thanks to the OCFS2 filesystem, the shared data is simultaneously visible to the host and CSDs at the filesystem-level so that the user can freely distribute the processing load among the hosts and the CSD.
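A minimal sketch of this division of labor is shown below, using mpi4py for brevity (the HPC benchmarks in this paper are MPI applications linked against the FFTW C library; the file paths below are illustrative). Rank 0 acts as the coordinator on the head node, and every other rank, running on the application host or on a Catalina CSD, processes its locally stored partition.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def process_local_partition(r):
    # Placeholder for the real work, e.g., DFT over the objects stored on this
    # node's local flash; the path is illustrative.
    return f"rank {r} processed /mnt/flash/partition_{r}.dat"

if rank == 0:
    # MPI coordinator on the head node: gather per-worker summaries.
    for r in range(1, size):
        print(comm.recv(source=r))
else:
    # MPI worker on the application host or inside a Catalina CSD.
    comm.send(process_local_partition(rank), dest=0)
```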
As mentioned in the previous section, deploying Catalina CSDs in clusters is straightforward. After attaching the Catalina CSDs to the host and setting the network configurations, the CSDs are exposed to the other hosts in the cluster by their network address (e.g., IP address). From the system-level point of view, the Catalina CSDs are similar to regular processing nodes, and the underlying ISP hardware and software details are invisible to other nodes in the cluster. In this section, we first demonstrate the developed platform equipped with up to 16 Catalina CSDs and describe how we implemented CSD-enabled Hadoop and MPI-based clusters on it. The second subsection shows the results of running different Hadoop MapReduce and HPC benchmarks and discusses the benefits of deploying Catalina CSDs in clusters.
The Catalina CSDs are not designed to compete with modern hosts based on high-end x86 processors with tens to hundreds of gigabytes of DRAM. Instead, Catalina is presented as a resource that augments the processing horsepower of a system and improves the performance and power efficiency of the applications [25]. However, for getting considerable improvements, we propose attaching multiple Catalina CSDs to host machines. Figure 9 shows the architecture of the developed platform, which contains 16 Catalina CSD prototypes. We built this platform to investigate the benefits of deploying Catalina CSDs in clusters.
Architecture of the developed platform equipped with 16 Catalina CSDs
This platform is composed of a conventional host (called the head node) and an application host, which is equipped with the Catalina CSDs. These two hosts, along with the Catalina CSDs, form a distributed environment for running Hadoop MapReduce and MPI-based HPC applications. We use the head node exclusively for running Hadoop NameNode and the MPI coordinator to eliminate the load of the administrative tasks on the processing nodes. In other words, the application host and the CSDs are the processing nodes, while the head node is dedicated only for running the administrative tasks.
To extensively investigate the benefits of Catalina CSDs in different environments, we have considered three different configurations for the application host, namely low, medium, and high. The specifications for the head node and the different application host's configurations are summarized in Table 2. In order to attach up to 16 Catalina CSDs to the application host, we used a Cubix Xpander Rackmount unit [30] which provides 16 PCIe Gen3 slots. This unit and the attached Catalina CSDs are shown in Fig. 9.
Table 2 Specifications of the hosts in the developed system for running experiments
The implementations of the Hadoop and the MPI-based clusters are aligned with the architectures that are shown in Figs. 7, 8. For implementing the Apache Hadoop cluster, we ran Hadoop NameNode and the Yarn resource manager (RM) on the head node, while the DataNodes and the node managers (NMs) run on the application host, and the Catalina CSDs that are attached to the application host. The communication between the head node and the application host is through an Ethernet cable, while Catalina CSDs communicate via the developed TCP/IP through NVMe link.
On the other hand, for running the HPC application based on MPI, we use the head node for running the MPI coordinator application which initiates and organizes the MPI workers that run on the application host and the Catalina CSDs. In this case, the OCFS2 filesystem synchronizes the filesystems of the Catalina CSDs and the application host, so at any given time the application host can access the whole data stored on all the CSDs directly, while each CSD only has access to its local data.
As previously discussed, all the packets from/to the CSDs are routed inside the application host. This mechanism provides a flexible networking environment without using an extra network switch. However, there is a cost for having such a flexible network: the application host needs to consume a portion of its processing horsepower to route the packets. In our design, for each CSD added to the application host, an insignificant amount of the host's CPU is consumed by the routing mechanism. We have measured this networking overhead for different workloads. In highly congested scenarios, the application host consumed at most 3.5% of its processing horsepower for routing the TCP/IP packets among the CSDs. This overhead is considered in the results in this section.
Benchmarks and results
This subsection is composed of two parts. First, we describe the targeted Hadoop MapReduce benchmarks and report the performance and energy consumption of running the benchmarks for different configurations to investigate the benefits of running Hadoop MapReduce applications in-place. Then, we show the results for running 1D, 2D, and 3D discrete Fourier transform (DFT) algorithms utilizing the Neon SIMD engines of Catalina CSDs. For reporting the performance, we measure the total elapsed time of running a benchmark on the developed platform for a given configuration. On the other hand, to measure the energy consumption, we use a power meter to measure the power consumption of the platform. Using the logging tool provided by the power meter, we calculate the total energy consumption for running a benchmark. Note that we deducted the idle energy consumption from the total calculated energy consumption for all the experiments to eliminate the energy consumption imposed by miscellaneous devices such as the cooling system.
Hadoop MapReduce benchmarks and results
For running Apache Hadoop MapReduce applications on the developed platform, we have used a subset of the Intel HiBench benchmark suite [31], which includes the Sort, Terasort, and Wordcount benchmarks. We believe that extensive experiments on these three benchmarks can show the potential of CSD architectures running Hadoop MapReduce applications. These benchmarks have been executed on 16 different platform configurations, which are listed in Table 3. In all the experiments, the head node specification is fixed and matches Table 2, and the numbers of mapper and reducer tasks are 2000 and 200, respectively.
The application host uses all the attached Catalina CSDs as the storage units (6 CSDs in the low and medium configurations, and 16 CSDs in the high configuration), while in each configuration, a certain number of the ISP engines of the CSDs are enabled to run MapReduce applications in-place. This way, the scalability of deploying Catalina CSDs in clusters can be investigated. The data sizes for the Sort, Terasort, and Wordcount benchmarks are 8 GB, 1.3 GB, and 80 GB, respectively.
Table 3 Different configurations for running Hadoop MapReduce benchmarks
For the sake of accuracy, each experiment has been repeated 30 times, and the performance and energy consumption results reported in this subsection are the averages of all repetitions. We ran the three targeted MapReduce benchmarks on the 16 different platform configurations, and each experiment was repeated 30 times, which gives a total of 1440 MapReduce tests.
As previously stated, in all experiments the application host uses all connected Catalina CSDs as storage units; however, in each test, a certain number of CSDs are enabled to run MapReduce applications in-place and play the role of processing nodes. Figures 10, 11 show the performance and energy consumption results of the Hadoop MapReduce experiments, respectively.
Hadoop MapReduce benchmarks performance results
The diagrams in Fig. 10 show that increasing the number of ISP-enabled CSDs decreases the elapsed time for all benchmarks. The performance of the high-configured application host platform increased by up to 2.2× when the ISP engines of all 16 Catalina CSDs are enabled. Thus, deploying ISP-enabled CSDs increases the performance of the Hadoop MapReduce benchmarks significantly.
Also, according to these diagrams, the elapsed time for running the MapReduce benchmarks on the low-configured application host platform equipped with six Catalina CSDs is close to the elapsed time of running the benchmarks on the high-configured application host platform with no enabled ISP engine. Thus, just six Catalina CSDs can bring the performance of a low-end host close to that of a high-end host. As we previously discussed, the cost of implementing the ISP engine inside the SSDs is negligible compared to the total cost of manufacturing an SSD, so ISP technology can considerably improve the performance of Hadoop clusters economically. Figure 11 shows the energy consumption results of running the Hadoop MapReduce benchmarks on the developed platform for different configurations.
Hadoop MapReduce benchmarks energy consumption results
According to Fig. 11, the energy consumption of running the benchmarks on the low-configured application host platform decreases by up to 36% by deploying six ISP-enabled Catalina CSDs. This improvement for the high-configured application host platform equipped with 16 ISP-enabled Catalina CSDs reaches 4.3×.
With no ISP engine enabled, the low-configured application host platform is less energy efficient than the other configurations. This is expected behavior, since running the same benchmark on a more powerful platform takes less time. According to the diagrams in Fig. 11, when six ISP engines are enabled, the energy efficiency of the low-configured application host platform can surpass the energy efficiency of the medium- and high-configured application host platforms equipped with the same number of ISP-enabled CSDs, although the performance of the low-configured application host platform is still lower than that of the other platforms (see Fig. 10). We believe that this happens because of the high energy efficiency of the Catalina CSDs.
The Hadoop framework distributes tasks among all of the processing nodes. If a processing node becomes idle, it will fetch data from other busy nodes and process it. Thus, the amount of data processed by each node in the Hadoop cluster is proportional to its processing resources. This means that in the low-configured application host platform, the Catalina CSDs process a larger share of the data than they do in the high-configured application host platform, which has an Intel Xeon processor. Since the ISP engines are considerably more energy-efficient than the application host's processor, as we increase the portion of data processed by the CSDs, the whole platform becomes more energy-efficient. This justifies why the energy efficiency of the low-configured application host platform equipped with 6 ISP-enabled CSDs can surpass the energy efficiency of the high-configured application host platform with the same number of ISP-enabled Catalina CSDs.
HPC benchmarks and results
In this subsection, we first describe the targeted benchmarks to investigate the effect of deploying Catalina CSDs in clusters for running HPC applications. Then we show and discuss the performance and energy consumption results of running the benchmarks on the developed platform. The HPC applications usually demand a considerable amount of processing resources and consume a large amount of data. Thus, we only consider the high-configured application host platform equipped with 16 Catalina CSDs for running the HPC experiments (see Table 2). We have implemented the MPI framework to run the HPC benchmark according to the architecture described earlier in this section. The MPI coordinator runs on the head node host, while the application host and the ISP-enabled Catalina CSDs run the MPI workers. In the developed platform, the application host can access the data stored on all the Catalina CSDs; however, each CSD only has access to its local data.
In addition, for running the HPC applications in-place, Catalina CSDs should be able to deliver a compelling performance. Therefore, we have utilized the Neon SIMD engines inside the Catalina CSDs. The Neon SIMD engines are application-specific integrated circuit (ASIC) accelerators that are expected to improve the performance and energy efficiency of the applications significantly. Overall, this section shows how using ASIC-based accelerators enhances the benefits of deploying CSDs for running HPC applications.
The HPC Challenge benchmark suite [32], which is developed by the University of Tennessee, is one of the well-known HPC benchmark suites and is used in many research works [33,34,35]. This suite is composed of several benchmarks, each of which focuses on a particular feature of HPC clusters, such as the ability to do floating-point calculations, the communication speed between nodes, and the potential for running demanding algorithms such as DFT. Among these benchmarks, we have targeted the DFT algorithm, since it is a CPU-intensive algorithm that also consumes a large amount of data, so it can show the potential of deploying CSDs in clusters. DFT is also one of the most important algorithms; Gilbert Strang, the author of the textbook Linear Algebra and Its Applications [36], referred to it as "the most important numerical algorithm in our lifetime." The DFT of a finite sequence X is a finite complex-valued sequence Y of the same length as X in the frequency domain. The DFT of the finite sequence X is defined by (1).
$$\begin{aligned} \begin{aligned} Y&= F\left\{ x_n\right\} \\ y_k&= \sum _{n=0}^{N-1} x_n \cdot e^{- \frac{2 \pi i}{N} kn} \end{aligned} \end{aligned}$$
In the case of a multidimensional input signal \(X{:}\left\{ x_{n_1,n_2, \ldots ,n_d} \right\}\), a d-dimensional DFT is defined as (2).
$$\begin{aligned} \begin{aligned} y_{k_1,k_2, \ldots ,k_d}&= \sum _{n_1=0}^{N_1-1}\left( \alpha _{N_1}^{n_1 k_1} \sum _{n_2=0}^{N_2-1}\left( \alpha _{N_2}^{n_2 k_2} \cdots \sum _{n_d=0}^{N_d-1}\left( \alpha _{N_d}^{n_d k_d} \cdot x_{n_1,n_2, \ldots ,n_d} \right) \right) \right) \\ where \quad \alpha _{N_l}&= exp\left( \frac{-2\pi i}{N_l} \right) \end{aligned} \end{aligned}$$
Considering a large amount of floating-point input data, the multi-dimensional DFT calculation is a challenging CPU-intensive application and can show the potential of the Catalina CSDs for running HPC applications. Thus, we targeted this algorithm to measure the energy consumption and performance of 1D, 2D, and 3D DFT calculations on large datasets running on the high-configured application host platform with different numbers of ISP-enabled Catalina CSDs. For implementing the DFT algorithm, we utilized the FFTW library [37], which can be compiled to use the Neon SIMD engines of Catalina CSDs, and also supports the multi-threading capability of the processing nodes in the developed platform.
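As a quick, library-agnostic illustration of Eq. (1), the sketch below evaluates the DFT of one 180-sample object directly from the definition and checks it against a library FFT (numpy is used here for brevity; the in-place implementation relies on FFTW, optionally vectorized by the Neon SIMD engines):

```python
import numpy as np

def dft_1d(x):
    """Direct O(N^2) evaluation of the DFT in Eq. (1)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.random.rand(180)                       # one PTB-like object: 180 samples
assert np.allclose(dft_1d(x), np.fft.fft(x))  # matches the fast implementation

# Eq. (2) is separable: a multi-dimensional DFT is a 1D DFT applied along each
# axis in turn (cf. np.fft.fft2 and np.fft.fftn for the 2D and 3D cases).
```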
For running the 1D-, 2D-, and 3D-DFT calculations, we have prepared three different datasets. The PTB Diagnostic ECG dataset is used for 1D-DFT calculation. The PTB Diagnostic ECG is a set of ECG signals collected from healthy volunteers and patients with different heart diseases by Professor Michael Oeff, M.D., at the Department of Cardiology of University Clinic Benjamin Franklin in Berlin, Germany [38, 39]. We have duplicated this dataset to generate 200 million 1D objects, each of which is a sequence of 180 floating-point numbers.
Typically, 2D-DFT operations are performed on images; therefore, we have generated 14.4 million synthetic grayscale images as the 2D-DFT dataset. In each of these images, a single darkest point is placed randomly, and the brightness of every other point is proportional to its distance from that point. Figure 12 shows four samples of these images. For performing 2D-DFT operations, we converted each of the images to a \(50\times 50\) matrix. Overall, the 2D-DFT dataset is composed of 14.4 million 2D objects, each of which is a sequence of 2500 floating-point numbers.
Four samples of the synthetic images used for 2D-DFT calculations
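A possible way to generate such a synthetic image is sketched below; this is our reading of the description above, and the exact generator used for the dataset may differ:

```python
import numpy as np

def synthetic_image(size=50, seed=None):
    """50x50 grayscale image with one random darkest point; the brightness of
    every other pixel grows with its distance from that point."""
    rng = np.random.default_rng(seed)
    dark_y, dark_x = rng.integers(0, size, 2)          # random darkest pixel
    ys, xs = np.mgrid[0:size, 0:size]
    dist = np.sqrt((ys - dark_y) ** 2 + (xs - dark_x) ** 2)
    return dist / dist.max()                           # 0 = darkest, 1 = brightest

img = synthetic_image()   # one 2D object: a 50x50 matrix of floating-point values
# The 3D dataset is built analogously on a 50x50x50 grid.
```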
The 3D dataset is also generated using the same method we used for generating the 2D dataset. Each object in the 3D dataset is a cube-shaped volume in which a single darkest point is placed randomly, and the brightness of every other point is proportional to its distance from that point. We have generated a set of 288,000 three-dimensional objects and converted them to \(50\times 50\times 50\) matrices to represent the dataset for the 3D-DFT operations. Table 4 summarizes the datasets we used for running 1D-, 2D-, and 3D-DFT operations on the developed CSD-enabled platform.
Table 4 Datasets for 1D-, 2D-, and 3D-DFT calculations
Similar to the Hadoop MapReduce experiments, in all of the DFT calculation experiments, the application host has access to the data stored in all of the Catalina CSDs, and the CSDs always play the role of storage units. However, in each test, a certain number of ISP engines of the CSDs are enabled to show the scalability of ISP technology for running HPC applications. Figure 13 shows the performance and energy consumption results of running the DFT calculations on the developed platform for different numbers of ISP-enabled Catalina CSDs. The performance reported in the diagrams is defined as the number of 1D, 2D, and 3D objects processed per second, and the reported energy consumption is the energy consumed for processing one object. It is worth mentioning that each test has been repeated 20 times, and each result reported in this subsection is the average of all repetitions.
DFT benchmarks performance and energy consumption results
According to the diagrams in Fig. 13, as we enable more ISP engines, the performance increases and the energy consumption decreases. In these experiments, adding 16 ISP-enabled Catalina CSDs has improved the performance and energy consumption of running DFT calculations by factors of 5.4× and 8.9×, respectively. The comparison between the results of running the Hadoop MapReduce and the HPC benchmarks yields an important outcome: the deployment of Catalina CSDs in the platform improves the performance and energy consumption of running DFT calculations significantly more than those of the Hadoop MapReduce benchmarks. We believe that this difference is rooted in the utilization of the Neon SIMD engines for running the DFT calculations. In other words, the Neon SIMD engines accelerate the execution of the DFT algorithm considerably. Since in Catalina CSDs these engines are close to where data reside, they make a compelling improvement when the ISP engines utilize them for running the applications in-place.
With the emergence of SSDs, the gap between the internal bandwidth of the storage units and the bandwidth of common I/O interfaces such as SATA, SAS, and NVMe has increased significantly, up to an order of magnitude. This gap, along with the considerable power usage of today's data centers, calls for "moving the process closer to data" [40]. Early studies show that omitting low-speed I/O interfaces can give considerable speedup to data-intensive applications and lower the overall energy consumption [28]. Such studies underline the importance of near-data processing.
The ISP technology is a relatively new trend in the field of near-data processing. This technology enables the storage units to run user applications in-place without sending the data to a host. Since integrating a dedicated general-purpose processing engine inside the modern SSD architecture is time-consuming and comes with considerable design challenges, most ISP works either use the native processing engine inside SSDs that is originally intended to run conventional flash management routines or embed an FPGA-based accelerator inside the SSD to run user applications in-place.
Thus, we have categorized some of the related ISP works into the following groups: ISP works that use the native SSD processor to run user commands [29, 41] and ISP projects that bundle an auxiliary processing engine (usually an FPGA) [24, 42,43,44]. Biscuit [42] is a framework to run applications in a distributed fashion among one host and several SSD units. The framework proposes a flow-based programming model for developing ISP applications. It decomposes the developed applications into small code sections, called SSDlets, and dynamically distributes the SSDlets among the SSDs. One of the drawbacks of the Biscuit framework is that there is no dedicated processing engine for ISP; it uses the same ARM Cortex-R7 real-time processor that is intended to run the conventional SSD firmware as the ISP engine. This method can greatly impact the host's interface performance, which makes it unsuitable for host systems with lots of I/O interactions. Also, Biscuit requires applications to be developed using a flow-based programming model. This remarkably limits the reuse of other types of applications. The research works [29, 41] have the same approach of using one processing engine for both conventional SSD and ISP routines; however, the scopes of these works are limited to Hadoop mapper tasks and SQL scan and join, respectively.
BlueDBM proposes a pure FPGA architecture using a Virtex7 FPGA, which has enough resources for both the SSD controller and in-storage processing [43]. In BlueDBM, storage units contain a network interface implemented in the FPGA to form a network for running user applications in a distributed fashion. The most notable drawback of an ISP-enabled storage architecture using a pure FPGA implementation is the time-consuming design process due to its reconfigurability issues. In other words, users have to go through the entire process of redesigning, synthesizing, placing, and routing to generate a bitstream for implementing the desired ISP functionality [45]. Jo et al. [46] propose a heterogeneous platform composed of a CPU, a GPU, and an ISP-enabled SSD (called iSSD). Based on simulations, this platform can get up to 3.5× speedup for data-intensive algorithms. Although this method looks effective, it never went beyond simulation. In all the mentioned related works, the lack of an OS inside the ISP-enabled storage makes it harder to adapt applications for running in-place.
The ISP technology enables storage units to run user applications in-place, i.e., data is not required to move from the storage units to the host's memory to be processed. It can relieve the data movement challenges in big data applications, where huge volumes of data need to be fetched from the storage systems. Modern solid-state drives (SSDs) provide a better environment for implementing ISP technology than traditional hard disk drives (HDDs). Computational storage devices (CSDs) are storage units based on the modern SSD architecture that can run user applications in-place.
In this paper, we introduce Catalina, an efficient and flexible computational storage platform featuring a dedicated ISP engine that includes a quad-core ARM application processor as well as Neon SIMD engines, which can be utilized to accelerate HPC applications running in-place. Catalina is equipped with a full-fledged Linux OS, which provides a flexible environment for running a vast spectrum of applications. The applications running inside Catalina have filesystem-level access to the data stored in flash memory. Also, a TCP/IP tunnel through the NVMe/PCIe link has been developed that allows Catalina to communicate with the host. Catalina can be seamlessly deployed in distributed environments such as Hadoop and MPI-based clusters. As a proof of concept, we have built a fully functional Catalina prototype as well as a system equipped with 16 Catalina CSDs to investigate the benefits of deploying the CSDs in clusters. The experimental results show that deploying Catalina in the clusters improves the MapReduce application performance and power efficiency by up to 2.2× and 4.3×, respectively. By utilizing the Neon SIMD engines to accelerate DFT algorithms, the performance and power efficiency improvements grow even higher, up to 5.4× and 8.9×, respectively.
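As a rough illustration of how such a deployment could look from the application side, the hedged sketch below uses mpi4py to let every MPI rank (a host CPU or an ISP engine inside a CSD, both of which run Linux) process the data shard stored on its local filesystem and ship only a small summary back to rank 0. The shard path and the FFT workload are illustrative assumptions, not details taken from the paper.

```python
# A hedged sketch of how an MPI job might use nodes that are either host CPUs or
# ISP engines inside Catalina CSDs: every rank transforms the data shard that is
# local to it (filesystem-level access on the CSD side) and only the small
# per-shard results travel over the network. The shard path is hypothetical.
import numpy as np
from mpi4py import MPI

SHARD_PATH = "/mnt/local_flash/shard_{rank}.npy"   # assumed layout, not from the paper

def process_local_shard(rank: int) -> float:
    """Load this rank's shard from its local filesystem and reduce it to one number."""
    samples = np.load(SHARD_PATH.format(rank=rank))
    spectrum = np.fft.fft(samples)               # the in-place DFT workload
    return float(np.abs(spectrum).max())

def main() -> None:
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    local_result = process_local_shard(rank)
    # Gather only the tiny per-shard summaries on rank 0; raw data never moves.
    results = comm.gather(local_result, root=0)
    if rank == 0:
        print(f"collected {size} shard summaries:", results)

if __name__ == "__main__":
    main()
```

Such a script could be launched with, for example, mpirun -n 16 python shard_dft.py, using a hostfile that lists addresses reachable through the TCP/IP tunnel described above; the exact launch command is an assumption, not taken from the paper.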
data acquired from "https://www.dramexchange.com".
ASIC: application-specific integrated circuit
AXI: Xilinx advanced extensible interface
CSD: computational storage device
DFT: discrete Fourier transform
ECG: electrocardiogram
FPU: floating-point unit
FTL: flash translation layer
GPU: graphics processing unit
HDD: hard disk drive
HDFS: Hadoop distributed filesystem
ISP: in-storage processing
MPSoC: multi-processor system-on-chip
NFS: network filesystem
NVMe: non-volatile memory express
OCFS: Oracle cluster filesystem
PCIe: peripheral component interconnect express
PS: processing system
SIMD: single instruction, multiple data
SSD: solid-state drive
James J. Data never sleeps 6.0. 2018. https://www.domo.com/blog/data-never-sleeps-6. Accessed 11 Nov 2019.
Javani A, Zorgui M, Wang Z. Age of information in multiple sensing. 2019. arXiv:1902.01975.
Kitchin R, McArdle G. What makes big data, big data? exploring the ontological characteristics of 26 datasets. Big Data Soc. 2016;3:1–10.
Pfister GF. An introduction to the infiniband architecture. In: High performance mass storage and parallel I/O. 2001;42:617–32.
Boden NJ, Cohen D, Felderman RE, Kulawik AE, Seitz CL, Seizovic JN, Su W-K. Myrinet: a gigabit-per-second local area network. IEEE Micro. 1995;15(1):29–36.
Prabhat, Koziol Q. High performance parallel I/O. Cleveland: CRC Press; 2014.
Shvachko K, Kuang H, Radia S, Chansler R, et al. The hadoop distributed file system. MSST. 2010;10:1–10.
Elyasi N, Choi C, Sivasubramaniam A. Large-scale graph processing on emerging storage devices. In: Proceedings of the 17th USENIX conference on file and storage technologies. FAST'19. Berkeley: USENIX Association; 2019. pp. 309–16.
SATA ecosystem web page. https://sata-io.org. Accessed 11 Nov 2019.
Serial-attached SCSI (SAS) Web page. https://searchstorage.techtarget.com/definition/serial-attached-SCSI. Accessed 11 Nov 2019.
Benefits of using NVMe over PCI Express Fabrics. https://www.dolphinics.com/solutions/nvme_over_pcie_fabrics.html. Accessed 11 Nov 2019.
Zynq UltraScale+ MPSoC product web page. https://www.xilinx.com/products/silicon-devices/soc/zynq-ultrascale-mpsoc.html. Accessed 11 Nov 2019.
Torabzadehkashi M, Rezaei S, HeydariGorji A, Bobarshad H, Alves V, Bagherzadeh N. Catalina: in-storage processing acceleration for scalable big data analytics. In: 2019 27th Euromicro international conference on parallel, distributed and network-based processing (PDP). IEEE. 2019. p. 430–7.
Park S, Kim Y, Urgaonkar B, Lee J, Seo E. A comprehensive study of energy efficiency and performance of flash-based ssd. J Syst Archit. 2011;57(4):354–65.
The Open Compute Project (OCP) web page. https://www.opencompute.org. Accessed 11 Nov 2019.
Seagate Kinetic HDD produce web page. https://www.seagate.com/support/enterprise-servers-storage/nearline-storage/kinetic-hdd/. Accessed 11 Nov 2019.
Patel AB, Birla M, Nair U. Addressing big data problem using hadoop and map reduce. In: 2012 Nirma University International conference on engineering (NUiCONE). IEEE. 2012. p. 1–5.
Mashayekhy L, Nejad MM, Grosu D, Zhang Q, Shi W. Energy-aware scheduling of mapreduce jobs for big data applications. IEEE Trans Parallel Distrib Syst. 2014;26(10):2720–33.
Vavilapalli VK, Murthy AC, Douglas C, Agarwal S, Konar M, Evans R, Graves T, Lowe J, Shah H, Seth S, et al. Apache hadoop yarn: yet another resource negotiator. In: Proceedings of the 4th annual symposium on cloud computing. ACM. 2013. p. 5.
Mishra P, Mishra M, Somani AK. Bulk i/o storage management for big data applications. In: 2016 IEEE 24th international symposium on modeling, analysis and simulation of computer and telecommunication systems (MASCOTS). IEEE. 2016. p. 412–7.
Mishra P, Somani AK. LDM: lineage-aware data management in multi-tier storage systems. In: Future of information and communication conference. Springer. 2019. p. 683–707.
White T. Hadoop: the definitive guide. Sebastopol: O'Reilly Media Inc; 2012.
Kishani M, Tahoori M, Asadi H. Dependability analysis of data storage systems in presence of soft errors. IEEE Trans Reliab. 2019;68:201–15.
Torabzadehkashi M, Rezaei S, Alves V, Bagherzadeh N. Compstor: an in-storage computation platform for scalable distributed processing. In: 2018 IEEE international parallel and distributed processing symposium workshops (IPDPSW). IEEE. 2018. p. 1260–7.
Torabzadehkashi M, HeydariGorji A, Rezaei S, Bobarshad H, Alves V, Bagherzadeh N. Accelerating HPC applications using computational storage devices. IEEE. 2019. p. 1878–85.
Walker DW, Dongarra JJ. MPI: a standard message passing interface. Supercomputer. 1996;12:56–68.
Oracle cluster filesystem second version web page. https://oss.oracle.com/projects/ocfs2/. Accessed 11 Nov 2019.
Cho S, Park C, Oh H, Kim S, Yi Y, Ganger GR. Active disk meets flash: a case for intelligent SSDs. In: Proceedings of the 27th international ACM conference on international conference on supercomputing. ICS '13. 2013. p. 91–102.
Kim S, Oh H, Park C, Cho S, Lee S-W, Moon B. In-storage processing of database scans and joins. Inf Sci. 2016;327:183–200.
The CUBIX XPANDER produce web page. https://www.cubix.com/xpander/. Accessed 11 Nov 2019.
Huang S, Huang J, Dai J, Xie T, Huang B. The hibench benchmark suite: characterization of the mapreduce-based data analysis. In: 2010 IEEE 26th international conference on data engineering workshops (ICDEW 2010). IEEE. 2010. p. 41–51.
The HPC Challenge benchmark suite web page. http://www.hpcchallenge.org. Accessed 11 Nov 2019.
Dongarra J, Heroux MA. Toward a new metric for ranking high performance computing systems. Sandia Report, SAND2013-4744 312, 150; 2013.
Dongarra J, Luszczek P. Reducing the time to tune parallel dense linear algebra routines with partial execution and performance modelling. University of Tennessee Computer Science Technical Report, Tech. Rep. 2010.
Steinbach P, Werner M. gearshifft—the FFT benchmark suite for heterogeneous platforms. In: International supercomputing conference. Springer. 2017. p. 199–216.
Rose NJ. Linear algebra and its applications (gilbert strang). SIAM Rev. 1982;24(4):499–501.
Frigo M, Johnson SG. The design and implementation of FFTW3. Proc IEEE. 2005;93(2):216–31.
Bousseljot R, Kreiseler D, Schnabel A. Nutzung der ekg-signaldatenbank cardiodat der ptb über das internet. Biomed Eng. 1995;40(s1):317–8.
Goldberger AL, Amaral LA, Glass L, Hausdorff JM, Ivanov PC, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. Circulation. 2000;101(23):215–20.
Do J, Kee Y-S, Patel JM, Park C, Park K, DeWitt DJ. Query processing on smart SSDs: opportunities and challenges. In: Proceedings of the 2013 ACM SIGMOD international conference on management of data. SIGMOD '13. 2013. p. 1221–30.
Park D, Wang J, Kee Y. In-storage computing for hadoop mapreduce framework: challenges and possibilities. IEEE Trans Comput. 2018;. https://doi.org/10.1109/TC.2016.2595566.
Gu B, Yoon AS, Bae D, Jo I, Lee J, Yoon J, Kang J, Kwon M, Yoon C, Cho S, Jeong J, Chang D. Biscuit: a framework for near-data processing of big data workloads. In: 2016 ACM/IEEE 43rd annual international symposium on computer architecture (ISCA). 2016. p. 153–165.
Jun S-W, Liu M, Lee S, Hicks J, Ankcorn J, King M, Xu S, Arvind. BlueDBM: an appliance for big data analytics. SIGARCH Comput Archit News. 2015;43(3):1–13.
Song X, Xie T, Pan W. RISP: a reconfigurable in-storage processing framework with energy-awareness. In: 2018 18th IEEE/ACM international symposium on cluster, cloud and grid computing (CCGRID). 2018. p. 193–202.
Rezaei S, Kim K, Bozorgzadeh E. Scalable multi-queue data transfer scheme for FPGA-based multi-accelerators. In: 2018 IEEE 36th international conference on computer design (ICCD). IEEE. 2018. p. 374–80.
Jo Y-Y, Cho S, Kim S-W, Oh H. Collaborative processing of data-intensive algorithms with CPU, intelligent SSD, and GPU. In: Proceedings of the 31st annual ACM symposium on applied computing. SAC '16. New York: ACM. 2016. p. 1865–70.
This work was supported by the USA National Science Foundation under Award number 1660071.
University of California, Irvine (UCI), Irvine, 92697, USA
Mahdi Torabzadehkashi, Siavash Rezaei, Ali HeydariGorji & Nader Bagherzadeh
NGD Systems, Inc., 355 Goddard, Suite 200, Irvine, 92618, USA
Hosein Bobarshad & Vladimir Alves
The author MT prepared the draft of all sections except the "Related work" section and also ran the experiments and collected the results. AH prepared the draft for "Related work" section. The authors MT, SR, AH, and HB participated in developing the proposed solution and also helped to prepare the final manuscript. All authors read and approved the final manuscript.
Correspondence to Mahdi Torabzadehkashi.
Torabzadehkashi, M., Rezaei, S., HeydariGorji, A. et al. Computational storage: an efficient and scalable platform for big data and HPC applications. J Big Data 6, 100 (2019) doi:10.1186/s40537-019-0265-5
Computational storage
Near-data processing
|
CommonCrawl
|
Reducing the level of nutrition of twin-bearing ewes during mid to late pregnancy produces leaner prime lambs at slaughter
M. I. Knight, K. L. Butler, L. L. Slocombe, N. P. Linden, M. C. Raeside, V. F. Burnett, A. J. Ball, M. B. McDonagh, R. Behrendt
Journal: animal , First View
Published online by Cambridge University Press: 15 October 2019, pp. 1-9
The Australian prime lamb industry is seeking to improve lean meat yield (LMY) as a means to increasing efficiency and profitability across the whole value chain. The LMY of prime lambs is affected by genetics and on-farm nutrition from birth to slaughter and is the total muscle weight relative to the total carcass weight. Under the production conditions of south eastern Australia, many ewe flocks experience a moderate reduction in nutrition in mid to late pregnancy due to a decrease in pasture availability and quality. Correcting nutritional deficits throughout gestation requires the feeding of supplements. This enables the pregnant ewe to meet condition score (CS) targets at lambing. However, limited resources on farm often mean it is difficult to effectively manage nutritional supplementation of the pregnant ewe flock. The impact of reduced ewe nutrition in mid to late pregnancy on the body composition of finishing lambs and subsequent carcass composition remains unknown. This study investigated the effect of moderately reducing ewe nutrition in mid to late gestation on the body composition of finishing lambs and carcass composition at slaughter on a commercial scale. Multiple born lambs to CS2.5 target ewes were lighter at birth and weaning, had lower feedlot entry and exit weights with lower pre-slaughter and carcass weights compared with CS3.0 and CS3.5 target ewes. These lambs also had significantly lower eye muscle and fat depth when measured by ultrasound prior to slaughter and carcass subcutaneous fat depth measured 110 mm from the spine along the 12th rib (GR 12th) and at the C-site (C-fat). Although carcasses were ~5% lighter, results showed that male progeny born to ewes with reduced nutrition from day 50 gestation to a target CS2.5 at lambing had a higher percentage of lean tissue mass as measured by dual energy X-ray absorptiometry and a lower percentage of fat during finishing and at slaughter, with the multiple born progeny from CS3.0 and CS3.5 target ewes being similar. These data suggest lambs produced from multiple bearing ewes that have had a moderate reduction in nutrition during pregnancy are less mature. This effect was also independent of lamb finishing system. The 5% reduction in carcass weight observed in this study would have commercially relevant consequences for prime lamb producers, despite a small gain in LMY.
The CODATwins Project: The Current Status and Recent Findings of COllaborative Project of Development of Anthropometrical Measures in Twins
K. Silventoinen, A. Jelenkovic, Y. Yokoyama, R. Sund, M. Sugawara, M. Tanaka, S. Matsumoto, L. H. Bogl, D. L. Freitas, J. A. Maia, J. v. B. Hjelmborg, S. Aaltonen, M. Piirtola, A. Latvala, L. Calais-Ferreira, V. C. Oliveira, P. H. Ferreira, F. Ji, F. Ning, Z. Pang, J. R. Ordoñana, J. F. Sánchez-Romera, L. Colodro-Conde, S. A. Burt, K. L. Klump, N. G. Martin, S. E. Medland, G. W. Montgomery, C. Kandler, T. A. McAdams, T. C. Eley, A. M. Gregory, K. J. Saudino, L. Dubois, M. Boivin, M. Brendgen, G. Dionne, F. Vitaro, A. D. Tarnoki, D. L. Tarnoki, C. M. A. Haworth, R. Plomin, S. Y. Öncel, F. Aliev, E. Medda, L. Nisticò, V. Toccaceli, J. M. Craig, R. Saffery, S. H. Siribaddana, M. Hotopf, A. Sumathipala, F. Rijsdijk, H.-U. Jeong, T. Spector, M. Mangino, G. Lachance, M. Gatz, D. A. Butler, W. Gao, C. Yu, L. Li, G. Bayasgalan, D. Narandalai, K. P. Harden, E. M. Tucker-Drob, K. Christensen, A. Skytthe, K. O. Kyvik, C. A. Derom, R. F. Vlietinck, R. J. F. Loos, W. Cozen, A. E. Hwang, T. M. Mack, M. He, X. Ding, J. L. Silberg, H. H. Maes, T. L. Cutler, J. L. Hopper, P. K. E. Magnusson, N. L. Pedersen, A. K. Dahl Aslan, L. A. Baker, C. Tuvblad, M. Bjerregaard-Andersen, H. Beck-Nielsen, M. Sodemann, V. Ullemar, C. Almqvist, Q. Tan, D. Zhang, G. E. Swan, R. Krasnow, K. L. Jang, A. Knafo-Noam, D. Mankuta, L. Abramson, P. Lichtenstein, R. F. Krueger, M. McGue, S. Pahlen, P. Tynelius, F. Rasmussen, G. E. Duncan, D. Buchwald, R. P. Corley, B. M. Huibregtse, T. L. Nelson, K. E. Whitfield, C. E. Franz, W. S. Kremen, M. J. Lyons, S. Ooki, I. Brandt, T. S. Nilsen, J. R. Harris, J. Sung, H. A. Park, J. Lee, S. J. Lee, G. Willemsen, M. Bartels, C. E. M. van Beijsterveldt, C. H. Llewellyn, A. Fisher, E. Rebato, A. Busjahn, R. Tomizawa, F. Inui, M. Watanabe, C. Honda, N. Sakai, Y.-M. Hur, T. I. A. Sørensen, D. I. Boomsma, J. Kaprio
Journal: Twin Research and Human Genetics , First View
Published online by Cambridge University Press: 31 July 2019, pp. 1-9
The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) project is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify genetic and environmental variation of height, body mass index (BMI, kg/m2) and size at birth, and additionally to address other research questions such as long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world having height and weight measures on twins with information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there is a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior and own or parental education. We found that the heritability estimates of height and BMI systematically changed from infancy to old age. Remarkably, only minor differences in the heritability estimates were found across cultural–geographic regions, measurement time and birth cohort for height and BMI. In addition to genetic epidemiological studies, we looked at associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration to address gene-by-exposure interactions that require large sample sizes and address the effects of different exposures across time, geographical regions and socioeconomic status.
Role of magnetic field evolution on filamentary structure formation in intense laser–foil interactions
HPL_EP HEDP and High Power Laser 2018
M. King, N. M. H. Butler, R. Wilson, R. Capdessus, R. J. Gray, H. W. Powell, R. J. Dance, H. Padda, B. Gonzalez-Izquierdo, D. R. Rusby, N. P. Dover, G. S. Hicks, O. C. Ettlinger, C. Scullion, D. C. Carroll, Z. Najmudin, M. Borghesi, D. Neely, P. McKenna
Journal: High Power Laser Science and Engineering / Volume 7 / 2019
Published online by Cambridge University Press: 13 March 2019, e14
Filamentary structures can form within the beam of protons accelerated during the interaction of an intense laser pulse with an ultrathin foil target. Such behaviour is shown to be dependent upon the formation time of quasi-static magnetic field structures throughout the target volume and the extent of the rear surface proton expansion over the same period. This is observed via both numerical and experimental investigations. By controlling the intensity profile of the laser drive, via the use of two temporally separated pulses, both the initial rear surface proton expansion and magnetic field formation time can be varied, resulting in modification to the degree of filamentary structure present within the laser-driven proton beam.
Education in Twins and Their Parents Across Birth Cohorts Over 100 Years: An Individual-Level Pooled Analysis of 42 Twin Cohorts
Karri Silventoinen, Aline Jelenkovic, Antti Latvala, Reijo Sund, Yoshie Yokoyama, Vilhelmina Ullemar, Catarina Almqvist, Catherine A. Derom, Robert F. Vlietinck, Ruth J. F. Loos, Christian Kandler, Chika Honda, Fujio Inui, Yoshinori Iwatani, Mikio Watanabe, Esther Rebato, Maria A. Stazi, Corrado Fagnani, Sonia Brescianini, Yoon-Mi Hur, Hoe-Uk Jeong, Tessa L. Cutler, John L. Hopper, Andreas Busjahn, Kimberly J. Saudino, Fuling Ji, Feng Ning, Zengchang Pang, Richard J. Rose, Markku Koskenvuo, Kauko Heikkilä, Wendy Cozen, Amie E. Hwang, Thomas M. Mack, Sisira H. Siribaddana, Matthew Hotopf, Athula Sumathipala, Fruhling Rijsdijk, Joohon Sung, Jina Kim, Jooyeon Lee, Sooji Lee, Tracy L. Nelson, Keith E. Whitfield, Qihua Tan, Dongfeng Zhang, Clare H. Llewellyn, Abigail Fisher, S. Alexandra Burt, Kelly L. Klump, Ariel Knafo-Noam, David Mankuta, Lior Abramson, Sarah E. Medland, Nicholas G. Martin, Grant W. Montgomery, Patrik K. E. Magnusson, Nancy L. Pedersen, Anna K. Dahl Aslan, Robin P. Corley, Brooke M. Huibregtse, Sevgi Y. Öncel, Fazil Aliev, Robert F. Krueger, Matt McGue, Shandell Pahlen, Gonneke Willemsen, Meike Bartels, Catharina E. M. van Beijsterveldt, Judy L. Silberg, Lindon J. Eaves, Hermine H. Maes, Jennifer R. Harris, Ingunn Brandt, Thomas S. Nilsen, Finn Rasmussen, Per Tynelius, Laura A. Baker, Catherine Tuvblad, Juan R. Ordoñana, Juan F. Sánchez-Romera, Lucia Colodro-Conde, Margaret Gatz, David A. Butler, Paul Lichtenstein, Jack H. Goldberg, K. Paige Harden, Elliot M. Tucker-Drob, Glen E. Duncan, Dedra Buchwald, Adam D. Tarnoki, David L. Tarnoki, Carol E. Franz, William S. Kremen, Michael J. Lyons, José A. Maia, Duarte L. Freitas, Eric Turkheimer, Thorkild I. A. Sørensen, Dorret I. Boomsma, Jaakko Kaprio
Journal: Twin Research and Human Genetics / Volume 20 / Issue 5 / October 2017
Published online by Cambridge University Press: 04 October 2017, pp. 395-405
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
Single Pulses from the Galactic Center Magnetar with the Very Large Array
S. Chatterjee, R. S. Wharton, J. M. Cordes, G. C. Bower, B. J. Butler, A. T. Deller, P. Demorest, T. J. W. Lazio, W. A. Majid, S. M. Ransom
Journal: Proceedings of the International Astronomical Union / Volume 13 / Issue S337 / September 2017
Phased VLA observations of the Galactic center magnetar J1745-2900 over 8-12 GHz reveal rich single pulse behavior. The average profile comprises several distinct components and is fairly stable over day timescales and GHz frequencies. The average profile is dominated by the jitter of relatively narrow pulses. The pulses in each of the four profile components are uncorrelated in phase and amplitude, although the occurrence of pulse components 1 and 2 appears to be correlated. Using a collection of the brightest individual pulses, we verify that the index of the dispersion law is consistent with the expected cold plasma value of 2. The scattering time is weakly constrained, but consistent with previous measurements, while the dispersion measure DM = $1763^{+3}_{-10}$ pc cm$^{-3}$ is lower than previous measurements, which could be a result of time variability in the line-of-sight column density or changing pulse profile shape over time or frequency.
The effects of flavanone-rich citrus juice on cognitive function and cerebral blood flow: an acute, randomised, placebo-controlled cross-over trial in healthy, young adults
Daniel J. Lamport, Deepa Pal, Anna L. Macready, Sofia Barbosa-Boucas, John M. Fletcher, Claire M. Williams, Jeremy P. E. Spencer, Laurie T. Butler
Journal: British Journal of Nutrition / Volume 116 / Issue 12 / 28 December 2016
A plausible mechanism underlying flavonoid-associated cognitive effects is increased cerebral blood flow (CBF). However, behavioural and CBF effects following flavanone-rich juice consumption have not been explored. The aim of this study was to investigate whether consumption of flavanone-rich juice is associated with acute cognitive benefits and increased regional CBF in healthy, young adults. An acute, single-blind, randomised, cross-over design was applied with two 500-ml drink conditions – high-flavanone (HF; 70·5 mg) drink and an energy-, and vitamin C- matched, zero-flavanone control. A total of twenty-four healthy young adults aged 18–30 years underwent cognitive testing at baseline and 2-h after drink consumption. A further sixteen, healthy, young adults were recruited for functional MRI assessment, whereby CBF was measured with arterial spin labelling during conscious resting state at baseline as well as 2 and 5 h after drink consumption. The HF drink was associated with significantly increased regional perfusion in the inferior and middle right frontal gyrus at 2 h relative to baseline and the control drink. In addition, the HF drink was associated with significantly improved performance on the Digit Symbol Substitution Test at 2 h relative to baseline and the control drink, but no effects were observed on any other behavioural cognitive tests. These results demonstrate that consumption of flavanone-rich citrus juice in quantities commonly consumed can acutely enhance blood flow to the brain in healthy, young adults. However, further studies are required to establish a direct causal link between increased CBF and enhanced behavioural outcomes following citrus juice ingestion.
P139: Procedural sedation by advanced care paramedics for emergency GI endoscopy
H. Wiemer, M.B. Butler, P. Froese, D. Farina, A. Lapierre, C. Carriere, G. Etsell, J. Jones, J. Murray, S.G. Campbell
Journal: Canadian Journal of Emergency Medicine / Volume 18 / Issue S1 / May 2016
Published online by Cambridge University Press: 02 June 2016, p. S124
Introduction: Acute upper gastrointestinal (UGI) bleeding is a relatively common emergency resulting in death in 6 to 8% of cases. UGI endoscopy is the intervention of choice which requires procedural sedation and analgesia (PSA). The Halifax Infirmary emergency department (ED) performs 1000 PSAs annually, performed by advanced care paramedics (ACPs). This has been shown safe for other indications for PSA, such as orthopedic procedures. Considering that UGI endoscopy involves upper airway manipulation, and patients are at an increased risk of massive bleeding, this procedure would be expected to be more complex and have an increased risk of adverse events (AEs). This study aims to compare PSA for UGI endoscopy performed by ACPs to that for orthopedic procedures for AEs, airway intervention and medication use. Methods: This study is a retrospective review of an ACP-performed ED PSA quality control database. A dataset was built matching 64 UGI endoscopy PSAs to 192 orthopedic PSAs by propensity scores calculated using age, gender and ASA classification. Outcomes assessed were hypotension (SBP < 100, or 15% decrease from baseline), hypoxia (SaO2 < 90), apnea (> 30sec), vomiting, arrhythmias and death in the ED. The need for airway intervention and medication use was assessed. Results: The UGI endoscopy group was 4.60 times more likely to suffer hypotension than the orthopedic group (OR=4.6, CI:2.2-9.6), and a fifth as likely to require airway repositioning (OR=0.2, CI:0.1-0.5). One endoscopy patient required endotracheal intubation. No patient died in either group. Compared to the orthopedic group, the UGI endoscopy group was one-third as likely to receive fentanyl (OR=0.3, CI:0.2-0.6). When fentanyl was administered, endoscopy patients received an average 26.7 mcg less than orthopedic patients. The endoscopy group was 15.4 times more likely to receive ketamine (OR=15.4, CI:4.7-66.5), and received 34.4 mg less on average. Four endoscopy patients received phenylephrine compared to none in the orthopedic group. There were no other differences. Conclusion: ED PSA for UGI endoscopy appears to differ significantly from that performed for orthopedic procedures. It was associated with more frequent hypotension and increased use of ketamine as a sedative. Patients undergoing UGI endoscopy were less likely to receive fentanyl and require airway repositioning. Only patients in the endoscopy group required intubation or a vasopressor agent.
Zygosity Differences in Height and Body Mass Index of Twins From Infancy to Old Age: A Study of the CODATwins Project
Aline Jelenkovic, Yoshie Yokoyama, Reijo Sund, Chika Honda, Leonie H Bogl, Sari Aaltonen, Fuling Ji, Feng Ning, Zengchang Pang, Juan R. Ordoñana, Juan F. Sánchez-Romera, Lucia Colodro-Conde, S. Alexandra Burt, Kelly L. Klump, Sarah E. Medland, Grant W. Montgomery, Christian Kandler, Tom A. McAdams, Thalia C. Eley, Alice M. Gregory, Kimberly J. Saudino, Lise Dubois, Michel Boivin, Adam D. Tarnoki, David L. Tarnoki, Claire M. A. Haworth, Robert Plomin, Sevgi Y. Öncel, Fazil Aliev, Maria A. Stazi, Corrado Fagnani, Cristina D'Ippolito, Jeffrey M. Craig, Richard Saffery, Sisira H. Siribaddana, Matthew Hotopf, Athula Sumathipala, Fruhling Rijsdijk, Timothy Spector, Massimo Mangino, Genevieve Lachance, Margaret Gatz, David A. Butler, Gombojav Bayasgalan, Danshiitsoodol Narandalai, Duarte L Freitas, José Antonio Maia, K. Paige Harden, Elliot M. Tucker-Drob, Bia Kim, Youngsook Chong, Changhee Hong, Hyun Jung Shin, Kaare Christensen, Axel Skytthe, Kirsten O. Kyvik, Catherine A. Derom, Robert F. Vlietinck, Ruth J. F. Loos, Wendy Cozen, Amie E. Hwang, Thomas M. Mack, Mingguang He, Xiaohu Ding, Billy Chang, Judy L. Silberg, Lindon J. Eaves, Hermine H. Maes, Tessa L. Cutler, John L. Hopper, Kelly Aujard, Patrik K. E. Magnusson, Nancy L. Pedersen, Anna K. Dahl Aslan, Yun-Mi Song, Sarah Yang, Kayoung Lee, Laura A. Baker, Catherine Tuvblad, Morten Bjerregaard-Andersen, Henning Beck-Nielsen, Morten Sodemann, Kauko Heikkilä, Qihua Tan, Dongfeng Zhang, Gary E. Swan, Ruth Krasnow, Kerry L. Jang, Ariel Knafo-Noam, David Mankuta, Lior Abramson, Paul Lichtenstein, Robert F. Krueger, Matt McGue, Shandell Pahlen, Per Tynelius, Glen E. Duncan, Dedra Buchwald, Robin P. Corley, Brooke M. Huibregtse, Tracy L. Nelson, Keith E. Whitfield, Carol E. Franz, William S. Kremen, Michael J. Lyons, Syuichi Ooki, Ingunn Brandt, Thomas Sevenius Nilsen, Fujio Inui, Mikio Watanabe, Meike Bartels, Toos C. E. M. van Beijsterveldt, Jane Wardle, Clare H. Llewellyn, Abigail Fisher, Esther Rebato, Nicholas G. Martin, Yoshinori Iwatani, Kazuo Hayakawa, Joohon Sung, Jennifer R. Harris, Gonneke Willemsen, Andreas Busjahn, Jack H. Goldberg, Finn Rasmussen, Yoon-Mi Hur, Dorret I. Boomsma, Thorkild I. A. Sørensen, Jaakko Kaprio, Karri Silventoinen
A trend toward greater body size in dizygotic (DZ) than in monozygotic (MZ) twins has been suggested by some but not all studies, and this difference may also vary by age. We analyzed zygosity differences in mean values and variances of height and body mass index (BMI) among male and female twins from infancy to old age. Data were derived from an international database of 54 twin cohorts participating in the COllaborative project of Development of Anthropometrical measures in Twins (CODATwins), and included 842,951 height and BMI measurements from twins aged 1 to 102 years. The results showed that DZ twins were consistently taller than MZ twins, with differences of up to 2.0 cm in childhood and adolescence and up to 0.9 cm in adulthood. Similarly, a greater mean BMI of up to 0.3 kg/m2 in childhood and adolescence and up to 0.2 kg/m2 in adulthood was observed in DZ twins, although the pattern was less consistent. DZ twins presented up to 1.7% greater height and 1.9% greater BMI than MZ twins; these percentage differences were largest in middle and late childhood and decreased with age in both sexes. The variance of height was similar in MZ and DZ twins at most ages. In contrast, the variance of BMI was significantly higher in DZ than in MZ twins, particularly in childhood. In conclusion, DZ twins were generally taller and had greater BMI than MZ twins, but the differences decreased with age in both sexes.
Measurement of the angle, temperature and flux of fast electrons emitted from intense laser–solid interactions
Energetic Electrons
D. R. Rusby, L. A. Wilson, R. J. Gray, R. J. Dance, N. M. H. Butler, D. A. MacLellan, G. G. Scott, V. Bagnoud, B. Zielbauer, P. McKenna, D. Neely
Journal: Journal of Plasma Physics / Volume 81 / Issue 5 / October 2015
Published online by Cambridge University Press: 13 July 2015, 475810505
High-intensity laser–solid interactions generate relativistic electrons, as well as high-energy (multi-MeV) ions and x-rays. The directionality, spectra and total number of electrons that escape a target-foil is dependent on the absorption, transport and rear-side sheath conditions. Measuring the electrons escaping the target will aid in improving our understanding of these absorption processes and the rear-surface sheath fields that retard the escaping electrons and accelerate ions via the target normal sheath acceleration (TNSA) mechanism. A comprehensive Geant4 study was performed to help analyse measurements made with a wrap-around diagnostic that surrounds the target and uses differential filtering with a FUJI-film image plate detector. The contribution of secondary sources such as x-rays and protons to the measured signal have been taken into account to aid in the retrieval of the electron signal. Angular and spectral data from a high-intensity laser–solid interaction are presented and accompanied by simulations. The total number of emitted electrons has been measured as $2.6\times 10^{13}$ with an estimated total energy of $12\pm 1~\text{J}$ from a $100~{\rm\mu}\text{m}$ Cu target with 140 J of incident laser energy during a $4\times 10^{20}~\text{W}~\text{cm}^{-2}$ interaction.
Emotion recognition deficits as predictors of transition in individuals at clinical high risk for schizophrenia: a neurodevelopmental perspective
C. M. Corcoran, J. G. Keilp, J. Kayser, C. Klim, P. D. Butler, G. E. Bruder, R. C. Gur, D. C. Javitt
Journal: Psychological Medicine / Volume 45 / Issue 14 / October 2015
Published online by Cambridge University Press: 04 June 2015, pp. 2959-2973
Schizophrenia is characterized by profound and disabling deficits in the ability to recognize emotion in facial expression and tone of voice. Although these deficits are well documented in established schizophrenia using recently validated tasks, their predictive utility in at-risk populations has not been formally evaluated.
The Penn Emotion Recognition and Discrimination tasks, and recently developed measures of auditory emotion recognition, were administered to 49 clinical high-risk subjects prospectively followed for 2 years for schizophrenia outcome, and 31 healthy controls, and a developmental cohort of 43 individuals aged 7–26 years. Deficit in emotion recognition in at-risk subjects was compared with deficit in established schizophrenia, and with normal neurocognitive growth curves from childhood to early adulthood.
Deficits in emotion recognition significantly distinguished at-risk patients who transitioned to schizophrenia. By contrast, more general neurocognitive measures, such as attention vigilance or processing speed, were non-predictive. The best classification model for schizophrenia onset included both face emotion processing and negative symptoms, with accuracy of 96%, and area under the receiver-operating characteristic curve of 0.99. In a parallel developmental study, emotion recognition abilities were found to reach maturity prior to traditional age of risk for schizophrenia, suggesting they may serve as objective markers of early developmental insult.
Profound deficits in emotion recognition exist in at-risk patients prior to schizophrenia onset. They may serve as an index of early developmental insult, and represent an effective target for early identification and remediation. Future studies investigating emotion recognition deficits at both mechanistic and predictive levels are strongly encouraged.
The CODATwins Project: The Cohort Description of Collaborative Project of Development of Anthropometrical Measures in Twins to Study Macro-Environmental Variation in Genetic and Environmental Effects on Anthropometric Traits
Karri Silventoinen, Aline Jelenkovic, Reijo Sund, Chika Honda, Sari Aaltonen, Yoshie Yokoyama, Adam D. Tarnoki, David L. Tarnoki, Feng Ning, Fuling Ji, Zengchang Pang, Juan R. Ordoñana, Juan F. Sánchez-Romera, Lucia Colodro-Conde, S. Alexandra Burt, Kelly L. Klump, Sarah E. Medland, Grant W. Montgomery, Christian Kandler, Tom A. McAdams, Thalia C. Eley, Alice M. Gregory, Kimberly J. Saudino, Lise Dubois, Michel Boivin, Claire M. A. Haworth, Robert Plomin, Sevgi Y. Öncel, Fazil Aliev, Maria A. Stazi, Corrado Fagnani, Cristina D'Ippolito, Jeffrey M. Craig, Richard Saffery, Sisira H. Siribaddana, Matthew Hotopf, Athula Sumathipala, Timothy Spector, Massimo Mangino, Genevieve Lachance, Margaret Gatz, David A. Butler, Gombojav Bayasgalan, Danshiitsoodol Narandalai, Duarte L. Freitas, José Antonio Maia, K. Paige Harden, Elliot M. Tucker-Drob, Kaare Christensen, Axel Skytthe, Kirsten O. Kyvik, Changhee Hong, Youngsook Chong, Catherine A. Derom, Robert F. Vlietinck, Ruth J. F. Loos, Wendy Cozen, Amie E. Hwang, Thomas M. Mack, Mingguang He, Xiaohu Ding, Billy Chang, Judy L. Silberg, Lindon J. Eaves, Hermine H. Maes, Tessa L. Cutler, John L. Hopper, Kelly Aujard, Patrik K. E. Magnusson, Nancy L. Pedersen, Anna K. Dahl Aslan, Yun-Mi Song, Sarah Yang, Kayoung Lee, Laura A. Baker, Catherine Tuvblad, Morten Bjerregaard-Andersen, Henning Beck-Nielsen, Morten Sodemann, Kauko Heikkilä, Qihua Tan, Dongfeng Zhang, Gary E. Swan, Ruth Krasnow, Kerry L. Jang, Ariel Knafo-Noam, David Mankuta, Lior Abramson, Paul Lichtenstein, Robert F. Krueger, Matt McGue, Shandell Pahlen, Per Tynelius, Glen E. Duncan, Dedra Buchwald, Robin P. Corley, Brooke M. Huibregtse, Tracy L. Nelson, Keith E. Whitfield, Carol E. Franz, William S. Kremen, Michael J. Lyons, Syuichi Ooki, Ingunn Brandt, Thomas Sevenius Nilsen, Fujio Inui, Mikio Watanabe, Meike Bartels, Toos C. E. M. van Beijsterveldt, Jane Wardle, Clare H. Llewellyn, Abigail Fisher, Esther Rebato, Nicholas G. Martin, Yoshinori Iwatani, Kazuo Hayakawa, Finn Rasmussen, Joohon Sung, Jennifer R. Harris, Gonneke Willemsen, Andreas Busjahn, Jack H. Goldberg, Dorret I. Boomsma, Yoon-Mi Hur, Thorkild I. A. Sørensen, Jaakko Kaprio
Journal: Twin Research and Human Genetics / Volume 18 / Issue 4 / August 2015
Published online by Cambridge University Press: 27 May 2015, pp. 348-360
For over 100 years, the genetics of human anthropometric traits has attracted scientific interest. In particular, height and body mass index (BMI, calculated as kg/m2) have been under intensive genetic research. However, it is still largely unknown whether and how heritability estimates vary between human populations. Opportunities to address this question have increased recently because of the establishment of many new twin cohorts and the increasing accumulation of data in established twin cohorts. We started a new research project to analyze systematically (1) the variation of heritability estimates of height, BMI and their trajectories over the life course between birth cohorts, ethnicities and countries, and (2) to study the effects of birth-related factors, education and smoking on these anthropometric traits and whether these effects vary between twin cohorts. We identified 67 twin projects, including both monozygotic (MZ) and dizygotic (DZ) twins, using various sources. We asked for individual level data on height and weight including repeated measurements, birth related traits, background variables, education and smoking. By the end of 2014, 48 projects participated. Together, we have 893,458 height and weight measures (52% females) from 434,723 twin individuals, including 201,192 complete twin pairs (40% monozygotic, 40% same-sex dizygotic and 20% opposite-sex dizygotic) representing 22 countries. This project demonstrates that large-scale international twin studies are feasible and can promote the use of existing data for novel research purposes.
By Dor Abrahamson, Jerry Andriessen, Roger Azevedo, Michael Baker, Ryan Baker, Sasha Barab, Carl Bereiter, Susan Bridges, Mario Carretero, Carol K. K. Chan, Clark A. Chinn, Paul Cobb, Allan Collins, Kevin Crowley, Elizabeth A. Davis, Chris Dede, Sharon J. Derry, Andrea A. diSessa, Michael Eisenberg, Yrjö Engeström, Noel Enyedy, Barry J. Fishman, Ricki Goldman, James G. Greeno, Erica Rosenfeld Halverson, Cindy E. Hmelo-Silver, Michael J. Jacobson, Sanna Järvelä, Yasmin B. Kafai, Yael Kali, Manu Kapur, Paul A. Kirschner, Karen Knutson, Timothy Koschmann, Joseph S. Krajcik, Carol D. Lee, Peter Lee, Robb Lindgren, Jingyan Lu, Richard E. Mayer, Naomi Miyake, Na'ilah Suad Nasir, Mitchell J. Nathan, Narcis Pares, Roy Pea, James W. Pellegrino, William R. Penuel, Palmyre Pierroux, Brian J. Reiser, K. Ann Renninger, Ann S. Rosebery, R. Keith Sawyer, Marlene Scardamalia, Anna Sfard, Mike Sharples, Kimberly M. Sheridan, Bruce L. Sherin, Namsoo Shin, George Siemens, Peter Smagorinsky, Nancy Butler Songer, James P. Spillane, Kurt Squire, Gerry Stahl, Constance Steinkuehler, Reed Stevens, Daniel Suthers, Iris Tabak, Beth Warren, Uri Wilensky, Philip H. Winne, Carmen Zahn
Edited by R. Keith Sawyer, University of North Carolina, Chapel Hill
Book: The Cambridge Handbook of the Learning Sciences
Published online: 05 November 2014
Print publication: 17 November 2014, pp xv-xviii
Risk factors associated with detailed reproductive phenotypes in dairy and beef cows
T. R. Carthy, D. P. Berry, A. Fitzgerald, S. McParland, E. J. Williams, S. T. Butler, A. R. Cromie, D. Ryan
Journal: animal / Volume 8 / Issue 5 / May 2014
The objective of this study was to identify detailed fertility traits in dairy and beef cattle from transrectal ultrasonography records and quantify the associated risk factors. Data were available on 148 947 ultrasound observations of the reproductive tract from 75 949 cows in 843 Irish dairy and beef herds between March 2008 and October 2012. Traits generated included (1) cycling at time of examination, (2) cystic structures, (3) early ovulation, (4) embryo death and (5) uterine score; the latter was measured on a scale of 1 (good) to 4 (poor) characterising the tone of the uterine wall and fluid present in the uterus. After editing, 72 773 records from 44 415 dairy and beef cows in 643 herds remained. Factors associated with the logit of the probability of a positive outcome for each of the binary fertility traits were determined using generalised estimating equations; linear mixed model analysis was used for the analysis of uterine score. The prevalence of cycling, cystic structures, early ovulation and embryo death was 84.75%, 3.87%, 7.47% and 3.84%, respectively. The occurrence of the uterine health score of 1, 2, 3 and 4 was 70.63%, 19.75%, 8.36% and 1.26%, respectively. Cows in beef herds had a 0.51 odds (95% CI=0.41 to 0.63, P<0.001) of cycling at the time of examination compared with cows in dairy herds; stage of lactation at the time of examination was the same in both herd types. Furthermore, cows in dairy herds had an inferior uterine score (indicating poorer tone and a greater quantity of uterine fluid present) compared with cows in beef herds. The likelihood of cycling at the time of examination increased with parity and stage of lactation, but was reduced in cows that had experienced dystocia in the previous calving. The presence of cystic structures on the ovaries increased with parity and stage of lactation. The likelihood of embryo/foetal death increased with parity and stage of lactation. Dystocia was not associated with the presence of cystic structures or embryo death. Uterine score improved with parity and stage of lactation, while cows that experienced dystocia in the previous calving had an inferior uterine score. Heterosis was the only factor associated with increased likelihood of early ovulation. The fertility traits identified, and the associated risk factors, provide useful information on the reproductive status of dairy and beef cows.
Gamma Knife in the Treatment of Pituitary Adenomas: Results of a Single Center
F. A. Zeiler, M. Bigder, A. Kaufmann, P. J. McDonald, D. Fewer, J. Butler, G. Schroeder, M. West
Journal: Canadian Journal of Neurological Sciences / Volume 40 / Issue 4 / July 2013
Gamma Knife (GK) radiosurgery for pituitary adenomas can offer a means of tumor and biologic control with acceptable risk and low complication rates.
Retrospective review of all the patients treated at our center with GK for pituitary adenomas from Nov 2003 to June 2011.
We treated a total of 86 patients. Ten were lost to follow-up. Mean follow-up was 32.8 months. There were 21 (24.4%) growth hormone secreting adenomas (GH), 8 (9.3%) prolactinomas (PRL), 8 (9.3%) adrenocorticotropic hormone secreting (ACTH) adenomas, 2 (2.3%) follicle stimulating hormone/luteinizing hormone secreting (FSH/LH) adenomas, and 47 (54.7%) null cell pituitary adenomas. Average maximum tumor diameter and volume were 2.21 cm and 5.41 cm3, respectively. The average dose to the 50% isodose line was 14.2 Gy and 23.6 Gy for secreting and non-secreting adenomas, respectively. Mean maximal optic nerve dose was 8.87 Gy. Local control rate was 75 of 76 (98.7%) for those with follow-up. Thirty-three (43.4%) patients experienced arrest of tumor growth, while 42 (55.2%) patients experienced tumor regression. Of the 39 patients with secreting pituitary tumors, 6 were lost to follow-up. Improved endocrine status occurred in 16 (50.0%), while 14 (43.8%) demonstrated stability of hormone status on continued pre-operative medical management. Permanent complications included: panhypopituitarism (4), hypothyroidism (4), hypocortisolemia (1), diabetes insipidus (1), apoplexy (1), visual field defect (2), and diplopia (1).
Gamma Knife radiosurgery is a safe and effective means of achieving tumor growth control and endocrine remission/stability in pituitary adenomas.
Gamma Knife Radiosurgery for Large Vestibular Schwannomas: A Canadian Experience
Journal: Canadian Journal of Neurological Sciences / Volume 40 / Issue 3 / May 2013
To review our institutional experience with Gamma Knife (GK) stereotactic radiosurgery in treating large vestibular schwannomas (VS) of 3 to 4 cm diameter.
We conducted a retrospective cohort review of all patients treated with GK for VS at our institution between November 2003 and March 2012. Data on age, sex, VS volume, location and maximal diameter, House-Brackmann (HB) facial nerve scores pre and post-GK, Gardner-Robertson (GR) hearing score pre and post-GK, GK treatment parameters, VS response time, complications and clinical outcome was recorded.
A total of 28 patients treated during the defined time period were identified. Three patients were lost to follow-up. Mean follow-up was 34.5 months. Tumor control occurred in 92%, and was maintained in 85.7% at two years. Facial nerve or hearing preservation, relative to pre-GK status, occurred in all treated patients, as per HB and GR grading. Transient complications occurred in 80%. Temporary vestibular dysfunction occurred in seven patients (28%). One patient (4%) had the permanent complication of worsening pre-GK hemifacial spasm. Four patients (16%) developed hydrocephalus post-GK.
GK stereotactic radiosurgery as a primary treatment modality for large VS can provide acceptable tumor control rates with good facial nerve and hearing preservation, and low complication rates.
Estimating the heritability of reporting stressful life events captured by common genetic variants
R. A. Power, T. Wingenbach, S. Cohen-Woods, R. Uher, M. Y. Ng, A. W. Butler, M. Ising, N. Craddock, M. J. Owen, A. Korszun, L. Jones, I. Jones, M. Gill, J. P. Rice, W. Maier, A. Zobel, O. Mors, A. Placentino, M. Rietschel, S. Lucae, F. Holsboer, E. B. Binder, R. Keers, F. Tozzi, P. Muglia, G. Breen, I. W. Craig, B. Müller-Myhsok, J. L. Kennedy, J. Strauss, J. B. Vincent, C. M. Lewis, A. E. Farmer, P. McGuffin
Journal: Psychological Medicine / Volume 43 / Issue 9 / September 2013
Published online by Cambridge University Press: 14 December 2012, pp. 1965-1971
Although usually thought of as external environmental stressors, a significant heritable component has been reported for measures of stressful life events (SLEs) in twin studies.
We examined the variance in SLEs captured by common genetic variants from a genome-wide association study (GWAS) of 2578 individuals. Genome-wide complex trait analysis (GCTA) was used to estimate the phenotypic variance tagged by single nucleotide polymorphisms (SNPs). We also performed a GWAS on the number of SLEs, and looked at correlations between siblings.
A significant proportion of variance in SLEs was captured by SNPs (30%, p = 0.04). When events were divided into those considered to be dependent or independent, an equal amount of variance was explained for both. This 'heritability' was in part confounded by personality measures of neuroticism and psychoticism. A GWAS for the total number of SLEs revealed one SNP that reached genome-wide significance (p = 4 × 10⁻⁸), although this association was not replicated in separate samples. Using available sibling data for 744 individuals, we also found a significant positive correlation of R² = 0.08 in SLEs (p = 0.03).
These results provide independent validation from molecular data for the heritability of reporting environmental measures, and show that this heritability is in part due to both common variants and the confounding effect of personality.
Gamma Knife Radiosurgery of Cavernous Sinus Meningiomas: An Institutional Review
F. A. Zeiler, P. J. McDonald, A. M. Kaufmann, D. Fewer, J. Butler, G. Schroeder, M. West
Journal: Canadian Journal of Neurological Sciences / Volume 39 / Issue 6 / November 2012
Stereotactic radiosurgery offers a unique and effective means of controlling cavernous sinus meningiomas with a low rate of complications.
We retrospectively reviewed all cavernous sinus meningiomas treated with Gamma Knife (GK) radiosurgery between November 2003 and April 2011 at our institution.
Thirty patients were treated; four were lost to follow-up. Presenting symptoms included: headache (9), trigeminal nerve dysesthesias/paresthesias (13), abducens nerve palsy (11), oculomotor nerve palsy (8), Horner's syndrome (2), blurred vision (9), and relative afferent pupillary defect (1). One patient was asymptomatic with documented tumor growth. Treatment planning consisted of MRI and CT in 17 of 30 patients (56.7%); the remainder were planned with MRI alone (43.3%). There were 8 males (26.7%) and 22 females (73.3%). Twelve patients had previous surgical debulking prior to radiosurgery. Average diameter and volume at time of radiosurgery were 3.4 cm and 7.9 cm3, respectively. Average dose at the 50% isodose line was 13.5 Gy. Follow-up was available in 26 patients. Average follow-up was 36.1 months. Mean age was 55.1 years. Tumor size post GK decreased in 9 patients (34.6%), remained stable in 15 patients (57.7%), and continued to grow in 2 (7.7%). Minor transient complications occurred in 12 patients, all resolving. Serious permanent complications occurred in 5 patients: new onset trigeminal neuropathic pain (2), frame related occipital neuralgia (1), worsening of pre-GK seizures (1), and panhypopituitarism (1).
GK offers an effective treatment method for halting meningioma progression in the cavernous sinus, with an acceptable permanent complication rate.
An investigation into the chronic effects of flavonoids in orange juice on cardiovascular health and cognition
R. J. Kean, J. Freeman, J. A. Ellis, L. T. Butler, J. P. E. Spencer
Journal: Proceedings of the Nutrition Society / Volume 71 / Issue OCE2 / 2012
Published online by Cambridge University Press: 19 October 2012, E35
Gamma Knife for Cerebral Arteriovenous Malformations at a Single Centre
F. A. Zeiler, P. J. McDonald, A. Kaufmann, D. Fewer, J. Butler, G. Schroeder, M. West
We report the results of a consecutive series of patients treated with Gamma Knife (GK) Surgery for cerebral arteriovenous malformations (AVMs).
We retrospectively reviewed 69 patients treated with GK for cerebral AVMs between November 2003 and April 2009, recording clinical data, treatment parameters, and AVM obliteration rates in order to assess our effectiveness with GK in treating these lesions.
Ten patients were lost to follow-up. Presentations included: seizure (24), hemorrhage (18), persistent headache (12), progressing neurological signs (10), and incidental (9). In 24 patients (34.8%) treatment planning consisted of digital subtraction angiography (DSA), magnetic resonance imaging (MRI), and computed tomography (CT) angiography (CTA). Currently we rely predominantly on CTA and/or MRI scanning only. Forty-one patients have been followed for a minimum of 3 years; average age 40.9 years, 58.5% males. Average dose at the 50% isodose line was 20.3 Gy (range 16 to 26.4 Gy). Obliteration was observed in 87.8% by MRI, CT, or DSA. Not all obliteration was confirmed by DSA. Complications occurred in 12 of 59 (20.3%) patients, and in 11 of 41 (26.8%) with 3-year follow-up. Major (temporary) complications for the 59 included symptomatic cerebral edema (7), seizure (2), and hemorrhage (1). Major permanent complications occurred in one patient suffering a cranial nerve V deafferentation, and in two patients suffering a hemorrhage.
GKS for cerebral AVMs offers an effective and safe method of treatment, with a low permanent complication rate.
By Graeme J.M. Alexander, Heung Bae Kim, Michael Burch, Andrew J. Butler, Tanveer Butt, Roy Calne, Edward Cantu, Robert B. Colvin, Paul Corris, Charles Crawley, Hiroshi Date, Francis L. Delmonico, Bimalangshu R. Dey, Kate Drummond, John Dunning, John D. Firth, John Forsythe, Simon M. Gabe, Robert S. Gaston, William Gelson, Paul Gibbs, Alex Gimson, Leo C. Ginns, Samuel Goldfarb, Ryoichi Goto, Walter K. Graham, Simon J.F. Harper, Koji Hashimoto, David G. Healy, Hassan N. Ibrahim, David Ip, Fadi G. Issa, Neville V. Jamieson, David P. Jenkins, Dixon B. Kaufman, Kiran K. Khush, Heung Bae Kim, Andrew A. Klein, John Klinck, Camille Nelson Kotton, Vineeta Kumar, Yael B. Kushner, D. Frank. P. Larkin, Clive J. Lewis, Yvonne H. Luo, Richard S. Luskin, Ernest I. Mandel, James F. Markmann, Lorna Marson, Arthur J. Matas, Mandeep R. Mehra, Stephen J. Middleton, Giorgina Mieli-Vergani, Charles Miller, Sharon Mulroy, Faruk Özalp, Can Ozturk, Jayan Parameshwar, J.S. Parmar, Hari K. Parthasarathy, Nick Pritchard, Cristiano Quintini, Axel O. Rahmel, Chris J. Rudge, Stephan V.B. Schueler, Maria Siemionow, Jacob Simmonds, Peter Slinger, Thomas R. Spitzer, Stuart C. Sweet, Nina E. Tolkoff-Rubin, Steven S.L. Tsui, Khashayar Vakili, R.V. Venkateswaran, Hector Vilca-Melendez, Vladimir Vinarsky, Kathryn J. Wood, Heidi Yeh, David W. Zaas, Jonathan G. Zaroff
Edited by Andrew A. Klein, Clive J. Lewis, Joren C. Madsen
Book: Organ Transplantation
Published online: 07 September 2011
Print publication: 11 August 2011, pp vii-x
|
CommonCrawl
|
The cycle structure of unicritical polynomials
Bridy, Andrew
Garton, Derek
A polynomial with integer coefficients yields a family of dynamical systems indexed by primes as follows: for any prime $p$, reduce its coefficients mod $p$ and consider its action on the field $\mathbb{F}_p$. The questions of whether and in what sense these families are random have been studied extensively, spurred in part by Pollard's famous "rho" algorithm for integer factorization (the heuristic justification of which is the randomness of one such family). However, the cycle structure of these families cannot be random, since in any such family, the number of cycles of a fixed length in any dynamical system in the family is bounded. In this paper, we show that the cycle statistics of many of these families are as random as possible. As a corollary, we show that most members of these families have many cycles, addressing a conjecture of Mans et al.
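For readers who want to see the object of study concretely, the short sketch below (an editorial illustration, not the paper's method) builds the functional graph of x ↦ x^d + c on F_p for a small prime and reports the multiset of cycle lengths; the specific values p = 101, d = 2, c = 1 are arbitrary.

```python
# A small illustration (not the paper's machinery) of the object being studied:
# the cycle structure of x -> x^d + c acting on F_p. It builds the functional
# graph, strips the tree parts, and reports the multiset of cycle lengths.
from collections import Counter

def cycle_lengths(p: int, d: int, c: int) -> Counter:
    f = [(pow(x, d, p) + c) % p for x in range(p)]   # the map on F_p
    indeg = [0] * p
    for y in f:
        indeg[y] += 1
    # Peel off nodes that cannot lie on a cycle (iteratively remove in-degree-0 nodes).
    stack = [x for x in range(p) if indeg[x] == 0]
    alive = [True] * p
    while stack:
        x = stack.pop()
        alive[x] = False
        y = f[x]
        indeg[y] -= 1
        if indeg[y] == 0:
            stack.append(y)
    # Remaining nodes lie on cycles; walk each cycle exactly once.
    lengths = Counter()
    visited = [False] * p
    for x in range(p):
        if alive[x] and not visited[x]:
            length, y = 0, x
            while not visited[y]:
                visited[y] = True
                y = f[y]
                length += 1
            lengths[length] += 1
    return lengths

if __name__ == "__main__":
    # Example: the quadratic map x^2 + 1 over F_101 (values are illustrative only).
    print(cycle_lengths(101, 2, 1))
```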
arXiv e-prints
10.48550/arXiv.1801.03215
arXiv:
2018arXiv180103215B
Mathematics - Dynamical Systems;
Mathematics - Number Theory;
37P05 (Primary);
37P25;
11R32;
20B35 (Secondary)
E-Print:
19 pages, minor typos corrected
(For c-FastICA) On covariance and pseudocovariance matrix of a complex random vector
I am currently studying complex FastICA and the paper says that
Suppose $\mathbf{s}$ is a $n\times1$ complex random vector. If $\mathbf{s}$ has zero mean, unit variance, and uncorrelated real and imaginary part of equal variances, then $E[\mathbf{s}\mathbf{s}^H]=\mathbf{I}_n$ and $E[\mathbf{s}\mathbf{s}^T]=\mathbf{0}_n$.
I don't quite get how $E[\mathbf{s}\mathbf{s}^H]=\mathbf{I}_n$ and $E[\mathbf{s}\mathbf{s}^T]=\mathbf{0}_n$ come about from the conditions.
We have the covariance matrix as \begin{align} \operatorname{cov}(\mathbf{s}) &= E[\mathbf{s}\mathbf{s}^H]-E[\mathbf{s}]E[\mathbf{s}^H] \\ &= E[\mathbf{s}\mathbf{s}^H]-\mathbf{0}_{n\times1}\mathbf{0}_{1\times n}\\ &= E[\mathbf{s}\mathbf{s}^H]\\ \end{align} and the pseudocovariance \begin{align} \operatorname{pcov}(\mathbf{s}) &= E[\mathbf{s}\mathbf{s}^T]-E[\mathbf{s}]E[\mathbf{s}^T] \\ &= E[\mathbf{s}\mathbf{s}^T]-\mathbf{0}_{n\times1}\mathbf{0}_{1\times n}\\ &= E[\mathbf{s}\mathbf{s}^T]\\ \end{align} I don't quite get how to equate the last line of covariance matrix to identity and the pseucovariance to zero.
If I were to write out the matrix, \begin{align} E[\mathbf{s}\mathbf{s}^H] &=E\left\{\begin{bmatrix} s_1s_1^* & s_1s_2^* &\cdots & s_1s_n^*\\ s_2s_1^* & s_2s_2^* &\cdots & s_2s_n^*\\ \vdots & \vdots &\ddots & \vdots\\ s_ns_1^* & s_ns_2^* &\cdots & s_ns_n^*\\ \end{bmatrix}\right\} \end{align} and \begin{align} E[\mathbf{s}\mathbf{s}^T] &=E\left\{\begin{bmatrix} s_1s_1 & s_1s_2 &\cdots & s_1s_n\\ s_2s_1 & s_2s_2 &\cdots & s_2s_n\\ \vdots & \vdots &\ddots & \vdots\\ s_ns_1 & s_ns_2 &\cdots & s_ns_n\\ \end{bmatrix}\right\} \end{align} I still can't quite figure how all of these eventually becomes identity and zeros.
ica complex-random-variable
Matt L.
Karn Watcharasupat
$$E[s_ks_k^*]=E[|s_k|^2]=1$$
because the complex random variables $s_k$ have zero mean and unit variance. That means that all elements of the main diagonal of $E[\mathbf{s}\mathbf{s}^H]$ equal $1$. Furthermore, with $s_k=x_k+jy_k$ we have
$$\begin{align}E[s_ks^*_l]&=E[x_kx_l+y_ky_l+j(x_ly_k-x_ky_l)]\\&=E[x_kx_l]+E[y_ky_l]+jE[x_ly_k]-jE[x_ky_l]\\&=0,\qquad k\neq l\end{align}$$
because, in the ICA model, the components $s_k$ and $s_l$ ($k\neq l$) are mutually independent (hence uncorrelated) with zero mean, and their real and imaginary parts are uncorrelated, so every expectation in the last line is zero. Consequently, all off-diagonal elements of $E[\mathbf{s}\mathbf{s}^H]$ are zero.
The off-diagonal elements of $E[\mathbf{s}\mathbf{s}^T]$ are zero for the same reason. (There's just a sign difference in the sum of the expectations, but since each of them is zero the result is the same). The main diagonal elements are
$$\begin{align}E[s_k^2]&=E[x_k^2-y_k^2+2jx_ky_k]\\&=E[x_k^2]-E[y_k^2]+2jE[x_ky_k]\\&=0\end{align}$$
because real and imaginary parts are uncorrelated, and they have equal variance, i.e., $E[x_k^2]=E[y_k^2]$.
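As a quick numerical sanity check of the result above (our own sketch; the dimension, sample size, and Gaussian distribution are arbitrary choices, not from the question): draw independent complex samples whose real and imaginary parts are uncorrelated with variance 1/2 each, and estimate both matrices empirically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 200_000                    # vector dimension and number of samples (arbitrary)

# Real and imaginary parts: zero mean, variance 1/2 each, so each entry has unit variance.
x = rng.normal(0.0, np.sqrt(0.5), (n, N))
y = rng.normal(0.0, np.sqrt(0.5), (n, N))
s = x + 1j * y

cov = s @ s.conj().T / N             # sample estimate of E[s s^H]
pcov = s @ s.T / N                   # sample estimate of E[s s^T]

print(np.round(np.abs(cov), 2))      # entries ~1 on the diagonal, ~0 elsewhere (identity)
print(np.round(np.abs(pcov), 2))     # all entries ~0 (zero pseudocovariance)
```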
Matt L.
Sweet cherry fruit cracking: follow-up testing methods and cultivar-metabolic screening
Michail Michailidis1,
Evangelos Karagiannis1,
Georgia Tanou2,
Eirini Sarrou3,
Katerina Karamanoli4,
Athina Lazaridou5,
Stefan Martens6 &
Athanassios Molassiotis ORCID: orcid.org/0000-0001-5118-22441
Rain-induced fruit cracking is a major physiological problem in most sweet cherry cultivars. For an in vivo cracking assay, the 'Christensen method' (cracking evaluation following fruit immersion in water) is commonly used; however, this test does not adequately simulate environmental conditions. Herein, we have designed and evaluated a cracking protocol, named 'Waterfall method', in which fruits are continuously wetted under controlled conditions.
The application of this method alone, or in combination with the 'Christensen method', was shown to be a reliable approach to characterize sweet cherry cracking behavior. Seventeen cherry cultivars were tested for their cracking behavior using both protocols, and identification of primary as well as secondary metabolites was performed in skin tissue using a combined GC–MS and UPLC-MS/MS platform. Significant variations of some of the detected metabolites were discovered and important cracking index–metabolite correlations were identified.
We have established an alternative/complementary method of cherry cracking characterization alongside the Christensen assay.
Sweet cherry (Prunus avium L.) is an important temperate fruit crop and its production is characterized by a short duration of fruit development, from spring until the middle of summer in the Northern hemisphere. Climate change predictions (IPCC, 2013) point to an increasing frequency of excessive rainfall in early-to-middle summer in the Eurasian zone [1], which will result in increased fruit cracking of sweet cherries. In sweet cherry in particular, rain-induced cracking before harvest causes the most significant crop loss in many cherry-producing areas, with enormous commercial losses worldwide [2]. This physiological disorder develops as cracks of the fruit skin after rainfall, sometimes deep into the flesh, affecting the stem end area, the calyx end and the cheeks of the fruit (side cracks) [3]. Although sweet cherry cracking has been investigated for many years, few advances have been made in understanding the metabolic basis of fruit cracking susceptibility in the various cultivars [2].
The basic mechanism that causes skin cracking in sweet cherries, although not completely elucidated, centres on the rapid increase in water absorption by the fruit, either directly through the fruit skin or through the tree vascular system [4]. In general, three types of cherry splitting have been described in the literature: stem end cracks, top end cracks and common lateral cracks [5]. The type of split is determined by the route of water uptake [6].
The cherry fruit peel includes a thin layer of epidermis (1 μm) and up to eight layers of cells, with an overall thickness of 4.5 μm [7]. The epidermis consists of external hydrophobic substances and an inner layer of hydrophilic substances (polyurines and glucans), with a single epidermal layer of cellulose. The density of fruit stomata (85–200 per cm2) is lower than that of leaves (5000–10,000 per cm2) [8]; however, the fruit contains enough pores that are permeable to water [9].
In order to classify sweet cherry cultivars as relatively sensitive, moderately sensitive or resistant to cracking, the cracking index has traditionally been used [5], with minor modifications. The cracking index was determined by immersing the fruits in distilled water for a certain duration and then observing the fruits for surface splits [5]; from here on this assay is referred to as the 'Christensen method'. In this in vivo cracking assay, however, the immersion of the fruits in water does not reflect actual field conditions, where the entire fruit skin surface does not receive the same water pressure as it does during the application of the Christensen method [6]. For this purpose, in this work, an alternative method for determining the cherry fruit cracking index has been developed. In this assay, herein referred to as the 'Waterfall method', the fruits are not immersed in distilled water but are instead continuously wetted with deionized water; such conditions more accurately simulate the water pressure on the fruit surface as well as the water absorption by the fruit skin during rain.
Although information regarding skin tissue metabolism in sweet cherries is scarce, recent studies have investigated the sequence of events that accounts for rain cracking in sweet cherry [10, 11]. Based on the above-mentioned studies, it is well accepted that an early metabolic reprogramming occurs in cherry fruit prior to cracking events; however, no clear consensus yet exists on which compounds act as possible cracking biomarkers. Accordingly, the aim of this study was to compare the two methods, namely the Christensen and Waterfall methods, for determining the fruit cracking index using seventeen sweet cherry cultivars that displayed different cracking behavior. An additional purpose of this work was to explore the metabolic changes in these cultivars and their possible association with the cracking features obtained by the two assays. Thus, this work has the potential to provide a useful assay method for fruit cracking evaluation and a large-scale monitoring analysis of the sweet cherry skin metabolome in relation to the cracking process.
Plant material and sampling procedure
Fruits of seventeen sweet cherry cultivars at commercial harvest stage were collected (Fig. 1a) and evaluated for susceptibility to fruit cracking. The experiment was conducted in an experimental sweet cherry orchard of the Farm of Aristotle University of Thessaloniki (Thermi, Thessaloniki) and in a commercial orchard (Agras, Pellas region) during the 2017 growing season. The tested cultivars were 'Early Bigi', 'Early Star', 'Sweet Early', 'Carmen', 'Grace Star', 'Krupnoplodnaja', 'Blaze Star', 'Aida', 'Ferrovia', 'Skeena', 'Lapins', 'Bakirtzeika', 'Samba', 'Tsolakeika', 'Tragana Edesis', 'Stella' and 'Regina'. The two orchards consisted of 14-year-old trees, planted at 5 × 5 m spacing between and along the rows, grafted onto 'Mahaleb' rootstock, trained in open vase and subjected to standard cultural practices. Fruit were picked at full maturity based upon size, color and commercial picking dates in each area. About one thousand fruits of each cultivar were harvested; 60 of them were randomly divided into three 20-fruit sub-lots, the skin was then separated from the flesh, immersed in liquid nitrogen and stored at − 80 °C for metabolite analysis. The experiment followed a completely randomized design (CRD). In particular, we used 5 replicates of 15 fruits for each cracking method, 150 fruits for the fruit water absorption and the main cracking observations, 5 replicates of 10 fruits for the physio-biochemical traits, and 30 fruits for the textural properties of the skin.
Plant material and experimental design used. a Phenotypes of seventeen sweet cherry cultivars harvested at commercial ripening stage. b Graphical presentation of two applied methods (Christensen and Waterfall) for the determination of skin cracking index in sweet cherry fruits
Cracking estimation methods
Susceptibility of the sweet cherries to fruit cracking was estimated using the Christensen and Waterfall methods, in order to evaluate the fruit cracking index. A graphical representation of both methods is shown in Fig. 1b, while a description of these assays is outlined below.
Christensen method: Skin cracking was determined as described, with some modifications [5]. Fruits of each cultivar (5 replicates × 15 fruits) were weighed and immersed in distilled water at room temperature for a total period of 6 h, with cracking incidence recorded hourly. The results were expressed as the percentage of cultivar cracking susceptibility by the following equation:
$$\left[\frac{\sum_{i=1}^{6}\left(7-i\right)\times\left(\text{cracked fruits in hour } i\right)}{6\times\text{total fruits}}\right]\times 100.$$
According to the fruit cracking index values obtained with the Christensen method, all cultivars were divided into four groups: low susceptible (cracking index lower than 10.0), moderately susceptible (cracking index from 10.1 to 30.0), susceptible (cracking index from 30.1 to 50.0) and highly susceptible (cracking index higher than 50.0) [5].
Waterfall method: Fruits of each cultivar (5 replicates × 15 fruits) were weighed and hung from the stem in a transparent plastic chamber (56 × 39 × 28 cm) equipped with a temperature/relative humidity data logger (HOBO H08-003-02). Drops of distilled water were pipetted (3 mL plastic pipette) onto the hanging sweet cherry fruits for a total period of 12 h, with cracking incidence recorded hourly for the first 4 h and every 2 h thereafter. The results were expressed as the percentage of cultivar cracking susceptibility by the modified equation:
$$\left\{\frac{\sum_{i=1}^{4}\left(13-i\right)\times\left(\text{cracked fruits in hour } i\right)+\sum_{n=3}^{6}\left(13-2n\right)\times\left(\text{cracked fruits in hour } 2n\right)}{12\times\text{total fruits}}\right\}\times 100.$$
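For readers who want to compute the two indices from raw counts, the following short script (our own illustration; the function names and the example numbers are invented) encodes the weighting of the two equations above, together with the Christensen susceptibility groups described earlier.

```python
def christensen_index(cracked_per_hour, total_fruits):
    """cracked_per_hour: six hourly counts of newly cracked fruits (hours 1..6)."""
    weighted = sum((7 - i) * c for i, c in enumerate(cracked_per_hour, start=1))
    return 100.0 * weighted / (6 * total_fruits)

def waterfall_index(cracked_hours_1_to_4, cracked_hours_6_8_10_12, total_fruits):
    """First list: hourly counts for hours 1-4; second list: counts at hours 6, 8, 10, 12."""
    part1 = sum((13 - i) * c for i, c in enumerate(cracked_hours_1_to_4, start=1))
    part2 = sum((13 - 2 * n) * c for n, c in zip(range(3, 7), cracked_hours_6_8_10_12))
    return 100.0 * (part1 + part2) / (12 * total_fruits)

def christensen_group(index):
    """Susceptibility classes used for the Christensen cracking index."""
    if index <= 10.0:
        return "low susceptible"
    if index <= 30.0:
        return "moderately susceptible"
    if index <= 50.0:
        return "susceptible"
    return "highly susceptible"

# Illustrative numbers only (15 fruits per replicate, as in the protocol).
ci = christensen_index([0, 1, 2, 3, 2, 1], total_fruits=15)
wi = waterfall_index([0, 0, 1, 2], [2, 1, 1, 0], total_fruits=15)
print(round(ci, 1), christensen_group(ci), round(wi, 1))
```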
Physio-biochemical traits of cherry fruit
Determination of the physiological traits of each cultivar, namely the fresh weight of ten fruits and the percentage fresh weight of stem, flesh and skin in five replicates of ten fruits, was performed at harvest (Additional file 1: Table S1). Fifty fruits were randomly divided into five 10-fruit sub-lots and analyzed for total soluble solids (TSS), titratable acidity (TA) and external color (Chroma and Hue angle) at harvest, as described [12].
Textural properties of skin tissue
Skin penetration force (in N) for each cultivar (thirty fruits) was measured at harvest using a TA-XT2i Texture Analyzer (Stable Microsystems, Godalming, Surrey, UK), as described in detail [13], following a four-side skin penetration protocol (upper side, suture, apical and near stem) (Additional file 2: Table S2). Total skin penetration was calculated as the average of the four side measurements (Additional file 2: Table S2).
Fruit water absorption
Sweet cherry fruits were weighed before coming into contact with water and again just after the end of each method (Christensen and Waterfall), after removal of surface water by centrifugation and air flow for 1 min. The results were expressed as the percentage of fruit water absorption.
Observations for main cracking
Just after the end of the Christensen and Waterfall cracking estimation methods, the four sides of the fruits (upper side, suture, apical and near stem) were observed to determine the main type of cracking for each cultivar (Additional file 2: Table S2). For each cultivar, the side or sides showing a significant difference among sides, based on Duncan's Multiple Range Test, were taken to represent the main type of skin cracking.
Primary metabolites analysis in skin tissue by GC–MS
Frozen ground skin tissue (0.3 g) of each cultivar at harvest was transferred into 2-mL screw-cap tubes. Determination of primary polar metabolites was conducted as described, with slight modifications [14]. To each sample, 1400 μL methanol and 100 μL adonitol (0.2 mg mL−1) were added, followed by incubation for 10 min at 70 °C. The supernatant after centrifugation was retained, and 750 μL chloroform and 1500 μL dH2O were added. A 150 μL aliquot of the upper polar phase was dried, re-dissolved in 40 μL methoxyamine hydrochloride (20 mg mL−1, 120 min, 37 °C), derivatized with 70 μL N-methyl-N-(trimethylsilyl) trifluoroacetamide reagent (MSTFA) and incubated (30 min at 37 °C). The GC–MS analysis was carried out with a Thermo Trace Ultra GC equipped with an ISQ MS and a TriPlus RSH autosampler (Switzerland) [15]. Samples (1 μL) were injected with a split ratio of 70:1. A TR-5MS capillary column (30 m × 0.25 mm × 0.25 μm) was used. The injector temperature was 220 °C, the ion source 230 °C and the interface 250 °C. The carrier gas flow rate was 1 mL min−1. The temperature program was held at 70 °C for 2 min, then increased to 260 °C (rate 8 °C min−1), where it remained for 18 min; m/z 50–600 was recorded. Standards were used for peak identification, or the NIST11 and GOLM metabolome database (GMD) in the case of unknown peaks [16]. The detected metabolites were assessed based on the relative response compared to adonitol and expressed as relative abundance (Additional file 3: Table S3). Experiments were performed using three biological replicates.
Secondary metabolites analysis in skin tissue by UPLC–MS/MS
Polyphenolic extraction from sweet cherry skin at harvest was performed as previously described [17]. Freeze-dried sweet cherry tissue (100 mg, mesocarp-exocarp) was mixed with 4 mL methanol (80%) in a 15-mL falcon tube. The mixture was sonicated (20 min), shaken (3 h, 20 °C) and incubated at 4 °C (overnight) in the dark. The secondary metabolite extract was obtained following filtration through a 0.22 µm PTFE membrane into a glass vial.
Targeted ultra-performance liquid chromatography-tandem mass spectrometry (UPLC–MS/MS) was performed on a Waters Acquity system (Milford, MA, USA) consisting of a binary pump, an online vacuum degasser, an autosampler, and a column compartment. Separation of the phenolic compounds was achieved on a Waters Acquity HSS T3 column (1.8 μm, 100 mm × 2.1 mm), kept at 40 °C. The phenolic analysis was performed as previously described [18]. Anthocyanins were quantified using the method previously described [19, 20] on a RP Acquity UPLC® BEH C18 column (130 Å, 2.1 × 150 mm, 1.7 µm, Waters), protected with an Acquity UPLC® BEH C18 pre-column (130 Å, 2.1 × 5 mm, 1.7 µm, Waters).
Mass spectrometry detection was carried out on a Waters Xevo TQMS instrument equipped with an electrospray (ESI) source. Data processing was carried out using the Mass Lynx Target Lynx Application Manager (Waters). Three biological replications were used for each cultivar. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Statistical analysis of the 17 cultivars was conducted using SPSS (SPSS v23.0, Chicago, USA) by multivariate analysis of variance (MANOVA) and is provided in the Additional Tables. Mean values of all cultivars tested were compared by the least significant difference (LSD) for quality traits based on 5 biological replicates of 15 fruits, apart from the textural properties, for which 30 individual fruits were used (P ≤ 0.05). For metabolite analysis, 3 independent biological replicates were used (P ≤ 0.05). The cultivars were grouped based on the two cracking estimation methods using hierarchical cluster analysis (nearest neighbor, squared Euclidean distance). Spearman correlation analysis of all cultivars among the cracking indexes and the other variables was also performed (Additional file 4: Fig. S1).
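A minimal sketch of the grouping and correlation steps described above, assuming a pandas DataFrame with one row per cultivar and columns holding the two cracking indices and the measured traits (this is our illustration in Python with SciPy, not the authors' SPSS workflow):

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# df: rows = cultivars, columns include 'christensen', 'waterfall' and trait/metabolite values.
def cluster_cultivars(df, n_groups=4):
    # Nearest-neighbour (single linkage) clustering on squared Euclidean distances
    # computed from the two cracking indices.
    d = pdist(df[["christensen", "waterfall"]].values, metric="sqeuclidean")
    tree = linkage(d, method="single")
    return fcluster(tree, t=n_groups, criterion="maxclust")

def cracking_correlations(df, index_col="waterfall"):
    # Spearman correlation of every other column against the chosen cracking index.
    rows = []
    for col in df.columns:
        if col == index_col:
            continue
        rho, p = spearmanr(df[index_col], df[col])
        rows.append((col, rho, p))
    return pd.DataFrame(rows, columns=["variable", "spearman_rho", "p_value"])
```

The first helper assigns each cultivar to one of the requested groups; the second returns a table of Spearman coefficients and p values against the chosen index.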
Cultivars fruit quality traits at harvest
To study the sweet cherry fruit cracking responses, we selected seventeen representative and frequently planted cultivars in Greece. The soluble solids concentration (SSC) of sweet cherry fruits at the ripening stage (Fig. 1a) ranged from 11.27% ('Stella') to 19.63% ('Skeena') Brix (Fig. 2a). The acidity of the cultivars ranged from 0.51 to 1.4 (%, malate); the lowest and highest acid concentrations were detected in 'Early Bigi' and 'Tsolakeika', respectively (Fig. 2b). The color indexes, Chroma and Hue angle, of the cherries ranged from 3.9 ('Tragana Edesis') and 6.6 ('Regina') to 39.5 ('Stella') and 27.0 ('Stella'), respectively (Fig. 2c, d). The average weight (g) of ten fruits per cultivar ranged from 60.9 to 123.9; the minimum weight was recorded for 'Early Bigi' and the maximum for 'Krupnoplodnaja' (Fig. 2e). The relative weights of stem, flesh and skin, expressed as percentages of the whole cherry fruit, ranged for stem (Fig. 2f) from 0.8 ('Samba') to 1.8 ('Early Bigi'), for flesh (Fig. 2g) from 82.1 ('Skeena') to 88.3 ('Grace Star') and for skin (Fig. 2h) from 7.5 ('Sweet Early') to 12.8 ('Skeena').
Physiological characterization of the various sweet cherry cultivars. Determination of soluble solids concentration (SSC) a, titratable acidity (TA) b, color index Chroma c, color index Hue angle d, weight of ten fruits e, weight of stem (%) f, weight of flesh (%) g, weight of skin (%) h. Each value represents the mean of five replications × ten fruits. Vertical bars represent SD. Differences among cultivars were detected based on the least significant difference (LSD); P ≤ 0.05. Arithmetic data are provided in Additional file 1: Table S1
Texture properties of the skin tissue
Side skin penetration force among cultivars ranged from 0.17 ('Early Bigi') to 0.47 N ('Skeena') (Fig. 3a). Suture skin penetration force ranged from 0.24 ('Early Bigi') to 0.55 N ('Early Star'). Also, the apical skin penetration force ranged from 0.24 ('Early Bigi' and 'Tragana Edesis') to 0.48 N ('Tsolakeika'). Stem skin penetration force ranged from 0.15 ('Sweet Early') to 0.42 N ('Tsolakeika'). Finally, the total skin penetration force ranged from 0.2 ('Early Bigi') to 0.46 N ('Tsolakeika') (Fig. 3a).
Cracking-related features of sweet cherry cultivars in response to the Christensen and Waterfall methods. Texture properties of the various sweet cherry cultivars expressed as skin penetration force at side, suture, apical, stem and total a. Fruit cracking index tested with the Christensen and Waterfall methods b. Fruit water absorption (%) as evaluated by the Christensen and Waterfall methods c. The observed main cracking classes (none = zero, stem = one, apical = two, and both stem and apical = three) of sweet cherry cultivars subjected to the Christensen and Waterfall methods d. Each value represents the mean of 5 replications × 15 fruits. Vertical bars represent SD. Means of cultivars are compared with the least significant difference (LSD; P ≤ 0.05). Boxplot and cluster analysis of the 17 cultivars were performed on the fruit cracking index with the Christensen and Waterfall methods. Arithmetic data are provided in Additional file 2: Table S2
Cracking index and fruit water absorption of cherry cultivars
The application of both cracking methods (Christensen and Waterfall) showed that 'Regina' was the most resistant to skin cracking among the cultivars tested (Fig. 3b). Based on the Christensen method, eleven cherry cultivars were evaluated as susceptible to cracking, with a cracking index higher than 70% (Fig. 3b). These relatively cracking-sensitive cultivars include 'Early Bigi', 'Early Star', 'Sweet Early', 'Carmen', 'Grace Star', 'Krupnoplodnaja', 'Blaze Star', 'Aida', 'Ferrovia', 'Skeena' and 'Lapins'. Using the Waterfall method, five cultivars, namely 'Early Bigi', 'Sweet Early', 'Carmen', 'Grace Star' and 'Krupnoplodnaja', with a cracking index higher than 50% were identified (Fig. 3b) based on cluster analysis. Furthermore, sharp water absorption, exceeding 4% of the total fruit weight, was recorded in 'Grace Star', 'Krupnoplodnaja', 'Aida' and 'Tsolakeika' based on the Christensen method (Fig. 3c). Three cultivars ('Sweet Early', 'Krupnoplodnaja' and 'Aida') with water absorption higher than 2% of total fruit weight were found using the Waterfall method (Fig. 3c). Differences between the two tested methods concerning the type of cracking were also documented (Fig. 3d). For instance, in the cultivars 'Early Bigi' and 'Early Star' the main type of cracking was near the stem based on the Christensen method (Fig. 3d); in contrast, in the same cultivars the main type of cracking was apical according to the Waterfall method (Fig. 3d).
Cultivar-specific primary metabolic profile of skin tissue
As it is challenging to metabolically monitor the factors involved in fruit cracking, we next studied whether skin-derived metabolite profiling could be an effective tool to understand this physiological disorder, using GC–MS analysis. Sixty-five polar metabolites were quantified in the skin tissue of the seventeen sweet cherry cultivars prior to the application of both cracking assays (Additional file 3: Table S3). In terms of chemical composition, and considering all cultivars analyzed, the skin metabolomic profile includes soluble sugars (twenty), sugar alcohols (nine), organic acids (fifteen), amino acids (sixteen) and other compounds (five) (Fig. 4). Among the skin tissues of the 17 cultivars screened, the highest contents of soluble sugars, organic acids, amino acids and total alcohols were recorded in 'Skeena', 'Lapins', 'Tsolakeika' and 'Skeena', respectively. In cherry skin tissue, glucose and fructose, malate, γ-aminobutyric acid (GABA), and sorbitol represent the main part of the sugars, organic acids, amino acids and total alcohols, respectively (Fig. 4, Additional file 3: Table S3). Most metabolites (54% of them) were detected in all cultivars, with some exceptions that were enriched only in specific cultivars. For instance, the highest levels of maltose were detected in 'Aida' and 'Ferrovia', while mannobiose was exclusively high in two Greek-originated cultivars: 'Bakirtzeika' and 'Tsolakeika'.
Heat map diagram of primary metabolites in the skin samples of sweet cherry cultivars. Metabolites are divided into soluble sugars a, amino acids b, organic acids c, soluble alcohols d and other compounds e. Based on the grand mean log2, an increase is depicted as red and a decrease is depicted as green (see color scale). Metabolite abundances were expressed as relative abundance compared to the internal standard adonitol. Each metabolite is represented by 20 sweet cherry skins in three biological replications. Data are provided in Additional file 3: Table S3
The proportions of individual primary metabolites in skin tissue showed significant variation. Six primary metabolites had their highest concentration in the skin of 'Early Bigi' (oxalate, xylose, arabinose, ribose, mannitol and myo-inositol) and another six in the skin of 'Samba' (putrescine, galactose, gluconate, pantothenate, sucrose and lactose). In addition, ethanolamine, rhamnose, fucose, threose and raffinose were most abundant in 'Stella'. Cultivar 'Early Star' was particularly rich in phenylethanolamine, serine, threonine and aucubin (from the other compounds), while 'Bakirtzeika' displayed high levels of alanine, phosphorate, lactitol and mannobiose. Furthermore, β-alanine, aucubin, maltose, inosose and asparagine had the highest abundance in the skin of the cultivars 'Sweet Early', 'Carmen', 'Ferrovia', 'Tragana Edessis' and 'Grace Star', respectively (Fig. 4, Additional file 3: Table S3). Isoleucine, succinate and aucubin were highly accumulated in 'Krupnoplodnaja'. The cultivar with the highest malate, sorbitol and tyrosine was 'Lapins', while 'Regina' showed the highest levels of glycerol and quinate (Fig. 4, Additional file 3: Table S3). Much higher levels of proline, glycine, glycerate, arginine, xylitol, arabitol, fucitol and ascorbate were found in the skin of 'Aida'. Cystothionine, glutaconate, aspartate, 2-oxoglutarate, GABA, pyroglutamate, threonate, cysteine, 2-isopropylmalate and trehalose had their highest concentration in 'Tsolakeika'. The statistically highest contents of eleven primary metabolites, namely fructose, shikimate, citrate, talose, mannose, glucose, galacturonate, tryptophan, cellobiose, maltitol and maltotriose, were measured in 'Skeena' (Fig. 4, Additional file 3: Table S3).
Skin secondary metabolites profile among cultivars
Another major target of interest in this study was the pattern of metabolites associated with secondary metabolism in the various cherry cultivars. The UPLC–MS/MS analysis confirmed the presence of thirty-five polyphenolic compounds in the skin samples (Additional file 5: Table S4); these secondary metabolites correspond to six anthocyanins and twenty-nine other polyphenols (Fig. 5). Generally, 'Krupnoplodnaja' had the lowest content of total polyphenols, in contrast to 'Tsolakeika', which presented a significant enrichment in these compounds (Fig. 5, Additional file 5: Table S4). Also, the content of skin anthocyanins was highest in 'Sweet Early' and lowest in 'Carmen' (Fig. 5, Additional file 5: Table S4).
Heat map diagram of polyphenolic compounds in the skin samples of sweet cherry cultivars. Based on the grand mean log2, an increase is depicted as red and a decrease is depicted as green (see color scale). Metabolite abundances were expressed as mg 100 g−1 freeze-dried tissue. Each metabolite is represented by thirty sweet cherry skins in three biological replications. Data are provided in Additional file 5: Table S4
Significant differences in the polyphenolic composition of the cultivars were noted (Fig. 5). For instance, the polyphenolic compounds vanillin, vanillic acid, naringenin, catechin, procyanidin B1, taxifolin (syn. dihydroquercetin), kaempferol-3-O-glucoside, quercetin-3-O-galactoside and rutin (syn. quercetin-3-O-rutinoside) had the highest concentration in the skin of 'Early Bigi'. The 'Sweet Early' skin contained significantly higher levels of protocatechuic acid, quercetin, cyanidin-3-O-glucoside, cyanidin-3-O-galactoside, cyanidin-3-O-arabinoside and cyanidin-3-O-sambinoside. Also, neochlorogenic acid, cryptochlorogenic acid, chlorogenic acid, epicatechin, kaempferol-3-O-rutinoside and isorhamnetin-3-O-rutinoside were more abundant in the skin of 'Tsolakeika' (Fig. 5, Additional file 5: Table S4). The cultivar 'Krupnoplodnaja' exhibited the highest concentrations of p-coumaric acid, caffeic acid and ferulic acid. Very high levels of 5-dihydrobenzoic acid, phloridzin and procyanidin B2 + B4 were measured in 'Regina'. In addition, the cultivars 'Early Star' and 'Samba' had the highest abundance of ellagic acid and peonidin-3-O-galactoside as well as of dihydrokaempferol and arbutin, respectively (Fig. 5, Additional file 5: Table S4). The cultivars 'Carmen', 'Tragana Edessis' and 'Stella' contained considerably higher amounts of naringenin, cyanidin-3-O-rutinoside and quercetin-3,4-O-diglucoside, respectively (Fig. 5, Additional file 5: Table S4).
Physiological traits and skin metabolites in relation to cracking
To elucidate the connection between skin metabolites and cracking in the different cultivars, we conducted a correlation analysis. Highly positive correlations were observed between water absorption and the skin cracking index assessed using both the Christensen and Waterfall methods; the statistical significance was cultivar-specific, as depicted in Fig. 6a. Furthermore, a strong negative correlation was detected for skin samples between the cracking index and two physiological traits, skin weight and penetration force around the fruit stem (Fig. 6a). The metabolites sucrose, total soluble sugars (Fig. 6b), galacturonate (Fig. 6b), glycerol, mannitol and myo-inositol (Fig. 6b) were negatively correlated with cracking assessed with both tested methods. Interestingly, the compounds fucose (Fig. 6b) and taxifolin (Fig. 6c) displayed the strongest positive correlations with the cracking index, as evaluated by both assays. Also, negative correlations were recorded between the metabolites xylose, arabinose, ribose (Fig. 6b), pantothenate (Fig. 6b), phloridzin, epicatechin, procyanidin B1 and procyanidin B2 + B4 (Fig. 6c) and the cracking index using the Christensen method. On the other hand, a positive correlation was detected between asparagine and cracking following the Christensen method (Fig. 6b). Using the Waterfall method, negative correlations among fructose, mannose, glucose, trehalose (Fig. 6b), total alcohols (Fig. 6b) and the cracking index were determined. In addition, the cracking index based on the Waterfall method was positively correlated with phenylethanolamine (Fig. 6b). It is worth noting that the two methods showed a strong positive correlation regarding the cracking index (Fig. 6d).
Heat map diagram of Spearman correlations between the two cracking assessment methods (Christensen and Waterfall) and physiological traits a, primary polar metabolites b and secondary metabolites c. Correlation between the Christensen and Waterfall methods regarding cracking index evaluation d. Positive correlation is depicted as blue and negative correlation as yellow (see color scale). Black color in the p value boxes indicates the significance of the tested correlation
Fruit cracking is a serious economic problem in sweet cherry production worldwide [10]. This phenomenon is mainly caused by rainfall during the harvest period and is related to osmotic influences and fruit water permeability [21]. Moreover, the factors involved in the differential resistance among cultivars are still unknown [11]. To characterize sweet cherry cracking, we used seventeen cultivars with high commercial value [22] that exhibited different susceptibilities to cracking, including relatively cracking-tolerant (e.g., 'Regina') and cracking semi-tolerant (e.g., 'Lapins') cultivars [3]. All cultivars were sampled at commercial harvest stage (Figs. 1 and 2) and were then directly assessed in the laboratory for cracking by submerging their fruit in distilled water (Christensen method) [5]. However, this method has raised concerns in the scientific community because the immersion of sweet cherries in water alters their ionic balance and disturbs skin surface permeability; as a result, in situ cracking conditions may differ from in vitro cracking [23]. To overcome this limitation, we designed the Waterfall method, which simulates the deposition of water on the fruit after rainfall, thereby creating an artificial in vitro environment that mimics natural rainfall-induced cracking conditions in a fully automated manner.
Our experimental results showed that the expression of the main cracking symptoms varies depending on the evaluation method applied (Fig. 3d). For example, we noticed that the main type of cracking in the cultivars 'Early Bigi' and 'Early Star' was near the stem according to the Christensen method (Fig. 3d), whereas in the same cultivars cracking was expressed near the fruit apex using the Waterfall method (Fig. 3d). The current data further revealed that the Christensen method exhibited a lower discriminating ability for cracking evaluation than the Waterfall method. Indeed, the Christensen method grouped eleven cultivars with a cracking index higher than 70% (Fig. 3b), while the Waterfall method listed five cultivars with a cracking index higher than 50% (Fig. 3b). These data illustrate that six cultivars ('Early Star', 'Blaze Star', 'Aida', 'Ferrovia', 'Skeena', 'Lapins') that were grouped as relatively cracking-susceptible by the Christensen method were characterized as relatively cracking-resistant by the Waterfall method (Fig. 3b).
Water absorption of sweet cherries at harvest can be influenced by water temperature, water potential [24], cuticle composition [11] and the mineral concentration of the water [25]. In addition, the current data underline the significance of genotype for water absorption by sweet cherry fruit. In both methods a sharp absorption of water was determined in the cultivars 'Krupnoplodnaja' and 'Aida' (Fig. 3c). Fruit water absorption exceeded 4% in the cultivars 'Tsolakeika' and 'Grace Star' as estimated by the Christensen method (Fig. 3c), while water absorption higher than 2% of total weight was determined in 'Sweet Early' as assessed by the Waterfall method (Fig. 3c).
As skin metabolism is closely linked to cracking [26], we used central metabolism analysis of skin samples from the different cultivars to unravel its relationship with the cracking index. Notably, this analysis revealed that trehalose was not detected in cracking-sensitive cultivars but was highly accumulated in the cracking-resistant ones (Fig. 4a). In addition, trehalose was strongly correlated with cracking as determined by the Waterfall method (Fig. 6b), indicating that cultivars with a higher concentration of trehalose are more resistant to cracking. Trehalose production seems to be exclusively reserved for stress-resistant plants living under unfavorable conditions [27], including water stress. Increasing evidence indicates that trehalose metabolism is important for improved stress tolerance. Indeed, overexpression of trehalose biosynthetic genes results in a more sensitive reaction of the stomatal guard cells and closing of the stomata under water stress conditions [28]. Interestingly, an up-regulation of several genes involved in trehalose metabolism has been observed in 'African Pride' atemoya during various cracking stages [29]. Taking into account that osmotic water uptake through the skin is an important factor for rain-induced cracking of sweet cherries [30], along with the excellent capacity of trehalose to protect membranes and proteins from degradation [31], we assume that trehalose may not only be directly involved in cracking but may also regulate this process by mediating crosstalk with osmoprotectant-related compounds such as sugars [32]. In support of this hypothesis, sucrose as well as several soluble sugars, such as glucose and fructose, were negatively correlated with cracking (Figs. 4a and 6b, respectively). The accumulation of these sugars in the fruit skin plays an osmoregulatory role and would decrease fruit permeability, thereby allowing less water entry into the sweet cherry fruit when exposed to water stress conditions such as rain-induced cracking [11].
Another interesting finding that emerged from this work is that the neutral sugars xylose and arabinose displayed a negative correlation with cracking symptoms, as evaluated by the Christensen method (Fig. 6b). A clear connection among xylose, arabinose and cracking expression patterns in sweet cherry has recently been established [13], further supporting the role of these metabolites in fruit cracking. Our data also demonstrated that several soluble alcohols, such as glycerol, mannitol and inositol, were negatively correlated with cracking rate, independently of the applied method (Figs. 4d, 6b). It was noteworthy that with both cracking assessment methods galacturonate, which participates at high levels in pectin formation [33], was negatively correlated with cracking (Fig. 6b). The accumulation of galacturonate in skin tissues could be associated with the activation of β-galactosidase and the solubilization of pectin [34]. This observation is consistent with previously obtained results [35], which showed that the up-regulation of various cell-wall genes, such as β-galactosidase, in sweet cherry fruit contributes to a greater flexibility and elasticity of the skin, which in turn is reflected by a lower percentage of cracking. In this regard, cracking-susceptible sweet cherry cultivars release higher levels of soluble pectin fractions under hydrocooling conditions [25]. In line with this, the high level of galacturonate in cracking-resistant cultivars (Fig. 4c) could also be advocated as a precursor compound for protopectin, which may contribute to early pectin formation [29]. Therefore, factors influencing galacturonate metabolism could also influence the extent of fruit cracking. In parallel, fucose, which also participates in pectin structure [36] and is metabolically incorporated into cell walls [37], showed a strong positive correlation with the cracking assessment using both assays (Fig. 6b), suggesting that pectin metabolism is linked to the cracking phenomenon. To this end, we noted that pantothenate and asparagine were both correlated with cracking as determined by the Christensen method (Fig. 6b). Pantothenate (vitamin B5) is the precursor for the synthesis of enzyme co-factors essential for key metabolic and energy-yielding pathways, such as fatty acid metabolism [38]. Asparagine is a reservoir for nitrogen in plants and accumulates under adverse conditions such as biotic and abiotic stresses [39]. However, further research is needed to unravel the function of these metabolites in cracking.
In addition to primary metabolism, we also targeted pathways associated with secondary metabolites, such as polyphenolic compounds, that are known to be important in sweet cherry fruit biology [12, 15]. An earlier study [40] revealed a high variability of secondary metabolites among different sweet cherry cultivars. In the current study, thirty-five phenolic compounds, including six anthocyanins and other phenolic classes, were determined in cherry skin samples at various concentrations (Fig. 5, Additional file 5: Table S4). Previous work pinpointed the high connectivity of polyphenolic compounds with cracking-sensitive citrus cultivars [41]. Accordingly, our results disclosed that several polyphenols, such as phloridzin, epicatechin, procyanidin B1 and B2 + B4, showed a negative correlation with cracking using the Christensen method (Fig. 6c), suggesting that this physiological disorder is controlled to a high degree by cultivar-specific polyphenolic metabolic regulation.
One curious observation in this study is that the cracking index was positively correlated with the flavonoid taxifolin in the skin of the sweet cherry cultivars, irrespective of the tested method (Fig. 6), which might signify that taxifolin could be considered a marker of fruit cracking. Taxifolin (3,5,7,3,4-pentahydroxyflavanone or dihydroquercetin) accumulates to high levels in grape and orange fruits [42], and many studies have pointed out that taxifolin possesses strong anti-oxidant and anti-radical activity in various cell systems [43, 44]. However, as taxifolin is a common intermediate in the flavonoid/anthocyanin pathway and was only detected in minor concentrations (compared to those found in the other plant species cited here), it is unlikely to have this effect on cracking, and/or more detailed studies are needed to elucidate its function. Given that the excessive water uptake by the fruit during the cracking process induces the generation of reactive oxygen species (ROS) [45], this study could suggest that taxifolin, but also other polyphenolic compounds found here, has the potential to diminish cracking levels in cherry skin, possibly by scavenging ROS.
The current data indicate that the proposed Waterfall assay could be more reliable than the Christensen method for determining the skin cracking of sweet cherry cultivars. This is supported by the fact that this assay displayed a high negative correlation between skin weight and cracking index (Fig. 6a), a situation that can mimic rain-induced fruit cracking under field conditions. In addition, a negative pattern between cracking assessment and skin penetration force near the stem was observed using the Waterfall approach (Fig. 6a). Hence, the Waterfall method could provide a more effective cracking-based classification of cherry cultivars. Moreover, the strong positive correlation between the two assays (Fig. 6d) indicates the existence of a significant relationship between the two methods, as also demonstrated by the relatively common pattern of cracking classification among several cultivars (Fig. 6). Despite all this, the most important advantages of the Christensen method remain that this assay is short, time-saving and suitable for automation. Based on the insights gained in this study, we conclude that a combination of both assays (Christensen and Waterfall methods) enables considerably more robust and accurate cracking results to be obtained compared to a single test using one method, at least for the studied cultivars. This combined laboratory-based methodology allows a fast screen of many different cultivars, in order to select potential cracking-resistant candidates while avoiding long and expensive field tests. We also investigated whether the metabolic composition of the skin tissue among the selected cultivars has an impact on the cracking index tested by the two assays. This metabolic approach suggests that changes in the content of a few metabolites, such as fucose and possibly taxifolin (or other polyphenols), could be correlated with the observed differences in cracking among the cultivars studied. Future investigations may focus on dissecting the metabolome of sweet cherry cultivars at different developmental stages, as well as on connecting the metabolic variations to genomic/proteomic changes. Altogether, this work sheds light on the cracking evaluation process in cherry fruit and provides novel information that may be used in future molecular breeding efforts to improve sweet cherry fruit resistance to cracking.
All data generated in this study are included in this article and additional files. Material is available from the corresponding author on reasonable request.
Piao S, Wang X, Ciais P, Zhu B, Wang T, Liu J. Changes in satellite-derived vegetation growth trend in temperate and boreal Eurasia from 1982 to 2006. Glob Chang Biol. 2011;17:3228–39. https://doi.org/10.1111/j.1365-2486.2011.02419.x.
Correia S, Schouten R, Silva AP, Gonçalves B. Sweet cherry fruit cracking mechanisms and prevention strategies: A review. Sci Hortic (Amsterdam) [Internet]. 2018;240:369–77. https://linkinghub.elsevier.com/retrieve/pii/S0304423818304345. Accessed 5 Sept 2019.
Balbontín C, Ayala H, Bastías RM, Tapia G, Ellena M, Torres C, et al. Cracking in sweet cherries: a comprehensive review from a physiological, molecular, and genomic perspective. Chil J Agric Res. 2013;73:66–72.
Knoche M. Water uptake through the surface of fleshy soft fruit: barriers, mechanism, factors, and potential role in cracking. Abiotic Stress Biol Hortic Plants. 2015. https://doi.org/10.1007/978-4-431-55251-2_11.
Christensen VJ. Cracking in cherries. III. Determination of cracking susceptibility. Acta Agric Scand. 1972;22:128–36. https://doi.org/10.1080/00015127209433471.
Measham PF, Gracie AJ, Wilson SJ, Bound SA. Vascular flow of water induces side cracking in sweet cherry (Prunus avium L.) [Internet]. Adv. Hortic. Sci. Dipartimento Di Scienze Delle Produzioni Vegetali, Del Suolo E Dell'Ambiente Agroforestale – DiPSA – University of Florence; 2011. p. 2010–1. http://www.jstor.org/stable/42883522. Accessed 27 Sept 2018.
Bargel H, Spatz HC, Speck T, Neinhuis C. Two-dimensional tension tests in plant biomechanics—sweet cherry fruit skin as a model system. Plant Biol. 2004;6:432–9. https://doi.org/10.1055/s-2004-821002.
Glenn GM, Poovaiah BW. Cuticular properties and postharvest calcium applications influence cracking of sweet cherries. J Am Soc Hortic Sci [Internet]. 1989;114:781–8. https://ci.nii.ac.jp/naid/20001124607/. Accessed 27 Sept 2018.
Weichert H, Knoche M. Studies on water transport through the sweet cherry fruit surface. 11. FeCl3 decreases water permeability of polar pathways. J Agric Food Chem [Internet]. 2006;54:6294–302. https://pubs.acs.org/sharingguidelines. Accessed 27 Sept 2018.
Moing A, Renaud C, Christmann H, Fouilhaux L, Tauzin Y, Zanetto A, et al. Is there a relation between changes in osmolarity of cherry fruit flesh or skin and fruit cracking susceptibility? J Am Soc Hortic Sci [Internet]. American Society for Horticultural Science; 2019;129:635–41. http://journal.ashspublications.org/content/129/5/635.short. Accessed 14 Oct 2016.
Rios JC, Robledo F, Schreiber L, Zeisler V, Lang E, Carrasco B, et al. Association between the concentration of n-alkanes and tolerance to cracking in commercial varieties of sweet cherry fruits. Sci Hortic. 2015;197:57–65.
Michailidis M, Karagiannis E, Tanou G, Sarrou E, Stavridou E, Ganopoulos I, et al. An integrated metabolomic and gene expression analysis identifies heat and calcium metabolic networks underlying postharvest sweet cherry fruit senescence. Planta. 2019;250:2009–22.
Michailidis M, Karagiannis E, Tanou G, Karamanoli K, Lazaridou A, Matsi T, et al. Metabolomic and physico-chemical approach unravel dynamic regulation of calcium in sweet cherry fruit physiology. Plant Physiol Biochem. 2017;116:68–79. https://doi.org/10.1016/j.plaphy.2017.05.005.
Karagiannis E, Michailidis M, Karamanoli K, Lazaridou A, Minas IS, Molassiotis A. Postharvest responses of sweet cherry fruit and stem tissues revealed by metabolomic profiling. Plant Physiol Biochem. 2018;127:478–84. https://doi.org/10.1016/j.plaphy.2018.04.029.
Michailidis M, Karagiannis E, Polychroniadou C, Tanou G, Karamanoli K, Molassiotis A. Metabolic features underlying the response of sweet cherry fruit to postharvest UV-C irradiation. Plant Physiol Biochem. 2019;144:49–57. https://doi.org/10.1016/j.plaphy.2019.09.030.
Hummel J, Strehmel N, Selbig J, Walther D, Kopka J. Decision tree supported substructure prediction of metabolites from GC-MS profiles. Metabolomics. 2010;6:322–33. https://doi.org/10.1007/s11306-010-0198-7.
Michailidis M, Karagiannis E, Tanou G, Sarrou E, Adamakis ID, Karamanoli K, et al. Metabolic mechanisms underpinning vegetative bud dormancy release and shoot development in sweet cherry. Environ Exp Bot. 2018;155:1–11.
Vrhovsek U, Masuero D, Gasperotti M, Franceschi P, Caputi L, Viola R, et al. A versatile targeted metabolomics method for the rapid quantification of multiple classes of phenolics in fruits and beverages. J Agric Food Chem. 2012;60:8831–40. https://doi.org/10.1021/jf2051569.
Arapitsas P, Perenzoni D, Nicolini G, Mattivi F. Study of sangiovese wines pigment profile by UHPLC-MS/MS. J Agric Food Chem. 2012;60:10461–71. https://doi.org/10.1021/jf302617e.
Zoratti L, Sarala M, Carvalho E, Karppinen K, Martens S, Giongo L, et al. Monochromatic light increases anthocyanin content during fruit development in bilberry. BMC Plant Biol. 2014;14:377.
Khadivi-Khub A. Physiological and genetic factors influencing fruit cracking. Acta Physiol Plant. 2015;37:1718–32. https://doi.org/10.1007/s11738-014-1718-2.
Sansavini S, Lugli S. Sweet cherry breeding programs in Europe and Asia. Acta Hortic. 2008;795 PART 1:41–57.
Measham PF, Bound SA, Gracie AJ, Wilson SJ. Incidence and type of cracking in sweet cherry (Prunus avium L.) are affected by genotype and season. Crop Pasture Sci [Internet]. 2009;60:1002–1008. http://www.publish.csiro.au/cp/cp08410. Accessed 27 Sept 2018.
Knoche M, Beyer M, Peschel S, Oparlakov B, Bukovac MJ. Changes in strain and deposition of cuticle in developing sweet cherry fruit. Physiol Plant. 2004;120:667–77. https://doi.org/10.1111/j.0031-9317.2004.0285.x.
Wang Y, Long LE. Physiological and biochemical changes relating to postharvest splitting of sweet cherries affected by calcium application in hydrocooling water. Food Chem [Internet]. Elsevier; 2015;181:241–7. https://www.sciencedirect.com/science/article/pii/S0308814615002915. Accessed 9 July 2018. .
Peschel S, Knoche M. Studies on water transport through the sweet cherry fruit surface: XII. variation in cuticle properties among cultivars. J Am Soc Hortic Sci. 2012;137:367–75. https://doi.org/10.1007/s004250100568.
Fernandez O, Béthencourt L, Quero A, Sangwan RS, Clément Christophe C. Trehalose and plant stress responses: Friend or foe? Trends Plant Sci [Internet]. Elsevier Current Trends; 2010;15:409–17. https://www.sciencedirect.com/science/article/pii/S1360138510000725. Accessed 12 June 2019.
Delorge I, Janiak M, Carpentier S, Van Dijck P. Fine tuning of trehalose biosynthesis and hydrolysis as novel tools for the generation of abiotic stress tolerant plants. Front Plant Sci [Internet]. Frontiers; 2014;5:147. http://journal.frontiersin.org/article/10.3389/fpls.2014.00147/abstract. Accessed 5 Sept 2019.
Chen J, Duan Y, Hu Y, Li W, Sun D, Hu H, et al. Transcriptome analysis of atemoya pericarp elucidates the role of polysaccharide metabolism in fruit ripening and cracking after harvest. BMC Plant Biol [Internet]. BioMed Central; 2019;19:219. https://bmcplantbiol.biomedcentral.com/articles/10.1186/s12870-019-1756-4. Accessed 13 June 2019.
Winkler A, Grimm E, Knoche M. Sweet cherry fruit: ideal osmometers? Front Plant Sci. 2019;10:164.
Magazù S, Migliardo F, Benedetto A, La Torre R, Hennet L. Bio-protective effects of homologous disaccharides on biological macromolecules. Eur Biophys J. 2012;41:361–7. https://doi.org/10.1007/s00249-011-0760-x.
Oliver SN, Van Dongen JT, Alfred SC, Mamun EA, Zhao X, Saini HS, et al. Cold-induced repression of the rice anther-specific cell wall invertase gene OSINV4 is correlated with sucrose accumulation and pollen sterility. Plant, Cell Environ. 2005;28:1534–51. https://doi.org/10.1111/j.1365-3040.2005.01390.x.
Mohnen D. Pectin structure and biosynthesis. Curr Opin Plant Biol. 2008;11:266–77.
Andrews PK, Li S. Partial purification and characterization of β-d-galactosidase from sweet cherry, a nonclimacteric fruit. J Agric Food Chem. 1994;42:2177–82.
Balbontín C, Ayala H, Rubilar J, Cote J, Figueroa CR. Transcriptional analysis of cell wall and cuticle related genes during fruit development of two sweet cherry cultivars with contrasting levels of cracking tolerance. Chil J Agric Res [Internet]. 2014;74:162–9. http://www.scielo.cl/scielo.php?script=sci_arttext&pid=S0718-58392014000200006&lng=en&nrm=iso&tlng=en.
Atmodjo MA, Hao Z, Mohnen D. Evolving views of pectin biosynthesis. Annu Rev Plant Biol [Internet]. 2013;64:747–79. http://www.annualreviews.org/doi/10.1146/annurev-arplant-042811-105534.
Anderson CT, Wallace IS, Somerville CR. Metabolic click-labeling with a fucose analog reveals pectin delivery, architecture, and dynamics in Arabidopsis cell walls. Proc Natl Acad Sci USA. 2012;109:1329–34.
Ottenhof HH, Ashurst JL, Whitney HM, Saldanha SA, Schmitzberger F, Gweon HS, et al. Organisation of the pantothenate (vitamin B5) biosynthesis pathway in higher plants. Plant J. 2004;37:61–72. https://doi.org/10.1046/j.1365-313X.2003.01940.x.
Gaufichon L, Reisdorf-Cren M, Rothstein SJ, Chardon F, Suzuki A. Biological functions of asparagine synthetase in plants. Plant Sci [Internet]. Elsevier; 2010;179:141–53. https://www.sciencedirect.com/science/article/pii/S0168945210001202. Accessed 14 June 2019.
Kiprovski B, Borković B, Malenčić Đ, Veberič R, Štampar F, Mikulič-Petkovšek M. Postharvest changes in primary and secondary metabolites of sweet cherry cultivars induced by Monilinia laxa. Postharvest Biol Technol [Internet]. 2018;144:46–54. https://www.sciencedirect.com/science/article/pii/S0925521418301480. Accessed 5 Sept 2019.
Li J, Chen J. Citrus Fruit-cracking: causes and occurrence. Hortic Plant J [Internet]. Elsevier; 2017;3:255–60. https://www.sciencedirect.com/science/article/pii/S2468014117302224. Accessed 5 Sept 2019.
Kumar S, Abhay KP. Chemistry and biological activities of flavonoids: an overview. Sci World J. 2013;162750:16.
Trouillas P, Marsal P, Siri D, Lazzaroni R, Duroux JL. A DFT study of the reactivity of OH groups in quercetin and taxifolin antioxidants: the specificity of the 3-OH site. Food Chem [Internet]. Elsevier; 2006;97:679–88. https://www.sciencedirect.com/science/article/pii/S0308814605004267. Accessed 13 June 2019.
Topal F, Nar M, Gocer H, Kalin P, Kocyigit UM, Gülçin I, et al. Antioxidant activity of taxifolin: an activity-structure relationship. J Enzyme Inhib Med Chem. 2016;31:674–83. https://doi.org/10.3109/14756366.2015.1057723.
Xi FF, Guo LL, Yu YH, Wang Y, Li Q, Zhao HL, et al. Comparison of reactive oxygen species metabolism during grape berry development between 'Kyoho' and its early ripening bud mutant 'Fengzao'. Plant Physiol Biochem. 2017;118:634–42.
This research has been co‐financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T1EDK-00281).
Laboratory of Pomology, School of Agriculture, Aristotle University of Thessaloniki, 570 01, Thessaloniki-Thermi, Greece
Michail Michailidis, Evangelos Karagiannis & Athanassios Molassiotis
Institute of Soil and Water Resources, ELGO-DEMETER, Thessaloniki, 57001, Greece
Georgia Tanou
Institute of Plant Breeding and Genetic Resources, ELGO-DEMETER, Thessaloniki, 57001, Greece
Eirini Sarrou
Laboratory of Agricultural Chemistry, School of Agriculture, Aristotle University of Thessaloniki, Thessaloniki, Greece
Katerina Karamanoli
Laboratory of Food Chemistry – Biochemistry, Dept. of Food Science & Technology, Faculty of Agriculture Aristotle University, Thessaloniki, Greece
Athina Lazaridou
Department of Food Quality and Nutrition, Centro Ricerca e Innovazione, Fondazione Edmund Mach, 38010, San Michele all'Adige, Trento, Italy
Stefan Martens
Michail Michailidis
Evangelos Karagiannis
Athanassios Molassiotis
AM and MM designed the experiment, MM analyzed the data and wrote the first draft manuscript. EK, GT, ES, AL, KK, SM accomplished the laboratory analysis and helped in data processing. All authors read and approved the manuscript.
Correspondence to Athanassios Molassiotis.
All authors agreed to publish this manuscript.
Additional file 1: Table S1. Physiological traits of seventeen cherry cultivars at harvest and MANOVA output.
Additional file 2: Table S2. Texture properties, main cracking of cultivars at harvest and MANOVA output.
Additional file 3: Table S3. Quantitative results of skin primary metabolite analysis and MANOVA output.
Additional file 4: Fig. S1. Large-scale Spearman correlation analysis.
Additional file 5: Table S4. Quantitative results of skin secondary metabolite analysis and MANOVA output.
Michailidis, M., Karagiannis, E., Tanou, G. et al. Sweet cherry fruit cracking: follow-up testing methods and cultivar-metabolic screening. Plant Methods 16, 51 (2020). https://doi.org/10.1186/s13007-020-00593-6
Primary metabolites
Secondary metabolites
Sweet cherry cultivars
An Interactive Guide To The Fourier Transform
The Fourier Transform is one of the deepest insights ever made. Unfortunately, the meaning is buried within dense equations: Yikes.

What does the Fourier Transform do? Here's the "math English" version: the Fourier Transform takes a time-based pattern, measures every possible cycle, and returns the overall "cycle recipe" (the strength, offset, and rotation speed for every cycle that was found). Time for the equations? If all goes well, we'll have an aha! moment. This isn't a force-march through the equations, it's the casual stroll I wish I had.

From Smoothie to Recipe

A math transformation is a change of perspective. The Fourier Transform changes our perspective from consumer to producer, turning "What did I see?" into "How was it made?" In other words: given a smoothie, let's find the recipe. Why? So... given a smoothie, how do we find the recipe? Well, imagine you had a few filters lying around: we can reverse-engineer the recipe by filtering each ingredient. Filters must be independent.

See The World As Cycles
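For reference, the "dense equations" alluded to above are the standard continuous Fourier transform and its discrete counterpart (they are not reproduced in the excerpt itself):

$$X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i f t}\, dt, \qquad X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n / N}.$$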
Immersive Linear Algebra (immersivemath)
immersive linear algebra by J. Ström, K.
The world's first linear algebra book with fully interactive figures.
Table of Contents
Preface: A few words about this book.
Chapter 1: Introduction. How to navigate, notation, and a recap of some math that we think you already know.
Chapter 2: Vectors. The concept of a vector is introduced, and we learn how to add and subtract vectors, and more.
Chapter 3: The Dot Product. A powerful tool that takes two vectors and produces a scalar.
Chapter 4: The Vector Product. In three-dimensional spaces you can produce a vector from two other vectors using this tool.
Chapter 5: Gaussian Elimination. A way to solve systems of linear equations.
Chapter 6: The Matrix. Enter the matrix.
Chapter 7: Determinants. A fundamental property of square matrices.
Chapter 8: Rank. Discover the behaviour of matrices.
Chapter 9: Linear Mappings. Learn to harness the power of linearity...
Chapter 10: Eigenvalues and Eigenvectors
Lateral Thinking Puzzles: Preconceptions
Lateral thinking puzzles that challenge your preconceptions.
1. You are driving down the road in your car on a wild, stormy night, when you pass by a bus stop and you see three people waiting for the bus:
   1. An old lady who looks as if she is about to die.
   2. An old friend who once saved your life.
   3. Knowing that there can only be one passenger in your car, whom would you choose?
Hint: You can make everyone happy.
Solution: The old lady of course!
2. Hint: The police only know two things, that the criminal's name is John and that he is in a particular house.
Solution: The fireman is the only man in the room.
3. Hint: He is very proud, so refuses to ever ask for help.
Solution: The man is a dwarf.
4. Hint: It does not matter what the baby lands on, and it has nothing to do with luck.
Solution: The baby fell out of a ground floor window.
5. Hint: His mother was an odd woman.
Solution: When Bad Boy Bubby opened the cellar door he saw the living room and, through its windows, the garden.
Differential Equations Explained
You're probably used to equations like $$(t-.5)(t-1)= 0,$$ where 'solving' means finding an unknown number. A differential equation (DE), by contrast, is a fact about the derivative of an unknown function, and 'solving' one means finding a function that fits.
To visualize derivatives, we can draw a right triangle whose hypotenuse is tangent to a function. If the triangle's width is $1$, then its height is the derivative. With that one weird trick, the plots to the right show how the derivative of $\sin(t)$ is $\cos(t)$. That's a pretty basic DE, though.
Consider a cart rolling to a stop. The solution is a function $v(t)$ giving velocity at time $t$. It turns out the exponential function, $e^{-kt}$, has the properties $$\frac{d}{dt}e^{-kt}=-ke^{-kt}, \qquad e^{-k\cdot 0}=1.$$
To make the solution more intuitive, here you'll solve the cart's DE manually by picking a series of $\left( t, v \right)$ points. The first cart below obeys the $v(t)$ function you designed.
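For reference, the cart's DE itself, inferred from the properties of $e^{-kt}$ listed above (a linear-drag model with decay constant $k$), is

$$\frac{dv}{dt} = -k\, v(t), \qquad v(0) = 1,$$

whose solution is $v(t) = e^{-kt}$.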
Bayesian Methods for Hackers
An intro to Bayesian methods and probabilistic programming from a computation/understanding-first, mathematics-second point of view.
Prologue
The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three chapters on probability theory, and only then covers what Bayesian inference is. Unfortunately, due to the mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples. After some recent success of Bayesian methods in machine-learning competitions, I decided to investigate the subject again.
If Bayesian inference is the destination, then mathematical analysis is a particular path towards it. Bayesian Methods for Hackers is designed as an introduction to Bayesian inference from a computational/understanding-first, and mathematics-second, point of view. The choice of PyMC as the probabilistic programming language is two-fold.
Brainteaser Quizzes
Read this... bet you CAN!
The phaonmneel pweor of the hmuan mnid: I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdgnieg. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm.
The Supremum and Infimum of a Bounded Set
Recall that a nonempty subset $S$ of $\mathbb{R}$ is bounded above if there exists an $M$ such that for all $x \in S$, $x \leq M$. Similarly, we say that $S$ is bounded below if there exists an $m$ such that for all $x \in S$, $m \leq x$. We should note that these bounds are not unique. Recall the set of natural numbers $\mathbb{N} = \{ 1, 2, 3, ... \}$. We could choose $m = 1$ to be our lower bound, or $m = 0$, etc… So if a lower or upper bound exists for a set, then there are infinitely many lower or upper bounds for that set. What we are sometimes more interested in is the least upper bound or the greatest lower bound. These bounds have a very special name which we will define as follows.
Definition: Let $S$ be a set that is bounded above. We say that the supremum of $S$ denoted $\sup S = u$ is a number $u$ that satisfies the conditions that $u$ is an upper bound of $S$ and $u$ is the least upper bound of $S$, that is for any $v$ that is also an upper bound of $S$ then $u \leq v$.
Definition: Let $S$ be a set that is bounded below. We say that the infimum of $S$ denoted $\inf S = w$ is a number $w$ that satisfies the conditions that $w$ is a lower bound of $S$ and $w$ is the greatest lower bound of $S$, that is for any $t$ that is also a lower bound of $S$ then $t \leq w$.
Looking at the example of $\mathbb{N}$, we note that since this subset of $\mathbb{R}$ is bounded below, $\inf \mathbb{N}$ exists. Namely, $\inf \mathbb{N} = 1$. We can show this (rather tediously) with the definition of the infimum of a set. We will first show that $1$ is a lower bound for $\mathbb{N}$, that is, for all $x \in \mathbb{N}$, $1 \leq x$. We note that $\mathbb{N} = \{ 1, 2, 3, ... \}$, so clearly $1 \leq x$ for all $x \in \mathbb{N}$. So we have shown that $1$ is a lower bound. Now let's show that $1$ is the greatest lower bound.
Suppose $t$ is any lower bound of $\mathbb{N}$. Since $1 \in \mathbb{N}$, every lower bound must satisfy $t \leq 1$. Hence no lower bound exceeds $1$, so $1$ is the greatest lower bound and $\inf \mathbb{N} = 1$.
Now let's look at some other examples of sets. For example, consider the set $S := \{ x \in \mathbb{R} : -2 \leq x \leq 2 \}$. This set is clearly bounded above and bounded below. We note that $\inf S = -2$ and $\sup S = 2$. In this case the infimum and supremum of this set are in the set $S$ itself. This is not always the case though. For example consider the set $T = \{ x \in \mathbb{R} : -2 < x < 2 \}$. In this case $\inf T = -2$ and $\sup T = 2$, but note that $-2, 2 \not \in T$ due to the strict inequalities defining $T$.
Yet another example is the set $\mathbb{Z}$. This set is bounded neither below nor above, so it has neither an infimum nor a supremum.
Lemma 1: Let $S$ be a nonempty subset of $\mathbb{R}$. Then either $S$ has both a supremum and infimum, $S$ has one or the other, or $S$ does not have either.
We have already looked at an example that has both a supremum and an infimum, an example that has only an infimum, and an example that has neither. We just need to provide an example of a subset of $\mathbb{R}$ that has only a supremum. Consider the set of negative integers $\mathbb{Z}^{-} = \{ ... , -3, -2, -1 \}$. This set is bounded above (but not bounded below), and in fact $\sup \mathbb{Z}^{-} = -1$.
We will now look at another way to describe the supremum of a set that is bounded above, and the infimum of a set that is bounded below.
Lemma 2: Let $S \subset \mathbb{R}$ where $S \neq \emptyset$. Then $\sup S = u$ if and only if the number $u$ satisfies the following properties: for all $x \in S$, $x \leq u$; and if $v < u$ then there exists an $x' \in S$ such that $v < x'$.
Proof: $\Rightarrow$ Suppose that $\sup S = u$. Then for all $x \in S$, $x \leq u$ by the definition of a supremum. Let $v < u$. If there does not exist an $x' \in S$ such that $v < x'$ then $v$ is an upper bound of $S$ and since $v < u$ this contradicts the fact that $u = \sup S$, so such an $x' \in S$ exists.
$\Leftarrow$ Suppose that for all $x \in S$, $x \leq u$, and that whenever $v < u$ there exists an $x' \in S$ such that $v < x'$. The first condition says that $u$ is an upper bound of $S$. Now let $v$ be any upper bound of $S$. If we had $v < u$, then there would exist an $x' \in S$ with $v < x'$, contradicting the fact that $v$ is an upper bound of $S$. Therefore $u \leq v$ for every upper bound $v$, and so $u = \sup S$. $\blacksquare$
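As a quick illustration (this example is an addition, not part of the original argument), Lemma 2 can be checked directly for the set $T = \{ x \in \mathbb{R} : -2 < x < 2 \}$ from above, whose supremum $2$ is not an element of $T$. Every $x \in T$ satisfies $x \leq 2$, and given any $v < 2$ the point

$$x' = \max \left ( 0, \frac{v+2}{2} \right )$$

lies in $T$ (since $0 \leq x' < 2$) and satisfies $v < x'$: if $v < 0$ then $x' \geq 0 > v$, while if $0 \leq v < 2$ then $x' = \frac{v+2}{2} > v$. Both conditions of Lemma 2 hold, so $\sup T = 2$.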
Lemma 3 is analogous to lemma 2 but pertains to the infimum of a set that is bounded below.
Lemma 3: Let $S \subset \mathbb{R}$ where $S \neq \emptyset$. Then $\inf S = w$ if and only if the number $w$ satisfies the following properties: for all $x \in S$, $w \leq x$; and if $w < t$ then there exists an $x' \in S$ such that $x' < t$.
Now consider $S \subset \mathbb{R}$ where $\sup S = u$. The first property, that $x \leq u$ for all $x \in S$, says precisely that $u$ is an upper bound for $S$. The second property says that if $v < u$ then there exists an element $x' \in S$ such that $v < x'$; if such an element $x'$ did not exist, then $v$ would itself be an upper bound of $S$ smaller than $u$, contradicting $\sup S = u$.
How to typeset readable diagonal matrices with large entries?
I would like to illustrate the structure of two matrices, but unfortunately they become hard to read very quickly. This is because their entries' expressions grow with every row.
The equation looks as follows:
An MWE is:
\begin{multline}
\underbracket[0pt][0pt]{%
\begin{pmatrix}
\mathbf{x}(0) \\
\vdots \\
\mathbf{x}(N)
\end{pmatrix}
}_{\mathbf{X}}
=
\underbracket[0pt][0pt]{%
\begin{bmatrix*}[r]
\eyezero & & & & \\
& A_0 & & & \\
& & A_1A_0 & & \\
& & & \ddots & \\
& & & & A_{N-1}A_{N-2}\cdots A_0
\end{bmatrix*}
}_{S_x}
\underbracket[0pt][0pt]{%
\begin{pmatrix}
\mathbf{x}_0 \\
\vdots \\
\mathbf{x}_0
\end{pmatrix}
}_{\mathbf{X}_0} \cdots \\*
\hspace*{6em}
\cdots+
\underbracket[0pt][0pt]{%
\begin{bmatrix*}[r]
\zermzero & & & & \\
B_0 & & & & \\
A_1 B_0 & B_{1} & & & \\
A_2 A_1 B_0 & A_{2} B_{1} & B_{3} & & \\
\vdots & \vdots & \vdots & \ddots & \\
A_{N-1}A_{N-2} \cdots A_{1} B_{0} & \cdots & \cdots & \cdots & B_{N-1} \\
\end{bmatrix*}
}_{S_u}
\underbracket[0pt][0pt]{%
\begin{pmatrix}
\mathbf{u}(0) \\
\vdots \\
\mathbf{u}(N-1)
\end{pmatrix}
}_{\mathbf{U}}
\end{multline}
Do you have any idea how I can improve the readability of this equation with the tools available in LaTeX?
math-mode
Ingo
tex.stackexchange.com/questions/149235/… – David Carlisle May 2 '14 at 15:07
@Manuel, it's Minion Pro with its math counterpart Minion Math. – Ingo May 2 '14 at 15:18
@Ingo: Maybe you could comment on cbento's suggested approach, so that they have the chance to improve it (since you don't seem to be satisfied with it)? – Jake May 14 '14 at 8:43
Shouldn't B_3 be B_2? – JLDiaz May 14 '14 at 10:33
@JLDiaz yes indeed, that is a mistake. – Ingo May 16 '14 at 13:03
Update: I had gotten the matrix structure wrong; I have modified the code and the figures, and I think it is right now.
One idea is to use TikZ to typeset those matrices, and use its drawing facilities to graphically highlight the structure. For example, for the first matrix:
\usetikzlibrary{matrix}
\begin{pmatrix} x(0)
\\ x(1)
\\ \vdots
\\ x(N)
\end{pmatrix} = \vcenter{\hbox{\tikz[]{
\matrix[matrix of math nodes, ampersand replacement=\&,
left delimiter={[}, right delimiter={]}, row sep=0pt,
nodes={inner sep=2pt}] (M) {
I \& \& \& \& \& \\
\& A_0 \& \& \& \& \\
\& \& A_1 A_0 \& \& \& \\
\& \& \& \ddots \& \& \\
\& \& \& \& A_{N-1} A_{N-2} \cdots A_0 \\
};
\fill[orange, opacity=0.2, rounded corners]
(M-1-1.north west) -- (M-1-1.north east) -- (M-5-5.north east)
--(M-5-5.south east) -- (M-5-5.south west) -- (M-1-1.south west)
--cycle;
}}} \begin{pmatrix} x(0)
\end{pmatrix} \cdots
Gives:
One possibility for the second matrix:
\cdots + \vcenter{\hbox{\tikz[]{
\matrix[matrix of math nodes, ampersand replacement=\&,
left delimiter={[}, right delimiter={]},
nodes={inner sep=2pt},
column 1/.style={minimum width=9em},
column sep=2pt,
] (M) {
0 \& \& \& \& \\
B_0 \& \& \& \& \\
A_1B_0 \& B_1 \& \& \& \\
A_2A_1B_0 \& A_2B_1 \& B_3 \& \& \\
\vdots \& \vdots \& \vdots \& \ddots \& \\
A_{N-1} A_{N-2} \cdots A_1B_0 \& \cdots \& \cdots \& \cdots \& B_{N-1} \\
};
\fill[orange, opacity=0.2, rounded corners] % fill style assumed, as in the first block
(M-1-1.north west) rectangle (M-6-1.south east);
\draw[line cap=round, draw opacity=0.2, yellow!70!green, line width=3ex, shorten >=-1ex, shorten <=-1ex]
(M-2-1.center) to[bend left=15] (M-6-5.center);
}}} \begin{pmatrix} u(0)
\\ u(1)
\\ \vdots
\\ u(N-1)
\end{pmatrix}
Update. A different approach for the second matrix:
\matrix[matrix of nodes, ampersand replacement=\&,
nodes={inner sep=2pt, align=right},
column 1/.style={text width=9em},
row 6/.style={minimum height=3ex},
] (M) {
$0$ \\
$B_0$ \\
$A_1B_0$ \& $B_1$ \\
$A_2A_1B_0$ \& $A_2B_1$ \& $B_3$ \\
$\vdots$ \& $\vdots$ \& $\vdots$ \& $\ddots$ \\
$A_{N-1} A_{N-2} \cdots A_1B_0$ \& $\cdots$ \& $\cdots$ \& $\cdots$ \& $B_{N-1}$ \\
};
JLDiaz
I really like your use of TikZ here. I think I'll go for this solution, thanks! – Ingo May 16 '14 at 13:07
One thing you could try is to split the matrix structure into different equation environments.
You could also omit the numbering in all equation environments, except the last one, so it is easier to understand that the various segments are part of only one equation.
\begin{equation*}
\begin{pmatrix} x(0)
\\ \vdots
\\ x(N)
\end{pmatrix} = \begin{bmatrix}
I & & & & & & & \\
& A_0& & & & & & \\
& & A_1 A_0 & & & & & \\
& & & \ddots & & & & \\
& & & & A_{N-1} A_{N-2} & \cdots & A_0
\end{bmatrix} \begin{pmatrix} x(0)
\\ \vdots
\\ x(0)
\end{pmatrix} \cdots
\end{equation*}
\begin{equation*}
\cdots + \begin{bmatrix}
& & 0 & & & & \\
& & B_0 & & & & \\
& & A_1B_0 & B_1 & & & \\
& & A_2A_1B_0 & A_2B_1 & B_3 & & \\
& & \vdots & \vdots & \vdots & \ddots & \\
A_{N-1} A_{N-2} & \cdots & A_1B_0 & \cdots & \cdots & \cdots & B_{N-1}
\end{bmatrix} \begin{pmatrix} u(0)
\\ \vdots
\\ u(N-1)
\end{pmatrix}
\end{equation*}
Visually, it is similar to your MWE, but it is now segmented:
Also, if you want to refer to specific segments of the matrix structure later on in the text, you could add labels to the equation environments:
\begin{equation*}\label{part1}
cbento
This does look nicer, although it uses even more space. Thanks! – Ingo May 16 '14 at 13:06
This is maybe not what you are after, but I would rather introduce a notation to shorten the matrix entries. E.g. defining
$$\bar{A}_i^j=\prod_{k=i}^j A_k$$
would allow nearly constant-width columns. Also, I cannot guess what the dots below $A_2 B_1$ and $B_3$ (a typo for $B_2$ in this one?) in $S_u$ actually contain.
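A minimal sketch of what this could look like (the macro name \Ab is arbitrary; the \newcommand belongs in the preamble, the bmatrix fragment inside a math environment, and amsmath is assumed):

\newcommand{\Ab}[2]{\bar{A}_{#1}^{#2}} % \Ab{i}{j} stands for A_j A_{j-1} \cdots A_i
\begin{bmatrix}
I &           &           &        &             \\
  & \Ab{0}{0} &           &        &             \\
  &           & \Ab{0}{1} &        &             \\
  &           &           & \ddots &             \\
  &           &           &        & \Ab{0}{N-1}
\end{bmatrix}

With this shorthand every diagonal entry of $S_x$ takes roughly the same width, so the grid structure of the matrix stays visible.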
Joce
you probably want \Pi_{k=i}. And I would never use such notation, but that's probably a matter of taste. I would keep with \hat{A}_{N,n} = A_N A_{N-1}\dots A_{n+1}A_n. – yo' May 14 '14 at 11:46
@tohecz: thanks for the correction. The style of the new variable is up to the author of the document, this is just an example. Having subscript and superscript allows a more compact matrix, that's why I offered it, depending on the context it may be unwelcome – or not. – Joce May 14 '14 at 11:51
This is a nice proposal. However, I already show that exact same equation before this and then stack it up to a matrix, so I would rather like to write it out at this point. – Ingo May 16 '14 at 13:06
@Ingo: The choice of course depends on the context. As you note, the eye understands a matrix because of its grid-like arrangement, which is lost when elements have very different sizes. So one has to make a trade-off between the gain of matrix clarity and the loss of introducing notations. – Joce May 16 '14 at 13:37
Astroparticle Physics - A Joint TeVPA/ IDM Conference
Symposium on the History and Future of Dark Matter
30. Invited Talk: Direct Detection with Cryogenic Experiments
Prof. Enectali Figueroa-Feliciano (MIT)
Plenary Talks
Cryogenic dark matter experiments composed of semiconductors operated at milliKelvin temperatures are one of the leading technologies in dark matter searches, currently setting the most stringent limits to the spin-independent WIMP-nucleon cross section for dark matter masses between 2-6 GeV. I will review the principles of direct dark matter detection and the various experiments using this...
32. Invited Talk: Dark Matter Direct Detection: Signal or no signal? The best way forward.
Prof. Richard Gaitskell (Brown University)
Particle dark matter is thought to be the overwhelming majority of the matter in the Universe. Its gravitational contribution overwhelms that from the ordinary matter that we, the earth and the stars, are composed of. However, direct evidence for the existence of particle dark matter remains controversial. In the last few years a number of experimental collaborations have reported possible...
39. Invited Talk: A Review of the Directional Signature for Dark Matter Searches
Dinesh Loomba (University of New Mexico)
Over the past decade a world-wide experimental effort has grown significantly to the point where today there are over half a dozen directional dark matter experiments, with four collecting data underground. Although most of the efforts employ time projection chambers with low pressure gas-based targets, R&D on novel approaches in liquids and solids is also proceeding. We review the...
41. Invited Talk: Search for cosmological dark matter with noble liquids
Emilija Pantic (UCLA)
Noble liquid detectors are continuing to probe dark matter (DM) candidates in a wide parameter space. They utilize large targets of a very low background with a capability to reconstruct interaction point, allowing active background rejection. Liquid xenon (LXe) is an intrinsically radio-pure, efficient and fast scintillator with the best self-shielding capabilities. Liquid argon (LAr) is an...
26. Invited Talk: Evidences and hints of Dark Matter
Pierluigi Belli (INFN - Roma Tor Vergata)
An overview of the latest results of DAMA/LIBRA-phase1 will be presented and the evidence obtained by exploiting the model independent annual modulation signature for the presence of Dark Matter particles in the galactic halo will be discussed. The data of the former DAMA/NaI and of the DAMA/LIBRA-phase1 satisfy all the many requirements of the Dark Matter annual modulation signature...
233. Cosmic Ray acceleration and escape in SNRs
Dr Brian Reville (Queen's University Belfast)
In recent years, our understanding of cosmic-ray acceleration at supernova shocks has advanced considerably. Observations of nearby SNR show clear evidence for magnetic field amplification, while theory and simulation has developed to the point where we can now investigate the plasma physics in these energetic environments self-consistently. I will review some of the recent developments in the...
247. Progresses on Minimal Dark Matter
Filippo Sala (CEA/Saclay and CNRS)
We extend the Standard Model with a new particle, chosen from those that are automatically stable without adding any extra symmetry to the theory. Despite being a potential Dark Matter candidate, other motivations for such a new state will be discussed, like the stabilisation of the EW vacuum. Its phenomenology is controlled by a single parameter, its mass, which is fixed in the multi-TeV...
166. Recent results on dark matter search with the Fermi-LAT
Dr Gabrijela Zaharijas (ICTP and INFN, Trieste)
Dark Matter: Indirect Detection
High-energy gamma rays are one of the most promising ways to constrain or reveal the nature of dark matter. Through the first five years of the Fermi-LAT mission we have witnessed an exciting progress in this respect, with constraints on the dark matter cross section to various particle channels moving well into the theoretically motivated region of the parameter space and several hints of...
251. The current status of the HAWC observatory
Dirk Lennarz (Georgia Tech)
Gamma-Ray Astrophysics
The High Altitude Water Cherenkov (HAWC) observatory is an extensive air shower (EAS) detector currently under construction in central Mexico at an altitude of 4,100 m above sea level. It improves the water Cherenkov technique, where gamma rays in the 100 GeV - 100 TeV range are detected by measuring Cherenkov light from secondary particles, by having an order of magnitude better sensitivity,...
2. Using Energy Peaks to Count Dark Matter Particles in Decays
Roberto Franceschini (E)
We study the determination of the symmetry that stabilizes a dark matter (DM) candidate produced at colliders. Our question is motivated per se, and by several alternative symmetries that appear in models that provide a DM particle. To this end, we devise a strategy to determine whether a heavy mother particle decays into one visible massless particle and one or two DM particles. The counting...
226. Highlights from H.E.S.S.
Prof. Christian Stegmann (DESY)
The H.E.S.S. telescope system is operating since more than 10 years and the collaboration contributes significantly to the rapidly progressing field of ground-based gamma-ray astronomy. With the recent start of the operation of a new telescope with a mirror diameter of 28m the detection capabilities if the H.E.S.S. telescope system are significantly enhanced and the energy threshold is much...
153. Hybrid simulations of cosmic ray acceleration at shocks
Damiano Caprioli (Princeton University)
Hybrid particle in cell simulations (with kinetic protons and fluid electrons) are providing us with unprecedented insights into the microphysics of collisionless shocks, also attesting to their ability to accelerate particles and to generate magnetic fields. I present state-of-the-art 2D and 3D simulations of non-relativistic shocks, discussing under which conditions (shock strength and...
50. Measuring the dark matter mass - in spite of astrophysical uncertainties
Bradley J. Kavanagh (University of Nottingham)
Dark Matter: Direct Detection
The interpretation of future dark matter (DM) direct detection data is fraught with uncertainties. In particular, measurements of the WIMP mass and cross section can be biased by poor assumptions about the WIMP speed distribution. I will present a new technique, based on parametrizing the logarithm of the WIMP speed distribution, which allows the dark matter mass and the speed distribution...
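For context (a standard relation, added here rather than taken from the abstract): in spin-independent direct detection the speed distribution $f(\mathbf{v})$ enters the differential event rate per unit target mass only through the mean inverse speed above the kinematic threshold,

$$\frac{dR}{dE_R} = \frac{\rho_0\, \sigma\, F^2(E_R)}{2\, m_\chi\, \mu^2} \int_{v \geq v_{\min}(E_R)} \frac{f(\mathbf{v})}{v}\, d^3v, \qquad v_{\min}(E_R) = \sqrt{\frac{m_N E_R}{2 \mu^2}},$$

where $\rho_0$ is the local dark matter density, $m_\chi$ the WIMP mass, $m_N$ the nucleus mass, $\mu$ the WIMP-nucleus reduced mass, $\sigma$ the WIMP-nucleus cross section and $F$ the nuclear form factor. This is why poor assumptions about $f(\mathbf{v})$ can bias the inferred mass and cross section.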
190. Search for DM-induced gamma-rays from Galaxy Clusters with the Fermi-LAT
Mr Stephan Zimmer (OKC/ Stockholm University)
Galaxy Clusters are the largest gravitationally bound structures in our universe. The majority of their mass is believed to be in the form of dark matter (DM). If DM manifests itself as weakly interacting massive particles (WIMPs) these WIMPs may self-annihilate or decay, and galaxy clusters would then be excellent targets for searches of DM-induced gamma rays. In addition, N-body cosmological...
302. SUSY after LHC run1
LianTao Wang (University of Chicago)
Supersymmetry is the most prominent candidate for new physics beyond the Standard Model. However, we have not seen any sign of it during the LHC run 1. In this talk, I will give an overview of the current status of SUSY, including important questions such as naturalness and Higgs physics. I will also remark on promising directions for further pursuit.
176. Optimized dark matter searches in deep observations of Segue 1 with MAGIC
Javier Rico (IFAE)
Determining the nature of dark matter (DM) is one of the most exciting tasks of modern science. In most of the suggested hypothesis, DM particles should annihilate or decay into standard matter, which would produce high energy gamma-ray signal. The MAGIC telescopes search for such a DM signature in the 50 GeV - 50 TeV energy range. Suitable targets are the Galactic centre, local DM clumps,...
201. Fermi Large Area Telescope observations of high-energy gamma-ray emission from solar flares
Melissa Pesce-Rollins (INFN-Pisa)
During its first six years of operation, the Fermi Large Area Telescope (LAT) has detected >30 MeV gamma-ray emission from more than 40 solar flares, nearly a factor of 10 more than EGRET detected. Detections sample both the impulsive phase and long-duration emission, extending up to ~20 hours for the 2012 March 7 X-class flares, and include the first detection of >100 MeV emission from a...
140. Direct dark-matter detection with LAr: DEAP-3600
Simon Peeters (University of Sussex)
DEAP-3600 is a single phase liquid argon direct-detection dark-matter experiment located at SNOLAB in Sudbury, Ontario, Canada, with projected WIMP-nucleon scattering sensitivity of $10^{-46}$ cm$^2$ in 3 years, a factor of 20 beyond current experimental results at 100 GeV WIMP mass. The detector commissioning starts in Spring 2014, and DEAP3600 is projected to reach competitive sensitivity...
297. Supersymmetry Searches in ATLAS and CMS
Troels Petersen (University of Copenhagen (DK))
As the last and most advanced results of the Run1 ATLAS and CMS SUSY searches are in the process of being finalized, the status of these searches after LHC Run1 is that no indication for any signal has yet been seen. All data has been compatible with the estimated standard model backgrounds, and limits have therefore been set on the masses of various supersymmetric particles. I will cover a...
271. The Transition between Galactic and Extragalactic Cosmic Rays
Gwenael Giacinti (University of Oxford, Clarendon Laboratory)
The energy around which the transition from Galactic to extragalactic cosmic rays (CR) occurs is still unknown. Solving this major question would bring valuable clues about the nature and characteristics of Galactic and extragalactic CR sources, such as the maximum energy reachable by Galactic accelerators. The transition must lie between the knee (energy E ~ 4 PeV) and the ankle (E ~ 3 EeV)....
100. Binary systems
Daniela Hadasch (University of Innsbruck)
The small source class of gamma-ray binaries consists at present of six known objects with different orbital periods ranging from days up to several years. One of the best studied gamma-ray binaries across all frequencies, LS I +61 303, is highly variable at any given orbital phase and was lately discovered to show, on top of orbital, also superorbital variability at high energies. In contrast,...
54. The GeV Galactic Center Excess: Implications for Particle Physics
Dr dan hooper (fermilab)
A spatially extended excess of ~1-3 GeV gamma rays from the region surrounding the Galactic Center has been observed, consistent with the emission expected from annihilating dark matter. Recent improvements in analysis techniques have found this excess to be robust and highly statistically significant, with a spectrum, angular distribution, and overall normalization that is in good agreement...
55. Simple steps to analyse direct detection experiments without halo uncertainties
Felix Kahlhoefer (University of Oxford)
Uncertainty in the local dark matter velocity distribution is a key difficulty in the analysis of data from direct detection experiments. In my talk, I will propose a completely new approach for dealing with this uncertainty, which does not involve any assumptions about the structure of the dark matter halo. Given a dark matter model, this approach yields the velocity distribution which best...
250. The Incredible Bulk
Ms Pearl Sandick (University of Minnesota)
Recent experimental results from the LHC have placed strong constraints on the masses of colored superpartners. Additionally, direct dark matter searches put a strong upper limit on cross sections of interactions between the WIMP and quark sectors. However, leptophilic versions of the MSSM can potentially survive these constraints while explaining the observed abundance of dark matter. We...
234. Measurement of the Cosmic Ray energy spectrum by the ARGO-YBJ experiment
Paolo Bernardini (Universita' del Salento - INFN)
The ARGO-YBJ detector layout, features and location at high altitude (the Cosmic Ray Observatory of Yangbajing in Tibet, China, at about 606 g/cm^2 of atmospheric depth), joined to the analog readout of the RPC (Resistive Plate Chamber) streamer signals, provide the opportunity to study, with unprecedented resolution and without saturation, the distribution of the charged particles of...
15. Minimal Simplified Models for the Galactic Center Gamma-Ray Excess
Samuel McDermott
We are interested in exhausting the list of possible minimal models that could produce the galactic center gamma-ray excess at tree level, without adopting the simplifications inherent in the effective operator approach. We wish to take a holistic but general view of the types of interactions that could produce the galactic center gamma-ray excess. This leads us to the simplified model...
127. Cosmological limits on dark matter annihilations
Laura Lopez Honorez (Vrije Universiteit Brussel)
In my talk I will review the most recent cosmological constraints on dark matter annihilation with a special focus on CMB probes.
177. Gamma-ray Observations of Galaxy Clusters
Keith Bechtol
Galaxy clusters are unique environments to study cosmic-ray acceleration. In comparison to Galactic accelerators, large-scale structure formation shocks associated with merger events and accretion have lower Mach numbers and occur in high-temperature weakly magnetized plasma. Leptonic cosmic rays in clusters are well established observationally through studies of Mpc-scale diffuse radio halos...
67. Impact of anisotropic distribution functions on direct dark matter detection
Dr Nassim Bozorgnia (MPIK)
In analyzing data from dark matter direct detection experiments, usually an isotropic Maxwellian velocity distribution is assumed. However, dark matter N-body simulations suggest that the velocity distribution of dark matter is anisotropic. I will discuss how to use information from kinematical data on the Milky Way to constrain the properties of the dark matter phase space distribution, based...
231. On the CR spectrum released by a type II Supernova Remnant expanding in the presupernova wind.
Martina Cardillo (INAF - Osservatorio astrofisico di Arcetri)
One of the main open issues about the origin of Galactic CRs is the maximum energy that can be achieved by acceleration in Supernova Remnants. In a rigidity dependent acceleration mechanism, protons are expected to reach a few PeV and heavier ions correspondingly higher energies. A recent theory suggests that, in a core-collapse SNR expanding in its pre-supernova wind, magnetic field...
94. The MEG experiment: past, present and future
Mrs Elisabetta Baracchini (The University of Tokyo)
We will present the latest result from the MEG experiment, based on the data collected at the Paul Scherrer Institut (PSI), in search of the Lepton Flavour Violating (LFV) decay $\mu^+ \to e^+ \gamma$. Such decay is forbidden within the Standard Model (SM), nevertheless most of its viable extensions predict a branching ratio in the 10$^{−14}$ to 10$^{−12}$ range. An observation of the $\mu^+...
300. LHC searches in rare heavy-flavor decays
Giampiero Mancinelli (CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France)
Rare decays of beauty hadrons test the flavour structure of the Standard Model and of other theories at the level of quantum corrections. They provide information on the couplings and masses of heavy virtual particles appearing as intermediate states. A review of recent results from the LHCb, ATLAS, and CMS collaborations on new physics searches in b -> s transitions will be presented.
209. Cosmic-ray spectral anomaly at GeV-TeV energies
Satyendra Thoudam (R)
Recent measurements of cosmic rays by the ATIC, CREAM and PAMELA experiments have found that the energy spectrum in the TeV region is harder than at GeV energies. The origin of the hardening is not clearly understood. Suggested explanations include hardening in the cosmic-ray source spectrum, changes in the cosmic-ray propagation properties in the Galaxy and the effect of nearby sources. In...
268. Modeling Dark Matter Self-Interactions Involving an Excited State
Prof. Tracy Slatyer (MIT)
Discrepancies between N-body dark matter simulations and the observed distribution of dark matter on galactic and sub-galactic scales have been advanced as evidence of a complex dark sector. Dark matter self-interactions can flatten density cusps and reduce halo concentrations, and the down-scattering of a relic population of dark matter particles in a nearly-degenerate excited state could...
72. Multi-messenger Astroparticle with Clusters of Galaxies
Fabio Zandanel (University of Amsterdam)
Relativistic particles are revealed in clusters of galaxies from observations of diffuse synchrotron radio emission. At least part of this emission can be originated by secondary electrons produced by cosmic-ray protons interacting with the protons of the intra-cluster medium. This should be accompanied by the production of gamma rays, potentially detectable by the Fermi satellite and...
138. NEWS experiment and ultra-light dark matter search
Dr ioannis giomataris (CEA-Saclay)
We present recent results obtained at the LSM laboratory using the new spherical gaseous detector. It consists of a large spherical gas volume with a central electrode forming a radial electric field. A small spherical sensor located at the center acts as a proportional amplification structure. A sub-keV energy threshold and the versatility of the target (Ne, He, H) open the way to...
86. M 31 as a probe for diffuse VHE gamma ray emission with VERITAS
Ralph Bird (UCD Dublin)
VERITAS, an array of 12 m imaging atmospheric Cherenkov telescopes in southern Arizona, is one of the world's most sensitive detectors of astrophysical very-high-energy (VHE, > 100 GeV) gamma rays. The current status of the VERITAS observations of M 31 (Andromeda Galaxy) including an upper limit on the VHE flux, an updated analysis of the Fermi-LAT data and a comparison with theoretical...
181. Mono- and di-photon searches for new physics at the LHC
Prof. Toyoko Orimoto (Northeastern University)
The ATLAS and Compact Muon Solenoid (CMS) Experiments are general-purpose particle detector experiments at the Large Hadron Collider (LHC) at CERN. ATLAS and CMS have successfully collected a large dataset consisting of approximately 20/fb (5/fb), of proton-proton collisions at a center-of-mass energy of 8 TeV (7 TeV). In addition to clarifying the origins of electroweak symmetry breaking, one...
167. Anisotropic cosmic ray propagation
Daniele Gaggero
We present some results obtained with the anisotropic version of the CR propagation package DRAGON. First we describe some quantitative test of the code in simple conditions for which the analytical solution of CR transport is known, both for a the case of a dominant parallel or perpendicular diffusion. Then we show that, for the first time, we are able to reproduce the most important CR...
76. Axion mass estimates from resonant Josephson junctions
Prof. Christian Beck (Queen Mary, University of London, School of Mathematical Sciences)
This talk will be on a recent proposal that QCD dark matter axions from the galactic halo that pass through Earth can produce a small Shapiro step-like signal in Josephson junctions whose Josephson frequency resonates with the axion mass [1]. The axion field equations in a voltage-driven Josephson environment allow for a nontrivial solution where the axion-induced electrical current...
143. Fingerprints of Dark Matter in the gamma-ray sky (?)
Dr Alfredo Urbano (SISSA)
The quest for Dark Matter signals in the gamma-ray sky is one of the most intriguing and exciting challenges in astrophysics. In this talk I will discuss the energy spectrum of the Fermi bubbles at different latitudes, making use of the gamma-ray data collected by the Fermi Large Area Telescope. At high latitude, $|b|=20^{\circ}-50^{\circ}$, the Fermi bubbles energy spectrum can be reproduced...
283. Prospects for detecting Gamma-Ray Bursts at the highest energies
Dr Valerie Connaughton (National Space Science and Technology Centre)
Our understanding of high-energy emission from Gamma-Ray Bursts has greatly advanced with observations from the Fermi gamma-ray space telescope. I will review the Fermi observations and explain why they give hope to the very high-energy communities in their quest for Gamma-Ray Burst detections with the High Altitude Water Cherenkov (HAWC) and Cherenkov Telescope Array (CTA) experiments. I...
272. SNR shocks in partially ionized plasmas
Giovanni Morlino (G)
We present the theory of non-linear particle acceleration in collisionless shocks in the presence of atomic neutral material in the acceleration region. The main new aspect consists in accounting for charge exchange and ionization of neutral hydrogen, which profoundly change the structure of the shock. We also present the self-consistent calculation of the Balmer emission lines from the shock...
185. Other BSM searches at the LHC
Tristan Arnoldus Du Pree (Universite Catholique de Louvain (UCL) (BE))
Besides studies of the Higgs boson, supersymmetry, and dark matter, the ATLAS and CMS experiments conduct a broad program of searches for more exotic new physics possibilities. These investigations include searches for heavy gauge bosons, leptoquarks, long-lived particles, vector-like quarks, excited leptons, heavy neutrinos, extra dimensions, black holes, and many other models. This...
133. Axion searches in the helioscope technique: CAST and IAXO
Mrs Juan Antonio Garcia Pascual (Universidad de Zaragoza)
Axions are well motivated particles proposed in an extension of the SM as a solution to the strong CP-problem. On the other hand there is the category of axion-like particles (ALPs) which appear in diverse extensions of the SM and share the same phenomenology of the axion. Axions and ALPs are hypothetical neutral and light particles which interacts weakly with the matter, being candidates to...
149. The origin of the Extra-Galactic Gamma-ray Background through its anisotropy and cross-correlations
Alessandro Cuoco (U)
I will describe the current status of the measurement of the Extra-Galactic Gamma-ray Background (EGB) anisotropy (its angular power spectrum) and correlations with galaxy catalogues (cross-correlation functions) derived using data from the Fermi-LAT gamma-ray observatory. I will then discuss the implications for the origin of the EGB, in particular in relation to the presence of a possible...
252. Observations of gamma-ray bursts with the HAWC observatory
The temporal evolution and end of GRB spectra have important implications for the acceleration mechanisms of gamma-ray bursts (GRBs). Above $\approx10$ GeV the effective area of \emph{Fermi}-LAT is approximately constant and since the photon flux is steeply decreasing with energy, an insufficient number of photons is detected. The High Altitude Water Cherenkov (HAWC) observatory is a gamma-ray...
303. Expectations for LHC run-II
LHC run 2 will significantly enhance the reach of new physics searches. In this talk, I will give an overview of the new ground to be covered and new questions to be answered. I will attempt to identify a set of top physics targets, as well as some challenges.
260. Cosmic Rays, Synchrotron Emission and Diffuse Galactic Gamma Rays: Consistent Analysis and Impications
Giuseppe Di Bernardo
Fairly poor knowledge is still present about the cosmic ray (CR) spectra at low energies, due to the distortion produced by the solar wind on the particle fluxes. A self-consistent galactic plus solar propagation model turns out necessary in order to correctly reproduce the CR nuclear and lepton spectra. For that, a detailed transport description in the galaxy has been...
109. New Results from the CRESST Experiment
Raimund Strauss (Max-Planck-Institut für Physik)
The CRESST (Cryogenic Rare Event Search with Superconducting Thermometers) experiment has started a new Dark Matter run in summer 2013 with a total target mass of ~5kg. Significant improvements have be achieved with respect to previous measuring campaigns in terms of the intrinsic radiopurity of CaWO$_4$ crystals and the rejection of recoil events from $\alpha$ surface contamination. The first...
89. Time Stretching of the GeV Emission of GRBs: Fermi LAT data vs geometrical model
Maxim Piskunov (Institute for Nuclear Research RAS)
Numerous observations confirm that the high energy (> 100 MeV) emission of gamma ray bursts is delayed with respect to the low energy emission. However, the difference of light curves in various high energy bands has not been studied properly. In this paper we consider all the bursts observed by Fermi-LAT since 2008 August 4 to 2011 August 1, for which at least 10 events with energies 1 GeV...
98. Searching for dark matter in the extragalactic gamma-ray background
Ilias Cholis (Fermi National Accelerator Laboratory)
The approximately isotropic gamma-ray background measured by Fermi-LAT probes the contribution from several classes of astrophysical sources. Using the catalog of known gamma-ray sources along with similar catalogues at radio wavelengths, we can model and constrain the contributions to the extragalactic gamma-ray background from astrophysical sources, as are radio galaxies, star-forming...
298. Future high-energy collider options and physics prospects
Albert De Roeck (CERN)
In 2012 the Large Hadron Collider, at CERN, Geneva, Switzerland, discovered a new type of particle, a Higgs Boson, which is anticipated to have played a crucial role at the beginning of the Universe, giving mass to the elementary particles. This paradigm shifting discovery was made by large experimental collaborations analysing the data of the LHC collected in the years 2011and 2012. It has...
174. Modeling of the galactic and extragalactic diffuse radio emission.
Marco Taoso
The observed radio sky at frequencies below few GHz is the sum of the isotropic extragalactic background and the Galactic emission. The latter includes the diffuse synchrotron radiation produced by cosmic-rays electrons spiraling in the Galactic magnetic field. Therefore radio maps are an useful tool to constrain the interstellar electrons spectrum and magnetic fields. We present a detailed...
215. ImPACT: A Monte Carlo Template based analysis for Air-Cherenkov Arrays
Dr Robert Parsons (Max-Planck-Institut für Kernphysik)
We present a high-performance event reconstruction algorithm: an Image Pixel-wise fit for Atmospheric Cherenkov Telescopes (ImPACT). This gamma-ray event reconstruction algorithm is based around the comparison of camera pixel amplitudes to an expected image template, performing a maximum likelihood fit to find the best-fit shower parameters. Related reconstruction algorithms have already been...
74. Self-consistent velocity distributions for local Dark Matter
Mattia Fornasa (U)
Dark Matter (DM) direct detection experiments usually assume a simple "Standard Halo Model" for the Milky Way halo, in which the velocity distribution f(v) is Maxwellian. In an alternative observation-oriented approach the DM velocity distribution is derived from our knowledge of the composition of the Milky Way (i.e. its mass model), obtaining, thus, a "self-consistent" f(v). This is...
53. Systematic uncertainties in dark matter searches due to halo asphericity
Nicolas Bernal
We study the impact of aspherical dark matter density distribution in Milky-Way like halo on direct and indirect searches. Using data from large N-body cosmological simulation Bolshoi, we perform a complete statistical analysis and quantify the systematic uncertainties that affect the determination of local dark matter density and $J$ factors for annihilating and decaying dark matter. We find...
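For context (standard definitions, added here for reference): the $J$ factors mentioned above are the line-of-sight integrals, over the solid angle $\Delta\Omega$ of the target, of the dark matter density squared (for annihilation) or of the density itself (for decay),

$$J_{\rm ann} = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho^2\big(r(\ell,\Omega)\big)\, d\ell, \qquad J_{\rm dec} = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho\big(r(\ell,\Omega)\big)\, d\ell,$$

so an aspherical halo changes them through the density profile $\rho$ along each line of sight.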
204. Model Independent Measurements of Angular Power Spectra
Sheldon Campbell (T)
Spatial fluctuations of astrophysical signals are a powerful probe of source distributions, radiation production mechanisms, and propagation effects. The precision of measuring angular power spectra is currently estimated as a combination of shot noise, instrument systematics, and cosmic variance. We show that an important contribution, dependent on the finite statistics of the experiment, has...
42. Invited Talk: The local dark matter density: new constraints on the Milky Way's dark disc and the local shape of the Milky Way halo
Prof. Justin Read
I review current efforts to measure the mean density of dark matter near the Sun. This encodes valuable dynamical information about our Galaxy and is also of great importance for direct detection dark matter experiments. I briefly discuss theoretical expectations in our current cosmology; the theory behind mass modelling of the Galaxy; and I show how combining local and global measures probes...
20. Invited Talk: How Baryons Influence the Dark Matter Structure of Galaxies
Dr Alyson Brooks (Rutgers University)
The cosmological model based on cold dark matter (CDM) and dark energy has been hugely successful in describing the observed evolution and large scale structure of our Universe. However, at small scales (in the smallest galaxies and at the centers of larger galaxies), a number of observations seem to conflict with the predictions CDM cosmology, leading to recent interest in Warm Dark Matter...
19. Invited Talk: Constraints on dark matter coldness and neutrinos from intergalatic space
matteo viel
I will review the constraints that can be placed on the coldness of cold dark matter and total neutrino mass by using the Lyman-alpha forest, which is the main manifestation of the intergalactic medium. The intergalactic medium cosmic web probes mildly non-linear scales of the matter distribution at redshifts z=2-6, in a crucial phase of the formation of cosmic structures. I will...
48. Invited Talk: Beyond Collisionless Dark Matter
Hai-Bo Yu (University of Michigan)
Dark matter self-interactions have important implications for the distributions of dark matter in the Universe, from dwarf galaxies to galaxy clusters. In this talk, I will discuss recent progress in self-interacting dark matter.
40. Movie "The Dark Universe"
123. Galactic Sources of High-Energy Neutrinos
Markus Ahlers
The recent IceCube observation of astrophysical neutrinos in the TeV-PeV energy range has opened a new window to the high-energy Universe. The origin of this flux is unknown. Cosmic neutrinos at PeV energies are produced by hadronic interactions of cosmic ray (CR) nucleons at 20-30 PeV and can possibly be related to a Galactic source population. I will review Galactic candidate sources of...
73. Indirect Dark Matter Search with GAPS Antiproton and Antideuteron Measurement
Tsuguo Aramaki (C)
The general antiparticle spectrometer (GAPS) experiment is a proposed indirect dark matter search focusing on antiparticles produced by WIMP (weakly interacting massive particle) annihilation and decay in the Galactic halo. Since antideuteron signals at low energy are free of background, GAPS has a strong capability to observe dark matter signatures through the antideuteron search. In...
245. Main results of the PAMELA space experiment after 8 years in orbit
Roberta Sparvoli (University of Rome Tor Vergata)
In about 8 years of data taking in space, the experiment PAMELA has shown very interesting features in cosmic rays, namely in the fluxes of protons, heliums, electrons, that could have significant implications on the production, acceleration and propagation of cosmic rays in the galaxy. In addition, PAMELA measurements of cosmic antiproton and positron fluxes are setting strong constraints to...
275. Status of VERITAS after the Upgrade
Dr Nepomuk Otte (Georgia Institute of Technology)
VERITAS is an array of four 12 m class Cherenkov telescopes for very-high-energy gamma-ray (>50 GeV) observations. The VERITAS Collaboration completed a series of upgrades in summer 2012 with the objective of lowering the energy threshold and improving the sensitivity of the array at all accessible energies. One telescope was relocated, the trigger system was replaced, and the cameras were...
105. On the interpretation of the IceCube astrophysical neutrino signal
Julia Tjus
Recently, the IceCube collaboration has announced a first evidence of a high-energy neutrino signal from astrophysical sources. The signal, based on a number of 28 events, is at a level of approximately $E^{2}*dN/dE\sim 10^{-8}$ GeV/(s sr cm$^{2}$) and at this point does not show any directional correlation. In this talk, the different cosmic ray emitting source candidates are reviewed in the...
249. Status and recent results of the MAGIC Cherenkov telescopes
Dr Emiliano Carmona (CIEMAT)
MAGIC, the system of two imaging atmospheric Cherenkov telescopes located at the Canary island of La Palma, has successfully explored the very-high-energy (VHE) sky in stereoscopic mode since 2009. Thanks to its two 17-m diameter mirror dishes, MAGIC has provided unique results in the low-energy range of ground-base gamma-ray astronomy. In addition, the substantial upgrades introduced in the...
119. Anti-nuclei from Dark Matter
Andrea Vittino (Universita' di Torino and IPhT/CEA Saclay)
Light anti-nuclei, namely anti-deuteron and anti-helium, can be produced through the nuclear coalescence of the anti-protons and the anti-neutrons that are originated in a dark matter pair annihilation event. At low kinetic energies, the fluxes of these bound states are found to dominate over the astrophysical background and thus anti-nuclei may be considered as a very promising channel for a...
306. Cosmic Ray Energetics And Mass for the International Space Station (ISS-CREAM)
Prof. Eun-Suk Seo (University of Maryland)
The balloon-borne Cosmic Ray Energetics And Mass (CREAM) experiment was flown for ~161 days in six flights over Antarctica. High energy cosmic-ray data were collected over a wide energy range from ~ 10^10 to > 10^14 eV at an average altitude of ~38.5 km with ~3.9 g/cm2 atmospheric overburden. Cosmic-ray elements from protons (Z = 1) to iron nuclei (Z = 26) are separated with excellent charge...
124. A revised view of the ultra-high energy cosmic ray-neutrino connection: the case of gamma-ray bursts
Mauricio Bustamante (DESY Zeuthen / Universität Würzburg)
The origin of ultra-high energy cosmic rays (UHECRs), with energies above $10^{18}$ eV, remains unknown fifty years after their discovery. Gamma-ray bursts (GRBs) are arguably among the most likely sources: their high luminosities (> $10^{52}$ erg/s) hint at the possibility that strong magnetic fields in them are able to shock-accelerate protons to the high energies that are necessary to...
229. Extragalactic Background Light and Intergalactic Magnetic fields
Andrii Neronov (Universite de Geneve (CH))
I will review the techniques of the measurement of optical / infrared extragalactic background light (EBL) and of intergalactic magnetic fields (IGMF) using the effect of absorption of very-high-energy gamma-rays in the intergalactic medium. I will summarise the existing constraints on both EBL and IGMF and discuss perspectives of improvement of the measurements with the next generation...
88. Searching for Dark Matter Signatures in Cosmic Rays with CALET
Dr Holger Motz (Waseda University)
The Calorimetric Electron Telescope (CALET) will be installed at the ISS in JFY 2014 and measure the energy and direction distribution of electron/positron cosmic rays well into the TeV range. Featuring a proton rejection capability of $1:10^5$ and an energy resolution of 2$\%$, it is well suited to investigate features in the spectrum, testing the hypotheses of Dark Matter annihilation and/or...
230. The Calorimetric Electron Telescope (CALET) on ISS for High Energy Astroparticle Physics
Prof. Shoji Torii (Waseda University)
The CALET space experiment, currently under development by Japan in collaboration with Italy and the United States,will measure the flux of Cosmic Ray electrons (and positrons) to 20 TeV, gamma rays to 10 TeV and nuclei with Z=1 to 40 up to 1,000 TeV during a five year mission. These measurements are essential to investigate possible nearby astrophysical sources of high energy electrons,...
71. Diffuse Neutrino Flux from Star-forming Galaxies
Irene Tamborra
Star-forming galaxies are predicted to contribute considerably to the cosmic gamma-ray background as they are the most numerous population of gamma-ray sources. The hadronic interactions responsible for high-energy gamma rays also produce high-energy neutrinos. We discuss the expected intensity of the diffuse high-energy neutrinos from star-forming galaxies and conclude that such a population...
217. Very High Energy Gamma-rays from Flat Spectrum Radio Quasars
Elina Lindfors (U)
The detection of Flat Spectrum Radio Quasars (FSRQs) in the Very High Energy (VHE, E>100 GeV) range is challenging, mainly because of their steep soft spectra and relatively large distances. Nevertheless three FSRQs are now known to be VHE emitters, all of them have been detected by the MAGIC telescopes. The detection of the VHE gamma-rays has challenged the emission models of these sources....
244. Cosmic-ray antiproton constraints on WIMP annihilation in our Galaxy
Carmelo Evoli (Hamburg University)
The latest years have seen steady progresses in weakly interacting massive particle (WIMP) dark matter (DM) searches, with hints of possible signals suggested both in direct and indirect detection. Cosmic-ray (CR) antiprotons play a key role in this context, since WIMP annihilations can be a copious source of antiprotons, and, at the same time, the antiproton flux from conventional...
194. Neutrinos from Galactic Sources in the Light of Recent Results in Gamma Ray and Neutrino Astronomy
Viviana Niro (U)
We revisit the prospect of observing the sources of the Galactic cosmic rays. In particular, we update the predictions for the neutrino flux expected from sources in the nearby star-forming region in Cygnus, considering the recent TeV gamma ray measurements of their spectra. We focus on three Milagro sources: MGRO J2019+37, MGRO J1908+06 and MGRO J2031+41 and calculate the confidence level...
235. The cosmic electron and positron flux measurement with the AMS-02 experiment
Valerio Vagelli (KIT - Karlsruhe Institute of Technology (DE))
The AMS-02 detector is a large acceptance cosmic ray detector operating on the International Space Station since May 2011. About 40 billion events have been collected by the instrument in the first 30 months of data taking. Among them, 10.5 million electrons and positrons have been selected to measure the cosmic lepton energy spectrum at energies up to the TeV scale. In this contribution we...
102. Dark matter annihilations and decays after the AMS-02 positron measurements
Anna Lamperstorfer (TUM)
We use the new positron data from the AMS-02 experiment to set limits on dark matter annihilations and decays in different channels. In this work it is assumed that the positron background consists of secondary positrons from spallations and an additional primary component of astrophysical origin. We show that the positron flux and the positron fraction give competitive limits on the dark...
58. Interpretation of AMS-02 electrons and positrons data and Dark Matter constraints.
Mattia Di Mauro (University of Turin and INFN Turin)
We perform a combined analysis of the recent AMS-02 data on electrons, positrons, electrons plus positrons and positron fraction, in a self-consistent framework where we realize a theoretical modeling of all the astrophysical components that can contribute to the observed fluxes in the whole energy range. The primary electron contribution is modeled through the sum of an average flux from...
111. Latest Results on Searches for Point and Extended Sources with Time Integrated and Time Dependent emissions of Neutrinos with the IceCube Neutrino Observatory
Asen Christov (Universite de Geneve (CH))
We have performed a variety of searches for neutrino emission from astrophysical sources using multiple years of IceCube data collected between April 2008 and May 2011 by the partially-completed IceCube detector, as well as the first year of data from the completed 86-string detector. Utilizing spatial, energy and time information, an unbinned maximum likelihood method is used to distinguish...
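As background for the method quoted here, the unbinned maximum likelihood used in such point-source searches is usually of the standard form (illustrative notation, not necessarily that of the analysis):
$$ \mathcal{L}(n_s) = \prod_{i=1}^{N} \left[ \frac{n_s}{N}\, S_i + \left( 1 - \frac{n_s}{N} \right) B_i \right], $$
where $N$ is the number of events in the sample, $n_s$ the fitted number of signal events, and $S_i$, $B_i$ the signal and background probability densities of event $i$ constructed from its direction, energy and, for time-dependent searches, arrival time.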
145. New directions in Dark Matter Searches from the Sun
Carsten Rott (Sungkyunkwan University)
Dark matter particles captured by the Sun through scattering may annihilate and produce neutrinos, which escape. Current searches have focused on the high-energy neutrino signal produced in the prompt decays of some final states. Interactions of hadronic annihilation products lead to other interesting final states with potentially observable neutrino signals. The talk will discuss the...
261. The intensity and origin of the isotropic gamma-ray background
Markus Ackermann (D)
The data collected by the Fermi Large Area Telescope (LAT) enable a huge step forward in measuring and understanding the origins of the isotropic gamma-ray background (IGRB). The IGRB originates from the superposition of different populations of unresolved sources with possible contributions from genuinely diffuse and exotic processes. In most parts of the sky it is sub-dominant to the...
159. PAMELA and AMS-02 electron and positron spectra: what do they imply ?
Dario Grasso (INFN)
We use the three-dimensional upgrade of the DRAGON code to model the electron and positron spectra measured by PAMELA and AMS-02. Presently this is the only cosmic ray (CR) propagation package which allows one to account for a realistic spiral-arm distribution of CR sources in the Galaxy. We find that, once the propagation models are tuned to reproduce the B/C and proton data, the lepton data...
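A simplified form of the lepton transport equation solved by propagation codes such as DRAGON reads (schematic; the full treatment also includes, e.g., reacceleration and convection):
$$ \frac{\partial \psi}{\partial t} - \nabla \cdot \big[ D(E)\, \nabla \psi \big] - \frac{\partial}{\partial E} \big[ b(E)\, \psi \big] = Q(\vec{x}, E), $$
where $\psi$ is the electron/positron density per unit energy, $D(E)$ the diffusion coefficient, $b(E) = -dE/dt \propto E^2$ the synchrotron and inverse-Compton loss rate, and $Q$ the source term (here including the spiral-arm distribution of sources).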
116. Search for Neutrinos from Dark Matter Annihilation in the Galactic Center with IceCube
Martin Bissok (RWTH Aachen)
Dark matter may self-annihilate, and produce a flux of final-state particles, including neutrinos. Indirect dark matter searches target regions of increased dark matter density, and thus increased expected flux, with the Galactic center being the most prominent target region in the Milky Way. IceCube is a cubic-kilometer-scale neutrino detector embedded in glacial ice at the South Pole....
129. Searches for point sources and small-scale anisotropies in IceCube
Anna Bernhard (TU München)
The IceCube neutrino observatory, built in the Antarctic ice, offers unique opportunities for studying high energy neutrino emission from galactic and extragalactic sources. Detecting such neutrino emission could give invaluable information about the origin of cosmic rays. Recently, the first evidence for astrophysical neutrinos in the PeV range was found with IceCube. No identification of point...
212. Very High Energy Blazars and the Potential for Cosmological Insight
Amy Furniss
Gamma-ray blazars are among the most extreme astrophysical sources, harboring phenomena far more energetic than those attainable by terrestrial accelerators. These galaxies are understood to be active galactic nuclei that are powered by accretion onto supermassive black holes and have relativistic jets pointed along the Earth line of sight. The emission displayed is variable at all...
151. Constraints on cosmic-ray origin from gamma-ray observations of supernova remnants
Marianne Lemoine-Goumard (CNRS)
Supernova remnants (SNRs) are thought to be the primary sources of the bulk of Galactic cosmic-ray protons observed at Earth, up to the knee energy at ~3 PeV. Our understanding of CR acceleration in SNRs mainly relies on the so-called Diffusive Shock Acceleration theory which is commonly invoked to explain several observational (though, indirect) lines of evidence for efficient particle...
93. Searching for annihilating dark matter in nearby galaxies and galaxy clusters with IceCube
Meike de With (Humboldt University, Berlin)
In many models, the self-annihilation of dark matter particles will create neutrinos which can be detected on Earth. An excess flux of these neutrinos is expected from regions of increased dark matter density, for example galaxies and galaxy clusters. The IceCube neutrino observatory, a cubic-kilometer neutrino detector at the South Pole, is capable of detecting neutrinos down to energies of...
135. A search for partially contained neutrino induced particle showers with IceCube
Mr Achim Stoessl (DESY)
Recent results from IceCube show evidence for a diffuse, highly energetic flux of astrophysical neutrinos. The analyses used to select neutrino candidate events employ veto techniques which use the outer part of the detector to suppress the atmospheric muon background. Shower-like events comprise an important part of the observed evidence for extraterrestrial neutrino induced events for the veto...
90. A search for Dark Matter in the centre of the Earth with the IceCube neutrino detector.
Mr Jan Kunnen (Vrije Universiteit Brussel)
Many models predict that dark matter consists of Weakly Interacting Massive Particles (WIMPs). Heavy celestial bodies, such as the Earth, might capture these WIMPs, accumulate them in their gravitational centre and over time these dark matter particles will self-annihilate. These annihilations may produce standard model particles, including neutrinos. Large scale neutrino telescopes, such...
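The interplay of capture and annihilation sketched in this abstract is commonly described by the schematic evolution equation (evaporation neglected; illustrative notation):
$$ \frac{dN}{dt} = C_{\rm cap} - C_A N^2, \qquad \Gamma_A = \tfrac{1}{2} C_A N^2 = \tfrac{1}{2} C_{\rm cap} \tanh^2(t/\tau), \quad \tau = (C_{\rm cap} C_A)^{-1/2}, $$
so that if the equilibration time $\tau$ is much shorter than the age of the capturing body, the annihilation rate $\Gamma_A$, and hence the expected neutrino flux, is set by the capture rate alone.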
278. Extended Blazar Observations by VERITAS and Implications for the Extragalactic Background Light
Mr Yerbol Khassen (University College Dublin)
The VERITAS Collaboration has been conducting long-term observations of several TeV blazars at a variety of redshifts to characterise their temporal and spectral properties. The very high energy (VHE, >100 GeV) spectra of TeV blazars are expected to show energy-dependent absorption that increases with redshift due to the interaction of VHE photons with infra-red photons of the extragalactic...
257. Results from the ANTARES Neutrino Telescope
Aart Heijboer (Nikhef)
Operating off the coast of France since 2007, the ANTARES neutrino telescope is the most sensitive high energy neutrino telescope in the Northern Hemisphere. I will present an overview of the science output, including searches for neutrinos from the Fermi bubbles regions and GRBs. Emphasis will be on results from a recent time-integrated search for point-like sources of neutrinos. At...
168. The indirect search for dark matter with the ANTARES neutrino telescope
Christoph Tönnis (Universitat de Valencia)
One of the main goals of neutrino telescopes is the indirect search for dark matter. The ANTARES detector, installed in the Mediterranean Sea, has been taking data since 2007. In this talk we present the results on different potential dark matter sources, including the Sun, the Galactic Center, the Earth, dwarf galaxies and galaxy clusters, produced with different analysis methods, and will show...
101. Overview and status of cosmic ray antideuteron searches
Prof. Philip Von Doetinchem (University of Hawaii at Manoa)
In recent years the interest in cosmic ray antideuteron measurements has increased due to the detection potential for signals from a variety of dark matter, primordial black hole, or gravitino models. This talk will review the motivations and status of cosmic ray antideuteron searches and discuss future detection prospects.
267. Radio galaxies and their central machine
Prof. Karl Mannheim (University Wuerzburg)
Radio galaxies are a prime target for studies of the processes that lead to the formation of extragalactic jets. Their spectral energy distributions do not agree with the ones of blazars after an appropriate increase of the orientation angle. In fact, the de-boosted emission from the relativistic jets opens the view to the magnetospheric jet formation region associated with accreting...
66. Higher order dark matter annihilations in the Sun and implications for IceCube
Mr Sebastian Wild (Technical University Munich)
Dark matter particles captured in the Sun would annihilate producing a neutrino flux that could be detected at the Earth. In some channels, however, the neutrino flux lies in the MeV range and is thus undetectable at IceCube, namely when the dark matter particles annihilate into electrons, muons or light quarks. In this talk we show that the same interaction that mediates the annihilations...
218. Search for a diffuse cosmic neutrino flux using the ANTARES data from 2007-2012
Florian Folger (University of Erlangen)
The ANTARES neutrino telescope, located in the deep sea offshore the French Mediterranean coast, aims at the detection of cosmic neutrinos in the TeV/PeV range. It has been continuously taking data since 2007. In this contribution a search for a diffuse cosmic neutrino flux is presented. The focus is laid on a recently finished analysis of showering events induced by all three neutrino...
307. Gamma-ray Astrophysics with AGILE: Surprises and Challenges
Dr Giovanni Piano (INAF)
The AGILE space mission, currently in its eighth year of operations in orbit, obtained a large number of crucial and unexpected results. We will review the main results for both Galactic and extragalactic sources, and outline some of the most surprising discoveries (gamma-ray flares from the Crab Nebula, detection of Cygnus X-3 and Cygnus X-1 in coincidence with special spectral transitions,...
232. Measurement of the Cosmic Rays Boron-to-Carbon Ratio with AMS-02.
Alberto Oliva (Centro de Investigaciones Energ. Medioambientales y Tecn. (ES))
AMS-02 is a high-energy particle physics experiment operating continuously since May 2011 on board the International Space Station. Given the wide acceptance, long exposure time and particle identification capabilities, AMS-02 is able to determine the cosmic-ray (CR) chemical composition from charge $Z=1$ up to at least $Z=26$ in a kinetic energy range from GeV/n to a few TeV/n. Among the...
263. Multi-Messenger analyses with the ANTARES High Energy Neutrino Telescope
Agustín Sánchez Losa (IFIC (Spain))
ANTARES is currently the largest operating neutrino telescope in the Northern Hemisphere, mainly sensitive to TeV neutrinos. Its main goal is the detection of high energy neutrinos from astrophysical sources, which would provide important insights about the processes powering their engines and would help understand the origin of high energy cosmic rays. To identify unambiguously such...
13. Constraints on Self Interacting Dark Matter from IceCube Results
Denis Robertson (Instituto de Fisica, Universidade de Sao Paulo)
If dark matter particles self-interact, their capture by astrophysical objects should be enhanced. As a consequence, the rate by which they annihilate at the center of the object will increase. If their self scattering is strong, it can be observed indirectly through an enhancement of the flux of their annihilation products. Here we investigate the effect of self-interaction on the neutrino...
9. Decaying Dark Matter inside Neutron stars
Dr M. Angeles Perez-Garcia (University of Salamanca and IUFFyM)
We propose that the existing population of neutron stars in the galaxy can help constrain the nature of decaying dark matter. The amount of decaying dark matter accumulated in the central regions of neutron stars, and the energy deposition rate from decays, may set a limit on the neutron star survival rate against transitions to more compact stars and, correspondingly, on the dark matter...
178. Measurement of the boron and carbon fluxes with the PAMELA experiment
Nicola Mori (Universita e INFN (IT))
PAMELA is a satellite-borne experiment, aimed at precision measurements of the charged light component of the cosmic-ray spectrum. It consists of a magnetic spectrometer, a time-of-flight system, an electromagnetic calorimeter, an anticoincidence system and a neutron detector. Recently, the PAMELA collaboration has finalized the measurement of the absolute fluxes of boron and carbon and...
246. ANTARES constraints on the neutrino flux from the Milky Way
Erwin Visser (Nikhef)
A guaranteed source of neutrinos is the production in cosmic ray interactions with the interstellar matter in our galaxy. The ANTARES neutrino telescope, located in the Mediterranean Sea, offers high visibility of the central region of the Milky Way, where most of this diffuse neutrino flux is expected. ANTARES data from 2007-2012 were used to compare the flux from a region centered around the...
51. Constraining asymmetric dark matter with asteroseismology
Dr Jordi Casanellas (Max Planck Institute for Gravitational Physics (Albert Einstein Institute))
Low-mass asymmetric dark matter (DM) particles are appealing DM candidates that are not detectable with most indirect DM searches. However, these particles may efficiently accumulate in the core of low-mass stars, reducing their central temperatures and inhibiting the formation of small convective cores in 1.1-1.3 M$_{\odot}$ stars, thus leaving a characteristic signature in the low-degree...
241. Secondaries from supernova remnants and new AMS-02 data
Philipp Mertsch (KIPAC, Stanford University)
Recently, the AMS-02 collaboration has presented data on cosmic ray protons, Helium, electrons and positrons as well as the boron-to-carbon ratio. We present the first consistent modelling of these data, paying particular attention to the contribution due to production and acceleration of secondary electrons and positrons in nearby supernova remnants. This process results in an additional,...
112. Synchrotron pair halo and echo emission from blazars in the cosmic web: application to extreme TeV blazars
Foteini Oikonomou (University College London)
High-frequency-peaked, high-redshift blazars are extreme in the sense that their spectrum is particularly hard and peaks at TeV energies. Standard leptonic scenarios often require peculiar source parameters and/or a special setup in order to account for these observations. Electromagnetic cascades seeded by ultra-high energy cosmic rays (UHECRs) in the intergalactic medium have also been...
33. Invited Talk: Phenomenology with Massive Neutrinos
Concepcion Gonzalez-Garcia (YITP, Stony Brook and ICREA, U. Barcelona)
I will review our present understanding of neutrino properties in the light of the existing data: their masses, the leptonic mixing, CP violation, the possibility of new light states, non-standard interactions.
31. Invited Talk: Cosmic Rays
Dr Stefano Gabici
I will give a brief overview on the recent developments in cosmic ray research. The current hypotheses about the origin of these particles will be discussed. The connections with photon (from radio to gamma rays) and neutrino observations will be highlighted.
21. Invited Talk: Extragalactic Gamma Ray Sources
Prof. Peter Meszaros (Pennsylvania State University)
I will review the most prominent types of extragalactic gamma-ray sources, such as gamma-ray bursts, AGNs and other galaxies, including some specific individual sources, and the effects expected from intergalactic shocks. I will then discuss some of the physical models used to describe these objects, and the possible connections between the gamma-ray emission and cosmic ray as well as...
22. Invited Talk: Galactic particle accelerators
Dr Rolf Buehler (DESY)
Our galaxy hosts a zoo of astronomical particle accelerators. In this presentation I will discuss recent gamma-ray observations of these sources and what they have told us about their inner workings. Among others, I will discuss recent observations of Supernova Remnants and Pulsars. I will also talk about the increasing population of time variable gamma-ray sources, such as Novae, Binary systems...
36. Invited Talk: Status and Future of Cherenkov Telescope Arrays
Jim Hinton (University of Leicester)
The enormous potential of the imaging atmospheric Cherenkov technique (IACT) for high energy astrophysics has been demonstrated by the currently operating HESS, MAGIC and VERITAS telescope arrays. The technique provides excellent angular resolution at high energies and huge collection area in comparison with space-based instruments such as Fermi-LAT. The Cherenkov Telescope Array (CTA) is a...
24. Invited Talk: The Fermi Large Area Telescope: science highlights and prospects for the extended mission
Luca Baldini (INFN-Pisa)
Launched on June 11 2008, the Fermi Gamma-ray Space Telescope has successfully completed its first six years of operation in space. We shall briefly review the status of the observatory, along with some of the most recent science highlights, and discuss the prospects for the extended phase of the mission.
236. Status of observations of PWNe and SNRs in the gamma-ray regime
Ms Emma De Ona Wilhelmi (IEEC-CSIC Barcelona)
The last few years have witnessed a revolution in very high energy gamma-ray astronomy (VHE; E>100 GeV), driven largely by a new generation of Cherenkov telescopes. These new facilities, namely H.E.S.S. and the new 28-meter-sized mirror H.E.S.S. 2, MAGIC and its upgrade MAGIC 2, and VERITAS, were designed to increase the flux sensitivity in the energy regime of hundreds of GeV, expanding...
264. Gamma-ray observations of the pulsar wind nebula 3C58 with the Fermi-Large Area Telescope
Dr Marie-Helene GRONDIN (CENBG, Bordeaux, France)
Successfully launched on June 11, 2008, the Large Area Telescope (LAT), aboard the Fermi Gamma-ray Space Telescope is sensitive to gamma-rays with energies from about 20 MeV to more than 300 GeV and covers the full sky every 3 hours. The improved sensitivity and the unprecedented statistics offered by the LAT in comparison to its predecessor EGRET enable the study of various classes of...
128. Global status of neutrino oscillations
Dr Antonio Palazzo (MPI)
I will present the current status of the global neutrino data analysis, pointing out its unique role in constraining the two crucial (still) unknown parameters: the CP-violating phase delta and the theta_23 octant. In this context, I will discuss the slight overall preference for theta_23 in the first octant and for non-zero CP violation with sin delta < 0. The (in-)stability of such...
63. PeV neutrino events in IceCube: are they related to Dark Matter?
Arman Esmaili Taklimi
Recent observations by IceCube, notably two PeV cascades accompanied by events at energies $\sim (30-400)$ TeV, are clearly in excess over atmospheric background fluxes and beg for an astroparticle physics explanation. In this talk I will discuss the possibility to interpret the IceCube data by PeV mass scale decaying Dark Matter. I discuss generic signatures of this scenario, including its...
113. Recent results from the LUX dark matter search experiment
Alexandre Lindote (L)
The LUX dark matter search experiment, the world's largest dual-phase xenon time projection chamber, is installed 1478 m underground at the Sanford Underground Research Facility in Lead, SD, USA. In this talk I will present the results from the first WIMP search run of LUX: from a total exposure of 85 live-days, we found no evidence of signal above expected background, constraining the...
115. HESS J1640$-$465 - an exceptionally luminous TeV gamma-ray SNR
Stefan Ohm (D)
HESS J1640$-$465 is among the brightest Galactic TeV gamma-ray sources ever discovered by the High Energy Stereoscopic System (H.E.S.S.). Its likely association with the shell-type supernova remnant (SNR) G338.3$-$0.0 at a distance of $\sim$10 kpc makes it the most luminous Galactic source in the TeV regime. Our recent analysis of follow-up observations with H.E.S.S. reveal a significantly...
147. Fermi LAT observations of gamma-ray pulsars
Lucas Guillemot (L)
Observations of the gamma-ray sky with the Large Area Telescope (LAT) on the Fermi satellite have revealed significant pulsations from about 150 young and recycled (millisecond) pulsars. These observations have shown that pulsars are by far the largest source class in the Galactic plane at GeV energies, and new pulsars are still continuously being found at the locations of LAT sources with no...
171. Neutrino Physics with Super-Kamiokande
Edward Kearns (Boston University)
I will review the latest results in neutrino physics from the Super-Kamiokande experiment, on behalf of the collaboration. Super-Kamiokande is a 50-kton water Cherenkov detector located in Kamioka Japan, operational since 1996. The Super-K collaboration studies atmospheric neutrinos, solar neutrinos, supernova neutrinos, and neutrinos from possible dark matter annihilation.
103. Limits on Light WIMPs: LUX, lite and beyond
Paolo Gondolo (University of Utah)
This talk will present a reexamination of the current direct dark matter data including the recent CDMSlite, LUX, and SuperCDMS data, assuming that the dark matter consists of light WIMPs, with mass close to 10 GeV/$c^2$ with spin-independent and isospin-conserving or isospin-violating interactions. We have compared the data with a standard model for the dark halo of our galaxy and also in a...
70. Status and perspectives of particle dark matter searches with radio observations
Dr Marco Regis (University of Turin and INFN)
Annihilations or decays of WIMPs in dark-matter (DM) halos can produce high-energy electrons and positrons, which in turn give rise to synchrotron radiation via their interaction with the interstellar magnetic field. The emission typically peaks in the radio band, which is thus a promising range of photon wavelengths for indirect DM searches. I will discuss recent results in the search for...
254. Latest results from the OPERA experiment
Benjamin Büttner (University of Hamburg)
The OPERA experiment is designed to search for $ \nu_\mu \rightarrow \nu_\tau $ oscillations in appearance mode through the direct observation of the $ \tau $ lepton in $ \nu_\tau $ Charged Current interactions. The $ \nu_\tau $ CC interaction is identified through the detection of the $ \tau $ lepton decay topology in the so called Emulsion Cloud Chamber (ECC), passive lead plates...
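For reference, the appearance probability driving this search is, in the standard two-flavour approximation:
$$ P(\nu_\mu \to \nu_\tau) \simeq \sin^2(2\theta_{23})\, \sin^2\!\left( \frac{1.27\, \Delta m^2_{32}\,[{\rm eV}^2]\, L\,[{\rm km}]}{E\,[{\rm GeV}]} \right), $$
with $L \approx 730$ km for the CERN-to-Gran Sasso baseline, which is why only a handful of $\nu_\tau$ candidate events are expected over the full exposure.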
289. The Galactic Centre
Regis Terrier (C)
The centre of our Galaxy is a major laboratory for high energy astrophysics. In particular, it harbours the closest supermassive black hole (SMBH) to us. Its luminosity is extremely low for an object of several million solar masses but there is growing evidence that it experienced periods of much stronger activity in the past. A sustained star formation activity is also taking place in the...
183. The XMASS experiment
Katsuki Hiraide (t)
The XMASS project is a multi-purpose low-background experiment using liquid xenon. The current stage, with an 835 kg LXe detector, was started in 2010. After the commissioning data-taking, we have refurbished the detector to reduce surface backgrounds and resumed data-taking in the autumn of 2013. In this talk, the latest results and future prospects of the experiment will be presented.
120. Models for atmospheric neutrinos and muons, prompt component
Rikard Enberg (Uppsala University)
Atmospheric neutrinos and muons are produced in interactions of cosmic rays with Earth's atmosphere. At very high energy, the contribution from semi-leptonic decays of charmed hadrons, known as the prompt flux, dominates over the conventional flux from pion and kaon decays. This is due to the very short lifetime of the charmed hadrons, which therefore do not lose energy before they decay. The...
131. Search for Light Dark Matter with X-rays and Implications of a Possible Detection
Dr Michael Loewenstein (University of Maryland/CRESST/NASA-GSFC)
After briefly summarizing previous constraints on dark matter candidates that produce X-ray emission lines via radiative decay, with an emphasis on the sterile neutrino and moduli dark matter, I present the recent detection by our team of a candidate dark matter feature at ~3.56 keV. This weak unidentified emission line was discovered by stacking XMM-Newton spectra of 73 galaxy clusters...
205. Galactic interstellar gamma-ray emission
Luigi Tibaldo (SLAC)
The Milky Way shines in gamma rays from MeV to above TeV energies due to interactions of high-energy cosmic rays with interstellar gas and radiation fields. I will review the current status and future challenges for both space-borne and ground-based observations, and I will discuss some implications for the physics of cosmic rays and of the interstellar medium, as well as for the modeling of...
56. Complementarity in direct dark matter searches
Miguel Peiró
In the last decade experiments aiming at the direct detection of dark matter (DM) have increased significantly their sensitivity. In fact, ton-scale setups have been proposed, especially using Germanium and Xenon targets, which raises the hope of a detection in the near future. In light of this situation, it is necessary to study how well the DM parameters (mass, spin-dependent (SD) and...
184. Decaying dark matter in X-rays?
Recently two groups have reported an unidentified line-like feature in the X-ray spectra of dark matter-dominated objects (galaxies and galaxy clusters): Bulbul et al. (http://arxiv.org/abs/1402.2301) and Boyarsky et al. (http://arxiv.org/abs/1402.4119). We discuss the signal and the consistency of its interpretation in terms of dark matter decay.
117. Oscillation physics with atmospheric neutrinos
Prof. Michele Maltoni (Instituto de Fisica Teorica UAM/CSIC)
In this talk we will discuss the physics reach of present and future atmospheric neutrino experiments, both in the context of the standard three-neutrino oscillations scenario and in the presence of New Physics. A particular attention will be devoted to the impact on the determination of the neutrino mass hierarchy.
137. Dark matter searches with the Cherenkov Telescope Array
Dr Christian Farnier (Oskar Klein Centre, Stockholm University), for the CTA Consortium
The current paradigm of the Universe states that more than 80% of its mass content consists of dark matter of unknown origin. Since its discovery more than eighty years ago, the quest for dark matter identification is one of the most important...
139. Results and future prospects of Borexino
Mikko Meyer (University of Hamburg)
The Borexino experiment is a 300 t liquid scintillator detector located at the LNGS in Italy. The main task of the experiment is the real time detection of solar neutrinos. This talk will give an overview of the recent results from the first phase of the experimental program including the measurement of solar neutrinos as well as geoneutrinos. Furthermore an overview of the SOX project is...
200. XENON100, XENON1T, and beyond: Status of the XENON Dark Matter Search
Uwe Oberlack (R)
The XENON Dark Matter program developed the dual-phase liquid xenon TPC into the world-leading detector type for direct WIMP Dark Matter searches with the experiments XENON10 and XENON100. This talk discusses recent results from XENON100, and reports on the status of its successor under construction, XENON1T. With a sensitive volume of about 2.2 tons, XENON1T aims at a sensitivity improvement...
118. Hyper-Kamiokande project
Hiroyuki Sekiya (University of Tokyo)
Hyper-Kamiokande (Hyper-K) will be a next generation underground water Cherenkov detector with the total (fiducial) mass of 0.99 (0.56) million metric tons, which is approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-K is the study of CP asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams. With a total exposure...
132. Scintillation efficiency and effect of electric field in liquid argon
Akira Hitachi (Kochi Medical School)
The scintillation efficiency for low energy recoil ions in liquid argon has been evaluated for WIMP searches. The track structure and the excitation density for recoil ions are calculated; then the prescribed diffusion equation for biexcitonic quenching is solved, and the total quenching factor $q_T$ for 5-240 keV recoil ions in liquid Ar at zero electric field is obtained. All the constants...
152. Searches of Dark Matter with the GAMMA-400 Space Mission
Mr Paolo Cumani (University of Trieste / INFN Trieste)
GAMMA-400 is a Russian space mission with an international contribution, primarily devoted to the study of gamma-rays in the MeV – TeV energy range. One of the main topics addressed by GAMMA-400 will be the search for possible hints of a Dark Matter signal, with observations firstly towards the Galactic Center and Dwarf Galaxies. Thanks to a deep calorimeter of novel concept and a state-of-the-art...
238. Ultra-High Energy Neutrino Radio Frequency Detectors
Dr Carl Gilbert Pfendner (Ohio State University (USA))
The cosmic ray flux cutoff above primary energies of $10^{19.5}$ eV leads us to expect an ultra-high energy (UHE) neutrino flux due to the GZK effect. The detection of these UHE cosmic neutrinos will add to the understanding of the sources and physics of UHE cosmic rays. On interacting within a dense medium, a UHE neutrino will produce an extended particle shower, which in turn produces a...
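The GZK effect invoked here is photo-pion production of UHE protons on CMB photons, schematically
$$ p + \gamma_{\rm CMB} \to \Delta^+(1232) \to p + \pi^0 \ \ {\rm or} \ \ n + \pi^+, \qquad \pi^+ \to \mu^+ \nu_\mu, \ \ \mu^+ \to e^+ \nu_e \bar{\nu}_\mu, $$
with a threshold of roughly $5 \times 10^{19}$ eV; the decay chain of the charged pions is the origin of the expected cosmogenic ('GZK') neutrino flux targeted by radio-frequency detectors.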
213. Probing the intrinsic electron recoil rejection power in liquid xenon for dark matter searches
Prof. Kaixuan Ni (Shanghai Jiao Tong University)
Liquid xenon is one of the best detection media for dark matter direct detection, as demonstrated by the XENON100 and LUX experiments. Rejecting the electron recoil background from material radioactivity of the detector system and even solar and atmospheric neutrinos remains the most challenging task for future dark matter detectors based on liquid xenon. Here we will present an...
255. Sensitivity of CTA to Gamma Rays from Dark Matter Annihilations
Hamish Silverwood (University of Amsterdam)
The nature of Dark Matter (DM) is a pressing question, and can be investigated through the detection of gamma rays produced by annihilating DM. The upcoming Cherenkov Telescope Array (CTA) will provide increased sensitivity to high energy gamma rays and hence higher mass DM particles. When conducting analyses of the capability of CTA it is important to study the effects of backgrounds....
225. The prospects and development of the third ANITA flight
Dr Harm Schoorlemmer (University of Hawaii at Manoa)
The Antarctic Impulsive Transient Antenna (ANITA) is a balloon-borne ultra-high-energy particle observatory. At a cruising altitude of $\sim$36 km, it provides a panoramic view of the Antarctic ice sheet in the 200-1200 MHz band. ANITA has been designed to detect Askaryan radiation from ultra-high-energy ($>10^{18}$ eV) neutrino interactions in the ice. Two successful flights have led to...
253. A maximum likelihood analysis of the CoGeNT public dataset
Chris Kelso (University of Utah)
The CoGeNT collaboration has released more than 3 years of data including the spectrum and time variation of the nuclear candidate events in their germanium detector. We perform an unbinned, maximum likelihood fit to the data, accounting for known backgrounds and systematic effects to search for dark matter interactions with the detector. Background and possible signals are characterized by...
49. On gamma ray spectral features from scalar dark matter
Michel Tytgat (U)
In this talk I will discuss some recent results regarding the gamma ray spectral features from the annihilation of a scalar dark matter particle interacting through a charged particle in the t-channel. In particular I will discuss the relative enhancement of the Bremsstrahlung signal and will present a new calculation of the annihilation in gamma ray lines. I will also present some...
210. Indirect searches of Dark Matter from gamma-ray line signatures with the H.E.S.S. experiment
Mr Matthieu Kieffer (LPNHE Paris)
Weakly Interacting Massive Particles (WIMPs) are currently one of the most popular hypotheses to answer the question of the nature of Dark Matter. Gamma-ray line signatures from self-annihilation of WIMPs can be detected at very-high energies by the H.E.S.S. imaging air Cherenkov telescope in observations of the Galactic Center (GC) region. In 2012, phase II of H.E.S.S. started with the...
45. Invited Talk: Global Fits of Supersymmetry
Roberto Ruiz De Austri (IFIC)
I review the present status of Global Fits of Supersymmetry.
46. Invited Talk: Heavy neutral leptons in cosmology
Mikhail Shaposhnikov (EPFL)
Heavy neutral leptons may play an important role in particle physics and cosmology, explaining neutrino masses, dark matter and baryon asymmetry of the universe. The prospects for their experimental searches will be discussed.
34. Invited Talk: Still life: the Standard Model Higgs boson and beyond
Christophe Grojean (ICREA - Institucio catalana de recerca estudis avancats (ES))
With the discovery of the long sought-after Higgs boson at CERN in July 2012, a new state of matter and a new dynamical principle have been revealed as essential building blocks of the fundamental laws of physics. The Brout-Englert-Higgs mechanism also provides a solution to the half-century-old mass conundrum, i.e. the apparent incompatibility between the mass spectrum of the elementary...
23. Invited Talk: Astroparticle physics in the next decade
Lars Bergstrom (Stockholm University)
Astroparticle physics is by now a mature field of physics, bridging astrophysics, cosmology, nuclear and particle physics. This talk will deal with what has been accomplished and what the outstanding questions are. An overview is given, containing some thoughts about how to best obtain the answers to the unsolved questions, given realistic technological advancements and financial resources.
35. Invited Talk: CERN, Update and Perspectives
Rolf Heuer (CERN)
The talk will present the ongoing scientific program at CERN and will give an outlook towards possible future projects. Particular emphasis will be given to the European Strategy for Particle Physics and its implications for the particle physics program at CERN and worldwide.
157. Fermi Bubbles from the Galactic Bar and Spiral Arms
Wim De Boer (KIT - Karlsruhe Institute of Technology (DE))
A survey of the diffuse gamma-ray sky revealed 'bubbles' of emission above and below the Galactic disc, symmetric around the centre of the Milky Way, so they are presumed to originate from the centre, with a height of 10 kpc. They have been proposed to be blown by cosmic rays originating from the star formation in the Galactic Centre, or jet activity from the supermassive black hole in the GC, or even...
265. The Spectrum, Morphology and Luminosity of Galactic Gamma-Ray Pulsars
Tim Linden (U)
Gamma-Ray observations by the Fermi-LAT have uncovered a substantial population of gamma-ray bright pulsars in our galaxy. Using 5.5 years of Fermi data, we measure the spectrum and morphology of both the young and recycled pulsars, and show current data allows for a direct measurement of the gamma-ray luminosity function of the pulsar population. We apply the results of our analysis to the...
144. Characterizing the gamma-ray excess observed in the inner galaxy
Nicholas Rodd (M)
Recent work has confirmed the presence of a gamma-ray excess in data from the Fermi Space Telescope extending at least 10 degrees from the Galactic Center. I will describe recent progress in characterizing this signal by using photons with the highest quality angular reconstruction, with an emphasis on the extended "inner galaxy" region. I will focus on cross-checks we have performed to...
262. Improving Fermi-LAT Angular Resolution with CTBCORE
Stephen Portillo (Harvard University)
The Large Area Telescope on the Fermi Gamma-ray Space Telescope has a point spread function with large tails, consisting of events affected by tracker inefficiencies, inactive volumes, and hard scattering; these tails can make source confusion a limiting factor. The parameter CTBCORE, available in the publicly available Extended Fermi-LAT data, estimates the quality of each event's direction...
242. Recent Results of SuperCDMS
Dr Julien Billard (MIT)
The SuperCDMS experiment has operated a 9kg array of cryogenic detectors to search for weakly interacting massive particles (WIMPs) in the Soudan Underground Lab since early 2012. We have recently finished analyzing 600 kg-d of low-energy data on a subset of detectors with an energy threshold of 1.6 keVnr. The use of the athermal phonon measurement provides position sensitivity, and therefore...
227. Results from the Pierre Auger Observatory: spectrum and composition
Eun-Joo Ahn (F)
Cosmic rays have now been observed for over a century. The nature and origins of the ultra high energy cosmic rays are questions that have yet to be definitively answered. The southern Pierre Auger Observatory, located in Argentina, is currently the world's largest detector of ultra high energy cosmic rays. With an unprecedented amount of data collected, it is shedding light on questions about the...
169. SM Higgs results at the LHC
Prof. Guenakh Mitselmakher (University of Florida)
Results of studies at the LHC collider by the CMS and ATLAS experiments of the recently discovered Higgs boson are presented. The measured properties of the new particle are consistent with the predictions of the Standard Model.
256. Scrutinizing the Diffuse Gamma-Ray Emission in the Inner Galaxy
Tansu Daylan (H)
The recently discovered gamma-ray excess in the inner galaxy has important implications for either astrophysics or dark matter. Regardless of its origin, studies of the anomalous emission suffer from poor astrophysical modeling and large uncertainties in the background emission in the region of interest. Therefore, understanding the gamma-ray background components in the inner galaxy is...
198. Searches for invisible Higgs at the LHC
Renjie Wang (Northeastern University (US))
The results from searching for invisible decay of Higgs bosons at LHC are presented. No significant excess is found beyond the Standard Model prediction, and new limits are set on the production cross section times invisible branching fraction, as a function of the Higgs boson mass, using a combination of data collected in proton-proton collisions at center-of-mass energies of 7 TeV and 8 TeV...
197. Analysis of 3.4 years of CoGeNT Data
Marek Kos (P)
The CoGeNT dark matter detector has been taking data at the Soudan mine since December 2009. The data have been analyzed for a possible WIMP signal using multi-dimensional PDFs in energy, time, and pulse rise-time. The bulk event (fast rise-time pulses) and surface (slow rise-time) event fractions are determined through this analysis. We have also done extensive simulations of backgrounds...
239. Telescope Array Experiment : Recent Results and Future Plans
Kazumasa Kawata
The Telescope Array (TA) is the largest ultra-high-energy cosmic-ray (UHECR) detector in the northern hemisphere, consisting of 507 surface detectors covering a total of 700 km^2 and three fluorescence detector stations. The TA has been fully operating at Millard County, Utah, USA, since 2008. In this presentation, we will discuss our recent results on the UHECR energy spectrum, mass...
314. The GeV excess in the inner Galaxy: Discussion of background systematics
Francesca Calore (University of Amsterdam)
Recently, a spatially extended excess of gamma rays compatible with a DM signal from the inner region of the Milky Way has been claimed by different and independent groups, using Fermi LAT data. Yet, final statements about the morphology and spectral properties of such an extended diffuse emission are under debate, given the high complexity of this sky region. In this talk I will present an...
59. Fitting the Fermi-LAT GeV excess: on the importance of the propagation of electrons from dark matter
Mr Thomas Lacroix (Institut d'Astrophysique de Paris (IAP))
An excess of gamma rays at GeV energies has been detected in the Fermi-LAT data. This signal comes from a narrow region around the Galactic Center and has been interpreted as possible evidence for light (10-30 GeV) dark matter particles annihilating either into a mixture of leptons-antileptons and $b\bar{b}$ or into $b\bar{b}$ only. Focussing on the prompt gamma-ray emission, previous work...
202. Searches for beyond the standard model Higgs in ATLAS & CMS
Johann Collot (Centre National de la Recherche Scientifique (FR))
The discovery of a Higgs boson, consistent with the Standard Model, has heralded a new era in which fundamental scalar fields may be safely called to play a central rôle to solve long-lasting physics questions : grand unification of forces, supersymmetry, dark matter, cosmic inflation... To express it in other words, if the sophisticated Brout-Englert-Higgs mechanism really works as we believe...
193. A CoGeNT Analysis: Is there evidence for a Dark Matter signal?
Jonathan Davis (IPPP, Durham University)
Data from the CoGeNT experiment has been claimed to be compatible with a light Dark Matter particle scattering off nucleons, to a significance of approximately 2.5 sigma. I will critically assess this possibility, using the methods introduced in arXiv:1405.0495. I present a Bayesian and frequentist analysis of CoGeNT data, with particular focus on the removal of surface events. By...
315. Updated antiproton, positron and radio limits on light dark matter
Martin Vollmann
In this brief presentation, I will discuss how cosmic-ray and radio observations impose stringent constraints on dark matter (DM) candidates with masses in the ~1-50 GeV range. We find strong bounds on DM annihilating into light leptons, or democratically into all leptons from cosmic ray positron data, while complementarily, cosmic ray antiproton and radio data show considerable tension with...
110. Studies of the arrival direction distribution of cosmic rays at the Pierre Auger Observatory
Silvia Mollerach
The Pierre Auger Observatory has been in operation since January 2004, detecting cosmic rays with energies from a few tens of PeV to more than 100 EeV. We present the results of anisotropy studies of the arrival directions at different angular scales and energies using both the data recorded by the 1500 m grid array, covering 3000 km^2 and fully efficient above 3 EeV, and the 750 m grid array,...
312. Results from dark matter searches in dwarf galaxies
Dr Alex Geringer-Sameth
I will present the latest results from a search for dark matter annihilation in a large sample of Milky Way dwarf galaxies. Nearly 6 years of data from the Fermi Gamma-ray Space Telescope are analyzed by weighting individual photons based on their spatial and spectral properties. Such searches are powerful enough to probe the relic abundance cross section for some dark matter masses. I will...
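The reach of such searches follows from the standard expression for the annihilation gamma-ray flux from a target subtending a solid angle $\Delta\Omega$ (written here for self-conjugate dark matter; illustrative notation):
$$ \frac{d\Phi_\gamma}{dE_\gamma} = \frac{\langle \sigma v \rangle}{8\pi\, m_\chi^2}\, \frac{dN_\gamma}{dE_\gamma}\, J, \qquad J = \int_{\Delta\Omega} \int_{\rm l.o.s.} \rho_\chi^2(l,\Omega)\, dl\, d\Omega, $$
so probing the thermal relic cross section $\langle \sigma v \rangle \simeq 3 \times 10^{-26}\,{\rm cm^3\, s^{-1}}$ requires both a large J-factor, as provided by nearby dwarf galaxies, and good control of the gamma-ray backgrounds.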
295. Higgs Implications for Dark Matter
Jernej F. Kamenik (Jozef Stefan Institute)
We investigate the impact of hypothetical new neutral light particles on the tiny width of a light Higgs boson. Reviewing the possible signatures in the Higgs decay modes with missing energy, in many cases simply preventing these modes from being dominant suffices to set tight model-independent constraints on the masses and couplings of the new light states. We then apply this analysis to...
277. Acceleration to ultra-high energies
Dr Martin Lemoine (Institut d'Astrophysique de Paris)
The origin of the highest energy cosmic rays with energy >~ 10^{18}eV is a rather intricate puzzle, with a central question: how to accelerate particles to extreme energies ~10^{20}eV or more, and in which astrophysical source? Other questions arise as well, such as: are these cosmic rays protons or nuclei? Why is there no powerful source in the arrival directions of the highest energy cosmic...
313. Panel discussion: The GeV excess
161. The Majorana Low-Background Experiment at KURF (MALBEK)
Reyco Henning (UNC Chapel Hill)
The MAJORANA DEMONSTRATOR is an array of natural and enriched high purity germanium detectors that will search for the neutrinoless double-beta decay of germanium-76 and perform a search for WIMPs with masses below 10 GeV. As part of the MAJORANA research and development efforts, we have deployed a modified, low-background broad energy germanium detector at the Kimballton Underground Research...
274. Constraints on Sources and Composition of UHECRs
Glennys Farrar (NYU)
A simple and well-motivated modification of QCD at ultrahigh energy naturally explains the observed Xmax distribution above 10^18 eV, with a purely protonic composition. This removes the need for a fine-tuned source composition and very hard injection spectrum as required if UHECRs have a mixed composition, and eliminates the need to postulate an additional population of extragalactic...
292. Inferences about Dark Matter in the Smallest Galaxies
Prof. Matthew Walker (CMU)
The Local Group's dwarf galaxies represent the lower limit of galaxy formation. Inferences about the amount and spatial distribution of dark matter within these objects can provide tests of cosmological models and predictions for indirect detection experiments. I will summarize current observational constraints and discuss prospects for improvement.
175. Measuring the Effectiveness of Effective Field Theories of Dark Matter
Thomas David Jacques (Universite de Geneve (CH))
As beyond-standard-model physics continues to elude discovery at the LHC, it becomes increasingly important to ask what we can learn about dark matter in a model-independent way. Effective Field Theories have become popular as a way to construct model-independent constraints on dark matter, but at LHC energies it is crucial to understand their significance and limitations. I will present ways to...
79. Status of the ANAIS Dark Matter project
Ms Patricia Villar (PhD Fellow)
The ANAIS (Annual Modulation with NaI(Tl) Scintillators) experiment aims at the confirmation of the DAMA/LIBRA signal using the same target and technique at the Canfranc Underground Laboratory. 250 kg of ultrapure NaI(Tl) crystals will be used as a target, divided into 20 modules, each coupled to two photomultipliers. Two NaI(Tl) crystals of 12.5 kg each, grown by Alpha Spectra from a powder...
122. Multi-Messenger Signatures of UHE CRs
The origin of ultra-high energy (UHE) cosmic rays (CRs) is an unsolved puzzle. Multi-messenger observations in the form of gamma-rays and neutrinos can help to constrain the cosmic evolution and emission spectra of UHE CR candidate sources. I will discuss the production of cosmogenic gamma-ray and neutrino fluxes from the propagation of UHE CRs through the cosmic radiation background. These...
304. Beyond EFT for DM@LHC
Andrea De Simone (CERN; Ecole Polytechnique Federale de Lausanne (CH))
I discuss alternatives to Effective Field Theory to pursue dark matter searches at the LHC, and propose some benchmark scenarios for fairly model-independent strategies that LHC experiments can follow in the investigation of DM.
291. Dark matter particle constraints from dwarf spheroidals
Dr Jorge Penarrubia (Royal Observatory Edinburgh)
Dwarf spheroidal galaxies are the faintest galaxies in the Universe and as such play a fundamental role in galaxy formation models. In addition, their internal kinematics suggest the presence of large amounts of non-baryonic matter on very small scales. In models where dark matter (DM) consists of exotic particles formed shortly after the Big Bang, the high phase-space densities inferred in...
172. The PICO Dark Matter Search Programme
Carsten Krauss (U)
Superheated liquids, such as CF$_3$I, have been shown to have excellent properties when operated in large scale bubble chambers for the detection of dark matter. The PICO collaboration is currently operating a new detector at SNOLAB to demonstrate that a large scale chamber with superheated C$_3$F$_8$ can be operated stably. This paper will present the latest status and results from the...
163. An explanation to the diffuse gamma-ray emission.
Fiorenza Donato
We evaluate the contribution of various AGN populations to the high latitude gamma-ray emission in the GeV-TeV energy range. We give an estimation of the unresolved energy spectrum and compare our results to the data collected by the Fermi-LAT. We also briefly discuss the relevant anisotropy in the IGRB. Finally, we give some hints to the possibility left to dark matter searches in the...
14. Explaining galactic structure and direct detection experiments with mirror dark matter
Dr Foot Robert (School of Physics, University of Melbourne)
A duplicate set of particles and forces (mirror particles) are required if nature obeys an exact parity symmetry. Mirror baryons are a candidate for the inferred dark matter of the Universe. The only new parameter postulated is photon - mirror photon kinetic mixing of strength $\epsilon \sim 10^{-9}$. Recent work indicates that such a theory might be capable of explaining galactic structure...
65. Probing dark matter in dwarf galaxies with non-parametric mass models
Pascal Steger (E)
We propose a new non-parametric method to determine the mass distribution in spherical systems. A high dimensional parameter space encoding tracer density, line of sight velocity anisotropy and total mass density is sampled using MultiNest. Without assumptions on the functional form of any of these profiles, we can reproduce reliably the total mass density and velocity anisotropy...
160. QCD effects in mono-jet searches for dark matter
Dr Emanuele Re (University of Oxford)
I will discuss theoretical uncertainties in predictions for "mono-jet" signals, and show how these predictions can be affected by extra radiation due to QCD emissions. I'll present results obtained by matching parton showers with NLO corrections, as implemented in the publicly-available MC program POWHEG-BOX, that I will quickly overview. Time permitting, I'll also show how further...
279. Cold plus warm dark matter models
Dr Oleg Ruchayskiy (Ecole Polytechnique Federale de Lausanne)
Analyses of the cosmic microwave background anisotropies and of galaxy survey data allow for the possibility that dark matter particles were born relativistic yet became non-relativistic well before matter-radiation equality. Such "warm" or "cold plus warm" dark matter models may still have observable signatures at sub-Mpc scales, e.g. modifying the structure of galactic halos and their...
180. Mono-jet searches in ATLAS and CMS
Valerio Rossetti (Stockholm University (SE))
We present results by the ATLAS and CMS experiments on a search for new phenomena in pp collision events with one high momentum jet and large missing transverse energy. The data are compared to the SM prediction of the background, dominated by the W/Z+jets production with neutrinos and mis-reconstructed charged leptons in the final state. The results are interpreted in the context of different...
130. Neutrinoless double beta decay and dark matter searches with CUORE-0 and CUORE
Mrs Martinez Maria (Universidad de Zaragoza)
The CUORE (Cryogenic Underground Observatory for Rare Events) experiment, currently under construction at the Gran Sasso National Laboratory (LNGS), will operate 988 TeO$_2$ bolometers at a temperature of around 10 mK, adding up to a total mass of 750 kg. CUORE-0, a 52 TeO$_2$ bolometer array built using the same protocols developed for CUORE, is currently in operation at LNGS and has recently...
273. JEM-EUSO and the future of the UHECR field
Angela Olinto (The University of Chicago)
Thanks to giant extensive air-showers observatories, such as the Pierre Auger Observatory and the Telescope Array, we now know that the sources of ultrahigh energy cosmic rays (UHECRs) are extragalactic. We also know that either they interact with the CMB as predicted or they run out of energy at the same energy scale of the CMB interactions! Their composition is either surprising (dominated...
156. Impact of the nature of the dark matter on structure formation
Tom Theuns (d)
I will discuss the key difference between warm dark matter and cold dark matter on the formation of Milky Way-like galaxies, demonstrating that a generic aspect of WDM is the formation of stars in filaments that connect to the forming galaxy. Such dense filaments also appear as Lyman Limit systems (LLSs) in the spectra of background QSOs, and the correlation function of LLSs could...
191. Mono-W/Z searches in ATLAS and CMS
Andy Nelson (University of California Irvine (US))
Searches for mono-W and Z bosons are presented in the hadronic+MET and dileptonic+MET channels using the ATLAS experiment and the mono-lepton+MET channel using CMS at the Large Hadron Collider. The full 2012 data set produced at a center of mass energy of 8 TeV is used comprising 20 fb-1. No statistically significant deviation from the Standard Model is observed. Limits are set on the mass...
189. Tests on the first crystals for KIMS-NaI experiment
Ms Kyungwon Kim (Center for Underground Physics (IBS))
KIMS-NaI is a dark matter search experiment using NaI(Tl) as a scintillating crystal at the Yangyang underground laboratory to verify the DAMA experiment. Two NaI(Tl) crystals grown from different powders and with different sizes are used in a test experiment. The crystals, coupled to two PMTs with high quantum efficiency, were surrounded by twelve CsI(Tl) crystals used for KIMS-CsI...
248. Cosmic ray mass composition measurements with LOFAR
Stijn Buitink (Radboud University Nijmegen)
It is generally believed that ultra-high-energy cosmic rays are produced in extragalactic sources like gamma-ray bursts or active galactic nuclei, while the lower energy cosmic rays come from our own Galaxy. At what energy the transition from Galactic to extragalactic origin takes place is still a mystery, but most models place it somewhere between $10^{17}$ and $10^{19}$ eV. With LOFAR we can...
296. Dark Matter at a Linear Collider
Herbi Dreiner (Bonn University)
287. First Data from DM-Ice17, Prospects for DM-Ice
Neil Spooner (University of Sheffield)
Astrophysical observations and cosmological data give overwhelming evidence that the majority of the mass of the Universe is comprised of dark matter. For over 15 years, the DAMA collaboration has asserted that they observe a dark matter-induced annual modulation in their data. Several alternative hypotheses have been proposed as explanations for the observation of an annual modulation...
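The annual modulation referred to here is conventionally parametrised as (illustrative notation):
$$ \frac{dR}{dE_R}(E_R, t) \simeq S_0(E_R) + S_m(E_R)\, \cos\!\left[ \frac{2\pi}{1\,{\rm yr}}\, (t - t_0) \right], $$
with the phase $t_0$ expected in early June, when the Earth's velocity through the Galactic dark matter halo is largest; repeating the measurement with the same NaI(Tl) target in a different environment and hemisphere is the basic idea behind DM-Ice.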
280. The assembly history and structure of cold dark matter halos
Aaron Ludlow (Argelander-Institut fuer Astronomie)
I will discuss the relation between the accretion history and mass profile of cold dark matter (CDM) haloes, emphasizing how an appropriate definition of their formation times can be used to determine their characteristic radii. This result is based on the finding that the average mass accretion history, expressed in terms of the critical density of the Universe, resembles the enclosed...
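For context, such mass profiles are commonly described by the NFW form, whose scale radius $r_s$ is the characteristic radius referred to above:
$$ \rho(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2}, $$
with $\rho_s$ a characteristic density; the concentration $c = r_{200}/r_s$ then encodes how early the bulk of a halo's mass was assembled.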
294. Radio measurements of air showers with AERA
Prof. Jörg Hörandel (Radboud University Nijmegen/Nikhef)
High-energy cosmic rays impinging onto the atmosphere of the Earth initiate cascades of secondary particles: extensive air showers. The electrons and positrons in air showers interact with the geomagnetic field and emit radiation, which we record in the tens-of-MHz regime. Radio emission from air showers is measured with the Auger Engineering Radio Array (AERA) at the Pierre Auger Observatory...
37. Invited Talk: WISPy Dark Matter
Prof. Joerg Jaeckel
Very light bosons, produced non-thermally in the early Universe are an intriguing possibility for the cold dark matter of the Universe. Particularly interesting candidates are axions, axion-like particles and hidden photons. This talk will discuss the current status of such light dark matter with a particular emphasis towards opportunities for its detection.
17. Invited Talk: Dark Messages from the LHC
Tim Tait (University of California, Irvine)
I will review the impact of the LHC program on our understanding of particle dark matter.
4. Invited Talk: The Dark Sector
Prof. Jonathan Feng (UC Irvine)
The idea that dark matter resides in a dark sector, accompanied by other dark particles and forces, has many realizations. I will discuss a number of these, focusing on several motivated by recent experiments, observations, and simulations.
28. Invited Talk: Indirect Searches for Particle Dark Matter
Torsten Bringmann (Hamburg University)
One of the main strategies to probe the particle nature of dark matter is the identification of possible contributions from the annihilation or decay of these particles in the spectrum of cosmic rays and radiation. A wealth of observational data, both existing and upcoming, makes this a very timely and active approach that starts to rule out the most popular models in a way that is...
18. Invited Talk: The Alpha Magnetic Spectrometer on the International Space Station
The AMS-02 detector is a wide acceptance high-energy physics experiment operating since May 2011 on board the International Space Station. It consists of six complementary sub-detectors providing measurements of the energy, the mass and the charge, leading to an unambiguous identification of the cosmic rays. To date, more than 40 billion cosmic ray events have been collected. Performance of...
68. New physics searches in ATLAS and relation to astroparticle physics
Vincent Francois Giangiobbe (IFAE-Barcelona (ES))
The existence of Dark Matter (DM) is by now well established, and the fit of the cosmological model parameters to various measurements leads to a density of the cold non-baryonic matter representing 26.5% of the critical density. Despite this relatively large density, the nature of the DM remains unknown. Amongst the preferred candidates for DM are the Weakly Interacting Massive Particles...
69. New physics searches in CMS
Jim Brooke (University of Bristol (GB))
Searches for a wide range of physics beyond the Standard Model have been performed using CMS at the LHC. Final results from the 7 and 8 TeV datasets will be presented. The presentation will cover results on Supersymmetry, direct production of dark matter, new resonances, large extra dimensions, long lived particles and other exotic new physics. Some prospects of the future discovery potential...
293. The Universal Rotation Curve and Dark Matter Halos around Galaxies
Dr Paolo Salucci (SISSA)
Recent observations have revealed the structural properties of the dark and luminous mass distribution in spirals. These results led to the vision of a new and amazing scenario. The investigation of single and coadded objects has unanimously shown that the rotation curves of spirals follow, from their centres out to their virial radii, a universal profile. This profile implies a tuned...
107. Dark matter in the Milky Way: new dynamical constraints
Miguel Pato (TU Munich)
The distribution of dark matter in the Milky Way is poorly constrained at present and represents a major uncertainty for both direct and indirect dark matter searches. In this talk, I shall present new constraints on the dark matter distribution based on photometric and dynamical observations of our Galaxy. First, state-of-the-art models for the distribution of baryons are calibrated against...
134. Are there sterile neutrinos at the eV scale?
Thomas Schwetz-Mangold (Stockholm University (SE))
I will discuss the status of several hints for sterile neutrinos at the eV scale. While those hints point towards a similar neutrino mass scale various constraints on the mixing angles make it difficult to obtain a good description of all data simultaneously. I will review the situation from oscillation experiments and mention briefly additional constraints from cosmology.
301. Interplay of direct, indirect and collider searches
Christopher McCabe
Interplay of direct, indirect and collider searches
95. The EDELWEISS III Experiment
Silvia Scorza (KIT)
EDELWEISS is a direct dark matter search situated in the low radioactivity environment of the Modane Underground Laboratory. The experiment uses Ge detectors operated at 20 mK in a dilution refrigerator in order to identify eventual rare nuclear recoils induced by elastic scattering of WIMPs from our Galactic halo. I will describe the current EDELWEISS-III program, including improvements of...
269. Unprecedented results on the Crab nebula and pulsar with the MAGIC telescopes.
MAGIC is a system of two atmospheric Cherenkov telescopes located in the Canary island of La Palma. MAGIC has a low energy threshold, down to 50 GeV, well suited to study the still poorly explored energy band below 100 GeV. Although the space-borne gamma-ray telescope Fermi/LAT is sensitive up to 300 GeV, gamma-ray rates drop fast with increasing energy, and statistics are scarce above a few...
106. Dark Matter in the Milky Way: model-independent determination.
fabio iocco (Instituto de Fisica Teorica)
We use a new compilation of data for the Rotation Curve of our own Galaxy in order to assess evidence for a Dark component of matter. We construct the rotation curve expected from a large sample of models of the baryonic (star and gas) component of the Milky Way, and infer the missing component with high statistical evidence. This model-independent approach shows evidence for a dark component...
282. The Galactic Center region at very-high energies with H.E.S.S.
Aion Viana (M)
The Galactic Centre region has been observed by the complete H.E.S.S.-I array of ground-based Cherenkov telescopes since 2004 leading to the detection of the very-high-energy (VHE, E > 100 GeV) gamma-ray source HESS J1745-290 coincident in position with the supermassive black hole Sgr A*. A TeV gamma-ray diffuse emission has been detected along the Galactic ridge, very likely to be related to...
96. Constraining light sterile neutrinos with cosmological data
Maria Archidiacono (Aarhus University)
In the last few years the imprint of light sterile neutrinos on cosmological data sets has been deeply investigated within the framework of different theoretical scenarios. Nevertheless the question whether cosmology can accommodate the existence of additional neutrinos is still open. The strong dependence of the results on the underlined cosmological model and on the included data sets...
220. Hypercharged Dark Matter and Direct Detection as a Probe of Reheating
Dr Brian Feldstein (University of Oxford)
Abstract: I will discuss the implications of hypercharged dark matter, which is a generic possibility which leads to very large scattering cross sections at direct detection experiments. In fact, current and planned experiments are probing masses for such particles up to an amazing $10^8-10^{10}$ GeV. If a detection were made, then the scattering rate would reveal the dark matter mass, and...
243. Impact of Semi-annihilation of Z3 Symmetric Dark Matter with Radiative Neutrino Masses
Dr Takashi Toma (Durham University)
We investigate a Z3 symmetric model with radiative neutrino masses at two loop level. A particle which can be Dark Matter in the model is either a Dirac fermion or a complex scalar as a result of unbroken Z3 symmetry. In addition to typical annihilation processes of Dark Matter, semi-annihilation processes give an important effect when the relic density is calculated together with some...
222. Baryonic and dark matter distribution features in cosmological simulations of spiral galaxies
Pol Mollitor (L)
We study three high resolution cosmological hydrodynamical simulations of Milky Way-sized halos including a comparison with the corresponding DM-only runs performed with the adaptive mesh refinement code RAMSES. We analyse the stellar and gas distribution and find one of our simulated galaxies with interesting Milky Way like features with regard to several observational tests. Thanks to...
206. The Spectrum and Morphology of the Fermi Bubbles
Anna Franckowiak (SLAC/KIPAC)
The Fermi bubbles are two large structures in the gamma-ray sky extending up to 55 deg above and below the Galactic center. We present our analysis of 50 months of Fermi-LAT pass7 reprocessed data from 100 MeV to 500 GeV above 10 deg in Galactic latitude to derive the spectrum and morphology of the Fermi bubbles. We perform a detailed study of the systematic uncertainties due to the modeling...
203. Performance of LHC searches with MET for models with compressed spectra
Luca Panizzi (University of Southampton)
Searches for events with Missing Transverse Energy at the LHC are among the most powerful methods for the identification of Dark Matter candidates. For this purpose, selection and kinematic cuts have often been designed assuming that the mass hierarchies between the Dark Matter candidate and strongly-interacting states of the model are large, as it is generally the case in supersymmetric...
114. Latest results from the CDEX experiment at China Jinping Underground Laboratory
Shin-Ted LIN (Sichuan University)
We present the latest results on light Dark Matter WIMP searches at China Jinping Underground Laboratory from a p-type point-contact germanium detector enclosed by a NaI(Tl) anti-Compton detector. An order of magnitude improvement in the sensitivities for low-mass WIMP searches over our previous results is achieved. The analysis procedures of our results as well as the status and perspectives...
64. Sterile neutrinos at neutrino telescopes
Atmospheric neutrino data collected by huge neutrino telescopes, such as IceCube, provide the opportunity to probe new physics unprecedentedly, both due to high statistics and the high energy range. In this talk I discuss the effect of sterile neutrinos on atmospheric neutrino flux. I present the current constraints on active-sterile mixing obtained from IC-40 and IC-79 data sets. Also the...
3. The dark matter density profile in spherical systems: A simple way to get more information from the Jeans equation
Mr Tom Richardson (King's College London)
Detailed measurements of the dark matter density profile in systems such as dwarf galaxies and galaxy clusters would allow us to test predictions from N-body simulations of cold dark matter and how complex astrophysical effects and interactions with baryons may have reshaped dark matter halos. Traditionally, the Jeans equation is used to constrain the dark matter density profile in spherical...
259. Towards an Improved Model of Diffuse Gamma-ray Emission from the Milky Way: mapping the dust, gas, and radiation field.
Douglas Finkbeiner (Harvard University)
The strongest WIMP annihilation signals are expected from the inner Milky Way, but foreground contribution from cosmic-ray interactions with the gas and radiation field are strongest there as well. Therefore, indirect detection has been hampered by insufficient knowledge of the diffuse gamma-ray foregrounds. Improved modeling requires a 3D map of gas and dust (for $\pi^0$ and brem...
309. Can AMS-02 discriminate the origin of an anti-proton signal?
Giorgio Busoni (SISSA, Trieste)
Indirect searches can be used to test dark matter models against expected signals in various channels, in particular antiprotons. With antiproton data available soon at higher and higher energies, it is important to test the dark matter hypothesis against alternative astrophysical sources, {\it e.g. } secondaries accelerated in supernova remnants. We investigate the degeneracy of the two...
60. Light WIMP searches with Germanium Detectors of sub-keV Sensitivities
Hau-Bin Li (Academia Sinica, Taipei, Taiwan)
Hau-Bin Li, Institute of Physics, Academia Sinica, Taiwan. (On behalf of the TEXONO Collaboration) Germanium detectors with sub-keV sensitivities can probe low-mass WIMP Dark Matter. This experimental approach is pursued at the Kuo-Sheng Neutrino Laboratory (KSNL) in Taiwan and at the China Jinping Underground Laboratory (CJPL) in China via the TEXONO and CDEX programs, respectively. The...
92. Search for sterile neutrinos with the STEREO experiment
Jacob Lamblin (U)
All previous neutrino oscillation experiments at short distance from reactors have measured a small deficit of neutrinos with respect to predictions. This deficit could be explained either by a systematic error in the flux prediction, or by the existence of a new neutrino state, a light sterile neutrino. This new neutrino with no ordinary weak interaction would not be directly detected but...
290. PANGU: a High Resolution Gamma-Ray Space Telescope
Dr Meng Su (MIT, USA)
We propose a high angular resolution telescope dedicated to the sub-GeV gamma-ray astronomy as a candidate for the CAS-ESA joint small mission. This mission, called PANGU (PAir-productioN Gamma-ray Unit), will open up a unique window of electromagnetic spectrum that has never been explored with great precision. A wide range of topics of both astronomy and fundamental physics can be...
91. Gamma-rays from the Inert Doublet Model at the TeV scale.
Mr Camilo Garcia Cely (Technical University Munich)
The Inert Doublet Model contains a neutral stable particle which is a viable dark matter candidate. I will discuss the indirect signatures of this model in gamma-rays when the dark matter mass is at the TeV scale. In particular, I will consider the interplay between the annihilation process into two photons and the internal bremsstrahlung process $DM DM \to W^+W^- \gamma$. I will show that...
142. Sterile neutrino dark matter: from production to halo formation
Dr Aurel Schneider (University of Sussex)
I will discuss the scenario of sterile neutrino dark matter produced by decays of heavy scalars. This is an ideal toy example to illustrate how the production of dark matter influences the formation of structures in the universe.
179. Effects of sterile states on lepton dipole moments
Valentina De Romeri (CNRS)
We investigate the contribution of sterile states to the anomalous magnetic and electric dipole moments of charged leptons. Furthermore, as a specific example, we study this effect in a low-scale seesaw model. We perform a complete numerical study scanning the relevant parameter space of the models.
148. A dark matter search using CCDs
Federico Izraelevitch (Fermilab)
DAMIC is a novel dark matter search experiment that has a unique sensitivity to hypothetical dark matter particles with masses below 10 GeV. Due to the CCD's low electronic readout noise (R.M.S. ~ 3 electrons), this instrument is able to reach a detection threshold of 60 eV, suitable for the search in the low mass range. The excellent energy response and high spatial resolution of a CCD image...
155. Cosmology Falling in Love with Sterile Neutrinos
Joern Kersten (University of Bergen)
Despite the astonishing success of standard $\Lambda$CDM cosmology, there is mounting evidence for a tension with observations at small and intermediate scales (missing satellites, cusp vs. core and too big to fail problems). We introduce a simple model where both cold dark matter (DM) and sterile neutrinos are charged under a new $U(1)_X$ gauge interaction. The resulting DM...
286. Direction-Sensitive Dark Matter Detection with the DMTPC Experiment
Prof. Jocelyn Monroe (Royal Holloway University of London)
The Dark Matter Time Projection Chamber (DMTPC) collaboration is developing a low-pressure TPC with optical and charge readout for direction-sensitive dark matter detection, in order to correlate a dark matter candidate nuclear recoil signal, in a detector deep underground, with the earth's motion through the galactic dark matter halo. The unique angular signature of the dark matter wind,...
141. PINGU and the Neutrino Mass Hierarchy
Douglas Cowen (Pennsylvania State University)
The Precision IceCube Next Generation Upgrade (PINGU) is a proposed IceCube in-fill array designed to measure the neutrino mass hierarchy using atmospheric neutrino interactions in the ice cap at the South Pole. PINGU will have a neutrino energy threshold of a few GeV with a multi-megaton effective volume. We present PINGU's expected sensitivity to the hierarchy with optimized geometry and...
305. The H-dibaryon: possible dark matter particle within QCD
The H dibaryon is a potentially very deeply-bound 6-quark state (uuddss) with a mass of ~1.5 GeV. It is a spin-0, flavor-singlet, scalar carrying baryon-number of 2. As will be reviewed, such a particle would have evaded detection in accelerator and other searches. (Preliminary lattice simulations show it is deeply bound compared to other 6-quark states, but they are not yet good enough...
182. Asymmetric dark matter and $2 \leftrightarrow 2$ interactions
Iason Baldes (University of Melbourne)
Common mechanisms invoked to explain particle antiparticle asymmetries involve the out-of-equilibrium and CP violating decay of a heavy particle. In this talk I discuss the role CP violating $2 \leftrightarrow 2$ annihilations can play -- together with the usual $1 \leftrightarrow 2$ decays and inverse decays -- in determining the final asymmetry. I will present a simple toy model to point out...
162. Origin of Neutrino Mass and the LHC
Michael Schmidt (The University of Melbourne)
To unravel the mystery of neutrino masses and mixing angles, we adopt a bottom-up approach based on effective operators which violate lepton number by two units. By opening the effective operators, we can find the corresponding minimal UV completions. We discuss how the minimal UV completions of the dimension-7 operators can be tested at the LHC as well as one example based on a dimension-9 operator.
192. The DRIFT directional WIMP detectors - improved limits and progress to scale-up
The DRIFT (directional recoil identification from tracks) concept is currently the most sensitive technique being developed with capability to observe a galactic signature for WIMP dark matter by measuring the direction of WIMP-induced nuclear recoils in a gas. The collaboration is well advanced in the design and testing of a next generation experiment, DRIFT III, comprising up to 24 m3...
164. Warm Dark Matter from the Large Scale Structure
Dr Katarina Markovic (University of Manchester)
Warm Dark Matter (WDM) is a generalisation of the standard Cold Dark Matter model in the sense that it does not assume dark matter particles to be absolutely cold. In the simplest models all dark matter is made of the same particles, which started out in thermal equilibrium and cooled to effectively become cold today. If such particles have masses of the order of a keV or less, they leave an...
196. FIMP realization of the scotogenic model
Dr Emiliano Molinaro (TUM)
The scotogenic model is one of the simplest scenarios for physics beyond the Standard Model that can account for neutrino masses and dark matter at the TeV scale. It contains another scalar doublet and three additional singlet fermions (Ni), all odd under a Z2 symmetry. We examine the possibility that the dark matter candidate, N1, does not reach thermal equilibrium in the early Universe...
281. Dark-matter distributions around massive black holes: A general relativistic Analysis
Francesc Ferrer (W)
The cold dark matter at the center of a galaxy will be redistributed by the presence of a massive black hole. We apply the adiabatic growth framework in a fully relativistic setting to obtain the final dark-matter density for both cored and cusped initial distributions. Besides the implications for indirect detection estimates, we show that the gravitational effects of such a dark-matter spike...
126. Model Independent Bounds in Direct Dark Matter Searches
Paolo Panci (Institute d'Astrophysique de Paris (IAP))
Direct searches for Dark Matter (DM) aim at detecting the nuclear recoils arising from a scattering between DM particles and target nuclei in underground detectors. Since the physics that describes the collision between DM particles and target nuclei is deeply non-relativistic, in this presentation I'll review a different and more general approach to study signal in direct DM searches based on...
121. The Majorana nature of massive neutrinos as a possible hint for new physics
Aurora Meroni (Università Roma Tre/LNF)
Determining the nature - Dirac or Majorana - of massive neutrinos, possibly related to a New Physics scale beyond that predicted by the Standard Model is a fundamental problem under study. Significant experimental efforts have been made to unveil the possible Majorana nature of massive neutrinos by searching for neutrinoless double beta decay with increasing sensitivity. These constraints,...
75. Thermal dark matter implies new physics not far above TeV
Csaba Balazs (Monash University)
I present a model independent analysis of thermal dark matter constraining its mass and interaction strengths with data from astro- and particle physics experiments. Using effective field theory to describe interactions of dark matter particles I cover real and complex scalar, Dirac and Majorana fermion, and vector boson dark matter candidates. I show posterior probability distributions for...
97. An open window for high reheating temperatures in supersymmetry
Dr Jan Heisig (RWTH Aachen University)
Supersymmetric scenarios where the lightest superparticle (LSP) is the gravitino are an attractive alternative to the widely studied case of a neutralino LSP. A strong motivation for a gravitino LSP arises from the possibility of allowing higher reheating temperatures which are required by thermal leptogenesis and can be considered more likely in the light of the recently reported BICEP2 data....
146. Bayesian Reconstruction of the WIMP Velocity Distribution Function from Direct Dark Matter Detection Data
Chung-Lin SHAN (N)
In this talk, I present our recent work on the introduction of Bayesian analysis to our model-independent reconstruction of the one-dimensional velocity distribution function of Galactic WIMPs. In this process, the (rough) velocity distribution reconstructed by using raw data from direct Dark Matter detection experiments directly has been used as "reconstructed-input" information. By assuming...
11. eV sterile neutrinos as hot dark matter
Dr Jan Hamann (CERN)
Light sterile neutrinos with masses of order 1 eV have been suggested to resolve anomalies in various neutrino oscillation experiments. In a cosmological context, these sterile neutrinos would act as a (hot) dark matter component. I will review their impact on cosmological observables and discuss the observational status following the recent measurements of the cosmic microwave background...
61. Towards a common origin of neutrino and dark matter
Stefano Morisi
Sterile neutrino is the most straightforward example connecting neutrino physics and DM. But there are different possibilities. For instance if neutrino masses are generated radiatively then new fields must be assumed and they could be good DM candidates. Another example is in the context of flavor symmetries. Spontaneous breaking of flavor symmetries can give an explanation for the stability...
154. Co-annihilating dark matter and effective operator analysis
Yi Cai
We study dark matter (DM) models in which there are two dark sector particles, $\chi_1$ and $\chi_2$, of near mass. In such models, co-annihilation of $\chi_1$ and $\chi_2$ may be the dominant process controlling the DM relic density during freezeout in the early universe. In this scenario, there is no significant contribution to direct and indirect detection signals, unless there exists an...
223. NEWAGE - direction-sensitive dark matter search
Kiseki Nakamura (Kyoto University)
NEWAGE is a direction sensitive WIMP search experiment using micro pixel chamber. After our first underground measurement at Kamioka (PLB686(2010)11), we constructed new detector, NEWAGE-0.3b'. NEWAGE-0.3b' was designed to have a twice larger target volume with low background material, a lowered threshold of $50\,\rm keV$, an improved data acquisition system, and a gas circulation system with...
165. Probation of flavor transition mechanism with cosmogenic neutrinos
Dr Kwang-Chang Lai (Chang Gung University)
The determination of neutrino flavor transition mechanism by neutrino telescopes is presented. With a model-independent parametrization, we are able to classify flavor transitions (such as standard three-flavor oscillations, neutrino decays or others) of astrophysical neutrinos propagating from their sources to the Earth. We demonstrate how one can constrain parameters of the above...
87. The status of late-time decays of dark matter
Prof. Annika Peter (The Ohio State University)
The simplest phenomenological model for cosmological dark matter is the "cold dark matter" (CDM) model. This model assumes that dark matter is cold, collisionless, and stable. Recently, these three tenets of CDM have been challenged on both observational and theoretical grounds. In this talk, I present a review of recent work on investigations into the stability of dark matter. I consider...
214. Gaugino annihilation and co-annihilation with DM@NLO
Patrick Steppeler (WWU Münster)
A powerful method to constrain the MSSM parameter space is to compare the predicted dark matter relic density with cosmological precision measurements, in particular the Planck data. On the particle physics side, the main uncertainty for a given spectrum arises from the (co-)annihilation cross sections of the dark matter particle. After a motivation for including higher order corrections in...
170. Determination of (sterile/active) neutrino absolute masses in Hyper-K by detecting SN neutrinos
Kazunori Kohri (K)
125. Low Mass WIMP Directional Detection
Mr Nguyen Phan (University of New Mexico)
Can directional detection provide any input to the low mass WIMP region? We will show results from measurements of low energy recoils using a low pressure optical TPC which demonstrates the capabilities of a realistic directional detector. Results from those measurements are extrapolated to find the detector characteristics most suitable for low mass WIMP searches. Finally, some preliminary...
195. Using Cosmological Data to study Dark Matter Interactions with Radiation
Mr Ryan Wilkinson (IPPP, Durham University)
Despite the large number of dedicated experiments, an understanding of the particle nature of dark matter and direct evidence for its existence have remained elusive. However, detection methods generally assume that dark matter consists of cold, massive particles. In this talk, I will discuss how cosmological data from the CMB and Large-Scale Structure can be used to study dark matter...
57. Strong thermal leptogenesis and the $N_2$-dominated scenario
Mr Michele Re Fiorentin (University of Southampton)
I will briefly review the main aims and concepts of leptogenesis, analysing different possible realisations. Particular attention will be devoted to the so-called $N_2$-dominated scenario, both in its unflavoured and flavoured versions. Its main features will be pointed out, as well as the impact of possible relevant corrections. I will then consider the conditions required by strong thermal...
25. Invited Talk: Neutrino Astronomy: No Longer a Dream
John Beacom (Ohio State University)
The great promise of neutrino astronomy has been known for decades, though it seemed impossibly out of reach. With neutrinos, we would reveal the insides of astrophysical objects, the high particle energies in the engines that power them, and the original timescales on which those engines evolve. In contrast, with photons, we see just the outsides of these objects, with spectra downgraded by...
38. Invited Talk: The next generation neutrino telescope
Maarten De Jong (NIKHEF (NL))
KM3NeT is a new research infrastructure consisting of a cabled network of deep-sea neutrino telescopes in the Mediterranean Sea. The main objective of KM3NeT is the discovery and subsequent observation of high-energy neutrino sources in the Universe. Three suitable deep-sea sites have been identified, namely off-shore Toulon (France), Capo Passero (Italy) and Pylos (Greece). The list of...
43. Invited Talk: High energy neutrinos from the Cosmos: observations and scenarios
Prof. Elisa Resconi
The neutrino observatory IceCube is opening a new observational window to the Universe. IceCube, which has been fully constructed in the icecap at the South Pole, is taking data since Spring 2011 in full configuration. The first years of data revealed the existence of extremely high-energy neutrinos at hundreds of TeV up to the PeV scale, which are of astrophysical origin. In this...
5. Invited Talk: Status of Ultra-High Energy Cosmic Rays
Esteban Roulet (C)
I will review the present results on Ultra-High energy cosmic rays and discuss the Astrophysical scenarios that could account for them as well as the possible connections to lower energy results and the prospects for the future.
208. Invited Talk: Radio Measurements of Cosmic Ray Properties and Composition with LOFAR
Dr Heino Falcke
29. Invited Talk: Learning about black holes and neutron stars using ground-based gravitational-wave detectors
Prof. Alessandra Buonanno
In the next 5 years, ground-based interferometers such as advanced LIGO, Virgo and KAGRA, are likely to provide the first direct detections of gravitational waves. This will constitute a major scientific discovery, as it will permit a new kind of observation of the cosmos, quite different from today's electromagnetic and particle observations. In this talk I will review the current...
310. Summary Talk
311. Closing Remarks
Gianfranco Bertone
270. Cosmic ray propagation in molecular clouds
Sabrina Casanova (Max Planck fuer Kernphysik)
We solve the transport equations of cosmic rays inside a molecular cloud assuming an arbitrary energy and space dependent diffusion coefficient. Cosmic rays penetrating the cloud produce gamma-ray emission through pp collisions with the ambient gas. We study the influence of the gas density profile on the gamma-ray emission and we present predictions for present and future telescopes to...
211. Dark matter in minimal universal extra dimensions with a stable vacuum and a 126 GeV Higgs boson
Jonathan Cornell (Stockholm University)
The recent discovery of a Higgs boson with mass of about 126 GeV, along with its striking similarity to the prediction from the standard model, informs and constrains many models of new physics. The Higgs mass exhausts one out of three input parameters of the minimal, five-dimensional version of universal extra dimension models, the other two parameters being the Kaluza-Klein (KK) scale and...
207. DarkSide
Giuliana Fiorillo (Universita e INFN (IT))
DarkSide-50 (DS-50) at Gran Sasso underground laboratory, Italy, is a direct dark matter search experiment based on a TPC with liquid argon from underground sources. The DS-50 TPC, with 50 kg of active argon and a projected fiducial mass of >33 kg, is installed inside an active neutron veto based on a boron-loaded organic scintillator. The neutron veto is built inside a water cherenkov muon...
47. Invited Talk: Insights from simulations into the distribution of dark matter: clues to its nature?
Prof. Simon White
Convection Heat Transfer
This page provides the chapter on convection heat transfer from the "DOE Fundamentals Handbook: Thermodynamics, Heat Transfer, and Fluid Flow," DOE-HDBK-1012/2-92, U.S. Department of Energy, June 1992.
Other related chapters from the "DOE Fundamentals Handbook: Thermodynamics, Heat Transfer, and Fluid Flow," covering heat transfer terminology and conduction heat transfer, are also available.
Heat transfer by the motion and mixing of the molecules of a liquid or gas is called convection.
Convection involves the transfer of heat by the motion and mixing of "macroscopic" portions of a fluid (that is, the flow of a fluid past a solid boundary). The term natural convection is used if this motion and mixing is caused by density variations resulting from temperature differences within the fluid. The term forced convection is used if this motion and mixing is caused by an outside force, such as a pump. The transfer of heat from a hot water radiator to a room is an example of heat transfer by natural convection. The transfer of heat from the surface of a heat exchanger to the bulk of a fluid being pumped through the heat exchanger is an example of forced convection.
Heat transfer by convection is more difficult to analyze than heat transfer by conduction because no single property of the heat transfer medium, such as thermal conductivity, can be defined to describe the mechanism. Heat transfer by convection varies from situation to situation (depending upon the fluid flow conditions), and it is frequently coupled with the mode of fluid flow. In practice, analysis of heat transfer by convection is treated empirically (by direct observation).
Convection heat transfer is treated empirically because of the factors that affect the stagnant film thickness:
Fluid velocity
Fluid viscosity
Type of flow (single-phase/two-phase)
Convection involves the transfer of heat between a surface at a given temperature (Ts) and fluid at a bulk temperature (Tb). The exact definition of the bulk temperature (Tb) varies depending on the details of the situation. For flow adjacent to a hot or cold surface, Tb is the temperature of the fluid "far" from the surface. For boiling or condensation, Tb is the saturation temperature of the fluid. For flow in a pipe, Tb is the average temperature measured at a particular cross-section of the pipe.
The basic relationship for heat transfer by convection has the same form as that for heat transfer by conduction:
$$ \dot{Q} = h ~A ~\Delta T $$
\( \dot{Q} \) = rate of heat transfer (Btu/hr)
h = convective heat transfer coefficient (Btu/hr-ft2-°F)
A = surface area for heat transfer (ft2)
ΔT = temperature difference (°F)
The convective heat transfer coefficient (h) is dependent upon the physical properties of the fluid and the physical situation. Typically, the convective heat transfer coefficient for laminar flow is relatively low compared to the convective heat transfer coefficient for turbulent flow. This is due to turbulent flow having a thinner stagnant fluid film layer on the heat transfer surface. Values of h have been measured and tabulated for the commonly encountered fluids and flow situations occurring during heat transfer by convection.
A 22 foot uninsulated steam line crosses a room. The outer diameter of the steam line is 18 in. and the outer surface temperature is 280°F. The convective heat transfer coefficient for the air is 18 Btu/hr-ft2-°F. Calculate the heat transfer rate from the pipe into the room if the room temperature is 72°F.
$$ \begin{eqnarray} \dot{Q} &=& h ~A ~\Delta T \nonumber \\ &=& h ~(2 \pi ~r ~L) ~\Delta T \nonumber \\ &=& \left( 18 ~{\text{Btu} \over \text{hr-ft}^2\text{-}^{\circ}\text{F}} \right) \left[ 2 (3.14) (0.75 ~\text{ft}) (22 ~\text{ft}) \right] (280^{\circ}\text{F} - 72^{\circ}\text{F}) \nonumber \\ &=& 3.88 \times 10^5 ~{ \text{Btu} \over \text{hr} } \end{eqnarray} $$
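As a quick cross-check of this arithmetic, the same calculation can be scripted directly from the convection equation; the variable names below are my own labels for the quantities in the problem statement, not anything from the handbook.

```python
import math

h = 18.0           # convective heat transfer coefficient, Btu/hr-ft2-°F
r = 0.75           # outer radius of the steam line, ft (18 in diameter / 2)
L = 22.0           # length of the steam line, ft
T_surface = 280.0  # outer surface temperature, °F
T_room = 72.0      # room air temperature, °F

A = 2 * math.pi * r * L                 # lateral surface area of the pipe, ft2
Q_dot = h * A * (T_surface - T_room)    # rate of heat transfer, Btu/hr
print(f"Q = {Q_dot:.3e} Btu/hr")        # ≈ 3.88e5 Btu/hr, matching the worked solution
```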
Many applications involving convective heat transfer take place within pipes, tubes, or some similar cylindrical device. In such circumstances, the surface area of heat transfer normally given in the convection equation ( \( \dot{Q} = h ~A ~\Delta T \) ) varies as heat passes through the cylinder. In addition, the temperature difference existing between the inside and the outside of the pipe, as well as the temperature differences along the pipe, necessitates the use of some average temperature value in order to analyze the problem. This average temperature difference is called the log mean temperature difference (LMTD), described earlier.
It is the temperature difference at one end of the heat exchanger minus the temperature difference at the other end of the heat exchanger, divided by the natural logarithm of the ratio of these two temperature differences. The above definition for LMTD involves two important assumptions: (1) the fluid specific heats do not vary significantly with temperature, and (2) the convection heat transfer coefficients are relatively constant throughout the heat exchanger.
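Written out symbolically, with $\Delta T_1$ and $\Delta T_2$ denoting the temperature differences at the two ends of the heat exchanger (labels chosen here only for convenience), this definition reads:

$$ \Delta T_{lm} = { \Delta T_1 - \Delta T_2 \over \ln \left( \Delta T_1 / \Delta T_2 \right) } $$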
Overall Heat Transfer Coefficient
Many of the heat transfer processes encountered in nuclear facilities involve a combination of both conduction and convection. For example, heat transfer in a steam generator involves convection from the bulk of the reactor coolant to the steam generator inner tube surface, conduction through the tube wall, and convection from the outer tube surface to the secondary side fluid.
In cases of combined heat transfer for a heat exchanger, there are two values for h. There is the convective heat transfer coefficient (h) for the fluid film inside the tubes and a convective heat transfer coefficient for the fluid film outside the tubes. The thermal conductivity (k) and thickness (Δx) of the tube wall must also be accounted for. An additional term (Uo), called the overall heat transfer coefficient, must be used instead. It is common practice to relate the total rate of heat transfer (\( \dot{Q} \)) to the cross-sectional area for heat transfer (Ao) and the overall heat transfer coefficient (Uo). The relationship of the overall heat transfer coefficient to the individual conduction and convection terms is shown in Figure 6.
Figure 6: Overall Heat Transfer Coefficient
Recalling Equation 2-3:
$$ \dot{Q} = U_o ~A_o ~\Delta T_o $$
where Uo is defined in Figure 6.
An example of this concept applied to cylindrical geometry is illustrated by Figure 7, which shows a typical combined heat transfer situation.
Figure 7: Combined Heat Transfer
Using the figure representing flow in a pipe, heat transfer by convection occurs between temperatures T1 and T2; heat transfer by conduction occurs between temperatures T2 and T3; and heat transfer occurs by convection between temperatures T3 and T4. Thus, there are three processes involved. Each has an associated heat transfer coefficient, cross-sectional area for heat transfer, and temperature difference. The basic relationships for these three processes can be expressed using Equations 2-5 and 2-9.
$$ \dot{Q} = h_1 ~A_1 ~(T_1 - T_2) $$ $$ \dot{Q} = {k \over \Delta r} ~A_{lm} ~(T_2 - T_3) $$ $$ \dot{Q} = h_2 ~A_2 ~(T_3 - T_4) $$
ΔTo can be expressed as the sum of the ΔT of the three individual processes.
ΔTo = (T1 − T2) + (T2 − T3) + (T3 − T4)
If the basic relationship for each process is solved for its associated temperature difference and substituted into the expression for ΔTo above, the following relationship results.
$$ \Delta T_o = \dot{Q} \left( {1 \over h_1 A_1} + { \Delta r \over k ~A_{lm} } + {1 \over h_2 A_2} \right) $$
This relationship can be modified by selecting a reference cross-sectional area Ao.
$$ \Delta T_o = { \dot{Q} \over A_o } \left( {A_o \over h_1 A_1} + { \Delta r ~A_o \over k ~A_{lm} } + {A_o \over h_2 A_2} \right) $$
Solving for \( \dot{Q} \) results in an equation in the form \( \dot{Q} = U_o ~A_o ~\Delta T_o \).
$$ \dot{Q} = { 1 \over \left( {A_o \over h_1 A_1} + { \Delta r ~A_o \over k ~A_{lm} } + {A_o \over h_2 A_2} \right) } ~A_o ~\Delta T_o $$
$$ U_o = { 1 \over \left( {A_o \over h_1 A_1} + { \Delta r ~A_o \over k ~A_{lm} } + {A_o \over h_2 A_2} \right) } $$
Equation 2-10 for the overall heat transfer coefficient in cylindrical geometry is relatively difficult to work with. The equation can be simplified without losing much accuracy if the tube that is being analyzed is thin-walled, that is the tube wall thickness is small compared to the tube diameter. For a thin-walled tube, the inner surface area (A1), outer surface area (A2), and log mean surface area (Alm), are all very close to being equal. Assuming that A1, A2, and Alm are equal to each other and also equal to Ao allows us to cancel out all the area terms in the denominator of Equation 2-11.
This results in a much simpler expression that is similar to the one developed for a flat plate heat exchanger in Figure 6.
$$ U_o = { 1 \over {1 \over h_1} + { \Delta r \over k } + {1 \over h_2} } $$
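As a minimal sketch of how this thin-wall form is used, the script below evaluates Uo and the resulting heat transfer rate; the film coefficients, wall conductivity, thickness, area, and temperature difference are assumed example values, not figures from the handbook.

```python
h1 = 900.0       # inside film coefficient, Btu/hr-ft2-°F (assumed)
h2 = 600.0       # outside film coefficient, Btu/hr-ft2-°F (assumed)
k = 12.0         # tube wall thermal conductivity, Btu/hr-ft-°F (assumed)
dr = 0.05 / 12   # tube wall thickness, ft (0.05 in, assumed)
A_o = 10.0       # reference heat transfer area, ft2 (assumed)
dT_o = 50.0      # overall temperature difference, °F (assumed)

U_o = 1.0 / (1.0 / h1 + dr / k + 1.0 / h2)  # thin-wall overall heat transfer coefficient
Q_dot = U_o * A_o * dT_o                    # rate of heat transfer, Btu/hr
print(f"U_o = {U_o:.0f} Btu/hr-ft2-°F, Q = {Q_dot:.2e} Btu/hr")
```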
The convection heat transfer process is strongly dependent upon the properties of the fluid being considered. Correspondingly, the convective heat transfer coefficient (h), the overall coefficient (Uo), and the other fluid properties may vary substantially for the fluid if it experiences a large temperature change during its path through the convective heat transfer device. This is especially true if the fluid's properties are strongly temperature dependent. Under such circumstances, the temperature at which the properties are "looked-up" must be some type of average value, rather than using either the inlet or outlet temperature value.
For internal flow, the bulk or average value of temperature is obtained analytically through the use of conservation of energy. For external flow, an average film temperature is normally calculated, which is an average of the free stream temperature and the solid surface temperature. In any case, an average value of temperature is used to obtain the fluid properties to be used in the heat transfer problem. The following example shows the use of such principles by solving a convective heat transfer problem in which the bulk temperature is calculated.
A flat wall is exposed to the environment. The wall is covered with a layer of insulation 1 in. thick whose thermal conductivity is 0.8 Btu/hr-ft-°F. The temperature of the wall on the inside of the insulation is 600°F. The wall loses heat to the environment by convection on the surface of the insulation. The average value of the convection heat transfer coefficient on the insulation surface is 950 Btu/hr-ft2-°F. Compute the bulk temperature of the environment (Tb) if the outer surface of the insulation does not exceed 105°F.
Find heat flux (\( \dot{Q}'' \)) through the insulation.
$$ \dot{Q} = k ~A \left({ \Delta T \over \Delta x }\right) $$ $$ \begin{eqnarray} { \dot{Q} \over A } &=& 0.8 ~{\text{Btu} \over \text{hr-ft-}^{\circ}\text{F}} \left({ 600^{\circ}\text{F} - 105^{\circ}\text{F} \over 1 ~\text{in} ~{ 1 ~\text{ft} \over 12 ~\text{in} } }\right) \nonumber \\ &=& 4752 ~{ \text{Btu} \over \text{hr-ft}^2 } \end{eqnarray} $$
Find the bulk temperature of the environment.
$$ \begin{eqnarray} \dot{Q} &=& h ~A ~(T_{ins} - T_b) \nonumber \\ (T_{ins} - T_b) &=& { \dot{Q} \over h ~A } \nonumber \\ T_b &=& T_{ins} - { \dot{Q}'' \over h } \nonumber \\ T_b &=& 105^{\circ}\text{F} - { 4752 ~{ \text{Btu} \over \text{hr-ft}^2 } \over 950 ~{\text{Btu} \over \text{hr-ft}^2\text{-}^{\circ}\text{F}} } \nonumber \\ T_b &=& 100^{\circ}\text{F} \end{eqnarray} $$
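The same two steps can be checked with a short script; the variable names are my own labels for the values given in the problem statement.

```python
k = 0.8         # insulation thermal conductivity, Btu/hr-ft-°F
dx = 1.0 / 12   # insulation thickness, ft (1 in)
T_wall = 600.0  # wall temperature beneath the insulation, °F
T_ins = 105.0   # outer surface temperature of the insulation, °F
h = 950.0       # convective coefficient at the insulation surface, Btu/hr-ft2-°F

q_flux = k * (T_wall - T_ins) / dx  # heat flux through the insulation, ≈ 4752 Btu/hr-ft2
T_b = T_ins - q_flux / h            # bulk temperature of the environment, ≈ 100 °F
print(f"q'' = {q_flux:.0f} Btu/hr-ft2, T_b = {T_b:.0f} °F")
```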
Intersecting Wilson loops in 2D Yang-Mills
I am currently trying to understand 2D Yang-Mills theory, and I cannot seem to find an explanation for calculation of the expectation value of intersecting Wilson loops. In his On Quantum Gauge Theories in two dimensions, Witten carries out a curious calculation:
For three reps $ \alpha,\beta,\gamma $, we fix a basis of the tensor product space belonging to $\alpha \otimes \beta \otimes \gamma$ called $\epsilon_\mu(\alpha\beta\gamma)^{ijk}$ ($\mu$ indexes the $\mu$-th basis vector, the $i,j,k$ are the indices from the original reps) with the property that $$ \int \mathrm{d}U \, {\alpha(U)^i}_{i'} \, {\beta(U)^j}_{j'} \, {\gamma(U)^k}_{k'} = \epsilon_\mu(\alpha\beta\gamma)^{ijk}\bar{\epsilon}_\mu(\alpha\beta\gamma)_{i'j'k'}$$
A minor question is why this is possible - I would be fine with accepting that I can always find some vectors that fulfill that relation, but why are they a basis?
The real part I do not understand comes now: By the above, each edge of a plaquette carries some $\epsilon_\mu$, and at a crossing of two lines, we have thus four of these coming from the edges, and four other reps $\delta^{j}_c$ (j runs from 1 to 4) belonging to the plaquettes themselves. Without any explicit computation, Witten now simply says that after summing the $\epsilon$ over all their indices (as required by the decomposition of a trace beforehand), we get a local factor associated to this vertex $G(\alpha_i,\delta^{(j)}_c,\epsilon)$, which is the 6j Wigner symbol (but he won't pause to show why). I cannot find any source that would spell that relation out, i.e. show why we get precisely the 6j symbols in this computation (though their connection to the associator of the tensor product makes it plausible that we do). The real question is - the 6j symbol of what associator is this, and how would one go about and prove this?
I would be very grateful to anyone who can either explain this to me or direct me to a reference where this is discussed in more detail.
This post imported from StackExchange Physics at 2014-06-17 22:45 (UCT), posted by SE-user ACuriousMind
gauge-theory
representation-theory
yang-mills
wilson-loop
asked Jun 16, 2014 in Theoretical Physics by ACuriousMind (820 points) [ no revision ]
Consider the finite dimensional unitary representations $\alpha,\beta,\gamma$ of the given compact group $G$ on corresponding vector spaces $V_1,V_2,V_3$. Let $|i>_j,i=1,\dots,n_j$ be an orthonormal basis of $V_j$ where $dim V_j=n_j$. Then $\{|i>_1\otimes|j>_2\otimes|k>_3\}$ forms an orthonormal basis of $V=V_1\otimes V_2 \otimes V_3$. An element $g\in G$ acts on the tensor product space $V$ as
$$|i>_1\otimes|j>_2\otimes|k>_3 \to \alpha(g) |i>_1\otimes\beta(g)|j>_2\otimes \gamma(g)|k>_3 \tag 1$$
We can also find an orthogonal basis $e_{\mu},\mu=1,\dots,N$ (where $N=n_1n_2n_3$) of $V$ relative to which all elements $g \in G$ act as block diagonal matrices. More precisely, suppose with respect to basis $\{e_{\mu}\}$, the action of an element $g\in G$ on $V$ be denoted as $$e_{\mu}\to \rho (g)e_\mu \tag 2$$ then $\rho(g)^{\nu}_{\mu}=<\nu|\rho(g)|\mu>$ is a block diagonal matrix where the dimensions of different blocks$^1$ are independent of $g$. Let
$$|i>_1\otimes|j>_2\otimes|k>_3=\sum_{\mu}\epsilon ^{\mu}_{ijk}e_{\mu}\tag 3$$
Acting with $g\in G$ on both sides of this equation gives
$$\alpha (g)|i>_1\otimes \beta (g)|j>_2\otimes \gamma(g)|k>_3=\sum_{\mu}\epsilon ^{\mu}_{ijk}\rho (g)e_{\mu} \tag 4$$
Taking the scalar product with $|i'>_1\otimes |j'>_2 \otimes |k'>_3$ and using (3) we get
$$\alpha(g)^{i'}_{i}\beta (g)^{j'}_{j}\gamma(g)^{k'}_{k}=\sum _{\mu,\nu} \rho^{\nu}_{\mu}(g)\epsilon^{*\nu}_{i'j'k'}{\epsilon}^{\mu}_{ijk}\tag 5$$
Now, according to the Peter-Weyl theorem (part 2), matrix elements of the irreducible representations of $G$ form an orthogonal basis of the space of square integrable functions on $G$ wrt the inner product
$$(A,B)=\int_G dg\; A(g)^{*}B(g) \tag 6$$
where $dg$ is the Haar measure. So, if we integrate both sides of (5), the nonzero contribution on RHS will only come from the part of $\rho$ which is direct sum of identity representations. In other words, let $W\subseteq V$ be the subspace of $V$ on which $G$ acts trivially, and let $\{e_1,\dots e_m\}\subseteq $ $\{e_1,\dots e_m,\dots,e_N\}$ be the basis of $W$, then the integration of (5) gives
$$\int_G dg\;\alpha(g)^{i'}_{i}\beta (g)^{j'}_{j}\gamma(g)^{k'}_{k}=\sum _{\mu,\nu =1}^{m} \delta^{\nu}_{\mu}\epsilon^{*\nu}_{i'j'k'}{\epsilon}^{\mu}_{ijk}=\sum _{\mu =1}^{m}\epsilon^{*\mu}_{i'j'k'}{\epsilon}^{\mu}_{ijk} \tag 7$$
where we have assumed that $Vol(G)=\displaystyle\int_G\; dg =1$
For the second part of your question, I would recommend these lecture notes. The basic idea for computing Wilson loop averages is following -
For a surface with boundary, the partition function of two dimensional Yang-Mills theory depends on the holonomy along the boundary. Let the partition function of a surface of genus $h$ and one boundary be denoted as $Z_h(U,ag^2)$ where $U$ is the fixed holonomy along the given boundary, $a$ is the area of the surface and $g$ is the Yang-Mills coupling constant. Now, consider the simplest situation in which a Wilson loop $W$ in representation $R_W$ is inserted along a contractible loop $C$ on a closed surface of genus $h$, and area $a$. To compute the Wilson loop average, we first cut the surface along $C$, which gives a disc $D$ of area (say) $b$ and another surface $S$ of area $c=a-b$, genus $h$ and one boundary. Now the Wilson loop average is given by integrating over $G$ the product of i) the partition functions of $D$ ii) partition function of $S$ and iii) the trace of the Wilson loop in representation $R_W$ -
$$<W>=\frac {1}{Z_h(ag^2)}\int \: dU \:Z_h(U,(a-b)g^2)\chi_{R_W}(U)Z_0(U^{-1},bg^2) \tag 8$$
The case of a self-intersecting Wilson loop too is not very different.
$^1$ The smallest blocks form irreducible representations of $G$; Exactly which irreducible representations show up will depend on $\alpha,\beta,\gamma$; The same irreducible representations may also appear more than once
answered Jun 17, 2014 by user10001 (635 points) [ no revision ]
First, thank you for the detail in which you answered my first question, that is exactly the reasoning I was looking for. Ironically, the lecture notes you recommend are precisely what I was following in the first place, which led me to look up Witten's original calculation. I understand the computation of the non-intersecting case and how the fusion numbers arise there. But in the intersecting case, these notes also simply say that after summing all factors at a vertex, we get a 6j symbol, and their main reference is Witten - and I simply do not understand how we get the symbol from that basis.
commented Jun 17, 2014 by ACuriousMind (820 points) [ no revision ]
@ACuriousMind Use formula 3.30 for an intersecting loop in the notes by Moore-Cordes-Ramgoolam, and try to do the group integrations using equation (7) in the above answer. I think it should give the 6j symbol at the vertex. I too will try to do this calculation if I get time.
commented Jun 17, 2014 by user10001 (635 points) [ no revision ]
Ok, I think I see it - the $\epsilon$ are essentially 3j symbols, and at every vertex there are four of them, and the 6j symbol is a sum of products of 4 3j symbols. I will have to work through this a bit more carefully to fully convince myself, but I think you set me on the right track - thanks again! (If I do not encounter further complications, I will mark your answer as accepted in due time)
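For reference, the relation being invoked in the last comment, stated here as the standard SU(2) identity with the usual Wigner conventions rather than anything specific to Witten's normalisation, expresses the 6j symbol as a sum over products of four 3j symbols (overall sign conventions differ between references):

$$\begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6 \end{Bmatrix} = \sum_{m_1,\dots,m_6} (-1)^{\sum_{k}(j_k - m_k)} \begin{pmatrix} j_1 & j_2 & j_3 \\ -m_1 & -m_2 & -m_3 \end{pmatrix} \begin{pmatrix} j_1 & j_5 & j_6 \\ m_1 & -m_5 & m_6 \end{pmatrix} \begin{pmatrix} j_4 & j_2 & j_6 \\ m_4 & m_2 & -m_6 \end{pmatrix} \begin{pmatrix} j_4 & j_5 & j_3 \\ -m_4 & m_5 & m_3 \end{pmatrix}$$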
14 Body Levers
Moving patients is a routine part of Jolene's work as a MED floor RN, but in reality there is nothing routine about the biomechanics of lifting and transferring patients. In fact, "disabling back injury and back pain affect 38% of nursing staff" and healthcare makes up the majority of positions in the top ten ranking for risk of back injury, primarily due to moving patients. Spinal load measurements indicated that all of the routine and familiar patient handling tasks tested placed the nurse in a high risk category, even when working with a patient that "[had a mass of] only 49.5 kg and was alert, oriented, and cooperative—not an average patient."[1] People are inherently awkward shapes to move, especially when the patient's bed and other medical equipment cause the nurse to adopt awkward biomechanic positions. The forces required to move people are large to begin with, and the biomechanics of the body can amplify those forces by the effects of leverage, or lack thereof. To analyze forces in the body, including the effects of leverage, we must study the properties of levers.
Lever Classes
The ability of the body to both apply and withstand forces is known as strength. One component of strength is the ability to apply enough force to move, lift or hold an object with weight, also known as a load. A lever is a rigid object used to make it easier to move a large load a short distance or a small load a large distance. There are three classes of levers, and all three classes are present in the body[2][3]. For example, the forearm is a 3rd class lever because the biceps pulls on the forearm between the joint (fulcrum) and the ball (load).
The elbow joint flexed to form a 60° angle between the upper arm and forearm while the hand holds a 50 lb ball. Image Credit: Openstax University Physics
Using the standard terminology of levers, the forearm is the lever, the biceps supplies the effort, the elbow joint is the fulcrum, and the ball provides the resistance. When the resistance is caused by the weight of an object we call it the load. Lever classes are identified by the relative location of the resistance, fulcrum and effort.
First class levers have the fulcrum in the middle, between the effort and the resistance. Second class levers have the resistance in the middle. Third class levers have the effort in the middle.
First (top), second(middle), and third(bottom) class levers and real-world examples of each. Image Credit: Pearson Scott Foresman
Reinforcement Activity
The foot acting as a lever arm with calf muscle supplying an upward effort, the weight of the body acting as downward load, and the ball of the foot acting as the fulcrum. Image adapted from OpenStax Anatomy and Physiology
Static Equilibrium in Levers
For all levers the effort and resistance (load) are actually just forces that are creating torques because they are trying to rotate the lever. In order to move or hold a load the torque created by the effort must be large enough to balance the torque caused by the load. Remembering that torque depends on the distance that the force is applied from the pivot, the effort needed to balance the resistance must depend on the distances of the effort and resistance from the pivot. These distances are known as the effort arm and the resistance arm (load arm). Increasing the effort arm reduces the size of the effort needed to balance the load torque. In fact, the ratio of the load to the effort is equal to the ratio of the effort arm to the load arm:

$$\frac{load}{effort} = \frac{effort\, arm}{load\, arm}$$
Everyday Examples: Biceps Tension
Let's calculate the biceps tension needed in our initial body lever example of holding a 50 lb ball in the hand. The effort arm was 1.5 in and the load arm was 13.0 in, so the load arm is 8.667 times longer than the effort arm.

$$\frac{13\,\bold{\cancel{in}}}{1.5\,\bold{\cancel{in}}} = 8.667$$
That means that the effort needs to be 8.667 times larger than the load, so for the 50 lb load the bicep tension would need to be 433 lbs! That may seem large, but we will find out that such forces are common in the tissues of the body!
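A quick numeric check of this lever balance, using the values from the example (the script and its variable names are just an illustration, not part of the original text):

```python
load = 50.0       # ball weight, lbs
effort_arm = 1.5  # distance from the elbow joint to the bicep attachment, in
load_arm = 13.0   # distance from the elbow joint to the ball in the hand, in

# Torque balance about the elbow: effort * effort_arm = load * load_arm
effort = load * load_arm / effort_arm
print(f"bicep tension = {effort:.0f} lbs")  # 433 lbs, as in the example
```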
*Adjusting Significant Figures
Finally, we should make sure our answer has the correct number of significant figures. The weight of the ball in the example is not written in scientific notation, so it's not really clear if the zeros are placeholders or if they are significant. Let's assume the values were not measured, but were chosen hypothetically, in which case they are exact numbers like in a definition and don't affect the significant figures. The forearm length measurement includes zeros behind the decimal that would be unnecessary for a definition, so they suggest a level of precision in a measurement. We used those values in multiplication and division so we should round the answer to only two significant figures, because 1.5 in only has two (13.0 in has three). In that case we round our bicep tension to 430 lbs, which we can also write in scientific notation: $4.3 \times 10^2 \,\bold{lbs}$.
*Neglecting the Forearm Weight
Note: We ignored the weight of the forearm in our analysis. If we wanted to include the effect of the weight of the forearm in our example problem we could look up a typical forearm weight and also look up where the center of gravity of the forearm is located and include that load and resistance arm. Instead let's take this opportunity to practice making justified assumptions. We know that forearms typically weigh only a few pounds, but the ball weight is 50 lbs, so the forearm weight is about an order of magnitude (10x) smaller than the ball weight[7]. Also, the center of gravity of the forearm is located closer to the pivot than the weight, so it would cause significantly less torque. Therefore, it was reasonable to assume the forearm weight was negligible for our purposes.
Mechanical Advantage
The ratio of load to effort is known as the mechanical advantage (MA). For example if you used a second class lever (like a wheelbarrow) to move 200 lbs of dirt by lifting with only 50 lbs of effort, the mechanical advantage would be four. The mechanical advantage is equal to the ratio of the effort arm to the load arm:
MA = \frac{load}{effort} = \frac{effort\, arm}{load\, arm}
We normally think of levers as helping us to use less effort to hold or move large loads, so our results for the forearm example might seem odd because we had to use a larger effort than the load. The bicep attaches close to the elbow so the effort arm is much shorter than the load arm and the mechanical advantage is less than one. That means the force provided by the bicep has to be much larger than the weight of the ball. That seems like a mechanical disadvantage, so how is it helpful? If we look at how far the weight moved compared to how far the bicep contracted when lifting the weight from a horizontal position, we see that the purpose of the forearm lever is to increase range of motion rather than decrease the effort required.
Diagram showing the difference in distance covered by the contracting bicep and the weight in the hand when moving the forearm from horizontal. Image adapted from OpenStax University Physics
Looking at the similar triangles in a stick diagram of the forearm we can see that the ratio of the distances moved by the effort and load must be the same as the ratio of effort arm to resistance arm. That means increasing the effort arm in order to decrease the size of the effort required will also decrease the range of motion of the load by the same factor. It's interesting to note that while moving the attachment point of the bicep 20% closer to the hand would make you 20% stronger, you would then be able to move your hand over a 20% smaller range.
Diagram of the forearm as a lever, showing the similar triangles formed by parts of the forearm as it moves from 90 degrees to 60 degrees from horizontal. The hypotenuse (long side) of the smaller blue triangle is the effort arm and the hypotenuse of the larger dashed red triangle is the load arm. The vertical sides of the triangles are the distances moved by the effort (blue) and the load (dashed red).
Reinforcement Exercises
The load on third class levers is always farther from the fulcrum than the effort, so they will always increase range of motion, but that means they will always increase the amount of effort required by the same factor. Even when the effort is larger than the load, as for third class levers, we can still calculate a mechanical advantage, but it will come out to be less than one.
Second class levers always have the load closer to the fulcrum than the effort, so they will always allow a smaller effort to move a larger load, giving a mechanical advantage greater than one.
First class levers can either provide mechanical advantage or increase range of motion, depending on whether the effort arm or load arm is longer, so they can have mechanical advantages of greater, or less, than one.
A lever cannot provide mechanical advantage and increase range of motion at the same time, so each type of lever has advantages and disadvantages:
Comparison of Advantages and Disadvantages of Lever Classes
Lever Class | Advantage | Disadvantage
3rd | Range of Motion: The load moves farther than the effort. (Short bicep contraction moves the hand far) | Effort Required: Requires larger effort to hold smaller load. (Bicep tension greater than weight in hand)
2nd | Effort Required: Smaller effort will move larger load. (One calf muscle can lift entire body weight) | Range of Motion: The load moves a shorter distance than the effort. (Calf muscle contracts farther than the distance that the heel comes off the floor)
1st (effort closer to pivot) | Range of Motion: The load moves farther than the effort. (Head moves farther up/down than neck muscles contract) | Effort Required: Requires larger effort to hold smaller load.
1st (load closer to pivot) | Effort Required: Smaller effort will move larger load. | Range of Motion: The load moves a shorter distance than the effort.
Check out the following lever simulation to explore how force and distance from the fulcrum each affect the equilibrium of the lever. This simulation includes the effects of friction, so you can see how kinetic friction in the joint (pivot) works to stop motion and static friction contributes to maintaining static equilibrium by resisting a start of motion.
"Nurses and Preventable Back Injuries" by Deborah X Brown, RN, BSN, American Journal of Critical Care ↵
"Lever of a Human Body" by Alexandra, The Physics Corner ↵
"Kinetic Anatomy With Web Resource-3rd Edition " by Robert Behnke , Human Kinetics ↵
OpenStax University Physics, University Physics Volume 1. OpenStax CNX. Jul 11, 2018 http://cnx.org/contents/[email protected]. ↵
"Lever" by Pearson Scott Foresman , Wikimedia Commons is in the Public Domain ↵
OpenStax, Anatomy & Physiology. OpenStax CNX. Jun 25, 2018 http://cnx.org/contents/[email protected]. ↵
"Weight, Volume, and Center of Mass of Segments of the Human Body" by Charles E. Clauster, et al, National Technical Information Service, U.S. Department of Commerce ↵
Glossary
lever: a rigid structure rotating on a pivot and acting on a load, used to multiply the effect of an applied effort (force) or enhance the range of motion
lever classes: the three types or classes of levers, according to where the load and effort are located with respect to the fulcrum
third class lever: a lever with the effort between the load and the fulcrum
tension: the force that is provided by an object in response to being pulled tight by forces acting from opposite ends, typically in reference to a rope, cable or wire
effort: referring to a lever system, the force applied in order to hold or lift the load
fulcrum: the point on which a lever rests or is supported and on which it pivots
weight: the force of gravity on an object, typically in reference to the force of gravity caused by Earth or another celestial body
resistance: the force working against the rotation of a lever that would be caused by the effort
load: a weight or other force being moved or held by a structure such as a lever
first class levers: levers with the fulcrum placed between the effort and load
second class levers: levers with the resistance (load) in-between the effort and the fulcrum
torque: the result of a force applied to an object in such a way that the object would change its rotational speed, except when the torque is balanced by other torques
pivot: the central point, pin, or shaft on which a mechanism turns or oscillates
effort arm: in a lever, the distance from the line of action of the effort to the fulcrum or pivot
resistance arm (load arm): the shortest distance from the line of action of the resistance to the fulcrum
significant figures: each of the digits of a number that are used to express it to the required degree of accuracy, starting from the first nonzero digit
scientific notation: a way of writing very large or very small numbers. A number is written in scientific notation when a number between 1 and 10 is multiplied by a power of 10.
precision: refers to the closeness of two or more measurements to each other
center of gravity: a point at which the force of gravity on a body or system (weight) may be considered to act. In uniform gravity it is the same as the center of mass.
assumption: ignoring some complication in order to simplify the analysis or proceed even though information is lacking
order of magnitude: designating which power of 10 (e.g. 1, 10, 100, 1000)
negligible: small enough as to not push the results of an analysis outside the desired level of accuracy
mechanical advantage: ratio of the output and input forces of a machine
range of motion: distance or angle traversed by a body part
kinetic friction: a force that resists the sliding motion between two surfaces
static friction: a force that resists the tendency of surfaces to slide across one another due to a force(s) being applied to one or both of the surfaces
static equilibrium: the state of being in equilibrium (no unbalanced forces or torques) and also having no motion
Body Levers by Lawrence Davis is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
|
CommonCrawl
|
Update: Be Wrong the Right Number of Times
Greg Novak
December 13, 2016 - San Francisco, CA
Who was wrong the right number of times?
In the days and weeks after the election, the Associated Press did not call a winner in Michigan as the race was too close. On November 28, the Michigan Board of State Canvassers finally certified the race as a victory for Mr. Trump. As of this writing, there are still recounts pending but the situation seems stable enough to revisit this question.
There's wrong and then there's wrong—the outcome of the election clearly indicates that the model used by FiveThirtyEight (538) was closer to the truth than that of the Princeton Election Consortium (PEC) in terms of the level of uncertainty in the predictions. It's not by as much as you might think, though.
Predictions and Outcomes
The final pre-election predictions of the Princeton Election Consortium (PEC) and Five Thirty Eight (538) agreed about the most likely winner in each of the fifty states, differing only in the degree of uncertainty in each race. This makes our comparison easy. The incorrectly predicted states were Florida, Michigan, North Carolina, Pennsylvania, and Wisconsin—five in total.
Reproducing the graph from the previous post:
Reading off of the graph, we see that getting five states incorrect is the most likely outcome for 538's predictions, while it was on the high end of reasonable possibility for the PEC's model. Based only on the number of states incorrectly predicted, 538's model is favored over the PEC's model by a bit better than 3:1 odds.
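One way to reproduce this kind of comparison is to simulate the number of missed states under each model's per-state probabilities. The sketch below is illustrative only: the probability arrays are placeholders, not the actual 538 or PEC forecasts, and it assumes the states are independent (an assumption discussed later in the post).

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-state probabilities each model assigned to the eventual winner.
p_538 = np.full(50, 0.85)  # flatter, more uncertain forecast (hypothetical)
p_pec = np.full(50, 0.97)  # sharper, more confident forecast (hypothetical)

def miss_distribution(p_win, n_trials=200_000):
    """Monte Carlo distribution of the number of incorrectly called states."""
    misses = rng.random((n_trials, p_win.size)) > p_win
    return np.bincount(misses.sum(axis=1), minlength=p_win.size + 1) / n_trials

observed = 5  # states actually called incorrectly
dist_538, dist_pec = miss_distribution(p_538), miss_distribution(p_pec)
print("odds (538 : PEC) =", dist_538[observed] / dist_pec[observed])
```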
The Will to (Statistical) Power
This statistical test involving only the number of incorrectly predicted states is useful for discussing the treatment of uncertainty because it's easy to understand, but it's probably the weakest test we can devise for this case. We could do better by modeling the fifty states as separate terms of a single likelihood function and by taking state-by-state correlations into account.
A quick way to get an idea of what a more detailed analysis might show is to use each model's stated probability for the overall national election result. That figure must already take into account all of the state-level correlations, and the people who produced each model have almost certainly thought harder about their model than anyone else—certainly harder than I am likely to think about it. This is quite a bit different from the problem we posed above—the overall election winner is determined not by the state results themselves, but rather by a sum over the state results weighted by the number of electoral votes for each state.
This is a classic application of Bayes Theorem: it's easy to calculate the probability that Mr. Trump wins given that 538's model is correct (538 has kindly done this for us), but now we want to know the probability that 538's model is correct given that Mr. Trump won the national election. Taking P(Trump wins | 538's model is correct) = 18 percent and P(Trump wins | PEC's model is correct) = 1 percent, three lines of algebra allow us to compute the odds ratio:
\[\frac{P({\rm 538\ |\ Trump\ wins})}{P({\rm PEC\ |\ Trump\ wins})} = \frac{P({\rm Trump\ wins\ |\ 538})\ P({\rm 538})}{P({\rm Trump\ wins\ |\ PEC})\ P({\rm PEC})} = \frac{18}{1}\]
That is to say the odds are eighteen to one in favor of 538's model assuming the priors for each model are equal.
These two possibilities are probably upper and lower bounds on reality: 538's model is favored over the PEC's model with odds somewhere between 3 to 1 and 18 to 1 (and it's probably closer to the latter figure). Note that we have just characterized the meta-uncertainty—the uncertainty in the uncertainty—which is a useful thing to get in the habit of doing.
This meta-uncertainty is probably a subject of interest to the PEC. There were probably two issues contributing to the fact that their predictions were so far off. The first is correlated error: the distribution of polling results was not centered on the eventual outcome of the election—the mean of the distribution was off. The second is exactly this meta-uncertainty: the width of the distribution of possible outcomes was much wider than the PEC thought.
Would you take that bet?
One concrete way to "ask yourself" if you really believe a probability is correct is to phrase it in terms of a wager. If I really believed that the PEC model was correct, then I should be happy to make a wager where if Secretary Clinton wins I get $2 and if Mr. Trump wins I pay $100. According to the PEC, Mr. Trump's chance of winning was only 1 percent, so over repeated trials I expect to make about $1 per trial on average.
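Spelling out that expectation (the calculation is implied rather than shown in the original), under the PEC's 1 percent figure the wager pays

\[ E[\text{gain}] = 0.99 \times \$2 \;-\; 0.01 \times \$100 = \$0.98 \approx \$1 \text{ per trial.} \]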
There's some evidence that Sam Wang, the driving force behind the PEC, would have taken this bet. He famously tweeted that he would eat a bug if Mr. Trump got more than 240 electoral votes, and then followed through.
Let us pause to take this in: what would Professor Wang have gotten if he had won the bet? Not the ability to say "I told you so"; he would have had that anyway. All he would get is the ability to say "I told you so, and I was so sure that I was right that I offered to eat a bug, secure in the knowledge that it would never come to pass." Against that rather slender gain in the case of a win, consider the outcome in the case of a loss: eating a bug. This sounds pretty close to the bet-$100-to-win-$2 wager proposed above.
Bug eating seems to be Professor Wang's go-to bet to make a point about his level of certainty. He's quite thoughtful about it, though, calibrating the level of confidence required to make such a bet to keep his rate of bug eating to acceptable levels. He certainly understands the idea of being wrong the right number of times.
Correlations are Key
Go back to our original proposed statistical test—the number of incorrectly predicted states assuming each state is independent. Suppose the number of incorrect predictions was much higher than the PEC predicted. Is this incontrovertible evidence that the PEC model was wrong? It's pretty clear the answer is no: the PEC might defensibly say "That's because your independence assumption was bad: state results are correlated and they were systematically off in one direction—that's why you think we missed too many states. If you take those correlations into account, the model is fine."
What if the number of incorrectly predicted states had been zero, much lower than 538 predicted? Is that incontrovertible evidence that 538's model was wrong, or could they make essentially the same claim as above?
Phrased more technically and simplified a bit for clarity: Suppose I make 100 simultaneous bets on the outcome of 100 Bernoulli-type random variables, where the probability of success in any one of them is 50%. So far I've specified the mean and diagonal terms of the covariance matrix of the 100 dimensional probability density function. If the random variables are uncorrelated so that all of the off-diagonal terms are zero, then I should be very surprised if I win all 100 bets simultaneously on a single trial. The probability of this is about \( 10^{-30} \). The question: Is there a way to choose the off-diagonal terms of the covariance matrix so that it's not surprising if I win all 100 bets on a single trial?
My intuition was that correlations could increase the number of incorrect predictions, but not decrease it. This turns out to be wrong—off diagonal terms in the covariance matrix can move the number up or down. This is pretty obvious once you think about it the right way, but it was not my initial intuition.
The right way to think about it to see that this can happen is to imagine that a single underlying random variable controls all 100 of the random variables I defined above. All of these variables are perfectly correlated and the covariance matrix is singular. When the underlying random variable comes up 0, then random variables 1-50 come up 1 and the rest come up 0. Conversely when the underlying random variable comes up 1, then random variables 51-100 come up 1 and the rest come up 0. It will be pretty easy to recognize this pattern and place 100 bets where you either win all of them or lose all of them. In this situation, it would not be surprising at all to either win or lose all of the bets from a single trial simultaneously.
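The single-latent-variable construction described above is easy to simulate. This is a minimal sketch of that exact setup, not a model of the election itself:

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials = 10_000
latent = rng.integers(0, 2, size=n_trials)  # the one underlying coin flip per trial

# When the coin is 0, variables 1-50 come up 1; when it is 1, variables 51-100 do.
outcomes = np.where(latent[:, None] == 0,
                    np.arange(100) < 50,
                    np.arange(100) >= 50).astype(int)

print(outcomes.mean(axis=0)[:5])  # each variable is still marginally ~50/50

# Bet that the first 50 come up 1 and the last 50 come up 0.
bets = (np.arange(100) < 50).astype(int)
all_correct = (outcomes == bets).all(axis=1)
print(all_correct.mean())  # ~0.5: winning all 100 bets at once is not surprising here
```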
Although both models were wrong in predicting the overall election winner, 538's model was much closer to being wrong the right number of times in the state-by-state election results. The PEC's model involved too high a level of certainty in the outcome and is disfavored by as much as 18 to 1 odds as a result.
Covariance can turn an extremely surprising result into one that's not surprising at all. Unmodeled correlations are probably the single biggest factor that makes data science difficult, by making it hard to come to secure conclusions about causal processes from historical data alone.
Phrasing a statistical result in terms of a wager can be a useful concrete way to ask yourself whether you really believe the number that's coming out of your analysis. This helps you characterize the uncertainty in your uncertainty: meta-uncertainty.
|
CommonCrawl
|
Results for 'Gloria M��hringer'
1000+ found
Preferences Regarding Return of Genomic Results to Relatives of Research Participants, Including After Participant Death: Empirical Results From a Cancer Biobank.Carmen Radecki Breitkopf, Gloria M. Petersen, Susan M. Wolf, Kari G. Chaffee, Marguerite E. Robinson, Deborah R. Gordon, Noralane M. Lindor & Barbara A. Koenig - 2015 - Journal of Law, Medicine and Ethics 43 (3):464-475.details
Data are lacking with regard to participants' perspectives on return of genetic research results to relatives, including after the participant's death. This paper reports descriptive results from 3,630 survey respondents: 464 participants in a pancreatic cancer biobank, 1,439 family registry participants, and 1,727 healthy individuals. Our findings indicate that most participants would feel obligated to share their results with blood relatives while alive and would want results to be shared with relatives after their death.
Ethics in Value Theory, Miscellaneous
Returning a Research Participant's Genomic Results to Relatives: Analysis and Recommendations.Susan M. Wolf, Rebecca Branum, Barbara A. Koenig, Gloria M. Petersen, Susan A. Berry, Laura M. Beskow, Mary B. Daly, Conrad V. Fernandez, Robert C. Green, Bonnie S. LeRoy, Noralane M. Lindor, P. Pearl O'Rourke, Carmen Radecki Breitkopf, Mark A. Rothstein, Brian Van Ness & Benjamin S. Wilfond - 2015 - Journal of Law, Medicine and Ethics 43 (3):440-463.details
Genomic research results and incidental findings with health implications for a research participant are of potential interest not only to the participant, but also to the participant's family. Yet investigators lack guidance on return of results to relatives, including after the participant's death. In this paper, a national working group offers consensus analysis and recommendations, including an ethical framework to guide investigators in managing this challenging issue, before and after the participant's death.
Lectura feminista de algunos textos de Hannah Arendt.Gloria M. Comesaña Santalices - 2001 - Anales Del Seminario de Historia de la Filosofía 18:125-142.details
This paper shows that the Arendtian approach to politics provides new perspectives of analysis for feminist theory. We study the Arendtian interpretation of power as potentiality, highlighting the concepts of plurality, action and speech, as well as oppositions such as private/public, necessity/freedom, and labor-work/action. We also ask whether the Arendtian notion of the pariah is applicable to women, and reflect on the value of her analysis of human rights for feminist theory. Arendt did not take a particular interest in the problems of the (...) feminine condition, but her work contains enough liberating ideas to deserve consideration.
Hannah Arendt in 20th Century Philosophy
Returning a Research Participant's Genomic Results to Relatives: Perspectives From Managers of Two Distinct Research Biobanks.Gloria M. Petersen & Brian Van Ness - 2015 - Journal of Law, Medicine and Ethics 43 (3):523-528.details
Research biobanks are heterogeneous and exist to manage diverse biosample types with the goal of facilitating and serving biomedical discovery. The perspectives of biobank managers are reviewed, and the perspectives of two biobank directors, one with experience in institutional biobanks and the other with national cooperative group banks, are presented. Most research biobanks are not designed, nor do they have the resources, to return research results and incidental findings to participants or their families.
Investigar en tiempos de crisis: pensar, juzgar, actuar. Research in Times of Crisis: Thoughts, Judgments and Actions.Gloria M. Comesaña Santalices - 2006 - Anales Del Seminario de Historia de la Filosofía 23:113-125.details
For the purpose of reflecting on the pertinence of philosophy in confronting the challenges of our times, we develop the Arendtian concept of thought as a decisive activity in elucidating the moral character of our actions and in our search for sense which eventually becomes judgment, when we consider the faculties that philosophy inherently contain in relation to human affairs This paper presents intercultural philosophy as an example of how philosophy works in times of crisis.
Presentación.Gloria M. Comesaña Santalices - 2001 - Utopía y Praxis Latinoamericana 6 (12).details
"Hit-and-Run" Leaves its Mark: Catalyst Transcription Factors and Chromatin Modification.Kranthi Varala, Ying Li, Amy Marshall-Colón, Alessia Para & Gloria M. Coruzzi - 2015 - Bioessays 37 (8):851-856.details
Should Researchers Offer Results to Family Members of Cancer Biobank Participants? A Mixed-Methods Study of Proband and Family Preferences.Deborah R. Gordon, Carmen Radecki Breitkopf, Marguerite Robinson, Wesley O. Petersen, Jason S. Egginton, Kari G. Chaffee, Gloria M. Petersen, Susan M. Wolf & Barbara A. Koenig - 2019 - AJOB Empirical Bioethics 10 (1):1-22.details
Pragmatic Tools for Sharing Genomic Research Results with the Relatives of Living and Deceased Research Participants.Susan M. Wolf, Emily Scholtes, Barbara A. Koenig, Gloria M. Petersen, Susan A. Berry, Laura M. Beskow, Mary B. Daly, Conrad V. Fernandez, Robert C. Green, Bonnie S. LeRoy, Noralane M. Lindor, P. Pearl O'Rourke, Carmen Radecki Breitkopf, Mark A. Rothstein, Brian Van Ness & Benjamin S. Wilfond - 2018 - Journal of Law, Medicine and Ethics 46 (1):87-109.details
Returning genomic research results to family members raises complex questions. Genomic research on life-limiting conditions such as cancer, and research involving storage and reanalysis of data and specimens long into the future, makes these questions pressing. This author group, funded by an NIH grant, published consensus recommendations presenting a framework. This follow-up paper offers concrete guidance and tools for implementation. The group collected and analyzed relevant documents and guidance, including tools from the Clinical Sequencing Exploratory Research Consortium. The authors then (...) negotiated a consensus toolkit of processes and documents. That toolkit offers sample consent and notification documents plus decision flow-charts to address return of results to family of living and deceased participants, in adult and pediatric research. Core concerns are eliciting participant preferences on sharing results with family and on choice of a representative to make decisions about sharing after participant death. (shrink)
Aproximación a la obra de una teóloga ecofeminista.Gloria M. Comesaña Santalices - 2002 - Telos: Critical Theory of the Contemporary 1 (6):27-42.details
Hannah Arendt: Ecología Y Educación.Gloria M. Comesaña-Santalices - 2004 - Quaestio: Revista de Estudos Em Educação 6 (1).details
Reseña de" Análisis Arendtiano de la Modernidad" de Katiuska Reyes Galué.Gloria M. Comesaña-Santalices - 2006 - Utopía y Praxis Latinoamericana 11 (35):123.details
Gloria Anzaldúa's Affective Logic of Volverse Una.Cynthia M. Paccacerqua - 2016 - Hypatia 31 (2):334-351.details
Although Gloria Anzaldúa's critical categories have steadily entered discussions in the field of philosophy, a lingering skepticism remains about her works' ability to transcend the particularity of her lived experience. In an effort to respond to this attitude, I make Anzaldúa's corpus the center of philosophical analysis and posit that immanent to this work is a logic that lends it the unity of a critical philosophy that accounts for its concrete, multilayered character and shifting, creative force. I call this (...) an "affective logic of volverse una." Starting with the understanding of a situated modality of all subjectivity, Anzaldúa's work exhibits a logic of three moments distinguished by states of awareness. Each state of awareness is characterized by the generative degree of the subject's responses to its conditions: critical, individuating, and expansive. Led by her late concepts of conocimiento and nepantlera, I return to her earlier works and trace Anzaldúa's innovative exploration of undoing the oppressive condition of marginal subjectivities from "La Prieta" through Borderlands/La Frontera to her final published essay "now let us shift." I find a liberatory schema of volverse una/becoming whole that is grounded in an active receptivity of sensibility and facilitated by affective technologies for transformation. (shrink)
US Latina Feminism in Philosophy of Gender, Race, and Sexuality
Reseña de "La transición hacia el post-capitalismo. El socialismo del siglo XXI" de Wim Dierckxsens, "Revista venezolana de Estudios de la Mujer. Vol.12, nº 28" de la Universidad Central de Venezuela. [REVIEW]Beatriz Sánchez Pirela & Luz Gloria M. Comesaña-Santalices - 2007 - Utopía y Praxis Latinoamericana 12 (39):153-154.details
" I'm a Citizen of the Universe": Gloria Anzaldúa's Spiritual Activism as Catalyst for Social Change.AnaLouise Keating - 2008 - Feminist Studies 34 (1-2):53-69.details
Education, Religion and Society: Essays in Honour of John M. Hull.Dennis Bates, Gloria Durka, Friedrich Schweitzer & John M. Hull (eds.) - 2006 - Routledge.details
Education, Religion and Society celebrates the career of Professor John Hull of the University of Birmingham, UK, the internationally renowned religious educationist who has also achieved worldwide fame for his brilliant writings on his experience, mid-career, of total blindness. In his outstanding career he has been a leading figure in the transformation of religious education in English and Welsh state schools from Christian instruction to multi-faith religious education and was the co-founder of the International Seminar on Religious Education and values. (...) John Hull has also made major contributions to the theology of disability and the theological critique of the "money culture." This volume brings together leading international scholars to honour John Hull's contribution, with a focus on furthering scholarship in the areas where he has been active as a thinker. The book offers a critical appreciation of his contribution to religious education and practical theology, and goes on to explore the continuing debate about the role of religious education in promoting international understanding, intercultural education and human rights education. A possible basis for integrating Islamic education into Western education is suggested and the contribution of the philosophy of religion to pluralistic religious education is outlined. The contributors also deal with issues relating to indoctrination, racism and relationship in Christian religious aspects, and examines aspects of the the theology of social exclusion and disability. (shrink)
Judaism in Philosophy of Religion
Philosophy of Education, Misc in Philosophy of Social Science
Physical Disabilities in Applied Ethics
Rank in Set Theory Without Foundation.M. Victoria Marshall & M. Gloria Schwarze - 1999 - Archive for Mathematical Logic 38 (6):387-393.details
We prove that it is not possible to define an appropriate notion of rank in set theories without the axiom of foundation.
Areas of Mathematics in Philosophy of Mathematics
U.S. Hospital Industry Restructuring and the Hospital Safety Net.Gloria J. Bazzoli, Larry M. Manheim & Teresa M. Waters - 2003 - Inquiry: The Journal of Health Care Organization, Provision, and Financing 40 (1):6-24.details
Biomedical Ethics in Applied Ethics
Public Health, Misc in Applied Ethics
A Marriage Manual a Practical Guide-Book to Sex and Marriage, by Hannah M. Stone and Abraham Stone.Hannah M. Stone, Gloria Stone Aitken, Hilary Hill, Aquiles J. Sobrero & Abraham Stone - 1970details
Feminism: Marriage and Civil Unions in Philosophy of Gender, Race, and Sexuality
Feminism: Sexuality in Philosophy of Gender, Race, and Sexuality
Feminist Perspectives on Phenomena, Misc in Philosophy of Gender, Race, and Sexuality
Feminist Philosophy, Misc in Philosophy of Gender, Race, and Sexuality
Topics in Feminist Philosophy, Misc in Philosophy of Gender, Race, and Sexuality
M. G. Ciani , D. Susanetti : Euripide Medea. . Pp. 232. Venice: Marsilio, 1997. Paper, L. 22,000. ISBN: 88-317-6534-5. D. Susanetti: Gloria E Purezza: Note All'Ippolito di Euripide. Pp. 128. Venice: Supernova, 1997. Paper, L. 24,000. ISBN: 88-86870-10-8. [REVIEW]Michael Lloyd - 1998 - The Classical Review 48 (2):473-474.details
Hellenistic and Later Ancient Philosophy in Ancient Greek and Roman Philosophy
Pugilum Gloria (Ter. Hec. 33).W. M. Lindsay - 1931 - Classical Quarterly 25 (3-4):144-.details
Cicero defines gloria as frequens de aliquo fama cum laude, 'much talk about a person to his praise.' When the talk is by the person himself, the word takes the signification 'boast'.
Classics in Arts and Humanities
Pugilum Gloria.W. M. Lindsay - 1931 - Classical Quarterly 25 (3-4):144-145.details
La realidad del mito en el cine.M. Gloria Camarero Gómez - 2007 - Critica 57 (944):64-67.details
French Philosophy in European Philosophy
A Role for the Action Observation Network in Apraxia After Stroke.Gloria Pizzamiglio, Zuo Zhang, James Kolasinski, Jane M. Riddoch, Richard E. Passingham, Dante Mantini & Elisabeth Rounis - 2019 - Frontiers in Human Neuroscience 13.details
Philosophy of Neuroscience in Philosophy of Cognitive Science
M. Tulli Ciceronis Scripta Quae Manserunt Omnia, Fasciculus 47, Cato Maior. Laelius. De Gloria.K. Simbeck & Otto Plasberg (eds.) - 1997 - De Gruyter.details
Cultural Differences in Coping with Interpersonal Tensions Lead to Divergent Shorter- and Longer-Term Affective Consequences.Gloria Luong, Carla M. Arredondo & Susan T. Charles - 2020 - Cognition and Emotion 34 (7):1499-1508.details
Culture influences how people cope with interpersonal tensions, with those from more collectivistic contexts ) generally opting for strategies promoting social harmony w...
Labelling Classes by Sets.M. Victoria Marshall & M. Gloria Schwarze - 2005 - Archive for Mathematical Logic 44 (2):219-226.details
Let Q be an equivalence relation whose equivalence classes, denoted Q[x], may be proper classes. A function L defined on Field(Q) is a labelling for Q if and only if for all x,L(x) is a set and L is a labelling by subsets for Q if and only if BG denotes Bernays-Gödel class-set theory with neither the axiom of foundation, AF, nor the class axiom of choice, E. The following are relatively consistent with BG. (1) E is true but there (...) is an equivalence relation with no labelling.(2) E is true and every equivalence relation has a labelling, but there is an equivalence relation with no labelling by subsets. (shrink)
Philosophy of Mathematics, General Works in Philosophy of Mathematics
Economical Connections Between Several European Countries Based on TSP Data.Gloria Cerasela Crişan, Camelia-M. Pintea, Petrică C. Pop & Oliviu Matei - 2020 - Logic Journal of the IGPL 28 (1):33-44.details
A fluent economical collaboration between countries is a major need. European flows of trade and people are supported by efficient connections between main localities from a geographic region, in many cases overriding national borders. This paper introduces three traveling salesmen problem instances based on freely available geographic coordinates of the main cities of France, Portugal and Spain. These instances are unified, generating other four larger instances: three with all pairs of countries and one instance with the settlements from all the (...) three countries. The study includes an analysis of quality of solutions for a version of branch & cut algorithm and some hybrid heuristics including the Lin–Kernighan algorithm. $Bor\mathring{u}vka$, Quick$Bor\mathring{u}vka$ and Greedy algorithms are also used in the hybrid approaches in order to obtain a potential beneficent initial solution for the Lin–Kernighan algorithm. Concorde solver, nowadays state-of-the-art exact software, together with the already mentioned algorithms, is used to test and furthermore analyze the new TSP instances. Some results are represented using online services such as Google Maps, showing the potential integration of the Concorde's optimum results into commercial routing applications. The very good results provided by the Lin–Kernighan method allow its usage for real medium-sized routing instances. (shrink)
Secure Traveling Salesman Problem with Intelligent Transport Systems Features.Gloria Cerasela Crişan, Camelia-M. Pintea, Anisoara Calinescu, Corina Pop Sitar & Petrică C. Pop - forthcoming - Logic Journal of the IGPL.details
Meeting the security requests in transportation is nowadays a must. The intelligent transport systems represent the support for addressing such a challenge, due to their ability to make real-time adaptive decisions. We propose a new variant of the travelling salesman problem integrating security constraints inspired from ITSs. This optimization problem is called the secure TSP and considers a set of security constraints on its integer variables. Similarities with fuzzy logic are presented alongside the mathematical model of the introduced TSP variant.
Photographic Art in Exam Rooms May Reduce White Coat Hypertension.Michael B. Harper, Stacy Kanayama-Trivedi, Gloria Caldito, David Montgomery, E. J. Mayeaux & Lourdes M. DelRosso - 2015 - Medical Humanities 41 (2):86-88.details
Medical Ethics in Applied Ethics
Gloria S. Merker: The Hellenistic Sculpture of Rhodes. (Studies in Mediterranean Archaeology, Xl.) Pp. 34; 34 Plates. Gothenburg: Paul Astrom, 1973. Paper, Kr.50.R. M. Cook - 1975 - The Classical Review 25 (2):327-327.details
Archaeology in Social Sciences
You May Have My Help but Not Necessarily My Care: The Effect of Social Class and Empathy on Prosociality.Gloria Jiménez-Moya, Bernadette Paula Luengo Kanacri, Patricio Cumsille, M. Loreto Martínez & Christian Berger - 2021 - Frontiers in Psychology 12.details
Previous research has focused on the relation between social class and prosocial behavior. However, this relation is yet unclear. In this work, we shed light on this issue by considering the effect of the level of empathy and the social class of the recipient of help on two types of prosociality, namely helping and caring. In one experimental study, we found that for high-class participants, empathy had a positive effect on helping, regardless of the recipient's social class. However, empathy had (...) no effect for low-class participants. When it comes to caring, empathy had a positive effect for both high and low-class participants, but only when the recipient of help belonged to the same social class. This highlights that empathy by itself is not sufficient to promote cooperative relations and that the social class of the recipient of help should be taken into account to shed light on this issue. (shrink)
Hospital Staffing Decisions: Does Financial Performance Matter?Mei Zhao, Gloria J. Bazzoli, Jan P. Clement, Richard C. Lindrooth, Jo Ann M. Nolin & Askar S. Chukmaitov - 2008 - Inquiry: The Journal of Health Care Organization, Provision, and Financing 45 (3):293-307.details
Financial Ethics in Applied Ethics
Two Different Populations Within the Healthy Elderly: Lack of Conflict Detection in Those at Risk of Cognitive Decline.Sergio M. Sánchez-Moguel, Graciela C. Alatorre-Cruz, Juan Silva-Pereyra, Sofía González-Salinas, Javier Sanchez-Lopez, Gloria A. Otero-Ojeda & Thalía Fernández - 2018 - Frontiers in Human Neuroscience 11.details
The Influence of Quality on eWOM: A Digital Transformation in Hotel Management.Gloria Sánchez-González & Ana M. González-Fernández - 2021 - Frontiers in Psychology 11.details
There is no doubt that the use of Internet for purchasing products and services has constituted a crucial change in how people go about buying them. In the era of digital transformation, the possibility of accessing information provided by other users about their personal experiences has taken on more weight in the selection and buying processes. On these lines, traditional word-of-mouth has given way to electronic word-of-mouth, which constitutes a major social change. This behavior is particularly relevant in the services (...) area, where potential users cannot in advance assess what is on offer. There is an abundant literature analyzing the effects of eWOM on different variables of interest in this sector. However, little is known about the factors that determine eWOM. Thus, the main objective of the present paper is to analyze the impact of two variables on eWOM. Both of them are crucial for potential customers in the process of finding hotel accommodations and they can motivate people to make such comments. The results demonstrate that these variables truly have a significant impact on whether or not users make comments on line. Moreover, it proved possible to observe certain differences according to the profile of the tourist involved and the destination where the hotel is located. In the current changing environment, this information is of great use for hotel managers in order to design strategies according to the type of guest they wish to attract. (shrink)
I Ndex.Elliot Abrams, M. H. Abrams, Patricia Aburdene, John Narsbut, Ahmad Aijaz, Anderson Perry, Phillip Anderson, Gloria Anzaldua, A. Carol & Aqumas St Thomas - 1995 - In Jeffrey Williams (ed.), Pc Wars: Politics and Theory in the Academy. Routledge. pp. 331.details
Mucin and Proteoglycan Functions in Embryo Implantation.Daniel D. Carson, Mary M. Desouza & E. Gloria C. Regisford - 1998 - Bioessays 20 (7):577-583.details
Biological Sciences in Natural Sciences
Reseña de "Aristóteles: retórica, pasiones y persuasión" de Cárdenas Mejía, Luz Gloria.M. Cárdenas - 2011 - Ideas Y Valores 60 (146):201-204.details
Book Reviews Section 4. Mayo Jr, John Bruce Francis, John S. Burd, Wilson A. Judd, Eunice S. Matthew, William F. Pinar, Paul Erickson, Charles John Stark, Clark Jr, Irvin David Glick, Howard D. Bruner, John Eddy, David L. Pagni, Gloria J. Abbington, Michael L. Greenbaum, Phillip C. Frey, Robert G. Owens, Royce W. van Norman, M. Bruce Haslam, Eugene Hittleman, Sally Geis, Robert H. Graham, Ogden L. Glasow, A. L. Fanta & Joseph Fashing - 1973 - Educational Studies 4 (4):198-200.details
Can Mindfulness Address Maladaptive Eating Behaviors? Why Traditional Diet Plans Fail and How New Mechanistic Insights May Lead to Novel Interventions.Judson A. Brewer, Andrea Ruf, Ariel L. Beccia, Gloria I. Essien, Leonard M. Finn, Remko van Lutterveld & Ashley E. Mason - 2018 - Frontiers in Psychology 9.details
Gli italiani e Bentham. Dalla "felicità pubblica" all'economia del benessere. Volume 1. Riccardo Faucci (ed.).Riccardo Faucci, Michael Da Freeman, Letizia Gianformaggio, Vincenzo Polignano, Anna Li Doonni, Robertino Giringhelli, Gabriella Gioli, Maurizio Mori, Daniela Parisi Acquaviva, Luciano Avagliano, Anna Camaiti, Marco Bertozzi, Sergio Cremaschi, Gloria Vivenza, Cosimo Perrotta, Lilia Costabile & Roberto Petrini - 1981 - Milano, Italy: Franco Angeli.details
INDICE -/- Note biografiche Introduzione, di Riccardo Faucci -/- Parte I - Da Verri a Toniolo 1. Jeremy Bentham: Contemporary Interpretations, di M.D.A. Freeman 2. Su Helvétius, Beccaria e Bentham, di Letizia Gianformaggio 3. L'etica utilitaristica di Pietro Veti, di Vincenzo Polignano 4. ll liberismo interventista di Vincenzo Emanuele Sergio, di Anna Li Donni 5. Il concetto di " felicità pubblica, nella << Genesi del diritto penale » di Romagnosi, e il rapporto Romagnosi-Bentham, di Robertino Ghiringhelli 6. « La più (...) grande felicità per il maggior numero » all'Accademia dei Georgofili ( 18)0-1850), di Gabriella Gioli 7. Una nota su Manzoni critico dell'utilitarismo, di Maurizio Mori 8. Sul concetto di utile in Francesco Ferrara e in Maffeo Pantaleoni, di Daniela Parisi Acquaviva 9. Sul pensiero sociale cristiano di fronte all'edonismo, di Luciano Avagliano 10. Giuseppe Toniolo e il recupero cattolico dell'utile e del valore, di Anna Camaiti -/- Parte II Aspetti di storia del pensiero economico nell'età classica -/- 1. La filosofia economica di Thomas Hobbes, di Marco Bertozzi 2. Ordinamento del sapere, modelli metodologici ed economia politica in Adam Smith, di Sergio Cremaschi 3. Elementi classici nel pensiero di Adam Smith: giurisprudenza romana e morale stoica, di Gloria Vivenza 4. Il "lusso" negli economisti italiani del Settecento, di Cosimo Perrotta 5. Prezzi naturali, prezzi di mercato e legge degli sbocchi nel dibattito fra Malthus e Ricardo, di Lilia Costabile 6. Marx e la moneta, di Roberto Petrini -/- . (shrink)
Culture, Communication, and Latina Feminist Philosophy: Toward a Critical Phenomenology of Culture.Jacqueline M. Martinez - 2014 - Hypatia 29 (1):221-236.details
An explication of the phenomenological sensibilities found in the work of Gloria Anzaldúa and other Latina feminist philosophers offers insight into the problem of bringing philosophy into greater relevance beyond academic and scholarly worlds. This greater relevance entails clear and direct contact with the immediacy of our communicative relationships with others, both inside and outside the academy, and allows for an interrogation of the totalizing perceptions that are at work within normative processes of epistemological legitimation. As a result of (...) this interrogation, it is possible to cultivate perceptual capacities related to culture that intervene in the normatively tacit cultural dispositions that often limit the possibilities of understanding. (shrink)
Teaching Methodologies in Times of Pandemic.Santiago Felipe Torres Aza, Gloria Isabel Monzón Álvarez, Gianny Carol Ortega Paredes & José Manuel Calizaya López - 2021 - Minerva 2 (4):5-10.details
The current times call for reforms in educational processes. The Covid-19 pandemic had an unforeseen impact on the educational system in all countries. This need for change requires new pedagogies and new methods for teaching and learning. Understanding the need for change is essential for the formulation of adaptive proposals, as well as for the generation of training activities to complement the teaching curriculum. New educational practices lead to a vision of educational quality, with new approaches that allow the continuous (...) integration of knowledge and permanent interaction with the student. This paper presents an analysis of the new teaching methodologies in times of confinement due to the pandemic caused by Covid-19. Keywords: Teaching methodologies, educational system, learning process. References [1]É. Tremblay-Wragg, C. Raby, L. Ménard y I. Plante, «El uso de estrategias didácticas diversificadas por cuatro profesores universitarios: ¿qué contribución a la motivación de aprendizaje de sus alumnos?,» Docencia en educación superior, vol. 26, nº 21, 2021. [2]L. Czerniewicz, R. Mogliacci, S. Walji, A. Cliff, B. Swinnerton y N. Morris, «Enseñanza y aprendizaje académico en el nexo: desagregación, mercantilización y digitalización en la educación superior,» Teaching in Higher Education, vol. 26, nº 2021, p. 16, 2021. [3]S. Dogan y A. Adam, «Aumentar el efecto del desarrollo profesional en la instrucción efectiva a través de comunidades profesionales,» Docentes y docencia: teoría y práctica, vol. 26, nº 3-4, pp. 326-349, 2020. [4]I. M. Torres Salas, «La enseñanza tradicional de las ciencias versus las nuevas tendencias educativas,» Educare, vol. 14, nº 1, pp. 131-142, 2010. [5]B. Fabio, J. Antonio Palomino y J. González Henríquez, «Evaluación y contraste de los métodos de enseñanza tradicional y lúdico,» Revista de Educación física y deportes, vol. 13, nº 94, pp. 29-36, 2008. [6]Y. Benítez y C. Mora, «Enseñanza tradicional vs aprendizaje activo,» Revista Cubana de Física, vol. 27, nº 2A, pp. 175-179, 2010. [7]P. Morales Bueno y V. Landa Fitzgerald, «Aprendizaje basado en problemas,» Theoria, vol. 13, nº 1, pp. 145-157, 2004. [8]R. Gil-Galván, I. Martín-Espinosa y F. Gil-Galván, «University student perceptions of competences acquired through problem-based learning,» Educación XXI, vol. 24, nº 1, pp. 271-295, 2020. [9]E. Ortiz Cermeño, «El aprendizaje basado en problemas,» Perfiles Educativos, vol. 41, nº 164, pp. 208-213, 2019. [10]E. Araos-Baeriswyl, C. Moll-Manzur, Á. Paredes y J. Landeros, «Aprendizaje invertido: un enfoque pedagógico en tiempos de pandemia,» Rev. Atención Primaria, vol. 53, nº 1, p. 117, 2021. [11]V. León-Carrascosa, M. Belando-Montoro y S. Sánchez-Serrano, «Design and validation of a questionnaire to evaluate the service-learning methodology,» Rev.Estudios sobre educación, vol. 39, nº 1, pp. 247-266, 2020. [12]J. Collado-Ruano, M. Ojeda, M. Malo y D. Amino, «Educación, arte e interculturalidad: El cine documental como lenguaje comunicativo y tecnología innovadora para el aprendizaje de la metodología I + D + I,» Rev. Texto livre, vol. 13, nº 3, pp. 376-393, 2020. [13]P. M. Bueno y V. Landa Fitzgerald, «Aprendizaje basado en problemas,» Theoria, vol. 13, nº 1, pp. 145-157, 2004. [14]J. A. Martí, M. Heydrich, M. Rojas y A. Hernández, «Aprendizaje basado en proyectos: Una experiencia de innovación docente,» Universidad EAFIT, vol. 46, nº 158, pp. 11-21, 2010. [15]L. Rojas y N. M. 
Jaimes, «Canvas LMS y el trabajo colaborativo como metodología de aprendizaje en entornos virtuales,» de Congreso Ibérico de Sistemas y Tecnologías de la Información, CISTI, Bogotá, Colombia, 2020. [16]B. Bordel y P. Mareca, «Results and Trends in educational MOOCs in the engineering area with MIRIADAX platform. A case study,» de 15th Iberian Conference on Information Systems and Technologies, CISTI 2020; Seville; Sevilla, España, 2020. [17]K. Vermeir y G. Kelchtermans, «Innovative practice as interpretative negotiation.A case-study on the kamishibai in Kindergarten.,» Teachers and Teaching: Theory and Practice, vol. 26, nº 3-4, pp. 248-263, 2020. [18]B. Tucker, «The Flipped Classroom: Online instruction at home frees class time for learning,» Education Next, vol. 1, nº 1, pp. 82-84, 2012. [19]M. V. Ledo, N. R. Michelena, N. N. Cao, I. d. R. M. Suárez y M. N. Vialart Vidal, « Aula invertida, nueva estrategia didáctica,» Educación Médica Superior, vol. 30, nº 3, pp. 678-688, 2016. [20]Metodologías activas por medio de las TIC, [Online]. Available: https://www.campuseducacion.com/blog/recursos/articulos-campuseducacion metodologias-activas-por-medio-de-las-tic/?cn-reloaded=1. [Last access: February 14, 2021]. (shrink)
A Critique of Philosophical Shamanism.Joshua M. Hall - forthcoming - The Pluralist.details
In this article, I critique two conceptions from the history of academic philosophy regarding academic philosophers as shamans, deriving more community-responsible criteria for any future versions. The first conception, drawing on Mircea Eliade's Shamanism (1951), is a transcultural figure abstracted from concrete Siberian practitioners. The second, drawing on Chicana theorist Gloria Anzaldúa's Borderlands/La Frontera (1987), balances Eliade's excessive abstraction with Indigenous American philosophy's emphasis on embodied materiality, but also overemphasizes genetic inheritance to the detriment of environmental embeddedness. I therefore (...) conclude that any aspiring philosophical shaman must ground their bodily-material transformative linguistic practices in the practices and environments of their own concrete communities, including the nonverbal languages of bodily comportment, fashion, and dance, in pursuit of social justice for all, including sovereignty, ecological justice, and well-being for Indigenous peoples worldwide. (shrink)
American Pragmatism in Philosophy of the Americas
Indigenous Philosophy in Philosophical Traditions, Miscellaneous
Justice in Social and Political Philosophy
Latin American Feminism in Philosophy of the Americas
The Epitaph of Publius Scipio.K. M. Moir - 1986 - Classical Quarterly 36 (01):264-.details
Quei apice insigne Dialaminis gesistei | mors perfec tua ut essent omnia | brevia, honos, fama, virtusque | gloria atque ingenium. Quibus sei | in longa licuiset tibe utier vita, | facile facteis superases gloriam | maiorum. Qua relubens te in gremiu, | Scipio, recipit terra, Publi, | prognatum Publio, Corneli. ILLRP 311 For you who wore the distinctive cap of a Flamen Dialis, Death cut everything short — honour, fame and virtue, glory and intellectual ability. If you had (...) been granted a long life in which to use these advantages, you would have far surpassed the glory of your ancestors by your achievements. Therefore Earth gladly takes you in her arms, Scipio — Publius Cornelius, son of Publius. (shrink)
Mestiza Double Consciousness: The Voices of Afro-Peruvian Women on Gendered Racism.Sylvanna M. Falcón - 2008 - Gender and Society 22 (5):660-680.details
In this article, the author proposes a confluence of W. E. B. Du Bois's "double consciousness" and Gloria Anzaldúa's "mestiza consciousness" to analyze the experiences of three Afro-Peruvian women. The merging of double and mestiza consciousness is necessary to holistically understand how gendered racism shapes their lives and why they have a desire to forge transnational solidarity with other women in the African Diaspora of the Americas. By gendering double consciousness and expanding mestiza consciousness beyond the United States and (...) the U.S.-Mexico borderlands, we can better understand how women's agency plays a role in what the author refers to as mestiza double consciousness. (shrink)
How Do We Know Who We Are?: A Biography of the Self.Arnold M. Ludwig - 1997 - Oxford University Press.details
"The terrain of the self is vast," notes renowned psychiatrist Arnold Ludwig, "parts known, parts impenetrable, and parts unexplored." How do we construct a sense of ourselves? How can a self reflect upon itself or deceive itself? Is all personal identity plagiarized? Is a "true" or "authentic" self even possible? Is it possible to really "know" someone else or ourselves for that matter? To answer these and many other intriguing questions, Ludwig takes a unique approach, examining the art of biography (...) for the insights it can give us into the construction of the self. In The Biography of the Self, he takes readers on an intriguing tour of the biographer's art, revealing how much this can tell us about ourselves. Drawing on in-depth interviews with twenty-one of our most esteemed biographers--writers such as David McCullough (the biographer of Truman and Theodore Roosevelt), Wallace Stegner (John Wesley Powell), Gloria Steinem (Marilyn Monroe), Leon Edel (Henry James), Peter Gay (Freud), Diane Middlebrook (Anne Sexton), and many others--and interweaving fascinating observations of his own practice, Ludwig takes us through the labyrinthine hall of mirrors we term the self and shows us how malleable, elusive, and paradoxical it can be. In chapters such as "The 'Real' Marilyn," "Psychoanalyzing Freud," "How Did Hitler Live With Himself?" and "What Madness Reveals," we sit in as biographers talk not only about their work, but about their subjects (Allan Bullock on Hitler and Stalin, for instance, or Arnold Rampersad on Langston Hughes) and how their subjects saw themselves. Ludwig describes how biographers must impose a narrative structure on their subjects' lives to create order out of a mass of often contradictory views, baffling behavior, and inconsistent self-representations, much in the same way that psychotherapists try to foster self-awareness and understanding in their patients. In his concluding chapter, Ludwig introduces a new concept--biographical freedom--which brilliantly reconciles free will and determinism. We can, he asserts, become biographers of ourselves. Like the biographer, we are constrained to consider all the available facts of our lives--the personal experiences, cultural forces, and predetermined scripts that shape us--but we remain free to interpret, emphasize, and fashion these givens into a cohesive and meaningful narrative of our own choosing. This thought-provoking volume offers not only a wide-ranging and informative commentary on the biographer's art, but also a highly original theory of the self. Readers interested in biography and in the lives of others will come away with a new sense of what it means to be a "person" and, in particular, who they are. (shrink)
British Philosophy in European Philosophy
Orígenes de Los Estudios de la Mujer En la Universidad Del Zulia.Oneida Chirino Ferrer - 2008 - Utopía y Praxis Latinoamericana 13 (41):107-117.details
This work presents the partial result of an investigation about Women's Studies at the University of Zulia. Basically, it addresses a part of the historical development of these studies in order to contextualize the importance of this academic area, which has permitted an exhaustive analysis of the ..
Feminist Ethics in Normative Ethics
On Humanism.Richard Norman - 2004 - Routledge.details
humanism /'hju:menizm/ n. an outlook or system of thought concerned with human rather than divine or supernatural matters. Albert Einstein, Isaac Asimov, E.M. Forster, Bertrand Russell, and Gloria Steinem all declared themselves humanists. What is humanism and why does it matter? Is there any doctrine every humanist must hold? If it rejects religion, what does it offer in its place? Have the twentieth century's crimes against humanity spelled the end for humanism? On Humanism is a timely and powerfully argued (...) philosophical defence of humanism. It is also an impassioned plea that we turn to ourselves, not religion, if we want to answer Socrates' age-old question: what is the best kind of life to lead? Although humanism has much in common with science, Richard Norman shows that it is far from a denial of the more mysterious, fragile side of being human. He deals with big questions such as the environment, Darwinism and 'creation science', euthanasia and abortion, and then argues that it is ultimately through the human capacity for art, literature and the imagination that humanism is a powerful alternative to religious belief. Drawing on a varied range of examples from Aristotle to Primo Levi and the novels of Virginia Woolf and Graham Swift, On Humanism is a lucid and much needed reflection on this much talked about but little understood phenomenon. (shrink)
15th/16th Century Philosophy, Misc in Medieval and Renaissance Philosophy
Teaching Gloria Anzaldúa as an American Philosopher.Alexander Stehn - 2020 - In Margaret Cantú-Sánchez, Candace de León-Zepeda & Norma Elia Cantú (eds.), Teaching Gloria E. Anzaldúa: Pedagogy and Practice for Our Classrooms and Communities. pp. 296-313.details
Many of my first students at Anzaldúa's alma mater read Borderlands/La Frontera and concluded that Anzaldúa was not a philosopher. Hostile comments suggested that Anzaldúa's intimately personal and poetic ways of writing were not philosophical. In response, I created "American Philosophy and Self-Culture" using backwards course design and taught variations of it in 2013, 2016, and 2018. Students spend nearly a month exploring Anzaldúa's works, but only after reading three centuries of U.S.-American philosophers who wrote in deeply personal and literary (...) ways about self-transformation, community-building, and world-changing. The sections of this chapter: 1) describe why my first students rejected Anzaldúa as a philosopher in terms of the discipline's parochialism; 2) present Anzaldúa's broader understanding of herself as a philosopher; 3) summarize my reconstructed Anzaldúa-inspired American Philosophy course and outline some assignments; 4) discuss how my students respond to Borderlands/La Frontera when we read it through the lens of self-culture; and 5) explain my attempt to shape the subdiscipline of American Philosophy by teaching Anzaldúa to specialists at the 2017 Summer Institute in American Philosophy. (shrink)
American Philosophy in Philosophy of the Americas
|
CommonCrawl
|
In what manner are functions sets?
From Introduction to Topology, Bert Mendelson, ed. 3, page 15:
A function may be viewed as a special case of what is called a relation.
Yet, a relation is a set
A relation $R$ on a set $E$ is a subset of $E\times E$.
while a function is a correspondence or rule. Is then a function also a set?
elementary-set-theory functions
$\begingroup$ How do you define "rule?" $\endgroup$ – Thomas Andrews Aug 30 '13 at 13:57
Functions correspond to an abstract rule, not to something like $f(x)=x+3$. This abstract rule need not be expressible, or even something that you can imagine. Functions, just like any other mathematical object, can be represented as a set. For example, real numbers can be thought of as sets.
Functions are represented as sets of ordered pairs. When we say that $f$ is a function from $X$ into $Y$ then we mean to say that $f$ is a set of ordered pairs $(x,y)$ such that $x\in X$ and $y\in Y$, and the following holds:
For every $x\in X$ there is some $y\in Y$ such that $(x,y)\in f$.
If $(x,y)\in f$ and $(x,y')\in f$ then $y=y'$.
When the latter occurs we can simply replace the $y$ by $f(x)$.
For example, $\{(0,0),(1,0)\}$ is a function from $\{0,1\}$ into $\{0\}$.
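As a small illustration of the two conditions above, here is a hypothetical Python sketch that checks whether a set of ordered pairs represents a function from $X$ into $Y$; the name `is_function` and the test values are mine, not from the question or answer.

```python
def is_function(pairs, X, Y):
    """Check whether a set of (x, y) pairs is a function from X into Y."""
    # Every pair must lie in X x Y.
    if not all(x in X and y in Y for (x, y) in pairs):
        return False
    # Condition 1: every x in X appears as the first coordinate of some pair.
    if not all(any(px == x for (px, _) in pairs) for x in X):
        return False
    # Condition 2: no x is paired with two different values of y.
    return all(y == y2 for (x, y) in pairs for (x2, y2) in pairs if x == x2)

print(is_function({(0, 0), (1, 0)}, {0, 1}, {0}))        # True, as in the example above
print(is_function({(0, 0), (0, 1)}, {0, 1}, {0, 1}))     # False: 0 maps to two values, 1 to none
```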
Asaf Karagila♦
$\begingroup$ +1 for "can be represented as a set" rather than "is a set"! $\endgroup$ – Peter Smith Aug 30 '13 at 13:54
$\begingroup$ @Stefan: First of all, it's a matter of usability. If I want to define a class of functions, say all functions from $\kappa$ into $\sf Ord$ (the class of all ordinals) then there is no shared codomain to all functions that I can just quantify over all $f\colon\kappa\to X$. And these sort of classes appear frequently in set theory. As for the second question, no. We don't define everything as sets. The definitions are abstract and we just interpret them as sets. $\endgroup$ – Asaf Karagila♦ Aug 31 '13 at 22:21
$\begingroup$ @AsafKaragila : Thanks. If I were just defining one function from set $X$ to set $Y$ (I'm not talking about classes of functions), like in your simple example, I would rather ''represent'' it as an ordered triple $(X, Y, S)$ where $S$ is a subset of $X \times Y$ with the usual requirements. The inclusion of $Y$ is so you know what the codomain is. The inclusion of $X$ is not really necessary but looks nice. I don't know if this is standard. $\endgroup$ – Stefan Smith Aug 31 '13 at 22:37
$\begingroup$ @Stefan: But what good is one function? Is all you ever use in mathematics is one function? Why was set theory developed as a foundation for mathematics anyway? So we can talk about sets of objects with certain properties. If a certain definition makes it harder to define and discuss a certain collection, and another definition is easier, then why not use the latter? As for the definition as a triplet, that's the Bourbaki approach and it's standard in many places in mathematics (there's a nice MO thread about it, I'll look for the link). $\endgroup$ – Asaf Karagila♦ Aug 31 '13 at 22:40
$\begingroup$ @Stefan: mathoverflow.net/questions/30381/definition-of-function $\endgroup$ – Asaf Karagila♦ Sep 1 '13 at 1:08
Yes, a function $f:X\to Y$ can be modeled by a set.
And yes, a function can be thought of as a special case of a relation, that is, a subset $R\subseteq X\times Y$. ("Function" after all can be thought of as shorthand for "functional relation.")
This is just reexpressing $f(x)=y$ as $(x,y)\in R$. So, the regular "function-is-a-rule" picture is equivalent to thinking of a subset $f\subseteq X\times Y$, where the set $f$ has special properties that make it a function. (The properties you are probably familiar with, I imagine.)
Relations don't have to be on the same set, as in the example you gave. However, when people say "relation on $E$", that is just shorthand for "relation from $E$ to $E$."
rschwieb
$\begingroup$ +1 for "can be modelled by a set" rather than "is a set"! $\endgroup$ – Peter Smith Aug 30 '13 at 13:53
$\begingroup$ @rschweib : I have the same comment for you that I did for Asaf. If you define, or "think of" $f$ as a subset of $X \times Y$, with no other information, then if you give me such a function, I can't tell what the codomain is. $\endgroup$ – Stefan Smith Aug 31 '13 at 22:21
$\begingroup$ Dear @StefanSmith : $f$ isn't just any subset, it's a subset with properties that make it a function. Namely, for every $x\in X$ there is a pair in $f$ with $x$ in the first coordinate, and there are not two distinct pairs sharing the same first coordinate. The domain is $X$ and the codomain is $Y$. $\endgroup$ – rschwieb Aug 31 '13 at 23:19
$\begingroup$ @rschwieb : Yes, I know $f$ has to have certain properties, I just mean that it is nice to know what the codomain is. If you show me that subset of $X \times Y$ and nothing else, I can't tell what the codomain is. Please see Asaf's comments above. $\endgroup$ – Stefan Smith Sep 2 '13 at 20:44
$\begingroup$ @StefanSmith Ah, I see what you mean now. Mainly, I don't think this is a real issue, because when you talk about $f$ being a subset of something, you have to say a subset of what, and then you will say "$X\times Y$" and the codomain will be there. Otherwise it's just a set of pairs, and yes, I would agree with you that the codomain would be unspecified in that case. I'm just not convinced people would present it as a set and not a subset. Thanks for helping me see your idea. $\endgroup$ – rschwieb Sep 2 '13 at 23:23
Others have said, clearly and nicely, how to represent or model functions by sets.
And that's the right way to put it. Here are three reasons not to say that functions are sets.
It might be conventional to treat a function $f(x) = y$ as corresponding to a certain set of ordered pairs $(x, y)$, and then treat the ordered pairs by the Wiener-Kuratowski construction. But at both steps we are making arbitrary choices from a range of possibilities. You could use the set of ordered pairs (y, x) [I've seen that done], and you could choose a different set-theoretic representation of ordered pairs [I've seen that done]. Since the conventional association of the function with a set involves arbitrary choices, there isn't a unique right way of doing it: none, then, can be reasonably said to reveal what a function really is. We are in the business of representing (relative to some chosen scheme of representation).
Some functions are "too big" to have corresponding sets. Take the function that maps a set to its singleton. The ordered pairs $(x, \{x\})$ are too many to form a set. If a function is too big to have a corresponding set, it can't be that set.
Most importantly, it would be a type confusion to identify a function with an object like a set. A function maps some object(s) to an object. In terms of Frege's nice metaphor, functions are "unsaturated", come with one or more slots waiting to be filled (where filling the single slot in, say, the unary numerical function the square of ... gives us a number). In modern terminology, functions have an intrinsic arity. By contrast, objects aren't unsaturated, don't have slots waiting to be filled, don't have arities, don't do mapping. And what applies to objects in general applies to those objects which are sets in particular. So functions aren't sets.
Peter Smith
$\begingroup$ @Peter : Thank you for answering an obvious question. Please forgive my ignorance, but I'm not sure I understand your point #2. The domain of your "function" from #2 is the "set of all sets", and from what I have heard, there is no such thing as the set of all sets, because if you assume such a thing exists, it leads to contradictions. So the domain of your "function" in #2 is not a set. Isn't the domain of any function supposed to be a set? I am not a logician, so please correct any incorrect assertions I may have made. $\endgroup$ – Stefan Smith Aug 31 '13 at 22:28
$\begingroup$ One man's modus ponens is another man's modus tollens. It's a mistake to suppose that every function has a set as domain-qua-set, as this example shows. We should think of the domain of a function as being the objects (plural) for which the function is defined: it is a further question whether those objects can form a set. Uusally they do -- but not always. $\endgroup$ – Peter Smith Sep 1 '13 at 16:10
$\begingroup$ @PeterSmith : thank you. I ask you again to forgive any ignorance I display. I consulted Wikipedia (I've never found an actual mistake there) and they stated a function ''is a relation between a set of inputs'' etc. I encountered a bit of category theory in grad school. Wasn't the idea of a functor developed so you could define mappings between categories (not just sets)? So wouldn't most people call your ''function'' from your #2 a functor, since its domain is the category of all sets? $\endgroup$ – Stefan Smith Sep 2 '13 at 20:51
$\begingroup$ Hello Dr. Smith. Could you please clarify your second point? Why can't $\{ (n, \{n \})| n \in \mathbb{N} \}$ be a set? $\endgroup$ – Ovi Jun 9 '18 at 2:09
$\begingroup$ Of course $\{ (n, \{n \})| n \in \mathbb{N} \}$ is a set! The point I was making is that the pairs $(x, \{x \})$, for $x$ any set, are too many to be a set. $\endgroup$ – Peter Smith Jun 9 '18 at 15:20
Let's begin at the other end.
A function can be regarded as a rule for assigning a unique value $f(x)$ to each $x$. Let's construct a set $F$ of all the ordered pairs $(x,f(x))$.
If $f:X\to Y$ we need every $x\in X$ to have a value $f(x)$. So for each $x$ there is an ordered pair $(x,y)$ in the set for some $y\in Y$. We also need $f(x)$ to be uniquely defined by $x$ so that whenever the set contains $(x,y)$ and $(x,z)$ we have $y=z$. In this way there is a unique $f(x)$ for each $x$ as we require.
The ordered pairs can be taken as elements of $X \times Y$ so that $F\subset X \times Y$
If we have a set of ordered pairs with the required properties, we can work backwards and see that this gives us back our original idea of a function.
Mark Bennet
|
CommonCrawl
|
Volume 19 Supplement 18
Selected Articles from the Computational Approaches for Cancer at SC17 workshop
High-throughput cancer hypothesis testing with an integrated PhysiCell-EMEWS workflow
Jonathan Ozik1,
Nicholson Collier1,
Justin M. Wozniak1,
Charles Macal1,
Chase Cockrell2,
Samuel H. Friedman3,
Ahmadreza Ghaffarizadeh4,
Randy Heiland5,
Gary An2 &
Paul Macklin5
Cancer is a complex, multiscale dynamical system, with interactions between tumor cells and non-cancerous host systems. Therapies act on this combined cancer-host system, sometimes with unexpected results. Systematic investigation of mechanistic computational models can augment traditional laboratory and clinical studies, helping identify the factors driving a treatment's success or failure. However, given the uncertainties regarding the underlying biology, these multiscale computational models can take many potential forms, in addition to encompassing high-dimensional parameter spaces. Therefore, the exploration of these models is computationally challenging. We propose that integrating two existing technologies—one to aid the construction of multiscale agent-based models, the other developed to enhance model exploration and optimization—can provide a computational means for high-throughput hypothesis testing, and eventually, optimization.
In this paper, we introduce a high throughput computing (HTC) framework that integrates a mechanistic 3-D multicellular simulator (PhysiCell) with an extreme-scale model exploration platform (EMEWS) to investigate high-dimensional parameter spaces. We show early results in applying PhysiCell-EMEWS to 3-D cancer immunotherapy and show insights on therapeutic failure. We describe a generalized PhysiCell-EMEWS workflow for high-throughput cancer hypothesis testing, where hundreds or thousands of mechanistic simulations are compared against data-driven error metrics to perform hypothesis optimization.
While key notational and computational challenges remain, mechanistic agent-based models and high-throughput model exploration environments can be combined to systematically and rapidly explore key problems in cancer. These high-throughput computational experiments can improve our understanding of the underlying biology, drive future experiments, and ultimately inform clinical practice.
Cancer is a complex, dynamical system operating on many spatial and temporal scales: processes include molecular interactions (e.g., gene expression and protein synthesis; nanoseconds to minutes), cell-scale processes (e.g., cycle progression and motility; minutes to hours), tissue-scale processes (e.g., tissue mechanics and biotransport; minutes to days), and organ and organism-scale processes (e.g., organ failure and clinical progression; weeks, months, and years). Cancer-host interactions dominate throughout these scales, including interactions between tumor cells and the vasculature (hypoxic tumor cells trigger growth of new blood vessels; new but dysfunctional blood vessels supply further growth substrates and can promote metastasis), between tumor cells and stromal cells (tumor cells can prompt tissue remodeling that facilitates tissue invasion), and between tumor cells and the immune system (immune cells can kill tumor cells, but tumor cells can co-opt inflammation to promote their survival). See the reviews in [1–6]. When designing and evaluating new cancer treatments, it is imperative to consider the impact on this complex multiscale cancer-host system.
Cancer-host interactions have been implicated in the poor (and sometimes surprising) clinical outcomes of existing and new treatments. Chemotherapies fail when molecular-scale processes (e.g., DNA repair failures, mutations, or epigenetic alterations) cause resistant tumor clones to emerge (multicellular-scale birth-death processes) which can survive the treatment [6–11]. Anti-angiogenic therapies that target blood vessels were expected to be potent agents against cancer [12], but disrupting tissue perfusion inhibits drug delivery and increases hypoxia, which was subsequently shown to select for more aggressive tumor phenotypes, including alternative metabolism and increased tissue invasion [13–15]. On the other hand, medications originally developed for osteoporosis (bone loss) were found to reduce the incidence of bone metastases through unclear mechanisms, but hypothesized to arise from tumor-osteoclast interactions [16–18]. Such examples underscore the need to evaluate and improve cancer treatments from a cancer-host systems perspective.
Recent successes of cancer immunotherapies—such as CAR (chimeric antigen receptor) T-cell treatments [19, 20]—have brought heightened attention to cancer immunology. In some patients, immune cell therapies have been impressively successful, while other patient populations have demonstrated disappointing outcomes; this variability of patient response arises in part from the poorly-understood, complex interactions between cancer and the immune system [21–26]. This suggests that better immune therapies could be designed through systematic investigations of tumor-immune interactions.
Key elements for systematic and mechanistic investigation of cancer immunotherapy
Given the complexity and underlying uncertainty regarding the biological processes that drive cancer, dynamic computational models have been used to represent various cellular and molecular functions associated with cancer [27].
In particular, agent-based modeling [27] is an increasingly common computational modeling method that can aid in the translation of genetic/molecular/sub-cellular processes to the multicellular behavior of tumors and the host. Agent-based models (ABMs) can serve as modes for multiscale dynamic knowledge representation [28, 29], with the rules for each model representing a particular hypothesis of how the system may work. As such, they serve a potentially vital role in aggregating existing biological knowledge, and through simulation experiments exploring their behavior, can help establish the boundaries of the set of plausible hypotheses.
However, the dynamic multiscale models (e.g., ABMs) needed to approximate the complexity of the overall system are by their very nature resistant to formal analysis. Their overall behavior can only be evaluated by the execution of heuristic methods that require very large numbers of simulations, a process we term model exploration (ME). ME is a near-ubiquitous component in the development of models and algorithms; as applied to ABMs, it involves an iterative workflow where simulation experiments are carried out across a large range of parameter values (parameter space exploration) and varying perturbations and initial conditions (model behavior space exploration). Model outputs from a set of simulation experiments are evaluated against some predetermined metric, which informs the next iteration of simulation experiments. Advances in high-performance computing can allow the parallelization of this process, resulting in high-throughput dynamic knowledge representation and hypothesis evaluation to address a current bottleneck in the Scientific Cycle [30]. However, we propose that the ME process itself can be enhanced with a computational framework for its workflow [31].
In this paper, we formulate the requirements for a computational experimental system for systematic, high-throughput hypothesis testing and optimization. We provide an example of how high-throughput hypothesis testing can be applied to the complex problem of tumor-immune interactions using a novel framework that integrates a multiscale mechanistic model development platform—PhysiCell [32] and BioFVM [33]—within a computational ME manager—Extreme-scale Model Exploration with Swift (EMEWS) [31].
We then present early work on implementing our proposed high-throughput hypothesis testing and optimization framework with PhysiCell and EMEWS. After an initial 2-D test deployment that explored the impact of tumor oxygenation, we present a high-throughput investigation of a 3-D computational model of the adaptive immune response to tumor cells from [32]. This work exposed new and counter-intuitive insights on tumor-immune cell attachment dynamics and the nonlinear role of immune cell homing on successful and unsuccessful tumor suppression. The study performed over 1.5 years' worth of computational investigation in just over two days—a feat that is computationally infeasible without a framework that merges mechanistic modeling with efficient model exploration.
We close with a discussion of our ongoing and future work to implement the full PhysiCell-EMEWS framework iterative hypothesis exploration and optimization, along with potential applications in developing synthetic multicellular cancer treatment systems. We note that both PhysiCell and EMEWS are free and open source software. PhysiCell is available at http://PhysiCell.MathCancer.org and EMEWS is available at http://emews.org.
3-D cancer immunology model exploration using PhysiCell-EMEWS
There have been multiple projects utilizing agent-based/hybrid modeling of tumors and their local environments [34–37]. Review of this work and our own has led to the following list of key elements needed to systematically investigate cancer-immune dynamics across high-dimensional parameter/hypothesis spaces to identify the factors driving immunotherapy failure or success:
efficient 3-D simulation of diffusive biotransport of multiple (5 or more) growth substrates and signaling factors on mm$^3$-scale tissues, on a single compute node (attained via BioFVM [33]);
efficient simulation of 3-D multicellular systems ($10^5$ or more cells) that account for basic biomechanics, single-cell processes, cell-cell interactions, and flexible cell-scale hypotheses, on a single compute node (attained via PhysiCell [32]);
a mechanistic model of an adaptive immune response to a 3-D heterogeneous tumor, on a single compute node (introduced in [32]);
efficient, high-throughput computing frameworks that can automate hundreds or thousands of simulations through high-dimensional hypothesis spaces to efficiently investigate the model behavior by distributing them across HPC/HTC resources (attained via EMEWS [31]); and
clear metrics to quantitatively compare simulation behaviors, allowing the formulation of a hypothesis optimization problem (see "Proposition: hypothesis testing as an optimization problem" section).
Efficient 3-D multi-substrate biotransport with BioFVM
In prior work [33] we developed BioFVM: an open source framework to simulate biological diffusion of multiple chemical substrates (a vector ρ) in 3-D, governed by the vector of partial differential equations (PDEs)
$$\begin{array}{*{20}l} \frac{\partial\boldsymbol{\rho}}{\partial t} & = \mathbf{D}\nabla^{2}\boldsymbol{\rho} - \boldsymbol{\lambda}\boldsymbol{\rho} + \mathbf{S}(\boldsymbol{\rho}^{*} - \boldsymbol{\rho}) - \mathbf{U}\boldsymbol{\rho} \\ & \quad+\sum\limits_{\{\text{cells} \; i\}} \delta(\mathbf{x} - \mathbf{x}_{i})W_{i}\left[\mathbf{S}_{i}({\boldsymbol{\rho}^{*}_{i}} - \boldsymbol{\rho}) - \mathbf{U}_{i} \boldsymbol{\rho}\right]. \end{array} $$
Here, D is the vector of diffusion coefficients, λ gives the decay rates, S and U are vectors of bulk source and uptake rates, and for each cell i, Si and Ui are its secretion and uptake rates, Wi is its volume, and xi is its position. All vector-vector products (e.g., λρ) are component-wise, ρ∗ denotes a vector of saturation densities (at which secretion or a source ceases), and δ is the Dirac delta function.
As detailed in [33], we solve this equation by a first-order operator splitting: we solve the bulk source and uptake equations first, followed by the cell-based sources and uptakes, followed by the diffusion-decay terms. We use first-order implicit time discretizations for numerically stable first-order accuracy. When solving the bulk source/decay term, we have an independent vector of linear ordinary differential equations (ODEs) in each computational voxel of the form:
$$ \frac{\partial \boldsymbol{\rho}}{\partial t} = \mathbf{S}(\boldsymbol{\rho}^{*} - \boldsymbol{\rho}) - \mathbf{U}\boldsymbol{\rho}. $$
Each of these sets of ODEs can be solved with the standard backwards Euler difference, giving a first-order accurate, stable solution. We trivially parallelize the solution by dividing the voxels across the processor cores with OpenMP: each thread works on a single voxel's set of ODEs. Moreover, we wrote the ODE solver to work vectorially, with a small set of BLAS (basic linear algebra subprograms) implemented to reduce memory allocation, copy, and deallocation operations. (We implemented specific BLASes as needed to keep the framework source small and minimize dependencies to facilitate cross-platform portability across Windows, Linux, OSX, and other operating systems.) We solved the cell-centered sources and sinks similarly, by dividing the solvers across the cells by OpenMP (one set of ODEs per cell); note that each cell will act on the substrates in the voxel containing the cell center, by the Dirac delta formulation.
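To make the per-voxel update concrete, the sketch below applies one backward Euler step to the bulk source/uptake ODE above, using NumPy vectors with one entry per substrate. This is an illustrative re-derivation under the stated discretization, not BioFVM's actual C++ solver; the function name and sample values are mine.

```python
import numpy as np

def source_uptake_step(rho, S, U, rho_star, dt):
    """One backward Euler step of d(rho)/dt = S*(rho_star - rho) - U*rho.

    All arguments are NumPy arrays with one entry per diffusing substrate;
    the update is component-wise, matching the vector notation in the text.
    """
    return (rho + dt * S * rho_star) / (1.0 + dt * (S + U))

# Example: two substrates in one voxel, advanced by a 0.01 min time step.
rho = np.array([38.0, 0.0])        # current densities
S = np.array([0.0, 0.1])           # secretion (source) rates
U = np.array([10.0, 0.0])          # uptake rates
rho_star = np.array([38.0, 1.0])   # saturation densities
print(source_uptake_step(rho, S, U, rho_star, dt=0.01))
```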
We solve the diffusion-decay equation by the locally one-dimensional (LOD) method, which transforms a single 3-D PDE into a series of three 1-D PDEs (one PDE with respect to the x derivatives, one for the y derivatives, and one for the z derivatives) [38, 39]. In any x-, y-, or z-strip, using centered 2nd-order finite differences for the spatial derivative and backward 1st-order Euler differences yields a tridiagonal linear system for each substrate's PDE; because each PDE has the same form, we have a vector of tridiagonal linear systems. In [33], we solved this system with a vectorized Thomas algorithm [40]: an efficient O(n) direct linear solver for a single tridiagonal linear system, which we vectorized by performing all addition, multiplication, and division operations vectorially (with term-wise vector-vector multiplication and division). As a further optimization, we took advantage of the fact that D and λ are constant and noted that the forward sweep stage of the Thomas algorithm only depends upon D, λ, and the spatial mesh, but not on the prior or current solution. Thus, we could pre-compute and cache in memory the forward-sweep steps in the x-, y-, and z-directions to reduce the processing time. We tested on numerous computational problems, and found the overall method was first-order accurate and stable in time, and second-order accurate in space [33]. Moreover, we found that the computational speed scaled linearly in the number of PDEs solved, with a slope much less than one: Simulating 10 PDEs takes approximately 2.6 times more computational effort than a single PDE, whereas sequentially solving 10 PDEs requires approximately 10 times more effort than a single PDE. See further results in [33].
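The cached forward sweep described above can be illustrated with the following Python sketch, which precomputes the right-hand-side-independent part of the Thomas algorithm for a constant-coefficient tridiagonal system and reuses it for every solve. It is a scalar (one-substrate) simplification for illustration, not the vectorized solver in BioFVM.

```python
import numpy as np

def thomas_precompute(a, b, c):
    """Forward sweep of the Thomas algorithm for constant coefficients.

    a: sub-diagonal (length n-1), b: diagonal (length n), c: super-diagonal (length n-1).
    Returns the modified super-diagonal and pivot denominators, which depend only
    on the matrix, so they can be cached once per x-, y-, or z-strip.
    """
    n = len(b)
    c_prime = np.zeros(n - 1)
    denom = np.zeros(n)
    denom[0] = b[0]
    c_prime[0] = c[0] / denom[0]
    for i in range(1, n):
        denom[i] = b[i] - a[i - 1] * c_prime[i - 1]
        if i < n - 1:
            c_prime[i] = c[i] / denom[i]
    return c_prime, denom

def thomas_solve(a, c_prime, denom, d):
    """Solve one right-hand side d, reusing the cached forward sweep."""
    n = len(d)
    d_prime = np.zeros(n)
    d_prime[0] = d[0] / denom[0]
    for i in range(1, n):
        d_prime[i] = (d[i] - a[i - 1] * d_prime[i - 1]) / denom[i]
    x = np.zeros(n)
    x[-1] = d_prime[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d_prime[i] - c_prime[i] * x[i + 1]
    return x
```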
In testing, we have found that this system can simulate 5–10 diffusing substrates on 1 million computational voxels (sufficient to simulate 8 mm$^3$ at 20 μm resolution) on a quad-core desktop workstation with 2 GB of memory; the performance was faster on a single compute node with greater computational core counts. This CPU-based algorithm maximizes cross-platform compatibility, but we anticipate a GPU implementation would be at least an order of magnitude faster.
Efficient 3-D multicellular simulations with PhysiCell
In [32], we developed a 3-D agent-based modeling framework by extending BioFVM's basic agents (discrete cell-like agents with static positions, which could secrete and consume chemical substrates in the BioFVM environment) to create extensible software cell agents. Each cell has an independent, hierarchically-organized phenotype (the cell's behavioral state and parameters) [41, 42]; user-settable function pointers to define hypotheses on the cell's phenotype, volume changes, cell cycling or death, mechanics, orientation, and motility; and user-customizable data. The cells' function pointers can be changed at any time in the simulation, allowing dynamical cell behavior and even switching between cell types. The overall program flow progresses as follows. In each time step:
Update the chemical diffusing fields by solving the PDEs above with BioFVM.
For each cell, update the phenotype by evaluating each cell's custom phenotype function. Also run the cells' cell cycle/death models, and volume update models. This step is parallelized across all the cells by OpenMP.
Serially process the cached lists of cells that must divide, and cells that must be removed (due to death). Separating this from step 2 preserved memory coherence.
For each cell, evaluate the mechanics and motility functions to calculate the cells' velocities. This step can be parallelized by OpenMP because the cell velocities are based upon relative positions.
For each cell, update the positions (using the second-order Adams-Bashforth discretization) using the pre-computed velocities. This step is also parallelized by OpenMP.
Update time.
The cell velocity functions (adapted from [35]) require computing $n-1$ pairwise cell-cell mechanical interactions for all $n$ cells, giving $O(n^2)$ computational performance—this would be prohibitive beyond $10^3$ or $10^4$ cells. However, biological cells have finite interaction distances, so we created an interaction testing data structure that placed each cell's memory address in a Cartesian mesh, and limited cell-cell mechanical interaction testing to the nearest interaction voxels. This reduced the computational effort to $O(n)$. PhysiCell uses separate time step sizes for biotransport (Δt ∼0.01 min), cell mechanics (Δt ∼0.1 min), and cell processes (Δt ∼6 min) to take advantage of the multiple time scales. See [32] for further details.
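The binning idea behind this $O(n)$ strategy can be sketched as follows: assign each cell to a coarse Cartesian bin and only test mechanics against cells in the surrounding bins. This hypothetical 2-D Python version is purely illustrative; PhysiCell's actual implementation is in C++ and stores cell memory addresses per interaction voxel.

```python
import numpy as np
from collections import defaultdict

def neighbor_pairs(positions, interaction_distance):
    """Return candidate interacting pairs using a uniform grid whose bin size equals
    the maximum interaction distance, so only the 3x3 block of surrounding bins is
    searched for each cell (O(n) on average instead of O(n^2))."""
    bins = defaultdict(list)
    for idx, p in enumerate(positions):
        bins[tuple((p // interaction_distance).astype(int))].append(idx)
    pairs = []
    for (bx, by), members in bins.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in bins.get((bx + dx, by + dy), []):
                        if i < j and np.linalg.norm(positions[i] - positions[j]) <= interaction_distance:
                            pairs.append((i, j))
    return pairs

# Example: 5 cells in a 100 x 100 domain with a 20-unit interaction distance.
rng = np.random.default_rng(0)
cells = rng.uniform(0, 100, size=(5, 2))
print(neighbor_pairs(cells, interaction_distance=20.0))
```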
Extreme-scale Model Exploration with Swift (EMEWS)
While detailed modeling approaches like PhysiCell allow higher fidelity representation of molecular, cellular, and tissue dynamics in cancer, they present substantial challenges. These challenges center on dealing with the large parameter spaces of these models and the highly nonlinear relationship between ABM input parameters and model outputs due to multiple feedback loops and emergent behaviors. Since their complexity limits the use of formal analytical approaches, the calibration and interpretation of complex ABMs often requires heuristic model exploration approaches that adaptively evaluate large numbers of simulations. These approaches often involve complex iterative workflows driven by sophisticated ME algorithms, such as genetic algorithms [43] or active learning [44, 45], which adaptively refine model parameters through the analysis of recently generated simulation results and launch new simulations.
The Extreme-scale Model Exploration with Swift (EMEWS) framework [31] is built on the general-purpose parallel scripting language Swift/T [46], and is used to generate dynamic, highly concurrent simulation workflows for guiding ABM exploration in high-dimensional parameter spaces. EMEWS enables the direct integration of external ME algorithms to control and coordinate the running and evaluation of large numbers of simulations via iterative HPC workflows. The general-purpose nature of the underlying Swift/T workflow engine allows the supplementing of the workflows with additional analysis and post-processing as well.
EMEWS enables the user to plug in both ME algorithms and scientific applications, such as PhysiCell ABMs. The ME algorithm can be expressed in Python or R, utilizing high-level queue-like interfaces with two implementations: EQ/Py and EQ/R (EMEWS Queues for Python and R). The scientific application can be implemented as an external application called through the shell, in-memory libraries accessed directly by Swift (for faster invocation), or Python, R, Julia, C, C++, Fortran, Tcl and JVM language applications. Thus, researchers in various fields who may not be parallel programming experts can simply apply existing ME algorithms to their existing scientific applications and run large-scale computational experiments without explicit parallel programming. A key feature of this approach is that neither the ME algorithm nor the scientific application is modified to fit the framework.
Mechanistic 3-D model of adaptive immune response to heterogeneous tumors
Heterogeneous tumor
In [32], we developed an initial model of an adaptive immune response to a heterogeneous tumor. In the model, each cell exchanges cell-cell adhesive and "repulsive" forces, and enters the cell cycle at a rate that increases with oxygen availability. Each cell consumes oxygen, which diffuses from the simulation's boundary voxels, leading to the formation of hypoxic gradients. Where oxygenation drops to very low levels, tumor cells become necrotic and slowly lose volume. To model heterogeneity, each cancer cell has a normally distributed mutant "oncoprotein" expression 0≤p≤2 (with mean 1, standard deviation 0.3). Cells with greater expression of p are modeled as entering the cell cycle more rapidly. See [32] for more details and references.
Immunogenicity and immune response
As a simplified model of MHC (major histocompatibility complex: a surface complex that presents a "signature" sampling of fragments of the cell's peptides, allowing immune cells to learn to recognize the body's own cells [47, 48]), we assume cells with greater p expression are more immunogenic: more likely to present abnormal peptides on MHC and be recognized as targets for immune attack. All tumor cells secrete an immunostimulatory factor that diffuses through the domain. (Even in situ tumors are known to prompt immune cell homing [49].) Immune cells perform biased random migration (chemotaxis) along gradients of this factor, test for collision with cells, and form tight adhesions with any cells that are found.
For any time interval $[t, t+\Delta t]$ while an immune cell $i$ is attached to another cell $j$, the immune cell attempts to induce apoptosis (programmed cell death) with probability $r_i p_j \Delta t$, where $r_i$ is the immune cell's killing rate for a normal immunogenicity ($p = 1$), and $p_j$ is the $j$th cell's oncoprotein expression; this models activation of a death receptor, such as FAS. For more background biology and references, see [32]. If an immune cell triggers apoptosis, it detaches and continues its search for new immunogenic targets. Otherwise, it remains attached, but with a similar stochastic process to regulate how long it remains attached.
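The stochastic attack model above can be summarized in a few lines of Python. This is a hedged sketch of the probabilities named in the text (attachment rate $r_A$, killing rate $r_i$, oncoprotein expression $p_j$, attachment lifetime $T_A$), not PhysiCell's member functions; the function name and return convention are mine.

```python
import random

def immune_cell_step(attached, r_A, r_i, p_j, T_A, dt):
    """Advance one immune cell by dt minutes against a touching tumor cell.

    Returns (attached, killed): whether the adhesion persists after this interval
    and whether the tumor cell was triggered to apoptose during it.
    """
    if not attached:
        # Form an adhesive attachment with probability r_A * dt.
        return (random.random() < r_A * dt), False
    # Attempt to trigger apoptosis with probability r_i * p_j * dt.
    if random.random() < r_i * p_j * dt:
        return False, True          # success: detach and resume searching for targets
    # Otherwise possibly detach anyway, with probability dt / T_A.
    if random.random() < dt / T_A:
        return False, False
    return True, False
```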
Sample 3-D simulation
In [32], we simulated this problem in 3D for an initial cell population of approximately 18,000 cells in a ∼5 mm$^3$ domain on a quad-core desktop workstation. At the simulation start, tumor cells are very heterogeneously distributed; see the first frame in Fig. 1, where the tumor cells are shaded by p expression from blue (p≤0.5) to yellow (p≥1.5). By two weeks (Fig. 1, third frame), the tumor has grown by an order of magnitude (from $\sim 10^4$ to $10^5$ cells), there is clear selection for the cells with the most p (the tumor is visibly more yellow), oxygen transport limits have led to the formation of a necrotic core (brown central region), and the initial spherical symmetry has been lost due to the formation of clonal foci (larger, more homogeneous yellow regions).
Sample 3-D cancer-immune simulation. 3-D simulation of adaptive immune response to a heterogeneous tumor, with cells ranging from blue (low proliferation and immunogenicity) to yellow (high proliferation and immunogenicity). Immune cells are red; cyan cells have undergone apoptosis due to immune attack. A high-resolution animation can be viewed at https://www.youtube.com/watch?v=nJ2urSm4ilU. Adapted with permission (via CC-BY 4.0) from [32]
At this point, we introduced 7500 immune cells (red) and applied the immune response model. By later simulation times (16 and 21 days in Fig. 1), we observed that the immune cells continue migrating along the chemical gradient until reaching the center where the gradient is approximately flat. Due to the particular choice of motility parameters for the immune cells, they become temporarily trapped in the center, allowing tumor cells to evade therapy and re-establish the tumor. A high-resolution video of this simulation can be viewed at https://www.youtube.com/watch?v=nJ2urSm4ilU.
Proposition: hypothesis testing as an optimization problem
We posit that the application of an integrated framework where the PhysiCell model is deployed within the EMEWS framework can be used to take advantage of EMEWS's more advanced ME capabilities to inform hypothesis exploration as a function of parameter space search (e.g., via active learning) and hypothesis optimization (e.g., via genetic algorithms). As an example, we describe the following set of parameters that represent a space of possible interactions governing tumor-immune interactions, and how that space could be explored:
A family of cell behavior hypotheses and constraints on their parameter values. For example:
immune cells can exhibit any combination of random motility, chemotaxis towards tumor cells, or chemotaxis away from other immune cells
attached immune cells can secrete immunoinhibitory or immunostimulatory factors
tumor cells can secrete immunoinhibitory factors, but at a cost to cellular energy available for proliferation
the microenvironment can have variable far-field oxygenation values.
A mechanistic computational model for simulating the cancer-host system under the hypotheses. For example:
We implement the additional diffusion equations in BioFVM.
We implement the prior tumor cell immunogenicity model, and add a basic model of cell metabolism (e.g., as in [50]) with extra energy cost for secreting the immunoinhibitory factor.
We implement the prior immune cell adaptive response model but vary the cell motility according to the specific hypotheses for migration bias along the various chemical gradients and the level of randomness, and we decrease the migration speed, adhesion rate, and cell killing rate under immunoinhibition.
A set of target system behaviors and/or validation data. For example:
We seek hypotheses that result in emergence of immune-resistant tumors.
A model error metric to compare models and assess their match to target behavior. For example:
For a set of hypotheses, we quantify the number of tumor cells after 48 h of immune attack, the secretion level of the immunoinhibitory factor, and the mean immunogenicity (mutant oncoprotein).
Given these user inputs, the proposed PhysiCell-EMEWS system would distribute simulations across the hypothesis space (each running independently on its own compute node, where they are optimized). For succinctness, we refer to a point in the hypothesis space as a single simulation ruleset. Because these models are stochastic, EMEWS will initialize multiple simulations for each ruleset. EMEWS then collects the simulation outputs, evaluates the user-supplied metric against the target model behavior, and either reports the best hypothesis ruleset (if only one iteration is allowed), or repeats the process to refine the current best hypothesis ruleset (e.g., by a genetic algorithm). Each iteration is a high-throughput hypothesis test. And the overall iteration is hypothesis optimization. See Fig. 2.
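Schematically, the proposed loop could be written as in the sketch below. The function names (run_replicates, error_metric, propose_next) are placeholders for the user-supplied simulator, error metric, and ME algorithm; they are not EMEWS or Swift/T API calls.

```python
def hypothesis_optimization(initial_rulesets, run_replicates, error_metric,
                            propose_next, n_iterations):
    """Sketch of the iterative hypothesis-optimization loop.

    run_replicates(ruleset) -> list of simulation outputs (stochastic replicates)
    error_metric(outputs)   -> scalar mismatch against the target behavior
    propose_next(scored)    -> new rulesets to try (e.g., one genetic-algorithm step)
    """
    rulesets = list(initial_rulesets)
    best = None
    for _ in range(n_iterations):
        # One high-throughput hypothesis test: score every ruleset in this batch.
        scored = [(ruleset, error_metric(run_replicates(ruleset))) for ruleset in rulesets]
        scored.sort(key=lambda pair: pair[1])
        if best is None or scored[0][1] < best[1]:
            best = scored[0]
        # Hypothesis optimization: let the ME algorithm refine the batch.
        rulesets = propose_next(scored)
    return best  # the best hypothesis ruleset and its error
```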
Hypothesis testing as an optimization problem. If scientific users can (1) formulate a range of hypotheses, (2) supply an efficient 3-D mechanistic simulator (BioFVM+PhysiCell), (3) provide validation behaviors and/or data, and (4) supply an error metric, then the combined PhysiCell-EMEWS system can automatically explore the space of hypotheses, initiate simulations on HPC/HTC resources, collect data to evaluate the error metric, and then make further decisions on which hypotheses and parameter values to explore next. The framework iteratively sharpens hypotheses that bring new biological and clinical insights
The output is a set of hypotheses H that lead to the desired cell behaviors. For example, in hypoxic conditions, we may see less selection for the immunoinhibitory secreting cells due to limited nutrients, unless the cells are under attack by many immune cells. This hypothesis could then be tested experimentally. If the hypothesis does not hold experimentally, we would refine the computational model (e.g., focusing more on hypoxic cell metabolic and motile adaptations.)
We now demonstrate the first steps in implementing and testing the PhysiCell-EMEWS hypothesis optimization system: we conduct a single iteration of ME on a 2-D hypoxic cancer study, and then we test the 3-D cancer-immune model on a high-throughput study that reduced over a year of continuous computing time to just 2 days.
Test deployment of PhysiCell within EMEWS
The initial example of integrating PhysiCell with EMEWS involved examination of the effect of hypoxic conditions on tumor growth. This involved the development of a fast 2-D tumor simulator that could simulate 48 h of oxygen-limited tumor growth in 1–2 min. The framework integration proceeded as in the "Proposition: hypothesis testing as an optimization problem" section above. To work through user-supplied elements:
Oxygenation conditions could vary from completely anoxic (0 mmHg) to typical values of well-oxygenated breast tissue (60 mmHg; see [33, 51]). The initial cell population could vary from 1 to 2000 cells.
PhysiCell was used to create a program that could read these two hypothesis parameters at the command line, initialize the simulation, and run to 48 h without user input.
The target behavior was to maximize live cell fraction.
The model metric was the live cell fraction after 48 h.
We implemented a parameter sweep of PhysiCell using EMEWS, with the following oxygenation values:
0, 2.5, 5, 8, 10, 15, 38 or 60 mmHg
and the following initial cell counts:
1, 10, 100, 1000, 2000
EMEWS saved the model outputs in separate directories, facilitating subsequent postprocessing analysis and visualization. We plot a 2-D array of the final simulation images in Fig. 3 and the final live cell counts in Fig. 4 (top). As expected, increasing the initial cell count always increases the final cell count (and overall tumor size) 48 h later, but for any fixed oxygenation condition, this also leads to greater prevalence of necrosis, and a nonmonotonic effect on final live cell fraction (Fig. 4 (bottom)).
First PhysiCell-EMEWS test on cancer hypoxia: tumor plots. Here necrotic cells (dead by oxygen starvation) are brown, non-cycling cells are blue, and cycling cells are green and magenta. Increasing the initial cell count increases the final cell count, but also increases the final dead cell fraction (seen as the increasing prevalence of brown)
First PhysiCell-EMEWS test on cancer hypoxia: analytics. Live tumor cell count (top) and live cell fraction (bottom) after 48 h, as a function of oxygenation conditions (each curve is a different condition) and initial cell count (horizontal axis). For intermediate oxygenation conditions, increasing the initial cell count increases the final live cell count (top) but decreases the live cell fraction (bottom). Once oxygenation is high enough, any initial cell count yields nearly 100% live fraction at 48 h
In Fig. 4 (bottom), we plot the final live cell fraction as a function of the initial cell count, for each fixed oxygenation condition. For low oxygenation conditions (0, 2.5 mmHg), almost all cells are dead at 48 h regardless of cell seeding choices. For intermediate oxygenation conditions (5 to 38 mmHg), the effect is nonmonotonic: for small initial cell populations (1 or 10 cells), stochastic apoptosis effects can sometimes leave a smaller final live fraction than a larger cell population; this highlights the importance of testing multiple simulation replicates for stochastic models. Past 100 initial cells, the stochastic effects are reduced, and increasing the initial cell count results in a lower final live fraction (due to oxygen depletion by the larger cell population and the emergence of a necrotic core). In particular, for these simulations increasing from 1000 to 2000 cells decreased the final live cell fraction. This behavior was not observed for high oxygenation (60 mmHg): no portions of the tumor ever drop below the necrotic threshold. Moreover, this simulated cell line has saturating proliferation above 38 mmHg pO$_2$ (tissue physioxia [51]), and so for sufficiently high initial oxygenation, the entire tumor stays above this threshold, where there is no oxygen constraint on growth.
Large-scale cancer immunology investigation
In [32], we performed a single 3-D cancer-immune simulation as detailed above in "Sample 3-D simulation" section. As discussed in [32], the simulation revealed that immune cell homing and tumor-immune interactions are highly non-intuitive, and that immune cell motility parameters play a critical role in the success or failure of the immune response. Had the immune cell "homing" been weaker (i.e., more random, less biased along the chemical gradient), there would have been more mixing between the immune and tumor cells, leading to more cell-cell interactions, a greater probability of tumor cell killing, and a greater effective response. Thus, a broader investigation of the immune cell motility model was warranted.
Defining the simulation investigation
We identified the following three model parameters as initial targets for study:
Immune cell attachment rate $r_A$: If an immune cell is in physical contact with a tumor cell, this parameter gives the rate at which they form an adhesive attachment. In any time interval $[t, t+\Delta t]$, the probability of adhering is $r_A \Delta t$. In [32], we set $r_A = 0.2$ min$^{-1}$, giving a mean time to attachment of 5 min.
Study values: 0.033 min$^{-1}$, 0.2 min$^{-1}$, 1.0 min$^{-1}$
Immune cell attachment lifetime $T_A$: An attached immune cell that has not successfully triggered tumor cell apoptosis will maintain its attachment for a mean time of $T_A$. In any time interval $[t, t+\Delta t]$, the probability of detachment is $\Delta t / T_A$. In [32], we set $T_A = 60$ min.
Study values: 15 min, 60 min, 120 min
Migration bias $b$: Unadhered immune cells choose a motility direction $\mathbf{d}$
$$ \mathbf{d} = \frac{(1-b) \mathbf{u} + b \frac{\nabla{c}}{\|\nabla c\|}} {\left\|(1-b) \mathbf{u} + b \frac{\nabla{c}}{\|\nabla c\|}\right\|}, $$
where $c$ is the immunostimulatory chemokine and $\mathbf{u}$ is a randomly oriented unit vector. Thus, $b=0$ represents pure Brownian motion, and $b=1$ represents deterministic chemotaxis along $\nabla c$; see [32]. We used a default bias $b=0.5$. (A minimal sketch of sampling this direction appears after this list.)
Study values: 0.25, 0.50, 0.75
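The following Python sketch samples the biased-migration direction defined above, assuming a precomputed chemokine gradient and a NumPy random generator; it is illustrative only and is not PhysiCell's motility code.

```python
import numpy as np

def migration_direction(grad_c, bias, rng):
    """Blend a random unit vector with the normalized chemokine gradient.

    bias = 0 gives pure Brownian motion; bias = 1 gives deterministic
    chemotaxis along grad_c, matching the formula in the text.
    """
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)                  # random unit vector
    g = grad_c / np.linalg.norm(grad_c)     # normalized gradient direction
    d = (1.0 - bias) * u + bias * g
    return d / np.linalg.norm(d)

rng = np.random.default_rng(1)
print(migration_direction(np.array([1.0, 0.0, 0.0]), bias=0.5, rng=rng))
```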
For each of these three parameters, we seek to investigate low, medium and high parameter values, giving a total of $3^3 = 27$ parameter combinations. Because the PhysiCell model is stochastic, we seek 5–10 simulations per parameter set, for a total of 135 to 270 simulations. The single sample simulation required approximately 2 days on a four-year-old desktop workstation, including time to save simulation outputs once every three simulated minutes. Thus, our simulation study—performed on a single desktop workstation—would require 270 to 540 days of continuous compute time. Prior to PhysiCell-EMEWS, such a simulation study would be computationally prohibitive.
Computational implementation
The parameter sweep implementation was generated using the EMEWS sweep template [52] which allows a user to create an EMEWS project customized for a parameter sweep from the command line. (Additional templates exist for creating ME projects that utilize R or Python ME algorithms.) The project consists of a standard directory structure for organizing model input, output, model launch scripts, and workflow code. The workflow code, implemented in Swift/T, takes as input a text file that explicitly defines all the parameter sets over which to sweep, one parameter set per line. The workflow iterates over each line in the file in parallel and launches a model for each parameter set, taking advantage of the available concurrency. For example, given n available processes, n models will be run concurrently. The workflow code can potentially modify the parameter sets, for example, generating additional experimental trials by creating multiple new sets from an existing set through the addition of random seeds. The workflow itself is launched from a bash script which contains placeholder values for HPC machine configuration (e.g., queue type, walltime, and so forth), and the parameter input file path. Models and scientific applications such as PhysiCell models are run as Swift/T app invocations. An app invocation calls out to the external shell to run a bash script that then launches the model. The model launch bash script provided by the EMEWS sweep template takes as arguments the parameter line and a unique directory in which to run the model. The script then runs the model in this directory, passing it the parameter line. It is also possible to run an application as an in-memory Swift/T extension.
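For example, a parameter file of the kind consumed by the sweep workflow could be generated with a short script like the one below, which crosses the three study values of each parameter and adds ten random seeds per combination. The column order and file name are illustrative choices of ours, not those prescribed by the EMEWS template.

```python
import itertools

attachment_rates = [0.033, 0.2, 1.0]     # r_A (1/min)
attachment_lifetimes = [15, 60, 120]     # T_A (min)
migration_biases = [0.25, 0.50, 0.75]    # b
replicates = 10

with open("cancer_immune_sweep.txt", "w") as f:
    for r_A, T_A, b in itertools.product(attachment_rates, attachment_lifetimes, migration_biases):
        for seed in range(replicates):
            # One parameter set per line: r_A, T_A, b, random seed.
            f.write(f"{r_A},{T_A},{b},{seed}\n")
# 27 parameter combinations x 10 seeds = 270 lines, matching the study size.
```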
For the experiments in this study, the parameter file contained 270 parameter sets. Each parameter set corresponded to a single model running in its own sandboxed directory. The experiments were performed on the Cray XE6 Beagle at the University of Chicago, hosted at Argonne National Laboratory. Beagle has 728 nodes, each with 2 AMD Opteron 6300 processors, each having 16 cores, for a total of 32 cores per node; the system thus has 23,296 cores in all. Each node has 64 GB of RAM. Each model was run on a single node, allowing for maximal use of the available threads and the full workflow utilized 272 nodes. 270 were used for model runs, providing complete concurrency while the remaining 2 were used for workflow execution. The workflow completed in 51 h for a total of 1632 core h.
Simulation results and clinical insights
Using PhysiCell-EMEWS, we initiated 270 simulations of 14 days of growth, followed by a week of immune response: 27 biophysical parameter sets, each with 10 random seeds. Because frequent data saves would significantly slow the simulations due to networked file I/O [32], we only saved the final simulation output for each run, along with SVG visualizations of the z=0 cross-section at intermediate times. Of the 270 requested simulations, 231 were completed in approximately 2 days; see the "Discussion" section for the runs that did not complete.
For each biophysical parameter set, we computed the mean number of live tumor cells remaining at 21 days for the 5-to-10 completed simulation replicates. In Fig. 5, we fix the attachment rate at $r_A = 0.2$ min$^{-1}$ and plot a heat map of this simulation metric versus the migration bias $b$ (horizontal axis) and attachment time $T_A$ (vertical axis)—along with representative tumor cross-sections (i, ii, iii, and iv)—at the final simulation time (21 days). Each shaded square represents the mean live tumor cell count for the $n$ simulation replicates (labeled on each square) for a particular parameter set, shaded from deep blue (lowest cell count; most effective response) to bright yellow (highest cell count; least effective response).
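A heat map of this kind can be reproduced from the per-run outputs with a simple aggregation, as in the sketch below. The CSV file name and column names are hypothetical; it assumes the final live tumor cell count of each completed run has already been collected into a table.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical columns: migration_bias, attachment_lifetime, attachment_rate, live_cells
runs = pd.read_csv("final_counts.csv")
subset = runs[runs["attachment_rate"] == 0.2]

# Mean live tumor cell count over the completed replicates of each parameter set.
heat = subset.pivot_table(index="attachment_lifetime", columns="migration_bias",
                          values="live_cells", aggfunc="mean")

plt.imshow(heat.values, origin="lower", cmap="viridis")
plt.xticks(range(len(heat.columns)), heat.columns)
plt.yticks(range(len(heat.index)), heat.index)
plt.xlabel("migration bias b")
plt.ylabel("attachment lifetime $T_A$ (min)")
plt.colorbar(label="mean live tumor cells at 21 days")
plt.show()
```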
High-throughput 3-D cancer-immune simulation: impact of migration bias and attachment lifetime. We plot a heatmap for final live tumor cell count (blue is lowest, or most effective immune response; yellow is worst immune response) for varied migration bias (horizontal axis) and immune cell attachment lifetime (vertical axis). Characteristic final tumor cross sections are labeled i-iv. In particular, decreasing migration bias improves the response
For all values of $T_A$, decreasing the migration bias (and thus decreasing homing along the immunostimulatory gradient) dramatically improved the immune response. This result was slightly non-intuitive, as it suggests that the efficiency and precision of chemotaxis, if maximized, leads to an "overshoot" phenomenon that actually works against the goal of increasing tumor-immune cell mixing, an important factor in the ability to kill tumor cells noted in [32]. Alternatively, for any fixed migration bias $b$, increasing the attachment lifetime also improved the immune response as would be expected, although increases beyond 60 min were only marginally helpful. However, these results demonstrate the need to account for different axes of effect in any attempt to optimize towards a particular goal (e.g., a therapeutic design goal of maximizing tumor-immune cell mixing to increase tumor cell killing).
In Fig. 6, we show a heat map for the mean live tumor cell count at 21 days versus migration bias $b$ (horizontal axis) and the attachment rate $r_A$ (vertical axis). For all values of $b$, increasing the attachment rate improved the response, although the improvement beyond 0.2 min$^{-1}$ was marginal. Interestingly, for a fixed attachment rate $r_A$, the impact of $b$ was non-monotonic. Either decreasing $b$ (to promote random tumor-immune mixing) or increasing $b$ (to allow more directed cell migration) would improve the immune response over the initial value of 0.5. This again highlights the nonlinear nature of tumor-immune interactions, and the need for high-throughput investigation of mechanistic 3-D models to systematically probe these dynamics and identify trade-offs that need to be accounted for when designing putative therapies.
High-throughput 3-D cancer-immune simulation: impact of migration bias and attachment rate. We plot a heatmap for final live tumor cell count (blue is lowest, or most effective immune response; yellow is worst immune response) for varied migration bias (horizontal axis) and immune cell attachment rate (vertical axis). Characteristic final tumor cross sections are labeled i-iv. The impact of both parameters was nonlinear
In Fig. 7, we show a heat map for the mean live tumor cell count at 21 days versus the attachment rate $r_A$ (horizontal axis) and the attachment lifetime $T_A$ (vertical axis), with $b=0.5$. For all attachment lifetimes $T_A$, increasing the attachment rate improved the immune response, as expected. However, for higher attachment rates $r_A$, there was an interesting trend towards bimodal optima when examining the impact of the attachment lifetime: increasing the attachment lifetime from the medium (1 h) to high (2 h) value improved the treatment response, possibly by increasing the likelihood of a successful apoptosis event for any tumor-immune cell-cell attachment. However, decreasing the attachment lifetime from medium (1 h) to short (15 min) also improved the response, likely by increasing the number of tumor-immune cell-cell attachments. This demonstrates that the highly nonlinear dynamics of the cancer-immune interactions can admit many potential therapeutic strategies, some of which may be non-intuitive. Additional simulations are planned to determine whether this is an artifact of low replicate numbers, or represents an actual non-normal distribution in the dynamic range of these parameters.
High-throughput 3-D cancer-immune simulation: impact of attachment rate and attachment lifetime. We plot a heatmap for final live tumor cell count (blue is lowest, or most effective immune response; yellow is worst immune response) for varied immune cell attachment rate (horizontal axis) and immune cell attachment lifetime (vertical axis). Characteristic final tumor cross sections are labeled i-iv. The impact of both parameters was nonlinear
Despite the prototyping nature of these simulation experiments, we believe that there are important insights that can be gained from these results. Most significant is substantiation of the general belief that multi-dimensional, nonlinear systems can lead to some non-intuitive results. In the context of cancer immunology, we found that reducing chemotactic efficiency (reducing attraction bias) can actually be beneficial in terms of achieving an intermediate goal (tumor-immune cell mixing) that improves the functional output (tumor suppression). Additionally, these results, while qualitative in nature, suggest that many immunotherapy design parameters have threshold values, beyond which further refinements give little or no clinical benefit.
The identification of seeming thresholds for therapeutic parameters such as attachment duration and rate suggests that higher resolution models may be used to identify boundary conditions for future wet lab experimental investigations, which in turn can be used to refine the computational models in exactly the type of iterative workflow envisioned in Fig. 2. At some point, the results from this workflow will aid in "pre-screening" potential research spending priorities away from target goals where further improvements (i.e., to speed up the attachment rate or increase the attachment lifetime) would not improve the immune response. In cases of non-monotonic system behavior (e.g., where either high or low migration bias can lead to treatment success, whereas intermediate migration bias yields a poorer outcome), high-throughput model investigations may be all the more critical to identifying robust treatment designs with more reliable patient outcome.
While the current model yielded fresh insights on cancer-immune interactions, further refinements are needed to unlock its full potential. In future work, we plan to integrate and explore other key features of the immune system, such as inflammatory responses, cross-talk between different immune cell types, and molecular-level mechanisms for MHC function and immune-mediated cancer cell apoptosis [21, 47, 48, 53]. The models also need extension to directly model new treatments such as the role of PD-1 and PD-L1 in CAR T-cell therapies [19, 20]. In our next steps, we will extend the modeling framework to incorporate these effects, and import it into the EMEWS framework. We will start exploring the emergent tumor response to immune therapy under a variety of immune cell hypotheses and cancer phenotypes. Ultimately, we will generate hypotheses that elucidate the most and least ideal patient characteristics for immunotherapies.
In our pilot work to date, we have run a single iteration of the hypothesis testing loop; our next step is to complete the loop and iteratively optimize the treatment response over the current "design" parameters (attachment lifetime, migration bias, and attachment rate). This should yield testable hypotheses on immune system conditions for effective and ineffective tumor suppression. We also plan separate cancer hypothesis investigations in the PhysiCell-EMEWS framework. In ongoing breast cancer projects, we are evaluating families of cell-cell interaction hypotheses for "leader cells" (highly motile, less proliferative) and "follower cells" (less motile, more proliferative) that best explain time series morphologic data [54]. This work will further test the potential of PhysiCell-EMEWS to not merely explore large parameter spaces, but to optimally match hypotheses to experimental observations. We would then develop independent experiments to validate or refine the optimal hypotheses.
We note that the generalized description of hypotheses is not yet mature. Standards have emerged to describe molecular-scale systems biology (generally systems of ODEs) as SBML [55], and more recently to express multicellular biology as MultiCellDS, but cell-cell interaction rules will likely require a different description, such as by using elements of the Cell Behavior Ontology [56].
PhysiCell-EMEWS's computational performance could be further improved. In particular, the diffusion solver (BioFVM) is well-suited to leveraging GPU resources available on today's typical HPC/HTC compute nodes using, for example, OpenACC, CUDA, or OpenCL. Scientifically, complex molecular-scale systems biology is typically written as SBML (systems biology markup language) models, and so to integrate these into high throughput multiscale mechanistic hypothesis testing, we plan to implement an SBML model integrator, such as the cross-platform libRoadRunner platform [57].
Lastly, we note that there were other benefits to combining PhysiCell and EMEWS to run a large number of simulations: we estimate that the cancer-immune investigation included on the order of 1 to 100 billion calls to the tumor-immune mechanical and biochemical interaction codes. This allowed us to "stress test" PhysiCell and identify rare bugs for future code releases. Thirty-nine simulations in our investigation terminated prematurely due to rare events, such as multiple immune cells attempting to apoptose the same tumor cell, or a tumor cell necrosing while still attached to an immune cell; these rare events removed dead cells from memory while memory pointers were still in active use, occasionally causing segmentation faults. Without high-throughput simulation investigations (which included over a year of compute time), these bugs would likely remain undetected and unfixed for years. We anticipate that other open source computational biology projects could similarly benefit from high-throughput testing in EMEWS.
We have demonstrated a 3-D mechanistic tumor-immune interaction model (and more generally, a mechanistic agent-based cancer modeling platform, using PhysiCell) that has an appropriate balance of flexibility, efficiency, and realism for efficient single simulations, that predict the emergent systems behaviors for a given set of cancer hypotheses. It is self-contained code (can be distributed as a ZIP file) enabling very simple deployment.
We have shown how a previously-developed extreme-scale model exploration and optimization platform (EMEWS) can compatibly deploy PhysiCell for model exploration in high throughput. We have outlined the overall platform to perform high-throughput hypothesis testing using PhysiCell and EMEWS, and we gave an early example on a simple (but spatially nontrivial) model system of hypoxic tumor growth. We then demonstrated PhysiCell-EMEWS with a large parameter space investigation of a mechanistic 3-D cancer-immune model, obtaining significant and non-intuitive insights on immune cell homing and adhesion dynamics that would not have been feasible without HTC. The next natural step is to iterate past this first investigation and find therapeutic design optima that maximize tumor regression; this would represent a full test of PhysiCell-EMEWS as a hypothesis optimization tool.
Cancer biology—particularly cancer-immune interactions—occurs in complex dynamical, multiscale systems that frequently yield surprising emergent behaviors that can impair treatment. High-throughput model investigation and hypothesis testing affords a new paradigm to attacking these complex problems, gaining new insights, and improving cancer treatment strategies.
We close by noting that this framework has applications beyond cancer. In general, testing multiscale hypotheses in high throughput is valuable in determining the rules underlying (often puzzling) experimental data, and even to evaluate the limitations of experiments themselves [29, 30]. The PhysiCell-EMEWS system could be used as a multicellular design tool: for any given multicellular design including single-cell and cell-cell interaction rules (which map onto hypotheses in this framework), PhysiCell-EMEWS can test the emergent multicellular behavior against the target behavior (the design goal), and iteratively tune the cell rules to achieve the design goal. In [32], we began to design cell-cell interaction rules to create a multicellular cargo delivery system to actively deliver a cancer therapeutic beyond regular drug transport limits to hypoxic cancer regions. In that work, we manually tuned the model rules to achieve this (as yet unoptimized) design objective, requiring weeks of people-hours to configure, code, test, visualize, and evaluate. Integrating such problems into a high-throughput design testing system such as PhysiCell-EMEWS would be of clear benefit.
ABM (agent-based model) :
A computational model focused on independent (but often interacting) software agents
BLAS (basic linear algebra subroutine) :
Fundamental linear operators, such as linear addition of vectors
CAR (chimeric antigen receptor) :
A type of engineered receptor (usually on T cells) binding a tailored specificity to an effector immune cell
EMEWS (Extreme-scale Model Exploration with Swift) :
A framework for model exploration using the Swift/T parallel scripting language
HPC (high performance computing) :
Solution of large and complex problems by parallelization over networked computers, generally supercomputers
HTC (high throughput computing) :
The use of many computing resources over long periods of time (not necessarily linked to a high-speed network as in HPC)
LOD (locally one-dimensional) :
A method for numerically solving partial differential equations based upon solving lower-dimensional problems
ME (model exploration) :
Combinatorial mixing of a model's parameter values
MHC (major histocompatibility complex) :
A cell surface molecule used by immune cells to identify foreign cells
ODE (ordinary differential equation) :
An equation relating a function of a single independent variable to its derivatives
PDE (partial differential equation) :
An equation relating a function of several independent variables to its partial derivatives
Deisboeck TS, Wang Z, Macklin P, Cristini V. Multiscale cancer modeling. Annu Rev Biomed Eng. 2011; 13:127–55. https://doi.org/10.1146/ANNUREV-BIOENG-071910-124729 (invited author: T.S. Deisboeck).
Lowengrub J, Frieboes HB, Jin F, Chuang Y-L, Li X, Macklin P, Wise SM, Cristini V. Nonlinear modeling of cancer: Bridging the gap between cells and tumors. Nonlinearity. 2010; 23(1):1–91. https://doi.org/10.1088/0951-7715/23/1/R01. (invited author: J. Lowengrub).
Kam Y, Rejniak KA, Anderson AR. Cellular modeling of cancer invasion: integration of in silico and in vitro approaches. J Cell Physiol. 2012; 227:431–8. https://doi.org/10.1002/jcp.22766.
Rejniak KA, Anderson AR. State of the art in computational modelling of cancer. Math Med Biol. 2012; 29:1–2. https://doi.org/10.1093/imammb/dqr029.
Macklin P, Frieboes HB, Sparks JL, Ghaffarizadeh A, Friedman SH, Juarez EF, Jonckheere E, Mumenthaler SM. In: Rejniak KA, (ed).Progress Towards Computational 3-D Multicellular Systems Biology, vol. 936. Bern: Springer; 2016, pp. 225–46. https://doi.org/10.1007/978-3-319-42023-3_12. Chap. 12. (invited author: P. Macklin).
Macklin P. Biological background. In: V. Cristini and J.S. Lowengrub, Multiscale Modeling of Cancer: An Integrated Experimental and Mathematical Modeling Approach. Cambridge: Cambridge University Press: 2010. p. 8–23. https://doi.org/10.1017/CBO9780511781452.003. Chap. 2. (invited author: P. Macklin).
Xiong G, Feng M, Yang G, Zheng S, Song X, Cao Z, et al. The underlying mechanisms of non-coding RNAs in the chemoresistance of pancreatic cancer. Cancer Lett. 2017; 397:94–102. https://doi.org/10.1016/j.canlet.2017.02.020.
Sakthivel KM, Hariharan S. Regulatory players of DNA damage repair mechanisms: Role in Cancer Chemoresistance. Biomed Pharmacother. 2017; 93:1238–45. https://doi.org/10.1016/j.biopha.2017.07.035.
Decker JT, Hobson EC, Zhang Y, Shin S, Thomas AL, Jeruss JS, Arnold KB, Shea LD. Systems analysis of dynamic transcription factor activity identifies targets for treatment in olaparib resistant cancer cells. Biotech Bioeng. 2017; 114(9):2085–95. https://doi.org/10.1002/bit.26293.
Martinez-Cardus A, Vizoso M, Moran S, Manzano JL. Epigenetic mechanisms involved in melanoma pathogenesis and chemoresistance. Ann Transl Med. 2015; 3:209. https://doi.org/10.3978/j.issn.2305-5839.2015.06.20.
Abdullah LN, Chow EK. Mechanisms of chemoresistance in cancer stem cells. Clin Transl Med. 2013; 2:3. https://doi.org/10.1186/2001-1326-2-3.
Kerbel R, Folkman J. Clinical translation of angiogenesis inhibitors. Nat Rev Cancer. 2002; 2:727–39. https://doi.org/10.1038/nrc905.
Kindler HL, Niedzwiecki D, Hollis D, Sutherland S, Schrag D, Hurwitz H, et al. Gemcitabine plus bevacizumab compared with gemcitabine plus placebo in patients with advanced pancreatic cancer: Phase III trial of the cancer and leukemia group b (CALGB 80303). J Clin Oncol. 2010; 28:3617–22. https://doi.org/10.1200/JCO.2010.28.1386.
Keunen O, Johansson M, Oudin A, Sanzey M, Rahim SA, Fack F, et al. Anti-VEGF treatment reduces blood supply and increases tumor cell invasion in glioblastoma. Proc Natl Acad Sci U S A. 2011; 108:3749–54. https://doi.org/10.1073/pnas.1014480108.
McIntyre A, Harris AL. Metabolic and hypoxic adaptation to anti-angiogenic therapy: a target for induced essentiality. EMBO Mol Med. 2015; 7:368–79. https://doi.org/10.15252/emmm.201404271.
Diel IJ, Solomayer EF, Costa SD, Gollan C, Goerner R, Wallwiener D, et al. Reduction in new metastases in breast cancer with adjuvant clodronate treatment. N Engl J Med. 1998; 339:357–63. https://doi.org/10.1056/NEJM199808063390601.
Oades GM, Coxon J, Colston KW. The potential role of bisphosphonates in prostate cancer. Prostate Cancer Prostatic Dis. 2002; 5:264–72. https://doi.org/10.1038/sj.pcan.4500607.
Mathew A, Brufsky A. Decreased risk of breast cancer associated with oral bisphosphonate therapy. Breast Cancer (Dove Med Press). 2012; 4:75–81. https://doi.org/10.2147/BCTT.S16356.
Holzinger A, Barden M, Abken H. The growing world of CAR T cell trials: a systematic review. Cancer Immunol Immunother. 2016; 65:1433–50. https://doi.org/10.1007/s00262-016-1895-5.
Haji-Fatahaliha M, Hosseini M, Akbarian A, Sadreddini S, Jadidi-Niaragh F, Yousefi M. CAR-modified T-cell therapy for cancer: an updated review. Artif Cells Nanomed Biotechnol. 2016; 44:1339–49. https://doi.org/10.3109/21691401.2015.1052465.
Mellman I, Coukos G, Dranoff G. Cancer immunotherapy comes of age. Nature. 2011; 480:480–9. https://doi.org/10.1038/nature10673.
Robbins PF, Morgan RA, Feldman SA, Yang JC, Sherry RM, Dudley ME, et al. Tumor regression in patients with metastatic synovial cell sarcoma and melanoma using genetically engineered lymphocytes reactive with NY-ESO-1. J Clin Oncol. 2011; 29:917–24. https://doi.org/10.1200/JCO.2010.32.2537.
Li Y, Jiang F, Lv X, Zhang R, Lu A, Zhang G. A mini-review for cancer immunotherapy: Molecular understanding of PD-1/PD-L1 pathway & translational blockade of immune checkpoints. Int J Mol Sci. 2016; 17. https://doi.org/10.3390/ijms17071151.
Pardoll DM. The blockade of immune checkpoints in cancer immunotherapy. Nat Rev Cancer. 2012; 12:252–64. https://doi.org/10.1038/nrc3239.
Fridman WH, Pages F, Sautes-Fridman C, Galon J. The immune contexture in human tumours: impact on clinical outcome. Nat Rev Cancer. 2012; 12:298–306. https://doi.org/10.1038/nrc3245.
de Visser KE, Eichten A, Coussens LM. Paradoxical roles of the immune system during cancer development. Nat Rev Cancer. 2006; 6:24–37. https://doi.org/10.1038/nrc1782.
Materi W, Wishart DS. Computational Systems Biology in Cancer: Modeling Methods and Applications. Gene Regul Syst Biol. 2007; 1:91–110. https://doi.org/10.1177/117762500700100010.
An G. Introduction of an agent-based multi-scale modular architecture for dynamic knowledge representation of acute inflammation. Theor Biol Med Model. 2008; 5:11. https://doi.org/10.1186/1742-4682-5-11.
Vodovotz Y, An G. Translational Systems Biology: Concepts and Practice for the Future of Biomedical Research. 1st ed.Boston: Academic Press; 2014. https://www.sciencedirect.com/book/9780123978844.
An G. Closing the Scientific Loop: Bridging Correlation and Causality in the Petaflop Age. Sci Transl Med. 2010; 2(41):41ps34. https://doi.org/10.1126/scitranslmed.3000390.
Ozik J, Collier NT, Wozniak JM, Spagnuolo C. From desktop to large-scale model exploration with Swift/T. In: Proceedings of the 2016 Winter Simulation Conference, WSC '16. Piscataway: IEEE Press: 2016. p. 206–20. http://dl.acm.org/citation.cfm?id=3042094.3042132.
Ghaffarizadeh A, Heiland R, Friedman SH, Mumenthaler SM, Macklin P. PhysiCell: an open source physics-based cell simulator for 3-D multicellular systems. PLoS Comput Biol. 2018; 14(2). https://doi.org/10.1371/journal.pcbi.1005991.
Ghaffarizadeh A, Friedman SH, Macklin P. BioFVM: an efficient, parallelized diffusive transport solver for 3-D biological simulations. Bioinformatics. 2016; 32(8):1256–8. https://doi.org/10.1093/bioinformatics/btv730.
Zhang L, Athale CA, Deisboeck TS. Development of a three-dimensional multiscale agent-based tumor model: simulating gene-protein interaction profiles, cell phenotypes and multicellular patterns in brain cancer. J Theor Biol. 2007; 244(1):96–107. https://doi.org/10.1016/j.jtbi.2006.06.034.
Macklin P, Edgerton ME, Thompson AM, Cristini V. Patient-calibrated agent-based modelling of ductal carcinoma in situ (DCIS): From microscopic measurements to macroscopic predictions of clinical progression. J Theor Biol. 2012; 301:122–40. https://doi.org/10.1016/j.jtbi.2012.02.002.
Figueredo GP, Siebers P-O, Aickelin U. Investigating mathematical models of immuno-interactions with early-stage cancer under an agent-based modelling perspective. BMC Bioinformatics. 2013; 14(6):6. https://doi.org/10.1186/1471-2105-14-S6-S6.
Rejniak KA, Anderson ARA. Hybrid models of tumor growth. Wiley Interdiscip Rev Syst Biol Med. 2011; 3(1):115–25. https://doi.org/10.1002/wsbm.102.
Marchuk GI. Splitting and alternating direction methods In: Ciarlet PG, Lions JL, editors. Handbook of Numerical Analysis, vol. 1. Elsevier Science Publishers B.V.: 1990. p. 197–462. https://doi.org/10.1016/S1570-8659(05)80035-3.
Yanenko NN. Simple Schemes in Fractional Steps for the Integration of Parabolic Equations In: Holt M, editor. The Method of Fractional Steps. Springer: 1971. p. 17–41. https://doi.org/10.1007/978-3-642-65108-3_2.
Thomas LH. Elliptic Problems in Linear Difference Equations over a Network. In: Watson Sci Comput Lab Report. New York: Columbia University: 1949.
Friedman SH, Anderson ARA, Bortz DM, Fletcher AG, Frieboes HB, Ghaffarizadeh A, Grimes DR, Hawkins-Daarud A, Hoehme S, Juarez EF, Kesselman C, Merks RMH, Mumenthaler SM, Newton PK, Norton K-A, Rawat R, Rockne RC, Ruderman D, Scott J, Sindi SS, Sparks JL, Swanson K, Agus DB, Macklin P. MultiCellDS: a standard and a community for sharing multicellular data. bioRxiv. 2016; 090696. https://doi.org/10.1101/090696.
Friedman SH, Anderson ARA, Bortz DM, Fletcher AG, Frieboes HB, Ghaffarizadeh A, Grimes DR, Hawkins-Daarud A, Hoehme S, Juarez EF, Kesselman C, Merks RMH, Mumenthaler SM, Newton PK, Norton K-A, Rawat R, Rockne RC, Ruderman D, Scott J, Sindi SS, Sparks JL, Swanson K, Agus DB, Macklin P. MultiCellDS: a community-developed standard for curating microenvironment-dependent multicellular data. bioRxiv. 2016; 090456. https://doi.org/10.1101/090456.
Holland JH. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, Mass: A Bradford Book; 1992.
Settles B. Active learning. Synth Lect Artif Intell Mach Learn. 2012; 6:1–114. https://doi.org/10.2200/S00429ED1V01Y201207AIM018.
Cevik M, Ergun MA, Stout NK, Trentham-Dietz A, Craven M, Alagoz O. Using Active Learning for Speeding up Calibration in Simulation Models. Med Dec Making. 2016; 36:581–93. https://doi.org/10.1177/0272989X15611359.
Wozniak JM, Armstrong TG, Wilde M, Katz DS, Lusk E, Foster IT. Swift/T: Large-Scale Application Composition via Distributed-Memory Dataflow Processing. In: 2013 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing: 2013. p. 95–102. https://doi.org/10.1109/CCGrid.2013.99.
Khanna R. Tumour surveillance: Missing peptides and mhc molecules. Immunol Cell Biol. 1998; 76(1):20–6. https://doi.org/10.1046/j.1440-1711.1998.00717.x.
Comber JD, Philip R. Mhc class i antigen presentation and implications for developing a new generation of therapeutic vaccines. Ther Adv Vaccines. 2014; 2(3):77–89. https://doi.org/10.1177/2051013614525375.
Macklin P, Mumenthaler S, Lowengrub J. Modeling multiscale necrotic and calcified tissue biomechanics in cancer patients: application to ductal carcinoma in situ (DCIS) In: Gefen A., editor. Multiscale Computer Modeling in Biomechanics and Biomedical Engineering. Berlin, Germany: Springer: 2013. p. 349–80. https://doi.org/10.1007/8415_2012_150 Chap. 13. (invited author: P. Macklin).
Gatenby RA, Smallbone K, Maini PK, Rose F, Averill J, Nagle RB, et al. Cellular adaptations to hypoxia and acidosis during somatic evolution of breast cancer. Br J Cancer. 2007; 97:646–53. https://doi.org/10.1038/sj.bjc.6603922.
McKeown SR. Defining normoxia, physoxia and hypoxia in tumours—implications for treatment response. Br J Radiology. 2014; 87:20130676. https://doi.org/10.1259/bjr.20130676.
EMEWS: Extreme-scale Model Exploration with Swift. http://emews.org Accessed 28 Dec 2017.
Ichim CV. Revisiting immunosurveillance and immunostimulation: Implications for cancer immunotherapy. J Transl Med. 2005; 3(1):8. https://doi.org/10.1186/1479-5876-3-8.
Cheung K, Gabrielson E, Werb Z, Ewald A. Cell. 2013; 155(7):1639–51. https://doi.org/10.1016/j.cell.2013.11.029.
Hucka M, Finney A, Sauro HM, Bolouri H, Doyle JC, Kitano H, Arkin AP, Bornstein BJ, Bray D, Cornish-Bowden A, Cuellar AA, Dronov S, Gilles ED, Ginkel M, Gor V, Goryanin II, Hedley WJ, Hodgman TC, Hofmeyr J-H, Hunter PJ, Juty NS, Kasberger JL, Kremling A, Kummer U, Le Novère N, Loew LM, Lucio D, Mendes P, Minch E, Mjolsness ED, Nakayama Y, Nelson MR, Nielsen PF, Sakurada T, Schaff JC, Shapiro BE, Shimizu TS, Spence HD, Stelling J, Takahashi K, Tomita M, Wagner J, Wang J, SBML Forum. The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics. 2003; 19(4):524–31. https://doi.org/doi:10.1093/bioinformatics/btg015.
Sluka JP, Shirinifard A, Swat M, Cosmanescu A, Heiland RW, Glazier JA. The cell behavior ontology: describing the intrinsic biological behaviors of real and model cells seen as active agents. Bioinformatics. 2014; 30(16):2367–74. https://doi.org/10.1093/bioinformatics/btu210.
Somogyi ET, Bouteiller J-M, Glazier JA, König M, Medley JK, Swat MH, Sauro HM. libRoadRunner: a high performance SBML simulation and analysis library. Bioinformatics. 2015; 31(20):3315–21. https://doi.org/10.1093/bioinformatics/btv363.
Macklin P, Heiland R. MathCancer/PhysiCell-EMEWS: 1.0.3 - PhysiCell-EMEWS method paper. 2018. https://doi.org/10.5281/zenodo.1163558. Accessed 31 Jan 2018.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, under contract number DE-AC02-06CH11357. This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. We thank the Breast Cancer Research Foundation, the Jayne Koskinas Ted Giovanis Foundation for Health and Policy, the National Institutes of Health (R01GM115839, R01CA180149, S10OD018495), the Department of Energy (National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123 and from Lawrence Livermore National Laboratory under Award #B616283), and the National Science Foundation (1720625) for generous support. The publication fee was supported by funding from the Breast Cancer Research Foundation and the Jayne Koskinas Ted Giovanis Foundation for Health and Policy. The funding bodies had no role in the design or conclusions of the study.
All simulation source code and scripts for execution and analysis for this project (including data generation) are available at https://github.com/MathCancer/PhysiCell-EMEWS and at [58].
About this supplement
This article has been published as part of BMC Bioinformatics Volume 19 Supplement 18, 2018: Selected Articles from the Computational Approaches for Cancer at SC17 workshop. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-19-supplement-18.
Argonne National Laboratory, Argonne, IL, USA
Jonathan Ozik, Nicholson Collier, Justin M. Wozniak & Charles Macal
Dept. of Surgery, University of Chicago, Chicago, IL, USA
Chase Cockrell & Gary An
Opto-Knowledge Systems, Inc., Torrance, CA, USA
Samuel H. Friedman
Lawrence J. Ellison Center for Transformative Medicine, University of Southern California, Los Angeles, CA, USA
Ahmadreza Ghaffarizadeh
Intelligent Systems Engineering, Indiana University, Bloomington, IN, USA
Randy Heiland & Paul Macklin
Jonathan Ozik
Nicholson Collier
Justin M. Wozniak
Charles Macal
Chase Cockrell
Randy Heiland
Gary An
Paul Macklin
GA and PM initiated and designed the overall PhysiCell-EMEWS project. SHF, AG, and PM designed and developed the original PhysiCell software. PM developed the cancer-immune model. JO, NC, JMW, CM, CC and GA designed, developed and executed the EMEWS framework for this project. RH and PM analyzed the resulting data and provided the figures. GA and PM obtained funding. All authors contributed to, read, and approved the final version of this manuscript.
Correspondence to Paul Macklin.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Ozik, J., Collier, N., Wozniak, J. et al. High-throughput cancer hypothesis testing with an integrated PhysiCell-EMEWS workflow. BMC Bioinformatics 19, 483 (2018). https://doi.org/10.1186/s12859-018-2510-x
DOI: https://doi.org/10.1186/s12859-018-2510-x
Agent-based model
PhysiCell
High throughput computing
EMEWS
|
CommonCrawl
|
Area beneath $y=x$ from $-\infty$ to $\infty$
$$\int_{-\infty}^{\infty}x\,dx$$ According to my teacher, this improper integral diverges because "if one or both integrals diverge, the entire integral diverges." Evaluating it as a limit, however, it seems to cancel out and give $0$.
I understand this gives an indeterminate form, and that generally, it is incorrect to "cancel out infinity," but an indeterminate form doesn't automatically mean the integral can't be assigned a value, and here it seems to come out to $0$. Some intuition behind this conclusion also lies in the fact that the two sides grow at the same rate and cancel, so it seems obvious that the signed area goes to $0$.
If it truly does diverge, then to what? It seems absurd to say that it blows up to $\pm\infty$.
calculus improper-integrals
edited Mar 7 at 5:36
let's have a breakdown
asked Mar 7 at 3:23
Roshan
$\begingroup$ odd function, it should be $0$ $\endgroup$ – Vasya Mar 7 at 3:31
$\begingroup$ You may find the answers here helpful. $\endgroup$ – jmerry Mar 7 at 3:36
$\begingroup$ Basically you have answered your own question: "generally, it is incorrect to cancel out infinity". Except for one thing: actually, it is always incorrect to cancel out infinity (when working in the real numbers), even though sometimes it will give you the right answer by accident. $\endgroup$ – David Mar 7 at 3:38
$\begingroup$ @Vasya: That's not the definition of the improper integral. An improper Riemann integral over the entire real line exists if and only if each of the integrals on $(-\infty,c]$ and on $[c,\infty)$ exist, for an arbitrary $c$, which requires two limits to exist. Here, neither of those limits exists. $\endgroup$ – Arturo Magidin Mar 7 at 3:40
$\begingroup$ Although the improper integral diverges, it has a Cauchy principal value of $0$. $\endgroup$ – Robert Israel Mar 7 at 3:59
The issue in assigning a value to $$ \int_{-\infty}^\infty x\,\mathrm{d}x $$ really involves what it means for a function $f$ to be integrable. Unfortunately, there is no short complete answer to this question. In the Lebesgue theory of integration, a function $f : \mathbb{R} \to \mathbb{R}$ is said to be integrable on $\mathbb{R}$ if $$ \int_{-\infty}^{\infty} \left\vert f(x)\right\vert\mathrm{d}x < \infty. $$ (Here we are ignoring any assumptions that are required for the above to make sense. In any case, these will always be satisfied for a continuous function and, in particular, your function $f(x) =x$.)
The requirement that $\int_{-\infty}^\infty |f|\mathrm{d}x < \infty$ is equivalent to asking that both $$ \int_{-\infty}^{\infty} f_+(x)\,\mathrm{d}x < \infty \quad \text{and} \quad \int_{-\infty}^{\infty} f_-(x)\,\mathrm{d}x < \infty $$ where $f_{+}(x) = \max(f(x), 0)$ and $f_-(x) = \max(-f(x),0)$. If $f$ is integrable according to the definition above, we then define \begin{equation}\label{eq:star}\tag{$\star$} \int_{-\infty}^\infty f(x)\,\mathrm{d}x = \int_{-\infty}^\infty f_+(x)\,\mathrm{d}x - \int_{-\infty}^\infty f_-(x)\,\mathrm{d}x \end{equation} which will be a finite number.
Clearly, the function $f(x) = x$ does not satisfy these hypotheses because \begin{align*} \int_{-\infty}^{\infty} f_+(x)\,\mathrm{d}x = \int_{0}^\infty x\,\mathrm{d}x = \infty. \end{align*}
Now, we go through all of this trouble to ensure that we never end up writing something along the lines of $\infty - \infty$ in \eqref{eq:star}, which cannot be made sense of.
However, as you have observed, something interesting happens with $f(x)=x$. For each $\alpha > 0$, you have shown that $$ \int_{-\alpha}^\alpha x\,\mathrm{d}x = \frac{\alpha^2 - \alpha^2}{2} = 0. $$ Hence, the limit $$ \lim_{\alpha \to \infty} \int_{-\alpha}^\alpha x\,\mathrm{d}x = 0 $$ exists and is well defined. This means that the number given by \begin{equation}\label{eq:dagger}\tag{$\dagger$} \int_{-\infty}^\infty x\,\mathrm{d}x \stackrel{?}{=} \lim_{\alpha \to \infty} \int_{-\alpha}^\alpha x\,\mathrm{d}x = 0 \end{equation} exists. Thus, the integral $\int_{-\infty}^\infty x\,\mathrm{d}{x}$ only exists in the improper sense (in this case, we are forced to use the Cauchy principal value as our definition of the improper integral). In other words, $\int_{-\infty}^\infty x\,\mathrm{d}{x}$ should be interpreted as an improper integral (and even then, we need the Cauchy principal value). Although these do not make much sense in the Lebesgue sense (in which we require that $|f|$ be integrable), there are theories of integration that deal with these improper integrals (see the Gauge integral, for instance).
Short answer: Whether or not $\int_{-\infty}^\infty x\,\mathrm{d}x$ exists as an integral depends on the context. It does not exist as a Lebesgue (or Riemann) integral, but it does exist if you want to talk specifically about the value $$ \lim_{\alpha \to \infty} \int_{-\alpha}^\alpha x\,\mathrm{d}x $$
Edit: We also point out that the expression in \eqref{eq:dagger} would make a "bad" definition for an integral. To see why, consider first a (Lebesgue) integrable function $f : \mathbb{R} \to \mathbb{R}$. Then, $f$ will also be integrable on any interval $[a,b] \subset \mathbb{R}$. In fact, $f$ will be integrable on any interval of the form $(c,\infty)$. Moreover, the following additive rule would hold: $$ \int_{-\infty}^\infty f(x)\,\mathrm{d}x = \int_{-\infty}^c f(x)\,\mathrm{d}x + \int_{c}^\infty f(x)\,\mathrm{d}x. $$ Now, both of these properties are to be expected of an integral (after all, they are fundamental and very intuitive properties). However, despite existing as a limit, the "integral" $\int_{-\infty}^\infty x\,\mathrm{d}x$ fails both of these properties. Indeed, $$ \int_{c}^\infty x\,\mathrm{d}x = \infty \quad \text{and} \quad \int_{-\infty}^c x\,\mathrm{d}x = - \infty $$ for every $c \in \mathbb{R}$. Consequently, the additive rule $$ \int_{-\infty}^\infty x\,\mathrm{d}x \stackrel{?}{=} \int_{-\infty}^c x\,\mathrm{d}x + \int_{c}^\infty x\,\mathrm{d}x $$ also fails. In short, incorporating \eqref{eq:dagger} into our definition of the integral would cause us to lose many of the nice properties the integral satisfies. So, although we can partially avoid having $\infty - \infty$ in this case, we still end up breaking several familiar properties the integral should satisfy.
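For readers who like to double-check with a computer algebra system, here is a small sympy sketch confirming both calculations above: the symmetric limit is $0$, while each one-sided piece diverges, so the improper (Riemann/Lebesgue) integral itself does not exist.

```python
import sympy as sp

x = sp.Symbol('x')
a = sp.Symbol('a', positive=True)

# Cauchy principal value: the symmetric limit exists and equals 0
symmetric = sp.limit(sp.integrate(x, (x, -a, a)), a, sp.oo)

# The one-sided pieces both diverge, so the improper integral does not exist
upper = sp.integrate(x, (x, 0, sp.oo))    # oo
lower = sp.integrate(x, (x, -sp.oo, 0))   # -oo

print(symmetric, upper, lower)  # 0 oo -oo
```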
answered Mar 7 at 3:41
rolandcyp
$\begingroup$ Okay, thanks, this makes sense, but I am still not completely understanding the purpose of these distinctions. Also, why do we at all "go through the trouble of avoiding $\infty - \infty$," in a case such as this where it is so easily simplifiable and really poses no problem at all and can indeed be made sense of? $\endgroup$ – Roshan Mar 7 at 7:14
$\begingroup$ Please see my edit. $\endgroup$ – rolandcyp Mar 7 at 15:26
|
CommonCrawl
|
Educational Codeforces Round 57 (Rated for Div. 2)
combinatorics
Announcement (ru)
Tutorial (ru)
E. The Top Scorer
256 megabytes
standard input
standard output
Hasan loves playing games and has recently discovered a game called TopScore. In this soccer-like game there are $$$p$$$ players doing penalty shoot-outs. The winner is the one who scores the most. In case of ties, one of the top scorers will be declared the winner randomly with equal probability.
They have just finished the game and now are waiting for the result. But there's a tiny problem! The judges have lost the score sheet! Fortunately, they had calculated the sum of the scores before it was lost, and for some of the players they remember a lower bound on how much they scored. However, the information about the bounds is private, so Hasan only got to know his own bound.
According to the available data, he knows that his score is at least $$$r$$$ and sum of the scores is $$$s$$$.
Thus the final state of the game can be represented as a sequence of $$$p$$$ integers $$$a_1, a_2, \dots, a_p$$$ ($$$0 \le a_i$$$) — the players' scores. Hasan is player number $$$1$$$, so $$$a_1 \ge r$$$. Also $$$a_1 + a_2 + \dots + a_p = s$$$. Two states are considered different if there exists some position $$$i$$$ such that the value of $$$a_i$$$ differs in these states.
Once again, Hasan doesn't know the exact scores (he doesn't know his own exact score either). So he considers each of the final states to be equally probable to achieve.
Help Hasan find the probability of him winning.
It can be shown that it is in the form of $$$\frac{P}{Q}$$$ where $$$P$$$ and $$$Q$$$ are non-negative integers and $$$Q \ne 0$$$, $$$P \le Q$$$. Report the value of $$$P \cdot Q^{-1} \pmod {998244353}$$$.
The only line contains three integers $$$p$$$, $$$s$$$ and $$$r$$$ ($$$1 \le p \le 100$$$, $$$0 \le r \le s \le 5000$$$) — the number of players, the sum of scores of all players and Hasan's score, respectively.
Print a single integer — the probability of Hasan winning.
In the first example Hasan can score $$$3$$$, $$$4$$$, $$$5$$$ or $$$6$$$ goals. If he scores $$$4$$$ goals or more, then he scores strictly more than his only opponent. If he scores $$$3$$$ then his opponent also scores $$$3$$$ and Hasan has a probability of $$$\frac 1 2$$$ to win the game. Thus, overall he has the probability of $$$\frac 7 8$$$ to win.
In the second example even Hasan's lower bound on goals implies that he scores more than any of his opponents. Thus, the resulting probability is $$$1$$$.
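A note on the required output format: since $$$998244353$$$ is prime, the modular inverse in $$$P \cdot Q^{-1} \pmod{998244353}$$$ can be computed with Fermat's little theorem. A minimal sketch of just this output convention (not a solution to the problem) is shown below; it reproduces the $$$\frac 7 8$$$ answer from the first example.

```python
MOD = 998244353  # a prime, so Fermat's little theorem gives modular inverses

def mod_frac(p: int, q: int) -> int:
    """Return P * Q^(-1) modulo MOD."""
    return p * pow(q, MOD - 2, MOD) % MOD

# The fraction 7/8 from the first example in the statement
print(mod_frac(7, 8))  # 124780545
```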
|
CommonCrawl
|
Not every orthogonal set in $\mathbb{R}^n$ is linearly independent
In general, $n$ linearly independent vectors are required to describe all locations in $n$-dimensional space. A set of vectors is called orthogonal if every pair of distinct vectors in it has dot product zero; if, in addition, every vector has length $1$, the set is called orthonormal. A vector $\mathbf{n}$ is said to be normal to a plane if it is orthogonal to every vector in that plane.

The central fact is that every orthogonal set of nonzero vectors is linearly independent. Proof: suppose $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots + c_k\mathbf{v}_k = \mathbf{0}$ is a linear dependence relation among orthogonal nonzero vectors. Taking the dot product of both sides with $\mathbf{v}_j$ eliminates every term except the $j$-th and gives $c_j\|\mathbf{v}_j\|^2 = 0$; since $\mathbf{v}_j \neq \mathbf{0}$, it follows that $c_j = 0$ for every $j$, so only the trivial relation exists. In particular, every orthonormal set is linearly independent, and an orthogonal set of $n$ nonzero vectors in $\mathbb{R}^n$ forms an orthogonal basis (an orthonormal basis if the vectors also have length $1$).

Several standard true/false statements follow. "Not every orthogonal set in $\mathbb{R}^n$ is linearly independent" is true, because an orthogonal set may contain the zero vector, and any set containing the zero vector is linearly dependent. "Not every linearly independent set in $\mathbb{R}^n$ is an orthogonal set" is also true: for example, $(1,0)$ and $(1,1)$ are linearly independent but not orthogonal. A set of more than $n$ vectors in $\mathbb{R}^n$ must be linearly dependent, since it contains more vectors than there are entries in each vector. Finally, given a linearly independent set, the Gram–Schmidt process produces an orthonormal set spanning the same subspace; this is how one obtains an orthonormal set of vectors in $\mathbb{R}^{3}$ from three linearly independent vectors, as shown in the sketch below.
|
CommonCrawl
|
The ZX-calculus - A Slightly Longer Introduction
Reading the rules
What is a rule?
The ZX-calculus has two important meta-rules:
Only connectivity matters
Everything is still true if you swap red and green
All other rules in the ZX-calculus look like an equation between two diagrams. For example: The rule to the right is called the green spider rule, and this is because these coloured nodes remind people of spiders with bodies (the coloured circle) and legs (the black wires.) Spiders are allowed to have any number of legs, including zero!
The rule is simple: If two green spiders are joined (there is a wire from one to the other) then you can merge those two spiders together. As you do so the newly merged spider maintains all its other connections. If the spiders had angles on them (called phases) then you add those angles together, remembering to work modulo $2 \pi$.
Wires that don't have a node at both ends are called "open wires." These indicate that we don't mind what those wires are attached to at the other end. Think of them as boundaries that allow us to embed our small diagram into some larger context. The "..." indicates that we also don't mind how many of these wires there are (including possibly 0 wires.)
The usual presentation of this rule is a little more complicated, because it says that if you have one or more wires between the nodes you can merge them, but this version will do for now.
The green spider rule, saying you can merge two green spiders if they are joined by a wire
The analogous red spider rule, found by just swapping the colours red and green everywhere
Swapping red and green
Since everything remains true if you swap red and green (you need to swap it everywhere!) then this rule, called the red spider rule, also holds true. It says precisely the same thing, but for red spiders.
The wires joining the spiders can be as long as you like, can bend around, leave the node in any direction, and are allowed to cross other wires freely. As the first meta-rule says: All that matters is the connectivity ("how many wires are there") between the nodes.
It's important to remember that these rules are equalities: Whenever you see part of a diagram that looks like the left hand side of a rule you can replace it with the right hand side of the rule, but also whenever you see an instance of the right hand side of a rule you can replace it with the left hand side of the rule too.
Moving angles through $\pi$ nodes
Here is a rule involving a green $\alpha$ spider (meaning it can have any angle on it) and a red $\pi$ spider, running along a single wire. The rule says that we can swap the order of these two spiders, but it has the side effect of flipping the sign in front of the $\alpha$.
The $\pi$-commutation rule (as this is known) we show here is actually only true up to a complex scalar factor. This is something that can be easily fixed by just including the scalar, but many people prefer to work with scalars left implicit (to avoid clutter in their diagrams.)
The pi-commutation rule (up to a complex scalar factor)
The Hadamard rule allows you to convert red spiders to green spiders and back again using Hadamard gates
Hadamards swap colours too
This is called the Hadamard rule, and it describes the action of the Hadamard gate (the yellow squares.) It says that we can use Hadamard gates to turn a green spider into a red spider (and, because of the second meta-rule, red spiders into green spiders.) It can be shown that two Hadamard gates applied in succession cancel each other out, meaning that not only is this a very versatile rule, but there is also a limit to how many Hadamard gates you will need in your diagram. There need to be Hadamard gates (yellow squares) on all of the wires in the left hand diagram.
Why is it a Hadamard gate, not a Hadamard spider? The Hadamard (in ZX) is limited to having two wires and two wires only. One can actually build it from red and green spiders, and so it's really just a short-hand symbol, but it's such a useful one that it is sometimes given the same level of status as the red and green spiders.
Since two Hadamard gates in a row cancel each other out, the creation and deletion of pairs of Hadamards is often done implicitly. This means that one can "pull" a Hadamard gate through a spider, changing its colour, and creating new Hadamard gates on every other leg.
These aren't all of the rules of the ZX-calculus, but they should give you enough indication of how to read the rules. In 2018 a complete set of rules for the ZX-calculus was found: This means that if you have two diagrams that describe the same matrix (we talk about how to do that later,) then the rules of the ZX-calculus can take you from one diagram to the other. Since then other complete sets of rules have been found, and it is now a matter of choosing the right set of rules for the job in hand.
Applying the rules
Spotting where to use them
Now that we know what the rules are, let's talk about how to use them. The intuitive notion behind rewriting (as rule application is also known) is to identify part of a diagram that matches the left hand side of your rule, and replace it with the right hand side (or vice versa,) keeping the rest of the diagram in place.
Here we have applied the Hadamard rule (described earlier) to the diagram, by first identifying a place where it can be applied, then replacing the matched region with the right hand side of the rule.
Since we can bend wires and move nodes around as much as we like, it can often be hard to spot where such a rule application can take place. Remember that all that matters is which nodes are connected to which other nodes.
The dashed rectangle shows where we are applying the Hadamard rule
Choices of which rule to apply and where have consequences
Knowing where to use them
To the left we show two possible chains of rules (out of many.) The grey arrows show rule applications. Starting at the top we pull the Hadamard gate through a spider, but we have a choice of which spider to use! After doing so we merge the adjacent spiders of the same colour. The ZX-calculus does not have a terminating (nor confluently oriented) set of rewrite rules, so choices of which rules to apply and where to apply them really do matter.
Words, words, words (and most of them about rules)
The ZX-calculus overlaps with quantum computing, traditional logic, and category theory. It uses ideas that have similar, but not identical, names in these areas. Here is a short glossary that will hopefully help a reader when they go on to read other documents.
A generator is a diagrammatic building block, which cannot be broken down further. In ZX these are the red and green spiders. Some people also include the Hadamard gate as a generator rather than a shorthand.
A rule, also known as an axiom, is an equation between two diagrams that is simply stated as being true.
A fragment of the ZX-calculus is a restriction of the angles allowed on the nodes:
Universal ZX allows any angle between 0 and $2\pi$
Clifford+T ZX allows angles that are multiples of $\pi/4$
Stabilizer ZX allows angles that are multiples of $\pi/2$
These are just the best known ones, and these fragments have had extensive study. Although research in this area is slowing down, there is still work being done on finding the most convenient sets of rules for a given setting. In particular: The fragment you are working in determines which properties your set of rules will have.
The ZW and ZH calculi are two other calculi that work in a similar way to ZX, but have different generators and different rules.
A calculus comes with an interpretation (we give ZX's interpretation lower down) which is a way of turning diagrams into a more computable setting. For ZX, as with the rest of quantum computing, this is complex matrices.
A calculus with a set of rules is sound if those rules express true statements after being interpreted.
A calculus with a set of rules is complete if the set of rules can be used to show any true statement possible in the language (using the interpretation to determine "true".) The fragments mentioned above all have complete sets of rules.
Circuits and Diagrams and Matrices
Not all diagrams are circuits, but all circuits are diagrams
One can build all quantum circuits from single qubit Z and X rotations, and two qubit CNOT gates. Quantum circuit diagrams are very rigid in structure: There are a fixed number of wires, and a fixed direction of flow in the diagram.
To turn a quantum circuit diagram into a ZX-calculus diagram is extremely easy:
Keep wires as they are
All Z rotation gates become green spiders
All X rotation gates become red spiders
All CNOT gates become a (connected) pair of red and green spiders, with the green spider on the control qubit.
The CNOT gate, here shown so the wire connecting the two nodes does not run vertically
Interpretations of the bent wire ("cup"), Hadamard, and green spider elements of the ZX-calculus
Interpreting the ZX-calculus
There are ZX diagrams that do not correspond to any unitary circuit (this isn't a problem, it just means ZX can describe non-unitary matrices.) As such there is no general method to turn a ZX diagram into a circuit diagram, although a lot of work has gone into working out how to do so when you can. What we can always do is turn a ZX diagram into a corresponding complex matrix.
To do this we only need to know what to do with green nodes, bent wires, and Hadamard gates (and how to glue these elements together.) This is because we can turn any red nodes into green nodes using the Hadamard rule described earlier. We also need to pick a direction of flow in our diagram. We'll be using left-to-right here, but any other orientation is possible (and needs to be clearly stated.)
The big square brackets mean "interpretation of", and sends a diagram to a matrix. Spiders can have any number of input and output legs, and a spider with $n$ inputs and $m$ outputs is interpreted as a matrix with $2^n$ columns and $2^m$ rows.
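To make the interpretation concrete, here is a small numpy sketch (the helper functions are our own) that builds the standard matrix of a green spider — the all-zeros input maps to the all-zeros output and the all-ones input picks up a phase of $e^{i\alpha}$ — derives the red spider by placing a Hadamard on every leg, and then numerically checks the spider fusion rule and the $\pi$-commutation rule from earlier (the latter only up to a complex scalar, as noted above).

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate

def z_spider(n_in, n_out, phase=0.0):
    """Green (Z) spider with n_in inputs and n_out outputs: a 2**n_out x 2**n_in
    matrix with 1 in the top-left corner, exp(i*phase) in the bottom-right corner,
    and zeros everywhere else."""
    m = np.zeros((2 ** n_out, 2 ** n_in), dtype=complex)
    m[0, 0] = 1.0
    m[-1, -1] = np.exp(1j * phase)
    return m

def kron_power(mat, n):
    """n-fold tensor (Kronecker) power of a matrix; the 1x1 identity for n = 0."""
    return reduce(np.kron, [mat] * n, np.eye(1))

def x_spider(n_in, n_out, phase=0.0):
    """Red (X) spider: a green spider with a Hadamard on every leg (colour-change rule)."""
    return kron_power(H, n_out) @ z_spider(n_in, n_out, phase) @ kron_power(H, n_in)

alpha, beta = 0.3, 1.1

# Spider fusion on one wire: Z(alpha) then Z(beta) equals Z(alpha + beta)
print(np.allclose(z_spider(1, 1, beta) @ z_spider(1, 1, alpha),
                  z_spider(1, 1, alpha + beta)))          # True

# Pi-commutation: swapping Z(alpha) and X(pi) flips the sign of alpha, up to a scalar
lhs = x_spider(1, 1, np.pi) @ z_spider(1, 1, alpha)
rhs = z_spider(1, 1, -alpha) @ x_spider(1, 1, np.pi)
print(np.allclose(lhs, np.exp(1j * alpha) * rhs))         # True
```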
Breaking down a diagram (i.e. tensor and composition)
ZX-calculus diagrams are read the same way that circuit diagrams are read: If your circuit is running left-to-right then you break the diagram down into columns. You view everything happening in a column as happening at the same time, and take the tensor product of all the elements in that column. Then you view the sequence of columns as a sequence of operations, taking the matrix composition of those operations.
To the right is a diagram that has been broken down so that each cell in the grid contains just one element. Remember you can bend wires around as needed to make this easier to do.
An indication of how to break a diagram down into smaller diagrams, so that each cell contains only one element.
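Reusing the z_spider and x_spider helpers from the sketch above, the spider-pair CNOT from the circuit-translation section can be evaluated exactly this way: tensor the elements within each column, then compose the columns. Up to the usual scalar factor of $\sqrt{2}$ (scalars are left implicit in the diagrams), the result is the familiar CNOT matrix.

```python
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

column1 = np.kron(z_spider(1, 2), I2)   # green spider on the control wire, bare wire below
column2 = np.kron(I2, x_spider(2, 1))   # bare wire above, red spider on the target wire
diagram = column2 @ column1             # compose the columns left to right

print(np.allclose(np.sqrt(2) * diagram, CNOT))   # True: equal up to the scalar sqrt(2)
```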
Transposes, cups, caps and reflections
Bent wires are a thing that may catch you off guard if you come from circuit diagrams. The interpretation of a bare wire running with the flow of your diagram is just the identity matrix. How about when a wire bends round to form a "cup"? This interpretation is given above, but we leave out what to do when the wire forms a "cap". This is because reflecting a diagram (in the vertical axis) has the effect of applying a transpose to the matrix interpretation.
For diagrams that flow left-to-right there is no interpretation for a vertical wire. We get around this by giving any vertical wires the slightest of bends, so that it forms either a cup or a cap (and it does not matter which option one chooses.)
Thank you for reading
Hopefully this introduction will give you an indication of how to read ZX diagrams and their rules. The publications page gives a list of the latest papers, and you can filter by authors, titles, keywords, and abstracts to find the content you are looking for. If you have any comments or suggestions on how we can improve this tutorial, please contact a member of the website team.
Website by Hector Miller-Bakewell and John van de Wetering, with thanks to Dom Horsman and Richard East
|
CommonCrawl
|
An automated fruit harvesting robot by using deep learning
Yuki Onishi1,
Takeshi Yoshida2,
Hiroki Kurita2,
Takanori Fukao3,
Hiromu Arihara4 &
Ayako Iwai4
ROBOMECH Journal volume 6, Article number: 13 (2019) Cite this article
Automation and labor saving in agriculture have become increasingly necessary in recent years. However, mechanization and robotics for fruit growing have not advanced to the same degree. This study proposes a method for detecting fruits and harvesting them automatically with a robot arm. A fast and accurate Single Shot MultiBox Detector is used to detect the position of a fruit, and a stereo camera is used to obtain its three-dimensional position. After the joint angles for the detected position are calculated by inverse kinematics, the robot arm is moved to the target fruit's position. The robot then harvests the fruit by twisting the hand axis. The experimental results showed that more than 90% of the fruits were detected, and the robot could harvest a fruit in 16 s.
The agriculture industry has many problems, including the decreasing number of farm workers and the increasing cost of fruit harvesting. Saving labor and scaling up agriculture are necessary to solve these problems. In recent years, the automation of agriculture has been advancing toward labor saving and large-scale farming. However, much of the work in fruit harvesting is still done manually. The development of an automated fruit harvesting robot is a viable solution to these problems. The automatic harvesting of fruits by a robot involves two major tasks: (1) fruit detection and localization on trees using computer vision with a sensor, and (2) robot arm motion to the position of the detected fruit and fruit harvesting by the end effector without damaging the target fruit and its tree.
The fruit detection and localization on trees using computer vision have been investigated in numerous studies, most of which are summarized in the review of Gongal et al. [1]. Color, spectral, or thermal cameras have been widely used in these methods. When a spectral camera is used [2], detecting a fruit shadowed by another fruit is difficult. When a thermal camera is used [3], the fruit is detected based on the temperature difference between the fruit and the background; this method is affected by the fruit size and exposure to direct sunlight. Various features are used in fruit detection with a color camera. Bulanon et al. [4, 5] used luminance and red, green, and blue (RGB) color difference to segment an apple. Rakun et al. [6] used texture analysis to detect an apple. Linker et al. [7] integrated multiple features to improve the accuracy of fruit detection methods. Various image classification methods for fruit detection can also be performed using a color camera. Bulanon et al. [8] used K-means clustering for apple detection. Linker et al. [7] and Cohen et al. [9] used KNN clustering for apple classification. In addition, Kurtulmus et al. [10] used an Artificial Neural Network for apple classification. Qiang et al. [11] used a Support Vector Machine classification method for apple detection. However, these methods are difficult to use in variable light conditions because the color information cannot be sufficiently acquired. For better accuracy, fruit detection should be performed using multiple features such as color, shape, texture, and reflection to overcome challenges like clustering and variable light conditions.
The present study proposes "fruit detection and localization" and "fruit harvesting by a robot manipulator with a hand which is able to harvest without damaging the fruit and its tree" to perform automatic fruit harvesting by a robot. We used a color camera and a Single Shot MultiBox Detector (SSD) [12] to detect the two-dimensional (2D) position of the fruit. The SSD is one of the general object detection methods that use Convolution Neural Network (CNN) [13]. The SSD can comprehensively judge from color and shape. A three-dimensional (3D) position must be obtained to send a command to the robot arm. A stereo camera is used to measure the 3D position of the fruit detected by the SSD. We used inverse kinematics to calculate the route of the robot arm. We moved the robot arm to the fruit position based on inverse kinematics. We used the harvesting robot hand as the end effector. The robot hand harvests a fruit by gripping and rotating it without damaging it and its tree.
We describe each step in our fruit detection and harvest method in this section.
Apple and tree
The fruit used in this research is the "Fuji" apple cultivated in the Miyagi Prefectural Agriculture and Horticulture Research Center. However, our method can also be applied to other apple varieties. A pear has a relatively similar shape to an apple; hence, this algorithm is also considered effective for pears. We used herein a joint V-shaped apple tree [14]. The V-shaped tree shape was suitable for mechanization and efficiency, and its fruits can be easily harvested. Figure 1 shows the tree used herein.
Detection and harvest algorithm
The harvest robot was equipped with a stereo camera and a robot arm. Figure 2 presents the detection and harvest algorithm. The algorithm involves three steps: detecting the 2D position of the apple, detecting the 3D position of the apple, and calculating the inverse kinematics. These steps are divided into the detection part and the harvest part. We explain each method in the sections that follow.
Flow chart for harvest of apple
Fruit position detection method
The first step of the detection part was detecting the 2D position of the fruit. We received one image from the stereo camera and detected where apples were in the received image. We used the SSD [12] to detect the apple positions.
The SSD is a method based on the CNN [13], which detects objects in an image using a single deep neural network. Other detection methods include Faster R-CNN [15] and You Only Look Once (YOLO) [16]. The first step of the SSD is the use of the VGG net to extract feature maps. The core of the SSD predicts the category scores and the box offsets for a fixed set of default bounding boxes using small convolutional filters applied to the feature maps. To achieve high detection accuracy, the SSD produces predictions at different scales from feature maps of different scales, and explicitly separates predictions by aspect ratio. These design features lead to simple end-to-end training and high accuracy even on low-resolution input images, improving the speed versus accuracy trade-off. We used the SSD herein because it is superior to the others in speed and accuracy. The SSD ran at 59 FPS with an mAP of 74.3% on the VOC2007 test on an Nvidia Titan X, whereas Faster R-CNN ran at 7 FPS with an mAP of 73.2% and YOLO at 45 FPS with an mAP of 63.4%. We can detect bounding boxes at the 2D apple positions in the image using the SSD.
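The paper trains its own SSD model on orchard images; as a rough illustration of the inference step only, the sketch below uses the generic COCO-pretrained SSD300/VGG16 shipped with torchvision as a stand-in, and keeps detections whose score is at least 0.6, mirroring the 60% threshold used later in the paper. The file name is a placeholder.

```python
import torch
from PIL import Image
from torchvision.models.detection import ssd300_vgg16
from torchvision.transforms.functional import to_tensor

# Stand-in model; the authors trained SSD on their own apple images instead.
# (Newer torchvision versions use the weights= argument instead of pretrained=True.)
model = ssd300_vgg16(pretrained=True).eval()

img = to_tensor(Image.open("orchard_frame.jpg").convert("RGB"))  # placeholder path
with torch.no_grad():
    pred = model([img])[0]          # dict with 'boxes', 'labels', 'scores'

keep = pred["scores"] >= 0.6        # keep detections with >= 60% confidence
boxes = pred["boxes"][keep]         # 2D bounding boxes in pixel coordinates
print(boxes)
```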
For fruits detected by the SSD, we selected a fruit that was nearest the robot arm. We received a point cloud data from the stereo camera and the pixel at the selected 2D apple position. We used the stereo camera to do a 3D reconstruction. The 3D reconstruction by the stereo camera was performed by a triangulation from parallax between the right and left images to obtain the 3D position of the pixel in the image. We can then measure the distance from the stereo camera to the apple.
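In practice the ZED SDK returns a point cloud directly; the following sketch only illustrates the underlying pinhole-stereo triangulation, in which depth is inversely proportional to the disparity between the left and right images. The intrinsics fx, fy, cx, cy and the baseline are assumed to be known from calibration, and the example numbers are placeholders.

```python
def pixel_to_point(u, v, disparity, fx, fy, cx, cy, baseline):
    # Triangulation from parallax: depth Z = fx * baseline / disparity,
    # then back-project the pixel (u, v) through the pinhole model.
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

# Example: center pixel of a detected apple box with a 12.5-pixel disparity.
print(pixel_to_point(640, 360, 12.5, fx=700.0, fy=700.0, cx=640.0, cy=360.0, baseline=0.12))
```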
Fruit harvesting method by the robot arm
To harvest a fruit with the robot hand attached to the robot arm, the hand must be moved to a specified position \(\varvec{p}\) and posture \(\varvec{R}\). In the case of a vertically articulated robot arm, the position and posture of the hand (\(\varvec{p},\varvec{R}\)) are determined by the angles \(\varvec{q}\) of the joints. Therefore, the relationship between the joint coordinate system, representing the joint angles of the robot arm, and the hand coordinate system, representing the position and posture of the hand, must be clarified.
The problem of determining the angles \(\varvec{q}\) of each joint from the hand position \(\varvec{p}\) and posture \(\varvec{R}\) is called an inverse kinematics problem [17]. The inverse kinematics problem aims to find the nonlinear function \(\varvec{f}^{-1}\) in Eq. (1), which is determined by the robot arm mechanism and configuration:
$$\begin{aligned} \varvec{q} = \varvec{f}^{-1}(\varvec{p},\varvec{R}). \end{aligned}$$
Inverse kinematics model
We considered that the inverse kinematic problem of the robot arm had six links. We used UR3 made by UNIVERSAL ROBOTS as the robot arm. UR3 has six degrees of freedom; thus, arbitrary position and posture can be expressed as long as they are within the operating range. Table 1 shows the Denavit–Hartenberg parameter of UR3. Table 2 presents the UR3 specification. Figure 3 displays the UR3 used herein. The Denavit–Hartenberg parameters in UR3 are described in Fig. 4.
Table 1 Denavit–Hartenberg parameters for UR3
Table 2 UR3 specifications
UR3 Image [18]
UR3 Denavit Hartenberg parameters diagrams
We obtain the angles \(\varvec{q} = \theta _{i}(i=1,2,\dots ,6)\) of each joint when we are given the position \(\varvec{p}(p_{x},p_{y},p_{z})\) and posture \(\varvec{R}(\phi ,\theta ,\psi )\) of the hand for Eq. (1). The rotation matrix \(\varvec{R}\) is expressed as
$${\varvec{R}} (\phi ,\theta ,\psi ) = \left[ {\begin{array}{*{20}c} {C_{\phi } C_{\theta } } & \quad {C_{\phi } S_{\theta } S_{\psi } - S_{\phi } C_{\psi } } & \quad {C_{\phi } S_{\theta } C_{\psi } + S_{\phi } S_{\psi } } \\ {S_{\phi } C_{\theta } } & \quad {S_{\phi } S_{\theta } S_{\psi } + C_{\phi } C_{\psi } } & \quad {S_{\phi } S_{\theta } C_{\psi } - C_{\phi } S_{\psi } } \\ { - S_{\theta } } & \quad {C_{\theta } S_{\psi } } & \quad {C_{\theta } C_{\psi } } \\ \end{array} } \right],{\text{ }}$$
where we used the abbreviations of \(S_{x} = \sin x\), and \(C_{x} = \cos x\).
The Denavit–Hartenberg notation [17] is the relationship between links i and \(i+1\). The homogeneous transformation matrix of the Denavit–Hartenberg notation is
$$^{n - 1} {\varvec {T}}_{n} = \left[ {\begin{array}{*{20}c} {C_{{\theta _{n} }} } & {\quad - S_{{\theta _{n} }} C_{{\alpha _{n} }} } & {\quad S_{{\theta _{n} }} S_{{\alpha _{n} }} } & {\quad r_{n} C_{{\theta _{n} }} } \\ {S_{{\theta _{n} }} } & {\quad C_{{\theta _{n} }} C_{{\alpha _{n} }} } & {\quad - C_{{\theta _{n} }} S_{{\alpha _{n} }} } & {\quad r_{n} S_{{\theta _{n} }} } \\ 0 & {\quad S_{{\alpha _{n} }} } & {\quad C_{{\alpha _{n} }} } & {\quad d_{n} } \\ 0 & {\quad 0} & {\quad 0} & {\quad 1} \\ \end{array} } \right],{\text{ }}$$
where we used the abbreviation of \(S_{x} = \sin x\), and \(C_{x} = \cos x\).
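As a small illustration (not code from the paper), the per-link homogeneous transform of Eq. (3) can be written directly; chaining the six transforms with the Denavit–Hartenberg parameters of Table 1 gives the base-to-hand transform \({^{0}\varvec{T}_6}\) used in Eq. (4). The function and table layout below are illustrative assumptions.

```python
import numpy as np

def dh_transform(theta, alpha, r, d):
    # Homogeneous transform between consecutive links, Eq. (3).
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, r * ct],
        [st,  ct * ca, -ct * sa, r * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(thetas, dh_table):
    # dh_table: list of (alpha_n, r_n, d_n) per joint, e.g. taken from Table 1.
    T = np.eye(4)
    for theta, (alpha, r, d) in zip(thetas, dh_table):
        T = T @ dh_transform(theta, alpha, r, d)
    return T  # equals ^0T_6 when the six joint transforms are chained
```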
We can obtain Eq. (4) from the relationship between the robot arm Denavit–Hartenberg notation \({^{0}\varvec{T}_6}\) and the hand position \(\varvec{p}\) and posture \(\varvec{R}\)
$$\begin{aligned} {^{0}\varvec{T}_6}(\varvec{q}) = \begin{bmatrix}&\varvec{R}&\varvec{p} \\ 0&0&0&1\\ \end{bmatrix} = \begin{bmatrix} R_{11}& {\quad R_{12}}&{\quad R_{13}}& {\quad p_{x}} \\ R_{21}& {\quad R_{22}}& {\quad R_{23}}& {\quad {p_{y}}} \\ R_{31}& {\quad R_{32}}& {\quad R_{33}}& {\quad p_{z}} \\ 0& \quad{0}& \quad{0}&\quad{1} \end{bmatrix}. \end{aligned}$$
With Eq. (4), the angle \(\theta _{i}\) of each joint of the robot arm can be obtained as follows, but first, \(\theta _{1}\) is presented as
$$\begin{aligned} A_1 &= {} \arctan \left( \frac{p_y - d_6 R_{23}}{p_x - d_6 R_{13}}\right) , \nonumber \\ B_1 &= {} \arccos \left( \frac{d_4}{\sqrt{(p_x - d_6 R_{13})^2 + (p_y - d_6 R_{23})^2}}\right) , \nonumber \\ \theta _1 &= {} A_1 \pm B_1 + \frac{\pi }{2}. \end{aligned}$$
\(\theta _{5}\) is denoted as follows
$$\begin{aligned} A_5 &= {} p_x \sin \theta _1 - p_y \cos \theta _1 - d_4, \nonumber \\ \theta _5 &= {} \pm \arccos \left( \frac{A_5}{d_6} \right) . \end{aligned}$$
where \(\sin \theta _5 \ne 0\), \(\theta _{6}\) is
$$\begin{aligned} A_6 &= {} (R_{12} - R_{11}) \sin \theta _1 + (R_{22} - R_{21}) \cos \theta _1, \nonumber \\ \theta _6 &= {} \frac{\pi }{4} - \arctan \left( \frac{\pm \sqrt{2\sin ^2 \theta _5 - A_6^2}}{A_6}\right) . \end{aligned}$$
If \(\theta _{234}=\theta _2 + \theta _3 + \theta _4\), \(\theta _{234}\) is denoted as
$$\begin{aligned} A_{234} &= {} \cos {\theta _5} \cos \theta _6, \nonumber \\ B_{234}&= {} \sin \theta _6, \nonumber \\ C_{234} &= {} R_{11} \cos \theta _1 + R_{21} \sin \theta _1, \nonumber \\ D_{234} &= {} R_{31}, \nonumber \\ \theta _{234} &= {} \arctan \left( \frac{A_{234} D_{234} - B_{234} C_{234}}{A_{234} C_{234} + B_{234} D_{234}}\right) . \end{aligned}$$
\(\theta _{3}\) is
$$\begin{aligned} A_3 &= {} p_x \cos \theta _1 + p_y \sin \theta _1 + d_6 \cos \theta _{234} \sin \theta _5 - d_5 \sin \theta _{234}, \nonumber \\ B_3 &= {} p_z - d_1 + d_6 \sin \theta _{234} \sin \theta _5 + d_5 \cos \theta _{234}, \nonumber \\ \theta _3 &= {} \arccos \left( \frac{{A_3} ^ 2 + {B_3} ^ 2 - {a_2} ^ 2 - {a_3} ^ 2}{2 a_2 a_3} \right) . \end{aligned}$$
\(\theta _{2}\) is
$$\begin{aligned} A_2 &= {} a_3 \cos \theta _3 + a_2, \nonumber \\ B_2 &= {} a_3 \sin \theta _3, \nonumber \\ C_2 &= {} p_z - d_1 + d_6 \sin \theta _{234} \sin \theta _5 + d_5 \cos \theta _{234}, \nonumber \\ \theta _2 &= {} \arctan \left( \frac{A_2}{B_2}\right) - \arctan \left( \pm \frac{\sqrt{A_2^2 + B_2^2 - C_2^2}}{C_2}\right) . \end{aligned}$$
Finally, \(\theta _{4}\) is
$$\begin{aligned} \theta _4 = \theta _{234} - \theta _2 - \theta _3. \end{aligned}$$
We can calculate the angles \(\varvec{q}\) of each joint from the hand position \(\varvec{p}\) and posture \(\varvec{R}\) by inverse kinematics.
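To show how these closed-form expressions are used, here is a hedged Python sketch of the first two solutions, Eqs. (5)–(6); d4 and d6 are link offsets from Table 1, the ± signs are exposed as branch arguments choosing the arm configuration, and arctan2 is used for quadrant-correct angles (a slight deviation from the plain arctan in Eq. (5)). The remaining joints follow Eqs. (7)–(11) in the same way.

```python
import numpy as np

def theta1_theta5(p, R, d4, d6, branch1=+1, branch5=+1):
    # p = (px, py, pz), R = 3x3 hand rotation matrix; returns theta_1, theta_5
    # following Eqs. (5)-(6). branch1/branch5 in {+1, -1} pick the +/- solutions.
    px, py, _ = p
    R13, R23 = R[0, 2], R[1, 2]
    A1 = np.arctan2(py - d6 * R23, px - d6 * R13)
    B1 = np.arccos(d4 / np.hypot(px - d6 * R13, py - d6 * R23))
    theta1 = A1 + branch1 * B1 + np.pi / 2
    A5 = px * np.sin(theta1) - py * np.cos(theta1) - d4
    theta5 = branch5 * np.arccos(np.clip(A5 / d6, -1.0, 1.0))
    return theta1, theta5
```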
Fruit position detection
This section describes the results of the fruit position detection.
The images taken at the Miyagi Prefectural Agriculture and Horticulture Research Center were used for learning and testing. The images were taken looking at the fruit from below, to minimize occlusion by leaves, branches, and other fruits. Figure 5 depicts an image taken by this method. We used the learning parameters shown in Table 3.
Example of apple image
Table 3 SSD learning parameters
We tested whether fruits can be detected by applying the learned model to previously unseen images taken in the orchard. We surrounded areas where the probability of fruit was 60% or more with a red frame. Detection was tested on 30 images containing 169 apples in total. Figures 6 and 7 depict the tested images, and Figs. 8 and 9 show the test results. The model can detect fruits even if they are partially occluded by other fruits and leaves. However, fruits at the edge of the image and those far from the camera could not be detected. Fruits at the edge could not be detected because they were cut off in the image, and fruits far from the camera could not be detected because they appear smaller in the image. This was not a problem herein, because these fruits were out of reach of the robot arm. Table 4 presents the test results.
Example of test image1
Result of detection1
Table 4 Result of the apple position detection
Harvesting robot
Figure 10 displays the harvesting robot used herein. We conducted fruit harvesting using this robot with a stereo camera installed at approximately 0.5 (m) below the base of the robot arm such that the fruit tree is looked up from directly below. If the distance to the target fruit is too long and the robot arm cannot reach the target, the table lift on which all equipment rides goes up and down, moving to the distance where the arm can reach.
Harvest robot
We use UR3 (UNIVERSAL ROBOTS) as the robot arm. Table 2 shows the robot repeatability is ± 0.1 (mm). The robot palm diameter was 5 cm; hence, even if an error occurs, it can be suppressed by the robot hand. We used ZED (STEREO LABS) as the stereo camera, with specifications shown in Table 5.
Table 5 ZED specification
Fruit automated harvest
We describe the automated apple harvesting in this section. Figure 11 illustrates the experimented tree and a model of the apple tree at the Miyagi Prefectural Agriculture and Horticultural Research Center. These trees were joint V-shaped trees [14] like those in the Miyagi Prefectural Agricultural and Horticultural Research Center. Conducting the experiment during apple harvest time was difficult; hence, we experimented with a tree model.
Apple tree model
The results of the automated fruit harvesting experiments are presented herein, starting with the detection part of the harvesting robot. First, we detected the 2D fruit position. Figure 12 shows the fruit detection result by the SSD. We used the learned model that can detect more than 90% of the fruits (see the fruit position detection section). We surrounded areas where the probability of fruit was 60% or more with a red frame. The robot was able to detect the model apples just as it detects real ones; hence, the model tree was sufficient for the experiment.
Detection of two-dimensional position
Second, we measured the 3D fruit position. Figure 13 depicts the 3D position of the center point of the frame detected by the SSD. The 3D reconstruction of parts other than the apples themselves was inadequate, but this did not matter in this experiment because only the bottom surface of the apple is needed. Sufficient results were obtained because we were able to capture the bottom of the apple.
Detection of three-dimensional position
Next, we describe the harvesting part of the harvesting robot. To insert the robot hand from the underside for fruit harvesting, the robot was first moved 10 (cm) below the target fruit (Fig. 14). The arm then rose toward the fruit from below (Fig. 15). The robot hand then grasped the fruit and harvested it by twisting it off the peduncle, rotating four times (Fig. 16).
Approaching target apple
Harvesting target apple
Grasping target apple
The harvest time for each fruit was approximately 16 s. Detecting the fruit position and calculating the joint angles for that position took approximately 2 s. Fruit harvesting took approximately 14 s. Harvesting consumed much time because the hand rotated several times. By reconsidering these points, a speedup is possible.
In this study, we performed automatic fruit harvesting using a method of fruit position detection and harvesting with a robot manipulator equipped with a harvesting hand that does not damage the fruit and its tree. Using the SSD, we showed that 90% or more of the fruit positions can be detected, with detection taking about 2 s. The proposed fruit harvesting algorithm also showed that one fruit can be harvested in approximately 16 s.
The fruit harvesting algorithm proposed herein is expected to be applicable to species similar to the apple. Moreover, if the model is retrained on the target fruit, harvesting other fruits, such as pears, is highly possible.
Single Shot MultiBox Detector
Convolution Neural Network
Gongal A, Amatya S, Karkee M, Zhang Q, Lewis K (2015) Sensors and systems for fruit detection and localization: a review. Comput Electron Agric 116:8–19
Okamoto H, Lee WS (2009) Green citrus detection using hyperspectral imaging. Comput Electron Agric 66(2):201–208
Stajnko D, Lakota M, Hočevar M (2004) Estimation of number and diameter of apple fruits in an orchard during the growing season by thermal imaging. Comput Electron Agric 42(1):31–42
Bulanon DM, Kataoka T, Ota Y, Hiroma T (2002) Ae-automation and emerging technologies: a segmentation algorithm for the automatic recognition of fuji apples at harvest. Biosyst Eng 83(4):405–412
Bulanon DM, Kataoka T (2010) Fruit detection system and an end effector for robotic harvesting of fuji apples. Agric Eng Int CIGR J 12(1):203–210
Rakun J, Stajnko D, Zazula D (2011) Detecting fruits in natural scenes by using spatial-frequency based texture analysis and multiview geometry. Comput Electron Agric 76(1):80–88
Linker R, Cohen O, Naor A (2012) Determination of the number of green apples in rgb images recorded in orchards. Comput Electron Agric 81:45–57
Bulanon DM, Kataoka T, Okamoto H, Hata S-i (2004) Development of a real-time machine vision system for the apple harvesting robot. In: SICE 2004 annual conference. vol 1, IEEE, New York, pp 595–598
Cohen O, Linker R, Naor A (2010) Estimation of the number of apples in color images recorded in orchards. In: International conference on computer and computing technologies in agriculture. Springer, Berlin, pp 630–642
Kurtulmus F, Lee WS, Vardar A (2014) Immature peach detection in colour images acquired in natural illumination conditions using statistical classifiers and neural network. Precis Agric 15(1):57–79
Qiang L, Jianrong C, Bin L, Lie D, Yajing Z (2014) Identification of fruit and branch in natural scenes for citrus harvesting robot using machine vision and support vector machine. Int J Agric Biol Eng 7(2):115–121
Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C-Y, Berg AC (2016) Ssd: single shot multibox detector. In: European conference on computer vision. Springer, Berlin, pp 21–37
Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds) Advances in neural information processing systems 25. Curran Associates Inc., pp 1097–1105
Shinnosuke K (2017) Integration of the tree form and machinery in Japanese. Farming Mech 3189:5–9
Ren S, He K, Girshick R, Sun J (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In: Cortes C, Lawrence ND, Lee DD, Sugiyama M, Garnett R (eds) Advances in neural information processing systems 28. Curran Associates Inc., pp 91–99
Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 779–788
Slotine J-JE, Asada H (1992) Robot analysis and control, 1st edn. Wiley, New York
Universal Robot Support. https://www.universal-robots.com/download/. Accessed 23 Oct 2019
This research was supported by grants from the Project of the Bio-oriented Technology Research Advancement Institution, NARO (the research project for the future agricultural production utilizing artificial intelligence).
Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1, Noji-higashi, Kusatsu, 525-8577, Shiga, Japan
Yuki Onishi
Research Organization of Science and Technology, Ritsumeikan University, 1-1-1, Noji-higashi, Kusatsu, 525-8577, Shiga, Japan
Takeshi Yoshida & Hiroki Kurita
Department of Electrical and Electronic Engineering, Ritsumeikan University, 1-1-1, Noji-higashi, Kusatsu, 525-8577, Shiga, Japan
Takanori Fukao
DENSO Corporation, 1-1, Showa-cho, Kariya, 448-8661, Aichi, Japan
Hiromu Arihara & Ayako Iwai
Takeshi Yoshida
Hiroki Kurita
Hiromu Arihara
Ayako Iwai
YO conducted all research and experiments. TY and TF developed the research concept, participated in design adjustment, and assisted in drafting the paper. All authors read and approved the final manuscript.
Correspondence to Yuki Onishi.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Onishi, Y., Yoshida, T., Kurita, H. et al. An automated fruit harvesting robot by using deep learning. Robomech J 6, 13 (2019). https://doi.org/10.1186/s40648-019-0141-2
Harvesting fruits
|
CommonCrawl
|
Minimizing a function which involves Maximize [duplicate]
What are the most common pitfalls awaiting new users? (37 answers)
I have a mathematical function:
$f = \frac{c^3}{2 a (k T)^3} \sqrt{\frac{\pi u v w}{2 b}} \exp \left[ - \frac{c}{k T} \sqrt{\frac{x^2+y^2}{a}+\frac{z^2}{b}} + \frac{1}{2} \left( \frac{x^2}{u} + \frac{y^2}{v} + \frac{z^2}{w} \right) \right]$
where k, T, a, b, c are constants.
fun[x_, y_, z_, k_, T_, a_, b_, c_, u_, v_, w_] := (
c^3 E^(-((c Sqrt[(x^2 + y^2)/a + z^2/b])/(k T)) +
1/2 (x^2/u + y^2/v + z^2/w)) Sqrt[\[Pi]] Sqrt[u v w])/(
2 Sqrt[2] a Sqrt[b] k^3 T^3)
I would like to find the maximum value of the function (let's call it m) when I put in the values of k, T, a, b, c, u, v and w.
Then I would like to find the combination of u, v and w that will give the minimum value of m.
So I tried the following:
T = 0.1;
c = 1;
NMinimize[
NMaximize[
fun[x, y, z, k, T, a, b, c, u, v, w],
{x, y, z}
][[1]],
{u, v, w}
]
Mathematica can't execute the code above.
mathematical-optimization
$\begingroup$ See here and here and let us know if you need more help. $\endgroup$ – Szabolcs Apr 10 '15 at 16:40
I think your attempt is impossible:
f = c^3/(2 a (k t)^3) Sqrt[(π u v w)/(2 b)]
Exp[-(c/(k t)) Sqrt[(x^2 + y^2)/a + z^2/b] + 1/2 (x^2/u + y^2/v + z^2/w)]
g = f/.{k -> 1, t -> 1/10, a -> 1, b -> 1, c -> 1}
(* 250 E^(1/2 (x^2/u + y^2/v + z^2/w -
20 Sqrt[x^2 + y^2 + z^2])) Sqrt[2 π] Sqrt[u v w] *)
The squares in the Exp will grow without bound, so there is no maximum.
m_goldberg
Jinxed
$\begingroup$ @user29615 Your formula suggests that you're studying a spectrum (governed by Planck's Law) over an ellipse in three dimensions, but have your signs wrong. Please check the physics of your problem first, then carefully (re)craft your optimization problem. $\endgroup$ – David G. Stork Apr 10 '15 at 17:14
|
CommonCrawl
|
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures; the algorithm is attributed to Charles Pierre Trémaux, a 19th-century French mathematician. Starting from a chosen vertex, DFS repeatedly visits an unvisited neighbor of the current vertex, and when it reaches a dead end it steps back one vertex and continues from another unvisited neighbor if one exists. During the traversal, each vertex can be given one of three colors describing its state: white (unvisited), gray (in progress), and black (finished).
For a graph represented as an adjacency list, the time complexity of DFS is $O(V+E)$, where $V$ is the number of vertices and $E$ is the number of edges, and the worst-case space complexity is $O(V)$, needed to store the stack of vertices on the current search path together with the set of already-visited vertices. Breadth-first search (BFS), in which one vertex at a time is visited and marked and its adjacent vertices are stored in a queue (first in, first out), has the same $O(V+E)$ time bound.
A common point of confusion is why the running time is $O(V+E)$ rather than $O(VE)$: although DFS-VISIT may be called for every vertex, the total work done by all of those calls is bounded by the sum of the lengths of all adjacency lists, which is $O(E)$. A simpler example of this amortized argument: a procedure COUNT counts the number of 1s in an array $T[1\ldots n]$ by repeatedly calling a procedure ADVANCE. Even though ADVANCE may be called up to $n$ times and its worst-case running time is $O(n)$, its inner loop runs at most $n$ times over the whole execution, so the overall running time is $O(n)$ rather than $O(n^2)$.
Performing DFS only up to a certain allowed depth is called Depth Limited Search (DLS), the building block of iterative deepening depth-first search (IDDFS); one first sets a constraint on how deep (how far from the root) the search will go. When search is performed to a limited depth, the time is still linear in the number of expanded vertices and edges, but the space complexity is only proportional to the depth limit, and is therefore much smaller than the space needed by breadth-first search to the same depth. With a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known.
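As a minimal sketch of the $O(V+E)$ bound discussed above (not taken from any of the quoted sources), the iterative depth-first search below pushes each vertex at most once and scans each adjacency list once:

```python
def dfs(graph, start):
    # graph: dict mapping each vertex to a list of neighbours (adjacency list).
    # Every vertex is pushed and popped at most once and every adjacency list
    # is scanned once, so the running time is O(V + E), not O(V * E).
    visited, order, stack = set(), [], [start]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        for w in reversed(graph.get(v, [])):
            if w not in visited:
                stack.append(w)
    return order

print(dfs({1: [2, 5], 2: [3, 5], 3: [4], 4: [5, 6], 5: [], 6: []}, 1))
```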
depth first search time complexity
|
CommonCrawl
|
Molecule with a quadrupole moment in an electric field
How does an uncharged non-polar molecule that has a quadrupole moment (such as carbon dioxide) behave in an electric field? I know that in a homogeneous electric field, ions travel while dipoles orient along the field (rotate) and non-polar molecules are not affected.
What kind of electrical field, if any, would exert a force on a molecule with a quadrupole moment?
molecular-structure
Karsten Theis
$\begingroup$ One application to selectively bind to carbon dioxide rather than dinitrogen (which does not have a strong quadrupole) is linked to in this report. $\endgroup$ – Karsten Theis Jan 26 at 15:23
Different moments of a charge distribution couple to different components of the external electric field. In the case of the quadrupole moment, the coupling is to the gradient of the electric field (EFG). Such an interaction is for instance relevant in NMR of quadrupolar nuclei, NQR (not to be confused with naked quad run, according to the wikipedia - o tempora, o mores) and Mössbauer spectroscopy, although those techniques consider the nuclear quadrupole moment.
In the case of atoms and molecules without permanent electric monopoles or dipoles, EFG interactions with permanent quadrupoles are the leading field-multipole interaction energy term (ignoring dispersion terms, ie induced dipole-induced dipole). In some cases such interactions can be of particular importance, for instance in aromatic compounds (see Kocman et al. cited below).
The quadrupole moment has been measured in $\ce{CO_2}$, see eg Chetty cited below, which includes experimental methods.
Electric quadrupole moment of graphene and its effect on intermolecular interactions M. Kocman, M. Pykal and P. Jurecka Physical Chemistry Chemical Physics Vol. 16, 2014
N. Chetty and V.W. Couling Measurement of the electric quadrupole moments of CO2 and OCS Molecular Physics Vol. 109 (5), 2011, 655–666
Tyberius
Buck Thorn
$\begingroup$ Not a bad answer except that it doesn't tell us what happens when there is coupling. $\endgroup$ – matt_black Jan 26 at 14:41
$\begingroup$ @matt_black I don't fully understand your question. The energy of the molecule is altered by the interaction of an EFG with the quadrupole moment, naturally. There is an associated force if the orientation of the EFG and quad tensors is not that of the minimum energy state. $\endgroup$ – Buck Thorn Jan 26 at 14:50
$\begingroup$ IF a molecule with a dipole interacts with the strong electric field in, for example, a microwave oven, it ends up absorbing a lot of the radiation causing a lot of warming. What sort of effects do we see for radiation interacting with quadroploes? how big are they and what are the things we observe? $\endgroup$ – matt_black Jan 26 at 14:55
$\begingroup$ @matt_black Ok, I see your point. The question is what strength of EFG would lead to a non-negligible energy compared to kT. That will take some additional thought. The quad constant is provided in the reference I cite, so it's a question of finding the expression for the interaction with the EFG and solving. $\endgroup$ – Buck Thorn Jan 26 at 15:16
$\begingroup$ The paper showed by Chetty and Couling showed a temperature-dependent effect. I'm suppose the field orients the molecule, and at low temperature it is more oriented than at higher temperatures. But that's just a guess from skimming the paper. $\endgroup$ – Karsten Theis Jan 26 at 15:21
To determine the force on an arbitrary multipole moment, we first expand the E-field in a Taylor-series around the point $\vec{r}=0$:
$$\vec{E}(\vec{r})=\vec{E}_0+\vec{r}\cdot(\nabla\vec{E})_0+\frac{1}{2}\vec{r}\vec{r}:(\nabla\nabla\vec{E})_0+...$$
where the number of dots in the product denotes how many indices to contract over. We can then determine the force using this expansion, since it is just the charge multiplied by the E-field:
$$\vec{F}=q\vec{E}_0+\vec{\mu}\cdot(\nabla\vec{E})_0+\frac{1}{2}\mathbf{\Theta}:(\nabla\nabla\vec{E})_0+...$$ Typically, people use the traceless quadrupole, which would add an additional factor of $\frac{1}{3}$ in the third term, but I'm going to work with the basic definition throughout. So this shows that a quadrupole subject to a large electric field double gradient (not sure of the correct terminology for this) will experience a force even if it is charge neutral and has no net dipole.
For completeness, we can also write the energy since the force is just the gradient of the energy:
$$W=q\phi-\mu\cdot\vec{E}-\frac{1}{2}\mathbf{\Theta}:(\nabla\vec{E})+...$$ where we see, as TryHard said, that the E-field gradient interacts with the quadrupole.
As an aside, one can also obtain the torque (the cross product of the position vector with the force) on an arbitrary multipole: $$\vec{T}=\vec{\mu }\times \vec{E}_0+\frac{1}{2}\mathbf{\Theta }\dot{\times }(\nabla \vec{E})_0+...$$
(I'm using the nonstandard notation $A\dot{\times}B$ to mean $\sum_{ijkl}\epsilon_{jkl}e_iA_{jl}B_{lk}$, where $\epsilon_{jkl}$ is the Levi-Civita operator and the $e_i$ are a set of orthonormal unit vectors). This suggests that a quadrupole will experience a torque when interacting with a field gradient.
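As a numerical illustration of the expressions above (the tensors below are placeholders in arbitrary consistent units, not the measured CO2 values), the double contractions can be evaluated directly:

```python
import numpy as np

Theta = np.diag([1.0, 1.0, -2.0])    # axially symmetric quadrupole (illustrative values)
grad_E = np.diag([-0.5, -0.5, 1.0])  # traceless electric-field-gradient tensor (illustrative)

# Quadrupole term of the interaction energy: W_quad = -1/2 * Theta : grad(E)
W_quad = -0.5 * np.einsum('ij,ij->', Theta, grad_E)

# Quadrupole term of the force needs the second field gradient,
# grad2_E[j, k, i] = d_j d_k E_i; it is zero here, so only a torque remains.
grad2_E = np.zeros((3, 3, 3))
F_quad = 0.5 * np.einsum('jk,jki->i', Theta, grad2_E)

print(W_quad, F_quad)
```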
For more information on this, I would read chapter 3 of Jeanne McHale's Molecular Spectroscopy. There are some typos and notational quirks(e.g. in the equation for torque, McHale just writes a cross product for the 2nd term, which isn't defined between matrices), but its overall a good book to give background in classical electrostatics and a quantum mechanical description of spectroscopy.
|
CommonCrawl
|
Oscillation theorems for impulsive parabolic differential system of neutral type
Asymptotic behaviors of Green-Sch potentials at infinity and its applications
August 2017, 22(6): 2339-2350. doi: 10.3934/dcdsb.2017101
On the Kolmogorov entropy of the weak global attractor of 3D Navier-Stokes equations:Ⅰ
Yong Yang 1,2, and Bingsheng Zhang 1,2,,
Department of Mathematics, The Pennsylvania State University, University Park, PA 16802, USA
Department of Mathematics, Texas A & M University, College Station, TX 77843, USA
Bingsheng Zhang, E-mail address: [email protected]
Received June 2016 Revised December 2016 Published March 2017
One particular metric that generates the weak topology on the weak global attractor $\mathcal{A}_w$ of three dimensional incompressible Navier-Stokes equations is introduced and used to obtain an upper bound for the Kolmogorov entropy of $\mathcal{A}_w$. This bound is expressed explicitly in terms of the physical parameters of the fluid flow.
Keywords: 3D Navier-Stokes equations, fluid flow, weak global attractor, Kolmogorov entropy, functional dimension.
Mathematics Subject Classification: 35Q30, 76D05, 34G20, 37L05, 37L2.
Citation: Yong Yang, Bingsheng Zhang. On the Kolmogorov entropy of the weak global attractor of 3D Navier-Stokes equations:Ⅰ. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2339-2350. doi: 10.3934/dcdsb.2017101
|
CommonCrawl
|
Statistics and Computing
November 2017 , Volume 27, Issue 6, pp 1555–1584 | Cite as
Hierarchical Bayesian level set inversion
Matthew M. Dunlop
Marco A. Iglesias
Andrew M. Stuart
The level set approach has proven widely successful in the study of inverse problems for interfaces, since its systematic development in the 1990s. Recently it has been employed in the context of Bayesian inversion, allowing for the quantification of uncertainty within the reconstruction of interfaces. However, the Bayesian approach is very sensitive to the length and amplitude scales in the prior probabilistic model. This paper demonstrates how the scale-sensitivity can be circumvented by means of a hierarchical approach, using a single scalar parameter. Together with careful consideration of the development of algorithms which encode probability measure equivalences as the hierarchical parameter is varied, this leads to well-defined Gibbs-based MCMC methods found by alternating Metropolis–Hastings updates of the level set function and the hierarchical parameter. These methods demonstrably outperform non-hierarchical Bayesian level set methods.
Inverse problems for interfaces · Level set inversion · Hierarchical Bayesian methods
The online version of this article (doi: 10.1007/s11222-016-9704-8) contains supplementary material, which is available to authorized users.
AMS is grateful to DARPA, EPSRC, and ONR for the financial support. MMD was supported by the EPSRC-funded MASDOC graduate training program. The authors are grateful to Dan Simpson for helpful discussions. The authors are also grateful for discussions with Omiros Papaspiliopoulos about links with probit. The authors would also like to thank the two anonymous referees for their comments that have helped improve the quality of the paper. This research utilized Queen Mary's MidPlus computational facilities, supported by QMUL Research-IT and funded by EPSRC grant EP/K000128/1.
Supplementary material 1 (pdf 675 KB)
Proof of theorems
Proof (Theorem 1)
Note that it suffices to show that \(\mu _0^\tau \sim \mu ^0_0\) for all \(\tau > 0\). (Here \(\sim \) denotes "equivalent as measures"). It is known that the eigenvalues of \(-\Delta \) on \(\mathbb {T}^d\) grow like \(j^{2/d}\), and hence the eigenvalues \(\lambda _j(\tau )\) of \(\mathcal {C}_{\alpha ,\tau }\) decay like
$$\begin{aligned} \lambda _j(\tau ) \asymp (\tau ^2 + j^{2/d})^{-\alpha },\;\;\;j\ge 1. \end{aligned}$$
Using Proposition 3 below, we see that \(\mu _0^\tau \sim \mu _0^0\) if
$$\begin{aligned} \sum _{j=1}^\infty \left( \frac{\lambda _j(\tau )}{\lambda _j(0)} - 1\right) ^2 < \infty . \end{aligned}$$
Now we have
$$\begin{aligned} \left| \frac{\lambda _j(\tau )}{\lambda _j(0)} - 1\right|&\asymp \left| \left( 1 + \frac{\tau ^2}{j^{2/d}}\right) ^{-\alpha } - 1\right| \\&\le \left| \exp \left( \frac{\alpha \tau ^2}{j^{2/d}}\right) - 1\right| \\&\le C\frac{\alpha \tau ^2}{j^{2/d}}. \end{aligned}$$
Here we have used that \((1+x)^{-\alpha }-1 \le \exp (\alpha x)-1\) for all \(x \ge 0\) to move from the first to the second line, and that \(\exp (x)-1 \le Cx\) for all \(x \in [0,x_0]\) to move from the second to third line. Now note that when \(d \le 3\), \(j^{-4/d}\) is summable, and so it follows that \(\mu _0^\tau \sim \mu _0^0\).
The case \(\tau = 0\) is Theorem 2.18 in Dashti and Stuart (2016); the general result follows from the equivalence above.
Let \(v \sim N(0,\mathcal {D}_{\sigma ,\nu ,\ell })\) where \(\mathcal {D}_{\sigma ,\nu ,\ell }\) is as given by (2). Then we have
$$\begin{aligned} \mathcal {D}_{\sigma ,\nu ,\ell }&= \beta \ell ^d(I - \ell ^2\Delta )^{-\nu -d/2}\\&= \beta \ell ^{d}\ell ^{-2\nu -d}(\ell ^{-2}I - \Delta )^{-\nu -d/2}\\&= \beta \tau ^{2\alpha - d}(\tau ^2I - \Delta )^{-\alpha }\\&= \beta \tau ^{2\alpha - d}\mathcal {C}_{\alpha ,\tau }. \end{aligned}$$
Hence, letting \(u \sim N(0,\mathcal {C}_{\alpha ,\tau })\), we see that
$$\begin{aligned} \mathbb {E}\Vert u\Vert ^2&= \mathrm {tr}(\mathcal {C}_{\alpha ,\tau })\\&= \frac{1}{\beta }\tau ^{d-2\alpha }\mathrm {tr}(\mathcal {D}_{\sigma ,\nu ,\ell })\\&= \frac{1}{\beta }\tau ^{d-2\alpha }\mathbb {E}\Vert v\Vert ^2. \end{aligned}$$
\(\square \)
Proof (Theorem 2) Proposition 1, which follows, shows that \(\mu _0\) and \({\varPhi }\) satisfy Assumptions 2.1 in Iglesias et al. (2016), with \(U = X\times \mathbb {R}^+\). Theorem 2.2 in Iglesias et al. (2016) then tells us that the posterior exists and is Lipschitz with respect to the data. \(\square \)
Let \(\mu _0\) be given by (3) and \({\varPhi }:X\times \mathbb {R}^+\rightarrow \mathbb {R}\) be given by (10). Let Assumptions 1 hold. Then
for every \(r > 0\) there is a \(K = K(r)\) such that, for all \((u,\tau ) \in X\times \mathbb {R}^+\) and all \(y \in Y\) with \(|y|_{\varGamma }< r\),
$$\begin{aligned} 0 \le {\varPhi }(u,\tau ;y) \le K; \end{aligned}$$
for any fixed \(y \in Y\), \({\varPhi }(\cdot ,\cdot ;y):X\times \mathbb {R}^+\rightarrow \mathbb {R}\) is continuous \(\mu _0\)-almost surely on the complete probability space \((X\times \mathbb {R}^+,\mathcal {X}\otimes \mathcal {R},\mu _0)\);
for \(y_1,y_2 \in Y\) with \(\max \{|y_1|_{\varGamma },|y_2|_{\varGamma }\} < r\), there exists a \(C = C(r)\) such that for all \((u,\tau ) \in X\times \mathbb {R}^+\),
$$\begin{aligned} |{\varPhi }(u,\tau ;y_1) - {\varPhi }(u,\tau ;y_2)| \le C|y_1-y_2|_{\varGamma }. \end{aligned}$$
Recall the level set map F, defined by (7) via the finite constant values \(\kappa _i\) taken on each subset \(D_i\) of \(\overline{D}\). We may bound F uniformly:
$$\begin{aligned} |F(u,\tau )| \le \max \{|\kappa _1|,\ldots |\kappa _n|\} =: F_{\max }, \end{aligned}$$
for all \((u,\tau ) \in X\times \mathbb {R}^+\). Combining this with Assumption 1(ii), it follows that \(\mathcal {G}\) is uniformly bounded on \(X\times \mathbb {R}^+\). The result then follows from the continuity of \(y\mapsto \frac{1}{2}|y-\mathcal {G}(u,\tau )|_{\varGamma }^2\).
Let \((u,\tau ) \in X\times \mathbb {R}^+\) and let \(D_i(u,\tau )\) be as defined by (6), and define \(D_i^0(u,\tau )\) by
$$\begin{aligned} D_i^0(u,\tau )&= \overline{D}_i(u,\tau )\cap \overline{D}_{i+1}(u,\tau )\\&= \{x \in D\,|\,u(x) = c_i(\tau )\},\, i=1,\ldots ,n-1. \end{aligned}$$
We first show that \(\mathcal {G}\) is continuous at \((u,\tau )\) whenever \(|D_i^0(u,\tau )| = 0\) for \(i=1,\ldots ,n-1\). Choose an approximating sequence \(\{u_\varepsilon ,\tau _\varepsilon \}_{\varepsilon >0}\) of \((u,\tau )\) such that \(\Vert u_\varepsilon - u\Vert _\infty + |\tau _\varepsilon -\tau | < \varepsilon \) for all \(\varepsilon > 0\). We will first show that \(\Vert F(u_\varepsilon ,\tau _\varepsilon ) - F(u,\tau )\Vert _{L^p(D)}\rightarrow 0\) for any \(p \in [1,\infty )\). As in Iglesias et al. (2016) Proposition 2.4, we can write
$$\begin{aligned}&F(u_\varepsilon ,\tau _\varepsilon ) - F(u,\tau )\\&\quad = \sum _{i=1}^n\sum _{j=1}^n (\kappa _i - \kappa _j)\mathbbm {1}_{D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau )}\\&\quad = \sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^n (\kappa _i - \kappa _j)\mathbbm {1}_{D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau )}. \end{aligned}$$
From the definition of \((u_\varepsilon ,\tau _\varepsilon )\),
$$\begin{aligned} u(x) - \varepsilon< u_\varepsilon (x)< u(x) + \varepsilon ,\;\;\;\tau - \varepsilon< \tau _\varepsilon < \tau + \varepsilon \end{aligned}$$
for all \(x \in D\) and \(\varepsilon > 0\). We claim that for \(|i-j| > 1\) and \(\varepsilon \) sufficiently small, \(D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau ) = \varnothing \). First note that
$$\begin{aligned} D_i(u_\varepsilon ,\tau _\varepsilon )&= \big \{x \in D\;\big |\; \tau _\varepsilon ^{d/2-\alpha }c_{i-1} \le u_\varepsilon (x)< \tau _\varepsilon ^{d/2-\alpha }c_i\big \}\\&= \big \{x \in D\;\big |\; c_{i-1} \le \tau _\varepsilon ^{\alpha -d/2}u_\varepsilon (x) < c_i\big \}. \end{aligned}$$
Then we have that
$$\begin{aligned}&D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau ) \\&\quad =\{x \in D|c_{i-1} \le \tau _\varepsilon ^{\alpha -d/2}u_\varepsilon (x)< c_i,\\&\qquad c_{j-1} \le \tau ^{\alpha -d/2}u(x) < c_j\}. \end{aligned}$$
Now, since u is bounded,
$$\begin{aligned} \tau ^{\alpha -d/2}u(x) -\mathcal {O}(\varepsilon )< \tau _\varepsilon ^{\alpha -d/2}u_\varepsilon (x) < \tau ^{\alpha -d/2}u(x) + \mathcal {O}(\varepsilon ) \end{aligned}$$
$$\begin{aligned}&D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau ) \subseteq \\&\quad \{x \in D\;|\;c_{i-1}-\mathcal {O}(\varepsilon ) \le \tau ^{\alpha -d/2}u(x)< c_i + \mathcal {O}(\varepsilon ),\\&\quad c_{j-1} \le \tau ^{\alpha -d/2}u(x) < c_j\}. \end{aligned}$$
From the strict ordering of the \(\{c_i\}_{i=1}^n\) we deduce that for \(|i-j| > 1\) and small enough \(\varepsilon \), the right-hand side is empty. We hence look at the cases \(|i-j| = 1\). With the same reasoning as above, we see that
$$\begin{aligned}&D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_{i+1}(u,\tau )\\&\quad \subseteq \big \{x\in D\;\big |\;c_i -\mathcal {O}(\varepsilon ) \le \tau ^{\alpha -d/2}u(x)< c_i + \mathcal {O}(\varepsilon ) \big \}\\&\quad \rightarrow \big \{x \in D\;\big |\; \tau ^{\alpha -d/2}u(x) = c_i\big \}\\&\quad = \big \{x \in D\;\big |\; u(x) = \tau ^{d/2-\alpha }c_i\big \}\\&\quad = D_i^0(u,\tau ) \end{aligned}$$
$$\begin{aligned}&D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_{i-1}(u,\tau ) \\&\quad \subseteq \big \{x\in D\;\big |\; c_{i-1} - \mathcal {O}(\varepsilon )< \tau ^{\alpha -d/2}u(x) < c_{i-1}\big \}\\&\quad \rightarrow \varnothing . \end{aligned}$$
Assume that each \(|D_i^0(u,\tau )| = 0\), then it follows that \(|D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau )|\rightarrow 0\) whenever \(i \ne j\). Therefore we have that
$$\begin{aligned} \Vert F(u_\varepsilon ,\tau _\varepsilon )&- F(u,\tau )\Vert _{L^p(D)}^p\\&= \sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^n \int _{D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau )} |\kappa _i - \kappa _j|^p\,\mathrm {d}x\\&\le (2F_{\max })^p \sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^n |D_i(u_\varepsilon ,\tau _\varepsilon )\cap D_j(u,\tau )|\\&\rightarrow 0. \end{aligned}$$
Thus F is continuous at \((u,\tau )\). By Assumption 1(i) it follows that \(\mathcal {G}\) is continuous at \((u,\tau )\). We now claim that \(|D_i^0(u,\tau )| = 0\) \(\mu _0\)-almost surely for each i. By Tonelli's theorem, we have that
$$\begin{aligned}&\mathbb {E}|D_i^0(u,\tau )|\\&\quad = \int _{X\times \mathbb {R}^+}|D_i^0(u,\tau )|\,\mu _0(\mathrm {d}u,\mathrm {d}\tau )\\&\quad =\int _{X\times \mathbb {R}^+}\left( \int _{\mathbb {R}}\mathbbm {1}_{D_i^0(u,\tau )}(x)\,\mathrm {d}x\right) \mu _0(\mathrm {d}u,\mathrm {d}\tau )\\&\quad =\int _{\mathbb {R}^d}\left( \int _{X\times \mathbb {R}^+}\mathbbm {1}_{D_i^0(u,\tau )}(x)\,\mu _0(\mathrm {d}u,\mathrm {d}\tau )\right) \mathrm {d}x\\&\quad =\int _{\mathbb {R}^d}\left( \int _0^\infty \left( \int _X \mathbbm {1}_{D_i^0(u,\tau )}(x)\,\mu _0^\tau (\mathrm {d}u)\right) \,\pi _0(\mathrm {d}\tau )\right) \mathrm {d}x\\&\quad =\int _{\mathbb {R}^d}\left( \int _0^\infty \mu _0^\tau (\{u \in X\;|\;u(x) = c_i(\tau )\})\,\pi _0(\mathrm {d}\tau )\right) \mathrm {d}x. \end{aligned}$$
For each \(\tau \ge 0\) and \(x \in D\), u(x) is a real-valued Gaussian random variable under \(\mu _0^\tau \). It follows that \(\mu _0^\tau (\{u \in X\;|\;u(x) = c_i(\tau )\}) = 0\), and so \(\mathbb {E}|D_i^0(u,\tau )| = 0\). Since \(|D_i^0(u,\tau )| \ge 0\) we have that \(|D_i^0(u,\tau )| = 0\) \(\mu _0\)-almost surely. The result now follows.
For fixed \((u,\tau ) \in X\times \mathbb {R}^+\), the map \(y\mapsto \frac{1}{2}|y-\mathcal {G}(u,\tau )|_{\varGamma }^2\) is smooth and hence locally Lipschitz. \(\square \)
Proof (Theorem 4) Recall that the eigenvalues of \(\mathcal {C}_{\alpha ,\tau }\) satisfy \(\lambda _j(\tau ) \asymp (\tau ^2 + j^{2/d})^{-\alpha }\). Then we have that
$$\begin{aligned} \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) \asymp (1+\tau ^2 j^{-2/d})^\alpha - 1 = \mathcal {O}(j^{-2/d}). \end{aligned}$$
$$\begin{aligned} \sum _{j=1}^\infty \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) ^p< \infty \;\;\;\text {if and only if }d<2p. \end{aligned}$$
We first prove the 'if' part of the statement. We have \(u \sim N(0,\mathcal {C}_0)\), and so \(\mathbb {E}\langle u,\varphi _j\rangle ^2 = \lambda _j(0)\). Since the terms within the sum are non-negative, by Tonelli's theorem we can bring the expectation inside the sum to see that
$$\begin{aligned} \mathbb {E}\sum _{j=1}^\infty \left( \frac{1}{\lambda _j(\tau )} - \frac{1}{\lambda _j(0)}\right) \langle u,\varphi _j\rangle ^2&= \sum _{j=1}^\infty \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) \end{aligned}$$
which is finite if and only if \(d < 2\), i.e., \(d=1\). It follows that the sum is finite almost surely. For the converse, suppose that \(d \ge 2\) so that the series in (15) diverges when \(p=1\). Let \(\{\xi _j\}_{j\ge 1}\) be a sequence of i.i.d. N(0, 1) random variables so that \(\langle u,\varphi _j\rangle ^2\) has the same distribution as \(\lambda _j(0)\xi _j^2\). Define the sequence \(\{Z_n\}_{n\ge 1}\) by
$$\begin{aligned} Z_n&= \sum _{j=1}^n \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) \xi _j^2\\&= \sum _{j=1}^n \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) + \sum _{j=1}^n \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) (\xi _j^2-1)\\&=: X_n + Y_n. \end{aligned}$$
Then the result follows if \(Z_n\) diverges with positive probability. By assumption we have that \(X_n\) diverges. In order to show that \(Z_n\) diverges with positive probability it hence suffices to show that \(Y_n\) converges with positive probability. Define the sequence of random variables \(\{W_j\}_{j\ge 1}\) by
$$\begin{aligned} W_j = \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) (\xi _j^2-1). \end{aligned}$$
It can be checked that
$$\begin{aligned} \mathbb {E}(W_j) = 0,\;\;\;\text {Var}(W_j) = 2\left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) ^2. \end{aligned}$$
The series of variances converges if and only if \(d\le 3\), using (15) with \(p = 2\). We use Kolmogorov's two series theorem, Theorem 3.11 in Srinivasa Varadhan (2001), to conclude that \(Y_n = \sum _{j=1}^n W_j\) converges almost surely and the result follows.
$$\begin{aligned} \log \left( \frac{\lambda _j(\tau )}{\lambda _j(0)}\right) =&-\log \left( 1-\left( 1-\frac{\lambda _j(0)}{\lambda _j(\tau )}\right) \right) \\ =&\left( 1-\frac{\lambda _j(0)}{\lambda _j(\tau )}\right) + \frac{1}{2}\left( 1-\frac{\lambda _j(0)}{\lambda _j(\tau )}\right) ^2\\&+ \text {h.o.t.} \end{aligned}$$
Let \(\{\xi _j\}_{j\ge 1}\) be a sequence of i.i.d. N(0, 1) random variables, so that again we have that \(\langle u,\varphi _j\rangle ^2\) has the same distribution as \(\lambda _j(0)\xi _j^2\). Then it is sufficient to show that the series
$$\begin{aligned} I = \sum _{j=1}^\infty \left[ \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) \xi _j^2 + \log \left( \frac{\lambda _j(\tau )}{\lambda _j(0)}\right) \right] \end{aligned}$$
is finite almost surely. We use the above approximation for the logarithm to write
$$\begin{aligned} I =&\sum _{j=1}^\infty \left( \frac{\lambda _j(0)}{\lambda _j(\tau )}-1\right) (\xi _j^2-1)\\&+ \sum _{j=1}^\infty \left[ \frac{1}{2}\left( 1-\frac{\lambda _j(0)}{\lambda _j(\tau )}\right) ^2 + \text {h.o.t.}\right] . \end{aligned}$$
The second sum converges if and only if \(d<4\), i.e., \(d\le 3\). The almost-sure convergence of the first term is shown in the proof of part (i). \(\square \)
Let \(D\subseteq \mathbb {R}^d\). Define the construction map \(F:X\times \mathbb {R}^+\rightarrow \mathbb {R}^D\) by (7). Given \(x_0 \in D\) define \(\mathcal {G}:X\times \mathbb {R}^+\rightarrow \mathbb {R}\) by \(\mathcal {G}(u,\tau ) = F(u,\tau )|_{x_0}\). Then \(\mathcal {G}\) is continuous at any \((u,\tau ) \in X\times \mathbb {R}^+\) with \(u(x_0) \ne c_i(\tau )\) for each \(i=0,\ldots ,n\). In particular, \(\mathcal {G}\) is continuous \(\mu _0\)-almost surely when \(\mu _0\) is given by (3). Additionally, \(\mathcal {G}\) is uniformly bounded.
The uniform boundedness is clear. For the continuity, let \((u,\tau ) \in X\times \mathbb {R}^+\) with \(u(x_0) \ne c_i(\tau )\) for each \(i=0,\ldots ,n\). Then there exists a unique j such that
$$\begin{aligned} c_{j-1}(\tau )< u(x_0) < c_j(\tau ). \end{aligned}$$
Given \(\delta > 0\), let \((u_\delta ,\tau _\delta ) \in X\times \mathbb {R}^+\) be any pair such that
$$\begin{aligned} \Vert u_\delta - u\Vert _\infty + |\tau _\delta -\tau | < \delta . \end{aligned}$$
Then it is sufficient to show that for all \(\delta \) sufficiently small, \(x_0 \in D_j(u_\delta ,\tau _\delta )\), i.e., that
$$\begin{aligned} c_{j-1}(\tau _\delta ) \le u_\delta (x_0) < c_j(\tau _\delta ). \end{aligned}$$
From this it follows that \(G(u_\delta ,\tau _\delta ) = G(u,\tau )\).
Since the inequalities in (16) are strict, we can find \(\alpha > 0\) such that
$$\begin{aligned} c_{j-1} + \alpha< u(x_0) < c_j(\tau ) - \alpha . \end{aligned}$$
Now \(c_j\) is continuous at \(\tau > 0\), and so there exists a \(\gamma > 0\) such that for any \(\lambda > 0\) with \(|\lambda -\tau | < \gamma \) we have
$$\begin{aligned} c_j(\lambda ) - \alpha /2< c_j(\tau ) < c_j(\lambda ) + \alpha /2. \end{aligned}$$
We have that \(\Vert u_\delta - u\Vert _\infty < \delta \), and so in particular,
$$\begin{aligned} u(x_0) - \delta< u_\delta (x_0) < u(x_0) + \delta . \end{aligned}$$
We can combine (17)–(19) to see that, for \(\delta < \gamma \),
$$\begin{aligned} c_{j-1}(\tau _\delta ) - \delta + \alpha /2< u_\delta (x_0) < c_j(\tau _\delta ) + \delta - \alpha /2 \end{aligned}$$
and so in particular, for \(\delta < \min \{\gamma ,\alpha /2\}\),
$$\begin{aligned} c_{j-1}(\tau _\delta )< u_\delta (x_0) < c_j(\tau _\delta ). \end{aligned}$$
Radon–Nikodym derivatives in Hilbert spaces
The following proposition gives an explicit formula for the density of one Gaussian with respect to another and is used in defining the acceptance probability for the length-scale updates in our algorithm. Although we only use the proposition in the case where H is a function space and the mean m is zero, we provide a proof in the more general case where m is an arbitrary element of separable Hilbert space H as this setting may be of independent interest.
Let \((H,\langle \cdot ,\cdot \rangle ,\Vert \cdot \Vert )\) be a separable Hilbert space, and let A, B be positive trace-class operators on H. Assume that A and B share a common complete set of orthonormal eigenvectors \(\{\varphi _j\}_{j\ge 1}\), with the eigenvalues \(\{\lambda _j\}_{j\ge 1}\), \(\{\gamma _j\}_{j\ge 1}\) defined by
$$\begin{aligned} A\varphi _j = \lambda _j\varphi _j,\;\;\; B\varphi _j = \gamma _j\varphi _j, \end{aligned}$$
for all \(j \ge 1\). Assume further that the eigenvalues satisfy
$$\begin{aligned} \sum _{j=1}^\infty \left( \frac{\lambda _j}{\gamma _j} - 1\right) ^2 < \infty . \end{aligned}$$
Let \(m \in H\) and define the measures \(\mu = N(m,A)\) and \(\nu = N(m,B)\). Then \(\mu \) and \(\nu \) are equivalent, and their Radon–Nikodym derivative is given by
$$\begin{aligned} \frac{\mathrm {d}\mu }{\mathrm {d}\nu }(u) = \prod _{j=1}^\infty \left( \frac{\gamma _j}{\lambda _j}\right) ^{1/2}\cdot \exp \Bigg (\frac{1}{2}\sum _{j=1}^\infty \bigg (\frac{1}{\gamma _j} - \frac{1}{\lambda _j}\bigg )\langle u-m,\varphi _j\rangle ^2\Bigg ). \end{aligned}$$
The assumption on summability of the eigenvalues means that the Feldman–Hájek theorem applies, and so we know that \(\mu \) and \(\nu \) are equivalent. We show that the Radon–Nikodym derivative is as given above.
Define the product measures \({\hat{\mu }},{\hat{\nu }}\) on \(\mathbb {R}^\infty \) by
$$\begin{aligned} {\hat{\mu }} = \prod _{j=1}^\infty {\hat{\mu }}_j,\;\;\;{\hat{\nu }} = \prod _{j=1}^\infty {\hat{\nu }}_j, \end{aligned}$$
where \({\hat{\mu }}_j = N(0,\lambda _j)\), \({\hat{\nu }}_j = N(0,\gamma _j)\). As a consequence of a result of Kakutani, see Prato and Zabczyk (2002) Proposition 1.3.5, we have that \({\hat{\mu }}\sim {\hat{\nu }}\) with
$$\begin{aligned} \frac{\mathrm {d}{\hat{\mu }}}{\mathrm {d}{\hat{\nu }}}(x)&= \prod _{j=1}^\infty \frac{\mathrm {d}{\hat{\mu }}_j}{\mathrm {d}{\hat{\nu }}_j}(x_j)\\&= \prod _{j=1}^\infty \left( \frac{\gamma _j}{\lambda _j}\right) ^{1/2}\cdot \exp \Bigg (\frac{1}{2}\sum _{j=1}^\infty \bigg (\frac{1}{\gamma _j} - \frac{1}{\lambda _j}\bigg )x_j^2\Bigg ). \end{aligned}$$
We associate H with \(\mathbb {R}^\infty \) via the map \(G:H\rightarrow \mathbb {R}^\infty \), given by
$$\begin{aligned} G_ju = \langle u,\varphi _j\rangle ,\;\;\;j\ge 1. \end{aligned}$$
Note that the image of G is \(\ell ^2 \subseteq \mathbb {R}^\infty \), and \(G:H\rightarrow \ell ^2\) is an isomorphism. Since A and B are trace-class, samples from \({\hat{\mu }}\) and \({\hat{\nu }}\) almost surely take values in \(\ell ^2\). \(G^{-1}\) is hence almost surely defined on samples from \({\hat{\mu }}\) and \({\hat{\nu }}\). Define the translation map \(T_m:H\rightarrow H\) by \(T_m u = u + m\). Then by the Karhunen–Loève theorem, the measures \(\mu \) and \(\nu \) can be expressed as the push forwards
$$\begin{aligned} \mu = T_m^\#(G^{-1})^\#{\hat{\mu }},\;\;\;\nu = T_m^\#(G^{-1})^\#{\hat{\nu }}. \end{aligned}$$
Now let \(f:H\rightarrow \mathbb {R}\) be bounded measurable, then we have
$$\begin{aligned} \int _H f(u)\,\mu (\mathrm {d}u)&= \int _H f(u)\,\big [T_m^\#(G^{-1})^\#{\hat{\mu }}\big ](\mathrm {d}u)\\&= \int _{\mathbb {R}^\infty } f(G^{-1}x+m)\,{\hat{\mu }}(\mathrm {d}x)\\&= \int _{\mathbb {R}^\infty } f(G^{-1}x+m)\frac{\mathrm {d}{\hat{\mu }}}{\mathrm {d}{\hat{\nu }}}(x)\,{\hat{\nu }}(\mathrm {d}x)\\&= \int _H f(u)\frac{\mathrm {d}{\hat{\mu }}}{\mathrm {d}{\hat{\nu }}}(G(u-m))\,\big [T_m^\#(G^{-1})^\#{\hat{\nu }}\big ](\mathrm {d}u)\\&= \int _H f(u)\frac{\mathrm {d}{\hat{\mu }}}{\mathrm {d}{\hat{\nu }}}(G(u-m))\,\nu (\mathrm {d}u). \end{aligned}$$
From this it follows that we have
$$\begin{aligned} \frac{\mathrm {d}\mu }{\mathrm {d}\nu }(u)&= \frac{\mathrm {d}{\hat{\mu }}}{\mathrm {d}{\hat{\nu }}}(G(u-m))\\&= \prod _{j=1}^\infty \left( \frac{\gamma _j}{\lambda _j}\right) ^{1/2}\cdot \exp \Bigg (\frac{1}{2}\sum _{j=1}^\infty \bigg (\frac{1}{\gamma _j} - \frac{1}{\lambda _j}\bigg )\langle u-m,\varphi _j\rangle ^2\Bigg ). \end{aligned}$$
The proposition above, in the case \(m=0\), is given as Theorem 1.3.7 in Prato and Zabczyk (2002) except that, there, the factor before the exponential is omitted. This is because it does not depend on u, and all measures involved are probability measures and hence normalized. We retain the factor as we are interested in the precise value of the derivative for the MCMC algorithm, in particular its dependence on the length-scale.
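As a purely illustrative complement, the log of this derivative can be evaluated numerically once the series is truncated; this is how the length-scale acceptance probability could be assembled in practice. The function name, the truncation, and the use of plain NumPy arrays below are assumptions for the sketch, not part of the paper.

```python
import numpy as np

def log_radon_nikodym(u_coeffs, lam, gam):
    """Truncated evaluation of log(dmu/dnu)(u) for mu = N(0, A), nu = N(0, B),
    where A and B share eigenvectors {phi_j}, lam[j] and gam[j] are their
    eigenvalues, and u_coeffs[j] = <u - m, phi_j>.  The infinite product and
    sum are replaced by finite truncations of length len(lam)."""
    u_coeffs, lam, gam = map(np.asarray, (u_coeffs, lam, gam))
    log_prefactor = 0.5 * np.sum(np.log(gam) - np.log(lam))   # log prod (gam_j/lam_j)^{1/2}
    quad = 0.5 * np.sum((1.0 / gam - 1.0 / lam) * u_coeffs**2)
    return log_prefactor + quad
```

Subtracting two such evaluations, at the current and proposed values of the hierarchical parameter, would give the prior contribution to the Metropolis–Hastings acceptance ratio for the length-scale update.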
Adler, A., Lionheart, W.R.B.: Uses and abuses of EIDORS: an extensible software base for EIT. Physiol. Meas. 27(5), S25–S42 (2006)
Agapiou, S., Bardsley, J.M., Papaspiliopoulos, O., Stuart, A.M.: Analysis of the Gibbs sampler for hierarchical inverse problems. SIAM/ASA J. Uncertain. Quantif. 2(1), 511–544 (2014)
Alvarez, L., Morel, J.M.: Formalization and computational aspects of image analysis. Acta Numer. 3, 1–59 (1994)
Arbogast, T., Wheeler, M.F., Yotov, I.: Mixed finite elements for elliptic problems with tensor coefficients as cell-centered finite differences. SIAM J. Numer. Anal. 34, 828–852 (1997)
Bear, J.: Dynamics of Fluids in Porous Media. Dover Publications, New York (1972)
Beskos, A., Roberts, G.O., Stuart, A.M., Voss, J.: MCMC methods for diffusion bridges. Stoch. Dyn. 8, 319–350 (2008)
Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, New York (2006)
Bolin, D., Lindgren, F.: Excursion and contour uncertainty regions for latent Gaussian models. J. R. Stat. Soc. Ser. B 77(1), 85–106 (2015)
Borcea, L.: Electrical impedance tomography. Inverse Probl. 18, R99–R136 (2002)
Burger, M.: A level set method for inverse problems. Inverse Probl. 17(5), 1327–1355 (2001)
Calvetti, D., Somersalo, E.: A Gaussian hypermodel to recover blocky objects. Inverse Probl. 23(2), 733–754 (2007)
Calvetti, D., Somersalo, E.: Hypermodels in the Bayesian imaging framework. Inverse Probl. 24(3), 34013 (2008)
Carrera, J., Neuman, S.P.: Estimation of aquifer parameters under transient and steady state conditions: 3. Application to synthetic and field data. Water Resour. Res. 22(2), 228–242 (1986)
Chung, E.T., Chan, T.F., Tai, X.-C.: Electrical impedance tomography using level set representation and total variational regularization. J. Comput. Phys. 205(1), 357–372 (2005)
Cotter, S.L., Roberts, G.O., Stuart, A.M., White, D.: MCMC methods for functions: modifying old algorithms to make them faster. Stat. Sci. 28(3), 424–446 (2013)
Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces, vol. 293. Cambridge University Press, Cambridge (2002)
Dashti, M., Stuart, A.M.: The Bayesian approach to inverse problems. In: Ghanem, R., Higdon, D., Owhadi, H. (eds.) Handbook of Uncertainty Quantification. Springer, Heidelberg (2016)
Dorn, O., Lesselier, D.: Level set methods for inverse scattering. Inverse Probl. 22(4), R67–R131 (2006)
Dunlop, M.M., Stuart, A.M.: The Bayesian formulation of EIT: analysis and algorithms. arXiv:1508.04106 (2015)
Filippone, M., Girolami, M.: Pseudo-marginal Bayesian inference for Gaussian processes. IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2214–2226 (2014)
Franklin, J.N.: Well posed stochastic extensions of ill posed linear problems. J. Math. Anal. Appl. 31(3), 682–716 (1970)
Fuglstad, G.-A., Simpson, D., Lindgren, F., Rue, H.: Interpretable priors for hyperparameters for Gaussian random fields. arXiv:1503.00256 (2015)
Geirsson, Ó.P., Hrafnkelsson, B., Simpson, D., Siguroarson, H.: The MCMC split sampler: a block Gibbs sampling scheme for latent Gaussian models. arXiv:1506.06285 (2015)
Girolami, M., Calderhead, B.: Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Stat. Soc. Ser. B 73(2), 123–214 (2011)
Hairer, M., Stuart, A.M., Vollmer, S.J.: Spectral gaps for Metropolis–Hastings algorithms in infinite dimensions. Ann. Appl. Prob. 24, 2455–2490 (2014)
Hanke, M.: A regularizing Levenberg–Marquardt scheme, with applications to inverse groundwater filtration problems. Inverse Probl. 13, 79–95 (1997)
Iglesias, M.A.: A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems. Inverse Probl. 32(2), 025002 (2016)
Iglesias, M.A., Dawson, C.: The representer method for state and parameter estimation in single-phase Darcy flow. Comput. Methods Appl. Mech. Eng. 196(1), 4577–4596 (2007)
Iglesias, M.A., Law, K.J.H., Stuart, A.M.: The ensemble Kalman filter for inverse problems. Inverse Probl. 29(4), 045001 (2013)
Iglesias, M.A., Lu, Y., Stuart, A.M.: A Bayesian level set method for geometric inverse problems. Interfaces and Free Boundary Problems (2016) (to appear)
Kaipio, J.P., Somersalo, E.: Statistical and Computational Inverse Problems. Springer, New York (2005)
Lasanen, S.: Non-Gaussian statistical inverse problems. Part I: posterior distributions. Inverse Probl. Imaging 6(2), 215–266 (2012)
Lasanen, S.: Non-Gaussian statistical inverse problems. Part II: posterior convergence for approximated unknowns. Inverse Probl. Imaging 6(2), 215–266 (2012)
Lasanen, S., Huttunen, J.M.J., Roininen, L.: Whittle-Matérn priors for Bayesian statistical inversion with applications in electrical impedance tomography. Inverse Probl. Imaging 8(2), 561–586 (2014)
Lehtinen, M.S., Paivarinta, L., Somersalo, E.: Linear inverse problems for generalised random variables. Inverse Probl. 5(4), 599–612 (1999)
Lindgren, F., Rue, H.: Bayesian spatial modelling with R-INLA. J. Stat. Softw. 63(19), 63–76 (2015)
Lorentzen, R.J., Flornes, K.M., Naevdal, G.: History matching channelized reservoirs using the ensemble Kalman filter. Soc. Pet. Eng. J. 17(1), 122–136 (2012)
Lorentzen, R.J., Nævdal, G., Shafieirad, A.: Estimating facies fields by use of the ensemble Kalman filter and distance functions-applied to shallow-marine environments. Soc. Pet. Eng. J. 3, 146–158 (2012)
Mandelbaum, A.: Linear estimators and measurable linear transformations on a Hilbert space. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 65(3), 385–397 (1984)
Matérn, B.: Spatial Variation, vol. 36. Springer Science & Business Media, Berlin (2013)
Marshall, R.J., Mardia, K.V.: Maximum likelihood estimation of models for residual covariance in spatial regression. Biometrika 71(1), 135–146 (1984)
Osher, S., Sethian, J.A.: Fronts propagating with curvature dependent speed: algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys. 79, 12–49 (1988)
Ping, J., Zhang, D.: History matching of channelized reservoirs with vector-based level-set parameterization. Soc. Pet. Eng. J. 19, 514–529 (2014)
Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. The MIT Press, Cambridge (2006)
Robert, C., Casella, G.: Monte Carlo Statistical Methods. Springer Science & Business Media, Berlin (2013)
Santosa, F.: A level-set approach for inverse problems involving obstacles. ESAIM 1(1), 17–33 (1996)
Sapiro, G.: Geometric Partial Differential Equations and Image Analysis. Cambridge University Press, Cambridge (2006)
Srinivasa Varadhan, S.R.: Probability Theory. Courant Lecture Notes. Courant Institute of Mathematical Sciences, New York (2001)
Somersalo, E., Cheney, M., Isaacson, D.: Existence and uniqueness for electrode models for electric current computed tomography. SIAM J. Appl. Math. 52(4), 1023–1040 (1992)
Stein, M.L.: Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, Berlin (2012)
Stuart, A.M.: Inverse problems: a Bayesian perspective. Acta Numer. 19, 451–559 (2010)
Tai, X.-C., Chan, T.F.: A survey on multiple level set methods with applications for identifying piecewise constant functions. Int. J. Numer. Anal. Model. 1(1), 25–48 (2004)
Tierney, L.: A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Prob. 8(1), 1–9 (1998)
van der Vaart, A.W., van Zanten, J.H.: Adaptive Bayesian estimation using a Gaussian random field with inverse gamma bandwidth. Ann. Stat. 37, 2655–2675 (2009)
Xie, J., Efendiev, Y., Datta-Gupta, A.: Uncertainty quantification in history matching of channelized reservoirs using Markov chain level set approaches. Soc. Pet. Eng. 1, 49–76 (2011)
Zhang, H.: Inconsistent estimation and asymptotically equal interpolations in model-based geostatistics. J. Am. Stat. Assoc. 99(465), 250–261 (2004)
1. Computing & Mathematical Sciences, California Institute of Technology, Pasadena, USA
2. School of Mathematical Sciences, University of Nottingham, Nottingham, UK
Dunlop, M.M., Iglesias, M.A. & Stuart, A.M. Stat Comput (2017) 27: 1555. https://doi.org/10.1007/s11222-016-9704-8
Received 13 January 2016
Accepted 09 September 2016
The effectiveness of intervention with board games: a systematic review
Shota Noda ORCID: orcid.org/0000-0001-7376-76301,
Kentaro Shirotsuki2 &
Mutsuhiro Nakao3
To examine the effectiveness of board games and programs that use board games, the present study conducted a systematic review using the PsycINFO and PubMed databases with the keywords "board game" AND "trial"; in total, 71 studies were identified. Of these 71 studies, 27 satisfied the inclusion criteria in terms of program content, intervention style, and pre–post comparisons and were subsequently reviewed. These 27 studies were divided into the following three categories regarding the effects of board games and programs that use board games: educational knowledge (11 articles), cognitive functions (11 articles), and other conditions (five articles). The effect sizes between pre- and post-tests or pre-tests and follow-up tests were 0.12–1.81 for educational knowledge, 0.04–2.60 and −1.14 to −0.02 for cognitive functions, 0.06–0.65 for physical activity, and −0.87 to −0.61 for symptoms of attention-deficit hyperactivity disorder (ADHD). The present findings showed that, as a tool, board games can be expected to improve the understanding of knowledge, enhance interpersonal interactions among participants, and increase the motivation of participants. However, because the number of published studies in this area remains limited, the possibility of using board games as a treatment for clinical symptoms requires further discussion.
A board game is a generic term for a game played by placing, moving or removing pieces on a board and that utilizes a game format in which pieces are moved in particular ways on a board marked with a pattern. Examples of board games include chess, Go, and Shogi. Research involving chess, which is played by two players on a board with 64 black and white squares and 16 pieces for each player [1], has contributed to the theoretical development of cognitive psychology [2]. For example, Burgoyne et al. [3] conducted a meta-analysis and demonstrated that chess skills are significantly and positively correlated with four broad cognitive abilities: fluid reasoning, comprehension-knowledge, short-term memory, and processing speed. Similarly, a meta-analysis by Sala and Gobet [4] found that chess instruction moderately improves the cognitive skills of children.
In contrast, Go is an ancient board game that consists of simple elements (a line and circle, black and white colors, and stone and wood materials) combined with simple rules that generate subtleties that have enthralled players for millennia [5]. Go is a famous board game in Asian countries and has been used as a tool for increasing or maintaining brain activity for more than 5000 years [6]. It is currently gaining popularity in the United States and Europe [6], and Kim et al. [7] have suggested that playing Go might be effective for children with attention-deficit hyperactivity disorder (ADHD) due to its activation of hypo-aroused prefrontal cortical function and the enhancement of executive function. Lin et al. [8] conducted an intervention study using Go in patients with Alzheimer's disease and showed that playing Go can also improve the clinical symptoms associated with depression, anxiety, and Alzheimer's Disease. Similar to chess and Go, Shogi is a board game for two players that is also referred to as Japanese chess. Wan et al. [9] conducted an experiment with undergraduate students and found that Shogi training is related to activation in the head of the caudate nucleus. Taken together, the abovementioned findings suggest that chess, Go, and Shogi are effective ways to achieve various outcomes.
There are many board games other than chess, Go, and Shogi. For example, educational board games, such as Kalèdo, have been used to improve nutrition knowledge and promote a healthy lifestyle for children [10]. Zeedyk et al. [11] investigated the effectiveness of a board game for increasing knowledge about road safety and danger and found that the interventions were significantly effective in increasing children's knowledge. Although the impacts of various board games have been previously examined, their effects have yet to be comprehensively reviewed. As a result, the functions and effects of board games as a whole remain unclear. Thus, the present review systematically examined the effectiveness of board games and programs that use board games.
For the present review, a literature search based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses [12] using the PsycINFO and PubMed databases was conducted to collect findings on the effectiveness of board games and programs using board games. The keywords for the literature search were "board game" AND "trial," and the date selected was September 13th, 2018. The search identified nine studies from PsycINFO and 32 studies from PubMed. The first author of this review performed a manual search that identified six additional studies, and 24 additional studies were extracted from Sala & Gobet [4], which conducted a meta-analysis about the benefits of chess. Duplicate studies were deleted and, ultimately, a list of references consisting of 66 articles was prepared.
The inclusion criteria for the present study were as follows: (a) studied the effects of board games and programs using board games on psychological and educational outcomes, (b) included pre–post comparative tests, (c) used an interventional or experimental rather than a review approach, (d) had full text availability, (e) was written in English, and (f) was peer reviewed. A screening was conducted to remove articles judged not to satisfy all of the criteria (a) to (f), and 29 articles were extracted. Additionally, one study was excluded because it did not use a traditional board game (it used a Wii Fit balance board), and one study was excluded because the content details of the board game were unclear. Ultimately, 27 articles were selected for the present study; the literature search process is presented in Fig. 1.
Fig. 1 PRISMA flow chart of the study selection process
Furthermore, in the studies where the means and standard deviations of the intervention group are described, Cohen's d was calculated to assess effect sizes between pre- and post-tests or between pre-tests and follow-up tests with the following formula based on Cohen [13].
$$ d=\frac{M_2 - M_1}{SD_{pooled}} $$
$$ SD_{pooled}=\sqrt{\frac{\left( n_2-1\right) SD_2^2+\left( n_1-1\right) SD_1^2}{n_2+n_1-2}} $$
Note: M1 and M2 are the mean of the intervention group at the pre-test session and the post-test session or follow-up test session, respectively. SDpooled is the pooled standard deviation (SD1 is the standard deviation of the intervention group at the pre-test session and SD2 is the standard deviation at the post-test session or follow-up test session). n1 is the number of samples at the pre-test session. n2 is the number of samples at the post-test session or follow-up test session.
In the studies where the means and standard deviations are described in the intervention group and the other groups, Cohen's d was also calculated to assess effect sizes compared to the other groups (control groups) with the following formula based on Sala et al. [14].
$$ d=\frac{M_{gi} - M_{gc}}{SD_{pooled\text{-}pre}} $$
$$ SD_{pooled\text{-}pre}=\sqrt{\frac{\left( n_i-1\right) SD_{pre.i}^2+\left( n_c-1\right) SD_{pre.c}^2}{n_i+n_c-2}} $$
Note: Mgi and Mgc are the mean gain of the intervention group and the control group (other group) at the post-test session or at the follow-up test session, respectively, and SDpooled-pre is the pooled standard deviation of the two pre-test standard deviations. SDpre.i is the standard deviation of the intervention group at the pre-test session, and SDpre.c is the standard deviation of the control group at the pre-test session. ni is the number of samples in the intervention group who received the pre-test session and post-test session or the pre-test session and follow-up test session. nc is the number of samples in the control group who received the pre-test session and post-test session or the pre-test session and follow-up test session.
According to Cohen [13], Cohen's d of approximately 0.20 is small, 0.50 medium, and 0.80 large.
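As an illustration only, the two effect-size formulas above can be computed as follows; the function names and the example numbers are hypothetical and are not taken from any of the reviewed studies.

```python
import numpy as np

def cohens_d_within(m_pre, sd_pre, n_pre, m_post, sd_post, n_post):
    """Within-group Cohen's d between pre-test and post-test (or follow-up),
    using the pooled standard deviation of the two sessions (Cohen, 1988)."""
    sd_pooled = np.sqrt(((n_post - 1) * sd_post**2 + (n_pre - 1) * sd_pre**2)
                        / (n_post + n_pre - 2))
    return (m_post - m_pre) / sd_pooled

def cohens_d_between(gain_int, gain_ctrl, sd_pre_int, n_int, sd_pre_ctrl, n_ctrl):
    """Between-group Cohen's d comparing the mean gains of the intervention and
    control groups, pooled over the two pre-test standard deviations."""
    sd_pooled_pre = np.sqrt(((n_int - 1) * sd_pre_int**2
                             + (n_ctrl - 1) * sd_pre_ctrl**2)
                            / (n_int + n_ctrl - 2))
    return (gain_int - gain_ctrl) / sd_pooled_pre

# Hypothetical example: a one-point mean gain over a pooled SD of about 2.1
# gives a small-to-medium effect of roughly d = 0.48.
print(cohens_d_within(m_pre=10.0, sd_pre=2.0, n_pre=30,
                      m_post=11.0, sd_post=2.2, n_post=30))
```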
The effect of interventions with board games
In the present review, the selected studies were divided into the following three categories regarding the effects of board games and programs that use board games: educational knowledge (11 articles), cognitive functions (11 articles), and other conditions (five articles).
An overview of the findings about the effects of board games and programs that use board games on educational knowledge is shown in Table 1 [10, 11, 15, 16, 17, 18, 19, 20, 21, 22, 23]. Board games in this category were used for the purpose of improving educational knowledge, and the effect sizes (Cohen's d) between pre- and post-tests or between pre-tests and follow-up tests ranged from 0.12 to 1.81, while those between the mean gain of the main intervention group and the other groups ranged from 0.81 to 0.93 and from −1.84 to −1.65.
Table 1 Overview of the studies reporting the effectiveness of board games in educational knowledge
An overview of the findings about the effects of board games and programs that use board games on cognitive functions is shown in Table 2 [6, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. This category included board games such as Go, Ska, and chess, and the effect sizes (Cohen's d) between pre- and post-tests of cognitive function ranged from 0.04 to 2.60 and from −1.14 to −0.02. A negative effect size, indicating exacerbation, was obtained only in the chess group of Sala & Gobet [27]. The effect sizes (Cohen's d) between the mean gain of the main intervention group and the other groups ranged from 0.06 to 2.36 and from −1.38 to −0.22.
Table 2 Overview of the studies reporting the effectiveness of board games in cognitive functions
An overview of the findings about the effects of board games and programs that use board games on other conditions is shown in Table 3 [7, 8, 34, 35, 36]. This category addressed the impacts of board games on physical activity, anxiety, ADHD symptoms, and the severity of Alzheimer's Disease. The effect sizes (Cohen's d) between pre- and post-tests or between pre-tests and follow-up tests ranged from 0.06 to 0.65 for physical activity and from −0.87 to −0.61 for ADHD symptoms.
Table 3 Overview of the studies reporting the effectiveness of board games in the other conditions
Board games and educational knowledge
Eleven studies that used board games to increase educational knowledge were selected for this review. The present findings showed that board games influence educational knowledge and concomitant outcomes, with the effect sizes for educational knowledge ranging from very small to large.
Board games can be used as a tool to encourage learning. In previous studies, specialized board games aimed at improving knowledge in the field of education were targeted and subsequently developed and investigated. For example, Wanyama et al. [16] conducted a study of the Make a Positive Start Today game, which is a board game aimed at improving knowledge about human immunodeficiency virus (HIV) and sexually transmitted infections (STIs). Similarly, Kalèdo is an educational board game used to increase nutrition knowledge [10, 19, 21]. It has been shown that these board games contribute to increasing knowledge related to each particular field.
Board games are also efficacious for goals other than increasing knowledge. According to Charlier and De Fraine [22], board games can be an enjoyable and motivational method for learning content and enhancing group interactions, competition, and fun. Martins et al. [18] reported that board games teach educational content in a playful and enjoyable way and involve interactions with family and friends; thus, they favor knowledge acquisition by enabling exchanges of experiences and learning. Furthermore, Wanyama et al. [16] showed that, as a method of health education, board games increase the acquisition of knowledge as well as result in more positive experiences than do health talks among both participants and facilitators. Amaro et al. [10] found that class teachers noted improvements in student interest and appreciation of the board game. Taken together, these findings suggest that board games may improve the motivation of participants. Furthermore, Karbownik et al. [20] showed that a board game was warmly welcomed by students; in their opinion, it facilitated clinical thinking and peer communication. Therefore, board games may also have a positive influence on interpersonal interactions among participants.
Based on the above findings, board games can be used as a tool to encourage learning as well as to enhance motivation and interpersonal interactions. In clinical treatment, it is important to increase motivation because low motivation to cooperate with a particular intervention may lead to a patient dropping out of treatment or to interference with the therapeutic effects. Accordingly, the use of board games may help increase the benefit of treatment for less motivated patients.
Board games and cognitive functions
In the present review, 11 of the assessed studies investigated the effects of board games and programs that use board games on cognitive functions. These studies used Go, chess, and Ska, which are not educational games but abstract strategy games. Studies investigating the use of Go found that older adults experiencing cognitive decline and/or living in nursing homes showed improvements in attention and working memory after regularly playing the game [6]. Studies assessing the use of Ska found that the game appeared to enhance the cognitive functioning of older adults in terms of memory, attention, and executive function [24]. Studies evaluating chess showed that training with the game improved the planning ability of patients with schizophrenia and the mathematical ability of children [25, 26]. However, Sala & Gobet [27] indicated that chess interventions did not differ significantly from checkers interventions or regular school activities with respect to the mathematical and metacognitive abilities of children.
The effect sizes for cognitive functions ranged from very small to large, although a negative effect size, indicating exacerbation of metacognitive ability, was observed in the chess training of Sala & Gobet [27]. The number of studies included in this category was relatively limited, and further investigations will be necessary to clarify the detailed effects of board games on cognitive function. No articles about Shogi were selected for this category in the present review; however, because Shogi is also an abstract strategy game, it may likewise influence cognitive functions. In the future, it will be necessary to use intervention studies to examine the effects of additional types of board games, including Shogi, on cognitive function.
Board games and other conditions
The "other studies" category in the present review included five studies that examined the effects of board games on physical activity, physical and psychological outcomes, ADHD symptoms, and the severity of Alzheimer's Disease. Mouton et al. [34] showed that a giant board game intervention for nursing home residents led to significant increases in ambulatory physical activity, daily energy output, quality of life, balance and gait, and ankle strength. The effect sizes in the present review of studies related to physical activity ranged from very small to medium. Fernandes et al. [35] reported that board games used as educational preoperative materials decreased the preoperative anxiety of children. Additionally, the use of board games contributed to improvements in the ADHD symptoms of children [7, 35]. The effect sizes for ADHD symptoms in the present review ranged from medium to large. Lin et al. [8] showed that playing Go improved the symptoms of depression and anxiety and ameliorated the manifestations of Alzheimer's Disease. Although a study by Barzegar and Barzegar [37] was not selected for the present review because it was a case report, these authors found that playing chess prevented panic attacks and contributed to the amelioration of this condition. Taken together, these findings indicate that board games might be an effective complementary intervention for the treatment of the clinical symptoms of ADHD and Alzheimer's Disease.
In terms of Alzheimer's disease, board games may also play a role in the prevention of the onset of this disorder. According to an epidemiological survey in Japan [38], the prevalence rates of dementia in 1980, 1990, and 2000 were 4.4, 4.5, and 5.9, respectively, for all types of dementia and 1.9, 2.5, and 3.6, respectively for Alzheimer's Disease. In Japan, the number of patients with Alzheimer's disease has increased, and the prevention of this disorder is a problem that must be addressed. Because playing board games ameliorates the manifestations of Alzheimer's disease [8], these types of games may contribute to the prevention of this disorder. However, the number of studies in the present review that investigated the effects of board games on clinical symptoms was quite small, and further research will be required.
Possible clinical applications of board games
It is also important to note that board games can be played without the use of language. Language-based therapies may not be appropriate for people with underdeveloped linguistic functions, such as children and patients with speech disorders. However, board games may be a viable treatment option for these populations. In the present review, the subjects in 18 of the assessed studies included children, which is a group that is still developing linguistic functions and is more likely to have poor knowledge about diseases. The present review also revealed that board games and programs that use board games are effective for achieving various outcomes for children, including increasing educational knowledge, enhancing cognitive functions, and decreasing anxiety and the severity of ADHD. Furthermore, board games can be an enjoyable and motivational tool for children [22]. Based on these findings, it is possible that board games can be a useful intervention for children in particular because such games can be expected to result in the maintenance and promotion of health and the prevention of disease.
Limitations and future directions
Several limitations of the present study must be considered. First, the number of studies assessed in the present review was rather limited; therefore, further investigations of the effects of board games will be necessary. Second, many of the papers selected for the present review examined the effectiveness of board games by comparing pre- and post-intervention measures within a single group or by comparing the intervention group with a control group that received no intervention. These research designs do not control for the possibility of placebo effects. Intervention studies must include an active control group to control for possible placebo effects [39]; thus, it will be necessary to compare the effects of board game groups and active control groups in future research. Third, some of the articles selected for the present review were conducted with relatively small sample sizes. When the sample size is small, there is the possibility of increased sampling error. To reduce sampling error, it is necessary to conduct a power analysis to set an appropriate sample size in intervention studies. In addition, it is desirable that multiple assessment indicators be used to examine the effects of board games from various perspectives and to reduce measurement errors.
The present systematic review showed that board games and programs that use board games have positive effects on various outcomes, including educational knowledge, cognitive functions, physical activity, anxiety, ADHD symptoms, and the severity of Alzheimer's Disease. Additionally, board games were shown to contribute to improving these variables, enhancing the interpersonal interactions and motivation of participants, and promoting learning. Taken together, these findings suggest that board games would be an effective complementary therapy that would contribute to the improvement of many clinical symptoms.
cRCT: cluster randomized controlled trial
HIV: human immunodeficiency virus
RCT: randomized controlled trial
STIs: sexually transmitted infections
El Daou BMN, El-Shamieh SI. The effect of playing chess on the concentration of ADHD students in the 2nd cycle. Procedia Soc Behav Sci. 2015;192:638–43.
Charness N. The impact of chess research on cognitive science. Psycho Res. 1992;54:4–9.
Burgoyne AP, Sala G, Gobet F, Macnamara BN, Campitella G, Hambrick DZ. The relationship between cognitive ability and chess skill: a comprehensive meta-analysis. Intelligence. 2016;59:72–83.
Sala G, Gobet F. Do the benefits of chess instruction transfer to academic and cognitive skill?: a meta-analysis. Educ Res Rev. 2016;18:46–57.
American Go Association. What is Go? American Go Association. Retrieved from http://www.usgo.org/what-go (January 22, 2019).
Iizuka A, Suzuki H, Ogawa S, Kobayashi-Cuya KE, Kobayashi M, Fujiwara Y. Pilot randomized controlled trial of the GO game intervention on cognitive function. Am J Alzheimers Dis Other Dement. 2018;33(3):192–8.
Kim SH, Han DH, Lee YS, Kim BN, Cheong JH, Han SH. Baduk (the game of go) improved cognitive function and brain activity in children with attention deficit hyperactivity disorder. Psychiatry Investig. 2014;11(2):143–51.
The authors appreciate the support of the members of the Japan Shogi Association and the officials in Kakogawa City for conceptualizing health promotion models using board games, such as Shogi and other traditional games.
This paper was proofread in English by Textcheck (reference number: 19022617).
Graduate School of Human and Social Sciences, Musashino University, 3-3-3 Ariake, Koutouku, Tokyo, 135-8181, Japan
Shota Noda
Faculty of Human Sciences, Musashino University, Tokyo, Japan
Kentaro Shirotsuki
Department of Psychosomatic Medicine, School of Medicine, International University of Health and Welfare, Chiba, Japan
Mutsuhiro Nakao
SN designed the study and conducted the literature searches. SN wrote the first draft of the manuscript. KS and MN revised the draft of the manuscript. All authors approved the final version of the manuscript.
Correspondence to Shota Noda.
Noda, S., Shirotsuki, K. & Nakao, M. The effectiveness of intervention with board games: a systematic review. BioPsychoSocial Med 13, 22 (2019) doi:10.1186/s13030-019-0164-1
|
CommonCrawl
|
2D transformation matrices for translation, scaling, and rotation
(A demo application is available on Github.)
This post develops the foundational matrices for translation, scaling, and rotation of control points. In computer graphics, these matrices provide an efficient and uniform way of manipulating points.
Orthogonal coordinate system
We start by formalizing a traditional 2D coordinate system. It may be defined in terms of (1) orthogonal vectors \(\vec u\) and \(\vec v\) for the \(x\) and \(y\) axis and (2) origin \(O\) for the point of origin. Origin is the point of intersection between \(\vec u\) and \(\vec v\) and is commonly \((0,0)\). The difference between point and vector is best illustrated by a few examples: \(Q = P + \vec v\) and \(\vec v = Q - P\).
A point \(P\) may be defined in terms of the vectors \(\vec u\) and \(\vec v\). To arrive at \(P\) we start at origin \(O\), then move some distance along \(\vec u\) followed by a move some distance along \(\vec v\):
$$P = x\vec u + y\vec v + O\\ P = 0.5\vec u + 0.25\vec v + O\\ P = 0.5\begin{bmatrix}1\\0\end{bmatrix} + 0.25\begin{bmatrix}0\\1\end{bmatrix} + \begin{bmatrix}0\\0\end{bmatrix} = \begin{bmatrix}0.5\\0.25\end{bmatrix}$$ Convention dictates that \(\vec u\) and \(\vec v\) are perpendicular unit vectors, that \(O\) is the zero vector, and that \(P\) is commonly written \((x,y) = (0.5, 0.25)\) with \(x\) and \(y\) being the coordinates of \(P\). Going forward, we'll also write points in matrix form: $$P = \begin{bmatrix}x\ y\ 1\end{bmatrix} \begin{bmatrix}\vec u\\ \vec v\\ O \end{bmatrix} = x\vec u + y\vec v + O$$ For brevity, the 3x1 matrix is mostly left out, but it's still there. \(P\) in matrix form is then \(\begin{bmatrix}x\ y\ 1\end{bmatrix}\) with \(1\) added to carry over origin \(O\). Applying translation, scaling, and rotation, \(O\) must remain unaffected.
Translating a point by \(a\) in \(x\) direction and by \(b\) in \(y\) direction may be expressed by the following relationship between source and destination vectors:
$$\begin{bmatrix}x & y & 1\end{bmatrix} \rightarrow \begin{bmatrix}x + a & y + b & 1\end{bmatrix}$$
Transforming any left matrix to a right matrix is akin to applying a postfix operator to the left side. With respect to translation, the \(T_{a,b}\) operator is matrix multiplication of the left side with a special translation matrix parameterized by \(a\) and \(b\). Expanding the multiplication allows us to verify the correctness of \(T_{a,b}\): $$\begin{bmatrix}x & y & 1\end{bmatrix} \begin{bmatrix}1 & 0 & 0\\0 & 1 & 0\\a & b & 1\end{bmatrix} = \begin{bmatrix}1x + 0y + 1a & 0x + 1y + 1b & 0x + 0y + 1\end{bmatrix} = \begin{bmatrix}x + a & y + b & 1\end{bmatrix}$$ Translation, scaling, and rotation matrices are characterized by their sparseness and that the last column is always the same to carry over origin \(O\).
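To make the convention concrete, here is a minimal numpy sketch (not taken from the demo application; the names are illustrative) that builds \(T_{a,b}\) and applies it to a point written as a row vector:

```python
import numpy as np

def translation(a, b):
    """Postfix translation matrix T_{a,b} for row vectors [x, y, 1]."""
    return np.array([[1, 0, 0],
                     [0, 1, 0],
                     [a, b, 1]], dtype=float)

P = np.array([0.5, 0.25, 1.0])   # the point (0.5, 0.25) with 1 carried along for origin O
print(P @ translation(2, 3))     # -> [2.5, 3.25, 1.0]
```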
Scaling a point by \(a\) in \(x\) direction and \(b\) in \(y\) direction happens with respect to origin \(O\). The relationship between source and destination vectors is
$$\begin{bmatrix}x & y & 1 \end{bmatrix}\rightarrow \begin{bmatrix}ax & by & 1\end{bmatrix}$$
Satisfying this relationship requires a scaling matrix \(S_{a, b}\) as follows: $$\begin{bmatrix}x & y & 1\end{bmatrix} \begin{bmatrix}a & 0 & 0\\ 0 & b & 0\\ 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}ax & by & 1\end{bmatrix}$$ Scaling works as intended when, for a set of points, their centroid \(C = O\). A square with control points \(\{(-1,1),(1,1),(1,-1)\}\) satisfies this condition. Applying \(S_{2,2}\) to each point has the effect of scaling from a 2x2 to a 4x4 square.
On the other hand, a square with control points \(\{(1,3),(3,3),(3,1)\}\) has centroid \(C = (2, 2)\). Applying \(S_{2, 2}\) to each point has the effect of scaling from a 2x2 to a 4x4 square and of moving the centroid \(C\). To prevent this movement, each point must first be translated such that \(C = O\), then scaled, and finally translated back:
$$T_{-a,-b} S_{c,d} T_{a,b}$$
The centroid \(C\) of a set of points, assuming each has equal weight, is defined as $$\left(a = \overline{x}, b = \overline{y}\right) = \left(\frac{1}{n}\sum_{i=1}^{n}x_{i},\frac{1}{n}\sum_{i=1}^{n}y_{i}\right)$$
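Under the same row-vector convention, scaling about a centroid can be sketched as the composite \(T_{-a,-b} S_{c,d} T_{a,b}\) — a hedged illustration rather than the demo application's actual code:

```python
import numpy as np

def translation(a, b):
    return np.array([[1, 0, 0], [0, 1, 0], [a, b, 1]], dtype=float)

def scaling(a, b):
    return np.array([[a, 0, 0], [0, b, 0], [0, 0, 1]], dtype=float)

# Three corners of the 2x2 square from the example above; its centroid is (2, 2).
points = np.array([[1, 3, 1], [3, 3, 1], [3, 1, 1]], dtype=float)
a, b = 2.0, 2.0

M = translation(-a, -b) @ scaling(2, 2) @ translation(a, b)
print(points @ M)   # the square doubles in size while its centroid stays at (2, 2)
```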
Rotating a point by \(\theta\) around origin implies that the radius from origin to \((x, y)\) and the radius from origin to \((x', y')\) remain unaffected: $$\begin{bmatrix}x & y & 1\end{bmatrix} \rightarrow \begin{bmatrix}x' & y' & 1\end{bmatrix}$$ With radius \(r\) and angle \(\phi\) between the \(x\) axis and the point, the left side becomes $$x = r\cos\phi\\ y = r\sin\phi$$ Right side is then \(x\) and \(y\) with an additional rotation \(\theta\) added: $$\begin{eqnarray*} x' = r\cos(\phi + \theta) & & \\ x' = r\cos \phi \cos \theta - r \sin \phi \sin \theta & & \textrm{\{apply identity sum: \(\cos(u + v) = \cos u \cos v - \sin u \sin v\)\}}\\ x' = x \cos \theta - y \sin \theta & & \textrm{\{substitute in definitions of \(x\) and \(y\)\}}\\\\ y' = r\sin(\phi + \theta) & & \\ y' = r \sin \phi \cos \theta + r \cos \phi \sin \theta & & \textrm{\{apply identity sum: \(\sin(u + v) = \sin u \cos v + \cos u \sin v\)\}}\\ y' = y \cos \theta + x \sin \theta & & \textrm{\{substitute in definitions of \(x\) and \(y\)\}} \end{eqnarray*}$$ In matrix form, the rotation matrix \(R_\theta\) becomes $$\begin{bmatrix}x & y & 1\end{bmatrix} \begin{bmatrix}\cos\theta & \sin\theta & 0\\ -\sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}x' & y' & 1 \end{bmatrix}$$ As with scaling, rotation must happen around origin, possibly requiring two translations: $$T_{-a,-b} R_\theta T_{a, b}$$ If both scaling and rotation are needed, the translation back and forth need happen only once.
The translation, scaling, and rotation matrices are collectively referred to as the elementary matrices in computer graphics. Rather than apply transformations to a point one at a time, transformation matrices may be combined into a single matrix \(M\) for reuse across points: $$\begin{bmatrix}x & y & 1\end{bmatrix} M \begin{bmatrix}\vec u\\ \vec v\\ O \end{bmatrix} = \begin{bmatrix}x' & y' & 1 \end{bmatrix}$$ Refer to Lecture 3: Moving Objects in Space of this lecture series for a run-down of each transformation. Around 31m30s 3D versions of matrices are introduced.
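Along the same lines, scaling and rotation about a common centroid can be folded into a single reusable matrix \(M\); the sketch below is illustrative only, with arbitrary example values for the angle and scale factors:

```python
import numpy as np

def translation(a, b):
    return np.array([[1, 0, 0], [0, 1, 0], [a, b, 1]], dtype=float)

def scaling(a, b):
    return np.array([[a, 0, 0], [0, b, 0], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=float)

# One combined matrix: translate the centroid to origin, scale, rotate, translate back.
a, b = 2.0, 2.0
M = translation(-a, -b) @ scaling(2, 2) @ rotation(np.pi / 2) @ translation(a, b)

P = np.array([1.0, 3.0, 1.0])
print(P @ M)   # the same M can be reused for every control point
```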
|
CommonCrawl
|
Garif'yanov, Nurgayaz Salihovich
Total publications: 16
Scientific articles: 16
Doctor of physico-mathematical sciences
http://www.mathnet.ru/eng/person37789
1. N. S. Garif'yanov, B. M. Khabibullin, È. G. Kharakhashyan, T. O. Alekseeva, "Electronspin resonance on conduction electrons in liquid sodium", Dokl. Akad. Nauk SSSR, 180:3 (1968), 569–571
2. N. S. Garif'yanov, B. M. Kozyrev, V. N. Fedotov, "Spectral line width in electron spin resonance of the liquid solutions of ethylene glycol complex for even and odd isotopes of chromium", Dokl. Akad. Nauk SSSR, 178:4 (1968), 808–810
3. N. N. Lezhnev, B. M. Kozyrev, N. S. Garif'yanov, Yu. M. Ryzhmanov, I. S. Novikova, "The probable mechanism underlying the interaction of carbon black with phenyl-2-naphthylamine and mercaptobenzothiazole", Dokl. Akad. Nauk SSSR, 159:5 (1964), 1127–1130
4. N. S. Garif'yanov, B. M. Kozyrev, V. N. Fedotov, "Electron spin resonance in rhodanide complexes of pentavalent molybdenum and pentavalent tungsten", Dokl. Akad. Nauk SSSR, 156:3 (1964), 641–643
5. N. S. Garif'yanov, "Certain pentavalent chromium complexes as investigated by means of electron-spin resonance", Dokl. Akad. Nauk SSSR, 155:2 (1964), 385–388
6. N. S. Garif'yanov, N. S. Kucheryavenko, V. N. Fedotov, "Some pentavalent molybdenum solutions investigated by the electron spin resonance method", Dokl. Akad. Nauk SSSR, 150:4 (1963), 802–804
7. A. V. Il'yasov, N. S. Garif'yanov, R. Kh. Timerov, "The spin-lattice interaction in magnetically diluted free radicals", Dokl. Akad. Nauk SSSR, 150:3 (1963), 588–591
8. N. S. Garif'yanov, A. V. Il'yasov, Yu. V. Yablokov, "Electron spin resonance in liquid and supercooled solutions of some free radicals", Dokl. Akad. Nauk SSSR, 149:4 (1963), 876–879
9. N. S. Garif'yanov, B. M. Kozyrev, E. I. Semenova, "Electron spin resonance in divalent silver compounds", Dokl. Akad. Nauk SSSR, 147:2 (1962), 365–367
10. V. N. Fedotov, N. S. Garif'yanov, B. M. Kozyrev, "Electron spin resonance in $\mathrm{Nb}^{4+}$", Dokl. Akad. Nauk SSSR, 145:6 (1962), 1318–1320
11. N. S. Garif'yanov, N. F. Usacheva, "Electron paramagnetic resonance in $\mathrm{Cr}^{5+}$ solutions", Dokl. Akad. Nauk SSSR, 145:3 (1962), 565–566
12. N. S. Garif'yanov, E. I. Semenova, "Paramagnetic electron resonance in some aqueous complexes of salts", Dokl. Akad. Nauk SSSR, 140:1 (1961), 157–158
13. N. S. Garif'yanov, "The superfine structure of the electron paramagnetic resonance line in aqueous solutions of $\mathrm{V}^{2+}$ salts", Dokl. Akad. Nauk SSSR, 138:3 (1961), 612–613
14. N. S. Garif'yanov, B. M. Kozyrev, "The influence of oxygen on the paramagnetic resonance absorption in $\alpha\alpha$-diphenyl-$\beta$-picrylhydrazyl", Dokl. Akad. Nauk SSSR, 118:4 (1958), 738–739
15. N. S. Garif'yanov, M. M. Zaripov, B. M. Kozyrev, "On the spin value of the $\mathrm{Fe}^{57}$ nucleus", Dokl. Akad. Nauk SSSR, 113:6 (1957), 1243
16. N. S. Garif'yanov, B. M. Kozyrev, "Hyperfine structure of paramagnetic resonance lines in solutions of $\mathrm{Mn}^{++}$ and $\mathrm{VO}^{++}$ salts", Uchenye Zapiski Kazanskogo Universiteta, 114:8 (1954), 83–89
Zavoisky Physical Technical Institute of Kazan Branch of the USSR Academy of Sciences
|
CommonCrawl
|
Preliminary observation on the sustainability of white sardine, Escualosa thoracata (Valenciennes, 1847), exploited from the central west coast of India
Udai R. Gurjar1,2,
Suman Takar1,3,
Milind S. Sawant2,
Ravindra A. Pawar2,
Vivek H. Nirmale2,
Anil S. Pawase2,
Sushanta K. Chakraborty1,
Karan K. Ramteke1 &
Tarachand Kumawat ORCID: orcid.org/0000-0003-2929-54151,4
The present study assessed the growth and mortality parameters of the white sardine, Escualosa thoracata, which has high local demand. The white sardine has gained importance due to its taste, and its high demand in domestic markets compared with the oil sardine necessitated a study of this resource to determine the present level of exploitation along the central west coast of India.
A total of 3026 individuals of different size groups of E. thoracata were randomly collected from the Burondi fish landing center of the Ratnagiri district of Maharashtra. The asymptotic length (L∞) and growth coefficient (K) were estimated to be 115 mm and 1.9 year−1, respectively, by ELEFAN-I and 135 mm and 1.2 year−1 by the scattergram. The value of t0 by von Bertalanffy plot was estimated to be −0.000012 year. The fish attained a length of 65 mm, 94 mm, and 114 mm at the end of 0.5, 1, and 1.5 years of its life, respectively. The instantaneous rate of total mortality (Z), natural mortality (M), and fishing mortality (F) were estimated to be 8.07 year−1, 2.55 year−1, and 5.52 year−1, respectively. The exploitation rate (U) was calculated as 0.65, and the exploitation ratio (E) was 0.68.
The growth, mortality, and other population parameters observed in the present study help to clarify the current stock status, which points toward overfishing (E > 0.50) of the white sardine in the study area. Therefore, the present investigation suggests reducing the fishing pressure on E. thoracata along the central west coast of India for the sustainability of the resource.
White sardine, Escualosa thoracata (Valenciennes, 1847), supported economically important fishery along the south-west coast of India (Nair, 1952), and it also occurs in swarms on the east coast of India (Mookerjee & Bhattacharya, 1950). This species was recorded from India, Pakistan, Sri Lanka, Myanmar, Malaya, Malay Archipelago, China, eastward to the Philippines, Papua New Guinea, and Australia (Krishnan & Mishra, 2004; Misra, 1947). It is a shoaling clupeid, inhabiting most of the shallow coastal waters of India. White sardines form a seasonal fishery throughout their range of distribution along the Indian coast (Abdussamad et al., 2018). The fishery of E. thoracata has gained importance in recent years due to its huge demand in domestic markets (Gurjar et al., 2017). Increasing demand for seafood and advancement in harvest technology have led to the overexploitation of marine resources (Takar & Gurjar, 2020). The total marine fish catch recorded during 2017 from Maharashtra was 3.81 lakh tons, with the pelagic group contributing 39% of the catch (CMFRI, 2018). This fish was captured using gill nets and cast nets operated in shallow creeks, supporting a common fishery along the Harnai-Dabhol coast of Ratnagiri, Maharashtra (Gurjar, Sawant, Pawar, Nirmale, & Pawase, 2018).
Knowledge of length-weight, growth, and mortality parameters is essential for understanding the population dynamics of different species (Gosavi, Kharat, Kumkar, & Tapkir, 2019; Gurjar et al., 2017; Kende et al., 2020). Several approaches are generally used to determine the age and growth of aquatic species, such as length-frequency analysis, tagging and recapture experiments, and observation of marks on various hard body parts like scales, otoliths, spines, and vertebrae (Stéquert, Panfili, & Dean, 1996). The population characteristics of E. thoracata were recorded from the north-west coast of India (Prajapat, 2015; Rahangdale, Chakraborty, Jaiswar, Shenoy, & Raje, 2016; Raje, Deshmukh, & Thakurdas, 1994) and the south-west coast of India (Abdussamad et al., 2018). However, there is no report on the population dynamics and stock status of white sardine resources from Ratnagiri, on the central west coast of India. As the distribution of white sardine along Ratnagiri is restricted to the Harnai-Dabhol coast, the present level of commercial exploitation, the highly seasonal nature of the fishery, and its high local demand required an in-depth study of the fishery and stock status. Therefore, this study was designed to assess the growth and mortality parameters and determine the present level of exploitation of E. thoracata along the Ratnagiri, central west coast of India.
Sample collection and study area
The specimens of different size groups of E. thoracata were randomly collected twice in a month from the Burondi (Lat 17°42′55.5″ N and Long 73°07′81.4″ E) fish landing center in the Ratnagiri district of Maharashtra during the period from February 2015 to January 2016.
Estimation of age and growth parameters
The total lengths of all samples (3026 individuals) were measured for the estimation of length-frequency distribution. Length-frequency data were grouped into 5-mm class intervals and raised for the day and subsequently for the month as followed by Sekharan (1962). The initial estimate of growth parameters (L∞ and K) was made using the scatter-diagram technique (Devaraj, 1983). In this method, the modal lengths in the length-frequency data are first plotted in the form of a scatter diagram against coordinates of length starting from 0 onward on the Y-axis and time in months on the X-axis. An eye-fitted line indicates the trend in the progression of the modes through time. The fitted line was extrapolated in a freehand manner with reference to the intermodal slopes so that it intersects the time axis, indicating the time of brood release and the growth of the brood since its origin through successive months. The mean length read at monthly intervals was obtained by taking an average of eye fitted line, which was used for the calculation of L∞ and K employing von Bertalanffy's equation (1934):
$$ L_t = L_\infty\left(1 - e^{-K\left(t - t_0\right)}\right) $$
Applying standard regression analysis, the a and b values of Lt against Lt+1 in the data were obtained. From this, asymptotic length (L∞) was calculated as:
$$ L_\infty = a/(1-b)\ \mathrm{and\ growth\ coefficient}\ K = -\ln b $$
The growth parameters (L∞ and K) were also estimated by ELEFAN-I, using length-frequency by employing the FiSAT program. Length at age data was produced by using mean length obtained in scatter-diagram technique and employing inverse von Bertalanffy growth formula (VBGF) (von Bertalanffy, 1938) as:
$$ t={t}_0-\left( 1/K\right)\ast \mathit{\ln}\ \left( 1- Lt/{L}_{\infty}\right) $$
Hypothetical age at which length is zero (t0) was estimated using von Bertalanffy (1934) plot. The growth performance index, phi prime (φ′), was estimated by following Munro and Pauly (1983), given as:
$$ \varphi^{\prime }= 2\times \mathit{\log}\left({L}_{\infty}\right)+\mathit{\log}(K) $$
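For orientation, these growth relations reduce to a few lines of arithmetic. The sketch below (Python, illustrative only) uses the ELEFAN-I estimates reported in the Results (L∞ = 115 mm, K = 1.9 year−1, t0 ≈ 0); because the lengths-at-age reported later were read from the scatter-diagram fit, the values printed here differ slightly from 65, 94, and 114 mm.

```python
import numpy as np

L_inf, K, t0 = 115.0, 1.9, -0.000012   # ELEFAN-I estimates (mm, year^-1, year)

def vbgf_length(t):
    """von Bertalanffy growth function: length (mm) at age t (years)."""
    return L_inf * (1 - np.exp(-K * (t - t0)))

for t in (0.5, 1.0, 1.5):
    print(f"L({t}) = {vbgf_length(t):.1f} mm")

# Growth performance index (Munro & Pauly); the reported phi' = 2.40
# is recovered when L_inf is expressed in cm.
phi_prime = 2 * np.log10(L_inf / 10) + np.log10(K)
print(f"phi' = {phi_prime:.2f}")
```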
Estimation of mortality parameters and exploitation status
The total instantaneous mortality (Z) was calculated by following the length-converted catch curve (Pauly, 1983, 1984) and cumulative catch curve method (Jones & Van Zalinge, 1981) by employing the FiSAT program. The natural mortality coefficient (M) was estimated by using the method of Pauly (1980) and Cushing (1968). The fishing mortality (F) was calculated using the relationship, F = Z−M. The exploitation ratio (E) was estimated by the formula given by Ricker (1975), E = F/Z, and exploitation rate (U) was calculated by using formula U = (F/Z)*(1 − e−z) given by Beverton and Holt (1957).
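These relations are straightforward to evaluate; a minimal sketch (Python) using the Z and M estimates reported in the Results is given below. Depending on which Z estimate enters the formulas, the computed exploitation rate can differ slightly from the reported U = 0.65.

```python
import math

Z = 8.07   # total mortality (length-converted catch curve), year^-1
M = 2.55   # natural mortality (Cushing), year^-1

F = Z - M                            # fishing mortality
E = F / Z                            # exploitation ratio (Ricker, 1975)
U = (F / Z) * (1 - math.exp(-Z))     # exploitation rate (Beverton & Holt, 1957)

print(f"F = {F:.2f} year^-1, E = {E:.2f}, U = {U:.2f}")
```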
The white sardine, E. thoracata, is locally (along the central west coast of India) known as "Bhiljee" due to its silvery-white shiny appearance. Small fishing boats were engaged in fishing for E. thoracata in the nearshore areas along the Ratnagiri coast at a depth of 8–10 m, using a small-meshed (20–22 mm mesh size) drift gill net locally called Bhiljee jaal or Bhushi. For the present study, a total of 3026 fish specimens of E. thoracata were used.
Length-frequency analysis
During the study period, the randomly collected samples fell within the total length range of 69–110 mm and were dominated by the 86–90 to 101–105 mm size groups, caught from 60 to 70 small fishing boats. A progressive increase in the dominant size group in the catch was observed from February 2015 to January 2016 (Fig. 1).
Percentage length composition of the catch of E. thoracata
Estimation of growth parameters and growth performance index
The scatter-diagram technique yielded five curves of almost identical shape, and the growth was observed at monthly intervals. The modal lengths observed at different months are presented in Fig. 2 where the growth parameters (L∞ and K) were estimated at 135 mm and 1.2 year−1 respectively. The L∞ and K values estimated by ELEFAN–I were recorded as 115 mm and 1.9 year−1 respectively for E. thoracata, and these values were considered for further calculations to estimate population parameters. The value of hypothetical age at length zero (t0) was calculated as –0.000012 year by von Bertalanffy's plot. The growth performance index or phi prime (φ′) for E. thoracata recorded from Ratnagiri waters was 2.40.
Model progression analysis of length frequencies observed in E. thoracata by scatter diagram technique
Estimation of length at age
Precise information on the age of fish at a particular length is an essential pre-requisite for getting accurate information on recruitment, growth, and mortality parameters of fishes for stock assessment. The mean length estimated by the scatter-diagram technique was considered for the calculation of age attained using the VBGF in this study. It was recorded that E. thoracata attains a length of 65 mm, 94 mm, and 114 mm at the end of 0.5, 1, and 1.5 years of its life, respectively. The maximum size recorded during the study period was 110 mm, and the corresponding age was calculated at 1.40 years (Fig. 3).
VBGF growth curve for estimation of length at age of E. thoracata
Estimation of mortality parameters and exploitation ratio
Mortality parameters are a measure of delineating the rate at which fish vanish from a population and are crucial parameters in framing sustainable fishing rules and regulations (Ogle, 2016). The value of Z estimated by the length converted catch curve method (Fig. 4) and the cumulative catch curve method (Fig. 5) was found to be 8.07 year−1 and 6.22 year−1, respectively. The natural mortality coefficient (M) was estimated to be 1.80 year−1 and 2.55 year−1 by Pauly's and Cushing's formula, respectively. The annual fishing mortality coefficient (F) was estimated by subtracting natural mortality (M) from the total mortality coefficient (Z), which gives the value of 5.52 year−1. In the present study, M/K and Z/K values were calculated as 1.34 and 4.25, respectively, for E. thoracata. In the present study, the observed value of exploitation rate (U) was 0.65, and the exploitation ratio (E) was 0.68 for E. thoracata harvested from Ratnagiri waters.
Length converted catch curve for estimation of total mortality coefficient for E. thoracata
Jones and van Zalinge's cumulative catch curve for estimation of total mortality coefficient for E. thoracata
Commonly, the size structure of a fish species can be characterized by its length-frequency distribution (Taiwo, 2010). The total length range observed in the present study is comparable to those reported earlier from different locations, as presented in Table 1. Along the Ratnagiri coast, the smallest fish recorded was 69 mm, which is comparatively larger than reported by others. This might be because a drift gillnet (mesh size 20–22 mm) was used to catch white sardine along the Ratnagiri coast, so small-sized fish escaped through the relatively large mesh of the net. Nair (1952) reported a unimodal (10.5 cm) length-frequency distribution of the fishery, accounting for almost one-third of the catch and emphasizing the dominance of a single age group in the catch (Raje et al., 1994). The dominance of a single length group was also observed in the present investigation, at 98 mm (mid-class length), accounting for more than one-third of the total annual catch. By comparison, the dominance of the 77 mm (mid-class length) length group was recorded from Mumbai waters (Rahangdale et al., 2016). The differences in the dominant size group might be due to the methods of exploitation, heterogeneity of habitat, and genetic diversity (Parra, Almodóvar, Nicola, & Elvira, 2009).
Table 1 Size range of E. thoracata from different localities
The L∞ and K values estimated by ELEFAN-I were recorded as 115 mm and 1.9 year−1 respectively for E. thoracata, and these values were considered for further calculations to estimate population parameters. The L∞ and K values were comparatively lower than the earlier described values by Nabi et al. (2009); Rahangdale et al. (2016); and Abdussamad et al. (2018) (Table 2). The value of hypothetical age at length zero (t0) was calculated as −0.000012 year by von Bertalanffy's plot. The theoretical age at length zero often has a small positive or, more usually, a small negative value (King, 1995). The values of t0 in the present study corroborate this fact. The growth performance index can be used as a reliable index for the assessed parameters of growth as the value of phi prime (φ′) for the same species and genera are similar. The growth performance index or phi prime (φ′) for E. thoracata recorded from Ratnagiri waters was 2.40. The observed growth performance index value was high; therefore, it is an interesting remark because phi prime (φ′) is known to be a highly species-specific parameter with their values being similar within related taxa or groups (Prasad, Ali, Harikrishnan, & Raghavan, 2012). The current estimate of phi prime was similar to Nabi et al. (2009), Prajapat (2015), Rahangdale et al. (2016), Raje et al. (1994), and Abdussamad et al. (2018), which confirm the accuracy of the estimates (Table 2).
Table 2 Comparisons of growth parameters for E. thoracata from different localities
In the earlier studies, Mookerjee and Bhattacharya (1950) recorded the mean length 25.5 mm in the month of April and reached about 72 mm in October, which measured an average growth of 7.8 mm per month. Nair (1952) estimated that the first-year growth of this species was 100–110 mm and stated that the average life span of this species is below 1 year, and very few survived beyond this age. He also noted that the size class which dominated the fishery every successive year was the stock recruited during the previous spawning season. Raje et al. (1994) have estimated the length at 0.5 and 1.0 year age as 65.3 and 91.8 mm by employing the VBGF. The maximum length of fish recorded was 105 mm, from which the calculated age was 1.7 years.
The maximum size was recorded 110 mm in the present investigation, with a calculated age of 1.40 years. Similarly, Rahangdale et al. (2016) recorded the maximum length of 111 mm with an estimated age of 1.42 years. As per VBGF, they found that the fish attained 72.77 mm size in 6 months and 100.79 mm in 1 year. Prajapat (2015) observed the growth of the white sardine at 6 months and 1 year, 86 and 114.5 mm, respectively, and the largest specimen obtained during the study period was 117 mm with an estimated age of 1.08 years. This was mainly because of the higher values of K, which resulted in faster growth of the species. Abdussamad et al. (2018) have recorded a maximum size of 105 mm with the length of 50 mm, 79 mm, 96 mm, and 106 mm in 3, 6, 9, and 12 months, respectively.
In the present investigation, fishing mortality was nearly twice the natural mortality, indicating fairly high fishing pressure on this resource along the Ratnagiri coast. Nabi et al. (2009) estimated the total mortality, natural mortality, and fishing mortality as 8.08 year−1, 2.82 year−1, and 5.26 year−1, respectively, for E. thoracata, which also indicates higher fishing mortality than natural mortality in Bangladesh waters. The mortality parameters observed in the present study are very similar to the parameters recorded from Bangladesh and the north-west and south-west coasts of India (Table 3).
Table 3 Mortality and exploitation ratio for E. thoracata from different localities
Information on the mortality rate is important for devising exploitation strategies to harvest and manage fishery resources at optimal levels. The natural mortality coefficient in fish varies with age and predator abundance (Boiko, 1964; Jones, 1982; Pauly, 1980). Natural mortality is difficult to estimate in tropical countries, as the standard method of regressing Z against effort does not give a correct estimate when effort cannot be apportioned. Hence, numerous methods have to be tried, and more often we have to depend on the M/K ratio as an indicator of the accuracy of M and K. An M/K ratio of 1.0 to 2.5 indicates a sound assessment of the natural mortality coefficient. In the present study, the M/K ratio was found to be 1.34, which lies in the range of 1.0 to 2.5 (Beverton & Holt, 1957), indicating that the estimates were fairly reasonable. A stock is considered mortality-dominated if the Z/K value is two or more and growth-dominated if the Z/K value is close to one. On this basis, the existing stock of E. thoracata was found to be mortality-dominated, as the calculated value of Z/K was 4.25. An optimum E value of 0.5 was suggested by Gulland (1971) for overall fisheries management, and in the case of pelagic stocks, 0.4 is recommended (Patterson, 1992). In the present study, the observed exploitation ratio (0.68) was higher than the optimum (0.5), which indicates high fishing pressure on the white sardine stock along the central west coast of India. Similar to the present study, a higher level of exploitation was observed by Nabi et al. (2009), Rahangdale et al. (2016), and Abdussamad et al. (2018) from different geographical areas (Table 3). For the stability of the species in Ratnagiri waters, at least 50% of the spawning stock needs to be maintained, and therefore the current level of exploitation needs to decrease.
This is the first report on the population characteristics and stock status of E. thoracata from Ratnagiri, on the central west coast of India. From this study, it is evident that the stock of E. thoracata is overexploited. E. thoracata is a very fast-growing species with an annual K of 1.9 and a short life span, completing its life cycle in about 1.2–1.5 years. The exploitation ratio (E) was 0.68, which is well above the optimum exploitation level (E = 0.5); therefore, fishing pressure should be reduced, and regular monitoring of the resource would be required to maintain a sustainable catch. In all these aspects, the present study forms a basis for the scientific community and conservation decision-makers to manage this resource and achieve its sustainable harvest at optimal exploitation.
Data is available on request.
L∞: Asymptotic length
K: Growth coefficient
Z: Total mortality
M: Natural mortality
F: Fishing mortality
U: Exploitation rate
E: Exploitation ratio
E. thoracata: Escualosa thoracata
t0: Age at length zero
VBGF: von Bertalanffy growth formula
φ′: Phi prime
FiSAT: FAO–ICLARM Stock Assessment Tools
ELEFAN-I: Electronic Length Frequency Analysis-I
Abdussamad, E. M., Mini, K. G., Gireesh, R., Prakasan, D., Retheesh, T. B., Rohit, P., & Gopalakrishnan, A. (2018). Systematics, fishery and biology of the white sardine Escualosa thoracata (Valenciennes, 1847) exploited off Kerala, south-west coast of India. Indian Journal of Fisheries, 65(1), 26–31. https://doi.org/10.21077/ijf.2018.65.1.69762-05.
Beverton, R. J. H., & Holt, S. J. (1957). On the dynamics of exploited fish population. Fisheries Investigations, 11, 1–533. https://doi.org/10.1007/978-94-011-2106-4_2.
Boiko, E. G. (1964). Prediction of reserves and catches of Azovlueiperca. In Trudy VNIRO, (p. 50).
CMFRI (2018). Annual report 2017-18. Technical report, (p. 304). Central Marine Fisheries Research Institute.
Cushing, D. H. (1968). Fisheries biology: a study in population dynamics, (p. 200). University of Wisconsin Press.
Devanesan, D. W., & John, V. (1941). On the natural history of Kowala thoracata (Cuv. and Val.) with special reference to its gonads and eggs. Records of the Indian Museum, 43(2), 215–216.
Devaraj, M. (1983). Fish population dynamics: a course manual. CIFE Bulletin, 3(10), 83–89.
Gosavi, S. M., Kharat, S. S., Kumkar, P., & Tapkir, S. D. (2019). Assessing the sustainability of lepidophagous catfish, Pachypterus khavalchor (Kulkarni, 1952), from a tropical river Panchaganga, Maharashtra, India. The Journal of Basic and Applied Zoology, 80(1), 1–10.
Gulland, J. A. (1971). The fish resource of the oceans, (p. 255). Fishing News Books Ltd.
Gurjar, U. R., Sawant, M. S., Pawar, R. A., Nirmale, V. H., & Pawase, A. S. (2018). Reproductive biology and fishery of the white sardine, Escualosa thoracata (Valenciennes, 1847) from the Ratnagiri coast, Maharashtra. Indian Journal of Geo-Marine Sciences, 47(12), 2485–2491.
Gurjar, U. R., Sawant, M. S., Pawar, R. A., Nirmale, V. H., Pawase, A. S., & Takar, S. (2017). A study on food and feeding habits of white sardine, Escualosa thoracata (Valenciennes, 1847) from the Ratnagiri coast, Maharashtra. Journal of Experimental Zoology India, 20(2), 755–762.
Gurjar, U. R., Sawant, M. S., Takar, S., Pawar, R. A., Nirmale, V. H., & Pawase, A. S. (2017). Biometric analysis of white sardine, Escualosa thoracata (Valenciennes, 1847) along the Ratnagiri coast of Maharashtra, India. Journal of Experimental Zoology India, 20(2), 845–849.
Jones, R. (1982). Ecosystems, food chains and fish yields. In: Theory and management of tropical fisheries. ICLARM Conference Proceedings, 9 (360), 195-239.
Jones, R., & Van Zalinge, N. P. (1981). Estimates of mortality rate and population size for shrimp in Kuwait waters. Kuwait Bulletin of Marine Science, 2, 273–288.
Kende, D. R., Nirmale, V. H., Gurjar, U. R., Qayoom, U., Syed, N., & Pawar, R. A. (2020). Biometric analysis of moustached thryssa mystax (Bloch and Schneider, 1801) along the Ratnagiri coast of Maharashtra, India. Indian Journal of Fisheries, 67(2), 110–113. https://doi.org/10.21077/ijf.2019.67.2.82889-15.
King, M. (1995). Fisheries biology, assessment and management, (pp. 158–160). Fishing News Books.
Krishnan, S., & Mishra, S. (2004). An inventory of fish species described originally from fresh and coastal marine waters of Pondicherry. Records of the Zoological Survey of India, 102(3-4), 65–87.
Misra, K. S. (1947). A checklist of the fishes of India, Burma, and Ceylon. I. Elasmobranchii and Holocephalii. Records of the Indian Museum, 45, 1–46.
Mookerjee, H. K., & Bhattacharya, R. (1950). Some aspects of the natural history of Clupea lile. In Proceedings of the 37th Indian Science Congress Pt. III, (p. 250).
Munro, J. L., & Pauly, D. (1983). A simple method for comparing growth of fishes and invertebrates. ICLARM FishByte, 1(1), 5–6.
Nabi, M. R., Hoque, M. A., & Rahman, M. M. (2009). Population dynamics of Escualosa thoracata from estuarine set bag net fishery of Bangladesh. Journal of Science and Technology, 7, 7–22.
Nair, R. V. (1952). Studies on the life history, bionomics and fishery of the white sardine, Kowala coval (Cuv.). In Proceedings of the Indo-Pacific Fisheries Council 1951, Sec.2, (p. 103).
Ogle, D. (2016). Introductory fisheries analyses with R. CRC Press.
Parra, I., Almodóvar, A., Nicola, G. G., & Elvira, B. (2009). Latitudinal and altitudinal growth patterns of brown trout Salmo trutta at different spatial scales. Journal of Fish Biology, 74(10), 2355–2373. https://doi.org/10.1111/j.1095-8649.2009.02249.x.
Patterson, K. (1992). Fisheries for small pelagic species: an empirical approach to management targets. Reviews in Fish Biology and Fisheries, 2(4), 321–338. https://doi.org/10.1007/BF00043521.
Pauly, D. (1980). On the inter-relationships between natural mortality, growth parameters and mean environmental temperatures in 175 fish stocks. ICES Journal of Marine Science, 39(2), 175–192. https://doi.org/10.1093/icesjms/39.2.175.
Pauly, D. (1983). Some simple methods for the assessment of tropical fish stocks. FAO Fisheries Technical Paper, 234, 1–52.
Pauly, D. (1984). Length converted catch curve, a powerful tool in fisheries research in the tropics (part-II). ICLARM FishByte, 2(1), 17–19.
Prajapat, P. S. (2015). A study on biology of white sardine, Escualosa thoracata (Valenciennes, 1847) along Goa coast of India, (p. 85). M.F.Sc. dissertation, C.I.F.E. (Deemed University).
Prasad, G., Ali, A., Harikrishnan, M., & Raghavan, R. (2012). Population dynamics of an endemic and threatened yellow catfish, Horabagrus brachysoma (Gűnther) from River Periyar, Kerala, India. Journal of Threatened Taxa, 4(2), 2333–2342. https://doi.org/10.11609/JoTT.o2590.2333-42.
Rahangdale, S., Chakraborty, S. K., Jaiswar, A. K., Shenoy, L., & Raje, S. G. (2016). Preliminary study on growth and mortality of Escualosa thoracata (Valenciennes, 1847) from Mumbai waters. Indian Journal of Geo-Marine Sciences, 45, 290–295.
Raje, S. G., Deshmukh, V. D., & Thakurdas (1994). Fishery and biology of white sardine, Escualosa thoracata (Valenciennes) at Versova, Bombay. Journal of the Indian Fisheries Association, 24, 51–62.
Ricker, W. E. (1975). Computation and interpretation of biological statistics of fish populations. Bulletin of Fisheries Research Board Canada, 191, 1–382.
Sekharan, K. V. (1962). On the oil sardine fishery of the Calicut area during the years 1955-56 to 1958-59. Indian Journal of Fisheries, 9(2), 679–700.
Shabir, A. D., Thomas, S. N., Chakraborty, S. K., & Jaiswar, A. K. (2014). Length-weight relationships for five species of Clupeidae caught from Mumbai coast. Fishery Technolgy, 51, 291–294.
Stéquert, B., Panfili, J., & Dean, J. M. (1996). Age and growth of yellowfin tuna, Thunnus albacares, from the western Indian Ocean, based on otolith microstructure. Oceanographic Literature Review, 12(43), 1275.
Taiwo, O. (2010). Length frequency distribution and length-weight relationship of Schilbe mystus from Lekki Lagoon in Lagos, Nigeria. Journal of Agricultural and Veterinary Sciences, 2, 63–69.
Takar, S., & Gurjar, U. R. (2020). Review on present status, issues and management of Indian marine fisheries. Innovative Farming, 5(1), 34–41.
von Bertalanffy, L. (1934). Untersuchungen Über die Gesetzlichkeit des Wachstums. W. Roux' Archiv f. Entwicklungsmechanik, 131(4), 613–652. https://doi.org/10.1007/BF00650112.
von Bertalanffy, L. (1938). A quantitative theory of organic growth. Human Biology, 10, 181–213.
The authors are thankful to the Dean, College of Fisheries, Ratnagiri, for the facilities provided during the research work. The authors are also grateful to the local fishers for their help and support in sampling during the entire study period.
This work was not funded by any national or international agencies.
ICAR-Central Institute of Fisheries Education, Mumbai, Maharashtra, 400 061, India
Udai R. Gurjar, Suman Takar, Sushanta K. Chakraborty, Karan K. Ramteke & Tarachand Kumawat
College of Fisheries, Shirgaon, Ratnagiri, Maharashtra, 415 629, India
Udai R. Gurjar, Milind S. Sawant, Ravindra A. Pawar, Vivek H. Nirmale & Anil S. Pawase
TNJFU-Fisheries College and Research Institute, Thoothukudi, 628 008, India
Suman Takar
ICAR-Central Marine Fisheries Research Institute, Veraval, Gujarat, 362 269, India
Tarachand Kumawat
Udai R. Gurjar
Milind S. Sawant
Ravindra A. Pawar
Vivek H. Nirmale
Anil S. Pawase
Sushanta K. Chakraborty
Karan K. Ramteke
URG conducted the research and drafted the paper. MSS provided overall supervision. RAP and ASP helped during sampling and provided lab facilities. VHN, SKC, and KKR analyzed the data by employing the FiSAT computer program. ST and TK helped during writing, and review and editing the manuscript. All authors read the complete manuscript and approved it for publication.
Correspondence to Tarachand Kumawat.
Gurjar, U.R., Takar, S., Sawant, M.S. et al. Preliminary observation on the sustainability of white sardine, Escualosa thoracata (Valenciennes, 1847), exploited from the central west coast of India. JoBAZ 82, 20 (2021). https://doi.org/10.1186/s41936-021-00219-w
Length-frequency
White sardine
|
CommonCrawl
|
Interpretation of the unitaries involved in the eigenvalue decomposition of a density operator
If $\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle \psi_{i}|$, this ensemble doesn't require $\langle \psi_{i}|\psi_{j}\rangle$=0. Given that $\rho$ is positive semi-definite, by the spectral theorem it can be expressed in diagonal form $$\rho=UDU^{\dagger}, D=\sum_{i}\lambda_{i}|i\rangle\langle i|$$
However, despite having used this notation for a while now, I still find myself confused by the notation of the spectral theorem, specifically the role the unitaries play. The above states that $$U^{\dagger}\rho U=U^{\dagger}UDU^{\dagger}U=D$$ which then gives $\rho$ in its diagonal form. However, it can also easily be diagonalised just by calculation of its eigenvalues and eigenvectors, and then re-expression in that basis. Moreover, this just looks like the unitary transformation of $\rho$, which obviously isn't going to be the same state. So what are these unitaries then, just the identity operators expanded in the eigenbasis? Or am I meant to interpret them as unitaries whose columns are composed of the states of the current basis of $\rho$ expanded in the eigenbasis, ie, not mapping them to their image in another basis, but to themselves expressed in another basis, essentially achieving the same action as the identity?
Edit: It has been pointed out to me that the paragraph wherein I ask my questions is too vague or confusing. So let me try and rephrase. I have a density operator. I have it expressed in matrix form in some basis. I want to change said basis so that it achieves its diagonal form. How would I actually express this as the action of operations on said density operator, given that any such action would just lead to another density operator that is unitarily related, but with a different spectrum? The only way I can see is to rewrite its entries $|\psi_{i}\rangle = U|\phi_{i}\rangle$ which, to my mind anyway, isn't really the same thing as the action of a unitary operator on $\rho$. To clarify, I am not confused about the action of basis change, but only its representation other than just using the identity to achieve it, ie $|\psi\rangle=\sum_{i}\langle \phi_{i}|\psi_{i}\rangle |\phi_{i}\rangle$
quantum-state density-matrix eigenvalue
glS♦
GaussStrife
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$
– glS ♦
OK, honestly I did not follow the later part of your post (where you asked the questions) -- it was too confusing. But I suspect that your confusion arises because you were trying to go between abstract bra-ket notation and matrix notation (which entails choosing some basis to express the operators in).
Maybe this will help.
Let $$ \hat{\rho} = \sum_i p_i |\psi_i \rangle \langle \psi_i | $$ be a density operator (I do mean "density matrix", but the name is a bit of a misnomer) where each $|\psi_i \rangle$ is a normalized state of the Hilbert space, but need not be mutually orthogonal.
Since $\hat{\rho}$ is Hermitian, spectral theory says there exists an orthogonal basis $\{ |n\rangle \}_{n=1}^{d}$ (I'm assuming a finite-dimensional space for simplicity), i.e. $\langle n|m\rangle = \delta_{nm}$, which comprise the eigenvectors of $\hat{\rho}$. That is, $\hat{\rho}$ is expressible as \begin{align} \hat{\rho} = \sum_{n=1}^d \lambda_n|n\rangle \langle n| \end{align} where $\lambda_n$ is the (real) eigenvalue of $\hat{\rho}$ associated with the eigenvector $|n\rangle$.
OK, in principle, that is all there is to it.
Qn: Wait, what about all these unitaries $U$ that I oftentimes see quoted? How/where do they come in?
Answer: They arise only if we want to express the operators/spectral theory in some particular matrix representation $\rho$ of the operator $\hat{\rho}$. What I mean is this: Let $\{ |\phi_i\rangle \}_{i=1}^{d}$ be some orthogonal basis of states. Then $$ \rho_{ij} \equiv \langle \phi_i | \hat{\rho} |\phi_j \rangle =\sum_{n=1}^d \langle \phi_i | n\rangle \lambda_n \langle n|\phi_j\rangle = \sum_{n,m=1}^{d} \langle \phi_i|n\rangle (\lambda_n \delta_{nm})\langle m|\phi_j\rangle $$ Since $\{|n\rangle\}, \{|\phi_i\rangle\}$ are both orthogonal basis sets, we find $\langle \phi_i|n\rangle = U_{i,n}$ where $U$ is some $d\times d$ unitary matrix and $U_{i,n}$ is its $(i,n)$ entry. Now $\lambda_n\delta_{nm}$ is interpretable as the $(n,m)$-entry of a matrix $D$, which turns out to be diagonal, and so we can express the above very compactly in terms of matrices and their multiplication: $$ \rho = UDU^\dagger. $$ The spectral theory for matrices thus amounts to saying: should we insist on expressing the abstract operator $\hat{\rho}$ as a matrix $\rho$ in a certain basis, then there exists a unitary matrix $U$ such that $\rho$ can be transformed into a diagonal matrix. Note importantly that a representation $\rho$ (in terms of a table of numbers) of the operator $\hat{\rho}$ depends on the choice of basis, but the operator's action in the abstract Hilbert space is invariant. In particular, if we picked $|\phi_i\rangle = |n\rangle$ the eigenbasis, then $\rho$ is the diagonal matrix $D$ and $U$ is the identity matrix.
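If it helps, the statement above is easy to check numerically; here is a small numpy illustration (the ensemble chosen is arbitrary, purely for demonstration):

```python
import numpy as np

# Two normalized but non-orthogonal pure states, mixed with equal weights.
psi1 = np.array([1, 0], dtype=complex)
psi2 = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = 0.5 * np.outer(psi1, psi1.conj()) + 0.5 * np.outer(psi2, psi2.conj())

# Spectral decomposition: columns of U are the orthonormal eigenvectors |n>.
eigvals, U = np.linalg.eigh(rho)
D = np.diag(eigvals)

# rho = U D U^dagger, and U^dagger rho U is the diagonal matrix D.
assert np.allclose(U @ D @ U.conj().T, rho)
assert np.allclose(U.conj().T @ rho @ U, D)
```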
Additional remarks in anticipation of potential remnant confusion.
The following is true: Let $\hat{U}$ be any unitary operator acting on the Hilbert space. Then $$ \hat{\rho}=\sum_{n=1}^d \lambda_n \hat{U}\hat{U}^\dagger|n\rangle \langle n|\hat{U} \hat{U}^\dagger \equiv \sum_{n=1}^d \lambda_n \hat{U} |\tilde{n}\rangle \langle{\tilde{n}}|\hat{U}^\dagger = \hat{U}\left( \sum_{n=1}^d \lambda_n |\tilde{n}\rangle \langle \tilde{n} |\right)\hat{U}^\dagger $$ where I defined $|\tilde{n}\rangle = \hat{U}^\dagger|n\rangle$ (which also forms an orthogonal basis). But here I haven't done anything, $\hat{\rho}$ is still the same operator as before.
Let $\hat{U}$ be any unitary operator. Then $$ \hat{U} \hat{\rho} \hat{U}^\dagger \equiv \hat{\tilde{\rho}} $$ defines a different operator than $\hat{\rho}$, but which is unitarily related. So, they do not have the same spectral decomposition (though it is clear they are related).
The following has no meaning: $$ U \hat{\rho} U^\dagger $$ (matrix of numbers multiplying an operator..?), nor $$ \hat{U} \rho \hat{U}^\dagger $$ (operator acting on a matrix of numbers...?), while the following has meaning: $$ U \rho U^\dagger $$ (matrix multiplication), and $$ \hat{U} \hat{\rho} \hat{U}^\dagger $$ (composition of operators). Note $U\rho U^\dagger$ is the matrix representation of $\hat{U}\hat{\rho}\hat{U}^\dagger$, in some basis.
In practice no one but the most fastidious keep the hats for operators, and the symbol $\rho$ is oftentimes in an abuse of notation used for both the abstract density operator and the corresponding density matrix expressed in some basis. To make matters worse, the basis used is often suppressed, though is often assumed to mean the computational basis (for qubits).
nervxxx
$\begingroup$ Ok I think this almost completely addresses my question. You are correct, this confusion is arising from notation, which is almost exclusively used to denote an active unitary transformation, and as such alters the spectrum. Given $$ \hat{\rho} = \sum_i p_i |\psi_i \rangle \langle \psi_i | $$ if I was actually trying to express the basis change notationally, without the use of the identity operator, would I just take the entries of the matrix, express them as $|\psi_{i}\rangle=U|n\rangle$ where n is the basis in which it is diagonal in? $\endgroup$
– GaussStrife
TL;DR: Active and passive transformations
The dichotomy between the two types of unitary transformations is real and is an example of a division of transformations into active and passive types. This duality is inherent to any use of coordinates and arises from the fact that there is a degree of arbitrariness in the way coordinates are assigned to objects they label.
Example: translation in Euclidean space
For an example of this phenomenon in a more usual setting, consider the three dimensional Euclidean space $\mathbb{E}^3$. By choosing an arbitrary point to serve as the origin and an arbitrary set of mutually perpendicular directions for the axes, we can create a coordinate system which we can use to assign triples of real numbers in $\mathbb{R}^3$ to the points in $\mathbb{E}^3$.
Consider what happens if we translate an object in $\mathbb{E}^3$ by one unit of distance in the positive $x$ direction. The object changes its position from $(x, y, z)$ to $(x + 1, y, z)$. Now, consider what happens if we instead leave the object alone and move our coordinate system by one unit of distance in the negative $x$ direction. The object's new coordinates are once again $(x + 1, y, z)$.
This example demonstrates that a change in numerical coordinates used to describe an object may or may not signify a change in the state of the object. If the change represents an active transformation then the object's state has changed and the new coordinates describe the new state in the same unchanged coordinate system. On the other hand, if the change represents a passive transformation then the object's state remains unchanged, but the coordinate system has changed. In this case, the new coordinates describe the old unchanged state of the object in the new coordinate system.
Example: diagonalization of a density matrix
Elements of a density matrix can be thought of as coordinates that we assign to linear operators on an $n$-dimensional Hilbert space so that we can represent them as matrices in $\mathbb{C}^{n\times n}$. Such a representation is highly convenient, but comes with the caveat demonstrated above in the case of Euclidean space.
Suppose we know that two density matrices $\rho$ and $\sigma$ are related by the equation
$$ \sigma = U\rho U^\dagger\tag1 $$
where $U$ is a unitary matrix. This equation admits two interpretations. In the first, $U$ is an active transformation, e.g. describing the action of a quantum gate. In this case, $\rho$ and $\sigma$ are different states described in the same basis. In the second interpretation, $U$ is a passive transformation, i.e. it describes a basis change. In this case, $\rho$ and $\sigma$ are two descriptions of the same quantum state given in two different bases.
Most of the time, the $U$ in the diagonalization of a density matrix $\rho$ is interpreted as a (passive) transformation that changes the basis to one in which $\rho$ has the diagonal form. However, given a state $\rho$ it is of course also possible to intentionally choose the (active) evolution operator $U$ so that $U\rho U^\dagger$ is diagonal.
The existence of the two types of transformations highlights the fact that a quantum state and its mathematical description using a density matrix are two distinct objects and the mapping between them is mediated by the choice of basis.
Adam Zalcman
A linear operator $A:V\to W$ represents a transformation in some underlying vector space (or more generally, from a vector space to a different one). Let's stick to the case of finite-dimensional spaces for simplicity. Such an operator is not the same as a matrix. A matrix is a way to represent the operator $A$ with respect to a given pair of bases. Given bases $\{v_i\}_i\subset V$ and $\{w_i\}_i\subset W$, you can write the "matrix elements of $A$" as $$A_{ij} \equiv w_i^\dagger A v_j.$$
An equivalent way to write this is using dyadic notation. Assuming the bases to be orthonormal, we have $$A = \sum_{ij} A_{ij} w_i v_j^\dagger,$$ where $w_i v_j^\dagger$ is defined as the linear operator $V\ni x\mapsto w_i \langle v_j,x\rangle$. Note that this is what you usually write in bra-ket notation as $A=\sum_{ij} A_{ij} |w_i\rangle\!\langle v_j|$. Note that there are (infinitely) many ways to write an operator $A$ in this way. You can choose any orthonormal basis for the representation. If the operator turns out to be normal (and $V=W$), then there is a basis with respect to which you can write $$A = \sum_i \lambda_i v_iv_i^\dagger, \tag3$$ for some eigenvalues $\lambda_i\in\mathbb C$. In fact, an operator is normal iff such a representation exists.
Given any pair of orthonormal bases $\{v_i\},\{v_i'\}\subset V$, there is always a unitary operation $U$ connecting them, i.e. such that $Uv_i=v_i'$. This means that a convenient way to express an operator is using a unitary encoding the basis vectors used in the expressions above. For example, you can write (3) as $A=VDV^\dagger$, with $V$ the unitary matrix whose $i$-th column is the vector $v_i$, and $D_{ii}\equiv\lambda_i$.
You can also try to "extract" the diagonal representation of a given normal operator by applying the inverse of the unitary operations above. For example, if $A=VDV^\dagger$, then $V^\dagger A V$ is diagonal.
You can also think of something like $V A V^\dagger$ as representing the action of $A$ in a "rotated basis". By this I mean that you can wonder how the action of $A$ looks like if all vectors are rotated. If you switch from a basis $\{u_i\}$ to a basis $\{v_i\}$, with the two bases connected by some unitary operator $V$, $V u_i=v_i$, then $$\langle v_i, A v_j\rangle = \langle u_i, V^\dagger A V u_j\rangle,$$ and thus $V^\dagger AV$ can be interpreted as the way $A$ acts in the rotated basis.
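As a quick numerical sanity check of the last two points (the operator below is an arbitrary Hermitian example, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary Hermitian (hence normal) operator A on C^3.
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = X + X.conj().T

# Columns of V form an orthonormal eigenbasis of A.
eigvals, V = np.linalg.eigh(A)

# "Extracting" the diagonal representation: V^dagger A V is diagonal.
assert np.allclose(V.conj().T @ A @ V, np.diag(eigvals))

# <v_i, A v_j> equals the (i, j) entry of V^dagger A V, i.e. A in the rotated basis.
i, j = 0, 2
assert np.allclose(V[:, i].conj() @ A @ V[:, j], (V.conj().T @ A @ V)[i, j])
```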
glS♦
$\begingroup$ For $V^{\dagger}AV$, the columns of the two unitaries in this case wouldn't take on the same meaning as that in the unitary transformation, yes? As in in this case, the columns of $V^{\dagger}$ would be the basis of A expanded in the target basis to which you want to express A in? $\endgroup$
$\begingroup$ @GaussStrife in what context? If $V$ is unitary, both its rows and its columns form an orthonormal basis. You can understand $V$ as saying "change from computational basis to the basis formed by the columns of $V$", or more generally from some basis to some other basis related by $V$. If you write $V=\sum_i v_i e_i^\dagger$, then $V^\dagger AV$ acts in the basis $\{e_i\}$ the same way $A$ acts in the basis $\{v_i\}$. You can think of $V^\dagger AV$ as the representation of $A$ in the transformed basis. $\endgroup$
$\begingroup$ Yes, I understand that the columns and rows form basis states. My main point of confusion is with this notation, and I can't seem to get a clear answer on it, or maybe what I am asking doesn't make sense? If I apply a unitary transformation, I take one basis state to another, and the entries of a column will form the expansion of the other basis state in my current one. If I do a change of basis, I take the column to represent my current basis state, and it's expansion in my target one. The basis of the columns or rows changes, depending on which I am doing, active or passive. Is that correct? $\endgroup$
@GaussStrife I guess it just depends how you choose to describe things. If $u_i=U e_i$, and $e_i$ is the canonical basis, then the columns of $U$ are the vectors $u_i$ (assuming you are representing $U$ as a matrix in the standard way). So the "entries of a column" would be the components of the vectors $u_i$ I guess? These are the coefficients of the decomposition of $u_i$ in the canonical basis that was chosen. Is that what you mean with "change of basis"? If you meant instead $U^\dagger AU$, then the columns of $U$ are the basis wrt which you are representing $A$ as a matrix
In the case of $u_{i}=Ue_{i}$, then yes, the columns would represent $u_{i}$, and the entries would be the coefficients weighting each $e_{i}$ when $u_{i}$ is expanded in said basis, and when the matrix multiplication is carried out, you would simply use them as coefficients for $\{e_{i}\}$. For $U^{\dagger}AU$, since this is also a basis change, the columns, unlike in a unitary transformation, are the same basis states, not the image, and when I perform the multiplication, I associate with the results the basis vectors of the basis I wish to express $A$ in, correct?
Linear algebra… with diagrams
Rediscover linear algebra by playing with circuit diagrams
Wires on the London Underground. CGP Grey, CC-BY 2.0
by Paweł Sobociński. Published on 6 March 2017.
A succinct—if somewhat reductive—description of linear algebra is that it is the study of vector spaces over a field, and the associated structure-preserving maps known as linear transformations. These concepts are by now so standard that they are practically fossilised, appearing unchanged in textbooks for the best part of a century.
While modern mathematics has moved to more abstract pastures, the theorems of linear algebra are behind a surprising number of world-changing technologies: from quantum computing and quantum information, through control and systems theory, to big data and machine learning. All rely on various kinds of circuit diagrams, eg electrical circuits, quantum circuits or signal flow graphs. Circuits are geometric/topological entities, but have a vital connection to (linear) algebra, where the calculations are usually carried out.
In this article, we cut out the middle man and rediscover linear algebra itself as an algebra of circuit diagrams. The result is called graphical linear algebra and, instead of using traditional definitions, we will draw lots of pictures. Mathematicians often get nervous when given pictures, but relax: these ones are rigorous enough to replace formulas.
Let's start with a picture of a generic wire. This is our first diagram.
We say that (W) is of type $(1,1)$ because, as a diagram, it has one dangling end on the left, and one on the right. Diagrams can be stacked on top of one another; for example, the following $(2,2)$ diagram is obtained by stacking (W) on top of itself.
Wires can be jumbled up, with the following $(2,2)$ diagram called a twist.
In addition to stacking, diagrams can also be connected in series if the numbers of wires agree on the connection boundary. The following results from connecting two twists.
It is useful to imagine that the wires are stretchy, like rubber bands.
For example, we consider the following three $(4,4)$ diagrams to be equal.
We will need additional equations between diagrams. A nice value-added feature of the notation is that equations often convey topological intuitions: eg we require that a twist followed by a twist can be 'untangled' in the following sense.
In some settings, eg knot theory, the above equation would not be imposed because knots—unsurprisingly—rely on the ability of wires to tangle. (See Dan Ghica's entertaining Inventing a knot theory for eight year olds).
The final thing to say about twists is that we can 'pull' diagrams across them, preserving equality. The following, known as the Yang–Baxter equation, is an instance of this: we pull the top left twist across the third wire, which, for sake of legibility, is coloured red.
(YB)
Given these insights, it is not too difficult to prove that the diagrams we've seen so far are in bijective correspondence with permutations. For example, those in (YB) correspond to the permutation $\rho$ on the three element set $\{0,1,2\}$ where $\rho(x)=2-x$.
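If you want to verify this concretely, the twists can be represented as permutation matrices; here is a minimal Python sketch (the encoding of wires as matrix indices is my own bookkeeping, not part of the article's notation):

```python
import numpy as np

def twist(i, n=3):
    """Permutation matrix swapping wires i and i+1 out of n (one 'twist')."""
    P = np.eye(n, dtype=int)
    P[[i, i + 1]] = P[[i + 1, i]]
    return P

s01, s12 = twist(0), twist(1)

# Untangling: a twist followed by a twist is just plain wires
assert np.array_equal(s01 @ s01, np.eye(3, dtype=int))

# Yang-Baxter: both sides of (YB) are the same permutation ...
lhs = s01 @ s12 @ s01
rhs = s12 @ s01 @ s12
assert np.array_equal(lhs, rhs)

# ... namely rho(x) = 2 - x on {0, 1, 2}
rho = [int(np.argmax(lhs[:, x])) for x in range(3)]
assert rho == [2, 1, 0]
```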
An optional note for category theory aficionados: our diagrams are the arrows of a strict symmetric monoidal category with objects the natural numbers. The arrows from $m$ to $n$ are $(m,n)$ diagrams. All diagrams in this article are arrows of such categories, which are called PROPs. Thus 'pulling' diagrams across twists, as in (YB), is none other than the naturality of the braiding structure and (S) ensures that the braiding is a symmetry.
Let's imagine that wires carry data from left to right. Whatever it is, assume that we can copy it. To copy, we get a special $(1,2)$ contraption, illustrated below, which takes data from the wire on the left and copies it to the two wires on the right.
A defining feature of copies is that one should not be able to distinguish between them; copying, then swapping, is the same as just copying. We thus add the following:
If I copy twice, I end up with three copies. There are two ways to do this, but they are indistinguishable. The following equation ensures this:
The ability to copy often comes with the ability to throw away, otherwise photocopying rooms would quickly fill up with scrap. Throwing away is done by the $(1,0)$ diagram below, which takes data and returns nothing.
Finally, if I copy and throw away one copy, I haven't achieved very much in the grand scheme of things. This is the intuition behind the following equation.
The above structure is otherwise known as a (cocommutative) comonoid. Copying is awkward to denote with standard formula syntax. For example, $f(x,y)$ is a standard way of denoting an operation that takes two arguments and returns one result. But how can we denote an operation that takes one argument and returns two results? The kinds of diagrams that we have seen so far are a solution.
Next, we imagine that data on the wires can be added. For the sake of concreteness, we may as well assume that wires carry integers. Then a $(2,1)$ addition operation takes two arguments and has a single result.
Adding comes with an identity element, and we introduce a $(0,1)$ gadget for this.
The intuition is that the diagram above outputs 0 on its result wire. Given that addition is commutative, associative and has an identity element, the following equations ought to be uncontroversial.
The above structure is otherwise known as a (commutative) monoid. An intriguing thing about the diagrammatic notation is that, ignoring the black/white colouring, the equations involved for monoids are mirror images of those for comonoids.
Copying meets adding
The fun really starts when diagrams combine adding and copying. From now on, we will draw diagrams that feature all of the following:
copy, throw away, add, and output 0,
subject to both the comonoid and monoid equations we considered before. But now the two structures can connect to each other, leading to some new and interesting situations.
First, when we copy zero, we get two zeros. Similarly, when we discard the result of addition, it's the same as discarding the arguments. This leads to the following:
One of the most interesting equations concerns copying the result of an addition: it is the same as if we copied the arguments and performed two additions, separately.
The last equation is about discarding zero. The effect is… nothing, the empty diagram.
This is the point in the story where linear algebra starts bubbling up to the surface. Diagrams—with all of the equations we have considered so far—are now in bijective correspondence with matrices of natural numbers. Moreover, composing diagrams in series corresponds to matrix multiplication, and stacking two diagrams to a direct sum.
At first sight, this may seem a little bit magical: we drew diagrams by stacking and connecting basic components; we never even defined the natural numbers! Moreover, multiplying matrices involves ordinary addition and multiplication of integers. So where is all of this structure hiding in the diagrams?
The best way to get the idea is via examples. Following the correspondence, the $(1,1)$ diagrams below ought to be $1\times 1$ matrices. In fact, they correspond to $(0)$, $(1)$, $(2)$ and $(5)$. Notice the pattern? To get the number, count the paths from the left to the right.
Can you figure out how to multiply and add numbers, as diagrams? If yes, you'll enjoy proving that multiplication distributes over addition. (If you get stuck, it's all explained on the graphical linear algebra website). This is the first example of a common phenomenon in graphical linear algebra: basic algebraic operations—often considered primitive—are actually instances of the algebra of diagrams.
One final example: the $2\times 3$ matrix
\begin{equation*}
\begin{pmatrix}
1 & 0 & 2 \\
0 & 1 & 1
\end{pmatrix}
\end{equation*}
corresponds to the $(3,2)$ diagram below. To get the $(i,j)$th entry, count the number of paths from the $j$th input to the $i$th output.
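Here is a small Python sketch that makes the path-counting rule and the composition-is-matrix-multiplication claim concrete (the encoding of diagrams as path-count matrices is my own illustration):

```python
import numpy as np

# Encode an (m, n) diagram by the n x m matrix whose (i, j) entry counts the
# paths from the j-th dangling wire on the left to the i-th wire on the right.
wire = np.array([[1]])        # (1,1): a plain wire
copy = np.array([[1], [1]])   # (1,2): one path to each of the two outputs
add  = np.array([[1, 1]])     # (2,1): one path from each of the two inputs

def series(d1, d2):
    """Connect d1 then d2: counting paths through the middle boundary
    is exactly matrix multiplication."""
    return d2 @ d1

def stack(d1, d2):
    """Stack d1 on top of d2: the direct sum (block diagonal) of the matrices."""
    r1, c1 = d1.shape
    r2, c2 = d2.shape
    out = np.zeros((r1 + r2, c1 + c2), dtype=int)
    out[:r1, :c1], out[r1:, c1:] = d1, d2
    return out

def chain(*diagrams):
    """Series composition of several diagrams, left to right."""
    out = diagrams[0]
    for d in diagrams[1:]:
        out = series(out, d)
    return out

# The (1,1) diagrams for 2 and 3: count the left-to-right paths
two   = chain(copy, add)
three = chain(copy, stack(wire, copy), stack(wire, add), add)
assert np.array_equal(two, [[2]]) and np.array_equal(three, [[3]])

# Multiplying numbers is series composition; adding is copy ; (a stacked on b) ; add
assert np.array_equal(chain(two, three), [[6]])
assert np.array_equal(chain(copy, stack(two, three), add), [[5]])
```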
This algebraic structure is known as a (bicommutative) bimonoid, or bialgebra. Bimonoids are common all over mathematics, eg in algebra, combinatorics, topology and models of computation. Once familiar with this pattern of interaction between a monoid and comonoid, you will see it everywhere.
The most exciting part of the story comes when we confuse the 'direction of flow' in diagrams. This means, roughly speaking, that copying and adding can now go both from left to right, and from right to left. Our diagrams will now feature all of the generators above (copy, throw away, add and output 0) together with their mirror images, with all of the equations considered so far.
The way to make sense of this is to stop considering the left side of the diagrams as 'inputs' and the right side as 'outputs'. Technically, it means not thinking of addition and copying as functions, but rather as relations, which, unlike functions, can always be reflected. For example, addition—as a relation—is the set of pairs
\[
\left(\begin{pmatrix} x\\ y\end{pmatrix} ,\, x+y\right),
\]
while 'backwards' addition is the relation
\[
\left(x+y,\, \begin{pmatrix}x\\ y\end{pmatrix}\right).
\]
As before, this leads to several new situations to consider. Intriguingly, it turns out that the same equations describe the interaction of copying and adding with their mirror images.
The above are known as Frobenius equations, and are the other common way that monoids and comonoids interact. Just as for bimonoids, this pattern of interaction can be found in many different places, all over mathematics and its related fields.
Enumerating all the equations would be overkill for this article, so let's go straight to the punchline. Previously, our diagrams were in bijection with matrices of natural numbers, even if it took some mental yoga to see the correspondence. This time, the magic ramps up a few notches: diagrams are now in bijective correspondence with linear relations over the field $\mathbb{Q}$ of the rational numbers, AKA fractions. But let's go through this step by step.
First, where do fractions come from? Before, when everything flowed from left to right, there was a bijection between $(1,1)$ diagrams and natural numbers. But with mirror images of copying and adding around, direction of flow is confused, bringing additional expressivity. For example, the following $(1,1)$ diagram is the diagrammatic way of writing $\frac{2}{3}$: it connects a $2$, going from left to right, with a $3$ going from right to left.
($\frac23$)
Just as the multiplication and addition of natural numbers can be derived from the algebra of diagrams, so can the algebra of fractions that everyone learns in primary school. As it turns out, not all diagrams of type $(1,1)$ are fractions, which leads us to an interesting feature of graphical linear algebra. But first, a little detour.
A curious phenomenon of human languages is that some words are notoriously difficult to translate. Some have become quite well-known, almost cliché: Schadenfreude in German, furbo in Italian or hygge in Danish. This is a side effect of the subtle differences in expressivity between languages: a concept natural in one is sometimes clumsy to express in another. Differences in expressivity also show up in formal languages.
With this in mind—and please don't freak out—the algebra of diagrams has nothing to stop you from dividing by zero. This is because reflecting a $(1,1)$ diagram means taking the reciprocal: eg $3$ is the diagram with three paths from left to right, and $\frac{1}{3}$ is the diagram with three paths from right to left. Keeping this in mind, the following would seem to be the diagram for $\frac{1}{0}$.
(∞)
It's natural to be a bit nervous, considering the anti division-by-zero propaganda bombarding us from a young age: as a result, most of us suffer from an acute dividebyzerophobia. But nothing explodes, and it can actually be useful to work in this extended algebra of fractions, featuring division by zero. To tie up the story, let's complete the taxonomy of $(1,1)$ diagrams: there is a diagram for each ordinary fraction, the diagram (∞) above and just two additional diagrams, illustrated below, which can be understood as two different ways of translating $\frac{0}{0}$ to the language of diagrams. (More dividing by zero on the graphical linear algebra website).
($\bot$) ($\top$)
In general, $(m,n)$ diagrams are in bijective correspondence with linear relations: those subsets of $\mathbb{Q}^m\times\mathbb{Q}^n$ that are closed under pointwise addition and $\mathbb{Q}$ multiplication, ie that are $\mathbb{Q}$ vector spaces. For example, the linear relation of (∞) is $\{\,(0,q) \,:\, q\in\mathbb{Q}\,\}$, and those of ($\bot$) and ($\top$) are, respectively, $\{\,(0,0)\,\}$ and $\{\,(p,q) \,:\, p,q\in\mathbb{Q}\,\}$. The linear relation for ($\frac23$) is $\{\,(3p,2p)\,:\, p\in\mathbb{Q}\,\}$. Other examples of such beasts include kernels and images of matrices with $\mathbb{Q}$ entries, all of which are thus expressible with the language of diagrams. Now, since all of the diagrammatic equations involve only adding and copying operations, we conclude with an insight that's not so apparent given the usual way of presenting linear algebra:
Linear algebra is what happens when adding meets copying.
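To finish, here is a tiny Python sketch, using exact rational arithmetic, of the closure properties that make these subsets linear relations (my own illustration, not part of the article):

```python
from fractions import Fraction as Q

# The (2/3) diagram denotes the linear relation {(3p, 2p) : p in Q},
# i.e. all pairs (x, y) with 2x = 3y.
def in_two_thirds(x, y):
    return 2 * x == 3 * y

pairs = [(Q(3), Q(2)), (Q(3, 2), Q(1)), (Q(-6), Q(-4))]
assert all(in_two_thirds(x, y) for x, y in pairs)

# Linear relations are Q-vector spaces: closed under pointwise addition ...
(x1, y1), (x2, y2) = pairs[0], pairs[1]
assert in_two_thirds(x1 + x2, y1 + y2)
# ... and under multiplication by rationals.
c = Q(-7, 5)
assert in_two_thirds(c * x1, c * y1)

# The relation of the (∞) diagram is {(0, q)}: it relates 0 to everything and
# nothing else, a perfectly good subspace of Q^2, and nothing explodes.
def in_infinity(x, y):
    return x == 0

assert in_infinity(Q(0), Q(123, 7)) and not in_infinity(Q(1), Q(1))
```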
[Banner image: Vera de Kok, CC BY 2.0]
Paweł Sobociński
Paweł is a theoretical computer scientist at the University of Southampton, focusing on compositional modelling of systems, developing the underlying maths (usually category theory), and applying it to real-life problems such as verification. Since 2015, he has been working on the Graphical Linear Algebra blog, rediscovering linear algebra with string diagrams.
https://graphicallinearalgebra.net/
npj systems biology and applications
Position-dependent effects of RNA-binding proteins in the context of co-transcriptional splicing
Timur Horn1,
Alison Gosliga1,2,
Congxin Li2,
Mihaela Enculescu1 &
Stefan Legewie (ORCID: orcid.org/0000-0003-4111-0567)1,2
npj Systems Biology and Applications volume 9, Article number: 1 (2023)
Computer modelling
Numerical simulations
Stochastic modelling
Alternative splicing is an important step in eukaryotic mRNA pre-processing which increases the complexity of gene expression programs, but is frequently altered in disease. Previous work on the regulation of alternative splicing has demonstrated that splicing is controlled by RNA-binding proteins (RBPs) and by epigenetic DNA/histone modifications which affect splicing by changing the speed of polymerase-mediated pre-mRNA transcription. The interplay of these different layers of splicing regulation is poorly understood. In this paper, we derived mathematical models describing how splicing decisions in a three-exon gene are made by combinatorial spliceosome binding to splice sites during ongoing transcription. We additionally take into account the effect of a regulatory RBP and find that the RBP binding position within the sequence is a key determinant of how RNA polymerase velocity affects splicing. Based on these results, we explain paradoxical observations in the experimental literature and further derive rules explaining why the same RBP can act as inhibitor or activator of cassette exon inclusion depending on its binding position. Finally, we derive a stochastic description of co-transcriptional splicing regulation at the single-cell level and show that splicing outcomes show little noise and follow a binomial distribution despite complex regulation by a multitude of factors. Taken together, our simulations demonstrate the robustness of splicing outcomes and reveal that quantitative insights into kinetic competition of co-transcriptional events are required to fully understand this important mechanism of gene expression diversity.
Splicing is a key step in eukaryotic gene expression that is catalyzed by a large macromolecular complex, the spliceosome. During messenger RNA (mRNA) maturation, the spliceosome removes non-coding parts of the pre-mRNA (introns) and joins together the remaining parts (exons) that form the protein-coding mRNA. Spliceosome assembly is initiated by the binding of U1 and U2 small nuclear ribonucleoproteins (snRNPs) to splice sites. Subsequently, U4-U6 and a large number of protein factors are recruited to yield mature, catalytically active spliceosomes1,2,3. In alternative splicing, different splice products are generated from the same pre-mRNA precursor in a regulated fashion. In the most common mode of alternative splicing, so-called cassette exons are either included or not (skipped) in the final mRNA4. Alternative splicing allows for the production of different proteins with different functionalities from the same gene and contributes to proteome complexity5,6,7. Mis-regulated alternative splicing may also lead to the production of non-functional protein isoforms or may cause protein downregulation, e.g., by introducing alternative poly-adenylation sites, shifting the open reading frame, or promoting nonsense-mediated decay8,9. As a consequence, changes in alternative splicing may contribute to severe diseases such as cancer or neurodegenerative diseases10,11. A deep mechanistic understanding of alternative splicing is therefore needed to develop therapies10,11,12,13, such as through the identification of new targets for cancer immunotherapy14,15, which can provide strategies to combat cancer therapy resistance16.
From a systems point of view, splicing is a complex process that requires the exact definition of splice sites on the transcript and their correct joining. Splice site recognition, particularly in alternative splicing, is strongly regulated by RNA binding proteins (RBPs). These bind to cis-regulatory sequence elements in the pre-mRNA, and enhance or suppress spliceosome recruitment to splice sites7. For instance, cis-acting intronic and exonic splicing silencers (ISS and ESS) are regulatory sequence motifs that bind splicing repressor proteins, e.g. heterogeneous nuclear ribonucleoproteins (hnRNPs), that typically prevent the recruitment of U1 and U2 to nearby splice sites17,18. Similarly, intronic and exonic splice enhancers (ISE and ESE) have been discovered, typically as binding sites for the serine-arginine repeat (SR) protein class of splicing activators19,20,21. However, the functions of RBPs on splicing are not always so clearly defined, as several RBPs show antagonistic effects, i.e., either promoting or suppressing the inclusion of an exon, depending on their binding location relative to the regulated splice sites18. Such functional dependence on the binding position has been evidenced for several splicing regulatory proteins, including hnRNPs18,22, SR proteins21,22, CELF223, Nova24, RbFox25, TIA26, and PTB27, but the underlying molecular mechanisms remain incompletely understood.
There is strong evidence that splicing mainly occurs co-transcriptionally in human cells, and that transcription and splicing mutually influence each other by spatial and kinetic coupling mechanisms28,29,30,31. Spatial coupling arises because both processes share molecular components and therefore occur in close proximity. For instance, RNA polymerase II (Pol II) contains the C-terminal heptad repeat domain (CTD) of the large subunit that is required for the deposition of splicing factors to splice sites32,33,34. In addition, kinetic coupling occurs since the speed of pre-mRNA transcription determines how fast downstream splice sites become available to compete with alternative upstream splice sites35. Furthermore, the rate of transcript elongation affects the formation of secondary structures in the pre-mRNA, and thereby the accessibility of splice sites for splicing factors30.
For cassette exons, a strong dependence of the inclusion frequency on transcription velocity has been reported31. For instance, slow Pol II elongation may increase the time window for the recognition of weak exons, leading to their higher inclusion25. However, in contrast, slow Pol II elongation can also favor exon skipping by promoting the recruitment of inhibitory RBPs that prevent exon recognition29. In genome-wide experiments, four different classes of exons have been identified based on their Pol II velocity dependence, including monotonically increasing or decreasing exon inclusion with Pol II speed (see above), but also bell- and U-shaped behaviors, with the latter two classes accounting for approximately 50% of velocity-sensitive genes31. In the latter scenarios, fast and slow Pol II mutants shift the splicing outcome in the same direction, suggesting that for these exons, the spliceosome operates at an optimal point for physiological Pol II values.
Various strategies have been employed to quantitatively model the impact of cis-regulatory sequence features on alternative splicing outcomes. These approaches range from automated machine learning based on transcriptome-wide splicing data or from synthetic libraries36,37,38,39 to mechanistic descriptions of splicing reaction kinetics35,40,41,42,43,44,45,46,47,48. Mechanistic modeling studies are typically focused on certain splicing decisions and have naturally favored minimizing complexity, given the limited amount of available experimental data. Therefore, they often described splicing as a quasi-post-transcriptional process, i.e., all relevant splice sites are assumed to be available when splicing decisions are reached42,43,46,47. While this assumption could be consistent with co-transcriptional splicing on the elongating transcript, it fails to explain why alternative splicing outcomes are affected by the transcript elongation rate. Therefore, other mechanistic models explicitly consider that upstream splicing sites are present earlier than others, implying that the corresponding splicing decisions are kinetically favored, in particular at slow elongation rates35,44,45,48. For instance, one recent study has sought to explain how co-transcriptional splicing has impacted gene structure and evolution, focusing on genomic level predictions45. Another work employed a kinetic model of co-transcriptional splicing to accurately predict the combined impact of both the position and quantity of ESEs or ESSs on the splicing of engineered designer exons41.
Here, we build upon this previous modeling work on co-transcriptional splicing regulation and mechanistically describe how spliceosomes assemble on the elongating transcript, and how a trans-acting RBP modulates the process in a time- and position-dependent manner by binding to cis-regulatory sequence elements. Crucially, we extend the description of co-transcriptional kinetics to include the availability and binding of cis-regulatory motifs, in addition to splice sites. We show that simple kinetic models account for several non-intuitive behaviors including the existence of optimal RNA polymerase speeds for the inclusion of alternative exons. We additionally demonstrate mechanisms by which a single protein can both increase and decrease inclusion of an exon. These findings suggest that substantial interplay exists between the various regulatory mechanisms of alternative splicing. This will be important for informing a complete understanding of splicing, and the development of interventions in splicing decisions.
Modeling of co-transcriptional alternative splicing regulation
To model the dynamics of splicing, we investigated the behavior of a minimal system, in which an alternatively spliced cassette exon is flanked by introns and outer constitutive exons (Fig. 1a). Alternative splicing in this system involves the inclusion or exclusion of the middle cassette exon. Additionally, if splicing fails, one or both of the introns may be retained and intron retention isoforms are generated.
Fig. 1: Modeling co-transcriptional splicing of a three-exon gene.
a Schematic representation of alternative splicing. After transcription, the nascent pre-mRNA is spliced to remove introns (thin lines) and eventually exons (boxes, E1-E3). The middle alternative exon can either be included (i) or skipped (ii) (i.e., spliced out), or the introns are retained if splicing is unsuccessful. b Illustration of co-transcriptional splicing during successive elongation of the nascent transcript (indicated by the expanding coloration of introns and exons in vertical direction). Considering an exon definition mechanism (see main text), introns are spliced out only after completion of synthesis of both flanking exons. Thus, splicing of the first intron (and thus commitment to inclusion; green arrows) is possible once exon 2 is fully synthesized, whereas commitment to skipping (red arrows) requires Ex3 synthesis. c Delay and multi-step modeling of co-transcriptional splicing. Both models describe transcript elongation only after exon 2 synthesis is complete, as no splicing is possible earlier (panel b). The delay model consists of three species (unspliced mRNA as well as skipping and inclusion isoforms) and the skipping rate (ks) increases in a step-wise manner at a time delay τ (red dashed line), whereas inclusion (ki) takes place immediately (green line). τ represents the time it takes to complete exon 3 synthesis, and thus reflects the elongation speed. In the multistep model, the transcript elongation is implemented using a chain of elongation states (P1-P8) and a progression rate kelong. In P1-P6, only commitment to inclusion is possible (ki), whereas both splicing reactions (ki, ks) are possible in P7-P8 (see also b). Intron retention is neglected in both models. d and f Relationship between polymerase elongation speed (vpol) and splicing outcome, as measured by the PSI metric (percent spliced-in, PSI = incl/(incl+skip)). In d, numerical simulations of the delay model (solid green line) are compared to an analytical solution of the same model (blue dots; Methods), whereas in f numerical solutions of delay and multistep models (with few or many elongation steps) are shown (see legend). The value of vpol was calculated based on the length of exons and introns in the transcript, and based on τ (delay model) or the parameter kelong (multistep model), see Methods. Horizontal lines in d show the inclusion-to-skipping ratio (ki/ks) before and after the delay τ. The vertical blue line marks vpol at which a share of 50% of the pre-mRNA is spliced before τ. e Schematic representation of splicing fluxes in the model (top), alongside with dominant fluxes for extremely slow and fast RNA polymerase speed vpol (middle and bottom, respectively). Thick solid arrows show the main reaction fluxes, whereas shaded arrows typify minor or non-existent fluxes.
The scheme in Fig. 1a represents a scenario of post-transcriptional splicing regulation, as splicing decisions are made only after pre-mRNA synthesis is complete. Co-transcriptional splicing regulation involves an additional level of complexity, since not all splice isoforms can be generated at the same time (Fig. 1b). Early after transcription initiation, no splicing commitment is possible, as the necessary sequence elements still need to be synthesized (State P0). Specifically, in human cells, introns can only be spliced out after both flanking exons are fully recognized by the spliceosome, a mechanism that has been termed exon definition46,49,50,51. Thus, the completion of exon 2 synthesis (state P1) marks the first time point in the lifetime of a transcript at which intron 1 splicing is possible. In contrast, the competing skipping reaction occurs later in the transcript lifetime, once all introns and exons are fully synthesized (States P7-P8). Therefore, commitment to the exon 2 inclusion isoform is possible at earlier stages (States P2-P8) than the commitment to skipping (States P7-P8).
To numerically simulate co-transcriptional splicing, we implemented a system of ordinary differential equations (ODEs). The time shift of skipping relative to inclusion was implemented by a time delay τ for the skipping reaction (Fig. 1c, "time delay model"; Supplementary Figure 1). Specifically, the rate of commitment to skipping (ks) is initially zero and increases in state P7 to a positive value. In contrast, the rate of commitment to inclusion (ki) is time-invariant if we neglect the initial splicing-less phase after transcript initiation and start the simulations after the completion of exon 2 synthesis (State P1 in Fig. 1b). For simplicity, we neglected possible intron retention scenarios. Thus, intron 1 splicing was assumed to be always accompanied by intron 2 splicing and therefore marks the commitment to inclusion.
This system using time-dependent reaction rates was solved by integrating the ODEs in a stepwise manner: initially, all transcripts are assumed to simultaneously initiate elongation, i.e., the mRNA precursor was set to 1, whereas the inclusion and skipping products are zero. Then, the integration was performed by considering only the commitment reaction to inclusion until the time point τ where skipping starts. Afterwards, both reactions were taken into account and the concentrations of skipping and inclusion after long integration times reflect the probabilities for commitment to the corresponding splicing products. Under the assumption of steady-state gene expression and equal degradation rates for skipping and inclusion products, this probability is proportional to the experimentally measurable concentrations of the splicing isoforms (see Methods for details).
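As an illustration of this stepwise integration scheme, here is a minimal Python sketch; the rate constants and delays are arbitrary illustrative values, not the parameters used in the paper, and the closed-form expression is a straightforward consequence of the same assumptions:

```python
import numpy as np

def psi_delay_model(ki, ks, tau, dt=1e-3):
    """Stepwise Euler integration of the delay model: the pre-mRNA commits to
    inclusion at rate ki from t = 0 and, in addition, to skipping at rate ks
    once t >= tau (i.e. once exon 3 synthesis is complete)."""
    pre, incl, skip = 1.0, 0.0, 0.0
    t_end = tau + 10.0 / min(ki, ks)      # integrate until the precursor is used up
    for t in np.arange(0.0, t_end, dt):
        ks_t = ks if t >= tau else 0.0
        d_incl, d_skip = ki * pre * dt, ks_t * pre * dt
        incl, skip, pre = incl + d_incl, skip + d_skip, pre - d_incl - d_skip
    return incl / (incl + skip)

def psi_closed_form(ki, ks, tau):
    """PSI = (1 - exp(-ki*tau)) + exp(-ki*tau) * ki / (ki + ks): transcripts
    committing before tau are all included; the rest split by the rate ratio."""
    return (1.0 - np.exp(-ki * tau)) + np.exp(-ki * tau) * ki / (ki + ks)

ki, ks = 0.1, 0.4                 # illustrative commitment rates (1/s)
for tau in [1.0, 10.0, 100.0]:    # large tau corresponds to slow elongation
    print(tau, round(psi_delay_model(ki, ks, tau), 3),
          round(psi_closed_form(ki, ks, tau), 3))
```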
Slow elongation favors exon inclusion in the basic model
We analyzed how the incidence of skipping and inclusion isoforms changes with varying transcript elongation rates (Fig. 1d). Specifically, we asked whether variations in RNA polymerase speed (vpol) affect the inclusion frequency of the cassette exon in the model. Such a dependency of splicing outcomes on transcript elongation velocity had been reported in the published experimental literature28,29,31,52,53.
To mimic altered transcript elongation rates, we assumed that the delay parameter τ for the skipping reaction resulting from transcript elongation is inversely proportional to vpol (see Methods). As a measure of the splicing outcome, we monitored the PSI metric (PSI = inclusion/(inclusion + skipping)), which ranges between 0 and 1 for no and full inclusion, respectively. In line with an earlier modeling study and experimental work on co-transcriptional splicing, we find that the inclusion frequency decreases with increasing polymerase elongation rate (Fig. 1d)25,28,45. At low polymerase speed, inclusion is the only splicing outcome, since all transcripts commit to inclusion before the transcript is elongated beyond the third exon, where skipping can occur (Fig. 1c, d and e). In contrast, fast transcript elongation eliminates this kinetic advantage of inclusion, and the splicing outcome is determined by the relative commitment rates of skipping and inclusion, as in the post-transcriptional scenario (Fig. 1a; see Methods). As the value of vpol increases towards infinity, the model converges towards post-transcriptional results as the delay between splicing commitment events decreases towards 0. Using analytical calculations, it can be shown that the PSI-elongation curve always decreases monotonically (Methods; Supplementary Figure 2), and as such is incapable of recapitulating more complex PSI profiles such as observed in Fong et al.31 In line with kinetic competition between inclusion and elongation, the drop in PSI (measured as the inflection point) occurs when the skipping delay due to polymerase progression (τ) is comparable to the time scale of the inclusion reaction (τ ≈ 1/ki, Fig. 1d).
To quantitatively confirm the above simulation results, we considered an alternative implementation of co-transcriptional splicing regulation: in this multistep model variant, we described the progression of RNA polymerase using multiple consecutive elongation states (each represented by one ODE) and assumed that commitment to skipping is possible only late in the elongation chain (Fig. 1c, bottom). Alterations in the transcript elongation speed were simulated by changing the progression parameter between states (kelong). When plotting PSI as a function of the resulting polymerase speed (vpol; see Methods), we found that the simulation results of the multistep model quantitatively agreed with the delay model for a sufficiently large number of elongation steps (Fig. 1f). This confirms the expectation that a multistep chain with many elongation steps approximates a hard delay well, whereas a chain with few steps only yields a qualitative agreement (Fig. 1f). Taken together, two distinct methods exist for modeling co-transcriptional splicing which both yield identical results provided that enough reaction steps are considered in the multistep formulation.
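For the multistep variant, the inclusion probability can be obtained without numerical integration by a backward recursion over the elongation chain; the sketch below uses illustrative rates and assumes, as in the text, that the elongation speed simply scales with kelong:

```python
def psi_multistep(ki, ks, kelong, n_states=8, skip_from=7):
    """Inclusion probability for an elongation chain P1..Pn.

    In every state, commitment to inclusion (ki) competes with elongation
    (kelong); from state `skip_from` onwards, commitment to skipping (ks)
    competes as well. The last state is terminal (no further elongation)."""
    p = ki / (ki + ks)                        # inclusion probability in the terminal state
    for state in range(n_states - 1, 0, -1):  # walk backwards through P(n-1)..P1
        ks_here = ks if state >= skip_from else 0.0
        p = (ki + kelong * p) / (ki + ks_here + kelong)
    return p

ki, ks = 0.1, 0.4                             # illustrative commitment rates (1/s)
for kelong in [0.01, 0.1, 1.0, 10.0, 100.0]:  # slow ... fast elongation
    print(kelong, round(psi_multistep(ki, ks, kelong), 3))
```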
Non-canonical splicing responses to elongation encoded by position of RBP binding
Genome-wide measurements revealed that changes in the transcript elongation speed affect splicing in a gene-specific manner29,31. Fong et al. analyzed global splicing patterns in cells expressing fast and slow RNA polymerase mutants. In line with the simulations above, they found that a large number of genes show the canonical response where slow elongation shifts splicing towards inclusion (Figs. 1d and 2a, left). However, exons also frequently show the inverse behavior, where a slow RNA pol speed promotes skipping29,31. Moreover, two additional gene classes exist, in which the relationship between transcript elongation and PSI is non-monotonous, resulting in the bell- or U-shaped curves in the experimental splicing-elongation (PSI-vpol) diagram (Fig. 2a, middle and right). These complex behaviors are impossible to obtain with simple kinetic competition of inclusion and skipping as depicted in Fig. 1d and f, requiring additional factors to be considered.
Fig. 2: Complex PSI-vpol profiles with RBP-mediated splicing inhibition.
a Experimentally measured relationship between polymerase speed (vpol) and splicing outcomes (PSI) taken from Figs. 2, 3, and 4, and supplemental Figs. 3, 4, and 5 of Fong et al.31. Splicing profiles of selected splicing events (indicated in titles) as assessed by PCR and gel electrophoresis are shown for polymerase mutants with enhanced and lowered transcript elongation rate (left and right datapoints), alongside profiles when cells express the wildtype enzyme (middle datapoints). Each datapoint is one out of three replicate measurements. The PSI metric can increase or decrease with polymerase speed, and may show U- or bell-shaped responses. b Co-transcriptional splicing commitment model with RBP-mediated inhibition of exon inclusion. Top: Multistep model with inhibitor-mediated escape from inclusion (kesc) towards skipping-committed pre-mRNA molecules (species E3-E8). The depicted model topology involves two steps before the inhibition/escape reaction (P1-P2), a window of opportunity for inhibition (P3-P5), a late state (P6) where skipping is not possible (exon 3 not synthesized) and stages where both inclusion and skipping reactions are possible (P7-P8). Bottom: The time delay model with RBP-mediated inhibition derived from the topology in Fig. 1c. Compared to the basic model it contains an additional species mRNAinh representing an RBP-inhibited mRNA. There are also two additional time delays τinh,1, τinh,2 marking the window of opportunity for inhibition. c Simulated PSI-vpol profiles with RBP-mediated inhibition. Top and middle: Early inhibition scenario, where RBP-mediated inhibition (kesc) occurs already before/when exon 2 is synthesized (reaction P1 → E1 in the multistep model). Monotonically increasing or bell-shaped PSI-vpol profiles are observed, depending on the inclusion-to-skipping (ki/ks) ratio towards the end of the transcript (red dashed lines in the PSI-vpol diagram). Bottom: The late inhibition scenario, in which RBP-mediated inhibition (kesc) occurs after synthesis of exon 2 (reactions P3 → E3, P4 → E4 and P5 → E5 in the multistep model), can result in a U-shaped PSI-vpol profile. The black solid lines depict the fraction of mRNA being inhibited during transcription (i.e., progressing through the mRNAinh state in the delay model). This fraction decreases since fast elongation diminishes the window-of-opportunity for RBP binding. Horizontal lines indicate approximate splicing outcomes predicted by the ratio ki/(ks + kesc) in different splicing commitment regimes along the transcript. As discussed in panel d, at slow and fast elongation, splicing decisions are made early and late after transcript initiation, respectively. Therefore, PSIslow, PSIfast and PSIP3→P5 indicate inclusion-to-skipping ratios at early, late and intermediate times in the transcript lifetime. Vertical lines depict the value of vpol at which 50% splicing commitment occurs in these regimes along the transcript. d Schematic representation of transcript fluxes at different elongation rates for the late inhibition scenario (bottom panel in c). At the top, all possible fluxes for one mRNA molecule are depicted. The lower part shows the main reaction flux considering three elongation regimes. The solid arrows show the main reaction flux, the shaded arrows typify minor or non-existent fluxes.
Dujardin et al. proposed a mechanistic explanation for the inverse splicing response, where slow elongation promotes exon skipping29: they experimentally showed that this response is caused by an RBP that inhibits exon inclusion through competition with a downstream U2AF2 binding site, where RBP binding is favored when slow RNA polymerase delays the synthesis of the competing U2AF2 site. To better characterize this mechanism, we extended our model of co-transcriptional splicing regulation, and additionally considered an RBP-mediated inhibitory reaction (kesc) which shifts the mRNA into an inhibited state (mRNAinh; Fig. 2b, bottom). In this state, commitment to inclusion is no longer possible, but skipping can still occur, though only after the delay time τ. As in the basic model, τ reflects the time it takes for polymerase to complete the synthesis of the last exon. In essence, the RBP inhibitor introduces early commitment to skipping, while preventing inclusion.
Interestingly, this extended model of co-transcriptional splicing regulation not only explained monotonic PSI-vpol diagrams, but could also realize bell- and U-shaped curves depending on the chosen kinetic parameter values (Fig. 2c). As experimental literature has previously demonstrated that many proteins preferentially or exclusively bind co-transcriptionally54,55,56, we assumed that the inhibitory RBP must be deposited in a limited time window by the elongating polymerase. This assumption is also mathematically consistent with the model of Dujardin et al., if we use the simplifying assumption that the binding rate of the competing reaction (U2AF2 binding in Dujardin et al.29) is much greater than binding of the inhibitory RBP. In our model, we implemented this by restricting the inhibitory reaction kesc to a time window between the delay times τinh,1 and τinh,2 (Fig. 2b; Supplementary Table 1). This time frame (τinh,1 → τinh,2) reflects the time it takes for elongating RNA polymerase to reach and pass the position of the RBP motif within the pre-mRNA sequence. Therefore, the parameters τinh,1 and τinh,2 are proportional to the assumed polymerase speed and the time window τinh,1 → τinh,2 increases for slow elongation.
Whether the PSI-vpol diagram is monotonically increasing, decreasing, U- or bell-shaped critically depends on the initial delay of inhibitor binding (τinh,1): In the regime of very slow elongation, delayed reactions play no role, since splicing decisions are made just after the (very long) elongation cycle has started. For such slow elongation, strong and instantaneous inhibitor binding (τinh,1 = 0) favors skipping, whereas inclusion is the only outcome if inhibitor binding is delayed (τinh,1 > 0). Thus, monotonically increasing or bell-shaped PSI-vpol diagrams can be observed for τinh,1 = 0 (Fig. 2c, top and middle rows), while decreasing or U-shaped curves occur otherwise (Fig. 2c, bottom; Supplementary Table 2). In terms of pre-mRNA sequence, the no-delay scenario (τinh,1 = 0) locates the RBP binding motif to (or upstream of) the alternative exon, whereas a delay would correspond to RBP binding downstream of the alternative exon. Hence, in the model, the position of the inhibitory RBP binding motif in the transcript has strong qualitative effects on splicing outcomes.
Non-monotonous (U- or bell-shaped) behavior in the PSI-vpol diagram requires three clearly distinguishable splicing regimes at different elongation rates, as schematically depicted in Fig. 2d for the U-shaped case: slow elongation favors a splicing decision early in the transcript elongation cycle and inclusion is the only possible outcome (Fig. 2d, i). At medium elongation rates, the transcript on average elongates further until a splicing decision is made, and in this regime inhibitor-mediated skipping is the dominant splicing outcome (Fig. 2d, ii). Finally, at very fast elongation, transcription is finished before splicing commitment, and skipping dominates over inclusion for the chosen kinetic parameters in this quasi-post-transcriptional regime (Fig. 2d, iii). As a result, the PSI-vpol diagram exhibits a U-shape, as shown in Fig. 2c alongside the corresponding splicing commitment rates (left, bottom). Similar arguments of kinetic competition between (i) generating a splicing decision at a certain length of the elongating transcript vs. (ii) elongating further explain the other shapes of the PSI-vpol diagram (Fig. 2c, top and middle).
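The following Python sketch illustrates how these regimes can be explored numerically with the delay model plus an inhibition window; all positions and rate constants are illustrative assumptions (not the paper's parameter values), and the delays are simply taken proportional to 1/vpol:

```python
import numpy as np

def psi_with_inhibitor(vpol, ki=0.02, ks=0.08, kesc=2.0,
                       pos_rbp=400.0, width_rbp=100.0, pos_end=1000.0, dt=0.01):
    """Delay model with an RBP that blocks inclusion.

    The RBP can bind only while polymerase transcribes the region
    [pos_rbp, pos_rbp + width_rbp]; skipping becomes possible once the
    transcript end (pos_end) has been reached. All delays scale as 1/vpol."""
    t_inh1, t_inh2 = pos_rbp / vpol, (pos_rbp + width_rbp) / vpol
    tau = pos_end / vpol
    pre, inh, incl, skip = 1.0, 0.0, 0.0, 0.0
    for t in np.arange(0.0, tau + 10.0 / min(ki, ks), dt):
        kesc_t = kesc if t_inh1 <= t < t_inh2 else 0.0
        ks_t = ks if t >= tau else 0.0
        d_incl = ki * pre * dt                 # only uninhibited pre-mRNA can include
        d_esc = kesc_t * pre * dt              # RBP binding -> inhibited state
        d_skip_pre, d_skip_inh = ks_t * pre * dt, ks_t * inh * dt
        incl += d_incl
        skip += d_skip_pre + d_skip_inh
        inh += d_esc - d_skip_inh
        pre -= d_incl + d_esc + d_skip_pre
    return incl / (incl + skip)

# Sweeping vpol traces out the PSI-vpol curve; depending on the rates and on
# whether pos_rbp lies upstream or downstream of the cassette exon, the curve
# can be monotone, bell- or U-shaped.
for v in [1, 3, 10, 30, 100, 300, 1000]:       # nt/s, illustrative
    print(v, round(psi_with_inhibitor(v), 3))
```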
Taken together, co-transcriptional splicing outcomes are shaped by the relative rates of skipping and inclusion which dynamically change during the transcript elongation due to changes in: (i) the availability of exons for splicing; (ii) inhibitor-mediated commitment to certain splicing fates. By an appropriate choice of these parameters, gene-specific PSI-vpol diagrams may be realized as reported experimentally29,31.
Mechanistic modeling of RBP-mediated modulation of splicing
So far, we made simplifying assumptions about the effects of the inhibitory RBP on splicing outcomes. To confirm our findings in a more realistic setting, we turned to mechanistic modeling of RBP binding to pre-mRNA and effects on splicing decisions.
This mechanistic model was based on our previous work on exon definition, in which we modeled recruitment of pioneering spliceosome U1 and U2 subunits to splice sites (Fig. 3a and Enculescu et al.46). Specifically, we considered that all three exons may be "defined" by cooperative U1 and U2 binding. Initially, the pre-mRNA is synthesized as an unbound precursor (P000), and then irreversible exon definition may occur by rate constants k1-k3 (Fig. 3a, left). For instance, both outer exons are defined in the states P101 and P111, whereas the middle exon is either undefined (P101) or defined (P111). These spliceosome binding patterns impact splicing outcomes, as we assume splicing reactions (kspl) lead to inclusion (state P111), skipping (P101) or retention (all other states). The resulting mathematical model is a limit case of the more general kinetic model introduced in Enculescu et al.46 if we assume irreversible spliceosome binding to the pre-mRNA. However, in contrast to the previous work46, we additionally considered here dynamic changes in splicing outcomes due to RBP inhibitor binding and co-transcriptional splicing dynamics.
Fig. 3: Mechanistic modeling of spliceosome binding and exon modulation by RBPs.
a Exon definition model of co-transcriptional splicing. In the unbound pre-mRNA (P000) each exon can be cooperatively bound by the spliceosome and this is modeled using lumped and irreversible exon definition reactions (rate constants k1-k3). Definition of the 1st, 2nd and 3rd exon switches P000 to the states P100, P010, P001, respectively. Spliceosome binding affects splicing outcomes (modeled by the splicing reaction kspl), as the states P101 (exon 2 undefined) and P111 (exon 2 defined) give rise to skipping and inclusion, respectively, whereas no splicing (but intron retention) is possible otherwise. The inhibitory RBP irreversibly binds to unspliced pre-mRNA states (transition from left to right subnetwork; e.g., P000 → P000_inh), affects spliceosome binding (reduced rates k1_inh-k3_inh), but splicing rates remain unchanged (kspl). See also panels b and c, and the Supplemental Material for details. b Local effects of RBP binding on exon definition rates (k1-k3). Not all three exon definition rates (k1-k3) are reduced by RBP binding, but only those of exons located near the RBP binding site (indicated by the magenta pictogram and the vertical dashed line at x = 0). Inhibition is modeled by a bell-shaped inhibition function (kxinh), in which the RBP effects on k1-k3 decay within +/−50 bp around the binding site. c Implementation of co-transcriptional splicing using time-dependent reaction rates. Shown are the reaction rates as a function of time after transcript initiation, with step-like increases or decreases corresponding to time delays. The schematic representation of the three-exon minigene indicates the correspondence of time delays and position of elongating RNA polymerase within the gene. Each exon can be defined immediately after its transcription is complete (top), RBP binding occurs in a short time window after elongation across the RBP binding site and retention is possible only after transcription is finished. d Simulated PSI-vpol profiles for RBP binding within exon 2 (left) or in the downstream flanking intron (right) using the local inhibition profile depicted in b. Two simulations were performed for each RBP localization, in which either exon 2 is less well recognized than exon 3 (k2 < k3, top), or vice versa (k2 > k3, bottom). The PSI profiles (thick solid lines) can be considered as a weighted sum of two limiting cases, a model without RBP-mediated inhibition (green dashed lines) and one in which the RBP binds with high efficiency to ~100% of the transcripts (orange dashed lines). At low and high polymerase velocities, the PSI is approximated by strong and no RBP binding, respectively, as slow elongation extends the temporal window of opportunity for polymerase-mediated RBP recruitment. The fraction of RBP bound to the pre-mRNA in the full model (solid line) is shown as a dashed magenta line. The abundance of the retention isoform is shown as a solid red line.
As depicted in Fig. 3a (right), the inhibitory RBP modulates splicing outcomes by irreversibly binding to all unspliced pre-mRNA states (e.g., P000 → P000_inh) with the rate rbp_br. Subsequently, spliceosome binding occurs at reduced rates (k1_inh-k3_inh), but splicing rates (kspl) remain unchanged. Thus, the inhibitory RBP changes splicing outcomes by blocking initial spliceosome recruitment. Importantly, this occurs only locally around the sites of RBP binding, i.e., around an assumed RBP motif. The spatial range of RBP effects on spliceosomes which we implemented in the model is depicted in Fig. 3b. We assumed a bell-shaped inhibition profile, in which the RBP effects on k1-k3 decay within ~100 bp around the binding site, as previous experimental literature has shown a substantial decay in a protein's effect when a protein's binding site is shifted from 70 to 140 or 200 bp from the splice site57,58.
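As one concrete possibility, the sketch below implements such a position-dependent inhibition profile as a Gaussian; the functional form, width and strength are illustrative assumptions consistent with the description above, not the paper's exact parameterisation:

```python
import numpy as np

def inhibition_factor(distance_bp, strength=0.9, sigma_bp=25.0):
    """Fractional reduction of an exon definition rate as a function of the
    distance (in bp) between the RBP motif and the exon. The Gaussian profile
    has essentially decayed within about two sigma (roughly 50 bp here)."""
    return strength * np.exp(-0.5 * (distance_bp / sigma_bp) ** 2)

def inhibited_rate(k, distance_bp):
    """Exon definition rate in the RBP-bound state (the k1_inh .. k3_inh analogue)."""
    return k * (1.0 - inhibition_factor(distance_bp))

k2 = 0.05                       # illustrative exon 2 definition rate (1/s)
for d in [0, 25, 50, 100, 200]: # distance of the RBP motif from the exon, in bp
    print(d, round(inhibited_rate(k2, d), 4))
```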
Co-transcriptional spliceosome binding and splicing were considered in the model by assuming that an exon can only be defined after its synthesis is complete. Hence, the rate constants k1-k3 and k1_inh-k3_inh increase in a stepwise manner after transcript initiation with individual delays reflecting the relative positions of exons within the pre-mRNA (Fig. 3c; Supplementary Table 3). For RBP binding, we again assumed RNA polymerase-dependent recruitment55, and therefore modeled RBP binding to be restricted to a short window of opportunity, reflecting the phase when the elongating enzyme passes the RBP motif (Fig. 3c, bottom). Since most human transcripts are spliced co-transcriptionally52, we assumed that full-length transcripts may undergo a transition into an intron retention isoform with the rate ret_r (Fig. 3c, bottom).
For the kinetic parameters, we assumed physiologically plausible ranges in our simulations: Values for the RNA polymerase velocity (vpol) were chosen based on the quantitative data in the experimental literature59. Taking the polymerase speed into account, the delay times for exon definition reactions (i.e., waiting times for spliceosome binding to splice sites) were adjusted based on the intron/exon structure of a previously published reporter gene comprising RON exons 10-12 (ref. 46). The subsequent spliceosome binding and splicing commitment reactions were chosen to reflect the experimentally reported overall splicing times ranging from a few seconds to several minutes48,59,60. In further support of the physiological plausibility of our model, we demonstrate in Supplementary Figure 3 (also see Supplementary Table 4) that it is able to quantitatively reproduce dynamic co-transcriptional splicing measurements at the single-molecule level reported by Coulon et al.59.
The mechanistic model fully reproduced the experimentally observed PSI-vpol diagrams, including monotonically decreasing or increasing curves, as well as bell- and U-shapes (Fig. 3d). In summary, a monotonic decrease is observed in the absence of an inhibitory RBP (Fig. 3d, dotted orange lines in all panels). If the RBP binds early during the elongation cycle (i.e., within the alternative exon) this behavior can be reversed into a monotonic increase or a bell-shape (Fig. 3d, left), whereas binding downstream of the alternative exon allows for the U-shape (Fig. 3d, right). In the model, all these behaviors are linked to dynamic changes of RBP inhibitor binding at different elongation rates (Fig. 3d, dashed pink lines in all panels). Another important determinant is the relative strength of exons 2 and 3 (i.e., the ratio of the respective recognition parameters k2 and k3): A reduced exon 2 recognition rate favors skipping, especially for fast elongation (post-transcriptional case), and may therefore convert a monotonically increasing curve (Fig. 3d, bottom left) into a bell-shape (Fig. 3d, top left).
Taken together, our mechanistic model describes co-transcriptional splicing regulation at the level of individual splice site regulation by RPBs. Compared to the simple model (Figs. 1 and 2), the mechanistic description can accommodate more complex PSI-vpol diagrams (e.g., Fig. 3d, top right) and shows more diverse behavior for a given binding position of the RBP (Fig. 5). Furthermore, it better represents the biophysical properties of RBP and spliceosome binding, and therefore allows us to better characterize mechanisms of splicing regulation by RBPs.
Complex position-dependent RBP effects in the mechanistic model
In the experimental splicing literature, extensive evidence supports that the same RBP can frequently act as both activator and inhibitor of exon inclusion depending on its location of binding (see Introduction): For instance, experiments in which a variety of RNA binding protein motifs were placed up- or downstream of a 5ʹ splice site showed opposite effects on splice site usage depending on their position22. This effect was dependent on which protein was being investigated; with SR and traditional activator proteins having an enhancing function on alternative exon recognition upstream, and a silencing function downstream of the 5ʹ splice site, whilst the opposite was observed for hnRNPs and traditional silencer proteins.
Using our mechanistic co-transcriptional splicing regulation model, we investigated how the RBP binding position affects splicing outcomes (PSI) for a given polymerase elongation speed (vpol). We generated PSI heatmaps, in which we systematically varied vpol and the position of the RBP binding motif, again using the percent spliced-in metric as a readout (Fig. 4a; Supplementary Table 5). For the relative length of introns and exons, we chose the dimensions of a minigene spanning RON exons 10-12 which we characterized in our recent work43. In line with the published literature, we found that an inhibitory RBP which blocks spliceosome recruitment can be both an inhibitor and an activator of alternative exon inclusion depending on its binding position. This can be seen along the red dashed line in Fig. 4a and in the corresponding two-dimensional projection in Fig. 4b (top): here, inhibitor binding close to splice sites of constitutive exons increases inclusion (Fig. 4b, vertical dashed lines around positions 210 and 530, respectively). In contrast, inclusion is diminished for inhibitor binding around the splice sites of the alternative exon (Fig. 4b, vertical dashed lines around positions 300 and 440, respectively). These inclusion levels should be compared to the plateaus of peripheral RBP binding (positions around 100 and 575), which correspond to a lack of RBP impact on exon definition rates k1_inh – k3_inh (Fig. 4b, bottom). Thus, in our model the RBP can play a dual role, being both an activator and inhibitor of inclusion depending on its binding position.
Fig. 4: Splicing activation and inhibition by RBPs depending on their binding position.
a Heatmap of simulated PSI values as a function of the position of the RBP binding motif (x-axis) and RNA polymerase speed vpol (y-axis) using the inhibition function shown in Fig. 3b. The schematic representation below indicates the positions of the three exons (colored rectangles) and the joining introns (black lines). The horizontal red dashed line indicates the vpol value chosen in b. b Position-dependent opposing RBP effects on alternative splicing. Top: PSI (blue line) and intron retention (orange) as a function of the RBP binding position at a fixed RNA polymerase speed (vpol = 50 nt/s). Depending on its binding position (and the inhibited exons), the inhibitory RBP can promote or suppress the inclusion of the cassette exon relative to a simulation without RBP (horizontal black dashed line, 'PSI default'). Increased inclusion is accompanied by a slightly increased retention proportion due to a delay in exon 1 or 3 recognition and splicing. Bottom: Modulation of protein-bound exon definition parameters (k1_inh-k3_inh) by the RBP binding position. The local RBP inhibition function is assumed to be bidirectional around the RBP binding position (x = 0) as depicted on the right (same function as the one shown in Fig. 3b). c Dual RBP role most pronounced at intermediate RNA polymerase velocities. Simulations with the same parameters and assumptions as in b, but for varying RNA polymerase speeds (see legend). Strong activation and inhibition of exon inclusion depending on the RBP position are most pronounced at intermediate polymerase velocities (orange line). Outer plateau at RBP position ~150 corresponds to the absence of RBP-mediated inhibition. d Position-dependent RBP effects on exon inclusion for asymmetric RBP-mediated inhibition of splice sites. Top: An asymmetric, right-skewed RBP inhibition function (right) switches the directionality of splicing changes (activation vs. inhibition) when the RBP binding is located upstream or downstream of a 5ʹ splice site of an exon (vertical dashed lines). Bottom: The splicing switch from exon inclusion to skipping (or vice versa) is accompanied by a sharp intersection of constitutive and alternative exon definition rates (red arrows).
From a mechanistic viewpoint, this dual role can be explained by kinetic competition at the level of exon 2 and exon 3 definition: In the context of co-transcriptional splicing regulation, suppressing the outer exons (lowering k1 or k3) by local RBP binding gives the middle alternative exon a longer time window to be recognized by the spliceosome. Thus, RBP binding favors recognition of all three exons and thus inclusion when compared to skipping (which requires only definition of exons 1 and 3). In contrast, local inhibition of the alternative exon (k2) selectively blocks the inclusion reaction, thereby lowering the PSI relative to the absence of RBP-mediated regulation. These position-dependent RBP effects resemble experimental observations for PTB, an RBP that indeed inhibits inclusion when bound to the alternative exon, while promoting inclusion when bound to flanking constitutive exons27. Interestingly, in our model, the position-dependence disappears for very fast elongation, since the RBP is less likely to be deposited by a rapidly progressing polymerase enzyme (Fig. 4c, red line). This further demonstrates the complex interplay of RBP binding position and transcript elongation in the context of co-transcriptional splicing.
Another free parameter in the model is the spatial RBP-mediated inhibition profile, which we modeled using a bidirectional +/−50 bp Hill function around the RBP binding motif in Fig. 4b (right). Figure 4d shows simulations with an alternative, right-skewed inhibition function, where the RBP mainly affects downstream sequence elements that are yet to be transcribed. For this inhibition function, the RBP position-dependent effects on PSI better match the experimental reports for RBPs other than PTB, in which RBP binding within the alternative exon (positions 300-440) had the opposite effect compared to downstream binding (positions >440) (see Introduction).
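To illustrate how such distance-dependent inhibition profiles can be parameterized, the following sketch implements a generic Hill-type decay of the RBP effect with splice-site distance. The +/−50 bp half-range and the Hill coefficient of 8 are the values quoted in the text; the exact published functions (Methods, Eqs. 14-15) are not reproduced here, so the functional forms below, and in particular the skewed variant with an assumed 10 nt upstream range, are illustrative assumptions rather than the published parameterization.

```python
# Illustrative sketch (an assumption, not the published Eqs. 14-15): Hill-type
# decay of the RBP's inhibitory effect with the distance d (in nt) between its
# binding motif and a splice site.
import numpy as np

def inh_profile(d, half_range=50, hill=8):
    """Bidirectional inhibition strength as a function of distance d."""
    return 1.0 / (1.0 + (np.abs(d) / half_range) ** hill)

def inh_profile_skewed(d, down_range=50, up_range=10, hill=8):
    """Right-skewed variant: mainly affects downstream (d > 0) sequence
    elements; the 10 nt upstream range is a placeholder value."""
    r = np.where(d >= 0, down_range, up_range)
    return 1.0 / (1.0 + (np.abs(d) / r) ** hill)

distances = np.array([-100, -25, 0, 25, 100])
print(inh_profile(distances))         # symmetric around the motif
print(inh_profile_skewed(distances))  # biased towards downstream sites
```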
Interestingly, both the bidirectional and the right-skewed RBP inhibition scenarios exhibit a local asymmetry in their impact on PSI, in particular at the 5ʹ splice site of the 1st exon (red arrows in Fig. 4b, c and d). Here, the RBP has a maximal impact on PSI when bound upstream of the affected 5ʹ splice site, whereas downstream binding diminishes the RBP effect. This asymmetric RBP effect arises from kinetic competition and temporal order of events during transcript elongation: upstream RBP binding can saturate the transcript before exon 1 definition is possible, thereby effectively preventing the definition reaction. In contrast, for a downstream RBP binding site exon 1 definition may be partially complete before recruitment of the inhibitory RBP.
To further investigate this kinetic competition, we analyzed PSI profiles for various parameter values against a single set of parameters for comparison (Fig. 5a). In line with kinetic competition, we found that increasing the exon 1 definition rate (Fig. 5b) and the RBP binding rate (Fig. 5c) had opposite effects on the PSI profile around the splice site. In particular, a high exon 1 definition parameter almost completely abolished the impact of the RBP downstream of the splice site, thereby enhancing asymmetry of the RBP effect (red circle in Fig. 5b). Similarly, an increased exon 2 definition parameter resulted in a more asymmetric PSI profile around the 5′ splice site of exon 2 (red circle in Fig. 5d). Hence, the temporal order and relative speed of exon definition vs. RBP-mediated inhibition can be shown to affect the position-dependence of RBP effects in a co-transcriptional context. In our model, asymmetric RBP effects are not observed around the 3′ splice sites of the exons, since we assume that exon definition can only occur once the whole exon has been synthesized. However, if the RBP controls splice sites with a sufficiently large spatial range to simultaneously affect the 5′ and 3′ splice sites of an intron, additional inflection points can be observed (red circles in Fig. 5e). Furthermore, kinetic competition does not apply to splice site activators, as activator binding promotes, rather than competes with, exon definition. Corresponding simulations for an activator confirmed that the impact of an activator on PSI is symmetric, i.e., independent of the binding position relative to the splice site (Fig. 5f).
Fig. 5: Kinetic competition of exon definition and RBP-mediated inhibition around splice sites.
a Reference simulation for the position-dependent RBP effect. Figure 4b has been recreated with the Hill coefficient of the RBP's range-dependency function (Fig. 4b, right) increased from 8 to 32 (see Methods, Eq. 15). This results in the RBP exhibiting a quasi-binary inhibition profile as a function of the binding site distance. Due to kinetic competition of exon definition and RBP-mediated inhibition, the RBP effect on PSI is asymmetric up- and downstream of the 5′ splice sites of exons 1 and 2 (see red circles and main text). b and d Multiplication of the exon definition rate parameters k1 and k2 by 5 relative to the reference simulation lowers the local magnitude of the RBP effect, while increasing RBP effect asymmetry around the relevant 5′ splice sites (see red circles), since exon definition more efficiently competes with RBP-mediated inhibition. c Multiplication of the RBP binding rate (rbp_r) by 4 slightly increases the RBP effect on PSI, with slightly more pronounced saturation when binding occurs upstream of the 5′ splice site of exon 1 (red circle). e Increasing the RBP effect range from 25 nucleotides (nt) to 65 nt shows that the asymmetric regulation of PSI by the RBP around 5′ splice sites persists if the RBP has overlapping effects on multiple splice sites (red circles). f The RBP's function (Eq. 14) was altered by adding, rather than subtracting, the inhFunc term, therefore switching it to an activating function. When the RBP functions as an enhancer (activator) of splice site usage, the kinetic competition effect (RBP effect asymmetry on PSI) is not observed around the splice site that the RBP affects. This can be explained by the fact that an activator behaves non-competitively with the definition of the splice sites it impacts, instead enhancing the exon definition rates k1-k3.
Taken together, by mechanistic modeling we derived a kinetic framework that quantitatively predicts splicing outcomes in co-transcriptional context based on RBP binding position, elongation rate and exon definition rates.
Noise in alternative splicing follows a binomial distribution
Cellular RNAs are frequently expressed at low levels, often summing up to a total concentration of only a few molecules per cell61. At such low concentrations, biochemical reactions do not occur deterministically, but involve a probabilistic component. Thus, alternative splicing may be a stochastic process with uncertainty in the exon inclusion frequency, as opposed to a deterministic system where the fraction of the inclusion isoform is predictable and completely determined by the kinetic rate constants62.
To quantify uncertainties in splicing outcomes, we performed stochastic simulations using our co-transcriptional splicing models (Figs. 1–3). For stochastic simulations, we sampled the time-dependent probability of the exon definition, RBP binding, and splicing reactions from exponential distributions to determine the order of reaction steps using the Gillespie algorithm63 (see Methods). This way, we account for stochastic variation in the binding reactions of the RBP and splicing factors, and how this impacts splicing decision making. The stochastic simulation was repeated 5000 times for each parameter combination, each model realization reflecting the behavior of one single cell. Exemplary time course simulations for the splicing commitment model (Fig. 2c) are shown in Fig. 6a and c. Likewise, Fig. 6d and f contain simulations of the mechanistic model analyzed in Fig. 3b and c. At all time points, both models show a simple unimodal distribution of the model species across single cells.
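As an illustration of this procedure, the sketch below implements a minimal Gillespie realization of the delayed splicing commitment model: inclusion is possible from the start, skipping only after a delay τ, and the isoform choice of each transcript is drawn from the reaction propensities. The rate values and the delay are illustrative placeholders, not the published parameters, and RBP-mediated escape is omitted for brevity.

```python
# Minimal sketch (not the published simulation code) of a Gillespie realization
# of the delayed splicing commitment model: inclusion (ki) competes from t = 0,
# skipping (ks) is switched on only after the delay tau.
import numpy as np

def gillespie_psi(n_mrna=100, ki=0.2, ks=0.4, tau=3.0, seed=None):
    rng = np.random.default_rng(seed)
    t, mrna, incl, skip = 0.0, n_mrna, 0, 0
    while mrna > 0:
        a_incl = ki * mrna
        a_skip = ks * mrna if t >= tau else 0.0    # delayed skipping propensity
        a_tot = a_incl + a_skip
        dt = rng.exponential(1.0 / a_tot)
        if t < tau and t + dt > tau:
            t = tau          # propensities change at tau; redraw (memorylessness)
            continue
        t += dt
        if rng.random() < a_incl / a_tot:
            incl += 1
        else:
            skip += 1
        mrna -= 1
    return incl / (incl + skip)

# PSI distribution across 5000 simulated "cells"
psis = [gillespie_psi(seed=s) for s in range(5000)]
print(np.mean(psis), np.std(psis))
```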
Fig. 6: Intrinsic noise in stochastic splicing simulations quantitatively agrees with a binomial model.
a and c Stochastic time course simulations of the splicing commitment model with time delay depicted in Fig. 2c (bottom) using the Gillespie algorithm. The RNA polymerase speed (vpol) is set to 50 nt/s and the initial mRNA molecule count to 100. The thin lines show 100 individual simulation runs (each representing one single cell) and the thick lines represent their mean value for each species (see legend). Panel c shows only the first few time points of the simulation, with the time delays of individual reactions being annotated at the bottom (Incl.on, Esc.on and Skip.on: inclusion, RBP-mediated escape and skipping reactions switched on, respectively). b Noise-mean relationship in the splicing commitment solely depends on average PSI and total molecule count. The mean and std of the PSI-value (Incl/(Incl + Skip)) at the end of simulation (t = 1000) were calculated for different absolute mRNA numbers (molecule counts, mc). To ensure stability of results, each circle represents the average of 5000 individual simulation runs. Various mean PSI-values were generated by the variation of vpol (see Methods for details). The results are compared to a binomial model (thin lines), in which the molecule count is given by the number of trials (see legend). d and f Stochastic time course simulations of the mechanistic exon definition model depicted in Fig. 3a using the Gillespie algorithm. Parameters correspond to the bottom left panel in Fig. 3d with a polymerase speed of 50 nt/sec and an initial P000 count of 100 molecules. Details are analogous to panels a and c, but distinct molecular species are displayed, i.e., the inclusion and skipping isoforms as well as the spliceosome binding intermediates P000 and P100 without RBP binding, or with the RBP being present (P100_inh). The time delays of the individual reactions are annotated at the bottom (RBP+ and RBP-: RBP binding switched on and off, respectively). e Splicing noise in Gillespie simulations of the exon definition model quantitatively agrees with the binomial model. The PSI noise (standard deviation, std) at the end of a simulation was averaged over 5000 simulation runs, each circle representing a different mean(PSI) value and total molecule count, generated using various parameter values and/or RBP binding positions (see Methods). To correct for varying degrees of retention, the molecule count in the mechanistic model equals the sum of Incl and Skip isoforms (see colorbar). After matching of mean(PSI) and molecule count, mechanistic and binomial simulations perfectly overlap.
For the splicing commitment model with time delay (Fig. 1c), the cell-to-cell variability of splicing outcomes was quantified by relating the standard deviation and mean of the PSI metric across cells (Fig. 6a and b; Supplementary Table 6). In this analysis, we considered simulations with different initial transcript counts per cell. Furthermore, we took into account various model parameter values (polymerase speeds) as well as model variants with RBP binding at different locations (Fig. 2c). The resulting noise-mean relationship exhibits a bell-shape, showing zero noise at a mean PSI close to one or zero, and a peak at intermediate mean PSI values (Fig. 6b). Interestingly, the curves we observe in response to stochastic variation are very similar to the bell-shaped PSI changes induced by mutations or RBP knockdowns42,43,64. Thus, the PSI metric exhibits a nonlinear response to both deterministic and stochastic perturbations. This is due to the fact that skipping and inclusion reactions are balanced at intermediate PSI values, whereas one of the reactions strongly dominates at low and high PSIs, respectively.
In the stochastic model, the height of the std(PSI)-peak is solely determined by the total transcript count per cell, but not by the other parameters in the model. At very low molecule numbers, the splicing outcome is very noisy, whereas it approaches the deterministic solution (i.e., shows a small standard deviation) for a total expression of >200 molecules per cell (Fig. 6b). Interestingly, the noise-mean curves of all model variants are perfectly congruent with a simple binomial distribution, in which two categorical outcomes are drawn from a random distribution (solid lines in Fig. 6b). Thus, after correction for the total number of splicing events, the system behaves like a simple binary decision between two alternative isoforms despite being regulated by multiple mechanisms including the elongation rate and RBP binding. The presence of the intron retention isoform in the mechanistic model prevented a similar analysis for this model, so the mean and standard deviation of PSI were compared directly to the binomial model to determine the noise relationship (Fig. 6e). Again, after accounting for the total number of inclusion and skipping molecules per cell, the model perfectly agrees with the predictions of a binomial distribution, even though the splicing decisions are complex events involving multiple exon definition reactions. Taken together, our results show that while co-transcriptional alternative splicing regulation by trans-acting factors increases the number of pathways by which a splicing decision can be made, with substantial effect on outcomes, it adds little intrinsic stochastic noise. This explains why a large part of cell-to-cell variability in two splicing decisions that were experimentally characterized using single-molecule RNA-FISH could be explained by a purely binomial model65.
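For reference, the binomial expectation used for these comparison curves can be written down directly: if each of n transcripts is independently included with probability p, then PSI = X/n with X ~ Binomial(n, p) and std(PSI) = sqrt(p(1−p)/n), which peaks at PSI = 0.5 and shrinks with 1/sqrt(n). The short sketch below (our reconstruction, not the authors' plotting code) evaluates these reference curves.

```python
# Binomial reference curves as in Fig. 6b (our reconstruction): std(PSI) as a
# function of mean PSI for different total transcript counts per cell.
import numpy as np

mean_psi = np.linspace(0.0, 1.0, 101)
for n in (10, 50, 200, 1000):                       # total transcripts per cell
    std_psi = np.sqrt(mean_psi * (1.0 - mean_psi) / n)
    print(f"n = {n:4d}: peak std(PSI) = {std_psi.max():.3f}")
```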
Bimodality in alternative splicing arises from promoter bursting and feedback
In the previous section, we primarily observed binomial splicing fluctuations; however, bimodality in alternative splicing has been reported in the literature. Such bimodal behavior is characterized by two clearly separated peaks in the PSI histogram, i.e., either inclusion or skipping predominates, and this may be physiologically relevant, as alternative splicing isoforms have been found to be significant in determining cell identity66,67,68. We therefore studied how bimodal distributions can be realized in our models.
In Fig. 7a we demonstrate the realization of two possible mechanisms of achieving bimodality in splicing (see also Supplementary Table 7). The first is achieved through transcriptional bursting, in which the promoter of a gene switches between periods of minimal and high transcription69. The time course in Fig. 7b shows how the inclusion and skipping isoforms increase proportionally to each other during a burst, giving the higher PSI peak in the histogram of the time course (Fig. 7c). We additionally assume different degradation rates for the two splicing isoforms. Then, upon termination of the burst, the unstable isoform (inclusion) decays rapidly, with the slowly degrading isoform (skipping) eventually becoming the sole isoform, corresponding to the lower peak at PSI = 0 in the histogram (Fig. 7c). Hence, the differential temporal stability of inclusion and skipping isoforms after burst termination establishes bimodality.
Fig. 7: Bimodality in splicing fates due to transcriptional bursting and positive feedback.
a Extended co-transcriptional splicing model that incorporates transcriptional bursting and/or positive feedback. The promoter randomly switches between a transcriptionally active state (Promon) and an inactive state (Promoff), with rate constants kon (inactive to active) and koff (active to inactive). Transcription initiation starts only when the promoter is active (with rate Vsyn). Upon initiation, transcripts begin to be synthesized via RNA polymerase elongation. As in our previous models (Fig. 1c), the elongation is modeled as a multi-step process (P1 → P8) with an elongation rate kelong for each step. Splicing commitment to the inclusion isoform (with rate ki) can occur during elongation (I1 → I8), while commitment to skipping (with rate ks) is only possible later when the last exon is fully synthesized. The protein product of the skipping isoform (functioning as an RBP) can bind to its own mRNA precursor to enhance commitment to the skipping isoform (P8 → E8). This positive feedback is mathematically modeled by the +ve function which is detailed in Methods. Emergence of multimodality in splicing arising from transcriptional bursting with variable degradation (b and c), positive feedback (d and e), and the combination of both (f and g). a, c and e represent both stochastic and ODE results at steady state (timepoints 1000 to 2000), starting from an active promoter with zero pre-existing transcripts. Histograms of PSI values in b, d and f represent density of timepoints within each bin over an extended time period of 10,000 timepoints. The red dotted line gives the results of a kernel density estimator based on the normal distribution provided with the Python SciPy Stats module (gaussian_kde, bw_method = "scott"), with local minima and maxima represented by green and red arrows, respectively.
Our second mechanism involves a positive feedback loop, in which the skipping isoform promotes further skipping reactions once the skipping isoform reaches a threshold level. Such positive autoregulation has been shown for the SXL gene in D. melanogaster70. As can be seen in Fig. 7d, positive feedback regulation gives rise to alternating periods of high and low PSI, corresponding to separated peaks in the time course histogram (Fig. 7e). Bimodality emerges because the feedback loop is essentially off at low levels of skipping, but stochastic fluctuations may switch the loop on, giving rise to plateaus where skipping exceeds inclusion. Notably, when the feedback is off the system averages to the ODE result (Fig. 7d).
Combining transcriptional bursting with differential degradation and positive feedback results in tri-modality, as observable in Fig. 7f. In this model, the third intermediate peak in the histogram arises because during a sufficiently long burst, skipping accumulation triggers positive feedback, thereby eventually lowering the PSI during the burst. Notably, if the positive feedback loop becomes effective during a burst, it is possible for the effect to persist and impact the starting PSI of a closely following burst, as observed in the two bursts between timepoints 1200 and 1400 (Fig. 7g).
Taken together, these results show how stochastic implementations of our splicing models can be modified to realize bimodal distributions. The underlying mechanisms, transcriptional bursting and feedback amplification of splicing outcomes, are common in human gene expression regulation. Notably, feedback amplification may not only be established by direct positive feedback, but could also involve double negative feedback regulation, which has been described for several splice-regulatory RBPs71,72, or in related gene-regulatory networks (e.g., the LIN28-let-7 system73). By realizing discrete splice isoform expression regimes of key regulatory molecules, the proposed mechanisms may aid in the establishment of cell identity.
In this work, we derived a quantitative description of co-transcriptional splicing dynamics. We implemented two models that differed in their level of detail: (i) a splicing commitment model, in which effective commitment reactions to skipping and inclusion are assumed, and (ii) a detailed mechanistic model, in which skipping and inclusion isoforms are not produced independently, but are interrelated, as both splicing decisions share a common set of constitutive splice sites that need to be recognized.
Both models describe co-transcriptional splicing commitment by assuming that certain reaction steps occur with a delay relative to other events, and thereby resemble a previously proposed mathematical model of co-transcriptional splicing45. In addition to these delay models, we also implemented a multistep formulation of co-transcriptional splicing commitment dynamics, in which transcripts of different length are described as discrete states, each state being a variable in the ODE system. By numerical simulations, we show that multistep and delay formulations yield identical results if a sufficient number of steps are considered in the multistep formulation, i.e., if the multistep formulation approaches a continuum and the discretization approximation can be neglected.
While our co-transcriptional splicing commitment models resemble published work45, we focus here on a novel aspect, the determination of co-transcriptional splicing outcomes by RBPs. In the splicing commitment model, we made the ad-hoc assumption that RBP binding blocks commitment to inclusion, and thereby establishes early skipping commitment. In the exon definition model, we included additional mechanistic details, describing the co-transcriptional recruitment of the RBP inhibitor to defined pre-mRNA sequences by RNA polymerase and its local effects on exon definition. Thereby, the RBP simultaneously affects inclusion, skipping and/or intron retention isoforms.
In both co-transcriptional splicing commitment models, the RBP inhibitor could establish non-intuitive splicing responses towards alterations in the RNA polymerase velocity (Figs. 2 and 3). In line with the experimental literature, these responses included monotonically increasing, monotonically decreasing, bell-shaped or U-shaped PSI-vpol relationships31. All such behaviors could be recapitulated by the appropriate choices of RBP binding position and splice site strength, with the RBP binding position determining the PSI value at low polymerase speeds, and the splice site strength determining the PSI at high polymerase speeds. Finally, a necessary assumption in the model was that RBP binding (i.e., the percentage of occupied pre-mRNAs) dynamically changes for alterations of the RNA polymerase elongation speed (Fig. 3d). Such a speed dependency may arise if RNA polymerase deposits the RBP on the sequence during elongation when it passes the RBP sequence motif55. In this scenario, faster elongation shortens the time window of opportunity for RBP deposition on the pre-mRNA, and thereby affects total RBP binding. In line with these assumptions, RNA polymerase is known to recruit numerous splicing regulatory factors to mRNA, including Prp1954, Prp4056, and U2AF254. For several of these RBPs, it was shown that binding independent of RNA polymerase is inefficient, suggesting that they are recruited primarily during transcription. Moreover, evidence exists that in this mode of co-transcriptional binding slower transcript elongation enhances overall RBP binding, as we had assumed in our model29.
Many RBPs exhibit antagonistic effects depending on their binding position relative to other splicing-related sequence features18,21,22,23,24,25,26,27. For some proteins such as Rbfox these contradictory effects result from looping and other long-range interactions that are not considered in this work25. For several other proteins, however, previous work has shown contradictory effects arising from short-range interactions either around a single splice-site22,23 or when there is competition between alternative 5′ or 3′ splice sites22. In line with these experimental findings, we observe a dual role of inhibitory RBPs on splicing outcomes in our model, involving suppression or enhancement of inclusion even at a fixed RNA polymerase velocity (Fig. 4). In the present implementation, the underlying mechanism is the kinetic competition of outer and inner exons: in the context of co-transcriptional splicing regulation, inhibitor binding to the outer exons simultaneously reduces skipping and inclusion, but provides a kinetic advantage to the full recognition of all three exons (P111, inclusion) compared to the pure recognition of the outer exons (P101, skipping)44. In the literature, alternative mechanisms for position-dependent RBP effects have been suggested, including the formation of distinct spliceosome complexes for upstream and downstream RBP binding74. Our model provides a quantitative framework to implement such mechanisms and to design experiments to test them.
Naturally, the action of an RBP on a splice site is affected by the distance between the RBP binding site and the splice site. For simple regulatory mechanisms, reliant on direct interactions instead of topological alterations of the transcript, the magnitude of the effect on splicing decays with splice site distance57. We additionally observe changes in the magnitude of the PSI effect of an RBP at the 5ʹ splice sites of exons 1 and 2, with maximal impact for RBP binding upstream of the splice site (Fig. 5). This RBP effect asymmetry arises from kinetic competition of exon definition and RBP-mediated inhibition of the definition reaction, since, as the binding site is moved downstream, an increasing number of transcripts will have already undergone exon definition before the RBP binds. The RBP effect asymmetry is absent for activators, as these promote and do not compete with the exon definition reaction.
Our models provide a means to design experiments to describe complex relationships between polymerase speed and the percentage inclusion of alternative exons that have previously been observed on a genomic basis31, but have thus far been difficult to characterize mechanistically at the level of individual exons. Our models predict that these non-intuitive behaviors arise depending on the position of binding of an inhibitory RBP within or downstream of the alternative exons. As a means to experimentally test these predictions, we propose to introduce artificial RBP binding sites into three-exon minigenes, e.g., through shifting of binding motifs41, introduction of artificial binding sites using fusion proteins75, and tethered-oligonucleotide binding sites76. Crucially, our model predicts that placement of an inhibitory RBP upstream of an alternative exon's splice sites results in a monotonically increasing exon inclusion for increasing polymerase speed, or an optimal polymerase speed for inclusion (Fig. 2). Splicing analysis of the proposed minigenes, e.g., by capillary gel electrophoresis or RNA sequencing, upon systematic perturbation of the transcript elongation rate using polymerase mutants or the topoisomerase inhibitor camptothecin59, will confirm whether the behaviors predicted by our models indeed occur in a real biological system. In contrast, placement of an inhibitor protein downstream of the alternative exon is predicted to result in a U-shaped relationship with a specific polymerase speed that results in minimal exon inclusion (Fig. 2). Again, this prediction may be tested by combining artificial RBP binding sites with a titration of the RNA polymerase-dependent elongation rate. Taken together, our models represent a framework for designing in vivo testing schemes in order to quantitatively understand the effects of transcript elongation and RBP binding positions on splicing outcomes. The validation experiments will, in turn, constrain the parameter values and molecular mechanisms considered in the model, thereby resulting in a refined description of co-transcriptional splicing dynamics.
Mathematical models are abstractions of complex biological systems. Likewise, our models of co-transcriptional splicing do not capture the full complexity of the process. Given the limited quantitative experimental data available in the literature, a full description of all biological aspects was also not what we aimed for, since the consideration of additional mechanisms leads to additional unknown parameters and thus to uncertainties in the behavior of the model.
Our goal in the present study was a conceptual understanding of co-transcriptional dynamics and its modulation by RBPs. It is very likely that the main findings in this work will remain qualitatively valid if additional regulatory mechanisms (such as the ones discussed below) are taken into account, although this remains to be determined in future studies, e.g., if sufficient quantitative information becomes available for certain aspects of splicing regulation. For the present work, we focused on the simplest model versions that are much easier to handle in terms of simulation analysis due to fewer degrees of freedom.
One important simplification we made was the assumption of splicing commitment reactions that do not necessarily reflect the actual splicing catalysis. In fact, experimental work suggests that commitment likely involves the formation of the earliest spliceosomal cross-intron complexes77. Importantly, while subsequent spliceosome maturation by recruitment of U4-U6, followed by two-step catalysis (intron removal) and finally spliceosome release could be implemented in the model, this would not affect splicing outcomes, as long as the initial commitment step is (quasi-)irreversible and rate-limiting. Another important assumption in our model is 100% strict exon definition (i.e., both splice sites of an exon are either jointly defined or not), as this considerably reduces the number of spliceosome binding states to 8 (Fig. 3a), as opposed to 64 binding states that would arise if each exon were characterized by two independently defined splice sites46 ('intron definition'). According to the biological literature, the U1 subunit is first recruited to the 5′ splice site and then cooperatively stimulates the subsequent recruitment of U2 to the 3′ splice site43. In the model, we assumed very strong cooperativity to reduce the number of model parameters, given that we recently showed that this assumption allows for a quantitative description of splicing outcomes in a large-scale mutagenesis dataset for the RON minigene46. However, it should be noted that in our model we could reflect a continuum of mechanisms, ranging from pure exon definition (perfect cross-exon cooperativity) to pure intron definition (no cross-exon cooperativity).
Splicing frequently occurs recursively, implying that many introns are removed progressively in multiple reaction steps78,79, and not in a single step as we assumed in our model. Again, the consideration of recursive splicing would result in a substantial increase in model complexity, as each intron removal step would have to be combined with all other possible elongation, commitment and protein binding states in the transcript, possibly exhibiting its own specific splicing parameter. The present model is well suited to describe systems with recursive splicing if the consecutive removal of intron parts is characterized by a single rate-limiting step. If there is no single rate-limiting step, the models of co- vs. post-transcriptional splicing need to be modified to take into account that the kinetics of intron removal do not exhibit simple exponentially distributed waiting times, but rather peaked waiting times that are a hallmark of multistep processes.
Alternative splicing is regulated in various ways including RNA structure, epigenetic regulation, differential expression of RBPs, sequence mutations, cellular ATP content and many others80,81,82,83. In our modeling approach, we did not represent each of these mechanisms, but focused on the fundamental control points of splicing regulation on which these mechanisms converge, i.e., RNA polymerase speed and RBP-mediated regulation. Importantly, our simulations of altered polymerase speed in fact reflect various biological mechanisms including RNA structure, epigenetic regulation and altered ATP content. Likewise, changes in the total RBP concentration in the model may reflect altered RBP expression or reduced RBP binding due to sequence mutations and/or altered structure in the pre-mRNA. Our model is ready to be extended to describe any of the upstream regulatory mechanisms in detail if the required quantitative experimental data becomes available. For instance, in our recent work we showed that it is possible to infer the effect of thousands of point mutations in the RON minigene on exon definition and splicing outcomes using a model similar to the one presented in this work46. Using such mutational data, RNA structure prediction algorithms84 may be applied to establish links between RNA secondary structure and splicing outcomes. In fact, a quantitative model such as the one presented here may help to infer how structural elements in the RNA impact on RBP binding affinity and splice site recognition strength. Epigenetic chromatin marks such as DNA methylation play an important role in splicing regulation and exon recognition, mainly by affecting the RNA pol velocity and thereby transcript elongation85,86. Based on systematic perturbations of an epigenetic modification, e.g., by epigenome editing, accompanied by global splicing analyses (RNAseq), it might be possible to quantitatively model the impact on vpol and splicing outcomes in future studies. Hence, our conceptual model of co-transcriptional splicing regulation serves as a starting point for the detailed analysis of specific subsystems of co-transcriptional splicing regulation, besides providing general insights into the principles of the process.
Finally, we converted our co-transcriptional splicing models into a stochastic formulation to investigate cell-to-cell variability in splicing arising from intrinsic stochastic fluctuations. Surprisingly, our mechanistic splicing model (Fig. 6) shows noise behavior that is fully consistent with a minimal binomial sampling model, even though we considered complex splicing mechanisms including co-transcriptional dynamics, multistep commitment to splicing and RBP-mediated regulation. In fact, some experimentally characterized splicing decisions could be well approximated by the binomial model65,87, whereas others showed higher noise levels and/or were even characterized by a bimodal distribution67,68, in which individual cells show high or low but never intermediate inclusion levels. In Fig. 7, we explore the ability of extended model variants to realize bimodal PSI distributions. Bimodality can be achieved through transcriptional bursting with differing isoform lifetimes, which might occur if one of the isoforms is subjected to nonsense-mediated decay, or exhibits alternative 3ʹ untranslated regions and polyadenylation. Bimodality can also be realized through the implementation of a positive-feedback loop, such as occurs in the SXL gene in D. melanogaster70. Positive-feedback loops behave equivalently to double-negative feedback loops involved in cell fate decisions, such as those observed in the LIN28-let-7 system73, the nSR100-REST system72, and SFSR2-MBD2 system71. In our implementation of feedback, only a single isoform is necessary for bimodality in its absolute expression level, which is in keeping with widespread reporting of coupling between alternative splicing and nonsense-mediated decay as a means of controlling expression levels in auto-regulated splicing events88. Taken together, these experimental and theoretical results suggest that the binomial case is the default splicing outcome, but that specific splice-regulatory mechanisms allow for deviations from it. In the future, it will be interesting to further extend our models to see which additional mechanisms increase stochastic fluctuations in splicing outcomes. For instance, a deviation from binomial behavior may be observed if: (i) reversibility of spliceosome binding to splice sites is considered, or (ii) noise arising from long-term RBP expression fluctuations is taken into account89.
In conclusion, our mechanistic splicing models are valuable toolboxes to test competing hypotheses for alternative splicing regulation at the cell population and single-cell levels. They cover a large number of experimental perturbations including sequence mutations, RBP knockdowns/knockouts, artificial recruitment of RBPs, modulation of splicing by antisense oligonucleotides and alterations of polymerase elongation rates. The mechanistic model described in Fig. 3 comprises four kinetic parameters (k1-k3, kspl) in the absence of RBP-mediated regulation and four additional kinetic parameters in the presence of an RBP (k1_inh-k3_inh, rbp_br). Other free parameters (delays and RBP binding positions) are mainly set by the gene structure, so that it seems feasible to calibrate gene-specific mechanistic models by fitting to genome-wide datasets (RNAseq, SLAMSeq) under multiple perturbation conditions (see also Davis-Turak et al.45). Such global analyses may provide mechanistic insights into the coordinated regulation of multiple splice isoforms and thereby into the general principles of splicing regulation in health and disease.
Splicing commitment model - time delay implementation
In Figs. 1, 2 and 5a-c, we performed simulations of the splicing commitment model using the time delay model, in which we describe the splicing fate of a transcript during its synthesis and consider splicing reactions that eventually occur with delays.
We implemented a system of four ordinary differential equations (ODEs) describing unspliced transcripts (mRNA), spliced transcript with the alternative exon included (Incl) or skipped (Skip) and inhibited mRNA (mRNAinh), in which the unspliced mRNA is bound by an inhibitory RBP and inclusion is no longer possible (Fig. 1b).
$$\begin{array}{ll}\frac{d}{{dt}}mRNA & = - mRNA \cdot \left( {kesc + ki + ks} \right) \\ \frac{d}{{dt}}Incl &= ki \cdot mRNA\\ \frac{d}{{dt}}Skip &= ks \cdot \left( {mRNA + mRNAinh} \right) \\ \frac{d}{{dt}}mRNAinh & = kesc \cdot mRNA - ks \cdot mRNAinh \end{array}$$
All splicing commitment reactions are assumed to be irreversible and to occur with the parameters ki (inclusion), ks (skipping) and kesc ("escape" reaction: inhibitor-mediated commitment to skipping). To implement co-transcriptional splicing, we consider that splicing commitment reactions can occur after different delay times (see below), and chose these delay times based on known molecular mechanisms of splicing. Specifically, we assumed an exon definition mechanism which is known to apply for most splicing events in human cells49,50,90. In exon definition, not only the spliced intron, but also the flanking exons need to be fully recognized by the spliceosome for intron splicing to occur. Hence, during transcript elongation, splicing of the first and second intron is only possible after synthesis of exon 2 and 3 is complete, respectively.
For simplicity, we neglect the initial, splicing-less phase before exon 2 is fully synthesized (State P0 in Fig. 1b), and model only the transcript fate afterwards (State P1 in Fig. 1b). In terms of splicing commitment, exon inclusion is immediately possible after the start of the simulation, whereas skipping can occur only later, once both introns and all exons have been synthesized (States P7-P8 in Fig. 1b). To implement the time shift of skipping relative to inclusion, we did not explicitly model polymerase progression, but considered a time delay τ for the skipping reaction (Fig. 1b, "time delay model"). Specifically, the rate of commitment to skipping (ks) is initially zero and then increases in a step-like manner, whereas the commitment rate to inclusion (ki) is time-invariant.
To numerically implement these time delays, we performed our simulations in several consecutive simulation steps, each of which represents the interval between two time delays. The simulation starts at time t = 0 (completion of exon 2 synthesis, see above) and we set all species to zero, except for the unspliced mRNA, which we assume to be 1. Thus, we assume a synchronized population of mRNA molecules (100% just elongated through exon 2), and will use the following numerical simulation routine to calculate the relative probability to end up in a certain splicing fate.
The ODE system is initially integrated until the first time delay using the ODE solver odeint from the Python package scipy (v. 1.3.1), subpackage integrate91. In Fig. 1c and d, no skipping can occur in this first time phase (ks = 0), and inhibitor-mediated skipping does not occur (kesc = 0). Thus, the ODE system reduces to:
$$\begin{array}{*{20}{c}} {\frac{d}{{dt}}mRNA = - ki \cdot mRNA} \\ {\frac{d}{{dt}}Incl = ki \cdot mRNA} \end{array}$$
The concentrations of the simulated mRNA and Incl species at t = τ represent the likelihood of a transcript to be unspliced or spliced to inclusion until the time delay τ, i.e., until exon 3 is fully synthesized. To determine the final fate of all transcripts, we continue the simulation for t > τ, now considering that skipping is possible.
$$\begin{array}{ll} \frac{d}{{dt}}mRNA &= - mRNA \cdot \left( {ks + ki} \right) \\ \frac{d}{{dt}}Skip &= ks \cdot mRNA \\ \frac{d}{{dt}}Incl &= ki \cdot mRNA \end{array}$$
The initial value vector of species in this second time phase is the final species vector from the first integration step (Eq. 3) at the end time point t = τ. The simulation is performed until the unspliced mRNA species approaches zero, i.e., until all molecules are spliced. At the end time point of the merged simulation \(t \to \infty\) (in practice t = 10000), the value of mRNA is very close to zero, and the values Inclt→∞ and Skipt→∞ represent the probability for the corresponding isoforms to be produced from the precursor.
Alternative splicing is quantified using the PSI metric, which equals the probability of inclusion.
$$PSI = \frac{{Incl_{t \to \infty }}}{{Incl_{t \to \infty } + Skip_{t \to \infty }}} = Incl_{t \to \infty }$$
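A minimal sketch of this two-phase integration scheme is shown below, using scipy's odeint as described above. The rate constants and the delay are illustrative placeholders (the published values are listed in Supplementary Table 6), and the inhibitor-mediated escape reaction of Fig. 2 is omitted, so the sketch corresponds to the two-phase case of Fig. 1c/d.

```python
# Minimal sketch of the two-phase time delay integration described above.
# Rate constants and the delay tau are illustrative placeholders.
import numpy as np
from scipy.integrate import odeint

ki, ks, tau = 0.2, 0.4, 3.0                   # inclusion rate, skipping rate, delay

def phase1(y, t):                              # t < tau: only inclusion possible
    mrna, incl, skip = y
    return [-ki * mrna, ki * mrna, 0.0]

def phase2(y, t):                              # t > tau: inclusion vs. skipping
    mrna, incl, skip = y
    return [-(ki + ks) * mrna, ki * mrna, ks * mrna]

y0 = [1.0, 0.0, 0.0]                           # synchronized unspliced mRNA pool
y_tau = odeint(phase1, y0, [0.0, tau])[-1]     # integrate until the delay
y_end = odeint(phase2, y_tau, [tau, 1e4])[-1]  # continue until all mRNA is spliced

psi = y_end[1] / (y_end[1] + y_end[2])         # Eq. 4
print(f"PSI = {psi:.3f}")
```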
In Fig. 2, the RBP inhibitor prevents inclusion and this is implemented using the "escape" reaction kesc which results in early transcript commitment to skipping. It is assumed that inhibitor binding and early skipping commitment can occur in a restricted time window between the delays τinh,1 and τinh,2. Thus, when considering this inhibitor-mediated skipping, the number of consecutive integration intervals increases to four (Fig. 2b). For each time phase, there are different effective sets of constants and ODEs, as summarized in Supplementary Table 1.
The following explanations are based on Supplementary Figure 1.
The time delays described above represent the time it takes for RNA polymerase to elongate through the gene body until distinct splicing decisions are possible. The delays are therefore effective elongation parameters that depend on the dimensions of introns, exons and the RBP binding motif ("window-of-opportunity") relative to the total length of the transcript as well as the speed of RNA polymerase which may be specific for each gene92. Specifically, each time delay is inversely proportional to the RNA polymerase velocity (vpol), given in nucleotides per second,
$$\begin{array}{ll} \tau _{inh,1} &= \frac{{tr_{len}}}{{l \cdot vpol}} \cdot k, \\ \tau _{inh,2} &= \frac{{tr_{len}}}{{l \cdot vpol}} \cdot \left( {k + e} \right), \\ \tau &= \frac{{tr_{len}}}{{l \cdot vpol}} \cdot \left( {k + e + m} \right) \end{array}$$
and is additionally determined by the relative length of the sequence that needs to be transcribed until RBP binding starts (τinh,1) or ends (τinh,2), or until inhibitor-independent commitment to skipping is possible (τ). Therefore, each delay increases with increasing total transcript length (trlen), given as the total number of nucleotides. Additionally, there are terms describing the proportion of the delay within the elongating transcript. For instance, τinh,1 contains the term k/l, which equals the fraction of the sequence stretch before RBP inhibitor binding is possible (k) divided by the sum of all sequence stretches l = k + e + m + n. Likewise, the delays τinh,2 and τ are scaled by (k + e)/l and (k + e + m)/l, respectively, where e is the sequence length of the RBP inhibitor binding window-of-opportunity and m is the length of the sequence stretch from the end of this window to the end of exon 3.
Our modeling study was motivated by our previously published experimental and theoretical analysis of a three-exon minigene that comprises exons 10-12 of the RON receptor tyrosine kinase gene43. Therefore, for all time delay simulations in Figs. 1 and 2, the modeled total transcript length was assumed to be trlen = 300 nucleotides, which falls into the range of the length of the RON pre-mRNA segment between the end of exon 2 and the end of the transcript, i.e., the end of the third exon + ≈50 nts. The parameters k, e and l were chosen to mimic different RBP inhibitor binding positions within the transcript, and are given in Supplementary Table 1, with rate parameters provided in Supplementary Table 6.
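For concreteness, the delay calculation above can be spelled out as follows; the topology values used here are the few-steps parameters quoted later in the Methods (k = 2, e = 3, m = 1, n = 2) together with trlen = 300 nt, so the resulting delays are illustrative rather than the values of Supplementary Table 1.

```python
# Sketch of the delay calculation above; topology values are illustrative.
trlen, vpol = 300, 50               # transcript length (nt), polymerase speed (nt/s)
k, e, m, n = 2, 3, 1, 2             # sequence stretches (Supplementary Figure 1)
l = k + e + m + n

tau_inh1 = trlen / (l * vpol) * k              # RBP binding window opens
tau_inh2 = trlen / (l * vpol) * (k + e)        # RBP binding window closes
tau      = trlen / (l * vpol) * (k + e + m)    # inhibitor-independent skipping possible
print(tau_inh1, tau_inh2, tau)                 # 1.5, 3.75 and 4.5 seconds
```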
In SBML format the time delays are realized as time triggered Events.
To validate our numerical simulations of the time delay model, we derived an analytical solution (Supplementary Table 1). As expected, we found an excellent agreement of numerical and analytical results in Figs. 1c and 2d.
The approach for calculating the analytical solution of the time delay model is based on the probability of the commitment reaction towards inclusion. Under the assumption that the simulation starts with a value of one for the mRNA species (all others zero), one can calculate the probability of the inclusion reaction for each of the four time phases dt1, dt2, dt3, dt4 (Supplementary Figure 2), corresponding to the numerical integration intervals in Supplementary Table 1.
The first step is the calculation of p1, representing the probability for inclusion within the first phase dt1, in which commitment to skipping is not possible (t < τinh,1). Assuming a Poisson process, we can calculate the expected value \(E_1 = A1 = ki \ast \tau _{inh,1}\), giving the expected number of inclusion reactions within phase 1 (dt1 = τinh,1). Assuming an exponential distribution, we get \(p1 \in [0,1]\) as the value of the cumulative distribution function
$$p_1 = 1 - e^{-E_1}$$
The probability that the inclusion reaction will not take place in the first phase is \(p_{rest1} = 1 - p_1\). The general formula for the remaining probability after phase i is
$$p_{rest_i} = p_{rest_{i - 1}} - p_{react_i}$$
For i = 1, the value of \(p_{rest_0}\) is 1, and \(p_{react_1} = p_1\).
In the second phase (τinh,1 < t < τinh,2), there are two competing reactions, inclusion and RBP inhibitor-mediated commitment to skipping. The expected value for both reactions is \(E_2 = A2 + A3 = \left( {k_{esc} + ki} \right) \ast (\tau _{inh,2} - \tau _{inh,1})\). The probability that one of the reactions occurs is \(p_{react_2} = p_{rest_1} \ast (1 - e^{ - E_2})\). Or, more generally:
$$p_{react_i} = p_{rest_{i - 1}} \ast (1 - e^{ - E_i})$$
The probability for the inclusion reaction results from:
$$p_2 = p_{react_2} \ast \frac{{A2}}{{A3 + A2}} = p_{react_2} \ast \frac{{ki}}{{ki + kesc}}$$
Consequently, the remaining probability after phase 2 (i.e., entering phase 3) is \(p_{rest_2} = p_{rest_1} - p_{react_2}\).
For the two remaining phases (τinh,2 < t < τ and t > τ), we can proceed in the same manner. It is important to note that \(dt4 = \infty\), and that in the fourth phase there are two competing reactions, commitment to inclusion and skipping, which are handled similarly to the second phase.
Eventually, we obtain the PSI value as the sum of all absolute inclusion probabilities p1-p4:
$${{{\mathrm{PSI}}}} = p_{incl} = \mathop {\sum}\nolimits_{i = 1}^4 {p_i}$$
The analytical solution is a complex sum of exponential functions. For simplicity, this solution is not shown here, but it was used to generate the plots of the analytical solutions in Figs. 1c and 2d.
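For illustration, the phase-wise recursion described above can nevertheless be evaluated numerically in a few lines. The sketch below (our illustration, with placeholder rate constants and the illustrative delays computed earlier) follows exactly this scheme.

```python
# Sketch of the phase-wise analytical PSI calculation described above, with
# illustrative rate constants and delays (not the published parameter values).
import numpy as np

ki, ks, kesc = 0.2, 0.4, 0.5
tau1, tau2, tau = 1.5, 3.75, 4.5      # tau_inh,1, tau_inh,2, tau

# each phase: (total propensity, inclusion share of that propensity, duration)
phases = [
    (ki,        1.0,              tau1),          # phase 1: inclusion only
    (ki + kesc, ki / (ki + kesc), tau2 - tau1),   # phase 2: inclusion vs. escape
    (ki,        1.0,              tau - tau2),    # phase 3: inclusion only
    (ki + ks,   ki / (ki + ks),   np.inf),        # phase 4: inclusion vs. skipping
]

p_rest, psi = 1.0, 0.0
for rate, incl_share, dt in phases:
    p_react = p_rest * (1.0 - np.exp(-rate * dt))  # probability of any reaction
    psi += p_react * incl_share                    # fraction thereof giving inclusion
    p_rest -= p_react                              # probability of remaining unspliced
print(f"analytical PSI = {psi:.3f}")
```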
Splicing commitment model – multistep implementation
To verify the time delay implementation, we also performed more conventional ODE simulations using an alternative model (Fig. 2b, top), in which pre-mRNA elongation is not simulated by a time delay, but by assuming a chain of consecutive first-order reactions (Fig. 2b), with parameters given in Supplementary Table 2. Specifically, we consider the transition between the transcript elongation states Pi → Pi+1 and their RBP-inhibited counterparts Ei → Ei+1 as reactions of first order with the reaction rate constant kelong. The ODE system describing the network topology in Fig. 2b (top) is given by
$$\begin{array}{ll} \frac{d}{{dt}}P_1 & = - P_1 \cdot \left( {k_{elong} + ki} \right)\hfill \\ \frac{d}{{dt}}Incl & = ki \cdot \left( {P_1 + P_2 + P_3 + P_4 + P_5 + P_6 + P_7 + P_8} \right) \hfill \\ \frac{d}{{dt}}Skip & = ks \cdot \left( {E_7 + E_8 + P_7 + P_8} \right) \hfill \\ \frac{d}{{dt}}P_2 & = P_1 \cdot k_{elong} - P_2 \cdot k_{elong} - P_2 \cdot ki \hfill \\ \frac{d}{{dt}}P_3 & = P_2 \cdot k_{elong} - P_3 \cdot k_{elong} - P_3 \cdot kesc - P_3 \cdot ki \hfill \\ \frac{d}{{dt}}P_4 & = P_3 \cdot k_{elong} - P_4 \cdot k_{elong} - P_4 \cdot kesc - P_4 \cdot ki \hfill \\ \frac{d}{{dt}}P_5 &= P_4 \cdot k_{elong} - P_5 \cdot k_{elong} - P_5 \cdot kesc - P_5 \cdot ki \hfill \\ \frac{d}{{dt}}P_6 &= P_5 \cdot k_{elong} - P_6 \cdot k_{elong} - P_6 \cdot ki \hfill \\ \frac{d}{{dt}}P_7 &= P_6 \cdot k_{elong} - P_7 \cdot k_{elong} - P_7 \cdot ki - P_7 \cdot ks \hfill \\ \frac{d}{{dt}}P_8 &= P_7 \cdot k_{elong} - P_8 \cdot ki - P_8 \cdot ks \hfill \\ \frac{d}{{dt}}E_3 &= - E_3 \cdot k_{elong} + P_3 \cdot kesc \hfill \\ \frac{d}{{dt}}E_4 &= E_3 \cdot k_{elong} - E_4 \cdot k_{elong} + P_4 \cdot kesc \hfill \\ \frac{d}{{dt}}E_5 &= E_4 \cdot k_{elong} - E_5 \cdot k_{elong} + P_5 \cdot kesc\hfill \\ \frac{d}{{dt}}E_6 &= k_{elong} \cdot \left( {E_5 - E_6} \right) \hfill \\ \frac{d}{{dt}}E_7 &= E_6 \cdot k_{elong} - E_7 \cdot k_{elong} - E_7 \cdot ks\hfill \\ \frac{d}{{dt}}E_8 &= E_7 \cdot k_{elong} - E_8 \cdot ks\hfill\end{array}$$
As for the time delay model, the initial state of all species is set to 0, with the exception of the initial unspliced pre-mRNA precursor P1, which is assumed to be 1. By integrating the ODE system using the function odeint from the Python package scipy (v. 1.3.1, subpackage integrate), we again calculate the probability for a pre-mRNA to result in skipping or inclusion isoforms.
Specifically, we perform time course simulations until t = ∞ (in practice t = 10000), check whether the values of Pi and Ei are close to zero (all pre-mRNA spliced) and use the skipping and inclusion values to calculate a PSI value (Eq. 4).
Notably, mRNAs are subject to constant synthesis and turnover in living cells, i.e., there is a permanent flux through the system. Importantly, the splicing outcomes (PSI values) we obtained using the numerical simulation procedure described above directly correspond to those of an extended system, in which the pre-mRNA is synthesized with a constant rate and the inclusion and skipping isoforms are subject to first-order degradation (not shown). This is due to the fact that all transcript elongation and splicing commitment reactions are irreversible in nature, i.e., the system in Eq. 11 functions as an irreversible decision module that has the same relative splicing outcome (PSI), irrespective of whether there is a permanent steady state flux or just a step-like pulse of mRNA synthesis, as we assumed here.
In Eq. 11, we assumed a total number of eight elongation steps (P1-P8). In the simulations in Fig. 1d, we varied the total number of elongation steps (l) and also considered a scenario with l = 80 ("many steps"), in addition to l = 8 ("few steps"). In this many steps model topology consisting of 80 ODEs, we proportionally increased the number of steps in each of the four commitment regimes in Supplementary Figure 1 (topology parameters k, e, m, n). Specifically, we increased the number of steps in the initial inclusion-only regime from k = 2 in the few steps scenario to k = 20 with many steps. Similarly, the topology parameters e = 3, m = 1, n = 2 were increased to e = 30, m = 10, n = 20, respectively. Thus, the total number of steps is given by the sum of steps in the four regimes (l = k + e + m + n). Notably, the many steps simulation yielded qualitatively distinct results from the few steps scenario, and the time delay simulation agrees with the multistep model result for l = 80, whereas it differs from l = 8 (Fig. 1d). This is due to the fact that the multistep model with few steps gives rise to gradual transitions between the commitment regimes k, e, m, n in time, while for many steps these transitions better resemble a delay, and thus better reflect our biological assumption that inhibitor-mediated or inhibitor-independent skipping is possible with high efficiency (i.e., in a step-like manner) as soon as the corresponding pre-mRNA sequences have been transcribed. In this sense, the multistep model with few steps is inaccurate, whereas the many steps simulation is a much better approximation of the time delay model. Both model topologies are provided as online SBML files.
The effective transcript elongation parameter (here kelong) in the model is not only determined by the RNA polymerase elongation speed (vpol), but also by the length of the gene in nucleotides (trlen) and - for the multistep model - by the number of elongation steps (l) that are considered in the model (see previous paragraph).
For all multi-step model simulations in Fig. 1d and those confirming the results in Fig. 2c (not shown), a fixed total transcript length of trlen = 300 nucleotides was assumed. This is the length of the pre-mRNA segment between the end of exon 2 (where the simulated transcript starts) and the end of the transcript (i.e., the third exon + ≈50 nts). The elongation rate constant kelong in the model is proportional to the RNA polymerase elongation speed (vpol) and the total number of steps (l) and inversely proportional to the transcript length (trlen):
$$k_{elong} = \frac{{vpol \cdot l}}{{tr_{len}}}$$
In SBML format, this calculation is defined as InitialAssignments.
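Rather than writing out the 16 equations of Eq. 11 by hand, the multistep topology can also be generated programmatically for an arbitrary number of steps. The sketch below (our illustration, not the published implementation) builds a chain of l elongation states with the four commitment regimes described above and integrates it with odeint; parameter values are illustrative placeholders, and states upstream of the escape window simply remain unpopulated on the inhibited branch.

```python
# Sketch of a generic multistep implementation (Eq. 11 generalized to l steps).
import numpy as np
from scipy.integrate import odeint

k, e, m, n = 2, 3, 1, 2                   # steps per commitment regime
l = k + e + m + n                         # total elongation steps (here 8)
ki, ks, kesc = 0.2, 0.4, 0.5              # illustrative rate constants
vpol, trlen = 50, 300
kelong = vpol * l / trlen                 # elongation rate constant as defined above

def rhs(y, t):
    P, E = y[:l], y[l:2 * l]
    dP, dE = np.zeros(l), np.zeros(l)
    d_incl = ki * P.sum()                                  # inclusion from any P state
    d_skip = ks * (P[l - n:].sum() + E[l - n:].sum())      # skipping from last n states
    for i in range(l):
        esc = kesc if k <= i < k + e else 0.0              # escape window
        elong = kelong if i < l - 1 else 0.0               # last state does not elongate
        inflow_P = kelong * P[i - 1] if i > 0 else 0.0
        inflow_E = kelong * E[i - 1] if i > 0 else 0.0
        skip_i = ks if i >= l - n else 0.0
        dP[i] = inflow_P - (elong + ki + esc + skip_i) * P[i]
        dE[i] = inflow_E + esc * P[i] - (elong + skip_i) * E[i]
    return np.concatenate([dP, dE, [d_incl, d_skip]])

y0 = np.zeros(2 * l + 2)
y0[0] = 1.0                               # all probability mass starts in P1
y_end = odeint(rhs, y0, [0.0, 1e4])[-1]
psi = y_end[2 * l] / (y_end[2 * l] + y_end[2 * l + 1])   # Eq. 4
print(f"multistep PSI = {psi:.3f}")
```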
Mechanistic exon definition model
To explicitly model binding of the inhibitory RBP to the pre-mRNA, we turned to mechanistic modeling (Figs. 3 and 4). Specifically, we considered RBP binding to the pre-mRNA, assumed that the RBP inhibits nearby splice sites and considered that introns may be retained if splicing becomes inefficient.
The mechanistic splicing model is schematically shown in Fig. 3a. It consists of two reaction sub-networks, one where the inhibitory RBP is not (yet) bound to the pre-mRNA (left), and one where the RBP inhibitor is bound (right). In the following, we will initially describe the reactions in the absence of the RBP and will then discuss the implementation of RBP binding.
Splicing is catalyzed by the so-called spliceosome6. In the catalytic splicing cycle, pioneering spliceosomal subunits U1 and U2 recognize splice sites. Subsequently, further spliceosome subunits (U4-U6) are recruited, which leads to the assembly of mature spliceosome complexes across introns and finally to the excision of introns. In our model, we focus on the key steps of the spliceosome cycle and describe only the initial binding of U1 and U2, followed by catalysis of the splicing reaction.
The description of initial splice site recognition is based on our previous work on exon definition (Fig. 3a and Enculescu et al.46). Specifically, we considered that all three exons may be "defined" by cooperative U1 and U2 binding across exons, as suggested by the literature on mammalian splicing49,50,92. In the model, the pre-mRNA is either unbound (P000, P000_inh) or one/multiple exons are recognized (all other states). For instance, both outer exons are defined in the states P101 and P111, whereas the middle exon is either undefined (P101) or defined (P111). Exon definition occurs in a combinatorial fashion and the binding reactions proceed irreversibly with the rate constants k1-k3 (Fig. 3a, left).
The spliceosome binding patterns impact on splicing outcomes. Specifically, splicing events can only happen if both flanking exons are defined. Therefore, we assume first-order splicing reactions (with the rate constant kspl) to inclusion (from state P111) if all exons are defined, or to skipping (from state P101) if all exons except the middle one are defined. Splicing can also be unproductive if the exons are not properly defined for inclusion or skipping (all other states), or if the splicing reactions do not occur in time in the states P101 and P111. Therefore, a first-order retention reaction is also considered in the model (rate constant kret), but this only occurs post-transcriptionally, i.e., after transcription has been terminated.
As depicted in Fig. 3a (right), the inhibitory RBP modulates splicing outcomes by binding to all unspliced pre-mRNA states (e.g., P000 → P000_inh). For simplicity, we modeled all RBP binding steps as irreversible first-order reactions with the reaction rate constant rbpbr. Subsequently, spliceosome binding occurs at reduced rates (k1,inh-k3,inh), whereas the splicing (kspl) and retention (kret) rates remain unaffected by RBP binding. Thus, the inhibitory RBP changes splicing outcomes by blocking initial spliceosome recruitment.
The ODE system describing the mechanistic model is given by
$$\begin{array}{ll} \frac{d}{{dt}}P_{000} &= - P_{000} \cdot \left( {k_1 + k_2 + k_3 + k_{ret} + rbp_{br}} \right)\\ \frac{d}{{dt}}P_{100} &= P_{000} \cdot k_1 - P_{100} \cdot k_2 - P_{100} \cdot k_3 - P_{100} \cdot k_{ret} - P_{100} \cdot rbp_{br}\\ \frac{d}{{dt}}P_{010} &= P_{000} \cdot k_2 - P_{010} \cdot k_1 - P_{010} \cdot k_3 - P_{010} \cdot k_{ret} - P_{010} \cdot rbp_{br}\\ \frac{d}{{dt}}P_{001} &= P_{000} \cdot k_3 - P_{001} \cdot k_1 - P_{001} \cdot k_2 - P_{001} \cdot k_{ret} - P_{001} \cdot rbp_{br}\\ \frac{d}{{dt}}P_{110} &= P_{010} \cdot k_1 + P_{100} \cdot k_2 - P_{110} \cdot k_3 - P_{110} \cdot k_{ret} - P_{110} \cdot rbp_{br}\\ \frac{d}{{dt}}P_{101} &= P_{001} \cdot k_1 + P_{100} \cdot k_3 - P_{101} \cdot k_2 - P_{101} \cdot k_{ret} - P_{101} \cdot k_{spls} - P_{101} \cdot rbp_{br}\\ \frac{d}{{dt}}P_{011} &= P_{001} \cdot k_2 + P_{010} \cdot k_3 - P_{011} \cdot k_1 - P_{011} \cdot k_{ret} - P_{011} \cdot rbp_{br} \\ \frac{d}{{dt}}P_{111} &= P_{011} \cdot k_1 + P_{101} \cdot k_2 + P_{110} \cdot k_3 - P_{111} \cdot k_{ret} - P_{111} \cdot k_{spli} - P_{111} \cdot rbp_{br} \\ \frac{d}{{dt}}P_{100inh} &= P_{000inh} \cdot k_{1inh} + P_{100} \cdot rbp_{br} - P_{100inh} \cdot k_{2inh} - P_{100inh} \cdot k_{3inh} - P_{100inh} \cdot k_{ret} \\ \frac{d}{{dt}}P_{000inh} &= P_{000} \cdot rbp_{br} - P_{000inh} \cdot k_{1inh} - P_{000inh} \cdot k_{2inh} - P_{000inh} \cdot k_{3inh} - P_{000inh} \cdot k_{ret}\\ \frac{d}{{dt}}P_{010inh} &= P_{000inh} \cdot k_{2inh} + P_{010} \cdot rbp_{br} - P_{010inh} \cdot k_{1inh} - P_{010inh} \cdot k_{3inh} - P_{010inh} \cdot k_{ret}\\ \frac{d}{{dt}}P_{001inh} &= P_{000inh} \cdot k_{3inh} + P_{001} \cdot rbp_{br} - P_{001inh} \cdot k_{1inh} - P_{001inh} \cdot k_{2inh} - P_{001inh} \cdot k_{ret} \\ \frac{d}{{dt}}P_{110inh} &= P_{010inh} \cdot k_{1inh} + P_{100inh} \cdot k_{2inh} + P_{110} \cdot rbp_{br} - P_{110inh} \cdot k_{3inh} - P_{110inh} \cdot k_{ret}\\ \frac{d}{{dt}}P_{101inh} &= P_{001inh} \cdot k_{1inh} + P_{100inh} \cdot k_{3inh} + P_{101} \cdot rbp_{br} - P_{101inh} \cdot k_{2inh} - P_{101inh} \cdot k_{ret} - P_{101inh}k_{spls}\\ \frac{d}{{dt}}P_{011inh} &= P_{001inh} \cdot k_{2inh} + P_{010inh} \cdot k_{3inh} + P_{011} \cdot rbp_{br} - P_{011inh} \cdot k_{1inh} - P_{011inh} \cdot k_{ret}\\ \frac{d}{{dt}}P_{111inh} &= P_{011inh} \cdot k_{1inh} + P_{101inh} \cdot k_{2inh} + P_{110inh} \cdot k_{3inh} + P_{111} \cdot rbp_{br} - P_{111inh} \cdot k_{ret} - P_{111inh}k_{spli}\\\frac{d}{{dt}}ret &= k_{ret}\left( P_{000} + P_{000inh} + P_{001} + P_{001inh} + P_{010} + P_{010inh}+P_{011} + P_{011inh} + P_{100}\right.\\ &\left.\qquad +\, P_{100inh} + \,P_{101} + P_{101inh} + P_{110} + P_{110inh} + P_{111} + P_{111inh}\right)\\ \frac{d}{{dt}}Incl &= k_{spli} \cdot \left( {P_{111} + P_{111inh}} \right) \\ \frac{d}{{dt}}Skip &= k_{spls} \cdot \left(P_{101} + P_{101inh}\right) \end{array}$$
Numerical integration of the ODE system in Eq. 13 with time-invariant rate constants yields simulations of post-transcriptional splicing in the presence of RBP binding. Co-transcriptional splicing can be simulated by implementing time-dependent changes in the reaction rate constants, thereby mimicking changes in binding site availability during transcript elongation. As for the splicing commitment model, we implemented this behavior of co-transcriptional splicing using time delays, but this time started our simulations (t = 0) at the time of transcript initiation (not when exon 2 transcription has been finished).
Co-transcriptional spliceosome binding was considered in the model by assuming that an exon can be defined only after its synthesis is complete. Hence, the rate constants k1-k3 and k1,inh-k3,inh change over time and increase in a step-like manner with delays (τ1-τ3 in Supplementary Table 2) that reflect the relative positions of exons within the pre-mRNA (Fig. 3c). In this work, we used the structure of the RON mini-gene43, and hence assumed that exon 1, 2 and 3 end (i.e., k1-k3 increase) at 210, 443 and 690 nucleotides after transcript initiation. Using these sequence positions (u1ex1-u1ex3 in Supplementary Table 1) and the RNA polymerase speed (vpol), given in nucleotide per second, we calculate the delays τ1-τ3 (see Supplementary Table 3).
In the model, we did not assume a time-dependence of splicing reaction rate constant (kspl), but assumed that intron retention can only take place after transcription is terminated (i.e., 700 nucleotides after transcript initiation). The transcription termination position genelen = 700 together with the RNA polymerase speed is used to calculate the corresponding delay τ6 (see Supplementary Table 3). By this time dependence of intron retention, we reflect in the model that transcripts released from RNA polymerase may eventually exit the nucleus, where they can no longer be spliced.
For the inhibitory RBP, we implemented the binding reaction (rbpbr) such that it can only occur once the RBP motif has been transcribed, and we additionally assumed that RNA polymerase deposits the RBP on the pre-mRNA co-transcriptionally. Hence, RBP binding is modeled as a short window-of-opportunity, described by two delays τ4 and τ5 (see Supplementary Table 3). τ4 reflects the sequence position of the RBP binding motif, normalized by the polymerase speed (Supplementary Table 3), whereas τ5 marks the sequence position beyond which the elongating polymerase is no longer able to deposit the bound RBP on its sequence motif. Hence, a deposition range (polrange) is assumed in the model (Supplementary Table 1) which represents the molecular flexibility of the elongating enzyme and the size of the RBP that needs to be deposited.
Taken together, the model contains six time delays, whose values used in each figure of the paper are summarized in Supplementary Table 1, alongside the other parameters of the model specified in Supplementary Table 6. Depending on the position of the RBP binding motif, the temporal order of the delays τ1-τ6 may change. To ensure the correct order, the calculated delays are sorted before the integration of the ODE system, and the integration is then done in seven time intervals, analogous to the integration of the ODE system in the time delay commitment models. The simulation starts at time t = 0 (transcription initiation), and we set all species to zero, except for P000, which is set to 1. The first phase is then integrated until the first τ. Each subsequent phase uses the end species vector of the previous phase as its initial state. Finally, alternative splicing is quantified using the PSI metric (Eq. 4).
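The phase-wise integration scheme can be sketched in a few lines of Python. This is a generic sketch, not the authors' code: the right-hand-side function rhs(t, y, params) and the per-phase parameter sets (which encode the step-like changes in the rate constants at the delay times) are assumed to be supplied by the caller.

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_with_delays(rhs, y0, delays, params_per_phase, t_end):
    """Integrate an ODE system whose rate constants change step-like at the
    sorted delay times, as described above for the exon definition model.

    rhs              : function rhs(t, y, params) returning dy/dt
    y0               : state at t = 0 (all species zero except P000 = 1)
    delays           : the six delays tau_1..tau_6 (seconds), in any order
    params_per_phase : list of seven parameter sets, one per time interval
    """
    boundaries = np.concatenate(([0.0], np.sort(delays), [t_end]))
    y = np.asarray(y0, dtype=float)
    ts, ys = [], []
    for t0, t1, params in zip(boundaries[:-1], boundaries[1:], params_per_phase):
        sol = solve_ivp(rhs, (t0, t1), y, args=(params,))
        ts.append(sol.t)
        ys.append(sol.y)
        y = sol.y[:, -1]  # end state of this phase seeds the next phase
    return np.concatenate(ts), np.concatenate(ys, axis=1)
```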
In our model, the RBP bound to its sequence motif inhibits the recognition of nearby splice sites. Since we assume that both splice sites of an exon are bound cooperatively, the RBP reduces the exon definition reactions in our model. Thus, the parameters k1,inh-k3,inh may be smaller than their counterparts in the absence of RBP binding (k1-k3). Importantly, this inhibition effect occurs only locally around the site of RBP binding (Fig. 3b). For simplicity, we initially assumed a bell-shaped inhibition profile, in which the reduction of k1,inh-k3,inh relative to k1-k3 occurs only within ~100 bp around the RBP binding site.
The values of k1,inh-k3,inh (kx,inh) are described by the following inhibition function (that is also depicted in Fig. 3b).
$$k_{x,inh} = k_x \ast \left( {1 - inhFunc\left( {5^\prime SS} \right)} \right) \ast (1 - inhFunc\left( {3^\prime SS} \right))$$
Here, 5´SS and 3´SS reflect the relative distance of the upstream and downstream splice sites of an exon to the RBP binding site in nucleotides. Due to the restricted spatial range of RBP-mediated inhibition in our simulations, we neglected long-range RBP interactions with more distal splice sites. Before each simulation, the values k1,inh-k3,inh are calculated from the model parameters; in the SBML files, this is done via InitialAssignments.
The inhibition function (inhFunc in Eq. 14) is a parameterized piecewise-defined function with Hill-type terms
$$InhFunc_{l,r,p}\left( x \right) = \begin{cases} \dfrac{1}{\left( - \dfrac{x}{l} \right)^{p} + 1} & \text{for } x < 0 \\ \dfrac{1}{\left( \dfrac{x}{r} \right)^{p} + 1} & \text{otherwise} \end{cases}$$
Here, the parameters l, r, and p determine the range and the shape of the inhibition function
l – upstream range in nucleotides
r – downstream range in nucleotides
p – hill-coefficient like parameter determining the shape /steepness of the function
x – distance from RBP binding site in nucleotides
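A direct transcription of the piecewise inhibition function and of Eq. 14 into Python might look as follows; the function and argument names are ours, and the splice-site distances are the signed distances defined above.

```python
def inh_func(x, l, r, p):
    """Bell-shaped, Hill-type inhibition profile (the piecewise function above).

    x : distance from the RBP binding site in nucleotides (negative = upstream)
    l : upstream range, r : downstream range, p : steepness parameter
    """
    if x < 0:
        return 1.0 / ((-x / l) ** p + 1.0)
    return 1.0 / ((x / r) ** p + 1.0)

def k_x_inh(k_x, dist_5ss, dist_3ss, l, r, p):
    """Inhibited exon definition rate (Eq. 14), given the uninhibited rate k_x
    and the distances of the exon's 5' and 3' splice sites to the RBP site."""
    return k_x * (1.0 - inh_func(dist_5ss, l, r, p)) \
               * (1.0 - inh_func(dist_3ss, l, r, p))
```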
The values chosen for the simulations in Figs. 3 and 4 are summarized in Supplementary Table 5.
Stochastic simulations
To quantify uncertainties in splicing outcomes, we performed stochastic simulations using the co-transcriptional splicing models described in the previous sections (Splicing commitment model - time delay implementation and Mechanistic exon definition model). The simulation results were generated using the Gillespie algorithm63. Since all reaction steps in our splicing models are of first order, the kinetic parameter values in the ODE models can directly be used as reaction probabilities in the Gillespie simulations.
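Because all reactions are first order, a minimal Gillespie implementation is short. The sketch below is generic and not the authors' code; it assumes a stoichiometry matrix S and a user-supplied propensity function, and it ignores the additional bookkeeping needed for time-dependent (delayed) propensities, where the waiting time would have to be capped at the next delay boundary.

```python
import numpy as np

def gillespie(S, propensities, x0, t_end, rng=None):
    """Minimal Gillespie SSA for a network of first-order reactions.

    S            : (n_species, n_reactions) stoichiometry matrix
    propensities : function propensities(t, x) -> numpy array of reaction
                   propensities (rate constant times reactant copy number)
    x0           : initial molecule counts
    """
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(t, x)
        a0 = a.sum()
        if a0 <= 0:                       # no reaction can fire any more
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)  # index of the reaction that fires
        x += S[:, j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```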
Splicing commitment models
The simulations in Fig. 6a and c were performed using the time delay model from Fig. 2c (bottom), with the reaction probability values summarized in Supplementary Table 6. For Fig. 6b we used all 3 models from Fig. 3c plus the time delay model from Fig. 1d. During the Gillespie simulations, the reaction probabilities were assumed to increase in a step-like manner with time delays corresponding to those described in Section Splicing commitment model - time delay implementation. At the initial time point, all species except for the mRNA were set to zero. In Fig. 6a and c, the initial state of the mRNA species was set to 100 molecules, whereas it was varied between 10 and 1000 molecules in Fig. 6b (see legend). Figure 6b contains simulations with various PSI outcomes. As in Figs. 1 and 2, we generated these PSI values by varying the RNA polymerase elongation speed vpol between 1 and 1000 nucleotides per second; this was done for each of the four models.
In Fig. 6a and c, we show time courses for 100 realizations, whereas Fig. 6b contains final splicing outcomes (at t = 1000) for 5000 realizations. The PSI metric was calculated for each individual realization (cell), and the PSI mean and standard deviation were calculated based on the PSI distributions across 5000 cells.
For comparison of stochastic splicing outcomes with a binomial model (thin lines in Fig. 5b), we sampled binomial distributions using the stats.binom.std function from the Python package scipy (v. 1.3.1). Here, we varied the number and probability of sampled events to mimic varying molecule counts and varying mean(PSI) values, respectively. The standard deviation of the obtained binomial distribution was plotted as the std(PSI) (thin lines in Fig. 5b).
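The binomial reference curve can be reproduced as follows. The molecule count and the grid of mean(PSI) values below are placeholders for illustration; since PSI is the fraction of inclusion events among n transcripts, the standard deviation of the counts returned by scipy is divided by n.

```python
import numpy as np
from scipy import stats

n_transcripts = 100                      # illustrative molecule count
mean_psi = np.linspace(0.01, 0.99, 99)   # grid of mean(PSI) values

# std of the inclusion count is sqrt(n*p*(1-p)); dividing by n gives std(PSI)
std_psi_binomial = stats.binom.std(n_transcripts, mean_psi) / n_transcripts
```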
The Gillespie simulations of the mechanistic model (Fig. 6d-f) were performed using the reaction probabilities summarized in Supplementary Table 5 and by setting all initial molecule counts to zero except for the precursor P000. In Fig. 6d and f, the initial state of P000 was set to 100, whereas it was varied between 10 and 40 molecules in Fig. 6e. In Fig. 6e, the final PSI value of the time courses was recorded at t = 1000, and variations in PSI were introduced by changing the RBP binding site position between 220 and 280 nts downstream of the transcription start site. The stochastic simulation results therefore directly correspond to the thick blue simulation outcome of the deterministic model in Fig. 4b (top panel) in the range of 220-280 nucleotides.
In Fig. 6d and f, we show time courses for 100 realizations, whereas Fig. 6e contains final splicing outcomes (at t = 1000) for 5000 realizations. The PSI metric was calculated for each individual realization (cell), and the PSI mean and standard deviation were calculated based on the PSI distributions across 5000 cells.
In the mechanistic model, the comparison of the stochastic simulations (Fig. 6e) could not be done based on the noise-mean relationship as for the splicing commitment model (Fig. 6b). The reason is that the total molecule count in the mechanistic model (Fig. 6e) does not directly correspond to the initial amount of the P000 species, because intron retention occurs as a third (noisy) splicing outcome, in addition to skipping and inclusion. Therefore, for each realization of the mechanistic model, mean and standard deviation of PSI are calculated for different absolute counts of the relevant molecules (sum of skipping and inclusion). Consequently, noise-mean-relationships at defined absolute molecule counts as in Fig. 6b cannot be obtained for the mechanistic exon definition model. Thus, in Fig. 5e we calculated the correlation of the binomial and stochastic noise (std(PSI)) by assuming the same mean(PSI) and absolute molecule count (sum of skipping and inclusion in the mechanistic case) for both models. Hence, if fluctuations in the total amount of skipping and inclusion are corrected for, the mechanistic exon definition model perfectly corresponds to the binomial case.
Bimodality
To establish bimodality in alternative splicing outcomes, we implemented an extended version of the stochastic splicing commitment model (subsection i, Fig. 5a-c). The extended model contains positive feedback regulation and stochastic promoter switching between transcriptionally active (Promon) and inactive (Promoff) states as additional mechanisms of regulation (Fig. 7a, b, e, f).
Positive feedback is implemented by assuming that the protein product of the skipping isoform serves as an RBP that binds to its own pre-mRNA precursor and enhances the production of the skipping isoform. Commitment to a basal level of the skipping isoform occurs at the same rate as commitment to inclusion, but only once the transcript is nearly fully synthesized; this much later opportunity to commit to skipping minimizes the amount of skipping isoform generated in the absence of feedback amplification. We neglected the molecular details of RBP protein biosynthesis and binding to pre-mRNA, and implemented positive feedback (+ve in Fig. 7a) using a Hill-type equation that adds an additional pathway to skipping commitment, with a propensity that is 0 in the absence of skipping isoform, but otherwise specified as:
$$FeedbackPropensity = \frac{{Fb_S}}{{1 + \left( {\frac{K}{{Skip}}} \right)^N}}$$
The parameters Fbs, K, and N determine the magnitude, threshold, and sensitivity of the feedback respectively.
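In a Gillespie simulation, this feedback term simply contributes one additional propensity to the skipping-commitment reaction; a minimal sketch (with our naming) is:

```python
def feedback_propensity(skip_count, fb_s, K, N):
    """Hill-type positive-feedback propensity for skipping commitment.

    Returns 0 when no skipping isoform is present and saturates at fb_s
    when the skipping isoform count greatly exceeds the threshold K.
    """
    if skip_count <= 0:
        return 0.0
    return fb_s / (1.0 + (K / skip_count) ** N)
```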
As an additional mechanism in the extended model, we considered stochastic promoter switching between transcriptionally active and inactive states (Promon and Promoff in Fig. 7a), e.g., due to formation and dissociation of transcription factor complexes. This model extension, known as the random telegraph model93,94, establishes transcriptional bursts in mRNA expression time courses. During a burst, a high amount of transcripts is generated, whereas no transcription occurs between bursts. This, combined with different transcript lifetimes of inclusion and skipping mRNAs, can give rise to bimodal behavior in PSI in the absence of feedback: suppose that inclusion is the isoform that is predominantly generated during a burst (PSI > 0.5). After the burst, both inclusion and skipping isoforms will decay (until the next burst starts again). If inclusion is much less stable than skipping, the PSI will quickly drop after the burst, eventually reaching PSI = 0 during most of the waiting time till the next burst. To account for this we specified the degradation rates for the inclusion and skipping isoforms individually.
Implementation of bimodal variants of the model utilized the model depicted in Fig. 7, with (i) feedback amplification being set to zero in the stochastic bursting analysis (Fig. 7b and c); (ii) stochastic bursting being eliminated when focusing on the effects of positive feedback (Fig. 7d and e); (iii) both mechanisms considered simultaneously in Fig. 7 f and g. All stochastic simulations were performed using the Gillespie algorithm with a total of 10,000 time points. The histograms and time courses in Fig. 7 show fluctuations after the simulation reached steady state. For comparison, the time courses in Fig. 7 also contain simulations of the corresponding ODE system, with an initial condition of 1 Promon and 0 pre-existing transcripts.
Parameter values used for the simulations in Fig. 7 are provided in Supplementary Table 4. These parameter values were obtained by scanning the parameter space for the occurrence of bimodal behavior: for bimodality from transcriptional bursting and variable degradation (Fig. 7b and c), parameters were chosen by scanning the values of ki and ks (commitment to inclusion and skipping, respectively) and of di and ds (degradation of the inclusion and skipping isoforms, respectively). Starting from equal values, we obtained bimodal behavior by simultaneously increasing ki and di, and decreasing ks and ds. For the parameter scanning in the positive feedback regulation scenario (Fig. 7d and e), the coefficient N was chosen as a random value greater than 3. K was chosen to equal a high value of the skipping isoform that was rarely achieved in simulations without feedback, ensuring rare activation of the feedback loop. Finally, the parameter Fbs was determined by scanning for values that permitted bimodality.
Comparison of model simulations to data from Coulon et al
Our co-transcriptional splicing models aim to provide a generic framework to quantitatively analyze how the RNA's fate is determined by the coordination of fundamental enzymatic reactions required for RNA synthesis, such as transcription initiation, elongation and splicing. Testing how the models behave when confronted with experimental data is invaluable for assessing the validity of our theory. Yet measuring multiple enzymatic reactions simultaneously during RNA synthesis is experimentally challenging, especially in single cells. However, Coulon et al.59 obtained this type of data in single cells in great temporal and spatial detail. We therefore used their data to further challenge our models.
Coulon et al. performed dual-color labeling of single transcripts of a human β-globin reporter to assess whether single-intron splicing displays dependencies on transcript elongation, transcript end processing and release: specifically, they simultaneously labeled an intron (removed by splicing) and the terminal exon (not removed by splicing) using two different fluorophores. Through measuring the co-localization and concentration of the two fluorophores, they were able to monitor transcript elongation, intron splicing and transcript release from chromatin at the single-transcript level, allowing them to determine which fraction of transcripts is spliced co- or post-transcriptionally, and which processes depend on the completion of others. Individual kinetic parameters of these processes (e.g., the elongation rate, splicing time, and the rate of transcript release from chromatin) were inferred from the data using a quantitative stochastic modeling approach.
We assessed whether our model of co-transcriptional splicing is consistent with the data reported by Coulon et al. Notably, the stochastic model which these authors used to quantitatively describe their data is similar to ours, as they also allow splicing to happen with a delay during transcript elongation, i.e., after the required cis-regulatory splicing elements (splice sites) have been synthesized. Moreover, in the model variant that is most consistent with the data, the authors assumed that transcript elongation and splicing are independent processes, i.e., perturbing splicing does not affect elongation and vice versa. This already suggests that our model may be suitable to quantitatively describe their data. However, there are also important differences between our model and the one reported by Coulon et al.: (i) their model describes a single intron flanked by two exons, whereas we describe a complete three-exon minigene including two introns flanking an alternative exon; (ii) we assume an exon definition mechanism of splicing, whereas they assumed an intron definition mechanism; (iii) in their description, splicing is a multistep process (the corresponding splicing time has a peaked distribution), while we assumed a single rate-limiting step for splicing (exponentially distributed splicing time). Given these differences between the two models, we asked whether our model is able to quantitatively describe the co-transcriptional splicing kinetics reported in Coulon et al.
To this end, we employed the kinetic parameters inferred by Coulon et al. for five experimental conditions, plugged them into our model (as described further below), performed stochastic simulations of co-transcriptional splicing, and compared them to the percentage of transcripts spliced before release from chromatin, which Coulon et al. had observed directly using autocorrelation analysis of their data. Besides the wildtype condition, this analysis was performed for experimental perturbations reported in Coulon et al., in which the rates of transcript elongation (camptothecin, CPT) or intron splicing (spliceostatin A, SSA; expression of the U2AF1 mutant S34F) were altered independently of one another (described further below).
The stochastic model we used for comparison to the Coulon et al. data is depicted in Supplementary Figure 3. As a reference for the stochastic simulations performed later, we derived an analytical solution for the percentage of co-transcriptionally spliced transcripts. It provides an overview of the parameters and permits the use of the reported error bounds to simulate additional data points. In addition, as our model abstracts elongation into discrete steps, we used it to determine how the number of steps impacts the model results:
$$\% SplicedPrerelease = 100 \times \left[ {1 - \left( {\frac{{k_{elong}}}{{k_i + k_{elong}}}} \right)^{no.steps} \times \frac{{tr}}{{k_i + tr}}} \right]$$
The term kelong/(ki + kelong) reflects that each elongation step of the unspliced transcript (Pi) is characterized by competition between (i) splicing commitment and catalysis (ki) and (ii) further elongation into the next unspliced elongation step (kelong). Here, the splicing reaction is assumed to occur right after commitment, so the kelong/(ki + kelong)-term reflects the decision to either splice co-transcriptionally in this elongation step, or to proceed into the next unspliced elongation state (Pi+1 in Supplementary Figure 3a). The overall probability of arriving in the last unspliced elongation step (P8 in Supplementary Figure 3a) is then the kelong/(ki + kelong)-term raised to the power of the total number of elongation steps (no. steps). As elongation comes to an end, there is competition between transcript release (tr) and the commitment and splicing of the intron (ki), implying multiplication with an additional tr/(ki + tr)-term. The resulting product describes the probability of a transcript being released from chromatin before splicing, and the sought-after % co-transcriptional splicing is given by 1 minus this product, multiplied by a factor of 100. In order to independently validate the analytical solution in Eq. 17, the stochastic model was also implemented numerically using the Gillespie algorithm.
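Eq. 17 can be evaluated directly; the helper below is a straightforward transcription (with our naming), which can then be compared against a Gillespie simulation of the same scheme.

```python
def percent_spliced_prerelease(k_elong, k_i, tr, n_steps):
    """Analytical percentage of transcripts spliced before chromatin release
    (Eq. 17).

    k_elong : rate constant of a single elongation step
    k_i     : splicing commitment/catalysis rate constant
    tr      : transcript release rate constant
    n_steps : total number of elongation steps in the model
    """
    p_released_unspliced = (k_elong / (k_i + k_elong)) ** n_steps * tr / (k_i + tr)
    return 100.0 * (1.0 - p_released_unspliced)
```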
The kinetic parameters describing transcript elongation, intron splicing and transcript release from chromatin inferred by Coulon et al. using stochastic model fitting are summarized in Table 1 of their paper. As all reactions in our model follow first-order kinetics, we are able to directly convert rate parameters obtained from Table 1 of Coulon et al. into reaction propensities for simulation using the Gillespie Algorithm, or for use in the analytical solution (Eq. 17).
The elongation rate kelong in our model was obtained via multiplying the polymerase speed (reported in Table 1 of Coulon et al.) by the number of elongation steps in our model divided by the length of the experimentally characterized β-globin reporter, providing the rate for a single elongation step in our model:
$$K_{elong} = ElongationRate = PolSpeed \times \frac{{no.steps}}{{TranscriptLength}}$$
The transcript release rate in our model is calculated as the inverse of the mean 3ʹ end dwell time reported in Coulon et al.
The co-transcriptional splicing rate in our model was calculated from the percentage of transcripts spliced co-transcriptionally, as reported by Coulon et al., divided by the total time available for splicing before release, which follows from the polymerase speed and the 3ʹ end dwell time reported above:
$$K_i = PreReleaseSpliceRate = \frac{{PreReleaseSplice\% }}{{\frac{L}{{PolSpeed}} + 3^\prime EndDwellTime}}$$
where L is the length between the 3ʹ splice site of the reporter construct to the end of the poly-a tail, whose value, 2353 nucleotides, was taken from Fig. 1 in the Coulon et al. paper.
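Putting the two conversion formulas above together, the mapping from the quantities reported in Table 1 of Coulon et al. to model parameters can be sketched as follows. The function is ours, the actual numbers would be taken from their Table 1, and the pre-release splicing percentage is used exactly as it enters the splicing-rate formula above.

```python
def coulon_to_model_rates(pol_speed, dwell_time, prerelease_splice_pct,
                          transcript_length, n_steps, L=2353):
    """Convert quantities reported by Coulon et al. into model rate constants.

    pol_speed             : polymerase elongation speed (nt/s)
    dwell_time            : mean 3' end dwell time before release (s)
    prerelease_splice_pct : percentage of transcripts spliced before release
    transcript_length     : length of the beta-globin reporter (nt)
    n_steps               : number of elongation steps used in the model
    L                     : distance from the 3' splice site to the transcript
                            end (nt); 2353 for this reporter
    """
    k_elong = pol_speed * n_steps / transcript_length           # elongation step rate
    tr = 1.0 / dwell_time                                        # transcript release rate
    k_i = prerelease_splice_pct / (L / pol_speed + dwell_time)   # splicing rate
    return k_elong, tr, k_i
```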
From Coulon et al. we obtained these parameters for the 5 different experimental conditions reported in their Table 1: a WT control, treatment with spliceostatin A (SSA+) to inhibit splicing, treatment with camptothecin (CPT), a topoisomerase inhibitor that reduces the polymerase elongation speed, transfection with a WT copy of the splicing factor U2AF1, and transfection with a U2AF1 allele containing the missense mutation S34F, which reduces both the splicing rate and the transcript release rate. For each datapoint, error bounds are reported, and these were used to create minimal and maximal values for the pre-release splicing percentage. We used the minimal value of the pre-release spliced percentage together with the maximum polymerase speed and minimum 3´ end dwell time for the lower bounds, and vice versa for the maximum value of the pre-release splicing percentage. These sets of parameters were then input into the analytical solution (Eq. 17) and into stochastic simulations implemented as described in the previous sections, with the model scheme depicted in Supplementary Figure 3 and an initial state of 10,000 units of P1.
As shown in Supplementary Figure 3, these solutions faithfully reconstruct the experimentally reported pre-release splicing percentage at low values, with slightly decreasing accuracy at higher values. We conclude that our model displays similar accuracy to pre-existing models supported by experimental data, while providing substantial extensibility, as demonstrated by the use of similar model topologies throughout this paper to capture varied regulatory aspects of alternative splicing.
All model source code and relevant data are available from the authors upon request.
Fica, S. M. & Nagai, K. Cryo-electron microscopy snapshots of the spliceosome: structural insights into a dynamic ribonucleoprotein machine. Nat. Struct. Mol. Biol. 24, 791–799 (2017).
Galej, W. P. Structural studies of the spliceosome: past, present and future perspectives. Biochem Soc. Trans. 46, 1407–1422 (2018).
Papasaikas, P. & Valcárcel, J. The Spliceosome: The Ultimate RNA Chaperone and Sculptor. Trends Biochem Sci. 41, 33–45 (2016).
Wang, E. T. et al. Alternative isoform regulation in human tissue transcriptomes. Nature 456, 470–476 (2008).
Chen, S.-Y., Li, C., Jia, X. & Lai, S.-J. Sequence and Evolutionary Features for the Alternatively Spliced Exons of Eukaryotic Genes. Int J. Mol. Sci. 20, 3834 (2019).
Lee, Y. & Rio, D. C. Mechanisms and Regulation of Alternative Pre-mRNA Splicing. Annu Rev. Biochem 84, 291–323 (2015).
Ule, J. & Blencowe, B. J. Alternative Splicing Regulatory Networks: Functions, Mechanisms, and Evolution. Mol. Cell 76, 329–345 (2019).
Lewis, B. P., Green, R. E. & Brenner, S. E. Evidence for the widespread coupling of alternative splicing and nonsense-mediated mRNA decay in humans. Proc. Natl Acad. Sci. USA 100, 189–192 (2003).
Munkley, J., Livermore, K., Rajan, P. & Elliott, D. J. RNA splicing and splicing regulator changes in prostate cancer pathology. Hum. Genet 136, 1143–1154 (2017).
Black, A. J., Gamarra, J. R. & Giudice, J. More than a messenger: Alternative splicing as a therapeutic target. Biochimica et. Biophysica Acta (BBA) - Gene Regulatory Mechanisms 1862, 194395 (2019).
Coltri, P. P., dos Santos, M. G. P. & da Silva, G. H. G. Splicing and cancer: Challenges and opportunities. Wiley Interdiscip. Rev. RNA 10, e1527 (2019).
Carazo, F., Romero, J. P. & Rubio, A. Upstream analysis of alternative splicing: a review of computational approaches to predict context-dependent splicing factors. Brief. Bioinform 20, 1358–1375 (2019).
Yang, Q., Zhao, J., Zhang, W., Chen, D. & Wang, Y. Aberrant alternative splicing in breast cancer. J. Mol. Cell Biol. 11, 920–929 (2019).
Frankiw, L., Baltimore, D. & Li, G. Alternative mRNA splicing in cancer immunotherapy. Nat. Rev. Immunol. 19, 675–687 (2019).
Montes, M., Sanford, B. L., Comiskey, D. F. & Chandler, D. S. RNA Splicing and Disease: Animal Models to Therapies. Trends Genet. 35, 68–87 (2019).
Siegfried, Z. & Karni, R. The role of alternative splicing in cancer drug resistance. Curr. Opin. Genet Dev. 48, 16–21 (2018).
House, A. E. & Lynch, K. W. An exonic splicing silencer represses spliceosome assembly after ATP-dependent exon recognition. Nat. Struct. Mol. Biol. 13, 937–944 (2006).
Motta-Mena, L. B., Heyd, F. & Lynch, K. W. Context-Dependent Regulatory Mechanism of the Splicing Factor hnRNP L. Mol. Cell 37, 223–234 (2010).
Long, J. C. & Caceres, J. F. The SR protein family of splicing factors: master regulators of gene expression. Biochemical J. 417, 15–27 (2009).
Modafferi, E. F. & Black, D. L. A complex intronic splicing enhancer from the c-src pre-mRNA activates inclusion of a heterologous exon. Mol. Cell Biol. 17, 6537–6545 (1997).
Pandit, S. et al. Genome-wide Analysis Reveals SR Protein Cooperation and Competition in Regulated Splicing. Mol. Cell 50, 223–235 (2013).
Erkelenz, S. et al. Position-dependent splicing activation and repression by SR and hnRNP proteins rely on common mechanisms. RNA 19, 96–102 (2013).
Ajith, S. et al. Position-dependent activity of CELF2 in the regulation of splicing and implications for signal-responsive regulation in T cells. RNA Biol. 13, 569–581 (2016).
Ule, J. et al. An RNA map predicting Nova-dependent splicing regulation. Nature 444, 580–586 (2006).
Lovci, M. T. et al. Rbfox proteins regulate alternative mRNA splicing through evolutionarily conserved RNA bridges. Nat. Struct. Mol. Biol. 20, 1434–1442 (2013).
Wang, Z. et al. iCLIP Predicts the Dual Splicing Effects of TIA-RNA Interactions. PLoS Biol. 8, e1000530 (2010).
Xue, Y. et al. Genome-wide Analysis of PTB-RNA Interactions Reveals a Strategy Used by the General Splicing Repressor to Modulate Exon Inclusion or Skipping. Mol. Cell 36, 996–1006 (2009).
de la Mata, M. et al. A Slow RNA Polymerase II Affects Alternative Splicing In Vivo. Mol. Cell 12, 525–532 (2003).
Dujardin, G. et al. How Slow RNA Polymerase II Elongation Favors Alternative Exon Skipping. Mol. Cell 54, 683–690 (2014).
Eperon, L. P., Graham, I. R., Griffiths, A. D. & Eperon, I. C. Effects of RNA secondary structure on alternative splicing of Pre-mRNA: Is folding limited to a region behind the transcribing RNA polymerase? Cell 54, 393–401 (1988).
Fong, N. et al. Pre-mRNA splicing is facilitated by an optimal RNA polymerase II elongation rate. Genes Dev. 28, 2663–2676 (2014).
Bird, G., Zorio, D. A. R. & Bentley, D. L. RNA Polymerase II Carboxy-Terminal Domain Phosphorylation Is Required for Cotranscriptional Pre-mRNA Splicing and 3′-End Formation. Mol. Cell Biol. 24, 8963–8969 (2004).
Das, R. et al. Functional coupling of RNAP II transcription to spliceosome assembly. Genes Dev. 20, 1100–1109 (2006).
Misteli, T. & Spector, D. L. RNA Polymerase II Targets Pre-mRNA Splicing Factors to Transcription Sites In Vivo. Mol. Cell 3, 697–705 (1999).
Aitken, S., Alexander, R. D. & Beggs, J. D. Modelling Reveals Kinetic Advantages of Co-Transcriptional Splicing. PLoS Comput Biol. 7, e1002215 (2011).
Barash, Y. et al. Deciphering the splicing code. Nature 465, 53–59 (2010).
Jha, A., Gazzara, M. R. & Barash, Y. Integrative deep models for alternative splicing. Bioinformatics 33, i274–i282 (2017).
Rosenberg, A. B., Patwardhan, R. P., Shendure, J. & Seelig, G. Learning the Sequence Determinants of Alternative Splicing from Millions of Random Sequences. Cell 163, 698–711 (2015).
Xiong, H. Y. et al. The human splicing code reveals new insights into the genetic determinants of disease. Science 347, 1254806–1254806 (2015).
Cortés-López, M. et al. High-throughput mutagenesis identifies mutations and RNA-binding proteins controlling CD19 splicing and CART-19 therapy resistance. Nat. Commun. 13, 5570 (2022).
Arias, M. A., Lubkin, A. & Chasin, L. A. Splicing of designer exons informs a biophysical model for exon definition. RNA 21, 213–229 (2015).
Baeza-Centurion, P., Miñana, B., Schmiedel, J. M., Valcárcel, J. & Lehner, B. Combinatorial Genetics Reveals a Scaling Law for the Effects of Mutations on Splicing. Cell 176, 549–563.e23 (2019).
Braun, S. et al. Decoding a cancer-relevant splicing decision in the RON proto-oncogene using high-throughput mutagenesis. Nat. Commun. 9, 3315 (2018).
Davis-Turak, J. C. et al. Considering the kinetics of mRNA synthesis in the analysis of the genome and epigenome reveals determinants of co-transcriptional splicing. Nucleic Acids Res 43, 699–707 (2015).
Davis-Turak, J. C., Johnson, T. L. & Hoffmann, A. Mathematical modeling identifies potential gene structure determinants of co-transcriptional control of alternative pre-mRNA splicing. Nucleic Acids Res 46, 10598–10607 (2018).
Enculescu, M. et al. Exon Definition Facilitates Reliable Control of Alternative Splicing in the RON Proto-Oncogene. Biophys. J. 118, 2027–2041 (2020).
Mikl, M., Hamburg, A., Pilpel, Y. & Segal, E. Dissecting splicing decisions and cell-to-cell variability with designed sequence libraries. Nat. Commun. 10, 4572 (2019).
Schmidt, U. et al. Real-time imaging of cotranscriptional splicing reveals a kinetic model that reduces noise: implications for alternative splicing regulation. J. Cell Biol. 193, 819–829 (2011).
Berget, S. M. Exon Recognition in Vertebrate Splicing. J. Biol. Chem. 270, 2411–2414 (1995).
de Conti, L., Baralle, M. & Buratti, E. Exon and intron definition in pre-mRNA splicing. Wiley Interdiscip. Rev. RNA 4, 49–60 (2013).
Ke, S. & Chasin, L. A. Context-dependent splicing regulation. RNA Biol. 8, 384–388 (2011).
Bentley, D. L. Coupling mRNA processing with transcription in time and space. Nat. Rev. Genet 15, 163–175 (2014).
Dvinge, H. Regulation of alternative mRNA splicing: old players and new perspectives. FEBS Lett. 592, 2987–3006 (2018).
David, C. J., Boyne, A. R., Millhouse, S. R. & Manley, J. L. The RNA polymerase II C-terminal domain promotes splicing activation through recruitment of a U2AF65-Prp19 complex. Genes Dev. 25, 972–983 (2011).
de La Mata, M. & Kornblihtt, A. R. RNA polymerase II C-terminal domain mediates regulation of alternative splicing by SRp20. Nat. Struct. Mol. Biol. 13, 973–980 (2006).
Morris, D. P. & Greenleaf, A. L. The Splicing Factor, Prp40, Binds the Phosphorylated Carboxyl-terminal Domain of RNA Polymerase II. J. Biol. Chem. 275, 39935–39943 (2000).
Graveley, B. R., Hertel, K. J. & Maniatis, T. A systematic analysis of the factors that determine the strength of pre-mRNA splicing enhancers. EMBO J. 17, 6747–6756 (1998).
Sciabica, K. S. & Hertel, K. J. The splicing regulators Tra and Tra2 are unusually potent activators of pre-mRNA splicing. Nucleic Acids Res 34, 6612–6620 (2006).
Coulon, A. et al. Kinetic competition during the transcription cycle results in stochastic RNA processing. Elife 3, e03939 (2014).
Wuarin, J. & Schibler, U. Physical isolation of nascent RNA chains transcribed by RNA polymerase II: evidence for cotranscriptional splicing. Mol. Cell Biol. 14, 7219–7225 (1994).
Zenklusen, D., Larson, D. R. & Singer, R. H. Single-RNA counting reveals alternative modes of gene expression in yeast. Nat. Struct. Mol. Biol. 15, 1263–1271 (2008).
Wan, Y. & Larson, D. R. Splicing heterogeneity: separating signal from noise. Genome Biol. 19, 86 (2018).
Gillespie, D. T. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J. Comput Phys. 22, 403–434 (1976).
Baeza-Centurion, P., Miñana, B., Valcárcel, J. & Lehner, B. Mutations primarily alter the inclusion of alternatively spliced exons. eLife 9, e59959 (2020).
Waks, Z., Klein, A. M. & Silver, P. A. Cell‐to‐cell variability of alternative RNA splicing. Mol. Syst. Biol. 7, 506 (2011).
Fiszbein, A. & Kornblihtt, A. R. Alternative splicing switches: Important players in cell differentiation. BioEssays 39, 1600157 (2017).
Shalek, A. K. et al. Single-cell transcriptomics reveals bimodality in expression and splicing in immune cells. Nature 498, 236–240 (2013).
Song, Y. et al. Single-Cell Alternative Splicing Analysis with Expedition Reveals Splicing Dynamics during Neuron Differentiation. Mol. Cell 67, 148–161.e5 (2017).
Fritzsch, C. et al. Estrogen‐dependent control and cell‐to‐cell variability of transcriptional bursting. Mol. Syst. Biol. 14, 7678 (2018).
Bell, L. R., Horabin, J. I., Schedl, P. & Cline, T. W. Positive autoregulation of Sex-lethal by alternative splicing maintains the female determined state in Drosophila. Cell 65, 229–239 (1991).
Lu, Y. et al. Alternative Splicing of MBD2 Supports Self-Renewal in Human Pluripotent Stem Cells. Cell Stem Cell 15, 92–101 (2014).
Raj, B. et al. Cross-Regulation between an Alternative Splicing Activator and a Transcription Repressor Controls Neurogenesis. Mol. Cell 43, 843–850 (2011).
Rybak, A. et al. A feedback loop comprising lin-28 and let-7 controls pre-let-7 maturation during neural stem-cell commitment. Nat. Cell Biol. 10, 987–993 (2008).
Witten, J. T. & Ule, J. Understanding splicing regulation through RNA splicing maps. Trends Genet. 27, 89–97 (2011).
Shen, M. & Mattox, W. Activation and repression functions of an SR splicing regulator depend on exonic versus intronic-binding position. Nucleic Acids Res 40, 428–437 (2012).
Cartegni, L. & Krainer, A. R. Correction of disease-associated exon skipping by synthetic exon-specific activators. Nat. Struct. Biol. 10, 120–125 (2003).
Lim, S. R. & Hertel, K. J. Commitment to splice site pairing coincides with A complex formation. Mol. Cell 15, 477–483 (2004).
Blazquez, L. et al. Exon Junction Complex Shapes the Transcriptome by Repressing Recursive Splicing. Mol. Cell 72, 496–509.e9 (2018).
Sibley, C. R. et al. Recursive splicing in long vertebrate genes. Nature 521, 371–375 (2015). 2015 521:7552.
Anna, A. & Monika, G. Splicing mutations in human genetic disorders: examples, detection, and confirmation. J. Appl Genet 59, 253 (2018).
Buratti, E. & Baralle, F. E. Influence of RNA secondary structure on the pre-mRNA splicing process. Mol. Cell Biol. 24, 10505–10514 (2004).
Guantes, R. et al. Global variability in gene expression and alternative splicing is modulated by mitochondrial content. Genome Res 25, 633–644 (2015).
Zhang, J., Zhang, Y. Z., Jiang, J. & Duan, C. G. The Crosstalk Between Epigenetic Mechanisms and Alternative RNA Processing Regulation. Front. Genet. 11, 998 (2020).
Proctor, J. R. & Meyer, I. M. COFOLD: an RNA secondary structure prediction method that takes co-transcriptional folding into account. Nucleic Acids Res 41, e102 (2013).
Saint-André, V., Batsché, E., Rachez, C. & Muchardt, C. Histone H3 lysine 9 trimethylation and HP1γ favor inclusion of alternative exons. Nat. Struct. Mol. Biol. 18, 337–344 (2011).
Schwartz, S., Meshorer, E. & Ast, G. Chromatin organization marks exon-intron structure. Nat. Struct. Mol. Biol. 16, 990–995 (2009).
Linker, S. M. et al. Combined single-cell profiling of expression and DNA methylation reveals splicing regulation and heterogeneity. Genome Biol. 20, 30 (2019).
McGlincy, N. J. & Smith, C. W. J. Alternative splicing resulting in nonsense-mediated mRNA decay: what is the meaning of nonsense? Trends Biochem Sci. 33, 385–393 (2008).
Sarma, U., Hexemer, L., Anyaegbunam, U. A. & Legewie, S. Modelling cellular signalling variability based on single-cell data: the TGFb/SMAD signaling pathway. arXiv preprint arXiv:2007.09093 (2020).
Niemelä, E. H., Verbeeren, J., Singha, P., Nurmi, V. & Frilander, M. J. Evolutionarily conserved exon definition interactions with U11 snRNP mediate alternative splicing regulation on U11-48K and U11/U12-65K genes. RNA Biol. 12, 1256–1264 (2015).
Virtanen, P. et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272 (2020).
wa Maina, C. et al. Inference of RNA polymerase II transcription dynamics from chromatin immunoprecipitation time course data. PLoS Comput Biol 10, e1003598 (2014).
Peccoud, J. & Ycart, B. Markovian Modeling of Gene-Product Synthesis. Theor. Popul Biol. 48, 222–234 (1995).
Suter, D. M. et al. Mammalian Genes Are Transcribed with Widely Different Bursting Kinetics. Science 332, 472–474 (2011).
This work was funded by the Deutsche Forschungsgemeinschaft (DFG) (grant LE 3473/2–3 to S. L.).
Open Access funding enabled and organized by Projekt DEAL.
These authors contributed equally: Timur Horn, Alison Gosliga.
Institute of Molecular Biology (IMB), Ackermannweg 4, 55128, Mainz, Germany
Timur Horn, Alison Gosliga, Mihaela Enculescu & Stefan Legewie
University of Stuttgart, Department of Systems Biology and Stuttgart Research Center Systems Biology (SRCSB), Allmandring 31, 70569, Stuttgart, Germany
Alison Gosliga, Congxin Li & Stefan Legewie
Timur Horn
Alison Gosliga
Congxin Li
Mihaela Enculescu
Stefan Legewie
T.H., M.E. and S.L. conceived and designed research; T.H., A.G., M.E. and S.L. performed deterministic simulations; T.H. performed stochastic simulations; T.H. generated Figs. 1–4 and 6, AG generated Figs. 5 and 7; A.G., C.L. and S.L. wrote the manuscript with input from M.E. and T.H. T.H., M.E. and S.L. wrote the Supplemental Material; A.G., C.L. and S.L. addressed the comments from the reviewers; T.H. and A.G. contributed equally to this work with shared co-first authorship.
Correspondence to Mihaela Enculescu or Stefan Legewie.
Horn, T., Gosliga, A., Li, C. et al. Position-dependent effects of RNA-binding proteins in the context of co-transcriptional splicing. npj Syst Biol Appl 9, 1 (2023). https://doi.org/10.1038/s41540-022-00264-3
|
CommonCrawl
|
Can I model a 1D segway as a cart-pole system?
The equations of motion for a cart-pole (inverted pendulum) system are given as
$$(I+ml^2)\ddot{\theta}+mgl\sin(\theta)+ml\ddot{x}\cos(\theta)=0$$
$$(M+m)\ddot{x}+ml\ddot{\theta}\cos(\theta)-ml\dot{\theta}^2\sin(\theta)=F$$
However, I want to model a two wheeled segway-like robot with motion constrained to only forward and backward actions (effectively restricting motion in 1D). I initially thought that I could model such a constrained segway robot by modeling a cart-pole system with a massless cart (M=0). Is this the right approach to model the dynamics of a 1D segway robot? Or is there a better model for the dynamics of such a robot in 1D?
Paul
Let's start by defining some of the quantities in the equations you gave:
$I$ Inertia of the pendulum about its center of gravity
$M$ Mass of the cart
$m$ Mass of the pendulum
$l$ Distance between pendulum CG and its pivot
$x$ Displacement of the cart
$\theta$ Angle between a vertical axis and the pendulum
$F$ Driving force acting on the cart
$g$ Acceleration due to gravity
Note that I chose a different sign convention for gravity than in the original question. In my answer, $g$ has a positive value, while in the original question $g$ must be negative.
Now let's compare this system and to a "segway-like" balancing robot. The balancing robot's wheels are analogous to the cart, but in addition to translating, they also rotate. This rotation means that for a given mass $M$, the balancing robot will require more energy to accelerate, since it needs to translate and also rotate. So instead of zero cart mass, we expect an additional term that corresponds to this additional required effort.
Another difference is that instead of a linear force acting on the cart, we have a moment applied to the wheels, and an equal and opposite moment applied to the pendulum.
To derive the equations for the balancing robot, we need an additional two quantities, and we'll use $T\!\left( t \right)$ for the driving torque in place of $F$:
$r$ Radius of the wheels
$I_D$ Inertia of the rotating components in the drivetrain
And we end up with $$ \left(I + m l^2\right) \ddot{\theta} - m g l \sin \left( \theta \right) + m l \ddot{x} \cos \left( \theta \right) = - T\!\left( t \right) $$ $$ \left( \frac{I_D}{r^2} + M + m \right) \ddot{x} + m l \ddot{\theta} \cos \left( \theta \right) - m l \dot{\theta}^2 \sin \left( \theta \right) = \frac{T\!\left( t \right)}{r} $$
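As a quick sanity check of these equations, they can be integrated numerically by solving the two coupled equations for the accelerations at each time step. The sketch below uses Python with arbitrary placeholder parameter values and zero drive torque (so the pendulum simply falls over); it is not part of the original answer.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (not from the answer): pendulum mass, wheel mass,
# CG distance, pendulum inertia, drivetrain inertia, wheel radius, gravity
m, M, l, I, I_D, r, g = 2.0, 1.0, 0.3, 0.06, 0.002, 0.05, 9.81

def dynamics(t, state, T=0.0):
    theta, dtheta, x, dx = state
    # Mass matrix and right-hand side of the two equations above,
    # solved simultaneously for (theta_ddot, x_ddot)
    A = np.array([[I + m * l**2,          m * l * np.cos(theta)],
                  [m * l * np.cos(theta), I_D / r**2 + M + m   ]])
    b = np.array([m * g * l * np.sin(theta) - T,
                  T / r + m * l * dtheta**2 * np.sin(theta)])
    ddtheta, ddx = np.linalg.solve(A, b)
    return [dtheta, ddtheta, dx, ddx]

# Start slightly off vertical with no control torque
sol = solve_ivp(dynamics, (0.0, 2.0), [0.05, 0.0, 0.0, 0.0], max_step=0.01)
```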
Since the real system isn't as simple as two bodies connected with a revolute joint, here is some guidance on where to "count" the mass and inertia of system components.
If we can assume we've already decided to model the system as two bodies (i.e. wheels/drivetrain and pendulum), then for each component, you need to ask "with which body does this component move?" Let's say there are two wheels which are secured to the pendulum by bearings. Let's also say that the wheels are driven by motors acting through gearboxes and there are couplings connecting the motors and wheels to the gearboxes (we'll call the couplings between the wheels and gearboxes "low-speed" and the couplings between the motors and gearboxes "high-speed").
It's fairly obvious that the wheels and low-speed couplings move as a unit. And it doesn't take much more thought to see that the inner race of the bearings, the internals of the gearbox and the motor rotors all move with the wheels as well. In your question, you constrain the robot to linear travel, so we will assume that the two wheels move as a unit, too. When there is a gearbox involved, the inertia of the motor and high-speed coupling (and sometimes the gearbox, depending on the convention used on the datasheet -- read carefully!) needs to be reflected to the opposite side of the gearbox. The relationship is given by
$$ I_{low-speed} = R^2 I_{high-speed} $$
So the drivetrain inertia would look something like this:
$$ I_D = 2 \left( I_{wheel} + I_{low-speed coupling} + I_{gearbox} + R^2 \left( I_{high-speed coupling} + I_{motor} \right) \right) $$
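The same bookkeeping can be expressed as a small helper function (a sketch with our naming; the individual inertias would come from component datasheets, and R is the gearbox ratio as above):

```python
def drivetrain_inertia(I_wheel, I_ls_coupling, I_gearbox,
                       I_hs_coupling, I_motor, R):
    """Total drivetrain inertia reflected to the wheel (low-speed) side,
    for two identical wheel/gearbox/motor assemblies."""
    per_side = (I_wheel + I_ls_coupling + I_gearbox
                + R**2 * (I_hs_coupling + I_motor))
    return 2.0 * per_side
```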
I would treat the mass similarly. The gearbox housing, motor stator and outer race of the bearing move with the pendulum, so their masses should add to $m$, while the wheels, all four couplings, gearbox internals and motor rotors should add to $M$. In practice, I've never seen the mass of the gearbox internals on a datasheet, and I doubt that it's significant anyway. But it is likely that the mass and inertia of the motors and at least the high-speed couplings are important, if not also the low-speed couplings. For bearings, I usually assign half of the mass and inertia to each body and assume that it's close enough.
It's also worth mentioning that we have kind of a funny case where both the gearbox housing and the output shaft can be moved while keeping the other stationary. The motor will rotate in response to both of these motions. I haven't put much thought into the "correct" way to account for this effect (possibly adding a small additional term to the pendulum inertia?), but with my engineering hat on, I would feel comfortable neglecting this effect and assuming the controller will compensate anyway.
Kerry
I'm implicitly assuming g to be negative, so I think the original equations that I posted are valid.
– Paul
Mar 7, 2017 at 15:30
@Paul Fair enough - we chose different sign conventions. I'll update my answer.
– Kerry
Also, I think "I" is the moment of inertia around its center of mass, not the pivot point in contact with the cart.
Also correct - one more edit coming...
So, even though there is no cart, could we still model the mass 'M' as the total mass of the wheels (and perhaps motors)? Also, how does one account for the entire drivetrain to estimate $I_D$? What exactly goes into computing $I_D$? I think the moment of inertia of the wheels is one part of it, but what else goes into it?
|
CommonCrawl
|
Emphasis on the deep or shallow parts of the tree provides a new characterization of phylogenetic distances
Julia Fukuyama ORCID: orcid.org/0000-0002-7590-55631
Phylogenetically informed distances are commonly used in the analysis of microbiome data, and analysts have many options to choose from. Although all phylogenetic distances share the goal of incorporating the phylogenetic relationships among the bacteria, they do so in different ways and give different pictures of the relationships between the bacterial communities.
We investigate the properties of two classes of phylogenetically informed distances: the Unifrac family, including weighted, unweighted, and generalized Unifrac, and the DPCoA family, which we introduce here. Through several lines of evidence, including a combination of mathematical, data analytic, and computational methods, we show that a major and heretofore unrecognized cleavage in the phylogenetically informed distances is the relative weights placed on the deep and shallow parts of the phylogeny. Specifically, weighted Unifrac and DPCoA place more emphasis on the deep parts of the phylogeny, while unweighted Unifrac places more emphasis on the shallow parts of the phylogeny. Both the Unifrac and the DPCoA families have tunable parameters that can be shown to control how much emphasis the distances place on the deep or shallow parts of the phylogeny.
Our results allow for a more informed choice of distance and give practitioners more insight into the potential differences resulting from different choices of distance.
The sequencing revolution has given us a much more detailed picture of the bacteria that inhabit the world around us. Since the 1990s, biologists have used marker gene studies to investigate the type and number of bacteria anywhere they care to look [1]. In these studies, a gene, assumed to be common to all the bacteria of interest, is amplified by PCR from the total DNA present in the sample and sequenced. In studies of bacterial communities, the marker gene is often the 16S rRNA gene, as it has both conserved regions that can be used to identify it and more variable regions that allow for differentiation between taxa. The resulting sequences are used as operational taxonomic units, and their abundances are used to describe the abundance of the respective taxon in the community. These marker gene studies represent a considerable advance over previous culture-based methods of characterizing microbial communities because of their ability to identify unculturable bacteria and the much larger number of bacterial taxa they can identify.
However, a major limitation of this type of study is that the sequence of the 16S gene does not necessarily give us the correct assignment of taxa into functional units. In some cases, the sequence of the 16S gene does not give us enough resolution to distinguish between taxa that have very different functions. In other cases, taxa with different 16S sequences can be functionally the same, and our analysis would have more power and be more interpretable if we treated them as such. Within the context of a 16S study, nothing can be done to help with a lack of resolution. The opposite problem, of marker gene studies splitting functionally similar taxa into too many independent units, is in principle solvable, and in practice, it is dealt with indirectly by using phylogenetically aware methods for data analysis. To this end, several phylogenetically informed distances have been developed, all of which aim to quantify the similarities or dissimilarities among microbial communities. Each one encodes in some way the intuition that communities containing closely related taxa should be considered more similar to each other than communities containing only distantly related taxa, even if all of those taxa are technically distinct.
Once the analyst has settled on a definition of distance, he can compute it for each pair of communities in the study, and the distances can then be used for any number of downstream tasks: testing for differences between communities from different environments, clustering communities into groups, looking for gradients in the communities that are associated with other covariates in the study, and so on. The extent to which these methods succeed depends in large part on how appropriate the distance is to the underlying biology, and so it is important to understand how exactly the distance measure uses the phylogeny.
In this paper, we shed light on the properties of these distances. We focus in particular on two classes of phylogenetically informed distances: the Unifrac distances and a new set of distances based on double principal coordinates analysis (DPCoA). The Unifrac distances include unweighted Unifrac [2], weighted Unifrac [3], and generalized Unifrac [4]. Weighted and unweighted Unifrac are among the most popular distances for exploratory analysis of microbiome data (e.g., [5–7]) and are often paired together, as for instance in [8, 9]. Generalized Unifrac has also been used in many studies [10–12], more often in the context of association testing than for exploratory analysis. Double principal coordinates analysis comes from the macroecology literature, but both it and distances derived from it have been used to good effect in the analysis of microbiome data [13–16].
Our main result, which we show through a combination of mathematical, data analytic, and computational methods, is that within both classes, there is a gradient in the level at which the phylogeny is incorporated. Weighted Unifrac and DPCoA sit at one end of the gradient and rely more heavily on the deep structure of the phylogeny when compared with unweighted Unifrac and the non-phylogenetic distances, which rely more heavily on the shallow structure in the phylogeny. We can think of weighted Unifrac and DPCoA as agglomerating taxa into large groups or as having only a small number of degrees of freedom, while the distances at the other end of the spectrum do less agglomeration and have more degrees of freedom.
This result is surprising and is backed up by several different lines of evidence. We first show that we can decompose the Unifrac distances by branch in the tree, and that in both real and simulated datasets, weighted Unifrac relies more heavily on the deep branches than unweighted Unifrac. We then show analytically that the unweighted Unifrac distance computed using the full phylogenetic tree is equivalent to the distance computed using a "forest" in which many of the connections between the deep branches in the phylogeny have been removed. This result is complemented by computations showing that weighted Unifrac and DPCoA, but not unweighted Unifrac, are insensitive to "glomming" together leaves in the tree.
Before turning to our results, we review the two classes of phylogenetic distances under consideration: the Unifrac distances and the DPCoA distances.
The Unifrac distances
The Unifrac distances are a group of phylogenetically informed distances, all of which incorporate the phylogenetic structure by considering the abundances of groups of taxa corresponding to the branches of the phylogenetic tree in addition to individual taxon abundances. Here we will consider both unweighted Unifrac [2] and the generalized Unifrac family [4], which includes as a special case weighted Unifrac [3]. More formal definitions are given in the "Methods" section, but for now, let pib denote the proportion of bacteria in sample i that are descendants of branch b.
Unweighted Unifrac
With this notation, the unweighted Unifrac distance between sample i and sample j is
$$\begin{array}{*{20}l} d_{u}(i,j) = \frac{\sum_{b=1}^{B} l_{b} |\mathbf{1}(p_{ib} > 0) - \mathbf{1}(p_{jb} > 0)|}{\sum_{b = 1}^{B} l_{b}} \end{array} $$
where lb is the length of branch b, B is the number of branches in the tree, and the notation 1(pjb>0) means the function that evaluates to 1 if pjb>0 and 0 otherwise. Therefore, the term |1(pib>0)−1(pjb>0)| in the numerator of (1) describes whether the descendants of branch b are present in only one of the two communities: it is equal to 1 if true and 0 otherwise. We see that the numerator of (1) sums the lengths of the branches which are unique to one of the two communities and the denominator is the sum of the branch lengths, with the result that the entire quantity can be described as the fraction of branches in the tree that are unique to one of the two communities. Note that this quantity depends only on the presence or absence of the taxa, not on their relative abundances.
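To make the formula concrete, the following is a minimal sketch of the computation in Python, assuming that the branch proportions pib and the branch lengths lb have already been extracted from the phylogeny (the function and variable names are illustrative, not taken from any particular package):

```python
import numpy as np

def unweighted_unifrac(p_i, p_j, branch_lengths):
    """Unweighted Unifrac between samples i and j from their per-branch
    descendant proportions p_ib, p_jb and the branch lengths l_b."""
    p_i = np.asarray(p_i, dtype=float)
    p_j = np.asarray(p_j, dtype=float)
    l = np.asarray(branch_lengths, dtype=float)
    # 1 for branches whose descendants are present in exactly one of the two samples
    unique = np.abs((p_i > 0).astype(float) - (p_j > 0).astype(float))
    return np.sum(l * unique) / np.sum(l)
```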
Weighted Unifrac
Weighted Unifrac [3] was designed as a variation of unweighted Unifrac that took into account relative abundances instead of relying solely on the presence or absence of each taxon. As with unweighted Unifrac, it can be written in terms of a sum over the branches of the phylogenetic tree.
Using the same notation as before, the raw weighted Unifrac distance between samples i and j is
$$\begin{array}{*{20}l} d_{w}(i,j) = \sum_{b=1}^{B} l_{b} | p_{ib} - p_{jb}| \end{array} $$
A normalizing factor can be added to raw weighted Unifrac to account for different areas of the phylogeny being closer to or farther from the root, in which case the distance between samples i and j is defined as
$$\begin{array}{*{20}l} d_{wn}(i,j) = \frac{\sum_{b=1}^{B} l_{b} |p_{ib} - p_{jb}|}{\sum_{b=1}^{B} l_{b}(p_{ib} + p_{jb})} \end{array} $$
Although weighted Unifrac was initially described as the sum over branches given above, it was shown in [17] that it can also be written as an earth-mover's distance. If we imagine the bacteria in two samples as piles of earth positioned at their corresponding leaves on the phylogenetic tree, the weighted Unifrac distance between those samples is the minimum amount of work required to move one pile to the other pile.
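A sketch of the corresponding computation, under the same assumptions as the unweighted Unifrac sketch above, is:

```python
import numpy as np

def weighted_unifrac(p_i, p_j, branch_lengths, normalized=True):
    """Raw or normalized weighted Unifrac from the branch proportions
    p_ib, p_jb and the branch lengths l_b."""
    p_i = np.asarray(p_i, dtype=float)
    p_j = np.asarray(p_j, dtype=float)
    l = np.asarray(branch_lengths, dtype=float)
    raw = np.sum(l * np.abs(p_i - p_j))
    if not normalized:
        return raw
    # the normalizing factor accounts for branches sitting closer to or farther from the root
    return raw / np.sum(l * (p_i + p_j))
```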
Generalized Unifrac
The final category of Unifrac distances we will consider is the family of generalized Unifrac distances. They were introduced in [4] in an effort to modulate the emphasis placed on more or less abundant lineages and thereby interpolate between unweighted and weighted Unifrac. The generalized Unifrac distance with tuning parameter α∈[0,1] is defined as follows:
$$\begin{array}{*{20}l} d_{g}(i, j, \alpha) = \frac{\sum_{b=1}^{B} l_{b} (p_{ib} + p_{jb})^{\alpha} \left|\frac{p_{ib} - p_{jb}}{ p_{ib} + p_{jb}}\right|}{\sum_{b=1}^{B} l_{b} (p_{ib} + p_{jb})^{\alpha}} \end{array} $$
The generalized Unifrac distances do not exactly interpolate between weighted and unweighted Unifrac, but they come close. Generalized Unifrac with α=1 is exactly weighted Unifrac. As α gets closer to 0, the (pib+pjb)α term serves to upweight branches that have a smaller proportion of descendants. The intuition behind the design was that unweighted Unifrac places more weight on the branches that have lower abundances, and so distances interpolating between the two should have a parameter that allows more or less weight to be placed on the low-abundance branches. Generalized Unifrac with α=0 is not exactly unweighted Unifrac, but it would be if all of the pib terms were changed to 1(pib>0), that is, if we thought of performing generalized Unifrac on a matrix containing branch descendant indicators instead of branch descendant proportions.
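A sketch of the generalized Unifrac computation is given below; branches with pib+pjb=0 contribute nothing and are skipped to avoid dividing by zero, and setting α=1 recovers normalized weighted Unifrac:

```python
import numpy as np

def generalized_unifrac(p_i, p_j, branch_lengths, alpha=0.5):
    """Generalized Unifrac with tuning parameter alpha in [0, 1]."""
    p_i = np.asarray(p_i, dtype=float)
    p_j = np.asarray(p_j, dtype=float)
    l = np.asarray(branch_lengths, dtype=float)
    total = p_i + p_j
    keep = total > 0                      # skip branches with no descendants in either sample
    weights = l[keep] * total[keep] ** alpha
    ratios = np.abs(p_i[keep] - p_j[keep]) / total[keep]
    return np.sum(weights * ratios) / np.sum(weights)
```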
Generalized DPCoA distances
The second class of phylogenetically informed distances under consideration is the family of generalized DPCoA distances. As with the generalized Unifrac distances, the generalized DPCoA distances have a tunable parameter defining a family of distances, and the distances at the endpoints are special cases. For the generalized DPCoA distances, one endpoint is the standard Euclidean distance, which does not incorporate the phylogeny at all, and the other endpoint is the DPCoA distance. We give a brief review of DPCoA and then describe the family of generalized DPCoA distances.
DPCoA
Double principal coordinates analysis (DPCoA, originally described in [18]) is a method for obtaining low-dimensional representations of species abundance data, taking into account side information about the similarities between the species. For us, the similarity measure is given by the phylogeny, but in principle, it could be anything. To obtain this low-dimensional representation, points corresponding to species are positioned in a high-dimensional space so that the distances between the species points match the phylogenetic distances between the species. Then, each bacterial community is conceptualized as a cloud of species points weighted by how abundant the species is in that community. Each community is positioned at the center of mass of its cloud of species points, and principal components analysis is then used to obtain a low-dimensional representation of the community points.
The procedure is motivated by definitions of α and β diversity introduced by Rao in [19]: the inertia of the point cloud corresponding to each bacterial community is his measure of the α diversity of that community, and the distance between the community points is his measure of β diversity. The framework allows for a unified treatment of diversity, with a decomposition of total α diversity into per-site α diversity and between-site β diversity, all while taking into account species similarities.
DPCoA was later characterized as a generalized PCA [20], and from that characterization, we can write the distances in the full DPCoA space between communities i and j as
$$\begin{array}{*{20}l} d_{d}(i,j) = (\mathbf{x}_{i} - \mathbf{x}_{j})^{T} \mathbf{Q} (\mathbf{x}_{i} - \mathbf{x}_{j}) \end{array} $$
where xi is a vector giving the taxon abundances in sample i and $\mathbf {Q} \in \mathbb {R}^{p \times p}$ is the covariance matrix for a Brownian motion along the tree [21], meaning that Qij denotes the length of the ancestral branches common to taxon i and taxon j.
Generalized DPCoA
We turn next to the generalized DPCoA distances. This family of distances was used implicitly in developing adaptive gPCA [22], a phylogenetically-informed ordination method. Here we will define the family explicitly: the generalized DPCoA distance with parameter r is:
$$\begin{array}{*{20}l} d_{\text{gd}}&(i, j, r) = \\ & (\mathbf{x}_{i} - \mathbf{x}_{j})^{T} (r^{-1} \mathbf{I}_{p} + (1 - r)^{-1} \mathbf{Q}^{-1})^{-1} (\mathbf{x}_{i} - \mathbf{x}_{j}) \end{array} $$
with the same notation as in Eq. (5) and r∈(0,1).
In adaptive gPCA, the parameter r controls how much prior weight to give to the phylogenetic structure, but we can dispense with that interpretation and simply think of the different values of r as giving us different distances between the samples, just as the parameter α does for generalized Unifrac.
As with the generalized Unifrac distances, the distances given at the endpoints, with r=1 and r=0, help us to understand the family as a whole. In the limit as r→0, the generalized DPCoA distance reduces to the standard Euclidean distance (the straight-line distance between two points), which has no dependence on the phylogeny. At the other extreme, in the limit as r→1, the distance reduces to the distance in double principal coordinates analysis [18].
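For concreteness, a sketch of the generalized DPCoA distance is given below. It assumes that the matrix Q of shared ancestral branch lengths has already been computed from the tree and that r is strictly between 0 and 1; as in the formulas above, the returned value is the quadratic form itself rather than its square root:

```python
import numpy as np

def generalized_dpcoa_distance(x_i, x_j, Q, r):
    """Generalized DPCoA distance between abundance vectors x_i and x_j.

    Q is the p x p Brownian-motion covariance matrix on the tree, whose
    (k, l) entry is the length of the branches ancestral to both taxa k and l.
    """
    d = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
    p = d.shape[0]
    # (r^{-1} I_p + (1 - r)^{-1} Q^{-1})^{-1}
    M = np.linalg.inv(np.eye(p) / r + np.linalg.inv(Q) / (1.0 - r))
    return float(d @ M @ d)
```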
A final technical note: although we defined the DPCoA distances as distances, the initial description was as an inner product, with the distance being derived from that definition. The formulation as an inner product has some useful implications: for example, if we want to use the distances for ordination (to make a low-dimensional representation of the data), we can use generalized PCA instead of multi-dimensional scaling, with the result that the directions in the low-dimensional plot have interpretations in terms of the taxa in the dataset.
Relationship between Unifrac and DPCoA distances
Although the Unifrac and DPCoA distances have very different derivations, the mathematical representation of the DPCoA distance is quite similar to the mathematical representation of raw weighted Unifrac. As shown in [23], the DPCoA distance can be written as
$$\begin{array}{*{20}l} d_{\text{dpcoa}}(i, j) = \left[\sum_{b = 1}^{B} l_{b} \left (p_{ib} - p_{jb} \right)^{2} \right]^{1/2} \end{array} $$
This representation of the distances between the community points in DPCoA suggests that DPCoA and weighted Unifrac should give fairly similar descriptions of the relationships between the community points, as the differences between them are analogous to the differences between the L1 and L2 distances. In practice and in the datasets we have investigated, this has held true.
Non-phylogenetic distances
We will also compare the phylogenetic distances with the Bray-Curtis dissimilarity and the Jaccard index, two non-phylogenetic measures of community similarity commonly used in ecology. Both measures are defined in the "Methods" section, but for the purposes of this paper, it suffices to know that the Bray-Curtis dissimilarity uses information on species abundance, while the Jaccard index uses only the presence or absence of the species at each site.
Illustrative dataset
We will use data taken from an experiment studying the effects of antibiotic treatment on the human gut microbiome [24] to illustrate the ideas developed in this paper. In the study, fecal samples were taken from three individuals over the course of 10 months, during which time each subject took two 5-day courses of the antibiotic ciprofloxacin separated by six months. Each individual was sampled daily for the 5 days of the antibiotic treatment and the five following days, and weekly or monthly before and after, for a total of 52 to 56 samples per individual. Operational taxonomic units (OTUs) were created using Uclust [25] with 97% sequence identity, and the 16S sequences were aligned to the SILVA reference tree [26], as described previously [24]. All 2582 OTUs were retained for analysis (no abundance filtering was performed). The abundances were transformed using a started log transformation [27], x↦ log(1+x) as a way of approximately stabilizing the variance [28] and reducing the outsize effect the most abundant OTUs would otherwise have.
Weighted Unifrac favors deep branches, unweighted Unifrac favors shallow branches
All of the Unifrac distances can be decomposed by branch of the phylogenetic tree, and we can use this decomposition to investigate deep vs. shallow branch contributions to these distances. The formulas used are given in the "Methods" section, but we give a brief description here.
Recall from Eq. (2) that raw weighted Unifrac is defined as a sum over branches in the tree. Therefore, the contribution of branch b to either the raw or normalized weighted Unifrac distance between samples i and j is just the corresponding element in the sum, lb|pib−pjb|. For generalized Unifrac, the analogous quantity is $l_{b} (p_{ib} + p_{jb})^{\alpha } \left |\frac {p_{ib} - p_{jb}}{ p_{ib} + p_{jb}}\right |$. For unweighted Unifrac, branch b contributes $l_{b} / \sum _{j=1}^{B} l_{j}$ if the branch has descendants in exactly one of the two communities, and contributes zero otherwise. We refer to these as the unnormalized branch contributions. Note that the unnormalized branch contribution depends both on the position of the branch in the tree and on its length. Since we are interested in understanding the relative importance of different regions in the tree, and not in the branches themselves, we also normalize by branch length. This involves dividing each of the quantities defined above by lb, giving us the contribution per unit branch length instead of the overall contribution of a branch. From there, we obtain the normalized contribution of each branch over the entire dataset by averaging these contributions over all pairs of samples in the dataset.
Since we are interested in the relative contributions of the deep and shallow branches, we computed cumulative average contributions of the shallowest p fraction of branches in the tree, for p in a range between .5 and 1. Shallowness is represented by the number of descendants, so the shallowest branches are those with only one descendant, and they correspond to p=.5. The deepest branch, at the root, corresponds to p=1. We then plotted these quantities for unweighted Unifrac, weighted Unifrac, and generalized Unifrac with α=0, .25, .5, and .75, as shown in Fig. 1.
Cumulative average contribution (vertical axis) of the shallowest p fraction of the branches in the tree (horizontal axis) to unweighted and generalized Unifrac distances in the antibiotic data. A very large proportion of the unweighted Unifrac distance is contributed by branches with only a few descendants, while that proportion is much smaller for weighted Unifrac
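A sketch of how curves like those in Fig. 1 can be assembled from per-branch contributions is shown below; it assumes the contributions have already been averaged over sample pairs and divided by branch length, and it normalizes by the total so that the curve ends at 1, which is a plotting convenience rather than part of the definitions:

```python
import numpy as np

def cumulative_contribution_curve(avg_contrib, n_descendants):
    """Cumulative contribution of the shallowest branches.

    avg_contrib   : per-branch contributions, averaged over sample pairs
                    and normalized by branch length (length B).
    n_descendants : number of leaves below each branch (length B).
    Returns the fraction of branches included and the cumulative contribution,
    ordered from the shallowest branch (fewest descendants) to the deepest.
    """
    order = np.argsort(n_descendants, kind="stable")
    contrib = np.asarray(avg_contrib, dtype=float)[order]
    cumulative = np.cumsum(contrib) / np.sum(contrib)
    fraction = np.arange(1, len(contrib) + 1) / len(contrib)
    return fraction, cumulative
```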
Looking first at the two extremes, we see that almost 90% of the unweighted Unifrac distance is contributed on average by branches with 9 or fewer descendants (approximately the shallowest 85% of the branches), while only about 25% of the weighted Unifrac distance is contributed by such branches. The deepest 5% of the branches contribute about 50% of the weighted Unifrac distance but almost nothing in unweighted Unifrac. Although it is not possible to read it off of the plot in Fig. 1, a substantial proportion (over 10%) of the weighted Unifrac distance is contributed by branches with 1000 or more descendants, even though there are only 23 such branches out of a total of 5162 branches in the tree. The generalized Unifrac distances have behavior in between: the generalized Unifrac distances with values of α close to 1 have relatively larger contributions from the deeper branches, and as α→0 the deeper branches contribute less and less. Note however that generalized Unifrac with α=0 still puts more weight on the deep branches than unweighted Unifrac. This is consistent with the definition of generalized Unifrac not exactly interpolating between unweighted and weighted Unifrac.
That the deep branches are more important to weighted Unifrac and the shallow branches more important to unweighted Unifrac is even more apparent when we plot the branch contributions along the tree. We used the same branch contribution computations but this time plotted them along the phylogenetic tree for the two extreme points, unweighted Unifrac and weighted Unifrac. A subtree containing a randomly selected set of 200 leaves and their ancestral branches is shown in Fig. 2. The subtree is shown because the full phylogenetic tree with 2500 leaves is too big to be easily inspected. We see that for weighted Unifrac, the shallow branches (those with few descendants) contribute very little to the distance, and as we move towards the root, the deeper branches contribute larger and larger amounts. Unweighted Unifrac shows the opposite pattern: the shallow branches contribute more to the distance, and the deep branches often contribute nothing at all (the dark purple branches in the left panel of Fig. 2 have zero contribution).
Average contributions of each branch to unweighted (left) vs. weighted (right) Unifrac distance. Color represents log10 of the contribution, so numbers closer to zero (more yellow) indicate larger contributions, and large negative numbers (more purple) indicate smaller contributions
Weighted Unifrac favors deep branches in simulation experiments
The pattern of unweighted Unifrac relying more heavily on the shallow branches than weighted Unifrac is not specific to the dataset shown in Fig. 1. To investigate the robustness of this finding, we looked at the branch contributions under three simulation strategies. The first two simulations investigate branch contributions in realistic setups, when there is some structure to the communities that is either unrelated to the phylogeny (the first simulation) or related to the phylogeny (the second simulation). In simulation 1, the samples fall into two groups, each of which has its own set of characteristic taxa, and the sets are unrelated to the phylogeny. In simulation 2, the samples fall along a gradient, with the endpoints corresponding to under- or over-representation of a certain clade. The branch contribution curves are shown in Additional file 1: Figures S1 and S2, and details of the simulation are available in Additional file 1. In each case, for a wide range of numbers of samples, numbers of taxa, numbers of characteristic taxa, and noise in the abundance matrix, we see the same pattern that unweighted Unifrac places more emphasis on the shallow branches than weighted Unifrac does and that the generalized Unifrac distances fall on a spectrum in between.
The last simulation is based on an edge case in which all of the Unifrac distances depend solely on the shallowest branches, those directly above the leaves. The phylogeny is structured as a full binary tree, that is, a tree in which each node has two children, and the tree is taken to have all branches of the same length. The samples are divided into two groups, and for any pair of leaves that share a parent, one leaf is present in the first group and absent in the second, and the other leaf is present in the second group and absent in the first group. In this situation, if we have a total of p taxa, the distance between samples in the same group is zero, the unweighted Unifrac distance between samples in different groups is $\frac {p}{2p-2}$, the raw weighted Unifrac distance between samples in different groups is 2, and all of the Unifrac distance, unweighted, weighted, and generalized, is contributed by the branches directly above the leaves. The corresponding branch contribution plot is shown in the upper left panel of Fig. 3. This is the only case we will see where unweighted Unifrac does not place strictly more weight on the shallow branches than weighted Unifrac does, and even so we have equality between the two distances and not a reversal of the pattern.
Cumulative average contribution (vertical axis) of the shallowest p fraction of the branches in the tree (horizontal axis) to unweighted and generalized Unifrac distances for simulated data. Top left panel is the noiseless case, and in subsequent panels, "present" taxa are sampled from a distribution with mean 10 and standard deviation given in the facet label
Next, we looked at what happens to the branch contributions when we add noise to this simulation, as we would see in real data. Instead of giving the taxa that are truly present in a sample deterministic non-zero abundances, we sample counts for those taxa from a double Poisson distribution [29] with a mean of 10 and standard deviations between .01 and 4.5. More details about the simulation strategy and the double Poisson family are given in the "Methods" section, but briefly, the double Poisson is a distribution over the non-negative integers that allows for both under- and over-dispersion relative to the Poisson. When we add even a small amount of noise to the simulation, we immediately recover the pattern of weighted Unifrac placing strictly more weight on the deep branches than unweighted Unifrac, as shown in Fig. 3. As a final note, the amount of noise in panels 2–5 of Fig. 3 is less than we would expect in real experiments. Microbiome counts tend to be overdispersed relative to the Poisson, but the simulations shown in panels 2–5 are substantially under-dispersed. This simulation indicates that even in extreme cases where the Unifrac distances should be determined entirely by the shallowest branches in the tree, adding any noise to the problem recovers the pattern of unweighted Unifrac relying more heavily on the shallow branches and weighted Unifrac relying more heavily on the deep branches.
Unweighted Unifrac is independent of the deep structure of the tree
In the previous section, we saw that the deep branches contribute less to the unweighted Unifrac distance than the shallow ones do, and that many contribute nothing at all. Here we strengthen that observation, showing that under conditions that often hold in practice, we can completely remove some of the connections between the deep branches in the tree without changing the set of unweighted Unifrac distances between our samples. This indicates that the set of unweighted Unifrac distances on a given dataset is often completely independent of the deep branching structure of the phylogeny.
Specifically, consider any branch in the tree that has at least one descendant in all of the samples. Note that all the branches ancestral to this branch share the same property. This branch and its ancestors never contribute to the unweighted Unifrac distance, and so "breaking" the tree at these branches into unconnected subtrees does not change the set of distances. An illustrative example is shown in Fig. 4, and a more formal proof and description of the equivalence is given in the "Methods" section.
Illustration of two sets of trees which give the same unweighted Unifrac distances between a pair of samples. Yellow branches are those with descendants in both communities, and blue or green branches are unique to the square or the diamond communities, respectively. If all the branches have the same length, both the tree on the left and the three-tree forest on the right lead to unweighted Unifrac distances of .5 between the square and diamond communities
To see how extensively the phylogeny can be broken up and yield the same unweighted Unifrac distances in real data, we performed the procedure of breaking the tree along shared branches on our illustrative dataset. We were interested in the number of subtrees resulting from this procedure and in how many leaves the subtrees contained. In Fig. 5, we see the distribution of the sizes of the 156 resulting trees: out of 2582 taxa, we obtain just under 50 trees with only one leaf. Most of the trees have fewer than 50 leaves, but we also see some trees with a couple hundred leaves. The large number of small trees is likely responsible for the similarity between the unweighted Unifrac distance and several non-phylogenetic distances, which is explored further in the last part of this section.
Number of leaves in the subtrees created when the phylogenetic tree is broken along shared branches
Sensitivity to taxon agglomeration shows that the Unifrac and DPCoA distances are characterized by their reliance on the deep branches
To complement our finding that unweighted Unifrac has no dependence on the deep branching structure, we can show that weighted Unifrac and DPCoA rely primarily on the deep branches by showing that they are relatively insensitive to "glomming" the bacterial taxa together to higher levels on the phylogenetic tree. As with the results for the branch decompositions, we will see that the generalized Unifrac distances and generalized DPCoA distances show a range of sensitivities to glomming, with DPCoA and weighted Unifrac at the least sensitive end and unweighted Unifrac and the standard Euclidean distance (a non-phylogenetic distance) at the most sensitive end.
When we refer to glomming taxa together here, we mean taking a pair of sister taxa and replacing them with one pseudo-taxon whose abundance is the sum of the abundances of the two taxa which were replaced and whose position on the tree is at the parent node of the two sister taxa. By doing this multiple times, we obtain smaller, lower-resolution datasets with any number of pseudo-taxa between one (all the taxa glommed together into one pseudo-taxon) and the number of taxa in the initial dataset (no glomming). When we glom together taxa, we lose the fine-scale information about the taxon abundances and are left only with information about the abundances of larger clades. If a method gives the same results on heavily glommed data as on the full data, it indicates that the method is not using the fine-scale abundance information.
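A single glomming step can be sketched as follows; the sketch operates only on the abundance matrix, and the bookkeeping that places the new pseudo-taxon at the parent node of the two sister taxa is left to the caller:

```python
import numpy as np

def glom_sister_pair(abundances, i, j):
    """Replace sister taxa i and j (columns of an n x p abundance matrix)
    with one pseudo-taxon whose abundance is the sum of the two columns."""
    abundances = np.asarray(abundances, dtype=float)
    merged = abundances[:, i] + abundances[:, j]
    keep = [k for k in range(abundances.shape[1]) if k not in (i, j)]
    return np.column_stack([abundances[:, keep], merged])
```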
To quantify the sensitivity of each distance to glomming, we used DISTATIS [30], a method which computes an RV coefficient [31] between distance matrices defined on the same sets of objects. The RV coefficient (described in the "Methods" section) is a generalization to the multi-dimensional setting of the correlation between vectors, and as for the correlation, higher values indicate that the distances are more similar to each other.
For each distance, we computed the RV coefficient between a dataset glommed to 16,32,64,…,1024 taxa and the full dataset (with 2582 taxa). These computations were done for members of the Unifrac family, including unweighted Unifrac and generalized Unifrac with α=0, .1, .5, .9, 1, and for members of the DPCoA family with values of r between 0 and 1. The results are shown in Fig. 6, which shows that within each family, there is a range of sensitivity to glomming, with weighted Unifrac (generalized Unifrac with α=1) and standard DPCoA (generalized DPCoA with r=1) being the least sensitive. Within each family, as the tuning parameter decreases, the sensitivity to glomming increases, as we would have expected from our previous results and from the definition of the DPCoA family of distances. DPCoA in particular is quite insensitive to glomming, with the RV coefficient remaining above .98 until we have glommed the initial 2582-taxon tree to under 30 taxa. Weighted Unifrac and some of the generalized Unifrac family members are also relatively insensitive to glomming: a tree an order of magnitude smaller than the full tree still gives RV coefficients above .95 for all of the generalized Unifrac distances we considered.
The DPCoA and Unifrac distances both exhibit a gradient in their sensitivity to taxon agglomeration. We plot the RV coefficient (vertical axis) between distances computed on the full dataset and distances computed on a dataset glommed to some number of taxa (horizontal axis). We show a set of DPCoA distances (top panel) with different values of r (indicated by color) and a set of Unifrac distances (bottom panel) with different values of α (indicated by color)
The DPCoA distances show more of a range of sensitivities, and by implication more of a range in the depth at which they incorporate the phylogeny, than the Unifrac distances do. Standard DPCoA is the least sensitive to glomming out of all the distances under consideration, and the Euclidean distance (generalized DPCoA with r=0) is the most sensitive. That generalized DPCoA with r=0 is the most sensitive to glomming is expected, since it completely ignores the phylogeny. That expectation, combined with the result that standard DPCoA is the least sensitive, leads us to believe that in general, the DPCoA family of distances will show more of a range in their sensitivity to glomming, or in the level at which they incorporate the phylogeny, than the Unifrac family of distances.
Comparison of distances to each other shows the same gradient in the Unifrac and DPCoA families
So far, we have seen evidence that within both the Unifrac and DPCoA families, the tunable parameter controls the level at which the phylogeny is incorporated: generalized DPCoA with r close to 1 and generalized Unifrac with α close to 1 both rely heavily on the deep branches of the tree and are remarkably insensitive to glomming together leaves of the phylogeny. On the other end, generalized DPCoA with r close to 0, generalized Unifrac with α close to 0, and unweighted Unifrac have the opposite behavior: they are less dependent on (or in the case of unweighted Unifrac and the standard Euclidean distance, completely independent of) the deep structure in the tree, and they are much more sensitive to glomming together related taxa. The final question we address here is whether the two families follow the same gradient, or whether they give fundamentally different distances between the samples despite exhibiting similar sensitivity to glomming.
To this end, we computed generalized Unifrac distances (α=0, .1, .25, .5, .9, 1), the unweighted Unifrac distance, generalized DPCoA distances (r=0, .1, …, .9, 1), the Bray-Curtis dissimilarity ([32]), and the Jaccard dissimilarity ([33]) between the samples in our illustrative dataset. The Bray-Curtis dissimilarity and the Jaccard dissimilarity were included as examples of non-phylogenetic dissimilarities that use either abundance (Bray-Curtis) or solely presence-absence (Jaccard) information about the taxa. We then computed the RV coefficient between each pair of the resulting 20 distances and used DISTATIS to make a low-dimensional visualization of the relationships between the distances.
In Fig. 7, we see that the two families do indeed seem to follow the same gradient. In the representation of the distances along the first two principal axes, we see that the distances corresponding to different values of the tuning parameter (α for generalized Unifrac, r for generalized DPCoA) fall along a "horseshoe", within which they are ordered according to the value of α and r. We also note that unweighted Unifrac and the non-phylogenetic distances are positioned at the α=0/ r=0 end of the gradient, as we would expect if the gradient is explained by the emphasis the distances place on the deep vs. shallow branches of the tree. The "horseshoe" phenomenon is a common occurrence in low-dimensional embeddings and is generally considered a mathematical artifact resulting from the projection of a non-linear manifold into a lower-dimensional space (see [34, 35] for mathematical models leading to horseshoes).
DISTATIS representation of the relationships between the generalized Unifrac distances, generalized DPCoA distances, unweighted Unifrac distance, Bray-Curtis dissimilarity, and Jaccard dissimilarity, as computed on the illustrative dataset. Top panel represents the distances on the first two principal axes, bottom panel represents the distances on the top three principal axes
We also note that the fraction of variance explained by the first principal axis is over 90%, and the first two principal axes, in which the horseshoe falls, account for more than 96% of the variance explained. This suggests to us that within both families, the differences between the different tuning parameters can be attributed to differences in the level at which the phylogeny is incorporated, and that to a first approximation, the generalized Unifrac and generalized DPCoA families incorporate the phylogeny in the same way.
Although it only accounts for a small fraction, 2.1%, of the explained variance, we also investigated the third principal axis for evidence of either systematic differences between the generalized Unifrac and generalized DPCoA families or between the presence/absence and abundance-based methods (i.e., Jaccard and unweighted Unifrac vs. all the others). In the bottom panel of Fig. 7, we see that the third principal axis separates the generalized Unifrac distances from the generalized DPCoA distances and that, furthermore, the separation increases as the value of the tunable parameter decreases and we go towards distances that rely more on the shallow parts of the phylogeny. There is a certain logic to this pattern: distances relying on the deep branches have fewer degrees of freedom, and so there is less room for difference between those distances. The scores on the third axis also fail to separate the presence/absence-based measures from the abundance-based measures: unweighted Unifrac is actually closer to the abundance-based Bray-Curtis measure than it is to the presence/absence-based Jaccard measure, although in the full space the RV coefficients are approximately the same.
Our finding that phylogenetic distances differ in how much they weight different parts of the phylogeny is useful to practitioners who use these distances. The case of unweighted Unifrac compared with weighted Unifrac is especially important, as these two distances are commonly used and often paired together in the same analysis. It is usually assumed that any difference between the two methods is a result of unweighted Unifrac using only presence/absence data and weighted Unifrac using abundance data, but our results here show that the difference in the emphasis placed on the deep or shallow parts of the phylogeny is perhaps even more important.
Our results are also related to and clarify some previous findings on phylogenetic distances. Parks and Beiko, in [36], catalogued a large number of phylogenetic distances, categorized them according to the set of branches that enter into the mathematical formula for the distances, and examined the empirical similarities between the distances. Their categorization of the distances was as most recent common ancestor (MRCA, the distance between two samples depends only on the most recent common ancestor subtree spanned by the pair of samples), complete lineage (CL, the distance is influenced by the subtree spanned by the samples and all the branches between that subtree and the root of the tree), and complete tree (CT, the distance is influenced by all of the branches in the tree).
According to this categorization, weighted Unifrac is an MRCA measure, while unweighted Unifrac is a CT measure. This at first seems to be at odds with our results, since a CT measure depends on a deeper set of branches than an MRCA measure, and our results show that in practice, unweighted Unifrac depends more on the shallow branches than weighted Unifrac. However, our results actually resolve something that is a bit puzzling in Parks and Beiko. They find that the categorization of the distances into MRCA/CL/CT does not fit well with the empirical clustering of the distances: the CT classification spans the four clusters they find, and the MRCA and CL classifications span three of the four clusters. The results here, both mathematical and empirical, suggest a reason for the lack of alignment: even though unweighted Unifrac technically depends on all of the branches, the form of the distance means that in practice, the deep branches will be less important.
There are of course some limitations to our work. A few of our results are logically entailed by the definitions of the distances, but many will be dataset specific. For instance, branch contributions to unweighted Unifrac must be zero for any branch that has descendants in all the samples, but the difference in the fraction of the distance contributed by deep vs. shallow branches and the difference between those contributions for weighted vs. unweighted Unifrac does not have to be as extreme as it is in the dataset we looked at. Additionally, in the datasets we looked at, many of the deep branches could be removed entirely for unweighted Unifrac. We have shown that we can make one break in the tree for every branch that has descendants in all the samples without changing the set of unweighted Unifrac distances. However, this does not mean that in a different dataset we will be able to break the phylogeny up into as many independent pieces as we were able to here.
There is an easy fix for these problems though: simply perform the same calculations on the dataset of interest. If, for example, there is a large difference in the results from unweighted Unifrac vs. weighted Unifrac, the analyst can calculate how much the branches are contributing to the two distances. A big difference in the contributions of the deep vs. shallow branches for the two methods suggests that the difference in results could be due to the difference in how the phylogeny is incorporated.
We described a new way of characterizing phylogenetic distances, showing that the tunable parameters in both the generalized Unifrac and generalized DPCoA distances control the emphasis placed on the deep vs. shallow branches of the phylogeny. We showed this in several ways: by computing and comparing branch contributions within the Unifrac family, by showing that the families exhibit a gradient in their sensitivity to glomming, and by examining how similar the sets of distances are to each other in real data. In addition to the generalized Unifrac and generalized DPCoA families, we considered the special case of unweighted Unifrac, showing that it falls on the end of the spectrum that places more emphasis on the shallow branches of the tree and that it in fact has an equivalent representation in which the phylogenetic tree is replaced by a "forest" of many independent phylogenies.
Our results give an improved understanding of several phylogenetic distances. This understanding is vital for a valid interpretation of the data and for shaping scientific intuitions about the underlying biology. Our hope is that the properties of these methods that we have outlined will be valuable for the applied researchers who use these tools.
Proof of invariance of unweighted Unifrac to breaking the phylogeny
We first give formal definitions of the tree-related concepts and functions we need to describe manipulations of the phylogenetic tree. We need a definition of a forest to describe how we can break the phylogenetic tree into a forest without changing the unweighted Unifrac distances between the samples.
A rooted forest is a triple F=(V,E,R). V is a set of vertices, E is a set of edges on V, so that E⊂{(v1,v2):v1,v2∈V}, and R⊂V is a set of roots. F is such that:
(V,E) is a (possibly disconnected) acyclic graph.
If Vk represents the vertex set of the kth connected component of (V,E), then R is such that |R∩Vk|=1 for k=1,…,K (each component has one root).
The leaf vertices of a forest F are the vertices that only have one neighbor and are not in the root set R. The leaf edges of a forest F are the edges that connect to a leaf vertex. The children of a non-leaf vertex v are the vertices that are connected to v by an edge and that are farther from the root. The children of a non-leaf edge e are the edges that share a vertex with e and that are farther from the root.
For notational purposes, we will also assume that the vertex set is V={1,…,|V|} and that if the forest has p leaf vertices they are {1,…,p}. We further assume that for each edge, if e=(v1,v2), v1 closer to the root than v2 implies that v1>v2. One way of ensuring these conditions is to use the scheme described in [37].
Unweighted Unifrac requires us to define branch or edge abundances, which we do here with the ndesc function:
Let F=(V,E,R) be a rooted forest with p leaf vertices, and let $\mathbf {x} \in \mathbb N^{p}$ represent leaf abundances. The convention that the leaf nodes are {1,…,p} and the remaining vertices are {p+1,…,|V|} means that (1) xj corresponds to the abundance at leaf vertex j and (2) if edge e is an edge connecting to a leaf node, min(e) will be the leaf node.
The ndesc function takes an edge, a leaf abundance vector, and a forest and gives an edge abundance. We define it as:
$$\begin{array}{*{20}l} \text{ndesc}(e, \mathbf{x}, F) = \left\{\begin{array}{ll} \mathbf{x}_{min(e)} & e \text{ a leaf edge}\\ \sum_{e^{\prime} \in \text{children}(e)} \text{ndesc}(e^{\prime}, \mathbf{x}, F) & \text{otherwise} \end{array}\right. \end{array} $$
Note that this definition implies that if ndesc(e)>0, ndesc(e′)>0 for any e′ ancestral to e.
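The recursion in the definition translates directly into code. The sketch below assumes an edge-based representation of the forest in which each edge maps to its list of child edges and each leaf edge maps to the index of its leaf in x; this representation is illustrative rather than prescribed by the definitions above:

```python
def ndesc(e, x, children, leaf_of):
    """Edge abundance ndesc(e, x, F) from the recursive definition.

    children : dict mapping each edge to the list of its child edges
               (an empty list for leaf edges).
    leaf_of  : dict mapping each leaf edge to the index of its leaf in x.
    x        : vector of leaf abundances.
    """
    if not children[e]:                  # e is a leaf edge
        return x[leaf_of[e]]
    return sum(ndesc(c, x, children, leaf_of) for c in children[e])
```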
Next, we need a function that describes the tree-breaking operation. The main result will be to show the invariance of the unweighted Unifrac distance to this function under certain conditions.
Suppose we have a forest F=(V,E,R) with vertex set V=1,…,|V|. Let e=(v1,v2)∈E.
The tree-breaking function tb takes a forest and an edge in the forest and gives a new forest. We define tb((V,E,R),e)=(V′,E′,R′), where
$$\begin{array}{*{20}l} V^{\prime} &= V \cup |V|+1 \end{array} $$
$$\begin{array}{*{20}l} E^{\prime} &= (E \setminus (v_{1}, v_{2})) \cup (|V|+1, \text{min}(v_{1}, v_{2})) \end{array} $$
$$\begin{array}{*{20}l} R^{\prime} &= R \cup |V|+1 \end{array} $$
In words, the edge between v1 and v2 is removed and replaced with a new root node. See Fig. 8 for an illustration, and note that this way of defining the new edge, root, and vertex keeps the vertex assignments consistent with our convention that leaf vertices are labeled 1,…,p and the remaining vertices are labeled p+1,…,|V|.
Illustration of the tree breaking function. We start off with the six-node tree T on the left. If vertex 6 is the root of T, its leaves are vertices 1, 2, and 3. When we apply the tree-breaking operation to the (5,4) edge, we obtain the forest on the right F=tb(T,(5,4)). The roots are now vertices 7 (added when we broke the tree) and 6 (the root in the initial tree) for the two trees in the forest. The leaves remain vertices 1, 2, and 3
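The tree-breaking operation itself is short; a sketch, assuming the forest is stored as the triple (V, E, R) with integer vertex labels following the conventions above, is:

```python
def tb(forest, e):
    """Tree-breaking function: detach edge e = (v1, v2) and hang it from a
    new root vertex, following the definition of tb((V, E, R), e).

    forest : a triple (V, E, R) of sets, with E a set of (vertex, vertex)
             tuples and R the set of roots.
    """
    V, E, R = forest
    v1, v2 = e
    new_vertex = max(V) + 1              # equals |V| + 1 under the labeling convention
    V_new = V | {new_vertex}
    E_new = (E - {e}) | {(new_vertex, min(v1, v2))}
    R_new = R | {new_vertex}
    return V_new, E_new, R_new
```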
The following lemma is the main insight into unweighted Unifrac and is fundamentally the reason why we can break the tree in certain ways without changing the unweighted Unifrac distance between the samples.
Let s(e,F) be the sister branch of edge e in forest F, and write tb(F) for tb(F,e), the forest obtained by breaking F at e. If s(e,F) is such that ndesc(s(e,F),x,F)>0, then
$$\begin{array}{*{20}l} \mathbf{1}(&\text{ndesc}(e^{\prime}, \mathbf{x}, F) > 0) = \mathbf{1}(\text{ndesc}(e^{\prime}, \mathbf{x}, {{tb}}(F)) > 0) \\ &\forall e^{\prime} \in E({\text{tb}}(F)) \cap E(F) \end{array} $$
$$\begin{array}{*{20}l} \mathbf{1}(&\text{ndesc}(e, \mathbf{x}, F) > 0) = \mathbf{1}(\text{ndesc}(e^{\prime\prime}, \mathbf{x}, {{tb}}(F)) > 0) \\ & e^{\prime\prime} = E({{tb}}(F)) \setminus E(F) \end{array} $$
where E(F) denotes the edge set of forest F.
Consider any edge e′∈E(F)∩E(tb(F)). There are two possibilities: e is a descendant of e′ in F, or it is not.
If e is not a descendant of e′ in F, then
$$\text{ndesc}(e^{\prime}, \mathbf{x}, F) = \text{ndesc}(e^{\prime}, \mathbf{x}, {\text{tb}}(F)). $$
If e is a descendant of e′ in F, then so is s(e,F). In that case, 1(ndesc(e′,x,F)>0)=1 because ndesc(s(e,F),x,F)>0. s(e,F) is a descendant of e′ in tb(F) as well, and so
$$\text{ndesc}(s(e,F), \mathbf{x}, {\text{tb}}(F)) > 0 $$
which means that
$$\mathbf{1}(\text{ndesc}(e^{\prime}, \mathbf{x}, {\text{tb}}(F)) > 0) = 1. $$
Therefore, we have (13) for all e′∈E(tb(F))∩E(F).
For Eq. (14), let e′′ be the new edge in tb(F), that is, the sole element of E(tb(F))∖E(F). Since e′′ has the same descendant leaves as e, ndesc(e′′,x,tb(F))=ndesc(e,x,F), which implies Eq. (14). □
In Theorem 1, we use the lemma above to show that the tree-breaking function does not change the unweighted Unifrac distance between two samples, denoted x1 and x2, if we apply it to the sibling of a branch that has descendants in both samples.
Theorem 1
Let s(e,F) denote the sister branch of edge e in forest F. If s(e,F) is such that ndesc(s(e,F),x1,F)>0 and ndesc(s(e,F),x2,F)>0, then du(x1,x2,F)=du(x1,x2,tb(F,e))
Our lemma tells us that the tree-breaking function leaves invariant the indicators 1(ndesc(e′,x,·)>0) for every e′∈E∩E′, and that 1(ndesc(e,x,F)>0)=1(ndesc(e′′,x,tb(F,e))>0) for the edge e that was removed and the new edge e′′. Since the unweighted Unifrac distance depends on the samples only through these indicators, and since the branch lengths are unchanged, the distance is unchanged. □
In Theorem 2, we simply extend Theorem 1 from the unweighted Unifrac distance between a pair of samples to the set of unweighted Unifrac distances between a collection of samples. It describes how we can break the tree and leave an entire collection of unweighted Unifrac distances among the samples unchanged.
Let x1,…,xn denote leaf abundances for a set of n samples.
As before, let s(e,F) denote the sister branch of edge e in forest F. If s(e,F) is such that ndesc(s(e,F),xi,F)>0 for i=1,…,n, then
$$\begin{array}{*{20}l} d_{u}(&\mathbf{x}_{i}, \mathbf{x}_{j}, F) = d_{u}(\mathbf{x}_{i}, \mathbf{x}_{j}, {\text{tb}}(F, e)) \\ &\forall i = 1,\ldots, n - 1, j = i + 1,\ldots, n \end{array} $$
This follows by applying Theorem 1 to every pair of samples and noting that our assumption that s has descendants in all the samples implies that s has descendants in every pair of samples. □
Branch contributions
We note that both the weighted and unweighted Unifrac distances are written as a sum over the branches in the tree, and so for any branch, we can ask what fraction of the distance it makes up. Suppose we have a tree or forest $\mathcal {T}$ with p leaves, branches/edges E, and an abundance vector $\mathbf {x} \in {\mathbb {N}}^{p}$. In the main text, we described quantities pib as the proportion of bacteria in sample i that are descendants of branch b. With the notation in the previous section, we can make the definition
$$\begin{array}{*{20}l} p(b, \mathbf{x}, \mathcal{T}) = \frac{\text{ndesc}(b, \mathbf{x}, \mathcal{T})}{ \sum_{j=1}^{p} \mathbf{x}_{j}}, \end{array} $$
and so if xi is the vector containing the abundances of sample i, the pib in, e.g., Eqs. (1), (2), (3), (4), and (7) in the main text would be $p(b, \mathbf {x}_{i}, \mathcal {T})$.
If we have communities x1 and x2 related by a tree or forest T with B edges, the unweighted Unifrac distance between x1 and x2 is
$$\begin{array}{*{20}l} d_{u}(&\mathbf{x}_{1}, \mathbf{x}_{2}, \mathcal{T}) = \\ & \sum_{b=1}^{B} l_{b} \frac{|\mathbf{1}(p(b, \mathbf{x}_{1}, \mathcal{T}) > 0) - \mathbf{1}(p(b, \mathbf{x}_{2}, \mathcal{T}) > 0)|}{\sum_{j=1}^{B} l_{j}} \end{array} $$
and the proportion of the unweighted Unifrac distance contributed by branch b will be
$$\begin{array}{*{20}l} \text{ufcont}(&b, \mathbf{x}_{1}, \mathbf{x}_{2}, \mathcal{T}) =\\ &l_{b} \frac{|\mathbf{1}(p(b, \mathbf{x}_{1}, \mathcal{T}) > 0) - \mathbf{1}(p(b, \mathbf{x}_{2}, \mathcal{T}) > 0)|}{(\sum_{j=1}^{B} l_{j})d_{u}(\mathbf{x}_{1}, \mathbf{x}_{2}, \mathcal{T})} \end{array} $$
where lb denotes the length of edge b.
The raw weighted Unifrac distance between x1 and x2 will be
$$\begin{array}{*{20}l} d_{w}(\mathbf{x}_{1}, \mathbf{x}_{2}, \mathcal{T}) = \sum_{b=1}^{B} l_{b} \left| p(b, \mathbf{x}_{1}, \mathcal{T}) - p(b, \mathbf{x}_{2}, \mathcal{T})\right| \end{array} $$
the proportion of the raw weighted Unifrac distance contributed by branch b will be
$$\begin{array}{*{20}l} \text{wufcont}(&b, \mathbf{x}_{1}, \mathbf{x}_{2}, \mathcal{T}) = \\ &l_{b} \left| p(b, \mathbf{x}_{1}, \mathcal{T}) - p(b, \mathbf{x}_{2}, \mathcal{T})\right| / d_{w}(\mathbf{x}_{1}, \mathbf{x}_{2}, \mathcal{T}) \end{array} $$
Finally, the generalized Unifrac distance with parameter α between x1 and x2 is
$$\begin{array}{*{20}l} d_{g}(&\mathbf{x}_{1}, \mathbf{x}_{2}, \alpha, \mathcal{T}) = \\ &\sum_{b=1}^{B}\Bigg(l_{b} \left[p(b, \mathbf{x}_{1}, \mathcal{T}) + p(b, \mathbf{x}_{2}, \mathcal{T})\right]^{\alpha} \\ & \quad\quad\quad \times \left| \frac{p(b, \mathbf{x}_{1}, \mathcal{T}) - p(b, \mathbf{x}_{2}, \mathcal{T})}{p(b, \mathbf{x}_{1}, \mathcal{T}) + p(b, \mathbf{x}_{2}, \mathcal{T})} \right|\Bigg) \end{array} $$
and the proportion of the generalized Unifrac distance contributed by branch b is
$$\begin{array}{*{20}l} \text{guf}&\text{cont}(b, \mathbf{x}_{1}, \mathbf{x}_{2}, \alpha, \mathcal{T}) = \\ &l_{b} \left[p(b, \mathbf{x}_{1}, \mathcal{T}) + p(b, \mathbf{x}_{2}, \mathcal{T})\right]^{\alpha} \\ &\times \left| \frac{p(b, \mathbf{x}_{1}, \mathcal{T}) - p(b, \mathbf{x}_{2}, \mathcal{T})}{p(b, \mathbf{x}_{1}, \mathcal{T}) + p(b, \mathbf{x}_{2}, \mathcal{T})} \right| / d_{guf}(\mathbf{x}_{1}, \mathbf{x}_{2}, \alpha, \mathcal{T}) \end{array} $$
To account for the fact that the different branches have different lengths, we can define the proportion of the distance per unit branch length, which will be the quantities in (18), (20), and (22) divided by lb.
With these definitions, we can find how much on average each branch contributes to the distance. Given a set of community points and a branch in the tree, we can find how much the branch contributes to the distance between every pair of community points. Doing this for every branch gives us an idea of how much of the overall distance is contributed by each of the branches. Suppose that we have a dataset with n communities whose abundances are given in the vectors x1,…,xn. Then, the average contribution of the bth branch to the unweighted Unifrac distance, normalized by branch length, is
$$\begin{array}{*{20}l} \frac{2}{n(n-1)}\sum_{i=1}^{n-1} \sum_{j = i+1}^{n}\text{ufcont}(b, \mathbf{x}_{i}, \mathbf{x}_{j}, \mathcal{T}) / l_{b}. \end{array} $$
For generalized Unifrac with parameter α, we use the analogous expression:
$$\begin{array}{*{20}l} \frac{2}{n(n-1)}\sum_{i=1}^{n-1} \sum_{j = i+1}^{n}\text{gufcont}(b, \mathbf{x}_{i}, \mathbf{x}_{j}, \alpha, \mathcal{T}) / l_{b}. \end{array} $$
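A sketch of the unweighted Unifrac case is given below; it takes the matrix of branch proportions p(b, xi, T) for all samples, averages the per-unit-length contributions over the n(n−1)/2 sample pairs, and treats pairs at distance zero as contributing nothing:

```python
import numpy as np
from itertools import combinations

def average_unweighted_contributions(P, branch_lengths):
    """Average per-unit-branch-length contribution of each branch to the
    unweighted Unifrac distance, over all pairs of samples.

    P : n x B matrix whose (i, b) entry is p(b, x_i, T).
    """
    P = np.asarray(P, dtype=float)
    l = np.asarray(branch_lengths, dtype=float)
    n, B = P.shape
    present = (P > 0).astype(float)
    total_length = np.sum(l)
    totals = np.zeros(B)
    pairs = list(combinations(range(n), 2))
    for i, j in pairs:
        unique = np.abs(present[i] - present[j])
        dist = np.sum(l * unique) / total_length
        if dist > 0:
            # ufcont(b, x_i, x_j, T) / l_b, computed for all branches at once
            totals += unique / (total_length * dist)
    return totals / len(pairs)
```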
RV coefficient
The RV coefficient is a generalization of the standard correlation coefficient from vectors to matrices, and was first described in [31]. Suppose that ${\mathbf {X}} \in {\mathbb {R}}^{n\times p}$ and $\mathbf {Y} \in {\mathbb {R}}^{n \times q}$ are two sets of measurements on the same objects, and let Sxx=XTX, Sxy=XTY, Syx=YTX, and Syy=YTY. Then the RV coefficient between X and Y is defined as
$$\begin{array}{*{20}l} {\text{RV}}({\mathbf{X}}, \mathbf{Y}) = \frac{\text{tr}(\mathbf{S}_{xy} \mathbf{S}_{yx})}{\sqrt{\text{tr}(\mathbf{S}_{xx}^{2})\,\text{tr}(\mathbf{S}_{yy}^{2})}} \end{array} $$
If p=q=1 and X and Y are both centered, it is easy to see that the expression above is the square of the standard correlation coefficient $\rho ({\mathbf {x}}, {\mathbf {y}}) = \frac {\text {cov}({\mathbf {x}},{\mathbf {y}})}{\sqrt {\text {var}({\mathbf {x}}) \text {var}({\mathbf {y}})}}$.
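A direct transcription of the definition into code (assuming X and Y are numeric matrices with the same number of rows) is:

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between X (n x p) and Y (n x q), measured on the same
    n objects."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    S_xx, S_yy, S_xy = X.T @ X, Y.T @ Y, X.T @ Y
    numerator = np.trace(S_xy @ S_xy.T)                      # tr(S_xy S_yx)
    denominator = np.sqrt(np.trace(S_xx @ S_xx) * np.trace(S_yy @ S_yy))
    return numerator / denominator
```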
For completeness, we give definitions of the Bray-Curtis dissimilarity and the Jaccard index here.
Bray-Curtis
The Bray-Curtis dissimilarity [32] aims to describe the compositional differences between pairs of communities, and if x1 and x2 are vectors describing the species abundances in two communities, the Bray-Curtis dissimilarity between them is defined as
$$\begin{array}{*{20}l} d_{\text{BC}}(\mathbf{x}_{1}, \mathbf{x}_{2}) = \frac{\sum_{j=1}^{p} |\mathbf{x}_{1j} - \mathbf{x}_{2j}|}{\sum_{j=1}^{p} \mathbf{x}_{1j} + \sum_{j=1}^{p} \mathbf{x}_{2j}} \end{array} $$
Jaccard
The Jaccard index [33] is based on the presence or absence of species in each of the communities. If we let A be the set of species present in one community and B be the set of species present in the other, then the Jaccard index is |A∩B|/|A∪B|. This is commonly transformed into a dissimilarity measure by taking the complement, or
$$\begin{array}{*{20}l} d_{\text{jacc}} = 1 - \frac{|A \cap B|}{|A \cup B|} \end{array} $$
which is what we will use. The Jaccard index is 1 or the Jaccard dissimilarity is 0 when the two communities have the same set of species, and the Jaccard index is 0 or the Jaccard dissimilarity is 1 when the two communities have completely disjoint sets of species.
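Both dissimilarities are simple to compute from a pair of abundance vectors; a sketch covering the two definitions above is:

```python
import numpy as np

def bray_curtis(x1, x2):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    return np.sum(np.abs(x1 - x2)) / (np.sum(x1) + np.sum(x2))

def jaccard_dissimilarity(x1, x2):
    """One minus the Jaccard index of the sets of taxa present in each sample."""
    a = np.asarray(x1) > 0
    b = np.asarray(x2) > 0
    union = np.sum(a | b)
    if union == 0:
        return 0.0
    return 1.0 - np.sum(a & b) / union
```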
Simulation setup
Simulation 3 investigated the case where all of the contributions to the Unifrac distances come from the shallowest branches if the abundances are measured without noise. The simulated datasets contained p=512 taxa and n=100 samples. The phylogenetic tree describing the relationships among the species was a full binary tree, that is, one in which every interior node has two descendants. We let the taxa be numbered 1,2…,512 and assign them to the leaves of the tree so that pairs of taxa of the form (2i−1,2i) for i=1,…,256 are sister taxa. The mean matrix $M \in {\mathbb {R}}^{n \times p}$ is then given by
$$\begin{array}{*{20}l} M_{ij} = \left\{\begin{array}{ll} 10 & i \le 50, {j} \text{ is even}\\ 10 & i > 50, {j} \text{ is odd}\\ 0 & \text{o.w.} \end{array}\right. \end{array} $$
Taxon abundance matrices $X\in {\mathbb {R}}^{n \times p}$ were generated as Xij∼Double Poisson(Mij,s), using the rdoublepoisson function in the rmutil package in R [38].
The notation Double Poisson(m,s) indicates a double Poisson distribution with mean m and dispersion parameter s. The double Poisson distribution [29] has probability mass function
$$\begin{array}{*{20}l} p(y) = c(m, s) s^{y/m}\left(\frac{m}{y} \right)^{y \log s} \frac{y^{y-1}}{y!} \end{array} $$
where c(m,s) is a normalizing constant, m is the mean parameter, and s is the dispersion parameter. The simulation results shown in Fig. 3 correspond to s∈{200,150,100,2,.5}. The mean and variance of the double Poisson with mean m and dispersion s are approximately m and m/s, respectively, but the standard deviations on the plots were computed by Monte Carlo, as the approximation of the variance as m/s breaks down for the very large values of s used in the simulation.
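The mean matrix for this simulation is easy to reproduce; the sketch below builds M and then draws counts with ordinary Poisson noise as a stand-in for the double Poisson draws made with rmutil in R, so it fixes the dispersion at 1 rather than sweeping over the values of s used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 512

# Mean matrix M: the first 50 samples have the even-numbered taxa at mean 10,
# the remaining samples have the odd-numbered taxa (taxa are numbered 1..p).
taxon_is_even = np.arange(1, p + 1) % 2 == 0
M = np.zeros((n, p))
M[:50, taxon_is_even] = 10
M[50:, ~taxon_is_even] = 10

# Stand-in for X_ij ~ Double Poisson(M_ij, s): plain Poisson counts around M.
X = rng.poisson(M)
```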
For another example of glomming in the context of the Unifrac distances, see [39], where glomming was used to cut computation time.
Giovannoni SJ, Britschgi TB, Moyer CL, Field KG. Genetic diversity in Sargasso Sea bacterioplankton. Nature. 1990; 345(6270):60.
Lozupone C, Knight R. Unifrac: A new phylogenetic method for comparing microbial communities. Appl Environ Microbiol. 2005; 71(12):8228–35.
Lozupone C, Hamady M, Kelley ST, Knight R. Quantitative and qualitative β diversity measures lead to different insights into factors that structure microbial communities. Appl Environ Microbiol. 2007; 73(5):1576–85.
Chen J, Bittinger K, Charlson ES, Hoffmann C, Lewis J, Wu GD, Collman RG, Bushman FD, Li H. Associating microbiome composition with environmental covariates using generalized unifrac distances. Bioinformatics. 2012; 28(16):2106–13.
Yatsunenko T, Rey FE, Manary MJ, Trehan I, Dominguez-Bello MG, Contreras M, Magris M, Hidalgo G, Baldassano RN, Anokhin AP, et al. Human gut microbiome viewed across age and geography. Nature. 2012; 486(7402):222.
Dominguez-Bello MG, Costello EK, Contreras M, Magris M, Hidalgo G, Fierer N, Knight R. Delivery mode shapes the acquisition and structure of the initial microbiota across multiple body habitats in newborns. Proc Natl Acad Sci. 2010; 107(26):11971–75.
Rousk J, Bååth E, Brookes PC, Lauber CL, Lozupone C, Caporaso JG, Knight R, Fierer N. Soil bacterial and fungal communities across a pH gradient in an arable soil. ISME J. 2010; 4(10):1340.
Caporaso JG, Lauber CL, Walters WA, Berg-Lyons D, Lozupone CA, Turnbaugh PJ, Fierer N, Knight R. Global patterns of 16S rRNA diversity at a depth of millions of sequences per sample. Proc Natl Acad Sci. 2011; 108(Supplement 1):4516–22.
Schmidt BL, Kuczynski J, Bhattacharya A, Huey B, Corby PM, Queiroz EL, Nightingale K, Kerr AR, DeLacure MD, Veeramachaneni R, et al. Changes in abundance of oral microbiota associated with oral cancer. PloS One. 2014; 9(6):98741.
Hoffmann C, Dollive S, Grunberg S, Chen J, Li H, Wu GD, Lewis JD, Bushman FD. Archaea and fungi of the human gut microbiome: correlations with diet and bacterial residents. PloS One. 2013; 8(6):66019.
Stephens WZ, Burns AR, Stagaman K, Wong S, Rawls JF, Guillemin K, Bohannan BJ. The composition of the zebrafish intestinal microbial community varies across development. ISME J. 2016; 10(3):644.
Hu J, Nomura Y, Bashir A, Fernandez-Hernandez H, Itzkowitz S, Pei Z, Stone J, Loudon H, Peter I. Diversified microbiota of meconium is affected by maternal diabetes status. PLoS One. 2013; 8(11):78257.
Eckburg PB, Bik EM, Bernstein CN, Purdom E, Dethlefsen L, Sargent M, Gill SR, Nelson KE, Relman DA. Diversity of the human intestinal microbial flora. Science. 2005; 308(5728):1635–8.
Yan M, Pamp SJ, Fukuyama J, Hwang PH, Cho D-Y, Holmes S, Relman DA. Nasal microenvironments and interspecific interactions influence nasal microbiota complexity and S. aureus carriage. Cell Host Microbe. 2013; 14(6):631–40.
Fukuyama J, Rumker L, Sankaran K, Jeganathan P, Dethlefsen L, Relman DA, Holmes SP. Multidomain analyses of a longitudinal human microbiome intestinal cleanout perturbation experiment. PLoS Comput Biol. 2017; 13(8):e1005706. https://doi.org/10.1371/journal.pcbi.1005706.
Dobay A, Haas C, Fucile G, Downey N, Morrison HG, Kratzer A, Arora N. Microbiome-based body fluid identification of samples exposed to indoor conditions. Forensic Sci Int Genet. 2019; 40:105–13.
Evans SN, Matsen FA. The phylogenetic Kantorovich–Rubinstein metric for environmental sequence samples. J R Stat Soc Ser B (Stat Methodol). 2012; 74(3):569–92.
Pavoine S, Dufour A-B, Chessel D. From dissimilarities among species to dissimilarities among communities: A double principal coordinate analysis. J Theor Biol. 2004; 228(4):523–37.
Rao CR. Diversity and dissimilarity coefficients: a unified approach. Theor Popul Biol. 1982; 21(1):24–43.
Purdom E. Analysis of a data matrix and a graph: Metagenomic data and the phylogenetic tree. Annals Appl Stat. 2011; 5(4):2326–58.
Cavalli-Sforza LL, Piazza A. Analysis of evolution: evolutionary rates, independence and treeness. Theor Popul Biol. 1975; 8(2):127–65.
Fukuyama J. Adaptive gPCA: A method for structured dimensionality reduction with applications to microbiome data. Annals Appl Stat. 2019; 13(2):1043–67.
Fukuyama J, McMurdie PJ, Dethlefsen L, Relman DA, Holmes S. Comparisons of distance methods for combining covariates and abundances in microbiome studies. In: Pacific Symposium on Biocomputing. Singapore: World Scientific Publishing Co. Pte. Ltd.: 2012.
Dethlefsen L, Relman DA. Incomplete recovery and individualized responses of the human distal gut microbiota to repeated antibiotic perturbation. Proc Natl Acad Sci. 2011; 108(Supplement 1):4554–61.
Edgar RC. Search and clustering orders of magnitude faster than BLAST. Bioinformatics. 2010; 26(19):2460–1.
Quast C, Pruesse E, Yilmaz P, Gerken J, Schweer T, Yarza P, Peplies J, Glöckner FO. The SILVA ribosomal RNA gene database project: Improved data processing and web-based tools. Nucleic Acids Res. 2013; 41(D1):590–6.
Tukey JW. Exploratory Data Analysis. Reading: Addison-Wesley; 1977.
Rocke DM, Durbin B. Approximate variance-stabilizing transformations for gene-expression microarray data. Bioinformatics. 2003; 19(8):966–72.
Efron B. Double exponential families and their use in generalized linear regression. J Am Stat Assoc. 1986; 81(395):709–21.
Abdi H, O'Toole AJ, Valentin D, Edelman B. DISTATIS: The analysis of multiple distance matrices. In: Computer Vision and Pattern Recognition-Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference On. IEEE: 2005. p. 42.
Escoufier Y. Le traitement des variables vectorielles. Biometrics. 1973; 29(4):751–60.
Bray JR, Curtis JT. An ordination of the upland forest communities of southern Wisconsin. Ecol Monogr. 1957; 27(4):325–49.
Jaccard P. Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bull Soc Vaudoise Sci Nat. 1901; 37:547–79.
Diaconis P, Goel S, Holmes S, et al. Horseshoes in multidimensional scaling and local kernel methods. Annals Appl Stat. 2008; 2(3):777–807.
De Leeuw J. A horseshoe for multidimensional scaling. Los Angeles: Preprint Series 530, UCLA Department of Statistics; 2007.
Parks DH, Beiko RG. Measures of phylogenetic differentiation provide robust and complementary insights into microbial communities. ISME J. 2013; 7(1):173.
Diaconis PW, Holmes SP. Matchings and phylogenetic trees. Proc Natl Acad Sci. 1998; 95(25):14600–2.
Swihart B, Lindsey J. Rmutil: Utilities for Nonlinear Regression and Repeated Measurements Models. 2019. https://CRAN.R-project.org/package=rmutil. R package version 1.1.3.
McDonald D, Vázquez-Baeza Y, Koslicki D, McClelland J, Reeve N, Xu Z, Gonzalez A, Knight R. Striped unifrac: enabling microbiome analysis at unprecedented scale. Nat Methods. 2018; 15(11):847.
Fukuyama J. Deep or shallow. 2019. https://github.com/jfukuyama/DeepOrShallow.
Fukuyama J. Deep or shallow. 2019. https://doi.org/10.5281/zenodo.3241459.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2018. https://www.R-project.org/.
Wickham H. Ggplot2: Elegant Graphics for Data Analysis. New York: Springer; 2016. http://ggplot2.org.
JF would like to thank Susan Holmes, David Relman, and Les Dethlefsen for conversations that laid the groundwork for the ideas in this paper, as well as Amy Willis and an anonymous reviewer for their thoughtful comments on the manuscript.
Department of Statistics, Indiana University, 919 E 10th Street, Bloomington, 47408, Indiana, USA
Julia Fukuyama
JF designed the study, wrote the code, performed the analysis, and wrote the manuscript. The author read and approved the final manuscript.
JF thanks the Bio-X Stanford Interdisciplinary Graduate Fellowship for support while writing this manuscript.
Correspondence to Julia Fukuyama.
The author declares that she has no competing interests.
Additional file 1
Supplemental methods and figures. Detailed description of simulations 1 and 2, Supplemental Figure S1, and Supplemental Figure S2. (PDF 78 kb)
Review history. (PDF 95 kb)
https://doi.org/10.1186/s13059-019-1735-y
Phylogenetic tree
Microbiome Biology
|
CommonCrawl
|
Designing an integrated socio-technical behaviour change system for energy saving
Ksenia Koroleva1,
Mark Melenhorst1,5,
Jasminko Novak1,2,
Sergio Luis Herrera Gonzalez3,
Piero Fraternali3 &
Andrea E. Rizzoli4
Stimulating households to save energy with behaviour change support systems is a challenge and an opportunity to support efforts towards more sustainable energy consumption. The approaches developed so far often either do not consider the underlying behaviour change process in a systematic way, or do not systematically link design elements to findings from the behaviour change literature and the design of persuasive systems. This paper discusses the design and evaluation of a holistic socio-technical behaviour change system for energy saving that combines insights from behavioural theories and persuasive system design in a systematic way. The findings from these two streams of research are combined into an integrated socio-technical model for informing the design of a behaviour change system for energy saving, which is then implemented in a concrete system design. The developed system combines smart meter data with interactive visualisations of energy consumption and energy saving impact, gamified incentive mechanisms, energy saving recommendations and attention triggers. The system design distinguishes between a version with non-personalized energy saving tips and a version with personalized recommendations, which are deployed and evaluated separately. In this paper, we present the design and evaluation results of the non-personalized system in a real-world pilot. The obtained results indicate reduced energy consumption compared to a control group and a positive change in energy knowledge in the treatment group using the system, as well as positive user feedback on the suitability of the designed system to encourage energy saving.
Meeting the European targets for a reduction of CO2 emissions by 2030 (40% compared to 1990) and energy savings (27% compared to "business-as-usual") (2030 Energy Strategy 2014) requires extensive changes in consumption behaviour of European citizens. Previous studies on the effect of behaviour change interventions on energy consumption report energy savings of 4%-12% on average (Nachreiner et al. 2015), but also point out a number of limitations and issues related to the persistence of behaviour change over time. Overall, the current body of knowledge on determinants and processes of behaviour change for environmentally conscious behaviour (Steg and Vlek 2009) and research from the design of persuasive systems (Oinas-Kukkonen 2013), provide good groundwork for designing systems to support behaviour change. However, the combined consideration of these findings in the development of technological solutions for stimulating energy saving has been rather limited and few approaches have systematically based their design on a theoretically-grounded model. In particular, none of the existing approaches and studies have been able to validate in a real world pilot a theoretically grounded design of a holistic behaviour change system for energy saving or provide specific recommendations for the design of such systems for various types of users.
While many approaches explore the use of consumption feedback or the use of game-like motivational elements, recent research both in energy and related fields (e.g. water saving) suggests that such individual elements alone are not suitable for inducing a durable change in behaviour (see Nachreiner et al. 2015; Novak et al. 2018 for an overview). Recently, several European projects have started systematically investigating models where different types of elements are combined in an integrated behaviour change system (Tisov et al. 2018), although few results regarding their theoretical approach and empirically validated findings are currently published. In our view, energy consumption should be considered beyond an individual decision framework, as a complex socio-technical process that takes into account social norms, technologies and infrastructures. The demand for energy is therefore indirect, created by services such as comfort, which are in turn provided by devices and infrastructures (Shove 2003a, b), and is "systematically configured" over the long term (Van Vliet et al. 2012). When it comes to energy savings, we propose that effective and sustainable behaviour change cannot be achieved by a single intervention impacting a specific attitudinal or behavioural variable, but requires a holistic socio-technical approach that uses a combination of individual enablers, mechanisms and techniques, and aligns technological enablers with suitable models of behaviour change. While such integrated socio-technical systems for behaviour change in energy saving are available in theory, they have not been validated in real-world pilots.
In this paper, we propose a theoretically-grounded design of a holistic socio-technical system supporting behaviour change for energy saving, which combines smart meter data with interactive consumption visualizations, gamified incentives, energy saving recommendations and attention triggers. We present the theoretical background, derive the socio-technical integrated behaviour change model for energy saving and then describe the design and technical implementation of the model in a concrete system that provides different types of incentives (virtual, physical) and is adaptive to the various types of user motivations. Finally, we discuss the results of its evaluation in a real-world pilot including impact on energy consumption, analysis of user activity and evaluation with the end users.
Behavioural change systems for household energy saving
One stream of research on behaviour change with respect to energy savings considers behavioural theories that aim to identify antecedents of pro-environmental attitudes and behaviour and/or describe an associated behaviour change process (Steg and Vlek 2009). Frequently cited behavioural theories exploring the adoption of new behaviours by users and applying them to energy saving include the Ajzen (1985) theory of planned behaviour and Schwartz (1977) norm activation model. Studies also recognize a wide range of factors determining individual household energy consumption behaviour, most frequently subdivided into socio-economic variables, psychological factors and external contextual and situational factors (see Frederiks et al. (2015) for a review). Although some factors have been found to be better predictors of energy savings than others, these findings are not consistent across time, context, and sample type of participants and studies (Frederiks et al. 2015). Additionally, very few studies have found the interventions to be effective or achieve substantial behavioural changes, mainly due to the fact that they have not appropriately tested causal relationships (Frederiks et al. 2015).
Another research stream deals with the actual design of behaviour change support systems that can aid in inducing users to change behaviour, drawing on generic design frameworks (Fogg 2009; Oinas-Kukkonen 2013). These systems have emerged in different domains (for a review, see Hamari et al. (2014)), including various kinds of pro-environmental behaviour, such as energy saving (Shih and Jheng 2017). Most authors agree that effective designs of behaviour change systems in the energy domain should incorporate different types of feedback and analysis options (e.g. allowing users to access historical series of consumption data or provide them with social comparisons). Feedback, as a consequential intervention tool, helps individuals associate their actions with outcomes (Fischer 2008) and shapes behaviour by breaking consumption habits (Abrahamse et al. 2005). Such feedback can be data-oriented (e.g. bar or pie charts (Monigatti et al. 2010)), connected to the real consumption context (e.g. floor plans (Monigatti et al. 2010)), metaphorical (Monigatti et al. 2010), playful and ambient (Be aware project 2008; Wemyss et al. 2016), or connected to nature or animal habitats (eco-visualization; Gustafsson et al. 2009a, b). Energy consumption feedback holds the promise of reducing energy consumption by 4%-12% on average, with peak savings going beyond 20% (Nachreiner et al. 2015; Tiefenbeck et al. 2019). However, feedback is regarded as a double-edged sword: some authors find that behaviour change is likely to be reversed once feedback is no longer provided (Allcott and Rogers 2014). Others found that feedback should be coupled with motivational techniques and energy saving advice (Nachreiner et al. 2015).
To be effective, incentives must consider different consumer types and needs, be presented at the right moment and provide actionable suggestions tailored to a given user and context (Novak et al. 2018). A new direction is the use of personalized recommendations employing machine learning techniques to identify different consumer classes, create user models and map appropriate actionable suggestions in a personalized manner (Novak et al. 2018). While the combination of consumption visualization and feedback with recommendations is promising, little evidence is available regarding the most effective combinations of feedback, or how these behavioural changes can be sustained over time.
Gamification approaches and social strategies are often employed in an attempt to induce energy-efficient behaviour with various degrees of success (Johnson et al. 2017; Abrahamse et al. 2005). By adding game-like elements to an otherwise pragmatic application, users may become more engaged with the application and thus more likely to change their behaviour (Geelen et al. 2013). Social rewards can be induced by allowing users to compare game achievements or energy saving achievements against other users (e.g. leaderboards reflecting users ranking (Johnson et al. 2017)). As an alternative to competition, cooperative approaches supported through e.g. virtual social rewards (Johnson et al. 2017) have also been explored. However, in most of these approaches, actual impact on energy use was not measured or unreported.
The majority of the described research has focused on influencing behavioural determinants (e.g. attitudes, behavioural intentions) to stimulate energy saving without considering the stages of behaviour change and how intervention strategies must be tailored to accommodate individuals as they fluctuate between these stages (van der Werff and Steg 2015). But there is growing awareness that interventions and incentives need to be provided based on an individual's stage in the behaviour change process (He et al. 2010). While early attempts have been made to adopt this approach to feedback systems for natural resource consumption (He et al. 2010; Geelen et al. 2013; Novak et al. 2018), its systematic implementation into a real-world system and related empirical evaluation has yet to be done. The approach and model that we propose in the next section is an attempt to design a holistic behaviour change model for energy saving that integrates the insights from the trans-theoretical model of behaviour change (Prochaska and Velicer 1997), its adaptation to pro-environmental behaviour by (Bamberg 2013) and water saving (Novak et al. 2018), and the mapping of various incentive mechanisms (such as feedback and recommendations) to appropriate stages of the behaviour change process.
Socio-technical behaviour change model for energy saving
The underlying assumption of behaviour change systems (Oinas-Kukkonen 2013) is that behaviour change can be initiated by stimulating psychological factors that influence specific behaviours (Hamari et al. 2014). Accordingly, a change in energy consumption behaviour can be stimulated through a combination of specific incentive and persuasion strategies that address underlying psychological determinants. However, impacting the attitudes and the intention to change behaviour may not lead to actual change in behaviour (intention-behaviour gap). Individuals also have to develop skills and strategies needed to implement new behaviors (Bamberg 2013). Thus, behavioural change can be seen as a transition through a time-ordered sequence of several stages, such as in the trans-theoretical model of behaviour change (Prochaska and Velicer 1997) or the model of action phases (Heckhausen and Gollwitzer 1987). The trans-theoretical model of behaviour change, originally from the health domain, explains behaviour change through five consecutive phases spanning from raising awareness ('pre-contemplation') to creating new habits. Each of the stages follows distinct goals, and as a result, requires different motivational drivers. The trans-theoretical model has been applied to water saving (Novak et al. 2018) and a similar model of action phases, the MAP model, has been applied to studying environmentally harmful behaviours (Bamberg 2013).
We model behaviour change as a multistage process using a slightly adapted trans-theoretical model of behaviour change. Following Novak et al. (2018), the first two phases, the pre-action and action phases are merged into one renamed phase, "pre-contemplation", to reflect the limited cognitive effort usually invested in energy consumption behaviour. The process of behavior change is not always linear, as users can relapse to earlier stages. The resulting socio-technical behaviour change model is conceived to support all phases of the change process with an integrated incentive model for energy saving that combines different incentive mechanisms matching each phase. Although the original trans-theoretical model of behaviour change gives suggestions as to the motivational goals that need to be achieved in each stage, it does not specify in detail the social, cognitive, and affective factors and processes that impact the formation of attitudes and intentions in each of the stages (Bamberg 2013). But the knowledge about these factors is critical for developing the incentive mechanisms to impact the formation of attitudes and intentions in each stage. Therefore, building on Novak et al. (2018) we first specify what motivates an individual in each stage to invest cognitive effort into thinking about the behaviour and potentially changing it by using theories from environmental and cognitive psychology. We then use the input from the design of persuasive systems research to identify which incentives need to be provided to the users to increase their motivation to save energy in each stage. The resulting behaviour change model for energy saving is given in Table 1.
Table 1 Socio-technical behaviour change process model for energy saving
Integrated incentive model for household energy saving
Based on the presented socio-technical behaviour change model, we develop more detailed incentive elements for designing a concrete behaviour change system for stimulating household energy saving that addresses all stages of the behaviour change process. The designed elements include interactive energy consumption visualizations, energy saving tips, goal setting and gamified incentives (virtual, social, tangible) and are implemented in a concrete system.
Interactive feedback visualizations with goal-setting for energy saving
Multiple studies show that feedback has the potential of influencing underlying beliefs regarding energy consumption and attitudes towards energy saving (Steg et al. 2014; Tiefenbeck et al. 2016; Novak et al. 2018) at different stages of the behaviour change process. But the design of effective feedback visualizations for energy saving is not trivial. Energy consumption behaviour is abstract, non-sensory and of low personal relevance to most individuals (Karlin et al. 2015). The abstract units of W and kWh are difficult for household consumers to understand (Karjalainen 2011). Consumers have different environmental goals and values (Lindenberg and Steg 2007), as well as different needs regarding energy consumption feedback (Gölz and Hahnel 2016), at different stages of the behaviour change process (Bamberg 2013). To address these challenges, an adaptive metaphor-based visualization approach can be used, helping users interpret complex numerical and abstract information (Monigatti et al. 2010; Froehlich et al. 2009; Hackenfort et al. 2018).
To set and monitor the achievement of an energy saving goal for a current month, a battery metaphor (Fig. 1a) is used. It communicates the notion of energy as a limited resource that should not be wasted. Users can set their monthly saving goal relative to their baseline consumption in the same month of the previous year (20% by default). The user can then monitor the achievement of the goal in real time: as energy is consumed, the battery depletes. Consumption alerts are provided as color-coded normative messages, which display the user's progress in relation to the set goal (green motivational message when the user can still achieve their goal, orange warning message when the user is close to not meeting their goal, red message stating that the user cannot achieve their goal anymore). As consumption feedback should be actionable (Micheel et al. 2015), tapping on the 'Learn how' button leads directly to the tips page. This visualization is mainly intended for users in the pre-contemplation phase, to raise awareness of negative behaviour, and users in the contemplation phase, to stimulate action through goal-setting and monitoring.
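The colour-coded normative messages described above can be illustrated with a minimal sketch: the monthly budget is derived from the baseline consumption reduced by the saving goal, and the alert colour follows from how the consumption to date compares with the pro-rata budget. Function names and thresholds are illustrative assumptions, not the pilot's exact rules.

```python
import calendar
from datetime import date
from typing import Optional

def goal_alert(consumed_kwh: float, baseline_kwh: float,
               saving_goal: float = 0.20, today: Optional[date] = None) -> str:
    """Colour-coded alert comparing consumption so far with the monthly budget.

    The budget is last year's consumption in the same month reduced by the
    saving goal (20% by default). Thresholds are illustrative only.
    """
    today = today or date.today()
    budget = baseline_kwh * (1.0 - saving_goal)
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    expected_so_far = budget * today.day / days_in_month   # pro-rata budget to date

    if consumed_kwh >= budget:
        return "red: the monthly saving goal can no longer be met"
    if consumed_kwh > expected_so_far:
        return "orange: consumption is ahead of plan, the goal is at risk"
    return "green: on track to meet the saving goal"
```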
Metaphorical feedback visualizations. a battery metaphor to set and monitor an energy saving goal. b monetary impact visualization. c environmental impact visualization
To show the effect of energy savings in a way related to everyday life, the impact visualizations display achieved energy savings using three metaphors drawn from goal framing theory (Lindenberg and Steg 2007): monetary impact, environmental impact (CO2 emissions) and hedonic value. The monetary impact metaphor (Fig. 1b) is depicted as a piggy bank, the environmental impact metaphor (Fig. 1c) is represented by trees corresponding to saved CO2 emissions, and the hedonic value metaphor displays jars filled with balls and badges which are earned after a user saves energy (user's gamified achievements). As users have different goals for using energy feedback systems (Gölz and Hahnel 2016), they may be more motivated by a specific impact view. Therefore, the impact view is personalized based on people's main motivation for saving energy (solicited from a questionnaire). However, according to goal framing theory, individuals may have several competing motivations; therefore users are able to switch between these three different visualizations. These visual metaphors are intended to target users in the pre-contemplation stage to illustrate the impact of energy saving, as well as users in the action stage to create a sense of achievement that motivates users to continue saving energy.
Raising awareness about the trade-off between energy saving and comfort and suggesting recommendations for energy saving actions that do not jeopardize the user's comfort level, is a key challenge to improve user attitudes and acceptance of behaviour change systems. For this, we use a simple visualization displaying the user's achieved savings alongside their measured comfort level (determined using temperature, humidity and luminance values) that was maintained during that month (see Fig. 2a). If savings were achieved and the comfort level was maintained, values are presented in a green box (alternatively, a red box provides a warning). An initial assessment of the comfort level is inferred automatically by the system (using sensors installed in the users' homes) and is refined using explicit comfort feedback from the user. While some efforts have previously been made at visualizing comfort level for e.g. building managers in an office setting (Schiavon and Altomonte 2014), the consideration and visualization of comfort for households is under-investigated and is a novel element of our approach. By showing that energy savings can be achieved with only a small reduction of comfort, the display of the comfort level alongside energy savings can induce the users in the contemplation phase towards energy savings (cf. Table 1). Finally, the detailed consumption view provides data-affine users (Micheel et al. 2015) with a bar chart consumption visualization, allowing users to check their energy consumption at different time scales (e.g. weekly, monthly) and compare it against their historical average.
Feedback visualizations and gamification. a comfort vs. energy savings visualization. b energy saving tips. c leaderboard element
Energy saving tips and recommendations
Educating users on how to perform the desired behaviour is a well-known behavioural change strategy (Oinas-Kukkonen 2013; Michie et al. 2011). Providing concrete actionable suggestions strengthens the sense of responsibility (Schwartz 1977), promotes self-efficacy and the perception of behavioural control (Ajzen 1985), and can also directly strengthen the intention to implement the desired behaviour (Bamberg 2013). Offering recommendations for energy saving actions allows users to learn how to save energy (Gölz and Hahnel 2016). Our system design foresees two types of recommendations (see Fig. 2b): a list of generic energy savings tips and personalized recommendations (derived from energy behaviour data, user-generated data, and sensors). In each case, users are asked to commit to execute the recommended action ('Ok, will do'), state that they are already doing it, or indicate that it is irrelevant to them. Please note that the system design presented and evaluated in this paper considers only the non-personalized version.
Gamified incentives and rewards
A review in Johnson et al. (2017) found that gamified applications can positively influence energy behaviour, stimulate learning, and improve the user experience. Game-based strategies often motivate users through extrinsic rewards, but the removal of these rewards can result in the dissipation of the desired energy saving behaviour (Richter et al. 2015). However, motivation to allocate cognitive resources is not only driven by rational thoughts, but also by hedonic values (Steg et al. 2014). Appealing to the hedonic values with playful interaction can motivate user engagement beyond external rewards. In our approach, users are motivated to perform recommended actions via two types of gamified rewards: symbolic (points, badges) and tangible (external prizes). These are reinforced by game mechanics such as goal setting (meeting personal savings goals is rewarded with bonus points), competition (vouchers are awarded monthly to the best performing households) and social comparison (e.g. collecting points for performing energy saving actions improves a user's leaderboard rank). An example of the leaderboard is displayed in Fig. 2c. These features are intended to support user engagement in the action phase and reduce the risk of relapse by stimulating the need for competition and/or the need to demonstrate one's achievements to others (Reiss 2014).
Notifications and attention triggering
Given the generally low involvement of users with energy consumption (Karlin et al. 2015), and the competition with other applications, there is a risk that users lose interest over time (Hargreaves et al. 2013). Therefore, push notifications are sent to users' smartphones comprising tips for energy saving, warnings when they are close to missing their energy savings goal, announcements of gamified achievements (e.g. earning a new badge), and reactivation messages when activity becomes low. Such notifications are especially helpful in the monitoring stage to re-activate users' attention (e.g. after relapse to old behaviour). Repeatedly triggering users' attention to an energy saving application is a difficult challenge, since a balance must be struck between keeping users engaged without irritating them. Consequently, users can limit the number of notifications per week by choosing a low (maximum of 2 per week) or high frequency setting (maximum of 5 per week), or completely disable them. The timing and type of notifications to be sent to the user in this non-personalized system is determined in a fixed manner based on input obtained from user requirements workshops (without considering different user types, actual user presence at home, or their current activity).
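The weekly caps of the user-selectable notification settings can be sketched as a simple throttling rule. The class and setting names below are hypothetical illustrations of the described behaviour, not the actual Notification Engine code.

```python
from collections import deque
from datetime import datetime, timedelta

# Weekly caps corresponding to the user-selectable settings described above.
WEEKLY_CAPS = {"off": 0, "low": 2, "high": 5}

class NotificationThrottle:
    """Minimal sketch: decide whether a push notification may still be sent this week."""

    def __init__(self, setting: str = "low"):
        self.cap = WEEKLY_CAPS[setting]
        self.sent = deque()  # timestamps of recent deliveries

    def allow(self, now: datetime) -> bool:
        week_ago = now - timedelta(days=7)
        while self.sent and self.sent[0] < week_ago:   # drop deliveries older than 7 days
            self.sent.popleft()
        if len(self.sent) < self.cap:
            self.sent.append(now)
            return True
        return False
```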
The technical implementation of the system is based on a layered architecture, in which each layer is designed to address a specific task. The system architecture comprises three layers: the data layer, the business process layer, and the consumer layer. The data layer is responsible for the acquisition, pre-processing and storage of smart meter and indoor climate sensor readings; it also exposes services for the upper-level layer to access the consolidated data. The business process layer is responsible for the execution of the business logic; it exposes business services for the consumer layer. In the non-personalized system version, it comprises:
The Gamification Engine: a configurable gamification rule engine, designed to provide services to the Consumer layer, specifically to the end-user mobile application through a service API. The exposed services allow the user application to record user activity on the platform and assign points for completed tasks, assess energy saving goals, track user achievements, manage rewards, and notify users about relevant platform events.
Inference Engine: a machine learning component that processes sensor and consumption disaggregated data to infer indicators like user comfort level and household occupancy.
Notification Engine: a component that provides notification services to other Platform components. It schedules notifications pushed by other components and delivers them based on user-defined preferences; it exploits the Google Firebase Cloud Messaging service, for the delivery of mobile alerts.
Service Integration and Orchestration Component: this component synchronizes the execution of the processes that the different components require, and provides services for the communication between components.
Finally, the consumer layer comprises the client-side application for publishing the services exposed by the Business Process Layer, and the mobile App for displaying the consumption and sensor data and for integrating the gamification services.
The end-user application is divided into two components: a web application with responsive web design, which handles the communication with the business layer and the rendering of the visualizations, as well as client applications for mobile devices (Android and iOS), which wrap the web application and enable access to the native features of the devices. The responsive web application was developed following a Model-Driven Engineering approach: models representing the organization of the interface, its content and the user interaction events were specified using the OMG Interaction Flow Modeling Language (IFML) and WebRatio (WebRatio 2019), a model-driven development environment based on IFML, which automatically transforms these models into executable code using industry standard frameworks, such as Struts, JSP, CSS and jQuery. The client wrapper applications are developed using the native tools for each mobile platform, Java for Android and Swift for iOS; they consist mainly of a WebView, i.e., a simple web browser whose behaviour can be customized, that enables access to the end-user web application, enriched with native functions such as notifications, safe storage of user data and passwords, etc.
Evaluation in a real-world pilot
The system was designed with a user-centered design process, starting from eliciting the requirements of the users in a series of workshops, developing mock-ups, testing them in online crowd tests, as well as prototyping before final system development. To assess the proposed model and system design, we evaluated behaviour change for a set of residential households in a real-world pilot in Switzerland (Footnote 1). To this end, we applied an experimental design with a treatment and a control group. The treatment group used the developed system, whereas the control group was not exposed to this intervention. Participants signed up on a self-selected basis to the control and the treatment groups. We then assessed changes in energy consumption between the treatment and the control group and the interaction of the treatment group with the system. Additionally, a survey-based approach has been applied to capture behaviour and awareness change due to the intervention. Specifically, we measured changes in attitudinal and behavioural variables before and after exposure to the system. This includes changes in behaviour, knowledge and attitudes, as well as the user evaluation of the usefulness and motivational effect of the system.
Energy savings as a result of system usage
Sample characteristics
The experiment in this pilot has been organized by selecting a treatment group and a control group. The treatment group has been composed of 66 self-selected households from the district of Contone in the municipality of Gambarogno located in the Ticino Canton (southern Switzerland). A focused control group has been composed of 34 households in the districts of Gerra, Magadino, Piazzogna, Quartino and Vira, all of them in the same municipality of Gambarogno. The members of this focused control group agreed to take part in a series of questionnaires during the whole project. For the purpose of the consumption evaluation, we also have access to consumption data of a much larger control group of users (558 households) located in the same district, who are neither taking part in the project nor answering the questionnaires. This larger group is used only for a statistical assessment of the energy saving results.
The structure of the treatment group comprises 50% families with kids, 31% couples, and 9% reporting themselves as single. The control group comprises 28% families, 47% couples, and 25% reporting as single. The treatment group lives in apartments (21%), terraced houses (18%) and single homes (62%), while the control group distribution is as follows: 6% live in apartments, 35% live in terraced houses, and 59% live in single homes. The average monthly consumption for the treatment group over the baseline period spanning May 2017 until April 2018 was 574.5 kWh per month. The average monthly consumption for the focused control group in the same period was 625.6 kWh per month, and 450 kWh per month for the larger control group.
Results on energy saving
The results in this section cover the treatment period for the non-personalized system spanning June 2018 until February 2019. Therefore, we compare the consumption of the treatment period (June 2018 – February 2019) against the same sub-period during the baseline period (June 2017 – February 2018). The following formula is used to compute the consumption percentage change:
$$ savings = \frac{1}{N}\sum_{i=1}^{N} \frac{x_{i}^{base}-x_{i}}{x_{i}^{base}} $$
where N is the total number of users, x_i is the consumption of the i-th user during the treatment period so far, and x_i^base is the consumption of the same user during the baseline period.
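A minimal sketch of this computation in Python (array names are hypothetical; the values in the usage comment are illustrative, not pilot data):

```python
import numpy as np

def average_savings(treatment_kwh: np.ndarray, baseline_kwh: np.ndarray) -> float:
    """Average per-user relative saving, following the formula above.

    treatment_kwh[i] and baseline_kwh[i] are the i-th household's consumption in the
    treatment period and in the same months of the baseline year, respectively.
    Positive values mean savings.
    """
    per_user = (baseline_kwh - treatment_kwh) / baseline_kwh
    return float(per_user.mean())

# Illustrative call with made-up values:
# average_savings(np.array([520.0, 610.0]), np.array([575.0, 626.0]))
```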
We find that the average consumption of the treatment group decreased by 5.81% (standard deviation 18.81%). The same analysis for the focused control group shows that the average consumption increased by 1.33% (standard deviation 15.22%), after removing an outlier that increased its consumption by 168%. The percentage variation in consumption of the treatment group users can be seen in Fig. 3. Apart from two users who respectively greatly increased and greatly decreased their consumption (users 86 and 87), the other users in general managed to reduce their consumption (positive increments mean savings).
Percentage variation in consumption of treatment group users
An F-test shows that the variance of the data in the focused control group and in the treatment group can be assumed to be similar (P(F≤f)=0.09) so we performed a two sample t-Test which returned non-significance in the difference of the two means, as reported in Table 2. We thus accept the hypothesis that the average savings in the focused control group and in the treatment group are similar.
Table 2 Two sample t-Test between treatment and focused control group
Given the limited size of the focused control group, we also compared the average percentage savings of the treatment group against the average percentage savings of the larger control group from the Contone population. In this case, the variances of the two samples do not pass the F-test, therefore we employ a Welch t-Test to compare the average savings in the two samples. The average percentage savings in the larger control group are equal to 0.2% with a variance equal to 6.4%, which confirms the stability of the consumption of this group of households as expected, given that no major exogenous factors have affected the energy consumption in Contone over the study period. The results presented in Table 3 show that we can reject the hypothesis that average savings between the larger control group in Contone and the treatment group are similar.
Table 3 Welch t-Test between treatment and larger control group
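The test procedure described above (a variance-ratio F-test deciding between Student's and Welch's t-test) can be reproduced with standard tooling. The following Python sketch, using scipy with hypothetical array names, is an illustration of that procedure and not the original analysis script.

```python
import numpy as np
from scipy import stats

def compare_savings(treatment: np.ndarray, control: np.ndarray, alpha: float = 0.05):
    """Variance-ratio F-test, then Student's or Welch's t-test on per-user savings."""
    f_stat = np.var(treatment, ddof=1) / np.var(control, ddof=1)
    dfn, dfd = len(treatment) - 1, len(control) - 1
    # Two-sided p-value for the variance ratio
    p_var = 2 * min(stats.f.sf(f_stat, dfn, dfd), stats.f.cdf(f_stat, dfn, dfd))

    equal_var = p_var >= alpha                       # comparable variances -> Student's t
    t_stat, p_t = stats.ttest_ind(treatment, control, equal_var=equal_var)
    return {"F": f_stat, "p_var": p_var, "equal_var": equal_var, "t": t_stat, "p": p_t}
```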
We have then analyzed possible relationships between the usage of the system (user activity in the app) and the obtained energy savings. For this, we have gathered the average number of actions performed in the app per week. Actions can be diverse types of interactions within the app, such as checking current energy consumption, reviewing a tip, updating the monthly goal and so on. Figure 4 shows the average number of weekly actions on the y-axis with consumption variation on the x-axis. We have then performed a K-means clustering on the data, setting the number of clusters to 4. The result is depicted in Fig. 4, where a group of users (coloured green) who display a rather intense weekly activity on the app can be noticed. These users do not necessarily show higher savings compared to other clusters, but their savings are consistently positive, which is not the case for the other clusters (e.g. the larger purple cluster has a positive energy saving on average, but contains a subset of members who actually increased their energy consumption).
K-means clustering of users by consumption variation & app activity
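A minimal sketch of this clustering step, assuming per-user consumption change and average weekly app actions are available as numeric arrays (names are hypothetical); the pilot's exact preprocessing, e.g. feature scaling, is not documented here, so none is assumed.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_users(consumption_change: np.ndarray, weekly_actions: np.ndarray, k: int = 4):
    """Group users by consumption variation and average weekly app activity."""
    X = np.column_stack([consumption_change, weekly_actions])
    labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)
    return labels  # one cluster label per user
```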
Evaluation of system usage
In the treatment group, 66 users downloaded and registered to the mobile app during the first two months of the pilot. Among these users, 52 logged in more than once and were therefore considered to be active users. This subgroup's application usage data was the subject of further analysis. Figure 5 shows the frequency of new sign-ups per month, the number of active users each month (those who logged in more than once over the course of the pilot) and the number of users who logged in each month. From Fig. 5 we see that a majority of active users (in a range of 54%-100% depending on the month) are logging in to the application at least once every month. This is a healthy level of user engagement.
Usage statistics: user growth and engagement by month
To estimate how frequently the users interacted with the main visualizations and pages of the application, we calculated the average frequency of interaction per user per month of pilot membership (Footnote 2). The results are shown in Fig. 6. They show that the users interacted the most with the Savings and Goal page, which they accessed on average over 14 times a month. The second most frequently used page was the Comfort page, with an average of 10.7 accesses per month. Similarly, the Tips page (10 accesses per month; Footnote 3) and the Achievements page (9.6 monthly accesses) were also frequently used. The Consumption page with the detailed time-based overview of the consumption and the Leaderboard were used less frequently, though still more than once a week on average. In contrast, the Impact and Comfort Feedback pages were accessed less, about once a week on average (3.9 monthly accesses).
Average monthly app interactions per user and app page
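The normalization described in Footnote 2 can be sketched as follows; the pilot end date is taken from that footnote, while the average month length is an assumption used only for illustration.

```python
from datetime import date

def monthly_interaction_rate(total_interactions: int, signup: date,
                             pilot_end: date = date(2019, 2, 28),
                             avg_days_per_month: float = 30.4) -> float:
    """Average interactions per month of pilot membership (cf. Footnote 2).

    avg_days_per_month is an assumed approximation of the average number of
    days in the months of the pilot.
    """
    days_in_pilot = (pilot_end - signup).days
    return total_interactions / days_in_pilot * avg_days_per_month
```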
To shed more light on the interaction with the three types of impact visualizations, we analyze how many users accessed the default impact visualizations in the application (determined from their stated motivation to save energy) and how many of them changed the default impact visualization to another metaphor. Among the users who indicated their preference to save energy, 88% were motivated by environmental considerations and 6% each by monetary or hedonic goals. Results show that most users accessed the impact visualization reflecting their stated preference at least once; for example, 95% of users who cited the environment as their preferred motivation for saving energy accessed the environmental visualization. Among users who had the environmental visualization set as the default, 50% changed to the monetary and 55% changed to the hedonic visualization at least once.
Evaluation of the change in antecedents of behaviour change and user acceptance
Measurement instruments and sample
In order to assess differences in the antecedents of behaviour change, we measured three constructs with validated measurement instruments, adapted to the context of energy saving. This was done using 3 items measuring intention to save energy (7pt Likert) in line with Ajzen (2002), 4 items measuring perceived behavioural control (7pt Likert) in line with Thøgersen & Grønhøj (2010), and 5 items measuring environmental knowledge (5pt Likert), with an adapted scale from Kaiser and Frick (2002) (Footnote 4). These items were measured both for the treatment and the control group, with the main hypothesis that due to the intervention with the developed system, these behavioural determinants would improve in the treatment group more than in the control group.
To evaluate user acceptance of the system, we used the items from the UTAUT model (Venkatesh et al. 2012). Specifically, we used 2 items measuring performance expectancy, 4 items measuring effort expectancy, 3 items measuring hedonic motivation, and 3 items measuring behavioural intention, all on a 5pt Likert scale (see Table 5). We also measured the motivation of users by the system in general and by specific elements (such as the battery, and the environmental and monetary impact views) in particular (these items were self-developed).
For the analysis of the changes in behavioural variables, we can only include users who filled out both the baseline questionnaire (given at the beginning of the pilot), and the assessment questionnaire after the pilot had been active for 4 months. Therefore, the results presented below come from 40 users in the treatment group, of whom 66% are male and 34% are female; 25% are between 31-40 years of age; 28% between 41-50, 25% between 51-60, 15% between 61-70, and 5% are over 70. In the control group, there are 26 users, of whom 72% are male and 28% are female; 18% are between 31-40 years old, 23% between 41-50, 23% between 51-60, 14% between 61-70 and 23% are over 70.
Results of impact on antecedents of behaviour change
We measured three antecedents of behaviour change: energy knowledge, perceived behavioural control and behavioural intention. For behavioural control and behavioural intention, we computed averages for the items mentioned in the previous section, whereas for energy knowledge we computed the sum, as the construct is formative. These constructs were measured at the beginning of the trial and after four months (Footnote 5) in order to assess any change resulting from the intervention. In Table 4 we show the means and standard deviations of the change in the behavioural determinants, computed by subtracting the score after four months of trial from the score measured prior to the beginning of the trial. In this table, the Shapiro-Wilk test, used for small sample sizes (Shapiro and Wilk 1965), reveals that all of the scores are normally distributed, except for the scores of energy knowledge in the intervention group.
Table 4 Means, deviations and statistical test results for the change in attitudes, intention and knowledge due to the intervention
Table 5 Descriptive statistics of the main measurement constructs evaluating user acceptance of the enCOMPASS app (all measured on a scale from 1 to 5)
Scores measuring energy knowledge show a change from 14.2 to 16.8 in the treatment group, and from 18.30 to 16.81 in the control group. The scores provided by participants in the treatment group have increased, whereas the scores of the control group have decreased. As the scores of the treatment group are non-normally distributed, a Mann-Whitney test shows that people in the treatment group (Mdn=1) managed to increase their knowledge compared to the control group (Mdn=-0.5); U=735, p<0.01, r=0.35, which is considered a medium size effect (Cohen 1992).
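The kind of analysis reported above can be sketched as follows; the effect size r is derived from the normal approximation of U, and the array names are hypothetical. This mirrors the procedure rather than reproducing the original analysis.

```python
import numpy as np
from scipy import stats

def mann_whitney_with_effect_size(treatment: np.ndarray, control: np.ndarray):
    """Two-sided Mann-Whitney U test plus the effect size r = |Z| / sqrt(N)."""
    u, p = stats.mannwhitneyu(treatment, control, alternative="two-sided")
    n1, n2 = len(treatment), len(control)
    mu_u = n1 * n2 / 2
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # normal approximation, ignores ties
    z = (u - mu_u) / sigma_u
    r = abs(z) / np.sqrt(n1 + n2)
    return {"U": u, "p": p, "r": r}
```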
As evident from Table 4, the changes in the other measured behavioural determinants were not substantial; the independent samples t-test revealed no significant differences in the mean changes in perceived behavioural control between the treatment and the control group (t(64)=1.18, n.s.) and no differences in behavioural intention (t(64)=-1.33, n.s.).
Results regarding user acceptance and motivation
The results in Table 5 reveal that the application shows above average user acceptance for all measured constructs of UTAUT model: the application was considered moderately useful in daily life (average performance expectancy score of 3.6/5) and easy to use (average effort expectancy score of 3.75/5). Users also reported that the application is fairly fun to use (average score of 3.4/5 for hedonic quality) and that they intend to use it in the future (average score of 3.7/5). The average scores of user responses for the evaluation of the developed visualizations are presented in Fig. 7. They show that users evaluated positively all aspects of the developed visualizations.
User evaluations of the visualizations
We have presented a socio-technical model and system for behaviour change for energy saving and evaluated it in a real-world pilot. The results suggest that the designed behaviour change system is suitable to stimulate energy savings among consumers. The observed positive effects on energy consumption of the treatment group compared to the larger control group and the change in users' energy-related knowledge suggest that such systems can be successfully applied to motivate users to save energy. Our results show that users who used the application frequently achieved positive energy savings, compared to users who used the application less regularly. Moreover, the positive user evaluation of the system, the high motivation of users by the developed visualizations to save energy, and the actual active usage of the application suggest that the integrated incentive model motivated users to adopt the proposed system to start changing their behaviour. These results provide both theoretical and practical contributions.
On the theoretical side, we find that the underlying socio-technical model can be applied to study behaviour change in the energy saving domain. Research suggests energy knowledge is one of the first antecedents of behaviour change towards energy saving, and our results show energy knowledge improving in the treatment group after users participated for four months in the pilot. Subsequent behavioural determinants, such as perceived behavioural control or behavioural intention, have yet to show significant change. This suggests that four months is not enough time for these behavioural determinants to change, and a post-hoc analysis must be performed to assess whether they show any change in the long term.
Additionally, our results suggest that the developed interactive visualizations are well suited to motivate users to save energy. Specifically, the battery visualization allowing the user to set and monitor the achievement of the energy saving goal was reported by the users as easiest to understand and most useful. The environmental impact visualization had the most visual appeal and was perceived as the most motivating metaphorical visualization for saving energy. While most users were interested in the visualizations matching their primary motivation, some were also interested in the visualizations matching the other motivational drivers. This confirms that users might have different motivations for energy saving, as suggested by the goal framing theory (Lindenberg and Steg 2007).
Additionally, during the four-month trial, users increasingly used the application, motivated by the incentive mechanisms proposed in our model. Specifically, users most frequently accessed the savings and goal, comfort, tips, and achievements pages (ordered by frequency). The interest in the savings and goal page is expected, as this page allows users to monitor their consumption with respect to the energy saving goal - the main aim of the application. The lower frequency of access for the impact visualization pages is surprising, although one needs to consider that accessing these pages also requires more clicks, since they follow the higher-level visualization pages.
On the practical side, the developed incentive model can guide the designers of behaviour change systems in energy and other sustainability domains to design their systems in such a way that they are easy to use, helpful and engaging for the users. For example, we show that the energy consumption and savings visualizations help users save energy by, for example, allowing them to set and monitor their energy saving goals. Additionally, providing users with energy saving tips and competition-based achievements can help keep users engaged in the behaviour change process. An overall recommendation for designers is to use different incentive mechanisms (those for different behaviour change stages or various motivation types) that can potentially motivate various types of users.
Here we report findings from evaluation of the non-personalized system in the Swiss pilot. The enCOMPASS project includes another version of the system with personalized recommendations and notifications that has been released following the evaluation of the non-personalized version, and is currently being tested. A comparison between the two versions (non-personalized vs. personalized) will be subject of a separate analysis and publication. In addition, two more pilots to evaluate external validity through application in different climatic and socio-cultural contexts are being performed in Greece and Germany. This will also be subject of a separate analysis.
Footnote 2: The frequency was computed by dividing, for each user, the total frequency by the number of days between sign-up and 28.2.19, multiplied by the average number of days in the months of the pilot.
Footnote 3: Calculated by adding the number of times a user read the tips and the number of tips read.
Footnote 4: The limit in the number of constructs measured is due to the need to cover different evaluation aspects and a limit in the number of questions that users would be willing to answer.
Footnote 5: The evaluation of the presented non-personalized system was performed as planned after four months of usage. Due to the delay of the subsequent trial of the personalized system, we were able to collect two more months of data on the usage of the non-personalized system and associated consumption.
2030 Energy Strategy (2014). https://ec.europa.eu/energy/en/topics/energy-strategy/2030-energy-strategy. Accessed 25 Feb 2017.
Abrahamse, W, Steg L, Vlek C, Rothengatter T (2005) A review of intervention studies aimed at household energy conservation. J Environ Psychol 25(3):273–291.
Ajzen, I (1985) From intentions to actions: A theory of planned behavior In: Action Control, 11–39.. Springer, Berlin, Heidelberg, Heidelberg.
Ajzen, I (2001) Nature and operation of attitudes. Annu Rev Psychol 52(1):27–58.
Ajzen, I (2002) Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior 1. J Appl Soc Psychol 32(4):665–683.
Allcott, H, Rogers T (2014) The short-run and long-run effects of behavioral interventions: Experimental evidence from energy conservation. Am Econ Rev 104(10):3003–37.
Bamberg, S (2013) Changing environmentally harmful behaviors: A stage model of self-regulated behavioral change. J Environ Psychol 34:151–159.
Bandura, A (1977) Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev 84(2):191.
Be aware project (2008) Boosting energy awareness with adaptive real-time enviornments. http://www.energyawareness.eu/. Accessed 25 Feb 2017.
Cohen, J (1992) A power primer. Psychol Bull 112(1):155.
Fischer, C (2008) Feedback on household electricity consumption: a tool for saving energy?Energy Efficiency 1(1):79–104.
Fogg, B (2009) A behavior model for persuasive design. In: Proceedings of the 4th International Conference on Persuasive Technology (Persuasive '09). ACM, New York.
Frederiks, E, Stenner K, Hobman E (2015) The socio-demographic and psychological predictors of residential energy consumption: A comprehensive review. Energies 8(1):573–609.
Froehlich, J, Dillahunt T, Klasnja P, Mankoff J, Consolvo S, Harrison B, Landay JA (2009) Ubigreen: investigating a mobile tool for tracking and supporting green transportation habits In: Proceedings of the Sigchi Conference on Human Factors in Computing Systems, 1043–1052.. ACM, New York.
Geelen, D, Reinders A, Keyson D (2013) Empowering the end-user in smart grids: Recommendations for the design of products and services. Energy Policy 61:151–161.
Gölz, S, Hahnel UJ (2016) What motivates people to use energy feedback systems? a multiple goal approach to predict long-term usage behaviour in daily life. Energy Res Soc Sci 21:155–166.
Gustafsson, A, Bång M, Svahn M (2009a) Power explorer: a casual game style for encouraging long term behavior change among teenagers In: Proceedings of the International Conference on Advances in Computer Enterntainment Technology, 182–189.. ACM, New York.
Gustafsson, A, Katzeff C, Bang M (2009b) Evaluation of a pervasive game for domestic energy engagement among teenagers. Comput Entertain (CIE) 7(4):54.
Hackenfort, M, Carabias-Hütter V, Hartmann C, Janser M, Schwarz N, Stücheli-Herlach P (2018) Behave 2018: Book of abstracts In: BEHAVE 2018-5th European Conference on Behaviour and Energy Efficiency, Zurich, 5-7 September 2018.. ZHAW Zürcher Hochschule für Angewandte Wissenschaften, Winterthur.
Hamari, J, Koivisto J, Pakkanen T (2014) Do persuasive technologies persuade?-a review of empirical studies. In: Spagnolli A. (ed)PERSUASIVE 2014, LNCS 8462, 118–136.. Springer International Publishing, Switzerland 2014.
Hargreaves, T, Nye M, Burgess J (2013) Keeping energy visible? exploring how householders interact with feedback from smart energy monitors in the longer term. Energy Policy 52:126–134.
He, HA, Greenberg S, Huang EM (2010) One size does not fit all: applying the transtheoretical model to energy feedback technology design In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 927–936.. ACM, New York.
Heckhausen, H, Gollwitzer PM (1987) Thought contents and cognitive functioning in motivational versus volitional states of mind. Motiv Emot 11(2):101–120.
Johnson, D, Horton E, Mulcahy R, Foth M (2017) Gamification and serious games within the domain of domestic energy consumption: A systematic review. Renew Sust Energ Rev 73:249–264.
Karjalainen, S (2011) Consumer preferences for feedback on household electricity consumption. Energy Build 43(2-3):458–467.
Karlin, B, Zinger JF, Ford R (2015) The effects of feedback on energy conservation: A meta-analysis. Psychol Bull 141(6):1205.
Lindenberg, S, Steg L (2007) Normative, gain and hedonic goal frames guiding environmental behavior. J Soc Issues 63(1):117–137.
Ling, K, Beenen G, Ludford P, Wang X, Chang K, Li X, Cosley D, Frankowski D, Terveen L, Rashid AM, et al. (2005) Using social psychology to motivate contributions to online communities. J Comput Mediated Commun 10(4):00–00.
Micheel, I, Novak J, Fraternali P, Baroffio G, Castelletti AF, Rizzoli A (2015) Visualizing and gamifying water & energy consumption for behavior change In: Fostering Smart Energy Applications Workshop (FSEA) 2015 at Interact 2015, 1–4.. University of Bamberg Press, Bamberg.
Michie, S, Van Stralen MM, West R (2011) The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci 6(1):42.
Monigatti, P, Apperley M, Rogers B (2010) Power and energy visualization for the micro-management of household electricity consumption In: Proceedings of the International Conference on Advanced Visual Interfaces, 325–328.. ACM, New York.
Nachreiner, M, Mack B, Matthies E, Tampe-Mai K (2015) An analysis of smart metering information systems: a psychological model of self-regulated behavioural change. Energy Res Soc Sci 9:85–97.
Novak, J, Melenhorst M, Micheel I, Pasini C, Fraternali P, Rizzoli AE (2018) Integrating behavioural change and gamified incentive modelling for stimulating water saving. Environ Model Softw 102:120–137.
Oinas-Kukkonen, H (2013) A foundation for the study of behavior change support systems. Pers Ubiquit Comput 17(6):1223–1235.
Prochaska, JO, Velicer WF (1997) The transtheoretical model of health behavior change. Am J Health Promot 12(1):38–48.
Reiss, J (2014) Energy retrofitting of school buildings to achieve plus energy and 3-litre building standards. Energy Procedia 48:1503–1511.
Richter, G, Raban DR, Rafaeli S (2015) Studying gamification: the effect of rewards and incentives on motivation In: Gamification in Education and Business, 21–46.. Springer International Publishing.
Schiavon, S, Altomonte S (2014) Influence of factors unrelated to environmental quality on occupant satisfaction in leed and non-leed certified buildings. Build Environ 77:148–159.
Schwartz, SH (1977) Normative influences on altruism In: Advances in Experimental Social Psychology, vol 10, 221–279.. Academic Press Inc. Published by Elsevier Inc.
Schwarzer, R (2008) Modeling health behavior change: How to predict and modify the adoption and maintenance of health behaviors. Appl Psychol 57(1):1–29.
Shapiro, SS, Wilk MB (1965) An analysis of variance test for normality (complete samples). Biometrika 52(3/4):591–611.
Shih, L-H, Jheng Y-C (2017) Selecting persuasive strategies and game design elements for encouraging energy saving behavior. Sustainability 9(7):1281.
Shove, E (2003a) Comfort, Cleanliness and Convenience: The Social Organization of Normality. Berg, Oxford.
Shove, E (2003b) Users, technologies and expectations of comfort, cleanliness and convenience. Innov Eur J Soc Sci Res 16(2):193–206.
Skinner, BF (1957) Verbal Behavior. Appleton-Century-Crofts, New York.
Steg, L, Bolderdijk JW, Keizer K, Perlaviciute G (2014) An integrated framework for encouraging pro-environmental behaviour: The role of values, situational factors and goals. J Environ Psychol 38:104–115.
Steg, L, Perlaviciute G, Van der Werff E, Lurvink J (2014) The significance of hedonic values for environmentally relevant attitudes, preferences, and actions. Environ Behav 46(2):163–192.
Steg, L, Vlek C (2009) Encouraging pro-environmental behaviour: An integrative review and research agenda. J Environ Psychol 29(3):309–317.
Tiefenbeck, V, et al. (2019) Real-time feedback promotes energy conservation in the absence of volunteer selection bias and monetary incentives. Nat Energy 4.1:35.
Tiefenbeck, V, Goette L, Degen K, Tasic V, Fleisch E, Lalive R, Staake T (2016) Overcoming salience bias: how real-time feedback fosters resource conservation. Manag Sci 64(3):1458–1476.
Tisov, A, Podjed D, D'Oca S, Vetršek J, Willems E, Veld PO (2018) People-centred approach for ict tools supporting energy efficient and healthy behaviour in buildings. Multidiscip Digit Publ Inst Proc 1:675.
van der Werff, E, Steg L (2015) One model to predict them all: predicting energy behaviours with the norm activation model. Energy Res Soc Sci 6:8–14.
Van Vliet, B, Shove E, Chappells H (2012) Infrastructures of Consumption: Environmental Innovation in the Utility Industries. Earthscan.
Venkatesh, V, Thong JY, Xu X (2012) Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q 36(1):157–178.
WebRatio (2019) Leading the Digital Transformation. https://www.webratio.com/site/content/en/home. Accessed 25 Apr 2019.
Wemyss, D, Castri R, De Luca V, Cellina F, Frick V, Lobsiger-Kägi E, Bianchi PG, Hertach C, Kuehn T, Carabias V (2016) Keeping up with the joneses: examining community-level collaborative and competitive game mechanics to enhance household electricity-saving behaviour In: Proceedings of the 4th European Conference on Behaviour and Energy Efficiency Behave 2016.. University of Coimbra, Portugal, Coimbra.
We thank SES Società Elettrica Sopracenerina for the support in the implementation of the Swiss pilot, NABU for contributing to the collection of energy saving tips and setMobile Srl for the technical system integration.
The limited duration of the trial (4 months) bears the risk that users may relapse after the intervention. Verifying whether such a short period is sufficient to change behaviour in the long term is the subject of a follow-up study. Overall, the relatively small sample sizes in the intervention and control groups limit the significance of the behaviour change determinants. Self-selection bias among trial participants might also have affected the results. To show causality, a longitudinal study is necessary.
This article has been published as part of Energy Informatics Volume 2 Supplement 1, 2019: Proceedings of the 8th DACH+ Conference on Energy Informatics. The full contents of the supplement are available online at https://energyinformatics.springeropen.com/articles/supplements/volume-2-supplement-1.
This work has been partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 72305. Publication of this supplement was funded by Austrian Federal Ministry for Transport, Innovation and Technology.
The supplementary materials are available upon request.
European Institute for Participatory Media, Pariser Platz 6, Berlin, 10117, Germany: Ksenia Koroleva, Mark Melenhorst & Jasminko Novak
University of Applied Sciences Stralsund, IACS-Institute for Applied Computer Science, Zur Schwedenschanze 15, Stralsund, 18435, Germany: Jasminko Novak
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci, 32, Milano, 20133, Italy: Sergio Luis Herrera Gonzalez & Piero Fraternali
IDSIA USI-SUPSI, Galleria 2, Manno, 6928, Switzerland: Andrea E. Rizzoli
Saxion University of Applied Sciences, M.H. Tromplaan 28, Enschede, 7513, AB, Netherlands: Mark Melenhorst
All authors contributed to the publication and have read and approved the final manuscript.
Correspondence to Ksenia Koroleva.
Koroleva, K., Melenhorst, M., Novak, J. et al. Designing an integrated socio-technical behaviour change system for energy saving. Energy Inform 2, 30 (2019) doi:10.1186/s42162-019-0088-9
Methodology Article
Testing for association between RNA-Seq and high-dimensional data
Armin Rauschenberger1,
Marianne A. Jonker1,
Mark A. van de Wiel1, 2 and
Renée X. Menezes1
BMC Bioinformatics 2016; 17:118
© Rauschenberger et al. 2016
Accepted: 18 February 2016
Published: 8 March 2016
Testing for association between RNA-Seq and other genomic data is challenging due to high variability of the former and high dimensionality of the latter.
Using the negative binomial distribution and a random-effects model, we develop an omnibus test that overcomes both difficulties. It may be conceptualised as a test of overall significance in regression analysis, where the response variable is overdispersed and the number of explanatory variables exceeds the sample size.
The proposed test can detect genetic and epigenetic alterations that affect gene expression. It can examine complex regulatory mechanisms of gene expression. The R package globalSeq is available from Bioconductor.
Keywords: High-dimensional; Overdispersion; Negative binomial; Global test
Genetic and epigenetic factors contribute to the regulation of gene expression. A better understanding of these regulatory mechanisms is an important step in the fight against cancer. Of interest are genetic alterations such as single nucleotide polymorphisms (SNPs), copy-number variations (CNVs) and loss of heterozygosity (LOH), as well as epigenetic alterations such as DNA methylation, microRNA expression levels and histone modifications.
From a statistical perspective, it makes sense to represent the expression of one gene as a response variable that changes when some covariates are altered. As a starting point, we assume that all covariates come from a single genetic or epigenetic molecular profile. Typically, more covariates are of interest than there are samples.
A plethora of methods for the analysis of gene expression and covariates has emerged in recent years. Many of these methods test each covariate individually, and subsequently correct for multiple testing or rank the covariates by significance. An alternative approach is the global test from Goeman et al. [1]. The global test does not test the individual but the joint significance of covariates. It allows for high dimensionality, reduces the multiple testing burden, and successfully detects small effects that encompass many covariates. Due to its desirable properties, the global test has become a widely used tool in genomics (e.g. [2–4]).
Currently, gene expression microarrays are being supplanted by high-throughput sequencing. The negative binomial distribution seems to be a sensible choice for modelling RNA sequencing data [5, 6]. One of its parameters describes the dispersion of the variable. If this parameter is unknown, the negative binomial distribution is not in the exponential family. As the global test from Goeman et al. [1] is limited in its current form to the exponential family of distributions, a new test is needed for RNA-Seq data. Here we provide such a test.
After proposing a global test for the negative binomial setting, we perform a simulation study, and analyse two publicly available datasets. The first application concentrates on method validation, overdispersion, and individual contributions. The second application concentrates on robustness against multicollinearity, the method of control variables, and the simultaneous analysis of multiple molecular profiles.
Although we focus on RNA-Seq gene expression data, the test developed here is applicable whenever associations between a count variable and large sets of quantitative or binary variables are of interest. In essence, it can be applied to any other type of sequencing data, such as ChIP-Seq (chromatin immunoprecipitation), microRNA-Seq or meth-Seq (methylation).
The random-effects model
The human genome contains several thousand protein-coding genes. In the following, only one gene is considered at a time. Accordingly, the expression of one gene across all samples is our response variable $y=(y_1,\ldots,y_n)^T$. If we were interested in whether a given subset of SNPs affected gene expression, these SNPs would be our $p$ covariates. The $n\times p$ covariate matrix $X$ is potentially high-dimensional ($p\gg n$).
We represent the relationship between the response and the covariates using the generalised linear model framework from McCullagh and Nelder [7]:
$$\mathrm{E}[y_{i}]=h^{-1}\left(\alpha+\sum\limits_{j=1}^{p} X_{ij} \beta_{j}\right), $$
where $h^{-1}$ is an inverse link function, $\alpha$ is the unknown intercept, $X_{ij}$ is the entry in the $i$th row and $j$th column of $X$, and $\beta_1,\ldots,\beta_p$ are the unknown regression coefficients. This model holds for all samples $i$ ($i=1,\ldots,n$).
We are interested in testing the joint significance of all regression coefficients. This is challenging because the regression coefficients cannot be estimated by classical regression methods if there are more covariates than samples. Goeman et al. [1] took a novel approach for testing $H_0: \beta_1=\ldots=\beta_p=0$ against $H_1: \beta_1\neq 0 \cup \ldots \cup \beta_p\neq 0$. The decisive step from Goeman et al. [1] was to assume $\beta=(\beta_1,\ldots,\beta_p)^T$ to be random, with the expected value $\mathrm{E}[\beta]=\boldsymbol{0}$ and the variance-covariance matrix $\mathrm{Var}[\beta]=\tau^2 I$, where $I$ is the $p\times p$ identity matrix and $\tau^2\geq 0$. Then a random-effects model is obtained:
$$ \mathrm{E}\left[y_{i}|r_{i}\right]= h^{-1}(\alpha+r_{i}), \qquad \qquad r_{i} = \sum\limits_{j=1}^{p} X_{ij} \beta_{j}. $$
This random-effects model allows us to rephrase the null and the alternative hypotheses. Defining the random vector $r=(r_1,\ldots,r_n)^T$, it can be deduced that $\mathrm{E}[r]=\boldsymbol{0}$ and $\mathrm{Var}[r]=\tau^2 XX^T$. Now the null hypothesis of no association between the covariate group and the response is given by $H_0: \tau^2=0$. To construct a score test against the one-sided alternative hypothesis $H_1: \tau^2>0$, we need to assume a distribution for $y_i|r_i$.
The testing procedure
We assume the negative binomial distribution $y_i|r_i \sim \mathrm{NB}(\mu_i,\phi)$, where the mean parameter $\mu_i$ depends on the sample, but the dispersion parameter $\phi$ does not. We parametrise the negative binomial distribution such that $\mathrm{E}[y_i|r_i]=\mu_i$ and $\mathrm{Var}[y_i|r_i]=\mu_i+\phi\mu_i^2$. Its density function is given by
$$f(y_{i})=\frac{\Gamma\left(y_{i}+\frac{1}{\phi}\right)}{\Gamma\left(\frac{1}{\phi}\right) \Gamma(y_{i}+1)} \left(\frac{1}{1+\mu_{i}\phi}\right)^{\frac{1}{\phi}} \left(\frac{\mu_{i}}{\frac{1}{\phi}+\mu_{i}}\right)^{y_{i}}. $$
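As a quick sanity check of this parametrisation, the mean–variance relationship can be reproduced with an off-the-shelf negative binomial distribution. The sketch below is illustrative only (the values of $\mu$ and $\phi$ are arbitrary); it uses the fact that $\mathrm{NB}(\mu,\phi)$ in the parametrisation above corresponds to scipy's nbinom with size $1/\phi$ and success probability $1/(1+\phi\mu)$.

```python
# Illustrative check of the mean-variance parametrisation (values are arbitrary).
from scipy.stats import nbinom

mu, phi = 50.0, 0.4                                 # hypothetical mean and dispersion
dist = nbinom(1.0 / phi, 1.0 / (1.0 + phi * mu))    # size = 1/phi, prob = 1/(1 + phi*mu)

print(dist.mean())   # 50.0   = mu
print(dist.var())    # 1050.0 = mu + phi * mu**2
```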
Various link functions come into consideration for the negative binomial model. We favour the logarithmic link in order to relate the negative binomial model directly to the Poisson model (see below). As library sizes can be unequal, we include the offset $\log(m_i/\overline{m})$, where $m_i$ denotes the library size of sample $i$, and $\overline{m}$ their geometric mean. Thus the mean function becomes
$$ \mu_{i}= \exp \left(\alpha + r_{i} + \log \frac{m_{i}}{\overline{m}}\right)=\frac{m_{i}}{\overline{m}} \exp(\alpha+r_{i}). $$
When $\tau^2$ is close to zero, the score test is the most powerful test of the null hypothesis $H_0: \tau^2=0$ against the alternative hypothesis $H_1: \tau^2>0$ [8]. Here the score function is the first derivative of the logarithmic marginal likelihood with respect to $\tau^2$. Intuitively, if the marginal likelihood reacts sensitively to changes in $\tau^2$ close to 0, there is evidence against $\tau^2=0$. Using results from le Cessie and van Houwelingen [9], we show in Additional file 1 how to calculate the score function. This function contains the unknown parameters $\alpha$ and $\phi$, but they can be estimated by maximum likelihood. Replacing the unknown parameters by their estimates leads to the test statistic
$$ \begin{aligned} u_{nb} =& \left\{\sum\limits_{i=1}^{n} \sum\limits_{k=1}^{n} \frac{R_{ik}}{2}~ \frac{(y_{i}- \hat{\mu}_{i}) (y_{k}- \hat{\mu}_{k})}{(1+\hat{\phi}\hat{\mu}_{i})(1+\hat{\phi}\hat{\mu}_{k})} \right\}\\ &- \sum\limits_{i=1}^{n} \frac{R_{ii}}{2}~ \frac{(\hat{\mu}_{i} + y_{i} \hat{\phi} \hat{\mu}_{i})}{(1+\hat{\phi}\hat{\mu}_{i})^{2}}, \end{aligned} $$
where $R_{ij}$ is the entry in the $i$th row and $j$th column of the $n\times n$ matrix $R=(1/p)XX^T$, and $\hat{\mu}_{0,i}=(m_i/\overline{m})\exp(\hat{\alpha})$ is the estimated mean under the null hypothesis. For simplicity we always write $\hat{\mu}_i$ instead of $\hat{\mu}_{0,i}$. In Additional file 1 the test statistic is rewritten in matrix notation.
Statistical hypothesis testing depends on the null distribution of the test statistic $u_{nb}$, which is unknown. We will obtain p-values by permuting the response $y=(y_1,\ldots,y_n)^T$ together with the mean $\boldsymbol{\hat{\mu}}=(\hat{\mu}_1,\ldots,\hat{\mu}_n)^T$. Since this is a one-sided test [10], if the observed test statistic is larger than most of the test statistics obtained by permutation, there is evidence against the null hypothesis.
As we are not using a parametric form for the null distribution of the test statistic, no adjustments for the estimation of $\alpha$ and $\phi$ are necessary. Furthermore, maximum likelihood estimation does not depend on the order of the elements in $y=(y_1,\ldots,y_n)^T$. Because neither $\hat{\alpha}$ nor $\hat{\phi}$ varies under permutation, both can be estimated once and treated as fixed, which makes the permutation procedure computationally efficient.
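To make the computation concrete, the following minimal numpy sketch (not the globalSeq implementation) evaluates $u_{nb}$ in matrix form and obtains a one-sided permutation p-value by permuting the $(y_i,\hat{\mu}_i)$ pairs while keeping $X$ (and hence $R$) fixed. The inputs `y`, `X` and `mu_hat` are assumed to be numpy arrays, and `phi_hat` a scalar estimated under the null hypothesis.

```python
import numpy as np

def u_nb(y, R, mu_hat, phi_hat):
    """Score-type statistic for H0: tau^2 = 0, with R = (1/p) X X^T."""
    d = (y - mu_hat) / (1.0 + phi_hat * mu_hat)               # scaled residuals
    quad = 0.5 * d @ R @ d                                     # double-sum term of u_nb
    diag = 0.5 * np.sum(np.diag(R) * (mu_hat + y * phi_hat * mu_hat)
                        / (1.0 + phi_hat * mu_hat) ** 2)       # correction term
    return quad - diag

def permutation_pvalue(y, X, mu_hat, phi_hat, n_perm=1000, seed=1):
    """One-sided p-value: the (y_i, mu_hat_i) pairs are permuted, X stays fixed."""
    rng = np.random.default_rng(seed)
    R = X @ X.T / X.shape[1]                                   # computed once, reused below
    u0 = u_nb(y, R, mu_hat, phi_hat)
    u_perm = np.array([u_nb(y[idx], R, mu_hat[idx], phi_hat)
                       for idx in (rng.permutation(len(y)) for _ in range(n_perm))])
    return np.mean(u_perm >= u0)                               # 1/k * sum 1[u_i >= u_0]
```

Because $\hat{\alpha}$ and $\hat{\phi}$ are estimated only once, the cost per permutation is dominated by the quadratic form in the scaled residuals.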
When testing for associations between RNA-Seq data and another molecular profile, numerous genes might be of interest. Because one test is performed per gene, the multiple testing problem reappears. (In the applications from below we omit multiple testing correction when analysing the distribution of p-values.)
Relation to the Poisson model
For comparison we also consider the Poisson distribution $y_i|r_i \sim \mathrm{Pois}(\mu_i)$ with $\mathrm{E}[y_i|r_i]=\mathrm{Var}[y_i|r_i]=\mu_i$ and a logarithmic link function. Proceeding as above we obtain the test statistic
$$ u_{pois} = \left\{ \sum\limits_{i=1}^{n} \sum\limits_{k=1}^{n} \frac{R_{ik}}{2} (y_{i} - \hat{\mu}_{i})(y_{k} - \hat{\mu}_{k}) \right\} - \sum\limits_{i=1}^{n} \frac{R_{ii}}{2} \hat{\mu}_{i}, $$
where the estimates \(\hat {\mu }_{i}\) are the same as in the negative binomial model.
In the case of $\hat{\phi}\boldsymbol{\hat{\mu}}=\boldsymbol{0}$ we would have $u_{nb}=u_{pois}$, but in practice only situations with $\boldsymbol{\hat{\mu}}>\boldsymbol{0}$ are of interest. The fact that $\hat{\phi}=0$ implies $u_{nb}=u_{pois}$ is convenient since a negative binomial distribution with a dispersion parameter close to zero is practically equivalent to a Poisson distribution.
Individual contributions
Following Goeman et al. [1], the test statistic $u_{nb}$ can be rewritten to reveal the influence of individual samples and covariates.
The contribution of sample $i$ ($i=1,\ldots,n$) to the test statistic is
$$ \begin{aligned}{} s_{i} = & \left\{\sum\limits_{k=1}^{n} \frac{R_{ik}}{2}~ \frac{\left(y_{i}- \hat{\mu}_{i}\right) (y_{k}- \hat{\mu}_{k}) }{(1+\hat{\phi}\hat{\mu}_{i})(1+\hat{\phi}\hat{\mu}_{k})} \right\} - \frac{R_{ii}}{2}~ \frac{(\hat{\mu}_{i} + y_{i} \hat{\phi} \hat{\mu}_{i})}{(1+\hat{\phi}\hat{\mu}_{i})^{2}}. \end{aligned} $$
If $s_i$ is positive, sample $i$ increases the evidence against the null hypothesis. However, $s_i$ depends not only on sample $i$, but, through $R$, $\boldsymbol{\hat{\mu}}$ and $\hat{\phi}$, also on the other samples.
Especially useful is the contribution of covariate $j$ ($j=1,\ldots,p$) to the test statistic:
$$ c_{j} = \frac{1}{2p} \left\{ \sum\limits_{i=1}^{n} X_{ij} \frac{y_{i} -\hat{\mu}_{i}}{1+\hat{\phi} \hat{\mu}_{i}} \right\}^{2} - \sum\limits_{i=1}^{n} \frac{X_{ij}^{2}}{2p}~ \frac{(\hat{\mu}_{i}+y_{i} \hat{\phi} \hat{\mu}_{i})}{(1+\hat{\phi} \hat{\mu}_{i})^{2}}. $$
Note that multiplying $c_j$ by $p$ gives the $u_{nb}$ that would have been obtained if only covariate $j$ had been tested. Similar to Goeman et al. [1], the test statistic for a group of covariates is the average of the individual test statistics. If $c_j$ is positive, covariate $j$ increases the evidence against the null hypothesis. Conveniently, $c_j$ is independent of all other covariates.
By construction we have $u_{nb}=\sum_{i=1}^{n}s_i$ and $u_{nb}=\sum_{j=1}^{p}c_j$. Even though a single hypothesis is tested on the covariate group, these decompositions allow us to determine which samples and which covariates are the most influential on the test result. If samples or covariates can be put into categories, decomposing the test statistic and grouping the contributions by category could visualise how each category contributes to the test result. Similarly, if samples or covariates can be ordered according to some genomic or phenomic criteria, patterns might be detected.
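A minimal sketch of these decompositions is given below (illustrative only, with the same hypothetical inputs as the earlier sketch); by construction `s.sum()` and `c.sum()` both reproduce $u_{nb}$.

```python
import numpy as np

def decompose_u_nb(y, X, mu_hat, phi_hat):
    """Sample contributions s_i and covariate contributions c_j of u_nb."""
    n, p = X.shape
    R = X @ X.T / p
    d = (y - mu_hat) / (1.0 + phi_hat * mu_hat)                # scaled residuals
    w = (mu_hat + y * phi_hat * mu_hat) / (1.0 + phi_hat * mu_hat) ** 2
    s = 0.5 * d * (R @ d) - 0.5 * np.diag(R) * w               # one entry per sample
    c = ((X.T @ d) ** 2 - (X ** 2).T @ w) / (2.0 * p)          # one entry per covariate
    return s, c                                                # s.sum() == c.sum() == u_nb
```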
Method of control variables
One drawback of obtaining p-values via permutation is the computational burden. Here we will make use of the work from Senchaudhuri et al. [11] in order to estimate p-values efficiently.
The proposed test statistic and the test statistic from Goeman et al. [1] have different advantages: whereas the former adequately models overdispersed count data, the latter has a known asymptotic null distribution. Usually we would obtain an unbiased estimate of the p-value using $\frac{1}{k}\sum_{i=1}^{k}\boldsymbol{1}[u_i \geq u_0]$, where $\boldsymbol{1}$ is the indicator function and $u_i$ represents the proposed test statistic for a permutation ($i=1,\ldots,k$) or for the observed data ($i=0$). Following Senchaudhuri et al. [11], we could also obtain an unbiased estimate using $\frac{1}{k}\sum_{i=1}^{k}\left\{\boldsymbol{1}[u_i \geq u_0] - \boldsymbol{1}[q_i \geq q_0]\right\} + p^{*}$, where $q_i$ and $p^{*}$ are the test statistic and asymptotic p-value, respectively, from Goeman et al. [1]. If the test statistics $u_i$ and $q_i$ have a strong positive correlation, then this alternative estimate is more precise than the usual estimate [11]. (In the applications below we only use the method of control variables when explicitly stated.)
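The estimator takes only a few lines; the sketch below is illustrative, and assumes that the permutation values of both statistics (`u_perm`, `q_perm`), the observed values (`u_obs`, `q_obs`) and the asymptotic p-value `p_star` of the control statistic have already been computed, with the same permutations used for both statistics.

```python
import numpy as np

def control_variate_pvalue(u_perm, u_obs, q_perm, q_obs, p_star):
    """Control-variates estimate: crude p-value minus the error of the control statistic."""
    u_perm, q_perm = np.asarray(u_perm), np.asarray(q_perm)
    crude = np.mean(u_perm >= u_obs)                    # 1/k * sum 1[u_i >= u_0]
    control_error = np.mean(q_perm >= q_obs) - p_star   # how far the control strays from p*
    return crude - control_error                        # precise when u and q correlate strongly
```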
Multiple molecular profiles
Not only SNPs but also other molecular mechanisms regulate gene expression. For instance, aberrant DNA methylation levels in promoter regions can activate oncogenes and inactivate tumour suppressor genes. Thus it could be interesting to test for associations between RNA-Seq gene expression data on one hand, and on the other SNP data as well as methylation data.
Let $X$ represent the $n\times p$ SNP data matrix, and let $Z$ represent the $n\times q$ methylation data matrix. The model from Eq. 1 allows us to test single covariate sets, leading to the test statistic $u_{nb}=u(X)$ for SNP data, and to the test statistic $u_{nb}=u(Z)$ for methylation data.
Menezes et al. [12] provided a test for analysing multiple molecular profiles simultaneously, for responses with a distribution in the exponential family. As the negative binomial distribution with an unknown dispersion parameter is not in the exponential family, we have to adapt this test. Following Menezes et al. [12], we include a second covariate set in the random-effects model from Eq. 1:
$$ {} \mathrm{E}[y_{i}|r_{i}]=~ h^{-1}(\alpha+r_{i}), \qquad \qquad r_{i} = \sum\limits_{j=1}^{p} X_{ij} \beta_{j} + \sum\limits_{j=1}^{q} Z_{ij} \gamma_{j}. $$
Using the ideas and the notation from above: for the random vectors $\beta=(\beta_1,\ldots,\beta_p)^T$ and $\gamma=(\gamma_1,\ldots,\gamma_q)^T$ we assume $\mathrm{E}[\beta]=\mathrm{E}[\gamma]=\boldsymbol{0}$, $\mathrm{Var}[\beta]=\tau^2 I$, $\mathrm{Var}[\gamma]=\upsilon^2 I$ and $\mathrm{Cov}[\beta,\gamma]=\boldsymbol{0}$, where $\tau^2\geq 0$ and $\upsilon^2\geq 0$. Consequently, the random vector $r=(r_1,\ldots,r_n)^T$ has $\mathrm{E}[r]=\boldsymbol{0}$ and $\mathrm{Var}[r]=\tau^2 XX^T+\upsilon^2 ZZ^T$. The joint test of both covariate sets is described by
$$H_{0}: \tau^{2} = \upsilon^{2} = 0 \quad \text{versus} \quad H_{1}: \tau^{2} \neq 0 \cup \upsilon^{2} \neq 0. $$
Menezes et al. [12] showed that ignoring the correlation between the individual test statistics entails little loss of power, and proposed using the sum of the individual test statistics as a joint test statistic. As the individual test statistics should be brought onto the same scale with respect to their means and variances [12], our joint test statistic is
$$ u(\boldsymbol{X},\boldsymbol{Z}) = \frac{u(\boldsymbol{X})-\hat{\mathrm{E}}[u(\boldsymbol{X})]}{\sqrt{\widehat{\text{Var}}[u(\boldsymbol{X})]}} + \frac{u(\boldsymbol{Z})-\hat{\mathrm{E}}[u(\boldsymbol{Z})]}{\sqrt{\widehat{\text{Var}}[u(\boldsymbol{Z})]}}. $$
Permuting as above, we estimate the first two central moments of u(X) and u(Z) under the null hypothesis, and calculate a p-value for the joint test. Note that this framework can be extended to an arbitrary number of covariate sets. Under k covariate sets the joint test statistic is the standardised sum of k individual test statistics.
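A minimal sketch of this combination rule is given below (illustrative only): `u_x_perm[b]` and `u_z_perm[b]` are assumed to come from the same permutation `b`, so that the permutation moments used for standardisation and the permuted joint statistics are mutually consistent.

```python
import numpy as np

def joint_test(u_x_obs, u_x_perm, u_z_obs, u_z_perm):
    """Standardised sum of two individual statistics and its permutation p-value."""
    u_x_perm, u_z_perm = np.asarray(u_x_perm), np.asarray(u_z_perm)
    zx = (u_x_obs - u_x_perm.mean()) / u_x_perm.std()          # standardised u(X)
    zz = (u_z_obs - u_z_perm.mean()) / u_z_perm.std()          # standardised u(Z)
    obs = zx + zz                                              # observed joint statistic
    perm = ((u_x_perm - u_x_perm.mean()) / u_x_perm.std()
            + (u_z_perm - u_z_perm.mean()) / u_z_perm.std())   # permuted joint statistics
    return obs, np.mean(perm >= obs)
```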
We perform a simulation in order to study the power of the proposed test in various circumstances. Instead of randomly generating covariates, we extract an $n\times p$ covariate matrix $X$ from the HapMap data (see below) at a random position. This maintains the correlation structure between SNPs, and thereby ensures a realistic noise level. Initially we set all coefficients in $\beta=(\beta_1,\ldots,\beta_p)^T$ equal to zero. Then we randomly select a subset of $r$ consecutive coefficients, and assign to them the values $s$ and $2s$ with probabilities 80 % and 20 %, respectively, where $s$ is the effect size. Using the relation $\mu=X\beta$, we calculate the mean vector $\mu=(\mu_1,\ldots,\mu_n)^T$, and simulate the response vector $y=(y_1,\ldots,y_n)^T$ under the distributional assumption $y_i \sim \mathrm{NB}(\mu_i,\phi)$. This procedure ensures that $y$ and $X$ are associated. If we wanted to obtain comparable data under the null hypothesis, we would shuffle the elements in $\mu$. In either case it is of interest how much evidence the proposed test finds for an association between $y$ and $X$.
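The data-generating step of this design can be sketched as follows (illustrative only; the default values of `r`, `s` and `phi` and the guard against zero means are assumptions of the sketch). Negative binomial counts are drawn via the usual Gamma–Poisson mixture, which has mean $\mu_i$ and variance $\mu_i+\phi\mu_i^2$.

```python
import numpy as np

def simulate_response(X, r=10, s=0.5, phi=0.25, seed=1):
    """Simulate one NB response vector associated with r consecutive columns of X."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    start = rng.integers(0, p - r + 1)                       # r consecutive non-zero coefficients
    beta[start:start + r] = rng.choice([s, 2 * s], size=r, p=[0.8, 0.2])
    mu = np.maximum(X @ beta, 1e-8)                          # mean vector mu = X beta (kept positive)
    lam = rng.gamma(shape=1.0 / phi, scale=phi * mu)         # Gamma-Poisson mixture ...
    return rng.poisson(lam)                                  # ... yields NB(mu, phi) counts
```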
After simulating numerous response vectors independently and identically, we calculate the specificity and sensitivity of the proposed test at various significance levels, and visualise their relation in a ROC curve. All other things being held equal, we either vary the dispersion parameter ϕ, the sample size n, the effect size s, or the number of non-zero coefficients r. In the last case we do not select another subset of coefficients, but shorten or lengthen the original subset. It is reassuring that the area under the curve changes in the expected directions (see Figure A in the Additional file 1) and that the type I error rates are maintained (see Table A in the Additional file 1).
A slight modification of this simulation study allows us to compare the statistical power of testing all covariates at once with that of testing them one by one. For this we extract various covariate matrices $X$ from the HapMap data, and let the coefficient vector $\beta$ take exclusively non-zero values. For each covariate matrix $X$ we simulate one response vector $y$ under the alternative hypothesis. Using the proposed test, we test the joint as well as the individual significance of the $p$ covariates. Subsequently, we compare the joint p-value with the minimum of the individual p-values after false discovery rate (FDR) correction. In our setting with many small effects, joint testing is more powerful than individual testing (see Table B in the Additional file 1). Note that this might not hold in situations with fewer or stronger effects.
Application: HapMap
Here we verify that the proposed test finds biologically meaningful signals, examine whether overdispersion is present, and measure the influence of covariates and samples.
We use the datasets from Montgomery et al. [13] and Pickrell et al. [14] that were made available in a preprocessed form by Frazee et al. [15]. They include RNA-Seq gene expression data for 59 individuals from the population "Utah residents with ancestry from northern and western Europe" (CEU) and 69 individuals from the population "Yoruba in Ibadan, Nigeria" (YRI). Excluding genes outside the 22 autosomes, without any variation within the sample, or without annotations, 11 700 genes are left. For each individual, SNP data is obtained from the International HapMap Consortium [16]. Throughout this application we use the term SNP to designate the number of minor alleles per locus (0, 1 or 2), considered quantitatively.
Stratified permutation test
Considering one gene at a time, its expression level over all individuals is used as a response vector, and the SNPs in the neighbouring region are used as a covariate matrix. The aim is to detect regions where causal SNPs might be. To be precise, we test each of the 11 700 gene expression vectors for associations with the respective SNPs that are within a window of ± 1 000 base pairs around the gene. This window size leads to p>n for approximately 13 % of the genes, with a maximum of p=5 152. Under the null hypothesis of no association between gene expression and local SNPs, the p-values would follow a uniform distribution.
Each sample either belongs to the population CEU or to the population YRI, and we account for this grouping by restricting permutations to keeping samples within the same population. As the distribution of p-values is weakly positively skewed, the overall evidence against the null hypotheses is small (see Figure B in the Additional file 1). Only 40 genes reach the minimal p-value given by the reciprocal of the number of permutations (see Table C in the Additional file 1). As in Hulse and Cai [17], we find some genes in the major histocompatibility complex family to be associated with nearby SNPs. Our results display good overlap with the examined results from Lappalainen et al. [18] (see Figure C in the Additional file 1), leading us to conclude that the proposed test identifies biologically meaningful signals.
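A stratified permutation of this kind only requires shuffling the $(y_i,\hat{\mu}_i)$ pairs within each population. The sketch below is illustrative; `strata` is assumed to be an array of population labels, and the returned index can be plugged into the permutation loop of the earlier sketch.

```python
import numpy as np

def stratified_permutation(strata, rng):
    """Permutation index that only shuffles samples within their own stratum."""
    strata = np.asarray(strata)
    idx = np.arange(len(strata))
    for g in np.unique(strata):
        members = np.where(strata == g)[0]      # positions of this stratum (e.g. CEU or YRI)
        idx[members] = rng.permutation(members) # shuffle only within the stratum
    return idx

# e.g. u_perm = u_nb(y[idx], R, mu_hat[idx], phi_hat) with idx = stratified_permutation(strata, rng)
```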
Presence of overdispersion
The reliability of the global test depends on how well the underlying distribution of RNA-Seq gene expression data is approximated. We are interested in whether this dataset requires a model with an offset as well as a dispersion parameter, or whether a simpler model would be sufficient.
Fitting under the null hypothesis of no association between gene expression and local SNPs, we observe that the Poisson distribution without an offset has a poor fit, and that including an offset for different library sizes or using the negative binomial distribution improves the fit (see Figure D in the Additional file 1).
In this example the Poisson model with an offset seems to fit almost equally well to the data as the negative binomial with or without an offset. This might be caused by genetic homogeneity within populations or by the absence of diseases. In cancer datasets we expect a much higher variability between individuals (see below).
For each of the 11 700 tests (one test per gene), the test statistic can be decomposed to show the contribution of individual samples or covariates (Eqs. 5 and 6). By construction these contributions can be positive or negative, but the same holds for their expected values under the null hypothesis. We select two tests (i.e. genes) in order to illustrate these decompositions.
For gene HLA-DQA2, most covariates have a larger influence than expected under the null hypothesis (Fig. 1). This suggests that several SNPs might be associated with the expression of the gene. Indeed, if they are tested individually using 10 000 permutations, almost half of them obtain the minimal p-value of 0.0001.
Contributions of covariates to the test statistic for gene HLA-DQA2. The shaded area indicates their lower 99 % confidence interval under the null hypothesis
For gene CIRBP, the samples from the population CEU tend to contribute positively to the test statistic, whereas those from the population YRI tend to have negative contributions (Fig. 2). Accordingly, the ordinary permutation test would give a much smaller p-value than the stratified permutation test (0.001 versus 0.065). In the case of gene CIRBP we cannot detect any sample with an extreme contribution to the test statistic.
Contributions of samples to the test statistic for gene CIRBP. Samples 1 to 59 are from population CEU, samples 60 to 128 are from population YRI
Application: TCGA
In this application we illustrate that the proposed test is robust against multicollinearity of the covariates, apply the method of control variables, and test for association with multiple covariate sets simultaneously.
We use a dataset on prostate cancer from TCGA et al. [19]. It includes expression levels of 17 678 genes, DNA methylation levels at 482 486 sites, and DNA copy numbers measured at 30 000 locations for 162 individuals. Section B in the Additional file 1 gives further information about this dataset, including preprocessing. Examining some randomly selected genes, it becomes clear that the Poisson distribution fits badly, but the negative binomial distribution with a free dispersion parameter fits well to the gene expression data (see Figure E in the Additional file 1). Given that the RNA-Seq data has been adjusted for different library sizes, we do not use an offset.
Robustness to multicollinearity
McCarthy, Chen and Smyth [20] developed a test of differential expression between conditions defined by one or more covariates. Taking the design matrix into account when estimating the dispersion parameters, this generalised linear model likelihood-ratio test is powerful for testing small numbers of covariates jointly. However, as in all regression models, multicollinearity may have undesirable consequences.
When testing for associations between gene expression and local genetic or epigenetic variations, high-dimensional situations can occur. Then the likelihood-ratio test breaks down due to singularity, but the global test is still applicable.
Perfect multicollinearity also poses a practical problem in low-dimensional situations. For example, copy number data has a relatively high chance of being perfectly multicollinear, because it is highly correlated between locations. If we nonetheless wanted to apply the likelihood-ratio test, we would have to drop some covariates. In contrast, the global test exploits this correlation.
Here we compare the method of control variables with the crude permutation test, based upon randomly selected genes. Testing the expression of each gene for associations with copy numbers that are within 1 000 000 base pairs around the gene, we estimate the precision of the estimated p-values by repeating each permutation test many times. The precision of the estimated p-values not only increases with the number of permutations, but according to Table 1 also when switching from the crude permutation test to the method of control variables. For the genes (i.e. tests) in Table 1 the correlation between the two test statistics is sufficiently strong to make this happen, but this is not necessarily true for all genes. However, also in the application HapMap this improvement occurs at all randomly selected genes (see Table D in the Additional file 1). Before deciding between the two methods, we advise to estimate the precision analytically [11].
Table 1 Precision of estimated p-values from tests with 100 permutations, estimated from 1,000 repetitions. Columns correspond to the randomly selected genes EXOSC9, FRMD1, SLC22A6, CNFN, PDHB, U2AF1L4, ENTPD6, TMED2, ANP32E and CLDND1
At all randomly selected genes (columns) the crude permutation test (first row) is outperformed by the method of control variables (second row) in terms of precision
Several molecular mechanisms are believed to have an impact on gene expression. In the following, the simultaneous analysis from Eqs. 7 and 8 is applied to chromosome 1. We test for associations between RNA-Seq gene expression data on one hand, and on the other methylation values within ± 50 000 base pairs, or copy numbers within ± 2 000 000 base pairs around the start location of the gene. To make the comparison meaningful, the same 1 000 permutations are used for the individual tests and the joint test.
Figure 3 shows: (1) the evidence against null hypotheses is stronger for methylations than for copy numbers; (2) testing methylations and copy numbers jointly leads to an increase in power compared to testing only copy numbers or only methylations; (3) the joint p-values are strongly correlated with both sets of individual p-values.
Empirical cumulative distribution functions and scatterplots of p-values. We test for associations between RNA-Seq on one hand, and either copy numbers, methylations or both on the other. The corresponding Spearman correlation coefficients are 0.04 (top right), 0.55 (bottom left) and 0.72 (bottom right)
Because window sizes are arbitrary, great care is required for biological interpretations of (1). However, (2) and (3) imply that the joint test adds some information to the individual tests. Indeed, in 13 % of the cases the joint test gives smaller p-values than both individual tests (Fig. 4). This illustrates the fact that the joint test finds effects that are missed by both individual tests. At a false discovery rate of 5 %, Table E in the Additional file 1 lists all genes that are insignificant in both individual tests but significant in the joint test. Extreme examples are the genes CNKSR1, ZNHIT6, TMEM56, PRPF38B, and SLC39A1, where both individual p-values are larger than 0.005, but the joint p-values are equal to 0.001. Among these genes, ZNHIT6 and SLC39A1 have been linked to prostate or breast cancer [21].
Scatterplot of logarithmic p-values from the simultaneous analysis of multiple covariate sets. Black points match the minimal individual p-values with the corresponding joint p-values. Grey circles visualise how often these combinations occur
We have proposed a test for association between RNA-Seq data and other molecular profiles. By virtue of the negative binomial distribution, we have accounted for overdispersion in the RNA-Seq data. And owing to a random-effects model, we have allowed for the high dimensionality of the other molecular profiles. Varying library sizes are naturally dealt with by an offset in the model.
We applied the proposed test to detect regulatory mechanisms of gene expression. Thereby we illustrated some of its advantages: (1) stratified permutation allows us to account for simple groupings; (2) if overdispersion is absent, the proposed test is equivalent to the one based on the Poisson distribution; (3) the test statistic can be decomposed to show the influence of covariates or samples; (4) the test is applicable in the presence of multicollinearity; (5) an extension allows multiple covariate sets to be analysed simultaneously.
We use simple offsets and dispersion estimates, but more sophisticated results can easily be integrated into the proposed test. In this regard, sharing information on overdispersion would probably improve the performance of the test under small sample sizes.
The proposed test is based on permutations. Due to the lower multiple testing burden, testing the joint significance of covariates requires far fewer permutations than testing their individual significance. Even though the computation time for a single test is usually much shorter than one second, genome-wide analyses can be computationally expensive. Running several processes in parallel and interrupting permutation when it becomes impossible to reach a predefined significance level [22] reduces the computation time of a genome-wide analysis to a couple of minutes. If expressions for the mean and the variance of the test statistic were obtained, it would be possible to approximate its null distribution without using permutations. This would make it possible to obtain significant p-values under small sample sizes, and lead to a drastic reduction in computation time. An alternative way of achieving both precision and speed is the method of control variables discussed above.
We have proposed a powerful test for finding eQTL effects based upon RNA-Seq data. It can be computed efficiently and is able to handle sets of highly correlated covariates.
The R package globalSeq runs on any operating system equipped with R-3.3.0 or later. It is available from Bioconductor under a free software license: http://bioconductor.org/packages/globalSeq/.
We are grateful to J. J. Goeman for helpful discussions, and to two anonymous reviewers for constructive criticism. We also acknowledge the use of data produced by The Cancer Genome Atlas (TCGA) Research Network to illustrate methods introduced in this work. This research was funded by the Department of Epidemiology and Biostatistics, VU University Medical Center.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Additional file 1 Appendix. Mathematical details, supplementary plots and information on reproducibility. (PDF 487 kb)
Using ideas from MAJ, MAvdW and RXM, AR developed the method and drafted the manuscript. MAJ, MAvdW and RXM revised the manuscript critically. All authors read and approved the final manuscript.
Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam, 1007, MB, The Netherlands
Department of Mathematics, VU University, Amsterdam, 1081, HV, The Netherlands
Goeman JJ, van de Geer SA, de Kort F, van Houwelingen HC. A global test for groups of genes: testing association with a clinical outcome. Bioinformatics. 2004; 20:93–99.
Smid M, Wang Y, Zhang Y, Sieuwerts AM, Yu J, Klijn JG, et al. Subtypes of breast cancer show preferential site of relapse. Cancer Res. 2008; 68:3108–14.
Sanchez-Carbayo M, Socci ND, Lozano J, Saint F, Cordon-Cardo C. Defining molecular profiles of poor outcome in patients with invasive bladder cancer using oligonucleotide microarrays. J Clin Oncol. 2006; 24:778–89.
Roehle A, Hoefig KP, Repsilber D, Thorns C, Ziepert M, Wesche KO, et al. MicroRNA signatures characterize diffuse large B-cell lymphomas and follicular lymphomas. Br J Haematol. 2008; 142:732–44.
Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010; 11:R106.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26:139–40.
McCullagh P, Nelder JA. Generalized linear models, 2nd ed. London: Chapman and Hall; 1989.
Goeman JJ, van de Geer SA, van Houwelingen HC. Testing against a high dimensional alternative. J R Stat Soc Ser B Stat Methodol. 2006; 68:477–93.
le Cessie S, van Houwelingen HC. Testing the fit of a regression model via score tests in random effects models. Biometrics. 1995; 51:600–14.
Verbeke G, Molenberghs G. The use of score tests for inference on variance components. Biometrics. 2003; 59:254–62.
Senchaudhuri P, Mehta CR, Patel NR. Estimating exact p values by the method of control variates or Monte Carlo rescue. J Am Stat Assoc. 1995; 90:640–8.
Menezes RX, Mohammadi L, Goeman JJ, Boer J. Analysing multiple types of molecular profiles simultaneously: connecting the needles in the haystack. BMC Bioinformatics. 2016; 17:77.
Montgomery SB, Sammeth M, Gutierrez-Arcelus M, Lach RP, Ingle C, Nisbett J, et al. Transcriptome genetics using second generation sequencing in a Caucasian population. Nature. 2010; 464:773–7.
Pickrell JK, Marioni JC, Pai AA, Degner JF, Engelhardt BE, Nkadori E, et al. Understanding mechanisms underlying human gene expression variation with RNA sequencing. Nature. 2010; 464:768–72.
Frazee AC, Langmead B, Leek JT. ReCount: A multi-experiment resource of analysis-ready RNA-seq gene count datasets. BMC Bioinformatics. 2011; 12:449.
The International HapMap Consortium. The international HapMap project. Nature. 2003; 426:789–96.
Hulse AM, Cai JJ. Genetic variants contribute to gene expression variability in humans. Genetics. 2013; 193:95–108.
Lappalainen T, Sammeth M, Friedländer MR, 't Hoen PA, Monlong J, Rivas MA, et al. Transcriptome and genome sequencing uncovers functional variation in humans. Nature. 2013; 501:506–11.
The Cancer Genome Atlas Research Network, Weinstein JN, Collisson EA, Mills GB, Shaw KRM, Ozenberger BA, et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet. 2013; 45:1113–20.
McCarthy DJ, Chen Y, Smyth GK. Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Res. 2012; 40:4288–97.
Rebhan M, Chalifa-Caspi V, Prilusky J, Lancet D. GeneCards: integrating information about genes, proteins and diseases. Trends Genet. 1997; 13:163.
van Wieringen WN, van de Wiel MA, van der Vaart AW. A test for partial differential expression. J Am Stat Assoc. 2008; 103:1039–49.
How does globalization affect COVID-19 responses?
Steve J. Bickley1,2,
Ho Fai Chan ORCID: orcid.org/0000-0002-7281-52121,2,
Ahmed Skali3,
David Stadelmann2,4,5 &
Benno Torgler1,2,5
The ongoing COVID-19 pandemic has highlighted the vast differences in approaches to the control and containment of coronavirus across the world and has demonstrated the varied success of these approaches in minimizing transmission. While previous studies have demonstrated high predictive power of incorporating air travel data and governmental policy responses in global disease transmission modelling, factors influencing the decision to implement travel and border restriction policies have attracted relatively less attention. This paper examines the role of globalization on the pace of adoption of international travel-related non-pharmaceutical interventions (NPIs) during the coronavirus pandemic. Drawing on empirical evidence, this study aims to offer advice on how to improve the global planning, preparation, and coordination of actions and policy responses during future infectious disease outbreaks.
Methods and data
We analyzed data on international travel restrictions in response to COVID-19 of 185 countries from January to October 2020. We applied time-to-event analysis to examine the relationship between globalization and the timing of travel restrictions implementation.
The results of our survival analysis suggest that, in general, more globalized countries, accounting for the country-specific timing of the virus outbreak and other factors, are more likely to adopt international travel restriction policies. However, countries with high government effectiveness and globalization were more cautious in implementing travel restrictions, particularly when their globalization takes the form of formal political and trade policy integration. This finding is supported by a placebo analysis of domestic NPIs, where such a relationship is absent. Additionally, we find that globalized countries with high state capacity are more likely to have higher numbers of confirmed cases by the time a first restriction policy measure was taken.
The findings highlight the dynamic relationship between globalization and protectionism when governments respond to significant global events such as a public health crisis. We suggest that the observed caution of policy implementation by countries with high government efficiency and globalization is a by-product of commitment to existing trade agreements, of a greater desire to 'learn from others', and perhaps also of 'confidence' in a government's ability to deal with a pandemic through its health system and state capacity. Our results suggest further research is warranted to explore whether global infectious disease forecasting could be improved by including the globalization index and in particular, the de jure economic and political, and de facto social dimensions of globalization, while accounting for the mediating role of government effectiveness. By acting as proxies for a country's likelihood and speed of implementing international travel restriction policies, such measures may predict the likely time delays in disease emergence and transmission across national borders.
The level of complexity around containing emerging and re-emerging infectious diseases has increased with the ease and increased incidence of global travel [1], along with greater global social, economic, and political integration [2]. In reference to influenza pandemics, but nonetheless applicable to many communicable and vector-borne diseases, the only certainty is in the growing unpredictability of pandemic-potential infectious disease emergence, origins, characteristics, and the biological pathways through which they propagate [3]. Globalization in trade, increased population mobility, and international travel are seen as some of the main human influences on the emergence, re-emergence, and transmission of infectious diseases in the twenty-first Century [4, 5].
Emerging and re-emerging infectious diseases have presented major challenges for human health in ancient and modern societies alike [6,7,8,9,10]. The relative rise in infectious disease mortality and shifting patterns of disease emergence, re-emergence, and transmission in the current era has been attributed to increased global connectedness, among other factors [11]. More globalized countries – and, in particular, global cities – are at the heart of human influence on infectious diseases; these modern, densely populated urban centers are highly interconnected with the world economy in terms of social mobility, trade, and international travel [12, 13]. One might assume that given their high susceptibility to infectious diseases, globalized countries would be more willing than less globalized countries to adopt screening, quarantine, travel restriction, and border control measures during times of mass disease outbreaks. However, given their globalized nature, globalized countries are also likely to favor less protectionist policies in general, thus, contradicting the assumption above, perhaps suggesting that counteracting forces are at play: greater social globalization may require faster policy adoption to limit potential virus import and spread through more socially connected populations [14, 15]; greater economic globalization may indicate slower policy adoption due to legally binding travel and trade agreements/regulations, economic losses, and social issues due to family relations that cross borders [16,17,18,19,20,21]. Greater political globalization may indicate greater willingness to learn from others and/or maintain democratic processes of decision making in global coordination efforts, either way potentially delaying the implementation of travel restrictions. Travel restrictions may also have minimal impact in urban centers with dense populations and travel networks [22]. Moreover, the costs of closing are comparatively higher for open countries than for already protective nations. For example, more globalized countries are more likely to incur financial or economic penalties (e.g., see [23, 24]) when implementing health policies which aim to improve the health of local populations such as import restrictions or bans on certain food groups/products and product labelling. Globalization, after all, is known to promote growth and does so via a combination of three main globalization dimensions: economic integration (i.e., flow of goods, capital and services, economic information, and market perceptions), social integration (i.e., proliferation of ideas, information, culture, and people), and political integration (i.e., diffusion of governance and participation in international coordination efforts) [25, 26]. See Table 1 for examples of data used in the estimation of each (sub)dimension of the KOF globalization index we use in this study.
Table 1 Dimensions and sub-components of globalization
Globalization appears to improve population health outcomes such as infant mortality rate (IMR) and life expectancy (LE) regardless of a country's level of development (i.e., developed, developing, or underdeveloped) [27, 28]. Links between the dimensions of globalization (i.e., social, political, and economic) and general population health are less clear cut [29]. For less developed countries, the economic dimension of globalization appears to provide the strongest determinant in IMR and LE, whereas for more developed countries, the social aspect of globalization is the strongest factor [27]. This suggests that as a country becomes more economically stable, it then moves towards greater social and political integration into global society; and for less developed countries, increased wealth creation through economic integration potentially delivers the greatest increases in population health. In contrast, for low- to middle-income countries, the social and political dimensions of globalization appear most strongly related to the propensity of women to be overweight [30, 31]. This suggests that for the least developed countries, the adoption of western culture, food habits and lifestyle may be detrimental to adult health if not backed up by social and political progress. Hence, it appears there is no definite relationship between the different aspects of globalization (i.e., social, political, and economic), a country's level of development, and health outcomes that hold across all health contexts. Regardless, trade policies and more generally, globalization, influence both a nation's determinants of health and the options and resources available to its health policymakers [32].
The influence of open trade agreements, policies favoring globalization and greater social connectedness on the (delayed) timing of travel restrictions during a pandemic would make logical sense. Globalized countries are more likely to incur financial, economic, and social penalties by implementing restrictive measures that aim to improve population health outcomes (e.g., see [23, 24]) and hence, will be less inclined to do so. Further, countries that rely on international students and tourism and have a high number of expatriates living and working abroad might be even less likely to close their borders or implement travel restrictions to avoid (1) increases in support payments or decreases in tax income during times of unforeseen economic upset, (2) negative backlash from media and in political polls, and (3) tit-for-tat behaviors from major trading partners. However, countries which are more socially connected may also act more quickly because they are inherently at higher risk of local outbreak and hence, to delay local emergence they may implement international travel restrictions earlier. Membership and commitments to international organizations [33], treaties, and binding trade agreements might also prevent or inhibit them from legally doing so [23, 34, 35], suggesting there are social, trade, and political motivators to maintain 'open' borders.
Domestic policies implemented in response to the coronavirus pandemic have ranged from school closures and public event cancellations to full-scale national lockdowns. Previous research has hinted that democratic countries, particularly those with competitive elections, were quicker to close schools. Interestingly, those with high government effectiveness (i.e., those with high-quality public and civil services, policy formulation, and policy implementation) were slower to implement such policies [36] as were the more right-leaning governments [37]. Further, more democratic countries have tended to be more sensitive to the domestic policy decisions of other countries [38]. In particular, government effectiveness – as a proxy of state capacity – can act as a mediator with evidence available that countries with higher effectiveness took longer to implement COVID-19 related responses [36, 39]. Countries with higher levels of health care confidence also exhibit slower mobility responses among its citizens [40]. Those results may indicate that there is a stronger perception that a well-functioning state is able to cope with such a crisis as a global pandemic like SARS-CoV-2. More globalized countries may therefore take advantage of a better functioning state; weighing advantages and disadvantages of policies and, consequently, slowing down the implementation of restrictive travel policies to benefit longer from international activities. Regardless, the need to understand the reasons (and potential confounding or mediating factors) behind the selection of some policy instruments and not others [36] and the associated timing of such decisions is warranted to enable the development and implementation of more appropriate policy interventions [41].
The literature seems to agree that greater globalization (and the trade agreements and openness which often come with it) make a country more susceptible to the emergence and spread of infectious and noncommunicable diseases [2, 42]. Greater connectedness and integration within a global society naturally increases the interactions between diverse populations and the pathways through which potential pathogens can travel and hence, emerge in a local population. Non-pharmaceutical interventions (e.g., social distancing, city lockdowns, travel restrictions) may serve as control measures when pharmaceutical options (e.g., vaccines) are not yet available [43]. However, such non-pharmaceutical measures are often viewed as restrictive in a social, political, and economic context. Our review of the literature did not detect clear indications of the likelihood that globalized cities will implement such measures, nor were we able to identify how quickly such cities will act to minimize community transmission of infectious diseases and the possible mediating effects of government effectiveness in the decision-making process. Furthermore, our review could not locate research on the relative influence of the social, political, and economic dimensions of globalization on the speed of implementing travel restriction policies. The recent COVID-19 pandemic has highlighted the vast differences in approaches to the control and containment of coronavirus across the world and has demonstrated the varied success of such approaches in minimizing the transmission of coronavirus. Restrictive government policies formerly deemed impossible have been implemented within a matter of months across democratic and autocratic governments alike. This presents a unique opportunity to observe and investigate a plethora of human behavior and decision-making processes. We explore the relative weighting of risks and benefits in globalized countries who balance the economic, social, and political benefits of globalization with a higher risk of coronavirus emergence, spread, and extended exposure. Understanding which factors of globalization (i.e., social, economic, or political) have influenced government public health responses (in the form of travel/border restriction policies) during COVID-19 can help identify useful global coordination mechanisms for future pandemics, and also improve the accuracy of disease modelling and forecasting by incorporation into existing models.
Key variables
The record for each country's international travel policy response to COVID-19 is obtained from the Oxford COVID-19 Government Response Tracker (OxCGRT) database [44] (185 countries in total). The database records the level of strictness on international travel from 01 January 2020 to the present (continually updated), categorized into five levels: 0 - no restrictions; 1 - screening arrivals; 2 - quarantine arrivals from some or all regions; 3 - ban arrivals from some regions; and 4 - ban on all regions or total border closure. At various points in time from the beginning of 2020 to the time of writing (06 October 2020), 102 countries have introduced a policy of screening on arrival, 112 have introduced arrival quarantine, 152 have introduced travel bans, and 148 have introduced total border closures.Footnote 1 A visual representation of these statistics in Fig. 1 shows the cumulative daily count of countries that have adopted a travel restriction, according to the level of stringency, between 01 January and 01 October 2020. Countries with a more restrictive policy (e.g., total border closure) and countries with less restrictive policies (e.g., ban on high-risk regions) are also counted. Figure 2 then shows the type of travel restriction and the date each country first implemented that policy. Together, we see that countries adopted the first three levels of travel restrictions in two clusters; first between late January to early February, and second during mid-March, around the time that COVID-19 was declared a pandemic by the WHO. Total border closures, on the other hand, were mainly imposed after the pandemic declaration, except for two countries that went into lockdown at the beginning of March (i.e., State of Palestine, and San Marino). Country-specific timelines are shown in Fig. S1 in the Appendix.
Fig. 1 Timeline of international travel restriction policy adoption for 184 countries. Daily count shows the cumulative number of countries that have introduced an international travel policy that is 'at-least-as-strict'. Relaxation of international travel restriction is not shown in the figure
Fig. 2 Restrictiveness of the first travel policy implemented over time. Each marker (N = 183) represents the type and date of the first travel restriction adopted, with the size of the marker representing the number of confirmed COVID cases at the time of policy implementation. Violin plot shows the kernel (Gaussian) density of timing of implementation
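As a rough illustration of how these counts can be reproduced, the following Python sketch computes, for each policy level, the cumulative number of countries whose travel policy is at least as strict as that level. It is a minimal sketch, not the authors' code; the file name and the columns country, date, and travel_policy (the 0-4 OxCGRT level) are assumptions and would need to be mapped onto the actual OxCGRT export.

import pandas as pd

# Minimal sketch (not the authors' code). Assumes a long-format table with one row per
# country-day and columns: 'country', 'date', 'travel_policy' (OxCGRT C8 level, 0-4).
policies = pd.read_csv("travel_policies.csv", parse_dates=["date"])

def cumulative_adopters(df: pd.DataFrame, level: int) -> pd.Series:
    """Daily cumulative number of countries with a policy at least as strict as `level`."""
    first_date = (df[df["travel_policy"] >= level]      # country-days at or above the threshold
                  .groupby("country")["date"].min())    # first such day per country
    daily = first_date.value_counts().sort_index()      # new adopters per calendar day
    return daily.cumsum()                               # running total, as plotted in Fig. 1

for level, label in {1: "screening", 2: "quarantine", 3: "ban on some regions", 4: "total closure"}.items():
    series = cumulative_adopters(policies, level)
    print(label, int(series.iloc[-1]) if len(series) else 0)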
We obtained COVID-19 statistics from the European Centre for Disease Prevention and Control (ECDC) and the COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University [45]. The dataset consists of records on the number of confirmed cases and deaths daily for 215 countries since January 2020.
Our measure of globalization is generated from the KOF Globalization Index (of more than 200 countries for the year 2017), published by the KOF Swiss Economic InstituteFootnote 2 [26]. The KOF Globalization Index is made up of 44 individual variables (24 de facto and 20 de jure components) relating to globalization across economic, social, and political factorsFootnote 3,Footnote 4 (see also [25]). The complete index is calculated as the average of the de facto and the de jure globalization indices. We focus this analysis on the overall index, as well as the subdimensions of globalization (i.e., Economic (Trade and Financial), Social (Interpersonal, Informational, and Cultural), and Political globalization). Additionally, we also investigate the relative contributions of the de facto and de jure indices separately. Each index ranges from 1 to 100 (highest globalization). In the regression models, we standardize the variable to mean of zero with unit variance for effect size comparison.
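To make the construction of the globalization variable concrete, the short sketch below averages de facto and de jure scores into an overall index and standardizes it to zero mean and unit variance, as done before entering it into the regressions. The toy values and column names are our own; the official KOF files use their own variable naming.

import pandas as pd

# Toy data with assumed column names; the KOF release uses its own variable names.
kof = pd.DataFrame({
    "country": ["A", "B", "C"],
    "kof_de_facto": [48.2, 71.5, 83.0],
    "kof_de_jure": [52.8, 69.1, 88.4],
})

# Overall index = average of the de facto and de jure indices (as described above).
kof["kof_overall"] = kof[["kof_de_facto", "kof_de_jure"]].mean(axis=1)

# Standardize to mean zero and unit variance so coefficients refer to a one-SD difference.
kof["kof_std"] = (kof["kof_overall"] - kof["kof_overall"].mean()) / kof["kof_overall"].std()
print(kof)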
Countries with no records of travel restriction adoption (not included in the Oxford COVID-19 Government Response Tracker) and globalization data from the KOF Globalization Index are listed in Tables S1 and S2, respectively.Footnote 5
Control variables
When analyzing the timing of international travel restrictions, we take into account how such decisions can be affected by the policies of neighbors [37, 38]. Thus, to control for policy diffusion, we constructed a variable to reflect international travel policy adoption of neighboring countries by averaging the strictness of each country's neighbors weighted by the share of international tourism. Inbound tourism data of 197 countries were obtained from the Yearbook of Tourism Statistics of the World Tourism Organization [46]. The data consist of total arrivals of non-resident tourists or visitors at national borders, in hotels, or other types of accommodations; and the overnight stays of tourists, broken down by nationality or country of residence, from 1995 to 2018. Due to differences in statistical availability for each country, we take records from 2018 (or 2017 if 2018 is not available) of arrivals of non-resident tourists/visitors at national borders as the country weights for the computation of foreign international travel policy. If arrival records at national borders are not available for these years, we check for the 2018 or 2017 records on arrivals or overnight stays in hotels or other types of accommodation before relying on records from earlier years. To determine the weighted foreign international restriction policy for each country, we calculated the weighted sum using the share of arrivals of other countries multiplied by the corresponding policy value ranging from 0 to 4.Footnote 6
Similarly, case severity amongst countries comprising the majority of inbound tourists should also increase the likelihood of a country adopting travel restrictions. We thus constructed a variable that takes the log of the sum of confirmed cases in neighboring countries, weighted by their share of total arrivals in the focal country.
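Both diffusion controls are tourism-share-weighted sums over a country's partners (see also Footnote 6). The sketch below is illustrative only; the country names, weights, and variable names are invented for the example and are not the study's data.

import numpy as np
import pandas as pd

# Illustrative sketch (not the authors' code). 'weights' holds the share of country i's
# non-resident arrivals coming from country j (rows sum to 1); 'policy' and 'cases' hold
# each country's travel-policy level (0-4) and cumulative confirmed cases on a given day.
countries = ["A", "B", "C"]
weights = pd.DataFrame([[0.0, 0.7, 0.3],
                        [0.5, 0.0, 0.5],
                        [0.6, 0.4, 0.0]], index=countries, columns=countries)
policy = pd.Series({"A": 3, "B": 1, "C": 0})
cases = pd.Series({"A": 120, "B": 15, "C": 0})

# Weighted foreign travel policy: sum_j (arrival share of j) * (policy level of j).
foreign_policy = weights.values @ policy.loc[countries].values

# Weighted foreign case burden, logged (log1p guards against zero neighboring cases).
foreign_cases_log = np.log1p(weights.values @ cases.loc[countries].values)

print(pd.DataFrame({"foreign_policy": foreign_policy,
                    "foreign_cases_log": foreign_cases_log}, index=countries))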
While [47] suggests that the diffusion of social policies is highly linked to economic interdependencies between countries, and is less based on cultural or geographical proximity, we test the sensitivity of our results using a variety of measures of country closeness (Fig. S4 and S5). Doing so also allows us to examine which factors are more likely to predict COVID-19 policy diffusion. In general, while our results are not sensitive to other dimensions of country proximity, decisions to adopt travel restrictions are best explained by models where neighbors are defined by tourism statistics (see SI Appendix).
Previous studies have found that countries with higher government effectiveness took longer to implement domestic COVID-19-related policy responses such as school closure (e.g., [36, 39]), perhaps due to the (mis)perception that a well-functioning state should be able to cope with such a crisis as the current coronavirus pandemic and therefore, has more time or propensity to learn from others and develop well-considered COVID-19 response plans. Therefore, we also control for governance capacity; the data for which are based on measures of state capacity in the Government Effectiveness dimension of the 2019 Worldwide Governance Indicators (the World Bank).
We check the robustness of our results using alternative measures such as the International Country Risk Guide (ICRG) Quality of Government and tax capacity (tax revenue as % of GDP obtained from the World Development Indicators) [38, 47]. The ICRG measure on the quality of government is computed as the average value of the "Corruption", "Law and Order", and "Bureaucracy Quality" indicators. We include additional control variables to account for each country's macroeconomic conditions, social, political, and geographical characteristics. For macroeconomic conditions, we obtained the latest record of GDP per capita, unemployment rate, and Gini coefficient from the WDI. We include population density, percentage urban population, and share of the population over 65, to control for the social structure of the country, which might affect the odds of implementing the policy due to a higher risk of rapid viral transmission and high mortality rates [38]. We also control for the number of hospital beds in the population [36, 38, 39, 40], which we used to proxy for a country's health system capacity, as countries with higher health capacity may be less likely to implement restrictive travel measures.Footnote 7 We use the electoral democracy index from the V-Dem Institute to control for the type of political regime [36, 38, 40]. Following previous studies, we include a dummy variable for countries with prior experience of managing SARS or MERS [38, 48, 49], defined as those with more than 50 cases. Lastly, we include continent dummies, which absorb any unobserved regional heterogeneity [36]Footnote 8 and country-specific weekend days, as policy changes might have occurred less often on days when politicians are not generally active or at their workplace.
Empirical strategy
We explore the following questions: how will more globalized countries respond to COVID-19? Do they have more confirmed cases before they first implement travel restrictions? Do they take longer to implement travel restriction policies in general? Which dimension of globalization (i.e., social, political, or economic) contributes most to these responses? To provide answers to these questions, we first report the correlations between the level of globalization and the time gap between the first confirmed domestic case and the implementation date of the first international travel restriction policy, calculated using records from the Oxford COVID-19 Government Response Tracker (OxCGRT) [44] on the timing of restrictions on international travel for each country and COVID-19 case statistics from the ECDC and CSSE [45]. We then examine the relationship using survival analysis through a multiple failure-event framework. This approach allows us to examine the underlying factors which affect the implementation of international travel restriction policies across country borders in an attempt to isolate the effect of globalization. It also allows us to use 'incomplete' datasets, as certain countries may not have implemented any type of policy or may have implemented a strict policy without first implementing a less strict one (i.e., they did not sequentially implement policies from 'least strict' to 'most strict'). Furthermore, we conjecture that, as a consequence of the above, countries with higher levels of globalization may have more confirmed cases by the time the first policy was introduced. Therefore, we also examine the relationship between globalization and the number of confirmed cases (in logs) at the time of policy implementation.
We employ the time-to-event analysis (survival analysis or event history analysis) to examine the role of globalization in the timing of international travel restriction policies. Similar to previous studies [37, 38, 50], we use the marginal risk set model [51] to estimate the expected duration of time (days) until each policy, with increasing strictness, was imposed by each country. Specifically, we model the hazard for implementing screening, quarantine, ban on high-risk regions, and total border closure separately; thus, allowing the possibility that a country may adopt a more restrictive policy early on, as countries are assumed to be simultaneously at risk for all failures (i.e., implementation of any level of policy strictness). Intuitively, as more stringent policies are less likely to be implemented or adopted early (especially if state capacity is high), we stratified the baseline hazards for the four restrictions to allow for differences in policy adoption rate. Yet, when a country adopts a more restrictive travel restriction policy (e.g., total border closure) before (or never) implementing the less restrictive ones (e.g., ban on high-risk regions), the latter is effectively imposed (at least from an outcome perspective). Thus, we code them as failure on the day the more restrictive policy was implemented.Footnote 9 We also stratify countries by the month of the first confirmed COVID-19 case,Footnote 10 as countries with early transmission of coronavirus have fewer other countries from which they can learn how best to respond to the pandemic [52]. This is important because disproportionally more countries with a higher globalization index contracted the virus early (Fig. S2 in the SI Appendix). Additionally, we stratify time observations into before and after pandemic declaration (11 March 2020) [53] as it is likely to significantly increase the likelihood of countries adopting a travel restriction policy (particularly for border closures as seen in Fig. 2) as consensus on the potential severity of the pandemic solidified. Out of all 184 countries in our sample, 3 and 39 did not implement ban on high-risk regions and total border closure, respectively, before the end of the sample period, and are thus (right) censored (Fig. 1); i.e., nothing is observed or known about that subject and event after this particular time of observation.
We define the time-at-risk for all countries as the start of the sample period (i.e., 01 January 2020)Footnote 11 and estimate the following stratified (semi-parametric) Cox proportional hazards model [37, 38, 50]:
$$ h_g(t, \mathbf{X}) = h_{0g}(t)\,\exp\!\left(\boldsymbol{\beta}\mathbf{X}_i\right) \qquad (1) $$
where \( h_g(t) \) is the hazard function for stratum g, representing the four levels of international travel policy strictness (screening, quarantine, ban on high-risk regions, and total border closure), with \( h_{0g} \) as the respective baseline hazard. Because of the stratification approach, we cluster the standard errors at the country level. Tied failures are handled using the Efron method. The extended Cox model in (1) allows us to include static predictor variables – such as the KOF globalization index – and time-varying covariates on neighboring countries' international travel policy adoption or daily COVID-19 case statistics to examine their effects, relative to the baseline hazard, on the timing of policy implementation in the multiple-events data framework.
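A stratified Cox model of this form can be fitted with standard survival-analysis software. The sketch below uses the Python lifelines package purely as one possible implementation; the authors do not state which software they used, and the file and column names are assumptions. It stratifies the baseline hazard by restriction level and month of first case, clusters standard errors at the country level, and relies on lifelines' default Efron handling of ties.

import pandas as pd
from lifelines import CoxPHFitter

# Minimal sketch (software and column names are assumptions, not the authors' setup).
# One row per country x restriction level, in the marginal risk set layout:
#   duration = days from 01 Jan 2020 until that level was adopted (or censoring),
#   event    = 1 if the level was adopted before the end of the sample period.
spells = pd.read_csv("travel_restriction_spells.csv")

covariates = ["kof_std", "gov_effectiveness", "foreign_policy", "foreign_cases_log",
              "gdp_pc_log", "pop_over65", "hospital_beds", "electoral_democracy"]

cph = CoxPHFitter()
cph.fit(
    spells[["duration", "event", "restriction_level", "first_case_month", "country"] + covariates],
    duration_col="duration",
    event_col="event",
    strata=["restriction_level", "first_case_month"],  # separate baseline hazards per stratum
    cluster_col="country",                             # robust SEs clustered at the country level
)
cph.print_summary()  # reports exp(coef), i.e. hazard ratios

The paper's time-varying covariates (neighbors' daily policy levels and case counts) would require the counting-process data layout, for example via lifelines' CoxTimeVaryingFitter; the static layout above is kept only for brevity.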
To study the relationship between COVID-19 case prevalence and the level of globalization at the time of travel restriction [39], we apply ordinary least squares (OLS) regression to estimate the following model:
$$ Y_{ij} = \alpha + \beta\, Globalisation_i + \boldsymbol{\gamma}_j \mathbf{X}_i + \epsilon_i \qquad (2) $$
where \( Y_{ij} \) is the number of cases (log) at the time restriction j (or a stricter restriction) was implemented, \( Globalisation_i \) is the KOF globalization index of country i, and \( \mathbf{X}_i \) is a vector of country-specific controls.
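This specification maps directly onto standard regression tooling. The sketch below uses statsmodels formulas as one way to estimate it; the data file, column names, and the choice of heteroskedasticity-robust (HC1) standard errors are assumptions for illustration rather than the authors' exact setup.

import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch with assumed column names. 'log_cases_at_policy' is the log number of
# confirmed cases on the day restriction j (or a stricter one) was first implemented.
df = pd.read_csv("cases_at_policy.csv")

model = smf.ols(
    "log_cases_at_policy ~ kof_std + gov_effectiveness + gdp_pc_log + unemployment"
    " + first_case_date_num + C(region)",   # controls, incl. date of first confirmed case
    data=df,
).fit(cov_type="HC1")                        # heteroskedasticity-robust standard errors

print(model.summary())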
First, we examine whether the level of globalization of the country is correlated with the timing of international travel restrictions relative to the date of a country's first local confirmed case of coronavirus. With a simple correlation analysis, we find that the Pearson correlation between the first policy implementation-first case gap and the globalization index is significantly positive, ρ = 0.35 (p < 0.001; 95%CI = [0.210, 0.475]; n = 170),Footnote 12 demonstrating that more globalized countries exhibited a delay in imposing travel restrictions compared with less globalized countries (Fig. 3a), relative to their first local confirmed case of COVID-19. Figure 3a also indicates that countries that reacted before the first local COVID-19 case tended to adopt screening on arrivals or quarantine rules as the first precautionary measures. We find that more globalized countries tend to have a higher number of confirmed local cases of COVID-19 at the time of implementing travel restrictions (Pearson correlation between the log of confirmed cases and KOF index: ρ = 0.408; p < 0.001; 95%CI = [0.276, 0.525]; n = 173; Fig. 3b).Footnote 13 One noteworthy case is the United Kingdom, which only enforced quarantine on travelers from high-risk regions on 08 June 2020, 129 days after COVID-19 was first confirmed in the country.
Fig. 3 Correlation between the globalization level of a country and a) the number of days between the first international travel restriction policy implemented and the first confirmed case; and b) the number of confirmed cases (log scale, with countries reporting 0 COVID-19 cases mapped below 1) at the time of the first policy being implemented. The colors represent the four international travel restrictions implemented first in each country. Size of the marker shows the number of confirmed COVID-19 cases on the date of the implementation of the first travel policy
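The headline correlations can be reproduced in a few lines. The sketch below computes Pearson's ρ with a Fisher-z confidence interval; the input file and column names (gap_days, kof) are assumptions for illustration.

import numpy as np
import pandas as pd
from scipy import stats

# Minimal sketch with assumed column names: 'gap_days' is the number of days between the
# first travel restriction and the first confirmed case; 'kof' is the globalization index.
df = pd.read_csv("gap_vs_kof.csv").dropna(subset=["gap_days", "kof"])

r, p = stats.pearsonr(df["gap_days"], df["kof"])

# 95% confidence interval via the Fisher z-transformation.
z = np.arctanh(r)
se = 1.0 / np.sqrt(len(df) - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"rho = {r:.3f}, p = {p:.4f}, 95% CI = [{lo:.3f}, {hi:.3f}], n = {len(df)}")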
These correlations persist and remain significant when each level of travel restriction is evaluated (see Fig. S3 in SI Appendix). This shows that more globalized countries are more likely to impose international travel restrictions later, relative to the first confirmed case in the country, regardless of policy strictness. Interestingly, the two least strict policies (i.e., screening and quarantine) have slightly higher correlation coefficients meaning that it took more globalized countries longer to impose these policies relative to the first local COVID-19 case. One would think that the least strict policies would represent a lower barrier to continued globalization and hence, be the more likely route for a COVID-19 response measure for more globalized countries.
An intuitive narrative for these findings is that globalized countries are typically among the first to be hit by emerging and re-emerging infectious diseases and are naturally more susceptible to local community transmission [12, 13] (Fig. S2). Hence, globalized countries may have less time to react, strategize, and learn from others in terms of suitable NPIs and how resources need to be mobilized for effective implementation. They may also underestimate the speed of transmission and contagiousness of the virus due to lack of clear evidence and knowledge of the virus at the early stage of the outbreak. Below, we present findings after accounting for the timing of the first COVID-19 wave appearing in the country.
Do more globalized countries take longer to implement travel restriction policies in general?
We present the results from the survival analysis in Table 2, which shows the hazard ratios (HRs) for each factor. For binary explanatory variables, HRs can be interpreted as the ratio of the likelihood of adopting travel restrictions between the two levels, while for continuous variables, they represent the same ratio for a one-unit difference.
Table 2 Time-to-event analysis (marginal risk set model) predicting implementation of international travel restrictions
Despite the strong positive correlation observed in the bivariate analysis between globalization and the time difference between first local confirmed case and implementation of travel restriction, we did not find substantial evidence suggesting that more globalized countries are more reluctant to adopt travel restriction policies relative to their first local confirmed case. In fact, after adjusting for the date that COVID-19 was first locally contracted (through observation stratification), we find that, in general, more globalized countries are more likely to adopt travel restriction policies. Specifically, as the KOF globalization index increases by one standard deviation (e.g., from Paraguay to New Zealand), the likelihood of adopting travel restrictions increases by 80% (p = 0.007; 95%CI = [1.163, 2.617], Table 2 model 3).
We also find strong evidence of travel restriction policy diffusion between countries that are heavily interdependent in the tourism sector; that is, a country is more likely to adopt a travel restriction if neighboring countries (in terms of share of non-resident visitor arrivals) have done so. As expected, an increase in COVID-19 prevalence in regions comprising the majority of inbound international tourist arrivals increases the likelihood of enforcing travel restrictions. Specifically, for every 1% increase in COVID-19 cases in neighboring countries, the chance of adopting a travel policy increases by about 15% (p < 0.001, 95%CI = [1.075, 1.237]). On the other hand, increases in domestic COVID-19 cases do not appear to influence travel policy adoptionsFootnote 14 suggesting that travel restriction policy decisions may be driven more by 'keeping the disease out' than containing the disease locally for the greater global good. The likelihood of adopting a restrictive travel policy (e.g., arrivals ban) is about three times higher if the country has already implemented a less strict policy, suggesting there may be decreased difficulty in implementing more restrictive policies over time or an increased preference to do so. Moreover, policy change is 60% less likely to occur during weekends (p = 0.005, 95%CI = [0.199, 0.757]), perhaps because government officials are less likely to be working on weekends and hence, less active in the political decision-making process.
The effect of the electoral democracy index is not statistically significant, and our results are contrary to the findings of [38], where OECD countries with higher electoral democracy have lower rates of domestic policy adoption.Footnote 15 Perhaps decisions to implement international travel restrictions are less controversial to voters than domestic policies as the former primarily aims at limiting mobility from outside country borders rather than restricting the freedom and mobility within country borders as the latter do. In addition, we find that countries with a higher unemployment rate are more likely to implement travel restrictions. Surprisingly, countries with a larger share of older population are less likely to implement travel restrictions, while no statistically significant effect was observed for the share of urban population and population density. Contrary to our expectation, countries with greater healthcare capacity tend to be more likely to adopt a travel restriction policy.Footnote 16
Government capacity as a relevant mediator
When including the interaction term between the globalization index and measures of state capacity in the model, we find strong evidence suggesting that more globalized countries with higher government effectiveness are slower to adopt travel restrictions. On the other hand, the likelihood to adopt travel restrictions increases with the level of globalization for countries with lower state capacity. Perhaps these countries are more self-aware of their lack of preparedness and/or ability to execute effective COVID-19 response plans or accommodate large fluxes of hospital admissions owing to the coronavirus pandemic. Each regression includes the same set of control variables as those used in Table 2 model 4. As shown in Fig. 4, the hazard ratios of the interaction terms between KOF globalization index and WGI government effectiveness are statistically less than one (p = 0.001), as well as the interaction term with an alternative measure of state capacity, namely ICRG quality of government (p = 0.006) and tax capacity (p = 0.018). For instance, computing the hazard ratios of globalization at different levels of government effectiveness reveals that the change in the likelihood to impose travel restrictions, with respect to a one standard deviation increase in KOF, is about 1.5 times higher (hazard ratio of 2.5) for a country with a WGI of 1.5 standard deviations below the world's average (e.g., Chad), while the risk for a country with a WGI 1.5 standard deviations above the world's average (e.g., Austria) would fall by 12% (hazard ratio of 0.88). Moreover, we also find a similar effect with the interaction terms between globalization and health capacity (as measured by the number of hospital beds (p = 0.075), physicians (p < 0.001), or nurses and midwives per 1000 (p < 0.001), and current expenditure on health (log) (p < 0.001)). This evidence supports the notion that countries with higher state or healthcare capacity and globalization were less likely to limit international travel, even when the stakes might be comparatively higher, i.e. when the country is more globalized and hence, more susceptible to infectious disease outbreaks.
Fig. 4 Hazard ratios of interaction terms between globalization and state capacity or health care capacity. Cap represents 95% confidence intervals. Shaded area highlights the range of HRs
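The conditional hazard ratios quoted above follow mechanically from the main and interaction coefficients, since HR(KOF | WGI) = exp(β_KOF + β_int × WGI). The sketch below back-solves illustrative coefficients from the two figures quoted in the text (2.5 at 1.5 SD below average, 0.88 at 1.5 SD above); these values are for illustration only and are not the estimates reported in Table 2 or Fig. 4.

import numpy as np

# Illustrative coefficients only, back-solved from the two hazard ratios quoted in the text;
# they are not the actual estimates from the fitted model.
hr_low_wgi, hr_high_wgi = 2.5, 0.88     # HR of a +1 SD change in KOF at WGI = -1.5 SD and +1.5 SD
wgi_low, wgi_high = -1.5, 1.5

b_int = (np.log(hr_high_wgi) - np.log(hr_low_wgi)) / (wgi_high - wgi_low)  # interaction coefficient
b_kof = np.log(hr_low_wgi) - b_int * wgi_low                               # main KOF coefficient

def hr_kof(wgi_sd: float) -> float:
    """Hazard ratio of a one-SD increase in KOF at a given government effectiveness (in SD units)."""
    return float(np.exp(b_kof + b_int * wgi_sd))

for wgi in (-1.5, 0.0, 1.5):
    print(f"WGI = {wgi:+.1f} SD  ->  HR(KOF +1 SD) = {hr_kof(wgi):.2f}")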
Which aspect of globalization can primarily account for these responses?
Next, we assess which aspects of globalization are more important when predicting travel restriction policy adoption by examining the influence of each (sub)dimension of the globalization index. We find the positive effect of globalization on the likelihood to adopt international travel restrictions is likely to be driven by the social dimension of globalization (Fig. S6, HR is larger than 1 for both de jure and de facto dimensions), as the estimates of HR are statistically significant when we re-estimate model 4 in Table 2 with the three subdimensions of KOF. Only the subdimension of social globalization is statistically significant, which shows that countries with higher social globalization are quicker to adopt travel restrictions, controlling for other factors. Moreover, we estimate and compare the hazard ratios of the interaction term of each globalization dimension with government effectiveness to assess mediator effects (Fig. 5).Footnote 17 Overall, we find the likelihood of implementing travel restriction policies among countries with high state capacity is robustly estimated for all subcomponents (Fig. 5a), with HRs ranging from 0.70 (political globalization) to 0.76 (social globalization). A closer inspection distinguishing between de facto (actual flows and activities, Fig. 5b) and de jure (policies, resources, conditions and institutions, Fig. 5c) measures [26] leads to interesting insights.
Fig. 5 HRs of the interaction terms between government effectiveness and different dimensions of the globalization index on adoption of travel restrictions. Cap represents 95% confidence intervals
First, we find that de jure political globalization (number of treaties and memberships in international organizations) has the largest effect of all the sub-dimensions of globalization.Footnote 18 This is a highly surprising result given the call for international cooperation and coordination by many international organizations (e.g., WHO,Footnote 19 World Economic Forum,Footnote 20 United NationsFootnote 21). We find that those countries with high government effectiveness and engagement in international political coordination efforts are less likely to implement travel restriction policies and hence, slower to do so. On the other hand, de facto economic globalization, which measures actual economic activities (such as the exchange of goods and services) over long distances, is not as strongly related to the timing of travel policy adoption for countries with high government effectiveness. De facto social globalization has the largest effect among the de facto globalization dimensions. These results suggest that a nation with high government effectiveness and more global social, interpersonal, and cultural flows is less likely to implement travel restriction policies in pandemic crises and hence, may delay doing so. Countries with higher government effectiveness and policies and conditions that tend to facilitate or favor globalization (e.g., trade policy, political connectedness and engagement in international political cooperation) are also less likely to implement travel restrictions.
Placebo analysis with domestic COVID-19 responses
To assess whether the observed delay in travel restriction adoption is better explained by globalization and its interplay with state capacity, we conduct a placebo analysis using COVID-19 policy responses that, at least in theory, cannot be explained by the same mechanism. Specifically, we employ the same event history analysis on domestic non-pharmaceutical interventions (NPIs) imposed to mitigate COVID-19 transmission. While previous studies have argued for [48] and found a substantial negative effect of government effectiveness on the timeliness of enacting school closure policies [36] and other NPIs across Europe [39], there is no obvious reason why the delayed responses to implement domestic NPIs would be related to globalization. Thus, we would expect the interaction term between globalization and government effectiveness to be zero. If our expectation is correct, then we are more comfortable interpreting our previous results as truly reflective of the effect of globalization on travel restrictions, rather than as the effect of globalization on the propensity to implement all types of NPIs.
Data on domestic NPI adoption are derived from the same source as the records on international travel restrictions (i.e., the OxCGRT database). Domestic containment and closure policies include the closing of schools, workplaces, and public transport, restrictions on gatherings and internal movement, the cancellation of public events, and shelter-in-place orders. We follow the approach of [38], who focus only on mandatory nationwide policies adopted.Footnote 22 We again utilize the marginal risk set model in analyzing the timing of adoption of the seven domestic policies; that is, we stratified the seven different policies and their variation in strictness. Similarly, adoption of a stricter version of a policy (e.g., restrictions on gatherings of between 11 and 100 people or of 10 people or fewer) implies the adoption of the less strict version.
The results of the placebo analysis are presented in Table S3, showing the hazard ratios of each factor predicting the adoption of any COVID-19-related NPIs. Comparison of the estimates of several key variables to previous studies, while subject to a larger set of countries and more complete time frame, suggests that our modelling approach is reasonable.Footnote 23 Similar to the adoption of international travel restrictions, more globalized countries are quicker to implement domestic NPIs than their less globalized counterparts.
Notably, the HR estimates are larger in magnitude and have higher statistical significance than those in Table 2 for international travel restrictions. This indicates that, relative to less globalized countries, more globalized countries adopted travel restrictions more slowly than they adopted domestic NPIs, even though one would expect international travel policies to be adopted relatively earlier. Globalized countries therefore appear more reluctant, at least relative to the implementation of domestic interventions, to impose international restrictions. This may be because domestic NPIs are relatively easier to actualize in more globalized countries, as legally binding international travel and trade agreements and regulations and the potential for massive economic losses [23, 33, 34, 35] would also impede the introduction of international travel restriction policies, relative to domestic NPIs. Secondly, and more importantly, we did not find any statistical evidence suggesting that the effect of state capacity varies across countries with different levels of globalization, as the interaction effect between KOF and government effectiveness is not significant. This result holds for the alternative measures of state capacity as well as for measures of health system capacity. Finally, we also show that the results of the placebo analysis are not sensitive to the type of domestic policy adopted (see Table S4) nor to which dimension of globalization is considered, as none of the HRs of their interaction terms is statistically significantly smaller than one.Footnote 24
Nevertheless, while the results from the placebo analysis suggest that the results we see in Table 2 are less likely to arise from, e.g., confounding effects due to other unobserved variables, given the difference in nature of domestic and international NPIs,Footnote 25 we cannot conclusively claim that this is in fact the case. For example, an alternative explanation for why more globalized countries respond relatively faster with domestic policies than do less globalized countries might be found in the fact that most of the domestic policies were implemented at a later stage of the pandemic (compared to travel restrictions which were typically adopted early on). Hence, globalized countries may be better at learning how to coordinate resources and implement social distancing policies.
COVID-19 case severity at the introduction to international travel restriction policies
We conduct an analysis using an OLS model predicting the number of confirmed COVID-19 cases when each travel restriction was implemented.Footnote 26 In each regression, we control for the date when the country had its first confirmed COVID-19 case. For countries with no confirmed cases when the travel restriction was implemented (i.e., the date of the first confirmed case is later than the date of policy adoption), we recode this variable to the date when the policy was adopted.
In Fig. 6, we present the estimates of the KOF globalization index on COVID-19 prevalence (total number of cases in log (6A) and cases per capita in log (6B)) at the time the travel restriction was implemented. We report the estimates obtained from models without controlling for other factors except for the date of the first confirmed case, and from models in which we include a full set of control variables (full regression results are presented in Table S5 and Table S6). This includes government effectiveness, electoral democracy, GDP per capita, unemployment rate, GINI coefficient, number of hospital beds per 1000 people, urban population, population density, whether the country experienced SARS or MERS, and region dummies. Additionally, we also control for containment policies implemented before the introduction of the travel restrictions of interest. We proxy this variable by the average value of the stringency index from the beginning of the time period to the day before the travel policy was adopted.Footnote 27
Fig. 6 Coefficients of globalization index predicting the number of COVID-19 cases at the time of travel restriction. Cap represents 95% confidence intervals. Full regression results are presented in Table S5 and Table S6
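The prior-containment control described above is simply the mean of the daily stringency index over the days before the travel policy of interest was adopted. The sketch below shows one way to compute it; the file and column names are assumptions for illustration.

import pandas as pd

# Minimal sketch with assumed column names; both tables are keyed by 'country'.
# 'stringency' is the daily OxCGRT stringency index; 'policy_date' is the day the
# travel restriction of interest was adopted.
daily = pd.read_csv("stringency_daily.csv", parse_dates=["date"])        # country-day panel
adoption = pd.read_csv("policy_dates.csv", parse_dates=["policy_date"])  # one row per country

merged = daily.merge(adoption, on="country")
before = merged[merged["date"] < merged["policy_date"]]                  # up to the day before adoption

prior_stringency = (before.groupby("country")["stringency"]
                    .mean()
                    .rename("avg_stringency_before_policy"))
print(prior_stringency.head())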
We find strong positive associations between the globalization index and the number of confirmed COVID-19 cases (and per capita cases) at the time the travel restriction policy was first introduced when we only account for when the country was first exposed to COVID-19. In particular, with a one standard deviation increase in globalization index, the predicted number of COVID-19 cases increases by about 1.9 times when screening (or more strict policies) was first adopted, while cases per capita are 7.7 times higher. The globalization multiplier in COVID-19 cases (or cases per capita) is higher when considering firmer travel restrictions (i.e., adoption of quarantine and banning entry from high-risk regions) except for total lockdown. However, the coefficient estimates for globalization predicting COVID-19 cases at the time of total border closure is likely to be underestimated, as a number of highly globalized countries, such as the USA, Japan, South Korea, and a large group of European countries (with the exception of Germany) did not totally close their borders at any point.
Except for the adoption of screening and quarantine, the effect of globalization became statistically insignificant when other control variables are added to the model. The reduction in the effect size is not unexpected as globalization index is highly correlated with several control variables, such as GDP per capita (ρ = 0.631), government effectiveness (ρ = 0.751), and share of the population over 65 (ρ = 0.775).
Additionally, we find further evidence supporting the mediating role of state capacity to the effect of globalization as suggested by the statistically significant interaction effect between globalization and government effectiveness (Table 3). That is, among globalized countries, those with higher state capacity are more likely to have more COVID-19 cases when the government first imposes travel restrictions. This echoes the findings from the time-to-event analysis.
Table 3 State capacity mediating effect on globalization
Non-pharmaceutical interventions such as travel restrictions may be seen as an immediate means by which governments can delay infectious disease emergence and transmission [43], particularly during the early stages of a pandemic when pharmaceutical interventions such as vaccines are not available [43]. To our knowledge, this study is the first to explore the influence of globalization on the timing of international travel restrictions implemented during the recent coronavirus pandemic and the mediating effect of government effectiveness. From a sample of more than 100 countries, we observe that, in general, more globalized countries are more likely to implement international travel restriction policies than their less globalized counterparts. However, we also find that more globalized countries tend to have a higher number of domestic COVID-19 cases before implementing their first travel restriction and also react more slowly to their first confirmed domestic case of COVID-19. Additionally, we find that countries with a higher level of globalization may be relatively more reluctant to impose international travel restrictions compared to domestic social isolation policies, as the effect of globalization on the likelihood of implementation is smaller for the former than for the latter.
Among globalized nations, those with high measures of government effectiveness are less likely to impose international travel restriction policies, suggesting some mediating effect. Perhaps their lower likelihood to implement travel restriction policies is due to (over)confidence in their capability and resources to deal with disease outbreaks, particularly true for some North American and European countries with substantial health system capacity but limited recent experiences with such pandemics [48]. It may also be that high government effectiveness is associated with mechanisms to better evaluate potential costs and benefits of implementing different measures or require approvals, coordination, and action across various levels of (sometimes conflicting) governance. In particular, the interaction variables between government effectiveness and de jure political and economic globalization metrics (i.e., representing policies, trade agreements, and pre-conditions which support greater global mobility and trade) have the largest influence on the likelihood to adopt travel restrictions out of all (sub)dimensions of globalization. Perhaps because the penalties from restrictive travel policies are not insignificant, countries with high government effectiveness and more formalized economic and political integration are more inclined to spend time considering the advantages (e.g., delayed domestic COVID-19 emergence) and disadvantages (e.g., reduced trade and potential conflicts with incumbent trade partners) of travel restrictions because the disadvantages affect them so disproportionately. Out of the interactions between government effectiveness and de facto measures, social measures of globalization have the greatest influence on likelihood to implement travel restrictions. Perhaps a nation with high government effectiveness and more global social, interpersonal, and cultural flows needs more time to consider the practicality of implementing travel restrictions or have their hands tied by commitments to international treaties and travel agreements (i.e., they must maintain 'open' borders to honor their incumbent commitments) [23, 33,34,35]. Given this evidence, we propose that interaction variables between government effectiveness and (sub)dimensions of globalization may be suitable proxies in infectious disease models for the likelihood of a country implementing travel and border restriction policies during a global health crisis such as COVID-19.
Countries often implement policies similar to those employed by their major economic partners, rather than those of close cultural or geographical proximity [55]. They also tend to emulate policy interventions of 'successful' foreign incumbents [56], suggesting some degree of knowledge or information transfer. However, during the early days of a pandemic, there may be limited 'successful' nations to learn from. Our study provides further support to the former proposition: countries are more likely to implement a travel restriction policy if their nearest neighbor (in terms of share of non-resident visitor arrivals) does. The implementation of travel restrictions is related more strongly to confirmed cases in neighbor countries than it is to domestic cases; perhaps this is due to the aim of the policy to keep the disease out rather than minimize spread between nations. Finally, we also find that the likelihood of adopting a more restrictive travel policy (e.g., arrivals ban) is about three times higher if the country has already implemented a less strict policy, suggesting reduced inertia in enacting more restrictive policies once the first measure has been taken.
The benefits of incorporating individual behavioral reactions and governmental policies when modelling the recent coronavirus outbreak in Wuhan, China have been demonstrated [57], and the usefulness of including air travel in the modelling of global infectious disease transmission has been shown [58, 59]. Some empirical evidence points to a small yet significant positive relationship between the implementation of international travel restrictions and the time delay in infectious disease emergence and transmission in the focal country [22, 60, 61]. Broader policy evaluations are still missing. Our results indicate it might be reasonable to assume that global infectious disease forecasting could be improved by including the globalization index while accounting for the mediating role of government effectiveness. In particular, the de jure economic and political dimensions and de facto social dimensions could serve as proxies for an effective government's likelihood and speed to implement travel restriction policies and hence, to predict the likely time delays in disease emergence and transmission across national borders. The authors of [62] include domestic, nationwide pandemic policies in their model, with results suggesting that such policies are effective and must be promptly enforced to demonstrate the greatest benefits. While the results from this study might suggest that including international travel restriction policies could bolster additional support for the adoption of such policies in times of mass disease outbreak, it is important to remember that travel restrictions do not (typically) completely mitigate the emergence of infectious diseases, instead delaying the importation of infectious diseases and potentially minimizing the overall severity of the outbreak [43, 60] and hence, reducing the associated demand for health system resources at the same time. Geographical regions known to be hotspots for the emergence and re-emergence of infectious agents [63, 64] could be considered as early candidates for inbound country-specific travel restrictions in the event of mass disease outbreaks.
Due to the ongoing state of COVID-19 transmission and continued enforcement of travel restriction policies, we are not yet able to fully explore the relationship between globalization and the easing of travel restrictions over time. As this data becomes available in the coming months, we will be able to explore various phenomena related to globalization and the easing of international travel restrictions; for example, whether nations open up too early (i.e., are these nations overconfident in their health system capability?) or the sequence of easing travel restrictions (i.e., do more globalized countries lift restrictions entirely in one go or do they go from strict to less strict?). To this end, further research is required to assess the drivers behind a nation's decision to (not) close its border in a timely fashion, despite its level of globalization.
In any analysis seeking insights based on government-based data sources, there is concern regarding the availability and quality of reporting, as well as the difficulties in drawing robust policy recommendations from these data and the research design of the study. We control for this by incorporating into our analyses a wide and varied set of data sources and analytical tools. In doing so, we aim to strengthen our findings by demonstrating multiple routes/methods to reach similar conclusions. Nevertheless, care should be taken in interpreting the results of our analyses, as correlation does not mean causation. Our findings nonetheless seem to provide strong support for the notion that, in general, more globalized countries are more likely to implement travel restriction policies. However, if they are also high in government effectiveness, they tend to be more hesitant to implement travel restriction policies (both domestic and international), particularly when high in de jure economic and political globalization and de facto social globalization, suggesting a non-negligible mediating effect. Additionally, measurement errors stemming from states' underreporting of outbreaks due to fear of financial losses or lack of testing capacities [18] could also contribute to the explanations of our results.
The recent COVID-19 pandemic highlights the vast differences in approaches to the control and containment of infectious diseases across the world, and demonstrates their varying degrees of success in minimizing the transmission of coronavirus. This paper examines the influence of globalization, its (sub)dimensions, and government effectiveness on the likelihood and timeliness of government interventions in the form of international travel restrictions. We find that countries with higher government effectiveness and globalization are more cautious regarding the implementation of international travel restriction policies. We also find that the de jure economic and political dimensions and the de facto social dimension of globalization have the strongest influence on the timeliness of policy implementation. We also find that countries are more likely to implement travel restrictions if their neighbor countries (in terms of share of non-resident visitor arrivals) do, and that a country is over three times more likely to implement a more restrictive international travel policy measure if it has already adopted a less restrictive one first. These findings highlight the relationship between globalization and protectionist policies as governments respond to significant global events such as the current COVID-19 public health crisis. The findings suggest that the inclusion of such interaction variables in infectious disease models may improve the accuracy of predictions around likely time delays of disease emergence and transmission across national borders and, as such, open the possibility of improved planning and coordination of transnational responses in the management of emerging and re-emerging infectious diseases into the future.
Data and materials used in the study are available online on Open Science Framework (Center for Open Science; see https://osf.io/qg6kc).
While we follow the definition in [44], we acknowledge that there could be potential measurement errors in how the variable is measured. For example, countries may have different criteria for screening and arrival ban policies, which may vary due to the relationship with the target countries, or border closure due to non-COVID-19 reasons (e.g., war). Within-country differences in levels of enforcement and coverage (e.g., state-varying or selected airport screening) of the travel restrictions may also contribute to the measurement error. In addition, the measure records policy for foreign travelers and not citizens (e.g., travelling to the target country). For detailed interpretation of the variable, see https://github.com/OxCGRT/covid-policy-tracker/blob/master/documentation/interpretation_guide.md.
https://kof.ethz.ch/en/forecasts-and-indicators/indicators/kof-globalisation-index.html
See https://ethz.ch/content/dam/ethz/special-interest/dual/kof-dam/documents/Globalization/2019/KOFGI_2019_method.pdf for detailed methods on the calculation of the weights of each component and the overall index.
See also https://ethz.ch/content/dam/ethz/special-interest/dual/kof-dam/documents/Globalization/2019/KOFGI_2019_variables.pdf for a detailed description of each variable used in the index.
The average globalization index among countries (for those with KOF data, mean = 55) without OxCGRT records is slightly less than the global average (mean = 62, 95%CI = [60.1, 64]).
Specifically, the strength of travel restrictions for a given country i that is influenced by the country's neighbors, indexed by j, can be written as: \( Restriction_{it}=\sum_{j=1}^{N}{\gamma}_j\, Restriction_{jt} \), where \( 0<{\gamma}_j<1 \) is the share of country i's visitors that come from country j.
Additionally, we check the robustness of our results using the number of physicians per 1000 people and nurses and midwives per 1000 people; we present those results in the supplementary information.
Regions are defined as Africa, Asia, Central America, Europe, North America, Oceania, and South America.
While the marginal risk set model treats each failure event as an independent process, the hazards of implementing more restrictive travel policies may not be independent of whether a less restrictive policy has already been implemented. We capture this dependence by incorporating in our model a time-varying variable indicating whether the country has implemented a less restrictive policy.
Cronert [36] stratified countries by the date of the first confirmed case; however, we believe this might cause over-stratification. We also group together countries that did not record their first confirmed case before April (n = 10).
This approach is also used by [36, 50] when examining the adoption rate of domestic NPI policies. [38] defines the beginning of time-at-risk as the date of the first confirmed COVID-19 case in the country, thus treating countries which implemented a policy before having the first confirmed case as left-censored observations. While this approach is more sensible when examining the adoption rate of domestic NPI policies (i.e., the country is not yet at risk for the failure – policy implementation), it risks removing countries that engaged in a precautionary strategy, i.e., implementing travel restrictions before domestic outbreaks of COVID-19.
Since the effect of travel restrictions might delay an outbreak of the virus, which itself might be more salient for more globalized countries, we check the correlation by censoring negative gaps (travel policy implementation before first confirmed COVID-19 case) to zero. The correlation is highly statistically significant, while the effect size is smaller (ρ = 0.248; p = 0.0011; n = 170). Four countries were excluded from the calculation as they have zero COVID-19 cases during the entire sample period. The correlation increases to ρ = 0.366 (p < 0.001) when the end of the sample period date is used to calculate the first policy implementation-first case gap for these countries.
We obtain very similar results when confirmed cases are adjusted for population size, i.e., log confirmed COVID-19 cases per million people (ρ = 0.397; p < 0.001; n = 173).
In a separate model, we control for death rate instead of number of new confirmed cases in the last seven days; the effect of either variable is statistically insignificant when added separately in the model or together.
The results are highly robust when we substitute other measures of democracy for electoral democracy, such as the Boix-Miller-Rosato (BMR) dichotomous coding of democracy [54], (revised) polity score and institutionalized democracy score from Polity V.
In addition, the effect is more pronounced if health capacity is measured with number of physicians per 1000 people. However, using the number of nurses and midwives per 1000 has no effect.
The HR estimates of each globalization dimension are also presented in Figure S6 (diamonds) for reference.
In a more sophisticated model where we include all interaction terms between each KOF subdimension (three de facto and three de jure dimensions) and government effectiveness, we find that the estimate of the interaction effect with de jure political dimension is most economically and statistically significant.
See https://www.who.int/nmh/resource_centre/strategic_objective6/en/
See https://www.weforum.org/agenda/2020/09/global-cooperation-international-united-nations-covid-19-climate-change/
See https://news.un.org/en/story/2020/08/1069702
In unreported analysis, we included policy recommendation of closure and containment. This does not alter the findings.
For example, we find some evidence of policy diffusion beyond the OECD context [38], while the timing of domestic NPI adoption is not sensitive to foreign COVID-19 cases. We also find robust evidence that countries with a large state capacity delay implementation of domestic COVID-19 policies [36, 39]. Interestingly, we also find that countries with relevant past experience (SARS and MERS) intervened relatively early [48].
Notably, the HRs for gathering and internal movement restrictions are statistically (at 10% level) larger than one.
The former are intended to prevent and control mass transmission of the virus within the country, while the latter aims to prevent the virus from entering the country. In particular, countries adopted travel restrictions at an earlier stage than domestic policies, which were mostly adopted between mid-March and April.
If the country did not adopt the travel restriction, we take the COVID-19 case statistics at the end of the sample period (n = 4 for entry ban and n = 37 for total border closure). Since we use cumulative case statistics, the resulting coefficients are likely to be underestimated. This is because the sample of countries that did not implement travel bans has a higher level of globalisation than the mean, including the UK and the USA.
This measure captures the adoption of seven domestic containment policies and public information campaign, as well as the implementation of less restrictive travel restrictions. See https://github.com/OxCGRT/covid-policy-tracker/blob/master/documentation/index_methodology.md for the construction of the stringency index.
There is no funding support for the study.
School of Economics and Finance, Queensland University of Technology, 2 George St, Brisbane, QLD, 4000, Australia
Steve J. Bickley, Ho Fai Chan & Benno Torgler
Centre for Behavioural Economics, Society and Technology (BEST), 2 George St, Brisbane, QLD, 4000, Australia
Steve J. Bickley, Ho Fai Chan, David Stadelmann & Benno Torgler
Department of Global Economics & Management, University of Groningen, Groningen, The Netherlands
Ahmed Skali
University of Bayreuth, Bayreuth, Germany
David Stadelmann
CREMA – Centre for Research in Economics, Management, and the Arts, Südstrasse 11, CH-8008, Zürich, Switzerland
David Stadelmann & Benno Torgler
Steve J. Bickley
Ho Fai Chan
Benno Torgler
SJB, HFC and BT designed the research; HFC extracted the data; SJB, HFC and BT analyzed the data and drafted the paper. AS and DS revised the manuscript and provided substantial inputs. All authors read and approved the final manuscript.
Correspondence to Ho Fai Chan.
There is no competing interest to declare.
Steve J. Bickley, Ho Fai Chan and Benno Torgler contributed equally to this work
Additional file 1 Fig. S1.
Country-specific timeline for adoption of travel policy restrictions. Diamond markers with black outlines represent the first travel restriction implemented. Countries are ranked according to the Globalization measure. ^Countries with no travel restriction records (n = 32). *Countries without KOF index (n = 24). Five countries do not have any confirmed COVID case at time of study. Fig. S2. Correlation between timing of first confirmed COVID case and globalization. Pearson's correlation (ρ) is − 0.543 (p < 0.001). Marker size represents the total number of COVID cases at time of data collection. Horizontal and vertical lines indicate the respective mean. Fig. S3. Correlations between KOF globalization index and the number of days between first COVID-19 case and travel restriction implementation (A-D) and number of COVID-19 cases at the time of first travel restriction (E-H). For each country, we calculate the measure of interest by taking the earliest of either the implementation date of the focal policy (e.g., quarantine) or the date of a more restrictive travel policy being adopted. Thus, the measures can be interpreted as the number of days lapsed since the first confirmed COVID-19 case or the number of COVID-19 cases when a 'at-least-as-strict' travel policy x was in place, respectively. Marker size represents the total number of COVID-19 cases at time of the respective policy implementation. Color indicates geographical regions (see Fig. S2 legend). Pearson's correlations: A (ρ = 0.35, p < 0.001, n = 170); B (ρ = 0.323, p < 0.001, n = 170); C (ρ = 0.240, p = 0.0017, n = 170); D (ρ = 0.287, p = 0.001, n = 170); E (ρ = 0.408, p < 0.001, n = 173); F (ρ = 0.494, p < 0.001, n = 173); G (ρ = 0.502, p < 0.001, n = 173); H (ρ = 0.506, p < 0.001, n = 173). Fig. S4. Robustness checks with alternative measure of country closeness. HRs of diffusion of travel restrictions (left) and prevalence of COVID-19 in neighboring countries (right) on adoption of travel restrictions. Cap represents 95% confidence intervals. Fig. S5. HRs of interaction terms between globalization index and government effectiveness on adoption of travel restrictions. Cap represents 95% confidence intervals. Fig. S6. Estimates of the HRs of different dimensions of the globalization index on adoption of travel restrictions. Circle markers represent estimates from the main effects model (i.e., without interaction terms), with KOF indices included in the model one at the time. Triangle markers show the estimated HRs of the three KOF dimensions added together in the same model (competing effects). Diamonds show the HR estimates of the globalization dimensions in the interaction model. Cap represents 95% confidence intervals. Table S1. List of countries with no OxCGRT data (as of 23 September 2020). Table S2. List of countries with no KOF measures. Table S3. Placebo analysis with domestic COVID-19 responses. Table S4. Placebo analysis with specific domestic COVID-19 NPIs. Table S5. Prediction of number of COVID-19 cases at the adoption of travel restriction. Table S6. Prediction of COVID-19 case per capita at the adoption of travel restriction
Bickley, S.J., Chan, H.F., Skali, A. et al. How does globalization affect COVID-19 responses?. Global Health 17, 57 (2021). https://doi.org/10.1186/s12992-021-00677-5
Log Likelihood estimation
The log-likelihood is the objective function and a key piece of information. The log-likelihood cannot be computed in closed form for nonlinear mixed effects models. It can, however, be estimated.
Performing likelihood ratio tests and computing information criteria for a given model requires computation of the log-likelihood
$$ {\cal L}{\cal L}_y(\hat{\theta}) = \log({\cal L}_y(\hat{\theta})) \triangleq \log(p(y;\hat{\theta})) $$
where \(\hat{\theta}\) is the vector of population parameter estimates for the model being considered, and \(p(y;\hat{\theta})\) is the probability distribution function of the observed data given the population parameter estimates. The log-likelihood cannot be computed in closed form for nonlinear mixed effects models. It can however be estimated in a general framework for all kinds of data and models using the importance sampling Monte Carlo method. This method has the advantage of providing an unbiased estimate of the log-likelihood – even for nonlinear models – whose variance can be controlled by the Monte Carlo size.
Two different algorithms are proposed to estimate the log-likelihood:
by linearization,
by Importance sampling.
Log-likelihood by importance sampling
The observed log-likelihood \({\cal LL}(\theta;y)=\log({\cal L}(\theta;y))\) can be estimated without requiring approximation of the model, using a Monte Carlo approach. Since
$${\cal LL}(\theta;y) = \log(p(y;\theta)) = \sum_{i=1}^{N} \log(p (y_i;\theta))$$
we can estimate \(\log(p(y_i;\theta))\) for each individual and derive an estimate of the log-likelihood as the sum of these individual log-likelihoods. We will now explain how to estimate \(\log(p(y_i;\theta))\) for any individual i. Using the \(\phi\)-representation of the model (the individual parameters are transformed to be Gaussian), notice first that \(p(y_i;\theta)\) can be decomposed as follows:
$$p(y_i;\theta) = \int p(y_i,\phi_i;\theta)d\phi_i = \int p(y_i|\phi_i;\theta)p(\phi_i;\theta)d\phi_i = \mathbb{E}_{p_{\phi_i}}\left(p(y_i|\phi_i;\theta)\right)$$
Thus, \(p(y_i;\theta)\) is expressed as a mean. It can therefore be approximated by an empirical mean using a Monte Carlo procedure:
Draw M independent values \(\phi_i^{(1)}\), \(\phi_i^{(2)}\), …, \(\phi_i^{(M)}\) from the marginal distribution \(p_{\phi_i}(.;\theta)\).
Estimate \(p(y_i;\theta)\) with \(\hat{p}_{i,M}=\frac{1}{M}\sum_{m=1}^{M}p(y_i | \phi_i^{(m)};\theta)\)
By construction, this estimator is unbiased, and consistent since its variance decreases as 1/M:
$$\mathbb{E}\left(\hat{p}_{i,M}\right)=\mathbb{E}_{p_{\phi_i}}\left(p(y_i|\phi_i^{(m)};\theta)\right) = p(y_i;\theta) ~~~~\mbox{Var}\left(\hat{p}_{i,M}\right) = \frac{1}{M} \mbox{Var}_{p_{\phi_i}}\left(p(y_i|\phi_i^{(m)};\theta)\right)$$
We could consider ourselves satisfied with this estimator since we "only" have to select M large enough to get an estimator with a small variance. Nevertheless, it is possible to improve the statistical properties of this estimator.
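As an illustration, here is a minimal sketch of this plain Monte Carlo estimator (not Monolix's implementation; the function names and the user-supplied callbacks are assumptions, and the result is returned on the log scale for numerical stability):

```python
import numpy as np
from scipy.special import logsumexp

def naive_mc_loglik(log_cond_density, sample_marginal, M=10_000):
    """Plain Monte Carlo estimate of log p(y_i; theta).

    log_cond_density(phi) -> log p(y_i | phi; theta)   (user-supplied)
    sample_marginal(M)    -> M draws from the marginal distribution p(phi_i; theta)
    """
    phis = sample_marginal(M)
    log_terms = np.array([log_cond_density(phi) for phi in phis])
    # log of the empirical mean (1/M) * sum_m p(y_i | phi^(m)), computed stably
    return logsumexp(log_terms) - np.log(M)
```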
For any distribution \(\tilde{p_{\phi_i}}\) that is absolutely continuous with respect to the marginal distribution \(p_{\phi_i}\), we can write
$$ p(y_i;\theta) = \int p(y_i|\phi_i;\theta) \frac{p(\phi_i;\theta)}{\tilde{p}(\phi_i;\theta)} \tilde{p}(\phi_i;\theta)d\phi_i = \mathbb{E}_{\tilde{p}_{\phi_i}}\left(p(y_i|\phi_i;\theta)\frac{p(\phi_i;\theta)}{\tilde{p}(\phi_i;\theta)} \right).$$
We can now approximate \(p(y_i;\theta)\) using an importance sampling integration method with \(\tilde{p}_{\phi_i}\) as the proposal distribution:
Draw M independent values \(\phi_i^{(1)}\), \(\phi_i^{(2)}\), …, \(\phi_i^{(M)}\) from the proposal distribution \(\tilde{p_{\phi_i}}(.;\theta)\).
Estimate \(p(y_i;\theta)\) with \(\hat{p}_{i,M}=\frac{1}{M}\sum_{m=1}^{M}p(y_i | \phi_i^{(m)};\theta)\frac{p(\phi_i^{(m)};\theta)}{\tilde{p}(\phi_i^{(m)};\theta)}\)
By construction, this estimator is unbiased, and its variance also decreases as 1/M:
$$\mbox{Var}\left(\hat{p}_{i,M}\right) = \frac{1}{M} \mbox{Var}_{\tilde{p_{\phi_i}}}\left(p(y_i|\phi_i^{(m)};\theta)\frac{p(\phi_i^{(m)};\theta)}{\tilde{p}(\phi_i^{(m)};\theta)}\right)$$
There exists an infinite number of possible proposal distributions \(\tilde{p}\) which all provide the same rate of convergence 1/M. The trick is to reduce the variance of the estimator by selecting a proposal distribution such that this variance term is as small as possible.
For this purpose, an optimal proposal distribution would be the conditional distribution \(p_{\phi_i|y_i}\). Indeed, for any \(m = 1,2, …, M,\)
$$ p(y_i|\phi_i^{(m)};\theta)\frac{p(\phi_i^{(m)};\theta)}{p(\phi_i^{(m)}|y_i;\theta)} = p(y_i;\theta) $$
which has a zero variance, so that only one draw from \(p_{\phi_i|y_i}\) is required to exactly compute the likelihood \(p(y_i;\theta)\).
The problem is that it is not possible to generate the \(\phi_i^{(m)}\) with this exact conditional distribution, since that would require computing a normalizing constant, which here is precisely \(p(y_i;\theta)\).
Nevertheless, this conditional distribution can be estimated using the Metropolis-Hastings algorithm and a practical proposal "close" to the optimal proposal \(p_{\phi_i|y_i}\) can be derived. We can then expect to get a very accurate estimate with a relatively small Monte Carlo size M.
The mean and variance of the conditional distribution \(p_{\phi_i|y_i}\) are estimated by Metropolis-Hastings for each individual i. Then, the \(\phi_i^{(m)}\) are drawn with a noncentral Student's t-distribution:
$$ \phi_i^{(m)} = \mu_i + \sigma_i \times T_{i,m}$$
where \(\mu_i\) and \(\sigma^2_i\) are estimates of \(\mathbb{E}\left(\phi_i|y_i;\theta\right)\) and \(\mbox{Var}\left(\phi_i|y_i;\theta\right)\), and \((T_{i,m})\) is a sequence of i.i.d. random variables distributed with a Student's t-distribution with \(\nu\) degrees of freedom (see section Advanced settings for the log-likelihood for the number of degrees of freedom).
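A minimal sketch of this importance sampling scheme is given below. It is not Monolix's implementation: it assumes a diagonal Gaussian marginal for \(\phi_i\), a user-supplied conditional log-density \(\log p(y_i|\phi_i;\theta)\), and conditional means and standard deviations \((\mu_i, \sigma_i)\) already estimated (e.g. by the Metropolis-Hastings run of the conditional distribution task); all names are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

def is_loglik_individual(log_cond_density, prior_mean, prior_sd,
                         mu_i, sigma_i, M=10_000, nu=5, rng=None):
    """Importance sampling estimate of log p(y_i; theta) with a t proposal.

    log_cond_density(phi) -> log p(y_i | phi; theta)      (user-supplied)
    prior_mean, prior_sd  -> marginal (Gaussian) distribution of phi_i
    mu_i, sigma_i         -> estimated conditional mean / sd of phi_i given y_i
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(mu_i)
    # phi^(m) = mu_i + sigma_i * T, with T ~ Student's t with nu degrees of freedom
    T = rng.standard_t(nu, size=(M, d))
    phis = mu_i + sigma_i * T

    log_w = np.empty(M)
    for m in range(M):
        log_prior = stats.norm.logpdf(phis[m], prior_mean, prior_sd).sum()
        log_proposal = (stats.t.logpdf(T[m], df=nu) - np.log(sigma_i)).sum()
        log_w[m] = log_cond_density(phis[m]) + log_prior - log_proposal

    # log of (1/M) * sum of the importance weights, computed stably
    return logsumexp(log_w) - np.log(M)
```

The total log-likelihood estimate is then the sum of these individual contributions over the N individuals.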
Remark: the standard error of the LL computed over all the draws is also reported. It represents the impact of the variability of the draws on the LL uncertainty, given the estimated population parameters, but it does not take into account the uncertainty of the model that comes from the uncertainty on the population parameters.
Remark: Even if \(\hat{\cal L}_y(\theta)=\prod_{i=1}^{N}\hat{p}_{i,M}\) is an unbiased estimator of \({\cal L}_y(\theta)\), \(\hat{\cal LL}_y(\theta)\) is a biased estimator of \({\cal LL}_y(\theta)\). Indeed, by Jensen's inequality, we have:
$$\mathbb{E}\left(\log(\hat{\cal L}_y(\theta))\right) \leq \log \left(\mathbb{E}\left(\hat{\cal L}_y(\theta)\right)\right)=\log\left({\cal L}_y(\theta)\right)$$
Best practice: the bias decreases as M increases and also if \(\hat{\cal L}_y(\theta)\) is close to \({\cal L}_y(\theta)\). It is therefore highly recommended to use a proposal as close as possible to the conditional distribution \(p_{\phi_i|y_i}\), which means having to estimate this conditional distribution before estimating the log-likelihood (i.e. run task "Conditional distribution" before).
Log-likelihood by linearization
The likelihood of the nonlinear mixed effects model cannot be computed in closed form. An alternative is to approximate this likelihood by the likelihood of the Gaussian model deduced from the nonlinear mixed effects model after linearization of the function f (defining the structural model) around the predictions of the individual parameters \((\phi_i; 1 \leq i \leq N)\).
Notice that the log-likelihood cannot be computed by linearization for discrete outputs (categorical, count, etc.) or for mixture models.
Best practice: We strongly recommend computing the conditional mode before computing the log-likelihood by linearization. Indeed, the linearization should be made around the most probable values, as they are the same for both the linear and the nonlinear model.
The linearization algorithm can only be used for continuous data. In that case, this method is generally much faster than the importance sampling method and also gives good estimates of the LL. The LL calculation by model linearization will generally be able to identify the main features of the model. More precise, and more time-consuming, estimation procedures such as stochastic approximation and importance sampling will have very limited impact in terms of decisions for these most obvious features. Selection of the final model should instead use the unbiased estimator obtained by Monte Carlo.
In case of estimation using the importance sampling method, a graphical representation is provided so that the evolution of the estimated value over the Monte Carlo iterations can be followed.
The final estimates are displayed in the results frame. Notice that there is a "Copy table" icon at the top of each table to copy it into Excel, Word, etc.; the table format and display will be kept.
The log-likelihood is given in Monolix together with the Akaike information criterion (AIC) and Bayesian information criterion (BIC):
$$ AIC = -2 {\cal L}{\cal L}_y(\hat{\theta}) +2P $$
$$ BIC = -2 {\cal L}{\cal L}_y(\hat{\theta}) + \log(N) P $$
where P is the total number of parameters to be estimated and N the number of subjects.
The new BIC criterion penalizes the size of \(\theta_R\) (which represents random effects and fixed covariate effects involved in a random model for individual parameters) with the log of the number of subjects (\(N\)) and the size of \(\theta_F\) (which represents all other fixed effects, so typical values for parameters in the population, beta parameters involved in a non-random model for individual parameters, as well as error parameters) with the log of the total number of observations (\(n_{tot}\)), as follows:
$$ BIC_c = -2 {\cal L}{\cal L}_y(\hat{\theta}) + \dim(\theta_R)\log N+\dim(\theta_F)\log n_{tot}$$
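As a small illustration of how these criteria combine (the function name and signature below are assumptions, not a Monolix API):

```python
import math

def information_criteria(loglik, dim_theta_R, dim_theta_F, N, n_tot):
    """AIC, BIC and corrected BIC (BICc) from an estimated log-likelihood."""
    P = dim_theta_R + dim_theta_F          # total number of estimated parameters
    aic = -2 * loglik + 2 * P
    bic = -2 * loglik + math.log(N) * P
    bicc = -2 * loglik + dim_theta_R * math.log(N) + dim_theta_F * math.log(n_tot)
    return aic, bic, bicc
```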
If the log-likelihood has been computed by importance sampling, the number of degrees of freedom used for the proposal t-distribution (5 by default) is also displayed, together with the standard error of the LL on the individual parameters drawn from the t-distribution.
In terms of output, a folder called LogLikelihood is created in the result folder, where the following files are created:
logLikelihood.txt: containing for each computed method, the -2 x log-likelihood, the Akaike Information Criteria (AIC), the Bayesian Information Criteria (BIC), and the corrected Bayesian Information Criteria (BICc).
individualLL.txt: containing the -2 x log-likelihood for each individual for each computed method.
Advanced settings for the log-likelihood
Monolix uses a t-distribution as the proposal. By default, the number of degrees of freedom of this distribution is fixed to 5. In the settings of the task, it is also possible to optimize the number of degrees of freedom. In such a case, the default possible values are 1, 2, 5, 10 and 20 degrees of freedom. A distribution with a small number of degrees of freedom (i.e., heavy tails) should be avoided for models defined with stiff ODEs.
The solution to economics
Re: The solution to economics
by Silhouette » Mon Jul 20, 2020 11:05 pm
MagsJ wrote: A system that fostered indefinite economic stability, low taxation, a realistic living wage for grown people working full time jobs to support their families, affordable essential goods produce and services, and other such essential consumer needs, would be more than well-met I'm sure.
The only tax-free countries I know of are those who have oil bolstering their economies, so what would be the bolster for countries that don't?
Have you tried running a SIM on it Sil, to see how it would fare over time? now that would be interesting.
I did indeed start programming a simulation for this very purpose. My C++ skills ought to be adequate, though not long ago I found myself getting bogged down with the syntax of adapting certain functions to my needs and the whole thing was put on hold for a bit - it's hard enough to structure and systematise a sufficiently complex simulation of a society as it is without adapting it to my model, but hopefully I'll find myself able to resume my efforts in the near future. The effect should meet all of the above requirements - but who knows? Maybe it'll all amount to nothing. I'm just offering a preview at this point to gauge the reactions of people who might be interested in such a project, and should simulated practice back up the theory I've glossed over, I'll see about presenting it in a more official capacity.
Fixed Cross wrote: What I like about Silhouette is that he so proud that he has to prove he is objectively worth something as a philosopher from time to time. This is what philosophy is made of, proud men, and some very proud women as well. There never was a humble teacher. Just smart enough to act humbly here and there so people might be more inclined to take his teachings to heart, and think about them honestly.
Not quite my MO. I'm a little beyond having anything to prove at this point - I've already made several original breakthroughs across several fields, including theology (categorically disproving God), mathematics (disproving Cantor's diagonal argument), philosophy (Experientialism) and now economics. You're correctly identifying that I'm proud of my achievements, but my stating of their existence and quality is simple fact - I seek no personal acknowledgement nor to gain any sense of social worth, my pride is for me and contingent upon my successes rather than a general personality trait. From ye olde Zarathustra: "Man is something that is to be surpassed" - when I gather too much honey, I perform a "down-going" to serve like an alpha or beta test. That's all this is, and I'm sure the style of my contributions and what I'm saying here can come across as pride or even arrogance - which is fine, I don't mind how people think of me as a person, I just have an aesthetic preference for correcting things.
Yes, you do appear to be one of only a few so far who "has understood this thing". You also offer some interesting insights reading "around" my solution in a way I hadn't begun to consider, and it's not without interest that you see overlap with your Value Ontology. I'd say Phoneutria has also addressed some of the particulars, which is very welcome, in predictable stark contrast to this guy:
Urwrongx1000 wrote: "Solution" to economics he says... LOL, when will you learn Sil? Ever?
Let me guess, your "solution" is to take my taxpayer money, and dole it out among the poor and whomever else YOU decide, with MY money?
How do you think you are coming across here?
That you need to "guess" what my solution is, and that you guess very poorly, proves that you have failed to even begin to read anything I've written before deciding what to say about it. Your subsequent posts show that you've maybe made it as far as the first line.
Your immediate dismissal on these grounds alone indicates that you've already resolved to disagree regardless of what I have to offer, and your subsequent assertions confirm my initial suspicions that you intend to disagree merely on the grounds of arguments I've already heard from you ad nauseam.
What then is my motivation to engage with you when any possible discussion is already pre-determined? It's not without irony that there is no possible freedom to any interaction with you.
I'll keep a look out for any signs that you've attempted to open your mind and impartially consider what I've actually said, instead of simply taking the opportunity to reel off the usual pre-prepared sentiments - but we both know exactly where this is going to go.
With the housekeeping out of the way, onto some content:
phoneutria wrote: as annoying as ecmandu is, I agree with him on the following
why don't we hear the wealthy boast about being the ones paying the highest taxes of all
you'd think that they'd throw this number around all over the media
"I paid x millions in tax this year! y% of my income!"
instead they keep their tax returns under lock and key
because they are scamming the people out of their taxes due in every possible way that they can
because fuck taxes
i can see this working in a sort of science fiction utopic flick, with a different species entirely, that evolved with those values built into their physiology. not us apes.
if i have to pay $300 for a pound of rice which costs $3 to my housekeeper, I'm going to pay her $10 to go buy me a pound of rice, and she is going to take it
I'm willing to grant Ecmandu's cynical evaluation of the wealthy, even in its extreme form, and it would still overlook a certain subtlety that I briefly mentioned in an earlier post.
Let's say that the wealthy really do have absolutely no interest whatsoever in publicly declaring or even participating in any philanthropy - that's not the only factor at play here. Maximum dedication to the "bottom line" of profits depends also on the appearance that there is more to it than that. This is what Zizek would refer to as "ideology". He demonstrates this better than I can, with various often humorous examples, that cold hard material facts alone don't complete the entire human picture - that in cultures all over the world we find the underlying mechanisms of society functioning only with a kind of narrative to mask it and make it palatable. It's not the least charm of history that we can look back at all the quaint rituals and mythologies of the past from which we have now (mostly) grown too sophisticated to take seriously - instead adopting new ideologies that are yet to become noticed by the general public and then sufficiently questioned, as they always eventually will be. Philosophers tend to be at the forefront of such "progressions", with psychologists close behind to ease the emotional transition. For example, we used to be professionally counselled for enjoying ourselves too much and now we are professionally counselled for not feeling we're enjoying ourselves enough. Without ideology, sex would just be a biological chore consisting only of the mechanical actions to pragmatically achieve the continuation of the species - without all the flirtation, suggestive innuendos and suspense. It appears to be a psychological necessity for humans to unconsciously participate in ideology, which always takes the form of pretending that we're all doing good, which currently involves the mutually agreed flattering perception that profits aren't actually the only thing that matter to employers.
Apologies if you're already familiar with ideology, but you see how this means that any potential cognitive dissonance that might result from doubting this ideology must be proven to be unfounded through token demonstrations of altruism like charity. It doesn't take much to realise that if we were truly charitable, we'd fix the systems that afford us enough disposable income to give to charity in the first place, which are the same systems that result in the existence of those who need charity in the first place. But obviously that'd kinda fuck you over and who are these charity cases to deserve what we've worked hard to earn for ourselves anyway? Ideology. The wealthy need to believe they're good people and we see everywhere the effects of market forces causing companies to present themselves as environmentally friendly (greenwashing) etc. - to maintain and hopefully increase their share of the market and competitive advantage.
All it takes is a few early adopters of my solution to try and grab some quick initial fame and recognition that they would otherwise not get, and the competition soon realises they are losing out relatively, making them feel the need to at least appear to give a shit whether or not they "really" do, and sign up to participate in contemporary ideology (and not incidentally reap the benefits of doing so).
It's not as simple as "fuck taxes", even if that's how one really feels.
It's obvious why companies all keep their tax records secret in just the way you explain - but it's not so obvious why you'd want to stay off the radar by refusing to participate in my solution, which is entirely voluntary don't forget.
by Ecmandu » Mon Jul 20, 2020 11:15 pm
Silhouette,
Your last reply made me realize that you think your system is so much better, that anyone who comes on board will immediately crush the competition.
Also, I disproved cantors diagonalization argument as well! I wonder if we have the same disproof. Mine is super simple! Just make new lists for all the diagonals! (Duh). Sometimes the simplest things are the hardest to figure out!
by Silhouette » Tue Jul 21, 2020 9:27 pm
Ecmandu wrote: Your last reply made me realize that you think your system is so much better, that anyone who comes on board will immediately crush the competition.
It's not even necessary for my system to be "so much better" than what we have, it is sufficient to merely introduce a new valuable way for people to derive competitive advantage that also happens to benefit society, and market forces do the rest.
Ecmandu wrote: Also, I disproved cantors diagonalization argument as well! I wonder if we have the same disproof. Mine is super simple! Just make new lists for all the diagonals! (Duh). Sometimes the simplest things are the hardest to figure out!
Don't worry, I've not forgotten - you tend to remind me of this every time I bring it up. I was just mentioning my achievements, feel free to list yours but your achievements won't be on my list of achievements because they're yours and not mine.
I seem to remember that we don't have the same disproof - mine simply shows that his proof is only conditionally true dependent on the numeral system being used, and that it is false in e.g. unary. His diagonalisation method requires a list of sets such that the list is as long as the size of the sets (in order to make a square across which a diagonal set can be constructed). The possible length of the list of sets will always be equal to the numerical base being used to the power of the size of the sets, which for bases more than 1 will always mean a longer possible list than the size of the sets, leaving plenty of "other" combinations to be constructed from a list that's only as long as the size of the sets. But since this isn't always the case, depending on the base system used, his proof fails when it comes to number bases like unary.
I've suggested before that "new lists for all the diagonals" would presumably require extra dimensions than just the 2 used in Cantor's argument, along which to construct these new lists - I think you agreed.
But back to the topic of the thread...
by Fixed Cross » Tue Jul 21, 2020 9:44 pm
Yes, I always add much value when I consider an idea seriously, "let's see if I can make this kitty purr" is my general attitude when someone has put some work in something but it doesn't quite run yet.
Not that I've gotten it road-ready, to be fair. I merely handed you a set of tools to develop it.
You could really have something here. It depends entirely on whether you can make it into a model.
Note: considering that this proto-idea might very well not become a full fledged idea, I'll revert back to my default of a 20 percent flat yearly revenue tax on companies above a certain market cap and no private taxes.
That's a simple idea, and it is guaranteed to work in the ways I think a society should work; it alleviates pressure from people in general and it is a big hurdle for companies on the road to hegemonic positions.
it creates lots of problems, but
by Ecmandu » Tue Jul 21, 2020 10:34 pm
(Side thread in this thread)
Fuck silhouette!! Ok, there's the deal with math. Every fucking operation is a new dimension.
You cannot do unary without the space or enter bars. That's 3 dimensions. Nothing can be done without 3 dimensions! My technique is just making a new list (actually) an infinite number of them (to subsume all possible diagonals from the first list). I'm still using 3 dimensional logic. *end rant*
NOW!!! About this thread!!! If you don't have a system that defeats the competition, you're going to have to force people to do it. Just like my system.
So then it becomes a matter of which system is best to force on people.
by Silhouette » Wed Jul 22, 2020 8:45 pm
Fixed Cross wrote: Yes, I always add much value when I consider an idea seriously, "lets see if I can make this kitty purr" is my general attitude when someone has put some work in something but it doesn't quite run yet.
It runs, it just hasn't "been run" yet - it's road-ready to the extent that it could most certainly be driven, though "road-ready" presumably also entails knowledge of "how" to drive it and "how good" it would look being actually driven. To those ends, details still need to be ironed out.
The philosophy "around it" could of course be worth exploring, as you have begun to do and I have not (for which you "offer tools") - but it's not really my concern at the moment as I'm more interested in seeing it be driven first.
It already is a model, cohesive and clear in its foundations, just not a completed one with all the details worked out. That's more what I'm interested in exploring at the moment.
Fixed Cross wrote: Note: considering that this porto-idea might very well not become a full fledged idea, Ill revert back to my default of a 20 percent flat yearly revenue tax on companies above a certain market cap and no private taxes.
It's an interesting reflex to side with "the devil you know", having been shown that "the grass is greener on the other side". Of course the ultimate evaluation lies in real application, but consider that this is by default a rejection of anything new - unless you're simply siding with caution preliminarily, subject to further evidence: "on the fence", so to speak, but perhaps leaning more towards getting off on the side of the "tried and flawed" than "the new and potentially less flawed". An entrepreneurial spirit, perhaps, is what's needed to commit to exploring a "proto-idea" over a "full fledged idea".
To briefly comment on your "default", any "market caps" inherently divide society and create tension either side of the cut-off point: it's arguable that your default creates a lack of incentive to want to progress beyond the "market cap" where suddenly a fifth of your earnings is taken away from you by threat of the force of law, making the higher bound of non-taxpayers considerably more rich than the lower bound of taxpayers, incentivising the lower bound of taxpayers to "earn less to earn more". People will divide themselves into being clearly one or the other to avoid that awkward middle ground, and on a social level a stigma will be created dividing the "social contributors" from those who don't contribute to society. To avoid all this, I believe tax systems often resort to only taxing revenue above the market cap at the higher rate, and taxing the revenue below that market cap at the lower rate, but this is why I side with a continuous function (the 80-20 curve) and completely avoid these issues altogether. Additionally, I apply my solution specifically to "expenditure" and not "revenue", because I do not want to even go near any potential penalisation of working to create revenue. If anything, it's "taking" from society for yourself that has grounds to be interfered with, but giving to society ought to be encouraged - and my solution allows this explicitly through the mechanism of creating a "disconnect in the continuous flow of currency around the whole economy": allowing a temporary disjunct between the price paid and price received (ultimately fully accounted for by paying off the deficits with the surpluses with mathematical precision).
All of this, as well as the arguments I've made thus far - and more that I've so far neglected to mention - is why my solution is so much more sophisticated than the comparatively much more blunt instruments of "tax this in this way, but don't tax that in that way", which is the mental box inside which the overwhelming majority tend to prefer to remain.
Ecmandu wrote: If you don't have a system that defeats the competition, you're going to have to force people to do it. Just like my system.
Yes, for lack of a better approach than using force, you're going to have to either force people in some way like in your system, or simply throw out force altogether and do nothing - as in the much more simplistic Libertarian "laissez faire" anti-solution.
That's why I resolved to create a better approach than either "using force" or "doing nothing". Turn to the carrot - not the stick. Some sticks are better than others as you say, such as your "maximum wage" system, but my intention is to think outside of this "tax box" and instead explore the realm of incentives rather than duke it out in the seemingly endless fight over which is the "least bad" way to lessen iniquities. Ultimately that fight is always won by those in power in the same way: the negative effects are largely passed down to those with the least power, and those with the most power remain largely unaffected. The only way to resolve this is to come up with a mechanism that incentivises those in power to genuinely want to lessen iniquities, by tying this in with their self-interest (which was always all they were going to follow anyway) - with or without any of these impotent attempts to "tax them" that don't provide any incentive for them to not simply decide "nah, not gonna do that" as they've always done so far.
That's why I went ahead and solved economics.
by phoneutria » Wed Jul 22, 2020 9:28 pm
people already voluntarily donate to charitable foundations and greenwash their businesses, for reputation
by Silhouette » Wed Jul 22, 2020 11:54 pm
phoneutria wrote: people already voluntarily donate to charitable foundations and greenwash their businesses, for reputation
I mean, quite a lot - but I find this kind of thinking and questioning really valuable, so thanks for pushing your point.
As in a previous post of yours, you mean to draw attention to the similarities between what we have already, and what I'm proposing - correct?
These similarities are entirely intentional, because it seems that if you propose to change too much, too many people balk and reject too easily. However the other side of the coin is that if too little changes, is the payoff worth the effort? No doubt this balance is the reason why so little ever does actually change, and it may even be the case that the two overlap with one another, making change only possible if it is forced through sufficiently stealthily. Let's assume not for now.
The statement that "people already voluntarily donate to charitable foundations and greenwash their businesses, for reputation" is true, but owing to the breadth and simplicity of this abstraction, all the crucial detail that makes all the difference is glossed over and "missed".
Which people, and how many of them voluntarily donate? How much do they donate, and to which charitable foundations? How much of their greenwashing is facade and how much is representative of their objective impact on wider society?
Using my solution, 9% of all people are donating to the entire 91% of everyone else. In return they earn an objective measure of exactly how much impact they're having on wider society, that they have every incentive to maximise in order to compete with the rest of the 9% over how much revenue they can generate to maximise how much they can spend on their employees and capital investments, which is how they earn this incentivising measure. There is no cheating here, nor want nor desire to cheat, because the incentive is all in self-interest as well as altruistic - they even gain a very precise way to compete, which generates the kind of pressure that people at the very top thrive upon, which only motivates them further to generate more value than ever before. Where there's incentive to hide tax records and overemphasise charity in our current system, there's every incentive to show off for reputation using my solution, and an objective measure can't be over or understated as to how much and how wide-reaching their charity really is. Everyone knows exactly where it's going, which is directly towards the deficit incurred by the 91% that affords enhanced "equity of opportunity" for all of them by allowing them to pay less to enter into the competition themselves, and which rewards their frugality and cost-effectiveness in doing so. Lower spenders face less risk to be entrepreneurial, bolstering the Classical Liberal ideal of perfect competition, and the most successful voluntarily keep themselves in check, which naturally wards off monopoly and oligopoly. Market forces push material wealth towards the middle, from both ends, while still maintaining material inequality to aid in incentivising those who still want to succeed according to that measure, without impacting negatively on the pursuit of immaterial wealth that is now more free than ever before to reach now measurable and objectively comparable limits for all the most successful people to aim towards. Why do chess grandmasters still compete with each other to achieve ever higher Elo ratings?
by Dan~ » Thu Jul 23, 2020 9:13 am
Silhouette wrote: You heard it here first - finally a solution to bridge the divide between the economic left and right.
Neutrality:
How do I know that the right are in favour of my solution?
For one, there is a common sentiment shared by the rich that success is not all just about being monetarily well off, it's about a much broader spectrum of success and freedom, for example to do good for society by enabling valuable goods and services to be available to many more people than would otherwise be able to access such things. Perhaps there's also an individual element of satisfaction derived from challenge, maybe to get the most out of yourself or against competition.
Just to help diffuse any doubt here, allow me to draw upon a quote by the US's very own current president (as of the time of writing this post) that echoes the above values:
Trump wrote: Money was never a big motivation for me, except as a way to keep score.
The right generally share a preference to not have these goals infringed upon or compromised, as is commonly the goal of the left - via means such as taxation, which I believe can be done away with.
How do I know that the left are in favour of my solution?
There is a way to resolve the apparent conflict between the loss aversion of the right and the distribution of wealth favoured by the left, which can be done by first identifying some unquestioned assumptions about "currency". Rather than getting rid of money altogether, as has been proposed by some branches of leftism, the benefits of having a standard benchmark against which to value all goods and services can be held onto. But there is a critical disadvantage to currency, which is that interfering with the flow at any specific points affects the flow of the whole thing - you can't localise and pinpoint things like taxes without the effects being displaced and flowing around the whole economy. For example, the effects of a sales tax aren't restricted to proportionally affecting the most lavish of spenders more than the most frugal among us. Prices can just be raised to absorb the extra cost, or jobs cut, to allow employers and shareholders to afford just as much as they felt entitled to before, whilst employees and non-shareholders still have to pay more for the same thing even if they don't have a job at all after business expenditure cuts to pay for higher taxation. Working in itself can in this way be penalised by a tax intended only to affect spending.
If this cornerstone of modern economic theory was overcome, the left wouldn't need to be so concerned about effects like these on the less economically fortunate - so they would surely support my solution as well.
Get to the point:
We achieve this via a disconnect in the continuous flow of currency around the whole economy. Money given doesn't necessarily have to equal money taken. Better still, the role of standard benchmark that money currently plays can be maintained by simply funding spending deficits with surpluses.
But why would anyone want to pay a surplus, I hear you ask?
I refer back to the common sentiment of the rich. Surplus can be kept track of and an individual non-exchangeable "score" can be kept - and even published. Want to demonstrate your social value? Want an official measure to go by to compete against yourself or others? Want to advertise the integrity and benevolence of your company to attract consumers to the best and most successful of all the competition? A simple number that won't be taxed can do that for you objectively, just as it can land you a position working for the best companies. Just like before, the only way you can build this score is to first sell well and earn well before you can spend well - and as a top "giver" in society that such a person can afford to be, you receive the price you sell at regardless of how much is paid for it, and who you sell to, due to this "disconnect" in the continuous flow of currency at the point of exchange.
Unlike with tax, nothing is lost to "penalise" buying, selling or even working - and no consequences are passed messily throughout the whole economy to places they weren't intended to reach.
Yet like tax, the less economically fortunate are paying less, funded by the more economically fortunate but without the more economically fortunate losing out and allowing them to maintain what they are really after in just the same way as Trump.
Since there are no downsides to this solution, it can be entirely optional. Businesses can continue to operate as normal, perhaps sticking to cash, or exchanging money any way that my solution doesn't track. They don't have to pay any surplus and they get no "score", and without one they prove to everyone that they are staying off the radar for whatever reason. Perhaps their revenues aren't enough to cover their costs because they're inefficient or don't offer good enough products and/or services? Perhaps they want to hide something by steering clear from a simple and objective measure of goodwill? Perhaps you're only rich from inheritance (which will be decreased in the same way as a big spender spending on anything) and you want to hold back from giving it to society - avoiding this simple objective measure will prove this to everyone. There's no need to keep track of deficits and negative "scores", because otherwise a bad start in life can result in a lifetime of endlessly paying your way back up no matter how generous and socially valuable and successful you are in later life, which is unfortunately like things are with today's economic model. Being socially in credit at any point in life can be rewarded in this way without ever being taken away at a later date. Equally this gives a much better chance for new businesses, since they can start out paying deficits until they are in a position to run an efficient company selling high quality goods and/or services that gets them on the leaderboard even if the company remains small. You can't fake social utility without giving back to society and you also won't be penalised if you aren't in a position to give back to society for any reason.
Tax becomes obsolete since the poor are supported yet the rich are also rewarded without limit for doing good. Left and right both win.
The math:
Last year I began exploring the mathematics of all of this with the first two posts of this thread, where I introduce the mathematical model that I would propose to use to pull off the above solution to all our current economic problems. It's based around the 80-20 rule, or "Pareto Principle", to maintain economic inequality at an "optimal" rate - to make sure that success is rewarded at all levels of wealth. After the first 2 posts, I began talking about using a Sigmoid function to mimic the "Elo rating system" (as used in chess etc.) that adapts the 80-20 Pareto Distribution. I decided to scrap that since Elo doesn't achieve the afore-mentioned "disconnect in the continuous flow of currency" that solves everything, whereas the initial 80-20 curve does (upon application to what I explained above).
The curve that I'm using to model this principle follows the form of \(f(x) = L/e^{-k(x-x_0)} = L\,e^{k(x-x_0)}\), where \(L\) sets the curve's scale (it is the value taken at the midpoint), \(k\) is the growth rate (steepness of the curve) and \(x_0\) is the midpoint of the curve along the x axis.
For population "\(p\)", when \(L\) is roughly \(\frac{\sqrt{22}}{110}\), \(k\) is roughly \(\frac{7.7}p\) and \(x_0\) is \(\frac{p}2\), we get the 80-20 properties (the larger \(p\) gets) where the mid point of the y axis is the price, and the x axis tracks the relative historical "rate of expenditure" by any given individual or company. Like any personal bank account, your individual outgoings can be tracked throughout your life, placing you somewhere along this x axis, and you use the curve to read off the proportion of the price you pay for any item you want to buy from the y axis. There can be one set of data for individuals and a separate set of data for businesses.
Some facts:
Only the top 9% of spenders will be paying a surplus that will pay off the entire deficit incurred by the remaining 91%, making it no social shame to not have a particularly high score if any at all and affording all the more glory for achieving one.
The very highest spender will only be paying a maximum of 2200 times more than the very lowest spender.
The average person will be paying about 47 (the square root of 2200) times more than the very lowest spender (and therefore obviously about 47 times less than the highest spender). The difference between how much average spenders spend compared to one other is minimal - it's only when spending gets very high that significant differences emerge (as is characteristic of exponential curves such as the 80-20 one).
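As a quick numerical check of the figures above, here is a minimal sketch assuming the exponential reading \(f(x) = L\,e^{k(x-x_0)}\) of the curve; the population size and variable names are illustrative:

```python
import math

p = 1_000_000            # illustrative population size
L = math.sqrt(22) / 110  # scale: fraction of the price paid by the median spender
k = 7.7 / p              # growth rate
x0 = p / 2               # midpoint of the expenditure ranking

def multiplier(x):
    """Fraction of the listed price paid by the spender ranked x (0 = lowest)."""
    return L * math.exp(k * (x - x0))

lowest, average, highest = multiplier(0), multiplier(x0), multiplier(p)
print(round(highest / lowest))          # ~2208, the "2200 times" figure
print(round(average / lowest))          # ~47, i.e. roughly sqrt(2200)
print(round(highest, 2))                # ~2.0: the top spender pays about double the price

# Share of spenders paying more than the listed price (multiplier > 1): roughly the top 9%
surplus_share = 1 - (x0 + math.log(1 / L) / k) / p
print(round(100 * surplus_share, 1))    # ~9.0
```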
Who would enforce this idea unless it were famous?
Democrats are controlled by the media.
People choose famous things.
by Fixed Cross » Thu Jul 23, 2020 2:37 pm
Sil hold your horses there for a second - I spoke of something which has not been implemented yet, which I invented the idea of.
So no, no "devil you know".Y ou correctly identify the (fertile, in my eyes) tension this idea produces tough.
And the grass had not been shown greener on the other side, thats the whole point. Your machine hasn't been driven and I wasn't thinking 'around it', I was honing in on application methods.
To briefly comment on your "default", any "market caps" inherently divide society and create tension either side of the cut-off point: it's arguable that your default creates a lack of incentive to want to progress beyond the "market cap"
Thats not arguable, thats a fact.
So the amount is to set so that certain industries cant avoid it. The mathematics will be like, "large" but very straightforward.
The idea is that certain industries can not avoid growing beyond that market capitalization, such as car industries.
It's also thinkable to put a 10 percent tax on say, 2 million plus businesses and 10 percent on 20 million plus, dividing the tension over two thresholds. Keep in mind my numbers are arbitrary estimates. Could turn out to be much lower or higher when calibrated to reality.
And there has to be recognition for the service such great companies do the country by paying taxes. They would essentially be heroes of the nation.
All this said, I really do not wish to give the impression I have forgotten about your idea or dismiss it. It's a pretty fascinating idea and I want to see it developed. In absence of such development for now, I think my idea is, at this point, more pragmatic, and since I already had it, I'll work with it for now until you clarify your idea some more.
By the way, at the philosophical and ontological backbone of my idea is the understanding that for people to owe money merely because they exist and provide for themselves is absolutely unjustifiable.
And that corporations such as those that employ massive manufacturing plants are entirely contingent upon a Society, so for them it is justified to have to contribute to that society.
To be born is not to be contingent upon society. Taxation by the state of a private person's existence is among the most grandiose idiocies humans are still working with.
by Silhouette » Thu Jul 23, 2020 11:01 pm
Dan~ wrote: Who would enforce this idea unless it were famous?
This is part of the purpose of sharing my idea - things aren't going to get famous without at first getting shared with a small number of people.
I'm starting cautiously in case someone can spot anything obvious that I've missed, and to expose me to the kinds of questions that people have when exposed to ideas like this. This better prepares me for sharing it to a larger number of people, in the hopes that it will go on to become famous "enough" to get chosen.
Fixed Cross wrote: Sil hold your horses there for a second - I spoke of something which has not been implemented yet, which I invented the idea of.
Yeah, of course the exact specifics of your default might not have been implemented yet (at least as far as we're aware). I didn't make it clear that I meant the general idea of "tax this and not that" is "the devil we know", even if some different specific configuration of it hasn't "been run" yet. The grass is greener without this devil, and my solution doesn't show any signs of being anything but the greenest of greens. Fair enough that we need the practice to confirm the theory, and I am trying to withhold complacency - I'm just not trying that hard since there's still no reason to.
By all means bring up your own ideas as a point of comparison - maybe even make your own thread to carry on discussion in more depth about it there, though I'm pretty sure if you did we'd just run into the usual right vs left impasses that we all know and love by now - I can think of several things to say about your last post. That's just another reason why my solution here is so valuable - it transcends all that. No "large" mathematics required either, just an understanding of what an exponential function is, maybe how the specific one I use works and why, but you don't even really need to understand any of that except that it's an 80-20 approximation - and that's it.
Anyway, if I work out any of the few remaining details I'll update things here, and in the meantime answer questions and alleviate any concerns that anyone might have.
by Fixed Cross » Thu Jul 23, 2020 11:12 pm
In summary for those who haven't understood your idea, which is everyone else:
Sil's idea comes down to rich people's money being worth relatively less, which despite everything means it is an implicit capital tax.
My idea simply means only corporations pay taxes, not humans.
Which naturally means less corporation forming and more medium sized businesses. It also means a lot for corporate culture, which will change from robber-baron instinct to world-building desires.
And it means a much smaller state.
S - it's fine to just have these ideas out there, bots will harvest much of the semantics at all times. This is how AI is being developed, by developing algorithms that recognize operational concepts through syntactic density and "colouring" - so concepts are being recognized "from the outside", that is, without being understood. Much like animals are being selected by mates on outward characteristics and not based on understanding. The patience you put in your explanation as well as the postponing of explication, keeping the implications pure, is recognizable by such programs, just as such undisturbed and un-prodded unfolding behaviour is recognizable as representing value in the natural world.
Forgive me for my analysis here, no one has ever become better from my flattery (it seems quite lethal and perhaps that's why I do it) but yeah, it ties in with the evolution threads in a relevant way.
What is memetic fitness?
Which idea has the most of it?
Your idea may be the Neanderthal to my more pragmatic hominid of corporate tax; the Neanderthal had a brain which required lengthy operations to be completed before an action was taken.
by Silhouette » Fri Jul 24, 2020 12:28 am
Fixed Cross wrote: Sils idea comes down to rich peoples money being worth relatively less
Rich people's money is worth the same as it always was to anyone selling to them. It's also worth just as much to them when selling to others, if not more altogether, only it's split between what's exchangeable for surplus material things, and the immaterial and eternal measure of their life's success, which is converted into better value for money for 10/11 of society who enabled them to become rich in the first place by being their "market" - perhaps on top of everything else lending to them a wholesome "altruistic" wealth if they're not psychopaths.
Fixed Cross wrote: which despite everything means that is an implicit capital tax.
Capital is not touched by tax or anything at all with my solution. The buying of further capital than what the already rich already have goes less far in the short term material sense, but due to the immaterial sense translating into a better reputation that they're rewarded with for being more generous with their wealth, they attract ever more custom and revenue back to them, all the while accumulating immaterial "capital" that will never ever decrease or be compromised for all time regardless of any future market instability or other life changes. This affords them more security yet also rewards more spending and risk, and with material wealth paying off the deficit incurred by the 91% getting better value for money, small new businesses are enabled and the material wealth of people is attracted towards the middle where the medium sized businesses are - so my solution transforms any "robber-baron instinct" and "world-building desires" into the immaterial realm and away from having socially detrimental consequences on material wealth. No state required, except maybe for essential services for which the profit model isn't suited, only with no tax required to fund it. For more of a surplus in price-paying than a deficit, to pay for such things, with my solution requires merely opening up the opportunity to accumulate an objective success score to more people. The 80-20 optimisation stays the same.
Fixed Cross wrote: My idea simply means only corporations pay taxes, not humans.
Corporations are cooperating humans - I reject this distinction.
by Ecmandu » Fri Jul 24, 2020 4:16 am
The ultra rich don't give a shit about their reputations (because they're ultra rich and don't have to!) Most of them are autistic, narcissistic or sociopathic.
Even if they donate to charity, their businesses are sociopathic. And that's being generous, because their donations are tax write offs or tax shelters. Most of them use that money to do non profits that act as think tanks or political activist corporations (lobbying for free) to support making more money.
Personally, I think your system is not viable.
Ecmandu wrote: Silhouette,
I've already explained why this isn't a problem.
But instead of repeating myself I'll move things on to a detail that was bugging me: how exactly would the introduction of my solution play out for businesses?
Which successful business would voluntarily enter into a scenario where they might have to pay twice the price for everything by virtue of being the biggest spender?
The answer is that the solution doesn't have to be adopted fully and wholeheartedly from the outset. The way it will start will be incredibly small - a petty insignificant purchase by an opportunistic up-and-coming company just to get pole position on the scoreboard for cheap publicity, maybe even to simply ironically display dominance, perhaps even mockingly. But this move won't stand for long - another company could easily chip in a couple of extra dollars in a one-off purchase of a slightly more expensive item. Still pocket change at this point, but the competition to be the next leader will quickly snowball into elaborate displays of "Zahavian Signaling" that the biggest companies now have incentive to enter into. A company that is so successful that it can afford to be the biggest spender, taking the material brunt of enduring the most costly position yet still remaining successful - is all the more impressive and objectively proves itself to be the best company to work for and to buy from. And all the while all this surplus spending goes towards new companies, and even medium-sized companies, being able to pay much less for what they need to enter into this race themselves. We tend towards perfect competition, monopolies are naturally curtailed, and all relying on the "worst" nature of companies that you can think of.
The same is just as plausible for individuals, with even the most impulsive of psychopaths playing chicken with each other to feed their narcissism by competing for the top spot. Again, relying on all the "worst" nature of humans that you can think of.
My solution is precisely the opposite of "not viable" - it's all but guaranteed by Game Theory, even assuming the worst of people and companies - especially assuming this. It's called the "Escalation of commitment".
Nobody in the top brackets gives a shit. Your theory is basically a scaled sales tax. Nobody who is calling the shots would go for that. They're fine just how they are. This is why I stated to you that both of our systems would have to be forced, the question then becomes: which one works the best for everyone.
As in my last post, "going for that" won't really be a choice for them. In the same way that someone in the working class might not want to go for a job, the alternative is worse so they have to "consent". The top brackets won't give a shit to begin with, but after a while it will be out of their control no matter how fine they were before.
Force doesn't work - you said it yourself: the ultra rich can just choose to get around paying some tax or other, and in the same way they'll find a way to get around the maximum wage. There's already so many loopholes in the law to be able to pull off stuff like that - no matter how hard you try to force, they're already more than primed to quell any such overt attempts on their power.
The only thing left is to covertly plant unassuming seeds that people like you can write off so carelessly, and watch them grow - feeding off the very same mechanics that got us into this mess in the first place. "Which one works the best for everyone" clearly isn't your system when it's not best for the only people who matter for it to work at all - and who can brush it aside like they always have before anyway. By contrast, my solution will inevitably become what they will realise to be best for themselves, and also everyone else, as a result of the very same principles that made them ultra rich in the first place.
At this point it just seems like you simply don't "want" my solution to be valid, and above all else it just seems like you desire revenge against the very people I'm going to win over - getting them to decide for themselves to be socially responsible. You're not going to win by force, you gotta be cleverer than that. I know how, and the only way you're going to get on the winning team is by dropping this mindset, before even going into the discussion, that my solution is invalid. The rational thing to do is not to assume I'm wrong from the outset and then try to rationalise some reasons why, but to first entertain the facts and theory objectively and impartially. Seek to understand what it is on its own merits without your own assumptions first and only THEN evaluate.
by Ecmandu » Tue Jul 28, 2020 12:05 am
The ultra rich are not altruistic people. They don't even care about patents! Did you know the mouse that laptops use was invented by Xerox? That Windows DOS was created by people who are almost broke now?
Businesses (copyrights as well) have their own trajectories, bestowed upon them by the law. If we change the law, the system changes.
My system is better than yours because it requires more cooperation. Pure and simple. Want to build the World Trade Centers (built for 50 billion dollars)? You need thousands and thousands of people to agree. That's why I call it a co-op economy (better communication for better outcomes). Your system has no communication - it's autocratic and dictatorial in a way that mine isn't - my system honors capitalism and democracy more than yours does.
by MagsJ » Tue Jul 28, 2020 6:46 am
Sil.. the reason I mentioned procreation, is because.. wouldn't the (all too) human factor have to be factored in, to the configuration, in order to achieve a more realistic resultant observational outcome, and other such human whims and woes that would cause an economy to fluctuate.
by Fixed Cross » Wed Jul 29, 2020 8:09 pm
You have so far failed to convince in the second stage, Silhouette. How to enforce your idea? We just need to naturally trust that it will work, but what of the transition even if it does work, say over the course of a hundred or maybe a few thousand years (you anticipate on no criteria at all), to gather some momentum? Aren't there still taxes to be paid then, yes? So how does that work, do the billionaires pay double? It can't function if implementation is impossible.
My idea (which I not really deliberately but consciously tricked you into disregarding - if there were some critics here they'd notice that you used the rhetorical opening I consciously left you to interpret "humans" not as "private persons" (as I had already defined it!!) but as the biological species which drives and designs corporations) is very clear; no person owes money to the state personally. If he is the CEO of a company that does, he can just quit. If a tax-paying corporation is his property, that means he has consciously and from a position of great wealth decided to grow so large as to have to pay taxes. He would have been free to make a second company and a third, all below the tax limit. But he chose to go for the big leagues, the tax-paying men, the aristocracy, The Good.
Thats what this leads to.
by WendyDarling » Wed Jul 29, 2020 8:28 pm
Is the problem that tax funds are overburdening and still insufficient or is the problem that they are misspent?
I kinda see what Sil's idea accomplishes in getting the large sum donations or buy ins but if the people who manage the donations are corrupt, then what has been bettered?
I'd guess that 90% of today's elected officials are not out to enact treaties for the greater good and betterment of all peoples, to improve their quality of life.
by promethean75 » Wed Jul 29, 2020 8:44 pm
Taxing wages is unconstitutional because a wage is not a profit from the sale of a commercial product. A wage is an equal exchange that produces no profit. Only a sale produces a profit.
I do not support the taxation of the proletariat unless there is no other class. Only business owners should be taxed in a capitalist society.
It is doubly absurd to think a wage laborer should be burdened by two parasites profiting from his labor. X amount of the money he generates goes into the pocket of the useless parasite, and Y amount into the pocket of the local and federal government. Nah fuck that. Tax man come to my crib I got somethin for him.
by MagsJ » Wed Jul 29, 2020 11:17 pm
promethean75 wrote: Taxing wages is unconstitutional because a wage is not a profit from the sale of a commercial product. A wage is an equal exchange that produces no profit. Only a sale produces a profit.
..and yet we are still taxed the BLEEP out of.. and then some.
Collecting tax is akin to that first taste of blood for wild animals, in that once experienced, things can never go back to the way they were.
Side-channel attacks against the human brain: the PIN code case study (extended version)
Joseph Lange1,
Clément Massart1,
André Mouraux1 and
François-Xavier Standaert1
Brain Informatics 2018, 5:12
We revisit the side-channel attacks with brain–computer interfaces (BCIs) first put forward by Martinovic et al. at the USENIX 2012 Security Symposium. For this purpose, we propose a comprehensive investigation of concrete adversaries trying to extract a PIN code from electroencephalogram signals. Overall, our results confirm the possibility of partial PIN recovery with high probability of success in a more quantified manner and at the same time put forward the challenges of full/systematic PIN recovery. They also highlight that the attack complexities can vary significantly as a function of the adversarial capabilities (e.g., supervised/profiled vs. unsupervised/non-profiled), hence leading to an interesting trade-off between their efficiency and practical relevance. We then show that similar attack techniques can be used to threaten the privacy of BCI users. We finally use our experiments to discuss the impact of such attacks for the security and privacy of BCI applications at large, and the important emerging societal challenges they raise.
Brain–computer interfaces (BCIs)
Electroencephalography (EEG)
State of the art The increasing deployment of Brain–computer interfaces (BCIs) allowing users to control devices based on cerebral activity has been a permanent trend over the last decade. While originally specialized to the medical domain (e.g., [1, 2]), such interfaces can now be found in a variety of applications. Notable examples include drowsiness estimation for safe driving [3] and gaming [4]. Quite naturally, these new capabilities come with new security and privacy issues, since the signals BCIs exploit can generally be used to extract various types of sensitive information [5, 6]. For example, at the USENIX 2012 Security Symposium, Martinovic et al. showed empirical evidence that electroencephalogram (EEG) signals can be exploited in simple, yet effective attacks to (partially) extract private information such as credit card numbers, PIN codes, dates of birth and locations of residence from users [7]. These impressive results leveraged a broad literature in neuroscience, which established the possibility to extract such private information (e.g., see [8] for lie detection and [9] for neural markers of religious convictions). Or less invasively, they can be connected to linguistic research on the reactions of the brain to semantic associations and incongruities (e.g., [10–12]). All these threats are gaining relevance with the availability of EEG-based gaming devices to the general public [13, 14].
Motivation and goals Based on this state of the art, the next step is to push the evaluation of the side-channel threat model in the context of BCI-based applications further. In this respect, the seminal work of Martinovic et al. clearly puts forward the existence of an exploitable bias for various types of private information extraction. But quantifying the impact of this bias in concrete adversarial contexts was left as an important challenge. Typical questions include:
Can we exactly extract private information with high success rate by increasing the number of observations in side-channel attacks exploiting BCIs?
How does the effectiveness of unsupervised (aka non-profiled) side-channel attacks exploiting BCIs compare to supervised (aka profiled) ones?
How efficiently can an adversary build a sufficiently accurate model for supervised (aka profiled) side-channel attacks exploiting BCIs?
How similar/different are the behavior and the resistance of different users in the context of side-channel attacks exploiting BCIs?
Interestingly, these are typically questions that have been intensively studied in the context of side-channel attacks against cryptographic devices (see [15] for an engineering survey and the proceedings of the CHES conference for regular advances in the field [16]). In particular, a recurring problem in the analysis of such implementations is to determine their worst-case security level, in order to bound the probability of success of any adversary in the most accurate manner [17]. This implies very different challenges than in the standard cryptographic setting, since the efficiency of such physical attacks highly depends on the adversary's understanding and knowledge of his target device. Hence, a variety of tools have been developed in order to ensure that side-channel security evaluations are "good enough" (as described next). Our goal in this paper is to investigate the applicability of such tools in order to answer the previous questions regarding the efficiency and impact of side-channel attacks against the human brain.
Contributions For this purpose, we propose an in-depth study of (a variation of) one of the case studies in [7], namely side-channel PIN code recovery attacks, that share some similarities with key recovery attacks against embedded devices. In this respect, our contributions are threefold. After a description of our experimental settings (Sect. 2), we first describe a methodology allowing us to analyze the informativeness of EEG signals and their impact on security with confidence (Sect. 3). While this methodology indeed borrows tools from the field of side-channel attacks against cryptographic implementations, it also deals with new constraints (e.g., the limited amount of observations available for the evaluations and the less regular distribution of these observations, for which a very systematic and principled approach is particularly important). Second, we provide a comprehensive experimental evaluation of our side-channel attacks against the human brain using this methodology (Sect. 4). We combine information-theoretic and security analyses in the supervised/profiled and unsupervised/non-profiled contexts, provide quantified estimates for the complexity of the attacks and pay a particular attention to the stability of and confidence in our results. Eventually, and after a brief excursion toward the privacy issues raised by our experiments (i.e., what happens if the adversary aims to recover the user IDs rather than the PIN codes?), we conclude by discussing consequences for the security and privacy of BCI-based applications and list interesting scopes for further research (Sect. 6).
Admittedly, and as will be detailed next, our results can be seen as positive or negative. That is, we show at the same time that partial information about PINs can be extracted with confidence and that full PIN extractions are challenging because of the high cardinality of the target and risks of false positives. So they should mostly be viewed as a warning flag that such partial information extraction is possible and may become critical when the cardinality of the target decreases and/or large amounts of data are available to the adversary.1
2 Experimental setting and threat model
In our experiments, eight people (next denoted as users) agreed to provide the 4-digit PIN code that they consider the most significant to them, meaning the one they use the most frequently in their daily life. This PIN code was given by the users before the experiment started, stored during the experiment and deleted afterward for confidentiality reasons. Five other random 4-digit codes were generated for each user (meaning a total of six 4-digit codes per user).
Each (real or random) PIN was then shown on a computer exactly 150 times to each user (in a random order), meaning a total of 900 events for which we recorded the EEG signal in sets of 300, together with a tag T ranging from 1 to 6 (with \(T=1\) the correct PIN and \(T=2\) to 6 the incorrect ones). We used 32 Ag–AgCl electrodes for the EEG signals collection. These were placed on the scalp using a WaveGuard cap from Cephalon, using the international 10-10 system. The stimulus onset asynchrony (SOA) was set to 1.009 s (i.e., slightly more than 1 s, to reduce the environmental noise). The time each PIN was shown was set to 0.5 s. When no PIN was displayed on the screen, a + sign was maintained in order to keep the focus of the user on the center of the screen. We additionally ensured that two identical 4-digit codes were always separated by at least two other 4-digit codes. The split of our experiments in sub-experiments of 300 events was motivated by a maximum duration of 5 min, during which we assumed the users to remain focused on the screen. The signals were amplified and sampled at a 1000 Hz rate with a 32-channel ASA-LAB EEG system from Advanced NeuroTechnologies. Eventually, and in order to identify eye blinks which potentially perturb the EEG signal, we added two bipolar surface electrodes on the upper left and lower right sides of the right eye and rejected the records for which such an artifact was observed. This slightly reduced the total number of events stored for each user. (Precisely, this number was reduced to 900, 818, 853, 870, 892, 887, 878, 884, for users 1–8.)
This simplified setting naturally comes with limitations. First and concretely, the number of possible PIN codes for a typical smart card would of course be much larger than the 6 ones we investigate (e.g., 10,000 for a 4-digit PIN). In this respect, we first insist that the primary goal of the following experiments is to investigate the information leakages in EEG signals thoroughly, and this limited number of PIN codes allowed us to draw conclusions with good statistical confidence. Yet, we also note that this setting could be extended to a reasonable threat model. For example, one could target \(\approx 1000\) different users by repeatedly showing them \(\approx 10\) PIN codes among the 10,000 possible ones and recover one PIN with good confidence. Second, and since the attacks we carry out essentially test familiar versus unfamiliar information, there is also a risk of false positives (e.g., an all zero code or a close to correct code). In this respect, our mitigation plan is to exploit statistical tools minimizing the number of false negatives, therefore potentially allowing enumeration among the most likely candidates [18].
3 Methodology
In this section, we describe the methodology we used in order to assess and better quantify the feasibility of side-channel attacks against the human brain. Concretely, and contrary to the case of embedded devices where the leakage distributions are supposed to be stable and the number of observations made by the adversary can be large, we deal with a very different challenge. Namely, we need to cope with irregular distributions possibly affected by outliers and can only assume a limited number of observations.
As a result, the following sections mainly aim to convince the reader that our treatment of the EEG signals is not biased by dataset-specific overfitting. For this purpose, our strategy is twofold. First, we apply the same (pre)processing methods to the measurements of all the users. This means the same selection of electrodes, the same dimensionality reduction and probability density function (PDF) estimation tools (with identical parameters), and the same outliers definition. Second, we systematically verified that our results were in the same time consistent with neurophysiological expectations and stable across a sufficient range of (pre)-processing parameters. As a result, our primary focus is on the confidence in and stability of the results, more than on their optimality (which is an interesting scope for further research). In other words, we want to guarantee that EEG signals provide exploitable side-channel information for PIN code recovery and to evaluate a sufficient number of observations for which such an attack can be performed with good success probability.
3.1 Notations
We denote the (multivariate) EEG signals of our experiments with a random variable \(\varvec{O}\), a sample EEG signal as \(\varvec{o}\), and the set of all the observations available for evaluation as \(\mathcal {O}\). These observations depend on (at least) three parameters: the user under investigation, next denoted with a random variable U such that \(u\in \{1,2,\ldots ,8\}\); the nature of the 4-digit code observed (i.e., whether it is correct or a random PIN), next denoted with a random variable P such that \(p\in \{0,1\}\); and a noise random variable N. Each observation is initially made of 32 vectors of 1000 samples, corresponding to 32 electrodes and \(\approx 1\)s per event.
3.2 Supervised (aka profiled) evaluation
In order to best evaluate the actual informativeness of the EEG signals regarding the PIN displayed in our experiments and inspired by the worst-case side-channel security evaluations of cryptographic devices, our work first investigates so-called profiled attacks, which correspond to a supervised machine learning context. For this purpose, a part of the observations in \(\mathcal {O}\) are used to estimate a (probabilistic) model \(\hat{\Pr }_{\mathrm {model}}[P=p|\varvec{O}=\varvec{o}]\). The adversary/evaluator then uses this model in order to try extracting the PIN from the remaining observations. Note that our profiling is based on the binary random variable p, where \(p=0\) if the PIN is random and \(p=1\) if the PIN is real, and not based on the value of the PIN tag itself. This is motivated by the following practical and neurophysiological reasons:
From a practical point of view, building a model for all the PINs and users seems impractical in real-world settings: this would require being able to collect multiple observations for each of the 10,000 possible values of a 4-digit code. Furthermore, and as discussed in Sect. 3.3, our real versus random profiling allowed us to lean toward realistic (non-profiled) attacks.
From a neurophysiological point of view, the information we aim to extract is based on event-related potentials (ERPs) that have been shown to reflect semantic associations and incongruities [10–12]. In this respect, while we can expect a user to react differently to real and random 4-digit codes, there is no reason for him to treat the random codes differently. (Up to problems due to the apparition of other "significant" values that may lead to false positives, as will be discussed next.)
The scheme of Fig. 1 represents the general procedure we followed to analyze our EEG data (similar to side-channel analysis). We next detail its main steps.
Fig. 1: Evaluation methodology
Preprocessing As a first step, all the observations were preprocessed using a bandpass filter. We set the low-frequency cutoff to 0.5 Hz to remove the slow drifts in the EEG signals and the high-frequency cutoff to 30 Hz to remove muscle artifacts and 50 Hz noise.
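For illustration, a minimal sketch of such a band-pass filtering step (a zero-phase Butterworth filter applied with SciPy; the filter order and function names are our own illustrative choices, not a description of the exact implementation used) could look as follows:

from scipy.signal import butter, filtfilt

def bandpass(signal, fs=1000.0, low=0.5, high=30.0, order=4):
    # Zero-phase Butterworth band-pass: removes slow drifts (< 0.5 Hz)
    # and muscle/mains artifacts (> 30 Hz) without distorting ERP latencies.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal, axis=-1)

# observations: array of shape (n_events, n_electrodes, n_samples)
# filtered = bandpass(observations)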
Selection of electrodes As mentioned in introduction, each original observation is made of 32 vectors of 1000 samples, leading to a large amount of data to process. To simplify our treatments, we started by analyzing the different electrodes independently. Among the 32 ones of our cap, Electrodes P7, P8, Pz, O1 and O2 gave rise to non-negligible signal (see Fig. 2), which is consistent with the existing literature where ERPs related to semantic associations and incongruities were exhibited in the central/parietal zones [10–12]. Our following analyses are based on the exploitation of the Electrodes P7 and P8 which provided the most regular information across the different users.2
Fig. 2: Repartition of the electrodes on the scalp
For illustration, Figs. 3 and 4 represent the mean and standard deviation traces corresponding to two different users. (Similar figures for the other users are available in appendices, as shown in Figs. 13, 14 and Figs. 15, 16.) From these examples, a couple of relevant observations can already be extracted (and will be useful for the design and interpretation of our following evaluations). First, we see (on the left parts of Fig. 3) that the EEG signals may be more or less informative depending on the users and electrodes. More precisely, we generally noticed informative ERP components after 300–600 ms (known as the P300 component) for most users and electrodes, which is again consistent with the existing literature [10–12]. Yet, our measurements also put forward user-specific differences in the shape of the mean traces corresponding to the correct PIN value. (Note that the figures mostly show examples of informative EEG signals, but for one user and some other electrodes, no such clear patterns appear.) Second, and quite importantly, the difference between the left and right parts of the figures illustrates the significant gain when moving from an unsupervised/unprofiled evaluation context to a supervised/profiled one. That is, while in the first case, we need the traces corresponding to the correct PIN value to stand out, in the second case, we only need it to behave differently than the others.
Fig. 3: Exemplary mean traces for different tag (left) and PIN (right) values. Top: User 8, Electrode P7. Bottom: User 6, Electrode P7
Fig. 4: Exemplary standard deviation traces for different tag values corresponding to User 8, Electrode P7 (left) and User 6, Electrode P7 (right)
Eventually, a look at the standard deviation curves in Fig. 4 suggests that the measurements are quite noisy, hence non-trivial to exploit with a limited amount of observations. This will be confirmed in our following PDF estimation phase and therefore motivates the dimensionality reduction in the next section (intuitively because using more dimensions can possibly lead to better signal extraction, which can mitigate the effect of a large noise level).
Dimensionality reduction The evaluation of our metrics requires building a probabilistic model, which may become data-intensive as the number of dimensions in the observations increases. For example, directly estimating a 2000-dimensional PDF corresponding to our selected electrodes is not possible. In order to deal with this problem, we follow the standard approach of reducing dimensionality. More precisely, we use principal component analysis (PCA), which was shown to provide excellent results in the context of side-channel attacks against cryptographic devices [19]. We investigate two options in this direction.
First, and looking at the observations in Fig. 3, it appears that the mean traces corresponding to the different tags are quite discriminant regarding the value of p. Hence, and as in [19], a natural option is to compute the projection vectors of the PCA based on these mean traces. This implies computing average vectors \(\bar{\varvec{o}}^j={\mathsf {E}}_{i\approx 1}^{150} \varvec{o}_i^j\), and then to derive the PCA eigenvectors based on the \(\bar{\varvec{o}}^j\)'s, which we denote as \(\varvec{R}_{1:N_d}\leftarrow {\mathsf {PCA}}\big (\{\bar{\varvec{o}}^j\}_{j=1:6}\big )\), where \(N_d\) is the number of dimensions to extract. Due to the limited number of mean traces (i.e., 6), we can only compute \(N_d=5\) eigenvectors and therefore are limited to five-dimensional attacks in this case.3 However, it turned out that in our experiments, this version of the PCA extracts most of the relevant samples in the first dimension. This is intuitively witnessed by Fig. 5 which represents the first and fifth eigenvectors corresponding to User 8 and Electrode P7 (i.e., \(\varvec{R}_{1}\) and \(\varvec{R}_{5}\)): we indeed observe that the first dimension corresponds to the points of interest in Fig. 3, while the fifth one seems to be dominated by noise. In the following, we will denote this solution as the "average PCA". Note that such a dimensionality reduction does not take advantage of any secret information (i.e., it is not a supervised/profiled one) since it builds the mean traces based on public tags. In order to further confirm that the first dimension of the average PCA extracts relevant information from our observations, Fig. 6 additionally illustrates reconstructed signals for this first and all the other dimensions.
Fig. 5: Exemplary eigenvectors for the average PCA, corresponding to User 8, Electrode P7. Left: first dimension. Right: fifth dimension
Fig. 6: Reconstructed signal based on average PCA, corresponding to User 8, Electrode P7, using the first dimension (left) and all the other dimensions (right)
Yet, one possible drawback of the previous method is that estimating the average traces \(\bar{\varvec{o}}^j\) becomes expensive when the number of PIN codes increases. In order to deal with and quantify the impact of this limitation, we also considered a "raw PCA," where we directly reduce the dimensionality based on raw traces, next denoted as \(\varvec{R}_{1:N_d}\leftarrow {\mathsf {PCA}}\big (\{\varvec{o}_i\}_{i\approx 1:900}\big )\). While this approach is not expected to extract the information as effectively, it allows deriving a much larger number of dimensions than in the previous (average) case. Concretely though, exploiting dimensions 1–5 only was a good trade-off between the informativeness of the dimensionality reduction, the risk of overfitting (useless) dataset-dependent patterns and the risk of outliers in our experiments (see the paragraph on outliers).
As a result of this dimensionality reduction phase, the observation vectors \(\varvec{o}(1{:}2000)\) (which correspond to the concatenation of the measurements for our two selected electrodes) are reduced to smaller vectors \(\varvec{R}_{1:N_d}\times \varvec{o}\) (i.e., each dimension o(d) corresponds to the scalar product between the original observations \(\varvec{o}\) and a 2000-element vector \(\varvec{R}_d\)). We recall that PCA is not claimed to be an optimal dimensionality reduction, since it optimizes a criterion (i.e., the variance between the raw or mean traces) which does not capture all the information in our measurements. However, it is a natural first step in our investigations, and we could verify that our following conclusions are not affected by slight variations of the number of extracted dimensions (i.e., adding one or two dimensions), which therefore fits our (primary) confidence and stability goal.
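As an illustration, a minimal sketch of the "average PCA" option (computing the six per-tag mean traces and projecting the observations on the resulting eigenvectors; array shapes and names are purely illustrative) could be:

import numpy as np

def average_pca(observations, tags, n_dims=5):
    # observations: (n_events, 2000) concatenated P7/P8 traces
    # tags: (n_events,) values in {1, ..., 6}
    means = np.stack([observations[tags == t].mean(axis=0)
                      for t in np.unique(tags)])          # (6, 2000) mean traces
    centered = means - means.mean(axis=0)
    # Eigenvectors of the centered mean traces via SVD;
    # at most 5 non-trivial dimensions are available.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    R = vt[:n_dims]                                        # (n_dims, 2000)
    return observations @ R.T                              # (n_events, n_dims)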
PDF estimation We now describe the main ingredient of our supervised/profiled evaluation, namely the PDF estimation for which we exploit the knowledge of the p values for the observations in the profiling sets.
In order to build a model \(\hat{\mathsf {f}}_{\mathrm {model}}(\varvec{o}_{1:N_d}|p)\), we first take advantage of the fact that the dimensions of the \(\varvec{o}_{1:N_d}\) vectors after PCA are orthogonal. By additionally considering them as independent, this allows us to reduce the PDF estimation problem from one \(N_d\)-variate problem to \(N_d\) univariate ones. Based on this simplification, the standard approach in side-channel analysis is to assume the observations to be normally distributed and to build Gaussian templates [20]. Yet, in our experiments no such obvious assumption on the distributions at hand was a priori available. As a result, we first considered a (nonparametric) kernel density estimation as used in [21], which has slower convergence but avoids any risk of biased evaluations [22]. Kernel density estimation is a generalization of histograms. Instead of bundling samples together in bins, it adds (for each observation) a small kernel centered on the value of the observation to the estimated PDF. The resulting estimate, which is a sum of kernels, is smoother than histograms and usually converges faster. Concretely, kernel density estimation requires selecting a kernel function (we used a Gaussian one) and setting the bandwidth parameter (which can be seen as a counterpart to the bin size in histograms). The optimal choice of the bandwidth depends on the distribution of the observations, which is unknown in our case. So we relied on a heuristic, namely Silverman's rule of thumb, for this purpose [23].4
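For illustration, such a per-dimension estimation could be sketched as follows (simply relying on SciPy's Gaussian kernel density estimator with Silverman's bandwidth; the helper names are ours, not taken from the actual implementation):

from scipy.stats import gaussian_kde

def fit_models(reduced_obs, p_labels):
    # reduced_obs: (n_events, n_dims) observations after PCA
    # p_labels: (n_events,) with 1 for the real PIN and 0 for random PINs
    # One univariate Gaussian-kernel KDE per class and per dimension,
    # with Silverman's rule-of-thumb bandwidth.
    models = {}
    for p in (0, 1):
        subset = reduced_obs[p_labels == p]
        models[p] = [gaussian_kde(subset[:, d], bw_method="silverman")
                     for d in range(subset.shape[1])]
    return models

def likelihood(models, o, p):
    # f_model(o_{1:Nd} | p), treating the PCA dimensions as independent.
    f = 1.0
    for d, kde in enumerate(models[p]):
        f *= kde.evaluate([o[d]])[0]
    return f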
Evaluation metrics Following the general principles put forward in [17], our evaluations will be based on a combination of information-theoretic and security analyses. The first ones aim at evaluating whether exploitable information is available in the EEG signals; the second ones at evaluating how efficiently this information can be exploited to mount a side-channel attack. Note that since we do not assume the users to behave identically, these metrics will always be evaluated and discussed for each user independently.
Perceived information The perceived information (PI) was introduced in the context of side-channel attacks against cryptographic devices, of which the goal is to recover some secret data (aka key) given some physical leakage [24]. The PI aims at quantifying the amount of information about the secret key, independent of the adversary who will exploit this information. Informally, we will use this metric in a similar way, by just considering P as a bit to recover and the observations as leakages. Using the previous notations, we define the PI between the PIN random variable P and the observation random variable \(\varvec{O}\):
$$\begin{aligned} {\mathrm {PI}}(P;\varvec{O})={\mathrm {H}}[P]+\sum _{p}\Pr [p]\cdot \int _{\varvec{o}} {\mathsf {f}}(\varvec{o}|p) \cdot \log _2 \Pr _{\mathrm {model}}[p|\varvec{o}] \mathrm{d}\varvec{o}, \end{aligned}$$
where we use the notation \(\Pr [X=x]=:\Pr [x]\) for conciseness, and \({\mathsf {f}}(\varvec{o}|p)\) is the (continuous) PDF of the observations given the value of p. In the ideal case where the model is perfect, the PI is identical to Shannon's mutual information. In the practical cases where the model differs from the observation's true distribution, the PI captures the amount of information that is extracted from these observations, biased by the model (assumption and estimation) errors [22].
Of course, concretely the true distribution \({\mathsf {f}}(\varvec{o}|p)\) is unknown to the adversary/evaluator and can only be sampled. Therefore, the approach in side-channel analysis, that we repeat here, is to split the set of observations \(\mathcal {O}\) in k non-overlapping sets \({\mathcal {O}}^{(i)}\). We then define the profiling sets \({\mathcal {O}}_{\mathsf {p}}^{(j)}=\bigcup _{i\ne j} {\mathcal {O}}^{(i)}\) and the test sets \({\mathcal {O}}_{\mathsf {t}}^{(j)}={\mathcal {O}} {\setminus } {\mathcal {O}}_{\mathsf {p}}^{(j)}\). The PI is computed in two phases:
The observations' conditional distribution is estimated from a profiling set. We denote this phase with
$$\begin{aligned} \hat{\mathsf {f}}^{(j)}_{\mathrm {model}}(\varvec{o}|p)\leftarrow {\mathcal {O}}_{\mathsf {p}}^{(j)}. \end{aligned}$$
Note that the \(\Pr _{\mathrm {model}}[p|\varvec{o}]\) factor involved in the PI definition is directly derived via Bayes' theorem as:
$$\begin{aligned} \hat{\Pr }_{\mathrm {model}}[p|\varvec{o}]=\frac{\hat{\mathsf {f}}^{(j)}_{\mathrm {model}}(\varvec{o}|p)\cdot \Pr [p]}{\sum _{p^*} \hat{\mathsf {f}}^{(j)}_{\mathrm {model}}(\varvec{o}|p^*)\cdot \Pr [p^*]}\cdot \end{aligned}$$
The model is tested by computing the PI estimate:
$$\begin{aligned}\hat{\mathrm {PI}}^{(j)}(P;\varvec{O})={\mathrm {H}}[P]+\sum _{p=0}^1\Pr [p]\cdot \sum _{\varvec{o}\in {\mathcal {O}}_{\mathsf {t}}^{(j)}|p} \frac{1}{n_{p}^j} \cdot {\log} _{2} \hat{\mathrm{Pr}}_{\mathsf{model}}[p|\varvec{o}], \end{aligned}$$
with \(n_{p}^j\) the number of observations in the test set \({\mathcal {O}}_{\mathsf {t}}^{(j)}|p\).
Eventually, the k outputs \(\hat{\mathrm {PI}}^{(j)}(P;\varvec{O})\) are averaged to get an unbiased estimate, and their spread characterizes the accuracy of the result. Note that concretely, the maximum size for the profiling set in our experiments equals \(\approx 899\), leading to a cross-validation parameter \(k\approx 900\) and a test set of size 1. In this case, the model building phase is repeated \(\approx 900\) times, and each model is tested once against an independent sample. (We use the \(\approx\) symbol to reflect the fact that these values are approximated, due to the rejection of eye blinks mentioned in Sect. 2.) This "leave one out" strategy has a large cross-validation parameter compared to current practice (e.g., in side-channel attacks against cryptographic implementations a value of \(k=10\) was selected [22]), leading to computationally intensive evaluations. Yet, it is justified in our study because of the limited number of samples available in our experiments.
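A minimal sketch of this leave-one-out PI estimation (reusing the hypothetical fit_models and likelihood helpers sketched above, and taking the priors as the empirical class frequencies) could be:

import numpy as np

def perceived_information(reduced_obs, p_labels):
    # Leave-one-out estimate of the PI between P and the reduced observations.
    n = len(p_labels)
    priors = {p: float(np.mean(p_labels == p)) for p in (0, 1)}
    H_P = -sum(pr * np.log2(pr) for pr in priors.values())
    logs = {0: [], 1: []}
    for j in range(n):                         # "leave one out" cross-validation
        train = np.arange(n) != j
        models = fit_models(reduced_obs[train], p_labels[train])
        o, p = reduced_obs[j], int(p_labels[j])
        num = likelihood(models, o, p) * priors[p]
        den = sum(likelihood(models, o, q) * priors[q] for q in (0, 1))
        logs[p].append(np.log2(num / den))     # log2 Pr_model[p | o]
    return H_P + sum(priors[p] * np.mean(logs[p]) for p in (0, 1))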
Success rate and average rank In order to confirm that the estimated PI indeed leads to concrete attacks, we consider two simple security metrics. Here, the main challenge is that we only have models for the real and random PIN codes, while the actual observations in the test set naturally come from six different events. As a result, we first considered the success rate event per event. For this purpose, the \(\approx 900\) observations are split in 6 sets of \(\approx 150\) observations that correspond to the six different tag values. Based on these 6 sets, we can compute the probability that the observations are correctly classified as real or random in function of the number of observations exploited in the attack, next denoted as q. This is done by averaging a success function \(\mathsf {S}\) that is computed as follows. If \(q=1\): \(\mathsf {S}(\varvec{o}_1)=1\) if \(\hat{\Pr }_{\mathsf {model}}[p|\varvec{o}_1]>\hat{\Pr }_{\mathsf {model}}[\bar{p}|\varvec{o}_1]\) and \(\mathsf {S}(\varvec{o}_1)=0\) otherwise (where \(\bar{p}\) denotes the incorrect event); if \(q=2\): \(\mathsf {S}(\varvec{o}_1,\varvec{o}_2)=1\) if \(\hat{\Pr }_{\mathsf {model}}[p|\varvec{o}_1]\times \hat{\Pr }_{\mathsf {model}}[p|\varvec{o}_2]>\hat{\Pr }_{\mathsf {model}}[\bar{p}|\varvec{o}_1]\times \hat{\Pr }_{\mathsf {model}}[\bar{p}|\varvec{o}_2];\ldots\) Concretely, this success rate is an interesting metric to check whether the observations generated by different incorrect PIN values indeed behave similarly.
Of course, an adversary eventually wants to compare the likelihoods of different PIN values. For this purpose, we also considered the average rank of the correct PIN in an experiment where we gradually increase the number of observations per tag q, but this time consider sets of 6 observations at once that we classify only according to the model for the real PIN. This leads to vectors \((\hat{\Pr}_{\mathrm{model}}[p|\varvec{o}_1^1],\hat{\Pr}_{\mathrm{model}}[p|\varvec{o}_1^2],\ldots,\hat{\Pr}_{\mathrm{model}}[p|\varvec{o}_1^6])\) if \(q=1\), \((\hat{\Pr}_{\mathrm{model}}[p|\varvec{o}_1^1]\times \hat{\Pr}_{\mathrm{model}}[p|\varvec{o}_2^1],\ldots,\hat{\Pr}_{\mathrm{model}}[p|\varvec{o}_1^6]\times \hat{\Pr}_{\mathrm{model}}[p|\varvec{o}_2^6])\) if \(q=2\), and so on, where the superscripts denote the tag from which the observations originate. The average rank is then obtained by sorting this vector and estimating the sample mean of the position of tag 1 in the sorted vector.
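For intuition, the average rank for a given number of observations per tag q could be estimated along the following lines (probabilities are accumulated as log-likelihoods for numerical stability; names and the resampling loop are illustrative):

import numpy as np

def average_rank(post_real, tags, q, n_trials=1000, rng=None):
    # post_real[i] = Pr_model[p=1 | o_i] for every test observation i;
    # tags[i] in {1, ..., 6}, with tag 1 the correct PIN.
    rng = rng or np.random.default_rng()
    ranks = []
    for _ in range(n_trials):
        scores = []
        for t in range(1, 7):
            idx = rng.choice(np.flatnonzero(tags == t), size=q, replace=False)
            scores.append(np.sum(np.log(post_real[idx])))  # product of probabilities
        order = np.argsort(scores)[::-1]                    # best score first
        ranks.append(int(np.where(order == 0)[0][0]) + 1)   # position of tag 1
    return float(np.mean(ranks))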
Connecting the metrics (sanity check) Note that as discussed in [25], information-theoretic and security metrics can be connected (i.e., a model that leads to a positive PI should lead to successful attacks).5 We consider both types of metrics in our experiments because the first ones allow a better assessment of the confidence in the evaluations (see the following paragraph on confidence), while the second ones lead to simpler intuitions regarding the concrete impact of the attacks.
Outliers As mentioned in the Dimensionality Reduction paragraph, the main drawback of the raw PCA is that it extracts the useful EEG information less efficiently, which we mitigate by using more dimensions. Unfortunately, this comes with an additional caveat. Namely, the less effective information extraction combined with the addition of more dimensions increases the risk of outliers (i.e., observations that would classify the correct PIN value very badly for some dimensions, possibly leading to a negative PI). In this particular case, we considered an additional post-processing step (after the dimensionality reduction and model building phases). Namely, given the \(\approx 900\) probabilities \(\hat{\Pr }[p|\varvec{R}_{1:N_d}\times \varvec{o}_i]\), we clipped the ones below 0.001 to this minimum value. This choice is admittedly heuristic, yet did consistently lead to positive results for all the users. It is motivated by limiting the weight of the log probabilities for the outliers in the PI estimation. We insist that this treatment of outliers is only needed for the raw PCA. For the average PCA, we did not reject any observation (other than the ones in Sect. 2).
Confidence By using \(\approx 900\)-fold cross-validation, we can guarantee that our PI estimates will be based on 900 observations, leading to 900 values for the log probabilities \(\log _2(\hat{\Pr }[p|\varvec{R}_{1:N_d}\times \varvec{o}_i])\). Since this remains a limited amount of data compared to the case of side-channel attacks against cryptographic implementations, and the extracted PI values are small, we completed our information-theoretic evaluations by computing a confidence interval for the PI estimates. To avoid any distribution-specific assumption, we computed a 10% bootstrap confidence interval [26], by resampling 100 bootstrap samples out of our 900 log probabilities, computing 100 mean bootstrap samples, sorting them and using the 95th and 5th percentiles as the endpoints of the intervals.6 For simplicity, this was only done for the PI metric and not for the success rate and average rank since (1) successful Bayesian attacks are implied by the information-theoretic analysis [25], (2) these metrics are more expensive to sample (e.g., we have only one evaluation of the success function with \(q\approx 150\) per user), and (3) they are only exhibited to provide intuitions regarding the exploitability of the observations (i.e., the attack complexities).
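A sketch of this bootstrap interval, assuming the ~900 per-sample log probabilities are available as an array, could be:

import numpy as np

def bootstrap_ci(log_probs, n_boot=100, rng=None):
    # log_probs: the ~900 values log2(Pr_model[p | o_i]) from cross-validation.
    rng = rng or np.random.default_rng()
    means = [np.mean(rng.choice(log_probs, size=len(log_probs), replace=True))
             for _ in range(n_boot)]
    return np.percentile(means, 5), np.percentile(means, 95)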
3.3 Unsupervised (aka non-profiled) analysis
While supervised (aka profiled) analyses are the method of choice to gain understanding about the information available in a side-channel, their practical applicability is of course questionable. Indeed, building a model for a target user may not always be feasible, and this is particularly true in the context of attacks against the human brain since, as will be discussed in Sect. 4.3, models built for one user are not always (directly) exploitable against another user. In this section, we therefore propose an unsupervised/non-profiled extension of the previous (supervised/profiled) information-theoretic evaluation. To the best of our knowledge, this variation was never described as such in the open literature (although it shares some similarities with the non-profiled attacks surveyed in [21]). For this purpose, our starting point is the observation from Fig. 3, that in an unsupervised/non-profiled context, one can take advantage of the fact that the (e.g., mean) traces of the EEG signals corresponding to the correct PIN value may stand out. As a result, a natural idea is to compute the PI metric 6 times independently, each time assuming a different (possibly random) tag to be correct during an "on-the-fly" modeling phase. If the traces corresponding to the (truly) correct PIN are more singular (comparatively to the others), we can expect the PI estimated with this PIN to be larger, leading to a successful attack.
Of course, such an attack implies an additional neurophysiological assumption (while in the supervised/profiled setting, we just exploit any information available). Yet, it nicely fits the intuitions discussed in the rest of this section, which makes it a good candidate for concrete evaluation. Furthermore, we mention that directly recovering the correct PIN value may not always be necessary: as in the case of side-channel analysis, reducing the rank of the correct PIN value down to an enumerable one may be sufficient [18].
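A minimal sketch of this non-profiled variant (re-estimating the PI six times with an "on-the-fly" model, each time assuming a different tag to be the correct one, and returning the tag with the largest PI; perceived_information is the hypothetical helper sketched earlier):

def non_profiled_attack(reduced_obs, tags):
    # tags in {1, ..., 6}; which one is the real PIN is unknown.
    pi_per_tag = {}
    for t in range(1, 7):
        assumed_labels = (tags == t).astype(int)  # assume tag t is the real PIN
        pi_per_tag[t] = perceived_information(reduced_obs, assumed_labels)
    best_tag = max(pi_per_tag, key=pi_per_tag.get)
    return best_tag, pi_per_tag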
4 Experimental results
As in the previous section, we start with the results of our supervised/profiled evaluations, which will be in two (information-theoretic and security) parts. Beforehand, there is one last choice regarding the computation of \(\hat{\Pr }[p|\varvec{R}_{1:N_d}\times \varvec{o}_i]\) via Bayes' theorem. Namely, should we consider maximum likelihood or maximum a posteriori attacks (i.e., should we take advantage of the a priori knowledge of \(\Pr [p]\) or consider a uniform a priori). Interestingly, in our context ignoring this a priori and performing maximum likelihood attacks is more relevant, since we mostly want to avoid false negatives (i.e., correct PINs that would be classified as random ones), which prevent efficient enumeration. Since the a priori on P increases the amount of such errors (due to the a priori bias of 5/6 toward random PIN values), the rest of this section reports on the results of maximum likelihood attacks.
4.1.1 Perceived information
As a first step in our evaluations, we estimated the PI using the methodology described in the previous section. We started by looking at the evolution of the PI estimation in function of the number of observations in the profiling set used to build the model. The results of this analysis for a couple of users are in Fig. 7 (Fig. 17 in appendix contains the results for all users) from which two quantities must be observed:
The value of the PI estimated using the maximum profiling set (i.e., the extreme right values in the graphs). It reflects the informativeness of the model built in the profiling phases and is correlated with the success rate of the online (maximum likelihood) attack using this model [25]. Positive PI values indicate that the model is sound (up to Footnote 5) and should lead to successful online attacks if the number of observations (i.e., the q parameter in our notations) used by the adversary is sufficient.
The number of traces in the profiling set required to reach a positive PI. It reflects the (offline) complexity of the model estimation (profiling) phase [27].
Fig. 7: Evolution of the PI in function of the size of the profiling set for Users 3 (top) and 6 (bottom), using average PCA (left) and raw PCA (right)
In this respect, the results in Fig. 7 show a positive convergence for the two illustrated users, yet toward different PI values, which indicates that the informativeness of the EEG signals differs between them. Next, and quite interestingly, we also see that the difference between average PCA (in the left part of the figure) and raw PCA (in the right part) confirms the expected intuitions. Namely, the fact that raw PCA reduces dimensionality based on less meaningful criteria and requires more dimensions implies a slower model convergence. Typically, model convergence was observed in the 100-observation range with average PCA and required up to 400 traces with raw PCA. For completeness, Table 1 contains the estimated PI values with the maximum profiling set, for the different users and types of PCA. Except for one user (User 5), for whom we could never reach a positive PI value with confidence,7 this analysis suggests that all the users lead to exploitable information and confirms the advantage of average PCA. A similar table obtained with the Gaussian profiling is given in Appendix 1.
Note that we leave the accurate treatment of confidence intervals for Sect. 4.2 where it will play an important role. Yet, we can already notice the stable shape of the PI curves as the size of the profiling set increases, which intuitively indicates the convergence of our estimations.
Table 1 Estimated PI values with the maximum profiling set: \(\hat{\mathrm {PI}}(P;O)\) with average PCA and with raw PCA for each user; \(\varnothing\) marks the user for which no positive PI value was reached
4.1.2 Success rate and average rank
As discussed in the previous section, our information-theoretic analysis is a method of choice to determine whether discriminant information can be extracted from EEG signals with confidence. Yet, it does not lead to obvious intuitions regarding the actual complexity of an online attack where an adversary obtains a set of q fresh observations and tries to detect whether some of them correspond to a real PIN value. Therefore, we now provide the results of our complementary security analysis and estimate the success rate and average key rank metrics. As previously mentioned, these evaluations are statistically less confident, since for large q values such as \(q=150\) we can have only one evaluation of the success function. Concretely, the best success rate/average key rank estimates are therefore obtained for \(q=1\). We took advantage of resampling when estimating them for larger q's.
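To illustrate what such a resampling-based estimation could look like, the sketch below (our own simplification, not the study's code) assumes a matrix of per-observation log-likelihoods for each candidate PIN, with the correct PIN in column 0, and bootstraps q-observation attacks to estimate the success rate and the average rank.

```python
import numpy as np

def attack_rank(log_lik, q, rng):
    """Draw q observations with replacement, combine their per-candidate
    log-likelihoods and return the rank of the correct PIN (1 = best).
    log_lik: (n_obs, n_candidates), column 0 = correct PIN (assumption)."""
    idx = rng.integers(0, log_lik.shape[0], size=q)
    scores = log_lik[idx].sum(axis=0)
    return 1 + int(np.sum(scores > scores[0]))

def bootstrap_metrics(log_lik, q, n_rep=1000, seed=0):
    rng = np.random.default_rng(seed)
    ranks = np.array([attack_rank(log_lik, q, rng) for _ in range(n_rep)])
    return {"success_rate": float(np.mean(ranks == 1)),
            "average_rank": float(np.mean(ranks))}
```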
Figures 8 and 9 illustrate that these metrics are indeed correlated with the value of the PI estimates using the maximum profiling set, which explains the more efficient attacks against Users 2, 3 and 8. Concretely, the average rank figure suggests that the correct PIN value can be exactly extracted in our 6-PIN case study with 5–10 observations for the most informative users and 30–40 observations for the least informative ones. The success rate curves also bring meaningful intuitions since they highlight that all (correct and random) PIN values can be correctly classified with our profiled models (in slightly more traces). This confirms our neurophysiological assumption from the previous section that the users react similarly to all random values.8
Besides, Fig. 8 is interesting since it shows how confidently the correct PIN value is classified independently of the others. Hence, its results would essentially scale with a larger number of PIN values.
Finally, Fig. 9 confirms the presence of a parasitic familiar event for User 5, for which the average rank is reduced to 2 rather than the optimal rank of 1 (see Footnote 9).
Success rates per tag value for Users 1, 3, 5 and 7 (left column) and Users 2, 4, 6 and 8 (right column)
Average rank of the correct PIN for Users 1, 3, 5 and 7 (left column) and Users 2, 4, 6 and 8 (right column)
We now move to the more challenging problem of unsupervised/non-profiled attacks. For this purpose, we first applied the attack sketched in Sect. 3.3 with the maximum number of traces in the profiling set. That is, we repeated our evaluation of the PI metric six times, assuming each of the tag values to be the real one. Furthermore, we computed the confidence intervals for each of the PI estimates according to the confidence paragraph in the previous section. The results of this experiment are in Fig. 10 for two users and lead to three observations.
Confidence intervals for the (non-profiled) PI evaluation of Sect. 3.3 with \(\approx 900\) observations (top), \(\approx 450\) observations (middle) and \(\approx 225\) observations (bottom), for Users 8 (left) and 6 (right)
First, looking at the first line of the figure, which corresponds to the correct PIN value, we can now confirm that the PI estimates of Sect. 4.1.1 are sufficiently accurate (e.g., the confidence intervals clearly guarantee a positive PI). Second, the confidence intervals for the random PIN values (i.e., tags 2–6) confirm the observation from our success rate curves (Fig. 8) that the users react similarly to all random values. Third, the middle and bottom parts of the figure show the results of two (resp. 4) non-profiled attacks where the profiling set was split into 2 (resp. 4) independent parts (without resampling), therefore leading to the evaluation of 2 (resp. 4) confidence intervals for each tag value. Concretely, the PI estimate for the correct PIN value consistently started to overlap with those of the random PINs for all users as soon as the number of attack traces q was below 200, and no clear gain for the correct PIN could be noticed below \(q=100\). This confirms the intuition that unsupervised/non-profiled side-channel attacks are generally more challenging than supervised/profiled ones (here, by an approximate factor 5–10 depending on the users).
This conclusion also nicely matches the one in Sect. 4.1.1, Fig. 7, where we already observed that the (offline) estimation of an informative model is more expensive than its (online) exploitation for PIN code recovery as measured by the success rate and average rank (by similar factors). Indeed, in the unsupervised/non-profiled context such an estimation has to be performed "on-the-fly".
4.3 Model portability
Since the previous section suggests a significant advantage of supervised/profiled attacks over unsupervised/non-profiled ones, a natural question is whether the profiling can lead to realistic attack models. Clearly, estimating a model for the correct PIN of each user an adversary would like to target seems hardly realistic (especially if 10,000 PIN values are considered). Therefore, and in order to get around this drawback, a solution would be to use the model built for one user against another user. Despite being limited by the number of users in our experiments, we made preliminary analyses in this direction. Interestingly, while for most pairs of users the resulting attacks failed and the PI estimates remained negative, we also found two pairs of users for which the models could be mutually exchanged. Namely, targeting User 1 (resp. User 6) with the model of User 6 (resp. User 1) leads to a PI of 0.0211 (resp. 0.0357). And targeting User 1 (resp. User 3) with the model of User 3 (resp. User 1) leads to a PI of 0.0281 (resp. 0.0246). Intuitively, this positive result is in part explained by the similar shapes of the first eigenvectors used to reduce the dimensionality when estimating these models. Overall, this problem of model portability is in fact similar to the problem of variability faced in the context of side-channel attacks against cryptographic devices [24]. Hence, it is an interesting scope for further research to investigate how advanced profiling techniques (e.g., profiling multiple users jointly with mixture models) could be used to increase the practical relevance of supervised/profiled attacks against the human brain.
Note that in this context, the impact of certain parameters in our methodology is susceptible to evolve too. For example, and as just mentioned, the user specificities that make the portability of the models challenging are in part due to the shape of the eigenvectors produced by the average PCA. So using the raw PCA may become more attractive in this case. As a preliminary experiment in this direction, we evaluated the PI when targeting a user with a model profiled with all the other users.10 As a result, we could obtain positive PI values for 5 out of 7 users, with both the average and the raw PCA (and similar informativeness). For illustration, the success rate curves for such a (successful and unsuccessful) profiling are given in Fig. 11. These results suggest that profiling classes of similar users is certainly a promising approach for realistic attacks.
Exemplary success rates per tag value for "all against one" profiling: (left: User 3, right: User 4)
5 From security issues to privacy issues
Before concluding, we make a short excursion from the evaluation of security toward the risks of privacy in BCI-based applications. That is, since the previous investigations exhibited significant differences between the EEG signals of different users reacting to their correct PIN values, we reverse the problem and now try to identify the users rather than the PIN values. For this purpose, we followed exactly the same methodology and estimated the modified perceived information \(\hat{\mathrm {PI}}(U;O)\). A plot of the mean and standard deviation traces corresponding to our 7 different users (similar to Figs. 3 and 4) is given in Fig. 12. And the evaluation of the partial PI estimates for each user (i.e., \(\hat{\mathrm {PI}}(U=u;O)\)) is given in Table 2.
Exemplary mean traces (left) and standard deviation traces (right) for the reaction of different users to the correct PIN value (Electrode P8 on top row and Electrode P7 on bottom row)
Clearly, we see that the EEG signals are also (in fact even more) informative in this case. Interestingly, this observation is consistent with the related literature trying to exploit EEG signals for biometric applications [28, 29].
Table 2 Partial PI estimates \(\hat{\mathrm {PI}}(U=u;O)\) per user, with average PCA and with raw PCA
6 Consequences and conclusions
The results in this paper lead to two conclusions.
First, and from the security point of view, our experiments show that PIN extraction attacks using BCIs are feasible, yet require several observations to succeed with high probability. In this respect, the difference between the complexity of successful supervised/profiled attacks (around 10 correct PIN observations) and unsupervised/non-profiled attacks (more in the hundreds range) is noticeable. It suggests the aggregation of users into classes for which the models are sufficiently similar as an interesting scope for further research (which would require larger scale experiments, with more users). In this setting, a better investigation of the impact of enumeration would also be worthwhile. Indeed, the reduction of the average rank of the correct PIN is also significant in our analyses. Therefore, combining side-channel attacks against the human brain with some enumeration power can reduce the number of observations required to succeed. (Roughly, we can assume that the average key rank will be reduced exponentially in the number of observations, as usually observed in side-channel attacks [30].)
More generally, our results suggest that extracting concrete PIN codes from EEG signals, while theoretically feasible and potentially damaging for some users and PINs, is not yet a very critical threat for systematic PIN extraction. This may change in the future, if/when massive amounts of BCI signals start to be collected. Besides, other targets with smaller cardinality could already be more worrying (e.g., extracting the knowledge of one relative among a set of unknown people displayed on a screen), because they avoid issues related to users losing their focus due to overly long experiments.
Second, and given the importance of profiling for efficient information extraction from EEG signals, our experiments also underline that privacy issues may be even more worrying than security ones in BCI-based applications. Indeed, when it comes to privacy, the adversary trying to identify a user is much less limited in his profiling abilities. In fact, any correlation between his target user and some feature found in a dataset is potentially exploitable. Furthermore, the amount and types of correlations that can be exhibited in this case are potentially unbounded, which makes the associated risks very hard to quantify. In this respect, the data minimization principle does not seem to be a sufficient answer: it may very well be that the EEG signals collected for one (e.g., gaming) activity can be used to reveal various other types of (e.g., medical, political) correlations. Anonymity is probably not the right answer either (since correlations with groups of users may be as discriminant as personal ones). And such issues are naturally amplified in case of malicious applications (e.g., it seems possible to design a BCI-based game where situations lead the users to incidentally reveal preferences). So overall, it appears as an important challenge to design tools that provide evidence of "fair treatment" when manipulating sensitive data such as EEG signals, which can be connected to emerging challenges related to computations on encrypted data [31].
The experiments described next were approved by the local Research Ethics Committee and performed in compliance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). All participants gave written informed consent.
We further checked systematically that other electrodes did not provide significantly more discriminating information, so that our conclusions would not be affected.
Since we used the small sample size PCA variant in [19].
Note that for completeness, we also considered simple Gaussian templates. Comparing nonparametric and parametric approaches was useful in our experiments, in order to gain confidence that the kernel density estimation is not capturing dataset-specific features. Yet, since no significant variation was noticed, the following sections will focus on the results obtained with kernel density estimation.
More precisely, the PI is an average metric, so what is needed is that each line of the PI matrix defined in [17] (corresponding to 6 different events in our study) is positive, which we confirmed with the success rate analysis.
We note that confidence intervals estimated based on a Gaussian assumption did not lead to different conclusions in our case study.
As mentioned in Sect. 2, this is due to the presence of another familiar event for this user, which he mentioned to us after the experiments were performed. Further analysis of this critical case was not possible since the experiment approved by our ethical board was conditioned on the fact that no user PIN was stored.
We may expect more singularities (such as the one of User 5) to appear and launch false alarms in case studies with more PIN values. Yet, this would not contradict the trend of a significantly reduced average rank for the correct PIN value.
Despite a positive PI, the key rank for User 7 also stabilizes to 2. Yet, in this case we observed that it is due to one single misleading observation that is not rejected by our outlier management.
Excluding User 5 because of its previously computed negative PI.
JL, CM and AM participated in the data collection part of the paper. JL, CM and FXS participated in the data analysis part of the paper. All co-authors participated in the writing of the paper. All authors read and approved the final manuscript.
Joseph Lange is a research engineer at Google. Clément Massart is a PhD student at UCL. André Mouraux and François-Xavier Standaert are Professors at UCL.
François-Xavier Standaert is a senior associate researcher of the Belgian Fund for Scientific Research (FNRS-F.R.S.). This work has been funded in parts by the FEDER Project CryptoMedia-UCL and the ERC Project 724725.
Appendix 1: Gaussian template results
See Table 3.
Gaussian counterpart to Table 1
See Figs. 13, 14, 15, 16 and 17.
Mean traces for different tag (left) and PIN (right) values. User 1 (top) to 8 (bottom), Electrode P7
Standard deviation traces for different tag values corresponding to Users 1, 3, 5 and 7 (left) and Users 2, 4, 6 and 8 (right), Electrode P7
Evolution of the PI in function of the size of the profiling set for Users 1 (top) to User 8 (bottom), using average PCA (left) and raw PCA (right)
UCLouvain, 1348 Louvain-la-Neuve, Belgium
Engel J, Kuhl DE, Phelps ME, Crandall PH (1982) Comparative localization of foci in partial epilepsy by PCT and EEG. Ann Neurol 12(6):529–537
Portas CM, Krakow K, Allen P, Josephs O, Armony JL, Frith CD (2000) Auditory processing across the sleep-wake cycle: simultaneous EEG and FMRI monitoring in humans. Neuron 28(3):991–999
Lin C, Wu R, Liang S, Chao W, Chen Y, Jung T (2005) EEG-based drowsiness estimation for safety driving using independent component analysis. IEEE Trans Circuits Syst 52-I(12):2726–2738
Coyle D, Príncipe JC, Lotte F, Nijholt A (2013) Guest editorial: brain/neuronal—computer game interfaces and interaction. IEEE Trans Comput Intell AI Games 5(2):77–81
Bonaci T, Calo R, Chizeck HJ (2015) App stores for the brain: privacy and security in brain–computer interfaces. IEEE Technol Soc Mag 34(2):32–39
Ienca M (2016) Hacking the brain: brain–computer interfacing technology and the ethics of neurosecurity. Ethics Inf Technol 18(2):117–129
Martinovic I, Davies D, Frank M, Perito D, Ros T, Song D (2012) On the feasibility of side-channel attacks with brain-computer interfaces. In: Kohno T (ed) USENIX security symposium. Proceedings. USENIX Association, pp 143–158
Farwell LA, Donchin E (1991) The truth will out: interrogative polygraphy (lie detection) with event-related brain potentials. Psychophysiology 28(5):531–547
Inzlicht M, McGregor I, Hirsh JB, Nash K (2009) Neural markers of religious conviction. Psychol Sci 20(3):385–392
Berlad I, Pratt H (1995) P300 in response to the subject's own name. Electroencephalogr Clin Neurophysiol 96(5):472–474
Kutas M, Hillyard SA (1980) Reading senseless sentences: brain potentials reflect semantic incongruity. Science 207:203–205
Kutas M, Hillyard SA (1984) Brain potentials during reading reflect word expectancy and semantic association. Nature 307:161–163
http://emotiv.com/. Last retrieved July 2016
http://neurosky.com/. Last retrieved July 2016
Mangard S, Oswald E, Popp T (2007) Power analysis attacks—revealing the secrets of smart cards. Springer, Berlin
http://www.chesworkshop.org/. Last retrieved July 2016
Standaert F, Malkin T, Yung M (2009) A unified framework for the analysis of side-channel key recovery attacks. In: Joux A (ed) EUROCRYPT. Proceedings, volume 5479 of LNCS. Springer, pp 443–461
Veyrat-Charvillon N, Gérard B, Renauld M, Standaert F (2012) An optimal key enumeration algorithm and its application to side-channel attacks. In: Knudsen LR, Wu H (eds) SAC. Proceedings, volume 7707 of LNCS. Springer, pp 390–406
Archambeau C, Peeters E, Standaert F, Quisquater J (2006) Template attacks in principal subspaces. In: Goubin L, Matsui M (eds) CHES 2006. Proceedings, volume 4249 of LNCS. Springer, pp 1–14
Chari S, Rao JR, Rohatgi P (2002) Template attacks. In: Kaliski Jr BS, Koç ÇK, Paar C (eds) CHES. Proceedings, volume 2523 of LNCS. Springer, pp 13–28
Batina L, Gierlichs B, Prouff E, Rivain M, Standaert F, Veyrat-Charvillon N (2011) Mutual information analysis: a comprehensive study. J Cryptol 24(2):269–291
Durvaux F, Standaert F, Veyrat-Charvillon N (2014) How to certify the leakage of a chip? In: Nguyen PQ, Oswald E (eds) EUROCRYPT. Proceedings, volume 8441 of LNCS. Springer, pp 459–476
Silverman BW (1986) Density estimation for statistics and data analysis. Chapman & Hall, London
Renauld M, Standaert F, Veyrat-Charvillon N, Kamel D, Flandre D (2011) A formal study of power variability issues and side-channel attacks for nanoscale devices. In: Paterson KG (ed) EUROCRYPT 2011. Proceedings, volume 6632 of LNCS. Springer, pp 109–128
Duc A, Faust S, Standaert F (2015) Making masking security proofs concrete—or how to evaluate the security of any leaking device. In: Oswald E, Fischlin M (eds) EUROCRYPT 2015. Proceedings, Part I, volume 9056 of LNCS. Springer, pp 401–429
Efron B, Tibshirani RJ (1994) An introduction to the bootstrap. CRC Press, Boca Raton
Standaert F, Koeune F, Schindler W (2009) How to compare profiled side-channel attacks? In: Abdalla M, Pointcheval D, Fouque P, Vergnaud D (eds) ACNS. Proceedings, volume 5536 of LNCS, pp 485–498
Marcel S, Millán JR (2007) Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation. IEEE Trans Pattern Anal Mach Intell 29(4):743–752
Paranjape RB, Mahovsky J, Benedicenti L, Koles Z (2001) The electroencephalogram as a biometric. In: Electrical and Computer Engineering, vol 2. IEEE, pp 1363–1366
Veyrat-Charvillon N, Gérard B, Standaert F (2013) Security evaluations beyond computing power. In: Johansson T, Nguyen PQ (eds) EUROCRYPT. Proceedings, volume 7881 of LNCS. Springer, pp 126–141
Smart NP (2016) Computing on encrypted data. Kayaks and Dreadnoughts in a sea of crypto (September 2016)
Module 3: Measurement
Define units of length and convert from one to another.
Perform arithmetic calculations on units of length.
Solve application problems involving units of length.
Define units of weight and convert from one to another.
Perform arithmetic calculations on units of weight.
Solve application problems involving units of weight.
Describe the general relationship between the U.S. customary units and metric units of length, weight/mass, and volume.
Define the metric prefixes and use them to perform basic conversions among metric units.
Solve application problems involving metric units of length, mass, and volume.
State the freezing and boiling points of water on the Celsius and Fahrenheit temperature scales.
Convert from one temperature scale to the other, using conversion formulas.
Suppose you want to purchase tubing for a project, and you see two signs in a hardware store: $1.88 for 2 feet of tubing and $5.49 for 3 yards of tubing. If both types of tubing will work equally well for your project, which is the better price? You need to know about two units of measurement, yards and feet, in order to determine the answer.
Length is the distance from one end of an object to the other end, or from one object to another. For example, the length of a letter-sized piece of paper is 11 inches. The system for measuring length in the United States is based on the four customary units of length: inch, foot, yard, and mile. Below are examples to show measurement in each of these units.
Inch/Inches: Some people donate their hair to be made into wigs for cancer patients who have lost hair as a result of treatment. One company requires hair donations to be at least 8 inches long. Another example is the frame size of a bike (the distance from the center of the crank to the top of the seat tube), which is usually measured in inches; a typical frame is 16 inches.
Foot/Feet: Rugs are typically sold in standard lengths. One typical size is a rug that is 8 feet wide and 11 feet long. This is often described as an 8 by 11 rug.
Yard/Yards: Soccer fields vary some in their size. An official field can be any length between 100 and 130 yards.
Mile/Miles: A marathon is 26.2 miles long.
You can use any of these four U.S. customary measurement units to describe the length of something, but it makes more sense to use certain units for certain purposes. For example, it makes more sense to describe the length of a rug in feet rather than miles, and to describe a marathon in miles rather than inches.
You may need to convert between units of measurement. For example, you might want to express your height using feet and inches (5 feet 4 inches) or using only inches (64 inches). You need to know the unit equivalents in order to make these conversions between units.
The table below shows equivalents and conversion factors for the four customary units of measurement of length.
Unit Equivalents | Conversion Factors (longer to shorter units of measurement) | Conversion Factors (shorter to longer units of measurement)
1 foot = 12 inches | [latex] \displaystyle \frac{12\ \text{inches}}{1\ \text{foot}}[/latex] | [latex] \displaystyle \frac{1\text{ foot}}{12\text{ inches}}[/latex]
1 yard = 3 feet | [latex] \displaystyle \frac{3\text{ feet}}{1\text{ yard}}[/latex] | [latex] \displaystyle \frac{\text{1 yard}}{\text{3 feet}}[/latex]
1 mile = 5,280 feet | [latex] \displaystyle \frac{5,280\text{ feet}}{1\text{ mile}}[/latex] | [latex] \displaystyle \frac{\text{1 mile}}{\text{5,280 feet}}[/latex]
Note that each of these conversion factors is a ratio of equal values, so each conversion factor equals 1. Multiplying a measurement by a conversion factor does not change the size of the measurement at all since it is the same as multiplying by 1; it just changes the units that you are using to measure.
Convert Between Different Units of Length
You can use the conversion factors to convert a measurement, such as feet, to another type of measurement, such as inches.
Note that there are many more inches for a measurement than there are feet for the same measurement, as feet is a longer unit of measurement. You could use the conversion factor [latex] \displaystyle \frac{\text{12 inches}}{\text{1 foot}}[/latex].
If a length is measured in feet, and you'd like to convert the length to yards, you can think, "I am converting from a shorter unit to a longer one, so the length in yards will be less than the length in feet." You could use the conversion factor [latex] \displaystyle \frac{\text{1 yard}}{\text{3 feet}}[/latex].
If a distance is measured in miles, and you want to know how many feet it is, you can think, "I am converting from a longer unit of measurement to a shorter one, so the number of feet would be greater than the number of miles." You could use the conversion factor [latex] \displaystyle \frac{5,280\text{ feet}}{1\text{ mile}}[/latex].
You can use the factor label method (also known as dimensional analysis) to convert a length from one unit of measure to another using the conversion factors. In the factor label method, you multiply by unit fractions to convert a measurement from one unit to another. Study the example below to see how the factor label method can be used to convert [latex] \displaystyle 3\frac{1}{2}[/latex] feet into an equivalent number of inches.
How many inches are in [latex] \displaystyle 3\frac{1}{2}[/latex] feet?
Show Solution
Begin by reasoning about your answer. Since a foot is longer than an inch, this means the answer would be greater than [latex] \displaystyle 3\frac{1}{2}[/latex].
Find the conversion factor that compares inches and feet, with "inches" in the numerator, and multiply.
[latex]3\frac{1}{2}\text{feet}\cdot\frac{12\text{ inches}}{1\text{foot}}=\text{? inches}[/latex]
Rewrite the mixed number as an improper fraction before multiplying.
[latex]\frac{7}{2}\text{feet}\cdot\frac{12\text{ inches}}{1\text{foot}}=\text{? inches}[/latex]
You can cancel similar units when they appear in the numerator and the denominator. So here, cancel the similar units "feet" and "foot." This eliminates this unit from the problem.
[latex]\frac{7}{2}\cancel{\text{feet}}\cdot\frac{12\text{ inches}}{\cancel{1\text{foot}}}=\text{? inches}[/latex]
Rewrite as multiplication of numerators and denominators.
[latex]\frac{7\cdot12\text{ inches}}{2}=\frac{84\text{ inches}}{2}=42\text{ inches}[/latex]
There are 42 inches in [latex] \displaystyle 3\frac{1}{2}[/latex] feet.
Notice that by using the factor label method you can cancel the units out of the problem, just as if they were numbers. You can only cancel if the unit being cancelled is in both the numerator and denominator of the fractions you are multiplying.
In the problem above, you cancelled feet and foot leaving you with inches, which is what you were trying to find.
What if you had used the wrong conversion factor?
[latex]\frac{7}{2}\text{ feet}\cdot\frac{1\text{ foot}}{12\text{ inches}}=\text{? inches}[/latex]
You could not cancel the feet because the unit is not the same in both the numerator and the denominator. So if you complete the computation, you would still have both feet and inches in the answer and no conversion would take place.
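If you like to check such conversions with a short script, here is a minimal sketch in Python. The dictionary simply encodes the table of equivalents above; the function name is our own choice.

```python
# Each unit expressed in inches, following the table of equivalents above.
INCHES_PER = {"inch": 1, "foot": 12, "yard": 36, "mile": 63360}

def convert_length(value, from_unit, to_unit):
    """Multiply by the conversion factor relating the two units."""
    return value * INCHES_PER[from_unit] / INCHES_PER[to_unit]

print(convert_length(3.5, "foot", "inch"))   # 42.0, matching the example above
print(convert_length(7, "foot", "yard"))     # 2.333..., i.e. 2 1/3 yards
```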
Here is another example of a length conversion using the factor label method.
How many yards is 7 feet?
Start by reasoning about the size of your answer. Since a yard is longer than a foot, there will be fewer yards. So your answer will be less than 7.
Find the conversion factor that compares feet and yards, with yards in the numerator.
[latex]7\text{ feet}\cdot\frac{1\text{ yard}}{3\text{ feet}}=\text{? yards}[/latex]
Cancel the similar units "feet" and "feet" leaving only yards.
[latex]7\cancel{\text{ feet}}\cdot\frac{1\text{ yard}}{3\cancel{\text{ feet}}}=\text{? yards}[/latex]
[latex]7\cdot\frac{1\text{ yard}}{3}=\text{? yards}[/latex]
7 feet equals [latex] \displaystyle 2\frac{1}{3}[/latex] yards.
Apply Unit Conversions With Length
There are times when you will need to perform computations on measurements that are given in different units. For example, consider the tubing problem given earlier. You must decide which of the two options is a better price, and you have to compare prices given in different unit measurements.
In order to compare, you need to convert the measurements into one single, common unit of measurement. To be sure you have made the computation accurately, think about whether the unit you are converting to is smaller or larger than the number you have. Its relative size will tell you whether the number you are trying to find is greater or lesser than the given number.
An interior decorator needs border trim for a home she is wallpapering. She needs 15 feet of border trim for the living room, 30 feet of border trim for the bedroom, and 26 feet of border trim for the dining room. How many yards of border trim does she need?
You need to find the total length of border trim that is needed for all three rooms in the house. Since the measurements for each room are given in feet, you can add the numbers.
[latex]15\text{ feet}+30\text{ feet}+26\text{ feet}=71\text{ feet}[/latex]
How many yards is 71 feet?
Reason about the size of your answer. Since a yard is longer than a foot, there will be fewer yards. Expect your answer to be less than 71. Use the conversion factor [latex]\frac{1\text{ yard}}{3\text{ feet}}[/latex]
[latex]\frac{71\text{ feet}}{1}\cdot\frac{1\text{ yard}}{3\text{ feet}}=\text{? yards}[/latex]
[latex]\frac{71\cancel{\text{ feet}}}{1}\cdot\frac{1\text{ yard}}{3\cancel{\text{ feet}}}={23}\frac{2}{3}\text{ yards}[/latex]
The next example uses the factor label method to solve a problem that requires converting from miles to feet.
Two runners were comparing how much they had trained earlier that day. Jo said, "According to my pedometer, I ran 8.3 miles." Alex said, "That's a little more than what I ran. I ran 8.1 miles." How many more feet did Jo run than Alex?
You need to find the difference between the distance Jo ran and the distance Alex ran. Since both distances are given in the same unit, you can subtract and keep the unit the same.
[latex]8.3\text{ miles}-8.1\text{ miles}=0.2\text{ mile}[/latex]
[latex]0.2\text{ mile}=\frac{2}{10}\text{ mile}[/latex]
Since the problem asks for the difference in feet, you must convert from miles to feet. How many feet is 0.2 mile? Reason about the size of your answer. Since a mile is longer than a foot, the distance when expressed as feet will be a number greater than 0.2.
[latex]\frac{2}{10}\text{ mile}=[/latex] ___ feet
Use the conversion factor [latex] \displaystyle \frac{5,280\text{ feet}}{1\text{ mile}}[/latex].
[latex]\frac{2\text{mile}}{10}\cdot\frac{5,280\text{ feet}}{1\text{ mile}}[/latex] = ___ feet
[latex]\frac{2\cancel{\text{mile}}}{10}\cdot\frac{5,280\text{ feet}}{1\cancel{\text{ mile}}}[/latex] = ___ feet
[latex]\frac{2}{10}\cdot\frac{5,280\text{ feet}}{1}[/latex] = ___ feet
Multiply. Divide.
[latex] \displaystyle \frac{2\bullet \text{5,280 feet}}{10\bullet 1}[/latex]= ___ feet
[latex] \displaystyle \frac{10,560\text{ feet}}{10}[/latex]= ___ feet
[latex] \displaystyle \frac{\text{10,560 feet}}{\text{10}}[/latex]= 1,056 feet
Jo ran 1,056 feet further than Alex.
In the next example we show how to compare the price of two different kinds of tubing for a project you are making. One type of tubing is given in cost per yards, and the other is given in cost per feet. It is easier to make a comparison when the units are the same, so we convert one price into the same units as the other. For problems like this, it doesn't matter which cost you convert, either one will work.
You are walking through a hardware store and notice two sales on tubing.
3 yards of Tubing A costs $5.49.
Tubing B sells for $1.88 for 2 feet.
Either tubing is acceptable for your project. Which tubing is less expensive?
Find the unit price for each tubing. This will make it easier to compare.
Tubing A
Find the cost per yard of Tubing A by dividing the cost of 3 yards of the tubing by 3.
3 yards = $5.49
[latex]\frac{5.49\div3}{3\text{ yards}\div3}=\frac{\$1.83}{1\text{ yard}}[/latex]
Tubing B is sold by the foot. Find the cost per foot by dividing $1.88 by 2 feet.
Tubing B
2 feet = $1.88
[latex]\frac{1.88\div2}{2\text{ feet}\div2}=\frac{\$0.94}{1\text{ foot}}[/latex]
To compare the prices, you need to have the same unit of measure.
Use the conversion factor [latex] \displaystyle \frac{3\text{ feet}}{1\text{ yard}}[/latex], cancel and multiply.
[latex]\frac{\$0.94}{1\text{ foot}}\cdot\frac{3\text{ feet}}{1\text{ yard}}=\frac{\$\text{____}}{\text{____ yard}}[/latex]
[latex]\frac{\$0.94}{1\cancel{\text{ foot}}}\cdot\frac{3\cancel{\text{ feet}}}{1\text{ yard}}=\frac{\$2.82}{1\text{ yard}}[/latex]
[latex]\$2.82\text{ per yard}[/latex]
Compare prices for 1 yard of each tubing.
Tubing A: $1.83 per yard
Tubing B: $2.82 per yard
Tubing A is less expensive than Tubing B.
In the problem above, you could also have found the price per foot for each kind of tubing and compared the unit prices of each per foot.
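The same comparison can also be scripted. The sketch below simply repeats the arithmetic above in Python; the variable names are ours.

```python
FEET_PER_YARD = 3

price_a_per_yard = 5.49 / 3                     # Tubing A: $5.49 for 3 yards
price_b_per_yard = (1.88 / 2) * FEET_PER_YARD   # Tubing B: $1.88 for 2 feet

print(round(price_a_per_yard, 2))   # 1.83
print(round(price_b_per_yard, 2))   # 2.82
print("Tubing A" if price_a_per_yard < price_b_per_yard else "Tubing B",
      "is less expensive per yard")
```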
You need to convert from one unit of measure to another if you are solving problems that include measurements involving more than one type of measurement. Each of the units can be converted to one of the other units using the table of equivalents, the conversion factors, and/or the factor label method shown in this topic. The four basic units of measurement that are used in the U.S. customary measurement system are: inch, foot, yard, and mile. Typically, people use yards, miles, and sometimes feet to describe long distances. Measurement in inches is common for shorter objects or lengths.
Revision and Adaptation. Provided by: Lumen Learning. License: CC BY: Attribution
Units of Length. Authored by: Developmental Math - An Open Program. Provided by: Monterey Institute for Technology and Education (MITE). Located at: https://dl.dropboxusercontent.com/u/28928849/MAT142/MeasurementNROC.pdf. Project: Developmental Math - An Open Program. License: CC BY: Attribution
meter-tape-measure-measure-gage. Authored by: EME. Located at: https://pixabay.com/en/meter-tape-measure-measure-gage-512181/. License: CC0: No Rights Reserved
Question ID 117507. Authored by: Volpe, Amy. License: CC BY: Attribution. License Terms: IMathAS Community License CC-BY + GPL
Question ID 986. Authored by: Lippman, David. License: CC BY: Attribution. License Terms: IMathAS Community License CC-BY + GPL
Question ID 126605. Authored by: Day, Alyson. License: CC BY: Attribution. License Terms: IMathAS Community License CC-BY + GPL
How to recognize a finitely generated abelian group as a product of cyclic groups.
Let $G$ be the quotient group $G=\mathbb{Z}^5/N$, where $N$ is generated by $(6,0,-3,0,3)$ and $(0,0,8,4,2)$. Recognize $G$ as a product of cyclic groups.
Honestly, I do not know how to solve these types of problems. But I know that this is somehow an application of the Fundamental theorem of finitely generated abelian groups. That theorem states the existence of such a product as $\mathbb{Z}^r\times \mathbb{Z}_{n_1}\times ... \times \mathbb{Z}_{n_s}$, but does not state a way to find $r,n_1,...,n_s$. I know how to use this theorem for a finite abelian group. But I could not find a way to solve these types of problems even in a book. Could somebody explain it to me?
abstract-algebra group-theory abelian-groups
Extremal
$\begingroup$ The solution is the Smith normal form of the relations matrix. Here are two related posts: 1 2 $\endgroup$ – André 3000 Dec 23 '15 at 6:20
$\begingroup$ See this one math.stackexchange.com/q/418353/8581 $\endgroup$ – mrs Dec 23 '15 at 6:53
$\begingroup$ consider the system 6x=0, -3y=0, 3z=0, 8z=0, 4t=0, 2u=0 and reduce it to echelon form to get diagonals dividing each other, then divide. i will try to do it later but now i am busy. $\endgroup$ – Adelafif Dec 23 '15 at 9:37
$\begingroup$ I do not understand the solutions of any of those links, since I am new to this topic. $\endgroup$ – Extremal Dec 23 '15 at 15:52
As noted by SpamIAm, the key to this will be the Smith normal form of a matrix. Basically, we write $N$ as the image of a matrix and then find an easier, similar matrix for which the solution is clearer.
Since we have 2 generators for $N$ we have a surjective homomorphism $\mathbb{Z}^2\to N$ given by the matrix $A$ of generators for $N$: $$A=\begin{pmatrix}6&0\\0&0\\-3&8\\0&4\\3&2\end{pmatrix}$$ and $N=A\mathbb{Z}^2$. Now, this matrix itself is kind of messy, but it turns out that we can use a similar matrix, and if we use a simple enough matrix the problem becomes easier.
Now you find the Smith normal form by using row and column operations to simplify as much as possible. I got the following matrix but you should check my work yourself: $$A=\begin{pmatrix}6&0\\0&0\\-3&8\\0&4\\3&2\end{pmatrix} \sim\begin{pmatrix}3&2\\0&0\\-3&8\\0&4\\6&0\end{pmatrix} \sim\begin{pmatrix}1&2\\0&0\\-11&8\\-4&4\\6&0\end{pmatrix} \sim\begin{pmatrix}1&0\\0&0\\0&30\\0&12\\0&-12\end{pmatrix} \sim\begin{pmatrix}1&0\\0&6\\0&0\\0&0\\0&0\end{pmatrix}=A'$$ Now we have $\mathbb{Z}^5/N=\mathbb{Z}^5/(A\mathbb{Z}^2)\cong\mathbb{Z}^5/(A'\mathbb{Z}^2)$. But it should be easy to see that $A'\mathbb{Z}^2=\mathbb{Z}\times(6\mathbb{Z})$ which gives us $$G\cong \dfrac{\mathbb{Z}^5}{\mathbb{Z}\times(6\mathbb{Z})}=\mathbb{Z}^3\times\mathbb{Z}_6$$
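If you want to double-check a computation like this, computer algebra systems can produce the Smith normal form directly. Here is a small sketch with SymPy, assuming a recent version where smith_normal_form in sympy.matrices.normalforms accepts a rectangular integer matrix:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[6, 0], [0, 0], [-3, 8], [0, 4], [3, 2]])
print(smith_normal_form(A, domain=ZZ))
# Expecting diagonal entries 1 and 6 (and zero rows), so that
# Z^5 / A(Z^2) is isomorphic to Z/1 x Z/6 x Z^3 = Z_6 x Z^3.
```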
edited Jan 2 '16 at 7:28
kaiten
$\begingroup$ So, if we have the surjective isomorphism $\mathbb{Z}^k\rightarrow N$, should we always translate $A $ by row and column operations to a $k\times k$ diagonal matrix? $\endgroup$ – Extremal Jan 2 '16 at 15:16
$\begingroup$ A diagonal matrix where each entry divides the next (Smith normal form) is the best but any diagonal matrix should be easy enough to work with. Also, you can only use operations in the ring you're working over. So in this case only integers. (so you can't divide the second row by 6 for example) $\endgroup$ – kaiten Jan 2 '16 at 15:41
kaiten has already provided a good answer to your question, so I just want to make some remarks about the general theory and show why computation of the Smith normal form gives the desired answer.
Given a free module $M$ with rank $n < \infty$ over a PID $R$, every submodule $N \leq M$ is also free of finite rank. Moreover, there is a basis $y_1, \ldots, y_n$ of $M$ and scalars $a_1, \ldots, a_m \in R$ such that $a_1 \mid \cdots \mid a_m$ and $a_1 y_1, \ldots, a_m y_m$ is a basis of $N$. This is what Keith Conrad calls an aligned basis. This blurb of his has some great pictures illustrating aligned vs. unaligned bases which I've copied below.
Once we've found an aligned basis, writing $M/N$ as a direct sum of cyclic $R$-modules is easy: \begin{align} \frac{M}{N} &= \frac{Ry_1 \oplus \cdots \oplus Ry_m \oplus \cdots \oplus Ry_n}{Ra_1 y_1 \oplus \cdots \oplus Ra_m y_m} \cong \frac{Ry_1}{Ra_1 y_1} \oplus \cdots \oplus \frac{Ry_m}{Ra_m y_m} \oplus R y_{m+1} \oplus \cdots \oplus R y_n\\ &\cong \frac{R}{a_1 R} \oplus \cdots \oplus \frac{R}{a_m R} \oplus R^{n-m} \tag{1} \end{align}
Okay, so how do we find an aligned basis and compute scalars $a_1, \ldots, a_m$? As I mentioned in my comment, this can be achieved by computing the Smith normal form of the matrix of the homomorphism \begin{align*} \varphi: \mathbb{Z}^2 &\to \mathbb{Z}^5 = M\\ \begin{pmatrix} 1\\ 0\end{pmatrix}, \begin{pmatrix} 0\\ 1\end{pmatrix} &\mapsto \begin{pmatrix}6\\0\\-3\\0\\3\end{pmatrix}, \begin{pmatrix}0\\0\\8\\4\\2\end{pmatrix} \end{align*} which has matrix $$ A=\begin{pmatrix}6&0\\0&0\\-3&8\\0&4\\3&2\end{pmatrix} $$ with respect to the standard bases for $\mathbb{Z}^2$ and $\mathbb{Z}^5$. The Smith normal form might seem strange, but really it's just a version of performing a change of basis like in linear algebra. As kaiten showed, by performing row and column operations, we can turn $A$ into a diagonal matrix $D$. Recall that a row (resp. column) operation can be achieved by multiplying $A$ on the left (resp. right) by an elementary matrix $E$. (Simply apply the same row or column operation to the identity matrix to determine $E$.) Multiplying these elementary matrices together yields invertible matrices $P$ and $Q$ such that $PAQ = D$ is a diagonal matrix. In your example, we have \begin{align*} PAQ = \left(\begin{array}{rrrrr} 0 & 0 & 0 & 0 & 1 \\ -1 & 0 & 1 & -3 & -1 \\ -3 & 0 & -4 & 7 & 2 \\ -1 & 0 & -2 & 4 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{array}\right) \begin{pmatrix}6&0\\0&0\\-3&8\\0&4\\3&2\end{pmatrix} \left(\begin{array}{rr} -1 & -2 \\ 2 & 3 \end{array}\right) = \left(\begin{array}{rr} 1 & 0 \\ 0 & 6 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{array}\right) = D \, . \end{align*} What do these row and column operations mean? They correspond to new choices of basis for $\mathbb{Z}^2$ and $\mathbb{Z}^5$ such that $\varphi$ can be written particularly simply. More explicitly, since $P$ and $Q$ are invertible, they are change of basis matrices for some bases we seek to determine. Writing $Q = {_{\mathcal{E}}[\text{id}]_\mathcal{B}}$ (where $\mathcal{E} = \{e_1, e_2\}$ is the standard basis for $\mathbb{Z}^2$), by the definition of the matrix of a linear map, we see that $\mathcal{B} = \{x_1 = -e_1 + 2e_2, x_2 = -2e_1 + 3e_2\}$. Similarly, by computing $$ P^{-1} = \left(\begin{array}{rrrrr} -6 & -2 & 2 & -5 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 19 & 5 & -7 & 16 & 0 \\ 8 & 2 & -3 & 7 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{array}\right) $$ we see that $P = {_\mathcal{C}[\text{id}]_\mathcal{F}}$ (where $\mathcal{F} = \{f_1, \ldots, f_5\}$ is the standard basis for $\mathbb{Z}^5$) for the basis \begin{align*} \mathcal{C} = \{y_1 &= -6f_1 + 19 f_3 + 8 f_4 + f_5,\\ y_2 &= -2 f_1 + 5f_3 + 2f_4,\\ y_3 &= 2 f_1 -7 f_3 -3 f_4,\\ y_4 &= -5 f_1 + 16 f_3 + 7 f_4,\\ y_5 &= f_2 \}\, . \end{align*} You can check that $\varphi(x_1) = y_1$ and $\varphi(x_2) = 6y_2$, which verifies that $N = \text{img}(\varphi) = \mathbb{Z}y_1 \oplus \mathbb{Z} 6y_2$. Then $(1)$ gives $M/N \cong \mathbb{Z}^3 \oplus \mathbb{Z}/6\mathbb{Z}$, which agrees with answer given by kaiten.
André 3000
Integrating field-based heat tents and cyber-physical system technology to phenotype high night-time temperature impact on winter wheat
Nathan T. Hein1,
Dan Wagner2,
Raju Bheemanahalli1,
David Šebela1,
Carlos Bustamante1,
Anuj Chiluwal1,
Mitchell L. Neilsen2 &
S. V. Krishna Jagadish ORCID: orcid.org/0000-0002-1501-09601
Plant Methods volume 15, Article number: 41 (2019)
Many agronomic traits have been bred into modern wheat varieties, but wheat (Triticum aestivum L.) continues to be vulnerable to heat stress, with high night-time temperature (HNT) stress shown to have large negative impact on yield and quality. Global mean temperature during the day is consistently warming with the minimum night temperature increasing at a much quicker pace. Currently, there is no system or method that allows crop scientists to impose HNT stress at key developmental stages on wheat or crops in general under field conditions, involving diverse genotypes and maintaining a dynamic temperature differential within the tents compared to the outside.
Through implementation of a side roll up and a top ventilation system, heaters, and a custom cyber-physical system using a Raspberry Pi, the heat tents were able to consistently maintain an elevated temperature through the night to differentiate heat stress impact on different genotypes. When the tents were placed in their day-time setting they were able to maintain ambient day-time temperature without having to be removed and replaced on the plots. Data averaged from multiple sensors over three consecutive weeks resulted in a consistent but small temperature difference of 0.25 °C within the tents, indicating even distribution of heat. While targeting a temperature differential of 4 °C, the tents were able to maintain an average differential of 3.2 °C consistently throughout the night-time heat stress period, compared to the outside ambient conditions. The impact of HNT stress was confirmed through a statistically significant yield reduction in eleven of the twelve genotypes tested. The average yield under HNT stress was reduced by 20.3% compared to the controls, with the highest reduction being 41.4% and a lowest reduction of 6.9%. Recommendations for fine-tuning the system are provided.
This methodology is easily accessible and can be widely utilized due to its flexibility and ease of construction. This system can be modified and improved based on some of the recommendations and has the potential to be used across other crops or plants as it is not reliant on access to any hardwired utilities. The method tested will help the crop community to quantify the impact of HNT stress, identify novel donors that induce tolerance to HNT and help the breeders develop crop varieties that are resilient to changing climate.
Winter wheat (Triticum aestivum L.), with centuries of genetic improvement, has acquired a suite of favorable traits essential for adaptation to a wide range of environmental conditions. Some of the key developments in wheat breeding and domestication includes larger grain size and a phenotype without seed shattering [1]. Further improvements benefitting from technological advances over the last century by introducing high yielding varieties, fertilizer, pesticides, and modern equipment, have resulted in translating wheat into one of the major staple cereals of the world. Over the last six decades (1961 and 2016) the overall production of wheat has increased by over 500 million tonnes with only a 15.9 million ha increase in harvested area [2]. Improved genetic and management interventions have transformed the average wheat yield from 1.09 t ha−1 in 1961 to 3.41 t ha−1 in 2016 [2]. In spite of the dramatic increase in overall wheat production, the rate of increase in production is unable to meet the current or the predicted global demand for the future [3]. Even though the annual per capita consumption of wheat is expected to drop by about one percent, the overall annual consumption of wheat is predicted to increase by almost 90 Mt between 2014 and 2024, as a result of increasing population and demand from the biofuel industry [4].
The two main components determining wheat yield potential are the number of grains per meter square and the average weight of each grain [5]. Many genetic, environmental, and field management decisions can alter physiological processes that determine grain number and weight and eventually grain yield. Some of these factors include nutrient availability, temperature, water and solar radiation, fertilizer, and genotype [6]. Among the environmental factors, high temperatures during flowering and grain filling have shown to induce significant loss in grain numbers and weight [7, 8]. Although the overall average temperature has warmed across the globe, recent analysis has shown that the daily minimum temperature (occurring during the night) is increasing at a faster rate than the daily maximum temperature [9, 10]. Hence, it is important and timely to understand the impact of high night-time temperature (HNT) on crops in general and in the sensitive field crops including winter wheat.
Between 1979 and 2003, the annual mean maximum temperature increased by 0.35 °C and the annual mean minimum temperature increased by 1.13 °C at the International Rice Research Institute experimental farm, Philippines. As a result, the rice yield decreased by 10% for every 1 °C increase in mean minimum temperature during the dry season [11]. The same study found that the increase in mean maximum temperature did not have the same effect on yield as the mean minimum temperature [11]. Recent studies on the effects of HNT stress on different field grown crops have, until now, used (i) field-based tents with a static system [12,13,14,15] or (ii) much smaller tents with a cyber-physical system that captures single genotype responses to HNT stress and has to be physically placed and removed daily [16]. The impact of HNT and the physiological route through which yield and quality losses occur has been documented in rice using field-based heat tents [12,13,14, 17]. Although the existing field tents at IRRI, Philippines, can potentially include a moderate number of genotypes, the HNT treatment imposition is static at a predetermined target temperature while the outside temperature can vary quite dynamically. A cyber-physical system is a computer system that incorporates electrical engineering and computer science to bridge the digital and physical worlds through the use of embedded technology [18]. Through the use of software and sensors, the cyber-physical system is able to interact with and react to its environment. The only field experiment involving wheat, HNT, and a cyber-physical system used 3 m × 1.3 m × 1.3 m structures that were manually placed on plots of a single variety of wheat called Baguette 13 for 12 h every night from the third detectable stem node to 10 days post-flowering. This experiment recorded a 7% reduction in grain yield along with a reduction in biomass and grain number [16].
Phenotyping facilities such as rain-out shelters for quantifying drought stress responses [19, 20] and the use of naturally occurring hotter summer conditions have been extensively used to study the impact of high day-time temperature (HDT) stress across crops [21,22,23]. However, there doesn't exist a large field-based phenotyping system that can capture larger genetic diversity for HNT responses at critical growth and developmental stages and at the same time induce a dynamic HNT treatment closely following the outside ambient temperature. Hence, our major objective was to develop and test a robust field-based cyber-physical system by modifying a currently available HDT stress heat tent. The overall aim was to impose a HNT stress of 4 °C automatically following the dynamic changes in the open field i.e., outside the structures and simultaneously capturing genetic diversity for HNT stress impact on physiological parameters and grain yield. While the system and methodology developed is tested on winter wheat, there is potential that this technology is scalable and can be extended to crops or plants of interest to the scientific community, although this is yet to be evaluated.
Heat tent
The heat tents that were used for this specific project were built and used in previous studies to quantify HDT effects on wheat and sorghum [8, 24, 25]. Each tent was built using a steel frame for the base and heavy piping to create the sidewalls and apex. The heat tents were constructed in the Gothic style with vertical framing every 1.2 m along the sidewall. The heat tents are 7.2 m long, 5.4 m wide, and 3.0 m tall at the apex. Lock channel and wiggle wire was installed around the available edges of the frame to enclose the tent. The heat tents were enclosed using polyethylene film (6 mil Sun Master® Pull and Cut Greenhouse Film) with 92% light transmission according to the manufacturer. New plastic was installed on all the tents before the start of the experiment. The main components in converting the HDT tents into HNT included the top vent, side roll vents, heating system, and a cyber-physical thermostat controller system operated by a Raspberry Pi.
Top vent
In order to maintain ambient conditions throughout the day within the tents, the top vent (Fig. 1.1) was kept functional from the HDT set up. In previous experiments, the top vent was used to prevent excess heating above a set temperature by opening the vent when the desired temperature target was met. However, in the HNT set up, the top vent was opened throughout the day to maintain temperature within the tent closer to ambient conditions to prevent confounding our HNT research by imposing HDT stress. The vent was forced closed during the night to impose and maintain a consistent level of elevated temperature compared to the outside ambient temperature.
Vent system layout. A HNT heat tent during daytime. 1: Venture Manufacturing 12 V linear actuator used to open top vent. 2: Handle used to manually operate side roll up ventilation. 3: Side rolled up with polypropylene rope securing it against the tent
A secondary frame was built that was 0.6 m wide and 7.2 m long from the same material as the structure of the heat tent. The frame was placed at the top of the apex with the bottom hinged to the tent structure. This setup allowed the vent to open up and away from the apex allowing as much heat as possible to escape through the vent (Fig. 1A). Two linear actuator motors (Venture Manufacturing) were attached to the vent framework (Fig. 1.1). When powered, these motors would open and close the vent framework via the hinges that connect the vent to the main structure. The power for these linear actuators was provided by a 12v VRLA battery that was connected to a solar panel attached to the front apex of the roof. The solar panel charged the 12v battery during the day, allowing the battery to be charged and used throughout the experiment. The battery power was run through a thermostat controller (Dayton Temperature Control 4LZ95A) (Fig. 2.1). During the day the thermostat was set to 0 °C to ensure the vent stayed open throughout the day and at night at 44 °C to keep the vent closed throughout the night.
Heating system layout. A Layout of heating system within the Tent. 1: Dayton Thermostat Controller used to raise and lower the top vent. 2: Lasko 20 in. Box Fan. 3: Hobo temperature/relative humidity sensor and propane tank with the Sunrite™ by Mr. Heater® 15,000 BTU tank top portable propane heater. 4: Thermosphere 5000-W Ceiling-Mount garage heater. 5: Thermostat Controller System built using a Raspberry Pi
Side roll vents
The purpose of the side roll vents was to allow for maximum air flow through the wheat canopy during the day. Combined with the top vent, the side roll up vents on both sides of the tent allowed ambient air to flow through the tent and forced hot air to be expelled through the top vent. Pressure treated 2″ × 6″ (5.1 cm × 15.24 cm) wooden boards were installed along the very bottom of the side walls with screws that were rated to attach wood to metal (Everbilt #14 2-3/4 in. Phillips Flat-Head Self-Drilling Screw). The boards used were 3.04 m in length, which required multiple boards to cover the length of the side walls. The boards were attached to each other using deck screws to ensure stability (Deckmate #9 × 3 in. Star Flat-Head Wood Deck Screws). These wooden boards were then run across the side wall at 1.5 m above the base and secured in the same fashion (Fig. 1.3).
The horizontal lock channel and wiggle wire was installed on the upper third of the outside face of the top row of wooden boards with metal to wood screws (Teks #12 1 in. Hex-Head Self-Drilling Screws). The vertical lock channel along the end walls was then installed down along the frame, so the end wall plastic could be secured all the way to the ground. It was at this point during the set up that the new plastic was applied on all the tents. The side walls were done first with enough plastic hanging down from the top row of wooden boards to reach the ground. The plastic was secured along the vertical lock channel on the side walls from the top to the bottom row of wooden boards and then left loose below that.
Eye screws (Everbilt #206 × 1-3/8 in. Zinc-Plated Steel Screw Eye) were installed on both the top and bottom row of boards at either end and then alternating between the top and the bottom set of boards to form a zigzag pattern (Fig. 1.3). The top row of eye screws were placed through the hanging plastic while the bottom row of eye screws did not go through the plastic so that the plastic could be rolled up.
To create the metal bar that the extra plastic would be rolled up on resulting in the side roll vents, three pieces of 3.5 cm × 3.2 m 17-gauge galvanized piping were combined using Teks #12 1 in. Hex-Head Self-Drilling Screws. Two of the pieces were used in full while the third was cut to 1.52 m in length allowing an extra 0.3 m of piping on either end of the heat tent. In total, for each side wall a 7.92 m length of piping was used. Each pole had a tapered end and a full end. The tapered ends of the poles were inserted into the full ends and then screwed together with the Tek screws. The screws were then wrapped in duct tape to ensure the screw heads would not rip the plastic.
A handle was added to one end of the roll up bar to rotate the bar to facilitate the rolling up and lowering of the side walls (Fig. 1.2). The 3.5 cm × 3.2 m 17-gauge galvanized piping was cut into two 0.3 m lengths and then attached to the end using an aluminum gate ell. Two pieces of piping and two aluminum gate ells were used to create the handle for each roll up, on either sides of the tent. The 7.92 m long pipe was then laid along the side walls of the heat tent on top of the excess plastic that was draped on the ground. The plastic was evenly wrapped around the pole in a clockwise manner and duct taped every 1 m to attach the pipe firmly with the plastic.
A piece of polypropylene rope was attached to the top eye screws on the wooden boards on the end with the handle and a loop made on the other end so that it could be attached to a screw on the interior of the tent to hold the roll up when the side walls were open. The handle was then rotated in a clockwise rotation to roll the plastic up to the top row of the wooden boards and then secured with the loop that was previously put in place. The same polypropylene rope was then run from the top eye screw on one end of the top wooden board to a similar screw on the bottom wooden board and then pulled through the eye screws in the zig zag pattern that was made previously. Once the rope had reached the far end, it was run through both the top and bottom eye screws, pulled tight, and secured. This rope was necessary to keep the roll up flush against the heat tent during the rolling process, and also prevented billowing when the side walls were rolled down (Fig. 1.3). The end walls then had their polyethylene film applied over the top of the sidewall plastic so as to seal the ends of the heat tents (Additional file 1: Fig. S1).
Before any decisions could be made on the size and type of heating system, the amount of heat necessary to raise the tent to the targeted temperature was calculated using the formula \( Q = \frac{\Delta T \times A}{R} \). The amount of heat (Q), in British Thermal Units per hour (BTU h−1), required to attain the target temperature differential (ΔT in °F) was calculated from the surface area of the heat tent (A in ft2) and the capacity of the covering of the heat tent to resist heat flow (R, in inch-pound units). Some manufacturers or materials may not provide an R value but rather a heat loss value (U), which is equal to 1/R. The heat tents had a surface area of 1100 square feet and an R value of 0.87. The target maximum temperature difference inside the tent from the outside ambient temperature during the night was 4 °C or 7.2 °F. Using these values in the above formula, the minimum heat required to raise the temperature inside the tent by 4 °C was 9103 BTU h−1 or 2667 W (1 BTU h−1 = 0.293 W).
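As a quick check of this sizing calculation, the arithmetic can be reproduced in a few lines of Python (the same language as the tent controller). The numbers are taken directly from the text; the function itself is only an illustrative helper and is not part of the published controller code.

```python
def required_heat_btu_per_hr(delta_t_f, area_ft2, r_value):
    """Q = (delta_T * A) / R, in BTU per hour."""
    return delta_t_f * area_ft2 / r_value

# Values from the text: 7.2 F (4 C) target differential, 1100 ft^2 surface area, R = 0.87
q_btu = required_heat_btu_per_hr(7.2, 1100, 0.87)
q_watts = q_btu * 0.293               # 1 BTU h-1 = 0.293 W
print(round(q_btu), round(q_watts))   # approximately 9103 BTU h-1 and 2667 W
```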
The Thermosphere Ceiling-Mount Garage Heater was installed in the tent hanging from a horizontal structural pipe two-thirds of the distance from the apex (Fig. 2.4). The capacity of this unit was 5000 W, 17,065 BTU h−1, 240 V (model number PH-950). In addition to the heater, a single box fan (Lasko Ltd.) was hung at the opposite end of each tent to ensure air within the tent was circulated throughout the night (Fig. 2.2). These fans drew 75 W each and ran off a 110 V circuit, with power provided by the generator (Additional file 2: Fig. S2).
This experiment had three independent heat tents running overnight powered with a Caterpillar XQ35 Generator which provided 27 kW of power consistently using 8.8 L of diesel per hour. The diesel was stored in a 3785-liter tank with an electrical pump that was battery operated and used to refill the generator (Additional file 2: Fig. S2). The generator was wired to the heaters using Southwire 8/2 AWG UF-B Underground Feeder Cable with Ground and Southwire 10/2 AWG UF-B Underground Feeder Cable with Ground depending on the length of run between the generator and the heater. The box fans were provided power with HDX 16/3 Indoor/Outdoor Extension Cords.
Although the calculations were accurate for the amount of heat needed to raise the temperature of a typical greenhouse, the modifications made to the heat tent structure affected its ability to retain heat. Hence, an additional source of heat was necessary to maintain the target differential. A Sunrite™ by Mr. Heater® 15,000 BTU Tank Top Portable Propane Heater (Fig. 2.3) was added to achieve the target temperature. The propane heater provided 10,000 BTU h−1 on low, 12,000 BTU h−1 on medium, and 15,000 BTU h−1 on the high setting. The propane heater was set to its medium setting; it provided a radiant heat source but was not equipped with a forced-air component and could potentially pose a fire hazard at ground level. Hence, the propane tank and heater were placed on a stand built with cinderblocks to raise them above the height of the wheat and directly below the path of the air blown by the box fans. The propane tank top heater increased the interior temperature towards the target via radiant heating and the air movement from the fan, while the final target differential of 4 °C was achieved and regulated by the electric heater turning on and off as needed.
A low-level fire hazard did exist with the use of a diesel generator and propane tank top heater. However, the diesel generator itself did not create a fire risk unless a complete component failure occurred. The generator was self-contained on a trailer and had adequate insulation and protective measures to minimize risk. On the other hand, the fire hazard posed by the propane tank can be completely eliminated by increasing the wattage of the original electric heater, eliminating the need for a propane tank top heater.
Another aspect related to utilizing a propane tank top heater is the possibility of CO2 build up within the tent and its effects on the plants. Direct estimation of CO2 concentration using at least two sensors within each tent would have been an ideal approach to ensure that there were no unintended effects of elevated CO2 on the plants. Higher levels of CO2 would warrant the addition of more ventilation to allow for fresh air to enter the tents and a ducted ventilation tube for the gasses produced during the combustion of propane. However, no additional ventilation was required for the heat tents as they were not airtight and allowed for ample ventilation. The top vent did not seal when closed and the side roll ups were taped shut on the end walls but were not sealed along the side walls. This inherent ventilation in the design allowed for a continuous flow of fresh air and created the necessity for an extra heat source. This is evident with the increase in BTUs required to raise the interior temperature by 4 °C compared to the exterior. In a completely sealed environment with the same volume as the heat tent, it would only take 8854.4 BTUs to achieve the target temperature and overcome conductive heat loss. However, our system used over 29,000 BTUs which correlates to over 20,000 BTUs being needed to overcome perimeter heat loss and air infiltration heat loss. At that rate of heating, the tent had to complete an air exchange every 1.32 min. While CO2 was not directly measured, the combination of frequent air exchanges i.e., the top vent not being sealed which allowed for the warm CO2 to escape, and the side roll vents not being sealed which allowed the CO2 to escape when cooled would have prevented any excess CO2 accumulating within the tent and compounding the effects of the HNT stress.
Temperature controller system
Overall description/functionality
A cyber-physical system is a physical mechanism controlled by computer-based algorithms in real time. This cyber-physical system was designed to monitor the temperature of the outside environment and regulate the temperature within the tent. When the temperature inside the tent was less than 4 °C warmer than the outside, the system turned the heater on to increase or maintain the indoor temperature differential. Otherwise, the heater was turned off and temperature monitoring continued.
This system was designed around a simple, plug-and-play philosophy using a Raspberry Pi, a low-cost, high-performance computer system developed by the Raspberry Pi Foundation [26]. When the system received power, it booted up and began monitoring the outside and inside temperatures. If the system failed to start, which only occurred twice during the HNT stress period, then the faults were isolated into two categories: Raspberry Pi failures and sensor failures. The Raspberry Pi failures were manually tested by checking for sufficient power source (5 V, 2.1A) and verifying the integrity of the microSD card. Sensor failures were detected by checking the power, electrical ground, and data connections to the Raspberry Pi. The system's simplicity was exhibited in both hardware and software. The system could be separated into its material components rather simply; the Raspberry Pi, solid-state relay, sensors, and 240 V relay could be isolated by disconnecting at most five wires and could be improved and modified easily without affecting the other components. Software could be modified very rapidly through the Python script (Additional file 3) and uploaded to the Raspberry Pi within minutes by modifying the microSD card.
Hardware components and connections
The thermostat system consisted of several hardware components: a Raspberry Pi, solid-state relay, 24VAC adapter, 240 V relay, and two DS18B20 temperature sensors. Additionally, the system was placed within a plastic housing for water- and dust-proofing (Fig. 3). The Raspberry Pi was connected to the solid-state relay by three wires: 5 V power, electrical ground, and a signal wire. A high bit on the signal wire forced the relay to complete the connection to the heater. The following pin assignments were based on the physical numbering scheme on the Raspberry Pi Model 3B:
Waterproof enclosure for Raspberry Pi and electrical system. The system was contained within a plastic box that latched closed (left) to protect the underlying circuitry and opened (right) to allow access to the system. Inside each enclosure was a battery pack, USB to microUSB cable to supply power, one Raspberry Pi computer with touchscreen display, a ribbon cable to extend connections to the computer, and a blue solid-state relay. A hole was drilled in the side of the enclosure to facilitate electrical connections to the heater circuit; this hole was filled with caulk for water protection
The 5 V connection was routed to pin 2.
The ground connection was routed to pin 9.
The signal connection was routed to pin 11.
The solid-state relay was connected to the 240 V relay and 24VAC adapter. This relay caused the other relay to engage and helped complete the circuit to the heater, as the single relay itself could not support the heater's electrical load. Two ports from the solid-state relay were used: common and normally open (NO), which were chosen for safety because the heater circuit would not normally be electrically active. The common lead was connected to one lead of the 24VAC adapter, and the NO lead was connected directly to the 24VAC lead of the 240 V relay. In this manner, the solid-state relay completed a circuit between the 24VAC adapter and the 240 V relay (Fig. 4).
System wiring diagram
The 24VAC adapter was connected to power via the generator cables. The adapter provided power to the 240 V relay and heater circuit. An unpolarized electrical plug was attached to the input terminals. Electrical wire (14-gauge) was connected to each terminal of the plug and then connected to the generator lines; the ground lead was connected to the generator ground, and the power lead was connected to the black 120 V line of the generator. The 240 V relay had four connections: two inputs and two outputs to the heater. One input has been described above and was directly connected to the NO lead of the solid-state relay. The common input terminal was connected directly to the other terminal of the 24VAC adapter. The common output terminal was wired to one of the generator's 120 V lines, and the NO terminal was connected to the corresponding line on the heater. The neutral and second 120 V lines were connected directly from the generator to the heater; the relay switched a single 120 V line to complete the circuit (Fig. 4).
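The full controller script is provided in Additional file 3; the fragment below is only a minimal sketch of how the signal pin described above could be driven from Python using the RPi.GPIO library, with the physical (BOARD) pin numbering matching the assignments listed earlier. It is an illustration of the wiring logic, not the exact code used in the experiment.

```python
import RPi.GPIO as GPIO

RELAY_SIGNAL_PIN = 11            # physical pin 11 carries the signal to the solid-state relay

GPIO.setmode(GPIO.BOARD)         # use the physical pin numbering scheme described above
GPIO.setup(RELAY_SIGNAL_PIN, GPIO.OUT, initial=GPIO.LOW)

def set_heater(on):
    """Drive the signal pin high to close the relay (heater on) or low to open it (heater off)."""
    GPIO.output(RELAY_SIGNAL_PIN, GPIO.HIGH if on else GPIO.LOW)
```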
The two DS18B20 temperature sensors were wired in parallel and shared the same three pin connections. A 4.7 kΩ pull-up resistor was connected between the power and data lines and prevented a floating wire state and a wire short [27]. The following pin assignments were similar to the solid-state relay (a short sketch of how these sensors can be read in software follows the list):
The 3.3 V connection was routed to pin 1.
The ground connection was split and routed to pins 6 and 39.
The data connection was routed to pin 7.
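DS18B20 sensors on the 1-Wire bus appear as device directories under /sys/bus/w1/devices, and the manufacturer/tutorial code referenced below [29] reads them in that way. The sketch here is an illustrative reimplementation of that idea, not the exact code used in the experiment.

```python
import glob

def read_ds18b20(device_dir):
    """Return one DS18B20 reading in degrees Celsius, or None if the CRC check fails."""
    with open(device_dir + "/w1_slave") as f:
        lines = f.readlines()
    if len(lines) < 2 or not lines[0].strip().endswith("YES"):
        return None                                  # invalid reading (bad CRC)
    pos = lines[1].find("t=")
    return int(lines[1][pos + 2:]) / 1000.0 if pos != -1 else None

# Every DS18B20 on the bus shows up as a 28-* device directory
sensor_dirs = glob.glob("/sys/bus/w1/devices/28-*")
temperatures = [read_ds18b20(d) for d in sensor_dirs]
```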
Software description
The software was written in a Python script, version 2.7 (Additional file 3) [28]. This allowed for rapid prototyping and quick implementation of the sensor readings. When the Raspberry Pi was booted, the software first polled the system bus for the sensors and added them to a list, which allowed for more sensors to be connected to the system. Next, the signal pin of the solid-state relay was set-up via software for toggling: otherwise, the pin would either be on or off. Then, the data log file was opened and a blank line was appended to delimit the start of a new session of logging. This log file was in comma separated value format for easy importing to Microsoft Excel or any other spreadsheet program.
After the setup was completed, the software entered its main loop. First, it attempted to read the sensors connected to it using manufacturer code [29]. If the software detected an invalid sensor reading, the error was displayed once the interface was initialized. If the sensor readings were valid, the differential of the indoor and outdoor temperatures was measured and the heater was either turned on or off depending on the value; a value below 4 °C caused the heater to be turned on, and a value above 4 °C turned the heater off. Then, the interface was created and updated with the new indoor and outdoor temperatures, as well as the status of the heater (Additional file 4: Fig. S3). If an error occurred with the sensors in the previous steps, then the heater status displayed the word "SENSOR" and the connections from the Pi to each sensor were manually verified.
If the elapsed time reached the logging interval, then the current time, indoor and outdoor temperatures, and the heater's status were recorded to file. If the amount of time elapsed had not reached the interval, a nested loop was executed. The system would go into a sleep mode for half a second and the process was repeated until the target interval had been reached. Once the interval had been reached and the status was recorded, the next loop iteration would commence.
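Putting the pieces together, the control logic described above can be summarized in the following sketch. It mirrors the behavior of the published script (4 °C target differential, half-second sleep, periodic logging), but the helper callables read_inside, read_outside, set_heater and log are placeholders introduced here for illustration, not names from the actual code (see Additional file 3 for the script itself).

```python
import time

TARGET_DIFF_C = 4.0      # desired interior-minus-exterior temperature differential
LOG_INTERVAL_S = 60      # logging interval in seconds (illustrative value)

def control_loop(read_inside, read_outside, set_heater, log):
    """Monitor temperatures, toggle the heater, and log readings at a fixed interval."""
    last_log = time.time()
    while True:
        t_in, t_out = read_inside(), read_outside()
        if t_in is None or t_out is None:
            status = "SENSOR"                       # flag a sensor fault on the display
        else:
            heater_on = (t_in - t_out) < TARGET_DIFF_C
            set_heater(heater_on)
            status = "ON" if heater_on else "OFF"
        if time.time() - last_log >= LOG_INTERVAL_S:
            log(time.time(), t_in, t_out, status)   # comma-separated values in the real script
            last_log = time.time()
        time.sleep(0.5)                             # half-second sleep between checks
```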
A field experiment was conducted at the Agronomy research farm at Manhattan (39°11′N, 96°35′W), Kansas. Five prominent Kansas varieties (Everest, Larry, SY-Monument, WB 4458, and WB-Cedar), five breeding lines (Jagger X060724, KS070736 K-1, KS070729 K-26, KS070717 M-1, and P1 X060725), and two exotic genotypes (Tascosa and Tx86A5606) known for differential heat stress response during grain filling [8, 30] were used to study the impact of post-flowering HNT stress under field conditions. Wheat genotypes were planted on 17th October 2018 using a tractor and research plot grain drill with a global positioning system (GPS) guidance system. Each replicate plot per genotype comprised six rows, with each row being 4 m long (the 6 rows occupied 1.15 m, with rows placed 0.19 m apart). The plots were top dressed with 45 kg N ha−1 (urea ammonium nitrate solution) on 17th February 2018. Both the control and the stress plots were irrigated throughout the experiment, even during the HNT stress period, either through rainfall or manually once every week, to avoid confounding by water-deficit stress. The spread in days to flowering across the twelve genotypes was not more than 5 days. HNT treatment was imposed during grain filling using the custom-designed heat tents. The twelve winter wheat genotypes were exposed to an average night-time differential of +3.2 °C (interior; inside heat tents) during grain filling (10 days after 50% flowering to physiological maturity), compared to the ambient night-time temperature (exterior; outside heat tents).
Biological data collection
Chlorophyll fluorescence
Five representative plants for each genotype per replicate were randomly selected and tagged at flowering for measuring flag leaf and the main spike chlorophyll fluorescence (Chl-F) in both interior and exterior conditions. Chl-F data was recorded between 1000 and 1300 h by using a portable hand-held fluorometer (FluorPen FP 100, Photon System Instruments, Ltd., Brno, Czech Republic), which gives the effective quantum yield of PSII (QY). Saturating light [intensity approximately 3000 µmol (photons) m−2 s−1] and measuring light [intensity approximately 0.09 µmol (photons) m−2 s−1] were used to measure both maximal fluorescence yield (FM′) and actual fluorescence yield (Ft) of light adapted samples, respectively. Subsequently, the effective quantum yield of PSII (QY) was calculated using the formula \( QY = \left( {FM^{{\prime }} - Ft} \right)/FM^{{\prime }} = \Delta F/FM^{{\prime }} \) [31]. Electron transport rate (ETR) which indicated the capacity of overall photosynthesis was calculated by using the formula as described previously [31].
$$ ETR = QY \times PAR \times 0.84 \times 0.5 $$
where QY is the effective quantum yield of PSII, PAR is actual photosynthetic active radiation (µmol (photons) m−2 s−1), 0.84 is an approximate level of light being absorbed by the leaf, and 0.5 is the ratio of PSII to PSI reaction centers. Three measurements were taken along the middle of the flag leaf blade and spikes on each replicate plant and averaged.
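For clarity, the two fluorescence-derived quantities can be written as small helper functions. These simply restate the formulas above and assume PAR is supplied in µmol (photons) m−2 s−1; they are illustrative and not part of any instrument or analysis software referenced in the text.

```python
def effective_quantum_yield(fm_prime, ft):
    """QY = (FM' - Ft) / FM', from light-adapted maximal and actual fluorescence yields."""
    return (fm_prime - ft) / fm_prime

def electron_transport_rate(qy, par, absorbed_fraction=0.84, psii_fraction=0.5):
    """ETR = QY * PAR * 0.84 * 0.5, in umol electrons m-2 s-1."""
    return qy * par * absorbed_fraction * psii_fraction
```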
Grain yield
At physiological maturity (Zadoks growth scale 9-ripening; not dented by thumbnail), replicates of 1-m row length from the four central rows were manually cut in each plot to minimize border effects. Spikes were separated from the stems, dried for 96 h at 40 °C, and threshed using an LD 180 Laboratory thresher (Wintersteiger, Ried im Innkreis, Austria), and grain yield was recorded.
The experiment was conducted in a split-plot randomized complete block design with temperature as the main plot factor and genotype as the sub-plot factor. Replicated observations for each trait were analyzed for means and standard errors. ANOVA was performed using GenStat [32].
To induce heat stress using the components described above, the process of converting the structures from its day-time setting to its night-time setting began at 7:15 PM every night. A single side wall from each tent was lowered and sealed using duct tape. Alternatively, this could also be accomplished by running a strip of Velcro along the end wall and adhering it to the sidewall plastic. Following the sidewall roll down, the top vent was closed to seal the roof. After all the tents had a single sidewall down and the overhead vents lowered and sealed, the portable power packs were plugged into the Pis to start the systems, to initiate the temperature monitoring programs. Then the generator was turned on to supply power to each tent. The Pi system was considered operational if the electric heater was running with the red indicator light. The additional propane heater was turned on after all the other parts of the system were fully operational. As a final step the second side wall was lowered and sealed to fully enclose the tent for the night (Fig. 5b).
Day setting versus night setting. a Heat Tent in day-time setting with top vent and side wall vents opened up. b Heat tent during night-time when heat stress was imposed with the top vent and side wall vents closed
At 5:45 AM every morning, the generator was shut down, so that no electricity was flowing through the system. The sidewalls were unsealed from the end walls, rolled up, and secured at the top with polypropylene rope, the propane heater was shut down, the top vent opened (Fig. 5a), and the battery from the Pi system was removed to shut it down for the day. The batteries were removed every day but only recharged every other day off site from the experiment. The propane tanks were refilled after three consecutive nights of HNT stress.
The system was monitored through a combination of sensors in the interior of the tent and the exterior. One HOBO UX 100-011 temperature/relative humidity data logger (Onset Computer Corp., Bourne, MA) with a sensitivity of 0.2 °C was placed in a central location on the experimental plot to log the ambient air temperature and humidity. Similarly, two HOBO sensors were placed within each tent to log both day-time and night-time temperature and humidity. The Pi temperature sensing and controller system was also equipped with one sensor inside and one sensor outside each tent, each with an accuracy of 0.5 °C. In total, each tent was equipped with three sensors. The two main goals of this field set up were to induce a HNT stress with a pre-decided target differential supported by the Pi's programming, and to ensure an even distribution of the heat throughout the night to minimize a temperature gradient or irregular warming patterns within the tent. In addition, the aim during the day-time was to ensure temperatures within the tent were close to the outside ambient temperature.
Distribution of heat
To ensure that the tent was not experiencing a gradient in temperature within the tent, two different HOBO sensors were placed within the wheat plots on opposite sides of the tents directly above the canopy to measure the temperature throughout the night and day at 15-min interval. The distribution of heat was enabled through the box fan that operated from one end and the electric heater that ran on the opposite side. The electric heater with an inbuilt forced air system complemented the box fan on the other end to distribute the heat evenly throughout the tent.
The difference between the two HOBO sensors within the tent was on average 0.75 °C (Fig. 6a). The HOBO sensors at the start of the treatment recorded a large differential of 2.5 °C on average due to the heating system turning on to bring the tent up to its target differential temperature and possibly due to one of the sensors placed in the path of the heater's air flow. Once the tents reached the target temperature (roughly around 9 PM) the difference between the two HOBO temperature loggers leveled out and were within the range of 0.5 and 0.75 °C. In addition, the distribution of heat was also confirmed by comparing the average of two HOBO temperature readings with the interior Pi system sensor. Overall average difference between the HOBO sensors and the Pi sensors was -0.25 °C, with the Pi system sensors reading 0.25 °C warmer than the HOBOs (Fig. 6b). A consistent but small temperature difference was recorded within the tent indicating even distribution of heat.
Temperature comparison between sensors. a HOBO versus HOBO HNT differential within the same tent, b Interior HOBO versus Interior Pi temperature differential, c Interior Pi versus Exterior Pi temperature during HNT stress, d Interior HOBO versus Exterior HOBO temperature during HNT stress
Temperature differential
The second goal of the heat tent system was to maintain a set temperature differential between the interior of the heat tent and the exterior. The tents were programmed to maintain a temperature differential of 4 °C throughout the night. Comparing the Pi system sensors, the tents were able to maintain an average differential of 3.2 °C consistently throughout the heat stress period (Fig. 6c). The figure shows that the interior and exterior temperatures were almost equal at 8:00 PM, when the tents were sealed and the heating system was turned on. An hour after the start, the temperature reached a stable differential and then followed the exterior temperature throughout the night, while still maintaining the differential.
This effect can also be seen in Fig. 6d which is a comparison between the temperature recorded from HOBO sensors placed within and outside the heat tent. The elevated interior temperature follows the exterior temperature through the night and in the morning both outside and the inside tent temperatures return to the same level, after the tents are opened. The HOBO sensors also measured an average of 3.2 °C temperature differential throughout the experiment, providing additional independent validation of the system's successful imposition of HNT stress.
Ambient day time temperature and relative humidity
The main concern during the day for the heat tent infrastructure was its ability to regulate the air temperature inside the tent, so that the wheat inside the tent was exposed to conditions similar to those outside. The readings from both HOBO data loggers inside each tent were averaged; compared with the exterior HOBO, the interior of the tent was on average 0.8 °C warmer during the day.
The interior temperature of the tents warmed quicker in the morning than the exterior temperature (Fig. 7a). This rise in temperature compared to the ambient temperature can be credited to the greenhouse effect from the plastic on the heat tents and the typical lack of air movement in the morning hours. With low air movement there is less pressure differential between the inside and outside of the top vent, resulting in much slower circulation of air out of the tent. This effect caused the interior temperature of the tents to reach a maximum of 2.54 °C higher than the exterior by 7:40 AM, with both becoming equal by 12:05 PM after which the average exterior temperature was higher than the interior temperature. The temperatures stayed almost equal from noon until 6:30 PM. After 6:30 PM the temperature differential between the inside of the tents compared to the exterior rose until the heat stress began. The rise in temperature in the later hours of the day can be attributed to the tent retaining the day's heat longer due to its covering versus the open exterior.
Ambient temperature and relative humidity comparison. a Day-time ambient temperature comparison between the interior HOBOs and the exterior HOBO. b The average relative humidity of the interior of the tent HOBOs compared to the exterior HOBO. c Comparison of the Vapor Pressure Deficit between the interior and exterior of the heat tents
On average, the tent's relative humidity was 15.6% higher than the ambient average (Fig. 7b). The difference between the interior and exterior peaked towards the end of the HNT stress exposure at 6:00 AM and then reduced throughout the morning until noon. After noon, there was a consistently higher level of humidity inside the tent until 6:00 PM, after which the difference receded until the stress imposition began again. It is also apparent through the data that the relative humidity differential between the interior and the exterior was greatest during the HNT stress period when the tent was sealed. Using the relative humidity and air temperature data from inside and outside of the heat tents, the vapor pressure deficit (VPD) was calculated through both the stress and non-stress periods. The VPD was highest during the day when the temperature was at its warmest and the relative humidity at its lowest (Fig. 7c). To account for any variation in evaporation and transpiration due to the changes in RH and VPD within the tents, the plots were irrigated weekly from flowering until harvest.
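The text does not state which saturation vapor pressure formula was used for the VPD calculation, so the sketch below uses the common Tetens approximation as an assumption. It computes VPD (kPa) from paired air temperature and relative humidity readings such as those logged by the HOBO sensors.

```python
import math

def saturation_vapor_pressure_kpa(temp_c):
    """Saturation vapor pressure in kPa (Tetens approximation)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd_kpa(temp_c, rh_percent):
    """Vapor pressure deficit in kPa from air temperature (C) and relative humidity (%)."""
    return saturation_vapor_pressure_kpa(temp_c) * (1.0 - rh_percent / 100.0)
```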
Physiological and yield response to HNT
A significant (P < 0.001) decline in the electron transport rate (ETR) of the flag leaves was observed after seven days of treatment imposition (Fig. 8a). Among the tested genotypes, KS070717 M-1 and Larry recorded the lowest percent reduction (< 1%) in flag leaf ETR under heat stress compared to control, whereas Tascosa (14.3%) followed by KS 070729 K-26 (13%) recorded the highest reduction in flag leaf ETR (Fig. 8a). Similarly, a significant (P < 0.001) treatment impact was recorded for main spike ETR, ranging from 5.7% (KS 070729 K-26) to 19.4% (KS070717 M-1) with HNT compared to control, with an average reduction of 14.3% (Fig. 8b). Significant (P < 0.001) effects of temperature and genotype were observed for grain yield, but with no treatment × genotype interaction (Fig. 8c). Eleven of the twelve genotypes (all except WB 4458) responded to heat stress treatment by reducing their grain yield, with an average reduction of 20.3%, ranging between 6.9% in P1 X060725 and 41.4% in KS070717 M-1 (Fig. 8c). Under HNT stress exposure during grain filling (Fig. 8c), WB 4458 had the highest grain yield (394.2 g m−2) followed by SY-Monument (352.5 g m−2), whereas the lowest grain yield was recorded in KS070717 M-1 (202.4 g m−2).
Physiological and yield response to HNT. Flag leaf (a) and spike (b) electron transport rate recorded 7 days after treatment imposition and grain yield (c) of twelve winter wheat genotypes under exterior (control) and interior (HNT treatment) conditions. Analysis of variance with least significant difference (LSD) is presented for each trait. T treatment, G genotype, ns non-significant. *P < 0.05; ***P < 0.001. Bars indicate mean ± standard error (n = 3)
System improvements
With further refinement, the system can be scaled up for phenotyping larger genetic diversity, and the gap between the target average temperature differential (4 °C) and the differential achieved (3.2 °C) can be narrowed through the minor improvements outlined below.
Adding more temperature sensors will help obtain an average temperature from multiple points within the tent, which will lead to improved heating accuracy. The total number of sensors that can be attached to an individual Pi is 117, which allows ample capacity for a single Raspberry Pi to handle a much larger and more extensive setup [33]. Additional sensors that measure relative humidity, CO2 and light intensity would track microclimatic parameters within the tent and help maintain target experimental conditions.
Adding another fan can improve uniformity in distribution of heat within the tent. This will help the extra sensors accurately determine the temperature within the tent and improve the system's capabilities when designing a larger experiment.
Higher precision sensors—The sensors that were used within the system connected to the Pi had an accuracy of 0.5 °C. Sensors with higher accuracy will result in less variable temperature readings and when averaged with the additional sensors throughout the tent a much more precise reading of the temperature can be attained.
Increasing the recording frequency in the Pi system. This will help by turning the heater on and off as frequently as necessary. The changes made to the tents to help maintain ambient air temperature during the day adds to the heat loss during night. The longer amount of time between readings from the Pi system results in a larger swing in temperature while the heater is off. With more frequent readings, the heater would be able to modulate the temperature more efficiently.
Heater that receives input air from the exterior via venting—This will help mitigate the increased relative humidity and possible buildup of CO2 within the tent. This would allow fresh air with an ambient level of relative humidity and CO2 to enter the system and be circulated throughout the tent instead of the same air from within the tent being drawn into the heater and then dispersed.
A robust field-based system using roll up and down side ventilation, top ventilation, a heating system, and a cyber-physical system built around a Raspberry Pi was constructed and was able to effectively impose HNT stress while automatically following the dynamic changes of the outside environment. The top and side ventilation also allowed the system to maintain near ambient temperatures throughout the day without having to physically remove the tent from the field, while still being able to seal the tents overnight, providing HNT stress exposure on multiple wheat genotypes in a field setting. The system and the methodology followed indicate that crop agronomic and physiological responses to HNT can be effectively captured under realistic field conditions to help ongoing breeding efforts aimed at improving crop adaptation to changing climates. This system can be altered and improved based on some of the above recommendations. Although the methodology has only been tested on wheat, since it is not reliant on access to any hardwired utilities and is reliable, simple, and cost-effective (see list of the parts and cost per tent in Additional file 5), this system can be used to phenotype other crops or plants for HNT responses.
HNT:
high night-time temperature
HDT:
high day-time temperature
VPD:
vapor pressure deficit
RH:
relative humidity
Eckardt NA. Evolution of domesticated bread wheat. Plant Cell. 2010;22:993. https://doi.org/10.1105/tpc.110.220410.
Food and Agriculture Organization of the United Nations Crops Database. http://www.fao.org/faostat/en/#data/QC. Accessed 19 June 2018.
Food and Agriculture Organization of the United Nations Crops Database. http://www.fao.org/faostat/en/#data/OA. Accessed 19 June 2018.
OECD. Statistical Annex. In: OECD-FAO Agricultural Outlook 2015. Paris: OECD Publishing; 2015. https://doi.org/10.1787/agr_outlook-2015-table121-en.
Slafer GA. Physiology of determination of major wheat yield components. In: Buck HT, Nisi JE, Salomon N, editors. Wheat production in stressed environments. Dordrecht: Springer; 2005. p. 557–65. https://doi.org/10.1007/1-4020-5497-1_68.
Golba J, Studnicki M, Gozdowski D, Madry W, Rozbicki J. Influence of genotype, crop management, and environment on winter wheat grain yield determination based on components of yield. Crop Sci. 2018;58(2):660–9. https://doi.org/10.2135/cropsci2017.07.0425.
Yang X, Tian Z, Sun L, Chen B, Tubiello FN, et al. The impacts of increased heat stress events on wheat yield under climate change in China. Clim Change. 2017;140:605–20. https://doi.org/10.1007/s10584-016-1866-z.
Bergkamp B, Impa SM, Asebedo AR, Fritz AK, Jagadish SVK. Prominent winter wheat varieties response to post flowering heat stress under controlled chambers and field-based heat tents. Field Crops Res. 2018;222:143–52. https://doi.org/10.1016/j.fcr.2018.03.009.
Houghton JT, Ding Y, Griggs DJ, Nouger M, Van der Linden PJ, Xiaosu D, editors. Climate change 2001: the scientific basis. Cambridge: Cambridge University Press; 2001.
Easterling D, Horton B, Jones PD, Peterson TC, Karl TR, Parker DE, et al. Maximum and minimum temperature trends for the globe. Science. 1997;277:364–7. https://doi.org/10.1126/science.277.5324.364.
Peng S, Huang J, Sheehy JE, Lazza RC, Visperas RM, Zhong X, et al. Rice yields decline with higher night temperature from global warming. Proc Nat Acad Sci USA. 2004;101:9971–5. https://doi.org/10.1073/pnas.0403720101.
Shi W, Muthurajan R, Rahman H, Selvam J, Peng S, Zou Y, et al. Source-sink dynamics and proteomic reprogramming under elevated night temperatures and their impact on rice yield and grain quality. New Phytol. 2013;197:825–37. https://doi.org/10.1111/nph.12088.
Shi W, Yin X, Struik P, Xie F, Schmidt R, Jagadish SV. Grain yield and quality responses of tropical hybrid rice to high night-time temperature. Field Crops Res. 2016;190:18–25. https://doi.org/10.1016/j.fcr.2015.10.006.
Shi W, Yin X, Struik PC, Solis C, Xie F, Schmidt RC, et al. High day- and night-time temperatures affect grain growth dynamics in contrasting rice genotypes. Exp Bot. 2017;68:5233–45. https://doi.org/10.1093/jxb/erx344.
Bahuguna R, Solis C, Shi W, Jagadish SV. Post-flowering night respiration and altered sink activity account for high night temperature-induced grain yield and quality loss in rice (Oryza sativa L.). Physiol Plant. 2017;159:59–73. https://doi.org/10.1111/ppl.12485.
Garcia G, Dreccer MF, Miralles DJ, Serrago RA. High night temperatures during grain number determination reduce wheat and barley grain yield: a field study. Glob Change Biol. 2015;21:4153–64. https://doi.org/10.1111/gcb.13009.
Chaturvedi A, Bahuguna R, Shah D, Pal M, Jagadish SVK. High temperature stress during flowering and grain filling offsets beneficial impact of elevated CO2 on assimilate partitioning and sink-strength in rice. Sci Rep. 2017;7:8227. https://doi.org/10.1038/s41598-017-07464-6.
Cyber Physical Systems Security. Department of Homeland Security. https://www.dhs.gov/science-and-technology/csd-cpssec (2016). Accessed 4 Feb 2019.
Zhan A, Schneider H, Lynch JP. Reduced lateral root branching density improves drought tolerance in maize. Plant Phys. 2015;168:1603–15. https://doi.org/10.1104/pp.15.00187.
Hoober D, Wilcox K, Young K. Experimental droughts with rainout shelters: a methodological review. Ecosphere. 2018;9:88. https://doi.org/10.1002/ecs2.2088.
Bheemanahalli R, Sathishraj R, Tack J, Nalley LL, Muthurajan R, Jagadish SVK. Temperature thresholds for spikelet sterility and associated warming impacts for sub-tropical rice. Agric For Meteorol. 2016;221:122–30. https://doi.org/10.1016/j.agrformet.2016.02.003.
Tack J, Barkley A, Nalley LL. Effect of warming temperatures on US wheat yields. Proc Natl Acad Sci USA. 2015;112:6931–6. https://doi.org/10.1073/pnas.1415181112.
Tack J, Lingenfelser J, Jagadish SVK. Disaggregating sorghum yield reductions under warming scenarios and exposes narrow genetic diversity in US breeding programs. Proc Natl Acad Sci USA. 2017;114:9296–301. https://doi.org/10.1073/pnas.1706383114.
Prasad P, Djanaguiraman M, Perumal R, Ciapitti IA. Impact of high temperature stress on floret fertility and individual grain weight of grain sorghum: sensitive stages and thresholds for temperature and duration. Front Plant Sci. 2015;6:820. https://doi.org/10.3389/fpls.2015.00820.
Sunoj VS, Impa SM, Chiluwal A, Perumal R, Prasad PVV, Jagadish SVK. Resilience of pollen and post-flowering response in diverse sorghum genotypes exposed to heat stress under field conditions. Crop Sci. 2017;57:1658–69. https://doi.org/10.2135/cropsci2016.08.0706.
Raspberry Pi Foundation—About Us. Raspberry Pi Foundation. 2012. https://www.raspberrypi.org/about/. Accessed 4 Feb 2019.
Pull-up Resistors. Sparkfun Electronics. https://learn.sparkfun.com/tutorials/pull-up-resistors. Accessed 9 July 2018.
Python Software Foundation. Python Language Reference. https://www.python.org. Accessed 4 Feb 2019.
Monk S, Fried L. Adafruit's Raspberry Pi Lesson 11: temperature sensing. https://learn.adafruit.com/adafruits-raspberry-pi-lesson-11-ds18b20-temperature-sensing/software. Accessed 10 July 2018.
Impa SM, Sunoj J, Krassovskaya I, Bheemanahalli R, Obata T, et al. Carbon balance and source-sink metabolic changes in winter wheat exposed to high night-time temperature. Plant Cell Environ. 2019. https://doi.org/10.1111/pce.13488.
Genty B, Briantais JM, Baker NR. The Relationship between the quantum yield of photosynthetic electron transport and quenching of chlorophyll fluorescence. Biochim Biophys Acta Gen Subj. 1989;990:87–92. https://doi.org/10.1016/S0304-4165(89)80016-9.
Genstat. VSNi. https://www.vsni.co.uk/software/genstat/. Accessed 2 Feb 2019.
Monk S, Fried L. Adafruit's Raspberry Pi Lesson 4: GPIO setup. https://learn.adafruit.com/adafruits-raspberry-pi-lesson-4-gpio-setup/configuring-i2c. Accessed 1 Feb 2019.
NTH, RB, SVKJ conceptualized methodology. NTH designed field structure. DW designed cyber-physical system. NTH, DW, RB, DS, CB, and AC constructed and performed experiment. All authors contributed towards drafting the manuscript. All authors read and approved the final manuscript.
We thank the financial support by NSF Award No. 1736192 to Krishna Jagadish, Kansas State University. Contribution No. 19-028-J from the Kansas Agricultural Experiment Station.
Custom Python Thermostat Controller Code can be found at https://github.com/danwwagner/thermostat-controllers/tree/v2.0.0 or https://doi.org/10.5281/zenodo.1323816. All other datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Department of Agronomy, 2004 Throckmorton Plant Sciences Center, Kansas State University, 1712 Claflin Road, Manhattan, KS, 66506-5501, USA
Nathan T. Hein, Raju Bheemanahalli, David Šebela, Carlos Bustamante, Anuj Chiluwal & S. V. Krishna Jagadish
Department of Computer Science, Kansas State University, Manhattan, KS, 66506, USA
Dan Wagner & Mitchell L. Neilsen
Correspondence to S. V. Krishna Jagadish.
Additional file 1: Fig. S1. Heat tent before and after end wall plastic application.
Additional file 2: Fig. S2. Caterpillar XQ35 Generator and 3785-l diesel tank.
Additional file 3. Thermostat Controller Python Script.
Additional file 4: Fig. S3. Raspberry Pi displaying interior temperature, exterior temperature, and status of the heater.
Additional file 5. Parts list and cost per tent.
Hein, N.T., Wagner, D., Bheemanahalli, R. et al. Integrating field-based heat tents and cyber-physical system technology to phenotype high night-time temperature impact on winter wheat. Plant Methods 15, 41 (2019) doi:10.1186/s13007-019-0424-x
Received: 06 August 2018
Cyber-physical system
Heat tents
Dynamo Models of the Solar Cycle
Paul Charbonneau
Living Reviews in Solar Physics volume 7, Article number: 3 (2010)
A Later Version of this article was published on 17 June 2020
The Original Version of this article was published on 13 June 2005
This paper reviews recent advances and current debates in modeling the solar cycle as a hydromagnetic dynamo process. Emphasis is placed on (relatively) simple dynamo models that are nonetheless detailed enough to be comparable to solar cycle observations. After a brief overview of the dynamo problem and of key observational constraints, we begin by reviewing the various magnetic field regeneration mechanisms that have been proposed in the solar context. We move on to a presentation and critical discussion of extant solar cycle models based on these mechanisms. We then turn to the origin and consequences of fluctuations in these models, including amplitude and parity modulation, chaotic behavior, intermittency, and predictability. The paper concludes with a discussion of our current state of ignorance regarding various key questions relating to the explanatory framework offered by dynamo models of the solar cycle.
Scope of review
The cyclic regeneration of the Sun's large-scale magnetic field is at the root of all phenomena collectively known as "solar activity". A near-consensus now exists to the effect that this magnetic cycle is to be ascribed to the inductive action of fluid motions pervading the solar interior. However, at this writing nothing resembling consensus exists regarding the detailed nature and relative importance of various possible inductive flow contributions.
My assigned task, to review "dynamo models of the solar cycle", is daunting. I will therefore interpret this task as narrowly as I can get away with. This review will not discuss in any detail solar magnetic field observations, the physics of magnetic flux tubes and ropes, the generation of small-scale magnetic field in the Sun's near-surface layers, hydromagnetic oscillator models of the solar cycle, or magnetic field generation in stars other than the Sun. Most of these topics are worthy of full-length reviews, and do have a lot to bear on "dynamo models of the solar cycle", but a line needs to be drawn somewhere. With the exception of recent cycle prediction schemes based explicitly on dynamo models, I also chose to exclude from consideration the voluminous literature dealing with prediction of sunspot cycle amplitudes, including the related literature focusing exclusively on the mathematical modelling of the sunspot number time series, in a manner largely or even entirely decoupled from the underlying physical mechanisms of magnetic field generation.
This review thus focuses on the cyclic regeneration of the large-scale solar magnetic field through the inductive action of fluid flows, as described by various approximations and simplifications of the partial differential equations of magnetohydrodynamics. Most current dynamo models of the solar cycle rely heavily on numerical solutions of these equations, and this computational emphasis is reflected throughout the following pages. Many of the mathematical and physical intricacies associated with the generation of magnetic fields in electrically conducting astrophysical fluids are well covered in a few recent reviews (see Hoyng, 2003; Ossendrijver, 2003), and so will not be addressed in detail in what follows. The focus is on models of the solar cycle, seeking primarily to describe the observed spatio-temporal variations of the Sun's large-scale magnetic field.
What is a "model"?
The review's very title demands an explanation of what is to be understood by "model". A model is a theoretical construct used as thinking aid in the study of some physical system too complex to be understood by direct inferences from observed data. A model is usually designed with some specific scientific questions in mind, and asking different questions about a given physical system will, in all legitimacy, lead to distinct model designs. A well-designed model should be as complex as it needs to be to answer the questions having motivated its inception, but no more than that. Throwing everything into a model — usually in the name of "physical realism" — is likely to produce results as complicated as the data coming from the original physical system under study. Such model results are doubly damned, as they are usually as opaque as the original physical data, and, in addition, are not even real-world data!
Nearly all of the solar dynamo models discussed in this review rely on severe simplifications of the set of equations known to govern the dynamics of the Sun's turbulent, magnetized fluid interior. Yet all of them are bona fide models, as defined here.
A brief historical survey
While regular observations of sunspots go back to the early seventeenth century, and discovery of the sunspot cycle to 1843, it is the landmark work of George Ellery Hale and collaborators that, in the opening decades of the twentieth century, demonstrated the magnetic nature of sunspots and of the solar activity cycle. In particular, Hale's celebrated polarity laws established the existence of a well-organized toroidal magnetic flux system, residing somewhere in the solar interior, as the source of sunspots. In 1919, Larmor suggested the inductive action of fluid motions as one of a few possible explanations for the origin of this magnetic field, thus opening the path to contemporary solar cycle modelling. Larmor's suggestion fitted nicely with Hale's polarity laws, in that the inferred equatorial antisymmetry of the solar internal toroidal fields is precisely what one would expect from the shearing of a large-scale poloidal magnetic field by an axisymmetric and equatorially symmetric differential rotation pervading the solar interior. However, two decades later T.S. Cowling placed a major hurdle in Larmor's path — so to speak — by demonstrating that even the most general purely axisymmetric flows could not, in themselves, sustain an axisymmetric magnetic field against Ohmic dissipation. This result became known as Cowling's antidynamo theorem.
A way out of this quandary was only discovered in the mid-1950s, when E.N. Parker pointed out that the Coriolis force could impart a systematic cyclonic twist to rising turbulent fluid elements in the solar convection zone, and in doing so provide the break of axisymmetry needed to circumvent Cowling's theorem (see Figure 1). This groundbreaking idea was put on firm quantitative footing by the subsequent development of mean-field electrodynamics, which rapidly became the theory of choice for solar dynamo modelling. By the late 1970s, consensus had almost emerged as to the fundamental nature of the solar dynamo, and the α-effect of mean-field electrodynamics was at the heart of it.
Parker's view of cyclonic turbulence twisting a toroidal magnetic field (here ribbons pointing in direction η) into meridional planes [ξ, ζ] (reproduced from Figure 1 of Parker, 1955).
Serious trouble soon appeared on the horizon, however, and from no less than four distinct directions. First, it was realized that because of buoyancy effects, magnetic fields strong enough to produce sunspots could not be stored in the solar convection zone for sufficient lengths of time to ensure adequate amplification. Second, numerical simulations of turbulent, thermally-driven convection in a thick rotating spherical shell produced magnetic field migration patterns that looked nothing like what is observed on the Sun. Third, and perhaps most decisive, the nascent field of helioseismology succeeded in providing the first determinations of the solar internal differential rotation, which turned out markedly different from those needed to produce solar-like dynamo solutions in the context of mean-field electrodynamics. Fourth, the ability of the α-effect and magnetic diffusivity to operate as assumed in mean-field electrodynamics was also called into question by theoretical calculations and numerical simulations.
It is fair to say that solar dynamo modelling has not yet recovered from this four-way punch, in that nothing remotely resembling consensus currently exists as to the mode of operation of the solar dynamo. As with all major scientific crises, this situation provided impetus not only to drastically redesign existing models based on mean-field electrodynamics, but also to explore new physical mechanisms for magnetic field generation, and resuscitate older potential mechanisms that had fallen by the wayside in the wake of the α-effect — perhaps most notably the so-called Babcock-Leighton mechanism, dating back to the early 1960s (see Figure 2). These post-helioseismic developments, beginning in the mid to late 1980s, are the primary focus of this review.
The Babcock-Leighton mechanism of poloidal field production from the decay of bipolar active regions showing opposite polarity patterns in each solar hemisphere (reproduced from Figure 8 of Babcock, 1961).
Sunspots and the butterfly diagram
Historically, next to cyclic polarity reversal the sunspot butterfly diagram has provided the most stringent observational constraints on solar dynamo models (see Figure 3). In addition to the obvious cyclic pattern, two features of the diagram are particularly noteworthy:
Sunspots are restricted to latitudinal bands some ≃ 30° wide, symmetric about the equator.
Sunspots emerge closer and closer to the equator in the course of a cycle, peaking in coverage at about ± 15° of latitude.
Sunspots appear when deep-seated toroidal flux ropes rise through the convective envelope and emerge at the photosphere. Assuming that they rise radially and are formed where the magnetic field is the strongest, the sunspot butterfly diagram can be interpreted as a spatio-temporal "map" of the Sun's internal, large-scale toroidal magnetic field component. This interpretation is not unique, however, since the aforementioned assumptions may be questioned. In particular, we still lack even rudimentary understanding of the process through which the diffuse, large-scale solar magnetic field produces the concentrated toroidal flux ropes that will later, upon buoyant destabilisation, give rise to sunspots. This remains perhaps the most severe missing link between dynamo models and solar magnetic field observations. On the other hand, the stability and rise of toroidal flux ropes is now fairly well-understood (see, e.g., Fan, 2009, and references therein).
The sunspot "butterfly diagram", showing the fractional coverage of sunspots as a function of solar latitude and time (courtesy of D. Hathaway, NASA/MSFC; see http://solarscience.msfc.nasa.gov/images/bfly.gif).
Magnetographic mapping of the Sun's surface magnetic field (see Figure 4) have also revealed that the Sun's poloidal magnetic component undergoes cyclic variations, changing polarities at times of sunspot maximum. Note in Figure 4 the poleward drift of the surface fields, away from sunspot latitudes. This pattern is believed to originate from the transport of magnetic flux released by the decay of sunspots at low latitudes (see Petrovay and Szakály, 1999, for an alternate explanation). The surface polar cap flux amounts to about 10^22 Mx, while the total unsigned flux emerging in active regions in the course of a typical cycle adds up to a few times 10^25 Mx; this is usually taken to indicate that the solar internal magnetic field is dominated by its toroidal component.
Synoptic magnetogram of the radial component of the solar surface magnetic field. The low-latitude component is associated with sunspots. Note the polarity reversal of the high-latitude magnetic field, occurring approximately at time of sunspot maximum (courtesy of D. Hathaway, NASA/MSFC; see http://solarscience.msfc.nasa.gov/images/magbfly.jpg).
Organization of review
The remainder of this review is organized in five sections. In Section 2 the mathematical formulation of the solar dynamo problem is laid out in some detail, together with the various simplifications that are commonly used in modelling. Section 3 details various possible physical mechanisms of magnetic field generation. In Section 4, a selection of representative models relying on different such mechanisms are presented and critically discussed, with abundant references to the technical literature. Section 5 focuses on the origin of cycle amplitude fluctuations, again presenting some illustrative model results and reviewing recent literature on the topic. The concluding Section 6 offers a somewhat more personal discussion of current challenges and trends in solar dynamo modelling.
A great many review papers have been and continue to be written on dynamo models of the solar cycle, and the solar dynamo is discussed in most recent solar physics textbooks, notably Stix (2002), Foukal (2004), and Schrijver and Siscoe (2009). The series of review articles published in Proctor and Gilbert (1994) and Ferriz-Mas and Núñez (2003) are also essential reading for more in-depth reviews of some of the topics covered here. Among the most recent reviews, Petrovay (2000); Tobias (2002); Rüdiger and Arlt (2003); Usoskin and Mursula (2003); Ossendrijver (2003), and Brandenburg and Subramanian (2005) offer (in my opinion) particularly noteworthy alternate and/or complementary viewpoints to those expressed here.
Making a Solar Dynamo Model
Magnetized fluids and the MHD induction equation
In the interiors of the Sun and most stars, the collisional mean-free path of microscopic constituents is much shorter than competing plasma length scales, fluid motions are non-relativistic, and the plasma is electrically neutral and non-degenerate. Under these physical conditions, Ohm's law holds, and so does Ampère's law in its pre-Maxwellian form. Maxwell's equations can then be combined into a single evolution equation for the magnetic field B, known as the magnetohydrodynamical (MHD) induction equation (see, e.g., Davidson, 2001):
$$\frac{{\partial B}} {{\partial t}} = \nabla \times (u \times B - \eta \nabla \times B),$$
((1))
where η = c²/4πσe is the magnetic diffusivity (σe being the electrical conductivity), in general only a function of depth for spherically symmetric solar/stellar structural models. Of course, the magnetic field is still subject to the divergence-free condition ∇ · B = 0, and an evolution equation for the flow field u must also be provided. This could be, e.g., the Navier-Stokes equations, augmented by a Lorentz force term:
$$\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u + 2\Omega \times u = - \frac{1} {\rho }\nabla p + g + \frac{1} {{4\pi \rho }}(\nabla \times B) \times B + \frac{1} {\rho }\nabla \cdot \tau ,$$
where τ is the viscous stress tensor, and other symbols have their usual meaningFootnote 1. In the most general circumstances, Equations (1) and (2) must be complemented by suitable equations expressing conservation of mass and energy, as well as an equation of state. Appropriate initial and boundary conditions for all physical quantities involved then complete the specification of the problem. The resulting set of equations defines magnetohydrodynamics, quite literally the dynamics of magnetized fluids.
The dynamo problem
The first term on the right hand side of Equation (1) represents the inductive action of the flow field, and it can act as a source term for B; the second term, on the other hand, describes the resistive dissipation of the current systems supporting the magnetic field, and is thus always a global sink for B. The relative importance of these two terms is measured by the magnetic Reynolds number Rm = uL/η, obtained by dimensional analysis of Equation (1). Here η, u, and L are "typical" numerical values for the magnetic diffusivity, flow speed, and length scale over which B varies significantly. The latter, in particular, is not easy to estimate a priori, as even laminar MHD flows have a nasty habit of generating their own magnetic length scales (usually ∝ Rm^(-1/2) at high Rm). Nonetheless, on length scales comparable to the Sun itself, Rm is immense, and so is the usual viscous Reynolds number. This implies that energy dissipation will occur on length scales very much smaller than the solar radius.
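To put a number on this claim (a back-of-envelope estimate only; the values below are order-of-magnitude assumptions rather than quantities fixed by this section), one can evaluate Rm for solar-interior conditions:

```python
# Rough global magnetic Reynolds number for the Sun (all inputs are assumptions).
u = 1e3       # cm/s, a typical large-scale flow speed
L = 7e10      # cm, of order the solar radius
eta = 1e4     # cm^2/s, microscopic magnetic diffusivity in the deep convection zone
print("Rm ~", u * L / eta)   # ~ 7e9, i.e. enormous compared to unity
```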
The dynamo problem consists in finding/producing a (dynamically consistent) flow field u that has inductive properties capable of sustaining B against Ohmic dissipation. Ultimately, the amplification of B occurs by stretching of the pre-existing magnetic field. This is readily seen upon rewriting the inductive term in Equation (1) as
$$\nabla \times (u \times B) = (B \cdot \nabla )u - (u \cdot \nabla )B - B(\nabla \cdot u).$$
In itself, the first term on the right hand side of this expression can obviously lead to exponential amplification of the magnetic field, at a rate proportional to the local velocity gradient.
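As a minimal worked example (my own illustration, not taken from the literature cited here), consider an incompressible stretching flow u = (sx, −sy, 0) threaded by a spatially uniform field B = Bx(t)êx. The advection and compression terms then vanish, and the stretching term gives

$$\frac{{\partial B_x }} {{\partial t}} = \left[ {(B \cdot \nabla )u} \right]_x = s B_x \quad \Rightarrow \quad B_x (t) = B_x (0)e^{st} ,$$

i.e., exponential growth at a rate set directly by the velocity gradient s.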
In the solar cycle context, the dynamo problem is reformulated towards identifying the circumstances under which the flow fields observed and/or inferred in the Sun can sustain the cyclic regeneration of the magnetic field associated with the observed solar cycle. This involves more than merely sustaining the field. A model of the solar dynamo should also reproduce
cyclic polarity reversals with a ∼ 10 yr half-period,
equatorward migration of the sunspot-generating deep toroidal field and its inferred strength,
poleward migration of the diffuse surface field,
observed phase lag between poloidal and toroidal components,
polar field strength,
observed antisymmetric parity,
predominantly negative (positive) magnetic helicity in the Northern (Southern) solar hemisphere.
At the next level of "sophistication", a solar dynamo model should also be able to exhibit amplitude fluctuations, and reproduce (at least qualitatively) the many empirical correlations found in the sunspot record. These include an anticorrelation between cycle duration and amplitude (Waldmeier Rule), alternation of higher-than-average and lower-than-average cycle amplitude (Gnevyshev-Ohl Rule), good phase locking, and occasional epochs of suppressed amplitude over many cycles (the so-called Grand Minima, of which the Maunder Minimum has become the archetype; more on this in Section 5 below). One should finally add to the list torsional oscillations in the convective envelope, with proper amplitude and phasing with respect to the magnetic cycle. This is a very tall order by any standard.
Because of the great disparity of time- and length scales involved, and the fact that the outer 30% in radius of the Sun are the seat of vigorous, thermally-driven turbulent convective fluid motions, the solar dynamo problem is very hard to tackle as a direct numerical simulation of the full set of MHD equations (but do see Section 4.9 below). Most solar dynamo modelling work has thus relied on simplification — usually drastic — of the MHD equations, as well as assumptions on the structure of the Sun's magnetic field and internal flows.
Kinematic models
A first drastic simplification of the MHD system of equations consists in dropping Equation (2) altogether by specifying a priori the form of the flow field u. This kinematic regime remained until relatively recently the workhorse of solar dynamo modelling. Note that with u given, the MHD induction equation becomes truly linear in B. Moreover, helioseismology (Christensen-Dalsgaard, 2002) has now pinned down with good accuracy two important solar large-scale flow components, namely differential rotation throughout the interior, and meridional circulation in the outer half of the solar convection zone (for reviews, see Gizon, 2004; Howe, 2009). Given the low amplitude of observed torsional oscillations in the solar convective envelope, and the lack of significant cycle-related changes in the internal solar differential rotation inferred by helioseismology to this date, the kinematic approximation is perhaps not as bad a working assumption as one may have thought, at least for the differential rotation part of the mean flow u.
Axisymmetric formulation
The sunspot butterfly diagram, Hale's polarity law, synoptic magnetograms, and the shape of the solar corona at and around solar activity minimum jointly suggest that, to a tolerably good first approximation, the large-scale solar magnetic field is axisymmetric about the Sun's rotation axis, as well as antisymmetric about the equatorial plane. Under these circumstances it is convenient to express the large-scale field as the sum of a toroidal (i.e., longitudinal) component and a poloidal component (i.e., contained in meridional planes), the latter being expressed in terms of a toroidal vector potential. Working in spherical polar coordinates (r, θ, φ), one writes
$$\mathbf{B}(r,\theta ,t) = \nabla \times (A(r,\theta ,t)\hat e_\varphi ) + B(r,\theta ,t)\hat e_\varphi .$$
Such a decomposition evidently satisfies the solenoidal constraint ∇ · B = 0, and substitution into the MHD induction equation produces two (coupled) evolution equations for A and B, the latter simply given by the φ-component of Equation (1), and the former, under the Coulomb gauge ∇ · A = 0, by
$$\frac{{\partial (A\hat e_\varphi )}} {{\partial t}} + (u \cdot \nabla )(A\hat e_\varphi ) = \eta \nabla ^2 (A\hat e_\varphi ).$$
Boundary conditions and parity
The axisymmetric dynamo equations are to be solved in a meridional plane, i.e., Ri ≤ r ≤ R⊙ and 0 ≤ θ ≤ π, where the inner radial extent of the domain (Ri) need not necessarily extend all the way to r = 0. It is usually assumed that the deep radiative interior can be treated as a perfect conductor, so that Ri is chosen a bit deeper than the lowest extent of the region where dynamo action is taking place; the boundary condition at this depth is then simply A = 0, ∂(rB)/∂r = 0.
It is usually assumed that the Sun/star is surrounded by a vacuum, in which no electrical currents can flow, i.e., ∇ × B = 0; for an axisymmetric B expressed via Equation (4), this requires
$$\begin{array}{*{20}c} {\left( {\nabla ^2 - \frac{1} {{r^2 \sin ^2 \theta }}} \right)A = 0,} & {B = 0,} & {r/R_ \odot > 1.} \\ \end{array}$$
It is therefore necessary to smoothly match solutions to Equations (1, 5) on solutions to Equations (6) at r/R⊙ = 1. Regularity of the solutions demands that A = 0 and B = 0 on the symmetry axis (θ = 0 and θ = π in a meridional plane). This completes the specification of the boundary conditions.
Formulated in this manner, the dynamo solution will spontaneously "pick" its own parity, i.e., its symmetry with respect to the equatorial plane. An alternative approach, popular because it can lead to significant savings in computing time, is to solve only in a meridional quadrant (0 ≤ θ ≤ π/2) and impose solution parity via the boundary condition at the equatorial plane (π/2):
$$\begin{array}{*{20}c} {\frac{{\partial A}} {{\partial \theta }} = 0,} & {B = 0,} & { \to antisymmetric.} \\ \end{array}$$
$$\begin{array}{*{20}c} {A = 0,} & {\frac{{\partial B}} {{\partial \theta }} = 0,} & { \to symmetric.} \\ \end{array}$$
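In a numerical implementation these parity conditions are typically enforced at the equatorial grid boundary at every time step. A minimal sketch follows (my own illustration, with a hypothetical grid layout in which the equator sits at the last colatitude index):

```python
import numpy as np

# Toy enforcement of solution parity on a colatitude grid covering one quadrant,
# 0 <= theta <= pi/2; A and B stand in for the fields along one radial shell.
ntheta = 65
A = np.random.rand(ntheta)
B = np.random.rand(ntheta)

def apply_parity(A, B, antisymmetric=True):
    if antisymmetric:        # Equation (7): dA/dtheta = 0 and B = 0 at the equator
        A[-1] = A[-2]        # crude one-sided approximation of the Neumann condition
        B[-1] = 0.0
    else:                    # Equation (8): A = 0 and dB/dtheta = 0 at the equator
        A[-1] = 0.0
        B[-1] = B[-2]
    return A, B

A, B = apply_parity(A, B, antisymmetric=True)
```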
Mechanisms of Magnetic Field Generation
The Sun's poloidal magnetic component, as measured on photospheric magnetograms, flips polarity near sunspot cycle maximum, which (presumably) corresponds to the epoch of peak internal toroidal field T. The poloidal component P, in turn, peaks at time of sunspot minimum. The cyclic regeneration of the Sun's full large-scale field can thus be thought of as a temporal sequence of the form
$$P( + ) \to T( - ) \to P( - ) \to T( + ) \to P( + ) \to \ldots ,$$
where the (+) and (-) refer to the signs of the poloidal and toroidal components, as established observationally. A full magnetic cycle then consists of two successive sunspot cycles. The dynamo problem can thus be broken into two sub-problems: generating a toroidal field from a pre-existing poloidal component, and a poloidal field from a pre-existing toroidal component. In the solar case, the former turns out to be easy, but the latter is not.
Poloidal to toroidal
Let us begin by expressing the (steady) large-scale flow field u as the sum of an axisymmetric azimuthal component (differential rotation), and an axisymmetric "poloidal" component up (≡ ur(r, θ)êr + uθ(r, θ)êθ), i.e., a flow confined to meridional planes:
$$u(r,\theta ) = u_p (r,\theta ) + \varpi \Omega (r,\theta )\hat e_\varphi$$
((10))
where ϖ = r sin θ and Ω is the angular velocity (rad s⁻¹). Substituting this expression into Equation (5) and into the φ-component of Equation (1) yields
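The resulting pair of evolution equations is not reproduced in the text above; for reference, their standard form (a reconstruction, consistent with the term-by-term description in the next paragraph, whose exact writing may differ in detail from the source) is

$$\frac{{\partial A}} {{\partial t}} = \eta \left( {\nabla ^2 - \frac{1} {{\varpi ^2 }}} \right)A - \frac{{u_p }} {\varpi } \cdot \nabla (\varpi A),$$
((11))

$$\frac{{\partial B}} {{\partial t}} = \eta \left( {\nabla ^2 - \frac{1} {{\varpi ^2 }}} \right)B + \frac{1} {\varpi }\frac{{\partial (\varpi B)}} {{\partial r}}\frac{{d\eta }} {{dr}} - \varpi u_p \cdot \nabla \left( {\frac{B} {\varpi }} \right) - B\nabla \cdot u_p + \varpi \left[ {\nabla \times (A\hat e_\varphi )} \right] \cdot \nabla \Omega ,$$
((12))

with the right-hand-side terms corresponding, respectively, to resistive decay and advection in the A equation, and to resistive decay, diamagnetic transport, advection, compression/dilation, and shearing in the B equation.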
Advection means bodily transport of B by the flow; globally, this neither creates nor destroys magnetic flux. Resistive decay, on the other hand, destroys magnetic flux and therefore acts as a sink of magnetic field. Diamagnetic transport can increase B locally, but again this is neither a source nor sink of magnetic flux. The compression/dilation term is a direct consequence of toroidal flux conservation in a flow moving across a density gradient. The shearing term in Equation (12), however, is a true source term, as it amounts to converting rotational kinetic energy into magnetic energy. This is the needed P → T production mechanism.
However, there is no comparable source term in Equation (11). No matter what the toroidal component does, A will inexorably decay. Going back to Equation (12), notice now that once A is gone, the shearing term vanishes, which means that B will in turn inexorably decay. This is the essence of Cowling's theorem: An axisymmetric flow cannot sustain an axisymmetric magnetic field against resistive decayFootnote 2.
Toroidal to poloidal
In view of Cowling's theorem, we have no choice but to look for some fundamentally non-axisymmetric process to provide an additional source term in Equation (11). It turns out that under solar interior conditions, there exist various mechanisms that can act as a source of poloidal field. In what follows we introduce and briefly describe the four classes of such mechanisms that appear most promising, but defer discussion of their implementation in dynamo models to Section 4, where illustrative solutions are also presented.
Turbulence and mean-field electrodynamics
The outer ∼ 30% of the Sun are in a state of thermally-driven turbulent convection. This turbulence is anisotropic because of the stratification imposed by gravity, and lacks reflectional symmetry due to the influence of the Coriolis force. Since we are primarily interested in the evolution of the large-scale magnetic field (and perhaps also the large-scale flow) on time scales longer than the turbulent time scale, mean-field electrodynamics offers a tractable alternative to full-blown 3D turbulent MHD. The idea is to express the net flow and field as the sum of mean components, 〈u〉 and 〈B〉, and small-scale fluctuating components u′, B′. This is not a linearization procedure, in that we are not assuming that |u′|/|〈u〉| ≪ 1 or |B′|/|〈B〉| ≪ 1. In the context of the axisymmetric models to be described below, the averaging ("〈〉") is most naturally interpreted as a longitudinal average, with the fluctuating flow and field components vanishing when so averaged, i.e., 〈u′〉 = 0 and 〈B′〉 = 0. The mean field 〈B〉 is then interpreted as the large-scale, axisymmetric magnetic field usually associated with the solar cycle. Upon this separation and averaging procedure, the MHD induction equation for the mean component becomes
$$\frac{{\partial \left\langle B \right\rangle }} {{\partial t}} = \nabla \times \left( {\left\langle u \right\rangle \times \left\langle B \right\rangle + \left\langle {u' \times B'} \right\rangle - \eta \nabla \times \left\langle B \right\rangle } \right),$$
which is identical to the original MHD induction Equation (1) except for the term 〈u′ × B′〉, which corresponds to a mean electromotive force ε induced by the fluctuating flow and field components. It appears here because, in general, the cross product u′ × B′ will not vanish upon averaging, even though u′ and B′ do so individually. Evidently, this procedure is meaningful if a separation of spatial and/or temporal scales exists between the (time-dependent) turbulent motions and associated small-scale magnetic fields on the one hand, and the (quasi-steady) large-scale axisymmetric flow and field on the other.
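The key point, that zero-mean fluctuations can nonetheless produce a finite mean electromotive force, is easy to verify numerically. The toy model below is entirely artificial (the imposed correlation between u′ and B′ is an assumption for illustration only): it draws correlated random fluctuations and averages their cross product.

```python
import numpy as np

# Zero-mean but mutually correlated fluctuations yield a non-vanishing <u' x B'>.
rng = np.random.default_rng(0)
n = 100_000
w = rng.standard_normal(n)                      # u' = (0, 0, w), zero mean
b = 0.5 * w + 0.1 * rng.standard_normal(n)      # B' = (b, 0, 0), correlated with w

u_fluct = np.stack([np.zeros(n), np.zeros(n), w], axis=1)
B_fluct = np.stack([b, np.zeros(n), np.zeros(n)], axis=1)

emf = np.cross(u_fluct, B_fluct).mean(axis=0)   # mean electromotive force <u' x B'>
print(u_fluct.mean(axis=0), B_fluct.mean(axis=0))   # both ~ (0, 0, 0)
print(emf)                                          # y-component ~ 0.5*<w^2> ~ 0.5
```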
The reader versed in fluid dynamics will have recognized in the mean electromotive force the equivalent of Reynolds stresses appearing in mean-field versions of the Navier-Stokes equations, and will have anticipated that the next (crucial!) step is to express ε in terms of the mean field 〈B〉 in order to achieve closure. This is usually carried out by expressing ε as a truncated series expansion in 〈B〉 and its derivatives. Retaining the first two terms yields
$$\mathcal{E} = \alpha :\left\langle B \right\rangle + \beta :\nabla \times \left\langle B \right\rangle .$$
where the colon indicates a tensorial inner product. The quantities α and β are in general pseudotensors, and specification of their components requires a turbulence model from which averages of velocity cross-correlations can be computed, which is no trivial task. We defer discussion of specific model formulations for these quantities to Section 4.2, but note the following:
Even if 〈B〉 is axisymmetric, the α-term in Equation (14) will effectively introduce source terms in both the A and B equations, so that Cowling's theorem can be circumvented.
Parker's idea of helical twisting of toroidal fieldlines by the Coriolis force corresponds to a specific functional form for α, and so finds formal quantitative expression in mean-field electrodynamics.
The production of a mean electromotive force proportional to the mean field is called the α-effect, and it can act as a source of both A and B, and thus offers a viable T → P mechanism. Its existence was first demonstrated in the context of turbulent MHD, but it also arises in other contexts, as discussed immediately below. Although this is arguably a bit of a physical abuse, the term "α-effect" is used in what follows to denote any mechanism producing a mean poloidal field from a mean toroidal field, as is almost universally (and perhaps unfortunately) done in the contemporary solar dynamo literature.
Other forms of turbulent mean electromotive forces are possible when the large-scale magnetic field develops variations on scales comparable to that of large-scale flows, notably angular velocity shears (see Rädler et al., 2003; Pipin and Seehafer, 2009, and references therein). This can lead to the appearance of an additional contribution on the RHS of Equation (14), of the general form δ × (∇ × 〈B〉). Such a mean-field-aligned emf cannot contribute to the sustenance of 〈B〉, but operating concurrently with other inductive mechanisms, can in principle contribute to dynamo action.
Hydrodynamical shear instabilities
The tachocline is the rotational shear layer uncovered by helioseismology immediately beneath the Sun's convective envelope, providing smooth matching between the latitudinal differential rotation of the envelope, and the rigidly rotating radiative core (see, e.g., Spiegel and Zahn, 1992; Brown et al., 1989; Tomczyk et al., 1995; Charbonneau et al., 1999, and references therein). Stability analyses of the latitudinal shear within the tachocline, carried out in the framework of shallow-water theory, suggest that this shear can become unstable when vertical fluid displacements are allowed (Dikpati and Gilman, 2001). These authors also find that vertical fluid displacements correlate with the horizontal vorticity pattern in a manner resulting in a net kinetic helicity that can, in principle, impart a systematic twist to an ambient mean toroidal field. This can thus serve as a source for the poloidal component, and, in conjunction with rotational shearing of the poloidal field, lead to cyclic dynamo action. This is a self-excited T → P mechanism, but it is not entirely clear at this juncture if (and how) it would operate in the strong-field regime (more on this in Section 4.5 below).
MHD instabilities
It has now been demonstrated, perhaps even beyond reasonable doubt, that the toroidal magnetic flux ropes that upon emergence in the photosphere give rise to sunspots can only be stored below the Sun's convective envelope, more specifically in the thin, weakly subadiabatic overshoot layer conjectured to exist immediately beneath the core-envelope interface (see, e.g., Schüssler, 1996; Schüssler and Ferriz-Mas, 2003; Fan, 2009, and references therein). Only there are growth times for the magnetic buoyancy instability sufficiently long to allow field amplification, while being sufficiently short for flux emergence to take place on time-scales commensurate with the solar cycle (Ferriz-Mas et al., 1994). These stability studies have also revealed the existence of regions of weak instability, in the sense that the growth times are measured in years. The developing instability is then strongly influenced by the Coriolis force, and develops in the form of growing helical waves travelling along the flux rope's axis. This amounts to twisting a toroidal field in meridional planes, as with the Parker scenario, with the important difference that what is now being twisted is a flux rope rather than an individual fieldline. Nonetheless, an azimuthal electromotive force is produced. This represents a viable T → P mechanism, but one that can only act above a certain field strength threshold; in other words, dynamos relying on this mechanism are not self-excited, since they require strong fields to operate. On the other hand, they operate without difficulties in the strong field regime.
Another related class of poloidal field regeneration mechanisms is associated with the buoyant breakup of the magnetized layer (Matthews et al., 1995). Once again it is the Coriolis force that ends up imparting a net twist to the rising, arching structures that are produced in the course of the instability's development (see Thelen, 2000a, and references therein). This results in a mean electromotive force that peaks where the magnetic field strength varies most rapidly with height. This could provide yet another form of tachocline α-effect, again subjected to a lower operating threshold. MHD versions of the hydrodynamical shear instability discussed in Section 3.2.2 have also been studied (see, e.g., Arlt et al., 2007b; Cally et al., 2008; Dikpati et al., 2009, and references therein), but the fundamentally nonlinear nature of the flow-field interaction makes it difficult to construct physically credible poloidal source terms to be incorporated into dynamo models.
The Babcock-Leighton mechanism
The larger sunspot pairs ("bipolar magnetic regions", hereafter BMR) often emerge with a systematic tilt with respect to the E-W direction, in that the leading sunspot (with respect to the direction of solar rotation) is located at a lower latitude than the trailing sunspot, the more so the higher the latitude of the emerging BMR. This pattern, known as "Joy's law", is caused by the action of the Coriolis force on the secondary azimuthal flow that develops within the buoyantly rising magnetic toroidal flux rope that, upon emergence, produces a BMR (see, e.g. Fan et al., 1993; D'Silva and Choudhuri, 1993; Caligari et al., 1995). In conjunction with the antisymmetry of the toroidal field giving rise to sunspots evidenced by Hale's sunspot laws, this tilt is at the heart of the Babcock-Leighton mechanism for polar field reversal, as outlined in cartoon form in Figure 2.
Physically, what happens is that the leading spot of the BMR is located closer to the equator, and therefore experiences greater diffusive cancellation across the equatorial plane with the opposite polarity leading spots of the other hemisphere, than the trailing spots do. Upon decay, the latter's magnetic flux is preferentially transported to the polar region by supergranular diffusion and the surface meridional flow. The net effect is to take a formerly toroidal magnetic field and convert a fraction of its associated flux into a net dipole moment, i.e., it represents a T → P mechanism. With the polar cap flux amounting to less than 0.1% of the unsigned magnetic flux emerging in active regions throughout a cycle, the efficiency of this so-called Babcock-Leighton mechanism need not be very high. Here again the resulting dynamos are not self-excited, as the required tilt of the emerging BMR only materializes in a range of toroidal field strength going from a few 10⁴ G to about 2 × 10⁵ G.
A Selection of Representative Models
Each and every one of the T → P mechanisms described in Section 3.2 relies on fundamentally non-axisymmetric physical effects, yet these must be "forced" into axisymmetric dynamo equations for the mean magnetic field. There are a great many different ways of doing so, which explains the wide variety of dynamo models of the solar cycle to be found in the recent literature. The aim of this section is to provide representative examples of various classes of models, to highlight their similarities and differences, and illustrate their successes and failings. In all cases, the model equations are to be understood as describing the evolution of the mean field 〈B〉, namely the large-scale, axisymmetric component of the total solar magnetic field. Those wishing to code up their own versions of these (relatively) simple models should take note of the fact that Jouve et al. (2008) have set up a suite of benchmark calculations against which numerical dynamo solutions can be validated.
Model ingredients
All kinematic solar dynamo models have some basic "ingredients" in common, most importantly (i) a solar structural model, (ii) a differential rotation profile, and (iii) a magnetic diffusivity profile (possibly depth-dependent).
Helioseismology has pinned down with great accuracy the internal solar structure, including the internal differential rotation, and the exact location of the core-envelope interface. Unless noted otherwise, all illustrative models discussed in this section were computed using the following analytic formulae for the angular velocity Ω(r, θ) and magnetic diffusivity η(r):
$$\frac{{\Omega (r,\theta )}} {{\Omega _E }} = \Omega _C + \frac{{\Omega _S (\theta ) - \Omega _C }} {2}\left[ {1 + erf\left( {\frac{{r - r_c }} {w}} \right)} \right],$$
$$\Omega _S (\theta ) = 1 - a_2 \cos ^2 \theta - a_4 \cos ^4 \theta ,$$
$$\frac{{\eta (r)}} {{\eta _T }} = \Delta \eta + \frac{{1 - \Delta \eta }} {2}\left[ {1 + erf\left( {\frac{{r - r_c }} {w}} \right)} \right].$$
With appropriately chosen parameter values, Equation (15) describes a solar-like differential rotation profile, namely a purely latitudinal differential rotation in the convective envelope, with equatorial acceleration and smoothly matching a core rotating rigidly at the angular speed of the surface mid-latitudesFootnote 3. This rotational transition takes place across a spherical shear layer of half-thickness w coinciding with the core-envelope interface at rc/R⊙ = 0.7 (see Figure 5, with parameter values listed in caption). As per Equation (17), a similar transition takes place with the net diffusivity, falling from some large, "turbulent" value ηT in the envelope to a much smaller diffusivity ηc in the convection-free radiative core, the diffusivity contrast being given by Δη = ηc/ηT. Given helioseismic constraints, these represent minimal yet reasonably realistic choices.
Isocontours of angular velocity generated by Equation (15), with parameter values w/R = 0.05, ΩC = 0.8752, a2 = 0.1264, a4 = 0.1591 (Panel A). The radial shear changes sign at colatitude θ = 55°. Panel B shows the corresponding angular velocity gradients, together with the total magnetic diffusivity profile defined by Equation (17) (dash-dotted line). The core-envelope interface is located at r/R⊙ = 0.7 (dotted lines).
It should be noted already that such a solar-like differential rotation profile is quite complex from the point of view of dynamo modelling, in that it is characterized by three partially overlapping shear regions: a strong positive radial shear in the equatorial regions of the tachocline, an even stronger negative radial shear in its polar regions, and a significant latitudinal shear throughout the convective envelope and extending partway into the tachocline. As shown in panel B of Figure 5, for a tachocline of half-thickness w/R⊙ = 0.05, the mid-latitude latitudinal shear at r/R⊙ = 0.7 is comparable in magnitude to the equatorial radial shear; its potential contribution to dynamo action should not be casually dismissed.
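For readers wishing to reproduce these model ingredients, here is a short sketch of Equations (15)-(17) in code; the parameter values are taken from the text and figure captions, except for the diffusivity contrast Δη = 0.1, which is an assumption consistent with the illustrative solutions discussed later.

```python
import numpy as np
from math import erf

# Analytic rotation and diffusivity profiles, Equations (15)-(17).
r_c, w = 0.7, 0.05                      # interface radius and shear-layer half-thickness (units of R_sun)
Omega_C, a2, a4 = 0.8752, 0.1264, 0.1591
eta_T, Delta_eta = 5e11, 0.1            # envelope diffusivity [cm^2/s] and core/envelope contrast (assumed)

def Omega(r, theta):
    """Normalized angular velocity Omega(r, theta)/Omega_E, Equations (15)-(16)."""
    Omega_S = 1.0 - a2 * np.cos(theta)**2 - a4 * np.cos(theta)**4
    return Omega_C + 0.5 * (Omega_S - Omega_C) * (1.0 + erf((r - r_c) / w))

def eta(r):
    """Total magnetic diffusivity eta(r), Equation (17)."""
    return eta_T * (Delta_eta + 0.5 * (1.0 - Delta_eta) * (1.0 + erf((r - r_c) / w)))

# Equatorial rotation rate and diffusivity just above and below the tachocline:
print(Omega(0.75, np.pi / 2), Omega(0.65, np.pi / 2))
print(eta(0.75), eta(0.65))
```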
αΩ mean-field models
Calculating the α-effect and turbulent diffusivity
Mean-field electrodynamics is a subject well worth its own full-length review, so the foregoing discussion will be limited to the bare essentials. Detailed discussion of the topic can be found in Krause and Rädler (1980), Moffatt (1978), and Rüdiger and Hollerbach (2004), and in the recent review articles by Ossendrijver (2003) and Hoyng (2003).
The task at hand is to calculate the components of the α and β tensors in terms of the statistical properties of the underlying turbulence. A particularly simple case is that of homogeneous, weakly anisotropic turbulence, which reduces the α and β tensors to simple scalars, so that the mean electromotive force becomes
$$\mathcal{E} = \alpha \left\langle B \right\rangle - \eta _T \nabla \times \left\langle B \right\rangle .$$
This is the form commonly used in solar dynamo modelling, even though turbulence in the solar interior is most likely inhomogeneous and anisotropic. Moreover, hiding in the above expressions is the assumption that the small-scale magnetic Reynolds number vl/η is much smaller than unity, where v ∼ 10³ cm s⁻¹ and l ∼ 10⁹ cm are characteristic velocities and length scales for the dominant turbulent eddies, as estimated, e.g., from mixing length theory. With η ∼ 10⁴ cm² s⁻¹, one finds vl/η ∼ 10⁸, so that on that basis alone Equation (18) should be dubious already. In the kinematic regime, α and β are independent of the magnetic field fluctuations, and end up simply proportional to the averaged kinetic helicity and square fluctuation amplitude:
$$\alpha \sim \frac{{\tau _c }} {3}\left\langle {u' \cdot \nabla \times u'} \right\rangle ,$$
$$\eta _T \sim \frac{{\tau _c }} {3}\left\langle {u' \cdot u'} \right\rangle ,$$
where τc is the correlation time of the turbulent motions. Order-of-magnitude estimates of the scalar coefficients yield α ∼ Ωl and ηT ∼ vl, where Ω is the solar angular velocity. At the base of the solar convection zone, one then finds α ∼ 10³ cm s⁻¹ and ηT ∼ 10¹² cm² s⁻¹, these being understood as very rough estimates. Because the kinetic helicity may well change sign along the longitudinal (averaging) direction, thus leading to cancellation, the resulting value of α may be much smaller than its r.m.s. deviation about the longitudinal mean. In contrast the quantity being averaged on the right hand side of Equation (20) is positive definite, so one would expect a more "stable" mean value (see Hoyng, 1993; Ossendrijver et al., 2001, for further discussion). At any rate, difficulties in computing α and ηT from first principles (whether as scalars or tensors) have led to these quantities often being treated as adjustable parameters of mean-field dynamo models, to be adjusted (within reasonable bounds) to yield the best possible fit to observed solar cycle characteristics, most importantly the cycle period. One finds in the literature numerical values in the approximate ranges 10 – 10³ cm s⁻¹ for α and 10¹⁰ – 10¹³ cm² s⁻¹ for ηT.
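The quoted magnitudes amount to the following simple arithmetic (the input values are themselves rough assumptions):

```python
# Order-of-magnitude estimates of alpha and eta_T near the base of the convection zone.
Omega_sun = 2.6e-6     # solar angular velocity [rad/s]
v, l = 1e3, 1e9        # convective velocity [cm/s] and eddy size [cm] (mixing-length estimates)

alpha_est = Omega_sun * l    # alpha ~ Omega*l ~ 2.6e3 cm/s, i.e. of order 10^3
eta_T_est = v * l            # eta_T ~ v*l   ~ 1e12 cm^2/s
print(f"alpha ~ {alpha_est:.1e} cm/s, eta_T ~ {eta_T_est:.1e} cm^2/s")
```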
The cyclonic character of the α-effect also indicates that it is equatorially antisymmetric and positive in the Northern solar hemisphere, except perhaps at the base of the convective envelope, where the rapid variation of the turbulent velocity with depth can lead to a sign change. These expectations have been confirmed in a general sense by theory and numerical simulations (see, e.g., Rüdiger and Kitchatinov, 1993; Brandenburg et al., 1990; Ossendrijver et al., 2001; Käpylä et al., 2006a).
In cases where the turbulence is more strongly inhomogeneous, an additional effect comes into play: turbulent pumping. Mathematically it arises through an antisymmetric contribution to the α-tensor in Equation (14), whose three independent components can be recast as a velocity-like vector field γ that acts as an additional (and non-solenoidal) contribution to the mean flow:
$$\mathcal{E} = \alpha ^S :\left\langle B \right\rangle + \gamma \times \left\langle B \right\rangle + \beta :\nabla \times \left\langle B \right\rangle .$$
The tensor αS now contains only the symmetric part of the original α tensor. Measurements of the α-tensor in MHD numerical simulations of turbulence in a box (see Ossendrijver et al., 2002; Käpylä et al., 2006a, and references therein) indicate that pumping is directed mostly downwards throughout the solar convection zone, as a result of stratification, and that a significant equatorward latitudinal pumping also arises once rotation becomes important, in the sense that the Coriolis number Co = 2Ωτc exceeds unity. Turbulent pumping speeds of a few m s⁻¹ can be reached with Co in the range 4 – 10, according to the numerical simulations of Käpylä et al. (2006a).
α-quenching, diffusivity-quenching, and flux loss through buoyancy
Leaving the kinematic regime, it is expected that both α and ηT should depend on the strength of the magnetic field, since magnetic tension will resist deformation by the small-scale turbulent fluid motions. The groundbreaking numerical MHD simulations of Pouquet et al. (1976) suggested that Equation (19) should be replaced by something like
$$\alpha \sim \frac{{\tau _c }} {3}\left[ {\left\langle {u' \cdot \nabla \times u'} \right\rangle - \left\langle {a' \cdot \nabla \times a'} \right\rangle } \right],$$
where \(a' = B'/\sqrt {4\pi \rho }\) is the Alfvén speed based on the small-scale magnetic component (see also Durney et al., 1993; Blackman and Brandenburg, 2002). This is rarely used in solar cycle modelling, since the whole point of the mean-field approach is to avoid dealing explicitly with the small-scale, fluctuating components. On the other hand, something is bound to happen when the growing dynamo-generated mean magnetic field reaches a magnitude such that its energy per unit volume is comparable to the kinetic energy of the underlying turbulent fluid motions. Denoting this equipartition field strength by Beq, one often introduces an ad hoc nonlinear dependency of α (and sometimes ηT as well) directly on the mean-field 〈B〉 by writing:
$$\alpha \to \alpha \left( {\left\langle B \right\rangle } \right) = \frac{{\alpha _0 }} {{1 + \left( {\left\langle B \right\rangle /B_{eq} } \right)^2 }}.$$
This expression "does the right thing", in that α → 0 as 〈B〉 starts to exceed Beq. It remains an extreme oversimplification of the complex interaction between flow and field that characterizes MHD turbulence, but its wide usage in solar dynamo modeling makes it a nonlinearity of choice for the illustrative purpose of this section.
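For concreteness, the quenching formula (23) in code form (a trivial sketch; the values of α0 and Beq are arbitrary normalizations here):

```python
import numpy as np

def alpha_quenched(B_mean, alpha0=1.0, B_eq=1.0):
    """Ad hoc algebraic alpha-quenching, Equation (23)."""
    return alpha0 / (1.0 + (B_mean / B_eq)**2)

B = np.array([0.1, 1.0, 3.0, 10.0])
print(alpha_quenched(B))   # alpha -> alpha0/2 at <B> = B_eq, and -> 0 as <B> >> B_eq
```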
Diffusivity-quenching is an even more uncertain proposition than α-quenching, with various quenching models more complex than Equation (23) having been proposed (e.g., Rüdiger et al., 1994). Measurements of the components of the α and β tensors in the convective turbulence simulations of Brandenburg et al. (2008) do suggest a much stronger magnetic quenching of the α-effect than of the turbulent diffusivity, but many aspects of this problem remain open. One appealing aspect of diffusivity quenching is its potential ability to produce localized concentrations of strong magnetic fields, exceeding equipartition strength under some conditions (Gilman and Rempel, 2005). On the other hand, the stability analyses of Arlt et al. (2007b,a) suggest that there exists a lower limit to the magnetic diffusivity, below which equipartition-strength toroidal magnetic fields beneath the core-envelope interface become unstable.
Another amplitude-limiting mechanism is the loss of magnetic flux through magnetic buoyancy. Magnetic field concentrations are buoyantly unstable in the convective envelope, and so should rise to the surface on time scales much shorter than the cycle period (see, e.g., Parker, 1975; Schüssler, 1977; Moreno-Insertis, 1983, 1986). This is often incorporated on the right-hand side of the dynamo equations by the introduction of an ad hoc loss term of the general form −f(〈B〉)〈B〉; the function f measures the rate of flux loss, and is often chosen proportional to the magnetic pressure 〈B〉², thus yielding a cubic damping nonlinearity in the mean-field.
The αΩ dynamo equations
Adding the mean-electromotive force given by Equation (18) to the MHD induction equation leads to the following form for the axisymmetric mean-field dynamo equations:
where, in general, ηT ≫ η. There are now source terms on both right hand sides, so that dynamo action becomes possible at least in principle. The relative importance of the α-effect and shearing terms in Equation (25) is measured by the ratio of the two dimensionless dynamo numbers
$$\begin{array}{*{20}c} {C_\alpha = \frac{{\alpha _0 R_ \odot }} {{\eta _0 }},} & {C_\Omega = \frac{{(\Delta \Omega )_0 R_ \odot ^2 }} {{\eta _0 }}} \\ \end{array} ,$$
where, in the spirit of dimensional analysis, α0, η0, and (ΔΩ)0 are "typical" values for the α-effect, turbulent diffusivity, and angular velocity contrast. These quantities arise naturally in the non-dimensional formulation of the mean-field dynamo equations, when time is expressed in units of the magnetic diffusion time τ based on the envelope (turbulent) diffusivity:
$$\tau = \frac{{R_ \odot ^2 }} {{\eta _T }}.$$
In the solar case, it is usually estimated that Cα ≪ CΩ, so that the α-term is neglected in Equation (25); this results in the class of dynamo models known as αΩ dynamos, which will be the only ones discussed hereFootnote 4.
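A quick numerical check of the magnitudes involved (the α0 and (ΔΩ)0 values below are assumptions chosen purely for illustration):

```python
# Diffusion time and dynamo numbers, Equations (26)-(27), for assumed inputs.
R_sun = 6.96e10      # cm
eta_T = 5e11         # cm^2/s, as in the illustrative solutions of this section
alpha0 = 1e2         # cm/s (assumed)
dOmega0 = 1e-6       # rad/s, a typical angular velocity contrast (assumed)

tau = R_sun**2 / eta_T
print(tau / 3.15e7, "yr")                           # ~ 300 yr
print("C_alpha =", alpha0 * R_sun / eta_T)          # ~ 14
print("C_Omega =", dOmega0 * R_sun**2 / eta_T)      # ~ 1e4, indeed >> C_alpha
```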
Eigenvalue problems and initial value problems
With the large-scale flows, turbulent diffusivity and α-effect considered given, Equations (24, 25) become truly linear in A and B. It becomes possible to seek eigensolutions in the form
$$\begin{array}{*{20}c} {\left\langle A \right\rangle \left( {r,\theta ,t} \right) = a\left( {r,\theta } \right)\exp \left( {st} \right),} & {\left\langle B \right\rangle \left( {r,\theta ,t} \right) = b\left( {r,\theta } \right)\exp \left( {st} \right)} \\ \end{array} .$$
Substitution of these expressions into Equations (24, 25) yields an eigenvalue problem for s and associated eigenfunction {a, b}. The real part σ ≡ Re s is then a growth rate, and the imaginary part ω ≡ Im s an oscillation frequency. One typically finds that σ < 0 until the product Cα × CΩ exceeds a certain critical value Dcrit, beyond which σ > 0, corresponding to growing solutions. Such solutions are said to be supercritical, while the solution with σ = 0 is critical.
Clearly exponential growth of the dynamo-generated magnetic field must cease at some point, once the field starts to backreact on the flow through the Lorentz force. This is the general idea embodied in α-quenching. If α-quenching — or some other nonlinearity — is included, then the dynamo equations are usually solved as an initial-value problem, with some arbitrary low-amplitude seed field used as initial condition. Equations (24, 25) are then integrated forward in time using some appropriate time-stepping scheme. A useful quantity to monitor in order to ascertain saturation is the magnetic energy within the computational domain:
$$\mathcal{E}_B = \frac{1} {{8\pi }}\int_V {\left\langle B \right\rangle ^2 dV.}$$
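The initial-value strategy is easily illustrated on a toy problem. The sketch below time-steps a one-dimensional Cartesian αΩ ("Parker wave") system with algebraic α-quenching, starting from a weak seed field and monitoring the magnetic energy until saturation; it is not the spherical model described in this section, and all parameter values are arbitrary.

```python
import numpy as np

# Toy 1D alphaOmega dynamo: dA/dt = alpha(B)*B + eta*A'', dB/dt = G*dA/dx + eta*B''.
nx, L = 128, np.pi
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
eta, alpha0, G = 1.0, 1.0, 10.0        # dimensionless diffusivity, alpha amplitude, shear

A = 1e-6 * np.sin(x)                   # weak seed poloidal potential
B = np.zeros(nx)                       # no initial toroidal field
dt, t_end = 1e-4, 30.0                 # explicit step, stable since dt < dx**2/(2*eta)

def lap(f):                            # second derivative, boundaries held at zero
    d2 = np.zeros_like(f)
    d2[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return d2

def ddx(f):                            # centered first derivative
    d1 = np.zeros_like(f)
    d1[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)
    return d1

energy = []
for n in range(int(t_end / dt)):
    alpha = alpha0 / (1.0 + B**2)      # algebraic quenching, cf. Equation (23) with B_eq = 1
    A, B = (A + dt * (alpha * B + eta * lap(A)),
            B + dt * (G * ddx(A) + eta * lap(B)))
    A[0] = A[-1] = B[0] = B[-1] = 0.0
    if n % 1000 == 0:
        energy.append(np.sum(B**2) * dx)   # crude 1D proxy for Equation (29)

print("final magnetic energy (saturated):", energy[-1])
```

With these (supercritical) parameters the seed field first grows exponentially and then levels off once the toroidal field approaches the equipartition value built into the quenching formula.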
Dynamo waves
One of the most remarkable properties of the (linear) αΩ dynamo equations is that they support travelling wave solutions. This was first demonstrated in Cartesian geometry by Parker (1955), who proposed that a latitudinally-travelling "dynamo wave" was at the origin of the observed equatorward drift of sunspot emergences in the course of the cycle. This finding was subsequently shown to hold in spherical geometry, as well as for non-linear models (Yoshimura, 1975; Stix, 1976). Dynamo wavesFootnote 5 travel in a direction s given by
$$s = \alpha \nabla \Omega \times \hat e_\varphi ,$$
a result now known as the "Parker-Yoshimura sign rule". Recalling the rather complex form of the helioseismically inferred solar internal differential rotation (cf. Figure 5), even an α-effect of uniform sign in each hemisphere can produce complex migratory patterns, as will be apparent in the illustrative αΩ dynamo solutions to be discussed presently. Note already at this juncture that if the seat of the dynamo is to be identified with the low-latitude portion of the tachocline, and if the latter is thin enough for the (positive) radial shear therein to dominate over the latitudinal shear, then equatorward migration of dynamo waves will require a negative α-effect in the low latitudes of the Northern solar hemisphere.
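As a short worked example of the sign rule (keeping only the radial part of ∇Ω), consider the low-latitude tachocline of the Northern hemisphere, where ∂Ω/∂r > 0:

$$s = \alpha \frac{{\partial \Omega }} {{\partial r}}\hat e_r \times \hat e_\varphi = - \alpha \frac{{\partial \Omega }} {{\partial r}}\hat e_\theta .$$

Since êθ points equatorward in the Northern hemisphere, a positive α there produces poleward-migrating waves; equatorward migration indeed requires α < 0, as stated above.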
We first consider αΩ models without meridional circulation (up = 0 in Equations (24, 25)), with the α-term omitted in Equation (25), and using the diffusivity and angular velocity profiles of Figure 5. We will investigate the behavior of αΩ models with the α-effect concentrated just above the core-envelope interface (green line in Figure 6). We also consider two latitudinal dependencies, namely α ∝ cos θ, which is the "minimal" possible latitudinal dependency compatible with the required equatorial antisymmetry of the Coriolis force, and an α-effect concentrated towards the equatorFootnote 6 via an assumed latitudinal dependency α ∝ sin²θ cos θ.
Radial variation of the α-effect for the family of αΩ mean-field models considered in Section 4.2.6. The magnetic diffusivity profile is plotted in red, and the core-envelope interface as a dotted line.
Figures 7 and 8 show a selection of such dynamo solutions, in the form of animations in meridional planes and time-latitude diagrams of the toroidal field extracted at the core-envelope interface, here rc/R⊙ = 0.7. If sunspot-producing toroidal flux ropes form in regions of peak toroidal field strength, and if those ropes rise radially to the surface, then such diagrams are directly comparable to the sunspot butterfly diagram of Figure 3. All models have CΩ = 25000, |Cα| = 10, ηT/ηc = 10, and ηT = 5 × 10¹¹ cm² s⁻¹, which leads to τ ≃ 300 yr. To facilitate comparison between solutions, here antisymmetric parity was imposed via the boundary condition at the equator.
Stills from Meridional plane animations of various αΩ dynamo solutions using different latitudinal profiles and sign for the α-effect, as labeled. The polar axis coincides with the left quadrant boundary. The toroidal field is plotted as filled contours (constant increments, green to blue for negative B, yellow to red for positive B), on which poloidal fieldlines are superimposed (blue for clockwise-oriented fieldlines, orange for counter-clockwise orientation). The dashed line is the core-envelope interface at rc/R = 0.7. Time-latitude "butterfly" diagrams for these three solutions are plotted in Figure 8. (To watch the movie, please go to the online version of this review article at http://www.livingreviews.org/lrsp-2010-3.)
Northern hemisphere time-latitude ("butterfly") diagrams for the three αΩ dynamo solutions of Figure 7, constructed at the depth rc/R⊙ = 0.7 corresponding to the core-envelope interface. Isocontours of toroidal field are normalized to their peak amplitudes, and plotted for increments ΔB/ max(B) = 0.2, with yellow-to-red (green-to-blue) contours corresponding to B > 0 (< 0). The assumed latitudinal dependency of the α-effect is given above each panel. Other model ingredients as in Figure 5. Note the co-existence of two distinct cycles in the solution shown in Panel B.
Examination of these animations reveals that the dynamo is concentrated in the vicinity of the core-envelope interface, where the adopted radial profile for the α-effect is maximal (cf. Figure 6). In conjunction with a fairly thin tachocline, the radial shear therein then dominates the induction of the toroidal magnetic component. With an eye on Figure 5, notice also how the dynamo waves propagate along isocontours of angular velocity, in agreement with the Parker-Yoshimura sign rule (cf. Section 4.2.5). In the butterfly diagram, this translates into a systematic tilt of the isocontours of toroidal magnetic field. Note that even for an equatorially-concentrated α-effect (Panels B and C), a strong polar branch is nonetheless apparent in the butterfly diagrams, a direct consequence of the stronger radial shear present at high latitudes in the tachocline (see also corresponding animations). Models using an α-effect operating throughout the whole convective envelope, on the other hand, would feed primarily on the latitudinal shear therein, so that for positive Cα the dynamo mode would propagate radially upward in the envelope (see Lerche and Parker, 1972).
It is noteworthy that co-existing dynamo branches, as in Panel B of Figure 8, can have distinct dynamo periods, which in nonlinearly saturated solutions leads to long-term amplitude modulation. This is typically not expected in dynamo models where the only nonlinearity present is a simple algebraic quenching formula such as Equation (23). Note that this does not occur for the Cα < 0 solution, where both branches propagate away from each other, but share a common latitude of origin and so are phase-locked at the onset (cf. Panel C of Figure 8).
A common property of all oscillatory αΩ solutions discussed so far is that their period, for given values of the dynamo numbers Cα, CΩ, is inversely proportional to the numerical value adopted for the (turbulent) magnetic diffusivity ηT. The ratio of poloidal-to-toroidal field strength, in turn, is found to scale as some power (usually close to 1/2) of the ratio Cα/CΩ, at a fixed value of the product Cα × CΩ.
The models discussed above are based on rather minimalistic and partly ad hoc assumptions on the form of the α-effect. More elaborate models have been proposed, relying on calculations of the full α-tensor based on some underlying turbulence models (see, e.g., Kitchatinov and Rüdiger, 1993). While this approach usually displaces the ad hoc assumptions into the turbulence model, it has the definite merit of offering an internally consistent approach to the calculation of turbulent diffusivities and large-scale flows. Rüdiger and Brandenburg (1995) remains a good example of the current state-of-the-art in this area; see also Rüdiger and Arlt (2003), and references therein.
Critical assessment
From a practical point of view, the outstanding success of the mean-field αΩ model remains its robust explanation of the observed equatorward drift of toroidal field-tracing sunspots in the course of the cycle in terms of a dynamo-wave. On the theoretical front, the model is also buttressed by mean-field electrodynamics which, in principle, offers a physically sound theory from which to compute the (critical) α-effect and magnetic diffusivity. The models' primary uncertainties turn out to lie at that level, in that the application of the theory to the Sun in a tractable manner requires additional assumptions that are most certainly not met under solar interior conditions. Those uncertainties are compounded when taking the theory into the nonlinear regime, to calculate the dependence of the α-effect and diffusivity on the magnetic field strength. This latter problem remains very much open at this writing.
Interface dynamos
Strong α-quenching and the saturation problem
The α-quenching expression (23) used in the preceding section amounts to saying that dynamo action saturates once the mean, dynamo-generated field reaches an energy density comparable to that of the driving turbulent fluid motions, i.e., \(B_{eq} \sim \sqrt {4\pi \rho } v\), where v is the turbulent velocity amplitude. This appears eminently sensible, since from that point on a toroidal fieldline would have sufficient tension to resist deformation by cyclonic turbulence, and so could no longer feed the α-effect. At the base of the solar convective envelope, one finds Beq ∼ 1 kG, for v ∼ 10³ cm s⁻¹, according to standard mixing length theory of convection. However, various calculations and numerical simulations have indicated that long before the mean field 〈B〉 reaches this strength, the helical turbulence reaches equipartition with the small-scale, turbulent component of the magnetic field (e.g., Cattaneo and Hughes, 1996, and references therein). Such calculations also indicate that the ratio between the small-scale and mean magnetic components should itself scale as Rm^(1/2), where Rm = vl/η is a magnetic Reynolds number based on the microscopic magnetic diffusivity. This then leads to the alternate quenching expression
$$\alpha \to \alpha \left( {\left\langle B \right\rangle } \right) = \frac{{\alpha _0 }} {{1 + Rm\left( {\left\langle B \right\rangle /B_{eq} } \right)^2 }},$$
known in the literature as strong α-quenching or catastrophic quenching. Since Rm ∼ 10⁸ in the solar convection zone, this leads to quenching of the α-effect for very low amplitudes for the mean magnetic field, of order 10⁻¹ G. Even though significant field amplification is likely in the formation of a toroidal flux rope from the dynamo-generated magnetic field, we are now a very long way from the 10 – 100 kG demanded by simulations of buoyantly rising flux ropes (see Fan, 2009).
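The numbers behind this statement follow from the rough estimates used earlier in this section (all inputs are assumptions of that kind):

```python
# Orders of magnitude for "strong" (catastrophic) alpha-quenching.
v, l, eta_micro = 1e3, 1e9, 1e4        # cm/s, cm, cm^2/s (microscopic diffusivity)
Rm = v * l / eta_micro                  # small-scale magnetic Reynolds number ~ 1e8
B_eq = 1e3                              # equipartition field at the base of the CZ [G]

# Equation (31): alpha is halved once Rm*(<B>/B_eq)**2 ~ 1, i.e. at
B_quench = B_eq / Rm**0.5
print(f"Rm ~ {Rm:.0e}, quenching sets in at <B> ~ {B_quench:.1e} G")   # ~ 0.1 G
```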
A way out of this difficulty was proposed by Parker (1993), in the form of interface dynamos. The idea is beautifully simple: If the toroidal field quenches the α-effect, amplify and store the toroidal field away from where the α-effect is operating! Parker showed that in a situation where a radial shear and α-effect are segregated on either side of a discontinuity in magnetic diffusivity (taken to coincide with the core-envelope interface), the αΩ dynamo equations support solutions in the form of travelling surface waves localized on the discontinuity in diffusivity. The key aspect of Parker's solution is that for supercritical dynamo waves, the ratio of peak toroidal field strength on either side of the discontinuity surface is found to scale with the diffusivity ratio as
$$\frac{{\max (B_2 )}} {{\max (B_1 )}} \sim \left( {\frac{{\eta _2 }} {{\eta _1 }}} \right)^{ - 1/2} ,$$
where the subscript "1" refers to the low-η region below the core-envelope interface, and "2" to the high-η region above. If one assumes that the envelope diffusivity η2 is of turbulent origin, then η2 ∼ lv, so that the toroidal field strength ratio scales as ∼ (vl/η1)^(1/2) ≡ Rm^(1/2). This is precisely the factor needed to bypass strong α-quenching (Charbonneau and MacGregor, 1996). Somewhat more realistic variations on Parker's basic model were later elaborated (MacGregor and Charbonneau, 1997; Zhang et al., 2004), and, while differing in important details, nonetheless confirmed Parker's overall picture.
Tobias (1996a) discusses in detail a related Cartesian model bounded in both horizontal and vertical direction, but with constant magnetic diffusivity η throughout the domain. Like Parker's original interface configuration, his model includes an α-effect residing in the upper half of the domain, with a purely radial shear in the bottom half. The introduction of diffusivity quenching then reduces the diffusivity in the shear region, "naturally" turning the model into a bona fide interface dynamo, supporting once again oscillatory solutions in the form of dynamo waves travelling in the "latitudinal" x-direction. This basic model was later generalized by various authors (Tobias, 1997; Phillips et al., 2002) to include the nonlinear backreaction of the dynamo-generated magnetic field on the differential rotation; further discussion of such nonlinear models is deferred to Section 5.3.1.
The next obvious step is to construct an interface dynamo in spherical geometry, using a solar-like differential rotation profile. This was undertaken by Charbonneau and MacGregor (1997). Unfortunately, the numerical technique used to handle the discontinuous variation in η at the core-envelope interface turned out to be physically erroneous for the vector potential A describing the poloidal fieldFootnote 7 (see Markiel and Thomas, 1999, for a discussion of this point), which led to spurious dynamo action in some parameter regimes. The matching problem is best avoided by using a continuous but rapidly varying diffusivity profile at the core-envelope interface, with the α-effect concentrated at the base of the envelope, and the radial shear immediately below, but without significant overlap between these two source regions (see Panel B of Figure 9). Such numerical models can be constructed as a variation on the αΩ models considered earlier.
A representative interface dynamo model in spherical geometry. This solution has CΩ = 2.5 × 10⁵, Cα = +10, and a core-to-envelope diffusivity contrast of 10⁻². Panel A shows a sunspot butterfly diagram, and Panel B a series of radial cuts of the toroidal field at latitude 15°. The (normalized) radial profiles of magnetic diffusivity, α-effect, and radial shear are also shown, again at latitude 15°. The core-envelope interface is again at r/R⊙ = 0.7 (dotted line), where the magnetic diffusivity varies near-discontinuously. Panels C and D show the variations of the core-to-envelope peak toroidal field strength and dynamo period with the diffusivity contrast, for a sequence of otherwise identical dynamo solutions.
In spherical geometry, and especially in conjunction with a solar-like differential rotation profile, making a working interface dynamo model is markedly trickier than if only a radial shear is operating, as in the Cartesian models discussed earlier (see Charbonneau and MacGregor, 1997; Markiel and Thomas, 1999; Zhang et al., 2003a). Panel A of Figure 9 shows a butterfly diagram for a numerical interface solution with CΩ = 2.5 × 10⁵, Cα = +10, and a core-to-envelope diffusivity contrast Δη = 10⁻². The poleward propagating equatorial branch is precisely what one would expect from the combination of positive radial shear and positive α-effect according to the Parker-Yoshimura sign ruleFootnote 8. Here the α-effect is (artificially) concentrated towards the equator, by imposing a latitudinal dependency α ∼ sin(4θ) for π/4 ≤ θ ≤ 3π/4, and zero otherwise.
The model does achieve the kind of toroidal field amplification one would like to see in interface dynamos. This can be seen in Panel B of Figure 9, which shows radial cuts of the toroidal field taken at latitude π/8, and spanning half a cycle. Notice how the toroidal field peaks below the core-envelope interface (vertical dotted line), well below the α-effect region and near the peak in radial shear. Panel C of Figure 9 shows how the ratio of peak toroidal field below and above rc varies with the imposed diffusivity contrast Δη. The dashed line is the dependency expected from Equation (32). For relatively low diffusivity contrast, -1.5 ≤ log(Δη) ≲ 0, both the toroidal field ratio and dynamo period increase as ∼ (Δη)^(-1/2). Below log(Δη) ∼ -1.5, the max(B)-ratio increases more slowly, and the cycle period falls, contrary to expectations for interface dynamos (see, e.g., MacGregor and Charbonneau, 1997). This is basically an electromagnetic skin-depth effect; the cycle period is such that the poloidal field cannot diffuse as deep as the peak in radial shear in the course of a half cycle. The dynamo then runs on a weaker shear, thus yielding a smaller field strength ratio and a weaker overall cycle (on the energetics of interface dynamos, see Ossendrijver and Hoyng, 1997; also Steiner and Ferriz-Mas, 2005).
So far the great success of interface dynamos remains their ability to evade α-quenching even in its "strong" formulation, and so produce equipartition or perhaps even super-equipartition mean toroidal magnetic fields immediately beneath the core-envelope interface. They represent the only variety of dynamo models formally based on mean-field electrodynamics that can achieve this without additional physical effects introduced into the model. All of the uncertainties regarding the calculations of the α-effect and magnetic diffusivity carry over from αΩ to interface models, with diffusivity quenching becoming a particularly sensitive issue in the latter class of models (see, e.g., Tobias, 1996a).
Interface dynamos suffer acutely from something that is sometimes termed "structural fragility". Many gross aspects of the model's dynamo behavior often end up depending sensitively on what one would normally hope to be minor details of the model's formulation. For example, the interface solutions of Figure 9 are found to behave very differently if the α-effect region is displaced slightly upwards, or assumes other latitudinal dependencies. Moreover, as exemplified by the calculations of Mason et al. (2008), this sensitivity carries over to models in which the coupling between the two source regions is achieved by transport mechanisms other than diffusion. This sensitivity is exacerbated when a latitudinal shear is present in the differential rotation profile; compare, e.g., the behavior of the Cα > 0 solutions discussed here to those discussed in Markiel and Thomas (1999). Often in such cases, a mid-latitude αΩ dynamo mode, powered by the latitudinal shear within the tachocline and envelope, interferes with and/or overpowers the interface mode (see also Dikpati et al., 2005).
Because of this structural sensitivity, interface dynamo solutions also end up being annoyingly sensitive to choice of time-step size, spatial resolution, and other purely numerical details. From a modelling point of view, interface dynamos lack robustness.
Mean-field models including meridional circulation
Meridional circulation is unavoidable in turbulent, compressible rotating convective shells. It basically results from an imbalance between Reynolds stresses and buoyancy forces. The ∼ 15 m s-1 poleward flow observed at the surface (see, e.g., Hathaway, 1996; Ulrich and Boyden, 2005) has now been detected helioseismically, down to r/R⊙ ≃ 0.85 (Schou and Bogart, 1998; Braun and Fan, 1998), without significant departure from the poleward direction except locally and very close to the surface, in the vicinity of active region belts (see Gizon, 2004; Gizon and Rempel, 2008, and references therein), and in polar latitudes at some phases of the solar cycle (Haber et al., 2002). Long considered unimportant from the dynamo point of view, meridional circulation has gained popularity in recent years, initially in the Babcock-Leighton context but now also in other classes of models.
Accordingly, we now add a steady meridional circulation to our basic αΩ models of Section 4.2. The convenient parametric form developed by van Ballegooijen and Choudhuri (1988) is used here and in all later illustrative models including meridional circulation (Sections 4.5 and 4.8). This parameterization defines a steady quadrupolar circulation pattern, with a single flow cell per quadrant extending from the surface down to a depth rb. Circulation streamlines are shown in Figure 10, together with radial cuts of the latitudinal component at mid-latitudes (θ = π/4). The flow is poleward in the outer convection zone, with an equatorial return flow peaking slightly above the core-envelope interface, and rapidly vanishing below.
Streamlines of meridional circulation (Panel A), together with the total magnetic diffusivity profile defined by Equation (17) (dash-dotted line) and a mid-latitude radial cut of uθ (bottom panel). The dotted line is the core-envelope interface. This is the analytic flow of van Ballegooijen and Choudhuri (1988), with parameter values m = 0.5, p = 0.25, q = 0, and rb = 0.675.
The inclusion of meridional circulation in the non-dimensionalized αΩ dynamo equations leads to the appearance of a new dimensionless quantity, again a magnetic Reynolds number, but now based on an appropriate measure of the circulation speed u0:
$$Rm = \frac{{u_0 R_ \odot }} {{\eta _T }}.$$
Using the value u0 = 1500 cm s-1 from observations of the poleward surface meridional flow leads to Rm ≃ 200, again with ηT = 5 × 1011 cm2 s-1. In the solar cycle context, using higher values of Rm thus implies proportionally lower turbulent diffusivities.
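The arithmetic behind this estimate is easily checked (a minimal sketch; the solar radius value 6.96 × 1010 cm is the only number not quoted above):

```python
# Sketch: the circulation-based magnetic Reynolds number quoted in the text.
R_sun = 6.96e10          # solar radius [cm]
u0 = 1500.0              # surface meridional flow speed [cm/s]
eta_T = 5.0e11           # turbulent diffusivity [cm^2/s]

Rm = u0 * R_sun / eta_T
print(f"Rm = {Rm:.0f}")   # ~ 209, i.e. Rm ~ 200 as stated
```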
Meridional circulation can bodily transport the dynamo-generated magnetic field (terms labeled "advective transport" in Equations (11, 12)), and therefore, for a (presumably) solar-like equatorward return flow that is vigorous enough — in the sense of Rm being large enough — overpower the Parker-Yoshimura propagation rule embodied in Equation (30). This was nicely demonstrated by Choudhuri et al. (1995), in the context of a mean-field αΩ model with a positive α-effect concentrated near the surface, and a latitude-independent, purely radial shear at the core-envelope interface. The behavioral turnover from dynamo wave-like solutions to circulation-dominated magnetic field transport sets in when the circulation speed becomes comparable to the propagation speed of the dynamo wave. In the circulation-dominated regime, the cycle period loses sensitivity to the assumed turbulent diffusivity value, and becomes determined primarily by the circulation's turnover time. Models achieving equatorward propagation of the deep toroidal magnetic component in this manner are now often called flux-transport dynamos.
With a solar-like differential rotation profile, however, once again the situation is far more complex. Starting from the most basic αΩ dynamo solution with α ∼ cos θ (Figure 8A), new solutions are now recomputed, this time including meridional circulation. An animation of a typical solution is shown in Figure 11, and a sequence of time-latitude diagrams for four increasing values of the circulation flow speed, as measured by Rm, are plotted in Figure 12.
mpg-Movie (2347.42480469 KB) Still from a movie showing Meridional plane animations for an αΩ dynamo solution including meridional circulation. With Rm = 103, this solution is operating in the advection-dominated regime as a flux-transport dynamo. The corresponding time-latitude "butterfly" diagram is plotted in Figure 12C below. Color-coding of the toroidal magnetic field and poloidal fieldlines as in Figure 7. (For video see appendix)
Time-latitude "butterfly" diagrams for the α-quenched αΩ solutions depicted earlier in Panel A of Figure 8, except that meridional circulation is now included, with (A) Rm = 50, (B) Rm = 100, (C) Rm = 1000, and (D) Rm = 2000 For the turbulent diffusivity value adopted here, ηT = 5 × 1011 cm2 s-1, Rm = 200 would corresponds to a solar-like circulation speed.
At Rm = 50, little difference is seen with the circulation-free solutions (cf. Figure 8A), except for an increase in the cycle frequency, due to the Doppler shift experienced by the equatorwardly propagating dynamo wave (see Roberts and Stix, 1972). At Rm = 100 (part B), the cycle frequency has further increased and the poloidal component produced in the high-latitude region of the tachocline is now advected to the equatorial regions on a timescale becoming comparable to the cycle period, so that a cyclic activity, albeit with a longer period, becomes apparent at low latitudes. At Rm = 103 (panel C and animation in Figure 11) the dynamo mode now peaks at mid-latitude, a consequence of the inductive action of the latitudinal shear, favored by the significant stretching experienced by the poloidal fieldlines as they get advected equatorward. At Rm = 2000 the original high latitude dynamo mode has all but vanished, and the mid-latitude mode is dominant. The cycle period is now set primarily by the turnover time of the meridional flow; this is the telltale signature of flux-transport dynamos.
All this may look straightforward, but it must be emphasized that not all dynamo models with solar-like differential rotation behave in this (relatively) simple manner. For example, the Cα = -10 solution with α ∼ sin2 θ cos θ (Figure 8C) transits to a steady mode as Rm increases above ∼ 102. Moreover, the sequence of α ∼ cos θ solutions shown in Figure 12 actually presents a narrow window around Rm ∼ 200 where the dynamo is decaying, due to a form of destructive interference between the high-latitude αΩ mode and the mid-latitude advection-dominated dynamo mode that dominates at higher values of Rm. Qualitatively similar results were obtained by Küker et al. (2001) using different prescriptions for the α-effect and solar-like differential rotation (see in particular their Figure 11; see also Rüdiger and Elstner, 2002; Bonanno et al., 2003). When field transport by turbulent pumping is included (see Käpylä et al., 2006b), αΩ models including meridional circulation can provide time-latitude "butterfly" diagrams that are reasonably solar-like.
Even if the meridional flow is too slow — or the turbulent magnetic diffusivity too high — to force the dynamo model into the advection-dominated regime, the poleward flow, being much faster at the surface, can still dominate the spatio-temporal evolution of the radial surface magnetic field, as shown in Figure 13, for the same sequence of αΩ solutions with α ∼ cos θ as in Figure 12, at Rm = 0, 50, 100, and 500 (panels A – D). For low circulation speeds (Rm ≲ 50), the equatorward drift of the surface radial field is simply a diffused imprint of the equatorward drift of the deep-seated toroidal field (cf. Figure 8A and 12A). At higher circulation speeds, however, the surface magnetic field is swept instead towards the pole (see Figure 13C), becoming strongly concentrated and amplified there for Rm exceeding a few hundreds (Figures 11 and 13D).
Time-latitude diagrams of the surface radial magnetic field, for increasing values of the circulation speed, as measured by the Reynolds number Rm. This is for the same reference αΩ solution with α ∼ cos θ as in Figures 8A and 12. Note the marked increase of the peak surface field strength as Rm exceeds ∼ 100.
From the modelling point-of-view, in the kinematic regime at least, the inclusion of meridional circulation yields a much better fit to observed surface magnetic field evolution, as well as a robust setting of the cycle period. Whether it can provide an equally robust equatorward propagation of the deep toroidal field is less clear. The results presented here in the context of mean-field αΩ models suggest a rather complex overall picture, and in interface dynamos the Cartesian solutions obtained by Petrovay and Kerekes (2004) even suggest that dynamo action can be severely hindered. Yet, in other classes of models discussed below (Sections 4.5 and 4.8), circulation does have this desired effect (see also Seehafer and Pipin, 2009, for an intriguing mean-field model calculation not relying on the α-effect).
On the other hand, dynamo models including meridional circulation tend to produce surface polar field strength largely in excess of observed values, unless magnetic diffusion is significantly enhanced in the surface layers, and/or field submergence takes place very efficiently. This is a direct consequence of magnetic flux conservation in the converging poleward flow. This situation carries over to the other types of models to be discussed in Sections 4.5 and 4.8, unless additional modelling assumptions are introduced (e.g., enhanced surface magnetic diffusivity, see Dikpati et al., 2004), or if a counterrotating meridional flow cell is introduced in the high latitude regions (Dikpati et al., 2004; Jiang et al., 2009), a feature that has actually been detected in surface Doppler measurements as well as helioseismically during cycle 22 (see Haber et al., 2002; Ulrich and Boyden, 2005).
A more fundamental and potentially serious difficulty harks back to the kinematic approximation, whereby the form and speed of up are specified a priori. Meridional circulation is a relatively weak flow in the bottom half of the solar convective envelope (see Miesch, 2005), and the stochastic fluctuations of the Reynolds stresses powering it are expected to lead to strong spatiotemporal variations, an expectation verified by both analytical models (Rempel, 2005) and numerical simulations (Miesch, 2005). The ability of this meridional flow to merrily advect equipartition-strength magnetic fields should not be taken for granted (but do see Rempel, 2006a,b).
Before leaving the realm of mean-field dynamo models it is worth noting that many of the conceptual difficulties associated with calculations of the α-effect and turbulent diffusivity are not unique to the mean-field approach, and in fact carry over to all models discussed in the following sections. In particular, to operate properly all of the upcoming solar dynamo models require the presence of a strongly enhanced magnetic diffusivity, presumably of turbulent origin, at least in the convective envelope. In this respect, the rather low value of the turbulent magnetic diffusivity needed to achieve high enough Rm in flux transport dynamos is also somewhat problematic, since the corresponding turbulent diffusivity ends up some two orders of magnitude below the (uncertain) mean-field estimates. However, the model calculations of Muñoz-Jaramillo et al. (2010a) indicate that magnetic diffusivity quenching may offer a viable solution to this latter quandary.
Models based on shear instabilities
We now turn to a recently proposed class of flux transport dynamo models relying on the latitudinal shear instability of the angular velocity profiles in the upper radiative portion of the solar tachocline (Dikpati and Gilman, 2001; Dikpati et al., 2004). These authors work with what are effectively the mean-field αΩ dynamo equations including meridional circulation. They design their "tachocline α-effect" in the form of a latitudinal parameterization of the longitudinally-averaged kinetic helicity associated with the planforms they obtain from a linear hydrodynamical stability analysis of the latitudinal differential rotation in the part of the tachocline coinciding with the overshoot region. The analysis is carried out in the framework of shallow-water theory (see Dikpati and Gilman, 2001). In analogy with mean-field theory, the resulting α-effect is assumed to be proportional to kinetic helicity but of opposite sign (see Equation (19)), and ends up predominantly positive at mid-latitudes in the Northern solar hemisphere. In their dynamo model, Dikpati and Gilman (2001) use a solar-like differential rotation, depth-dependent magnetic diffusivity and meridional circulation pattern much like those shown in Figures 5, 6, and 10 herein. The usual ad hoc α-quenching formula (cf. Equation (23)) is introduced as the sole amplitude-limiting nonlinearity.
Representative solutions
Many representative solutions for this class of dynamo models can be examined in Dikpati and Gilman (2001) and Dikpati et al. (2004), where their properties are discussed at some length. Figure 14 shows time-latitude diagrams of the toroidal field at the core-envelope interface, and surface radial field. This is a solar-like solution with a mid-latitude surface meridional (poleward) flow speed of 17 m s-1, envelope diffusivity ηT = 5 × 1011 cm2 s-1, and a core-to-envelope magnetic diffusivity contrast Δη = 10-3. Note the equatorward migration of the deep toroidal field, set here by the meridional flow in the deep envelope, and the poleward migration and intensification of the surface poloidal field, again a direct consequence of advection by meridional circulation, as in the mean-field dynamo models discussed in Section 4.4, when operating in the advection-dominated, high Rm regime. The three-lobe structure of each spatio-temporal cycle in the butterfly diagram reflects the presence of three peaks in the latitudinal profile of kinetic helicity for this model.
Time-latitude "butterfly" diagrams of the toroidal field at the core-envelope interface (top), and surface radial field (bottom) for a representative dynamo solution computed using the model of Dikpati and Gilman (2001). Note how the deep toroidal field peaks at very low latitudes, in good agreement with the sunspot butterfly diagram. For this solution the equatorial deep toroidal field and polar surface radial field lag each other by ∼ π, but other parameter settings can bring this lag closer to the observed π/2 (diagrams kindly provided by M. Dikpati).
While these models are only a recent addition to the current "zoo" of solar dynamo models, they have been found to compare favorably to a number of observed solar cycle features. The model can be adjusted to yield equatorward propagating dominant activity belts, solar-like cycle periods, and correct phasing between the surface polar field and the tachocline toroidal field. These features can be traced primarily to the advective action of the meridional flow. They also yield the correct solution parity, and are self-excited. As with conventional αΩ models relying on meridional circulation to set the propagation direction of dynamo waves (see Section 4.4.2), the meridional flow must remain unaffected by the dynamo-generated magnetic field at least up to equipartition strength, a potentially serious difficulty also shared by the Babcock-Leighton models to be discussed in Section 4.8 below.
The primary weakness of these models, in their present form, is their reliance on a linear stability analysis that altogether ignores the destabilizing effect of magnetic fields. Gilman and Fox (1997) have demonstrated that the presence of even a weak toroidal field in the tachocline can very efficiently destabilize a latitudinal shear profile that is otherwise hydrodynamically stable (see also Zhang et al., 2003b). Relying on a purely hydrodynamical stability analysis is then hard to reconcile with a dynamo process producing strong toroidal field bands of alternating polarities migrating towards the equator in the course of the cycle, especially since latitudinally concentrated toroidal fields have been found to be unstable over a very wide range of toroidal field strengths (see Dikpati and Gilman, 1999). Achieving dynamo saturation through a simple amplitude-limiting quenching formula such as Equation (23) is then also hard to justify. Progress has been made in studying non-linear development of both the hydrodynamical and MHD versions of the shear instability (see, e.g., Cally, 2001; Cally et al., 2003), so that the needed improvements on the dynamo front are hopefully forthcoming.
Models based on buoyant instabilities of sheared magnetic layers
Dynamo models relying on the buoyant instability of magnetized layers have been presented in Thelen (2000b), the layer being identified with the tachocline. Here also the resulting azimuthal electromotive force is parameterized as a mean-field-like α-effect, introduced into the standard αΩ dynamo equations. The model is nonlinear, in that it includes the magnetic backreaction on the large-scale, purely radial velocity shear within the layer. The analysis of Thelen (2000a) indicates that the α-effect is negative in the upper part of the shear layer. Cyclic solutions are found in substantial regions of parameter space, and, not surprisingly, the solutions exhibit migratory wave patterns compatible with the Parker-Yoshimura sign rule.
Representative solutions for this class of dynamo models can be examined in Thelen (2000b). These models are not yet at the stage where they can be meaningfully compared with the solar cycle. They do have a number of attractive features, including their ability to operate in the strong field regime.
Models based on flux tube instabilities
From instability to α-effect
To date, stability studies of toroidal flux ropes stored in the overshoot layer have been carried out in the framework of the thin-flux tube approximation (Spruit, 1981). It is possible to construct "stability diagrams" taking the form of growth rate contours in a parameter space comprised of flux tube strength, latitudinal location, depth in the overshoot layer, etc. One such diagram, taken from Ferriz-Mas et al. (1994), is reproduced in Figure 15. The key is now to identify regions in such stability diagrams where weak instability arises (growth rates ≳ 1 yr). In the case shown in Figure 15, these regions are restricted to flux tube strengths in the approximate range 60 – 150 kG. The correlation between the flow and field perturbations is such as to yield a mean azimuthal electromotive force equivalent to a positive α-effect in the N-hemisphere (Ferriz-Mas et al., 1994; Brandenburg and Schmitt, 1998).
Stability diagram for toroidal magnetic flux tubes located in the overshoot layer immediately beneath the core-envelope interface. The plot shows contours of growth rates in the latitude-field strength plane. The gray scale encodes the azimuthal wavenumber of the mode with largest growth rate, and regions left in white are stable. Dynamo action is associated with the regions with growth rates ∼ 1 yr, here labeled I and II (diagram kindly provided by A. Ferriz-Mas).
Dynamo models relying on the non-axisymmetric buoyant instability of toroidal magnetic fields were first proposed by Schmitt (1987), and further developed by Ferriz-Mas et al. (1994); Schmitt et al. (1996), and Ossendrijver (2000a) for the case of toroidal flux tubes. These dynamo models are all mean-field-like, in that the mean azimuthal electromotive force arising from instability of the flux tubes is parametrized as an α-effect, and the dynamo equations solved are then the same as those of the conventional αΩ mean-field model (see Section 4.2.3), including various forms of algebraic α-quenching as the sole amplitude-limiting nonlinearity. As with mean-field models, the dynamo period presumably depends sensitively on the assumed value of (turbulent) magnetic diffusivity, and equatorward propagation of the dynamo wave requires a negative α-effect at low latitudes.
Although it has not yet been comprehensively studied, this dynamo mechanism has a number of very attractive properties. It operates without difficulty in the strong field regime (in fact it requires strong fields to operate). It also naturally yields dynamo action concentrated at low latitudes, so that a solar-like butterfly diagram can be readily produced from a negative α-effect even with a solar-like differential rotation profile, at least judging from the solutions presented in Schmitt et al. (1996) and Ossendrijver (2000a,b).
Difficulties include the need of a relatively finely tuned magnetic diffusivity to achieve a solar-like dynamo period, and a finely tuned level of subadiabaticity in the overshoot layer for the instability to kick on and off at the appropriate toroidal field strengths (compare Figures 1 and 2 in Ferriz-Mas et al., 1994). The non-linear saturation of the instability is probably less of an issue here than with the α-effect based on purely hydrodynamical shear instability (see Section 4.5 above), since, as the instability grows, the flux ropes leave the site of dynamo action by entering the convection zone and buoyantly rising to the surface.
The effects of meridional circulation in this class of dynamo models have yet to be investigated; this should be particularly interesting, since both analytic calculations and numerical simulations suggest a positive α-effect in the Northern hemisphere, which should then produce poleward propagation of the dynamo wave at low latitude. Meridional circulation could then perhaps produce equatorward propagation of the dynamo magnetic field even with a positive α-effect, as it does in true mean-field models (cf. Section 4.4).
Babcock-Leighton models
Solar cycle models based on what is now called the Babcock-Leighton mechanism were first proposed by Babcock (1961) and further elaborated by Leighton (1964, 1969), yet they were all but eclipsed by the rise of mean-field electrodynamics in the mid- to late 1960s. Their revival was motivated not only by the mounting difficulties with mean-field models alluded to earlier, but also by the fact that synoptic magnetographic monitoring over solar cycles 21 and 22 has offered strong evidence that the surface polar field reversals are indeed triggered by the decay of active regions (see Wang et al., 1989; Wang and Sheeley Jr, 1991, and references therein). The crucial question is whether this is a mere side-effect of dynamo action taking place independently somewhere in the solar interior, or a dominant contribution to the dynamo process itself.
The mode of operation of a generic solar cycle model based on the Babcock-Leighton mechanism is illustrated in cartoon form in Figure 16. Let Pn represent the amplitude of the high-latitude, surface ("A") poloidal magnetic field in the late phases of cycle n, i.e., after the polar field has reversed. The poloidal field Pn is advected downward by meridional circulation (A → B), where it then starts to be sheared by the differential rotation while being also advected equatorward (B → C). This leads to the growth of a new low-latitude (C) toroidal flux system Tn+1, which becomes buoyantly unstable (C→D) and starts producing sunspots (D) which subsequently decay and release the poloidal flux Pn+1 associated with the new cycle n + 1. Poleward advection and accumulation of this new flux at high latitudes (D→A) then obliterates the old poloidal flux Pn, and the above sequence of steps begins anew.
Operation of a solar cycle model based on the Babcock-Leighton mechanism. The diagram is drawn in a meridional quadrant of the Sun, with streamlines of meridional circulation plotted in blue. Poloidal field having accumulated in the surface polar regions ("A") at cycle n must first be advected down to the core-envelope interface (dotted line) before production of the toroidal field for cycle n + 1 can take place (B→C). Buoyant rise of flux rope to the surface (C→D) is a process taking place on a much shorter timescale.
Meridional circulation clearly plays a key role in this "conveyor belt" model of the solar cycle, by providing the needed link between the two spatially segregated source regions. Not surprisingly, topologically more complex multi-cells circulation patterns can lead to markedly different dynamo behavior (see, e.g., Bonanno et al., 2006; Jouve and Brun, 2007), and can also have a profound impact on the evolution of the surface magnetic field (Dikpati et al., 2004; Jiang et al., 2009).
Formulation of a poloidal source term
As with all other dynamo models discussed thus far, the troublesome ingredient in dynamo models relying on the Babcock-Leighton mechanism is the specification of an appropriate poloidal source term, to be incorporated into the mean-field axisymmetric dynamo equations. In essence, all implementations discussed here are inspired by the results of numerical simulations of the buoyant rise of thin flux tubes, which, in principle, allow one to calculate the emergence latitudes and tilts of BMRs, the latter being at the very heart of the Babcock-Leighton mechanism.
The first post-helioseismic dynamo model based on the Babcock-Leighton mechanism is due to Wang et al. (1991); these authors developed a coupled two-layer model (2 × 1D), where a poloidal source term is introduced in the upper (surface) layer, and made linearly proportional to the toroidal field strength at the corresponding latitude in the bottom layer. A similar non-local approach was later used by Dikpati and Charbonneau (1999), Charbonneau et al. (2005) and Guerrero and de Gouveia Dal Pino (2008) in their 2D axisymmetric model implementation, using a solar-like differential rotation and meridional flow profiles similar to Figures 5 and 10 herein. The otherwise much similar implementation of Nandy and Choudhuri (2001, 2002) and Chatterjee et al. (2004), on the other hand, uses a mean-field-like local α-effect, concentrated in the upper layers of the convective envelope and operating in conjunction with a "buoyancy algorithm" whereby toroidal fields located at the core-envelope interface are locally removed and deposited in the surface layers when their strength exceeds some preset threshold. The implementation developed by Durney (1995) is probably closest to the essence of the Babcock-Leighton mechanism (see also Durney et al., 1993; Durney, 1996, 1997); whenever the deep-seated toroidal field exceeds some preset threshold, an axisymmetric "double ring" of vector potential is deposited in the surface layer, and left to spread latitudinally under the influence of magnetic diffusion. As shown by Muñoz-Jaramillo et al. (2010b), this formulation, used in conjunction with the axisymmetric models discussed in what follows, also leads to a good reproduction of the observed synoptic evolution of surface magnetic flux.
In all cases the poloidal source term is concentrated in the outer convective envelope, and, in the language of mean-field electrodynamics, amounts to a positive α-effect, in that a positive dipole moment is being produced from a positive deep-seated mean toroidal field. The Dikpati and Charbonneau (1999) and Nandy and Choudhuri (2001) source terms both have an α-quenching-like upper operating threshold on the toroidal field strength. This is motivated by simulations of rising thin flux tubes, indicating that tubes with strengths in excess of about 100 kG emerge without the E-W tilt required for the Babcock-Leighton mechanism to operate. The Durney (1995), Nandy and Choudhuri (2001), and Charbonneau et al. (2005) implementations also have a lower operating threshold, as suggested by thin flux tubes simulations.
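Schematically, such a source term can be pictured as a function of the deep toroidal field that switches on above a lower threshold and is quenched above an upper one. The following Python sketch uses smooth tanh cutoffs and purely illustrative threshold values; it is not the specific functional form adopted in any of the published implementations cited above:

```python
import numpy as np

def bl_source(B, s0=1.0, B_low=1.0e4, B_up=1.0e5, dB=2.0e3):
    """Illustrative Babcock-Leighton poloidal source term (hypothetical form).

    Proportional to the deep toroidal field B [G], switched on above a lower
    threshold B_low and quenched above an upper threshold B_up, with smooth
    tanh cutoffs of width dB. Not the expression of any published model.
    """
    lower_gate = 0.5 * (1.0 + np.tanh((np.abs(B) - B_low) / dB))
    upper_gate = 0.5 * (1.0 - np.tanh((np.abs(B) - B_up) / dB))
    return s0 * lower_gate * upper_gate * B

B = np.array([5.0e3, 5.0e4, 2.0e5])   # below, within, and above the operating window
print(bl_source(B))                    # only the mid-range field contributes appreciably
```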
Figure 17 is a meridional plane animation of a representative Babcock-Leighton dynamo solution computed following the model implementation of Charbonneau et al. (2005). The equatorward advection of the deep toroidal field by meridional circulation is here clearly apparent. Note also how the surface poloidal field first builds up at low latitudes, and is subsequently advected poleward and concentrated near the pole.
mpg-Movie (6019.31738281 KB) Still from a movie showing Meridional plane animation of a representative Babcock-Leighton dynamo solution from Charbonneau et al. (2005). Color coding of the toroidal field and poloidal fieldlines as in Figure 7. This solution uses the same differential rotation, magnetic diffusivity, and meridional circulation profile as for the advection-dominated αΩ solution of Section 4.4, but now with the non-local surface source term, as formulated in Charbonneau et al. (2005), and parameter values Cα = 5, CΩ = 5 × 104, Δη = 0.003, Rm = 840. Note again the strong amplification of the surface polar fields, and the latitudinal stretching of poloidal fieldlines by the meridional flow at the core-envelope interface. (For video see appendix)
Figure 18 shows N-hemisphere time-latitude diagrams for the toroidal magnetic field at the core-envelope interface (Panel A), and the surface radial field (Panel B), for a Babcock-Leighton dynamo solution now computed following the closely similar model implementation of Dikpati and Charbonneau (1999). Note how the polar radial field changes from negative (blue) to positive (red) at just about the time of peak positive toroidal field at the core-envelope interface; this is the phase relationship inferred from synoptic magnetograms (see, e.g., Figure 4 herein) as well as observations of polar faculae (see Sheeley Jr, 1991).
Time-latitude diagrams of the toroidal field at the core-envelope interface (Panel A), and radial component of the surface magnetic field (Panel B) in a Babcock-Leighton model of the solar cycle. This solution is computed for solar-like differential rotation and meridional circulation, the latter here closing at the core-envelope interface. The core-to-envelope contrast in magnetic diffusivity is Δη = 1/300, the envelope diffusivity ηT = 2.5 × 1011 cm2 s-1, and the (poleward) mid-latitude surface meridional flow speed is u0 = 16 m s-1.
Although it exhibits the desired equatorward propagation, the toroidal field butterfly diagram in Panel A of Figure 18 peaks at much higher latitude (∼ 45°) than the sunspot butterfly diagram (∼ 15° – 20°, cf. Figure 3). This occurs because this is a solution with high magnetic diffusivity contrast, where meridional circulation closes at the core-envelope interface, so that the latitudinal component of differential rotation dominates the production of the toroidal field, a situation that persists in models using more realistic differential profiles taken from helioseismic inversions (see Muñoz-Jaramillo et al., 2009). This difficulty can be alleviated by letting the meridional circulation penetrate below the core-envelope interface. Solutions with such flows are presented, e.g., in Dikpati and Charbonneau (1999) and Nandy and Choudhuri (2001, 2002). These latter authors have argued that this is in fact essential for a solar-like butterfly diagram to materialize, but this conclusion appears to be model-dependent at least to some degree (Guerrero and Muñoz, 2004; Guerrero and de Gouveia Dal Pino, 2007; Muñoz-Jaramillo et al., 2009). From the hydrodynamical standpoint, the boundary layer analysis of Gilman and Miesch (2004) (see also Rüdiger et al., 2005) indicates no significant penetration below the base of the convective envelope, although this conclusion has not gone unchallenged (see Garaud and Brummell, 2008), leaving the whole issue somewhat muddled at this juncture. The present-day observed solar abundances of Lithium and Beryllium restrict the penetration depth to r/R ≃ 0.62 (Charbonneau, 2007b), which is unfortunately too deep to pose very useful constraints on dynamo models, so that the final word will likely come from helioseismology, hopefully in the not too distant future.
A noteworthy property of this class of model is the dependency of the cycle period on model parameters; over a wide portion of parameter space, the meridional flow speed is found to be the primary determinant of the cycle period P. For example, in the Dikpati and Charbonneau (1999) model, this quantity is found to scale as
$$P = 56.8 u_0^{ - 0.89} s_0^{ - 0.13} \eta _T^{0.22} [yr].$$
This behavior arises because, in these models, the two source regions are spatially segregated, and the time required for circulation to carry the poloidal field generated at the surface down to the tachocline is what effectively sets the cycle period. The corresponding time delay introduced in the dynamo process has rich dynamical consequences, to be discussed in Section 5.4 below. The weak dependency of P on ηT and on the magnitude s0 of the poloidal source term is very much unlike the behavior typically found in mean-field models, where both these parameters usually play a dominant role in setting the cycle period. The analysis of Hathaway et al. (2003) supports the idea that the solar cycle period is indeed set by the meridional flow speed (but do see Schmitt and Schüssler, 2004, for an opposing viewpoint). As demonstrated by Jouve et al. (2010), interesting constraints can also be obtained from the observed dependence of stellar cycle periods on rotation rates.
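The relative weight of the three parameters in this fit can be read off directly from the exponents; in units-independent ratio form (a minimal sketch, using only the exponents quoted in the scaling law above):

```python
# Sketch: relative sensitivity of the cycle period implied by the fitted scaling
# P ~ u0**-0.89 * s0**-0.13 * eta_T**0.22 (units-independent ratio form).
exponents = {"u0": -0.89, "s0": -0.13, "eta_T": 0.22}
for name, a in exponents.items():
    change = 2.0 ** a - 1.0          # fractional change in P when the parameter doubles
    print(f"doubling {name:6s}: P changes by {100 * change:+5.1f} %")
# Doubling u0 nearly halves the period, while doubling s0 or eta_T changes it
# only by about -9 % and +16 % respectively: the meridional flow speed dominates.
```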
An interesting variation on the above model follows from the inclusion of turbulent pumping. With the expected downward pumping throughout the bulk of the convective envelope, and with a significant equatorward latitudinal component at low latitudes, the Babcock-Leighton mechanism can lead to dynamo action even if the meridional flow is constrained to the upper portion of the convective envelope. Downward turbulent pumping then links the two sources regions, and latitudinal pumping provides the needed equatorward concentration of the deep-seated toroidal component. An example taken from Guerrero and de Gouveia Dal Pino (2008) is shown in Figure 19. In this specific solution the circulation penetrates only down to r/R = 0.8, and the radial and latitudinal peak pumping speed are γr0 = 0.3 m s-1 and γθ0 = 0.9 m s-1, respectively.
Time-latitude diagrams of the toroidal field at the core-envelope interface (Panel A), and radial component of the surface magnetic field (Panel B) in a Babcock-Leighton model of the solar cycle with a meridional flow restricted to the upper half of the convective envelope, and including (parametrized) radial and latitudinal turbulent pumping. This is a solution from Guerrero and de Gouveia Dal Pino (2008) (see their Section 3.3 and Figure 5), but the overall modelling framework is almost identical to that described earlier, and used to generate Figure 18. The core-to-envelope contrast in magnetic diffusivity is Δη = 1/100, the envelope diffusivity ηT = 1011 cm2 s-1, and the (poleward) mid-latitude surface meridional flow speed is u0 = 13 m s-1 (figure produced from numerical data kindly provided by G. Guerrero).
With downward turbulent pumping now the primary mechanism linking the surface and tachocline, the dynamo period loses sensitivity to the meridional flow speeds, and becomes set primarily by the radial pumping speed. Indeed the dynamo solutions presented in Guerrero and de Gouveia Dal Pino (2008) are found to obey a scaling law of the form
$$P = 181.2 u_0^{ - 0.12} \gamma _{r0}^{ - 0.51} \gamma _{\theta 0}^{ - 0.05} [yr],$$
over a fairly wide range of parameter values. The radial pumping speed γr0 emerges here as the primary determinant of the cycle period. Finally, one can note in Figure 19 that the surface magnetic field no longer shows the strong concentration in the polar region that usually characterizes Babcock-Leighton dynamo solutions operating in the advection-dominated regime. This can be traced primarily to the efficient downward turbulent pumping that subducts the poloidal field as it is carried poleward by the meridional flow.
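Again reading off the exponents (sketch only), doubling the radial pumping speed now shortens the period far more than doubling the meridional flow speed does:

```python
# Sketch: with turbulent pumping linking the source regions, the fitted exponents
# shift the period's sensitivity from the meridional flow to the radial pumping speed.
print(f"doubling u0      : P x {2 ** -0.12:.2f}")   # ~0.92, weak dependence
print(f"doubling gamma_r0: P x {2 ** -0.51:.2f}")   # ~0.70, dominant dependence
```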
As with most models including meridional circulation published to date, Babcock-Leighton dynamo models usually produce excessively strong polar surface magnetic fields. While this difficulty can be fixed by increasing the magnetic diffusivity in the outermost layers, in the context of the Babcock-Leighton models this then leads to a much weaker poloidal field being transported down to the tachocline, which can be problematic from the dynamo point-of-view. On this see Dikpati et al. (2004) for illustrative calculations, and Mason et al. (2002) on the closely related issue of competition between surface and deep-seated α-effect. The model calculations of Guerrero and de Gouveia Dal Pino (2008) suggest that downward turbulent pumping may be a better option to reduce the strength of the polar field without impeding dynamo action.
Because of the strong amplification of the surface poloidal field in the poleward-converging meridional flow, Babcock-Leighton models tend to produce a significant — and often dominant — polar branch in the toroidal field butterfly diagram. Many of the models explored to date tend to produce symmetric-parity solutions when computed pole-to-pole over a full meridional plane (see, e.g., Dikpati and Gilman, 2001), but it is not clear how serious a problem this is, as relatively minor changes to the model input ingredients may flip the dominant parity (see Chatterjee et al., 2004; Charbonneau, 2007a, for specific examples). Nonetheless, in the advection-dominated regime there is definitely a tendency for the quadrupolar symmetry of the meridional flow to imprint itself on the dynamo solutions. A related difficulty, in models operating in the advection-dominated regime, is the tendency for the dynamo to operate independently in each solar hemisphere, so that cross-hemispheric synchrony is lost (Charbonneau, 2005, 2007a; Chatterjee and Choudhuri, 2006).
Because the Babcock-Leighton mechanism is characterized by a lower operating threshold, the resulting dynamo models are not self-excited. On the other hand, the Babcock-Leighton mechanism is expected to operate even for toroidal fields exceeding equipartition, the main remaining uncertainty being the level of amplification taking place when sunspot-forming toroidal flux ropes form from the dynamo-generated mean magnetic field. The nonlinear behavior of this class of models, at the level of magnetic backreaction on the differential rotation and meridional circulation, remains largely unexplored.
Numerical simulations of solar dynamo action
Ultimately, the solar dynamo problem should be tackled as a (numerical) solution of the complete set of MHD partial differential equations in a rotating, stratified spherical domain undergoing thermally-driven turbulent convection in its outer 30% in radius. The first full-fledged attempts to do so go back some thirty years, to the simulations of Gilman and Miller (1981); Gilman (1983); Glatzmaier (1985a,b). These epoch-making simulations did produce cyclic dynamo action and latitudinal migratory patterns suggestive of the dynamo waves of mean-field theory. However, the associated differential rotation profile turned out non-solar, as did the magnetic field's spatio-temporal evolution. In retrospect this is perhaps not surprising, as limitations in computing resources forced these simulations to be carried out in a parameter regime far removed from solar interior conditions. Later simulations taking advantage of massively parallel computing architectures did manage to produce tolerably solar-like mean internal differential rotation (see, e.g., Miesch and Toomre, 2009, and references therein), as well as copious small-scale magnetic field, but failed to generate a spatially well-organized large-scale magnetic component (see Brun et al., 2004). Towards this end the inclusion of a stably stratified fluid layer below the convecting layers is now believed to be advantageous (although not strictly necessary, see Brown et al., 2010) as it allows the development of a tachocline-like shear layer where magnetic field produced within the convection zone can accumulate in response to turbulent pumping from above, and be further amplified by the rotational shear (see Browning et al., 2006, also Tobias et al., 2001, 2008, and references therein, for related behavior in local Cartesian simulations).
Some of these simulations are now beginning to yield regular polarity reversals of the large-scale magnetic components. Figures 20 and 21 present some sample results taken from Ghizaru et al. (2010), see also Brown et al. (2009) and Käpylä et al. (2010). Figure 20 is an animation in Mollweide latitude-longitude projection of the toroidal magnetic component 0.02R⊙ below the nominal interface between the convecting layers and underlying stable layers in one of these simulations. This toroidal component reaches some 2.5 kG here, and shows a very clear global antisymmetry about the equator, despite strong spatiotemporal fluctuations produced by convective undershoot. The cyclic variation of this large-scale field is quite apparent on the animation, with polarity reversals approximately synchronous across hemispheres.
mpg-Movie (31113.9941406 KB) Still from a movie showing Latitude-Longitude Mollweide projection of the toroidal magnetic component at depth r/R = 0.695 in the 3D MHD simulation of Ghizaru et al. (2010). This large-scale axisymmetric component shows a well-defined overall antisymmetry about the equatorial plane, and undergoes polarity reversals approximately every 30 yr. The animation spans a little over three half-cycles, including three polarity reversals. Time is given in solar days, with 1 s.d. = 30 d. (For video see appendix)
Figure 21A shows, for the same simulation as in Figure 20, a time-latitude diagram of the zonally-averaged toroidal component, now constructed at a depth corresponding to the core-envelope interface in the model. This is again assumed to be the simulation's equivalent to the sunspot butterfly diagram. This simulation was run for 255 yr, in the course of which eight polarity reversals have taken place, with a mean (half-)period of about 30 yr. Note the tendency for equatorward migration of the toroidal flux structures, and the good long-term synchrony between the Northern and Southern hemispheres, persisting despite significant fluctuations in the amplitude and duration of cycles in each hemisphere. Figure 21B shows the corresponding time-evolution of the zonally-averaged radial surface magnetic component, again in a time-latitude diagram. The surface field is characterized by a well-defined dipole moment aligned with the rotational axis, with transport of surface fields taking place from lower latitudes and (presumably) contributing to the reversal of the dipole moment. Compare these time-latitude diagrams to the sunspot butterfly diagram of Figure 3 and synoptic magnetogram of Figure 4, and reflect upon the similarities and differences.
(A) Time-latitude diagram of the zonally-averaged toroidal magnetic component at the core-envelope interface (r/R = 0.718) and (B) corresponding time-latitude diagram of the surface radial field, in the 3D MHD simulations presented in Ghizaru et al. (2010). Note the regular polarity reversals, the weak but clear tendency towards equatorial migration of the deep toroidal magnetic component, and the good coupling between the two hemispheres despite marked fluctuations in successive cycles. The color scale codes the magnetic field strength, in Tesla.
Although much remains to be investigated regarding the mode of dynamo action in these simulations, some encouraging links to mean-field theory (Section 3.2.1) do emerge. The fact that a positive toroidal component breeds here a positive dipole moment is what one would expect from a turbulent α-effect (more precisely, the αφφ tensor component) positive in the Northern hemisphere. A posteriori calculation of the mean electromotive force \(\mathcal{E} = \left\langle {u' \times B'} \right\rangle\) does reveal a clear hemispheric pattern, with εφ having the same sign in both hemispheres, but changing sign from one cycle to the next, again consistent with the idea that the turbulent α-effect is the primary source of the large-scale poloidal component. Likewise, having a well-defined axisymmetric dipolar component being sheared by an axisymmetric differential rotation is consistent with the buildup of a large-scale toroidal component antisymmetric about the equatorial plane.
On the other hand, calculation of the r and θ-components of the mean electromotive force indicates that the latter contributes to the production of the toroidal field at a level comparable to shearing of the poloidal component by differential rotation, suggestive of what, in mean-field electrodynamics parlance, is known as an α2Ω dynamo. Calculation of the α-tensor components also reveals that the latter do not undergo significant variations between maximal and minimal phases of the cycle, suggesting that α-quenching is not the primary amplitude-limiting mechanism in this specific simulation run. Although it would be premature to claim that these simulations vindicate the predictions of mean-field theory, to the level at which they have been analyzed thus far, they do not appear to present outstanding departures from the mean-field Weltanschauung.
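For reference, the zonally averaged azimuthal electromotive force quoted above can be assembled from the fluctuating velocity and magnetic field components as Eφ = ⟨u′r B′θ − u′θ B′r⟩φ. The following numpy sketch illustrates the operation on hypothetical arrays standing in for simulation output (array names, shapes, and random data are placeholders, not the Ghizaru et al. (2010) data):

```python
import numpy as np

# Sketch: forming the zonally averaged azimuthal EMF,
# E_phi = < u'_r B'_theta - u'_theta B'_r >_phi, from fluctuating fields.
rng = np.random.default_rng(0)
shape = (32, 64, 128)                      # (n_r, n_theta, n_phi), illustrative only
u = {c: rng.standard_normal(shape) for c in ("r", "theta", "phi")}
B = {c: rng.standard_normal(shape) for c in ("r", "theta", "phi")}

def fluct(f):
    """Deviation from the zonal (phi) average."""
    return f - f.mean(axis=-1, keepdims=True)

E_phi = np.mean(fluct(u["r"]) * fluct(B["theta"])
                - fluct(u["theta"]) * fluct(B["r"]), axis=-1)
print(E_phi.shape)                          # (32, 64): a meridional-plane map of E_phi
```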
Amplitude Fluctuations, Multiperiodicity, and Grand Minima
Since the basic physical mechanism(s) underlying the operation of the solar cycle are not yet agreed upon, attempting to understand the origin of the observed fluctuations of the solar cycle may appear to be a futile undertaking. Nonetheless, work along these lines continues at full steam in part because of the high stakes involved; varying levels of solar activity may contribute significantly to climate change (see Haigh, 2007, and references therein). Moreover, the frequencies of all eruptive phenomena relevant to space weather are strongly modulated by the amplitude of the solar cycle. Finally, certain aspects of the observed fluctuations may actually hold important clues as to the physical nature of the dynamo process.
The observational evidence: An overview
Hathaway (2010) offers a comprehensive review of the observational phenomenology of the solar cycle, as viewed through the sunspot number and other activity indicators; what follows is restricted to features having the most direct bearing on dynamo modeling. Panel A of Figure 22 shows a time series of the so-called Zürich sunspot numbers, starting in the mid-eighteenth century and extending to the present. The 11-year sunspot cycle is the most obvious feature of this time series, although the period of the underlying magnetic cycle is in fact twice that (sunspot counts being insensitive to magnetic polarity). Cycle-to-cycle variations in sunspot counts are usually taken to indicate a corresponding variation in the amplitude of the Sun's dynamo-generated internal magnetic field. As reasonable as this may sound, it remains a working assumption; at this writing, the process via which the dynamo-generated mean magnetic field produces sunspot-forming concentrated flux ropes is not understood. One should certainly not take for granted that a difference by a factor of two in sunspot count indicates a corresponding variation by a factor of two in the strength of the internal magnetic field.
Fluctuations of the solar cycle, as measured by the sunspot number. Panel A is a time series of the Zürich monthly sunspot number (with a 13-month running mean in red). Cycles are numbered after the convention introduced in the mid-nineteenth century by Rudolf Wolf. Note how cycles vary significantly in both amplitude and duration. Panel B is a portion of the 10Be time series spanning the Maunder Minimum (data courtesy of J. Beer). Panel C shows a time series of the yearly group sunspot number of Hoyt and Schatten (1998) (see also Hathaway et al., 2002) over the same time interval, together with the yearly Zürich sunspot number (purple) and auroral counts (green). Panels D and E illustrate the pronounced anticorrelation between cycle amplitude and rise time (Waldmeier Rule), and alternation of higher-than-average and lower-than-average cycle amplitudes (Gnevyshev-Ohl Rule, sometimes also referred to as the "odd-even effect").
At any rate, the notion of a nicely regular 11/22-year cycle does not hold long upon even cursory scrutiny, as the amplitude of successive cycles is clearly not constant, and their overall shape often differs significantly from one cycle to another (cf. cycles 14 and 15 in Panel A of Figure 22). Closer examination of Figure 22 also reveals that even the cycle's duration is not uniform, spanning in fact a range going from 9 yr (cycle 2) to nearly 14 yr (cycles 4 and 23). These amplitude and duration variations are not a sunspot-specific artefact; similar variations are in fact observed in other activity proxies with extended records, most notably the 10.7 cm radio flux (Tapping, 1987), polar faculae counts (Sheeley Jr, 1991), and the cosmogenic radioisotopes 14C and 10Be (Beer et al., 1991; Beer, 2000).
Equally striking is the pronounced dearth of sunspots in the interval 1645 – 1715 (see Panel C of Figure 22); this is not due to lack of observational data (see Ribes and Nesme-Ribes, 1993; Hoyt and Schatten, 1996), but represents instead a phase of strongly suppressed activity now known as the Maunder Minimum (Eddy, 1976, 1983, and references therein). Evidence from cosmogenic radioisotopes indicates that similar periods of suppressed activity have taken place in ca. 1282 – 1342 (Wolf Minimum) and ca. 1416 – 1534 (Spörer Minimum), as well as a period of enhanced activity in ca. 1100 – 1250 (the Medieval Maximum), and have recurred irregularly over the more distant past (Usoskin, 2008).
The various incarnations of the sunspot number time series (monthly SSN, 13-month smoothed SSN, yearly SSN, etc.) are arguably the most intensely studied time series in astrophysics, as measured by the number of published research paper pages per data point. Various correlations and statistical trends have been sought in these datasets. Panels D and E of Figure 22 present two such classical trends. The "Waldmeier Rule", illustrated in Panel D of Figure 22, refers to a statistically significant anticorrelation between cycle amplitude and rise time (linear correlation coefficient r = -0.68). A similar anticorrelation exists between cycle amplitude and duration, but is statistically more dubious (r = -0.37). The "Gnevyshev-Ohl" rule, illustrated in Panel E of Figure 22, refers to a marked tendency for odd (even) numbered cycles to have amplitudes above (below) their running mean (blue line in Panel E of Figure 22), a pattern that seems to have held true without interruption between cycles 9 and 21 (see also Mursula et al., 2001). For more on these empirical sunspot "Rules", see Hathaway (2010).
A number of long-timescale modulations have also been extracted from these data, most notably the so-called Gleissberg cycle (period = 88 yr), but the length of the sunspot number record is insufficient to firmly establish the reality of these periodicities. One must bring into the picture additional solar cycle proxies, primarily cosmogenic radioisotopes, but difficulties in establishing absolute amplitudes of production rates introduce additional uncertainties into what is already a complex endeavour (for more on these matters, see Beer, 2000; Usoskin and Mursula, 2003). Likewise, the search for chaotic modulation in the sunspot number time series has produced a massive literature (see, e.g., Feynman and Gabriel, 1990; Mundt et al., 1991; Carbonell et al., 1994; Rozelot, 1995, and references therein), but without really yielding firm, statistically convincing conclusions, again due to the insufficient lengths of the datasets.
The aim in this section is to examine in some detail the types of fluctuations that can be produced in the various dynamo models discussed in the preceding section (Footnote 9). After going briefly over the potential consequences of fossil fields (Section 5.2), dynamical nonlinearities are first considered (Section 5.3), followed by time-delay effects (Section 5.4). We then turn to stochastic forcing (Section 5.5), which leads naturally to the issue of intermittency (Section 5.6).
Fossil fields and the 22-yr cycle
The presence of a large-scale, quasi-steady magnetic field of fossil origin in the solar interior has long been recognized as a possible explanation of the Gnevyshev-Ohl rule (Panel E of Figure 22). The basic idea is quite simple: The slowly-decaying, deep fossil field being effectively steady on solar cycle timescales, its superposition with the 11-yr polarity reversal of the overlying dynamo-generated field will lead to a 22-yr modulation, whereby the cycle is stronger when the fossil and dynamo field have the same polarity, and weaker when these polarities are opposite (see, e.g., Boyer and Levy, 1984; Boruta, 1996). The magnitude of the effect is directly related to the strength of the fossil field, versus that of the dynamo-generated magnetic field. All of this, however, presumes that flows and dynamical effects within the tachocline still allow "coupling" between the deep fossil field below, and the cyclic dynamo-generated field above. However, models of the solar tachocline taking into account its interaction with an underlying fossil field (see, e.g., Kitchatinov and Rüdiger, 2006) suggest that it is unlikely for this coupling to take place in the simple manner implicitly assumed in dynamo models, which typically incorporate the effect of fossil fields via the lower boundary condition (see also Dikpati et al., 2005).
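The resulting even-odd modulation is easily illustrated with arbitrary numbers (a sketch; the field values below are not meant to represent actual solar or model amplitudes):

```python
# Sketch: superposing a weak steady fossil field on an alternating dynamo field
# modulates successive cycle amplitudes, as in the fossil-field explanation of
# the Gnevyshev-Ohl rule. Field values are arbitrary illustrative numbers.
B_dynamo = 1.0      # peak dynamo-generated toroidal field (sign flips each cycle)
B_fossil = 0.1      # steady fossil field, same units

for n in range(1, 5):                       # four successive cycles
    polarity = 1 if n % 2 else -1
    amplitude = abs(polarity * B_dynamo + B_fossil)
    print(f"cycle {n}: |B| = {amplitude:.1f}")
# In this illustration the odd cycles (same polarity as the fossil field) come out
# stronger (1.1) than the even cycles (0.9), a strict alternation whenever present.
```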
One strong prediction is associated with this explanation of the Gnevyshev-Ohl rule: While the pattern may become occasionally lost due to large amplitude fluctuations of other origin, whenever it is present even-numbered cycles should always be of lower amplitudes and odd-numbered cycles of higher amplitude (under Wolf's cycle numbering convention). Evidently, this prediction can be tested observationally, provided one can establish a measure of sunspot cycle amplitude that is truly characteristic of the strength of the underlying dynamo magnetic field. Taken at face value, the analysis of Mursula et al. (2001), based on cycle-integrated group sunspot numbers, indicates that the odd/even pattern has reversed between the time periods 1700 – 1800 and 1850 – 1990 (see their Figure 1). This would then rule out the fossil field hypothesis unless, as argued by some authors (see Usoskin et al., 2009a, and references therein), a sunspot cycle has been "lost" around 1790, at the onset of the so-called Dalton minimum.
Dynamical nonlinearity
Backreaction on large-scale flows
The dynamo-generated magnetic field will, in general, produce a Lorentz force that will tend to oppose the driving fluid motions. This is a basic physical effect that should be included in any dynamo model. It is not at all trivial to do so, however, since in a turbulent environment both the fluctuating and mean components of the magnetic field can affect both the large-scale flow components, as well as the small-scale turbulent flow providing the Reynolds stresses powering the large-scale flows. One can thus distinguish a number of (related) amplitude-limiting mechanisms:
Lorentz force associated with the mean magnetic field directly affecting large-scale flow (sometimes called the "Malkus-Proctor effect", after the groundbreaking numerical investigations of Malkus and Proctor, 1975).
Large-scale magnetic field indirectly affecting large-scale flow via effects on small-scale turbulence and associated Reynolds stresses (sometimes called "Λ-quenching", see, e.g., Kitchatinov and Rüdiger, 1993).
Maxwell stresses associated with small-scale magnetic field directly affecting flows at all scales.
The α-quenching formula introduced in Section 4.2.1 is a particularly simple — some would say simplistic — way to model the backreaction of the magnetic field on the turbulent fluid motions producing the α-effect (Footnote 10). In the context of solar cycle models, one could also expect the Lorentz force to reduce the amplitude of differential rotation until the effective dynamo number falls back to its critical value, at which point the dynamo again saturates (Footnote 11). The third class of quenching mechanism listed above has not yet been investigated in detail, but numerical simulations of MHD turbulence indicate that the effects of the small-scale turbulent magnetic field on the α-effect can be profound (see Pouquet et al., 1976; Durney et al., 1993; Brandenburg, 2009; Cattaneo and Hughes, 2009).
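For orientation, the commonly used algebraic quenching ansatz takes the form α(B) = α0/(1 + (B/Beq)2), suppressing the α-effect once the mean field approaches the equipartition strength Beq; the sketch below assumes this standard form rather than reproducing the document's Equation (23) verbatim:

```python
# Sketch: the standard algebraic alpha-quenching ansatz (assumed form, for
# illustration): the alpha-effect is suppressed as the mean field approaches
# and exceeds the equipartition strength B_eq.
def alpha_quenched(B, alpha0=1.0, B_eq=1.0):
    return alpha0 / (1.0 + (B / B_eq) ** 2)

for B in (0.1, 1.0, 3.0):                       # mean field in units of B_eq
    print(f"B/B_eq = {B:>3}: alpha/alpha0 = {alpha_quenched(B):.2f}")
# Weak fields leave alpha essentially unquenched; at B ~ 3 B_eq it is cut by ~10x.
```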
Introducing magnetic backreaction on differential rotation is a tricky business, because one must then also, in principle, provide a model for the Reynolds stresses powering the large-scale flows in the solar convective envelope (see, e.g., Kitchatinov and Rüdiger, 1993), as well as a procedure for computing magnetic backreaction on these. This rapidly leads into the unyielding realm of MHD turbulence, although algebraic "Λ-quenching" formulae akin to α-quenching have been proposed based on specific turbulence models (see, e.g., Kitchatinov et al., 1994). Alternately, one can add an ad hoc source term to the right hand side of Equation (2), designed in such a way that in the absence of the magnetic field, the desired solar-like large-scale flow is obtained. As a variation on this theme, one can simply divide the large-scale flow into two components, the first (U) corresponding to some prescribed, steady profile, and the second (U′) to a time-dependent flow field driven by the Lorentz force (see, e.g., Tobias, 1997; Moss and Brooke, 2000; Thelen, 2000b):
$$u = U\left( x \right) + U'\left( {x,t,\left\langle B \right\rangle } \right),$$
with the (non-dimensional) governing equation for U′ including only the Lorentz force and a viscous dissipation term on its right hand side. If u amounts only to differential rotation, then U′ must obey a (nondimensional) differential equation of the form
$$\frac{{\partial U'}} {{\partial t}} = \frac{\Lambda } {{4\pi \rho }}\left( {\nabla \times \left\langle B \right\rangle } \right) \times \left\langle B \right\rangle + P_m \nabla ^2 U',$$
where time has been scaled according to the magnetic diffusion time \(\tau = R_ \odot ^2 /\eta _T\) as before. Two dimensionless parameters appear in Equation (37). The first (Λ) is a numerical parameter measuring the influence of the Lorentz force, which can be set to unity without loss of generality (cf. Tobias, 1997; Phillips et al., 2002). The second, Pm = ν/η, is the magnetic Prandtl number. It measures the relative importance of viscous and Ohmic dissipation. When Pm ≪ 1, large velocity amplitudes in U′ can be produced by the dynamo-generated mean magnetic field. This effectively introduces an additional, long timescale in the model, associated with the evolution of the magnetically-driven flow; the smaller Pm, the longer that timescale (cf. Figures 4 and 10 in Brooke et al., 1998).
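The origin of this long timescale can be made explicit with a zero-dimensional caricature of Equation (37): keep a single spatial mode of U′ (so that the Laplacian reduces to a factor of −k²) and drive it with a forcing that is quadratic in the oscillating mean field, so that it has a non-zero time average. This is an illustrative toy calculation written for this discussion, not a model drawn from any of the works cited above; the mode wavenumber, forcing, and integration scheme are arbitrary choices.

```python
# Zero-dimensional caricature of Equation (37) (illustrative assumption only):
# a single mode of U' obeying  dU'/dt = F cos^2(omega t) - Pm k^2 U'.
# The cos^2 forcing mimics the quadratic dependence of the Lorentz force on an
# oscillating mean field; its non-zero average drives a mean flow perturbation
# of order F/(2 Pm k^2), reached on the relaxation timescale 1/(Pm k^2).
import numpy as np

def driven_mode(Pm, k=np.pi, omega=2.0 * np.pi, F=1.0, dt=1e-3, t_end=60.0):
    t = np.arange(0.0, t_end, dt)
    U = np.zeros_like(t)
    for i in range(1, t.size):                      # explicit Euler integration
        forcing = F * np.cos(omega * t[i - 1]) ** 2
        U[i] = U[i - 1] + dt * (forcing - Pm * k ** 2 * U[i - 1])
    return t, U

for Pm in (1.0, 0.01):
    t, U = driven_mode(Pm)
    tau = 1.0 / (Pm * np.pi ** 2)
    print(f"Pm = {Pm:5.2f}   relaxation time ~ {tau:6.2f}   "
          f"late-time mean U' ~ {U[t > 50.0].mean():.3f}")
```

With these arbitrary numbers, decreasing Pm by two orders of magnitude lengthens the relaxation time and raises the mean flow perturbation by roughly the same factor, which is the essence of the low-Pm behavior invoked above.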
The majority of studies published thus far and using this approach have only considered the nonlinear magnetic backreaction on differential rotation. This has been shown to lead to a variety of behaviors, including amplitude and parity modulation, periodic or aperiodic, as well as intermittency (more on the latter in Section 5.6).
Figure 23 shows two butterfly diagrams produced by the nonlinear mean-field interface model of Tobias (1997) (see also Beer et al., 1998; Bushby, 2006). The model is defined on a Cartesian slab with a reference differential rotation varying only with depth, and includes backreaction on the differential rotation according to the procedure described above. The model exhibits strong, quasi-periodic modulation of the basic cycle, leading to epochs of strongly reduced amplitude, with the modulation period controlled by the magnetic Prandtl number. Note how the dynamo can emerge from such epochs with strong hemispheric asymmetries (top panel), or with a different parity (bottom panel).
Amplitude and parity modulation in a 1D slab dynamo model including magnetic backreaction on the differential rotation. These are the usual time-latitude diagrams for the toroidal magnetic field, now covering both solar hemispheres, and exemplify the two basic types of modulation arising in nonlinear dynamo models with backreaction on the differential rotation (see text; figure kindly provided by S.M. Tobias).
It is not clear, at this writing, to what degree these behaviors are truly generic, as opposed to model-dependent. The analysis of Knobloch et al. (1998) suggests that generic behaviors do exist. On the other hand, a number of counterexamples have been published, showing that even in a qualitative sense, the nonlinear behavior can be strongly dependent on what one would have hoped to be minor modelling details (see, e.g., Moss and Brooke, 2000; Phillips et al., 2002).
The differential rotation can also be suppressed indirectly by magnetic backreaction on the small-scale turbulent flows that produce the Reynolds stresses driving the large-scale mean flow. Inclusion of this so-called "Λ-quenching" in mean-field dynamo models, alone or in conjunction with other amplitude-limiting nonlinearities, has also been shown to lead to a variety of periodic and aperiodic amplitude modulations, provided the magnetic Prandtl number is small (see Küker et al., 1999; Pipin, 1999; Rempel, 2006b). Models of this type stand or fall with the turbulence model used to compute the various mean-field coefficients, and it is not yet clear which aspects of the results are truly generic to Λ-quenching. Gizon and Rempel (2008) do show that information is present in subsurface measurements of the time-varying component of large-scale flows, which can be used to constrain the Λ-effect and its cycle-related variations.
To date, dynamical backreaction on large-scale flows has only been studied in detail in the context of dynamo models based on mean-field electrodynamics. Equivalent studies must be carried out in the other classes of solar cycle models discussed in Section 4. In particular, it is essential to model the effect of the Lorentz force on meridional circulation in models based on the Babcock-Leighton mechanism and/or hydrodynamical instabilities in the tachocline, since in these models the circulation is the primary determinant of the cycle period and enforces equatorward propagation in the butterfly diagram.
Dynamical α-quenching
A number of authors have attempted to bypass the shortcomings of α-quenching by introducing into dynamo models an additional, physically-inspired partial differential equation for the α-coefficient itself (e.g., Kleeorin et al., 1995; Blackman and Brandenburg, 2002, and references therein). The basic physical idea is that magnetic helicity must be conserved in the high-Rm regime, so that production of helicity in the mean field implies a corresponding production of helicity of opposite sign at the scales of the fluctuating components of the flow and field, which ends up acting in such a way as to reduce the α-effect. Most investigations published to date have made use of severely truncated models, and/or models in one spatial dimension (see, e.g., Weiss et al., 1984; Schmalz and Stix, 1991; Jennings and Weiss, 1991; Roald and Thomas, 1997; Covas et al., 1997; Blackman and Brandenburg, 2002), so that the model results can only be compared to solar data in some general qualitative sense. Rich dynamical behavior definitely arises in such models, including multiperiodicity, amplitude modulation, and chaos, and some of these behaviors do carry over into a two-dimensional spherical axisymmetric mean-field dynamo model (see Covas et al., 1998).
Time-delay dynamics
The introduction of ad hoc time-delays in dynamo models has long been known to lead to pronounced cycle amplitude fluctuations (see, e.g., Yoshimura, 1978). Models including nonlinear backreaction on differential rotation can also exhibit what essentially amounts to time-delay dynamics in the low Prandtl number regime, with the large-scale flow perturbations lagging behind the Lorentz force because of inertial effects. Finally, time-delay effects can arise in dynamo models where the source regions for the poloidal and toroidal magnetic components are spatially segregated. It is to this latter type of time delay that we now turn, in the context of dynamo models based on the Babcock-Leighton mechanism.
Time-delays in Babcock-Leighton models
It was already noted that in solar cycle models based on the Babcock-Leighton mechanism of poloidal field generation, meridional circulation effectively sets — and even regulates — the cycle period (cf. Section 4.8.2; see also Dikpati and Charbonneau, 1999; Charbonneau and Dikpati, 2000; Muñoz-Jaramillo et al., 2009). In doing so, it also introduces a long time delay in the dynamo mechanism, "long" in the sense of being comparable to the cycle period. This delay originates with the time required for circulation to advect the surface poloidal field down to the core-envelope interface, where the toroidal component is produced (A→C in Figure 16). In contrast, the production of poloidal field from the deep-seated toroidal field (C→D), is a "fast" process, growth rates and buoyant rise times for sunspot-forming toroidal flux ropes being of the order of a few months (see Moreno-Insertis, 1986; Fan et al., 1993; Caligari et al., 1995, and references therein). The first, long time delay turns out to have important dynamical consequences.
Reduction to an iterative map
The long time delay inherent in B-L models of the solar cycle allows a formulation of cycle-to-cycle amplitude variations in terms of a simple one-dimensional iterative map (Durney, 2000; Charbonneau, 2001). Working in the kinematic regime, neglecting resistive dissipation, and in view of the conveyor belt argument of Section 4.8, the toroidal field strength Tn+1 at cycle n + 1 is assumed to be linearly proportional to the poloidal field strength Pn of cycle n, i.e.,
$$T_{n + 1} = aP_n .$$
Now, because flux eruption is a fast process, the strength of the poloidal field at cycle n + 1 is (nonlinearly) proportional to the toroidal field strength of the current cycle:
$$P_{n + 1} = f\left( {T_{n + 1} } \right)T_{n + 1} .$$
Here the "Babcock-Leighton" function f(Tn+1) measures the efficiency of surface poloidal field production from the deep-seated toroidal field. Substitution of Equation (38) into Equation (39) leads immediately to a one-dimensional iterative map,
$$p_{n + 1} = \alpha f\left( {p_n } \right)p_n ,$$
where the pn's are normalized amplitudes, and the normalization constants as well as the constant a in Equation (38) have been absorbed into the definition of the map's parameter α, here operationally equivalent to a dynamo number (see Charbonneau, 2001). We consider here the following nonlinear function,
$$f\left( p \right) = \frac{1} {2}\left[ {1 + erf\left( {\frac{{p - p_1 }} {{w_1 }}} \right)} \right]\left[ {1 - erf\left( {\frac{{p - p_2 }} {{w_2 }}} \right)} \right],$$
with p1 = 0.6, w1 = 0.2, p2 = 1.0, and w2 = 0.8. This captures an essential feature of the B-L mechanism, namely the fact that it can only operate in a finite range of toroidal field strength.
A bifurcation diagram for the resulting iterative map is presented in Panel A of Figure 24. For a given value of the map parameter α, the diagram gives the locus of the amplitude iterate pn for successive n values. The "critical dynamo number", above which dynamo action becomes possible, is here α = 0.851 (pn = 0 for smaller α values). For 0.851 ≤ α ≤ 1.283, the iterate is stable at some finite value of pn, which increases gradually with α. This corresponds to a constant amplitude cycle. As α reaches 1.283, period doubling occurs, with the iterate pn alternating between high and low values (e.g., pn = 0.93 and pn = 1.41 at α = 1.4). Further period doubling occurs at α = 1.488, then at α = 1.531, then again at α = 1.541, and ever faster until a point is reached beyond which the amplitude iterate seems to vary without any obvious pattern (although within a bounded range); this is in fact a chaotic regime.
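The dynamics encoded in Equations (40, 41) are simple enough to explore directly. The short sketch below (in Python; an illustration written for this discussion, not code from any of the cited papers) iterates the map for a few values of α, starting from an amplitude inside the attractor's finite basin (cf. Panel A of Figure 24); it recovers the behavior just described: collapse to the trivial solution below the critical value, a stable fixed point, the 2-cycle with iterates near 0.93 and 1.41 at α = 1.4, and a spread of irregularly varying, bounded iterates at larger α.

```python
# Minimal sketch of the iterative map of Equations (40, 41); illustrative only.
from math import erf

p1, w1, p2, w2 = 0.6, 0.2, 1.0, 0.8        # parameters quoted in the text

def f(p):
    """Babcock-Leighton efficiency function, Equation (41)."""
    return 0.5 * (1.0 + erf((p - p1) / w1)) * (1.0 - erf((p - p2) / w2))

def iterate_map(alpha, p0=1.0, n_transient=500, n_keep=200):
    """Iterate p_{n+1} = alpha f(p_n) p_n and return the late-time iterates."""
    p = p0
    for _ in range(n_transient):            # discard the transient
        p = alpha * f(p) * p
    kept = []
    for _ in range(n_keep):                 # sample the attractor
        p = alpha * f(p) * p
        kept.append(p)
    return kept

for alpha in (0.80, 1.00, 1.40, 1.55):
    pts = iterate_map(alpha)
    print(f"alpha = {alpha:4.2f}   min = {min(pts):.3f}   max = {max(pts):.3f}")
```

Scanning α finely and plotting the retained iterates against α reproduces the period-doubling route to chaos of Panel A of Figure 24.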
As in any other dynamo model where the source regions for the poloidal and toroidal magnetic field components are spatially segregated, the type of time delay considered here is unavoidable. The B-L model is just a particularly clear-cut example of such a situation. One is then led to anticipate that the map's rich dynamical behavior should find its counterpart in the original, arguably more realistic spatially-extended, diffusive axisymmetric model that inspired the map formulation. Remarkably, this is indeed the case.
Panel B of Figure 24 shows a bifurcation diagram, conceptually equivalent to that shown in Panel A, but now constructed from a sequence of numerical solutions of the Babcock-Leighton model of Charbonneau et al. (2005), for increasing values of the dynamo number. Time series of magnetic energy were calculated from the numerical solutions, and successive peaks found and plotted for each individual solution. The sequence of period doubling, eventually leading to a chaotic regime, is strikingly similar to the bifurcation diagram constructed from the corresponding iterative map, down to the narrow multiperiodic windows interspersed in the chaotic domain. This demonstrates that time delay effects are a robust feature, and represent a very powerful source of cycle amplitude fluctuation in Babcock-Leighton models, even in the kinematic regime (for further discussion see Charbonneau, 2001; Charbonneau et al., 2005; Wilmot-Smith et al., 2006).
Two bifurcation diagrams for a kinematic Babcock-Leighton model, where amplitude fluctuations are produced by time-delay feedback. The top diagram is computed using the one-dimensional iterative map given by Equations (40, 41), while the bottom diagram is reconstructed from numerical solutions in spherical geometry, of the type discussed in Section 4.8. The shaded area in Panel A maps the attraction basin for the cyclic solutions, with initial conditions located outside of this basin converging to the trivial solution pn = 0.
Stochastic forcing
Another means of producing amplitude fluctuations in dynamo models is to introduce stochastic forcing in the governing equations. Sources of stochastic "noise" certainly abound in the solar interior; large-scale flows in the convective envelope, such as differential rotation and meridional circulation, are observed to fluctuate, an unavoidable consequence of dynamical forcing by the surrounding, vigorous turbulent flow. Ample observational evidence now exists that a substantial portion of the Sun's surface magnetic flux is continuously being reprocessed on a timescale commensurate with convective motions (see Schrijver et al., 1997; Hagenaar et al., 2003). The culprit is most likely the generation of small-scale magnetic fields by these turbulent fluid motions (see, e.g., Cattaneo, 1999; Cattaneo et al., 2003, and references therein). This amounts to a form of zero-mean "noise" superimposed on the slowly-evolving mean magnetic field. In addition, the azimuthal averaging implicit in all models of the solar cycle considered above will yield dynamo coefficients showing significant deviations about their mean values, as a consequence of the spatiotemporally discrete nature of the physical events (e.g., cyclonic updrafts, sunspot emergences, flux rope destabilizations, etc.) whose collective effects add up to produce a mean azimuthal electromotive force.
The (relative) geometrical and dynamical simplicity of the various types of dynamo models considered earlier severely restricts the manner in which such stochastic effects can be modeled. Perhaps the most straightforward is to let the dynamo number fluctuate randomly in time about some preset mean value. By most statistical estimates, the expected magnitude of these fluctuations is quite large, i.e., many times the mean value (Hoyng, 1988, 1993), a conclusion also supported by numerical simulations (see, e.g., Otmianowska-Mazur et al., 1997; Ossendrijver et al., 2001). One typically also introduces a coherence time during which the dynamo number retains a fixed value. At the end of this time interval, this value is randomly readjusted. Depending on the dynamo model at hand, the coherence time can be physically related to the lifetime of convective eddies (α-effect-based mean-field models), to the decay time of sunspots (Babcock-Leighton models), or to the growth rate of instabilities (hydrodynamical shear or buoyant MHD instability-based models).
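In practice this prescription amounts to generating a piecewise-constant random sequence for the dynamo number. The sketch below is purely illustrative; the function name, distribution, and parameter values are assumptions chosen here to mirror the ±100% fluctuation level and short coherence time used for the solution of Figure 25, and are not code from the underlying model.

```python
# Illustrative sketch: piecewise-constant stochastic fluctuations of a dynamo
# number about a preset mean, with a prescribed coherence time.
import numpy as np

rng = np.random.default_rng(seed=1)

def fluctuating_dynamo_number(t, mean, amplitude, t_coherence):
    """Return C(t) on the time grid t: constant within each coherence interval,
    redrawn uniformly in [mean*(1-amplitude), mean*(1+amplitude)] at the end of
    every interval (amplitude = 1 corresponds to +/-100% fluctuations)."""
    n_intervals = int(np.ceil((t[-1] - t[0]) / t_coherence)) + 1
    draws = mean * (1.0 + amplitude * rng.uniform(-1.0, 1.0, n_intervals))
    idx = ((t - t[0]) / t_coherence).astype(int)
    return draws[idx]

# Example: C_alpha = 0.5 on average, +/-100% fluctuations, coherence time equal
# to 5% of the (unit) cycle period, over two cycles.
t = np.linspace(0.0, 2.0, 2001)
C_alpha = fluctuating_dynamo_number(t, mean=0.5, amplitude=1.0, t_coherence=0.05)
print(f"C_alpha range: {C_alpha.min():.2f} to {C_alpha.max():.2f}")
```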
Figure 25 shows some representative results for an αΩ dynamo solution including meridional circulation and operating in the advection-dominated regime, similar to that of Figure 11, with imposed stochastic fluctuations at the ±100% level in Cα, and a coherence time amounting to 5% of the cycle period in the deterministic parent solution. The red curve is the total magnetic energy in the solution domain, used here as a measure of cycle amplitude and proxy for the sunspot number. The green curve is the absolute value of the N-hemisphere surface polar field strength. Perhaps the most striking feature of these curves is the fact that even with a coherence time much smaller than the cycle period, zero-mean stochastic forcing can induce patterns of amplitude modulation with characteristic timescales spanning many cycles (e.g., at 0.01 ≤ t/τ ≤ 0.11 and 0.49 ≤ t/τ ≤ 0.62 in Figure 25A). This can be traced to the buildup of strong magnetic fields in the low-diffusivity layers underlying the convective envelope.
Effect of stochastic fluctuations in the Cα dynamo number on an advection-dominated αΩ mean-field dynamo solution including meridional circulation (see Figure 11), here with Rm = 2500, CΩ = 5 × 10^5, Cα = 0.5, and Δη = 0.1. The fluctuation amplitude is ΔCα/Cα = 1, and the correlation time of the imposed fluctuations amounts to about 5% of the mean half-cycle period. Panel A shows a portion of the time series of total magnetic energy (red), used here as a proxy for cycle amplitude, and of the surface polar field strength (green), both scaled to their peak value over the full simulation run. Panel B shows a correlation plot of cycle amplitude and duration, both now normalized to their respective means over the simulation interval. Panel C shows a correlation plot of cycle amplitude versus the preceding peak value of the surface polar field.
Stochastic forcing of the dynamo number can also produce a significant spread in cycle period. In the model run used to produce Figure 25, however, the very weak correlation between cycle amplitude and rise time is positive, and hence anti-solar (the Waldmeier rule corresponds to r = -0.68, based on smoothed monthly SSN; cf. Figure 22D), while the positive correlation between rise time and cycle duration (r = +0.27, not shown) is comparable to solar (r = +0.4). It must be kept in mind that these inferences are all predicated on the use of total magnetic energy as a SSN proxy; different choices can lead to varying degrees of correlation.
The effect of noise has been investigated in most detail in the context of classical mean-field models (see Choudhuri, 1992; Hoyng, 1993; Ossendrijver and Hoyng, 1996; Ossendrijver et al., 1996; Mininni and Gómez, 2002, 2004; Moss et al., 2008). A particularly interesting consequence of random variations of the dynamo number, in mean-field models at or very close to criticality, is the coupling of the cycle's duration and amplitude (Hoyng, 1993; Ossendrijver and Hoyng, 1996; Ossendrijver et al., 1996), leading to a pronounced anticorrelation between these two quantities that is reminiscent of the Waldmeier Rule (cf. Panel D of Figure 22), and hard to produce by purely nonlinear effects (cf. Ossendrijver and Hoyng, 1996). However, this behavior does not carry over to the supercritical regime, so it is not clear whether this can indeed be accepted as a robust explanation of the observed amplitude-duration anticorrelation. In the supercritical regime, α-quenched mean-field models are less sensitive to noise (Choudhuri, 1992), unless of course they happen to operate close to a bifurcation point, in which case large amplitude and/or parity fluctuations can be produced (see, e.g., Moss et al., 1992).
In the context of Babcock-Leighton models, introducing stochastic forcing of the dynamo number leads to amplitude fluctuation patterns qualitatively similar to those plotted in Figure 25: long timescale amplitude modulation, spread in cycle period, (non-solar) positive correlations between cycle amplitude and rise time, and (solar-like) positive correlation between duration and rise time, with the interesting addition that in some model formulations cycle-to-cycle amplitude variation patterns reminiscent of the Gnevyshev-Ohl Rule are also produced (see Charbonneau et al., 2007). Charbonneau and Dikpati (2000) have presented a series of dynamo simulations including stochastic fluctuations in the dynamo number as well as in the meridional circulation. Working in the supercritical regime with a form of algebraic α-quenching as the sole amplitude-limiting nonlinearity, they succeed in producing a solar-like weak anticorrelation between cycle amplitude and duration for fluctuations in the dynamo number in excess of 200% of its mean value, with a coherence time of one month. However, these encouraging results did not prove very robust across the model's parameter space.
A different approach is followed by Passos and Lopes (2008) and Lopes and Passos (2009), who used a low-order dynamo model resulting from truncation of the 2D axisymmetric mean-field dynamo equations, with flux loss due to magnetic buoyancy as the amplitude-limiting nonlinearity. Fitting equilibrium solutions to their low-order model to the smoothed SSN time series, one magnetic cycle at a time (Figure 26A), they can plausibly interpret variations in their fitting parameters as being due to systematic, persistent variations of the meridional flow speed on decadal timescales (Figure 26B). They then input these variations in the kinematic axisymmetric Babcock-Leighton model of Chatterjee et al. (2004), conceptually similar to that described in Section 4.8 but replacing the nonlinearity on the poloidal source term by a threshold function for magnetic flux loss through magnetic buoyancy. The resulting SSN-proxy time series reconstructed in this manner shows some remarkable similarities to the true SSN time series, including an epoch of strongly reduced cycle amplitude in the opening decades of the nineteenth century, and secular rise of cycle amplitudes from the mid-nineteenth to the mid-twentieth century (Figure 26C). This suggests that relatively small but persistent changes in the meridional flow, at the ∼ 5 – 30% level, could account for much of the variation in amplitude and duration observed in the solar cycle, and possibly even Grand Minima of activity (see Passos and Lopes, 2009), the topic to which we now turn.
Effect of persistent variations in meridional circulation on the amplitude of the solar cycle, as modeled by Lopes and Passos (2009). Panel A shows the signed square root of the sunspot number (gray), here used as a proxy of the solar internal magnetic field. A smoothed version of this time series (black) is fitted, one magnetic cycle at a time (green), with the equilibrium solution of the truncated dynamo model of Passos and Lopes (2008); assuming that variations in the fitting parameters are due to variations in the meridional flow speed (vp), the coarse time series of vp of panel B (in green) is obtained, scaled to the magnetic cycle 1 value and with error bars from the fitting procedure. Input of this piecewise-constant meridional flow variation (scaled down by a factor of two, in red in panel B) in the 2D Babcock-Leighton dynamo model of Chatterjee et al. (2004) yields the pseudo-SSN time series plotted in Panel C (figure produced from numerical data kindly provided by D. Passos).
Intermittency
The Maunder Minimum and intermittency
The term "intermittency" was originally coined to characterize signals measured in turbulent fluids, but has now come to refer more generally to systems undergoing apparently random, rapid switching from quiescent to bursting behaviors, as measured by the magnitude of some suitable system variable (see, e.g., Platt et al., 1993). Intermittency thus requires at least two distinct dynamical states available to the system, and a means of transiting from one to the other.
In the context of solar cycle models, intermittency refers to the existence of quiescent epochs of strongly suppressed activity randomly interspersed within periods of "normal" cyclic activity. Observationally, the Maunder Minimum is usually taken as the exemplar for such quiescent epochs. It should be noted, however, that a dearth of sunspots does not necessarily mean a halted cycle; as noted earlier, flux ropes with strengths below ∼ 10 kG will not survive their rise through the convective envelope, and the process of flux rope formation from the dynamo-generated mean magnetic field may itself be subjected to a threshold in field strength. The same basic magnetic cycle may well have continued unabated all the way through the Maunder Minimum, but at an amplitude just below one of these thresholds. This idea finds support in the 10Be radioisotope record, which shows a clear and uninterrupted cyclic signal through the Maunder Minimum (see Panels B and C of Figure 22; also Beer et al., 1998). Strictly speaking, thresholding a variable controlled by a single dynamical state subject to amplitude modulation is not intermittency, although the resulting time series for the variable may well look quite intermittent.
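The last point is easily illustrated by a trivial numerical experiment: take a smoothly amplitude-modulated but never interrupted cycle, and retain only the epochs where its amplitude exceeds a fixed threshold. The resulting "observable" looks intermittent even though the underlying signal never stops cycling. The sketch below is purely illustrative; the signal form, modulation period, and threshold are arbitrary choices made for this example.

```python
# Illustrative sketch: thresholding an amplitude-modulated (but never halted)
# cycle produces an apparently intermittent "sunspot" record. All numbers here
# are arbitrary choices made for illustration.
import numpy as np

t = np.linspace(0.0, 300.0, 30001)                         # time in years
envelope = 1.0 + 0.9 * np.sin(2.0 * np.pi * t / 110.0)     # slow amplitude modulation
cycle = envelope * np.abs(np.sin(2.0 * np.pi * t / 22.0))  # 11-yr activity cycle
observable = np.where(cycle > 0.8, cycle, 0.0)             # only "strong enough" fields show up

visible = observable > 0.0
print(f"fraction of time with 'sunspots': {visible.mean():.2f}")
# Long stretches with observable == 0 mimic Grand Minima, even though the
# underlying cycle (and any proxy tracing it directly) continues unabated.
```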
Much effort has already been invested in categorizing intermittency-like behavior observed in solar cycle models in terms of the various types of intermittency known to characterize dynamical systems (see Ossendrijver and Covas, 2003, and references therein). In what follows, we attempt to pin down the physical origin of intermittent behavior in the various types of solar cycle models discussed earlier.
Intermittency from stochastic noise
Intermittency has been shown to occur through stochastic fluctuations of the dynamo number in linear mean-field dynamo models operating at criticality (see, e.g., Hoyng, 1993). Such models also exhibit a solar-like anticorrelation between cycle amplitude and phase. However there is no strong reason to believe that the solar dynamo is running just at criticality, so that it is not clear how good an explanation this is of Maunder-type Grand Minima.
Mininni and Gómez (2004) have presented a stochastically-forced 1D (in latitude) αΩ mean-field model, including algebraic α-quenching as the amplitude-limiting nonlinearity, that exhibits a form of intermittency arising from the interaction of dynamo modes of opposite parity. The solution aperiodically produces episodes of markedly reduced cycle amplitude, often showing strong hemispheric asymmetry. This superficially resembles the behavior associated with the nonlinear amplitude modulation discussed in Section 5.3.1 (compare the top panel in Figure 23 herein to Figure 7 in Mininni and Gómez, 2004). However, here it is the stochastic forcing that occasionally excites the higher-order modes that perturb the normal operation of the otherwise dominant dynamo mode. Moss et al. (2008) and Usoskin et al. (2009a) present more elaborate versions of such models that do reproduce many salient features of observed grand activity minima.
Intermittency from nonlinearities
Another way to trigger intermittency in a dynamo model, deterministically this time, is to let nonlinear dynamical effects, for example a reduction of the differential rotation amplitude, push the effective dynamo number below its critical value; dynamo action then ceases during the subsequent time interval needed to reestablish differential rotation following the diffusive decay of the magnetic field; in the low Pm regime, this time interval can amount to many cycle periods, but Pm must not be too small, otherwise Grand Minima become too rare (see, e.g., Küker et al., 1999). Values of Pm ≈ 0.01 seem to work best. Such intermittency is most readily produced when the dynamo is operating close to criticality. For representative models, see Tobias (1996b, 1997); Brooke et al. (1998); Küker et al. (1999); Brooke et al. (2002).
Intermittency of this type has some attractive properties as a Maunder Minimum scenario. First, the strong hemispheric asymmetry in sunspot distributions in the final decades of the Maunder Minimum (Ribes and Nesme-Ribes, 1993) can occur naturally via parity modulation (see Figure 23 herein). Second, because the same cycle is operating at all times, cyclic activity in indicators other than sunspots (such as radioisotopes, see Beer et al., 1998) is easier to explain; the dynamo is still operating and the solar magnetic field is still undergoing polarity reversal, but simply fails to reach the amplitude threshold above which the sunspot-forming flux ropes can be generated from the mean magnetic field, or survive their buoyant rise through the envelope.
There are also important difficulties with this explanatory scheme. In such models, Grand Minima tend to have similar durations and to recur in periodic or quasi-periodic fashion, while the sunspot and radioisotope records, taken at face value, suggest a pattern far more irregular (Usoskin, 2008). Moreover, the dynamo solutions in the small Pm regime are characterized by large, non-solar angular velocity fluctuations. In such models, solar-like, low-amplitude torsional oscillations do occur, but for Pm ∼ 1. Unfortunately, in this regime the solutions then lack the separation of timescales needed for Maunder-like Grand Minima episodes. One is stuck here with two conflicting requirements, neither of which is easily evaded (but do see Bushby, 2006).
Intermittency has also been observed in strongly supercritical models including α-quenching as the sole amplitude-limiting nonlinearity. Such solutions can enter Grand Minima-like epochs of reduced activity when the dynamo-generated magnetic field completely quenches the α-effect. The dynamo cycle restarts when the magnetic field resistively decays back to the level where the α-effect becomes operational once again. The physical origin of the "long" timescale governing the length of the "typical" time interval between successive Grand Minima episodes is unclear, and the physical underpinning of this form of intermittency is harder to identify. For representative models exhibiting intermittency of this type, see Tworkowski et al. (1998).
Intermittency from threshold effects
Intermittency can also arise naturally in dynamo models characterized by a lower operating threshold on the magnetic field. These include models where the regeneration of the poloidal field takes place via the MHD instability of toroidal flux tubes (Sections 4.7 and 3.2.3). In such models, the transition from quiescent to active phases requires an external mechanism to push the field strength back above threshold. This can be stochastic noise (see, e.g., Schmitt et al., 1996), or a secondary dynamo process normally overpowered by the "primary" dynamo during active phases (see Ossendrijver, 2000a). Figure 27 shows one representative solution of the latter variety, where intermittency is driven by a weak α-effect-based kinematic dynamo operating in the convective envelope, in conjunction with magnetic flux injection into the underlying region of primary dynamo action by randomly positioned downflows (see Ossendrijver, 2000a, for further details). The top panel shows a sample trace of the toroidal field, and the bottom panel a butterfly diagram constructed near the core-envelope interface in the model.
Intermittency in a dynamo model based on flux tube instabilities (cf. Sections 3.2.3 and 4.7). The top panel shows a trace of the toroidal field, and the bottom panel is a butterfly diagram covering a shorter time span including a quiescent phase at 9.6 ≲ t ≲ 10.2, and a "failed minimum" at t ≃ 11 (figure produced from numerical data kindly provided by M. Ossendrijver).
The model does produce irregularly-spaced quiescent phases, as well as an occasional "failed minimum" (e.g., at t ≃ 11), in qualitative agreement with the solar record. Note here how the onset of a Grand Minimum is preceded by a gradual decrease in the cycle's amplitude, while recovery to normal, cyclic behavior is quite abrupt. The fluctuating behavior of this promising class of dynamo models clearly requires further investigation.
Intermittency from time delays
Dynamo models exhibiting amplitude modulation through time-delay effects are also liable to show intermittency in the presence of stochastic noise. This was demonstrated in Charbonneau (2001) in the context of Babcock-Leighton models, using the iterative map formalism described in Section 5.4.2. The intermittency mechanism hinges on the fact that the map's attractor has a finite basin of attraction (indicated by gray shading in Panel A of Figure 24). Stochastic noise acting simultaneously with the map's dynamics can then knock the solution out of this basin of attraction, which then leads to a collapse onto the trivial solution pn = 0, even if the map parameter remains supercritical. Stochastic noise eventually knocks the solution back into the attractor's basin, which signals the onset of a new active phase (see Charbonneau, 2001, for details).
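A minimal illustration of this on-off mechanism can be built directly from the iterative map of Section 5.4.2: zero-mean additive noise occasionally kicks the iterate out of the attractor's finite basin, after which it lingers near the trivial solution until another sufficiently large kick restores cyclic behavior. The following sketch is illustrative only and is not the published implementation; the map parameter, noise level, threshold, and random seed are arbitrary choices.

```python
# Illustrative sketch: on-off intermittency from zero-mean stochastic noise
# acting on the Babcock-Leighton iterative map of Equations (40, 41). The
# parameter values below are arbitrary choices made for illustration.
import numpy as np
from math import erf

rng = np.random.default_rng(seed=7)
p1, w1, p2, w2 = 0.6, 0.2, 1.0, 0.8

def f(p):
    return 0.5 * (1.0 + erf((p - p1) / w1)) * (1.0 - erf((p - p2) / w2))

alpha, sigma = 1.0, 0.25          # supercritical map parameter, noise amplitude
p, series = 1.0, []
for _ in range(20000):
    p = alpha * f(p) * p + rng.normal(0.0, sigma)
    p = max(p, 0.0)               # cycle amplitudes are non-negative
    series.append(p)

series = np.array(series)
quiet = series < 0.3              # crude flag for quiescent, grand-minimum-like epochs
switches = np.count_nonzero(np.diff(quiet.astype(int)))
print(f"quiescent fraction: {quiet.mean():.2f}   active/quiescent switches: {switches}")
```

Even though α remains supercritical throughout, the run alternates between epochs of finite-amplitude cycling and epochs spent near pn = 0, in the spirit of the mechanism described above.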
A corresponding behavior was subsequently found in a spatially-extended model similar to that described in Section 4.8 (see Charbonneau et al., 2004). Figure 28 shows one such representative solution, in the same format as Figure 27. This is a dynamo solution which, in the absence of noise, operates in the singly-periodic regime. Stochastic noise is added to the vector potential Aêφ in the outermost layers, and the dynamo number is also allowed to fluctuate randomly about a pre-set mean value. The resulting solution exhibits both amplitude fluctuations and intermittency.
Intermittency in a dynamo model based on the Babcock-Leighton mechanism (cf. Sections 3.2.4 and 4.8). The top panel shows a trace of the toroidal field sampled at (r, θ) = (0.7, π/3). The bottom panel is a time-latitude diagram for the toroidal field at the core-envelope interface (numerical data from Charbonneau et al., 2004).
With its strong polar branch, often characteristic of dynamo models with meridional circulation, Figure 28 is not a particularly good fit to the solar butterfly diagram, yet its fluctuating behavior is solar-like in a number of ways, including epochs of alternating higher-than-average and lower-than-average cycle amplitudes (the Gnevyshev-Ohl rule, cf. Panel E of Figure 22), and residual pseudocyclic variations during quiescent phases, as suggested by 10Be data, cf. Panel B of Figure 22. This latter property is due at least in part to meridional circulation, which continues to advect the (decaying) magnetic field after the dynamo has fallen below threshold (see Charbonneau et al., 2004, for further discussion). Note also in Figure 28 how the onset of Grand Minima is quite sudden, while recovery to normal activity is more gradual, which is the opposite behavior to the Grand Minima in Figure 27.
Solar cycle predictions based on dynamo models
The idea that measurements of the solar surface magnetic field in the descending phase of a cycle can be used to forecast the amplitude (and/or timing) of the next cycle goes back many decades, but it is Schatten et al. (1978) who explicitly justified this procedure on the basis of dynamo models, which led to a wide variety of dynamo-inspired precursor schemes (see Hathaway et al., 1999, for a review).
This dynamo logic has recently been pushed further, by using dynamo models to actually advance in time measurements of the solar surface magnetic field in order to produce a cycle forecast. This approach is justified if the surface magnetic field is indeed a significant source of the poloidal field to be sheared into a toroidal component in the upcoming cycle, so that using this approach to forecasting already amounts to a strong assumption on the mode of solar dynamo action. In the stochastically-forced flux-transport αΩ dynamo solution of Figure 25, a strong correlation materializes between the peak polar field at cycle minimum and the amplitude of the subsequent cycle (see Panel C). This occurs because in this model the surface polar field is advected down by the meridional flow to the dynamo source region at the base of the convection zone, and ends up feeding back into the dynamo loop. In other types of dynamo models where this feedback of the surface field does not occur, no such correlation materializes. For more on these matters see Charbonneau and Barlet (2010).
It is particularly instructive to compare and contrast the forecast schemes (and cycle 24 predictions) of Dikpati et al. (2006; see also Dikpati and Gilman, 2006) and Choudhuri et al. (2007; see also Jiang et al., 2007). Both groups use a dynamo model of the Babcock-Leighton variety (Section 4.8), in conjunction with input of solar magnetic field observations in a manner often (and incorrectly) described as "data assimilation". The model parameters are adjusted to reproduce the known amplitudes of previous sunspot cycles, and the model is then integrated forward in time beyond this calibration interval to provide a forecast.
Table 1 details the various modelling components associated with each forecasting scheme. Both are remarkably similar, differing at the level of what one would usually consider modelling details, and both do about as well at reproducing the amplitudes of past cycles over their respective calibration intervals. Yet, they end up producing cycle 24 amplitude forecasts that stand at opposite ends of the very wide range of cycle 24 forecasts produced by other techniques. A cycle 24 with SSN = 80 would place it amongst the weakest of the past century (cycles 14 and 16), while SSN = 180 would rank it on par with the two highest cycle amplitudes on record (cycles 4 and 19; see Figure 22).
Two dynamo-based solar cycle forecasting schemes
Much criticism has been leveled at these dynamo model-based cycle forecasting schemes, and sometimes unfairly so. To dismiss the whole idea on the grounds that the solar dynamo is a chaotic system is likely too extreme a stance, especially since (1) even chaotic systems can be amenable to prediction over a finite temporal window, and (2) input of data (even if not via true data assimilation) can in principle lead to some correction of the system's trajectory in phase space. More relevant (in my opinion) has been the explicit demonstration that (1) very small changes in some unobservable and poorly constrained input parameters to the dynamo model used for the forecast can introduce significant errors already for next-cycle amplitude forecasts (see Bushby and Tobias, 2007, also Yeates et al., 2008); (2) the exact manner in which surface data drives the model can have a huge impact on the forecasting skill (Cameron and Schüssler, 2007). Consequently, the discrepant forecasts of Table 1 indicate mostly that current dynamo model-based predictive schemes still lack robustness. True data assimilation has been carried out using highly simplified dynamo models (Kitiashvili and Kosovichev, 2008), and clearly this must be carried over to more realistic dynamo models.
Finally, one must also keep in mind that other plausible explanations exist for the relatively good precursor potential of the solar surface magnetic field. In particular, Cameron and Schüssler (2008) have argued that the well-known spatiotemporal overlap of cycles in the butterfly diagram (see Figure 3), taken in conjunction with the empirical anticorrelation between cycle amplitude and rise time embodied in the Waldmeier Rule (Figure 22D; also Hathaway, 2010, Section 4.6), could in itself explain the precursor performance of the polar field strength at solar activity minimum. Given the unusually extended minimum phase between cycles 23 and 24, it will be very interesting to revisit all these model results once cycle 24 reaches its peak amplitude.
Open Questions and Current Trends
I close this review with the following discussion of a few open questions that, in my opinion, bear particularly heavily on our understanding (or lack thereof) of the solar cycle.
What is the primary poloidal field regeneration mechanism?
Given the amount of effort having gone into building detailed dynamo models of the solar cycle, it is quite sobering to reflect upon the fact that the physical mechanism responsible for the regeneration of the poloidal component of the solar magnetic field has not yet been identified with confidence. As discussed at some length in Section 4, current models relying on distinct mechanisms all have their strengths and weaknesses, in terms of physical underpinning as well as comparison with observations.
Something akin to the α-effect of mean-field electrodynamics has been measured in a number of local and global numerical simulations including rotation and stratification, so this certainly remains a favored magnetic field generation mechanism. Modelling of the evolution of the Sun's surface magnetic flux has abundantly confirmed that the Babcock-Leighton mechanism is operating on the Sun, in the sense that magnetic flux liberated by the decay of tilted bipolar active regions does accumulate in the polar regions, where it triggers polarity reversal of the poloidal component (see Wang and Sheeley Jr, 1991; Schrijver et al., 2002; Wang et al., 2002; Baumann et al., 2004, and references therein). The key question is whether this is an active component of the dynamo cycle, or a mere side-effect of active region decay. Likewise, the buoyant instability of magnetic flux tubes (Section 4.7) is, in some sense, unavoidable; here again the question is whether or not the associated azimuthal mean electromotive force contributes significantly to dynamo action in the Sun.
What limits the amplitude of the solar magnetic field?
The amplitude of the dynamo-generated magnetic field is almost certainly restricted by the backreaction of Lorentz forces on the driving fluid motions. However, as outlined in Section 5.3.1, this backreaction can occur in many ways.
Helioseismology has revealed only small variations of the differential rotation profile in the course of the solar cycle. The observed variations amount primarily to an extension in depth of the pattern of low-amplitude torsional oscillations long known from surface Doppler measurements (but see also Basu and Antia, 2001; Toomre et al., 2003; Howe, 2009). Taken at face value, these results suggest that quenching of differential rotation is not the primary amplitude-limiting mechanism, unless the dynamo is operating very close to criticality. Once again the hope is that in the not-too-distant future, helioseismology will have mapped accurately enough cycle-induced variations of differential rotation in the convective envelope and tachocline, to settle this issue.
Algebraic quenching of the α-effect (or α-effect-like source terms) is the mechanism most often incorporated in dynamo models. However, this state of affairs usually has much more to do with computational convenience than commitment to a specific physical quenching mechanism. There is little doubt that the α-effect will be affected once the mean magnetic field reaches equipartition; the critical question is whether it becomes quenched long before that, for example by the small-scale component of the magnetic field. The issue hinges on helicity conservation and flux through boundaries, and subtleties of flow-field interaction in MHD turbulence. For recent entry points into this very active area of current research, see Cattaneo and Hughes (1996), Blackman and Field (2000), Brandenburg and Dobler (2001), and Brandenburg (2009).
Flux loss through magnetic buoyancy is the primary reason why most contemporary dynamo models of the solar cycle rely on the rotational shear in the tachocline to achieve toroidal field amplification. If the dynamo were to reside entirely in the convective envelope, then this would be an important, perhaps even dominant, amplitude limiting mechanism (see Schmitt and Schüssler, 1989; Moss et al., 1990). If, on the other hand, toroidal field amplification takes place primarily at or beneath the core-envelope interface, then it is less clear whether or not this mechanism plays a dominant role. In fact, it may even be that rising flux ropes amplify the deep-seated magnetic field, as nicely demonstrated by the numerical calculations of Rempel and Schüssler (2001). Magnetic flux loss through buoyancy can also have a large impact on the cycle period (see, e.g. Kitchatinov et al., 2000), and the model calculations of Lopes and Passos (2009) indicate that combined with fluctuations in the meridional flow speed, very solar-like cycle amplitude variations can be produced. The impact of this amplitude limiting mechanism clearly requires further investigation.
Flux tubes versus diffuse fields
The foregoing discussion has implicitly assumed that the dynamo process produces a mean, large-scale magnetic field that then concentrates itself into the flux ropes that subsequently give rise to sunspots. High-resolution observations of the photospheric magnetic field show that even outside of sunspots, the field is concentrated in flux tubes (see, e.g., Parker, 1982, and references therein), presumably as a consequence of convective collapse of magnetic flux concentrations too weak to block convection and form sunspots. In this picture, which is basically the framework of all dynamo models discussed above, the mean magnetic field is the dominant player in the cycle.
An alternate viewpoint is to assume that the solar magnetic field is a fibril state from beginning to end, throughout the convection zone and tachocline, and that whatever large-scale field there may be in the photosphere is a mere by-product of the decay of sunspots and other flux tube-like small-scale magnetic structures. The challenge is then to devise a dynamo process that operates entirely on flux tubes, rather than on a diffuse mean field. Some exploratory calculations have been made (e.g., DeLuca et al., 1993; Schatten, 2009), but this intriguing question has received far less attention than it deserves.
How constraining is the sunspot butterfly diagram?
The shape of the sunspot butterfly diagram (see Figure 3) continues to play a dominant constraining role in many dynamo models of the solar cycle. Yet caution is in order on this front. Calculations of the stability of toroidal flux ropes stored in the overshoot region immediately beneath the core-envelope interface indicate that instability is much harder to produce at high latitudes, primarily because of the stabilizing effect of the magnetic tension force; thus strong fields at high latitudes may well be there, but not produce sunspots. Likewise, the process of flux rope formation from the dynamo-generated mean magnetic field is currently not understood. Are flux ropes forming preferentially in regions of most intense magnetic fields, in regions of strongest magnetic helicity, or in regions of strongest hydrodynamical shear? Is a stronger diffuse toroidal field forming more strongly magnetized flux ropes, or a larger number of flux ropes always of the same strength?
These are all crucial questions from the point of view of comparing results from dynamo models to sunspot data. Until they have been answered, uncertainty remains as to the degree to which the sunspot butterfly diagram can be compared in all details to time-latitude diagrams of the toroidal field, as produced by this or that dynamo model.
Is meridional circulation crucial?
The main question regarding meridional circulation is not whether it is there or not, but rather what role it plays in the solar cycle. The answer hinges on the value of the turbulent diffusivity ηT, which is notoriously difficult to estimate with confidence. It is probably essential in mean-field and mean-field-like dynamo models characterized by positive α-effects in the Northern hemisphere, in order to ensure equatorward transport of the sunspot-forming, deep-seated toroidal magnetic field (see Sections 4.4, 4.5, and 4.7), unless the latitudinal turbulent pumping speeds turn out significantly larger than currently estimated (Käpylä et al., 2006a). It also appears to be a major determinant in the evolution of the surface magnetic field in the course of the solar cycle. Something like it is certainly needed in dynamo models based on the Babcock-Leighton mechanism, to carry the poloidal field generated at the surface down to the tachocline, where production of the toroidal field is taking place (see Section 4.8).
The primary unknown at this writing is the degree to which meridional circulation is affected by the Lorentz force associated with the dynamo-generated magnetic field. Recent calculations (Rempel, 2006a,b) suggest that the backreaction is limited to regions of strongest toroidal fields, so that the "conveyor belt" is still operating in the bulk of the convective envelope, but this issue requires further study. Another important related issue is the advective role of turbulent pumping, which may well compete and/or complement the advective effect of the meridional flow.
Is the mean solar magnetic field really axisymmetric?
While the large-scale solar magnetic field is axisymmetric about the Sun's rotation axis to a good first approximation, various lines of observational evidence point to a persistent, low-level non-axisymmetric component; such evidence includes the so-called active longitudes (see Henney and Harvey, 2002, and references therein), rotationally-based periodicity in cycle-related eruptive phenomena (Bai, 1987), and the shape of the white-light corona in the descending phase of the cycle.
Various mean-field-based dynamo models are known to support non-axisymmetric modes over a substantial portion of their parameter space (see, e.g., Moss et al., 1991; Moss, 1999; Bigazzi and Ruzmaikin, 2004, and references therein). At high Rm, strong differential rotation (in the sense that CΩ ≫ Cα) is known to favor axisymmetric modes, because it efficiently destroys any non-axisymmetric component on a timescale much faster than diffusive (∝ Rm^{1/3} at high Rm, instead of ∝ Rm). Although it is not entirely clear that the Sun's differential rotation is strong enough to place it in this regime (see, e.g., Rüdiger and Elstner, 1994), some 3D models do show this symmetrizing effect of differential rotation (see, e.g., Zhang et al., 2003a). Likewise, the recent numerical 3D MHD simulations of solar-like cycles by Ghizaru et al. (2010) do produce a large-scale magnetic field with a dominant axisymmetric component. These types of simulations will probably offer the best handle on this question.
What causes Maunder-type Grand Minima?
The origin of Grand Minima in solar activity also remains a question subjected to intense scrutiny. Broadly speaking, Grand Minima can occur either through amplitude modulation of a basic underlying dynamo cycle, or through intermittency. In this latter case, the transition from one state to another can take place via the system's internal dynamics, or through the influence of external stochastic noise, or both. Not surprisingly, a large number of plausible Grand Minima models can now be found in the extant literature (cf. Section 5.6).
Historical research has shown that the Sun climbed out of the Maunder Minimum gradually, showing strongly asymmetric activity, with nearly all sunspots observed between 1670 and 1715 located in the Southern solar hemisphere (see Ribes and Nesme-Ribes, 1993). Some historical reconstructions of the butterfly diagram in the pre-photographic era also suggest the presence of what could be interpreted as a quadrupolar component (Arlt, 2009). These are the kinds of patterns that can be readily produced by nonlinear parity modulation (cf. Figure 23 herein; see also Beer et al., 1998; Sokoloff and Nesme-Ribes, 1994; Usoskin et al., 2009b). Then again, in the context of an intermittency-based model, it is quite conceivable that one hemisphere can pull out of a quiescent epoch before the other, thus yielding sunspot distributions compatible with the aforementioned observations in the late Maunder Minimum. Such scenarios, relying on cross-hemispheric coupling, have hardly begun to be explored (Charbonneau, 2005, 2007a; Chatterjee and Choudhuri, 2006).
Another possible avenue for distinguishing between these various scenarios is the persistence of the primary cycle's phase through Grand Minima. Generally speaking, models relying on amplitude modulation can be expected to exhibit good phase persistence across such minima, because the same basic cycle is operating at all times (cf. Figure 23). Intermittency, on the other hand, should not necessarily lead to phase persistence, since the active and quiescent phases are governed by distinct dynamics. One can but hope that careful analysis of cosmogenic radioisotope data may soon indicate the degree to which the solar cycle's phase persisted through the Maunder, Spörer, and Wolf Grand Minima, in order to narrow down the range of possibilities.
Recent years have witnessed a number of significant advances in solar cycle modelling. Local magnetohydrodynamical simulations of thermally-driven convection have now allowed measurements of the α-tensor, and of its variation with depth and latitude in the solar interior, and with rotation rate; and global magnetohydrodynamical simulations of solar convection are now producing large-scale magnetic fields, in some cases even undergoing polarity reversals on decadal timescales. Such simulations are ideally suited for investigating a number of important issues, such as the mechanism(s) responsible for regulating the amplitude of the solar cycle, the magnetically-driven temporal variations of the large-scale flows important for the solar cycle, and the possible impact of a cycling large-scale magnetic field on convective energy transport, to mention but a few.
Despite continuing advances in computing power, global MHD simulations remain extremely demanding, and proper capture of important solar cycle elements — most notably the formation, emergence and surface decay of sunspots and active regions — is certainly not forthcoming. Nonetheless, comparison between cyclic solutions arising in full numerical simulations and those characterizing simpler mean-field-like models should also allow one to test the limits of validity of the kinematic approximation and of the simple algebraic amplitude-limiting nonlinearities still so prevalent in the latter class of solar cycle models. It appears likely that in the foreseeable future, the simpler, mean-field and mean-field-like solar cycle models reviewed here will remain the workhorses of research on long timescale phenomena such as grand activity minima and maxima, on the evolution of surface magnetic flux, on dynamo-model-based solar cycle prediction, and on the modelling and interpretation of stellar activity cycles.
1 Equation (2) is written here in a frame of reference rotating with angular velocity Ω, so that a Coriolis force term appears explicitly, while the centrifugal force has been subsumed into the gravitational term.
2 Note, however, that an axisymmetric flow can sustain a non-axisymmetric magnetic field against resistive decay.
3 Helioseismology has also revealed the existence of a significant radial shear in the outermost layers of the solar convective envelope. Even if the storage problem could be somehow bypassed, it does not appear possible to construct a viable solar dynamo model relying exclusively on this angular velocity gradient (see, e.g., Dikpati et al., 2002; Brandenburg, 2005, for illustrative calculations).
4 Models retaining both α-terms are dubbed α2Ω dynamos, and may be relevant to the solar case even in the Cα ≪ CΩ regime, if the latter operates in a very thin layer, e.g. the tachocline (see, e.g., DeLuca and Gilman, 1988; Gilman et al., 1989; Choudhuri, 1990); this is because the α-effect gets curled in Equation (25) for the mean toroidal field. Models relying only on the α-terms are said to be α2 dynamos. Such models are relevant to dynamo action in planetary cores and convective stars with vanishing differential rotation (if such a thing exists).
5 These are not "waves" in the usual sense of the word, although they are described by modal solutions of the form exp[i(k · x − ωt)].
6 Although some turbulence models predict such higher-order latitudinal dependencies, the functional forms adopted here are largely ad hoc, and are adopted for strictly illustrative purposes.
7 Mea culpa on this one...
8 For this particular choice of α, η, and Ω profiles, solutions with negative Cα are non-oscillatory in most of the [Cα,CΩ,Δη] parameter space. This is in agreement with the results of Markiel and Thomas (1999).
9 We largely exclude from the foregoing discussion mathematical toy-models that aim exclusively at reproducing the shape of the sunspot number time series. For recent entry points in this literature, see, e.g., Mininni et al. (2002).
10 Dynamo saturation can also occur by magnetically-mediated changes in the "topological" properties of a turbulent flow, without significant decrease in the turbulent flow amplitudes; see Cattaneo et al. (1996) for a nice, simple example.
11 This effect has been found to be the dominant dynamo quenching mechanism in some numerical simulations of dynamo action in a rotating, thermally-driven turbulent spherical shell (see, e.g., Gilman, 1983), as well as in models confined to thin shells (DeLuca and Gilman, 1988).
Arlt, R., 2009, "The Butterfly Diagram in the Eighteenth Century", Solar Phys., 255, 143–153. [DOI], [ADS], [arXiv:0812.2233] (Cited on page 72.)
Arlt, R., Sule, A. and Filter, R., 2007a, "Stability of the solar tachocline with magnetic fields", Astron. Nachr., 328, 1142. [DOI], [ADS] (Cited on page 22.)
Arlt, R., Sule, A. and Rüdiger, G., 2007b, "Stability of toroidal magnetic fields in the solar tachocline", Astron. Astrophys., 461, 295–301. [DOI], [ADS] (Cited on pages 18 and 22.)
Babcock, H.W., 1961, "The Topology of the Sun's Magnetic Field and the 22-Year Cycle", Astrophys. J., 133, 572–589. [DOI], [ADS] (Cited on pages 9 and 41.)
Bai, T., 1987, "Distribution of flares on the sun: superactive regions and active zones of 1980-1985", Astrophys. J., 314, 795–807. [DOI], [ADS] (Cited on page 72.)
Basu, S. and Antia, H.M., 2001, "A study of possible temporal and latitudinal variations in the properties of the solar tachocline", Mon. Not. R. Astron. Soc., 324, 498–508. [DOI], [ADS], [astro-ph/0101314] (Cited on page 70.)
Baumann, I., Schmitt, D., Schüssler, M. and Solanki, S., 2004, "Evolution of the large-scale magnetic field on the solar surface: a parameter study", Astron. Astrophys., 426, 1075–1091. [DOI], [ADS] (Cited on page 70.)
Beer, J., 2000, "Long-term indirect indices of solar variability", Space Sci. Rev., 94, 53–66. [ADS] (Cited on pages 51 and 53.)
Beer, J., Raisbeck, G.M. and Yiou, F., 1991, "Time variation of 10Be and solar activity", in The Sun in Time, (Eds.) Sonett, C.P., Giampapa, M.S., Matthews, M.S., pp. 343–359, University of Arizona Press, Tucson (Cited on page 51.)
Beer, J., Tobias, S.M. and Weiss, N.O., 1998, "An Active Sun Throughout the Maunder Minimum", Solar Phys., 181, 237–249. [DOI], [ADS] (Cited on pages 55, 64, and 72.)
Bigazzi, A. and Ruzmaikin, A., 2004, "The sun's preferred longitudes and the coupling of magnetic dynamo modes", Astrophys. J., 604, 944–959. [DOI], [ADS] (Cited on page 72.)
Blackman, E.G. and Brandenburg, A., 2002, "Dynamical nonlinearity in large-scale dynamo with shear", Astrophys. J., 579, 359–373. [DOI], [ADS] (Cited on pages 22 and 57.)
Blackman, E.G. and Field, G.B., 2000, "Constraints on the magnitude of α in dynamo theory", Astrophys. J., 534, 984–988. [DOI], [ADS] (Cited on page 70.)
Bonanno, A., Elstner, D., Rüdiger, G. and Belvedere, G., 2003, "Parity properties of an advection-dominated solar α²Ω-dynamo", Astron. Astrophys., 390, 673–680. [ADS] (Cited on page 34.)
Bonanno, A., Elstner, D. and Belvedere, G., 2006, "Advection-dominated solar dynamo model with two-cell meridional flow and a positive α-effect in the tachocline", Astron. Nachr., 327, 680. [DOI], [ADS] (Cited on page 41.)
Boruta, N., 1996, "Solar dynamo surface waves in the presence of a primordial magnetic field: a 30 Gauss upper limit in the solar core", Astrophys. J., 458, 832–849. [DOI], [ADS] (Cited on page 53.)
Boyer, D.W. and Levy, E.H., 1984, "Oscillating dynamo magnetic field in the presence of an external nondynamo field: the influence of a solar primordial field", Astrophys. J., 277, 848–861. [DOI], [ADS] (Cited on page 53.)
Brandenburg, A., 2005, "The Case for a Distributed Solar Dynamo Shaped by Near-Surface Shear", Astrophys. J., 625, 539–547. [DOI], [ADS], [astro-ph/0502275] (Cited on page 19.)
Brandenburg, A., 2009, "Advances in Theory and Simulations of Large-Scale Dynamos", Space Sci. Rev., 144, 87–104. [DOI], [ADS], [arXiv:0901.0329] (Cited on pages 54 and 70.)
Brandenburg, A. and Dobler, W., 2001, "Large scale dynamos with helicity loss through boundaries", Astron. Astrophys., 369, 329–338. [DOI], [ADS] (Cited on page 70.)
Brandenburg, A. and Schmitt, D., 1998, "Simulations of an alpha-effect due to magnetic buoyancy", Astron. Astrophys., 338, L55–L58. [ADS] (Cited on page 40.)
Brandenburg, A. and Subramanian, K., 2005, "Astrophysical magnetic fields and nonlinear dynamo theory", Phys. Rep., 417, 1–209. [DOI], [ADS], [astro-ph/0405052] (Cited on page 11.)
Brandenburg, A., Tuominen, I., Nordlund, Å., Pulkkinen, P. and Stein, R.F., 1990, "3-D simulations of turbulent cyclonic magneto-convection", Astron. Astrophys., 232, 277–291. [ADS] (Cited on page 21.)
Brandenburg, A., Rädler, K.-H., Rheinhardt, M. and Subramanian, K., 2008, "Magnetic Quenching of α and Diffusivity Tensors in Helical Turbulence", Astrophys. J. Lett., 687, L49–L52. [DOI], [ADS], [arXiv:0805.1287] (Cited on page 22.)
Braun, D.C. and Fan, Y., 1998, "Helioseismic measurements of the subsurface meridional flow", Astrophys. J. Lett., 508, L105–L108. [DOI], [ADS] (Cited on page 31.)
Brooke, J.M., Pelt, J., Tavakol, R. and Tworkowski, A., 1998, "Grand minima and equatorial symmetry breaking in axisymmetric dynamo models", Astron. Astrophys., 332, 339–352. [ADS] (Cited on pages 55 and 64.)
Brooke, J.M., Moss, D. and Phillips, A., 2002, "Deep minima in stellar dynamos", Astron. Astrophys., 395, 1013–1022. [DOI], [ADS] (Cited on page 64.)
Brown, B.P., Browning, M.K., Miesch, M.S., Brun, A.S. and Toomre, J., 2009, "Wreathes of Magnetism in Rapidly Rotating Suns", arXiv, e-print. [ADS], [arXiv:0906.2407] (Cited on page 48.)
Brown, B.P., Browning, M.K., Brun, A.S., Miesch, M.S. and Toomre, J., 2010, "Persistent Magnetic Wreaths in a Rapidly Rotating Sun", Astrophys. J., 711, 424–438. [DOI], [ADS] (Cited on page 48.)
Brown, T.M., Christensen-Dalsgaard, J., Dziembowski, W.A., Goode, P., Gough, D.O. and Morrow, C.A., 1989, "Inferring the Sun's internal angular velocity from observed p-mode frequency splittings", Astrophys. J., 343, 526–546. [DOI], [ADS] (Cited on page 17.)
Browning, M.K., Miesch, M.S., Brun, A.S. and Toomre, J., 2006, "Dynamo Action in the Solar Convection Zone and Tachocline: Pumping and Organization of Toroidal Fields", Astrophys. J. Lett., 648, L157–L160. [DOI], [ADS], [astro-ph/0609153] (Cited on page 48.)
Brun, A.S., Miesch, M.S. and Toomre, J., 2004, "Global-scale turbulent convection and magnetic dynamo action in the solar envelope", Astrophys. J., 614, 1073–1098. [DOI], [ADS] (Cited on page 48.)
Bushby, P.J., 2006, "Zonal flows and grand minima in a solar dynamo model", Mon. Not. R. Astron. Soc., 371, 772–780. [DOI], [ADS] (Cited on pages 55 and 65.)
Bushby, P.J. and Tobias, S.M., 2007, "On Predicting the Solar Cycle Using Mean-Field Models", Astrophys. J., 661, 1289–1296. [DOI], [ADS], [arXiv:0704.2345] (Cited on page 69.)
Caligari, P., Moreno-Insertis, F. and Schüssler, M., 1995, "Emerging flux tubes in the solar convection zone. I. Asymmetry, tilt, and emergence latitudes", Astrophys. J., 441, 886–902. [DOI], [ADS] (Cited on pages 18 and 57.)
Cally, P.S., 2001, "Nonlinear Evolution of 2D Tachocline Instability", Solar Phys., 199, 231–249. [DOI], [ADS] (Cited on page 38.)
Cally, P.S., Dikpati, M. and Gilman, P.A., 2003, "Clamshell and Tipping Instabilities in a Twodimensional Magnetohydrodynamic Tachocline", Astrophys. J., 582, 1190–1205. [DOI], [ADS] (Cited on page 38.)
Cally, P.S., Dikpati, M. and Gilman, P.A., 2008, "Three-dimensional magneto-shear instabilities in the solar tachocline. II. Axisymmetric case", Mon. Not. R. Astron. Soc., 391, 891–900. [DOI], [ADS] (Cited on page 18.)
Cameron, R. and Schüssler, M., 2007, "Solar Cycle Prediction Using Precursors and Flux Transport Models", Astrophys. J., 659, 801–811. [DOI], [ADS], [astro-ph/0612693] (Cited on page 69.)
Cameron, R. and Schüssler, M., 2008, "A Robust Correlation between Growth Rate and Amplitude of Solar Cycles: Consequences for Prediction Methods", Astrophys. J., 685, 1291–1296. [DOI], [ADS] (Cited on page 69.)
Carbonell, M., Oliver, R. and Ballester, J.L., 1994, "A search for chaotic behaviour in solar activity", Astron. Astrophys., 290, 983–994. [ADS] (Cited on page 53.)
Cattaneo, F., 1999, "On the origin of magnetic fields in the quiet photosphere", Astrophys. J. Lett., 515, L39–L42. [DOI], [ADS] (Cited on page 60.)
Cattaneo, F. and Hughes, D.W., 1996, "Nonlinear saturation of the turbulent α-effect", Phys. Rev. E, 54, R4532–R4535. [ADS] (Cited on pages 28 and 70.)
Cattaneo, F. and Hughes, D.W., 2009, "Problems with kinematic mean field electrodynamics at high magnetic Reynolds numbers", Mon. Not. R. Astron. Soc., 395, L48–L51. [DOI], [ADS], [arXiv:0805.2138] (Cited on page 54.)
Cattaneo, F., Hughes, D.W. and Kim, E.-J., 1996, "Suppression of Chaos in a Simplified Nonlinear Dynamo Model", Phys. Rev. Lett., 76, 2057–2060. [DOI], [ADS] (Cited on page 54.)
Cattaneo, F., Emonet, T. and Weiss, N.O., 2003, "On the interaction between convection and magnetic fields", Astrophys. J., 588, 1183–1198. [DOI], [ADS] (Cited on page 60.)
Charbonneau, P., 2001, "Multiperiodicity, Chaos, and Intermittency in a Reduced Model of the Solar Cycle", Solar Phys., 199, 385–404. [ADS] (Cited on pages 57, 58, and 65.)
Charbonneau, P., 2005, "A Maunder Minimum Scenario Based on Cross-Hemispheric Coupling and Intermittency", Solar Phys., 229, 345–358. [DOI], [ADS] (Cited on pages 47 and 73.)
Charbonneau, P., 2007a, "Cross-hemispheric coupling in a Babcock-Leighton model of the solar cycle", Adv. Space Res., 40, 899–906. [DOI], [ADS] (Cited on pages 47 and 73.)
Charbonneau, P., 2007b, "Babcock-Leighton models of the solar cycle: Questions and issues", Adv. Space Res., 39, 1661–1669. [DOI], [ADS] (Cited on page 45.)
Charbonneau, P. and Barlet, G., 2010, "The dynamo basis of solar cycle precursor schemes", J. Atmos. Sol.-Terr. Phys., 2010, in press. [DOI] (Cited on page 68.)
Charbonneau, P. and Dikpati, M., 2000, "Stochastic Fluctuations in a Babcock-Leighton Model of the Solar Cycle", Astrophys. J., 543, 1027–1043. [DOI], [ADS] (Cited on pages 57 and 62.)
Charbonneau, P. and MacGregor, K.B., 1996, "On the generation of equipartition-strength magnetic fields by turbulent hydromagnetic dynamos", Astrophys. J. Lett., 473, L59–L62. [DOI], [ADS] (Cited on page 29.)
Charbonneau, P. and MacGregor, K.B., 1997, "Solar Interface Dynamos. II. Linear, Kinematic Models in Spherical Geometry", Astrophys. J., 486, 502–520. [DOI], [ADS] (Cited on page 29.)
Charbonneau, P., Christensen-Dalsgaard, J., Henning, R., Larsen, R.M., Schou, J., Thompson, M.J. and Tomczyk, S., 1999, "Helioseismic Constraints on the Structure of the Solar Tachocline", Astrophys. J., 527, 445–460. [DOI], [ADS] (Cited on page 17.)
Charbonneau, P., Blais-Laurier, G. and St-Jean, C., 2004, "Intermittency and Phase Persistence in a Babcock-Leighton Model of the Solar Cycle", Astrophys. J. Lett., 616, L183–L186. [DOI], [ADS] (Cited on pages 65, 67, and 68.)
Charbonneau, P., St-Jean, C. and Zacharias, P., 2005, "Fluctuations in Babcock-Leighton models of the solar cycle. I. period doubling and transition to chaos", Astrophys. J., 619, 613–622. [DOI], [ADS] (Cited on pages 42, 43, and 58.)
Charbonneau, P., Beaubien, G. and St-Jean, C., 2007, "Fluctuations in Babcock-Leighton Dynamos. II. Revisiting the Gnevyshev-Ohl Rule", Astrophys. J., 658, 657–662. [DOI], [ADS] (Cited on page 62.)
Chatterjee, P. and Choudhuri, A.R., 2006, "On Magnetic Coupling Between the Two Hemispheres in Solar Dynamo Models", Solar Phys., 239, 29–39. [DOI], [ADS] (Cited on pages 47 and 73.)
Chatterjee, P., Nandy, D. and Choudhuri, A.R., 2004, "Full-sphere simulations of a circulation dominated solar dynamo: exploring the parity issue", Astron. Astrophys., 427, 1019–1030. [DOI], [ADS] (Cited on pages 42, 47, 62, and 63.)
Choudhuri, A.R., 1990, "On the possibility of αΩ-type dynamo in a thin layer inside the sun", Astrophys. J., 355, 733–744. [DOI], [ADS] (Cited on page 23.)
Choudhuri, A.R., 1992, "Stochastic fluctuations of the solar dynamo", Astron. Astrophys., 253, 277–285. [ADS] (Cited on pages 60 and 62.)
Choudhuri, A.R., Schüssler, M. and Dikpati, M., 1995, "The solar dynamo with meridional circulation", Astron. Astrophys., 303, L29–L32. [ADS] (Cited on page 32.)
Choudhuri, A.R., Chatterjee, P. and Jiang, J., 2007, "Predicting Solar Cycle 24 With a Solar Dynamo Model", Phys. Rev. Lett., 98, 131103. [DOI], [ADS], [astro-ph/0701527] (Cited on pages 68 and 69.)
Christensen-Dalsgaard, J., 2002, "Helioseismology", Rev. Mod. Phys., 74, 1073–1129. [ADS] (Cited on page 13.)
Covas, E., Tavakol, R., Tworkowski, A. and Brandenburg, A., 1997, "Robustness of truncated αΩ dynamos with a dynamic alpha", Solar Phys., 172, 3–13. [DOI], [ADS] (Cited on page 57.)
Covas, E., Tavakol, R., Tworkowski, A. and Brandenburg, A., 1998, "Axisymmetric mean field dynamos with dynamic and algebraic α-quenching", Astron. Astrophys., 329, 350–360. [ADS] (Cited on page 57.)
Davidson, P.A., 2001, An Introduction to Magnetohydrodynamics, Cambridge Texts in Applied Mathematics, Cambridge University Press, Cambridge; New York. [Google Books] (Cited on page 12.)
DeLuca, E.E. and Gilman, P.A., 1988, "Dynamo theory for the interface between the convection zone and the radiative interior of a star", Geophys. Astrophys. Fluid Dyn., 43, 119–148. [DOI] (Cited on pages 23 and 54.)
DeLuca, E.E., Fisher, G.H. and Patten, B.M., 1993, "The dynamics of magnetic flux rings", Astrophys. J., 411, 383–393. [DOI], [ADS] (Cited on page 71.)
Dikpati, M. and Charbonneau, P., 1999, "A Babcock-Leighton Flux Transport Dynamo with Solar-like Differential Rotation", Astrophys. J., 518, 508–520. [DOI], [ADS] (Cited on pages 42, 43, 45, and 57.)
Dikpati, M. and Gilman, P.A., 1999, "Joint instability of latitudinal differential rotation and concentrated toroidal fields below the solar convection zone", Astrophys. J., 512, 417–441. [DOI], [ADS] (Cited on page 38.)
Dikpati, M. and Gilman, P.A., 2001, "Flux-Transport Dynamos with α-Effect from Global Instability of Tachocline Differential Rotation: A Solution for Magnetic Parity Selection in the Sun", Astrophys. J., 559, 428–442. [DOI], [ADS] (Cited on pages 17, 37, 39, and 47.)
Dikpati, M. and Gilman, P.A., 2006, "Simulating and Predicting Solar Cycles Using a Flux- Transport Dynamo", Astrophys. J., 649, 498–514. [DOI], [ADS] (Cited on page 68.)
Dikpati, M., Corbard, T., Thompson, M.J. and Gilman, P.A., 2002, "Flux Transport Solar Dynamos with Near-Surface Radial Shear", Astrophys. J. Lett., 575, L41–L45. [DOI], [ADS] (Cited on page 19.)
Dikpati, M., De Toma, G., Gilman, P.A., Arge, C.N. and White, O.R., 2004, "Diagnostic of polar field reversal in solar cycle 23 using a flux transport dynamo model", Astrophys. J., 601, 1136–1151. [DOI], [ADS] (Cited on pages 37, 41, and 47.)
Dikpati, M., Gilman, P.A. and MacGregor, K.B., 2005, "Constraints on the Applicability of an Interface Dynamo to the Sun", Astrophys. J., 631, 647–652. [DOI], [ADS] (Cited on pages 31 and 53.)
Dikpati, M., de Toma, G. and Gilman, P.A., 2006, "Predicting the strength of solar cycle 24 using a flux-transport dynamo-based tool", Geophys. Res. Lett., 33, L05102. [DOI], [ADS] (Cited on pages 68 and 69.)
Dikpati, M., Gilman, P.A., Cally, P.S. and Miesch, M.S., 2009, "Axisymmetric MHD Instabilities in Solar/Stellar Tachoclines", Astrophys. J., 692, 1421. [ADS] (Cited on page 18.)
D'Silva, S. and Choudhuri, A.R., 1993, "A theoretical model for tilts of bipolar magnetic regions", Astron. Astrophys., 272, 621–633. [ADS] (Cited on page 18.)
Durney, B.R., 1995, "On a Babcock-Leighton dynamo model with a deep-seated generating layer for the toroidal magnetic field", Solar Phys., 160, 213–235. [DOI], [ADS] (Cited on pages 42 and 43.)
Durney, B.R., 1996, "On a Babcock-Leighton dynamo model with a deep-seated generating layer for the toroidal magnetic field, II", Solar Phys., 166, 231–260. [DOI], [ADS] (Cited on page 42.)
Durney, B.R., 1997, "On a Babcock-Leighton solar dynamo model with a deep-seated generating layer for the toroidal magnetic field. IV", Astrophys. J., 486, 1065–1077. [DOI], [ADS] (Cited on page 42.)
Durney, B.R., 2000, "On the differences between odd and even solar cycles", Solar Phys., 196, 421–426. [ADS] (Cited on page 57.)
Durney, B.R., De Young, D.S. and Roxburgh, I.W., 1993, "On the generation of the large-scale and turbulent magnetic field in solar-type stars", Solar Phys., 145, 207–225. [DOI], [ADS] (Cited on pages 22, 42, and 54.)
Eddy, J.A., 1976, "The Maunder Minimum", Science, 192, 1189–1202. [DOI], [ADS] (Cited on page 51.)
Eddy, J.A., 1983, "The Maunder Minimum: A reappraisal", Solar Phys., 89, 195–207. [DOI], [ADS] (Cited on page 51.)
Fan, Y., 2009, "Magnetic Fields in the Solar Convection Zone", Living Rev. Solar Phys., 6, lrsp–2009–4. [ADS]. URL (accessed 9 April 2010): http://www.livingreviews.org/lrsp-2009-4 (Cited on pages 10, 17, and 28.)
Fan, Y., Fisher, G.H. and Deluca, E.E., 1993, "The origin of morphological asymmetries in bipolar active regions", Astrophys. J., 405, 390–401. [DOI], [ADS] (Cited on pages 18 and 57.)
Ferriz-Mas, A. and Núñez, M. (Eds.), 2003, Advances in Nonlinear Dynamos, vol. 9 of The Fluid Mechanics of Astrophysics and Geophysics, Taylor & Francis, London, New York (Cited on page 11.)
Ferriz-Mas, A., Schmitt, D. and Schüssler, M., 1994, "A dynamo effect due to instability of magnetic flux tubes", Astron. Astrophys., 289, 949–956. [ADS] (Cited on pages 17, 40, and 41.)
Feynman, J. and Gabriel, S.B., 1990, "Period and phase of the 88-year solar cycle and the Maunder minimum: Evidence for a chaotic Sun", Solar Phys., 127, 393–403. [DOI], [ADS] (Cited on page 53.)
Foukal, P.V., 2004, Solar Astrophysics, Wiley-VCH, Weinheim, 2nd edn. (Cited on page 11.)
Garaud, P. and Brummell, N.H., 2008, "On the Penetration of Meridional Circulation below the Solar Convection Zone", Astrophys. J., 674, 498–510. [DOI], [ADS], [arXiv:0708.0258] (Cited on page 45.)
Ghizaru, M., Charbonneau, P. and Smolarkiewicz, P.K., 2010, "Magnetic cycles in global largeeddy simulations of solar convection", Astrophys. J. Lett., 715, L133–L137. [DOI], [ADS] (Cited on pages 48, 49, and 72.)
Gilman, P.A., 1983, "Dynamically consistent nonlinear dynamos driven by convection on a rotating spherical shell. II. Dynamos with cycles and strong feedback", Astrophys. J. Suppl. Ser., 53, 243–268. [DOI], [ADS] (Cited on pages 47 and 54.)
Gilman, P.A. and Fox, P.A., 1997, "Joint instability of latitudinal differential rotation and toroidal magnetic fields below the solar convection zone", Astrophys. J., 484, 439–454. [DOI], [ADS] (Cited on page 38.)
Gilman, P.A. and Miesch, M.S., 2004, "Limits to penetration of meridional circulation below the solar convection zone", Astrophys. J., 611, 568–574. [DOI], [ADS] (Cited on page 45.)
Gilman, P.A. and Miller, J., 1981, "Dynamically consistent nonlinear dynamos driven by convection in a rotating spherical shell", Astrophys. J. Suppl. Ser., 46, 211–238. [DOI], [ADS] (Cited on page 47.)
Gilman, P.A. and Rempel, M., 2005, "Concentration of Toroidal Magnetic Field in the Solar Tachocline by η-Quenching", Astrophys. J., 630, 615–622. [DOI], [ADS], [astro-ph/0504003] (Cited on page 22.)
Gilman, P.A., Morrow, C.A. and Deluca, E.E., 1989, "Angular momentum transport and dynamo action in the sun. Implications of recent oscillation measurements", Astrophys. J., 46, 528–537. [DOI], [ADS] (Cited on page 23.)
Gizon, L., 2004, "Helioseismology of Time-Varying Flows Through The Solar Cycle", Solar Phys., 224, 217–228. [DOI], [ADS] (Cited on pages 13 and 31.)
Gizon, L. and Rempel, M., 2008, "Observation and Modeling of the Solar-Cycle Variation of the Meridional Flow", Solar Phys., 251, 241–250. [DOI], [ADS], [arXiv:0803.0950] (Cited on pages 31 and 55.)
Glatzmaier, G.A., 1985a, "Numerical simulations of stellar convective dynamos. II. Field propagation in the convection zone", Astrophys. J., 291, 300–307. [DOI], [ADS] (Cited on page 47.)
Glatzmaier, G.A., 1985b, "Numerical simulations of stellar convective dynamos. III. At the base of the convection zone", Geophys. Astrophys. Fluid Dyn., 31, 137–150. [DOI], [ADS] (Cited on page 47.)
Guerrero, G. and de Gouveia Dal Pino, E.M., 2007, "How does the shape and thickness of the tachocline affect the distribution of the toroidal magnetic fields in the solar dynamo?", Astron. Astrophys., 464, 341–349. [DOI], [ADS], [astro-ph/0610703] (Cited on page 45.)
Guerrero, G. and de Gouveia Dal Pino, E.M., 2008, "Turbulent magnetic pumping in a Babcock- Leighton solar dynamo model", Astron. Astrophys., 485, 267–273. [DOI], [ADS], [arXiv:0803.3466] (Cited on pages 42, 45, 46, and 47.)
Guerrero, G.A. and Muñoz, J.D., 2004, "Kinematic solar dynamo models with a deep meridional flow", Mon. Not. R. Astron. Soc., 350, 317–322. [DOI], [ADS] (Cited on page 45.)
Haber, D.A., Hindman, B.W., Toomre, J., Bogart, R.S., Larsen, R.M. and Hill, F., 2002, "Evolving Submerged Meridional Circulation Cells within the Upper Convection Zone Revealed by Ring- Diagram Analysis", Astrophys. J., 570, 855–864. [DOI], [ADS] (Cited on pages 31 and 37.)
Hagenaar, H.J., Schrijver, C.J. and Title, A.M., 2003, "The Properties of Small Magnetic Regions on the Solar Surface and the Implications for the Solar Dynamo(s)", Astrophys. J., 584, 1107–1119. [DOI], [ADS] (Cited on page 60.)
Haigh, J.D., 2007, "The Sun and the Earth's Climate", Living Rev. Solar Phys., 4, lrsp–2007–2. [ADS]. URL (accessed 9 April 2010): http://www.livingreviews.org/lrsp-2007-2 (Cited on page 51.)
Hathaway, D.H., 1996, "Doppler measurements of the sun's meridional flow", Astrophys. J., 460, 1027–1033. [DOI], [ADS] (Cited on page 31.)
Hathaway, D.H., 2010, "The Solar Cycle", Living Rev. Solar Phys., 7, lrsp–2010–1. [ADS]. URL (accessed 9 April 2010): http://www.livingreviews.org/lrsp-2010-1 (Cited on pages 51, 53, and 69.)
Hathaway, D.H., Wilson, R.M. and Reichmann, E.J., 1999, "A Synthesis of Solar Cycle Prediction Techniques", J. Geophys. Res., 104, 22,375–22,388. [DOI], [ADS] (Cited on page 68.)
Hathaway, D.H., Wilson, R.M. and Reichmann, E.J., 2002, "Group sunspot numbers: sunspot cycle characteristics", Solar Phys., 211, 357–370. [ADS] (Cited on page 52.)
Hathaway, D.H., Nandy, D., Wilson, R.M. and Reichmann, E.J., 2003, "Evidence that a deep meridional flow sets the sunspot cycle period", Astrophys. J., 589, 665–670. [DOI], [ADS] (Cited on page 45.)
Henney, C.J. and Harvey, J.W., 2002, "Phase coherence analysis of solar magnetic activity", Solar Phys., 207, 199–218. [DOI], [ADS] (Cited on page 72.)
Howe, R., 2009, "Solar Interior Rotation and its Variation", Living Rev. Solar Phys., 6, lrsp–2009–1. [ADS], [arXiv:0902.2406]. URL (accessed 9 April 2010): http://www.livingreviews.org/lrsp-2009-1 (Cited on pages 13 and 70.)
Hoyng, P., 1988, "Turbulent transport of magnetic fields. III. Stochastic excitation of global magnetic modes", Astrophys. J., 332, 857–871. [DOI], [ADS] (Cited on page 60.)
Hoyng, P., 1993, "Helicity fluctuations in mean field theory: an explanation for the variability of the solar cycle?", Astron. Astrophys., 272, 321–339. [ADS] (Cited on pages 21, 60, and 64.)
Hoyng, P., 2003, "The field, the mean and the meaning", in Advances in Nonlinear Dynamos, (Eds.) Ferriz-Mas, A., Núñez, M., vol. 9 of The Fluid Mechanics of Astrophysics and Geophysics, pp. 1–36, Taylor & Francis, London, New York. [Google Books] (Cited on pages 7 and 21.)
Hoyt, D.V. and Schatten, K., 1998, "Group Sunspot Numbers: A New Solar Activity Reconstruction", Solar Phys., 179, 189–219. [ADS] (Cited on page 52.)
Hoyt, D.V. and Schatten, K.H., 1996, "How Well Was the Sun Observed during the Maunder Minimum?", Solar Phys., 165, 181–192. [DOI], [ADS] (Cited on page 51.)
Jennings, R.L. and Weiss, N.O., 1991, "Symmetry breaking in stellar dynamos", Mon. Not. R. Astron. Soc., 252, 249–260. [ADS] (Cited on page 57.)
Jiang, J., Chatterjee, P. and Choudhuri, A.R., 2007, "Solar activity forecast with a dynamo model", Mon. Not. R. Astron. Soc., 381, 1527–1542. [DOI], [ADS], [arXiv:0707.2258] (Cited on page 68.)
Jiang, J., Cameron, R., Schmitt, D. and Schüssler, M., 2009, "Countercell Meridional Flow and Latitudinal Distribution of the Solar Polar Magnetic Field", Astrophys. J., 693, L96–L99. [DOI], [ADS] (Cited on pages 37 and 41.)
Jouve, L. and Brun, A.S., 2007, "On the role of meridional flows in flux transport dynamo models", Astron. Astrophys., 474, 239–250. [DOI], [ADS], [arXiv:0712.3200] (Cited on page 41.)
Jouve, L., Brun, A.S., Arlt, R., Brandenburg, A., Dikpati, M., Bonanno, A., Käpylä, P.J., Moss, D., Rempel, M., Gilman, P., Korpi, M.J. and Kosovichev, A.G., 2008, "A solar mean field dynamo benchmark", Astron. Astrophys., 483, 949–960. [DOI], [ADS] (Cited on page 19.)
Jouve, L., Brown, B.P. and Brun, A.S., 2010, "Exploring the Pcyc vs. Prot relation with flux transport dynamo models of solar-like stars", Astron. Astrophys., 509, A32. [DOI], [ADS], [arXiv:0911.1947] (Cited on page 45.)
Käpylä, P.J., Korpi, M.J., Ossendrijver, M. and Stix, M., 2006a, "Magnetoconvection and dynamo coefficients. III. α-effect and magnetic pumping in the rapid rotation regime", Astron. Astrophys., 455, 401–412. [DOI], [ADS], [astro-ph/0602111] (Cited on pages 21, 22, and 72.)
Käpylä, P.J., Korpi, M.J. and Tuominen, I., 2006b, "Solar dynamo models with α-effect and turbulent pumping from local 3D convection calculations", Astron. Nachr., 327, 884. [DOI], [ADS], [astro-ph/0606089] (Cited on page 34.)
Käpylä, P.J., Korpi, M.J., Brandenburg, A., Mitra, D. and Tavakol, R., 2010, "Convective dynamos in spherical wedge geometry", Astron. Nachr., 331, 73. [DOI], [ADS] (Cited on page 48.)
Kitchatinov, L.L. and Rüdiger, G., 1993, "Λ-effect and differential rotation in stellar convection zones", Astron. Astrophys., 276, 96–102. [ADS] (Cited on pages 28 and 54.)
Kitchatinov, L.L. and Rüdiger, G., 2006, "Magnetic field confinement by meridional flow and the solar tachocline", Astron. Astrophys., 453, 329–333. [DOI], [ADS], [astro-ph/0603417] (Cited on page 53.)
Kitchatinov, L.L., Rüdiger, G. and Küker, M., 1994, "Λ-quenching as the nonlinearity in stellar-turbulence dynamos", Astron. Astrophys., 292, 125–132. [ADS] (Cited on page 54.)
Kitchatinov, L.L., Mazur, M.V. and Jardine, M., 2000, "Magnetic field escape from a stellar convection zone and the dynamo-cycle period", Astron. Astrophys., 359, 531–538. [ADS] (Cited on page 71.)
Kitiashvili, I. and Kosovichev, A.G., 2008, "Application of Data Assimilation Method for Predicting Solar Cycles", Astrophys. J., 688, L49–L52. [DOI], [ADS], [arXiv:0807.3284] (Cited on page 69.)
Kleeorin, N., Rogachevskii, I. and Ruzmaikin, A., 1995, "Magnitude of the dynamo-generated magnetic field in solar-type convective zones", Astron. Astrophys., 297, 159–167. [ADS] (Cited on page 57.)
Knobloch, E., Tobias, S.M. and Weiss, N.O., 1998, "Modulation and symmetry changes in stellar dynamos", Mon. Not. R. Astron. Soc., 297, 1123–1138. [DOI], [ADS] (Cited on page 55.)
Krause, F. and Rädler, K.-H., 1980, Mean-Field Magnetohydrodynamics and Dynamo Theory, Pergamon Press, Oxford; New York (Cited on page 21.)
Küker, M., Arlt, R. and Rüdiger, R., 1999, "The Maunder minimum as due to magnetic Λ-quenching", Astron. Astrophys., 343, 977–982. [ADS] (Cited on pages 55 and 64.)
Küker, M., Rüdiger, G. and Schulz, M., 2001, "Circulation-dominated solar shell dynamo models with positive alpha effect", Astron. Astrophys., 374, 301–308. [DOI], [ADS] (Cited on page 34.)
Leighton, R.B., 1964, "Transport of magnetic fields on the sun", Astrophys. J., 140, 1547–1562. [DOI], [ADS] (Cited on page 41.)
Leighton, R.B., 1969, "A magneto-kinematic model of the solar cycle", Astrophys. J., 156, 1–26. [DOI], [ADS] (Cited on page 41.)
Lerche, I. and Parker, E.N., 1972, "The Generation of Magnetic Fields in Astrophysical Bodies. IX. A Solar Dynamo Based on Horizontal Shear", Astrophys. J., 176, 213. [DOI], [ADS] (Cited on page 25.)
Lopes, I. and Passos, D., 2009, "Solar Variability Induced in a Dynamo Code by Realistic Meridional Circulation Variations", Solar Phys., 257, 1–12. [DOI], [ADS] (Cited on pages 62, 63, and 71.)
MacGregor, K.B. and Charbonneau, P., 1997, "Solar interface dynamos. I. Linear, kinematic models in cartesian geometry", Astrophys. J., 486, 484–501. [DOI], [ADS] (Cited on pages 29 and 31.)
Malkus, W.V.R. and Proctor, M.R.E., 1975, "The macrodynamics of α-effect dynamos in rotating fluids", J. Fluid Mech., 67, 417–443 (Cited on page 54.)
Markiel, J.A. and Thomas, J.H., 1999, "Solar interface dynamo models with a realistic rotation profile", Astrophys. J., 523, 827–837. [DOI], [ADS] (Cited on pages 29 and 31.)
Mason, J., Hughes, D.W. and Tobias, S.M., 2002, "The competition in the solar dynamo between surface and deep-seated α-effect", Astrophys. J. Lett., 580, L89–L92. [DOI], [ADS] (Cited on page 47.)
Mason, J., Hughes, D.W. and Tobias, S.M., 2008, "The effects of flux transport on interface dynamos", Mon. Not. R. Astron. Soc., 391, 467–480. [DOI], [ADS], [arXiv:0812.0199] (Cited on page 31.)
Matthews, P.C., Hughes, D.W. and Proctor, M.R.E., 1995, "Magnetic Buoyancy, Vorticity, and Three-dimensional Flux-Tube Formation", Astrophys. J., 448, 938–941. [DOI], [ADS] (Cited on page 18.)
Miesch, M.S., 2005, "Large-Scale Dynamics of the Convection Zone and Tachocline", Living Rev. Solar Phys., 2, lrsp–2005–1. URL (accessed 1 May 2005): http://www.livingreviews.org/lrsp-2005-1 (Cited on page 37.)
Miesch, M.S. and Toomre, J., 2009, "Turbulence, Magnetism, and Shear in Stellar Interiors", Annu. Rev. Fluid Mech., 41, 317–345. [DOI], [ADS] (Cited on page 48.)
Mininni, P.D. and Gómez, D.O., 2002, "Study of Stochastic Fluctuations in a Shell Dynamo", Astrophys. J., 573, 454–463. [DOI], [ADS] (Cited on page 60.)
Mininni, P.D. and Gómez, D.O., 2004, "A new technique for comparing solar dynamo models and observations", Astron. Astrophys., 426, 1065–1073. [DOI], [ADS] (Cited on pages 60 and 64.)
Mininni, P.D., Gómez, D.O. and Mindlin, G.B., 2002, "Instantaneous phase and amplitude correlation in the solar cycle", Solar Phys., 208, 167–179. [DOI], [ADS] (Cited on page 53.)
Moffatt, H.K., 1978, Magnetic Field Generation in Electrically Conducting Fluids, Cambridge Monographs on Mechanics and Applied Mathematics, Cambridge University Press, Cambridge; New York (Cited on page 21.)
Moreno-Insertis, F., 1983, "Rise time of horizontal magnetic flux tubes in the convection zone of the Sun", Astron. Astrophys., 122, 241–250. [ADS] (Cited on page 22.)
Moreno-Insertis, F., 1986, "Nonlinear time-evolution of kink-unstable magnetic flux tubes in the convective zone of the sun", Astrophys. J., 166, 291–305. [ADS] (Cited on pages 22 and 57.)
Moss, D., 1999, "Non-axisymmetric solar magnetic fields", Mon. Not. R. Astron. Soc., 306, 300–306. [DOI], [ADS] (Cited on page 72.)
Moss, D. and Brooke, J.M., 2000, "Towards a model of the solar dynamo", Mon. Not. R. Astron. Soc., 315, 521–533. [DOI], [ADS] (Cited on pages 54 and 55.)
Moss, D., Tuominen, I. and Brandenburg, A., 1990, "Buoyancy-limited thin-shell dynamos", Astron. Astrophys., 240, 142–149. [ADS] (Cited on page 71.)
Moss, D., Brandenburg, A. and Tuominen, I., 1991, "Properties of mean field dynamos with nonaxisymmetric α-effect", Astron. Astrophys., 347, 576–579. [ADS] (Cited on page 72.)
Moss, D., Brandenburg, A., Tavakol, R. and Tuominen, I., 1992, "Stochastic effects in mean-field dynamos", Astron. Astrophys., 265, 843–849. [ADS] (Cited on page 62.)
Moss, D., Sokoloff, D., Usoskin, I. and Tutubalin, V., 2008, "Solar Grand Minima and Random Fluctuations in Dynamo Parameters", Solar Phys., 250, 221–234. [DOI], [ADS], [arXiv:0806.3331] (Cited on pages 60 and 64.)
Mundt, M.D., Maguire II, W.B. and Chase, R.R.P., 1991, "Chaos in the Sunspot Cycle: Analysis and Prediction", J. Geophys. Res., 96, 1705–1716. [DOI], [ADS] (Cited on page 53.)
Muñoz-Jaramillo, A., Nandy, D. and Martens, P.C.H., 2009, "Helioseismic Data Inclusion in Solar Dynamo Models", Astrophys. J., 698, 461–478. [DOI], [ADS], [arXiv:0811.3441] (Cited on pages 45 and 57.)
Muñoz-Jaramillo, A., Nandy, D. and Martens, P.C.H., 2010a, "Magnetic Quenching of Turbulent Diffusivity: Reconciling Mixing-length Theory Estimates with Kinematic Dynamo Models of the Solar Cycle", arXiv, e-print. [ADS], [arXiv:1007.1262] (Cited on page 37.)
Muñoz-Jaramillo, A., Nandy, D., Martens, P.C.H. and Yeates, A.R., 2010b, "A Double-Ring Algorithm for Modeling Solar Active Regions: Unifying Kinematic Dynamo Models and Surface Flux-Transport Simulations", arXiv, e-print. [ADS], [arXiv:1006.4346] (Cited on page 42.)
Mursula, K., Usoskin, I.G. and Kovaltsov, G.A., 2001, "Persistent 22-year cycle in sunspot activity: Evidence for a relic solar magnetic field", Solar Phys., 198, 51–56. [DOI], [ADS] (Cited on page 53.)
Nandy, D. and Choudhuri, A.R., 2001, "Toward a mean-field formulation of the Babcock-Leighton type solar dynamo. I. α-coefficient versus Durney's double-ring approach", Astrophys. J., 551, 576–585. [DOI], [ADS] (Cited on pages 42, 43, and 45.)
Nandy, D. and Choudhuri, A.R., 2002, "Explaining the latitudinal distribution of sunspots with deep meridional flow", Science, 296, 1671–1673. [DOI], [ADS] (Cited on pages 42 and 45.)
Ossendrijver, A.J.H., Hoyng, P. and Schmitt, D., 1996, "Stochastic excitation and memory of the solar dynamo", Astron. Astrophys., 313, 938–948. [ADS] (Cited on page 60.)
Ossendrijver, M., 2003, "The solar dynamo", Astron. Astrophys. Rev., 11, 287–367. [DOI], [ADS] (Cited on pages 7, 11, and 21.)
Ossendrijver, M.A.J.H., 2000a, "Grand minima in a buoyancy-driven solar dynamo", Astron. Astrophys., 359, 364–372. [ADS] (Cited on pages 40, 41, and 65.)
Ossendrijver, M.A.J.H., 2000b, "The dynamo effect of magnetic flux tubes", Astron. Astrophys., 359, 1205–1210. [ADS] (Cited on page 41.)
Ossendrijver, M.A.J.H. and Covas, E., 2003, "Crisis-induced intermittency due to attractor-widening in a buoyancy-driven solar dynamo", Int. J. Bifurcat. Chaos, 13, 2327–2333. [DOI], [ADS] (Cited on page 64.)
Ossendrijver, M.A.J.H. and Hoyng, P., 1996, "Stochastic and nonlinear fluctuations in a mean field dynamo", Astron. Astrophys., 313, 959–970. [ADS] (Cited on page 60.)
Ossendrijver, M.A.J.H. and Hoyng, P., 1997, "Mean magnetic field and energy balance of Parker's surface-wave dynamo", Astron. Astrophys., 324, 329–343. [ADS] (Cited on page 31.)
Ossendrijver, M.A.J.H., Stix, M. and Brandenburg, A., 2001, "Magnetoconvection and dynamo coefficients: dependence of the α-effect on rotation and magnetic fields", Astron. Astrophys., 376, 713–726. [DOI], [ADS] (Cited on pages 21 and 60.)
Ossendrijver, M.A.J.H., Stix, M., Brandenburg, A. and Rüdiger, G., 2002, "Magnetoconvection and dynamo coefficients. II. Field-direction dependent pumping of magnetic field", Astron. Astrophys., 394, 735–745. [ADS] (Cited on page 22.)
Otmianowska-Mazur, K., Rüdiger, G., Elstner, D. and Arlt, R., 1997, "The turbulent EMF as a time series and the 'equality' of dynamo cycles", Geophys. Astrophys. Fluid Dyn., 86, 229–247. [DOI] (Cited on page 60.)
Parker, E.N., 1955, "Hydromagnetic Dynamo Models", Astrophys. J., 122, 293–314. [DOI], [ADS] (Cited on pages 8 and 24.)
Parker, E.N., 1975, "The Generation of Magnetic Fields in Astrophysical Bodies. X. Magnetic Buoyancy and the Solar Dynamo", Astrophys. J., 198, 205–209. [DOI], [ADS] (Cited on page 22.)
Parker, E.N., 1982, "The dynamics of fibril magnetic fields. I. Effect of flux tubes on convection", Astrophys. J., 256, 292–301. [DOI], [ADS] (Cited on page 71.)
Parker, E.N., 1993, "A solar dynamo surface wave at the interface between convection and nonuniform rotation", Astrophys. J., 408, 707–719. [DOI], [ADS] (Cited on page 28.)
Passos, D. and Lopes, I., 2008, "A Low-Order Solar Dynamo Model: Inferred Meridional Circulation Variations Since 1750", Astrophys. J., 686, 1420–1425. [DOI], [ADS] (Cited on pages 62 and 63.)
Passos, D. and Lopes, I.P., 2009, "Grand Minima Under the Light of a Low Order Dynamo Model", arXiv, e-print. [ADS], [arXiv:0908.0496] (Cited on page 62.)
Petrovay, K., 2000, "What makes the Sun tick?", in The Solar Cycle and Terrestrial Climate, Proceedings of the 1st Solar and Space Weather Euroconference: 25-29 September 2000, Instituto de Astrofísica de Canarias, Santa Cruz de Tenerife, Tenerife, Spain, (Eds.) Vázquez, M., Schmieder, B., vol. SP-463 of ESA Conference Proceedings, pp. 3–14, European Space Agency, Noordwijk (Cited on page 11.)
Petrovay, K. and Kerekes, A., 2004, "The effect of a meridional flow on Parker's interface dynamo", Mon. Not. R. Astron. Soc., 351, L59–L62. [DOI], [ADS], [astro-ph/0404607] (Cited on page 34.)
Petrovay, K. and Szakály, G., 1999, "Transport effects in the evolution of the global solar magnetic field", Solar Phys., 185, 1–13. [ADS] (Cited on page 10.)
Phillips, J.A., Brooke, J.M. and Moss, D., 2002, "The importance of physical structure in solar dynamo models", Astron. Astrophys., 392, 713–727. [DOI], [ADS] (Cited on pages 29 and 55.)
Pipin, V.V., 1999, "The Gleissberg cycle by a nonlinear αΛ dynamo", Astron. Astrophys., 346, 295–302. [ADS] (Cited on page 55.)
Pipin, V.V. and Seehafer, N., 2009, "Stellar dynamos with Ω × J effect", Astron. Astrophys., 493, 819–828. [DOI], [ADS], [arXiv:0811.4225] (Cited on page 17.)
Platt, N., Spiegel, E.A. and Tresser, C., 1993, "On-off intermittency: A mechanism for bursting", Phys. Rev. Lett., 70, 279–282. [DOI], [ADS] (Cited on page 62.)
Pouquet, A., Frish, U. and Leorat, J., 1976, "Strong MHD helical turbulence and the nonlinear dynamo effect", J. Fluid Mech., 77, 321–354. [DOI], [ADS] (Cited on pages 22 and 54.)
Proctor, M.R.E. and Gilbert, A.D. (Eds.), 1994, Lectures on Solar and Planetary Dynamos, Publications of the Newton Institute, Cambridge University Press, Cambridge; New York (Cited on page 11.)
Rädler, K.-H., Kleeorin, N. and Rogachevskii, I., 2003, "The Mean Electromotive Force for MHD Turbulence: The Case of a Weak Mean Magnetic Field and Slow Rotation", Geophys. Astrophys. Fluid Dyn., 97, 249–274. [DOI], [ADS], [astro-ph/0209287] (Cited on page 17.)
Rempel, M., 2005, "Influence of Random Fluctuations in the Λ-Effect on Meridional Flow and Differential Rotation", Astrophys. J., 631, 1286–1292. [DOI], [ADS], [astro-ph/0610132] (Cited on page 37.)
Rempel, M., 2006a, "Transport of Toroidal Magnetic Field by the Meridional Flow at the Base of the Solar Convection Zone", Astrophys. J., 637, 1135–1142. [DOI], [ADS], [astro-ph/0610133] (Cited on pages 37 and 72.)
Rempel, M., 2006b, "Flux-Transport Dynamos with Lorentz Force Feedback on Differential Rotation and Meridional Flow: Saturation Mechanism and Torsional Oscillations", Astrophys. J., 647, 662–675. [DOI], [ADS], [astro-ph/0604446] (Cited on pages 37, 55, and 72.)
Rempel, M. and Schüssler, M., 2001, "Intensification of magnetic fields by conversion of potential energy", Astrophys. J. Lett., 552, L171–L174. [DOI], [ADS] (Cited on page 71.)
Ribes, J.C. and Nesme-Ribes, E., 1993, "The solar sunspot cycle in the Maunder minimum AD1645 to AD1715", Astron. Astrophys., 276, 549–563. [ADS] (Cited on pages 51, 64, and 72.)
Roald, C.B. and Thomas, J.H., 1997, "Simple solar dynamo models with variable α and ω effects", Mon. Not. R. Astron. Soc., 288, 551–564. [ADS] (Cited on page 57.)
Roberts, P.H. and Stix, M., 1972, "α-Effect Dynamos, by the Bullard-Gellman Formalism", Astron. Astrophys., 18, 453. [ADS] (Cited on page 32.)
Rozelot, J.P., 1995, "On the chaotic behaviour of the solar activity", Astron. Astrophys., 297, L45–L48. [ADS] (Cited on page 53.)
Rüdiger, G. and Arlt, R., 2003, "Physics of the solar cycle", in Advances in Nonlinear Dynamos, (Eds.) Ferriz-Mas, A., Núñez, M., vol. 9 of The Fluid Mechanics of Astrophysics and Geophysics, pp. 147–195, Taylor & Francis, London, New York. [Google Books] (Cited on pages 11 and 28.)
Rüdiger, G. and Brandenburg, A., 1995, "A solar dynamo in the overshoot layer: cycle period and butterfly diagram", Astron. Astrophys., 296, 557–566. [ADS] (Cited on page 28.)
Rüdiger, G. and Elstner, D., 1994, "Non-axisymmetry vs. axisymmetry in dynamo-excited stellar magnetic fields", Astron. Astrophys., 281, 46–50. [ADS] (Cited on page 72.)
Rüdiger, G. and Elstner, D., 2002, "Is the Butterfly diagram due to meridional motions?", Astron. Nachr., 323, 432–435. [DOI], [ADS] (Cited on page 34.)
Rüdiger, G. and Hollerbach, R., 2004, The Magnetic Universe: Geophysical and Astrophysical Dynamo Theory, Wiley-VCH, Weinheim. [ADS], [Google Books] (Cited on page 21.)
Rüdiger, G. and Kitchatinov, L.L., 1993, "Alpha-effect and alpha-quenching", Astron. Astrophys., 269, 581–588. [ADS] (Cited on page 21.)
Rüdiger, G., Kitchatinov, L.L., Küker, M. and Schultz, M., 1994, "Dynamo models with magnetic diffusivity-quenching", Geophys. Astrophys. Fluid Dyn., 78, 247–259. [DOI], [ADS] (Cited on page 22.)
Rüdiger, G., Kitchatinov, L.L. and Arlt, R., 2005, "The penetration of meridional flow into the tachocline and its meaning for the solar dynamo", Astron. Astrophys., 444, L53–L56. [DOI], [ADS] (Cited on page 45.)
Schatten, K.H., 2009, "Modeling a Shallow Solar Dynamo", Solar Phys., 255, 3–38. [DOI], [ADS] (Cited on page 71.)
Schatten, K.H., Scherrer, P.H., Svalgaard, L. and Wilcox, J.M., 1978, "Using dynamo theory to predict the sunspot number during solar cycle 21", Geophys. Res. Lett., 5, 411–414. [DOI], [ADS] (Cited on page 68.)
Schmalz, S. and Stix, M., 1991, "An αΩ dynamo with order and chaos", Astron. Astrophys., 245, 654–661. [ADS] (Cited on page 57.)
Schmitt, D., 1987, "An αω-dynamo with an α-effect due to magnetostrophic waves", Astron. Astrophys., 174, 281–287. [ADS] (Cited on page 40.)
Schmitt, D. and Schüssler, M., 1989, "Non-linear dynamos I. One-dimensional model of a thin layer dynamo", Astron. Astrophys., 223, 343–351. [ADS] (Cited on page 71.)
Schmitt, D. and Schüssler, M., 2004, "Does the butterfly diagram indicate a solar flux-transport dynamo", Astron. Astrophys., 421, 349–351. [ADS] (Cited on page 45.)
Schmitt, D., Schüssler, M. and Ferriz-Mas, A., 1996, "Intermittent solar activity by an on-off dynamo", Astron. Astrophys., 311, L1–L4. [ADS] (Cited on pages 40, 41, and 65.)
Schou, J. and Bogart, R.S., 1998, "Flows and Horizontal Displacements from Ring Diagrams", Astrophys. J. Lett., 504, L131–L134. [DOI], [ADS] (Cited on page 31.)
Schrijver, C.J. and Siscoe, G.L. (Eds.), 2009, Heliophysics: Plasma Physics of the Local Cosmos, Cambridge University Press, Cambridge (Cited on page 11.)
Schrijver, C.J., Title, A.M., van Ballegooijen, A.A., Hagenaar, H.J. and Shine, R.A., 1997, "Sustaining the Quiet Photospheric Network: The Balance of Flux Emergence, Fragmentation, Merging, and Cancellation", Astrophys. J., 487, 424–436. [DOI], [ADS] (Cited on page 60.)
Schrijver, C.J., DeRosa, M.L. and Title, A.M., 2002, "What Is Missing from Our Understanding of Long-Term Solar and Heliospheric Activity?", Astrophys. J., 577, 1006–1012. [DOI], [ADS] (Cited on page 70.)
Schüssler, M., 1977, "On Buoyant Magnetic Flux Tubes in the Solar Convection Zone", Astron. Astrophys., 56, 439–442. [ADS] (Cited on page 22.)
Schüssler, M., 1996, "Magnetic flux tubes and the solar dynamo", in Solar and Astrophysical Magnetohydrodynamic Flows, Proceedings of the NATO Advanced Study Institute, held in Heraklion, Crete, Greece, June 1995, (Ed.) Tsinganos, K.C., vol. 481 of NATO ASI Series C, pp. 17–37, Kluwer, Dordrecht; Boston (Cited on page 17.)
Schüssler, M. and Ferriz-Mas, A., 2003, "Magnetic flux tubes and the dynamo problem", in Advances in Nonlinear Dynamos, (Eds.) Ferriz-Mas, A., Núñez, M., vol. 9 of The Fluid Mechanics of Astrophysics and Geophysics, pp. 123–146, Taylor & Francis, London, New York. [Google Books] (Cited on page 17.)
Seehafer, N. and Pipin, V.V., 2009, "An advective solar-type dynamo without the α effect", Astron. Astrophys., 508, 9–16. [DOI], [ADS], [arXiv:0910.2614] (Cited on page 34.)
Sheeley Jr, N.R., 1991, "Polar faculae: 1906-1990", Astrophys. J., 374, 386–389. [ADS] (Cited on pages 43 and 51.)
Sokoloff, D. and Nesme-Ribes, E., 1994, "The Maunder minimum: A mixed-parity dynamo mode?", Astron. Astrophys., 288, 293–298. [ADS] (Cited on page 72.)
Spiegel, E.A. and Zahn, J.-P., 1992, "The solar tachocline", Astron. Astrophys., 265, 106–114. [ADS] (Cited on page 17.)
Spruit, H.C., 1981, "Equations for Thin Flux Tubes in Ideal MHD", Astron. Astrophys., 102, 129–133. [ADS] (Cited on page 40.)
Steiner, O. and Ferriz-Mas, A., 2005, "Connecting solar radiance variability to the solar dynamo with the virial theorem", Astron. Nachr., 326, 190–193. [DOI], [ADS] (Cited on page 31.)
Stix, M., 1976, "Differential Rotation and the Solar Dynamo", Astron. Astrophys., 47, 243–254. [ADS] (Cited on page 24.)
Stix, M., 2002, The Sun: An introduction, Astronomy and Astrophysics Library, Springer, Berlin, New York, 2nd edn. (Cited on page 11.)
Tapping, K., 1987, "Recent solar radio astronomy at centimeter wavelengths: the temporal variability of the 10.7 cm flux", J. Geophys. Res., 92, 829–838. [DOI] (Cited on page 51.)
Thelen, J.-C., 2000a, "A mean electromotive force induced by magnetic buoyancy instabilities", Mon. Not. R. Astron. Soc., 315, 155–164. [DOI], [ADS] (Cited on pages 18 and 38.)
Thelen, J.-C., 2000b, "Nonlinear αω-dynamos driven by magnetic buoyancy", Mon. Not. R. Astron. Soc., 315, 165–183. [DOI], [ADS] (Cited on pages 38 and 54.)
Tobias, S.M., 1996a, "Diffusivity quenching as a mechanism for Parker's surface dynamo", Astrophys. J., 467, 870–880. [DOI], [ADS] (Cited on pages 29 and 31.)
Tobias, S.M., 1996b, "Grand minima in nonlinear dynamos", Astron. Astrophys., 307, L21–L24. [ADS] (Cited on page 64.)
Tobias, S.M., 1997, "The solar cycle: parity interactions and amplitude modulation", Astron. Astrophys., 322, 1007–1017. [ADS] (Cited on pages 29, 54, 55, and 64.)
Tobias, S.M., 2002, "Modulation of solar and stellar dynamos", Astron. Nachr., 323, 417–423. [DOI], [ADS] (Cited on page 11.)
Tobias, S.M., Brummell, N.H., Clune, T.L. and Toomre, J., 2001, "Transport and storage of magnetic fields by overshooting turbulent compressible convection", Astrophys. J., 549, 1183–1203. [DOI], [ADS] (Cited on page 48.)
Tobias, S.M., Cattaneo, F. and Brummell, N.H., 2008, "Convective Dynamos with Penetration, Rotation, and Shear", Astrophys. J., 685, 596–605. [DOI], [ADS] (Cited on page 48.)
Tomczyk, S., Schou, J. and Thompson, M.J., 1995, "Measurement of the Rotation Rate in the Deep Solar Interior", Astrophys. J. Lett., 448, L57–L60. [DOI], [ADS] (Cited on page 17.)
Toomre, J., Christensen-Dalsgaard, J., Hill, F., Howe, R., Komm, R.W., Schou, J. and Thompson, M.J., 2003, "Transient oscillations near the solar tachocline", in Local and Global Helioseismology: The Present and Future, Proceedings of SOHO 12/GONG+ 2002, 27 October - 1 November 2002, Big Bear Lake, California, U.S.A., (Ed.) Sawaya-Lacoste, H., vol. SP-517 of ESA Conference Proceedings, pp. 409–412, ESA, Noordwijk. [ADS] (Cited on page 70.)
Tworkowski, A., Tavakol, R., Brandenburg, A., Brooke, J.M., Moss, D. and Tuominen, I., 1998, "Intermittent behaviour in axisymmetric mean-field dynamo models in spherical shells", Mon. Not. R. Astron. Soc., 296, 287–295. [DOI], [ADS] (Cited on page 65.)
Ulrich, R.K. and Boyden, J.E., 2005, "The Solar Surface Toroidal Magnetic Field", Astrophys. J. Lett., 620, L123–L127. [DOI], [ADS] (Cited on pages 31 and 37.)
Usoskin, I.G., 2008, "A History of Solar Activity over Millennia", Living Rev. Solar Phys., 5, lrsp–2008–3. [ADS], [arXiv:0810.3972]. URL (accessed 9 April 2010): http://www.livingreviews.org/lrsp-2008-3 (Cited on pages 51 and 64.)
Usoskin, I.G. and Mursula, K., 2003, "Long-term solar cycle evolution: Review of recent developments", Solar Phys., 218, 319–343. [DOI] (Cited on pages 11 and 53.)
Usoskin, I.G., Mursula, K., Arlt, R. and Kovaltsov, G.A., 2009a, "A Solar Cycle Lost in 1793-1800: Early Sunspot Observations Resolve the Old Mystery", Astrophys. J. Lett., 700, L154–L157. [DOI], [ADS], [arXiv:0907.0063] (Cited on pages 54 and 64.)
Usoskin, I.G., Sokoloff, D. and Moss, D., 2009b, "Grand Minima of Solar Activity and the Mean- Field Dynamo", Solar Phys., 254, 345–355. [DOI], [ADS] (Cited on page 72.)
van Ballegooijen, A.A. and Choudhuri, A.R., 1988, "The possible role of meridional circulation in suppressing magnetic buoyancy", Astrophys. J., 333, 965–977. [DOI], [ADS] (Cited on pages 32 and 33.)
Wang, Y.-M. and Sheeley Jr, N.R., 1991, "Magnetic flux transport and the Sun's dipole moment: New twists to the Babcock-Leighton model", Astrophys. J., 375, 761–770. [ADS] (Cited on pages 41 and 70.)
Wang, Y.-M., Nash, A.G. and Sheeley Jr, N.R., 1989, "Magnetic flux transport on the sun", Science, 245, 712–718. [DOI], [ADS] (Cited on page 41.)
Wang, Y.-M., Sheeley Jr, N.R. and Nash, A.G., 1991, "A new cycle model including meridional circulation", Astrophys. J., 383, 431–442. [DOI], [ADS] (Cited on page 42.)
Wang, Y.-M., Lean, J. and Sheeley Jr, N.R., 2002, "Role of Meridional Flow in the Secular Evolution of the Sun's Polar Fields and Open Flux", Astrophys. J. Lett., 577, L53–L57. [ADS] (Cited on page 70.)
Weiss, N.O., Cattaneo, F. and Jones, C.A., 1984, "Periodic and aperiodic dynamo waves", Geophys. Astrophys. Fluid Dyn., 30, 305–341. [DOI], [ADS] (Cited on page 57.)
Wilmot-Smith, A.L., Nandy, D., Hornig, G. and Martens, P.C.H., 2006, "A Time Delay Model for Solar and Stellar Dynamos", Astrophys. J., 652, 696–708. [DOI], [ADS] (Cited on page 58.)
Yeates, A.R., Nandy, D. and Mackay, D.H., 2008, "Exploring the Physical Basis of Solar Cycle Predictions: Flux Transport Dynamics and Persistence of Memory in Advection- versus Diffusiondominated Solar Convection Zones", Astrophys. J., 673, 544–556. [DOI], [ADS], [arXiv:0709.1046] (Cited on page 69.)
Yoshimura, H., 1975, "Solar-cycle dynamo wave propagation", Astrophys. J., 201, 740–748. [DOI], [ADS] (Cited on page 24.)
Yoshimura, H., 1978, "Nonlinear astrophysical dynamos: Multiple-period dynamo wave oscillations and long-term modulations of the 22 year solar cycle", Astrophys. J., 226, 706–719. [DOI], [ADS] (Cited on page 57.)
Zhang, K., Chan, K.H., Zou, J., Liao, X. and Schubert, G., 2003a, "A three-dimensional spherical nonlinear interface dynamo", Astrophys. J., 596, 663–679. [DOI], [ADS] (Cited on pages 29 and 72.)
Zhang, K., Liao, X. and Schubert, G., 2003b, "Nonaxisymmetric Instability of a Toroidal Magnetic Field in a Rotating Sphere", Astrophys. J., 585, 1124–1137. [DOI], [ADS] (Cited on page 38.)
Zhang, K., Liao, X. and Schubert, G., 2004, "A sandwich interface dynamo: linear dynamo waves in the sun", Astrophys. J., 602, 468–480. [DOI], [ADS] (Cited on page 29.)
I wish to thank Jürg Beer, John Brooke, Mausumi Dikpati, Antonio Ferriz-Mas, Mihai Ghizaru, Gustavo Guerrero, David Hathaway, Mathieu Ossendrijver, Dário Passos, and Steve Tobias for providing data and/or graphical material for inclusion in this review; its original 2005 version also benefited from the constructive criticism of Peter Gilman and Michael Stix. At this point, all that would normally be left for me to do is to assure readers and colleagues that any error, omission or misrepresentation of their work is not intentional, and to offer general apologies in advance to all slighted. Here, however, the organic format of Living Reviews allows actual amendments and additions. Please send your comments/suggestions/criticisms to the above e-mail address. And for this I offer advance thanks to all future correspondents.
Département de Physique, Université de Montréal, CP 6128 Centre-Ville, Montréal (Qc), H3C-3J7, Canada
Paul Charbonneau
Correspondence to Paul Charbonneau.
mpg-Movie (2347.42480469 KB) Still from a movie showing a meridional plane animation of an αΩ dynamo solution including meridional circulation. With Rm = 10³, this solution is operating in the advection-dominated regime as a flux-transport dynamo. The corresponding time-latitude "butterfly" diagram is plotted in Figure 12C. Color-coding of the toroidal magnetic field and poloidal fieldlines as in Figure 7.
mpg-Movie (6019.31738281 KB) Still from a movie showing a meridional plane animation of a representative Babcock-Leighton dynamo solution from Charbonneau et al. (2005). Color coding of the toroidal field and poloidal fieldlines as in Figure 7. This solution uses the same differential rotation, magnetic diffusivity, and meridional circulation profile as the advection-dominated αΩ solution of Section 4.4, but now with the non-local surface source term, as formulated in Charbonneau et al. (2005), and parameter values Cα = 5, CΩ = 5 × 10⁴, Δη = 0.003, Rm = 840. Note again the strong amplification of the surface polar fields and the latitudinal stretching of poloidal fieldlines by the meridional flow at the core-envelope interface.
mpg-Movie (31113.9941406 KB) Still from a movie showing Latitude-Longitude Mollweide projection of the toroidal magnetic component at depth r/R = 0.695 in the 3D MHD simulation of Ghizaru et al. (2010). This large-scale axisymmetric component shows a well-defined overall antisymmetry about the equatorial plane, and undergoes polarity reversals approximately every 30 yr. The animation spans a little over three half-cycles, including three polarity reversals. Time is given in solar days, with 1 s.d. = 30 d.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Charbonneau, P. Dynamo Models of the Solar Cycle. Living Rev. Sol. Phys. 7, 3 (2010). https://doi.org/10.12942/lrsp-2010-3
Solar Cycle
Flux Rope
Differential Rotation
Meridional Circulation
Dynamo Model
Change summary
Major revision, updated and expanded.
Change details
Solar cycle models based on the Babcock–Leighton mechanism have undergone major developments in the past decade, and this is reflected in a new section entirely devoted to this class of dynamo models. In addition, also in the past decade global magnetohydrodynamical simulations of solar convection and dynamo action have reached a level where they now can generate large-scale magnetic fields undergoing more or less solar-like regular magnetic polarity reversals. These simulations are now discussed in a new section, with emphasis placed on physical links with geometrically and dynamically simpler dynamo models. The section on amplitude fluctuations and Grand Minima has been shortened and reworked to emphasize generic behaviors, with pointers to the technical literature for illustrative examples. About 130 references have been added.
Besides updates relating to the literature published in the past five years (about 60 new references added) and the reworking of a few sections of the 2005 version, the main novelties compared to the 2005 version are:
Material on turbulent pumping, and its effect in various types of dynamo models (Käpylä et al., 2006; Guerrero and de Gouveia Dal Pino, 2008).
Expanded Section 4.9 on MHD numerical simulations of large-scale dynamo action.
Added Section 5.7 on dynamo model-based cycle prediction schemes.
Inclusion (and discussion) of animations directly in the text, as opposed to being grouped in a resource archive, as in my original 2005 review.
By appropriate deletions elsewhere in the review, I have managed to retain its overall length at nearly the same as the 2005 version.
June 2015, 10(2): 255-293. doi: 10.3934/nhm.2015.10.255
Conservation law models for traffic flow on a network of roads
Alberto Bressan 1, and Khai T. Nguyen 2,
Penn State University Mathematics Dept., University Park, State College, PA 16802
Department of Mathematics, Penn State University, University Park, PA 16802, United States
Received June 2014 Revised January 2015 Published April 2015
The paper develops a model of traffic flow near an intersection, where drivers seeking to enter a congested road wait in a buffer of limited capacity. Initial data comprise the vehicle density on each road, together with the percentage of drivers approaching the intersection who wish to turn into each of the outgoing roads.
If the queue sizes within the buffer are known, then the initial-boundary value problems become decoupled and can be independently solved along each incoming road. Three variational problems are introduced, related to different kinds of boundary conditions. From the value functions, one recovers the traffic density along each incoming or outgoing road by a Lax type formula.
Conversely, if these value functions are known, then the queue sizes can be determined by balancing the boundary fluxes of all incoming and outgoing roads. In this way one obtains a contractive transformation, whose fixed point yields the unique solution of the Cauchy problem for traffic flow in a neighborhood of the intersection.
The present model accounts for backward propagation of queues along roads leading to a crowded intersection; it achieves well-posedness for general $L^\infty $ data and continuity w.r.t. weak convergence of the initial densities.
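The queue dynamics described above ultimately rest on the scalar LWR conservation law on each single road. As a hedged illustration only (this is not the authors' junction-and-buffer model), the sketch below integrates $\rho_t + f(\rho)_x = 0$ on one road with a Godunov scheme, assuming the standard flux $f(\rho)=\rho(1-\rho)$; all names and parameter values are illustrative.

```python
# Minimal sketch: Godunov scheme for the scalar LWR equation rho_t + f(rho)_x = 0
# on a single road, assuming the standard concave flux f(rho) = rho * (1 - rho).
import numpy as np

def f(rho):
    return rho * (1.0 - rho)

def godunov_flux(rl, rr):
    # Exact Riemann flux for a concave flux whose maximum sits at rho = 1/2.
    if rl <= rr:
        return min(f(rl), f(rr))
    return f(0.5) if rr <= 0.5 <= rl else max(f(rl), f(rr))

def step(rho, dx, dt):
    # One conservative update with simple outflow (copy) boundary conditions.
    ext = np.concatenate(([rho[0]], rho, [rho[-1]]))
    flux = np.array([godunov_flux(ext[i], ext[i + 1]) for i in range(ext.size - 1)])
    return rho - dt / dx * (flux[1:] - flux[:-1])

x = np.linspace(0.0, 1.0, 200)
rho = np.where(x < 0.5, 0.2, 0.9)   # light traffic running into a congested stretch
dx = x[1] - x[0]
dt = 0.4 * dx                       # CFL-safe time step, since max |f'(rho)| = 1
for _ in range(200):
    rho = step(rho, dx, dt)         # the back of the queue propagates upstream
```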
Keywords: Hamilton–Jacobi equations, Lax-type formula, traffic flows, scalar conservation laws, network of roads.
Mathematics Subject Classification: Primary: 49K35, 35L65; Secondary: 90B2.
Citation: Alberto Bressan, Khai T. Nguyen. Conservation law models for traffic flow on a network of roads. Networks & Heterogeneous Media, 2015, 10 (2) : 255-293. doi: 10.3934/nhm.2015.10.255
Integral geometry
From Encyclopedia of Mathematics
The theory of invariant measures (with respect to continuous groups of transformations of a space onto itself) on sets consisting of submanifolds of the space (for example, lines, planes, geodesics, convex surfaces, etc.; in other words, manifolds preserving their type under the transformations in question). Integral geometry has been constructed for various spaces, primarily Euclidean, projective and homogeneous spaces.
Integral geometry is concerned with the introduction of invariant measures (cf. Invariant measure), their relationships and their geometric applications. It arose in connection with refinements of statements of problems in geometric probabilities.
In order to introduce an invariant measure one tries to begin with a function depending on the coordinates of the space under consideration whose integral over some region of the space is not changed under any continuous coordinate transformation belonging to a specified Lie group. This requires finding an integral invariant of the Lie group. The latter can be found as a solution to the system of partial differential equations
$$ \tag{1} \sum_{i=1}^{n} \frac{\partial}{\partial x_{i}} \left[ \xi_{h}^{i}(x) F(x) \right] = 0, \qquad h = 1, \dots, r, $$
where $ F ( x) $ is the required integral invariant, $ x $ is a point of the space (having dimension $ n $), $ \xi _ {h} ^ {i} $ are the coefficients of the infinitesimal transformation of the group, and $ r $ is the number of parameters of the transformation. Of great significance in integral geometry are measurable Lie groups, that is, groups that admit one and only one invariant (up to a constant factor). In particular, simple transitive groups are of this type.
The following problem in integral geometry consists of determining a measure on a set of manifolds that preserve their type under some group of continuous transformations. The measure is given by the integral
$$ \tag{2 } \int\limits _ {A _ \alpha } | F ( \alpha ^ {1} \dots \alpha ^ {q} ) | \ d \alpha ^ {1} \wedge \dots \wedge d \alpha ^ {q} , $$
where $ A _ \alpha $ is a set of points in the parameter space of the Lie group and $ F $ is an integral invariant of the group, defined by equation (1), or the density measure. The integral in (2) is also called an elementary measure of the set of manifolds. A specific choice of this measure sets up a complete correspondence with the fundamental problem in the study of geometric probabilities. In fact, the geometric probability of a set of manifolds having a property $ A _ {1} $ is the fraction of this set, regarded as a subset of the set of manifolds having a more general property $ A $. The problem reduces to establishing the measures of a set of manifolds with property $ A $ and of the subset with property $ A _ {1} $, and taking the ratio of them, the latter being the geometric probability.
In the case of a homogeneous multi-dimensional space, the measure of a set of manifolds (for example, points, straight lines, hyperplanes, pairs of hyperplanes, hyperspheres, second-order hypersurfaces) is uniquely defined (up to a constant factor) by the integral
$$ \tag{3 } \int\limits _ { X } d H = [ \omega _ {1} \dots \omega _ {h} ] , $$
where $ \{ \omega _ {i} \} _ {i=1} ^ {h} $ are the relative components of a given transitive Lie group $ G _ {2} $. Linear combinations with constant coefficients of these relative components are the left-hand sides of a system of Pfaffian equations corresponding to the set of manifolds under consideration. The measure (3) is called the kinematic measure in the homogeneous space with a given group of transformations defined on it. It is the generalization of the so-called Poincaré kinematic measure. (In what follows, all measures are given up to a constant factor.)
In integral geometry on the Euclidean plane $ E ^ {2} $ one usually considers only one type of continuous transformation, namely, the group of motions (without reflections). For a set of points, the integral invariant is the unit; for a set of lines it is again the unit if one selects for the parameters of the lines the parameters $ p $ and $ \phi $ of its normal equation. The length of an arbitrary curve is equal to $ \int n d p d \phi / 2 $, where $ n $ is the number of intersections of a straight line with the curve and the integration is carried out over the set of straight lines intersecting the curve. The measure of a set of straight lines intersecting two convex figures (ovals) is equal to the difference of the lengths of the crossed common tangent lines of the ovals and the outer common tangent lines (see Fig. a).
Figure: i051470a (outer and crossed common tangent lines of two ovals)
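As a hedged numerical illustration of the length formula $ \int n \, dp \, d\phi / 2 $ quoted above (this example is not part of the original article), the sketch below estimates the length of the unit circle by Monte Carlo sampling of lines $ x\cos\phi + y\sin\phi = p $; the names and the sample size are illustrative.

```python
# Monte Carlo check of the Cauchy-Crofton-type formula length = (1/2) * int n dp dphi,
# applied to the unit circle, whose length should come out close to 2*pi.
import numpy as np

rng = np.random.default_rng(0)
R, N = 2.0, 200_000                    # lines with |p| > 1 miss the unit circle anyway
p = rng.uniform(-R, R, N)              # sample the parameter box [-R, R] x [0, pi)
phi = rng.uniform(0.0, np.pi, N)       # phi plays no role for a circle, by symmetry

n = np.where(np.abs(p) < 1.0, 2, 0)    # a line meets the unit circle twice iff |p| < 1
box_area = 2 * R * np.pi
print(0.5 * box_area * n.mean(), 2 * np.pi)   # the two numbers should agree closely
```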
The measure of the set of straight lines dividing two ovals is equal to the length of the crossed common tangent lines minus the sum of the lengths of the contours of the ovals. The measure of a set of pairs of points is determined as
$$ \int\limits | t _ {2} - t _ {1} | d p \wedge d \phi \wedge d t _ {1} \wedge d t _ {2} , $$
where $ p $, $ \phi $ are the parameters of the normal equation of the straight line passing through the points and $ t _ {1} $ and $ t _ {2} $ are the distances along this straight line from the points to the point on the line having minimal distance from the origin (see Fig. b).
Figure: i051470b
The measure of a set of pairs of straight lines is equal to
$$ \int\limits | \sin ( \alpha _ {1} - \alpha _ {2} ) | \ d x \wedge d y \wedge d \alpha _ {1} \wedge d \alpha _ {2} , $$
where $ x $ and $ y $ are the coordinates of the point of intersection of the pair of straight lines and $ \alpha _ {1} $ and $ \alpha _ {2} $ are the angles that these lines form with one of the coordinates axes (see Fig. c.).
Figure: i051470c
The measure of the set of pairs of lines intersecting an oval is equal to half the square of the length of the curve bounding the oval minus the area of the oval multiplied by $ \pi $ (Crofton's formula). An application of the kinematic measure to the set of congruent ovals intersecting a given oval enables one to obtain one of the isoperimetric inequalities, namely, the classical Bonnesen inequality. If
$$ I _ {n} = \int\limits _ { G } \sigma ^ {n} d p \ d \phi ,\ J _ {n} = \ \int\limits _ { H } r ^ {n} | t _ {2} - t _ {1} | d p \wedge d \phi \wedge d t _ {1} \wedge d t _ {2} , $$
where $ \sigma $ is the length of the Jordan oval $ H $, $ G $ is the set of straight lines intersecting the oval and $ r $ is the distance between two points in the interior of the oval, then
$$ J _ {n} = \frac{2 I _ {n+3} }{( n + 2 ) ( n + 3 ) } , $$
which enables one to determine the mean distance between two points inside the oval in a simple way. The kinematic measure of a set of figures is the measure of the set of figures congruent to the given one. It is equal to
$$ \int\limits _ { X } d x \wedge d y \wedge d \phi , $$
where $ X $ is the set of points of the figure, $ x , y $ are the coordinates of a fixed point of it and $ \phi $ is an angle defining the rotation of the figure. The kinematic measure can be regarded as the measure of a set of moving coordinate frames. If the fixed coordinate frame is made to move, while the moving frame is fixed, then for the same set of transformations the kinematic measure remains unaltered (symmetry of the kinematic measure). If another moving system is associated with each element of the set of congruent figures, then the kinematic measure is also preserved. The measure of the set of congruent finite arcs of an arbitrary curve intersecting a given arc of some curve is equal to four times the product of the lengths of the arcs (Poincaré's formula). The measure of the set of segments of given length $ l $ lying on a straight line and intersecting an oval is equal to $ 2 \pi F _ {0} + 2 l L _ {0} $, where $ F _ {0} $ and $ L _ {0} $ are the area of the oval and the length of the curve bounding it. If the oval is replaced by a non-closed curve, then $ F _ {0} = 0 $ and the measure is equal to $ 2 l L _ {0} $. The measure of the set of ovals intersecting a given oval is equal to $ 2 \pi ( F _ {0} + F ) + L _ {0} L $, where $ F _ {0} $ and $ F $ are the corresponding areas and $ L _ {0} $ and $ L $ the lengths of the curves bounding the ovals.
Integral geometry in Euclidean space $ E ^ {3} $ is constructed in a similar way as integral geometry in $ E ^ {2} $. For sets of points, the integral invariant is again equal to the unit. If a set of straight lines is given by the set of their equations in two projective planes,
$$ x = k z + a ,\ \ y = h z + b , $$
then the integral invariant for the set of parallel translations and rotations around axes is equal to $ ( k ^ {2} + h ^ {2} + 1 ) ^ {-2} $. In particular, the measure of the sets of straight lines intersecting a convex closed surface (the surface of an ovaloid) is equal to half the surface area of the ovaloid.
By introducing the measure of a set of pairs of points by analogy with $ E ^ {2} $, one is able to calculate the average value of the 4th power of the lengths of the chords of the ovaloid, which is equal to $ 12 V ^ {2} / \pi S $, where $ V $ and $ S $ are its volume and surface area. For pairs of intersecting straight lines defined by their equations in two projective planes:
$$ x = k _ {1} z + a - k _ {1} c ; \ \ y = h _ {1} z + b - h _ {1} c $$
$$ x = k _ {2} z + a - k _ {2} c ; \ \ y = h _ {2} z + b - h _ {2} c , $$
where $ a $, $ b $ and $ c $ are the coordinates of the points of intersection of the straight lines, it is equal to
$$ [ ( k _ {1} ^ {2} + h _ {1} ^ {2} + 1 ) ( k _ {2} ^ {2} + h _ {2} ^ {2} + 1 ) ] ^ {-3/2} . $$
The measures of the set of intersections of two given moving ovaloids are related in the same way as their volumes. For a plane, given by the equation in intercepts, the integral invariant is equal to
$$ ( a ^ {-2} + b ^ {-2} + c ^ {-2} ) ^ {-2} , $$
where $ a , b , c $ are the lengths of the intercepts. The measure of the set of planes intersecting a surface of area $ S $ is equal to $ \pi ^ {2} S / 2 $, while the average value of the lengths of the curves along which the ovaloid is intersected by the set of planes is equal to $ \pi S ^ {2} / 2 \overline{H}\; $, where $ \overline{H}\; $ is the total mean curvature.
The integral invariant for a pair of planes is equal to the product of the integral invariants of the sets of planes. The kinematic measure in $ E ^ {3} $ is equal to the product of the measure of the set of distinct oriented planes and the elementary kinematic measure in the orienting plane. The integral invariant for the rotation of a spatial figure having one fixed point is equal to
$$ 1 + l _ {1} ^ {2} + l _ {2} ^ {2} + l _ {3} ^ {2} , $$
where $ l _ {i} = \alpha _ {i} \mathop{\rm tan} ( \phi / 2 ) $, $ i = 1 , 2 , 3 $, $ \alpha _ {i} $ are the direction cosines of the axis of rotation and $ \phi $ is the angle of rotation around this axis. The measure of a set of bodies having a common point and differing by a rotation in space is equal to $ \pi ^ {2} $.
Integral geometry on a surface is constructed by the introduction of a measure on the set of geodesics as the integral of an exterior differential form on the surface over the whole set. Thus, the exterior differential form is the density of the set of geodesics, since it is invariant with respect to the choice of the system of curvilinear coordinates on the surface and with respect to the choice of the parameter defining the position of points on a geodesic. In geodesic polar coordinates the density has the form
$$ d G = \ \left ( \frac{\partial \sqrt {g ( \rho , \theta ) } }{\partial \rho } \right ) [ d \theta d \rho ] . $$
In particular, for the sphere $ d G = \cos \rho [ d \theta d \rho ] $, while for a pseudo-sphere, $ d G = \cosh \rho [ d \theta d \rho ] $. For the set of geodesics intersecting a smooth or piecewise-smooth curve, the density is equal to $ d G = | \sin \phi | [ d \phi d s ] $, where $ \phi $ is the angle of intersection and $ s $ is the arc length of the curve. The density of the kinematic measure (the kinematic density) is equal to $ d K = [ d P d V ] $, where $ d P $ is the area element of the surface and $ V $ is the angle between the geodesic and the polar radius. Many of the results of integral geometry on $ E ^ {2} $ generalize to the case of a homogeneous surface. The density of the measure of a set is the kinematic density, which enables one to obtain a generalization of Poincaré's formula for $ E ^ {2} $. The measure of the set of pairs of geodesics and pairs of points is constructed in the same way as for $ E ^ {2} $.
On the basis of the so-called polymetric geometry of P.K. Rashevskii (see [4]), the results of integral geometry on an arbitrary homogeneous surface can be generalized to a broader class of surfaces. The generalizations are carried out by the use of Rashevskii's bimetric system. To begin with, the measure is introduced on a two-parameter set of curves of the plane by two methods. Then, all the conclusions valid for the case of the plane (considered as a set of line elements) are generalized to the case of lines of constant geodesic curvature on an arbitrary surface.
Integral geometry on the projective plane $ P ^ {2} $. An integral invariant for the full group of projective transformations on $ P ^ {2} $,
$$ \tag{4 } x ^ \prime = \ \frac{a _ {1} x + b _ {1} y + c _ {1} }{a _ {3} x + b _ {3} y + 1 } ,\ \ y ^ \prime = \ \frac{a _ {2} x + b _ {2} y + c _ {2} }{a _ {3} x + b _ {3} y + 1 } , $$
exists only for triples of points and is equal to the cube of the reciprocal of the area of the triangle having these points as vertices. For pairs of points and the group of affine unimodular transformations
$$ \tag{5 } \left . \begin{array}{c} x ^ \prime = a _ {1} x + b _ {1} y + c _ {1} , \\ y ^ \prime = a _ {2} x + b _ {2} y + c _ {2} , \\ a _ {1} b _ {2} - a _ {2} b _ {1} = 1 , \\ \end{array} \right \} $$
the integral invariant is equal to the unit, while for the group of affine transformations the integral invariant of the set of pairs of points is equal to $ ( x _ {1} y _ {2} - x _ {2} y _ {1} ) ^ {-2} $, where $ x _ {1} , y _ {1} $ and $ x _ {2} , y _ {2} $ are the coordinates of the points.
The set of straight lines of the projective plane is non-measurable, but for point-line pairs and the full group of projective transformations (4) the integral invariant is equal to $ ( x _ {0} \alpha + y _ {0} \beta + 1 ) ^ {-3} $, where $ x _ {0} $, $ y _ {0} $ are the coordinates of the point and the straight line is given by the equation $ \alpha x + \beta y + 1 = 0 $. The set of parallelograms given by the equations
$$ \left . \begin{array}{c} \alpha _ {i} x + \beta _ {i} y + 1 = 0 \\ \gamma _ {i} ( \alpha _ {i} x + \beta _ {i} y ) + 1 = 0 \\ \end{array} \right \} ,\ \ i = 1 , 2 , $$
where $ \alpha _ {1} \beta _ {2} - \alpha _ {2} \beta _ {1} \neq 0 $, has density measure
$$ [ ( \gamma _ {1} - 1 ) ^ {2} ( \alpha _ {2} \beta _ {2} - \alpha _ {2} \beta _ {1} ) ^ {2} ] ^ {-1} $$
for the group of affine transformations
$$ \tag{6 } \left . \begin{array}{c} x ^ \prime = a _ {1} x + b _ {1} y + c _ {1} , \\ y ^ \prime = a _ {2} x + b _ {2} y + c _ {2} , \\ a _ {1} b _ {2} - a _ {2} b _ {1} \neq 0 \\ \end{array} \right \} . $$
For the set of circles on $ P ^ {2} $ given by the equation
$$ x ^ {2} + y ^ {2} - 2 \alpha x - 2 \beta y + \gamma = 0 , $$
the maximal group of transformations is the group of similarity transformations
$$ x = a x ^ \prime + b y ^ \prime + c ,\ \ y = b x ^ \prime + a y ^ \prime + d . $$
Their density measure is given by $ ( \gamma - \alpha ^ {2} - \beta ^ {2} ) ^ {-2} $. On this basis, the measures of sets of circles (whose centres are in some domain) intersecting a given curve can be calculated. The measure of a set of circles on $ P ^ {2} $ is equal to the kinematic measure for the transformations generated by translations and homotheties.
The set of conic sections (invariant $ \Delta \neq 0 $) has as its maximal group of invariance the projective group:
$$ x = \ \frac{\alpha _ {11} x ^ \prime + \alpha _ {12} y ^ \prime + \alpha _ {13} }{\alpha _ {31} x ^ \prime + \alpha _ {32} y ^ \prime + 1 } ,\ \ y = \ \frac{\alpha _ {21} x ^ \prime + \alpha _ {22} y ^ \prime + \alpha _ {23} }{\alpha _ {31} x ^ \prime + \alpha _ {32} y ^ \prime + 1 } , $$
where $ \mathop{\rm det} | \alpha _ {ij} | \neq 0 $, $ i , j = 1 , 2 , 3 $. Its density measure is equal to $ \Delta ^ {-2} $. For the set of hyperbolas, the maximal group of invariance is the affine group (6). Their density measure is equal to $ a ^ {-1} \Delta ^ {-2} \sqrt {b ^ {2} - a c } $, where $ a $, $ b $, $ c $ are the coefficients of the general equation of the hyperbola. Similarly, the maximal group of invariance of ellipses is measurable, but for parabolas it is non-measurable. For parabolas, only subgroups of it are measurable, such as the groups of unimodular affine and centro-affine transformations. The elementary kinematic measure of the group of projective transformations (4) is equal to $ \Delta ^ {-3} $, where $ \Delta $ is the determinant of the transformation.
The set of lines of the centro-affine plane is measurable. Their density measure is equal to $ p ^ {-3} $, where $ p $ is the free term of the normal equation of the line. The kinematic measure of the group of transformations (5) of the non-centro-affine plane is equal to $ a ^ {-1} $. If $ \Delta = \Delta ( \phi ) $ is the width of an oval, then $ \Delta ^ {-2} $ is its density measure for the affine unimodular transformations.
Integral geometry in the projective space $ P ^ {3} $. The group of motions in projective space $ P ^ {3} $ with a rectangular Cartesian coordinate system is measurable only for the set of quadruples of points. The density measure in this case is equal to $ \Delta ^ {-4} $, where $ \Delta $ is the volume of the tetrahedron whose vertices are the points. For pairs and triples of points, only the group of affine unimodular transformations is measurable. Its density measure is equal to the unit. For triples of points, the group of centro-affine transformations is also measurable (provided that the points do not lie on the same line). The set of straight lines in $ P ^ {3} $ has as its maximal group of invariance the full group of motions, but it is non-measurable for them (only a certain subgroup of it is measurable). The full group of transformations for pairs of straight lines is measurable. The set of planes does not admit a measure with respect to the full group of transformations in $ P ^ {3} $; for the set of planes, only its subgroup of orthogonal transformations is measurable. Pairs of planes admit a measure for the group of centro-affine unimodular transformations. Parallelepipeds admit a measure for the subgroup of affine transformations, and the set of pairs of planes-points admits a measure for the full group of transformations in $ P ^ {3} $. The set of spheres in $ P ^ {3} $ admits a measure for the group of similarity transformations, the density being equal to $ R ^ {-4} $, where $ R $ is the radius of the sphere. The set of second-order surfaces admits a measure for the full group of transformations in $ P ^ {3} $, the density being $ \Delta ^ {-5} $, where $ \Delta $ is the invariant of the surface. The set of circles in $ P ^ {3} $ admits a measure for the group of similarity transformations, the density being equal to $ R ^ {-4} $, where $ R $ is the radius of the circle. The kinematic measure in $ P ^ {3} $ of the full group of transformations is equal to $ \Delta ^ {-4} $, where $ \Delta $ is its determinant. The density measure of a set of points in three-dimensional centro-affine unimodular space is equal to the unit. The set of planes in space is also measurable, with density $ p ^ {-4} $, where $ p $ is the parameter of the normal equation of the plane.
Integral geometry on a surface $ V ^ {2} $ of constant curvature. The family of curves in $ V ^ {2} $ with constant positive curvature has $ G _ {3} ^ {+} ( x) $ as maximal group of invariance. Families of special type (three-, two- and one-parameter) admit a density measure for the maximal group of invariance (infinitesimal transformations of the group $ G _ {3} ^ {+} ( x) $), and for $ G _ {1} ( x) $ in the one-parameter case.
The same holds for $ V ^ {2} $ with negative constant curvature. Three-parameter curves of special type admit a density measure for $ G _ {3} ^ {-} ( x) $ as maximal group of invariance; it is equal to the unit. Measures also exist for groups in the case of special types of two- and one-parameter families. In both cases, the condition that the family of curves $ F _ {q} ( x) $ have a measure for $ G _ {2} ( x) $ as maximal group of invariance is that the adjoint group $ H _ {2} ( \alpha ) $ be spatially transitive (measurable).
Generalizations of integral geometry. The above account relates to the traditional understanding of the content of integral geometry as a theory of invariant measures on sets of geometric objects in various spaces, mainly in homogeneous spaces. In the sense of integral geometry as a theory of transformation of functions given on a set of certain geometric objects in some space into functions defined on a set of other geometric objects of the same space, the problem converse to integrating some function of points of the space along some geometric objects of the same space is posed as the fundamental problem. For example, if an integral transform of a function $ f $ in $ n $- dimensional affine space (a Radon transform) is introduced as its integral over hypersurfaces, then the converse problem is to recover $ f $ in terms of its integral over the hypersurfaces, that is, the problem of finding the inverse Radon transform. Similarly, problems have been posed and solved concerning recovering functions on ruled second-order surfaces in four-dimensional complex space for which the integrals over the straight lines forming this surface are known, and also the question of recovering a function in terms of its integral taken over horospheres in a real or imaginary Lobachevskii space.
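As a hedged, concrete instance of this converse problem (added here for illustration and not part of the original article), the sketch below computes the Radon transform of a test image (its integrals over lines in the plane) and recovers the image by filtered back-projection, using the scikit-image routines radon and iradon.

```python
# Illustrative sketch of the Radon transform and its inversion in the plane.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles (degrees)
sinogram = radon(image, theta=theta)                   # line integrals of the image
reconstruction = iradon(sinogram, theta=theta)         # filtered back-projection
print(np.abs(reconstruction - image).mean())           # small mean reconstruction error
```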
[1] W. Blaschke, "Vorlesungen über Integralgeometrie" , Deutsch. Verlag Wissenschaft. (1955)
[2] L.A. Santaló, "Introduction to integral geometry" , Hermann (1953)
[3] I.M. Gel'fand, M.I. Graev, N.Ya. Vilenkin, "Generalized functions" , 5. Integral geometry and representation theory , Acad. Press (1965) (Translated from Russian)
[4] P.K. Rashevskii, "Polymetric geometry" , Proc. Sem. Vektor. Tenzor. Anal. , 5 , Moscow-Leningrad (1941) pp. 21–147 (In Russian)
[5] M.I. Stoka, "Géométrie intégrale" , Gauthier-Villars (1968)
[6] G.I. Drinfel'd, "Integral geometry" Progress in Math. , 12 (1972) pp. 173–214 Itogi Nauk. Algebra. Topol. Geom. 1968 (1970) pp. 157–191
Reference [a1] gives a fairly complete survey of classical integral geometry up to 1976. Part of the more recent development was essentially influenced by an important paper of H. Federer [a2], who extended the classical kinematic and Crofton intersection formulas to curvature measures and sets of positive reach. Some of the later integral-geometric results involving curvature measures are described in the survey articles [a3], [a4]. Integral geometry plays an essential role in the recent development of stochastic geometry, as in the work of R.E. Miles, e.g. [a5], G. Matheron [a6], and others. The use of kinematic formulas for curvature measures in stochastic geometry can be seen in the articles [a7], [a8].
Another new branch of integral geometry is the combinatorial integral geometry developed by R.V. Ambartzumian [a9]. This theory, in which combinatorial relations between measures of certain sets of geometric objects play a central role, and invariance properties are not necessarily assumed, has also applications to stochastic geometry and interesting connections to Hilbert's fourth problem.
An impression of the scope of the "generalizations of integral geometry" as it is called in the main article above, can be obtained from the contributions of the conference proceedings [a10], and from [a11].
[a1] L.A. Santaló, "Integral geometry and geometric probability" , Addison-Wesley (1976)
[a2] H. Federer, "Curvature measures" Trans. Amer. Math. Soc. , 93 (1959) pp. 418–491
[a3] W. Weil, "Kinematic integral formulas for convex bodies" J. Tölke (ed.) J.M. Wills (ed.) , Contributions to geometry , Birkhäuser (1979) pp. 60–76
[a4] R. Schneider, J.A. Wieacker, "Random touching of convex bodies" R. Ambartzumian (ed.) W. Weil (ed.) , Stochastic Geometry, Geometric Statistics, Stereology , Teubner (1984) pp. 154–169
[a5] R.E. Miles, "Some new integral geometric formulae, with stochastic applications" J. Appl. Prob. , 16 (1979) pp. 592–606
[a6] G. Matheron, "Random sets and integral geometry" , Wiley (1975)
[a7] W. Weil, "Stereology: A survey for geometers" P.M. Gruber (ed.) J.M. Wills (ed.) , Convexity and its applications , Birkhäuser (1983) pp. 360–412
[a8] W. Weil, "Point processes of cylinders, particles and flats" Acta. Applic. Math. , 9 (1987) pp. 103–136
[a9] R.V. [R.V. Ambartsumyan] Ambartzumian, "Combinatorial integral geometry" , Wiley (1982)
[a10] R.L. Bryant (ed.) V. Guillemin (ed.) S. Helgason (ed.) R.O. Wells jr. (ed.) , Integral geometry , Amer. Math. Soc. (1987)
[a11] R.V. (ed.) Ambartzumian, "Stochastic and integral geometry" Acta. Applic. Math. , 9 : 1–2 (1987)
This article was adapted from an original article by S.F. Shushurin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Review of whole number patterns
Pictorial representations of algebra
Components in an expression
Laws of arithmetic with algebraic terms
Build algebraic expressions I
Describing patterns using algebra (Investigation)
Build algebraic expressions II
Expressions vs equations
Substitute into algebraic expressions I
Build algebraic equations I
Build algebraic equations II
Add and subtract algebraic terms I
Add and subtract negative algebraic terms
Rules for describing sequences
Find the rule from a table of values
Algebra in context
Magic Squares and Algebra (Investigation)
Identify equivalent expressions
Substitute into algebraic expressions II
Substitute to complete a table of values
Substitute into algebraic add/sub expressions
Add and subtract algebraic terms II
Substitute into algebraic expressions III
Multiply algebraic terms
Divide algebraic terms
Perform mixed operations with algebraic terms
Substitute into algebraic expressions IV
Substitute into common formulae
Substitution resulting in an equation
Multiply algebraic terms with negatives
Divide algebraic terms with negatives
Mixed operations with algebraic terms (incl negatives)
Simplify algebraic expressions involving multiple operations
Distributive law I
Distributive law (non-linear terms)
Build algebraic expressions III
Factor numeric factors
Rewriting expressions
Equivalent expressions
Algebra in measurement
Simplify algebraic expressions involving distributive law
Substitute into algebraic expressions
Multiply and divide algebraic terms
Multiply algebraic terms with indices
Divide algebraic terms with indices
Identify highest common algebraic factor
Simplify non-linear algebraic expressions
Distributive law II
Simplify non-linear algebraic expressions involving distributive law
Expand binomial expressions
Factor algebraic terms
Simplify further algebraic expressions I
Identify components in an expression
Expand further binomial expressions
Expand perfect squares
Expand difference of two squares
Factorise algebraic factors
LCD with rational expressions
Simplify algebraic fractions
Add and subtract algebraic fractions
Add and subtract algebraic fractions with binomial numerators
Multiply and divide algebraic fractions
Mixed Operations with Algebraic Fractions
Simplify further algebraic expressions II
Build algebraic expressions
Expand further algebraic expressions
Algebraic Expressions
Spreadsheets and Substitution (Investigation)
Simplify and manipulate algebraic expressions
Factorise algebraic expressions
Expand binomials
Evaluate rational expressions
Addition and subtraction of rational expressions I
Addition and subtraction of rational expressions II
Measurement problems
We've already learnt that there are a number of conventions (rules) which need to be followed in order to solve problems with different operations correctly. This is known as the order of operations.
The order goes:
Step 1: Do operations inside grouping symbols such as parentheses (...), brackets [...] and braces {...}.
Step 2: Evaluate powers and roots, then do multiplication and division going from left to right.
Step 3: Do addition and subtraction going from left to right.
Each time we complete one of these steps, we simplify the expression until we get one final answer.
Let's look at an example so we can see this process in action.
Simplify: $6x^2+4x^2\times5+7x^2$
Think: Multiplication comes before addition. Then collect like terms.
Do:
$6x^2+4x^2\times5+7x^2 = 6x^2+20x^2+7x^2$
$= 33x^2$
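If you want to check a worked example like this one with a computer algebra system (an optional aside, not part of the lesson), sympy applies the same order of operations:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.simplify(6*x**2 + 4*x**2*5 + 7*x**2))   # 33*x**2, as in the worked example
```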
Many questions that you will encounter will often require expanding. As revision, we've summarised some methods of expansion in the table below.
Factorised form and its expanded form:
$A\left(B+C\right) = AB+AC$
$A\left(B+C+D\right) = AB+AC+AD$
$\left(A+B\right)\left(C+D\right) = AC+AD+BC+BD$
$\left(A+B\right)^2 = A^2+2AB+B^2$
$\left(A-B\right)^2 = A^2-2AB+B^2$
$\left(A+B\right)\left(A-B\right) = A^2-B^2$
Let's look at a few examples.
Expand and simplify: $\left(5x-9\right)\left(5x+9\right)$
Think: This expression is of the form $\left(A-B\right)\left(A+B\right)$, so its expanded form will be a difference of two squares.
Do:
$\left(5x-9\right)\left(5x+9\right) = \left(5x\right)^2-9^2$
$= 25x^2-81$
Expand and simplify: $\left(\frac{6y}{x}-\frac{3x}{y}\right)^2$
Think: This expression is of the form $\left(A-B\right)^2$, so its expanded form will look like $A^2-2AB+B^2$.
Do:
$\left(\frac{6y}{x}-\frac{3x}{y}\right)^2 = \left(\frac{6y}{x}\right)^2-2\cdot\frac{6y}{x}\cdot\frac{3x}{y}+\left(\frac{3x}{y}\right)^2$
$= \frac{36y^2}{x^2}-36+\frac{9x^2}{y^2}$
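Both expansions above can be checked the same way (again an optional aside, not part of the lesson):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
print(sp.expand((5*x - 9)*(5*x + 9)))    # 25*x**2 - 81
print(sp.expand((6*y/x - 3*x/y)**2))     # 36*y**2/x**2 - 36 + 9*x**2/y**2
```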
Try simplifying the following algebraic expressions below.
Expand and simplify:
$6\left(10x+8y\right)-6\left(-6y-6x\right)$
Simplify $\frac{8x^2}{11}+\frac{5x^2-2x}{55}$.
Simplify the following expression:
$\frac{x+3}{x+4}\div\frac{x+3}{3}-\frac{1}{x+4}$
Expand and simplify: $\left(4st+\frac{2}{st}-4\right)\left(4st+\frac{2}{st}+4\right)$
ICMA 200 Principles and Mathematical Concepts
Course Description: Symbolic logic, proof techniques, sets, relations, functions, the real numbers, introduction to number theory.
Reference: Bridge to Abstract Mathematics: Mathematical Proof and Structures by Ronald P. Morash
Class Summary: 13 Sep 2016
The notion of set is a primitive, or undefined term in mathematics, analogous to point and line in plane geometry.
A set may be thought of as a well-defined collection of objects. The objects in the set are called elements of the set.
There are two methods of describing sets:
The roster method: for example, $A = \{2, 5, 6, 7\}$
The rule, or description, method: for example, $B = \{x \,|\,\text{$x$ is a prime number and $x \le 10$} \}$
Some special sets are $\mathbb{N}$ the set of all positive integers, $\mathbb{Z}$ the set of all integers, $\mathbb{Q}$ the set of all rational numbers, $\mathbb{R}$ the set of all real numbers, and $\mathbb{C}$ the set of all complex numbers.
A set $I$, all of whose elements are real numbers, is called an interval if and only if, whenever $a$ and $b$ are elements of $I$ and $c$ is a real number with $a<c<b,$ then $c\in I$.
Ex: Solve the following inequalities and express each solution set in interval notation:$$(\mathrm{a}) \,\, 7x - 9 \le 16 \qquad (\mathrm{b}) \,\, |2x+3| < 5 \qquad (\mathrm{c})\,\, 2x^2+x-28 \le 0$$
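One way to check the three solution sets (an added aside; the course expects the inequalities to be solved by hand and written in interval notation) is with sympy's solveset:

```python
import sympy as sp

x = sp.symbols('x', real=True)
print(sp.solveset(7*x - 9 <= 16, x, sp.S.Reals))         # Interval(-oo, 25/7)
print(sp.solveset(sp.Abs(2*x + 3) < 5, x, sp.S.Reals))   # Interval.open(-4, 1)
print(sp.solveset(2*x**2 + x - 28 <= 0, x, sp.S.Reals))  # Interval(-4, 7/2)
```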
An empty, or null, set is a set with no element. We use the symbol $\{\,\,\}$ or $\emptyset$ to denote the empty set.
Ex: Solve the quadratic inequality $5x^2+3x+2 < 0$.
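A quick way to see why this solution set is empty (an added worked step, not in the original class summary): the discriminant is negative and the leading coefficient is positive, so the parabola lies entirely above the $x$-axis:
$$\Delta = 3^2 - 4\cdot 5\cdot 2 = -31 < 0, \qquad a = 5 > 0,$$
hence $5x^2+3x+2 > 0$ for every real $x$ and the solution set is $\emptyset$.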
Homework due 15 Sep 2016:
Read pp. 3-23
Exercises p. 13, 14: 1 (b), (l), (m), 2 (a), (j), (k), (l)
Relations Between Sets: equality, subsets, proper subset, power set
Operations on Sets: Union and Intersection, Complement, Set Theoretic Difference, Symmetric Difference, Ordered Pairs and the Cartesian Product
Algebraic Properties of Sets
Ex: Find $(A\cap B)\cup (A' \cap B) \cup (A\cap B')\cup (A'\cap B')$ where $A=(-\infty,4)\cup (7,\infty)$ and $B = [-2,11].$
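One way to work this example (an added solution sketch, not part of the original class summary): by distributivity,
$$(A\cap B)\cup (A'\cap B) = (A\cup A')\cap B = B, \qquad (A\cap B')\cup (A'\cap B') = (A\cup A')\cap B' = B',$$
so the whole union is $B\cup B' = \mathbb{R} = (-\infty,\infty)$, independently of the particular $A$ and $B$ given.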
Read pp. 24-42
Let $M = \{y\,|\, 2x+1 \le y \le 3x-4\}$ and $N=\{y\,|\, 10\le y\le 20\}.$ What is the set of all $x$ such that $M\subset M\cap N$?
Let $P = \{x\,|\, y = \log_2 (1-x), y\in \mathbb{R}\}$ and $Q = \{y\,|\, y = \sqrt{x-x^2}\}.$ Find $P\cap Q.$
Union-Closed Sets Conjecture
Posed by Peter Frankl in 1979
A family of sets is said to be union-closed if the union of any two sets from the family remains in the family.
The conjecture states that for any finite union-closed family of finite sets, other than the family consisting only of the empty set, there exists an element that belongs to at least half of the sets in the family.
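A tiny computational illustration of the conjecture (an added sketch, not a proof technique; the family below is just an example):

```python
from itertools import combinations

# A small union-closed family over the ground set {1, 2, 3}.
family = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})}
assert all(a | b in family for a, b in combinations(family, 2))   # union-closed

elements = set().union(*family)
counts = {e: sum(e in s for s in family) for e in elements}
print(counts)   # element 1 lies in 3 of the 4 sets, i.e. in at least half of them
```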
The Propositional Calculus
A statement, or proposition, is a declarative sentence that is either true or false, but not both true and false.
Compound Statements and Logical Connectives: negation (or denial), conjunction, disjunction
Tautology, Equivalence, the Conditional, and Biconditional
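As a small illustration of these connectives (an added aside, not part of the class summary), the truth table below confirms the standard equivalence of $p \rightarrow q$ with $\neg p \vee q$:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    conditional = not (p and not q)   # p -> q is false exactly when p is true and q is false
    print(p, q, conditional, (not p) or q, conditional == ((not p) or q))
```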
Exercises p. 58, 59: 1, 3, 4
Exercises p. 63: 2, 3, 4
For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.
Engineer interactions by multicolor lattice-depth modulations
Cardarelli, Lorenzo
Floquet engineering, the averaging of fast periodic modulations to obtain an effective time-independent system, has in recent years become a ubiquitous toolbox for the creation of novel Hamiltonians for ultracold atoms in optical lattices. We show here that a multicolor modulation of the depth of an optical lattice allows for a flexible independent control of correlated hopping, effective on-site interactions without Feshbach resonances, effective inter-site interactions and occupation-dependent gauge fields.
Bose-Hubbard Model in a 1D Flat-Band Lattice
Cartwright, Christine
Using the density matrix renormalisation group algorithm, the Bose-Hubbard model is examined on a flat-band lattice by exploring a chain of diamonds. The phase type of the ground state of the model is examined before and after the transition point through the study of the observables and the ground state energy. The diamond chain is considered both for a frustrated case with Aharonov-Bohm cages and a non-frustrated case in order to compare the differences between the two.
Projected Entangled Pair States with a virtual SU(2) symmetry
Dreyer, Henrik
For finite groups, the framework of G-injectivity describes a class of states with topological properties that arise as ground states of local Hamiltonians. In this work, we show that some of the properties remain if the finite group is replaced by the compact Lie group SU(2): the resulting PEPS appears as a ground state of a local Hamiltonian and there is a subspace of the ground state manifold that consists of states that are locally indistinguishable but can be transformed into each other by means of inserting string operators. We prove a bound for the entanglement entropy and show that a parameter can be introduced that gives rise to a set of commuting transfer matrices.
Approaching Equilibrium: Fermionic Gaussification
Gluza, Marek
When and by which mechanism do closed quantum many-body systems equilibrate? This fundamental question has been the focus of attention for many years. It lies at the very basis of the connection between thermodynamics, quantum mechanics of many constituents and condensed matter theory. In the setting of free fermionic evolutions, we rigorously capture the time evolution in abstract terms and by basing our proof on intuitive mathematical concepts like Lieb-Robinson bounds, notions of particle transport and an algebraic expansion of operators, we uncover the underlying mechanism by which local memory of the initial conditions is forgotten. Specifically, starting from initially short-range correlated fermionic states which can be very far from Gaussian, we show that if the Hamiltonian provides sufficient transport, the system approaches a state that cannot be distinguished from a corresponding Gaussian state by local measurements. For experimentally relevant instances of ultra-cold fermions in optical lattices, our result implies equilibration on realistic physical time scales. Moreover, we characterise the equilibrium state, finding an instance of a rigorous convergence to a fermionic Generalized Gibbs ensemble generated by the non-local constants of motion of the system. Authors: \begin{center} Marek Gluza$^\text{1}$, Christian Krumnow$^\text{1}$, Mathis Friesdorf$^\text{1}$, Christian Gogolin$^\text{2,3}$, Jens\ Eisert$^\text{1}$\\ ${}^\text{1}$Dahlem Center for Complex Quantum Systems, Freie Universit{\"a}t Berlin, Berlin, Germany\\ ${}^\text{2}$ICFO-The Institute of Photonic Sciences, Mediterranean Technology Park, Barcelona, Spain\\ ${}^\text{3}$Max-Planck-Institut f{\"u}r Quantenoptik, Garching, Germany\end{center}
Experimentally accessible witnesses of many-body localisation
Goihl, Marcel
### I can also make a poster for this work, if talk slots are scarce. The phenomenon of many-body localised (MBL) systems has attracted significant interest in recent years, for its intriguing implications from a perspective of both condensed-matter and statistical physics: they are insulators even at non-zero temperature and fail to thermalise, violating expectations from quantum statistical mechanics. What is more, recent seminal experimental developments with ultra-cold atoms in optical lattices constituting analog quantum simulators have pushed many-body localised systems into the realm of physical systems that can be measured with high accuracy. In this work, we introduce experimentally accessible witnesses that directly probe distinct features of MBL, distinguishing it from its Anderson counterpart. We insist on building our toolbox from techniques available in the laboratory, including on-site addressing, super-lattices, and time-of-flight measurements, identifying witnesses based on fluctuations, density-density correlators, densities, and entanglement. We build upon the theory of out of equilibrium quantum systems, in conjunction with tensor network and exact simulations, showing the effectiveness of the tools for realistic models.
Study of strongly correlated spin systems using cluster density matrix embedding theory
Gunst, Klaas
Projected entangled pair states with SU(2)_1 chiral edge modes
Hackenbroich, Anna
Stability of the topological Kondo insulating phase in one dimension
We investigate the ground-state of a $p$-wave Kondo-Heisenberg model introduced by Alexandrov and Coleman with an Ising-type anisotropy in the Kondo interaction and correlated conduction electrons. Our aim is to understand how they affect the stability of the Haldane state obtained in the SU(2) symmetric case without the Hubbard interaction. By applying the density-matrix renormalization group algorithm and calculating the entanglement entropy we show that in the anisotropic case a phase transition occurs and a N\'eel state emerges above a critical value of the Coulomb interaction. These findings are also corroborated by the examination of the entanglement spectrum and the spin profile of the system which clarify the structure of each phase.
Non-equilibrium correlation dynamics in the one-dimensional Fermi-Hubbard Model
Kombe, Johannes
In the last few years, experimental and theoretical works have significantly furthered our understanding of the non-equilibrium dynamics of quantum systems characterised by on-site interactions [1]. This on-going effort was recently paralleled by the realisation of longer range interactions in ultracold quantum gases in optical lattices [2]. It is with this exciting development in mind that we investigate the non-equilibrium dynamics of the extended Fermi-Hubbard model [3], with a special interest in understanding if the propagation of correlations is light-cone-like [4,5]. [1] A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Rev. Mod. Phys. 83, 863 (2011). [2] S. Baier, M. J. Mark, D. Petter, K. Aikawa, L. Chomaz, Z. Cai, M. Baranov, P. Zoller, F. Ferlaino, arXiv:1507.03500 (2015). [3] T. Giamarchi, Quantum Physics in One Dimension, (Oxford University Press, Oxford, UK, 2004). [4] Calabrese, P. & Cardy, J., Phys. Rev. Lett. 96, 136801 (2006). [5] M. Cheneau, P. Barmettler, D. Poletti, M. Endres, P. Schauß, T. Fukuhara, C. Gross, I. Bloch, C. Kollath, and S. Kuhr, Nature 481, 484 (2012).
Tensor Network States with N-Site Correlators
Kovyrshin, Arseny
Scalable tomography of a quantum simulator
Maier, Christine
Quantum state tomography (QST) is the gold standard technique for estimating wave functions of small quantum systems in the laboratory. Applying QST to the larger systems currently being developed in laboratories around the world is impractical due to the large number of measurements and processing time required. In 2010 Cramer et al. proposed a tomography scheme [1] to efficiently reconstruct large quantum states that are well approximated by matrix product states (MPS). On my poster I present the experimental application of this MPS tomography to characterise a trapped ion quantum simulator of spin-1/2 particles. A product state of up to 20 ions is prepared and evolved under an Ising-type interaction, giving rise to many-body entangled states. The MPS reconstruction scheme is then performed at various times during the evolution, and the resulting quantum state investigated. We show that the reconstructed state has a significant overlap with the actual state created in the laboratory and reproduces non-classical correlations to a high degree. \\ [1] M. Cramer et al., Nature Communications \textbf{1}, 149 (2010), doi:10.1038/ncomms1147
Disordered Spin-1 Chains
McAlpine, Kenneth
The Bilinear-Biquadratic model for spin-1 chains has attracted great interest in recent years. It has been studied using many different methods including the real space renormalization group, quantum Monte Carlo and the class of MPS algorithms. Using the DMRG algorithm, we study the disordered version of the model with disorder in both the bilinear and biquadratic parts. Motivated by experiments with spinorial condensates in optical lattices, we consider the region of the phase diagram that would correspond, in the clean case, to the ferromagnetic and dimer phase. An important question is whether the dimer phase survives in the disordered case or turns into a random singlet phase as predicted in the literature. Another interesting point is whether the first order transition separating the two phases in the clean case gives way to a continuous transition or an indirect transition through a coexistence phase (large spin phase). To this end we analyse local observables, correlations and entanglement entropy.
Towards an improved duality between tensor network states and AdS spacetime
Papadopoulos, Charalampos
Optimising PEPS using Gradients and CTM Contractions of Interactions
Rader, Michael
Chiral Mott insulators in frustrated Bose-Hubbard models
Romen, Christian
Topological order in the Haldane model with on-site interactions
Rubio, Alvaro
Topological matter has been a deep subject of study in recent years, recently recognized with the Nobel Prize in Physics. This field describes new phases of matter which the usual Landau approach cannot explain, and it gives rise to a wealth of novel, rich phenomena in physics. Of special interest is what happens to topological order when the system is in the presence of interactions. We present here the Haldane model with on-site spin interactions and study the topological order for various ranges of the interactions. Using a variational Ansatz, we probe the robustness of topological order at values of the interactions that are relatively large compared with the kinetic energy scale.
Exact tensor network states for the Kitaev honeycomb model
The spin-$1/2$ Kitaev honeycomb model was originally proposed in the context of topological quantum computation. This analytically solvable model realizes a spin liquid and exhibits rich physical behaviour, such as abelian and non-abelian anyons as excitations. Our aim is to describe the eigenstates of the model using tensor network methods, which offer efficient descriptions of quantum many-body systems. In particular we exploit parity preservation and build a fermionic tensor network to express the eigenstates of the Hamiltonian in the ground state vortex sectors. We implement the network for small lattices with periodic boundary conditions in order to verify the approach for the model in the thermodynamic limit.
How to implement TDVP for Tree Tensor Product States
Schröder, Florian
I will give a brief overview on how tree tensor product states can be time-evolved with the time-dependent variational principle. I will explain how Krylov subspace methods, the Suzuki-Trotter splitting and the splitting of the node tensors themselves (similar to PEPS) can be used to efficiently apply the TDVP time-evolution operators to higher degree tree nodes. Finally I will give an example of how tree tensor product states can be used to simulate exciton-phonon dynamics in organic molecules.
Thermal and spin transport properties of frustrated spin-1/2 chains in high magnetic fields
Stolpp, Jan
We perform a full diagonalization study of frustrated spin-1/2 chains (i.e. spin-1/2 chains with nearest and next-nearest neighbor interaction) in the presence of an external magnetic field. The thermal and spin conductivity are computed from Kubo formulae as a function of frustration and field strength. We are especially interested in the transport properties in the vector chiral phase that appears in the phase diagram at strong frustration and high field. We observe an enhanced low-frequency response in the high-field vector chiral phase which we trace back to a renormalization and enhancement of the characteristic velocity.
Quantum Control of Ultracold Atoms
Sørensen, Jens Jakob
Steering a quantum system from one state into another finds many applications in quantum computation and simulation. The state is manipulated by controlling the Hamiltonian. Good controls of the Hamiltonian are found using optimization techniques from optimal control theory. In this poster the three main methods for calculating such controls are compared for problems relating to ultracold atoms.
Dynamics of the Kitaev-Heisenberg Model
Verresen, Ruben
CS Researchers Receive Test of Time Awards
Five CS researchers received Test of Time awards for papers that have had a lasting impact on their fields. The influential papers were presented at their respective conferences in the past 25 years and have remained relevant to research and practice.
Moti Yung
IACR International Conference on Practice and Theory of Public-Key Cryptography (PKC2020)
Test of Time award
On the Security of ElGamal Based Encryption (1998)
Yiannis Tsiounis & Moti Yung
41st IEEE Symposium on Security and Privacy
(IEEE S&P 2020)
Cryptovirology: Extortion-Based Security Threats and Countermeasures (1996)
Adam Young & Moti Yung
Eran Tromer
16th ACM conference on Computer and Communications Security (ACM CCS 2019)
Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds (2009)
Thomas Ristenpart, Eran Tromer, Hovav Shacham, Stefan Savage
Shree Nayar & Peter Belhumeur
International Conference on Computer Vision
(ICCV 2019)
Helmholtz Prize
Attribute and Simile Classifiers for Face Verification (2009)
Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur, Shree K. Nayar
IEEE International Symposium on Mixed and Augmented Reality (IEEE ISMAR 2019)
Lasting Impact Paper Award
Evaluating the Benefits of Augmented Reality for Task Localization in Maintenance of an Armored Personnel Carrier Turret (2009)
Steven J. Henderson & Steven Feiner
Papers from the Theory of Computing Group Accepted to SODA 2020
Nine papers from CS researchers were accepted to the ACM-SIAM Symposium on Discrete Algorithms (SODA20), held in Salt Lake City, Utah. The conference focuses on algorithm design and discrete mathematics.
Learning From Satisfying Assignments Under Continuous Distributions
Clement Canonne Stanford University, Anindya De University of Pennsylvania, Rocco A. Servedio Columbia University
A common type of problem studied in machine learning is learning an unknown classification rule from labeled data. In this problem paradigm, the learner receives a collection of data points, some of which are labeled "positive" and some of which are labeled "negative", and the goal is to come up with a rule which will have high accuracy in classifying future data points as either "positive" or "negative".
In a SODA 2015 paper, De, Diakonikolas, and Servedio studied the possibilities and limitations of efficient machine learning algorithms when the learner is only given access to one type of data point, namely points that are labeled "positive". (These are also known as "satisfying assignments" of the unknown classification rule.) They showed that certain types of classification rules can be learned efficiently in this setting while others cannot. However, all of the settings considered in that earlier work were ones in which the data points themselves were defined in terms of "categorical" features, also known as binary yes-no features (such as "hairy/hairless", "mammal/non-mammal", "aquatic/non-aquatic", and so on).
In many natural settings, though, data points are defined in terms of continuous numerical features (such as "eight inches tall", "weighs seventeen pounds", "six years old", and so on).
This paper extended the earlier SODA 2015 paper's results to handle classification rules defined in terms of continuous features as well. It shows that certain types of classification rules over continuous data are efficiently learnable from positive examples only while others are not.
"Most learning algorithms in the literature crucially use both positive and negative examples," said Rocco Servedio. "So at first I thought that it is somewhat surprising that learning is possible at all in this kind of setting where you only have positive examples as opposed to both positive and negative examples."
But learning from positive examples only is actually pretty similar to what humans do when they learn — teachers rarely show students approaches that fail to solve a problem, rarely have them carry out experiments that don't work, etc. Continued Servedio, "So maybe we should have expected this type of learning to be possible all along."
Nearly Optimal Edge Estimation with Independent Set Queries
Xi Chen Columbia University, Amit Levi University of Waterloo, Erik Waingarten Columbia University
The researchers were interested in algorithms which are given access to a large undirected graph G on n vertices and estimate the number of edges of the graph up to a small multiplicative error. In other words, for a very small ε > 0 (think of this as 0.01) and a graph with m edges, they wanted to output a number m' satisfying (1-ε) m ≤ m' ≤ (1+ε) m with probability at least 2/3, and the goal is to perform this task without having to read the whole graph.
For a simple example, suppose that the access to the graph allowed checking whether two vertices are connected by an edge. Then, an algorithm for counting the number of edges exactly would need to ask about all pairs of vertices, resulting in an (n choose 2)-query algorithm, since these are all possible pairs of vertices. However, by sampling Θ((n choose 2) / (m ε²)) random pairs of vertices, one can estimate the number of edges up to a (1 ± ε) error with probability 2/3, which results in a significantly faster algorithm!
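The naive pair-sampling baseline just described (not the paper's independent-set-query algorithm) can be sketched in a few lines; the graph representation and sample count below are illustrative assumptions.

```python
import random

def estimate_edges(edge_set, n, num_samples):
    """Estimate the edge count of an undirected n-vertex graph by checking
    whether uniformly random vertex pairs are connected by an edge.
    `edge_set` is assumed to contain one frozenset {u, v} per edge."""
    hits = 0
    for _ in range(num_samples):
        u, v = random.sample(range(n), 2)           # a uniformly random pair
        if frozenset((u, v)) in edge_set:
            hits += 1
    return hits / num_samples * (n * (n - 1) // 2)  # rescale by (n choose 2)

# Toy usage: a 5-cycle has exactly 5 edges.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
print(estimate_edges(edges, n=5, num_samples=10_000))  # should be close to 5
```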
The question here is: how do different types of access to the graph result in algorithms with different complexities? Recent work by Beame, Har-Peled, Ramamoorthy, Rashtchian, and Sinha studied certain "independent set queries" and "bipartite independent set queries": in the first (most relevant to this work), an algorithm is allowed to ask whether a set of vertices of the graph forms an independent set, and in the second, the algorithm is allowed to ask whether two sets form a bipartite independent set. The researchers give nearly matching upper and lower bounds for estimating edges with independent set queries.
A Lower Bound on Cycle-Finding in Sparse Digraphs
Xi Chen Columbia University, Tim Randolph Columbia University, Rocco A. Servedio Columbia University, Timothy Sun Columbia University
The researchers imagined situations in which the graph is extremely large and wanted to determine whether or not the graph has cycles in a computationally efficient manner (by looking at as few of the nodes in the graph as possible). As yet, there's no known solution to this problem that does significantly better than looking at a constant fraction of the nodes, but they proved a new lower bound – that is, they found a new limit on how efficiently the problem can be solved. In particular, their proof of the lower bound uses a new technique to capture the best possible behavior of any algorithm for this problem.
Suppose there is a large directed graph that describes the connections between neurons in a portion of the brain, and the number of neurons is very large, say, several billion. If the graph has many cycles, this might indicate that the portion of the brain contains recurrences and feedback loops, while if it has no cycles, this might indicate information flows through the graph in a linear manner. Knowing this fact might help deduce the function of this part of the brain. The paper's result is negative – it provides a lower bound on the number of neurons needed to determine this fact. (This might sound a little discouraging, but this research isn't really targeted at specific applications – rather, it takes a step toward better understanding the types of approaches we need to use to efficiently determine the properties of large directed graphs.)
This is part of a subfield of theoretical computer science that has to do with finding things out about enormous data objects by asking just a few questions (relatively speaking). Said Tim Randolph, "Problems like these become increasingly important as we generate huge volumes of data, because without knowing how to solve them we can't take advantage of what we know."
Lower Bounds for Oblivious Near-Neighbor Search
Kasper Green Larsen Aarhus University, Tal Malkin Columbia University, Omri Weinstein Columbia University, Kevin Yeo Google
The paper studies the problem of privacy-preserving (approximate) similarity search, which is the backbone of many industry-scale applications and machine learning algorithms. It obtains a quadratic improvement over the highest *unconditional* lower bound for oblivious (secure) near-neighbor search in dynamic settings. This shows that dynamic similarity search has a logarithmic price if one wishes to perform it in an (information theoretic) secure manner.
A Face Cover Perspective to $\ell_1$ Embeddings of Planar Graphs
Arnold Filtser Columbia University
In this paper the researcher studied the case where there is a set K of terminals, and the goal is to embed only the terminals into $\ell_1$ with low distortion.
Given two metric spaces $(X,d_X),(Y,d_Y)$, an embedding is a function $f:X\to Y$. We say that an embedding $f$ has distortion $t$ if for every two points $u,v\in X$, it holds that $d_X(u,v)\le d_Y(f(u),f(v))\le t\cdot d_X(u,v)$. "Given a hard problem in a space $X$, it is often useful to embed it into a simpler space $Y$, solve the problem there, and then pull the solution back to the original space $X$," said Arnold Filtser, a postdoctoral fellow. "The quality of the received solution will usually depend on the quality of the embedding (distortion), and the simplicity of the host space. Metric embeddings have a fundamental place in the algorithmic toolbox."
In $\ell_1$ distance, a.k.a. Manhattan distance, given two vectors $\vec{x},\vec{y}\in\mathbb{R}^d$ the distance defined as $\Vert \vec{x}-\vec{y}\Vert_1=\sum_i |x_i-y_i|$. A planar graph $G=(V,E,w)$, is a graph that can be drawn in the plane in such a way that its edges $E$ intersect only at their endpoints. This paper studies metric embeddings of planar graphs into $\ell_1$.
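As a concrete illustration of the distortion definition above, here is a minimal sketch that computes the multiplicative distortion of a candidate embedding of a finite metric space into $\ell_1$; the toy points and map are made up for the example.

```python
import itertools

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def distortion(points, d_X, f):
    """Distortion of an (injective) embedding f of a finite metric (points, d_X)
    into l_1: rescale f so it never contracts a distance, then take the worst
    expansion. Equivalently, (max expansion) * (max contraction)."""
    pairs = list(itertools.combinations(points, 2))
    contraction = max(d_X(u, v) / l1(f(u), f(v)) for u, v in pairs)
    expansion = max(l1(f(u), f(v)) / d_X(u, v) for u, v in pairs)
    return contraction * expansion

# Toy usage: three points on a line, embedded into the plane with the l_1 distance.
pts = [0.0, 1.0, 3.0]
print(distortion(pts, lambda u, v: abs(u - v), lambda u: (u, 0.0)))  # 1.0 (isometric)
```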
It was conjectured by Gupta et al. that every planar graph can be embedded into $\ell_1$ with constant distortion. However, given an $n$-vertex weighted planar graph, the best upper bound on the distortion is only $O(\sqrt{\log n})$, by Rao. The only known lower bound is $2$, and the fundamental question of the right bound remains elusive.
The paper studies the case where there is a set $K$ of terminals, and the goal is to embed only the terminals into $\ell_1$ with low distortion. Its contribution is a further improvement of the upper bound to $O(\sqrt{\log\gamma})$, where $\gamma$ is the number of faces needed to cover all the terminals. Since every planar graph has at most $O(n)$ faces, any further improvement on this result will be a major breakthrough, directly improving upon Rao's long-standing upper bound.
It is well known that the flow-cut gap equals the distortion of the best embedding into $\ell_1$. Therefore, the result provides a polynomial-time $O(\sqrt{\log \gamma})$-approximation to the sparsest cut problem on planar graphs, for the case where all the demand pairs can be covered by $\gamma$ faces.
Approximating the Distance to Monotonicity of Boolean Functions
Ramesh Krishnan Pallavoor Boston University, Sofya Raskhodnikova Boston University, Erik Waingarten Columbia University
A Boolean function f : {0,1}^n → {0,1} is monotone if for every two points x, y ∈ {0,1}^n where x_i ≤ y_i for every i ∈ [n], f(x) ≤ f(y). There has been a long and very fruitful line of research, starting with the work of Goldreich, Goldwasser, Lehman, Ron, and Samorodnitsky, exploring algorithms which can test whether a Boolean function is monotone.
The core question studied in the first paper was: suppose a function f is ϵ-far from monotone, i.e., any monotone function must differ with f on at least an ϵ-fraction of the points; how many pairs of points x, y ∈ {0,1}^n that differ in only one bit i ∈ [n] (an edge of the hypercube) must satisfy f(x) = 1, f(y) = 0, and x ≤ y (a violation of monotonicity)?
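The notion of a violating hypercube edge can be made concrete with a brute-force sketch (exponential in n, so only for tiny examples); the test function below is an illustrative assumption.

```python
from itertools import product

def violating_edges(f, n):
    """Count hypercube edges (x, y), with x <= y and differing in one bit,
    where f(x) = 1 and f(y) = 0 -- direct witnesses of non-monotonicity."""
    count = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]       # flip bit i from 0 to 1
                if f(x) == 1 and f(y) == 0:
                    count += 1
    return count

# Toy usage: f(x) = NOT x_1 violates monotonicity on every edge in direction 1.
print(violating_edges(lambda x: 1 - x[0], n=3))    # 4 violating edges
```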
The paper focuses on the question of efficient algorithms which can estimate the distance to monotonicity of a function, i.e., the smallest possible ϵ where f is ϵ-far from monotone. It gives a non-adaptive algorithm making poly(n) queries which estimates ϵ up to a factor of Õ(√n). "The above approximation is not good since it degrades very badly as the number of variables of the function increases," said Erik Waingarten. "However, the surprising thing is that substantially better approximations require exponentially many non-adaptive queries."
The Complexity of Contracts
Paul Duetting London School of Economics, Tim Roughgarden Columbia University, Inbal Talgam-Cohen Technion, Israel Institute of Technology
Contract theory is a major topic in economics (e.g., the 2016 Nobel Prize in Economics was awarded to Oliver Hart and Bengt Holmström for their work on the topic). A canonical problem in the area is how to structure compensation to employees (e.g. as a function of sales), when the effort exerted by employees is not directly observable.
This paper provides both positive and negative results about when optimal or approximately optimal contracts can be computed efficiently by an algorithm. The researchers design such an efficient algorithm for settings with very large outcome spaces (such as all subsets of a set of products) and small agent action spaces (such as exerting low, medium, or high effort).
How to Store a Random Walk
Emanuele Viola Northeastern University, Omri Weinstein Columbia University, Huacheng Yu Harvard University
Motivated by storage applications, the researchers studied the problem of "locally-decodable" data compression. For example, suppose an encoder wishes to store a collection of n *correlated* files X_1, …, X_n using as little space as possible, such that each individual X_i can be recovered quickly with few (ideally constant) memory accesses.
A natural example is a collection of similar images or DNA strands on a large server, say, Dropbox. The researchers show that for file collections with "time-decaying" correlations (i.e., Markov chains), one can get the best of both worlds. This surprising result is achieved by proving that a random walk on any graph can be stored very close to its entropy, while still enabling *constant*-time decoding on a word-RAM. The data structures generalize to the dynamic (online) setting.
Labelings vs. Embeddings: On Distributed Representations of Distances
Arnold Filtser Columbia University, Lee-Ad Gottlieb Ariel University, Robert Krauthgamer Weizmann Institute of Science
The paper investigates for which metric spaces the performance of distance labelings and of $\ell_\infty$-embeddings differ, and how significant this difference can be.
A distance labeling is a distributed representation of distances in a metric space $(X,d)$, where each point $x\in X$ is assigned a succinct label, such that the distance between any two points $x,y \in X$ can be approximated given only their labels.
A highly structured special case is an embedding into $\ell_\infty$, where each point $x\in X$ is assigned a vector $f(x)$ such that $\|f(x)-f(y)\|_\infty$ is approximately $d(x,y)$. The performance of a distance labeling, or an $\ell_\infty$-embedding, is measured by its distortion and its label-size/dimension. "As $\ell_\infty$ is a norm space, it possesses a natural structure that can be exploited by various algorithms," said Arnold Filtser. "Thus it is more desirable to obtain embeddings rather than general labeling schemes."
The researchers also studied the analogous question for the prioritized versions of these two measures. Here, a priority order $\pi=(x_1,\dots,x_n)$ of the point set $X$ is given, and higher-priority points should have shorter labels. Formally, a distance labeling has prioritized label-size $\alpha(.)$ if every $x_j$ has label size at most $\alpha(j)$. Similarly, an embedding $f: X \to \ell_\infty$ has prioritized dimension $\alpha(\cdot)$ if $f(x_j)$ is non-zero only in the first $\alpha(j)$ coordinates. In addition, they compare these prioritized measures to their classical (worst-case) versions.
They answer these questions in several scenarios, uncovering a surprisingly diverse range of behaviors. First, in some cases labelings and embeddings have very similar worst-case performance, but in other cases there is a huge disparity. However, in the prioritized setting, they found a strict separation between the performance of labelings and embeddings. And finally, when comparing the classical and prioritized settings, they found that the worst-case bound for label size often "translates" to a prioritized one, but they also found a surprising exception to this rule.
Four Papers from the Theory Group Accepted to FOCS 2019
Papers from CS researchers were accepted to the 60th Annual Symposium on Foundations of Computer Science (FOCS 2019). The papers delve into population recovery, sublinear time, auctions, and graphs.
Finding Monotone Patterns in Sublinear Time
Omri Ben-Eliezer Tel-Aviv University, Clement L. Canonne Stanford University, Shoham Letzter ETH-ITS, ETH Zurich, Erik Waingarten Columbia University
The paper is about finding increasing subsequences in an array in sublinear time. Imagine an array of n numbers where at least 1% of the numbers can be arranged into increasing subsequences of length k. We want to pick random locations from the array in order to find an increasing subsequence of length k. At a high level, in an array with many increasing subsequences, the task is to find one. The key is to cleverly design the distribution over random locations to minimize the number of locations needed.
Roughly speaking, the arrays considered have a lot of increasing subsequences of length k; think of these as "evidence of existence of increasing subsequences". However, these subsequences can be hidden throughout the array: they can be spread out, or concentrated in particular sections, or they can even have very large gaps between the starts and the ends of the subsequences.
"The surprising thing is that after a specific (and simple!) re-ordering of the "evidence", structure emerges within the increasing subsequences of length k," said Erik Waingarten, a PhD student. "This allows for design efficient sampling procedures which are optimal for non-adaptive algorithms."
Beyond Trace Reconstruction: Population Recovery From the Deletion Channel
Frank Ban UC Berkeley, Xi Chen Columbia University, Adam Freilich Columbia University, Rocco A. Servedio Columbia University, Sandip Sinha Columbia University
Consider the problem of reconstructing the DNA sequence of an extinct species, given some DNA sequences of its descendant(s) that are alive today. We know that DNA sequences get modified through random mutations, which can be substitutions, insertions and deletions.
A mathematical abstraction of this problem is to recover an unknown source string x of length n, given access to independent samples of x that have been corrupted according to a certain noise model. The goal is to determine the minimum number of samples required in order to recover x with high confidence. In the special case that the corruption occurs via a deletion channel (i.e., each character in x is deleted independently with some probability, say 0.1, and the surviving characters are concatenated and transmitted), each sample is called a trace. The corresponding recovery problem is called trace reconstruction, and it has received significant attention in recent years.
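A minimal simulation of the deletion channel described above can be sketched as follows; the deletion probability and source string are illustrative.

```python
import random

def trace(x, delete_prob=0.1):
    """Pass x through a deletion channel: each character is deleted
    independently with probability delete_prob; survivors are concatenated."""
    return "".join(c for c in x if random.random() > delete_prob)

source = "010110100111"
# Five noisy traces of the same source, typically of slightly different lengths.
print([trace(source) for _ in range(5)])
```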
The researchers considered a generalized version of this problem (known as population recovery) where there are multiple unknown source strings, along with an unknown distribution over them specifying the relative frequency of each source string. Each sample is generated by first drawing a source string with the associated probability, and then generating a trace from it via the deletion channel. The goal is to recover the source strings, along with the distribution over them (up to small error), from the mixture of traces.
For the main sample complexity upper bound, they show that for any population size s = o(log n / log log n), a population of s strings from {0,1}^n can be learned under deletion channel noise using exp(n^{1/2 + o(1)}) samples. On the lower bound side, they show that at least n^{Ω(s)} samples are required to perform population recovery under the deletion channel when the population size is s, for all s ≤ n^0.49.
"I found it interesting that our work is based on certain mathematical results in which, at first glance, seem to be completely unrelated to the computational problem we consider," said Sandip Sinha, a PhD student. In particular, they used constructions based on Chebyshev polynomials, a certain sequence of polynomials which are extremal for many properties, and is hence ubiquitous throughout theoretical computer science. Similarly, previous work on trace reconstruction rely on certain extremal results about complex-valued polynomials. Continued Sinha, "I think it is quite intriguing that complex analytic techniques yield useful results about a problem which is fundamentally about discrete structures (binary strings)."
Settling the Communication Complexity of Combinatorial Auctions with Two Subadditive Buyers
Tomer Ezra Tel Aviv University, Michal Feldman Tel Aviv University, Eric Neyman Columbia University, Inbal Talgam-Cohen Technion; S. Matthew Weinberg Princeton University
The paper is about the theory of combinatorial auctions. In a combinatorial auction, an auctioneer wants to allocate several items among bidders. Each bidder values each item at a certain amount; bidders also have values for combinations of items, and in a combinatorial auction a bidder might not value a combination of items as much as the sum of the values of the individual items.
For instance, say that a pencil and a pen will be auctioned. The pencil is valued at 30 cents and the pen at 40 cents, but the pen and pencil together at only 50 cents (it may be that there isn't any additional value from having both the pencil and the pen). Valuation functions with this property — that the value of a combination of items is less than or equal to the sum of the values of each item — are called subadditive.
In the paper, the researchers answered a longstanding open question about combinatorial auctions with two bidders who have subadditive valuation — roughly speaking, is it possible for an auctioneer to efficiently communicate with both bidders to figure out how to allocate the items between them to make the bidders happy?
The answer turns out to be no. In general, if the auctioneer wants to do better than just giving all of the items to one bidder or the other at random, the auctioneer needs to communicate a very large amount with the bidders.
The result itself was somewhat surprising; the researchers expected it to be possible for the auctioneer to do pretty well without having to communicate with the bidders too much. "Also, information theory was extensively used as part of proving the result," said Eric Neyman, a PhD student. "This is unexpected, because information theory has not been used much in the study of combinatorial auctions."
Fully Dynamic Maximal Independent Set with Polylogarithmic Update Time
Soheil Behnezhad University of Maryland, Mahsa Derakhshan University of Maryland, Mohammad Taghi Hajiaghayi University of Maryland, Cliff Stein Columbia University, Madhu Sudan Harvard University
In a graph, an independent set is a set of vertices with the property that none are adjacent. For example, in the graph of Facebook friends, vertices are people and there is an edge between two people who are friends. An independent set would be a set of people, none of whom are friends with each other. A basic problem is to find a large independent set. The paper focuses on one type of large independent set known as a maximal independent set, that is, one that cannot have any more vertices added to it.
Graphs, such as the friends graph, evolve over time. As the graph evolves, the maximal independent set needs to be maintained, without recomputing one from scratch. The paper significantly decreases the time to do so, from time that is polynomial in the input size to one that is polylogarithmic.
A graph can have many maximal independent sets (e.g. in a triangle, each of the vertices is a potential maximal independent set). One might think that this freedom makes the problems easier. The researchers picked one particular kind of maximal independent set, known as a lexicographically first maximal independent set (roughly this means that in case of a tie, the vertex whose name is first in alphabetical order is always chosen) and show that this kind of set can be maintained more efficiently.
"Giving up this freedom actually makes the problems easier," said Cliff Stein, a computer science professor. "The idea of restricting the set of possible solutions making the problem easier is a good general lesson."
Decades-Old Computer Science Conjecture Solved in Two Pages
Four Papers on Computer Science Theory Accepted to STOC 2019
The Symposium on Theory of Computing (STOC) covers research within theoretical computer science, such as algorithms and computation theory. This year, four papers from CS researchers and collaborators from various institutions made it into the conference.
Local Decodability of the Burrows-Wheeler Transform
Sandip Sinha Columbia University and Omri Weinstein Columbia University
The researchers were interested in the problem of compressing texts with local context, like texts in which there is some correlation between nearby characters. For example, the letter 'q' is almost always followed by 'u' in an English text.
It is a reasonable goal to design compression schemes that exploit local context to reduce the length of the string considerably. Indeed, the FM-Index and other such schemes, based on a transformation called the Burrows-Wheeler transform followed by Move-to-Front encoding, have been widely used in practice to compress DNA sequences and other texts. "I think it's interesting that compression schemes have been known for nearly 20 years in the pattern-matching and bioinformatics community, but there have not been satisfactory theoretical guarantees on the compression achieved by these algorithms," said Sandip Sinha, a PhD student in the Theory Group.
Moreover, these schemes are inherently non-local – in order to extract a character or a short substring at a particular position of the original text, one needs to decode the entire string, which requires time proportional to the length of the original string. This is prohibitive in many applications. The team designed a data structure which matches almost exactly the space bound of such compression schemes, while also supporting highly efficient local decoding queries (alluded to above), as well as certain pattern-matching queries. In particular, they were able to design a succinct "locally-decodable" Move-to-Front (MTF) code, that reduces the decoding time per character (in the MTF encoding) from n to around log(n), where n is the length of the string. Shared Sinha, "We also show a lower bound showing that for a wide class of strings, one cannot hope to do much better using any data structure based on the above transform."
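To fix ideas, here is a plain Move-to-Front encoder/decoder — the classical transform referenced above, not the paper's locally-decodable variant; the alphabet and input string are illustrative.

```python
def mtf_encode(text, alphabet):
    """Emit the current position of each character in a running list, then move
    that character to the front; runs of a few repeated characters become
    small numbers, which downstream entropy coders compress well."""
    table, out = list(alphabet), []
    for c in text:
        i = table.index(c)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(codes, alphabet):
    table, out = list(alphabet), []
    for i in codes:
        out.append(table[i])
        table.insert(0, table.pop(i))
    return "".join(out)

codes = mtf_encode("bananaaa", "abn")
print(codes)                     # [1, 1, 2, 1, 1, 1, 0, 0]
print(mtf_decode(codes, "abn"))  # "bananaaa"
```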
"Hopefully our paper draws wider attention of the theoretical CS community to similar problems in these fields," said Sinha. To that end, they have made a conscious effort to make the paper accessible across research domains. "I also think there is no significant mathematical knowledge required to understand the paper, beyond some basic notions in information theory."
Fooling Polytopes
Ryan O'Donnell Carnegie Mellon University, Rocco A. Servedio Columbia University, Li-Yang Tan Stanford University
The paper is about "getting rid of the randomness in random sampling".
Suppose you are given a complicated shape on a blackboard and you need to estimate what fraction of the blackboard's area is covered by the shape. One efficient way to estimate this fraction is by doing random sampling: throw darts randomly at the blackboard and count the fraction of the darts that land inside the shape. If you throw a reasonable number of darts, and they land uniformly at random inside the blackboard, the fraction of darts that land inside the shape will be a good estimate of the actual fraction of the blackboard's area that is contained inside the shape. (This is analogous to surveying a small random sample of voters to try and predict who will win an election.)
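The randomized baseline in the blackboard analogy is ordinary Monte Carlo sampling, sketched below; the paper's point is that, for polytopes, comparable estimates can be obtained deterministically, which this sketch does not attempt.

```python
import random

def estimate_area_fraction(inside_shape, num_darts=100_000):
    """Throw darts uniformly at the unit-square 'blackboard' and return the
    fraction landing inside the shape."""
    hits = sum(inside_shape(random.random(), random.random())
               for _ in range(num_darts))
    return hits / num_darts

# Toy usage: a disk of radius 0.5 centered in the unit square covers pi/4 (about 0.785) of it.
in_disk = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25
print(estimate_area_fraction(in_disk))
```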
"This kind of random sampling approach is very powerful," said Rocco Servedio, professor and chair of the computer science department. "In fact, there is a sense in which every randomized computation can be viewed as doing this sort of random sampling."
It is a fundamental goal in theoretical computer science to understand whether randomness is really necessary to carry out computations efficiently. The point of this paper is to show that for an important class of high-dimensional estimation problems of the sort described above, it is actually possible to come up with the desired estimates efficiently without using any randomness at all.
In this specific paper, the "blackboard" is a high-dimensional Boolean hypercube and the "shape on the blackboard" is a subset of the hypercube defined by a system of high-dimensional linear inequalities (such a subset is also known as a polytope). Previous work had tried to prove this result but could only handle certain specialized types of linear inequalities. By developing some new tools in high dimensional geometry and probability, in this paper the researchers were able to get rid of those limitations and handle all systems of linear inequalities.
Static Data Structure Lower Bounds Imply Rigidity
Zeev Dvir Princeton University, Alexander Golovnev Harvard University, Omri Weinstein Columbia University
The paper shows an interesting connection between the task of proving time-space lower bounds on data structure problems (with linear queries), and the long-standing open problem of constructing "stable" (rigid) matrices — a matrix M whose rank remains very high unless a lot of entries are modified. Constructing rigid matrices is one of the major open problems in theoretical computer science since the late 1970s, with far-reaching consequences on circuit complexity.
The result shows a real barrier for proving lower bounds on data structures: If one can exhibit any "hard" data structure problem with linear queries (the canonical example being Range Counting queries: given n points in d dimensions, report the number of points in a given rectangle), then this problem can be essentially used to construct "stable" (rigid) matrices.
"This is a rather surprising 'threshold' result, since in slightly weaker models of data structures (with small space usage), we do in fact have very strong lower bounds on the query time," said Omri Weinstein, an assistant professor of computer science. "Perhaps surprisingly, our work shows that anything beyond that is out of reach with current techniques."
Testing Unateness Nearly Optimally
Xi Chen Columbia University, Erik Waingarten Columbia University
The paper is about testing unateness of Boolean functions on the hypercube.
For this paper the researchers set out to design highly efficient algorithms which, by evaluating very few random inputs of a Boolean function, can "test" whether the function is unate (meaning that every variable is either non-increasing or non-decreasing) or far from unate.
Building on a previous paper that gave a lower bound on the complexity of these testing algorithms, the researchers set out to create an algorithm which is optimal up to poly-logarithmic factors.
An example of a Boolean function which is unate is a halfspace, i.e., for some values w_1, …, w_n, θ ∈ ℝ, the function f : {0,1}^n → {0,1} is given by f(x) = 1 if ∑_i w_i x_i ≥ θ and 0 otherwise. Here, every variable i ∈ [n] is either non-decreasing, when w_i ≥ 0, or non-increasing, when w_i ≤ 0.
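A small brute-force sketch of these definitions follows — a halfspace over the Boolean hypercube and an exponential-time unateness check; the weights and threshold are illustrative, and this is not the paper's testing algorithm.

```python
from itertools import product

def halfspace(w, theta):
    """f(x) = 1 iff sum_i w_i * x_i >= theta; every halfspace is unate."""
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

def is_unate(f, n):
    """Brute force: each variable must be non-decreasing or non-increasing
    over the whole hypercube (only feasible for tiny n)."""
    for i in range(n):
        goes_up = goes_down = False
        for x in product((0, 1), repeat=n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                goes_up |= f(x) < f(y)
                goes_down |= f(x) > f(y)
        if goes_up and goes_down:
            return False
    return True

print(is_unate(halfspace([2, -1, 3], theta=2), n=3))   # True
print(is_unate(lambda x: x[0] ^ x[1], n=2))            # False (XOR is not unate)
```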
"One may hope that such an optimal algorithm could be non-adaptive, in the sense that all evaluations could be done at once," said Erik Waingarten, an algorithms and computational complexity PhD student. "These algorithms tend to be easier to analyze and have the added benefit of being parallelize-able."
However, the algorithm they developed is crucially adaptive, and a surprising thing is that non-adaptive algorithms could never achieve optimal complexity. A highlight of the paper is a new analysis of a very simple binary search procedure on the hypercube.
"This procedure is the 'obvious' thing one would do for these kinds of algorithms, but analyzing it has been very difficult because of its adaptive nature," said Waingarten. "For us, this is the crucial component of the algorithm."
What Can You Do With a Computer Science Degree?
Columbia Welcomes Rebecca Wright as Inaugural Director of Barnard's CS Program
Columbia's computer science community is growing with Barnard College's creation of a program in Computer Science (CS). Rebecca Wright has been hired as the director of Barnard's CS program and as the director of the Vagelos Computational Science Center (Vagelos CSC), both of which are located in the Milstein Center.
Wright will lay down the groundwork to establish a computer science department to better serve the Barnard community. According to Wright, the goals of Barnard's CS program are to bring computing education in a meaningful way to all Barnard students, to better integrate Barnard's CS majors into the Barnard community, and to build a national presence for Barnard in computing research and education. Barnard students have already been able to take CS classes at Columbia and to major in CS by completing the Columbia CS major requirements. The Barnard program will continue to collaborate closely with the Columbia CS department, seeking to add opportunities rather than duplicating existing efforts or changing existing requirements.
"Initial course offerings are expected to focus on how CS interacts with other disciplines, such as social science, lab science, arts, and the humanities," said Wright, who comes to Columbia from Rutgers University. "We will address the different ways it can interact with various disciplines and ways to advance those disciplines, but with a focus on how to advance computer science to meet the needs of those disciplines."
Wright sees room to create more opportunities for students to see the full spectrum of computer science – from the one end of the spectrum using the computer as a tool, to the other end of the spectrum where there is the ability to design new algorithms, to implement new systems, to carry out things at the forefront of computer science. Barnard will enable students to find more places along that spectrum to become fluent in the underlying tools and mechanisms and be able to reason about them, create them, and combine them in new ways.
The first course will be taught by Wright and offered next year in the fall. It is currently being developed and will most likely fall under her research interests – security, privacy, and cryptography. She also is working on building the faculty through both tenure-stream professors and a new teaching and research fellows program.
For now, students can continue to visit Barnard's CSC and CS facilities on the fifth floor of the Milstein Center, including making use of the Computer Science and Math Help Room for guidance from tutors, studying or relaxing in the CSC social space, and enrolling in CSC workshops.
Wright encourages students to visit the Milstein Center: "I love walking through the library up to our offices." The space is open and a modern presentation of a library – much like how she envisions the computer science program developing.
"Computing has an impact on advances in virtually every field today," said Wright. "I am excited to see what we develop around these multidisciplinary interactions and interpretations of computing."
Computer Science Professor Omri Weinstein Wins NSF Career Award
His award will be used to explore data structures and information retrieval
Natural Language Processing and Spoken Language Processing groups present papers at EMNLP 2018
Columbia researchers presented their work at the Empirical Methods in Natural Language Processing (EMNLP) in Brussels, Belgium.
Professor Julia Hirschberg gave a keynote talk on the work done by the Spoken Language Processing Group on how to automatically detect deception in spoken language – how to identify cues in trusted speech vs. mistrusted speech and how these features differ by speaker and by listener. Slides from the talk can be viewed here.
Five teams with computer science undergrad and PhD students from the Natural Language Processing Group (NLP) also attended the conference to showcase their work on text summarization, analysis of social media, and fact checking.
Robust Document Retrieval and Individual Evidence Modeling for Fact Extraction and Verification
Tuhin Chakrabarty Computer Science Department, Tariq Alhindi Computer Science Department, and Smaranda Muresan Computer Science Department and Data Science Institute
"Given the difficult times, we are living in, it's extremely necessary to be perfect with our facts," said Tuhin Chakrabarty, lead researcher of the paper. "Misinformation spreads like wildfire and has long-lasting impacts. This motivated us to delve into the area of fact extraction and verification."
This paper presents the ColumbiaNLP submission for the FEVER Workshop Shared Task. Their system is an end-to-end pipeline that extracts factual evidence from Wikipedia and infers a decision about the truthfulness of the claim based on the extracted evidence.
Fact checking is a type of investigative journalism where experts examine the claims published by others for their veracity. The claims can range from statements made by public figures to stories reported by other publishers. The end goal of a fact checking system is to provide a verdict on whether the claim is true, false, or mixed. Several organizations such as FactCheck.org and PolitiFact are devoted to such activities.
The FEVER Shared task aims to evaluate the ability of a system to verify information using evidence from Wikipedia. Given a claim involving one or more entities (mapping to Wikipedia pages), the system must extract textual evidence (sets of sentences from Wikipedia pages) that supports or refutes the claim and then using this evidence, it must label the claim as Supported, Refuted or NotEnoughInfo.
Detecting Gang-Involved Escalation on Social Media Using Context
Serina Chang Computer Science Department, Ruiqi Zhong Computer Science Department, Ethan Adams Computer Science Department, Fei-Tzin Lee Computer Science Department, Siddharth Varia Computer Science Department, Desmond Patton School of Social Work, William Frey School of Social Work, Chris Kedzie Computer Science Department, and Kathleen McKeown Computer Science Department
This research is a collaboration between Professor Kathy McKeown's NLP lab and the Columbia School of Social Work. Professor Desmond Patton, from the School of Social Work and a member of the Data Science Institute, discovered that gang-involved youth in cities such as Chicago increasingly turn to social media to grieve the loss of loved ones, which may escalate into aggression toward rival gangs and plans for violence.
The team created a machine learning system that can automatically detect aggression and loss in the social media posts of gang-involved youth. They developed the approach with the hope of eventually deploying a system that can save critical time, scale reach, and intervene before more young lives are lost.
The system features the use of word embeddings and lexicons, automatically derived from a large domain-specific corpus which the team constructed. They also created context features that capture a user's recent posts, both in semantic and emotional content, as well as their interactions with other users in the dataset. Incorporating these domain-specific resources and context features in a Convolutional Neural Network (CNN) leads to a significant improvement over the prior state of the art.
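A generic sketch of the idea of combining convolutional post features with a separate context vector is shown below (in PyTorch); the layer sizes, number of classes, and feature shapes are illustrative assumptions and this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ContextCNN(nn.Module):
    """Toy text CNN that concatenates max-pooled convolutional features of a
    post with a precomputed context vector (e.g. a summary of recent posts)."""
    def __init__(self, vocab_size, emb_dim=100, n_filters=64,
                 context_dim=20, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.out = nn.Linear(n_filters + context_dim, n_classes)

    def forward(self, token_ids, context):
        # token_ids: (batch, seq_len) integer ids; context: (batch, context_dim)
        x = self.emb(token_ids).permute(0, 2, 1)          # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values    # max-pool over time
        return self.out(torch.cat([x, context], dim=1))   # class scores
```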
The dataset used spans the public Twitter posts of nearly 300 users from a gang-involved community in Chicago. Youth volunteers and violence prevention organizations helped identify users and annotate the dataset for aggression and loss. Here are two examples of labeled tweets, both of which the system was able to classify correctly. Names are blocked out to preserve the privacy of users.
Tweet examples
For semantics, which were represented by word embeddings, the researchers found that it was optimal to include 90 days of recent tweet history. While for emotion, where an emotion lexicon was employed, only two days of recent tweets were needed. This matched insight from prior social work research, which found that loss is significantly likely to precede aggression in a two-day window. They also found that emotions fluctuate more quickly than semantics so the tighter context window would be able to capture more fine-grained fluctuation.
"We took this context-driven approach because we believed that interpreting emotion in a given tweet requires context, including what the users had been saying recently, how they had been feeling, and their social dynamics with others," said Serina Chang, an undergraduate computer science student. One thing that surprised them was the extent to which different types of context offered different types of information, as demonstrated by the contrasting contributions of the semantic-based user history feature and the emotion-based one. Continued Chang, "As we hypothesized, adding context did result in a significant performance improvement in our neural net model."
Team SWEEPer: Joint Sentence Extraction and Fact Checking with Pointer Networks
Christopher Hidey Columbia University, Mona Diab Amazon AI Lab
Automated fact checking of textual claims is of increasing interest in today's world. Previous research has investigated fact checking in political statements, news articles, and community forums.
"Through our model we can fact check claims and find specific statements that support the evidence," said Christopher Hidey, a fourth year PhD student. "This is a step towards addressing the propagation of misinformation online."
As part of the FEVER community shared task, the researchers developed models that, given a statement, would jointly find a Wikipedia article and a sentence related to the statement, and then predict whether the statement is supported by that sentence.
For example, given the claim "Lorelai Gilmore's father is named Robert," one could find the Wikipedia article on Lorelai Gilmore and extract the third sentence "Lorelai has a strained relationship with her wealthy parents, Richard and Emily, after running away as a teen to raise her daughter on her own" to show that the claim is false.
Credit : Wikipedia – https://en.wikipedia.org/wiki/Lorelai_Gilmore
One aspect of this problem that the team observed was how poorly TF-IDF, a standard technique in information retrieval and natural language processing, performed at retrieving Wikipedia articles and sentences. Their custom model improved performance by 35 points in terms of recall over a TF-IDF baseline, achieving 90% recall for 5 articles. Overall, the model retrieved the correct sentence and predicted the veracity of the claim 50% of the time.
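For context, the TF-IDF baseline mentioned above amounts to ranking candidate texts by the cosine similarity of sparse term-weight vectors; a minimal self-contained sketch follows (the two toy documents and the claim are made up, and the researchers' improved retrieval model is not shown).

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Plain TF-IDF bag-of-words vectors for a small corpus."""
    toks = [t.lower().split() for t in texts]
    df = Counter(term for doc in toks for term in set(doc))
    idf = {t: math.log(len(texts) / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in toks]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

docs = ["Lorelai Gilmore is a fictional character on Gilmore Girls",
        "Pointer networks are a neural architecture for sequence problems"]
claim = "Lorelai Gilmore's father is named Robert"
vecs = tfidf_vectors(docs + [claim])
print([cosine(vecs[-1], v) for v in vecs[:-1]])  # the Gilmore article scores higher
```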
Where is your Evidence: Improving Fact-checking by Justification Modeling
Tariq Alhindi Computer Science Department, Savvas Petridis Computer Science Department, Smaranda Muresan Computer Science Department and Data Science Institute
The rate at which misinformation is spreading on the web is faster than the rate of manual fact-checking conducted by organizations like Politifact.com and Factchecking.org. For this paper the researchers wanted to explore how to automate parts or all of the fact-checking process. A poster with their findings was presented as part of the FEVER workshop.
"In order to come up with reliable fact-checking systems we need to understand the current manual process and identify opportunities for automation," said Tariq Alhindi, lead author on the paper. They looked at the LIAR dataset – around 10,000 claims classified by Politifact.com to one of six degrees of truth – pants-on-fire, false, mostly-false, half-true, mostly-true, true. Continued Alhindi, we also looked at the fact-checking article for each claim and automatically extracted justification sentences of a given verdict and used them in our models, after removing all sentences that contain the verdict (e.g. true or false).
Excerpt from the LIAR-PLUS dataset
Feature-based machine learning models and neural networks were used to develop models that can predict whether a given statement is true or false. Results showed that using some sort of justification or evidence always improves the results of fake-news detection models.
"What was most surprising about the results is that adding features from the extracted justification sentences consistently improved the results no matter what classifier we used or what other features we included," shared Alhindi, a PhD student. "However, we were surprised that the improvement was consistent even when we compare traditional feature-based linear machine learning models against state of the art deep learning models."
Their research extends the previous work done on this data set which only looked at the linguistic cues of the claim and/or the metadata of the speaker (history, venue, party-affiliation, etc.). The researchers also released the extended dataset to the community to allow further work on this dataset with the extracted justifications.
Content Selection in Deep Learning Models of Summarization
Chris Kedzie Columbia University, Kathleen McKeown Columbia University, Hal Daume III University of Maryland, College Park
Recently, a specific type of machine learning, called deep learning, has made strides in reaching human level performance on hard to articulate problems, that is, things people do subconsciously like recognizing faces or understanding speech. And so, natural language processing researchers have turned to these models for the task of identifying the most important phrases and sentences in text documents, and have trained them to imitate the decisions a human editor might make when selecting content for a summary.
"Deep learning models have been successful in summarizing natural language texts, news articles and online comments," said Chris Kedzie, a fifth year PhD student. "What we wanted to know is how they are doing it."
While these deep learning models are empirically successful, it is not clear how they are performing this task. By design, they are learning to create their own representation of words and sentences, and then using them to predict whether a sentence is important – if it should go into a summary of the document. But just what kinds of information are they using to create these representations?
One hypothesis the researchers had was that certain types of words were more informative than others. For example, in a news article, nouns and verbs might be more important than adjectives and adverbs for identifying the most important information, since such articles are typically written in a relatively objective manner.
To see if this was so, they trained models to predict sentence importance on redacted datasets, where either nouns, verbs, adjectives, adverbs, or function words were removed and compared them to models trained on the original data.
On a dataset of personal stories published on Reddit, adjectives and adverbs were the key to achieving the best performance. This made intuitive sense in that people tend to use intensifiers to highlight the most important or climactic moments in their stories with sentences like, "And those were the WORST customers I ever served."
What surprised the researchers were the news articles – removing any one class of words did not dramatically decrease model performance. Either important content was broadly distributed across all kinds of words or there was some other signal that the model was using.
They suspected that sentence order was important because journalists are typically instructed to write according to the inverted pyramid style with the most important information at the top of the article. It was possible that the models were implicitly learning this and simply selecting sentences from the article lead.
Two pieces of evidence confirmed this. First, looking at a histogram of sentence positions selected as important, the models overwhelmingly preferred the lead of the article. Second, in a follow-up experiment, the sentence order was shuffled to remove sentence position as a viable signal from which to learn. On news articles, model performance dropped significantly, leading to the conclusion that sentence position was most responsible for model performance on news documents.
The result concerned the researchers as they want models to be trained to truly understand human language and not use simple and brittle heuristics (like sentence position). "To connect this to broader trends in machine learning, we should be very concerned and careful about what signals are being exploited by our models, especially when making sensitive decisions," Kedzie continued. "The signals identified by the model as helpful may not truly capture the problem we are trying to solve, and worse yet, may be exploiting biases in the dataset that we do not wish it to learn."
However, Kedzie sees this as an opportunity to improve the utility of word representations so that models are better able to use the article content itself. Along these lines, in the future, he hopes to show that by quantifying the surprisal or novelty of a particular word or phrase, models are able to make better sentence importance predictions. Just as people might remember the most surprising and unexpected parts of a good story.
CS Welcomes New Faculty
The department welcomes Baishakhi Ray, Ronghui Gu, Carl Vondrick and Tony Dear.
Baishakhi Ray
Assistant Professor, Computer Science
PhD, University of Texas, Austin, 2013; MS, University of Colorado, Boulder, 2009; BTech, Calcutta University, India, 2004; BSc, Presidency College, India, 2001
Baishakhi Ray works on end-to-end software solutions and treats the entire software system – anything from debugging, patching, security, performance, and development methodology, to even the user experience of developers and users.
At the moment her research is focused on machine learning bias. For example, some models see a picture of a baby and a man and identify it as a woman and child. Her team is developing ways to train such systems and to solve these practical problems.
Ray previously taught at the University of Virginia and was a postdoctoral fellow in computer science at University of California, Davis. In 2017, she received Best Paper Awards at the SIGSOFT Symposium on the Foundations of Software Engineering and the International Conference on Mining Software Repositories.
Ronghui Gu
PhD, Yale University, 2017; Tsinghua University, China, 2011
Ronghui Gu focuses on programming languages and operating systems, specifically language-based support for safety and security, certified system software, certified programming and compilation, formal methods, and concurrency reasoning. He seeks to build certified concurrent operating systems that can resist cyberattacks.
Gu previously worked at Google and co-founded Certik, a formal verification platform for smart contracts and blockchain ecosystems. The startup grew out of his thesis, which proposed CertiKOS, a comprehensive verification framework. CertiKOS is used in high-profile DARPA programs CRASH and HACMS, is a core component of an NSF Expeditions in Computing project DeepSpec, and has been widely considered "a real breakthrough" toward hacker-resistant systems.
Carl Vondrick
PhD, Massachusetts Institute of Technology, 2017; BS, University of California, Irvine, 2011
Carl Vondrick's research focuses on computer vision and machine learning. His work often uses large amounts of unlabeled data to teach perception to machines. Other interests include interpretable models, high-level reasoning, and perception for robotics.
His past research developed computer systems that watch video in order to anticipate human actions, recognize ambient sounds, and visually track objects. Computer vision is enabling applications across health, security, and robotics, but current systems require large labeled datasets to work well, which are expensive to collect. Instead, Vondrick's research develops systems that learn from unlabeled data, which will enable computer vision systems to efficiently scale up and tackle versatile tasks. His research has been featured on CNN and Wired and in a skit on the Late Show with Stephen Colbert, for training computer vision models through binge-watching TV shows.
Recently, three research papers he worked on were presented at the European Conference on Computer Vision (ECCV). Vondrick comes to Columbia from Google Research, where he was a research scientist.
Tony Dear
Lecturer in Discipline, Computer Science
PhD, Carnegie Mellon University, 2018; MS, Carnegie Mellon University, 2015; BS, University of California, Berkeley, 2012
Tony Dear's research and pedagogical interests lie in bringing theory into practice. In his PhD research, this idea motivated the application of analytical tools to motion planning for "real" or physical locomoting robotic systems that violate certain ideal assumptions but still exhibit some structure – how to get unconventional robots to move around with the stealth of animals and biological organisms, how to simplify those tools and extend them to other systems, and how to generalize mathematical models so they can be used across multiple robots.
In his teaching, Dear strives to engage students with relatable examples and projects, alternative ways of learning, such as an online curriculum with lecture videos. He completed the Future Faculty Program at the Eberly Center for Teaching Excellence at Carnegie Mellon and has been the recipient of a National Defense Science and Engineering Graduate Fellowship.
At Columbia, Dear is looking forward to teaching computer science, robotics and AI. He hopes to continue small scale research projects in robotic locomotion and conduct outreach to teach teens STEM and robotics courses.
Steps dominate gas evasion from a mountain headwater stream
Gianluca Botter ORCID: orcid.org/0000-0003-0576-88471,
Anna Carozzani1,
Paolo Peruzzo ORCID: orcid.org/0000-0002-6712-91971 &
Nicola Durighetto1
Steps are dominant morphologic traits of high-energy streams, where climatically- and biogeochemically-relevant gases are processed, transported to downstream ecosystems or released into the atmosphere. Yet, capturing the imprint of the small-scale morphological complexity of channel forms on large-scale river outgassing represents a fundamental unresolved challenge. Here, we combine theoretical and experimental approaches to assess the contribution of localized steps to the gas evasion from river networks. The framework was applied to a representative, 1 km-long mountain reach in Italy, where carbon dioxide concentration drops across several steps and across a reference segment without steps were measured under different hydrologic conditions. Our results indicate that local steps lead the reach-scale outgassing, especially for high and low discharges. These findings suggest that steps are key missing components of existing scaling laws used for the assessment of gas fluxes across water-air interfaces. Therefore, global evasion from rivers may differ substantially from previously reported estimates.
River networks transport, process, and release a multitude of chemical substances, which are relevant to the biogeochemical functioning of stream ecosystems and eventually affect the fragile interconnections of the land-water-climate nexus1,2,3,4. In particular, headwater streams are important greenhouse gas sources to the atmosphere, owing to the combination of enhanced input of dissolved matter from the surrounding landscape with high exchange rates across water-air interfaces. Consequently, quantifying gas emissions from upland freshwater systems is regarded as an important scientific challenge with multi-faceted implications for a broad range of disciplines including ecology, biology, and climate sciences5,6,7,8.
Riffles represent a distinctive trait of most rivers worldwide9. In high-gradient streams, where the granulometry is varied, riffles frequently give rise to sequences of steps in which local hydraulic discontinuities of the water flow are observed. Step bedforms encompass a wide variety of channel structures, which are widespread across different regions of the globe, including humid areas, desert ephemeral streams, semiarid environments, and alpine settings10,11. In some instances, the local morphology of the stream jointly with the enhanced turbulence observed in correspondence with the plunging jet may promote the formation of a submerged pool, where the characteristic travel time of water and solutes increases significantly. The important role of steps and step-pool bedforms in regulating fluvial sediment transport and channel morphology has been extensively studied12,13,14,15,16,17,18,19, but systematic knowledge about their contribution to gas exchange between freshwater systems and the atmosphere remains elusive.
Several authors have argued that waterfalls, riffles, steps, and cascades might promote gas exchange with the atmosphere, owing to the enhanced turbulence and air entrainment that are typically observed in correspondence of abrupt discontinuities of the flow field7,20,21,22,23,24,25,26,27,28. However, available empirical data about the outgassing produced by individual steps or cascades are relatively limited. Cirpka et al.20 and Natchimuthu et al.29 have used tracer injections to demonstrate that the presence of cascades and waterfalls significantly increases the reaeration coefficient in a set of tens-of-meters long river reaches located in Switzerland and Sweden. More recently, Leibowitz et al.30, Vautier et al.31, Whitmore et al.32 and Schneider et al.33 have shown that mass evasion within channel stretches that contain waterfalls is enhanced, causing a loss of carbon dioxide (CO2) or injected tracers within relatively short distances in the range of 15% to 50% of the initial mass. However, spatial patterns of gas evasion were typically monitored at relatively coarse spatial resolutions (i.e. some tens of meters), and empirical observations at scales comparable to the step size are much rarer (see ref. [31]). High-resolution data, instead, represent a powerful means to reduce the uncertainty in the characterization of small-scale patterns of stream outgassing, and enable more robust assessments of gas evasion produced by local hydromorphologic heterogeneities of rivers34.
Existing studies aimed at quantifying the relative contribution of cascades to the total stream outgassing typically rely on spatial patterns of gas concentration within river reaches that contain steps or waterfalls30,31,33. These patterns, however, inherently mirror the unique morphologic characteristics of the case studies selected for the analysis. Consequently, the existing estimates cannot be easily upscaled or extrapolated to different contexts. In other instances, gas evasion produced by steps and cascades was investigated through the analysis of the underlying mass transfer rate, k28,29,31,35. However, the mean value of k in a given stream portion cannot quantify the magnitude of the internal peaks of gas transfer occurring in the case of non-homogeneous hydrodynamic fields36, especially when the size of the fluid volume responsible for most of the evasion is significantly smaller than the measurement resolution—as in the presence of a falling jet (Supplementary Fig. 1). Thus, the value of k in correspondence of steps, riffles, cascades, and waterfalls is highly scale dependent (the larger the averaging water volume around the jet, the lower the corresponding mean value of k). For this reason, we suggest that the mass transfer rate might not be an appropriate metric to describe gas exchange processes in correspondence with abrupt discontinuities of the flow field, where the energy dissipation is markedly heterogeneous and the actual water volume involved in the majority of gas evasion is unknown (and potentially very small).
Owing to these theoretical and practical limitations—in spite of the growing awareness of the importance of local heterogeneity of the flow field in water-air gas exchange—the relative contribution of steps to the total outgassing in morphologically-complex reaches is not fully clear. This study aims at filling this gap by developing a theoretical and experimental framework for the study of gas emissions in heterogeneous streams. The approach was applied to a high-gradient channel in the Italian Alps where water CO2 concentrations were measured across 19 natural and artificially created steps, and along a reference turbulent segment without steps (Fig. 1). The main innovation of the experimental setup is that we capitalize on direct CO2 records gathered under different discharge conditions, without relying on Schmidt-number scaling, which is problematic in the case of bubble-mediated transport of the type observed downstream of steps and cascades26,35,37,38. The major theoretical advance, instead, pertains to the development of a modular and scalable metric for disentangling the contribution of turbulent segments and local steps to the total stream outgassing. The scalability of this metric was exploited to assess the role of local steps in the reach-scale outgassing, taking into account both the temporal variations of the discharge and the morphological characteristics of the river bed.
Fig. 1: Focus reach of the Valfredda creek and experimental setup.
a planar view of the reach selected for this study, shown with an orthophoto of the eastern part of the Valfredda catchment in background. Here, light blue and red indicate reach A, with length LA = 1060 m, and reach B, with length LB = 543 m, respectively. The inset within the planar view shows the reference segment without steps of length ℓr = 13 m. b overview of the reference segment indicated in (a). c Example of an artificial step, created by forcing the stream into a pipe and then covering the downstream river bed with a plastic film. d example of a scoured natural step. The insets in the last three panels show the observed time series of water CO2 concentration in the upstream (blue line) and downstream (orange line) cross sections, the positions of which are indicated as blue and orange circles in the three pictures.
Concentration damping in streams
For the sake of simplicity, we conceptualize high-gradient stream networks as a heterogeneous sequence of two types of elements: steps and segments (Fig. 2). Local steps are point-wise hydraulic discontinuities of the flow, generated by a drop of the riverbed Δh higher than the typical flow depth (Δh > 10 cm in this case). In such circumstances, the presence of a falling jet promotes air entrainment, bubbles and foaming which enhance gas exchange with the atmosphere. Turbulent segments, instead, are continuous, relatively regular river stretches located between pairs of steps, in which the flow is gradually varied. Therein, turbulence and gas exchange are promoted by heterogeneities of the velocity field, which are in turn produced by e.g. hurdles, stones, bends, and bed roughness. In cases where a geomorphic pool is observed downstream of a step, the pool is considered to be part of the downstream segment—given that the outgassing process generated by a step is highly localized around the falling jet, regardless of the presence of the pool (Supplementary Fig. 1).
Fig. 2: Schematic of the decomposition of a reach into segments and steps.
Definition and aggregation of the damping factors fC and fS within a complex system, which leads to the expression of the dominance ratio r. Image generated with Inkscape.
Evaluating the role of local steps in river gas evasion requires the definition of a metric capable of objectively determining the separate contribution to the outgassing of steps and turbulent segments. To this aim, we use the concept of concentration damping, which is a dimensionless, scalable measure of gas evasion applicable to individual steps, single segments, or composite heterogeneous channels. Under steady-state conditions, the 1D spatial pattern of the concentration C of a dissolved gas advected downstream (x direction) and evaded into the atmosphere is exponentially decreasing with a spatially heterogeneous decay rate. Therefore, C(x), i.e. the concentration in the position x along the streamline, can be expressed as a function of the concentration in the upstream section (x = 0), C0, and the atmospheric concentration Ca as
$$C(x)-{C}_{a}=({C}_{0}-{C}_{a})\exp [-f(x)]$$
where f(x) is the exponential damping factor (hereafter damping factor), defined as the product between the effective exchange rate along the stretch (0, x), Keq(x), and the corresponding water travel time τ(x) (i.e. f(x) = Keq(x)τ(x), see "Methods"). Physically, f(x) provides a measure of the fraction of excess mass (i.e. the mass exceeding that transported by the stream at the equilibrium, when C = Ca) removed along the streamline from 0 to x, which can be in fact calculated as 1 − e−f(x) ("Methods"). The damping factor of a continuous channel segment (say, ci) of length ℓi can be written as
$${f}_{{c}_{i}}({\ell }_{i})={K}_{{c}_{i}}{\tau }_{{c}_{i}},$$
where \({\tau }_{{c}_{i}}=\int\nolimits_{0}^{{\ell }_{i}}1/u(x){{{{{\rm{d}}}}}}x\) is the travel time spent by water parcels in the segment (measurable via tracer experiments) and \({K}_{{c}_{i}}\) the effective exchange rate therein.
Analogously, the damping factor of the step si, \({f}_{{s}_{i}}\), can be expressed as
$${f}_{{s}_{i}}({{\Delta }}{h}_{i})={K}_{{s}_{i}}\,{\tau }_{{s}_{i}},$$
where \({K}_{{s}_{i}}\) is the effective exchange rate within the step i and \({\tau }_{{s}_{i}}\) the corresponding travel time. The notation emphasizes that \({f}_{{s}_{i}}\) should depend on the step height Δhi, which drives the amount of energy dissipated by the water flow and the ensuing outgassing process. In the case of steps, it is practically unfeasible to separately measure \({\tau }_{{s}_{i}}\) and \({K}_{{s}_{i}}\). Nevertheless, step exchange rates (\({K}_{{s}_{i}}\)) are expected to be very high owing to the local increase of energy dissipation and the enhanced air entrainment in correspondence of the falling jet20,39,40,41. Consequently, \({f}_{{s}_{i}}\) could be similar to (or even larger than) \({f}_{{c}_{i}}\) in many settings, in spite of the local nature of the outgassing process in correspondence of the steps (i.e., \({\tau }_{{s}_{i}} \, < \, {\tau }_{{c}_{i}}\)).
The damping factor is additive and commutative ("Methods"), thereby implying that the value of f for a series of segments (steps) can be calculated as the sum of the damping factors associated with the individual segments (steps) involved, regardless of their specific order (Supplementary Text 1.2). Consequently, we can evaluate the relative contribution to the total outgassing induced by all the steps embedded in a focus reach (or river network) using the dominance ratio r, which is defined as the ratio between the damping factor of the steps (fs) and the damping factor of the segments (fc) of that reach (see Fig. 2 and Supplementary Text 1.2):
$$r=\frac{{\sum }_{{}_{i}} \, {f}_{{s}_{i}}}{{\sum }_{{}_{i}} \, {f}_{{c}_{i}}}=\frac{{f}_{s}}{{f}_{c}}.$$
In Eq. (4), fc (fs) is expressed as the sum of the damping factors of all the segments (steps) included in the reach (Fig. 2 and "Methods"). If the dominance ratio is equal to unity, then the steps and the continuous segments of the focus reach provide an equal contribution to the total river outgassing (meaning that the gas concentration damping produced by a sequence of steps would be the same as the concentration damping produced by the turbulent continuous segments without the steps). Likewise, if r > 1 then the steps provide a larger contribution to the outgassing—or, if r < 1, a smaller one—as compared to the contribution generated by the continuous turbulent stretches. Crucially, r is not affected by the specific order according to which segments and steps are arranged, but only depends on key geometrical and hydraulic features of river networks (e.g. step height/spacing, segment slope).
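To make the bookkeeping behind Eq. (4) concrete, the following minimal Python sketch aggregates per-step and per-segment damping factors into the dominance ratio r. The numerical values and the function name are illustrative placeholders, not part of the study's data or code.

```python
# Minimal sketch of Eq. (4): aggregating damping factors into the dominance ratio r.
# The damping-factor values below are invented placeholders, not measured data.

def dominance_ratio(f_steps, f_segments):
    """Ratio between the total step damping factor and the total segment damping factor."""
    f_s = sum(f_steps)      # additive over individual steps (order does not matter)
    f_c = sum(f_segments)   # additive over individual segments
    return f_s / f_c

# Hypothetical reach: three steps and three turbulent segments
f_steps = [0.12, 0.07, 0.20]
f_segments = [0.10, 0.15, 0.08]

r = dominance_ratio(f_steps, f_segments)
print(f"dominance ratio r = {r:.2f}")  # r > 1 means the steps dominate the outgassing
```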
Steps dominate reach-scale gas evasion
Our experimental setup allowed robust estimates of the CO2 fluxes and damping factors across the steps and the reference segment belonging to the focus reach. Observed concentrations in the focus reach ranged from 823 to 1297 ppm (mean concentration = 1130 ppm, standard deviation = 250 ppm) and the corresponding CO2 fluxes released into the atmosphere ranged between 0.49 and 15.4 g C/d for the steps, and between 1.0 and 7.1 g C/d for the segment (Supplementary Tables 4, 6). In the reference segment, the damping factor \({f}_{{c}_{r}}\) ranged from 0.1 to 0.32, depending on the underlying flow rate, Q (Fig. 3). These values fall within the range of damping factors observed in the literature (Supplementary Fig. 11), and corresponded to normalized gas exchange rates, k600, in the interval (3−15) m/d, which led to a percentage of excess mass released into the atmosphere (calculated as \(1-{{{{{{\rm{e}}}}}}}^{-{f}_{{c}_{r}}}\!\)) between 10% and 25%. The non-monotonic dependence of \({f}_{{c}_{r}}\) on Q was explained by the interplay between two important drivers of gas transfer across water-air interfaces: (i) the mean flow velocity, which is positively correlated with the outgassing velocity and is an increasing function of Q (Supplementary Fig. 5); and (ii) the ratio between exchange area and water volume, which is proportional to the exchange rate but decreased with Q in the reference segment (Supplementary Table 3). Consequently, \({f}_{{c}_{r}}\) peaked for intermediate discharges.
Fig. 3: Damping factors for the steps and the reference segment.
a damping factor of the step i, \({f}_{{s}_{i}}\), as a function of the step height Δhi for scoured natural steps (cyan circles), covered natural steps (pale blue circles) and artificially simulated steps (green circles). The dashed line represents the linear relation \({f}_{{s}_{i}}=0.3{{\Delta }}{h}_{i}\) (95% CI of slope: 0.27, 0.32), with Δhi in m, which was obtained by fitting a simple linear regression through the least squares approach on the data (n = 19, p-value < 0.001). R2 = 0.978 is the coefficient of determination of the linear regression. b damping factor in the reference segment, \({f}_{{c}_{r}}\), for different discharge conditions in the range from 0.19 to 2.11 l/s.
In the range of drop heights analyzed in this paper (from 0.2 to 0.83 m), the step damping factor fs varied between 0.03 and 0.27, depending on the underlying elevation drop Δh. As the discharges observed in the study reach were quite small (<3 l/s), we also included in our analysis a scoured natural step of the main Valfredda river, with a height of 0.43 m and a discharge much higher than that observed in the focus reach (108 l/s). In all cases \({f}_{{s}_{i}}\) had the same order of magnitude as \({f}_{{c}_{r}}\), demonstrating that a local step may generate nearly the same outgassing as turbulent river segments tens of meters long. This similarity is also reflected by the values of the CO2 fluxes released by the steps into the atmosphere, which were comparable to those evaded along the reference segment (Supplementary Tables 4, 6).
Interestingly, \({f}_{{s}_{i}}\) exhibited an almost linear dependence on the step height Δhi and was statistically independent of the discharge Q (Fig. 3a and Supplementary Tables 5, 6). In particular, the natural step characterized by a discharge exceeding 100 l/s lies almost perfectly on the linear regression line fitted to all points. The above evidence indicates that gas exchange in steps might not be primarily related to the turbulence generated by the jet impacting the free surface, which is known to increase with Q. Instead, we hypothesize that fs is likely to be driven by bubble-mediated processes—the magnitude of which might be controlled by the total jet energy, which is a linearly increasing function of Δh.
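The linear relation reported in Fig. 3a corresponds to a zero-intercept least-squares fit of \({f}_{{s}_{i}}\) against Δhi. The sketch below illustrates such a fit on synthetic step heights and damping factors (not the measured Valfredda data); the R2 convention used here, computed about the sample mean, is an assumption.

```python
import numpy as np

# Sketch of a zero-intercept least-squares fit f_s = a * dh (cf. Fig. 3a).
# The arrays below are synthetic examples, not the data collected in the field.
dh  = np.array([0.20, 0.35, 0.43, 0.60, 0.83])   # step heights [m]
f_s = np.array([0.06, 0.10, 0.13, 0.18, 0.25])   # step damping factors [-]

# Least-squares slope for a line through the origin: a = sum(x*y) / sum(x^2)
a = np.sum(dh * f_s) / np.sum(dh ** 2)

# Coefficient of determination (here computed about the mean, one common convention)
residuals = f_s - a * dh
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((f_s - f_s.mean()) ** 2)

print(f"fitted slope a = {a:.2f} 1/m, R^2 = {r2:.3f}")
```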
The natural steps, where CO2 production was impeded by manually scouring and removing all the biofilm from the river bed and the downstream pool prior to each measurement, showed a behavior which was essentially indistinguishable from that of the artificially created steps, suggesting that the procedure used for simulating the steps with pipe diversions did not introduce significant biases in our analysis. The dependence of \({f}_{{s}_{i}}\) on the step height (Δhi) in natural steps covered with the plastic film was in line with that observed for the other steps, while the underlying absolute values were slightly lower—likely because in this setting the falling jet adhered to the plastic film, thereby reducing the reaeration rate and the gas evasion in the covered steps.
The ratios between fs and fc in the three target reaches analyzed in this study (the reaches A and B, and the virtual reach A*, see "Methods") were in the range [0.9−4.3], depending on the underlying discharge level and the specific reach analyzed. These values were calculated taking into account the number and heights of the steps within each reach, and using a linear empirical function to link \({f}_{{s}_{i}}\) to Δhi as suggested by our experimental data (dashed line in Fig. 3a). Interestingly, most of the gas evasion induced by the steps was associated with the smallest drop heights. In fact, about 60% of fs was contributed by steps with heights smaller than 35 cm (Supplementary Table 8). Given that fs turned out to be independent of Q while fc peaked for intermediate discharge levels, r had a non-monotonic dependence on Q, with higher values observed for low and high streamflow conditions. While only a few experimental points were used to estimate the dependence of the dominance ratio on the discharge, in all the settings analyzed r was systematically larger than (or close to) unity, thereby indicating that the contribution provided by the steps to the total gas evasion from the Valfredda creek was at least 50%, with even higher percentages under low-flow and high-flow conditions.
The above estimates of r relied on field measurements which were gathered through four specific surveys, during which the discharge in the reference segment varied between 0.19 and 2.11 l/s. Therefore, to understand whether these measurements were representative of the long-term behavior of the system in the entire study period, we analyzed how daily variations in the streamflow drained by the focus reach—driven by fluctuations of rainfall and soil moisture content in the root zone—could impact the temporal variability of the dominance ratio. To this aim, the dynamics of Q during the summer and fall of 2021 were reconstructed by combining field measurements and a simple hydrological model. The temporal pattern of r was then reconstructed from simulated discharge variations ("Methods"), using the empirical relationship between r and Q shown in Fig. 4a. Our results clearly indicate that r remained consistently above unity during the whole monitoring season for the three target river reaches analyzed. The highest values of r were observed during high-flow conditions. The resulting average values of the dominance ratio during the whole monitoring period were equal to 2.4 (for reach A), 2.5 (for reach B) and 3.2 (for reach A*). These values differ from the simple algebraic average of the experimental points shown in Fig. 4a, since they properly take into account the dependence of r on Q and the relative frequency associated with different discharge levels in the focus reach during the study period. We conclude that the outgassing from the study reach of the Valfredda was largely dominated by the local evasion induced by steps and pools during the summer and early fall of 2021.
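One simple way to implement the reconstruction described above is a piece-wise linear interpolation of the empirical r–Q pairs applied to the simulated discharge record, as sketched below. All numerical values are invented for illustration and are not the survey data.

```python
import numpy as np

# Sketch: reconstructing r(t) from simulated discharges via piece-wise linear
# interpolation of the empirical r-Q relation (cf. Fig. 4). Values are illustrative only.

Q_obs = np.array([0.19, 0.73, 1.40, 2.11])   # hypothetical surveyed discharges [l/s]
r_obs = np.array([2.8, 1.6, 2.0, 3.5])       # hypothetical dominance ratios at those discharges

Q_sim = np.array([0.25, 0.60, 1.10, 1.90, 2.05, 0.40])   # hypothetical simulated discharges [l/s]

# np.interp interpolates linearly and clamps to the end values outside the observed range
r_sim = np.interp(Q_sim, Q_obs, r_obs)

print("simulated dominance ratios:", np.round(r_sim, 2))
print("time-averaged r =", round(float(r_sim.mean()), 2))
```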
Fig. 4: Dominance ratio variations induced by changes in the underlying hydrologic conditions.
a dominance ratio as a function of the stream discharge Q in the reaches A (light gray), B (gray) and A* (black). Circles represent the observed values of r, the dashed lines show a linear piece-wise interpolation. When r > 1, steps dominate the outgassing (upper region of the plot). b Temporal dynamics of the discharge in the reference segment of the focus reach from July 1 to Nov 1, 2021. Observed streamflows are indicated as gray dots, while the simulated discharges (see "Methods") are shown by the shaded-gray region. c Temporal dynamics of the dominance ratio during the same time window (shown in b), for reaches A (light gray), B (gray) and A* (black). Circles refer to the observed values, while the solid lines refer to simulated values estimated based on the simulated discharges (shown in b) using the piece-wise linear interpolation between r and Q (shown in a). When r > 1 the steps dominate the outgassing (upper region of the plot).
Implications for large-scale studies
On practical grounds, objectively decomposing morphologically-complex channels of the type investigated here into segments and steps may not be straightforward. Here, steps were identified in correspondence with sharp drops in the active river bed, with heights that exceeded 10 cm. In our experimental setup, drops of this type gave rise to aerated falling jets followed by bubbles and/or foaming in the downstream pool, thereby enhancing gas exchange with the atmosphere. In some cases, however, vertical drops could be very small or involve only a portion of the active riverbed, creating spatially heterogeneous hydrodynamic conditions which are quite difficult to describe. These hybrid elements were not considered as actual steps in this paper—and their contribution to gas emissions was neglected accordingly. Furthermore, owing to the spatial heterogeneity of key hydro-morphological characteristics such as slope, discharge and riverbed composition/roughness, the continuous emissions in all the segments belonging to the study reach might not be perfectly represented by the behavior of the reference segment as postulated by the upscaling procedure proposed in this paper. Nevertheless, in spite of all these limitations and the inherently local nature of our study, we believe that the results shown here can be of general validity and properly describe the order of magnitude of the processes involved. In our study reach, we detected 271 steps in 1.03 km, with a mean distance between two subsequent steps of about 3.8 m, and a mean step height of 42.8 cm (Supplementary Table 7). These numbers are in line with previously published data in other regions of the world. In fact, the mean step spacing was found to be 2.56 m in the Western Cascades (Oregon, USA)42, 5.29 m in Southern California10 and between 3.9 and 6.5 m in Northern Italy43,44. Likewise, the mean step height was found to be between 0.47 and 1 m in D'Agostino and Lenzi43, 0.49 m in Wilcox et al.44 and 0.22 m in Chartrand et al.45. Therefore, the important contribution of small-height steps to the total outgassing observed in the Valfredda is expected to emerge in many high-slope settings, where step and pool bedforms dominate the channel morphology.
Our study revealed that the footprint of gas emissions produced by local steps does not vanish at the reach scale, owing to the pronounced concentration damping in correspondence of each step and the high frequency of steps typical of steep mountain rivers. Although we might expect the specific value of r to be spatially variable from site to site depending on local morphologic and hydraulic features, our analysis provides important clues for the identification of the drivers of fs and r in river networks and enables the identification of general guidelines for the application of the proposed framework to other contexts. The empirical data collected in this study indicate that fs could be easily extrapolated to any setting in which steps are observed, as the damping factor seems to be only dependent on the step height Δhi. In fact, we did not notice sizable differences among the behavior of the steps created with artificial pipes, that of the small natural steps contained in our target reach (width < 0.5 m, Q < 3 l/s) and the outgassing of a natural step with a larger width and a much higher discharge (width >2 m, Q > 100 l/s). While we recognize that more experimental data gathered under a broader range of conditions would be necessary to make stronger claims, we propose that, for a given step height, the damping factor is nearly the same regardless of other important hydromorphologic features (step shape, presence/absence of pools, width, discharge). Interestingly, owing to the additivity of \({f}_{{s}_{i}}\) within a reach with multiple steps and the linear dependence of fs,i on Δhi, the number and size of individual steps do not impact the value of fs of a reach, which instead depends only on the total elevation drop lost through all the steps contained therein, Δhs = ∑i Δhi.
Extrapolating fc across different segments, instead, is likely to be less straightforward. Several empirical equations taken from the literature could be used for this purpose, in particular the experimental relationship observed between the mass transfer rate and the turbulent kinetic energy dissipation rate, ε26. This relation postulates that fc would be nearly the same in river segments in which the flow velocity and the slope are the same. Therefore, r could be nearly constant across different reaches in which (i) the discharge and velocity are similar; (ii) the mean slope is the same; (iii) the fraction of elevation drop taking place through the steps is the same. As a consequence, to extrapolate the value of r in a river network, specific data about the small-scale morphological traits of reaches would be necessary, unless this information is substituted by empirical geomorphic laws for the prediction of the frequency and height of local steps based on larger-scale terrain attributes (see e.g. ref. [45]). Yet, more experimental data gathered within streams of larger size would be necessary to substantiate the proposed method and confirm its suitability to be extrapolated across different scales and settings.
The damping factors and the dominance ratio quantify the potential outgassing of different stream elements, whereas the actual value of evaded mass does depend on the spatial correlation between the sources of matter along the stream and the spatial patterns of evasion25. For instance, if relatively high water CO2 concentrations (e.g. induced by external supply of matter from the surrounding hillslopes, hyporheic exchange, and pronounced ecosystem respiration) are observed in those portions of a reach where the value of f is higher (e.g. because the elevation drop induced by the steps is larger), carbon dioxide evasion is expected to be particularly enhanced. Given the local nature of gas emissions from steps, cascades, and waterfalls, these geomorphic elements could act as important emission hotspots, where the excess mass transported downstream by the flow is quickly released into the atmosphere (e.g. refs. 23, 27, 30). Carbon dioxide emissions from local steps can be particularly significant if the stream bed in correspondence with these steps is partly exposed and thus covered by biofilm, an instance which is known to enhance local CO2 production46,47. Therefore, there could be a significant amount of outgassed mass in correspondence of steps, which could be essentially undetectable because of the very short distances traveled from the input (or production) site to the evasion point. These CO2 fluxes from rivers to the atmosphere might not be captured by simplified approaches in which the representative stream CO2 concentrations are estimated exploiting sparse point-wise measurements3,5,6,7,8,22,24,25,27,29. More broadly, we propose that the accuracy of current methods that indirectly estimate the stream metabolism through observed gas concentration differentials48,49,50 can be highly sensitive to the specific position of the selected sampling points and the small-scale geomorphic characteristics of the corresponding upstream reaches.
The analyses presented in this paper highlight a series of potential shortcomings in the methods currently in use for large-scale estimates of stream outgassing. Steps represent crucial morphologic components of high-energy streams, as they regulate physical or chemical exchanges at the interface with the landscape and the atmosphere. Consequently, a proper characterization of the spatial frequency and height of such steps is an important prerequisite for a robust assessment of gas evasion from channel networks. In heterogeneous high-energy streams, in fact, different stretches characterized by the same mean slope and velocity could lead to highly variable gas evasion rates depending on their internal configuration. For instance, according to Eqs. (8) and (10), the 13 m reference segment considered in this study was able to evade—through a total elevation drop of 1.4 m—approximately 8 % to 27 % of the available excess mass, depending on the underlying discharge rate (fc = 0.09 for Q = 2.11 l/s → \(1-{{{{{{\rm{e}}}}}}}^{-{f}_{c}}=0.08\); fc = 0.32 for Q = 0.73 l/s → \(1-{{{{{{\rm{e}}}}}}}^{-{f}_{c}}=0.27\)). If the same mean slope and elevation drop were obtained by combining nearly horizontal segments with two steps of height 70 cm each, the damping factor would be independent of Q and much higher than that observed in the reference segment (from 30 % to 250 % higher, depending on the underlying discharge value), with a percentage of excess mass evaded close to 35 % (f = 0.42 → \(1-{{{{{{\rm{e}}}}}}}^{-{f}}=0.35\)). The example indicates that for a given mean slope and velocity of a stream—i.e., for a fixed value of turbulent kinetic energy dissipation rate—both the apparent gas exchange rate at the water-air interface and the corresponding gas fluxes could be highly variable depending on the internal configuration of the reach (e.g. the frequency and height of local steps). In particular, higher emissions are expected to be associated with settings in which a sizable proportion of the total elevation drop takes place through local steps, where the steps dominate the outgassing process. Instead, the hydromorphologic parameters used for the prediction of gas exchange rates in ungauged sites (e.g., discharge, mean slope, mean velocity)5,26,51,52,53 do not explicitly incorporate the effect of local steps and small-scale heterogeneity in the stream morphology. Therefore, in order to improve the precision of large-scale estimates of gas fluxes from rivers we should develop novel, more sophisticated scaling laws that differentiate between channels with or without steps, linking the apparent gas exchange rate and the damping factor to the frequency and height of the steps contained in a reach.
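The figures quoted in this example follow directly from Eq. (8) and the empirical relation fs = 0.3 Δh of Fig. 3a; the short sketch below simply reproduces that arithmetic.

```python
import numpy as np

# Verification of the worked example: a 1.4 m elevation drop evaded either along the
# 13 m reference segment or through two local steps of 0.7 m each (values from the text).

def evaded_fraction(f):
    """Fraction of excess mass released into the atmosphere, Eq. (8)."""
    return 1.0 - np.exp(-f)

# Reference segment: observed damping factors at two discharge levels
for f_c, Q in [(0.09, 2.11), (0.32, 0.73)]:
    print(f"segment, Q = {Q} l/s: f_c = {f_c}, evaded fraction = {evaded_fraction(f_c):.2f}")

# Two steps of 0.7 m each, using the empirical relation f_s = 0.3 * dh
f_s = 0.3 * (0.7 + 0.7)
print(f"two 0.7 m steps: f_s = {f_s:.2f}, evaded fraction = {evaded_fraction(f_s):.2f}")
```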
The theoretical innovation introduced by this paper might imply a paradigm shift in gas emission studies, redirecting the efforts of the scientific community toward a better characterization of small-scale heterogeneity of streams and the development of more integrative approaches able to overcome the limits of the mass transfer rate—a metric which is not suited to describe local processes such as the outgassing in correspondence of the falling jet of steps, cascades, and waterfalls. The direct implications of our findings for large-scale assessments of gas evasion are certainly relevant. On the one hand, if existing studies have disregarded the effect of steps, the actual flux of greenhouse gases across water-air interfaces of mountain streams would be much larger than what is currently foreseen in the literature. This might be the case if previous empirical studies had been preferentially performed in stream reaches where the continuous segments dominate over the steps. To demonstrate the important role of steps for large-scale gas emissions from river networks, we revised the estimate of the flux of CO2 released from Swiss mountain streams performed by Horgby et al.8—which does not account for local steps—and we extended that calculation to include the step contribution to gas evasion in high-energy rivers. To this aim, the dominance ratio r was calculated for each reach of the Swiss network as a function of the slope (as higher slopes imply more frequent and higher steps, thereby leading to higher r) and discharge (higher Q implies higher flow velocities and higher mass transfer rates, with lower values of r). Although the above extrapolation goes beyond the range of discharges and slopes observed in the present study and thus should be taken with extreme caution, our calculations indicated that the estimate of the flux of CO2 released from Swiss mountain streams given by Horgby et al.8 would need to be corrected from approximately 3.5 to about 9.5 g CO2/m2/y (Fig. 5) if the steps are accounted for. In particular, we observed a significant increase in the number of reaches where the evasion is larger than 15 g CO2/m2/y owing to the emissions from the steps embedded in the steepest branches of the Swiss network. These results hint once again at the key role of local morphological traits of rivers for global emissions of CO2 into the atmosphere. On the other hand, in the case in which the effect of steps was already (at least partly) included in existing large-scale estimates—simply because some of the high energy experimental reaches used for the development of scaling laws do contain steps—quantifying the large-scale impact of steps on gas emissions from rivers would be even more troublesome. In this case, amending current estimates of regional CO2 fluxes would require not only a systematic characterization of the morphological traits of the reaches where mass transfer rates were previously measured, but also the identification of potential biases in extrapolating available data across ungauged reaches. In fact, the upscaling procedures currently in use lack a suitable stratification based on key step features (in particular, the fraction of height drop associated with local steps), which seems to be instead a necessary step forward in stream outgassing studies. A literature analysis revealed that data about the internal structure of the river reaches used for experimental tracer studies—lying at the basis of large-scale predictions of gas emissions—are seldom available.
Thus, we propose that more efforts are needed to collect and analyze data about the small-scale geometry of streams where gas evasion has been measured. Better characterizing the local morphological traits of streams that regulate the mass exchanged through water-air interfaces could help us to constrain the budget of focal chemical species (e.g. carbon, oxygen, nitrogen) relevant to the water-land-climate system.
Fig. 5: Effect of steps on CO2 emissions from Swiss mountain streams.
Frequency distribution of reach-wise CO2 fluxes estimated by Horgby et al.8, \({F}_{{{{{{\rm{C}}}}}}{{{{{{\rm{O}}}}}}}_{2}}\) (gray histograms), and the corresponding frequency distribution of the fluxes estimated by taking into account the local emissions generated by steps, \({F}_{{{{{{\rm{C}}}}}}{{{{{{\rm{O}}}}}}}_{2}}^{*}\) (orange histograms), for 23,343 Swiss mountain streams. The black and orange dashed lines represent the median flux values estimated by Horgby et al.8 and this study, respectively. Note that the tail of the frequency distribution including the steps reaches values up to \({F}_{{{{{{\rm{C}}}}}}{{{{{{\rm{O}}}}}}}_{2}}^{*}\approx 500\,{{{{{\rm{kgC}}}}}}\,{{{{{{\rm{m}}}}}}}^{-2}{{{{{\rm{y}}}}}}{{{{{{\rm{r}}}}}}}^{-1}\).
Methods
The equation governing the spatial patterns of gas concentration in a one-dimensional system with a curvilinear coordinate x aligned with the main flow direction, under the assumptions of stationarity (constant flow rate Q and time-invariant gas concentrations), no dispersion, no lateral input, and absence of internal gas production, reads:
$$u(x)\frac{{{{{{\rm{d}}}}}}C(x)}{{{{{{\rm{d}}}}}}x}+K(x)[C(x)-{C}_{a}]=0\,,$$
where u(x) is the local velocity in the streamline direction, Ca the atmospheric concentration, C(x) the local water gas concentration, and K(x) the local, spatially variable exchange rate, which is equal to the mass transfer rate k divided by the mean water depth. Crucially, the exchange coefficient K embeds the coupled effects of the mass transfer induced by the turbulence of the flow and that associated with gas transport mediated by bubbles and foams (if any, see ref. 54). The solution of Eq. (5) is given by
$$C(x)={C}_{a}+({C}_{0}-{C}_{a})\exp \left[-\int\nolimits_{0}^{x}\frac{K({x}^{{\prime} })}{u({x}^{{\prime} })}{{{{{\rm{d}}}}}}{x}^{{\prime} }\right]={C}_{a}+({C}_{0}-{C}_{a})\exp [-f(x)]$$
In Eq. (6), the exponential damping factor f(x) is defined as
$$f(x)=\int\nolimits_{0}^{x}\frac{K({x}^{{\prime} })}{u({x}^{{\prime} })}{{{{{\rm{d}}}}}}{x}^{{\prime} }=\int\nolimits_{0}^{\tau (x)}K({t}^{{\prime} }){{{{{\rm{d}}}}}}{t}^{{\prime} }={K}_{eq}(x)\tau (x),$$
where \({x}^{{\prime} }\) is the integration variable (representing any arbitrary position between 0 and x), Keq(x) is a weighted spatial average of K in the stretch from 0 to x and \(\tau (x)=\int\nolimits_{0}^{x}1/u({x}^{{\prime} })\,{{{{{\rm{d}}}}}}{x}^{{\prime} }\) is the corresponding transit time—the time necessary to travel from 0 to x. Manipulating both sides of Eq. (1), one can easily derive the following expression for the mass removed in (0, x) scaled to the excess mass Q(C0−Ca) (i.e., the maximum value of mass that can be removed before the equilibrium with the atmosphere is reached):
$$\frac{Q\left({C}_{0}-C(x)\right)}{Q\left({C}_{0}-{C}_{a}\right)}=1-{{{{{{\rm{e}}}}}}}^{-f(x)}$$
Thanks to Eq. (1), fc—the damping factor of an ideal channel stretch of length L composed of all the continuous segments of the focus reach—can be expressed as
$${f}_{c}=\ln \left[\frac{{C}_{0}-{C}_{a}}{{C}_{L}-{C}_{a}}\right],$$
where C0 and CL are the gas concentrations in the upstream (x = 0) and downstream (x = L) sections of the reach. Operationally, given the practical impossibility of measuring \({f}_{{c}_{i}}\) within all the segments belonging to the focus reach, fc was calculated based on the value of the damping factor of a reference segment (see below) with length ℓr, \({f}_{{c}_{r}}\). The latter was estimated from Eq. (9) through direct gas concentration measurements as
$${f}_{{c}_{r}}=\ln \left[\frac{{C}_{0}-{C}_{a}}{{C}_{{\ell }_{r}}-{C}_{a}}\right],$$
where C0 and \({C}_{{l}_{r}}\) are the concentrations in the upstream and downstream sections of the reference segment. Then, fc was calculated from \({f}_{{c}_{r}}\) as
$${f}_{c}={f}_{{c}_{r}}\frac{L}{{\ell }_{r}},$$
exploiting the additivity of \({f}_{{c}_{i}}\) across multiple segments and assuming that the exchange rate in the reference segment is equal to the average value of K across all the segments contained in the focus reach (Supplementary Text 1.6).
Similarly, \({f}_{{s}_{i}}\) was calculated from Eq. (1) as
$${f}_{{s}_{i}}=\ln \left[\frac{{C}_{{u}_{i}}-{C}_{a}}{{C}_{{d}_{i}}-{C}_{a}}\right],$$
where \({C}_{{u}_{i}}\) and \({C}_{{d}_{i}}\) represent the water gas concentration upstream and downstream of the step i. The damping factor of a sequence of N steps was then evaluated by summing up the damping factors of all the individual steps \({f}_{{s}_{i}}\) (i.e., \({f}_{s}={\sum }_{i} \, {f}_{{s}_{i}}\), see Supplementary Information), as in Eq. (4) of the main text.
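Taken together, Eqs. (10)–(12) turn paired upstream/downstream concentration measurements into damping factors. The sketch below illustrates the computation; the concentration values are invented, and only the lengths L and ℓr are taken from the study reach.

```python
import numpy as np

# Sketch of Eqs. (10)-(12): damping factors from paired CO2 concentrations [ppm].
# Concentration values are invented for illustration.
C_a = 400.0                    # atmospheric concentration

# Reference segment (Eq. 10), then upscaled to the whole reach (Eq. 11)
C0, C_lr = 1200.0, 1050.0      # upstream / downstream of the reference segment
f_cr = np.log((C0 - C_a) / (C_lr - C_a))
L, l_r = 1060.0, 13.0          # total segment length (here approximated by the reach length)
                               # and reference-segment length [m]
f_c = f_cr * L / l_r

# Individual steps (Eq. 12), summed to obtain the step damping factor of the reach
C_up   = np.array([1190.0, 1150.0, 1100.0])
C_down = np.array([1120.0, 1080.0, 1020.0])
f_s = np.sum(np.log((C_up - C_a) / (C_down - C_a)))

print(f"f_cr = {f_cr:.3f}, upscaled f_c = {f_c:.2f}, f_s (3 steps) = {f_s:.3f}")
```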
Study site and focus reach
The study site selected in this paper is a step-pool channel of the Rio Valfredda, a high-gradient headwater catchment of the Piave river basin, in the Italian Alps36,55. The climate of the site is typically alpine: precipitation is relatively high throughout the year (annual rainfall > 1400 mm), with significant snowfall during winter and melting in spring56. The selected reach is 1.36 km long, and its elevation ranges from 1911 to 1720 m a.s.l., with a mean slope of 0.14 m/m (Fig. 1a). The reach was selected because of its accessibility and the significant CO2 concentrations observed therein (typically above 1000 ppm). The river bed is steeper in the upstream part, where it flows southwards. Then, the reach runs south-east across some pastures and a mixed larch-spruce forest. The reach is fed at its source by a groundwater spring, and the pH ranges between 7.6 and 8.1. The discharge weakly increases downstream, owing to the interplay between the losing bed and the hillslope lateral input. The stream bed is silty and dominated by boulders, cobbles and wooden logs of different sizes that give rise to several steps and pools. About 300 m upstream of its confluence with the Valfredda, the channel was almost inaccessible due to the presence of a landslide and several fallen trunks. Therefore, the analysis was concentrated in the upper portion of the channel (reach A in Fig. 1a).
The idea behind this study is to evaluate the contribution of steps and segments to the total gas evasion in streams by measuring CO2 concentration drops within continuous segments and individual steps belonging to our study reach. However, the use of CO2 concentration time series to quantify the outgassing of different stream elements required the confounding effect of CO2 production/input to be eliminated. This goal was achieved by isolating the water flowing across segments and steps from the river bed using a plastic film. The reach A was decomposed into 270 segments and 271 steps (Fig. 1b). Therein, during the summer of 2021 water CO2 concentrations were measured upstream and downstream of 19 different steps with variable height and a reference segment with a slope (discharge) similar to the mean slope (mean discharge) of the continuous segments within the study reach. This reference segment was identified in the middle part of the reach (46∘22'50"N, 11∘49'39"E, see Fig. 1a) with the aim of quantitatively representing the continuous gas emissions from all the segments contained in the study reach. This reference segment has an average slope ic,r of 0.108 m/m and a length ℓr of 13 m (inset of Fig. 1a, b). CO2 concentration measurements were performed under different hydrologic conditions (i.e. variable discharges) and considering different types of steps with heterogeneous geometry, as detailed in the following sections of the "Methods".
Discharge measurements
We performed several volumetric measurements of the discharge rate, Q, at the two end points of the reference segment. This was done by recording the filling time of a graduated tank at the upstream and downstream sections of the reference segment. We performed 10 measurements between July and October 2021, with observed discharge values between 0.2 and 3.2 l/s (see Supplementary Table 1). Each measurement was the average of at least 5 replicates performed at both locations within one hour.
Travel time measurements and estimation of the relevant hydraulic properties
The water travel time along the reference segment, \({\tau }_{{\ell }_{r}}\), was measured through instantaneous injections of a diluted sodium chloride (NaCl) solution in the upstream cross section of the segment. The temporal variations of specific conductivity at the outlet of the segment were then measured using a multi-parameter sonde (YSI EXO2). The travel time was recorded both under natural conditions and after having covered the stream bed with the plastic film (Supplementary Figs. 3 and 4). The longitudinal mean velocity, u, was estimated as the segment length, ℓr, divided by the observed travel time. The procedure allowed us to verify that the mean velocity along the reference segment was not significantly impacted by the presence of the plastic film. Moreover, after having covered the river bed with the plastic film, we also measured the mean width, W, and water depth, H, of the flow. This was done by taking spatial averages of the local values observed in different cross sections along the segment. The obtained hydraulic geometry scaling relationships (Supplementary Fig. 5) were found to be in line with the relationships proposed in the literature for mountain streams8, thereby suggesting that the flow conditions in place during the CO2 measurements were nearly natural.
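A common way to extract the travel time from an instantaneous salt injection is to take the centroid of the excess-conductivity breakthrough curve; the sketch below assumes this estimator and uses a synthetic conductivity record, so it is only indicative of the kind of processing involved.

```python
import numpy as np

# Sketch: travel time and mean velocity from a salt-dilution breakthrough curve.
# The conductivity record is synthetic and the centroid estimator is an assumption,
# not necessarily the exact procedure adopted in the study.

t = np.arange(0.0, 300.0, 1.0)                                     # time since injection [s]
background = 150.0                                                  # baseline conductivity [uS/cm]
ec = background + 80.0 * np.exp(-0.5 * ((t - 60.0) / 15.0) ** 2)    # synthetic breakthrough curve

signal = np.clip(ec - background, 0.0, None)      # tracer-induced excess conductivity
tau = np.sum(t * signal) / np.sum(signal)         # centroid travel time [s]

l_r = 13.0                                        # reference segment length [m]
u = l_r / tau                                     # mean longitudinal velocity [m/s]
print(f"travel time = {tau:.1f} s, mean velocity = {u:.3f} m/s")
```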
CO2 concentration measurements
Paired upstream and downstream CO2 concentration measurements were taken in the reference segment and in 19 steps (see Supplementary Text 1.5), using a membrane-based NDIR sensor, the MiniCO2™ designed by ProOceanus Systems Inc., Bridgewater, Canada. The instrument has a tubular shape, 370 mm long and 53.4 mm in diameter, and uses infrared detection to measure the partial pressure of dissolved CO2. Once the internal gas is fully equilibrated with the surrounding water (typically 10–15 min after the deployment), an NDIR measurement of the equilibrated internal gas is taken at a wavelength of 4.26 μm close to the absorption band of CO2 at a controlled optical cell temperature. The time of deployment in a given position ranged from 30 to 90 min. To eliminate the confounding effect of high-frequency fluctuations in the recorded CO2 signal of the MiniCO2™ sensor (see Fig. 1 for an example), at least 20 min of continuous measurements at steady-state were gathered, from which we estimated the probability density function of CO2 concentrations and the related mean. Steady-state conditions were pre-identified based on the temporal patterns of the long-term average of the signal. The steady-state mean was then taken as representative of the equilibrium carbon dioxide concentration in water (Supplementary Figs. 10, 12). Paired upstream-downstream concentrations were gathered within 120 min of each other, so as to reduce as much as possible spurious effects induced by diel variations of water CO2 concentration. In our analysis we neglected possible effects of pH spatial variations on observed CO2 concentrations, as the underlying pH spatial gradients across the segment and the steps were below the detection limit of our instrument (which was 0.1 for the multi-parameter sonde used in this study). Atmospheric CO2 concentrations were also measured with our MiniCO2™ sensor after each measurement performed upstream or downstream of the segment and the steps. Ca was in the range between 390 and 412 ppm throughout the field campaign. These values were in substantial agreement with those recorded in the nearest stations of the World Data Centre for Greenhouse Gases (Monte Cimone, Sonnblick Observatory, Zugspitze). At any rate, the impact of small variations in the value of Ca on the main paper results was negligible.
Estimating \({f}_{{s}_{i}}\) and \({f}_{{c}_{r}}\)
In this study the damping factors were evaluated on a purely experimental basis. The damping factor of individual steps \({f}_{{s}_{i}}\) was estimated from upstream/downstream CO2 measurements via Eq. (12), considering three different step types: (i) 11 simulated steps, which were created by forcing the water into pipes and then letting the water flow hit a covered portion of channel bed from a given height, so as to reproduce the behavior of a falling jet of a natural drop with the desired Δh (Fig. 1c and Supplementary Fig. 6); (ii) 4 covered steps, obtained by covering natural steps with a thin plastic film, which was carefully shaped around the actual channel bed in the ramp, the step and the downstream pool (Supplementary Fig. 7e); (iii) 4 natural steps belonging to the focus reach (Fig. 1d and Supplementary Fig. 7f), which were scoured to remove the existing biofilm prior to each measurement. These precautions allowed us to eliminate the effect of CO2 production in all the analyzed steps, while ensuring natural hydraulic conditions in all the measured steps. Further details on the experimental setup are available in Supplementary Text 1.5.
As for the estimate of the damping factor in the reach segments, owing to the practical impossibility of quantifying the outgassing within all the segments contained in the study reach, fc was estimated from the observed concentration drop in the reference segment by means of an upscaling procedure (Eq. (11)). To measure \({f}_{{c}_{r}}\), prior to each field measurement the stream bed was covered by a plastic film, to avoid lateral input of water and CO2 and internal production induced by the ecosystem metabolism (Supplementary Fig. 9). Then upstream vs downstream CO2 concentrations were measured under different hydrologic conditions (discharge range: from 0.19 to 2.11 l/s), and Eq. (10) was applied. In the light of the constraint placed by the specific slope of the reference segment and the mean slope of the segments contained in the reach A, we analyzed three different scenarios, in which the damping factor of the reference segment was upscaled via Eq. (11) referred to three different target reaches: (i) the whole reach A, which has a length LA of 1060 m and a mean slope of its continuous segments ic,A of 0.081 m/m; because ic,A is significantly smaller than the slope of ℓr, in this case fc should be overestimated; (ii) the reach B, including 130 steps, which has a length LB equal to 543 m and is characterized by an average slope of its continuous segments ic,B = 0.101 m/m—quite close to the slope of the representative segment; (iii) the reach A*, an idealized reach characterized by (a) the same elevation drop as that observed between the two end points of reach A; and (b) segments that have the same slope as the reference segment. A* is 769 m long, has a slope of its segments equal to 0.108 m/m (by definition equal to ic,r) and contains all the 271 steps of reach A (further details in Supplementary Text 1.6). Since the mean slope of the segments included in B and A* is closer to the actual slope of the reference segment, the corresponding estimates of fc and r should be more reliable in this case. Note that the mean slope of the segments included in a reach was calculated starting from the mean slope of the overall reach, taking into account the heights of all the steps included in that reach and assuming a longitudinal size of 10 cm for each step (Supplementary Text 1.6). This implies that the length of the segments included in each target reach is slightly smaller than the length of the whole target reach (Supplementary Table 7).
Morphological survey
The dominance ratio depends on the spatial frequency and the height distribution of the steps in the focus reach. We collected data about the step geometry during field surveys performed under very dry conditions. For each step we measured the step drop height, Δh, corresponding to the difference in water surface elevation across each nearly vertical fall with a drop higher than 10 cm. We mapped 271 steps in 1060 m with an average Δh equal to 23.7 cm (Supplementary Fig. 13). The frequency distribution of the step height was monotonically decreasing, with 47.6% of steps in the range 0−15 cm, and 70.5% of steps in the range 0−25 cm. Elevations, lengths and slopes of the relevant reaches were estimated through a high resolution (1 m) DTM.
Streamflow regime
The streamflow regime was estimated from rainfall data using a simple rainfall-runoff model. In particular, we used daily precipitation depths P [mm] recorded during the summer and fall seasons (June to October 2021) by a weather station of the Veneto Region Environmental Protection Agency (ARPAV) located in Falcade, 4.5 km away from the catchment centroid. Discharge time series in the focus reach were then simulated using an exponential IUH applied to the censored precipitation time series as
$$Q(t)=A\mathop{\sum}\limits_{i}{j}_{i}\,k\,{{{{{{\rm{e}}}}}}}^{-k(t-{t}_{i})}\,$$
where A is the catchment area, k is the recession rate, ti is the occurrence time of effective rain events, ji is the effective (i.e. censored) rain depth (ji = max(Pi − ϕ, 0)), with Pi representing the total rain depth and ϕ the censoring threshold embedding soil moisture dynamics57. ϕ and k were calibrated against the discharge observations available (Fig. 4c and Supplementary Text 1.8).
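A minimal implementation of Eq. (13) is sketched below. The rainfall series, catchment area and parameter values are illustrative and are not the calibrated ones.

```python
import numpy as np

# Sketch of the exponential-IUH rainfall-runoff model of Eq. (13).
# All numerical inputs are illustrative placeholders.

def simulate_discharge(P, A, k, phi, dt=1.0):
    """Daily discharge [l/s] from daily rainfall via an exponential IUH.

    P   : daily precipitation depths [mm]
    A   : catchment area [m^2]
    k   : recession rate [1/d]
    phi : censoring threshold embedding soil-moisture dynamics [mm]
    """
    j = np.maximum(P - phi, 0.0) / 1000.0           # effective rain depths [m]
    t = np.arange(len(P)) * dt                      # time axis [d]
    Q = np.zeros_like(t)
    for ti, ji in zip(t, j):
        mask = t >= ti
        Q[mask] += A * ji * k * np.exp(-k * (t[mask] - ti))   # [m^3/d]
    return Q * 1000.0 / 86400.0                     # convert m^3/d to l/s

P = np.array([0, 12, 0, 0, 25, 3, 0, 0, 0, 18, 0, 0], dtype=float)  # synthetic rainfall [mm]
Q = simulate_discharge(P, A=5.0e4, k=0.3, phi=5.0)
print(np.round(Q, 2))
```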
Estimation of CO2 emissions from Swiss mountain streams
CO2 fluxes released from the mountain streams of the whole of Switzerland were estimated by integrating the procedure identified by Horgby et al.8 with a simplified method to include the effect of gas evasion from local steps. As in Horgby et al.8, we considered the stream network provided by the Swiss Federal Office for the Environment, and we associated with any individual reach of the network the corresponding value of mean annual discharge (reference period: 1981–2000) (FOEN, 2016)58. For each reach of the network, the stream length (L) and the mean slope (i) were then calculated using a 2 m digital elevation model (Geodata © swisstopo)59. Likewise, stream width (W), water depth (H) and water velocity (u) were estimated starting from the underlying mean discharge exploiting three scaling laws developed by ref. [9], in which the values assumed by the empirical parameters involved were taken from Horgby et al.8. The procedure indicated by Horgby et al.8 was then used to estimate areal fluxes from each continuous stream segment, \({F}_{{{{{{\rm{CO}}}}}}_{2}}\), from simulated reach-scale values of CO2 concentration and mass transfer rate k (for more details on the data and the methods the reader is referred to ref. [8]). Assuming that these values of \({F}_{{{{{{\rm{CO}}}}}}_{2}}\) do not include the effect of local steps, as stated by the authors of the study, one can estimate the value of fc for each reach of the Swiss network from the available spatial map of k. The latter estimate was performed using Eq. (2), in which K was estimated as k/H and τ was set equal to L/u. Then, the estimate of \({F}_{{{{{{\rm{CO}}}}}}_{2}}\) provided by Horgby et al.8 was amended by adding the contribution to the outgassing produced by local steps, assigning to any reach of the Swiss network a total damping factor equal to fTOT = fc + fs = fc(1 + r) (instead of simply fc). To estimate the value of fs pertaining to a given reach, owing to the linearity of the relationship between fs,i and Δhi, only the total elevation drop associated with the steps of that reach, Δhs, needs to be known. Owing to the lack of detailed morphological data, Δhs was estimated for each reach of the Swiss network from the mean step spacing, λs, and the mean step height, \(\overline{{{\Delta }}h}\). The latter were in turn calculated as a function of the slope and the width of the reach as \({\lambda }_{s}=0.3113\,{i}^{-1.188}\) and \(\overline{{{\Delta }}h}=i\,W\), exploiting geomorphic relationships taken from the literature45,60. Then, the total elevation drop associated with steps in a reach of length L was calculated as \({{\Delta }}{h}_{s}=(L/{\lambda }_{s})\,\overline{{{\Delta }}h}\) and fs was estimated as fs = 0.3 Δhs (see Fig. 3a). Note that the steps were assumed to be relevant only in the stream reaches where \(\overline{{{\Delta }}h} \, > \, 0.5\ H\)—otherwise we assumed that the steps were unable to produce a jet, and fs was set to zero. Finally, total CO2 fluxes for each reach of the Swiss network were recalculated as
$${F}_{{{{{{\rm{CO}}}}}}_{2}}^{*}={F}_{{{{{{\rm{CO}}}}}}_{2}}\,(1+r)$$
which properly accounts for the localized CO2 evasion occurring in correspondence with local steps. Note that, according to the assumptions introduced, the value of r depends on the slope i and the discharge Q of a given reach.
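The step correction applied to the Swiss network can be condensed into a few lines, as in the sketch below. The formulas follow the procedure described above, while the reach attributes and the mass transfer rate are invented examples and the function name is ours.

```python
# Sketch of the step correction for one reach of a regional network (cf. Eq. (14)).
# The reach attributes below are invented; the formulas follow the text.

def corrected_flux(F_co2, L, i, W, H, u, k):
    """Return (F*, r): the step-corrected areal CO2 flux and the dominance ratio.

    F_co2 : areal flux without steps     L : reach length [m]
    i     : mean slope [m/m]             W : width [m]
    H     : depth [m]                    u : velocity [m/s]
    k     : mass transfer rate [m/d]
    """
    K = k / H                          # exchange rate [1/d]
    tau = L / u / 86400.0              # travel time [d]
    f_c = K * tau                      # segment damping factor, Eq. (2)

    lambda_s = 0.3113 * i ** (-1.188)  # mean step spacing [m]
    dh_mean = i * W                    # mean step height [m]
    dh_s = (L / lambda_s) * dh_mean    # total elevation drop through steps [m]
    f_s = 0.3 * dh_s if dh_mean > 0.5 * H else 0.0   # steps unable to produce a jet are neglected

    r = f_s / f_c
    return F_co2 * (1.0 + r), r

F_star, r = corrected_flux(F_co2=3.5, L=500.0, i=0.10, W=2.0, H=0.15, u=0.4, k=20.0)
print(f"r = {r:.2f}, corrected flux = {F_star:.1f} (same units as the input flux)")
```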
Data availability
The data that support the findings of this study are openly available in Botter et al. 202261 at http://researchdata.cab.unipd.it/id/eprint/619, reference number 619.
References
Cole, J. et al. Plumbing the global carbon cycle: integrating inland waters into the terrestrial carbon budget. Ecosystems 10, 172–185 (2007).
Battin, T. et al. The boundless carbon cycle. Nat. Geosci. 2, 598–600 (2009).
Raymond, P. et al. Global carbon dioxide emissions from inland waters. Nature 503, 355–359 (2013).
Drake, T. W., Raymond, P. A. & Spencer, R. G. Terrestrial carbon inputs to inland waters: a current synthesis of estimates and uncertainty. Limnol. Oceanogr. Lett. 3, 132–142 (2018).
Butman, D. & Raymond, P. Significant efflux of carbon dioxide from streams and rivers in the United States. Nat. Geosci. 4, 839–842 (2011).
Hotchkiss, E. et al. Sources of and processes controlling CO2 emissions change with the size of streams and rivers. Nat. Geosci. 8, 696–699 (2015).
Duvert, C., Butman, D., Marx, A., Ribolzi, O. & Hutley, L. CO2 evasion along streams driven by groundwater inputs and geomorphic controls. Nat. Geosci. 11, 813–818 (2018).
Horgby, A. et al. Unexpected large evasion fluxes of carbon dioxide from turbulent streams draining the world's mountains. Nature Communications 10, 1–9 (2019).
Leopold, L. B., Wolman, M. G. & Miller, J. P. Fluvial Processes in Geomorphology (USGS Publications Warehouse, 1964).
Chin, A. On the origin of step-pool sequences in mountain streams. Geophys. Res. Lett. 26, 231–234 (1999).
Chin, A. & Wohl, E. Toward a theory for step pools in stream channels. Prog. Phys. Geogr. 29, 275–296 (2005).
Ashida, K., Takahashi, T. & Sawada, T. Sediment yield and transport on a mountainous small watershed. Bull. Disaster Prevention Res. Inst. 26, 119–144 (1976).
Abrahams, A. D., Li, G. & Atkinson, J. F. Step-pool streams: adjustment to maximum flow resistance. Water Resources Res. 31, 2593–2602 (1995).
Wohl, E., Madsen, S. & MacDonald, L. Characteristics of log and clast bed-steps in step-pool streams of northwestern Montana, USA. Geomorphology 20, 1–10 (1997).
Montgomery, D. R. & Buffington, J. M. Channel-reach morphology in mountain drainage basins. Geol. Soc. Am. Bull. 109, 596–611 (1997).
Wohl, E. E. & Thompson, D. M. Velocity characteristics along a small step-pool channel. Earth Surf. Processes Landforms 25, 353–367 (2000).
Chin, A. The geomorphic significance of step-pools in mountain streams. Geomorphology 55, 125–137 (2003).
Comiti, F., Cadol, D. & Wohl, E. Flow regimes, bed morphology, and flow resistance in self-formed step-pool channels. Water Resour. Res. 45, W054424 (2009).
Wohl, E. & Jaeger, K. L. Geomorphic implications of hydroclimatic differences among step-pool channels. J. Hydrol. 374, 148–161 (2009).
Cirpka, O., Reichert, P., Wanner, O., Mueller, S. R. & Schwarzenbach, R. P. Gas exchange at river cascades: field experiments and model calculations. Environ. Sci. Technol. 27, 2086–2097 (1993).
Chan, C. N., Tsang, C. L., Lee, F., Liu, B. & Ran, L. Rapid loss of dissolved CO2 from a subtropical steep headwater stream. Front. Earth Sci. 9, 996 (2021).
Wallin, M. B. et al. Spatiotemporal variability of the gas transfer coefficient (KCO2) in boreal streams: Implications for large scale estimates of CO2 evasion. Global Biogeochem. Cycles 25 (2011).
Teodoru, C. R. et al. Dynamics of greenhouse gases (CO2, CH4, N2O) along the Zambezi River and major tributaries, and their importance in the riverine carbon budget. Biogeosciences 12, 2431–2453 (2015).
Marx, A. et al. A review of CO2 and associated carbon dynamics in headwater streams: a global perspective. Rev. Geophys. 55, 560–585 (2017).
Rocher-Ros, G., Sponseller, R. A., Lidberg, W., Morth, C.-M. & Giesler, R. Landscape process domains drive patterns of CO2 evasion from river networks. Limnol. Oceanogr. Lett. 4, 87–95 (2019).
Ulseth, A. Distinct air-water gas exchange regimes in low-and high-energy streams. Nat. Geosci. 12, 259–263 (2019).
Hall Jr, R. O. & Ulseth, A. J. Gas exchange in streams and rivers. Wiley Interdisciplinary Rev.: Water 7, e1391 (2020).
Looman, A., Maher, D. T. & Santos, I. R. Carbon dioxide hydrodynamics along a wetland-lake-stream-waterfall continuum (Blue Mountains, Australia). Sci. Total Environ. 777, 146124 (2021).
Natchimuthu, S., Wallin, M. B., Klemedtsson, L. & Bastviken, D. Spatio-temporal patterns of stream methane and carbon dioxide emissions in a hemiboreal catchment in Southwest Sweden. Sci. Rep. 7, 1–12 (2017).
Leibowitz, Z. W., Brito, L. A. F., De Lima, P. V., Eskinazi-Sant'Anna, E. M. & Barros, N. O. Significant changes in water pCO2 caused by turbulence from waterfalls. Limnologica 62, 1–4 (2017).
Vautier, C. et al. Mapping gas exchanges in headwater streams with membrane inlet mass spectrometry. J. Hydrol. 581, 124398 (2020).
Whitmore, K. M., Stewart, N., Encalada, A. C., Suarez, E. & Riveros-Iregui, D. A. Spatiotemporal variability of gas transfer velocity in a tropical high-elevation stream using two independent methods. Ecosphere 12, e03647 (2021).
Schneider, C. L. et al. Carbon dioxide (CO2) fluxes from terrestrial and aquatic environments in a high-altitude tropical catchment. J. Geophys. Res.: Biogeosci. 125, e2020JG005844 (2020).
Vidon, P. & Serchan, S. Impact of stream geomorphology on greenhouse gas concentration in a New York mountain stream. Water Air Soil Pollution 227, 1–13 (2016).
Hall Jr, R. O., Kennedy, T. A. & Rosi-Marshall, E. J. Air-water oxygen exchange in a large whitewater river. Limnol. Oceanogr.: Fluids. Environ. 2, 1–11 (2012).
Botter, G., Peruzzo, P. & Durighetto, N. Heterogeneity matters: aggregation bias of gas transfer velocity versus energy dissipation rate relations in streams. Geophys. Res. Lett. 48, e2021GL094272 (2021).
Asher, W. E. & Wanninkhof, R. The effect of bubble-mediated gas transfer on purposeful dual-gaseous tracer experiments. J. Geophys. Res.: Oceans 103, 10555–10560 (1998).
Hall Jr, R. O. & Madinger, H. L. Use of argon to measure gas exchange in turbulent mountain streams. Biogeosciences 15, 3085–3092 (2018).
Gameson, A. Weirs and the aeration of rivers. J. Inst. Wat. Engrs 11, 477–490 (1957).
Gulliver, J. S., Thene, J. R. & Rindels, A. J. Indexing gas transfer in self-aerated flows. J. Environ. Eng. 116, 503–523 (1990).
Moog, D. B. & Jirka, G. H. Stream reaeration in non uniform flow: macroroughness enhancement. J. Hydraulic Eng. 125, 11–16 (1999).
Grant, G., Swanson, F. & Wolman, M. Pattern and origin of stepped-bed morphology in high-gradient streams, Western Cascades, Oregon. Bull. Geol. Soc. Am. 102, 340–352 (1990).
D'Agostino, V. & Lenzi, M. A. Origine e dinamica della morfologia a gradinata (step pool) nei torrenti alpini ad elevata pendenza [Origin and dynamics of step-pool morphology in steep alpine streams]. Dendronatura 18, 7–38 (1997).
Wilcox, A., Wohl, E., Comiti, F. & Mao, L. Hydraulics, morphology, and energy dissipation in an alpine step-pool channel. Water Resour. Res. 47 (2011).
Chartrand, S., Jellinek, M., Whiting, P. & Stamm, J. Geometric scaling of step-pools in mountain streams: observations and implications. Geomorphology 129, 141–151 (2011).
Battin, T. J. et al. Biophysical controls on organic carbon fluxes in fluvial networks. Nat. Geosci. 1, 95–100 (2008).
Battin, T. J., Besemer, K., Bengtsson, M. M., Romani, A. M. & Packmann, A. I. The ecology and biogeochemistry of stream biofilms. Nat. Rev. Microbiol. 14, 251–263 (2016).
Mulholland, P. J., Houser, J. N. & Maloney, K. O. Stream diurnal dissolved oxygen profiles as indicators of in-stream metabolism and disturbance effects: Fort Benning as a case study. Ecol. Indicators 5, 243–252 (2005).
Dick, J. J., Soulsby, C., Birkel, C., Malcolm, I. A. & Tetzlaff, D. Continuous dissolved oxygen measurements and modelling metabolism in Peatland Streams. PLoS ONE 11, 19326203 (2016).
Segatto, P. L., Battin, T. J. & Bertuzzo, E. Modeling the coupled dynamics of stream metabolism and microbial biomass. Limnol. Oceanogr. 65, 1573–1593 (2020).
Raymond, P. et al. Scaling the gas transfer velocity and hydraulic geometry in streams and small rivers. Limnol. Oceanogr.: Fluids. Environ. 2, 41–53 (2012).
Wallin, M.et al. Carbon dioxide and methane emissions of Swedish low-order streams—a national estimate and lessons learnt from more than a decade of observations. Limnol. Oceanogr. Lett. 156–167 (2018).
Liu, S. et al. The importance of hydrology in routing terrestrial carbon to the atmosphere via global streams and rivers. Proc. Natl Acad. Sci. 119, e2106322119 (2022).
Klaus, M., Labasque, T., Botter, G., Durighetto, N. & Schelker, J. Unraveling the contribution of turbulence and bubbles to air-water gas exchange in running waters. J. Geophys. Res.: Biogeosci. 127, e2021JG006520 (2022).
Botter, G. & Durighetto, N. The stream length duration curve: a tool for characterizing the time variability of the flowing stream length. Water Resour. Res. e2020WR027282 (2020).
Durighetto, N., Vingiani, F., Bertassello, L., Camporese, M. & Botter, G. Intraseasonal drainage network dynamics in a headwater catchment of the italian Alps. Water Resour. Res. 56, e2019WR025563 (2020).
Botter, G., Porporato, A., Rodriguez-Iturbe, I. & Rinaldo, A. Basin-scale soil moisture dynamics and the probabilistic characterization of carrier hydrologic flows: Slow, leaching-prone components of the hydrologic response. Water Resour. Res. 43 (2007).
Mean runoff and flow regime types for the river network of Switzerland. https://www.bafu.admin.ch/bafu/en/home/topics/water/state/maps/geodata/mean-monthly-and-annu. Accessed 22 Aug 2022.
Digital elevation model of Switzerland, swissALTI3D. Accessed on 2022-08-22. https://www.swisstopo.admin.ch/en/geodata/height/alti3d.html.
Whittaker, J. & Jaeggi, M. Origin of step-pool systems in mountain streams. J. Hydraulics Division 108, 758–773 (1982).
Botter, G., Carozzani, A., Peruzzo, P. & Durighetto, N. Steps dominate gas evasion from a mountain headwater stream (Research Data Unipd, 2022). http://researchdata.cab.unipd.it/619/.
This research was supported by the European Community's Horizon 2020 Excellent Science Programme (grant no. H2020-EU.1.1.-770999).
Department of Civil, Environmental and Architectural Engineering, University of Padua, via Marzolo 9, 35131, Padua (PD), Italy
Gianluca Botter, Anna Carozzani, Paolo Peruzzo & Nicola Durighetto
Gianluca Botter
Anna Carozzani
Paolo Peruzzo
Nicola Durighetto
G.B. and N.D. conceived and designed the study. G.B., N.D., and A.C. performed the experiments. A.C., G.B., P.P., and N.D. analyzed the data and discussed the results. G.B. wrote the main text, all authors wrote the Supplementary Information and reviewed the paper. A.C., N.D., and P.P. prepared the figures.
Correspondence to Gianluca Botter.
Nature Communications thanks Robert Hall, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Botter, G., Carozzani, A., Peruzzo, P. et al. Steps dominate gas evasion from a mountain headwater stream. Nat Commun 13, 7803 (2022). https://doi.org/10.1038/s41467-022-35552-3
Assessing the performance of a Northeast Asia Japan-centered 3-D ionosphere specification technique during the 2015 St. Patrick's day geomagnetic storm
Nicholas Ssessanga1, Mamoru Yamamoto1 & Susumu Saito2
Earth, Planets and Space volume 73, Article number: 124 (2021)
This paper demonstrates and assesses the capability of the advanced three-dimensional (3-D) ionosphere tomography technique during severe conditions. The study area is northeast Asia and quasi-Japan-centred. Reconstructions are based on total electron content data from a dense ground-based global navigation satellite system receiver network and parameters from operational ionosondes. We used observations from ionosondes, Swarm satellites and radio occultation (RO) to assess the 3-D picture. Specifically, we focus on the St. Patrick's day geomagnetic storm (17–19 March 2015), the most intense in solar cycle 24. During this event, the energy ingested into the ionosphere resulted in Dst and Kp reaching values of ~ − 223 nT and 8, respectively, and the region of interest, the East Asian sector, was characterized by a ~ 60% reduction in electron densities. Results show that the reconstructed densities follow the physical dynamics previously discussed in earlier publications about storm events. Moreover, even when ionosonde data were not available, the technique could still provide a consistent picture of the ionosphere vertical structure. Furthermore, analyses show that there is a profound agreement between the RO profiles/in-situ densities and the reconstructions. Therefore, the technique is a potential candidate for applications that are sensitive to ionospheric corrections.
In transionospheric radio communication, central to precise performance is the ability to understand, characterize, and forecast the distribution of an irregular, transiently changing, dispersive terrestrial plasma (see for example Jakowski et al. 2011; Kelly et al. 2014; Eastwood et al. 2017; Yasyukevich et al. 2018; Rovira-Garcia et al. 2020). Despite the complexity of this subject, traditionally in the past few years a simplistic horizontal distribution has sufficed, to a certain level of accuracy, to correct for the ionospheric effects on radio signals (e.g., Saito et al. 1998; Jakowski et al. 2012, 2011; Ohashi et al. 2013). The horizontal distribution is mainly deducible from total electron content (TEC), a quantity derivable from abundantly available satellite–ground radio transmissions (Otsuka et al. 2002; Ma and Maruyama 2003). However, as technological systems become more integrated, the demand for high precision on transionospheric signals has increased. Thus, the need for a detailed three-dimensional (3-D) ionosphere picture, for better accuracy, is inevitable. Computerized ionospheric tomography (CIT) is a way to obtain the 3-D (or 2-D) distribution, i.e., an analysis technique that adds the height dimension from organized multi-point measurements of TEC. Such advanced CIT is now called ionospheric imaging and is vital in radio communication since it provides information on the four-dimensional (time and space) evolution of the electron density structure. An excellent review and history of some of these techniques is found in Bust and Mitchell (2008).
The fidelity of the imaging technique depends on the accuracy of the measurements as well as the temporal and spatial distribution of the data used in the reconstruction. Unfortunately, in most cases, measurements are imperfect and irregular in space and time. Besides, geometry constraints limit the observation information that can be included in the analysis. As a result, imaging or tomography analyses (inverse problems) are generally under-determined and ill-posed (Yeh and Raymund 1991; Raymund et al. 1994; Bust and Mitchell 2008). To obtain a consistent 3-D picture, prior information from other sources such as models is needed. The accuracy of this information and the question of how best to combine it with measurements have spawned an explosion of research and suggestions (e.g., Fremouw et al. 1992; Howe et al. 1998; Rius et al. 1998; Bust et al. 2000; Tsai et al. 2002; Mitchell and Spencer 2003; Ma et al. 2005; Schmidt et al. 2008; Okoh et al. 2010; Seemala et al. 2014; Ssessanga et al. 2015; Chen et al. 2016; Saito et al. 2017).
To maximize the advantages of imaging techniques, reconstructions are performed on a regional basis, wherein observations are abundant and computation is tractable under high spatial grid resolution settings. Over the East-Asian sector, an imaging technique has been developed based on a regional network of dense GNSS (global navigation satellite system) receivers and ionosondes (Saito et al. 2017, 2019; Ssessanga et al. 2021). Figure 1 showcases the network and the horizontal resolution settings. The vertical dimension is not illustrated in the picture but extends to a GNSS altitude of 25,000 km (see Saito et al. 2017). The technique is a two-stage algorithm: first, GNSS STEC (slant total electron content) is used to reconstruct the ionospheric electron density field based on constrained least-squares CIT; second, to improve the exactness of the reconstructed picture, the CIT densities are taken as the background in an ionosonde data assimilation technique which assumes that ionospheric electron densities are better described by a log-normal distribution. Saito et al. (2017, 2019) and Ssessanga et al. (2021) have provided the mathematical derivation of this algorithm together with some preliminary results showing that the algorithm performs better than a stand-alone CIT technique. However, the authors did not specifically provide a performance analysis of the algorithm during geomagnetically disturbed conditions. Moreover, the wide spectrum of energy injected into the ionosphere during geomagnetic storms leads to a chaotic and nonlinear ionosphere both in space and time. Although the main phase of a storm may last for a short period, the aftermath effects can last for many days before recovery [Prölss (1995) and Buonsanto (1999)]. Under such deviant conditions, most climatological models [such as the International Reference Ionosphere (IRI)] or physics-based models used in forecasting and nowcasting ionospheric behaviour are rendered insufficient. Therefore, for applications that utilize transionospheric signals, a data-sensitive, near-real-time, observation-driven algorithm is paramount. The goal of this paper is to show that the technique suggested by Ssessanga et al. (2021) can detect electron density dynamics even during severe storm conditions, and is thus a potential candidate for near-real-time ionospheric corrections/applications. All hyperparameters are maintained the same as published in Saito et al. (2017, 2019) and Ssessanga et al. (2021). "A brief description of the algorithm" section gives a short mathematical description of the technique. "Analysis of reconstructions during geomagnetic storm conditions" section presents reconstruction–measurement (ionosonde) comparisons illustrating the ionospheric electron density dynamics in connection with the energy ingested during the geomagnetic storm. It also analyses in detail the reconstructed densities in comparison with independent observations from Swarm satellites and radio occultation. "Summary" section concludes the article.
Image obtained from the works of Ssessanga et al. 2021: a network of GNSS receivers (blue asterisks) and ionosonde stations (solid red stars) used in the analysis. Horizontal boundaries of the different sub-grid regions that compose the overall grid used in computation are marked by solid dashed lines. The sub-grid resolution decreases with increasing distance away from the dense network
A brief description of the algorithm
Consider a required regional 3-D electron density field state (\(\vec{n}\)) and a column vector \(\vec{Y}\) of observations within the specified grid. Then a cost function to be minimized under constrained least-squares fit is expressed as
$$J(\vec{n}) = \left\|\mathbf{A}\vec{n} - \vec{Y} + \mathbf{A}^{\mathrm{bc}}\vec{n}^{\mathrm{bc}}\right\|^{2} + \lambda\left\|\mathbf{W}\vec{n} + \mathbf{W}^{\mathrm{bc}}\vec{n}^{\mathrm{bc}}\right\|^{2},$$
and the solution \((\vec{n})\) that minimizes (1) is written as
$$\vec{n} = \left(\mathbf{A}^{T}\mathbf{A} + \lambda\,\mathbf{W}^{T}\mathbf{W}\right)^{-1}\left(\mathbf{A}^{T}\left(\vec{Y} + \mathbf{A}^{\mathrm{bc}}\vec{n}^{\mathrm{bc}}\right) - \lambda\,\mathbf{W}^{T}\mathbf{W}\vec{n}^{\mathrm{bc}}\right),$$
where A is an operator that maps the electron density field state into observation space, and W is a weight matrix based on prior information that restrains the derived electron density from exceeding a certain value (Saito et al. 2017, 2019). Elements superscripted with bc represent boundary conditions obtained from the NeQuick model (Nava et al. 2008), and λ (> 0) is a regularization parameter. In a nutshell, the tomography analysis is based on the dense GPS-TEC data, with empirical model data normalized to vertical-profile peak densities and only used to constrain the solution (see Seemala et al. 2014). We developed a software system on a desktop PC and realized real-time tomography analysis from 1-s GEONET absolute TEC data from 200 stations over Japan. The inter-frequency biases (IFBs) of the satellites and the receivers, used in estimating absolute TEC, are estimated hourly based on an algorithm discussed in Ma and Maruyama (2003) and Saito et al. (2017).
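For readers who prefer a computational view, a minimal NumPy sketch of the constrained least-squares solution in equation (2) is given below. The construction of the observation operator A, the weight matrix W and the NeQuick-based boundary terms is not shown, and the array shapes are assumptions made for illustration only.

```python
import numpy as np

def cit_solution(A, Y, W, A_bc, n_bc, lam):
    """Constrained least-squares CIT estimate of the voxel densities (equation (2)).

    A          : (m, N) operator mapping voxel densities to slant TEC
    Y          : (m,)   observed slant TEC
    W          : (N, N) regularisation weight matrix
    A_bc, n_bc : boundary-condition operator and densities (NeQuick-based)
    lam        : regularisation parameter lambda (> 0)
    """
    WtW = W.T @ W
    lhs = A.T @ A + lam * WtW
    rhs = A.T @ (Y + A_bc @ n_bc) - lam * WtW @ n_bc
    # solve the normal equations rather than forming an explicit matrix inverse
    return np.linalg.solve(lhs, rhs)
```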
Performance of this tomography (Saito et al. 2017, 2019) was generally good, particularly in May–July, but limited when the F-region peak densities (NmF2) appeared below the ~ 270 km altitude range in October and November. The discrepancy in performance was attributed to the tomography constraint conditions, which were fine-tuned based on data covering May–July (Saito et al. 2019).
Ssessanga et al. (2021) developed an advanced version of this tomography by incorporating two more features that provide a more realistic representation of the ionosphere. One is the inclusion of the ionosonde parameters h′F and foF2 from the nearby operational ionosondes. The other is the evaluation of the data as the logarithm of the targeted densities, whereby all the elements in the random vector \(\vec{n}\) are assumed to be drawn from log-normal distributions; Gaussian statistics are then applied by taking the natural logarithm of each element (\(n_i\)). That is
$$x_{i} = \ln \left( {n_{i} } \right).$$
A new cost function is then formulated as follows:
$$J(\vec{X}) = \frac{1}{2}\left[\vec{Y} - \mathbf{A}(\vec{X})\right]^{T}\mathbf{R}^{-1}\left[\vec{Y} - \mathbf{A}(\vec{X})\right] + \frac{1}{2}\left[\vec{X} - \vec{X}_{b}\right]^{T}\mathbf{B}^{-1}\left[\vec{X} - \vec{X}_{b}\right],$$
where \(\overrightarrow{X}\) is a row vector comprising the required log densities (\({x}_{i}\)), R and B are data and background errors covariance matrices, and \(\vec{X}_{b}\) is a vector of background log densities from the CIT solution.
The analysis log density \(\vec{X}_{a}\) that minimizes (4), is derived as:
$$\vec{X}_{a}^{\,j} = \vec{X}_{b} + \mathbf{B}\mathbf{A}_{j-1}^{T}\left[\mathbf{R} + \mathbf{A}_{j-1}\mathbf{B}\mathbf{A}_{j-1}^{T}\right]^{-1}\left[\vec{Y} - \mathbf{A}(\vec{X}_{a}^{\,j-1}) + \mathbf{A}_{j-1}\left(\vec{X}_{a}^{\,j-1} - \vec{X}_{b}\right)\right],$$
where \(\mathbf{A}_{j-1}\) is the Jacobian evaluated at iteration j − 1. At the optimum, the real densities \(n_i\) are computed as \(10^{x_{a,i}^{j}}\); we have taken advantage of \(\ln(n_i) = 2.303\,\log_{10} n_i\) and used the more tractable base 10. These improvements enhanced the performance of our tomography analysis, with the ability to track the NmF2 below the 270 km altitude range. Figure 2 shows examples of reconstructed vertical electron density profiles at different ionosonde locations in Japan. Refer to Fig. 1 and Table 1 for the geographic locations and full names of the ionosonde stations. Colours green, blue and red represent the original tomography, the improved/modified version, and ionosonde bottomside densities, respectively. In each subplot, the 270 km baseline is illustrated with a purple horizontal dashed line. In contrast to the original tomography, the modified version displays a distinct ability to follow the measured profiles both vertically and horizontally at different ionosonde latitude–longitude locations. This adaptiveness is effective, and in what follows we further examine the potential of the modified tomography under extremely chaotic ionospheric conditions. In fact, this work is part of a pre-analysis of the modified tomography version before advancing to a near-real-time on-line version (https://www.enri.go.jp/cnspub/tomo3/plotting.html).
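The iterative update of equation (5) can be sketched compactly. The snippet below is a schematic Python/NumPy implementation in log-density space, in which the ionosonde forward operator and its Jacobian are passed in as callables because their actual construction (mapping voxel densities to foF2 and h′F) is not reproduced here; the fixed iteration count and the use of base-10 logarithms for the state vector are assumptions made for illustration.

```python
import numpy as np

def assimilate_log_density(x_b, Y, forward, jacobian, B, R, n_iter=5):
    """Iterate the analysis update of equation (5) in log-density space.

    x_b      : (N,) background log10 densities from the CIT step
    Y        : (m,) ionosonde-derived observations
    forward  : callable, forward(x) -> model equivalents of Y
    jacobian : callable, jacobian(x) -> (m, N) Jacobian of forward at x
    B, R     : background and observation error covariance matrices
    """
    x_a = x_b.copy()
    for _ in range(n_iter):
        A_j = jacobian(x_a)                          # Jacobian at the previous iterate
        S = R + A_j @ B @ A_j.T                      # innovation covariance
        innov = Y - forward(x_a) + A_j @ (x_a - x_b)
        x_a = x_b + B @ A_j.T @ np.linalg.solve(S, innov)
    return 10.0 ** x_a                               # back to electron densities (el/m^3)
```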
Examples of density profiles reconstructed at different ionosonde locations in Japan. The time stamps are indicated at the top of each subplot. The original tomography densities without the assimilation complexity are shown in green. Eminent is the robustness of the modified version (blue) to track the ionosonde measured (red) densities below the 270 km altitude range (purple line)
Table 1 Ionosonde stations used in analysis
Analysis of reconstructions during geomagnetic storm conditions
The geomagnetic storm analysed occurred on St. Patrick's day [day of year (DOY) 076–077] in 2015 and was among the most intense storms in the 24th solar cycle (Astafyeva et al. 2015). We reconstruct the ionosphere vertical structure during this period (at a resolution of 15 min) and briefly analyse the variations in reference to the ionosphere disturbance indicator, Dst index. Specifically, the vertical structure is analysed at ionosonde locations for easy comparison with ground ionosonde observations. In-situ densities from the Swarm constellation and radio occultation density profiles are also utilized to assess the reconstructed F-region topside densities. Throughout the analysis, international reference ionosphere (IRI-2016, Bilitza et al. 2017) model densities are also presented as a form of reference. However, it should be noted that the IRI model is a climatological model for average values, so might not be the best candidate for performance evaluation against ionosphere tomography models that include weather.
Storm chronological highlights
A coronal mass ejection (CME) was observed erupting between ~ 00:30 UT and 00:04 UT on DOY 074 and was predicted to encounter the Earth's magnetosphere on DOY 076. Figure 3 illustrates how the abundant energy injected into the magnetosphere-ionosphere system perturbed the terrestrial geomagnetic field leading to a Dst (Kp index) minimum (maximum) recording of approximately − 223 nT (8). Kp as blue is scaled on the right and Dst as black is scaled on the left. Dashed vertical lines indicate the main events of the storm. Both indices can be obtained from the World Data Center for Geomagnetism, Kyoto http://wdc.kugi.kyoto-u.ac.jp/ (Nose et al. 2015) or https://omniweb.gsfc.nasa.gov/form/dx1.html.
(Top) Variation of Kp and Dst indices during the 2015 St. Patrick's day geomagnetic storm. (Bottom) The interplanetary magnetic field (IMF) Bz component variation during the same period. Tick labels on top and bottom represent day of the year (DOY) and time in UT (~ LT-9 h), respectively. The sudden storm commencement (SSC) occurred at ~ 04:45 on DOY 076. The main phase (MP) of the storm started at ~ 06:30 UT and continued until ~ 22:00 UT when Dst hit a minimum ~ − 223 nT. During the MP the Kp rose to a maximum of ~ 8. The recovery phase continued through the following days
Before DOY 076, the geomagnetic indices Kp and Dst were relatively stable, with magnitudes below 4 and 50 nT, respectively. On DOY 076, at 04:45 UT (storm sudden commencement, SSC), the CME hit the Earth, leading to an increase in Dst. On the same day, at ~ 07:30 UT, the main phase of the storm commenced and the Dst decreased to a local minimum of ~ − 80 nT. During this period the interplanetary magnetic field (IMF) Bz component was reported to have turned southward (see bottom panel in Fig. 3 and also refer to Cherniak et al. 2015). From about 9:30 UT to 12:20 UT there was a short-lived recovery in the Dst index to ~ − 50 nT, likely due to the IMF Bz component turning northward (Astafyeva et al. 2015). From 12:20 UT the Dst continued with a gradually decreasing trend, reaching a global minimum (− 223 nT) at ~ 22:00 UT on DOY 076. The recovery phase continued through the following days with the Kp index remaining below 6.
Analysis of reconstructions at ionosonde locations
In Fig. 4 we have generated images from a time series of vertical profiles corresponding to locations of ionosondes within the high-resolution sub-grid region marked out in blue dashed lines in Fig. 1; Ssessanga et al. (2021) found that a coarse horizontal spacing (5° × 5°) in the outside sub-grid might not reflect a true representation of the ionospheric state and dynamics, particularly at the low latitudes where the ionosphere is expected to have steep gradients. Consequently, the results presented here are limited to the region where we expect a coherent and consistent reconstruction. Refer to Table 1 for the locations, four-character code names, geographic coordinates and geomagnetic latitude of each ionosonde used in the analysis. Ionosondes located in the high-resolution region are labeled with an "a" superscript.
Ionosphere vertical structure at 5 ionosonde locations within the grid, during the 2015 St. Patrick's Day geomagnetic storm. Top, middle and bottom panels represent IRI-2016, reconstructed, and ionosonde observations, respectively. Stations in each panel are arranged in ascending geomagnetic latitude. Tick labels on top and bottom of each panel represent day of the year (DOY) and time in UT (~ LT-9 h), respectively. Scaled on the right of each image is the Dst index, represented by the solid black line. Vertical dashed lines indicate the main events of the storm. In the IRI and reconstructed images, the purple solid-asterisk line shows the time variation of the hmF2 parameter at that ionosonde location
The images in Fig. 4 are arranged in accordance with increasing ionosonde geomagnetic latitude. The top, middle and bottom panels represent the IRI-2016 model, assimilated CIT, and observations from ionosondes, respectively. Tick labels at the top of each panel show days of the year while those at the bottom show time in UT (the East Asian sector local time (LT) is ~ 9 h ahead of UT). For easy comparison with ionosonde measurements, IRI and reconstructed densities were converted to frequency (MHz). The colour scale of the frequency in each panel is given in the bottom right corner.
In each image plate: for clarity, the altitude range is limited to the region of utmost interest (F-region, ~ 200–600 km). Superimposed is the Dst index (ordinate axis on the right) in order to track the response of the ionosphere during the different storm dynamics. As indicated earlier, dashed vertical lines indicate the main events of the storm. The horizontal dashed line at 300 km is included as a baseline to vividly track the uplift of the plasma density. The time–altitude variation of the peak density is represented by the purple solid-asterisk line. In the reconstructed and ionosonde images, white spaces indicate periods when GPS-ground receiver ray links and ionosonde observations did not exist or intersect that particular location. At a particular time stamp, if any data gaps exist within the 200–500 km altitude range, the peak density is not determined.
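The conversion from electron density to plasma frequency and the extraction of the peak parameters can be written down compactly. The following Python sketch uses the standard plasma-frequency relation fp [Hz] ≈ 8.98 √Ne (Ne in el/m³), which is not stated explicitly in the text, and applies the gap-handling rule described above; the array layout and NaN-based gap marking are assumptions for illustration.

```python
import numpy as np

def to_plasma_frequency_mhz(ne):
    """Convert electron density (el/m^3) to plasma frequency (MHz)."""
    return 8.98e-6 * np.sqrt(ne)

def peak_parameters(ne_profile, alt_km, window=(200.0, 500.0)):
    """Return (NmF2, hmF2) for one time stamp, or None if the 200-500 km range has gaps."""
    mask = (alt_km >= window[0]) & (alt_km <= window[1])
    segment = ne_profile[mask]
    if segment.size == 0 or np.any(np.isnan(segment)):   # data gap: peak not determined
        return None
    i_peak = int(np.argmax(segment))
    return segment[i_peak], alt_km[mask][i_peak]
```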
Throughout the selected analysis period, at all stations, the IRI model only shows the general diurnal variation of the F region without detailed features. This is expected since IRI is a climatological model that represents monthly averages of terrestrial plasma densities/frequencies (Bilitza et al. 2017). Consequently, the IRI model performance is usually found lacking when the underlying ionospheric driving mechanisms deviate from the general trend.
In contrast to IRI, the reconstructed results reflect changes in the plasma frequency following variations in the Dst index, as detailed next. The onset of the storm occurred when the East Asian sector was on the day-side. Compared to the reconstructed densities on the day before the storm (DOY 075), almost immediately after the SSC, at station OK426, which is located at geomagnetically low latitudes, there is a slight but noticeable instantaneous uplift (marked with a grey arrow) of the peak densities to altitudes above the 300 km baseline. The uplift reduces in amplitude towards the high latitudes (not noticeable at TO536) and is most probably due to geomagnetic-storm-induced eastward prompt penetration electric fields (PPEF), which modify the equatorial and low-latitude ionospheric phenomenology (Maruyama et al. 2004).
Approximately two hours after the SSC, there was a slight increase in F-region plasma densities (~ 280–340 km), particularly at stations located near the equatorial anomaly (OK426 and JJ433). In addition, there was a well-defined wavy modulation of the peak-density altitude at mid-latitude stations (JJ433, TO536 and IC437). The ~ 2-h delay falls within the time frame that a traveling atmospheric disturbance (TAD), created at high latitudes during geomagnetic storms, would take to propagate to the mid and low latitudes (for example, see the works of Shiokawa et al. 2003 and references therein). Therefore, the low-latitude ionisation and the wavy modulation of the mid-latitude F region could be related to an equatorward-propagating surge and to plasma density enhancements in the equatorial ionization anomaly (EIA) crests, following the day-side PPEF.
By the time the East Asian sector enters the night side (~ 10:30 UT), there is a short-lived recovery in the plasma frequency between 10:30 and 12:30 UT. Thereafter (~ 12:30–24:00 UT), specifically at stations located in the mid-latitudes (JJ433, TO536 and IC437) and towards high latitudes (WK546), the plasma is uplifted to high altitudes (~ 500 km), gradually falls to ~ 320 km, rises again to ~ 380 km and finally settles at an average altitude of ~ 300 km. Throughout this process, at all stations, the initial daytime plasma enhancements decrease gradually (to values nearly below 4 MHz), following the Dst index, which hits a minimum at about 22:00 UT. The decrease in F-region densities, accompanied by a descent in the altitude of the peak densities, could be attributed to an equatorward wind surge and to the night-side westward PPEF, which has the opposite effect (a reduction in EIA ionisation) relative to the day-side PPEF.
On the day after the storm (DOY 077), at all stations, the plasma density (frequency) remained nearly ~ 60% below the values observed on the day before the storm. Interesting to note throughout this period is the total or partial absence of data at all ionosonde stations. From the reconstructions, it is clear that these data gaps were due to a significant decrease in the F-region plasma frequency (< 4 MHz), such that the ionosondes could not detect the reflected echoes. This is a well-known, typical negative ionospheric response following a major geomagnetic storm (e.g., Fuller-Rowell et al. 1994; Prölss 1995; Tsagouri et al. 2000). A possible explanation for the intense plasma reduction is that the substorm activity on DOY 076 generated a composition bulge that entered the Asian sector and persisted through the next day (DOY 077). In fact, Astafyeva et al. (2015) and Nava et al. (2016) analysed the thermospheric column-integrated O/N2 ratio changes measured by the GUVI (global ultraviolet imager) instrument onboard the TIMED (thermosphere, ionosphere, mesosphere energetics and dynamics) satellite during the same geomagnetic storm (DOY 076–078, 2015) and found that the Asian sector had significant composition changes that could have led to the negative ionospheric response.
Latitude-wise, stations towards the high latitudes (WK546) exhibit the largest reduction in plasma density. This enormous reduction in F-region plasma densities led to an increase in slab thickness, which is seen to increase polewards (extending > 300 km in altitude) and is most pronounced during the evening of DOY 076 and the early hours of DOY 077. Indeed, different studies have already shown that the slab thickness depends on diurnal, annual, latitudinal and storm-time variations [see Stankov and Warnant (2009), and references therein]. From the longitudinal perspective, stations TO536 and IC437, which are almost on the same geomagnetic latitude (the difference is ~ 1°), show that the negative storm effect was greater on the eastern side.
On the third day after the storm (DOY 078), except for OK426, all stations maintain a slow (< 6 MHz) gradual recovery towards the normal plasma frequency values observed during the quiet period. Nava et al. (2016) analysed the same storm and illustrated that the recovery process lasted more than 7 days. At OK426, reconstructions indicate that the low-latitudes had an undulating peak, accompanied by reinforced ionisation covering altitudes ~ 200–420 km. Surprisingly, these effects do not extend to mid-latitudes (see reconstructions at JJ433, TO536 and IC437). Nava et al. (2016) also generated regional TEC maps corresponding to periods, before, during and after the St. Patrick's day storm. Over the Asian sector, on the day after the storm, ionization was confined at low latitudes near the equator, and it almost disappeared at middle latitudes. This characteristic is typical of westward zonal electric field penetration during disturbance dynamo (DD) when EIA is inhibited. Nonetheless, this ambiguous or peculiar behaviour may be related to other post-storm effects that might require further investigation (also see Kutiev et al. 2006).
GPS and ionosonde: The significance of complementing GPS with ionosonde data is well illustrated at most stations; we can reconstruct the full extent of F region plasma structure beyond the bottomside limitations of the ionosondes. Moreover, as noted earlier, at points where the ionosondes failed to record any echoes due to the low plasma electron content, our technique was still able to provide reasonable plasma content estimations. This is a crucial result particularly for applications that utilize transionospheric HF (high frequency) signals.
Comparison with Swarm densities
In-situ densities offer a good test of accuracy since they represent a specific point in space (ionosphere) at a particular time. The Swarm constellation consists of three identical polar-orbiting satellites that fly at two different altitudes, ≤ 460 km [Alpha (A) and Charlie (C)] and ≤ 530 km [Bravo (B)]. Each satellite has a Langmuir probe that facilitates the measurement of in-situ densities. Some of the Swarm density results presented here were already shown as preliminary results in Ssessanga et al. (2021) to showcase the performance of the technique. Here we add more analysis plots to cover the morning and evening sectors of the analysed period. The data used in the analysis are Level 1b electron density measurements at a 2 Hz rate, accessible at ftp://swarm-diss.eo.esa.int. For a further review on Swarm data see, for example, Olsen et al. (2013).
In Fig. 5, time profiles of Swarm in-situ densities (red) are plotted alongside densities from our reconstructions (blue) and the IRI model (black). The top, middle and bottom rows of subplots correspond to satellites A, B and C, respectively. Each row of subplots covers three DOYs (076, 077 and 078) during which St. Patrick's day storm effects were most evident. In each subplot, the geographic traces of the Swarm satellite during that particular period are marked red on the map in the upper right corner. Gaps in the traces indicate periods when either in-situ measurements or density reconstructions were not available for analysis. To match our low time resolution (15 min) reconstructions with in-situ measurements, the ionosphere was assumed to remain stationary over a period of 10 min. Then, all in-situ measurements within this window were mapped onto the nearest grid altitude plane within 10 km of the satellite orbital altitude. The vertical dashed lines in each subplot indicate the start (green) and end (magenta) of grouped densities, in which the time data gaps are less than 1 h. The corresponding Swarm satellite time coverage of each group of densities is indicated on the far right below the map. The x-axis in each subplot is not linear but is readjusted depending on the available grouped densities within the day. The middle subplots corresponding to Swarm A and B are essentially a re-plot from Ssessanga et al. (2021), except that we added more observation data points.
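A simple way to implement the time-window and altitude matching described above is sketched below in Python; the variable names, the grid layout and the use of half of the 10-min stationarity window as the time tolerance are assumptions made for illustration, not the authors' actual implementation.

```python
import numpy as np

def match_swarm_to_grid(t_obs, alt_obs, t_recon, grid_alts,
                        window_s=600.0, max_dz_km=10.0):
    """Pair each Swarm sample with a reconstruction epoch and a grid altitude plane.

    t_obs, alt_obs : times (s) and altitudes (km) of the in-situ samples
    t_recon        : times (s) of the 15-min reconstructions
    grid_alts      : altitudes (km) of the reconstruction grid planes
    Returns a list of (i_epoch, i_alt) index pairs, or None where no match exists
    within the stationarity window (10 min) and altitude tolerance (10 km).
    """
    t_recon = np.asarray(t_recon)
    grid_alts = np.asarray(grid_alts)
    pairs = []
    for t, z in zip(np.asarray(t_obs), np.asarray(alt_obs)):
        dt = np.abs(t_recon - t)
        i_epoch = int(np.argmin(dt))
        dz = np.abs(grid_alts - z)
        i_alt = int(np.argmin(dz))
        if dt[i_epoch] <= window_s / 2 and dz[i_alt] <= max_dz_km:
            pairs.append((i_epoch, i_alt))
        else:
            pairs.append(None)
    return pairs
```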
A plot of time-density profiles following the Swarm satellites' orbital paths, shown as red traces on the map in the upper right corner of each subplot. (Row subplots) Cover DOYs 076, 077, 078 in 2015. (Column subplots) Top, middle and bottom represent Swarm A, B and C, respectively. Black, blue and red represent IRI, reconstructed and Swarm in situ densities, respectively. In each subplot, vertical dashed lines show the start (green) and end (magenta) of grouped densities with time data gaps less than 1 h. Gray curved arrows match each group of densities to the Swarm data time range, listed on the far right. The x-axis in each subplot is readjusted depending on the number of available grouped densities. Column-wise: top, middle and bottom panels correspond to Swarm Alpha (A), Charlie (C) and Bravo (B), respectively. Row-wise: right, middle and left subplots correspond to day of year 076, 077 and 078, respectively, when storm effects were evident
In all subplots, the agreement between the reconstructed and in-situ densities is generally good. IRI, on the other hand, exhibits a poor estimation of the densities. Also noticeable is that, due to differences in orbital altitudes, there is a slight difference in the level of electron density distributions observed by satellites (A, C) and B. The difference between in-situ and reconstructed densities is on average less than 0.2 × 10^12 el/m^3. This result offers confidence in the reconstructed topside F-region density structure, which is not attainable using ground-based instruments such as ionosondes. A point of concern could arise from the poor density reconstructions at the start of the profiles in the time range 9:00–11:00 UT (corresponding to satellites A and C on DOY 076 and 077). During this period the Swarm A/C traces fall within or near the equatorial latitudes. Ssessanga et al. (2021) ascribed this to a poor grid specification over these latitudes. That is to say, the equatorial ionization anomaly region exhibits steep latitudinal density gradients, yet the currently utilized grid assumes a coarse horizontal spacing (5° × 5°) over this region [refer to Fig. 1 and Saito et al. (2017, 2019)]; hence reconstructions would not reflect the true ionospheric state and dynamics.
If we follow the traces of Swarm A and C, we observe that on DOY 076 when the storm commences (~ 10:00–12:00 UT) the topside densities range \(0.5 \sim 2\times {10}^{12}\mathrm{ el}/{\mathrm{m}}^{3}\). On the day that follows, during the same period, when the negative storm effects are dominant, we observe that the densities remain nearly below \(0.25\times {10}^{12}\mathrm{el}/{\mathrm{m}}^{3}\) (~ 50% of the densities on DOY 076). The reduction in plasma densities is also observed at Swarm B altitudes and is maintained throughout the late hours of DOY 077. This result is consistent with the density reductions that were observed in the middle and bottom panels of Fig. 5.
On DOY 078, when the F-region topside densities start returning to normal, the reconstructed densities track the in-situ measurements relatively well, but with a better performance at Swarm B altitudes. The discrepancy in performance may be related to the altitude level; Swarm A and C orbit at a lower altitude (≤ 460 km) than B, and at such altitudes the plasma densities are more likely to be influenced by different nonlinear dynamical forces during the storm period, making the reconstruction more challenging. Certainly, the local time (\(\sim \mathrm{UT}+9\) h) difference between Swarm B and A/C observations (different ionisation levels) might also influence the results. Nevertheless, the reconstructed densities still outperform the IRI model estimations.
Comparison with radio occultation (RO)
In Fig. 6, a set of RO electron density profiles (red) from the COSMIC (constellation observing system for meteorology, climate, and ionosphere; orbital altitude ~ 800 km) and GRACE (gravity recovery and climate experiment; orbital altitude ~ 490 km) constellations are plotted together with profiles from the assimilated tomography (blue) and IRI (black) during the analysed period. RO data are accessible in level 2 format at https://cdaac-www.cosmic.ucar.edu/cdaac/. Comparison is limited to reconstructions away from the low latitudes and above the 200 km altitude mark, where RO density profiles are expected to have the best accuracy. That is to say, RO density profiles are an inversion of RO total electron content (ROTEC) using the so-called inverse Abel transform, which assumes spherical symmetry in the ionosphere: for the most part, profiles from the Abel transform have good accuracy, with the exception of the E-region (where rays have asymmetric contributions from the F-region portions of the rays) and the low latitudes (where large density gradients exist; Garcia-Fernandez et al. 2003; Wu et al. 2009; Yue et al. 2010). Rather than comparing the RO density profiles to a specific vertical profile within the grid, the comparison is performed at the locations of the tangent points (which contribute the most density along the RO ray path), shown on the maps in the top right corner of each subplot. Purple and green represent GRACE and COSMIC, respectively. Gaps in the profiles indicate instances when assimilated tomography reconstructions were not available. Fortunately, during the analysed period the GRACE constellation covered the ~ 200–400 km range, whereas COSMIC mostly covered the topside ~ 400–700 km. This combined view gives us a chance to observe and analyse the capabilities of the assimilated tomography in specifying the electron density field both in time and space (horizontally and vertically).
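In practice, the comparison at the tangent-point locations amounts to sampling the gridded 3-D density field along the occultation geometry. A minimal sketch of such sampling, assuming a regular latitude–longitude–altitude grid and using trilinear interpolation, is given below; the grid layout and coordinate ordering are illustrative assumptions rather than the actual data structures of the technique.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def sample_at_tangent_points(ne_grid, lats, lons, alts, tangent_points):
    """Sample a gridded 3-D electron density field at RO tangent-point locations.

    ne_grid        : (n_lat, n_lon, n_alt) reconstructed densities for one epoch
    lats, lons, alts : 1-D coordinate axes of the grid
    tangent_points : (k, 3) array of (lat, lon, alt_km) along the occultation
    """
    interpolator = RegularGridInterpolator((lats, lons, alts), ne_grid,
                                           bounds_error=False, fill_value=np.nan)
    return interpolator(tangent_points)
```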
Comparison of radio occultation (RO) to IRI model (black)/reconstructed (blue) density profiles during the 2015 St Patrick's day geomagnetic storm. Comparison is performed along the trace of RO tangent points shown on maps in the upper right corner of each subplot (purple: GRACE, green: COSMIC). DOY 076 is not presented because the data points were few. Also, we limit the analysis to regions where RO density profiles have good accuracy (away from the low latitudes and altitudes above 200 km)
On the DOY 075 (top left corner subplot) before the storm, both assimilated tomography and IRI have a good estimation of the topside structure. However, on days that follow, assimilated tomography and RO profiles show a dramatic decrease in electron densities consequent to the geomagnetic storm. By contrast, the IRI model maintains a high-density output, with the largest deviation from the truth in the 200–400 km altitude range. This result is important because it gives a sample of what might be expected in applications that use models to correct for ionospheric effects during severe conditions.
Despite the variability, the reconstructed profiles on average adequately track the RO densities at all height ranges. Surprisingly, a combination of ionosonde densities and ground-based GNSS TEC is adequate to reconstruct a reliable topside structure (~ 400–700 km). Nonetheless, the noisy structure of some of the profiles motivates our suggestion to integrate, in the future, RO data (both TEC and electron density profiles) into the analysis to further constrain the vertical structure.
Results presented in this study can be placed in context with previous performance analyses of the tomography technique during the early stages of development. In this sense, our work is a complement to studies by Seemala et al. 2014; Saito et al. 2017, 2019; Ssessanga et al. 2021, who have already analysed the tomography reconstructions during the quiet period and found the technique to have good fidelity. Therefore, the technique seems fairly robust in handling different ionospheric conditions.
The material presented here has offered an insight into the ability of the ionosonde-data-assimilated tomography technique to characterize ionospheric plasma densities even under severe conditions. The results clearly demonstrate that the proposed algorithm is a good candidate for a better specification of the regional 3-D ionospheric electron density field; the reconstructed electron densities reflect changes following the energy injected into the ionosphere system during a geomagnetic storm.
Moreover, the reconstructed densities are comparable to observations covering both bottom (ground-based ionosonde) and topside (space-borne satellites) ionosphere.
Therefore, for a coherent 3-D ionosphere picture, the joint use of ground-based GNSS data and bottomside ionosonde data should be emphasized. However, a more rigorous analysis should be performed to further constrain the variability in the reconstructed vertical structure. The introduction of RO data would significantly contribute to the stability of the structure. It would then be of interest to extend our analysis to explore and quantify the impact of the plasmasphere on the stability/fidelity of the whole 3-D ionospheric structure.
The GNSS data can be obtained or accessed at National Geographic Information Institute (NGII, http://www.ngii.go.kr/kor/main/main.do?rbsIdx=1) and upon request from Geospatial Information Authority of Japan (GSI). The ionosonde data are also accessible at http://wdc.nict.go.jp/IONO/HP2009/ISDJ/manual_txt-E.html and ftp://ftp.ngdc.noaa.gov/ionosonde/data/. SWARM data are publicly accessible at ftp://swarm-diss.eo.esa.int. The ionospheric electron density radio occultation observation data were downloaded from https://cdaac-www.cosmic.ucar.edu/cdaac/. Geomagnetic indices were obtained from http://wdc.kugi.kyoto‐u.ac.jp and Interplanetary magnetic field data from https://omniweb.gsfc.nasa.gov.
RO: Radio occultation
GNSS: Global navigation satellite system
GPS: Global positioning satellite system
TEC: Total electron content
STEC: Slant total electron content
CIT: Computerized ionosphere tomography
DOY: Day of year
IRI: International reference ionosphere
CME: Coronal mass ejection
UT: Universal time
IMF: Interplanetary magnetic field
SSC: Storm sudden commencement
MP: Main phase
PPEF: Prompt penetration electric fields
TAD: Traveling atmospheric disturbance
EIA: Equatorial ionization anomaly
GUVI: Global ultraviolet imager
TIMED: Thermosphere, ionosphere, mesosphere energetics and dynamics
HF: High frequency
COSMIC: Constellation observing system for meteorology, climate, and ionosphere
GRACE: Gravity recovery and climate experiment
Astafyeva E, Zakharenkova I, Förster M (2015) Ionospheric response to the 2015 St. Patrick's Day storm: a global multi-instrumental overview. J Geophys Res Space Phys 12010:9023–9037. https://doi.org/10.1002/2015JA021629
Bilitza D, Altadill D, Truhlik V, Shubin V, Galkin I, Reinisch B, Huang X (2017) International reference ionosphere 2016: from ionospheric climate to real-time weather predictions. Space Weather 15(2):418–429. https://doi.org/10.1002/2016SW001593
Buonsanto MJ (1999) Ionospheric storms —a review. Space Sci Rev 88(3):563–601. https://doi.org/10.1023/A:1005107532631
Bust GS, Mitchell CN (2008) History, current state, and future directions of ionospheric imaging. Rev Geophys. https://doi.org/10.1029/2006RG000212
Bust GS, Coco D, Makela JJ (2000) Combined ionospheric campaign 1: ionospheric tomography and GPS total electron count (TEC) depletions. Geophys Res Lett 27(18):2849–2852. https://doi.org/10.1029/2000GL000053
Chen CH, Saito A, Lin CH, Yamamoto M, Suzuki S, Seemala GK (2016) Medium-scale traveling ionospheric disturbances by three-dimensional ionospheric GPS tomography. Earth Planets Space 68:32. https://doi.org/10.1186/s40623-016-0412-6
Cherniak I, Zakharenkova I, Redmon RJ (2015) Dynamics of the high-latitude ionospheric irregularities during the 17 March 2015 St. Patrick's Day storm: ground-based GPS measurements. Space Weather 13(9):585–597
Eastwood JP, Biffis E, Hapgood MA, Green L, Bisi MM, Bentley RD, Wicks R, McKinnell LA, Gibbs M, Burnett C (2017) The economic impact of space weather: where do we stand? Risk Anal 37(2):206–218. https://doi.org/10.1111/risa.12765
Fremouw EJ, Secan JA, Howe BM (1992) Application of stochastic inverse theory to ionospheric tomography. Radio Sci 27(5):721–732. https://doi.org/10.1029/92RS00515
Fuller-Rowell T, Codrescu M, Moffett R, Quegan S (1994) Response of the thermosphere and ionosphere to geomagnetic storms. J Geophys Res Space Phys 99(A3):3893–3914. https://doi.org/10.1029/93JA02015
Garcia-Fernandez M, Hernandez-Pajares M, Juan M, Sanz J (2003) Improvement of ionospheric electron density estimation with GPSMET occultations using Abel inversion and VTEC information. J Geophys Res Space Phys 108(A9):1338. https://doi.org/10.1029/2003JA009952
Howe BM, Runciman K, Secan JA (1998) Tomography of the ionosphere: four-dimensional simulations. Radio Sci 33(1):109–128. https://doi.org/10.1029/97RS02615
Jakowski N, Mayer C, Hoque MM, Wilken V (2011) Total electron content models and their use in ionosphere monitoring. Radio Sci. https://doi.org/10.1029/2010RS004620
Jakowski N, Béniguel Y, De Franceschi G, Pajares MH, Jacobsen KS, Stanislawska I, Tomasik L, Warnant R, Wautelet G (2012) Monitoring, tracking and forecasting ionospheric perturbations using GNSS techniques. J Space Weather Space Clim 2:A22. https://doi.org/10.1051/swsc/2012022
Kelly MA, Comberiate JM, Miller ES, Paxton LJ (2014) Progress toward forecasting of space weather effects on UHF SATCOM after Operation Anaconda. Space Weather 12(10):601–611. https://doi.org/10.1002/2014SW001081
Kutiev I, Otsuka Y, Saito A, Watanabe S (2006) GPS observations of post-storm TEC enhancements at low latitudes. Earth Planets Space 58(11):1479–1486. https://doi.org/10.1186/BF03352647
Ma G, Maruyama T (2003) Derivation of TEC and estimation of instrumental biases from GEONET in Japan. Ann Geophys 21(10):2083–2093. https://doi.org/10.5194/angeo-21-2083-2003
Ma XF, Maruyama T, Ma G, Takeda T (2005) Three-dimensional ionospheric tomography using observation data of GPS ground receivers and ionosonde by neural network. J Geophys Res Space Phys 110:A5. https://doi.org/10.1029/2004JA010797
Maruyama T, Ma G, Nakamura M (2004) Signature of TEC storm on 6 November 2001 derived from dense GPS receiver network and ionosonde chain over Japan. J Geophys Res 109:A10302. https://doi.org/10.1029/2004JA010451
Mitchell CN, Spencer PS (2003) A three-dimensional time-dependent algorithm for ionospheric imaging using GPS. Ann Geophys 46(4):687–696. https://doi.org/10.4401/ag-4373
Nava B, Coisson P, Radicella SM (2008) A new version of the NeQuick ionosphere electron density model. J Atmos Solar Terr Phys 70(15):1856–1862. https://doi.org/10.1016/j.jastp.2008.01.015
Nava B, Rodríguez-Zuluaga J, Alazo-Cuartas K, Kashcheyev A, Migoya-Orué Y, Radicella SM, Amory-Mazaudier C, Fleury R (2016) Middle- and low-latitude ionosphere response to 2015 St. Patrick's Day geomagnetic storm. J Geophys Res Space Phys 121(4):3421–3438. https://doi.org/10.1002/2015JA022299
Nose M, Iyemori T, Sugiura M, Kamei T (2015) Geomagnetic Dst index. World Data Cent Geomagn Kyoto. https://doi.org/10.17593/14515-74000
Ohashi M, Hattori T, Kubo Y, Sugimoto S (2013) Multi-layer ionospheric VTEC estimation for GNSS positioning. Trans Inst Syst Control Inf Eng 26(1):16–24
Okoh DI, McKinnell L-A, Cilliers PJ (2010) Developing an ionospheric map for South Africa. Ann Geophys 28(7):1431–1439. https://doi.org/10.5194/angeo-28-1431-2010
Olsen N, Friis-Christensen E, Floberghagen R, Alken P, Beggan CD, Chulliat A, Doornbos E, da Encarnação JT, Hamilton B, Hulot G, van den IJssel J, Kuvshinov A, Lesur V, Lühr H, Macmillan S, Maus S, Noja M, Olsen PEH, Park J, Plank G, Püthe C, Rauberg J, Ritter P, Rother M, Sabaka TJ, Schachtschneider R, Sirol O, Stolle C, Thébault E, Thomson AW, Tøffner-Clausen L, Velímský J, Vigneron J, Visser P (2013) The Swarm satellite constellation application and research facility (SCARF) and Swarm data products. Earth Planets Space 65(11):1189–1200. https://doi.org/10.5047/eps.2013.07.001
Otsuka Y, Ogawa T, Saito A, Tsugawa T, Fukao S, Miyazaki S (2002) A new technique for mapping of total electron content using GPS network in Japan. Earth Planets Space 54(1):63–70. https://doi.org/10.1186/BF03352422
Prölss GW (1995) Ionospheric F-region storms. In: Volland H (ed) Handbook of atmospheric electrodynamics, 2nd edn. CRC Press, Boca Raton
Raymund TD, Franke SJ, Yeh KC (1994) Ionospheric tomography: its limitations and reconstruction methods. J Atmos Terr Phys 56(5):637–657. https://doi.org/10.1016/0021-9169(94)90104-X
Rius A, Ruffini G, Cucurull L (1997) Improving the vertical resolution of ionospheric tomography with GPS occultations. Geophys Res Lett 24(18):2291–2294. https://doi.org/10.1029/97GL52283
Rovira-Garcia A, Ibanez-Segura D, Orus-Perez R, Juan JM, Sanz J, Gonzalez-Casado G (2020) Assessing the quality of ionospheric models through GNSS positioning error: methodology and results. GPS Solut 24(1):1–12. https://doi.org/10.1007/s10291-019-0918-z
Saito A, Miyazaki S, Fukao S (1998) High resolution mapping of TEC perturbations with the GSI GPS network over Japan. Geophys Res Lett 25(16):3079–3082. https://doi.org/10.1029/98GL52361
Saito S, Suzuki S, Yamamoto M, Saito A, Chen CH (2017) Real-time ionosphere monitoring by three-dimensional tomography over Japan. Navig J Inst Navig 64(4):495–504. https://doi.org/10.1002/navi.213
Saito S, Yamamoto M, Saito A, Chen, C. H., (2019) Real-time 3-D Ionospheric tomography and its validation by the MU radar. In: 2019 URSI Asia–Pacific radio science conference (AP-RASC). IEEE (pp 1–1). https://doi.org/10.23919/URSIAP-RASC.2019.8738382
Schmidt M, Bilitza D, Shum C, Zeilhofer C (2008) Regional 4D modeling of the ionospheric electron density. Adv Space Res 42(4):782–790. https://doi.org/10.1016/j.asr.2007.02.050
Seemala GK, Yamamoto M, Saito A, Chen C-H (2014) Three-dimensional GPS ionospheric tomography over Japan using constrained least squares. J Geophys Res Space Phys 119(4):3044–3052. https://doi.org/10.1002/2013JA019582
Shiokawa K, Otsuka Y, Ogawa T, Kawamura S, Yamamoto M, Fukao S, Nakamura T, Tsuda T, Balan N, Igarashi K, Lu G (2003) Thermospheric wind during a storm-time large-scale traveling ionospheric disturbance. J Geophys Res Space Phys 108(A12):1423. https://doi.org/10.1029/2003JA010001
Ssessanga N, Kim YH, Kim E (2015) Vertical structure of medium-scale traveling ionospheric disturbances (MSTIDs). Geophys Res Lett 42(21):9156–9165. https://doi.org/10.1002/2015GL066093
Ssessanga N, Yamamoto M, Saito S, Saito A, Nishioka M (2021) Complementing regional ground GNSS-STEC computerized ionospheric tomography (CIT) with ionosonde data assimilation. GPS Solut 25:93. https://doi.org/10.1007/s10291-021-01133-y
Stankov SM, Warnant R (2009) Ionospheric slab thickness-analysis, modelling and monitoring. Adv Space Res 44(11):1295–1303. https://doi.org/10.1016/j.asr.2009.07.010
Tsagouri I, Belehaki A, Moraitis G, Mavromichalaki H (2000) Positive and negative ionospheric disturbances at middle latitudes during geomagnetic storms. Geophys Res Lett 27(21):3579–3582. https://doi.org/10.1029/2000GL003743
Tsai L-C, Liu C, Tsai W, Liu C (2002) Tomographic imaging of the ionosphere using the GPS/MET and NNSS data. J Atmos Solar Terr Phys 64(18):2003–2011. https://doi.org/10.1016/S1364-6826(02)00218-3
Wu X, Hu X, Gong X, Zhang X, Wang X (2009) Analysis of the inversion error of ionospheric occultation. GPS Solut 13(3):231–239. https://doi.org/10.1007/s10291-008-0116-x
Yasyukevich Y, Astafyeva E, Padokhin A, Ivanova V, Syrovatskii S, Podlesnyi A (2018) The 6 September 2017 X-class solar flares and their impacts on the ionosphere, GNSS, and HF radio wave propagation. Space Weather 16(8):1013–1027. https://doi.org/10.1029/2018SW001932
Yeh KC, Raymund TD (1991) Limitations of ionospheric imaging by tomography. Radio Sci 26(6):1361–1380. https://doi.org/10.1029/91RS01873
Yue X, Schreiner WS, Lei J, Sokolovskiy SV, Rocken C, Hunt DC, Kuo Y-H (2010) Error analysis of Abel retrieved electron density profiles from radio-occultation measurements. Ann Geophys 28(1):217–222. https://doi.org/10.5194/angeo-28-217-2010
Food security in the Savannah Accelerated Development Authority Zone of Ghana: an ordered probit with household hunger scale approach
Paul Kwame Nkegbe1,
Benjamin Musah Abu1 &
Haruna Issahaku1
Food insecurity has been observed to be more severe in northern Ghana than in any other part of the country. Although this has been acknowledged, few attempts have been made to remedy the situation. One such intervention area is the provision of policy-based evidence to guide efforts in fighting the problem. This study employs an ordered probit model, using a data set from the baseline survey of the USAID Feed the Future programme in Ghana, to estimate the determinants of food security in northern Ghana. We perform the analysis using a new indicator of food security, the household hunger scale. This measure differs from other household food insecurity indicators in that it has been specifically developed and validated for cross-cultural use.
The estimates show that being a crop producer, producing multiple crops, yield and commercialization are key policy variables that determine food security. A key policy implication of this result aligns with one of the intermediate results of the Ghana Feed the Future Initiative, which seeks to increase the competitiveness of food value chains through increased productivity and market access.
Based on the results, stakeholders should step up efforts to enhance the productivity of farm households and provide the necessary market infrastructure to boost commercialization, as these are fundamental to ensuring food security.
Food security is more prominent on the policy agenda today than it has been in the past [1]. Undoubtedly, the scale, magnitude and quantitative evidence of food insecurity are fundamentally responsible for this prominence. For example, one in every eight people in the world, representing a total of 842 million between 2011 and 2013, was estimated to be food insecure and suffering from chronic hunger [2]. Perhaps the greatest factor underscoring the prominence of food security is the fact that Millennium Development Goal (MDG) 1, aimed at eradicating extreme poverty and hunger, was not achieved by the end of 2015.
While food insecurity is a global concern and for that matter not continent and country specific, the disproportionate nature of food insecurity is a serious concern. For example, Van Eeckhout [3] observes the following as the regional distribution of people suffering from hunger: 578 million in the Asia Pacific region; 239 million in sub-Saharan Africa; 53 million in Latin America and the Caribbean; 37 million in North Africa; and 19 million in developed countries. From these statistics, it can be deduced that food insecurity is more pronounced in developing countries and this observation has been supported by a number of empirical findings. For example, FAO, IFAD and WFP [4] note that the vast majority of hungry and malnourished people live in developing countries.
There is no doubt that Africa is the continent most affected by food insecurity, since most of the world's poorest countries are found there. As a result, many of these poverty-stricken countries face food insecurity challenges in a manner that undermines development efforts. Sub-Saharan Africa is identified as one of the regions most affected by food insecurity, as it houses 60% of the world's food-insecure people and is the only region of the world where hunger is projected to worsen over the next two decades if measures are not put in place [5]. This is supported by Folaranmi [6], who observes that Africa's food security and nutrition situation is worsening.
Food insecurity persists in Ghana. According to WFP [7], about 1.2 million people, representing 5% of the population of Ghana, are food insecure, and 2 million people are vulnerable to food insecurity in the event of any natural or man-made shock. The food insecurity problem is fundamentally driven by subsistence production, which in turn is usually characterized by low and declining production and productivity and the use of rudimentary technology [8]. Despite the fact that the agriculture sector is a significant contributor to the growth of the economy and employs the majority of the labour force, Ghana is yet to achieve self-sufficiency in the production of food. Data from Ghana's Ministry of Food and Agriculture (MoFA) show that the country has deficits in the production of cereals, meat and fish and is only self-sufficient in the production of roots and tubers, though even this self-sufficiency is chequered, with pockets of scarcity, sufficiency and glut depending on the season. This is worsened by decreasing yields in the crop and fishing subsectors [9].
These facts are further aggravated by food price hikes, poverty, climate change and increasing population. For example, prices of rice, maize and other cereals between 2007 and 2008 recorded hikes between 20 and 30% [10]. Though the country has performed remarkably well in eradicating poverty, the problem is far from over. Poverty still ravages a significant number of people and has been observed to spread into urban areas. WFP [7] finds that about 46% of farming households are identified as the most affected among all economic sectors. At the same time, climate change is jeopardizing agricultural production, deepening the woes of food-insecure or vulnerable households. Climate change causes erratic rainfall patterns and decreasing crop yields, contributing to increased hunger [11]. In the midst of all these food-insecurity-worsening situations is the issue of increasing population amidst declining production. The population is growing at 2.5% per annum. The limited empirical evidence about Ghana shows that food insecurity is concentrated in the rural areas [7, 12].
Northern Ghana, which includes the Northern, Upper West and Upper East regions, is poorly endowed with natural resources, and the income per capita of its population falls well below the national average [13]. These regions are the most deprived in Ghana and have been described as the most poverty-stricken and hunger-prone areas in the country [14]. The incidence of poverty, malnutrition and stunting among children under 5 years of age is higher in northern Ghana [15]. WFP [16] observes that more than 680,000 people were considered either severely or moderately food insecure, of whom 140,000 were classified as severely food insecure, having a very poor diet consisting of just staple foods, some vegetables and oil. In terms of regional distribution, the Upper East region has the worst food insecurity status (28%), followed by the Upper West region (16%) and the Northern region (10%). It is therefore imperative to investigate the key factors influencing food security in this part of the country. Efforts towards alleviating food insecurity largely depend on adequate evidence that provides the pathway for appropriate policy. This is the mandate of this paper: to investigate the determinants of food security or insecurity in northern Ghana.
The study departs from previous studies in its application of the household hunger scale (HHS), a reliable and well-tested approach to measuring food security. Evidence based on this new approach would have significant policy impact and provide a basis for comparison across cultures and settings. Also, previous studies of food security in Ghana have considered smaller geographical areas. Kuwornu et al. [17] studied the forest belt of the Central region, Aidoo et al. [12] studied the Sekyere-Afram Plains District, and Nata et al. [18] studied the Ga West District in Greater Accra. This study covers the three poverty-stricken and most deprived regions of Ghana, usually referred to as the Savannah Zone. Though Quaye [19] studied this subregion, the analysis was qualitative and did not identify the factors influencing food security. Owusu et al. [20] also studied this area but focused on the impact of non-farm work on household income and food security. A further departure from most food security studies is in terms of methodology. Most food security studies that apply econometric methodology use binary models. This study applies an ordered model as a way of providing useful evidence that preserves the vital information contained in the ordering, as opposed to binary models which obscure such information. In addition, the study makes a practical contribution by identifying critical factors influencing food security and, on that basis, makes policy-relevant contributions to inform priority setting in policy considerations for eradicating food insecurity in Ghana.
Definition of food security
Early definitions of food security focused on the ability of a region or nation to assure an adequate food supply for its current and projected population [21]. One of these definitions was provided by the United Nations (UN) in 1974 as: "availability at all times of adequate world food supplies of basic foodstuffs to sustain a steady expansion of food consumption and to offset fluctuations in production and prices". This definition was improved by the World Bank [22] to: "access by all people at all times to enough food for an active and healthy life". The inadequacies of these definitions saw the UN expand the concept in 1996 to accommodate and reflect the complex arguments of nutrition and human rights in food security as follows: "Food security, at the individual, household, national, regional and global levels is achieved when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life". This definition is quite universally acclaimed as it integrates stability, access to food, availability of nutritionally adequate food and the biological utilization of food [12]. MoFA [23] provides an operational definition for food security in Ghana as "good quality nutritious food hygienically packaged, attractively presented, available in sufficient quantities all year round and located at the right place at affordable prices". Given that MoFA is an important authority in Ghana and the fact that their definition plays into the conceptual space of the HHS, we adopt this definition.
Two notable issues are identified in food security studies. The first has to do with measurement of food security. A general limitation in the literature is the inability to have a clearly defined metric of food security against which to identify and compare food-secure and food-insecure households. This weakness is rather a confounding one as it poses serious problems in the empirics of food security. The second is about econometric models used for analysis. These two issues are intertwined as the measurement dictates the econometric model to use. Food security is multidimensional and thus presents a variety of measurements [24–26]. Various indicators have been developed as proxies for food security. Table 1 presents categories of food security measures.Footnote 1
Table 1 Categories of food security measures
Maxwell et al. [1] note that a comprehensive all-encompassing measure of food security would be that measure that is valid and reliable, comparable over time and space, and which captures different elements of food security. In the assessment of Coates and Maxwell [27], none of these measures satisfies the criteria. However, Maxwell et al. [1] find strong evidence that all these measures reflect the multidimensional nature of food security though there is paucity of evidence as to which dimensions of food security are captured by each measure and few direct empirical comparisons among them.
Despite the limitations of all measures, the HHS has been identified as a reliable measure of food security. The HHS is a new, simple indicator to measure household hunger in food-insecure areas. It is different from other household food insecurity indicators in that it has been specifically developed and validated for cross-cultural use [28]. They indicate that the HHS produces valid and comparable results across cultures and settings so that the status of different population groups can be described in a meaningful and comparable way. The use of the HHS in the measurement of food security in northern Ghana is thus appropriate since this part of Ghana records substantial food insecurity. The HHS consists of only three questions and three frequency responses as detailed in Ballard et al. [28]. These questions and responses are recoded for tabulation into three HHS categories as shown in Table 2.Footnote 2
Table 2 Household hunger scale categorical indicator
The categories in Table 2 are the measures of food security used to indicate the percentage of households affected by three different severities of household hunger: (1) little to no household hunger; (2) moderate household hunger; and (3) severe household hunger. This measure is adopted in this study since it has been identified as robust. Since there is no single indicator to measure food security, analyses are varied and diverse. Quantitative measures such as the Food Security Index (FSI), implemented using the recommended daily calorie approach [29–31], and the Cost of Calorie (COC) approach [17, 32, 33] have been widely used. In these studies, households are categorized as food secure or insecure based on the calculated FSI or COC. These categorizations under the FSI and COC form the basis for the application of categorical (binary) choice models. The binary logit [12, 17, 34, 35] and binary probit [33, 36] are the most widely used models.
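To make the recoding concrete, the sketch below (Python) computes a household's HHS score and category following the scoring in Ballard et al. [28] as we understand it: each of the three questions contributes 0 points for "never", 1 for "rarely" or "sometimes" and 2 for "often", and the summed score (0–6) maps to the three categories of Table 2 (0–1 little to no hunger, 2–3 moderate, 4–6 severe). All names are illustrative and are not taken from the METSS data files.

```python
# Minimal sketch of the Household Hunger Scale (HHS) recoding described in
# Ballard et al. [28]; names and thresholds are as we understand the guide.

FREQUENCY_POINTS = {"never": 0, "rarely": 1, "sometimes": 1, "often": 2}

def hhs_score(no_food, slept_hungry, whole_day_without_food):
    """Sum the recoded frequency responses to the three HHS questions (0-6)."""
    return sum(FREQUENCY_POINTS[r] for r in (no_food, slept_hungry, whole_day_without_food))

def hhs_category(score):
    """Map the 0-6 HHS score to the categorical indicator of Table 2."""
    if score <= 1:
        return 1, "little to no household hunger"
    if score <= 3:
        return 2, "moderate household hunger"
    return 3, "severe household hunger"

# Example: sometimes no food, rarely slept hungry, never a whole day without food.
print(hhs_category(hhs_score("sometimes", "rarely", "never")))  # (2, 'moderate household hunger')
```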
In these studies, one methodological issue arises, principally from the confounding issue of measurement. The construction of the food security variable into only two categories is problematic since it assumes that households are either food secure or insecure. The limitation of this assumption is that it obscures or discards vital information about households whose indices range between the lowest and highest values of the food security indices. Since food security indices form a continuum from zero to one hundred, at least three possibilities are expected (low, moderate and high), which provide the basis for ordering households' indices. It is far more useful for appropriate policy interventions to provide an ordering of households than the limited information the binary categorization of secure and insecure presents.
The appropriate way to overcome the limitation of the binary categorization is to apply models that treat food security as an ordered dependent variable. Based on this, Nata et al. [18] applied an ordered logit model to analyse the effect of household adoption of soil-improving practices on food insecurity in Ghana. The weakness of this study lies in the measurement of the food security variable. The various categories of chronic, transitory and vulnerable as measures of insecurity are not as far-reaching as the HHS measure. Also, the study was done in the Greater Accra region (the national capital). It can be argued that the justification for the study area becomes problematic when the northern part of the country is identified as the hub of food insecurity problems. Thus, this study contributes to the literature by applying the HHS to analyse food security in northern Ghana using an ordered model. The strength of this econometric approach is twofold. First, it is able to exploit the inherent ordering information in food security. Second, it defines preselected boundaries or cutoff points (with only one fixed) that segregate severe hunger, moderate hunger and food-secure households, and in this regard, the ordered approach is both novel and better at handling the subjectivity of ad hoc metrics used to measure food insecurity.Footnote 3
An important dimension to food security studies worthy of mention is the analysis of calorie and nutrient demand functions. Notable contributions to this literature include Wolfe and Behrman [37], Pitt [38], Garrett and Ruel [39], Bhargava [40], Subramanian and Deaton [41], Grimard [42], Skoufias [43], Abdulai and Aubert [44], Aromolaran [45] and Ecker and Qaim [46]. The fundamental goal of these studies is to measure the impacts of critical factors notably income and price elasticities, on demand for calories and nutrients. An important lesson from these contributions is that estimates of these demand functions present a vent to indirectly make inferences of the impact of these correlates on food security. For example, income and price as correlates of demand for calories aid in making inferences on the levels of vulnerability of households to income and price shocks. This present study departs from these studies in the use of the HHS and the ordered approach.
Another noteworthy contribution to the food security literature is a recent contribution by San-Ahmed and Holloway [47] who applied Bayesian econometric approach to skilfully overcome the problem of endogeneity in their procedure. In the light of the ordered approach, Bayesian econometric procedure is able to derive estimates without the boundary condition [48]. However, this study employs a classical econometric approach.
Empirical model
The measurement of food security (see Table 2) dictates an econometric model beyond the application of binary choice models. Greene [49] notes that although the outcome is discrete, the multinomial logit or probit models would fail to account for the ordinal nature of the dependent variable. Given that the food security measures are categorical and ordinal, ordered probit or logit models are the most appropriate for analysis. While the logit assumes a logistic distribution of the error term, the probit assumes a normal distribution. The logistic and normal distributions generally give similar results in practice [49]. Also, Davidson and MacKinnon [50] indicate that the ordered probit is the most widely used model for ordered response data in applied econometric work. Therefore, the ordered probit is used in this study.
The ordered probit, developed by McKelvey and Zavoina [51], is constructed on a latent (unobservable) random variable which is stated as follows [52–54]:
$$y_{i}^{*} = x_{i}^{\prime } \beta + e_{i} ,\quad i = 1,2, \ldots ,N$$
where \(E\left({e_{i}|x_{i}} \right) = 0\) and \({\text{Var}}\left({e_{i}|x_{i}} \right) = 1\). Treating \(Y_{i}\), the observed variable, as a categorical variable with J response categories and also as a proxy for the theoretical (unobserved) random variable, \(y_{i}^{*}\), and defining \(\mu = \left({\mu_{- 1}, \mu_{0}, \mu_{1}, \ldots, \mu_{J - 1}, \mu_{J}} \right)\) as a vector of unobservable threshold (or cutpoint) parameters, the relationship between the observed and the latent variables can be written as:
$$Y_{i} = j\quad {\text{if}}\quad \mu_{j - 1} < y_{i}^{*} \le \mu_{j} ,\quad j = 0,1,2, \ldots ,J$$
where \(\mu_{- 1} = - \infty,\;\;\mu_{0} = 0,\;\;\mu_{J} = \infty\) and \(\mu_{- 1} < \mu_{0} < \mu_{1} < \cdots < \mu_{J}\). The probabilities will thus be given as follows:
$$\begin{aligned} {\text{Prob}}\left[ {Y_{i} = j} \right] & = {\text{Prob}}\left[ {\mu_{j - 1} < y_{i}^{*} \le \mu_{j} } \right] \\ & = {\text{Prob}}\left[ {\mu_{j - 1} - x_{i}^{{\prime }} \beta < e_{i} \le \mu_{j} - x_{i}^{{\prime }} \beta } \right] \\ & = \varPhi \left( {\mu_{j} - x_{i}^{{\prime }} \beta } \right) - \varPhi \left( {\mu_{j - 1} - x_{i}^{{\prime }} \beta } \right) \\ \end{aligned}$$
where \(\varPhi ( \cdot)\) is the standard normal cumulative distribution function and \(j\) indexes the response categories, in this case 0, 1 and 2, since there are three categories of food security.
As observed by Greene [55], since there is no meaningful conditional mean function and the marginal effects in the ordered probability models are not straightforward, the effects of changes in the explanatory variables on cell probabilities are normally considered. These are given by:
$$\frac{{\partial {\text{Prob}}\left[ {{\text{cell}} j} \right]}}{{\partial x_{i} }} = \, \left[ {\phi \left( {\mu_{j - 1} - x_{i}^{{\prime }} \beta } \right) - \phi \left( {\mu_{j} - x_{i}^{{\prime }} \beta } \right)} \right] \times \beta$$
with \(\phi (\cdot)\) being the standard normal density function.
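As a concrete companion to the expressions above, the following sketch (Python with NumPy/SciPy) spells out, for the three-category case, the cell probabilities \(\varPhi(\mu_{j} - x_{i}^{\prime}\beta) - \varPhi(\mu_{j-1} - x_{i}^{\prime}\beta)\), the sample log-likelihood that maximum likelihood estimation maximizes, and the marginal effects on each cell probability. It is a minimal illustration on simulated data, not the estimation code used for the survey, and all names in it are ours.

```python
# Ordered-probit building blocks for J = 3 categories (0, 1, 2), mirroring the
# cell probabilities and marginal effects given above. Illustrative only.
import numpy as np
from scipy.stats import norm

def cell_probabilities(X, beta, cutpoints):
    """P(Y=j|x) = Phi(mu_j - x'b) - Phi(mu_{j-1} - x'b), with mu_{-1}=-inf, mu_J=+inf."""
    xb = X @ beta
    cuts = np.concatenate(([-np.inf], cutpoints, [np.inf]))
    return np.column_stack([norm.cdf(cuts[j + 1] - xb) - norm.cdf(cuts[j] - xb)
                            for j in range(len(cuts) - 1)])

def log_likelihood(X, y, beta, cutpoints):
    """Sum of log cell probabilities of the observed categories."""
    probs = cell_probabilities(X, beta, cutpoints)
    return np.log(probs[np.arange(len(y)), y]).sum()

def marginal_effects(x, beta, cutpoints):
    """dP(Y=j|x)/dx = [phi(mu_{j-1} - x'b) - phi(mu_j - x'b)] * b, one row per category."""
    xb = x @ beta
    cuts = np.concatenate(([-np.inf], cutpoints, [np.inf]))
    return np.vstack([(norm.pdf(cuts[j] - xb) - norm.pdf(cuts[j + 1] - xb)) * beta
                      for j in range(len(cuts) - 1)])

# Tiny simulated example with two regressors and arbitrary true parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
beta_true, cuts_true = np.array([0.8, -0.5]), np.array([0.0, 1.2])
y = np.digitize(X @ beta_true + rng.normal(size=500), cuts_true)   # categories 0, 1, 2
print(log_likelihood(X, y, beta_true, cuts_true))
print(marginal_effects(X[0], beta_true, cuts_true))
```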
In the light of the preceding discussion, the empirical model of this study is specified as:
$${\text{FS}}_{ij} = \alpha + \beta W_{i} + \gamma X_{i} + \delta Z_{i} + \varepsilon_{i}$$
where FS is food security proxied by the HHS; subscript i represents a household, subscript j (j = 0, 1, 2) represents the three-pronged categorization of alternative dependent dummy variables indicating (i) whether a household falls within severe household hunger category, (ii) whether a household falls within moderate household hunger category, and (iii) whether a household is within little to no household hunger category; W, X and Z are, respectively, socioeconomic, food production and consumption, and institutional and location characteristics hypothesized to influence food security (these variables are presented in Table 3); α, β, γ, δ are parameters to be estimated and \(\varepsilon \sim{\text{NID}}\left({0,1} \right)\).
Table 3 Description, measurement and statistics of explanatory variables
The study uses data collected in 2012 by the Monitoring Evaluation and Technical Support Services (METSS) in the Savannah Accelerated Development Authority (SADA) regions (identified as the zone of influence, see Additional file 1), namely Upper East, Upper West, Northern, Brong Ahafo and northern Volta, under the USAID Feed the Future Initiative and published in 2014. The Feed the Future Initiative aims to help developing countries address the root causes of hunger and poverty specific to their individual circumstances through the transformation of agricultural production and improvements in health and nutrition. In Ghana, the initiative seeks to increase the competitiveness of maize, rice and soya value chains; improve the resilience of vulnerable households and communities; and reduce under-nutrition and improve the nutritional status of women and children.
The data were collected on eleven modules including household demographic information, household hunger scale (HHS), cultivation of key crops, access to productive capital, access to credit, consumption of food items, non-food consumption expenditure, group membership, dwelling characteristics, women's dietary diversity, and women's anthropometry. In all, 4410 households were sampled and interviewed. However, 357 households were dropped in the analysis as a result of incomplete responses.
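For readers who wish to reproduce this kind of estimation, a minimal sketch using the OrderedModel class shipped with statsmodels (version 0.13 or later) and a probit link is given below. The file name and column names are hypothetical placeholders, not the actual METSS variable names, and the dropna call merely stands in for the removal of the 357 incomplete questionnaires.

```python
# Hedged sketch: fitting the ordered probit with statsmodels' OrderedModel.
# All file and column names below are hypothetical placeholders.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("metss_baseline_2012.csv")                       # hypothetical file name

exog_cols = ["education_years", "yield_index", "commercialization",
             "crop_producer", "multiple_crops", "food_expenditure"]
df = df.dropna(subset=["hhs_category"] + exog_cols)               # drop incomplete responses

# Order the outcome from severe hunger (0) to little or no hunger (2).
df["hhs_cat"] = pd.Categorical(df["hhs_category"], categories=[0, 1, 2], ordered=True)

model = OrderedModel(df["hhs_cat"], df[exog_cols], distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
# Marginal effects on the cell probabilities can be computed from result.params
# exactly as in the hand-written formula shown in the previous section.
```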
In this section, we present the results and findings. Food security characteristics of households are first presented. This is then followed by empirical estimation results and discussions.
Food security characteristics of households
Table 4 shows the results on food security status in the SADA zone. The results show that less than 1% of the sample experienced severe hunger. This implies that households that (i) often had no food of any kind to eat in the 4 weeks before the survey, (ii) often had at least a member go to sleep at night hungry, and (iii) often had at least a member go a whole day and night without food represented only 0.89% of the sample. Households with moderate and little to no hunger represented about 36 and 63%, respectively.
Table 4 Food security status of households in the SADA zone
While the results could mean that severe food insecurity in the SADA zone reflected through hunger is not pervasive, it is important to understand the construction of the HHS. It measures the relative degree of hunger among households. The moderate and little to no hunger categories still provide useful information about the situation of food insecurity in the area. Moderate and little hunger are not acceptable in any human society. While it is not possible to segregate those without any hunger from those with little hunger, the number of households falling within this category suggests that a significant number of households had little hunger. If we re-categorize, at least 50% might experience varying degrees of severe, moderate and little hunger. These are relatively different, yet none is acceptable. Hence, the food security situation in the zone can still be described as worrisome and requires efforts from various stakeholders to tackle the menace.
Food security status by region and gender are, respectively, shown in Figs. 1 and 2. Figure 1 shows that the Northern region has the highest incidence of all the categories of hunger scale. This is probably due to the sample size difference. Brong Ahafo and Northern regions maintain the order of the entire SADA region where little to no hunger category is more than the moderate category, which is also more than the severe category.
Food security status by region
Food security status by gender
However, Upper West and East regions violate the order where the moderate categories outweigh the little to no categories. Figure 2 indicates that in all hunger categories, males are more affected than females. While the reason for this is not clear to us, sample size differences could account for this observation.
Determinants of food security in the SADA zone
The results of the determinants of food security are presented in Table 5. Since the coefficients of the ordered probit do not represent the magnitude of the effects of the explanatory variables, the marginal effects are discussed. These marginal effects are interpreted based on the sign and category. An estimated positive coefficient for a category indicates that an increase in that variable increases the probability of being in that category, whereas a negative coefficient indicates a decrease in probability of being in that category. The marginal effects corresponding to the significant variables are also significant.
Table 5 Results of ordered probit model
We find that one more year in school (level of education) decreases the probability of experiencing severe and moderate hunger and increases the probability of experiencing little or no hunger. A plausible explanation for this finding is that a higher educational attainment of household heads could lead to their awareness of the possible advantages of modernizing agriculture by means of adopting new technologies and diversifying household income, which, in turn, would enhance household food supply. Thus, being literate reduces the chance of becoming food insecure. This conforms to expectation and confirms the finding of Tefera and Tefera [34] which shows that educated households have a better chance of adopting soil conservation measures which, in turn, increases crop production. Again, educated household heads have the capacity to innovate and to adopt timely technology and have better understanding of the cash crops that can help them to have a better income than the non-educated household heads.
Further, higher levels of education open up numerous options for employment in the formal sectors of the economy which, in turn, deliver higher incomes to support food consumption expenditure. According to the Ghana Statistical Service (GSS) [56], about 60% of legislators or managers, 87.4% of professionals, and 63.4% of technicians and associate professionals have attained at least secondary school education. The GSS [56] further reveals that almost half of household income is from non-farm self-employment, contributing 48.3% to sources of household income. Wages from employment are the second major contributor (36.3%), with household agriculture accounting for one-tenth (10.1%). These statistics show that people with higher levels of education earn higher incomes than those in agriculture. This evidence contradicts the finding of Beyene and Muche [35], who explain that educated households might not utilize their knowledge for the advancement of food security.
Households with means of transport are less likely to fall within the severe and moderate hunger categories and more likely to have little or no hunger. While the reason for this observation may not be certain, it may suggest the effect of wealth on boosting food security.
Households with mechanized farm equipment are less likely to belong to severe and moderate hunger categories and more likely to have little or no hunger. This is consistent with expectation since mechanized equipment enhances the productive capacity of these households in farm businesses. Alternatively, revenues from the use of the equipment on other peoples' farm businesses can be used to support food expenditure and/or invested to produce more output or earn more income to meet household food needs.
The yield (as an index) obtained by households decreases the probability of experiencing severe and moderate hunger and increases the probability of experiencing little or no hunger. Increasing the productivity of households is thus a sufficient condition for enhancing food security. This observation has a key policy implication for government and other stakeholders in the fight against food insecurity.
The level of commercialization of agriculture decreases the probability of households falling within the severe and moderate hunger categories while increasing the probability of households falling within the little to no hunger category. This conforms to a priori expectation since the more commercialized a household is, the more it is able to generate sufficient incomes which could lead to enhanced accessibility of food, the ability to diversify consumption patterns and increase food consumption expenditure as well as the capacity to invest more in production. This evidence conforms to the observation in agricultural economics that an increase in the incomes of farm households leads to a structural shift from the consumption of staples to the consumption of diversified products such as vegetables and dairy products. The improvement in incomes from commercialized agriculture improves financial access to products and the nutritional quality of consumption, which are key pillars of food security. This finding corroborates Nata et al. [18], Kuwornu et al. [17], Babatunde et al. [31] and Arene and Anyaeji [29] who report a positive relationship between household income and food security. A significant portion of household income is from sale of farm produce.
Farm households who are crop producers (i.e. those producing any of maize, rice and soybean) are more likely to experience severe and moderate hunger and less likely to experience little or no hunger as compared to households who do not produce such crops. This observation, though counter-intuitive, points to a known characteristic of smallholder farm households. These farmers are usually the food producers yet the poorest, and they are the hardest hit when there is a slight failure in production arising from such catastrophes as drought and loss of produce to fire. They are most vulnerable to food insecurity. We also find evidence that farm households who engage in the production of multiple crops are more likely to experience severe and moderate hunger and less likely to experience little or no hunger. This observation is also counter-intuitive but lends support to the evidence on crop production. Smallholders are noted for multiple cropping with lower yields. This indicates that households who concentrate on the production of one crop are able to produce more output, sell it and then diversify consumption financed by income from crop sales.
Households with poultry (specifically chickens, ducks, turkey and pigeons) and small livestock (specifically goats, pigs and sheep) are less likely to experience severe and moderate hunger and more likely to experience little to no hunger. This is consistent with the finding of Tefera and Tefera [34] who argue that livestock contribute to food security through provision of cash income and nutrition. It also corroborates the finding of Beyene and Muche [35]. The results indicate that owners of poultry and small livestock are less vulnerable to food insecurity, especially in times of drought when crops fail [57]. However, households with large livestock (specifically oxen and cattle) are less likely to experience little or no hunger and more likely to experience severe and moderate hunger. This is counter-intuitive and suggests that large animals are used as assets for traditional purpose of storing wealth rather than for immediate consumption. It contradicts the findings of Beyene and Muche [35] who argue that large livestock is a source of traction power among rural households.
Households with higher food consumption expenditure are less likely to experience severe and moderate hunger and more likely to experience little or no hunger. This is expected since the level of food consumption expenditure is an indicator of the accessibility, quantity and quality of food.
Rural households are more likely to be severely and moderately food insecure and less likely to be food secure. We expected rural households to be more food secure than urban households, since urbanization pushes the cost of living higher. Again, since rural localities are the production centres, we expected the abundance of food to translate into greater food security. We explain that, though these households are the basic producers of food, the produce ends up in the urban areas, especially during the planting and lean seasons when food is scarce in the rural areas and prices soar. Also, the level of vulnerability to food insecurity is higher among rural than urban households. According to the GSS [56], the annual average per capita income in urban localities is GH¢7019.72, which implies an average income of GH¢19.23 per person per day, while their rural counterparts have an average annual income of GH¢3302.83, which represents an average income of GH¢9.04 per person per day.Footnote 4 The mean income of a household in an urban locality is GH¢20,930.05, while that of a rural household is GH¢11,408.01. Also, urban households spend more on all food and non-alcoholic beverages than their rural counterparts. These statistics may be responsible for this observation.
Households in the Upper West and Upper East regions are more likely to be food insecure than those in the Northern and Brong Ahafo regions. This observation is expected since these two regions are the poorest in the SADA zone. The three northern regions are the poorest in Ghana, with the Upper West region the hardest hit, followed by the Upper East region [56]. The Upper East and Upper West regions have the lowest mean annual household incomes of GH¢7240.5 and GH¢11,977.5 and the lowest per capita expenditures of GH¢1790 and GH¢1753, respectively. These statistics could be responsible for the severity of food insecurity in these two regions. This is consistent in part with the observation of Quaye [19] that the Upper East region is the worst affected by food insecurity, as it experiences the longest period of food shortage, with the Northern and Upper West regions having the same period of food inadequacy.
We applied a new measure of food security, the household hunger scale, to analyse the factors influencing food security in the SADA region, an area described as the hub of food insecurity problems in Ghana, using a secondary data set provided by METSS. We applied an ordered probit to estimate the determinants of food security as a way of overcoming some of the weaknesses of previous studies. Analysis of the data shows that food insecurity, as measured on the household hunger scale, still persists in the SADA region at levels unacceptable in a modern society. We find that the factors determining the various levels of hunger include education, means of transport, mechanized farm equipment, yield, agricultural crop production and commercialization, cultivation of multiple crops, ownership of poultry, small livestock and large livestock, food consumption expenditure, locality and region of residence. The implication of these findings is that stakeholders in food security issues have a task ahead of them, especially if the Sustainable Development Goals are to be achieved. The key policy implications of the results on crop production, multiple cropping, yield and commercialization corroborate one of the intermediate results of the Ghana Feed the Future Initiative of increasing the competitiveness of cereal value chains through increased productivity and market access. As it stands, crop production, with its variant of multiple cropping, is not rewarding in food security terms. Productivity enhancement, as this study reveals, is one of the bridging platforms to making crop production and multiple cropping remunerative and thus helping to reduce food insecurity. A comprehensive approach to productivity enhancement is needed. We recommend a combination of agro-inputs made both physically and financially accessible, appropriate mechanization (e.g. availability of tractor services and irrigation) and support services (e.g. extension, credit, monitoring, research and private sector engagement in mechanization).
Efforts to enhance the commercialization of agriculture cannot be overemphasized in achieving food security. As indicated already, productivity enhancement is one way of intensifying commercialization. Another is the provision of necessary market infrastructure and services, such as the creation of effective market information systems and the upgrading of rural roads. A massive diversification into livestock production should be considered by stakeholders, since the results show this enhances food security, especially ownership of poultry and small ruminants. Livestock production complements crop production, especially in periods of crop failure. Finally, these results notwithstanding, it is important to point out that accounting for endogeneity in ordered data models is still a grey area, and that remains a weakness of this study.
For details about these measures and how they compare, see Maxwell et al. [1].
The process of recoding is also detailed in Ballard et al. [28].
The authors gratefully acknowledge a meticulous reviewer for calling their attention to this fact.
The exchange rate as quoted by www.xe.com as at 1 October 2016 was US$1.00 = GH¢3.9649.
FAO: Food and Agriculture Organization of the United Nations
FtF: Feed the Future
GSS: Ghana Statistical Service
IFAD: International Fund for Agricultural Development
METSS: Monitoring Evaluation and Technical Support Services
MoFA: Ministry of Food and Agriculture
SADA: Savannah Accelerated Development Authority
UN: United Nations
WFP: World Food Programme
Maxwell D, Coates J, Vaitla B. How do different indicators of household food security compare? Empirical evidence from Tigray. Medford: Feinstein International Center, Tufts University; 2013.
FAO, IFAD, WFP. The state of food insecurity in the world. The multiple dimensions of food security. Rome, FOA. 2013. http://www.fao.org/docrep/018/i3434e/i3434e.pdf. Accessed 10 March 2016.
Van Eeckhout L. Alerte sur un risque de crise alimentaire. Le Monde. 2011;6(4):4.
FAO, IFAD, WFP. The state of food insecurity in the world. Meeting the 2015 international hunger targets: taking stock of uneven progress. 2015. http://www.fao.org/3/a-i4646e/index.html. Accessed 17 May 2016.
Turyahabwe N, Kakuru W, Tweheyo M, Tumusiime DM. Contribution of wetland resources to household food security in Uganda. Agric Food Secur. 2013;. doi:10.1186/2048-7010-2-5.
Folaranmi T. Food insecurity and malnutrition in Africa: current trends, causes and consequences. 2012. http://m.polity.org.za/article/food-insecurity-and-malnutrition-in-Africa-current-trends-causes-and-consequences-2012-09-19. Assessed 3 June 2016.
WFP. Comprehensive food security and vulnerability analysis. Accra, Ghana. 2009. http://www.wfp.org/food-security. Assessed 12 May 2016.
MoFA. Agriculture in Ghana: facts and figures 2010. Statistics, Research and Information Directorate (SRID). Accra, Ghana; 2011.
GSS. Revised gross domestic product 2011. National Accounting. Accra, Ghana; 2012.
Wodon Q, Tsimpo C, Coulombe H. Assessing the potential impact on poverty of rising cereals prices: the case of Ghana. Policy Research working paper; no. WPS 4740. 2008. http://documents.worldbank.org/curated/en/869341468250300834/Assessing-the-potential-impact-on-poverty-of-rising-cerals-prices-the-case-of-Ghana. Assessed 2 May, 2016.
Obayelu AO, Adepoju AO, Idowu T. Factors influencing farmers' choices of adaptation to climate change in Ekiti State, Nigeria. J Agric Environ Int Dev. 2014;108(1):3–16.
Aidoo R, Tuffour T. Determinants of household food security in the Sekyere-Afram Plains district of Ghana. 1st Annual International Interdisciplinary Conference, AIIC 2013, 24–26 April, Azores, Portugal. 2015.
Marchetta F. On the move: livelihood strategies in northern Ghana. Post-Doctorante CNRS, Clermont Universite, France. 2011.
GSS. Poverty Trends in Ghana in the 1990s. Accra, Ghana; 2000.
METSS-Ghana. FtF Ghana population-based survey in northern Ghana: baseline protocol. Accra, Ghana. 2012. http://www.metss-ghana.ksu.edu/PBS_items/METSS-Ghana-PBS-Protocol-05-07-2012-AS-Revised-05-29-2012-AS-Final_Revision.pdf. Assessed 15 Jan 2016.
WFP. Comprehensive food security and vulnerability analysis 2012. Focus on northern Ghana. 2012. http://www.wfp.org/food-security. Assessed 18 Jan 2016.
Kuwornu JKM, Suleyman DM, Amegashie DPK. Analysis of food security of farming households in the forest belt of the Central region of Ghana. Russ J Agric Soc Sci. 2013;1(13):26–42.
Nata JT, Mjelde JW, Boadu FO. Household adoption of soil-improving practices and food insecurity in Ghana. Agric Food Secur. 2014. doi:10.1186/2048-7010-3-17.
Quaye W. Food security situation in northern Ghana, coping strategies and related constraints. Afr J Agric Res. 2008;3(5):334–42.
Owusu V, Abdulai A, Abdul-Rahman S. Non-farm work and food security among farm households in Northern Ghana. Food Pol. 2011;36(2):108–18.
McKeown D. Food security: implications for the early years. Toronto: Toronto Public Health Canada; 2006.
World Bank. Poverty and hunger: issues and options for food security in developing countries. Washington DC; 1986. A World Bank Policy Study. http://documents.worldbank.org/curated/en/166331467990005748/poverty-and-hunger-issues-and-options-for-food-security-in-developing-countries. Assessed 12 Feb 2016.
MoFA. Food and Agriculture Sector Development Policy II (FASDEP II). Accra, Ghana. 2007.
Mason J. Keynote paper: measuring hunger and malnutrition. Orleans: Tulane Univ Louisiana; 2002.
Kennedy E. Qualitative measures of food insecurity and hunger. In: Paper read at international scientific symposium on measurement and assessment of food deprivation and undernutrition. 2002.
FAO. International scientific symposium on food and nutrition security information: from valid measurement to effective decision making. 2013. http://www.fao.org/docrep/017/i3244e/i3244e.pdf. Assessed 17 Apr 2016.
Coates J, Maxwell D. Reaching for the stars? Universal measures of household food security. In FAO International Scientific Symposium on Food Security and Nutrition Measurement. 2012. http://www.fao.org/docrep/017/i3244e/i3244e.pdf. Assessed 17 Apr 2016.
Ballard T, Coates J, Swindale A, Deitchler M. Household hunger scale: indicator definition and measurement guide. 2011. http://www.fantaproject.org/sites/default/files/resources/HHS-Indicator-Guide-Aug2011.pdf. Assessed 17 Apr 2016.
Arene CJ, Anyaeji J. Determinants of food security among households in Nsukka Metropolis of Enugu State, Nigeria. Pak J Soc Sci. 2010;30(1):9–16.
Omotesho OA, Muhammad-Lawal A. Optimal food plan for rural households' food security in Kwara State, Nigeria: the goal programming approach. J Agric Biotechnol Sustain Dev. 2010;2(1):7–14.
Babatunde R, Omotosho O, Sholotan S. Factors influencing food security status of rural farming households in North Central Nigeria. Agric J. 2007;2(3):351–7.
Ojogho O. Determinants of food insecurity among arable framers in Edo State, Nigeria. Agric J. 2010;5(3):151–6.
Oluyole KA, Oni OA, Omonona BT, Adenegan KO. Food security among cocoa farming households of Ondo State, Nigeria. ARPN J Agric Biol Sci. 2009;4:7–13.
Tefera T, Tefera F. Determinants of households' food security and coping strategies for food shortfall in Mareko District, Guraghe Zone, Southern Ethiopia. J Food Secur. 2014;2(3):92–9.
Beyene F, Muche M. Determinants of food security among rural households of Central Ethiopia: an empirical analysis. Q J Int Agric. 2010;49(4):299–318.
Oluwatayo IB. Explaining inequality and welfare status of households in rural Nigeria: evidence from Ekiti State. Hum Soc Sci J. 2008;3(1):70–80.
Wolfe B, Behrman J. Is income overrated in determining adequate nutrition? Econ Dev Cul Change. 1983;31(3):525–49.
Pitt M. Food preferences and nutrition in rural Bangladesh. Rev Econ Stat. 1983;65(1):105–14.
Garrett J, Ruel M. Are determinants of rural and urban food security and nutritional status different? Some insights from Mozambique. World Dev. 1999;27(11):1955–75.
Bhargava A. Estimating short and long run income elasticities of foods and nutrients for rural South India. J R Stat Soc Ser A (Stat Soc). 1991;154(1):157–74.
Subramanian S, Deaton A. The demand for food and calories. J Pol Econ. 1996;104(1):133–62.
Grimard F. Does the poor's consumption of calories respond to changes in income? Evidence from Pakistan. Pak Dev Rev. 1996;35(3):257–83.
Skoufias E. Is the calorie–income elasticity sensitive to price changes? Evidence from Indonesia. World Dev. 2003;31(7):1291–307.
Abdulai A, Aubert D. Does income really matter? Nonparametric and parametric estimates of the demand for calories in Tanzania. International Congress, August 28–31, 2002, Zaragoza.
Aromolaran A. Intra-household redistribution of income and calorie consumption in South-Western Nigeria. Working papers, Economic Growth Center: Yale University. 2004.
Ecker O, Qaim M. Analyzing nutritional impacts of policies: an empirical; study of Malawi. World Dev. 2011;39:412–28.
San-Ahmed A, Holloway G. Calories, conflict and correlates: redistributive food security in post-conflict Iraq. Food Policy. 2017;2017(68):89–99.
Poirier D. Revising beliefs in non-identified models. Econometric Theory. 1998;14(4):483–509.
Greene WH. Econometric analysis. 5th ed. New Jersey: Prentice Hall; 2002.
Davidson R, MacKinnon JG. Econometric theory and methods. New York: Oxford University Press; 2003.
McKelvey RD, Zavoina W. A statistical model for the analysis of ordinal level dependent variables. J Math Soc. 1975;4(1):103–20.
Cameron AC, Trivedi PK. Regression Analysis of count data. Cambridge: Cambridge University Press; 1998.
Maddala GS. Limited-dependence and qualitative variables in econometrics. Cambridge: Cambridge University Press; 1983.
Greene WH. Econometric modelling guide, vol. 1. New York: Econometric Software Inc; 2007.
GSS. Ghana living standards survey round 6 (GLSS 6). Accra, Ghana. 2014.
Little PD, Stone MR, Mogues T, Castro AP, Negatu W. Moving in place: drought and poverty dynamics in south Wollo, Ethiopia. J Dev Stud. 2006;42(2):200–25.
PKN conceived of the study; all authors planned it. BMA prepared the data and performed the statistical analysis under the guidance of PKN and HI. All authors drafted the manuscript, critically reviewed it for important intellectual content and contributed to the interpretation of results. All authors read and approved the final manuscript.
Paul Kwame Nkegbe holds Ph.D. in Agricultural and Food Economics from the University of Reading, UK. His research interests are in applied microeconomic analysis and recently in macroeconomic analysis. Benjamin Musah Abu is a budding academic and holds an MPhil degree in Agricultural Economics from the University of Ghana. His research interests are in smallholder agriculture, production economics and agricultural marketing. Haruna Issahaku is currently a Ph.D. scholar in Finance at the University of Ghana. He holds an MPhil in Agricultural Economics from the University of Ghana. His current research focuses on development finance, monetary policy and financial market development.
The authors are grateful to the Monitoring Evaluation and Technical Support Services (METSS) and the USAID for allowing for the use of the data. They also acknowledge the valuable comments of two anonymous referees and the editor which greatly improved the initial draft manuscript. They, however, take full responsibility for any error.
The data set used in this study is a property of the US government and is available online. However, the version analysed in this study is available from the corresponding author on request.
All authors consent to the publication of this manuscript.
Ethical approval and consent to participate
Not applicable as secondary data were used.
Authors received no funding for the study.
Department of Economics and Entrepreneurship Development, Faculty of Integrated Development Studies, University for Development Studies, P.O. Box 520, Wa, Ghana
Paul Kwame Nkegbe, Benjamin Musah Abu & Haruna Issahaku
Correspondence to Paul Kwame Nkegbe.
Additional file 1. Zone of influence of Ghana's Feed the Future Initiative.
Nkegbe, P.K., Abu, B.M. & Issahaku, H. Food security in the Savannah Accelerated Development Authority Zone of Ghana: an ordered probit with household hunger scale approach. Agric & Food Secur 6, 35 (2017) doi:10.1186/s40066-017-0111-y
Household hunger scale
Cross-sectional data
Ordered probit
Hopping to infinity along a string of digits
Let $s$ be an infinite string of decimal digits, for example: \begin{array}{cccccccccc} s = 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 & \cdots \end{array} Consider a marker, the head, pointing to the first digit, $3$ in the above example. Interpret the digit under the head as an instruction to move the head $3$ digits to the right, i.e., to the $4$th digit. Now the head is pointing to $1$. Interpret this as an instruction to move $1$ place to the left. Continue in this manner, hopping through the string, alternately moving right and left. Think of the head as akin to the head of a Turing machine, and $s$ as the tape of instructions.
There are three possible behaviors. (1) The head moves off the left end of $s$:
\begin{array}{cccccccccc} 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 3 & 1 & 4 & 1 & 5 & 9 & 2 & 6 & 5 & 3 \\ \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} \end{array}
(2) The head goes into a cycle, e.g., when the head hits $0$:
\begin{array}{cccccccccccccc} 6 & 4 & 5 & 7 & 5 & 1 & 3 & 1 & 1 & 0 & 6 & 4 & 5 & 9 \\ \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 6 & 4 & 5 & 7 & 5 & 1 & 3 & 1 & 1 & 0 & 6 & 4 & 5 & 9 \\ \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 6 & 4 & 5 & 7 & 5 & 1 & 3 & 1 & 1 & 0 & 6 & 4 & 5 & 9 \\ \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 6 & 4 & 5 & 7 & 5 & 1 & 3 & 1 & 1 & 0 & 6 & 4 & 5 & 9 \\ \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} \\ 6 & 4 & 5 & 7 & 5 & 1 & 3 & 1 & 1 & 0 & 6 & 4 & 5 & 9 \\ \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 6 & 4 & 5 & 7 & 5 & 1 & 3 & 1 & 1 & 0 & 6 & 4 & 5 & 9 \\ \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{ ${}^{\wedge}$} & \text{} & \text{} & \text{} & \text{} \end{array}
(3) The head moves off rightward to infinity:
For $s = 3131313131313\cdots$, the head visits positions $1 \to 4 \to 3 \to 6 \to 5 \to 8 \to 7 \to 10 \to 9 \to 12 \to \cdots$, gaining two places to the right with every pair of hops, so it drifts off to the right forever.
This last string could be viewed as the decimal expansion of $31/99 = 0.3131313131313\cdots$.
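For anyone who wants to experiment (as TheSimpliFire suggests in a comment below), here is a small simulator. It is only a sketch of my own: the function name `hop`, the 0-based indexing, the `(position, direction)` cycle test, and the step cap are choices I made, not anything from the question, and a finite prefix can of course only suggest the hop-to-infinity behavior.

```python
def hop(s, max_steps=10_000):
    """Simulate the hopping head on a finite prefix s of a digit string.

    Returns "left" if the head falls off the left end, "cycle" if it repeats
    a (position, direction) state, "right" if it runs past the end of the
    prefix (consistent with hopping rightward forever), else "undecided".
    """
    pos, direction = 0, +1           # 0-based position; +1 = right, -1 = left
    seen = set()
    for _ in range(max_steps):
        if pos < 0:
            return "left"
        if pos >= len(s):
            return "right"
        if (pos, direction) in seen:
            return "cycle"
        seen.add((pos, direction))
        pos += direction * int(s[pos])
        direction = -direction
    return "undecided"

# The three behaviors illustrated above:
print(hop("3141592653"))        # "left"
print(hop("64575131106459"))    # "cycle"  (stalls on the 0)
print(hop("31" * 50))           # "right"
```

On the $31$-repeating prefix the head never looks at the digits it skips, which is exactly the freedom the answer below exploits.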
Q1. What is an example of an irrational number $0.d_1 d_2 d_3 \cdots$ whose string $s=d_1 d_2 d_3 \cdots$ causes the head to hop rightward to infinity?
Q1.5. (Added). Is there an explicit irrational algebraic number with the hop-to-$\infty$ property?
I'm thinking of something like $\sqrt{7}-2$, the 2nd example above (which cycles).
Q2. More generally, which strings cause the head to hop rightward to infinity?
Update (summarizing answers, 13Apr2019). Q1. There are irrationals with the hop-to-$\infty$ property (@EthanBolker, @TheSimpliFire), but explicit construction requires using, e.g., the Thue-Morse sequence (@Wojowu). Q1.5. @EthanBolker suggests this may be difficult, and @Wojowu suggests it may be false (because the head can never jump past a run of nine consecutive zeros): perhaps no algebraic irrational has the hop-to-$\infty$ property. Q2. A partial algorithmic characterization by @TheSimpliFire.
Joseph O'Rourke
$\begingroup$ You want a string such that the $D_1=(1+d_1)$th digit is less than $d_1$, that the $D_2=(1+d_1-D_1)$th digit is greater than $D_1$, that the $D_3=(1+d_1-D_1+D_2)$th digit is less than $D_2$, etc. I think it is possible to generate an algorithm and you can try for some simulations. $\endgroup$ – TheSimpliFire Apr 13 at 12:32
$\begingroup$ Another question. For sequences that don't hop to infinity behavior is determined by a (finite) initial subsequence. There are only countably many of those. What are they? $\endgroup$ – Ethan Bolker Apr 13 at 12:42
$\begingroup$ I think Q1.5 is hard since digit sequences for algebraic numbers are hard adamczewski.perso.math.cnrs.fr/Siauliai.pdf $\endgroup$ – Ethan Bolker Apr 13 at 13:58
$\begingroup$ Followup question: what are the measures of the three sets in the interval $(0,1)$? $\endgroup$ – eyeballfrog Apr 13 at 16:30
$\begingroup$ @eyeballfrog The set of numbers with hop to infinity property has measure zero, because you can't hop past a string of nine zeros. This also strongly suggests no algebraic irrational has this property. $\endgroup$ – Wojowu Apr 13 at 18:21
$$ x 1^{x-2} y 1^{y-2} z1^{z-2} \ldots $$ moves off to infinity for any sequence of digits $xyz\ldots$ between $3$ and $9$. Select a sequence that defines an irrational number.
More generally
$$ x 1 ?^{x-3} y 1 ?^{y-3} z 1 ?^{z-3} \ldots $$ works, where $?^n$ is an arbitrary string of $n$ digits, since those spots will never be hopped on.
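A quick sanity check of the first pattern (reusing the `hop` sketch added under the question; the repeating letters $3,4,\dots,9$ below are only an illustration and are not claimed to give an irrational number):

```python
def bolker_string(letters):
    """Build x 1^(x-2) y 1^(y-2) ... from a sequence of digits in the range 3..9."""
    return "".join(str(x) + "1" * (x - 2) for x in letters)

s = bolker_string([3, 4, 5, 6, 7, 8, 9] * 20)
print(hop(s))   # "right": the head bounces forward and never falls off or cycles
```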
Ethan Bolker
$\begingroup$ (+1) I was thinking exactly the same thing just as you posted your answer! $\endgroup$ – TheSimpliFire Apr 13 at 12:25
$\begingroup$ Nice! Could you be more explicit about how you know the resulting digit string is irrational? Thanks. $\endgroup$ – Joseph O'Rourke Apr 13 at 14:17
$\begingroup$ Clearly it will be irrational if $t = 0.xyz\ldots$ is, since then periodicity is impossible. It can be even when $t$ is rational because the $?$'s can force that. $\endgroup$ – Ethan Bolker Apr 13 at 14:47
$\begingroup$ I worry about explicitly specifying $t=0.xyz\cdots$, excluding digits $\{ 0,1,2 \}$, guaranteeing $t$ is irrational. $\endgroup$ – Joseph O'Rourke Apr 13 at 16:46
$\begingroup$ @JosephO'Rourke Take the Thue-Morse sequence, add 3 to each term and take that as a binary sequence. Don't hope for a better sort of answer - our understanding of decimal expansions of "natural" constants is really bad, so you won't be able to exclude 0,1,2 without artificially constructing the decimal expansion. $\endgroup$ – Wojowu Apr 13 at 18:28
Endpoint in Math
Introduction to the Endpoint in Math

An endpoint in math is a point that marks the end of a line segment or a ray. It refers to either of the two points that limit a line segment on both sides. An endpoint is important because it tells you where the line segment stops: it cannot go beyond that point!

Endpoint in Math: Definition

The endpoint can be defined as the point on a graph or a figure where the figure ends. It can be one end of a ray, either of the two extreme points of a line segment, a point connecting two sides of a polygon (a vertex), or the common endpoint of two rays forming an angle.

Before we can understand what an endpoint in math is, let us first refresh some related math concepts.

What Is a Line Segment?

A line segment is a section of a line that connects two points. A line is limitless and extends infinitely in both directions. However, a line segment is limited on both ends by two points.

What Is a Ray?

Unlike a line segment, a ray is limited at one end by a point, while the other end extends infinitely. So, which is the endpoint of a ray? Take a look at the ray PQ. Here, P is the endpoint.
What Is an Endpoint in Geometry?
An endpoint is defined as a point at which a line segment or a ray ends. In general, an endpoint is the furthermost or the ending point.
In a line segment, endpoints are the points at which the line segment ends. A line segment has two endpoints.
A ray has one endpoint.
In an angle, the common point between the two rays (vertex) is an endpoint.
A polygon is formed using three or more line segments. Each side intersects two other sides at endpoints. In polygons, the points that join the sides (the vertices) are endpoints.
Finding the Length of a Line Segment Using the Endpoints
Endpoints in math help us measure the length of a line segment. Consider a line segment AB.
We can use the coordinates of the endpoints A and B to determine the length of AB.
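The article's coordinate figure is not reproduced here, but the calculation it refers to is the usual distance formula: for A $(x_{1}, y_{1})$ and B $(x_{2}, y_{2})$,

$AB = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2}}$

For example (the numbers are my own, not from the article), A $(1, 2)$ and B $(4, 6)$ give $AB = \sqrt{3^{2} + 4^{2}} = 5$.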
Bisecting a Line Segment Using Endpoints
To bisect a line segment means to cut the line at the center to divide it into two equal parts.
When you bisect a line segment, you split it at the "midpoint." Each of the two new line segments now has a new endpoint at the original midpoint, in addition to one of the original endpoints.
We can find the coordinates of the midpoint using the coordinates of the endpoints as follows:
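The formula itself did not survive the page extraction here, but it is the same midpoint formula the article states later: for endpoints A $(x_{1}, y_{1})$ and B $(x_{2}, y_{2})$, the midpoint M $(x_{m}, y_{m})$ is given by

$x_{m} = \frac{x_{1} + x_{2}}{2}$

$y_{m} = \frac{y_{1} + y_{2}}{2}$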
How to Find the Endpoint of a Line Segment?
If we know one endpoint and the midpoint of the line segment, we can find the other endpoint of a line segment. To find the missing endpoint using the coordinates of the midpoint and the other endpoint, we find the distance from the known endpoint to the midpoint. Then, we measure that same distance from the midpoint in the other direction to find the other endpoint.
The Formula to Find the Endpoint: Endpoint Formula
Let's learn the formula to find a missing endpoint.
Let M (xm, ym) be the midpoint of the line segment joining two endpoints A (x1, y1) and B (x2, y2).
We can use the midpoint formula to find either of the endpoints. Given the coordinates of M (the midpoint) and A (the endpoint), the coordinates of B can be calculated using the following formula:
$x_{m} = \frac{x_{1}+ x_{2}}{2}$
$y_{m} = \frac{y_{1}+ y_{2}}{2}$
Here, adjusting the terms on LHS and RHS, we get
$x_{2} = 2x_{m} - x_{1}$

$y_{2} = 2y_{m} - y_{1}$
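As a quick worked check (the numbers are illustrative, not from the article): if the known endpoint is A $(1, 2)$ and the midpoint is M $(3, 5)$, then

$x_{2} = 2(3) - 1 = 5$

$y_{2} = 2(5) - 2 = 8$

so the missing endpoint is B $(5, 8)$. As a check, the midpoint of $(1, 2)$ and $(5, 8)$ is indeed $(3, 5)$.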
How Do We Name Objects Using the Endpoints?
Endpoints help us name objects in geometry, such as line segments, angles, polygons, etc.
A line segment is named by its two endpoints. In the line segment below, points A and B are its two endpoints. The line segment is named after its endpoints, so we name it AB or BA. We write it symbolically as $\overline{AB}$ or $\overline{BA}$.
A ray has one endpoint. It is named with its endpoint being the first letter.
In the ray below, A is the point that is the endpoint of the ray. So the ray is named $AB$.
An angle is made up of two rays that meet at a common endpoint. The angle can be named based on the common endpoint.
In the angle below, the rays are BA and BC. They share a common endpoint, which is B. The angle can be named by its common endpoint as $\angle \text{B}$. It can also be named based on the two rays as $\angle \text{ABC}$.
A polygon is formed by three or more line segments that join at endpoints called vertices.
For example, in a polygon with four sides, AB, BC, CD, and AD are the sides. We can name a polygon by listing the endpoints as ABCD.
An endpoint in math is important because it delimits a line segment or a ray and tells us where the figure starts and stops. It helps us find the length of a line segment and name geometric objects such as line segments, rays, angles, and polygons. Remember, to find a missing endpoint, you need the midpoint and the other endpoint!
Solved Examples
1. How many endpoints does a ray have? Identify the endpoints of the given ray.
A ray has one endpoint, and it extends infinitely in the other direction.
In the given image, we have ray PQ and its endpoint is the point P.
2. What is the name of a line segment with endpoints B and C?
The name of the line segment with endpoints B and C is BC or CB.
3. What is the name of a ray with endpoint A and joining a point C?
A ray with endpoint A that passes through the point C is named AC, because a ray is named starting with its endpoint.
4. What is the name of an angle with D as the common point between the rays DE and DF?
The name of the angle is $\angle \text{EDF}$ because an angle is named using one point on each ray (E and F) with the common endpoint, the vertex D, in the middle.
5. What is the name of a polygon with sides AB, BC, CD, and DA?
Since the sides are AB, BC, CD, and DA, the vertices are A, B, C, and D. So, the name of the polygon is ABCD because it is named after its endpoints or vertices.
Try this quiz to test your knowledge.
How many endpoints are there in a line segment?
Correct answer is: 2
A line segment has two endpoints on either side.
What is the name of an angle with a common point E between the two rays?
$\angle \text{E}$
$\angle \text{EFG}$
$\angle \text{CDE}$
$\angle \text{OE}$
Correct answer is: $\angle \text{E}$
The angle is named by the common point, which is E.
What is the name of a ray with endpoint F, and G as any given point on the ray?
Ray GF
Ray G
Ray F
Ray FG
Correct answer is: Ray FG
The ray is named starting with the endpoint, which is F. So, the name of the ray is Ray FG.
What is the name of an angle that is made up of rays AB and AC?
$\angle \text{A}$
$\angle \text{B}$
$\angle \text{C}$
$\angle \text{CBA}$
Correct answer is: $\angle \text{A}$
Common endpoint of rays AB and AC $=$ A
So, the name of the angle formed is $ \angle \text{A}$.
What is the name of a polygon with sides AB, BC, and CA?
Correct answer is: ABC
The polygon is named after the endpoints of its sides, which are A, B, and C.
What is the common endpoint of an angle called?
The common endpoint of an angle is called a vertex.
How do we define the midpoint of a line segment in math using endpoints?
The midpoint is the point lying exactly equidistant between the two endpoints of a line segment.
Can you find both the endpoints of a line segment, given the midpoint?
No. If you know only the midpoint, infinitely many pairs of endpoints are possible. You need the coordinates of the midpoint and at least one endpoint.
How do you find the midpoint of a line segment?
The midpoint of a line segment can be found by measuring the distance between the two endpoints and dividing it by 2; the point at that distance from either endpoint is the midpoint. Equivalently, its coordinates are the averages of the endpoints' coordinates.
Does a line have endpoints?
No, a line does not have endpoints.
Popular Science Monthly/Volume 65/June 1904/Copernicus
COPERNICUS.
By EDWARD S. HOLDEN, Sc.D., LL.D.,
LIBRARIAN OF THE U. S. MILITARY ACADEMY.
NICOLAUS COPERNICUS was born in Thorn, a town of Prussian Poland, on February 19, 1473. His father, Niklas Koppernigk, was a merchant of Krakau who established himself in Thorn about 1450, and there married Barbara, the daughter of Lucas Watzelrode, a descendant of an old patrician family. The father was chosen alderman in 1465—a testimony of his worth. He had four children: Barbara, who died abbess of the Cistercians at Culm; Katherina, who married a merchant of Krakau; and two sons, Andreas and Nicolaus.
We know little of the childhood of Nicolaus. In 1483 his father died and he was placed in the care of his uncle, another Lucas Watzelrode, who was called to be bishop of Ermeland in 1489, and with whose career that of Copernicus is closely bound up. The boy was educated in Thorn till his nineteenth year, when he was placed in the University of Krakau. The greatest illustration of its faculty was Albertus Blar de Brudzewo (usually written Brudzewski), professor of astronomy and mathematics. The works of Purbach and of Regiomontanus were expounded in his lectures. In the winter semester of 1491-92 Copernicus was matriculated in the faculty of arts, and devoted himself, so it is recorded, with the greatest diligence and success to mathematical and astronomical studies, becoming, at the same time, familiar with the use of astronomical instruments. In the autumn of 1494 Brudzewski left the university, and it is probable that Copernicus did the same. The humanists of the faculty had suffered a defeat at the hands of the scholastics, and the latter now ruled supreme. At Krakau Copernicus studied the theory of perspective, and applied it in painting. Portraits from his hand are praised by his contemporaries.
In the summer of 1496 the youth went to Italy, and in January, 1497, he was inscribed at the University of Bologna, in the 'Album of the German Nation,' as a student of jurisprudence. From 1484 to 1514 the professor of astronomy at Bologna was Dominicus Maria da Novara. He was an observer, a theorist, as well as a free critic of the received doctrines of Ptolemy, although such of his criticisms as we know are not especially happy, it must be confessed. He determined the obliquity of the ecliptic to be 23° 29′ by his own observations, which is in error by 1′ 20″ only, a small quantity for his time. Copernicus was received by him on the footing of a friend and helper, rather than as a pupil; and the association was, without doubt, of great benefit to the younger man. All the systematized knowledge of the time was opened to him; what was known was examined and discussed, not received uncritically. Best of all, observation was practised as a test of theory and as the only basis for its advancement.
The first recorded observation of Copernicus is an occultation of Aldebaran by the moon in 1497 at Bologna; in 1500 he observed a conjunction of Saturn with the moon at the same place, and a lunar eclipse at Rome. Other eclipses were observed in 1509, 1511, 1522 and 1523; and positions of Venus, Mars, Jupiter and Saturn in 1512, 1514, 1518, 1520, 1523, 1526, 1527, 1529, 1532, 1537. These recorded observations extend over a period of forty years. Though they are few in number, there is no reason to doubt that they are merely excerpts from a more considerable collection. They were made with very simple wooden instruments constructed by the observer's own hands. One of them, a triquetrum, was sent as a present to Tycho Brahé in 1584, forty-one years after the death of Copernicus. It was made of pine wood, eight feet long, with two equal cross arms. They were divided, in ink, into 1,000 equal parts, and the long arm into 1,414 parts. This precious relic, together with a portrait of Copernicus, was long preserved in Tycho's observatory at Uraniborg, and finally removed to Bohemia, where it perished in the confusions incident to the Thirty Years' War (1618-48).
Rheticus once urged upon him the need of making astronomical observations with all imaginable accuracy. Copernicus laughed at his friend for being disturbed about so small an error as a minute of arc, and declared that if he were sure of his observations to ten minutes, he would be as pleased as was Pythagoras when he discovered the properties of the right-angled triangle. Copernicus determined the latitude of Frauenburg to be 54° 19½', which is 2' too small. This seems to us a large error. Even with his instruments he could have been more precise if he had repeated his observations many times. But the determination was excellent for the times, as we may see by remembering that the latitude of Paris was given by Tycho as 48° 10', by Fernel as 48° 40', by Vieta as 48° 49', by Kepler as 48° 39'. His calculated longitude of Spica Virginis, which he took as a standard star, was 40' in error. He concluded that Krakau and Frauenburg were on the same meridian—an error of 17½' of arc. The observations of Albategnius, five centuries earlier, were far more precise, and this was not entirely owing to the superiority of the Arab instruments.
At the University of Bologna Copernicus mastered Greek. The knowledge was subsequently utilized in a translation into Latin of the epistles of Theophylactos Simokatta (630 A. D.), which he printed in 1509. This was the only work published by him in his lifetime. The translation is said to be elegant, but the book itself is of comparatively little importance. He had studied it at the university and utilized his knowledge. The book upon which his fame rests—'De Revolutionibus Orbium Cœlestium'—did not appear until the very day of his death, and was published by the care of others. Scipione dal Ferro, the discoverer of the general method of solving the cubic equation, was in residence at Bologna at the same time, and there is little doubt that Copernicus met him also, although there is no record of the meeting. In recording this name we seem to be well out of the middle age. A general solution of the cubic belongs to the modern period, although the Arabs were working on the question in the tenth century.
In 1497 Copernicus was appointed Canon of Frauenburg, which assured to him, for life, an income corresponding to about $2,250 of our money of to-day, and a leave of absence of three years was granted him to continue his studies in Italy. At a later date he also received a sinecure appointment at Breslau. He had already taken the lesser vows; to the higher he never was dedicated. In 1499 his brother Andreas was likewise consecrated Canon of Frauenburg, and he also matriculated at Bologna (1498) in the faculty of law. Both brothers were represented at home by substitutes, and considerable expense may have attached to this, but it is curious to note that on account of the 'costly living' at the university they needed, and received, remittances from the bishop, their uncle.
In the summer of 1500 his leave of absence expired, and in company with his brother he crossed the Alps to Frauenburg, where both received a new permission to return to Italy. It was stipulated that Nicolaus should study medicine after the completion of his courses in law, in order that he might serve as physician to the Frauenburg chapter. In the autumn of 1501 both brothers were again in Italy, Andreas at Rome, Nicolaus at Padua. The doctor's degree in jurisprudence was conferred upon Nicolaus in 1503, but he remained in Italy till the year 1505 or 1506—nine or ten years in all.
In the archives of Ferrara we read:
1503. Die ultima mensis Maij. Ferrarie in episcopali palatio, sub lodia horti presentibus testibus vocatis et rogatis Spectibili viro domino Joanne Andrea de Lazaris siculo panormito almi Juristarum gymnasii Ferrariensis Magnifico Rectore, Ser Bartholomeo de Silvestris, cive et notario Ferrariensi. Ludovico quondam Baldassaris de Regio cive Ferrariensi et bidello Universitatis Juristarum civitatis Ferrarie, et alijs.
m: Venerabilis, ac doctissimus vir Nicholaus Copernich de Prusia Canonicus Varmensis et Scholasticus ecclesie S. crucis Vratislaviensis: qui studuit Bononie et Padue, fuit approbatus in Jure canonico nemine penitus discrepante, et doctoratus per prefatum dominum Georgium Vicarium antedictum etc.
promotores fuerunt
D. Philippus Bardella et
D. Antonius Leutus, qui ei dedit insignia,
cives Ferrariensis etc.
In the year 1500 Copernicus delivered lectures at Rome before an audience of two thousand hearers, the Archbishop of Mechlin declares. These lectures could not have announced the heliocentric theory, which dates from the year 1506 only, nor could they have been before the university, because Copernicus did not take the degree that admitted him to the privilege of teaching until 1503. He took no degree at Krakau, so far as is known.
Copernicus was now quite free to prosecute his studies in medicine, which he combined with philosophy. The celebrated Pomponazzi was then a member of the faculty, in the prime of his vigor. He had taken his degrees in philosophy and medicine at Padua in 1487, and in the next year, when he was but twenty-six years of age, had been chosen extraordinary professor. It was a custom of those days to choose two professors of each subject in order that their public disputations might stimulate their hearers to independent thinking. The ordinary professor of philosophy was Achillini—a veteran of the strict school of Aristotle.
Pomponazzi remained at Padua until the university was closed in 1509; and in Ferrara till 1512, when he removed to Bologna, where in 1516 he wrote his famous treatise on the 'Immortality of the Soul'—the foundation of his character as a skeptic and of his fame as a philosopher. Into his doctrines it is not necessary to enter at length. Briefly they are that man, standing on the confines of two worlds—the material and spiritual—necessarily partakes of the nature of both. Man is partly mortal (since the human soul depends in some degree on matter) and partly immortal. The soul is, Pomponazzi says, absolutely mortal, relatively immortal. This doctrine was, of course, a denial of the theory of the Roman church. He was vehemently attacked. His book was burned in Venice. Powerful friends among the cardinals protected him in Rome. His university stood by him and confirmed him in his professorial chair for eight years, and increased his salary to 1,200 ducats.
Pomponazzi was a thinker of essentially modern spirit. Reason, he said, was superior to any authority. If, in his teaching of Aristotle, he should find himself in error, "ought I," he says, "to interpret him differently from my real sentiment? If it is said—the hearers are scandalized—well, be it so. They are not obliged to listen to me, or to forbid my teaching. I neither wish to lie, nor to be false to my true conviction." He decides, on psychological grounds, against the immortality of the soul, and then proceeds to build up a system of practical ethics resting on philosophy. Belief is not needed as a basis for ethics—not by cultured men, at any rate. He is the first writer within the christian communion to attempt to establish morality on a foundation of reason. He is a Stoic. "The essential reward of virtue is virtue itself," he says; "the punishment of the vicious is vice, than which nothing can be more wretched and unhappy." Future rewards and punishments are not invoked.
It is worth our while to pause here and reflect that we are hearing a teacher to whom Copernicus listened; to whom all Italy, nay all Europe, attended. This teaching was permitted in Italy. It influenced thousands upon thousands of hearers. Perhaps the tolerant treatment of Lutherans in Ermeland by Copernicus when administrator of his diocese may have had its origin in ideas received at this time.
There were other men in the faculty with a message for pupils of genius. Aristotle and Plato were expounded from original Greek texts, and the mazy fabrics of the commentators were swept away. Fracastor, who was, by and by, to become an opponent of the heliocentric theory, was a teacher there. He was the first to teach that the obliquity of the ecliptic changed uniformly (1538), in which respect—only—his doctrine was more sound than that of Copernicus. Medicine was expounded by four professors, and dissection of the human body was practised. Marc Antonio della Torre, the instructor of da Vinci, was one of the anatomists. So far as is known, Copernicus did not take his doctor's degree in medicine.
He was, however, skilled in physick, after the fashion of his day, and practised the art during all his life. He was considered, some of his biographers say, 'a second Æsculapius.' We know nothing definite of his medical practise until his later years. From 1529 to 1537 he treated Bishop Ferber, who praises him as the preserver of his life. Duke Albrecht of Prussia called him to Königsberg in 1541 to treat one of his court, and it is of record that the patient recovered.
It does not appear that Copernicus returned to Frauenburg before 1506. He was then thirty-three years of age. All that the world had then to offer in the way of culture was his. He had followed university studies in theology, philosophy, logic, medicine, mathematics and astronomy. He had mastered Greek, and practised painting. He had been the friend or pupil of the greatest teachers of Italy for ten years, and was now established as physician to his uncle in the bishop's palace at Heilsberg, in high station, with an assured income. Up to this period he had shown no original power; but there can be no doubt that he was universally regarded as a man of the highest culture.
His relation to his uncle was that of Achates to Æneas, affectionate and intimate. The bishop of Ermeland was a great noble in a place of power. Affairs of much import to the church had to be treated. The knights of the Teutonic order (founded at Acre in 1190) had conquered the Duchy of Prussia in the thirteenth century. West Prussia had been ceded to Poland in 1466, while East Prussia, including Ermeland, was a Polish fief. A part of the policy of the order was to extend the lordship of their metropolitan Bishop of Riga over the diocese of Ermeland. It was the policy of Bishop Lucas to oppose all such efforts, to attain entire independence, and even to become spiritual over-lord of a part of the territory of the Teutonic order. These plans came to nothing; but a legacy of hatred remained among the knights, who left nothing undone to provoke and degrade the Ermeland bishop and his friends, and to excite disorder in his own territory. The pressure of the invading Tartars on the borders kept the knights occupied, however, and left them little leisure for hostile action. Constant vigilance was required on the part of the bishop, and many journeys to different parts of the bishopric were required.
Copernicus was charged with missions of this sort from the very first. It was during one of these journeys to Petrikau in 1509 that he printed his Latin version of the 'Epistles' of Theophylactos. Greek epistles—invading Tartars—feudal rights—church privileges—Polish and Prussian politics—these were the preoccupations of his mind. We can hardly think that much time was left for astronomy, yet the lunar eclipse of June 2, 1509, was duly observed. One of Copernicus's biographers calls him 'a quiet scholarly monk of studious habits—in study and meditation his life passed—he does not appear as having entered into the life of the times.' This is the legend. It is obviously only a small part of the truth. In March, 1512, the bishop of Ermeland died and Copernicus returned to his cloister at Frauenburg. He was now thirty-nine years old.
In the dedication of his 'De Revolutionibus' to the Pope (1542), Copernicus says that it is now 'four nines of years' since the heliocentric theory was conceived. Strictly interpreted this brings the date of its birth to 1506. It is, at all events, safe to say that the idea was elaborated on German, though it may have been born on Italian, soil.
From 1512 to 1516 Copernicus was in constant residence at the Cathedral of Frauenburg, where indeed the greatest part of his life was spent. For two periods (1516–19 and 1520–21) he lived at Allenstein, administering certain estates belonging to his chapter. His observatory was on one of its towers and commanded a wide horizon. Few observations were necessary for his great discovery of the heliocentric motion. He knew beforehand the phenomena to be explained. Ptolemy had offered a solution that had been accepted for fourteen hundred years. Would any other hypothesis explain them? In the first place, Copernicus affirms the rotation of the earth on its axis. The rising and the setting of the stars is caused by this.
The question of the rotation of the earth had been examined by Ptolemy. He rejects the notion, saying: "If the earth turned in twenty-four hours around its axis every point on its surface would be endowed with an immense velocity, and from the rotation a force of projection would arise capable of tearing the most solid buildings from their foundations and of scattering their fragments in the air." The force of projection depends, we know, not only on the absolute velocity of points on the turning earth (and this velocity is immense), but also on the angular velocity about this axis. The latter is slow. The hour hand of a clock turns twice as fast as the earth. The projective force at its maximum is just sufficient to diminish the weight of a ton by six pounds. A feeble force of the sort is not fitted to tear trees up by their roots or buildings from their foundations, as Ptolemy supposed.
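A modern back-of-the-envelope check of that figure (my own arithmetic, not Holden's; the exact value depends on the latitude and on which 'ton' is meant) runs:

$a_{c} = \omega^{2} R \approx (7.29 \times 10^{-5}\ \mathrm{s^{-1}})^{2} \times 6.38 \times 10^{6}\ \mathrm{m} \approx 0.034\ \mathrm{m/s^{2}} \approx 0.0035\, g$

so a body weighing a ton of $2{,}000$ pounds at the equator is lightened by roughly $0.0035 \times 2000 \approx 7$ pounds, in line with the figure of about six pounds quoted above.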
Copernicus adopted the theory of a rotating earth, although he was no better able than Ptolemy to explain the difficulty. The science of mechanics was not born till the time of Galileo. The reasoning of Copernicus is: "The rotation of the earth being a natural movement, its effects are very different from those of a violent motion; and the earth, which turns in virtue of its proper nature, is not to be likened to a wheel that is constrained to turn by force." He seeks to escape the difficulty by a trick of scholastic philosophy. No other issue was open in his day. Examples of this sort are well fitted to give us a vivid idea of the state of science in those times. It was not easy for our predecessors to take a forward step. More honor to them that the steps were taken.
In the preface to the 'De Revolutionibus' Copernicus declares that he was dissatisfied with the want of symmetry in the theory of eccentrics and weary of the uncertainty of the mathematical conditions. Searching through the works of the ancients, he found that some of them held that the earth was in motion, not stationary. Philolaus, for example, taught that the earth revolved about a central fire.[1] Copernicus makes no mention of the theory of Aristarchus. We must assume that he did not know it, though his ignorance in this respect is hard to explain. We have no list of his library, which was, however, extensive for the time.
"Then I too," says Copernicus, "began to meditate concerning the motion of the earth; and although it appeared an absurd opinion, yet since I knew that, in earlier times, others had been allowed the privilege of imagining what circles they might choose in order to explain the phenomena, I conceived that I also might take the liberty of trying whether, on the supposition of the earth's motion, it were possible to find better explanations of the revolutions of the celestial orbs than those of ancient times. Having then assumed the motions of the earth that are hereafter explained, by long and laborious observation I found at length that if the motions of the other planets be likened to the revolution of the earth, not only their observed phenomena follow from the suppositions, but also that the several orbs, and the whole system, are so connected in order and magnitude that no one part can be transposed without disturbing the rest and introducing confusion into the whole universe." He looked, he here says, for a new theory because the old one was unsymmetric; and his new theory satisfies because it consistently explains the facts of observation and because it was symmetric. Symmetry of the kind referred to is not essential to a true theory. If any theory explains every fact of observation quantitatively as well as qualitatively, it is to be accepted. Copernicus was not free from hampering presuppositions any more than his predecessors.
"We must admit," he says, "that the celestial motions are circular, or else compounded of several circles, since their inequalities observe a fixed law, and recur in value at certain intervals, which could not be unless they were circular; for the circle alone can make that which has been recur again." In writing this passage his mind was closed to every idea but one. Copernicus knew, far better than most of us, that ovals and ellipses might also serve to represent recurring values, but the thought did not even cross his mind in connection with celestial motions. He was committed to circular motions exclusively, from the outset.
"We are therefore not ashamed to confess," he says, "that the whole of the space within the orbit of the moon, along with the center of the earth, moves around the sun in a year among the other planets; the magnitude of the world (solar system) being so great that the distance of the earth from the sun has no apparent magnitude (is indefinitely small) when compared with the sphere of the fixed stars. . . . All which things, though they be difficult and almost inconceivable, and against the opinion of the majority, we, in the sequel, by God's favor, will make clearer than the sun, at least to those who are not ignorant of mathematics."
The system of Copernicus required thirty-four circles and epicycles—four for the moon, three for the earth, seven for the planet Mercury and five for each of the other planets. Cumbrous as this apparatus appears to us, it was a distinct simplification of the Ptolemaic system as taught in the sixteenth century. Fracastor, writing in 1538, employed sixty-three spheres to explain the celestial motions.
One word must be said of the theory of trepidation which Copernicus accepted. The precession of the equinoxes was discovered by Hipparchus by comparing his own observations of stars with preceding ones. He saw that the longitudes of the stars changed progressively and fixed the annual change as 1° in seventy-five years. Later observers determined the amount of precession by comparing their own observations with preceding ones. The motion of the origin of longitudes—the equinox—is really uniform. An unlucky Jew—Tabit ben Korra—in the ninth century, came to the conclusion that the motion was not uniform, but variable, sometimes at one rate, sometimes at another. The variable motion was the trepidation. Copernicus admitted the reality of this phenomenon and thereby introduced a fault. Tycho Brahé, who had no important data on this point that was inaccessible to Copernicus, rejected the idea of trepidation and freed astronomy from a blemish that had endured for centuries.
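(As a present-day aside, not in Holden's text: Hipparchus's rate of $1^{\circ}$ in seventy-five years corresponds to $3600''/75 = 48''$ per year, not far from the modern value of about $50.3''$ per year.)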
It is impossible and unnecessary to exhibit in this place the details of the heliocentric theory of Copernicus. In Kepler's account of Copernican astronomy there is a section on the explanation of the retrogradations of the planets. "Here," he says, "is the triumph of the Copernican astronomy. The old astronomy can only be silent and admire; the new speaks and gives rational account of every appearance; the old multiplies its epicycles; the new, far simpler, preserves everything by the single motion of the earth around the sun." In describing the stationary points of the planets he declared: "Here the old astronomy has naught to say."
We must try to put ourselves in the place of the students of those days who heard the two explanations of the world—the geocentric and the heliocentric—expounded by the same professor in the same lecture-room as alternative hypotheses. Each hypothesis offered a possible explanation. That of Copernicus was so simple that its intellectual acceptance was immediate. It was possible; but was it true? If it were accepted, what implications did it bring in its train? The real difficulty was moral, not intellectual. "Was the whole edifice of Ptolemy to be destroyed? No—some of it was indubitably true. If some, why not all? What was to become of the authority he had held for a thousand years? Was all knowledge to be made over? Even the idea that part of the 'Almagest' was true and part false was not to be lightly accepted.
The conception that every physical problem has one and only one solution was also entirely new; until it was fully received students balanced one explanation against another, and even held two at once, strange as this may seem to us with our new standards in such matters. The heliocentric theory eventually prevailed not because the logic of Ptolemy was broken down, but because all mere authority was weakened. The dicta of philosophers were looked at in a new light. It was not, in fact, generally received until the day of Newton, though it was sufficiently established by the observations of Galileo and convincingly by the calculations of Kepler. To actually demonstrate the rotation of the earth on its axis we must have recourse to an elaborate experiment like that of Foucault on the pendulum, or to comparisons of the force of gravity in different latitudes; to demonstrate its revolution round the sun it is necessary to measure the time required for light to reach us from the distant planets, or to evaluate the aberration of the light of the fixed stars. It was not easy for the sixteenth century to make a decision. If the heliocentric theory were true, then the planet Venus must show phases like the moon; but no phases could be seen. It required Galileo's telescope to show them. Moreover, the fixed stars must have annual apparent displacements in miniature orbits. None such were visible; none were detected until 1837, when Bessel determined the parallax of a fixed star (61 Cygni) for the first time. Galileo sought for them in vain; so did Herschel; so did other astronomers of the eighteenth century with their splendid instruments. The conception of epicycles was retained in the 'De Revolutionibus,' and it seems to us a blemish; to the contemporaries of Copernicus it was a mere analytic device. Newton explains one of the inequalities of the moon's motion by an epicycle, in the 'Principia.'
It is only when we thus consider in detail how the new ideas must have presented themselves to the students of the sixteenth century that we can comprehend the real obstacles in the way of their acceptance. A genius like Kepler could receive them simply on their intellectual merits. Men in general required time to change their point of view, and to accept a novel and essentially disheartening theory. Ptolemy's system of the world was compendious, comfortable, so to say, and easily understanded of the people. Man's central position in the universe flattered his pride and allayed his fears.
Peter the Lombard (1100–60) expresses the accepted view in its baldest form: 'Just as Man is made for the sake of God, that is, that he may serve him, so the Universe is made for the sake of Man, that is, that it may serve him; therefore is Man placed at the middle point of the Universe, that he may both serve and be served.' The new view made man an outcast and placed him in immense and disquieting solitudes. Pascal has phrased the new and anxious fear: 'Le silence éternel de ces espaces infinis m'effraie.'
Astronomers needed accurate tables of the planetary motions in order to predict eclipses and conjunctions. The Alphonsine tables were quite unsatisfactory. The theory of Copernicus was made the basis of new tables—the Prutenic tables—by Reinhold in 1551, and they remained the standard until 1627, when the Rudolphine tables, based on Kepler's theories and Tycho's observations, superseded them. The doctrines of Copernicus were spread by means of almanacs based upon Reinhold's tables rather than by his theoretical works; and they made their way quietly, surely and without any great opposition. Tycho proposed a new (and erroneous) system of the world in 1587. It also had its effect in weakening the authority of Ptolemy. The motions of comets began to be observed with care. It was clear that the doctrine of material crystal spheres would not allow room for their erratic courses. In one way and another the authority of the ancients was broken down and the way prepared for the eventual triumph of the theory of Copernicus.
It is interesting to note the opinions of Englishmen of the sixteenth and seventeenth centuries. Francis Bacon rejected the new doctrines; Gilbert of Colchester, Robert Recorde, Thomas Digges and other Englishmen of the time of Queen Elizabeth, accepted them. Milton seems to hesitate in 'Paradise Lost' (book viii.), which was written after 1640, though he had visited Galileo in Florence in 1638, where, no doubt, Galileo proved the Copernican theory to him by word of mouth. At all events he thoroughly understood it as his description of the earth
. . . that spinning sleeps
On her soft axle, while she paces even
And bears thee soft with the smooth air along,
abundantly proves, since in the last line one of the chief objections to the theory is answered.
The heliocentric theory gained powerful auxiliaries in Moestlin, professor of astronomy at Tübingen, and in his pupil Kepler. In 1588 Moestlin printed his 'Epitome,' in which the mobility of the earth is denied; but he accepted the new views probably as early as 1590. Kepler writes: "While I was at Tübingen, attending to Michael Moestlin, I was so delighted with Copernicus, of whom he made great mention in his lectures, that I not only defended his opinions in our disputations of the candidates, but wrote a thesis concerning the first motion which is produced by the revolution of the earth." In 1596 Moestlin, in a published epistle, expressly adhered to the heliocentric theory of the world.
Luther emphatically declared his opinion of the Copernican theory on several occasions. He calls Copernicus 'that fool' who is trying to upset the whole art of astronomy; and refers to Joshua's command that the sun should stand still as a proof that the earth could not possibly be the moving member of the system. Melanchthon, a far more learned man, declared that the authority of scripture was entirely against Copernicus. The attitude of the Roman Church was more indifferent at that time, not more tolerant. Tolerance comes with enlightenment; and both protestant and catholic doctors were, in general, profoundly ignorant of science. When we are thinking of the attitude of the church we must remember that the conflict with Galileo had not arisen. Calvin quotes the first verse of the ninety-third Psalm
—The World also is established, that it can not be moved
and says: 'Who will venture to place the authority of Copernicus above that of the Holy Spirit?'
Such dicta of great theologians are often quoted to demonstrate the existence of an age-long conflict between science and religion. So to interpret them is a sad misconception of the real warfare that has occupied mankind for ages. The veritable conflict has been between ignorance and enlightenment, not in one field only, but in all conceivable spheres.
Before there can be fruitful discussion the 'universe of discourse' must be defined. Things of a like kind can alone be compared. The world of science relates and refers to material things moved by physical forces; and only to these. The world of religion relates and refers only to immaterial things moved by spiritual energies. These worlds are wide apart now. They were widely separated even in the sixteenth century, and they were entirely divided for the highest thinking men even in the middle ages. In either world conflicts are possible. They can only take place between ideas of the same kind; between religion and heresy, or between science and pseudo-science. Theologians decide the issue in one world; men of science in the other. It is the business of philosophers to define and discuss the limits of each world in turn; to determine the validity of conclusions. It is the privilege of poets harmoniously to express imagined analogies between the action of spirit on spirit and of force on matter. It is the dream of seers and prophets to synthesize such analogies into a single system, mingling two universes into one. Whatever may be our hope for the future, the synthesis has not yet been achieved. Theologians have essayed it from one direction, philosophers from another, but the essential distinction remains untouched. There is a world of matter; there is a world of spirit. Men live in both. Their actions are ruled by different and discrepant laws. In the world of spirit the good man is safe and happy, no matter what fate may befall him in the world of physical phenomena. In the latter world no virtue will save the man who transgresses its especial laws. Gravitation, and not goodness, decides whether his falling body suffers harm or is preserved alive.
To Calvin the pronouncement of Copernicus was sheer blasphemy. It seemed to him to lie entirely within the sphere of religion. Judged by the accepted standards of that sphere it was audacious heresy. To Kepler the law of Copernicus lay entirely within the sphere of science. It was to be accepted as true, or rejected as pseudo, science entirely by scientific criteria. Calvin's words fell within one universe of discourse, Kepler's in another. There was no conflict between religion and science as such. Calvin sat as judge of a conflict between religion and a possible heresy. Kepler asked himself if this new assertion was substantial truth or merely error masquerading in a scientific form. Phenomena can not be judged by criteria belonging to a world to which they are foreign. It is in a light like this that we must examine the relations of such men as Copernicus and Galileo to their times.
The Lateran council (1512–17) appointed a committee to consider the much needed reform of the Church calendar, and in 1514 the help of Copernicus was asked—a proof that he was not only remembered in Rome, but that his reputation had grown since his residence there. He declined to give advice, for the reason that the motions of the sun and moon were, as yet, too imperfectly known. At the request of the chief of the committee, Copernicus continued his researches on the length of the tropical year—a fundamental datum.
In November, 1516, the quiet life of Copernicus at Frauenburg was broken up by his appointment as Administrator bonorum communium at Allenstein. The appointment was for one year, but the administration of Copernicus was so successful that he occupied the post during the years 1516–19 and again in 1520–21. His manifold duties in this place brought him again into conflict with the Teutonic knights. The interests of the order and of the church in Ermeland were totally antagonistic. At times open hostilities occurred and towns were besieged, taken and plundered. It is not necessary to follow this harassing strife into the details of Prussian and Polish politics. It is recounted in history as the Fränkischer Reiterkrieg. In 1521 Copernicus, then the recognized head of his chapter, was selected to draw up a statement of grievances against the order to be laid before the estates of Prussia. The lands of the chapter of Frauenburg had been overrun, the towns and villages plundered, the peasants had fled or had been killed. The castle of Allenstein, the residence of Copernicus, was itself in danger until it was saved by a four years' truce concluded at Thorn. In such stormy times astronomy was not to be thought of.
It was at this period that Copernicus composed, at the request of the Prussian estates, a memorial on the debasement of the coinage of the country and on the remedies to be adopted. "Money," he says, "is a measure, and like all measures it must be constant in value. What would one say to a yard or a pound whose values could be changed at the will of the measure-makers? The value of money depends not on the stamp it bears, but on the value of the fine metal it contains." Nothing could be clearer than this. His conclusions on the effects of a debased currency on the interests of landlord and tenant are not so sound. Copernicus also proposed to coin all the money of Prussia at a single mint, forbidding the towns to use their ancient privileges, which had been abused. This proposal, as well as others made in the years 1521-30, failed chiefly because Dantzig and other towns were not willing to relinquish vested rights. It is interesting to note that in his memorial of 1526 he sets the ratio of gold and silver as 1 to 12.
Bishop Fabian died in 1523. During the ensuing vacancy Copernicus was chosen administrator of the diocese. His duties were harassing. The troops of the order encroached more and more on the church holdings. The Lutheran heresy was also a source of anxiety. The steps taken by the administrator were marked by great tolerance. Before the preaching of the new faith was forbidden outright it was enjoined that it should be refuted by argument. A new bishop, Mauritius Ferber, was chosen in 1523, and a word must be said of the bishop's nephew and coadjutor, Tiedemann Giese. Born in 1480, he became canon of Frauenburg about 1504, and was the intimate and affectionate friend of Copernicus during the whole of his life. It was to him that Copernicus confided the manuscript of his great work in 1542. Bishop Ferber died in 1537, and Bishop Dantiscus of Culm was chosen in his place, while Giese by a compromise became bishop of Culm.
The last observation recorded by Copernicus in the 'De Revolutionibus' is dated 1529. From this we may infer that his great work was essentially completed at that time, though it was repeatedly revised afterwards. It had been begun twenty-three years earlier. It was not published until 1543, though its doctrines had been freely communicated to scholars and friends. In 1531 a set of strolling players, set on, it is said, by his enemies among the Teutonic knights and among the Lutherans, gave a little show at Elbing ridiculing the notion that the earth moved round the sun. The play was devised by a certain Dutchman who afterwards became rector of the gymnasium at Elbing. That its satire was understood by the common people proves the opinions of Copernicus to have been fairly well known by his neighbors even at that epoch when absolutely nothing had been printed concerning them. About 1530 a manuscript commentary on the hypotheses of the celestial motions had been prepared by Copernicus for private circulation among men of science in advance of the publication of 'De Revolutionibus.' Two copies of this manuscript still exist, one at Vienna, one at Upsala. At the end of it a résumé of his new doctrine is given in seven axioms. (I.) There is only one center to the motions of the heavenly bodies; (II.) this is not the earth about which the moon moves, but (III.) it is the sun; (IV.) the sphere of the fixed stars is indefinitely more distant than the planets; (V.) the diurnal motion of the sun is a consequence of the earth's rotation; (VI.) the annual motion of the sun and (VII.) the motions of the planets are, primarily, not due to their proper motions.
In 1533 Copernicus was sixty years old and applied for a coadjutor. His duties were, at this time, made light for him. In 1532 an observation of Venus is recorded. Other observations were made in 1537. In 1533 he observed the comet of that year. It may be surmised (his memoir on the comet is not extant) that the retrograde motion of this heavenly body confirmed in his mind his criticisms of the system of Ptolemy.
The theory of Copernicus began to be known in Rome, and it was well received. In 1533 Widmanstad, secretary to Pope Clement VII., gave a formal explanation of the heliocentric theory of Copernicus to the pope and to an audience containing several cardinals and bishops. There is no doubt that the theory was received with interest. There is no sign of opposition, and Widmanstad subsequently obtained high honors in the church. The attitude of the Lutherans was, as we have seen, very different. The cardinal-bishop of Capua wrote in 1536 to Copernicus begging him for an explanation of his system.
In 1537 Dantiscus became bishop of Ermeland. All the canons of Frauenburg, Copernicus included, supported his nomination. Copernicus was known, however, to be a warm friend of Giese, who should have succeeded, as coadjutor, to his uncle's bishopric, but who was elected to that of Culm by a compromise. Difficulties soon arose between Copernicus and his new bishop, and the breach was widened in various ways. The bishop, himself a man of loose morals, ordered Copernicus to send away his housekeeper, on the assumption of illicit relations between the two, and kept the accusation alive by various official letters. Bishop Dantiscus oppressed Copernicus in various ways and remained his enemy in spite of certain advances on the part of the latter. If Copernicus ever feared the persecution of the church on account of his scientific teaching—of which there is little evidence—it was because his bishop stood ready to use every and any weapon against him.
Copernicus gained an ardent disciple in George Joachim of Rhaetia, known to us as Rheticus. He was born in 1514 and made his studies at Nuremberg under Schoner to such effect that he was appointed to be professor of mathematics at the University of Wittenberg in 1537, at the age of twenty-three. In May, 1539, he visited the great astronomer of Frauenburg chiefly to study his doctrines of trigonometry, and his trigonometric tables. Copernicus was then sixty-six years of age and his enthusiastic and loyal guest was twenty-five. He was received cordially and at once set himself to study the manuscripts of Copernicus. His visit extended itself from a few weeks to more than two years, and he became a firm believer in the new heliocentric astronomy, which he was well prepared to receive and to expound.
A letter from Rheticus, written a few months after his arrival at Frauenburg, affords one of the very few personal views of Copernicus that have come down to us. The letter was published with a long Latin title, in 1540, and is known as 'Narratio Prima.' "I beg you to have this opinion concerning that learned man, my preceptor: that he had been an ardent admirer and follower of Ptolemy; but when he was compelled by phenomena and demonstration, he thought he did well to aim at the same mark at which Ptolemy had aimed, though with a bow and shafts of very different material from his. We must recollect what Ptolemy has said: 'He who is to follow philosophy must be a freeman in mind.'" "My preceptor was very far from rejecting the opinions of ancient philosophers from love of novelty, and except for weighty reasons and irresistible facts. His years, his gravity of character, his excellent learning, his magnanimity and nobleness of spirit are very far from any such temper (of disrespect to the ancients)." This letter, addressed by Rheticus to his old master Schoner, was the first easily accessible account of the new theory. The life-giving sun, he says, is placed in its appropriate place, and a single motion of the earth explains all the planetary motions. All is harmony as if they were bound together with a golden chain. He praises the great simplicity and reasonableness of the new doctrine, as well as the almost divine insight and the uncommon diligence of the master. He had formerly no idea, he says, of the immense labor required in such works, and the example of Copernicus leaves him in astonishment. Copernicus had made a complete collection of all known astronomical observations, and by these his theory was tested. The master was not content until every hypothesis had been fully proved.
Rheticus showed his admiration for Copernicus not only in these public, but also in private, ways. Books that he presented to the master (which are often annotated by Copernicus's own hand) are still to be found in various libraries of Sweden, where they were taken after the plundering of Ermeland in the thirty years' war. At Wittenberg Rheticus and his colleague Reinhold, Copernicans both, were by the conditions of their professorships obliged to teach the Ptolemaic system, just as Galileo, at Padua, a Copernican, had to confine himself to the exposition of Sacrobosco. It may safely be surmised, however, that their pupils did not leave them without hearing something of the true doctrines. In the 'Narratio,' Rheticus, who was a firm believer in astrology, uses the data of the 'De Revolutionibus' as bases for wide-reaching astrological predictions. They are of no interest in themselves, but as the letter was written under the eye of Copernicus, they lead to the conclusion that they were not disapproved by the latter. So far as I know, this is the only evidence for the belief of Copernicus in astrology. We have no horoscopes from his hand but, like all his contemporaries, he probably gave it a place among the sciences.
Rheticus deserves the gratitude of all calculators for his table of trigonometric functions (sines, tangents, secants) to ten decimal places, for every 10″ of the quadrant, published in a huge volume by his pupil, Otho, under the title 'Opus Palatinum de Triangulis.' The tables of Rheticus are the basis upon which Vlacq founded his great tables, and they have served as models for many followers. Lansberg's tables appeared fifteen years after the 'Opus Palatinum' and lightened the immense labors of Kepler.
Toward the end of the year 1541 Rheticus returned to Wittenberg carrying with him a part of Copernicus's manuscript—a treatise on 'Trigonometry'—which he printed in 1542. The complete manuscript of the 'De Revolutionibus' was sent by Copernicus to his old friend Giese, the bishop of Culm, for such disposition as he thought best. The bishop sent it to Rheticus to arrange for its printing at Nuremberg, and to see it through the press. It fell out that the printing had to be confided to Andreas Osiander, a Lutheran minister interested in astronomy. The book was published early in 1543, and a copy reached Copernicus on May 24, the very day of his death.
Osiander prefixed to the volume an introductory note which he did not sign, as follows:
Scholars will be surprised by the novelty of the hypothesis proposed in this book, which supposes the earth to be in motion about the sun, itself fixed. But if they will look closer they will see that the author is in no wise to be blamed. The aim of astronomy is to observe the heavenly bodies and to discover the laws of their motions; the veritable causes of the motions it is impossible to assign. It is consequently permissible to imagine causes, arbitrarily, under the sole condition that they should represent, geometrically, the state of the heavens, and it is not necessary that such hypotheses should be true, or even probable. It is sufficient that they should furnish positions that agree with observations. If astronomy admits principles, it is not for the purpose of affirming their truth, but to give a certain basis for calculation.
The best authorities affirm that Osiander's apology, which he had suggested to Copernicus as early as 1540, was unauthorized.
Osiander made many changes in the text also, and added the last two words of the title under which the book was printed—'De Revolutionibus Orbium Cœlestium.' Readers of our day universally interpret the apology to be an attempt to forestall theological opposition and persecution. They remember the conflict of Galileo with the church. But Osiander was a protestant divine, Copernicus a catholic priest. It is passing strange to conceive that a Lutheran schismatic should intervene to shield an orthodox catholic from accusations of heresy. Moreover, Copernicus had good reasons for believing that the princes of the church would receive his work favorably. His doctrine had been known to them since 1530. He knew, however, that several powerful university teachers—Fracastor for one—opposed it. Ought we not to interpret the apology as an address to men of science? Whewell justly remarks that Copernicus seems to consider the opposition of divines as a 'less formidable danger' than that of astronomers. It is difficult to admit that Osiander dared to prefix this note without the authorization of Copernicus, or, at least, of Rheticus. There seems to be no reason to doubt that it was addressed solely to men of science.
The words of the apology represent the exact point of view of the ancients, and are entirely opposed to the attitude of modern science. Centuries of experience have taught the modern world that there is one and only one solution to a scientific problem. Modern science is a search for such unique solutions. Anything less definite is an hypothesis to be held tentatively and temporarily, it may be even alternatively with another, or others. The theories of the Greek philosophers were, in general, held by them primarily as hypotheses. Their whole attitude towards scientific certainty was thus entirely different from our own. In the time of Copernicus the minds of most men were cast in the ancient temper. It is, in fact, from his century that the new insight dates. This is not to say that colossal geniuses like Archimedes or Roger Bacon did not work in what we call the modern spirit. It is simply to confirm that most of the contemporaries of Copernicus belonged, in this respect, to the ancient world. The apology expressed exactly their attitude. The attitude and temper of the modern world are entirely different; they are perfectly formulated in these words of Pascal: "Ce n'est pas le décret de Rome sur le mouvement de la terre qui prouvera qu'elle demeure en repos; et, si l'on avait des observations constantes qui prouvassent que c'est elle qui tourne, tous les hommes ensembles ne l'empêcheraient pas de tourner, et ne s'empêcheraient pas de tourner avec elle."
It required this very book of Copernicus to suggest the pregnant phrase of Pascal.
In the letter of dedication to the Pope—Paul III.—Copernicus speaks in his own name. His words are simple and serious, full of dignity and conviction:
I dedicate my book to your Holiness in order that both learned men and the ignorant may see that I do not shrink from judgment and examination. If perchance there be vain babblers who, knowing nothing of mathematics, yet assume the right of judging on account of some place of Scripture perversely twisted to their purpose, and who blame and attack my undertaking, I heed them not and look upon their judgments as rash and contemptible.
He is here referring to divines. The following is addressed to astronomers.
Though I know that the thoughts of a philosopher do not depend on the judgment of the multitude, his study being to seek out truth in all things so far as is permitted by God to human reason, yet when I considered how absurd my doctrine would appear I long hesitated whether I should publish my book, or whether it were not better to follow the example of the Pythagoreans and others who delivered their doctrine only by tradition, and to friends.
The doctrine of Copernicus was first formally judged by the Roman Church in 1615 when Galileo was before the Inquisition in Rome. The judgment was in these terms:
The first proposition, that the sun is the center and does not revolve about the earth, is foolish, absurd, false in theology, and heretical, because expressly contrary to Holy Scripture.
The second proposition, that the earth revolves about the sun and is not the center, is absurd, false in philosophy and, from a theological point of view at least, opposed to the true faith.
In the year 1616 the works of Copernicus were placed upon the Index 'until they should be corrected,' and 'all writings which affirm the motion of the Earth' were condemned at the same time. The congregation issued a notice to its readers in 1620, thus conceived:
Although the writings of Copernicus, the illustrious astronomer, on the revolutions of the world have been declared completely condemnable by the Fathers of the Sacred Congregation of the Index, for the reason that he is not content to announce hypothetically certain principles concerning the situation and motion of the earth, which principles are entirely contrary to the sacred Scripture, and to its true and Catholic interpretation (which can absolutely not be tolerated in a Christian man) but dares to present them as indeed true; nevertheless, because this book contains things very useful to the republic, it has been unanimously agreed that the works of Copernicus ought to be authorized, so far printed, as they previously have been authorized, correcting, however, according to the following notes, the passages in which he does not express himself hypothetically, but affirmatively maintains the motion of the earth; but those which, in future, will be printed must not be so printed save with the following corrections, which are to be placed before the preface of Copernicus.
The corrections follow; they are not numerous or important.
The works of Copernicus were still on the Index in the year 1819. In the following year Pope Pius VII. approved a decree of the Congregation of the Holy Office that the Copernican system, as established, might be taught, and in 1822 'the printing and publication of works treating of the motion of the earth and the stability of the sun, in accordance with the general opinion of modern astronomers, is permitted at Rome.' Centuries before this date the real question had been judged; but its formal settlement in the Roman Church was postponed to our own day.
The judgments of the Congregation of the Index upon the heliocentric theory were an incident in the history of the relations of Galileo with the authorities at Rome, and they can best be understood in connection with that history. Something, however, may be said of them here. It is to be observed that the first proposition is condemned because it is contrary to scripture, heretical, false in theology, absurd and foolish; and the second because, from a theological point of view it is opposed to the true faith, false in philosophy and absurd. The words not in italics relate to judgments upon points of doctrine. The words in italics relate to judgments upon matters of philosophy or of science.
It was entirely competent for the Congregation of the Index to render decisions upon matters of theology which were binding upon all catholics. The committee was organized and existed for that purpose. Every institution, religious or secular, must decide for itself on matters of the sort. Not to do so is sheer suicide. The competence of the Roman church and of the Congregation of the Index to decide for itself questions of what is opposed to its faith, contrary to scripture, false in theology, is not to be denied. This was a conflict of theology with an alleged heresy. Copernicus was a member of the Roman Church. The soundness of his theological opinion was a matter for doctors of theology to settle in their own church in their own way. They did not decide it, however, until they had taken the advice of astronomers who pronounced the heliocentric theory to be baseless. (Delambre, 'Astronomie moderne,' i., p. 681.) Tycho Brahé, also—a great authority—had declared it to be 'absurd and contrary to the scriptures.' These two points are often forgotten by writers of the Martyr-of-Science School.
On the other hand, no one can admit for a moment the right or the competence of the Congregation or of the Church to pronounce final judgment upon a question of philosophy or of science. The whole world is now agreed that it is an impertinence for a body of theologians to pronounce upon a question of science, precisely as it would be for a congress of scientific men to pronounce upon a point of theology.
The reasons that led the Congregation of the Index to take this fatal step must be considered in connection with the history of Galileo. It will not be out of place here, however, to attempt to understand the mistaken point of view of the churchmen responsible for the decision.
For fourteen hundred years the theory of Ptolemy had ruled. In 1543 Copernicus proposed a new and revolutionary system. In its essential point the system was true, as we know now; we also know that it was false in asserting that the planets moved in circular orbits (they really move in ellipses), in accepting trepidation as an incident to precession, and in other matters of the sort. It even asserted, falsely, that the center of the orbit of the earth and not the sun was the center of planetary motion, so that in a strict sense it was not even a heliocentric theory. The theory of Copernicus was not proved to be true, in its essential feature, until Galileo discovered the phases of Venus, in 1610. Is it any wonder that doctors of the church five years afterwards were not convinced? They were profoundly ignorant of science and not in the least interested in science as such. Any one of them could recollect that Tycho Brahé, the greatest astronomer of his time, had in 1587 made a theory of the world which placed the earth at its center. He, then, did not agree with the theory of Copernicus. He expressly rejected it. It could easily be recollected, also, that in 1597 Kepler had proposed his first theory of the world, in which the planets were arranged according to fanciful and false analogies with the shapes of the five regular solids of Plato. It is now known that the systems of Tycho and of Kepler were both false. Ought the church doctors to have accepted them when they were proposed? In 1609 Kepler proposed a second theory of the world based on elliptic and heliocentric motion. How could the doctors know that this second system was the true one, as indeed it was? Kepler was still alive. How could they know that he would not propose a third theory? They had seen the doctrine of Ptolemy denied by Copernicus; the doctrine of Copernicus denied by Tycho; the doctrine of Tycho denied by Kepler's first system; the doctrine of Kepler's first replaced by that of his second system. All this had occurred within their own memories. In scientific theories as such they had no interest whatever; they were solely concerned for religion. Is it surprising that they did not promptly accept a theory which they did not understand?
It was, however, a profound and inexcusable error for them to condemn it; and by so doing they, unwittingly, dealt a heavy blow to the church. For once, theology engaged in a warfare with science; and the issue was an overwhelming and deserved victory for science. There have not been many such conflicts. Very exceptional conditions are required to bring them about, as may be seen in the long history of Galileo.
It is very difficult to form a vivid conception of the whole character of Copernicus either from his works or from his portraits. We know far too little of his history and too little of the time in which he lived. I have found no summary in any of his biographies that can be called satisfying and I have never been able to make one for myself. I venture to reprint that of Bertrand, and to enclose in parentheses those parts that we positively know to need modification or correction.
Copernic est pour nous tout entier dans son livre. Sa vie intime est mal connue. Ce qu'on en sait donne l'idée d'un homme ferme, mais prudent, et d'un caractère parfaitement droit; tout entier à ses spéculations et comme recueilli en lui-même; il aimait la paix, la solitude, et le silence. Simplement et sincèrement pieux, il ne comprit jamais que la vérité pût mettre la foi en péril, et se réserva toujours le droit de la chercher et d'y croire. Aucune passion ne troubla sa vie; (on ne lui connaît même pas de commerce affectueux et intime[2]); ennemi des discours inutiles, il ne rechercha ni les éloges ni le bruit de la gloire; indépendant sans orgueil, content de son sort et content de lui-même, il fut grand sans éclat, et, ne se révélant qu'à un petit nombre de disciples choisis, il a accompli une révolution dans la science (sans que, de son vivant, l'Europe en ait rien su).[3]
The system of Copernicus belongs to him alone. It is not the system of Philolaus or of Aristarchus. . . but his own. His name is justly attached to it on account of the care with which he explained its every part, brought out all its phenomena, discovered the causes of these precessional movements which had been known for eighteen hundred years, and explained only by the hypothetical existence of an eighth sphere which made a revolution in 36,000 years around the axis of the ecliptic, while, at the same time, it was constrained to turn daily about the axis of the equator to account for the rising and setting of the stars. It is then Copernicus who really introduced the motion of the earth into astronomy, not merely into academic disputations; it is he who demonstrated how the revolution of the earth about the sun explained the succession of the seasons and the precession of the equinoxes; it is he who showed how simply the retrogradations of the planets are explained by the unequal velocity with which they traverse their concentric orbits about the sun; it is he who put astronomy on new foundations and who opened the way for all later researches. It is to Kepler's enthusiasm over the new truths that we owe the discovery of the true shape of the planetary orbits, and the laws of their motion. The idea of the motion of the earth was unfruitful among the ancients because it was never entertained with seriousness. Its adoption by Copernicus is the beginning of modern astronomy. (Delambre).
The mountain peaks that cluster closely round the Lick Observatory in California are of different heights and were unnamed when the corps of observing astronomers took possession of the newly established station. Names were assigned to them in the order of their heights—Copernicus, Galileo, Kepler, Tycho and Ptolemy. One of the staff of observers, who greatly distinguished himself during his short career at the observatory, objected to the assignment of the name of Copernicus to the highest peak. Copernicus was, no doubt, a great astronomer, he said, but was he preeminent? Should not the highest peak have been assigned to another? The objection is answered the moment the relation of Copernicus to the whole thought of the world is comprehended. His skill as a mere observer, his power as a mere geometer, is not in question. His place is not to be assigned by narrow criteria like these. What was the attitude of man towards everything not himself before the day of Copernicus? towards things divine, things spiritual, things natural? What is his view of the world now? The changes are so fundamental, extensive and bewildering as not to be described, much less estimated, except by a long series of separate steps, each one opening new worlds in religion, philosophy, science, art, technics. To name them all would be to summarize the entire history of human progress for three hundred and fifty years. In the long stairway of ascent Copernicus established the foundation stone. Tycho, Kepler, Galileo, Newton, Kant, Laplace, Herschel, Darwin (to speak only of men of science) each laid successive steps upon it. Until the first was firmly laid no building, no advance, was possible. We stand to-day in a high place of vantage won for us by the master builders of more than three centuries. Without Copernicus their work would have been in vain. The modern world is erected upon foundations that he laid.
[1] The central fire of Philolaus was, however, not the sun; for in his theory the earth, the sun, the moon and all the planets revolved about a fire so placed at the center of the system as to be forever invisible to the earth.
[2] His relations with his uncle and with Giese were both affectionate and intimate; those with the young Rheticus were ideal, considering their ages.
[3] From the year 1514 onwards his name was widely known among the circles of the learned, and his theories were circulated as early as 1530.
How much would a 1 foot tall human weigh?
Trying to figure out how much a 1 foot tall fairy would realistically weigh using these 2 guidelines
Fairies are just scaled down humans.
Their bones are not hollow because their flight is assisted by magic.
biology fantasy-races scaling
Samirah
Are you specifically referring to the square-cube law in your question? If not, then please elaborate further on the restrictions being placed. – Andrew Fan Sep 14 '19 at 0:03
Small beings usually have very thin extremities, compared to tall ones. Consider e.g. the legs when comparing the mouse against the elephant. Please elaborate on the comment from @AndrewFan – hitchhiker Sep 14 '19 at 19:54
For uniformly scaled down humans (as opposed to real-life short humans), the result would simply be:
$$m_{fairy} = M_{human} * (\frac{H_{fairy}}{H_{human}})^3$$
Assuming the fairy is 1 foot tall and her real life prototype is 5'6" and 120 lbs we get 0.72 pounds or 11.5 ounces.
Alexander
What if her prototype is Cara Delavingne? – Harper - Reinstate Monica Sep 14 '19 at 22:38
@Harper Cara Jocelyn Delevingne is reportedly 173 cm tall and 51 kg light. Putting it to the formula gives 279 g, or 9.8 ounces. – Alexander Sep 14 '19 at 23:43
Comparison with other humans
We can almost look at a real live example: https://www.oddee.com/item_97186.aspx
Edward Nino Hernandez is about 70 cm tall (~2 feet) and weighs 10 kg. We can actually use this to test the square-cube law proposed in other answers:
$$m_\text{fairy} = 80\ \mathrm{kg} \cdot \left(\frac{0.7\ \mathrm m}{1.8\ \mathrm m}\right)^3= 4.7\ \mathrm{kg}$$
So the square-cube estimate is off by a factor of about 2: it predicts 4.7 kg against his actual 10 kg.
Comparison with monkeys
Let's have a look at monkeys: https://en.wikipedia.org/wiki/Tamarin
The Tamarin can grow up to $30\ \mathrm{cm}$, which is just about 1 foot, and the heaviest specimens weigh up to $0.9\ \mathrm{kg}$ (other units can be found in the article).
The squirrel monkey (https://en.wikipedia.org/wiki/Central_American_squirrel_monkey) also grows up to about $30\ \mathrm{cm}$ and has a maximum weight of about $0.95\ \mathrm{kg}$
Comparison with a penguin
Another animal I could think of that is roughly that size is the little penguin (https://en.wikipedia.org/wiki/Little_penguin): $1.5\ \mathrm{kg}$
So in conclusion I would estimate the human to weigh round about $1\ \mathrm{kg}{-}1.5\ \mathrm{kg}$, which is just a little over $2\ \mathrm{lbs}{-}3\ \mathrm{lbs}$.
infinitezero
I upvoted this answer just for sweet metric units! – polfosol Sep 14 '19 at 13:25
Why was this downvoted? Please clarify – infinitezero Sep 14 '19 at 14:28
Well, if they are literally just humans of the exact same proportions, but scaled up or down, we can use the Square-Cube Law to figure it out in both cases.
The skinny version is, if I understand this correctly, take this equation:
V2 = V1 ( l2 / l1 )^3
where V2 is your new Volume, V1 is the original Volume, l2 is the new length and l1 is the original length, and assume for simplicity's sake that volume exactly correlates with mass, and therefore weight.
So if a reasonably well-fed human is 6ft tall and 180lbs, then an exact scaled-up giant version at 12ft tall would be 2x the height, and therefore the weight is 180(12/6)^3, or 1,440 lbs. That's a lot.
Turning this around, if this 6ft, 180lbs human is scaled down to 1ft tall, then we're looking for 180(1/6)^3, which is about 0.83 lbs.
So your fairies would weigh less than one pound each, with the exception of some who are enormous by fairy standards.
You can use this to get a rough estimate of weights for all sorts of creatures, big or small. Take an animal that looks the most like what you want to make, plug in its bodily proportions, and presto you have a rough idea of how much the new version should weigh. You'd be surprised just how heavy your giants are and how light the dwarfs are.
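To make the arithmetic easy to reuse for other creatures, here is a minimal Python sketch of the same square-cube calculation (the function name and the example figures are mine, taken from the numbers used in these answers):

```python
def scaled_mass(base_mass, base_height, new_height):
    """Isometric (square-cube law) scaling: mass grows with the cube of height."""
    return base_mass * (new_height / base_height) ** 3

print(scaled_mass(180, 6, 12))   # 1440.0 lb for the 12 ft giant
print(scaled_mass(180, 6, 1))    # ~0.83 lb for the 1 ft fairy
print(scaled_mass(120, 5.5, 1))  # ~0.72 lb, matching the other answer's 11.5 oz
```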
Maddock Emerson
A 30cm fairy would need a lot less muscle, relatively, than a normally sized person. (Note that with a normal amount of muscle, they'd be able to jump nearly as high in absolute terms as a big person.) They'd probably look quite skinny, and weigh less than the square-cube law would suggest, maybe 0.3kg.
thsths
Other answers have scaled the person's mass by the cube of their height, and got answers of about 13 ounces. This is probably a lower bound; it assumes that a human brain can fit into a space slightly larger than a teaspoonful.
The theory of "Body Mass Index" (BMI) is that people have the longest life when their mass is roughly proportional to the square of their height. If we start with 6 feet = 180 pounds (a BMI of 24.4 kg/m²), we can extrapolate this to 1 foot = 5 pounds. This is probably an upper bound; it allows a few cubic inches for the brain.
An elliptical cylinder of water with a width of 5.4 inches, a height of 12 inches, and a depth of 2.7 inches would have a mass of five pounds. The ellipse's perimeter would be 13 inches, which is quite stout. (6 * 13" is a 78" waist!)
An elliptical cylinder of water with a width of 2.2 inches, a height of 12 inches, and a depth of 1.1 inches would have a mass of 13 ounces, and a BMI of 4 kg/m². The ellipse's perimeter would be 5.3 inches, which is scaled down from a 32 inch waist.
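A quick way to sanity-check these cylinder figures is to compute them directly. The following Python sketch (helper names are mine) uses a water density of about 0.0361 lb per cubic inch and Ramanujan's approximation for the ellipse perimeter:

```python
import math

def cylinder_mass_lb(width_in, depth_in, height_in):
    """Mass of an elliptical cylinder of water, in pounds (~0.0361 lb per cubic inch)."""
    area = math.pi * (width_in / 2) * (depth_in / 2)
    return area * height_in * 0.0361

def ellipse_perimeter_in(width_in, depth_in):
    """Ramanujan's approximation for the perimeter of an ellipse."""
    a, b = width_in / 2, depth_in / 2
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

print(cylinder_mass_lb(5.4, 2.7, 12))        # ~5.0 lb
print(ellipse_perimeter_in(5.4, 2.7))        # ~13 in around the "waist"
print(cylinder_mass_lb(2.2, 1.1, 12) * 16)   # ~13 oz
print(ellipse_perimeter_in(2.2, 1.1))        # ~5.3 in
```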
Jasper
Going by BMI is a recipe to estimate a real-life dwarf. If you try to picture your water cylinders, at 1 foot tall we would get a person of cartoon proportions (which might be Ok). – Alexander Sep 14 '19 at 2:22
BMI is entirely the wrong tool here. It doesn't accurately approximate either actual human scaling or theoretical square-cube scaling. – Mark Sep 14 '19 at 23:39
Digital Circuits/Representations
1 Quantity vs Numbers
2 Binary Numbers
3 Bits
3.1 Most Significant Bit and Least Significant Bit
4 Standard Sizes
5 Negative Numbers
5.1 Sign and Magnitude
5.2 One's Complement
5.3 Two's Complement
6 Signed vs Unsigned
7 Character Data
7.1 ASCII
7.2 Extended ASCII
7.3 UNICODE
7.4 EBCDIC
8 Octal
9 Hexadecimal
10 Hexadecimal Notation
11 For further reading
Quantity vs Numbers
An important distinction must be made between "Quantities" and "Numbers". A quantity is simply some amount of "stuff"; five apples, three pounds, and one automobile are all quantities of different things. A quantity can be represented by any number of different representations. For example, tick-marks on a piece of paper, beads on a string, or stones in a pocket can all represent some quantity of something. One of the most familiar representations is the base-10 (or "decimal") numbers, which consist of 10 digits, from 0 to 9. When more than 9 objects need to be counted, we make a new column with a 1 in it (which represents a group of 10), and we continue counting from there.
Computers, however, cannot count in decimal. Computer hardware uses a system where values are represented internally as a series of voltage differences. For example, in most computers, a +5V charge is represented as a "1" digit, and a 0V value is represented as a "0" digit. There are no other digits possible! Thus, computers must use a numbering system that has only two digits (0 and 1): the "Binary", or "base-2", number system.
Binary Numbers
Understanding the binary number system is difficult for many students at first. It may help to start with a decimal number, since that is more familiar. It is possible to write a number like 1234 in "expanded notation," so that the value of each place is shown:
$$1234_{10} = 1 \times 10^{3} + 2 \times 10^{2} + 3 \times 10^{1} + 4 \times 10^{0} = 1000 + 200 + 30 + 4$$
Notice that each digit is multiplied by successive powers of 10, since this is a decimal, or base 10, system. The "ones" digit ("4" in the example) is multiplied by $10^{0}$, or "1". Each digit to the left of the "ones" digit is multiplied by the next higher power of 10 and that is added to the preceding value.
Now, do the same with a binary number; but since this is a "base 2" number, replace powers of 10 with powers of 2:
$$1011_{2} = 1 \times 2^{3} + 0 \times 2^{2} + 1 \times 2^{1} + 1 \times 2^{0} = 11_{10}$$
The subscripts indicate the base. Note that in the above equations:
$$1011_{2} = 11_{10}$$
Binary numbers are the same as their equivalent decimal numbers; they are just a different way to represent a given quantity. To be very simplistic, it does not really matter if you have $1011_2$ or $11_{10}$ apples, you can still make a pie.
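The expanded-notation idea translates directly into a few lines of code. This is an illustrative Python sketch (not part of the original text), with Python's built-in conversions shown alongside it as a check:

```python
def to_decimal(digits, base):
    """Evaluate a digit string in the given base using expanded notation."""
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit) * base ** position
    return value

print(to_decimal("1234", 10))  # 1234
print(to_decimal("1011", 2))   # 11
print(int("1011", 2))          # 11, the built-in conversion agrees
print(format(11, "b"))         # '1011', converting back to binary
```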
Bits
The term Bits is short for the phrase Binary Digits. Each bit is a single binary value: 1 or zero. Computers generally represent a 1 as a positive voltage (5 volts or 3.3 volts are common values), and a zero as 0 volts.
Most Significant Bit and Least Significant Bit
In the decimal number 48723, the "4" digit represents the largest power of 10 (or $10^{4}$), and the "3" digit represents the smallest power of 10 ($10^{0}$). Therefore, in this number, 4 is the most significant digit and 3 is the least significant digit. Consider a situation where a caterer needs to prepare 156 meals for a wedding. If the caterer makes an error in the least significant digit and accidentally makes 157 meals, it is not a big problem. However, if the caterer makes a mistake on the most significant digit, 1, and prepares 256 meals, that will be a big problem!
Now, consider a binary number: 101011. The Most Significant Bit (MSB) is the left-most bit, because it represents the greatest power of 2 ($2^{5}$). The Least Significant Bit (LSB) is the right-most bit and represents the least power of 2 ($2^{0}$).
Notice that MSB and LSB are not the same as the notion of "significant figures" that is used in other sciences. The decimal number 123000 has only 3 significant figures, but the most significant digit is 1 (the left-most digit), and the least significant digit is 0 (the right-most digit).
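In code, the MSB and LSB of an n-bit value can be picked out with simple shifts and masks. A small illustrative Python sketch:

```python
def msb(value, width):
    """Most significant bit of a value that is 'width' bits wide."""
    return (value >> (width - 1)) & 1

def lsb(value):
    """Least significant bit."""
    return value & 1

x = 0b101011
print(msb(x, 6), lsb(x))  # 1 1
```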
Standard Sizes
Unit           Length (bits)
Bit            1
Nibble         4
Byte           8
Word           16
Double-word    32
Quad-word      64
Machine Word   depends on the machine
Nibble
A Nibble is 4 bits long. Nibbles can hold values from 0 to 15 (in decimal).
Byte
A Byte is 8 bits long. Bytes can hold values from 0 to 255 (in decimal).
Word
A Word is 16 bits, or 2 bytes long. Words can hold values from 0 to 65535 (in decimal). There is occasionally some confusion between this definition and that of a "machine word". See Machine Word below.
Double-word
A Double-word is 2 words long, or 4 bytes long. These are also known simply as "DWords". DWords are also 32 bits long. 32-bit computers, therefore, manipulate data that is the size of DWords.
Quad-word
A Quad-word is 2 DWords long, 4 words long, and 8 bytes long. They are known simply as "QWords". QWords are 64 bits long, and are therefore the default data size in 64-bit computers.
Machine Word
A machine word is the length of the standard data size of a given machine. For instance, a 32-bit computer has a 32-bit machine word. Likewise 64-bit computers have a 64-bit machine word. Occasionally the term "machine word" is shortened to simply "word", leaving some ambiguity as to whether we are talking about a regular "word" or a machine word.
Negative Numbers
It would seem logical that to create a negative number in binary, the reader would only need to prefix the number with a "–" sign. For instance, the binary number 1101 can become negative simply by writing it as "–1101". This seems all fine and dandy until you realize that computers and digital circuits do not understand a minus sign. Digital circuits only have bits, and so bits must be used to distinguish between positive and negative numbers. With this in mind, there are a variety of schemes that are used to make binary numbers negative or positive: Sign and Magnitude, One's Complement, and Two's Complement.
Sign and Magnitude
Under a Sign and Magnitude scheme, the MSB of a given binary number is used as a "flag" to determine if the number is positive or negative. If the MSB = 0, the number is positive, and if the MSB = 1, the number is negative. This scheme seems awfully simple, except for one simple fact: arithmetic of numbers under this scheme is very hard. Let's say we have 2 nibbles: 1001 and 0111. Under sign and magnitude, we can translate them to read: -001 and +111. In decimal then, these are the numbers –1 and +7.
When we add them together, the sum of –1 + 7 = 6 should be the value that we get. However, if we simply add the two bit patterns, we get 1001 + 0111 = 10000, which in four bits is 0000, or zero.
And that isn't right. What we need is a decision-making construct to determine if the MSB is set or not, and if it is set, we subtract, and if it is not set, we add. This is a big pain, and therefore sign and magnitude is not used.
One's Complement
Let's now examine a scheme where we define a negative number as being the logical inverse of a positive number. We will use the same "!" operator to express a logical inversion on multiple bits. For instance, !001100 = 110011. 110011 is binary for 51, and 001100 is binary for 12. But in this case, we are saying that 110011 = –001100, or 110011 (binary) = –12 decimal. Let's perform the addition again:
  001100 (+12)
+ 110011 (-12)
= 111111 (-0)
We can see that if we invert $000000_2$ we get the value $111111_2$, and therefore $111111_2$ is negative zero! What exactly is negative zero? It turns out that in this scheme, positive zero and negative zero represent the same value.
However, one's complement notation suffers because it has two representations for zero: all 0 bits, or all 1 bits. As well as being clumsy, this will also cause problems when we want to check quickly to see if a number is zero. This is an extremely common operation, and we want it to be easy, so we create a new representation, two's complement.
Two's Complement
Two's complement is a number representation that is very similar to one's complement. We find the negative of a number X using the following formula:
-X = !X + 1
Let's do an example. If we have the binary number 11001 (which is 25 in decimal), and we want to find the representation for -25 in two's complement, we follow two steps:
Invert the numbers:
11001 → 00110
Add 1:
00110 + 1 = 00111
Therefore –11001 = 00111. Let's do a little addition:
  11001 (+25)
+ 00111 (-25)
= 100000
Now, there is a carry from adding the two MSBs together, but this is digital logic, so we discard the carries. It is important to remember that digital circuits have capacity for a certain number of bits, and any extra bits are discarded. Discarding the carry leaves 00000, which is zero, exactly what we expect from 25 + (–25).
Most modern computers use two's complement.
Below is a table showing the representation held by these systems for all four-bit combinations:
Bits   Unsigned   Sign and Magnitude   One's Complement   Two's Complement
0000   0          +0                   +0                 0
0001   1          +1                   +1                 1
0010   2          +2                   +2                 2
0011   3          +3                   +3                 3
0100   4          +4                   +4                 4
0101   5          +5                   +5                 5
0110   6          +6                   +6                 6
0111   7          +7                   +7                 7
1000   8          –0                   –7                 –8
1001   9          –1                   –6                 –7
1010   10         –2                   –5                 –6
1011   11         –3                   –4                 –5
1100   12         –4                   –3                 –4
1101   13         –5                   –2                 –3
1110   14         –6                   –1                 –2
1111   15         –7                   –0                 –1
Signed vs Unsigned
One important fact to remember is that computers are dumb. A computer doesn't know whether or not a given set of bits represents a signed number, or an unsigned number (or, for that matter, any number of other data objects). It is therefore important for the programmer (or the programmer's trusty compiler) to keep track of this data for us. Consider the bit pattern 100110:
Unsigned: 38 (decimal)
Sign+Magnitude: -6
One's Complement: -25
Two's Complement: -26
See how the representation we use changes the value of the number! It is important to understand that bits are bits, and the computer doesn't know what the bits represent. It is up to the circuit designer and the programmer to keep track of what the numbers mean.
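The following Python sketch (illustrative only, not from the original text) makes the same point by interpreting one bit pattern under all four conventions described above:

```python
def interpret(bits):
    """Return the value of a bit string under four different conventions."""
    width = len(bits)
    unsigned = int(bits, 2)
    # Sign and magnitude: the MSB is the sign, the remaining bits the magnitude.
    sign_magnitude = int(bits[1:], 2) * (-1 if bits[0] == "1" else 1)
    # One's complement: a leading 1 means "negative of the bitwise-inverted pattern".
    ones = unsigned if bits[0] == "0" else -(unsigned ^ (2 ** width - 1))
    # Two's complement: subtract 2^width when the MSB is set.
    twos = unsigned if bits[0] == "0" else unsigned - 2 ** width
    return unsigned, sign_magnitude, ones, twos

print(interpret("100110"))  # (38, -6, -25, -26)
```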
Character Data
We've seen how binary numbers can represent unsigned values, and how they can represent negative numbers using various schemes. But now we have to ask ourselves, how do binary numbers represent other forms of data, like text characters? The answer is that there exist different schemes for converting binary data to characters. Each scheme acts like a map to convert a certain bit pattern into a certain character. There are 3 popular schemes: ASCII, UNICODE and EBCDIC.
ASCII
The ASCII code (American Standard Code for Information Interchange) is the most common code for mapping bits to characters. ASCII uses only 7 bits, although since computers can only deal with 8-bit bytes at a time, ASCII characters have an unused 8th bit as the MSB. ASCII codes 0-31 are "control codes", characters that are not printable to the screen and are used by the computer to handle certain operations. Code 32 is a single space (hit the space bar). The character code for the character '1' is 49, '2' is 50, etc. Notice that in ASCII, '2' = '1' + 1 (the character '1' plus the integer number 1). This is difficult for many people to grasp at first, so don't worry if you are confused.
Capital letters start with 'A' = 65 to 'Z' = 90. The lower-case letters start with 'a' = 97 to 'z' = 122.
Almost all the rest of the ASCII codes are different punctuation marks.
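Python exposes these code points directly through ord() and chr(), which makes the "'2' = '1' + 1" remark easy to verify; a tiny illustrative sketch:

```python
print(ord('1'), ord('2'))   # 49 50
print(chr(ord('1') + 1))    # '2'  (the character '1' plus the integer 1)
print(ord('A'), ord('Z'))   # 65 90
print(ord('a'), ord('z'))   # 97 122
print(ord(' '))             # 32, the space character
```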
Extended ASCII
Since computers use data that is the size of bytes, it made no sense to have ASCII only contain 7 bits of data (which is a maximum of 128 character codes). Many companies therefore incorporated the extra bit into an "Extended ASCII" code set. These extended sets have a maximum of 256 characters to use. The first 128 characters are the original ASCII characters, but the next 128 characters are platform-defined. Each computer maker could define their own characters to fill in the last 128 slots.
UNICODE
When computers began to spread around the world, other languages began to be used by computers. Before too long, each country had its own character code sets, to represent their own letters. It is important to remember that some alphabets in the world have more than 256 characters! Therefore, the UNICODE standard was proposed. There are many different representations of UNICODE. Some of them use 2-byte characters, and others use different representations. The first 128 characters of the UNICODE set are the original ASCII characters.
For a more in-depth discussion of UNICODE, see this website.
EBCDIC
EBCDIC (Extended Binary Coded Decimal Interchange Code) is a character code that was originally proposed by IBM, but was passed over in favor of ASCII. IBM however still uses EBCDIC in some of its supercomputers, mainframes, and server systems.
Octal
Octal is just like decimal and binary in that once one column is "full", you move onto the next. It uses the numbers 0−7 as digits, and because the number of available digits is a power of two ($8 = 2^3$), it has a useful property: it is easy to convert between octal and binary numbers. Consider the binary number 101110000. To convert this number to octal, we must first break it up into groups of 3 bits: 101, 110, 000. Then we simply add up the values of the bits in each group:
$$101_2 = 1 \times 2^{2} + 0 \times 2^{1} + 1 \times 2^{0} = 5_8$$
$$110_2 = 1 \times 2^{2} + 1 \times 2^{1} + 0 \times 2^{0} = 6_8$$
$$000_2 = 0_8$$
And then we string all the octal digits together:
$101110000_2 = 560_8$.
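The group-of-three rule is easy to automate. Here is a small illustrative Python sketch (not from the original text), checked against the built-in conversion:

```python
def to_octal(bits):
    """Convert a binary string to octal by grouping bits in threes from the right."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)             # pad to a multiple of 3
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(group, 2)) for group in groups)

print(to_octal("101110000"))      # '560'
print(oct(int("101110000", 2)))   # '0o560', the built-in conversion agrees
```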
Hexadecimal
Hexadecimal is a very common data representation. It is more common than octal, because it represents four binary digits per digit, and many digital circuits use multiples of four as their data widths.
Hexadecimal uses a base of 16. However, there is a difficulty in that it requires 16 digits, and the common decimal number system only has ten digits to play with (0 through 9). So, to have the necessary number of digits to play with, we use the letters A through F, in addition to the digits 0-9. After the unit column is full, we move onto the "16's" column, just as in binary and decimal.
Hex   Decimal   Octal   Binary
A     10        12      1010
B     11        13      1011
C     12        14      1100
D     13        15      1101
E     14        16      1110
F     15        17      1111
Hexadecimal Notation
Depending on the source code you are reading, hexadecimal may be notated in one of several ways:
0xaa11: ANSI C notation. The 0x prefix indicates that the remaining digits are to be interpreted as hexadecimal. For example, 0x1000, which is equal to 4096 in decimal.
\xaa11: "C string" notation.
0aa11h: Typical assembly language notation, indicated by the h suffix. The leading 0 (zero) ensures the assembler does not mistakenly interpret the number as a label or symbol.
$aa11: Another common assembly language notation, widely used in 6502/65816 assembly language programming.
#AA11: BASIC notation.
$aa11$: Business BASIC notation.
aa11₁₆: Mathematical notation, with the subscript indicating the number base.
16#AA11#: VHDL notation for a number.
x"AA11": VHDL notation for a 16 bit array of bits.
16'hAA11: Verilog notation, where the "16" is the total length in bits.
Both uppercase and lowercase letters may be used. Lowercase is generally preferred in a Linux, UNIX or C environment, while uppercase is generally preferred in a mainframe or COBOL environment.
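Whatever the surface notation, the underlying value is the same; for example, in Python (an illustrative sketch, not from the original text) the digits aa11 parse to 43537 regardless of how they are written in a source file:

```python
value = int("AA11", 16)          # parse the bare digits with an explicit base
print(value)                     # 43537
print(value == 0xaa11)           # True, the ANSI C style literal denotes the same number
print(format(value, "X"))        # 'AA11', back to uppercase hexadecimal
print(format(value, "b"))        # '1010101000010001', the underlying bit pattern
```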
For further reading
Floating Point/Fixed-Point Numbers
Evaluating a computerized maintenance management system in a low resource setting
Computer Based Medical Systems
Farah Beniacoub ORCID: orcid.org/0000-0002-2920-54861,
Fabrice Ntwari2,
Jean-Paul Niyonkuru2,
Marc Nyssen3 &
Stefaan Van Bastelaere1
Health and Technology volume 11, pages 655–661 (2021)
This study documents the setup and roll-out of a Computerized Maintenance Management System (CMMS) in Burundi's resource constrained health care system between 1/04/2017 and 31/03/2020. First, in 2017 a biomedical assets ontology was created, tailored to the local health system and progressively mapped onto the international GMDN (Global Medical Devices Nomenclature) and ICMD (International Classification and Nomenclature of Medical Devices) classifications. This ontology was the cornerstone of a web-based CMMS, deployed in the Kirundo and Muramvya provinces (6 health districts, 4 hospitals and 73 health centers).
During the study period, the total number of biomedical maintenance interventions increased from 4 to 350 per month, average corrective maintenance delays were reduced from 106 to 26 days and the proportion of functional medical assets grew from 88 to 91%.
This study proves that a sustainable implementation of a CMMS is feasible and highly useful in low resource settings, if (i) the implementation is done in a conducive technical environment with correct workshops and maintenance equipment, (ii) the active cooperation of the administrative authorities is ensured, (iii) sufficient training efforts are made, (iv) necessary hardware and internet connectivity is available and (v) adequate local technical support can be provided.
Poor management of biomedical equipment and health care infrastructure is a problem of most health systems in low income countries. The most obvious reasons are:
Lack of accurate information: Very few Ministries of Health (MoH) have an accurate asset inventory at national, sub-national or health facility levels (diagnostic equipment, medical rolling stock, buildings, grounds or IT equipment [1, 2]). It is even more difficult to obtain up-to-date information about the functional status, the maintenance history and the maintenance planning of equipment and buildings. This lack of essential information makes adequate planning, management and monitoring of major public investments in health care equipment and infrastructure very difficult[1]: numerous buildings are in dilapidated condition due to lack of maintenance, scrap yards are full of defective but in some cases perfectly repairable medical devices, biomedical equipment is distributed in an irrational way and sometimes cannot be put into operation due to simple lack of electricity or technical knowledge required for installation and configuration.
Absence of adapted, standardized nomenclature: The absence of national or international standardized nomenclatures for the identification of biomedical equipment and health system infrastructure has been observed in multiple low resource countries [2, 3]. Often user-generated descriptions in free text are in use, resulting in typing errors and non-standard acronyms. The absence of an unambiguous ontology for biomedical engineering makes the exploitation of the available biomedical inventory time-consuming and error-prone. Nor does it allow health administrations to automatically evaluate the technical platform and infrastructure availability and maintenance status to national standards, insofar as these have been defined.
Shortage of qualified and skilled biomedical technicians: There is currently a marked shortage of skilled biomedical technicians in sub-Saharan Africa. This is particularly evident in rural areas and in the public sector. It pushes health systems toward expensive maintenance contracts offered by international manufacturers and relying on costly international technical personnel, which often proves unaffordable due to budgetary restrictions.
Lack of appropriate maintenance equipment, workshops and assistance: The few available local technicians often lack the appropriate equipment to carry out common repairs. They work in isolated settings without technical supervision or assistance. Therefore, necessary preventive maintenance tasks and curative repairs are not performed correctly or in due time, resulting in a very short functional lifetime of the biomedical equipment and consequently the frequent and long-term unavailability of sometimes essential diagnostic and/or therapeutic services for the patients [4, 5].
In the past decade, digitalization was introduced in the health care systems of many developing countries. National e-health strategies were developed (e.g. Burundi's Plan National de Développement de l'Informatique de Santé—PNDIS) in which priority was given to internet connectivity, the setting up of national data warehouses including geo-referencing of aggregated health data, the computerization of hospitals and health centers or the automation of the pharmaceutical supply chain. Digitalization of biomedical equipment and health infrastructure management was unfortunately given a lower priority in these plans and it was seldom integrated into the national health management information systems. Moreover, countries are facing important challenges when implementing these e-health strategies, such as (i) unavailability of a central e-health authority which can coordinate the many over-priced donor-driven e-health projects, (ii) low-bandwidth or not widely available internet connectivity, (iii) lack of digital literacy and (iv) lack of guidance on standards.
Our hypothesis is that implementation of a Computerized Maintenance Management System (CMMS) contributes to a more effective and efficient management of biomedical equipment and health infrastructure in a low resource setting.
The study population of our action research included the Central Directorate for Health Infrastructure and Equipment Management of the MoH. At decentral level it included all public hospitals (n = 4) and health centers (n = 73) of the provinces of Muramvya and Kirundo.
The action research consisted of the following elements:
Developing a standardized nomenclature for all biomedical equipment and health infrastructure.
Establishing unambiguous quantitative standards for biomedical equipment and infrastructure based on the health norms of the MoH.
Realizing a digital inventory of all biomedical material and health infrastructure based on the developed nomenclature.
The development of a digital information system for the management of inventories, maintenance plans and maintenance activities and making the system available to the central services of the MoH and to the decentral level, i.e. the maintenance technicians in the 2 provinces concerned.
The setup of biomedical workshops in the 2 concerned provinces with maintenance equipment and skilled technicians according to the national norms.
The training of technical personnel of the central services of the MoH and of all available maintenance technicians in the 2 provinces.
The action research was analyzed in a quantitative and qualitative way covering (i) the use of the CMMS over a period of 3 years (2017–2020), (ii) the study of the feasibility, results and sustainability of the intervention and (iii) identification of any relevant failure- and success factors to be taken into account for future CMMS implementations in low resource settings.
In an initial phase, the ontology was developed for both biomedical equipment and medical infrastructure assets as described below:
Biomedical assets nomenclature
For biomedical equipment categories, local terminologies familiar to local maintenance technicians were used as a starting point. The resulting local nomenclature was progressively mapped to international standards such as GMDN (in 2018) [6] and ICMD-11 (in 2020) [7]. Out of a total of 131 local nomenclature codes, a corresponding GMDN code could be found for 97 (74%) items and a matching ICMD code for 83 (63%) items.
Information in the form of a 5-digit numeric GMDN Code is cross-referenced to a precisely defined Term Name and Definition, as seen in this example:
GMDN Term Name: Scalpel, single-use.
GMDN Code: 47,569
GMDN Definition: "A sterile, hand-held, manual surgical instrument constructed as a one-piece handle and scalpel blade (not an exchangeable component) used by the operator to manually cut or dissect tissue. The blade is typically made of high-grade stainless steel alloy or carbon steel and the handle is often made of plastic. This is a single-use device".
Disadvantages of the GMDN classification were the fact that it was not available for free, that the code mapping took significantly longer to complete due to its much higher granularity compared to the ICMD classification and that GMDN matching also generated three times more ambiguous mappings, ultimately requiring an expert to make a choice between different possible candidate codes.
Infrastructure ontology
No useful international coding system could be found for classification of infrastructure assets. To this end, a new tri-axial nomenclature was locally developed, which took into account (i) the location of an item in the health pyramid (district hospital, health center …), (ii) the functional belonging of an item (radiology, laboratory, administration, ancillary building …) and (iii) its technical classification (roof truss, floor, network cabling …). The resulting infrastructure nomenclature contained 102 codes for the combinations of the first 2 axes each of which could be combined with 63 technical classification codes.
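As an illustration of how such a tri-axial code can be composed and validated, consider the following Python sketch; the axis values shown are invented examples, not the actual MoH code lists (and the real CMMS is a Java application):

```python
# Invented example values for the three axes (the real lists are defined by the MoH).
LOCATION = {"HD": "district hospital", "CS": "health center"}
FUNCTION = {"RAD": "radiology", "LAB": "laboratory", "ADM": "administration"}
TECHNICAL = {"TOI": "roof truss", "SOL": "floor", "CAB": "network cabling"}

def infrastructure_code(location, function, technical):
    """Compose a tri-axial infrastructure code such as 'HD-LAB-SOL'."""
    for axis, key in ((LOCATION, location), (FUNCTION, function), (TECHNICAL, technical)):
        if key not in axis:
            raise ValueError(f"unknown axis value: {key}")
    return f"{location}-{function}-{technical}"

print(infrastructure_code("HD", "LAB", "SOL"))  # a laboratory floor in a district hospital
```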
Health facility asset norms
Subsequently, existing Burundian quantitative national norms for infrastructure and biomedical equipment in district hospitals and health centers (numbers of operating theaters, reanimation sets, hospital beds, ECG machines etc.) were updated using the developed ontology for biomedical assets. The objective was to obtain an official reference against which the equipment status of each public health facility can be unambiguously assessed in order to enable better future investment planning and a more efficient organization of maintenance activities.
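Once both the norms and the inventory are expressed in the same nomenclature, such an assessment reduces to a per-category comparison, as in this illustrative sketch (the categories and quantities shown are invented, not the official Burundian norms):

```python
# Invented example figures; real norms and counts come from the MoH and the CMMS inventory.
norms = {"operating theatre": 2, "ECG machine": 1, "hospital bed": 60}
inventory = {"operating theatre": 1, "ECG machine": 1, "hospital bed": 45}

def compliance_gaps(norms, inventory):
    """For each asset category, how many items are missing relative to the national norm."""
    return {item: max(required - inventory.get(item, 0), 0)
            for item, required in norms.items()}

print(compliance_gaps(norms, inventory))
# {'operating theatre': 1, 'ECG machine': 0, 'hospital bed': 15}
```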
Baseline inventory
Before the start of the study, data on the biomedical assets of the public health facilities in Burundi had already been collected using an Akvo application on Android smartphones, with which technical characteristics, photos and descriptions of biomedical equipment and infrastructure were recorded according to WHO guidelines [1] in a semi-structured manner for each of the studied health facilities. This inventory was recovered and migrated into the new system. Using the previously developed biomedical assets ontology, this baseline inventory could be initiated before even starting the actual development of the CMMS. This resulted in an initial list of 647 infrastructure items (only taking into account the first two coding axes) and 745 equipment items for both provinces.
CMMS functionalities
Before the start of the study, about 20 hospitals in Burundi already had a digital hospital information system with an integrated but seldom used CMMS module. In order to avoid unnecessary introduction of new information systems and to remain interoperable with the information systems already used by these hospitals, it was decided to further build on their public Java programming libraries for the development of the CMMS covering not only the hospitals but the whole district (including district offices and health centers).
The following functionalities, tailored to the specific needs of the Burundian health system, were developed:
Management of equipment and infrastructure with the local ontologies and with international biomedical asset nomenclatures.
Health facility management according to the hierarchical organizational structure of the Burundian health care system, with differentiated access rights to different categories of users per health facility (hospitals and health centers) and levels (national, provincial, district).
Inventory of biomedical equipment and medical infrastructure, based on the previously developed ontology, including administrative, financial and technical characteristics of each asset and the storage of photos and any relevant asset documentation (manuals, loan agreements, inspection reports …).
The planning of preventive maintenance and the management of the requests for corrective interventions in case of equipment breakdowns. For that purpose, standardized maintenance schemes have been developed for 68 (52%) equipment and 29 (28%) infrastructure codes.
The registration of all maintenance activities including the identity of responsible maintenance technician, the outcome and the costs of the intervention.
Automatic generation of reports and dashboards relating to the inventory status, maintenance progress and the extent to which the national quantitative standards for biomedical infrastructure and equipment are complied with.
CMMS implementation
The CMMS was developed as a central internet-accessible server application with a secure web interface. The application was developed in French as an open source Java project with a MySQL database backend and running on an Apache Tomcat server. The choice for this architecture was partly determined by the fact that (i) Burundi has a fairly broad coverage of 3G / 4G internet access, (ii) the registered data is automatically and permanently available in a central location, (iii) the hardware infrastructure costs are kept to a minimum and (iv) updates and maintenance only need to be performed in one single place. In order to also enable the registration of biomedical asset data (including photos) in remote health centers where internet connectivity is not available, an offline registration module was developed, enabling maintenance technicians to enter data on stand-alone devices in off-line mode for later synchronization with the central server.
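As a purely conceptual sketch of the off-line registration workflow described above (the actual module is part of the Java CMMS, whose interfaces are not reproduced here), the Python fragment below illustrates one way locally recorded assets could be reconciled with a central store once connectivity returns. All identifiers and field names are assumptions.

```python
# Conceptual sketch (not the actual implementation) of reconciling records
# entered off-line with a central store; newest timestamp wins on conflict.

import uuid
from datetime import datetime, timezone

def new_offline_record(asset_code: str, description: str) -> dict:
    """Create a record on a stand-alone device while off-line."""
    return {
        "id": str(uuid.uuid4()),          # device-generated id avoids collisions
        "asset_code": asset_code,
        "description": description,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def synchronize(central_store: dict, offline_records: list[dict]) -> None:
    """Push off-line records to the central store; keep the newest version."""
    for rec in offline_records:
        existing = central_store.get(rec["id"])
        if existing is None or rec["recorded_at"] > existing["recorded_at"]:
            central_store[rec["id"]] = rec

store: dict = {}
synchronize(store, [new_offline_record("DH-RAD-063", "Network cabling, radiology wing")])
print(len(store))  # 1
```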
Most maintenance technicians entered unknown territory with the new CMMS application. Except for some basic knowledge of using Excel and Word, none of them had prior knowledge of a CMMS application. The average IT knowledge among the maintenance technicians was rather low. Therefore, a train-the-trainer approach was chosen, in which, in addition to the end users, expert users from the MoH received extensive training with regard to advanced functionalities and CMMS system management. End users have been successively trained multiple times on a steadily growing complexity of the software [8]. A team of 3 local private computer engineers in Bujumbura was additionally trained to be able to respond on demand (through a maintenance contract) in case of technical failures that cannot be resolved by the MoH's own expert users.
Although the technical learning curve for using the application was quite short (1 week of training for the maintenance technicians), the need for additional training in the biomedical maintenance procedures and strategies developed by the MoH became obvious very soon. For example, many maintenance technicians were relatively new to the sector, and others had developed their own working methods over time which were not always consistent with the procedures implemented in the CMMS. The roll-out of the CMMS application therefore got off to a relatively slow start. However, the maintenance technicians were monitored closely, and over time they gradually discovered the benefits that could be derived from the application when performing their daily tasks. Repeated additional training was tailored to the needs they expressed and was therefore fairly well attended, which eventually resulted in a steadily increasing use of the CMMS.
Inventory: the initial baseline inventory with 1392 registered biomedical assets has been progressively expanded to 2906 permanently updated asset records after 3 years of operation. In the same period, 70 defective equipment items have been disposed of.
Total number of interventions: Fig. 1 shows that the monthly number of maintenance interventions on these assets has also seen exponential growth, from 4 in the first half of 2017 (baseline) to 20 in early 2018, 230 in early 2019 and around 350 in March 2020; the arrows indicate the important role periodically repeated training efforts have played in this. Preventive maintenance has been responsible for the bulk of the growth in maintenance activity, accounting for 89% of the total interventions.
Monthly number of maintenance interventions
Qualitative and quantitative analysis of the interventions
The analysis of the maintenance tasks demonstrated a favorable effect on the quality of the service provided by maintenance technicians in the 2 studied provinces. Firstly, the average response time to requests for corrective maintenance in case of biomedical equipment breakdowns, measured from the day assistance was requested to the day of the first intervention, decreased from 106 days in April 2017 to 26 days in March 2020 (Fig. 2). In the same period, the time needed to solve a maintenance problem dropped from 106 to 32 days.
Quality of corrective maintenance interventions
Secondly, whilst the corrective maintenance workload grew from 6 interventions in the first semester to 159 interventions in the sixth semester of the study, simultaneously the proportion of successfully solved cases improved from 92 to 98% (Fig. 2).
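The delays and success rate reported above can be computed directly from intervention records; the sketch below is a minimal Python illustration with invented dates and field names, not the study's data.

```python
# Minimal sketch: mean response time, mean resolution time and share of
# solved cases from corrective-maintenance records (illustrative data only).
from datetime import date
from statistics import mean

interventions = [
    {"requested": date(2020, 2, 1), "first_visit": date(2020, 2, 20),
     "resolved": date(2020, 3, 1), "solved": True},
    {"requested": date(2020, 2, 10), "first_visit": date(2020, 3, 5),
     "resolved": date(2020, 3, 15), "solved": True},
    {"requested": date(2020, 2, 12), "first_visit": date(2020, 3, 20),
     "resolved": None, "solved": False},
]

response = mean((i["first_visit"] - i["requested"]).days for i in interventions)
resolution = mean((i["resolved"] - i["requested"]).days
                  for i in interventions if i["resolved"] is not None)
solved_rate = 100 * sum(i["solved"] for i in interventions) / len(interventions)
print(f"mean response {response:.0f} d, mean resolution {resolution:.0f} d, solved {solved_rate:.0f}%")
```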
From April 2017 to March 2020, out of a total of 635 interventions for corrective maintenance, 56% of the requests were issued by district hospitals, 38% by health centers and 6% by administrative structures. 61% of the interventions were related to infrastructure assets and 39% to equipment. During the last 18 months of the study, increasing numbers of preventive maintenance interventions were accompanied by a declining trend in (more expensive) corrective interventions, both in district hospitals and health centers, but this correlation was not statistically significant.
Functional capacity ratio F
In order to determine whether this growing number of maintenance interventions had an impact on the operational status of the biomedical heritage of the health facilities, the functional capacity ratio F was evaluated every six months, reflecting the proportion of active assets in functional condition:
$$F = \frac{A_{tf} - A_{df}}{A_t - A_d} \times 100$$
where
F = functional capacity ratio in %
Atf = total number of functional assets registered in the inventory
Adf = total number of functional assets in the inventory that were decommissioned
At = total number of assets registered in the inventory
Ad = total number of decommissioned assets registered in the inventory
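A minimal sketch of this calculation, assuming illustrative asset counts rather than the study's actual inventory figures:

```python
# Sketch of the functional capacity ratio F as defined above.
# The counts used below are invented for illustration.

def functional_capacity_ratio(a_tf: int, a_df: int, a_t: int, a_d: int) -> float:
    """F = (A_tf - A_df) / (A_t - A_d) * 100, i.e. the share of active
    (non-decommissioned) assets that are in functional condition, in percent."""
    active_total = a_t - a_d
    if active_total <= 0:
        raise ValueError("No active assets in the inventory")
    return (a_tf - a_df) / active_total * 100

print(functional_capacity_ratio(a_tf=2600, a_df=30, a_t=2906, a_d=70))  # ~90.6
```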
The results, shown in Fig. 3, indicate that a rather modest but statistically significant improvement in F was observed, growing from 88.69% in 2017 to 91.74% in 2020 (second-order polynomial regression, R = 0.9776). Interestingly, the obtained F-values correlate strongly with the absolute number of preventive maintenance interventions performed (linear regression, R = 0.9524 and p < 0.005) (Table 1).
Evolution of functional vs dysfunctional assets
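To illustrate the kind of linear-regression check reported above, the sketch below correlates synthetic preventive-intervention counts with synthetic F values using SciPy; the numbers are invented and only mimic the reported trend.

```python
# Illustrative regression between preventive-intervention counts and F.
# Both arrays are synthetic values, not the study data.

import numpy as np
from scipy import stats

preventive = np.array([4, 20, 60, 120, 230, 350])           # interventions per period (assumed)
f_values = np.array([88.7, 89.2, 89.8, 90.5, 91.1, 91.7])   # F in % (assumed)

slope, intercept, r, p, stderr = stats.linregress(preventive, f_values)
print(f"R = {r:.3f}, p = {p:.4f}")  # a strong positive correlation, analogous to the reported R = 0.9524
```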
During our study period from April 2017 till March 2020 we successfully implemented a CMMS in a low resource setting.
While preparing the project, we realized that available international nomenclatures for biomedical equipment did not optimally match the concrete needs of developing countries. Although the ICMD classification showed larger gaps (e.g. for the identification of disability devices, medical furniture or energy supplies), fewer ambiguities were identified than with GMDN. The simplicity of the ICMD classification also better suited the available local coding competences. In order to achieve a better fit, further research will be needed to either generate a simplified sub-classification of GMDN or a more extensive version of ICMD. In any case, the free availability of such a classification system seems to be an important factor for its usefulness in most of the sub-Saharan African countries.
Drawing up national standards for biomedical equipment and infrastructure that are adapted to the local health care system has proven to be a necessary and interesting exercise. Without such a reference framework, solid biomedical heritage planning seems virtually impossible [2, 5]. However, due to the very difficult political and budgetary situation of Burundi during the complete study period, it has not been possible to adequately evaluate the usefulness of the developed framework, because no substantial biomedical investment decisions have been made at the level of the Ministry.
On the technical side, the use of a web-based system with a central server has generated very few difficulties in Burundi, despite frequent interruptions in internet connectivity due to power outages around the country. Admittedly, there is rarely a compelling need to have biomedical inventory and maintenance data available in real time, and in the event of system failures, data can either be entered quite easily at a later stage or recorded on an off-line computer which will then be synchronized afterward. Yet, periodic synchronization of off-line computers remained problematic in a few remote areas where no reliable mobile network coverage was available.
Since the health care system in the provinces of Muramvya and Kirundo had been supported for quite some time by the Belgian Development Cooperation, the studied health facilities can generally be counted among the better equipped public health care structures of Burundi. This partly explains why the functional capacity ratio F left less room for improvement from the start. It may be expected that the F value would rise more strongly in less fortunate areas, given the fact that some authors state that between 40 and 70% of complex medical equipment is lying idle in sub-Saharan African countries [4, 5]. The study data suggest that the improved functional capacity ratio F is caused by the intensified preventive maintenance activity. Further investigation also showed that the better F-score cannot be attributed to an increased number of decommissioned non-functional assets (which is theoretically possible and even desirable). On the contrary, decommissioning of assets continues to be a difficult administrative and cultural problem, even with an operational CMMS in place. The surrounding areas of many health facilities therefore remain littered with discarded defective equipment, which cannot be disposed of due to the lack of practical decommissioning procedures.
At the end of the study, the MoH was actively using the CMMS to determine its needs for infrastructure and medical equipment, in consultation with its donors, in the context of the COVID-19 pandemic. In March 2020, after the first COVID-19 cases in Burundi were identified, an overview of available CPAP ventilators, ICU beds and resuscitation equipment in the public health care facilities was immediately available and essential for the organization of the COVID-19 response.
The positive impact of the action research in Kirundo and Muramvya led the Ministry of Health to include the roll-out of the software to the entire health system in its National Health Development Plan (2019–2023). However, the observed improvements stem from a broader, successful combination of (1) complete maintenance teams with maintenance technicians in 6 districts, (2) upgraded technical platforms and workshops, (3) an operational framework with logistics for outreach activities to health centers, (4) use of a computerized maintenance management system (CMMS), and (5) the setup of a funding model.
Some of the donors have already committed to further expanding the CMMS to other parts of the country and a baseline inventory has already been entered into the system covering 17 of the 18 provinces for equipment and 11 provinces for infrastructure assets.
This study provides strong evidence for the hypothesis that a sustainable implementation of a CMMS is feasible and useful in low resource settings, provided that (i) the implementation takes place in a conducive technical environment with adequate workshops and maintenance equipment, (ii) the active cooperation of the central and decentralized administrative authorities is ensured, (iii) sufficiently long and repeated training efforts are made, (iv) the necessary hardware and internet connectivity are available, and (v) adequate local technical support can be provided.
A well-functioning CMMS may have relevant impact on the functionality of the health facilities, and by extension on the resilience of the health system. It indirectly contributes to a greater availability and equity of qualitative medical care and to universal health coverage in general.
Table 1 CMMS implementation timeline
1. Computerized maintenance management system. Geneva, World Health Organization. 2011. https://www.who.int/medical_devices/publications/comp_maint_system/en/. Accessed 25/06/2020.
2. Global Atlas of medical devices. Geneva, World Health Organization. 2017. https://www.who.int/medical_devices/publications/global_atlas_meddev2017/en/. Accessed 25/06/2020.
3. Medical devices: Nomenclature system. Geneva, World Health Organization. 2011. https://www.who.int/medical_devices/priority/3_5.pdf?ua=1. Accessed 25/06/2020.
4. Perry L, Malkin R. Effectiveness of medical equipment donations to improve health systems: how much medical equipment is broken in the developing world? Med Biol Eng Comput. 2011;49:719–22. https://doi.org/10.1007/s11517-011-0786-3.
5. Heimann P. Guidelines for Health Care Equipment Donations. World Health Organization. 2001. https://www.who.int/medical_devices/management_use/manage_donations/en/. Accessed 25/06/2020.
6. Global Medical Devices Nomenclature. GMDN Agency. https://www.gmdnagency.org/. Accessed 25/06/2020.
7. ICD-11 Extension Codes for Health Devices, Equipment and Supplies. World Health Organization. https://icd.who.int/browse11/l-m/en. Accessed 25/06/2020.
8. Vaughn KE, Dunlosky J, Rawson KA. Effects of successive relearning on recall: does relearning override the effects of initial learning criterion? Mem Cogn. 2016;44:897–909. https://doi.org/10.3758/s13421-016-0606-y.
Enabel, Belgian development agency, Rue Haute 147, 1000, Brussels, Belgium
Farah Beniacoub & Stefaan Van Bastelaere
Central Directorate for Health Infrastructure and Equipment Management of the Ministry of Health (MoH), Avenue Pierre Ngendandumwe, Bujumbura, Burundi
Fabrice Ntwari & Jean-Paul Niyonkuru
Department of Public Health, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, 1090, Brussels, Belgium
Marc Nyssen
Correspondence to Farah Beniacoub.
Research involving human participants and/or animals
This article does not contain any studies with human or animal subjects performed by any of the authors.
I, Farah Beniacoub, give my consent for the manuscript (HEAL-D-20-00414, Evaluating a Computerized Maintenance Management System in a low resource setting, Farah Beniacoub) to be published in Health and Technology Journal. I understand that the text and any pictures published in the article will be freely available on the Internet and may be seen by the general public. The pictures and text may also appear on other websites or in print, may be translated into other languages or used for commercial purposes. I have been offered the opportunity to read the manuscript.
Conflict of interest
Farah Beniacoub declares that she has no conflict of interest.
This article is part of the Computer Based Medical Systems
Beniacoub, F., Ntwari, F., Niyonkuru, JP. et al. Evaluating a computerized maintenance management system in a low resource setting. Health Technol. 11, 655–661 (2021). https://doi.org/10.1007/s12553-021-00524-y
Issue Date: May 2021
Microbiome and ecology of a hot spring-microbialite system on the Trans-Himalayan Plateau
Chayan Roy1,
Moidu Jameela Rameez1,
Prabir Kumar Haldar1,
Aditya Peketi2,
Nibendu Mondal1,
Utpal Bakshi3,
Tarunendu Mapder4,
Prosenjit Pyne1,
Svetlana Fernandes2,
Sabyasachi Bhattacharya ORCID: orcid.org/0000-0002-0377-375X1,
Rimi Roy1 nAff9,
Subhrangshu Mandal1,
William Kenneth O'Neill ORCID: orcid.org/0000-0002-6074-658X5,
Aninda Mazumdar2,
Subhra Kanti Mukhopadhyay6,
Ambarish Mukherjee7,
Ranadhir Chakraborty8,
John Edward Hallsworth ORCID: orcid.org/0000-0001-6797-93625 &
Wriddhiman Ghosh ORCID: orcid.org/0000-0002-4922-74041
Scientific Reports volume 10, Article number: 5917 (2020) Cite this article
Little is known about life in the boron-rich hot springs of the Trans-Himalayas. Here, we explore the geomicrobiology of a 4438-m-high spring which emanates ~70 °C water from a boratic microbialite called Shivlinga. Due to low atmospheric pressure, the vent-water is close to boiling point and so can entropically destabilize biomacromolecular systems. Starting from the vent, Shivlinga's geomicrobiology was revealed along the thermal gradients of an outflow-channel and a progressively-drying mineral matrix that has no running water; ecosystem constraints were then considered in relation to those of entropically comparable environments. The spring-water chemistry and sinter mineralogy were dominated by borates, sodium, thiosulfate, sulfate, sulfite, sulfide, bicarbonate, and other macromolecule-stabilizing (kosmotropic) substances. Microbial diversity was high along both of the hydrothermal gradients. Bacteria, Eukarya and Archaea constituted >98%, ~1% and <1% of Shivlinga's microbiome, respectively. Temperature constrained the biodiversity at ~50 °C and ~60 °C, but not below 46 °C. Along each thermal gradient, in the vent-to-apron trajectory, communities were dominated by Aquificae/Deinococcus-Thermus, then Chlorobi/Chloroflexi/Cyanobacteria, and finally Bacteroidetes/Proteobacteria/Firmicutes. Interestingly, sites of >45 °C were inhabited by phylogenetic relatives of taxa for which laboratory growth is not known at >45 °C. Shivlinga's geomicrobiology highlights the possibility that the system's kosmotrope-dominated chemistry mitigates against the biomacromolecule-disordering effects of its thermal water.
The microbial ecologies of habitats that are hydrothermal, or hypersaline, have been well-characterized, and can give insights into the origins of early life on Earth1,2,3. Both chaotrope-rich hypersaline brines and high-temperature freshwater systems can entropically disorder the macromolecules of cellular systems, and are in this way analogous as microbial habitats4,5,6,7. Indeed, highly-chaotropic and hydrothermal habitats are comparable at various scales of biology: the biomacromolecule, cellular system, and functional ecosystem8,9.
Chaotropic, hypersaline habitats include the MgCl2-constrained ecosystems located at the interfaces of some of the stratified deep-sea hypersaline brines and their overlying seawater. Biophysical, culture-based, and metagenomic studies of the steep haloclines found at these interfaces have revealed that macromolecule-disordering (chaotropic) activities of MgCl2 not only determine microbial community composition, but also limit Earth's functional biosphere5,7,10 in such locations, as in situ microbial communities stop functioning at 2.2–2.4 M MgCl2 concentrations in the absence of any compensating ion; above these concentrations there is no evidence of cellular functions or life processes5. While the biophysical activities of MgCl2 and other chaotropic solutes and hydrophobes constrain the functionality of biomacromolecular systems (via mechanisms that are entropically analogous to the action of heat8,11,12), a number of macromolecule-ordering (kosmotropic) solutes such as NaCl, proline, trehalose and ammonium sulfate, as well as low temperature, can stabilize, and impart rigidity to, biomacromolecules11,13,14,15,16, and thereby mitigate against the inhibition of cellular systems by chaotropic agents5,7,17,18. Accordingly, active microbial life can be found even in hypersaline brines of up to 2.50–3.03 M MgCl2 when sodium or sulfate ions are also present7,10. Likewise, for other macromolecule-disordering agents such as ethanol, diverse types of kosmotropic substance as well as low temperature have been reported to mitigate against the chaotropic stress17,18,19. Whereas chaotropicity typically constrains cellular activity at temperatures >10 °C, chaotropes can promote metabolic activity at lower temperatures, thereby extending the growth windows for microbes under extreme cold6,19.
Ecological studies of geographically distinct hydrothermal habitats20,21,22,23,24,25,26 have elucidated various aspects of in situ microbiology including diversity27,28,29,30,31 and correlation of community structures/functions with temperature32,33,34,35,36,37,38,39. However, in relation to conditions which permit and/or constrain habitability, we currently know little about the geochemistry and biophysics of hot spring systems, especially the ones discharging hot waters, which have a neutral pH and are typically poor in sulfide, silicate and total dissolved solids but rich in sodium, boron, elemental sulfur and sulfate40,41,42. The current study, based in the Puga geothermal area of eastern Ladakh (within the Trans-Himalayan region, at the northern tip of India; see Supplementary Fig. 1 and Supplementary Note 1), focused on revealing the geomicrobiology of a sulfate- and boron-rich, silica-poor and neutral pH, hot spring originating from within a large chimney-shaped hydrothermal microbialite, known as Shivlinga (Fig. 1). This microbialite, which has formed via epithermal accretion of boratic and carbonatic hydrothermal minerals, is situated at an altitude of 4438 m, where the boiling point of water is approximately 85 °C. The functional ecology of Shivlinga's microbiome was elucidated via a sampling-microscopy-geochemistry-metagenomics-biophysics approach, which encompassed analyses of water from the vent, microbial mats along the spring-water transit/dissipation, and fresh mineral sinters precipitating on mat-portions growing near the margins of the water-flow. Microbial diversity, microbe-mineral assemblages, and geochemical and biophysical parameters were characterized along Shivlinga's hydrothermal gradients (Fig. 1) to determine the factors constraining and/or promoting the microbiome. The chemical milieus of these hydrothermal gradients were found to be rich in kosmotropic ions, so we compared their ecology with those of the kosmotrope-compensated chaotropic haloclines.
The environmental context of the Shivlinga microbialite hot spring system at the time of sampling on 23 July 2013: (A) the three geomicrobiological zones, scale bar represents 1 m; (B) view of Shivlinga's microbial communities showing the spring-water transit that represented the wet thermal gradient, scale bar represents 1 m; (C) microbialite body showing the vent and position of the two thermal gradients that were sampled, scale bar represents 10 cm; (D) bedrock slope around Shivlinga's base, showing the microflora, scale bar represents 10 cm; (E and F) WG6 and WG7, respectively, scale bars represent 5 cm for E and 10 cm for F. For (G), the light blue circle indicates the microbialite body, the yellow oval demarcation indicates the bedrock slope around the base, the purple oval demarcation shows Shivlinga's apron, and the cyan arrow shows the direction of water flow across the apron, away from the microbialite hot spring. For (H), the cyan curved-line with an arrow-head shows the meandering path of the spring-water across the apron, away from the microbialite body (this also represents part of the wet thermal gradient), the pink triangles indicate the sample sites for WG6 and WG7, the orange triangle indicates the location of the Sinter-Sample 4 (SS4). For (I), the green curved-line with an arrow-head indicates the trajectory of the drying thermal gradient, while the cyan curved-line with an arrow-head indicates part of the wet thermal gradient which starts at the vent and leads to the first mat of the bedrock slope (the latter also represents the direction of water flow); the red triangles indicate the position of the sampled communities VW and VWM; the green triangles indicate the sampling-positions for communities DG3 and DG4; the pink triangles indicate the sample sites for WG3 and WG4; and the orange triangles indicate the locations of the Sinter-Samples 1, 2 and 5 (SS1, SS2 and SS5, respectively). For (J), the cyan curved-line with an arrow-head indicates part of the wet thermal gradient that runs along the bedrock slope surrounding Shivlinga's base; the pink triangles indicate the sample sites for WG4 and WG5; and the orange triangle indicates the location of the Sinter-Sample 3 (SS3). For (K) and (L), the pink triangles indicate the sampling positions for communities WG6 and WG7, while the cyan curved-lines with an arrow-head each show the direction of water flow.
The boratic microbialite, Shivlinga
Shivlinga is a 35-cm-tall microbialite that is several decades old according to residents of the closest village situated at the eastern end of the Puga valley (Fig. 1A,G). The abundance of microbial biomass on its surface and chemical composition of its mineral body (see below) suggest that Shivlinga has formed via precipitation of boratic minerals on lithifying microbial mats. Laminated, clotted or dendritic fabrics commonly found on stromatolites, thrombolites or dendrolites43 were not encountered on Shivlinga's mineral deposits. It is for this reason that Shivlinga was considered to be a microbialite, i.e. a microbe-mediated sedimentary rock-like mineral deposit with or without defined internal fabrics43. The near-circular, flat top of Shivlinga's mineral body has a mean diameter of ~22 cm (Fig. 1C,I). Throughout the year, there is a gentle discharge of water (68–73 °C, pH 7.0–7.5) from a 10-cm diameter vent at the summit of Shivlinga; this is accompanied by a weak emission of hydrothermal gases (a comprehensive characterization of the gaseous discharges of the geothermal vents of Puga valley has shown that they all emit steam, carbon dioxide and hydrogen sulfide41). Shivlinga's discharge runs down the side of the microbialite (Fig. 1C,I) onto the bedrock surrounding its base (Fig. 1D,J). Beyond the slopes of the basement rock, the spring-water flows along several shallow channels (the longest one running 3.3 m from Shivlinga's base), and eventually percolates into the regolith of the surrounding apron (Fig. 1B,H). Temperature, pH and flow-rates of the vent-water and outflows were found to remain consistent during the 2008–2013 annual site visits (Table 1 and Supplementary Table 1); this stability was attributable to the physicochemical consistency of the underlying geothermal reservoir44. The sizes and structures of the microbial mats growing along Shivlinga's spring-water transit also remained largely unchanged during the period 2008–2013, indicating the microbiome's resilience to seasonal and annual variations in weather. This observation was consistent with reports for microbial mats at other geothermal sites22,25; so Shivlinga's ecosystem was considered suitable for studying microbial community dynamics along its hydrothermal gradients, and a comprehensive geomicrobiological exploration of the site was undertaken in July 2013.
Table 1 Microbial communities sampled at the Shivlinga ecosystem on 23 July 2013, and the temperature, pH, and flow-rate, of the vent-water, as well as those of the spring-water flowing over the mat samples, at the time of their collection.
Chemical characteristics of the vent-water
The chemical composition of Shivlinga's vent-water was determined by collecting surficial discharges from the center of the vent's orifice. The vent-water, at the time of the current sampling (on 23 July 2013), had a neutral pH (7.0) and a low concentration of total dissolved solids (2000 mg L−1) compared with other neutral-pH hot springs located in distinct geographical areas of the world41,45,46. Shivlinga's vent-water was found to have high concentrations of boron (175 mg L−1), sodium (550 mg L−1), bicarbonate (620 mg L−1) and chloride (360 mg L−1), compared to the other solutes detected in the vent-water. There was also some silicon (60 mg L−1), potassium (15 mg L−1), calcium (10 mg L−1), lithium (6 mg L−1) and magnesium (3 mg L−1) detected. Of the sulfur species present, thiosulfate (3 mM) and sulfate (1 mM) were the most abundant, but sulfite (225 μM) and sulfide (250 μM) were also present at significant concentrations. Collectively, the vent-water chemistry was consistent with that reported for other hot springs within the Puga region40,41.
Distinct mineralogies of Shivlinga's vent and body, bedrock slope, and apron
For each of the water-flows running in Shivlinga's vent-to-apron trajectory, and starting from the surface of the vent-water, multi-colored microbial mats grow all along their transits, until the end of the flow near the edge of the apron. Depending on their thickness, the mats are either slightly submerged or stay just above the water level. Segments of mats that lie at the interface of the spring-water and regolith are typically dry at their surface but moist within. Every mat at the site is intermeshed with fresh mineral accretions that precipitate from the cooling spring-water and also condense from the gases that arise from the vent; but mineralization processes are at far more advanced stages in the less-hydrated segments of mats. For microbial biomass growing just beneath the water level, or protruding from the water surface by some mm to cm, mineral dusts visible to the naked eye are present on the mat surfaces. For those mats or parts of mats that are situated at the margins of outflows, mm- to cm-sized, soft, white spherules and shrub-like bodies of accreted minerals cover the surface (Fig. 2A). In the latter cases, mineralization is very conspicuous: monitoring this process over a period of 21 days using close-up photography revealed that the microbial mats grow out from the top of the encrustations, and spread again over the surface of the spherules (Supplementary Fig. 2). It was also evident that the fresh biomass at the surface of such mats are covered by fresh mineral deposition; and so the process continues. Mineralization acts to solidify all the mat structures of the microbiome from the bottom upwards. The microbialite thereby grows in height as well as in girth, besides hardening from within; the height of Shivlinga, according to our field observations, increased by ~4 cm (and the vent orifice narrowed slightly) between 2008 and 2013. Spherules and shrub-like structures form across the apron's dry surface, presumably due to condensation of Shivlinga's fumarolic gases on the regolith (Fig. 1B,H). However, the salt accretions which form beyond a few cm from the banks of the water-flows are not associated with microbial mats, and are dry and brittle.
Macro-/micro-scale microbe-mineral structures at the Shivlinga site: (A) green microbial mat (pink arrows), and white spherules (red arrow) that collectively form shrub-like mineral bodies (orange circle), on the upper surfaces of the vent's rim (blue scale bar = 10 mm); (B) sinter particles from 5-cm-deep inside the wall of the microbialite, associated with bacterial filaments (red arrow) (image was taken using SEM, blue scale bar = 150 µm); and (C) a diatom cell (golden arrow) associated with the particles shown in (B) (using SEM, blue scale bar = 20 µm).
We analyzed fine mineral particles accreted on the mat samples taken from the middle of the water-flow, and sinter-spherules precipitating on mat-portions growing near the margins of the water-flow. Sinter-spherules and shrub-like bodies that were soft, so had presumably formed within a few weeks before sampling, were collected from one top-edge of Shivlinga's body (Sinter-Sample 1), one side-surface of the microbialite (Sinter-Sample 2), one point on the bedrock slope around Shivlinga's base (Sinter-Sample 3), and one point at mat:regolith interface on the margin of the spring-water flow across the apron (Sinter-Sample 4) (Fig. 1B–D,H–J). Besides these four samples, older and harder sinter material was collected from Shivlinga's interior by boring the microbialite's side-surface, using a 3 mm twist drill bit, to a depth of 5 cm at a point located 15 cm below the summit (Sinter-Sample 5) (Fig. 1C,I).
All the sinter samples were found to be rich in boron, sodium and calcium (Supplementary Table 2); kernite (Na2B4O7·4H2O) was the major mineral in Shivlinga's interior, while borax (Na2B4O7·10H2O) and then tincalconite (Na2B4O7·5H2O) were predominant in the soft, recently-formed sinters (data not shown). Calcite (CaCO3), gypsum (CaSO4·2H2O), elemental sulfur, and silica (SiO2) were also identified within the boron-mineral matrices of all the five sinter samples. Minute quantities of aluminum, gold, iron, manganese, zinc, molybdenum, lead, silver, nickel and cobalt were also detected, regardless of the sinter samples (Supplementary Table 2). These mineralogical data are consistent with Shivlinga's vent-water chemistry, and together they suggest that the microbialite has formed from the gradual precipitation and accretion of boratic-, carbonatic-, and sulfatic-minerals, a process that, in part at least, is facilitated by its microbial communities (see Fig. 2A).
The process of precipitation of boron minerals is common to Shivlinga's vent and microbialite body, bedrock slope and apron; yet there is considerable variation in the proportions of oxides or carbonates of metals, alkali metals and alkaline earth metals which combine with the boron minerals in the three zones of this microbiome (Supplementary Table 2). The soft (fresh) sinters of the vent and microbialite body, and those of the sloping bedrock, contain greater proportions of aluminum and gold, while sinters of the apron contain greater proportions of iron and manganese. On a w/w basis, the percentages of zinc, molybdenum, lead, silver, nickel and cobalt etc are higher in the freshly-precipitated sinters located within the apron than in the accretions on the vent and microbialite body, or the bedrock slopes (Supplementary Table 2).
Microbial characterizations along Shivlinga's spring-water transit/dissipation: a wet, and a progressively-drying, thermal gradient
Microbiological investigations were carried out along two distinct axes of Shivlinga's spring-water transit/dissipation. One of these represented a progressively-drying thermal gradient having no running water on it (along this axis, the spring-water dissipated into the mineral sinters of the microbialite body); the other featured a wet thermal gradient along the outflow channel of the spring. Both of these gradients start from the vent and, besides the vent-water community (VW), share a common microbial mat community (VWM) that floats on the vent-water and anchors to the sintered rim of the vent (Fig. 1C,I). The drying thermal gradient traverses the moist sinters concentrically from the vent opening, ~15 cm towards the edge of Shivlinga's flat-topped summit; two more, morphologically distinct but physically contiguous mat communities (DG3 and DG4) occur along this trajectory (Fig. 1C,I). On the other hand, the ~4-m-long wet thermal gradient, at the time of sampling, laid along the longest and widest of the six prominent channels of Shivlinga's spring-water transit; it ran from the vent, down one side of Shivlinga's mineral body and base (Fig. 1C,D,I,J), and then along the outflow channel in the apron (Fig. 1B,H). Along the wet thermal gradient lies a continuum of multi-hued microbial mat communities, from within which five representative mats (WG3, WG4, WG5, WG6 and WG7) were chosen for investigation. In total, nine physically/morphologically distinctive microbial communities, including one in the vent-water (Table 1), were sampled across the two thermal gradients and subjected to analysis using microscopy and metagenomics. These microbiological data were collated with the mineralogical analyses of sinter materials (sediment-accretions) sampled and investigated from the five distinct depositional facies (physical and chemical conditions under which deposition takes place) existing in the Shivlinga territory. This revealed that the Shivlinga microbiome encompasses three distinct geomicrobiological zones (Fig. 1A,G) characterized by distinctive temperature- and pH-conditions, and topographical, mineralogical and microbiological features. These are (i) the vent and the microbialite body, (ii) the microbialite's base (including the sloping bedrock) and (iii) the apron. Whilst the first geomicrobiological zone encompassed the VW, VWM, DG3, DG4 and WG3 communities (Fig. 1C,I), the second and the third included the communities WG4 and WG5 (Fig. 1D,J), and WG6 and WG7 (Fig. 1B,E,F,H,K,L), respectively. Further details of the geomicrobial features present along Shivlinga's spring-water transit/dissipation are given in the Methods sub-section titled "Site Description".
Distinct mat morphologies of the vent surface, microbialite body, sloping bedrock-base, and apron
Microscopic examinations revealed four distinct patterns of organization of microbial cells in Shivlinga's mat communities. The VWM and WG3 streamers are composed of rod-shaped bacterial cells that lie end-to-end, interspersed with smaller coccoidal cells, within dense networks of aseptate filaments of cyanobacteria that are long, straight, sheathed and semi-rigid (Fig. 3A,B). All the green, red and purple mats growing around the vent, on the microbialite body, and the bedrock slope of Shivlinga (i.e. DG3, DG4, WG4 and WG5) are made up of spherical bodies ranging from 10–100 µm in diameter (Fig. 3C). Diatoms that appear to be members of Cymbellaceae form the margins of these spheres, and remain in immediate contact with adjacent spheres or other mineral structures (Fig. 3C,D). Cells, which have the dimensions and morphologies of prokaryotic cells, occur along the boundary of the spheres. Filamentous and coccoidal microorganisms, resembling Chloroflexi and Chroococcales respectively, occupy the internal core of the spheres; abundant hyphal structures that are aseptate and white (some have stalked buds coming out of them), and resemble Phycomycetian fungi are also seen here (Fig. 3E–H). Notably, boron mineral-encrusted forms of these spherical bodies were identified in SEM-EDS of Sinter-Sample 5 (Fig. 2B,C). These microscopic data, together with the detection of small but definite proportions of fungus-affiliated reads in the subsequent metagenome analysis of the mat communities, indicated that specific amplification and sequencing of fungal DNA, in future investigations, may reveal even greater fungal diversity than shotgun metagenomics indicates.
Microbial structures from the Shivlinga site: (A and B) microbial cell filaments from streamers of VWM and WG3, respectively; (C) microbial biomass of DG3; (D) diatoms from the surface of the microbial mass which makes up WG4 and DG4 (D1 and D2 respectively); (E and F) cells from the core of the microbial mass which makes up DG4 and WG5, respectively; (G) Chroococcales-like cyanobacteria in the core of the microbial mass which makes up DG3; (H) fungal hyphae and diatoms near the edge of the microbial mass which makes up DG4. All images were produced using phase contrast microscopy except for (C) that used laser scanning confocal microscopy. For (C1), excitation was carried out at 488 nm and detection at 630–650 nm; (C2) is a differential interference contrast image with blue arrows indicating diatom cells; and for (C3), excitation was carried out at 543 nm and detection at 650 nm.
The two morphologically distinct mat structures of the wet thermal gradient that are located within apron environment, i.e. WG6 and WG7, suggest processes by which mat formation had occurred. The former type appears to have come about via nucleation of borax, extracellular organic matter, and traces of calcite and other minerals, on irregular aggregations of diatoms, cyanobacteria, red-pigmented Chloroflexi, and other cells having dimensions and morphologies akin to those of prokaryotes. The WG7 streamers, in contrast, exhibited a laminar organization. The top-most layer of these stratified mats contained dense sheaths of filamentous bacteria resembling Chloroflexi, Leptothrix and Sphaerotilus. Embedded in complex mineral assemblages, these filamentous organisms are interspersed with large round-shaped cells that appear to be flagellates, according to their morphologies and 5–20 µm diameters (row A of Fig. 4). At 1–2 mm beneath the top-layer (i.e. in sub-surface layer 1), abundance of the filamentous bacteria decreases and the density of the eukaryotic cells increases; at these depths, there are also occasional cells of pennate diatom species (row B of Fig. 4). Diatoms are, however, much more abundant at subsequent depths up to the boratic substratum. At 3–4 mm from the mat surface (i.e. in sub-surface layer 2), the diatoms co-exist with morphologically-diverse bacteria, many of which resemble purple sulfur and non-sulfur bacteria (row C of Fig. 4). At the deepest layer of WG7, 5–8 mm below the surface (i.e. in sub-surface layer 3), diatom cells predominate and are attached to borax crystals (row D of Fig. 4).
Microbial cells present in the different layers of the WG7 sample: (row A) top-most layer of the microbial mat; (row B) sub-surface layer 1 (from a depth of 1–2 mm); (row C) sub-surface layer 2 (from a depth of 3–4 mm); and (row D) diatom cells and borax crystals in sub-surface layer 3 (from a depth of 5–8 mm); (Column I) images were taken using phase contrast microscopy; (column II, except for row D) images were taken using laser scanning confocal microscopy, at 488 nm excitation and 630–650 nm detection; (column III, except for row D) used differential interference contrast to modify the image from column II; and (column IV, except for row D) used laser scanning confocal microscopy (543 nm excitation and long pass 650 nm detection) to further modify the image from column II. Photographs in row D were taken using phase contrast microscopy.
Bacteria-dominated vent-water community
PCR amplification (using Bacteria-/Archaea-specific oligonucleotide primers), followed by high throughput sequencing, of the V3 regions of all 16S rRNA genes present in the total environmental DNA extracted from the VW sample revealed the alpha diversity of the community. This is referred to hereafter as the metataxonomic composition47 so as to distinguish the diversity reported for the VW community from the diversities reported subsequently for the mat communities on the basis of shotgun metagenome sequencing and analysis. Notably, total environmental DNA yield from the VW sample was insufficient for direct shotgun sequencing and metagenome analysis.
Bacteria-specific V3 primers generated PCR products of desired size (~200 bp), so the amplification product was sequenced at high data throughput. Archaea-specific V3 primers did not yield any PCR product, suggesting that very low numbers of archaeal cells are present in Shivlinga's vent-water. Reads of the Bacteria-specific V3 sequence dataset were clustered into operational taxonomic units (OTUs) or putative species-level entities unified at the level of 97% 16S rRNA gene sequence similarity (Supplementary Table 3 shows the summary results of OTU clustering). Rarefaction analysis of the dataset confirmed that the read-sampling level (data throughput) achieved was sufficient to reveal most of the diversity present in the sample (Supplementary Fig. 3).
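A minimal sketch of such a rarefaction analysis is given below; the read-to-OTU assignments are simulated, not data from the VW sample.

```python
# Sketch of a rarefaction analysis: subsample the reads at increasing depths
# and count how many OTUs are observed at each depth. Simulated data only.

import random

random.seed(1)
# Simulated pool of reads, each labelled with the OTU it clustered into (97% identity).
reads = [f"OTU_{random.randint(1, 64)}" for _ in range(20000)]

def rarefaction_point(read_pool: list[str], depth: int) -> int:
    """Number of distinct OTUs observed in a random subsample of `depth` reads."""
    return len(set(random.sample(read_pool, depth)))

for depth in (100, 1000, 5000, 20000):
    print(depth, rarefaction_point(reads, depth))
# The curve plateaus when additional reads stop revealing new OTUs,
# indicating that the sequencing depth achieved was sufficient.
```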
A total of 64 bacterial OTUs – distributed over the phyla Proteobacteria (45), Actinobacteria (4), Firmicutes (4), Deferribacteres (2), Deinococcus-Thermus (2), Ignavibacteriae (2), Armatimonadetes (1), Aquificae (1), Synergistetes (1) – were identified in the VW sample (the number of OTUs affiliated to each phylum is given in parentheses); 2 of these belonged to unclassified Bacteria. Of the 64 OTUs, 19 could be classified at the genus level. The genera identified (number of affiliated OTUs given in parentheses) were Achromobacter (2), Alcaligenes (1), Aminicenantes Incertae Sedis (1), Armatimonadetes gp5 (1), Calditerrivibrio (2), Ignavibacterium (2), Paenibacillus (1), Propionibacterium (1), Sulfurihydrogenibium (1), Thermomonas (4), Thermus (2) and Thiofaba (1). Notably, of the 12 genera detected metataxonomically in the 70 °C VW, member strains of at least four have never been found to grow at >45 °C in vitro (these are Achromobacter, Alcaligenes, Paenibacillus and Propionibacterium); and six are present in each of the mat communities studied (these are Achromobacter, Calditerrivibrio, Paenibacillus, Propionibacterium, Sulfurihydrogenibium and Thermus).
Microbiology of Shivlinga's thermal gradients
Variations in the community composition of Shivlinga's microbial mats were assessed via metagenome sequencing and analysis along each thermal gradient (Supplementary Tables 4 and 5 show the summary statistics plus domain-level classifications of the metagenomic data obtained from the mats of the drying and wet thermal gradients respectively). Although total number of metagenomic reads generated for the different samples varied, plateauing of rarefaction curves determined for the individual samples by plotting number of reads analyzed versus genera identified in searches against the non-redundant (nr) protein sequence database of National Center for Biotechnology Information (NCBI, USA) or the 16S rRNA gene sequence database of the Ribosomal Database Project (RDP) showed that the data throughput achieved was sufficient to reveal most of the diversity present in the sample. Furthermore, to avoid the risk of potential anomalies that may arise in the subsequent analyses due to differential read-count of the different samples, prevalence of taxa was quantified as the percentage of total metagenomic reads ascribed to them, i.e. relative abundance.
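As a simple illustration of this normalization (with invented read counts, not the study's data), prevalence of a taxon is expressed as the percentage of total metagenomic reads assigned to it, so that samples with different read counts remain comparable.

```python
# Sketch of the relative-abundance normalization described above.
# Read counts are illustrative, not actual metagenomic data.

def relative_abundance(read_counts: dict[str, int]) -> dict[str, float]:
    total = sum(read_counts.values())
    return {taxon: 100.0 * n / total for taxon, n in read_counts.items()}

vwm_counts = {"Deinococcus-Thermus": 60000, "Aquificae": 20000, "Other": 20000}
print(relative_abundance(vwm_counts))  # {'Deinococcus-Thermus': 60.0, 'Aquificae': 20.0, 'Other': 20.0}
```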
In searches against the nr protein database, unassigned and unclassified sequences made up 7–26% of the metagenomic readsets; Bacteria accounted for at least 72% of all these and Archaea constituted <1%, regardless of the sample. The highest relative abundance of Archaea was encountered in the distal (cooler) ends of the gradients, i.e. in the DG4 (41 °C), WG4 (46 °C) and WG7 (33 °C) communities. The genera Archaeoglobus, Methanocaldococcus, Methanococcus, Methanosarcina, Methanospirillum, Methanothermobacter, Pyrococcus and/or Thermococcus comprised the major archaeal component of every mat community. Eukarya accounted for <0.6% of the metagenomic readsets for VWM, WG3 and WG4; and ~1% for DG3, DG4, WG5, WG6 and WG7 (for at least one replicate). In other words, the relative abundance of eukaryotes almost doubled along the drying and wet thermal gradients at ≤52 °C and ≤38 °C (respectively), compared to those observed at higher temperatures. Diverse types of virus are present in Shivlinga's mat communities and, remarkably, their relative abundance is highest at 66 °C (in VWM). Virus-affiliated reads made up 0.3% and 0.2% of the metagenomes obtained from the two VWM sample-replicates, but only 0.01–0.04% of those obtained from the other mat communities. Predominant members of the viral component of VWM metagenomes were temperate lactococcal phages, including r1t and Listeria phi-A118; notably, thermophilic phages that are commonly found in other hydrothermal ecosystems48 were absent. Whereas the Bacteria are the dominant organisms in Shivlinga's microbiome, the ecophysiological importance of eukaryotic microbes could be greater than suggested by their low number of metagenomic reads. Some indications of this are detailed below.
Microscopic analyses revealed dense but localized populations of diatoms in many of the mat communities, including those located at 52 °C (Fig. 3C,D). Diatoms (Bacillariophyta) were also detected metagenomically in all of the microbial mats sampled, albeit in low numbers. Their relative abundance ranged from a minimum of 0.001%, for VWM, to a maximum of 0.05%, for DG3, along the drying thermal gradient and 0.6%, for WG5, along the wet thermal gradient (relative abundances values for diatoms were 0.2% for DG4, WG6 and WG7). The genera Odontella, Phaeodactylum and Thalassiosira consistently predominate diatom populations in all the eight mat communities, regardless of the hydrothermal gradient.
Microscopy revealed localized but dense populations of fungi in mat communities present at the 33–52 °C sites. According to metagenomic analyses, fungi were also present in all of the other mat communities, albeit at low levels. Along the drying thermal gradient, their relative abundance ranged from a minimum of 0.02%, for VWM, to a maximum of 0.2%, for DG4, and along the wet thermal gradient from 0.02% (VWM) to 0.7%, for WG5. Furthermore, 63 fungal genera were found to be present in VWM (66 °C), while greater numbers of genera were detected in the communities growing at the distal ends of each gradient; e.g. 82 genera in DG4, 106 in WG5, and 142 in WG7. Aspergillus, Gibberella, Neurospora, Saccharomyces, Schizosaccharomyces and Ustilago constituted the major portions of the fungal populations across both the gradients.
Sequences matching Chlorophyta (green algae) were present in the metagenomes of all the mat communities, and their relative abundance ranged from 0.005% (for VWM) to a maximum of 0.1% (for DG4, WG5, WG6 and WG7). The genera Chlamydomonas, Chlorella, Micromonas, Ostreococcus and Volvox are the major green algal component of all the mat communities, whereas Acetabularia, Bryopsis, Dunaliella, Nephroselmis, Pyramimonas and Scenedesmus, sparse at high temperatures, increase towards the distal ends of the gradients. Remarkably, the eukaryotic components of each mat metagenome were not dominated by a microorganism, but by the bryophyte Physcomitrella (spreading earthmoss), which has a simple life-cycle and is considered to be the most ancient of all land plants49. It is an early colonizer of exposed mud/sediment/soil around the edges of water bodies, and is widely distributed in temperate regions of the world. At the Shivlinga site, Physcomitrella is most abundant in VWM (66 °C), where it constitutes 0.1% of the metagenomes. Notably, in the VWM metagenomes, known thermophilic genera such as Archaeoglobus, Calditerrivibrio, Desulfotomaculum, Hydrogenivirga, Pyrococcus, Thermoanaerobacter and Thermocrinis each has <0.1% of reads ascribed to them. In the other mat communities, Physcomitrella constituted 0.02–0.04% of the metagenomes. In VWM, the most prevalent eukaryote after Physcomitrella is another photosynthetic organism, Cyanidioschyzon, which is a unicellular, ~2 μm-long red alga that often occurs in sulfur-containing, highly acidic hot springs at around 45 °C50. For all other mat communities, the second-most prevalent eukaryote is the amoeba Dictyostelium, which is found in soils, predates bacteria, and is commonly known as slime mold51. The other protists detected in Shivlinga's mat communities included the flagellate cryptomonad alga Guillardia, the glaucophyte Cyanophora, and the chloroplast-bearing amoeba Paulinella. Notably, all these genera were present in sample sites of ≤46 °C.
Along the drying, as well as the wet, thermal gradients (Fig. 5A; Supplementary Fig. 4 and Fig. 5B; Supplementary Fig. 5), mean relative abundances of the 26 major groups of Bacteria (i.e. 21 phyla and five proteobacterial classes) varied considerably. Deinococcus-Thermus and Aquificae dominate VWM, and are less prevalent in the communities located at the distal (cooler) ends of each gradient. Along the drying thermal gradient, this decline in the prevalence is very sharp for both phyla; relative abundance of Deinococcus-Thermus declined from 60% in VWM to ~2% in DG3 and DG4. In the drying thermal gradient, there was a concurrent increase of Chloroflexi; Firmicutes; Alpha, Beta, Gamma and Delta classes of Proteobacteria; Actinobacteria; Acidobacteria and Planctomycetes, which continues through DG3 and DG4. Notably, Cyanobacteria, Chlorobi, Bacteroidetes, Spirochaetes and Verrucomicrobia are more abundant in DG3 (relative to VWM), but declined in DG4 (relative to WG3). Contrary to the above trends, Epsilonproteobacteria and Thermotogae were low in DG3 (relative to their VWM levels) but increased again in DG4.
Mean percentage of reads ascribed to bacterial phyla, and classes within the Proteobacteria, in the duplicate metagenomes obtained from each mat community of (A) the drying thermal gradient and (B) the wet thermal gradient. The 21 phyla and five proteobacterial classes represented account for >0.1% reads in at least one of the 16 metagenomes analyzed. The category 'others' encompasses the phyla that accounted for <0.1% reads in every metagenome analyzed. Statistical significance of the fluctuations in the relative abundance of the taxa along the hydrothermal gradients can be seen in Supplementary Figs. 4 and 5 where their mean relative abundance within each mat community has been plotted alongside the two original relative abundance values obtained from the duplicate metagenomes (shown as vertical range bar in Supplementary Figs. 4 and 5).
Phylum-level population fluctuations are more irregular along the wet thermal gradient; the relative abundances of Alphaproteobacteria, Chlamydiae and Planctomycetes continue to increase along the water transit, in the vent-to-apron direction. The frequencies of Actinobacteria, Cyanobacteria and Spirochaetes also increase up to the 38 °C site. Similarly, Acidobacteria, Bacteroidetes, Chlorobi, Deferribacteres, Deltaproteobacteria, Dictyoglomi, Firmicutes, Fusobacteria, Gemmatimonadetes, Nitrospirae, Synergistetes and Tenericutes increase up to the 46 °C site. By contrast, the frequencies of Aquificae and Epsilonproteobacteria continue to decrease along the gradient, down to the 38 °C sample site, and Deinococcus-Thermus decreases down to the 36 °C site. Frequencies of Betaproteobacteria, Chloroflexi, Gammaproteobacteria and Verrucomicrobia increase till the 56 °C site. Remarkably, the relative abundance of Thermotogae does not vary with temperature along the wet thermal gradient. Consistent with these trends, only Aquificae, Deinococcus-Thermus and Epsilonproteobacteria exhibited significant positive correlation with both temperature and flow-rate of the spring-water (Supplementary Table 6). Whereas correlation of these three phyla with pH was significantly negative, only Aquificae showed significantly negative correlation with distance from the vent. Actinobacteria, Alphaproteobacteria, Chlamydiae, Firmicutes, Fusobacteria, Gammaproteobacteria, Gemmatimonadetes, Other phyla, Planctomycetes, Spirochaetes, Unclassified Bacteria and Unclassified Proteobacteria, all exhibited significant negative correlations with both temperature and flow rate. All these groups, remarkably, had significant positive correlations with pH and distance from the vent. Acidobacteria, Bacteroidetes, Betaproteobacteria, Chlorobi, Chloroflexi, Cyanobacteria, Deferribacteres, Deltaproteobacteria, Dictyoglomi, Nitrospirae, Synergistetes, Tenericutes, Thermotogae and Verrucomicrobia showed no significant correlation with temperature, although Acidobacteria, Betaproteobacteria, Deltaproteobacteria, Synergistetes and Verrucomicrobia had positive correlations with pH as well as distance from the vent – of these, Acidobacteria and Betaproteobacteria additionally exhibited significant negative correlations with flow rate. Tenericutes had significant positive and negative correlations with pH and flow rate respectively. Supplementary Tables 7–12 show the calculations for all the correlation coefficients and their Benjamini-Hochberg-corrected P values.
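A compact sketch of the Benjamini-Hochberg adjustment applied to such correlation P values is shown below; the raw P values are invented for illustration.

```python
# Sketch of the Benjamini-Hochberg false-discovery-rate correction used for
# the correlation P values; input values are made up.

import numpy as np

def benjamini_hochberg(pvals: np.ndarray) -> np.ndarray:
    """Return BH-adjusted P values."""
    n = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank downwards
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0, 1)
    return out

raw_p = np.array([0.001, 0.012, 0.030, 0.045, 0.200])
print(benjamini_hochberg(raw_p))  # adjusted values, e.g. [0.005 0.03 0.05 0.05625 0.2]
```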
Thermodynamic constraints on microbial colonization
Thermodynamic constraints on the microbial colonization of Shivlinga sites were identified based on the relative abundance of taxa along the hydrothermal gradients. These, in turn, were derived from the percentage of metagenomic reads ascribed to individual taxa within each community. Diversity within Shivlinga's microbial communities increases towards the distal ends of the thermal gradients, and community composition changes from Aquificae/Deinococcus-Thermus- to Chlorobi/Chloroflexi/Cyanobacteria-dominated, and eventually to a Bacteroidetes/Proteobacteria/Firmicutes-dominated one (Fig. 5). Although the VWM community on Shivlinga's vent-water is biomass-dense, its microbial diversity is lower than that of the mat communities located at the distal end of each thermal gradient. This is likely due to the habitability barrier imposed by the high vent-water temperature that, in turn, is exacerbated by the low atmospheric pressure. At an altitude of 4438 m, atmospheric pressure is ~6.4 kPa, which is close to the 2.5–5 kPa threshold that is known to prevent in vitro growth of bacteria adapted to atmospheric pressure at sea level (i.e. ~101.3 kPa)52. Low pressure not only reduces the boiling point of water but can also contribute to the entropic destabilization of biomacromolecular systems (conversely, high pressures appear to mitigate the chaotropicity of MgCl2 in deep-sea brine systems5). Conditions at the Shivlinga site, therefore, destabilize biomacromolecules and elicit cellular stress responses which impose high energetic costs on microbial systems and/or can ultimately cause cell-system failure53,54. For VW and VWM (the communities at the highest temperatures), an environment-driven selection for thermotolerant/thermophilic taxa would be expected. Accordingly, the relative abundance of 19 out of the 26 major bacterial phyla/proteobacterial classes present along the wet thermal gradient was lower in VWM (66 °C) than in WG3 (56 °C). Only Aquificae, Deferribacteres, Deinococcus-Thermus, Dictyoglomi, Epsilonproteobacteria, Nitrospirae and Thermotogae, which are made up mostly of thermophiles, were more prevalent in VWM than in WG3 (Fig. 5B). This is indicative of a substantial thermodynamic barrier to microbial colonization at ~60 °C. The 19 groups, which include Alphaproteobacteria, Betaproteobacteria and Gammaproteobacteria, Bacteroidetes, Chlorobi, Chloroflexi, Cyanobacteria, and Verrucomicrobia, and for which prevalence was lower at 66 °C than at 56 °C, are made up mostly of mesophilic or thermotolerant members incapable of laboratory growth at >60 °C. The presence of these taxa at in situ temperatures of >60 °C suggests that one or more hitherto unidentified environmental factors act to mitigate the cellular stresses induced by high temperature. Furthermore, the relative abundances of most of the major phyla/classes (20 out of 26) were lower at 56 °C, in WG3, than at 46 °C, in WG4 (Fig. 5B; Supplementary Fig. 5). This suggests that temperatures between 56 °C and 46 °C impose an additional thermodynamic barrier to habitability for many of the microbial taxa. In microbiomes for which the habitability of some taxa is constrained by other parameters, a comparable drop in diversity is observed for xerophiles and halophiles at 0.720 water activity55. Furthermore, at the distal end of the wet thermal gradient, the relative abundances of 16 out of the 26 major groups were lower at 46 °C (WG4) than at 38 °C (WG5), implying that there is another colonization barrier at 40–42 °C.
Simpson Dominance (D), Shannon–Wiener Diversity (H), and Shannon–Wiener Evenness (EH) Indices were calculated from the metagenomic data for each mat community (Supplementary Tables 13–20); these values were then compared along each gradient. The trends revealed provide further evidence for temperature barriers to habitability. For VWM (66 °C), the D value was greater than for any other mat community, whereas H and EH were low relative to those for WG3 and DG3 (56 and 52 °C, respectively) (Fig. 6). This indicates that, for both the wet and the drying thermal gradients, there is a colonization barrier for many taxa at ~60 °C. However, along the wet thermal gradient, from 56 to 46 °C (WG3 to WG4), and along the drying thermal gradient, from 52 °C to 41 °C (DG3 to DG4), the increases in H and EH were much smaller than those recorded across the 60 °C barrier in the respective gradients (Fig. 6). This indicates that the colonization barriers at lower temperatures (~50 °C for the wet thermal gradient and 46–50 °C for the drying thermal gradient) are weaker than that at ~60 °C. Trends in the above indices further revealed that below 46 °C, along the wet thermal gradient, diversity did not correlate with temperature, so other biotic and/or abiotic parameters could be acting as determinants of community composition in this temperature range.
Simpson Dominance (A,E), Shannon Diversity (B,F) and Shannon Equitability (C,G) Indices, and total genus count (D,H), along the drying (A–D), and the wet (E–H), thermal gradients.
For each mat community, the most-abundant genera that collectively accounted for ≥50% of all classifiable reads were identified from the results of the metagenome searches against the nr protein database. The number and identities of such genera were then compared along the thermal gradients to interpret the temperature-constrained community dynamics. One genus, Thermus, accounted for a mean of 50.6% of the VWM metagenomes, and the genera Sulfurihydrogenibium, Meiothermus and Aquifex constituted 21.6%, 2.2% and 2.1%, respectively. For the DG3 and DG4 communities, 50% of their metagenomic reads were made up of 12 and 50 genera, respectively (Supplementary Table 21), and for VWM, WG3, WG4, WG5, WG6 and WG7, 50% of the reads were made up of one, 11, 17, 35, 68 and 81 genera, respectively (Supplementary Table 22). In order to confirm the identifications of genera present in the individual mat communities, genus-level analysis of their duplicate metagenomes was carried out by searching against the 16 S rRNA gene sequence database of RDP. Using this approach (which is distinct from the amplified 16 S rRNA gene sequence-based metataxonomic approach, used only for the vent-water community), diverse bacterial, but no archaeal, genera were identified in all the eight mat communities (Supplementary Tables 23–30). The numbers of genera detected in this way varied along the two thermal gradients, and were consistent with the variations in microbial diversity indices (Fig. 6). For the drying thermal gradient, the genus count was considerably lower at 66 °C than at 52 °C or 41 °C (Fig. 6D). For the wet thermal gradient, the genus count was far lower at 66 °C than at 56 °C or 46 °C; unexpectedly, however, the genus count was higher at 46 °C than at 38 °C, even though it was much lower at 38 °C than at 36 °C or 33 °C (Fig. 6H). The trends in microbial diversity that were revealed from the direct classifications of the metagenomic reads were consistent with the results of diversity analyses based on assembly of contigs and binning of population genomes from the mat metagenomes (Supplementary Note 2).
Key metabolic attributes of the Shivlinga mat communities
Analysis of the metagenomic data of individual mat communities for comparative richness of genes [or Clusters of Orthologous Groups (COGs) of Proteins] under various metabolic/functional categories was followed by hierarchical clustering of the mat communities in terms of their enrichment of various COG categories. This revealed a dichotomy between DG3, DG4 and WG5 on one side, and VWM, WG3, WG4, WG6 and WG7 on the other (Fig. 7A). DG4 and WG5 clustered on the basis of their similarities with respect to high/low presence of COGs affiliated to the categories Cell motility; Replication, recombination and repair; Signal transduction mechanisms; and Translation, ribosomal structure and biogenesis. DG3 joined this cluster based on significantly high and low presence of COGs affiliated to Signal transduction mechanisms and Cell motility, respectively (Fig. 7B). In the other major cluster, the closeness of VWM and WG3 (Fig. 7A) is explained by their similarities in having high and low presence of COGs affiliated to the categories Replication, recombination and repair, and Signal transduction mechanisms, respectively (Fig. 7B). WG6 joined the VWM-WG3 cluster on the basis of significantly high presence of COGs affiliated to Replication, recombination and repair (Fig. 7B). WG4 and WG7, in turn, associated on the basis of low presence of COGs for Replication, recombination and repair. VWM, WG3, WG4 and WG6 were further unified by the significantly low presence of COGs affiliated to Signal transduction mechanisms. Despite the significantly low presence of COGs affiliated to Signal transduction mechanisms in a number of mat communities, the numbers of species-level matches, as well as the relative abundances, of genes encoding bacterial two-component kinases such as histidine kinase, serine/threonine protein kinase, diguanylate cyclase and PAS sensor protein were considerably high for all the communities. This implied that the geothermal adaptations of complex microbial mat communities involve efficient response regulation to a wide range of environmental signals.
Functional analysis of the metagenomes isolated from the eight microbial mat communities of Shivlinga: (A) heat map comparing the richness of the metabolic/functional categories across the communities, determined in terms of the number of Clusters of Orthologous Groups (COGs) of Proteins ascribed to the categories in individual communities; a two-dimensional clustering is also shown, involving the eight mat communities on one hand and the 18 functional categories of COGs on the other; the color gradient of the heat map runs from high (red), through moderate (yellow), to low (green) richness of the categories across the communities; (B) statistically significant high (green circles) or low (red circles) richness of the functional categories across the communities, as determined by Chi Square test with p < 0.001.
The merged metagenomic readsets of individual mat communities were searched for genes putatively encoding reverse gyrase enzymes, which are typical of hyperthermophilic bacteria and archaea, but never found in mesophilic microorganisms56. As expected, metagenomic reads matching reverse gyrases from 30 different bacterial species were identified in VWM, but in no other community of the drying thermal gradient. Along the wet thermal gradient, such reads were found exclusively in communities living at ≥46 °C.
Metagenomic reads matching genes that encode key enzymes of the autotrophic Calvin–Benson–Bassham and Wood–Ljungdahl pathways, namely ribulose 1,5-bisphosphate carboxylase large chain (rbcL) and acetyl-coenzyme A synthetase (acsA), were identified in all the mat communities. While autotrophy is thus ubiquitous throughout the ecosystem, the Wood–Ljungdahl pathway appears to be utilized by the majority of community members across Shivlinga's thermal gradients. The numbers of species-level matches identified for acsA-related reads were an order of magnitude greater than those for rbcL; across the communities, species-level matches for acsA ranged between 112 (in WG5) and 477 (in WG7) while those for rbcL ranged from 18 (in DG4 and WG5) to 54 (in WG7). Furthermore, metagenomic reads corresponding to genes encoding the large subunit of ATP citrate lyase (AclA), a key enzyme of the reverse tricarboxylic acid (rTCA) or Arnon–Buchanan cycle - which is more energy-efficient, oxygen-sensitive, and phylogenetically ancient than the Calvin cycle or the Wood–Ljungdahl pathway57,58 - were detected only in the communities of the higher-temperature sites having typically low in situ oxygen, i.e. VWM and DG3 along the drying thermal gradient, and VWM, WG3 and WG4 along the wet thermal gradient.
Metagenomic reads matching genes for the gluconeogenic enzyme phosphoenolpyruvate carboxykinase (PEPCK) were detected in all the mat communities. Number of species-level matches for the PEPCK-related reads increased steadily from 32 in VWM to 102 in DG4, along the drying thermal gradient; and from 32 in VWM to 175 in WG7, along the wet thermal gradient. PEPCK governs the interconversion of carbon-metabolites at the phosphoenolpyruvate–pyruvate–oxaloacetate junction of major chemoorganoheterotrophic pathways, thereby controlling carbon flux among various catabolic, anabolic, and energy-supplying processes59. Variations in the number of species-level matches for PEPCK-related reads, therefore, were considered to be reflective of a steady rise in organoheterotrophic inputs in community productivity along both the hydrothermal gradients.
In view of the considerable abundance of dissolved sulfide, elemental sulfur, thiosulfate and sulfate in Shivlinga's vent-water, occurrence and phylogenetic diversity of two key sulfur-chemolithotrophic genes, namely, the thiol esterase-encoding soxB of bacteria and the thiosulfate:quinone oxidoreductase-encoding tqoAB of archaea60, were investigated in the metagenomes of all the mat communities. While tqoAB was not present in any metagenome, soxB was detected in all of them. Number of species-level matches for soxB-related reads increased steadily from 4 in VWM to 16 in DG4, along the drying thermal gradient; and from 4 in VWM to 39 in WG7, along the wet thermal gradient. Sox-based sulfur-chemolithotrophs being predominantly aerobic60, these trends were consistent with the increase in oxygen tension along the vent-to-apron trajectories of the hydrothermal gradients.
Metagenomic reads matching genes encoding cytochrome c oxidase subunit 1 (cox1), the key enzyme of aerobic respiration61, are widespread in the Shivlinga ecosystem. The number of species-level matches for cox1-affiliated reads increased progressively from 32 in VWM to 193 in DG4, along the drying thermal gradient; and from 32 in VWM to 267 in WG7, along the wet thermal gradient. Metagenomic reads matching genes encoding dissimilatory sulfite reductase alpha and beta subunits (dsrAB), which is central to the energy-conserving reduction of sulfite to sulfide in anaerobic sulfite/sulfate-reducing prokaryotes62, were detected only in some of the mat communities. The numbers of species-level matches for dsrAB reads were 6, 0 and 3 in VWM, DG3 and DG4, respectively; and 6, 9, 17, 0, 2 and 16 in VWM, WG3, WG4, WG5, WG6 and WG7, respectively. No gene for N-oxide (NO/N2O) or nitrate (NO3) reduction was found in any mat community. Collectively, these data suggest that Shivlinga's mat communities are adept at utilizing whatever little oxygen is available in their watery to semi-watery habitats. This hypothesis is further supported by the considerable occurrence and diversity (across the thermal gradients) of genes involved in the biogenesis of cbb3 cytochrome oxidase, which is known to be instrumental in respiration under very low oxygen tension63. For instance, the number of species-level matches for metagenomic reads ascribable to genes encoding the cbb3 cytochrome oxidase biogenesis protein CcoG64 increased progressively from 0 in VWM to 44 in DG4, along the drying thermal gradient; and from 0 in VWM to 87 in WG7, along the wet thermal gradient.
An extreme yet highly habitable ecosystem
Diverse lines of evidence indicated that many of Shivlinga's native microorganisms can bypass thermodynamic hurdles to habitation. The colonization of high-temperature locations by phylogenetic relatives of a wide variety of mesophilic bacteria, the presence of psychrophilic bacteria at the 33–52 °C sites, localized but dense populations of diatoms within the mat communities of the 52 °C site, and the occurrence of established populations of mosses, fungi, green and red algae, and slime mold at sites of >50 °C, collectively indicate that this ecosystem is highly habitable, biodiverse and complex. The taxonomically diversified and biomass-dense microbiome of this otherwise extreme site (thermal stress, high UV, low pressure) is further complemented by the abundance of taxa at temperatures below their recognized minima for growth, and the abundance of taxa that are ubiquitous throughout the ecosystem irrespective of temperature. For instance, the genera Chloroflexus, Roseiflexus, Sulfurihydrogenibium, Synechococcus and Thermus remained prevalent over wide ranges of temperature along both the gradients; these genera are known to be physiologically interdependent. Sulfurihydrogenibium scavenges dissolved oxygen from hot spring waters, thereby generating anoxic conditions for organisms such as Chloroflexus and Roseiflexus that, under anaerobic conditions in the presence of light, harvest energy via photoheterotrophy and switch back to chemoheterotrophy in the dark65. At the same time, Synechococcus, by virtue of producing glycolate and fermenting glycogen, has been shown to promote photoheterotrophic growth of Roseiflexus in laboratory mixed cultures66. Roseiflexus, in turn, is capable of synthesizing glycosides and wax esters that can be utilized for gluconeogenesis by other members of the community67.
Studies of microbial ecology in other extreme habitats have characterized many interactions between bacteria and eukaryotic microbes which boost the metabolism and enhance the stress biology of one or more partners54,68. For instance, fungal (and bacterial) saprotrophs can make nutrients available to other microbes, many microbes may co-metabolize complex substrates, and photosynthetic bacteria and algae may release compatible solutes to be used by other microorganisms of the community. So, despite the relatively small numbers of eukaryotes in the Shivlinga habitat, we believe that the ecological success of bacteria (as reflected by their >97% relative abundance in all mat communities) is reliant, in part at least, on these eukaryotes. The presence of the moss Physcomitrella in the Shivlinga microbiome, with its highest relative abundance at the 66 °C site, is also noteworthy. Worldwide, mosses are known to colonize geothermal sites which are characterized by distinct temperature and moisture regimes. Substratum temperatures of up to 75 °C have been recorded at depths of 2.5–5 cm beneath mosses, and temperatures of up to 60 °C have been recorded on moss-covered surfaces69,70. Like fungi, mosses also exhibit a preference for vegetative propagation (over sexual reproduction) to conserve adaptations to environmental challenges such as high temperature71,72. We could not find any evidence for either the sexual stage (the diploid sporophyte) or the intermediate stage (the haploid gametophyte) of Physcomitrella at the Shivlinga site. We suspect, therefore, that Physcomitrella exists within the Shivlinga mats only in its plesiomorphic (ancestral) form, which is characterized by algae-like, thalloid structures that do not have diverse tissue types and are known as juvenile gametophytes49. This plesiomorphic form is made up of microscopic filaments known as protonema that are formed by chains of haploid cells49. A number of studies have shown that Physcomitrella efficiently performs homologous recombination-based, error-free DNA damage repair, an attribute that is likely to confer adaptive advantage in high-temperature habitats that are generally associated with higher rates of DNA damage73. Together with a plausible vegetative/asexual lifestyle, error-free DNA damage repair capabilities can act to reduce genetic variability in Physcomitrella and, ultimately, slow down evolution via conservation of genomes74,75. These attributes may help this bryophyte to colonize Shivlinga via retention of the plesiomorphic form (a phenomenon termed stasigenesis).
Phylogenetic relatives of mesophilic genera in Shivlinga's high-temperature sites
When all the genus-level annotation data obtained via searching the metagenomes against the nr protein sequence and RDP 16 S rRNA gene sequence databases were collated, at least 26 genera were confirmed as present in every mat community that was sampled (Supplementary Tables 31 and 32), while at least 18 additional genera were ubiquitous along the wet, but not the drying, thermal gradient (Supplementary Table 32). It is remarkable that these bacteria have colonized the entire Shivlinga system despite the various thermodynamic constraints (see above). Furthermore, these genera are not all known to encompass thermophilic members, and many of them, for example Clostridium, Lactobacillus, Pseudomonas and Salinibacter, are archetypal 'weeds' that have a robust stress biology and can maintain dominant positions within microbial communities76. The Shivlinga site may, therefore, provide a useful model system for future studies of microbial weed ecology under hydrothermal conditions.
In the genus-level analyses of the metagenomic sequence data, a considerable number of the genera were also found to be present at temperatures outside their windows for growth in vitro. For instance, entities affiliated to at least 11 genera of typical thermophiles were detected at temperatures below the minima recognized for laboratory growth of the cultured strains of those genera (Supplementary Table 33). These were Dictyoglomus, Hydrogenobaculum, Meiothermus, Persephonella, Petrotoga, Sulfurihydrogenibium, Thermodesulfatator, Thermodesulfobacterium, Thermotoga, Thermus and Thermosipho. Conversely, phylogenetic relatives of at least 40 bacterial genera, no cultured member of which has been reported to grow in the laboratory at >45 °C, were present in mat communities at >50 °C (see Table 2 and its references given in Supplementary references). The composition of the vent-water community was also consistent with this finding. For instance, no strain of four out of the 12 genera identified in the 70 °C VW community (Achromobacter, Alcaligenes, Paenibacillus and Propionibacterium) grows at >45 °C, according to in vitro studies. Moreover, strains of some of the reportedly mesophilic genera detected at the high-temperature ends of Shivlinga's hydrothermal gradients - for example, Dolichospermum, Flavobacterium, Magnetospirillum, Planktothrix, Prochlorothrix, Thiohalocapsa, Treponema and Xylella - are not even known to grow in the laboratory at temperatures >30 °C. Interestingly, psychrophilic microbes such as Chryseobacterium and Nitrosomonas77 were present in all the mat communities except VWM and WG3, and their frequencies increased with decreasing temperature. We conclude, therefore, that the Shivlinga hot spring system is not dominated by thermophiles, but hosts an ecophysiologically diversified microbiome that includes several phylogenetic relatives of mesophilic taxa.
Table 2 Microbial genera1 that are composed primarily of mesophilic strains (no laboratory growth reported at >45 °C) and phylogenetic relatives of which were detected in Shivlinga's mat communities growing at >50 °C.
Insights into the biophysics of the Shivlinga ecosystem
Extremes of temperature constrain microbial function at the level of macromolecules, the cell system, ecosystems, and the biosphere. The ecology of Shivlinga's microbiome is undoubtedly influenced by the high temperature. Interestingly, however, although microbial diversity decreases at around 50 and 60 °C, several phyla/genera inhabit locations with temperatures above their in vitro growth limits. This is analogous to the habitability of otherwise hostile chaotropic brines (>2.50 M MgCl2) that is facilitated by kosmotropic (compensatory) effects of sodium and sulfate5,7. Although Shivlinga is a high-temperature and low-pressure habitat for microbes, its spring-water and sinters contain high concentrations of borates, sodium, bicarbonate, thiosulfate and sulfate. In addition, the system has lower, but significant, concentrations of sulfite, sulfide and lithium. Borates, sodium, bicarbonate, thiosulfate, sulfate, sulfite, and sulfide are kosmotropic11, and it is possible that their cumulative kosmotropic activity enables colonization of high-temperature sites by the mesophilic taxa, a phenomenon which parallels the way in which kosmotropes facilitate colonization of chaotropic environments5,7,10,19. We believe that the kosmotropic ions within the Shivlinga system might also enhance the biotic activity of the thermotolerant microbes present.
Within the Puga geothermal area, which is a part of the greater Ladakh-Tibet borax-spring zone44, surface expressions of geothermal activity, including the Shivlinga microbialite spring (GPS coordinates: 33° 13′ 46.2″ N and 78° 21′ 18.7″ E), are largely confined to the eastern part of the 15 km-long and 1 km-wide east-west orientated valley. Information on the geography, geology, and geothermal activity of Puga can be found in Supplementary Note 1. Of the several distinct axes of Shivlinga's spring-water transit/dissipation, two trajectories, representing a progressively-drying and a wet thermal gradient, were investigated for their geomicrobiological characteristics. Collation of the microbiological and geochemical data revealed three distinct geomicrobiological zones within the Shivlinga system. In the first geomicrobiological zone of the Shivlinga microbiome, the vent and the microbialite body together form a unit that is elevated above ground level. The chimney-shaped microbialite is made up of hardened hydrothermal mineral deposits (sinters); the water vented from within the microbialite, at the time of sampling (23 July 2013), was characterized by a temperature of 70 °C, pH 7.0, and a flow-rate of 25 mL s−1. The vent-water community (VW) was sampled from the vent opening located at the microbialite summit (Fig. 1C,I). A microbial mat is anchored to the inner surface of the vent's rim for the entirety of its circumference, and has grown inwards across the surface of the water. Most of the filamentous structures that form this vent-water mat (VWM) are white, but are interspersed with minute green-hued elements (Fig. 1C,I). VW and VWM represented the initial sample-points for both the wet, and the drying, thermal gradients, for which all subsequently-sampled mat communities were named using the prefixes WG and DG, respectively. Starting from where VWM attaches to the sintered rim of the vent, a 1-cm-thick, green mat containing occasional red patches (community DG3) spreads radially over the flat mineral surface that surrounds the opening at the top of Shivlinga's body (Fig. 1C,I). The next community surrounding the vent (sampled as DG4) forms another cm-thick mat which is pigmented purple with a green shimmer; it forms an unbroken ring extending from the DG3 community to the edge of Shivlinga's summit (Fig. 1C,I). In contrast to VWM, DG3 and DG4 are not in contact with a flowing body of water; instead they are located on the moist sinters of the flat summit which surrounds the vent and progressively desiccates towards its outer edge (scraping off sections of the mats revealed no flowing water beneath). VW, VWM, DG3 and DG4 together form a continuum of microbial communities along this drying thermal gradient (Fig. 1C,I). Across a ~10-cm arc of Shivlinga's circular top, the vent-water spills down the south-west-facing side of the microbialite body. The filamentous structures of VWM extend across the rim, along the spring-water flow, and down the side of Shivlinga where they were designated as community WG3 and sampled at a point 22 cm beneath the summit (Fig. 1C,I). WG3, like VWM, appears white, but is interspersed with an apparently greater proportion of green filaments.
The microbialite's base (including the sloping bedrock), the second geomicrobiological zone of the Shivlinga site, has lower temperatures (38–46 °C), lower flow-rates, a pH of ~8, and two morphologically distinct microbial mats. This bedrock, on which the microbialite is situated, is a conical section of sedimentary limestone which protrudes from the regolith. The original surface of this bedrock has been eroded by the spring-water, and coated with new mineral deposits and/or microbial mats. The WG3 streamer, which follows the spring-water down one side of the microbialite body, does not extend beyond the base of Shivlinga. Instead, it gives way to a green mat containing occasional yellow patches (community WG4), which in turn is restricted to the first few tens of centimetres of the slope along the trajectory of the spring-water transit (Fig. 1C,D,I,J). Further along the water flow, a stretch of rust-colored mat (community WG5) covers the rest of the limestone outcrop (Fig. 1D,J).
The apron is the outer (and last) geomicrobiological zone of the Shivlinga microbiome; it starts where the slope of the bedrock ends and extends across a circular area of the regolith that is ~1.5 m in radius around the bedrock. Its surface is composed of a loose, heterogeneous mixture of fluvioglacial sediments, aeolian sand, clay and scree that overlies the bedrock. The terrain of the apron is only gently inclined, so the spring-waters (pH ~8.5) flow very slowly (Table 1) along channels in the regolith (these are 1–2 cm deep and 5–50 cm wide) before dissipating into the dry loose valley-fill material. Across the inner half of the apron, the floors of the outflow channels are covered with intensely red nodules (Fig. 1B,E,H,K) formed by the nucleation of hydrothermal minerals onto microbial cells and extracellular organic substances (community WG6). Within the outer half of the apron, 3 to 10 mm-thick tufts of dark grey streamers (community WG7) grow along the outflow (Fig. 1B,F,H,L). VW, VWM, WG3, WG4, WG5, WG6 and WG7 together form a continuum of microbial communities along this wet thermal gradient.
Sampling and collection of in situ data
At the discrete sample-sites within the Shivlinga system, temperatures were measured with a mercury-column glass thermometer and pH values were measured using Neutralit indicator strips (Merck, Germany). The discharge rate of Shivlinga's vent-water was determined (in mL s−1) using the soluble-tracer dilution technique described previously78. Flow rates of spring-water over the sampled mat communities were determined (in cm s−1) using a modified insoluble-dye tracer technique79.
Shivlinga's vent-water was collected at the same time in separate bottles for chemical and microbiological analyses using separate, 25 mL sterile pipettes for each sampling event. For analyses of water chemistry, VW samples were passed through a 0.22 µm syringe filter. Every 100 mL batch of VW sample meant for the quantification of metallic elements was acidified to pH ≤ 2 by adding 69% (w/v) HNO3 (400 μl). To stop all biotic activity within the VW samples meant for bicarbonate estimation, saturated HgCl2 solution was added at a ratio of 1:20 (v/v, HgCl2:sample). For microbiology, 1000 mL vent-water was passed through a sterile 0.22 µm cellulose acetate filter (Sartorius Stedim Biotech, Germany), following which the filter was put into an 8-mL cryovial containing 50 mM Tris:EDTA (5 mL, pH 7.8) and immediately placed in dry ice.
Every mat community was sampled once for microscopic studies and twice for metagenome analysis: the duplicate sets of metagenomic data generated were used to evaluate the statistical significance of the differences in relative abundance of taxa between communities. For each mat community, in each of the three sampling rounds (one for microscopy and two for metagenomics), fresh and intact material (2.5 cm2 and located 5 mm from the other two mat portions sampled) was scraped off using a sterile scalpel, without disturbing the mat's internal structure or the substratum. The stable structure of the mats and consistent physicochemical conditions of the environment recorded over the years, together with the fresh and intact appearance of the mat samples used for all analyses were consistent with a relative absence of necromass.
Mat samples were put into sterile Petri plates containing 3 mL 70% (v/v) ethanol:acetic acid:formaldehyde (9:1:1 v/v/v) or a 15 mL cryovial containing 5 mL Tris:EDTA (50 mM; pH 7.8), depending on whether they were required for microscopic or metagenomic analysis, respectively. The flakes of precipitating mineral salts were not removed from the mat samples meant for microscopic analysis. Petri plates were sealed with Parafilm (Bemis Company Inc., USA) and packed in polyethylene bags, and the cryovials were sealed using Parafilm and then placed in dry ice. Dry ice in the sample-transportation boxes was replenished en route to the laboratory at Bose Institute (Kolkata). Samples destined for metagenomic analysis were stored in the laboratory at −20 °C, while those for microscopic, chemical or mineralogical studies were kept at 4 °C.
Analyses of vent-water chemistry
Concentrations of boron, calcium, lithium, magnesium, and potassium were determined on a Thermo iCAP Q ICPMS (Thermo Fisher Scientific, USA); standard curves were prepared using ICPMS standards supplied by Sigma Aldrich (USA) and VHG Labs Inc. (USA). Based on replicate analyses of the standards, deviations from actual concentrations were less than 2%, 1%, 2.1%, 1% and 2.5% for boron, calcium, lithium, magnesium and potassium, respectively. Concentrations of sodium were determined using an Agilent 240 AA atomic absorption spectrometer (Agilent Technologies, USA); standard curves were prepared from Sigma Aldrich AAS standards. Based on multiple analyses of the standard, deviations from actual sodium concentrations were <3%. Silicon concentration was determined using a UV-visible spectrophotometer (CARY 100, Varian Deutschland GmbH, Germany), as described previously80. Chloride was quantified by precipitation titration with silver nitrate (0.1 N) using a Titrino 799GPT auto titrator (Metrohm AG, Switzerland). Bicarbonate concentrations were calculated from the dissolved inorganic carbon (DIC) content and total alkalinity of the water samples according to Lewis and Wallace81. Total alkalinity was determined using the Gran titration method (and the Titrino 799GPT auto titrator), and dissolved inorganic carbon was determined by CM 5130 carbon coulometer (UIC Inc., USA). Thiosulfate and sulfate concentrations in the vent-water were determined by iodometric titration, and gravimetric precipitation using barium chloride, respectively82,83. Sulfite was analyzed spectrophotometrically, using pararosaniline hydrochloride (Sigma Aldrich) as the indicator84. Immediately after collecting the vent-water samples, dissolved sulfides (ΣHS−, which includes H2S, HS− and/or Sx2−) were precipitated from them as CdS using cadmium nitrate [Cd(NO3)2] in crimp-sealed butyl septum bottles leaving no head space. After the bottles were brought back to the laboratory, spectrophotometric measurement of sulfides was carried out based on the principle that N, N-dimethyl-p-phenylene diamine dihydrochloride and H2S react stoichiometrically in the presence of FeCl3 and HCl to form a blue-colored complex85. Sulfate, sulfite and thiosulfate were also quantified in the CdS-precipitated, ΣHS−-free VW sample using a Metrohm ion chromatograph (Basic IC plus 883) equipped with a suppressed conductivity detector (Metrohm, IC detector 1.850.9010) and a MetrosepASupp 5 (150/4.0) anion exchange column (Metrohm AG).
Microscopy of mats and mineralogy of sinters
For phase contrast microscopy or laser scanning confocal microscopy, 1-mm2 portions of the mat samples were put onto grease-free glass slides, teased apart gently with sterile needles after adding 50 mM phosphate buffer (pH 7.0) or a few drops of DPX Mountant (Sigma Aldrich), and examined using a BX-50 phase contrast microscope (Olympus Corporation, Japan) or an LSM 510 Meta confocal microscope (Carl Zeiss AG, Germany). Electronic images of the autofluorescence of microbial cells were recorded using confocal microscopy at various excitation and emission-detection wavelengths. To analyze the mineral salts precipitated on the mat samples, the latter were also subjected to EDS following SEM on an FEI Quanta 200 microscope (Field Electron and Ion Company, USA). To give contrast to the biological components of the microbe-mineral assemblages, samples were pre-fixed with vapors of 1% (w/v) osmium tetroxide in deionized water.
Elemental composition of Shivlinga's sinters was determined via standard inorganic tests, and then EDS, EPMA, and XRD. EDS was carried out using an EDAX system attached to the FEI Quanta 200. Quantitative analyses were carried out and atomic density ratios were derived using the GENESIS software controlling the EDAX. XRD was carried out using an XPert Pro X-ray Diffractometer (PANalytical, the Netherlands) equipped with a copper target, operating at 40 kV and 30 mA, and scanned from 4 to 70° 2θ. For EPMA, grain-mount slides of the sinter materials were prepared and elemental composition analyzed using an SX100 EPMA machine (CAMECA, France); electron beam size of 1 µm, accelerating voltage of 15 kV and current of 12 nA were used in the EPMA. The sinter sample slides were also imaged under the EPMA machine via electron backscattering to identify any encrusted microorganisms.
Isolation of total environmental DNA from Shivlinga's vent-water
The filter through which 1000 mL of Shivlinga's vent-water was passed during sampling was cut into pieces with sterile scissors, within the 50-mM-Tris:EDTA (pH 7.8)-containing 8 mL cryovial in which it had come from the sample site. The cryovial was vortexed for 30 min, following which the filter fragments were discarded and the remaining Tris:EDTA was distributed equally to five 1.5 mL microfuge tubes. These were centrifuged (10,800 g for 30 min at 4 °C), and then 900 μl Tris:EDTA was removed by pipette from the top of each tube and discarded. The remaining Tris:EDTA (100 μl) in each tube was vortexed for 15 min and the contents of all five tubes pooled by placing into a single microfuge tube. The pooled Tris:EDTA (500 μl) was again centrifuged (10,800 g for 30 min at 4 °C), following which 400 μl was removed from the top and discarded. The remaining 100 μl contained all of the microbial cells that were present in the original 1000 mL VW sample. DNA was isolated from this 100 μl cell suspension by the QIAamp DNA Mini Kit (Qiagen, Germany), following manufacturer's protocol.
Assessment of metataxonomic diversity in the Shivlinga vent-water community
V3 regions of all 16S rRNA genes present in the total environmental DNA extracted from the VW sample were PCR-amplified using Bacteria-/Archaea-specific universal oligonucleotide primers and sequenced by Ion Torrent Personal Genome Machine (Ion PGM) (Thermo Fisher Scientific), following the fusion primer protocol described previously25,26. For bacterial 16 S rRNA genes, amplification was carried out using the universal forward primer 341f (5′-CCTACGGGAGGCAGCAG-3′) prefixed with an Ion Torrent adapter and a unique sample-specific barcode or multiplex identifier; the reverse primer had a trP1 adapter followed by the universal reverse primer 515r (5′- TATTACCGCGGCTGCTGG-3′). To amplify archaeal 16S rRNA genes, we used the universal forward primer 344 f (5′-AATTGGANTCAACGCCGG-3′) prefixed with an Ion Torrent adapter and a unique sample-specific barcode or multiplex identifier; the reverse primer had a trP1 adapter followed by the universal reverse primer 522r (5′-TCGRCGGCCATGCACCWC-3′). Prior to sequencing, size distribution and DNA concentration within the V3 amplicon pool was checked using a Bioanalyzer 2100 (Agilent Technologies) and adjusted to 26 pM. Amplicons were then attached to the surface of Ion Sphere Particles (ISPs) using an Ion Onetouch 200 Template kit (Thermo Fisher Scientific). Manually enriched, templated-ISPs were then sequenced by PGM on an Ion 316 Chip for 500 flows.
Before retrieval from the sequencing machine, all reads were filtered by the inbuilt PGM software to remove low quality, and polyclonal, sequences; sequences matching the PGM 3′ adaptor were also trimmed. The sequence file was deposited to the NCBI Sequence Read Archive (SRA) with the run accession number SRR2904995 under the BioProject accession number PRJNA296849. Reads were filtered a second time for high quality value (QV 20) and length threshold of 100 bp; OTUs were created at 97% identity level using the various modules of UPARSE86, and singletons were discarded. A Perl programming-script, available within UPARSE, was used to determine the Abundance-based Coverage Estimator, and Shannon and Simpson Indices. Rarefaction analysis was carried out using the Vegan Package in R87. The consensus sequence of every OTU was taxonomically classified by the RDP Classifier tool located at http://rdp.cme.msu.edu/classifier/classifier.jsp.
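As an illustration of the rarefaction step, the sketch below subsamples reads without replacement at increasing depths and records the mean number of distinct OTUs observed. This is a minimal Python analogue of the Vegan-based rarefaction analysis, written for a hypothetical OTU count vector; it is not the R/Perl code actually used in the study.

```python
import numpy as np

def rarefaction_curve(otu_counts, depths, n_iter=100, seed=42):
    """Mean number of distinct OTUs seen when subsampling reads without replacement.

    otu_counts : 1-D array of read counts per OTU (hypothetical example data).
    depths     : iterable of subsampling depths (numbers of reads).
    """
    rng = np.random.default_rng(seed)
    # Expand the count vector into one OTU label per read, e.g. [0, 0, 0, 1, 1, 2, ...]
    reads = np.repeat(np.arange(len(otu_counts)), otu_counts)
    curve = []
    for d in depths:
        d = min(d, reads.size)
        richness = [len(np.unique(rng.choice(reads, size=d, replace=False)))
                    for _ in range(n_iter)]
        curve.append(np.mean(richness))
    return curve

# Hypothetical OTU table for one community (97%-identity OTUs, singletons removed)
otu_counts = np.array([500, 320, 150, 90, 40, 25, 10, 5, 3, 2])
print(rarefaction_curve(otu_counts, depths=[10, 50, 100, 500, 1000]))
```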
Extraction, sequencing and analysis of metagenomes from the mat communities
Total community DNA (metagenome) was extracted separately from each of the duplicate samples available for all the mat communities using PowerMax Soil DNA Isolation Kit (MoBio, Carlsbad, CA, USA). Deep-shotgun sequencing of the 16 metagenomes obtained was carried out as described previously25. The Ion PGM or the Ion Proton platforms (Thermo Fisher Scientific) were employed for this purpose, using 400-bp read chemistry on the Ion 318 chip or 200-bp read chemistry on the PI V2 chip, respectively. All metagenomic readsets (mean read-length 138–231 nucleotides) were deposited to the NCBI SRA under the BioProject PRJNA296849 with the run accession numbers provided in Supplementary Tables 4 and 5.
The two metagenomic readsets sequenced from the sample replicates of each mat community were annotated separately by searching against the nr protein sequence database of NCBI, using the Organism Abundance tool of MG-RAST88. In the process, two independent values were obtained for the relative abundance of every taxon within a community. Subsequently, mean relative abundance of each taxon was calculated using the duplicate values, and then used for comparisons between communities along the hydrothermal gradients. To determine the significance of population fluctuation for a taxon between community 'x' and community 'y', its mean relative abundance values in 'x' and 'y' were compared in combination with the actual ranges of the data (i.e. the independent relative abundance values). From 'x' to 'y', relative abundance of a taxon was considered to have increased significantly when the lower range of its relative abundance in 'y' was greater than the upper range of the corresponding value in 'x'. Similarly, a taxon was considered to have declined significantly from 'x' to 'y' when the upper range of its relative abundance in 'y' was smaller than the lower range of the corresponding value in 'x'. In this way it could be ascertained whether the inferred relative abundance fluctuations were significant.
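The range-overlap rule described above can be restated in a few lines of code. The following is a minimal Python sketch with hypothetical duplicate relative-abundance values; it simply encodes the decision criterion and is not the workflow actually used.

```python
def fluctuation_significance(dup_x, dup_y):
    """Classify the change in a taxon's relative abundance from community x to y.

    dup_x, dup_y : pairs of relative-abundance values from the duplicate
                   metagenomes of communities x and y (hypothetical numbers).
    Returns 'increase', 'decline', or 'not significant'.
    """
    lo_x, hi_x = min(dup_x), max(dup_x)
    lo_y, hi_y = min(dup_y), max(dup_y)
    if lo_y > hi_x:            # the whole range in y lies above the range in x
        return "increase"
    if hi_y < lo_x:            # the whole range in y lies below the range in x
        return "decline"
    return "not significant"   # the two ranges overlap

# Hypothetical duplicate values (% of reads) for one phylum in VWM and DG3
print(fluctuation_significance((58.2, 61.5), (1.8, 2.4)))   # decline
print(fluctuation_significance((1.8, 2.4), (2.0, 3.1)))     # not significant
```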
Within the MG-RAST pipeline, sequences were trimmed so as to contain no more than five bases below the Phred quality score of 15; taxonomic classification of reads was carried out using the Organism Abundance tool following the Best Hit Classification algorithm. For classification down to the phylum-, class- or genus-level, readsets were searched by BlastX against the nr protein sequence database with a minimum alignment length of 45 bp (15 amino acids) and a minimum identity cut-off of 60%. Percentage allocation of metagenomic reads to individual taxa (whether at the phylum-, class- or genus-level) was considered a direct measure of the relative abundance of those taxa within the community in question25,89. Furthermore, to corroborate the genus-level identifications, each readset was searched for 16 S rRNA genes against the RDP database using BlastN with a minimum alignment length of 50 bp and a 70% minimum identity cut-off. A read was assigned to a genus only when it shared >94% 16S rRNA sequence identity with a known species of that genus. The maximum e-value cut-off used in all the above analyses was 1e−5. Matching of DG3 sample-derived metagenomic reads with the 16 S rRNA gene sequences (FN556455 through FN556457) of some mesophilic Paracoccus strains isolated previously from DG3-equivalent (52 °C) Shivlinga mat samples validated that the above method was not only efficient in detecting microorganisms present in small quantities but also capable of classifying metagenomic reads down to the genus level. This procedure also corroborated that Shivlinga's microbial communities, including the mesophilic components, were structurally stable over time.
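To illustrate the read-classification cut-offs described above, the sketch below filters tabular alignment hits by alignment length, percent identity and e-value, and additionally applies the >94% identity criterion used for genus assignment in the 16 S rRNA gene searches. The field names and example hits are hypothetical; the actual classification was performed within MG-RAST.

```python
from collections import Counter

def classify_hits(hits, min_len, min_ident, genus_ident=None, max_evalue=1e-5):
    """Tally taxon assignments from tabular alignment hits.

    hits        : iterable of dicts with keys 'taxon', 'aln_len', 'pct_identity',
                  'evalue' (hypothetical field names for outfmt-6-style output).
    min_len     : minimum alignment length (bp or aa, matching the search type).
    min_ident   : minimum % identity for a hit to be counted at all.
    genus_ident : if set (e.g. 94.0), a hit is assigned to a genus only when its
                  identity exceeds this value, mirroring the 16S rRNA criterion.
    """
    counts = Counter()
    for h in hits:
        if h["evalue"] > max_evalue or h["aln_len"] < min_len:
            continue
        if h["pct_identity"] < min_ident:
            continue
        if genus_ident is not None and h["pct_identity"] <= genus_ident:
            continue
        counts[h["taxon"]] += 1
    return counts

# Hypothetical 16S rRNA hits against RDP (BlastN): >=50 bp, >=70% identity,
# genus assignment only when identity exceeds 94%.
hits = [
    {"taxon": "Thermus", "aln_len": 120, "pct_identity": 97.5, "evalue": 1e-30},
    {"taxon": "Meiothermus", "aln_len": 60, "pct_identity": 88.0, "evalue": 1e-12},
]
print(classify_hits(hits, min_len=50, min_ident=70.0, genus_ident=94.0))
```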
Statistical analysis of associations between variables
Pair-wise Pearson correlation coefficients (r) were determined to quantify the level of association between variations in the mean relative abundances of individual phyla along the wet thermal gradient and the in situ parameters such as temperature, pH, distance from the vent and flow-rate. In these single-inference statistical procedures, a P value < 0.05 was considered indicative of a significant correlation between a pair of variables – this implies a probability of 0.05 that the null hypothesis is rejected mistakenly and the inference is actually insignificant. However, these confidence levels (P values) were applicable to the individual statistical tests and could not be considered simultaneously in the multiple-comparison procedures involving families of tests (such as those for individual phyla versus temperature) that were desirable in the current study. To this end, P values were subjected to correction for multiple testing using the Benjamini-Hochberg method for controlling the false discovery rate90; this was carried out separately for mean relative abundances of individual phyla along the wet thermal gradient versus temperature, pH, distance from vent center, or flow rate. The Benjamini-Hochberg critical value was determined as (i / m) × Q, where i = rank of the individual P value, m = total number of tests, and Q = the false discovery rate (taken here as 20%).
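The correction procedure can be illustrated with a short Python sketch that computes Pearson P values and applies the Benjamini-Hochberg step-up rule with the critical value (i / m) × Q. The abundance and temperature values below are hypothetical placeholders, not the data analyzed in the study.

```python
import numpy as np
from scipy.stats import pearsonr

def benjamini_hochberg(pvals, q=0.20):
    """Mark P values significant at false-discovery rate q.

    Uses the critical value (i / m) * q, where i is the rank of each P value
    (smallest first) and m is the total number of tests.
    """
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= (np.arange(1, m + 1) / m) * q
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True   # every P value up to the k-th ranked one passes
    return significant

# Hypothetical mean relative abundances (% of reads) of three taxa at the six
# mat sites of the wet thermal gradient, tested against in situ temperature (°C).
temperature = np.array([66, 56, 46, 38, 36, 33])
abundances = {
    "Aquificae":     np.array([12.0, 8.0, 3.0, 1.0, 0.8, 0.5]),
    "Cyanobacteria": np.array([0.5, 4.0, 9.0, 14.0, 13.5, 12.0]),
    "Thermotogae":   np.array([1.1, 1.0, 1.2, 0.9, 1.1, 1.0]),
}
pvals = [pearsonr(temperature, v)[1] for v in abundances.values()]
print(dict(zip(abundances, benjamini_hochberg(pvals))))
```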
Quantification of microbial diversity in the mat communities using metagenomic data
Microbial diversity within each mat community was estimated, as described previously25, using the mean relative abundance value for each bacterial phylum/proteobacterial class present to calculate the Simpson Dominance, Shannon–Wiener Diversity and Shannon–Wiener Evenness Indices91. First, to quantify the extent to which the different bacterial phyla/proteobacterial classes dominated a given community, the Simpson Dominance Index was determined using Eq. 1 (below). Here, the $n_i/n$ ratio, denoted as $p_i$, gave the proportion of representation of the ith bacterial phylum/proteobacterial class in the community, while S denoted the total number of such taxa present. The mean percentage of all metagenomic reads ascribable to a bacterial phylum/proteobacterial class (Supplementary Tables 34 and 35) was taken as its $n_i/n$ value (Supplementary Tables 13–20). The Shannon Diversity Index (H) was calculated using Eq. 2 (below): here, the $p_i$ [or $n_i/n$] value of each phylum was taken as above; each $p_i$ value was then multiplied by its natural logarithm ($\mathrm{Ln}\,p_i$); finally, the ($p_i \times \mathrm{Ln}\,p_i$) products obtained for all the bacterial phyla and proteobacterial classes were summed and multiplied by −1. In order to determine whether there is evenness within a community in terms of the prevalence (relative abundance) of bacterial phyla/proteobacterial classes, the Shannon Equitability Index ($E_H$) was calculated using Eq. 3 (below). $E_H$ was determined by dividing the H value of the community by $H_{max}$, which is equal to Ln S (S denoting the total number of phyla and proteobacterial classes present).
$$D=\sum_{i=1}^{S}\left(\frac{n_i}{n}\right)^{2}=\sum p_i^{2}$$
$$H=-\sum_{i=1}^{S}p_i\,\mathrm{Ln}\,p_i$$
$$E_H=\frac{H}{H_{max}}=\frac{H}{\mathrm{Ln}\,S}$$
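Eqs. 1–3 translate directly into code. The following minimal Python sketch computes D, H and EH from a vector of relative abundances; the example community profile is hypothetical.

```python
import math

def diversity_indices(rel_abundance):
    """Simpson Dominance (D), Shannon Diversity (H) and Shannon Equitability (EH).

    rel_abundance : relative abundances (e.g. mean % of reads) of the S bacterial
                    phyla / proteobacterial classes present in one mat community.
    """
    total = sum(rel_abundance)
    p = [x / total for x in rel_abundance if x > 0]   # p_i = n_i / n
    S = len(p)
    D = sum(pi ** 2 for pi in p)                      # Eq. 1
    H = -sum(pi * math.log(pi) for pi in p)           # Eq. 2
    EH = H / math.log(S) if S > 1 else 0.0            # Eq. 3, with H_max = Ln S
    return D, H, EH

# Hypothetical mean read percentages for the major taxa of one community
community = [60.2, 15.1, 8.4, 5.0, 3.3, 2.6, 1.9, 1.5, 1.0, 1.0]
D, H, EH = diversity_indices(community)
print(f"D = {D:.3f}, H = {H:.3f}, EH = {EH:.3f}")
```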
Functional analysis of the mat metagenomes
For each mat community, a complete gene catalogue was prepared by searching, and functionally annotating, its merged metagenomic readset against the EggNOG (evolutionary genealogy of genes: Non-supervised Orthologous Groups) database92. Genes ascribable to Clusters of Orthologous Groups (COGs) of Proteins93 were shortlisted from the catalogue, and the COG-counts under individual functional categories were determined for the community. Whether individual COG-counts under different functional categories across the eight communities were significantly high or low was determined by the Chi Square test. The contingency table for this test (Supplementary Table 36) was constructed with the help of an in-house script (P value < 0.001 was used as the cut-off for deciding whether the representation of COGs within a functional category was high or low for a community). Here, the pair-wise differential COG-counts for two functional categories were placed in the two rows of a 2 × 2 contingency table (with one degree of freedom) and the Chi Square significance test was performed for the two individual communities. Furthermore, hierarchical clustering94 was carried out to quantitatively decipher the relatedness between the mat communities in terms of their enrichment of various COG categories. The heat map depicting the results of the hierarchical cluster analysis was drawn with an in-house R script (Supplementary Note 3) using the complete-linkage method (Johnson's maximum method)95.
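A minimal Python sketch of the two analyses described above is given below, using scipy's chi-square test on a 2 × 2 contingency table and complete-linkage hierarchical clustering. The COG counts and community profiles shown are hypothetical; the original analyses used in-house scripts and R.

```python
import numpy as np
from scipy.stats import chi2_contingency
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical COG counts for two functional categories in two mat communities,
# arranged as the 2 x 2 contingency table described in the text (df = 1).
table = np.array([[180, 95],    # category 1 (e.g. Signal transduction) in communities A and B
                  [120, 160]])  # category 2 (e.g. Cell motility) in communities A and B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}, df = {dof}")  # p < 0.001 taken as significant

# Complete-linkage (Johnson's maximum method) clustering of communities by their
# COG-category richness profiles; the profiles below are hypothetical.
profiles = np.array([[180, 120, 60],
                     [175, 110, 65],
                     [120, 160, 90],
                     [115, 170, 95]])
Z = linkage(profiles, method="complete")
print(fcluster(Z, t=2, criterion="maxclust"))  # two-cluster assignment of the four communities
```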
In addition to the above analyses, diversities of ecophysiologically important functional genes within individual mat communities were separately determined by searching their total metagenomic reads against the Protein Subsystems database using the Functional Abundance tool and Hierarchical Classification algorithm of MG-RAST88 [with minimum alignment length of 45 bp (15 amino acids) and minimum identity cutoff of 60%]; total number of species-level matches for each of the relevant functional genes was also identified. In this way, diversities of true thermophiles (organisms that grow in the laboratory exclusively at ≥80 °C), autotrophs, heterotrophs, sulfur-chemolithotrophs, aerobes, and anaerobes were estimated within each community, and then compared across the hydrothermal gradients.
All sequence data sets generated under this study are available in the NCBI SRA repository under the BioProject accession number PRJNA296849 (https://www.ncbi.nlm.nih.gov/sra).
Wächtershäuser, G. From volcanic origins of chemoautotrophic life to Bacteria, Archaea and Eukarya. Phil. Trans. R. Soc. B. 361, 1787–1806 (2006).
Martin, W., Baross, J., Kelley, D. & Russell, M. J. Hydrothermal vents and the origin of life. Nat. Rev. Microbiol. 6, 805–814 (2008).
Hallsworth, J. E. Microbial unknowns at the saline limits for life. Nature Ecol. Evol. 3, 1503–1504 (2019).
Fields, P. A. Protein function at thermal extremes: balancing stability and flexibility. Comp. Biochem. Physiol., Part A Mol. Integr. Physiol. 129, 417–431 (2001).
Hallsworth, J. E. et al. Limits of life in MgCl2-containing environments: chaotropicity defines the window. Environ. Microbiol. 9, 801–813 (2007).
Chin, J. P. et al. Solutes determine the temperature windows for microbial survival and growth. Proc. Natl. Acad. Sci. 107, 7835–7840 (2010).
Yakimov, M. M. et al. Microbial community of the deep-sea brine Lake Kryos seawater-brine interface is active below the chaotropicity limit of life as revealed by recovery of mRNA. Environ. Microbiol. 17, 364–382 (2015).
Ball, P. & Hallsworth, J. E. Water structure and chaotropicity: their uses, abuses and biological implications. Phys. Chem. Chem. Phys. 17, 8297–8305 (2015).
Bhaganna, P., Bielecka, A., Molinari, G. & Hallsworth, J. E. Protective role of glycerol against benzene stress: insights from the Pseudomonas putida proteome. Curr. Genet. 62, 419–429 (2016).
Cono, V. L. et al. The discovery of Lake Hephaestus, the youngest magnesium-saturated athalassohaline formation on Earth. Sci. Rep. 9, 1679, https://doi.org/10.1038/s41598-018-38444-z (2019).
Cray, J. A., Russell, J. T., Timson, D. J., Singhal, R. S. & Hallsworth, J. E. A universal measure of chaotropicity and kosmotropicity. Environ. Microbiol. 15, 287–296 (2013).
Bhaganna, P. et al. Hydrophobic substances induce water stress in microbial cells. Microbiol. Biotechnol. 3, 701–716 (2010).
Brown, A. D. Microbial water stress physiology (John Wiley and Sons, 1990).
Koynova, R., Brankov, J. & Tenchov, B. Modulation of lipid phase behavior by kosmotropic and chaotropic solutes: Experiment and thermodynamic theory. Eur. Biophys. J. 25, 261–274 (1997).
Wiggins, P. M. High and low density water and resting, active and transformed cells. Cell Biol. Int. 20, 429–435 (1996).
Hribar, B. et al. How ions affect the structure of water. J. Am. Chem. Soc. 124, 12302–12311 (2002).
Hallsworth, J. E., Prior, B. A., Nomura, Y., Iwahara, M. & Timmis, K. N. Compatible solutes protect against chaotrope (ethanol)-induced, nonosmotic water stress. Appl. Environ. Microbiol. 69, 7032–7034 (2003).
This research was financed by Bose Institute as well as Science and Engineering Research Board (SERB), Government of India (GoI) (SERB grant numbers were SR/FT/LS-204/2009 and EMR/2016/002703). Sri Pankaj Kumar Ghosh (Chinsurah, West Bengal, India) provided additional travel grants philanthropically. We are indebted to our friends Asgar Ali, Baishali Ghosh, Bikash Jana, Lotus Sonam, Amrit Pal Singh, Rimjhim Bhattacherjee and Srabana Bhattacherjee for on-field support during the explorations of the Shivlinga site. C.R. and M.J.R. received fellowships from University Grants Commission, GoI. N.M. received fellowship from SERB, GoI. P.P. and R.R. received fellowships from the Council of Scientific & Industrial Research, GoI. S.B. received fellowship from Bose Institute. S.M. received fellowship from Department of Science and Technology, GoI. W.K.O. received a fellowship from the Department of Agriculture, Environment and Rural Affairs (DAERA, Northern Ireland).
Rimi Roy
Present address: Department of Botany, Jagannath Kishore College, Purulia, 723101, West Bengal, India
Department of Microbiology, Bose Institute, P-1/12 CIT Scheme VIIM, Kolkata, 700054, India
Chayan Roy, Moidu Jameela Rameez, Prabir Kumar Haldar, Nibendu Mondal, Prosenjit Pyne, Sabyasachi Bhattacharya, Rimi Roy, Subhrangshu Mandal & Wriddhiman Ghosh
Gas Hydrate Research Group, Geological Oceanography, CSIR-National Institute of Oceanography, Dona Paula, Goa, 403004, India
Aditya Peketi, Svetlana Fernandes & Aninda Mazumdar
Microbiome Program, Center for Individualized Medicine, Mayo Clinic, Rochester, MN-55905, USA
Utpal Bakshi
ARC CoE for Mathematical and Statistical Frontiers, School of Mathematical Sciences, Queensland University of Technology, Brisbane, QLD 4000, Australia
Tarunendu Mapder
Institute for Global Food Security, School of Biological Sciences, Queen's University Belfast, 19 Chlorine Gardens, Belfast, BT9 5DL, Northern Ireland
William Kenneth O'Neill & John Edward Hallsworth
Department of Microbiology, University of Burdwan, Burdwan, West Bengal, 713104, India
Subhra Kanti Mukhopadhyay
Department of Botany, University of Burdwan, Burdwan, West Bengal, 713104, India
Ambarish Mukherjee
Department of Biotechnology, University of North Bengal, Siliguri, West Bengal, 734013, India
Ranadhir Chakraborty
W.G. initiated and developed the study, designed the experiments, analyzed the data, and wrote the initial draft of the manuscript. W.G., J.E.H. and R.C. interpreted all the results and wrote the final version of the manuscript. C.R. anchored the metagenomic work, performed the laboratory experiments and bioinformatic analyses, interpreted the results, and took part in paper writing. S.K.M. and A.Muk. interpreted the results. W.G., C.R., M.J.R., P.K.H. and P.P. carried out the fieldwork. M.J.R., P.K.H., N.M., U.B., P.P., S.B., R.R. and S.M. participated in metagenomic experiments and bioinformatic analyses. A.P., S.F. and A.Maz. carried out the geochemical analyses. T.M. performed the statistical analyses. W.G., P.K.H., P.P. and C.R. performed the microscopic studies. W.K.O. worked towards improving the text and documents. All authors read and vetted the manuscript.
Correspondence to Wriddhiman Ghosh.
The authors declare no competing interests.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Roy, C., Rameez, M.J., Haldar, P.K. et al. Microbiome and ecology of a hot spring-microbialite system on the Trans-Himalayan Plateau. Sci Rep 10, 5917 (2020). https://doi.org/10.1038/s41598-020-62797-z
Does this formula correspond to a series representation of the Dirac delta function $\delta(x)$?
Consider the following formula, which defines a piecewise function that I believe corresponds to a series representation of the Dirac delta function $\delta(x)$. The parameter $f$ is the evaluation frequency and is assumed to be a positive integer, and the evaluation limit $N$ must be selected such that $M(N)=0$, where $M(x)=\sum\limits_{n\le x}\mu(n)$ is the Mertens function.
(1) $\quad\delta(x)=\underset{N,f\to\infty}{\text{lim}}\ 2\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f\ n}\begin{cases} \cos\left(\frac{2 k \pi (x+1)}{n}\right) & x\geq 0 \\ \cos\left(\frac{2 k \pi (x-1)}{n}\right) & x<0 \end{cases},\quad M(N)=0$
The following figure illustrates formula (1) above evaluated at $N=39$ and $f=4$. The red discrete dots in figure (1) below illustrate the evaluation of formula (1) at integer values of $x$. I believe formula (1) always evaluates to exactly $2\ f$ at $x=0$ and exactly to zero at other integer values of $x$.
Figure (1): Illustration of formula (1) for $\delta(x)$
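For readers who want to reproduce the discrete dots, here is a minimal numerical sketch of formula (1) in Python (using sympy's `mobius` and numpy; $N=39$ and $f=4$ are the values used in the figure):

```python
# Minimal sketch of formula (1): the piecewise nested cosine series.
# N = 39 is one of the Mertens-function zeros quoted in this post; f = 4.
from sympy import mobius
import numpy as np

def delta_series(x, N=39, f=4):
    shift = 1.0 if x >= 0 else -1.0
    total = 0.0
    for n in range(1, N + 1):
        mu = int(mobius(n))
        if mu == 0:
            continue
        k = np.arange(1, f * n + 1)
        total += mu / n * np.cos(2 * np.pi * k * (x + shift) / n).sum()
    return 2 * total

print(delta_series(0))                                     # expected: 2*f = 8 (up to rounding)
print([round(delta_series(x), 6) for x in (1, 2, 3, -2)])  # expected: ~0 at other integers
```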
Now consider formula (2) below derived from the integral $f(0)=\int_{-\infty}^{\infty}\delta(x)\ f(x)\, dx$ where $f(x)=e^{-\left| x\right|}$ and formula (1) above for $\delta(x)$ was used to evaluate the integral. Formula (2) below can also be evaluated as illustrated in formula (3) below.
(2) $\quad e^{-\left| 0\right|}=1=\underset{N,f\to\infty}{\text{lim}}\ 4\sum\limits_{n=1}^N\mu(n)\sum\limits_{k=1}^{f\ n}\frac{n\ \cos\left(\frac{2\ \pi\ k}{n}\right)-2\ \pi\ k\ \sin\left(\frac{2\ \pi\ k}{n}\right)}{4\ \pi^2\ k^2+n^2}\,,\quad M(N)=0$
(3) $\quad e^{-\left| 0\right|}=1=\underset{N\to\infty}{\text{lim}}\ \mu(1)\left(\coth\left(\frac{1}{2}\right)-2\right)+4\sum\limits_{n=2}^N\frac{\mu(n)}{4 e \left(e^n-1\right) n}\\\\$ $\left(-2 e^{n+1}+e^n n+e^2 n-e \left(e^n-1\right) \left(e^{-\frac{2 i \pi }{n}}\right)^{\frac{i n}{2 \pi }} B_{e^{-\frac{2 i \pi }{n}}}\left(1-\frac{i n}{2 \pi },-1\right)+e \left(e^n-1\right) \left(e^{-\frac{2 i \pi }{n}}\right)^{-\frac{i n}{2 \pi }} B_{e^{-\frac{2 i \pi }{n}}}\left(\frac{i n}{2 \pi }+1,-1\right)+\left(e^n-1\right) \left(B_{e^{\frac{2 i \pi }{n}}}\left(1-\frac{i n}{2 \pi },-1\right)-e^2 B_{e^{\frac{2 i \pi }{n}}}\left(\frac{i n}{2 \pi }+1,-1\right)\right)+2 e\right),\quad M(N)=0$
The following table illustrates formula (3) above evaluated for several values of $N$ corresponding to zeros of the Mertens function $M(x)$. Note formula (3) above seems to converge to $e^{-\left| 0\right|}=1$ as the magnitude of the evaluation limit $N$ increases.
$$\begin{array}{ccc}
n & \text{$N=n^{\text{th}}$ zero of $M(x)$} & \text{Evaluation of formula (3) for $e^{-\left|0\right|}$} \\
10 & 150 & 0.973479+5.498812\times 10^{-17}\,i \\
20 & 236 & 0.982236-5.786048\times 10^{-17}\,i \\
30 & 358 & 0.988729-6.577234\times 10^{-17}\,i \\
40 & 407 & 0.989363+2.688919\times 10^{-17}\,i \\
50 & 427 & 0.989387+4.472005\times 10^{-17}\,i \\
60 & 785 & 0.995546+6.227858\times 10^{-18}\,i \\
70 & 825 & 0.995466-1.660692\times 10^{-17}\,i \\
80 & 893 & 0.995653-1.188229\times 10^{-17}\,i \\
90 & 916 & 0.995653-3.521051\times 10^{-17}\,i \\
100 & 1220 & 0.997431-1.254901\times 10^{-16}\,i \\
\end{array}$$
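Formula (2) itself can be cross-checked numerically. The sketch below (Python with sympy and numpy) uses some of the Mertens-function zeros quoted in this post; since $f$ is kept finite here, it only approximates the $f\to\infty$ limit and will not reproduce the table for formula (3) exactly:

```python
# Rough check of formula (2) at a few zeros of the Mertens function.
from sympy import mobius
import numpy as np

def formula2(N, f=200):
    total = 0.0
    for n in range(1, N + 1):
        mu = int(mobius(n))
        if mu == 0:
            continue
        k = np.arange(1, f * n + 1)
        num = n * np.cos(2 * np.pi * k / n) - 2 * np.pi * k * np.sin(2 * np.pi * k / n)
        total += mu * (num / (4 * np.pi**2 * k**2 + n**2)).sum()
    return 4 * total

for N in (39, 150, 236):          # values of N with M(N) = 0 quoted in the post
    print(N, formula2(N))         # should drift toward 1 as N and f grow
```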
Finally consider the following three formulas derived from the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ where all three convolutions were evaluated using formula (1) above for $\delta(x)$.
(4) $\quad e^{-\left|y\right|}=\underset{N,f\to\infty}{\text{lim}}\ 4\sum\limits_{n=1}^N\mu(n)\sum\limits_{k=1}^{f\ n}\frac{1}{4\ \pi^2\ k^2+n^2}\begin{cases} n \cos\left(\frac{2\ k\ \pi\ (y+1)}{n}\right)-2\ k\ \pi\ e^{-y} \sin\left(\frac{2\ k\ \pi}{n}\right) & y\geq 0 \\ n \cos\left(\frac{2\ k\ \pi\ (y-1)}{n}\right)-2\ k\ \pi\ e^{y} \sin\left(\frac{2\ k\ \pi}{n}\right) & y<0 \end{cases},\ M(N)=0$
(5) $\quad e^{-y^2}=\underset{N,f\to\infty}{\text{lim}}\ \sqrt{\pi}\sum\limits_{n=1}^N\frac{\mu(n)}{n}\\\\$ $\ \sum\limits_{k=1}^{f\ n}e^{-\frac{\pi\ k\ (\pi\ k+2\ i\ n\ y)}{n^2}}\ \left(\left(1+e^{\frac{4\ i\ \pi\ k\ y}{n}}\right) \cos\left(\frac{2\ \pi\ k}{n}\right)-\sin\left(\frac{2\ \pi\ k}{n}\right) \left(\text{erfi}\left(\frac{\pi\ k}{n}+i\ y\right)+e^{\frac{4\ i\ \pi\ k\ y}{n}} \text{erfi}\left(\frac{\pi\ k}{n}-i\ y\right)\right)\right),\ M(N)=0$
(6) $\quad\sin(y)\ e^{-y^2}=\underset{N,f\to\infty}{\text{lim}}\ \frac{1}{2} \left(i \sqrt{\pi }\right)\sum\limits_{n=1}^{N} \frac{\mu(n)}{n}\sum\limits_{k=1}^{f n} e^{-\frac{(2 \pi k+n)^2+8 i \pi k n y}{4 n^2}} \left(-\left(e^{\frac{2 \pi k}{n}}-1\right) \left(-1+e^{\frac{4 i \pi k y}{n}}\right) \cos\left(\frac{2 \pi k}{n}\right)+\right.\\\\$ $\left.\sin\left(\frac{2 \pi k}{n}\right) \left(\text{erfi}\left(\frac{\pi k}{n}+i y+\frac{1}{2}\right)-e^{\frac{4 i \pi k y}{n}} \left(e^{\frac{2 \pi k}{n}} \text{erfi}\left(-\frac{\pi k}{n}+i y+\frac{1}{2}\right)+\text{erfi}\left(\frac{\pi k}{n}-i y+\frac{1}{2}\right)\right)+e^{\frac{2 \pi k}{n}} \text{erfi}\left(-\frac{\pi k}{n}-i y+\frac{1}{2}\right)\right)\right),\qquad M(N)=0$
Formulas (4), (5), and (6) defined above are illustrated in the following three figures where the blue curves are the reference functions, the orange curves represent formulas (4), (5), and (6) above evaluated at $f=4$ and $N=39$, and the green curves represent formulas (4), (5), and (6) above evaluated at $f=4$ and $N=101$. The three figures below illustrate formulas (4), (5), and (6) above seem to converge to the corresponding reference function for $x\in\mathbb{R}$ as the evaluation limit $N$ is increased. Note formula (6) above for $\sin(y)\ e^{-y^2}$ illustrated in Figure (4) below seems to converge much faster than formulas (4) and (5) above perhaps because formula (6) represents an odd function whereas formulas (4) and (5) both represent even functions.
Figure (2): Illustration of formula (4) for $e^{-\left|y\right|}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue
Figure (3): Illustration of formula (5) for $e^{-y^2}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue
Figure (4): Illustration of formula (6) for $\sin(y)\ e^{-y^2}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue
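The curves in Figure (2) can be reproduced numerically; the sketch below (Python with sympy and numpy) evaluates formula (4) at the finite limits $f=4$ and $N=101$ used for the green curve, so the agreement with $e^{-|y|}$ is only approximate:

```python
# Sketch of formula (4) compared against exp(-|y|) on a small grid.
from sympy import mobius
import numpy as np

def formula4(y, N=101, f=4):
    s = 1.0 if y >= 0 else -1.0
    total = 0.0
    for n in range(1, N + 1):
        mu = int(mobius(n))
        if mu == 0:
            continue
        k = np.arange(1, f * n + 1)
        term = (n * np.cos(2 * np.pi * k * (y + s) / n)
                - 2 * np.pi * k * np.exp(-s * y) * np.sin(2 * np.pi * k / n))
        total += mu * (term / (4 * np.pi**2 * k**2 + n**2)).sum()
    return 4 * total

ys = np.linspace(-3, 3, 13)
approx = np.array([formula4(y) for y in ys])
print(np.max(np.abs(approx - np.exp(-np.abs(ys)))))   # maximum deviation on the grid
```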
Question (1): Is it true formula (1) above is an example of a series representation of the Dirac delta function $\delta(x)$?
Question (2): What is the class or space of functions $f(x)$ for which the integral $f(0)=\int\limits_{-\infty}^\infty\delta(x)\ f(x)\ dx$ and Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ are both valid when using formula (1) above for $\delta(x)$ to evaluate the integral and Fourier convolution?
Question (3): Is formula (1) above for $\delta(x)$ an example of what is referred to as a tempered distribution, or is formula (1) for $\delta(x)$ more general than a tempered distribution?
Formula (1) for $\delta(x)$ above is based on the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ defined in formula (7) below. Whereas the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ evaluated using formula (1) above seems to converge for $y\in\mathbb{R}$, Mellin convolutions such as $f(y)=\int\limits_0^\infty\delta(x-1)\ f\left(\frac{y}{x}\right)\ \frac{dx}{x}$ and $f(y)=\int\limits_0^\infty\delta(x-1)\ f(y\ x)\ dx$ evaluated using formula (7) below typically seem to converge on the half-plane $\Re(y)>0$. I'll note that in general formulas derived from Fourier convolutions evaluated using formula (1) above seem to be more complicated than formulas derived from Mellin convolutions evaluated using formula (7) below which I suspect is at least partially related to the extra complexity of the piece-wise nature of formula (1) above.
(7) $\quad\delta(x+1)+\delta(x-1)=\underset{N,f\to\infty}{\text{lim}}\ 2\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f\ n}\cos\left(\frac{2 k \pi x}{n}\right),\quad M(N)=0$
The conditional convergence requirement $M(N)=0$ stated for formulas (1) to (7) above is because the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ defined in formula (7) above only evaluates to zero at $x=0$ when $M(N)=0$. The condition $M(N)=0$ is required when evaluating formula (7) above and formulas derived from the two Mellin convolutions defined in the preceding paragraph using formula (7) above, but I'm not sure it's really necessary when evaluating formula (1) above or formulas derived from the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ using formula (1) above (e.g. formulas (4), (5), and (6) above). Formula (1) above is based on the evaluation of formula (7) above at $|x|\ge 1$, so perhaps formula (1) above is not as sensitive to the evaluation of formula (7) above at $x=0$. Formula (1) above can be seen as taking formula (7) above, cutting out the strip $-1\le x<1$, and then gluing the two remaining halves together at the origin. Nevertheless I usually evaluate formula (1) above and formulas derived from the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ using formula (1) above at $M(N)=0$ since it doesn't hurt anything to restrict the selection of $N$ to this condition and I suspect this restriction may perhaps lead to faster and/or more consistent convergence.
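The role of the condition $M(N)=0$ in formula (7) can be seen directly at $x=0$: every cosine equals $1$ there, so the partial sum collapses to $2 f M(N)$ and vanishes exactly when $N$ is a zero of the Mertens function. A small check (Python with sympy):

```python
# Formula (7) at x = 0 equals 2*f*M(N), so it vanishes only at zeros of M.
from sympy import mobius

def mertens(N):
    return sum(int(mobius(n)) for n in range(1, N + 1))

def formula7_at_zero(N, f=4):
    # at x = 0 the inner k-sum is just f*n terms equal to 1
    return 2 * sum(int(mobius(n)) * (f * n) // n for n in range(1, N + 1))

for N in (20, 39, 40, 101):
    print(N, mertens(N), formula7_at_zero(N))   # second column times 2*f equals the third
```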
See this answer I posted to one of my own questions on Math StackExchange for more information on the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ and examples of formulas derived from Mellin convolutions using this representation. See my Math StackExchange question related to nested Fourier series representation of $h(s)=\frac{i s}{s^2-1}$ for information on the more general topic of nested Fourier series representations of other non-periodic functions.
analytic-number-theory
fourier-analysis
sequences-and-series
schwartz-distributions
edited Sep 30, 2020 at 2:33
Steven Clark
$\begingroup$ Since this question mentions the Mertens function, rather than several analysis tags I would tag it as (analytic) number theory $\endgroup$
– Mizar
$\begingroup$ @Mizar Thanks for your suggestion. I changed the fourier-transform tag to analytic-number-therory. $\endgroup$
– Steven Clark
$\begingroup$ $$\int_{-\infty}^\infty \delta(x)f(y-x)\,dx$$ makes no sense in traditional math (e.g. see encyclopediaofmath.org/wiki/Generalized_function). $\endgroup$
$\begingroup$ @user64494 $g(x)\to\delta(x)$ if $\forall\,f(x)\in C^\infty_c(\Bbb{R}), \int_{-\infty}^\infty g(x)f(x)dx\to f(0)$. Most representations of $\delta(x)$ are limit representations (e.g. see formulas 34-40 at mathworld.wolfram.com/DeltaFunction.html and functions.wolfram.com/GeneralizedFunctions/DiracDelta/09). Formula (1) above is of interest to me because it is a series representation. $\endgroup$
$$\sum_k e^{2i\pi kx} = \sum_m \delta(x-m)$$
Convergence in the sense of distributions
$$\lim_{N\to \infty,M(N)=0}\sum_{n=1}^N \frac{\mu(n)}{n} \sum_k e^{2i\pi kx/n} =\lim_{N\to \infty,M(N)=0}\sum_{n=1}^N \mu(n) \sum_m\delta(x-mn)$$ $$=\lim_{N\to \infty,M(N)=0}\sum_{l\ge 1}(\delta(x+l)+\delta(x-l))\sum_{d| l,d\le N} \mu(d) =\delta(x+1)+\delta(x-1)$$
answered Jun 9, 2020 at 2:40
reuns
$\begingroup$ Can you ground your "Convergence in the sense of distributions". TIA. $\endgroup$
I suspect the original formula for $\delta(x)$ defined in my question above is not quite correct as the associated derived formula for $\delta'(x)$ has a discontinuity at $x=0$. The definition of $\delta(x)$ in formula (1) below eliminates the piecewise nature of my original formula which resolves this problem and also seems to provide simpler results for formulas derived via the Fourier convolution defined in formula (2) below. The formula for $\delta(x)$ defined in formula (1) below also seems to provide the ability to derive formulas for a wider range of functions via the Fourier convolution defined in formula (2) below. The evaluation limit $f$ in formula (1) below is the evaluation frequency and assumed to be a positive integer. When evaluating formula (1) below (and all formulas derived from it) the evaluation limit $N$ must be selected such that $M(N)=0$ where $M(x)$ is the Mertens function. Formula (1) is illustrated in Figure (1) further below. I believe the series representation of $\delta(x)$ defined in formula (1) below converges in a distributional sense.
(1) $\quad\delta(x)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sum\limits_{n=1}^N\frac{\mu(n)}{n}\left(\sum\limits_{k=1}^{f\ n}\left(\cos\left(\frac{2 \pi k (x-1)}{n}\right)+\cos\left(\frac{2 \pi k (x+1)}{n}\right)\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n}\cos\left(\frac{\pi k x}{n}\right)\right)$
(2) $\quad g(y)=\int\limits_{-\infty}^\infty\delta(x)\,g(y-x)\,dx$
Formula (1) for $\delta(x)$ above leads to formulas (3a) and (3b) for $\theta(x)$ below (illustrated in Figures (2) and (3) further below) and formula (4) for $\delta'(x)$ below (illustrated in Figure (4) further below). Note formula (3b) for $\theta(x)$ below contains a closed form representation of the two nested sums over $k$ in formula (3a) for $\theta(x)$ below.
(3a) $\quad\theta(x)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\frac{1}{2}+\frac{1}{\pi}\sum\limits_{n=1}^N\mu(n)\left(\sum\limits_{k=1}^{f\ n}\frac{\cos\left(\frac{2 \pi k}{n}\right) \sin\left(\frac{2 \pi k x}{n}\right)}{k}-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} \frac{\sin\left(\frac{\pi k x}{n}\right)}{k}\right)$
(3b) $\quad\theta(x)=\underset{\underset{M(N)=0}{N\to\infty}}{\text{lim}}\quad\frac{1}{2}+\frac{i}{4 \pi}\sum\limits_{n=1}^N\mu(n) \left(\log\left(1-e^{\frac{2 i \pi (x-1)}{n}}\right)-\log\left(1-e^{\frac{i \pi x}{n}}\right)+\log\left(1-e^{\frac{2 i \pi (x+1)}{n}}\right)-\log\left(1-e^{-\frac{2 i \pi (x-1)}{n}}\right)+\log\left(1-e^{-\frac{i \pi x}{n}}\right)-\log\left(1-e^{-\frac{2 i \pi (x+1)}{n}}\right)\right)$
(4) $\quad\delta'(x)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\pi\sum\limits_{n=1}^N\frac{\mu(n)}{n^2}\left(\sum\limits_{k=1}^{f\ n} -2 k \left(\sin \left(\frac{2 \pi k (x-1)}{n}\right)+\sin \left(\frac{2 \pi k (x+1)}{n}\right)\right)+\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} k\ \sin\left(\frac{\pi k x}{n}\right)\right)$
The following formulas are derived from the Fourier convolution defined in formula (2) above using the series representation of $\delta(x)$ defined in formula (1) above. All of the formulas defined below seem to converge for $x\in\mathbb{R}$. Note one of the two nested sums over $k$ in formula (6) below for $e^{-y^2}$ has a closed form representation. Both of the nested sums over $k$ in formulas (5), (8), and (9) below have closed form representations which were not included below because they're fairly long and complex.
(5) $\quad e^{-|y|}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sum\limits_{n=1}^N\mu(n)\ n\left(\sum\limits_{k=1}^{f\ n}\frac{2 \left(\cos\left(\frac{2 \pi k (y-1)}{n}\right)+\cos\left(\frac{2 \pi k (y+1)}{n}\right)\right)}{4 \pi^2 k^2+n^2}-\sum\limits_{k=1}^{2\ f\ n}\frac{\cos\left(\frac{\pi k y}{n}\right)}{\pi^2 k^2+n^2}\right)$
(6) $\quad e^{-y^2}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sqrt{\pi}\sum\limits_{n=1}^N\frac{\mu(n)}{n}\left(\sum\limits_{k=1}^{f\ n} e^{-\frac{\pi^2 k^2}{n^2}} \left(\cos\left(\frac{2 \pi k (y-1)}{n}\right)+\cos\left(\frac{2 \pi k (y+1)}{n}\right)\right)-\frac{1}{4}\sum\limits_{k=1}^{2\ f\ n} \left(e^{-\frac{\pi k (\pi k+4 i n y)}{4 n^2}}+e^{-\frac{\pi k (\pi k-4 i n y)}{4 n^2}}\right)\right)$
$\qquad\quad=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sqrt{\pi}\sum\limits_{n=1}^N\frac{\mu (n)}{n}\left(\frac{1}{2} \left(\vartheta_3\left(\frac{\pi (y-1)}{n},e^{-\frac{\pi^2}{n^2}}\right)+\vartheta_3\left(\frac{\pi (y+1)}{n},e^{-\frac{\pi^2}{n^2}}\right)-2\right)-\frac{1}{4} \sum\limits_{k=1}^{2\ f\ n} \left(e^{-\frac{\pi k (\pi k+4 i n y)}{4 n^2}}+e^{-\frac{\pi k (\pi k-4 i n y)}{4 n^2}}\right)\right)$
(7) $\quad\sin(y)\ e^{-y^2}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sqrt{\pi } \sum\limits_{n=1}^N\frac{\mu (n)}{n}\left(2 \sum\limits_{k=1}^{f\ n} e^{-\frac{\pi^2 k^2}{n^2}-\frac{1}{4}} \cos\left(\frac{2 \pi k}{n}\right) \sinh\left(\frac{\pi k}{n}\right) \sin\left(\frac{2 \pi k y}{n}\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} e^{-\frac{\pi^2 k^2}{4 n^2}-\frac{1}{4}} \sinh\left(\frac{\pi k}{2 n}\right) \sin\left(\frac{\pi k y}{n}\right)\right)$
(8) $\quad\frac{1}{y^2+1}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\pi\sum\limits_{n=1}^N\frac{\mu (n)}{n}\left(2 \sum\limits_{k=1}^{f\ n} e^{-\frac{2 \pi k}{n}} \cos\left(\frac{2 \pi k}{n}\right) \cos\left(\frac{2 \pi k y}{n}\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} e^{-\frac{\pi k}{n}} \cos\left(\frac{\pi k y}{n}\right)\right)$
(9) $\quad\frac{y}{y^2+1}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\pi\sum\limits_{n=1}^N\frac{\mu(n)}{n}\left(2\sum\limits_{k=1}^{f\ n} e^{-\frac{2 \pi k}{n}} \cos\left(\frac{2 \pi k}{n}\right) \sin\left(\frac{2 \pi k y}{n}\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} e^{-\frac{\pi k}{n}} \sin\left(\frac{\pi k y}{n}\right)\right)$
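Of these, formula (8) is particularly easy to check numerically because the exponential factors make both $k$-sums converge quickly. A sketch (Python with sympy and numpy), assuming $f=4$ and $N=101$ with $M(N)=0$; agreement with $\frac{1}{y^2+1}$ is approximate at these finite limits:

```python
# Sketch of formula (8) compared against 1/(y^2 + 1).
from sympy import mobius
import numpy as np

def formula8(y, N=101, f=4):
    total = 0.0
    for n in range(1, N + 1):
        mu = int(mobius(n))
        if mu == 0:
            continue
        k1 = np.arange(1, f * n + 1)
        k2 = np.arange(1, 2 * f * n + 1)
        s1 = (np.exp(-2 * np.pi * k1 / n) * np.cos(2 * np.pi * k1 / n)
              * np.cos(2 * np.pi * k1 * y / n)).sum()
        s2 = (np.exp(-np.pi * k2 / n) * np.cos(np.pi * k2 * y / n)).sum()
        total += mu / n * (2 * s1 - 0.5 * s2)
    return np.pi * total

for y in (0.0, 0.5, 1.0, 2.0):
    print(y, formula8(y), 1 / (y**2 + 1))
```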
The remainder of this answer illustrates formula (1) for $\delta(x)$ above and some of the other formulas defined above all of which were derived from formula (1). The observational convergence of these derived formulas provides evidence of the validity of formula (1) above.
Figure (1) below illustrates formula (1) for $\delta(x)$ evaluated at $f=4$ and $N=39$. The discrete portion of the plot illustrates that formula (1) for $\delta(x)$ evaluates exactly to $2 f$ times the step size of $\theta(x)$ at integer values of $x$ when $|x|<N$.
Figure (2) below illustrates the reference function $\theta(x)$ in blue and formulas (3a) and (3b) for $\theta(x)$ in orange and green respectively where formula (3a) is evaluated at $f=4$ and formulas (3a) and (3b) are both evaluated at $N=39$.
Figure (2): Illustration of formulas (3a) and (3b) for $\theta(x)$ (orange and green)
Figure (3) below illustrates the reference function $\theta(x)$ in blue and formula (3b) for $\theta(x)$ evaluated at $N=39$ and $N=101$ in orange and green respectively.
Figure (3): Illustration of formula (3b) for $\theta(x)$ evaluated at $N=39$ and $N=101$ (orange and green)
Figures (2) and (3) above illustrate that formulas (3a) and (3b) evaluate with a slight slope relative to the reference function $\theta(x)$, and Figure (3) illustrates that the magnitude of this slope decreases as the magnitude of the evaluation limit $N$ increases. This slope is given by $-\frac{3}{4}\sum\limits_{n=1}^N\frac{\mu(n)}{n}$, which corresponds to $-0.0378622$ at $N=39$ and $-0.0159229$ at $N=101$. Since $-\frac{3}{4}\sum\limits_{n=1}^\infty\frac{\mu(n)}{n}=0$, formulas (3a) and (3b) converge to the reference function $\theta(x)$ as $N\to\infty$ (and as $f\to\infty$ for formula (3a)).
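The two quoted slope values can be recomputed directly (Python with sympy):

```python
# Recompute the slope -(3/4) * sum_{n <= N} mu(n)/n at N = 39 and N = 101.
from sympy import mobius

for N in (39, 101):
    slope = -0.75 * sum(int(mobius(n)) / n for n in range(1, N + 1))
    print(N, slope)   # the post quotes -0.0378622 and -0.0159229
```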
Figure (4) below illustrates formula (4) for $\delta'(x)$ above evaluated at $f=4$ and $N=39$. The red discrete portion of the plot illustrates the evaluation of formula (4) for $\delta'(x)$ at integer values of $x$.
Figure (4): Illustration of formula (4) for $\delta'(x)$
Figure (5) below illustrates the reference function $\frac{y}{y^2+1}$ in blue and formula (9) for $\frac{y}{y^2+1}$ above evaluated at $f=4$ and $N=101$.
Figure (5): Illustration of formula (9) for $\frac{y}{y^2+1}$
answered Jan 3, 2021 at 2:42
My question above, original answer, and this new answer are all based on analytic formulas for
$$u(x)=-1+\theta(x+1)+\theta(x-1)\tag{1}\,.$$
and its derivative
$$u'(x)=\delta(x+1)+\delta(x-1)\tag{2}\,.$$
My question and original answer are both based on the analytic formula
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\left(\sum\limits_{n=1}^N\frac{\mu(n)}{n}\,\left(1+2\sum\limits_{k=1}^{f\,n}\cos\left(\frac{2\,\pi\,k\,x}{n}\right)\right)\right)\tag{3}$$
where the evaluation frequency $f$ is assumed to be a positive integer and
$$M(N)=\sum\limits_{n=1}^N \mu(n)\tag{4}$$
is the Mertens function.
Formula (3) above simplifies to
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\left(2\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f\ n}\cos\left(\frac{2 k \pi x}{n}\right)\right)\tag{5}$$
$$\sum\limits_{n=1}^\infty\frac{\mu(n)}{n}=\frac{1}{\zeta(1)}=0\,.\tag{6}$$
I originally defined formulas (3) and (5) above in this answer I posted to one of my own questions on Math StackExchange.
The formula in my question above wasn't quite right as it wasn't a smooth function at $x=0$ (there's a discontinuity in the first-order derivative corresponding to $\delta'(x)$). My original answer fixed this problem but still required the upper evaluation limit $N$ be selected such that $M(N)=0$.
This new answer is based on the analytic formula
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N \mu(n) \left(-2 f \text{sinc}(2 \pi f x)+\frac{1}{n}\sum\limits_{k=1}^{f\,n} \left(\cos\left(\frac{2 \pi (k-1) x}{n}\right)+\cos\left(\frac{2 \pi k x}{n}\right)\right)\right)\right)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N \mu(n) \left(-2 f\,\text{sinc}(2 \pi f x)+\frac{\sin(2 \pi f x) \cot\left(\frac{\pi x}{n}\right)}{n}\right)\right)\tag{7}$$
which no longer requires $N$ to be selected such that $M(N)=0$.
Formula (7) above is a result related to this answer I posted to another one of my questions on Math Overflow and this answer I posted to a related question on Math StackExchange.
I believe formula (7) above is exactly equivalent to
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{f\to\infty}{\text{lim}}\left(2 f\ \text{sinc}(2 \pi f (x+1))+2 f\ \text{sinc}(2 \pi f (x-1))\right)\tag{8}$$
in that formulas (7) and (8) above both have the same Maclaurin series.
My original answer and this new answer are based on the relationship
$$\delta(x)=\frac{1}{2}\left(u'(x+1)+u'(x-1)-\frac{1}{2} u'\left(\frac{x}{2}\right)\right)\tag{9}$$
which using formula (7) above for $u'(x)$ leads to
$\delta(x)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N\mu(n) \Bigg(f (-\text{sinc}(2 \pi f (x+1))-\text{sinc}(2 \pi f (x-1))+\text{sinc}(2 \pi f x))+\right.$ $\left.\frac{1}{2 n}\left(\sum\limits_{k=1}^{f\,n}\left(\cos\left(\frac{2 \pi (k-1) (x+1)}{n}\right)+\cos\left(\frac{2 \pi k (x+1)}{n}\right)+\cos\left(\frac{2 \pi (k-1) (x-1)}{n}\right)+\cos\left(\frac{2 \pi k (x-1)}{n}\right)\right)-\frac{1}{2} \sum\limits_{k=1}^{2 f\,n}\left(\cos\left(\frac{\pi (k-1) x}{n}\right)+\cos\left(\frac{\pi k x}{n}\right)\right)\right)\Bigg)\right)$
$$=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N\mu(n)\left(f (-\text{sinc}(2 \pi f (x+1))-\text{sinc}(2 \pi f (x-1))+\text{sinc}(2 \pi f x))+\frac{\sin(2 \pi f (x+1)) \cot\left(\frac{\pi (x+1)}{n}\right)+\sin(2 \pi f (x-1)) \cot\left(\frac{\pi (x-1)}{n}\right)-\frac{1}{2} \sin(2 \pi f x) \cot\left(\frac{\pi x}{2 n}\right)}{2 n}\right)\right)\tag{10}$$
I believe the formula (10) above is exactly equivalent to the integral representation
$$\delta(x)=\underset{f\to\infty}{\text{lim}}\left(\int\limits_{-f}^f e^{2 i \pi t x}\,dt\right)=\underset{f\to\infty}{\text{lim}}\left(2 f\ \text{sinc}(2 \pi f x)\right)\tag{11}$$
in that formulas (10) and (11) above both have the same Maclaurin series.
Now consider the slightly simpler analytic formula
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1}\left(\frac{1}{2}+\sum\limits_{k=1}^{2 f (2 n-1)} (-1)^k \cos\left(\frac{\pi k x}{2 n-1}\right)\right)\right)=\underset{N,f\to\infty}{\text{lim}}\left(\frac{1}{2}\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1} \sec\left(\frac{\pi x}{4 n-2}\right) \cos\left(\pi x \left(2 f+\frac{1}{4 n-2}\right)\right)\right)\tag{12}$$
which also no longer requires $N$ to be selected such that $M(N)=0$.
I believe formula (12) above is exactly equivalent to formulas (7) and (8) above in that all three formulas have the same Maclaurin series.
The Maclaurin series terms for formula (12) above can be derived based on the relationship
$$\sum\limits_{n=1}^\infty\frac{\mu(2 n-1)}{(2 n-1)^s}=\frac{1}{\lambda(s)}\,,\quad\Re(s)\ge 1\tag{13}$$
where $\lambda(s)=\left(1-2^{-s}\right)\,\zeta(s)$ is the Dirichlet lambda function. I believe formula (13) above is valid for $\Re(s)>\frac{1}{2}$ assuming the Riemann hypothesis.
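Formula (13) is easy to sanity-check numerically at $s=2$, where $\lambda(2)=\left(1-2^{-2}\right)\zeta(2)=\frac{\pi^2}{8}$, so the partial sums should approach $\frac{8}{\pi^2}\approx 0.8106$ (Python with sympy):

```python
# Partial sums of sum_{n>=1} mu(2n-1)/(2n-1)^2, which should approach 8/pi^2.
from sympy import mobius
import numpy as np

partial = sum(int(mobius(2 * n - 1)) / (2 * n - 1)**2 for n in range(1, 10001))
print(partial, 8 / np.pi**2)
```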
The relationship in formula (9) above and formula (12) for $u'(x)$ above leads to
$\delta(x)=\underset{N,f\to\infty}{\text{lim}}\left(\frac{1}{2}\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1}\left(\frac{3}{4}+\sum\limits_{k=1}^{2 f (2 n-1)} (-1)^k \left(\cos\left(\frac{\pi k (x+1)}{2 n-1}\right)+\cos\left(\frac{\pi k (x-1)}{2 n-1}\right)\right)\right.\right.$ $\left.\left.-\frac{1}{2}\sum\limits_{k=1}^{4 f (2 n-1)} (-1)^k \cos\left(\frac{\pi k x}{2 (2 n-1)}\right)\right)\right)$
$=\underset{N,f\to\infty}{\text{lim}}\left(\frac{1}{4}\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1} \left(\sec\left(\frac{\pi (x+1)}{4 n-2}\right) \cos\left(\pi (x+1) \left(2 f+\frac{1}{4 n-2}\right)\right)+\sec\left(\frac{\pi (x-1)}{4 n-2}\right) \cos\left(\pi (x-1) \left(2 f+\frac{1}{4 n-2}\right)\right)-\frac{1}{2} \sec\left(\frac{\pi x}{2 (4 n-2)}\right) \cos\left(\frac{1}{2} \pi x \left(4 f+\frac{1}{4 n-2}\right)\right)\right)\right)\tag{14}$
which I believe is exactly equivalent to formula (10) above and the integral representation in formula (11) above in that all three formulas have the same Maclaurin series.
Kernel Of A Matrix Calculator
Call the transformation T, so that T(x) = Ax for a matrix A. The word "kernel" is used for several different things on this page, and it helps to keep them apart. In linear algebra, the kernel (or null space) of A, written Null(A), is the subspace of vectors x for which Ax = 0; it always contains the zero vector, since A·0 = 0. In machine learning, "kernel" refers to the kernel trick: a problem is formulated in terms of a kernel function K(x, x′) = φ(x)·φ(x′), so that only pairwise similarities between datapoints are needed; for example, a combined kernel over 6 proteins and 6 drugs gives a 36 × 36 kernel matrix, and multi-kernel methods optimize such functions jointly for non-negative matrix factorization and support vector machines. In image processing, a kernel is a small matrix, such as a sampled Gaussian with its normalization constant, that is convolved with an image. Operating-system kernels (Linux, FreeBSD) share the name but are unrelated to the mathematics here. Two linear-algebra facts worth keeping in mind for what follows: reduced row echelon form (RREF) is the standard tool for computing a kernel, and the image of A is orthogonal to the kernel of Aᵀ, which is one way the row space, column space and null space of a matrix and its transpose fit together (rank and nullity). A small kernel (Gram) matrix in the machine-learning sense is sketched below.
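As a concrete illustration of the machine-learning sense of "kernel", here is a minimal sketch of a kernel (Gram) matrix for an RBF kernel, built from a toy NumPy array with one datapoint per row; the array shape and the value of gamma are illustrative only:

```python
# Kernel (Gram) matrix K with K[i, j] = k(x_i, x_j) for an RBF kernel.
import numpy as np

def rbf_kernel_matrix(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-gamma * d2)

X = np.random.default_rng(0).normal(size=(6, 3))    # 6 datapoints, 3 features
K = rbf_kernel_matrix(X)
print(K.shape)        # (6, 6): symmetric, with ones on the diagonal
```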
A matrix is said to be singular if its determinant is zero and non-singular otherwise; for a square matrix, being singular is the same as having a kernel that contains more than the zero vector. What is a "kernel" in linear algebra? A vector v is in the kernel of a matrix A if and only if Av = 0, so the kernel (null space) of A is the set of solutions of the homogeneous linear system Ax = 0. To calculate a basis of the kernel, row-reduce A to reduced row echelon form, identify the pivot and free variables, and express the general solution in terms of the free variables; each free variable contributes one basis vector. For example, if the reduction of a three-variable system leaves z free and solving for x and y in terms of z gives (x, y, z) = c(-1, 1/2, 1) for any constant c, then the kernel is the one-dimensional vector space spanned by (-1, 1/2, 1) and, by rank-nullity, the image is a two-dimensional space. Computer algebra systems perform the same computation directly; Sage, for instance, reports a kernel as "Free module of degree 4 and rank 2 over Integer Ring, Echelon basis matrix: [1 0 -3 2], [0 1 -2 1]", and the singular value decomposition offers another route, since the right singular vectors belonging to zero singular values span the kernel. Equivalently, the kernel of the linear transformation T(x) = Ax is the set of all zeros of the transformation. If a matrix is invertible, then it represents a bijective linear map, its reduced row echelon form is the identity matrix, and in particular its kernel is trivial.
Matrix multiplication, of matrices with vectors and with other matrices, is the operation underneath all of this: Ax = 0 is simply a matrix-vector product set equal to the zero vector. A matrix is in reduced row echelon form (rref) when every nonzero row has a leading 1, each leading 1 is the only nonzero entry in its column, each leading 1 lies to the right of the one in the row above, and any all-zero rows sit at the bottom. The kernel of a linear map L is the solution set of the homogeneous linear equation L(x) = 0, and the rank-nullity theorem says that for an m × n matrix the rank (dimension of the column space) and the nullity (dimension of the kernel) add up to n. Two standard side facts: the kernel of a rotation in the plane consists only of the zero point, and a 2 × 2 matrix with two distinct nonzero eigenvalues has four square roots. A typical assignment example: the matrix B = [[1, 0, -3, 2], [0, 1, -2, 1]] is already in reduced row echelon form, its pivot variables are x1 and x2, its free variables are x3 and x4, and solving Bx = 0 gives the basis vectors (3, 2, 1, 0) and (-2, -1, 0, 1) for the kernel, so B has rank 2 and nullity 2. A SymPy version of this computation is sketched below.
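A sketch of the same computation with SymPy; `nullspace()` returns one basis vector per free variable of the reduced row echelon form, and `rank()` plus the number of basis vectors equals the number of columns:

```python
# Kernel (null space) of the matrix B from the assignment example above.
from sympy import Matrix

B = Matrix([[1, 0, -3, 2],
            [0, 1, -2, 1]])
basis = B.nullspace()
for v in basis:
    print(v.T)                    # (3, 2, 1, 0) and (-2, -1, 0, 1)
print(B.rank(), len(basis))       # rank 2 + nullity 2 = 4 columns
```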
More generally, for any function f: A → B the kernel (also called the null space) is Ker(f) = {x in A : f(x) = 0}, so the matrix definition above is a special case. Row reduction changes neither the row space nor the kernel: the row space of A equals the row space of rref(A), and the rank and nullspace of A are the same as those of its Gauss-Jordan form, which is why the pivot/free-variable recipe works; in other words, the kernel of the matrix U obtained at the end of elimination, which is in reduced row echelon form, is computed by writing the pivot variables in terms of the free (non-pivot) variables. The image comes out of the same reduction in a complementary way: the columns of the original matrix corresponding to the leading 1s of the RREF span the image (column space), and in general the image of a linear transformation is the span of the images of the basis vectors. The (right) null space can also be packaged as a matrix X = null(A) with AX = 0, where X is n × (n − r) and r = rank(A) ≤ min(m, n). A few related facts: the determinant of a square matrix is the product of all its eigenvalues with multiplicities, so a zero determinant is equivalent to a nontrivial kernel; the classical adjoint (adjugate) of A is the transpose of its cofactor matrix; and to invert a matrix by hand, set the (square) matrix beside the identity matrix of the same dimension and row-reduce, after which the inverse appears on the right. Any n × n real matrix A defines a linear transformation from n-dimensional Euclidean space to itself, and a function T from X to Y is called invertible if the equation T(x) = y has a unique solution x in X for each y in Y; for the matrix map this happens exactly when the kernel is trivial. Finally, in the image-processing sense of the word, a Gaussian kernel is usually built by sampling a normal distribution on a discrete grid and normalizing the result; because the window truncates the tails, a small fraction of the curve's area falls outside the discrete kernel, which raises the practical questions of whether to sample the continuous kernel or use a better approximation, and of how to normalize the computed discrete kernel to account for the truncation. A minimal construction is sketched below.
Pressing [MENU]→Matrix & Vector→Determinant to pastes the Det command. Book a uni open day. These kernels are well known and widely used, kernel for D is used for noise filtration. All methods. The Null Space Calculator will find a basis for the null space of a matrix for you, and show all steps in the process along the way. (a) Using the basis f1;x;x2gfor P 2, and the standard basis for R2, nd the matrix representation of T. Linear Least Squares Calculator. The mdadm tool. Adjoint of a Matrix Let A = [ a i j ] be a square matrix of order n. External databases are also supported (Review the VMware Product Interoperability Matrix for list of externally supported DBs). Image Sharpening with a Laplacian Kernel. In recent years, Kernel methods have received major attention, particularly due to the increased popularity of the Support Vector Machines. -A Matrix Code live wallpaper with various customizable animated. Here is a simple online linearly independent or dependent calculator to find the linear dependency and in-dependency between vectors. Calculator Features Nested Do-Loops As with most things in JMP, there After convergence, you will have to convert Z values (and bounds) back to CDF values using a normal distribution calculator or table. See Figure 9. Each kernel is useful for a spesific task, such as sharpening, blurring, edge detection, and more. Reduced Row Echelon Form of a Matrix (RREF) We've looked at what it means for a matrix to be in Row Echelon Form (REF). After that, our system becomes. To find the kernel of a matrix A is the same as to solve the system AX = 0, and one usually does this by putting A in rref. Preferably a kernel from the 4. One mathematical tool, which has applications not only for Linear Algebra but for differential equations, calculus, and many other areas, is the concept of eigenvalues and eigenvectors. The bandwidth you provide will depend on the type of kernel used in the calculation. As a result of multiplication you will get a new matrix that has the same quantity of rows as the 1st one has and the same quantity of columns as the 2nd one. Synonyms: If a linear transformation T is represented by a matrix A, then the range of T is equal to the column space of A. The proof is very technical and will be discussed in another page. A matrix is said to be singular if its determinant is zero and non-singular otherwise. The nonzero rows of a matrix in reduced row echelon form are clearly independent and therefore will always form a basis for the row space of A. Note that the matrix type will be discovered automatically on the first attempt to solve a linear equation involving A. We start by applying our 3x3 kernel to the equivalently sized 3x3 region in the top left corner of our input matrix. Active 5 years ago. accessories/manifest api_council_filter Parent for API additions that requires Android API Council approval. And the fifth. • The kernel trick: formulate the problem in terms of the kernel function (𝑥,𝑥′)=𝜑(𝑥). Interactivity in the cloud. You can re-load this page as many times as you like and get a new set of numbers each time. The rank-nullity theorem is an immediate consequence of these two results. Similarly, if E0 is the matrix obtained by performing a column. gov Received September 2, 2009. Sections: kernel; image; cokernel. Row Space and Column Space of a Matrix. 4 Column Space and Null Space of a Matrix Performance Criteria: 8. A vector v is in the kernel of a matrix A if and only if Av=0. 
NULL SPACE, COLUMN SPACE, ROW SPACE 151 Theorem 358 A system of linear equations Ax = b is consistent if and only if b is in the column space of A. com The Scientific Web Calculator Import file: Hex output Numeric mode. To make matters worse, multithreading non-trivial code is difficult. We build thousands of video walkthroughs for your college courses taught by student experts who got an A+. How to Find the Null Space of a Matrix. In general, the way A acts on \mathbf{x} is complicated, but there are certain cases. When the calculator spatial lag interface detects the selection of kernel weights, the options are greyed out, with the diagonal elements checked, as in Figure 17. And the fifth. Setting Up Kernel-Mode Debugging over a USB 2. (b) Find a basis for the kernel of T, writing your answer as. In addition to performing several different matrix transposes, we run simple matrix copy kernels because copy performance indicates the performance that we would like the matrix transpose to achieve. The leading entry in each row is the only non-zero entry in its column. for any matrix A: For any matrix, we have ker(A) = ker(ATA). THE RANGE AND THE NULL SPACE OF A MATRIX Suppose that A is an m× n matrix with real entries. The kernel matrices can be broken down logically once you know what the numbers are operating on. It will also find the determinant, inverse, rref (reduced row echelon form), null space, rank, eigenvalues and eigenvectors. Each of the matrices shown below are examples of matrices in reduced row echelon form. pl BUG: b/32916152 assets/android-studio-ux-assets Bug: 32992167 brillo/manifest cts_drno_filter Parent project for CTS projects that requires Dr. Kernel, Rank, Range We now study linear transformations in more detail. Matrix Operators. 24% of the curve's area outside the discrete kernel. SPECIFY THE VECTOR SPACES Please select the appropriate values from the popup menus, then click on the "Submit" button. This kernel function can be computed as efficiently as the oligo kernel by appropriate position encoding. Set the matrix (must be square) and append the identity matrix of the same dimension to it. Let P 2 be the space of polynomials of degree at most 2, and de ne the linear transformation T : P 2!R2 T(p(x)) = p(0) p(1) For example T(x2 + 1) = 1 2. So the way we solve this problem is by doing a non-linear transformation on the features.
|
CommonCrawl
|
Epidemiology and clinico-pathological characteristics of current goat pox outbreak in North Vietnam
Trang Hong Pham1,2,
Mohd Azmi Mohd Lila1,
Nor Yasmin Abd. Rahaman1,
Huong Lan Thi Lai2,
Lan Thi Nguyen2,
Khien Van Do3 &
Mustapha M. Noordin ORCID: orcid.org/0000-0001-9288-797X1
In view of the current swine fever outbreak and the government's aspiration to increase the goat population, a need arises to control and prevent outbreaks of goat pox. Despite North Vietnam facing sporadic cases of goat pox, this most recent outbreak had the highest recorded morbidity, mortality and case fatality rates. Thus, owing to the likelihood of a widespread recurrence of goat pox infection, an analysis of that outbreak was done based on selected signalment, management and disease pattern (signs and pathology) parameters. This included examination of animals, inspection of facilities, and tissue sampling and analysis for confirmation of goat pox, along with questionnaires.
It was found that the most susceptible age group was kids between 3 and 6 months old, while a higher infection rate occurred in those under the free-range rearing system. Clinical signs included pyrexia, anorexia and nasal discharge, and pock lesions were not restricted to the skin but extended into the lungs and intestines. The pathogen was confirmed in positive cases via PCR as goat pox virus, with a prevalence of 79.69%.
The epidemiology of the current goat pox outbreak in North Vietnam indicates a significant prevalence that may affect the industry. This signals the importance of identifying the salient clinical signs and post mortem lesions of goat pox at the field level in order to achieve effective control of the disease.
The re-emergence of Capripoxvirus and its clinical syndrome has been well documented worldwide, especially in Asia and Africa [1, 2]. Undoubtedly, this virus has a pronounced economic impact, not only in endemic regions, particularly on the livelihoods of small-scale farmers and poor rural communities [3], but also as a major constraint on international livestock trade. A greater concern is the risk of its expansion to additional countries, including Vietnam in 2005 [4], which is in the midst of developing a competitive goat industry. The first reported goat pox outbreak in North Vietnam in 2005, which affected four provinces (Cao Bang, Bac Giang, Lang Son and Ha Tay), led to the death of 789 goats. The agent, confirmed via ELISA and PCR, was found to be host specific, being severe in goats [4]. Following this incident, the outbreak was resolved, and the Vietnamese goat population increased by 38% annually, from 1.8 million head in 2015 to 2.6 million head in 2017 [5]. Given the Vietnamese government's aspiration to produce 3.9 million head of goats in 2020, a much more comprehensive study of devastating diseases such as goat pox, including its epidemiological and economic impact, is warranted. Nevertheless, despite the increase in goat population, together with animal movement along the borders, market demands, high stocking densities and proximity of facilities, goat pox outbreaks have recurred, commencing in 2014 in Ninh Binh province. This recurrence has raised concern about the possible devastating impact of goat pox on Vietnam's goat industry, which forms the basis of this study. A thorough analysis of the current recurrence, along with a complete set of epidemiological data, will support effective control and prevention of new outbreaks.
Observation of the farms
Vaccination against goat pox was not practised in either type of farming system. The main goat rearing method in North Vietnam under the extensive system is backyard farming, where the goats are allowed to graze freely in lowland and mountainous areas. Under such a system there is minimal provision of commercial feed. On the other hand, under intensive farming, the goats are kept in stalls and supplemented with concentrates.
Morbidity rate
The morbidity and mortality rates due to goat pox are shown in Table 1. During this study, the first cases of sick goats were reported in Ninh Binh province, from which the disease radiated to other parts of North Vietnam. Thus, the study commenced in Ninh Binh and extended to its five surrounding provinces. The morbidity rate ranged between 11.8 and 17.5%, without significant differences between provinces except for Yen Bai, which had the lowest rate (p < 0.001). However, this lowest rate at Yen Bai was not significantly different from that seen in Hoa Binh.
Table 1 Morbidity rate of the goat pox outbreak in North Vietnam
Mortality and case fatality rate of goat pox outbreak
Table 2 shows the mortality and case fatality rates of goats due to the infection during the study period. The mortality and case fatality rates ranged between 5.1–7.4% and 35.3–63%, respectively, without any significant differences between provinces.
Table 2 Mortality and case fatality rate of goat pox disease in North Vietnam
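To make explicit how the three rates reported in Tables 1 and 2 relate to one another, the short sketch below computes morbidity, mortality and case fatality rates from herd-level counts. The counts used are hypothetical placeholders, not the actual provincial data behind the tables.

```python
def epidemic_rates(population, cases, deaths):
    """Return morbidity, mortality and case fatality rates as percentages."""
    morbidity = 100.0 * cases / population    # sick animals per animal at risk
    mortality = 100.0 * deaths / population   # deaths per animal at risk
    case_fatality = 100.0 * deaths / cases    # deaths per sick animal
    return morbidity, mortality, case_fatality

# Hypothetical herd of 1,000 goats with 150 cases and 60 deaths.
morb, mort, cfr = epidemic_rates(population=1000, cases=150, deaths=60)
print(f"morbidity {morb:.1f}%, mortality {mort:.1f}%, case fatality {cfr:.1f}%")
# morbidity 15.0%, mortality 6.0%, case fatality 40.0%
# Note that mortality = morbidity * case_fatality / 100, which is why a modest
# mortality (5.1-7.4%) can coexist with a high case fatality rate (35.3-63%).
```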
Infection rate between farming systems
It was found that goats under the extensive system had an 8.7% higher (p < 0.05) infection rate than those managed intensively (Table 3).
Table 3 Comparison of goat pox incidence based on rearing method
Age susceptibility
In order to examine the influence of age on infection rate, the goats were categorized into three groups: less than 3 months, 3–6 months, and more than 6 months old. The analysis of age susceptibility to infection is shown in Table 4. It was found that in almost all instances, those between the ages of 3 and 6 months were the most susceptible (p < 0.001), except in Ninh Binh province. The other age groups, less than 3 months and more than 6 months, had comparable infection rates.
Table 4 The infection rate based on age groups
Clinical and pathology findings
Goats showed varying degrees of severity of clinical signs; however, almost 85% of infected goats showed loss of appetite, ranging from anorexia to complete refusal of feed, leading to emaciation (Table 5). Fatigue and pyrexia were also among the common manifestations observed in most cases. Additionally, blepharitis, rhinitis (Fig. 1) and difficulty moving ensued in some cases.
Table 5 Distribution of clinical signs of goat pox based on their occurrence (n = 1814)
Photograph showing ulcers in nasal cavity and rhinitis
Hardened swellings, which developed into sores, were found on the skin (mainly hairless regions) over any part of the body, including the mouth, pinna (Fig. 2) and udder (Figs. 3 and 4). The size of the pock lesions varied between 0.5 and 1 cm in diameter.
Photograph exhibiting papules found on mouth, nares and ear
Photograph showing a papule that has ulcerated on the ear pinna
Photograph of infected goat's udder denoting ulcers and inflammation
The findings of ante- and post mortem lesions are presented in Table 6. In live animals, the majority of lesions were confined to the eyes, nares and skin, while post mortem examination revealed the lungs (Fig. 5) as a primary site. Calcified greyish papules were found in the intestines (Fig. 6), urinary bladder and uterus. However, other sites and tissues were also affected, although less frequently, as shown in Table 6.
Table 6 Lesion distribution in selected organs and their frequency of appearance (n = 128)
Photograph of a well-circumscribed greyish pock lesion in the lung of an infected goat
Photograph of calcified papules on the intestinal mucosa of an affected goat
Histopathological lesions comprising cellular degeneration and necrosis, along with inflammation and haemorrhage, were mostly found in the skin (Fig. 7), lung and liver. Despite an exhaustive histopathological search, no evidence of eosinophilic inclusions was seen in any tissue.
Damaged epithelial layers of skin of an infected goat (H&E, X10)
The primer-specific PCR test was performed on 128 scab biopsy samples. A total of 79.6% (102/128) of the samples were positive for capripox virus, with the expected band size of 172 bp (Fig. 8).
Gene-based PCR result for detection of capripox virus. Lane M: 100 bp ladder
Reported outbreaks of goat pox worldwide yield differing mortality rates, with 7% in Sudan [2], 21% in Iraq [6] and 30% in India [7]. In this study, a much lower mortality rate was found despite a rather high morbidity rate, probably as a result of the study population containing comparatively few goats of 3–6 months old. It has been shown that maternal antibody against goat pox is maintained for about 3 months, and animals older than 6 months that survive an infection have life-long immunity [8, 9]. This phenomenon explains the susceptibility of the 3–6 months old group [9], which should yield a higher morbidity rate. However, since the number of animals in this group was quite low, the mortality rate did not surpass that of the other groups.
The number of animals that die during an outbreak depends on the virulence of the virus, the size of the population and its susceptibility, and on the basic reproductive number, i.e. the average expected number of secondary cases produced by a single infection in a completely susceptible population [10]. However, these rates may vary depending on additional factors, including breed [11] and, most notably, the herd immune status [12]. Recently published data show that the case fatality rate of goat pox ranges from 21.4 to 60% [13,14,15]. Likewise, the high fatality rate in the present study underlines the need for much more effective control of goat pox, along with the requirement to vaccinate susceptible herds or herds in endemic areas. However, the difficulty of implementing such health programs in Vietnam is explained below.
A 23% morbidity rate based on seroprevalence has been documented in nomadic goat herds in Punjab [16]. It is not surprising to see a higher infection rate in the extensive system, as previously reported [11]. However, this rearing method is popular with poor farmers in lowland and mountainous areas in Vietnam who cannot afford to spend on standard health management. Goats under the extensive system forage freely over a wide area, increasing their chances of exposure to the virus. These goats might also have undergone less domestication, maintaining many of the behavioural traits of wild types, such as aggressiveness [17, 18]. Furthermore, the natural aggressiveness of goats, especially under the extensive system [17], predisposes them to injuries, providing easier access for the virus once inoculated. This is an added problem since most of the goats were not dehorned (due to financial constraints), making injuries sustained during fights prone to infection. On the contrary, the low infection rate under the intensive system could have resulted from a much more efficient disease control program that minimized spread of the virus within the herd. However, the benefits of the extensive farming system can still be exploited by taking advantage of its eco-agrarian nature. It can economically utilise marginal or unused land to which the goats can readily adapt. Such conditions are less stressful for the goats, making them hardier to harsh environments. This is an opportunity for poor rural farmers with limited financial resources and knowledge of commercial goat farming, and it can be improved by the provision of extension veterinary officers to offer guidance and assistance in goat farming.
Undoubtedly, defining the vulnerable period of infection is one of the most important measurements for effective disease management [19]. In the study presented here, the most susceptible age group was goats of 3–6 months old, which conforms to the findings of [16, 20], who found that the chance of infection in the young was 2.2 times greater than that of an adult. However, contradictory results were seen when the infection rate was based on seroprevalence. Fentie et al. [20] demonstrated a low infection rate in older animals, although this appeared to refute earlier published findings [21]. Nevertheless, in the latter study [21], age groups were not clearly defined, which may have led to less homogenous groupings. Additionally, the samples in that study were collected from slaughterhouses, tanneries and hide markets, where it is probable that few samples came from goat kids [21]. The age grouping in the study presented here was based on the main purpose of meat goat breeding in Vietnam. The indigenous and mixed breeds of Vietnamese goat attain a market weight of 25 to 30 kg at 6 months of age, justifying the 3-month interval chosen.
Recognising the salient clinical signs is a key factor for field diagnosis of goat pox [11]. The prominent clinical signs seen in this study were depression, which was much more severe in kids [22, 23], together with systemic signs such as pyrexia. About 85.01% of affected animals showed varying degrees of anorexia associated with the development of lesions on the mucous membranes of the face. The lesions commence as red patches around the mouth, nose and eyes, which later swell into papules. These papules trigger lacrimal, nasal and salivary discharges. Respiratory distress and secondary bacterial pneumonia are predominant in kids, which may not survive the malignant stage [6, 24, 25]. In adult goats, the ulceration of papules makes digestion and breathing difficult, which in turn worsens productive performance. The goats with conjunctivitis, corneal opacity and blepharitis resembled the acute phase of pox disease [4]. Pox lesions developed over the animal's body, especially on hairless areas (face, pinna of the ears, udder, genitals, anus and under the tail). The red patches turn into hard rubbery papules and become vesicles after 3 to 4 days. Necrotic papules form pustules as the result of thrombosis and localised ischaemia. Dark hard scabs are formed by the remnants of necrotic papules [6, 25,26,27].
Although the overt clinical signs of goat pox are quite characteristic, less severe manifestations need to be judiciously distinguished from several other closely resembling diseases. The closest would be contagious ecthyma (orf), which affects young kids, while goat pox involves all ages. Its signs are usually flat or dome-shaped bullae crusting around the commissures of the mouth that leave no scar after healing [28], as opposed to the rather permanent papular lesions of goat pox. Bluetongue may be confused with goat pox, although goats are less susceptible and show signs rarely seen in goat pox, i.e. localized oedema, haemorrhages and erosion of mucous membranes. The post mortem lesions of bluetongue are effusions in the thoracic cavity and pericardial sac [29]. High mortality is seen in peste des petits ruminants (PPR), which affects mainly young goats showing coughing, halitosis, erosive oral lesions and severe diarrhoea. These signs are not seen in goat pox, and PPR has the rather pathognomonic lesions of zebra striping of the gastro-intestinal tract and pneumonia [30]. Lastly, a likely differential to be considered for goat pox is dermatophilosis [31], which exhibits paintbrush-like matted hair all over the body, a feature not seen in goat pox.
In this study, clinical and post mortem lesions were present in the skin and lungs of 100% of the PCR positive cases. It is likely that, owing to the epitheliotropic nature of the virus, lesions were predominantly seen in the skin, lung and discrete sites within the mucosal surfaces of oro-nasal and gastrointestinal tissues [4]. As evidenced in this study and as reported earlier in similar studies, the role of the skin and lung as target organs [32] for the virus leads to greater deposition of lesions in these tissues [33, 34]. Besides darkened circumscribed pock lesions [33, 35], the entire lungs were pale pink with loss of sponginess. The congested trachea contained blood or fluid-filled vesicles, with involvement of the lymph nodes. As seen in the study presented here, calcified nodules were found to be most abundant in the large intestine (rectum) of goats that were mildly affected [21, 36].
Histopathological findings in the study presented here were in accord with previous publications, registering marked changes in the epidermis. Degeneration of epithelial cells, hyperkeratosis, ballooning degeneration of proliferating epithelial cells and inflammation led to the desquamation of skin layers. Variable observations on lung microscopy included haemorrhage, congestion and thickened alveolar walls, which resulted in narrowed alveoli. Secondary bacterial infection invoked the infiltration of inflammatory cells into the affected regions of the lung [6, 37, 38].
A PCR-based test was chosen because of its sensitivity and simplicity [39], and this sensitive and simple PCR assay confirmed caprine pox virus in the biopsy samples [40]. Almost 80% of the samples were positive, with an amplicon size of 172 bp, although no attempt was made to further identify and differentiate the caprine pox virus [1, 22, 41, 42]. However, the isolates from this study did not show much variation compared to those reported in China [43]. This could be explained by the fact that, although phylogenetically China has three main subgroups of goat pox virus, only one is circulating in the south, i.e. bordering Vietnam [43].
These findings pose a challenge to Vietnam's aspiration to transform the future potential of goat farming into an industry. The local consumer prefers fresh chevon to frozen products due to food safety issues linked to weaknesses in the cold chain system [44]. Furthermore, as well as being a source of meat for the family and community, goats serve as a cash reserve for poor farmers [45].
The current study also revealed that goat husbandry is mainly extensive, which may hamper the possibility of initiating goat production within the mountainous areas. Likewise, as revealed here, goats reared under the intensive system enjoy a better farming milieu for disease control, which farmers or the nation should adopt to improve productivity. Under an intensive system, the ease of isolating and locating an infected animal and area enables effective diagnosis and thus control and prevention. It is rather difficult, or almost impossible, to perform such tasks (isolate and locate) under free-grazing or nomadic conditions. Nevertheless, Vietnam should make formidable reforms to the livestock industry, since goats in Vietnam are still (as found in this study), and in future will be, reared by the poorer farmers, hampering an increase in goat population and productivity. This is even more worrying with respect to the lack of a herd health (disease control) program. Thus, in order to bring the industry to greater heights, offsetting devastating diseases like goat pox is mandatory. It is believed that these findings on goat pox will help the government to continue working on improving disease identification and control to avoid hindrances to goat production.
Goat pox infection in North Vietnam, if left unattended, may have a devastating effect on the goat industry. Thus, the need arises not only to effectively control the disease but also to reduce the risk factors involved, including the current state of rearing. This includes the provision of veterinary extension services to the poor farmers adopting the extensive system in order to improve productivity via an effective herd health program.
Ethics, consent, questionnaire and study area
Since North Vietnam does not impose ethical approval requirements on the use of local animals for research, all procedures involved in this study were conducted in compliance with the recommendations of the Guide for the Care and Use of Agricultural Animals in Research and Teaching (2010) [46]. A well-defined questionnaire covering farm management information (total number of animals/age groups, breed, farming system and detailed health status) relevant to goat pox was completed during the visits, and all participating farms consented to the research via written permission.
The sample size (n) was determined using the formula:
$$ \mathrm{n}={\mathrm{Z}}^2\mathrm{pq}/{\mathrm{L}}^2 $$
where, Z = standard normal distribution at 95% confidence interval = 1.96
p = prevalence from a similar study (Babiuk 2008) = 33%
$$ \mathrm{q}=1-\mathrm{p} $$
$$ \mathrm{L}=\mathrm{allowable}\ \mathrm{error}\ \mathrm{taken}\ \mathrm{at}\ 5\%=0.05 $$
Thus, the minimum required sample size obtained from the formula for this study was 477.
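As an arithmetic cross-check, the snippet below evaluates the standard formula n = Z²pq/L² with the values stated above; it is a sketch of the calculation only, and any difference from the reported minimum of 477 would reflect adjustments (for example rounding up or allowance for non-response) that are not detailed in the text.

```python
import math

Z = 1.96   # standard normal deviate for a 95% confidence interval
p = 0.33   # expected prevalence from a similar study (Babiuk 2008)
q = 1 - p
L = 0.05   # allowable error

n = Z**2 * p * q / L**2
print(math.ceil(n))   # minimum sample size before any further adjustment
```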
Disease investigation was conducted in six provinces in North Vietnam where goat farming is most actively practised (Fig. 8). In general, goat farming in Vietnam is divided into either an extensive or an intensive system, as previously described [47]. During the visits, farms with clinically affected goats, and those in close contact with affected herds within the outbreak provinces, were further assessed. A thorough physical examination for clinical signs was done with emphasis on the predilection sites of goat pox lesions, and animals with severe clinical signs were then subjected to post mortem examination.
Questionnaire and data collection
The questionnaire was structured to capture information about the farm and the management system practised by the owner during an interview. It comprised three main sections, namely: ownership and farm information, herd information, and physical plus pathology findings. The template of this questionnaire is attached separately as Additional file 1.
Tissue sampling
Based on the physical examination, a total of 11,688 goats that fell under the category of being affected, or in contact with affected animals, were chosen. Out of these, 1481 had clear-cut signs suggestive of goat pox, from which a total of 128 fresh tissue samples were collected for further pathological and virological diagnosis.
Approximately 2–3 g of lesion material was taken and placed in PBS (pH 7.2, with 1% gentamycin) and stored chilled during delivery. Samples were then transferred to the Key Veterinary Biotechnology Laboratory, Vietnam National University of Agriculture, Hanoi, Vietnam. Roughly 1 cm3 of lesion from the skin, lung, heart, liver, intestine, spleen, kidney and lymph node was fixed in 10% buffered formalin and later processed routinely for histopathological examination.
Polymerase chain reaction (PCR)
DNA extraction was performed using the DNeasy Blood & Tissue Kit (Qiagen, Germany) following the manufacturer's instructions. Primers used for identifying Capripoxvirus in clinical specimens were as previously designed [39].
The forward primer was P1: 5′-TTTCCTGATTTTTCTTACTAT-3′ and the reverse primer was P2: 5′-AAATTATATACGTAAATAAC-3′. Each 50 μl reaction mixture contained 5 μl buffer, 3 μl of MgCl2, 2 μl of dNTP mix (10 mM), 2 μl (10 pmol/μl) of each primer, 0.4 μl of Taq DNA polymerase, 12 μl biopsy supernatant and 23.6 μl of RNase-free water. The PCR cycle started with an initial denaturation at 94 °C for 5 min, followed by 35 cycles (1 min each step) of denaturation at 94 °C, annealing at 50 °C and extension at 72 °C, with a final extension at 72 °C for 10 min. The PCR products were examined by 1.5% agarose gel electrophoresis with ethidium bromide staining.
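As a bookkeeping aid only, the sketch below encodes the reaction mix and cycling program described above and checks that the listed volumes sum to the stated 50 μl; the dictionary keys and the run-time estimate are illustrative and do not come from the original protocol sheet.

```python
# Illustrative bookkeeping of the PCR set-up described in the text.
reaction_mix_ul = {
    "buffer": 5.0,
    "MgCl2": 3.0,
    "dNTP mix (10 mM)": 2.0,
    "primer P1 (10 pmol/ul)": 2.0,
    "primer P2 (10 pmol/ul)": 2.0,
    "Taq DNA polymerase": 0.4,
    "biopsy supernatant": 12.0,
    "RNase-free water": 23.6,
}
assert abs(sum(reaction_mix_ul.values()) - 50.0) < 1e-9   # stated total volume

# Thermocycling program: (step, temperature in deg C, minutes, repeats).
program = [
    ("initial denaturation", 94, 5, 1),
    ("denaturation", 94, 1, 35),
    ("annealing", 50, 1, 35),
    ("extension", 72, 1, 35),
    ("final extension", 72, 10, 1),
]
total_minutes = sum(minutes * repeats for _, _, minutes, repeats in program)
print(f"approximate block time: {total_minutes} min")   # ignores ramp times
```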
All data obtained were subjected to statistical analysis using SAS 9.0 (2002, USA), and only differences of p < 0.005 were considered significant.
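Although the analyses were run in SAS, the kind of chi-square comparison used (for example, infection rate between the two rearing systems in Table 3) can be illustrated as follows; the 2x2 counts are hypothetical and are not the study data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = rearing system, columns = infected / not infected.
table = [[320, 1680],   # extensive
         [150, 1350]]   # intensive
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4g}")
```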
The datasets generated and/or used during the current study are not publicly available as they are owned by the Vietnam National University of Agriculture, Vietnam. However, they can be requested via email from the corresponding authors; Dr. Pham Hong Trang ([email protected]) and/or Prof. Dr. Mustapha M Noordin ([email protected]).
PBS:
Phosphate buffered saline
PCR:
Polymerase chain reaction
OR:
Odds ratio
n:
Number of animals
Chisq:
Chi Square test
Santhamani R, Yogisharadhya R, Venkatesan G, Shivachadra SB, Pandey AB, Ramakrishnan MA. Detection and differentiation of sheeppox virus and goatpox virus from clinical sample using 30 kDa RNA polymerase subunit (RPO30) gene based PCR. Vet World. 2013;9:2231–0916.
Ahmed ZEM, Abdelmalik IK, Muaz MA. An epidemiological study of sheep and goat pox outbreaks in the Sudan. Food Biol. 2016. https://doi.org/10.19071/fbiol.2016.v5.3007.
Tuppurainen ES, Venter EH, Shisler JL, Gari G, Mekonnen GA, Juleff N, Lyons NA, De Clercq K, Upton C, Bowden TR, Babiuk S, Babiuk LA. Review: Capripoxvirus Diseases: current status and opportunities for control. Transbound Emerg Dis. 2017. https://doi.org/10.1111/tbed.12444.
Babiuk S, Bowden TR, Boyle DB, Wallace DB, Kitching RP. Capripoxviruses: an emerging worldwide threat to sheep, goats and cattle. Transbound Emerg Dis. 2008;55:263–72.
Liang JB, Paengkoum P. Current status, challenges and the way forward for dairy goat production in Asia – conference summary of dairy goats in Asia. Asian Australasian J Anim Sci. 2019. https://doi.org/10.5713/ajas.19.0272.
Zangana IK, Abdullah MA. Epidemiological, clinical and histopathological studies of lamb and kid pox in Duhok, Iraq. Bulg J Vet Med. 2013;16:133–8.
Roy A, Jaisree S, Balakrishnan S, Senthilkumar K, Mahaprabhu R, Mishra A, Maity B, Ghosh TK, Karmakar AP. Molecular epidemiology of goat pox viruses. Transbound Emerg Dis. 2018. https://doi.org/10.1111/tbed.12763.
Panchanathan V, Chaudhri G, Karupiah G. Correlates of protective immunity in poxvirus infection: where does antibody stand? Immunol Cell Biol. 2008;86:80–6.
Bhanuprakash V, Hosamani M, Venkatesan G, Balamurugan V, Yogisharadhya R, Singh RK. Animal poxvirus vaccines: a comprehensive review. Expert Rev Vaccines. 2012;11:1355–74.
Bhanuprakash V, Hosamani M, Singh RK. Prospects of control and eradication of Capripox from the Indian subcontinent: a perspective. Antivir Res. 2011;91:225–32.
Mizaie K, Barani SM, Bokaie S. A review of sheep pox and goat pox: perspective of their control and eradication in Iran. J Adv Vet Anim Res. 2015. https://doi.org/10.5455/javar.2015.b117.
Abutarbush SM. "Lumpy skin disease". Emerging and re-emerging infectious diseases of livestock. Jagadeesh Bayry. Springer, 2017; 321–322. doi: https://doi.org/10.1007/978-3-319-47426-7.
Venkatesan G, Balamurugan V, Singh RK, Bhanuprakash V. Goat pox virus isolated from an outbreak at Akola, Maharashtra (India) phylogenetically related to Chinese strain. Trop Anim Health Prod. 2010. https://doi.org/10.1007/s11250-010-9564-8.
Youwen L, Haihong J, Jianjun H. Identification of four goatpox outbreaks in Xinjiang of China. In: Human Health and Biomedical Engineering (HHBE) International Conference; 2011. https://doi.org/10.1109/HHBE.2011.6028993.
Jayalakshmi K, Yogeshpriya S, Veeraselvam M, Krishnakumar S, Selvaraj P. Univariable risk factors analysis of goat pox in Thanjavur Delta region. Indian Vet J. 2017;94:19–20.
Masoud F, Mahmood MS, Hussain I. Seroepidemiology of goat pox disease in district Layyah, Punjab, Pakistan. J Vet Med Res. 2016;3:1043.
Côté SD. Dominance hierarchies in female mountain goats: stability, aggressiveness and determinants of rank. Behaviour. 2005;137:1541–66.
Mignon-Grasteau S, Boissy A, Bouix J, Faure J-M, Fisher AD, Hinch GN, Jensen P, Neindre P, Mormede P, Prunet P, Vandeputte M, Beaumont C. Genetics of adaptation and domestication in livestock. Livest Prod Sci. 2005;93:3–14.
Darbon A, Colombi D, Valdano E, Savini L, Giovanni A, Colizza V. Disease persistence on temporal contact networks accounting for heterogeneous infectious periods. R Soc Open Sci. 2019. https://doi.org/10.1098/rsos.181404.
Fentie T, Fenta N, Leta S, Molla W, Ayele B, Teshome Y, Nigatu S, Assefa A. Sero-prevalence, risk factors and distribution of sheep and goat pox in Amhara region, Ethiopia. BMC Vet Res. 2017. https://doi.org/10.1186/s12917-017-1312-0.
Sajid A, Chaudhary I, Sadique U, Maqbol A, Anjum AA, Queshi MS, Hassan ZU, Idress M, Shaid M. Prevalence of goatpox disease in Punjab province of Pakistan. J Anim Plt Sci. 2012;22(2 Suppl):28–32 ISSN: 1018-7081.
Rao TVS, Negi BS, Bansal MP. Identification and characterization of differentiating soluble antigens of sheep and goat poxviruses. Acta Virol. 1996;40:259–62.
Das PK, Pradhan KC. Epidemiological studies on goatpox in Ganjam goats of Orissa. Indian J Vet Med. 2006;26:34–5.
Joshi RK, Ali SL, Shakya S, Rao VN. Clinico-epidemiological studies on a natural outbreak of goat pox in Madhya Pradesh. Indian Vet J. 1999;76:279–81.
Pawaiya RVS, Bhagwan SK, Dubey SC. Histo-pathological study of goat pox in a natural outbreak. Indian J Small Ruminants. 2008;14:266–70.
Babuik S, Bowden TR, Parkyn G, Dalman B, Hoa DM, Long NT, Vu PP, Bieu DX, Copps J, Boyle DB. Yemen and Vietnam Capripoxviruses demonstrate a distinct host preference for goats compared with sheep. J Gen Virol. 2009. https://doi.org/10.1099/vir.0.004507-0.
Manjunatha-Reddy GB, Sumana K, Babu S, Yadav J, Balamuragan V, Hemadri D, Patil SS, Suresh KP, Gajendragad MR, Rahman H. Pathological and molecular characterization of Capripox virus outbreak in sheep and goats in Karnataka. Indian J Vet Path. 2015;39:11–4.
Nandi S, De UK, Choudhary S. Current status of contagious ecthyma or orf disease in goat and sheep- a global perspective. Small Ruminant Res. 2011;96:73–82.
Caporale M, Di Gialleonorado L, Janowicz A, Wilkie G, Shaw A, Savini G, Van Rijn PA, Mertens P, Di Ventura M, Palmarini M. Virus and host factors affecting the clinical outcome of bluetongue virus infection. J Virol. 2014. https://doi.org/10.1128/JVI.01641-14.
Balamurugan V, Hemadri D, Gajendragad MR, Singh RK, Rahman H. Diagnosis and control of peste des petits ruminants: a comprehensive review. Virus Dis. 2014. https://doi.org/10.1007/s13337-013-0188-2.
Chitra MA, Jayalakshmi K, Ponnusamy P, Manickam R, Ronald BSM. Dermatophilus congolensis infection in sheep and goats in Delta region of Tamil Nadu. Vet World. 2017;10:1314–8.
Embury-Hyatt C, Babiuk S, Manning L, Ganske S, Bowden TR, Boyle DB. Pathology and viral antigen distribution following experimental infection of sheep and goats with capripoxvirus. J Comp Path. 2012;146:106–15. https://doi.org/10.1016/j.jcpa.2011.12.001.
Jun WL, Zhang HT, Wang F, Cheng JJ, Hong-Ying SI. Clinical diagnosis technique of goat pox diseases. Agric Sci Technol. 2010;11:91–9.
Kumar A, Hirpurkar SD, Sannat C, Gilhare VR. Adaptation of Capripox virus isolate from goats in heterologous cells. J Anim Res. 2015. https://doi.org/10.5958/2277-940X.2015.00113.8.
Verma S, Verma LK, Gupta VK, Katoch VC, Dogra V, Pal B, Sharma M. Emerging Capripoxvirus disease outbreak in Himachal Pradesh, a northern state of India. Transbound Emerg Dis. 2011;58:79–85.
Kumar J, Gupta VK. Pathological study of goat pox in a natural outbreak. Indian Vet J. 2015;92:70–1.
Nyadolgor U, Usuhgerel S, Baatarjargal P, Altanchimeg A, Odbile R. Histopathological study for using of pox inactivated vaccine in goats. J Agric Sci. 2015;15:51–5.
Manimaran K, Mahaprabhu R, Jaisree S, Hemalatha S, Ravimurugan T, Pazhanivel N, Roy P. An outbreak of sheep pox in an organized farm of Tamil Nadu, India. Indian J Anim Res. 2017;51:162–4.
Ireland DC, Binepal YS. Improved detection of capripoxvirus in biopsy samples by PCR. J Virol Methods. 1998;74:1–7.
Heine HG, Stevens MP, Foord AJ, Boyle DB. A capripoxvirus detection PCR and antibody ELISA based on the major antigen P32, the homolog of the vaccinia virus H3L gene. J Immunol Methods. 1999;227:187–96.
Mahmoud MA, Khafagi MH. Detection, identification and differentiation of sheep pox virus and goat pox virus from clinical cases in Giza Governorate, Egypt. Vet World. 2016;9:2231–0916.
Zhao Z, Wu G, Yan X, Zhu X, Li J, Zhu H, Zhang Z, Zhang Q. Development of duplex PCR for differential detection of goat pox and sheep pox viruses. BMC Vet Res. 2017;13:278. https://doi.org/10.1186/s12917-017-1179-0.
Zeng XC, Chi XL, Wenbo WL, Li HM, Huang XH, Huang YF, Rock LSH, Wang SH. Complete genome sequence analysis of goatpox virus isolated from China shows high variation. Vet Microbiol. 2014;173:38–49. https://doi.org/10.1016/j.vetmic.2014.07.013.
Nguyen-Viet H, Tuyet-Hanh TT, Unger F, Dang-Xuan S, Grace D. Food safety in Vietnam: where we are at and what we can learn from international experiences. Infect Dis Poverty. 2017. https://doi.org/10.1186/s40249-017-0249-7.
Anonymous. (2013). Agricultural transformation & food security 2040–vietnam country report, japan international cooperation agency. http://open_jicareport.jica.go.jp/pdf/12145546.pdf Accessed 20 June 2019.
Federation of Animal Science Societies (FASS). Guide for the care and use of agricultural animals in research and teaching. 3rd ed; 2010. http://www.fass.org. Accessed 22 Jan 2019.
Devendra C. Dynamics of goat meat production in extensive Systems in Asia: improvement of productivity and transformation of livelihoods. Agrotechnol. 2015. https://doi.org/10.4172/2168-9881.1000131.
The authors wish to extend their thanks to all staff of the Faculty of Veterinary Medicine, Vietnam National University of Agriculture, Vietnam, who participated in the collection of the epidemiological data and the analysis of samples for the study. We would also like to express our deepest gratitude to the farm owners who unselfishly cooperated in making the study a success.
This study was fully funded by the Vietnam International Education Department Fellowship, Ministry of Education and Training, Vietnam (911 Research Project Grant Scheme), covering the design of the study, sample collection, and the analysis and interpretation of the data, which the authors gratefully appreciate.
Faculty of Veterinary Medicine, Universiti Putra Malaysia, 43400, Serdang, Selangor, Malaysia
Trang Hong Pham, Mohd Azmi Mohd Lila, Nor Yasmin Abd. Rahaman & Mustapha M. Noordin
Faculty of Veterinary Medicine, Vietnam National University of Agriculture, Gia-Lam District, Hanoi, 010000, Vietnam
Trang Hong Pham, Huong Lan Thi Lai & Lan Thi Nguyen
Institute of Veterinary Research and Development of Central Vietnam, Nha Trang, Khanh Hoa, 650000, Vietnam
Khien Van Do
Trang Hong Pham
Mohd Azmi Mohd Lila
Nor Yasmin Abd. Rahaman
Huong Lan Thi Lai
Lan Thi Nguyen
Mustapha M. Noordin
THP, HLTH, LTN and KVD conceived the research grant; THP and MMN analysed and interpreted the results; THP and MMN drafted the manuscript with contribution from all authors; MMN, MAML, NYAR revised the manuscript; MMN and MAML supervised running of the project. All authors read and approved the final manuscript.
Correspondence to Trang Hong Pham or Mustapha M. Noordin.
All procedures involved in this study were vetted by the Vietnam International Education Department Fellowship, Ministry of Education and Training, Vietnam (911 Research Project Grant Scheme), in compliance with the recommendations of the Guide for the Care and Use of Agricultural Animals in Research and Teaching (2010) [46], since North Vietnam does not impose ethical approval requirements on the use of local animals for research. An informed consent form (field studies and sampling) was completed by the farmers, who are the owners or managers of the farms that participated in the study.
Questionnaire.
Pham, T.H., Lila, M.A.M., Rahaman, N.Y.A. et al. Epidemiology and clinico-pathological characteristics of current goat pox outbreak in North Vietnam. BMC Vet Res 16, 128 (2020). https://doi.org/10.1186/s12917-020-02345-z
Goat pox
|
CommonCrawl
|
Dynamics of the COVID-19 epidemic in Ireland under mitigation
Bernard Cazelles ORCID: orcid.org/0000-0002-7972-361X1,2,3,
Benjamin Nguyen-Van-Yen3,
Clara Champagne4,5 &
Catherine Comiskey6
In Ireland and across the European Union, the COVID-19 epidemic waves, driven mainly by the emergence of new variants of SARS-CoV-2, have continued their course despite various government interventions. Public health interventions continue in their attempts to control the spread while awaiting the planned significant effect of vaccination.
To tackle this challenge and the observed non-stationary aspect of the epidemic, we used a modified stochastic SEIR model with time-varying parameters following a Brownian process. This enabled us to reconstruct the temporal evolution of the transmission rate of COVID-19 under the non-specific hypothesis that it follows a basic stochastic process constrained by the available data. This model is coupled with Bayesian inference (particle Markov Chain Monte Carlo method) for parameter estimation and mainly utilized well-documented Irish hospital data.
In Ireland, mitigation measures provided a 78–86% reduction in transmission during the first wave between March and May 2020. For the second wave in October 2020, our estimated reduction was around 20%, while it was 70% for the third wave in January 2021. This third wave was partly due to the UK variant appearing in Ireland. In June 2020 we estimated that sero-prevalence was 2.0% (95% CI: 1.2–3.5%), in complete accordance with a sero-prevalence survey. By the end of April 2021, the sero-prevalence was greater than 17%, due in part to the vaccination campaign. Finally, we demonstrate that the available observed confirmed cases are not reliable for analysis, because their reporting rate has, as expected, greatly evolved.
We provide the first estimates of the dynamics of the COVID-19 epidemic in Ireland and of its key parameters. We also quantify the effects of mitigation measures on virus transmission during and after mitigation for the three waves. Our results demonstrate that Ireland has significantly reduced transmission by employing mitigation measures, physical distancing and lockdown. This has to date avoided the saturation of healthcare infrastructures, flattened the epidemic curve and likely reduced mortality. However, challenges remain as we await the full roll-out of a vaccination programme, as new variants, potentially more transmissible and/or more infectious, could continue to emerge, and as mitigation measures change silent transmission.
In the last months of 2019, grouped pneumonia cases were described in China. The etiological agent of this new disease, a betacoronavirus, was identified in January and named SARS-CoV-2. Meanwhile, this novel coronavirus disease (COVID-19) spread rapidly from China across multiple countries worldwide. As of March 17, 2020, COVID-19 was officially declared a pandemic by the World Health Organization. COVID-19 has now spread throughout most countries, causing millions of cases, killing hundreds of thousands of people and causing socio-economic damage [1]. Until vaccination campaigns are widely implemented, the expansion of COVID-19, with the appearance of newer, more transmissible and/or more infectious variants, continues to threaten to overwhelm the healthcare systems of many countries.
The first case in Ireland was declared on the 29th of February 2020, followed by a rapid increase in reported infections, leading to a peak in daily incidence in the week of April 10th to 17th. This peak was followed by a steady decline in daily reported cases until mid-August, when a slow but steady increase in cases emerged. This increase was sustained and, as a result, on Friday the 18th of September the capital city, Dublin, was placed on a level 3 alert with movement restrictions and various lockdown measures. On September 25th a rural region in close proximity to the border of Northern Ireland was also placed on this level 3 alert [2].
Our aim is to examine the dynamics of the COVID-19 epidemic in Ireland using public data and a simple stochastic model. As with the majority of epidemics, the COVID-19 epidemic has changed, and continues to change, greatly during its course. Taking account of the time-varying nature of the different mechanisms responsible for disease propagation is always a major challenge. To tackle this aspect, we have used a previously proposed framework [3]. This framework uses diffusion models driven by fractional Brownian motion to model time-varying parameters embedded in a stochastic modified SEIR model, coupled with Bayesian inference methods. This mechanistic modeling framework enables us to reconstruct the temporal evolution of key parameters based only on the available data, under the non-specific assumption that they follow a basic stochastic process constrained by the observations. The advantages of this approach are the possibility of (i) considering all the specific mechanisms of the transmission of the pathogen (e.g. asymptomatic transmission), (ii) using different datasets simultaneously, (iii) accounting for all the uncertainty associated with the data used and, most importantly, (iv) following the time evolution of some of the key model parameters. This framework allows us to follow changes in disease transmission owing, for example, to public health interventions, which are of particular interest to us in the case of the COVID-19 epidemic.
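To make the idea of a transmission rate that follows a basic stochastic process concrete, the sketch below simulates a random walk on log β(t); the volatility, initial value and time grid are arbitrary illustrations rather than quantities inferred in this study.

```python
import numpy as np

# Illustrative simulation of a time-varying transmission rate beta(t)
# modelled as a random walk on log beta (geometric Brownian motion).
rng = np.random.default_rng(seed=1)

n_days = 200
volatility = 0.05            # arbitrary daily volatility of log beta
log_beta = np.empty(n_days)
log_beta[0] = np.log(0.5)    # arbitrary initial transmission rate
for t in range(1, n_days):
    log_beta[t] = log_beta[t - 1] + volatility * rng.standard_normal()

beta = np.exp(log_beta)      # one stochastic trajectory of beta(t)
print(beta[:5].round(3))
```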
Large uncertainties are associated with the reported number of cases of COVID-19 [4, 5]. The low number of reported cases is due to low detection and reporting rates: firstly, because the testing capacity (RT-PCR laboratory capacity) was limited and varied greatly during the course of the epidemic; secondly, because of features of this new virus, such as transmission before the onset of symptoms and important asymptomatic transmission, which result in a low fraction of infected people attending health facilities for testing.
This suggests that hospital data are likely to be the most accurate COVID-19 related data. Thus we mainly focus on hospital data published by the Health Protection Surveillance Centre (HPSC) [6]. We also mainly focus on incidence data to avoid the defects related to the use of cumulative data (see [7]), i.e. daily hospital admissions, daily ICU admissions, daily deaths and daily hospital discharges. We also used "current beds used", both in hospital and in ICU, as these are state variables of our model. To account for the large variability of the daily observations, from the 1st of June 2020 onwards we have only used a weekly average of the observed daily values.
Since hospital data are only available from the 22nd of March, after the first mitigation measures (school closure), and our aim was to model the dynamics of the epidemic before, during and after the NPI measures, we used the daily incident infectious data available before the 25th of March. Nevertheless, this data was associated with a low reporting rate and a large variance in the observational process used (see the Inference section below).
A simple extended stochastic Susceptible-Exposed-Infectious-Recovered (SEIR) model, also accounting for asymptomatic transmission and the hospital system, has been developed (see eqs. A1-A3 in the Supporting information and Fig. 1). It is similar to others that have been proposed to model and forecast the COVID-19 epidemic [8,9,10,11]. It includes the following variables: the susceptibles S, the infected non-infectious E, the infectious symptomatic I, the infectious asymptomatic A, the removed people R, and the hospital variables: hospitalized people H, people in intensive care unit ICU, hospital discharges G, and deaths at hospital D. We have also introduced Erlang-distributed stage durations (with a shape parameter equal to 2) for the E, I, A and H compartments to mimic a gamma distribution for the stage duration in these compartments, discounting inappropriate exponential stage durations (eqs. A1). As more and more people are being vaccinated in Ireland, the effect of vaccination is introduced in our model simply through the depletion of susceptibles. For this, we removed from the susceptible compartment the "effectively protected vaccinated people", who are proportional to the number of people vaccinated with one and/or two doses (see eq. A2). The parameters are defined in Table 1 and in the Supplementary information.
Flow diagram of the model, with λ'(t) = β(t).(I1 + q1.I2 + q2.(A1 + A2))/N then the force of infection is λ(t) = λ'(t).S(t). β(t) is the time-varying transmission rate, σ the incubation rate, γ the recovery rate, 1/κ the average hospitalized period, 1/δ the average time spent in ICU, τA the fraction of asymptomatics, τH the fraction of infectious hospitalized, τI the fraction of ICU admission, τD the death rate, q1 and q2 the reduction of transmissibility of I2 and Ai, qI the reduction of the fraction of people admitted in ICU and qD the reduction of the death rate. The subscripts 1 and 2 are for the 2 stages of the Erlang distribution of the considered variable. The hospital discharge is the flow from H2 to R. Flows in blue are from hospital (Hi) and flow in red from ICU
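To make the compartmental flows above concrete, here is a minimal deterministic sketch of the infection part of the model, with Erlang(2) stages for E, I and A and the force of infection given in the caption; the hospital compartments, stochasticity and vaccination term are omitted for brevity, and the parameter values are illustrative rather than the inferred posteriors of Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta, sigma, gamma, tau_A, q1, q2, N):
    # lambda'(t) = beta * (I1 + q1*I2 + q2*(A1 + A2)) / N, as in the Fig. 1 caption.
    S, E1, E2, I1, I2, A1, A2, R = y
    lam = beta * (I1 + q1 * I2 + q2 * (A1 + A2)) / N
    dS  = -lam * S
    dE1 = lam * S - 2 * sigma * E1
    dE2 = 2 * sigma * E1 - 2 * sigma * E2
    dI1 = (1 - tau_A) * 2 * sigma * E2 - 2 * gamma * I1
    dI2 = 2 * gamma * I1 - 2 * gamma * I2
    dA1 = tau_A * 2 * sigma * E2 - 2 * gamma * A1
    dA2 = 2 * gamma * A1 - 2 * gamma * A2
    dR  = 2 * gamma * (I2 + A2)
    return [dS, dE1, dE2, dI1, dI2, dA1, dA2, dR]

N = 4.9e6                                        # approximate population of Ireland
y0 = [N - 10, 10, 0, 0, 0, 0, 0, 0]              # start with 10 exposed individuals
params = (0.6, 1 / 5, 1 / 5, 0.4, 0.5, 0.5, N)   # beta, sigma, gamma, tau_A, q1, q2, N
sol = solve_ivp(rhs, (0, 120), y0, args=params, max_step=1.0)
print(int(sol.y[0, -1]))                         # susceptibles remaining after 120 days
```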
Table 1 Definition of the different parameters and their priors and posteriors based on current literature [8,9,10,11] (see also Fig. A1). For the priors, some upper and/or lower bounds have been imposed by the observations. U is for uniform distribution and tN for truncated normal distribution (tN [mean, std., lower limit, upper limit])
As the peaks of those hospitalized and those admitted to ICU are concomitant we consider that a weak fraction, qI.τI of infectious with severe symptoms goes directly to ICU. Even if the majority of deaths occur in the ICU, a small fraction, qD.τD, can occur in hospital but not in intensive care.
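To make the compartmental structure concrete, the sketch below codes a reduced, deterministic version of the flows in Fig. 1, keeping only the S, E, I, A and R compartments with two-stage (Erlang, shape 2) E, I and A durations. The hospital pathway, deaths, vaccination and the stochastic time-varying β(t) of the full model (eqs. A1-A3) are omitted here, and all numerical values are purely illustrative rather than the fitted posteriors of Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seiar_rhs(t, y, beta, sigma, gamma, tau_A, q1, q2, N):
    """Reduced SEIAR right-hand side with two-stage (Erlang, shape 2)
    E, I and A compartments; hospital compartments are omitted."""
    S, E1, E2, I1, I2, A1, A2, R = y
    # Force of infection, as in the flow diagram:
    # lambda'(t) = beta(t) * (I1 + q1*I2 + q2*(A1 + A2)) / N,  lambda = lambda' * S
    lam = beta * (I1 + q1 * I2 + q2 * (A1 + A2)) / N * S
    dS  = -lam
    dE1 = lam - 2 * sigma * E1               # each sub-stage leaves at rate 2*sigma,
    dE2 = 2 * sigma * E1 - 2 * sigma * E2    # so the mean incubation stays 1/sigma
    dI1 = (1 - tau_A) * 2 * sigma * E2 - 2 * gamma * I1
    dI2 = 2 * gamma * I1 - 2 * gamma * I2
    dA1 = tau_A * 2 * sigma * E2 - 2 * gamma * A1
    dA2 = 2 * gamma * A1 - 2 * gamma * A2
    dR  = 2 * gamma * (I2 + A2)
    return [dS, dE1, dE2, dI1, dI2, dA1, dA2, dR]

# Purely illustrative values (constant beta, not the fitted posteriors of Table 1)
N = 4.9e6
y0 = [N - 10, 5, 5, 0, 0, 0, 0, 0]
sol = solve_ivp(seiar_rhs, (0, 120), y0, t_eval=np.arange(0, 121),
                args=(0.6, 1 / 4.0, 1 / 5.0, 0.5, 0.8, 0.5, N))
```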
An interesting sub-product of our framework is the possibility of estimating the time evolution of the effective reproduction number, Reff [12]. Reff is defined as the mean number of infections generated during the infectious period of a single infectious case at time t. It can be easily estimated using the steady-state form of a SEIR model. Taking into account the particularity of our model, which considers different transmission capacities for the different infectious compartments, its value is a function of both the fraction of asymptomatic infectious Ai(t), τA, and the fraction of symptomatic infectious Ii(t), 1-τA:
$$ {R}_{eff}(t)=\left(\frac{\left(1+{q}_1\right)}{2}.\left(1-{\tau}_A\right)+{q}_2.{\tau}_A\right).\frac{\beta (t)}{\gamma }.\frac{S(t)}{N} $$
where β(t) is the transmission rate, 1/γ is the infection duration, τA is the fraction of asymptomatic individuals in the population, (1-τA) the proportion of symptomatic infectious individuals, qi are the reduction in the transmissibility of some infected (I2) and asymptomatics (Ai) and N is the population size.
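As a quick illustration, the formula translates directly into a small function; the numerical values below are placeholders rather than fitted posteriors, and in practice β(t) and S(t) come from the reconstructed trajectories.

```python
def r_eff(beta_t, S_t, N, gamma, tau_A, q1, q2):
    """R_eff(t) = [ (1+q1)/2 * (1 - tau_A) + q2 * tau_A ] * beta(t)/gamma * S(t)/N."""
    mixing = (1 + q1) / 2 * (1 - tau_A) + q2 * tau_A
    return mixing * beta_t / gamma * S_t / N

# Illustrative values only (the initial R_eff reported in the paper is around 3.2)
print(r_eff(beta_t=0.9, S_t=4.9e6, N=4.9e6, gamma=1 / 5.0, tau_A=0.5, q1=0.8, q2=0.5))
```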
As we use a Brownian process to model the time-varying transmission rate, our model is stochastic; the likelihood is intractable and is estimated with particle filtering methods (Sequential Monte Carlo, SMC). The particle filter is then embedded in a Markov chain Monte Carlo framework, leading to the particle Markov chain Monte Carlo (PMCMC) algorithm [13]. More precisely, the likelihood estimated by SMC is used in a Metropolis-Hastings scheme (particle marginal Metropolis-Hastings) (see Supplementary information). The priors of the inferred parameters are in Table 1.
For the inference, the observations considered are the daily incident infectious at the beginning of the epidemic, new hospitalized patients, new ICU admissions, new deaths and hospital discharges. Hospital observations are only available after the lockdown (25th of March). Because these are count processes, we model their observations with Negative Binomial likelihoods (see Supplementary information). Current hospital data, i.e. observed hospitalized patients (H1 + H2 + ICU) and ICU beds used (ICU), have also been used in the inference process, and we make the assumption that these variables follow a normal distribution (see Supplementary information).
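As an illustration of the observation model for a single count data stream, the sketch below evaluates a Negative Binomial log-likelihood whose mean is the reporting rate times the simulated incidence. The parameterization and the names `reporting_rate` and `overdispersion` are assumptions made for illustration; the precise specification used in the inference is given in the Supplementary information.

```python
import numpy as np
from scipy import stats

def nb_loglik(observed, simulated, reporting_rate, overdispersion):
    """Negative Binomial log-likelihood of observed daily counts, with
    mean mu = reporting_rate * simulated and variance mu + mu**2 / overdispersion."""
    mu = reporting_rate * np.asarray(simulated, dtype=float)
    k = overdispersion
    p = k / (k + mu)          # scipy's (n, p) parameterization with n = k gives mean mu
    return stats.nbinom.logpmf(np.asarray(observed), k, p).sum()

# Toy example: one week of observed admissions vs. model-simulated incidence
print(nb_loglik(observed=[3, 5, 4, 8, 6, 7, 9],
                simulated=[40, 55, 50, 80, 70, 75, 95],
                reporting_rate=0.1, overdispersion=10.0))
```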
Figures 2 and 3 present our main results: Fig. 2 displays the fit of the model and Fig. 3 shows the dynamics of the model. The posteriors of the fitted parameters are in Table 1 and in Fig. A1.
Reconstruction of the observed dynamics of COVID-19 in Ireland. A The time evolution of both β(t) and Reff (t). B Simulated observed daily incident infectious. C-D New daily admissions to hospital and to ICU. (E) Daily new deaths. F Hospital discharges. G-H Cases in Hospital and in ICU each day. The black points are observations used by the inference process, the white points are the observations not used. The blue lines are the median of the posterior of the simulated trajectories, the mauve areas are the 50% Credible Intervals (CI) and the light blue areas the 95% CI. In (A) the orange area is the 50% CI of Reff, the vertical dashed lines show the date of the main NPI measures and the horizontal dashed-line Reff = 1. For all the graphs, the reporting rates are applied to the model trajectories (Fig. 3) as during the inference process for comparison to the observations
Dynamics of COVID-19 in Ireland. A Time evolution of both susceptibles S(t) and Reff (t). (B) Infected non infectious, E(t) = E1(t) + E2(t). C Symptomatic infectious I(t) = I1(t) + I2(t). D Asymptomatic infectious A(t) = A1(t) + A2(t). (E) Hospitalized people H(t) = H1(t) + H2(t) + ICU(t). F People in ICU, ICU(t). G Cumulative death D(t). (H) Removed R(t). The blue lines are the median of the posterior of the simulated trajectories, the mauve areas are the 50% Credible Intervals (CI) and the light blue areas the 95% CI. In (A) the orange area is the 50% CI of Reff and the horizontal dashed-line indicates Reff = 1. In (H) the red line shows the median of R(t) when the "effectively protected vaccinated people" (see eq. A2) have been subtracted. The black points are observations used by the inference process, the white points are the observations not used
Figure 2 illustrates the potential of the framework to effectively describe the numerous observations of this complex epidemic. The main characteristic this framework offers is the ability to reconstruct the time variation of the transmission rate β(t) (Fig. 2A) that is needed to fit the observations. We can then compute the time-variation of Reff (Fig. 2A). The initial value of Reff is around 3.2 in accordance with numerous published papers (e.g. [14]). The peak of Reff around the time of the first hospital observations is presumably a compensation effect of the model to accommodate diverging trends between reported case data and hospital data. Then one can note a decrease of 78% of Reff between the 1st of March and the 1st of May and a decrease of 86% between the 12th of March (school closure and lock down of offices, restrictions on travel etc) and the 1st of May (Fig. 2A). The reduction in the transmission following the second lockdown was around 20% (Fig. 2A). Nevertheless the reduction of Reff was again significant (70%) for the large wave that was observed in January 2021, largely due to the UK variant [15, 16] (Fig. 2A). Given the temporality of the decline compared to the timing of the NPIs, these sharp decreases seem to be the result of the implementation of the mitigation measures.
Another important characteristic of this epidemic is the fact that the peak of daily hospital admission and daily ICU admission are concomitant (Figs. 2G-H), this concomitance has influenced the structure of the model we developed.
A final important point concerns the observed daily incident infectious. It is a source of data that the model has only partly taken into account in the inference process (Fig. 2B): we fit the model to the daily incident infectious up to March 25th only (black points on Fig. 2B), and plot our daily incident infectious estimates with the corresponding estimate of the reporting rate, with a median of 0.09 (95% CI: 0.06–0.14). These data highlight that the first peak in observed incident infectious comes 2–3 weeks late, and is higher than expected. This shows that it is important to take into account a delay in reporting, for instance using models for nowcasting [17, 18]. This also clearly illustrates that the reporting rate has greatly evolved during the course of the epidemic, with part of the increase perhaps explained by a greater proportion of asymptomatics tested as time went on, whereas in the model the people tested are considered symptomatic. It is worth noting that as the epidemic progressed, after November 2020, the observed positive cases became more consistent with the hospital data (Fig. 2B-D).
Figure 3 displays the dynamics of the model. Figures 3C-D show that the asymptomatic infectious are as important as the symptomatic ones, but with a larger uncertainty due to the lack of information available in the data. Indeed, the data used contain very little information on asymptomatics, and we observe identical prior and posterior distributions for the fraction of asymptomatics, τA (see Fig. A1).
Our model also allows us to estimate the sero-prevalence (Fig. 4). Our estimate for the 1st of July 2020 is 2.1% (95% CI: 1.2–3.6%) and is in complete accordance with a survey study that shows a sero-prevalence of 1.7% (95% CI: 1.1–2.4%) between the 22nd of June and the 16th of July 2020 [19]. Figure 4 displays our estimation of the time evolution of the sero-prevalence, which shows a large increase from the beginning of January 2021 due to the high propagation of the UK variant [15, 16] but also, it would seem, due to the rolling out of the vaccination.
Estimation of the sero-prevalence and comparison with the value from a serological survey study [19]. The blue lines are the median of the posterior of the simulated trajectories, the mauve areas are the 50% CI and the light blue areas the 95% CI. The black line, around June–July, is for the median value of the serological survey, the orange area is for its 95%CI. The red line shows the median of sero-prevalence without the effect of vaccination simply by subtracting from the removed (R(t)) the "effectively protected vaccinated people" (see eq. A2) and the dashed red lines its 95%CI
The need, globally, to accurately model COVID-19 mitigation strategies and asymptomatic transmission in order to plan for the burden on hospital admissions was identified early in the pandemic [20]. Davies et al. [21], within their models of the United Kingdom, predicted that extreme measures would probably be required to prevent an excess of demand on hospital beds, especially those in ICUs, during 2021. Similarly in France, Di Domenico et al. [9] used modeling techniques calibrated with hospital admission data to model the impact of mitigation strategies and to predict the scale of the epidemic within the Ile-de-France region. In the same way, we provide estimations of the dynamics of the COVID-19 epidemic in Ireland and its key parameters. The main characteristic of our approach is that it accounts for non-stationarity by embedding time-varying parameters in a stochastic model coupled with Bayesian inference. This mechanistic modeling framework enables us to reconstruct the temporal evolution of the transmission rate of COVID-19 based only on the available data, under the non-specific assumption that it follows a basic stochastic process constrained by the observations. We can also describe the time-evolving COVID-19 epidemic, quantifying the effects of mitigation measures on the virus transmission during and after the three waves suffered, and also estimate the sero-prevalence.
With our approach, which mainly uses well-documented hospital data, we found a reduction of transmissibility of the SARS-CoV-2 of 78–86% after the implementation of the mitigation measures for the first wave. Our reduction estimates were around 20% for the second wave in October–November 2020 but more than 70% for the third wave in January–February 2021. These reductions in transmission may reflect the nature of the mitigation measures introduced in the country. For the second wave, these measures were less restrictive than during the first and third waves; nevertheless, the second wave was also less severe. These results are in accordance with the results published on the effects of mitigation measures in Europe during the first wave [14, 22]. For example, Garchitorena et al. [22], by comparing 24 non-pharmaceutical interventions, found that the median decrease in viral transmission was 74%, which is enough to suppress the epidemic, and that a partial implementation of different measures resulted in lower than average response efficiency.
Our results also highlighted that the observed confirmed cases are only a small fraction of the total number of cases, only the tip of the iceberg (see [4]). This underlines that human behavior in the face of testing, as well as the delays in reporting, must be accounted for, for instance using models for now-casting [17, 18]. For example, in France it has been estimated that the detection rate increased from 7% in mid-May to 40% by the end of June, compared to well below 5% at the beginning of the epidemic [23]. Data from the hospital system published by health authorities are therefore crucial for understanding the course of this epidemic. These data are well measured, but are observed with a delay in relation to contamination. Nevertheless, these delays can easily be accounted for by mathematical models.
Our study is not without limitations. Our model, like all complex SEIR models developed for COVID-19, is non-identifiable, which means that several solutions likely exist and we present only one of the most likely. This point is often overlooked (but see Li et al. [8]). The major limitation is the use of the classical homogeneous mixing assumption, in which all individuals are assumed to interact uniformly, ignoring heterogeneity between groups by sex, age and geographical region. In any case, accounting for an age structure and a mixing matrix alone appears insufficient, as heterogeneity of contacts is important (see [24]); however, this kind of data is not easily available. Another weakness is perhaps the neglect of age structure in the model to simulate age-based predictions as we enter the time of children returning to school. Addressing these weaknesses is, however, a direction for future research, given the performance of the current model. Nevertheless, in our opinion, these limitations are compensated for by taking the non-stationarity of this epidemic into account and by the fact that our results are mainly driven by hospital data, which are more accurate than the number of infected cases. Precise data from serological studies at different time periods would significantly reduce the uncertainties of the model predictions [25, 26].
The key strength of the current Irish study is the fit of the model to the current observed data on hospitalizations, deaths and ICU cases, which were likely to be the most accurate COVID-19 related data [27]. This allows us to present the first Irish modeling estimates of sero-prevalence. The model predicted that in Ireland, as of the 1st July 2020, between 1.2 and 3.5% of the population had been infected either as a symptomatic or asymptomatic case. This is in complete accordance with preliminary national serological results, which found that among 12 to 69 year olds living in Ireland the sero-prevalence rate was estimated between 26th June and 20th July 2020 at 1.7% (95% CI: 1.1–2.4%) [19]. Due to the high number of infected people during the second wave and especially during the third wave, by mid-May 2021 the sero-prevalence was estimated to be greater than 20%. This high value also reflects the result of the rolling out of the national vaccination programme (Fig. 4).
For the first wave, our sero-prevalence predictions contrast with those of more densely populated areas. For the first wave, estimated serological prevalence in the United Kingdom, based on a random sample of home-based testing, found that 6.0% (95% CI: 5.8–6.1%) of individuals tested positive; of these, one third (32.2%, (95% CI, 31.0–33.4%)) reported no symptoms and were asymptomatic [21, 28]. Overall the authors estimated that 3.36 million (3.21 million to 3.51 million) people had been infected with SARS-CoV-2 in England by the end of June 2020. This estimate was substantially higher than the recorded number of 315,000 cases in the UK. This is in accordance with observations from Spain, where between April and May 2020 the sero-prevalence was 5% and only a few of these people had had a PCR test [29].
Undocumented infections, particularly asymptomatic infections, are known to be the silent drivers of infection. Many studies [29,30,31,32,33,34,35] that have investigated the impact of asymptomatic carriers on COVID-19 transmission state that, in a public health context, the silent threat posed by the presence of asymptomatic carriers in the population makes the COVID-19 pandemic much more difficult to control. These studies show that the population of individuals with asymptomatic COVID-19 infections is contributing to driving the growth of the pandemic. Li et al. [8] estimate that in the early stages of the epidemic in China 86% of all infections were undocumented (95% CI: 82–90%). Perhaps more important, according to Li et al. [8], is that the transmission rate of undocumented infections per person was 55% of the transmission rate of documented infections (95% CI: 46–62%); yet, because of their greater numbers, undocumented infections were the source of 79% of the documented cases. In Ireland, we can see from Fig. 3 that our model estimates that the number of asymptomatic infectious is of the same order of magnitude as the number of symptomatic infectious, but with a larger uncertainty. This highlights that there is not enough information in the data to go beyond the published values that have been considered in the prior of τA. It also emphasizes the importance of asymptomatic transmission, which is very difficult to observe. However, given this large uncertainty, computing the share of asymptomatic transmission is not meaningful.
We also found other interesting results such as a significant similarity between the trend of mobility and our estimation of the transmission between the epidemic waves (see Fig. A2 and [36]), highlighting the importance of following the evolution of mobility when relaxing mitigation measures to anticipate the future evolution of the spread of the SARS-CoV-2 [37].
In this work we have used a stochastic framework that accounts for the time-varying nature of the COVID-19 epidemic, using time-varying parameters and hospital data to provide a description of this evolving epidemic. Our results demonstrate that Ireland has significantly reduced transmission by employing mitigation measures, physical distancing and long lockdowns for wave 3. This has avoided the saturation of healthcare infrastructures, flattened the epidemic curve during each wave and likely greatly reduced mortality. Our framework, which accounts for the non-stationarity of the transmission, also offers the possibility of computing the time-varying Reff(t) and thus provides an interesting tool to follow the evolution of the COVID-19 epidemic. This tool could prove particularly useful in analyzing this new phase of this special epidemic, as new variants, potentially more transmissible and/or more infectious, could continue to emerge and changing mitigation measures alter silent transmission.
All surveillance data are available at the site from Health Protection Surveillance Centre (HPSC): https://www.hpsc.ie/a-z/respiratory/coronavirus/novelcoronavirus/casesinireland/epidemiologyofcovid-19inireland/
or https://covid19ireland-geohive.hub.arcgis.com/
The data used and the code are at https://www.dropbox.com/s/n0hi5syu80nup5a/ssm_SEIAR_Ireland.zip?dl=0.
HPSC: Health Protection Surveillance Centre
ICU: Intensive care units
NPI: Non-pharmaceutical interventions
PMCMC: Particle Markov chain Monte Carlo
RT-PCR: Reverse transcription polymerase chain reaction
SEIR: Susceptible-Exposed-Infectious-Recovered
SMC: Sequential Monte Carlo
Who situation reports. 2020. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/
Health Protection Surveillance Centre (HPSC). COVID-19 Cases in Ireland. 2020. https://www.hpsc.ie/a-z/respiratory/coronavirus/novelcoronavirus/casesinireland/ accessed 29th September 2020.
Cazelles B, Champagne C, Dureau J. Accounting for non-stationarity in epidemiology by embedding time-varying parameters in stochastic models. PLoS Comput Biol. 2018;14(8):e1006211. https://doi.org/10.1371/journal.pcbi.1006211.
Richterich P. Severe underestimation of COVID-19 case numbers: effect of epidemic growth rate and test restrictions. MedRxiv. 2020;2020.04.13.20064220.
Pitzer VE, Chitwood M, Havumaki J, Menzies NA, Perniciaro S, Warren JL, et al. The impact of changes in diagnostic testing practices on estimates of COVID-19 transmission in the United States. Am J Epidemiol. 2021:kwab089. https://doi.org/10.1093/aje/kwab089.
Health Protection Surveillance Centre (HPSC). Ireland's COVID-19 Data Hub. 2020. https://covid19ireland-geohive.hub.arcgis.com/
King AA, Domenech de Cellès M, Magpantay FM, Rohani P. Avoidable errors in the modelling of outbreaks of emerging pathogens, with special reference to Ebola. Proc R Soc B Biol Sci. 2015;282:20150347.
Li R, Pei S, Chen B, Song Y, Zhang T, Yang W, et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2). Science. 2020;368(6490):489–93. https://doi.org/10.1126/science.abb3221.
Di Domenico L, Pullano G, Sabbatini CE, Boëlle PY, Colizza V. Impact of lockdown on COVID-19 epidemic in Île-de-France and possible exit strategies. BMC Med. 2020;18(1):240. https://doi.org/10.1186/s12916-020-01698-4.
Prem K, Liu Y, Russell TW, Kucharski AJ, Eggo RM, Davies N, et al. The effect of control strategies to reduce social mixing on outcomes of the COVID-19 epidemic in Wuhan, China: a modelling study. Lancet Public Health. 2020;5(5):e261–70. https://doi.org/10.1016/S2468-2667(20)30073-6.
Prague M, Wittkop L, Clairon Q, Dutartre D, Thiébaut R, Hejblum BP. Population modeling of early COVID-19 epidemic dynamics in French regions and estimation of the lockdown impact on infection rate. MedRxiv. 2020;2020.04.21.20073536.
Cazelles B, Champagne C, Nguyen-Van-Yen B, Comiskey C, Vergu E, Roche B. A mechanistic and data-driven reconstruction of the time-varying reproduction number: application to the COVID-19 epidemic. PLoS Comput Biol. 17(7):e1009211. https://doi.org/10.1371/journal.pcbi.1009211.
Andrieu C, Doucet A, Holenstein R. Particle markov chain Monte Carlo methods. J R Stat Soc Ser B. 2010;72(3):269–342. https://doi.org/10.1111/j.1467-9868.2009.00736.x.
Flaxman S, Mishra S, Gandy A, Unwin HJT, Mellan TA, Coupland H, et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature. 2020;584(7820):257–61. https://doi.org/10.1038/s41586-020-2405-7.
Funk T, Pharris A, Spiteri G, Bundle N, Melidou A, Carr M, et al. COVID study groups. Characteristics of SARS-CoV-2 variants of concern B.1.1.7, B.1.351 or P.1: data from seven EU/EEA countries, weeks 38/2020 to 10/2021. Euro Surveill. 2021;26:2100348.
Mallon PW, Crispie F, Gonzalez G, Tinago W, Leon AG, McCabe M, et al. Whole-genome sequencing of SARS-CoV-2 in the Republic of Ireland during waves 1 and 2 of the pandemic. medRxiv. 2021;2021.02.09.21251402.
Höhle M. An der Heiden M. Bayesian nowcasting during the STEC O104: H4 outbreak in Germany, 2011. Biometrics. 2014;70(4):993–1002. https://doi.org/10.1111/biom.12194.
Bird S, Nielsen B. Now-casting of COVID-19 deaths in English hospitals. 2020; Nuffield College; (preprint) (available from: https://users.ox.ac.uk/~nuff0078/Covid/).
HSE. Preliminary report of the results of the Study to Investigate COVID-19 Infection in People Living in Ireland (SCOPI): A national sero-prevalence study, June–July. 2020. Available from https://www.hpsc.ie/a-z/respiratory/coronavirus/novelcoronavirus/scopi/SCOPI%20report%20preliminary%20results%20final%20version.pdf. Accessed 29 Sept 2020.
Anderson RM, Heesterbeek H, Klinkenberg D, Hollingsworth TD. How will country-based mitigation measures influence the course of the COVID-19 epidemic? Lancet. 2020;395(10228):931–4. https://doi.org/10.1016/S0140-6736(20)30567-5.
Davies NG, Klepac P, Liu Y, et al. Age-dependent effects in the transmission and control of COVID-19 epidemics. medRxiv. 2020;2020.03.24.20043018.
Garchitorena A, Gruson H, Cazelles B, Roche B. Quantifying the efficiency of non-pharmaceutical interventions against SARS-COV-2 transmission in Europe. MedRxiv. 2020;2020.08.17.20174821.
Pullano G, Di Domenico L, Sabbatini CE, Valdano E, Turbelin C, Debin M, et al. Underdetection of cases of COVID-19 in France threatens epidemic control. Nature. 2021;590(7844):134–9. https://doi.org/10.1038/s41586-020-03095-6.
Britton T, Ball F, Trapman P. A mathematical model reveals the influence of population heterogeneity on herd immunity to SARS-CoV-2. Science. 2020;369(6505):846–9. https://doi.org/10.1126/science.abc6810.
Metcalf CJE, Farrar J, Cutts FT, Basta NE, Graham AL, Lessler J, et al. Use of serological surveys to generate key insights into the changing global landscape of infectious disease. Lancet. 2016;388(10045):728–30. https://doi.org/10.1016/S0140-6736(16)30164-7.
Champagne C, Salthouse DG, Paul R, Cao-Lormeau VM, Roche B, Cazelles B. Structure in the variability of the basic reproductive number (R0) for Zika epidemics in the Pacific islands. eLife. 2016;5:e19874. https://doi.org/10.7554/eLife.19874.
Abbott S, Hellewell J, Thompson RN, Sherratt K, Gibbs HP, Bosse NI, et al. Estimating the time-varying reproduction number of SARS-CoV-2 using national and subnational case counts. Wellcome Open Res. 2020;5:112.
Ward H, Atchison C, Whitaker M et al. Antibody prevalence for SARS-CoV-2 following the peak of the pandemic in England: REACT2 study in 100 000 adults. 2020. Available from https://www.imperial.ac.uk/media/imperial-college/institute-of-global-health-innovation/Ward-et-al-120820.pdf
Pollán M, Pérez-Gómez B, Pastor-Barriuso R, Oteo J, Hernán MA, Pérez-Olmeda M, et al. Prevalence of SARS-CoV-2 in Spain (ENE-COVID): a nationwide, population-based seroepidemiological study. Lancet. 2020;396(10250):535–44. https://doi.org/10.1016/S0140-6736(20)31483-5.
Aguilar JB, Faust JS, Westafer LM, Gutierrez JB. Investigating the impact of asymptomatic carriers on COVID-19 transmission. medRxiv. 2020;2020.03.18.20037994.
Comiskey C, Snel A, Banka S. The second wave: estimating the hidden asymptomatic prevalence of Covid-19 in Ireland as we plan for imminent immunisation. HRB Open Res Under Rev. 2021;4. https://doi.org/10.12688/hrbopenres.13206.1.
Fox SJ, Pasco R, Tec M, Du Z, Lachmann M, Scott J, et al. The impact of asymptomatic COVID-19 infections on future pandemic waves. MedRxiv. 2020;2020.06.22.20137489.
Moghadas SM, Fitzpatrick MC, Sah P, Pandey A, Shoukat A, Singer BH, et al. The implications of silent transmission for the control of COVID-19 outbreaks. Proc Natl Acad Sci U S A. 2020;117(30):17513–5. https://doi.org/10.1073/pnas.2008373117.
Cazelles B, Comiskey C, Nguyen-Van-Yen B, Champagne C, Roche B. Parallel trends in the transmission of SARS-CoV-2 and retail/recreation and public transport mobility during non-lockdown periods. Int J Infect Dis. 2021;104:693–5. https://doi.org/10.1016/j.ijid.2021.01.067.
Zachreson C, Mitchell L, Lydeamore MJ, Rebuli N, Tomko M, Geard N. Risk mapping for COVID-19 outbreaks in Australia using mobility data. J R Soc Interface. 2021;18(174):20200657. https://doi.org/10.1098/rsif.2020.0657.
Subramanian R, He Q, Pascual M. Quantifying asymptomatic infection and transmission of COVID-19 in New York City using observed cases, serology, and testing capacity. Proc Natl Acad Sci U S A. 2021;118(9):e2019716118. https://doi.org/10.1073/pnas.2019716118.
ECDC. The role of asymptomatic and pre-symptomatic individuals update august 10th 2020. 2020. Available from https://www.ecdc.europa.eu/en/covid-19/latest-evidence/transmission. Accessed 25 Sept 2020.
We thank Una Ni Mhaoldhomhnaigh, who extracted and managed the data during the lockdown and edited some of the previous versions of the manuscript.
BC is partially supported by a grant ANR Flash Covid-19 from the "Agence Nationale de la Recherche" (DigEpi). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
UMMISCO, Sorbonne Université, Paris, France
Bernard Cazelles
INRAE, Université Paris-Saclay, MaIAGE, Jouy-en-Josas, France
Eco-Evolution Mathématique, IBENS, UMR 8197, CNRS, Ecole Normale Supérieure, Paris, France
Bernard Cazelles & Benjamin Nguyen-Van-Yen
Swiss Tropical and Public Health Institute, Basel, Switzerland
Clara Champagne
University of Basel, Basel, Switzerland
School of Nursing and Midwifery, Trinity College Dublin, The University of Dublin, Dublin, Ireland
Catherine Comiskey
Benjamin Nguyen-Van-Yen
BC and CaC contributed to conception and design of the study. BC, BNVY and ClC constructed and ran the model. All authors analyzed the simulations. The first draft of the manuscript was written by BC and CaC, all authors provided comments, edited and approved the final manuscript.
Correspondence to Bernard Cazelles.
The authors declare that they have no competing interests.
Model equation, Bayesian inference, Prior and Posterior distributions, Figs. A1-A2. Supplementary Fig. A1. Prior and posterior distributions for the model inferences presented Fig. 2. I1(0) initial value of infectious, ν is the volatility of the Brownian process of β(t), 1/σ the average duration of the incubation, 1/γ the average duration of infectious period, 1/κ the average hospitalized period, 1/δ the average time spent in ICU, τA the fraction of asymptomatics, τH the fraction of infectious hospitalized, τI the fraction of ICU admission, τD the death rate, ρI the reporting rate for the infectious, ρH the reporting rate for the hospitalized people. The blue distributions are the priors and the discrete histograms are the posteriors. Supplementary Fig. A2. Parallel trends in our estimated Reff(t) (black lines) and Google Mobility (https://www.google.com/covid19/mobility/), retail and recreation mobility (continuous blue line) and transport mobility (dashed blue line) in Ireland. The mobility time series have been smoothed using moving average over a 7 days window. The vertical black dashed lines correspond to the start dates of the main mitigation measures.
Cazelles, B., Nguyen-Van-Yen, B., Champagne, C. et al. Dynamics of the COVID-19 epidemic in Ireland under mitigation. BMC Infect Dis 21, 735 (2021). https://doi.org/10.1186/s12879-021-06433-9
Stochastic model
Time varying parameters
Resource Blocks In LTE
Dynamic RB Allocations
Simulation and Discussions
Advances in Computer Sciences
A Dynamic Allocation Scheme for Resource Blocks Using ARQ Status Reports in LTE Networks
Tsang-Ling Sheu1*, Kang-Wei Chang1, Fu-Ming Yeh2
1 Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan
2 The Broadband Wireless Department, Gemtek Technology Company, Hsinchu, Taiwan
Tsang-Ling Sheu
E-mail: [email protected]
Received Date:27 October 2017
Accepted Date:29 November 2017
Published Date:12 January 2018
DOI: 10.31021/acs.20181102
Manuscript ID: ACS-1-102
Publisher: Boffin Access Limited.
Volume: 1.1
Journal Type: Open Access
Copyright: © 2018 Sheu TL, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 international License.
Sheu TL. A Dynamic Allocation Scheme for Resource Blocks using ARQ Status Reports in LTE Networks. Adv Comput Sci. 2018 Jan;1(1):102
This paper presents a Dynamic RB (Resource Blocks) Allocation (DRBA) scheme for LTE (Long-Term Evolution) networks that utilizes Automatic Repeat Request (ARQ). From the ARQ status report, an evolved Node B (eNodeB) computes the number of packets successfully received per unit time by each User Equipment (UE). Thus, an eNodeB can adequately allocate Resource Blocks (RB) for a UE. In DRBA, we consider three different traffic types (audio, video, and data) with priority from the highest to the lowest. To prevent the starvation of data traffic, we set an upper bound on the RBs for audio and video traffic. To demonstrate the superiority of DRBA, we performed NS-3 simulations. Simulation results show that DRBA performs much better than the traditional Adaptive Modulation and Coding (AMC) scheme. In particular, when a UE is in a high interference/noise environment, DRBA can achieve higher utilization, a lower blocking rate, and admit more successfully connected UEs.
RB; ARQ; LTE; Traffic Types; AMC
The AMC (Adaptive Modulation and Coding) scheme used in LTE (Long Term Evolution) networks selects an adequate modulation technique, such as BPSK, QPSK, or QAM, based on the channel quality of a UE (User Equipment) [1]. From the requested data rate and the selected modulation technique, the number of RB (Resource Blocks) required for a UE can be determined [2]. However, AMC, when allocating RB, does not take into account (i) the processing capability of a UE, and (ii) the unpredictable channel quality. Thus, in this paper, by estimating the processing capability of a UE and by adapting to the changing channel quality, we propose a dynamic allocation scheme for RB that utilizes the status report of automatic repeat request (ARQ) [3].
Previous work on RB allocations in LTE networks can be divided into three categories. The first category allocates RB based on the requested Data Rate (DR) of a UE. For example, Maria, et al. proposed a fairness-based resource allocation in the LTE downlink [4]. Their scheme can achieve fairness while maintaining a higher throughput. Yet, the authors assumed the same upper bound for all channels, which is not realistic. Zhang, et al. proposed RB allocations with priority based on the video encoding bit rate and the queuing length of a UE [5]. By considering that a UE may be located in the overlapping area of two eNodeB, Wang, et al. proposed a method which allows the two eNodeB to exchange their transmitting power and then decide which one is responsible for serving the UE [6]. The second category allocates RB based on traffic types, such as real-time and non-real-time. Lee, et al. built a mathematical model to compute the system throughput by assuming that real-time traffic has the highest priority for RB allocations [7]. Huo, et al. [8] proposed RB allocations for both real-time and non-real-time traffic by using a utility function for achieving fairness of subcarriers. The third category allocates RB based on AMC and the channel quality indicator (CQI). For example, Hou, et al. derive a new SINR equation by considering channel gains, total system RB, and signal intensity [9]. Each UE is then allocated RB with different weights. Tham, et al. [10] proposed a RB allocation scheme by considering CQI, traffic types, and the bit error rate (BER) reported by each UE. El-Hajj, et al. [11] proposed a queue-aware mechanism, which can allocate RB based on the queue length of each UE. The authors proved their scheme can achieve better throughput when compared to a proportional fairness scheme. Aboul Hassan et al. and Ghosh et al. proposed RB allocations in a multi-cell environment [12, 13]. Prior to RB allocations, neighboring eNBs communicate about how to divide the shared bandwidth such that the interference can be kept to a minimum. Liu, et al. [14] proposed a scheme to determine whether or not to allocate RB for the retransmission of erroneous packets. In other words, if the packet retransmission delay exceeds its real-time constraint, RB for packet retransmission are not allocated. Finally, Zhu, et al. derive an objective function which can allocate different numbers of RB to different layers (such as the base layer and enhancement layers) of a scalable video stream [15].
Unlike the previous work, in this paper we propose a dynamic RB allocation (DRBA) scheme for LTE networks. The major innovation of the proposed DRBA is that it fully utilizes the periodical ARQ status reports returned from a UE. Specifically, from the ARQ status report, the number of packets successfully received per unit time by each UE can be computed. Thus, an adequate amount of RB, equivalent to the effective data rate, can be allocated to each UE, taking into account changing channel qualities associated with different modulation techniques.
The remainder of this paper is organized as follows. In Section 2, we introduce RB allocations in an OFDMA frame of LTE. In Section 3, we design the dynamic RB allocations and derive the effective data rate according to the ARQ status report returned from a UE. In Section 4, the simulation of DRBA on NS-3 is performed and the simulation results are discussed. Finally, concluding remarks are drawn in Section 5.
According to the 3GPP standard, an OFDMA (Orthogonal Frequency Division Multiple Access) frame in LTE appears periodically every 10 msec [1]. In an OFDMA frame, there are 20 time slots divided into 10 sub-frames. Thus, each sub-frame contains two time slots, with one slot lasting 0.5 msec. As shown in (Figure 1), a Resource Block (RB) comprises two dimensions, a time slot with 7 symbols and 12 sub-carriers. Notice that a Resource Element (RE), consisting of exactly one subcarrier and one symbol, is the smallest entity and the building block of an RB (one RB contains 84 RE).
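A quick check of the resource-grid arithmetic just described (12 subcarriers and 7 symbols per RB, 20 slots of 0.5 msec per frame):

```python
subcarriers_per_rb = 12
symbols_per_slot = 7
re_per_rb = subcarriers_per_rb * symbols_per_slot        # 84 RE per RB
slots_per_frame = 20                                     # 10 sub-frames x 2 slots
frame_duration_ms = slots_per_frame * 0.5                # 10 ms per OFDMA frame
print(re_per_rb, frame_duration_ms)
```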
RB in an OFDMA Frame of LTE
In this paper, to be more flexible in allocating RB for a UE, we assume the size of a RB may contain any number of RE, which can be computed from the requested data rate. Prior to the introduction of DRBA, we begin to review the conventional RB allocations based on AMC.
RB allocations by AMC
As shown in (Figure 2), depending on the Channel Quality Indicator (CQI), a modulation technique, such as BPSK, QPSK, or QAM, and the corresponding subcarriers are selected to meet the requested data rate of a UE. The conventional RB allocation uses the requested data rate and the channel quality reported from a UE to compute the size of RB (in terms of the number of RE). In this paper, we refer to this conventional method as RB Allocated by AMC (RAMC). However, RAMC is not an effective RB allocation method, since channel quality may vary with time and change along with the movement of a UE. For a long-lived application, such as video streaming, the actual size of RB required by a UE may become smaller when the channel quality becomes poorer. Thus, RAMC may waste too much bandwidth due to excessive RB allocations, which harmfully limits the number of UEs admitted to the LTE network. To compare with the size of RB allocated by RAMC, (Figure 2) illustrates the actual RB size in red color.
To remedy the excessive-allocation problem, in this paper we propose DRBA, which can dynamically compute the actual size of RB by utilizing the ARQ status report sent to eNodeB regularly from a UE.
Re-design of ARQ Status Report
The ARQ status report in LTE networks is originally designed for the eNodeB to retransmit the packets in error. Basically, it contains a negative ACK field, NACK_SN (10 bits), representing the sequence number of every erroneous packet. The original design of the ARQ status report is not efficient for the eNodeB to compute the number of packets successfully received by a UE.
To allow the eNodeB to more efficiently compute the number of packets successfully received by a UE, as shown in (Figure 3), we re-design the ARQ status report. Two new fields are added to the status report.
Re-design of ARQ status report.
They are RL_SN (10 bits), representing the largest sequence number received by a UE so far, and E (2 bits), indicating whether consecutive packets in error exist between two NACK_SN. Thus, with the help of RL_SN and two adjacent NACK_SN, the number of packets successfully received by a UE can be computed by the eNodeB.
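One possible way for the eNodeB to turn a re-designed status report into the quantities used later in Eq. (3.1) is sketched below. The report layout (one RL_SN plus a list of NACK_SN) and the use of the difference between consecutive RL_SN values as the per-interval PDU count are simplifying assumptions, and sequence-number wrap-around is ignored.

```python
def counts_from_report(prev_rl_sn, report):
    """Derive N_RL and N_NACK for one reporting interval from a re-designed
    ARQ status report, assumed to be a dict with keys 'RL_SN' (largest sequence
    number received so far) and 'NACK_SN' (list of erroneous sequence numbers)."""
    n_rl = report["RL_SN"] - prev_rl_sn      # PDUs delivered since the last report
    n_nack = len(report["NACK_SN"])          # PDUs reported in error
    return n_rl, n_nack

def effective_rate(n_rl, n_nack, delta_t, pdu_size_bits):
    """Effective data rate R_eff of Eq. (3.1), in bits per second."""
    return (n_rl - n_nack) / delta_t * pdu_size_bits

# Example: 120 PDUs delivered, 6 NACKed, 1500-byte PDUs over a 100 ms interval
print(effective_rate(120, 6, 0.1, 1500 * 8))
```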
Size of RB vs Requested Data Rate
Once the eNodeB knows the number of packets successfully received by a UE from the ARQ status report, the next step is to compute the size of RB (in terms of the number of RE) to be allocated to the UE. (Table I) shows the parameters used in deriving an equation for the number of RE contained in an RB. First, let us compute the effective data rate (Reff) as in Eq. (3.1), where NRL represents the number of PDU (Protocol Data Units) sent from the eNodeB in ∆t, and NNACK represents the number of PDU rejected by a UE in ∆t. Notice that both NRL and NNACK can be easily computed from the re-designed ARQ status report regularly transmitted from a UE.
Table 1: Parameters used in DRBA
BW: Total bandwidth (Hz)
f: Frequency of a single carrier (Hz)
Duration of an OFDMA frame (sec)
tslot: Duration of a slot (sec)
Nsym: Number of symbols in a slot
lave: Average number of bits in a symbol
$$R_{eff} = \frac{N_{RL} - N_{NACK}}{\Delta t} \times (\mathrm{PDU\ size}) \qquad (3.1)$$
Eq. (3.2) shows the symbol rate (Rsym), computed from the number of symbols (Nsym) in a slot and a slot duration (tslot). Eq. (3.3) shows the data rate of a RE (RRE), where lave represents the average number of data bits carried by a symbol.
$$R_{sym} = \frac{N_{sym}}{t_{slot}} \qquad (3.2)$$
$$R_{RE} = R_{sym} \times l_{ave} \qquad (3.3)$$
$$\eta = \begin{cases} \left[ \dfrac{R_{req}}{R_{RE}} \right], & \text{if RAMC} \\[2ex] \left[ \dfrac{R_{eff}}{R_{RE}} \right], & \text{if DRBA} \end{cases} \qquad (3.4)$$
Finally, we can derive the size of RB in terms of the number of RE (η) for RAMC and for DRBA, respectively, as shown in Eq. (3.4), where Rreq represents the initial requested data rate of a UE. As an example, (Figure 4) illustrates the RB allocations using DRBA. We assume there are BW/f carriers, divided into L categories based on the CQI. Notice that l (1 ≤ l ≤ L) represents the number of bits carried by a symbol in OFDMA frames. RB in blue denote those allocated by RAMC, and RB in purple denote those allocated by DRBA. From Eq. (3.4), since Reff is smaller than Rreq, the size of RB in the latter is relatively smaller than in the former.
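The sketch below strings Eqs. (3.2)-(3.4) together. The square bracket in Eq. (3.4) is interpreted here as rounding up to the next whole RE, and the numerical values (7 symbols per 0.5 msec slot, 2 bits per symbol on average, a 2 Mbps request) are illustrative only.

```python
import math

def re_data_rate(n_sym, t_slot, l_ave):
    """Data rate of one resource element: R_sym = N_sym / t_slot (Eq. 3.2),
    R_RE = R_sym * l_ave (Eq. 3.3)."""
    return (n_sym / t_slot) * l_ave

def rb_size(rate_bps, r_re):
    """Number of RE per RB (Eq. 3.4); pass R_req for RAMC or R_eff for DRBA."""
    return math.ceil(rate_bps / r_re)

r_re = re_data_rate(n_sym=7, t_slot=0.5e-3, l_ave=2)   # 28 kbit/s per RE
print(rb_size(2e6, r_re))                               # RE needed for a 2 Mbps stream
```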
RB allocations using DRBA.
To analyze the proposed DRBA, we perform simulation on NS-3 by embedding the LTE modules. (Table II) shows the parameters and settings used in NS-3 simulation. Notice that we generate three types of multimedia traffic, audio, video, and data. Each type of traffic has different bit rates and durations.
Table II: Parameters and Settings in NS-3 Simulation
System Bandwidth
Carrier Bandwidth
OFDMA Frame
Slot duration
Modulation (MCS): BPSK, QPSK, 16-QAM, 64-QAM
Carriers per MCS
Speed of UE: 10 km/hr - 60 km/hr
Audio Traffic: 14 - 50 Kbps
0.5 - 2 Mbps
3 - 8 Mbps
Audio Duration: 1.5 min
Data Duration
Video Duration
Comparison of the average blocking rates.
(Figure 5) shows the comparison of average blocking rates between RAMC (allocated by AMC) and DRBA (adjusted by ARQ). A connection in an LTE network can be blocked when the requested data rate (or the size of RB) cannot be granted. The average blocking rate is therefore defined as the average rate of unsuccessful connections over the three multimedia traffic types, audio, video, and data. As the number of UE exceeds 20, we can observe that DRBA reduces the average blocking rate by at least 20%. In particular, as the interference/noise increases from 5 dB to 15 dB, the reduction in the average blocking rate becomes more significant.
(Figure 6) shows the comparison of the utilization of OFDMA frames. When the number of UE is below 20, DRBA (adjusted by ARQ) has lower utilization than RAMC (allocated by AMC), regardless of whether the interference/noise is 5 dB or 15 dB. However, as the number of UE exceeds 25, the utilization of DRBA is much higher than that of RAMC. Notice that the difference in frame utilization between the two methods can increase from 10% to 20%. In summary, the higher the interference/noise, the larger the difference.
Comparison of the utilization in OFDMA frames.
As shown in (Figure 7), we compare the Successfully Connected UE (SCU) between the two methods, RAMC and DRBA, by changing the interference/noise from 5 dB to 15 dB. It is observed that as the sequence number of the OFDMA frame increases from 200 to 1000, the SCU increases largely from 15 to 25. Notice that as the interference/noise increases from 5 dB to 15 dB, the SCU in DRBA can still reach nearly 25, but the SCU in RAMC drops below 20. This is because in DRBA we can accurately compute the effective data rate of a UE, which saves bandwidth for admitting more UEs to the LTE network.
Comparison of the successfully connected UE
By varying the buffer size from 3 to 25 packets in a UE, in (Figure 8), we compare the SCU between the two methods. (Figure 8(a)) shows the SCU when the interference/noise is fixed to 5 dB and (Figure 8(b)) shows the SCU when the interference/noise is fixed to 15 dB. In general, regardless of whether the buffer size in a UE is 3 or 25, the former case reaches a higher SCU than the latter due to the increased interference/noise. It is interesting to notice that when the interference/noise is small (5 dB), a smaller buffer size (3 packets) can achieve a higher SCU than a larger buffer size (25 packets). As shown in (Figure 8(a)), this phenomenon is more significant in DRBA (adjusted by ARQ) than in RAMC (allocated by AMC). The reason for this behavior is quite straightforward; a smaller buffer size in a UE creates a larger packet loss ratio, which in DRBA implies that the effective data rate should be re-adjusted to a smaller value. Consequently, a certain percentage of the occupied bandwidth can be released and re-assigned to newly connected UEs. On the other hand, as shown in (Figure 8(b)), when the interference/noise increases to 15 dB, the two SCU curves for different buffer sizes do not exhibit a big difference.
Figure 8:Comparison of SCU for different buffer sizes
(a) Interference/noise = 5 dB
(b) Interference/noise = 15 dB
(Figure 9(a)) and (Figure 9(b)) show the comparison of RB allocations for an OFDMA frame using RAMC and DRBA, respectively. Here we consider 40 UEs intending to connect to the LTE network. The interference/noise is fixed to 5 dB. As can be observed, the RB allocation using RAMC has left more blank spaces than that using DRBA, which shows that the former exhibits a lower utilization of an OFDMA frame. Additionally, the latter has fewer and smaller empty holes than the former, which demonstrates that the latter allows more UEs to be connected to the LTE network.
Figure 9:RB allocations for an OFDMA frame
(a) RB allocations using RAMC
(b) RB allocations using DRBA
In this paper, we have presented a dynamic RB allocation (DRBA) scheme for OFDMA frames in an LTE network. The proposed DRBA computes the effective data rate by considering the packet loss ratio based on the ARQ status report returned from a UE. The size of RB, consisting of a certain number of RE, is then calculated from the effective data rate. To demonstrate the superiority of the proposed DRBA over the conventional RB allocation based on AMC (RAMC), we re-designed the ARQ status report and performed simulations on NS-3. The simulation results have shown that the proposed DRBA can significantly reduce the blocking rates of multimedia traffic (audio, video, and data) and admit more UEs to the LTE network, particularly when the network has high interference/noise.
3GPP TS 36.211 V11.6.0, Tech. Spec. Group Radio Access Network (2014) Evolved Universal Terrestrial Radio Access (E-UTRA); Physical channels and modulation, (Release 11).
Fantacci R, Marabissi D, Tarchi D, Habib I (2009) Adaptive Modulation and Coding Techniques for OFDMA Systems. IEEE Transactions on Wireless Communications 8: 4876-4883 DOI: 10.1109/TWC.2009.090253.
3GPP TS 36.322 V9.3.0, Tech. Spec. Group Radio Access Network (2010) Evolved Universal Terrestrial Radio Access (E-UTRA); Radio Link Control (RLC) protocol specification, (Release 9).
Maria G, Koiipillai DR (2013) Fairness-Based Resource Allocation in OFDMA Downlink with Imperfect CSIT. International Conference on Wireless Communications and Signal Processing (WCSP), Hangzhou, China. DOI: 10.1109/WCSP.2013.6677098.
Zhang Y, Liu G, Wang Q, He L (2014) Proportional Fair Resource Allocation Algorithm for Video Transmission in OFDMA System. IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Chengdu, China. DOI: 10.1109/ICMEW.2014.6890724.
Wang Y, Zhang W, Peng F, Yuan Y (2013) RNTP-Based Resource Block Allocation in LTE Downlink Indoor Scenarios. IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China. DOI: 10.1109/WCNC.2013.6555099.
Lee TH, Huang YW (2012) Resource Allocation Achieving High System Throughput with QoS Support in OFDMA-Based System. IEEE Transactions on Communications 60: 851-861. DOI: 10.1109/TCOMM.2012.020912.100632.
Huo C, Sesay AB, Fapojuwo AO (2011) Queue-Aware Adaptive Resource Allocation for OFDMA Systems Supporting Mixed Services. IEEE Wireless Communications and Networking Conference (WCNC), Cancun, Mexico.
Hou IH, Chen CS (2012) Self-Organized Resource Allocation in LTE Systems with Weighted Proportional Fairness. IEEE International Conference on Communications (ICC), Ottawa, Canada DOI: 10.1109/ICC.2012.6364444.
Tham ML, Chow CO, Utsu K, Ishii H (2013) BER-Driven Resource Allocation in OFDMA Systems. IEEE International Symposium Personal Indoor and Mobile Radio Communications (PIMRC), London, England.
Hajj AME, Dawy Z (2012) On Probabilistic Queue Length Based Joint Uplink/Downlink Resource Allocation in OFDMA Networks. International Conference on Telecommunications (ICT), Jounieh, Lebanon.
Hassan MAA, Sourour EA, Shaaban SE (2014) Novel Resource Allocation Algorithm for Improving Reuse One Scheme Performance in LTE Networks. International Conference on Telecommunications (ICT), Lisbon, Portugal.
Ghosh D, Mohapatra P (2013) Resource Allocation in OFDMA Femto Networks. International Conference on Computer Communications and Networks (ICCCN), Nassau, Bahamas.
Liu X, Zhu H (2010) Resource Allocation in OFDMA Systems in the Presence of Packet Retransmission. IEEE Vehicular Technology Conference (VTC), Taipei, Taiwan.
Zhu H (2012) Radio Resource Allocation for OFDMA Systems in High Speed Environments. IEEE Journal on Selected Areas in Communications 30: 748-759.
Node-weighted CSP in Prim's algorithm?
I'm looking for an algorithm which would find a minimal spanning tree given certain constraints (CSP) on the importance of some nodes, e.g. consider a graph with the following distance matrix: $$ \left[ \begin{array}{ccccccc} - & A & B & C & D & E & F \\ A & 0 & 120 & 100 & inf & inf & 30 \\ B & 120 & 0 & 70 & inf & 150 & inf \\ C & 100 & 70 & 0 & 60 & 60 & inf \\ D & inf & inf & 60 & 0 & inf & 50 \\ E & inf & 150 & 60 & inf & 0 & inf \\ F & 30 & inf & inf & 50 & inf & 0 \\ \end{array} \right] $$ Prim's algorithm will result in something like this: $$ \left[ \begin{array}{ccccccc} - & A & B & C & D & E & F \\ A & 0 & inf & inf & inf & inf & 30 \\ B & inf & 0 & 70 & inf & inf & inf \\ C & inf & 70 & 0 & 60 & 60 & inf \\ D & inf & inf & 60 & 0 & inf & 50 \\ E & inf & inf & 60 & inf & 0 & inf \\ F & 30 & inf & inf & 50 & inf & 0 \\ \end{array} \right] $$
However, node A is now zoned and it will take at least 3 transitions to get from $A$ to $C$, while for my specific CSP I need at most 2 transitions. It is fairly easy to incorporate such a constraint into Prim's algorithm, e.g. as in the sketch below. The question is: are there any generic algorithms which deal with finding a minimal spanning tree given a set of constraints?
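For concreteness, here is a rough sketch of how a hop bound can be folded into Prim's greedy loop (Python, with the example matrix written as a dict of dicts). Note that it enforces the depth bound from the root for every vertex, which is stronger than my single A-to-C constraint, and being greedy it is only a heuristic for the constrained problem, not an exact solver.

```python
import heapq

def prim_with_hop_bound(dist, root, max_hops):
    """Prim-style greedy tree growth that tracks each vertex's hop depth from
    `root` and skips edges that would exceed `max_hops`."""
    tree, depth = {}, {root: 0}
    heap = [(w, root, v) for v, w in dist[root].items()]
    heapq.heapify(heap)
    while heap and len(depth) < len(dist):
        w, u, v = heapq.heappop(heap)
        if v in depth or depth[u] + 1 > max_hops:
            continue
        depth[v] = depth[u] + 1
        tree[(u, v)] = w
        for x, wx in dist[v].items():
            if x not in depth:
                heapq.heappush(heap, (wx, v, x))
    return tree, depth

dist = {   # the example matrix, with the inf entries omitted
    'A': {'B': 120, 'C': 100, 'F': 30},
    'B': {'A': 120, 'C': 70, 'E': 150},
    'C': {'A': 100, 'B': 70, 'D': 60, 'E': 60},
    'D': {'C': 60, 'F': 50},
    'E': {'B': 150, 'C': 60},
    'F': {'A': 30, 'D': 50},
}
print(prim_with_hop_bound(dist, 'A', 2))   # every vertex within 2 hops of A
```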
algorithms graphs graph-traversal constraint-programming
Denys S.
What do you mean with CSP exactly here? – Juho Apr 7 '13 at 14:09
Please state in general, formal terms what the additional restrictions are. – Raphael♦ Apr 7 '13 at 14:13
Edited the question. – Denys S. Apr 7 '13 at 14:44
Perhaps this might be related to your question en.wikipedia.org/wiki/Steiner_tree_problem – fidbc Apr 7 '13 at 14:52
What kind of constraints do you have in mind? You will have to be specific. It might be that certain kinds of constraints lead to problems we don't know how to solve efficiently. – Juho Apr 7 '13 at 14:54
Modular curve
In number theory and algebraic geometry, a modular curve Y(Γ) is a Riemann surface, or the corresponding algebraic curve, constructed as a quotient of the complex upper half-plane H by the action of a congruence subgroup Γ of the modular group of integral 2×2 matrices SL(2, Z). The term modular curve can also be used to refer to the compactified modular curves X(Γ) which are compactifications obtained by adding finitely many points (called the cusps of Γ) to this quotient (via an action on the extended complex upper-half plane). The points of a modular curve parametrize isomorphism classes of elliptic curves, together with some additional structure depending on the group Γ. This interpretation allows one to give a purely algebraic definition of modular curves, without reference to complex numbers, and, moreover, prove that modular curves are defined either over the field Q of rational numbers, or a cyclotomic field. The latter fact and its generalizations are of fundamental importance in number theory.
Analytic definition
The modular group SL(2, Z) acts on the upper half-plane by fractional linear transformations. The analytic definition of a modular curve involves a choice of a congruence subgroup Γ of SL(2, Z), i.e. a subgroup containing the principal congruence subgroup of level N Γ(N), for some positive integer N, where
$$\Gamma(N) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} :\ a, d \equiv \pm 1 \mod N \text{ and } b, c \equiv 0 \mod N \right\}.$$
The minimal such N is called the level of Γ. A complex structure can be put on the quotient Γ\H to obtain a noncompact Riemann surface commonly denoted Y(Γ).
Compactified modular curves
A common compactification of Y(Γ) is obtained by adding finitely many points called the cusps of Γ. Specifically, this is done by considering the action of Γ on the extended complex upper-half plane H* = H ∪ Q ∪ {∞}. We introduce a topology on H* by taking as a basis:
any open subset of H,
for all r > 0, the set $\{\infty\} \cup \{\tau \in \mathbf{H} \mid \operatorname{Im}(\tau) > r\}$
for all coprime integers a, c and all r > 0, the image of $\{\infty\} \cup \{\tau \in \mathbf{H} \mid \operatorname{Im}(\tau) > r\}$ under the action of
$$\begin{pmatrix} a & -m \\ c & n \end{pmatrix}$$
where m, n are integers such that an + cm = 1.
This turns H* into a topological space which is a subset of the Riemann sphere P1(C). The group Γ acts on the subset Q ∪ {∞}, breaking it up into finitely many orbits called the cusps of Γ. If Γ acts transitively on Q ∪ {∞}, the space Γ\H* becomes the Alexandroff compactification of Γ\H. Once again, a complex structure can be put on the quotient Γ\H* turning it into a Riemann surface denoted X(Γ) which is now compact. This space is a compactification of Y(Γ).[1]
The most common examples are the curves X(N), X0(N), and X1(N) associated with the subgroups Γ(N), Γ0(N), and Γ1(N).
The modular curve X(5) has genus 0: it is the Riemann sphere with 12 cusps located at the vertices of a regular icosahedron. The covering X(5) → X(1) is realized by the action of the icosahedral group on the Riemann sphere. This group is a simple group of order 60 isomorphic to A5 and PSL(2, 5).
The modular curve X(7) is the Klein quartic of genus 3 with 24 cusps. It can be interpreted as a surface with three handles tiled by 24 heptagons, with a cusp at the center of each face. These tilings can be understood via dessins d'enfants and Belyi functions – the cusps are the points lying over ∞ (red dots), while the vertices and centers of the edges (black and white dots) are the points lying over 0 and 1. The Galois group of the covering X(7) → X(1) is a simple group of order 168 isomorphic to PSL(2, 7).
There is an explicit classical model for X0(N), the classical modular curve; this is sometimes called the modular curve. The definition of Γ(N) can be restated as follows: it is the subgroup of the modular group which is the kernel of the reduction modulo N. Then Γ0(N) is the larger subgroup of matrices which are upper triangular modulo N:
$$\left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} :\ c \equiv 0 \mod N \right\},$$
and Γ1(N) is the intermediate group defined by:
$$\left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} :\ a \equiv d \equiv 1 \mod N,\ c \equiv 0 \mod N \right\}.$$
These curves have a direct interpretation as moduli spaces for elliptic curves with level structure and for this reason they play an important role in arithmetic geometry. The level N modular curve X(N) is the moduli space for elliptic curves with a basis for the N-torsion. For X0(N) and X1(N), the level structure is, respectively, a cyclic subgroup of order N and a point of order N. These curves have been studied in great detail, and in particular, it is known that X0(N) can be defined over Q.
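As a small illustration of the definitions above, the following sketch checks whether an integer matrix, assumed to already lie in SL(2, Z), belongs to Γ(N), Γ0(N) or Γ1(N); the function names are ad hoc.

```python
def in_gamma0(m, N):
    """Gamma_0(N): c = 0 (mod N)."""
    (a, b), (c, d) = m
    return c % N == 0

def in_gamma1(m, N):
    """Gamma_1(N): a = d = 1 (mod N) and c = 0 (mod N)."""
    (a, b), (c, d) = m
    return a % N == 1 and d % N == 1 and c % N == 0

def in_gamma(m, N):
    """Principal congruence subgroup Gamma(N), as defined above:
    a, d = +/-1 (mod N) and b, c = 0 (mod N)."""
    (a, b), (c, d) = m
    pm1 = lambda x: (x - 1) % N == 0 or (x + 1) % N == 0
    return pm1(a) and pm1(d) and b % N == 0 and c % N == 0

m = ((1, 1), (0, 1))   # the standard unipotent matrix T
print(in_gamma0(m, 5), in_gamma1(m, 5), in_gamma(m, 5))   # True True False
```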
The equations defining modular curves are the best-known examples of modular equations. The "best models" can be very different from those taken directly from elliptic function theory. Hecke operators may be studied geometrically, as correspondences connecting pairs of modular curves.
Remark: quotients of H that are compact do occur for Fuchsian groups Γ other than subgroups of the modular group; a class of them constructed from quaternion algebras is also of interest in number theory.
The covering X(N) → X(1) is Galois, with Galois group SL(2, N)/{1, −1}, which is equal to PSL(2, N) if N is prime. Applying the Riemann–Hurwitz formula and Gauss–Bonnet theorem, one can calculate the genus of X(N). For a prime level p ≥ 5,
$$-\pi \chi (X(p))=|G|\cdot D,$$
where χ = 2 − 2g is the Euler characteristic, |G| = (p+1)p(p−1)/2 is the order of the group PSL(2, p), and D = π − π/2 − π/3 − π/p is the angular defect of the spherical (2,3,p) triangle. This results in a formula
$$g={\tfrac {1}{24}}(p+2)(p-3)(p-5).$$
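Unpacking the algebra behind this formula (a short check, using only the quantities defined above with $\chi = 2-2g$, $|G| = p(p^{2}-1)/2$, and $D = \pi(1-\tfrac{1}{2}-\tfrac{1}{3}-\tfrac{1}{p}) = \pi(p-6)/(6p)$):

$$-\pi(2-2g)=\frac{p(p^{2}-1)}{2}\cdot\frac{\pi(p-6)}{6p}\quad\Longrightarrow\quad 2g-2=\frac{(p^{2}-1)(p-6)}{12},$$

$$g=1+\frac{(p^{2}-1)(p-6)}{24}=\frac{p^{3}-6p^{2}-p+30}{24}=\frac{(p+2)(p-3)(p-5)}{24}.$$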
Thus X(5) has genus 0, X(7) has genus 3, and X(11) has genus 26. For p = 2 or 3, one must additionally take into account the ramification, that is, the presence of order p elements in PSL(2, Z), and the fact that PSL(2, 2) has order 6, rather than 3. There is a more complicated formula for the genus of the modular curve X(N) of any level N that involves divisors of N.
Genus zero
In general a modular function field is a function field of a modular curve (or, occasionally, of some other moduli space that turns out to be an irreducible variety). Genus zero means such a function field has a single transcendental function as generator: for example the j-function generates the function field of X(1) = PSL(2, Z)\H. The traditional name for such a generator, which is unique up to a Möbius transformation and can be appropriately normalized, is a Hauptmodul (main or principal modular function).
The spaces X1(n) have genus zero for n = 1, ..., 10 and n = 12. Since these curves are defined over Q, it follows that there are infinitely many rational points on each such curve, and hence infinitely many elliptic curves defined over Q with n-torsion for these values of n. The converse statement, that only these values of n can occur, is Mazur's torsion theorem.
Relation with the Monster group
Modular curves of genus 0, which are quite rare, turned out to be of major importance in relation with the monstrous moonshine conjectures. The first several coefficients of the q-expansions of their Hauptmoduln were computed already in the 19th century, but it came as a shock that the same large integers show up as dimensions of representations of the largest sporadic simple group, the Monster.
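For orientation, the best-known instance of this coincidence involves the $j$-invariant, the Hauptmodul of $X(1)$, whose $q$-expansion is

$$j(\tau)=q^{-1}+744+196884\,q+21493760\,q^{2}+\cdots,\qquad q=e^{2\pi i\tau};$$

here $196884 = 1 + 196883$ and $21493760 = 1 + 196883 + 21296876$, where $1$, $196883$ and $21296876$ are the dimensions of the smallest irreducible representations of the Monster.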
Another connection is that the modular curve corresponding to the normalizer Γ0(p)+ of Γ0(p) in SL(2, R) has genus zero if and only if p is 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59 or 71, and these are precisely the prime factors of the order of the monster group. The result about Γ0(p)+ is due to Jean-Pierre Serre, Andrew Ogg and John G. Thompson in the 1970s, and the subsequent observation relating it to the monster group is due to Ogg, who wrote up a paper offering a bottle of Jack Daniel's whiskey to anyone who could explain this fact, which was a starting point for the theory of monstrous moonshine.
The relation runs very deep and as demonstrated by Richard Borcherds, it also involves generalized Kac–Moody algebras. Work in this area underlined the importance of modular functions that are meromorphic and can have poles at the cusps, as opposed to modular forms, that are holomorphic everywhere, including the cusps, and had been the main objects of study for the better part of the 20th century.
Manin–Drinfeld theorem
Modularity theorem
Shimura variety, a generalization of modular curves to higher dimensions
Algebraic curves
Riemann surfaces
|
CommonCrawl
|
Journal of the American Mathematical Society
Published by the American Mathematical Society, the Journal of the American Mathematical Society (JAMS) is devoted to research articles of the highest quality in all areas of pure and applied mathematics.
ISSN 1088-6834 (online) ISSN 0894-0347 (print)
The 2020 MCQ for Journal of the American Mathematical Society is 4.79.
Short rational generating functions for lattice point problems
by Alexander Barvinok and Kevin Woods
J. Amer. Math. Soc. 16 (2003), 957-979
We prove that for any fixed $d$ the generating function of the projection of the set of integer points in a rational $d$-dimensional polytope can be computed in polynomial time. As a corollary, we deduce that various interesting sets of lattice points, notably integer semigroups and (minimal) Hilbert bases of rational cones, have short rational generating functions provided certain parameters (the dimension and the number of generators) are fixed. It follows then that many computational problems for such sets (for example, finding the number of positive integers not representable as a non-negative integer combination of given coprime positive integers $a_{1}, \ldots , a_{d}$) admit polynomial time algorithms. We also discuss a related problem of computing the Hilbert series of a ring generated by monomials.
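To make the last example concrete, the sketch below counts, by brute force, the positive integers not representable as a non-negative integer combination of given generators (the "gaps" of the numerical semigroup). It only illustrates the quantity the abstract refers to; the point of the paper is that such counts admit polynomial-time algorithms via short rational generating functions, which this naive dynamic program does not provide.

```python
from math import gcd
from functools import reduce

def gap_count(gens):
    """Count the positive integers NOT representable as a non-negative integer
    combination of the generators (assumed to have gcd 1 as a set), by naive
    dynamic programming.  Once min(gens) consecutive integers are representable,
    every larger integer is too (keep adding min(gens)), giving a safe stop."""
    assert reduce(gcd, gens) == 1, "generators must have gcd 1"
    a = min(gens)
    reachable = [True]        # 0 is representable (the empty combination)
    gaps, run, n = 0, 0, 0
    while run < a:
        n += 1
        ok = any(n >= g and reachable[n - g] for g in gens)
        reachable.append(ok)
        if ok:
            run += 1
        else:
            run = 0
            gaps += 1
    return gaps

# For two coprime generators a and b, Sylvester's formula gives (a-1)(b-1)/2 gaps.
print(gap_count([3, 5]))                              # 4 (the gaps are 1, 2, 4, 7)
print(gap_count([3, 5]) == (3 - 1) * (5 - 1) // 2)    # True
print(gap_count([6, 9, 20]))                          # 22
```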
Alexander Barvinok
Affiliation: Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109-1109
Email: [email protected]
Email: [email protected]
Received by editor(s): November 20, 2002
Published electronically: April 25, 2003
Additional Notes: This research was partially supported by NSF Grant DMS 9734138. The second author was partially supported by an NSF VIGRE Fellowship and an NSF Graduate Research Fellowship.
Journal: J. Amer. Math. Soc. 16 (2003), 957-979
MSC (2000): Primary 05A15, 11P21, 13P10, 68W30
DOI: https://doi.org/10.1090/S0894-0347-03-00428-4
|
CommonCrawl
|
Search Results: 1 - 10 of 87360 matches for " Craig W. Hedberg "
An Assessment of Food Safety Needs of Restaurants in Owerri, Imo State, Nigeria
Sylvester N. Onyeneho,Craig W. Hedberg
International Journal of Environmental Research and Public Health , 2013, DOI: 10.3390/ijerph10083296
Abstract: One hundred and forty five head chefs and catering managers of restaurants in Owerri, Nigeria were surveyed to establish their knowledge of food safety hazards and control measures. Face-to-face interviews were conducted and data collected on their knowledge of risk perception, food handling practices, temperature control, foodborne pathogens, and personal hygiene. Ninety-two percent reported that they cleaned and sanitized food equipment and contact surfaces while 37% engaged in cross-contamination practices. Forty-nine percent reported that they would allow a sick person to handle food. Only 70% reported that they always washed their hands while 6% said that they continued cooking after cracking raw eggs. All respondents said that they washed their hands after handling raw meat, chicken or fish. About 35% lacked knowledge of ideal refrigeration temperature while 6% could not adjust refrigerator temperature. Only 40%, 28%, and 21% had knowledge of Salmonella, E. coli, and Hepatitis A, respectively while 8% and 3% had knowledge of Listeria and Vibrio respectively, as pathogens. Open markets and private bore holes supplied most of their foods and water, respectively. Pearson's Correlation Coefficient analysis revealed almost perfect linear relationship between education and knowledge of pathogens ( r = 0.999), cooking school attendance and food safety knowledge ( r = 0.992), and class of restaurant and food safety knowledge ( r = 0.878). The lack of current knowledge of food safety among restaurant staff highlights increased risk associated with fast foods and restaurants in Owerri.
Effect of Temperature on the Survival of F-Specific RNA Coliphage, Feline Calicivirus, and Escherichia coli in Chlorinated Water
Paul B. Allwood,Yashpal S. Malik,Sunil Maherchandani,Craig W. Hedberg,Sagar M. Goyal
International Journal of Environmental Research and Public Health , 2005, DOI: 10.3390/ijerph2005030008
Abstract: We compared the survival of F-specific RNA coliphage MS2, feline calicivirus, and E. coli in normal tap water and in tap water treated to an initial concentration of 50 ppm free chlorine and held at 4°C, 25°C, or 37°C for up to 28 days. Our aim was to determine which of these two organisms (coliphage or E. coli) was better at indicating norovirus survival under the conditions of the experiment. There was a relatively rapid decline of FCV and E. coli in 50 ppm chlorine treated water and both organisms were undetectable within one day irrespective of the temperature. In contrast, FRNA phage survived for 7 to 14 days in 50 ppm chlorine treated water at all temperatures. All organisms survived for 28 days in tap water at 4°C, but FCV was undetectable on day 21 and day 7 at 25°C and 37°C, respectively. Greater survival of FRNA phage compared to E. coli in 50 ppm chlorine treated water suggests that these organisms should be further investigated as indicators of norovirus in depurated shellfish, sanitized produce, and treated wastewater which are all subject to high-level chlorine treatment.
Particle Characteristics and Metal Release From Natural Rutile (TiO2) and Zircon Particles in Synthetic Body Fluids [PDF]
Yolanda Hedberg, Jonas Hedberg, Inger Odnevall Wallinder
Journal of Biomaterials and Nanobiotechnology (JBNB) , 2012, DOI: 10.4236/jbnb.2012.31006
Abstract: Titanium oxide (rutile, TiO2) and zircon (ZrSiO4), known insoluble ceramic materials, are commonly used for coatings of implant materials. We investigate the release of zirconium, titanium, aluminum, iron, and silicon from six different micron-sized powders of natural rutile (TiO2) and zircon (ZrSiO4) from a surface perspective. The investigation includes five different synthetic body fluids and two time periods of exposure, 2 and 24 hours. The solution chemicals rather than pH are important for the release of zirconium. When exceeding a critical amount of aluminum and silicon in the surface oxide, the particles seem to be protected from selective pH-specific release at neutral or weakly alkaline pH. The importance of bulk and surface composition and individual changes between different kinds of the same material is elucidated. Changes in material properties and metal release characteristics with particle size are presented for zircon.
Protective Green Patinas on Copper in Outdoor Constructions [PDF]
Yolanda Hedberg, Inger Odnevall Wallinder
Journal of Environmental Protection (JEP) , 2011, DOI: 10.4236/jep.2011.27109
Abstract: The last 15 years of research related to atmospheric corrosion and the release of copper to the environment are briefly summarized. Brown and green patinas with high barrier properties against corrosion gradually evolve on copper under atmospheric conditions. The corrosion process and repeated dry and wet cycles result in a partial dissolution of corrosion products within the patina. Dissolved copper can be released and dispersed into the environment via the action of rainwater; however, the major part is rearranged within the patina during drying cycles. The majority of corrosion products formed have a poor solubility, very different from water-soluble copper salts. The release process is very slow and takes place independently of patina color. Its extent has only a marginal effect on the adherent patina. Released copper rapidly interacts with organic matter and with different surfaces already in the close vicinity of the building, such as drainage systems, storm water pipes, pavements, stone materials and soil systems. These surfaces all have high capacities to retain copper in the runoff water and to reduce its concentration and chemical form to non-available and non-toxic levels for aquatic organisms.
What Do the United States and India Have in Common (Besides Indians): Enough for a Strategic Alliance?
Kern W. Craig
Asian Social Science , 2013, DOI: 10.5539/ass.v9n2p70
Abstract: The United States and India have much in common (besides Indians), enough in fact to constitute a comprehensive alliance. Both countries are former British colonies. Both use the English language: unofficially but more in the US; and, officially but less in India. Both are complementarily large, the US in terms of area and India in terms of population. The people of India are however younger and poorer. Both countries have long coastlines and together they are adjacent to the major oceans of the world: Pacific, Arctic, and Atlantic including the Gulf of Mexico; and, Indian including the Arabian Sea and Bay of Bengal. The United States of America and the Republic of India have now converged as welfare states. The US was once more capitalistic whereas India was once more socialistic. Both countries use Affirmative Action: for minorities and women in the US; and, for Scheduled Castes, Scheduled Tribes, and Other Backward Classes in India. Both governments are secular but the US is predominantly Christian whereas India is predominantly Hindu. Both countries face the threat of Islamic terrorism particularly the US vis-à-vis Afghanistan and India vis-à-vis Pakistan. And both the United States and India must contend with the new super-state, China.
The effects of obesity on venous thromboembolism: A review [PDF]
Genyan Yang, Christine De Staercke, W. Craig Hooper
Open Journal of Preventive Medicine (OJPM) , 2012, DOI: 10.4236/ojpm.2012.24069
Abstract: Obesity has emerged as a global health issue that is associated with a wide spectrum of disorders, including coronary artery disease, diabetes mellitus, hypertension, stroke, and venous thromboembolism (VTE). VTE is one of the most common vascular disorders in the United States and Europe and is associated with significant mortality. Although the association between obesity and VTE appears to be moderate, obesity can interact with other environmental or genetic factors and pose a significantly greater risk of VTE among individuals who are obese and who are exposed simultaneously to several other risk factors for VTE. Therefore, identification of potential interactions between obesity and certain VTE risk factors might offer some critical points for VTE interventions and thus minimize VTE morbidity and mortality among patients who are obese. However, current obesity measurements have limitations and can introduce contradictory results in the outcome of obesity. To overcome these limitations, this review proposes several future directions and suggests some avenues for prevention of VTE associated with obesity as well.
Low Temperature Electrostatic Force Microscopy of a Deep Two Dimensional Electron Gas using a Quartz Tuning Fork
J. A. Hedberg,A. Lal,Y. Miyahara,P. Grütter,G. Gervais,M. Hilke,L. Pfeiffer,K. W. West
Physics , 2010, DOI: 10.1063/1.3499293
Abstract: Using an ultra-low temperature, high magnetic field scanning probe microscope, we have measured electric potentials of a deeply buried two dimensional electron gas (2DEG). Relying on the capacitive coupling between the 2DEG and a resonant tip/cantilever structure, we can extract electrostatic potential information of the 2DEG from the dynamics of the oscillator. We present measurements using a quartz tuning fork oscillator and a 2DEG with a cleaved edge overgrowth structure. The sensitivity of the quartz tuning fork as force sensor is demonstrated by observation of Shubnikov de Haas oscillations at a large tip-2DEG separation distance of more than 500 nm.
Management of progressive type 2 diabetes: role of insulin therapy
Chemitiganti Ramachandra,Spellman Craig W
Osteopathic Medicine and Primary Care , 2009, DOI: 10.1186/1750-4732-3-5
Abstract: Insulin is an effective treatment for achieving tight glycemic control and improving clinical outcomes in patients with diabetes. While insulin therapy is required from the onset of diagnosis in type 1 disease, its role in type 2 diabetes requires consideration as to when to initiate and advance therapy. In this article, we review a case study that unfolds over 5 years and discuss the therapeutic decision points, initiation and advancement of insulin regimens, and analyze new data regarding the advantages and disadvantages of tight management of glucose levels.
The Structure of Dark Matter Halos in an Annihilating Dark Matter Model
Matthew W. Craig,Marc Davis
Physics , 2001, DOI: 10.1016/S1384-1076(01)00072-0
Abstract: The inability of standard non-interacting cold dark matter (CDM) to account for the small-scale structure of individual galaxies has led to the suggestion that the dark matter may undergo elastic and/or inelastic scattering. We simulate the evolution of an isolated dark matter halo which undergoes both scattering and annihilation. Annihilations produce a core that grows with time due to adiabatic expansion of the core as the relativistic annihilation products flow out of the core, lessening the binding energy. An effective annihilation cross section per unit mass greater than $0.03\ \mathrm{cm^{2}\,g^{-1}}\,(100\ \mathrm{km\,s^{-1}}/v)$, with a scattering cross section per unit mass of $0.6\ \mathrm{cm^{2}\,g^{-1}}$, produces a 3 kpc core in a $10^{10}\,M_{\odot}$ halo that persists for 100 dynamical times. The same cross section leads to a core of only 120 pc in a rich cluster. In addition to creating cores, annihilation should erase structure on scales below $\sim 3\times 10^{8}\,M_{\odot}$. Annihilating dark matter provides a mechanism for solving some of the problems of non-interacting CDM, at the expense of introducing a contrived particle physics model.
Purpose in life among very old men [PDF]
Pia Hedberg, Yngve Gustafson, Christine Brulin, Lena Aléx
Advances in Aging Research (AAR) , 2013, DOI: 10.4236/aar.2013.23014
Abstract: This paper provides very old men's experiences of and reflections on purpose in life. The answers from an interview question about purpose in life from 23 men were analyzed by using qualitative content analysis. The results revealed three content areas: talking of purpose of life in general, talking of own purpose in life and reflections on purpose in life. Our findings showed that very old men experienced purpose in life most strongly when remembering the past and describing their earlier work. The old men reflected on purpose in life not just from their own individual perspectives, but also from a more reflective and analytic perspective.
|
CommonCrawl
|
A mean-field formulation for multi-period asset-liability mean-variance portfolio selection with probability constraints
Neutral and indifference pricing with stochastic correlation and volatility
January 2018, 14(1): 231-247. doi: 10.3934/jimo.2017044
Asymptotics for ruin probabilities in Lévy-driven risk models with heavy-tailed claims
Yang Yang , , Kam C. Yuen and Jun-Feng Liu
Institute of Statistics and Data Science, Nanjing Audit University, Nanjing 211815, China
Department of Statistics and Actuarial Science, The University of Hong Kong, Pokfulam Road, Hong Kong, China
* Corresponding author: Yang Yang
Received January 2016 Revised February 2017 Published April 2017
Fund Project: The research of Yang Yang was supported by the National Natural Science Foundation of China (No. 71471090, 71671166, 11301278), the Humanities and Social Sciences Foundation of the Ministry of Education of China (No. 14YJCZH182), Natural Science Foundation of Jiangsu Province of China (No. BK20161578), the Major Research Plan of Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 15KJA110001), Qing Lan Project, PAPD, Program of Excellent Science and Technology Innovation Team of the Jiangsu Higher Education Institutions of China, 333 Talent Training Project of Jiangsu Province, High Level Talent Project of Six Talents Peak of Jiangsu Province (No. JY-039), Project of Construction for Superior Subjects of Mathematics/Statistics of Jiangsu Higher Education Institutions, and Project of the Key Lab of Financial Engineering of Jiangsu Province (No. NSK2015-17). The research of Kam C. Yuen was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU17329216), and the CAE 2013 research grant from the Society of Actuaries. The research of Jun-feng Liu was supported by the National Natural Science Foundation of China (No. 11401313), and Natural Science Foundation of Jiangsu Province of China (No. BK20161579)
Consider a bivariate Lévy-driven risk model in which the loss process of an insurance company and the investment return process are two independent Lévy processes. Under the assumptions that the loss process has a Lévy measure of consistent variation and the return process fulfills a certain condition, we investigate the asymptotic behavior of the finite-time ruin probability. Further, we derive two asymptotic formulas for the finite-time and infinite-time ruin probabilities in a single Lévy-driven risk model, in which the loss process is still a Lévy process, whereas the investment return process reduces to a deterministic linear function. In such a special model, we relax the conditions on the loss process, allowing jumps whose common distribution is long-tailed and of dominated variation.
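As a purely illustrative aside (this is not the Lévy-driven model analyzed in the paper, and all parameter values below are arbitrary), the kind of finite-time ruin probability the abstract refers to can be estimated by Monte Carlo in the classical compound Poisson model with heavy-tailed claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def finite_time_ruin_prob(u, T, lam=1.0, premium=4.5, alpha=1.5, n_paths=20_000):
    """Monte Carlo estimate of P(ruin before time T) for a compound Poisson
    surplus process U(t) = u + premium*t - S(t), where claims arrive at rate
    lam and are Pareto(alpha)-distributed (heavy-tailed: infinite variance for
    alpha <= 2).  Between claims the surplus only increases, so ruin can only
    occur at claim arrival instants."""
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)        # next claim arrival time
            if t > T:
                break
            claims += rng.pareto(alpha) + 1.0      # Pareto claim with x_min = 1
            if u + premium * t - claims < 0.0:     # surplus just after the claim
                ruined += 1
                break
    return ruined / n_paths

# Mean claim size is alpha/(alpha-1) = 3, so premium = 4.5 gives a 50% safety
# loading; the printed value estimates the probability of ruin before T = 20.
print(finite_time_ruin_prob(u=10.0, T=20.0))
```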
Keywords: Lévy-driven risk model, finite-time and infinite-time ruin probabilities, consistent variation, dominated variation, long tail, asymptotics.
Mathematics Subject Classification: Primary: 91B30, 60G51; Secondary: 60K05.
Citation: Yang Yang, Kam C. Yuen, Jun-Feng Liu. Asymptotics for ruin probabilities in Lévy-driven risk models with heavy-tailed claims. Journal of Industrial & Management Optimization, 2018, 14 (1) : 231-247. doi: 10.3934/jimo.2017044
|
CommonCrawl
|
Quantum Computing Stack Exchange is a question and answer site for engineers, scientists, programmers, and computing professionals interested in quantum computing.
Good metaphors for n-level quantum systems
It seems that a coin flip game is a decent metaphor for a 2-level system. Until 1 of the 2 players picks heads or tails, even if the coin has already been flipped, the win/loss wave form has not yet collapsed.
Would rock paper scissors be a good metaphor for qutrits, where the number of players corresponds with the number of qutrits (e.g. $3^n$ possible states)?
Would a standard pack of 52 cards be a good metaphor for a 52-level quantum system (the game being guessing correctly a card selected from the deck at random)?
I'm particularly interested in game-based metaphors because of the easy correlation to combinatorial game theory & computational complexity.
complexity-theory
meowzz
Your metaphor can be chosen as $N-1$ identical coins, such that the outcome state vector corresponds to the sum of the heads. Thus we wind up in the state $|k\rangle$ when we have $k$ heads and $N-1-k$ tails.
In this approach you can actually associate your metaphor with a classical phase space, as a generalization of the association of a qubit with the Bloch sphere whose generator can be thought in the coin case as the direction of the normal to the coin's face.
In the multiple coin case, we have $N-1$ such directions. But since in our Hilbert space we don't mind the order of the coins, only their summed result, in the classical phase space we should not distinguish states in which the directions of the normals to the coins' faces are switched.
What I just described to you in words is actually the Majorana representation (some call it the Majorana star representation), which is based on the isomorphism of the symmetrized tensor product of $N-1$ spheres to the complex projective space $CP^{N-1}$.
$$\otimes_{\mathrm{sym}} ^{N-1} S^2 \cong CP^{N-1}$$
The geometric quantization of this complex projective space is the $N-$ level system. There is some renewed interest in this representation, partially motivated by holonomic quantum computation, please see for example Liu and Fu.
Now, the outcome of the coin flipping in the case of a single qubit can be described as a measurement of the angular momentum component of the coin in the $z$ direction $j_z$. It is not hard to see that the corresponding operator in the multiple coin flip is the sum of the individual angular momenta:
$$J_z = \sum_1^{N-1} j^{(i)}_z$$
(The above equation is just standard shorthand used by physicists since the operators act on different components in the tensor product Hilbert space).
(In the $N\times N$ matrix representation, this operator can be chosen as: $J_z = \mathrm{diag}[N-1, N-3, \ldots, -N+1]$).
This representation of the $N-$ qubit system has a further analogy. As in the case of a single coin or qubit, the distribution function of the operator $j_z$ is Bernoullian for any state (density matrix) in which the system is prepared; the distribution function of $J_z$ in the multiple coin case is Binomial, for any choice of the density matrix of the system. Thus in both cases this observable returns the classical distribution function.
David Bar Moshe
There is always a difference between a quantum system and a classical metaphor. If a system is a qubit in a pure state, then there always exists a measurement basis (or alternatively a proper unitary gate for the standard measurement basis) such that the measurement outcome is 100% predictable, and a measurement basis with measurement outcome 50%-50%. You can't demonstrate this feature using a classical metaphor - in classical physics random is random and deterministic is deterministic.
kludg
A coin is not a great analogy for a quantum system. A (slightly) better one is a box that contains 3 coins. There are 3 windows, labelled x, y and z. The box is rigged so that you can only open one window at a time. When you open a window, you can see the heads/tails state of the corresponding coin, but the other two coins get flipped, and you can't see what happens to them (unless you open another window, but then you know nothing anymore about the window you just had open because that coin has been flipped again).
I'm not sure that this attempt at an analogy scales very well to larger dimensional systems, because you probably can't, in general, describe the algebra of the system using a set of mutually anti-commuting observables, so your rules for how the different coins flip would have to be more complicated. The qudit analogy has $d^2-1$ coins. For example, 2 qubits have 15 coins, each corresponds to a tensor product of 2 Paulis. You can open sets of windows that correspond to commuting observables. The other coins get flipped upon opening, but there are some consistency conditions that some outcomes of coin flips are entirely determined by the outcomes of other coin flips. It becomes messy very quickly...
Further explanation (expanding some of the Mathematics)
Any density matrix of a qudit is described by a $d\times d$ matrix. You can pick any basis of matrices that you like to decompose that (Hermitian) matrix. You'll need $d^2$ of them, but one of those terms is always $\mathbb{I}/d$, which I don't need to count. For example, a basis for the qubit is given by the Pauli matrices $X$, $Y$ and $Z$ (and $\mathbb{I}$). If you associate each of these basis elements with a measurement operator, then, because there are 2 distinct eigenvalues, you get two measurement outcomes (like heads/tails on a coin). If two operators commute, you can simultaneously know the two measurement outcomes. If the two observables anti-commute, that corresponds to maximal uncertainty between the two observables. In other words, you measure one observable (say $Z$), and the other observables ($X$ and $Y$, because both anticommute with $Z$) are completely uncertain, i.e. the coins are flipped.
For qubits, all 3 observables pair-wise anticommute: $\{X,Y\}=\{Y,Z\}=\{X,Z\}=0$, so whichever measurement you choose, the other two observables reset.
However, for two qubits, the relationships are not nearly so simple. You have all possible terms $$ \mathbb{I}\otimes X \qquad \mathbb{I}\otimes Y \qquad \mathbb{I}\otimes Z \\ X\otimes \mathbb{I} \qquad Y\otimes \mathbb{I} \qquad Z\otimes \mathbb{I} \\ X\otimes X \qquad X\otimes Y \qquad X\otimes Z \\ Y\otimes X \qquad Y\otimes Y \qquad Y\otimes Z \\ Z\otimes X \qquad Z\otimes Y \qquad Z\otimes Z $$ You can see that not all of these pair-wise anticommute, because $\mathbb{I}\otimes Z$ and $Z\otimes \mathbb{I}$ commute, for example. So, we could simultaneously measure the set of observables $\mathbb{I}\otimes Z$, $Z\otimes \mathbb{I}$ and $Z\otimes Z$, but all other observables are completely uncertain. Overall, you'd have 15 coins and there are sets of 3 windows that you can open simultaneously, and all other coins are flipped at that instant.
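The commutation structure described above can be checked numerically in a few lines. The sketch below (plain numpy, written only for illustration) verifies that the triple quoted in the text pairwise commutes, that other observables anticommute with members of it, and that each nontrivial two-qubit Pauli commutes with exactly 6 of the other 14.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
P1 = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

# The 15 nontrivial two-qubit observables ("coins"); "XZ" means X tensor Z.
labels = [a + b for a in "IXYZ" for b in "IXYZ" if a + b != "II"]
ops = {lab: np.kron(P1[lab[0]], P1[lab[1]]) for lab in labels}

def commute(A, B):
    return np.allclose(A @ B, B @ A)

# The triple mentioned in the text pairwise commutes, so those three
# "windows" can be opened simultaneously ...
triple = ["IZ", "ZI", "ZZ"]
print(all(commute(ops[a], ops[b]) for a, b in itertools.combinations(triple, 2)))

# ... while, say, XX anticommutes with IZ, so that coin gets flipped.
print(np.allclose(ops["XX"] @ ops["IZ"], -ops["IZ"] @ ops["XX"]))

# Each nontrivial two-qubit Pauli commutes with exactly 6 of the other 14.
counts = {lab: sum(commute(ops[lab], ops[other]) for other in labels if other != lab)
          for lab in labels}
print(sorted(set(counts.values())))   # [6]
```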
If you want to describe the same thing for qutrits, it gets more messy because you can use a basis where everything gives 2 answers, but there's not a perfect division into whether operators commute or anticommute, so you get partial connections between coins which are messier to give a classical equivalent of.
DaftWullie
$\begingroup$ "mutually anti-commuting observables" returns 6 results via google. could u (briefly) explain? if there is more than will fit in a comment, perhaps a new question is in order? $\endgroup$ – meowzz Aug 4 '18 at 12:22
A coin is an extremely bad and highly misleading analogy for a qubit. You shouldn't use it by any means.
Yet, if you want an equally bad and misleading analogy for a qu-$d$-it, you should use something which is random and has $d$ possible outcome. For $d=3$, this might be rock-paper-scissors. On the other hand, a deck of cards with $52$ cards can have $52!$ possibilities, so it is an equally bad analogy for a qu-$(52!)$-it.
It has been suggested to add an explanation why I think this is a horrible analogy. The key point is that it pretends that quantum mechanics is merely classical randomness -- something which can be easily understood with our classical intuition, and thereby implying that all that talking about quantum mechanics being special is just talking. Indeed, the coin analogy already fails for a single qubit, when one performs measurements in more than one basis. This is for instance explained in DaftWullie's answer. Now one could argue that his answer also provides a way to model this with coins, but (1) these are coins which are rigged in a very weird way, and (2) it is still incomplete -- I just need to toss in measurement in yet another basis to make the whole description break down, and to yield an even more complex pattern of rigged coins (even worse, those coins would not get tossed after looking at another coins as in DW's answer, but they would get tossed in a biased way depending on which measurement I did).
Of course, since quantum theory is a theory about measurements -- which have classical inputs, i.e. measurement settings, and classical outputs, i.e. measurement outcomes -- we can always model this with classical objects which follow some odd distribution. However, the point is exactly that this distribution cannot be modeled by coins any more in any even remotely reasonable way (even more so once we consider two spatially separated qubits).
Norbert Schuch
The question is what a good metaphor would be. If there are no classical metaphors, how about one that is quantum. – meowzz Aug 7 '18 at 15:43
@meowzz I'd say explaining QM to people takes a bit more patience on both sides than just talking about a classical coin. If either they or you are not willing to invest that time, then so be it. I think a good way to explain it would be to start from one measurement, which is random and can be seen as a coin (which is tossed once, and then we can look at it as often as we want), and then toss in more measurements (=coins) and then explain how these affect each other, see DaftWullie's answer. Another great way of explaining it is the way in which Preskill in his Lecture explains ... – Norbert Schuch Aug 7 '18 at 15:47
... Bell inequalities (in the original setting), by having Alice & Bob have three boxes with a coin each. It will make people understand that quantum mechanics is not classical, and that is exactly the point. – Norbert Schuch Aug 7 '18 at 15:47
@meowzz But rather than asking for that here, why don't you ask a question about "what are good analogies", maybe specifying the context (one qubit, entanglement, ...), or maybe just in a general context? This might (well, hopefully at least) generate more interesting answers than this question, which is based on a misleading premise to start with. – Norbert Schuch Aug 7 '18 at 15:49
@meowzz I think it makes more sense to ask a new question. Otherwise, you will invalidate half of the answers (as you will change the meaning of the question), which is not really fair towards the people who answered. – Norbert Schuch Aug 7 '18 at 16:01
Rock paper scissors seems a good one. As for the pack of cards, I do not see the collapse metaphor here: while a player has not yet shown his cards, the cards are already set even if you do not see them. But I guess it is a point of view.
Maybe just pick an MMORPG with 52 monsters appearing randomly.
cnada
I suppose the game w/ the deck would be "pick a card" – meowzz Aug 1 '18 at 23:22
A quantum MMORPG, so cool ;). – FSic Aug 2 '18 at 9:12
A rolling n-sided die would be a good analogy that follows the coin example very closely. Until the die settles you can think of it as having not yet "collapsed", and dice can be imagined with whatever number of sides is desired (at least in your mind; if you're trying to make a physical example, that might be difficult).
Dripto Debroy
This would be the best analogy to the classical coin flipping, as far as I can see. – user1271772 Aug 2 '18 at 9:24
|
CommonCrawl
|
Industrial brewing yeast engineered for the production of primary flavor determinants in hopped beer
Charles M. Denby1,2 na1,
Rachel A. Li2,3,4 na1,
Van T. Vu5,
Zak Costello2,4,6,
Weiyin Lin1,2,
Leanne Jade G. Chan2,4,
Joseph Williams7,
Bryan Donaldson8,
Charles W. Bamforth ORCID: orcid.org/0000-0002-8270-52287,
Christopher J. Petzold2,4,
Henrik V. Scheller2,3,9,
Hector Garcia Martin ORCID: orcid.org/0000-0002-4556-96852,4,6 &
Jay D. Keasling ORCID: orcid.org/0000-0003-4170-60881,2,4,5,10,11
Nature Communications volume 9, Article number: 965 (2018)
Metabolic engineering
Flowers of the hop plant provide both bitterness and "hoppy" flavor to beer. Hops are, however, both a water and energy intensive crop and vary considerably in essential oil content, making it challenging to achieve a consistent hoppy taste in beer. Here, we report that brewer's yeast can be engineered to biosynthesize aromatic monoterpene molecules that impart hoppy flavor to beer by incorporating recombinant DNA derived from yeast, mint, and basil. Whereas metabolic engineering of biosynthetic pathways is commonly enlisted to maximize product titers, tuning expression of pathway enzymes to affect target production levels of multiple commercially important metabolites without major collateral metabolic changes represents a unique challenge. By applying state-of-the-art engineering techniques and a framework to guide iterative improvement, strains are generated with target performance characteristics. Beers produced using these strains are perceived as hoppier than traditionally hopped beers by a sensory panel in a double-blind tasting.
During the brewing process, Saccharomyces cerevisiae converts the fermentable sugars from grains into ethanol and a host of other flavor-determining by-products. Flowers of the hop plant, Humulus lupulus L., are typically added during the wort boil to impart bitter flavor and immediately before or during the fermentation to impart "hoppy" flavor and fragrance (Fig. 1a). Over the past two decades, consumers have displayed an increasing preference for beers that contain hoppy flavor. Hops are an expensive ingredient for breweries to source (total domestic sales have tripled over the past 10 years due to heightened demand) and a crop that requires a large amount of natural resources: ~100 billion L of water is required for annual irrigation of domestic hops and considerable infrastructure is required to deliver water from its source to the farm1,2. Further, hops vary considerably in essential oil content, making it challenging to achieve a consistent hoppy taste in beer.
Engineering brewer's yeast to express monoterpene biosynthetic pathways thereby replacing flavor hop addition. a During the brewing process, S. cerevisiae converts wort—a barley extract solution rich in fermentable sugars—into ethanol and other by-products. Hops are added immediately before, during, or after fermentation to impart "hoppy" flavor. Engineered strains produce linalool and geraniol, primary flavor components of hoppy beer, thereby replacing hop additions. b Six full-length plant-derived linalool synthase genes, as well as PTS-truncated variants, were expressed on high-copy plasmids. Full-length genes and PTS-truncated genes predicted by either ChloroP (C) or the RR-heuristic method (RR) are indicated by colored lines. c Error bars correspond to mean ± standard deviation of three biological replicates. Asterisks indicate statistically significant increases in monoterpene production compared with the control strain (Y) as determined by a t-test using p-value <0.025. The LIS from the California wildflower Clarkia breweri has been shown to increase production of linalool when heterologously expressed in plants47 and in yeast48. However, when C. breweri LIS was expressed, either with native codons (nCb) or "yeast-optimized" codons (Cb), linalool was not detected. The Mentha citrata LIS (Mc) truncated at the RR motif was identified as sufficiently active to allow for monoterpene production at levels characteristic of commercial beer and was chosen for integration into brewer's yeast strains
Hop flowers are densely covered by glandular trichomes, specialized structures that secrete secondary metabolites into epidermal outgrowths3. These secretions accumulate as essential oil, which is rich in various terpenes, the class of metabolites that impart hoppy flavor to beer. Considerable research has investigated which of these molecules are primarily responsible for this flavor4; these studies are complicated by genetic, environmental, and process-level variation5 and have suggested that the bouquet of flavor molecules contributed to beer by hops is complex. Nonetheless, the two monoterpene molecules linalool and geraniol have been identified as primary flavor determinants by several sensory analyses of hop extract aroma6,7,8 and finished beer taste and aroma7,9,10,11, and together, they are major drivers of the floral aroma of Cascade hops9, the most widely used hop in American craft brewing12. Previous metabolic engineering efforts have achieved microbial monoterpene biosynthesis in various microbial hosts. Work in a domesticated wine yeast has demonstrated the feasibility of producing monoterpene compounds by biosynthesis in yeast by overexpression of a geraniol synthase from a high-copy plasmid propagated in selective media13. However, engineering genetically stable, controlled, precise production of a combination of specific flavorants in any industrial food-processing agent has remained a formidable challenge.
In this work, we create drop-in brewer's yeast strains capable of biosynthesizing monoterpenes that give rise to hoppy flavor in finished beer, without the addition of flavor hops. To achieve this end, we identify genes suitable for monoterpene biosynthesis in yeast; we develop methods to overcome the difficulties associated with stable integration of large constructs in industrial strains; we adapt genetic tools to generate a collection of engineered industrial yeast strains on an unprecedented scale; we develop computational methods to effect precise biosynthetic control and leverage them to create an iterative framework for reaching target production levels. Ultimately, sensory analysis performed with beer brewed in pilot industrial fermentations demonstrates that engineered strains confer hoppy flavor to finished beer.
Identification of yeast-active linalool and geraniol synthases
The monoterpene synthases that catalyze the reaction of geranyl pyrophosphate (GPP) to linalool and geraniol in hops have not yet been identified14. However, genes from other plant species have been shown to encode these activities. To identify a linalool synthase (LIS) gene for functional heterologous expression in yeast, we expressed six different plant-derived LIS genes in a lab yeast strain engineered for high GPP precursor supply (Fig. 1b). However, none of the full-length proteins exhibited sufficient activity to achieve target monoterpene concentrations in finished beer (Fig. 1c). In plants, monoterpenes are biosynthesized in chloroplasts; plant monoterpene synthases, therefore, typically contain an N-terminal plastid targeting sequence (PTS) composed of 20–80 polar amino acids, which is cleaved to produce a mature protein. Truncation of the PTS sequence can improve expression and activity of microbially expressed monoterpene synthases15,16. However, methods for predicting the optimal PTS truncation site, as well as for predicting portability of enzymes from plant species to yeast are imperfect. We therefore screened candidate LIS variants from different sources and with different truncation sites for increased activity (Fig. 1b and Supplementary Fig. 1). We tested bioinformatically predicted17 PTS sites and observed a substantial increase in activity for the Lycopersicon esculentum LIS (Fig. 1c). We also used a heuristic, structure-based approach for PTS prediction: a conserved double arginine (RR) motif that functions as part of an active-site lid preventing water access to the carbocationic reaction intermediate18 lies immediately C-terminal to the PTS in several well-characterized terpene synthases15,16. We observed the highest linalool titer from the truncated M. citrata LIS (t67-McLIS, Fig. 1c). An analogous screen of six geraniol synthases revealed that the heterologous expression of the full-length protein from O. basilicum (ObGES) leads to geraniol production in yeast (Supplementary Fig. 1d).
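As an illustration of the RR-heuristic truncation described above, the following Python sketch locates the first double-arginine motif within the N-terminal region of a synthase sequence and removes the upstream putative PTS. The sequence, the 80-residue search window, and the function name are illustrative assumptions rather than the exact procedure used in this study.

```python
# Hypothetical sketch of the RR-motif truncation heuristic.
# The sequence and the 80-residue search window are illustrative assumptions.

def truncate_at_rr_motif(protein_seq: str, search_window: int = 80) -> str:
    """Return the protein truncated so that it begins at the first 'RR' pair
    found within the N-terminal search window (the putative end of the
    plastid targeting sequence)."""
    idx = protein_seq.find("RR", 0, search_window)
    if idx == -1:
        # No RR motif near the N-terminus; keep the full-length protein.
        return protein_seq
    return protein_seq[idx:]

# Example with a made-up N-terminal sequence:
full_length = "MALTSSSSLLKPT" + "RR" + "SGNYKPLAW"  # placeholder sequence
mature_like = truncate_at_rr_motif(full_length)
print(mature_like)
```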
Strategy for engineering monoterpene biosynthesis in brewing yeast
Once we identified monoterpene synthases that were sufficiently active in S. cerevisiae, we set out to engineer brewer's yeast strains capable of producing monoterpenes during beer fermentation. Yeast naturally produces the sesquiterpene precursor farnesyl pyrophosphate (FPP) through the ergosterol biosynthesis pathway, as FPP serves as a precursor for essential metabolites including hemes and sterols. While the flux through this pathway is tightly regulated, extensive metabolic engineering efforts have informed key genetic modifications that obviate regulatory checkpoints19,20 and increase monoterpene precursor supply21 (Supplementary Fig. 2). HMG-CoA reductase (HMGR) is one of the key rate-limiting steps of the pathway and is controlled by an inhibitory regulatory domain that responds to product accumulation19. Overexpression of a truncated form of yeast HMGR lacking the regulatory domain (tHMGR) results in increased flux towards end products22. A downstream enzyme, FPP synthase (FPPS), catalyzes the sequential condensation of two isopentyl pyrophosphate molecules with dimethylallyl pyrophosphate. GPP, the immediate precursor of monoterpene biosynthesis, is formed as an intermediate of the sequential reactions. While the high processivity of the wild-type FPPS limits the intracellular availability of GPP, a mutant (FPPS*) has been identified that reduces processivity, thereby increasing production of GPP-derived end products21. Based on these observations, we hypothesized that modulating the expression of tHMGR, FPPS*, t67-McLIS, and ObGES would result in brewer's yeast strains capable of producing linalool and geraniol during fermentation at concentrations encompassing those typical of finished beer (~0.2 mg/L)9,23.
In devising a strategy to modulate pathway activity, two challenges were considered. First, de novo design and generation of a collection of multi-gene constructs is difficult, time consuming, and expensive. To circumvent this challenge, we combined an existing toolkit of yeast genetic parts with a Golden Gate assembly strategy for facile design and rapid pathway construction24 (Fig. 2a, d and Supplementary Fig. 3). Second, incorporating large (i.e., >10 kb) genetically stable DNA constructs into brewer's yeast has not been reported, and is complicated by their ploidy as well as concerns regarding the incorporation of selection markers in food-processing agents. We therefore developed a Cas9-mediated methodology for stable and marker-less pathway integration (Fig. 2a–c, Supplementary Fig. 4, and Supplementary Note 1). Our method leverages a colorimetric assay to screen for positive transformants and allows for macroscopic visualization of successful integration events. Interestingly, this method also allowed us to visualize the high degree of genetic instability associated with heterozygous integration (Fig. 2b, c). By combining the assembly and integration strategies, we were able to generate strains with a diverse set of genetic designs, where each strain contained a unique combination of promoters driving expression of the four modulated genes (Fig. 2d).
Iterative improvement of strain design towards targeted monoterpene production levels. a Schematic of design–build–test–learn cycle. Design: Constructs were designed by combining yeast toolkit parts (i.e., promoters, terminators, linkers, etc.) with monoterpene biosynthesis pathway genes. Build: Methodology for integrating constructs into brewer's yeast. Note that, for simplicity, only a single allelic copy of the ADE2 locus is diagrammed. The ADE2Δ strain was co-transformed with a Cas9/sgRNA plasmid and repair template, which targeted a double-stranded break (DSB) in the ADE2 3′ sequence. Test: Data were collected using LC/MS, HPLC, and GC/MS. Learn: Correlation analyses informed design principles. Mathematical models were used to evaluate the extent to which design principles improved strain search efficiency. Variables corresponding to measured protein levels are highlighted. b, c Transformation plate illustrating colorimetric screening method. ADE2 encodes an enzymatic step in purine biosynthesis and its deletion results in the accumulation of a metabolite with red pigment when grown on media containing intermediate adenine concentration. Because the repair template contains the ADE2 gene, templated DSB repair results in a white colony phenotype. Because brewer's yeasts have multiple allelic copies of ADE2, stable integration requires repair at multiple ADE2Δ genomic loci. White colonies streaked from transformation plates result in either white colony color (b) or variegated colony color (c); white colony color corresponds to homozygous integration; variegated colony color corresponds to heterozygous integration, illustrating genetic instability of heterozygous allele containing a large DNA construct. d Illustration of assembly steps from parts (promoters/genes) to gene cassettes, to repair templates for first iteration strains. Assemblies are simplified for clarity—for detailed description see Supplementary Fig. 3. e Relative promoter strengths with corresponding protein and product abundances and sugar consumption (attenuation). Strains are sorted by total monoterpene production
Iterative design refines target monoterpene levels
Without empirical data, it is difficult to predict the relationship between specific genetic designs and metabolic end-product concentrations25,26. To improve search efficiency towards desired monoterpene concentrations, we separated our design–build–test process (Fig. 2a) into two stages, thereby affording us an opportunity to first sample a small subset of design space and then hone subsequent designs towards desired production profiles. An initial set of 18 strains containing promoters predicted to span a wide range of expression strengths was constructed and grown under microaerobic fermentation conditions that mimicked an industrial brewing process (Supplementary Fig. 5). We found that these first iteration strains produced monoterpenes within the range of commercial concentrations, although generally at the lower end of that range (Fig. 3). Some strains exhibited a reduced fermentation capacity (Supplementary Figs. 6 and 7), including the strains closest to commercial concentrations. However, reduced fermentation capacity did not correlate with monoterpene production, suggesting that the fermentation defects were not primarily due to monoterpene toxicity (Supplementary Note 2).
Production of monoterpenes by engineered strains. Linalool and geraniol production of engineered yeast strains compared to concentrations found in commercial beers, plotted in log10 space. For relationships between flavor determinant concentration and taste intensity, the logarithm of a stimulus is typically proportional to the logarithm of the perceived intensity, such that the distance between points in log10 space is expected to be directly proportional to the magnitude of taste difference. First and second iteration points represent the mean of three biological replicates. Standard deviation values are listed in Supplementary Table 14. In ascending order of monoterpene concentration, commercial beers are Pale Ale, Torpedo Extra IPA, and Hop Hunter IPA, obtained from the Sierra Nevada Brewing Company
To further explore the relationship between genetic design and monoterpene production, the relative abundance of the four modulated proteins was measured for each strain during the active phase of fermentation. Protein abundance was strongly correlated with previously characterized promoter strength (Supplementary Table 1 and Supplementary Fig. 8), demonstrating that the qualitative relationship between promoter strengths generally extends from a lab strain grown aerobically in rich medium to a brewing strain grown in industrial brewing conditions. Furthermore, total monoterpene production was correlated with tHMGR and FPPS* abundance and linalool production was correlated with t67-McLIS abundance, verifying that the selected genes indeed control monoterpene production as anticipated (Fig. 2e and Supplementary Table 1). An interesting anomaly was that ObGES abundance was not correlated with geraniol production. We reasoned that since ObGES and t67-McLIS compete for GPP supply, variation in t67-McLIS abundance may obscure the relationship between ObGES and geraniol production. Indeed, we observed that the fraction of geraniol in total monoterpene composition correlated with the ratio between ObGES and t67-McLIS (p-value < 0.05; t-test). Together, these findings were encouraging, as they firmly demonstrated that genetic design can be used to control monoterpene production and that the knowledge gained from our initial test set could guide subsequent design.
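The correlation analyses described above could be reproduced along the following lines. This is a minimal sketch assuming a per-strain table of relative protein abundances and monoterpene titers; the values and column names below are made-up stand-ins, not data from the study.

```python
# Minimal sketch of per-strain correlation analysis (illustrative values only).
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "tHMGR":     [0.2, 0.5, 0.8, 1.0, 0.3, 0.9],
    "FPPS_star": [0.1, 0.6, 0.7, 0.9, 0.4, 0.8],
    "t67_McLIS": [0.3, 0.2, 0.9, 0.5, 0.7, 0.6],
    "ObGES":     [0.4, 0.8, 0.2, 0.6, 0.5, 0.9],
    "linalool":  [0.05, 0.04, 0.30, 0.15, 0.20, 0.18],  # mg/L
    "geraniol":  [0.03, 0.10, 0.02, 0.08, 0.05, 0.12],  # mg/L
})
df["total"] = df["linalool"] + df["geraniol"]

# Correlation of precursor-pathway protein levels with total monoterpene titer
for protein in ["tHMGR", "FPPS_star", "t67_McLIS", "ObGES"]:
    r, p = pearsonr(df[protein], df["total"])
    print(f"{protein}: r = {r:.2f}, p = {p:.3g}")

# Geraniol fraction vs. the ObGES / t67-McLIS abundance ratio
r, p = pearsonr(df["ObGES"] / df["t67_McLIS"], df["geraniol"] / df["total"])
print(f"ratio vs geraniol fraction: r = {r:.2f}, p = {p:.3g}")
```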
We next set out to generate a second iteration of designs with target production levels defined by three commercially hopped beers that span a wide range of monoterpene concentrations and perceived hop flavor/aroma intensity. The salient trends observed in the test set informed two guiding principles: (1) to shift overall monoterpene production towards higher levels, designs were composed of strong promoters driving tHMGR, FPPS*, and ObGES and (2) to ensure variation in the ratio of linalool to geraniol, designs encompassed a range of promoter strengths driving expression of t67-McLIS. (Supplementary Table 11 and Supplementary Fig. 9). We anticipated that applying these design principles towards desired performance characteristics would improve search efficiency. In order to evaluate the extent of improvement, we established a mathematical modeling-based framework to predict the relationship between genetic design and monoterpene production (Online Methods, Supplementary Note 3, and Supplementary Tables 2–5). Using this framework, we predicted that the selected design principles significantly improved our search efficiency (Supplementary Fig. 10). Importantly, we observed a consistent improvement in comparing actual distance-from-target monoterpene levels between the first and second iteration strains (Fig. 4).
Evaluation of iterative genetic design refinement. Simulated (top panel) and measured (bottom panel) distance between engineered strain performance and target performance of commercial beers obtained from Sierra Nevada Brewing Company for first and second iteration strains. Target performance is defined based on Pale Ale (a), Torpedo IPA (b), and Hop Hunter IPA (c). In all cases, second iteration strains are closer to target performance than first iteration strains. p-values (t-test) reflect the degree of statistical significance with which second iteration strains are closer to target performance than first iteration strains
Engineered strains confer consistent hop flavor
The anticipated commercial value of generating hop flavor molecules through yeast biosynthesis is predicated on three assumptions: (1) because the conditions inside a fermenter can be precisely controlled, the final concentrations of yeast-biosynthesized monoterpenes in beer are likely to be more consistent compared with those given by conventional hop additions, (2) the biosynthesized monoterpenes linalool and geraniol confer hoppy flavor as perceived through human taste, and (3) the variation in hop flavor molecule concentrations corresponds to differences on the order of those discernible by human taste. To test the consistency of yeast-biosynthesized monoterpene levels, replicate fermentations were performed at 8 L scale with a subset of engineered strains. To test the consistency of hop-derived monoterpene levels, Cascade hop preparations originating from five different farms in the Northwestern United States were used to supplement fermentations performed with the parent strain (Fig. 5a,b). We observed little variation in final monoterpene concentrations between replicate samples fermented with engineered strains, whereas fermentations hopped with different preparations yielded significantly greater variation (linalool p-value <1 × 10−5, geraniol p-value <1 × 10−3; t-test) (Fig. 5a,b). This result demonstrates that engineered strains biosynthesize monoterpenes more consistently than can be achieved by randomly selecting Cascade hop preparations from different farms. Next, to test whether yeast-biosynthesized monoterpenes conferred hop flavor, beer was produced in an authentic, pilot-scale brewhouse, following a recipe for a classic American Ale, using three engineered strains and the parent strain (WLP001) as a control (Supplementary Fig. 11 and Supplementary Table 17). A panel of tasters determined that the finished beers exhibited a range of hop flavor/aroma intensity (Fig. 5c). In addition, the apparent difference in hop flavor/aroma intensity between beer fermented with JBEI-16652 and JBEI-16669 was considerable, despite only a ~twofold difference in linalool and geraniol concentrations. Taken together, these results demonstrate that monoterpenes derived from yeast biosynthesis during fermentation give rise to hop flavor/aroma in finished beer and that biosynthesis provides greater consistency than traditional hopping. Finally, in order to compare the intensity of hop flavor conferred by traditional dry hopping with the hop flavor conferred by engineered strains, fermentations were performed both with the parent strain with dry-hop additions as well as with JBEI-16652 without dry-hop additions (Supplementary Fig. 11 and Supplementary Table 18). Conventionally dry-hopped beers consistently exhibited increased hop flavor/aroma as perceived by a sensory panel; however, these effects were not statistically significant compared to the parental control (Fig. 5d). In contrast, beer produced with JBEI-16652 again exhibited significantly higher hop flavor/aroma than the parental control. Similar monoterpene concentrations were observed between the two batches, demonstrating the consistent performance of the engineered strain.
Characteristics of pilot-scale beer fermented with engineered strains. a, b Variation in linalool (a) and geraniol (b) concentrations of engineered brewing strain fermentations compared with variation in concentrations generated by traditional dry hopping. For engineered strain samples, horizontal lines correspond to the mean of three biological replicates. For traditional dry hopping, the horizontal line corresponds to the mean of five Cascade hop samples obtained from different farms. Vertical lines correspond to standard deviation. c Sensory analysis of the pilot-scale beers fermented with three engineered strains compared to beer fermented with the parental strain. d Sensory analysis of pilot-scale beers fermented with engineered strain JBEI-16652 compared to beer fermented with the parental strain, with or without traditional Cascade dry-hopping. Asterisks (c, d) indicate statistically significant differences in hop aroma intensity as compared to the control beer (p-value < 0.05; Dunnett's test). Difference from control, DFC, was measured on a 9-point scale
In this study, we have engineered brewer's yeast for production of flavor molecules ordinarily derived from hops. We developed new methods to overcome the difficulties associated with stable integration of large constructs in industrial strains. Unlike classical microbial metabolic engineering efforts that focus on maximizing the titer of a single molecule, we focused on tuning the expression of key genes in a biosynthetic pathway to simultaneously make precise concentrations of multiple flavor molecules. This application promises to generate hop flavors with more consistency than traditional hop additions, as hop preparations are notoriously variable in the content of their essential oil and the flavor they impart to beer23. It should be noted that blending hop preparations from different sources can be used to reduce variation. However, blending is ultimately limited by practical constraints: In the best case, large craft breweries create one single hop blend per year, which fails to mitigate year-to-year variation. Our strategy is favored over plant or microbial bioprocess extraction because it avoids the use of non-renewable chemicals typical of industrial extraction. While historic consumer trepidation towards genetically engineered foods is of concern for widespread adoption, the general increase in consumer acceptance of such foods when tied to increased sustainability27 is encouraging.
Previous studies have demonstrated the feasibility of engineering brewer's yeast by incorporation of heterologous genes13,28,29; however, the scope and commercial relevance of these efforts have been limited, in part due to methodological difficulties of incorporating an array of large, genetically stable DNA constructs into industrial yeasts. Recent studies have resorted to alternative methods such as breeding hybrid strains30. While this has proven to be a powerful approach for generating diverse aroma phenotypes, it is intrinsically limited to enzymes and aromas associated with native yeast metabolism. Here, we developed a complementary methodology that allows for stable incorporation of plant secondary metabolism genes into industrial brewer's yeast. We provide evidence that incorporating linalool and geraniol biosynthesis confers hop flavor to beer. We note that the full flavor imparted by traditional hopping is likely to rely on a more diverse bouquet of molecules. The methodologies described herein provide a foundation for generating more complex yeast-derived hop flavors, and broaden the possibilities of yeast-biosynthesized flavor molecules to those throughout the plant kingdom.
All strains, expression plasmids, and additional plasmids used for strain construction are listed and described in Supplementary Tables 6–13. The sequence files corresponding to each plasmid can be found in the JBEI Public Registry (https://public-registry.jbei.org/)31. Plasmids were propagated in Escherichia coli strain DH10B and purified by Miniprep (Qiagen, Germantown, MD, USA). The "pathway" plasmids used to construct the engineered brewing strains were assembled by the standard Golden Gate method using type II restriction enzymes and T7 DNA Ligase (New England Biolabs, Ipswich, MA, USA)24,32 (for additional detail, see schematized assembly strategy in Supplementary Fig. 3). All other plasmids generated in this study were constructed by Gibson assembly33 using Gibson assembly master mix (New England Biolabs, Ipswich, MA, USA). Constructs were designed using the DeviceEditor bioCAD software34, and assembly primers were generated with j5 DNA assembly design automation software35 using the default settings. PCR amplification was performed using PrimeSTAR GXL DNA polymerase according to the manufacturer's instructions (Takara Bio, Mountain View, CA, USA). Genes coding for full-length linalool and geraniol synthases were ordered either from IDT (San Diego, CA, USA) as G-blocks or from Life Technologies (Carlsbad, CA, USA) as DNA strings. The coding sequences of heterologous genes in all plasmids were validated by Sanger sequencing (Genewiz, South Plainfield, NJ, USA and Quintara, South San Francisco, CA, USA).
Strain construction
Yeast lab strains were transformed by the high-efficiency lithium acetate method36. Strains were cultivated in yeast extract + peptone + dextrose (YPD) medium unless otherwise noted. To select for transformants containing auxotrophic complementation cassettes, transformed cells were plated on standard dropout medium (Sunrise Science Products, San Diego, CA, USA). To select for transformants containing drug resistance cassettes, cells were recovered in YPD medium for 4 h after transformation, and then plated on YPD medium supplemented with 200 μg/L geneticin (Sigma-Aldrich, St. Louis, MO, USA) or hygromycin B (Sigma-Aldrich, St. Louis, MO, USA). Minor modifications were made to cultivation conditions for brewer's yeast transformations: pre-transformation cultures were grown in YPD medium supplemented with 200 mg/L adenine sulfate at 20 °C in glass test tubes with shaking at 200 rpm. A single colony was used to inoculate an initial 5 mL culture, which was grown overnight to turbidity. This culture was used to inoculate a second 5 mL culture to an OD600 (optical density at 600 nm) of 0.01, which was grown for 18 h. The second culture was then used to inoculate 50 mL cultures in 250 mL Erlenmeyer flasks to OD600 of 0.05. After ~8 h of growth, strains were transformed by the lithium acetate method36, cells were recovered in YPD medium for 4 h, plated on YPD supplemented with 200 μg/L geneticin, and then grown for 5–7 days at 20 °C.
DNA used for genomic integration was prepared either by PCR-amplifying plasmid DNA or by digesting a plasmid with restriction enzymes. For construction of the GPP-hyper-producing strain, integration fragments were amplified from the corresponding plasmids by PCR (Supplementary Table 7). For construction of pathway-integrated brewing strains, plasmid DNA was linearized by restriction digestion with NotI-HF and PstI-HF (New England Biolabs, Ipswich, MA, USA) (Supplementary Tables 10 and 11).
All integration events were confirmed by diagnostic PCR using GoTaq Green Master Mix (Promega, Madison, WI, USA). For brewer's yeast strains, homozygosity at the integration locus was tested using primers targeted to the 5′ and 3′ junctions of desired allele and the parental allele. The identity of the multi-gene integration was verified with primers targeted to each of the four promoter/gene junctions. The promoter identities corresponding to each strain can be found in Supplementary Tables 12 and 13.
Screening synthases
For the linalool and geraniol synthase screening, single colonies were picked from the transformation plate and used to inoculate cultures in 5 mL CSM-Leu (Sunrise) +2% raffinose (Sigma-Aldrich, St. Louis, MO, USA) medium. After 24 h, the precultures were diluted into fresh CSM-Leu + 2% galactose (Sigma-Aldrich, St. Louis, MO, USA) medium to an OD of 0.05 and grown for 72 h with shaking at 200 rpm. An organic overlay was added 24 h after inoculation to capture hydrophobic monoterpenes. Decane was used as the overlay for the cultures expressing LIS and dodecane was used for those expressing GES. The overlay was chosen so as to minimize overlap of retention times between solvent and product for subsequent gas chromatography–mass spectrometry (GC/MS) analysis.
Microaerobic fermentation
Strains were streaked on YPD medium and grown for 2 days at 25 °C. Single colonies were used to inoculate initial 2 mL precultures in 24-well plates (Agilent Technologies, Santa Clara, CA, USA), which were grown for 3 days at 20 °C with shaking at 200 rpm. Strains were grown in a base medium composed of 100 g/L malt extract (ME) (Sigma-Aldrich, St. Louis, MO, USA). Each well contained a 5 mm glass bead (Chemglass Life Sciences, Vineland, NJ, USA). The resulting cultures were used to inoculate second 6 mL precultures in fresh 24-well plates to an OD of 0.1, which were then grown for 3 days at 20 °C with shaking at 120 rpm. The resulting cultures were then used to inoculate 25 mL cultures in glass test tubes to an OD of 1.0. These cultures were equipped with a one-way airlock for microaerobic fermentation and grown for 5 days at 20 °C (Supplementary Fig. 5). Test tubes were vortexed for 30 s every 24 h.
High-performance liquid chromatography
Maltotriose, maltose, glucose, and ethanol were separated by high-performance liquid chromatography (HPLC) and detected by a refractive index (RI) detector. On day 5, fermentation samples were centrifuged at 18,000 × g for 5 min, filtered using Costar® Spin-X® Centrifuge Tube Filters, 0.22-µm pore, transferred to HPLC tubes, and loaded into an Agilent 1100 HPLC equipped with an Agilent 1200 series auto-sampler, an Aminex HPX-87H ion exchange column (Bio-Rad, Hercules, CA USA), and an Agilent 1200 series RI detector. Metabolites were separated using 4 mM H2SO4 aqueous solution with a flow rate of 0.6 mL/min at 50 °C. Absolute sample concentrations were calculated using a linear model generated from a standard curve composed of authentic maltotriose, maltose, glucose, and ethanol standards (Sigma-Aldrich, St. Louis, MO, USA) diluted in water over a range of 0.2–20 g/L. All data are provided in Supplementary Table 15.
Monoterpene quantification
Monoterpenes were quantified by GC/MS analysis, using an Agilent GC system 6890 series GC/MS with Agilent mass selective detector 5973 network. In all experiments, 1 μL of the sample was injected (splitless), using He as the carrier gas onto a CycloSil-B column (Agilent, 30 m length, 0.25 mm inner diameter (i.d.), 0.25 μm film thickness, cat. no. 112-6632). The carrier gas was held at a constant flow rate of 1.0 mL/min and EMV mode was set to a gain factor of 1.
Sampling, oven temperature schedule, and ion monitoring was optimized for each experiment: for quantifying linalool and geraniol production in terpene synthase screens, the samples were spun down and the organic phase (solvent overlay) was collected, diluted 1:10 in ethyl acetate (Sigma-Aldrich, St. Louis, MO, USA), transferred to a glass GC vial, and injected into the GC column. For samples corresponding to the LIS screen, the oven temperature was held at 50 °C for 12 min, followed by a ramp of 10 °C/min to a temperature of 190 °C and a ramp of 50 °C/min to a final temperature of 250 °C, and then held at 250 °C for 1 min. The solvent delay was set to 20 min, and the MS was set to SIM mode for acquisition, monitoring m/z ions 80, 93, and 121. For samples corresponding to the geraniol synthase screen, the oven temperature was held at 50 °C for 5 min, then ramped at 30 °C/min to a temperature of 135 °C, then ramped at 5 °C/min to a temperature of 145 °C, then ramped at 30 °C/min to a temperature of 250 °C, and held at 250 °C for 1 min. The solvent delay was set to 10.8 min and the MS was set to monitor m/z ions 69, 93, 111, and 123. For quantifying linalool and geraniol in microaerobic fermentations performed with brewer's yeasts, samples were extracted on day 5 using ethyl acetate. Fermentation samples were collected and spun down, 1600 μL of the supernatant was mixed with ethyl acetate at a 4:1 ratio in a 96-well plate, the plate was sealed and vortexed for 2 min, then spun at 3000 × g for 5 min, and 30 μL of the ethyl acetate was transferred into a glass GC vial. The resulting preparation was injected into the GC column. For quantifying linalool and geraniol in various commercial beers, 2 mL of ethyl acetate was added to 8 mL of the beer in glass tubes (Kimble Chase, Rockwood, TN, USA). This was mixed by hand for 2 min and spun at 1000 × g for 10 min. Thirty microliters of the ethyl acetate layer was transferred to glass GC vials, and the resulting preparation was injected into the GC column. For both the microaerobic fermentation experiments and sampling of commercial beers, the oven temperature was held at 50 °C for 5 min, followed by a ramp of 5 °C/min to a temperature of 200 °C and a ramp of 50 °C/min to a final temperature of 250 °C, and then held at 250 °C for 1 min. The solvent delay was set to 5 min and the MS was set to monitor m/z ions 55, 69, 71, 80, 81, 93, 95, 107, 121, 123, and 136.
Peak areas for linalool and geraniol were quantified using MSD Productivity ChemStation software (Agilent Technologies, Santa Clara, CA, USA). Absolute sample concentrations were calculated using a linear model generated from a standard curve composed of authentic linalool and geraniol standards (Sigma-Aldrich, St. Louis, MO, USA). For monoterpene synthase screening experiments, standards were diluted in ethyl acetate over a range of 0.2–50 mg/L. For the microaerobic fermentation experiments and sampling of commercial beers, standards were spiked into a preparation extracted from the parent strain fermentation sample (i.e., a control preparation used to ensure accurate baseline signal) over a range of 0.2–10 mg/L. In calculating actual concentrations, apparent concentrations were scaled based on dilution or concentration in GC injection preparation.
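For illustration, converting a measured peak area to an absolute concentration from a linear standard curve, with scaling for dilution during injection preparation, might look like the following sketch; the standard concentrations and peak areas are made-up values, not the calibration data from this study.

```python
# Minimal sketch of standard-curve quantification (illustrative numbers).
import numpy as np

std_conc = np.array([0.2, 1.0, 5.0, 10.0, 50.0])          # mg/L, authentic standards
std_area = np.array([1.1e4, 5.3e4, 2.6e5, 5.2e5, 2.6e6])  # integrated peak areas

# Fit a linear model: area = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_area, deg=1)

def peak_area_to_conc(area: float, dilution_factor: float = 1.0) -> float:
    """Convert a sample peak area to mg/L, scaling by any dilution made
    during GC injection preparation."""
    apparent = (area - intercept) / slope
    return apparent * dilution_factor

print(peak_area_to_conc(1.3e5, dilution_factor=10))  # e.g., a 1:10 diluted overlay
```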
Protein abundance data are reported in Supplementary Table 16. Culture (5 mL) was sampled after 2 days, vortexed, and spun at 3000 × g for 5 min. The supernatant was discarded, and the pellet was flash frozen. Plate-based cell pellets were lysed by chloroform-methanol precipitation, described below, while samples in tubes were lysed by re-suspending the pellets in 600 µL of yeast lysis buffer (6 M urea in 500 mM ammonium bicarbonate), followed by bead beating with 500 µL zirconia/silica beads (0.5 mm diameter; BioSpec Products, Bartlesville, OK, USA). Samples in tubes were bead beat for five cycles of 1 min with 30 s on ice in between each cycle. Subsequently, they were spun down in a benchtop centrifuge at a maximum speed for 2 min to pellet cell debris, and the clear lysate was transferred into fresh tubes. Plate-based cell lysis and protein precipitation was achieved by using a chloroform-methanol extraction37. The pellets were re-suspended in 60 µL methanol and 100 µL chloroform, and then 50 µL zirconia/silica beads (0.5 mm diameter; BioSpec Products, Bartlesville, OK, USA) were added to each well. The plate was bead beat for five cycles of 1 min with 30 s on ice in between each cycle. The supernatants were transferred into a new plate and 30 µL water was added to each well. The plate was centrifuged for 10 min at a maximum speed to induce the phase separation. The methanol and water layers were removed, and then 60 µL of methanol was added to each well. The plate was centrifuged for another 10 min at a maximum speed and then the chloroform and methanol layers were removed and the protein pellets were dried at room temperature for 30 min prior to re-suspension in 100 mM ammonium bicarbonate with 20% methanol.
The protein concentration of the samples was measured using the DC Protein Assay Kit (Bio-Rad, Hercules, CA, USA) with bovine serum albumin used as a standard. A total of 50 µg protein from each sample was digested with trypsin for targeted proteomic analysis. Protein samples were reduced by adding tris 2-(carboxyethyl)phosphine to a final concentration of 5 mM, followed by incubation at room temperature for 30 min. Iodoacetamide was added to a final concentration of 10 mM to alkylate the protein samples and then incubated for 30 min in the dark at room temperature. Trypsin was added at a ratio of 1:50 trypsin:total protein, and the samples were incubated overnight at 37 °C.
Peptides were analyzed using an Agilent 1290 liquid chromatography system coupled to an Agilent 6460 QQQ mass spectrometer (Agilent Technologies, Santa Clara, CA, USA). The peptide samples (10–20 µg) were separated on an Ascentis Express Peptide ES-C18 column (2.7 μm particle size, 160 Å pore size, 5 cm length × 2.1 mm i.d., coupled to a 5 mm × 2.1 mm i.d. guard column with similar particle and pore size; Sigma-Aldrich, St. Louis, MO, USA), with the system operating at a flow rate of 0.400 mL/min and column compartment at 60 °C. Peptides were eluted into the mass spectrometer via a gradient with initial starting condition of 95% Buffer A (0.1% formic acid) and 5% Buffer B (99.9% acetonitrile, 0.1% formic acid). Buffer B was held at 5% for 1.5 min, and then increased to 35% Buffer B over 3.5 min. Buffer B was further increased to 80% over 0.5 min where it was held for 1 min, and then ramped back down to 5% Buffer B over 0.3 min where it was held for 0.2 min to re-equilibrate the column to the initial starting condition. The peptides were ionized by an Agilent Jet Stream ESI source operating in positive-ion mode with the following source parameters: gas temperature = 250 °C, gas flow = 13 L/min, nebulizer pressure = 35 psi, sheath gas temperature = 250 °C, sheath gas flow = 11 L/min, VCap = 3500 V. The data were acquired using Agilent MassHunter, version B.08.00. Resultant data files were processed by using Skyline38 version 3.6 (MacCoss Lab, University of Washington, Seattle, WA, USA) and peak quantification was refined with mProphet39 in Skyline.
Data analysis was performed using the R statistical programming language40. Additional libraries were used for data visualization functionalities41,42,43,44,45. For protein and metabolite analysis heatmaps (Fig. 2e and Supplementary Fig. 9), relative levels were reported as follows: promoter strengths were represented as a fraction of their previously reported rank order24, ranging from PRNR2 (0) to PTDH3 (1). Feature scaling was used to standardize the range of protein, monoterpene, and sugar abundances. Let s_i equal the log10-transformed abundance value for species S in strain i. Normalized values were computed according to Eq. (1) as:
$$s_i^\prime = \frac{s_i - \min(S)}{\max(S) - \min(S)} \qquad (1)$$
For sugar analysis, unfermented ME was included in max/min calculations. For fermentable sugars (i.e., maltotriose, maltose, glucose), the scaled values were subtracted from 1 in order to represent proximity to desired sugar consumption profile.
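A minimal sketch of the scaling in Eq. (1), including the inversion applied to fermentable sugars, is shown below; the abundance values are illustrative.

```python
# Minimal sketch of the feature scaling in Eq. (1) (illustrative values).
import numpy as np

def scale_feature(abundances, invert=False):
    """Min-max scale log10-transformed abundances to [0, 1].
    Setting invert=True (used for fermentable sugars) reports 1 - scaled value,
    i.e., proximity to the desired consumption profile."""
    s = np.log10(np.asarray(abundances, dtype=float))
    scaled = (s - s.min()) / (s.max() - s.min())
    return 1.0 - scaled if invert else scaled

linalool = [0.05, 0.2, 0.9, 1.5]       # mg/L across strains (made-up values)
maltose_remaining = [2.0, 10.0, 45.0]  # g/L; include unfermented ME as the maximum
print(scale_feature(linalool))
print(scale_feature(maltose_remaining, invert=True))
```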
The distance metric of an engineered strain with respect to a given commercial beer was calculated using the Manhattan length as the distance of monoterpene production from beer monoterpene concentrations and the distance in sugar consumption from the parent strain. First, the difference between log10-transformed values of engineered strain monoterpene concentration and target beer monoterpene concentration was calculated for each species, linalool and geraniol. Second, the absolute values of these differences were calculated. Finally, the resulting values, together with the fraction of total sugar remaining after fermentation, were averaged.
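The distance metric could be computed as in the following sketch, assuming log10-space comparison of linalool and geraniol and the fraction of sugar remaining as described above; the strain and target values are illustrative.

```python
# Minimal sketch of the distance-from-target metric (illustrative values).
import numpy as np

def distance_from_target(strain_linalool, strain_geraniol,
                         target_linalool, target_geraniol,
                         fraction_sugar_remaining):
    """Average of |log10(strain) - log10(target)| for each monoterpene and the
    fraction of total fermentable sugar left after fermentation."""
    d_lin = abs(np.log10(strain_linalool) - np.log10(target_linalool))
    d_ger = abs(np.log10(strain_geraniol) - np.log10(target_geraniol))
    return np.mean([d_lin, d_ger, fraction_sugar_remaining])

# Example: one strain compared against a hypothetical commercial target
print(distance_from_target(0.15, 0.08, 0.20, 0.10, fraction_sugar_remaining=0.05))
```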
Three different models were constructed in Python to predict monoterpene production from protein levels (for detailed description and implementation, see Supplementary Data File 1). Files containing data used to generate predictive models are included as Supplementary Data Files 2 and 3. Both the Gaussian regressor and linear models were implemented using Scikit-learn46. Additional equations needed to describe the linear model are given in Supplementary Table 3. Equations describing the Michaelis–Menten kinetics model are given in Supplementary Table 4 and a schematic of the model structure is provided in Supplementary Fig. 13. Kinetic parameters were scraped from the literature (Supplementary Table 5) and protein concentrations are given in Supplementary Data File 2. Free parameters were included to convert relative protein counts to absolute protein values. Additionally, a parameter β determined the relative ratio between the endogenous FPPS and FPPS*.
Both the linear and Gaussian regressor models were fit using standard methods from the Scikit Learn library. The kinetic model was manually constructed without external libraries. To fit the kinetic model, a differential evolution algorithm was used to perform parameter optimization on a nonlinear cost function. Specifically, the sum of the squared residual error of the model predictions from the first iteration strains was minimized with respect to the previously described parameters. The kinetic coefficients were bounded to vary over an order of magnitude from the values described in the literature. In order to cross-validate the models and minimize overfitting, a leave-one-out methodology was applied to each model. The error residuals from this cross-validation technique are reported in Supplementary Table 2.
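As an illustration of the regression fitting and leave-one-out cross-validation described above, the sketch below fits a linear regressor and a Gaussian process regressor with Scikit-learn; the feature and target arrays are randomly generated stand-ins, and the manually constructed kinetic model is omitted.

```python
# Minimal sketch of leave-one-out cross-validation of the regression models.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((18, 4))   # protein abundances for 18 first-iteration strains (stand-ins)
y = rng.random(18)        # monoterpene titers (stand-ins)

loo = LeaveOneOut()
for name, model in [("linear", LinearRegression()),
                    ("gaussian process", GaussianProcessRegressor())]:
    pred = cross_val_predict(model, X, y, cv=loo)
    residuals = y - pred
    print(f"{name}: mean |residual| = {np.abs(residuals).mean():.3f}")
```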
Analysis performed for predicting the extent of performance improvement for second iteration strains (Fig. 4 and Supplementary Fig. 10) compared with randomly designed strains is described in Supplementary Note 3.
Toxicity assay
OD600 measurements were taken in 48-well clear flat bottom plates (Corning Inc., Corning, NY, USA) using a Tecan Infinite F200 PRO reader, with acquisition every 15 min. Analysis was performed using custom python scripts. Growth curves were calculated by averaging six biological replicates; shaded areas represent one standard deviation from the mean. Growth rates were calculated with a sliding window of 5 h, solving for maximum growth rate. Growth rates are presented as the average of six biological replicates; error bars represent 95% confidence intervals (Supplementary Fig. 12).
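A sliding-window estimate of the maximum growth rate, as described above, might be implemented as in the following sketch; the OD600 trace is simulated and the exact windowing used in the custom scripts may differ.

```python
# Minimal sketch of a 5-h sliding-window maximum growth-rate estimate
# from OD600 data acquired every 15 min (simulated trace).
import numpy as np

def max_growth_rate(times_h, od600, window_h=5.0):
    """Fit ln(OD) vs. time in a sliding window and return the largest slope (1/h)."""
    times_h = np.asarray(times_h, dtype=float)
    log_od = np.log(np.asarray(od600, dtype=float))
    n_window = max(2, int(window_h / (times_h[1] - times_h[0])))
    rates = []
    for start in range(len(times_h) - n_window + 1):
        slope, _ = np.polyfit(times_h[start:start + n_window],
                              log_od[start:start + n_window], 1)
        rates.append(slope)
    return max(rates)

t = np.arange(0, 24, 0.25)                   # 15-min acquisitions over 24 h
od = 0.05 * np.exp(0.3 * np.minimum(t, 12))  # simulated growth that plateaus
print(max_growth_rate(t, od))
```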
Pilot fermentations
Strains were streaked on YPD medium and grown for 2 days at 25 °C. Single colonies were used to inoculate initial 5 mL cultures in glass test tubes, which were grown for 2 days at 20 °C with shaking at 200 rpm. The resulting cultures were used to inoculate 1 L cultures in 2 L glass Erlenmeyer flasks, which were then grown for 3 days at 20 °C with shaking at 200 rpm. Strains were grown in a base medium composed of 100 g/L ME (Sigma-Aldrich, St. Louis, MO, USA). The resulting cultures were then used to inoculate industrial fermentations in wort produced in a 1.76 hL pilot brewery.
For the first set of fermentations, 35 kg of 2-Row malt was milled and added to 105 L of DI water treated with 79.15 g of brewing salts. Mashing was performed for 30 min at 65 °C, 10 min at 67 °C, and 10 min at 76 °C. The wort was allowed to recirculate for 10 min and was separated by lautering. Sparging occurred for 58 min, giving a final pre-boil volume in the brew kettle of 215 L. The wort was boiled until it reached a final volume of 197 L and a gravity of 11.65 °Plato. Kettle additions included 125 g of Magnum hop pellets, 15.1 g of Yeastex yeast nutrients, and 15 g Protofloc (Murphy and Son, Nottingham, UK). Ingredients were sourced from Brewers Supply Group (Shakopee, MN, USA), except where otherwise noted. After the wort was separated from the hot trub, it was transferred to four 56 L custom fermenters (JVNW, Canby, OR, USA), each filled to 40 L. The beers were fermented at 19 °C until they reached terminal gravity, held for an additional 24 h for vicinal diketone (VDK) removal, and then cold conditioned at 0 °C. The length of fermentation, and in turn the length of cold conditioning, was strain dependent. Samples were taken every 24 h to measure °Plato and pH (see Supplementary Fig. 11). The resulting beer was filtered under pressure and carbonated prior to storage in 7.75 gallon kegs. Samples were collected during the kegging process for Alcolyzer (Anton Paar, Ashland, VA) analysis (see Supplementary Table 17).
For the second set of fermentations, 35 kg of 2-Row malt was milled and added to 105 L of DI water treated with 79 g of brewing salts. Mashing was performed for 30 min at 65 °C, 10 min at 67 °C, and 10 min at 76 °C. The wort was allowed to recirculate for 10 min and was separated by lautering. Sparging occurred for 52 min, giving a final pre-boil volume in the brew kettle of 214 L. The wort was boiled until it reached a final volume of 194 L and a gravity of 11.25 °Plato. Kettle additions included 97.01 g of Galena hop pellets, 15.1 g of Yeastex yeast nutrients, and 15 g Protofloc (Murphy and Son, Nottingham, United Kingdom). Ingredients were sourced from Brewers Supply Group (Shakopee, MN) except where otherwise noted. After the wort was separated from the hot trub, it was transferred to four 56 L custom fermenters (JVNW, Canby, OR), each filled to 40 L. The beers were fermented at 19 °C until they reached terminal gravity, held for an additional 24 h for VDK removal, and then cold conditioned at 0 °C. The length of fermentation, and in turn the length of cold conditioning, was strain dependent. Samples were taken every 24 h to measure °Plato and pH (see Supplementary Fig. 11). After 48 h at 0 °C, 88.5 g Cascade dry hops (either from Washington or from Idaho) were added to two fermenters containing parent strain WLP001. The dry hops were left on the beer at 1.67 °C for 1 week before filtering. The resulting beer was filtered under pressure and carbonated prior to storage in 7.75 gallon kegs. Samples were collected during the kegging process for Alcolyzer (Anton Paar, Ashland, VA, USA) analysis (see Supplementary Table 18).
Sensory analysis
Institutional Review Board approval for human research was obtained from the UC Berkeley Office for Protection of Human Subjects (CPHS protocol number 2017-05-9941). The Committee for Protection of Human Subjects reviewed and approved the application under Category 7 of federal regulations.
Panelists: Sensory analysis of the brewed beer was conducted at Lagunitas Brewing Company (Petaluma, CA, USA). The first panel consisted of 27 employee participants (17 males and 10 females), the second of 13 employee participants (11 males and 2 females), ranging in experience from 2 to 154 tasting sessions attended in calendar year 2017. Ages ranged from mid-20s to 50s. All participants received basic sensory training per Lagunitas standards.
Sensory analysis: Samples of 2 ounces were presented in clear 6 oz brandy glasses (Libbey, Toledo, OH, USA). Each panelist received five glasses, one control and four samples (one blind control and three variables) arranged randomly by balanced block design. Block design and data gathering were accomplished using the EyeQuestion® software (Logic8 BV, The Netherlands). In a single sitting, panelists were asked to rank hop aroma intensity as compared to the control on a 9-point ordinal scale anchored on one end with "No difference" and the other end with "Extreme difference."
Data analysis: Data were analyzed using Dunnett's test in conjunction with one-way analysis of variance using EyeOpenR® (Logic8 BV, The Netherlands). Analysis was performed at the 95% confidence level. The blind control is used as the reference sample to account for any scoring bias that might occur.
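For readers wishing to reproduce a comparable analysis outside EyeOpenR, the sketch below compares each sample against the blind control using Dunnett's test as implemented in SciPy (version 1.11 or later); the hop-aroma scores are made-up values.

```python
# Minimal sketch of Dunnett's test against the blind control
# (requires SciPy >= 1.11; scores are illustrative).
import numpy as np
from scipy.stats import dunnett

blind_control = np.array([1, 2, 1, 3, 2, 1, 2])  # difference-from-control scores
strain_a      = np.array([4, 5, 3, 6, 4, 5, 4])
strain_b      = np.array([2, 3, 2, 2, 3, 1, 2])

result = dunnett(strain_a, strain_b, control=blind_control)
print(result.pvalue)  # one p-value per sample vs. the blind control
```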
Dry hopping for evaluation of variation between hop preparations
Parental strain WLP001 was streaked on YPD medium and grown for 2 days at 25 °C. A single colony was used to inoculate an initial 50 mL preculture in a 250 mL glass Erlenmeyer flask, which was grown for 1 day at 20 °C with shaking at 200 rpm. The strain was grown in a base medium composed of 100 g/L ME (Sigma-Aldrich, St. Louis, MO, USA) supplemented with YPD. The resulting culture was used to inoculate a 1 L preculture in a 2 L glass Erlenmeyer flask, which was then grown for 2 days at 20 °C with shaking at 200 rpm. The resulting culture was then used to inoculate four 2 L cultures in 4 L glass Erlenmeyer flasks, which were then grown for 1 day at 20 °C with shaking at 200 rpm. The resulting cultures were then used to inoculate 8 L cultures in 3-gallon glass carboys (Midwest Supplies, Roseville, MN, USA). These cultures were equipped with a one-way airlock for microaerobic fermentation and grown for 6 days at 20 °C. In the meantime, five different Cascade hop samples grown on farms across the Pacific Northwest were obtained from YCH Hops (Yakima, WA, USA). The hop samples were ground using a mortar and pestle and liquid nitrogen. On day 6, samples were taken from the fermentations (as un-hopped controls) and 25 g of hops was added to each fermentation. Hops were left to steep for 3 days, after which samples were collected for GC/MS analysis.
Batch-to-batch variation
Strains were streaked on YPD medium and grown for 2 days at 25 °C. Single colonies were used to inoculate initial 5 mL precultures in glass test tubes, which were grown for 2 days at 20 °C with shaking at 200 rpm. Strains were grown in a base medium composed of 100 g/L ME (Sigma-Aldrich, St. Louis, MO, USA). The resulting cultures were used to inoculate 500 mL precultures in 2 L glass Erlenmeyer flasks, which were then grown for 1 day at 20 °C with shaking at 200 rpm. The resulting cultures were then used to inoculate 8 L cultures in 3-gallon glass carboys (Midwest Supplies, Roseville, MN, USA). These cultures were equipped with a one-way airlock for microaerobic fermentation and grown for 12 days at 20 °C. Samples were taken on day 12 for GC/MS analysis.
The authors declare that all data supporting the findings of this study are available within the paper and its supplementary information files. Sequence data and strains generated in this study have been deposited in the JBEI public registry. See Supplementary Tables 6–13 for construct sequences and strain information. Computer code used in this study can be accessed from Supplementary Data 1.
Hopunion, Y. C. Sustainability Report (YCH Hops, Yakima, WA, 2015).
IHGC - Economic Commission. International Hop Grower's Convention Full Statistical Report. Available at https://www.usahops.org/img/blog_pdf/34.pdf (IHGC, Paris, 2016).
Wagner, G. J. Secreting glandular trichomes: more than just hairs. Plant. Physiol. 96, 675–679 (1991).
Eyres, G. & Dufour, J.-P. Hop essential oil: Analysis, chemical composition and odor characteristics. Beer Health Dis. Prev. 7, 239–254 (2009).
Sharp, D. C., Townsend, M. S., Qian, Y., & Shellhammer, T. H. Effect of harvest maturity on the chemical composition of Cascade and Willamette hops. J. Amer. Soc. Brew. Chem. 72, 231–238 (2014).
Sanchez, N.B., Lederer, C.L., Nickerson, G.B., Libbey, L.M. & McDaniel, M.R. Sensory and analytical evaluation of hop oil oxygenated fractions. Dev. Food Sci. 29, 371–402 (1992).
Steinhaus, M. & Schieberle, P. Comparison of the most odor-active compounds in fresh and dried hop cones (Humulus lupulus L. variety spalter select) based on GC-olfactometry and odor dilution techniques. J. Agric. Food Chem. 48, 1776–1783 (2000).
Lermusieau, G., Bulens, M. & Collin, S. Use of GC-olfactometry to identify the hop aromatic compounds in beer. J. Agric. Food Chem. 49, 3867–3874 (2001).
Peacock, V. E., Deinzer, M. L., Likens, S. T., Nickerson, G. B. & McGill, L. A. Floral hop aroma in beer. J. Agric. Food Chem. 29, 1265–1269 (1981).
Irwin, A. J. Varietal dependence of hop flavour volatiles in lager. J. Inst. Brew. 95, 185–194 (1989).
Steinhaus, M., Fritsch, H. T. & Schieberle, P. Quantitation of (R)- and (S)-linalool in beer using solid phase microextraction (SPME) in combination with a stable isotope dilution assay (SIDA). J. Agric. Food Chem. 51, 7100–7105 (2003).
HGA. Hop Growers of America Statistical Packet (HGA, Yakima, WA, 2015).
Pardo, E., Rico, J., Gil, J. V. & Orejas, M. De novo production of six key grape aroma monoterpenes by a geraniol synthase-engineered S. cerevisiae wine strain. Microb. Cell Fact. 14, 1–8 (2015).
Wang, G. et al. Terpene biosynthesis in glandular trichomes of hop. Plant Physiol. 148, 1254–1266 (2008).
Williams, D. C., McGarvey, D. J., Katahira, E. J. & Croteau, R. Truncation of limonene synthase preprotein provides a fully active "pseudomature" form of this monoterpene cyclase and reveals the function of the amino-terminal arginine pair. Biochemistry 37, 12213–12220 (1998).
Crowell, A. L., Williams, D. C., Davis, E. M., Wildung, M. R. & Croteau, R. Molecular cloning and characterization of a new linalool synthase. Arch. Biochem. Biophys. 405, 112–121 (2002).
Emanuelsson, O., Nielsen, H. & von Heijne, G. ChloroP, a neural network-based method for predicting chloroplast transit peptides and their cleavage sites. Protein Sci. 8, 978–984 (1999).
Bohlmann, J., Meyer-Gauen, G. & Croteau, R. Plant terpenoid synthases: molecular biology and phylogenetic analysis. Proc. Natl. Acad. Sci. USA 95, 4126–4133 (1998).
Burg, J. S. & Espenshade, P. J. Regulation of HMG-CoA reductase in mammals and yeast. Prog. Lipid Res. 50, 403–410 (2011).
Ro, D.-K. et al. Production of the antimalarial drug precursor artemisinic acid in engineered yeast. Nature 440, 940–943 (2006).
Ignea, C., Pontini, M., Maffei, M. E., Makris, A. M. & Kampranis, S. C. Engineering monoterpene production in yeast using a synthetic dominant negative geranyl diphosphate synthase. ACS Synth. Biol. 3, 298–306 (2014).
Polakowski, T., Stahl, U. & Lang, C. Overexpression of a cytosolic hydroxymethylglutaryl-CoA reductase leads to squalene accumulation in yeast. Appl. Microbiol. Biotechnol. 49, 66–71 (1998).
Wolfe, P. H. A Study of Factors Affecting the Extraction of Flavor when Dry Hopping Beer. Dissertation, Oregon State Univ. (2012).
Lee, M. E., DeLoache, W. C., Cervantes, B. & Dueber, J. E. A highly characterized yeast toolkit for modular, multipart assembly. ACS Synth. Biol. 4, 975–986 (2015).
Ajikumar, P. K. et al. Isoprenoid pathway optimization for Taxol precursor overproduction in Escherichia coli. Science 330, 70–74 (2010).
Smanski, M. J. et al. Functional optimization of gene clusters by combinatorial design and assembly. Nat. Biotechnol. 32, 1241–1249 (2014).
International Food Information Council Foundation. 2014 IFIC Consumer Perceptions of Food Technology Survey. http://www.foodinsight.org/surveys/2014-food-technology-survey (IFIC, 2014).
Verstrepen, K. J. et al. Expression levels of the yeast alcohol acetyltransferase genes ATF1, Lg-ATF1, and ATF2 control the formation of a broad range of volatile esters. Appl. Environ. Microbiol. 69, 5228–5237 (2003).
Steensels, J. et al. Improving industrial yeast strains: exploiting natural and artificial diversity. FEMS Microbiol. Rev. 38, 947–995 (2014).
Steensels, J., Meersman, E., Snoek, T., Saels, V. & Verstrepen, K. J. Large-scale selection and breeding to generate industrial yeasts with superior aroma production. Appl. Environ. Microbiol. 80, 6965–6975 (2014).
Ham, T. S. et al. Design, implementation and practice of JBEI-ICE: an open source biological part registry platform and tools. Nucleic Acids Res. 40, e141 (2012).
Engler, C., Kandzia, R. & Marillonnet, S. A One pot, one step, precision cloning method with high throughput capability. PLoS ONE 3, e3647 (2008).
Gibson, D. G. et al. Enzymatic assembly of DNA molecules up to several hundred kilobases. Nat. Methods 6, 343–345 (2009).
Chen, J., Densmore, D., Ham, T. S., Keasling, J. D. & Hillson, N. J. DeviceEditor visual biological CAD canvas. J. Biol. Eng. 6, 1 (2012).
Hillson, N. J., Rosengarten, R. D. & Keasling, J. D. j5 DNA assembly design automation software. ACS Synth. Biol. 1, 14–21 (2012).
Gietz, R. D. & Woods, R. A. Transformation of yeast by lithium acetate/single-stranded carrier DNA/polyethylene glycol method. Methods Enzymol. 350, 87–96 (2002).
González Fernández-Niño, S. M. et al. Standard flow liquid chromatography for shotgun proteomics in bioenergy research. Front. Bioeng. Biotechnol. 3, 44 (2015).
MacLean, B. et al. Skyline: an open source document editor for creating and analyzing targeted proteomics experiments. Bioinformatics 26, 966–968 (2010).
Reiter, L. et al. mProphet: automated data processing and statistical validation for large-scale SRM experiments. Nat. Methods 8, 430–435 (2011).
R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria, 2016).
Wilkinson, L. ggplot2: Elegant Graphics for Data Analysis by Wickham, H. Biometrics 67, 678–679 (2011).
Garnier, S. viridis: Default Color Maps from 'matplotlib'. https://CRAN.R-project.org/package=viridis (R package version 0.3.4) (2016).
Venables, W.N. & Ripley, B.D. Modern Applied Statistics with S 4th edn (Springer, New York, 2002).
Wickham, H. Reshaping data with the reshape Package. J. Stat. Softw. 21, 1–20 (2007).
Warnes, G. R. et al. gdata: Various R Programming Tools for Data Manipulation. http://CRAN.R-project.org/package=readxl (2015).
Pedregosa, F. et al. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Lewinsohn, E. et al. Enhanced levels of the aroma and flavor compound S-linalool by metabolic engineering of the terpenoid pathway in tomato fruits. Plant. Physiol. 127, 1256–1265 (2001).
Herrero, O., Ramón, D. & Orejas, M. Engineering the Saccharomyces cerevisiae isoprenoid pathway for de novo production of aromatic monoterpenes in wine. Metab. Eng. 10, 78–86 (2008).
We thank W. DeLoache, M. Lee, B. Cervantes, and J. Dueber for providing yeast genetic toolkit parts; YCH Hops for providing hop samples and Lagunitas Brewing Company for conducting the sensory analysis; V. Chubukov for providing python scripts to process yeast growth assay data. We also thank M. Brown, D. Gillick, and P. Shih for helpful discussions and comments on the manuscript. This work was funded by NSF grant 1330914 and was part of the DOE Joint BioEnergy Institute (http://www.jbei.org), which was supported by the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research, through contract DE-AC02-05CH11231 between Lawrence Berkeley National Laboratory and the U.S. Department of Energy. This work was also funded by SBIR grant 1722376.
These authors contributed equally: Charles M. Denby and Rachel A. Li.
California Institute of Quantitative Biosciences (QB3), University of California, Berkeley, CA, 94720, USA: Charles M. Denby, Weiyin Lin & Jay D. Keasling
Joint BioEnergy Institute, Emeryville, CA, 94608, USA: Rachel A. Li, Zak Costello, Leanne Jade G. Chan, Christopher J. Petzold, Henrik V. Scheller, Hector Garcia Martin
Department of Plant and Microbial Biology, University of California, Berkeley, CA, 94720, USA: Rachel A. Li & Henrik V. Scheller
Lawrence Berkeley National Laboratory, Biological Systems and Engineering Division, Berkeley, CA, 94720, USA
Department of Bioengineering, University of California, Berkeley, CA, 94720, USA: Van T. Vu
DOE Agile BioFoundry, Emeryville, CA, 94608, USA: Zak Costello & Hector Garcia Martin
Department of Food Science and Technology, University of California Davis, Davis, CA, 95616, USA: Joseph Williams & Charles W. Bamforth
Lagunitas Brewing Company, Petaluma, CA, 94954, USA: Bryan Donaldson
Lawrence Berkeley National Laboratory, Environmental Genomics and Systems Biology Division, Berkeley, CA, 94720, USA: Henrik V. Scheller
Department of Chemical and Biomolecular Engineering, University of California, Berkeley, CA, 94720, USA: Jay D. Keasling
Novo Nordisk Foundation Center for Sustainability, Technical University of Denmark, 2900, Hellerup, Denmark
C.M.D. and R.A.L. conceived the idea, performed experiments, and wrote the manuscript; V.T.V. performed LIS activity screen; Z.C. developed predictive mathematical models; W.L. performed cloning and strain construction; L.J.G.C. and C.J.P. performed targeted proteomics; J.W. performed the pilot fermentations; B.D. performed the sensory analysis; H.V.S., H.G.M., and C.W.B. supervised the work; J.D.K. conceived the idea, supervised the work, and wrote the manuscript.
Correspondence to Charles M. Denby or Jay D. Keasling.
J.D.K. has a financial interest in Amyris, Lygos, Constructive Biology, and Demetrix, none of which will commercialize this technology. J.D.K, C.M.D., and R.A.L. have submitted a patent application covering aspects of this technology. C.M.D. and R.A.L. have a financial interest in Berkeley Brewing Science. The remaining authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peer Review File
Description of Additional Supplementary Files
Supplementary Data 1
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Denby, C.M., Li, R.A., Vu, V.T. et al. Industrial brewing yeast engineered for the production of primary flavor determinants in hopped beer. Nat Commun 9, 965 (2018) doi:10.1038/s41467-018-03293-x
DOI: https://doi.org/10.1038/s41467-018-03293-x
A robust and transformation-free joint model with matching and regularization for metagenomic trajectory and disease onset
Qian Li1,
Kendra Vehik2,
Cai Li1,
Eric Triplett3,
Luiz Roesch3,
Yi-Juan Hu4 &
Jeffrey Krischer2
To identify operational taxonomic units (OTUs) signaling disease onset in an observational study, a powerful strategy is to select participants in matched sets, profile their temporal metagenomes, and then perform trajectory analysis. Existing trajectory analyses model an individual OTU or the microbial community without adjusting for within-community correlation or matched-set-specific latent factors.
We proposed a joint model with matching and regularization (JMR) to detect OTU-specific trajectories predictive of host disease status. The between- and within-matched-set heterogeneity in OTU relative abundance and disease risk was modeled by nested random effects. The inherent negative correlation in microbiota composition was adjusted for by incorporating and regularizing the top-correlated taxa as longitudinal covariates, pre-selected by Bray-Curtis distance and elastic net regression. We designed a simulation pipeline to generate true biomarkers for disease onset together with the pseudo biomarkers caused by compositionality. We demonstrated that JMR effectively controlled false discoveries and pseudo biomarkers in a simulation study generating temporal high-dimensional metagenomic counts with random intercept or slope. Applying the competing methods to the simulated data and the TEDDY cohort showed that JMR outperformed the other methods and identified important taxa in infants' fecal samples whose dynamics preceded host disease status.
Our method JMR is a robust framework that models taxon-specific trajectory and host disease status for matched participants without transformation of relative abundance, improving the power of detecting disease-associated microbial features in certain scenarios. JMR is available in R package mtradeR at https://github.com/qianli10000/mtradeR.
Gut microbiota profiled by 16s rRNA gene sequencing or metagenomic (i.e., whole-genome shotgun) sequencing has been frequently used in observational studies of environmental exposures, immune biomarkers, and disease onset [1,2,3,4,5]. One of the challenges in analyzing microbiota in an observational study is to incorporate the matching between participants based on certain confounding risk factors (e.g. gender, clinical site, etc.) and/or disease status (case-control), such as the DIABIMMUNE and TEDDY cohorts [1, 2, 5]. A matching design effectively eliminates the noise effect of sample collection, storage, shipment, sequencing batch, and environmental exposures confounding with disease outcomes, as well as reduces the sequencing costs. Statistical analyses of microbiota in matched sets included, but are not limited to, conditional logistic regression [1], non-parametric comparison PERMANOVA [6] and LDM [7] with extension to compare cases and controls within a matched set [8], which aimed to model and analyze microbiome data at independent time points.
Longitudinal profiling is a powerful strategy for the microbiome studies that aim to identify differential microbial trajectories between exposure groups or phenotypes [9, 10] or detect the time intervals of differential abundance [11]. However, most of these studies failed to test if the compositional trajectory of an operational taxonomic unit (OTU) signaled host disease status. To detect microbial trajectories predictive of disease outcome in matched sets, an intuitive method is the generalized linear mixed effect model with or without the zero-inflation component [9, 10, 12], in which a taxon's abundance and/or presence is the outcome variable and the disease status is the covariate of interest. The Zero-Inflated Beta Regression (ZIBR) model [9] tests the association between OTU and a covariate factor using a two-part model for the non-zero relative abundance and presence of each OTU, assuming the non-zero relative abundance and presence being independent. A similar framework [10] was proposed to analyze the longitudinal zero-inflated counts per OTU using a Negative Binomial distribution, without converting the raw counts to relative abundance. A semi-parametric approach for longitudinal taxon-specific relative abundance is the linear mixed effect model (LMM) with asin-square-root transformation, which has been implemented in MaAsLin 2 [12].
One concern about using generalized linear mixed model to test the association between 16S rRNA or metagenomic trajectory and disease onset is that the covariates in this model may contribute to disease risk. For example, the HLA haplogenotypes and early use of probiotics may affect infants' gut microbiota and should be included as covariates. These factors were also found associated with islet autoimmunity among children enrolled in TEDDY [13]. One usually added interaction terms between each covariate and the disease outcome [3, 12] to adjust for the association. However, a linear model with many interaction terms may lead to overfitting and reduce the detection power [14]. A sensible choice is the joint modeling of longitudinal biomarker and survival outcomes [15, 16], but there are limitations in applying this model to microbiome data in observational studies. First, the cost of metagenomic sequencing and the availability of fecal samples in a multi-center study may restrict the metagenome profiling to a subgroup of participants selected by certain criteria [1,2,3], whose survival outcome may deviate from common statistical assumptions. Second, the classic joint modeling approach aims to address repeated measurements of biomarkers in a time-to-event analysis rather than test if a biomarker's intercept or slope is predictive of host health condition. Third, in an observational study that selects and matches participants by certain factors, their risk of developing disease is also matched. Thus, a survival submodel may not be capable of characterizing the disease risk between matched participants.
Many of the existing methods for microbiome data are built on the transformed relative abundance, such as centered log-ratio or inter-quartile log-ratio. In our new method, transformation of compositional data is not considered, since transformation strategy may have profound impact on analysis result and interpretation [17]. The compositional change in true biomarkers (e.g., causal OTUs contributing to disease onset) always leads to simultaneous change in some other OTUs' composition because of sum-to-one constraint. In an observational study with matching design, it is common to collect and profile microbiota at many time points. The sum-to-one constraint and latent noise effect may yield pseudo biomarkers with relative abundance associated with host disease status but not contributing to disease development. Hence, a taxon-level model is built for relative abundance trajectory that adjusts for the dynamic interdependence between taxa and reduces pseudo biomarker rate. In addition, we illustrate the performance of our method by a simulation pipeline that mimics the negative correlation in microbial community.
The latent technical noise in microbiome was removed by converting raw counts to relative abundance, and Zero-Inflated Beta density [9] was adopted to model an OTU's non-zero relative abundance and presence, respectively. We employed a subject-level random effect to link the logistic regression model of disease to a two-part longitudinal submodel. The latent effect of exposures related to matched set indicator was modeled by another random effect nested with the subject-level random effect. The OTU-disease association was assessed by jointly testing the scaling parameters for the subject-level random effect in the two-part submodel. We benchmarked the robustness and power of our method by a comprehensive simulation study and an application in the TEDDY cohort. The results illustrated that our method controlled the rates of false discovery and pseudo biomarkers, as well as improved the efficacy of detecting microbial trajectories signaling disease outcome.
For simplicity, the aim of the present research is to link the matched longitudinal microbiome samples to the hosts' matched disease risk and to incorporate the unknown dependence between taxa in a univariate trajectory framework, without modeling the compositionality. Briefly, we develop a Joint model with Matching and Regularization (JMR) to detect taxon-specific compositional trajectories associated with disease onset, adjusting for the linear correlation with other taxa and matched-set-specific latent noise. According to the characteristics of disease risk and infant-age gut microbiota in the TEDDY cohort, we designed a simulation pipeline similar to [8], generated the observed counts of temporal microbiota, and compared our method to LMM and ZIBR using the simulated data. We also applied these methods to the shotgun metagenomic sequencing data profiled from the 4-9 month stool samples of infants enrolled in the TEDDY cohort.
Overview of TEDDY microbiome study
TEDDY is an observational prospective study of children at increased genetic risk of type 1 diabetes (T1D) conducted in six clinical centers in the U.S. and Europe (Finland, Germany, and Sweden). A total of 8,676 children were enrolled from birth and followed every 3 months for blood sample collection and islet autoantibody measurement up to 4 years of age, then every 3-6 months based on autoantibody status until the age of 15 years or diabetes onset [18]. A primary disease endpoint in TEDDY is islet autoimmunity (IA), defined as persistently positive for insulin autoantibodies (IAA), glutamic acid decarboxylase autoantibodies (GADA), or insulinoma-associated-2 autoantibodies (IA-2A) at two consecutive visits confirmed by the two TEDDY laboratories [18]. The participants' monthly stool samples were collected from 3-month age until the onset of IA or censoring with random missing samples [1, 2]. Based on the sample availability and metagenomic sequencing cost, the microbiome study in TEDDY selected all the participants (cases) who developed IA by the design cutoff date May 31, 2012 and the controls at 1:1 case-control ratio matched by clinical center, gender, family history of T1D to profile the temporal gut microbiota, resulting in S = 418 matched sets (or pairs [19]). These matching factors are known risk factors for type 1 diabetes. Some of the matched sets are at higher risk of IA than the others due to higher risk human leukocyte antigen (HLA) genotypes, geography or having family history of T1D. Hence, the matched participants have comparable risk of IA, but heterogeneity still exists between them according to the case-control status by the design freeze date. The observed metagenomic counts table in TEDDY was generated by the standard procedure of DNA extraction, PCR amplification, shotgun metagenomic sequencing, assembly, annotation and quantification, as described in [1]. We visualized the top abundant species in the metagenomes of TEDDY participants who had matched IA endpoint no later than 2 years of age (Fig. 1).
Mean compositional trajectory of top abundant species in infant-age metagenomes grouped by host islet autoimmunity status at 2 years of age
Disease outcomes for the matched participants are simulated by the procedure below. The observed relative abundance per taxon was simulated under different scenarios. We first generated raw counts for a single OTU from a Beta-Binomial distribution to assess the robustness and power of our method JMR without covariate taxa. We also designed a shifting procedure to mimic the inherent negative correlation in the true composition of the microbiota and generated temporal high-dimensional raw counts tables to evaluate the performance of the compared methods.
Generate disease outcome in matched sets
We defined matched sets and subjects as 'high-risk' and 'low-risk' to generate the temporal OTU counts prior to disease onset. Subjects are matched at 1:1 ratio. For participant \(j=1,2\) in matched set s \((s=1,\dots ,S)\), we first generated subject-level and set-level random effects from a standard Normal distribution \(a_{s_j}\sim N(0,1)\), \(b_s\sim N(0,1)\). Each random effect was converted to a binary variable by the median value. That is \(A_{s_j}=\varvec{I}(a_{s_j}>\text {median}(a_{s_j}))\), \(B_s=\varvec{I}(b_s>\text {median}(b_s))\), where \(A_{s_j}=1\) (or \(B_s=1\)) represents a 'high-risk' subject (or set). Next, we simulated a host genotype \(G_{s_j}\) as disease risk factor, and the host disease status by a Bernoulli distribution \(O_{s_j}\sim B(p_{s_j})\), where \(\text {logit}(p_{s_j})=\alpha _0+\alpha _1 G_{s_j}+\alpha _2 A_{s_j}+\alpha _3 B_{s}\). We fixed \((\alpha _0,\alpha _1, \alpha _3)=(0.5,-2,1)\), which is the JMR estimate from real data, and set \(\alpha _2 \in \{0.5,0.75,1,1.25,1.5\}\) to generate different datasets.
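The generator above can be sketched in a few lines of R. This is a hedged illustration rather than the package code: the genotype G is drawn here as a Bernoulli(0.4) variable because its distribution is not stated in the text, and alpha2 is fixed at 1 for brevity.

```r
# Sketch of the matched-pair disease-status generator (assumptions noted above).
set.seed(1)
S <- 100                                        # number of matched sets (1:1 matching)
a <- rnorm(2 * S)                               # subject-level random effects a_{s_j}
b <- rnorm(S)                                   # set-level random effects b_s
A <- as.integer(a > median(a))                  # 'high-risk' subject indicator A_{s_j}
B <- rep(as.integer(b > median(b)), each = 2)   # 'high-risk' set indicator B_s, per subject
G <- rbinom(2 * S, 1, 0.4)                      # hypothetical binary host genotype G_{s_j}
alpha0 <- 0.5; alpha1 <- -2; alpha3 <- 1        # fixed as in the text
alpha2 <- 1                                     # varied over {0.5, ..., 1.5} in the paper
p <- plogis(alpha0 + alpha1 * G + alpha2 * A + alpha3 * B)
O <- rbinom(2 * S, 1, p)                        # disease status O_{s_j}
```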
Scenario A: single OTU counts.
We first simulated the observed counts of a single OTU by Beta-Binomial [14] distribution to compare the univariate trajectory methods without adjusting for covariate taxa. The true relative abundance of an OTU at the earliest time point \(t=1\) was drawn from a Beta distribution \(\mu _1\sim Beta(\mu _0,\phi _0)\), where parameters \(\mu _0,\phi _0\) were estimated by applying Beta-Binomial MLE to the metagenomic raw counts of an OTU selected at a given relative abundance level in the TEDDY data. To simplify the age-dependent effect, the relative abundance of this OTU at later time points \(t>1\) was generated by linearly increasing \(\mu _1\) to \(\mu _t\). The baseline relative abundance at time t in a matched set s was generated by \(\mu _{st}\sim Beta(\mu ,\phi _t)\), and was increased or decreased by \(\Delta \mu _{st}\) if the set was labeled as 'high-risk'. The true relative abundance of this OTU for subject j in set s at time point t was simulated by \(\mu _{s_jt}\sim Beta(\mu _{st}, \phi _{st})\), and was increased or decreased by \(\Delta \mu _{s_jt}\) if the subject was 'high-risk'. The total counts per sample, i.e., library size was drawn from a Poisson distribution \(N_{s_jt}\sim PS(100000)\), and the counts for this OTU is generated from a Binomial (BN) distribution \(Y_{s_jt}\sim BN(N_{s_jt},\mu _{s_jt})\).
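A minimal R sketch of the last two steps of this generator for one sample is shown below; it assumes the Beta mean/overdispersion pair is mapped to shape parameters with the overdispersion acting as a precision (shape1 = mu*phi, shape2 = (1-mu)*phi), which may differ from the exact parameterization used in the paper.

```r
# Scenario A sketch for one subject and one time point (parameterization is an assumption).
rbeta_mu <- function(n, mu, phi) rbeta(n, shape1 = mu * phi, shape2 = (1 - mu) * phi)

mu_subj <- rbeta_mu(1, mu = 1e-3, phi = 50)   # true relative abundance mu_{s_j t}
N <- rpois(1, 1e5)                            # library size N_{s_j t}
y <- rbinom(1, size = N, prob = mu_subj)      # observed OTU count Y_{s_j t}
```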
Scenario B1: counts table with random intercept and pseudo biomarkers
We also generated a high-dimensional counts table with \(P=1030\) OTUs to demonstrate the performance of each method, so that the covariate taxa can be used in JMR. The true composition of each microbiome sample \(\bar{\eta }_{s_jt}\) was simulated by a shifting procedure combined with a Dirichlet distribution to account for the negative correlation within the microbial community. The sample-wise library size was generated from a Poisson distribution, and the observed raw counts were sampled from a Multinomial distribution. Details of the data generation process for this scenario are available in Methods, with a visualization for dimension \(P=4\) in Fig. 2.
Shifting procedure in simulation scenario B1. From time T1 to T4, taxa A-C decrease and taxon D increases in relative abundance. Compared to a low-risk set, taxa A, D are more abundant and taxa B, C are less abundant in a high-risk set. Taxon A is less abundant (i.e., \(M^{-}\)) and taxon B is more abundant (i.e., \(M^{+}\)) in a high-risk subject compared to the matched low-risk subject, both being true biomarkers at each time point. Taxon C is a pseudo biomarker (randomly selected as \(M^{0}\) at time point T1) with relative abundance automatically changed due to sum-to-one constraint, while taxon D is unchanged at T1
For a subject labeled as 'high-risk', we increased \(15\%\) OTUs (denoted by \(M^{+}\)) in \(\bar{\eta }_{s_jt}\) by \(\Delta _{s_jt}\), and reduced another \(15\%\) OTUs (denoted by \(M^{-}\)) by \(d\Delta _{s_jt}\) (\(0<d<1\)). The subsets \(M^{+}\) and \(M^{-}\) are the true biomarkers for disease status. We randomly selected a third subset (denoted by \(M^{0}\)) from the remaining \(70\%\) OTUs in \(\bar{\eta }_{s_jt}\) and reduced the composition of \(M^{0}\) by a total of \((1-d)\Delta _{s_jt}\). There may exist OTUs never selected in \(M^{+}\), \(M^{-}\), or \(M^{0}\), which are the 'null' OTUs. The OTUs selected in \(M^{0}\) are the pseudo biomarkers due to random shift in frequency. We set the total shift \(\Delta _{s_jt}=\lambda \Delta ^0_{{s_jt}}\) at distinct effect size \(\lambda \in \{0.5,0.6,0.7,0.8\}\), where \(\Delta ^0_{{s_jt}}\) is the maximum shift restricted by sum-to-one.
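The subject-level shift can be sketched in R as below. The text does not specify how the total shift is spread across the OTUs inside each subset, so it is allocated proportionally to the current abundance here; treat this as an illustrative assumption.

```r
# Shift the composition of a 'high-risk' subject: add Delta over M+, remove d*Delta
# over M-, and take the remaining (1-d)*Delta from the randomly chosen subset M0.
shift_high_risk <- function(eta, Mp, Mm, M0, Delta, d) {
  eta[Mp] <- eta[Mp] + Delta * eta[Mp] / sum(eta[Mp])
  eta[Mm] <- eta[Mm] - d * Delta * eta[Mm] / sum(eta[Mm])
  eta[M0] <- eta[M0] - (1 - d) * Delta * eta[M0] / sum(eta[M0])
  eta / sum(eta)                                # guard against rounding drift
}
```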
Scenario B2: counts table with random slope and pseudo biomarkers
The data generation process for this scenario is similar to Scenario B1, except that the shift (\(\Delta _{s_jt}\)) in the true microbiota composition between 'low-risk' and 'high-risk' subjects varies across time points. It is worth noting that we cannot distinguish 'false positive' from 'pseudo positive' in scenarios B1 and B2. Hence, we use the sum of the false positive rate and the pseudo positive rate, i.e., the false or pseudo positive rate (FPPR), as a performance metric for scenarios B1 and B2. That is, \(\text {FPPR}=\frac{ \# \text { of positives in } (M^{+}\cup M^{-})^c }{\# \text { of OTUs in } (M^{+}\cup M^{-})^c }\).
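Given the indices of OTUs called positive and the true biomarker index sets, the FPPR defined above can be computed directly; a small R helper (a sketch) is:

```r
# False-or-pseudo positive rate: positives outside M+ and M- divided by the
# number of OTUs outside M+ and M-.
fppr <- function(called, M_plus, M_minus, P) {
  non_truth <- setdiff(seq_len(P), union(M_plus, M_minus))
  length(intersect(called, non_truth)) / length(non_truth)
}
```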
Scenario C: counts table without pseudo biomarkers
In this scenario we considered random intercept signaling the disease onset and fixed half OTUs in \(\bar{\eta }_{s_jt}\) as 'null' in order to evaluate the FPR and FDR of each method, although this scenario is not applicable to real data. Among the other half OTUs, we selected \(10\%\) OTUs in \(\bar{\eta }_{s_jt}\) as \(M^{+}\) and \(40\%\) OTUs as \(M^{-}\) without a subset of pseudo biomarkers (\(M^{0}\)).
Performance of competing methods
In scenario A, we compared JMR not adjusting for correlated taxa (JMR-NC) with the following methods: a) a joint model with regularization but without matching indicator and correlated taxa (JR-NC); b) the ZIBR model with a Wald statistic jointly testing OTU-specific abundance or presence using either a single random effect (ZIBR-S) or nested random effects (ZIBR-N); c) LMM with arcsin-square-root transformation using either a single random effect (LMM-S) or nested random effects (LMM-N). For LMM and ZIBR methods, we used R package gamlss and set the sample age, genotype, disease status, and genotype-disease interaction term as the fixed effect covariates. It's worth to note that the nested random effects used in LMM and ZIBR are independent of host disease risk, different from those in JMR.
We randomly selected 6 OTUs with different relative abundance from the TEDDY data and estimated the baseline parameters for each. These OTUs include Acinetobacter sp. NIPH 236, Brachyspira murdochii, Streptococcus phage YMC-2011, Erysipelatoclostridium ramosum, and Ruminococcus gnavus. We then generated \(n=10000\) replicates for each OTU with \(S\in \{50,100\}\). The type I error rate and power of each method were calculated at statistical significance level \(p<0.05\), as shown in Table 1. The results showed that JMR-NC persistently controlled the type I error and provided higher detection power at distinct abundance levels, except for the OTUs with \(-\log _{10}(y) \in (2, 3]\) and (5, 6]. The type I error of the reduced model JR-NC was severely inflated in some datasets and its power was lower than that of JMR-NC. LMM consistently controlled the type I error, with power lower than JMR-NC for most simulated OTUs. The ZIBR method yielded an inflated type I error rate and low efficacy regardless of sample size in this single-OTU scenario.
Table 1 The type I error and power based on 10000 simulated replicates for a taxon at different levels of mean relative abundance (\(\varvec{y}\))
In scenarios B1, B2 and C, we generated 10 replicates for each OTU table to assess the performance of competing methods. We evaluated the performance of each method at different size of set-level random effect \(\gamma \in \{0.6,0.7,0.8\}\). The taxa associated with disease onset in each OTU table are detected by FDR cutoff \(q<0.15\). The FPPR in scenario B1 (Fig. 3) showed that adjusting for the top-correlated taxa in JMR successfully controlled the rate of pseudo biomarkers across different scenarios and was more powerful than LMM at larger sample size (\(N=200\)), although JMR showed lower detection power compared to JMR-NC. The results in Fig. 4 also demonstrated the outperformance of JMR-NC, JMR, and ZIBR in the sensitivity of detecting taxon-specific trajectory heralding disease outcome. The power of ZIBR in either intercept or slope analysis was higher than the competing methods regardless of sample size in the high-dimensional scenarios, but this model yielded inflated FPPR (Fig. 3). The LMM methods were powerful in the test of intercept, but this model occasionally produced inflated FPPR (Fig. 3) regardless of the set-level random effect size. Furthermore, the power of LMM was unstable in intercept analysis, while its power in slope analysis was nearly zero. To confirm the impact of \(\gamma\) on performance, we also compared the metrics in Figs. 3 and 5 between different values of \(\gamma\), using Kruskal-Wallis test. A larger set-level random effect led to significant change in FPPR, FPR, FDR for LMM and ZIBR methods (\(p<10^{-5}\)), but this impact was trivial in JMR or JMR-NC (\(p>0.1\)). JMR showed the best performance in slope analysis, with higher sensitivity (Fig. 4) and the lowest FPPR (Fig. 3). Results of scenario C in Fig. 5 showed that JMR and LMM effectively controlled the FPR, while LMM produced higher FPR at larger matched-set-specific random effect (\(\gamma\)). The FDR of JMR at \(\gamma =0.6\) was relatively higher than that of LMM due to lower sensitivity. The inflated FPR and FDR of ZIBR in scenario C (Fig. 5) is consistent with the FPPR in scenario B (Fig. 3).
FPPR of each method for scenario B1 at different size of matched-set-specific random effect
Sensitivity of each method for scenarios B1 and B2 at different sample size (N) and different effect size (\(\lambda\))
FDR and FPR for each method in scenario C at different size of matched-set-specific random effect
For each raw counts table in scenarios B and C, more than half of simulated OTUs have the observed zero-inflation probability (i.e., \(1-\)prevalence) between 2% and 90%, although there are a few OTUs with the observed prevalence at \(100\%\). The overall prevalence of each OTU table in scenarios B and C is similar between different datasets, which cannot be specified in the Dirichlet-Multinomial distribution or the shifting procedure. Hence, we assess the impact of zero-inflation on performance only in scenario A, whereas the prevalence is related to taxon-specific relative abundance. We also visualized the prevalence of six OTUs generated in scenario A (Fig. 6). The power of JMR-NC (Table 1) was better for the prevalence at a medium level, i.e., replicates with \(-\log _{10}(y)\) between (3, 4] or (4, 5] (Fig. 6). The OTUs with higher abundance and relatively lower prevalence (i.e., replicates with \(-\log _{10}(y) \in (1,2]\) in Fig. 6) showed better efficacy in Table 1. In general, OTU-specific prevalence being too high or too low may reduce the power of JMR.
Prevalence distribution for each OTU in scenario A
Application in TEDDY
We applied the competing methods to the longitudinal metagenomes profiled from TEDDY children's monthly stool samples collected at the age of 4-9 months [1]. We included the cases developing IA between 9-month and 24-month age and their matched controls who remained IA-negative by the cases' diagnosis age. For each matched pair included in the present analysis, one participant was IA positive and the other one was negative at the age of 24 months. We excluded the participant(s) matched to multiple pairs, yielding \(N=152\) subjects (\(S=76\) pairs) and \(n=672\) metagenome samples. The cases who experienced IA onset after 24 months and their matched controls were not included in this analysis.
We first filtered OTUs at genus and species level by relative abundance \(>10^{-6}\) and prevalence \(>5\%\), selecting 125 out of 265 genera and 365 out of 750 species in downstream analysis. It's worth to note that there are 1797 species in total profiled and quantified in TEDDY cohort, with 750 species detected between 3- and 9-month age. The sample age and the hosts' breastfeeding status per time point were used as longitudinal covariates, while HLA DR3 &4 haplotype was included as time-invariant covariates. For the LMM and ZIBR methods, we used the interaction term between IA status and the binary HLA category (DR3 &4 vs. others) as a covariate to adjust for the association. We tested each OTU's association with IA by FDR cutoff \(q<0.05\) or \(q<0.1\), individually. The HLA DR3 &4 genotype was confirmed positively and significantly (\(p<0.05\) by Wald test) associated with IA in JMR. The results in Table 2 showed that JMR identified more OTUs than LMM in both intercept and slope analysis. The LMM methods only found a small subgroup of taxa associated with IA at either genus or species level. We also visualized the overlap and difference between JMR, JMR-NC, LMM-N selected by \(q<0.1\) in Fig. 7 with OTU names listed in Supplementary Table S1, and then compared Akaike Information Criterion (AIC) of JMR and JMR-NC for the 76 species detected by both methods. Adjusting for the correlated taxa in JMR did improve model fitting with lower mean AIC (-2631.847) compared to JMR-NC (-2615.571). LMM-N is not comparable to JMR or JMR-NC in terms of information criteria, since the taxon-specific relative abundance was transformed by asin-square-root.
Table 2 The number of genera and species associated with IA detected by each method in a subgroup of TEDDY participants
Venn diagram for the intercept analysis in TEDDY data by JMR, JMR-NC, LMM-N
The taxa with mean abundance (intercept) associated with IA onset exclusively detected by both JMR and JMR-NC at \(q<0.1\) include Bifidobacterium breve, Bacteroides fragilis, Lactobacillus ruminis, Veillonella ratti. B.breve, as one of the three species dominating infant-age gut microbiota in TEDDY, was less abundant in intercept (i.e. at 4- and 9-month) during infancy among IA cases, with density shown in Fig. 8. The species B.fragilis as part of the normal microbiota in human colon was found more abundant among IA cases compared to their matched controls (Fig. 1). This Bacteroides species was also found differential between T1D cases and controls at only one time point in a small-size Finnish cohort [20].
Distribution of relative abundance for B.breve and E.coli per time point grouped by 2-year IA status
Two more abundant species visualized in Fig. 1, Faecalibacterium prausnitzii and Escherichia coli, were associated with IA in slope and exclusively detected by JMR. F.prausnitzii, one of the most abundant and important commensal bacteria of the human gut microbiota, which produces butyrate and short-chain fatty acids from the fermentation of dietary fiber, increased faster in IA cases after 6 months of age. This rapid change and abnormally higher level of F.prausnitzii prior to IA seroconversion may be a result of the sudden change of dietary pattern during infancy.
Our method successfully detected the case-control difference in the slope of E.coli, which was reported as an amyloid-producing bacterium with temporal dynamics heralding IA onset in a subset analysis of the DIABIMMUNE cohort [21]. The relative abundance of E.coli in TEDDY smoothly decreased from 4 to 9 months for both cases and controls (Fig. 1), and it was relatively more abundant in controls between 7 and 9 months, with stratified densities shown in Fig. 8. The temporal change of E.coli prior to IA seroconversion in TEDDY detected by JMR was consistent with the decrease of E.coli reported in the DIABIMMUNE cohort [21], which was possibly due to prophage activation according to the E.coli phage/E.coli ratio prior to E.coli depletion in that study.
We developed a joint model with nested random effects to test the association between taxa and disease risk, and adjusted for the correlated taxa screened by a pre-selection procedure in abundance and prevalence, individually. We implemented our method in an R package mtradeR (metagenomic trajectory analysis with disease endpoint and risk factors) with illustration examples at https://github.com/qianli10000/mtradeR. The JMR function implemented the framework in equation (1) by parallel computing. We also provided simulation functions StatSim and TaxaSim to generate (binary) disease status and temporal high-dimensional metagenomic counts of matched sets. The runtime of each method for different sample size and different number of OTUs were compared on an 8-core computer, with mean and standard deviation shown in Table 3. The nested random effects were utilized in each method. For the univariate models without covariate taxa, LMM-N is the fastest algorithm and ZIBR-N is the slowest, both implemented in gamlss R package. Although the adjustment of correlated taxa in JMR requires additional computation, the runtime of JMR is still shorter than ZIBR-N in gamlss.
Table 3 The mean and standard deviation (SD) of runtime in minutes for 30 repeated runs by each method at different number of longitudinal samples (n) and filtered OTUs (\(\tilde{P}\)) in TEDDY data. The OTUs in each dataset are filtered by either relative abundance \(>10^{-6}\), prevalence \(>5\%\) or relative abundance \(>10^{-5}\), prevalence \(>10\%\)
The simulation of single OTU demonstrated the performance of each method at different relative abundance levels, implying that LMM with either single or nested random effect is still a robust method. The simulation of high-dimensional OTU tables also illustrated LMM's overall performance in the test of intercept, but the unstableness of LMM is a concern in real data analysis. JMR yielded lower false or pseudo positive rate in the simulated datasets and higher detection power in slope analysis by adjusting for the top-correlated taxa. The pre-selection of top-correlated taxa in JMR was performed in relative abundance and presence, individually, being consistent with the two-part model strategy. According to the simulation study, a disadvantage of JMR is the limited power at small sample size and the dependence on tuning parameter. The prescreening procedure in JMR may occasionally select a true biomarker as covariate taxon, which is possibly confounding with the subject-level random effect. Hence, the adjustment of related taxa in JMR reduced the detection power compared to JMR-NC, although this strategy controlled the pseudo biomarker rate. Adding nodes in the GH approximation may improve the power of JMR, but more nodes will also lead to additional computation costs. Hence, future work should focus on improvement of JMR in both detection power and computation efficiency. Furthermore, the simulation results in Fig. 3 also suggested the minimum number of participants or matched pairs required based on set-level or subject-level random effect size. In an observational study with strong set-level noises in the microbiota (e.g., multi-center effect), a minimum sample size of \(N=200\) participants (i.e., \(S=100\) pairs) coupled with JMR can improve the detection power and control FPPR at each level of disease-associated random effect.
Another limitation of our method is the potential bias in scaling parameter (\(\lambda _r, \lambda _p\)) estimation, possibly caused by the \(L_2\) regularization. Our current work only focused on the unsigned association between a taxon and host disease status by using a Wald statistic. An improvement in the estimate of scaling parameter and statistical inference should be considered in future work, such as the algorithm in ZINQ [14]. We did not use quantile regression in current research, since the performance of ZINQ required tuning of grid. But ZINQ provided an alternative approach for modeling zero-inflation in microbiota composition with fewer statistical assumptions.
The right-censoring of longitudinal biomarker measurements or a binary disease outcome always occurs in observational studies. Our model allows random missingness or censoring of microbiome samples at any time point. In an observational study like TEDDY, the controls' disease outcome was censored at or later than the matched cases' endpoint, because the case-control matching was based on the participants' disease status. Thus, right-censoring is not applicable to the disease status at matched endpoint. For a study matching participants solely based on confounding risk factors (e.g., DIABIMMUNE), the right-censoring of disease outcome should be addressed prior to the usage of JMR, such as multiple imputation. There are other important topics to be considered in the modeling of longitudinal microbiome data. One potential direction is high dimensional modeling framework, such as tensor singular value decomposition [22]. A promising extension of the current work in JMR is to exploit functional data analysis for multiple microbial trajectories. By employing a non-parametric joint modeling, we may be able to capture nonlinear trends and heterogeneous patterns of longitudinal biomarkers in microbiota, as well as negative correlations among taxa [23].
The proposed framework JMR successfully controlled the false or pseudo biomarkers in taxon-specific trajectory analysis with improved detection power by incorporating the matching of participants and adjusting for the dependence between taxa.
Joint model with matching and regularization
The probability for participant j \((j=1,\dots ,J)\) in matched set s \((s=1,\dots ,S)\) developing the disease of interest is \(p_{s_j}=P(O_{s_j}=1)\), where \(O_{s_j}\) is the binary disease status. There are J participants in each matched set. Let \(y_{s_jt}\) be the relative abundance of an OTU for participant j in matched set s at time point t \((t=1,\dots ,T_{s_j})\). We denote the expected non-zero abundance by \(\mu _{s_jt}=E(y_{s_jt}|y_{s_jt}>0)\), and the probability of presence (or zero-inflation) by \(\pi _{s_jt}=P(y_{s_jt}>0)\), similar to [9]. For a microbiome study matching participants by the disease-associated factors and/or disease status (e.g., DIABIMMUNE, TEDDY), the matched participants are assumed to have comparable but distinct disease risk. Hence, we model the disease status by a logistic mixed effect model with nested random effects. A joint model for the host disease status and microbial trajectory in matched set is
$$\begin{aligned} \text{logit}(p_{s_j}) &= \varvec{u}_{s_j}\varvec{\alpha} + a_{s_j} + b_s \\ \text{logit}(\mu_{s_jt}) &= \varvec{x}^{(1)}_{s_jt}\varvec{\beta}_{11} + \varvec{z}_{s_jt}\varvec{\beta}_{12} + \tilde{z}_{s_jt}(\lambda_r a_{s_j} + \gamma_r b_s) \\ \text{logit}(\pi_{s_jt}) &= \varvec{x}^{(2)}_{s_jt}\varvec{\beta}_{21} + \varvec{z}_{s_jt}\varvec{\beta}_{22} + \tilde{z}_{s_jt}(\lambda_p a_{s_j} + \gamma_p b_s) \end{aligned}$$
The host disease status is determined by a vector of fixed effect covariates \(\varvec{u}_{s_j}\) and the independent nested random effects \(a_{s_j}\), \(b_s\). The non-zero relative abundance \(\mu _{s_j t}\) and presence \(\pi _{s_j t}\) per OTU are predicted by the same random effects rescaled by parameters \(\lambda _r, \lambda _p\) and a vector of clinical or bioinformatics technical covariates \(\varvec{z}_{s_j t}\). To model the unknown correlation between taxa, this OTU's non-zero abundance and presence per time point also depend on the other taxa with relative abundance \(\varvec{x}^{(1)}_{s_j t}\) and presence-absence \(\varvec{x}^{(2)}_{s_j t}\) measured at the same time point, pre-selected by a procedure described below. The two-part submodel of \(y_{s_j t}\) characterizes how the trajectory is affected by subject- and set-level latent factors contributing to disease risk, and how the OTU trajectory interacts with correlated taxa over time. If an OTU is a pseudo biomarker, then its relative abundance (\(y_{s_jt}\)) should be driven by the top-correlated taxa per time point instead of the disease-associated random effect \(a_{s_j}\). On the other hand, the abundance of a true biomarker OTU at each time point is mainly determined by the latent risk of disease onset (\(a_{s_j}, b_{s}\)) and possibly associated with the top-correlated taxa.
We set \(\tilde{z}_{s_jt}=1\) in equation (1) to test intercept, and \(\tilde{z}_{s_jt}=\text {age}\) to test slope. The nested random effects and parameters \(\lambda _r, \lambda _p\) provide flexibility in the modeling of between-subjects and between-sets heterogeneity, as well as model the abundance-presence correlation in each taxon by shared nested random effects instead of assuming independence between the two processes as in [9].
Parameter estimation and hypothesis testing
To account for the sum-to-one restriction on non-zero relative abundance (\(0<\mu _{s_j t}<1\)) and the binarized measurement \(\varvec{I}(y_{s_jt}>0)\) of an OTU, we employ the Zero-Inflated Beta (ZIB) density function [9] to define the matched-set-specific marginal likelihood for parameter estimation. That is, \(L(\theta ;\varvec{y},\varvec{O})=\prod _{s=1}^{S}L_s\), where
$$\begin{aligned} L_s = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g_b(b_s) \prod_{j=1}^{J} l^{(1)}_{s_j}(a_{s_j}, b_s)\, l^{(2)}_{s_j}(a_{s_j}, b_s)\, g_a(a_{s_j})\, da_{s_1} \cdots da_{s_J}\, db_s \end{aligned}$$
$$\begin{aligned} l^{(1)}_{s_j}(a_{s_j}, b_s) &= \prod_{t=1}^{T_{s_j}} \left[ (1-\pi_{s_jt})\varvec{I}(y_{s_jt}=0) + \pi_{s_jt}\varvec{I}(y_{s_jt}>0) f(y_{s_jt} \mid y_{s_jt}>0) \right] \\ l^{(2)}_{s_j}(a_{s_j}, b_s) &= p_{s_j}\varvec{I}(O_{s_j}=1) + (1-p_{s_j})\varvec{I}(O_{s_j}=0) \end{aligned}$$
\(g_a(a_{s_j})\) and \(g_b(b_s)\) are the Gaussian density functions with mean 0 and variance \(\sigma ^2_a\) and \(\sigma ^2_b\), respectively, and \(f(y_{s_jt}|y_{s_jt}>0)\) is the Beta density function with mean \(\mu _{s_j t}\) and overdispersion \(\phi\). In the simulation study, we demonstrated that the robustness and performance of this model do not require the observed relative abundance to be generated from a ZIB distribution.
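For concreteness, the ZIB log-density for a single observation can be written as below; this sketch assumes the Beta part is parameterized by mean and precision (shape1 = mu*phi, shape2 = (1-mu)*phi), which may differ from the internal parameterization used in mtradeR.

```r
# Zero-Inflated Beta log-density for one observation y in [0, 1).
dzib_log <- function(y, pi, mu, phi) {
  if (y == 0) return(log(1 - pi))                               # point mass at zero
  log(pi) + dbeta(y, shape1 = mu * phi, shape2 = (1 - mu) * phi, log = TRUE)
}
```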
The estimate of overdispersion \(\hat{\phi }\) without regularization is severely inflated and also leads to bias in the estimate of other parameters. Hence, we use \(L_2\) (ridge) regularization to control the overdispersion and type I error in hypothesis testing. All the parameters \(\theta\) are estimated by maximizing a penalized marginal likelihood function \(\hat{\theta }=\arg max \tilde{L}(\theta ;\varvec{y},\varvec{O})\), where
$$\begin{aligned} \tilde{L}(\theta; \varvec{y}, \varvec{O}) = \ln L(\theta; \varvec{y}, \varvec{O}) - \rho\, ||\theta||^2_2 \end{aligned}$$
and \(\rho\) is selected by a cross-validation described below.
There is no closed form of the multivariate integral \(L_s\) in equation (2) because of the Beta density in \(l_{s_j}^{(1)} (a_{s_j},b_s)\). Hence, \(L_s\) can be approximated by Gauss-Hermite (GH) quadrature, with details explained in Appendix. We test the association between OTU trajectory and host disease status with null hypothesis \(H_0: \lambda _r= \lambda _p=0\) and a Wald statistic \(W=\frac{\hat{\lambda }^2_r}{SE^2_{\lambda _r}}+\frac{\hat{\lambda }^2_p}{SE^2_{\lambda _p}}\), which follows a Chi-Square distribution \(W\sim \chi ^2(2)\). The false discovery rate (FDR) for multiple testing is corrected by the Benjamini-Hochberg (BH) procedure.
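A sketch of the per-OTU test and the BH correction in R (the estimates below are made up for illustration only):

```r
# Wald statistic combining the two scaling parameters; chi-square with 2 df under H0.
wald_p <- function(lambda_r, se_r, lambda_p, se_p) {
  W <- (lambda_r / se_r)^2 + (lambda_p / se_p)^2
  pchisq(W, df = 2, lower.tail = FALSE)
}

est <- data.frame(lambda_r = c(0.8, 0.1), se_r = c(0.2, 0.3),   # hypothetical estimates
                  lambda_p = c(0.5, 0.05), se_p = c(0.25, 0.4))
pvals <- with(est, wald_p(lambda_r, se_r, lambda_p, se_p))
qvals <- p.adjust(pvals, method = "BH")                         # BH-adjusted q-values
```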
Pre-selection of correlated taxa and tuning parameter selection
For each OTU (\(y_{s_jt}\)) in equation (1), using all the other taxa as covariates is computationally inefficient. Hence, we use a data-driven procedure to pre-select \(\varvec{x}^{(1)}_{s_jt}\) and \(\varvec{x}^{(2)}_{s_jt}\), and then perform post-selection hypothesis testing. The first step screens the taxa correlated with \(y_{s_jt}\) in abundance and presence, individually, using a Bray-Curtis distance below its 0.1 quantile. Our current method uses relative abundance in both pre-selection and modeling, since this method is developed for large-scale microbiome studies and the multi-center technical batch effect can be simply normalized by relative abundance. According to the comparison of dissimilarity metrics on microbiome compositional data [24], we choose Bray-Curtis dissimilarity to pre-select the related taxa. This step may still result in many covariate taxa at the species level in metagenomic data due to high dimensionality. Thus, we employ elastic net regression to further select the taxa with relative abundance \(\varvec{x}^{(1)}_{s_jt}\) associated with \(y_{s_j t}\) or the taxa with presence \(\varvec{x}^{(2)}_{s_jt}\) associated with \(\varvec{I}(y_{s_jt}>0)\), individually. In this pre-selection procedure, we model all the longitudinal metagenomes as independent samples regardless of time points (or age). One may restrict this procedure to a sub-community such as the species or subspecies of certain genera.
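The two-stage screening for the abundance part could be sketched as follows, assuming `abund` is a samples-by-taxa relative-abundance matrix and `j` indexes the focal OTU; the presence part would repeat the same steps on the binarized matrix. The quantile, elastic net mixing weight, and lambda rule below are placeholders, not the exact settings used by JMR.

```r
library(vegan)    # vegdist() for Bray-Curtis dissimilarity
library(glmnet)   # cv.glmnet() for elastic net

d <- as.matrix(vegdist(t(abund), method = "bray"))             # taxon-by-taxon distances
cand <- setdiff(which(d[j, ] <= quantile(d[j, -j], 0.1)), j)   # taxa closest to OTU j

fit <- cv.glmnet(x = abund[, cand, drop = FALSE], y = abund[, j], alpha = 0.5)
keep <- cand[which(as.vector(coef(fit, s = "lambda.min"))[-1] != 0)]  # selected covariate taxa
```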
To reduce the computational burden of cross-validation for a high-dimensional OTU table, we randomly select \(P_0\) OTUs from distinct relative abundance levels to represent the complexity of microbiota composition. The matched sets are divided into 5 folds, each being a validation fold for the model built on the other four (training) folds. The penalized log likelihood in equation (4) is the negative objective function in cross-validation. For each validation fold f and the selected OTU i, the loss function is \(S_{fi}=-\tilde{L}(\hat{\theta }^i_{-f};\varvec{y}^i_{f},\varvec{O}_{f})\), where \(\hat{\theta }^i_{-f}=\arg \max \tilde{L}(\theta ^i;\varvec{y}^i_{-f},\varvec{O}_{-f})\). The optimal \(\rho\) is selected by the 'elbow point' minimizing \(S=\sum _{f=1}^{5}\sum _{i=1}^{P_0}S_{fi}/(5P_0)\).
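The fold-wise search over the ridge penalty might look like the sketch below; `fit_jmr()` and `heldout_loss()` are hypothetical helpers standing in for fitting the penalized likelihood in equation (4) on the training folds and evaluating the loss on the held-out fold.

```r
rho_grid <- c(0.01, 0.05, 0.1, 0.5, 1)                 # candidate penalties (placeholders)
folds <- sample(rep(1:5, length.out = S))              # assign matched sets to 5 folds
cv_loss <- sapply(rho_grid, function(rho) {
  mean(sapply(1:5, function(f) {
    fit <- fit_jmr(rho, sets = which(folds != f))      # hypothetical training call
    heldout_loss(fit, sets = which(folds == f))        # hypothetical validation loss
  }))
})
rho_opt <- rho_grid[which.min(cv_loss)]                # or pick the 'elbow point'
```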
Data generation process for simulation scenario B1
Step 1: Estimate the baseline mean composition (or frequency) of microbiota (\(\bar{\eta }_0\)) and the overdispersion (\(\xi _0=0.04\)) at the starting time point \(t=1\) in TEDDY data by Dirichlet-Multinomial (DM) maximum likelihood estimate (MLE) of the observed counts. Generate the mean frequency of microbiota at the first time point by Dirichlet (DL) distribution: \(\bar{\eta }_{01}\sim DL(\bar{\eta }_0, \xi _0)\).
Step 2: The mean frequency \(\bar{\eta }_{0t}\) at a later time point \(t>1\) is generated by the following shifting procedure: increase the frequency of some OTUs in \(\bar{\eta }_{01}\) (denoted by \(M^{+}_{base}\)) with a sum of \(\Delta _t\) and simultaneously reduce that of other OTUs in \(\bar{\eta }_{01}\) (denoted by \(M^{-}_{base}\)) by \(\Delta _t\). The absolute shift size \(\Delta _t\) represented the age effect on microbiota. This shifting strategy characterized the inherent correlation between \(M^{+}_{base}\) and \(M^{-}_{base}\) because of the simultaneous compositional change in these OTUs. All the OTUs in \(\bar{\eta }_{0t}\) are assigned to either \(M^{+}_{base}\) or \(M^{-}_{base}\) to account for the impact of latent exposures across time points.
Step 3: At each time point, the heterogeneity between matched sets is the overdispersion estimated by DM MLE based on the samples per time point in TEDDY, denoted by \(\xi _t\). The overdispersion at the first time point is \(\xi _1=0.05\) and linearly decreases over time, which mimics the time-dependent overdispersion observed in the infant-age metagenome in TEDDY. We generated a mean frequency for each matched set s at time point t by \(\bar{\eta }_{st}\sim DL(\bar{\eta }_{0t},\xi _t)\). If a set is labeled as 'high-risk', we shifted all the OTUs in \(\bar{\eta }_{st}\) using the procedure in Step 2 with shift size \(\Delta _{st}\), which is a proportion of the maximum shift size, i.e., \(\Delta _{st}=\gamma \Delta ^0_{st}\).
Step 4: The between-subject heterogeneity within each matched set was the median DM MLE of overdispersion per matched set based on the real data, that is \(\xi ^*=0.03\). Hence, we generated the true microbiota composition for a sample collected from a 'low-risk' subject j in set s at time t by \(\bar{\eta }_{s_jt}\sim DL(\bar{\eta }_{st},\xi ^*)\). The shift in \(\bar{\eta }_{s_jt}\) between 'low-risk' and 'high-risk' subjects were described in Results.
Step 5: The library size for each sample is simulated from a Poisson distribution \(N_{s_jt}\sim PS(100000)\), truncated at a minimum of 10000. The raw counts per sample are generated from a Multinomial (MN) distribution \(C_{s_jt}\sim MN(N_{s_jt},\bar{\eta }_{s_jt})\).
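Steps 3 to 5 for a single sample could be sketched as below, where `eta0` stands for the baseline mean composition from Step 1 and the Dirichlet mean/overdispersion pair is re-expressed as a concentration vector mean*(1-xi)/xi, a common re-parameterization that may differ from the one used in TaxaSim.

```r
library(gtools)   # rdirichlet()

rdirichlet_od <- function(mean_comp, xi) as.vector(rdirichlet(1, mean_comp * (1 - xi) / xi))

eta_set  <- rdirichlet_od(eta0, xi = 0.05)           # set-level mean composition (Step 3)
eta_subj <- rdirichlet_od(eta_set, xi = 0.03)        # subject-level true composition (Step 4)
N <- max(rpois(1, 1e5), 10000)                       # library size truncated at 10000 (Step 5)
counts <- rmultinom(1, size = N, prob = eta_subj)    # observed raw counts (Step 5)
```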
The TEDDY Microbiome WGS data that supports the findings of this study have been deposited in NCBI's database of Genotypes and Phenotypes (dbGaP) with the primary accession code phs001443.v1.p1. The R package mtradeR that implements JMR and the simulation pipeline is available at https://github.com/qianli10000/mtradeR.
AIC: Akaike information criterion
FDR: False discovery rate
FPR: False positive rate
FPPR: False or pseudo positive rate
GH: Gauss-Hermite
JMR: Joint model with matching and regularization
LMM: Linear mixed-effect model
OTU: Operational taxonomic unit
TEDDY: The Environmental Determinants of Diabetes in the Young
ZIBR: Zero-inflated beta regression
Stewart CJ, Ajami NJ, O'Brien JL, Hutchinson DS, Smith DP, Wong MC, et al. Temporal development of the gut microbiome in early childhood from the TEDDY study. Nature. 2018;562(7728):583–8.
Vatanen T, Franzosa EA, Schwager R, Tripathi S, Arthur TD, Vehik K, et al. The human gut microbiome in early-onset type 1 diabetes from the TEDDY study. Nature. 2018;562(7728):589–94.
Wang DD, Nguyen LH, Li Y, Yan Y, Ma W, Rinott E, et al. The gut microbiome modulates the protective association between a Mediterranean diet and cardiometabolic disease risk. Nat Med. 2021;27(2):333–43.
Schirmer M, Smeekens SP, Vlamakis H, Jaeger M, Oosting M, Franzosa EA, et al. Linking the human gut microbiome to inflammatory cytokine production capacity. Cell. 2016;167(4):1125–36.
Vatanen T, Kostic AD, d'Hennezel E, Siljander H, Franzosa EA, Yassour M, et al. Variation in microbiome LPS immunogenicity contributes to autoimmunity in humans. Cell. 2016;165(4):842–53.
Anderson MJ. A new method for non-parametric multivariate analysis of variance. Austral Ecol. 2001;26(1):32–46.
Hu YJ, Satten GA. Testing hypotheses about the microbiome using the linear decomposition model (LDM). Bioinformatics. 2020;36(14):4106–15.
Zhu Z, Satten GA, Mitchell C, Hu YJ. Constraining PERMANOVA and LDM to within-set comparisons by projection improves the efficiency of analyses of matched sets of microbiome data. Microbiome. 2021;9(1):1–19.
Chen EZ, Li H. A two-part mixed-effects model for analyzing longitudinal microbiome compositional data. Bioinformatics. 2016;32(17):2611–7.
Zhang X, Yi N. NBZIMM: negative binomial and zero-inflated mixed models, with application to microbiome/metagenomics data analysis. BMC Bioinformatics. 2020;21(1):1–19.
Metwally AA, Yang J, Ascoli C, Dai Y, Finn PW, Perkins DL. MetaLonDA: a flexible R package for identifying time intervals of differentially abundant features in metagenomic longitudinal studies. Microbiome. 2018;6(1):1–12.
Mallick H, Rahnavard A, McIver LJ, Ma S, Zhang Y, Nguyen LH, et al. Multivariable association discovery in population-scale meta-omics studies. PLoS Comput Biol. 2021;17(11): e1009442.
Uusitalo U, Liu X, Yang J, Aronsson CA, Hummel S, Butterworth M, et al. Association of early exposure of probiotics and islet autoimmunity in the TEDDY study. JAMA Pediatr. 2016;170(1):20–8.
Ling W, Zhao N, Plantinga AM, Launer LJ, Fodor AA, Meyer KA, et al. Powerful and robust non-parametric association testing for microbiome data via a zero-inflated quantile approach (ZINQ). Microbiome. 2021;9(1):1–19.
Luna PN, Mansbach JM, Shaw CA. A joint modeling approach for longitudinal microbiome data improves ability to detect microbiome associations with disease. PLoS Comput Biol. 2020;16(12): e1008473.
Hu J, Wang C, Blaser MJ, Li H. Joint modeling of zero-inflated longitudinal proportions and time-to-event data with application to a gut microbiome study. Biometrics. 2021. https://doi.org/10.1111/biom.13515.
Quinn TP, Erb I, Gloor G, Notredame C, Richardson MF, Crowley TM. A field guide for the compositional analysis of any-omics data. GigaScience. 2019;8(9):giz107.
TEDDY Study Group. The Environmental Determinants of Diabetes in the Young (TEDDY) study. Ann N Y Acad Sci. 2008;1150(1):1–13.
Lee HS, Burkhardt BR, McLeod W, Smith S, Eberhard C, Lynch K, et al. Biomarker discovery study design for type 1 diabetes in The Environmental Determinants of Diabetes in the Young (TEDDY) study. Diabetes Metab Res Rev. 2014;30(5):424–34.
Giongo A, Gano KA, Crabb DB, Mukherjee N, Novelo LL, Casella G, et al. Toward defining the autoimmune microbiome for type 1 diabetes. ISME J. 2011;5(1):82–91.
Tetz G, Brown SM, Hao Y, Tetz V. Type 1 diabetes: an association between autoimmunity, the dynamics of gut amyloid-producing E. coli and their phages. Sci Rep. 2019;9(1):1–11.
Han R, Shi P, Zhang AR. Guaranteed Functional Tensor Singular Value Decomposition. arXiv preprint arXiv:2108.04201. 2021.
Li C, Xiao L, Luo S. Joint model for survival and multivariate sparse functional data with application to a study of Alzheimer's Disease. Biometrics. 2021;78(2):435–47.
Weiss S, Van Treuren W, Lozupone C, Faust K, Friedman J, Deng Y, et al. Correlation detection strategies in microbial data sets vary widely in sensitivity and precision. ISME J. 2016;10(7):1669–81.
The TEDDY study is funded by the National Institute of Diabetes and Digestive and Kidney Diseases, National Institute of Allergy and Infectious Diseases, National Institute of Child Health and Human Development, National Institute of Environmental Health Sciences, Centers for Disease Control and Prevention, and JDRF. We thank the TEDDY study data coordinating center at Health Informatics Institute, University of South Florida for data processing and sharing. We thank Suraj Sarvode Mothi for data cleaning support.
The work of QL was in part supported by U24DK097771 from the National Institute of Diabetes, Digestive and Kidney Diseases via the NIDDK Information Network's (dkNET) New Investigator Pilot Program in Bioinformatics. QL and CL are in part supported by P30 Cancer Center Support Grant (CA21765) funded by National Cancer Institute; and the American Lebanese Syrian Associated Charities (ALSAC).
Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, 38105, TN, USA
Qian Li & Cai Li
Health Informatics Institute, University of South Florida, Tampa, 33620, FL, USA
Kendra Vehik & Jeffrey Krischer
Department of Microbiology and Cell Science, University of Florida, Gainesville, 32611, FL, USA
Eric Triplett & Luiz Roesch
Department of Biostatistics and Bioinformatics, Emory University, Atlanta, 30322, GA, USA
Yi-Juan Hu
QL proposed and implemented the model and algorithm, performed the experiments, analyzed the data, and drafted the manuscript. KV conceived the real data analysis, interpreted results, and contributed to manuscript writing. QL and YH designed the experiments. CL contributed to methodology discussion and manuscript writing. ET and LR contributed to manuscript writing. JK supervised the study design, sample collection, data generation and analysis in the TEDDY cohort. The author(s) read and approved the final manuscript.
Correspondence to Qian Li.
Additional file 1:
Appendix. Gauss-Hermite quadrature approximation for marginal likelihood.
Supplementary Table S1. The list of OTUs detected by JMR, JMR-NC, LMM-N, individually, as shown in Figure 7.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Li, Q., Vehik, K., Li, C. et al. A robust and transformation-free joint model with matching and regularization for metagenomic trajectory and disease onset. BMC Genomics 23, 661 (2022). https://doi.org/10.1186/s12864-022-08890-1
Metagenomic trajectory
Compositional data
Joint model
Zero-inflated
Pseudo biomarker
Atiq ur Rehman
MTH251: Set Topology
Topology is an important branch of mathematics that studies all the "qualitative" or "discrete" properties of continuous objects such as manifolds, i.e. all the properties that aren't changed by any continuous transformations except for the singular (infinitely extreme) ones.
In this sense, topology is a vital arbiter in the "discrete vs continuous" wars. The very existence of topology as a discipline shows that "discrete" properties always exist even if you only work with continuous objects. On the other hand, topology treats these features as "derived": they are only some of the properties of deeper objects, which may also have many other, continuous properties. The topological, discrete properties of these objects are just projections or caricatures of the "whole truth".
Objectives of the course
At the end of this course the students will be able to understand the theory of metric spaces and topological spaces. They are expected to learn how to write, in a logical manner, proofs using important theorems and properties of metric spaces and topological spaces, to solve problems using the concepts of topology, and to present their solutions as rigorous proofs written in correct mathematical English. Students will be able to devise, organize and present brief solutions based on definitions and theorems of topology. They are expected not only to grasp the concepts of topology and apply them, but also to continue their overall mathematical development, improving skills such as mathematical writing and the presentation of rigorous logical arguments.
Preliminaries, Metric spaces: Open and closed sets, convergence, completeness.
Continuous and uniformly continuous mappings. Pseudometrics. Fixed point theorem for metric spaces; Topological Spaces. Open bases and sub-bases. Relative topology, Neighborhood system, Limit points, First and second countable spaces. Separable spaces. Products of spaces, Interior, Exterior, Closure and Frontier in product spaces.
Open and closed maps, Continuity and Homeomorphisms, Quotient spaces; Hausdorff spaces, regular and normal spaces, Urysohn's Lemma; Compact spaces, Tychonoff's theorem and locally compact spaces, Compactness for metric spaces; Connected spaces, Components of a space, Totally disconnected spaces, Local connectedness, Path-wise connectedness.
Topics to cover
Topological spaces: Definitions
Define topology on a set.
Define open set.
What is discrete topological space?
What is usual topology on $\mathbb{R}$?
What is indiscrete topology?
What is cofinite topology or $T_1$-topology?
Define closed set.
Write three open sets and three closed sets of the cofinite topology on $\mathbb{Z}$ (a sample answer is given after this list).
Prove that the intersection of two topologies on a set $X$ is again a topology on $X$.
Give an example of two topologies on a set whose union is not a topology.
Define accumulation point (limit point) of a set.
Define derived set.
Find the derived set of $A=\{1,2,3,\ldots,20\}$ under the usual topology on $\mathbb{R}$.
What is the derived set of $\mathbb{Q}$ under the usual topology on $\mathbb{R}$?
Consider the set $A=\left\{1,\frac{1}{2},\frac{1}{3},\ldots \right\}$. Find the derived set of $A$ under the usual topology (suggested answers to these three derived-set questions are given after this list).
Define closure of a set.
Define dense set.
Define interior point, exterior point and boundary point.
Under the usual topology on $\mathbb{R}$, find the interior and closure of the following sets (suggested answers are given after this list):
(i) $A=\mathbb{N}$ (ii) $B=\{1,2,3,\ldots,100\}$ (iii) $C=[-1,1]$
(iv) $D=(0,5]$ (v) $E=\{1,2,3\}\cup[4,5]$ (vi) $F=(3,10)$ (vii) $G=\mathbb{Q}$
Under the discrete topology on $\mathbb{R}$, write interior and closure of the following sets:
(i) $A=\{1,2,\ldots,10\}$ (ii) $B=[0,1)$ (iii) $C=\mathbb{Q}$
Define relative topology.
Let $X=\{a,b,c,d\}$ and $\tau=\{\varphi, X, \{a\}, \{a,c\}\}$. Then find the relative topology of $A=\{c,d\}$ (a worked answer is given after this list).
Let $A$ be a subset of a topological space $X$. Then prove that $A$ is closed in $X$ iff $A'\subset A$.
Let $A$ be a subset of a topological space. Then prove that $A\cup A'$ is closed.
Let $A$ be a subset of a topological space. Then prove that $\overline{A}=A\cup A'$ (a proof sketch is given after this list).
Let $A$ and $B$ be subsets of a topological space. Then prove that $\overline{A\cup B}=\overline{A}\cup \overline{B}$.
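Worked examples. The sketches below are suggested solutions to some of the questions above; they are added for orientation only and are not part of the original course page.

Cofinite topology on $\mathbb{Z}$ (three open and three closed sets): a set is open in the cofinite topology iff it is empty or has finite complement, so $\mathbb{Z}$, $\mathbb{Z}\setminus\{0\}$ and $\mathbb{Z}\setminus\{1,2\}$ are open; the closed sets are exactly $\mathbb{Z}$ and the finite subsets, so $\varphi$, $\{0\}$ and $\{1,2,3\}$ are closed.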
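Derived sets (a sketch of answers to the three derived-set questions, with $\mathbb{R}$ carrying the usual topology): for $A=\{1,2,3,\ldots,20\}$ every real number has a neighbourhood containing at most one point of $A$, so no point is a limit point and $A'=\varphi$; every open interval contains rationals different from its centre, so $\mathbb{Q}'=\mathbb{R}$; for $A=\left\{1,\frac{1}{2},\frac{1}{3},\ldots\right\}$ the only point whose every neighbourhood meets $A$ in a point other than itself is $0$, so $A'=\{0\}$.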
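Interior and closure (suggested answers, writing $\operatorname{int}(S)$ for the interior of $S$): under the usual topology on $\mathbb{R}$, (i) $\operatorname{int}(\mathbb{N})=\varphi$, $\overline{\mathbb{N}}=\mathbb{N}$; (ii) $\operatorname{int}(B)=\varphi$, $\overline{B}=B$; (iii) $\operatorname{int}(C)=(-1,1)$, $\overline{C}=[-1,1]$; (iv) $\operatorname{int}(D)=(0,5)$, $\overline{D}=[0,5]$; (v) $\operatorname{int}(E)=(4,5)$, $\overline{E}=E$; (vi) $\operatorname{int}(F)=(3,10)$, $\overline{F}=[3,10]$; (vii) $\operatorname{int}(\mathbb{Q})=\varphi$, $\overline{\mathbb{Q}}=\mathbb{R}$. Under the discrete topology every subset of $\mathbb{R}$ is both open and closed, so for each of the three sets in that question the interior and the closure are the set itself.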
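Relative topology (a worked answer for the $X$ and $\tau$ given in the question): intersecting each member of $\tau$ with $A=\{c,d\}$ gives $\varphi\cap A=\varphi$, $X\cap A=A$, $\{a\}\cap A=\varphi$ and $\{a,c\}\cap A=\{c\}$, so the relative topology is $\tau_A=\{\varphi,\{c\},A\}$.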
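Proof sketch for $\overline{A}=A\cup A'$ (one possible argument, assuming the closure $\overline{A}$ is defined as the intersection of all closed sets containing $A$): by the previous item $A\cup A'$ is a closed set containing $A$, hence $\overline{A}\subseteq A\cup A'$. Conversely $A\subseteq\overline{A}$, and since $\overline{A}$ is a closed set containing $A$, it contains every limit point of $A$, so $A'\subseteq\overline{A}$; together this gives $A\cup A'\subseteq\overline{A}$, and the two inclusions prove the equality.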
List of Problems taken from [2]:
Page 73-82: Problems 1, 3, 7, 10, 11, 12, 13, 14, 15, 17, 18, 19, 21, 23, 25, 26, 27, 28, 30, 34, 43
Assignment 01
Assignment 03
Introductory Presentation
Topology Notes
Recommended books
James Munkres, Topology (2nd Edition), Prentice Hall, 2000.
Sheldon Davis, Topology, McGraw-Hill Science/Engineering/Math, 2004.
Seymour Lipschutz, Schaum's Outline of General Topology, McGraw-Hill, 2011.
G.F. Simmons, Introduction to Topology and Modern Analysis, Tata McGraw-Hill, 2004.
Stephen Willard, General Topology, Dover Publications, 2004.
M.A. Armstrong, Basic Topology, Springer, 2010.