Will the Federal Reserve ever adopt a policy regime that implements nominal GDP targeting or nominal wage targeting?
Metaculus
For background, policies of "inflation targeting" have dominated in developed-country central banks for the last three or four decades, implicitly or explicitly. Under this policy, central banks explicitly aim for some target rate of inflation (or a target range) each year.
For example: the ECB, as of 2021, has a symmetric, medium-term 2% inflation target; the Fed has since the 1990s been thought to operate under an implicit inflation target of 2%, which it formalized in a 2012 statement.
More recently, the Fed has adopted a policy which has become known as "flexible average inflation targeting", under which it has stated or implied a willingness to let inflation run somewhat above 2% if inflation had been below 2% in the recent past, and vice versa for recent inflation overshoots. Economic theory predicts that this sort of "make-up policy", compared to the prior policy of implicitly "letting bygones be bygones", is very beneficial for avoiding recessions – particularly at the zero lower bound (ZLB) – by anchoring expectations for the future price level.
Some economists argue that the Fed should make a more significant change to its policy framework, and focus on nominal GDP growth rather than inflation. Nominal GDP growth is equal to (by definition) the sum of inflation and real GDP growth: $$ \begin{aligned} NGDP &= \underbrace{P}_{\text{price level}} \cdot \underbrace{Y}_{\text{real GDP}} \\ \Longrightarrow \quad \Delta NGDP &= \underbrace{\Delta P}_{\text{inflation}} + \underbrace{\Delta Y}_{\text{real GDP growth}} \end{aligned} $$ For example, the Fed might target 4% NGDP growth each year, expecting on average 2% growth in real GDP and therefore 2% inflation on average.
Under NGDP targeting, the central bank would allow inflation to rise when real growth is low (and vice versa). Many economists argue that such a policy would better prevent economic fluctuations (particularly at the ZLB) but also more generally. For more on NGDP targeting, see: [1], [2], [3], [4].
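As a concrete illustration of this mechanism (an illustrative calculation only, not any actual or proposed Fed rule), a 4% NGDP growth target lets inflation absorb fluctuations in real growth: $$ \begin{aligned} \text{normal year:} \quad 4\% &= \underbrace{2\%}_{\text{inflation}} + \underbrace{2\%}_{\text{real growth}} \\ \text{weak year:} \quad 4\% &= \underbrace{3.5\%}_{\text{inflation}} + \underbrace{0.5\%}_{\text{real growth}} \\ \text{strong year:} \quad 4\% &= \underbrace{1\%}_{\text{inflation}} + \underbrace{3\%}_{\text{real growth}} \end{aligned} $$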
A similar, but slightly different alternative policy regime would target stable growth in nominal wages. Under nominal wage targeting, the central bank would aim for stable growth in an index of nominal wages in the economy. For example, the Fed might target 3.5% annual growth in an appropriate index of nominal wages, reflecting expectations for on average 1.5% growth in labor productivity and 2% inflation. For more on nominal wage targeting, see: [1], [2], [3].
The question resolves as yes on the date that the Federal Reserve announces a policy regime that implements NGDP targeting or nominal wage targeting; or as no on January 1, 2050, whichever comes first. Resolution will be based on (e.g.) the press releases available on the Federal Reserve's website.
[fine-print] The question resolves to yes for either NGDP rate targeting or NGDP level targeting, and similarly for nominal wage targeting. NGDP targeting is also known as nominal income targeting. The resolution criterion requires an initial implementation, not merely an announcement of such a policy. [/fine-print]
Non-photorealistic neural sketching
A case study on frontal-face images
Francisco de Assis Pereira Vasconcelos de Arruda1,
José Eustáquio Rangel de Queiroz1 &
Herman Martins Gomes1
We present and evaluate a neural network-based technique to automatically enable NPR renderings from digital face images, which resemble semi-detailed sketches. The technique has been experimentally evaluated and compared with traditional approaches to edge detection (Canny and Difference of Gaussians, or DoG) and with a more recent variant, specifically designed for stylization purposes (Flow Difference of Gaussians, or FDoG). An objective evaluation showed, after an ANOVA analysis and a Tukey t-test, that the proposed approach was equivalent to the FDoG technique and superior to the DoG. A subjective experiment involving the opinion of human observers proved to be complementary to the objective analysis.
Non-photorealistic rendering (NPR), or stylized rendering, is an important and growing area of Computer Graphics and Image Processing. It is mainly concerned with the generation of images or videos that have an expressive appeal and convey visual and emotional characteristics of traditional media, e.g., pencil, pen-and-ink, charcoal and watercolor drawing. Moreover, NPR techniques are usually meant to provide expressive, flexible visual representations, which are characterized by the use of randomness and arbitrary interpretation of the features being drawn, rather than total adherence to realistic properties [2].
Within the above context, the goal of this paper is to present a neural network-based technique to automatically enable NPR renderings from digital images, which resemble semi-detailed sketches. It has been experimentally evaluated in the context of human frontal-face images. Human faces have been the subject of extensive study in diverse fields, such as digital image processing (DIP), computer vision (CV), pattern recognition (PR), and computer graphics (CG), for a wide variety of tasks ranging from detection [71], recognition [24], tracking [12], and animation [63], to NPR—portrait and sketch [11].
The following aspects of the sketch generation have been addressed in this work: (i) pre-processing (color processing and blurring) of face images; (ii) a neural network approach for edge detection, focusing on sketch generation; (iii) post-processing for improving final results; (iv) experimental validation of produced renderings by objective measures (PSNR, FoM, SSIM); and (v) validation of the results by a subjective evaluation, based on a human voting scheme.
The following section gives an overview on related work. Next, in Sect. 3, the proposed approach is presented. Results and evaluation are discussed in Sect. 4. Finally, in Sect. 5 some final considerations and suggestions for future work are provided.
Interest in NPR from the scientific community grew noticeably in the 1990s. Some examples that support this observation are: (i) the creation of a dedicated NPR section at the Association for Computing Machinery's Special Interest Group on Graphics and Interactive Techniques (ACM-SIGGRAPH) event in 1998; (ii) the creation of a dedicated NPR section at the 1999 Eurographics Conference; and especially (iii) the emergence of the first symposium dedicated exclusively to NPR, the Non-Photorealistic Animation and Rendering (NPAR) symposium, in 2000 [34]. Another milestone is the publication of two textbooks that address the topic, authored by Gooch and Gooch [19] and Strothotte and Schlechtweg [55].
Several methods for simulating artistic styles were reported in the literature, such as ink illustrations [51, 68], technical illustrations [18], watercolor painting [6], illustrations with graphite [54], impressionist painting [37], cartoons [62], stained glass [42], image abstraction [33, 72], mosaics [14], among others.
In the following subsection, a review focused on generation of sketches is provided.
NPR sketch generation
Edge detection is arguably one of the most important operations in low-level computer vision, with a large number of available techniques, and it is at the forefront of CV, DIP and PR for object detection. Since edges carry a major part of the image information [50], a good understanding of state-of-the-art edge detection techniques is essential. In the context of NPR, edge detection is an important stage in sketch generation.
Additionally, edge detection stands out as an auxiliary process for feature extraction [65], segmentation and recognition [28], and image analysis [9]. It is widely used in various CV and DIP algorithms, as it enables the grouping of pixels that divide the image into regions [9]. Such a division is the basis for pre-processing steps, and the edge map serves as input to other algorithms, such as object detection and object recognition.
The creation of sketches and stroke-based illustrations using 3D scene information, instead of using 2D information, was also addressed by several authors in recent years (e.g. [25, 30, 32, 36]). However, the focus of the presented research is restricted to the generation of 2D digital NPR images.
Some NPR techniques use edges when dealing with region outlines, and the detected edges enhance the final look of the rendering. For example, Salisbury et al. [51] developed a system for pen-and-ink illustration that allows the user to use the Canny edge detector to perform the tedious work of generating many pen strokes necessary to form illustrations. Litwinowicz [37] used the Canny edge detector to constrain the user drawn strokes to the object boundaries, thus preserving object silhouettes and fine details of the scenes.
Markosian et al. [38], citing architects who overlay sketch strokes on printed architectural models before presenting them to customers, gave special attention to sketches as a means of avoiding an impression that the project is complete. Sayeed and Howard [52] noted that the primary objective of such a representation is to outline objects, using a lower or higher level of detail, allowing recognition using only object boundaries.
DeCarlo and Santella [13] proposed a stylization approach based on eye-tracking data, a robust variant of the Canny edge detector [40], and mean-shift image segmentation. A visual attention map is used to infer information about high-impact areas that are used to create stylized rendering with emphasis on the edges used in the composition of the final stylized image. According to the authors, despite some limitations of the edge detector, the generated edges added expressiveness to the final result.
Tang and Wang [57] proposed a law enforcement system for recognition and indexing of faces using an approach based on face sketches. The sketch generated by their system resembles the real picture, does not portray hair in the sketch composition, and uses elements of shape and texture from the original image. The authors validated the system through experiments using a subjective voting process, as well as numerical experiments, on a large set of images (more than 300 images).
Tresset and Fol Leymarie [59] studied the workflow adopted by an artist to start the drawing process, which begins with observation, includes the identification of projections, drawing lines and structuring surfaces, and proceeds to the composition of the final image. According to the authors, emphasis was given to the lines of the initial draft, which constitute the key element for the composition of the artwork. A computational system was developed for automatic generation of sketches, focusing on human faces. Some implementation details were highlighted, such as adopting a color space better suited to making sketches, with fewer false positives for face and skin location, and processing the input image with a color constancy algorithm [4], such as histogram expansion or retinex.
Kang, Lee and Chui [31] proposed the Flow-based Difference of Gaussians (FDoG) algorithm, which automatically generates line representations of digital images in a non-photorealistic manner. The technique consists of two parts: (a) computation of a smooth, feature-preserving local edge flow (called Edge Tangent Flow—ETF); and (b) edge detection using a flow-based difference of Gaussians. The authors emphasized that line drawing is the simplest and oldest form of visual communication. They also noted that several NPR techniques use lines as a basis for creating other NPR styles, but that little research focuses primarily on line drawing. The work of Kang, Lee and Chui [31] became an established NPR approach to line drawing generation, and is therefore used in the objective and subjective comparisons of this work, referred to as the FDoG technique.
Xu et al. [70] proposed a method for edge-preserving smoothing that can be used to generate image abstractions. Additionally, this method can be used for edge detection, detail enhancement and clip-art restoration. The method tries to globally maintain the most prominent set of edges by increasing the steepness of salient intensity transitions while attenuating intensity changes among neighboring pixels. A drawback of the method is over-sharpening of image regions where there is large illumination variation.
According to Gonzalez and Woods [17], image segmentation plays a crucial role in DIP and CV, especially for nontrivial images, and, in general, segmentation algorithms are based either on the discontinuity or on the similarity of image gray levels. In segmentation based on discontinuities, there are three types of features of interest: isolated points, lines and edges. The authors define edges as significant gray-level variations in an image. Edge images are, in turn, formed by sets of connected edge pixels. Edge detectors are algorithms that aim to identify discontinuities in image gray levels.
A remarkable aspect of edge detectors is their ability to extract relevant boundaries between objects. Such a capacity is the leading inspiration for this research, since extracting salient facial features such as the eyes, mouth and nose makes it possible to automatically generate NPR representations of faces. In an earlier work [1], we proposed a method based on a multiscale edge detector and sub-image homomorphic filtering, with parameters optimized by means of a genetic algorithm. An experiment using image measures and a subjective evaluation compared the results of that method with those of a plain Canny edge detector. Some benefits of the method were highlighted, but no inferential statistical analysis was presented at that time.
A contribution of this research is a technique that deals specifically with the generation of sketches using machine learning, especially focusing on human faces, given the scarcity of such techniques in the reviewed literature. Moreover, the proposed method presents an original and promising approach to the problem of NPR sketching and reports inferential statistical analyses of objective and subjective comparisons between the outputs of different methods, which we believe is a relevant contribution to the NPR area.
The following subsection includes some related work involving the use of neural networks, a widely disseminated machine learning technique.
Edge detection using neural networks
Ng, Ong and Noor [43] highlighted issues with classical edge detectors (e.g. rounded edges) and proposed a neural edge detector, with a hybrid approach (partially supervised and partially unsupervised) using a MLP network with input data based on 3×3 image samples, trained with only five images. The detector obtained 1-pixel-width edges when using the supervised input given by a Canny edge detector. Although lacking a more systematic and objective evaluation, the results indicated the validity of a learning-based edge detector.
Suzuki, Horiba and Sugie [56] proposed an edge detector using supervised neural networks for noisy images. Experiments and visual comparison showed that the neural detector generated well connected lines with good noise suppression. However, details regarding the used network architecture were not provided.
Rajab, Woolfson and Morgan [48] proposed an approach for segmenting regions of injured human skin. Their method was based on edge detection using neural networks for pattern recognition of borders, ultimately indicating skin cancer in humans. The training set was made up exclusively of synthetic samples of 3×3 pixels size. After a quantitative evaluation using both synthetic and real-life examples of skin lesions, the technique developed by the authors obtained a better performance for skin-cancer diagnosis when used in a specific kind of problem, namely lesions with different border irregularities.
Becerikli and Demiray [5] have also investigated a neural network edge detector. A Laplacian edge detector was used for supervised training where the training set contained images and edge images corrupted with noise. The authors reported that any classical method of edge detection can be adapted to serve as a basis for training the classifier. The results presented by the authors showed a better visual quality when compared to a Laplacian edge detector, both in gray-level and colored images. However, no experimental objective or systematic evaluation was presented.
Several other authors (e.g. [10, 21, 44, 58, 64]) also used machine learning to detect edges. As previously stated, a lack of specific techniques for generating non-photorealistic representations using machine learning, especially neural networks, was noticed.
It is worth noting that, besides the problems reported in the reviewed literature dealing with sketches and NPR techniques, some aspects found in the works that use machine learning to detect edges also inspired the approach proposed in this research, such as the use of: (i) noise-corrupted training samples; (ii) fixed-size training samples; (iii) estimation of the contrast of a kernel region classified by a neural network; (iv) synthetic samples for training; and (v) several neural network paradigms (multi-layer perceptron, self-organizing maps, Hopfield networks), among others. In the following section, the proposed approach is detailed; it was inspired by the works reviewed above, such as the synthetic edge samples used by Rajab, Woolfson and Morgan [48], and the noise-corrupted training samples used in the work of Becerikli and Demiray [5].
The main modules of the proposed approach are shown in Fig. 1. The three main steps, indicated by the numbered boxes in the figure, are:
color pre-processing and smoothing;
neural edge detection; and
post-processing.
Main modules of the proposed approach
Step (1) is subdivided into three pre-processing stages, namely: (i) color correction by color constancy processing; (ii) color space conversion to a space more suitable for sketching; and (iii) smoothing. In step (2), the image is spatially filtered using a 5×5 pixel kernel by means of a multi-layer perceptron neural network, trained to classify pixels as edges or non-edges. The result is an edge map that is input to the final step (3), composed of two post-processing stages: (i) brightness and contrast adjustment; and (ii) histogram transformation by gamma correction. The output of this process is the non-photorealistic image sketch. Figure 2 illustrates the main steps of the proposed approach for sketch generation.
Intermediate steps of the proposed approach: (A) Smoothed Y channel, (B) Neural network result, (C) Brightness and contrast adjustment, and (D) Histogram transformation by gamma correction
The specific goals of the research reported in this paper are threefold: (i) defining an approach to generate non-photorealistic sketches, validated among a finite set of techniques and parameters; (ii) defining pre- and post-processing algorithms that achieve the best results for a limited set of face images under evaluation; and (iii) experimentally evaluating the proposed approach. In the following subsections we give more details of the steps of the proposed approach, which are aligned with the first two specific goals above, whereas the experimental evaluation (iii) is described in Sect. 4.
The pre-processing steps were inspired by the work done and detailed by Tresset and Fol Leymarie [59], as discussed in Sect. 2.1, namely color space transformation and the color constancy algorithm processing. This research was motivated by the scarcity of research focused primarily on line drawings, edge-like abstractions, and related techniques in the NPR area. A neural network approach to such a problem presented an interesting topic for investigation. Post-processing is done to improve visualization of the resulting image.
Pre-processing is aimed at improving the image for the edge detection step: correcting the image colors to remove illumination artifacts, converting the color space to one better suited to sketch generation, and performing smoothing for detail suppression. Additional information is presented next.
Scene illumination changes may be a problem for image processing algorithms, because they may introduce false edges caused by shadows and varying illumination conditions. These problems motivated the use of illumination compensation algorithms to improve the quality of the generated sketch, a feature also highlighted by Tresset and Fol Leymarie [59].
For chromatic correction, experiments were performed using the following algorithms: two color constancy algorithms (Retinex and GrayWorld) and a linear histogram stretch; these are explained next.
The retinex theory assumes that the human visual system consists of three retino-cortical systems [49], responsible for the absorption of low, medium and high frequencies of the visible spectrum. Each system forms a different image, determining the relative brightness of different regions of the scene. According to Land and McCann [35], the borders between adjacent areas are actively involved in color perception. The ratio of brightness between two regions was chosen as a factor that describes the relationship between them. Therefore, if two areas have very different brightness, the ratio deviates from 1, and if they have close luminosities, the ratio approaches 1. If the luminosity ratio is measured in various image regions, the dominant color can be estimated, thus allowing an equalization of the overall scene brightness.
Depending on the parameters and the type of input image, the retinex algorithm can provide sharpening, lighting compensation, shadows mitigation, improved color image stability or compression of dynamic range [53]. Several versions of the retinex algorithm have been proposed over time: Single Scale Retinex (SSR), Multi-Scale Retinex (MSR), and Multi-Scale Retinex with Color Restoration (MSRCR) [3]. In this paper, we employed the MSR version.
The gray world hypothesis (GrayWorld assumption) was proposed by Buschsbaum (1980). The hypothesis that the illuminant can be estimated by computing an average value of the light received as a stimulus has a long history, having also been proposed by Edwin Land [8]. Buschsbaum [8] was one of the first to formalize this hypothesis, considering that, on average, the world is gray, i.e., assuming an image with enough variation in color, the average values of the R, G and B components must tend to a common gray level. Applying the gray world hypothesis to an image is equivalent to forcing a common average value for each of the R, G and B channels, which mitigates the effects of unfavorable lighting conditions.
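The paper does not include source code for this stage; the following is a minimal C++/OpenCV sketch of a gray-world correction, assuming an 8-bit BGR input image. The function name and the per-channel gain formulation are illustrative, not the authors' implementation.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Minimal gray-world color constancy sketch (illustrative, not the authors' code).
// Each channel is rescaled so that its mean matches the global mean of the three channels.
cv::Mat grayWorldCorrection(const cv::Mat& bgr)
{
    CV_Assert(bgr.type() == CV_8UC3);
    cv::Scalar means = cv::mean(bgr);                       // per-channel means (B, G, R)
    double gray = (means[0] + means[1] + means[2]) / 3.0;   // common gray level

    std::vector<cv::Mat> channels;
    cv::split(bgr, channels);
    for (int c = 0; c < 3; ++c) {
        double gain = (means[c] > 0.0) ? gray / means[c] : 1.0;
        channels[c].convertTo(channels[c], CV_8U, gain);    // scale and saturate to [0,255]
    }
    cv::Mat corrected;
    cv::merge(channels, corrected);
    return corrected;
}
```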
Contrast expansion, or linear histogram expansion, is a linear point operation that expands the range of values of an image histogram to the full range of available values [7]. In the method proposed in this paper, the contrast expansion enhances image details, improving the dynamic range of overexposed (too bright) or underexposed (too dark) images.
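Likewise, a linear histogram stretch can be sketched in a single OpenCV call (illustrative only; the authors do not specify their implementation):

```cpp
#include <opencv2/opencv.hpp>

// Linear histogram stretch sketch (illustrative): remaps the intensity range of a
// single-channel 8-bit image to the full [0,255] interval.
cv::Mat linearStretch(const cv::Mat& gray)
{
    cv::Mat stretched;
    cv::normalize(gray, stretched, 0, 255, cv::NORM_MINMAX, CV_8U);
    return stretched;
}
```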
In this work, the approach that yielded the best results was the one that uses the GrayWorld algorithm, which is supported by the evaluation described in Sect. 4. After the color correction operation, the input image is submitted to a color space conversion as explained next.
The purpose of a color space is to facilitate the specification of color, standardizing its representation [17]. Several color spaces have been proposed in recent years [45], each one serving a specific purpose. In the luminance–chrominance models (e.g. YIQ, YUV, YCbCr), the image information is divided in a luminance component (brightness, present in the Y component) and two chrominance components (color or tone). The HSV (Hue, Saturation and Value) color space is a model that is related to the human perception of color [7] and presents a more intuitive representation of color than the RGB model. The value component provides the brightness of colors in terms of how much light is reflected or emitted by an object [15]. Other options related to human perception are the CIE LUV and CIE Lab color spaces, which are an attempt to produce a perceptually uniform model in the sense that the distance between two colors in the color space is more closely correlated with the perceptual color distance [7].
Two color spaces were studied and evaluated in this research: (i) HSV, and (ii) YCbCr. The investigation of alternative color spaces is left as future work. The sketch generation was performed using only the brightness information of the image (the V channel in the HSV model and the Y channel in the YCbCr model). The use of luminance-only information has been adopted by several other sketch generation approaches (e.g. [31, 41, 57, 59]). After the evaluation procedure described in Sect. 4, the Y channel of the YCbCr color space showed better results for sketch generation.
A smoothing filter is used, in this research, to reduce details of the input image and suppress noise. Two techniques were investigated and tested: (i) Gaussian smoothing [17]; and (ii) median filtering [17]. Although the median filter is well known to be edge preserving, the technique that showed better results was the Gaussian smoothing with σ=2, as described in Sect. 4.
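A sketch of the remaining pre-processing stages described above (conversion to YCbCr, extraction of the Y channel, and Gaussian smoothing with σ=2), again as an OpenCV illustration rather than the authors' code; the 9×9 kernel size is an assumption, since only σ is reported.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Pre-processing sketch: convert to YCrCb, keep the luminance (Y) channel, and smooth it
// with a Gaussian of sigma = 2 (the 9x9 kernel size is an assumption).
cv::Mat preprocessLuminance(const cv::Mat& bgrCorrected)
{
    cv::Mat ycrcb;
    cv::cvtColor(bgrCorrected, ycrcb, cv::COLOR_BGR2YCrCb);

    std::vector<cv::Mat> planes;
    cv::split(ycrcb, planes);                 // planes[0] is the Y (luminance) channel

    cv::Mat smoothed;
    cv::GaussianBlur(planes[0], smoothed, cv::Size(9, 9), 2.0);
    return smoothed;
}
```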
Edge detection
After the pre-processing step described previously, an edge detection step is performed by means of a multi-layer perceptron (MLP) [22] trained with synthetic samples of edges and non-edges, using the supervised learning backpropagation algorithm, with randomly initialized weights and 10 trials for each trained network. The MLP model was chosen after a comparison with two other unsupervised methods, namely Hopfield networks and Kohonen Self-Organizing Maps (SOM). In the comparison, the unsupervised methods presented some issues, such as poor edge and non-edge grouping, over-segmentation (high false positive rate) and discontinuous edge segments. The neural network used in the proposed approach is composed of three layers: input, hidden and output. More details are given in the following paragraphs.
The neural network has two output neurons (xedge, x¬edge). For non-edge training samples, xedge=0 and x¬edge=1, whereas for edge samples, x¬edge=0 and xedge was trained with an estimate of the edge's local contrast c (with 1 being the maximum contrast). This value was obtained using the same method as Gomes and Fisher [16]:
where Lmax and Lmin are the maximum and minimum pixel intensities of a given image region. For training purposes, the contrast calculation is performed only on the edge samples. The use of a contrast estimate, rather than pure image binarization, for the neural network training helped to produce a sketch-like effect rather than a pure binary edge map.
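The contrast expression of Gomes and Fisher [16] is not reproduced in this text, so the sketch below uses a Michelson-style ratio, (Lmax − Lmin)/(Lmax + Lmin), as a plausible stand-in; this choice is an assumption, not the formula from [16].

```cpp
#include <opencv2/opencv.hpp>

// Training-target sketch (illustrative): for an edge sample, the "edge" output neuron is
// trained with a local contrast estimate in [0,1]. The exact formula of [16] is not given
// here, so a Michelson-style contrast is assumed as a stand-in.
double localContrastEstimate(const cv::Mat& patch)    // 5x5, 8-bit, single channel
{
    double lMin = 0.0, lMax = 0.0;
    cv::minMaxLoc(patch, &lMin, &lMax);
    if (lMax + lMin == 0.0) return 0.0;                // avoid division by zero on black patches
    return (lMax - lMin) / (lMax + lMin);              // 1 corresponds to maximum contrast
}
```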
The pixel values of a fixed image neighborhood of 5×5 pixels are the input to the proposed approach. In order to reduce the size of the input space and to select discriminative features, a Principal Component Analysis (PCA) [29] was performed, which reduced the input space from 25 to 11 dimensions (the size of the neural network input layer). The PCA steps are described next; a code sketch follows the list.
Transform all samples into a vector: a 5×5 sample matrix becomes a 1×25 vector (obtaining XT).
Create a matrix Xedge from the concatenation of all 1×25 input sample vectors.
Obtain the mean vector: the columns of the matrix from step 2 are averaged, producing a 1×25 mean vector, with each cell containing the mean value of the corresponding column (obtaining \(\bar{\mathbf{X}}\)).
Subtract the mean vector from each sample to obtain a vector A, which has zero mean: \(\mathbf{A} = \mathbf{X}_{\mathrm{T}} - \bar{\mathbf{X}}\).
Find the covariance matrix C of size wh×wh, computed as \(\mathbf{C} = \mathbf{A}\mathbf{A}^{\mathrm{T}}(N-1)^{-1}\), where w=5 and h=5, given the sample size (5×5).
Calculate the normalized eigenvalues and eigenvectors of the covariance matrix C: \(\mathbf{C}=\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{\mathrm{T}}\), where the matrix U contains the eigenvectors [u1,…,u_D] and Λ is a diagonal matrix with the associated eigenvalues [λ1,…,λ_D]. The eigenvalues are ordered in ascending order of magnitude. The eigenvector associated with the highest eigenvalue is discarded, because this eigenvector typically does not contain discriminative characteristics.
Reduce the dimensionality: this reduction is performed by choosing an eigenvalue pivot; eigenvectors with indices higher than the pivot are discarded. The pivot, namely the retention factor for the PCA, was chosen as the point representing 99% of the data variance.
Create a projection subspace H by concatenating edge and non-edge samples: \(\mathbf{H}=[\begin{array}{l@{\ }l}\mathbf{U}^{\mathrm{edge}} &\mathbf{U}^{\neg \mathrm{edge}}\end{array}]\), where H is of size 25×11. The value 25 comes from the sample size (5×5), and 11 is the amount of retained eigenvectors (seven edges and four non-edges).
Project the input vector X into the H subspace, yielding a projection vector p of 11 positions: p=X×H.
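As an illustration of the projection mechanics (the code sketch mentioned above), the steps can be approximated with OpenCV's cv::PCA. Note that this is a simplification: the paper builds separate edge and non-edge subspaces (7 + 4 eigenvectors) and concatenates them, whereas the sketch below retains 11 components from a single PCA over placeholder data.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Each row is one flattened 5x5 sample (25 values). In the real pipeline this matrix
    // would hold the edge and non-edge training patches; random values are used here only
    // so the sketch is self-contained.
    cv::Mat trainingSamples(1000, 25, CV_32F);
    cv::randu(trainingSamples, cv::Scalar(0), cv::Scalar(255));

    const int kComponents = 11;                        // matches the paper's 25 -> 11 reduction
    cv::PCA pca(trainingSamples, cv::Mat(), cv::PCA::DATA_AS_ROW, kComponents);

    // Project a single 1x25 sample into the 11-dimensional subspace used by the MLP.
    cv::Mat sample = trainingSamples.row(0);
    cv::Mat projected = pca.project(sample);           // 1x11 feature vector
    CV_Assert(projected.cols == kComponents);
    return 0;
}
```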
The edge and non-edge samples were divided into training, validation, and testing data. The distribution of each of these sets is presented in Table 1.
Table 1 Samples distribution
Four edge orientations were used in the training samples: 0∘, 45∘, 90∘ and 135∘. Smoothed versions of the synthetic edge samples were used to increase the training set variability. Non-edge samples were defined as random values in the interval [0,255]; a variation of no more than 20% between the highest and lowest pixel values was allowed when generating non-edge patterns. Some of the training samples are shown in Fig. 3.
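A sketch of how such synthetic 5×5 samples could be generated; the step-edge construction, the gray levels, and the way the 20% bound is enforced are assumptions made for illustration, not the authors' exact procedure.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdlib>

// Synthetic 5x5 training samples (illustrative). Edge samples are ideal step edges at
// 0/45/90/135 degrees; non-edge samples are random values whose spread stays within ~20%.
cv::Mat makeEdgeSample(int orientationDeg, uchar dark = 30, uchar bright = 220)
{
    cv::Mat s(5, 5, CV_8U, cv::Scalar(dark));
    for (int y = 0; y < 5; ++y)
        for (int x = 0; x < 5; ++x) {
            bool brightSide = false;
            switch (orientationDeg) {
                case 0:   brightSide = (y >= 3); break;        // horizontal edge
                case 90:  brightSide = (x >= 3); break;        // vertical edge
                case 45:  brightSide = (x + y >= 5); break;    // diagonal edge
                case 135: brightSide = (x - y >= 1); break;    // anti-diagonal edge
            }
            if (brightSide) s.at<uchar>(y, x) = bright;
        }
    return s;
}

cv::Mat makeNonEdgeSample()
{
    // Random values whose max/min differ by no more than about 20%, as described in the text.
    int base = 50 + std::rand() % 150;
    int spread = std::max(static_cast<int>(0.2 * base), 1);
    cv::Mat s(5, 5, CV_8U);
    cv::randu(s, cv::Scalar(base), cv::Scalar(base + spread));
    return s;
}
```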
Synthetic training samples
Finally, in order to define the hidden layer size, the number of hidden neurons was varied in the range [1,50], and the training process was repeated 10 times; thus 10 different neural networks were obtained by randomly initializing the connection weights. The network that presented the highest classification rate on the test set was selected (98.5% hit rate, two neurons in the hidden layer) to compose the proposed approach. The neural network training was implemented in MathWorks MATLAB®, and the trained network was applied through a custom-made C++ implementation, for fast processing times.
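The training itself was done in MATLAB; purely as an illustration, a comparable hidden-layer sweep could be written with OpenCV's ml module as below. The layer sizes, learning parameters, and the selection criterion used here are assumptions, not the paper's settings.

```cpp
#include <opencv2/opencv.hpp>
#include <limits>

// Hidden-layer size sweep sketch (illustrative; the authors trained in MATLAB).
// 'features' is an N x 11 CV_32F matrix of PCA-projected samples and 'targets' is an
// N x 2 CV_32F matrix with the (x_edge, x_not_edge) training outputs.
cv::Ptr<cv::ml::ANN_MLP> trainBestMlp(const cv::Mat& features, const cv::Mat& targets)
{
    cv::Ptr<cv::ml::ANN_MLP> best;
    double bestError = std::numeric_limits<double>::max();

    for (int hidden = 1; hidden <= 50; ++hidden) {
        cv::Mat layers = (cv::Mat_<int>(1, 3) << features.cols, hidden, targets.cols);

        cv::Ptr<cv::ml::ANN_MLP> mlp = cv::ml::ANN_MLP::create();
        mlp->setLayerSizes(layers);
        mlp->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1.0, 1.0);
        mlp->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.001, 0.1);

        cv::Ptr<cv::ml::TrainData> data =
            cv::ml::TrainData::create(features, cv::ml::ROW_SAMPLE, targets);
        mlp->train(data);

        // Crude selection criterion for the sketch: prediction error on the same data
        // (the paper selects by classification rate on a separate test set).
        cv::Mat predictions;
        mlp->predict(features, predictions);
        double err = cv::norm(predictions, targets, cv::NORM_L2) / features.rows;
        if (err < bestError) { bestError = err; best = mlp; }
    }
    return best;
}
```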
The post-processing step was composed of two sub-stages: (i) contrast and brightness adjustment, and (ii) histogram transformation. This stage was applied to each image pixel to modify the dynamic range of the sketch, with the aim of emphasizing higher-intensity pixels and facilitating visualization of the previous step's output. According to experimental tests conducted after a calibration procedure (see Sect. 4), the brightness value was set to −80, the contrast value was set to 80, and gamma was set to 4.
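A sketch of this post-processing stage; the paper does not state how the reported brightness (−80) and contrast (80) values map onto a linear gain/offset, so that mapping is an assumption below, while the gamma value of 4 follows the text.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Post-processing sketch: linear brightness/contrast adjustment followed by a gamma
// transformation, applied to the 8-bit neural edge map. The mapping of the reported
// "contrast = 80, brightness = -80" settings to (alpha, beta) is an assumption.
cv::Mat postprocess(const cv::Mat& edgeMap, double alpha = 1.8, double beta = -80.0,
                    double gamma = 4.0)
{
    cv::Mat adjusted;
    edgeMap.convertTo(adjusted, CV_8U, alpha, beta);   // gain (contrast) and offset (brightness)

    // Build a gamma look-up table: out = 255 * (in/255)^gamma.
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(255.0 * std::pow(i / 255.0, gamma));

    cv::Mat result;
    cv::LUT(adjusted, lut, result);
    return result;
}
```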
Experimental evaluation, results and discussion
The resulting sketches were submitted to an objective and a subjective evaluation process, as discussed in this section. Additionally, some sketch images are presented at the end of the discussion.
Objective evaluation
The objective evaluation involved three steps:
the choice of a set of face images taken under uncontrolled conditions of illumination and viewpoint, followed by ground-truth edge labeling by an artist;
a statistical comparison of the results obtained by some combinations of the pre-processing algorithms, using three different measures; and
comparison of three techniques for sketch generation.
A ground-truth set of fourteen images was manually labeled by a single artist. Each ground-truth image consisted of a one-pixel-width line-drawing sketch of a frontal-face image. A pair of ground-truth images used in the comparative experiments is given in Fig. 4. See Annex A of the supplied electronic supplementary material for the complete ground-truth set. Martin et al. [39] provided an empirical database for research on image segmentation and boundary detection, but with very few single-person frontal-face images, and thus that database was not used in the experiments.
Examples of ground-truth images used for evaluation: (A) Lena image; and (B) Thaís image
As mentioned above, one of the purposes of the objective evaluation was to obtain a set of optimal combinations of available algorithms for the pre-processing step of the proposed approach. Three edge similarity measures were considered: (i) Peak Signal to Noise Ratio (PSNR) [47]; (ii) Pratt's Figure of Merit (FoM) [46]; and (iii) Structural SIMilarity (SSIM) [66, 67]. These measures compute a matching score between two images using either a pixel-based or a neighborhood-based approach.
PSNR uses a pixel-based method for similarity evaluation, usually measured in decibels; thus a higher PSNR means higher similarity. The mean square error (MSE) between images I1 and I2 is computed as in (2). This measure and the next one are typically used to evaluate the degradation caused in images by compression algorithms:
$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[I_1(i,j)-I_2(i,j)\bigr]^2 $$
where M and N are the image dimensions. The PSNR measure can be obtained from the MSE as follows:
$$\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right) $$
where MAX_I is the maximum value that a pixel can assume, e.g., if an 8-bit pixel representation is used, the maximum value is 255.
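For reference, a direct implementation of the MSE/PSNR definitions above (illustrative; the authors do not describe their implementation):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <limits>

// PSNR sketch, following the MSE/PSNR definitions above, for two 8-bit single-channel images.
double psnr(const cv::Mat& i1, const cv::Mat& i2, double maxVal = 255.0)
{
    cv::Mat diff;
    cv::absdiff(i1, i2, diff);
    diff.convertTo(diff, CV_64F);
    double mse = cv::mean(diff.mul(diff))[0];           // mean of squared differences
    if (mse <= 1e-12)                                   // identical images: PSNR is unbounded
        return std::numeric_limits<double>::infinity();
    return 10.0 * std::log10(maxVal * maxVal / mse);
}
```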
Pratt's FoM (see (4)) is a measure of the edge deviation between an ideal edge map and one produced by an edge detector [47]. FoM varies in the interval [0,1], where 1 is a perfect match between the detected edge image and the ground-truth image:
$$\mathrm{FoM} = \frac{1}{I_N}\sum_{i=1}^{I_A}\frac{1}{1+a\,d(i)^2} $$
where I_N = max(I_I, I_A), I_I represents the number of edge pixels in the ground-truth image, I_A represents the number of edge pixels in the evaluated image, a is a constant scale factor (usually 1/9), and d(i) is the distance between the i-th detected edge pixel and the nearest ground-truth edge pixel.
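A sketch of Pratt's FoM as given above, using a distance transform of the ground-truth edge map to obtain d(i); the binarization convention (non-zero pixels are edges) is an assumption.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>

// Pratt's Figure of Merit sketch: edges are assumed to be non-zero pixels in 8-bit maps.
// d(i) is the distance from each detected edge pixel to the nearest ground-truth edge pixel,
// obtained via a distance transform; a = 1/9 as in the text.
double figureOfMerit(const cv::Mat& groundTruth, const cv::Mat& detected, double a = 1.0 / 9.0)
{
    // distanceTransform measures the distance to the nearest zero pixel, so the ground-truth
    // edge map is inverted first (edge pixels become zero).
    cv::Mat inverted, dist;
    cv::bitwise_not(groundTruth, inverted);
    cv::distanceTransform(inverted, dist, cv::DIST_L2, 3);

    int nGt = cv::countNonZero(groundTruth);
    int nDet = cv::countNonZero(detected);
    if (nGt == 0 || nDet == 0) return 0.0;

    double sum = 0.0;
    for (int y = 0; y < detected.rows; ++y)
        for (int x = 0; x < detected.cols; ++x)
            if (detected.at<uchar>(y, x) != 0) {
                double d = dist.at<float>(y, x);
                sum += 1.0 / (1.0 + a * d * d);
            }
    return sum / std::max(nGt, nDet);
}
```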
SSIM is a similarity measure that uses local patterns of pixel intensities, previously normalized for luminance and contrast, and is computed as a function of three comparisons: luminance, contrast and structure. SSIM varies in the interval [0,1] [66]. The SSIM calculation for a pair of test images x and y is given by (5):
$$\mathrm{SSIM}(x,y) = \frac{(2 \mu_x \mu_y+c_1)(2\,\mathrm{cov}_{xy}+c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2+\sigma_y^2+c_2)} $$
where μ_x is the mean of x; μ_y is the mean of y; \(\sigma_{x}^{2}\) is the variance of x; \(\sigma_{y}^{2}\) is the variance of y; cov_xy is the covariance of x and y; c1=(k1L)^2 and c2=(k2L)^2 are used to stabilize the division with small denominators, where L is the dynamic range of pixel values, typically \(2^{\#\text{bits per pixel}}-1\); k1=0.01 and k2=0.03 by default.
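A sketch of a windowed (mean) SSIM computation consistent with the formula above; the 11×11 Gaussian window with σ=1.5 follows common practice and is an assumption, not a setting reported in the paper.

```cpp
#include <opencv2/opencv.hpp>

// Mean SSIM sketch for two 8-bit single-channel images, following the formula above with
// local Gaussian windows (k1 = 0.01, k2 = 0.03, L = 255).
double meanSsim(const cv::Mat& a, const cv::Mat& b)
{
    const double C1 = (0.01 * 255) * (0.01 * 255);
    const double C2 = (0.03 * 255) * (0.03 * 255);

    cv::Mat x, y;
    a.convertTo(x, CV_64F);
    b.convertTo(y, CV_64F);

    cv::Mat muX, muY;
    cv::GaussianBlur(x, muX, cv::Size(11, 11), 1.5);
    cv::GaussianBlur(y, muY, cv::Size(11, 11), 1.5);

    cv::Mat muX2 = muX.mul(muX), muY2 = muY.mul(muY), muXY = muX.mul(muY);

    cv::Mat sigmaX2, sigmaY2, sigmaXY;
    cv::GaussianBlur(x.mul(x), sigmaX2, cv::Size(11, 11), 1.5);
    cv::GaussianBlur(y.mul(y), sigmaY2, cv::Size(11, 11), 1.5);
    cv::GaussianBlur(x.mul(y), sigmaXY, cv::Size(11, 11), 1.5);
    sigmaX2 -= muX2;                                   // local variances and covariance
    sigmaY2 -= muY2;
    sigmaXY -= muXY;

    cv::Mat num = (2 * muXY + C1).mul(2 * sigmaXY + C2);
    cv::Mat den = (muX2 + muY2 + C1).mul(sigmaX2 + sigmaY2 + C2);

    cv::Mat ssimMap;
    cv::divide(num, den, ssimMap);
    return cv::mean(ssimMap)[0];                       // average SSIM over the image
}
```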
Numerical results involving the above-mentioned metrics are shown in Table 2, in which each group of algorithms is named G_Z, where Z is the group number. Each group is composed of three pre-processing algorithms; e.g., the G1 (Group 1) algorithms are: (i) GrayWorld color constancy processing; (ii) an RGB to YCrCb conversion and the use of the Y channel; and (iii) Gaussian smoothing, as indicated in the table. The table rows contain the mean value \(\bar{X}\) and standard deviation s of the indicated measure, obtained over the 14 images.
Table 2 Objective evaluation of proposed approach
After computing Table 2, an Analysis of Variance (ANOVA) with α=0.05 was performed in order to test the null hypothesis that there were no differences between the means of the measure values.
In the context of this research, the ANOVA test was used with a single factor under evaluation (the measure result) and more than two groups to compare (the proposed approach groups G1, G2, …, G12).
The ANOVA values for each measure were as follows: (i) PSNR: F(11,156)=1.54252 and p-value<0.12145; (ii) SSIM: F(11,156)=0.184008 and p-value<0.99824; (iii) FoM: F(11,156)=2.693397 and p-value<0.00336. A statistically significant difference was found only in the FoM results; thus, the null hypothesis was rejected only for that measure.
After the ANOVA test, a multiple comparison Tukey t-test [61] was conducted in order to identify which group of available algorithms was significantly different from the others. The Tukey t-test showed that group 1 of algorithms (G1) was statistically different from the others when the FoM measure is taken as reference. Thus, the chosen algorithms for the pre-processing step were GrayWorld color constancy processing, use of the Y Channel and Gaussian smoothing.
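For completeness, the one-way ANOVA F statistic used throughout this section can be computed as sketched below (the statistic only; the p-value lookup against the F distribution is omitted).

```cpp
#include <vector>
#include <numeric>

// One-way ANOVA F statistic sketch: 'groups' holds the measure values (e.g. FoM over the
// 14 images) for each compared configuration. Returns F = MS_between / MS_within, with
// degrees of freedom df1 = k - 1 and df2 = n - k.
double anovaF(const std::vector<std::vector<double>>& groups)
{
    int k = static_cast<int>(groups.size());
    int n = 0;
    double grandSum = 0.0;
    for (const auto& g : groups) {
        n += static_cast<int>(g.size());
        grandSum += std::accumulate(g.begin(), g.end(), 0.0);
    }
    double grandMean = grandSum / n;

    double ssBetween = 0.0, ssWithin = 0.0;
    for (const auto& g : groups) {
        double mean = std::accumulate(g.begin(), g.end(), 0.0) / g.size();
        ssBetween += g.size() * (mean - grandMean) * (mean - grandMean);
        for (double v : g) ssWithin += (v - mean) * (v - mean);
    }
    double msBetween = ssBetween / (k - 1);
    double msWithin = ssWithin / (n - k);
    return msBetween / msWithin;
}
```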
After defining the pre-processing algorithms, an objective comparison between the proposed approach, the DoG, and the FDoG edge detectors was performed. Although the proposed approach is aimed at producing gray-level representations, the objective evaluation described above was done with binary outputs, given that the ground-truth was provided as binary images. The comparison was feasible because the DoG and FDoG outputs resemble a sketch. As Canny edges have single-pixel thickness, which interferes with the FoM calculation, the Canny detector was not included in this comparison. Nonetheless, it was included in the subjective evaluation process (see Sect. 4.2).
For this comparative experiment, the mean and standard deviation over the 14 ground-truth images, for the three considered measures and the three compared algorithms, are given in Table 3.
Table 3 Objective evaluation of available techniques
Similarly to the experiment that defined the pre-processing algorithms of the proposed approach, an ANOVA test was performed to assess statistical differences between the mean values of the three measures for the compared algorithms, using α=0.05. According to that analysis, statistically significant differences between the means exist for all three measures: (i) PSNR: F(2,39)=4.876696 and p-value<0.01287; (ii) SSIM: F(2,39)=15.24669 and p-value<0.00012; (iii) FoM: F(2,39)=3.61137 and p-value<0.03639.
The Tukey t-test for pairwise comparison (see Table 4), using α=0.05, showed that no statistically significant difference existed between the proposed approach and the FDoG algorithm for any of the three measures. Additionally, a statistically significant difference was found in the following cases: (i) DoG and FDoG (PSNR); (ii) the proposed approach and DoG (SSIM); and (iii) the proposed approach and DoG (FoM).
Table 4 Tukey t-test results for objective evaluation of available techniques (P.A. is the proposed approach)
Nevertheless, an objective assessment cannot capture all the nuances and variability in the results of techniques that aim at visual appeal to human observers, which is inherently ambiguous and subjective. This does not invalidate the objective evaluation presented earlier, but it does raise the need for a method or tool to better evaluate those nuances. Subjective evaluation therefore appears as a second, necessary evaluation technique.
Subjective evaluation
Wallraven et al. [60] provided an experimental framework in which perceptual features of animated and real facial expressions can be addressed. Such a framework allows the evaluation of different animation methods.
Gooch et al. [20] also studied facial illustration evaluation, and showed that illustrations and caricatures are as effective as photographs in recognition tasks.
Winnemöller et al. [69] also evaluated stylization algorithms using a memory-style game, and reported a performance improvement when measuring recognition speed and memory of abstracted images of familiar faces.
Heath et al. [23] applied a visual rating score method to test a set of edge detectors on a number of natural images. The evaluation criterion was a subjective rating of how easily, quickly, and accurately humans could recognize objects within a test image from the edge map produced by an edge detector. The evaluation process consisted of two parts:
parameter optimization of each edge detection algorithm; and
human visual assessment of the results.
The subjective evaluation process conducted in this research was inspired by the above studies on human facial illustrations, and especially by the work of Heath et al. [23], with some modifications, namely: (i) the method for calibration of algorithm parameters; (ii) the choice of the image set; (iii) the use of a web-based polling application; and (iv) the choice of algorithms (FDoG, DoG, Canny and the proposed approach).
In order to avoid biases when defining the parameters for each edge detector, a parameter calibration (the first part of the subjective evaluation) was conducted by a voting scheme. An excerpt of the user interface used in this part of the experiment is given in Fig. 5.
Subjective evaluation—Part 1—User Interface
Twenty test users, with ages varying between 18 and 28 years, calibrated the four algorithms. In this first part of the experiment all users were software engineers with image processing programming skills. Twenty face images were used (Annex B of the supplied electronic supplementary material contains those images). For each method, a random variation of parameters was generated, resulting in a set of eight sketch images for each of the 20 face images.
In the first part of the experiment, each user evaluated 640 images (20 face images × 8 sketches × 4 algorithms). The parameters that showed the best results for the evaluated face set are given in Table 5.
Table 5 First part results: parameter calibration
The number of participants is aligned with previous NPR evaluation work; e.g., when performing recognition and learning tasks on caricature renderings, Gooch et al. [20] performed the evaluation with 42 users for the recognition speed task (divided into three two-part experiments, each with 14 participants) and 30 students for the learning speed task (divided into three groups of 10); Winnemöller et al. [69] conducted two task-based studies, with 10 participants in each study; and Isenberg et al. [27] performed an observational study with 24 participants in total.
It is important to note that, after the calibration phase of the subjective evaluation, the pre-processing modules selected for the proposed approach were the same in both assessments (objective and subjective), i.e., the objective evaluation results (Table 2, group G1 of algorithms) did not differ from the subjective evaluation (Table 5, proposed approach parameters) when defining the best combination of pre-processing algorithms.
Similarly to the work of Heath et al. [23], the second part of the experiment considered a voting scheme where the images generated by the calibrated parameters (obtained in the first phase) were used. An excerpt of the user interface used in this part of the experiment is presented in Fig. 6.
In Heath et al. [23], 16 users evaluated 280 edge images (2 edge images for each detector × 28 images of varied themes × 5 detectors). Moreover, the comparison of the edge detectors was made indirectly, taking into account a score in the interval [0,7], using paper voting sheets.
The range used in the present work differed from the range proposed by Heath et al. [23], because we used the interval [0,10], and the user was instructed to vote 0 (zero) if the face characteristics (eyes, eyebrows, nose, mouth, ears) were "very hard" to identify, and to vote 10 (ten) if the face characteristics were "very easy" to identify. In summary, the goal was to ask the participants to rate the relative performance of each algorithm when rendering frontal-face images in a non-photorealistic way, i.e., whether the algorithm outputs contained recognizable face landmarks.
In this work, the second part of the subjective evaluation was carried out by 25 users without any image processing background, with ages varying between 18 and 28 years, who evaluated 10 face images rendered by each of the four compared methods, with one image per method (totaling 40 images per person).
During a pilot test, it was found that using the same amount of images (20) used in step 1 of the subjective experiment resulted in a prohibitively time-consuming task (about 1 hour and 10 minutes per user). After reducing the number of images to 10 and after reformulating the interface of the experiment, the average participation time was reduced to 25 minutes.
Table 6 contains the mean (\(\bar{X}\)) and the standard deviation (s) of the performance for each evaluated algorithm after the computation of the subjective evaluation results.
Table 6 Second part results: relative performance for each subjectively evaluated technique
Once again, a single-factor ANOVA was performed to assess statistical differences between the mean values of the votes computed for each algorithm, using α=0.05, relative to the results of the second part. The ANOVA results were F(3,956)=106.1174 and p-value<2.65E-59, indicating a statistically significant difference between the mean votes of the four algorithms. Finally, a multiple comparison Tukey t-test was performed, as shown in Table 7.
Table 7 Tukey t-test results for subjective evaluation of available techniques
The Tukey t-test results indicate that there is a statistically significant difference between the FDoG and all other approaches, using α=0.05, and that there are no statistically significant differences between the proposed approach and the DoG algorithm.
A visual comparison of the evaluated algorithms is presented in Fig. 7, where it can be seen that the proposed technique produces a sketchy aspect. This sketchy aspect can be observed in the highly variable line width, which follows edge importance, and in the highlighting of facial components, such as the stronger lines drawn for the mouth and eyes.
Visual comparison between techniques: (A) Canny, (B) DoG, (C) FDoG, and (D) proposed approach
Additional results obtained using the proposed approach are presented in Annex C of the supplied electronic supplementary material.
Final considerations and further work
An approach for digital image stylization by means of a neural network sketching process was presented in this paper. In an objective evaluation, a set of test images with faces was used to compare the proposed approach with existing sketch-like rendering systems. The objective evaluation showed, after an ANOVA analysis and Tukey t-test, that the proposed approach did not differ from a state-of-the-art technique for sketch rendering (namely FDoG).
A subjective experiment proved to be complementary to the objective analysis. In the subjective evaluation, the calibration of parameters for the four algorithms (Canny, DoG, proposed approach and FDoG) strengthened the conclusions derived from the objective evaluation. The pre-processing modules selected for the proposed approach were the same in both assessments (objective and subjective). The Canny algorithm was considered unsuitable for generating non-photorealistic representations of human faces, given the low average score (3.84) obtained on the [0,10] scale. Moreover, the proposed approach obtained a higher average score than the DoG and Canny algorithms, and a lower score than the FDoG algorithm.
Hertzmann [26] states that a missing ingredient in most NPR research is experimental study and evaluation of results, beyond the simplistic presentation of several test images and the visual appeal of the results produced by a technique. Within that context, the research presented here makes an important contribution by applying formal statistical analysis to the evaluation of methods for human face sketch generation.
The average execution times for the four algorithms (Canny, DoG, FDoG and the proposed approach) are shown in Table 8. These times were computed over 20 runs on different input images with an average resolution of 780k pixels. The results were obtained using a standard 2 GHz x86 personal computer with 2 GB of RAM, running the Windows XP operating system. The programs were written in C/C++ and compiled using Microsoft Visual Studio 2008. Standard OpenCV functions were used for color space conversion and thresholding. The execution time measurements did not take into account any I/O or memory allocation procedures.
Canny and DoG run at nearly real time due to the low complexity of these algorithms. The proposed approach took 3.645 seconds on average to process an image, whereas the average execution time for the FDoG algorithm was 3.235 seconds. The slightly higher execution time of the proposed approach, when compared to the FDoG, is partly explained by the pre- and post-processing steps of the proposed approach, which are not required in the other compared algorithms.
Table 9 allows a visual comparison to be made of the evaluated algorithms when processing noise-corrupted input images. Two types of noise were considered: (i) Gaussian white noise with zero mean and three different variances (s); and (ii) salt & pepper noise with three different densities (D). Three different noise levels were considered, varying from nearly imperceptible noise artifacts to a very noisy image. Visually, the FDoG and the proposed approach showed some degree of robustness against the two types of noise considered, whereas the Canny and the DoG produced comparatively poor results.
Table 9 Outputs of the evaluated algorithms when considering different levels of Gaussian and salt & pepper noise
Table 8 Average execution time
In order to further improve the subjective evaluation results (as reported on Tables 6 and 7), alternative color spaces as well as other pre- and post-processing techniques will be investigated, such as homomorphic and bilateral filtering. Moreover, we intend to evaluate the impact of training with additional synthetic edge orientations, using different kernel sizes and applying an edge thinning process to the output.
Another consideration regarding the neural network step is that the algorithm was trained on synthetic data but applied to real-world data. Some neural network trials were carried out with real-world data, with samples taken directly from stylized images. Unfortunately, the neural network failed to obtain a consistent edge mapping, since some training samples (among millions of samples) were ambiguous, mapping simultaneously to edge and non-edge. Nevertheless, future work might include filtered real-world data in the neural network step of the proposed approach.
Future work may also involve increasing the number of images in the first step of the objective evaluation, including noise-corrupted images, in order to strengthen the result that placed group 1 (G1) as the top pre-processing strategy among all investigated groups.
Moreover, the second part of the subjective evaluation may be extended by means of additional scales to capture other relevant dimensions, such as aesthetics, complexity, face quality and overall image quality. Finally, evaluations based on levels of expertise may be performed (e.g. users with and without prior experience with illustration or NPR) in order to broaden the conclusions drawn from the subjective evaluation.
Arruda FA, Porto VA, Gomes HM, de Queiroz JER, Moroney N (2007) Facial sketching based on sub-image illumination removal and multiscale edge filtering. In: Proc IEEE SITIS 2007. IEEE Comp Soc, Los Alamitos, pp 520–527
Barile P, Ciesielski V, Trist K (2008) Non-photorealistic rendering using genetic programming. In: Proc 7th int conf on sim evolution and learning, vol 5361. Springer, Berlin, pp 299–308
Barnard K, Funt B (1999) Investigations into multi-scale retinex. In: Color imaging in multimedia. Wiley, New York, pp 9–17
Barnard K, Cardei V, Funt B (2002) A comparison of computational color constancy algorithms. IEEE Trans Image Process 11(9):972–984
Becerikli Y, Demiray HE, Ayhan M, Aktas K (2006) Alternative neural network based edge detection. Neural Inf Process – Lett Rev 10:193–199
Bousseau A, Kaplan M, Thollot J, Sillion F (2006) Interactive watercolor rendering with temporal coherence and abstraction. In: Proc NPAR 2006. ACM, New York, pp 141–149
Bovik AC (2005) Handbook of image and video processing. Academic Press, San Diego
Buschsbaum G (1980) A spatial processor model for object colour perception. J Franklin Inst 310(1):1–26
Chabrier S, Laurent H, Rosenberger C, Emile B (2008) Comparative study of contour detection evaluation criteria based on dissimilarity measures. J Image Video Proc 8(2):1–13
Chang CY (2004) A contextual-based hopfield neural network for medical image edge detection. In: Proc IEEE ICME 2004, vol 2, pp 1011–1014
Chen H, Liu Z, Rose C, Xu Y, Shum HY, Salesin D (2004) Example-based composite sketching of human portraits. In: Proc NPAR 2004. ACM, New York, pp 95–153
Cootes TF, Edwards GJ, Taylor CJ (2001) Active appearance models. IEEE Trans Pattern Anal Mach Intell 23(6):681–685
DeCarlo D, Santella A (2002) Stylization and abstraction of photographs. In: Proc SIGGRAPH 2002. ACM, New York, pp 769–776
Dobashi Y, Haga T, Johan H, Nishita T (2002) A method for creating mosaic images using Voronoi diagrams. In: Proc Eurographics short presentations, pp 341–348
Ebner M (2007) Color constancy. Wiley, New York
Gomes HM, Fisher R (2003) Primal-sketch feature extraction from a log-polar image. Pattern Recognit Lett 24(7):983–992
Gonzalez RC, Woods RE (2007) Digital image processing. Prentice Hall, New York
Gooch A, Gooch B, Shirley P, Cohen E (1998) A non-photorealistic lighting model for automatic technical illustration. In: Proc SIGGRAPH'98, pp 447–452, ACM, New York
Gooch B, Gooch A (2001) Non-photorealistic rendering. Peters, Wellesley
Gooch B, Reinhard E, Gooch A (2004) Human facial illustrations: Creation and psychophysical evaluation. ACM Trans Graph 23(1):27–44
Gupta L, Sukhendu D (2006) Texture edge detection using multi-resolution features and som. In: Proc ICPR 2006. IEEE Press, New York, pp 199–202
Haykin S (2008) Neural networks: a comprehensive foundation. Prentice Hall, New York
Heath MD, Sarkar S, Sanocki T, Bowyer KW (1997) Robust visual method for assessing the relative performance of edge-detection algorithms. IEEE Trans Pattern Anal Mach Intell 19(12):1338–1359
Heisele B, Ho P, Wu J, Poggio T (2003) Face recognition: Component-based versus global approaches. Comput Vis Image Underst 91(1–2):6–21
Hertzmann A (1999) Introduction to 3d non-photorealistic rendering: Silhouettes and outlines. Springer, Berlin
Hertzmann A (2010) Non-photorealistic rendering and the science of art. In: Proc NPAR 2010. ACM, New York, pp 147–157
Isenberg T, Neumann P, Carpendale S, Sousa MC, Jorge JA (2006) Non-photorealistic rendering in context: an observational study. In: Proc NPAR 2006. ACM, New York, pp 115–126
Jiang X, Marti C, Irniger C, Bunke H (2006) Distance measures for image segmentation evaluation. EURASIP J Appl Signal Process 2006(1):1–10
Jolliffe IT (2002) Principal component analysis. Springer, Berlin
Kalnins RD, Markosian L, Meier BJ, Kowalski MA, Lee JC, Davidson PL, Webb M, Hughes JF, Finkelstein A (2002) Wysiwyg npr: drawing strokes directly on 3d models. In: Proc SIGGRAPH 2002. ACM, New York, pp 755–762
Kang H, Lee S, Chui CK (2007) Coherent line drawing. In: Proc NPAR 2007. ACM, New York, pp 43–50
Kolliopoulos A, Wang JM, Hertzmann A (2006) Segmentation-based 3d artistic rendering. In: Proc EGSR 2006, pp 361–370
Kyprianidis JE, Kang H, Döllner J (2009) Image and video abstraction by anisotropic Kuwahara filtering. Comput Graph Forum 28(7):1955–1963
Lake A, Marshall C, Harris M, Blackstein M (2000) Stylized rendering techniques for scalable real-time 3d animation. In: Proc NPAR 2000. ACM, New York, pp 13–20
Land EH, McCann JJ (1971) Lightness and retinex theory. J Opt Soc Am 61(1):1–11
Lee H, Kwon S, Lee S (2006) Real-time pencil rendering. In: Proc NPAR 2006. ACM, New York, pp 37–45
Litwinowicz P (1997) Processing images and video for an impressionist effect. In: Proc SIGGRAPH'97. ACM/Addison-Wesley, New York, pp 407–414
Markosian L, Kowalski MA, Goldstein D, Trychin SJ, Hughes JF, Bourdev LD (1997) Real-time nonphotorealistic rendering. In: Proc SIGGRAPH'97. ACM/Addison-Wesley, New York, pp 415–420
Martin D, Fowlkes C, Tal D, Malik J (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proc 8th int'l conf comp vision, vol 2, pp 416–423
Meer P, Georgescu B (2001) Edge detection with embedded confidence. IEEE Trans Pattern Anal Mach Intell 23(12):1351–1365
Mignotte M (2003) Unsupervised statistical sketching for non-photorealistic rendering models. In: Proc ICIP 2003, vol 3, pp 573–576. IEEE Comp Soc, Los Alamitos
Mould D (2003) A stained glass image filter. In: Proc 13th EGWR, eurographics association, pp 20–25
Ng G, Ong FL, Noor NM (1995) Neural edge detector. ASEAN J Sci Tech Dev 12(1):35–42
Pian Z, Gao L, Wang K, Guo L, Wu J (2007) Edge enhancement post-processing using hopfield neural net. In: Proc ISNN 2007. Springer, Berlin, pp 846–852
Poynton C (2003) Digital video and HDTV algorithms and interfaces. San Mateo, Morgan Kaufmann
Pratt WK (2007) Digital image processing: PIKS scientific inside. Wiley, New York
Prieto MS, AR Allen (2003) A similarity metric for edge images. IEEE Trans Pattern Anal Mach Intell 25(10):1265–1273
Rajab MI, Woolfson MS, Morgan SP (2004) Application of region-based segmentation and neural network edge detection to skin lesions. Comput Med Imaging Graph 28(1-2):61–68
Rizzi A, Gatta C, Marini D (2004) From retinex to automatic color equalization: issues in developing a new algorithm for unsupervised color equalization. J Electron Imaging 13(1):75–84
Roushdy M (2006) Comparative study of edge detection algorithms applying on the grayscale noisy image using morphological filter. Int J Graph Vis Image Process 6(4):17–23
Salisbury MP, Anderson SE, Barzel R, Salesin DH (1994) Interactive pen-and-ink illustration. In: Proc SIGGRAPH'94, pp 101–108. ACM, New York
Sayeed R, Howard T (2006) State-of-the-art of non-photorealistic rendering for visualisation. In: Proc TP.CG.06. Pergamon, Elmsford, pp 1–10
Sharma G (2002) Digital color imaging handbook. CRC Press, Boca Raton
Sousa MC, Buchanan JW (1999) Computer-generated graphite pencil rendering of 3d polygonal models. Comput Graph Forum 18(3):195–208
Strothotte T, Schlechtweg S (2002) Non-photorealistic computer graphics: modeling, rendering and animation. San Mateo, Morgan Kaufmann
Suzuki K, Horiba T, Sugie N (2000) Edge detection from noisy images using a neural edge detector. In: Proc IEEE SPS workshop, vol 2. IEEE Comp Soc, Los Alamitos, pp 487–496
Tang X, Wang X (2003) Face sketch synthesis and recognition. In: Proc IEEE ICCV 2003. IEEE, New York, p 687
Toivanen PJ, Ansamaki J, Parkkinen JPS, Mielikainen J (2003) Edge detection in multispectral images using the self-organizing map. Pattern Recognit Lett 24(16):2987–2994
Tresset P, Leymarie FF (2005) Generative portrait sketching. In: Proc VSMM 2005, pp 739–748, Hal Twaites
Wallraven C, Breidt M, Cunningham DW, Bülthoff HH (2008) Evaluating the perceptual realism of animated facial expressions. ACM Trans Appl Percept 4:1–20
Walpole RE, Myers RH, Myers SL, Ye K (2010) Probability and statistics for engineers and scientists. Prentice Hall, New York
Wang J, Xu Y, Shum HY, Cohen MF (2004a) Video tooning. In: Proc SIGGRAPH 2004. ACM, New York, pp 574–583
Wang K, Yin B, Guo J, Ma S (2004b) Face animation parameters extraction and driving. In: Proc IEEE ISCIT 2004, vol 2. IEEE, New York, pp 1242–1245
Wang K, Gao L, Pian Z, Guo L, Wu J (2007) Edge detection combined entropy threshold and self-organizing map(som). In: Proc ISNN 2007. Springer, Berlin, pp 931–937
Wang S, Ge F, Liu T (2006) Evaluating edge detection through boundary detection. EURASIP J Appl Signal Proc 1–15
Wang Z, Bovik AC (2002) A universal image quality index. IEEE Signal Process Lett 9(3):81–84
Wang Z, Simoncelli E (2005) An adaptive linear system framework for image distortion analysis. In: Proc IEEE ICIP 2005, vol 2. IEEE, New York, pp 1160–1163
Winkenbach G, Salesin DH (1994) Computer-generated pen-and-ink illustration. In: Proc SIGGRAPH'94. ACM, New York, pp 91–100
Winnemöller H (2006) Perceptually-motivated non-photorealistic graphics. PhD thesis, Northwestern University
Xu L, Lu C, Xu Y, Jia J (2011) Image smoothing via l0 gradient minimization. ACM Trans Graph 30(6):174
Yang MH, Kriegman DJ, Ahuja N (2002) Detecting faces in images: A survey. IEEE Trans Pattern Anal Mach Intell 24(1):34–58
Zeng K, Zhao M, Xiong C, Zhu SC (2009) From image parsing to painterly rendering. ACM Trans Graph 29(1):1–11
How can the probability that a number contains the digit 3 be 1?
Based on this Numberphile video which claims almost all integers contain a $3$, I have a few questions on the reasoning behind recurring decimal numbers like $0.9999\ldots =1$
What they have shown is that $$\lim_{n \to + \infty} \frac{10^n-9^n}{10^n} = 1$$
This basically means that the probability of, e.g., a $3$ occurring in a range of numbers such as $1$–$10$ or $1$–$100$ increases as the upper bound gets larger.
So you are more likely to see a $3$ when you take $1$–$100000$ than $1$–$10$, as the probability gets higher.
So what I would like to know is: as $n$ approaches $\infty$, does the probability of seeing a $3$ equal $0.99999\ldots$?
But since $0.9999\ldots = 1$, wouldn't this not make sense, given that there are infinitely many numbers that do not contain a $3$?
All I need is for an explanation as to why this logic is wrong. Simpler answers are most appreciated.
I am not looking for the reason as to why 0.9999...=1.
probability limits
ruby duby
$\begingroup$ We have many questions and answers about $0.\overline9=1$ already, but the actual question you ask seems to be something different abouth "probability of a 3". Could you perhaps edit the question title to say more precisely what you're actually asking? Also, it would help if you could edit in an explanation about where looking for 3s come in at all -- I suspect the video may say something about it, but it's much quicker for a reader to read a short explanation than to have to watch through a video to find out what it is you're talking about. $\endgroup$ – Henning Makholm Aug 1 '16 at 8:26
$\begingroup$ The main $0.99999999\ldots=1$ thread is at math.stackexchange.com/questions/11/… $\endgroup$ – Henry Aug 1 '16 at 8:41
$\begingroup$ For your information, $3$ occurs in the integer range from $1$ to $100$ once and exactly once. $\endgroup$ – Frenzy Li Aug 1 '16 at 9:02
$\begingroup$ @Frenzy Li you didn't read the question properly $\endgroup$ – ruby duby Aug 1 '16 at 9:03
$\begingroup$ @user357484 Then, please edit your wording. What does $3$ occur in a set of numbers mean? Is $233$ an occurrence of $3$? $\endgroup$ – Frenzy Li Aug 1 '16 at 9:04
The probability increases as the range increases, like you say; the probability that a 3 appears when we choose a random number between 1 and 100000 is much greater than when we pick a number between 1 and 100. As we let the range increase, the probability increases; when we do not have an upper bound, the probability is $0.999\ldots = 1$. So yes, the probability that, taking a completely random integer, it has a 3 as one of its digits is 1.
Ok now wait; there are lots of integers without a 3 (infinitely many), so how can this be?
The problem is the interpretation of "probability 1". We tend to think this means that taking a random number, it would be impossible for it not to contain a 3 (which is clearly not true). But this interpretation only works when we are talking about the probability of an event from a finite sample space. When the number of possibilities is infinite (as in this situation, where there are infinitely many integers to choose from), this has a slightly different meaning. It means that the event happens "almost surely". So taking a random number, the probability it does not contain a 3 is zero, but it is not impossible. It would just be like splitting an atom when you throw a dart at a dartboard.
Morgan Rodgers
$\begingroup$ I really do like your interpretation of the question, this is pretty close to the answer that i was expecting. $\endgroup$ – ruby duby Aug 1 '16 at 9:21
$\begingroup$ @user357484 Thanks, it is an interesting question (it took a minute to realize where you were confused, but the edits improved it a lot; I think it is phrased nicely now). $\endgroup$ – Morgan Rodgers Aug 1 '16 at 9:35
$\begingroup$ You have spotted his problem and explained it well, but I think it would be even clearer if you started with the essential point that an event can have probability 1 without being certain. Another interesting aspect is the definition of a 'random integer': I'm not sure what is usual, perhaps that all cosets of $\{n\}$ are equally likely for all $n$? $\endgroup$ – PJTraill Aug 1 '16 at 12:55
$\begingroup$ For what it's worth, you can't choose an integer uniformly at random from all integers, there's simply no way to do it compatible with the usual probability axioms. This leads me to dislike the phrase "choosing a number without a 3 has probability zero" from this answer. $\endgroup$ – Ben Millwood Aug 1 '16 at 15:28
$\begingroup$ @BenMillwood That's fair, I know there is no practical way to choose a random element from the set of integers, but I didn't know exactly how to word it. I edited it a little, don't know if it's better (probability is not really my area). $\endgroup$ – Morgan Rodgers Aug 1 '16 at 15:38
Yep, there are infinitely many numbers without a $3$, but there are many more with a $3$.
Indeed, among the numbers with at most $n$ digits, $9^n$ of them have no $3$, while $10^n-9^n$ do. The ratio is
$$\left(\frac{10}9\right)^n-1$$ which tends to infinity exponentially, meaning that the numbers without a $3$ become more and more rare.
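A quick numerical check of these counts and of the ratio (a minimal Python sketch, not part of the original answer):

```python
# Count the integers below 10**n that contain the digit 3, as a check of the ratio above.
for n in range(1, 8):
    total = 10**n - 1              # integers 1 .. 10**n - 1
    without_3 = 9**n - 1           # n-character digit strings avoiding '3', minus the all-zero string
    with_3 = total - without_3     # equals 10**n - 9**n
    # fraction containing a 3, the with/without ratio, and the formula (10/9)^n - 1
    print(n, with_3 / total, with_3 / without_3, (10 / 9)**n - 1)
```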
Also note that "the probability of seeing a $3$ equals $1$" doesn't mean that it is absolutely impossible to have no $3$.
Yves Daoust
$\begingroup$ "Yep, there are infinitely many numbers without a 3, but there are many more with a 3." – this is only true for certain, rather uncommon, definitions of "many more". Most mathematicians would say there are countably infinitely many of both kinds of number. $\endgroup$ – Ben Millwood Aug 1 '16 at 15:51
$\begingroup$ @BenMillwood: "countably infinitely many of both kinds of number" would be completely uninformative as it gives no clue about the probability. My "many more" is analyzed quantitatively. $\endgroup$ – Yves Daoust Aug 1 '16 at 16:23
Your question seems to be "How is it possible for an event to have 100% probability if there are exceptions?" The answer is, this is just one of many counter-intuitive situations when dealing with infinite sets.
With finite sets, if an event has any exceptions, its probability is necessarily < 100%. However, with infinite sets, it's possible for there to be some exceptions (even an infinite number of exceptions) and the event to still have 100% probability. In other words, with infinite sets, "is true with probability 1" and "is always true" have different meanings!
See almost surely for more information.
BlueRaja - Danny Pflughoeft
You seem to think that the numbers $0.9999999\dots$ and $1$ are two different things. They are not. They are the exact same thing. The difference between $0.999\dots$ and $1$ is the same as the difference between "The third round rock rotating around the sun" and "the Earth". They are two different ways of representing the exact same thing.
What the video shows is that the limit of the probability of seeing a $3$ is equal to $1$. That does not mean that the probability itself is equal to $1$ for any single value of $n$.
What it does mean is that if $n$ is really big, then the probability is really close to $1$. It doesn't equal $1$, of course, since you can always pick a number with no threes in it.
5xum
$\begingroup$ although this is not what i was looking for, i guess this would be closer to the answer than the others who are simply trying to prove that 0.9999... =1 $\endgroup$ – ruby duby Aug 1 '16 at 8:55
$\begingroup$ @user357484 Yes, but your main problem is that you still think that $0.9999\dots$ is different from $1$. Your statement " the probability of seeing a 3 from a set from 1-∞ is 0.9999..... but NEVER 1" shows that you think these are two different things, when really, they are not. $\endgroup$ – 5xum Aug 1 '16 at 8:58
$\begingroup$ yes that is worded pretty badly, i removed that part although i am still unsure how to better word it $\endgroup$ – ruby duby Aug 1 '16 at 9:39
The logic is both right and wrong, depending what kind of math you're using. But I'm a programmer, rather than a mathematician, so take what I say with a pinch of salt.
In programming at least, once you start working with different infinities, perhaps using libraries which allow you to handle the Aleph numbers of well ordered infinite sets, you also start playing with different kinds of zeroes, or rather, with different kinds of infinitesimals, which get all called "zero" in almost all branches of mathematics (algebra, calculus), in the same way as they treat all types of infinity as a single "inverse-zero", 1/0, the conceptual "upper bound" of the integers.
Once you start using math that is aware of relative infinities, then 0.999... is not one, it's just infinitesimally far from one. The fact that there are two ways of thinking about this, mathematically, is why there are so many arguments about it: as in all great arguments, both sides are right.
If you are using math which is aware of infinite sets, then 0.999... != 1, but 0.999... == 1 - (1/infinity).
If we randomly select an integer, the chance of there being few enough digits in it that we could express just its magnitude (even using shortcuts like powers of powers, terms like "googol" and "graham's number", or knuth's up-arrow notation) in a single lifetime is infinitesimal: there are only a finite number of magnitudes that we can express, and an infinite number of magnitudes.
Since "everyone knows" that dividing any integer by infinity gives zero, then there are apparently zero integers that we can express. But using math which is aware of different infinities, there aren't zero: just an infinitesimally small fraction of integers that we can express.
The chance of it being a number without a 3 in it, is infinitely larger than that. Because there is an infinite quantity of numbers with no 3s in, and only a finite list of numbers that could not be expressed by humans.
But it is still infinitesimally small, because for every number with no threes, there is an infinite number that does contain threes.
Picturing no-threes is hard. I find this easier to picture when considering the "countable" integers and the uncountable 'real' numbers between. It's clear to me that there are infinite integers; between each of those, there are infinite real numbers.
The chances of hitting an integer when panning through the numbers from 0.5 to 1.5 are 100%. The chances of randomly selecting an integer in a random pick from that range are infinitudinously small.
Dewi Morgan
$\begingroup$ I think that was also where I partially had a problem in making sense of this. Because infinitesimally large or small numbers exist, then 0.999...=1 doesn't add up because it is infinitesimally far from 1 $\endgroup$ – ruby duby Aug 2 '16 at 6:33
$\begingroup$ @rubyduby It only doesn't add up in those math systems where infinitesimals exist. Not all math systems are the same. Some math systems accept negative zero, for example; others do not. Some don't handle infinity at all. Some do, but only one infinity, and one zero as its inverse. Some (eg roman numerals) do not even accept the possibility of zero. This is why there are even arguments about 0.999... It's because many people don't get that there are different kinds of math from the ones they are familiar with. $\endgroup$ – Dewi Morgan Aug 2 '16 at 17:10
Harmonic mean of random variables
Is there an analytic solution/approximation to the PDF/CDF and mean of a harmonic mean of random variables? I'm wondering about beta distributions ($\beta$) or truncated exponential distributions ($E$)?
Generally, what is the PDF/CDF and mean of
$X = \dfrac{n}{\sum_{i=1}^{n}\frac{1}{\beta_i}} $
$Y = \dfrac{n}{\sum_{i=1}^{n}\frac{1}{E_{i}}} $
If there is no solution to $n$ distributions, I would be happy to see for $2$ and $3$...
mean random-variable pdf cdf harmonic-mean
Diogo Santos
$\begingroup$ Is your question a bit confused perhaps? I just took it to ask about what the continuous variable harmonic mean is for those distributions. $\endgroup$ – Carl Mar 31 '17 at 23:25
$\begingroup$ Such problems usually need to be solved on a case by case basis. The question is too general to be answered. But if I were trying for a given random variable $V$, I would first find the pdf of $Z =1/V$ , then try to find the pdf of the sample mean for $Z$, and then the pdf of the inverse of the latter. $\endgroup$ – wolfies Apr 1 '17 at 15:20
$\begingroup$ @wolfies U R correct in that not every $f(x)$ has a harmonic mean, but, that is easy to define, i.e., when $\int_{\alpha }^{\beta } \frac{f(x)}{x} \, dx$ is not integrable. $\endgroup$ – Carl Apr 2 '17 at 20:50
$\begingroup$ researchgate.net/publication/… $\endgroup$ – kjetil b halvorsen Sep 30 '17 at 23:25
A harmonic mean is the reciprocal of the mean reciprocal of data, and is used to average rates. For example, the electrical capacitance of $n$ capacitors connected in series is $\frac{1}{n}^{th}$ of the harmonic mean of their capacitances, and the electrical resistance of $n$ resistors connected in parallel is also $\frac{1}{n}^{th}$ of the harmonic mean of their resistances. In other words, for resistors in parallel, the harmonic mean is the single resistance value that, if assigned to each of the $n$ resistors, would yield the same total resistance. Here, we use $t$ for time, but the following is true for any $x$. For continuous density functions one uses a variation of the second mean value theorem for integrals: for support on $[\alpha\geq0,\beta]$, where $\alpha<\beta$ and where $1=\int_{\alpha }^{\beta } f(t) \, dt$, the harmonic mean residence time (H-MRT) is
\begin{equation} \text{H-MRT} := \Bigg \langle \frac{1}{T} \Bigg \rangle ^{-1}=\Bigg \langle \frac{1}{T} \Bigg \rangle ^{-1}\Bigg[\int_{\alpha }^{\beta } f(t) \, dt\Bigg]^{-1}=\Bigg[\int_{\alpha }^{\beta } \frac{f(t)}{t} \, dt\Bigg]^{-1}. \end{equation} For example, for the Pareto density, whose first moment is undefined (unphysical; negative) for $0<\alpha<1$, some measure other than the mean, e.g., the harmonic mean, is needed when the right tail is very heavy. From the definition above, this is \begin{equation} \text{H-MRT}_{\text{Pareto}} := \Bigg[\int_{\beta }^{\infty } \frac{\alpha \beta ^{\alpha } t^{-\alpha -1}}{t} \, dt\Bigg]^{-1} = \beta \Big(1+\frac{1}{\alpha }\Big);\; \alpha,\beta>0 \end{equation}
For the case when the mean value of a Pareto distribution is undefined, three other statistical measures are defined; median, geometric mean, and the harmonic mean, all three of which are also defined when the first moment is also defined. Of those three, the harmonic mean is arguably most useful as argued in https://arxiv.org/pdf/1402.4061; "However, the harmonic mean statistic will be relatively insensitive to the value of $\alpha _{\min }$ and will tolerate an $\alpha _{\min }$ that is close to 0. That is another point in favor of using the harmonic mean instead of one of the other statistics."
For a left truncated exponential distribution, the harmonic mean is \begin{equation} \Bigg[ \int_{\alpha }^{\infty } \frac{\lambda e^{-\lambda(x-\alpha)}}{x} \, dx \Bigg]^{-1}=\frac{e^{-\alpha \lambda }}{\lambda \Gamma (0,\alpha \lambda )};\; \alpha >0,\lambda >0 \end{equation}
The left and right truncated exponential harmonic mean is
\begin{equation} \Bigg[\int_{\alpha }^{\beta } \frac{\lambda e^{\lambda (\beta -x)}}{x \left(e^{\lambda (\beta -\alpha )}-1\right)} \, dx \Bigg]^{-1} =\frac{e^{-\beta \lambda } \left(e^{\lambda (\beta -\alpha )}-1\right)}{\lambda (\text{Ei}(-\beta \lambda )-\text{Ei}(-\alpha \lambda ))};\alpha <\beta ,\alpha \neq 0, \end{equation}
where the exponential integral function $\text{Ei}(z)=-\int_{-z}^{\infty } \frac{e^{-t}}{t} \, dt$, where the principal value of the integral is taken.
For the beta distribution the harmonic mean is
\begin{equation} \Bigg[ \int_0^1 \frac{x^{\alpha -1} (1-x)^{\beta -1}}{x B(\alpha ,\beta )} \, dx\Bigg]^{-1} =\frac{\alpha -1}{\alpha +\beta -1};\;\alpha>1,\beta>0 \end{equation}
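As a quick numerical sanity check of that last closed form (a minimal sketch with arbitrary shape parameters; only numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 3.0, 2.0                          # arbitrary shape parameters with a > 1

x = rng.beta(a, b, size=1_000_000)       # Beta(a, b) samples
# Distributional harmonic mean 1/E[1/X] versus the closed form (a-1)/(a+b-1)
print("closed form:", (a - 1) / (a + b - 1))
print("simulated  :", 1.0 / np.mean(1.0 / x))
```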
$\begingroup$ I'm not sure what you mean by resistors adding by harmonic mean. If anything, they add as the reciprocal of the sum of reciprocals. The harmonic mean is off by a factor of $n$. $\endgroup$ – Neil G Apr 6 '17 at 2:45
$\begingroup$ @NeilG Thanks for noticing, making the change now. $\endgroup$ – Carl Apr 6 '17 at 4:31
$\begingroup$ @NeilG Well, stated in opposite direction, resistance of $n$ resistors in parallel is $\frac{1}{n}^{th}$ of the harmonic mean, which latter already has a name. I think the important point is that when the measurement system is reciprocal one has to use reciprocation. For example, the total electrical conductance (in siemens or mhos) of parallel resistors is the sum of the conductances of each resistor. But the total resistance of series resistors is the sum of the resistances (in ohms). $\endgroup$ – Carl Apr 6 '17 at 5:21
$\begingroup$ Sure, but it's annoying that you have to say "$n$ times the harmonic mean" of the resistances. It would be nicer to say "the harmonic sum" of the resistances, in my opinion. $\endgroup$ – Neil G Apr 6 '17 at 5:32
$\begingroup$ that's the harmonic series. Anyway, you can define anything you like. $\endgroup$ – Neil G Apr 4 '18 at 11:55
Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector.
Noopept was developed in Russia in the 90s, and is alleged to improve learning. This drug modifies acetylcholine and AMPA receptors, increasing the levels of these neurotransmitters in the brain. This is believed to account for reports of its efficacy as a 'study drug'. Noopept is illegal to sell in the UK: the 2016 Psychoactive Substances Act made supplying it an offence, punishable by up to 7 years in prison. To enhance its nootropic effects, some users have been known to snort Noopept.
As shown in Table 6, two of these are fluency tasks, which require the generation of as large a set of unique responses as possible that meet the criteria given in the instructions. Fluency tasks are often considered tests of executive function because they require flexibility and the avoidance of perseveration and because they are often impaired along with other executive functions after prefrontal damage. In verbal fluency, subjects are asked to generate as many words that begin with a specific letter as possible. Neither Fleming et al. (1995), who administered d-AMP, nor Elliott et al. (1997), who administered MPH, found enhancement of verbal fluency. However, Elliott et al. found enhancement on a more complex nonverbal fluency task, the sequence generation task. Subjects were able to touch four squares in more unique orders with MPH than with placebo.
Most research on these nootropics suggest they have some benefits, sure, but as Barbara Sahakian and Sharon Morein-Zamir explain in the journal Nature, nobody knows their long-term effects. And we don't know how extended use might change your brain chemistry in the long run. Researchers are getting closer to what makes these substances do what they do, but very little is certain right now. If you're looking to live out your own Limitless fantasy, do your research first, and proceed with caution.
The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reported moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So $\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512$! The experiment probably used up no more than an hour or two total.
Another prescription stimulant medication, modafinil (known by the brand name Provigil), is usually prescribed to patients suffering from narcolepsy and shift-work sleep disorder, but it might turn out to have broader applications. "We have conducted at the University of Cambridge double-blind, placebo-controlled studies in healthy people using modafinil and have found improvements in cognition, including in working memory," Sahakian says. However, she doesn't think everyone should start using the drug off-label. "There are no long-term safety and efficacy studies of modafinil in healthy people, and so it is unclear what the risks might be."
One thing to notice is that the default case matters a lot. This asymmetry is because you switch decisions in different possible worlds - when you would take Adderall but stop you're in the world where Adderall doesn't work, and when you wouldn't take Adderall but do you're in the world where Adderall does work (in the perfect information case, at least). One of the ways you can visualize this is that you don't penalize tests for giving you true negative information, and you reward them for giving you true positive information. (This might be worth a post by itself, and is very Litany of Gendlin.)
Ongoing studies are looking into the possible pathways by which nootropic substances function. Researchers have postulated that the mental health advantages derived from these substances can be attributed to their effects on the cholinergic and dopaminergic systems of the brain. These systems regulate two important neurotransmitters, acetylcholine and dopamine.
Specifically, the film is completely unintelligible if you had not read the book. The best I can say for it is that it delivers the action and events one expects in the right order and with basic competence, but its artistic merits are few. It seems generally devoid of the imagination and visual flights of fancy that animated movies 1 and 3 especially (although Mike Darwin disagrees), copping out on standard imagery like a Star Wars-style force field over Hogwarts Castle, or luminescent white fog when Harry was dead and in his head; I was deeply disappointed to not see any sights that struck me as novel and new. (For example, the aforementioned dead scene could have been done in so many interesting ways, like why not show Harry & Dumbledore in a bustling King's Cross shot in bright sharp detail, but with not a single person in sight and all the luggage and equipment animatedly moving purposefully on their own?) The ending in particular boggles me. I actually turned to the person next to me and asked them whether that really was the climax and Voldemort was dead, his death was so little dwelt upon or laden with significance (despite a musical score that beat you over the head about everything else). In the book, I remember it feeling like a climactic scene, with everyone watching and little speeches explaining why Voldemort was about to be defeated, and a suitable victory celebration; I read in the paper the next day a quote from the director or screenwriter who said one scene was cut because Voldemort would not talk but simply try to efficiently kill Harry. (This is presumably the explanation for the incredible anti-climax. Hopefully.) I was dumbfounded by the depths of dishonesty or delusion or disregard: Voldemort not only does that in Deathly Hallows multiple times, he does it every time he deals with Harry, exactly as the classic villains (he is numbered among) always do! How was it possible for this man to read the books many times, as he must have, and still say such a thing?↩
Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: $\frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4$, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit.
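For readers who want to reproduce the arithmetic, the value-of-information formula used here and in the Adderall paragraph above can be written as a small function (a minimal sketch; the function and variable names are mine, not the author's):

```python
import math

def value_of_information(annual_value, discount_rate, info_quality, prob_useful):
    """Net present value of a perpetual annual benefit (annual_value / ln(1 + rate)),
    scaled by the quality of the experiment's information and the prior probability
    that the intervention is actually useful."""
    npv = annual_value / math.log(1 + discount_rate)
    return npv * info_quality * prob_useful

print(value_of_information(200, 0.05, 0.25, 0.50))  # Adderall example, ~512
print(value_of_information(10, 0.05, 0.75, 0.40))   # lithium orotate example, ~61.4
```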
Or in other words, since the standard deviation of my previous self-ratings is 0.75 (see the Weather and my productivity data), a mean rating increase of >0.39 on the self-rating. This is, unfortunately, implying an extreme shift in my self-assessments (for example, 3s are ~50% of the self-ratings and 4s ~25%; to cause an increase of 0.25 while leaving 2s alone in a sample of 23 days, one would have to push 3s down to ~25% and 4s up to ~47%). So in advance, we can see that the weak plausible effects for Noopept are not going to be detected here at our usual statistical levels with just the sample I have (a more plausible experiment might use 178 pairs over a year, detecting down to d>=0.18). But if the sign is right, it might make Noopept worthwhile to investigate further. And the hardest part of this was just making the pills, so it's not a waste of effort.
It is at the top of the supplement snake oil list thanks to tons of correlations; for a review, see Luchtman & Song 2013 but some specifics include Teenage Boys Who Eat Fish At Least Once A Week Achieve Higher Intelligence Scores, anti-inflammatory properties (see Fish Oil: What the Prescriber Needs to Know on arthritis), and others - Fish oil can head off first psychotic episodes (study; Seth Roberts commentary), Fish Oil May Fight Breast Cancer, Fatty Fish May Cut Prostate Cancer Risk & Walnuts slow prostate cancer, Benefits of omega-3 fatty acids tally up, Serum Phospholipid Docosahexaenonic Acid Is Associated with Cognitive Functioning during Middle Adulthood endless anecdotes.
By which I mean that simple potassium is probably the most positively mind altering supplement I've ever tried…About 15 minutes after consumption, it manifests as a kind of pressure in the head or temples or eyes, a clearing up of brain fog, increased focus, and the kind of energy that is not jittery but the kind that makes you feel like exercising would be the reasonable and prudent thing to do. I have done no tests, but feel smarter from this in a way that seems much stronger than piracetam or any of the conventional weak nootropics. It is not just me – I have been introducing this around my inner social circle and I'm at 7/10 people felt immediately noticeable effects. The 3 that didn't notice much were vegetarians and less likely to have been deficient. Now that I'm not deficient, it is of course not noticeable as mind altering, but still serves to be energizing, particularly for sustained mental energy as the night goes on…Potassium chloride initially, but since bought some potassium gluconate pills… research indicates you don't want to consume large amounts of chloride (just moderate amounts).
And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy.
Flow diagram of cognitive neuroscience literature search completed July 2, 2010. Search terms were dextroamphetamine, Aderrall, methylphenidate, or Ritalin, and cognitive, cognition, learning, memory, or executive function, and healthy or normal. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies meeting the inclusion criteria stated in the text.
The prefrontal cortex at the front of the brain is the zone that produces such representations, and it is the focus of Arnsten's work. "The way the prefrontal cortex creates these representations is by having pyramidal cells – they're actually shaped like little pyramids – exciting each other. They keep each other firing, even when there's no information coming in from the environment to stimulate the circuits," she explains.
Learning how products have worked for other users can help you feel more confident in your purchase. Similarly, your opinion may help others find a good quality supplement. After you have started using a particular supplement and experienced the benefits of nootropics for memory, concentration, and focus, we encourage you to come back and write your own review to share your experience with others.
The intradimensional– extradimensional shift task from the CANTAB battery was used in two studies of MPH and measures the ability to shift the response criterion from one dimension to another, as in the WCST, as well as to measure other abilities, including reversal learning, measured by performance in the trials following an intradimensional shift. With an intradimensional shift, the learned association between values of a given stimulus dimension and reward versus no reward is reversed, and participants must learn to reverse their responses accordingly. Elliott et al. (1997) reported finding no effects of the drug on ability to shift among dimensions in the extradimensional shift condition and did not describe performance on the intradimensional shift. Rogers et al. (1999) found that accuracy improved but responses slowed with MPH on trials requiring a shift from one dimension to another, which leaves open the question of whether the drug produced net enhancement, interference, or neither on these trials once the tradeoff between speed and accuracy is taken into account. For intradimensional shifts, which require reversal learning, these authors found drug-induced impairment: significantly slower responding accompanied by a borderline-significant impairment of accuracy.
White, Becker-Blease, & Grace-Bishop (2006) 2002 Large university undergraduates and graduates (N = 1,025) 16.2% (lifetime) 68.9%: improve attention; 65.2:% partying; 54.3%: improve study habits; 20%: improve grades; 9.1%: reduce hyperactivity 15.5%: 2–3 times per week; 33.9%: 2–3 times per month; 50.6%: 2–3 times per year 58%: easy or somewhat easy to obtain; write-in comments indicated many obtaining stimulants from friends with prescriptions
Potassium citrate powder is neither expensive nor cheap: I purchased 453g for $21. The powder is crystalline white, dissolves instantly in water, and largely tasteless (sort of saline & slightly unpleasant). The powder is 37% potassium by weight (the formula is C6H5K3O7) so 453g is actually 167g of potassium, so 80-160 days' worth depending on dose.
Amongst the brain focus supplements that are currently available in the nootropic drug market, Modafinil is probably the most common focus drug or one of the best focus pills used by people, and it's praised to be the best nootropic available today. It is a powerful cognitive enhancer that is great for boosting your overall alertness with least side effects. However, to get your hands on this drug, you would require a prescription.
There's been a lot of talk about the ketogenic diet recently—proponents say that minimizing the carbohydrates you eat and ingesting lots of fat can train your body to burn fat more effectively. It's meant to help you both lose weight and keep your energy levels constant. The diet was first studied and used in patients with epilepsy, who suffered fewer seizures when their bodies were in a state of ketosis. Because seizures originate in the brain, this discovery showed researchers that a ketogenic diet can definitely affect the way the brain works. Brain hackers naturally started experimenting with diets to enhance their cognitive abilities, and now a company called HVMN even sells ketone esters in a bottle; to achieve these compounds naturally, you'd have to avoid bread and cake. Here are 6 ways exercise makes your brain better.
The resurgent popularity of nootropics—an umbrella term for supplements that purport to boost creativity, memory, and cognitive ability—has more than a little to do with the recent Silicon Valley-induced obsession with disrupting literally everything, up to and including our own brains. But most of the appeal of smart drugs lies in the simplicity of their age-old premise: Take the right pill and you can become a better, smarter, as-yet-unrealized version of yourself—a person that you know exists, if only the less capable you could get out of your own way.
To judge from recent reports in the popular media, healthy people have also begun to use MPH and AMPs for cognitive enhancement. Major daily newspapers such as The New York Times, The LA Times, and The Wall Street Journal; magazines including Time, The Economist, The New Yorker, and Vogue; and broadcast news organizations including the BBC, CNN, and NPR have reported a trend toward growing use of prescription stimulants by healthy people for the purpose of enhancing school or work performance.
This continued up to 1 AM, at which point I decided not to take a second armodafinil (why spend a second pill to gain what would likely be an unproductive set of 8 hours?) and finish up the experiment with some n-backing. My 5 rounds: 60/38/62/44/50. This was surprising. Compare those scores with scores from several previous days: 39/42/44/40/20/28/36. I had estimated before the n-backing that my scores would be in the low-end of my usual performance (20-30%) since I had not slept for the past 41 hours, and instead, the lowest score was 38%. If one did not know the context, one might think I had discovered a good nootropic! Interesting evidence that armodafinil preserves at least one kind of mental performance.
Federal law classifies most nootropics as dietary supplements, which means that the Food and Drug Administration does not regulate manufacturers' statements about their benefits (as the giant "This product is not intended to diagnose, treat, cure, or prevent any disease" disclaimer on the label indicates). And the types of claims that the feds do allow supplement companies to make are often vague and/or supported by less-than-compelling scientific evidence. "If you find a study that says that an ingredient caused neurons to fire on rat brain cells in a petri dish," says Pieter Cohen, an assistant professor at Harvard Medical School, "you can probably get away with saying that it 'enhances memory' or 'promotes brain health.'"
Our 2nd choice for a Brain and Memory supplement is Clari-T by Life Seasons. We were pleased to see that their formula included 3 of the 5 necessary ingredients Huperzine A, Phosphatidylserine and Bacopin. In addition, we liked that their product came in a vegetable capsule. The product contains silica and rice bran, though, which we are not sure is necessary.
The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime.
Both nootropics startups provide me with samples to try. In the case of Nootrobox, it is capsules called Sprint designed for a short boost of cognitive enhancement. They contain caffeine – the equivalent of about a cup of coffee, and L-theanine – about 10 times what is in a cup of green tea, in a ratio that is supposed to have a synergistic effect (all the ingredients Nootrobox uses are either regulated as supplements or have a "generally regarded as safe" designation by US authorities)
Altered effective connectivity network of the thalamus in post-traumatic stress disorder: a resting-state fMRI study with Granger causality method
Youxue Zhang, Heng Chen, Zhiliang Long, Qian Cui & Huafu Chen
Applied Informatics volume 3, Article number: 8 (2016)
Post-traumatic stress disorder (PTSD) is an anxiety disorder that can develop following a traumatic event. Previous studies have found abnormal functional connectivity between the thalamus and other brain regions. However, the traditional functional connectivity method cannot investigate the directional flow of the influence in PTSD. In the present study, we used an effective connectivity method based on Granger causality to explore altered direction of causal information flow within a network associated with the thalamus in PTSD. Employing this method, we found that PTSD patients exhibited increased influence from thalamus to middle/inferior frontal gyrus and insula, and increased bidirectional influences between thalamus and medial prefrontal cortex compared to healthy controls. This is the first study to reveal a network of abnormal effective connectivity in PTSD. In addition, using the machine learning approach, we found that the altered functional measurements could differentiate patients from healthy controls. Our findings may have important implications for the pathophysiological basis underlying PTSD.
Post-traumatic stress disorder (PTSD) is a prevalent anxiety disorder that can develop after exposure to a traumatic event. Motor vehicle accidents have been regarded as the common cause of PTSD (Silove et al. 2006). PTSD patients often suffer from a number of symptoms, including intrusive recollections of the trauma, hyperarousal and hypervigilance, and avoidance of trauma reminders (Blake et al. 1995). Although the symptoms of PTSD are familiar and readily identifiable, the effective connectivity relationship between brain regions underlying PTSD remains unclear.
Recently, resting-state functional magnetic resonance imaging (fMRI) has been considered to be an effective noninvasive technique for investigating pathophysiological basis of psychiatric and neurological disorders (Barkhof et al. 2014). Although several previous fMRI studies have found disrupted functional connectivity between thalamus and other brain regions in PTSD (Kennis et al. 2013; Yin et al. 2011), the directionality of the influence between separate brain regions remains unclear.
Exploration of directed pathways of information transfer between brain regions is a key issue and helps to advance our understanding of abnormal brain function in psychiatric and neurological disorders. Granger causality (GC) was initially used to assess causal relationships between two time series in the economic sciences (Granger 1969). In our study, we applied the GC method to fMRI data to investigate directed dynamical connectivity, which can provide novel information toward demonstrating the effective connectivity relationship between brain areas. It is a method based on multiple linear regression for exploring whether one time series can correctly predict another (Stephan and Roebroeck 2012). Compared with the functional connectivity method, which calculates intrinsic connections between spatially distinct brain regions, the GC method has the advantage of revealing both the direction and the strength of the information flow in brain circuits.
In the present study, we used the voxel-wise GC method and selected bilateral thalamus as seed regions to evaluate altered directional connectivity patterns from and to the thalamus in the resting-state in PTSD patients. In addition, we used the machine learning approach to examine brain-based predictors of PTSD patients. We hypothesized that effective connectivity networks of thalamus were disrupted in PTSD patients. These findings may have important implications for understanding of the pathophysiological basis underlying PTSD and provide the new evidence for the abnormal connectivity in this disorder.
Twenty PTSD patients who had experienced motor vehicle accidents and twenty age-, sex-, and education-matched healthy controls (HCs) were recruited (age 32.92 ± 8.48 years and 31.53 ± 7.43 years, respectively; gender 13 male/7 female and 14 male/6 female, respectively; education 11.20 ± 3.80 years and 13.00 ± 2.20 years, respectively). PTSD diagnosis was made using the Clinician-Administered PTSD Scale for DSM-IV (CAPS-DX). All participants had no history of psychiatric or neurological disorders or head injury.
MRI data acquisition
All images were obtained by a 3.0 T Siemens MRI scanner (Trio; Siemens Medical, Erlangen, Germany). Resting-state fMRI data were acquired using the echo-planar imaging (EPI) sequence with the following protocols: repetition time (TR) = 2000 ms, echo time (TE) = 30 ms, flip angle (FA) = 90°, matrix = 64 × 64, slice thickness = 3 mm, transverse slices = 36, and field of view (FOV) = 220 mm × 220 mm.
All fMRI data were preprocessed using Data Processing Assistant for Resting-State fMRI (DPARSF) (Yan and Zang 2010). The first ten volumes were discarded for equilibrium. Then slice-timing correction and realignment for head motion correction were performed. No translation or rotation parameter for any participant exceeded 3 mm or 3°. In addition, the images were further spatially normalized to the Montreal Neurological Institute EPI template image, and each voxel was resampled to 3 × 3 × 3 mm³. Then, the data were spatially smoothed using a Gaussian kernel of 6 mm FWHM and detrended to remove linear trends. After this, several sources of spurious variance were removed from the data using linear regression, including the Friston-24 head motion parameters and the white matter and cerebrospinal fluid signals. Finally, the data were temporally band-pass filtered (0.01–0.08 Hz) to reduce the effects of low-frequency drift and high-frequency noise.
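For illustration only, the final band-pass step could be sketched as follows (a minimal scipy sketch, not the DPARSF implementation; the TR and cutoff frequencies come from the text, while the filter order and series length are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 2.0                  # repetition time in seconds (from the acquisition protocol)
fs = 1.0 / TR             # sampling frequency in Hz
low, high = 0.01, 0.08    # band-pass limits in Hz (from the text)

# Second-order Butterworth band-pass filter (order chosen for illustration)
b, a = butter(2, [low, high], btype="bandpass", fs=fs)

def bandpass(ts):
    """Zero-phase band-pass filtering of a 1-D voxel time series
    (assumed already detrended and nuisance-regressed)."""
    return filtfilt(b, a, ts)

voxel_ts = np.random.randn(230)   # placeholder time series (number of volumes assumed)
filtered = bandpass(voxel_ts)
```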
Granger causality method
The bilateral thalamus of the automated anatomical labeling template was selected as the region of interest for the effective connectivity analysis. GC was used to describe the effective connectivity between the seed regions and all other brain regions. The averaged time series of the seed region was defined as the seed time series X, and the time series Y represents the time series of voxels within the whole brain. The linear direct effect of X on Y ($F_{x \to y}$) and the linear direct effect of Y on X ($F_{y \to x}$) were calculated voxel by voxel within the whole brain. Therefore, two GC maps for each participant were obtained.
The calculation of the GC value was based on Geweke's feedback model (Geweke 1982).
The autoregressive representation:
$$Y_{t} = \mathop \sum \limits_{k = 1}^{p} b_{k} Y_{(t - k)} + cZ_{t} + \varepsilon_{t}$$
$$X_{t} = \mathop \sum \limits_{k = 1}^{p} b_{k}^{{\prime }} X_{(t - k)} + c^{{\prime }} Z_{t} + \varepsilon_{t}^{{\prime }} .$$
The joint regressive representation:
$$Y_{t} = \mathop \sum \limits_{k = 1}^{p} A_{k} X_{(t - k)} + \mathop \sum \limits_{k = 1}^{p} B_{k} Y_{{\left( {t - k} \right)}} + CZ_{t} + \mu_{t}$$
$$X_{t} = \mathop \sum \limits_{k = 1}^{p} A_{k}^{{\prime }} Y_{(t - k)} + \mathop \sum \limits_{k = 1}^{p} B_{k}^{{\prime }} X_{{\left( {t - k} \right)}} + C^{{\prime }} Z_{t} + \mu_{t}^{{\prime }} .$$
$$F_{x \to y} = { \ln }\frac{{{\text{var}}(\varepsilon_{t} )}}{{{\text{var}}(\mu_{t} )}}$$
$$F_{y \to x} = { \ln }\frac{{{\text{var}}(\varepsilon_{t}^{{\prime }} )}}{{{\text{var}}(\mu_{t}^{{\prime }} )}} ,$$
where $X_t$ is the seed region signal and $Y_t$ is the signal of the other voxels, $\varepsilon_{t}$ and $\varepsilon_{t}^{\prime}$ are the residuals of the autoregressive models, $\mu_{t}$ and $\mu_{t}^{\prime}$ are the residuals of the joint regressive models, and $Z_t$ is the covariate. $F_{x \to y}$ represents the directional influence from the time series X to Y. $F_{y \to x}$ represents the directional influence from the time series Y to X.
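In code, the pairwise GC index amounts to comparing the residual variance of the restricted model (the target's own past) with that of the full model (both series' pasts). A minimal sketch of such a computation (ordinary least squares, model order fixed by hand, covariates omitted; this is an illustration, not the code used in the study):

```python
import numpy as np

def granger_F(x, y, p=1):
    """Granger influence F_{x->y}: log ratio of the residual variance of the
    restricted model (y on its own past) to that of the full model
    (y on its own past plus the past of x)."""
    n = len(y)
    past = lambda s, k: s[p - k:n - k]          # series s shifted by lag k
    target = y[p:]
    # restricted design: lags of y only; full design: lags of y and of x
    Xr = np.column_stack([past(y, k) for k in range(1, p + 1)] + [np.ones(n - p)])
    Xf = np.column_stack([past(y, k) for k in range(1, p + 1)]
                         + [past(x, k) for k in range(1, p + 1)] + [np.ones(n - p)])
    res_r = target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]
    res_f = target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

# toy example: x drives y with a one-sample delay
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(300)
print(granger_F(x, y), granger_F(y, x))   # the first value should be clearly larger
```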
Mean values of the $F_{x \to y}$ and $F_{y \to x}$ maps were calculated. The two-sample t test was conducted on the GC data in SPM8 to test for group differences between the PTSD patients and HCs. The multiple comparison correction was conducted using the AlphaSim program in the REST software (http://resting-fmri.sourceforge.net). The significance level was set at p < 0.05.
Pattern classification
Pattern classification was performed to examine the group differences in more detail. The support vector machine (SVM) was applied to the GC measures that showed significant group differences in the statistical analysis. The SVM, based on the LIBSVM implementation with a linear kernel and default parameters, was used with a leave-one-out cross-validation procedure. Furthermore, the statistical significance of the classification was assessed using permutation testing.
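For illustration, a leave-one-out SVM classification of this kind might look roughly like the following (a minimal sketch using scikit-learn's LIBSVM-based SVC with a linear kernel and default parameters; the feature matrix below is random placeholder data, not the study's GC values):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 6))      # 40 subjects x 6 GC features (placeholder data)
y = np.array([1] * 20 + [0] * 20)     # 1 = PTSD patient, 0 = healthy control

clf = SVC(kernel="linear")            # LIBSVM-based classifier, default C

# leave-one-out predictions and decision scores
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
score = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="decision_function")

print("accuracy:", accuracy_score(y, pred))
print("AUC     :", roc_auc_score(y, score))
```

Statistical significance of the accuracy would then be assessed by repeating the procedure on label-permuted data, as described in the text.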
In the present study, $F_{x \to y}$ represents the strength of the directed information flow from the thalamus to the other brain regions. $F_{y \to x}$ represents the strength of the directed information flow from the other brain regions to the thalamus. We found altered $F_{x \to y}$ and $F_{y \to x}$ of the bilateral thalamus in the PTSD patients relative to the HCs using the two-sample t test.
Effective connectivity from and to the left thalamus
Compared with HCs, PTSD patients exhibited significantly increased effective connectivity from the left thalamus to left inferior frontal gyrus and insula (left thalamus with $F_{x \to y}$), and increased effective connectivity from the medial prefrontal cortex (MPFC) to left thalamus (left thalamus with $F_{y \to x}$) (p < 0.05, AlphaSim corrected) (shown in Table 1; Fig. 1).
Table 1 Altered effective connectivity from and to the left thalamus in PTSD patients
Between-group differences of effective connectivity from and to the left thalamus. a represents the effective connectivity from the left thalamus to other brain regions. b represents the effective connectivity from other brain regions to left thalamus. The red color represents the brain regions that show significantly increased effective connectivity
Effective connectivity from and to the right thalamus
Compared with HCs, PTSD patients also exhibited significantly increased effective connectivity from the right thalamus to left middle frontal gyrus and MPFC (right thalamus with $F_{x \to y}$), and significantly increased effective connectivity from the MPFC to right thalamus (right thalamus with $F_{y \to x}$) (p < 0.05, AlphaSim corrected) (shown in Table 2; Fig. 2).
Table 2 Altered effective connectivity from and to the right thalamus in PTSD patients
Between-group differences of effective connectivity from and to the right thalamus. a represents the effective connectivity from the right thalamus to other brain regions. b represents the effective connectivity from other brain regions to right thalamus. The red color represents the brain regions that show significantly increased effective connectivity
Overall classifier performance
The classification analysis showed a correct classification rate of 77.5% (p < 0.001), with a sensitivity of 70.0% and a specificity of 85.0%. Taking each subject's discriminative score as a threshold, the receiver operating characteristic (ROC) curve of the classifier was obtained (Fig. 3). The area under the ROC curve (AUC) was 0.895, indicating good classification power.
ROC for differentiating PTSD patients from healthy controls. ROC receiver operating characteristic curves, AUC area under the ROC curve
To the best of our knowledge, this is the first study using the GC method to examine the effective connectivity network associated with the thalamus in PTSD patients. We found that PTSD patients exhibited an abnormal directionality of influence both from and to the thalamus.
The GC method is a valuable tool for exploring effective connectivity in psychiatric and neurological disorders, and it can provide information about the dynamics and directionality of the BOLD (blood oxygenation level-dependent) signal in brain circuits. In the present study, an autoregressive model was applied for data analysis. We used the F value, based on the decrease in the variance of the residuals, to explore effective connectivity in the participants. Using the GC method, we could detect abnormalities in the directional flow of influence in PTSD patients. We found increased influence from the left thalamus to the left inferior frontal gyrus and insula, from the right thalamus to the left middle frontal gyrus, and mutually increased influences between the MPFC and the thalamus. The prefrontal cortex is thought to be involved in the encoding and retrieval of memories and in emotional processing (Etkin et al. 2011). The insula plays a key role in emotional and cognitive processing (Gu et al. 2013; Menon and Uddin 2010). Thus, the findings of our study may suggest abnormal emotional and cognitive processing in PTSD patients. One limitation is the small sample size of our study. Future studies should recruit larger samples and divide the patients into subgroups according to illness duration to confirm our findings.
In this study, we found, for the first time, abnormal effective connectivity in several thalamus-related pathways involved in emotional and cognitive processing. In addition, employing a machine learning approach, we found that the abnormal functional measurements could differentiate PTSD patients from HCs. Our findings add important insights into understanding the effective connectivity networks and neural circuitry underlying PTSD.
Barkhof F, Haller S, Rombouts SA (2014) Resting-state functional MR imaging: a new window to the brain. Radiology 272:29–49
Blake DD, Weathers FW, Nagy LM, Kaloupek DG, Gusman FD, Charney DS, Keane TM (1995) The development of a clinician-administered PTSD scale. J Trauma Stress 8:75–90
Etkin A, Egner T, Kalisch R (2011) Emotional processing in anterior cingulate and medial prefrontal cortex. Trends Cognit Sci 15:85–93
Geweke J (1982) Measurement of linear dependence and feedback between multiple time series. J Am Stat Assoc 77:304–313
Granger CW (1969) Investigating causal relations by econometric models and cross-spectral methods. Econom J Econom Soc 37:424–438
Gu X, Hof PR, Friston KJ, Fan J (2013) Anterior insular cortex and emotional awareness. J Comp Neurol 521:3371–3388
Kennis M, Rademaker AR, van Rooij SJ, Kahn RS, Geuze E (2013) Altered functional connectivity in posttraumatic stress disorder with versus without comorbid major depressive disorder: a resting state fMRI study. F1000Research 2:289. doi:10.12688/f1000research.2-289.v2
Menon V, Uddin LQ (2010) Saliency, switching, attention and control: a network model of insula function. Brain Struct Funct 214:655–667
Silove D, Brooks R, Steel Z, Blaszczynski A, Hillman K, Tyndall K (2006) Can structured interviews for posttraumatic stress disorder assist clinical decision-making after motor vehicle accidents? An exploratory analysis. Compr Psychiatry 47:194–200
Stephan KE, Roebroeck A (2012) A short history of causal modeling of fMRI data. Neuroimage 62:856–863
Yan C, Zang Y (2010) DPARSF: a MATLAB toolbox for "pipeline" data analysis of resting-state fMRI. Front Syst Neurosci 14:421–438
Yin Y, Jin C, Hu X, Duan L, Li Z, Song M, Chen H, Feng B, Jiang T, Jin H (2011) Altered resting-state functional connectivity of thalamus in earthquake-induced posttraumatic stress disorder: a functional magnetic resonance imaging study. Brain Res 1411:98–107
HC and YZ designed the study. YZ, HC, and ZL analyzed the imaging data. YZ, QC, and HC wrote the first draft of the manuscript. All authors read and approved the final manuscript.
The work is supported by the National High Technology Research and Development Program of China (863 Program) (2015AA020505), the Natural Science Foundation of China (61533006 and 31400901), the Fundamental Research Funds for the Central Universities (ZYGX2013Z004).
There are ethical restrictions on the data, and the authors do not have ethical approval to make the data publicly available. Please send requests via email to the corresponding author.
Key Laboratory for Neuroinformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
Youxue Zhang, Heng Chen, Zhiliang Long, Qian Cui & Huafu Chen
Correspondence to Huafu Chen.
Zhang, Y., Chen, H., Long, Z. et al. Altered effective connectivity network of the thalamus in post-traumatic stress disorder: a resting-state FMRI study with granger causality method. Appl Inform 3, 8 (2016). https://doi.org/10.1186/s40535-016-0025-y
Mat. Sb., 2000, Volume 191, Number 6, Pages 3–30 (Mi msb481)
Systems of random variables equivalent in distribution to the Rademacher system and $\mathscr K$-closed representability of Banach couples
S. V. Astashkin
Samara State University
Abstract: Necessary and sufficient conditions ensuring that one can select from a system $\{f_n\}_{n=1}^\infty$ of random variables on a probability space $(\Omega,\Sigma,\mathsf P)$ a subsystem $\{\varphi_i\}_{i=1}^\infty$ equivalent in distribution to the Rademacher system on $[0,1]$ are found. In particular, this is always possible if $\{f_n\}_{n=1}^\infty$ is a uniformly bounded orthonormal sequence. The main role in the proof is played by the connection (discovered in this paper) between the equivalence in distribution of random variables and the behaviour of the $L_p$-norms of the corresponding polynomials.
An application of the results obtained to the study of the ${\mathscr K}$-closed representability of Banach couples is presented.
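For orientation (standard background, not part of the abstract): the Rademacher functions on $[0,1]$ and the classical Khintchine inequality, which links the $L_p$-norms of Rademacher polynomials to their coefficient sequence, read
$$ r_n(t)=\operatorname{sign}\bigl(\sin(2^{n}\pi t)\bigr),\qquad t\in[0,1],\ n\ge 1, $$
$$ A_p\Bigl(\sum_{n=1}^{N}a_n^{2}\Bigr)^{1/2}\le\Bigl\|\sum_{n=1}^{N}a_n r_n\Bigr\|_{L_p[0,1]}\le B_p\Bigl(\sum_{n=1}^{N}a_n^{2}\Bigr)^{1/2},\qquad 0<p<\infty, $$
with constants $A_p,B_p>0$ depending only on $p$. The paper's comparison of systems with the Rademacher system proceeds via this kind of control of $L_p$-norms of the corresponding polynomials.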
DOI: https://doi.org/10.4213/sm481
Sbornik: Mathematics, 2000, 191:6, 779–807
UDC: 517.5+517.982
MSC: 28A20, 60E99
Citation: S. V. Astashkin, "Systems of random variables equivalent in distribution to the Rademacher system and $\mathscr K$-closed representability of Banach couples", Mat. Sb., 191:6 (2000), 3–30; Sb. Math., 191:6 (2000), 779–807
2016, 13(3): 461-481. doi: 10.3934/mbe.2016001
The effect of positive interspike interval correlations on neuronal information transmission
Sven Blankenburg 1, and Benjamin Lindner 1,
Bernstein Center for Computational Neuroscience Berlin, Berlin 10115, Germany
Received April 2015 Revised June 2015 Published January 2016
Experimentally it is known that some neurons encode preferentially information about low-frequency (slow) components of a time-dependent stimulus while others prefer intermediate or high-frequency (fast) components. Accordingly, neurons can be categorized as low-pass, band-pass or high-pass information filters. Mechanisms of information filtering at the cellular and the network levels have been suggested. Here we propose yet another mechanism, based on noise shaping due to spontaneous non-renewal spiking statistics. We compare two integrate-and-fire models with threshold noise that differ solely in their interspike interval (ISI) correlations: the renewal model generates independent ISIs, whereas the non-renewal model exhibits positive correlations between adjacent ISIs. For these simplified neuron models we analytically calculate ISI density and power spectrum of the spontaneous spike train as well as approximations for input-output cross-spectrum and spike-train power spectrum in the presence of a broad-band Gaussian stimulus. This yields the spectral coherence, an approximate frequency-resolved measure of information transmission. We demonstrate that for low spiking variability the renewal model acts as a low-pass filter of information (coherence has a global maximum at zero frequency), whereas the non-renewal model displays a pronounced maximum of the coherence at non-vanishing frequency and thus can be regarded as a band-pass filter of information.
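The following toy simulation is an illustration only; the parameters and the threshold-noise mechanism are assumptions rather than the paper's model. It shows how threshold noise with and without memory across spikes produces positively correlated and renewal-like interspike intervals (ISIs), respectively.

```python
# Leaky integrate-and-fire neuron with a noisy firing threshold (toy parameters).
import numpy as np

def simulate_isis(correlated_threshold, n_spikes=1000, seed=0):
    """Return interspike intervals of an LIF neuron with threshold noise."""
    rng = np.random.default_rng(seed)
    dt, tau_m, mu, sigma = 1e-4, 0.01, 1.3, 0.3
    a = 0.9 if correlated_threshold else 0.0   # memory of the threshold noise across spikes
    eps, v, t, t_last = 0.0, 0.0, 0.0, 0.0
    isis = []
    while len(isis) < n_spikes:
        v += (mu - v) * dt / tau_m + sigma * np.sqrt(dt) * rng.normal()
        t += dt
        if v >= 1.0 + eps:                     # noisy threshold crossed: spike
            isis.append(t - t_last)
            t_last, v = t, 0.0                 # reset membrane potential
            eps = a * eps + 0.2 * np.sqrt(1.0 - a**2) * rng.normal()
    return np.asarray(isis)

def serial_correlation(isis, lag=1):
    return float(np.corrcoef(isis[:-lag], isis[lag:])[0, 1])

for correlated in (False, True):
    rho = serial_correlation(simulate_isis(correlated))
    print(f"correlated threshold noise: {correlated}, ISI correlation rho_1 = {rho:+.3f}")
```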
Keywords: information filtering, neural signal transmission, non-renewal point process, stochastic neuron models.
Mathematics Subject Classification: 93E03, 93E11, 94A12, 94A15, 94A24, 60G10, 60G15, 60G35, 60G50.
Citation: Sven Blankenburg, Benjamin Lindner. The effect of positive interspike interval correlations on neuronal information transmission. Mathematical Biosciences & Engineering, 2016, 13 (3) : 461-481. doi: 10.3934/mbe.2016001
Vapour-bubble nucleation and dynamics in turbulent Rayleigh–Bénard convection
Daniela Narezo Guzman, Tomasz Frączek, Christopher Reetz, Chao Sun, Detlef Lohse, Guenter Ahlers
Journal: Journal of Fluid Mechanics / Volume 795 / 25 May 2016
Print publication: 25 May 2016
Vapour bubbles nucleating at micro-cavities etched into the silicon bottom plate of a cylindrical Rayleigh–Bénard sample (diameter $D=8.8$ cm, aspect ratio ${\it\Gamma}\equiv D/L\simeq 1.00$ where $L$ is the sample height) were visualized from the top and from the side. A triangular array of cylindrical micro-cavities (with a diameter of $30~{\rm\mu}\text{m}$ and a depth of $100~{\rm\mu}\text{m}$ ) covered a circular centred area (diameter of 2.5 cm) of the bottom plate. Heat was applied to the sample only over this central area while cooling was over the entire top-plate area. Bubble sizes and frequencies of departure from the bottom plate are reported for a range of bottom-plate superheats $T_{b}-T_{on}$ ( $T_{b}$ is the bottom-plate temperature, $T_{on}$ is the onset temperature of bubble nucleation) from 3 to 12 K for three different cavity separations. The difference $T_{b}-T_{t}\simeq 16$ K between $T_{b}$ and the top plate temperature $T_{t}$ was kept fixed while the mean temperature $T_{m}=(T_{b}+T_{t})/2$ was varied, leading to a small range of the Rayleigh number $Ra$ from $1.4\times 10^{10}$ to $2.0\times 10^{10}$ . The time between bubble departures from a given cavity decreased exponentially with increasing superheat and was independent of cavity separation. The contribution of the bubble latent heat to the total enhancement of heat transferred due to bubble nucleation was found to increase with superheat, reaching up to 25 %. The bubbly flow was examined in greater detail for a superheat of 10 K and $Ra\simeq 1.9\times 10^{10}$ . The condensation and/or dissolution rates of departed bubbles revealed two regimes: the initial rate was influenced by steep thermal gradients across the thermal boundary layer near the plate and was two orders of magnitude larger than the final condensation and/or dissolution rate that prevailed once the rising bubbles were in the colder bulk flow of nearly uniform temperature. The dynamics of thermal plumes was studied qualitatively in the presence and absence of nucleating bubbles. It was found that bubbles enhanced the plume velocity by a factor of four or so and drove a large-scale circulation (LSC). Nonetheless, even in the presence of bubbles the plumes and LSC had a characteristic velocity which was smaller by a factor of five or so than the bubble-rise velocity in the bulk. In the absence of bubbles there was strongly turbulent convection but no LSC, and plumes on average rose vertically.
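For readers who want to relate the quoted control parameters, a small helper of the following kind converts fluid properties into the Rayleigh number, assuming the standard definition $Ra = g\alpha\,\Delta T\,L^{3}/(\nu\kappa)$; the property values below are placeholders, not those of the working liquid used in the experiments.

```python
# Hedged illustration: the abstract quotes Ra ~ 1.4-2.0e10 for L ~ 8.8 cm and
# T_b - T_t ~ 16 K; the fluid properties here are assumed, not from the paper.
def rayleigh_number(g, alpha, delta_T, height, nu, kappa):
    """Ra = g * alpha * delta_T * height**3 / (nu * kappa)."""
    return g * alpha * delta_T * height**3 / (nu * kappa)

print(f"Ra ~ {rayleigh_number(9.81, 1.3e-3, 16.0, 0.088, 3.0e-7, 4.0e-8):.2e}")
```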
Heat-flux enhancement by vapour-bubble nucleation in Rayleigh–Bénard turbulence
Daniela Narezo Guzman, Yanbo Xie, Songyue Chen, David Fernandez Rivas, Chao Sun, Detlef Lohse, Guenter Ahlers
Journal: Journal of Fluid Mechanics / Volume 787 / 25 January 2016
Print publication: 25 January 2016
We report on the enhancement of turbulent convective heat transport due to vapour-bubble nucleation at the bottom plate of a cylindrical Rayleigh–Bénard sample (aspect ratio 1.00, diameter 8.8 cm) filled with liquid. Microcavities acted as nucleation sites, allowing for well-controlled bubble nucleation. Only the central part of the bottom plate with a triangular array of microcavities (etched over an area with diameter of 2.5 cm) was heated. We studied the influence of the cavity density and of the superheat $T_{b}-T_{on}$ ( $T_{b}$ is the bottom-plate temperature and $T_{on}$ is the value of $T_{b}$ below which no nucleation occurred). The effective thermal conductivity, as expressed by the Nusselt number $\mathit{Nu}$ , was measured as a function of the superheat by varying $T_{b}$ and keeping a fixed difference $T_{b}-T_{t}\simeq 16$ K ( $T_{t}$ is the top-plate temperature). Initially $T_{b}$ was much larger than $T_{on}$ (large superheat), and the cavities vigorously nucleated vapour bubbles, resulting in two-phase flow. Reducing $T_{b}$ in steps until it was below $T_{on}$ resulted in cavity deactivation, i.e. in one-phase flow. Once all cavities were inactive, $T_{b}$ was increased again, but they did not reactivate. This led to one-phase flow for positive superheat. The heat transport of both one- and two-phase flow under nominally the same thermal forcing and degree of superheat was measured. The Nusselt number of the two-phase flow was enhanced relative to the one-phase system by an amount that increased with increasing $T_{b}$ . Varying the cavity density (69, 32, 3.2, 1.2 and $0.3~\text{mm}^{-2}$ ) had only a small effect on the global $\mathit{Nu}$ enhancement; it was found that $\mathit{Nu}$ per active site decreased as the cavity density increased. The heat-flux enhancement of an isolated nucleating site was found to be limited by the rate at which the cavity could generate bubbles. Local bulk temperatures of one- and two-phase flows were measured at two positions along the vertical centreline. Bubbles increased the liquid temperature (compared to one-phase flow) as they rose. The increase was correlated with the heat-flux enhancement. The temperature fluctuations, as well as local thermal gradients, were reduced (relative to one-phase flow) by the vapour bubbles. Blocking the large-scale circulation around the nucleating area, as well as increasing the effective buoyancy of the two-phase flow by thermally isolating the liquid column above the heated area, increased the heat-flux enhancement.
The importance of bubble deformability for strong drag reduction in bubbly turbulent Taylor–Couette flow
Dennis P. M. van Gils, Daniela Narezo Guzman, Chao Sun, Detlef Lohse
Bubbly turbulent Taylor–Couette (TC) flow is globally and locally studied at Reynolds numbers of $\mathit{Re}=5\times 10^{5}$ to $2\times 10^{6}$ with a stationary outer cylinder and a mean bubble diameter around 1 mm. We measure the drag reduction (DR) based on the global dimensional torque as a function of the global gas volume fraction $\alpha_{global}$ over the range 0–4 %. We observe a moderate DR of up to 7 % for $\mathit{Re}=5.1\times 10^{5}$. Significantly stronger DR is achieved for $\mathit{Re}=1.0\times 10^{6}$ and $2.0\times 10^{6}$ with, remarkably, more than $40\,\%$ of DR at $\mathit{Re}=2.0\times 10^{6}$ and $\alpha_{global}=4\,\%$. To shed light on the two apparently different regimes of moderate DR and strong DR, we investigate the local liquid flow velocity and the local bubble statistics, in particular the radial gas concentration profiles and the bubble size distribution, for the two different cases: $\mathit{Re}=5.1\times 10^{5}$ in the moderate DR regime and $\mathit{Re}=1.0\times 10^{6}$ in the strong DR regime, both at $\alpha_{global}=3\pm 0.5\,\%$. In both cases the bubbles mostly accumulate close to the inner cylinder (IC). Surprisingly, the maximum local gas concentration near the IC for $\mathit{Re}=1.0\times 10^{6}$ is ${\approx}2.3$ times lower than that for $\mathit{Re}=5.1\times 10^{5}$, in spite of the stronger DR. Evidently, a higher local gas concentration near the inner wall does not guarantee a larger DR. By defining and measuring a local bubble Weber number ($\mathit{We}$) in the TC gap close to the IC wall, we observe that the cross-over from the moderate to the strong DR regime occurs roughly at the cross-over of $\mathit{We}\sim 1$. In the strong DR regime at $\mathit{Re}=1.0\times 10^{6}$ we find $\mathit{We}>1$, reaching a value of $9(+7,-2)$ when approaching the inner wall, indicating that the bubbles increasingly deform as they draw near the inner wall. In the moderate DR regime at $\mathit{Re}=5.1\times 10^{5}$ we find $\mathit{We}\approx 1$, indicating more rigid bubbles, even though the mean bubble diameter is larger, namely $1.2(+0.7,-0.1)~\mathrm{mm}$, as compared with the $\mathit{Re}=1.0\times 10^{6}$ case, where it is $0.9(+0.6,-0.1)~\mathrm{mm}$. We conclude that bubble deformability is a relevant mechanism behind the observed strong DR. These local results match and extend the conclusions from the global flow experiments as found by van den Berg et al. (Phys. Rev. Lett., vol. 94, 2005, p. 044501) and from the numerical simulations by Lu, Fernandez & Tryggvason (Phys. Fluids, vol. 17, 2005, p. 95102).
Chapter 28 Special Relativity
28.5 Relativistic Momentum
Calculate relativistic momentum.
Explain why the only mass it makes sense to talk about is rest mass.
Figure 1. Momentum is an important concept for these football players from the University of California at Berkeley and the University of California at Davis. Players with more mass often have a larger impact because their momentum is larger. For objects moving at relativistic speeds, the effect is even greater. (credit: John Martinez Pavliga)
In classical physics, momentum is a simple product of mass and velocity. However, we saw in the last section that when special relativity is taken into account, massive objects have a speed limit. What effect do you think mass and velocity have on the momentum of objects moving at relativistic speeds?
Momentum is one of the most important concepts in physics. The broadest form of Newton's second law is stated in terms of momentum. Momentum is conserved whenever the net external force on a system is zero. This makes momentum conservation a fundamental tool for analyzing collisions. All of Chapter 8 Linear Momentum and Collisions is devoted to momentum, and momentum has been important for many other topics as well, particularly where collisions were involved. We will see that momentum has the same importance in modern physics. Relativistic momentum is conserved, and much of what we know about subatomic structure comes from the analysis of collisions of accelerator-produced relativistic particles.
The first postulate of relativity states that the laws of physics are the same in all inertial frames. Does the law of conservation of momentum survive this requirement at high velocities? The answer is yes, provided that the momentum is defined as follows.
Relativistic Momentum
Relativistic momentum [latex]\boldsymbol{p}[/latex] is classical momentum multiplied by the relativistic factor [latex]\boldsymbol{\gamma}[/latex].
[latex]\boldsymbol{p = \gamma mu},[/latex]
where [latex]\boldsymbol{m}[/latex] is the rest mass of the object, [latex]\boldsymbol{u}[/latex] is its velocity relative to an observer, and the relativistic factor
[latex]\boldsymbol{\gamma =}[/latex][latex]\boldsymbol{\frac{1}{\sqrt{1- \frac{u^2}{c^2}}}}.[/latex]
Note that we use [latex]\boldsymbol{u}[/latex] for velocity here to distinguish it from relative velocity [latex]\boldsymbol{v}[/latex] between observers. Only one observer is being considered here. With [latex]\boldsymbol{p}[/latex] defined in this way, total momentum [latex]\boldsymbol{p_{\textbf{tot}}}[/latex] is conserved whenever the net external force is zero, just as in classical physics. Again we see that the relativistic quantity becomes virtually the same as the classical at low velocities. That is, relativistic momentum [latex]\boldsymbol{\gamma mu}[/latex] becomes the classical [latex]\boldsymbol{mu}[/latex] at low velocities, because [latex]\boldsymbol{\gamma}[/latex] is very nearly equal to 1 at low velocities.
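A short numerical illustration of this limit (not part of the original text) compares relativistic and classical momentum over a range of speeds; the ratio is simply the factor gamma, which is nearly 1 at low velocities and diverges as the speed approaches c.

```python
# Relativistic momentum p = gamma * m * u compared with the classical value m * u.
import math

c = 3.00e8  # speed of light [m/s]

def gamma(u):
    return 1.0 / math.sqrt(1.0 - (u / c) ** 2)

def relativistic_momentum(m, u):
    return gamma(u) * m * u

m_electron = 9.11e-31  # rest mass [kg]
for beta in (0.01, 0.5, 0.9, 0.985, 0.999):
    u = beta * c
    ratio = relativistic_momentum(m_electron, u) / (m_electron * u)
    print(f"u = {beta:.3f}c: p_rel / p_classical = {ratio:.3f}")
```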
Relativistic momentum has the same intuitive feel as classical momentum. It is greatest for large masses moving at high velocities, but, because of the factor [latex]\boldsymbol{\gamma}[/latex], relativistic momentum approaches infinity as [latex]\boldsymbol{u}[/latex] approaches [latex]\boldsymbol{c}[/latex]. (See Figure 2.) This is another indication that an object with mass cannot reach the speed of light. If it did, its momentum would become infinite, an unreasonable value.
Figure 2. Relativistic momentum approaches infinity as the velocity of an object approaches the speed of light.
Misconception Alert: Relativistic Mass and Momentum
The relativistically correct definition of momentum as [latex]\boldsymbol{p = \gamma mu}[/latex] is sometimes taken to imply that mass varies with velocity: [latex]\boldsymbol{m_{\textbf{var}} = \gamma m}[/latex], particularly in older textbooks. However, note that [latex]\boldsymbol{m}[/latex] is the mass of the object as measured by a person at rest relative to the object. Thus, [latex]\boldsymbol{m}[/latex] is defined to be the rest mass, which could be measured at rest, perhaps using gravity. When a mass is moving relative to an observer, the only way that its mass can be determined is through collisions or other means in which momentum is involved. Since the mass of a moving object cannot be determined independently of momentum, the only meaningful mass is rest mass. Thus, when we use the term mass, assume it to be identical to rest mass.
Relativistic momentum is defined in such a way that the conservation of momentum will hold in all inertial frames. Whenever the net external force on a system is zero, relativistic momentum is conserved, just as is the case for classical momentum. This has been verified in numerous experiments.
In Chapter 28.6 Relativistic Energy, the relationship of relativistic momentum to energy is explored. That subject will produce our first inkling that objects without mass may also have momentum.
What is the momentum of an electron traveling at a speed [latex]\boldsymbol{0.985c}[/latex]? The rest mass of the electron is [latex]\boldsymbol{9.11 \times 10^{-31} \; \textbf{kg}}[/latex].
The law of conservation of momentum is valid whenever the net external force is zero and for relativistic momentum. Relativistic momentum [latex]\boldsymbol{p}[/latex] is classical momentum multiplied by the relativistic factor [latex]\boldsymbol{\gamma}[/latex].
[latex]\boldsymbol{p = \gamma mu}[/latex], where [latex]\boldsymbol{m}[/latex] is the rest mass of the object, [latex]\boldsymbol{u}[/latex] is its velocity relative to an observer, and the relativistic factor [latex]\boldsymbol{\gamma = \frac{1}{\sqrt{1 - \frac{u^2}{c^2}}}}[/latex].
At low velocities, relativistic momentum is equivalent to classical momentum.
Relativistic momentum approaches infinity as [latex]\boldsymbol{u}[/latex] approaches [latex]\boldsymbol{c}[/latex]. This implies that an object with mass cannot reach the speed of light.
Relativistic momentum is conserved, just as classical momentum is conserved.
1: How does modern relativity modify the law of conservation of momentum?
2: Is it possible for an external force to be acting on a system and relativistic momentum to be conserved? Explain.
Problem Exercises
1: Find the momentum of a helium nucleus having a mass of [latex]\boldsymbol{6.68 \times 10^{-27} \;\textbf{kg}}[/latex] that is moving at [latex]\boldsymbol{0.200c}[/latex].
2: What is the momentum of an electron traveling at [latex]\boldsymbol{0.980c}[/latex]?
3: (a) Find the momentum of a [latex]\boldsymbol{1.00 \times 10^9 \;\textbf{kg}}[/latex] asteroid heading towards the Earth at [latex]\boldsymbol{30.0 \;\textbf{km/s}}[/latex]. (b) Find the ratio of this momentum to the classical momentum. (Hint: Use the approximation that [latex]\boldsymbol{\gamma = 1+(1/2)v^2/c^2}[/latex] at low velocities.)
4: (a) What is the momentum of a 2000 kg satellite orbiting at 4.00 km/s? (b) Find the ratio of this momentum to the classical momentum. (Hint: Use the approximation that [latex]\boldsymbol{\gamma = 1+(1/2)v^2/c^2}[/latex] at low velocities.)
5: What is the velocity of an electron that has a momentum of [latex]\boldsymbol{3.04 \times 10^{-21} \;\textbf{kg} \cdot \textbf{m/s}}[/latex]? Note that you must calculate the velocity to at least four digits to see the difference from c.
6: Find the velocity of a proton that has a momentum of [latex]\boldsymbol{4.48 \times 10^{-19} \;\textbf{kg} \cdot \textbf{m/s}}[/latex].
7: (a) Calculate the speed of a [latex]\boldsymbol{1.00 - \mu g}[/latex] particle of dust that has the same momentum as a proton moving at [latex]\boldsymbol{0.999c}[/latex]. (b) What does the small speed tell us about the mass of a proton compared to even a tiny amount of macroscopic matter?
8: (a) Calculate [latex]\boldsymbol{\gamma}[/latex] for a proton that has a momentum of [latex]\boldsymbol{1.00 \;\textbf{kg} \cdot \textbf{m/s}}[/latex]. (b) What is its speed? Such protons form a rare component of cosmic radiation with uncertain origins.
[latex]\boldsymbol{p}[/latex], the momentum of an object moving at relativistic velocity; [latex]\boldsymbol{p = \gamma mu}[/latex], where [latex]\boldsymbol{m}[/latex] is the rest mass of the object, [latex]\boldsymbol{u}[/latex] is its velocity relative to an observer, and the relativistic factor [latex]\boldsymbol{\gamma = \frac{1}{\sqrt{1- \frac{u^2}{c^2}}}}[/latex]
the mass of an object as measured by a person at rest relative to the object
[latex]\boldsymbol{p = \gamma mu =}[/latex][latex]\boldsymbol{\frac{mu}{\sqrt{1- \frac{u^2}{c^2}}}}[/latex][latex]\boldsymbol{=}[/latex][latex]\boldsymbol{\frac{(9.11 \times 10^{-31}\;\textbf{kg})(0.985)(3.00 \times 10^8 \;\textbf{m/s})}{\sqrt{1 - \frac{(0.985c)^2}{c^2}}}}[/latex][latex]\boldsymbol{= 1.56 \times 10^{-21} \;\textbf{kg} \cdot \textbf{m/s}}[/latex]
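As a quick numerical check of the solution above (using the same rounded constants):

```python
# Verify p = gamma * m * u for the electron at u = 0.985c.
import math

m, beta, c = 9.11e-31, 0.985, 3.00e8
p = m * beta * c / math.sqrt(1 - beta**2)
print(f"p = {p:.3e} kg*m/s")   # ~1.56e-21, matching the value quoted above
```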
1: [latex]\boldsymbol{4.09 \times 10^{-19} \;\textbf{kg} \cdot \textbf{m/s}}[/latex]
3: (a) [latex]\boldsymbol{3.000000015 \times 10^{13} \;\textbf{kg} \cdot \textbf{m/s}}[/latex].
(b) Ratio of relativistic to classical momenta equals 1.000000005 (extra digits to show small effects)
5: [latex]\boldsymbol{2.9957 \times 10^8 \;\textbf{m/s}}[/latex]
7: (a) [latex]\boldsymbol{1.121 \times 10^{-8} \;\textbf{m/s}}[/latex]
(b) The small speed tells us that the mass of a proton is substantially smaller than that of even a tiny amount of macroscopic matter!
|
Effects of reverse deployment of cone-shaped vena cava filter on improvements in hemodynamic performance in vena cava
Ying Chen1,2,3,4,
Zaipin Xu5,
Xiaoyan Deng4,6,
Shibo Yang5,
Wenchang Tan2,3,
Yubo Fan4,
Yong Han7 &
Yubin Xing8
BioMedical Engineering OnLine volume 20, Article number: 19 (2021)
Cone-shaped vena cava filters (VCFs) are widely used to treat venous thromboembolism. However, in the long term, the problem of occlusion persists even after the filter is deployed. A previous study hypothesized that the reverse deployment of a cone-shaped VCF may prevent filter blockage.
To explore this hypothesis, a comparative study of the traditional and reverse deployments of VCFs was conducted using a computational fluid dynamics approach. The distribution of wall shear stress (WSS) and shear stress-related parameters were calculated to evaluate the differences in hemodynamic effects between both conditions. In the animal experiment, we reversely deployed a filter in the vena cava of a goat and analyzed the blood clot distribution in the filter.
The numerical simulation showed that the reverse deployment of a VCF resulted in a slightly higher shear rate on the thrombus, and no reductions in the oscillatory shear index (OSI) and relative residence time (RRT) on the vessel wall. Comparing the traditional and reversely deployed cases, the shear rate values are 16.49 and 16.48 1/s, respectively; the minimal OSI values are 0.01 and 0.04, respectively; in the vicinity of the VCF, the RRT values are both approximately 5 1/Pa; and the WSS is approximately 0.3 Pa for both cases. Therefore, the reverse deployment of cone-shaped filters is not advantageous when compared with the traditional method in terms of local hemodynamics. However, it is effective in capturing thrombi in the short term, as demonstrated via animal experiments. The reversely deployed cone-shaped filter captured the thrombi at its center in the experiments.
Thus, the reverse deployment of cone-shaped filters is not advantageous when compared with the traditional method in terms of local hemodynamics. Therefore, we would not suggest the reverse deployment of the cone-shaped filter in the vena cava to prevent a potentially fatal pulmonary embolism.
Venous thromboembolism (VTE), including deep vein thrombosis (DVT) and pulmonary embolism (PE), is a leading global cause of high morbidity and mortality [1]. Anticoagulants are the first choice for the prevention and treatment of VTE [2]. However, when the use of anticoagulants is contraindicated or VTE recurs despite optimal anticoagulation, vena cava filter (VCF) intervention is recommended to prevent a potentially fatal PE [3, 4]. Commercially available cone-shaped VCFs, such as Option™, Vena Tech, and Greenfield filters, are widely used clinically. However, these filters tend to trap blood clots in their cone-shaped centers [5] (Fig. 1Ia), and the filter blockage problem therefore remains unresolved [6, 7].
Schematic drawings of traditional and reverse placements of VCF. I: a traditional deployment; b reverse deployment. Note that this is only a diagram and does not represent the specific size of the vena cava or VCF. II: Hagen–Poiseuille flow shear-stress distribution. VCF: vena cava filter
To address this issue, a hypothesis of the reverse deployment method for a cone-shaped filter was proposed [8] (Fig. 1Ib). Under this hypothesis, in the flow through a circular tube, the flow-induced shear stress is distributed according to Poiseuille's law as follows:
$$\tau (r) = \frac{4Q\mu }{{\pi R^{4} }}r.$$
Here, Q represents the volume flow rate, \(\mu\) denotes the viscosity, R denotes the radius of the tube, \(\pi\) is the circular constant, and r represents the radial coordinate, ranging from 0 at the axis to R at the wall. Therefore, the shear stress at the center is evidently lower than the shear stress in the peripheral area [8]. It is proposed that, by reversing the filter, the blood flow stream would force the blood clots to remain within the peripheral areas of the vena cava, thereby leaving the central passage open, whereas the higher flow-induced wall shear stress (WSS) in the peripheral area can dissolve the captured blood clots. Regions of low shear stress and slow flow can be thrombogenic because of the accumulation of thrombin [9]. High shear stress can enhance the removal of thrombin and fibrin, thereby reducing the likelihood of secondary hemostasis [9]. High levels of shear stress can also stimulate endothelial cells to secrete tissue plasminogen activator, reducing the risk of hemostasis [10]. Based on this hypothesis, the WSSs acting on the thrombus (central versus peripheral) should be compared. If the WSS on a thrombus deposited at the periphery is higher than that at the center, the thrombus may be dissolved. Shear stress is the stress component coplanar with the material cross-section, so the friction between the blood and the thrombus is the key factor. Therefore, this study investigated the hemodynamics of a reversely deployed traditional VCF.
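A rough numerical illustration of this profile is given below; the flow rate, viscosity, and caval radius are order-of-magnitude assumptions, not values taken from this study.

```python
# Poiseuille shear-stress profile tau(r) = 4*Q*mu*r / (pi*R^4) with assumed parameters.
import math

Q = 2.0e-5    # volume flow rate [m^3/s] (~1.2 L/min, assumed)
mu = 3.5e-3   # blood dynamic viscosity [Pa*s] (assumed)
R = 0.010     # vena cava radius [m] (assumed)

def tau(r):
    return 4.0 * Q * mu * r / (math.pi * R**4)

for frac in (0.0, 0.5, 1.0):
    print(f"r = {frac:.1f} R: tau = {tau(frac * R):.3f} Pa")   # zero at the axis, maximal at the wall
```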
We designed a numerical study and an animal experiment to assess whether the reverse deployment of a cone-shaped VCF could indeed improve the hemodynamic performance of the filter by inducing high shear stress. As the hypothesis indicates, reverse deployment would induce higher shear stress, which could dissolve thrombin faster. Therefore, through a numerical study, we compared the reverse deployment method with the conventional method, focusing mainly on the shear stress and shear stress-related parameters, such as the WSS, oscillatory shear index (OSI), and relative residence time (RRT). In the animal experiment, we reversely deployed a cone-shaped filter in the vena cava of a goat and analyzed the blood clot distribution in the filter.
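For reference, the shear-based indices compared in this study are commonly computed from a WSS time series as sketched below; the usual TAWSS/OSI/RRT definitions are assumed here, and the waveform is synthetic rather than taken from the simulations.

```python
# TAWSS, OSI and RRT from a (synthetic) wall-shear-stress waveform over one cycle.
import numpy as np

t = np.linspace(0.0, 1.0, 200, endpoint=False)   # one cardiac cycle, arbitrary units
wss = 0.1 + 0.3 * np.sin(2 * np.pi * t)          # synthetic axial WSS [Pa], partly reversing

tawss = np.mean(np.abs(wss))                                      # time-averaged WSS
osi = 0.5 * (1.0 - np.abs(np.mean(wss)) / np.mean(np.abs(wss)))   # oscillatory shear index
rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)                           # relative residence time

print(f"TAWSS = {tawss:.3f} Pa, OSI = {osi:.3f}, RRT = {rrt:.2f} 1/Pa")
```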
Shear rate
Several representative cross sections were selected in the vena cava model to compare the average shear rate in the different cases, and the area-weighted average shear rates for six representative cross sections were plotted (S1–S6, Fig. 2).
Shear rate for the four cases. Right: six cross sections in the vena cava model, with each adjacent side distance measuring 10 mm. Left: plots of the shear rate and area-weighted average shear rate at six representative cross sections of four cases from steady-flow computations
Models, inlet waveform velocity, and streamlines. I: Model diagrams of the four cases with and without thrombi. The diagram also shows the cone-shaped filter in the vena cava used in the computations. II: peak systole (time = t2) velocity streamline for the four cases from pulsatile flow computations
Figure 2 illustrates the shear rate profiles of the six representative cross sections of the vena cava based on different cases. The shear rate of the blood flow adjacent to the filters exceeded that in other regions within the vena cava in all models. Near the filter, Cases 2 and 4 clearly induced higher shear rates with thrombi from S2 to S4; the maximum shear rates for Cases 2 and 4 are shown in S3. By comparison, the shear rates for Cases 1 and 3, without thrombi, were lower. In S5 and S6, which are located far from the filter and thrombi, no evident difference between the different cases was found.
Flow pattern
Figure 3II presents the peak systole (time = t2) velocity streamlines for the four cases colored by the velocity magnitude. In Cases 2 and 4 with thrombi, the velocity clearly increased near the filter and thrombus. For Case 2, at the bottom of the cone-shaped thrombus, the streamline was clearly red, which represents a high velocity. The streamlines are smoother than those in Cases 1 and 3 without a thrombus.
Velocity distributions from pulsatile flow computations. Time = t1 and time = t2 velocity distributions from pulsatile flow computations
Figures 4 and 5 present the velocity profiles within the six planes of each vena cava during the cardiac cycle. Notably, the cutting planes (S1–S6 in Fig. 2) are located at the same positions for all cases. Because the thrombus does not occupy the same position relative to these planes in each case, the velocity contours differ. For instance, S3 is at approximately the middle of the thrombus in Case 4, whereas S2 is at approximately the middle of the thrombus in Case 2.
Contours of TAWSS and transverse WSS from pulsatile flow computations. I: contours of TAWSS on the caval wall and filter. II: contours of transverse TAWSS on the caval wall and filter. TAWSS: time-averaged wall shear stress
In general, among all the cases considered, none of the models significantly changed the blood flow velocity within the vena cava during the overall cardiac cycle, except in the immediate vicinity of the filter. These results are consistent with those obtained in a previous study by Singer et al., who reported that different thrombus shapes generally result in similar shear stresses and velocity profiles when thrombi are trapped within the central inferior vena cava [11]. However, the increase in the flow velocity in the periphery of a filter is more pronounced because of the existence of a thrombus. For example, in S3 of both Cases 2 and 4, the velocity in the periphery of the thrombi is evidently higher. In conclusion, in Cases 2 and 4, the presence of thrombi leads to higher velocities along their peripheries.
Time-averaged and transverse WSS on the vessel wall and filter/thrombus
The time-averaged wall shear stress (TAWSS) contours of the vessel models in the four cases are shown in Fig. 6I. The TAWSSs on the vessel wall for these cases are qualitatively similar. Specifically, in the vicinity of the filter, the TAWSS on the vessel wall increases with the existence of thrombi. In Cases 3 and 4, in which the filters are reversely deployed, the TAWSS of the cone vertex of the filter is slightly higher than in Cases 1 and 2, in which the filters are traditionally deployed. As shown in Fig. 6I, the TAWSS on the filter is extremely high in the absence of a thrombus. When thrombi are present in the center of the filter, the TAWSS of Case 2 is evidently higher than the TAWSS of Case 4, particularly at the bottom of the thrombus cone. Furthermore, Fig. 6I also shows that the TAWSS on the filter decreases after thrombus deposition, particularly in the joint region between the struts and thrombus.
OSI and RRT for the four cases. I: OSI on the caval wall. II: RRT on the caval wall. OSI: oscillatory shear index; RRT: relative residence time
Figure 6II shows the transverse WSS distribution on both the vessel wall and the filter. In general, similarly to the TAWSS distribution, the transverse WSS on the thrombus in the traditionally deployed cases is higher than that in the reverse deployment cases. In Case 2, in particular, the transverse WSS on the surface of the thrombus is shown in red and thus has a high value.
Oscillatory shear index and relative residence time
As shown in Fig. 7, the OSI weakens after thrombus deposition, which is most evident within the vicinity of the filter/thrombus. Comparing Cases 2 and 4, the reverse deployment case leads to a slightly higher OSI than the traditional deployment case within the thrombus area. Specifically, in Cases 2 and 4, a region of lower OSI appeared in this vicinity.
Autologous blood clots injected into the left jugular vein of the goat; CR image; experimental results. I: cylindrical thrombus on the sterile gauze and the mixed solution of autologous thrombus; with the cone-shaped filter deployed in reverse, the mixed solution of blood clots was injected into the jugular vein. II: CR image showing the VCF deployed in the superior vena cava of the goat. III: experimental results: thrombus deposited in the reversed cone-shaped filter. CR: computed radiography; VCF: vena cava filter
Figure 7 also clearly demonstrates that the existence of thrombi decreases the RRT on the vessel wall, particularly in the vicinity of the thrombi. A comparison of no-thrombus Case 1 and thrombus Case 2 shows that the RRT on the vessel wall decreases from 20 to 10 within the thrombus area.
Animal experimental results
After the goat was euthanized, an autopsy was immediately conducted to observe the filter. The filter was found deployed in the superior vena cava without tilting. The diameter of the superior vena cava was 15.2 mm, and the end of the filter was near the heart. Notably, even with the reverse deployment of the cone-shaped filter in the vena cava of the goat, the thrombi were still deposited within the center of the filter (Fig. 8III). There were no visible injuries, such as damage to the vascular intima, vessel wall fracture, or adhesion at the hook site. The surface of the lung did not exhibit any abnormalities or embolism.
Mesh distribution for Case 4. I: mesh distribution around the filter and thrombus. II: overall view of the structure. III: plane mesh distribution around the thrombus
VCF occlusion after filter deployment continues to be a challenge even in the modern era of medicine. This study investigated the use of a VCF with reverse deployment to determine whether it improved the local hemodynamic performance in a vena cava. This was accomplished by conducting numerical simulations and an in vivo experiment.
The motion of blood in a straight vessel under a pressure gradient balances the energy loss resulting from friction. The fluid velocity is higher at the center, and blood clots may be more easily deposited in the peripheral areas of the vessel. Therefore, prior to conducting the animal experiment, it was expected that the thrombus would be trapped near the vessel wall. However, the experimental results showed that the thrombus was still deposited at the center of the filter. At a broader level, it is inferred that the flow of blood in the vena cava does not correspond to a standard circular tube flow. The blood flow in the vena cava is principally a return flow from the jugular vein to the heart. However, the velocity of blood in the vena cava fluctuates, and this clearly differentiates it from the Poiseuille flow in a simple straight circular tube. According to the previous hypothesis, high shear stress may dissolve thrombin more rapidly [10]; clots that accumulate at the center of the filter therefore dissolve only slowly in the low-shear flow there. Consequently, the results of the animal experiment showed that this hypothesis does not hold for the in vivo vena cava.
The results also showed that the new deployment method did not improve the hemodynamics of the caval wall in terms of the TAWSS or transWSS when compared with those of a traditional method. The simulation results also concluded that with the exception of the cone vertex, the use of reverse deployment decreased the TAWSS and transWSS on the surface of the thrombi. The geometry of the inverted thrombi favored the development of blood stasis because of the lower WSS at the bottom of the thrombi. A low WSS is usually associated with blood flow stasis and thrombosis [12]. Furthermore, the reverse deployment also leads to a slightly higher OSI and RRT within the vicinity of the filter. Low WSS and high OSI values are important factors in intimal hyperplasia, because they affect the function of endothelial cells [13]. A high OSI and RRT can lead to thrombus formation by simulating platelet aggregation, enhancing the collisions of the activated platelets, and increasing the residence time of procoagulant microparticles [14]. Consequently, a reverse deployment can act adversely in dissolving the deposited thrombi, thereby further increasing the risk of PE. Furthermore, as a potential disadvantage of the inverted cone-shaped filter, clots propelled to the periphery of a reversed filter are more likely to pass through and become PEs. These thrombi attached to the downstream side of the filter are more susceptible to embolization than those captured in the filter placed in a traditional orientation.
To date, there is still no solid evidence showing the significance of the hemodynamics in thrombus formation at the center or on the wall at the early formation stage. The thrombus formed on the filter may be primarily because of the structure of the filter. As aforementioned, the maximum blood velocity occurs along the center of the vessel, and the minimum velocity appears near the wall. Therefore, thrombi are mostly distributed at the center of the bloodstream. The struts are densely distributed at the center of the filter regardless of how the filter is deployed, resulting in thrombi being easily attached to the struts. In the long term, after a thrombus is formed on a reversely deployed filter, vortices may be generated on the backside of the clot owing to fluid dynamics, potentially causing an accumulation of the clot. In the future, we would like to conduct such a study, including more animal experiments to determine whether the strut of the filter can hold the clot when the thrombus grows, or whether it leads to a risk of shedding, thereby blocking the downstream vessels and the heart.
As a preliminary investigation, this study has certain limitations. First, we assumed the thrombus to be a standard cone in our numerical simulation. An extant study showed that no single thrombus shape can be rigorously assigned to a clot owing to the inherently complex and random nature of its formation [15]. However, according to the findings reported by Wang, spherical and conical thrombi are the most representative in clinical patients [16]. We also considered the results of the animal experiment, which revealed that the captured thrombus was cone-shaped. Therefore, we modeled the thrombus as a cone in the simulation study. Second, owing to the high cost of conducting an in vivo experiment, the outcome of our experiment is for a single animal. Third, we assumed a velocity profile at the inlet with an axial velocity component and a transverse velocity component equal to zero. For venous flow, the flow velocity is low relative to that in the arteries. In this study a parabolic velocity profile was assumed; in future investigations we intend to consider a pulsatile velocity profile. An alternative solution is to extend the length of the inlet to provide a sufficient entrance path and allow the numerical approach to develop the velocity profile from the pulsatile velocity waveform. In addition, there are other limitations of the simulation study, including the single vessel geometry, the rigid wall used in the simulations, and the parabolic velocity profile instead of a Womersley velocity profile. All of these aspects should be addressed in future studies.
In conclusion, the results of this study showed that the reverse deployment method is no better than the traditional deployment in terms of the hemodynamic indicators obtained from the numerical simulations. Therefore, based on the results of the numerical simulations and a single in vivo experiment, we do not suggest reverse deployment of the cone-shaped filter in the vena cava to prevent a potentially fatal PE.
Simulation study methods
Geometric model and meshing
Based on a prototype of the Greenfield VCF [3, 17] and vena cava dimensions extracted from [18], we created a cone-shaped filter using Pro/Engineer (version Wildfire 4.0, Parametric Technology Co., USA). The cone-shaped filter was simplified to six symmetrical circular struts, each with a diameter of 0.5 mm. The length of the filter was 30 mm. Figure 3I shows the four cases that were investigated using numerical simulations, representing traditional deployment and reverse deployment with or without a thrombus. The volume of the cone-shaped thrombus used in the numerical simulation was 0.565 cm3, which is consistent with the volume reported by Wang et al. [16, 19]. The vena cava model, as shown in Fig. 3I, was reconstructed from computed tomographic images of a healthy 58-year-old male volunteer. The volunteer provided written informed consent to this study, which was approved by the Ethical Committee of the General Hospital of the People's Liberation Army and carried out in accordance with the regulations of the hospital. We reconstructed the vena cava model from the images using the commercial three-dimensional reconstruction software Mimics (Materialise, Belgium). Next, Geomagic (Geomagic, USA) was used to improve the quality of the model surface. The vena cava model was 109 mm long.
In each case, the computational models were meshed with tetrahedral and hexahedral elements using ANSYS ICEM (ANSYS Inc., Canonsburg, PA). To ensure that the results were mesh-independent, a grid-adaptation technique was used, which refined the grid based on the geometric and numerical solution data. Boundary layers were also applied near the vessel wall: the number of layers was set to 3, the height ratio to 1.2, and the total height to 0.2 mm. High-density mesh elements were applied close to the filter, with a maximum element size of 0.15 mm on the filter. The final mesh sizes were 3 298 245, 3 838 665, 3 318 325, and 3 798 220 elements for Cases 1, 2, 3, and 4, respectively. The mesh distribution for Case 4 is presented in Fig. 9.
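As a quick illustration of these settings (my own arithmetic sketch, assuming the three prism layers grow geometrically with the stated ratio and sum to the stated total height):

```python
# Individual prism-layer heights implied by 3 layers, a growth ratio of 1.2,
# and a total height of 0.2 mm (geometric series h1 * (1 + r + r^2) = total).
r, n_layers, total = 1.2, 3, 0.2            # ratio, number of layers, total height in mm
h1 = total * (r - 1) / (r ** n_layers - 1)  # first-layer height
heights = [h1 * r ** i for i in range(n_layers)]
print(heights, sum(heights))                # ~[0.055, 0.066, 0.079] mm, summing to 0.2
```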
Inlet inferior vena cava velocity waveform used in the pulsatile flow computations
In this study, simulations were performed assuming laminar flow conditions [20]. Blood was assumed to be a homogeneous and incompressible non-Newtonian fluid. Notably, our previous study showed a similar difference in the flow features between the Newtonian numerical simulations and Carreau model simulations in the vena cava [21]. Therefore, only the Carreau model simulation results are provided in this study.
Governing equations
Flow simulations were performed based on the three-dimensional, incompressible Navier–Stokes and the continuity equations as follows [22]:
$$\rho ((\partial \upsilon /\partial t) + (\upsilon \cdot \nabla )\upsilon ) = - \nabla p + \nabla \cdot \tau ,$$
$$\nabla \cdot \upsilon = 0,$$
where \(\upsilon\) and p are the fluid velocity vector and pressure, respectively; ρ = 1050 kg/m3 is the blood density; and \(\tau\) is the shear stress tensor, which is expressed as follows:
$$\tau = 2\eta (\dot{\gamma })D.$$
Here, D and \(\dot{\gamma }\) are the deformation rate tensor and shear rate, respectively, and \(\eta\) is the viscosity, which is a function of the shear rate.
The Carreau model is used to calculate the viscosity of blood as follows:
$$\eta (\dot{\gamma }) = \eta_{\infty } + (\eta_{0} - \eta_{\infty } )[1 + (\lambda \dot{\gamma })^{2} ]^{((n - 1)/2)} ,$$
where \(\eta_{\infty } = 3.45 \times 10^{-3}\) kg/(m s), \(\eta_{0} = 5.6 \times 10^{-2}\) kg/(m s), \(n = 0.3568\), and \(\lambda = 3.313\) s [23].
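For reference, the Carreau law above with the stated constants can be evaluated directly; the following is a small standalone sketch (not the solver's implementation):

```python
# Carreau blood viscosity eta(gamma_dot) with the parameter values given in the text.
import numpy as np

ETA_INF = 3.45e-3   # kg/(m s), infinite-shear viscosity
ETA_0   = 5.6e-2    # kg/(m s), zero-shear viscosity
N_EXP   = 0.3568
LAMBDA  = 3.313     # s

def carreau_viscosity(shear_rate):
    """Viscosity as a function of shear rate (1/s); accepts scalars or arrays."""
    shear_rate = np.asarray(shear_rate, dtype=float)
    return ETA_INF + (ETA_0 - ETA_INF) * (1.0 + (LAMBDA * shear_rate) ** 2) ** ((N_EXP - 1.0) / 2.0)

# Example: viscosity drops from ~0.056 kg/(m s) at rest toward the high-shear limit.
print(carreau_viscosity([0.0, 1.0, 100.0]))
```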
Hemodynamic parameters
The shear stress on the vessel wall throughout a cardiac cycle was evaluated by using the TAWSS, which is expressed as follows:
$$\mathrm{TAWSS}=\frac{1}{\mathrm{T}}{\int }_{0}^{\mathrm{T}}\left|\mathrm{WSS}\left(\mathrm{s},\mathrm{t}\right)\right|\mathrm{dt},$$
where T is the cardiac cycle period, WSS is the instantaneous wall shear stress vector, and s is the position on the caval wall. The OSI indicates the changing frequency of the wall shear-stress direction as follows [24]:
$$\mathrm{OSI}=\frac{1}{2}\left[1-\left(\frac{\frac{1}{\mathrm{T}}\left|{\int }_{0}^{\mathrm{T}}\mathrm{WSS}\left(\mathrm{s},\mathrm{t}\right)\cdot \mathrm{dt}\right|}{\frac{1}{\mathrm{T}}{\int }_{0}^{\mathrm{T}}\left|\mathrm{WSS}\left(\mathrm{s},\mathrm{t}\right)\right|\mathrm{dt}}\right)\right],$$
$$0\le \mathrm{OSI}\le \frac{1}{2}.$$
A zero OSI value corresponds to a unidirectional shear flow, and the OSI value is 1/2 when a purely oscillatory shear case occurs [25].
Another useful parameter, the RRT, was also calculated. Specifically, RRT reflects the residence time of flow particles near the caval wall, and it is also recommended as a single metric of low and oscillating shear stress [26]. Thus, RRT is defined as follows [27]:
$$\mathrm{RRT}=\frac{1}{\left(1-2\cdot \mathrm{OSI}\right)\cdot \mathrm{TAWSS}}.$$
The OSI does not distinguish well between uniaxial pulsatile flow and multidirectional flow. Therefore, an additional parameter, the transWSS, was also introduced. It is expressed as follows [28]:
$$\mathrm{transWSS}=\frac{1}{\mathrm{T}}{\int }_{0}^{\mathrm{T}}\left|\mathrm{WSS}\cdot \left[n\times \frac{\frac{1}{\mathrm{T}}{\int }_{0}^{\mathrm{T}}\mathrm{WSS}\left(\mathrm{s},\mathrm{t}\right)\cdot \mathrm{dt}}{\left|\frac{1}{\mathrm{T}}{\int }_{0}^{\mathrm{T}}\mathrm{WSS}\left(\mathrm{s},\mathrm{t}\right)\cdot \mathrm{dt}\right|}\right]\right|\mathrm{dt},$$
where n is normal to the vessel surface. This new metric has clear advantages over other parameters that have attempted to capture multidirectional aspects. It also appears to be more sensitive to changes in the velocity waveform [29]. Thus, it complements TAWSS and OSI as opposed to replacing them.
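The four metrics above can be computed in post-processing from the instantaneous WSS vectors exported over one cardiac cycle. The following is a minimal sketch of such a post-processing step (the array layout and node-wise normals are assumptions, not the authors' actual pipeline):

```python
# Compute TAWSS, OSI, RRT and transWSS from sampled WSS vectors over one cycle.
import numpy as np

def wss_metrics(wss, t, normals):
    """
    wss     : array (nt, nnodes, 3), instantaneous WSS vectors over one cycle
    t       : array (nt,), sample times spanning the cycle [0, T]
    normals : array (nnodes, 3), unit outward normals of the wall surface
    """
    T = t[-1] - t[0]
    mean_vec = np.trapz(wss, t, axis=0) / T                          # (1/T) * integral of WSS dt
    tawss = np.trapz(np.linalg.norm(wss, axis=2), t, axis=0) / T     # time-averaged |WSS|
    osi = 0.5 * (1.0 - np.linalg.norm(mean_vec, axis=1) / tawss)     # 0 (unidirectional) .. 0.5
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)                          # diverges as OSI -> 0.5
    # transWSS: time-average of |WSS . (n x unit mean-WSS direction)|
    mean_dir = mean_vec / np.linalg.norm(mean_vec, axis=1, keepdims=True)
    trans_dir = np.cross(normals, mean_dir)                          # (nnodes, 3)
    trans_comp = np.abs(np.einsum('tij,ij->ti', wss, trans_dir))     # (nt, nnodes)
    transwss = np.trapz(trans_comp, t, axis=0) / T
    return tawss, osi, rrt, transwss
```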
Boundary conditions and computation
In all cases, a steady-flow simulation was first performed. The solution obtained from this simulation was then used as the initial condition for the pulsatile flow simulations. For the steady-flow simulation, a uniform inflow velocity profile with an axial velocity component of 0.1 m/s and a zero transverse velocity component was used at the inlet [30]. The outlet was set as an outflow boundary. The caval wall was assumed to be rigid with a no-slip condition.
For the pulsatile flow simulation, the time-dependent parabolic flow velocity waveform based on the measurement performed by Zhang et al. (as shown in Fig. 10) was set at the inlet [31]. The other boundary conditions were identical to those in the steady computation.
The finite volume method was adopted to solve the mass and momentum conservation equations using the ANSYS Fluent 14.0 computational fluid dynamics solver (ANSYS Inc., Canonsburg, PA). The convergence criteria for the continuity and velocity residuals were set to 1.0 × 10−5. Six cycles were required to obtain convergence in the transient analysis, with 200 time steps per cycle (cycle period = 1 s). The pulsatile calculation was performed on a computer equipped with a 2.20 GHz Intel(R) Xeon(R) CPU and 64 GB of random access memory (RAM). The computation took approximately one week for each scenario.
Animal experiment methods
Animal experimental setup
The use of animals in the present study was approved by the local ethical committee (Guizhou Institute of Animal Husbandry and Veterinary Science, Guiyang, China) and complied with the laboratory animal administration rules of China. A healthy adult male goat weighing 51.2 kg was used for the experiment. The goat was anesthetized through an intramuscular injection of the general anesthesia agent QMB (Department of Veterinary Surgery, Northeast Agricultural University, China) at 0.1 mL/kg body weight. Initially, an attempt was made to deploy the VCF in the inferior vena cava via the femoral vein. However, the femoral vein of the goat proved too thin to be separated from the surrounding vessels, making it difficult to insert the filter. The filter was therefore deployed in the superior vena cava via the jugular vein of the goat.
Option™ VCF, a commercially available filter, is cone-shaped and in wide clinical use. In this study, the Option™ VCF (Argon Medical Devices, Frisco, Texas, USA) was inserted into the superior vena cava via the jugular vein of the goat using B-mode ultrasound guidance.
The goat did not exhibit any abnormalities on preoperative B-mode ultrasound examination. After anesthesia, the goat was placed in the left lateral decubitus position. The left jugular vein of the goat was fixed, and the skin at the surgical site was disinfected per standard procedure. The jugular vein was approached through a longitudinal incision in the middle of the neck with adequate vessel separation, and the vessel was then punctured using a puncture needle. The Option™ VCF was then deployed in the superior vena cava according to the routine VCF deployment procedure.
After deployment, computed radiography (CR; Carestream Vita CR System, Rui Ke Medical Shanghai Co., Ltd.) was used to determine the VCF location. The CR image showed that the VCF was positioned in the superior vena cava (Fig. 8II).
Eleven days after the VCF deployment, 10 mL of venous blood was drawn from the goat using a syringe. The blood sample was kept in the syringe for 30 min at room temperature, and the syringe was then depressed to extrude cylindrical thrombi onto sterile gauze. Cylindrical thrombi 2–6 mm in length were selected. The autologous thrombi were flushed with physiological saline, and a mixed suspension containing at least three clots per 1 mL of physiological saline was prepared. Next, 20 mL of the mixed suspension of autologous thrombi was injected into the left jugular vein of the goat (Fig. 8I). Although collagen proteins or other coagulation factors could also be used to simulate a thrombus, these foreign materials may trigger an immune response [32]; autologous blood clots are therefore preferable. Extant studies developing a canine model of acute PE have also used autologous blood clots as simulated thrombi [33]. Previous studies revealed that placing a filter in the vena cava for longer than 2 weeks leads to intimal hyperplasia and vascular adhesion [34, 35]. Therefore, 14 days after the VCF deployment, the goat was euthanized through an overdose of anesthesia, and the filter was removed to examine the captured thrombus on the spot.
All data generated or analyzed during this study are included in this published article.
VTE:
Venous thromboembolism
DVT:
Deep vein thrombosis
PE:
Pulmonary embolism
VCF:
Vena cava filter
TAWSS:
Time-averaged wall shear stress
OSI:
Oscillatory shear index
RRT:
Relative residence time
WSS:
Wall shear stress
CFD:
Computational fluid dynamics
CR:
Computed radiography
Haut ER, Garcia LJ, Shihab HM, et al. The effectiveness of prophylactic inferior vena cava filters in trauma patients: a systematic review and meta-analysis. Jama Surg. 2014;149(2):194–202.
Rojas-Hernandez CM, Zapata-Copete JA, Garcia-Perdomo HA. Role of vena cava filters for the management of cancer-related venous thromboembolism: systematic review and meta-analysis. Crit Rev in Oncol Hemat. 2018;130:44–50.
Rajasekhar A, Streiff MB. Vena cava filters for management of venous thromboembolism: a clinical review. Blood Rev. 2013;27:225–41.
Michaelov T, Blickstein D, Levy D, et al. Removal of retrievable vena cava filters in routine practice: a multicenter study. Eur J Intern Med. 2011;22:e87–9.
Stewart S, Robinson RA, Nelson RA, et al. Effects of thrombosed vena cava filters on blood flow: flow visualization and numerical modeling. Ann Biomed Eng. 2008;36:1764–81.
Lorch H, Dallmann A, Zwaan M, et al. Efficacy of permanent and retrievable vena cava filters: experimental studies and evaluation of a new device. Intervent Radiol. 2002;25:193–9.
Heike L, Zwaan M, Kulke C, et al. In vitro studies of temporary vena cava filters. Cardiovasc Intervent Radiol. 1998;21:146–50.
Chen Z, Fan Y, Deng X. A novel deployment design of vena cava filters might be the solution. Med Hypotheses. 2011;77:990–2.
Lowe GD. Virchow's triad revisited: abnormal flow. Pathophys Haemost Thromb. 2004;33:455–7.
Kroll MH, Hellums JD, Mcintire LV, et al. Platelets and shear stress. Blood. 1996;88:1525–41.
Singer MA, Henshaw WD, Wang SL. Computational modeling of blood flow in the TrapEase inferior vena cava filter. J Vasc Interv Radiol. 2009;20:799–805.
Wootton DM, Ku DN. Fluid mechanics of vascular systems, diseases, and thrombosis. Annu Rev Biomed Eng. 1999;1:299–329.
Chiu JJ, Chien S. Effects of disturbed flow on vascular endothelium: pathophysiological basis and clinical perspectives. Physiol Rev. 2011;91:327–87.
Jimenez JM, Davies PF. Hemodynamically driven stent strut design. Ann Biomed Eng. 2009;37:1483–94.
Nicolas M, Malve M, Pena E, Martinez MA, Leask R. In vitro comparison of Gunther Tulip and Celect filters testing filtering efficiency and pressure drop. J Biomech. 2015;48:504–11.
Wang SL, Singer MA. Toward an optimal position for inferior vena cava filters: computational modeling of the impact of renal vein inflow with Celect and TrapEase filters. J Vasc Interv Radiol. 2010;21:367–74.
Rosenthal D, Wellons ED, Lai KM, et al. Retrievable inferior vena cava filters: initial clinical results. Ann Vasc Surg. 2006;20:157–65.
Yang M, Sun L, Zhang JW, et al. Diameter and length measurement of infrarenal inferior vena cava in Shandong Peninsula adult and its significance. Zhonghua wai ke za zhi. 2011;49:514–6.
Wang SL, Timmermans HA, Kaufman JA. Estimation of trapped thrombus volumes in retrievable inferior vena cava filters. J Vasc Interv Radiol. 2007;8:273–6.
Couch GG, Johnston KW, Ojha M. An in vitro comparison of the hemodynamics of two inferior vena cava filters. J Vasc Surg. 2000;31:540–8.
Chen Y, Deng X, Shan X, et al. Study of helical flow inducers with different thread pitches and diameters in vena cava. PLoS ONE. 2018;13(1):e0190609.
Owida AA, Do H, Morsi YS. Numerical analysis of coronary artery bypass grafts: an overview. Comput Methods Programs Biomed. 2012;108:689–705.
Cho YI, Kensey KR. Effects of the non-Newtonian viscosity of blood on flows in a diseased arterial vessel. Part1: steady flows. Biorheology. 1991;28:241–62.
Ha H, Choi W, Lee SL. Beneficial fluid-dynamic features of pulsatile swirling flow in 45° end-to-side anastomosis. Med Eng Phy. 2015;37:272–9.
Chen Y, Zhang P, Deng X, et al. Improvement of hemodynamic performance using novel helical flow vena cava filter design. Sci Rep. 2017;7:40724.
Himburg HA, Grzybowski DM, Hazel AL, et al. Spatial comparison between wall shear stress measures and porcine arterial endothelial permeability. Am J Physiol Heart Circ Physiol. 2004;286:H1916–22.
Lee SW, Antiga L, Steinman DA. Correlations among indicators of disturbed flow at the normal carotid bifurcation. J Biomech Eng. 2009;131:061013.
Mohamied Y, Rowland EM, Bailey EL, et al. Change of direction in the biomechanics of atherosclerosis. Ann Biomed Eng. 2015;43:16–25.
Peiffer V, Sherwin SJ, Weinberg PD. Computation in the rabbit aorta of a new metric—the transverse wall shear stress—to quantify the multidirectional character of disturbed blood flow. J Biomech. 2013;46:2651–8.
Leask RL, Johnston KW, Ojha M. Hemodynamic Effects of clot entrapment in the TrapEase inferior vena cava filter. J Vasc Interv Radiol. 2004;15:485–90.
Zhang B, Toru K. Doppler waveforms: the relation between ductus venosus and inferior vena cava. Ultrasound Med Biol. 2005;31:1173–6.
Ueno Y, Koike H, Annoh S, et al. Anti-inflammatory effects of beraprost sodium, a stable analogue of PGI2, and its mechanisms. Prostaglandins. 1997;53:279–89.
Zhao L, Jia Z, Lu G, et al. Establishment of a canine model of acute pulmonary embolism with definite right ventricular dysfunction through introduced autologous blood clots. Thromb Res. 2005;135:727–32.
Gregorio MA, Gimeno MJ, Tobio R, et al. Animal experience in the Gunther Tulip retrievable inferior vena cava filter. Cardiovasc Intervent Radiol. 2001;24:413–7.
Nadkarni S, Macdonald S, Cleveland TJ, et al. Placement of a retrievable Günther Tulip filter in the superior vena cava for upper extremity deep venous thrombosis. Cardiovasc Intervent Radiol. 2002;25:524–6.
The authors are very grateful to the Guizhou Institute of Animal Husbandry and Veterinary Science for their meticulous care of animals.
This work was funded by grants from the National Natural Science Research Foundation of China (Nos. 11862004, 11772036, 11332003, 11572028, 11421202) and the Guizhou Science and Technology Cooperation Platform Talent Foundation Project of China (No. [2018]5781).
College of Engineering and Technology, Beijing Institute of Economics and Management, Beijing, 100102, China
College of Engineering, Peking University, Beijing, 100871, China
Ying Chen & Wenchang Tan
Shenzhen Graduate School, Peking University, Shenzhen, 518055, China
Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
Ying Chen, Xiaoyan Deng & Yubo Fan
Department of Veterinary Medicine, College of Animal Science, Guizhou University, Guiyang, 550025, Guizhou, China
Zaipin Xu & Shibo Yang
School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, 643002, Sichuan, China
Xiaoyan Deng
Guizhou Institute of Animal Husbandry and Veterinary Science, Guiyang, 550025, Guizhou, China
Yong Han
Department of Infection Management and Disease Control, The General Hospital of People's Liberation Army, Beijing, 100853, China
Yubin Xing
Concept and design: YC, XD; drafting the article: YC, ZX; data analysis: YC, YX; performing the animal experiments: ZX, SY, YC; critical revision of the article: XD, YC; approval of the article: YF, YH, WT. All authors read and approved the final manuscript.
Correspondence to Ying Chen, Zaipin Xu or Xiaoyan Deng.
The study was approved by Guizhou Institute of Animal Husbandry and Veterinary Science.
Chen, Y., Xu, Z., Deng, X. et al. Effects of reverse deployment of cone-shaped vena cava filter on improvements in hemodynamic performance in vena cava. BioMed Eng OnLine 20, 19 (2021). https://doi.org/10.1186/s12938-021-00855-x
Cone-shaped vena cava filter
Hemodynamic performance
Reverse deployment
October 2011, 4(5): 1199-1212. doi: 10.3934/dcdss.2011.4.1199
Multiple dark solitons in Bose-Einstein condensates at finite temperatures
P.G. Kevrekidis 1, and Dimitri J. Frantzeskakis 2,
University of Massachusetts, Lederle Graduate Research Tower, Department of Mathematics and Statistics, Amherst, MA 01003
Department of Physics, University of Athens, Panepistimiopolis, Zografos, Athens 15784, Greece
Received September 2009 Revised October 2009 Published December 2010
We study analytically, as well as numerically, single- and multiple-dark matter-wave solitons in atomic Bose-Einstein condensates at finite temperatures. Our analysis is based on the study of the dissipative Gross-Pitaevskii equation, which incorporates a phenomenological damping term accounting for the interaction of the condensate with the thermal cloud. We illustrate how the negative Krein sign eigenmodes (associated with the single- or multiple-dark soliton states) can give rise to Hopf bifurcations and oscillatory instabilities, whose ensuing dynamics is also elucidated. In all cases, the finite-temperature induced dynamics results in soliton decay, and the system eventually asymptotes to the ground state.
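As a rough illustration of the class of dissipative Gross-Pitaevskii models referred to here, the sketch below evolves a dark-soliton initial state in a harmonic trap; the trap frequency, chemical potential and damping constant are generic values chosen for illustration, not the parameters used in this paper.

```python
# Minimal sketch of a 1D dissipative Gross-Pitaevskii equation of the form
# (i - gamma) psi_t = [-(1/2) d^2/dx^2 + V(x) + |psi|^2 - mu] psi,
# evolved with RK4 time stepping and spectral (FFT) derivatives.
import numpy as np

N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

omega, mu, gamma = 0.1, 1.0, 0.005          # trap frequency, chemical potential, damping
V = 0.5 * omega ** 2 * x ** 2

def rhs(psi):
    """psi_t obtained from (i - gamma) psi_t = F[psi]."""
    lap = np.fft.ifft(-(k ** 2) * np.fft.fft(psi))
    F = -0.5 * lap + (V + np.abs(psi) ** 2 - mu) * psi
    return -(gamma + 1j) / (1.0 + gamma ** 2) * F

# Thomas-Fermi background with a dark-soliton (tanh) notch placed off-center
background = np.sqrt(np.clip(mu - V, 0.0, None))
psi = background.astype(complex) * np.tanh(x - 2.0)

dt, steps = 1e-3, 3000
for _ in range(steps):                       # classical fourth-order Runge-Kutta
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    psi = psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print("norm:", np.trapz(np.abs(psi) ** 2, x))  # norm drifts slowly as damping relaxes the state
```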
Keywords: Dark solitons, Bose-Einstein condensates, finite temperature.
Mathematics Subject Classification: Primary: 35Q51; Secondary: 35A3.
Citation: P.G. Kevrekidis, Dimitri J. Frantzeskakis. Multiple dark solitons in Bose-Einstein condensates at finite temperatures. Discrete & Continuous Dynamical Systems - S, 2011, 4 (5) : 1199-1212. doi: 10.3934/dcdss.2011.4.1199
Is there a relativistic generalization of the Maxwell-Boltzmann velocity distribution?
The Maxwell-Boltzmann velocity distribution in 3D space is $$ f(v)dv = 4\pi \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^2 \exp\left(-\frac{m v^2}{2k_B T}\right)dv$$ It gives the probability for a single particle to have a speed in the interval $[v,v+dv]$. But this probability is not zero for speeds $v > c$, in conflict with special relativity.
Is there a generalization of the Maxwell-Boltzmann velocity distribution which is valid also in the relativistic regime, so that $f(v) = 0$ for $v>c$? And how can it be derived? Or can a single-particle distribution simply not exist for relativistic speeds, because at high energies we always have pair production, meaning the particle number is not conserved and a single-particle distribution cannot be defined in a consistent way?
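To get a sense of how large the superluminal tail actually is, here is a small numerical check of the non-relativistic distribution above (my own sketch; the electron mass and the temperatures are arbitrary choices):

```python
# Fraction of particles with v > c under the *non-relativistic* Maxwell-Boltzmann
# speed distribution: v/sigma follows a chi distribution with 3 degrees of freedom,
# with sigma^2 = k_B*T/m, so P(v > c) = P(chi2_3 > (c/sigma)^2).
from scipy.stats import chi2

k_B = 1.380649e-23      # J/K
c   = 2.99792458e8      # m/s
m_e = 9.1093837015e-31  # kg (electron, chosen only for illustration)

for T in (1e6, 1e9, 1e10):                  # temperatures in K
    sigma2 = k_B * T / m_e
    print(T, chi2.sf(c**2 / sigma2, df=3))  # probability assigned to v > c
```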
thermodynamics special-relativity
asmaier
$\begingroup$ FYI, the relativistic generalization is called the Maxwell-Juttner distribution. $\endgroup$ – user1631 Jul 18 '11 at 17:18
The point of a Boltzmann distribution is that it maximizes entropy given a fixed energy. The concept applies to systems with other degrees of freedom besides translational kinetic energy. The general distribution, from Wikipedia, is that the probability of a state is proportional to the Boltzmann factor of its energy, $$p_i \propto e^{-E_i/k_B T}.$$
Thus, the simple adjustment to the Maxwell-Boltzmann distribution you cited is to replace the Newtonian kinetic energy $\frac{mv^2}{2}$ with the relativistic kinetic energy $(\gamma - 1)mc^2$ everywhere it appears in the distribution.
Pair creation is a separate issue that I'll leave to someone else.
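As a numerical illustration of this substitution (a rough sketch only: the $v^2$ prefactor is kept as in the non-relativistic formula, the support is truncated at $v=c$, and the result is renormalized numerically, along the lines discussed in the comments):

```python
# Keep the v^2 prefactor of the Maxwell-Boltzmann speed distribution, replace the
# Newtonian kinetic energy by (gamma - 1) m c^2, truncate at v = c and renormalize.
import numpy as np

def truncated_relativistic_speed_pdf(v, alpha, c=1.0, n_grid=20000):
    """alpha = m c^2 / (k_B T); v may be a scalar or array with 0 <= v < c."""
    grid = np.linspace(0.0, c, n_grid, endpoint=False)
    gamma = 1.0 / np.sqrt(1.0 - (grid / c) ** 2)
    unnorm = grid ** 2 * np.exp(-alpha * (gamma - 1.0))
    norm = np.trapz(unnorm, grid)                       # numerical normalization on [0, c)
    gv = 1.0 / np.sqrt(1.0 - (np.asarray(v) / c) ** 2)
    return np.asarray(v) ** 2 * np.exp(-alpha * (gv - 1.0)) / norm

# Example: for alpha = 1 (k_B T equal to the rest energy) the density vanishes as v -> c.
print(truncated_relativistic_speed_pdf([0.5, 0.9, 0.99], alpha=1.0))
```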
Mark Eichenlaub
$\begingroup$ $\gamma-1$ has an imaginary component for $v>c$, which seems to preclude a probability interpretation. Do you have a reference where the consequences of the model you suggest above are developed? Does one have to insert a Heaviside function restriction to $v<c$? That would seem moderately ad-hoc, although perhaps not irredeemably. Pair production is a quantum field theoretic concept that, as you somewhat imply, puts us into a wholly different and barely understood arena. $\endgroup$ – Peter Morgan Jul 17 '11 at 3:01
$\begingroup$ @Peter I would think of it as saying that the energies can go from 0 to infinity, and then you get a probability distribution for energies that doesn't seem artificial. From there, you can transform it into a distribution for velocities if you so desire. Doing the transformation you'll naturally stop at v = c. I suppose you are right that you have to insert a Heaviside function. No, I don't have a reference; it was just an off-the-cuff response. $\endgroup$ – Mark Eichenlaub Jul 17 '11 at 3:09
$\begingroup$ I believe this ansatz has a problem even for speeds below c. If $E_{kin} = (\gamma-1)m c^2 > 2 m c^2 $ that is for $v > \frac{2}{3} \sqrt{2} c = 0.94c$ the energy is enough for pair production. I guess at this point a distribution based on a constant number of particles cannot be correct, am I right? $\endgroup$ – asmaier Jul 17 '11 at 11:39
$\begingroup$ Well, nice try but this is indeed a very naive attempt. My feeling is that there is no use in trying to create classical (i.e. non-quantum) relativistic statistical mechanics, similarly to trying to create relativistic quantum mechanics. In both cases one needs to introduce the possibility of creation/annihilation for the theory to be consistent (or equivalently, pass to the language of fields). And I am not even talking about the elementary stuff @asmaier mentions, that for relativistic particles the concept of kinetic energy is unnatural at best... $\endgroup$ – Marek Jul 17 '11 at 20:28
$\begingroup$ @Marek Well, you would know better than I. I'd be interested if you have the time to write up an answer. $\endgroup$ – Mark Eichenlaub Jul 17 '11 at 20:37
The usual, or textbook, generalisation (Jüttner), which was first derived in 1911, is not covariant. Presumably a fully covariant distribution would cover all these details, assuming that it exists in the first place. The most recent attempt I am aware of is by Ewald Lehmann, who goes back to basics: "Covariant equilibrium statistical mechanics", Journal of Mathematical Physics 47, 023303, 2006.
David Sher
Per Mark's request, I'll provide an answer.
First, I think it's not possible to obtain such a formula. One can of course naively try to extrapolate various classical formulae, but all of these attempts are bound to fail. Here's why: even when constructing relativistic quantum mechanics one finds that the theory is not consistent. For example, in non-relativistic quantum mechanics one has a position operator that can be used to get information on the precise position of the particle (up to the uncertainty given by Heisenberg's uncertainty principle). But when one includes relativity in the picture the theory stops being consistent. This reflects the fact that to localize the particle with great precision one has to make experiments with increasingly high energy, and at a certain point the energy is enough for the appearance of new particles. In fact, creation and annihilation of particles is inevitable in the relativistic regime (and in some systems it's not even clear what particles should be and one has to talk about fields instead). The situation is even more pronounced in statistical physics, where a huge number of particles is present.
More importantly, there's no need to get such a formula. Consider systems for which it would be useful. Such systems would have to be extremely non-classical (like neutron stars, black hole interiors, quark-gluon plasma, etc.) and the concept of velocity would have no meaning, as there's no way to observe individual particles of these systems; this is in contrast to the classical case where you can test the Maxwell distribution by letting the particles out of the box one by one and seeing how fast they are (the actual experiment is a lot more sophisticated of course, but that's unimportant here).
Marek
$\begingroup$ Do you think the Maxwell-Juttner formula is correct? $\endgroup$ – John McVirgooo Aug 7 '11 at 18:14
There is no particle count invariance in general relativity. One can substitute energy density, and from a classical point of view an equilibrium energy density defines a specific temperature. Lehmann's paper, which I referred to in a comment posted above, clearly attempts to do this. From a practical point of view such extreme conditions preclude any attempt at experimental verification. Numerical simulations usually refer to impenetrable classical particles and disregard loss of count invariance.
Richard Feynman is reported to have said something to the effect that if he can derive a result in several different ways he is probably right. Lehmann's results can be duplicated, at least qualitatively, using Maxwell's original (teenage) argument based on symmetry alone. Both give distributions which become flatter, and both have a critical temperature of log T ≈ 11–12, roughly what one may expect at the heart of a core-collapse supernova.
An upper limit should not be a surprise. For example if we go to more and more extreme conditions we will reach those prevalent in the early universe at the time of the big bang. Any hope of even a theoretical equilibrium is likely to be lost well before this.
$\begingroup$ User 153362 and David Sher are one and the same person- me- but I have been unable to get this wretched site to acknowledge this. $\endgroup$ – user153362 Feb 10 '18 at 21:48
Here's the relativistic generalization: $$f(\beta)=\left(\frac{\pi\alpha\beta^2}{2(1-\beta^2)}\right)^{(d-1)/2}\frac{e^{-\alpha/\sqrt{1-\beta^2}}}{2\sqrt{1-\beta^2}\,K_{(d+1)/2}(\alpha)}$$ where $\alpha=\frac{mc^2}{kT}$, $d$ is the spatial dimension, and $K$ is a Bessel function. I'll leave the derivation to the reader, but this integral is handy: $$\int_0^\infty e^{-\alpha\cosh\theta}\sinh^{2\nu}\theta\,d\theta=\frac{1}{\sqrt{\pi}}\left(\frac{2}{\alpha}\right)^\nu\Gamma(\nu+1/2)\,K_\nu(\alpha)$$ For $d=3$ I've included a plot with several values of $\alpha$.
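For anyone who wants to inspect this expression numerically, here is a direct evaluation (a sketch that simply plugs into the formula as written for $d=3$; the $\alpha$ values are arbitrary):

```python
# Evaluate the posted f(beta) for d = 3, using the modified Bessel function of the
# second kind for K. alpha = m c^2 / (k_B T); beta = v / c.
import numpy as np
from scipy.special import kv

def f_beta(beta, alpha, d=3):
    beta = np.asarray(beta, dtype=float)
    g2 = 1.0 - beta ** 2                                  # 1 - beta^2
    pref = (np.pi * alpha * beta ** 2 / (2.0 * g2)) ** ((d - 1) / 2.0)
    return pref * np.exp(-alpha / np.sqrt(g2)) / (2.0 * np.sqrt(g2) * kv((d + 1) / 2.0, alpha))

beta = np.linspace(0.01, 0.99, 5)
for a in (0.5, 2.0, 10.0):                                # hotter to colder
    print(a, f_beta(beta, a))
```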
ProfJMiller
Assessing the impact of human mobility to predict regional excess death in Ecuador
Leticia Cuéllar1,
Irene Torres2,
Ethan Romero-Severson1,
Riya Mahesh1,
Nathaniel Ortega1,
Sarah Pungitore1,
Ruian Ke1 &
Nicolas Hengartner1
Scientific Reports volume 12, Article number: 370 (2022)
COVID-19 outbreaks have had high mortality in low- and middle-income countries such as Ecuador. Human mobility is an important factor influencing the spread of diseases, possibly leading to a high burden of disease at the country level. Drastic control measures, such as complete lockdown, are effective epidemic controls, yet in practice one hopes that a partial shutdown would suffice. It is an open problem to determine how much mobility can be allowed while controlling an outbreak. In this paper, we use statistical models to relate human mobility to the excess death in Ecuador while controlling for demographic factors. The mobility index provided by GRANDATA, based on mobile phone users, represents the change in the number of out-of-home events with respect to a benchmark date (March 2nd, 2020). The study confirms the global trend that more men are dying than expected compared to women, and that people under 30 show fewer deaths than expected, particularly individuals younger than 20, with a death rate reduction between 22 and 27%. The weekly median mobility time series shows a sharp decrease in human mobility immediately after a national lockdown was declared on March 17, 2020 and a progressive increase towards the pre-lockdown level within two months. Relating mobility to excess deaths shows a lag in its effect: a decrease in median mobility in the previous two to three weeks decreases excess deaths and, more novel, an increase in mobility variability four weeks prior increases the number of excess deaths.
The coronavirus disease (COVID-19) pandemic has a high morbidity and mortality. Ecuador, like many Latin American countries1, has been hit hard by the COVID-19 pandemic, with over 242 thousand reported cases and 14,500 deaths by the end of January 2021. The first confirmed case of COVID-19, reported on February 29, 2020, was a woman in her '70s who returned from Spain two weeks prior2. On March 13, with 23 confirmed cases, that same woman became the first COVID-19 confirmed fatality3. By April, Ecuador emerged as an "epicenter" of the pandemic in Latin America, with reports of uncollected dead bodies remaining for days in the streets4. To control the outbreak, schools and universities in Ecuador were closed on March 13th 5,6, and on March 17th, Ecuador implemented a national lockdown7. Both of these measures decrease the spread of the disease by reducing contacts between infected and susceptible individuals. It is possible to measure the compliance to these orders by tracking human mobility derived from cell phone data. Such data has emerged as a useful tool to measure human mobility and its relationship to the spread of diseases8,9,10 including SARS-COV-28,11,12, malaria13,14 cholera15, measles16, dengue17,18, and Ebola19,20.
Quantifying the severity of the outbreak in Ecuador is challenging. Due to limited testing, the reported daily counts of COVID-19 incidence and death underestimate the true magnitude of the outbreak. Excess death, which compares total number of observed deaths to the expected number of deaths, is commonly used to assess official undercounted burden of an infectious disease. From a health care and societal perspective, quantifying the excess death associated to an epidemic is informative21,22,23 and is the most reliable measure of current COVID-19 data available24. Analysis of the increase in all-cause mortality can complement the more traditional analysis of the time series of disease incidence and disease-related deaths.
In this paper, we empirically address the hypothesis that a reduction in mobility predicts a decrease in the future number of COVID-19 cases and deaths. We relate the temporal dynamics of excess deaths within each of the 24 provinces in Ecuador (see Fig. 1) to human mobility characteristics derived from mobile phone data provided by GRANDATA through the United Nations Development Program. We show that the implementation of lockdown can dramatically reduce the risk of excess deaths, especially for provinces that were experiencing surges of the SARS-CoV-2 outbreak (e.g., Guayas and Santa Elena). For other provinces, lockdown prevented and delayed the wave of excess deaths by several months (e.g., Pichincha).
Map of Ecuador with provinces. Figure created in R version 4.0.3 (https://www.r-project.org).
We demonstrate that both the median mobility and the variability of mobility, as measured by the interquartile range (IQR), are statistically significant explanatory variables for excess deaths; that is, the effect of lockdowns is mediated, at least partially, through reductions in individual-level mobility. However, the mobility data lack demographic covariates, limiting their predictive power given the importance of demographic characteristics in the risk of both COVID-19 infection and death. Given the usefulness of mobility data to predict epidemic progression, we suggest that future collections of mobility data include demographic covariates to increase the power of predictive models. This paper is organized as follows: Sect. 2 provides background information on Ecuador and the need to relate population mobility to excess deaths. Section 3 describes the mobility data and the death records used to estimate excess deaths. Section 4 describes the statistical methods, whose results are shown in Sect. 5 and discussed in Sect. 6. We conclude in Sect. 7.
Ecuador has an estimated population of 17.5 million; it is located on the Pacific coast in the northwest of South America and borders Colombia to the north and Peru to the south and east. It is divided into 24 provinces, each of which is further divided into cantons. The World Bank classifies it as an upper middle-income country, with a Gini inequality coefficient of 45.4 (where 100 is maximum inequality).
Ecuador's healthcare system is a patchwork of public and private healthcare providers. Public/private health insurance covers 40%/60% of the (employed or farmer) population, and the Ministry of Public Health serves the uninsured. At least 39% of total health spending in Ecuador is out-of-pocket25, with more than half of this amount spent on medication. Life expectancy at birth is 79.3 years for women and 73.9 years for men25, which ranked 3rd best among the 12 South American countries in 2019.
Ecuador officially registered a relatively small number of COVID-19 deaths; yet news articles from March and April 2020 report a large number of deaths. Estimates of excess deaths in26,27 are consistent with Ecuador having a higher than officially reported death toll.
Excess deaths in Ecuador
Our previous study revealed the true impact of COVID-19 on mortality in Ecuador27,28 by quantifying the excess deaths in Ecuador by province, sex, age and ethnicity. Our key findings are 70% more deaths than expected from January 1st to September 23rd, 2020, which is 3 times the level of excess deaths found in high-income countries like the United States22,29,30, England and Wales21, and Italy31,32. Strikingly, the provinces of Guayas and Santa Elena, the worst affected by the pandemic, had over 200% more deaths than expected, with the highest peaks in late March and early April reaching 12- to 15-fold more deaths than expected. As for demographics, we found patterns similar to those observed in other countries: men had a death rate of 183% of the expected value, while the rate in women was 153% of expected deaths; and even though excess death increased with age, the group most affected was the [60, 69] age group, with a death rate of 233% of the expected value, while the age group of people older than 80 had a death rate of 160% of the expected level. Finally, we found that the indigenous ethnic group was disproportionately affected, with a death rate of 220% more than expected compared to 136% for the mestizo group, the most prevalent ethnic group in Ecuador.
Strict lockdown in Ecuador
To control the outbreak, Ecuador closed schools on March 13th, implemented a national lockdown on March 17th, and instituted mandatory face masks from April 7, 202033. The lockdown limited all non-essential movement, including halting intra- and inter-provincial public and private transportation except for medical and legal personnel and police and military forces. From May 11, 2020, cantons progressively opened up on an individual basis, depending on the number of cases reported. This involved public transportation, restaurants and stores with limited occupancy. Schools and universities remained closed during the entire study period.
Relating excess deaths to population mobility characteristics
Using the methodology from27,28, we calculate time series of the weekly all-cause Excess Death Factor (EDF), the ratio of observed over expected deaths, in each province stratified by age group and sex. Spatial–temporal analysis of these weekly time series reveals a wave of COVID-19 that originated in Guayas and spread through the rest of Ecuador over a period of six months, from March 2020 to August 2020. This suggests that Ecuador suffered from a single outbreak with a complex spatiotemporal pattern rather than two distinct waves.
Physiologically, the dynamics of the excess death time series depends on the number of infected individuals in previous weeks. The latter depends on many factors, including human behavior related to the mixing between infected and susceptible individuals. We expect that behavior change during and after the strict lockdown will have a delayed impact on the dynamics of excess deaths. In this paper, we hypothesize that the magnitude of the epidemic (as measured by excess deaths) depends on characteristics of past population mobility distributions. Specifically, we relate weekly excess deaths to mobility characteristics derived from mobile phone data provided by GRANDATA through the United Nations Development Program, to quantify compliance with mandatory lockdown orders in Ecuador and changing mobility patterns after the strict lockdown was lifted.
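One simple way to operationalize this hypothesis is a lagged count regression of observed deaths on past mobility summaries, with the expected deaths as an offset. The sketch below is illustrative only: the column names, the Poisson specification, and the particular lags (three weeks for the median, four weeks for the IQR) are assumptions and not necessarily the model estimated in this paper.

```python
# Sketch: regress weekly observed deaths on lagged mobility summaries per province,
# with log(expected deaths) as an offset so that coefficients act on the EDF scale.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_lagged_model(df: pd.DataFrame):
    """df columns (assumed): province, week, observed_deaths, expected_deaths,
    median_mobility, iqr_mobility -- one row per province and week."""
    df = df.sort_values(["province", "week"]).copy()
    g = df.groupby("province")
    df["median_lag3"] = g["median_mobility"].shift(3)   # median mobility 3 weeks earlier
    df["iqr_lag4"] = g["iqr_mobility"].shift(4)         # mobility variability 4 weeks earlier
    d = df.dropna(subset=["median_lag3", "iqr_lag4"])
    model = smf.glm(
        "observed_deaths ~ median_lag3 + iqr_lag4 + C(province)",
        data=d,
        family=sm.families.Poisson(),
        offset=np.log(d["expected_deaths"]),
    )
    return model.fit()
```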
Cellular phone mobility data are readily available and can be aggregated at relevant geographical resolution; in our case, we have mobility data for each Ecuadorian canton. While this type of data can be collected systematically, aggregated mobility data may be of limited use as a predictor of disease spread. For example, the data lack demographic characteristics of the users. Since age and sex are important predictors of excess death27,28,34,35, having the mobility data disaggregated similarly would be useful to understand how mobility36,37 impacts excess death. Further, GRANDATA excludes people with limited activity, essentially excluding the proportion of individuals staying home, including those telecommuting. Nevertheless, the data from GRANDATA are a direct measure of the relationship between government restrictions and mobility, and are invaluable for quantifying compliance with the mobility restrictions imposed to mitigate the spread of COVID.
Description of the data
Mobility data
GRANDATA is a San Francisco-based company that leverages advanced research in Human Dynamics to identify market trends and predict customer actions. They made their data from Latin America and the Caribbean available to the United Nations Development Program (UNDP) to help combat COVID-19 in those regions38. UNDP provided limited access to the data through competitive research proposal evaluation39.
GRANDATA provided mobility indices for 191 of the 218 cantons (with no data for 27 cantons) from March 1st, 2020 to November 1st, 2020. The indices were obtained as follows: mobile phone users, with their consent, were geolocated to track their mobility patterns, using the most frequent location (the mode of that distribution) as their place of residence. The daily numbers of "out-of-home" events (without keeping track of the destination of each event) were aggregated to preserve the anonymity of phone users and reported as the percentage difference in the daily number of events relative to the baseline date of March 2nd, 2020. While our data do not have the richness of mobility flow data, they provide insights into human behavior and compliance with the strict lockdown, and are sufficiently informative to predict excess deaths.
For each province, we compute the population-weighted median, population-weighted inter-quartile range, and a population-weighted measure of skewness. Figure 2 displays the time series of the median mobility for each province, highlighting in color the statistics for the provinces of Guayas, Manabí, Pichincha, and Santo Domingo. The box indicates the period of strict lockdown. A relative change of -0.5 indicates that there are half as many "out-of-home" trips, a value of 0 corresponds to normal out-of-home travel, and a positive value of 0.5 corresponds to a 50% increase in the number of out-of-home trips.
Time series of relative median mobility change for each province.
Our hypothesis is that the entire mobility distribution, and not just the median or mean, impacts the spread of COVID-19. We summarize the variability of the mobility scores by the inter-quartile range (IQR), which measures the spread of a distribution as the difference between the 75% quantile and the 25% quantile. For a Gaussian distribution, the IQR is 1.35 times the standard deviation. Figure 3 displays the time series of the IQR in each province in gray, with the values for the provinces of Guayas, Manabí, Pichincha, and Santo Domingo highlighted in color. The box indicates the period of strict lockdown.
Time series of mobility change variability (IQR) for all Ecuadorian provinces.
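For illustration, the population-weighted summaries described above can be computed along the following lines. This is a minimal sketch rather than the authors' code: the input layout (one row per canton and day, with columns province, date, mobility and pop) and the quartile-based skewness measure are assumptions on our part.

```r
# Population-weighted quantile: smallest value whose cumulative weight reaches p.
weighted_quantile <- function(x, w, p) {
  o <- order(x)
  x <- x[o]; w <- w[o]
  cw <- cumsum(w) / sum(w)
  x[which(cw >= p)[1]]
}

# Daily province-level summaries of the canton mobility indices.
province_stats <- function(mob) {
  do.call(rbind, lapply(split(mob, list(mob$province, mob$date), drop = TRUE),
    function(d) {
      q1  <- weighted_quantile(d$mobility, d$pop, 0.25)
      med <- weighted_quantile(d$mobility, d$pop, 0.50)
      q3  <- weighted_quantile(d$mobility, d$pop, 0.75)
      data.frame(province = d$province[1], date = d$date[1],
                 median = med, IQR = q3 - q1,
                 skew = ((q3 - med) - (med - q1)) / (q3 - q1))  # Bowley skewness
    }))
}
```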
Historical all-cause mortality records from 2015 to 2019 were obtained from the Ecuadorian National Institute of Statistics and Census. Individual records include date of death, age, sex, and ethnicity of the deceased, place of death registration, residence, and the International Classification of Disease (ICD) code for the cause of death. The Ecuadorian Ministry of Government provided death records from all causes that occurred from January 1st, 2020 to September 26th, 2020. In addition to the date of death, records report sex, age, and registration and residence location by parish, canton, and province, but without the cause of death.
For our analysis, we computed the number of deaths per week, sex, age, and province. We binned age into nine 10-year age groups ([0,9], [10,19], …, [70,79] and 80 and older). For all years (2015–2020), week 1 was set from January 1 to January 7, week 2 from January 8 to January 14, and so on; in all cases, week 53 contains fewer than 7 days. We calculated counts for all 24 Ecuadorian provinces but ignored the three smaller "Not Delimited Areas". We use the 2020 population estimates from the INEC (the Ecuadorian National Institute of Statistics and Census).
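This weekly and age binning can be reproduced with a few lines of base R; the column names below (date, age, sex, province) are assumed, not taken from the authors' scripts.

```r
# Week 1 = January 1-7, week 2 = January 8-14, ...; week 53 holds the leftover days.
deaths$week <- (as.integer(format(deaths$date, "%j")) - 1) %/% 7 + 1

# Nine 10-year age groups: [0,9], [10,19], ..., [70,79], 80+.
deaths$age_group <- cut(deaths$age, breaks = c(seq(0, 80, 10), Inf), right = FALSE,
                        labels = c(paste(seq(0, 70, 10), seq(9, 79, 10), sep = "-"),
                                   "80+"))

# Weekly counts by sex, age group and province.
counts <- aggregate(list(deaths = rep(1, nrow(deaths))),
                    by = deaths[, c("week", "sex", "age_group", "province")],
                    FUN = sum)
```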
Individual records of COVID-19 incidence and testing from Ecuador were obtained from the Ecuadorian Ministry of Public Health. Death records were obtained from the Ministry of Interior. All records were aggregated at the weekly level and binned by sex, age group, and province. Ethics committee approval for the use of health patient information and death records was obtained from the Ethical Committee for Expedited Review of COVID-19 Research of the Ecuadorian Ministry of Health. The study has been approved by the Human Subjects Research Review Board, Los Alamos National Laboratory.
Estimation of excess death
We fitted a Poisson log-linear model to the baseline death counts using the glm function in R 4.0.3 on the historical 2015–2019 data. The predictor variables included sex, age group and their interactions, province, and week of the year as a factor. Using this fitted model, we predict the time series of the weekly expected number of deaths from all causes in each province, disaggregated by age and sex. The fitting of this model was described in ref. 27.
The excess death is the difference between the observed number of deaths in 2020 and the model-predicted expected number of deaths; that is, the number of deaths above what would have been expected had 2020 been a typical year. The Excess Death Factor (EDF) is the ratio of observed over expected deaths. The EDF implicitly normalizes each province by its population size, allowing comparison across provinces. An EDF of 2 means that there are twice as many deaths during the pandemic as in a normal year. The presumption is that these deaths are attributable to COVID-19, although there is uncertainty in that attribution. Indeed, it is possible that mortality from all causes is higher because the healthcare system is overwhelmed, or lower because the strict lockdown likely prevented some deaths (e.g., by reducing the number of vehicle accidents). Nonetheless, the EDF gives a much clearer image of COVID-19-associated mortality than the confirmed COVID-19 deaths.
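A condensed sketch of the baseline fit and the EDF computation is given below. Variable and column names (hist_counts, counts_2020, deaths, expected) are ours; the authors' exact code may differ.

```r
# Baseline Poisson model fitted on the 2015-2019 weekly counts.
baseline <- glm(deaths ~ sex * age_group + province + factor(week),
                family = poisson, data = hist_counts)

# Expected 2020 deaths under a "typical year", and the Excess Death Factor.
counts_2020$expected <- predict(baseline, newdata = counts_2020, type = "response")
counts_2020$EDF      <- counts_2020$deaths / counts_2020$expected
```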
Given that the outbreak in each province reaches its peak in different weeks, we chose to model each province separately. On average, a death from COVID-19 is estimated to occur two to eight weeks after infection40. To explore the effect of this delay, we computed the cross-correlation between the time series of the EDF and the mobility statistics; that is, we calculated the correlation between the EDF in a given week and the mobility statistics in prior weeks. While correlation does not imply causation, the cross-correlation shows how changes in mobility in past weeks are associated with the current EDF.
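In R, such a cross-correlation can be obtained with the ccf function; the sketch below assumes edf and median_mob are aligned weekly series for a single province.

```r
# Negative lags correspond to mobility in weeks preceding the current EDF.
cc <- ccf(median_mob, edf, lag.max = 8, na.action = na.pass, plot = FALSE)
data.frame(lag = cc$lag[cc$lag <= 0], correlation = cc$acf[cc$lag <= 0])
```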
We used Poisson log-linear regression to explain the EDFs by mobility, while controlling for age and sex. To do so, we model the observed weekly deaths Y in each province as a Poisson-distributed variable with mean
$$\log E[Y] = \log(\mathrm{predicted}) + \sum_{j=1}^{k} \alpha_j W_j$$
where W1, …, Wk are the explanatory variables in the model, and log(predicted) is the logarithm of the predicted baseline number of deaths based on the historical data. The latter is used as an offset in the regression. To account for the delay between mobility change and death, our model included lagged mobility statistics; that is, we only included mobility statistics from two, three, and four weeks earlier. Further lags were not considered for the regression analysis given the limited number of weeks of mobility data.
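A sketch of this model for a single province is shown below, with the historical prediction entering as an offset; the lag variable names are our assumptions.

```r
# prov_2020: rows of counts_2020 for a single province, with lagged mobility
# statistics merged in. The historical prediction enters as a fixed offset.
fit <- glm(deaths ~ sex * age_group +
             med_lag2 + med_lag3 + med_lag4 +
             iqr_lag2 + iqr_lag3 + iqr_lag4 +
             offset(log(expected)),
           family = poisson, data = prov_2020)
```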
We applied model selection with stepwise regression using the Akaike information criterion (AIC) on the mobility statistics to identify which lags and other summary statistics were important. We note that the AIC is permissive in that it may retain a few variables that are only weakly associated with the response.
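With the model above, stepwise AIC selection can be run with step(); keeping the demographic terms in the lower scope, as below, is our assumption about how the selection was constrained.

```r
# Stepwise selection over the lagged mobility statistics only.
selected <- step(fit, direction = "both", trace = FALSE,
                 scope = list(lower = ~ sex * age_group, upper = formula(fit)))
```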
Finally, we compute a measure of the variability explained by the mobility statistics: the difference in deviance between the fitted model with and without the mobility statistics, divided by the difference in deviance between the full model41 and the model with only the demographic variables. This quantity is similar in spirit to the R2 statistic in standard linear regression.
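One plausible reading of this measure is sketched below, continuing from the previous sketches; the exact choice of reference models is our interpretation, not a statement of the authors' definition.

```r
# demo_only: demographic terms and offset only; fit: all candidate lags;
# selected: the stepwise-AIC model.
demo_only <- glm(deaths ~ sex * age_group + offset(log(expected)),
                 family = poisson, data = prov_2020)
explained <- (deviance(demo_only) - deviance(selected)) /
             (deviance(demo_only) - deviance(fit))
```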
Geographical distribution of excess death
Figure 4 visualizes the geographical spread of the EDF, aggregated over sex and all age groups, for six selected weeks: the week of March 18, when the lockdown started; the week of April 1st, in the middle of the lockdown; the weeks of June 3rd, July 15 and August 12, after the lockdown; and the week of September 16, the last week for which we had data. A figure for all the weeks is provided in the appendix. The figure shows how the hotspot of the epidemic moved through Ecuador, with different provinces becoming the weekly hotspot at different times.
Geographical and temporal evolution of weekly excess death factor for Ecuador. Figure created in R version 4.0.3 (https://www.r-project.org).
It is easier to visualize the dynamics of the EDFs by displaying the time series as a heatmap (Fig. 5). The provinces are ordered by the date on which a province first recorded a doubling in the number of deaths, i.e., an EDF of two. The box indicates the period of strict lockdown. The plot clearly shows that COVID-19 did not hit Ecuador uniformly, and that the epidemic exhibits both a temporal and a geographic pattern. Interestingly, each province shows a peak period of excess death of about a month or two, during which there are over 2 times more deaths than expected. After that peak period, the excess death ratio declines, but typically remains larger than one. The two vertical black lines indicate the start and end of the strict lockdown, and the red line the start date of mandatory mask use.
Heatmap of excess death factor by province with national restrictions.
There is a delay of several weeks between the time of infection and the time of death. As a result, for some provinces such as Guayas and Santa Elena, the lockdown occurred too late in the course of their outbreak to impact the time and magnitude of the peak of excess deaths. Other provinces, such as Santo Domingo de los Tsáchilas and Pichincha, benefitted from the strict lockdown, as they reached their peak well after the strict lockdown was lifted. Finally, some provinces, such as Manabí, reached their peak in the second half of the strict lockdown and possibly benefitted from it.
Geographical distribution of mobility
Figure 6 shows a similar heatmap for the median mobility score. The two black lines indicate the beginning and end of the strict lockdown, and the red line the date of the mandatory mask order. The plot shows that all the provinces decreased their mobility during the lockdown period and exhibit some increased mobility thereafter. After the lockdown, the population in some provinces, such as Guayas, Pichincha, and Manabí, maintained a lower mobility than before the lockdown, while in other provinces, such as Bolívar and Napo, mobility returned to, or even exceeded, the level prior to the lockdown.
Heatmap of mobility change by province with national restrictions.
Relating excess deaths to mobility
Figure 7 displays simultaneously the EDF and the median mobility for Guayas, Manabí, Pichincha and Santo Domingo. These four provinces were chosen because they are archetypes of the timing of their EDF peak relative to the lockdown period. For Guayas, the lockdown started too late to impact the EDF peak; for Manabí, the lockdown likely had a partial effect on the timing and magnitude of the peak; and both Pichincha and Santo Domingo are examples of peaks occurring well after the lockdown ended. The difference between these last two provinces is the pattern of population mobility: in Pichincha, the population maintained a lower mobility than prior to the lockdown, whereas in Santo Domingo, population mobility returned to pre-lockdown levels. Figure 7, which shows simultaneously the time series of EDFs (in yellow), the fraction of deaths attributed to COVID-19 (red) and the median mobility score (green), reveals that there is no general pattern linking the mobility score to excess deaths that holds across all provinces. As a result, we performed a separate statistical analysis for each of the 24 provinces, and then compared the results in terms of the stage of the outbreak each province was in when interventions were implemented.
Time series of excess death factor, confirmed COVID-19 deaths, and median mobility.
Cross-correlation analysis
Figure 8 displays the cross-correlation between the time series of EDFs and mobility statistics for selected provinces: Guayas, Manabí, Pichincha, and Santo Domingo. The x-axis represents the lag, the difference in weeks, between the mobility statistic and the EDF. A lag of zero (rightmost point) means the correlation is taken between the EDF and the mobility statistic in the same week, whereas a lag of -2 corresponds to a correlation between the EDF and the mobility statistic from two weeks prior. The colors indicate positive (green) and negative (red) correlations.
Cross-correlation between time series of excess death factors and mobility statistics.
In two of these provinces, Guayas and Manabí, the median mobility scores from the past three weeks are negatively correlated with the current EDF. Looking at Fig. 7 we see why: the mobility (green curve) decreases when the EDF (yellow line) is rising. This is indicative of what we expect when a shutdown is instituted because of rising cases.
In contrast, the correlation between the EDF and the median mobility in the past four weeks is positive in Santo Domingo. Again, Fig. 7 tells the story: after the strict lockdown was lifted, the median mobility in Santo Domingo slowly returned to its pre-lockdown level, and at the same time the EDF increased. This is the pattern one expects if mobility restrictions are lifted before the outbreak has been fully contained.
Finally, in Pichincha, the cross-correlation is capturing a situation that lies between the previous cases. After the lockdown ended, the mobility slowly increased, allowing the outbreak to take hold. As it became evident that Pichincha was in the grip of an outbreak, mobility reduced slightly.
Looking at the plot of the cross-correlation for the median in the Appendix, we see the described pattern repeat for all provinces, dividing them into provinces whose outbreak was already in progress at the time of the lockdown, and those whose outbreak emerged after the lockdown.
The interpretation of the variability statistics is harder. Looking at the cross-correlation plots for the variability for all the provinces (see Appendix), we note that for many provinces the cross-correlation is positive. This makes intuitive sense: larger variability implies that some (but not all) individuals are mobile. For a very infectious virus like SARS-CoV-242, a small fraction of individuals who move about and can transmit the disease suffices.
The regression analysis improves upon the cross-correlation in that it allows us to control for age group and sex, considers the joint behavior of a set of explanatory variables, and allows model selection. The latter helps identify which explanatory variables are statistically significant.
We used a log-linear Poisson regression to predict the weekly number of deaths divided into age group and sex categories. By including the logarithm of the predicted deaths from the baseline model fitted on historical death data as a fixed offset, we can interpret the regression as a model for the excess death ratio. Our model used age group, sex and their interaction as covariates. The inclusion of the age-sex interaction allows the model to fit different death rates for men and women in the same age group. In addition to the demographic variables, we included the values of the median mobility and the IQR from two to four weeks prior (named lag 2, lag 3 and lag 4). We applied stepwise regression using the AIC criterion to select which variables are statistically important.
We did not include mobility statistics from the same week or from one week before, to help with interpretation. Indeed, the mobility statistics of the current and immediately preceding weeks are correlated with those of earlier weeks, and our model selection could identify those variables as statistically significant, even though we know from the disease progression, and from how mobility impacts the spread of disease, that these variables are not causative.
The table of estimated coefficients is provided in Table 2 of the appendix. To aid interpretation, we present in Table 1 the multiplier for the expected EDF resulting from a 10% decrease in each mobility statistic. For example, a 10% decrease in the median mobility two weeks prior in the province of Santo Domingo multiplies the expected EDF by 0.905.
Table 1. Multiplier for the expected excess death factor resulting from a 10% decrease in each mobility statistic, by lag.
The empty cells in the table indicate variables that were not selected. For example, the regression model predicting the EDF in the province of Esmeraldas only includes the median mobility score from four weeks before. The provinces are ordered according to the first date on which the EDF reached two.
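These multipliers can be recovered from the coefficients of the selected model (see the sketch in the Methods section), under the assumption that a "10% decrease" corresponds to a drop of 0.10 in the mobility index, which is already expressed as a relative change from baseline.

```r
# exp(-0.10 * coefficient) gives the multiplicative change in the expected EDF.
mob_terms   <- grep("lag", names(coef(selected)), value = TRUE)
multipliers <- round(exp(-0.10 * coef(selected)[mob_terms]), 3)
```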
Again we see two groups: provinces for which the multiplier of a 10% mobility reduction at lag 2 (or at lag 3, when lag 2 is not significant) is greater than one, and provinces for which it is less than one. As we observed in the correlation analysis, the provinces that had early peaks during the strict shutdown all show a factor greater than one. Indeed, these provinces had an increase in the weekly EDFs and a decrease in mobility at the same time.
The second group reached their peak after the end of the strict shutdown. As we noted, the median mobility was increasing during that time, and as mobility increased so did the EDFs. For these provinces the multipliers are less than one, meaning that a 10% decrease in mobility reduces the expected EDF.
We note that for almost all provinces in which the variability (IQR) four weeks prior is statistically significant, a 10% decrease in the IQR leads to a reduction in the expected EDF.
Finally, the table presents the measure of variability explained by the mobility statistics defined in the Methods section.
A limitation of our analysis is that it does not account for the dynamic of the outbreak. We plan to present such an analysis in a forthcoming paper.
This paper contributes to our understanding of non-pharmaceutical interventions (NPIs) by probing the relationship between macroscopic mobility patterns and excess death. We made three key discoveries: (1) excess deaths spread across the country in a west-to-east gradient with the epicenter in Guayas, (2) lockdowns lead to a large reduction in local, individual-level mobility, and (3) both the median and the variability of the mobility indices are reasonable predictor variables of excess deaths, although the relationship between mobility and excess deaths is complicated. Based on these observations, it is conceivable that local lockdowns should be implemented in a more targeted manner: immediately for provinces that experience ongoing COVID-19 outbreaks, to prevent large numbers of excess deaths; and delayed for provinces that are geographically far from the epicenter of the outbreak. This approach could strike a more favorable overall balance between infection control and the social disruption caused by lockdowns.
The complex relationship between mobility and excess deaths is illustrated by the dynamics in Guayas, where the epidemic was already a significant cause of mortality when the national lockdown orders were issued; in this context, reduced average mobility might have a diminished effect, as enough people were already infected to sustain transmission even given reduced contact rates. However, in regions where the lockdown occurred before there was significant excess death, the slow increase in mobility corresponded to an increase in the EDF. While establishing exact causation is difficult in these studies, our study is consistent with the idea that NPIs are partially mediated through reduced mobility and that those effects are strongest when the endogenous transmission rates are low within a given region, due to a limited number of infected persons.
Mobility can serve as an indicator of the extent of SARS-CoV-2 transmission in a community. However, its utility is limited by several factors. First, the relationship between mobility and excess deaths depends on the context of the epidemic and the government response, as demonstrated by our analysis in this work. For example, we found that the mobility measure negatively correlates with the EDF at a two-week lag for provinces (e.g. Guayas) experiencing the first excess deaths wave in April and May 2020. This negative correlation arises because the lockdown was implemented while deaths were increasing as a result of widespread SARS-CoV-2 transmission. On the other hand, we found positive correlations between the two in provinces (e.g. Santo Domingo) that did not experience large outbreaks during the first wave of excess deaths; presumably, as the lockdown was relaxed, increases in mobility led to increases in excess deaths. Second, we and others11,12 often use one or a few summary measures (median or variability) of mobility; however, the spread of SARS-CoV-2 may be impacted differently by different types of mobility. For example, it has been suggested that long-distance travel may help the dispersal of the pathogen11,43. In addition, mobility measures often summarize trips of different types, e.g. grocery shopping, commuting to work, or going to a park, and these different types may contribute to SARS-CoV-2 spread differently.
This paper presents the Excess Death Factor (EDF) time series for all provinces in Ecuador and relates them to mobility data derived from cellular phone data obtained from GRANDATA and the United Nations Development Program for the period from March 1st, 2020 to September 23rd, 2020. The data reveal that the provinces were hit by the pandemic in a clear spatio-temporal pattern, with the peak EDF moving across Ecuador over a relatively short six-month period. A statistical analysis reveals that the relationship between human mobility and the EDF follows two archetypes: one pattern for the provinces whose peak EDF occurred during the strict lockdown, and another for the provinces that reached their peak EDF after the conclusion of the strict lockdown. Finally, we demonstrate that both the median mobility and the variability of mobility, as measured by the IQR, are statistically significant predictors of the EDF.
Researchers interested in the mobility data should contact GRANDATA. The historic mortality data (2015–2019) is publicly available [https://www.ecuadorencifras.gob.ec/defunciones-generales] but the 2020 COVID-19 mortality data cannot be released without permission from the Ecuador Ministry of Health.
Latin America and the Caribbean: Impact of COVID-19. Congressional Research Service. https://fas.org/sgp/crs/row/IF11581.pdf (2021).
Cañizares, A.M. Ecuador confirma el primer caso de coronavirus en el país. CNN Español https://cnnespanol.cnn.com/2020/02/29/alerta-ecuador-confirma-el-primer-caso-de-coronavirus-en-el-pais/ (2020).
Heredia, V., González, J. Ecuador reporta su primera muerte por coronavirus; se trata de la mujer contagiada en el caso primario. El Universo https://www.eluniverso.com/noticias/2020/03/13/nota/7780092/ecuador-reporta-su-primera-muerte-coronavirus-se-trata-primera (2020).
Faiola, A. & Herrero, A. Bodies lie in the streets of Guayaquil, Ecuador, emerging epicenter of the coronavirus in Latin America. Washington Post. https://www.washingtonpost.com/world/the_americas/coronavirus-guayaquil-ecuador-bodies-corpses-streets/2020/04/03/79c786c8-7522-11ea-ad9b-254ec99993bc_story.html (2020).
Ecuadorian Ministry of Education. Comunicado oficial. Suspensión de las actividades académicas para los estudiantes para precautelar la salud de la comunidad educativa.[Official Communication. Classes are suspended for students to protect the health of the educational community]. 2020 (March 12). https://educacion.gob.ec/comunicado-oficial-suspension-de-las-actividades-academicas-para-los-estudiantes-para-precautelar-la-salud-de-la-comunidad-educativa/ (2020).
El Universo. Senescyt suspende actividades universitarias a nivel nacional; Espol dará clases virtuales; PUCE suspende clases [Senescyt suspends university activities across the country; Espol will teach virtually; PUCE suspends classes]. https://www.eluniverso.com/noticias/2020/03/12/nota/7778493/senescyt-suspende-actividades-universitarias-nivel-nacional-espol (2020).
President of Ecuador. Executive Decree 1017. https://www.defensa.gob.ec/wp-content/uploads/downloads/2020/03/Decreto_presidencial_No_1017_17-Marzo-2020.pdf (2020).
Grantz, K. et al. The use of mobile phone data to inform analysis of COVID-19 pandemic epidemiology. Nat. Commun. 11(1), 1–8 (2020).
Buckee, C. O. et al. Aggregated mobility data could help fight COVID-19. Science 368, 145–146 (2020).
Wesolowski, A., Buckee, C. O., Engø-Monsen, K. & Metcalf, C. J. E. Connecting mobility to infectious diseases: the promise and limits of mobile phone data. J. Infect. Dis. 214, S414–S420 (2016).
Mena, G. et al. Socioeconomic status determines COVID-19 incidence and related mortality in Santiago, Chile. Science 372, 6545 (2021).
Kishore, N. et al. Lockdowns result in changes in human mobility which may impact the epidemiologic dynamics of SARS-CoV-2. Sci. Rep. 11, 6995 (2021).
Ruktanonchai, N. W. et al. Identifying malaria transmission foci for elimination using human mobility data. PLoS Comput. Biol. 12, e1004846 (2016).
Wesolowski, A. et al. Quantifying the impact of human mobility on malaria. Science 338, 267–270 (2012).
Peak, C. M., Reilly, A. L., Azman, A. S. & Buckee, C. O. Prolonging herd immunity to cholera via vaccination: accounting for human mobility and waning vaccine effects. PLoS Negl. Trop. Dis. 12, e0006257 (2018).
Wesolowski, A. et al. Measles outbreak risk in Pakistan: exploring the potential of combining vaccination coverage and incidence data with novel data-streams to strengthen control. Epidemiol. Infect. 146, 1575–1583 (2018).
Cummings, D. A. T. et al. Travelling waves in the occurrence of dengue haemorrhagic fever in Thailand. Nature 427, 344–347 (2004).
Wesolowski, A. et al. Impact of human mobility on the emergence of dengue epidemics in Pakistan. Proc. Natl Acad. Sci. 112, 11887–11892 (2015).
Peak, C. M. et al. Population mobility reductions associated with travel restrictions during the Ebola epidemic in Sierra Leone: use of mobile phone data. Int. J. Epidemiol. 47, 1562–1570 (2018).
Wesolowski, A. et al. Commentary: containing the Ebola outbreak - the potential and challenge of mobile network data. PLoS Curr. 6, https://pubmed.ncbi.nlm.nih.gov/25642369 (2014).
Aburto, J. M. et al. Estimating the burden of the COVID-19 pandemic on mortality, life expectancy and lifespan inequality in England and Wales: a population-level analysis. J. Epidemiol. Community Health. 75, 735–740 (2021).
Woolf, S. H. et al. Excess deaths from COVID-19 and other causes, March-July 2020. JAMA 324(15), 1562–1564 (2020).
Zylke, J. W. & Bauchner, H. Mortality and morbidity: the measure of a pandemic. JAMA 324(5), 458–459 (2020).
National Academies of Sciences, Engineering, and Medicine. Evaluating Data Types: A Guide for Decision Makers using Data to Understand the Extent and Spread of COVID-19. Washington, DC: The National Academies Press. doi:https://doi.org/10.17226/25826 (2020).
OECD/World Bank. Health at a Glance: Latin America and the Caribbean 2020, OECD Publishing, Paris, doi:https://doi.org/10.1787/6089164f-en (2020).
Coronavirus tracked: the latest figures as countries start to reopen. Financial Times https://www.ft.com/content/a26fbf7e-48f8-11ea-aeb3-955839e06441 (2020).
Cuéllar, L., Torres, I., Romero-Severson, E., Mahesh, R., Ortega, N., Pungitore, S., Ke, R., Hengartner, N., Excess deaths reveal the true spatial, temporal, and demographic impact of COVID-19 on mortality in Ecuador. Int. J. Epidemiol (2021).
Cuéllar, L., Torres, I., Romero-Severson, E., Mahesh, R., Ortega, N., Pungitore, S., Hengartner, N., Ke, R., Excess Deaths reveal unequal impact of COVID-19 in Ecuador, BMJ Glob. Health, 6:e006446 (2021).
Rossen, L. M., Branum, A. M., Ahmad, F. B., Sutton, P. & Anderson, R. N. Excess Deaths Associated with COVID-19, by Age and Race and Ethnicity: United States, January 26-October 3, 2020. MMWR Morb. Mortal Wkly Rep. 69(42), 1522–1527 (2020).
Weinberger, D. M. et al. Estimation of excess deaths associated with the COVID-19 Pandemic in the United States, March to May 2020. JAMA Int. Med. 180(10), 1336–1344 (2020).
Michelozzi, P. et al. Temporal dynamics in total excess mortality and COVID-19 deaths in Italian cities. BMC Public Health 20(1), 1238 (2020).
Scortichini, M., Schneider Dos Santos, R., De' Donato, F., et al. Excess mortality during the COVID-19 outbreak in Italy: a two-stage interrupted time-series analysis. Int. J. Epidemiol. 2021; 49(6): 1909–1917.
Emergency Operations Committee. Resolutions - April 7, 2020. https://www.gestionderiesgos.gob.ec/resoluciones-coe-nacional-07-de-abril-2020 (2020).
Bhopal, S. S. et al. Sex differential in COVID-19 mortality varies markedly by age. Lancet 396(10250), 532–533 (2020).
Bilinski, A. & Emanuel, E. J. COVID-19 and Excess All-Cause Mortality in the US and 18 Comparison Countries. JAMA 324(20), 2100–2102 (2020).
Kraemer, M.U.G. et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. Preprint at doi:https://doi.org/10.1101/2020.03.02.20026708 (2020).
Basellini, U. et al Linking excess mortality to mobility data during the first wave of COVID-19 in England and Wales. SSM - Population Health 14, 100799. https://doi.org/10.1016/j.ssmph.2021.100799 (2021) .
United Nations Developing Program and Grandata join forces, April 27, 2020. https://www.latinamerica.undp.org/content/rblac/en/home/presscenter/pressreleases/2020/undp-and-grandata-join-forces-in-a-tool-for-addressing-public-po.html (2020).
United Nations Development Program, Call for research proposals, August 4, 2020. https://www.latinamerica.undp.org/content/rblac/en/home/presscenter/pressreleases/2020/exploring-impact-and-response-to-the-covid-19-pandemic-in-latin-.html (2020).
WHO. Report of the WHO-China Joint Mission on Coronavirus Disease 2019 (COVID-19). https://www.who.int/docs/default-source/coronaviruse/who-china-joint-mission-on-covid-19-final-report.pdf
Cornillon, P.A., Hengartner, N., Matzner-Løber, E. & Rouvière, L. Régression avec Rm 2e édition (EDP Sciences, 2019).
Sanche, S. et al. High Contagiousness and Rapid Spread of Severe Acute Respiratory Syndrome Coronavirus 2. Emerg. Infect. Dis. 26(7), 1470–1477 (2020).
Wilson, M. Travel and the Emergence of Infectious Diseases. Emerg. Infect. Dis. 1(2) 39–46. https://doi.org/10.3201/eid0102.950201 (1995).
Los Alamos National Laboratory, Los Alamos, NM, USA
Leticia Cuéllar, Ethan Romero-Severson, Riya Mahesh, Nathaniel Ortega, Sarah Pungitore, Ruian Ke & Nicolas Hengartner
Fundación Octaedro, Quito, Ecuador
Irene Torres
Conceptualization: LC, IT, ERS, NH and RK; data acquisition: IT; methodology and formal analysis: LC, ERS, RM, NO, SP and NH; underlying data validation: LC, IT and NH; visualization and writing: LC and NH. All authors reviewed the manuscript.
Correspondence to Leticia Cuéllar or Irene Torres.
Supplementary Information.
Cuéllar, L., Torres, I., Romero-Severson, E. et al. Assessing the impact of human mobility to predict regional excess death in Ecuador. Sci Rep 12, 370 (2022). https://doi.org/10.1038/s41598-021-03926-0
Received: 25 May 2021
Showing results 1 to 10 of 19.
Function outlier [GmAMisc v1.1.1]
R function for univariate outliers detection
The function performs univariate outlier detection using three different methods. These methods are described in: Wilcox R R, "Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy", Springer 2010 (2nd edition), pages 31-35.
Function outlier_detection [mdapack v0.0.2]
Outlier detection function
'outlier_detection' visually detects and highlights outliers in a univariate continuous variable. The function fetches the values of data points that lie beyond the extremes of the whiskers (observations that lie outside of 1.5 * IQR).
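For reference, the 1.5 * IQR whisker rule mentioned here can be reproduced in a few lines of base R; this is a generic sketch, not the package's implementation.

```r
# Tukey-fence rule: flag points beyond k * IQR from the quartiles (k = 1.5).
tukey_outliers <- function(x, k = 1.5) {
  q <- quantile(x, c(0.25, 0.75), na.rm = TRUE)
  fence <- k * diff(q)
  x[x < q[1] - fence | x > q[2] + fence]
}
tukey_outliers(c(rnorm(100), 8))  # typically returns the injected value 8
```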
Function outliers [FactoInvestigate v1.7]
Outliers detection
Detection of singular individuals that concentrate too much inertia.
Function cook.outliers [referenceIntervals v1.2.0]
Determines outliers using Cook's Distance
A linear regression model is calculated for the data (which is the mean for one-dimensional data). From that, using the Cook's distance of each data point, outliers are determined and returned.
Function horn.outliers [referenceIntervals v1.2.0]
Determines outliers using Horn's method and Tukey's interquartile fences on a Box-Cox transformation of the data.
This function determines outliers in a Box-Cox transformed dataset using Horn's method of outlier detection using Tukey's interquartile fences. If a data point lies outside 1.5 * IQR from the 1st or 3rd quartile point, it is an outlier.
Function dixon.outliers [referenceIntervals v1.2.0]
Determines outliers using Dixon's Q Test method
This determines outliers of the dataset by calculating Dixon's Q statistic and comparing it to a standardized table of statistics. This method can only determine outliers for datasets of size 3 <= n <= 30. This function requires the outliers package.
Function outlier.detection [fdasrvf v1.9.4]
Outlier Detection
This function calculates outliers using geodesic distances of the SRVFs from the median.
Function RSlo [MGBT v1.0.4]
low outlier (definition)
Rosner RST Test Adjusted for Low Outliers
The Rosner (1975) method or the essence of the method, given the order statistics \(x_{[1:n]} \le x_{[2:n]} \le \cdots \le x_{[(n-1):n]} \le x_{[n:n]}\), is the statistic: $$RS_r = \frac{ x_{[r:n]} - \mathrm{mean}\{x_{[(r+1)\rightarrow(n-r):n]}\} } {\sqrt{\mathrm{var}\{x_{[(r+1)\rightarrow(n-r):n]}\}}}\mbox{,} $$
Function BLlo [MGBT v1.0.4]
Barnett and Lewis Test Adjusted for Low Outliers
The Barnett and Lewis (1995, p. 224; \(T_{\mathrm{N}3}\)) so-labeled "N3 method" with TAC adjustment to look for low outliers. The essence of the method, given the order statistics \(x_{[1:n]} \le x_{[2:n]} \le \cdots \le x_{[(n-1):n]} \le x_{[n:n]}\), is the statistic $$BL_r = T_{\mathrm{N}3} = \frac{ \sum_{i=1}^r x_{[i:n]} - r \times \mathrm{mean}\{x_{[1:n]}\} } {\sqrt{\mathrm{var}\{x_{[1:n]}\}}}\mbox{,}$$ for the mean and variance of the observations. Barnett and Lewis (1995, p. 218) brand this statistic as a test of the "\(k \ge 2\) upper outliers" but for the MGBT package "lower" applies in TAC reformulation. Barnett and Lewis (1995, p. 218) show an example of a modification for two low outliers as \((2\overline{x} - x_{[2:n]} - x_{[1:n]})/s\) for the mean \(\mu\) and standard deviation \(s\). TAC reformulation thus differs by a sign. The \(BL_r\) is a sum of internally studentized deviations from the mean: $$SP(t) \le {n \choose k} P\biggl(\bm{t}(n-2) > \biggr[\frac{n(n-2)t^2}{r(n-r)(n-1)-nt^2}\biggl]^{1/2}\biggr)\mbox{,}$$ where \(\bm{t}(df)\) is the t-distribution for \(df\) degrees of freedom, and this is an inequality when $$t \ge \sqrt{r^2(n-1)(n-r-1)/(nr+n)}\mbox{,}$$ where \(SP(t)\) is the probability that \(T_{\mathrm{N}3} > t\) when the inequality holds. For reference, Barnett and Lewis (1995, p. 491) example tables of critical values for \(n=10\) for \(k \in 2,3,4\) at 5-percent significant level are \(3.18\), \(3.82\), and \(4.17\), respectively. One of these is evaluated in the Examples.
Function shape.fd.outliers [ddalpha v1.3.11]
Functional Depth-Based Shape Outlier Detection
Detects functional outliers of first three orders, based on the order extended integrated depth for functional data.
Influence of experimental parameters on the laser heating of an optical trap
Frederic Català1,2,
Ferran Marsà2,3,
Mario Montes-Usategui1,2,3,
Arnau Farré2,3 &
Estela Martín-Badosa (ORCID: 0000-0002-1162-3567)1,2
Optical manipulation and tweezers
In optical tweezers, heating of the sample due to absorption of the laser light is a major concern as temperature plays an important role at microscopic scale. A popular rule of thumb is to consider that, at the typical wavelength of 1064 nm, the focused laser induces a heating rate of B = 1 °C/100 mW. We analysed this effect under different routine experimental conditions and found a remarkable variability in the temperature increase. Importantly, we determined that temperature can easily rise by as much as 4 °C at a relatively low power of 100 mW, for dielectric, non-absorbing particles with certain sets of specific, but common, parameters. Heating was determined from measurements of light momentum changes under drag forces at different powers, which proved to provide precise and robust results in watery buffers. We contrasted the experiments with computer simulations and obtained good agreement. These results suggest that this remarkable heating could be responsible for changes in the sample under study and could lead to serious damage of live specimens. It is therefore advisable to determine the temperature increase in each specific experiment and avoid the use of a universal rule that could inadvertently lead to critical changes in the sample.
Optical tweezers have been proven to be a powerful microtool for biological studies since their inception, pioneered by Arthur Ashkin1. This non-invasive technique exhibits some advantageous features including non-contact forces in the range 0.1–100 pN and compatibility with liquid medium environments which make it highly suitable for application in biological studies. However, even at the innocuous laser wavelength of 1064 nm used in our experiments, light absorption in water is not negligible; so localized heating at the focus of the optical trap and heat transfer to the immediate surroundings could produce small but significant thermal effects.
Different methods have been used to determine temperature increases due to the use of optical tweezers (see Supplementary Figure 6 and Supplementary Table 2). By means of the fluorescence emission shifts of a temperature-sensitive Laurdan dye probe, Liu et al.2,3 measured temperature increases of 1 °C/100 mW, 1.15 °C/100 mW and 1.45 °C/100 mW for trapped live human sperm cells, hamster ovary cells and liposomes, respectively. Haro-González et al.4 found an increase of 9.9 °C/100 mW for a 980-nm laser trap, using quantum dot luminescence thermometry. Ebert et al.5 developed a fluorescence ratio technique using the temperature sensitive dye Rhodamine B and the temperature-independent reference dye Rhodamine 110, and obtained a heating of 1.3 °C/100 mW. The same method was used by Wetzel et al., who determined an increase of 2.3 °C/100 mW6. Kuo7 used an adaptation of the wax-melting method to estimate temperature increase, which was found to be 1.7 °C/100 mW. The changes induced by temperature in the refractive index of water were monitored by Celliers et al.8, who measured a 4 °C temperature increase in a 55-mW, 985-nm laser trap (7.3 °C/100 mW).
Following a completely different approach, Peterman et al.9 introduced a technique based on the analysis of the thermal motion of a trapped bead and measured a temperature increase of 3 °C-4 °C/100 mW for different sizes of polystyrene beads diluted in glycerol and around 0.8 °C/100 mW for silica beads diluted in water. With the same method, Abbondanzieri et al.10, using a system based on a dual-beam optical trap, measured a heating of 0.4 °C every 100 mW of laser power at the back of the objective. Similarly, Jun et al.11 compared the active and passive calibration of an optical trap and inferred heating rates of 2.4 °C/100 mW and 1.2 °C/100 mW for 0.49-μm polystyrene and 0.64-μm silica beads, respectively, in a 980-nm laser trap. Finally, for a 975-nm low-NA laser trap, Mao et al.12 obtained 5.6 °C/100 mW by measuring the temperature-dependent viscosity of water in a Stokes drag experiment from direct force measurements based on light momentum.
Despite this variation in reported results, temperature increase at 1064 nm is often assumed, as a rule of thumb, to be approximately 1 °C/100 mW13. This small heating is then frequently used to argue for the relative innocuousness of laser traps, especially when used with live samples, such as cells. However, as discussed in ref.9, laser heating is directly dependent on the intensity distribution of the trapping laser and is therefore sensitive to changes in experimental conditions which could give rise to particularly unfavourable cases with considerably larger temperature increases. For example, Peterman et al.9 showed the increase in heating with the axial position of the trap in glycerol. Unfortunately, to the best of our knowledge, that is the only work in which the variability of the heating rate, B, is analysed with respect to some parameters.
To fill this gap, we assessed the change in B under different experimental conditions. Heating was measured while changing the numerical aperture of the objective, the suspension liquid, the material and size of the trapped particle, and the position of the trap. We found a remarkable dependence on some of the parameters and, more importantly, temperature increases as large as 4.0 °C/100 mW in water. The experimental scheme was based on the analysis of the variation in measured drag forces on trapped particles under different laser powers, similar to that adopted in refs9,12. The force was determined through measurement of the change in light momentum14,15,16. This method is independent of the specific properties of the sample; and particularly, it does not depend on the laser power or the chamber temperature. This allowed us to directly infer variations in the measured force as changes in the medium viscosity caused by the temperature rise. Our experiments were complemented with computer simulations of the heating produced by an optical trap. We used the model proposed by Peterman et al.9, which provided an accurate description of the experimental data.
Measurement of temperature increase at the focus of the optical trap
In this work, determination of the local temperature in an optical trap is based on the measurement of the viscosity of the solvent, which is accessible in a single step from Stokes-drag force measurements. As described in Methods and in the Supplementary Information, drag forces are assessed from light-momentum measurements, such that F_drag = −α_detector·S_x, where S_x is the position-sensitive detector (PSD) signal. The light-momentum calibration parameter, α_detector, has been demonstrated to be independent of the geometry of the trapped object and of the structure of the trapping beam14,15,16,17, and, importantly, it does not depend on the laser power or the sample temperature. In this way, changes in the measured drag force when incrementing the trap power, and therefore increasing the sample temperature, are directly caused by the variation in the medium viscosity, given as follows:
$$\eta(T)=\frac{\alpha_{\mathrm{detector}}\, S_x}{6\pi R v b}$$
where R is the radius of the trapped microsphere, v is the flow velocity and b is the Faxén correction due to a sphere-to-surface interaction18. With this intention, we used a piezoelectric stage to produce constant drag forces. In Fig. 1a (top), we show the sudden drop in the force reading when the trap power is switched from 20 mW to 200 mW. As all the other variables are fixed, this is indicative of a change in the water viscosity that arises from the temperature increase induced by laser heating. When the intensity is decreased back to its original value, the original force is reversibly recovered. Laser heating at and around the focus of an optical trap is due to absorption of the NIR laser light, primarily by the solvent2,8,9,12, as we discuss below. The dynamic viscosity thus becomes a natural vehicle to connect force readings and sample temperature. Variations in force readings can be translated into changes in the viscosity of the medium, which we can directly convert into changes of sample temperature (see Fig. 1a, bottom). For water, the relation between viscosity and temperature is given by9:
$$\mathrm{log}({\eta }_{{\rm{water}}}(T))=\frac{1.3272\cdot (293.15-T)-0.001053\cdot {(T-293.15)}^{2}}{T-168.15}-2.999$$
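As an illustration of Eqs 1 and 2, the sketch below converts a momentum-based drag force reading into a viscosity and then inverts the viscosity–temperature relation numerically. It is a minimal sketch, not the authors' analysis code, and the numerical values are hypothetical.

```r
# Eq. 1: viscosity from the measured drag force (F_drag = alpha_detector * S_x).
eta_from_force <- function(F_drag, R, v, b) F_drag / (6 * pi * R * v * b)

# Eq. 2: viscosity of water (Pa s) as a function of temperature in kelvin.
eta_water <- function(T_K) {
  10^((1.3272 * (293.15 - T_K) - 0.001053 * (T_K - 293.15)^2) /
        (T_K - 168.15) - 2.999)
}

# Invert Eq. 2 to recover the sample temperature from the inferred viscosity.
temp_from_eta <- function(eta)
  uniroot(function(T_K) eta_water(T_K) - eta, interval = c(273.15, 373.15))$root

eta <- eta_from_force(F_drag = 1.0e-12,            # ~1 pN, hypothetical reading
                      R = 0.58e-6, v = 100e-6, b = 1.02)
temp_from_eta(eta) - 273.15                        # sample temperature in deg C
```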
Determination of temperature through measurements of light momentum. (a) Top: Drag force measured on a 1.16-μm bead for 200 seconds. Each dot is the mean drag force obtained from the square force signal over one cycle, which is produced by the piezoelectric stage continuously applying a triangular oscillation. Power is switched from 20 mW to 200 mW at second 30 and back to 20 mW after 100 seconds. Bottom: Temperature obtained from the force measurement depicted in the top panel, through Eqs 1 and 2. (b) Both the heating rate, B, and room temperature, T room, were determined from the linear fit to measurements of temperature at 10 different laser powers between 20 and 200 mW. (c) Temperature values obtained from all the experiments on 1.16-μm beads in water (red circles) compared with independent measurements with a precision thermometer (black dots). (d) The ratio T room/T ref showed a standard deviation similar to that expected from the ±2–3% of the diameter of the beads used.
In agreement with previous results2, in all our experiments we found a linear relation between the trap power, P, and the temperature, T, in the range 20 °C–30 °C: T = T room + B·P, from which we determined the heating rate, B (°C/100 mW), and the room temperature, T room (see Fig. 1b). In all the experiments, we swept the trap power from 20 mW to 200 mW (in steps of 20 mW) by appropriately setting the output laser power. As described in Methods, the trap power was monitored from the S SUM signal of the PSD.
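The heating rate and room temperature then follow from a straight-line fit, as in Fig. 1b; the temperature values below are illustrative numbers, not measured data.

```r
P      <- seq(20, 200, by = 20)                                          # trap power, mW
T_meas <- c(24.1, 24.5, 24.8, 25.3, 25.6, 26.0, 26.3, 26.8, 27.1, 27.5)  # deg C
fit    <- lm(T_meas ~ P)
coef(fit)["(Intercept)"]      # T_room estimate (deg C)
coef(fit)["P"] * 100          # heating rate B (deg C per 100 mW)
```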
We compared T room with an independent measurement obtained with a precision thermometer (±0.1 °C), T ref, and observed clear correspondence between the measurements (Fig. 1c), with an average deviation of −1.5 °C (−0.5%) due to a slight discrepancy between the measured and the theoretical drag force (+2%). When normalized by T ref, the room temperature measurements exhibited a ±0.3% standard deviation (Fig. 1d), which mainly comes from the ±2–3% standard deviation of the diameter of the monodisperse polystyrene microspheres we used (see Supplementary Table 1).
Experiments: dependence on experimental conditions
Measurements of B, carried out as described in Methods and shown in Fig. 1b, had a reproducibility of approximately ±10% (compatible with that estimated from the linear fit), as shown in the experiment in Fig. 2a, which was repeated 10 times. Here, a 1.16-μm polystyrene microbead trapped at z trap = 10 μm from the lower coverslip of the microchamber experienced a heating rate of 1.9 ± 0.2 °C/100 mW (±11%), similar to previous results in the literature (see Supplementary Figure 6 and Supplementary Table 2).
Variability of B under different experimental conditions. (a) Systematic measurements of the heating rate with the same sample showed a standard deviation of 11%, confirming the error estimation of 10–15% in our measurements. (b) Variation of B for different axial positions of the trap with a 1.16-μm polystyrene microbead (squares). The mean and standard deviation of the measured B values beyond z trap = 10 μm, equal to the independent measurement shown in a, are indicated by the red line and the grey area, respectively. Results from the simulations described in the next section are superimposed (orange trace). (c) Heating for particles of different radius, R, and the same material (polystyrene). Upward (downward) triangles correspond to particles trapped with the water- (oil-) immersion NA = 1.2 (NA = 1.3) objective in water, while circles correspond to the particles trapped with the NA = 1.2 objective in glycerol. The solid lines are radial temperature distributions obtained from simulations in water (α/K = 23.7 °C/W) and glycerol (α/K = 76.4 °C/W) respectively, both for the NA = 1.2 objective. The inset shows the experimental ratio B glycerol/B water (squares), which is similar to the nominal ratio: 3.2 (dashed line). (d) Heating for 0.61-μm and 3.00-μm polystyrene microbeads in water, trapped with the oil-immersion objective with different values of NAeff. (e) B was found to be equal for three 2-μm particles of different materials (MR: melamine resin, PS: polystyrene).
The first parameter analysed that affects trap heating was the axial position of the trap (Fig. 2b): a response previously measured and modelled in glycerol9. As discussed in the next section, the higher heat conductivity of the coverslip means that it acts as a heat sink, cooling the trap more as it is placed closer to the interface. Similarly to previous findings9, heating was observed to increase rapidly in the first 10 μm above the lower glass surface, though in our case it saturated at approximately 1.9 °C/100 mW (the same value obtained in Fig. 2a) in the range 10–30 μm. The temperature increase is chiefly concentrated in a region ~10–15 μm around the trap position, z trap, so that heat dissipation by the coverslip is actually sufficient to cool down the trap only below z trap = 10–15 μm. For this experiment, the water-immersion, NA = 1.2 objective was used, which avoids spherical aberration and permits efficient trapping over the whole microchamber axial range.
We then analysed the effect of particle size. Despite it having been suggested that this parameter has only a small influence on the final result, we observed a difference of over twofold in B for particles with diameters from 0.61 μm to 3.00 μm (see Fig. 2c): much greater than the error associated with the determination of B (10–15%, see Fig. 2a). The larger the particle, the lower the heating was. As discussed in the next section, this result can be directly connected to the radial temperature profile caused by the optical trap. This was reproduced by both the water-immersion NA = 1.2 and the oil-immersion NA = 1.3 objectives, though greater heating was observed with the latter, especially for smaller beads.
In addition, we studied the effect of reducing the effective numerical aperture, NAeff, of the trapping beam. A diaphragm was placed at an optical equivalent of the entrance pupil of the NA = 1.3 objective (of focal length f′ obj) to modify the diameter of the beam, D beam, such that NAeff = D beam/(2 f′ obj). The measurements were critically dependent on NAeff, exhibiting variations as large as ±2 °C/100 mW and ±1 °C/100 mW for 0.61-μm and 3.00-μm polystyrene beads, respectively (see Fig. 2d). For the 0.61-μm beads, our experimental results did not exhibit a monotonic trend, whereas they revealed an ascending pattern for the 3.00-μm beads. The output laser power was set to produce a similar range of trap powers (20 mW to 200 mW) when reducing NAeff (see Methods).
As reported by Peterman et al., B also depends on the suspension medium. The variation in temperature found when we changed water for glycerol was similar to the quotient between the ratio of light absorption, α (m−1), to thermal conductivity, K, between the two liquids, (α/K)glycerol/(α/K)water = 3.2 (see Fig. 2c, inset). This result suggests that, as assumed by different models2,8,9,12, the temperature increase depends linearly on α/K.
In contrast, heating seemed to be independent of the material the trapped particle was made of. Figure 2e shows the results for microspheres of similar sizes (2.19 μm, 2.32 μm and 1.87 μm) but different dielectric materials (melamine resin, silica and polystyrene, respectively). Despite the ratio α/K for the three beads differing by 4 orders of magnitude (13.8 °C/W, 3.6·10−3 °C/W, 50 °C/W, respectively), B was found to change by only 0.1 °C/100 mW (±4%), in accordance with the fact that trap heating is mainly governed by laser absorption in the surrounding buffer.
Simulations: heat transport
Temperature, T, is in general terms governed by the heat equation:
$${\nabla }^{2}T=\frac{1}{k}\frac{\partial T}{\partial t}-\frac{q}{K}$$
where k = K/(c·d) is the thermal diffusivity, c is the specific heat, d is the density, K is the thermal conductivity and q is the energy absorbed per unit volume and unit time. Eq. 3 describes how the energy transferred by the laser is diffused throughout the surrounding space.
To solve this equation, we need to specify both the function q and the boundary conditions. We used the absorption proposed by Peterman et al., which is an improved version of the spherical model of Liu et al.2, taking into account the finite volume of the focus9:
$$q\,=\,\frac{1}{2\pi }\frac{\alpha P}{{r}^{2}+{(\lambda /(2\pi N{A}^{2}))}^{2}}$$
Here, we used f(θ) = 1/2π (see section "Theoretical model" in ref.9) and we included the explicit dependence on the NA. In the equation, α (m−1) corresponds to the optical absorption and P to the incident laser power. Considering the geometry of the problem, defined by the flat parallel coverslips, with higher conductivity (1.4 W/m/K) and lower absorption (0.005 m−1) than water, acting as heat sinks, we alternatively chose boundary conditions with cylindrical coordinates, in which the spherical radial coordinate in Eq. 4 is expressed as \({r}^{2}={\rho }^{2}+{(z-{z}_{{\rm{trap}}})}^{2}\). We set the Dirichlet boundary conditions: ΔT = 0 at ρ = 80 μm and z = 80 μm; whereas a real water-glass interface was simulated at z = 0, which is the surface closest to the optical trap in all the experiments. The trap height, z trap, was 10 μm in all the experiments, unless otherwise stated.
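A coarse numerical sketch of the steady-state limit of Eq. 3 with the source of Eq. 4 is given below, as a Jacobi relaxation on a uniform (ρ, z) grid. The grid spacing, iteration count, material constants and the simplified ΔT = 0 condition at the glass interface are our assumptions, not the authors' simulation code, and the 1-μm grid cannot resolve the focal spot itself.

```r
K_water <- 0.60        # W/(m K), thermal conductivity of water
alpha   <- 14.2        # 1/m, approximate absorption of water at 1064 nm
P       <- 0.1         # W (100 mW)
NA_eff  <- 1.2
lambda  <- 1.064e-6    # m
z_trap  <- 10e-6       # m, trap height

h   <- 1e-6                                              # 1 um grid spacing
rho <- seq(0, 80e-6, by = h)
z   <- seq(0, 80e-6, by = h)
Tm  <- matrix(0, nrow = length(rho), ncol = length(z))   # Delta T (deg C)

w2 <- (lambda / (2 * pi * NA_eff^2))^2                   # finite focal volume term of Eq. 4
q  <- outer(rho, z, function(r, zz)
        alpha * P / (2 * pi * (r^2 + (zz - z_trap)^2 + w2)))

i <- 2:(length(rho) - 1); j <- 2:(length(z) - 1)
for (it in 1:10000) {                          # increase for tighter convergence
  Tm[i, j] <- (Tm[i + 1, j] + Tm[i - 1, j] + Tm[i, j + 1] + Tm[i, j - 1] +
               (h / (2 * rho[i])) * (Tm[i + 1, j] - Tm[i - 1, j]) +
               h^2 * q[i, j] / K_water) / 4    # cylindrical Laplacian with source q/K
  Tm[1, ]        <- Tm[2, ]                    # symmetry across the optical axis
  Tm[, 1]        <- 0                          # glass coverslip ~ room temperature (simplified)
  Tm[nrow(Tm), ] <- 0                          # Dirichlet Delta T = 0 at rho = 80 um
  Tm[, ncol(Tm)] <- 0                          # Dirichlet Delta T = 0 at z = 80 um
}
max(Tm)    # approximate heating near the focus (deg C) at 100 mW
```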
The evaluation of the temporal part of the equation shows that after 1 ms the temperature reaches 90% of the steady-state value (Fig. 3a). For the highest flow velocity applied in our experiments (320 μm/s for 0.61-μm beads in water; see Supplementary Table 1), such a characteristic heating time is still faster than the fluid displacement time around the focal region. Sample heating of more than 90% is thereby ensured and a steady-state situation can be considered due to our precision in temperature measurements, which was around 10–15%. This agrees with the results in Supplementary Figure 4, which show no substantial variation in the measured viscosity for flow velocities up to 700 μm/s. For the lower flow rates applied to larger microbeads, the rate of heat diffusion is even faster compared with the velocity of the medium.
Simulation of temperature distribution inside the microchamber. (a) Temporal evolution of heating. The simulation shows that the temperature reaches 90% of its steady-state value after 1 ms. (b) Temperature distribution for three different axial positions of the trap. z = 0 represents the bottom surface of the chamber and ρ has been shifted appropriately for visualization. (c) Radial (black) and axial (red) temperature distributions following the ln(1/r) decay (log scale plot in inset i). Radial temperature distributions for the ΔT = 0 boundary condition at ρ = 80, 120 and 160 μm (solid, dashed and dot-dashed black lines) converge to the same value at short distances (inset ii). (d) Simulations of the temperature distribution at different heights (shadowed areas). The circles correspond to estimations of B for a 1.16-μm bead according to B bead = ΔT(ρ = R bead ).
In Fig. 3b, we represent a typical ρ – z section of the temperature distribution around the trap (NA = 1.2) for three different axial positions of the focus. This shows that sample warming decreases as the trap approaches the water-glass interface, due to heat dissipation into the coverslip becoming more efficient. The axial and radial profiles for the trap at z trap = 10 μm shown in Fig. 3c reproduce the characteristic ln(1/r) decay proposed by Mao et al.12, whose solution had been empirically found by Celliers et al.8 some years before, ΔT(r) = a – bln(r), and experimentally proved by Haro-González et al.4.
We can observe how the distance to the closest glass surface governs the heating produced by the laser (Fig. 3d). This is due to the low α/K of glass, which dissipates the heat produced by the laser, keeping the water-glass interface almost at room temperature. Unlike the result found by Peterman et al. obtained assuming spherical symmetry (see Supplementary Figure 7 and Supplementary Table 3), temperature was found to increase only over the first 0–10 μm from the bottom surface of the chamber, where the heat dissipated by the glass coverslip was significant enough to cool down the focus. After this, B becomes almost constant as the distance increases across the rest of the chamber. As mentioned above, the temperature profile decaying sufficiently beyond 10 μm from the trap centre (Fig. 3c) leads to the cooling by heat dissipation through the coverslip being unnoticeable for z trap values of more than 10 μm.
In contrast, the boundary in the radial coordinate, ρ, seems to have little impact on the value of the temperature near the focus. In Fig. 3c, inset ii, we show three temperature distributions simulated with the Dirichlet ΔT = 0 °C condition fixed at different distances, which collapse into a single curve, i.e., maintaining the same maximum heating in the vicinity of the trap. Interestingly, this profile describes the temperature increase measured for microspheres of different radii, R bead, as shown in Fig. 2c. We found a clear correspondence between the temperature at distance R bead and the heating of a particle with that radius, i.e. between B bead and ΔT(ρ = R bead) at P = 100 mW (Fig. 4a). This is consistent with the fact that our method, based on drag force measurements, detects viscosity changes at the interface between the bead and the medium.
Simulation of temperature distribution inside the microchamber. (a) Relation between the heating rate for a given bead, B bead, and the temperature distribution produced by the trap. (b) Simulated temperature profile introducing a 2-μm microsphere at the focus of the trap. The grey area corresponds to the reproducibility of our measurements, shown in Fig. 2a. (MR: melamine resin, PS: polystyrene, Si: silica).
Concerning the effect of the NA, we had observed (Fig. 2c) considerably higher heating rates for the oil-immersion, NA = 1.3 objective, particularly on the smallest, 0.61-μm beads. Likewise, Fig. 2d showed large variations in B when modifying NAeff. The heat source in our simulations, q, includes the dependence on the NA and also yields greater heating for higher NA; though the resulting difference is one order of magnitude smaller than in the experiments. We believe that the model does not accurately describe the temperature increase at distances comparable to the beam waist, r ~ w 0, as the result depends on the shape of the beam, which is not correctly modelled in this region. Additionally, experimental modification of the effective NA was achieved by reducing the beam diameter with an iris placed at an optical equivalent of the entrance pupil of the objective. This, at the same time, modifies the overfilling, which affects additional variables that govern the optical field at the focus and hence may also introduce some deviations from the simulations. For 3.00-μm beads, alternative model approaches8,12 qualitatively describe the heating–NA relationship (see Supplementary Figure 7). The fact that these beads are far larger than the beam waist, and that the optical field at R bead is hence described more precisely, is the reason why the heating predicted through simulations is closer to the experimental results.
Finally, we checked the impact of the particle material on temperature. We incorporated into the simulations a second material with a spherical shape, corresponding to the particle, at the focus of the trap. We analysed the temperature distribution for three different dielectric materials (melamine resin, silica and polystyrene) and verified that the temperature over the particle surface, i.e. at a distance R bead, was, within our experimental errors, almost independent of the material of which the microsphere was made (Fig. 4b). Therefore, only the optical absorption and the thermal conductivity of the suspension medium seemed to play a role in the heating (see the ratio α/K for water and glycerol in Fig. 2c, inset). To a first approximation, an intuitive interpretation of these results is that the particle "experiences" on average the temperature of the surrounding solvent, which is independent of the particle itself and is thereby given by the decreasing temperature profile produced by the trap alone, making the temperature increase lower for larger particles. Note that for absorbing particles, such as semiconductor or metallic particles, heating would be considerably higher, since the energy transferred to the medium would be sufficient to alter the temperature distribution.
We demonstrated that the direct determination of viscosity changes caused by increasing laser powers is a robust strategy for obtaining local temperatures, i.e. the heating rate B. This was possible due to our force calibration being based on the detection of the trapping beam momentum (which is independent of the sample temperature and trapping power, among other features) which was compared to the temperature-dependent Stokes drag calibration. In this way, we could directly interpret changes in the measured force as variations in the sample temperature. Importantly, the method is precise enough to assess trap heating in watery buffers and makes it unnecessary to carry out experiments in media with higher α/K values.
The agreement between our results and simulations allows us to extract some interesting conclusions concerning the process of sample heating in an optical trap. The model proposed in ref.9 seems to capture the main elements necessary for the description of the behaviour of the temperature inside the chamber under different conditions. Simulations showed that the temperature distribution originated by an optical trap followed the typical ln(1/r) decay reported in other papers4,8,12. This decay describes the heating experienced by microspheres of increasing radii, which was observed to change by a factor of 2 and 3 from 0.61- to 3.00-μm beads, trapped with an NA = 1.2 and an NA = 1.3 objective, respectively. Such variation was considerably greater than the measurement precision, which was assessed to be of the order of 10–15%. In contrast, the material of the tweezed dielectric microspheres did not affect the measurement of B, which was also confirmed by simulations including the presence of the bead. Finally, the trap height played an important role in sample heating, as the coverslip acts as a sink, dissipating heat and maintaining the interface nearly at room temperature. When the trap was created below 10 μm, it was efficiently cooled down; whereas beyond that distance, B remained almost constant, due to the temperature increase being concentrated mainly within the 10 μm surrounding the trap focus.
The results of this work demonstrate the substantial variability in laser-induced trap heating, depending on the multiple experimental conditions examined. Temperature control is of the utmost importance in precise biophysics experiments and other applications of optical trapping. This makes it highly necessary to perform in situ heating calibration, instead of applying the straightforward rule of thumb of 1 °C/100 mW. Large measurement inaccuracies, e.g. derived from erroneous thermal trap calibration, as well as changes in the activity of biological samples10, could arise if the actual trap temperature is ignored. As a critical example, samples of a size similar to the beam waist, R ~ w0, showed a ±2 °C/100 mW variation in B upon only changing the effective NA of the trapping beam (Fig. 2d). Moreover, studies of trap heating by other research groups using a variety of techniques (see Supplementary Figure 6 and Supplementary Table 2) have demonstrated this variability. In conclusion, each experimental optical trap configuration leads to a different heating performance due to several aspects, such as sample size, beam structure, microchamber dimensions and buffer specifications.
The remarkable heating observed under certain conditions could have an impact on experiments with cells. In such samples, laser radiation is absorbed by the intracellular medium, the cytosol, which is a complex and highly crowded mixture. However, because cells and their components present weak absorbance in the NIR range, we can assume that absorption of the laser radiation is dominated by water. Typical laser powers required to manipulate intracellular organelles are of the order of 200 mW. Natural structures inside cells are usually smaller than ~1 μm. In addition, high-NA (phase-contrast) oil-immersion objectives are normally preferred for the visualization of cells. Therefore, in general terms, heating will tend to be particularly large in this kind of experiment. Assuming that the heating of the cell is similar to that of water, and using the result for B for the smallest microsphere of diameter 0.61 μm and for NA = 1.3, we can estimate that the local temperature will rise by approximately 8 °C.
In fact, the large temperature increase that 1064 nm lasers can produce could induce serious damage that one should assess beforehand. Although photodamage, due among other possibilities to the generation of singlet oxygen, is accepted to be the main source of damage to live cells19, thermal heating should be reconsidered as a feasible origin of cell damage or death as well.
As we have shown, heating can be reduced by the use of trapping objectives of different NA or by creating the traps close to the coverslip. Furthermore, the use of laser wavelengths with lower optical absorption20, such as 820 nm, as proposed by Haro-González et al.4, appears to be a good choice for reducing photodamage and cell heating. In fact, those authors found that temperature increase at this wavelength was close to zero at 300 mW.
Optical trapping set-up and force measurements
The optical trapping set-up consisted of a CW Ytterbium fibre laser (IPG YLM-5-1064-LP, λ = 1064 nm TEM00, Pmax = 5 W). The laser was directed into an inverted microscope (Nikon, Eclipse TE2000-E) through the epi-fluorescence port and focused on the sample plane by high-numerical-aperture objective lenses (Nikon CFI Plan Apo 60x water immersion, NA = 1.2 and Nikon CFI Plan Fluor 100x oil immersion, NA = 1.3). A telescope allowed us to adapt the beam waist to overfill the entrance pupil of the microscope objective. Overfilling (Dbeam/Dobj) on the water- and oil-immersion objective was about 1 and 1.6, respectively. A dichroic mirror placed before the objective lens redirected the beam so the same path could simultaneously be used for the illumination light coming from the condenser, which passes through the mirror and reaches the CCD camera (QImaging, QICAM) at the bottom of the microscope.
The fibre laser was observed to oscillate in polarization, leading to possible power fluctuations when passing through polarizing elements, which were carefully avoided by properly setting the polarization of the beam (see Supplementary Figure 5).
Measurements of lateral trapping forces were carried out with a direct force-detection instrument (Impetux Optics, LUNAM T-40i), which collects the transmitted laser light from the sample with an NA = 1.4 lens15,16 and tracks the light–momentum distribution at the BFP with a PSD. The 50-kHz PSD signals were processed with custom-designed analysis software (LabView and Matlab programs). As described in Results, lateral optical forces, FX,Y, were obtained from the PSD signals SX,Y as FX,Y = −αdetector·SX,Y.
In turn, trap power was obtained as Ptrap = 1/ψ·SSUM, where SSUM is the SUM signal of the PSD and ψ (V/W) is the responsivity of the force-detection instrument. Briefly, it is calibrated by the manufacturer using an adaptation of the dual-objective method, from which the transmission of an objective can be obtained16,21.
All the experiments were conducted in a temperature controlled laboratory. The samples consisted of highly diluted solutions of polystyrene monodisperse spherical particles with diameters of 0.61 μm, 1.16 μm, 1.87 μm and 3.00 μm (refractive index n = 1.59, density d = 1.05 g/cm3, all with a standard deviation of 2–3%), provided by the manufacturer and confirmed with dynamic light scattering for the smallest ones (see Supplementary Information). We additionally used melamine resin 2.19-μm beads and silica 2.32-μm beads for the analysis of laser heating on samples made of different materials.
As buffers, we used water (density, d = 0.99 g/cm3; absorption coefficient, α = 14.2 m−1; specific heat, c = 4.18 J/g·K; and thermal conductivity, K = 0.6 W/m·K at 25 °C) or glycerol (density, d = 1.261 g/cm3; absorption coefficient, α = 21.4 m−1; specific heat, c = 2.41 J/g·K; and thermal conductivity, K = 0.28 W/m·K at 25 °C). The home-built experimental microchamber consisted of a thick (1 mm) glass slide and a coverslip separated by double-sided Scotch tape ~80 μm thick. Particles were trapped at z trap = 10 μm from the lower coverslip-liquid interface, except for the experiment on the axial dependence B(z), which was carried out with the water-immersion, NA = 1.2 objective to avoid spherical aberration due to the refractive index mismatch and to ensure efficient trapping in depth. Surface interaction was compensated using Faxén's coefficient18.
Drag forces generated with a piezoelectric stage
We used a piezoelectric stage (Piezosystem Jena, TRITOR 102 SG) driven by a voltage sequence with controlled amplitude and frequency which produced, after calibration of the low-pass filtering of the electronics, triangular time signals with a precise slope. We set the flow velocity, v, to produce similar hydrodynamic forces on all the (different sized) beads we used: approximately 1.6 pN. Stokes' law provides an analytical expression for the force applied on a microsphere (of radius R): F drag = 6πηRbv, where η(T) is the viscosity of the surrounding medium, which is a function of temperature, T, and b is Faxén's coefficient to correct for surface hydrodynamic interaction18. The flow velocities were always below the rate of heat dissipation, to ensure stationary heating at the trap focus. See Supplementary Figures 2, 3 and 4 for a more detailed description.
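As an illustration of the force scale quoted above, the sketch below evaluates Stokes' law for a 0.61-μm bead dragged at 320 μm/s at a height of 10 μm, with the Faxén wall correction written in its standard expansion for motion parallel to a surface (e.g., Svoboda and Block18); the viscosity is that of water at 25 °C, and the exact correction used in the analysis may of course differ.
### Stokes drag with Faxen's wall correction (standard parallel-to-wall expansion)
eta <- 0.89e-3      # viscosity of water at 25 degC, Pa*s
Rb  <- 0.61e-6/2    # bead radius, m
h   <- 10e-6        # trap height above the coverslip, m
v   <- 320e-6       # flow velocity, m/s
s <- Rb / h
b <- 1 / (1 - (9/16)*s + (1/8)*s^3 - (45/256)*s^4 - (1/16)*s^5)   # Faxen coefficient
F_drag <- 6*pi*eta*Rb*b*v                                          # drag force, N
cat(sprintf("Faxen coefficient b = %.3f, drag force = %.2f pN\n", b, F_drag*1e12))
With these numbers the force comes out at roughly 1.6–1.7 pN, consistent with the ~1.6 pN quoted above.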
Temperature increase obtained from drag force measurements
Measurements of temperature increase due to laser heating were based on the determination of the light momentum changes when controlled forces were applied to trapped particles at different laser powers. The triangular flow oscillation produced by the piezoelectric stage yielded square force signals that were registered by the PSD of the force detection system. The constant force plateaus corresponded to the back-and-forth, ±6πηRbv Stokes forces. The absolute drag force was calculated as the half-difference between the two averaged plateaus, which automatically cancelled out the initial momentum of the beam. To average the force values, approximately 10,000 data points were taken, corresponding to the timeframe over which the piezoelectric stage yielded a constant slope, i.e., constant drag force (see Supplementary Figures 2 and 3).
Each force value used for sample temperature calculations was taken as the mean over 20 consecutive cycles; and the corresponding standard deviation, typically in the range ±1–3%, was considered to be the associated error bar. Such standard deviation resulted in an uncertainty of ±0.5 °C–1.5 °C in temperature measurements. Ten temperature measurements, T i, at trapping powers, P i, of 20 mW, 40 mW, …, 200 mW, were linearly fitted by T = T room + BP, from which we could obtain the heating rate, B, with an estimated precision of 10–15%. Final heating rates (Fig. 2) were calculated from the mean of 3–5 measurements.
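To make the full analysis chain concrete, the sketch below converts a set of plateau forces measured at increasing powers into apparent viscosities via Stokes' law, maps those to temperatures through a small table of tabulated water viscosities, and fits T = Troom + BP; the force values (and the simple linear interpolation) are hypothetical placeholders for illustration, not data from this work.
### illustrative extraction of the heating rate B from drag-force measurements
eta_to_T <- approxfun(x = c(0.653, 0.719, 0.797, 0.890, 1.002),  # water viscosity, mPa*s
                      y = c(40, 35, 30, 25, 20))                 # temperature, degC
Rb <- 0.61e-6/2; b <- 1.02; v <- 320e-6      # radius (m), Faxen factor, flow velocity (m/s)
P  <- seq(20, 200, by = 20)                  # trapping power, mW
F_meas <- c(1.66, 1.65, 1.63, 1.62, 1.60,
            1.59, 1.57, 1.56, 1.55, 1.53) * 1e-12   # plateau drag forces, N (hypothetical)
eta  <- F_meas / (6*pi*Rb*b*v) * 1e3         # apparent viscosity, mPa*s
Temp <- eta_to_T(eta)                        # sample temperature, degC
fit  <- lm(Temp ~ P)                         # linear fit Temp = T_room + B*P
cat(sprintf("T_room = %.1f degC, B = %.1f degC per 100 mW\n", coef(fit)[1], coef(fit)["P"]*100))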
Trap heating simulations
To solve the temperature distribution due to an optical trap (see Eq. 4), we used an adaptation of the MathWorks Partial Differential Equation (PDE) Toolbox that allowed us to include the Jacobian for cylindrical coordinates (see Supplementary Information). Except for the study of temporal evolution, all simulations were carried out by solving the elliptic equation to obtain the stationary temperature distribution. For comparison with the experimental measurements of B (°C/100 mW) (Fig. 2), we obtained the temperature solutions, ΔT(ρ, z) at a laser power of 100 mW. See Results for a detailed discussion of the boundary conditions and the definition of the heat source q, as well as the main results of and conclusions derived from the simulations.
Ashkin, A., Dziedzic, J. M., Bjorkholm, J. E. & Chu, S. Observation of a single-beam gradient force optical trap for dielectric particles. Opt. Letters 11, 288, https://doi.org/10.1364/OL.11.000288 (1986).
Liu, Y. et al. Evidence for localized cell heating induced by infrared optical tweezers. Biophys. J. 68, 2137, https://doi.org/10.1016/S0006-3495(95)80396-6 (1995).
Liu, Y., Sonek, G. J., Berns, M. W. & Tromberg, B. J. Physiological monitoring of optically trapped cells: assessing the effects of confinement by 1064-nm laser tweezers using microfluorometry. Biophys. J. 71, 2158, https://doi.org/10.1016/S0006-3495(96)79417-1 (1996).
Haro-González, P. et al. Quantum dot-based thermal spectroscopy and imaging of optically trapped microspheres and single cells. Small 9, 2162, https://doi.org/10.1002/smll.201201740 (2013).
Ebert, S., Travis, K., Lincoln, B. & Guck, J. Fluorescence ratio thermometry in a microfluidic dual-beam laser trap. Opt. Express 15, 15493, https://doi.org/10.1364/OE.15.015493 (2007).
Wetzel, F. et al. Single cell viability and impact of heating by laser absorption. Eur. Biophys. J. 40, 1109, https://doi.org/10.1007/s00249-011-0723-2 (2011).
Kuo S. C. A simple assay for local heating by optical tweezers. In: Methods in Cell Biology, Elsevier, Chapter 3, 43; https://doi.org/10.1016/S0091-679X(08)60401-X (1997).
Celliers, P. M. & Conia, J. Measurement of localized heating in the focus of an optical trap. Appl. Opt. 39, 3396, https://doi.org/10.1364/AO.39.003396 (2000).
Peterman, E. J. G., Gittes, F. & Schmidt, C. F. Laser-induced heating in optical traps. Biophys. J. 84, 1308, https://doi.org/10.1016/S0006-3495(03)74946-7 (2003).
Abbondanzieri, E. A., Shaevitz, J. W. & Block, S. M. Picocalorimetry of transcription by RNA polymerase. Biophys. J. 89, L61, https://doi.org/10.1529/biophysj.105.074195 (2005).
Jun, Y., Tripathy, S. K., Narayanareddy, B. R. J., Mattson-Hoss, M. K. & Gross, S. P. Calibration of optical tweezers for in vivo force measurements: how do different approaches compare? Biophys. J. 107, 1474, https://doi.org/10.1016/j.bpj.2014.07.033 (2014).
Mao, H. et al. Temperature control methods in a laser tweezers system. Biophys. J. 89, 1308, https://doi.org/10.1529/biophysj.104.054536 (2005).
Gross, S. P. Application of optical traps in vivo. Methods Enzymol. 361, 162, https://doi.org/10.1016/S0076-6879(03)61010-4 (2003).
Smith, S. B., Cui, Y. & Bustamante, C. Optical-trap force transducer that operates by direct measurement of light momentum. Methods Enzymol. 361, 134, https://doi.org/10.1016/S0076-6879(03)61009-8 (2003).
Farré, A. & Montes-Usategui, M. A force detection technique for single-beam optical traps based on direct measurement of light momentum changes. Opt. Express 18, 11955, https://doi.org/10.1364/OE.18.011955 (2010).
Farré, A., Marsà, F. & Montes-Usategui, M. Optimized back-focal-plane interferometry directly measures forces of optically trapped particles. Opt. Express 20, 12270, https://doi.org/10.1364/OE.20.012270 (2012).
Català, F., Marsà, F., Montes-Usategui, M., Farré, A. & Martín-Badosa, E. Extending calibration-free force measurements to optically-trapped rod-shaped samples. Sci. Rep. 7, 42960, https://doi.org/10.1038/srep42960 (2017).
Svoboda, K. & Block, S. M. Biological applications of optical forces. Annu. Rev. Biophys. Biomol. Struct. 23, 247, https://doi.org/10.1146/annurev.bb.23.060194.001335 (1994).
Neuman, K. C., Chadd, E. H., Liou, G. F., Bergman, K. & Block, S. M. Characterization of photodamage to escherichia coli in optical traps. Biophys. J. 77, 2856, https://doi.org/10.1016/S0006-3495(99)77117-1 (1999).
Kedenburg, S., Vieweg, M., Gissibl, T. & Giessen, H. Linear refractive index and absorption measurements of nonlinear optical liquids in the visible and near-infrared spectral region. Opt. Mat. Express 2, 1588, https://doi.org/10.1364/OME.2.001588 (2012).
Farré, A., Marsà, F. & Montes-Usategui, M. Beyond the Hookean spring model: direct measurement of optical forces through light momentum changes in Optical Tweezers: Methods and Protocols (ed. Gennerich, A.). Methods in Molecular Biology, 1486, https://doi.org/10.1007/978-1-4939-6421-5_3 (Humana Press, New York, 2017).
This research was partly funded by: the Spanish Ministerio de Economía y Competitividad, through grants FIS2010-16104 and FIS2014-60052-R; the Generalitat de Catalunya through the project ACC1Ó (VALTEC G614828324059231); and the Barcelona Knowledge Campus Initiative (FPC2010-17). FC acknowledges a grant from the Spanish Ministerio de Educación, Cultura y Deporte (Subprograma de Formación de Profesorado Universitario). We would like to thank Ione Verdeny for the initial temperature measurement experiments and Carol López-Quesada for calibration of the piezoelectric stage.
Optical Trapping Lab – Grup de Biofotònica, Departament de Física Aplicada, Universitat de Barcelona, Martí i Franquès 1, Barcelona, 08028, Spain
Frederic Català, Mario Montes-Usategui & Estela Martín-Badosa
Institut de Nanociència i Nanotecnologia (IN2UB), Martí i Franquès 1, Barcelona, 08028, Spain
Frederic Català, Ferran Marsà, Mario Montes-Usategui, Arnau Farré & Estela Martín-Badosa
Impetux Optics S. L., Trias i Giró 15 1-5, Barcelona, 08034, Spain
Ferran Marsà, Mario Montes-Usategui & Arnau Farré
F.C. performed the experiments and the simulations. A.F. conceived the idea and supervised the experiments and the simulations. F.M. supervised the experiments. M.M.-U. conceived the idea. E.M.-B. conceived the idea, and supervised both the experiments and the simulations. All the authors wrote the manuscript and gave final approval for publication.
Correspondence to Estela Martín-Badosa.
M.M.-U. and A.F. are holders of the patents US 8,637,803 and JP 5,728,470, which protect the technology for measuring forces used in the research presented here. F.M., M.M.-U. and A.F. are shareholders in the company Impetux Optics S.L., which commercially exploits that patented technology. In addition, F.M. and A.F. were employees of that company during the development of the research.
Català, F., Marsà, F., Montes-Usategui, M. et al. Influence of experimental parameters on the laser heating of an optical trap. Sci Rep 7, 16052 (2017). https://doi.org/10.1038/s41598-017-15904-6
|
CommonCrawl
|
GOSH Plot
Olkin, Dahabreh, and Trikalinos (2012) proposed the GOSH (graphical display of study heterogeneity) plot, which is based on examining the results of an equal-effects model in all possible subsets of size $1, \ldots, k$ of the $k$ studies included in a meta-analysis. In a homogeneous set of studies, the model estimates obtained this way should form a roughly symmetric, contiguous, and unimodal distribution. On the other hand, when the distribution is multimodal, then this suggests the presence of heterogeneity, possibly due to outliers and/or distinct subgroupings of studies. Plotting the estimates against some measure of heterogeneity (e.g., $I^2$, $H^2$, or the Q-statistic) can also help to reveal subclusters, which are indicative of heterogeneity.
A nice example to illustrate the method is the meta-analysis of trials that examined the effectiveness of intravenous magnesium in the prevention of death following acute myocardial infarction (data from Egger et al., 2001). While the smaller studies suggested that magnesium is an effective treatment for reducing mortality, the results from the ISIS-4 mega trial (ISIS-4 Collaborative Group, 1995) indicated no reduction in mortality with magnesium treatment.
Below, an equal-effects model is first fitted to all trials and then the gosh() function is used to fit equal-effects models in all possible subsets. The GOSH plot is then created by plotting the results. The points for subsets that include the ISIS-4 trial (study 16 in the dataset) are shown in red and blue otherwise.
### load the metafor package
library(metafor)
### meta-analysis of all trials including ISIS-4 using an equal-effects model
res <- rma(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i, data=dat.egger2001, method="EE")
### fit EE model to all possible subsets
sav <- gosh(res)
### create GOSH plot
### red points for subsets that include and blue points
### for subsets that exclude study 16 (the ISIS-4 trial)
plot(sav, out=16, breaks=100)
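Note that the dataset contains $k = 16$ trials, so there are $2^{16} - 1 = 65{,}535$ possible subsets and the call above fits all of them, which is still quick on an ordinary machine. For meta-analyses with many more studies the number of subsets grows exponentially, and gosh() then works with a random sample of subsets rather than the full enumeration (see the function documentation for details).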
Egger, M., Davey Smith, G., & Altman, D. G. (Eds.) (2001). Systematic reviews in health care: Meta-analysis in context (2nd ed.). London: BMJ Books.
ISIS-4 Collaborative Group (1995). ISIS-4: A randomised factorial trial assessing early oral captopril, oral mononitrate, and intravenous magnesium sulphate in 58,050 patients with suspected acute myocardial infarction. Lancet, 345, 669–685.
Olkin, I., Dahabreh, I. J., & Trikalinos, T. A. (2012). GOSH - a graphical display of study heterogeneity. Research Synthesis Methods, 3(3), 214–223.
|
CommonCrawl
|
Title: Free-form lens model and mass estimation of the high redshift galaxy cluster ACT-CL J0102-4915, "El Gordo"
Authors: J.M. Diego, S. Molnar, C. Cerny, T. Broadhurst, W. Windhorst, A. Zitrin, R. Bouwens, D. Coe, C. Conselice, K. Sharon
(Submitted on 30 Apr 2019 (v1), last revised 6 Jul 2020 (this version, v2))
Abstract: We examine the massive colliding cluster El Gordo, one of the most massive clusters at high redshift. We use a free-form lensing reconstruction method that avoids making assumptions about the mass distribution. We use data from the RELICS program and identify new multiply lensed system candidates. The new set of constraints and free-form method provides a new independent mass estimate of this intriguing colliding cluster. Our results are found to be consistent with earlier parametric models, indirectly confirming the assumptions made in earlier work. By fitting a double gNFW profile to the lens model, and extrapolating to the virial radius, we infer a total mass for the cluster of $M_{200c}=(1.08^{+0.65}_{-0.12})\times10^{15}$M$_{\odot}$. We estimate the uncertainty in the mass due to errors in the photometric redshifts, and discuss the uncertainty in the inferred virial mass due to the extrapolation from the lens model. We also find in our lens map a mass overdensity corresponding to the large cometary tail of hot gas, reinforcing its interpretation as a large tidal feature predicted by hydrodynamical simulations that mimic El Gordo. Finally, we discuss the observed relation between the plasma and the mass map, finding that the peak in the projected mass map may be associated with a large concentration of colder gas, exhibiting possible star formation. El Gordo is one of the first clusters that will be observed with JWST, which is expected to unveil new high redshift lensed galaxies around this interesting cluster, and provide a more accurate estimation of its mass.
Comments: 19 pages, 10 figures. Updated figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA)
DOI: 10.3847/1538-4357/abbf56
Cite as: arXiv:1905.00025 [astro-ph.CO]
(or arXiv:1905.00025v2 [astro-ph.CO] for this version)
From: Jose M. Diego Rodriguez [view email]
[v1] Tue, 30 Apr 2019 18:00:01 GMT (5735kb,D)
[v2] Mon, 6 Jul 2020 19:02:51 GMT (10984kb,D)
|
CommonCrawl
|
Shape and fluctuations of frustrated self-assembled nano ribbons
Mingming Zhang1,2,
Doron Grossman1 (ORCID: orcid.org/0000-0002-2731-0725),
Dganit Danino2 &
Eran Sharon1
Nature Communications volume 10, Article number: 3565 (2019)
A Publisher Correction to this article was published on 13 September 2019
Self-assembly is an important process by which nontrivial structures are formed on the sub-micron scales. Such processes are governed by chemical and physical principles that dictate how the molecular interactions affect the supramolecular geometry. Currently there is no general framework that links between molecular properties and the supramolecular morphology with its size parameters. Here we introduce a new paradigm for the description and analysis of supramolecular structures that self-assemble via short-range interactions. Analysis of molecular interactions determines inputs to the theory of incompatible elasticity, which provides analytic expressions for supramolecular shape and fluctuations. We derive quantitative predictions for specific amphiphiles that self-assembled into chiral nanoribbons. These are quantitatively confirmed experimentally, revealing unique shape evolution, unusual mechanics and statistics, proving that the assemblies are geometrically incompatible. The success in predicting equilibrium and statistics suggests the approach as a new framework for quantitative study of a large variety of self-assembled nanostructures.
A prototypical, interesting class of molecular assemblies is that of twisted and helical nanoribbons, assembled from chiral molecules. Such structures are formed by a wide variety of building blocks, such as amphiphilic lipids, peptide amphiphiles, amino acid derivatives, and proteins1,2, e.g. during the evolution of some neurodegenerative disorders3. Many of these systems undergo morphological evolution during assembly: at early stages they form long, narrow twisted ribbons with a straight centerline, around which the ribbon twists with pitch P (Fig. 1a). As assembly proceeds, the ribbons' width, W, increases, leading to the growth of P. Further widening induces a shape transition into helical ribbons whose centerline is a helix, characterized by its pitch P and radius R (Fig. 1b). Finally, as widening further proceeds, the helical ribbons close into tubes with radius R (Fig. 1c). Currently, this common shape evolution is not understood and cannot be linked to the molecular chemistry and interactions. Moreover, it is not even known to which kinds of molecules it applies.
Shape evolution and characterization: Cryo-TEM images and illustrations (insets) of self-assembled N-α-lauryl-lysyl-aminolauryl-lysyl-amide (C12-β12) ribbons. a After ~24 h of assembly most ribbons are twisted, having a straight centerline (inset, yellow dashed line), i.e. R = 0. b After 1 week, helical ribbons are abundant. Their centerline is a helix with a given pitch, P, and radius, R. Determination of W, P, and R from the image is demonstrated. c After 5 months most assemblies are tubes (distinguished by the dark parallel boundaries compared to the pale ends) with diameters D = 2R ≈ 100 nm. Scale bars = 100 nm
The modeling of chiral ribbons is based on two main approaches4,5. The first analyzes the basic chemical interactions between adjacent molecules, in order to determine the relative bends and twists6. It is further assumed that the intrinsic geometry that is prescribed by the local interactions is accurately manifested by the supramolecular morphology. However, in many cases, in order to form suprastructures, the optimal nearest neighbors' configuration (the intrinsic geometry) must be distorted, since the relaxed elements would not fit together to form a continuous aggregate. Such systems are known as geometrically frustrated7. The flexibility of most biomolecules and their soft interactions allow such systems to overcome the geometrical frustration via distortion with respect to the locally optimal configuration. This elastic distortion generates internal stresses, affecting the aggregation process7, the shape8,9,10,11 and the mechanical properties12,13. Therefore, in order to describe the global shape, a mechanical model is needed, in addition to a characterization of the intrinsic geometry. This is the second approach, based on continuum mechanics, which models twisted and helical ribbons in various ways. These include liquid membrane models14, as well as solid ribbons with broken15,16 or unbroken17 mirror symmetry. Such models are phenomenological and qualitative, not related to the specific chemical interactions. These limitations prevent quantitative comparison between experiments18,19,20,21 and theory.
Recently, we used the theory of incompatible elasticity22 to derive a two-dimensional (2D) model for shape selection of thin, geometrically frustrated sheets23. The intrinsic geometry serves as input to the theory, and can originate from different processes, e.g., plastic deformations24, active swelling25,26 or growth of biological tissues27. The theory was successfully applied to analyze equilibrium shapes of macroscopic ribbons having intrinsic twist28. Additionally, a reduced, one-dimensional (1D) model of incompatible ribbons29 provides analytical expressions for equilibrium shapes, as well as for statistical properties of thermal ribbons. Considering self-assembled nanoribbons, one can use the intrinsic geometry computed from the chemical interactions, as described in Nandi and Bagchi6 (the first approach), as input to the 1D elasticity theory, in order to analytically compute ribbon shape and fluctuations. This new combined methodology integrates the two approaches, proposing a new paradigm for, e.g., modeling self-assembled nanoribbons, with an unprecedented quantitative link between molecular and supramolecular properties. It is applicable to a wide variety of self-assembled slender structures (see Supplementary note (5)).
Here we perform extensive cryo-electron microscopy (cryo-TEM) shape measurements of an amphiphile, C12-β12 (N-α-lauryl-lysyl-aminolauryl-lysyl-amide), as it assembled into twisted nanoribbons and further, to helical ribbons and tubes (Fig. 1). We write the 1D elastic model for the ribbons and provide analytical expressions that describe the ribbon's equilibrium shape over the entire range of width. The geometrical parameters in the model are determined by the interaction between monomers. We go beyond studying equilibrium configurations by analyzing shape fluctuations of ribbons. We measure the predicted unusual statistics, which indicate softening of the ribbon with increasing width. These results show that the self-assembled ribbons are indeed frustrated ribbons, well captured by the combined chemical-physical approach. Finally, we discuss how the approach can be applied to other self-assembled ribbons.
Equilibrium configurations
We start by computing the ribbons' equilibrium configurations, using the theory of incompatible elastic sheets, with parameters determined by the molecular interactions. The theory uses two input fields that encode the intrinsic geometry. The reference metric \(\bar a\) is determined by gradients in equilibrium distances within the plane of the ribbon. The reference curvature \(\bar b\) is determined by gradients of equilibrium distances across the ribbon's thickness. In the gel phase (at 25 °C) C12-β12 (Supplementary Fig. 1) adopts a bolaamphiphile-like configuration (Fig. 2a) and forms ribbons reminiscent of a lipid bilayer20, with hydrophilic heads facing out and hydrophobic carbon chains hidden inside the sheet (Fig. 2b). Neighboring headgroups interact via hydrogen bonds between amide groups. This attractive interaction prescribes a linear order in the sheet, pulling neighboring heads tightly together. The optimal conformation between heads can be approximated by a close packing6, inducing twist around the amide bonds with a preferred twist angle, \(\theta _0\sim 20^\circ - 40^\circ\), between adjacent headgroups (when ignoring the effect of the carbon tails) (Fig. 2e,f). With the S (left-handed) chiral carbons used here, a right-handed twist is preferred (see Fig. 2e and Supplementary Figs. 3–7). Next we consider the chains' Van der Waals (VdW) attractive interaction between the 22 methylene groups. This interaction tends to align the molecules, resisting twist, and is, therefore, minimal at θ = 0. The combined energy associated with a given twist angle, θ, between two amphiphiles is, therefore, approximated by \(E(\theta ) \propto D^2\left( {\theta - \theta _0} \right)^2 + L^3\theta ^2\), where the first/second term corresponds to the head/chain energy, respectively. Here, D is an effective headgroup diameter and L is the chain length. The optimal twist between two molecules is obtained by minimizing the energy with respect to θ. We find the twist angle, θ*, to be in the range of 0.3°−2° (see Supplementary note (2)), leading to a spontaneous twist (angle per unit length) \(k_0 = \frac{{\theta ^ \ast }}{D}\). Using \(D = 0.6\;{\mathrm{nm}}\), we get \(k_0 = 0.03 \pm 0.02\frac{{{\mathrm{rad}}}}{{{\mathrm{nm}}}}\). The slight difference, Δd, in the equilibrium length of the hydrogen bonds between primary and secondary amines (Fig. 2c) induces curvature, in addition to the twist. We mark it αk0, where α is a measure of the up-down asymmetry (α = 0 implies a symmetric ribbon, as in refs. 15,18,28). The curvature is approximately \(\alpha k_0 \approx \frac{{{\mathrm{\Delta }}d}}{{Dt}}\) and we find \(\alpha = 0.1 \pm 0.07\). We note that much more accurate estimations of k0 and α could be achieved via molecular simulations. The reference curvature tensor, which represents the right-handed twist, k0, and the bend, αk0, is:
$$\bar b = \left( {\begin{array}{*{20}{c}} 0 & {k_0} \\ {k_0} & {\alpha k_0} \end{array}} \right)$$
Dominant chemical interactions and the generation of intrinsic twist: a A Skeletal structural formula (left) and a ball-and-stick model (right) of C12-β12 in its bolaamphiphile-like configuration. The heads and chains are marked with blue and red respectively. At side A, the head contains two secondary amines, and at B, one primary and one secondary amine. b Hydrophobic interactions drive the assembly of C12-β12 into a ribbon, with hydrogen bonds forming along the x direction—the ribbons' long axis. The domains within the orange circles are magnified in c, illustrating the hydrogen bonds (cyan lines) on sides A (secondary amines (2°)) and B (one secondary and one primary (1°) amines). This asymmetry leads to asymmetry between the two faces of the ribbon (the lysine are truncated in panels c–f, for clarity). d A top view (in the x-y plane) of two (S) left-handed head groups (side A). Close packing of the two heads directs the methylene group into the largest free volume. Right (orange arrow) zero (red arrow) and left (black arrow) optional twists are illustrated. e The same headgroups illustrated with the VdW radiuses. The chirality (in this case the difference in VdW radiuses) leads to a larger free space on the left (orange arrow) than on the right. f The conformation in e induces right-handed intrinsic twist along the x direction
where we use a coordinate system aligned with the ribbon as in Fig. 2b (see also Supplementary Fig. 8). Finally, we note that the ribbon has no structural lateral gradients; hence, its reference metric is flat (\(\bar a\) is the identity matrix). A similar approach can be implemented for the study of other molecular systems, where different molecular interactions and dimensions would determine different \(\bar b\) and \(\bar a\) (see examples in Supplementary note (5)).
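Returning to the head/chain competition above: since E(θ) is defined only up to a prefactor, its minimizer is what matters, θ* = θ0 D^2/(D^2 + L^3), and the spontaneous twist follows as k0 = θ*/D. The short sketch below evaluates these two expressions; the values of θ0 and L are illustrative placeholders within the ranges discussed above, not the parameters used in this work (the tighter estimate of k0 = 0.03 ± 0.02 rad/nm quoted above comes from the more detailed treatment in Supplementary note (2)).
### minimizer of E(theta) ~ D^2*(theta - theta0)^2 + L^3*theta^2 and the resulting twist
D      <- 0.6            # effective headgroup diameter, nm
L      <- 1.5            # carbon-chain length, nm (assumed, illustrative)
theta0 <- 30 * pi/180    # preferred head-head twist, rad (assumed, within 20-40 deg)
theta_star <- theta0 * D^2 / (D^2 + L^3)    # optimal twist between neighbours, rad
k0         <- theta_star / D                # spontaneous twist, rad/nm
cat(sprintf("theta* = %.1f deg, k0 = %.3f rad/nm\n", theta_star*180/pi, k0))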
Computation of ribbon equilibrium configurations consists of plugging \(\bar a\) and \(\bar b\) into the energy functional of the 1D theory29 and solving the resulting Euler-Lagrange equations. The solution depends on the ribbon thickness, t, and width, W, as well as on its material properties. Here we limit the analysis to the case of isotropic elasticity, characterized by the Young's modulus, Y, and Poisson's ratio, v. The solutions are right-handed ribbons with radius R and pitch P:
$$R\left( W \right) = \frac{{l\left( W \right)}}{{l\left( W \right)^2 + m\left( W \right)^2}}$$
$$P(W) = \frac{{2{\mathrm{\pi }}m(W)}}{{l(W)^2 + m(W)^2}}$$
where \(l(W;\alpha ,k_0,\nu ,t)\) and \(m\left( {W;\alpha ,k_0,\nu ,t} \right)\) are functions of W, for a given set of (α, k0, v, t) (see Supplementary note (3)). In the wide limit (\(W^2 \gg \frac{t}{{k_0}}\)), the solution becomes independent of W:
$$R_{{\mathrm{Wide}}} = \frac{{\sqrt {4 + \alpha ^2} (1 - \nu ) - \alpha (1 + \nu )}}{{2((1 - \nu )^2 + \alpha ^2\nu )k_0}}$$
$$P_{{\mathrm{Wide}}} = \frac{{4\;\pi }}{{(2(1 - \nu ) + {\mathrm{\nu \alpha }}( - \alpha + \sqrt {4 + \alpha ^2} ))k_0}}$$
The full solution (Eqs. (1) and (2)) describes a twisted ribbon which, upon widening, becomes helical. It can be expressed in a dimensionless form, \(\tilde R\left( {\tilde W} \right)\) and \(\tilde P\left( {\tilde W} \right)\) (Fig. 3a and Supplementary Fig. 9). These analytical solutions qualitatively resemble the numerical results in Armon et al.28; however, they additionally include the dependence on α, the asymmetry of the sheet. We find that as α increases, the maximal pitch decreases, and the twisted-to-helical transition occurs at smaller values of \(\tilde W\) and over a wider range of \(\tilde W\). Additionally, the pitch angle, \(\delta \equiv \frac{P}{{2{\mathrm{\pi }}R}}\), of wide ribbons (Fig. 3b) obeys the relation \(\tan \delta _{{\mathrm{Wide}}} = \frac{1}{2}\left( {\alpha + \sqrt {4 + \alpha ^2} } \right)\), i.e. \(\delta _{{\mathrm{Wide}}}\) increases with α. Note that for symmetric bilayers, where α = 0, \(\delta _{{\mathrm{Wide}}} = 45^\circ\), as in Armon et al.28.
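As a numerical illustration (a sketch only, using the parameter values adopted in the fit below: k0 = 0.03 rad/nm, α = 0.1, ν = 0.5), the wide-ribbon limits of Eqs. (3) and (4) and the pitch-angle relation can be evaluated directly; they give a radius of roughly 56 nm and a pitch of roughly 380 nm, of the same order as the ~100 nm tube diameters and ~400 nm plateau pitch reported below.
### wide-ribbon limits, Eqs. (3) and (4), with the parameters used in the fit
k0    <- 0.03    # spontaneous twist, rad/nm
alpha <- 0.1     # up-down asymmetry
nu    <- 0.5     # Poisson's ratio
s <- sqrt(4 + alpha^2)
R_wide <- (s*(1 - nu) - alpha*(1 + nu)) / (2*((1 - nu)^2 + alpha^2*nu) * k0)   # nm
P_wide <- 4*pi / ((2*(1 - nu) + nu*alpha*(-alpha + s)) * k0)                   # nm
delta_wide <- atan((alpha + s)/2) * 180/pi                                     # pitch angle, deg
cat(sprintf("R_wide = %.0f nm, P_wide = %.0f nm, delta_wide = %.0f deg\n", R_wide, P_wide, delta_wide))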
Ribbon's shape as a function of width. a Analytic solutions for the dimensionless ribbon-pitch, \(\tilde P\), and radius, \(\tilde R\), as functions of the dimensionless width, \(\tilde W\) for different values of α (0.01 (solid cyan), 0.1 (dashed light blue), and 1 (dot-dashed dark blue)). As α increases, the twist-to-helical transition becomes smoother and occurs earlier. Insets show selected realizations of ribbon configurations. b Top: the distribution of the measured pitch angle of wide (W > 80 nm) ribbons. Bottom: the computed pitch angle of wide ribbons (\(\tilde W \to \infty\)),\(\delta _{{\mathrm{Wide}}}\), as a function of α. c Measurements of P (top) and R (bottom) vs. W. Cyan data points mark twisted ribbons (R = 0) and blue points indicate helical ribbons (R > 0). The pitch increases in the twisted phase, then slightly decreases, beyond the twisted-to-helical transition, and then both pitch and radius are stabilized on roughly constant values. Measurement error is typically ±6 nm for P and ±4 nm for R. Note the large asymmetric scatter in the pitch near the twisted-to-helical transition at W ≈ 40 nm. d The average (over ΔW = 2 nm) of the data in c vs. W, together with the solutions for Eqs. (1) and (2) with \(k_0 = 0.03\frac{{{\mathrm{rad}}}}{{{\mathrm{nm}}}}\), α =0.1, t = 3.4 nm and the fitted Poisson's ratio v = 0.5. The experimental data is colored by the relative abundance of twisted (cyan) and helical (blue) ribbons at a given width. Error bars indicate s.d.
We now quantitatively compare model predictions with experimental measurements. We analyzed more than 500 cryo-TEM images, collecting over 4000 measurements of W, P and R for ribbons at different stages of assembly (Fig.1). The width and pitch of a given ribbon were found to be remarkably uniform, with a variation of ±3% over lengths of order few microns (Supplementary Fig. 2), justifying the definitions of ribbon-width and ribbon-pitch. Plotting P and R versus W clearly reveals the twist-to-helical transition (Fig. 3c). For W < 40 nm most ribbons are twisted and their pitch increases with W. For W > 40 nm, most ribbons are helical, and beyond W = 60 nm twisted ones are hardly present. As W increases beyond W ≈ 40 nm the pitch stops its increase, and gradually decreases before stabilizing on a width-independent value of P ~ 400 nm. This evolution is consistent with our model predictions (Eq. (4), Fig. 3a). Further, we measure \(\delta _{{\mathrm{Wide}}} = 62^\circ \pm 3^\circ\) (Fig. 3b inset), indicating that indeed α > 0. All ribbons were right-handed (Supplementary Fig. 7).
We plot the computed P(W) and R(W) using the parameters estimated from the molecular interactions, \(k_0 = 0.03\frac{{{\mathrm{rad}}}}{{{\mathrm{nm}}}}\), α = 0.1 and ribbon thickness, t = 3.4 nm20, with the Poisson ratio,v = 0.5, being the only fitting parameter (the effect of v on ribbon's shape is presented in Supplementary Fig. 9). The theoretical curves provide a good description of the ribbon's shape over the entire range of widths (Fig. 3d), including the decrease in the pitch after the transition and its width-independent value at large W. This is the first successful analytical prediction of the entire shape evolution of the ribbons.
Interestingly, Fig. 3d shows a systematic deviation between the measured radius of wide ribbons and the theoretical predictions. Furthermore, though the average data (Fig. 3d) is well described by our model, the raw data of radius measurements (Fig. 3c bottom) suggest that the twisted-to-helical transition is 1st order, rather than 2nd order. Some of the deviations may result from simplifications in the model (e.g., rough estimation of geometrical parameters and the assumption of isotropic elasticity). However, a more important factor is that due to thermal fluctuations the ribbons are not in a mechanical equilibrium. We therefore turn to study the statistics of ribbon shapes.
Thermal fluctuations
A huge (typically >100 nm) scatter in the data of both pitch and radius is noted in Fig. 3c, much larger than our measurement accuracy, which is better than 6 nm. It results from fluctuations in the shape of the supramolecular structures around their energy minimum (Eqs. (1) and (2)). The probability of finding a ribbon of width W in some configuration whose energy is larger by ΔE than the energy minimum is \(p\left( {{\mathrm{\Delta }}E} \right)\sim e^{ - \frac{{{\mathrm{\Delta }}E}}{{k_{\mathrm{B}}T}}}\), where kB is the Boltzmann constant and T is the temperature. Unlike equilibrium shapes, fluctuations are directly related to the ratio between the thermal (kBT) and elastic (ΔE) energy scales. Therefore, their analysis can provide information about material properties, and serve as a verification of our model, independently of the average shape analysis presented earlier.
We analyze the fluctuations in the pitch of twisted configurations in the range 10 < W < 40 nm. The elastic energy associated with small deviation ΔP from equilibrium pitch, P(W), is \({\mathrm{\Delta }}E \approx Yf\left( W \right)({\mathrm{\Delta }}P)^2\). Here Y is a 2D Young's modulus and f(W) depends only on geometrical parameters (t,v,α,W,k0). It is computed from our model (Supplementary notes (4) and Supplementary Fig. 10), using the same parameters as in Fig. 3d. The product Yf(W) sets the standard deviation (std) of the pitch distributions, σP(W), at different ribbon widths. Calculation of σP(W) for twisted configurations reveals an unusual, non-monotonic dependence on W, indicating ribbon rigidity (which scales inversely to σP(W)) that first increases with W, but then decreases for W > 25 nm (Fig. 4a solid line).
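In the Gaussian approximation implied by this quadratic energy, the Boltzmann weight of the previous paragraph translates directly into the width of the pitch distribution,
$$p\left( {\Delta P} \right) \propto \exp \left( { - \frac{{Yf\left( W \right)\left( {\Delta P} \right)^2}}{{k_{\mathrm{B}}T}}} \right)\quad \Rightarrow \quad \sigma _P\left( W \right) = \sqrt {\frac{{k_{\mathrm{B}}T}}{{2Yf\left( W \right)}}} ,$$
so that, at fixed Y and T, the predicted σP(W) is simply proportional to \(1/\sqrt {f(W)}\).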
Statistics of ribbon shapes. a The standard deviation of the pitch of twisted ribbons as a function of ribbon width. The cyan line is the theoretical prediction, with Y = 9.5 MPa. The green dashed line is the calculated std for a compatible twisted ribbon with the same dimensions. Inset: measured pitch distribution around the average pitch and the determination of σ, obtained for \(W = 21 \mp 1\;{\mathrm{nm}}\). b The abundance of ribbon configuration vs. Yf(W)(ΔP)2, (with Yf(W) as in a and T = 300 K). An exponential dependence (dashed line) is found up to three standard deviations. Larger deviations are found to have higher probability than Gaussian. Data (~2800 data points) were collected from the entire range of widths (10−40 nm) of twisted ribbons. Inset: the distribution of normalized pitch deviations ΔP (semi log plot) with a Gaussian fit (solid line). The distribution is positively skewed. c Distributions of ΔP at different ranges of ribbon widths (indicated in each panel). The skewness varies non-monotonically with W (with increasing width: skewness values are: 0.38, 0.22, 0.39, and 0.42). d The (dimensionless) energy versus pitch as calculated from Eq. (5) for twisted ribbon with t = P0 = 1 for different values of \(\tilde W\) (indicated). The solid areas illustrate accessible states at a fixed (dimensionless) thermal energy, \(\widetilde {k_{\mathrm{B}}T} = 0.2\)
We analyze ~2800 ribbons, measuring σP(W) at a bin size of ΔW = 2 nm (inset Fig. 4a). The measured σP(W) is consistent with our predictions, including the predicted softening of ribbons for W > 25 nm (Fig. 4a). We emphasize that such a unique property cannot exist in compatible twisted ribbons (as well as flat ribbons and rods), which stiffen with W, leading to a monotonically decreasing std (Fig. 4a, dashed line). Fitting the data we find Y ≈ 9.5 MPa, consistent with the measured Young's moduli of other phospholipids30. We group data from all widths into one distribution by rescaling ΔP(W) by the computed Yf(W). Plotting the probabilities of the rescaled energy fluctuations (Fig. 4b) we find a Gaussian distribution (manifested as a straight line) for moderate fluctuations (up to three std), with a systematic deviation for larger fluctuations. Plotting the normalized pitch fluctuations (inset of Fig. 4b) reveals that the deviations all correspond to pitches larger than expected from the linearized calculation, i.e., the distribution is asymmetric, having a positive skewness (>0.5). This skewness reflects a strong nonlinearity (in ΔP) of the ribbon stiffness, a nonlinearity which is probed only by large fluctuations. Similarly to the std, the (computed and measured) skewness is non-monotonic with W (Fig. 4c and Supplementary Fig. 11), indicating increasing nonlinearity close to the twisted-to-helical transition.
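A minimal sketch of the per-bin statistics underlying this analysis is given below; it assumes the (W, P) measurements are stored in a data frame with columns W and P (hypothetical names and placeholder values), and, for simplicity, it takes ΔP with respect to the bin mean rather than the model prediction, which is sufficient for the std and the skewness.
### per-bin pitch statistics (std and skewness); placeholder data for illustration
set.seed(1)
ribbons <- data.frame(W = runif(2800, 10, 40))                # widths, nm
ribbons$P <- 150 + 8*ribbons$W + rnorm(2800, sd = 30)         # pitches, nm (placeholder)
bins  <- cut(ribbons$W, breaks = seq(10, 40, by = 2))         # 2-nm width bins
stats <- t(sapply(split(ribbons$P, bins), function(p) {
  dP <- p - mean(p)                                           # deviation from the bin mean
  c(n = length(p), sd = sd(dP), skew = mean(dP^3)/sd(dP)^3)   # sample skewness
}))
head(stats)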
The observed nontrivial statistics can be understood from a simplified model of incompatible purely twisted (α = 0) ribbons: The energy of a ribbon of thickness, t, width, W and reference pitch, \(P_0 = \frac{{2{\mathrm{\pi }}}}{{k_0}}\), depends on the actual pitch, P, as follows:
$$E\sim \frac{{tW^5}}{{P^4}} + t^3W\left( {\frac{1}{P} - \frac{1}{{P_0}}} \right)^2$$
The first term is the stretching energy, which (non-locally) penalizes Gaussian curvature (\(K \propto \frac{1}{{P^2}}\)). The second term is the bending energy, which penalizes deviations from the reference pitch. The combined (dimensionless) energy is plotted (Fig. 4d) as a function of \(\tilde P\), for different \(\tilde W\) together with an illustration of accessible states at (dimensionless) thermal energy \(\widetilde {k_{\mathrm{B}}T} = 0.2\) (the colored areas at each minimum). We find that as \(\tilde W\) increases, the energy minimum shifts to larger pitch values, its depth increases up to \(\tilde W = 0.4\) (the blue curve) but then significantly decreases, and it becomes increasingly asymmetric as \(\tilde W\) approaches 1.
This captures the essence of shape evolution and statistical mechanics of the incompatible ribbons: due to incompatibility there is no configuration, in which the two energy terms simultaneously vanish. The stretching energy vanishes only at \(\tilde P \to \infty\), while the bending energy vanishes at \(\tilde P = \tilde P_0 = 1\). Due to the different scaling with \(\tilde W\) of the bending (\(\sim \tilde W\)) and stretching (\(\sim \tilde W^5\)) terms, the competition between them is resolved differently, depending on \(\tilde W\): For \(\tilde W \ll 1\) the bending term dominates, leading to \(\tilde P \approx \tilde P_0\) and a deep energy minimum. As \(\tilde W\) increases, stretching starts dominating. As a result, the minimum is shifted to \(\tilde P > \tilde P_0\) and becomes shallower and asymmetric. In addition, the total (residual) minimal energy increases with \(\tilde W\), until the twisted solution loses stability and is replaced by the helical one (not shown). It is important to notice that the large asymmetry close to \(\tilde W = 1\) and the resultant skewness of the distribution can lead to a significant difference between the average measured pitch and the mechanical equilibrium pitch. Such effects, which apparently are not negligible in our system might affect the estimation of the geometrical parameters from measurements and might be the source of the systematic deviation in Fig. 3d.
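The behaviour described here can be reproduced directly from Eq. (5); the sketch below minimizes the dimensionless energy numerically (with t = P0 = 1, as in Fig. 4d) for a few widths, showing the shift of the minimum to larger pitch and the growth of the residual energy with W; plotting E over a range of P also reveals the shallowing and increasing asymmetry of the well as W approaches 1.
### dimensionless energy of Eq. (5) with t = P0 = 1, minimized numerically in P
E <- function(P, W, t = 1, P0 = 1) t*W^5/P^4 + t^3*W*(1/P - 1/P0)^2
for (W in c(0.2, 0.4, 0.8)) {
  opt <- optimize(E, interval = c(0.5, 100), W = W)   # E is unimodal on this interval
  cat(sprintf("W = %.1f: P_min = %.2f, residual E_min = %.4f\n", W, opt$minimum, opt$objective))
}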
Finally, we note that a compatible twisted ribbon would show dramatically different statistics. In this case, the ribbon energy is of the form:
\(E\sim tW^5( {\frac{1}{{P^2}} - \frac{1}{{P_0^2}}} )^2 + t^3W( {\frac{1}{P} - \frac{1}{{P_0}}} )^2\)which leads to a fixed minimum at P = P0, which gets narrower and effectively more symmetric (for a fixed T) with W (Supplementary Fig. 12), implying monotonically decreasing std and skewness. The observations in the simplified model hold for the exact calculations (Eqs. (1) and (2)).
The work shows the existence of geometrically frustrated assemblies on the sub-micron scale and presents a general way to study them: chemical information is integrated into the theory of incompatible sheets, utilizing its computing power to link molecular and supramolecular properties of soft molecular assemblies. The quantitative modeling of the chiral ribbons' shape and statistics indicates that the assemblies are incompatible ribbons with a Euclidean reference metric and an asymmetric saddle reference curvature. The twisted-to-helical transition is a direct outcome of the bending-stretching competition, and very likely does not result from thermodynamic changes in the material, but only from changes in the ribbon's width. Furthermore, the non-monotonic std and skewness disclosed in this work are unique to the modeled incompatible ribbons and cannot appear in compatible structures. We suggest that a wide range of amphiphilic bilayers, as well as peptide and protein ribbons, form frustrated ribbons of this type. The necessary conditions are an ordered attractive interaction, flexibility and chirality of the building blocks. The evolution of their shape, as well as its statistics, will be dominated by residual stresses. As such, they are size-dependent and should be modeled accordingly (small-scale molecular simulations alone cannot provide the right results).
Our nano-scale ribbons are strongly affected by thermal fluctuations and important information can be extracted from their analysis. As we showed, analysis of several cryo-TEM images provides information about the rigidity of the material (Young's modulus), nonlinearities in the interactions and nearest-neighbor conformations. In addition, fluctuations can qualitatively affect the coarse-grained modeling of the system. Examples are the shift of averages from mechanical equilibrium points, presented here, and the nontrivial renormalization of mechanical parameters31. In some cases, such effects can change a 2nd order transition into a 1st order one32,33. Such effects possibly explain the systematic deviations in Fig. 3c, d. It is important to note that many different structures with various intrinsic geometries can be handled by combining chemical analysis with incompatible elasticity theory (see Supplementary Information (5)), i.e., by directly following the steps presented here for our specific molecule, suggesting a vast range of new research, as well as application, possibilities.
Cryo-transmission electron microscopy
Specimens for cryo-TEM analysis were prepared in the semi-automated Vitrobot (FEI) or in the controlled environment vitrification system (CEVS), at 25 °C and water saturation to prevent evaporation from the specimens during preparation. A ~7 μL drop of each suspension was placed on a perforated carbon film (Ted Pella), blotted to create a thin film (manually in the CEVS or automatically in the Vitrobot), plunged into liquid ethane (−183 °C) to create a vitrified specimen, and transferred to liquid nitrogen (−196 °C) for storage until examination. Analysis was done in the Tecnai T12 G2 TEM (FEI) at 120 kV using a Gatan 626 cryo holder maintained below −175 °C. Images were recorded on a Gatan 2k×2k UltraScan 1000 camera in the low-dose imaging mode to minimize electron-beam radiation damage34 (see Supplementary Methods for more details).
High-resolution scanning Electron Microscopy
Samples were examined in a Zeiss Ultra Plus high-resolution scanning electron microscope (HR-SEM) equipped with a Schottky field-emission electron gun, at a very low electron acceleration voltage (1 kV) and a short working distance (2.5–5 mm), using the Everhart-Thornley secondary electron imaging detector.
Measurements of ribbons width and pitch
The maturation of the ribbons' configuration as a function of time was analyzed by measuring the ribbons' width, pitch and radius (Supplementary Fig. 2). Most ribbons have a remarkably well-defined width and pitch that vary by no more than 3% within any given ribbon (Supplementary Fig. 2). Therefore, the notions of "ribbon width" and "ribbon pitch" are well defined. The dimensions, however, vary between ribbons (Supplementary Fig. 2a).
The data that support the findings of this study are available from the corresponding author upon request.
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Shimizu, T., Masuda, M. & Minamikawa, H. Supramolecular nanotube architectures based on amphiphilic molecules. Chem. Rev. 105, 1401–1443 (2005).
Adamcik, J. & Mezzenga, R. Study of amyloid fibrils via atomic force microscopy. Curr. Opin. Colloid Interface Sci. 17, 369–376 (2012).
Chiti, F. & Dobson, C. M. Protein misfolding, functional amyloid, and human disease. Annu. Rev. Biochem. 75, 333–366 (2006).
Wang, Y., Xu, J., Wang, Y. W. & Chen, H. Y. Emerging chirality in nanoscience. Chem. Soc. Rev. 42, 2930–2962 (2013).
Barclay, T. G., Constantopoulos, K. & Matisons, J. Nanotubes self-assembled from amphiphilic molecules via helical intermediates. Chem. Rev. 114, 10217–10291 (2014).
Nandi, N. & Bagchi, B. Molecular origin of the intrinsic bending force for helical morphology observed in chiral amphiphilic assemblies: Concentration and size dependence. J. Am. Chem. Soc. 118, 11208–11216 (1996).
Hall, D. M., Bruss, I. R., Barone, J. R. & Grason, G. M. Morphology selection via geometric frustration in chiral filament bundles. Nat. Mater. 15, 727–732 (2016).
Klein, Y., Venkataramani, S. & Sharon, E. Experimental study of shape transitions and energy scaling in thin non-euclidean plates. Phys. Rev. Lett. 106, 118303 (2011).
Gemmer, J. A. & Venkataramani, S. C. Shape selection in non-Euclidean plates. Phys. D: Nonlinear Phenom. 240, 1536–1552 (2011).
Shyer, A. E. et al. Villification: how the gut gets its villi. Science 342, 212–218 (2013).
Holland, M., Budday, S., Goriely, A. & Kuhl, E. Symmetry breaking in wrinkling patterns: gyri are universally thicker than sulci. Phys. Rev. Lett. 121, 6 (2018).
Levin, I. & Sharon, E. Anomalously soft non-euclidean springs. Phys. Rev. Lett. 116, 5 (2016).
Guest, S. D., Kebadze, E. & Pellegrino, S. A zero-stiffness elastic shell structure. J. Mech. Mater. Struct. 6, 203–212 (2011).
Selinger, J. V., Spector, M. S. & Schnur, J. M. Theory of self-assembled tubules and helical ribbons. J. Phys. Chem. B 105, 7157–7169 (2001).
Selinger, R. L. B., Selinger, J. V., Malanoski, A. P. & Schnur, J. M. Shape selection in chiral self-assembly. Phys. Rev. Lett. 93, 4 (2004).
Ghafouri, R. & Bruinsma, R. Helicoid to spiral ribbon transition. Phys. Rev. Lett. 94, 4 (2005).
Assenza, S., Adamcik, J., Mezzenga, R. & De Los Rios, P. Universal behavior in the mesoscale properties of amyloid fibrils. Phys. Rev. Lett. 113, 5 (2014).
Oda, R., Huc, I., Schmutz, M., Candau, S. J. & MacKintosh, F. C. Tuning bilayer twist using chiral counterions. Nature 399, 566–569 (1999).
Oda, R., Artzner, F., Laguerre, M. & Huc, I. Molecular structure of self-assembled chiral nanoribbons and nanotubules revealed in the hydrated state. J. Am. Chem. Soc. 130, 14705–14712 (2008).
Ziserman, L., Lee, H.-Y., Raghavan, S. R., Mor, A. & Danino, D. Unraveling the mechanism of nanotube formation by chiral self-assembly of amphiphiles. J. Am. Chem. Soc. 133, 2511–2517 (2011).
Spector, M. S., Singh, A., Messersmith, P. B. & Schnur, J. M. Chiral self-assembly of nanotubules and ribbons from phospholipid mixtures. Nano Lett. 1, 375–378 (2001).
Kondo, K. Memoirs of the Unifying Study of Thebasic Problems In Engineering Science By Means of Geometry. vol. 1, p. 5–17 (Gakujutsu Bunken Fukyu-Kai, Tokyo, 1955).
Efrati, E., Sharon, E. & Kupferman, R. Elastic theory of unconstrained non-Euclidean plates. J. Mech. Phys. Solids 57, 762–775 (2009).
Sharon, E., Roman, B., Marder, M., Shin, G. S. & Swinney, H. L. Mechanics: buckling cascades in free sheets - Wavy leaves may not depend only on their genes to make their edges crinkle. Nature 419, 579–579 (2002).
Klein, Y., Efrati, E. & Sharon, E. Shaping of elastic sheets by prescription of non-Euclidean metrics. Science 315, 1116–1120 (2007).
Kim, J., Hanna, J. A., Byun, M., Santangelo, C. D. & Hayward, R. C. Designing responsive buckled surfaces by halftone gel lithography. Science 335, 1201–1205 (2012).
Nath, U., Crawford, B. C. W., Carpenter, R. & Coen, E. Genetic control of surface curvature. Science 299, 1404–1407 (2003).
Armon, S., Efrati, E., Kupferman, R. & Sharon, E. Geometry and mechanics in the opening of chiral seed pod. Science 333, 1726–1730 (2011).
Grossman, D., Sharon, E. & Diamant, H. Elasticity and fluctuations of frustrated nanoribbons. Phys. Rev. Lett. 116, 5 (2016).
Marsh, D. CRC Handbook of Lipid Bilayers (CRC Press, Boca Raton, FL, 1990).
Nelson, D., Piran, T. & Weinberg, S. (eds) Statistical Mechanics of Membranes and Surfaces 2nd edn (World Scientific Publishing, Singapore, 2004).
Imry, Y. Tricritical points in compressible magnetic systems. Phys. Rev. Lett. 33, 1304–1307 (1974).
Levanyuk, A. P., Minyukov, S. A. & Vallade, M. Fluctuation-induced 1st-order phase-transitions near mean-field tricritical points in solids. J. Phys.-Condes. Matter 5, 4419–4428 (1993).
Danino, D. Cryo-TEM of soft molecular assemblies. Curr. Opin. Colloid Interface Sci. 17, 316–329 (2012).
This research was supported by the USA-Israel binational science foundation, grant # 2014310 (ES) and the Israel Science Foundation grant No. 1117/16 (DD). M.Z. was supported by the Hebrew University Post-doctoral scholarship (PBC).
The Racah institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
Mingming Zhang, Doron Grossman & Eran Sharon
CryoEM Laboratory of Soft Matter, Faculty of Biotechnology and Food Engineering, Technion—Israel Institute of Technology, Haifa, Israel
Mingming Zhang & Dganit Danino
M.Z., D.D., and E.S. conceived the study. M.Z. performed the experiments and analyzed data. D.G. performed the theoretical study and modeling and analyzed data. All authors co-wrote the paper.
Correspondence to Eran Sharon.
The authors declare no competing interests.
Peer Review Information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Zhang, M., Grossman, D., Danino, D. et al. Shape and fluctuations of frustrated self-assembled nano ribbons. Nat Commun 10, 3565 (2019). https://doi.org/10.1038/s41467-019-11473-6
Project acronym EffectiveTG
Project Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics
Researcher (PI) Gal BINYAMINI
Summary Tame geometry studies structures in which every definable set has a finite geometric complexity. The study of tame geometry spans several interrelated mathematical fields, including semialgebraic, subanalytic, and o-minimal geometry. The past decade has seen the emergence of a spectacular link between tame geometry and arithmetic following the discovery of the fundamental Pila-Wilkie counting theorem and its applications in unlikely diophantine intersections. The P-W theorem itself relies crucially on the Yomdin-Gromov theorem, a classical result of tame geometry with fundamental applications in smooth dynamics. It is natural to ask whether the complexity of a tame set can be estimated effectively in terms of the defining formulas. While a large body of work is devoted to answering such questions in the semialgebraic case, surprisingly little is known concerning more general tame structures - specifically those needed in recent applications to arithmetic. The nature of the link between tame geometry and arithmetic is such that any progress toward effectivizing the theory of tame structures will likely lead to effective results in the domain of unlikely intersections. Similarly, a more effective version of the Yomdin-Gromov theorem is known to imply important consequences in smooth dynamics. The proposed research will approach effectivity in tame geometry from a fundamentally new direction, bringing to bear methods from the theory of differential equations which have until recently never been used in this context. Toward this end, our key goals will be to gain insight into the differential algebraic and complex analytic structure of tame sets; and to apply this insight in combination with results from the theory of differential equations to effectivize key results in tame geometry and its applications to arithmetic and dynamics. I believe that my preliminary work in this direction amply demonstrates the feasibility and potential of this approach.
Project acronym HomDyn
Project Homogenous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems. One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\mathbb{R} / \mathbb{Z}$. A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term. One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics. Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
Project acronym NMR-DisAgg
Project The Dynamic Composition of the Protein Chaperone Network: Unraveling Human Protein Disaggregation via NMR Spectroscopy
Researcher (PI) Rina ROSENZWEIG
Summary Molecular chaperones are a diverse group of proteins critical to maintaining cellular homeostasis. Aside from protein refolding, it has recently been discovered that certain combinations of human chaperones can break apart toxic protein aggregates and even amyloids that have been linked to a host of neurodegenerative diseases. The first chaperones in this disaggregation reaction that are responsible for recognizing and performing initial remodeling of aggregates, are members of the Hsp40 (DnaJ) and small heat shock protein (sHSP) families. Very little, though, is known regarding how these chaperones perform their functions, and characterization of sHsp- and DnaJ-substrate complexes by most structural techniques has proven extremely challenging, as most chaperones are dynamic in nature and typically operate through a series of transient interactions with both their clients and other chaperones. The advanced NMR techniques used in our lab, however, are ideally suited for the study of these exact types of dynamic systems, and include recently developed experiments (CEST, CPMG) that allow us to monitor the transient and low populated protein states typical of chaperone-chaperone and chaperone-client interactions, as well as to study the structure of these potentially very large protein complexes (methyl-TROSY). By exploiting these NMR methodologies and additional, novel labeling schemes, we will characterize, for the first time, the recognition and substrate remodeling performed by the many members of the DnaJ and sHsp chaperone families on their clients. We will then take these approaches one step further and develop real time NMR experiments to observe the client remodeling performed over the course of the disaggregation reaction itself. By combining advanced NMR with biophysical and functional assays, we ultimately aim to identify the specific sets of chaperones that, with the Hsp70 system, protect our cells by dissolving disease-linked aggregates and amyloid fibers.
Project acronym PATHWISE
Project Pathwise methods and stochastic calculus in the path towards understanding high-dimensional phenomena
Researcher (PI) Ronen ELDAN
Summary Concepts from the theory of high-dimensional phenomena play a role in several areas of mathematics, statistics and computer science. Many results in this theory rely on tools and ideas originating in adjacent fields, such as transportation of measure, semigroup theory and potential theory. In recent years, a new symbiosis with the theory of stochastic calculus is emerging. In a few recent works, by developing a novel approach of pathwise analysis, my coauthors and I managed to make progress in several central high-dimensional problems. This emerging method relies on the introduction of a stochastic process which allows one to associate quantities and properties related to the high-dimensional object of interest to corresponding notions in stochastic calculus, thus making the former tractable through the analysis of the latter. We propose to extend this approach towards several long-standing open problems in high dimensional probability and geometry. First, we aim to explore the role of convexity in concentration inequalities, focusing on three central conjectures regarding the distribution of mass on high dimensional convex bodies: the Kannan-Lovász-Simonovits (KLS) conjecture, the variance conjecture and the hyperplane conjecture as well as emerging connections with quantitative central limit theorems, entropic jumps and stability bounds for the Brunn-Minkowski inequality. Second, we are interested in dimension-free inequalities in Gaussian space and on the Boolean hypercube: isoperimetric and noise-stability inequalities and robustness thereof, transportation-entropy and concentration inequalities, regularization properties of the heat-kernel and L_1 versions of hypercontractivity. Finally, we are interested in developing new methods for the analysis of Gibbs distributions with a mean-field behavior, related to the new theory of nonlinear large deviations, and towards questions regarding interacting particle systems and the analysis of large networks.
Project acronym SensStabComp
Project Sensitivity, Stability, and Computation
Researcher (PI) Gil KALAI
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Summary Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and were extensively studied in the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas. The first goal of our proposed project is to push the mathematical theory of noise stability and noise sensitivity forward for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is applications of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods. Our third goal is to find applications toward central long-standing problems in combinatorics, probability and the theory of computing. The fourth goal of our project is to further develop the "argument against quantum computers" which is based on the insight that noisy intermediate scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it, by studying connections of the theory with various areas of theoretical computer science, and with the theory of social choice.
Shan, Ren, Wang, and Wang*: Tobacco Retail License Recognition Based on Dual Attention Mechanism
Volume 18, No 4 (2022), pp. 480 - 488
10.3745/JIPS.02.0177
Yuxiang Shan, Qin Ren, Cheng Wang and Xiuhui Wang*
Tobacco Retail License Recognition Based on Dual Attention Mechanism
Abstract: Images of tobacco retail licenses have complex unstructured characteristics, which poses an urgent technical problem in the robot process automation of tobacco marketing. In this paper, a novel recognition approach using a dual attention mechanism is presented to realize the automatic recognition and information extraction from such images. First, we utilized a DenseNet network to extract the license information from the input tobacco retail license data. Second, bi-directional long short-term memory was used for coding and decoding using a continuous decoder integrating dual attention to realize the recognition and information extraction of tobacco retail license images without segmentation. Finally, several performance experiments were conducted using a large-scale dataset of tobacco retail licenses. The experimental results show that the proposed approach achieves a correct recognition rate of 98.36% on the ZY-LQ dataset, outperforming most existing methods.
Keywords: Attention Mechanism , Image Recognition , Robot Process Automation (RPA)
Robot process automation (RPA) used in tobacco marketing involves the intelligent recognition of multiple scene objects during tobacco operations. For example, to evaluate the impact of tobacco marketing activities on the product market, it is necessary to deeply analyze and mine various text and image information from the retail process of the products involved. However, there are still many challenges in realizing intelligent tobacco operations and management [1]. One of the primary issues is to identify tobacco retail licenses from different regions and extract relevant information. Tobacco retail license images have complex unstructured features [2] that increase the difficulty of extracting information from them.
To realize the automatic recognition and information extraction from tobacco retail license images, a novel recognition approach using a double-attention module is presented in this paper. The proposed method is mainly used under two scenarios: (1) identifying the license number and authorization date from tobacco retail license images and (2) statistically analyzing the marketing of specific types of tobacco. For the proposed network, DenseNet was used to obtain the license information from the input data from tobacco retail licenses. Furthermore, bi-directional long short-term memory (BiLSTM) was used for the coding and decoding using a continuous decoder integrating dual attention to realize the recognition and information extraction of tobacco retail license images without segmentation. The contributions of this study can be summarized as follows:
(1) A new recognition method based on a double attention mechanism is proposed for the automatic recognition and information extraction from tobacco retail license images.
(2) DenseNet and BiLSTM are integrated to extract the features and information from tobacco retail license images without the need for segmentation.
(3) A comprehensive evaluation was conducted using a large-scale tobacco retail license image dataset. We thoroughly evaluated the proposed recognition network using the ZY-LQ dataset and obtained a 98.36% correct recognition rate, outperforming most existing approaches.
The remainder of this paper is organized as follows. In the next section, we discuss existing research on image recognition for different applications. In Section 3, we propose a novel recognition network for unstructured tobacco retail license images by integrating the attention mechanism and decoder module. In Section 4, we describe the experiments conducted and present the evaluation of the proposed method using a largescale tobacco retail license image dataset. Finally, in Section 5, we provide concluding remarks regarding the present study.
2. Related Work
Image recognition is an important branch of artificial intelligence. Researchers and research institutes have proposed a variety of models and algorithms to solve different application problems [3–7]. Zhang et al. [3] proposed the ASSDA method to deal with text images from different domains, which focuses on aligning a cross-domain distribution. In [4], a recognition framework for text lines was presented for embedded applications, with more attention paid to the balance between limited resources and the recognition rate. A convolutional neural network (CNN)-based card recognition framework [5] was proposed for a similar task of tobacco retail license image recognition, which has mainly been used to improve the robustness to different environments and the efficiency of processing natural images. Bera et al. [6] focused on discriminating fine-grained changes, with a particular emphasis on distinct fine-grained visual classification. Islam et al. [7] presented a region-of-interest detection method that focuses on Bangla text extraction and recognition from natural scene images.
In addition, attention mechanisms [8,9] have been utilized in many different applications, such as natural language processing, video-based understanding, and visual classification, which is an important direction in deep learning technology. Lai et al. [8] conducted a comprehensive review of the attention mechanism used in model optimization. Luo et al. [9] proposed a depth-characteristic combination framework that integrates a variety of attention modules to realize iris recognition.
Specifically, existing image recognition algorithms focus on different issues and can solve many actual application problems; however, there is still no effective solution for the recognition of tobacco retail licenses having significant unstructured characteristics.
3. Recognition Method based on Dual Attention Mechanism
As shown in Fig. 1, considering the unstructured and multiscale characteristics of sample images of tobacco retail licenses, the recognition network proposed in this paper integrates an attention mechanism with a decoder module.
Architecture of the proposed dual attention network.
3.1 Feature Extraction
The proposed recognition network conducts feature extraction through the DenseNet module, which consists of DenseBlocks with transition layers in between. The transition layers connect different DenseBlocks and integrate the features produced by the preceding block. A DenseBlock constructs dense connections from the earlier to the later layers so that features are reused, as shown in Fig. 2. This dataflow design makes the feature extraction more effective, enhances gradient propagation, and improves the convergence speed of the proposed recognition model. Moreover, we use a convolutional kernel of size $1 \times 1$ in each layer, which reduces the number of feature maps and improves the effectiveness of the proposed recognition model.
Dense block used in the proposed network.
The key parts of our information extraction module are continuous decoders that fuse the attention modules. Let $I_0$ be the image input to the CNN model, let $\mathrm{H}_l(\cdot)$ be the nonlinear transformation of the $l$-th layer, and let $l$ index the layers. A traditional feedforward network passes only the output of the previous layer to the next one, so that

$$I_l=\mathrm{H}_l\left(I_{l-1}\right).$$

In contrast, DenseNet connects each layer to all preceding layers in a feedforward fashion, so that the $l$-th layer takes all earlier feature maps as input, that is,

$$I_l=\mathrm{H}_l\left(\left[I_0, I_1, \ldots, I_{l-1}\right]\right),$$

where $I_0, I_1, \ldots, I_{l-1}$ are the feature maps generated by layers $0, 1, \ldots, l-1$, respectively.

In each dense block, the feature maps of the constituent layers are concatenated into a single tensor, and the growth rate $k$ together with the number of layers $l$ controls the block's parameter count. For example, the $l$-th layer receives $k_0+k(l-1)$ input feature maps, where $k_0$ is the number of channels of the block's input.
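For concreteness, the dense-connectivity rule above can be sketched in PyTorch (the framework used in our experiments). The BN-ReLU-Conv layer composition, the growth rate, and all sizes below are illustrative assumptions for the sketch rather than the exact configuration of the proposed network.

```python
# Minimal sketch of a dense block. Layer ordering, growth rate and sizes
# are assumptions for illustration, not the authors' exact configuration.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        # 1x1 bottleneck convolution keeps the number of feature maps small
        self.layer = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=1, bias=False),
            nn.BatchNorm2d(growth_rate),
            nn.ReLU(inplace=True),
            nn.Conv2d(growth_rate, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return self.layer(x)

class DenseBlock(nn.Module):
    """The l-th layer receives k0 + k*(l-1) input feature maps."""
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_channels + i * growth_rate, growth_rate)
             for i in range(num_layers)]
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # concatenate all earlier feature maps: I_l = H_l([I_0, ..., I_{l-1}])
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: k0 = 64 input channels, growth rate k = 32, 4 layers
block = DenseBlock(num_layers=4, in_channels=64, growth_rate=32)
y = block(torch.randn(1, 64, 32, 100))   # output has 64 + 4*32 = 192 channels
```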
3.2 Dual Attention Mechanism
After the visual feature sequence of the image is extracted by DenseNet, decoding based on connectionist temporal classification (CTC) is used to generate the output. Each encoder-decoder combination consists of a BiLSTM encoder and a decoder that outputs the context features. The context features and the visual feature sequence V computed by the DenseNet network are concatenated in series, as shown in Fig. 3. Specifically, CTC decoding is applied to the visual feature sequence V extracted by DenseNet, which optimizes the character representation in the sequence. The feature sequence V is then passed through a fully connected layer, which produces an output sequence H of length N that is fed into the CTC module. The CTC module converts the result into conditional probabilities over label sequences and selects the most likely labeling.
Continuous encoder-decoder.
To make full use of contextual data, we used the BiLSTM module to process the sequence in both directions, which alleviates the long-term dependence problem and avoids the limitation of a single long short-term memory (LSTM) network that considers only past data and ignores future data. BiLSTM represents the forward and backward context using two independent LSTM layers. In this paper, license image sequences were labeled at the sequence level, eliminating the need for character segmentation. Consequently, CTC was utilized to map each output to the corresponding probabilities over all possible label sequences. Finally, the results were obtained through repeated encoding and decoding using the BiLSTM and the decoder.
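As an illustration of the BiLSTM encoding with CTC supervision described above, the following PyTorch sketch maps a visual feature sequence to per-step class scores and trains it with the CTC loss. The hidden size, alphabet size, and tensor shapes are assumptions made for the example and are not the settings of our network.

```python
# Sketch of a BiLSTM encoder with a CTC head; sizes are illustrative only.
import torch
import torch.nn as nn

class BiLSTMCTCHead(nn.Module):
    def __init__(self, feat_dim, hidden, num_classes):
        super().__init__()
        # bidirectional LSTM reads the visual feature sequence V in both directions
        self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        # fully connected layer maps to per-step class scores (CTC blank at index 0)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, v):
        h, _ = self.bilstm(v)          # (batch, T, 2*hidden)
        return self.fc(h)              # (batch, T, num_classes)

# Training step with CTC loss (alignment-free, no character segmentation needed)
model = BiLSTMCTCHead(feat_dim=192, hidden=256, num_classes=70)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

v = torch.randn(4, 50, 192)                      # dummy visual features
logits = model(v).log_softmax(dim=-1)            # (batch, T, classes)
targets = torch.randint(1, 70, (4, 12))          # dummy label sequences
input_lens = torch.full((4,), 50, dtype=torch.long)
target_lens = torch.full((4,), 12, dtype=torch.long)
loss = ctc(logits.permute(1, 0, 2), targets, input_lens, target_lens)
loss.backward()
```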
Dual attention decoder.
As shown in Fig. 4, the attention map is used twice in each decoding step. First, a one-dimensional operation is applied to the feature map, and a fully-connected layer placed after these feature maps computes the attention map. The pixel-wise product between the attention map and the original feature map is then calculated to generate the attention feature map $D^{\prime}$. The decoding of $D^{\prime}$ is completed by an independent module, which subsequently produces the outputs $y_t$.

Given a tobacco retail license image I and an encoder output $E(I)=\{h_1, h_2, \ldots, h_T\}$, during the $t$-th step the module outputs $y_t$, i.e.,

$$y_t=G(a_t, b_t),$$

where $G(\cdot)$ refers to the feedforward output function, $a_t$ is the decoder state at time point $t$, and $b_t$ is the weighted sum of the sequence feature vectors, defined as follows:

$$a_t=\operatorname{LSTM}\left(y_{t-1}, b_t, a_{t-1}\right),$$

$$b_t=\sum_{j=1}^T \alpha_{t, j} h_j.$$

Here, $\alpha_t \in \mathbb{R}^T$ is the weight vector, which is also known as the alignment factor.
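The decoding step defined by the equations above can be sketched as follows. This minimal example implements only a single attention-weighted LSTM step (not the full dual-attention design of Fig. 4), and all module names and dimensions are assumptions for illustration.

```python
# Illustrative single decoding step: y_t = G(a_t, b_t),
# a_t = LSTM(y_{t-1}, b_t, a_{t-1}), b_t = sum_j alpha_{t,j} h_j.
# Module names and sizes are assumptions for this sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDecoderStep(nn.Module):
    def __init__(self, enc_dim, dec_dim, emb_dim, num_classes):
        super().__init__()
        self.score = nn.Linear(enc_dim + dec_dim, 1)           # additive attention score
        self.cell = nn.LSTMCell(emb_dim + enc_dim, dec_dim)
        self.out = nn.Linear(dec_dim + enc_dim, num_classes)   # G(a_t, b_t)

    def forward(self, y_prev_emb, state, enc):                 # enc: (batch, T, enc_dim)
        h_prev, c_prev = state
        # alignment weights alpha_t over the T encoder positions
        q = h_prev.unsqueeze(1).expand(-1, enc.size(1), -1)
        alpha = F.softmax(self.score(torch.cat([enc, q], dim=-1)).squeeze(-1), dim=-1)
        b_t = torch.bmm(alpha.unsqueeze(1), enc).squeeze(1)     # weighted sum of h_j
        h_t, c_t = self.cell(torch.cat([y_prev_emb, b_t], dim=-1), (h_prev, c_prev))
        y_t = self.out(torch.cat([h_t, b_t], dim=-1))           # class scores for step t
        return y_t, (h_t, c_t), alpha

# One step on dummy data
step = AttentionDecoderStep(enc_dim=512, dec_dim=256, emb_dim=64, num_classes=70)
enc = torch.randn(2, 50, 512)
state = (torch.zeros(2, 256), torch.zeros(2, 256))
y, state, alpha = step(torch.randn(2, 64), state, enc)
```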
4. Experiments
To evaluate the effectiveness of our recognition approach, a comparative experiment was conducted on a large-scale tobacco retail license image dataset, namely ZY-LQ. The ZY-LQ dataset consists of 527,921 tobacco retail license images with different scales and levels of sharpness, and the license information has different forms of presentation, as shown in Fig. 5. Each license image comprises the license number, license issuing authority, store name, name of the person in charge, and priority period. In addition, each image is marked with the store name, owner's name, monopoly license number, province, city, and county information.
Examples from the ZY-LQ dataset.
4.1 Experimental Configuration
The experimental environment consisted of an NVIDIA Quadro P5000 graphics card, 128 GB of memory, and a 2.30-GHz Intel Xeon Gold 5118 CPU. The software environment was the Ubuntu 16 operating system with Python 3.6 and the PyTorch 1.0 development environment.
Furthermore, we used cumulative match characteristic (CMC) curves, which are precision curves that provide the recognition precision for each rank, to display the experimental results in the comparative experiment. The x-axis of a CMC curve represents the rank of recognition, whereas the y-axis represents the precision in percentage. In addition, hard and soft indices, defined in terms of the Levenshtein distance (LD), were utilized to evaluate the effectiveness of image and text recognition. In this study, the hard index (HI) was used to evaluate the recognition results; it is defined from the target string $\beta_T$ and the recognized string $\beta_R$ as

$$HI = \frac{N_\beta}{N} \times 100\%,$$

where N refers to the number of test images, and $N_\beta$ is the number of images that satisfy $LD\left(\beta_T, \beta_R\right)=0.$
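The hard index can be computed directly from the Levenshtein distance. A small self-contained Python sketch is given below; the helper names and the sample strings are illustrative and are not taken from the ZY-LQ dataset.

```python
# Sketch of the hard index (HI): percentage of test images whose recognized
# string matches the target exactly (Levenshtein distance 0).
def levenshtein(a, b):
    """Standard edit-distance dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def hard_index(targets, recognized):
    """HI = 100 * N_beta / N, where N_beta counts exact matches."""
    exact = sum(levenshtein(t, r) == 0 for t, r in zip(targets, recognized))
    return 100.0 * exact / len(targets)

# Example with dummy license numbers (hypothetical values)
print(hard_index(["330100123456", "ZJ-2021-0007"],
                 ["330100123456", "ZJ-2021-0O07"]))   # -> 50.0
```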
4.2 Experimental Results
In the comparative experiment, four existing approaches were used as baselines: DenseNet, LSTM, BiLSTM, and CNN. The dataset was randomly divided into training and testing sets at a ratio of 8:2, and the experimental results are shown in Fig. 6.

Fig. 6 shows that the proposed approach outperforms the other four methods in terms of the correct recognition rate. It can be observed from the figure that when the rank is 20, the correct recognition rate exceeds 95%. The BiLSTM and DenseNet methods achieve the second- and third-best recognition results, which is consistent with the fact that our approach integrates the advantages of the BiLSTM and DenseNet models.
Comparison of five approaches on the ZY-LQ dataset.
To further examine the contribution of the CTC decoder, the attention mechanism, BiLSTM, and DenseBlock to the proposed model, we conducted ablation experiments and report the results using the HI. The corresponding experimental results are shown in Tables 1 and 2, which indicate that adding a CTC decoder to the basic DenseNet + LSTM framework improves the accuracy by 0.45%, and that adding an attention module improves the accuracy to 98.12%, thereby verifying the effectiveness of these additions. Furthermore, by increasing the number of DenseBlocks to three and the number of BiLSTM layers to four, the results in Table 2 are obtained. The first two lines show that when no CTC decoder is used, the attention module improves the accuracy by 0.66%; when the CTC decoder is added, the accuracy reaches 98.36%. This shows that using the CTC decoder and the attention module simultaneously is more effective. A likely reason is that the BiLSTM network can readily learn contextual information, and adding an appropriate number of BiLSTM layers helps the encoder consider more of the sequence context and obtain better feature information during coding.
Table 1. Results when changing the CTC decoder and attention mechanism

Method | DenseBlock | CTC decoder | Attention module | LSTM layers | HI (%)
DenseNet+LSTM | 3 | 0 | 0 | 4 | 94.13
DenseNet+LSTM+CTC | 3 | 1 | 0 | 4 | 95.37
DenseNet+LSTM+Attention | 3 | 0 | 2 | 4 | 98.03
DenseNet+LSTM+CTC+Attention | 3 | 1 | 2 | 4 | 98.12
Table 2. Results when changing BiLSTM and DenseBlock
This work in this paper was supported by the Research on Key Technology and Application of Marketing Robot Process Automation (RPA) Based on Intelligent Image Recognition in Zhejiang China Tobacco Industry Co. Ltd. (No. ZJZY2021E001).
Yuxiang Shan
He received his M.S. degree in School of Computer Science and Technology from Zhejiang University in 2013. He is now an engineer of Information Center of China Tobacco Zhejiang Industrial Co. Ltd. His current research interests include image recognition and artificial intelligence.
Qin Ren
She received a bachelor's degree in marketing from Zhejiang Normal University in 2010. Since August 2011, she has worked at China Tobacco Zhejiang Industrial Co. Ltd., engaged in tobacco marketing and Internet marketing research.
Cheng Wang
He received his B.S. degree in School of Human Resources Management from Nanjing Audit University in 2010. Since then, he joined Zhejiang Tobacco Industry Company as custom manager. In 2020, he joined the brand operation department, engaged in data operation and customer operation.
Xiuhui Wang
He received his master's degree and doctor's degree from Zhejiang University in 2003 and 2007, respectively. He is now a professor in the Computer Department of China Jiliang University. His current research interests include computer graphics, pattern recognition and artificial intelligence.
1 M. Deng, Z. Li, Y. Kang, C. P. Chen, and X. Chu, "A learning-based hierarchical control scheme for an exoskeleton robot in human–robot cooperative manipulation," IEEE Transactions on Cybernetics, vol. 50, no. 1, pp. 112-125, 2020. doi: 10.1109/tcyb.2018.2864784
2 A. Ravendran, M. Bryson, and D. G. Dansereau, "Burst imaging for light-constrained structure-from-motion," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1040-1047, 2022. doi: 10.1109/LRA.2021.3137520
3 Y. Zhang, S. Nie, S. Liang, and W. Liu, "Robust text image recognition via adversarial sequence-to-sequence domain adaptation," IEEE Transactions on Image Processing, vol. 30, pp. 3922-3933, 2021. doi: 10.1109/tip.2021.3066903
4 Y. S. Chernyshova, A. V. Sheshkus, and V. V. Arlazarov, "Two-step CNN framework for text line recognition in camera-captured images," IEEE Access, vol. 8, pp. 32587-32600, 2020. doi: 10.1109/access.2020.2974051
5 Z. Ou, B. Xiong, F. Xiao, and M. Song, "ERCS: an efficient and robust card recognition system for camera-based image," China Communications, vol. 17, no. 12, pp. 247-264, 2020. doi: 10.23919/JCC.2020.12.018
6 A. Bera, Z. Wharton, Y. Liu, N. Bessis, and A. Behera, "Attend and guide (AG-Net): a keypoints-driven attention-based deep network for image recognition," IEEE Transactions on Image Processing, vol. 30, pp. 3691-3704, 2021. doi: 10.1109/tip.2021.3064256
7 R. Islam, M. R. Islam, and K. H. Talukder, "An efficient ROI detection algorithm for Bangla text extraction and recognition from natural scene images," Journal of King Saud University-Computer and Information Sciences, 2022. https://doi.org/10.1016/j.jksuci.2022.02.001
8 Q. Lai, S. Khan, Y. Nie, H. Sun, J. Shen, and L. Shao, "Understanding more about human and machine attention in deep neural networks," IEEE Transactions on Multimedia, vol. 23, pp. 2086-2099, 2020. doi: 10.1109/tmm.2020.3007321
9 Z. Luo, J. Li, and Y. Zhu, "A deep feature fusion network based on multiple attention mechanisms for joint iris-periocular biometric recognition," IEEE Signal Processing Letters, vol. 28, pp. 1060-1064, 2021. doi: 10.1109/lsp.2021.3079850
Received: January 27 2022
Revision received: March 21 2022
Accepted: June 15 2022
Published (Print): August 31 2022
Published (Electronic): August 31 2022
Corresponding Author: Xiuhui Wang* , [email protected]
Yuxiang Shan, Chinese Tobacco Zhejiang Industrial Company Limited, Hangzhou, China, [email protected]
Qin Ren, Chinese Tobacco Zhejiang Industrial Company Limited, Hangzhou, China, [email protected]
Cheng Wang, Chinese Tobacco Zhejiang Industrial Company Limited, Hangzhou, China, [email protected]
Xiuhui Wang*, Dept. of Computer, China Jiliang University, Hangzhou, China, [email protected]
The Ratio Test for Positive Series of Real Numbers
We will now develop yet another important test for determining the convergence or divergence of a series. This test is known as the ratio test for positive series.
Theorem 1: Let $(a_n)_{n=1}^{\infty}$ be a positive sequence of real numbers and let $\displaystyle{\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho}$.
a) If $0 \leq \rho < 1$ then $\displaystyle{\sum_{n=1}^{\infty} a_n}$ converges.
b) If $1 < \rho \leq \infty$ then $\displaystyle{\sum_{n=1}^{\infty} a_n}$ diverges.
If $\rho = 1$ then this test is inconclusive.
Proof of a): Suppose that $0 \leq \rho < 1$. Since $\displaystyle{\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho}$, for any $r$ with $\rho < r < 1$ there exists an $N \in \mathbb{N}$ such that if $n \geq N$ then:
\begin{align} \quad \frac{a_{n+1}}{a_n} \leq r \end{align}
So $a_{n+1} \leq ra_n$. We see that:
\begin{align} \quad a_{N+1} & \leq r a_{N} \\ \quad a_{N+2} & \leq r a_{N+1} \leq r^2 a_{N} \\ \quad a_{N+3} & \leq r a_{N+2} \leq r^2 a_{N+1} \leq r^3 a_N \\ \quad & \vdots\\ \quad a_{N+k} & \leq \cdots \leq r^k a_N \end{align}
So $\displaystyle{\sum_{n=N+1}^{\infty} a_n = \sum_{k=1}^{\infty} a_{N+k} \leq \sum_{k=1}^{\infty} r^k a_N}$. But the series $\displaystyle{\sum_{k=1}^{\infty} r^k a_N}$ converges as a geometric series since $0 \leq \rho < r < 1$, and by the comparison test we have that the subseries $\displaystyle{\sum_{n=N+1}^{\infty} a_n}$ converges also which implies that the whole series $\displaystyle{\sum_{n=1}^{\infty} a_n}$ converges.
Proof of b): Suppose that $1 < \rho \leq \infty$. Since $\displaystyle{\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho}$, for $r$ such that $1 < r < \rho$ there exists an $N \in \mathbb{N}$ such that if $n \geq N$ then:

\begin{align} \quad r \leq \frac{a_{n+1}}{a_n} \end{align}

So $r a_n \leq a_{n+1}$ for all $n \geq N$. Hence for $n \geq N$ we have:

\begin{align} \quad ra_N & \leq a_{N+1} \\ \quad r^2a_N & \leq ra_{N+1} \leq a_{N+2} \\ \quad r^3a_N & \leq r^2 a_{N+1} \leq r a_{N+2} \leq a_{N+3} \\ \quad & \vdots \\ \quad r^ka_N & \leq \cdots \leq a_{N+k} \end{align}
So $\displaystyle{\sum_{k=1}^{\infty} r^k a_N \leq \sum_{k=1}^{\infty} a_{N+k} = \sum_{n=N+1}^{\infty} a_n}$. Since $1 < r$ we have that the series $\displaystyle{\sum_{k=1}^{\infty} r^k a_N}$ diverges as a geometric series and by comparison the subseries $\displaystyle{\sum_{n=N+1}^{\infty} a_n}$ diverges so the whole series $\displaystyle{\sum_{n=1}^{\infty} a_n}$ diverges.
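As an added illustration of case a), consider the series $\displaystyle{\sum_{n=1}^{\infty} \frac{2^n}{n!}}$. Applying the ratio test gives:

\begin{align} \quad \rho = \lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \lim_{n \to \infty} \frac{2^{n+1}/(n+1)!}{2^n/n!} = \lim_{n \to \infty} \frac{2}{n+1} = 0 < 1 \end{align}

so this series converges by a).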
If $\rho = 1$ then the series $\displaystyle{\sum_{n=1}^{\infty} a_n}$ may converge or diverge. For example, consider the series $\displaystyle{\sum_{n=1}^{\infty} \frac{1}{n^2}}$. We know this series converges. Using the ratio test, we see that:
\begin{align} \quad \rho = \lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \lim_{n \to \infty} \frac{n^2}{(n+1)^2} = 1 \end{align}
We also know that the series $\displaystyle{\sum_{n=1}^{\infty} \frac{1}{n}}$ diverges, and using the ratio test we see that:
\begin{align} \quad \rho = \lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \lim_{n \to \infty} \frac{n}{n+1} = 1 \end{align}
So as you can see, if $\rho = 1$ then the ratio test gives us no information on the convergence/divergence of a series.
6 editions of Mathematical Methods for the Magnetohydrodynamics of Liquid Metals (Numerical Mathematics and Scientific Computation) found in the catalog.
Mathematical Methods for the Magnetohydrodynamics of Liquid Metals (Numerical Mathematics and Scientific Computation)
Published October 16, 2006 by Oxford University Press, USA .
Series Numerical Mathematics and Scientific Computation
Mathematical Methods for the Magnetohydrodynamics of Liquid Metals (Numerical Mathematics and Scientific Computation) by Jean-Frédéric Gerbeau
Mathematical Methods for the Magnetohydrodynamics of Liquid Metals (Numerical Mathematics and Scientific Computation): Gerbeau, Jean-Frédéric; Le Bris, Claude; Lelièvre, Tony. This text focuses on mathematical and numerical techniques for the simulation of magnetohydrodynamic phenomena, with an emphasis on the magnetohydrodynamics of liquid metals and on two-fluid flows.
Get this from a library. Mathematical methods for the magnetohydrodynamics of liquid metals. [Jean-Frédéric Gerbeau; Claude Le Bris; Tony Lelièvre] -- Aimed at research mathematicians, engineers and physicists, as well as those in industry, the approach of this text is highly mathematical and based on solid numerical analysis.
Mathematical Methods for the Magnetohydrodynamics of Liquid Metals (Numerical Mathematics and Scientific Computation) - Kindle edition by Gerbeau, Jean-Frédéric, Claude Le Bris, Tony Lelièvre.
This text focuses on mathematical and numerical techniques for the simulation of magnetohydrodynamic phenomena, with an emphasis on the magnetohydrodynamics of liquid metals, on two-fluid flows, and on a prototypical industrial application.
The approach is a highly mathematical one, based on the rigorous analysis of the equations at hand, and a solid numerical analysis of the discretization.
This comprehensive text focuses on mathematical and numerical techniques for the simulation of magnetohydrodynamic phenomena, with an emphasis laid on the magnetohydrodynamics of liquid metals, and on a prototypical industrial application. Mathematical Methods for the Magnetohydrodynamics of Liquid Metals (Numerical Mathematics and Scientific Computation) eBook: Gerbeau, Jean-Frédéric.
Magnetohydrodynamics (MHD; also magneto-fluid dynamics or hydromagnetics) is the study of the magnetic properties and behaviour of electrically conducting fluids. Examples of such magnetofluids include plasmas, liquid metals, salt water, and electrolytes. The word "magnetohydrodynamics" is derived from magneto- meaning magnetic field, hydro- meaning water, and dynamics meaning movement.
This comprehensive text focuses on Mathematical and numerical techniques for the simulation of magnetohydrodynamic phenomena, with an emphasis laid on the magnetohydrodynamics of liquid metals, and on a prototypical industrial application.
Aimed at research mathematicians, engineers, and physicists, as well as those working in industry, and starting from a good understanding of the physics. THE MAGNETOHYDRODYNAMICS EQUATIONS THE MAGNETOHYDRODYNAMICS EQUATIONS Chapter: (p.1) 1 THE MAGNETOHYDRODYNAMICS EQUATIONS Source: Mathematical Methods for the Magnetohydrodynamics of Liquid Metals Author(s): Jean-Frédéric Gerbeau Claude Le Bris Tony Lelièvre Publisher: Mathematical Methods for the Magnetohydrodynamics of Liquid Metals book University Press.
Liquid metal MHO is within the scope of two series of international conferences. One is the International Congress on "MHD Power Generation", held every four years, which includes technical and economical aspects as well as scientific questions.
The other if the Beer-Sheva Seminar on "MHO Flows and. @article{osti_, title = {Liquid metal magnetohydrodynamics}, author = {Lielpeteris, J and Moreau, R.}, abstractNote = {Liquid metal MHD is the subject of this book.
It is of central importance in fields like metals processing, energy conversion, nuclear engineering (fast breeders or fusion reactors), geomagnetism and astrophysics. This comprehensive text focuses on Mathematical Methods for the Magnetohydrodynamics of Liquid Metals book and numerical techniques for the simulation of magnetohydrodynamic phenomena, with an emphasis laid on the magnetohydrodynamics of liquid metals, on two-fluid flows and on a prototypical industrial at research mathematicians, engineers, and physicists, as well as those working in industry, and starting from a good.
Mathematical Methods For The Magnetohydrodynamics Of Liquid Metals Abstrak: This book focuses on mathematical and numerical techniques for the simulation of magnetohydrodynamics phenomena.
The emphasis is laid on the magnetohydrodynamics of liquid metals, and. Mathematical methods for the Magnetohydrodynamics of Liquid Metals. Oxford University Press, USA. Jean-Frédéric Gerbeau, Claude Le Bris, Tony Lelièvre.
Year: You can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if. Mathematical Methods for the Magnetohydrodynamics of Liquid Metals by Claude Le Bris, Jean-Frédéric Gerbeau, Tony Lelièvre, Jean-Édéric Gerbeau Unknown, Pages, Published ISBN / ISBN / Pages: Mathematical Methods for Physics and Engineering: A Comprehensive Guide Mathematical Modeling, Simulation, Visualization and e-Learning: Proceedings of an International Workshop held at Rockefeller Foundation' s Bellagio Conference Center, Milan, Italy, Liquid Metal Magnetohydrodynamics by J.
Lielpeteris,available at Book Depository with free delivery worldwide. Abstract. The term Magnetohydrodynamics (MHD) refers in general to mathematical models that couple electromagnetic phenomena, including wave propagation, with fluid dynamics.
These models are generally involved in plasma physics, stellar dynamics, metal liquid flows and many other applications. Here we are concerned, as this is the case throughout this textbook, with the particular case of Author: Rachid Touzani, Jacques Rappaz.
Alexander Blokhin, Yuri Trakhinin, in Handbook of Mathematical Fluid Dynamics, Abstract. This chapter is devoted to the issue of stability of strong discontinuities in fluids and magnetohydrodynamics (MHD) and surveys main known results in this field. All the main points in the stability analysis are demonstrated on the example of shock waves in ideal models of gas dynamics, relativistic.
Numerical Mathematics and Scientific Computation; Type. Academic Research (33) Books for Courses (3) Price. Mathematical Methods for the Magnetohydrodynamics of Liquid Metals $ Add Mathematical Methods for the Magnetohydrodynamics of Liquid Metals to Cart.
An Introduction to Magnetohydrodynamics, by P. Davidson (Cambridge University Press): magnetic fields influence many natural and man-made flows. They are routinely used in industry to heat, pump, stir and levitate liquid metals. A related study is "Coupling and stability of interfacial waves in liquid metal batteries" by G. Horstmann, N. Weber and T. Weier. One reader of Davidson's book remarks that, having read a great many books about electrodynamics, fluid mechanics and related subjects, it takes a book about magnetohydrodynamics to finally understand; this is the book that draws together the missing equations and vague comments from pedagogic physics and ties the different fields together in an intelligible whole.
Magnetohydrodynamics (MHD; also magneto-fluid dynamics or hydromagnetics) is the study of the magnetic properties and behaviour of electrically conducting fluids. Examples of such magnetofluids include plasmas, liquid metals, salt water and electrolytes; the word "magnetohydrodynamics" is derived from magneto-, meaning magnetic field, hydro-, meaning water, and dynamics, meaning movement.
One project in this area involves the use of theoretical, computational and experimental tools for multi-physics analysis as well as advanced engineering design methods and techniques (keywords: electromagnetic pump, annular linear induction pump, thermomagnetic systems, liquid metal engineering, MHD, magnetohydrodynamics, space reactors, fission surface power; author: Carlos O. Maidana). For example, liquid metal-cooled reactors are typically very compact and can be used in space propulsion systems and in fission reactors for planetary exploration; computer-aided engineering (CAE), computational physics and mathematical methods are introduced. A separate paper is concerned with the global existence and uniqueness of the strong solutions to the compressible magnetohydrodynamic equations in $\mathbb{R}^N$ ($N\ge3$): under the condition that the initial data are close to an equilibrium state with constant density, temperature and magnetic field, the authors prove the global existence and uniqueness of a solution in a suitable functional setting.
Jean-Frédéric Gerbeau, Claude Le Bris, and Tony Lelièvre, Mathematical Methods for the Magnetohydrodynamics of Liquid Metals, Numerical Mathematics and Scientific Computation, Oxford University Press, Oxford. The sun is an MHD system that is not well understood.
Liquid metal batteries (LMBs) are discussed today as cheap grid-scale energy storage, as required for the deployment of fluctuating renewable energies. Built as a stable density stratification of two liquid metals separated by a thin molten salt layer, LMBs are susceptible to short-circuit by fluid flows. Using direct numerical simulation, the authors study a sloshing long-wave interface instability.
Listed topics include the physical properties of liquid metals, a first course in fluid dynamics, and the interaction of magnetic fields and conducting fluids (liquid metals). A review of the previous edition notes that the book definitely has its strong points: the emphasis on the underlying physics rather than the mathematical description, clear and well written explanations, historical notes, and very enlightening discussions of the MHD applications, from astrophysical to industrial.
In An Introduction to Magnetohydrodynamics (Cambridge Texts in Applied Mathematics), magnetic stirring (Section 8), magnetic damping of flows in liquid metals (Section 9) and vacuum-arc remelting (Section 10) are discussed; the role of MHD in the production of (mainly) aluminum and the effect of instabilities in reduction cells are also presented. Claude Le Bris is also co-author, with J.-F. Gerbeau and T. Lelièvre, of Mathematical Methods for the Magnetohydrodynamics of Liquid Metals (Numerical Mathematics and Scientific Computation, Oxford University Press) and co-editor, with André Bandrauk and Michel Delfour, of High-dimensional Partial Differential Equations in Science and Engineering.
R. Aris (chemical engineering, University of Minnesota) introduces the art of building a system of equations which is both sufficiently complex to do justice to physical reality and sufficiently simple to give real insight into the situation. A separate paper is devoted to a fully discrete finite element scheme for the 2D/3D nonstationary incompressible magnetohydrodynamic-Voigt regularization model; the scheme is based on a finite element approximation for space discretization and a Crank-Nicolson-type scheme for time discretization, which is a two-step method. The magnetohydrodynamics (MHD) equations are used to model the flow of electrically conducting fluids in such applications as liquid metals and plasmas; this is a system of non-self-adjoint, nonlinear partial differential equations.
|
CommonCrawl
|
Journal of the Academy of Marketing Science
Are sponsorship announcements good news for the shareholders? Evidence from international stock exchanges
Marc Mazodier
Amir Rezaee
Original Empirical Research
The objective of this study is to analyse investors' perceptions of sponsorship's ability to increase brand equity, through the impact of sponsorship announcements on stock market value. An event study method, based on a unique sample of 293 worldwide sponsorship announcements from 2010, shows substantial negative abnormal returns following announcement dates. In addition, a cross-sectional regression analysis reveals the influence of several featured factors. Philanthropic sponsorships and sponsorships of events with distinctive values are less negatively perceived by investors, but US companies exhibit more negative returns in shareholder value than other firms. The study offers no support, however, for varying impacts of event audience, renewal agreement, property sponsorship, or title sponsorship on abnormal returns.
Keywords: Event study · Sponsorship · International marketing · Finance
The authors would like to thank Dr. Charles Bal, former General Manager of brandRapport France, who invited us to analyze the database at the core of this study. We also thank Vincent de Lavarenne for his outstanding research assistance. Finally, we greatly appreciate the constructive comments offered by editor G. Tomas M. Hult and two anonymous reviewers.
To assess the sponsorship announcement's impact on shareholders' wealth, we implemented a market model for abnormal returns. We estimated the market model variables (α and β) for an estimation period immediately before the event period. The estimation period lasted 125 days; the event period is composed of days −15 to +15 around the announcement date.
Let $R_{it}$ be the observed return for security $i$ on day $t$ and $R_{mt}$ be the return on the index for day $t$. The abnormal return $AR_{it}$ for security $i$ on day $t$ is
$$ AR_{it} = R_{it} - \left( \alpha_i + \beta_i R_{mt} \right) $$
The average abnormal return on day $t$ for a given sample of $N$ sponsorship announcements is
$$ AAR_t = \frac{1}{N}\sum\limits_{j=1}^{N} AR_{jt} $$
Then, the cumulative average abnormal return between event days $a$ and $b$ can be calculated as:
$$ CAAR_{a,b} = \frac{1}{N}\sum\limits_{j=1}^{N} \sum\limits_{t=a}^{b} AR_{jt} $$
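For readers who want to trace the arithmetic, the market-model computation can be sketched in a few lines of Python. This is not the authors' code; the array layout, the function names, and the use of ordinary least squares via np.polyfit are assumptions made purely for illustration.

```python
import numpy as np

def market_model_ar(r_i, r_m, event_idx, est_len=125, window=(-15, 15)):
    """Abnormal returns AR_it = R_it - (alpha_i + beta_i * R_mt).

    r_i, r_m  : aligned 1-D arrays of daily returns for security i and the index
    event_idx : index of the announcement day (day 0) in those arrays
    est_len   : length of the estimation period ending just before the event window
    window    : event window, in days relative to the announcement (inclusive)
    """
    a, b = window
    est = slice(event_idx + a - est_len, event_idx + a)   # 125-day estimation period
    beta, alpha = np.polyfit(r_m[est], r_i[est], 1)       # OLS fit of the market model
    ev = slice(event_idx + a, event_idx + b + 1)          # days -15 .. +15
    return r_i[ev] - (alpha + beta * r_m[ev])             # AR_it for t = a..b

def aar_caar(ar_matrix):
    """ar_matrix: N x T array of abnormal returns, one row per announcement."""
    aar = ar_matrix.mean(axis=0)   # AAR_t, averaged over the N announcements
    caar = np.cumsum(aar)          # CAAR_{a,t} for each day t of the event window
    return aar, caar
```

Stacking the per-announcement abnormal-return vectors row-wise and averaging across rows reproduces $AAR_t$; a cumulative sum across the event window gives $CAAR_{a,b}$.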
© Academy of Marketing Science 2013
1. Ehrenberg Bass Institute for Marketing Science, University of South Australia, Adelaide, South Australia
2. ISG Business School, Paris, France
Mazodier, M. & Rezaee, A. J. of the Acad. Mark. Sci. (2013) 41: 586. https://doi.org/10.1007/s11747-013-0325-x
Received 10 May 2012
Accepted 03 January 2013
DOI https://doi.org/10.1007/s11747-013-0325-x
|
CommonCrawl
|
Journal of The American Society for Mass Spectrometry
March 2016, Volume 27, Issue 3, pp 520–531
Enhanced Dissociation of Intact Proteins with High Capacity Electron Transfer Dissociation
Nicholas M. Riley
Christopher Mullen
Chad R. Weisbrod
Seema Sharma
Michael W. Senko
Vlad Zabrouskov
Michael S. Westphall
John E. P. Syka
Joshua J. Coon
Electron transfer dissociation (ETD) is a valuable tool for protein sequence analysis, especially for the fragmentation of intact proteins. However, low product ion signal-to-noise often requires some degree of signal averaging to achieve high quality MS/MS spectra of intact proteins. Here we describe a new implementation of ETD on the newest generation of quadrupole-Orbitrap-linear ion trap Tribrid, the Orbitrap Fusion Lumos, for improved product ion signal-to-noise via ETD reactions on larger precursor populations. In this new high precursor capacity ETD implementation, precursor cations are accumulated in the center section of the high pressure cell in the dual pressure linear ion trap prior to charge-sign independent trapping, rather than being sequestered in only the back section as is done for standard ETD. This new scheme increases the charge capacity of the precursor accumulation event, enabling storage of approximately 3-fold more precursor charges. High capacity ETD boosts the number of matching fragments identified in a single MS/MS event, reducing the need for spectral averaging. These improvements in intra-scan dynamic range via reaction of larger precursor populations, which have been previously demonstrated through custom modified hardware, are now available on a commercial platform, offering considerable benefits for intact protein analysis and top down proteomics. In this work, we characterize the advantages of high precursor capacity ETD through studies with myoglobin and carbonic anhydrase.
Keywords: Intact proteins · Electron transfer dissociation · Ion–ion reactions · Instrumentation
The online version of this article (doi: 10.1007/s13361-015-1306-8) contains supplementary material, which is available to authorized users.
Interrogation of intact proteins via mass spectrometry (MS) has the potential to capture nearly all of the relevant information encoded in each protein, including primary sequence information, combinatorial patterns of post-translational modifications (PTMs), and protein gas-phase structure [1, 2, 3, 4, 5]. As molecular weight alone is largely insufficient for full protein characterization [6, 7, 8, 9], tandem MS (MS/MS) is the key component of these top down sequencing methods, revealing both the primary sequence and protein modification state [10, 11, 12, 13]. The emergence of new ion dissociation methods continues to drive top down proteomics [14, 15, 16] by offering valuable alternatives to traditional slow-heating methods (e.g., collision-activated dissociation, CAD). Electron transfer dissociation (ETD) leverages electron-driven radical rearrangements to promote cleavage of N–Cα bonds between amino acid residues, preserving labile post-translational modifications (PTMs) and providing extensive sequence-informative fragmentation of peptides and proteins [17, 18, 19]. Ideally suited for large, highly charged protein molecules, ETD has afforded important gains in top down proteomics, extending protein sequence coverage and enabling characterization of important PTMs and sequence variants [20, 21, 22, 23, 24].
Despite advancements in fragmentation methods and mass analyzers in the past decade, MS instrumentation remains a barrier to further progress in whole protein analysis [25, 26, 27, 28]. Realizing the full potential of top down proteomics, especially when applied to large proteins, requires robust and comprehensive protein fragmentation, which continues to be a challenging endeavor. By providing high resolution/accurate mass (HR/AM) measurements for precursor and product ions with good sensitivity, Fourier-transform (FT) instruments are a well-suited MS platform for top down proteomics [29, 30, 31, 32, 33]. FT-MS instruments are easily coupled with other ion trapping devices (i.e., hybrid systems) to offer a considerable array of fragmentation methods, including CAD, higher-energy collisional dissociation (HCD), photo-activation, electron capture dissociation (ECD), and ETD [34, 35, 36, 37, 38, 39, 40, 41]. A characteristic inherent to all ion trapping instruments, however, is that the number of ions that can be analyzed in a given scan is limited by a fixed number of charges that can be effectively contained and manipulated [42, 43]. The charge capacity of ion trapping devices becomes especially consequential for intact protein fragmentation, where product ion signal is often distributed amongst hundreds of potential fragment channels and increasingly complex isotopic distributions.
As protein mass increases, the efficacy of MS/MS on whole proteins notably diminishes; larger proteins carry more charge and have a greater number of dissociation channels. Not only does this increase spectral complexity, but it also limits precursor capacity (i.e., ion number) in ion trap reaction vessels [44, 45, 46, 47]. For example, an ion trap with a charge capacity of approximately 300,000 charges can store ∼ 30,000 precursor ions for the z = +10 charge state of ubiquitin (∼8.6 kDa) but only roughly 9400 precursor ions for the z = +32 charge state of carbonic anhydrase (∼29 kDa). Compounding this, charge from those initial ion populations is potentially distributed across product ions from 75 backbone bonds for ubiquitin compared with carbonic anhydrase's 258 backbone bonds. To improve the S/N of product ion measurements, spectral averaging (summing signal from several individual scans) is often required for MS/MS of even modest sized proteins. The tradeoff for the increase in S/N, however, is a significant increase in acquisition times required for generation of high quality spectra, accordingly limiting the sampling depth achievable in a given experiment. We conclude that increasing the number of precursor ion charges prior to initiation of the dissociation event is a direct way to improve S/N without spectral averaging [27, 48, 49]. Herein we describe modifications to ion processing and storage that permit increased precursor ion populations for ETD experiments—we call this method high capacity ETD (also called ETD high dynamic range, or ETD HD).
For ETD on hybrid ion trap-Orbitrap systems, the size of the precursor population is limited by the precursor sequestration event in the dual cell quadrupole linear ion trap m/z analyzer (A-QLT) prior to the reaction [50, 51]. We have shown previously that a larger ETD reaction cell, called the multipurpose dissociation cell (MDC), can accommodate 6- to 10-fold larger initial populations of precursor ions, thereby alleviating the capacity restrictions imposed by using the A-QLT [52]. With the MDC, we achieved better ion statistics and increased the intra-scan dynamic range for protein fragmentation, leading to higher quality spectra (i.e., increased product ion S/N) with less spectral averaging required, which ultimately enabled better top down analyses of complex protein mixtures.
Hunt and co-workers described a different approach enabled by the development of a front-end ETD reagent source [53]. Here the A-QLT remained as the ion–ion reaction cell, but products from multiple rounds of ion–ion reactions were accumulated in the C-trap before a single mass analysis of all product ions in the Orbitrap, ultimately improving the S/N of MS/MS spectra. The promising results characterized in both the Hunt and Coon lab strategies have motivated us to develop an improved implementation of ETD on the newest generation of quadrupole-Orbitrap-linear ion trap Tribrid mass spectrometers [36].
Here we demonstrate that the ion capacity of the precursor accumulation event prior to the ETD reaction can be increased by changing where in the A-QLT precursor cations and reagent anions are stored. This new implementation of high capacity ETD on the newest generation of Orbitrap Fusion Lumos Tribrid platform allows use of larger precursor populations for ETD MS/MS scans, enabling higher product ion S/N over standard ETD, for a given spectral acquisition time. Ultimately this translates to more sequence-informative fragment ions and higher protein sequence coverage achieved with less spectral averaging in high capacity ETD.
Materials, Reagents, and Sample Preparation
Myoglobin [P68082] and carbonic anhydrase [P00921] were purchased as mass spectrometry grade standards from Protea Biosciences (Morgantown, WV, USA). Formic acid ampoules and acetonitrile were purchased from Thermo Scientific (Rockford, IL, USA).
Mass Spectrometry Instrumentation
High precursor capacity ETD was implemented using the existing dual pressure linear ion trap (A-QLT) on the Orbitrap Fusion Lumos (Thermo Fisher Scientific, San Jose, CA, USA). In standard ETD, the precursor sequestration event occurs by creating a DC potential well of approximately 2 V in the back section of the high pressure cell (HPC). This holds the precursor cations in the back section of the HPC, while the center section and front section voltages of the HPC are set to allow for reagent anion accumulation. To enable high capacity ETD, instrument control code was modified to allow transfer of precursor ions directly from the ion routing multipole to the center section of the HPC for storage using a DC potential well of approximately 4 V, omitting relocation of precursor ions to the back section prior to the ETD reaction (Figure 2a). Reagent accumulation is then achieved by holding the front section at a positive DC offset to establish the potential well for anions. Charge-sign independent trapping for the ion–ion reaction was then performed in the same fashion for both standard and high capacity ETD by setting all DC bias voltages to 0 V and applying axial confining rf voltages to the end lenses of the HPC.
ESI-MS/MS Analysis
Myoglobin (P68082) and carbonic anhydrase (P00921) were resuspended at approximately 10 pmol per μL in 49.9:49.9:0.2 acetonitrile/water/formic acid, infused via syringe pump into the mass spectrometer at 5 μL per min through a 500 μL syringe, and ionized with electrospray ionization (ESI) at +3.5 kV with respect to ground. For myoglobin, MS/MS scans were performed in the Orbitrap with unthresholded transient acquisition at a resolving power of 120,000 (full width at half maximum) at 200 m/z with a range of 200–2000 Th. Precursor ions were isolated with the mass selecting quadrupole with an isolation width of 10 m/z, and automatic gain control (AGC) target values ranging from 100,000 to 1,000,000 charges as indicated. Transient averaging began after data acquisition was started so that scans with one to 100 transients averaged could be analyzed. An AGC target of 800,000 charges was used for fluoranthene reagent anions (m/z 202, isolated by the mass selecting quadrupole) for ETD and EThcD experiments, reaction times varied as indicated in Supplemental Table 1, and a normalized collision energy of 10 was used for EThcD. Analyses were performed in intact protein mode with a pressure of 3 mTorr in the ion-routing multipole. For carbonic anhydrase, MS/MS scans were performed in the Orbitrap at both 120,000 and 240,000 resolving powers (at 200 m/z) with precursor AGC target values of 300,000 and 1,000,000 and an m/z range of 400–2000 Th. Transient averaging began after data acquisition was started so that scans with one to 200 transients averaged could be analyzed. The AGC target for fluoranthene reagent anions was set to 700,000 charges, reaction times varied as indicated in the text, and the pressure in the ion-routing multipole was set to 1 mTorr in intact protein mode.
MS/MS m/z spectra were deconvoluted with XTRACT (Thermo Fisher Scientific) using default parameters and a S/N threshold of two. ProSight Lite [54] was used to generate matched fragments using a 10 ppm tolerance. ETD spectra were matched with c-, z-, and y-type ions, and EThcD spectra were also matched with those fragment types in addition to b-type ions. N-terminal methionines were removed from the protein sequences before matching with ProSight Lite, and carbonic anhydrase was matched with an additional sequence modification of N-terminal acetylation (+42.01 Da). Supplemental Figure 5 compares signal from fragments seen with ETD and high capacity ETD, and those unique only to high capacity ETD.
Since precursor ion signal is distributed amongst product ions upon fragmentation, the size of the initial precursor population and its subsequent effect on product ion S/N are critical considerations in tandem MS experiments. Ion trapping instruments use a fixed number of charges per scan, and the capacity of these devices is ultimately defined by the number of charges they can hold and manipulate. The maximum number of ions that can be effectively stored decreases in inverse proportion to the precursor ion charge, as shown by the expression:
$$ N_{charges} / Z_p = N_{ions} $$
where $N_{charges}$ is the number of stored charges, $Z_p$ is the charge of the precursor, and $N_{ions}$ is the number of ions that comprise a precursor population of charge $Z_p$. This relationship becomes more consequential when analyzing intact proteins, as larger proteins tend to be more highly charged in standard electrospray ionization. This connection between charge state distribution and protein mass has been empirically modeled by Kelleher and co-workers [27]; according to their work, the theoretical charge state distribution of a protein as a function of its molecular weight (MW) can be estimated as:
$$ CS_n = \exp\left[ \frac{-\left( n - 4.12\times 10^{-4}\,MW - 0.297 \right)^2}{2\left( 1.59\times 10^{-4}\,MW - 0.153 \right)^2} \right] $$
for $0 < n < 8.64 \times 10^{-4}\,MW + 1$.
The charge state distributions shown in Figure 1a were obtained according to Equation 2, showing that the most intense predicted charge state for a smaller protein like ubiquitin is around z = +10, and the mode of the distribution increases as protein molecular weight increases (approximately z = +20 and z = +30 for myoglobin and carbonic anhydrase, respectively).
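Equation 1 can be checked directly; the short Python snippet below is illustrative only, the 300,000-charge budget is the value assumed in the text, and the example charge states are the approximate modes noted above.

```python
def ions_per_scan(trap_capacity_charges, precursor_charge):
    """Equation 1: precursor ions that fit within a fixed charge budget."""
    return trap_capacity_charges // precursor_charge

capacity = 300_000  # approximate ion trap charge capacity assumed in the text
for protein, z in [("ubiquitin", 10), ("myoglobin", 20), ("carbonic anhydrase", 32)]:
    print(protein, ions_per_scan(capacity, z))
# ubiquitin 30000, myoglobin 15000, carbonic anhydrase 9375 (i.e., the ~9400 quoted above)
```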
Figure 1. Challenges with product ion signal-to-noise following fragmentation of intact proteins. (a) Theoretical charge state distributions for ubiquitin, myoglobin, and carbonic anhydrase show that the absolute number of charges that precursors carry and the relative width of the charge state distribution both increase as protein mass increases. (b) Considering precursors near the middle of the charge state distribution (panel a, open symbols), the number of ions stored for an MS/MS event is plotted versus molecular weight, using a capacity of 300,000 charges. (c) The S/N challenges inherent to fewer ions present in the precursor population are compounded by the increase in the number of dissociation channels as protein size increases, spreading the measurable signal across more fragment ions. (d) Large proteins generate proportionally larger fragment ions, further exacerbating S/N problems, as larger fragments have broader isotope distributions and require more ions per fragment to lift peaks above the noise band (light grey). The relative S/N value can be improved by spectral averaging (dark grey), but this can substantially increase acquisition time and restrict throughput. (e) ETD generates c-type (blue) and z-type (red) fragments (i), but competing pathways, like secondary dissociation events (ii), non-dissociative electron transfer (iii), and proton transfer reactions (iv), further dilute the signal seen for sequence-informative product ions (c and z fragments). (f) Three main factors contribute to the limitations on ion storage capacity of the back section of the high pressure cell: top, the confining rf field in the HPC is weakest near the lenses (dashed circle); middle, the axial dc confinement field that keeps ions in the back section of the HPC is radially destabilizing (dashed arrows); bottom, space charge effects from the trapping of like-signed ions are both radially and axially destabilizing (dashed arrows), increasing with the amount of stored charge.
An increase in observed charge state for larger proteins equates to fewer ions that can be accumulated for MS/MS events in ion traps that have fixed charge capacities (Equation 1)—and fewer precursor ions mean less charge to be distributed across the resulting product reaction channels. Figure 1b shows the number of ions that can be accumulated for a standard ETD reaction (assuming a capacity of ∼300,000 charges [50, 52]) for ubiquitin, myoglobin, and carbonic anhydrase. Here a precursor charge state with relatively high theoretical abundance (Figure 1a, open symbols) was selected for each protein so that all three precursor m/z values were within ∼40 Th of each other. It is clear that the number of precursor ions stored decreases exponentially as protein size increases, meaning larger proteins already present challenges for product ion S/N. Moreover, the estimated number of dissociation channels increases linearly with increasing molecular weight, as approximated by the expression:
$$ N_{channels} = N_{fragmentTypes} \cdot N_{bonds} = N_{fragmentTypes} \cdot \left( MW/111.1254 - 1 \right) $$
where $N_{channels}$ is the number of dissociation channels, $N_{fragmentTypes}$ describes the number of fragment types generated by the dissociation method (e.g., c- and z-type), $N_{bonds}$ is the number of inter-residue bonds in the protein, MW is the molecular weight of the protein, and 111.1254 is the mass of averagine [55]. The last term in Equation 3 uses averagine as an approximation so that any protein of known molecular weight can be described without requiring knowledge of the number of amino acids comprising the primary sequence. If the residue count of a protein is known, $N_{bonds}$ can be calculated simply by subtracting one from this total. Considering only the canonical ETD fragment series (c- and z-type) and assuming for simplicity that these fragments are the only channels across which signal can be distributed, Figure 1c illustrates the S/N challenges that arise from an already smaller number of precursor ions (Figure 1b) being spread across more fragments. To compound S/N challenges further, larger proteins generate larger fragments, which have broader isotope distributions. The decrease in peak abundance due to the presence of naturally occurring isotopes can be expressed by the relationship:
$$ S/N \propto 1/\sqrt{MW} $$
which ultimately requires a greater number of ions for larger fragments to raise usable signal above the noise band (Figure 1d) [27, 56].
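Equations 3 and 4 can likewise be put into a few lines. This is an illustrative sketch, not code from the study; the averagine mass, the 300,000-charge budget, and the example masses for carbonic anhydrase and ubiquitin come from the text, while the function names are ours.

```python
import numpy as np

def n_channels(mw, n_fragment_types=2):
    """Equation 3: dissociation channels, with averagine (111.1254 Da) approximating the bond count."""
    return n_fragment_types * (mw / 111.1254 - 1)

def relative_fragment_sn(mw):
    """Equation 4: relative S/N scaling from broader isotope distributions."""
    return 1 / np.sqrt(mw)

charges = 300_000  # same charge budget as above
print(charges / n_channels(29_000))                                 # ~580 charges per c/z channel
print(relative_fragment_sn(29_000) / relative_fragment_sn(8_600))   # ~0.54 relative to ubiquitin
```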
All of these challenges are fundamental to top down proteomics, regardless of fragmentation type employed, but the characteristics of ETD itself also require consideration when examining product ion S/N in ETD tandem MS: (1) ETD reactions by their nature consume charge, and (2) the signal in an ETD MS/MS spectrum exists in product ions from competing pathways, such as internal fragments from secondary ETD reactions, non-dissociative electron transfer, and proton transfer reactions (Figure 1e) [57, 58, 59, 60, 61, 62]. Consumption of charge during the reaction limits the sensitivity of the MS/MS scan because S/N can be related to the number of charges present by:
$$ N_{charges} = \nu_n \cdot \left( S/N \right) $$
assuming the presence of only thermal noise during detection, where $\nu_n$ is the thermal noise (estimated to be the equivalent of 20 charges, caused by the amplifier, in the Orbitrap) [48, 63]. Competing reaction pathways can reduce sequence-informative product ion yield, and even those products that do impart sequence information (i.e., c-type and z-type fragments) can exist in multiple charge states, consuming available signal for no additional gain in sequence coverage. We conclude that strategies for improving product ion S/N will be of considerable value for top down protein analysis.
Two of the most straightforward practices to increase product ion S/N and effectively mitigate these challenges are (1) averaging signal from multiple spectra, and (2) conducting reactions on larger precursor populations. Considering modern Fourier transform mass spectrometers (i.e., FT-ICR and Orbitrap systems), which are widely used for intact protein analysis and top down proteomics, spectral averaging consists of summing several individual time-domain signals. Note that transient is used here and throughout for simplicity to describe the time-domain signal from FT-MS measurements. Here, the signal amplitude increases proportionally with the number of transients averaged ($N_{transients}$), while the noise increases as the square root of $N_{transients}$:
$$ G_{S/N} \propto N_{transients}/\sqrt{N_{transients}} \quad \therefore \quad G_{S/N} \propto \sqrt{N_{transients}} $$
making the relative gain in the S/N of a spectrum ($G_{S/N}$) increase by the square root of $N_{transients}$ (Figure 1d) [64]. For example, the approximately 2-fold difference depicted in Figure 1d would require four averaged transients. Although effective in generating high-quality MS/MS spectra, transient averaging requires significantly longer acquisition times, as even the fastest FT instruments can require several hundred ms or longer per transient to achieve necessary HR/AM measurements [35, 37, 65]. To meet throughput demands of many experiments, it is desirable to keep transient averaging to a minimum. Furthermore, extensive transient averaging may still fail to salvage low-level ions that are buried in the noise [66]; conversely, sufficiently large precursor populations can provide enough signal to boost these ions to a detectable level.
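As a compact illustration of Equations 5 and 6 (a sketch assuming the ~20-charge thermal noise figure quoted above; the target S/N of 3 is an arbitrary example):

```python
import math

def min_charges_for_sn(target_sn, thermal_noise_charges=20):
    """Equation 5: charges needed to reach a target S/N when only thermal noise is present."""
    return thermal_noise_charges * target_sn

def sn_gain_from_averaging(n_transients):
    """Equation 6: relative S/N gain from averaging n transients."""
    return math.sqrt(n_transients)

print(min_charges_for_sn(3))      # 60 charges to lift a fragment to S/N = 3
print(sn_gain_from_averaging(4))  # 2.0 -- a 2-fold S/N gain costs 4x the acquisition time
```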
In commercially available linear ion trap-Orbitrap hybrid systems, ETD occurs in the high pressure cell (HPC) of the A-QLT. This cell is partitioned into three sections (front, center, and back) with linear dimensions of 12.5 mm, 35 mm, and 12.5 mm, respectively. With the introduction of a front-end reagent source on the Orbitrap Fusion Tribrid system, precursors are isolated by a mass-selecting quadrupole, accumulated in the HPC, and then sequestered in the back section of the cell with reagent trapping in the center and front sections (Figures 1f and 2a). Three main factors influence the ion capacity of the back section of the HPC: (1) the rf radial confinement field on the HPC used to trap the ions; (2) the DC axial confinement field created by the DC potentials applied to the center section, back section, and end lens (Figure 2a); and (3) the space charge field exerted by the ions themselves. Increasing the number of ions sequestered into this back section causes each of these forces to contribute some form of ion destabilization. The rf field confining the ions radially weakens near the end lens due to fringe field effects, leading to a weaker confinement potential in a region of the back section that results in less efficient trapping (Figure 1f, top). Correspondingly, axial DC fields push ions into the back section from both axial directions, causing radial destabilization. As the amount of stored charge increases, ions will be pushed out radially from the stable regions of the trap into the rf field, resulting in micro-motion induced by the rf field that can cause collisional dissociation and/or ejection [67, 68]. This destabilization limits the number of precursors that can be effectively stored without unwanted fragmentation (Figure 1f, middle). Finally, the space charge forces exerted by the precursors of like charge are axially and radially destabilizing, increasing with the total amount of trapped charge and compounding the effects described for the other fields (Figure 1f, bottom).
Figure 2. Increased product ion S/N with high capacity ETD. (a) In standard ETD, precursor cations are sequestered into the back section of the high pressure cell of the A-QLT prior to the reaction, while reagent anions are accumulated in the center and front sections. In high capacity ETD, precursor cations are accumulated in the center section, allowing larger precursor populations for increased product ion S/N. Black lines show DC potentials. (b) Spectra from both standard and high capacity ETD scans (two transients averaged for each) on the z = +18 precursor of myoglobin show that product ions have greater S/N in high capacity ETD, and this increase in product ion S/N allows more sequencing ions to be matched in high capacity ETD. Both spectra are on the same intensity scale.
Altering the precursor storage event so that precursor ions remain in the center section of the HPC, rather than sequestering them in the back section, eliminates the challenges of both fringe field effects and smaller confinement fields. Additionally, this scheme provides a greater trapping volume, alleviating the destabilization effects of the DC axial confinement fields and space charge forces. Equation 7 estimates the relationship between the ion capacity (trapping volume) of each section of the linear ion trap and its length:
$$ N_{center}/N_{back} \propto l_{center}/l_{back} $$
where $N_{center/back}$ and $l_{center/back}$ are the charge capacities and lengths of the center/back sections of the HPC, respectively [50, 69]. Because the center and back sections have the same general operating parameters, the ratio of charge capacities is approximately proportional to the ratio of the lengths of the two sections. Comparing the length of the center section (35 mm) and the back section (12.5 mm), we expect an approximately 3-fold increase in charge capacity when storing precursors in the center section instead of the back section. The current storage capacity of sequestration in the back section is estimated to be ∼200,000 to ∼500,000 charges [52], meaning precursor targets of 1,000,000 or more charges should be successfully stored using the new acquisition scheme. Note that using the center section for precursor ion storage affects the accumulation of reagent anions as well, which we discuss further in the next section. In this work, we explore the benefits that can be achieved when altering the precursor accumulation event in the current A-QLT device to permit accumulation of more precursor charges in the center section.
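The expected gain from Equation 7 is a back-of-the-envelope calculation using the section lengths and the estimated back-section capacity quoted above; the extrapolated center-section range is only an estimate implied by Equation 7, not a measured value.

```python
l_center, l_back = 35.0, 12.5        # section lengths of the HPC, in mm
back_capacity = (200_000, 500_000)   # estimated charge capacity of the back section

scale = l_center / l_back            # Equation 7: capacity scales roughly with section length
print(scale)                                     # 2.8, i.e. roughly a 3-fold gain
print([int(scale * c) for c in back_capacity])   # [560000, 1400000] expected for the center section
```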
Implementing High Capacity ETD in a Dual Pressure Linear Ion Trap
To address the limitations in ion capacity imposed by precursor sequestration in the back section of the HPC, we have employed a new implementation of ETD called high capacity ETD. Figure 2a illustrates how precursor storage differs between standard and high capacity ETD by showing the voltages employed relative to 0 V during the reagent accumulation period. The practical implications of this change in precursor spatial confinement are highlighted in Figure 2b, which shows ETD spectra from both the standard and high capacity implementations for the z = +18 precursor of myoglobin.
For both the standard and high capacity schemes, precursor AGC target values of 100,000, 200,000, 400,000, 600,000, 800,000, and 1,000,000 were investigated. In standard ETD, a target value of 400,000 produced the highest number of matched fragments (71), whereas higher target values did not translate to an increase in sequence-informative fragment ions. This is in accordance with the estimated capacity of ∼200,000 to ∼500,000 charges discussed above. In high capacity ETD, however, the largest AGC target value of 1,000,000 produced the most fragments (136), nearly doubling the number of fragments observed following standard ETD using the same amount of spectral averaging (two transients averaged for each). For product ions identified in both conditions, S/N values are approximately 3-fold higher, sometimes more, in high capacity ETD. Additional fragment ion identifications were often due to the improved S/N, enabling confident charge state assignment and subsequent matching against theoretical values.
Note that the change in the precursor accumulation event necessitates changes in the reagent accumulation event. In standard ETD, reagent anions are accumulated in the center section of the HPC, whereas in high capacity ETD only the front section is used (Figure 2a). This now limits the capacity for reagent anion storage, although the ion capacity of the end section should be higher for the reagent anions than it is for the precursor cations. The higher relative capacity can be accounted for by both the significant difference in ion m/z between reagent anions (202 Th) and their precursor counterparts (especially in top down proteomics) and by the single charge of the reagent anions compared with highly charged protein precursors. Slightly smaller reagent anion populations may affect the pseudo-first order kinetics of the ETD reactions, as the reagent, which may no longer be in large excess over the precursor population, could be depleted during the reaction. Indeed, loss of pseudo-first order kinetics due to reagent depletion was observed in these experiments, requiring longer reaction times to achieve sufficient fragmentation, even with the same reagent AGC target values (Supplemental Table 1). Using the spectra in part b of Figure 2 as an example, the standard ETD scheme using a precursor AGC target of 400,000 used a reaction time of 5 ms; the high capacity ETD scheme with a precursor AGC target of 1,000,000 required a 25 ms reaction time to reach a comparable extent of ETD reaction.
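The reagent-depletion argument can be visualized with a toy kinetic model. The sketch below is purely illustrative and is not the instrument's control logic or the authors' analysis: it integrates second-order ion-ion kinetics with an arbitrary rate constant and charge totals loosely based on the AGC targets above, simply to show why a reagent pool that is no longer in large excess needs a several-fold longer reaction time to reach a similar extent of reaction.

```python
def fraction_unreacted(t_ms, precursor_charges, reagent_anions, k=5e-7):
    """Toy second-order ion-ion kinetics, d[P]/dt = d[R]/dt = -k[P][R].
    k is an arbitrary illustrative rate constant (per charge per ms); dt is the Euler step."""
    p, r, dt = float(precursor_charges), float(reagent_anions), 0.01
    for _ in range(int(t_ms / dt)):
        dx = k * p * r * dt      # charge consumed by electron transfer in this step
        p, r = p - dx, r - dx
    return p / precursor_charges

# Reagent in ~2-fold excess (standard-ETD-like): 5 ms leaves ~23% of precursor charge unreacted
print(fraction_unreacted(5, 400_000, 800_000))
# Reagent no longer in excess of a 1,000,000-charge target: even 25 ms leaves ~31% unreacted
print(fraction_unreacted(25, 1_000_000, 700_000))
```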
Although this is a significant increase in reaction time, the resulting increase in total scan time is marginal. The standard ETD spectrum in Figure 2, with two averaged transients and an average precursor injection time of 4.3 ms, required 848 ms of total elapsed scan time for 71 fragments. The high capacity ETD spectrum, comparatively, with two averaged transients and a 12.4 ms average precursor injection time, required 904 ms of total elapsed scan time for 136 fragments. Thus, even though high capacity ETD required approximately 56 ms longer in total acquisition time, it nearly doubled the number of identifiable fragments generated. Even when averaging an additional transient (for a total acquisition time of 1271 ms), the number of fragments identified in standard ETD increased to only 82 fragments. Furthermore, increasing the reaction time for standard ETD did not provide any beneficial information. Standard ETD at both an AGC target of 400,000 with an 11 ms reaction time (6 ms longer) and an AGC target of 1,000,000 with a 25 ms reaction time (20 ms longer) yielded fewer matching fragments, 64 and 55, respectively, indicating degrees of over-reaction.
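Put in terms of fragments per unit acquisition time, the numbers reported in this paragraph make the trade-off explicit (simple arithmetic on the values above; the dictionary layout is ours):

```python
conditions = {
    "standard ETD, 2 transients":      (71, 0.848),   # matched fragments, elapsed scan time (s)
    "high capacity ETD, 2 transients": (136, 0.904),
    "standard ETD, 3 transients":      (82, 1.271),
}
for name, (fragments, seconds) in conditions.items():
    print(f"{name}: {fragments / seconds:.0f} matched fragments per second")
# ~84 vs. ~150 vs. ~65: the larger precursor population, not extra averaging, drives the gain
```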
High Capacity ETD for a Moderate Size Protein (∼17 kDa)
We extended our look at the benefits high capacity ETD can offer over standard ETD by investigating three charge states of myoglobin at six precursor AGC target values using varying amounts of spectral averaging. Figure 3 presents a heat map of the number of matched fragments generated from all three precursors selected. This includes data for each of the six AGC target values when averaging one to five transients. Several interesting trends arise, most notably the darker overall color of the high capacity heat maps that shows more fragments are being produced in high capacity ETD. For standard ETD, more fragments are generated from left to right as the number of transients averaged increases, but there is no distinct trend when moving from bottom to top in each heat map (i.e., increasing AGC target values), except for the increases seen when moving from 100,000 to higher targets. These results confirm that the storage capacity of the back section is approximately 200,000 to 400,000 charges, since these AGC targets have similar likelihoods of producing the same number of fragments as higher target values for a given number of averaged transients.
Figure 3. High capacity ETD generates more matched fragments than standard ETD. Averaging only one to two transients with larger AGC targets in high capacity ETD provides as many or more matched fragment ions as averaging four to five transients in standard ETD.
High capacity ETD maps show a distinct pattern of darker colors (more matching fragments) for higher precursor targets (upper half), demonstrating that the new implementation of ETD permits reaction of larger precursor populations for improved product ion S/N and more sequence-informative fragments for a given acquisition time. Importantly, Figure 3 also demonstrates the improvements in fragment ion generation seen with high capacity ETD over standard ETD when considering a given number of transients averaged. Higher AGC target values—especially 800,000 and 1,000,000—provide as many, if not more, matched fragments in two averaged transients as standard ETD can provide in five, and the benefits are striking when considering similar numbers of averaged transients for the two conditions. This may also indicate that approximately 2- to 5-fold more precursor ions can be stored successfully in high capacity ETD, if not more. Additionally, to eliminate differences in reaction time as a cause of the improvements observed with the high capacity scheme, we reacted precursors in the standard ETD scheme (HPC back section sequestration) for the duration used in high capacity ETD (Supplemental Figure 1). The larger precursor AGC targets required longer reaction times in high capacity ETD, so the number of fragments generated from standard ETD at high capacity reaction times is noticeably reduced because of over-reaction and generation of internal fragments, confirming that the benefits seen in high capacity ETD are attributable to the reaction of larger precursor populations.
High capacity ETD afforded improved protein sequence coverage. These improvements for the three precursors of myoglobin are summarized in Figure 4. High capacity ETD provides more sequence coverage in two averaged transients than standard ETD can achieve with five averaged transients for all three precursor ion species. Even a single scan with high capacity ETD provides competitive sequence coverage values compared with five averaged transients for standard ETD.
Higher quality spectra of high capacity ETD translate to greater protein sequence coverage with less spectral averaging required. Numbers along the left indicate the number of averaged transients and the y-axes show precursor AGC target values
As noted previously, the increased precursor injection times and longer reaction times in high capacity ETD can increase total spectral acquisition time slightly, even when using the same number of averaged transients, but Supplemental Figure 2 shows that these increases are minor relative to total scan time. The spread in the curves for different AGC target values in high capacity ETD (red) is greater than the curves in standard ETD (blue); this difference further demonstrates that larger precursor populations are indeed being retained when precursor target values are increased in high capacity ETD, whereas the size of the precursor population plateaus in standard ETD despite elevated AGC target values. Overall, the high capacity ETD scheme greatly improves the protein sequence coverage that can be obtained per second of data acquisition, which makes high capacity ETD highly advantageous when spectral quality must be balanced with acquisition time, as is needed in high-throughput top down (i.e., using online chromatography) proteomics experiments.
Even when increasing the degree of spectral averaging up to 100 averaged transients, high capacity ETD still provides increased sequence coverage (Figure 5). Here, as before, the AGC target values are set to the indicated values, although standard ETD cannot retain over ∼400,000 precursors (see above). Sequence coverage for myoglobin with high capacity ETD and standard ETD remains similar, as expected, for precursor targets of 100,000 and 200,000 charges. Despite improvements seen in high capacity ETD with relatively small degrees of spectral averaging (one to ten averaged transients) for moderately large target values (i.e., 400,000 and 600,000), sequence coverage achieved for the two implementations does converge with high degrees of averaging (>50 transients). The largest AGC target values (800,000 and 1,000,000) provide consistent gains in sequence coverage with high capacity ETD even with significant spectral averaging, although the difference between the two still diminishes. This is likely because myoglobin is a moderately sized protein; for these proteins, high degrees of spectral averaging, while incurring significantly extended acquisition times, can mitigate S/N challenges and match the boosts seen from larger precursor populations.
High capacity ETD enables larger AGC target values that produce greater protein sequence coverage than standard ETD for myoglobin, even with many averaged transients. Data here is shown for the z = +15 precursor of myoglobin
Beyond traditional ETD fragmentation, we observed that high capacity ETD can aid hybrid fragmentation methods as well. EThcD, which uses beam-type collisional activation of ETD products after the ion–ion reaction [70, 71], can improve sequence coverage for precursor ions, especially those with low charge density where precursor-to-product ion conversion efficiency is hindered by noncovalent interactions. We saw that the high capacity ETD scheme aided fragment ion generation and protein sequence coverage with EThcD on the z = +15 precursor of myoglobin (Supplemental Figure 3). We surmise that high capacity ETD can be especially valuable for these hybrid fragmentation techniques, where secondary activation must be carefully balanced because adding more fragmentation channels erodes product ion signal. As expected, the largest benefits to EThcD with the high capacity scheme were seen at the highest precursor targets.
High Capacity ETD for a Larger Protein (∼29 kDa)
High capacity ETD is well-positioned to provide pronounced gains for larger proteins, where the number of dissociation channels is significantly greater and even considerable degrees of spectral averaging cannot approach the increases provided by reaction of large precursor populations [72, 73]. To investigate this, we reacted the z = +34 precursor of carbonic anhydrase (∼29 kDa) using standard ETD (AGC target of 300,000) and high capacity ETD (AGC target of 1,000,000). To explore how the high capacity ETD and standard ETD schemes compare with higher resolution spectra, we also collected MS/MS spectra at two resolving powers (120 and 240 K). First, the best reaction times to use for each condition were determined experimentally (Supplemental Figure 4), and reaction times of 4 and 7 ms were used for standard and high capacity ETD, respectively.
Figure 6a demonstrates that high capacity ETD affords greater sequence coverage than standard ETD for up to 200 transients averaged at both 120 and 240 K resolving powers. In fact, high capacity ETD at 120 K outperforms standard ETD at 240 K. To show that the gains seen with high capacity ETD can be attributed to increases in product ion S/N, we plotted histograms of S/N values for product ions from ETD spectra (at 240 K) with eight (Figure 6b) and 200 (Figure 6c) transients averaged for both high capacity and standard ETD. The distributions are only shown up to S/N 20 to emphasize the region where the majority of the peaks lie, but the maximum S/N values for each condition are given in parentheses in the figure legends. Expected patterns arise: the distributions are shifted toward higher S/N values in high capacity ETD, and the distributions are broader with higher S/N values when 200 versus eight transients are averaged. Note that eight averaged transients with high capacity ETD provide similar sequence coverage to that seen with 200 averaged transients in standard ETD. Although a scan using eight averaged transients still requires approximately 2 to 4 s to acquire at 120 K (Supplemental Figure 2), high capacity ETD makes more thorough characterization of larger proteins on chromatographic timescales a realistic goal.
High capacity ETD provides superior results for characterization of carbonic anhydrase (∼29 kDa). Protein sequence coverage for the z = +34 precursor of carbonic anhydrase using either high capacity or standard ETD is shown in (a) with varying degrees of spectral averaging and at two different resolving powers (120 and 240 K). Histograms display the distribution of signal-to-noise (S/N) of peaks in high capacity and standard ETD spectra using eight or 200 averaged transients, (b) and (c), respectively, at a resolving power of 240 K. The y-axes show the peak count for a given S/N value (bin size = 0.1). The maximum S/N for a peak in each spectrum is given in parentheses in the figure, although the histograms only display the distributions up to 20 S/N to highlight the region where the majority of the peaks lie
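To make the construction of these histograms concrete, the following Python sketch bins a set of product ion S/N values with the 0.1 bin width used in Figure 6 and truncates the display at S/N 20. The S/N values here are simulated stand-ins rather than the measured carbonic anhydrase data, so the lognormal distributions and the random-number parameters are assumptions chosen only for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated product ion S/N values standing in for the measured data.
rng = np.random.default_rng(0)
sn_standard = rng.lognormal(mean=1.0, sigma=0.6, size=5000)
sn_high_cap = rng.lognormal(mean=1.4, sigma=0.6, size=5000)

bins = np.arange(0.0, 20.0 + 0.1, 0.1)   # bin size = 0.1, displayed up to S/N 20
plt.hist(sn_standard, bins=bins, alpha=0.5,
         label=f"standard ETD (max S/N {sn_standard.max():.0f})")
plt.hist(sn_high_cap, bins=bins, alpha=0.5,
         label=f"high capacity ETD (max S/N {sn_high_cap.max():.0f})")
plt.xlabel("product ion S/N")
plt.ylabel("peak count")
plt.legend()
plt.show()
```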
We have enabled the accumulation and retention of 2- to 5-fold more precursors for ETD reactions by altering the region in the ion trap where precursor ions are stored during reagent ion injection. When holding precursor cations in the center section of the high pressure cell of a dual cell quadrupole linear ion trap, as many as 1,000,000 charges or more can be stored for subsequent ion–ion reactions. This increase in precursor ion capacity boosts the signal-to-noise of product ions, producing higher quality MS/MS spectra with only minor increases in acquisition time. High capacity ETD facilitates a more robust characterization of intact protein cations—a single scan can achieve fragment ion production and protein sequence coverage equivalent to approximately five averaged scans of standard ETD. Overall, high capacity ETD improves the compromise between S/N improvements and spectral acquisition speed while still enabling enhanced MS/MS data quality for intact proteins, regardless of the degree of spectral averaging. Moreover, high capacity ETD has been implemented using commercially accessible hardware and is available on the newest generation of quadrupole-Orbitrap-linear ion trap Tribrid mass spectrometer (Orbitrap Fusion Lumos), giving it a distinct advantage over earlier approaches that required custom-modified devices.
The improvements in MS/MS characterization of intact proteins with high capacity ETD will advance top down proteomics by providing more robust fragmentation on a chromatographic timescale. This new implementation of ETD also benefits hybrid dissociation methods like EThcD, which show promise as new approaches to intact protein fragmentation. Future work will focus on how high capacity ETD can benefit other hybrid dissociation techniques (e.g., ultraviolet photodissociation (UVPD)-ETD methods [74] and activated ion ETD (AI-ETD) [75, 76]), with an emphasis on how this improved approach to ETD can be employed in large-scale proteome characterizations. With the implementation of high capacity ETD, we present a straightforward strategy to improve tandem mass spectra of intact proteins; this approach is implemented on the Orbitrap Fusion Lumos and maintains all of the benefits of conducting ion–ion reactions in the dual-cell quadrupole linear ion trap.
The authors gratefully acknowledge support from Thermo Fisher Scientific and NIH grant R01 GM080148. N.M.R. was funded through an NSF Graduate Research Fellowship (DGE-1256259). The authors also thank Graeme McAlister for helpful discussions.
Electronic supplementary material: ESM 1 (DOCX, 533 kb)
1. Kelleher, N.L.: Peer reviewed: top-down proteomics. Anal. Chem. 76, 196A–203A (2004)
2. Chait, B.T.: Chemistry. Mass spectrometry: bottom-up or top-down? Science 314, 65–66 (2006)
3. Zhang, H., Cui, W., Wen, J., Blankenship, R.E., Gross, M.L.: Native electrospray and electron-capture dissociation FTICR mass spectrometry for top-down studies of protein assemblies. Anal. Chem. 83, 5598–5606 (2011)
4. Konermann, L., Vahidi, S., Sowole, M.A.: Mass spectrometry methods for studying structure and dynamics of biological macromolecules. Anal. Chem. 86, 213–232 (2014)
5. McLafferty, F.W., Breuker, K., Jin, M., Han, X., Infusini, G., Jiang, H., Kong, X., Begley, T.P.: Top-down MS, a powerful complement to the high capabilities of proteolysis proteomics. FEBS J. 274, 6256–6268 (2007)
6. Carr, S.A., Hemling, M.E., Bean, M.F., Roberts, G.D.: Integration of mass spectrometry in analytical biotechnology. Anal. Chem. 63, 2802–2824 (1991)
7. Mann, M., Højrup, P., Roepstorff, P.: Use of mass spectrometric molecular weight information to identify proteins in sequence databases. Biol. Mass Spectrom. 22, 338–345 (1993)
8. Mann, M., Wilm, M.: Error-tolerant identification of peptides in sequence databases by peptide sequence tags. Anal. Chem. 66, 4390–4399 (1994)
9. Yu, L., Xiong, Y.-M., Polfer, N.C.: Periodicity of monoisotopic mass isomers and isobars in proteomics. Anal. Chem. 83, 8019–8023 (2011)
10. Aebersold, R., Mann, M.: Mass spectrometry-based proteomics. Nature 422, 198–207 (2003)
11. Coon, J., Syka, J.E.P., Shabanowitz, J., Hunt, D.F.: Tandem mass spectrometry for peptide and protein sequence analysis. Biotechniques 38, 519–523 (2005)
12. Smith, L.M., Kelleher, N.L.: Proteoform: a single term describing protein complexity. Nat. Methods 10, 186–187 (2013)
13. Breuker, K., Jin, M., Han, X., Jiang, H., McLafferty, F.W.: Top-down identification and characterization of biomolecules by mass spectrometry. J. Am. Soc. Mass Spectrom. 19, 1045–1053 (2008)
14. Zubarev, R., Kelleher, N.L., McLafferty, F.W.: Electron capture dissociation of multiply charged protein cations. A nonergodic process. J. Am. Chem. Soc. 120, 3265–3266 (1998)
15. Shaw, J.B., Li, W., Holden, D.D., Zhang, Y., Griep-Raming, J., Fellers, R.T., Early, B.P., Thomas, P.M., Kelleher, N.L., Brodbelt, J.S.: Complete protein characterization using top-down mass spectrometry and ultraviolet photodissociation. J. Am. Chem. Soc. 135, 12646–12651 (2013)
16. Zhou, M., Wysocki, V.H.: Surface induced dissociation: dissecting noncovalent protein complexes in the gas phase. Acc. Chem. Res. 47, 1010–1018 (2014)
17. Syka, J.E.P., Coon, J.J., Schroeder, M.J., Shabanowitz, J., Hunt, D.F.: Peptide and protein sequence analysis by electron transfer dissociation mass spectrometry. Proc. Natl. Acad. Sci. U. S. A. 101, 9528–9533 (2004)
18. Coon, J.J., Ueberheide, B., Syka, J.E.P., Dryhurst, D.D., Ausio, J., Shabanowitz, J., Hunt, D.F.: Protein identification using sequential ion/ion reactions and tandem mass spectrometry. Proc. Natl. Acad. Sci. U. S. A. 102, 9463–9468 (2005)
19. Udeshi, N.D., Compton, P.D., Shabanowitz, J., Hunt, D.F., Rose, K.L.: Methods for analyzing peptides and proteins on a chromatographic timescale by electron-transfer dissociation mass spectrometry. Nat. Protoc. 3, 1709–1717 (2008)
20. Cui, W., Rohrs, H.W., Gross, M.L.: Top-down mass spectrometry: recent developments, applications and perspectives. Analyst 136, 3854–3864 (2011)
21. Tran, J.C., Zamdborg, L., Ahlf, D.R., Lee, J.E., Catherman, A.D., Durbin, K.R., Tipton, J.D., Vellaichamy, A., Kellie, J.F., Li, M., Wu, C., Sweet, S.M.M., Early, B.P., Siuti, N., LeDuc, R.D., Compton, P.D., Thomas, P.M., Kelleher, N.L.: Mapping intact protein isoforms in discovery mode using top-down proteomics. Nature 480, 254–258 (2011)
22. Russell, J.D., Scalf, M., Book, A.J., Ladror, D.T., Vierstra, R.D., Smith, L.M., Coon, J.J.: Characterization and quantification of intact 26S proteasome proteins by real-time measurement of intrinsic fluorescence prior to top-down mass spectrometry. PLoS One 8, e58157 (2013)
23. Durbin, K.R., Fellers, R.T., Ntai, I., Kelleher, N.L., Compton, P.D.: Autopilot: an online data acquisition control system for the enhanced high-throughput characterization of intact proteins. Anal. Chem. 86, 1485–1492 (2014)
24. Rhoads, T.W., Rose, C.M., Bailey, D.J., Riley, N.M., Molden, R.C., Nestler, A.J., Merrill, A.E., Smith, L.M., Hebert, A.S., Westphall, M.S., Pagliarini, D.J., Garcia, B.A., Coon, J.J.: Neutron-encoded mass signatures for quantitative top-down proteomics. Anal. Chem. 86, 2314–2319 (2014)
25. Catherman, A.D., Skinner, O.S., Kelleher, N.L.: Top down proteomics: facts and perspectives. Biochem. Biophys. Res. Commun. 445, 683–693 (2014)
26. Garcia, B.A.: What does the future hold for top down mass spectrometry? J. Am. Soc. Mass Spectrom. 21, 193–202 (2010)
27. Compton, P.D., Zamdborg, L., Thomas, P.M., Kelleher, N.L.: On the scalability and requirements of whole protein mass spectrometry. Anal. Chem. 83, 6868–6874 (2011)
28. Siuti, N., Kelleher, N.L.: Decoding protein modifications using top-down mass spectrometry. Nat. Methods 4, 817–821 (2007)
29. Kellie, J.F., Tran, J.C., Lee, J.E., Ahlf, D.R., Thomas, H.M., Ntai, I., Catherman, A.D., Durbin, K.R., Zamdborg, L., Vellaichamy, A., Thomas, P.M., Kelleher, N.L.: The emerging process of top down mass spectrometry for protein analysis: biomarkers, protein-therapeutics, and achieving high throughput. Mol. Biosyst. 6, 1532–1539 (2010)
30. Huang, T.-Y., McLuckey, S.A.: Top-down protein characterization facilitated by ion/ion reactions on a quadrupole/time of flight platform. Proteomics 10, 3577–3588 (2010)
31. Makarov, A., Denisov, E.: Dynamics of ions of intact proteins in the Orbitrap mass analyzer. J. Am. Soc. Mass Spectrom. 20, 1486–1495 (2009)
32. Bogdanov, B., Smith, R.D.: Proteomics by FTICR mass spectrometry: top down and bottom up. Mass Spectrom. Rev. 24, 168–200 (2005)
33. Mann, M., Kelleher, N.L.: Precision proteomics: the case for high resolution and high mass accuracy. Proc. Natl. Acad. Sci. U. S. A. 105, 18132–18138 (2008)
34. McAlister, G.C., Berggren, W.T., Griep-Raming, J., Horning, S., Makarov, A., Phanstiel, D., Stafford, G., Swaney, D.L., Syka, J.E.P., Zabrouskov, V., Coon, J.J.: A proteomics grade electron transfer dissociation-enabled hybrid linear ion trap-Orbitrap mass spectrometer. J. Proteome Res. 7, 3127–3136 (2008)
35. Michalski, A., Damoc, E., Lange, O., Denisov, E., Nolting, D., Müller, M., Viner, R., Schwartz, J., Remes, P., Belford, M., Dunyach, J.-J., Cox, J., Horning, S., Mann, M., Makarov, A.: Ultra high resolution linear ion trap Orbitrap mass spectrometer (Orbitrap Elite) facilitates top down LC MS/MS and versatile peptide fragmentation modes. Mol. Cell. Proteomics 11, O111.013698 (2012)
36. Senko, M.W., Remes, P.M., Canterbury, J.D., Mathur, R., Song, Q., Eliuk, S.M., Mullen, C., Earley, L., Hardman, M., Blethrow, J.D., Bui, H., Specht, A., Lange, O., Denisov, E., Makarov, A., Horning, S., Zabrouskov, V.: Novel parallelized quadrupole/linear ion trap/Orbitrap tribrid mass spectrometer improving proteome coverage and peptide identification rates. Anal. Chem. 85, 11710–11714 (2013)
37. Ahlf, D.R., Compton, P.D., Tran, J.C., Early, B.P., Thomas, P.M., Kelleher, N.L.: Evaluation of the compact high-field Orbitrap for top-down proteomics of human cells. J. Proteome Res. 11, 4308–4314 (2012)
38. Håkansson, K., Chalmers, M.J., Quinn, J.P., McFarland, M.A., Hendrickson, C.L., Marshall, A.G.: Combined electron capture and infrared multiphoton dissociation for multistage MS/MS in a Fourier transform ion cyclotron resonance mass spectrometer. Anal. Chem. 75, 3256–3262 (2003)
39. Haselmann, K.F., Budnik, B.A., Olsen, J.V., Nielsen, M.L., Reis, C.A., Clausen, H., Johnsen, A.H., Zubarev, R.A.: Advantages of external accumulation for electron capture dissociation in Fourier transform mass spectrometry. Anal. Chem. 73, 2998–3005 (2001)
40. Vasicek, L.A., Ledvina, A.R., Shaw, J., Griep-Raming, J., Westphall, M.S., Coon, J.J., Brodbelt, J.S.: Implementing photodissociation in an Orbitrap mass spectrometer. J. Am. Soc. Mass Spectrom. 22, 1105–1108 (2011)
41. Syka, J.E.P., Marto, J.A., Bai, D.L., Horning, S., Senko, M.W., Schwartz, J.C., Ueberheide, B., Garcia, B., Busby, S., Muratore, T., Shabanowitz, J., Hunt, D.F.: Novel linear quadrupole ion trap/FT mass spectrometer: performance characterization and use in the comparative analysis of histone H3 post-translational modifications. J. Proteome Res. 3, 621–626 (2004)
42. March, R.E.: An introduction to quadrupole ion trap mass spectrometry. J. Mass Spectrom. 32, 351–369 (1997)
43. Louris, J.N., Cooks, R.G., Syka, J.E.P., Kelley, P.E., Stafford, G.C., Todd, J.F.J.: Instrumentation, applications, and energy deposition in quadrupole ion-trap tandem mass spectrometry. Anal. Chem. 59, 1677–1685 (1987)
44. Stephenson, J.L., McLuckey, S.A., Reid, G.E., Wells, J.M., Bundy, J.L.: Ion/ion chemistry as a top-down approach for protein analysis. Curr. Opin. Biotechnol. 13, 57–64 (2002)
45. Reid, G.E., McLuckey, S.A.: "Top down" protein characterization via tandem mass spectrometry. J. Mass Spectrom. 37, 663–675 (2002)
46. Scherperel, G., Reid, G.E.: Emerging methods in proteomics: top-down protein characterization by multistage tandem mass spectrometry. Analyst 132, 500–506 (2007)
47. Louris, J.N., Brodbelt-Lustig, J.S., Graham Cooks, R., Glish, G.L., van Berkel, G.J., McLuckey, S.A.: Ion isolation and sequential stages of mass spectrometry in a quadrupole ion trap mass spectrometer. Int. J. Mass Spectrom. Ion Processes 96, 117–137 (1990)
48. Limbach, P.A., Grosshans, P.B., Marshall, A.G.: Experimental determination of the number of trapped ions, detection limit, and dynamic range in Fourier transform ion cyclotron resonance mass spectrometry. Anal. Chem. 65, 135–140 (1993)
49. Tsybin, Y.O., Witt, M., Baykut, G., Håkansson, P.: Electron capture dissociation Fourier transform ion cyclotron resonance mass spectrometry in the electron energy range 0–50 eV. Rapid Commun. Mass Spectrom. 18, 1607–1613 (2004)
50. Schwartz, J.C., Senko, M.W., Syka, J.E.P.: A two-dimensional quadrupole ion trap mass spectrometer. J. Am. Soc. Mass Spectrom. 13, 659–669 (2002)
51. Second, T.P., Blethrow, J.D., Schwartz, J.C., Merrihew, G.E., MacCoss, M.J., Swaney, D.L., Russell, J.D., Coon, J.J., Zabrouskov, V.: Dual-pressure linear ion trap mass spectrometer improving the analysis of complex protein mixtures. Anal. Chem. 81, 7757–7765 (2009)
52. Rose, C.M., Russell, J.D., Ledvina, A.R., McAlister, G.C., Westphall, M.S., Griep-Raming, J., Schwartz, J.C., Coon, J.J., Syka, J.E.P.: Multipurpose dissociation cell for enhanced ETD of intact protein species. J. Am. Soc. Mass Spectrom. 24, 816–827 (2013)
53. Earley, L., Anderson, L.C., Bai, D.L., Mullen, C., Syka, J.E.P., English, A.M., Dunyach, J.-J., Stafford, G.C., Shabanowitz, J., Hunt, D.F., Compton, P.D.: Front-end electron transfer dissociation: a new ionization source. Anal. Chem. 85, 8385–8390 (2013)
54. Fellers, R.T., Greer, J.B., Early, B.P., Yu, X., LeDuc, R.D., Kelleher, N.L., Thomas, P.M.: ProSight Lite: graphical software to analyze top-down mass spectrometry data. Proteomics 15, 1235–1238 (2015)
55. Senko, M.W., Beu, S.C., McLafferty, F.W.: Determination of monoisotopic masses and ion populations for large biomolecules from resolved isotopic distributions. J. Am. Soc. Mass Spectrom. 6, 229–233 (1995)
56. Rockwood, A.L.: Relationship of Fourier transforms to isotope distribution calculations. Rapid Commun. Mass Spectrom. 9, 103–105 (1995)
57. Coon, J.J., Syka, J.E.P., Schwartz, J.C., Shabanowitz, J., Hunt, D.F.: Anion dependence in the partitioning between proton and electron transfer in ion/ion reactions. Int. J. Mass Spectrom. 236, 33–42 (2004)
58. Compton, P.D., Strukl, J.V., Bai, D.L., Shabanowitz, J., Hunt, D.F.: Optimization of electron transfer dissociation via informed selection of reagents and operating parameters. Anal. Chem. 84, 1781–1785 (2012)
59. McLuckey, S.A., Stephenson, J.L.: Ion/ion chemistry of high-mass multiply charged ions. Mass Spectrom. Rev. 17, 369–407 (1999)
60. Rose, C.M., Rush, M.J.P., Riley, N.M., Merrill, A.E., Kwiecien, N.W., Holden, D.D., Mullen, C., Westphall, M.S., Coon, J.J.: A calibration routine for efficient ETD in large-scale proteomics. J. Am. Soc. Mass Spectrom. (2015). doi: 10.1007/s13361-015-1183-1
61. McLuckey, S.A., Stephenson, J.L., Asano, K.G.: Ion/ion proton-transfer kinetics: implications for analysis of ions derived from electrospray of protein mixtures. Anal. Chem. 70, 1198–1202 (1998)
62. Good, D.M., Wirtala, M., McAlister, G.C., Coon, J.J.: Performance characteristics of electron transfer dissociation mass spectrometry. Mol. Cell. Proteomics 6, 1942–1951 (2007)
63. Makarov, A., Denisov, E., Kholomeev, A., Balschun, W., Lange, O., Strupat, K., Horning, S.: Performance evaluation of a hybrid linear ion trap/Orbitrap mass spectrometer. Anal. Chem. 78, 2113–2120 (2006)
64. Marshall, A.G., Comisarow, M.B.: Fourier transform methods in spectroscopy. J. Chem. Educ. 52, 638 (1975)
65. Makarov, A., Denisov, E., Lange, O.: Performance evaluation of a high-field Orbitrap mass analyzer. J. Am. Soc. Mass Spectrom. 20, 1391–1396 (2009)
66. Fornelli, L., Damoc, E., Thomas, P.M., Kelleher, N.L., Aizikov, K., Denisov, E., Makarov, A., Tsybin, Y.O.: Analysis of intact monoclonal antibody IgG1 by electron transfer dissociation Orbitrap FTMS. Mol. Cell. Proteomics 11, 1758–1767 (2012)
67. Sannes-Lowery, K., Griffey, R.H., Kruppa, G.H., Speir, J.P., Hofstadler, S.A.: Multipole storage assisted dissociation, a novel in-source dissociation technique for electrospray ionization generated ions. Rapid Commun. Mass Spectrom. 12, 1957–1961 (1998)
68. Sannes-Lowery, K.A., Hofstadler, S.A.: Characterization of multipole storage assisted dissociation: implications for electrospray ionization mass spectrometry characterization of biomolecules. J. Am. Soc. Mass Spectrom. 11, 1–9 (2000)
69. Campbell, J.M., Collings, B.A., Douglas, D.J.: A new linear ion trap time-of-flight system with tandem mass spectrometry capabilities. Rapid Commun. Mass Spectrom. 12, 1463–1474 (1998)
70. Frese, C.K., Altelaar, A.F.M., van den Toorn, H., Nolting, D., Griep-Raming, J., Heck, A.J.R., Mohammed, S.: Toward full peptide sequence coverage by dual fragmentation combining electron-transfer and higher-energy collision dissociation tandem mass spectrometry. Anal. Chem. 84, 9668–9673 (2012)
71. Brunner, A.M., Lossl, P., Liu, F., Huguet, R., Mullen, C., Yamashita, M., Zabrouskov, V., Makarov, A., Altelaar, A.F.M., Heck, A.J.R.: Benchmarking multiple fragmentation methods on an Orbitrap Fusion for top-down phospho-proteoform characterization. Anal. Chem. 87, 4152–4158 (2015)
72. Fornelli, L., Parra, J., Hartmer, R., Stoermer, C., Lubeck, M., Tsybin, Y.O.: Top-down analysis of 30–80 kDa proteins by electron transfer dissociation time-of-flight mass spectrometry. Anal. Bioanal. Chem. 405, 8505–8514 (2013)
73. Han, X., Jin, M., Breuker, K., McLafferty, F.W.: Extending top-down mass spectrometry to proteins with masses greater than 200 kDa. Science 314, 109–112 (2006)
74. Cannon, J.R., Holden, D.D., Brodbelt, J.S.: Hybridizing ultraviolet photodissociation with electron transfer dissociation for intact protein characterization. Anal. Chem. 86, 10970–10977 (2014)
75. Riley, N.M., Westphall, M.S., Coon, J.J.: Activated ion electron transfer dissociation for improved fragmentation of intact proteins. Anal. Chem. 87, 7109–7116 (2015)
76. Zhao, Y., Riley, N.M., Sun, L., Hebert, A.S., Yan, X., Westphall, M.S., Rush, M.J.P., Zhu, G., Champion, M.M., Medie, F.M., Champion, P.A.D., Coon, J.J., Dovichi, N.J.: Coupling capillary zone electrophoresis with electron transfer dissociation and activated ion electron transfer dissociation for top-down proteomics. Anal. Chem. 87, 5422–5429 (2015)
© American Society for Mass Spectrometry 2015
1. Department of Chemistry, University of Wisconsin-Madison, Madison, USA
2. Thermo Fisher Scientific, San Jose, USA
3. Department of Biomolecular Chemistry, University of Wisconsin-Madison, Madison, USA
4. Genome Center of Wisconsin, University of Wisconsin-Madison, Madison, USA
Riley, N.M., Mullen, C., Weisbrod, C.R. et al. J. Am. Soc. Mass Spectrom. (2016) 27: 520. https://doi.org/10.1007/s13361-015-1306-8
Received 21 August 2015
Accepted 05 November 2015
IZA Journal of Labor Economics
Training and minimum wages: first evidence from the introduction of the minimum wage in Germany
Lutz Bellmann, Mario Bossler, Hans-Dieter Gerner & Olaf Hübler
IZA Journal of Labor Economics volume 6, Article number: 8 (2017)
We analyze the short-run impact of the introduction of the new statutory minimum wage in Germany on further training at the workplace level. Applying difference-in-differences methods to data from the IAB Establishment Panel, we do not find a reduction in the training incidence but a slight reduction in the intensity of training at treated establishments. Effect heterogeneities reveal that the negative impact is mostly driven by employer-financed training. On the worker level, we observe a reduction of training for medium- and high-skilled employees but no significant effects on the training of low-skilled employees.
In the literature on minimum wages, there has been a long-lasting and still ongoing discussion on the effects of minimum wages on employment. In their surveys, Brown (1999) as well as Neumark and Wascher (2007) conclude that most studies until the late 1980s corroborated the conventional view that minimum wages reduce employment. In the 1990s, a new strand of research in applied microeconometrics failed to detect meaningful negative employment effects. This caused the literature to converge towards a debate on the size of such—mostly small—employment effects, as well as potential alternative channels of adjustment within affected firms (see, among others, Addison 2017 or Bossler and Gerner 2016). Bárány (2016) argues that an increase in training might be one of these adjustment channels and may serve as an explanation for only small disemployment effects. According to economic theory, minimum wages could, first, weaken employees' incentives to accumulate human capital because they lower the expected returns from training. Second, they could reduce employers' willingness to finance further training as part of cost-saving strategies. In the case of non-competitive labor markets, however, a counteractive increase in training investments could be used to restore productivity-dependent rents. Following these arguments, it seems sensible to complement empirical studies of employment effects (e.g., Bossler and Gerner 2016) with evidence on minimum wage-induced impacts on training at the workplace level.
Of the 28 member states of the European Union, 22 have a statutory minimum wage, while sector-specific minimum wages and collective bargaining regimes are used in the remaining six countries (Schulten 2016). In Germany, the statutory minimum wage came into force on 1 January 2015 after it was approved in parliament on 11 July 2014. It is the first compulsory minimum wage that applies to all employees, with only minor exemptions.1 The minimum wage was introduced in response to a period of two decades of a substantial decrease in collective bargaining coverage and an increase in wage inequality. The new minimum wage is largely binding, and employer expectations surveyed prior to the minimum wage introduction make adjustments in firm-financed training likely (Bossler 2017). Based on a biennial suggestion of the newly introduced minimum wage commission, the minimum wage can be adjusted by a legislative decree of the German Federal Government. The Minimum Wage Law §9(2) determines that the commission shall suggest a minimum wage that contributes to an appropriate minimum protection of workers and to fair and functioning conditions of competition and does not jeopardize employment. While there is no clear connection between the level of the minimum wage and the cost of living, the law states that the development of the minimum wage should follow the development of collectively bargained wages in Germany.
This article studies minimum wage effects on training in the course of the introduction of a statutory minimum wage in continental Europe. Applying difference-in-differences estimation techniques to data from the IAB Establishment Panel 2011–2015, the analysis contributes to the literature on training and minimum wages in three respects. First, we present training effects of the new statutory minimum wage in Germany, which was introduced on 1 January 2015. Second, we can distinguish between three types of training (external training courses, internal training courses, and training on the job) and three skill groups (unskilled workers, workers with vocational qualifications, workers with university degrees). Third, we distinguish between training that is solely firm-financed and training which includes employee expenditures. The empirical analysis of minimum wage effects on training with establishment data is a useful supplement to investigations of individual data. However, employer-employee data that include the time period of the German minimum wage introduction are not yet available.
The article proceeds as follows. Section 2 describes the theoretical aspects presented in previous literature based on which we formulate empirical hypotheses. Section 3 describes the data and microeconometric methods of our analysis. Section 4 presents the empirical results including a sample description, the baseline difference-in-differences results, robustness checks, and effect heterogeneities with respect to sectors, types of training, the employees' financial participation at training costs, and training by qualification levels. Section 5 concludes.
Previous literature and hypotheses
In this section, we review the minimum wage literature with respect to effects on training. The implementation of the statutory minimum wage affects the individuals' and firms' training decisions, as it potentially changes the opportunity costs and the gains of training. According to the standard human capital theory, a large part of human capital is accumulated on the job. Employees often finance these investments through wage cuts since they can internalize future gains from training due to an increased productivity. However, a binding minimum wage may inhibit the ability of employers to cut wages to finance training costs (or the firm's share of training costs). Therefore, the implementation of minimum wages is predicted to decrease training of low-paid employees.
In the literature, Leighton and Mincer (1981) corroborate this conjecture by showing that US states with a relatively high proportion of low-skilled employees, and thus a relatively larger applicability of the federal minimum wage, exhibit lower training activities. Hashimoto (1982) argues that minimum wages enhance labor market competition through increasing competition for jobs. This, in turn, leads to a reduction in training. Lazear (1979) estimates the effects of an increase in the minimum wage on training intensity and finds a reduction of 3 to 15% relative to what it would have been in the absence of changes in the minimum wage. The negative effect of minimum wages on training is also supported in studies by Schiller (1994), who analyzes young labor market entrants that receive less training if they are paid the minimum wage, and by Neumark and Wascher (2001), who exploit variation of minimum wages across US states.
Departing from the standard theoretical view that labor markets are competitive, Acemoglu and Pischke (2003) show that the effect of minimum wages on training depends on the labor market structure (Additional file 5). In the presence of labor market frictions, firms receive productivity-dependent rents by paying wages below productivity, i.e., rent = productivity − wage. A binding minimum wage redistributes some fraction of these rents from employers to employees. Even though training is costly, this creates an incentive for firms to increase productivity through training in order to restore rents to a higher level.2 In the empirical part of their article, Acemoglu and Pischke (2003) use the US National Longitudinal Survey of Youth for the years 1987 to 1992, measure competitiveness by industry wage differentials, and find some weak evidence that training is positively related to minimum wages among workers in less competitive sectors.
In line with these theoretical arguments that minimum wages could in some circumstances even foster training, Arulampalam et al. (2004) find an increase in workers' training probability following the introduction of the national minimum wage in Britain. Their estimates show an increase in the training probability ranging between 8 and 11 percentage points. In another empirical analysis of the minimum wage in Britain, Riley and Bondibene (2017) show that firms respond to increased minimum wages by the use of productivity-enhancing HR instruments such as organizational changes and training.
Lechthaler and Snower (2008) contribute by accounting for another theoretical channel through which minimum wages may limit the internalization of gains from training. As minimum wages are theoretically associated with an employment reduction, firms cannot fully appropriate the gains in the form of higher productivity, leading to a reduced incentive for training provision. This is especially the case for low-qualified workers, whose employment is most endangered by minimum wages. Correspondingly, the calibration exercise by Lechthaler and Snower (2008) demonstrates that an increase in the minimum wage by 10% reduces training of low-skilled employees by 11.3% but increases training of medium-skilled employees by 4.1% and high-skilled employees by 1%.
Another strand of literature considers a labor demand-induced impact of minimum wages on skill formation. Cahuc and Michel (1996) suppose that, due to the implementation of a minimum wage, the wages of low-skilled employees tend to rise relative to those of high-skilled labor. Thus, the relative demand for low-skilled employees decreases while the relative demand for high-skilled employees increases. Consequently, this creates a labor demand-induced incentive for human capital accumulation of the low-skilled in order to meet the increased demand for high-skilled employment. However, such human capital accumulation could be realized outside the workplace on the employee's own initiative. Hence, training at the workplace level may not capture the suggested theoretical channel of additional human capital accumulation. Moreover, the authors mention that training subsidies may serve the same objective while avoiding negative externalities such as unemployment, and such subsidies may cover some of both the employees' and employers' incentives for additional training. This mechanism may be relevant in our case, as the German Federal Government announced an additional budget for training subsidies for unemployed persons and for employees in danger of becoming unemployed shortly after the minimum wage was introduced.
A related argument is that minimum wages may lead to increasing demand-induced skill requirements. This increased skill demand is supported in a recent description by Gürtzgen et al. (2016) who illustrate that skill requirements for vacant minimum wage positions increased in 2015 after the German minimum wage was introduced. Hence, employers expect some additional skill accumulation in exchange for paying the minimum wage.
However, there are also other arguments predicting a decrease in training provision. First, when firms have a certain budget for personnel expenses, the increased wage costs have to be compensated by a reduction of some other fringe benefits (Belman and Wolfson 2014, p. 280). While such a cost reduction could well include several sorts of benefits, a reduction of training expenses could be one explicit channel to compensate for the increased personnel costs. Second, a more cautious hiring policy among minimum wage firms may cause training to decrease. The training literature suggests that newly hired employees typically require training to enable familiarization with job-specific tasks (Beckmann and Bellmann 2002), and the recent minimum wage literature in Germany suggests a (modest) employment effect mostly due to a reduction in hires (Bossler and Gerner 2016). Third, firms may want to encourage employee-initiated quits to reduce employment. This increased quit rate could be achieved by a reduction of training provision, which has been shown to be associated with an increasing quit rate (Hübler and König 1999).
In recent empirical work, a reduction in employee turnover induced by minimum wages has been well documented (e.g., Dube et al. 2016) and is, for example, explained by efficiency-wage theory. Some indications of a turnover-reducing effect in Germany are presented in Bossler and Gerner (2016). A turnover reduction gives employers an incentive to invest in training of workers, as future returns are more likely to be internalized. Moreover, if the minimum wage reduces employee turnover, this also reduces the need to hire new personnel, which is also associated with higher training investments, as argued above. Additionally, and as suggested by Lang and Kahn (1998), new hires may be of higher average quality (i.e., productivity), which in turn may reduce the need for initial training of newly hired workers. However, some empirical investigations—independent of minimum wages—point at the opposite mechanism that qualified workers get more training than others (Dostie 2015; Hübler and König 1999). If this is the case, a minimum wage-induced change to the workforce composition could lead to an initially uncertain effect on training. We address this issue when we control for the workforce composition and new hires, both by levels of qualification, in our analysis (Section 4.2).
Finally, some literature discusses direct and indirect effects of training on wages. For example, Goux and Maurin (2000) use French data and show that the primary effect of training is a reduction of turnover but not of wages. Zwick (2006) finds that training increases the productivity but there was no significant rise of wages. This result is also confirmed by Görlitz (2011), who investigates the short-term impact of on-the-job training on wages using German linked employer-employee data. Hence, the empirical literature suggests that training yields a productivity-dependent rent that is not offset by increased wages.
All these theoretical considerations allow us to derive expectations in the form of empirical hypotheses. (1) The human capital theory predicts minimum wages to reduce training. (2) In some circumstances, this pessimistic prediction can be relaxed when (a) frictions allow for productivity-dependent rents in less competitive labor markets (Acemoglu and Pischke 2003) or (b) low-skilled employees have a labor demand-induced incentive to invest in training (Cahuc and Michel 1996). (3) The results by Lechthaler and Snower (2008) predict a decrease of training for low-skilled workers but a modest increase of training for medium- and high-skilled employees.
The data set of our empirical analysis is the IAB Establishment Panel,3 a large annual establishment-level survey on personnel developments and personnel policies such as the provision of training. The survey comprises about 15,000 observations each year, and the gross population consists of all registered establishments located in Germany that recorded at least one worker covered by the social security system. The sample is representative of industries, German states ("Bundesländer"), and differing establishment size categories.4 The personal interviews are conducted by TNS Infratest Social Research in face-to-face on-site meetings with a personnel manager of the respective workplace. This procedure ensures high data quality and a yearly continuation response rate of about 80%.
The survey follows establishments over time, and a unique establishment identifier allows us to construct a panel of workplaces over time covering the period from 2011 to 2015. We start our panel analysis in 2011 after the financial crisis, which marks a starting point of a period of fairly stable economic development. As the new German minimum wage was introduced on 1 January 2015, the panel includes four waves ahead of its introduction followed by one treatment year. The post-treatment information was collected in the 3rd quarter of 2015.
The IAB Establishment Panel surveys a wide range of measures concerning employers' training provision. Most importantly and very generally, employers are asked to give a yes/no response about the incidence of training at the respective plant. Following this binary distinction, the survey asks for the number of employees who participated in training activities within the last 6 months.5 This number of trained employees allows us to construct a measure of training intensity, defined as trained employees as a fraction of total employment. A second set of questions asks for the types of training that are used at the respective establishment. This allows us to distinguish between training that is provided through external and internal courses as well as on-the-job training. Similarly, the survey allows a distinction between training that is solely employer-financed and training that includes a financial contribution by employees towards the training costs. Finally, a third question directly asks for the number of trained individuals by three levels of qualification (low, medium, and high). From this information, we can construct variables that measure the share of trained employees at each skill level.
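A minimal sketch of how the two outcome variables could be derived from these survey responses is given below; the data frame and column names are hypothetical placeholders for the IAB Establishment Panel variables, not the actual variable names.

```python
import pandas as pd

# Hypothetical establishment-level records (column names are illustrative).
plants = pd.DataFrame({
    "plant_id":    [1, 2, 3],
    "n_employees": [50, 8, 200],
    "n_trained":   [20, 0, 90],   # employees trained within the last 6 months
})

# Training incidence: at least one trained employee (yes/no).
plants["training_incidence"] = (plants["n_trained"] > 0).astype(int)
# Training intensity: trained employees as a fraction of total employment.
plants["training_intensity"] = plants["n_trained"] / plants["n_employees"]
print(plants)
```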
Empirical methodology
Treatment assignment
For the pre-treatment panel wave of 2014, we have designed a questionnaire module that allows for a treatment assignment that can be used for a minimum wage evaluation using difference-in-differences estimation. The data include a measure on whether the respective workplace was affected by the minimum wage by asking whether at least one worker received a remuneration below the initial hourly minimum wage of € 8.50. A second question captures the number of employees receiving an hourly wage below € 8.50 at that point in time.
From these variables of 2014, we construct a first binary treatment group identifier that indicates establishments with at least one employee with a remuneration below € 8.50, which delineates treated from control plants. A second variable measures the treatment intensity as the fraction of affected employees. This creates a bite measure that puts a stronger weight on workplaces with larger fractions of affected employees. Both treatment variables are calculated from 2014 data but can be traced back and forth across panel years for the respective establishments. This sample construction results in an analysis sample of plants that existed in 2014.
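The following sketch illustrates how these two treatment variables could be built from the 2014 wave and attached to all panel years; all names (e.g., `n_below_850`) are hypothetical stand-ins for the survey variables rather than the actual data set.

```python
import pandas as pd

# Hypothetical 2014 responses: number of workers paid below EUR 8.50 per hour.
wave2014 = pd.DataFrame({
    "plant_id":    [1, 2, 3],
    "n_employees": [50, 8, 200],
    "n_below_850": [5, 0, 20],
})

# Binary treatment: at least one employee paid below the minimum wage.
wave2014["treated_binary"] = (wave2014["n_below_850"] >= 1).astype(int)
# Treatment intensity ("bite"): fraction of affected employees.
wave2014["treatment_intensity"] = wave2014["n_below_850"] / wave2014["n_employees"]

# The time-constant treatment variables would then be merged onto every panel
# year; an inner merge keeps only plants that existed in 2014, matching the
# sample restriction described above.
# panel = panel.merge(wave2014[["plant_id", "treated_binary",
#                               "treatment_intensity"]], on="plant_id")
```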
Empirical approach
We structure our empirical analysis as follows. Before estimating difference-in-differences specifications, we present descriptive statistics that characterize our data sample as well as differences between the treatment group and the control group from ahead of the minimum wage introduction in 2014. In the main part of our analysis, we present treatment effects not only from difference-in-differences estimations but also from specifications that include time-varying control variables.
In the baseline difference-in-differences estimation, the measure for training (training incidence, training intensity) is regressed on the interaction of the treatment variable and a post minimum wage indicator:
$$ \text{training}_{it} = (\text{treatment group}_{i} \cdot D2015_{t})\,\delta + \theta_{i} + \tau_{t} + \epsilon_{it}, \qquad (1) $$
where treatment group is either the binary treatment variable or the intensity treatment. The coefficient $\delta$ on the interaction of treatment group and treatment time captures the treatment effect of the minimum wage introduction on the treated establishments. In the baseline specification, $\theta_{i}$ is a time-constant firm-specific fixed effect that controls for all constant differences in training between establishments (see, e.g., Hsiao 2003). Hence, it also controls for constant differences between the treatment and the control group, which is required in difference-in-differences specifications (Lechner 2010). Finally, $\tau_{t}$ captures time fixed effects that are common to all establishments, which we operationalize by dummy variables for each year in the panel data, and $\epsilon_{it}$ is an idiosyncratic error term.6
When we estimate effects conditional on covariates, we simply add a vector of time-varying control variables $x_{it}$ to the baseline specification:
$$ \text{training}_{it} = (\text{treatment group}_{i} \cdot D2015_{t})\,\delta + x_{it}'\beta + \theta_{i} + \tau_{t} + \epsilon_{it}. \qquad (2) $$
Although the presented estimates are retrieved from fixed effects specifications, they can be replicated using difference-in-differences specifications with a time-constant control variable for the treatment group using OLS or random effects estimation. However, the Breusch-Pagan test points at the importance of time-invariant firm effects in our analysis, and the Hausman test shows that such firm effects are correlated with our time-varying variables of interest.7
Finally, and despite the use of linear probability models, we can also replicate our results using non-linear panel estimators that control for time-constant heterogeneity. Such approaches include the Mundlak estimator (Mundlak 1978), in which the firm effects $\theta_{i}$ are modeled by time averages ($\theta_{i} = \bar{x}_{i}'\pi + w_{i}$), and the van Praag estimator (van Praag 2015), which additionally replaces $x_{it}$ by ($x_{it} - \bar{x}_{i}$).8
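As a concrete illustration, the sketch below estimates the two-way fixed effects specification in equation (1) on simulated data. This is not the authors' code; the variable names, the simulated effect size, and the use of plant dummies to absorb the fixed effects are assumptions made purely for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated establishment panel standing in for the IAB Establishment Panel.
rng = np.random.default_rng(0)
years = [2011, 2012, 2013, 2014, 2015]
df = pd.DataFrame([(i, t) for i in range(200) for t in years],
                  columns=["plant_id", "year"])
df["treated"] = (df["plant_id"] % 6 == 0).astype(int)   # time-constant treatment group
df["treat_post"] = df["treated"] * (df["year"] == 2015).astype(int)
df["training_share"] = (0.30 - 0.02 * df["treat_post"]
                        + rng.normal(0, 0.05, len(df)))

# Plant dummies absorb the firm fixed effects (and hence the treatment-group
# main effect); year dummies capture common time shocks; standard errors are
# clustered at the plant level.
res = smf.ols("training_share ~ treat_post + C(year) + C(plant_id)",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["plant_id"]})
print(res.params["treat_post"])   # the difference-in-differences estimate delta
```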
We round off our empirical analysis by estimating several effect heterogeneities using the baseline specification. In a first step, we estimate the difference-in-differences effect for competitive and non-competitive sectors. In a second step, we re-estimate the baseline specification while looking at alternative definitions of the endogenous training variable, yielding effects on (a) three types of training, (b) training with and without financial participation of employees at the training costs, and (c) training by skill levels of the employees.
Empirical analysis and results
The analysis sample and the variables of interest are described in Table 1. In total, we consider 58,209 establishment observations over the period from 2011 to 2015. Most of the training outcomes that we analyze contain only a few observations with missing information. Two exceptions are training by skill levels, which was not included in the 2012 survey, and the distinction by financial participation in training costs, which was only included in the surveys of 2011, 2013, and 2015. With respect to the treatment assignment, the descriptives show that 15% of the establishment observations are treated, implying that 85% of the observations are in the control group of unaffected plants.
Table 1 Descriptive statistics of the total sample
We measure the training incidence as a binary indicator and the training intensity as the share of employees that participated in training in the first half of the respective year. As we see from the sample averages, 70% of the workplaces in Germany provided some sort of training to at least one of their employees. Across all establishments, 32% of all employees participated in training. While the intensity of training among the medium-qualified employees was especially high, this is mostly due to the fact that this group constitutes the largest group of employees in Germany. Training can be realized with or without financial participation of employees. While we exclude establishments without a clear assertion in this respect, the sample averages clearly show that training is financed by employers in most cases. A final distinction is made by types of training. We distinguish between external courses, internal courses, and on-the-job training.9 The latter comprises on-the-job training in terms of job rotation, quality circles, and self-learning programs.
In conditional estimations of the difference-in-differences specification, we consider a large set of control variables. To account for potential changes in the workforce composition, we control for the shares of employees by level of qualification. Moreover, we control for the hiring of new workers by skill level, as newly hired workers may receive some initial training. Looking at other (more structural) firm-level characteristics, the descriptives show that 43% of the plants are covered by collective bargaining, and 28% report worker representation through a works council. Concerning the workforce composition, 27% of the employees are part-time workers, which is relevant as these are less likely to participate in training. We also account for the self-reported technical level of capital and competitive pressure.
Table 2 displays descriptive differences of the variables by treatment status. While we did not have concrete expectations concerning descriptive differences in training, almost all variables that measure training in some qualitative way indicate a lower average training provision in treated compared to untreated establishments. An exception is the training of low qualified employees, which is initially higher at treated establishments. A t test shows that most bivariate mean differences are statistically significant at a 1% significance level. These descriptive differences suggest that employees (except the low qualified employees) at affected plants were less likely to have participated in training measures ahead of the minimum wage introduction.
Table 2 Testing differences between treatment and control group in 2014
Also with respect to other observable variables, Table 2 reveals some meaningful differences between treatment and control groups in 2014. The treatment group is more likely to be covered by a collective bargaining agreement and less likely to possess a works council. Moreover, we observe some other typical differences: treated plants show a larger share of part-time employment, have a relatively outdated technical capital infrastructure, and face higher competitive pressure. As all these observable variables may also be correlated with the provision of training, we use them as controls for a robustness check that presents difference-in-differences estimates conditional on covariates. Finally, Table 2 does not show significant differences in the hiring rate of skilled workers and the firm size measured by the number of employees.
Difference-in-differences estimation
Difference-in-differences estimates in Table 3 allow us to interpret the effect of the minimum wage introduction on training. The estimated coefficients are treatment effects as depicted by the δ-coefficient in equation (1), which is the effect of the treatment group–treatment time interaction. While Panel A displays effects on the training incidence, i.e., whether the firm provides training at all, Panel B presents effects on the training intensity, i.e., the fraction of trained employees in the respective establishment.
Table 3 Difference-in-differences estimates on training
The effects of the minimum wage introduction on training incidence are small and insignificantly different from zero for both the binary treatment definition and the intensity treatment. The results imply that there is no effect on the very general decision whether or not to provide training to at least one employee. The validity of this conclusion is supported by the two placebo regressions in columns (2) and (4), which omit data from 2015 and artificially assign treatment to observations from 2014.
Looking at the effect of the minimum wage on training intensity, Panel B reveals a negative impact of −0.018 from the binary treatment variable, implying that the minimum wage reduced training by 1.8 percentage points at treated workplaces. This effect is supported by the negative and significant impact in column (3). The effect size implies that the training intensity decreases by 0.4 percentage points for a 10 percentage point increase in the fraction of affected employees. The placebo tests in columns (2) and (4) are insignificantly different from zero, supporting these negative effects. However, the negative sign of such placebo estimations points at the possibility of a trend towards lower training provision that may be independent of the introduction of the minimum wage. As parallel trends between the treatment group and the control group in the absence of the treatment are a crucial assumption for difference-in-differences estimates, we devote the next subsection to the inspection of trend differences.
As a first robustness check of the baseline effects, we estimate the minimum wage effect on the training intensity from conditional difference-in-differences specifications. This is particularly relevant as the workforce composition may change due to the minimum wage if firms hire more qualified workers (Lang and Kahn 1998), who may or may not require a different intensity of training. Table 4 controls for such covariates, where columns (1) and (5) control for the workforce composition by qualification levels only. While the results corroborate the conjecture that highly qualified employees receive more training, the effect of the minimum wage remains unchanged. In columns (2) and (6), we control for the general hiring rate as well as the hiring rate of qualified employees. The results show some indication that the hiring rate exerts a negative impact on training while the hiring of qualified employees has a positive impact; again, the effect of the minimum wage on training remains unchanged. Very similar results are obtained from columns (3) and (7), where the estimations control for the workforce composition and the hiring rates. These results suggest that the effect of the minimum wage on training is not simply driven by the mechanism that the minimum wage changes the workforce through hiring different kinds of workers.
Table 4 Conditional difference-in-differences estimates on the training intensity
We additionally control for some more structural firm-level information in columns (4) and (8). While this conditioning has the advantage of controlling for potentially confounding covariates, it carries the risk of controlling for potentially endogenous factors. Additional control variables are collective bargaining coverage, works council representation, the fraction of part-timers, the technical level of capital, and an indicator for high competitive pressure. Like the baseline estimates, both specifications show a negative and significant treatment effect of the minimum wage introduction on the share of trained employees, whose size remains virtually unchanged.
As treatment effects may vary by firm size, we test effect heterogeneities by firm size categories. We construct interaction variables between the treatment variable and firm size dummies, where we consider three firm size classes measured by the number of employees: FS1≤5, 6≤FS2<250 and FS3≥250. Using control variables as in columns (4) and (8), we find
$$\begin{aligned} &\beta_{\mathrm{DiD,\ binary\ treatment}} \cdot \mathrm{FS1} = -0.018\ (0.015)\\ &\beta_{\mathrm{DiD,\ binary\ treatment}} \cdot \mathrm{FS2} = -0.015\ (0.009) \\ &\beta_{\mathrm{DiD,\ binary\ treatment}} \cdot \mathrm{FS3} = -0.042^{**}\ (0.020) \\ &\beta_{\mathrm{DiD,\ intensive\ treatment}} \cdot \mathrm{FS1} = -0.030\ (0.026)\\ &\beta_{\mathrm{DiD,\ intensive\ treatment}} \cdot \mathrm{FS2} = -0.050^{***}\ (0.019) \\ &\beta_{\mathrm{DiD,\ intensive\ treatment}} \cdot \mathrm{FS3} = \phantom{-}0.002\ (0.055), \end{aligned} $$
where cluster-robust standard errors are in parentheses. Under the binary treatment definition, the interaction effect is significant only for large establishments. The picture changes when we consider the intensive treatment definition: there, middle-sized establishments show the strongest negative treatment effect, while the effects for smaller and larger firms are insignificant.
Inspecting the parallel trends assumption
Even though the placebo effects of the baseline specification in Table 3 fall short of statistical significance, they may point at a divergence of trends between the treatment and the control group. As parallel trends are a crucial assumption for difference-in-differences estimation, we devote this subsection to the inspection of group-specific trends, and we apply estimation strategies that have been proposed in the literature to deal with the divergence of trends in difference-in-differences estimation.
Figure 1 illustrates the average use of training for treated and control establishments in our analysis sample. Despite some year-specific variation in the data that is independent of the two groups, the unconditional illustration in Panel a calls the parallel trends assumption into question by revealing a relative decline in the training intensity among the group of treated establishments. While this development can be seen as an interesting finding in itself, it may be independent of the introduction of the minimum wage.
Fig. 1 Inspection of parallel trends between treatment and control group. a The trends of training by treatment status before propensity score matching. b The trends of training by treatment status for the treatment group and the matched control group, which is defined by radius matching. Source: IAB Establishment Panel, 2011–2015, analysis sample
To shed light on the presence of a treatment effect irrespective of a divergence in the group-specific training trends, we add a term that captures treatment group and time-specific heterogeneity to our empirical model of interest:
$$\begin{array}{@{}rcl@{}} \text{training}_{it} = (\mathrm{treatment~group}_{i} \cdot D2015_{t}) \delta + \theta_{i} + \tau_{t} + \Psi_{\mathrm{treatment\,group,t}} + \epsilon_{it} \end{array} $$
The treatment-group- and time-specific heterogeneity is denoted by $\Psi_{\mathrm{treatment\,group},t}$. As $\Psi_{\mathrm{treatment\,group},t}$ may correlate with the treatment effect interaction of interest, we attempt to control for this additional heterogeneity using two different approaches.
As a first attempt, we follow previous literature by Addison et al. (2015), Allegretto et al. (2011), and Neumark et al. (2014), who suggest controlling for such treatment-group- and time-specific heterogeneity directly through parametric trends. Following this argument, we specify treatment-group-specific trends as $\Psi_{\mathrm{treatment\,group},t} = T_{t} \cdot \mathrm{treatment\,group}_{i} \cdot \psi$, where $T_{t}$ is a count variable for the panel waves and $\mathrm{treatment\,group}_{i}$ indicates treated plants. The trend term is added to the baseline specification as formulated in equation (3), where $\psi$ estimates the treated establishments' trend divergence from the control group, exploiting time variation from before the minimum wage introduction. As this imposes the assumption that the trend divergence is linear, we additionally check alternative, more flexible specifications of such trends using quadratic and cubic polynomials, i.e.,
$$\Psi_{\mathrm{treatment\,group},t}=T_{t}\cdot\mathrm{treatment\,group}_{i}\cdot\psi_{1}+T_{t}^{2}\cdot\mathrm{treatment\,group}_{i}\cdot\psi_{2}$$
$$\begin{aligned} \Psi_{\mathrm{treatment\,group},t} = {} & T_{t}\cdot\mathrm{treatment\,group}_{i}\cdot\psi_{1}+T_{t}^{2}\cdot\mathrm{treatment\,group}_{i}\cdot\psi_{2} \\ & + T_{t}^{3}\cdot\mathrm{treatment\,group}_{i}\cdot\psi_{3} \end{aligned}$$
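To illustrate how such treatment-group-specific polynomial trends can be added in practice, here is a hedged sketch that extends the placeholder setup from the earlier snippet; the `treated:T`, `treated:I(T**2)`, and `treated:I(T**3)` interaction terms play the role of ψ1, ψ2, and ψ3, and none of the variable names are taken from the actual data.

```python
# Sketch: adding treatment-group-specific linear, quadratic, and cubic trends
# to the baseline DiD regression (placeholder column names, not the authors' code).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("iab_panel_sketch.csv")
df["post"] = (df["year"] >= 2015).astype(int)
df["treat_post"] = df["treated"] * df["post"]
df["T"] = df["year"] - df["year"].min() + 1     # panel-wave counter T_t

formula = ("training ~ treat_post + C(plant_id) + C(year)"
           " + treated:T + treated:I(T**2) + treated:I(T**3)")  # psi_1, psi_2, psi_3
fit = smf.ols(formula, data=df).fit(cov_type="cluster",
                                    cov_kwds={"groups": df["plant_id"]})
print(fit.params["treat_post"])
```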
The treatment effects of the regressions that control for treatment-group-specific trends are displayed in Table 5. The binary and the intensive treatment effects shrink to about one percentage point. Additionally, the standard errors increase in such specifications, as they leave little time variation for the identification of the true treatment effect. Hence, all point estimates turn insignificant. However, we can also emphasize that the point estimates remain negative and robust in all such trend specifications, pointing to a negative treatment effect of the minimum wage introduction of about one percentage point.
Table 5 Treatment effects controlling for treatment group-specific trends
In a second attempt, we control for the group-specific time heterogeneity $\Psi_{\mathrm{treatment\,group},t}$ by the use of a matching procedure that compares treated establishments with control establishments conditional on pre-treatment developments in training, i.e., conditional on pre-treatment developments in the outcome variable. In the matching procedure, we want to compare establishments with similar levels of training in 2014 as well as similar growth rates in training in 2011–2012, 2012–2013, and 2013–2014.
For this purpose, we use propensity score matching (PSM), which allows us to equalize pre-treatment trends as illustrated in Panel b of Fig. 1 (see footnote 10). PSM requires estimation of the propensity score p(x), the probability of being treated for each establishment in the data, which we obtain by regressing the treatment group indicator on the pre-treatment training level and the pre-treatment growth rates of training as explanatory variables. Based on this first-step estimation, the treatment effect on the treated establishments can be formalized as follows:
$$\begin{array}{@{}rcl@{}} \text{ATT}^{\text{PSM}} = \frac{1}{N_{\text{treated}}} \sum_{i \in N_{treated}} [\Delta^{2015-14} \text{training}_{i} - \widehat{\Delta^{2015-14} \text{training}_{M(i)|p(x)}}] \end{array} $$
where the outcome of interest is the growth in the training intensity (share of trained employees) from 2014 to 2015. For each treated establishment i, this outcome variable is compared with a group of establishments from the control group, i.e., close matches defined as M(i), where the match-defining proximity is evaluated from the propensity score p(x). The difference between the average of the treated establishments' training growth and matched control group's average training growth defines the treatment effect on the treated establishments.
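The two steps of this procedure (a first-stage propensity score and a radius-matching comparison of the 2014–2015 training growth) can be sketched on synthetic data as follows. This is only a generic illustration of equation (4); the logit specification, the caliper value, and all variable names are assumptions rather than the authors' implementation.

```python
# Sketch of a radius-matching ATT in the spirit of equation (4), on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)                       # treatment indicator
X = rng.normal(size=(n, 4))                           # pre-treatment level + 3 growth rates
outcome = rng.normal(size=n) - 0.02 * treated         # 2014-2015 growth in training share

# Step 1: propensity score p(x) from a logit of treatment on pre-treatment training history.
logit = sm.Logit(treated, sm.add_constant(X)).fit(disp=0)
pscore = logit.predict(sm.add_constant(X))

# Step 2: radius matching; ATT = mean difference between each treated unit's outcome
# and the average outcome of all controls whose propensity score lies within the caliper.
caliper = 0.03
diffs = []
for i in np.where(treated == 1)[0]:
    mask = (treated == 0) & (np.abs(pscore - pscore[i]) <= caliper)
    if mask.any():                                    # skip treated units without support
        diffs.append(outcome[i] - outcome[mask].mean())
print("ATT (radius matching):", np.mean(diffs))
```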
The estimates in Table 6 present the average treatment effects on the treated establishments based on the propensity score matching procedure in equation (4), where the group of similar matches is defined by radius matching with a caliper of 0.03 percentage points. A visual inspection of the pre-treatment trends of the treatment and the control group in Panel b of Fig. 1 shows that this procedure equalizes trends, while the point estimate of interest is not sensitive to this choice (see footnotes 10 and 11). The effect in column (1) reveals a reduction in training of about 2.9 percentage points. A crucial assumption for propensity score matching is sufficient common support, i.e., a sufficiently large number of control establishments that are similar to their treated counterparts. As a robustness check, we follow the suggestion by Imbens (2015) and use a trimming procedure that reduces the analysis sample based on the support in the propensity score. Specifically, we exclude the 5% of treated establishments whose propensity scores have the fewest similar controls. The respective estimate is presented in column (2). As the point estimate remains unchanged, the trimming of the treatment group does not affect our results.
Table 6 Treatment effects of a radius matching on pre-treatment trends
As the application of propensity score matching requires a binary treatment assignment, we cannot directly use the intensity treatment in our PSM. However, we follow a suggestion by Imbens and Wooldridge (2009) and use two separate binary treatments that differ in their treatment intensity (more vs. less than 50% of the employees affected by the minimum wage introduction). We use both treatments in separate estimations and, in each case, exclude the observations captured by the alternative definition. The results in columns (3) and (4) of Table 6 reveal an effect on training that is slightly larger for the group of more severely affected establishments.
In total, the treatment effects from PSM reveal a robust negative effect on training, while providing an effective strategy to equalize pre-treatment trends. Combined with the baseline panel estimates in Tables 3 and 4 and the parametrization of group-specific trends in Table 5, the effect of the minimum wage introduction on training is in the range between −1 and −3 percentage points.
Effect heterogeneities
In what follows, we present heterogeneities in the effect of the minimum wage introduction on training. First, we estimate separate effects for competitive and non-competitive sectors. Thereafter, we use different outcome variables to estimate effects on three types of training, on training with and without financial participation by the employees, and on training by level of qualification.
Sectoral differences
We first present separate treatment effects for groups of sectors by competitiveness (see footnote 12). In contrast to standard human capital theory, the contribution by Acemoglu and Pischke (2003) shows that the effects of minimum wages on training can be positive when labor markets are less competitive, i.e., when rents are prevalent in the market. As these rents are productivity-dependent, employers have an incentive to restore such rents by increasing productivity through training. Minimum wages would otherwise redistribute the rents to the employees' side. Without a minimum wage, it is less likely that establishments attempt to enlarge rents via productivity-enhancing training without a corresponding participation of the employees; this would be considered unfair and would probably have negative consequences for the firm through the employees' reactions.
Table 7 shows that the adverse effect of the minimum wage introduction on training is much stronger in competitive than in less competitive sectors. Looking at the incidence of training, we observe significant effects for competitive sectors, while the effect for non-competitive sectors is insignificant. Looking at the training intensity, we find insignificant effects in both groups of sectors, but the negative coefficient is larger in absolute terms in competitive sectors. Our results do not support the prediction that minimum wages positively affect training in less competitive sectors (here, services). However, we cannot rule out that weak competition dampens the negative minimum wage effect on training.
Table 7 Treatment effects on training in competitive and non-competitive sectors
Types of training
The data provide us with information on the types of training at the respective establishments, allowing a distinction between external courses, internal courses, and on-the-job training. We use this information as separate outcome variables in separate regressions. Table 8 demonstrates that the introduction of the minimum wage reduces training of all three types; no systematic differences across types are observed.
Table 8 Separate treatment effects on types of training
Financial participation of the employees
We now differentiate between training that is purely employer-financed and training with financial participation of the employees. Workers who themselves contribute to training through financial participation view training especially as a long-run means of climbing the career ladder; they are investing in their future jobs and income streams. However, from the workers' perspective, the minimum wage induces an incentive to reduce training activities, as marginal returns could fall when the minimum wage compresses the wage distribution or when workers expect a permanent wage compression such that skill premiums become less pronounced. On the other hand, employees may face a demand-induced incentive for skill accumulation (Cahuc and Michel 1996). Hence, the effect on employee-financed training is theoretically ambiguous. By contrast, employers have a clear cost-reducing incentive and are therefore predicted to decrease firm-financed training activities.
Table 9 demonstrates negative effects on the training incidence and the training intensity when training is purely firm-financed. However, we do not observe any significant effect on training that is completely or at least partially financed by employees. This result is in line with the theoretical expectation that employers try to reduce the costs of training. Employees may still have an incentive to invest in human capital accumulation through training, and therefore they participate in training costs if they are not financially constrained (see footnote 13).
Table 9 Treatment effects on training by financial participation at training costs
Effects by workers' qualification
Finally, the IAB Establishment Panel contains information on whether unskilled workers, workers with a vocational certificate, or employees with a university degree participated in training. This distinction allows us to add to the literature, where effects on training are mostly analyzed irrespective of the workers' initial education. Its relevance draws on the possibility that firms may change the provision of training not only for low-qualified minimum wage workers but may also adjust training measures for skilled personnel in order to compensate for the costs induced by the minimum wage.
However, we cannot distinguish between treated and untreated workers by skill level. Hence, we have to be cautious when relating skill groups to the minimum wage. Nevertheless, we can estimate the reduced-form effect of the minimum wage introduction on training by skill group. If the respective treatment effect on unskilled workers is negative, this could imply that
Training is reduced for minimum wage affected and unaffected unskilled workers
Training of affected workers increases and that of unaffected workers decreases, where the latter effect overcompensates the former
Training of non-affected workers increases and that of affected workers decreases, where the latter overcompensates the former.
The effects in Table 10 show no statistically significant effects on the training incidence for low- and medium-qualified workers (columns 1 and 2). This could be due to the fact that qualification decisions are embedded in long-term considerations. However, it is interesting to note that the training incidence and intensity of high-skilled workers decrease (columns (3) and (6)). The latter effect is also observed for medium-qualified employees (column (5)), while the effect is insignificant for low-skilled workers.
Table 10 Treatment effects on training by skill levels of employees
What might explain these effect differences? One possibility is that the previous training of workers with a university degree or a vocational certificate was much more intensive, so that a temporary reduction for these groups is less important. From a theoretical perspective, this seems plausible because the minimum wage induces an incentive for human capital accumulation among the least skilled, as relative labor demand shifts towards relatively more skilled workers (see Section 2). This pressure to invest in training on the employees' side contrasts with the cost-saving argument on the employers' side, such that an insignificant effect seems plausible. Once more, we should emphasize that training decisions are based on long-term considerations, so that not all effects can be captured in the first post-reform year, especially because the substitution between skilled and unskilled workers is highly expensive. Nevertheless, we believe that long-term oriented establishments start early with adjustment measures, as this is advantageous compared with a wait-and-see approach.
Another contrasting argument may be that training activities are increased for low-skilled hires because of a rise in skill demand for vacancies paid at the level of the minimum wage (Gürtzgen et al. 2016). It seems plausible that the respective applicants have to finance the acquisition of these skills if they cannot supply a sufficiently high productivity. However, as the data do not include information on the financial burden of training costs by skill levels, we cannot rule out employer-financed training for such hires.
Conclusions
We analyze the effects of the introduction of the new statutory minimum wage on firm-level training in Germany. Human capital theory predicts that binding minimum wages prohibit the wage cuts that employers use to finance training. If training costs are not offset by an increase in productivity, firms have to cut training costs. However, this pessimistic prediction can be relaxed when labor market frictions allow for productivity-dependent rents (Acemoglu and Pischke 2003). Moreover, an increase in the relative demand for skilled workers may induce an incentive for human capital accumulation.
We apply a difference-in-differences estimation to data from the IAB Establishment Panel, a panel data set with comprehensive information on training and on the bite of the minimum wage. The results do not provide evidence for a decrease in the training incidence. However, and more importantly, we find fairly robust evidence for a decrease in the training intensity (i.e., the share of trained employees) in establishments affected by the minimum wage. The estimated effect of the binary treatment on the training intensity is roughly −1.8 percentage points across all specifications, including estimations with and without covariates. With the intensive treatment variable, the estimated coefficient ranges between −0.039 and −0.041. Moreover, we find robust effects on different types of training, indicating that the reduction in training is not compensated by a change in the quality of such training.
When we estimate separate effects for rather competitive and non-competitive sectors, we find that the negative effects on the training intensity are driven by competitive sectors, whereas the effect in non-competitive sectors remains inconclusive. This effect heterogeneity corroborates predictions by Acemoglu and Pischke (2003), who have shown that market frictions and productivity-dependent rents can, in fact, allow for a positive effect of minimum wages on training. We also demonstrate that the minimum wage mostly affected employer-financed training but not training that is at least partially financed by employees. We conclude that employers have a minimum-wage-induced incentive to cut training costs when they cannot devise improved training regimes that would provide net productivity increases. The employees' incentives for human capital accumulation seem to be unchanged.
We finally present separate effects by skill group. The results show that the effect is mostly driven by a reduction of training for medium and highly qualified employees. While this is at odds with predictions by Lechthaler and Snower (2008), it could well be that employers cannot risk a further decline in the productivity of low-skilled employees, whereas they are able to cut training costs for employees who are typically not affected by minimum wages.
As a caveat concerning our analysis, we should stress that it assesses short-run effects that were measured in the 3rd quarter of 2015, the year in which the minimum wage was introduced. As the effects of the minimum wage may – or may not – emerge in a longer-term adjustment period, we recommend further studies on this issue in the long-run. Such a long-run analysis would complement our results by capturing lagged training adjustments that should be equally important to policy makers. We also suggest supplementary analyses that look at the individual level independent of the workplace dimension. It is possible that the minimum wage affects human capital accumulation outside the workplace, which is neglected in our firm-level analysis. We further encourage an analysis that relates minimum wage effects on training to employment effects. It seems plausible that a reduction of costs (through a decline in employer-financed training) has helped to maintain the size of the workforce.
1 Such exemptions include young employees under 18 years of age, apprentices, internships with a maximum duration of 3 months, the long-term unemployed during the first 6 months of a subsequent employment, and volunteers. Until the end of 2017, already existing sectoral minimum wages were allowed to undercut the new statutory minimum wage.
2 A detailed graphical description of this concept is presented in the figure of Additional file 5.
3 Comprehensive descriptions of the IAB Establishment Panel are provided by Fischer et al. (2009) or Ellguth et al. (2014).
4 In a comparison of the survey sample with the population of all establishments in Germany, Bossler et al. (2017) do not detect any meaningful selectivity in the survey response.
5 In panel years before 2014, respondents were allowed to report the number of training measures instead of the number of trained employees. This alternative reporting option was used by approximately 15% of the establishments and causes an inconsistency both within these years and across time. We impute the number of trained employees for such cases using the procedure by Stegmaier (2012) and Hinz (2016). However, excluding such observations of the waves 2011, 2012, and 2013 from our analysis sample does not cause any significant changes to our results.
6 We report inference based on standard errors clustered at the establishment level.
7 The test statistic of the Breusch-Pagan Lagrange multiplier test is 26,958.89 (p value 0.0000). Hence, the null hypothesis of zero firm-level variation is clearly rejected. The chi-squared test statistic of the Hausman test is 54.13 (p-value 0.0000). This rejection implies that firm-specific effects are correlated with time-varying observables pointing at the importance of fixed effect estimation.
8 Results of these non-linear estimations are available on request.
9 Training at a specific workplace can comprise all three types of training.
10 We use a radius matching procedure as this equalizes trends, see Panel b of Fig. 1. While other matching procedures such as the kernel matching or nearest neighbor matching allow us to replicate the treatment effects, they fail to equalize trends from a visual inspection.
11 The results are robust to alternative sizes of the caliper. Setting the caliper to values ranging between 0.01 and 0.06 yields point estimates ranging between −0.02 and −0.03.
12 We use a distinction by sectors based on the results by Bachmann and Frings (2017), who estimate the extent of sector-specific monopsony power from elasticities of job-to-job transitions. Based on their results, all manufacturing sectors are competitive, whereas among the services, wholesale, retailing, hotels and restaurants, other services, non-industrial organizations, and the public services are characterized by a high extent of monopsony power.
13 The case of financial constraints is discussed by Acemoglu and Pischke (2003).
Acemoglu, D, Pischke JS (2003) Minimum wages and on-the-job training. Res Labor Econ 22: 159–202.
Addison, J, Blackburn ML, Cotti CD (2015) On the robustness of minimum wage effects: geographically-disparate trends and job growth equations. IZA J Labor Econ 4(1): 1–16.
Addison, J (2017) The effects of minimum wages on employment: the legacy of myth and measurement. Ind Labor Relations Rev 70(3): 814–818.
Allegretto, SA, Dube A, Reich M (2011) Do minimum wages really reduce teen employment? Accounting for heterogeneity and selectivity in state panel data. Ind Relat 50(2): 205–240.
Arulampalam, W, Booth AL, Bryan ML (2004) Training and the new minimum wage. Econ J 114(494): C87–C94.
Bachmann, R, Frings H (2017) Monopsonistic competition, low-wage labour markets, and minimum wages—an empirical analysis. Appl Econ in press.
Bárány, ZL (2016) The minimum wage and inequality: the effects of education and technology. J Labor Econ 34(1): 237–274.
Beckmann, M, Bellmann L (2002) Churning in deutschen Betrieben. Welche Rolle spielen technischer Fortschritt, organisatorische Änderungen und Personalstruktur? In: Bellmann L, Kölling A (eds) Betrieblicher Wandel und Fachkräftebedarf, 133–171. Institute for Employment Research, Nuremberg.
Belman, D, Wolfson PJ (2014) What does the minimum wage do? Upjohn Press, Kalamazoo.
Bossler, M (2017) Employment expectations and uncertainties ahead of the new German minimum wage. Scott J Pol Econ. in press.
Bossler, M, Gerner HD (2016) Employment effects of the new German minimum wage: evidence from establishment-level micro data. IAB-Discussion Paper No. 10/2016, Nuremberg.
Bossler, M, Geis G, Stegmaier J (2017) Comparing survey data with an official administrative population: assessing sample-selectivity in the IAB Establishment Panel. Qual Quant Int J Methodol. in press.
Brown, CH (1999) Minimum wages, employment, and the distribution of income. In: Ashenfelter OC, Card D (eds) Handbook of Labor Economics 3(B), 2101–2163. Amsterdam.
Cahuc, P, Michel P (1996) Minimum wage unemployment and growth. Eur Econ Rev 40(7): 1463–1482.
Dostie, B (2015) Who benefits from firm-sponsored training? IZA World of Labor 145: 1–15.
Dube, A, Lester TW, Reich M (2016) Minimum wage shocks, employment flows, and labor market frictions. J Labor Econ 34(3): 663–704.
Ellguth, P, Kohaut S, Möller I (2014) The IAB Establishment Panel—methodological essentials and data quality. J Labour Market Res 47(1-2): 27–41.
Fischer, G, Janik F, Müller D, Schmucker A (2009) The IAB Establishment Panel. Things users should know. Schmollers Jahrbuch J Contextual Econ 129(1): 133–148.
Görlitz, K (2011) Continuous training and wages: an empirical analysis using a comparison-group approach. Econ Educ Rev 30(4): 691–701.
Goux, D, Maurin E (2000) Returns to firm-provided training: evidence from French worker-firm matched data. Labour Econ 7: 1–19.
Gürtzgen, N, Kubis A, Rebien M, Weber E (2016) Neueinstellungen auf Mindestlohnniveau: Anforderungen und Besetzungsschwierigkeiten gestiegen. IAB-Kurzbericht No. 12/2016, Nuremberg.
Hashimoto, M (1982) Minimum wage effects on training on the job. Am Econ Rev 72(5): 1070–1087.
Hinz, T (2016) Personnel policy adjustments when apprentice positions are unfilled: evidence from German establishment data. Chair of Labour and Regional Economics Discussion Paper No. 99, Nuremberg.
Hsiao, C (2003) Analysis of panel data, second edition. Cambridge University Press, Cambridge.
Hübler, O, König A (1999) Betriebliche Weiterbildung, Mobilität und Beschäftigungsdynamik. Jahrbücher für Nationalökonomie und Statistik 219(1/2): 165–193.
Imbens, GW (2015) Matching methods in practice: three examples. J Human Resour 50(2): 373–419.
Imbens, GW, Wooldridge JM (2009) Recent developments in the econometrics of program evaluation. J Econ Lit 47(1): 5–86.
Lang, K, Kahn S (1998) The effect of minimum-wage laws on the distribution of employment: theory and evidence. J Public Econ 69(1): 67–82.
Lazear, E (1979) The narrowing of black-white wage differential is illusory. Am Econ Rev 69(4): 553–564.
Lechner, M (2010) The estimation of causal effects by difference-in-difference methods. Foundations Trends Econ 4(3): 165–224.
Lechthaler, W, Snower D (2008) Minimum wages and training. Labour Econ 15(6): 1223–1237.
Leighton, LS, Mincer J (1981) The effects of minimum wage on human capital formation. In: Rottenberg S (ed) The Economy of Legal Minimum Wages, 155–173. American Enterprise Institute, Washington D.C.
Mundlak, Y (1978) On the pooling of time series and cross section data. Econometrica 46(1): 69–85.
Neumark, D, Wascher W (2001) Minimum wages and training revisited. J Labor Econ 19(3): 563–595.
Neumark, D, Wascher W (2007) Minimum wages and employment. Foundations Trends Microeconomics 3(1+2): 1–182.
Neumark, D, Salas I, Wascher W (2014) Revisiting the minimum wage-employment debate: throwing out the baby with the bathwater? Ind Labor Relations Rev 67(Supplement): 608–648.
Riley, R, Bondibene CR (2017) Raising the standard: minimum wages and firm productivity. Labour Econ 44(1): 27–50.
Schiller, BR (1994) Moving up: the training and wage gains of minimum-wage entrants. Soc Sci Q 75(3): 622–636.
Schulten, T (2016) Anhaltende Entwicklungsdynamik in Europa, WSI-Mindestlohnbericht 2016. WSI-Mitteilungen 69(2): 129–135.
Stegmaier, J (2012) Effects of works councils on firm-provided further training in Germany. Br J Ind Relat 50(4): 667–689.
van Praag, BMS (2015) A new view on panel econometrics: is probit feasible after all? IZA Discussion Paper No. 9345, Bonn.
Zwick, T (2006) The impact of training intensity on establishment productivity. Ind Relat 45(1): 26–46.
We thank the editor, Pierre Cahuc, and two reviewers for very helpful comments, and we are grateful to Silke Anger and Robert A. Hart for fruitful discussions. We also thank the participants of the 2016 workshop on labor market policy in Halle and the German Economic Association's 2016 meeting of the standing field committee on education economics in Bamberg.
Responsible editor: Pierre Cahuc
The IZA Journal of Labor Economics is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles.
Institute for Employment Research (IAB), Regensburger Str. 104, Nuremberg, 90478, Germany
Lutz Bellmann, Mario Bossler & Hans-Dieter Gerner
Friedrich-Alexander-University of Erlangen-Nuremberg, Erlangen, Germany
IZA, Bonn, Germany
Olaf Hübler
The Labor and Socio-Economic Research Center (LASER) of the University of Erlangen-Nuremberg, Nuremberg, Germany
Mario Bossler
University of Applied Sciences, Koblenz, Germany
Hans-Dieter Gerner
Leibniz University Hannover, Hannover, Germany
Olaf Hübler
Correspondence to Lutz Bellmann.
Additional file 5: Minimum wages and training in a monopsonistic labor market as derived by Acemoglu and Pischke (2003). The file graphically describes the basic idea of the theory in Acemoglu and Pischke (2003). (PDF 44 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Bellmann, L., Bossler, M., Gerner, H. et al. Training and minimum wages: first evidence from the introduction of the minimum wage in Germany. IZA J Labor Econ 6, 8 (2017). https://doi.org/10.1186/s40172-017-0058-z
Tikhonov, Sergei Viktorovich
Statistics Math-Net.Ru
Total publications: 15
Scientific articles: 14
Presentations: 6
This page: 1443
Abstract pages: 4990
Full texts: 1466
References: 657
Doctor of physico-mathematical sciences (2014)
Speciality: 01.01.02 (Differential equations, dynamical systems, and optimal control)
Keywords: measure-preserving transformations, mixing actions, leash-topology
UDC: 517.987.5, 517.938.5, 517.938, 517.987
MSC: 28D15, 37A05, 37A15
Generic measure-preserving transformations, category-preserving maps, generic mixing actions, spectral multiplicity
Main publications:
S. V. Tikhonov, "Vlozheniya deistvii reshetki v potoki s mnogomernym vremenem", Matem. sb., 197:1 (2006), 97–132
S. V. Tikhonov, "Polnaya metrika na mnozhestve peremeshivayuschikh preobrazovanii", Matem. sb., 198:4 (2007), 135–158 ; Sb. Math., 198:4 (2007), 575–596
S. V. Tikhonov, "Peremeshivayuschie preobrazovaniya s odnorodnym spektrom", Matem. sb., 202:8 (2011), 139–160 ; Sb. Math., 202:8 (2011), 1231–1252
http://www.mathnet.ru/eng/person20304
List of publications on Google Scholar
List of publications on ZentralBlatt
Publications in Math-Net.Ru
1. S. V. Tikhonov, "Multiple Mixing with Respect to Noncoinciding Sets", Tr. Mat. Inst. Steklova, 308 (2020), 243–252 ; Proc. Steklov Inst. Math., 308 (2020), 229–237
2. S. V. Tikhonov, "Rigidity of Actions with Extreme Deviation from Multiple Mixing", Mat. Zametki, 103:6 (2018), 912–926 ; Math. Notes, 103:6 (2018), 977–989
3. S. V. Tikhonov, "On the Absence of Multiple Mixing and on the Centralizer of Measure-Preserving Actions", Mat. Zametki, 97:4 (2015), 636–640 ; Math. Notes, 97:4 (2015), 652–656
4. S. V. Tikhonov, "Approximation of Mixing Transformations", Mat. Zametki, 95:2 (2014), 282–299 ; Math. Notes, 95:2 (2014), 255–269
5. S. V. Tikhonov, "Genericity of a multiple mixing", Uspekhi Mat. Nauk, 67:4(406) (2012), 187–188 ; Russian Math. Surveys, 67:4 (2012), 779–780
6. S. V. Tikhonov, "Bernoulli shifts and local density property", Vestnik Moskov. Univ. Ser. 1. Mat. Mekh., 2012, 1, 31–37 ; Moscow University Mathematics Bulletin, 67:1 (2012), 29–35
7. S. V. Tikhonov, "A Note on Rochlin's Property in the Space of Mixing Transformations", Mat. Zametki, 90:6 (2011), 953–954 ; Math. Notes, 90:6 (2011), 925–926
8. S. V. Tikhonov, "Mixing transformations with homogeneous spectrum", Mat. Sb., 202:8 (2011), 139–160 ; Sb. Math., 202:8 (2011), 1231–1252
9. A. M. Stepin, S. V. Tikhonov, "Remarks on Isorigidity, Centralizers, and Spectral Equivalence in Ergodic Theory", Mat. Zametki, 81:2 (2007), 314–316 ; Math. Notes, 81:2 (2007), 278–280
10. S. V. Tikhonov, "Total metric on the set of mixing transformations", Uspekhi Mat. Nauk, 62:1(373) (2007), 209–210 ; Russian Math. Surveys, 62:1 (2007), 193–195
11. S. V. Tikhonov, "A complete metric in the set of mixing transformations", Mat. Sb., 198:4 (2007), 135–158 ; Sb. Math., 198:4 (2007), 575–596
12. V. V. Ryzhikov, S. V. Tikhonov, "Typical $\mathbb Z^n$-actions can be inserted only in injective $\mathbb R^n$-actions", Mat. Zametki, 79:6 (2006), 925–930 ; Math. Notes, 79:6 (2006), 864–868
13. S. V. Tikhonov, "Embedding lattice actions in flows with multidimensional time", Mat. Sb., 197:1 (2006), 97–132 ; Sb. Math., 197:1 (2006), 95–126
14. S. V. Tikhonov, "On relation of measure-theoretic and special properties of $\mathbb Z^d$-actions", Fundam. Prikl. Mat., 8:4 (2002), 1179–1192
15. S. V. Tikhonov, "Correction to the article "On relation of measure-theoretic and special properties of $\mathbb{Z}^{d}$-actions" (Fundamental and Applied Mathematics, 2002, V. 8, no. 4, 1179-1192)", Fundam. Prikl. Mat., 8:4 (2002), 1272
Presentations in Math-Net.Ru
1. Dynamical systems and transformation groups: entropy, mixing, spectra and generic properties
A. M. Stepin, S. V. Tikhonov
Steklov Mathematical Institute Seminar
2. Generic mixing actions
Sergey Tikhonov
International Conference "Anosov Systems and Modern Dynamics" dedicated to the 80th anniversary of Dmitry Anosov
3. Properties of mixing actions
S. V. Tikhonov
Dynamical systems and differential equations
4. Multiple mixing, Ledrappier systems, and the centralizer of a measure-preserving transformation (continuation of the talk)
Dynamical systems and statistical physics
5. Multiple mixing, Ledrappier systems, and the centralizer of a measure-preserving transformation
6. Typicalness, Limiting Behavior and Spectral Properties of Dynamic Systems
Dobrushin Mathematics Laboratory Seminar
Russian State University of Trade and Economics, Moscow
Plekhanov Russian State University of Economics, Moscow
Probing the evolution of the EAS muon content in the atmosphere with KASCADE-Grande (1801.05513)
KASCADE-Grande Collaboration: W.D. Apel, J.C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, D. Fuhrmann, A. Gherghel-Lascu, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, H.J. Mathes, H.J. Mayer, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski
The evolution of the muon content of very high energy air showers (EAS) in the atmosphere is investigated with data of the KASCADE-Grande observatory. For this purpose, the muon attenuation length in the atmosphere is obtained from the experimental data as $\Lambda_\mu = 1256 \, \pm 85 \, ^{+229}_{-232}(\mbox{syst})\, \mbox{g/cm}^2$ for shower energies between $10^{16.3}$ and $10^{17.0} \, \mbox{eV}$. Comparison of this quantity with predictions of the high-energy hadronic interaction models QGSJET-II-02, SIBYLL 2.1, QGSJET-II-04 and EPOS-LHC reveals that the attenuation of the muon content of measured EAS in the atmosphere is lower than predicted. Deviations are, however, less significant with the post-LHC models. The presence of such deviations seems to be related to a difference between the simulated and the measured zenith angle evolutions of the lateral muon density distributions of EAS, which also causes a discrepancy between the measured absorption lengths of the density of shower muons and the predicted ones at large distances from the EAS core. The studied deficiencies show that all four considered hadronic interaction models fail to describe consistently the zenith angle evolution of the muon content of EAS in the aforesaid energy regime.
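As a reminder of what such an attenuation length measures (a generic textbook definition, not a formula quoted from this paper), the shower muon content is modeled as decaying exponentially with the traversed atmospheric depth $X$:

$$N_\mu(X) \propto \exp\left(-\frac{X}{\Lambda_\mu}\right), \qquad X \approx X_0 \sec\theta \ \text{(flat-atmosphere approximation for zenith angle } \theta\text{)},$$

where $X_0$ denotes the vertical atmospheric depth of the observation level; a larger $\Lambda_\mu$ therefore corresponds to a weaker attenuation of the muon number with increasing zenith angle.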
KASCADE-Grande Limits on the Isotropic Diffuse Gamma-Ray Flux between 100 TeV and 1 EeV (1710.02889)
KASCADE-Grande Collaboration: W. D. Apel, J. C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I. M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, Z. Feng, D. Fuhrmann, A. Gherghel-Lascu, H. J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J. R. Hörandel, T. Huege, K.-H. Kampert, D. Kang, H. O. Klages, K. Link, P. Łuczak, H. J. Mathes, H. J. Mayer, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F. G. Schröder, O. Sima, G. Toma, G. C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski
KASCADE and KASCADE-Grande were multi-detector installations to measure individual air showers of cosmic rays at ultra-high energy. Based on data sets measured by KASCADE and KASCADE-Grande, 90% C.L. upper limits to the flux of gamma-rays in the primary cosmic ray flux are determined in an energy range of ${10}^{14} - {10}^{18}$ eV. The analysis is performed by selecting air showers with a low muon content as expected for gamma-ray-induced showers compared to air showers induced by energetic nuclei. The best upper limit of the fraction of gamma-rays to the total cosmic ray flux is obtained at $3.7 \times {10}^{15}$ eV with $1.1 \times {10}^{-5}$. Translated to an absolute gamma-ray flux this sets constraints on some fundamental astrophysical models, such as the distance of sources for at least one of the IceCube neutrino excess models.
The Single-Phase ProtoDUNE Technical Design Report (1706.07081)
B. Abi, R. Acciarri, M. A. Acero, M. Adamowski, C. Adams, D. L. Adams, P. Adamson, M. Adinolfi, Z. Ahmad, C. H. Albright, T. Alion, J. Anderson, K. Anderson, C. Andreopoulos, M. P. Andrews, R. A. Andrews, J. dos Anjos, A. Ankowski, J. Anthony, M. Antonello, A. Aranda Fernandez, A. Ariga, T. Ariga, E. Arrieta Diaz, J. Asaadi, M. Ascencio, D. M. Asner, M. S. Athar, M. Auger, A. Aurisano, V. Aushev, D. Autiero, F. Azfar, J. J. Back, H. O. Back, C. Backhouse, X. Bai, M. Baird, A. B. Balantekin, S. Balasubramanian, B. Baller, P. Ballett, B. Bambah, H. Band, M. Bansal, S. Bansal, G. J. Barker, G. Barr, J. Barranco Monarca, N. Barros, A. Bashyal, M. Bass, F. Bay, J. L. Bazo, J. F. Beacom, B. R. Behera, G. Bellettini, V. Bellini, O. Beltramello, N. Benekos, P. A. Benetti, A. Bercellie, E. Berman, H. Berns, R. Bernstein, S. Bertolucci, V. Bhatnagar, B. Bhuyan, J. Bian, K. Biery, M. Bishai, A. Bitadze, T. Blackburn, A. Blake, F. d. M. Blaszczyk, E. Blaufuss, G. C. Blazey, M. Blennow, E. Blucher, V. Bocean, F. Boffelli, J. Boissevain, S. Bolognesi, T. Bolton, M. Bonesini, T. Boone, C. Booth, S. Bordoni, P. Bour, B. Bourguille, S. B. Boyd, D. Boyden, D. Brailsford, A. Brandt, J. Bremer, S. J. Brice, C. Bromberg, G. Brooijmans, G. Brown, G. Brunetti, N. Buchanan, H. Budd, P. Calafiura, A. Calatayud, E. Calligarich, E. Calvo, L. Camilleri, M. Campanelli, C. Cantini, B. Carls, M. Cascella, E. Catano-Mur, M. Cavaili-Sforza, F. Cavanna, S. Centro, A. Cervera Villanueva, T. Cervi, M. Chalifour, A. Chappuis, A. Chatterjee, S. Chattopadhyay, S. Chattopadhyay, L. Chaussard, S. Chembra, H. Chen, M. Chen, D. Cherdack, C. Chi, S. Childress, S. Choubey, B. C. Choudhary, G. Christodoulou, C. Christofferson, E. Church, T. E. Coan, A. Cocco, P. Cole, G. Collin, J. M. Conrad, M. Convery, R. Corey, L. Corwin, J. I. Crespo-Anadón, J. Creus Prats, P. Crivelli, D. Cronin-Hennessy, C. Crowley, A. Curioni, D. Cussans, H. da Motta, D. Dale, T. Davenne, G. S. Davies, J. Davies, J. Dawson, K. De, I. De Bonis, A. De Gouvea, P. C. de Holanda, P. De Jong, P. De Lurgio, J. J. de Vries, M. P. Decowski, P. Dedin-Neto, A. Delbart, D. Delepine, M. Delgado, D. Demuth, S. Dennis, C. Densham, R. Dharmapalan, N. Dhingra, M. Diamantopoulou, J. S. Diaz, G. Diaz Bautista, P. Ding, M. Diwan, Z. Djurcic, M. J. Dolinski, G. Drake, D. Duchesneau, D. Dutta, M. Duvernois, H. Duyang, D. A. Dwyer, S. Dye, A. S. Dyshkant, S. Dytman, M. Eads, B. Eberly, D. Edmunds, S. Elliott, W. M. Ellsworth, M. Elnimr, S. Emery, S. Eno, A. Ereditato, C. O. Escobar, L. Escudero Sanchez, J. J. Evans, K. Fahey, A. Falcone, L. Falk, A. Farbin, C. Farnese, Y. Farzan, M. Fasoli, A. Fava, J. Felix, P. Fernandez Menendez, E. Fernandez-Martinez, L. Fields, F. Filthaut, A. Finch, M. Fitton, B.T. Fleming, T. Forest, J. Fowler, W. Fox, J. Freeman, J. Freestone, D. Friant, J. Fried, A. Friedland, S. Fuess, B. Fujikawa, A. Gago, H. Gallagher, S. Galvin, S. Galymov, T. Gamble, R. Gandhi, S. Gao, D. Garcia-Gamez, S. Gardiner, D. Gastler, A. Gendotti, D. Gibin, I. Gil-Botella, R. Gill, A. K. Giri, D. Goeldi, M. Gold, S. Gollapinni, R. A. Gomes, J. J. Gomez Cadenas, M. C. Goodman, D. Gorbunov, S. Goswami, N. Graf, N. Graf, M. Graham, E. Gramelini, R. Gran, C. Grant, N. Grant, V. Greco, H. Greenlee, L. Greenler, M. Groh, K. Grzelak, E. Guardincerri, V. Guarino, G. P. Guedes, R. Guenette, A. Guglielmi, K. K. Guthikonda, M. M. Guzzo, A. T. Habig, A. Hackenburg, R. W. Hackenburg, H. Hadavand, R. Haenni, A. Hahn, M. D. Haigh, T. Haines, T. Hamernik, P. Hamilton, T. Handler, S. Hans, D. 
Harris, J. Hartnell, T. Hasegawa, R. Hatcher, A. Hatzikoutelis, S. Hays, E. Hazen, M. Headley, A. Heavey, K. M. Heeger, J. Heise, K. Hennessy, S. Henry, J. Hernandez-Garcia, J. Hewes, A. Higuera, T. Hill, A. Himmel, A. Holin, E.W. Hoppe, S. Horikawa, G. Horton-Smith, M. Hostert, A. Hourlier, B. Howard, J. Howell, J. Hugon, P. Hurh, J. Huston, J. Hylen, J. Insler, G. Introzzi, A. Ioannisian, A. Izmaylov, D. E. Jaffe, C. James, E. James, C. H. Jang, F. Jediny, C. Jen, A. Jhingan, S. Jiménez, M. Johnson, R. Johnson, J. Johnstone, J. Joshi, H. Jostlein, C. K. Jung, T. Junk, A. Kaboth, I. Kadenko, Y. Kamyshkov, G. Karagiorgi, D. Karasavvas, Y. Karyotakis, P. Kaur, B. Kayser, N. Kazaryan, E. Kearns, P.T.Keener, E. Kemp, C. Kendziora, W. Ketchum, S. H. Kettell, M. Khabibullin, A. Khotjantsev, B. Kirby, M. Kirby, J. Klein, Y. J. Ko, T. Kobilarcik, B. Kocaman, L.W. Koerner, S. Kohn, G. Koizumi, A. Kopylov, M. Kordosky, L. Kormos, U. Kose, V. A. Kostelecký, M. Kramer, I. Kreslo, K. Kriesel, W. Kropp, Y. Kudenko, V. A. Kudryavtsev, S. Kulagin, A. Kumar, J. Kumar, L. Kumar, V. Kus, T. Kutter, K. Lande, C. Lane, K. Lang, T. Langford, F. Lanni, A. Laundrie, T. Le, J. Learned, P. Lebrun, D. Lee, W. M. Lee, M. A. Leigui de Oliveira, Q. Z. Li, S. Li, S. W. Li, X. Li, Y. Li, Z. Li, C. S. Lin, S. Lin, R. Linehan, J. Link, Z. Liptak, D. Lissauer, L. Littenberg, B. R. Littlejohn, J. Liu, Q. Liu, T. Liu, S. Lockwitz, N. Lockyer, T. Loew, M. Lokajicek, L. LoMonaco, K. Long, K. Loo, J. P. Lopez, D. Lorca, J. M. LoSecco, W. Louis, M. Luethi, K. B. Luk, T. Lundin, X. Luo, T. Lux, J. Lykken, J. Maalampi, A. A. Machado, C. T. Macias, J. R. Macier, R. MacLellan, S. Magill, G. Mahler, K. Mahn, M. Malek, F. Mammoliti, S. K. Mandal, S. Mandodi, S. Manly, A. Mann, A. Marchionni, W. Marciano, D. Marfatia, C. Mariani, J. Maricic, A. D. Marino, M. Marshak, C. M. Marshall, J. Marshall, J. Marteau, J. Martin-Albo, D. A. Martinez Caicedo, M. Masud, S. Matsuno, J. Matthews, C. Mauger, K. Mavrokoridis, R. Mazza, A. Mazzacane, E. Mazzucato, N. McCauley, E. McCluskey, N. McConkey, K.T. McDonald, K. S. McFarland, C. McGivern, A. M. McGowan, C. McGrew, S. R. McGuinness, R. McKeown, D. McNulty, R. McTaggart, A. Mefodiev, P. Mehta, D. Mei, O. Mena, S. Menary, D. P. Mendez, H. Mendez, A. Menegolli, G. Meng, M. Messier, W. Metcalf, M. Mewes, H. Meyer, T. Miao, R. Milincic, W. Miller, G. Mills, O. Mineev, O. G. Miranda, C. S. Mishra, S. R. Mishra, B. Mitrica, D. Mladenov, I. Mocioiu, K. Moffat, R. Mohanta, N. Mokhov, L. Molina Bueno, C. Montanari, D. Montanari, J. Moon, M. Mooney, C. D. Moore, B. Morgan, C. Morris, W. Morse, C. Mossey, C. A. Moura, J. Mousseau, L. Mualem, M. Muether, S. Mufson, S. Murphy, J. Musser, Y. Nakajima, D. Naples, J. Navarro, D.Navas-Nicolás, J. Nelson, M. Nessi, D. Newbold, M. Newcomer, K. T. T. Nguyen, R. Nichol, T. C. Nicholls, E. Niner, A. Norman, B. Norris, P. Novakova, P. Novella, E. Nowak, J. Nowak, M. S. Nunes, H. O'Keeffe, A. Olivares Del Campo, R. Oliveira, A. Olivier, Y. Onishchuk, T. Ovsjannikova, S. Pakvasa, O. Palamara, J. Paley, C. Palomares, E. Pantic, V. Paolone, V. Papadimitriou, S. Paramesvaran, J. Park, S. Parke, Z. Parsa, S. Pascoli, J. Pasternak, J. Pater, R. B. Patterson, S. J. Patton, T. Patzak, B. Paulos, L. Paulucci, Z. Pavlovic, G. Pawloski, D. Payne, S. J. M. Peeters, E. Pennacchio, G. N. Perdue, O. L. G. Peres, L. Periale, J. D. Perkin, K. Petridis, R. Petti, A. Petukhov, P. Picchi, F. Pietropaolo, P. Plonski, R. Plunkett, R. Poling, M. Popovic, R. Pordes, S. Pordes, M. Potekhin, R. 
Potenza, B. Potukuchi, S. Poudel, J. Pozimski, O. Prokofiev, N. Pruthi, P. Przewlocki, N. Pumulo, D. Pushka, X. Qian, J. L. Raaf, R. Raboanary, V. Radeka, J. Rademacker, B. Radics, A. Radovic, I. Rakhno, H. T. Rakotondramanana, L. Rakotondravohitra, Y. A. Ramachers, R. Rameika, J. Ramsey, A. Rappoldi, G. Raselli, P. Ratoff, B. Rebel, C. Regenfus, J. Reichenbacher, S. D. Reitzner, A. Remoto, A. Renshaw, S. Rescia, L. Rice, K. Rielage, K. Riesselmann, D. Rivera, M. Robinson, L. Rochester, O. B. Rodrigues, P. Rodrigues, B. Roe, R. M. Roser, M. Ross-Lonergan, M. Rossella, J. Rout, S. Roy, A. Rubbia, C. Rubbia, R. Rucinski, C. Rudolph von Rohr, B. Russell, D. Ruterbories, R. Saakyan, K. Sachdev, N. Sahu, P. Sala, N. Samios, F. Sanchez, M.C. Sanchez, W.R. Sands, S. Santana, R. Santorelli, L. M. Santos, G. Santucci, N. Saoulidou, G. Savage, A. Scaramelli, A. Scarpelli, T. Schaffer, H. Schellman, P. Schlabach, D. Schmitz, J. Schneps, K. Scholberg, A. Schukraft, E. Segreto, S. Sehrawat, J. A. Sepulveda-Quiroz, F. Sergiampietri, K. Sexton, L. Sexton-Kennedy, M. H. Shaevitz, J. Shahi, S. Shahsavarani, P. Shanahan, R. Sharma, R. K. Sharma, T. Shaw, D. Shooltz, R. Shrock, N. Simos, J. Sinclair, G. Sinev, I. Singh, J. Singh, J. Singh, V. Singh, F. W. Sippach, K. Siyeon, A. Smith, P. Smith, J. Smolik, M. Smy, E. Snider, P. Snopok, J. Sobczyk, H. Sobel, M. Soderberg, N. Solomey, W. Sondheim, M. Sorel, A. Sousa, K. Soustruznik, M. Spanu, J. Spitz, N. J. C. Spooner, M. Stancari, D. Stefan, A. Stefanik, H. M. Steiner, J. Stewart, J. Stock, S. Stoica, J. Stone, J. Strait, M. Strait, T. Strauss, S. Striganov, R. Sulej, G. Sullivan, Y. Sun, L. Suter, C. M. Sutera, R. Svoboda, B. Szczerbinska, A. M. Szelc, S. Söldner-Rembold, R. Talaga, S. Tariq, E. Tatar, R. Tayloe, K. Terao, M. Thiesse, L. F. Thompson, M. Thomson, C. Thorn, M. Thorpe, X. Tian, D. Tiedt, S. C. Timm, J. Todd, A. Tonazzo, T. Tope, F. R. Torres, M. Torti, M. Tortola, F. Tortorici, M. Toups, C. Touramanis, J. Trevor, M. Tripathi, I. Tropin, W. H. Trzaska, Y. Tsai, K. V. Tsang, A. Tsaris, R. Tsenov, S. Tufanli, C. Tull, J. Turner, M. Tzanov, E. Tziaferi, Y. Uchida, S. Uma Sankar, J. Urheim, T. Usher, M. R. Vagins, P. Vahle, G. A. Valdiviesso, L. Valerio, Z. Vallari, J. Valle, R. Van Berg, R. Van de Water, F. Varanini, G. Varner, G. Vasseur, K. Vaziri, G. Velev, S. Ventura, A. Verdugo, M. A. Vermeulen, E. Vernon, T. Viant, T. V. Vieira, C. Vignoli, S. Vihonen, C. Vilela, B. Viren, P. Vokac, T. Vrba, T. Wachala, D. Wahl, M. Wallbank, B. Wang, H. Wang, T. Wang, T.K. Warburton, D. Warner, M. Wascko, D. Waters, A. Weber, M. Weber, W. Wei, A. Weinstein, D. Wenman, M. Wetstein, A. White, L. H. Whitehead, D. Whittington, M. J. Wilking, J. Willhite, P. Wilson, R. J. Wilson, P. Wittich, J. Wolcott, H. H. Wong, T. Wongjirad, K. Wood, L.S. Wood, E. Worcester, M. Worcester, S. Wu, W. Xu, C. Yanagisawa, S. Yang, T. Yang, J. Ye, M. Yeh, N. Yershov, K. Yonehara, B. Yu, J. Yu, J. Zalesak, L. Zambelli, B. Zamorano, L. Zang, A. Zani, K. Zaremba, G.P. Zeller, C. Zhang, C. Zhang, Y. Zhou, E. D. Zimmerman, M. Zito, J. Zuklin, V. Zutshi, R. Zwaska
July 27, 2017 hep-ex, physics.ins-det
ProtoDUNE-SP is the single-phase DUNE Far Detector prototype that is under construction and will be operated at the CERN Neutrino Platform (NP) starting in 2018. ProtoDUNE-SP, a crucial part of the DUNE effort towards the construction of the first DUNE 10-kt fiducial mass far detector module (17 kt total LAr mass), is a significant experiment in its own right. With a total liquid argon (LAr) mass of 0.77 kt, it represents the largest monolithic single-phase LArTPC detector to be built to date. Its technical design is given in this report.
A feasibility study to track cosmic muons using a detector with SiPM devices based on amplitude discrimination (1603.05650)
D. Stanca, M. Niculescu-Oglinzanu, I. Brancus, B. Mitrica, A. Balaceanu, B. Cautisanu, A. Gherghel-Lascu, A. Haungs, H.-J. Mathes, H. Rebel, A. Saftoiu, O. Sima, T. Mosu
March 17, 2016 hep-ex, physics.ins-det
The possibility to build a SiPM-readout muon detector (SiRO), using plastic scintillators with optical fibers as the sensitive volume, read out by SiPM photodiodes, is investigated. SiRO shall be used for tracking cosmic muons based on amplitude discrimination. The detector concept foresees a stack of 6 active layers, grouped in 3 sandwiches for determining the muon trajectories through 3 planes. After investigating the characteristics of the photodiodes, tests have been performed using two detection modules, each composed of a plastic scintillator sheet, $100 \times 25 \times 1\,$cm$^{3}$, with 12 parallel, equidistant grooves; each groove is filled with an optical fiber of $1.5\,$mm thickness, and two fibers are always connected to form a channel. The attenuation of the light response along the optical fiber and across the channels has been tested. The measurements of the incident muons based on the input amplitude discrimination indicate that this procedure is not efficient and therefore not sufficient, as only about 30% of the measured events could be used in the reconstruction of the muon trajectories. Based on the studies presented in this paper, the layout used for building the SiRO detector will be changed, and the analog acquisition technique will be replaced by a digital one.
Astrophysics > Astrophysics of Galaxies
arXiv:1506.05369 (astro-ph)
[Submitted on 17 Jun 2015 (v1), last revised 8 Oct 2015 (this version, v2)]
Title: Stellar Velocity Dispersion and Anisotropy of the Milky Way Inner Halo
Authors: Charles King III, Warren R. Brown, Margaret J. Geller, Scott J. Kenyon (Smithsonian Astrophysical Observatory)
Abstract: We measure the three components of velocity dispersion, $\sigma_{R},\sigma_{\theta},\sigma_{\phi}$, for stars within 6 < R < 30 kpc of the Milky Way using a new radial velocity sample from the MMT telescope. We combine our measurements with previously published data so that we can more finely sample the stellar halo. We use a maximum likelihood statistical method for estimating mean velocities, dispersions, and covariances assuming only that velocities are normally distributed. The alignment of the velocity ellipsoid is consistent with a spherically symmetric gravitational potential. From the spherical Jeans equation, the mass of the Milky Way is M(<12 kpc) = $1.3\times10^{11}$ M$_{\odot}$ with an uncertainty of 40%. We also find a region of discontinuity, 15 < R < 25 kpc, where the estimated velocity dispersions and anisotropies diverge from their anticipated values, confirming at high significance the break observed by others. We argue that this break in anisotropy is physically explained by coherent stellar velocity structure in the halo, such as the Sgr stream. To significantly improve our understanding of halo kinematics will require combining radial velocities with future Gaia proper motions.
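For context, the spherical Jeans equation referred to here is commonly written in the following standard form (reproduced from textbook treatments, not from the paper itself), where $\nu$ is the tracer density profile and $\beta$ the velocity anisotropy built from the measured dispersion components:

$$M(<r) = -\frac{r\,\sigma_r^{2}}{G}\left(\frac{\mathrm{d}\ln\nu}{\mathrm{d}\ln r} + \frac{\mathrm{d}\ln\sigma_r^{2}}{\mathrm{d}\ln r} + 2\beta\right), \qquad \beta = 1 - \frac{\sigma_\theta^{2}+\sigma_\phi^{2}}{2\sigma_r^{2}}.$$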
Comments: 15 pages, accepted to ApJ
Subjects: Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR)
Cite as: arXiv:1506.05369 [astro-ph.GA]
(or arXiv:1506.05369v2 [astro-ph.GA] for this version)
Related DOI: https://doi.org/10.1088/0004-637X/813/2/89
From: Warren R. Brown
[v1] Wed, 17 Jun 2015 15:42:23 UTC (1,562 KB)
[v2] Thu, 8 Oct 2015 14:21:02 UTC (1,601 KB)
Time Limit : sec, Memory Limit : KB
Twin Trees Bros.
To meet the demand of ICPC (International Cacao Plantation Consortium), you have to check whether two given trees are twins or not.
Example of two trees in the three-dimensional space.
The term tree in the graph theory means a connected graph where the number of edges is one less than the number of nodes. ICPC, in addition, gives three-dimensional grid points as the locations of the tree nodes. Their definition of two trees being twins is that, there exists a geometric transformation function which gives a one-to-one mapping of all the nodes of one tree to the nodes of the other such that for each edge of one tree, there exists an edge in the other tree connecting the corresponding nodes. The geometric transformation should be a combination of the following transformations:
translations, in which coordinate values are added with some constants,
uniform scaling with positive scale factors, in which all three coordinate values are multiplied by the same positive constant, and
rotations of any amounts around either $x$-, $y$-, and $z$-axes.
Note that two trees can be twins in more than one way, that is, with different correspondences of nodes.
Write a program that decides whether two trees are twins or not and outputs the number of different node correspondences.
Hereinafter, transformations will be described in the right-handed $xyz$-coordinate system.
Trees in the sample inputs 1 through 4 are shown in the following figures. The numbers in the figures are the node numbers defined below.
For the sample input 1, each node of the red tree is mapped to the corresponding node of the blue tree by the transformation that translates $(-3, 0, 0)$, rotates $-\pi / 2$ around the $z$-axis, rotates $\pi / 4$ around the $x$-axis, and finally scales by $\sqrt{2}$. By this mapping, nodes #1, #2, and #3 of the red tree at $(0, 0, 0)$, $(1, 0, 0)$, and $(3, 0, 0)$ correspond to nodes #6, #5, and #4 of the blue tree at $(0, 3, 3)$, $(0, 2, 2)$, and $(0, 0, 0)$, respectively. This is the only possible correspondence of the twin trees.
For the sample input 2, red nodes #1, #2, #3, and #4 can be mapped to blue nodes #6, #5, #7, and #8. Another node correspondence exists that maps nodes #1, #2, #3, and #4 to #6, #5, #8, and #7.
For the sample input 3, the two trees are not twins. There exist transformations that map nodes of one tree to distinct nodes of the other, but the edge connections do not agree.
For the sample input 4, there is no transformation that maps nodes of one tree to those of the other.
The input consists of a single test case of the following format.
$n$
$x_1$ $y_1$ $z_1$
$\vdots$
$x_n$ $y_n$ $z_n$
$u_1$ $v_1$
$\vdots$
$u_{n-1}$ $v_{n-1}$
$x_{n+1}$ $y_{n+1}$ $z_{n+1}$
$\vdots$
$x_{2n}$ $y_{2n}$ $z_{2n}$
$u_n$ $v_n$
$\vdots$
$u_{2n-2}$ $v_{2n-2}$
The input describes two trees. The first line contains an integer $n$ representing the number of nodes of each tree ($3 \leq n \leq 200$). Descriptions of two trees follow.
Description of a tree consists of $n$ lines that give the vertex positions and $n - 1$ lines that show the connection relation of the vertices.
Nodes are numbered $1$ through $n$ for the first tree, and $n + 1$ through $2n$ for the second tree.
The triplet $(x_i, y_i, z_i)$ gives the coordinates of the node numbered $i$. $x_i$, $y_i$, and $z_i$ are integers in the range between $-1000$ and $1000$, inclusive. Nodes of a single tree have distinct coordinates.
The pair of integers $(u_j , v_j )$ means that an edge exists between nodes numbered $u_j$ and $v_j$ ($u_j \ne v_j$). $1 \leq u_j \leq n$ and $1 \leq v_j \leq n$ hold for $1 \leq j \leq n - 1$, and $n + 1 \leq u_j \leq 2n$ and $n + 1 \leq v_j \leq 2n$ hold for $n \leq j \leq 2n - 2$.
Output the number of different node correspondences if two trees are twins. Output a zero, otherwise.
Sample Input 1
Sample Output 1
Source: https://onlinejudge.u-aizu.ac.jp/problems/1403
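As an illustration of the twin condition described above (not a full solution and not part of the original problem statement), the following sketch checks whether one fixed node correspondence is consistent with a translation, rotation, and uniform positive scaling: all pairwise distances must be multiplied by the same factor, and edges must map to edges. The function name and data layout are assumptions; a complete program must also enumerate candidate correspondences and rule out reflections (e.g., via the sign of a 3x3 determinant).

import math

def is_consistent(pts_a, pts_b, edges_a, edges_b, perm, eps=1e-9):
    """pts_*: lists of (x, y, z) tuples; edges_*: sets of frozenset node pairs (0-based);
    perm[i] = node of the second tree matched to node i of the first tree."""
    n = len(pts_a)
    scale = None
    for i in range(n):
        for j in range(i + 1, n):
            da = math.dist(pts_a[i], pts_a[j])            # > 0: node coordinates are distinct
            db = math.dist(pts_b[perm[i]], pts_b[perm[j]])
            if scale is None:
                scale = db / da
            elif abs(db - scale * da) > eps * max(1.0, db):
                return False                              # distances do not scale uniformly
    mapped = {frozenset((perm[u], perm[v])) for u, v in (tuple(e) for e in edges_a)}
    return mapped == edges_b                              # edges must map to edges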
|
CommonCrawl
|
Networks & Heterogeneous Media
June 2009 , Volume 4 , Issue 2
Special Issue on Irrigation Channels and Related Problems
Georges Bastin, Alexandre M. Bayen, Ciro D'Apice, Xavier Litrico and Benedetto Piccoli
2009, 4(2): i-v doi: 10.3934/nhm.2009.4.2i
1. Introduction: Management of canal networks at the age of information technology. With the miniaturization of sensors and their decreasing costs, the paradigm of instrumentation of the built infrastructure and the environment has now been underway for several years, leading to numerous successful and sometimes spectacular realizations such as the instrumentation of the Golden Gate with wireless sensors a few years ago. The convergence of communication, control and sensing on numerous platforms including multi-media platforms has enabled engineers to augment physical infrastructure systems with an information layer, capable of real-time monitoring, with particular success in the health monitoring community. This paradigm has reached a level of maturity, revealed by the emergence of numerous technologies usable to monitor the built infrastructure. Supervisory Control And Data Acquisition (SCADA) systems are a perfect example of such infrastructure, which integrate sensing, communication and control. In the context of management of irrigation networks, the impact of this technology on the control of such systems has the potential of significantly improving efficiency of operations.
For more information please click the "Full Text" above
Georges Bastin, Alexandre M. Bayen, Ciro D'Apice, Xavier Litrico, Benedetto Piccoli. Preface. Networks & Heterogeneous Media, 2009, 4(2): i-v. doi: 10.3934/nhm.2009.4.2i.
On Lyapunov stability of linearised Saint-Venant equations for a sloping channel
Georges Bastin, Jean-Michel Coron and Brigitte d'Andréa-Novel
2009, 4(2): 177-187 doi: 10.3934/nhm.2009.4.177
We address the issue of the exponential stability (in $L^2$-norm) of the classical solutions of the linearised Saint-Venant equations for a sloping channel. We give an explicit sufficient dissipative condition which guarantees the exponential stability under subcritical flow conditions without additional assumptions on the size of the bottom and friction slopes. The stability analysis relies on the same strict Lyapunov function as in our previous paper [5]. The special case of a single pool is first treated. Then, the analysis is extended to the case of the boundary feedback control of a general channel with a cascade of $n$ pools.
Georges Bastin, Jean-Michel Coron, Brigitte d'Andréa-Novel. On Lyapunov stability of linearised Saint-Venant equations for a sloping channel. Networks & Heterogeneous Media, 2009, 4(2): 177-187. doi: 10.3934/nhm.2009.4.177.
Methods for the localization of a leak in open water channels
Nadia Bedjaoui, Erik Weyer and Georges Bastin
2009, 4(2): 189-210 doi: 10.3934/nhm.2009.4.189
In this paper, we present two methods for determining the position of a leak in an open water channel. The available measurements are the water level and the gate position at the upstream and downstream end of a channel reach. We assume that the size of the leak and the time it started are already estimated by a leak-detection method. Both of the proposed methods make use of a nonlinear Saint-Venant equation model of the channel where the leak is modelled as a lateral outflow. The first method makes use of a bank of $N$ models corresponding to $N$ possible positions of the leak along the channel. The estimated position of the leak is determined by the model which minimizes a quadratic cost function. The second method is based on the same principle except that it uses observers instead of pure models. The methods are tested on both real and simulated data from the Coleambally Channel 6 in Australia. It is further shown that the determination of the position of a leak is an inherently difficult problem.
Nadia Bedjaoui, Erik Weyer, Georges Bastin. Methods for the localization of a leak in open water channels. Networks & Heterogeneous Media, 2009, 4(2): 189-210. doi: 10.3934/nhm.2009.4.189.
Towards nonlinear delay-based control for convection-like distributed systems: The example of water flow control in open channel systems
Gildas Besançon, Didier Georges and Zohra Benayache
In this paper, the driving idea is to use a possible approximation of partial differential equations with boundary control by ordinary differential equations with time-varying delayed input, for a control purpose. This results in the development of a specific nonlinear control methodology for such delayed-input systems. The case of water flow control in open channel systems is used as a motivating and illustrative example, with corresponding simulation results.
Gildas Besançon, Didier Georges, Zohra Benayache. Towards nonlinear delay-based control for convection-like distributed systems: The example of water flow control in open channel systems. Networks & Heterogeneous Media, 2009, 4(2): 211-221. doi: 10.3934/nhm.2009.4.211.
Sensor systems on networked vehicles
João Borges de Sousa, Bernardo Maciel and Fernando Lobo Pereira
The future role of networked unmanned vehicles in advanced field studies is discussed in light of the recent technological advances and trends. Visions for systems which could have not been designed before are contrasted to the legal, technological and societal challenges facing the deployment of these systems. The discussion is illustrated with examples of developments from the Underwater Systems and Technologies Laboratory (LSTS) from Porto University.
João Borges de Sousa, Bernardo Maciel, Fernando Lobo Pereira. Sensor systems on networked vehicles. Networks & Heterogeneous Media, 2009, 4(2): 223-247. doi: 10.3934/nhm.2009.4.223.
A Hamiltonian perspective to the stabilization of systems of two conservation laws
Valérie Dos Santos, Bernhard Maschke and Yann Le Gorrec
This paper aims at providing some synthesis between two alternative representations of systems of two conservation laws and interpret different conditions on stabilizing boundary control laws. The first one, based on the invariance of its coordinates, is the representation in Riemann coordinates which has been applied successfully for the stabilization of linear and non-linear hyperbolic systems of conservation laws. The second representation is based on physical modelling and leads to port Hamiltonian systems which are extensions of infinite-dimensional Hamiltonian systems defined on Dirac structure encompassing pairs of conjugated boundary variables. In a first instance the port Hamiltonian formulation is recalled with respect to a canonical Stokes-Dirac structure and then derived in Riemann coordinates. In a second instance the conditions on the boundary feedback relations derived with respect to the Riemann invariants are expressed in terms of the port boundary variable of the Hamiltonian formulation and interpreted in terms of the dissipation inequality of the Hamiltonian functional. The p-system and the Saint-Venant equations arising in models of irrigation channels are the illustrating examples developed through the paper.
Valérie Dos Santos, Bernhard Maschke, Yann Le Gorrec. A Hamiltonian perspective to the stabilization of systems of two conservation laws. Networks & Heterogeneous Media, 2009, 4(2): 249-266. doi: 10.3934/nhm.2009.4.249.
Infinite-dimensional nonlinear predictive control design for open-channel hydraulic systems
Didier Georges
A nonlinear predictive control design based on Saint Venant equations is presented in this paper in order to regulate both water depth and water flow rate in a single pool of an open-channel hydraulic system. Thanks to variational calculus, some necessary optimality conditions are given. The adjoint partial differential equations of the Saint Venant partial differential equations are also derived. The resulting two-point boundary value problem is solved numerically by using both time and space discretization and operator approximations based on nonlinear time-implicit finite differences. The practical effectiveness of the control design is demonstrated by a simulation example. An extension of the predictive control scheme to a multi-pool system is proposed by using a decomposition-coordination approach based on a two-level algorithm and the use of an augmented Lagrangian, which can take advantage of communication networks used for distributed control. This approach may be easily applied to other problems governed by hyperbolic PDEs, such as road traffic systems.
Didier Georges. Infinite-dimensional nonlinear predictive control design for open-channel hydraulic systems. Networks & Heterogeneous Media, 2009, 4(2): 267-285. doi: 10.3934/nhm.2009.4.267.
Traffic flow models with phase transitions on road networks
Paola Goatin
The paper presents a review of the main analytical results available on the traffic flow model with phase transitions described in [10]. We also introduce a forthcoming existence result on road networks [14].
Paola Goatin. Traffic flow models with phase transitions on road networks. Networks & Heterogeneous Media, 2009, 4(2): 287-301. doi: 10.3934/nhm.2009.4.287.
Adaptive and non-adaptive model predictive control of an irrigation channel
João M. Lemos, Fernando Machado, Nuno Nogueira, Luís Rato and Manuel Rijo
The performance achieved with both adaptive and non-adaptive Model Predictive Control (MPC) when applied to a pilot irrigation channel is evaluated. Several control structures are considered, corresponding to various degrees of centralization of sensor information, ranging from local upstream control of the different channel pools to multivariable control using only proximal pools, and centralized multivariable control relying on a global channel model. In addition to the non-adaptive version, an adaptive MPC algorithm based on redundantly estimated multiple models is considered and tested with and without feedforward of adjacent pool levels, both for upstream and downstream control. In order to establish a baseline, the results of upstream and local PID controllers are included for comparison. A systematic simulation study of the performances of these controllers, both for disturbance rejection and reference tracking is shown.
João M. Lemos, Fernando Machado, Nuno Nogueira, Luís Rato, Manuel Rijo. Adaptive and non-adaptive model predictive control of an irrigation channel. Networks & Heterogeneous Media, 2009, 4(2): 303-324. doi: 10.3934/nhm.2009.4.303.
Modal decomposition of linearized open channel flow
Xavier Litrico and Vincent Fromion
Open channel flow is traditionally modeled as a hyperbolic system of conservation laws, which is an infinite dimensional system with complex dynamics. We consider in this paper an open channel represented by the Saint-Venant equations linearized around a non-uniform steady flow regime. We use a frequency domain approach to fully characterize the open channel flow dynamics. The use of the Laplace transform enables us to derive the distributed transfer matrix, linking the boundary inputs to the state of the system. The poles of the system are then computed analytically, and each transfer function is decomposed into a series of eigenfunctions, where the influence of space and time variables can be decoupled. As a result, we can express the time-domain response of the whole canal pool to boundary inputs in terms of discharges. This study is first done in the uniform case, and finally extended to the non-uniform case. The solution is studied and illustrated on two different canal pools.
Xavier Litrico, Vincent Fromion. Modal decomposition of linearized open channel flow. Networks & Heterogeneous Media, 2009, 4(2): 325-357. doi: 10.3934/nhm.2009.4.325.
Distributed model predictive control of irrigation canals
Rudy R. Negenborn, Peter-Jules van Overloop, Tamás Keviczky and Bart De Schutter
Irrigation canals are large-scale systems, consisting of many interacting components, and spanning vast geographical areas. For safe and efficient operation of these canals, maintaining the levels of the water flows close to pre-specified reference values is crucial, both under normal operating conditions as well as in extreme situations.
Irrigation canals are equipped with local controllers, to control the flow of water by adjusting the settings of control structures such as gates and pumps. Traditionally, the local controllers operate in a decentralized way in the sense that they use local information only, that they are not explicitly aware of the presence of other controllers or subsystems, and that no communication among them takes place. Hence, an obvious drawback of such a decentralized control scheme is that adequate performance at a system-wide level may be jeopardized, due to the unexpected and unanticipated interactions among the subsystems and the actions of the local controllers.
In this paper we survey the state-of-the-art literature on distributed control of water systems in general, and irrigation canals in particular. We focus on the model predictive control (MPC) strategy, which is a model-based control strategy in which prediction models are used in an optimization to determine optimal control inputs over a given horizon. We discuss how communication among local MPC controllers can be included to improve the performance of the overall system. We present a distributed control scheme in which each controller employs MPC to determine those actions that maintain water levels after disturbances close to pre-specified reference values. Using the presented scheme the local controllers cooperatively strive for obtaining the best system-wide performance. A simulation study on an irrigation canal with seven reaches illustrates the potential of the approach.
Rudy R. Negenborn, Peter-Jules van Overloop, Tamás Keviczky, Bart De Schutter. Distributed model predictive control of irrigation canals. Networks & Heterogeneous Media, 2009, 4(2): 359-380. doi: 10.3934/nhm.2009.4.359.
A salinity sensor system for estuary studies
Thanh-Tung Pham, Thomas Green, Jonathan Chen, Phuong Truong, Aditya Vaidya and Linda Bushnell
In this paper, we present the design, development and testing of a salinity sensor system for estuary studies. The salinity sensor was designed keeping size, cost and functionality in mind. The target market for this sensor is in hydrology where many salinity sensors are needed at low cost. Our sensor can be submersed in water for up to two weeks (all electronics are completely sealed) while salinity is recorded on-board at user-defined intervals. The data is then downloaded to a computer in the laboratory, after which the sensor is recharged, cleaned for biofouling and ready to be used again. The system uses a software program to download, display and analyze the sensor data. Our initial laboratory testing shows the salinity sensor system is functional. The novelty of this work is in the use of toroidal (inductive) conductivity sensors, the resulting low cost and simple design.
Thanh-Tung Pham, Thomas Green, Jonathan Chen, Phuong Truong, Aditya Vaidya, Linda Bushnell. A salinity sensor system for estuary studies. Networks & Heterogeneous Media, 2009, 4(2): 381-392. doi: 10.3934/nhm.2009.4.381.
Control of systems of conservation laws with boundary errors
Christophe Prieur
The general problem under consideration in this paper is the stability analysis of hyperbolic systems. Some sufficient criteria on the boundary conditions exist for the stability of a system of conservation laws. We investigate the problem of the stability of such a system in the presence of boundary errors that have a small $\mathcal{C}^1$-norm. Two types of perturbations are considered in this work: the errors proportional to the solutions and those proportional to the integral of the solutions. We exhibit a sufficient criterion on the boundary conditions such that the system is locally exponentially stable with a robustness issue with respect to small boundary errors. We apply this general condition to control the dynamic behavior of a pipe filled with water. The control is defined as the position of a valve at one end of the pipe. The potential application is the study of hydropower installations to generate electricity. For this kind of application it is important to avoid the waterhammer effect and thus to control the $\mathcal{C}^1$-norm of the solutions. Our damping condition allows us to design a controller so that the system in closed loop is locally exponentially stable with a robustness issue with respect to small boundary errors. Since the boundary errors allow us to define the stabilizing controller, small errors in the actuator may be considered. A small integral action to avoid possible offset may also be added.
Christophe Prieur. Control of systems of conservation laws with boundary errors. Networks & Heterogeneous Media, 2009, 4(2): 393-407. doi: 10.3934/nhm.2009.4.393.
Comparison of two data assimilation algorithms for shallow water flows
Issam S. Strub, Julie Percelay, Olli-Pekka Tossavainen and Alexandre M. Bayen
This article presents the comparison of two algorithms for data assimilation of two dimensional shallow water flows. The first algorithm is based on a linearization of the model equations and a quadratic programming (QP) formulation of the problem. The second algorithm uses Ensemble Kalman Filtering (EnKF) applied to the non-linear two dimensional shallow water equations. The two methods are implemented on a scenario in which boundary conditions and Lagrangian measurements are available. The performance of the methods is evaluated using twin experiments with experimentally measured bathymetry data and boundary conditions from a river located in the Sacramento Delta. The sensitivity of the algorithms to the number of drifters, low or high discharge and time sampling frequency is studied.
Issam S. Strub, Julie Percelay, Olli-Pekka Tossavainen, Alexandre M. Bayen. Comparison of two data assimilation algorithms for shallow water flows. Networks & Heterogeneous Media, 2009, 4(2): 409-430. doi: 10.3934/nhm.2009.4.409.
Is it true that $b^n-a^n < (b-a)nb^{n-1}$ when $0 < a< b$?
A Real Analysis textbook says the identity $$b^n-a^n = (b-a)(b^{n-1}+\cdots+a^{n-1})$$ yields the inequality $$b^n-a^n < (b-a)nb^{n-1} \text{ when } 0 < a< b.$$ (Note that $n$ is a positive integer)
No matter how I look at it, the inequality seems to be wrong. Take for instance, the inequality does not hold for $n=1$ when one tries mathematical induction. It does not hold for other values of $n$ too. I guess there is something I am missing here and I will appreciate help.
real-analysis
Asaf Karagila♦
Mr Prof
$\begingroup$ How can it be wrong? When $n=1$, the inequality is trivially true since $b-a=b-a$. $\endgroup$
– Clayton
Apr 5 '18 at 0:15
$\begingroup$ @Clayton : Since it was stated as a strict inequality, it is trivially false when $n=1. \qquad$ $\endgroup$
– Michael Hardy
$\begingroup$ From what I can see, when n=1, b - a < b - a . $\endgroup$
– Mr Prof
$\begingroup$ Sorry @MichaelHardy: the text would have been easier to read if the OP had used $\LaTeX$. I mistook the written $<$ for $\leq$. $\endgroup$
$\begingroup$ @Clayton : True. But you shouldn't call it LaTeX. It's MathJax. LaTeX is immensely more elaborate than MathJax. $\endgroup$
\begin{align} b^n-a^n & = (b-a)(b^{n-1}+ b^{n-2}a + b^{n-3}a^2 + b^{n-4}a^3 + b^{n-5} a^4 +\cdots+a^{n-1}) \\[10pt] & < (b-a)(b^{n-1} + b^{n-2} b + b^{n-3}b^2 + b^{n-4}b^3+ b^{n-5}b^4 + \cdots + b^{n-1}) \\[10pt] & = (b-a)(b^{n-1} + b^{n-1} + b^{n-1} + b^{n-1} + b^{n-1} + \cdots + b^{n-1}) \\[10pt] & = (b-a) n b^{n-1}. \end{align}
The only positive integer $n$ for which this does not work is $n=1,$ where the second factor has only one term, which is $1.$ And in that case it works if you say $\text{"}\le\text{"}$ instead of $\text{"}<\text{"}.$
\begin{align} b^2-a^2 & = (b-a)(b+a) < (b-a)(b+b) & & = (b-a)2b. \\[10pt] b^3-a^3 & = (b-a)(b^2 + ba + a^2) < (b-a)(b^2+b^2+b^2) & & = (b-a)3b^2. \\[10pt] b^4 - a^4 & = (b-a)(b^3+b^2a+ba^2+a^3) \\ & < (b-a)(b^3+b^3+b^3+b^3) & & = (b-a)4b^3. \\[10pt] b^5-a^5 & = (b-a)(b^4 + b^3a + b^2 a^2 + ba^3 + a^4) \\ & < (b-a)(b^4+b^4+b^4+b^4+b^4) & & = (b-a)5b^4. \\[10pt] & \qquad\qquad\text{and so on.} \end{align}
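A quick numeric sanity check of the displayed chain of inequalities; the test values of $a$, $b$ are arbitrary and the snippet is illustrative only.

def strict_inequality_holds(a, b, n):
    return b**n - a**n < (b - a) * n * b**(n - 1)

for a, b in [(0.5, 1.0), (1.0, 2.0), (2.0, 5.0)]:
    for n in range(2, 8):
        assert strict_inequality_holds(a, b, n), (a, b, n)

a, b = 1.0, 2.0
print(b**1 - a**1 == (b - a) * 1 * b**0)   # True: for n = 1 the two sides are equal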
Michael Hardy
$\begingroup$ +$1$: a superb answer per your usual style. I like that you gave additional examples, too, to better illustrate the method. $\endgroup$
$\begingroup$ @Clayton : Thank you. $\endgroup$
Observe that $n > 1$ for the assertion to be valid. Thus: $b^{n-1-k}a^k< b^{n-1-k}b^k=b^{n-1}$. Letting $k$ run from $0$ to $n-1$ and adding them up: $b^{n-1}+b^{n-2}a+\cdots+a^{n-1} < nb^{n-1}$, which implies the inequality in question.
DeepSea
$\begingroup$ Ok, I get the argument. What I am confused about is the sign "strictly less than". I think it makes b - a < b - a when n=1. If the sign had been "less than or equal to", I would have easily seen that when n=1, b - a = b - a. I will like to know if what I am thinking does not matter or if it is wrong. $\endgroup$
$\begingroup$ I think it is a mistake in the text, it should hold for $n>1$. When $n=1$ then the $\ge$ sign should be used. $\endgroup$
– Btzzzz
$\begingroup$ I think my source of confusion has been addressed. In addition, I understand the proof better now. I appreciate you guys for your contributions. I particularly appreciate Micheal Hardy's effort. Thanks $\endgroup$
$\begingroup$ @ Btzzzz. Ok I see $\endgroup$
$\begingroup$ Perhaps something in the context of that section of the textbook implies $n>1$? $\endgroup$
We basically need to show $$ b^{n-1}+\ldots+a^{n-1}<nb^{n-1}$$ Since $a<b$, then $a^{n-1}<b^{n-1}$. There are $n$ terms, and so the inequality holds only when $n>1$.
Note that there is the same question here which uses a $\le$ sign, so I think it is a misprint in the text.
Btzzzz
$\begingroup$ I have to say, that's one of the most vexing usages of "$\dots$" I've ever seen. $\endgroup$
– Joker_vD
As $0 < a < b$ then $a^k < b^k$ and....
\begin{align} b^n-a^n & = (b-a)(b^{n-1}+b^{n-2}a+\cdots+a^{n-2}b + a^{n-1}) \\ & =(b-a)\sum_{k=0}^{n-1} b^{n-1-k}a^k \\ & < (b-a)\sum_{k=0}^{n-1}b^{n-1-k} b^k \\ & = (b-a)\sum_{k=0}^{n-1} b^{n-1} \\ & =(b-a)nb^{n-1}. \end{align}
fleablood
$\begingroup$ $(b-a)\sum_{k=1}^{n-1} b^{n-k}a^k$ should be $=(b-a)\sum_{k=0}^{n-1} b^{n-1-k}a^k$. Then $n-1$ becomes $n$. $\endgroup$
– Jens Schwaiger
EURASIP Journal on Bioinformatics and Systems Biology
December 2016, 2017:3
Autism spectrum disorder detection from semi-structured and unstructured medical data
Jianbo Yuan
Chester Holtz
Tristram Smith
Jiebo Luo
First Online: 01 February 2017
Part of the following topical collections:
Biomedical Informatics with Optimization and Machine Learning
Autism spectrum disorder (ASD) is a developmental disorder that significantly impairs patients' ability to perform normal social interaction and communication. Moreover, the diagnosis procedure of ASD is highly time-consuming, labor-intensive, and requires extensive expertise. Although there exists no known cure for ASD, there is consensus among clinicians regarding the importance of early intervention for the recovery of ASD patients. Therefore, to benefit autism patients by enhancing their access to treatments such as early intervention, we aim to develop a robust machine learning-based system for autism detection using Natural Language Processing techniques based on information extracted from medical forms of potential ASD patients. Our detection framework involves converting semi-structured and unstructured medical forms into digital format, preprocessing, learning document representations, and finally, classification. Testing results are evaluated against the ground truth set by expert clinicians, and the proposed system achieves 83.4% accuracy and 91.1% recall, which is very promising. The proposed ASD detection framework could significantly simplify and shorten the procedure of ASD diagnosis.
Autism spectrum disorder; Distributed representation; Medical forms; Classification
Autism spectrum disorder (ASD) is a general classification for a broad range of disorders with a variety of issues stemming from complications with neurological development. Symptoms of ASD are of varied severity, involving difficulties with verbal and nonverbal communication, repetitive behaviors, and typical social interaction. The defining features of ASD are deficits in reciprocal social communication and frequent or intense repetitive or restrictive behaviors [1]. ASD has a prenatal or early childhood onset and a chronic course. Although previously considered rare, ASD is now estimated to occur in approximately 1 in 68 individuals, a threefold increase in reported prevalence in 10 years [2]. No conclusions have been reached on whether the rising prevalence estimates reflect an actual increase in prevalence or are just an artifact of changes in screening and diagnosis. Studies also show that ASD is about four times more common among boys than girls.
Currently, no laboratory test for ASD exists, and the process of diagnosing the disorder is highly complex and labor-intensive, requiring extensive expertise. Clinicians diagnose ASD based on a variety of factors including a review of medical records, medical and neurological examinations, standardized developmental tests, and behavioral assessments, such as the Autism Diagnostic Observation Schedule [3]. Because of the resources and skill required to assemble and integrate this information, few centers offer ASD diagnostic evaluations, and these centers have lengthy waiting lists, ranging from 2 to 12 months for an initial appointment. Waiting is not only stressful for children with ASD and their families, but it delays their access to early intervention services, which have been shown to improve outcomes dramatically in many cases [4]. Furthermore, symptoms of ASD are very similar to, and can easily be confused with, those of other mental illnesses whose treatment procedures are very different, such as depression. To simplify the diagnostic process and shorten the waiting time, a computerized method for detecting ASD that requires little or no expert supervision would be a major advance over current practice.
We tested the feasibility and potential utility of a novel method for identifying children who may have ASD: natural language processing (NLP) with machine learning. The greatest challenges of research on biomedical resources are the limitations on labeled data scale and data quality. First, it is not feasible to use crowd-sourcing tools such as Amazon Mechanical Turk (AMT) for data labeling because of privacy issues, which greatly limits the data scale. Second, biomedical data for potential ASD patients are strictly restricted by privacy issues and by the limited clinical resources for diagnosing ASD, as there are only twelve ASD diagnosis centers in the USA. Moreover, the data resources to which we have access are complicated and noisy, especially when the data include hand-writing and are not stored in a usable format. For this initial study, we had access to the semi-structured and unstructured medical forms for 199 potential ASD patients in hand-written format. We converted all hand-written forms into digital format, extracted de-identified information from medical records obtained prior to the initial diagnostic evaluation, and examined whether our proposed algorithm could accurately predict which children should or should not receive an ASD diagnosis. Predictions are evaluated by an expert clinician in the Andrew J. Kirch Developmental Services Center at Golisano Children's Hospital and confirmed by a standardized diagnostic instrument, the Autism Diagnostic Observation Schedule. To the best of our knowledge, our work is the first to propose a computerized ASD detection framework based only on hand-written semi-structured and unstructured medical forms. More specifically, the results generated by our proposed framework, which favors high recall, are suitable for identifying potential ASD patients who need to seek further clinical help, but should not be considered a definitive diagnosis. In particular, our contributions include the following:
We propose a robust machine learning approach to tackle a challenging problem that involves mining from semi-structured and unstructured medical data in hand-written format.
We convert semi-structured and unstructured medical forms into de-identified text content in a ready-to-use format, and the same conversion procedures can be used to extract information from confidential hand-written forms at large scale.
We apply different word embedding models including the state-of-the-art distributed representations and establish a promising baseline for automated ASD detection on such a dataset.
2 Related work
We are in an era of exploring data from all domains, such as multimedia data from social networks and forms and videos from the biomedical domain, and taking advantage of such data to benefit human lives. For example, researchers have used social multimedia data to monitor people's mental health condition or emotional status. Other researchers have successfully recognized human sentiments based on recorded voice [5]. Yuan et al. [6] analyzed users' sentiment changes over time based on massive social multimedia data, including texts and images from Twitter, and found a strong correlation between sentiments expressed in textual and visual contents. Zhou et al. [7] integrated unobtrusive multimodal sensing, such as head movement, breathing rate, and heart rate, for mental health monitoring.
Much research has focused on medical applications and has involved machine-learning techniques. Compared with traditional biomedical diagnostic procedures which are usually time-consuming, labor-intensive, and limited to a small scope, new adoption of machine learning techniques into practical medical applications has advantages in terms of efficiency, scalability and reliability. For example, Devarakonda and Tsou developed a machine learning framework to automatically generate an open-end medical problem list for a patient using lexical and medical features extracted from a patient's Electronic Medical Records [8]. Hernandez et al. [9] explored the feasibility of monitoring user's physiological signals using Google glass and showed promising results. For diagnosing ASD, the most relevant data are observations of the child's social communication and repetitive behavior. To obtain these data, we focus our research only on previously acquired records of potential patients, as these records contain comments about children's behavior. The most similar work to ours is from [10], who analyzed digital early intervention records to detect ASD based on bigram and unigram features. Another research perspective on automated ASD assessment is to extract patterns from deficits in semantic and pragmatic expression [11, 12].
Another family of related work is on learning representations of texts, which embed words or documents into a relatively low-dimensional vector space of real numbers, as in [13]. Lexical features include Bag-of-Words (BoW), n-grams (typically bigrams and trigrams), and term frequency-inverse document frequency (tf-idf). Topic models such as Latent Dirichlet Allocation (LDA) are also used as features in document classification problems, and research shows that topic models outperform lexical features in some cases, such as sentiment analysis [14, 15]. Recent word embedding algorithms are driven by the development of deep learning techniques. Distributed representations are obtained from a recurrent neural net language model [16, 17], which explores the skip-gram model with subsampling of the frequent words, achieving a significant speedup and more accurate representations of less frequent words.
3 ASD detection framework
The biggest challenges in applying machine learning algorithms to medical studies are limited data scale, data labeling, and domain knowledge. Patients' and non-patients' data are more difficult to obtain compared with social media data due to the fact that fewer public biomedical data resources exist. For example, one video for medical use would require hours of recording and the participation of a doctor who has special expertise in such an area. These data are also kept strictly confidential unless patients expressly authorize release. In contrast, we can easily crawl thousands of tweets from Twitter about a certain topic in one hour. Additionally, data labeling and result evaluation would be another issue after data collection. Though crowd-sourcing techniques such as Amazon Mechanical Turk have been widely used for labeling in machine learning and computer vision tasks, they are not feasible for our case because we cannot reveal personal information to the crowd. We depend instead on reviews by expert clinicians for data labeling.
In our case, we have collected hand-written medical forms from parents and service providers of children who have shown signs of ASD and thus need further rigorous evaluation. Those hand-written forms are far from a ready-to-use format since they are not even digitized. Thus we first scan all the medical forms and save them as images on a server that meets our institution's stringent standards for maintaining confidentiality of electronic medical records and that is only accessible by authorized users for privacy concerns. We then conduct preprocessing procedures including de-skewing (meaning that we rotate the skewed scanned medical files to the correct angle) and de-identification (automatically blanking areas containing personal information). OCR software is used to convert scanned documents into text content. In the next step, we extract features based on the digital forms and perform classification using support vector machines to detect children with a high probability of having ASD. The features we extract include lexical features such as Bag-of-Words, n-grams and term frequency-inverse document frequency (tf-idf), a topic model (LDA), and distributed representations based on the skip-gram model. Our proposed framework is shown in Fig. 1.
The framework of proposed ASD detection
3.1 Data collection
We have collected semi-structured and unstructured medical forms of children who have been referred for an evaluation of possible ASD. We first scan all the medical forms into digital format (tif) and go through preprocessing. In the next step, we use OCR software to recognize text content from the scanned documents. Hand-writing recognition is a well-established problem and we have experimented with different resources including Omnipage Capture SDK [18], Captricity [19], and ABBYY [20], which to the best of our knowledge are among the best tools on the market for recognizing hand-written letters and have been widely used in recognizing and transforming documents into usable digital forms [21]. Even so, the results are not satisfactory in some cases. We then inspect and manually correct data for all the medical documents that have been processed through OCR, which makes data collection much more time-consuming and labor-intensive.
In this study, we have digitized forms for 199 patients, with 56 children diagnosed as actual ASD patients (positive samples) and 143 non-ASD patients (negative samples). The medical forms we analyzed include: referral form from primary care physician, parent and teacher questionnaire, preschool and early intervention questionnaire, and additional forms including phone intake by social workers. All the forms for each potential patient are concatenated together and treated as one document for the classification. Ground truth labels of patients (ASD or not ASD) are obtained from clinical reports.
3.2 Data preprocessing
A new problem arises from document scanning, since the scanned forms are sometimes skewed. Additionally, the scanned forms contain personal information such as names, phone numbers, addresses, and so on. Therefore, we go through preprocessing procedures including de-skewing and de-identification. Such a preprocessing procedure is necessary because OCR SDKs such as Captricity do not have an embedded de-skewing option and their process involves recognizing documents slice by slice horizontally. Our preprocessing improves the generated results significantly in most cases. By applying preprocessing, OCR, and manual correction afterwards, we are able to reduce the time for data collection and conversion by about 80%.
De-skewing: We used a simple but effective de-skewing algorithm: first we compute the entropy defined in Eq. 1 based on the probability that a black pixel $x$ appears in line $i$, denoted by $P_{\alpha}(x_i)$ for a given skew angle $\alpha$, which is calculated as the count of black pixels in line $i$ divided by the total number of pixels in the same line after skewing at angle $\alpha$. We removed pixel lines which have less than 10% black pixels for two reasons: the value of the function $P\log(P)$ rises with the value of $P$ over the range $[0.1, 1]$, but acts the opposite way on the range $[0, 0.1)$; and the $P(x_i)$ value of pixel lines containing text content is usually larger than 10%, except for lines with pepper noise. We then find the value of $\alpha$ that minimizes the entropy.
$$ H(X) = -\sum_{i}P_{\alpha }(x_{i})\log P_{\alpha }(x_{i}) $$
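A minimal sketch of this entropy-based de-skewing, assuming `page` is a binary NumPy array (1 = black pixel) and that SciPy is available; the candidate angle grid is an assumption, while the 10% threshold follows the text.

import numpy as np
from scipy.ndimage import rotate

def skew_entropy(page, angle):
    # Rotate with nearest-neighbour interpolation so the image stays binary.
    rotated = rotate(page, angle, reshape=False, order=0)
    p = rotated.sum(axis=1) / rotated.shape[1]      # fraction of black pixels per line
    p = p[p >= 0.10]                                # drop lines with fewer than 10% black pixels
    return float(-(p * np.log(p)).sum())            # Eq. 1

def estimate_skew(page, angles=np.arange(-10.0, 10.25, 0.25)):
    # Assumes the page contains at least one text line above the 10% threshold.
    return min(angles, key=lambda a: skew_entropy(page, a))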
De-identification: Since parts of medical forms are semi-structured, the regions containing personal information are located in relatively fixed areas for each type of form. Each form has its own distinctive feature such as edges in the parent's questionnaire, which can be used to locate the areas needed to be de-identified. For unstructured forms, we manually black out the regions containing personal information. We use the following Sobel operator to extract edges of each medical form and automatically de-identify the information by blanking out such fields. We apply a pair of 3×3 convolution kernels as in Eq. 2.
$$ \begin{aligned} \left[ \begin{array}{ccc} -1 & 0 & 1\\ -2& 0 & 2\\ -1& 0 & 1 \end{array}\right] \left[ \begin{array}{ccc} 1 & 2 & 1\\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{array}\right] \end{aligned} $$
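A minimal sketch of the edge extraction used to locate form structure for de-identification, assuming `scan` is a 2-D grayscale NumPy array; the header-blanking helper is purely illustrative and not the authors' exact procedure.

import numpy as np
from scipy.signal import convolve2d

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # first kernel in Eq. 2
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)   # second kernel in Eq. 2

def edge_magnitude(scan):
    gx = convolve2d(scan, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(scan, SOBEL_Y, mode="same", boundary="symm")
    return np.hypot(gx, gy)

def blank_header(scan, separator_row):
    # Black out everything above a detected separator line (e.g., the row of asterisks
    # mentioned in Section 5.1) to remove personal information.
    out = scan.copy()
    out[:separator_row, :] = 0
    return out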
4 Learning document representation and performing document classification
Learning good representations of documents to capture the semantics behind text contents is central to a wide range of NLP tasks such as sentiment analysis, and document classification as in our case.
4.1 Lexical features
Lexical features are widely used in NLP tasks, including the Bag-of-Words model, the n-gram model, and tf-idf. These features capture the occurrences of words or phrases and usually lead to a high-dimensional feature space of tens of thousands of dimensions, depending on the dataset.
Bag-of-Words and N-Gram Model: The Bag-of-Words (BoW) model is a common way to represent documents in matrix form. A sentence or a document is represented as a vector with one entry per word in the dictionary, where each entry indicates the occurrence of that word in the input sentence or document. However, the BoW model captures neither the ordering nor the semantic meanings of words. The n-gram model is similar to the BoW model, with an extension from a bag of single words to a bag of, typically, two-word or three-word phrases, known as bigrams and trigrams. The n-gram model preserves the ordering of the words and captures a better sense of semantics than the BoW model.
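A minimal sketch of extracting BoW and n-gram counts with scikit-learn; the toy documents and the built-in English stop-word list stand in for the converted forms and the 174 stop words mentioned in Section 5.2.

from sklearn.feature_extraction.text import CountVectorizer

documents = ["child avoids eye contact and repeats phrases",
             "typical social play with peers"]                       # toy stand-ins for the forms

bow = CountVectorizer(stop_words="english")                           # unigram counts
ngrams = CountVectorizer(ngram_range=(2, 3), stop_words="english")    # bigrams and trigrams

X_bow = bow.fit_transform(documents)        # sparse (documents x vocabulary) count matrix
X_ngram = ngrams.fit_transform(documents)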
Term Frequency-Inverse Document Frequency: Both the BoW and n-gram models place much weight on frequent words (with or without preserving word order), which strongly favors frequent stop words such as a/an, the, and, etc., and results in a noisy representation of the documents. Tf-idf, in contrast, is a weighted form of term frequency, a statistical measure used to evaluate how important a word is to a document. Let $tf(w,d)$ denote the number of times word $w$ appears in document $d$, where document $d$ belongs to a document set $D$, and let $idf(w,D)$ denote the inverse document frequency of word $w$ in the set $D$; then tf-idf is defined in Eqs. 3 and 4.
$$\begin{array}{@{}rcl@{}} tf-idf &=& tf(w,d) \times idf(w,D) \end{array} $$
$$\begin{array}{@{}rcl@{}} idf(w,D) &=& \log \frac{N}{1+\left | \left \{ d\in D:w\in d \right \} \right |} \end{array} $$
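A minimal sketch of Eqs. 3-4 written out directly, so the smoothing term (1 + document frequency) matches the text; library implementations such as scikit-learn's TfidfVectorizer use a slightly different default idf formula. Token lists are assumed as input.

import math
from collections import Counter

def tf_idf(documents):
    """documents: list of token lists; returns one {word: weight} dict per document."""
    N = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc))                                   # document frequency of each word
    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({w: tf[w] * math.log(N / (1 + df[w])) for w in tf})   # Eqs. 3-4
    return weights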
4.2 Latent Dirichlet Allocation (LDA)
Assuming that each document is a mixture of latent topics, LDA is a probabilistic model which learns $P(t|w)$, the probability that word $w$ belongs to a certain latent topic $t$ in a topic set $T$ (usually with a pre-defined number of topics) [14]. By normalizing each word vector from a sentence or a document based on the word-topic probabilities, we obtain the sentence or document vector for the topic distribution and thus embed the target document into a vector based on the LDA model. Compared with the lexical features mentioned above, the document representation learned by the LDA model indicates the distribution of topics given the input word or document, has lower dimensionality, and focuses more on the latent semantic meaning of the input texts.
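A minimal sketch of building LDA-based document vectors with Gensim, which Section 5.2 states was used; the toy token lists and the choice of 50 topics (one of the 50-300 settings tested later) are assumptions.

from gensim.corpora import Dictionary
from gensim.models import LdaModel

tokenized = [["avoids", "eye", "contact"], ["plays", "functionally", "with", "peers"]]  # toy input

dictionary = Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]
lda = LdaModel(corpus, num_topics=50, id2word=dictionary, passes=10)

def lda_vector(doc_bow, num_topics=50):
    # Document vector = topic distribution P(t | d), padded to a fixed length.
    dense = [0.0] * num_topics
    for topic_id, prob in lda.get_document_topics(doc_bow, minimum_probability=0.0):
        dense[topic_id] = prob
    return dense

doc_vectors = [lda_vector(bow) for bow in corpus]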
4.3 Distributed Representation (Doc2vec)
Following the work in [16, 17], we extracted the state-of-the-art distributed representations of the documents. In contrast to the lexical features, the semantic meaning conveyed by each word is assumed to be distributed along a word window in the distributed representations (also known as the doc2vec feature) [17]. Doc2vec is learned based on word2vec, which can be trained in a Continuous Bag-of-Words (CBoW) or a Skip-gram fashion. For word vector learning based on CBoW as shown in Fig. 2, given a sequence of $N$ words $\{w_1, w_2, \ldots, w_N\}$, the objective is to predict the target word $w_i$ given the surrounding words within a window of size $c$:
$$ \frac{1}{N}\sum\limits_{i=c}^{N-c}\log p(w_{i}|w_{i-c},\ldots,w_{i+c}) $$
The framework of learning document representations
The probability of $w_i$ in the objective function is calculated based on the softmax function shown in Eq. 6, where the word vectors are concatenated for predicting the next word in the context. The Skip-gram model simply reverses the direction of word prediction relative to the CBoW model: the objective is to predict the surrounding words given one word as input. Similarly, learning the doc2vec vector maximizes the averaged log probability with the softmax function by combining the word vectors with the paragraph vector $p_i$ in a concatenated fashion, as shown in Fig. 2. In our case, we choose to learn our document representations based on the CBoW model, following the conclusions that it extracts better information when the data scale is limited and generally performs better in later classification tasks, as demonstrated in [17].
$$ p\left(w_{i}|w_{i-c},\ldots,w_{i+c}\right) = \frac{e^{y_{w_{i}}}}{\sum_{j\in (1,\ldots,N)} e^{y_{w_{j}}}} $$
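A minimal sketch of learning doc2vec representations with Gensim, under the assumption that dm=1 (the distributed-memory model) plays the role of the CBoW-style training described above; vector_size=150 mirrors the best-performing dimensionality reported in Section 5.2, and the toy token lists are placeholders.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tokenized = [["avoids", "eye", "contact"], ["typical", "social", "play"]]   # toy input
tagged = [TaggedDocument(words=doc, tags=[i]) for i, doc in enumerate(tokenized)]

model = Doc2Vec(tagged, vector_size=150, window=5, min_count=1, dm=1, epochs=40)
doc_vectors = [model.dv[i] for i in range(len(tagged))]   # model.docvecs[i] in older Gensim versions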
Upsampling: Since our dataset is imbalanced in that we have more negative samples, we upsample the positive samples before the training process. Our experimental results show an improvement over results without upsampling which will be discussed later in Section 5. For each pair of positive samples, we compute their Euclidean distance, and then find the nearest positive neighbours for each positive sample. Artificial positive samples are generated randomly between each positive sample and its nearest positive neighbours.
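A minimal sketch of the nearest-neighbour upsampling described above, interpolating between each positive sample and its nearest positive neighbour; the function name and the default of two synthetic samples per positive (matching Section 5.2) are assumptions.

import numpy as np

def upsample_positives(X_pos, per_sample=2, seed=None):
    rng = np.random.default_rng(seed)
    synthetic = []
    for i, x in enumerate(X_pos):
        dists = np.linalg.norm(X_pos - x, axis=1)
        dists[i] = np.inf                            # exclude the sample itself
        neighbour = X_pos[np.argmin(dists)]          # nearest positive neighbour
        for _ in range(per_sample):
            t = rng.uniform(0.0, 1.0)                # random point on the connecting segment
            synthetic.append(x + t * (neighbour - x))
    return np.vstack(synthetic)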
Classifier: We use Support Vector Machines (SVM) for ASD detection. In order to extract discriminative features, we use lexical features, the LDA model, and doc2vec features. These features are useful, but they contribute to a relatively high dimensional space compared with our dataset scale. Such high dimensional spaces pose a potential risk of overfitting and can reduce the robustness of our system. Therefore, when dealing with high dimensions, we add L1-regularization to our objective function to enforce the sparsity of the weights, as shown in Eq. 7. On the other hand, if the feature space is not high dimensional, such as the representations extracted from the LDA and doc2vec models, we add an L2-regularization term, as shown in Eq. 8.
$$ \min_{w} \left \| w \right \|_{1} + C \sum\limits_{i=1}^{l}\left(\max \left(0,1-y_{i}w^{T}x_{i} \right)\right)^{2} $$
$$ \min_{w} \frac{1}{2}w^{T}w + C\sum\limits_{i=1}^{l} \left(\max \left(0,1-y_{i}w^{T}x_{i} \right)\right)^{2} $$
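A minimal sketch of the two classifiers in Eqs. 7-8 using liblinear through scikit-learn's LinearSVC (squared hinge loss, L1 or L2 penalty); the feature-matrix names in the commented fit calls are placeholders.

from sklearn.svm import LinearSVC

svm_l1 = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=1.0)   # Eq. 7: sparse weights
svm_l2 = LinearSVC(penalty="l2", loss="squared_hinge", dual=False, C=1.0)   # Eq. 8

# svm_l1.fit(X_lexical, y); svm_l2.fit(X_doc2vec, y)   # feature matrices are placeholders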
5 Results and discussion
In this section, we demonstrate preprocessing results and evaluate our proposed ASD detection framework.
5.1 Preprocessing Results
Due to the page limit, we only show one example of a particular medical form (referral form from primary care physician) in Fig. 3, which is a semi-structured form. This form is clearly skewed to the left with a slight distortion which makes each line not straight, as demonstrated in Fig. 3 (left column). Such skewed documents will raise issues when passed to our OCR tools because the OCR tools will slice the document horizontally before text recognition and cut the words in the skewed lines in half. Our entropy-based de-skewing algorithm was able to find an optimal de-skewing angle and re-orient the form in a better shape. However, since the distortion exists, the computed optimal angle only ensures that the majority of lines and words are horizontal, as shown in the middle of Fig. 3. The top of this form contains confidential personal information which is kept above the line of asterisks, including name, ID, phone number, etc. Our de-identification process tracked the line of asterisks automatically and blacked out the region above the line, as shown in Fig. 3 (right column). Our example in Fig. 3 is a semi-structured form and can be processed in an unsupervised manner, where we have knowledge of the document structure and can track the specific areas that include confidential information. For unstructured forms, personal information appears randomly on each form and it is not feasible to recognize it all using algorithms with zero miss rate. Therefore we manually smeared the parts containing confidential information on each unstructured document and then passed the documents to the OCR tools to convert them into text content, which makes our preprocessing semi-supervised in general.
An example of semi-structured medical form (left), after de-skewing (middle) and de-identification (right)
5.2 Classification results
Lexical features such as BoW, tf-idf and n-gram (we choose bigram and trigram) generated relatively high dimensional vector representations of the target documents. We remove stop words (174 in total) such as "a", "the", "is/are", "he/she", etc. Our feature extraction results are shown in Table 1. We used Gensim to build the LDA model and extract doc2vec features because of its efficient implementation and good scalability [22]. We extracted 50, 100, 150, 200, 250, and 300 topics and features, respectively.
Number of extracted lexical features (table listing the number of features for TF-IDF and N-Gram)
We used liblinear with L1-regularized and L2-regularized classification [23] for our document classification task. Since there are more negative samples in our dataset, we upsampled the positive samples before training. For lexical features, we chose to use the L1-regularized SVM to reinforce the sparsity of the feature space, and the L2-regularized SVM for LDA and doc2vec features. Compared with the total 18,962-dimensional feature space, only 386 of the learned weights are non-zero. We performed 7-fold cross-validation for evaluation and 5-fold cross-validation to learn the optimal parameters during the training process. For the training data of each fold, we generated two artificial positive samples for each positive sample, which resulted in a more balanced dataset. Tables 2 and 3 and Fig. 4 show classification results including accuracy, precision and recall based on BoW, tf-idf and n-gram, and the combination of the three, as well as LDA and doc2vec features. Since our application emphasizes recall over precision, the F2 scores are also provided (Eq. 9) in Tables 2 and 3. Since performance differs with the number of LDA and doc2vec features, we only show the best results in Tables 2 and 3, obtained with 150 dimensions for doc2vec and 200 dimensions for LDA with upsampling, and 150 dimensions for doc2vec and 100 dimensions for LDA without upsampling.
$$ F2 = \frac{5\cdot precision\cdot recall}{4\cdot precision+ recall} $$
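A minimal sketch of the evaluation protocol: 7-fold cross-validation scored with the F2 measure of Eq. 9 (scikit-learn's fbeta_score with beta = 2 is exactly this formula); X and y stand for the document vectors and ground-truth labels and are placeholders.

from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

f2_scorer = make_scorer(fbeta_score, beta=2)          # fbeta with beta = 2 is exactly Eq. 9
# scores = cross_val_score(LinearSVC(dual=False), X, y, cv=7, scoring=f2_scorer)
# print(scores.mean())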
Classification results for LDA and doc2vec features with different dimensions
Classification results without upsampling
Classification results with upsampling
As the results show, our proposed framework was tuned towards better performance on recall while maintaining decent precision and accuracy, because we do not want to miss any potential ASD patients. Comparisons between lexical features show that the combination of all lexical features yields the best performance. Both BoW and tf-idf features perform similarly, and n-gram features alone are very close to the combination of all three. On the other hand, features extracted using the LDA model show some improvements in precision and recall over all combinations of lexical features, but these are neither significant nor as good as the doc2vec features. The reason could be that LDA emphasizes modeling topics from documents, and the use of LDA gives no guarantee of generating robust document representations [15]. According to our experimental results, distributed representations provide the best classification results, and the best performance is obtained when the number of dimensions is 150. Additionally, as demonstrated in Fig. 4, more dimensions for LDA and doc2vec yield little improvement in performance, if any. For the LDA model, we expect that not too many topics can be extracted from the data considering our data scale, and a larger number of topics will reduce the effectiveness of the LDA model and add noise to the learned document vector. For the doc2vec model, the reason is that it is expected to learn a decent semantic embedding of the documents within low dimensions, and in our case adding more dimensions will increase the risk of overfitting considering the scale of our dataset.
By applying upsampling, the precision and recall for LDA and doc2vec features rise significantly, but only small improvements are obtained for lexical features. This is because the LDA and doc2vec models generally learn a better representation of the documents, and the upsampling process we proposed reinforces the positive samples' representations when the data are well separated. On the other hand, lexical features cannot capture the documents as effectively as doc2vec, and for cases where positive and negative samples are not well separated, such as with lexical features, the proposed upsampling process does not yield much improvement. The performance will benefit and become more robust when we provide a more balanced dataset. Table 4 shows the top 10 features with the highest absolute weight values based on lexical features. These features are very consistent with clinicians' opinions on the keywords and key phrases regarding ASD diagnosis. However, the weights learned by the classifiers are not very distinguishable in value from each other, which shows that the document representations obtained from lexical features are not sufficient for robust ASD detection.
Top 10 selected features with the largest weights
Vocalizes vowel sounds
Attention span
Actively involved
Functionally plays
Affection family
The reported prevalence of ASD has risen sharply over the past 25 years and the diagnosis of ASD is highly time-consuming and labor-intensive. Our proposed ASD detection algorithm has demonstrated high promise for detecting ASD based on the patients' medical forms. Our method could significantly shorten the waiting time of the ASD diagnosis procedure and benefit the patients by facilitating potential early intervention services, which have been proven to be very useful in many cases. Although the main focus of this paper is on ASD detection, the proposed NLP-based framework can potentially be extended to other types of health-related issues such as depression, anxiety, etc. For future work, we are working on computerized generation of an index for ASD patients indicating the severity of their condition based on their medical data, so that it can be used to monitor their progress over time. Furthermore, changes in the index could potentially serve as an outcome measure in trials of different therapies.
We are grateful for the funding from the New York State through the Goergen Institute for Data Science, and the University Multidisciplinary Research Award.
JY contributed to the main algorithm design, experiments and results analysis. CH worked mainly on data collection, data cleaning and data correction. TS contributed to data collection, establishing ground truth and providing multi-discipline view from the biomedical perspective. JL instructed the algorithm, experiments and analysis from a data-driven machine learning perspective. All authors have contributed to the writing of this paper. All authors read and approved the final manuscript.
We only publish de-identified information. We have IRB approval on record.
American Psychiatric Association, Committee on Nomenclature and Statistics, Diagnostic and Statistical Manual of Mental Disorders, Third Edition (American Psychiatric Association, Washington, DC, 1980).
M Wingate, RS Kirby, S Pettygrove, C Cunniff, E Schulz, T Ghosh, C Robinson, L-C Lee, R Landa, J Constantino, Prevalence of autism spectrum disorder among children aged 8 years-autism and developmental disabilities monitoring network, 11 sites, United States, 2010. Surveill. Summ. 63(SS02), 1–21 (2014).
CP Johnson, SM Myers, Identification and evaluation of children with autism spectrum disorders. Pediatrics 120(5), 1183–1215 (2007).
B Reichow, Overview of meta-analyses on early intensive behavioral intervention for young children with autism spectrum disorders. J. Autism Dev. Disord. 42(4), 512–520 (2012).
N Yang, R Muraleedharan, J Kohl, I Demirkol, W Heinzelman, M Sturge-Apple, in Proceedings of the 4th IEEE Workshop on Spoken Language Technology. Speech-based emotion classification using multiclass SVM with hybrid kernel and thresholding fusion (IEEE, Miami, 2012), pp. 455–460.
J Yuan, Q You, J Luo, in Multimedia Data Mining and Analytics. Sentiment analysis using social multimedia (Springer, 2015), pp. 31–59.
D Zhou, J Luo, V Silenzio, Y Zhou, J Hu, G Currier, H Kautz, in Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. Tackling mental health by integrating unobtrusive multimodal sensing (Austin, 2015), pp. 1401–1408.
M Devarakonda, C-H Tsou, in Proceedings of the 27th Conference on Innovative Applications of Artificial Intelligence (AAI). Automated problem list generation from electronic medical records in IBM Watson (Austin, 2015), pp. 3942–3947.
J Hernandez, Y Li, JM Rehg, RW Picard, in Wireless Mobile Communication and Healthcare (Mobihealth), 2014 EAI 4th International Conference On. BioGlass: physiological parameter estimation using a head-mounted wearable device (IEEE, Athens, 2014), pp. 55–58.
M Liu, Y An, X Hu, D Langer, C Newschaffer, L Shea, in Bioinformatics and Biomedicine (BIBM), 2013 IEEE International Conference On. An evaluation of identification of suspected autism spectrum disorder (ASD) cases in early intervention (EI) records (IEEE, Shanghai, 2013), pp. 566–571.
M Rouhizadeh, E Prud'hommeaux, B Roark, J Van Santen, in Proceedings of the Conference. Association for Computational Linguistics. North American Chapter. Meeting, 2013. Distributional semantic models for the evaluation of disordered language (NIH Public Access, Atlanta, 2013), p. 709.
E Prud'hommeaux, E Morley, M Rouhizadeh, L Silverman, J van Santen, B Roark, R Sproat, S Kauper, R DeLaHunta, in Spoken Language Technology Workshop (SLT), 2014 IEEE. Computational analysis of trajectories of linguistic development in autism (IEEE, South Lake Tahoe, 2014), pp. 266–271.
J Turian, L Ratinov, Y Bengio, in Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Word representations: a simple and general method for semi-supervised learning (Association for Computational Linguistics, Uppsala, 2010), pp. 384–394.
DM Blei, AY Ng, MI Jordan, Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003).
AL Maas, RE Daly, PT Pham, D Huang, AY Ng, C Potts, in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Learning word vectors for sentiment analysis (Association for Computational Linguistics, Portland, 2011), pp. 142–150.
T Mikolov, I Sutskever, K Chen, GS Corrado, J Dean, in Advances in Neural Information Processing Systems 26. Distributed representations of words and phrases and their compositionality (Neural Information Processing Systems (NIPS) Conference, 2013), pp. 3111–3119.
QV Le, T Mikolov, in Proceedings of the International Conference on Machine Learning (ICML), 14. Distributed representations of sentences and documents (Beijing, 2014), pp. 1188–1196.
OmniPage Capture SDK. http://www.nuance.com/for-business/by-product/omnipage/csdk/index.htm. Accessed March 2015.
Captricity, Unprecedented Data Accessed at Your Service. https://captricity.com/. Accessed March 2015.
ABBYY Recognition Server. http://www.abbyy.com/recognition-server/. Accessed March 2015.
J Shin, in Big Data (Big Data), 2014 IEEE International Conference On. Investigating the accuracy of the openFDA API using the FDA adverse event reporting system (FAERS) (2014), pp. 48–53. doi:10.1109/BigData.2014.7004412.
R Rehurek, P Sojka, in Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. Software framework for topic modeling with large corpora (2010).
R-E Fan, K-W Chang, C-J Hsieh, X-R Wang, C-J Lin, LIBLINEAR: a library for large linear classification. J. Mach. Learn. Res. 9, 1871–1874 (2008).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Department of Computer Science, University of Rochester, Rochester, USA
2. School of Medicine and Dentistry, University of Rochester Medical Center, Rochester, USA
Yuan, J., Holtz, C., Smith, T. et al. J Bioinform Sys Biology (2016) 2017: 3. https://doi.org/10.1186/s13637-017-0057-1
Received 25 May 2016
First Online 01 February 2017
Spatial estimation of soil erosion using RUSLE modeling: a case study of Dolakha district, Nepal
Pawan Thapa (ORCID: orcid.org/0000-0002-4331-5315)1
Soil erosion causes topsoil loss, which decreases the fertility of agricultural land. Spatial estimation of soil erosion is essential for an agriculture-dependent country like Nepal when developing erosion-control plans. This study evaluated soil erosion in the Dolakha district using the Revised Universal Soil Loss Equation (RUSLE) model and analysed the effect of land use and land cover (LULC) on soil erosion.
The soil erosion rate was categorized into six classes based on erosion severity; 5.01% of the area falls under extremely severe erosion risk (> 80 Mg ha−1 year−1) and should be addressed by decision-makers to reduce the erosion rate and its consequences. A further 10% is classified between the high and severe ranges, from 10 to 80 Mg ha−1 year−1, while 15% and 70% of the area remain in the moderate- and low-risk zones, respectively. The results suggest that the north-eastern part of the district suffers from high soil erosion risk due to its steep slopes.
The result is a spatial distribution of soil erosion over Dolakha that can be applied in conservation and management planning processes, at the policy level, by land-use planners and decision-makers.
Over the last few decades, soil erosion has affected natural resources and agricultural production globally (Bakker et al. 2005; Pimentel et al. 1995; Prasannakumar et al. 2012). In mountain regions, soil erosion is driven by severe hazards such as heavy rainfall and surface water flow on bare land, which contribute to land degradation (Ristić et al. 2012; Ashiagbor et al. 2013; Tamene and Vlek 2008). The primary onsite consequences of soil erosion are loss of soil fertility and degradation of soil resource quality, whereas pollution of water bodies and sediment deposition are offsite consequences (Morgan et al. 1984; Blaikie and Brookfield 2015). Soil erosion directly affects the environment, economy, and agriculture in mountain areas (Vanacker et al. 2003; Navas et al. 2004). Its rate increases with changes in precipitation and temperature patterns, which eventually alter runoff and land use and cause flood, drought, and famine (Nearing et al. 2004; Thapa and Dhulikhel 2019; Zhao et al. 2013). In addition, the deposition of sediments in rivers affects reservoirs and dams, increases their maintenance costs, and in the long run makes them unusable (Samaras and Koutitas 2014). Several studies have addressed this situation, and their findings support soil erosion control and ecological restoration (Samaras and Koutitas 2014; Shah et al. 1991). Nevertheless, erosion modelling deserves more attention, as it is essential for inaccessible mountainous areas.
Mountain regions are highly vulnerable to resource degradation caused by landslides, soil loss from steep slopes, and deforestation (TolIIrism 1995). In Nepal, approximately 45.5% of the land is eroded by water in steeper areas (Chalise et al. 2019). One study documents the soil loss of agricultural land in the hills through surface erosion; another, conducted in 1992 and 1993 in the Likhu Khola watershed, reports soil loss in slightly degraded secondary forest (Shrestha 1997; Gardner and Jenkins 1995). Several empirical models (USLE/RUSLE) exist for the prediction of soil erosion, and they vary considerably in their data inputs. The RUSLE represents how raindrop impact, climate, soil, topography, and land use affect rill and inter-rill soil erosion (Magdoff and Weil 2004). This method is widely used to estimate soil erosion loss and risk, and it provides a guideline for developing conservation plans and controlling erosion under different land-cover conditions, such as croplands, rangelands, and disturbed forest lands (Milward and Mersey 1999). Remote sensing and GIS techniques are feasible for estimating soil erosion and its spatial distribution over larger areas (Milward and Mersey 1999; Bahadur 2012), and the combination of remote sensing, GIS, and RUSLE makes it possible to estimate soil erosion loss on a cell-by-cell basis (Milward and Mersey 1999). In this study, the RUSLE model with GIS is used to estimate the spatial distribution of soil erosion in Dolakha; however, the model is applicable only to the prediction of sheet and rill erosion and cannot estimate the rate of gully erosion (Wang et al. 2002). It provides a framework for decision-makers when planning activities to control erosion and contributes toward filling the gap in soil loss information for this particular district.
The study area is the Dolakha District, Nepal, situated northeast of Kathmandu (Fig. 1; Dennison and Rana 2017). In recent years the district has become fragile due to soil erosion, deforestation, terrace farming on steep slopes, and rapid population pressure on natural resources (Thapa and Upadhyaya 2020; Pei and Sharma 1998). Another significant contributor is human-induced land use and land cover change, which increases the erosion rate (Fall et al. 2011).
Study area used for estimating soil erosion
The spatial datasets for this research are shown in Table 1.
Table 1 The datasets used for the RUSLE modelling
The RUSLE model estimates soil loss on sloping ground within a Geographic Information System (GIS) platform (Yitayew et al. 1999; Fig. 2). It combines geophysical and land cover factors in a single equation to evaluate the yearly soil loss from a unit of land, and it assesses erosion risk in the study area within its own assumptions and scope of application (Ciesiolka et al. 1995; Boggs et al. 2001). It is a widely used global model for predicting soil loss because of its convenience and compatibility with GIS (Milward and Mersey 1999; Tang et al. 2015; Jha and Paudel 2010; Šúri et al. 2002). Advances in GIS and remote sensing technology have enabled a more accurate estimation of the factors used in the calculation (TolIIrism 1995; Park et al. 2005; Ganasri and Ramesh 2016; Atoma 2018). Each factor was derived separately in raster format, and the erosion was calculated using map algebra functions. Figure 2 illustrates the framework for the RUSLE model calculation, which is expressed by the equation
$$A = R \times K \times LS \times C \times P,$$
where A = annual soil loss (Mg ha−1 year−1), R = rainfall erosivity factor (mm ha−1 h−1 year−1), K = soil erodibility factor, LS = slope-length and slope-steepness factor (dimensionless), C = cover management factor (dimensionless), and P = conservation practice factor (dimensionless).
The methodological framework for potential soil erosion using RUSLE model
RUSLE parameters computation
Rainfall erosivity factor (R)
The rainfall erosivity factor (R) describes the intensity of precipitation at a particular location based on its effect on soil erosion (Koirala et al. 2019; Thapa and Upadhyaya 2019). It is essential for soil erosion risk assessment under future land use and climate change (Stocking 1984). It quantifies the effect of raindrop impact and the rate of runoff associated with rainfall, and its unit is expressed in mm ha−1 h−1 year−1. In this study, the rainfall map produced by the National Aeronautics and Space Administration (NASA) was used to generate the rainfall erosivity factor (Wischmeier and Smith 1978). This map shows the mean annual precipitation over the district, and the R-factor was derived from it using the equation given by Morgan et al. (1984):
$$R = 38.5 + 0.35P,$$
where R = rainfall erosivity factor, P = mean annual rainfall in mm
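Assuming the mean annual precipitation is available as a single-band raster of P values in mm, the R-factor grid follows directly from this relation; the file names and the use of the rasterio library below are illustrative assumptions.

```python
# Rainfall erosivity R = 38.5 + 0.35 * P computed over a precipitation grid.
# "annual_precip_mm.tif" is a hypothetical mean-annual-rainfall raster (P in mm).
import numpy as np
import rasterio

with rasterio.open("annual_precip_mm.tif") as src:
    precip = src.read(1).astype("float64")
    profile = src.profile

r_factor = np.where(np.isfinite(precip), 38.5 + 0.35 * precip, np.nan)

profile.update(dtype="float64", count=1)
with rasterio.open("r_factor.tif", "w", **profile) as dst:
    dst.write(r_factor, 1)
```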
Soil erodibility factor (K)
The soil erodibility factor (K) measures the susceptibility of soil types and their particles to detachment and transport by rainfall and runoff. The K factor is influenced by soil texture, organic matter, soil structure, and the permeability of the soil profile (Erencin et al. 2000). The equations below, provided by Kouli et al. (2009), were used to estimate K:
$$K = F_{csand} \cdot F_{si\text{-}cl} \cdot F_{orgc} \cdot F_{hisand} \cdot 0.1317,$$
$$F_{csand} = 0.2 + 0.3\exp\left(-0.0256\,\mathrm{SAN}\left(1 - \frac{\mathrm{SIL}}{100}\right)\right),$$
$$F_{si\text{-}cl} = \left(\frac{\mathrm{SIL}}{\mathrm{CLA} + \mathrm{SIL}}\right)^{0.3},$$
$$F_{orgc} = 1.0 - \frac{0.25\,C}{C + \exp(3.72 - 2.95\,C)},$$
$$F_{hisand} = 1.0 - \frac{0.70\,\mathrm{SN1}}{\mathrm{SN1} + \exp(-5.51 + 22.9\,\mathrm{SN1})},$$
where SAN, SIL and CLA are the percentages of sand, silt and clay, respectively; C is the organic carbon content; and SN1 is the sand content divided by 100 and subtracted from 1 (SN1 = 1 − SAN/100). Fcsand is a low soil erodibility factor for soils with high coarse-sand content; Fsi-cl is a low soil erodibility factor for soils with a high clay-to-silt ratio; Forgc is a factor that reduces soil erodibility for soils with high organic content; and Fhisand is a factor that reduces soil erodibility for soils with high sand content.
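A sketch of this erodibility calculation is given below; the example sand, silt, clay, and organic carbon values are arbitrary and are not taken from the study's soil data.

```python
# Soil erodibility K from % sand (SAN), % silt (SIL), % clay (CLA) and organic
# carbon content C, following the equations above (SN1 = 1 - SAN/100).
import numpy as np

def k_factor(SAN, SIL, CLA, C):
    SN1 = 1.0 - SAN / 100.0
    fcsand = 0.2 + 0.3 * np.exp(-0.0256 * SAN * (1.0 - SIL / 100.0))
    fsi_cl = (SIL / (CLA + SIL)) ** 0.3
    forgc = 1.0 - 0.25 * C / (C + np.exp(3.72 - 2.95 * C))
    fhisand = 1.0 - 0.70 * SN1 / (SN1 + np.exp(-5.51 + 22.9 * SN1))
    return fcsand * fsi_cl * forgc * fhisand * 0.1317

# Arbitrary example: 40% sand, 40% silt, 20% clay, 1.5% organic carbon.
print(round(k_factor(40.0, 40.0, 20.0, 1.5), 3))
```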
Topographic factor (LS)
The topographic factor (LS) is created from two sub-factors: a slope gradient factor (S) and a slope-length factor (L), both determined from the Digital Elevation Model (DEM). The slope-length and gradient parameters are crucial in soil erosion modelling for calculating overland flow (surface runoff) (Morgan et al. 1992). L and S represent the effects of slope length and steepness on erosion, respectively; as they increase, soil loss per unit area rises. They were calculated from the DEM and combined into the topographic factor grid using the following relations (Atoma 2018).
$${\text{L}} = \left( {\frac{\lambda }{22.13}} \right)^{\text{m}} ,$$
where L = slope length factor, λ = slope length (m), m = slope-length exponent
$$m = \frac{F}{1 + F},$$
$$F = \frac{\sin\beta / 0.0896}{3(\sin\beta)^{0.8} + 0.56},$$
where F = ratio between rill erosion and inter rill erosion, β = slope angle (°)
In ArcGIS, L was calculated as,
$$L = \frac{\left(flow_{acc} + 625\right)^{(m+1)} - flow_{acc}^{(m+1)}}{25^{(m+2)} \cdot 22.13^{m}},$$
For slope gradient factor,
$$S = \mathrm{Con}\left(\left(\tan(\mathrm{slope} \times 0.01745) < 0.09\right),\; \left(10.8\sin(\mathrm{slope} \times 0.01745) + 0.03\right),\; \left(16.8\sin(\mathrm{slope} \times 0.01745) - 0.5\right)\right),$$
$$LS = L \times S,$$
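The LS relations above can be evaluated on gridded data roughly as follows; the 25 m cell size (hence the 625 term) is taken from the ArcGIS expression quoted above, while the small synthetic slope and flow-accumulation arrays are assumptions for illustration only.

```python
# Topographic factor LS from a slope grid (degrees) and a flow-accumulation grid,
# following the F, m, L and S relations given above (25 m cells assumed).
import numpy as np

def ls_factor(slope_deg, flow_acc):
    beta = np.radians(slope_deg)
    # F: ratio of rill to inter-rill erosion; m: slope-length exponent
    F = (np.sin(beta) / 0.0896) / (3.0 * np.sin(beta) ** 0.8 + 0.56)
    m = F / (1.0 + F)
    # L, using the ArcGIS flow-accumulation expression quoted above
    L = ((flow_acc + 625.0) ** (m + 1.0) - flow_acc ** (m + 1.0)) / (
        25.0 ** (m + 2.0) * 22.13 ** m)
    # S: conditional slope-steepness factor (the Con(...) expression above)
    S = np.where(np.tan(beta) < 0.09,
                 10.8 * np.sin(beta) + 0.03,
                 16.8 * np.sin(beta) - 0.5)
    return L * S

slope = np.array([[5.0, 20.0], [35.0, 50.0]])    # synthetic slope grid (degrees)
acc = np.array([[1.0, 10.0], [50.0, 200.0]])     # synthetic flow accumulation
print(ls_factor(slope, acc).round(2))
```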
Cover management factor (C)
The cover-management factor (C) reflects the effect of cropping and other practices on erosion rates (Chalise et al. 2019). It is the most spatiotemporally sensitive factor, as it follows plant growth and rainfall dynamics (Nearing et al. 2004). The factor is defined as a dimensionless number between zero and one representing the rainfall-erosion-weighted ratio of soil loss for specified land and vegetation conditions to the corresponding loss from continuous bare fallow (Wischmeier and Smith 1978). In this study, the LULC map produced by ICIMOD was used to prepare the C-factor map (Sheikh et al. 2011). The raster map was converted to polygons using the raster-to-polygon tool, and attributes with the same land use type were merged into a single class using ArcGIS 10.5 software; eight land use types were used (Table 2). For each land use class, C values were assigned from published references and range from 0 to 1, where a lower C indicates little or no loss, while a higher C indicates uncovered soil and a significant chance of soil loss (Erencin et al. 2000; Panagos et al. 2015).
Table 2 Cover management factor
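One way to turn a LULC grid into a C-factor grid is a simple per-class lookup, as sketched below; the class codes and C values in the lookup are illustrative assumptions and do not reproduce Table 2.

```python
# Assign cover-management C values to a LULC grid via a per-class lookup.
# Class codes and C values are illustrative only, not the values of Table 2.
import numpy as np

lulc = np.array([[1, 1, 3],
                 [2, 4, 3]])          # placeholder LULC class codes
c_lookup = {1: 0.03,                  # forest (assumed value)
            2: 0.21,                  # agriculture (assumed value)
            3: 0.45,                  # barren land (assumed value)
            4: 0.08}                  # grassland (assumed value)

c_factor = np.vectorize(c_lookup.get)(lulc).astype("float64")
print(c_factor)
```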
Support practice factor (P)
The support practice factor (P) indicates the rate of soil loss according to the agricultural practice applied. Three main practices, contouring, cropping, and terracing, are vital for controlling erosion (Park et al. 2005). The contouring method was used here with P values ranging from 0 to 1, where values close to 0 indicate effective conservation practice and a value of 1 indicates the absence of any support practice (Table 3, Kouli et al. 2009). The values for particular types of land cover were taken from published sources (Coughlan and Rose 1997; Yue-Qing et al. 2008).
Table 3 P factor values for slope (Kumar and Kushwaha 2013)
Potential erosion map
The different input data were processed in ArcGIS to create the five factor maps: R, K, LS, C, and P (Fig. 3). These raster maps were integrated within the ArcGIS environment using the RUSLE relation to generate a composite map of the estimated erosion loss over the study area (Ganasri and Ramesh 2016; Atoma 2018). The zonal statistics tool and geometry calculations were used to compute area-weighted means of the potential erosion rates, which were generated to explore the relationship of slope and LULC with erosion (Bastola et al. 2019). The slope map of Dolakha was created from the DEM in ArcGIS and then reclassified into eight classes (Gorr and Kurland 2010).
Five factor maps of soil erosion of study area, a Topographic factor map, b cover management factor map, c support practice factor, d soil erodibility factor map, e rainfall erosivity factor map
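The overlay step itself reduces to a cell-wise multiplication once the five factor grids share the same extent and resolution; in the sketch below the small arrays stand in for the actual factor rasters, and the class breaks used for the severity classification are assumptions.

```python
# Cell-wise RUSLE overlay A = R * K * LS * C * P on co-registered factor grids.
# The 2x2 arrays stand in for the five factor rasters; class breaks are assumed.
import numpy as np

R  = np.array([[900.0, 1200.0], [400.0, 1500.0]])   # rainfall erosivity
K  = np.array([[0.25, 0.30], [0.22, 0.32]])         # soil erodibility
LS = np.array([[1.5, 8.0], [0.5, 30.0]])            # topographic factor
C  = np.array([[0.09, 0.21], [0.001, 0.45]])        # cover management
P  = np.array([[0.55, 0.70], [1.0, 0.90]])          # support practice

A = R * K * LS * C * P                               # potential soil loss

bins = [5, 10, 20, 40, 80]                           # assumed class breaks
severity = np.digitize(A, bins)                      # 0 = lowest, 5 = extreme
print(A.round(1))
print(severity)
```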
Factor maps
The results showed that the topographic factor (LS) values range between 0 and 12,073 (Fig. 3a); the rainfall erosivity factor (R) ranges between 87.4938 and 1541.42 mm ha−1 h−1 year−1 (Fig. 3e); the soil erodibility factor (K) ranges from 0.2 to 0.32 (Fig. 3d); the support practice factor (P) for the entire area ranges from 0.55 to 1 (Fig. 3c); and the cover management factor (C) ranges between 0 and 0.45 (Fig. 3b).
Potential soil erosion rates of Nepal
The potential soil erosion map of the Dolakha district was produced by multiplying the factor maps in ArcGIS (Fig. 4; Table 4). Erosion rates higher than 80 Mg ha−1 year−1 cover 5.01% of the area (Table 4), and around 10% of the area falls in the high, very high, and severe risk zones that need conservation to reduce the risk of soil erosion. The mean erosion rate is highest in barren land, followed by agricultural land, shrubland, grassland, and forest, and the highest soil loss rates were observed on steep slopes.
Potential map of soil erosion rate of Dolakha district
Table 4 Potential soil erosion rate of Dolakha district
The study area was classified into six erosion classes; areas that remained in the same class are shown in bold in the diagonal cells. Erosion rates greater than 80 Mg ha−1 year−1 correspond to the extremely severe risk class. The proportion of the area at low and moderate risk of erosion is 70% and 15%, respectively, while around 15% lies between the high and very severe classes, indicating a chance of further increase.
RUSLE is an empirical modelling approach that predicts the long-term average annual rate of soil erosion on slopes using five factors, and it estimates soil loss for areas with similar terrain and meteorological conditions (Prasannakumar et al. 2012). In this research, a potential soil erosion rate map of Dolakha was generated from different data sources (Table 1) with the RUSLE model in ArcGIS. It is the first time such an approach has been used to assess erosion risk across this entire mountainous region, and the methodology still has certain limitations; nevertheless, it identifies priority areas for reducing soil erosion. Other research in areas with similar geographic characteristics has used the same method (Prasannakumar et al. 2012; Panagos et al. 2015; Kumar and Kushwaha 2013). In an erosion model, proper consideration of the R-, LS-, K-, P-, and C-factors should minimize the uncertainties. The LS-factor with the maximum slope in the study area was used as in the original RUSLE formulation (McCool et al. 1989). According to many research results, Nepal is vulnerable to soil erosion hazards due to five major factors: high annual precipitation, soil characteristics (mainly texture), steep slopes, land cover, and soil conservation practices along the slopes (McCool et al. 1987, 1989). Its total soil erosion is comparatively higher than in other countries of the world (Koirala et al. 2019). This study also found that around 15% of the total area of the district lies between the high and very severe classes, which is comparable to the erosion rates reported for steeper slopes in countries such as India, Ethiopia, and the United Kingdom, and in Europe and Africa (Morgan et al. 1984; Stocking 1984; Wischmeier and Smith 1978). The range of erosion suggested here is almost equal to that of Australia and China, as estimated by Lu et al. (2003). The higher erosion rates in China and Australia indicate the vulnerability to erosion of the semi-arid and semi-humid areas of the world (Suárez de Castro and Rodríguez Grandas 1962). Soil erosion rates in mountainous regions like Nepal also rise with an increase in slope (Assouline and Ben-Hur 2006).
RUSLE is the most commonly applied soil loss estimation model (Mekelle 2015; Wang et al. 2013; Maetens et al. 2012; Erol et al. 2015). Its strength lies in predicting soil loss from limited information, especially in developing countries where data are scarce (Tadesse 2016; Shahid 2013; Angima et al. 2003), although its wide use in mountainous terrain with steep slopes remains questionable (Turnipseed et al. 2003; Cevik and Topal 2003). Some models, such as AGNPS or ANSWERS, are unsuitable in Nepal because they require large amounts of data, and AGNPS is not applicable to this Middle Hills area (Kettner 1996; Ayalew 2020). On the other hand, results from such models are impressive but difficult to interpret and validate because of their complexity (Meyer and Flanagan 1992). Overall, these models include errors because they are based on empirical rules. The present model identifies areas at risk where management actions are needed (Rabia 2012; Roșca et al. 2012). It is simple, flexible, and physically based for predicting the relative soil loss pattern; however, the high hills are heterogeneous in precipitation patterns and topography. The accuracy of soil erosion estimates from such models should be assessed using ground observations, but it was not possible here to validate the assessments or analyse error and bias by comparing model predictions with field-based measurements over a set of sites, as there have been very few field-based studies in the area. However, the output was compared with erosion levels estimated in different published studies and with other model-based results for mid- and high-hill areas of Nepal with characteristics similar to Dolakha (Kebede 2001; Saxton and Rawls 2006).
The method has limitations related to the factors that drive erosion in the RUSLE model. Precipitation data from TRMM were used for the hill areas to calculate the rainfall erosivity factor; weather stations in the Himalayan region are limited, and the spatial precipitation data have low resolution. Furthermore, this approach is unable to capture the distribution of massive precipitation events, which markedly impact soil erosion. The cover management factor (C) and support practice factor (P) were weighted at the soil order level using published results (Uddin et al. 2018). Better estimates of the C-factor could be determined by remote sensing, using vegetation indices such as the Normalized Difference Vegetation Index (NDVI) (Almagro et al. 2019a, b). Usually, NDVI is related directly to the C-factor by linear or exponential regression (Yue-Qing et al. 2008; Van der Knijff et al. 1999). An alternative approach approximates the P-factor using empirical equations; for instance, Wener's method assumes that the P-factor is linked to topographical features and the slope gradient (Panagos et al. 2015; Terranova et al. 2009; Phinzi and Ngetar 2019). The RUSLE method has been reported to overestimate erosion in high terrain (Ganasri and Ramesh 2016; Uddin et al. 2018). The Rich Mesic Forest (RMF) and Mesic Forest (MF) models give better estimates over a slope when ground data are available (Fall et al. 2011; Bellemare et al. 2005). Comprehensive research combining the erosion model with ground measurements is appropriate for accurate estimation.
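For reference, one widely cited form of the exponential NDVI regression (Van der Knijff et al. 1999) is C = exp(−α·NDVI/(β − NDVI)) with α ≈ 2 and β ≈ 1; the sketch below applies it to placeholder NDVI values.

```python
# Exponential NDVI-to-C relation of Van der Knijff et al. (1999):
# C = exp(-alpha * NDVI / (beta - NDVI)), commonly used with alpha = 2, beta = 1.
import numpy as np

def c_from_ndvi(ndvi, alpha=2.0, beta=1.0):
    c = np.exp(-alpha * ndvi / (beta - ndvi))
    return np.clip(c, 0.0, 1.0)                 # C is bounded between 0 and 1

ndvi = np.array([0.1, 0.3, 0.5, 0.7, 0.85])     # placeholder NDVI values
print(c_from_ndvi(ndvi).round(3))
```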
The severity of soil erosion was assessed with the GIS-based RUSLE equation considering rainfall, soil, DEM, and land use and land cover. The soil erosion rate was categorized into six classes based on severity: 5.01% of the area falls under extreme risk (> 80 Mg ha−1 year−1), while 70% remains in the low-risk zone. This shows that areas at high elevation that receive intense rainfall are susceptible to soil erosion. The predicted severity can provide a basis for conservation and planning processes by decision-makers, and the regions with high to very severe soil erosion warrant special priority and control measures. While this model forms a basis for mapping and predicting vulnerable zones using remote sensing and GIS-based analysis, further studies are suggested for conservation and for refining the model in the future.
The data used for this study are listed with their sources; if any data used in the manuscript are unclear, the author is ready to clarify and even send the dataset on request.
RUSLE: Revised Universal Soil Loss Equation
ArcGIS: Arc Geographic Information System
DEM: Digital Elevation Model
SRTM DEM: Shuttle Radar Topography Mission, Digital Elevation Model
METI: Ministry of Economy, Trade and Industry, Japan
NASA: National Aeronautics and Space Administration, US
ICIMOD: International Center for Integrated Mountain Development
TRMM: Tropical Rainfall Measuring Mission
Almagro A et al (2019a) International soil and water conservation research
Almagro A et al (2019b) Improving cover and management factor (C-factor) estimation using remote sensing approaches for tropical regions. Int Soil Water Conserv Res 7(4):325–334
Angima SD, Stott DE, O'neill MK, Ong CK, Weesies GA (2003) Soil erosion prediction using RUSLE for central Kenyan highland conditions. Agric Ecosyst Environ 97(1–3):295–308
Ashiagbor G, Forkuo EK, Laari P, Aabeyir R (2013) Modeling soil erosion using RUSLE and GIS tools. Int J Remote Sens Geosci 2(4):1–17
Assouline S, Ben-Hur M (2006) Effects of rainfall intensity and slope gradient on the dynamics of interrill erosion during soil surface sealing. CATENA 66(3):211–220
Atoma H (2018) Assessment of soil erosion by Rusle model using remote sensing and GIS techniques: a case study of Huluka Watershed, Central Ethiopia, PhD Thesis, Addis Ababa University
Ayalew M (2020) Evaluating the sediment yield by improving the RUSLE and SDR in Gumara Watershed, Upper Blue Nile Basin, Ethiopia, PhD Thesis
Bahadur KK (2012) Spatio-temporal patterns of agricultural expansion and its effect on watershed degradation: a case from the mountains of Nepal. Environ Earth Sci 65(7):2063–2077
Bakker MM, Govers G, Kosmas C, Vanacker V, Van Oost K, Rounsevell M (2005) Soil erosion as a driver of land-use change. Agric Ecosyst Environ 105(3):467–481
Bastola S, Seong YJ, Lee SH, Shin Y (2019) Assessment of soil erosion loss by using RUSLE and GIS in the Bagmati Basin of Nepal, 20(3):5–14
Bellemare J, Motzkin G, Foster DR (2005) Rich mesic forests: edaphic and physiographic drivers of community variation in western Massachusetts. Rhodora 107(931):239–283
Blaikie P, Brookfield H (2015) Land degradation and society. Routledge, London
Boggs G, Devonport C, Evans K, Puig P (2001) GIS-based rapid assessment of erosion risk in a small catchment in the wet/dry tropics of Australia. Land Degrad Dev 12(5):417–434
Cevik E, Topal T (2003) GIS-based landslide susceptibility mapping for a problematic segment of the natural gas pipeline, Hendek (Turkey). Environ Geol 44(8):949–962
Chalise D, Kumar L, Spalevic V, Skataric G (2019) Estimation of sediment yield and maximum outflow using the IntErO model in the Sarada river basin of Nepal. Water 11(5):952
Ciesiolka CA et al (1995) Methodology for a multi-country study of soil erosion management. Soil Technol 8(3):179–192
Coughlan KJ, Rose CW (1997) A new soil conservation methodology and application to cropping systems in tropical steeplands: a comparative synthesis of results obtained in ACIAR Project PN 9201, ACIAR
Dennison L, Rana P (2017) Nepal's emerging data revolution. Development Initiatives, Washington
Erencin Z, Shresta DP, Krol IB (2000) C-factor mapping using remote sensing and GIS. Case study Lom SakLom Kao Thail Geogr Inst Justus-Liebig-Univ Giess Intern Inst Aerosp Surv Earth SciITC Enschede Netherland
Erol A, Koşkan Ö, Başaran MA (2015) Socioeconomic modifications of the universal soil loss equation. Solid Earth 6:1025–1035
Fall S et al (2011) Analysis of the impacts of station exposure on the US Historical Climatology Network temperatures and temperature trends. J Geophys Res Atmos. https://doi.org/10.1029/2010JD015146
Ganasri BP, Ramesh H (2016) Assessment of soil erosion by RUSLE model using remote sensing and GIS—a case study of Nethravathi Basin. Geosci Front 7(6):953–961
Gardner R, Jenkins A (1995) Land use, soil conservation and water resource management in the Nepal middle hills. Overseas Development Administration, London
Gorr WL, Kurland KS (2010) GIS tutorial 1: basic workbook. Esri Press, Redlands
Jha MK, Paudel RC (2010) Erosion predictions by empirical models in a mountainous watershed in Nepal. J Spat Hydrol 10(1):89–102
Kebede TA (2001) Farm household technical efficiency: a stochastic frontier analysis. Study Rice Prod Mardi Watershed West Dev Reg Nepal Masters Thesis Submitt Dep Econ Soc Sci Agric Univ Norway
Kettner A (1996) Simulated man-induced erosion in the Middle Mountains of Nepal, a case study on the relation between land use, land tenure and erosion with the use of the AGNPS-model in the Mahadev Khola watershed, PhD Thesis, Msc thesis (unpublished), Dept. of Soil and Water Conservation and…
Koirala P, Thakuri S, Joshi S, Chauhan R (2019) Estimation of soil erosion in Nepal using a RUSLE modeling and geospatial tool. Geosciences 9(4):147
Kouli M, Soupios P, Vallianatos F (2009) Soil erosion prediction using the revised universal soil loss equation (RUSLE) in a GIS framework, Chania, Northwestern Crete, Greece. Environ Geol 57(3):483–497
Kumar S, Kushwaha SPS (2013) Modelling soil erosion risk based on RUSLE-3D using GIS in a Shivalik sub-watershed. J Earth Syst Sci 122(2):389–398
Lu H, Prosser IP, Moran CJ, Gallant JC, Priestley G, Stevenson JG (2003) Predicting sheetwash and rill erosion over the Australian continent. Soil Res 41(6):1037–1062
Maetens W, Vanmaercke M, Poesen J, Jankauskas B, Jankauskiene G, Ionita I (2012) Effects of land use on annual runoff and soil loss in Europe and the Mediterranean: a meta-analysis of plot data. Prog Phys Geogr 36(5):599–653
Magdoff F, Weil RR (2004) Soil organic matter in sustainable agriculture. CRC Press, Boca Raton
McCool DK, Brown LC, Foster GR, Mutchler CK, Meyer LD (1987) Revised slope steepness factor for the universal soil loss equation. Trans ASAE 30(5):1387–1396
McCool DK, Foster GR, Mutchler CK, Meyer LD (1989) Revised slope length factor for the universal soil loss equation. Trans ASAE 32(5):1571–1576
Mekelle E (2015) Assessing Runoff and soil erosion by water using GIS and RS techniques at Midmar Catchment, Northern Ethiopia BY: Tsegay Aregawi Gebremedhn, PhD Thesis, Mekelle University
Meyer CR, Flanagan DC (1992) Application of case-based reasoning concepts to the WEPP soil erosion model, AI Appl USA
Milward A, Mersey JE (1999) Adapting the RUSLE to model soil erosion potential in a mountainous tropical watershade. Catena 38(2):109–129
Morgan RPC, Morgan DDV, Finney HJ (1984) A predictive model for the assessment of soil erosion risk. J Agric Eng Res 30:245–253
Morgan RPC, Quinton JN, Rickson RJ (1992) EUROSEM documentation manual. Silsoe Coll. Silsoe Bedford UK, p 34
Navas A, Garcés BV, Machín J (2004) Research Note: An approach to integrated assessment of reservoir siltation: the Joaquín Costa reservoir as a case study
Nearing MA, Pruski FF, O'neal MR (2004) Expected climate change impacts on soil erosion rates: a review. J Soil Water Conserv 59(1):43–50
Panagos P, Borrelli P, Meusburger K, Alewell C, Lugato E, Montanarella L (2015) Estimating the soil erosion cover-management factor at the European scale. Land Use Policy 48:38–50
Park C-S, Jung Y-S, Joo J-H, Lee J-T (2005) Best management practices reducing soil loss in the saprolite piled upland in Hongcheon highland. Korean J Soil Sci Fertil 38(3):119–126
Pei S, Sharma UR (1998) Transboundary biodiversity conservation in the Himalayas. Ecoregional Coop Biodivers Conserv Himalayas, 164–184
Phinzi K, Ngetar NS (2019) The assessment of water-borne erosion at catchment level using GIS-based RUSLE and remote sensing: a review. Int Soil Water Conserv Res 7(1):27–46
Pimentel D et al (1995) Environmental and economic costs of soil erosion and conservation benefits. Science 267(5201):1117–1123
Prasannakumar V, Vijith H, Abinod S, Geetha N (2012) Estimation of soil erosion risk within a small mountainous sub-watershed in Kerala, India, using Revised Universal Soil Loss Equation (RUSLE) and geo-information technology. Geosci Front 3(2):209–215
Rabia AH (2012) Mapping soil erosion risk using RUSLE, GIS and remote sensing techniques. In: The 4th international congress of ECSSS, EUROSOIL, Bari, Italy, pp 1–15
Ristić R, Kostadinov S, Radić B, Trivan G, Nikić Z (2012) Torrential floods in Serbia-man made and natural hazards. In: 12th congress interpraevent
Roșca B, Vasiliniuc I, Topșa G (2012) Models for estimating soil erosion in the middle and lower Vasluieț Basin. Bull Univ Agric Sci Vet Med Cluj-Napoca Agric 69(1)
Samaras AG, Koutitas CG (2014) The impact of watershed management on coastal morphology: a case study using an integrated approach and numerical modeling. Geomorphology 211:52–63
Saxton KE, Rawls WJ (2006) Soil water characteristic estimates by texture and organic matter for hydrologic solutions. Soil Sci Soc Am J 70(5):1569–1578
Shah PB, Schreier H, Brown SJ, Riley KW (1991) Soil fertility and erosion issues in the middle mountains of Nepal: workshop proceedings, Jhikhu Khola Watershed, Apr 22–25, 1991
Shahid S (2013) Modelling soil erosion susceptibility of johor river basin by using geographical information system (GIS)
Sheikh AH, Palria S, Alam A (2011) Integration of GIS and universal soil loss equation (USLE) for soil loss estimation in a Himalayan watershed. Recent Res Sci Technol. https://doi.org/10.21203/rs.3.rs-25478/v2
Shrestha DP (1997) Assessment of soil erosion in the Nepalese Himalaya: a case study in Likhu Khola Valley, Middle Mountain Region. Land Husb 2(1):59–80
Stocking M (1984) Rates of erosion and sediment yield in the African environment. Chall Afr Hydrol Water Resour, 285–295
Suárez de Castro F, Rodríguez Grandas A (1962) Investigaciones sobre la erosión y la conservación de los suelos en Colombia [Research on soil erosion and conservation in Colombia]
Šúri M, Cebecauer T, Hofierka J, Fulajtár E (2002) Erosion assessment of Slovakia at regional scale using GIS. Ecology 21(4):404–422
Tadesse L (2016) Assessing the impact of watershed development programs on soil erosion and biomass production using remote sensing and GIS: the case of Yezat Watershed, West Gojam Zone of Amhara Region, Ethiopia, PhD Thesis, Addis Ababa University
Tamene L, Vlek PL (2008) Soil erosion studies in northern Ethiopia. In: Land use and soil resources, Springer, pp 73–100
Tang Q, Xu Y, Bennett SJ, Li Y (2015) Assessment of soil erosion using RUSLE and GIS: a case study of the Yangou watershed in the Loess Plateau, China. Environ Earth Sci 73(4):1715–1724
Terranova O, Antronico L, Coscarelli R, Iaquinta P (2009) Soil erosion risk scenarios in the Mediterranean environment using RUSLE and GIS: an application model for Calabria (southern Italy). Geomorphology 112(3–4):228–245
Thapa P, Dhulikhel N (2019) Observed and perceived climate change analysis in the Terai Region, Nepal. GSJ 7(12)
Thapa P, Upadhyaya PS (2019) Vulnerability assessment of indigenous communities to climate change in Nepal
Thapa P, Upadhyaya PS (2020) Riverbed water extraction and utilization of Rural Communities Kavre, Nepal. Int Eur Ext Enablement Sci Eng Manag. (IEEE-SEM) 8(1):8–11
TolIIrism M (1995) Carrying capacity of Himalayan
Turnipseed AA, Anderson DE, Blanken PD, Baugh WM, Monson RK (2003) Airflows and turbulent flux measurements in mountainous terrain: part 1. Canopy and local effects. Agric For Meteorol 119(1–2):1–21
Uddin K, Abdul Matin M, Maharjan S (2018) Assessment of land cover change and its impact on changes in soil erosion risk in Nepal. Sustainability 10(12):4715
Van der Knijff JMF, Jones RJA, Montanarella L (1999) Soil erosion risk assessment in Italy. Citeseer
Vanacker V, Govers G, Barros S, Poesen J, Deckers J (2003) The effect of short-term socio-economic and demographic change on landuse dynamics and its corresponding geomorphic response with relation to water erosion in a tropical mountainous catchment, Ecuador. Landsc Ecol 18(1):1–15
Wang G, Gertner G, Singh V, Shinkareva S, Parysow P, Anderson A (2002) Spatial and temporal prediction and uncertainty of soil loss using the revised universal soil loss equation: a case study of the rainfall–runoff erosivity R factor. Ecol Model 153(1–2):143–155
Wang L, Huang J, Du Y, Hu Y, Han P (2013) Dynamic assessment of soil erosion risk using Landsat TM and HJ satellite data in Danjiangkou Reservoir area, China. Remote Sens 5(8):3826–3848
Wischmeier WH, Smith DD (1978) Predicting rainfall erosion losses: a guide to conservation planning. Department of Agriculture, Science and Education Administration, Indiana
Yitayew M, Pokrzywka SJ, Renard KG (1999) Using GIS for facilitating erosion estimation. Appl Eng Agric 15(4):295
Yue-Qing X, Xiao-Mei S, Xiang-Bin K, Jian P, Yun-Long C (2008) Adapting the RUSLE and GIS to model soil erosion risk in a mountains karst watershed, Guizhou Province, China. Environ Monit Assess 141(1–3):275–286
Zhao G, Mu X, Wen Z, Wang F, Gao P (2013) Soil erosion, conservation, and eco-environment changes in the Loess Plateau of China. Land Degrad Dev 24(5):499–510
I want to acknowledge the people who directly and indirectly contributed to the study.
No funding was received.
Department of Geomatics Engineering, Kathmandu University, Dhulikhel, Nepal
Pawan Thapa
The author conducted all research activities, including data acquisition, analysis, evaluation, and interpretation of results. The author read and approved the final manuscript.
Correspondence to Pawan Thapa.
Ethics approval and consent to participate
I agreed to submit the final manuscript for Environmental Systems Research Journal.
The author declares that they have no competing interests.
Thapa, P. Spatial estimation of soil erosion using RUSLE modeling: a case study of Dolakha district, Nepal. Environ Syst Res 9, 15 (2020). https://doi.org/10.1186/s40068-020-00177-2
Mountainous region
Are we certain that quantum computers that are more efficient than classical computers can be built?
I mean, are we certain that they will be able to provide us huge improvements (in some tasks) compared to classical computers?
physical-realization classical-computing
Adou
$\begingroup$ It seems that there's a fairly convincing argument now that even if P = NP, BQP is a separate complexity class: eccc.weizmann.ac.il/report/2018/107 $\endgroup$
– soitgoes
The answer is no. We cannot be 100% certain.
Just like we don't have a proof that P $\ne$ NP, there is no proof that NP $\ne$ QMA, though we believe both these inequalities to be true even without proof.
Furthermore, we do not know how the "engineering complexity" scales, so even though Shor's algorithm has exponentially fewer operations to perform than the best known classical algorithm, it might be double exponentially more difficult to implement it physically. See my answer to this question: Are there any estimates on how complexity of quantum engineering scales with size?.
It is also possible that there exists a proof that NP $\ne$ QMA and that the engineering complexity scales linearly, meaning that quantum computers could "provably" have some advantage, but we just do not know of any such proof yet. Until we see a quantum computer give these "huge improvements" for a problem where it is provably better than the best classical algorithm, we have no way of being 100% certain that quantum computers will provide what you ask.
Quantum communication though (not necessarily quantum computing), does have some provable benefits over present day classical communication devices, and one example is the BB84 protocol.
$\begingroup$ Do you mean $\mathsf{NP \ne QMA}$, or do you perhaps mean $\mathsf{BPP \ne BQP}$? $\endgroup$
– Niel de Beaudrap
$\begingroup$ @NieldeBeaudrap Why do you ask that? $\endgroup$
$\begingroup$ If the question is whether quantum computers will provably bring an advantage, the more likely class of problems to consider is BQP (problems which polynomial-uniform quantum circuits can decide with bounded error) rather than QMA (yes/no problems for which candidate answers, possibly obtained by some completely other means, have proofs which can be tested by polynomial-uniform quantum circuits with bounded error). The comparison of QMA to NP is basically appropriate in this context (MA would be better), but QMA itself is unusual to invoke for 'quantum advantage'. $\endgroup$
$\begingroup$ @NieldeBeaudrap Would you agree that if QMA is bigger than NP, quantum computers have an advantage over classical computers? If so then there is no need to obscure things with complexity classes that no one's ever heard of. The person that asked, is a beginner, so they'll have heard of P vs NP, and QMA is a quantum analog of NP, and is probably more well-known by quantum researchers than BQP or BPP: I've heard chemists talk about QMA completeness and I've seen lots of people talk about QMA in the context of the k-local Hamiltonian problem, but can't remember the last time someone mentioned BQP $\endgroup$
$\begingroup$ Maybe you were trying to suggest that we can have: NP = QMA and BPP $\ne$ BQP? $\endgroup$
There is no absolute certainty. People introduce complexity classes: BPP for the set of problems that a classical computer (with access to a source of randomness) can solve in polynomial time, and BQP for the set of problems that a quantum computer can efficiently solve. We know that BQP contains BPP, but we do not know for certain that there are problems in BQP but not BPP. It is generally believed that there are some. Shor's algorithm for factoring large composite numbers is most typically hailed as a likely candidate, although the best candidate is the algorithm for the Jones polynomial because this problem is known to be the hardest that is efficiently solvable by a quantum computer. But we don't know. It may be that there is an efficient classical algorithm for this problem, and we just haven't been smart enough to find it yet.
What we do know is that with respect to certain oracles (i.e. black boxes that function in particular ways), quantum algorithms do outperform classical ones. You can think of this as "if we program an algorithm in a particular way, how fast can it run?". The advantage of these oracle-based algorithms is that lower bounds can be proven, showing the minimum number of operations required. The classic example of this is Simon's problem. Again, it doesn't mean that there can't be a better way of doing it classically via another route.
The other aspect that is implied by your question is whether a scalable universal quantum computer can actually be built. Until this is actually done, I don't think it can be proven, although it is generally believed that it will happen. There are, however, those who believe there are fundamental limitations that will prevent the construction of a suitable device. Gil Kalai, for example.
Electronic instabilities of a Hubbard model approached as a large array of coupled chains: competition between d-wave superconductivity and pseudogap phase [PDF]
E. Perfetto,J. Gonzalez
Physics , 2007, DOI: 10.1103/PhysRevB.77.054504
Abstract: We study the electronic instabilities in a 2D Hubbard model where one of the dimensions has a finite width, so that it can be considered as a large array of coupled chains. The finite transverse size of the system gives rise to a discrete string of Fermi points, with respective electron fields that, due to their mutual interaction, acquire anomalous scaling dimensions depending on the point of the string. Using bosonization methods, we show that the anomalous scaling dimensions vanish when the number of coupled chains goes to infinity, implying the Fermi liquid behavior of a 2D system in that limit. However, when the Fermi level is at the Van Hove singularity arising from the saddle points of the 2D dispersion, backscattering and Cooper-pair scattering lead to the breakdown of the metallic behavior at low energies. These interactions are taken into account through their renormalization group scaling, studying in turn their influence on the nonperturbative bosonization of the model. We show that, at a certain low-energy scale, the anomalous electron dimension diverges at the Fermi points closer to the saddle points of the 2D dispersion. The d-wave superconducting correlations become also large at low energies, but their growth is cut off as the suppression of fermion excitations takes place first, extending progressively along the Fermi points towards the diagonals of the 2D Brillouin zone. We stress that this effect arises from the vanishing of the charge stiffness at the Fermi points, characterizing a critical behavior that is well captured within our nonperturbative approach.
Renormalization group analysis of the 2D Hubbard model [PDF]
Christoph J. Halboth,Walter Metzner
Physics , 1999, DOI: 10.1103/PhysRevB.61.7364
Abstract: Salmhofer [Commun. Math. Phys. 194, 249 (1998)] has recently developed a new renormalization group method for interacting Fermi systems, where the complete flow from the bare action of a microscopic model to the effective low-energy action, as a function of a continuously decreasing infrared cutoff, is given by a differential flow equation which is local in the flow parameter. We apply this approach to the repulsive two-dimensional Hubbard model with nearest and next-nearest neighbor hopping amplitudes. The flow equation for the effective interaction is evaluated numerically on 1-loop level. The effective interactions diverge at a finite energy scale which is exponentially small for small bare interactions. To analyze the nature of the instabilities signalled by the diverging interactions we extend Salmhofers renormalization group for the calculation of susceptibilities. We compute the singlet superconducting susceptibilities for various pairing symmetries and also charge and spin density susceptibilities. Depending on the choice of the model parameters (hopping amplitudes, interaction strength and band-filling) we find commensurate and incommensurate antiferromagnetic instabilities or d-wave superconductivity as leading instability. We present the resulting phase diagram in the vicinity of half-filling and also results for the density dependence of the critical energy scale.
Fermi liquid behavior in the 2D Hubbard model at low temperatures [PDF]
G. Benfatto,A. Giuliani,V. Mastropietro
Physics , 2005, DOI: 10.1007/s00023-006-0270-z
Abstract: We prove that the weak coupling 2D Hubbard model away from half filling is a Landau Fermi liquid up to exponentially small temperatures. In particular we show that the wave function renormalization is an order 1 constant and essentially temperature independent in the considered range of temperatures and that the interacting Fermi surface is a regular convex curve. This result is obtained by deriving a convergent expansion (which is not a power series) for the two point Schwinger function by Renormalization Group methods and proving at each order suitable power counting improvements due to the convexity of the interacting Fermi surface. Convergence follows from determinant bounds for the fermionic expectations.
Renormalization Group and Fermi Liquid Theory [PDF]
A. C. Hewson
Physics , 1994, DOI: 10.1080/00018739400101525
Abstract: We give a Hamiltonian based interpretation of microscopic Fermi liquid theory within a renormalization group framework. We identify the fixed point Hamiltonian of Fermi liquid theory, with the leading order corrections, and show that this Hamiltonian in mean field theory gives the Landau phenomenological theory. A renormalized perturbation theory is developed for calculations beyond the Fermi liquid regime. We also briefly discuss the breakdown of Fermi liquid theory as it occurs in the Luttinger model, and the infinite dimensional Hubbard model at the Mott transition.
The 2D Hubbard model on the honeycomb lattice [PDF]
Alessandro Giuliani,Vieri Mastropietro
Physics , 2008, DOI: 10.1007/s00220-009-0910-5
Abstract: We consider the 2D Hubbard model on the honeycomb lattice, as a model for a single layer graphene sheet in the presence of screened Coulomb interactions. At half filling and weak enough coupling, we compute the free energy, the ground state energy and we construct the correlation functions up to zero temperature in terms of convergent series; analyticity is proved by making use of constructive fermionic renormalization group methods. We show that the interaction produces a modification of the Fermi velocity and of the wave function renormalization without changing the asymptotic infrared properties of the model with respect to the unperturbed non-interacting case; this rules out the possibility of superconducting or magnetic instabilities in the ground state. We also prove that the correlations verify a Ward Identity similar to the one for massless Dirac fermions, up to asymptotically negligible corrections and a renormalization of the charge velocity.
Spectral function of the 2D Hubbard model: a density matrix renormalization group plus cluster perturbation theory study [PDF]
Chun Yang,Adrian E. Feiguin
Physics , 2015,
Abstract: We study the spectral function of the 2D Hubbard model using cluster perturbation theory, and the density matrix renormalization group as a cluster solver. We reconstruct the two-dimensional dispersion at, and away from half-filling using 2xL ladders, with L up to 80 sites, yielding results with unprecedented resolution in excellent agreement with quantum Monte Carlo. The main features of the spectrum can be described with a mean-field dispersion, while kinks and pseudogap traced back to scattering between spin and charge degrees of freedom.
A Renormalization group approach for highly anisotropic 2D Fermion systems: application to coupled Hubbard chains [PDF]
S. Moukouri
Abstract: I apply a two-step density-matrix renormalization group method to the anisotropic two-dimensional Hubbard model. As a prelude to this study, I compare the numerical results to the exact one for the tight-binding model. I find a ground-state energy which agrees with the exact value up to four digits for systems as large as $24 \times 25$. I then apply the method to the interacting case. I find that for strong Hubbard interaction, the ground-state is dominated by magnetic correlations. These correlations are robust even in the presence of strong frustration. Interchain pair tunneling is negligible in the singlet and triplet channels and it is not enhanced by frustration. For weak Hubbard couplings, interchain non-local singlet pair tunneling is enhanced and magnetic correlations are strongly reduced. This suggests a possible superconductive ground state.
Fermi surface renormalization in Hubbard ladders [PDF]
Kim Louis,J. V. Alvarez,Claudius Gros
Abstract: We derive the one-loop renormalization equations for the shift in the Fermi-wavevectors for one-dimensional interacting models with four Fermi-points (two left and two right movers) and two Fermi velocities v_1 and v_2. We find the shift to be proportional to (v_1-v_2)U^2, where U is the Hubbard-U. Our results apply to the Hubbard ladder and to the t_1-t_2 Hubbard model. The Fermi-sea with fewer particles tends to empty. The stability of a saddle point due to shifts of the Fermi-energy and the shift of the Fermi-wavevector at the Mott-Hubbard transition are discussed.
Breakdown of Landau Theory in Overdoped Cuprates near the Onset of Superconductivity [PDF]
M. Ossadnik,C. Honerkamp,T. M. Rice,M. Sigrist
Physics , 2008, DOI: 10.1103/PhysRevLett.101.256405
Abstract: We use the functional renormalization group to analyze the temperature dependence of the quasi-particle scattering rates in the two-dimensional Hubbard model below half-filling. Using a band structure appropriate to overdoped Tl2Ba2CuO(6+x) we find a strongly angle dependent term linearly dependent on temperature which derives from an increasing scattering vertex as the energy scale is lowered. This behavior agrees with recent experiments and confirms earlier conclusions on the origin of the breakdown of the Landau Fermi liquid near the onset of superconductivity.
Evidence of a short-range incommensurate $d$-wave charge order from a fermionic two-loop renormalization group calculation of a 2D model with hot spots [PDF]
Vanuildo S. de Carvalho,Hermann Freire
Physics , 2014, DOI: 10.1016/j.aop.2014.05.009
Abstract: The two-loop renormalization group (RG) calculation is considerably extended here for the two-dimensional (2D) fermionic effective field theory model, which includes only the so-called "hot spots" that are connected by the spin-density-wave (SDW) ordering wavevector on a Fermi surface generated by the 2D $t-t'$ Hubbard model at low hole doping. We compute the Callan-Symanzik RG equation up to two loops describing the flow of the single-particle Green's function, the corresponding spectral function, the Fermi velocity, and some of the most important order-parameter susceptibilities in the model at lower energies. As a result, we establish that -- in addition to clearly dominant SDW correlations -- an approximate (pseudospin) symmetry relating a short-range \emph{incommensurate} $d$-wave charge order to the $d$-wave superconducting order indeed emerges at lower energy scales, which is in agreement with recent works available in the literature addressing the 2D spin-fermion model. We derive implications of this possible electronic phase in the ongoing attempt to describe the phenomenology of the pseudogap regime in underdoped cuprates.
Journal of Analytical Science and Technology
An exploration into the use of Hansen solubility parameters for modelling reversed-phase chromatographic separations
David Ribar (ORCID: 0000-0002-6305-6015),
Tjaša Rijavec (ORCID: 0000-0003-2778-1372) &
Irena Kralj Cigić (ORCID: 0000-0003-1972-8088)
Journal of Analytical Science and Technology volume 13, Article number: 12 (2022)
The suitability of Hansen solubility parameters as descriptors for modelling analyte retention during reversed-phase chromatographic experiments was investigated. A novel theoretical model using Hansen solubility parameters as the basis for a complete mathematical derivation of the model was developed. The theoretical model also includes the cavitation volumes of the analytes, which were calculated using ab initio density functional theory methods. A set of three homologous phthalates was used for experimental data collection and subsequent model construction. The training error and the generalization error of the model were additionally evaluated using a range of chemically diverse analytes. Statistical evaluation of the results revealed that the model is suitable for analyte retention prediction but is limited to the analytes used in the model construction. Therefore, the resulting theoretical model cannot be easily generalized. A retention anomaly attributed to the column temperature and mobile phase composition was experimentally observed and mathematically investigated.
Considerable research interest has been devoted to understanding the principal mechanism of reversed-phase (RP) chromatographic separations, which has not been fully elucidated due to its complexity and a multitude of parameters that greatly affect the chromatographic experiment (Poole 2019). Poole (2015) described a qualitative interphase model which aimed at simplifying the complexity of the retention mechanism. He described the active chromatographic separatory area inside the analytical column and segmented it into three main domains: the silica particle surface, the stationary phase (SP), and the bulk mobile phase (MP). At a constant composition of the MP, the SP becomes solvated. Analytes approaching from the bulk MP can interact with the SP in a combination of two retention mechanisms: a partition and an adsorption mechanism (Poole 2019, 2015). The predominant mechanism, by which retention of the analyte occurs, is a function of MP composition, column temperature, condition and age of the SP, and the flow rate (Vailaya 2005). From a modelling perspective, this complexity presents the greatest challenge, as certain simplifications are unavoidable.
Retention prediction and modelling in chromatography
Haddad et al. (2021) recently published an overview of modern approaches to modelling chromatographic retention data. These can be divided into two main classes: statistical and physicochemical modelling. Statistical approaches include quantitative structure-retention relationships (QSRR) (Amos et al. 2018) and design of experiments (DoE) (Sahu et al. 2018), while physicochemical models encompass solvatophobic models (Horváth et al. 1976; Moldoveanu and David 2015), solvent strength models (Neue and Kuss 2010; Tyteca and Desmet 2015), and linear free-energy relationships (LFER) (Vailaya 2005), which can be further divided into exothermodynamic relations (Ovčačíková et al. 2016), hydrophobic subtraction models (Snyder et al. 2004) and solvation parameter models (Abraham et al. 2004).
Gilar et al. (2020) previously compared the nonlinear and linear solvent strength models and found that the linear model can serve as an estimate for the prediction of analyte retention. The nonlinear model was implemented to try to correct the experimentally observed nonlinear concave behaviour in the \(\ln k_{i}\) versus solvent strength plots. The origin of this nonlinear behaviour in RP liquid chromatography is still not fully understood.
The analytes used in modelling are usually divided into two sets: a training set containing the analytes that are directly integrated into the model, and a test set with analytes that are not used to construct the model. The latter serve as data for evaluating the generalization error of the model, while the training error is determined using the training set. Together, these two metrics provide an evaluation of the model's performance. An ideal theoretical model has a low training error for the analytes used in training and a low generalization error for the analytes added for testing (Spears et al. 2018).
Solubility parameters
Hansen solubility parameters (HSP) are empirical thermodynamic parameters that aim to quantify the notion of "like dissolves like" (Hansen 1967). The core idea of HSP is summarized in Eq. 1, where a clear distinction of three basic molecular interaction types can be seen: dispersion interactions (d), polar interactions (p), and hydrogen bonding interactions (h).
$$\begin{array}{*{20}c} {\delta^{2} = \delta_{d}^{2} + \delta_{p}^{2} + \delta_{h}^{2} } \\ \end{array}$$
The squares of the three partial solubility parameters sum to the square of the total solubility parameter \(\delta\), which is defined as the square root of the cohesive energy density, i.e. the energy originating from intermolecular interactions divided by the molar volume, Eq. 2 (Hildebrand 1949).
$$\begin{array}{*{20}c} {\delta = \sqrt {\frac{{E_{{{\text{coh}}{.}}} }}{{V_{{\text{m}}} }}} } \\ \end{array}$$
Hansen distance is a numerical value that quantifies the thermodynamic similarity of two analytes based on the strength of the constituent interaction types. A Hansen distance for two analytes can be calculated from their respective HSP, Eq. 3.
$$\begin{array}{*{20}c} {\lambda_{i,j} = 4\left( {\delta_{d,i} - \delta_{d,j} } \right)^{2} + \left( {\delta_{p,i} - \delta_{p,j} } \right)^{2} + \left( {\delta_{h,i} - \delta_{h,j} } \right)^{2} } \\ \end{array}$$
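As an aside (not part of the original work), Eq. 3 translates directly into a few lines of Python; the HSP values below are illustrative placeholders rather than the values used in the study.

```python
def hansen_distance_sq(hsp_i, hsp_j):
    """Squared Hansen distance between two substances, Eq. 3.

    Each argument is a (delta_d, delta_p, delta_h) tuple in MPa**0.5.
    """
    dd = hsp_i[0] - hsp_j[0]
    dp = hsp_i[1] - hsp_j[1]
    dh = hsp_i[2] - hsp_j[2]
    return 4.0 * dd ** 2 + dp ** 2 + dh ** 2


# Placeholder HSP values (MPa**0.5), purely for illustration
analyte = (18.6, 10.8, 4.9)
solvent = (15.3, 18.0, 6.1)
print(hansen_distance_sq(analyte, solvent))
```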
HSPs were developed mainly for predicting the solubility of paints, resins, and polymer components (Hansen 1967). They can be used in the development and study of the mechanical properties of thermoplastic-lignin composites (Zhao et al. 2018a, b). Their use in the pharmaceutical industry includes predicting the solubility of pharmaceutical excipients (Adamska et al. 2007) and the compatibility of biopolymers (Adamska et al. 2016). Sánchez-Camargo et al. (2019) have reviewed the use of HSP to predict solubilities for the selection of more environmentally acceptable extraction solvents. The work of Schoenmakers et al. (1982) and Schoenmakers and de Galan (1981) shows that there is a mathematical relationship between the solubility parameter and the chromatographic activity coefficients, which allows a theoretical derivation of retention as a function of the solubility parameters of all analytes and phases in chromatographic separations.
Aim of the study
The aim of this study is twofold. First, to formally derive and investigate a novel LFER physicochemical model for modelling and predicting RP chromatographic separations based on Hansen solubility parameters using the experimental retention data obtained with a series of three homologous phthalates. Secondly, to investigate the training error and generalization error of the model using a retention information dataset of chemically different analytes that have a similar aromatic structural component with different functional groups, in order to evaluate the model as a tool for routine retention prediction of analytes.
Derivation of the model equation
The primary objective of the study is to develop and provide a detailed mathematical and theoretical derivation of the model. The model derivation begins with the mathematical treatment of retention equilibria occurring during RP chromatographic separations. After the analyte, denoted by the index \(i\), is introduced into the MP, the dynamic equilibrium between the activities of the analyte, corresponding to both phases, denoted by \({\mathcal{S}}\) for the SP and \({\mathcal{M}}\) for the MP, is formed, Eq. 4.
$$\begin{array}{*{20}c} {a_{{{\mathcal{S}},i}} \rightleftharpoons a_{{{\mathcal{M}},i}} } \\ \end{array}$$
The dynamic equilibrium can be described by its equilibrium constant, Eq. 5.
$$\begin{array}{*{20}c} {K_{i} = \frac{{a_{{{\mathcal{M}},i}} }}{{a_{{{\mathcal{S}},i}} }}} \\ \end{array}$$
The retention factor is a function of the chromatographic equilibrium constant, Eq. 6.
$$\begin{array}{*{20}c} {K_{i} = k_{i} \cdot \frac{{V_{{\mathcal{S}}} }}{{V_{{\mathcal{M}}} }}} \\ \end{array}$$
Combining Eqs. 5 and 6, with the formal definition for activity being the product of the molar concentration and the activity coefficient, results in Eq. 7.
$$\begin{array}{*{20}c} {k_{i} = \frac{{\gamma_{{{\mathcal{M}},i}} }}{{\gamma_{{{\mathcal{S}},i}} }} \cdot \frac{{n_{{\mathcal{S}}} }}{{n_{{\mathcal{M}}} }}} \\ \end{array}$$
Schoenmakers et al. (1982) and Schoenmakers and de Galan (1981) previously related the chromatographic activity coefficient \(\gamma_{i}\) to the solubility parameters \(\delta\), Eq. 8, with the subscript f indicating the phase.
$$\begin{array}{*{20}c} {\ln \gamma_{f,i} = \frac{{V_{m,i} }}{RT} \cdot \left( {\delta_{i} - \delta_{f} } \right)^{2} } \\ \end{array}$$
Combining Eqs. 7 and 8, and replacing the molar volume and the universal gas constant with the cavitation volume and the Boltzmann constant (since the DFT cavitation volumes refer to single molecules), results in
$$k_{i} = \frac{{\exp \left( {\frac{{V_{\kappa ,i} }}{{k_{B} T}} \cdot \left( {\delta_{i} - \delta_{{\mathcal{M}}} } \right)^{2} } \right)}}{{\exp \left( {\frac{{V_{\kappa ,i} }}{{k_{B} T}} \cdot \left( {\delta_{i} - \delta_{{\mathcal{S}}} } \right)^{2} } \right)}} \cdot \frac{{n_{{\mathcal{S}}} }}{{n_{{\mathcal{M}}} }}$$
and after simplification
$$k_{i} = \exp \left( {\frac{{V_{\kappa ,i} }}{{k_{B} T}} \cdot \left( {\left( {\delta_{i} - \delta_{{\mathcal{M}}} } \right)^{2} - \left( {\delta_{i} - \delta_{{\mathcal{S}}} } \right)^{2} } \right)} \right) \cdot \frac{{n_{{\mathcal{S}}} }}{{n_{{\mathcal{M}}} }}$$
The natural logarithm of \(k_{i}\) is presented in Eq. 9.
$$\begin{array}{*{20}c} {\ln k_{i} = \frac{{V_{\kappa ,i} }}{{k_{B} T}} \cdot \left( {\left( {\delta_{i} - \delta_{{\mathcal{M}}} } \right)^{2} - \left( {\delta_{i} - \delta_{{\mathcal{S}}} } \right)^{2} } \right) + \ln \frac{{n_{{\mathcal{S}}} }}{{n_{{\mathcal{M}}} }}} \\ \end{array}$$
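Because Eq. 9 mixes molecular-scale and bulk quantities, a quick numerical check of the units can be helpful. The following sketch (ours, with made-up numbers) evaluates Eq. 9 with the cavitation volume in m³ and the solubility parameters in Pa^0.5, so that the first term is dimensionless.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J K^-1

def ln_k(V_kappa, T, delta_i, delta_mp, delta_sp, n_ratio):
    """Eq. 9: V_kappa in m^3, solubility parameters in Pa**0.5
    (1 MPa**0.5 = 1e3 Pa**0.5), n_ratio = n_S / n_M."""
    prefactor = V_kappa / (k_B * T)   # units of 1/Pa
    return prefactor * ((delta_i - delta_mp) ** 2
                        - (delta_i - delta_sp) ** 2) + math.log(n_ratio)

# Made-up numbers purely to show the order of magnitude of each term
print(ln_k(V_kappa=2.5e-28, T=298.15,
           delta_i=20.0e3, delta_mp=30.0e3, delta_sp=17.0e3, n_ratio=0.2))
```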
The Hansen solubility parameters are then introduced in the form of Hansen distances,
$$\begin{aligned} & \lambda_{{{\mathcal{M}},i}} = \left( {\delta_{i} - \delta_{{\mathcal{M}}} } \right)^{2} \\ & \lambda_{{{\mathcal{S}},i}} = \left( {\delta_{i} - \delta_{{\mathcal{S}}} } \right)^{2} \\ \end{aligned}$$
each encompassing the thermodynamic similarity between the analyte and the chromatographic phase. The Hansen distance for the analyte and SP is trivial to calculate, using the formal definition, Eq. 10.
$$\begin{array}{*{20}c} {\lambda_{{{\mathcal{S}},i}} = 4(\delta_{d,i} - \delta_{{d,{\mathcal{S}}}} )^{2} + (\delta_{p,i} - \delta_{{p,{\mathcal{S}}}} )^{2} + (\delta_{h,i} - \delta_{{h,{\mathcal{S}}}} )^{2} } \\ \end{array}$$
As the mobile phase is a mixture of water and ACN, the corresponding Hansen distance between the analyte and the MP is more complicated to calculate. It was assumed that the HSP of a mixture equal the weighted sums of the HSP of its pure components, with the molar fraction used as the weight. With this assumption, the Hansen distance between the analyte and the MP becomes Eq. 11.
$$\begin{array}{*{20}c} {\lambda_{{{\mathcal{M}},i}} = 4\left( {\delta_{d,i} - \mathop \sum \limits_{j = 1}^{n} x_{j} \delta_{{d,{\mathcal{M}},j}} } \right)^{2} + \left( {\delta_{p,i} - \mathop \sum \limits_{j = 1}^{n} x_{j} \delta_{{p,{\mathcal{M}},j}} } \right)^{2} + \left( {\delta_{h,i} - \mathop \sum \limits_{j = 1}^{n} x_{j} \delta_{{h,{\mathcal{M}},j}} } \right)^{2} } \\ \end{array}$$
The molar fractions of a two-component mixture add up to unity, which enables the transformation of Eq. 11 into Eq. 12, where x denotes the molar fraction of ACN.
$$\begin{aligned} \lambda_{{{\mathcal{M}},i}} & = 4\left( {\delta_{d,i} - x\delta_{d,1} - \left( {1 - x} \right)\delta_{d,2} } \right)^{2} + \left( {\delta_{p,i} - x\delta_{p,1} - \left( {1 - x} \right)\delta_{p,2} } \right)^{2} \\ & \quad + \left( {\delta_{h,i} - x\delta_{h,1} - \left( {1 - x} \right)\delta_{h,2} } \right)^{2} \\ \end{aligned}$$
Expanding the equation and factoring out the molar fraction, with further algebraic manipulation described in detail in the Additional file 1: section S1, leads to Eq. 13.
$$\begin{array}{*{20}c} {\lambda_{{{\mathcal{M}},i}} = \Lambda_{\alpha } x^{2} + \Lambda_{\beta } x + \Lambda_{\gamma } } \\ \end{array}$$
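The mixing assumption behind Eqs. 11 to 13 can be checked numerically: computing \(\lambda_{{{\mathcal{M}},i}}\) on a grid of molar fractions and fitting a second-degree polynomial recovers \(\Lambda_{\alpha}\), \(\Lambda_{\beta}\) and \(\Lambda_{\gamma}\). The sketch below uses placeholder HSP values, not those of the study.

```python
import numpy as np

# Placeholder HSP (delta_d, delta_p, delta_h) in MPa**0.5, illustrative only
HSP_ACN     = np.array([15.3, 18.0,  6.1])
HSP_WATER   = np.array([15.5, 16.0, 42.3])
HSP_ANALYTE = np.array([18.6, 10.8,  4.9])

def mixture_hsp(x_acn):
    """Molar-fraction-weighted HSP of the water/ACN mobile phase (assumption of Eq. 11)."""
    return x_acn * HSP_ACN + (1.0 - x_acn) * HSP_WATER

def lambda_mp(analyte, x_acn):
    """Analyte/mobile-phase Hansen distance, Eq. 12."""
    d = analyte - mixture_hsp(x_acn)
    return 4.0 * d[0] ** 2 + d[1] ** 2 + d[2] ** 2

# lambda_MP is quadratic in x (Eq. 13): a degree-2 fit recovers the coefficients
x = np.linspace(0.0, 1.0, 11)
coeffs = np.polyfit(x, [lambda_mp(HSP_ANALYTE, xi) for xi in x], 2)
print(coeffs)   # [Lambda_alpha, Lambda_beta, Lambda_gamma]
```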
By combining the Hansen distances and Eq. 9, we arrive at Eq. 14.
$$\begin{array}{*{20}c} {\ln k_{i} = \frac{{V_{\kappa ,i} }}{{k_{B} T}} \cdot \left( {\lambda_{{{\mathcal{M}},i}} - \lambda_{{{\mathcal{S}},i}} } \right) + \ln \frac{{n_{{\mathcal{S}}} }}{{n_{{\mathcal{M}}} }}} \\ \end{array}$$
The thermodynamic descriptor can then be defined as Eq. 15.
$$\begin{array}{*{20}c} {\Xi_{i} = \frac{{V_{\kappa ,i} }}{{k_{B} T}} \cdot \left( {\lambda_{{{\mathcal{M}},i}} - \lambda_{{{\mathcal{S}},i}} } \right)} \\ \end{array}$$
With the assumption that the logarithmic term in Eq. 14 is constant, and by introducing the thermodynamic descriptor, the central theoretical model equation, Eq. 16, is derived.
$$\begin{array}{*{20}c} {\ln k_{i} = \beta_{0} + \beta_{1} {\Xi }_{i} } \\ \end{array}$$
The model is based on Eq. 16—a simple linear regression, with regression coefficients denoted as \(\beta\). The Hansen distances are calculated for each combination of SP, MP composition and analyte type, to obtain unique values of the thermodynamic descriptor. The analyte cavitation volume is calculated using theoretical DFT calculations, as described in the Materials and methods section. The model relates the theoretical thermodynamic descriptor \({\Xi }_{i}\), which encompasses the variables of column temperature, Hansen distances and analyte cavitation volumes, with the experimentally determined retention factor \(\ln k_{i}\).
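For illustration only (the actual model construction in the study was done in R and Python, see Materials and methods), Eqs. 15 and 16 amount to the following fit; all numbers below are invented.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J K^-1

def descriptor(V_kappa, T, lam_mp, lam_sp):
    """Thermodynamic descriptor Xi_i of Eq. 15 (V_kappa in m^3, Hansen distances in Pa)."""
    return V_kappa / (k_B * T) * (lam_mp - lam_sp)

# Invented descriptor values and measured ln k for a single analyte
xi = np.array([1.8, 2.4, 3.1, 3.9, 4.6])
ln_k = np.array([-0.2, 0.3, 0.9, 1.6, 2.1])

beta1, beta0 = np.polyfit(xi, ln_k, 1)   # Eq. 16: ln k = beta0 + beta1 * Xi
print(beta0, beta1)
```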
Statistical analysis and general observations of the model
Plotting the theoretical thermodynamic descriptor \({\Xi }_{i}\) against the experimental \(\ln k_{i}\) reveals a pattern, which is presented in Fig. 1. Each point represents an average of three experimentally determined retention factors.
Plot of experimental \(\ln k_{i}\) values for the training set analytes (DMP, DEP, DBP) on two different SP (C8 and C18) vs. the thermodynamic descriptor \({\Xi }_{i}\)
The retention data of the individual analytes (DMP, DEP and DBP) are clearly separated, and two groups corresponding to the two studied SP (C8 and C18) are formed. Furthermore, differently oriented bands of the individual analytes are observed within the SP groups. Since the investigated analytes show different retention behaviours, the thermodynamic descriptor \({\Xi }_{i}\) is not general enough to completely describe their structural variability. A cumulative model that includes all data points is therefore not theoretically justified. Hence, six separate linear regression models were created, one for each analyte and the corresponding SP (singular models), together with two cumulative models covering all three analytes of the training set on each SP.
Statistical data processing of the results is presented in Table 1 and complemented with regression coefficients presented in the Additional file 1: Table S1.
Table 1 Results of the statistical analysis of the theoretical model. p-values below 0.05 denote a statistical significance of the whole model
Residual analysis was performed to investigate possible correlations among the residuals; the results are presented in Fig. 2. A correlation between the residual values is visible as a trend, indicated by the yellow line, that deviates from the horizontal zero line. The trends have minima at the data points corresponding to experiments at a MP composition of 60 vol. % ACN. The cumulative model (Fig. 2d) reveals even more pronounced residual correlations. According to these observations the models are not linear, even though the p-values suggest a statistically significant linear relationship. The same trend was observed in the experiments with the C18 SP.
Residual analysis of all models on SP C8. a DMP model, b DEP model, c DBP model, d cumulative model. The yellow line indicates the trend of the residuals
Evaluation of the training error and generalization error
Regardless of the observed nonlinearity, the model's training error was investigated. The predicted \(\ln k_{i}\) values were compared with the experimentally obtained values for DMP, DEP and DBP, and the relative error and root mean squared error of prediction (RMSEP) of the retention time were calculated (Table 2).
Table 2 Evaluation of the model's training error
The singular models that included only one analyte underestimated retention times, whereas the cumulative model for all three analytes at selected SP overestimated the retention times with larger deviations.
The evaluation of the generalization error with analytes from the test set revealed even greater differences (Table 3), which severely limits the ability to predict retention times for analytes not used in the construction of the model.
Table 3 RMSEP values for the test set analytes
The model can therefore hardly be generalized to retention prediction for chemically diverse analytes that were not included in its construction.
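For reference, the error metrics reported in Tables 2 and 3 can be reproduced generically as follows; the function and variable names are ours and the values are illustrative.

```python
import numpy as np

def rmsep(t_pred, t_obs):
    """Root mean squared error of prediction for retention times."""
    t_pred = np.asarray(t_pred, dtype=float)
    t_obs = np.asarray(t_obs, dtype=float)
    return float(np.sqrt(np.mean((t_pred - t_obs) ** 2)))

def relative_error(t_pred, t_obs):
    """Relative prediction error in percent, per analyte."""
    t_pred = np.asarray(t_pred, dtype=float)
    t_obs = np.asarray(t_obs, dtype=float)
    return 100.0 * (t_pred - t_obs) / t_obs

# Illustrative values only
print(rmsep([3.1, 4.8, 9.6], [3.0, 5.0, 10.0]))
print(relative_error([3.1, 4.8, 9.6], [3.0, 5.0, 10.0]))
```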
Comparison with solvent strength models
Unlike conventional retention modelling approaches that use isothermal experiments (Haddad et al. 2021), the HSP-based model includes column temperature as an independent variable used to calculate the thermodynamic descriptor. The residual standard error (RSE) value was calculated for each model (Fig. 3) to compare the novel theoretical model with conventional linear and nonlinear solvent strength models. All metrics of model quality and regression coefficients are presented in Additional file 1: Tables S2–S4. Data obtained at 25 °C were used, except for the model that included all three column temperatures. The solvent strength models achieve a lower RSE value for singular analyte models. Their RSE values for cumulative models are much higher than the values of the HSP model. The HSP model is therefore more suitable compared to the conventional solvent strength model when several analytes are modelled simultaneously. The low values of RSE for singular solvent strength models and their higher values for the cumulative models could indicate a greater degree of overfitting of the retention data.
A comparison of the RSE values for the investigated models. NLSS: nonlinear solvent strength model, LSS: linear solvent strength model, HSP_1T: HSP model with one temperature, HSP_3T: HSP model with three temperatures
Exploring beyond the theoretical model
Nonlinearity was further investigated with multiple linear regression. In addition to the thermodynamic descriptor, the column temperature and the total adsorption of ACN on the C18 SP, based on data from Buntz et al. (2012), were used as independent variables. The extended model is described by Eq. 17.
$$\begin{array}{*{20}c} {\ln k_{i} = \beta_{0} + \beta_{1} \Xi_{i} + \beta_{2} A + \beta_{3} T } \\ \end{array}$$
where A denotes the total molar adsorption of ACN on the SP. Since data for the total adsorption of ACN were available only for the C18 and not for the C8 SP, the models were built only for the data collected with the C18 column.
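Equation 17 is an ordinary least-squares problem; a minimal sketch (with invented data, not the study's measurements) is:

```python
import numpy as np

# Illustrative data: descriptor Xi, ACN adsorption A, temperature T, response ln k
Xi   = np.array([1.8, 2.4, 3.1, 3.9, 4.6, 2.9])
A    = np.array([2.1, 2.5, 2.8, 3.0, 3.1, 2.7])          # total ACN adsorption (arbitrary units)
T    = np.array([298.15, 298.15, 308.15, 308.15, 318.15, 318.15])
ln_k = np.array([-0.2, 0.3, 0.8, 1.4, 1.9, 0.6])

# Eq. 17: ln k = b0 + b1*Xi + b2*A + b3*T, solved by ordinary least squares
X = np.column_stack([np.ones_like(Xi), Xi, A, T])
beta, *_ = np.linalg.lstsq(X, ln_k, rcond=None)
print(beta)   # [b0, b1, b2, b3]
```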
The results of the statistical analysis of the multiple regression model, presented in Table 4, reveal an improvement in the statistical measures of regression quality, mainly as a decrease in the RSE values compared to those in Table 1. Regression coefficients are reported in the Additional file 1: Table S5. A subsequent residual analysis (reported in the Additional file 1: Figure S1) again revealed a nonlinear correlation of the residuals, indicating that the independent variables used in this model do not fully describe the chromatographic separation, just as for the theoretical model based on simple linear regression.
Table 4 Results of the statistical analysis of the multiple linear regression model. p values below 0.05 denote a statistical significance of the whole model
A statistical investigation of the experimental retention data alone provided insight into the observed nonlinear anomaly. The differences in retention between the three temperatures at a fixed MP composition are presented in Fig. 4. First, a general trend is observed in the effect of temperature and MP composition on retention: increasing the ACN content of the MP decreases the extent to which a change in temperature affects \(\ln k_{i}\) for all three analytes in the training set, which is evident from the interpolated trends. Second, anomalous retention is observed at 60 vol. % ACN; the anomaly coincides with the minima observed in the simple linear and multiple linear regression models. The trends for DMP and DEP are similar, while DBP shows a slightly different pattern with increasing ACN content. The general trend is observed for both investigated SP.
The differences in the experimental retention data calculated at constant mobile phase compositions and three different temperatures on both investigated SP
In this paper, we present a novel theoretical model for predicting the retention of analytes in RP chromatographic separations, together with a complete mathematical derivation of the central model equation. To date, this is the first study to use Hansen solubility parameters as possible thermodynamic descriptors for predicting retention behaviour in RP HPLC. We have found that Hansen solubility parameters cannot satisfactorily describe the multiple chemical interactions that occur in RP chromatographic separations. The theoretical model combining HSP with theoretically derived solute cavitation volumes is not applicable to the routine prediction of analyte retention times, owing to unfavourable training and generalization errors; moreover, the assumed linear relationship proves to be statistically unjustifiable. Incorporating the thermodynamics of dissolution and mixing, as suggested by Louwerse et al. (2017), could lead to a more accurate HSP-based theoretical model. Nevertheless, a comparison with conventional solvent strength models reveals that the HSP model is better at simultaneously predicting the retention of multiple analytes with only one regression model. This represents a shift from model development methods in which a separate regression equation is created for each analyte under study to describe its retention behaviour (McEachran et al. 2018). A multiple linear regression model, including column temperature and the total adsorption of ACN on the SP as additional independent variables, was also constructed to investigate the origin of the nonlinear behaviour deviating from the theory; the nonlinearity was still observed. To verify that the observed anomaly is not an artefact of the model, the numerical differences in the experimental retention data were calculated. A combined effect of MP composition and temperature on retention was found, and the anomaly at the MP composition of 60 vol. % ACN is again observed, leaving the origin of this nonlinear anomaly as an open research question.
Diethyl phthalate (DEP) (99.5%), dimethyl phthalate (DMP) (≥ 99%), dibutyl phthalate (DBP) (99%), 3-methylbenzoic acid (99%), 2,4,6-triiodophenol (97%), uracil (99%) and benzaldehyde (≥ 99%) were purchased from Sigma-Aldrich (Germany) and vanillin (99%) from Merck (Germany). Acetonitrile (ACN) (Fisher Scientific, UK, ≥ 99.9%) and deionised water, purified with the Milli-Q® system, were used as chromatographic solvents.
1 mg mL−1 stock solutions of DMP, DEP, DBP, benzaldehyde, 2,4,6-triiodophenol, 3-methylbenzoic acid and vanillin were prepared in ACN. A 1 mg mL−1 stock solution of uracil was prepared in deionised water. 10 mg L−1 solutions of investigated analytes were prepared by dilution with ACN. To each solution, uracil was added at a concentration of 10 mg L−1, which was used as a marker for the void time. Its determination using uracil was previously described as an acceptable estimation technique (Bidlingmeyer et al. 1991; Rimmer et al. 2002).
Instrumental conditions
The HPLC experiments were carried out on an Agilent Technologies system 1100 with degasser, quaternary pump, auto-sampler, column thermostat and a diode array detector. Two RP columns were used; YMC Triart C18 (150 × 4.6 mm, 5 μm, pore size of 12 nm) and YMC Triart C8 (150 × 4.6 mm, 5 μm, pore size of 12 nm). All chromatographic separations were isocratic, with a constant flow of 1 mL min−1. Absorption spectra were recorded with a diode array detector in the spectral range from 210 to 400 nm at a collection frequency of 5 Hz.
Experimental design for data acquisition
Retention data for uracil, DMP, DEP and DBP using the solution in ACN were collected using a full factorial experimental design by varying the column temperature (25, 35 and 45 °C), MP composition (40, 50, 60, 70, 80 vol. % ACN) and the SP (C8 and C18). Each data point was collected three times, by repeating the full factorial experimental sequence.
The solutions of analytes in the training and test sets were used in HPLC experiments at 25 °C on the C8 SP at 65, 75 and 85 vol. % of ACN in the MP for the collection of data, used in the investigation of the training error and the generalization error of the model. All retention data are presented in the Additional file 1: Tables S6–S12.
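For orientation, the full factorial design described above can be enumerated as follows; this is a sketch of the run list only, not the acquisition software used in the study.

```python
from itertools import product

temperatures = [25, 35, 45]              # column temperature, deg C
acn_fractions = [40, 50, 60, 70, 80]     # vol. % ACN in the mobile phase
columns = ["C8", "C18"]
replicates = 3

runs = [(T, acn, col, rep)
        for rep in range(1, replicates + 1)
        for T, acn, col in product(temperatures, acn_fractions, columns)]
print(len(runs))   # 3 temperatures x 5 compositions x 2 columns x 3 repetitions = 90 runs
```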
Ab initio density functional theory calculations
Density functional theory (DFT) calculations were implemented using GAMESS (version R2 September 30, 2020 for Microsoft Windows) (Barca et al. 2020) and the graphical interface software Winmostar (Winmostar—Student V10.1.3 for 64bit Windows). The B3LYP hybrid functional was used. The geometry was preliminarily optimized using a smaller 3-21G* basis set, followed by an aug-cc-pVTZ basis set for the final optimization. Water was simulated using the polarizable continuum model (Mennucci 2012) for the determination of solute cavitation volumes. The calculated analyte cavitation volumes are listed in the Additional file 1: Table S13.
HSP calculation
Analyte HSP were calculated according to the group contribution method described by Stefanis and Panayiotou (2008, 2012). In this technique, each analyte molecule is broken down into discrete first-order molecular fragments, e.g. aromatic ArC-H, methyl -CH3 and ester COO fragments. After identifying all first-order molecular fragments, second-order fragments can be identified; these include resonance-stabilized functional groups, e.g. aromatic ester ArCCOO fragments. Once a molecule is described by first- and second-order fragments, the dispersion, polar and hydrogen bonding contributions of all fragments are summed. These sums are then modified by adding constant terms derived by regression of a large set of experimentally determined HSP. n-Octane and n-octadecane were used as approximations of the C8 and C18 SP, respectively, and their HSP were calculated using the same method. The calculated HSP are presented in the Additional file 1: Table S14. HSP for deionised water and ACN, Additional file 1: Table S15, were taken from tabulated experimental data provided by Hansen (2007).
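A schematic sketch of the group-contribution bookkeeping is given below. The group values and constants are placeholders, not the published Stefanis and Panayiotou parameters, which must be taken from the original tables for any real calculation.

```python
# First- and second-order group contributions to (delta_d, delta_p, delta_h) in MPa**0.5.
# All numbers below are PLACEHOLDERS for illustration only.
FIRST_ORDER = {
    "ArC-H": (0.8, 0.1, 0.2),
    "-CH3":  (0.5, -0.2, -0.3),
    "-COO-": (0.3, 1.5, 1.0),
}
SECOND_ORDER = {
    "ArC-COO-": (0.1, 0.4, 0.2),
}
CONSTANTS = (17.0, 7.0, 7.0)   # regression constants (also placeholders)

def hsp_group_contribution(first_counts, second_counts):
    """Sum first- and second-order fragment contributions and add the
    regression constants, following the spirit of the method."""
    totals = list(CONSTANTS)
    for table, counts in ((FIRST_ORDER, first_counts), (SECOND_ORDER, second_counts)):
        for group, n in counts.items():
            for k in range(3):
                totals[k] += n * table[group][k]
    return tuple(totals)

# e.g. a rough fragment inventory for dimethyl phthalate (illustrative)
print(hsp_group_contribution({"ArC-H": 4, "-CH3": 2, "-COO-": 2},
                             {"ArC-COO-": 2}))
```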
Statistical analysis and model construction
All statistical analyses were implemented using R (ver. 4.0.2) within Rstudio (ver. 1.3.1056). R was also used in the production of the figures. The retention prediction model was implemented using Python (ver. 3.6.1) within PyCharm (2019.3.5 Professional Edition).
All data generated and analysed in this study have been provided in the manuscript and the supporting information.
RP: Reversed-phase
SP: Stationary phase
MP: Mobile phase
LFER: Linear free-energy relationship
HSP: Hansen solubility parameters
DEP: Diethyl phthalate
DMP: Dimethyl phthalate
DBP: Dibutyl phthalate
DFT: Density functional theory
RMSEP: Root mean square error of prediction
RSE: Residual standard error
\({\mathcal{S}}\): Index representing the SP
\({\mathcal{M}}\): Index representing the MP
\(i\): Index representing a general analyte
\(j\): Index representing a general component of the MP
\(d\): Index representing dispersion partial solubility parameters
\(p\): Index representing polar partial solubility parameters
\(h\): Index representing hydrogen bonding partial solubility parameters
Abraham MH, Ibrahim A, Zissimos AM. Determination of sets of solute descriptors from chromatographic measurements. J Chromatogr A. 2004;1037:29–47.
Adamska K, Voelkel A, Héberger K. Selection of solubility parameters for characterization of pharmaceutical excipients. J Chromatogr A. 2007;1171:90–7.
Adamska K, Voelkel A, Berlińska A. The solubility parameter for biomedical polymers—application of inverse gas chromatography. J Pharm Biomed Anal. 2016;127:202–6.
Amos RIJ, Haddad PR, Szucs R, Dolan JW, Pohl CA. Molecular modeling and prediction accuracy in Quantitative Structure-Retention Relationship calculations for chromatography. Trends Anal Chem. 2018;105:352–9.
Barca GMJ, Bertoni C, Carrington L, Datta D, De Silva N, Deustua JE, et al. Recent developments in the general atomic and molecular electronic structure system. J Chem Phys. 2020;152:154102.
Bidlingmeyer BA, Warren FV, Weston A, Nugent C, Froehlich PM. Some practical considerations when determining the void volume in high-performance liquid chromatography. J Chromatogr Sci. 1991;29:275–9.
Buntz S, Figus M, Liu Z, Kazakevich YV. Excess adsorption of binary aqueous organic mixtures on various reversed-phase packing materials. J Chromatogr A. 2012;1240:104–12.
Gilar M, Hill J, McDonald TS, Gritti F. Utility of linear and nonlinear models for retention prediction in liquid chromatography. J Chromatogr A. 2020;1613:460690.
Haddad PR, Taraji M, Szucs R. Prediction of analyte retention time in liquid chromatography. Anal Chem. 2021;93:228–56.
Hansen CM. The three dimensional solubility parameter—key to paint component affinities I.—Solvents, plasticizers, polymers, and resins. J Paint Technol. 1967;39:104–17.
Hansen CM. Hansen solubility parameters: a user's handbook. 2nd ed. Boca Raton: CRC Press; 2007.
Hildebrand JH. A critique of the theory of solubility of non-electrolytes. Chem Rev. 1949;44:37–45.
Horváth C, Melander W, Molnár I. Solvophobic interactions in liquid chromatography with nonpolar stationary phases. J Chromatogr A. 1976;125:129–56.
Louwerse MJ, Maldonado A, Rousseau S, Moreau-Masselon C, Roux B, Rothenberg G. Revisiting Hansen solubility parameters by including thermodynamics. ChemPhysChem. 2017;18:2999–3006.
McEachran AD, Mansouri K, Newton SR, Beverly BEJ, Sobus JR, Williams AJ. A comparison of three liquid chromatography (LC) retention time prediction models. Talanta. 2018;182:371–9.
Mennucci B. Polarizable continuum model. Wiley Interdiscip Rev Comput Mol Sci. 2012;2:386–404.
Moldoveanu S, David V. Estimation of the phase ratio in reversed-phase high-performance liquid chromatography. J Chromatogr A. 2015;1381:194–201.
Neue UD, Kuss H-J. Improved reversed-phase gradient retention modeling. J Chromatogr A. 2010;1217:3794–803.
Ovčačíková M, Lísa M, Cífková E, Holčapek M. Retention behavior of lipids in reversed-phase ultrahigh-performance liquid chromatography-electrospray ionization mass spectrometry. J Chromatogr A. 2016;1450:76–85.
Poole CF. An interphase model for retention in liquid chromatography. J Planar Chromatogr Mod TLC. 2015;28:98–105.
Poole CF. Influence of solvent effects on retention of small molecules in reversed-phase liquid chromatography. Chromatographia. 2019;82:49–64.
Rimmer CA, Simmons CR, Dorsey JG. The measurement and meaning of void volumes in reversed-phase liquid chromatography. J Chromatogr A. 2002;965:219–32.
Sahu PK, Ramisetti NR, Cecchi T, Swain S, Patro CS, Panda J. An overview of experimental designs in HPLC method development and validation. J Pharm Biomed Anal. 2018;147:590–611.
Sánchez-Camargo AP, Bueno M, Parada-Alfonso F, Cifuentes A, Ibáñez E. Hansen solubility parameters for selection of green extraction solvents. Trends Anal Chem. 2019;118:227–37.
Schoenmakers PJ, de Galan L. Systematic study of ternary solvent behaviour in reversed-phase liquid chromatography. J Chromatogr A. 1981;218:261–84.
Schoenmakers PJ, Billiet HAH, de Galan L. The solubility parameter as a tool in understanding liquid chromatography. Chromatographia. 1982;15:205–14.
Snyder LR, Dolan JW, Carr PW. The hydrophobic-subtraction model of reversed-phase column selectivity. J Chromatogr A. 2004;1060:77–116.
Spears BK, Brase J, Bremer P-T, Chen B, Field J, Gaffney J, et al. Deep learning: a guide for practitioners in the physical sciences. Phys Plasmas. 2018;25:080901.
Stefanis E, Panayiotou C. Prediction of Hansen solubility parameters with a new group-contribution method. Int J Thermophys. 2008;29:568–85.
Stefanis E, Panayiotou C. A new expanded solubility parameter approach. Int J Pharm. 2012;426:29–43.
Tyteca E, Desmet G. On the inherent data fitting problems encountered in modeling retention behavior of analytes with dual retention mechanism. J Chromatogr A. 2015;1403:81–95.
Vailaya A. Fundamentals of reversed phase chromatography: thermodynamic and exothermodynamic treatment. J Liq Chromatogr Relat Technol. 2005;28:965–1054.
Zhao G, Ni H, Jia L, Ren S, Fang G. Quantitative analysis of relationship between Hansen solubility parameters and properties of alkali lignin/acrylonitrile–butadiene–styrene blends. ACS Omega. 2018a;3:9722–8.
Zhao G, Ni H, Ren S, Fang G. Correlation between solubility parameters and properties of alkali lignin/PVA composites. Polymers. 2018b;10:290.
The authors acknowledge the financial support from EU Horizon 2020 APACHE project and the Slovenian Research Agency.
European Union's Horizon 2020 research and innovation program: APACHE project Grant Agreement No. 814496. Slovenian Research Agency: research core funding No. P1-0153.
Faculty of Chemistry and Chemical Technology, University of Ljubljana, Večna pot 113, 1000, Ljubljana, Slovenia
David Ribar, Tjaša Rijavec & Irena Kralj Cigić
DR designed and conducted the experiments, derived the theoretical model, interpreted the results, and drafted the manuscript. TR assisted with the experiments and revised the manuscript. IKC supervised the research and provided a critical revision of the manuscript. All authors read and approved the final manuscript.
Correspondence to Irena Kralj Cigić.
Additional file 1: Supporting information.
Ribar, D., Rijavec, T. & Kralj Cigić, I. An exploration into the use of Hansen solubility parameters for modelling reversed-phase chromatographic separations. J Anal Sci Technol 13, 12 (2022). https://doi.org/10.1186/s40543-022-00322-9
Received: 03 September 2021
Accepted: 29 March 2022
Molecular modelling
Chemometrics
Retention prediction
December 2017, 37(12): 6099-6121. doi: 10.3934/dcds.2017262
Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption
Johannes Lankeit 1 and Yulan Wang 2
Institut für Mathematik, Universität Paderborn, Warburger Str. 100, 33098 Paderborn, Germany
School of Science, Xihua University, Chengdu 610039, China
* Corresponding author: Johannes Lankeit
Received August 2016 Revised July 2017 Published August 2017
Fund Project: J. Lankeit acknowledges support of the Deutsche Forschungsgemeinschaft within the project Analysis of chemotactic cross-diffusion in complex frameworks. Y. Wang was supported by the NNSF of China (no. 11501457).
This paper deals with the homogeneous Neumann boundary-value problem for the chemotaxis-consumption system
$$\left\{ \begin{aligned} & u_{t}=\Delta u-\chi \nabla \cdot \left( u\nabla v \right)+\kappa u-\mu u^{2}, & \quad x\in \Omega ,\ t>0, \\ & v_{t}=\Delta v-uv, & \quad x\in \Omega ,\ t>0, \\ \end{aligned} \right.$$
in $N$-dimensional bounded smooth domains for suitably regular positive initial data. We shall establish the existence of a global bounded classical solution for suitably large $\mu$ and prove that for any $\mu >0$ there exists a weak solution. Moreover, in the case of $\kappa >0$, convergence to the constant equilibrium $\left(\frac{\kappa }{\mu },0\right)$ is shown.
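The analysis in the paper is purely analytical; as a stand-alone illustration of the structure of the system (not taken from the paper), a one-dimensional explicit finite-difference discretization with homogeneous Neumann boundary conditions and arbitrary parameter values could look as follows.

```python
import numpy as np

# Arbitrary parameters, for illustration only
chi, kappa, mu = 1.0, 1.0, 1.1
L, N, dt, steps = 1.0, 50, 1.0e-4, 20000
dx = L / N
x = np.linspace(0.0, L, N + 1)

u = 1.0 + 0.1 * np.cos(np.pi * x)   # initial cell density
v = np.ones_like(x)                 # initial signal (oxygen) concentration

def lap(w):
    """1D Laplacian with homogeneous Neumann BC via ghost-point reflection."""
    we = np.concatenate(([w[1]], w, [w[-2]]))
    return (we[2:] - 2.0 * we[1:-1] + we[:-2]) / dx ** 2

def grad(w):
    """Central difference with the same reflective boundary treatment."""
    we = np.concatenate(([w[1]], w, [w[-2]]))
    return (we[2:] - we[:-2]) / (2.0 * dx)

for _ in range(steps):
    flux = u * grad(v)                                   # chemotactic flux u * v_x
    u_new = u + dt * (lap(u) - chi * grad(flux) + kappa * u - mu * u ** 2)
    v_new = v + dt * (lap(v) - u * v)
    u, v = u_new, v_new

# u should drift towards the constant equilibrium kappa/mu, while v decays
print(u.mean(), v.max())
```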
Keywords: Chemotaxis, logistic source, global existence, boundedness, asymptotic stability, weak solution.
Mathematics Subject Classification: 35Q92, 35K55, 35A01, 35B40, 35D30, 92C17.
Citation: Johannes Lankeit, Yulan Wang. Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12) : 6099-6121. doi: 10.3934/dcds.2017262
Simulation of mechanisms modeled by geometrically-exact beams using Rodrigues rotation parameters
Alfredo Gay Neto 1
Computational Mechanics volume 59, pages 459–481 (2017)
We present mathematical models for joints, springs, dashpots and follower loads, to be used together with geometrically-exact beam finite elements to simulate mechanisms. The rotations are described using Rodrigues parameters. An updated-Lagrangian approach is employed, leading to the possibility of finite rotations involving many turns, overcoming possible singularities in the rotation tensor. We present formulations for spherical, hinge and universal (Cardan) joints, which are enforced by Lagrange multipliers. For the hinge joint, a torsional spring with a nonlinear damper model is presented. A geometric-nonlinear translational spring/dashpot model is proposed, such as follower loads. All formulations are presented detailing their contribution to the model weak form and tangent operator. These are employed together with implicit time-integration schemes. Numerical examples are performed, showing that the proposed formulations are able to model complex spatial mechanisms. Usage of the models together with contact interaction between beams is explored by a cam/follower mechanism example.
The author would like to acknowledge Prof. Paulo de Mattos Pimenta for the discussions. The author also acknowledges the financial support by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo) under the Grant 2015/11655-3 and by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) under the Grant 308190/2015-7.
Polytechnic School at University of São Paulo, São Paulo, Brazil
Alfredo Gay Neto
Correspondence to Alfredo Gay Neto.
Appendix 1: Terms of hinge and universal joint tangent operators
Some terms presented in the development of the tangent operator of the hinge and universal joints are detailed in this appendix. The terms that appear in Eqs. (26) and (51) are given by:
$$\begin{aligned} \mathbf{r}_{\mathrm{HA}} ,_{\varvec{\upalpha }_\mathrm{A}^\Delta }= & {} {\uplambda }_{\mathrm{H}1} \mathbf{Y}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} ,\varvec{\upalpha }_\mathrm{A}^\Delta ,\varvec{\upalpha }_\mathrm{A}^\Delta ;\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) \nonumber \\&+\,{\uplambda }_{\mathrm{H}2} \mathbf{Y}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} ,\varvec{\upalpha }_\mathrm{A}^\Delta ,\varvec{\upalpha }_\mathrm{A}^\Delta ;\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} } \right) , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{HB}} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta }= & {} {\uplambda }_{\mathrm{H}1} \mathbf{Y}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} ,\varvec{\upalpha }_\mathrm{B}^\Delta ,\varvec{\upalpha }_\mathrm{B}^\Delta ;\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right) \nonumber \\&+\,{\uplambda }_{\mathrm{H}2} \mathbf{Y}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} ,\varvec{\upalpha }_\mathrm{B}^\Delta ,\varvec{\upalpha }_\mathrm{B}^\Delta ;\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right) , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{HA}} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta }= & {} {\uplambda }_{\mathrm{H}1} \left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{A} } \right) ^{\mathrm{T}}\nonumber \\&\times \left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{B} } \right) +\nonumber \\&+\,{\uplambda }_{\mathrm{H}2} \left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{A} } \right) ^{\mathrm{T}}\nonumber \\&\times \left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{B} } \right) , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{HB}} ,_{\varvec{\upalpha }_\mathrm{A}^\Delta } ={\mathbf{r}_{\mathrm{HA}} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta }}{^{\mathrm{T}}} , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{HA}} ,_{{\varvec{\uplambda }}_\mathrm{H} }= & {} \left[ -\left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{A} } \right) ^{\mathrm{T}}{} \mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} \right. \nonumber \\&\left. -\left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{A} } \right) ^{\mathrm{T}}\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i}\right] , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_\mathrm{H} ,_{\varvec{\upalpha }_\mathrm{A}^\Delta } ={\mathbf{r}_{\mathrm{HA}} ,_{{\varvec{\uplambda }}_\mathrm{H} }} {^{\mathrm{T}}}, \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{HB}} ,_{{\varvec{\uplambda }}_\mathrm{H} }= & {} \left[ -\left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{B} } \right) ^{\mathrm{T}}{} \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} \right. \nonumber \\&\left. -\left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{B} } \right) ^{\mathrm{T}}{} \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} \right] \quad \hbox {and } \end{aligned}$$
$$\begin{aligned} \mathbf{r}_\mathrm{H} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta }= & {} {\mathbf{r}_{\mathrm{HB}} ,_{{\varvec{\uplambda }}_\mathrm{H} }} {^{\mathrm{T}}}. \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{UA}} ,_{\varvec{\upalpha }_\mathrm{A}^\Delta }= & {} \mathbf{Y}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1A}}^\mathrm{i} ,\varvec{\upalpha }_\mathrm{A}^\Delta ,\varvec{\upalpha }_\mathrm{A}^\Delta ;\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} } \right) {\uplambda }_\mathrm{U} , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{UB}} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta }= & {} \mathbf{Y}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} ,\varvec{\upalpha }_\mathrm{B}^\Delta ,\varvec{\upalpha }_\mathrm{B}^\Delta ;\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1A}}^\mathrm{i} } \right) {\uplambda }_\mathrm{U} , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{UA}} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta }= & {} \left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1A}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{A} } \right) ^{\mathrm{T}}\left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{B} } \right) {\uplambda }_\mathrm{U},\nonumber \\ \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{UB}} ,_{\varvec{\upalpha }_\mathrm{A}^\Delta }= & {} {\mathbf{r}_{\mathrm{UA}} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta }} {^{\mathrm{T}}}, \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{UA}} ,_{{\uplambda }_\mathrm{U} }= & {} -\left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1A}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{A} } \right) ^{\mathrm{T}}\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} , \end{aligned}$$
$$\begin{aligned} \mathbf{r}_\mathrm{U} ,_{\varvec{\upalpha }_\mathrm{A}^\Delta } ={\mathbf{r}_{\mathrm{UA}} ,_{{\uplambda }_\mathrm{U} }} {^{\mathrm{T}}}, \end{aligned}$$
$$\begin{aligned} \mathbf{r}_{\mathrm{UB}} ,_{{\uplambda }_\mathrm{U} }= & {} -\left( {\hbox {skew}\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{2B}}^\mathrm{i} } \right) {\varvec{\Xi }}_\mathrm{B} } \right) ^{\mathrm{T}}{} \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1A}}^\mathrm{i} \quad \hbox { and} \end{aligned}$$
$$\begin{aligned} \mathbf{r}_\mathrm{U} ,_{\varvec{\upalpha }_\mathrm{B}^\Delta } ={\mathbf{r}_{\mathrm{UB}} ,_{{\uplambda }_\mathrm{U} }} {^{\mathrm{T}}}. \end{aligned}$$
The operator \(\mathbf{Y}\left( {\mathbf{Q}_\mathrm{k}^\Delta \mathbf{a},\varvec{\upalpha }_\mathrm{k}^\Delta ,\varvec{\upalpha }_\mathrm{k}^\Delta ;\mathbf{v}} \right) \) used in Eqs. (84), (85), (92) and (93) is defined by:
$$\begin{aligned} \mathbf{Y}\left( {\mathbf{Q}_\mathrm{k}^\Delta \mathbf{a},\varvec{\upalpha }_\mathrm{k}^\Delta ,\varvec{\upalpha }_\mathrm{k}^\Delta ;\mathbf{v}} \right) :=\left[ {\hat{\mathbf{F}}\left( {\mathbf{A}_\mathrm{k} \mathbf{v}} \right) +{\varvec{\Xi }}_\mathrm{k}^\mathrm{T} \mathbf{VA}_\mathrm{k} {\varvec{\Xi }}_\mathrm{k} } \right] \end{aligned}$$
with \(\mathbf{V}=\hbox {skew}\left( \mathbf{v} \right) \), \(\mathbf{A}_\mathrm{k} =\hbox {skew}\left( {\mathbf{Q}_\mathrm{k}^\Delta \mathbf{a}} \right) \).
The operator \(\hat{\mathbf{F}}\), applied to a generic vector \(\mathbf{v}_\mathrm{k} \left( {\varvec{\upalpha }_\mathrm{k}^\Delta } \right) \), is given by:
$$\begin{aligned} \hat{\mathbf{F}}\left( {\mathbf{v}_\mathrm{k} } \right) =-\frac{1}{2}\left( {\frac{4}{4+{\upalpha }_\mathrm{k}^2 }} \right) \left[ {\left( {{\varvec{\Xi }}_\mathrm{k}^\mathrm{T} \mathbf{v}_\mathrm{k} } \right) \otimes \varvec{\upalpha }_\mathrm{k}^\Delta -\mathbf{V}_\mathrm{k} } \right] \end{aligned}$$
with \(\mathbf{V}_\mathrm{k} =\hbox {skew}\left( {\mathbf{v}_\mathrm{k} } \right) \) and \({\upalpha }_\mathrm{k} =\Vert \varvec{\upalpha }_\mathrm{k}^\Delta \Vert \).
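To make these operator definitions concrete, the following minimal numerical sketch (Python/NumPy; function names are illustrative, not from the paper) evaluates \(\hat{\mathbf{F}}\) and \(\mathbf{Y}\) exactly as written above, assuming the closed-form Rodrigues-parameter expressions for \(\mathbf{Q}_\mathrm{k}^\Delta \) and \({\varvec{\Xi }}_\mathrm{k} \) that appear in the expansion of Appendix 3.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def q_delta(alpha):
    """Incremental rotation tensor for Rodrigues parameters alpha
    (closed form used in the expansion of Appendix 3)."""
    A = skew(alpha)
    c = 4.0 / (4.0 + alpha @ alpha)
    return np.eye(3) + c * (A + 0.5 * A @ A)

def xi(alpha):
    """Operator Xi_k, the transpose of the factor 4/(4+alpha^2)(I - A/2)
    appearing in Appendix 3."""
    c = 4.0 / (4.0 + alpha @ alpha)
    return c * (np.eye(3) + 0.5 * skew(alpha))

def f_hat(v_k, alpha):
    """F-hat applied to a vector v_k(alpha):
    -(1/2) * 4/(4+alpha^2) * [ (Xi^T v_k) (outer) alpha - skew(v_k) ]."""
    c = 4.0 / (4.0 + alpha @ alpha)
    return -0.5 * c * (np.outer(xi(alpha).T @ v_k, alpha) - skew(v_k))

def y_op(a, alpha, v):
    """Operator Y(Q a, alpha, alpha; v) = F-hat(A_k v) + Xi^T V A_k Xi,
    with V = skew(v) and A_k = skew(Q_delta(alpha) a)."""
    A_k = skew(q_delta(alpha) @ a)
    X = xi(alpha)
    return f_hat(A_k @ v, alpha) + X.T @ skew(v) @ A_k @ X
```

The same helper functions (skew, q_delta, xi) are re-stated in the later sketches so that each remains self-contained.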
Appendix 2: Algebraic details of hinge torsion spring variation
In this appendix we present the derivation of \({\updelta \uptheta }^{\Delta }\), from Eq. (33). We defined the angle \({\uptheta }^{\Delta }\) such that
$$\begin{aligned} \hbox {sin}{\uptheta }^{\Delta }=\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} \times \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) \cdot \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} . \end{aligned}$$
Then, one may write:
$$\begin{aligned} {\updelta \uptheta }^{\Delta }= & {} {\updelta }\left[ {\hbox {asin}\left[ {\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} \times \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) \cdot \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right] } \right] \nonumber \\= & {} \frac{1}{\sqrt{1-\sin ^{2}\left( {\theta ^{\Delta }} \right) }}\left( {{\updelta \uptheta }_1^\Delta +{\updelta \uptheta }_2^\Delta +{\updelta \uptheta }_3^\Delta } \right) \end{aligned}$$
$$\begin{aligned} {\updelta \uptheta }_1^\Delta= & {} \left( {{\updelta }\left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) \times \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) \cdot \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} , \end{aligned}$$
$$\begin{aligned} {\updelta \uptheta }_2^\Delta= & {} \left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} \times {\updelta }\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) } \right) \cdot \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} \quad \hbox { and } \end{aligned}$$
$$\begin{aligned} {\updelta \uptheta }_3^\Delta= & {} \left( {\mathbf{Q}_\mathrm{B}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} \times \mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{1B}}^\mathrm{i} } \right) \cdot {\updelta }\left( {\mathbf{Q}_\mathrm{A}^\Delta \mathbf{e}_{\mathrm{3A}}^\mathrm{i} } \right) . \end{aligned}$$
The terms (104) and (105) can be developed using the result from Eq. (19) and the properties of the scalar and vector triple products, leading to
$$\begin{aligned} {\updelta \uptheta }_1^\Delta= & {} -\cos {\uptheta }^{\Delta }\mathbf{e}_{\mathrm{3A}}^{\mathrm{i}+1} \cdot {\varvec{\Xi }}_\mathrm{B} {\updelta }\varvec{\upalpha }_\mathrm{B} \quad \hbox {and} \end{aligned}$$
$$\begin{aligned} {\updelta \uptheta }_2^\Delta= & {} +\cos {\uptheta }^{\Delta }\mathbf{e}_{\mathrm{3A}}^{\mathrm{i}+1} \cdot {\varvec{\Xi }}_\mathrm{A} {\updelta }\varvec{\upalpha }_\mathrm{A} . \end{aligned}$$
The term (106) is null. Then using (107) and (108) in (103) leads to
$$\begin{aligned} {\updelta \uptheta }^{\Delta }=\mathbf{e}_{\mathrm{3A}}^{\mathrm{i}+1} \cdot {\varvec{\Xi }}_\mathrm{A} {\updelta }\varvec{\upalpha }_\mathrm{A} -\mathbf{e}_{\mathrm{3A}}^{\mathrm{i}+1} \cdot {\varvec{\Xi }}_\mathrm{B} {\updelta }\varvec{\upalpha }_\mathrm{B} , \end{aligned}$$
which can be used to obtain the weak form contribution due to the torsional spring in the hinge constraint.
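As a numerical sanity check of Eq. (109), the sketch below (Python/NumPy; the function names, material directors and the chosen configuration are illustrative assumptions, not from the paper) compares the analytical variation with a central finite difference of \(\hbox {asin}\) applied to the defining expression for \(\hbox {sin}{\uptheta }^{\Delta }\) above, for a hinge-compatible configuration in which both rotations are about the hinge axis.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def q_delta(alpha):
    # Incremental rotation tensor in terms of the Rodrigues parameters (Appendix 3).
    A = skew(alpha)
    c = 4.0 / (4.0 + alpha @ alpha)
    return np.eye(3) + c * (A + 0.5 * A @ A)

def xi(alpha):
    # Tangent operator Xi implied by the expansion in Appendix 3.
    c = 4.0 / (4.0 + alpha @ alpha)
    return c * (np.eye(3) + 0.5 * skew(alpha))

e1B = np.array([1.0, 0.0, 0.0])   # material director e_1B^i (assumed choice)
e3A = np.array([0.0, 0.0, 1.0])   # material hinge axis e_3A^i (assumed choice)

def theta(alpha_A, alpha_B):
    # Angle defined by sin(theta) = (Q_B e_1B x Q_A e_1B) . (Q_A e_3A).
    QA, QB = q_delta(alpha_A), q_delta(alpha_B)
    return np.arcsin(np.cross(QB @ e1B, QA @ e1B) @ (QA @ e3A))

# Hinge-compatible base configuration: both rotations about the hinge axis.
aA, aB = 0.3 * e3A, 0.7 * e3A
dA = np.array([0.2, -0.1, 0.4])   # arbitrary variation of alpha_A
dB = np.array([-0.3, 0.5, 0.1])   # arbitrary variation of alpha_B

# Analytical variation, Eq. (109): e_3A^{i+1}.(Xi_A dA) - e_3A^{i+1}.(Xi_B dB).
e3A_new = q_delta(aA) @ e3A
analytic = e3A_new @ (xi(aA) @ dA) - e3A_new @ (xi(aB) @ dB)

# Central finite difference of theta along the same variation direction.
eps = 1e-6
numeric = (theta(aA + eps * dA, aB + eps * dB)
           - theta(aA - eps * dA, aB - eps * dB)) / (2.0 * eps)

print(analytic, numeric)   # the two values agree up to finite-difference error
```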
Appendix 3: Algebraic details on the follower moment contributions
In expression (81), we used the property: \({\varvec{\Xi }}_\mathrm{A}^\mathrm{T} \mathbf{Q}_\mathrm{A}^\Delta ={\varvec{\Xi }}_\mathrm{A}\). Since this property is not trivial, we sketch its algebraic proof here. The proof relies on a property of skew-symmetric tensors: \(\mathbf{A}_\mathrm{k}^3 =-{\upalpha }^{2}{} \mathbf{A}_\mathrm{k} \). One can write:
$$\begin{aligned} {\varvec{\Xi }}_\mathrm{A}^\mathrm{T} \mathbf{Q}_\mathrm{A}^\Delta= & {} \frac{4}{4+{\upalpha }^{2}}\left( {\mathbf{I}-\frac{1}{2}{} \mathbf{A}_\mathrm{k} } \right) \left[ {\mathbf{I}+\frac{4}{4+{\upalpha }^{2}}\left( {\mathbf{A}_\mathrm{k} +\frac{1}{2}{} \mathbf{A}_\mathrm{k}^2 } \right) } \right] \nonumber \\= & {} {\varvec{\Xi }}_\mathrm{A} {-}\frac{4}{4{+}{\upalpha }^{2}}\mathbf{A}_\mathrm{k} {+}\left( {\frac{4}{4{+}{\upalpha }^{2}}} \right) ^{2}\mathbf{A}_\mathrm{k} {-}\frac{1}{4}\left( {\frac{4}{4{+}{\upalpha }^{2}}} \right) ^{2}{} \mathbf{A}_\mathrm{k}^3 \nonumber \\= & {} {\varvec{\Xi }}_\mathrm{A} +\frac{4}{4+{\upalpha }^{2}}{} \mathbf{A}_\mathrm{k} \left[ {-1+\frac{4}{4+{\upalpha }^{2}}+\frac{1}{4}{\upalpha }^{2}\frac{4}{4+{\upalpha }^{2}}} \right] \nonumber \\= & {} {\varvec{\Xi }}_\mathrm{A}. \end{aligned}$$
In the development of Eq. (82), we performed the following algebraic manipulation in order to write the tangent operator contribution due to the follower moments:
$$\begin{aligned} \Delta \left( {{\varvec{\Xi }}_\mathrm{A} \mathbf{m}_\mathrm{A}^\mathrm{i} } \right)= & {} \Delta {\varvec{\Xi }}_\mathrm{A} \mathbf{m}_\mathrm{A}^\mathrm{i} =\frac{1}{2}\frac{4}{4+{\upalpha }^{2}} \left[ {\Delta \mathbf{A}_\mathrm{k} -\left( {\varvec{\upalpha }_\mathrm{k} \cdot \Delta \varvec{\upalpha }_\mathrm{k} } \right) {\varvec{\Xi }}_\mathrm{A}} \right] \mathbf{m}_\mathrm{A}^\mathrm{i} \nonumber \\= & {} \frac{1}{2}\frac{4}{4+{\upalpha }^{2}}\left[ {\Delta \varvec{\upalpha }_\mathrm{k} \times \mathbf{m}_\mathrm{A}^\mathrm{i} {-}{\varvec{\Xi }}_\mathrm{A} \mathbf{m}_\mathrm{A}^\mathrm{i} \left( {\varvec{\upalpha }_\mathrm{k} {\cdot } \Delta \varvec{\upalpha }_\mathrm{k} } \right) } \right] \nonumber \\= & {} -\frac{1}{2}\frac{4}{4+{\upalpha }^{2}}\left( {\mathbf{M}_\mathrm{A}^\mathrm{i} {+}{\varvec{\Xi }}_\mathrm{A} \mathbf{m}_\mathrm{A}^\mathrm{i} \otimes \varvec{\upalpha }_\mathrm{A}^\Delta } \right) \Delta \varvec{\upalpha }_\mathrm{A}^\Delta .\nonumber \\ \end{aligned}$$
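The identity \({\varvec{\Xi }}_\mathrm{A}^\mathrm{T} \mathbf{Q}_\mathrm{A}^\Delta ={\varvec{\Xi }}_\mathrm{A}\) and the skew-tensor property used in its proof can also be confirmed numerically; a minimal self-contained check (Python/NumPy, illustrative only) is:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

alpha = np.array([0.3, -0.7, 1.1])        # arbitrary Rodrigues parameters
A = skew(alpha)
c = 4.0 / (4.0 + alpha @ alpha)
Xi = c * (np.eye(3) + 0.5 * A)            # Xi_A
Q = np.eye(3) + c * (A + 0.5 * A @ A)     # Q_A^Delta

# Skew-symmetric tensor property used in the proof: A^3 = -|alpha|^2 A.
assert np.allclose(A @ A @ A, -(alpha @ alpha) * A)
# The non-trivial identity of this appendix: Xi^T Q = Xi.
assert np.allclose(Xi.T @ Q, Xi)
```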
Appendix 4: Time-integration of finite element models with constraints
The dynamics of a continuum solid body constitutes an initial and boundary value problem. Such a body may be discretized in space by employing the finite element method, leading to a system of ordinary differential equations (ODE's), whose solution is given by the time evolution of the chosen spatial generalized coordinates. When analyzing a system of bodies (rigid or flexible), one may introduce constraints to relate their motion, such as the joints proposed in the present work. The resulting set becomes a system of differential-algebraic equations (DAE's).
To include a set of "m" holonomic and scleronomic constraints in a finite element model, one may add an extra contribution to the whole system potential:
$$\begin{aligned} \hbox {W}_\mathrm{C} =\mathbf{r}_\mathrm{C} \cdot {\varvec{\uplambda }}_\mathrm{C} =\mathbf{r}_\mathrm{C}^\mathrm{T} {\varvec{\uplambda }}_\mathrm{C} , \end{aligned}$$
where \({\varvec{\uplambda }}_\mathrm{C} \) is a vector of "m" unknown Lagrange multipliers and \(\mathbf{r}_\mathrm{C}\) is an m-dimensional vector describing the constraints. The variation of (112) leads to
$$\begin{aligned} {\updelta } \hbox {W}_\mathrm{C} ={\updelta }{} \mathbf{r}_\mathrm{C}^\mathrm{T} {\varvec{\uplambda }}_{\mathrm{C}} + \mathbf{r}_\mathrm{C}^\mathrm{T} {\updelta }\, {\varvec{\uplambda }}_{\mathrm{C}} , \end{aligned}$$
which corresponds to a contribution to the model weak form (1), responsible for enforcing the constraints collected in \(\mathbf{r}_\mathrm{C}\). The term \({\updelta }\mathbf{r}_\mathrm{C}^\mathrm{T} {\varvec{\uplambda }}_\mathrm{C} ={\updelta }\mathbf{u}^{\mathbf{T}}{} \mathbf{B}^{\mathrm{T}}{\varvec{\uplambda }}_\mathrm{C} \) can be detailed further, where \({\updelta }{} \mathbf{u}\) is the vector of virtual displacements of the system, with "n" degrees of freedom, and \(\mathbf{B}\) is the so-called constraint matrix, or Jacobian matrix of the constraints, such that \({\updelta }{} \mathbf{r}_\mathrm{C} =\mathbf{B}\,{\updelta }{} \mathbf{u}\). This form is commonly presented in multibody dynamics texts, such as [18, 38].
The solution of the system of DAE's involves determining the time evolution of the "n" generalized coordinates and of the "m" algebraic variables (Lagrange multipliers). Many approaches are possible to achieve that. The main concern is that the resulting index-3 DAE's may be difficult to solve and integrate numerically. Hence, in the literature one finds approaches to eliminate the algebraic variables from the system of equations. This procedure aims at turning the original system of DAE's into ODE's, which may be integrated using well-known techniques. Some difficulties may appear, such as the "drift phenomenon", since some of these techniques involve enforcing the constraints at the velocity/acceleration level (see [38] for details). To remedy this problem, one may use techniques developed for constraint violation stabilization; some examples are given in [18].
Alternatively, particularly when the finite element method is used together with constraints imposed by Lagrange multipliers, one may solve the system of DAE's directly, instead of transforming it into a system of ODE's. For that, according to [38], proper scaling of the Lagrange multipliers may be employed, rendering the system of DAE's no more difficult to integrate than the corresponding ODE's of an unconstrained system. One may then employ time-integration schemes to account for the dynamic evolution of the constrained system over time, choosing between explicit and implicit methods according to the kind of problem and the time scale of interest.
For example, when employing the Newmark method, the approximations (114)–(117) below are used to evaluate the velocities/accelerations at the end of the current integration time-step, to be used in (1). These quantities depend on the previously converged values of the velocities/accelerations and on the current increments of displacements/rotations (see [32]):
$$\begin{aligned} \dot{\mathbf{u}}_{\mathrm{k}}^{\mathrm{i + 1}}= & {} {\upalpha } _{4} \mathbf{u}_{\mathrm{k}}^\Delta + {\upalpha } _5 \dot{\mathbf{u}}_{\mathrm{k}}^{\mathrm{i}} + {\upalpha } _6 \ddot{\mathbf{u}}_{\mathrm{k}}^{\mathrm{i}} \end{aligned}$$
$$\begin{aligned} \ddot{\mathbf{u}}_{\mathrm{k}}^{\mathrm{i + 1}}= & {} {\upalpha } _{1} \mathbf{u}_{\mathrm{k}}^\Delta + {\upalpha }_{2} \dot{\mathbf{u}}_{\mathrm{k}}^{\mathrm{i}} + {\upalpha } _{3} \ddot{\mathbf{u}}_{\mathrm{k}}^{\mathrm{i}} \end{aligned}$$
$$\begin{aligned} {\varvec{\upomega }}_{\mathrm{k}}^{\mathrm{i + 1}}= & {} \mathbf{Q}_{\mathrm{k}}^\Delta \left( {\upalpha } _4 {\varvec{\upalpha }}_{\mathrm{k}}^\Delta + {{\upalpha }} _5 {\varvec{\upomega }} _{\mathrm{k}}^{\mathrm{i }}+ {{\upalpha }} _6 \dot{\varvec{\upomega }} _{\mathrm{k}}^{\mathrm{i}} \right) \end{aligned}$$
$$\begin{aligned} \dot{\varvec{\upomega }}_{\mathrm{k}}^{\mathrm{i + 1}}= & {} \mathbf{Q}_{\mathrm{k}}^\Delta \left( {\upalpha } _1 {\varvec{\upalpha }}_{\mathrm{k}}^\Delta + {{\upalpha }} _2 {\varvec{\upomega }} _{\mathrm{k}}^{\mathrm{i }}+ {{\upalpha }} _3 \dot{\varvec{\upomega }} _{\mathrm{k}}^{\mathrm{i}} \right) \end{aligned}$$
The coefficients of the method are: \({\upalpha }_1 =\frac{1}{{\upbeta }\left( {\Delta \hbox {t}} \right) ^{2}}\), \({\upalpha }_2 =-\frac{1}{{\upbeta \Delta \hbox {t}}}\), \({\upalpha }_3 =-\frac{1-2{\upbeta }}{2{\upbeta }}\), \({\upalpha }_4 =\frac{{\upgamma }}{{\upbeta \Delta \hbox {t}}}\), \({\upalpha }_5 =1-\frac{{\upgamma }}{{\upbeta }}\), \({\upalpha }_6 =\left( {1-\frac{{\upgamma }}{2{\upbeta }}} \right) \Delta \hbox {t}\), where \(\Delta \hbox {t}\) is the time-integration step. The values chosen for the scalars \({\upbeta }\) and \({\upgamma }\) control the accuracy of the integration and the introduction of numerical damping. Note that Eqs. (116) and (117) represent a non-standard Newmark integration scheme: they are a special version to integrate rotations, as presented by Ibrahimbegovic and Mamouri [13].
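For illustration, a minimal sketch of the update formulas (114)–(117) and of the coefficients listed above is given below (Python/NumPy; function and variable names are illustrative, not from the paper).

```python
import numpy as np

def newmark_coefficients(dt, beta=0.25, gamma=0.5):
    """Coefficients alpha_1..alpha_6 of the Newmark scheme as listed above."""
    a1 = 1.0 / (beta * dt**2)
    a2 = -1.0 / (beta * dt)
    a3 = -(1.0 - 2.0 * beta) / (2.0 * beta)
    a4 = gamma / (beta * dt)
    a5 = 1.0 - gamma / beta
    a6 = (1.0 - gamma / (2.0 * beta)) * dt
    return a1, a2, a3, a4, a5, a6

def translational_update(du, v_i, a_i, dt, beta=0.25, gamma=0.5):
    """Eqs. (114)-(115): end-of-step velocity and acceleration from the
    displacement increment du and the previously converged v_i, a_i."""
    a1, a2, a3, a4, a5, a6 = newmark_coefficients(dt, beta, gamma)
    v_new = a4 * du + a5 * v_i + a6 * a_i
    a_new = a1 * du + a2 * v_i + a3 * a_i
    return v_new, a_new

def rotational_update(Q_delta, dalpha, w_i, dw_i, dt, beta=0.25, gamma=0.5):
    """Eqs. (116)-(117): angular velocity/acceleration update for the
    incremental rotation (Ibrahimbegovic-Mamouri form), with Q_delta the
    incremental rotation tensor and dalpha the Rodrigues-parameter increment."""
    a1, a2, a3, a4, a5, a6 = newmark_coefficients(dt, beta, gamma)
    w_new = Q_delta @ (a4 * dalpha + a5 * w_i + a6 * dw_i)
    dw_new = Q_delta @ (a1 * dalpha + a2 * w_i + a3 * dw_i)
    return w_new, dw_new
```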
An important concern, particularly when handling DAE's, is the numerical stability of the time-integration technique. As presented in [39], numerical damping is important to stabilize the integration method, since the constraint equations introduce infinite-frequency eigenvalues into the system, due to the null terms they bring into the mass matrix. Hence, when integrating constrained undamped systems, one may face numerical difficulties, such as oscillations in the acceleration results, sometimes leading to divergence of the method. Consequently, the recommended time-integration technique for DAE's should dissipate the energy related to high-frequency modes. For that, one may employ the Newmark method with particular sets of coefficients \({\upbeta }\) and \({\upgamma }\), but with the drawback of decreasing the order of accuracy of the integration. As an alternative, one finds more elaborate methods, such as the Generalized-\(\upalpha \) method [40] and modified versions of it, as recently presented in [41]. Such methods introduce numerical damping without decreasing the time-integration accuracy. However, according to [38], these methods guarantee stability only for linear systems.
In the present work, the focus is on the development of the constraint equations, weak form and tangent operator contributions for the proposed joints and other modeling features. Therefore, our numerical examples were run with the classical Newmark time-integration method. Furthermore, we decided to process all our examples with no numerical damping, even though this leads to the reported oscillations in the acceleration time series, as predicted by [39]. For all the examples shown in this paper, such oscillations do not appear to affect the quality of the results. The techniques presented here to enforce constraints can be applied together with any other time-integration scheme, implicit or explicit, including the energy-conserving methods, which have been the aim of research in recent years.
After this discussion, we now briefly present the implementation details of the implicit time integration used to solve the system of DAE's. Each of the following steps is performed at a local level, to evaluate the weak form and tangent stiffness matrix contributions due to:
1. finite element internal forces, \({\updelta } \hbox {W}_\mathrm{i} \);
2. external forces, \({\updelta } \hbox {W}_\mathrm{e} \);
3. the kinetic energy contribution, \({\updelta } \hbox {T}\). Here the formulae (114)–(117) from the Newmark integration are employed, since they describe the unknown accelerations/velocities at the end of the time-integration step as functions of the current displacements;
4. the constraints contribution (joints), \({\updelta } \hbox {W}_\mathrm{C} \), as depicted in (113);
5. other contributions (e.g., contact constraints, damping and others), possibly using (114)–(117) from the Newmark integration when velocities/accelerations appear.
All these contributions compose the global residual vector in Eq. (1) and the tangent stiffness matrix in Eq. (2). These quantities are used to perform a Newton–Raphson iteration, determining the increments of the displacements/Lagrange multipliers. Steps 1–5 are repeated until convergence of the current time-step. The process is then repeated for each time-step, until the desired end time is reached.
Note that each iteration of the Newton–Raphson method requires the solution of a system of equations involving both the displacement and the Lagrange multiplier increments. At this level, ill-conditioned systems may arise due to the possibly very different orders of magnitude of the generalized coordinates and of the Lagrange multipliers, and improvements in the linear system solver may be necessary. Many good solvers with automatic scaling and pivoting techniques are available, as used in this work.
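A compact sketch of one such time step is given below (Python/NumPy). It only illustrates the structure described in steps 1–5: the callback `assemble` is a hypothetical placeholder assumed to return the displacement residual (already including the constraint forces \(\mathbf{B}^{\mathrm{T}}{\varvec{\uplambda }}_\mathrm{C}\)), its tangent, the constraint vector \(\mathbf{r}_\mathrm{C}\) and the constraint Jacobian \(\mathbf{B}\).

```python
import numpy as np

def newton_step_constrained(assemble, u, lam, tol=1e-8, max_iter=20):
    """Newton-Raphson iterations for one time step of the constrained system.
    `assemble(u, lam)` is a user-supplied callback (hypothetical) returning
    (res_u, K, r_C, B): the displacement residual (internal + external +
    inertia + constraint forces B^T lam), its tangent K, the constraint
    vector r_C and its Jacobian B such that delta r_C = B delta u."""
    n, m = u.size, lam.size
    for _ in range(max_iter):
        res_u, K, r_C, B = assemble(u, lam)
        rhs = -np.concatenate([res_u, r_C])
        if np.linalg.norm(rhs) < tol:
            break
        # Saddle-point tangent; the null (2,2) block is the source of the
        # infinite-frequency modes discussed in the text above.
        J = np.block([[K, B.T],
                      [B, np.zeros((m, m))]])
        # In practice, a solver with automatic scaling/pivoting (or explicit
        # scaling of the multiplier block, cf. [38]) improves conditioning.
        d = np.linalg.solve(J, rhs)
        u = u + d[:n]
        lam = lam + d[n:]
    return u, lam
```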
Gay Neto, A. Simulation of mechanisms modeled by geometrically-exact beams using Rodrigues rotation parameters. Comput Mech 59, 459–481 (2017). https://doi.org/10.1007/s00466-016-1355-2
Geometrically-exact
Cam/follower
Asian Pacific Journal of Cancer Prevention
Pages 5193-5198
Asian Pacific Organization for Cancer Prevention (아시아태평양암예방학회)
Glycemic Index and Glycemic Load Dietary Patterns and the Associated Risk of Breast Cancer: A Case-control Study
Woo, Hae Dong (Cancer Epidemiology Branch, National Cancer Center) ;
Park, Ki-Soon (Cancer Epidemiology Branch, National Cancer Center) ;
Shin, Aesun (Cancer Epidemiology Branch, National Cancer Center) ;
Ro, Jungsil (Center for Breast Cancer, National Cancer Center) ;
Kim, Jeongseon (Cancer Epidemiology Branch, National Cancer Center)
https://doi.org/10.7314/APJCP.2013.14.9.5193
The glycemic index (GI) and glycemic load (GL) have been considered risk factors for breast cancer, but association studies of breast cancer risk using simple GI and GL might be affected by confounding effects of the overall diet. A total of 357 cases and 357 age-matched controls were enrolled, and dietary intake was assessed using a validated food frequency questionnaire (FFQ) with 103 food items. GI and GL dietary patterns were derived by the reduced rank regression (RRR) method. The GI and GL pattern scores were positively associated with breast cancer risk among postmenopausal women [OR (95%CI): 3.31 (1.06-10.39), p for trend=0.031; 9.24 (2.93-29.14), p for trend<0.001, respectively], while the GI pattern showed no statistically significant effects on breast cancer risk, and the GL pattern was only marginally significant, among premenopausal women (p for trend=0.043). The GI and GL pattern scores were positively associated with the risk of breast cancer in subgroups defined by hormone receptor status in postmenopausal women. The GI and GL patterns based on all food items consumed were positively associated with breast cancer risk.
Case-control study; glycemic index (GI); glycemic load (GL); breast cancer
Direct Electron Transfer-type Bioelectrocatalysis of Redox Enzymes at Nanostructured Electrodes
Taiki Adachi, Yuki Kitazumi, Osamu Shirai, Kenji Kano
Subject: Chemistry, Electrochemistry Keywords: direct-electron transfer-type bioelectrocatalysis; nanostructures; mesoporous electrodes; curvature effect; protein engineering; bi-directional bioelectrocatalysis; hydrogenase; fructose dehydrogenase; bilirubin oxidase; formate dehydrogenase; ferredoxin-NADP+ reductase
Direct electron transfer (DET)-type bioelectrocatalysis, which couples electrode reactions and the catalytic functions of redox enzymes without any redox mediator, is one of the most intriguing subjects studied over the past decades in the field of bioelectrochemistry. In order to realize DET-type bioelectrocatalysis and to improve its performance, the nanostructures of the electrode surface have to be carefully tuned for each enzyme. In addition, enzymes can also be tuned for the DET-type reaction by protein engineering approaches. This review summarizes the recent progress in this field of research, taking into consideration the importance of the nanostructure of the electrodes as well as of the redox enzymes. Described are the basic concepts and theoretical aspects of DET-type bioelectrocatalysis, the significance of nanostructures as scaffolds for DET-type reactions, protein engineering approaches for DET-type reactions, and the concepts and facts of bidirectional DET-type reactions, from a cross-disciplinary viewpoint.
Direct Ink Writing Glass: A Preliminary Step for Optical Application
Bo Nan, Przemysław Gołębiewski, Ryszard Buczyński, F.J. Galindo Rosales, José Ferreira
Subject: Materials Science, General Materials Science Keywords: direct ink writing; glass; rheology
In this paper, we present a preliminary study and conceptual idea concerning 3D printing of water-sensitive glass by direct ink writing, exemplified with a borosilicate glass with high alkali and alkaline oxide contents. The investigated material was prepared in the form of a glass frit, which was further ground in order to obtain a fine powder of the desired particle size distribution. In the following step, inks were prepared by mixing the fine glass powder with Pluronic F-127 hydrogel. The acquired pastes were rheologically characterized and printed using a robocasting device. DSC experiments were performed on the base materials and the obtained green bodies. After sintering, SEM and XRD analyses were carried out in order to examine the microstructure and the eventual presence of crystalline phase inclusions. The results confirmed that the obtained inks exhibit stable rheological properties despite the propensity of the glass to undergo hydrolysis, and could be adjusted to values desirable for 3D printing. No additional phase was observed, supporting the suitability of the designed technology for the production of water-sensitive glass inks. SEM micrographs of the sintered samples revealed the presence of closed porosity, which may be the main reason for light scattering.
Smart Systems for Material and Process Designing in Direct Nanoimprint Lithography Using Hybrid Deep Learning
Yoshihiko Hirai, Sou Tsukamoto, Hidekatsu Tanabe, Kai Kameyama, Hiroaki Kawata, Masaaki Yasuda
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: direct nanoimprint; process design; deep learning
A hybrid smart process and material design system for nanoimprinting is proposed, which combines learning based on experimental results with learning based on numerical simulation results. Instead of carrying out extensive learning experiments for various conditions, the experimental learning is partially complemented by simulation results where the outcome can be theoretically predicted by numerical simulation. In other words, the data lacking in experimental learning are complemented by simulation-based learning results. Therefore, the prediction of nanoimprint results under various conditions, without experimental learning, could be realized even for unknown materials. In this study, material and process design for a low-temperature nanoimprint process using glycerol-containing polyvinyl alcohol is demonstrated. Experimental results under limited conditions are learned to investigate the optimum glycerol concentrations and process temperatures. On the other hand, simulation-based learning is used to predict the dependence on press pressure and shape parameters. The prediction results for unknown glycerol concentrations agreed well with the follow-up experiments.
Impact of Foreign Direct Investment Inflows on Economic Growth; the Case of the Republic of Seychelles
Yusheng Kong, Sampson Agyapong Atuahene, Geoffrey Bentum-Mican, Abigail Konadu Aboagye
Subject: Social Sciences, Economics Keywords: foreign direct investment; economic growth; economies
This paper aims to research whether there is a link between FDI inflows and economic growth in the Republic of Seychelles. The ordinary least squares results obtained show that the impact of FDI inflows on economic growth is low. Small Island Developing States attract less FDI inflow because they are limited to few resources that attract overseas firms, which results in retarded development. The research highlighted that the impact of foreign direct investment on host countries does not depend only on the quality and quantity of the FDI inflows but also on other variables, such as internal policies and management skills, and market structures and economic trends, among others.
Bone Laser Patterning to Decipher Cells Organization
Nicolas Touya, Samy Al-Bourgol, Theo Desigaux, Olivia Kérourédan, Laura Gemini, Rainer Kling, Raphaël Devillard
Subject: Biology, Physiology Keywords: tissue engineering; bone; laser; femtosecond; patterning; direct
Laser patterning of implant materials for bone tissue engineering purposes has been shown to be a promising technique to control cell properties such as adhesion or differentiation, resulting in enhanced osteointegration. However, the perspective of patterning the bone-tissue side of the interface to generate microstructure effects has never been investigated. In the present study, three different laser-generated patterns were machined on the bone surface with the aim of identifying the surface morphology most compatible with recolonization by osteogenic-related cells. The laser-patterned bone tissue was characterized by scanning electron microscopy and confocal microscopy in order to obtain a comprehensive picture of the bone surface morphology. The impact of cortical bone patterning on cell compatibility and on cytoskeleton rearrangement over the patterned surfaces was assessed with Stromal Cells from the Apical Papilla (SCAPs). Results indicated that laser machining had no detrimental effect on the metabolism of subsequently seeded cells. Orientation assays revealed that surface patterning characterized by larger hatch distances was correlated with a higher cytoskeletal conformation of cells to the laser-machined patterns. For the first time, to our knowledge, bone is considered and assessed here as a biological interface that can potentially be engineered and improved. Further studies shall focus on the in vivo implications of this direct patterning.
Inward Foreign Direct Investment Induced Technological Innovation in Sri Lanka? Empirical Evidence Using ARDL Approach
A.M. Priyangani Adikari, Haiyun Liu, M.M.S.A. Marasinghe
Subject: Social Sciences, Accounting Keywords: Foreign direct investment; technological innovation; ARDL approach
Fostering innovation is considered one of the key policy priorities on most governments' agendas in developing countries, and foreign direct investment (FDI) is considered a principal resource for financing sustainable development, in line with the 17 Sustainable Development Goals (SDGs). This study analyzes the extent to which inward FDI affects innovation (proxied by patent applications) in Sri Lanka, using secondary data from 1990 to 2019. We used the Autoregressive Distributed Lag (ARDL) cointegration procedure to examine the long-run relationships between the variables. According to the results, the coefficient of inward FDI has a negative sign, while the coefficients of per capita gross domestic product (GDP) and high-technology exports (HEX) are positive (2.142 and 0.414, respectively) and statistically significant in the long run. This demonstrates that per capita GDP and high-technology exports are important variables in explaining technological innovation, whereas inward FDI and education expenditure (EDU) did not contribute towards widening technological innovation in Sri Lanka. Shaping the future of FDI in Sri Lanka is essential to foster innovation capability.
Presentation of the Transmission Functions to the Mechanism of a 2T6R Robot in Direct Kinematics, Deduced by Two Original Methods
Liviu Marian Ungureanu, Florian Ion Petrescu
Subject: Engineering, Automotive Engineering Keywords: Mechanical transmissions; 2T6R robot; kinematic; direct kinematics.
The paper deals with the direct kinematics of a 2T6R robot, namely the direct kinematics of the robot's planar mechanism. The direct kinematics of this proposed new robot helps to study its movement, including the dynamic one, to determine the forces in the mechanism and to model the possible trajectories of the final effector point, thus determining the robot's working space and some of its multiple possible uses. Two totally different methods were used to verify the calculation relations, the results obtained by both methods being practically identical. The first method used is an original trigonometric method, and the second is an original analytical-geometrical method.
Investigating the Effects of Cerebellar Transcranial Direct Current Stimulation on Post-Stroke Overground Gait Performance: A Partial Least-Squares Regression Approach
Dhaval Solanki, Zeynab Rezaee, Anirban Dutta, Uttama Lahiri
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Gait; Stroke; Cerebellum; Transcranial Direct Current Stimulation
Stroke often results in impaired gait, which can limit community ambulation and the quality of life. Recent works have shown the feasibility of transcranial Direct Current Stimulation (tDCS) as an adjuvant treatment to facilitate gait rehabilitation. Since the cerebellum plays an essential role in balance and movement coordination, which is crucial for independent overground ambulation, we investigated the effects of cerebellar tDCS (ctDCS) on post-stroke overground gait performance in chronic stroke survivors. Fourteen chronic post-stroke male subjects were recruited based on convenience sampling at the collaborating hospitals, of whom ten subjects finally participated in the ctDCS study. We evaluated the effects of two ctDCS montages with 2 mA direct current: a) an optimized configuration for dentate stimulation with a 3.14 cm2 disc anode at PO10h (10/5 EEG system) and a 3.14 cm2 disc cathode at PO9h (10/5 EEG system), and b) an optimized configuration for leg lobules VII-IX stimulation with a 3.14 cm2 disc anode at Exx8 (electrodes defined by ROAST) and a 3.14 cm2 disc cathode at Exx7. We found ctDCS to be acceptable to all the exposed subjects. The ctDCS intervention had an effect on the 'Normalised Step length Affected side' (p=0.1) and the 'Gait Stability Ratio' (p=0.0569), which was found using the Wilcoxon signed-rank test at a 10% significance level. Also, a montage-specific effect of ctDCS was found using a two-sided Wilcoxon rank-sum test at a 5% significance level for 'Step Time Affected Leg' (p=0.0257) and '%Stance Time Unaffected Leg' (p=0.0376). Moreover, the changes in the quantitative gait parameters across both montages were found to be correlated with the mean electric field strength in the lobules based on a partial least squares regression analysis (R2 statistic = 0.6574), where the mean electric field strengths at the cerebellar lobules Vermis VIIIb, ipsilesional IX, Vermis IX, and ipsilesional X had the largest loadings. In conclusion, our feasibility study indicated the potential of a single session of ctDCS to contribute to an immediate improvement in balance and gait performance in terms of gait-related indices and clinical gait measures.
Nanoscale Plasmonic Printing
Saulius Juodkazis, Soon Hock Ng
Subject: Materials Science, Nanotechnology Keywords: plasmonics; nanoscale; ablation; direct laser write; die-met
Nanoscale structuring/printing is of interest for a range of applications in 3D subtractive and additive manufacturing (3D+/-). The basic principles of light field enhancement and control at the nanoscale are overviewed in this section/chapter for bulk, surface, and localised plasmons (1D, 2D, and 3D localisation, respectively). All these plasmons are resonant phenomena which share a common Lorentzian spectral lineshape that relates the refractive and absorption properties as well as defining the phase of transmitted and scattered light. Localisation of light at the nanoscale creates the possibility of modification with matching resolution. Harnessing this light enhancement can be demonstrated as a "nano-pen" for direct-write nanolithography.
Valence generalization across non-recurring structures
Micah Amd
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: learning theory; symbolic conditioning; direct realism; valence transfer
Semantically meaningless strings that are associated with affective attributes (US) can become emotionally valenced CS. Jurchiș et al. (2020) recently demonstrated that CS-US associations may influence evaluations of previously unseen letter strings if the latter share grammar construction rules with the CS. We replicated those authors' findings in a modified extension (Experiment 1; N1 = 108), where happy/angry faces (US) were differentially associated with letter strings (CS) constructed using familiar (English) or non-familiar (Phoenician) alphabets. The CS-US sequences were sandwiched by evaluations of strings that never appeared as CS but shared grammar construction rules with them. However, post-hoc tests indicated that valence effects were restricted to participants classified as 'high awareness' or to those who had been exposed to longer stimulus durations, suggesting that resource-intense deliberations were central during evaluations. Qualitative awareness checks additionally showed that many participants had attributed valences to recurring elements across conditioned and evaluated exemplars. These limitations were collectively addressed in Experiment 2 (N2 = 140), where participants viewed Phoenician (/English) CS during conditioning but viewed English (/Phoenician) strings during evaluations, meaning that no strings or elements recurred between phases. We found credible valence effects across English and Phoenician strings, with the latter observed across all awareness categories. Because participants were unable to consciously specify any evaluative strategies while evaluating Phoenician strings, we speculate that grammar construction rules (organizing relations) may have been non-consciously acquired during conditioning.
Statistical Structure and Deviations from Equilibrium in Wavy Channel Turbulence
Saadbin Khan, Balaji Jayaraman
Subject: Engineering, Mechanical Engineering Keywords: roughness; wall turbulence; direct numerical simulation; wavy surface
The structure of turbulent flow over non-flat surfaces is a topic of major interest for practical applications in both engineering and geophysical settings. A lot of work has been done in the fully rough regime at high Reynolds numbers, where the effect on the outer-layer turbulence structure and the resulting friction drag is well documented. It turns out that surface topology plays a significant role in the flow drag, especially in the transitional roughness regime, and is therefore hard to characterize. A survey of the literature shows that the roughness function depends on the interaction of roughness height, flow Reynolds number and topology shape. In addition, if the surface topology contains large enough scales, then it can impact the outer-layer dynamics and in turn modulate the total frictional force. Therefore, it is important to understand the mechanisms underlying the drag increase from systematically varied surface undulations in order to better interpret quantifications based on mean statistics, such as the roughness function. In this study, we explore the mechanisms that modulate the turbulence structure over a two-dimensional (2D) sinusoidal wavy surface with a fixed amplitude but varying slope. To accomplish this, we model the turbulent flow between two infinitely wide 2D wavy plates at a friction Reynolds number $Re_{\tau}=180$. We pursue two different but related flavors of analysis. The first one adopts a roughness characterization flavor for such wavy surfaces. The second one focuses on understanding the non-equilibrium near-surface turbulence structure and its impact on roughness characterization. Analysis of the different statistical quantifications shows a strong dependence of the roughness function on wave slope, indicating drag increase due to enhanced turbulent stresses resulting from increased production of vertical velocity variance by the surface undulations.
High-Throughput Direct Mass Spectrometry-Based Metabolomics to Characterize Metabolite Fingerprints Associated with Alzheimer's Disease Pathogenesis
Raúl González-Domínguez, Ana Sayago, Ángeles Fernández-Recamales
Subject: Chemistry, Analytical Chemistry Keywords: metabolomics; direct mass spectrometry; Alzheimer's disease; pathogenesis; biomarkers
Direct mass spectrometry-based metabolomics has been widely employed in recent years to characterize the metabolic alterations underlying Alzheimer's disease development and progression. This high-throughput approach presents great potential for fast and simultaneous fingerprinting of a vast number of metabolites, and can be applied to multiple biological samples such as serum/plasma, urine, cerebrospinal fluid and tissues. In this review article, we present the main advantages and drawbacks of metabolomics based on direct mass spectrometry compared with conventional analytical techniques, and provide a comprehensive review of the literature on the application of these tools in Alzheimer's disease research.
Estimation of the Stage-Wise Costs of Breast Cancer in Germany Using a Modeling Approach
Khan Shah Alam, Karla Hernandez-Villafuerte, Diego Andres Hernandez Carreno, Schlander Michael
Subject: Medicine & Pharmacology, Other Keywords: breast cancer; stage-wise costs; direct medical costs; modeling
Breast cancer (BC) is a heterogeneous disease representing a substantial economic burden. In order to develop policies that successfully decrease this burden, the factors affecting costs need to be fully understood. Evidence suggests that early detection in Stage I has a lower cost than late detection. We aim to provide conservative estimates of the stage-wise medical costs of BC from the perspective of the German healthcare system and payers. To this end, we conducted a literature review of articles evaluating the stage-wise costs of BC in Germany through the PubMed, Web of Science, and EconLit databases, supplemented by Google Scholar. We developed a decision tree model to estimate BC-related medical costs in Germany using available treatment and cost information. The review yielded seven studies; none estimated the stage-wise costs of BC. The studies were classified into two groups: (1) case scenarios (five studies) and (2) studies based on administrative data (two studies). The first sickness funds data study (Gruber, Stock, et al. 2012) used 1999 information to approximate BC-attributable costs; their results suggest a range between €3,929 and €11,787 depending on age. The second study (Kreis, Plöthner, et al. 2020) used 2011-2014 data and suggested an initial-phase incremental cost of €21,499, an intermediate-phase cost of €2,620, and a terminal-phase cost of €34,513 per incident case. Our decision tree model based stage-wise cost estimates were €21,523 for Stage I, €25,679 for Stage II, €30,156 for Stage III, and €42,086 for Stage IV. Alternatively, the modeled cost estimates are €20,284 for the initial phase of care, €851 for the intermediate phase of care, and €34,963 for the terminal phase of care. Our estimates for the phases of care are consistent with the recent German estimates provided by Kreis and Plöthner et al. Furthermore, the data collected by sickness funds are gathered primarily for reimbursement purposes, where a cancer diagnosis is defined by the German ICD-10 classification system. As a result, claims data lack the clinical information necessary to understand stage-wise BC costs. Our model-based estimates fill this gap and can inform future economic evaluations of BC interventions.
15-Year Analysis of Direct Effects of Total and Dust Aerosols in Solar Radiation/Energy Over the Mediterranean Basin
Kyriakoula Papachristopoulou, Ilias Fountoulakis, Antonis Gkikas, Panagiotis G. Kosmopoulos, Panagiotis T. Nastos, Maria Hatzaki, Stelios Kazadzis
Subject: Earth Sciences, Atmospheric Science Keywords: aerosols; dust; direct radiative effects; solar energy; Mediterranean basin
The direct radiative effects of atmospheric aerosols are essential for climate, as well as for other societal areas, like the energy sector. The goal of the present study is to exploit the newly developed ModIs Dust AeroSol (MIDAS) dataset for quantifying the direct effects of the total and dust aerosol amounts on the downwelling surface solar irradiance (DSSI) under clear-sky conditions, and the associated impacts on solar energy, for the broader Mediterranean basin over the period 2003–2017. Aerosol optical depth (AOD) and dust optical depth (DOD) derived from the MIDAS dataset, along with additional aerosol and dust optical properties and atmospheric variables, were used as inputs to radiative transfer modeling to simulate the DSSI components. A 15-year climatology of AOD, DOD and of clear-sky Global Horizontal Irradiation (GHI) and Direct Normal Irradiation (DNI) was derived. The spatial and temporal variability of the aerosol and dust effects on the different DSSI components was assessed. Aerosol attenuation of annual GHI and DNI ranges from 1-13% and 5-47%, respectively. Significant attenuation by dust, of 2-10% and 9-37%, respectively, was found over North Africa and the Middle East, contributing 45-90% of the total aerosol effects. The mean GHI and DNI attenuation during extreme dust episodes reached values up to 12% and 44%, respectively, for different areas. After 2008, a decline of the aerosol effects on DSSI was found, attributed mainly to the dust component. A sensitivity analysis using different AOD/DOD inputs from the Copernicus Atmosphere Monitoring Service (CAMS) reanalysis dataset revealed that CAMS underestimates the aerosol and dust radiative effects compared to MIDAS, due to a slight underestimation of AOD and a stronger underestimation of DOD, respectively.
Physicochemical Interactions in Systems C.I. Direct Yellow 50 – Weakly Basic Resins: Kinetic, Equilibrium, and Auxiliaries Addition Aspects
Monika Wawrzkiewicz, Ewelina Polska-Adach
Subject: Chemistry, Analytical Chemistry Keywords: Amberlyst; anion exchanger; direct dye; removal; resins; sorption; wastewaters
Intensive development of many industries, including textile, paper and plastics, which consume large amounts of water and generate huge amounts of wastewaters containing toxic dyes, contributes to pollution of the aquatic environment. Among the many known methods of wastewater treatment, adsorption techniques are considered the most effective. In the present study, the weakly basic anion exchangers Amberlyst A21, Amberlyst A23 and Amberlyst A24, of polystyrene, phenol-formaldehyde and polyacrylic matrices, respectively, were used for C.I. Direct Yellow 50 removal from aqueous solutions. The equilibrium adsorption data were well fitted by the Langmuir adsorption isotherm. The kinetic studies were described by the pseudo-second order model. The pseudo-second order rate constants were in the range of 0.0609-0.0128 g/mg·min for Amberlyst A24, 0.0038-0.0015 g/mg·min for Amberlyst A21 and 1.1945-0.0032 g/mg·min for Amberlyst A23, and decreased with the increasing initial concentration of the dye from 100 to 500 mg/L, respectively. An impact of auxiliaries (Na2CO3, Na2SO4, anionic and non-ionic surfactants) on the dye uptake was observed. The polyacrylic resin Amberlyst A24 can be a promising sorbent for C.I. Direct Yellow 50 removal, as it is able to take up 666.5 mg/g of the dye, compared to the capacity of 284.3 mg/g of the phenol-formaldehyde Amberlyst A23.
Sensitivity of Radiative Fluxes to Aerosols in the ALADIN-HIRLAM Numerical Weather Prediction System
Laura Rontu, Emily Gleeson, Daniel Martin Perez, Kristian Pagh Nielsen, Velle Toll
Subject: Earth Sciences, Atmospheric Science Keywords: aerosols; CAMS; NWP; ALADIN-HIRLAM; MUSC; direct radiative effect
The direct radiative effect of aerosols is taken into account in many limited-area numerical weather prediction models using wavelength-dependent aerosol optical depths of a range of aerosol species. We study the impact of the aerosol distribution and optical properties on radiative transfer, based on climatological and more realistic near-real-time aerosol data. Sensitivity tests were carried out using the single-column version of the ALADIN-HIRLAM numerical weather prediction system, set up to use the HLRADIA broadband radiation scheme. The tests were restricted to clear-sky cases to avoid the complication of cloud-radiation-aerosol interactions. The largest differences in radiative fluxes and heating rates were found to be due to different aerosol loads. When the loads are large, the radiative fluxes and heating rates are sensitive to the inherent optical properties and the vertical distribution of the aerosol species. The impact of aerosols on shortwave radiation dominates over the longwave impact.
Deep Cerebellar Transcranial Direct Current Stimulation of the Dentate Nucleus to Facilitate Standing Balance in Chronic Stroke Survivors
Zeynab Rezaee, Surbhi Kaura, Dhaval Solanki, Adyasha Dash, M V Padma Srivastava, Uttama Lahiri, Anirban Dutta
Subject: Medicine & Pharmacology, Other Keywords: cerebellar transcranial direct current stimulation; dentate nucleus; computational modeling
Objective: Cerebrovascular accidents are the second leading cause of death and the third leading cause of disability worldwide. We hypothesized that cerebellar transcranial direct current stimulation (ctDCS) of the dentate nuclei and the lower-limb representations in the cerebellum can improve standing-balance functional reach in chronic (>6 months post-stroke) stroke survivors. Materials and Methods: Magnetic resonance imaging (MRI) based subject-specific electric fields were computed across 10 stroke survivors and one healthy MRI template to find an optimal bipolar bilateral ctDCS montage targeting the dentate nuclei and lower-limb representations (lobules VII-IX). Then, in a repeated-measures crossover study on 5 stroke survivors, we compared 15 minutes of 2 mA ctDCS based on the effects on successful functional reach (%) during a standing balance task. A three-way ANOVA investigated the factors of interest (brain regions, montages, stroke participants) and their interactions. Results: The "one-size-fits-all" ctDCS montage for the clinical study was found to be bipolar PO9h-PO10h for the dentate nuclei and bipolar Exx7-Exx8 for lobules VII-IX, with a contralesional anode. Bipolar PO9h-PO10h ctDCS performed significantly (alpha=0.05) better in facilitating successful functional reach (%) when compared to bipolar Exx7-Exx8 ctDCS. Furthermore, a linear relationship between successful functional reach (%) and electric field strength was found, where the bipolar PO9h-PO10h montage resulted in a significantly (alpha=0.05) higher electric field strength than the bipolar Exx7-Exx8 montage for the same 2 mA current. Conclusion: We presented a rational neuroimaging-based approach to optimize deep ctDCS of the dentate nuclei and lower-limb representations in the cerebellum for post-stroke balance rehabilitation.
Regional Temporal and Spatial Trends in Drought and Flood Disasters in China and Assessment of Economic Losses in Recent Years
Jieming Chou, Tian Xian, Wenjie Dong, Yuan Xu
Subject: Earth Sciences, Atmospheric Science Keywords: damaged area; direct economic loss; disaster; drought; extreme precipitation
Understanding the distribution of droughts and floods plays an important role in disaster risk management. The present study aims to explore the trends in the standardized precipitation index and extreme precipitation days in China, as well as to estimate the economic losses they cause. We found that Northeast China, the northern part of North China and the northeastern part of Northwest China were severely affected by drought disasters (average damaged area of 6.44 million hectares), and the most severe drought trend was located in West China. However, the northern parts of East China and Central China and the northeastern part of Southwest China were severely affected by flood disasters (average damaged area of 3.97 million hectares), and the extreme precipitation trend is increasing in the northeastern part of Southwest China. In the Yangtze River basin, there were increasing trends in both drought and extreme precipitation, especially in the northeastern part of Southwest China, which was accompanied by severe disaster losses. By combining the trends in drought and extreme precipitation days with the distribution of damaged areas, we found that the increasing trend in droughts shifted gradually from north to south, especially in Southwest China, and the increasing trend in extreme precipitation gradually shifted from south to north.
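As a point of reference for the index mentioned above, the sketch below shows the conventional way a standardized precipitation index is computed (fit a gamma distribution to a precipitation accumulation series and map its cumulative probability to standard-normal quantiles). It is only an illustration of the general recipe, not necessarily the exact procedure or data used by the authors; the synthetic input is purely hypothetical.

```python
import numpy as np
from scipy.stats import gamma, norm

def spi(precip, eps=1e-6):
    """Standardized Precipitation Index for one accumulation series.

    precip : 1-D array of (e.g. monthly) precipitation totals.
    Returns an array of SPI values (standard-normal quantiles).
    """
    precip = np.asarray(precip, dtype=float)
    nonzero = precip[precip > 0]
    # Fit a two-parameter gamma distribution to the non-zero totals.
    a, _, scale = gamma.fit(nonzero, floc=0)
    # Handle zero-precipitation months as a mixed distribution.
    p_zero = (precip <= 0).mean()
    cdf = p_zero + (1 - p_zero) * gamma.cdf(precip, a, loc=0, scale=scale)
    cdf = np.clip(cdf, eps, 1 - eps)   # avoid +/- infinity at the extremes
    return norm.ppf(cdf)               # SPI = standard-normal quantile

# Illustrative call with synthetic data (20 years of monthly totals in mm):
rng = np.random.default_rng(0)
monthly_rain = rng.gamma(shape=2.0, scale=30.0, size=240)
print(spi(monthly_rain)[:5])
```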
Adaptive Operation Strategy for Voltage Stability Enhancement in Active DMFCs
Qinwen Yang, Xu-Qu Hu, Xiu-Cheng Lei, Ying Zhu, Xing-Yi Wang, Sheng-Cheng JI
Subject: Engineering, Energy & Fuel Technology Keywords: Direct Methanol Fuel Cell; Operation strategy; Multi-objective optimization
An adaptive operation strategy for on-demand control of a DMFC system is proposed as an alternative method to enhance voltage stability. Based on a single-cell DMFC stack, a newly simplified semi-empirical model is developed from uniform-design experimental results to describe the I-V relationship. Integrated with this model, a multi-objective optimization method is used to develop an adaptive operation strategy. Although voltage instability is frequently encountered in unoptimized operation, the voltage deviation is successfully decreased to the required level by adaptive operation with operational adjustments. Moreover, the adaptive operations are also found to be able to extend the range of operating current density or to decrease the voltage deviation according to one's requirements. Numerical simulations are implemented to investigate the underlying mechanisms of the proposed adaptive operation strategy, and experimental adaptive operations are also performed on another DMFC system to validate it. A preliminary experimental study shows a rapid response of the DMFC system to the operational adjustment, which further validates the effectiveness and feasibility of the adaptive operation strategy in practical applications. The proposed strategy contributes a guideline for better control of the output voltage of operating DMFC systems.
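The abstract does not give the form of its simplified semi-empirical I-V model, so the sketch below fits a generic semi-empirical polarization curve (an open-circuit term minus activation and ohmic losses) to hypothetical data with scipy, purely to illustrate what such a model-identification step can look like; the functional form, data points and initial guesses are assumptions, not the authors'.

```python
import numpy as np
from scipy.optimize import curve_fit

def polarization(i, e0, b, r):
    """Generic semi-empirical cell voltage: open-circuit-like term minus a
    Tafel-like activation loss and an ohmic loss. A mass-transport term
    could be appended for high current densities."""
    return e0 - b * np.log10(i) - r * i

# Hypothetical (current density [A/cm^2], voltage [V]) measurement points:
i_data = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v_data = np.array([0.55, 0.50, 0.45, 0.41, 0.37, 0.33, 0.29])

params, _ = curve_fit(polarization, i_data, v_data, p0=[0.6, 0.05, 0.5])
print(dict(zip(["E0 [V]", "b [V/decade]", "R [ohm cm^2]"], params)))
```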
Recent Trends in Minimization of Torque Ripples in Switched Reluctance Machine
N Deepa, K Krishnaveni, G Mallesham
Subject: Engineering, Electrical & Electronic Engineering Keywords: switched reluctance motor; direct instantaneous torque sharing function; rotor angle
Increased definite power, reduced cost, and dynamic construction are important features required of electric motors for various applications. One such motor accommodating all the requisites mentioned above is the switched reluctance motor (SRM). Small-scale, commercial, and electric-vehicle (EV) applications find the switched reluctance machine a developing competitor to conventional induction and permanent magnet motors. Though SRMs offer a considerable number of advantages over induction motors and permanent magnet motors, their primary disadvantages are torque ripple, especially at high-speed operation, and the resulting acoustic noise. These disadvantages may not be detrimental in all cases, but depending on the application, torque ripple and acoustic noise can be detrimental to the system. This paper reviews the technology status and recent trends in the minimization of torque ripples in switched reluctance machines.
Ligandless Palladium-catalyzed Direct C-5 arylation of azoles Promoted by Benzoic Acid in Anisole
Elisabetta Rosadoni, Federico Banchini, Sara Bellini, Marco Lessi, Luca Pasquinelli, Fabio Bellina
Subject: Chemistry, Organic Chemistry Keywords: azoles; direct arylation; C–C coupling; palladium; aromatic solvents; regioselectivity
Due to the widespread application of (hetero)arylazoles, the development of straightforward, functional-group-tolerant synthetic methods that enable selective heteroaromatic elaboration under mild conditions has attracted considerable attention. Over the last years we have been interested in studies aimed at broadening the substrate scope of the direct functionalization of azoles and, in particular, at developing efficient synthetic protocols for carbon-carbon bond formation by selective palladium-catalyzed Csp2-H bond activation of imidazole derivatives. During these studies, we discovered that the outcome of the Pd-catalyzed arylation of imidazoles with aryl bromides is deeply influenced by the nature of the reaction solvent. Specifically, it is well known that the Pd-catalyzed direct arylation of imidazoles with aromatic halides selectively leads to C-5 monoarylation products when polar aprotic solvents such as DMF (or DMA) are used as the reaction medium, but these solvents are classified as dangerous according to EHS (Environmental, Health, Safety) parameters. Instead, the use of aromatic solvents as the reaction medium for direct arylations, although some of them show good EHS values, is poorly reported, probably due to their low solvent power toward the reagents and their potential involvement in undesired side reactions. So, with the intention of filling this gap, in this paper we have developed a selective C-5 arylation procedure in anisole as the solvent, also discovering the unprecedented role of benzoic acid as a promoter of the coupling.
Tunneling Current Model under Drain Induced Barrier Lowering Effects for Scaled Devices
Zhichao Zhao, Tiefeng Wu, Miao Wang, Yunfang Xi, Qiuxia Feng, Yonghao Sun
Subject: Engineering, Electrical & Electronic Engineering Keywords: Scaled device; Ultra-Thin gate oxide; DIBL; Direct tunneling current
With the proportional reduction of MOSFET size, the drain-induced barrier lowering (DIBL) effect leads to a more significant increase in the gate tunneling current, and the appearance of the gate tunneling current also seriously affects the static characteristics of the device. In this paper, a new theoretical model of the relationship between the direct tunneling current and the thickness of the oxide layer under the DIBL effect is proposed for MOSFET devices with an ultra-thin oxide layer. On this basis, the characteristics of the MOSFET device are studied in detail using HSPICE, and its working conditions are quantitatively analyzed. The characteristic variation trend of small-size devices under the influence of the gate tunneling current is predicted. The simulation results using the BSIM4 model are consistent with the theoretical model. The theory and data in this paper provide a useful reference for large-scale integrated circuit design.
Adoption Pathways for DC Power Distribution in Buildings
Vagelis Vossos, Daniel L Gerber, Melanie Gaillet-Tournier, Bruce Nordman, Richard Brown, Willy Bernal Heredia, Omkar Ghatpande, Avijit Saha, Gabe Arnold, Stephen M. Frank
Subject: Engineering, Electrical & Electronic Engineering Keywords: DC power distribution; efficient buildings; direct-DC; microgrids; renewable energy
Driven by the proliferation of DC energy sources and DC end-use devices (e.g., photovoltaics, battery storage, solid-state lighting, and consumer electronics), DC power distribution in buildings has recently emerged as a path to improved efficiency, resilience, and cost savings in the transitioning building sector. Despite these important benefits, there are several technological and market barriers impeding the development of DC distribution, which have kept this technology at the demonstration phase. This paper identifies specific end-use cases for which DC distribution in buildings is viable today. We evaluate their technology and market readiness, as well as their efficiency, cost, and resiliency benefits while addressing implementation barriers. The paper starts with a technology review, followed by a comprehensive market assessment, in which we analyze DC distribution field deployments and their end-use characteristics. We also conduct a survey of DC power and building professionals through on-site visits and phone interviews and summarize lessons learned and recommendations. In addition, the paper includes a novel efficiency analysis, in which we quantify energy savings from DC distribution for different end-use categories. Based on our findings, we present specific adoption pathways for DC in buildings that can be implemented today, and for each pathway we identify challenges and offer recommendations for the research and building community.
Effectiveness of tDCS to Improve Recognition and Reduce False Memories in Older Adults
Juan C Melendez, Encarnación Satorres, Alfonso Pitarque, Iraida Delhom, Elena Real, Joaquin Escudero
Subject: Behavioral Sciences, Applied Psychology Keywords: transcranial direct current stimulation; true recognition; false recognition; aging; experiment.
Background. False memories tend to increase in healthy and pathological aging, and their reduction could be useful in improving cognitive functioning. The objective was to use an active-placebo method to verify whether the application of tDCS improves true recognition and reduces false memories in healthy older people. Method. Participants were 29 healthy older adults (65-78 years old) assigned to an active or a placebo group; the active group received anodal stimulation at 2 mA for 20 min over F7. An experimental task was used to estimate true and false recognition. The procedure took place in two sessions on two consecutive days. Results. A mixed ANOVA of true recognition showed a significant main effect of session (p = .004), indicating an increase from before treatment to after it. False recognition showed a significant main effect (p = .004), indicating a decrease from before treatment to after it, and a significant session x group interaction (p < .0001). Conclusions. Overall, our results show that tDCS is an effective tool for increasing true recognition and reducing false recognition in healthy older people, and suggest that stimulation improves recall by increasing the number of items a participant can recall and reducing the number of memory errors.
Effect of Foreign Direct Investment on Bangladesh Economy: a Time Series Analysis from 1972 to 2013
Rumana Rashid, Sk. Sharafat Hossen
Subject: Social Sciences, Economics Keywords: Time-series analysis; Foreign Direct Investment; economic growth; Bangladesh economy
This study investigates the impact of Foreign Direct Investment (FDI) on economic growth and examines the causality between FDI and economic growth in Bangladesh during 1972-2013. Gross Domestic Product (GDP), export performance (EXP), Foreign Direct Investment (FDI), and Gross Fixed Capital Formation (GFCF) are considered to capture the objective of the study. The methodology follows several systematic steps. As the data used in the study are time series in nature, unit root tests are employed, in this case the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests. Then Johansen's cointegration test, the Granger causality test, regression with Newey-West standard errors and a Vector Error Correction Model (VECM) are applied. Using the ADF and PP tests, the study reveals that the four time series are integrated of order one, I(1), i.e., they are stationary at first difference. The regression analysis demonstrates that FDI has a positive effect on economic growth. The Granger causality test discloses that there is a unidirectional relationship between FDI and economic growth. But the VECM estimation finds that in the long run FDI negatively affects economic growth.
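For readers who want to reproduce this kind of workflow, the sketch below runs a unit-root test and a Granger causality test with statsmodels on hypothetical series; the data, lag choice and variable names are placeholders, not the study's actual dataset, and the Johansen/VECM steps are omitted.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

# Hypothetical annual series for 1972-2013 (stand-ins for the real GDP/FDI data).
rng = np.random.default_rng(42)
years = range(1972, 2014)
gdp = pd.Series(np.cumsum(rng.normal(3.0, 1.0, len(years))), index=years, name="GDP")
fdi = pd.Series(np.cumsum(rng.normal(1.0, 1.0, len(years))), index=years, name="FDI")

# 1) ADF unit-root tests on levels and first differences (the I(1) check).
for name, s in [("GDP", gdp), ("FDI", fdi)]:
    _, p_level, *_ = adfuller(s)
    _, p_diff, *_ = adfuller(s.diff().dropna())
    print(f"{name}: ADF p-value level={p_level:.3f}, first difference={p_diff:.3f}")

# 2) Granger causality on the differenced (stationary) series:
#    does lagged FDI growth help predict GDP growth?
growth = pd.concat([gdp.diff(), fdi.diff()], axis=1).dropna()
grangercausalitytests(growth[["GDP", "FDI"]], maxlag=2)
```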
Semi-Automatic Detection of Indigenous Settlement Features on Hispaniola through Remote Sensing Data
Till F. Sonnemann, Douglas C. Comer, Jesse L. Patsolic, William P. Megarry, Eduardo Herrera Malatesta, Corinne L. Hofman
Subject: Arts & Humanities, Archaeology Keywords: Remote sensing; direct detection; GIS mapping; Caribbean Archaeology; landscape archaeology
Satellite imagery has had limited application in the analysis of pre-colonial settlement archaeology in the Caribbean; visible evidence of wooden structures perishes quickly in tropical climates. Only slight topographic modifications remain, typically associated with middens. Nonetheless, surface scatters, as well as the soil characteristics they produce, can serve as quantifiable indicators of an archaeological site, which can be detected by analysis of remote sensing imagery. A variety of data sets was investigated, with the intention of combining multispectral bands to feed a direct detection algorithm, providing a semi-automatic process to cross-correlate the datasets. Sampling was done using locations of known sites, as well as areas with no archaeological evidence. The pre-processed, very diverse remote sensing data sets went through a process of image registration. The algorithm was applied in the northwestern Dominican Republic to areas that included different types of environments, chosen for having sufficient imagery coverage and a representative number of known locations of indigenous sites. The resulting maps present quantifiable statistical results of locations with pixel value combinations similar to those of the identified sites, indicating a higher probability of archaeological evidence. The results show the variable potential of this method in diverse environments.
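The abstract does not spell out the detection algorithm, so the following is only a generic illustration of the idea of scoring pixels by their multispectral similarity to known-site samples (here with a Mahalanobis distance); the band count, sample data and function name are hypothetical.

```python
import numpy as np

def site_similarity_map(stack, site_pixels, eps=1e-6):
    """Score each pixel by its spectral similarity to known-site samples.

    stack       : (bands, rows, cols) co-registered multispectral cube
    site_pixels : (n_samples, bands) spectra sampled at known site locations
    Returns a (rows, cols) map of Mahalanobis distances (lower = more similar).
    """
    bands, rows, cols = stack.shape
    x = stack.reshape(bands, -1).T                     # (pixels, bands)
    mu = site_pixels.mean(axis=0)
    cov = np.cov(site_pixels, rowvar=False) + eps * np.eye(bands)
    inv_cov = np.linalg.inv(cov)
    d = x - mu
    dist_sq = np.einsum("ij,jk,ik->i", d, inv_cov, d)  # squared Mahalanobis
    return np.sqrt(dist_sq).reshape(rows, cols)

# Illustrative call with random data standing in for registered imagery:
cube = np.random.rand(6, 100, 100)                     # 6 bands, 100x100 pixels
samples = np.random.rand(40, 6)                        # spectra at 40 known sites
similarity = site_similarity_map(cube, samples)
```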
Epidemiology and Genotype Distribution of Hepatitis C Virus In Russia
Nikolay Pimenov, Dmitry Kostyushev, Svetlana Komarova, Anastasia Fomicheva, Alexander Urtikov, Olga Belaia, Karina Umbetova, Natalia Tsapkova, Vladimir Chulanov
Subject: Medicine & Pharmacology, Other Keywords: hepatitis; HCV; epidemiology; genotype; subgenotype; recombinant; RF1_2k/1b; direct-acting antivirals
Hepatitis C virus (HCV) causes both acute and chronic disease of the liver that can lead to liver cirrhosis, cancer and liver failure. HCV is characterized by high genetic diversity and substantial variations in the prevalence of specific HCV genotypes in different countries of the world. Many effective regimens of direct-acting antivirals (DAAs), including pan-genotypic ones, can successfully treat HCV infection. However, genotype-specific treatments for HCV are being actively employed in national plans for elimination of HCV infection around the world. Evaluation of HCV genotype prevalence in a country is mandatory for successful implementation of the national plans for elimination of HCV infection and for allocation of financial resources to the DAAs most effective against the specific HCV genotypes prevalent in the country. Here, we analyzed HCV genotypes, subgenotypes and recombinants in 10,107 serum samples from patients with chronic HCV infection from all Federal Districts of Russia collected in 2015-2017. This is the first and largest evaluation of HCV genotypes performed on samples from all territories of Russia, from its Central Federal District to the Far East. Moreover, we have updated the retrospective epidemiological analysis of chronic and acute HCV infection in Russia in 2001-2021. We demonstrate that the incidence of acute HCV infection in Russia was reduced from 16.7 cases per 100,000 population in 2001 to 0.6 cases per 100,000 population in 2021. The number of cases of chronic HCV infection decreased from 29.5 to 16.4 per 100,000 population during this period. HCV genotype analysis indicated that HCV genotype 1 dominates in Russia (53.6%). Genotypes 3 and 2 were detected in 35.4% and 7.8% of patients, respectively. These proportions are virtually identical in all regions of Russia except for the Far East, where HCV genotype 2 amounts to only 1%. HCV genotypes 1 and 2 are more widespread in women, while HCV genotype 3 is more widespread in men. The highest frequency of genotype 3 was found in the age group of 31-40 years old (44.9%), while genotype 1 was more prevalent in the group over 70 years old (72.2%). The proportion of HCV genotype 2 is predominant among HCV-infected persons older than 40 years. Discriminating HCV genotype 2 and the recombinant RF1_2k/1b, which are frequently misclassified, is important for successful antiviral treatment of such patients. For the first time, we demonstrate the countrywide prevalence of HCV RF1_2k/1b in different regions of Russia. HCV RF1_2k/1b amounts to 3.2% in the structure of HCV genotypes, reaching 30% among samples classified as genotype 2 by some commercial genotyping tests. The highest proportion of HCV RF1_2k/1b was detected in the Northwestern (60%), Southern (41.6%) and Central (31.6%) Federal Districts. Its frequency in the Far Eastern and North Caucasus Districts was ~14.3%. HCV RF1_2k/1b was not detected in the Volga, Ural and Siberian districts. To conclude, this is the first and most complete evaluation of HCV epidemiology and genotype/subgenotype distribution in Russia.
Comparison of Efficiency Based Optimal Load Distribution for Modular Ssts With Biologically Inspired Optimization Algorithms
Mariam Mughees, Munazza Sadaf, Hasan Erteza Gelani, Abdullah Bilal, Faisal Saeed
Subject: Engineering, Electrical & Electronic Engineering Keywords: Solid state transformer; Direct current; Renewable Energy Systems; Ant Lion Optimizer
The battle of currents between AC and DC has reignited as a result of developments in the field of power electronics. The efficiency of DC distribution systems is highly dependent on the efficiency of the distribution converter, which calls for optimized schemes for efficiency enhancement of distribution converters. Modular solid-state transformers play a vital role in DC distribution networks and Renewable Energy Systems (RES). This paper deals with efficiency-based load distribution for Solid State Transformers (SSTs) in DC distribution networks. The aim is to achieve a set of minimum inputs that are consistent with the output while considering constraints and efficiency. The main feature of modularity is associated with the three-stage structure of SSTs. This modular structure has been optimized using the Ant Lion Optimizer (ALO) and validated by applying it to an EIA (Energy Information Agency) DC distribution network which contains SSTs. In the DC distribution grid, modular SSTs provide promising conversion of DC power from medium voltage to the lower DC range (400 V). The proposed algorithm is simulated in MATLAB and also compared with two other metaheuristic algorithms. The obtained results prove that the proposed method can significantly reduce the input requirements for producing the same output while satisfying the specified constraints.
A Wireless Animal Robot Stimulation System Based on Neuronal Electrical Signal Characteristics
Rui Yan, Ruituo Huai
Subject: Biology, Other Keywords: animal robots; neuronal electrical signal; electrical stimulation; Direct Digital Synthesis algorithm
As stimulus signals, coded electrical signals can control the motion behavior of animals, which has been widely exploited in the field of animal robots. In current research, most of the stimulus signals used by researchers are traditional waveforms, such as square waves. To enrich the stimulus waveforms, a wireless animal robot stimulation system based on neuronal electrical signal characteristics is presented in this paper. The stimulator uses the CC1101 wireless module to control animal behavior through brain stimulation. A LabVIEW-based graphical user interface (GUI) can manipulate brain stimulation remotely while the stimulator is powered by a battery. Additionally, the spikes of animals have been simulated by this system through a Direct Digital Synthesis (DDS) algorithm. The GUI enables users to customize the combination of these analog spike signals. The recombined signals are sent to the stimulator through the CC1101 as stimulus signals. In vivo experiments conducted on five pigeons verified the efficacy of the stimulation mechanism. The analog spike signal with an amplitude of 3-5 V successfully caused the pigeons' turning behavior. The feasibility of the analog spike signals as stimulus signals was thus verified, increasing the diversity of stimulus waveforms available in the field of animal robots.
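As background on the DDS technique the abstract refers to, the sketch below shows the core phase-accumulator-plus-lookup-table mechanism in Python; the accumulator width, table contents and spike-shaped template are illustrative assumptions and say nothing about the firmware actually running on the authors' hardware.

```python
import numpy as np

def dds_waveform(freq_hz, sample_rate_hz, n_samples, table=None, acc_bits=32):
    """Direct Digital Synthesis: phase accumulator plus waveform lookup table.

    The output frequency is f = tuning_word * sample_rate / 2**acc_bits.
    """
    if table is None:                                    # default: one sine period
        table = np.sin(2 * np.pi * np.arange(1024) / 1024)
    table_bits = int(np.log2(len(table)))
    tuning_word = int(round(freq_hz * 2**acc_bits / sample_rate_hz))
    acc = 0
    out = np.empty(n_samples)
    for i in range(n_samples):
        acc = (acc + tuning_word) & (2**acc_bits - 1)    # wraps like a hardware register
        out[i] = table[acc >> (acc_bits - table_bits)]   # top bits index the table
    return out

# Example: synthesize a 200 Hz spike-like biphasic template at a 20 kHz sample rate.
spike = np.concatenate([np.linspace(0, 1, 32), np.linspace(1, -0.4, 32),
                        np.linspace(-0.4, 0, 192)])      # crude 256-point spike shape
signal = dds_waveform(200, 20_000, 2000, table=spike)
```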
Direct Oral Anticoagulants in Patients with Nonvalvular Atrial Fibrillation: A Real-World Experience from a Single Spanish Regional Hospital
Enrique Rodilla, Maria Isabel Orts-Martinez, Miguel Angel Sanz-Caballer, María Teresa Gimeno-Brosel, Maria Jesús Arilla-Morel, Isabel Navarro-Gonzalo, Inmaculada Castillo-Valero, Inmaculada Salvador-Mercader, Ana Carral-Tatay
Subject: Medicine & Pharmacology, General Medical Research Keywords: Direct oral anticoagulants (DOACs); Nonvalvular atrial fibrillation (NVAF); Real-world experience
The aim is to evaluate a program for direct oral anticoagulant (DOAC) management in nonvalvular atrial fibrillation (NVAF) patients, according to patient profiles, appropriateness of dosing, patterns of crossover, effectiveness and safety. This is an observational and longitudinal retrospective study in a cohort of patients attended in daily clinical practice in a single regional hospital in Spain, with a systematic follow-up plan for up to 3 years for patients initiating dabigatran, rivaroxaban or apixaban between JAN/2012-DEC/2016. We analyzed 490 episodes of treatment (apixaban 2.5 mg: 9.4%, apixaban 5 mg: 21.4%, dabigatran 75 mg: 0.6%, dabigatran 110 mg: 12.4%, dabigatran 150 mg: 19.8%, rivaroxaban 15 mg: 17.8% and rivaroxaban 20 mg: 18.6%) in 445 patients. 13.6% of patients on dabigatran, 9.7% on rivaroxaban, and 3.9% on apixaban switched to other DOACs or changed dosing. Apixaban was the DOAC most frequently switched to. The most frequent reasons for switching were toxicity (23.8%), bleeding (21.4%) and renal deterioration (16.7%). Inappropriateness of dose was found in 23.8% of episodes. Patients taking apixaban 2.5 mg were older, had a higher CHA2DS2VASc score and lower creatinine clearance. Patients taking dabigatran 150 mg and rivaroxaban 20 mg were younger, had lower CHA2DS2VASc and higher creatinine clearance. Rates of stroke/transient ischemic attack (TIA) were 1.64/0.54 events/100 patient-years, while rates of major, clinically relevant non-major (CRNM) and intracranial bleeding were 2.4, 5, and 0.5 events/100 patient-years. Gastrointestinal and genitourinary bleeding were the most common types of bleeding events (BE). On multivariable analysis, prior stroke (RR: 4.2; CI: 1.5-11.8; p=0.006) and age (RR: 1.2; CI: 1.1-1.4; p=0.006) were independent predictors of stroke/TIA. Concurrent platelet inhibitors (RR: 7.1; CI: 2.3-21.8; p=0.001), male gender (RR: 2.1; CI: 1.2-3.7; p=0.0012) and age (RR: 1.1; CI: 1.02-1.13; p=0.005) were independent predictors of BE. This study complements the scant data available on the use of DOACs in NVAF patients in Spain, confirming a good safety and effectiveness profile.
Recent Advances in the Direct Synthesis of Hydrogen Peroxide Using Chemical Catalysis – a review
Sumanth Ranganathan, Volker Sieber
Subject: Chemistry, Chemical Engineering Keywords: catalyst; direct synthesis; hydrogen peroxide; Pd based catalyst; reactor engineering; microreactor
Hydrogen peroxide is an important chemical of increasing demand in today's world. Currently, the anthraquinone autoxidation process dominates the industrial production of hydrogen peroxide. Herein, hydrogen and oxygen are reacted indirectly in the presence of quinones to yield hydrogen peroxide. Owing to the complexity and multi-step nature of the process, it would be advantageous to replace it with an easier and more straightforward one. The direct synthesis of hydrogen peroxide from its constituent reagents is an effective and clean route to achieve this goal. Factors such as water formation due to thermodynamics, explosion risk, and the stability of the hydrogen peroxide produced hinder the applicability of this process at an industrial level. Currently, the catalysis for the direct synthesis reaction is palladium based, and the search for an effective and active catalyst has been ongoing for more than a century now. Palladium in its pure form, or alloyed with certain metals, constitutes the new generation of catalysts that are extensively researched. Additionally, to prevent the decomposition of hydrogen peroxide to water, the process is stabilised by adding certain promoters such as mineral acids and halides. A major part of today's research in this field focusses on the reactor and the mode of operation required for synthesising hydrogen peroxide. The emergence of microreactor technology has helped in setting up this synthesis in a continuous mode, which could possibly replace the anthraquinone process in the near future. This review will focus on the recent findings of the scientific community in terms of reaction engineering, catalyst and reactor design in the direct synthesis of hydrogen peroxide.
Investigation of Hg Content by a Rapid Analytical Technique in Mediterranean Pelagic Fishes
Giuseppa Di Bella, Roberta Tardugno, Nicola Cicero
Subject: Chemistry, Food Chemistry Keywords: Mercury, Pelagic Fish, Direct Mercury Analyzer, Mediterranean Sea, Tolerable Weekly Intake
Mercury (Hg) contamination of fish and seafood is a global concern and requires worldwide investigation of the seas in order to protect consumers. The aim of this study was to investigate the Hg concentration, by means of a rapid and simple analytical technique with a Direct Mercury Analyzer (DMA-80), in the pelagic fish species Tetrapturus belone (spearfish), Thunnus thynnus (tuna) and Xiphias gladius (swordfish) caught in the Mediterranean Sea. Hg contents were also evaluated in Salmo salar (salmon) as a pelagic fish not belonging to the Mediterranean area. The results obtained were variable, ranging between 0.015-2.562 mg kg-1 for T. thynnus, 0.477-3.182 mg kg-1 for X. gladius, 0.434-1.730 mg kg-1 for T. belone and 0.004-0.019 mg kg-1 for S. salar, respectively. The total Hg tolerable weekly intake (TWI) and percent tolerable weekly intake (TWI%) values according to the European Food Safety Authority (EFSA) were calculated. The results highlighted that the pelagic species caught in the Mediterranean Sea should be constantly monitored due to their high Hg contents, as well as their TWI and TWI%, with respect to the S. salar samples.
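To make the TWI% figure concrete, here is a hedged back-of-the-envelope calculation of the kind the abstract alludes to; the EFSA methylmercury TWI of 1.3 µg per kg body weight per week, the 70 kg body weight and the 150 g weekly portion are assumptions chosen for illustration, not values taken from the study.

```python
def twi_percent(hg_mg_per_kg, weekly_fish_g, body_weight_kg=70.0,
                twi_ug_per_kg_bw=1.3):
    """Percentage of the tolerable weekly intake used by a given consumption.

    hg_mg_per_kg     : measured Hg content of the fish (mg/kg wet weight = µg/g)
    weekly_fish_g    : fish eaten per week (g)
    twi_ug_per_kg_bw : assumed TWI (EFSA methylmercury value used here)
    """
    weekly_intake_ug = hg_mg_per_kg * weekly_fish_g      # µg of Hg ingested per week
    allowed_ug = twi_ug_per_kg_bw * body_weight_kg       # µg allowed per week
    return 100.0 * weekly_intake_ug / allowed_ug

# Example: swordfish at 1.0 mg/kg, one 150 g portion per week, 70 kg adult:
print(f"{twi_percent(1.0, 150):.0f}% of the TWI")        # roughly 165%
```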
The Effect of C2H2/H2 Gas Mixture Ratio in Direct Low-Temperature Vacuum Carburization
Yeongha Song, Jun-Ho Kim, Kyu-Sik Kim, Sunkwang Kim, Pung Keun Song
Subject: Materials Science, Surfaces, Coatings & Films Keywords: direct surface activation; low-temperature vacuum carburization; expanded austenite; supersaturation; acetylene
The effect of the acetylene-to-hydrogen gas mixture ratio in direct low-temperature vacuum carburization was investigated. The gas ratio is an important parameter for producing free radicals during carburization. The free radicals can remove the natural oxide film through the strong reaction of the hydrocarbons, thereby increasing the thermodynamic activity. When the gas ratio was below 1, supersaturated expanded austenite layers, with a maximum carbon solubility of up to 11.5 at.% at 743 K, were formed on the surface of the AISI 316L stainless steel. On the other hand, when the gas ratio was above 1, their carbon concentration remained low even when the process time was increased enough to reach the maximum carbon solubility. As a result, the carbon concentration underneath the surface was determined to be highly dependent on the mixture ratio of acetylene and hydrogen gases. In conclusion, it is necessary to restrict the ratio of acetylene and hydrogen gases in the total gas mixture to form an expanded austenite layer with a high carbon concentration in direct low-temperature vacuum carburization.
The Dynamics of the Light Adaptation of the Human Visual System; Subjective and Electroretinographic Analysis
Friederike Thoss, Simone Ballosek, Bengt Bartsch, Franz Thoss
Subject: Life Sciences, Other Keywords: rapid light adaptation; glare; discrimination threshold; increment threshold; direct current electroretinogram
The excitation of the visual system increases with increasing retinal illumination. At the same time, the sensitivity of the system decreases (light adaptation). Higher excitation automatically results in a lower sensitivity. This study investigates whether this parallelism between excitation and sensitivity also applies in the dynamic case, that is, during the transition to a higher excitation level after an increase in the retinal illuminance. For this purpose, the courses of the subjective and the electroretinographic thresholds during the transitional phase after a step of the adaptation illumination were determined by means of a special light-stimulation apparatus. As a measure of the course of the excitation during this time, the ERG response to the adaptation step was recorded with a special amplifier. The threshold curve always shows an overshoot, which exhibits very strong subjective differences. It can be concluded that the glare caused by a sudden increase in illuminance is experienced very differently between subjects. The comparison between the ERG response to the adaptation step and the course of the electroretinographic increment threshold during this time shows a broad agreement between the two courses. It can thus be assumed that the sensitivity of the visual system follows the course of the excitation in the dynamic case as well. In addition, the investigation shows that the glare experienced after a step in the illuminance clearly shows great subjective differences.
Effect of Fuel Injection Strategy on the Carbonaceous Structure Formation and Nanoparticle Emission in a DISI Engine Fuelled with Butanol
Simona Silvia Merola, Adrian Irimescu, Silvana Di Iorio, Bianca Maria Vaglieco
Subject: Engineering, Automotive Engineering Keywords: spark ignition engine; direct injection; gasoline; butanol; optical investigations; nanoparticle emissions
Within the context of the ever wider expansion of direct injection in spark ignition engines, this investigation was aimed at an improved understanding of the correlation between fuel injection strategy and the emission of nanoparticles. Measurements performed on a wall-guided engine allowed identification of the mechanisms involved in the formation of carbonaceous structures during combustion and their evolution in the exhaust line. In-cylinder pressure was recorded in combination with cycle-resolved flame imaging, gaseous emissions and particle size distribution. This complete characterization was performed at three injection phasing settings, with butanol and commercial gasoline. Optical accessibility from below the combustion chamber allowed visualization of diffusive flames induced by fuel deposits; these localized phenomena were correlated to observed changes in engine performance and pollutant species. With gasoline fueling, minor modifications were observed with respect to combustion parameters when varying the start of injection. The alcohol, on the other hand, featured marked sensitivity to the fuel delivery strategy. Even though the start of injection was varied in a relatively narrow crank angle range during the intake stroke, significant differences were recorded, especially in the values of particle emissions. This was correlated to the fuel jet-wall interactions; the analysis of diffusive flames, their location and size confirmed the importance of liquid film formation in direct injection engines, especially at medium and high load.
Synthesis, Characterization and Catalytic Performance of Well-ordered Crystalline Heteroatom Mesoporous MCM-41
Jing Qin, Baoshan Li
Subject: Chemistry, Applied Chemistry Keywords: bulk crystal mesoporous MCM-41; heteroatom molecular sieves; direct hydrothermal method
Mesoporous heteroatom molecular sieve MCM-41 bulk crystals with a crystalline phase were synthesized via a one-step hydrothermal method using an ionic complex as template. The ionic complex template was formed by interaction between cetyltrimethylammonium ions and the metal complex ion [M(EDTA)]2- (M = Co or Ni). The materials were characterized by X-ray diffraction, scanning electron microscopy, high-resolution transmission electron microscopy, N2 adsorption–desorption isotherms, and X-ray absorption fine structure spectroscopy. The results showed that the materials possess a highly ordered mesoporous structure with a crystalline phase and highly uniform, ordered channels. The structure features vertically crossed directions, with a crystal size of about 12 µm and high specific surface areas. The metal atoms were incorporated into the zeolite frameworks in octahedral coordination and have a uniform distribution in the materials. The amount of metal complex formed by the metal ion and EDTA is an essential factor for the formation of the vertical cross structure. Compared to Si-MCM-41, the samples exhibited better conversion and higher selectivity for cumene cracking.
Does Globalization Encourage Female Employment? A Cross-Country Panel Study
Asrifa Hossain, Shankar Ghimire, Anna Valeva, Jessica Harriger-Lin
Subject: Social Sciences, Economics Keywords: Female Participation in Labor Force (FPLF); Foreign Direct Investment (FDI); System GMM
This study assesses the impact of globalization on female participation in the labor force (FPLF). The increased globalization of the last several decades has created various economic opportunities for enterprises and individuals worldwide at an unprecedented rate. As a result, it has helped improve the quality of life for many men and women. In this process, the issue of women's economic participation has been a critical topic for discussion worldwide. In that context, the objective of the paper is to determine whether FPLF is influenced by a country's participation in foreign markets through foreign direct investment (FDI), a proxy for globalization. The paper uses a panel dataset obtained from the World Bank's World Development Indicators database for 99 countries from 2001 to 2018. We then use the system Generalized Method of Moments (system GMM) to estimate a dynamic panel model with appropriate specification tests. The results show that the positive effects of FDI on FPLF are more robust for low- and middle-income countries than for high-income countries. We also find that the results may be sensitive to outlier observations. Our results explain the seemingly inconclusive findings within the existing literature and suggest that low- and middle-income countries should particularly focus on sectors that generate FDI, as they stand to yield the greatest benefits with regard to female economic empowerment.
Sequestering Biomass for Natural, Efficient, and Low-Cost Direct Air Capture of Carbon Dioxide (Version 4)
Jeffrey A. Amelse, Paul K. Behrens
Subject: Earth Sciences, Atmospheric Science Keywords: Carbon Dioxide; Net Zero; Sequestration; Biomass; Direct Capture; Global Warming; Landfills; Forestry
Many corporations and governments aspire to become Net Zero Carbon Dioxide by 2030-2050. Achieving this goal requires understanding where energy is produced and consumed, the magnitude of CO2 generation, and the Carbon Cycle. Many prior proposed solutions focus on reducing future CO2 emissions from continued use of fossil fuels. Examination of these technologies exposes their limitations and shows that none offers a complete solution. For example, bioethanol is shown to be both carbon and energy inefficient. Direct Air Capture technologies are needed to reduce CO2 already in the air. The most natural form of Direct Air Capture involves letting nature do the work of creating biomass via photosynthesis. However, it is necessary to break the Carbon Cycle by permanently sequestering that biomass carbon in "landfills" modified to discourage decomposition to CO2 and methane. Tree leaves and biomass grown on purpose, such as high-yield switchgrass, are proposed as good biomass sources for this purpose. Left unsequestered, leaves decompose with a short Carbon Cycle time constant, releasing CO2 back to the atmosphere. While in any given year leaves represent a small fraction of a tree's above-ground biomass, leaves can represent a substantial fraction of the total biomass generated by a tree when integrated over the tree's lifetime. Understanding the chemistry of the distinct phases landfills undergo is the key to minimizing or eliminating decomposition. First, the compact cross-linked structure of cellulose and keeping water out will make it difficult for initial depolymerization to release sugars. Air ingress should be minimized to minimize Phase I aerobic decomposition. pH manipulation can discourage acid formation during Phase II. Lignocellulose is low in the nutrients needed for anaerobic decomposition. Inhibitors can be added if needed. The goal is to move quickly to the dormant phase where decomposition stops. The cost of Carbon Capture and Storage (CCS) for growing and sequestering high-yield switchgrass is estimated to be lower than CCS for steam reforming of methane hydrogen plants (SRM) and supercritical or combined cycle coal power plants. Thus, sequestration of biomass is a natural, carbon-efficient, and low-cost method of Direct Capture. Biomass sequestration can provide CO2 removal on the gigatonne-per-year scale and can be implemented in the needed timeframe (2030-2050).
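As a quick, hedged sanity check on the scale being discussed (the figures below are generic rule-of-thumb assumptions, not numbers from the paper): dry lignocellulosic biomass is roughly half carbon by mass, and each tonne of carbon corresponds to 44/12 tonnes of CO2.

```python
def co2_sequestered_tonnes(dry_biomass_tonnes, carbon_fraction=0.50):
    """CO2 equivalent of permanently sequestered dry biomass.

    carbon_fraction : assumed carbon content of dry lignocellulose (~50%)
    44/12           : molar mass ratio of CO2 to carbon
    """
    return dry_biomass_tonnes * carbon_fraction * 44.0 / 12.0

# Example: 10 t/ha/yr of switchgrass grown on 1 million hectares:
print(f"{co2_sequestered_tonnes(10e6):.2e} t CO2/yr")   # about 1.8e7 t CO2/yr
```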
A Comparative Study on the Mechanical Properties and Microstructure of Cement-Based Materials by Direct Electric Curing and Steam Curing
Zhihan Yang, Youjun Xie, Jionghuang He, Fan Wang, Xiaohui Zeng, Kunlin Ma, Guangcheng Long
Subject: Engineering, Civil Engineering Keywords: Direct electric curing; Steam curing; Mechanical properties; Microstructure; Joule heat; Energy consumption
Direct electric curing (EC) is a new green curing method for cement-based materials that improves early mechanical properties via the uniform high temperature produced by Joule heating. To understand the effects of EC and steam curing (SC) on the mechanical properties and microstructure of cement-based materials, mortar was cured under different temperature-controlled curing regimes (40°C, 60°C and 80°C). Meanwhile, the mechanical properties, hydrate phases and pore structure of the specimens were investigated. The energy consumption of the two curing methods was compared and analyzed. The results show that the EC specimens have better and more stable growth of mechanical strength. The pore structure of the EC specimens is also better than that of the SC specimens at different curing ages. However, the hydration degree and products of samples cured by EC are similar to those of SC samples. The energy consumption of EC is lower than that of SC. This study provides important technical support for the use of EC in the production of energy-saving and high early-strength precast concrete components.
Effect of Aluminum Nitride Buffer Layer Deposited by Molecular Beam Epitaxy on the Growth of Aluminum Nitride Thin Films Deposited by DC Magnetron Sputtering Technique
B. Riah, Julien Camus, Abdelhak Ayad, Mohammad Rammal, Raouia Zernadji, Nadjet Rouag, M.A. Djouadi
Subject: Materials Science, Biomaterials Keywords: Hexagonal AlN; thin films; Direct current magnetron sputtering; Texture; fiber; heteroepitaxial growth
This paper reports the effect of silicon substrate orientation and of an aluminum nitride buffer layer deposited by molecular beam epitaxy on the growth of aluminum nitride thin films deposited by the DC magnetron sputtering technique at low temperature. Structural analysis revealed a strong (0001) fiber texture for both Si (100) and Si (111) substrates, and hetero-epitaxial growth on a few-nanometer AlN buffer layer grown by MBE on a Si (111) substrate. SEM images and XRD characterization showed an enhancement in AlN crystallinity thanks to the AlN (MBE) buffer layer. Raman spectroscopy indicated that the AlN film was relaxed when deposited on Si (111), in compression on Si (100), and under tension on the AlN buffer layer grown by MBE on Si (111) substrates, respectively. The interface between Si (111) and the AlN grown by MBE is abrupt and well defined, contrary to the interface between the AlN deposited using PVD and the AlN grown by MBE. Nevertheless, AlN hetero-epitaxial growth was obtained at low temperature (<250°C).
Acceptability, Appropriateness, and Feasibility of Automated Screening Approaches and Family Communication Methods for Identification of Familial Hypercholesterolemia: Stakeholder Engagement Results from the IMPACT-FH study
Laney K Jones, Nicole Walters, Andrew Brangan, Catherine D Ahmed, Michael Gatusky, Gemme Campbell-Salome, Ilene G Ladd, Amanda Sheldon, Samuel S Gidding, Mary P McGowan, Alanna K Rahm, Amy C Sturm
Subject: Medicine & Pharmacology, Allergology Keywords: Familial hypercholesterolemia; identification; implementation outcomes; cascade screening; cascade testing; chatbots; direct contact
Guided by the Conceptual Model of Implementation Research, we explored the acceptability, appropriateness, and feasibility of: 1) automated screening approaches utilizing existing health data to identify those who require subsequent diagnostic evaluation for familial hypercholesterolemia (FH) and 2) family communication methods including chatbots and direct contact to communicate information about inherited risk for FH. Focus groups were conducted with 22 individuals with FH (2 groups) and 20 clinicians (3 groups). These were recorded, transcribed, and analyzed using deductive (coded to implementation outcomes) and inductive (themes based on focus group discussions) methods. All stakeholders described these initiatives as: 1) acceptable and appropriate to identify individuals with FH and communicate risk with at-risk relatives; and 2) feasible to implement in current practice. Stakeholders cited current initiatives, outside of FH (e.g., pneumonia protocols, colon cancer and breast cancer screenings), that gave them confidence for successful implementation. Stakeholders described perceived obstacles, such as nonfamiliarity with FH, that could hinder implementation and potential solutions to improve systematic uptake of these initiatives. Automated health data screening, chatbots, and direct contact approaches may be useful for patients and clinicians to improve FH diagnosis and cascade screening.
Photophysical, Thermal and Structural Properties of Thiophene and Benzodithiophene-Based Copolymers Synthesized by Direct Arylation Polycondensation Method
Newayemedhin A. Tegegne, Zelalem Abdissa, Wendimagegn Mammo
Subject: Materials Science, Biomaterials Keywords: Intramolecular charge transfer; copolymers; pi-pi stacking; direct arylation polycondensation; excitonic state
Three low band gap copolymers based on isoindigo acceptor units were designed and successfully synthesized by the direct arylation polycondensation method. Two of them were benzodithiophene (BDT)-isoindigo copolymers (PBDTI-OD and PBDTI-DT) with 2-octyldodecyl (OD) and 2-decyltetradecyl (DT) substituted isoindigo units, respectively. Thiophene donor and DT-substituted isoindigo acceptor units were copolymerized to synthesize PTI-DT. The copolymers have a broad absorption range that extends to over 760 nm with a band gap of ~ .5 eV. The photophysical studies showed that the BDT-based copolymers have non-polar ground states. Their emission exhibited the population of an intramolecular charge transfer (ICT) state in polar solvents and a tightly bound excitonic state in non-polar solvents due to self-aggregation. On the contrary, the emission from the thiophene-based copolymers was only from the tightly bound excitonic state. The thermal decomposition temperature of the copolymers was above 380 °C. The X-ray diffraction patterns of the three copolymers showed a halo due to pi-pi stacking. A second, sharper peak was observed in the BDT-based copolymer with the longer side chain on the isoindigo unit (PBDTI-DT) and in the thiophene-based copolymers, with PTI-DT exhibiting a better structural order.
Direct Ink Writing Technology (3D Printing) of Graphene-Based Ceramic Nanocomposites: A Review
Nestor Washington Solís Pinargote, Anton Smirnov, Nikita Peretyagin, Anton Seleznev, Pavel Peretyagin
Subject: Materials Science, Nanotechnology Keywords: additive manufacturing; graphene oxide; graphene-based paste; direct ink writing; ceramic nanocomposites
In the present work, the state of the art of the most common additive manufacturing (AM) technologies used for the manufacturing of complex shape structures of graphene-based ceramic nanocomposites, ceramic and graphene-based parts is explained. A brief overview of the AM processes for ceramic, which are grouped by the type of feedstock used in each technology, is presented. The main technical factors that affect the quality of the final product were reviewed. The AM processes used for 3D printing of graphene-based materials are described in more detail; moreover, some studies in a wide range of applications related to these AM techniques are cited. Furthermore, different feedstock formulations and their corresponding rheological behaviour were explained. Additionally, the most important works about the fabrication of composites using graphene-based ceramic pastes by Direct Ink Writing (DIW) are disclosed in detail and illustrated with representative examples. Various examples of the most relevant approaches for the manufacturing of graphene-based ceramic nanocomposites by DIW are provided.
Aerosol Jet Direct Writing Polymer-Thick-Film Resistors for Printed Electronics
James Q. Feng, Anthony Loveland, Michael J. Renn
Subject: Keywords: printed electronics; Aerosol Jet® printing; direct-write technology; embedded PTF resistors
Electronic designers nowadays face two challenging demands across various applications: miniaturization and increased functionality. To satisfy these seemingly opposed requirements, reducing the number of mounted components (and thus solder joints) in PCB designs becomes an attractive approach, by directly printing passive components such as embedded resistors into the circuit. This approach can also potentially increase reliability, e.g. the "mean time between failures" (MTBF), while reducing the circuit board size. With its unique capabilities for non-contact precision material deposition, the Aerosol Jet® direct-write technology has been enabling additive manufacturing of fine-feature electronics conformally onto flexible substrates of complicated shapes. The CAD/CAM-controlled relative motion between substrate and print head allows convenient adjustment of the pattern and pile height of the deposited material at a given ink volumetric deposition rate. To date in the printed electronics industry, additively printing embedded polymer-thick-film (PTF) resistors has mostly been done with screen printing using carbon-based paste inks. Here we demonstrate results for Aerosol Jet® printed PTF resistors with resistance values ranging from ~50 Ω to >1 kΩ, adjustable (among several variable parameters) by the number of stacked layers (or print passes, with each pass depositing a fixed amount of ink) between contact pads around 1 mm apart, with a footprint line width typically <0.3 mm. In principle, any ink material that can be atomized into fine droplets of 1 to 5 microns can be printed with the Aerosol Jet® system. However, the print quality, such as line edge cleanliness, can be significantly influenced by the ink rheology, which involves solvent volatility, solids loading, and so on. Our atomizable carbon ink was made by simply diluting a screen printing paste with a compatible solvent of reasonable volatility, and it can be cured at temperatures below 200 °C. We show that Aerosol Jet® printed overlapping lines can be stacked to a large pile height (to reduce the resistance value) without a significant increase of line width, which enables fabricating embedded resistors with adjustable resistance values in a limited footprint space.
On a Dual Direct Cosine Simplex Type Algorithm and Its Computational Behavior
Elsayed Badr, Khalid Aloufi
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: linear programming; dual simplex method; dual direct cosine method; two-phase method
The goal of this paper is to propose a dual version of the direct cosine simplex algorithm (DDCA) for general linear problems. Unlike the two-phase and big-M methods, our technique does not involve artificial variables. Our technique solves the dual Klee-Minty problem in two iterations and the dual Clausen's problem in four iterations. The utility of the proposed method is evident from the extensive computational results on test problems adapted from NETLIB. Preliminary results indicate that this dual direct cosine simplex algorithm (DDCA) reduces the number of iterations of the two-phase method.
Hardware Factors Influencing Interlayer Bonding Strength of Parts Obtained by Fused Filament Fabrication
Vladimir E. Kuznetsov, Azamat G. Tavitov, Oleg D. Urzhumtcev
Subject: Materials Science, Polymers & Plastics Keywords: fused filament fabrication; fused deposition modeling; interlayer bonding; direct extruder; Bowden extruder
This paper investigates the influence of the hardware setup and parameters of a 3D printing process based on fused filament fabrication (FFF) technology on the resulting sample strength. Three-point bending of samples printed with the long side oriented along the Z axis was used as a measure of the interlayer bonding strength. The same CAD model was converted into NC programs through the same slicing software to be run on four different desktop FFF 3D printers, using filament of the same brand and color. On all printers, the same ranges of layer thickness values (0.1 to 0.3 mm) and feed rates (25 to 75 mm/s) were planned to be varied. All the machines demonstrated statistically almost identical values of maximum flexural strength; however, the different machines exhibited maximum sample strength with different combinations of the varied parameters. Among all the hardware factors observed, the most important proved to be the extruder type, direct or Bowden. This feature fundamentally changes the nature of the studied parameters' influence on the resulting strength of the FFF process. For extruders of the Bowden type, the length of the flexible guiding tube is of great importance.
TL-Moments for Type-I Censored Data with an Application to the Weibull Distribution
Hager A. Ibrahim, Mahmoud Riad Mahmoud, Fatma A. Khalil, Ghada A. El-Kelany
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: censored data; estimation; direct L-moments; TL-moments; maximum likelihood; Weibull distribution
This paper aims to provide an adaptation of the TL-moments method to censored data. The present study concentrates on Type-I censored data. The idea of using TL-moments with censored data may seem contradictory, but our perspective is that data may be censored from one side and trimmed from the other side. This approach is applied to estimate the two unknown parameters of the Weibull distribution. The suggested point estimates are compared with the direct L-moments and ML methods. A Monte Carlo simulation study is carried out to compare these methods in terms of estimate averages, root mean square error (RMSE) and relative absolute bias (RAB).
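For readers unfamiliar with TL-moments, the standard symmetrically trimmed definition (with trimming parameter t, where t = 0 recovers ordinary L-moments) is usually written as
$$ \lambda_r^{(t)} = \frac{1}{r} \sum_{k=0}^{r-1} (-1)^k \binom{r-1}{k}\, E\!\left[X_{r+t-k\,:\,r+2t}\right], \qquad r = 1, 2, \ldots $$
so that, for example, the first TL-moment with t = 1 is $\lambda_1^{(1)} = E[X_{2:3}]$, the expected median of a sample of size three. This is given only as background; the paper's censored-data version of these quantities follows its own derivation.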
First of its Kind: A Test Artifact for Direct Laser Writing
Sven Fritzsche, Brian Richard Pauw, Christiane Weimann, Heinz Sturm
Subject: Engineering, Other Keywords: test artifact; two photon polymerization; Direct Laser Writing; quality infrastructure; multi photon lithography
With Direct Laser Writing (DLW) maturing in all aspects as a manufacturing technology, a toolset for quality assurance must be developed. In this work we introduce a first-of-its-kind test artifact. Test artifacts are standardized 3D models with specific geometric features used to evaluate the performance of writing parameters. Test artifacts are already common in other 3D additive manufacturing technologies, e.g. Selective Laser Melting. The test artifact introduced in this work was developed in particular to accommodate 1) the high geometrical resolution of DLW structures and 2) the limited possibilities to examine the resulting structure. Geometric accuracy, surface adhesion as well as confocal Raman spectroscopy results were considered when evaluating the design of the test artifact. We explain the individual features and design considerations of our DLW test artifact. The differences between two slicers, Cura and 3DPoli, and their implications for measured feature sizes and the general shape are quantified. The measured geometries are used to derive a general design guide for a specific combination of photoresist, laser power and scanning speed, and to analyse the geometric accuracy of a structure produced using these guidelines.
Study of Hybrid Transmission HVAC/HVDC by Particle Swarm Optimization (PSO)
Yulianta Siregar, Credo Maestro Hasianta Pardede
Subject: Engineering, Electrical & Electronic Engineering Keywords: high voltage alternating current; high voltage direct current; particle swarm optimization; power losses
Indonesia's SUMBAGUT 150 kV high-voltage alternating current (HVAC) transmission network has considerable power losses. These power losses are a critical problem in the transmission network system. This study provides one solution to reduce power losses using a high-voltage direct current (HVDC) network system. Determining the location at which to convert HVAC into HVDC is very important. The authors use Particle Swarm Optimization (PSO) to find the optimal location in the 150 kV SUMBAGUT HVAC transmission network. The study results showed that before using the HVDC network system, the power losses were 122.26 MW. With one HVDC transmission, the power losses are 84.16 MW for "Paya Pasir-Sei Rotan", 90.83 MW for "Porsa-P. Siantar", and 104.14 MW for "Paya Pasir-Paya Geli". With two HVDC transmissions, the power losses are 71.24 MW for "Paya Pasir-Sei Rotan" and "Porsa-P. Siantar", 77.46 MW for "Paya Pasir-Sei Rotan" and "Paya Pasir-Paya Geli", and 78.52 MW for "Porsa-P. Siantar" and "Paya Pasir-Paya Geli". Finally, with three HVDC transmissions in "Paya Pasir-Sei Rotan," "Porsa-P. Siantar," and "Paya Pasir-Paya Geli," the power losses are 64.57 MW.
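As a point of reference for how PSO is typically applied to this kind of placement problem, the sketch below implements a minimal particle swarm minimizing a placeholder objective. The `power_losses` function is purely hypothetical; in the study it would be replaced by a load-flow loss calculation of the SUMBAGUT network for a candidate set of HVAC-to-HVDC conversions, which is not reproduced here.

```python
import numpy as np

def power_losses(x):
    # Hypothetical stand-in for the load-flow loss (MW) of the network with
    # HVDC conversion decisions encoded in x.
    return np.sum((x - 0.3) ** 2)

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))   # particle positions in [0, 1]^dim
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, objective(gbest)

best, loss = pso(power_losses, dim=3)
print(best, loss)
```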
Novel Composite Electrode of the Reduced Graphene Oxide Nanosheets with Gold Nanoparticles Modified by Glucose Oxidase for Electrochemical Reactions
Li Dong, Wenjuan Yu, Minmin Liu, Yang Liu, Qinsi Shao, Aijun Li, Wei Yan, Jiujun Zhang
Subject: Chemistry, Analytical Chemistry Keywords: reduced graphene oxide nanosheets; gold nanoparticles; composite materials; glucose oxidase; direct electron transfer
Graphene-based composites have been widely explored as electrode and electrocatalyst materials for electrochemical energy systems. In this paper, a novel composite material of reduced graphene oxide nanosheets (rGON) with gold nanoparticles (NPs) (rGON-AuNP) is synthesized, and its morphology, structure and composition are characterized by SEM, HRTEM, XRD, EDX, FTIR, Raman, and UV-Vis techniques. To confirm this material's electrochemical activity, glucose oxidase (GOD) is chosen as the target reagent to modify the rGON-AuNP layer and form a GOD/rGON-AuNP/glassy carbon (GC) electrode. Two pairs of distinguishable redox peaks, corresponding to the redox processes of two different conformations of GOD on AuNP, are observed in the cyclic voltammograms of the GOD/rGON-AuNP/GC electrode. Both cyclic voltammetry and electrochemical impedance spectroscopy are employed to study the mechanism of direct electron transfer from GOD to the GC electrode on the rGON-AuNP layer. In addition, this GOD/rGON-AuNP/GC electrode shows catalytic activity toward the glucose oxidation reaction.
Multi-Purpose Nanovoid Array Plasmonic Sensor Produced by Direct Laser Patterning
Dmitrii V. Pavlov, Alexey Yu. Zhizhchenko, Mitsuhiro Honda, Masahito Yamanaka, Oleg B. Vitrik, Sergey A. Kulinich, Saulius Juodkazis, Sergey I. Kudryashov, Aleksandr Kuchmizhak
Subject: Physical Sciences, Applied Physics Keywords: direct femtosecond laser printing; nanovoid arrays; plasmonic sensors; refractive index and gas sensing
We demonstrate a multi-purpose plasmonic sensor based on a nanovoid array fabricated via inexpensive and highly reproducible direct femtosecond laser patterning of thin glass-supported Au films. The proposed nanovoid array exhibits near-IR surface plasmon (SP) resonances, which can be excited under normal incidence and optimised for a specific application by tailoring the array periodicity as well as the nanovoid geometric shape. The fabricated SP sensor offers a competitive sensitivity of about 1600 nm/RIU at a figure of merit of 12 in bulk refractive index tests, and allows for identification of gases and ultra-thin analyte layers, making the sensor particularly useful for common bioassay experiments. Moreover, isolated nanovoids support strong electromagnetic field enhancement at the lattice SP resonance wavelength, allowing for label-free molecular identification via surface-enhanced vibration spectroscopy.
A Soft Curtailment of Wide-area Central Air Conditioning Load
Leehter Yao, Lei Yao, Wei Hong Lim
Subject: Engineering, Electrical & Electronic Engineering Keywords: fuzzy linear programming; direct load control; scheduling optimization; chillers; air conditioning; demand response
A real-time two-way direct load control (TWDLC) of central air-conditioning chillers over a wide area is proposed to provide demand response. The proposed TWDLC scheme is designed to optimize the load shedding ratio of every customer under control to ensure that the target load to be shed is met in every scheduling period. In order to overcome the load reduction uncertainties of TWDLC, an innovative solution is proposed by applying a certain degree of loosening to the constraint on the actual shed load. Fuzzy linear programming is utilized to solve the optimization problem with fuzzy constraints. The proposed fuzzy linear programming problem is solved by delicately transforming it into a regular linear programming problem. A selection scheme used to obtain the feasible candidate set for load shedding at every sampling interval of TWDLC is also designed along with the fuzzy linear programming.
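To illustrate the flavor of the final step, i.e. solving an ordinary linear program for load-shedding allocation with a softened shed target, the sketch below uses scipy's `linprog`. The customer capacities, costs, tolerance and penalty are all hypothetical, and the paper's fuzzy-constraint transformation is only mimicked here by a penalized slack variable.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 customers, controllable chiller load caps (kW), per-kW
# "discomfort" costs, and a target load to shed with a small softened tolerance.
caps = np.array([120.0, 80.0, 150.0, 60.0])
cost = np.array([1.0, 1.4, 0.9, 1.2])
target, tolerance, penalty = 300.0, 30.0, 10.0

n = caps.size
# Variables: shed_1..shed_n, slack (allowed shortfall of the target, penalized).
c = np.concatenate([cost, [penalty]])
A_ub = np.concatenate([-np.ones(n), [-1.0]]).reshape(1, -1)   # -(sum shed + slack) <= -target
b_ub = np.array([-target])
bounds = [(0.0, cap) for cap in caps] + [(0.0, tolerance)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("shed per customer:", res.x[:-1], "slack:", res.x[-1], "total shed:", res.x[:-1].sum())
```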
Reduction of Bromo- and Iodo-2,6-bis(diphenylphosphanylmethyl)benzene with Magnesium and Calcium
Alexander Koch, Sven Krieck, Helmar Görls, Matthias Westerhausen
Subject: Chemistry, Organic Chemistry Keywords: Grignard reagent; arylmagnesium halide; phosphinates; calcium phosphinates; direct synthesis; ether degradation; calcium bromide
Arylmagnesium and -calcium reagents are easily accessible; however, ether degradation processes limit the storability, especially of the calcium-based heavy Grignard reagents. Ortho-bound substituents with phosphanyl donor sites usually block available coordination sites and stabilize such complexes. The reaction of bromo-2,6-bis(diphenylphosphanylmethyl)benzene (1a) with magnesium in tetrahydrofuran yields [Mg{C6H3-2,6-(CH2PPh2)2}2] (2) after recrystallization from 1,2-dimethoxyethane. However, the similarly performed reduction of bromo- (1a) and iodo-2,6-bis(diphenylphosphanylmethyl)benzene (1b) with calcium leads to ether cleavage and subsequent degradation products. α-Deprotonation of THF yields 1,3-bis(diphenylphosphanylmethyl)benzene. Furthermore, the insoluble THF adducts of dimeric calcium diphenylphosphinate halides, [(thf)3Ca(X)(µ-O2PPh2)]2 [X = Br (3a), I (3b)], precipitate, verifying ether decomposition and cleavage of P-C bonds. Ether adducts of calcium halides [such as [(dme)2(thf)CaBr2] (4)] form, supporting the initial Grignard reaction and a subsequent Schlenk-type dismutation reaction.
Simulation of Matrix Converter by using MATLAB-Simulink
Abhinav Vinod Deshpande
Subject: Engineering, Automotive Engineering Keywords: Direct Matrix Converter (DMC); Space Vector Pulse Width Modulation (SVPWM); AC-AC Power Conversion; Converters; Simulation
The matrix converter converts the input line voltage into a variable voltage with an unrestricted output frequency without using an intermediate DC-link circuit. A pure sine in and pure sine out is the unique feature of the matrix converter. This paper analyzes the basic operating principle and the simulation modeling of the direct matrix converter, which is controlled by the Space Vector Pulse Width Modulation technique, using MATLAB/Simulink. Matrix converters can fulfill the most desirable features of power frequency changers, which is the reason for the tremendous interest in this topology, since the power electronic circuits known as motor drives are used to operate AC motors at frequencies other than that of the supply.
Innovative Approaches to Aphasia Rehabilitation: a Review on Efficacy, Safety and Controversies
Chiara Picano, Agnese Quadrini, Francesca Pisano, Paola Marangolo
Subject: Behavioral Sciences, Applied Psychology Keywords: Post-stroke aphasia; aphasia rehabilitation; pharmacological approach; virtual reality; transcranial direct current stimulation (tDCS)
Aphasia is one of the most socially disabling post-stroke deficits. Although traditional therapies have been shown to induce adequate clinical improvement, aphasic symptoms often persist. Therefore, new rehabilitation techniques which act as a substitute for or as an adjunct to traditional approaches are urgently needed. The present review provides an overview of the efficacy and safety of the most innovative approaches which have been proposed over the last twenty years. First, we examined the effectiveness of the pharmacological approach, principally used as an adjunct to language therapy, reporting the mechanism of action of each drug for the recovery of aphasia. Results are conflicting but promising. Secondly, we discussed the application of Virtual Reality (VR), which has proved to be useful since it potentiates the ecological validity of language therapy by using virtual contexts which simulate real-life everyday situations. Finally, we focused on the use of Transcranial Direct Current Stimulation (tDCS), both discussing its applications at the cortical level and highlighting a new perspective which considers the possibility of extending the use of tDCS over the motor regions. Although the review reveals an extraordinary variability among the different studies, substantial agreement has been reached on some general principles, such as the necessity to consider tDCS only as an adjunct to traditional language therapy.
Analytical Modeling of Residual Stress in Selective Laser Melting Considering Volume Conservation in Plastic Deformation
Elham Mirkoohi, Dongsheng Li, Hamid Garmestani, Steven Y. Liang
Subject: Engineering, Mechanical Engineering Keywords: Selective Laser Melting; residual stress; direct metal deposition; thermomechanical analytical modeling; Ti-6Al-4V
Residual stress (RS) is the most challenging problem in metal additive manufacturing (AM), since the build-up of high tensile RS may influence the fatigue life, corrosion resistance, crack initiation, and failure of the additively manufactured components. While tensile RS is inherent in all AM processes, fast and accurate prediction of the stress state within the part is extremely valuable and would enable optimization of the process parameters to achieve a desired RS and control of the build process. This paper proposes a physics-based analytical model to rapidly and accurately predict the RS within the additively manufactured part. In this model, a transient moving point heat source (HS) is utilized to determine the temperature field. Due to the high temperature gradient within the proximity of the melt pool area, the material experiences high thermal stress. Thermal stress is calculated by combining three sources of stress, namely the stresses due to body forces, normal tension, and hydrostatic stress in a homogeneous semi-infinite medium. The thermal stress determines the RS state within the part. Consequently, by taking the thermal stress history as an input, both the in-plane and out-of-plane RS distributions are found from the incremental plasticity and kinematic hardening behavior of the metal, considering volume conservation in plastic deformation in coupling with the equilibrium and compatibility conditions. In this modeling, material properties are temperature-sensitive, since the steep temperature gradient varies the properties significantly. Moreover, the energy needed for the solid-state phase transition is reflected by modifying the specific heat employing the latent heat of fusion. Furthermore, the multi-layer and multi-scan aspects of metal AM are considered by including the temperature history from previous layers and scans. Results from the analytical RS model presented excellent agreement with XRD measurements employed to determine the RS in Ti-6Al-4V specimens.
New Methodological Approach Towards a Complete Characterization of Structural Fiber Reinforced Concrete by Means of Mechanical Testing Procedures
Marcos G. Alberti, Alvaro Picazo, Jaime C. Gálvez, Alejandro Enfedaque
Subject: Engineering, Civil Engineering Keywords: fiber reinforced concrete; direct tensile test; push-off test; polyolefin fiber; digital image correlation
This work proposes a novel methodology for the complete characterization of fiber reinforced concrete (FRC). The method includes bending tests of prismatic notched specimens, based on the standards for FRC, as well as tensile and pure shear tests. The values adopted by the standards for designing FRC are those obtained from bending tests, typically fR3, even for shear and pure tension loading. This paper shows that the remaining strength of FRC, supplied by the fibers, depends on the type of loading; in the case of shear and tensile loading, the prescriptions of the standards may be unsafe. In this work, the remaining halves of specimens subjected to the bending test are prepared and used for shear and tension tests. This means significant savings in specimen preparation and a greater amount of information for the structural use of FRC. The results provide relevant information for the design of structural elements of FRC compared with using only the data supplied by bending tests. In addition, a video-extensometry system was used to analyze the crack generation and cracking patterns. The video-extensometry applied to the shear tests allowed the assessment of the sliding and crack opening values at the crack discontinuity. These values may be quite relevant for the study of FRC behavior when subjected to shear according to the shear-friction model theories.
Lowering the Carbon Footprint of Steel Production Using Hydrogen Direct Reduction of Iron Ore and Molten Metal Methane Pyrolysis
Abhinav Bhaskar, Mohsen Assadi, Homam Nikpey Somehsaraei
Subject: Engineering, Energy & Fuel Technology Keywords: hydrogen; methane pyrolysis; direct reduced iron; industrial decarbonization; iron and steel; electric arc furnace
Reducing emissions from the iron and steel industry is essential to achieve the Paris climate goals. A new system to reduce the carbon footprint of steel production is proposed in this article by coupling hydrogen direct reduction of iron ore (H-DRI) and natural gas pyrolysis on a liquid metal surface inside a bubble column reactor. If grid electricity from the EU is used, the emissions would be 435 kg CO2/tls, without considering methane leakage from the extraction, storage and transport of natural gas. Solid carbon, produced as a by-product of natural gas decomposition, finds applications in many industrial sectors, including as a replacement for coal in coke ovens. The specific energy consumption (SEC) of the proposed system is approximately 6.3 MWh per ton of liquid steel (tls). This is higher than that of competing technologies: 3.48 MWh/tls for water-electrolysis-based DRI, and 4.3-4.5 MWh/tls for natural-gas-based DRI and the blast furnace-basic oxygen furnace (BF-BOF) route, respectively. Utilization of large quantities of natural gas, where the carbon remains unused, is the major reason for the high SEC. Preliminary analysis of the system revealed that it has the potential to compete with existing technologies to produce CO2-free steel if renewable electricity is used. Further studies on the kinetics of the bubble column reactor and the H-DRI shaft furnace, on the design and sizing of components, and on the building of industrial prototypes are required to improve the understanding of the system performance.
The Effect of Green Logistics on Economic growth, Social and Environmental Sustainability: An Empirical Study of Developing Countries in Asia
Syed Abdul Rehman Khan
Subject: Engineering, Other Keywords: renewable energy; education expenditure; environmental degradation; health expenditure; carbon emissions; foreign direct investment
This panel study investigates the relationship between green logistics indices and economic, environmental, and social factors from the perspective of Asian emerging economies. The study adopts FMOLS and DOLS methods to test the research hypotheses, addressing the problems of endogeneity and serial correlation. The results suggest that logistics operations, particularly LPI2 (efficiency of customs clearance processes), LPI4 (quality of logistics services) and LPI5 (trade and transport-related infrastructure), are positively and significantly correlated with per capita income, manufacturing value added and trade openness. In contrast, greater logistics operations are negatively associated with social and environmental problems, including climate change, global warming, carbon emissions, and atmospheric pollution. In addition, human health is badly affected by heavy smog, acid rainfall, and water pollution. The findings further reveal that political instability, natural disasters and terrorism are also primary causes of poor economic growth and environmental sustainability, alongside poor trade and logistics infrastructure. Further, the application of renewable energy resources and green practices can mitigate negative effects on social and environmental sustainability without compromising economic growth. There is very limited empirical work in the literature using renewable energy and a green ideology to solve macro-level social and environmental problems, so this study will assist policymakers and researchers in understanding the importance of the green concept in improving countries' social, economic and environmental performance.
Development and Application of a Wireless Sensor for Space Charge Density Measurement in an Ultra-High-Voltage, Direct-Current Environment
Encheng Xin, Yong Ju, Haiwen Yuan
Subject: Engineering, Other Keywords: Ultra High-voltage direct-current (UHVDC); space charge density; energy consumption; wireless communication; Zigbee
A space charge density wireless measurement system based on the idea of distributed measurement is proposed for collecting and monitoring the space charge density in an ultra-high-voltage direct-current (UHVDC) environment. The proposed system architecture is composed of a number of wireless nodes connected to space charge density sensors and a base station. A space charge density sensor based on the atmospheric ion counter method is elaborated and developed, employing an ARM microprocessor and a Zigbee radio-frequency module. The wireless network communication quality and the relationship between energy consumption and transmission distance in the complicated electromagnetic environment are tested. Based on the experimental results, the proposed measurement system demonstrates that it can adapt to the complex electromagnetic environment under UHVDC transmission lines and can accurately measure the space charge density.
Minimum-Fuel Planar Earth-to-Mars Low-Thrust Trajectories Using Bang-Bang Control
Daero Lee
Subject: Engineering, Other Keywords: Low-thrust trajectories; bang-bang control; electric propulsion; constant specific impulse; indirect method; direct method
Recent advances in electric propulsion systems have demonstrated that these engines can be used for long-duration interplanetary voyages. The constant-specific-impulse engine, described as a thrust-limited engine, is an example of this type of engine, possessing the ability to operate at a constant level of impulse. The determination of minimum-fuel, planar heliocentric Earth-to-Mars low-thrust trajectories of a spacecraft using constant specific impulse is discussed considering the first-order necessary conditions derived from Lawden's primer vector theory. The minimum-fuel low-thrust Earth-to-Mars optimization problem is then solved in a two-dimensional, heliocentric frame using both indirect and direct methods. In the indirect method, two-point boundary-value problems are derived and solved as boundary value problems for ordinary differential equations. In the direct method, a general-purpose optimal control software package called GPOPS-II is adopted to solve these optimal control problems. Numerical examples using the two optimization methods are presented to demonstrate the characteristics of minimum-fuel planar low-thrust trajectories with on-off-on thrust sequences at three chosen flight times and available maximum powers. The results are useful for broad trajectory searches in the preliminary phase of mission design.
Aero Submarine: a Theoretical Design
Sepehr Akramipour
Subject: Engineering, Automotive Engineering Keywords: theoretical design; aero submarine; aerial submersible vehicle; direct dive; water-air transition; air-water transition
Aero submarines (aerosubs) are vehicles that can both fly in air and travel under water. The concept of dual aerial and aquatic vehicles emerged in 1939, when Russian engineer Boris Ushakov proposed the "flying submarine" [1]; this was followed by further developments including the RFS-1 [2] and the Convair project in 1964 [3]. However, to date, limited effort has been directed towards the advanced development of such aircraft. This is heavily influenced by the challenges associated with their design and operation. Based on a review of the literature, the authors aim to introduce a theoretical design for an aerosub (QFS-20) with a view to addressing the design and operation issues, including power and the entry to and exit from water.
Effects of Direct-Acting Antiviral Agents on the Mental Health of Patients with Chronic Hepatitis C: A Prospective Observational Study
Michele Fabrazzo, Rosa Zampino, Martina Vitrone, Gaia Sampogna, Lucia Del Gaudio, Daniela Nunziata, Salvatore Agnese, Anna Santagata, Emanuele Durante-Mangoni, Andrea Fiorillo
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: Chronic hepatitis C; Direct-acting antiviral agents; Hepatitis C virus; Consultation-liaison psychiatry; Depression; Anxiety
In chronic hepatitis C (CHC) patients, interferon-based treatments showed toxicity, limited efficacy, and psychiatric manifestations. Direct-acting antiviral (DAA) agents appear safer, though it remains unclear whether they may exacerbate or foster mood symptoms in drug-naïve CHC patients. We evaluated the mental status of 62 CHC patients, before and 12 weeks after DAA therapy, by assessment scales and psychometric instruments. We subdivided patients into two groups, CHC patients with (Group A) or without (Group B) a current and/or past psychiatric history. After DAA treatment, Group A patients showed low anxiety and improved depression, no variation in self-reported distress, but worse general health perceptions. No significant difference emerged in coping strategies. Depression and anxiety improved in Group B, and no change emerged in total self-reported distress, except for somatization. Moreover, Group B increased problem-focused strategies for suppression of competing activities, and decreased strategies of instrumental social support. In contrast, Group B significantly reduced emotion-focused strategies, such as acceptance and mental disengagement, and improved in vitality, physical and social role functioning. DAA therapy is safe and free of hepatological and psychiatric side effects in CHC patients, regardless of current and/or past psychiatric history. In particular, patients without a psychiatric history also remarkably improved their quality of life.
Preprint CASE REPORT | doi:10.20944/preprints202004.0443.v1
The clinical outcome of concurrent speech therapy and transcranial direct current stimulation in dysarthria and palilalia following traumatic brain injury: A case study
Masumeh Bayat, Mahshid Tahamtan, Malihe Sabeti, Mohammad Nami
Subject: Keywords: traumatic brain injury (TBI); Dysarthria; transcranial direct current stimulation (tDCS); Quantitative Electroencephalography (QEEG); speech therapy
Purpose: Dysarthria, a neurological injury of the motor component of the speech circuitry, is a common consequence of traumatic brain injury (TBI). Palilalia is a speech disorder characterized by involuntary repetition of words, phrases, or sentences. Based on the evidence supporting the effectiveness of transcranial direct current stimulation (tDCS) in some speech and language disorders, we hypothesized that using tDCS would enhance the effectiveness of speech therapy in a client with chronic dysarthria following TBI. Method: We applied the constructs of the "Be Clear" protocol, a relatively new approach to speech therapy in dysarthria, together with tDCS in a chronic subject affected by dysarthria and palilalia after TBI. Since there was no research on the use of tDCS in such cases, regions of interest (ROIs) were identified based on deviant brain electrophysiological patterns in speech tasks and the resting state compared with normally expected patterns, using quantitative electroencephalography (QEEG) analysis. Results: Measures of perceptual assessment of intelligibility, an important index in the assessment of dysarthria, were superior to the primary protocol results immediately and 4 months after the intervention. We did not find any factor other than the use of tDCS to justify this superiority. The percentage of repeated words, an index in palilalia assessment, showed remarkable improvement immediately after the intervention but fell somewhat after 4 months. We attribute this to the subcortical origins of palilalia. Conclusion: Our present case-based findings suggest that applying tDCS together with speech therapy may improve intelligibility in similar case profiles as compared to traditional speech therapy. To reconfirm the effectiveness of the above approach in cases with dysarthria following TBI, more investigation needs to be pursued.
Ultrasensitive SERS-Based Plasmonic Sensor with Analyte Enrichment System Produced by Direct Laser Writing
George Pavliuk, Dmitrii Pavlov, Eugeny Mitsai, Oleg Vitrik, Aleksandr Mironenko, Alexander Zakharenko, Sergei A. Kulinich, Saulius Juodkazis, Svetlana Bratskaya, Alexey Zhizhchenko, Aleksandr Kuchmizhak
Subject: Physical Sciences, Optics Keywords: direct laser processing; femtosecond laser pulses; superhydrophobic textures; analyte enrichment; plasmonic nanostructures; SERS; medical drugs
We report an easy-to-implement device for SERS-based detection of various analytes dissolved in water droplets at trace concentrations. The device combines an analyte-enrichment system and a SERS-active sensor site, both produced via inexpensive and high-performance direct fs-laser printing. Fabricated on the surface of a water-repellent polytetrafluoroethylene substrate as an arrangement of micropillars, the analyte-enrichment system supports the evaporating water droplet in the Cassie-Baxter superhydrophobic state, thus ensuring delivery of the dissolved analyte molecules towards the hydrophilic SERS-active site. The efficient pre-concentration of the analyte onto the sensor site, based on densely arranged spiky plasmonic nanotextures, results in its subsequent label-free identification by means of SERS spectroscopy. Using the proposed device, we demonstrate reliable SERS-based fingerprinting of various analytes, including common organic dyes and medical drugs, at ppb concentrations. The proposed device is believed to find applications in various areas, including label-free environmental monitoring, medical diagnostics, and forensics.
Genotype Fingerprints Enable Fast and Private Comparison of Genetic Testing Results for Research and Direct-to-Consumer Applications
Max Robinson, Gustavo Glusman
Subject: Life Sciences, Genetics Keywords: computational genomics; genome comparison; algorithms; genetic testing; privacy; direct-to-consumer; study design; population genetics
Genetic testing has expanded out of the research laboratory into medical practice and the direct-to-consumer market, and rapid analysis of the resulting genotype data can now have significant impact. We present a method for summarizing personal genotypes as 'genotype fingerprints' that meets these needs. Genotype fingerprints can be derived from any single nucleotide polymorphism (SNP)-based assay, and remain comparable as chip designs evolve to higher marker densities. We demonstrate that they support distinguishing types of relationships among closely related individuals, distinguishing closely related individuals from individuals of the same background population, as well as high-throughput identification of identical genotypes, identification of individuals in known background populations, and de novo separation of subpopulations within a large cohort through extremely rapid comparisons. While fingerprints do not preserve anonymity, they provide a useful degree of privacy by summarizing a genotype in a way that prevents reconstruction of individual marker states. Genotype fingerprints are therefore well suited as a format for public aggregation of genetic information to support ancestry and relatedness determination without revealing personal health risk status.
Micro-Dumbbells—A Versatile Tool for Optical Tweezers
Weronika Lamperska, Sławomir Drobczyński, Michał Nawrot, Piotr Wasylczyk, Jan Masajada
Subject: Physical Sciences, Optics Keywords: optical tweezers; optical trapping; viscosity; direct laser writing; 3D lithography; two-photon polymerization; micro-tool
Manipulation of micro- and nano-sized objects with optical tweezers is a well-established, albeit still evolving, technique. While many objects can be trapped directly with focused laser beam(s), for some applications indirect manipulation with tweezers-operated tools is preferred. We introduce a simple, versatile micro-tool operated with holographic optical tweezers. The 40 µm long dumbbell-shaped tool, fabricated by two-photon laser 3D photolithography, has two beads for efficient optical trapping and a probing spike on one end. We demonstrate fluid viscosity measurements and vibration detection as examples of possible applications.
Kinetic Theory and Memory Effects of Homogeneous Inelastic Granular Gases Under Nonlinear Drag
Andrés Santos, Alberto Megías
Subject: Physical Sciences, Fluids & Plasmas Keywords: granular gases; kinetic theory; Enskog--Fokker--Planck equation; direct simulation Monte Carlo; event-driven molecular dynamics
We study in this work a dilute granular gas immersed in a thermal bath made of smaller particles with nonnegligible masses as compared with the granular ones. The kinetic theory for this system is developed and described by an Enskog--Fokker--Planck equation for the one-particle velocity distribution function. Granular particles are assumed to have inelastic and hard interactions, losing energy in collisions as accounted by a constant coefficient of normal restitution. The interaction with the thermal bath is based on a nonlinear drag force plus a white-noise stochastic force. To get explicit results of the temperature aging and steady states, Maxwellian and first Sonine approximations are developed. The latter takes into account the coupling of the excess kurtosis with the temperature. Theoretical predictions are compared with direct simulation Monte Carlo and event-driven molecular dynamics simulations. While good results for the granular temperature are obtained from the Maxwellian approximation, a much better agreement, especially as inelasticity and drag nonlinearity increase, is found when using the first Sonine approximation. The latter approximation is, additionally, crucial to account for memory effects like Mpemba and Kovacs-like ones.
Fast Detection of SARS-CoV-2 RNA Directly from Respiratory Samples Using a Loop-Mediated Isothermal Amplification (LAMP) Test
Olympia E. Anastasiou, Caroline Holtkamp, Miriam Schäfer, Frieda Schön, Anna Maria Eis-Hübinger, Andi Krumbholz
Subject: Medicine & Pharmacology, Allergology Keywords: SARS-CoV-2; COVID-19; RT-PCR; nucleic acids; direct testing; loop-mediated isothermal amplification; LAMP
The availability of simple SARS-CoV-2 detection methods is crucial to contain the COVID-19 pandemic. This study examined whether a commercial LAMP assay can reliably detect SARS-CoV-2 genomes directly in respiratory samples without having to extract nucleic acids (NA) beforehand. Nasopharyngeal swabs (NPS, n = 220) were tested by real-time reverse transcription (RT)-PCR and with the LAMP assay. For RT-PCR, NA were investigated. For LAMP, NA from 26 NPS in viral transport medium (VTM) were tested. The other 194 NPS were analyzed directly without prior NA extraction [140 samples in VTM; 54 dry swab samples stirred in phosphate buffered saline]. Ten NPS were tested directly by LAMP using a sous-vide cooking unit. The isothermal assay demonstrated excellent specificity (100%) but moderate sensitivity (68.8%), with a positive predictive value of 1 and a negative predictive value of 0.65 for direct testing of NPS in VTM. The use of dry swabs, even without NA extraction, improved the analytical sensitivity; up to 6% of samples showed signs of inhibition. The LAMP could be performed successfully with a sous-vide cooking unit. This technique is very fast, requires little laboratory resources and can replace rapid antigen tests or verify reactive rapid tests on site.
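As a quick aside on how the reported predictive values relate to sensitivity and specificity, the sketch below applies Bayes' rule for an assumed fraction of positive samples in the tested cohort; the prevalence values used here are illustrative and are not taken from the study.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' rule for PPV/NPV given the fraction of true positives in the tested cohort."""
    tp = sensitivity * prevalence
    fn = (1.0 - sensitivity) * prevalence
    tn = specificity * (1.0 - prevalence)
    fp = (1.0 - specificity) * (1.0 - prevalence)
    ppv = tp / (tp + fp) if (tp + fp) > 0 else float("nan")
    npv = tn / (tn + fn) if (tn + fn) > 0 else float("nan")
    return ppv, npv

for prevalence in (0.2, 0.5, 0.63):
    ppv, npv = predictive_values(sensitivity=0.688, specificity=1.0, prevalence=prevalence)
    print(f"prevalence={prevalence:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")

# With 100% specificity the PPV is 1 regardless of prevalence; an NPV near 0.65
# is reached when roughly 60-65% of the tested samples are truly positive.
```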
Comparison of Typical Controllers for Direct Yaw Moment Control Applied on an Electric Race Car
Andoni Medina, Angel Rubio, Guillermo Bistue
Subject: Engineering, Automotive Engineering Keywords: Direct Yaw Moment Control; Electric Race Car; FSAE; Limit Handling; Yaw Rate Control; Lap Time Simulation
Direct yaw moment control (DYC) is an effective way to alter the behaviour of electric cars with independent drives. Controlling the torque applied to each wheel can improve the handling performance of a vehicle, making it safer and faster on a race track. The state-of-the-art literature covers the comparison of various controllers (PID, LPV, LQR, SMC, etc.) using ISO manoeuvres. However, a more advanced comparison of important characteristics of controller performance is missing, such as the robustness of the controllers under changes in the vehicle model, steering behaviour, use of the friction circle and, ultimately, lap time on a track. In this study, we have compared the controllers according to some of the aforementioned parameters on a modelled race car. Interestingly, the best lap times are not provided by perfectly neutral or close-to-neutral behaviour of the vehicle, but rather by allowing certain deviations from the target yaw rate. In addition, a modified PID controller showed performance comparable to other, more complex control techniques such as MPC.
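For orientation, the simplest of the compared controller families can be sketched in a few lines: a discrete PID acting on the yaw-rate error and returning a saturated corrective yaw moment. The gains, saturation limit and sample time below are hypothetical and are not the tuned values or vehicle model used in the study.

```python
class PIDYawMomentController:
    """Discrete PID on the yaw-rate error; output is a corrective yaw moment (N*m)
    to be split between the left and right drives (hypothetical gains)."""

    def __init__(self, kp, ki, kd, dt, m_max):
        self.kp, self.ki, self.kd, self.dt, self.m_max = kp, ki, kd, dt, m_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, yaw_rate_target, yaw_rate_measured):
        error = yaw_rate_target - yaw_rate_measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        m = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.m_max, min(self.m_max, m))   # saturate to the assumed friction limit

# Example step: track a 0.5 rad/s yaw-rate target from a measured 0.3 rad/s at 100 Hz.
ctrl = PIDYawMomentController(kp=4000.0, ki=500.0, kd=50.0, dt=0.01, m_max=1500.0)
print(ctrl.update(0.5, 0.3))
```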
A Survey of Power Take-off Systems of Ocean Wave Energy Converters
Zhenwei Liu, Ran Zhang, Han Xiao, Xu Wang
Subject: Engineering, Energy & Fuel Technology Keywords: power take-off; wave energy converters; direct-drive; indirect drive; linear generator; hydrodynamics; energy harvesting efficiency
Ocean wave energy conversion, as one of the renewable clean energy sources, is attracting the research interest of many groups. This review introduces different types of power take-off technology for wave energy converters. The main focus is on linear direct-drive power take-off devices, as they have advantages for ocean wave energy conversion. The designs and optimizations of power take-off systems of ocean wave energy converters have been studied by reviewing the recently published literature. Also, the simple hydrodynamics of wave energy converters have been reviewed for design optimization of the wave energy converters at specific wave sites. The novel mechanical designs of the power take-off systems have been compared and investigated in order to increase the energy harvesting efficiency.
Detailed Modeling of the Direct Reduction of Iron Ore in a Shaft Furnace
Hamzeh Hamadeh, Olivier Mirgaux, Fabrice Patisson
Subject: Materials Science, Metallurgy Keywords: ironmaking; direct reduction; iron ore; DRI; shaft furnace; mathematical model; heterogeneous kinetics; heat and mass transfer
This paper addresses the modeling of the iron ore direct reduction process, a process likely to reduce CO2 emissions from the steel industry. The shaft furnace is divided into three sections (reduction, transition, and cooling), and the model is two-dimensional (cylindrical geometry for the upper sections and conical geometry for the lower one), to correctly describe the lateral gas feed and cooling gas outlet. This model relies on a detailed description of the main physical–chemical and thermal phenomena, using a multi-scale approach. The moving bed is assumed to be comprised of pellets of grains and crystallites. We also take into account eight heterogeneous and two homogeneous chemical reactions. The local mass, energy, and momentum balances are numerically solved, using the finite volume method. This model was successfully validated by simulating the shaft furnaces of two direct reduction plants of different capacities. The calculated results reveal the detailed interior behavior of the shaft furnace operation. Eight different zones can be distinguished, according to their predominant thermal and reaction characteristics. An important finding is the presence of a central zone of lesser temperature and conversion.
Convection Enhanced Delivery in the Setting of High-Grade Gliomas
Chibueze D. Nwagwu, Amanda V. Immidisetti, Michael Jiang, Oluwasegun Adeagbo, David Cory Adamson, Anne-Marie Carbonell
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: glioblastoma; high-grade glioma; refractory glioma; direct delivery; convection enhanced delivery; neuro-oncology; refractory glioblastoma; clinical trials
Development of effective treatments for high-grade glioma (HGG) is hampered by 1) the blood-brain barrier (BBB), 2) an infiltrative growth pattern, 3) rapid development of therapeutic resistance, and, in many cases, 4) dose-limiting toxicity due to systemic exposure. Convection-enhanced delivery (CED) has the potential to significantly limit systemic toxicity and increase therapeutic index by directly delivering homogenous drug concentrations to the site of disease. In this review, we present clinical experiences and preclinical developments of CED in the setting of high-grade gliomas.
Overview of HVDC Technology
Neville Watson
Subject: Engineering, Electrical & Electronic Engineering Keywords: High Voltage Direct Current transmission (HVDC); Multi-terminal HVDC; Current Source Converter (CSC); Voltage Source Converter (VSC)
There is a growing use of High Voltage Direct Current (HVDC) transmission globally due to the many advantages of Direct Current (DC) transmission systems over Alternating Current (AC) transmission, including enabling transmission over long distances and higher transmission capacity and efficiency. Moreover, HVDC systems can be a great enabler in the transition to a low-carbon electrical power system, which is an important objective in today's society. The objectives of the paper are to give a comprehensive overview of HVDC technology, its development and present status, and to discuss its salient features, limitations and applications.
A General Theorem on the Stability of a Class of Functional Equations including Quartic-Cubic-Quadratic-Additive Equations
Yang-Hi Lee, Soon-Mo Jung
Subject: Mathematics & Computer Science, Analysis Keywords: generalized Hyers-Ulam stability; functional equation; $n$-dimensional quartic-cubic-quadratic-additive type functional equation; direct method
We prove general stability theorems for $n$-dimensional quartic-cubic-quadratic-additive type functional equations of the form \begin{eqnarray*} \sum_{i=1}^\ell c_i f \big( a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n \big) = 0 \end{eqnarray*} by applying the direct method. These stability theorems spare us the trouble of repeatedly proving the stability of the relevant solutions appearing in the stability problems for various functional equations.
Hybrid Sequencing Reveals Novel Features in the Transcriptomic Organization of Equid Alphaherpesvirus 1
Dóra Tombácz, Gábor Torma, Gábor Gulyás, Ádám Fülöp, Ákos Dörmő, István Prazsák, Zsolt Csabai, Máté Mizik, Ákos Hornyák, Zádori Zoltán, Balázs Kakuk, Zsolt Boldogkői
Subject: Biology, Other Keywords: Equid alphaherpesvirus 1; EHV-1; transcriptome; replication origin; long-read sequencing; nanopore sequencing; direct RNA sequencing; Illumina sequencing
In this study, a structural profiling of the equid alphaherpesvirus 1 (EHV-1) transcriptome was carried out using next-generation (Illumina) and third-generation (Oxford Nanopore Technologies) sequencing platforms. We annotated the canonical mRNA molecules and their isoforms, including transcript start and end site isoforms and splice variants. Additionally, a number of putative 5′-truncated mRNAs containing shorter in-frame ORFs were detected. We also demonstrate that EHV-1 produces a high number of non-coding transcripts, including antisense and intergenic RNAs. One of the most remarkable features of EHV-1 is the generation of abundant fusion transcripts, some of which encode chimeric polypeptides. We observed a higher number of splicing events and transcriptional overlaps than in related viruses. Additionally, we found that many upstream genes of tandem gene clusters have their own transcript end sites (TESs) besides the co-terminal TESs, which is rare in other alphaherpesviruses. We show here that the replication origins (OriS and OriL) of the virus are co-localized with promoter sequences and overlap with specific RNA molecules. Furthermore, we discovered a novel non-coding RNA (designated NOIR) that overlaps the 5′-ends of the longer transcript variants encoded by the two main transactivator genes, ORF64 and ORF65, bracketing the OriL. Altogether, these findings suggest the existence of a central regulatory system that controls genome-wide transcription and replication through a mechanism based on the interference between the machineries carrying out the synthesis of DNA and RNA.
Crowding in or crowding out? Public Investment and Private Investment in South Africa: An ECM Approach
Dumisani Pamba
Subject: Social Sciences, Economics Keywords: Credit to Private Sector; Foreign Direct Investment; Government Consumption Expenditure; Public Investment; Error Correction Model and South Africa
This study explores the link between public investment and private investment in South Africa, using time series data spanning 40 years (1980-2020). Private investment is subdivided into credit to the private sector (CPS) and foreign direct investment (FDI). Several econometric methodologies were used in the study, including the unit root test, the cointegration test, and the Error Correction Model (ECM). The Phillips-Perron (PP) test results indicate that all the variables are stationary at levels, with the exception of public investment (PI), which is stationary at first difference. The cointegration test reveals that the variables have a long-run equilibrium relationship. According to the findings of the ECM, public investment has a negative relationship with private investment (as measured by credit to the private sector and foreign direct investment). The conclusion implies that in South Africa, public investment crowds out private investment. Other results reveal that RGDP crowds in credit to the private sector while crowding out foreign direct investment. Finally, the ECM findings show that government consumption expenditure crowds out both credit to the private sector and foreign direct investment. According to the adequacy tests, the residuals are homoskedastic and show no serial correlation, indicating that the model is adequate.
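To make the two-step structure of an error correction model concrete, the sketch below runs an Engle-Granger-style ECM on synthetic, cointegrated series with statsmodels. The variable names echo the study (public investment and credit to the private sector), but the data, lag structure and specification are invented for illustration and do not reproduce the paper's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 160  # synthetic sample length

# Synthetic cointegrated pair: public investment (pi) is I(1); private credit (cps)
# tracks it in the long run with stationary deviations (illustrative only).
pi = np.cumsum(rng.normal(0.2, 1.0, T))
cps = 2.0 + 0.8 * pi + rng.normal(0.0, 0.5, T)

# Step 1: long-run (cointegrating) regression and its residuals.
long_run = sm.OLS(cps, sm.add_constant(pi)).fit()
ect = long_run.resid                       # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term.
d_cps, d_pi = np.diff(cps), np.diff(pi)
X = sm.add_constant(np.column_stack([d_pi, ect[:-1]]))
ecm = sm.OLS(d_cps, X).fit()
print(long_run.params)   # long-run intercept and slope
print(ecm.params)        # constant, short-run slope, adjustment coefficient (expected negative)
```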
Vibration Analysis of Axially Functionally Graded Non-Prismatic Timoshenko Beams Using the Finite Difference Method
Valentin Fogang
Subject: Engineering, Civil Engineering Keywords: Axially functionally graded non-prismatic Timoshenko beam; finite difference method; additional points; vibration analysis; direct time integration method
This paper presents an approach to the vibration analysis of axially functionally graded non-prismatic Timoshenko beams (AFGNPTB) using the finite difference method (FDM). The characteristics (cross-sectional area, moment of inertia, elastic moduli, shear moduli, and mass density) of axially functionally graded beams vary along the longitudinal axis. The Timoshenko beam theory covers cases associated with small deflections based on shear deformation and rotary inertia considerations. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. In this paper, differential equations were formulated with finite differences, and additional points were introduced at the beam's ends and at positions of discontinuity (supports, hinges, springs, concentrated mass, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. Vibration analysis of AFGNPTB was conducted with this model, and natural frequencies were determined. Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of AFGNPTB, considering the damping. The results obtained in this study showed good agreement with those of other studies, and the accuracy was always increased through a grid refinement.
Power Control of Direct Interconnection Technique for Airborne Wind Energy Systems
Mahdi Ebrahimi Salari, Joseph Coleman, Daniel Toal
Subject: Engineering, Electrical & Electronic Engineering Keywords: Airborne wind energy; Direct interconnection technique; Load sharing control; Active power; Reactive power exchange; Non-reversing pumping mode
In this paper, an offshore airborne wind energy (AWE) farm consisting of three non-reversing pumping-mode AWE systems is modelled and simulated. The AWE systems employ permanent magnet synchronous generators (PMSG). A direct interconnection technique is developed and implemented for the AWE systems. This method is a new approach devised for interconnecting offshore wind turbines with the least number of required offshore-based power electronic converters. The direct interconnection technique can be beneficial in improving the economy and reliability of marine airborne wind energy systems. The performance and interactions of the directly interconnected generators inside the energy farm's internal power grid are investigated. The results of the study show that the directly interconnected AWE systems can exhibit poor load balance and significant reactive power exchange, which must be addressed. Power control strategies for controlling the active and reactive power of the AWE farm are designed and implemented, and promising results are discussed in this paper.
Vibration Analysis of Axially Functionally Graded Non-Prismatic Euler-Bernoulli Beams Using the Finite Difference Method
Subject: Engineering, Civil Engineering Keywords: Axially functionally graded non-prismatic Euler-Bernoulli beam; finite difference method; additional points; vibration analysis; direct time integration method
This paper presents an approach to the vibration analysis of axially functionally graded (AFG) non-prismatic Euler-Bernoulli beams using the finite difference method (FDM). The characteristics (cross-sectional area, moment of inertia, elastic moduli, and mass density) of AFG beams vary along the longitudinal axis. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. In this paper, differential equations were formulated with finite differences, and additional points were introduced at the beam's ends and at positions of discontinuity (supports, hinges, springs, concentrated mass, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. Vibration analysis of AFG non-prismatic Euler-Bernoulli beams was conducted with this model, and natural frequencies were determined. Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of AFG non-prismatic Euler-Bernoulli beams, considering the damping. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was always increased through a grid refinement.
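As a simplified illustration of the finite difference setup for beam vibration (without the axial grading, non-prismatic geometry, or additional discontinuity points treated in these papers), the sketch below assembles the standard fourth-order central-difference matrix for a uniform, simply supported Euler-Bernoulli beam and compares the first three natural frequencies with the exact values. All material and geometric values are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

# Uniform, simply supported Euler-Bernoulli beam (illustrative values).
E, I, rho, A, L = 210e9, 8.0e-6, 7850.0, 0.01, 4.0
N = 200                       # number of intervals
h = L / N

# EI * w'''' = rho*A*omega^2 * w.  Unknowns: w_1..w_{N-1}; w = 0 and w'' = 0 at
# both ends, so the fictitious points satisfy w_{-1} = -w_1 and w_{N+1} = -w_{N-1}.
n = N - 1
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 6.0
    if i >= 1: K[i, i - 1] = -4.0
    if i >= 2: K[i, i - 2] = 1.0
    if i <= n - 2: K[i, i + 1] = -4.0
    if i <= n - 3: K[i, i + 2] = 1.0
K[0, 0] = K[-1, -1] = 5.0     # boundary correction from the fictitious points
K *= E * I / h**4

eigvals = eigh(K, eigvals_only=True)                    # ascending eigenvalues
omega_fdm = np.sqrt(eigvals[:3] / (rho * A))            # first three natural frequencies (rad/s)
omega_exact = (np.arange(1, 4) * np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))
print(omega_fdm)
print(omega_exact)
```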
Efficacy of Direct Current Stimulation on Experimental Sensory Modalities and Pain Outcome Measures in Healthy Participants: Systematic Review and Meta-Analysis.
Guillermo García-Barajas, Julian Taylor, Juan Pablo Romero Muñoz, Sergio Lerma Lara, Alfonso Gil-Martínez, Josué Fernandez Carnero
Subject: Medicine & Pharmacology, Allergology Keywords: Non-invasive direct current stimulation; Cortical, Suboccipital and Spinal stimulation; Quantitative sensory testing, Pain outcome measures, Endogenous pain modulation.
Background and objectives: The objective of this study was to compare the efficacy of direct current stimulation (DCS) applied at the transcranial, suboccipital and spinal levels on experimental sensory modalities and pain outcome measures in healthy subjects. The hypothesis was that a systematic analysis of the efficacy of DCS in modulating evoked thermal and mechanical pain modalities and mechanisms such as endogenous pain modulation in healthy individuals would reveal sensitive outcome measures that help develop this technique for the control of chronic pain. Materials and methods: Database searches were conducted up to December 2019 for randomized controlled trials that performed sham-controlled DCS of experimental sensory modalities and pain outcomes at transcranial, suboccipital and spinal locations in healthy participants. Standardized mean differences (SMDs) with 95% confidence intervals were calculated for sensory modalities, including random-effects meta-analysis. Results: Thirty-one studies were included for analysis (647 participants). For active vs. sham transcranial stimulation, a significant decrease in pain intensity was identified (n = 158; SMD = 0.79; 95% CI = 0.56 to 1.02), together with a significant increase in heat pain threshold (n = 222; SMD = 1.16; 95% CI = 0.95 to 1.37) and a significant increase in cold pain threshold (n = 155; SMD = 0.77; 95% CI = 0.53 to 1.01). No significant modulation of pressure pain threshold was identified with DCS, and only a limited number of studies focused on experimental pain modulation following neuromodulation at the suboccipital or spinal level. Conclusions: These results show significant transcranial DCS neuromodulation of pain intensity and of thermal pain modalities. Future studies should focus on endogenous pain and sensory modality modulation with sham-controlled DCS applied at transcranial, suboccipital and spinal locations.
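For readers unfamiliar with how pooled SMDs of the kind reported above are obtained, the sketch below performs inverse-variance fixed-effect pooling and DerSimonian-Laird random-effects pooling on made-up per-study effect sizes; the numbers are not the review's data.

```python
import numpy as np

# Hypothetical per-study standardized mean differences and standard errors.
d = np.array([0.9, 0.6, 1.1, 0.7, 0.8])
se = np.array([0.25, 0.30, 0.35, 0.20, 0.28])

w = 1.0 / se**2                              # inverse-variance (fixed-effect) weights
d_fixed = np.sum(w * d) / np.sum(w)
se_fixed = np.sqrt(1.0 / np.sum(w))

# DerSimonian-Laird between-study variance tau^2 for the random-effects model.
k = d.size
Q = np.sum(w * (d - d_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

w_re = 1.0 / (se**2 + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

for label, est, s in [("fixed", d_fixed, se_fixed), ("random", d_re, se_re)]:
    print(f"{label}: SMD={est:.2f}, 95% CI=({est - 1.96*s:.2f}, {est + 1.96*s:.2f})")
```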
Control Upstream Austenite Grain Coarsening during Thin Slab Casting Direct Rolling (TSCDR) Process
Tihe Zhou, Ronald J. O'Malley, Hatem S. Zurob, Mani Subramanian, Sang-Hyun Cho, Peng Zhang
Subject: Materials Science, Metallurgy Keywords: thin slab casting direct rolling, austenite grain coarsening, grain growth control, liquid core reduction, secondary cooling, two phase pinning
Thin Slab Casting and Direct Rolling (TSCDR) has become a major process for flat-rolled production. However, the elimination of slab reheating and the limited number of thermomechanical deformation passes leave fewer opportunities for austenite grain refinement, so that some large grains persist in the final microstructure. In order to achieve excellent ductile-to-brittle transition temperature (DBTT) and drop weight tear test (DWTT) properties in thicker-gauge high-strength low-alloy products, it is necessary to control austenite grain coarsening prior to the onset of thermomechanical processing. This contribution proposes a suite of methods to refine the austenite grains from both theoretical and practical perspectives, including: increasing the cooling rate during casting, liquid core reduction, increasing the number of austenite nucleation sites during the delta ferrite to austenite phase transformation, controlling the holding furnace temperature and time to avoid austenite coarsening, and producing a new alloy with two-phase pinning to arrest grain coarsening. These methodologies can not only refine the austenite grain size in the slab center, but also improve slab homogeneity.
Processes of Direct Laser Writing 3D Nano-Lithography
Simonas Varapnickas, Mangirdas Malinauskas
Subject: Physical Sciences, Applied Physics Keywords: 3D nano-lithography, 3D laser lithography, direct laser writing, nanopolymerization, cross-linking, multi-photon absorption, avalanche ionization, temperature effects
Direct laser writing three-dimensional nano-lithography is an established technique for manufacturing functional 3D micro- and nano-objects via a nonlinear-absorption-induced polymerization process. In this chapter, the underlying physical mechanisms taking place during the nano-confined polymerization reaction, induced by tightly focused ultra-short laser pulses, are reviewed and discussed. Special attention is paid to the effects that directly impact structuring resolution and the minimum achievable feature size. An analysis of possible photo-initiation mechanisms, namely the contributions of multi-photon absorption and avalanche ionization in pre-polymers under diverse exposure conditions (wavelength, pulse duration), is presented. Feasible structuring of pure (non-photosensitized) and functional nanoparticle-doped polymer precursors is justified, and the benefits of such materials/structures for micro-optics, photonics and cell scaffolds are highlighted. The influence of temperature effects (induced by the writing process itself or determined by ambient conditions) on the polymerization process, observed in different pre-polymers under diverse exposure regimes, is outlined. Further adjustment of the structuring resolution is possible via precise control of light polarization and diffusion-assisted radical quenching. The work concludes with a brief outlook on future challenges and perspectives related to refining the 3D ultrafast laser lithography fabrication process by means of diverse post-processing methods and research into novel photo-curable materials, including inorganic ones.
Mesoscale Laser 3D Printing
Linas Jonušauskas, Darius Gailevičius, Sima Rekštytė, Tommaso Baldacchini, Saulius Juodkazis, Mangirdas Malinauskas
Subject: Physical Sciences, Applied Physics Keywords: direct laser writing, multiphoton processing, laser 3D nanolithography, optical 3D printing, microstructures, nanotechnology, mesoscale, two-photon polymerization, microoptics, SZ2080
3D meso-scale structures that can reach up to centimeters in overall size but retain micro- or nano-features have proved promising in various science fields, ranging from micro-mechanical metamaterials to photonics and bio-medical scaffolds. In this work we present the synchronization of linear and galvano scanners for efficient femtosecond 3D optical printing of objects at the meso-scale (from sub-μm to sub-cm, spanning five orders of magnitude). In this configuration the linear stages provide stitch-free structuring over a nearly limitless (up to tens-of-cm) working area, while the galvo-scanners achieve translation velocities in the mm/s-cm/s range without sacrificing nano-scale positioning accuracy or distorting the shape of the final print. The principle behind this approach is demonstrated, proving its inherent advantages in comparison to the separate use of only linear stages or scanners. The printing rate is calculated in terms of voxels/s, showcasing the capability to maintain an optimal feature size while increasing throughput. The full capabilities of this approach are demonstrated by fabricating structures that reach millimeters in size but still retain μm-scale features: scaffolds for cell growth, microlenses and photonic crystals. All of this is combined into a benchmark structure: a meso-butterfly. The provided results show that the synchronization of the two scan modes is crucial for the end goal of industrial-scale implementation of this technology and makes laser printing well aligned with similar approaches in nanofabrication by electron and ion beams.
Induction Generator with Direct Control and a Limited Number of Measurements on the Side of the Converter Connected to the Power Grid
Andrzej Kasprowicz, Ryszard Strzelecki
Subject: Engineering, Electrical & Electronic Engineering Keywords: induction generator; Direct Field Oriented Control (DFOC), three-level inverter; sinusoidal pulse width modulator (SPWM), maximum power point tracking (MPPT)
The article presents an induction generator connected to the power grid using an AC/DC/AC converter and an LCL coupling filter. In the converter, three-level inverters were used both on the generator side and on the power-grid side. The algorithm realizing Pulse Width Modulation (PWM) in the inverters has been simplified as much as possible. Control of the induction generator was based on the Direct Field Oriented Control (DFOC) method, combined with voltage control. The MPPT algorithm has been extended to include the variable pitch range of the wind turbine blades. An active voltage balancing circuit has been used in the inverter DC voltage circuit. In the control system of the grid converter with an LCL filter, the number of measurements was limited to the measurement of power-grid currents and voltages. Synchronization of the control on the power-grid side is ensured by the use of a PLL loop with a system of preliminary suppression of undesired harmonics (CDSC).
Antivirals against the Chikungunya Virus
Verena Battisti, Ernst Urban, Thierry Langer
Subject: Medicine & Pharmacology, Other Keywords: Chikungunya virus; alphavirus; antiviral therapy; direct-acting antivirals; host-directed antivirals; in silico screening; in vivo validation; antiviral drug development
Chikungunya virus (CHIKV) is a mosquito-transmitted alphavirus that has re-emerged in recent decades, causing large-scale epidemics in many parts of the world. CHIKV infection leads to a febrile disease known as chikungunya fever (CHIKF), which is characterised by severe joint pain and myalgia. As many patients develop a painful chronic stage and neither antiviral drugs nor vaccines are available, the development of a potent CHIKV-inhibiting drug is crucial for CHIKF treatment. A comprehensive summary of current antiviral research and the development of small-molecule inhibitors of CHIKV is presented in this review. We highlight the different approaches used for the identification of such compounds and further discuss the identification and application of promising viral and host targets.
Does Poverty deter Foreign Direct Investment flows to Developing Countries?
Sena Kimm Gnangnon
Subject: Social Sciences, Accounting Keywords: Poverty; Foreign direct investment inflows; Human capital; Trade openness; Export product diversification; Economic growth; Labour productivity; Financial development; Infrastructure development.
The present paper investigates the effect of poverty on foreign direct investment (FDI) inflows in developing countries. It complements the important extant literature on the effect of FDI inflows on poverty by examining the issue the other way around. The analysis is conducted using a sample of 117 countries over the period 1980-2017, and the two-step system Generalized Method of Moments (GMM) technique. It relies on two indicators of poverty, namely the poverty headcount ratio and the poverty gap. Findings indicate that over the full sample, poverty negatively influences FDI inflows, including through its adverse effect on human capital (that is, both education and health). Unsurprisingly, low-income countries (considered as the poorest countries in the full sample) experience a higher negative effect of poverty on FDI inflows than other countries. On another note, participation in international trade matters for the effect of poverty on FDI inflows. In fact, an increase in poverty levels results in lower FDI inflows in countries that experience low workers' productivity, a less developed financial sector, and a low level of infrastructure development. Furthermore, the effect of poverty on FDI inflows does not depend on the prevailing economic growth rate. Finally, the analysis has revealed the existence of a non-linear effect of poverty on FDI inflows for the poverty headcount indicator, but not for the poverty gap indicator. The non-linear effect of the poverty headcount on FDI inflows is such that a rise in the poverty headcount ratio results in lower FDI inflows, and an additional increase in poverty discourages FDI inflows even further. The conclusion discusses the implications of these findings.
Measurement Invariance of a Direct Behavior Rating Multi Item Scale across Occasions
Markus Gebhardt, Jeffrey M. DeVries, Jana Jungjohann, Gino Casale, Andreas Gegenfurtner, Jörg-Tobias Kuhn
Subject: Social Sciences, Education Studies Keywords: Direct Behavior Rating; Test; Sensitivity over time; Rating; School; Classroom Behavior; Progress Monitoring
Direct Behavior Rating (DBR) as a behavioral progress monitoring tool can be designed as longitudinal assessment with only short intervals between measurement points. The reliability of these instruments has been evaluated mostly in observational studies with small samples based on generalizability theory. However, for standardized use in the pedagogical field, a larger and broader sample is required in order to assess measurement invariance between different participant groups and over time. Therefore, we constructed a DBR with multiple items to measure the occurrence of specific externalizing and internalizing student classroom behaviors on a Likert scale (1 = never to 7 = always). In a pilot study, two trained raters observed 16 primary school students and rated the student behavior over all items with a satisfactory reliability. In the main study, 108 regular primary school students, 97 regular secondary school students and 14 students in a clinical setting were rated daily over one week (five measurement points). IRT analyses confirmed the instrument's technical adequacy, and latent growth models demonstrated the instrument's stability over time. Further development of the instrument and study designs to implement DBRs are discussed.
The Cosmological Perturbed Lightcone Gauge
Maye Elmardi
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: general relativity; past lightcone gauge; direct observational approach; cosmological observables; galaxy surveys; galaxy number count; density contrast; overdensity; cosmological perturbations
The lightcone gauge is a set of what are called the observational coordinates adapted to our past lightcone. We develop this gauge by producing a perturbed spacetime metric that describes the geometry of our past lightcone, where observations are usually obtained. We connect the produced observational metric to the perturbed Friedmann-Lemaître-Robertson-Walker metric in the standard general gauge, or the so-called 1+3 gauge. We derive the relations between these perturbations of spacetime in the observational coordinates and those perturbations in the standard metric approach, as well as the dynamical equations for the perturbations in observational coordinates. We also calculate the observables in the lightcone gauge and re-derive them in terms of Bardeen potentials to first order. The observables in the perturbed lightcone gauge are verified against those in the standard gauge. The advantage of the method developed is that the observable relations are simpler than in the standard formalism, and they are expressed in terms of the metric components, which in principle are measurable. We use the perturbed lightcone gauge in galaxy surveys and the calculations of the galaxy number density contrast. The significance of the new gauge is that, by considering null-like light propagation, the calculations are much simpler, since angular deviations need not be considered.
Novel Quantitative Trait Loci for Weed Competitive Ability Traits Using the Early Generation of Backcross Rice Populations
Niña Dimaano, Jauhar Ali, Mahender Anumalla, Pompe Cruz, Aurora Baltazar, Maria Diaz, Yun Long Pang, Bart Acero, Zhikang Li
Subject: Life Sciences, Genetics Keywords: Weed competitive ability; early seed germination and seedling vigor traits; quantitative trait loci (QTLs); single nucleotide polymorphism; direct seeded rice
Weed competitive ability (WCA) is a desirable key trait for the improvement of grain yield under direct-seeded rice (DSR) and the aerobic rice ecosystem. The present study targeted screening of 167 introgression lines (ILs) of a Green Super Rice (GSR) IR2-6 population derived from a cross between Weed Tolerant Rice 1 (WTR1) as the recipient parent and Y134 as the donor parent developed at IRRI for weed competitiveness in screen house conditions (SHC). The ILs were phenotyped for WCA traits such as early seed germination (ESG) and early seedling vigor (ESV) in Petri dishes and pot experiment conditions. The results of phenotypic variance revealed ESG-related traits, especially first germination count (1st GC) that positively correlated with second germination count (2nd GC), germination percentage (GP), total dry weight (TDW), total fresh weight (TFW), and vigor index (VI-1), whereas, in ESV, all the traits were positively correlated with each other except for three traits: root dry weight (RDW), 1st GC, and GP-2. The ESG and ESV traits are vital for weed competitiveness. A 6K SNP array was used to study the genetic association for the WCA traits. Forty-four QTLs for WCA traits were mapped on all chromosomes (except on chromosomes 4 and 8) through single marker analysis (SMA). Out of 44 QTLs, 29 were associated with ESG traits and 15 with ESV traits, with LOD scores of 2.93 to 8.03 and 2.93 to 5.04 and explained phenotypic variance ranging from 7.85% to 19.9% and from 7.85% to 13.2%, respectively. However, 31 QTLs were contributed by a negative additive allele from Y134, whereas a positive additive allele was contributed by WTR1 in 13 QTLs. Among them, two QTL hotspot regions were mapped on chromosome 11 (24.7-27.9 Mb) and chromosome 12 (14.8-17.4 Mb). The majority of the QTLs related to WCA traits were grouped into two QTL hotspots: QTL hotspot-I (qAFW11.1, qFC11.1, qFC11.2, qSC11.1, qGP-111.1, qGP-111.2, qTFGS11.1, qVI-111.1, and qVI-111.2) and QTL hotspot-II (qFC12.1, qFC12.2, qSC12.1, qFC12.2, qGP-112.1, qGP-112.2, qTFGS12.1, qTFGS12.2, qVI-112.1, qIV12.2, qFC12.1, and qGC12.2), and a few of them were co-localized on chromosomes 11 and 12. Further, we fine-tuned in the genomic regions of QTL hotspots and identified a total of 13 putative candidate genes on chromosomes 11 and 12 collectively. The present study is the first report on the genetic basis of WCA-related traits and the co-localized QTLs, which could be highly valuable in future breeding programs aiming to improve WCA in rice.
Overhanging Features and the SLM/DMLS Residual Stresses Problem: Review and Future Research Need
Albert E. Patterson, Sherri L. Messimer, Phillip A. Farrington
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: additive manufacturing; 3-D printing; metal additive manufacturing; selective laser melting; SLM; direct metal laser sintering; DMLS; metal powder processing
A useful and increasingly common additive manufacturing (AM) process is the selective laser melting (SLM) or direct metal laser sintering (DMLS) process. SLM/DMLS can produce full-density metal parts from difficult materials, but it tends to suffer from severe residual stresses introduced during processing. This limits the usefulness and applicability of the process, particularly in the fabrication of parts with delicate overhanging and protruding features. The purpose of this study was to examine the current insight and progress made toward understanding and eliminating the problem in overhanging and protruding structures. To accomplish this, a survey of literature was undertaken, focusing on process modeling (general, heat transfer, stress and distortion, and material models), direct process control (input and environmental control, hardware-in-the-loop monitoring, parameter optimization, and post-processing), experiment development (methods for evaluation, optical and mechanical process monitoring, imaging, and design-of-experiments), support structure optimization, and overhang feature design; approximately 140 published works were examined. The major findings of this study were that a small minority of the literature on SLM/DMLS deals explicitly with the overhanging stress problem, but some fundamental work has been done on the problem. Implications, needs, and potential future research directions are discussed in-depth in light of the present review.
Timoshenko Beam Theory: First-Order Analysis, Second-Order Analysis, Stability, and Vibration Analysis Using the Finite Difference Method
Subject: Engineering, Civil Engineering Keywords: Timoshenko beam; finite difference method; additional points; element stiffness matrix; tapered beam; second-order analysis; vibration analysis; direct time integration method
This paper presents an approach to the Timoshenko beam theory (TBT) using the finite difference method (FDM). The Timoshenko beam theory covers cases associated with small deflections based on shear deformation and rotary inertia considerations. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. The model developed in this paper consisted of formulating differential equations with finite differences and introducing additional points at the beam's ends and at positions of discontinuity (concentrated loads or moments, supports, hinges, springs, abrupt change of stiffness, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends. Moreover, grid points with variable spacing were considered, the grid being uniform within beam segments. First-order, second-order, and vibration analyses of structures were conducted with this model. Furthermore, tapered beams were analyzed (element stiffness matrix, second-order analysis, vibration analysis). Finally, a direct time integration method (DTIM) was presented; the FDM-based DTIM enabled the analysis of forced vibration of structures, with damping taken into account. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was increased through a grid refinement. Especially in the first-order analysis of uniform beams, the results were exact for uniformly distributed and concentrated loads regardless of the grid.
Effects of tDCS on Sound Duration in Patients with Apraxia of Speech in Primary Progressive Aphasia
Charalambos Themistocleous, Kimberly Webster, Kyrana Tsapkini
Subject: Medicine & Pharmacology, Allergology Keywords: apraxia of speech (AOS); transcranial direct current stimulation (tDCS); primary progressive aphasia (PPA); inferior frontal gyrus (IFG); sound duration; brain stimulation
Transcranial direct current stimulation (tDCS) over the left Inferior Frontal Gyrus (IFG) was found to improve apraxia of speech (AOS) in post-stroke aphasia, speech fluency in adults who stutter, and naming and spelling in primary progressive aphasia (PPA). This paper aims to determine whether tDCS over the left IFG coupled with AOS therapy improves speech fluency in patients with PPA more than sham. Eight patients with non-fluent PPA with AOS symptoms received either active or sham tDCS, along with speech therapy for 15 weekday sessions. Speech therapy consisted of the repetition of words of increasing syllable length. Evaluations took place before, immediately after, and two months post-intervention. Words were segmented into vowels and consonants and the duration of each vowel and consonant was measured. Segmental duration was significantly shorter after tDCS than sham for both consonants and vowels. tDCS gains generalized to untrained words. The effects of tDCS were sustained over two months post-treatment for trained words. Taken together, these results demonstrate that tDCS over the left IFG facilitates speech production by reducing segmental duration. The results provide preliminary evidence that tDCS can maximize the efficacy of speech therapy in non-fluent PPA with AOS.
Optically Clear and Resilient Free-Form µ-Optics 3D-Printed via Ultrafast Laser Lithography
Linas Jonušauskas, Darius Gailevičius, Lina Mikoliūnaitė, Danas Sakalauskas, Simas Šakirzanovas, Saulius Juodkazis, Mangirdas Malinauskas
Subject: Materials Science, Polymers & Plastics Keywords: direct laser writing; ultrafast laser; 3D laser lithography; 3D printing; hybrid polymer; integrated microoptics; optical damage; photonics; pyrolysis; ceramic 3D structures
We introduce optically clear and resilient free-form micro-optics made of pure (non-photosensitized) organic-inorganic SZ2080 material, fabricated by femtosecond 3D laser lithography (3DLL). This is advantageous for rapid printing of 3D micro-/nanooptics, including their integration directly onto optical fibers. A systematic study on the fabrication peculiarities and quality of the resultant structures is performed. The resiliency of the microlenses to CW and femtosecond pulsed exposure is compared. Experimental results prove that pure SZ2080 is ∼3-fold more resistant to high irradiance as compared with a standard photo-sensitized material and can sustain intensities up to 1.91 GW/cm2. 3DLL is a promising manufacturing approach for high-intensity micro-optics for emerging fields such as astro-photonics and atto-second pulse generation. Additionally, pyrolysis is employed to shrink the structures by up to 40% by removing organic SZ2080 constituents. This opens a promising route towards downscaling photonic lattices and the creation of mechanically robust glass-ceramic structures.
A Novel Adaptive Neuro-Fuzzy Based Cascaded PIDF-PIDF Controller for Automatic Generation Control Analysis of Multi-Area Multi-Source Hydrothermal System
Abinands Ramshanker, Ravi K., Jacob I. Raglend, Belwin J. Edward
Subject: Engineering, Electrical & Electronic Engineering Keywords: Automatic generation controls (AGC); Adaptive Neuro-Fuzzy controller; cascaded controller; parallel High voltage direct current (HVDC) tie-lines; Skill Optimization Algorithm (SOA)
This article investigated the Automatic Generation Control (AGC) of multi-area multi-source interconnected systems with hydropower plants, thermal power plants, and wind energy. An Adaptive Neuro-Fuzzy controller integrated with a cascaded proportional-integral-derivative controller with filter (PIDF-PIDF), forming a new cascaded controller (ANF-PIDF-PIDF), is presented as a secondary controller for the applied hybrid power systems. The recent Skill Optimization Algorithm (SOA) is employed to optimize the PIDF-PIDF controller parameter gains and the Adaptive Neuro-Fuzzy controller's input and output scaling factors. SOA is used to update the controller parameters with the integral square error (ISE) employed as the objective function. A 1% step load disturbance was considered simultaneously in all three areas. The controller's performance is evaluated and compared, with and without considering the effects of wind energy sources and non-linearity, for ANF-PIDF-PIDF, PIDF-PIDF, and PIDF, and it was determined that the ANF-PIDF-PIDF was the most efficient. The dynamic system performance is also compared with parallel high voltage direct current (HVDC) tie-lines. The investigation clearly shows that incorporating an HVDC tie-line with the multi-area, multi-source system provides better dynamic performance in maximum amplitude, oscillation, and settling time. Additionally, a sensitivity analysis is done, and the optimum controller gains do not need to be reset for uncertain system loading conditions. All simulation results were evaluated using MATLAB 2016b.
Mapping the Melatonin Suppression, Star Light and Induced Photosynthesis Indices with the LANcube
Martin Aubé, Charles Marseille, Ammar Farkouh, Adam Dufour, Alexandre Simoneau, Jaime Zamorano, Johanne Roby, Carlos Tapia
Subject: Earth Sciences, Environmental Sciences Keywords: Artificial Light at Night; Intrusive Light; Direct Light Pollution; Radiometry; Multispectral; Multiangular; Melatonin Suppression Index; Star Light Index; Spectroscopy; Measurement; Synthetic photometry
Increased exposure to artificial light at night can affect human health including disruption of melatonin production and circadian rhythms and extend to increased risks of hormonal cancers and other serious diseases. In addition, multiple negative impacts on fauna and flora are well documented, and it is a matter of fact that artificial light at night is a nuisance for ground-based astronomy. These impacts are frequently linked to the colour of the light or more specifically to its spectral content. Artificial light at night is often mapped by using space borne sensors, but most of them are panchromatic and thus insensitive to the colour. In this paper, we suggest a method that allows high resolution mapping of the Artificial light at night by using ground-based measurements with the LANcube system. The device separates the light detected in four bands (Red, Green, Blue, and Clear) and provides this information for six faces of a cube. We found relationships between the LANcube's colour ratios and 1- the Melatonin Suppression Index, 2- the StarLight Index and 3- the Induced Photosynthesis Index. We show how such relationships combined with data acquisition from a LANcube positioned on the top of a car can be used to produce spectral indices maps of a whole city in a few hours.
Focality Oriented Selection of Current Dose for Transcranial Direct Current Stimulation
Rajan Kashyap, Sagarika Bhattacharjee, Ramaswamy Arumugam, Rose Dawn Bharath, Kaviraja Udupa, Kenichi Oishi, John E. Desmond, S.H. Annabel Chen, Cuntai Guan
Subject: Medicine & Pharmacology, Allergology Keywords: Transcranial direct current stimulation (tDCS); Realistic volumetric Approach-based Simulator for Transcranial electric stimulation (ROAST); Systematic Approach for tDCS Analysis (SATA); Current dose
Background: In Transcranial Direct Current Stimulation (tDCS) the injected current gets distributed across the brain areas. The motive is to stimulate the target region-of-interest (ROI), while minimizing the current in non-target ROIs. For this purpose, determining the appropriate current-dose for an individual is difficult. Aim: To introduce the Dose-Target-Determination-Index (DTDI) to quantify the focality of tDCS and examine the dose-focality relationship in three different populations. Method: Here, we extended our previous toolbox i-SATA to the MNI reference space. After a tDCS montage is simulated for a current-dose, the i-SATA(MNI) computes the average (over voxels) current density for every region in the brain. DTDI is the ratio of the average current density at the target ROI to that of the ROI with the maximum value (peak region). Ideally the target ROI should be the peak region, so DTDI ranges from 0 to 1. The higher the value, the better the dose. We estimated the variation of DTDI within and across individuals using T1-weighted brain images of 45 males and females distributed equally across three age groups: (a) young adults (20 ≤ x < 40 years), (b) mid adults (40 ≤ x < 60 years), and (c) older adults (60 ≤ x < 80 years). DTDIs were evaluated for the frontal montage with electrodes at F3 and the right supra-orbital region for three current doses (1 mA, 2 mA, and 3 mA) with the target ROI at the left middle frontal gyrus. Result: As the dose is incremented, DTDI may show (a) an increase, (b) a decrease, or (c) no change across individuals. The focality decreases with age and the decline is stronger in males. A higher current dose at older age can enhance the focality of stimulation. Conclusion: DTDI provides information on which tDCS current dose will optimize the focality of stimulation. The DTDI-recommended dose should be prioritised based on the age (> 40 years) and sex (especially males) of an individual. The toolbox i-SATA(MNI) is freely available.
Direct Flux Control for Stand-Alone Operation Brushless Doubly Fed Induction Generator Using Resonant-Based Sliding-Mode Control Approach
Kai Ji, Shenghua Huang
Subject: Engineering, Electrical & Electronic Engineering Keywords: brushless doubly fed induction generator; direct control; stand-alone; sliding-mode; resonant; reduced-order generalized integrator; variable-speed constant-frequency; wind energy conversion systems
In this paper, a novel voltage control strategy for a stand-alone brushless doubly fed induction generator for variable-speed constant-frequency wind energy conversion systems is presented and discussed in detail. Based on the model of the generator power system, the proposed direct flux control strategy employs a nonlinear reduced-order generalized-integrator-based resonant sliding-mode control scheme to directly calculate and regulate the converter output required by the control-winding stator, so as to eliminate the instantaneous errors of the power-winding stator flux, without involving any synchronous rotating coordinate transformations. The stability, robustness and convergence capability of the proposed control strategy are described and analyzed. Since no extra current control loops are involved, the system configuration is simplified and the transient performance enhanced. A constant converter switching frequency was achieved by using space vector pulse width modulation, which reduces the harmonics of the generator terminal voltage. In addition, experimental results prove the feasibility and validity of the proposed scheme, and excellent steady-state and dynamic performance is achieved.
Euler-Bernoulli Beam Theory: First-Order Analysis, Second-Order Analysis, Stability, and Vibration Analysis Using the Finite Difference Method
Subject: Keywords: Euler Bernoulli beam; finite difference method; additional points; element stiffness matrix; tapered beam; first-order analysis; second-order analysis; vibration analysis; direct time integration method
This paper presents an approach to the Euler-Bernoulli beam theory (EBBT) using the finite difference method (FDM). The EBBT covers the case of small deflections, and shear deformations are not considered. The FDM is an approximate method for solving problems described with differential equations. The FDM does not involve solving differential equations; equations are formulated with values at selected points of the structure. Generally, the finite difference approximations are derived based on fourth-order polynomial hypothesis (FOPH) and second-order polynomial hypothesis (SOPH) for the deflection curve; the FOPH is made for the fourth and third derivative of the deflection curve while the SOPH is made for its second and first derivative. In addition, the boundary conditions and not the governing equations are applied at the beam's ends. In this paper, the FOPH was made for all of the derivatives of the deflection curve, and additional points were introduced at the beam's ends and positions of discontinuity (concentrated loads or moments, supports, hinges, springs, etc.). The introduction of additional points allowed us to apply the governing equations at the beam's ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. First-order analysis, second-order analysis, and vibration analysis of structures were conducted with this model. Furthermore, tapered beams were analyzed (element stiffness matrix, second-order analysis). Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of structures, with damping taken into account. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was increased through a grid refinement. Especially in the first-order analysis of uniform beams, the results were exact for uniformly distributed and concentrated loads regardless of the grid. Further research will be needed to investigate polynomial refinements (higher-order polynomials such as fifth-order, sixth-order…) of the deflection curve; the polynomial refinements aimed to increase the accuracy, whereby non-centered finite difference approximations at beam's ends and positions of discontinuity would be used.
A question on the definition of the Riemann–Stieltjes integral
I have been reading Wikipedia's article on the Riemann–Stieltjes integral (https://en.wikipedia.org/wiki/Riemann%E2%80%93Stieltjes_integral) and I don't understand why the Riemann–Stieltjes integral isn't equivalent to the Generalized Riemann–Stieltjes integral. I mean, if instead of $g$ we take the identity function (i.e. we consider the case of the Riemann integral) then they are the same. Why does this change for other $g$'s? I am quite confused because my textbook uses the definition of the Generalized Riemann–Stieltjes integral as the definition for the Riemann–Stieltjes integral, and this is why I want to understand the difference between them.
EDIT: I would also like to know which properties of the generalized integral transfer to the ungeneralized one and which do not to fully grasp them.
real-analysis integration definition
JustAnAmateur
+1. One who edits Wikipedia articles should understand the way I edited the punctuation in this question.
– Michael Hardy
Pollard, Henry (1920). "The Stieltjes integral and its generalizations". The Quarterly Journal of Pure and Applied Mathematics. 49. babel.hathitrust.org/cgi/…
Pollard writes: "The integral so obtained exists and is identical with the integral previously defined whenever this exists, and exists in certain cases where this does not." I don't yet know what those "certain cases" are.
Pollard, Henry (1920). "The Stieltjes integral and its generalizations". The Quarterly Journal of Pure and Applied Mathematics. 49.
Pollard's paper appears to be where the generalization was introduced. The paper begins on page 73 and that page has only the title, the author's name and affiliation, and the references. The link above is to page 74.
On page 80, Pollard defines two functions: $$ f(x) = \begin{cases} 0 & x < 1, \\ k & x\ge 1, \end{cases} $$ $$ \varphi(x) = \begin{cases} 0 & x\le 1, \\ 1 & x>1. \end{cases} $$ He seems to claim that as a generalized Riemann–Stieltjes integral, apparently defined as a Moore–Smith limit of a net indexed by partitions of the interval $[0,2],$ the integral $$ \int_0^2 f(x)\,d\varphi(x) $$ exists and is equal to $k(\varphi(2) - \varphi(1)),$ but that as a Riemann–Stieltjes integral, defined as a limit as the mesh of the partition approaches $0,$ that integral does not exist. Pollard calls the generalized Riemann–Stieltjes integral "the modified Stieltjes integral."
But I haven't carefully sifted through all the details.
Michael Hardy
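A minimal worked check of the mechanism in Pollard's example (a sketch, using the standard Riemann–Stieltjes sums for the two functions above): for a tagged partition $0 = x_0 < \dots < x_n = 2$ with tags $c_i \in [x_{i-1}, x_i],$ $$ S = \sum_{i=1}^{n} f(c_i)\,\bigl[\varphi(x_i) - \varphi(x_{i-1})\bigr]. $$ If $1$ is not a partition point, the single subinterval straddling $1$ carries the whole jump of $\varphi,$ and $S$ equals $0$ or $k$ depending on whether its tag lies below $1$ or not, so the mesh-limit (classical Stieltjes) definition has no limit. If $1$ is a partition point, the only nonzero increment of $\varphi$ is on the subinterval just to the right of $1,$ where $f \equiv k,$ so $S = k = k(\varphi(2) - \varphi(1)),$ which is the value selected by the refinement (Moore–Smith) limit.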
If the partition does not contain the point $1$, then by choosing the corresponding $c$ either $< 1$ or $\geqslant 1$ you can make the sum $0$ or $k$. But if the partition contains the point $1$, then for the $[x_{k-1},1]$ interval you always have $f(c)\cdot (0 - 0)$ and for the $[1, x_{k+1}]$ interval you always have $k\cdot(1-0)$.
I seem to recall Baby Rudin saying the Riemann–Stieltjes integral is undefined in cases where $f$ and $\varphi$ have a point of discontinuity in common. So that's where the "generalized" version goes beyond the ungeneralized one.
Yes. The generalised Riemann–Stieltjes integral can deal with some common discontinuities, but not with all.
Thank you very much! I agree with @MichaelHardy that the generalized version deals with some cases when the functions have a point of discontinuity in common. What I would also like to know is which properties of the generalized integral transfer to the ungeneralized one and which do not; I will add this to my question.
– JustAnAmateur
@JustAnAmateur : I am inclined to doubt that any useful properties of the generalized version fail to apply to the version defined by Stieltjes, except that the one defined by Stieltjes fails to be defined when $f$ and $\varphi$ have a common discontinuity. The paper by Pollard might be the first place I'd look for an answer to that.
Gorenstein global dimensions relative to balanced pairs
Haiyu Liu 1, Rongmin Zhu 1,* and Yuxian Geng 2
Department of Mathematics, Nanjing University, Nanjing 210093, China
School of Mathematics and Physics, Jiangsu University of Technology, Changzhou 213001, China
* Corresponding author: Rongmin Zhu
Received January 2020 Revised May 2020 Published July 2020
Fund Project: The first author is supported by the NSF of China (Grants No. 11771202)
Let $ \mathcal{G}(\mathcal{X}) $ and $ \mathcal{G}(\mathcal{Y}) $ be Gorenstein subcategories induced by an admissible balanced pair $ (\mathcal{X}, \mathcal{Y}) $ in an abelian category $ \mathcal{A} $. In this paper, we establish Gorenstein homological dimensions in terms of these two subcategories and investigate the Gorenstein global dimensions of $ \mathcal{A} $ induced by the balanced pair $ (\mathcal{X}, \mathcal{Y}) $. As a consequence, we give some new characterizations of pure global dimensions and Gorenstein global dimensions of a ring $ R $.
Keywords: Abelian category, balanced pair, Gorenstein subcategory, homological dimension, global dimension.
Mathematics Subject Classification: 16E05, 16E30, 18E10, 18G20.
Citation: Haiyu Liu, Rongmin Zhu, Yuxian Geng. Gorenstein global dimensions relative to balanced pairs. Electronic Research Archive, 2020, 28 (4) : 1563-1571. doi: 10.3934/era.2020082
Multi-Wavelength Densitometer for Experimental Research on the Optical Characteristics of Smoke Layers
Wojciech Węgrzyński (ORCID: orcid.org/0000-0002-7465-0212)1,
Piotr Antosiewicz1 &
Jadwiga Fangrat1
Fire Technology, volume 57, pages 2683–2706 (2021)
A novel multi-wavelength densitometer was built for the purpose of continuous and simultaneous measurements of light obscuration in smoke layers, concurrently in five bands (λ = 450 nm, 520 nm, 658 nm, 830 nm and 980 nm). This device was used for determining the transmittance and visibility-in-smoke parameters of a smoke layer from the fire of 1.00 dm3 of n-Heptane in a 0.33 × 0.33 m tray located in a test chamber (9.60 × 9.80 × 4.00 m3). The performance of the device was compared with a commercial Lorenz densitometer at 880 nm. Significant differences in the measured values of transmittance were observed between the different sensors – from 65% at 450 nm (blue light) and 80% at 658 nm (red light) to 95% at 980 nm (IR). The visibility in smoke, estimated following the theory of Jin for light-reflecting signs (K = 3), ranged from 7.5 m (blue light) to 12 m (red light), and for the light-emitting signs (K = 8) from 18 m to 32 m, respectively. The performed experiment confirmed the applicability and added value of multi-wavelength measurements of light extinction in fire experiments. The device was sensitive to temperature variations and requires active cooling and careful warm-up prior to experiments to reach the expected sensitivity.
Optical properties of fire smoke are a focal point of fire science, highly relevant to both physical experiments and computer modelling of fires. The obscuration of light by smoke is usually presented in the form of a 'visibility in smoke' parameter. In performance-based analyses for fire safety engineering of buildings, the visibility in smoke is often the first tenability criterion to exceed its threshold value [1], in consequence shaping the fire safety solutions in the building. The smoke obscuration also influences the visibility of evacuation signs, the response to the information on the signs, and the behaviour and movement speed of people who are evacuating [2]. Finally, low visibility in smoke will be one of the critical challenges that firefighters face when performing rescue operations.
Despite the profound relevance of visibility in smoke to the fire safety engineering, the existing theory of visibility in smoke was criticized as oversimplified, based on a limited range of experimental results and highly sensitive to user-dependent properties [3]. Even though multiple studies were performed to improve the existing theory to include for more diverse visibility of various evacuation signs (e.g. [4, 5]), the conventional approach is still the simplest model introduced by Jin [6]. This model is based on a limited number of experiments (primarily from the 1960s and 1970s) on the obscuration of monochromatic light by homogenous cold smoke [7, 8]. The development of this model was summarized in [2, 9].
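To make the conventional model concrete, the following is a minimal sketch (not taken from this paper) of the relation usually attributed to Jin, V ≈ K/Cs, where Cs is the light extinction coefficient obtained from a transmittance measurement and K is a sign-dependent constant (K ≈ 3 for light-reflecting and K ≈ 8 for light-emitting signs, the values used later in this work). The numerical inputs are illustrative only, not measured data.

```python
import math

def extinction_coefficient(tau: float, path_length_m: float) -> float:
    """Light extinction coefficient Cs [1/m] from transmittance over a known optical path."""
    return -math.log(tau) / path_length_m

def visibility_jin(cs: float, k: float) -> float:
    """Conventional visibility estimate V = K / Cs (model attributed to Jin)."""
    return k / cs

# Illustrative numbers: transmittance of 0.80 measured over a 0.92 m optical path.
cs = extinction_coefficient(0.80, 0.92)
print(f"Cs = {cs:.3f} 1/m, V(K=3) = {visibility_jin(cs, 3):.1f} m, V(K=8) = {visibility_jin(cs, 8):.1f} m")
```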
To better understand how light is obscured by fire smoke and its implications for the performance of evacuation lighting and signage, it is essential to recognize the effects of polydisperse (non-homogenous) smoke on polychromatic light. This assumption significantly improves precision compared to the traditional approach, but it also increases the complexity of the problem. To enable future research under these conditions, we have developed a novel smoke densitometer that may be installed directly within a smoke reservoir and measures smoke extinction concurrently at five wavelengths. In this paper, we discuss similar developments in the past, the construction of the device and the particular technical challenges faced in the development phase, its calibration, and an exemplar use in an n-Heptane compartment fire experiment.
Obscuration of Light in Smoke Layers
The obscuration of light passing through a medium is often described with the Bouguer-Lambert-Beer law (Eq. 1). The ratio of the light intensity measured at a target in obscured conditions (I) to the light intensity measured with a clean optical path (I0) is the transmittance (τ). The light obscuration depends on three elements: the specific light extinction coefficient (σs), the mass density of smoke (m) and the length of the optical path (l). Another common way to present the light extinction of the smoke is through the logarithm of τ, expressed as the obscuration density (OD) in [dB/m].
$$\tau =\frac{I}{{I}_{0}}={e}^{-{\sigma }_{s}ml}$$
It was recognised that this application of the Bouguer-Lambert-Beer law has many limitations emerging from the underlying assumptions of the law [3], among which: (a) it should be used for monochromatic light, (b) it should be used for single or parallel light sources, (c) light absorption is considered dominant over light scattering effects. Despite the above-mentioned profound limitations, this relation is still commonly used to determine the light extinction in fire smoke. A significant simplification in the practical use of this method was introduced by unifying the value of the specific light extinction coefficient (σs) for flaming combustion at σs = 8.7 m2/g (± 1.1 m2/g) [10]. This value was determined based on seven studies involving 29 fuels, for the light wavelength λ = 633 nm. A major implication of determining a nearly universal value of σs was that it enabled the mass concentration of smoke to be determined from light extinction measurements. This universal value was widely adopted in engineering tools of fire science, among them the most popular CFD code FDS [11]. Other values of σs can be found in the literature and engineering guidance – e.g. the value of 4.7 m2/g used in the design of visibility conditions in road tunnels [12] or other values of σs reviewed by [9].
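A short sketch of how Eq. 1 is used in practice with the near-universal σs quoted above: from a measured transmittance and a known optical path, the extinction coefficient and an estimate of the smoke mass concentration follow directly. The obscuration density below is written on the base-10 decibel scale, which is an assumed convention, and the input numbers are illustrative only.

```python
import math

SIGMA_S_FLAMING = 8.7  # m^2/g, near-universal value for flaming combustion at 633 nm [10]

def transmittance(sigma_s: float, m: float, l: float) -> float:
    """Eq. 1: tau = exp(-sigma_s * m * l); sigma_s in m^2/g, m in g/m^3, l in m."""
    return math.exp(-sigma_s * m * l)

def mass_concentration(tau: float, l: float, sigma_s: float = SIGMA_S_FLAMING) -> float:
    """Invert Eq. 1: smoke mass concentration [g/m^3] from a measured transmittance."""
    return -math.log(tau) / (sigma_s * l)

def obscuration_density(tau: float, l: float) -> float:
    """Obscuration density [dB/m], here taken as -10*log10(tau)/l (assumed convention)."""
    return -10.0 * math.log10(tau) / l

# Illustrative: tau = 0.80 over a 0.92 m path gives roughly 0.028 g/m^3 and 1.05 dB/m.
print(mass_concentration(0.80, 0.92), obscuration_density(0.80, 0.92))
```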
In [13], the value of σs was discussed as a function of light wavelength. This was done by investigating a large batch of measurements of specific light extinction coefficient in wavelengths ranging from UV (385 nm) to far-IR (10 600 nm), with corresponding values of σs from 12 m2/g to 1 m2/g respectively. The value of σs decreased with increasing wavelength. A least-square fit to the experimental data was found (Eq. 2), with R = 0.98675. However, it must be emphasized that this fit applies to stoichiometric and overventilated combustion [13].
$${\sigma }_{S}=4.8081{\lambda }^{-1.0088}$$
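A brief illustration of Eq. 2, assuming the wavelength is expressed in micrometres (an assumption that approximately reproduces the end-point values quoted above), evaluated here at the five bands used later in this paper:

```python
def sigma_s_fit(wavelength_um: float) -> float:
    """Least-squares fit from [13]: sigma_s = 4.8081 * lambda^(-1.0088), lambda assumed in micrometres."""
    return 4.8081 * wavelength_um ** -1.0088

for wl_nm in (450, 520, 658, 830, 980):
    print(f"{wl_nm} nm: sigma_s ~ {sigma_s_fit(wl_nm / 1000.0):.1f} m^2/g")
```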
As the σs asymptotically grows with the decreasing wavelength, the differences in σs in the lower end of the visible spectrum are profound, which was confirmed in qualitative result analysis of some experiments on the visibility of evacuation signs, e.g. [9, 14, 15] or research on the applicability of free space outdoor optic connectors in foggy weather [16]. As the value of σs can differ significantly within the visible spectrum, it is obvious that the light obscuration through the smoke at different wavelengths will take different values. In fact, past research was focused on exploiting this optical characteristic of smoke for remote analysis of characteristic smoke diameter, as summarized in Sect. 1.3.
Multi-Wavelength Smoke Densitometers in the Literature
Differences in the obscuration of light at different wavelengths were often connected to the distribution of particle sizes of the smoke [15, 17, 18]. As the particle size distribution depends on the properties of burning material and the conditions in which the combustion occurs, this approach was considered for remote identification of burning fuel.
Jin [15] considered the impact of light obscuration at different wavelengths on the visibility of evacuation signs. He recognized that the extinction coefficient at short wavelengths is larger than at long wavelengths and that the difference between them is also a function of time. The difference diminishes as an effect of the coagulation of smoke particles. A particle size of 1 µm was proposed as the boundary above which the wavelength stops being an important factor for visibility. He calculated the ratios of the visibility in red light to that in blue light for smouldering and flaming polystyrene, flaming wood and flaming kerosene.
Dobbins and Jizmagian [18] proposed two methods to determine the mean size of polydispersed dielectric spheres, the first of which was the determination of a single spectral transmittance together with knowledge of the particle concentration. This approach is similar to how smoke obscuration effects are determined with Bouguer's law. The size of the polystyrene particles used in the experiment (0.126–1.305 µm) was close to the size distribution of soot particles commonly found in fire smoke [19]. For a polydispersion of known concentration, they were able to compute the particle sizes with one transmittance measurement, following a revised version of the Lambert–Beer law from their previous work [20].
Cashdollar et al. [17] proposed to use a white light source and a compact three-wavelength detector assembly to determine the average particle size and mass concentration of smoke. They used the Mie theory and Dobbins' revised Bouguer transmission law [18] to calculate sizes and concentrations. The beam separation was obtained with two cube beamsplitter filters, centred at wavelengths 450 nm, 630 nm and 1000 nm (nominal bandwidth 10 nm). The light intensity was measured with silicon photodiodes and the entire assembly was mounted in one small casing (8 cm × 10 cm × 5 cm). The smoke obscuration was measured over an optical path of 0.1 m, within a gas sampling test section connected to a fire tunnel. For wood-fire smoke the log-transmission ratios were: ln τ(1000 nm)/ln τ(450 nm) = 0.25; ln τ(1000 nm)/ln τ(630 nm) = 0.47 and ln τ(630 nm)/ln τ(450 nm) = 0.54. These ratios allow for the determination of mean particle sizes. A reasonably good agreement was found between these measurements and two other sizing methods (transmission electron microscopy and ionization-type particulate detectors). The authors emphasized that this device may be used in field experiments.
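A small sketch of the log-transmission ratios used by Cashdollar et al. as particle-size indicators; the transmittance values below are hypothetical and merely chosen to land near the wood-smoke ratios quoted above:

```python
import math

def log_transmission_ratio(tau_a: float, tau_b: float) -> float:
    """Ratio ln(tau_a) / ln(tau_b) between transmittances at two wavelengths."""
    return math.log(tau_a) / math.log(tau_b)

# Hypothetical transmittances at 450, 630 and 1000 nm (not measured values).
tau_450, tau_630, tau_1000 = 0.60, 0.75, 0.88
print(log_transmission_ratio(tau_1000, tau_450))  # compare against ~0.25 reported for wood smoke
print(log_transmission_ratio(tau_1000, tau_630))  # ~0.47
print(log_transmission_ratio(tau_630, tau_450))   # ~0.54
```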
Uthe [21] investigated the applicability of the multi-wavelength LIDAR method to remotely determine extinction coefficients and mean particle sizes of aerosols. The experimental study was performed with a 14-wavelength transmissometer. Ten different wavelengths between 390 and 1640 nm, as well as 3900 nm, were produced with a Zirconium arc lamp; the 3390 nm wavelength with a He–Ne laser and 10,600 nm with a CO2 laser. The light intensity was measured by photodiodes (Silicon, Germanium, PbSe or HgCdTe) behind narrow-band optical filters. The facility consisted of a long (10 m) aerosol chamber with a cross-section of 0.50 × 0.50 m, to which compressed air with different dusts was supplied. Ratios of extinction between the transmittance measured at 1045 nm and 514 nm were presented as a function of the Sauter mean diameter for submicron particles, allowing for identification of the aerosol type. They concluded that a single laser system (Nd:YAG 1060 nm) and its first harmonic (530 nm) would be useful to evaluate particles of sizes < 1 µm. For particles with sizes greater than 1 µm, the ratio of extinction coefficients at 1045 and 514 nm converges to a value of 1.2 and is no longer sensitive to particle size. A similar conclusion was drawn in [9], where it was stated that as the particle size increases, the integral optical property of the soot cloud approaches that of monodisperse particles and particle sizes are no longer important. This is somewhat similar to the observations of Jin [15].
Limitations of optical transmission for the determination of particle sizes were investigated by Swanson et al. [22]. They discussed the underlying theory and presented the derivation of the Bouguer-Lambert-Beer law, as well as the theory of Mie and the radiative transport equation (RTE). The application of the Bouguer-Lambert-Beer law was verified in a scattering environment and Hodkinson's findings related to the restriction of the aperture with respect to scattered light were emphasized. A method of determination of particle sizes with two measurements of transmittance was validated for monodisperse polystyrene particles with sizes of 0–1 µm. Aspey et al. [23] further improved this method using a High-Level Synthesis (HLS) approach and a polychromatic LED. They investigated the spectral change in transmitted polychromatic light, which was used to determine Mie scattering parameters in wood smoke as a function of time. It was observed that for the wood smoke the forward scattering at the blue wavelength was approximately three times that at the green detector, while the red detector measured negligible forward scattering.
The use of multi-wavelength light transmission was also proposed by Wilkens and van Hees [24] to improve the research capabilities of a cone calorimeter [25]. This implementation would allow for determination of the mean particle diameter of the smoke produced in the calorimeter through a non-intrusive light transmission measurement. They reviewed the methods of smoke identification used in conventional standardized fire testing apparatus and performed theoretical considerations on the limitations of the proposed tool, as well as an in-depth discussion of previous developments. An apparatus similar in concept was built by Chaudhry and Moinuddin [26]. A sealed box was used as a combustion chamber to burn different materials and the smoke was further transported into five separate light chambers. These chambers were equipped with transmittance measurements at specific wavelengths of 275 nm, 365 nm, 405 nm, 620 nm and 960 nm. Both absorbed (measured at an angle of 0°) and scattered (at 90°) light were recorded. A Random Forest machine learning algorithm was trained on the experimental results in order to predict the burning material based on the measurements at these five wavelengths. The overall success rate of the algorithm was 96.6%–97.5%, showing the high applicability of this method. For some materials, such as cardboard and polystyrene, the success rate was lower, respectively 60.3% and 59%. The difficulties with the use of this approach were associated with obtaining meaningful results for both scattering (at a certain angle) and absorption.
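The classification step described by Chaudhry and Moinuddin can be sketched as follows; this is a schematic illustration with randomly generated placeholder data and class labels, not the authors' dataset or pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder features: absorbed (0 deg) and scattered (90 deg) signals at the five
# wavelengths (275, 365, 405, 620, 960 nm) -> 10 columns; labels are material classes.
X = rng.random((300, 10))
y = rng.integers(0, 4, size=300)  # e.g. wood, cardboard, polystyrene, other (placeholders)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```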
Another recently developed approach used tomographic reconstruction of an image of an LED, captured with a digital camera through a smoke layer. By comparing the predicted and observed shape and light intensity of the LED, the smoke layer opacity could be reverse-engineered. This approach could be used for multi-wavelength observations, as typical LEDs consist of red, green and blue diodes [27].
Summarizing previous efforts, it was found that the past use of multi-wavelength optical transmittance meters was limited to the determination of the mean diameter of smoke particles and, consequently, the type and source of the smoke. Most of the devices constructed were connected to bench-scale apparatus (boxes, tube burners, smoke tunnels). Cashdollar's apparatus [17] was meant to be used in field experiments, although such use was not reported in the literature. The limitations of the proposed solutions were (1) the requirement to know the mass density of smoke or (2) the need for high accuracy of the transmittance measurements, which excludes the use of this type of apparatus in field experiments. Direct measurements in compartment-scale fire experiments are needed, as the optical properties of smoke generated at small and large scale may not necessarily be the same [28]. It is possible to sample the gas from smoke layers, although, as Wilkens [24] stated, this would be considered an intrusive approach. Furthermore, following the findings of Jin [15] and Aspey [23], the properties of smoke will change with time, which could distort the size of smoke particles and, in consequence, the light extinction measurements.
For our future research on visibility in smoke in compartment fires, validation studies for computer models and the effects of smoke obscuration on evacuation lighting and signage, we are interested in the differences in light transmittance through smoke layers in full-scale experiments, rather than in the properties of smoke generated in bench-scale apparatus. To fit our purpose, the tool used to measure the light transmittance would have to (a) be mounted directly within the smoke layer, (b) be able to measure the transient change of the transmittance in real time and (c) allow comparison of the results with the existing body of measurements from the EN 54–7 test chamber [29], such as the one used in [30]. For this purpose, the most promising solution was to construct an optical densitometer based on laser diodes and photodiodes that could be mounted within a single casing and share the same optical path. The sources must not interfere with each other, and the interference from the fire source and other light sources (such as emergency lighting) should be limited by an aperture, narrow-band filters and the internal structure of the receiver. Following these requirements, we constructed a densitometer that consists of five separate laser–photodiode couples within one casing, with their light extinction recorded simultaneously over an identical optical path. The sets differ by their specific wavelength, which in our case was: 450 nm (blue), 520 nm (green), 658 nm (red), 830 nm (IR) and 980 nm (IR).
Development of the Multi-Wavelength Densitometer
The Prototype Design
Two prototype multi-wavelength densitometers were constructed based on commercially available 50–100 mW lasers. The list of system components is given in Appendix A. As in [31], the device was built to be installed in a hot smoke layer; thus, its construction was designed to allow for easy replacement of a damaged element. Five different lasers were installed within one casing, each connected optically with a photodiode behind a matching narrow-band optical filter on the opposite side of the device. A micro-alignment system was used to target the laser beam directly on the photodiode. The device was built on a rigid aluminium structure. The laser and photodiode casings were connected to an external cooling apparatus (cold air), which was controlled with a thermostatic controller to maintain a constant temperature within the box. The photodiodes were connected to five signal amplifiers, built within the receiver casing, which were in turn connected to the data acquisition system. The measurements at all five wavelengths are performed simultaneously and share the same timeline. The prototype device and the general scheme of operation are shown in Fig. 1. The internal build and details of the device are shown in Fig. 2. The calibration data obtained at thermal equilibrium (26 °C) are shown in Appendix B. With the exception of two data points (520 nm and 658 nm at τ ~ 0.5), the device had a calibration error of less than 10% in the range of τ from 0.2 to 0.8, which for a prototype device we consider sufficient to evaluate the concept of multi-wavelength densitometry.
The general overview of the multi-wavelength optical densitometer with its auxiliary components – the data acquisition system and set of calibration filters
The internal structure of the multi-wavelength optical densitometer along with images showing construction details of particular parts of the device
The first prototype device had a length of 0.92 m and was built to match the size of the accredited Lorenz densitometer used in the laboratory, allowing a direct comparison between the prototype and the existing densitometer. An alternative variant of the densitometer was constructed with a longer optical path (2 m) and an additional mounting mechanism. This mechanism allows for automated lowering and raising of the densitometer within a test chamber. This capability will be used in the future to capture the obscuration density gradient within a small box (2 × 2 × 1 m).
Thermal Stabilisation of the Device
A possible practical problem with measuring the optical properties of smoke with a laser-based densitometer is the thermal drift of the lasers. The intensity of light emitted by a laser changes with its temperature. In consequence, the ratio of the measured and initial light intensity stops being dependent only on the optical properties of smoke. In order to minimize the influence of this factor, we fitted the densitometers with an air-based cooling system steered by a thermostatic controller, set to an operating temperature of 26 °C. The laser and photodiode casings were insulated from the hot smoke layer with a layer of mineral wool (5 cm, 120 kg/m3). If the temperature inside the box exceeded 26 °C, a low-velocity supply of cold air (8–13 °C) was enabled to cool the interior and maintain the temperature at 26 °C. Multiple variants of air supply velocity and temperature were tested. Supplying air at too high a velocity caused vibrations of the laser unit, which interfered with the light intensity measurements. Supplying colder air caused higher temperature gradients, which resulted in unpredictable changes in the light intensity.
The device had to be pre-heated for approx. 90–150 min before each experiment for the laser temperature to stabilize. Once the thermal stability of the lasers was reached, the values of light intensity (further considered as our I0) were saved. An illustration of the measurements performed in the pre-heating period is shown in Fig. 3. In the first 45 min of the warm-up period, the lasers heat up at different rates. The thermostatic device is first triggered at approximately the 10th minute, and from approximately the 40th minute it works continuously. After 45 min, thermal stabilization is reached and further changes of the light intensity are insignificant.
Raw data (voltage) measured on the photodiodes during the pre-heating phase of the experiment
To confirm the applicability of the device for research on compartment fires, we performed an experiment involving an n-Heptane pool fire, known as the TF-5 fire, in a standardized EN 54–7 [29] test chamber with dimensions of 9.60 m × 9.80 m × 4.00 m (Fig. 4). This experiment was in principle a recreation of an experiment published previously in [3]. The test rig is built from a steel structure with gypsum plasterboards. The compartment is closed from the exterior, but is not airtight. During the experiments, we observed no smoke in the hall where the compartment is placed. This led to the assumption that the compartment is sufficiently sealed from the exterior and that no significant exchange of gases occurred between the compartment and the exterior during the experiment.
General view (upper) and a sketch (bottom) of the test chamber, indicating the location of the optical densitometers
Before the experiments the densitometer was calibrated following the procedure described in [31]. The details of the calibration are given in Appendix B.
A fuel tray with dimensions of 0.33 × 0.33 m was filled with 1000 ml of n-Heptane and placed on top of a scale. The mass-loss rate measurements were used to determine the heat release rate, under the assumption that the heat of combustion is Hc = 44,400 kJ/kg. No ventilation was used in the experiment (the compartment was sealed). The obscuration density was measured with the prototype densitometer and with the chamber's accredited optical densitometer (Lorenz, λ = 880 nm), located next to each other, Fig. 5. Temperature measurements were performed with thermocouples (type K, 1 mm Ni–Cr, expanded uncertainty 0.3%) directly above the source of the fire and next to the optical densitometers. The thermocouple readings were not corrected for radiation. Additional measurements included the light intensity of the evacuation lighting as well as the spectrogram of the light of one of the lighting points. These measurements were performed for a parallel experiment and are out of the scope of this investigation.
Close up view of the mounting location of the multi-wavelength optical densitometer
HRR and Smoke Temperature
Three n-Heptane TF-5-type fires were performed in the experimental chamber. The burnout time varied between 320 and 350 s. As the fuel burnt out at slightly different times, some significant differences between experiments were observed in the measured mass loss rate towards the end of the experiments. As those differences may skew the optical measurements, only the first 300 s of each experiment were taken into account in the quantitative analysis of the results. The average heat release rate (HRR) of the fire was approx. 87 kW, with a peak value of approx. 117 kW in the 200th second of the experiment. The individual HRR curves for each experiment and the mean curve are shown in Fig. 6.
HRR estimation based on the mass loss measurements in the experiment and the mean HRR value
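The HRR estimation described above amounts to multiplying the mass-loss rate by the assumed heat of combustion. The following is a minimal sketch of that calculation, not the authors' processing code; the mass readings are illustrative (1000 ml of n-Heptane corresponds to roughly 0.68 kg), and only the constant Hc = 44,400 kJ/kg is taken from the text.

```python
import numpy as np

# Hypothetical mass-loss record: time [s] and remaining fuel mass [kg].
t = np.array([0, 60, 120, 180, 240, 300], dtype=float)      # s
m = np.array([0.684, 0.610, 0.520, 0.415, 0.300, 0.190])    # kg (illustrative)

H_c = 44_400.0                 # kJ/kg, heat of combustion assumed in the paper

mlr = -np.gradient(m, t)       # mass-loss rate [kg/s]
hrr = mlr * H_c                # heat release rate [kW], since kJ/s = kW
print(np.round(hrr, 1))
```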
The measurements of the temperatures are shown in Fig. 7. The temperature evolution followed the growth and decay of the fire, with a peak value at approximately the 200th second of the experiment. After 300 s, the temperature of the smoke layer was decaying. The peak temperatures measured were 101 °C directly above the fire and 63 °C near the densitometers. The variability of the temperature measured at the densitometers was smaller than for the central point directly above the fire.
Temperature measured under ceiling directly above the fire, next to the optical densitometers and at the floor of the compartment. Bold lines represent the mean value measured and shaded areas represent the standard deviation of the measurements in three experiments
The course of a single experiment is shown in Fig. 8. At the moment of ignition, the chamber is free of any smoke, with two rows of evacuation lighting lamps and a small reflective sign in the top-left corner visible. As the fire develops, a smoke layer forms under the ceiling, which can be identified by a "cone" of light under the lamps. After approx. 3 min, the smoke is so dense that the reflective sign is no longer visible to the camera. At the end of the experiment, the depth of the smoke layer is approximately half of the height of the compartment.
Stills from a recording of one of the experiments in chosen time-steps
Smoke Obscuration
The measurements of the smoke obscuration at different wavelengths are shown in Fig. 9. As expected, the lowest transmittance was observed for the shortest wavelength (λ = 450 nm, blue light), while the highest transmittance was observed in the IR band (λ = 880 nm and 980 nm). All lasers of the multi-wavelength densitometer show similar trends (in terms of peaks and lows) in the measured transmittance, while the Lorenz densitometer shows a flatter curve, which may be a result of internal averaging within the densitometer controller. It must also be noted that the measurement of the light obscuration with the red (λ = 658 nm) laser was fully successful only in the first experiment. In further experiments, this laser failed approx. 180 s into the experiment and its further measurements were discarded.
Results of transmittance measured in experiments. Bold lines represent mean values and shaded areas represent standard deviations of the measurements in three experiments. Plot (e) represents the transmittance measured with accredited Lorenz densitometer
The scatter of the results obtained with the multi-wavelength densitometer is larger than for the Lorenz device; however, that may also be attributed to the higher sensitivity of the smoke obscuration measurements at shorter wavelengths (450–658 nm) than in the IR band. In fact, the prototype densitometer at λ = 980 nm showed a scatter similar to that of the Lorenz densitometer. In the visible spectrum, the value of the transmittance measured by the red (λ = 658 nm) laser at the end of the experiment is approx. 80%, while for green (λ = 520 nm) and blue (λ = 450 nm) it is 70% and 65%, respectively.
The value of the transmittance may be used to calculate the light extinction coefficient $C_s$, which, following the theory of Jin [7], can be converted into the visibility in smoke (Eq. 3):
$$V=\frac{K}{C_{s}}, \qquad C_{s}=\frac{1}{L}\ln\left(\frac{I_{0}}{I}\right)$$
where $L$ is the length of the optical path and $K$ is a proportionality constant that depends on the type of object observed.
A plot of the visibility for K = 3 (light-reflecting signs) is shown in Fig. 10 and for K = 8 (light-emitting signs) in Fig. 11. As observed, the difference between the visibility estimates is significant in the visible band. In the case of light-reflecting signs, the visibility for blue light at the end of the experiment is approx. 7 m, while for red light it is 12.5 m and for IR at λ = 880 nm it is 25 m. The difference expressed as visibility in smoke [m] is even larger for light-emitting signs (K = 8), with measured values of 18 m, 32 m and 65 m, respectively. As the tenability criterion is usually a single sharp value (e.g. 10 m [2]), the wavelength sensitivity may be an important factor to consider.
Visibility in smoke for light reflecting signs (K = 3) for various wavelengths measured in the experiments. Bold lines represent mean values and shaded areas represent standard deviations of the measurements in three experiments
Visibility in smoke for light-emitting signs (K = 8) for various wavelengths measured in the experiments. Bold lines represent mean values and shaded areas represent standard deviations of the measurements in three experiments
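For readers who wish to reproduce these estimates, the following minimal sketch applies Eq. 3 to the end-of-test transmittances quoted above; the 0.92 m optical path of the first prototype is assumed as the reference length L, and the numbers are rounded approximations rather than the measured data series.

```python
import numpy as np

def visibility(transmittance, path_length_m, K):
    """Visibility in smoke [m] following Jin's relation V = K / Cs,
    with the extinction coefficient Cs = -ln(tau) / L."""
    Cs = -np.log(transmittance) / path_length_m   # [1/m]
    return K / Cs

tau_blue, tau_red = 0.65, 0.80    # approximate end-of-test transmittances
L = 0.92                          # optical path of the first prototype [m]

for K, label in [(3, "light-reflecting signs"), (8, "light-emitting signs")]:
    print(label,
          round(visibility(tau_blue, L, K), 1), "m (blue),",
          round(visibility(tau_red, L, K), 1), "m (red)")
```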
The Role of Particle Size Distribution
As identified in the literature review (Sect. 1.3), the ratio of the light extinction coefficients at different wavelengths is related to the particle size distribution. As shown in [21], this may be used to identify the type of material being burnt. Theoretical considerations of the relation between particle size and light scattering and extinction are given in [9], and a practical application idea in [26].
The smoke in our experiment came from an n-Heptane pool fire. The characteristics of n-Heptane fire smoke are known from similar tests in a chamber of the same type, as discussed by Keller et al. [32]. They used a scanning mobility particle sizer (SMPS), which classifies the particles according to their electrical mobility by means of an electrostatic classifier and counts them optically. A single scan takes approx. 55 s, which for a transient phenomenon (n-Heptane combustion) introduces some unavoidable bias. To limit this bias, the measurement was repeated five times within each experiment. The particle size distribution obtained for the n-Heptane fire is shown in Fig. 12a. Particle generation in flaming combustion of n-Heptane in a similar setup was also measured by Li et al. [33]. They used a Fast Particulate Spectrometer (DMS500) to record the particle number concentration and distribution. A particle distribution curve was determined for many tray sizes, including a 33 cm tray similar to the one used in our research. For easier comparison with the results of [32], the results of this study were redrawn in Fig. 12b using the same scale.
Particle distribution (number concentration of particles dw/d log dp [#/cm3]) as a function of particle diameter (dp) in an n-Heptane fire of a similar size, in a compartment with dimensions similar to the one in the current study. a is reproduced from [32] and b is redrawn from [33]. In a, the time refers to the start of the measurement and one full measurement takes approx. 1 min
The multi-wavelength densitometer shown in this paper could possibly be used to determine the ratio of the extinction coefficients between the different wavelengths of the device, as shown in Fig. 13. Such a comparison was performed in the past, as reported in the literature, e.g. by Uthe [21], who obtained a ratio between the light extinction at wavelengths of 1045 and 514 nm of 0.25 for rosin smoke (mean particle diameter below 100 nm). In our case, the value obtained by comparing the extinction at 980 and 450 nm for n-Heptane smoke (with the particle distribution assumed as in Fig. 12) converged at 0.19. Another comparison may be made with the data of Jin, who measured ratios between blue and red of approx. 1.2–1.4 for polystyrene, which is close to the value measured in the initial phase of our experiment. Despite the shortcomings in the precision of the developed apparatus, these approximations indicate that there is potential for remote sensing of the particle size distribution with our apparatus. This could be improved if, instead of a simple ratio of transmittances, more advanced algorithms were used to investigate the light transmission at different wavelengths, as suggested in [26]. Furthermore, more data on particle size distributions are required in order to calibrate such a measurement.
Measured extinction coefficient ratios for different devices used in the experiment
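The ratio itself is straightforward to compute from two simultaneous transmittance readings, as in the short sketch below; the transmittance values are assumed for illustration (only the 450 nm value and the resulting ratio of about 0.19 are quoted in the text), and the shared optical path cancels out of the ratio.

```python
import numpy as np

def extinction(tau, L):
    """Light extinction coefficient [1/m] from transmittance over path L [m]."""
    return -np.log(tau) / L

L = 0.92                              # shared optical path of the prototype [m]
tau_980, tau_450 = 0.92, 0.65         # illustrative end-of-test transmittances

ratio = extinction(tau_980, L) / extinction(tau_450, L)
print(round(ratio, 2))                # compared in the text against 0.19-0.25
```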
Visibility in smoke is among the key tenability criteria used in modern fire safety engineering. It is paramount to the design of evacuation signage and lighting and is one of the most critical factors that need to be addressed during firefighting operations. Despite the profound importance of this parameter for fire safety, the dependence of the light extinction in smoke on the wavelength of light is rarely taken into account in research or engineering. Most modern research refers to visibility in the red light band, while the light commonly used in buildings is polychromatic and usually covers the whole visible spectrum.
To allow for future research on light extinction in smoke layers, we constructed a multi-wavelength densitometer capable of measurements in five bands: blue (λ = 450 nm), green (λ = 520 nm), red (λ = 658 nm) and IR (λ = 830 nm and λ = 980 nm). We performed three experiments with flaming combustion of n-Heptane (also known as the TF-5 fire) based on EN 54–7 [29]. We measured significant differences in the light transmittance and the visibility in smoke layers for different wavelengths. In the visible light band, the transmittance measured in the same conditions ranged from 65% (blue light) to 79% (red light). The smoke obscuration expressed as visibility in smoke for light-reflecting signs ranged from 7 m (blue light) to 12.5 m (red light). The transmittance and visibility in smoke in the IR band were significantly higher.
Furthermore, as indicated in the literature, the ratios between the light extinction coefficients are related to the mean particle diameter of the smoke, which means that a device such as the one shown here could be used to remotely identify the smoke properties. However, more precise extinction measurements, a method to account for scattering and possibly a database with reference values may be required to make this approach robust. Alternatively, as shown in [26], a machine learning approach could be used to identify the smoke based on transmittance values at different wavelengths, rather than on specific ratios between the measurements.
The multi-wavelength densitometer may be a useful tool in research focused on visibility in smoke, the validation of numerical models of smoke obscuration and research on evacuation signage and lighting. Our future research will include measuring the light extinction in smoke layers from different materials and types of combustion (flaming, smouldering, pyrolysis) and estimating the light extinction of emergency lighting sources.
The trade names used in this study are given to allow for reproduction of the results presented in the paper, and are not an endorsement by the Authors or recommendation of the particular type or make of the devices.
Levy C, Martin D, Meacham BJ (2015) Assessing variability in engineer selection of tenability criteria. In: SFPE Europe Conference on Fire Safety Engineering. SFPE
Yamada T, Akizuki Y (2016) Visibility and Human Behavior in Fire Smoke. SFPE Handbook of Fire Protection Engineering. Springer, New York, NY, pp 2181–2206
Węgrzyński W, Vigne G (2017) Experimental and numerical evaluation of the influence of the soot yield on the visibility in smoke in CFD analysis. Fire Saf J. 91:389–398. https://doi.org/10.1016/j.firesaf.2017.03.053
Collins BL, Dahir MS, Madrzykowski D (1993) Visibility of exit signs in clear and smoky conditions. Fire Technol. 29:154–182. https://doi.org/10.1007/BF01038537
Yamada T, Kubota K, Abe N, Iida A (2004) Visibility of emergency exit signs and emergency lights through smoke. 560–569
Jin T (1978) Visibility through fire smoke. J Fire Flammabl 9:135–155
Jin T (1970) Visibility through Fire Smoke (I). Bull Fire Prev Soc Japan 19
Jin T (1971) Visibility through Fire Smoke (II). Bull Fire Prev Soc Japan 21
Zhang Q, Rubini PA (2011) Modelling of light extinction by soot particles. Fire Saf J 46:96–103
Mulholland GW, Croarkin C (2000) Specific extinction coefficient of flame generated smoke. Fire Mater 24:227–230. https://doi.org/10.1002/1099-1018(200009/10)24:5<227::AID-FAM742>3.0.CO;2-9
McGrattan K, Hostikka S, McDermott R, Floyd J, Weinschenk C, Overholt K (2017) Sixth edition fire dynamics simulator technical reference guide volume 1. Mathematical Model. 1:1–147
PIARC Technical Committee 3.3 Road Tunnel Operation (2008) Road tunnels: a guide to optimising the air quality impact upon the environment
Widmann JF (2003) Evaluation of the planck mean absorption coefficients for radiation transport through smoke. Combust Sci Technol. 175:2299–2308. https://doi.org/10.1080/714923279
Zhang Q (2010) Image based analysis of visibility in smoke laden environments, Department of Engineering, The University of Hull, PhD Thesis
Jin T (1974) Visibility through Fire Smoke (IV). Bull Fire Prev Soc Japan 22:1–8
Nebuloni R (2005) Empirical relationships between extinction coefficient and visibility in fog. Appl Opt. 44:3795–3804. https://doi.org/10.1364/AO.44.003795
Cashdollar KL, Lee CK, Singer JM (1979) Three-wavelength light transmission technique to measure smoke particle size and concentration. Appl Opt 18
Dobbins RA, Jizmagian SG (1966) Particle size measurements based on use of mean scattering cross sections. J Opt Soc Am 56:1351–1354
Mulholland GW (2008) Smoke Production and Properties. In: SFPE Handbook of Fire Protection Engineering, Fourth Edi. NFPA & SFPE. Quincy. MA
Dobbins RA, Jizmagian GS (1966) Optical Scattering Cross Sections for Polydispersions of Dielectric Spheres. J Opt Soc Am. 56:1345. https://doi.org/10.1364/JOSA.56.001345
Uthe EE (1982) Particle size evaluations using multiwavelength extinction measurements 21:6–11
Swanson NL, Billard BD, Gennaro TL (1999) Limits of optical transmission measurements with application to particle sizing techniques. Appl Opt 38:5887–5893
Aspey RA, Brazier KJ, Spencer JW (2005) Multiwavelength sensing of smoke using a polychromatic LED: Mie extinction characterization using HLS analysis. IEEE Sens J 5:1050–1056
Wilkens K, Van Hees P (2015) Obtaining additional smoke characteristics using multi-wavelength. In: Fire and Materials. San Francisco. USA
Lindholm J, Brink A, Hupa M (2009) Cone calorimeter – a tool for measuring heat release rate. Finnish-Swedish Flame Days 2009. 4B. https://doi.org/10.1002/fam
Chaudhry T, Moinuddin K (2017) Method of identifying burning material from its smoke using attenuation of light. Fire Saf J. 93:84–97. https://doi.org/10.1016/j.firesaf.2017.08.001
Arnold L, Belt A, Schultze T, Sichma L (2020) Spatiotemporal measurement of light extinction coefficients in compartment fires. Fire Mater fam. 2841:59–65. https://doi.org/10.1002/fam.2841
Evans D, Walton WD, Baum HR, Notarianni KA, Lawson JR, Tang HC, Keydel KR, Rehm RG, Madrzykowski D, Zile RH (1992) In-situ burning of oil spills: mesoscale experiments. Fifteenth Arct Mar Oil Spill Progr Tech Semin. 12:593–657
EN 54–7:2018 (2018) Fire detection and fire alarm systems. Smoke detectors. Point smoke detectors that operate using scattered light, transmitted light or ionization. CEN
Vigne G, Węgrzyński W (2016) Influence of variability of soot yield parameter in assessing the safe evacuation conditions in advanced modeling analysis. Results of physical and numerical modeling comparison. In: 11th Conference on Performance-Based Codes and Fire Safety Design Methods. SFPE
Węgrzyński W, Antosiewicz P, Burdzy T, Zimny M, Krasuski A (2019) Smoke obscuration measurements in reduced-scale fire modelling based on froude number similarity. Sensors. 19:3628. https://doi.org/10.3390/s19163628
Keller A, Loepfe M, Nebiker P, Pleisch R, Burtscher H (2006) On-line determination of the optical properties of particles produced by test fires. Fire Saf J. 41:266–273. https://doi.org/10.1016/j.firesaf.2005.10.001
Li Y, Chen D, Wang F, Yuan W, Zhang Q, Zhang Y (2013) An experimental study on the particle concentration and size distribution of smoke generated by flaming N-heptane. Appl Mech Mater. 391:61–65
This research was funded by the Building Research Institute statutory grant financed by the Ministry of Science and Higher Education, Grant number NZP-122/2018.
Fire Research Department, Building Research Institute (ITB), Filtrowa 1 St, 00-611, Warsaw, Poland
Wojciech Węgrzyński, Piotr Antosiewicz & Jadwiga Fangrat
Correspondence to Wojciech Węgrzyński.
Appendix A: List of System Components
The components of the system were:
20-channel data logger, type Graphtec GL840 Midi Logger;
Lasers with the following characteristics:
450 nm, 50 mW;
830 nm, 100 mW;
980 nm, 50 mW
Five photodiodes, OSRAM type BPX61 (IR PIN; TO5; 850 nm; 400–1100 nm; 55°; THT; 2 nA);
25 mm circular bandpass optical filters, with peak transmittance at 450 nm, 520 nm, 656.3 nm, 830 nm and 1000 nm.
Additionally, five photodiode amplifier systems were built using the following elements:
Modular switches;
Modular symmetric power supply unit (± 15 V) and a power supply unit 15 W, 12 V, 1.25 A (shared);
Symmetric transformer 2 × 15 V;
Microcircuit LM 725;
Potentiometer;
Resistors (100 Ω–100 kΩ);
Ceramic capacitors (100, 220, 500 pF);
An electrolytic capacitor (4.7 µF).
Appendix B: System Calibration
The densitometer calibration was performed following the procedure previously described in [31]. The calibration was performed on a thermally stabilized unit, at 26 °C. A calibration filter holder was built on the bench of the densitometer, as shown in Fig. 14. The structure allowed for secure fixing of the different filters and blocking of a particular beam of light. The filters used in this process were a set of four SCHOTT® optical filters:
NG11, d = 1.12 mm;
NG4, d = 1.08 mm;
NG4, d = 1.86 mm.
The transmittance of the filter at a given wavelength is known and shown in Fig. 15.
Frame for mounting the optical filters for the calibration process and the view of an example filter
Internal transmittance of the calibration filters, calculated with a tool provided by the manufacturer (https://www.schott.com/advanced_optics/)
For each measurement point, we compared the measured value of the transmittance (I/I0) with the expected value based on the filter characteristic (Fig. 16). For most of the calibration points, the measured error was less than 10%; for some points it exceeded 10%. Notable outliers are the points for the 520 nm and 658 nm wavelengths at a transmittance between 0.5 and 0.6, which had errors of 13.88% and 14.40%, respectively. These values of transmittance are at the expected lower end of the range of transmittance in the smoke layers measured in the test chamber. This observation was taken into account in the analysis of the results of the experiments.
Results of densitometer calibration. Dashed lines in the upper graph represent + 10%, 0 and -10% error
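A minimal sketch of the point-by-point comparison described above is given below. The exact error metric is not spelled out in the text, so a relative error with respect to the filter's nominal transmittance is assumed, and the numbers are illustrative rather than measured.

```python
import numpy as np

# Assumed definition: relative error between measured and filter-nominal
# transmittance; values within +/-10% pass the calibration check.
tau_expected = np.array([0.79, 0.55, 0.32])   # illustrative filter values
tau_measured = np.array([0.81, 0.62, 0.30])   # illustrative readings

rel_error = 100 * (tau_measured - tau_expected) / tau_expected
print(np.round(rel_error, 1))                 # error in percent
```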
Węgrzyński, W., Antosiewicz, P. & Fangrat, J. Multi-Wavelength Densitometer for Experimental Research on the Optical Characteristics of Smoke Layers. Fire Technol 57, 2683–2706 (2021). https://doi.org/10.1007/s10694-021-01139-5
Issue Date: September 2021
B. Parent • AE23815 Heat Transfer
Heat Transfer Assignment 7 — Free & Forced Flow
$\xi$ is a parameter related to your student ID, with $\xi_1$ corresponding to the last digit, $\xi_2$ to the last two digits, $\xi_3$ to the last three digits, etc. For instance, if your ID is 199225962, then $\xi_1=2$, $\xi_2=62$, $\xi_3=962$, $\xi_4=5962$, etc. Keep a copy of the assignment — the assignment will not be handed back to you. You must be capable of remembering the solutions you hand in.
A tube bank uses an in-line arrangement with $S_p=S_n=1.9$ cm and 6.33-mm-diameter tubes. The tube bank is 6 rows deep and $50$ tubes high. The surface temperature of the tubes is constant at $90^\circ$C, and air at a pressure of 1 atmosphere, a temperature of $20^\circ$C, and a speed of $4.5$ m/s is forced across them. Calculate the total heat transfer per unit length for the tube bank as well as the outlet temperature of the air.
Repeat the previous problem but with the tubes arranged in the "staggered" configuration with the same values of $S_p$ and $S_n$.
Consider a 1-m long copper cable with a diameter $D=1.6$ mm, an electrical resistivity of $R_{\rm c}=30 \times 10^{-9}\Omega$m and with an emissivity $\epsilon=0.5$. Air flows across the cable with a velocity $u_\infty=40$ m/s, a density $\rho_\infty=0.5$ kg/m$^3$, and a temperature $T_\infty=230$ K. If a voltage difference $\Delta V=3.4$V is applied to the cable extremities, do the following:
(a) Find the heat generated within the cable in Watt.
(b) Find the convective heat transfer coefficient in W/m$^2$K.
(c) Find the surface temperature of the cable in K.
You can assume that the film temperature is 400 K and use the following thermophysical data for air:
Matter $c_p,~{\rm J/kg^\circ C}$ $k,~{\rm W/m^\circ C}$ $\mu$, kg/ms
Air 1000 0.05 $10^{-5}$
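A quick numerical sanity check of part (a) only, as a sketch rather than a full solution; parts (b) and (c) additionally require a cylinder-in-crossflow correlation and a radiation balance, which are left to the assignment.

```python
import math

D, L = 1.6e-3, 1.0          # cable diameter [m] and length [m]
rho_e = 30e-9               # electrical resistivity [ohm*m]
dV = 3.4                    # applied voltage [V]

A = math.pi * D**2 / 4      # cross-sectional area [m^2]
R = rho_e * L / A           # electrical resistance [ohm]
P = dV**2 / R               # Joule heating [W]
print(round(R, 4), "ohm,", round(P), "W")   # approx. 0.015 ohm and 775 W
```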
You wish to cook some chicken optimally using a convection oven. A convection oven differs from standard ovens by blowing hot air at moderate speeds on the food. This results in the food being heated mostly through convective heat transfer rather than through radiation heat transfer. The chicken you wish to cook can be modeled as a solid sphere with a radius of 1 cm, a thermal conductivity of 0.5 W/m$^\circ$C, a density of 1000 kg/m$^3$, and a heat capacity of 3200 J/kg$^\circ$C. The convection oven blows hot air at atmospheric pressure, a temperature of $130^\circ$C and a speed of 5 m/s towards the chicken. The chicken is initially at a temperature of $5^\circ$C and stands on a grill through which the air can flow freely. You wish to cook the chicken optimally so that it is as tender as possible while being safe to eat. To be safe for eating, the temperature at any location within the chicken must have reached at least $70^\circ$C. To be as tender as possible, the chicken must not be overcooked and must therefore be taken out of the oven as soon as it is safe for eating. Knowing the latter, do the following:
(a) Find the most accurate possible average convective heat transfer coefficient over the chicken when in the oven.
(b) Using the average convective heat transfer coefficient found in (a), determine the amount of time the chicken should be left in the oven to be optimally cooked.
(c) Find the surface temperature of the chicken when it is taken out of the oven.
Consider a brick wall that is insulated on one side and exposed to radiation from the sun on the other side as follows:
For $H=1$ m, $L=0.1$ m, $T_\infty=27^\circ$C, $P_\infty=1$ atm, and an incoming radiation heat flux from the sun equal to $q^"_{\rm radin}=700$ W/m$^2$, and an emissivity factor of the bricks of $\epsilon=0.5$, do the following:
(a) Find the convective heat transfer coefficient due to free convection at $x=L$ and $y=H$.
(b) Find the surface temperature $T_{\rm s}$ at $x=L$ and $y=H$.
You can assume negligible heat transfer on the top surface of the wall and that the film temperature is equal to 300 K. Use the following thermophysical properties for the bricks and the air:
Matter $\rho$, kg/m$^3$ $c_p$, J/kgK $k$, W/mK $\mu$, kg/ms
Bricks 1600 840 0.7 --
Air -- 1000 0.02 $10^{-5}$
Consider a 0.01 m diameter sphere made of magnesium initially at a uniform temperature of 80$^\circ$C. The sphere is then immersed in a large pool of water with the water being still and at an initial temperature of 20$^\circ$C. Because of the gravitational force, the sphere accelerates towards the bottom of the pool and quickly reaches a constant velocity. Knowing that the drag coefficient of the sphere is of 1.1, do the following:
(a) When the sphere velocity becomes constant, find the velocity of the sphere with respect to the water.
(b) Find the temperature at the center of the sphere after a time of 2 seconds.
(c) Find the temperature on the surface of the sphere after a time of 2 seconds.
(d) Find the amount of energy (in Joules) lost by the sphere to the water after a time of 2 seconds.
Hints: (i) the buoyancy force is equal to the weight of the displaced fluid; (ii) the drag coefficient is equal to $C_D=F_{\rm drag}/ (\frac{1}{2} \rho_\infty u_\infty^2 A)$ with the frontal area $A=\pi R^2$ and $R$ the radius of the sphere.
Use the following data for magnesium and water:
Property Water Magnesium
$\rho$, kg/m$^3$ 1000 1700
$c$, kJ/kgK 4 1
$k$, W/m$^\circ$C 0.6 171
$\mu$, kg/ms 0.001 --
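A minimal sketch of the force balance suggested by the hints for part (a): at terminal velocity, the net weight (weight minus buoyancy) equals the drag force. Parts (b)-(d) additionally require a convection correlation for the sphere and a transient conduction analysis, which are not attempted here.

```python
import math

D = 0.01                          # sphere diameter [m]
rho_s, rho_w = 1700.0, 1000.0     # magnesium and water densities [kg/m^3]
C_D, g = 1.1, 9.81

R = D / 2
V = 4.0 / 3.0 * math.pi * R**3    # sphere volume [m^3]
A = math.pi * R**2                # frontal area [m^2]

net_weight = (rho_s - rho_w) * g * V              # weight minus buoyancy [N]
u = math.sqrt(net_weight / (0.5 * C_D * rho_w * A))
print(round(u, 3), "m/s")                         # terminal velocity, part (a)
```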
1. 54.9 kW/m, 30.6$^\circ$C.
3. 775 W, 415 W/m$^2$K, 595 K.
4. 61 W/m$^2$$^\circ$C, 256 s, 97$^\circ$ C.
5. 2.7 W/m$^2$$^\circ$C, 319 K.
6. 4436 W/m$^2$$^\circ$C, 51.1 J.
Due on Wednesday May 29th at 9:00. Do Questions #2, #5, and #6 only.
February 2013, 6(1): 215-233. doi: 10.3934/dcdss.2013.6.215
Thermalization of rate-independent processes by entropic regularization
T. J. Sullivan 1, , M. Koslowski 2, , F. Theil 3, and Michael Ortiz 4,
Applied & Computational Mathematics and Graduate Aerospace Laboratories, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125-9400, United States
School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN 47907-2088, United States
Mathematics Institute, University of Warwick, Coventry, CV4 7AL, United Kingdom
Graduate Aerospace Laboratories, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, United States
Received May 2011 Revised July 2011 Published October 2012
We consider the effective behaviour of a rate-independent process when it is placed in contact with a heat bath. The method used to ``thermalize'' the process is an interior-point entropic regularization of the Moreau--Yosida incremental formulation of the unperturbed process. It is shown that the heat bath destroys the rate independence in a controlled and deterministic way, and that the effective dynamics are those of a non-linear gradient descent in the original energetic potential with respect to a different and non-trivial effective dissipation potential.
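As a schematic illustration only (a paraphrase of this type of construction, not an equation quoted from the paper), the unperturbed Moreau–Yosida incremental step and an entropy-penalized relaxation over probability measures at heat-bath temperature $\theta>0$ can be written as
$$ x_{k+1} \in \underset{x}{\operatorname{arg\,min}} \left\{ E(t_{k+1},x) + \Psi(x - x_k) \right\}, $$
$$ \mu_{k+1} \in \underset{\mu}{\operatorname{arg\,min}} \left\{ \int \left( E(t_{k+1},x) + \Psi(x - x_k) \right) \mathrm{d}\mu(x) + \theta \int \log\frac{\mathrm{d}\mu}{\mathrm{d}x}\,\mathrm{d}\mu \right\}, $$
where $E$ is the energetic potential and $\Psi$ the dissipation potential; the second problem is minimized by a Gibbs-type measure, which is what "thermalizes" the otherwise rate-independent update.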
Keywords: non-linear evolution equations, gradient descent, thermodynamics.
Mathematics Subject Classification: Primary: 47J35, 82C3.
Citation: T. J. Sullivan, M. Koslowski, F. Theil, Michael Ortiz. Thermalization of rate-independent processes by entropic regularization. Discrete & Continuous Dynamical Systems - S, 2013, 6 (1) : 215-233. doi: 10.3934/dcdss.2013.6.215
Myco-metabolites as biological control agents against the two-spotted spider mite, Tetranychus urticae Koch (Acari: Tetranychidae)
Mohamed E. Osman1,
Amany A. Abo Elnasr1,
Mohamed A. Nawar2 &
Gohyza A. Hefnawy2
Fungi are a promising source of bioactive secondary metabolites against various agricultural pests. Soil samples were collected from the rhizosphere of various plants at El-Khatatba, Egypt, in May 2016. Sixty-two fungal isolates were obtained locally and screened against Tetranychus urticae Koch (Acari: Tetranychidae). Four fungal strains that showed potent control activity were identified morphologically. Laboratory evaluation of the crude extracts of the selected strains, Aspergillus melleus, A. terreus, Emericella nidulans, and Chaetomium globosum, gave LC50 values of 10.27, 33.05, 14.68, and 22.40 mg/ml against females of T. urticae, respectively. The corresponding LC50 values on eggs were 8.81, 23.17, 11.66, and 11.05 mg/ml. Consequently, the secondary metabolites of A. melleus were separated by liquid chromatography/mass spectrometry. The compounds separated from the active fraction were identified as mellamide, ochratoxin C, nodulisporic acid, 7-oxocurvularin, and 6-(4′-hydroxy-2′-methylphenoxy)-(−)-(3R)-mellein. The obtained secondary metabolites are promising candidates for biopesticides to be used in the bio-rational control of T. urticae.
Crop production decreases annually by 18–26% because of arthropod pests (Culliney 2014). The two-spotted spider mite, Tetranychus urticae Koch (Acari: Tetranychidae), can infest more than 200 plant species all over the world. Many reports have shown that T. urticae can cause severe damage to many agricultural crops such as vegetables, fruits, and ornamental plants (Fasulo and Denmark 2000). Management of T. urticae using synthetic acaricides has been widely applied (Attia et al. 2013). Because of the health and environmental hazards caused by chemical pesticides, in addition to their effects on non-target organisms such as predators, their use has been strictly regulated (Horikoshi et al. 2017). Moreover, acaricide resistance in T. urticae is increasing rapidly, since the species has an outstanding potential to develop such resistance. Therefore, the development of new acaricides with novel modes of action is increasingly needed (Marcic 2012).
Currently, many chemicals used in agriculture to control pests are originally derived from microbial metabolites (Horikoshi et al. 2017). Fungi and fungal metabolites exhibit high toxicity to insect and mite pests; however, they show low toxicity to non-target organisms (Ragavendran and Natarajan 2015). Researchers consider that fungi use their products, including mycotoxins, as chemical defenses against different targets, including insects and mites. For instance, mycotoxins such as aflatoxin B and trichothecenes exhibit toxicity to many insect pests (Srivastava et al. 2009). Consequently, microbial metabolites may be promising resources for the development of novel pesticides (Horikoshi et al. 2017).
Besides predatory mites, plant extracts, and essential oils, microbial metabolites may also be promising tools in the bio-rational control of T. urticae (Attia et al. 2013). For example, thuringiensin produced by Bacillus thuringiensis showed a potent acaricidal efficiency against T. urticae, T. cinnabarinus (Neal Jr et al. 1987), and Panonychus ulmi Koch (Vargas Mesina 1993). Pseudomonas fluorescens exhibited a strong efficacy against adults of Oligonychus coffeae Nietner under laboratory conditions (Roobakkumar et al. 2011). Also, an actinomycete-derived product, abamectin (avermectin), produced by Streptomyces avermitilis, demonstrated highly toxic effects on T. cinnabarinus (Wu and Liu 1997), T. urticae, Phyllocoptruta oleivora, Panonychus citri, and T. turkestani Ugarov and Nikolski (Putter et al. 1981).
The present study aimed to evaluate the control potential of the ethyl acetate extracts produced by different fungal strains against T. urticae under laboratory conditions. In addition, the active compound(s) produced by the most effective strain were to be separated, purified, and identified.
All experiments were conducted at the laboratories of Plant Protection Department, Desert Research Centre, Egypt (2016–2018).
Collection of soil samples
Soil samples were collected from El-Khatatba, Egypt, in May 2016. Plants in different fields were examined with the aid of a ×10 magnification lens for spider mite infestation. Samples were collected from the rhizosphere of the spider mite-infested plants at a depth of 10–15 cm below the soil surface. Soil samples (500 g each) were placed in sterilized polyethylene bags and stored at 4 °C until use.
Isolation, purification, and identification of fungi by cultural and morphological methods
Isolation of soil fungi was performed by the serial dilution method. Two types of culture media were used for the isolation of fungi: potato dextrose agar (PDA) and Czapek's Dox agar supplemented with yeast extract (5 g/l). All media were supplemented with the antibiotic chloramphenicol (25 mg/l). Isolation of fungi was carried out from the 10−2 and 10−3 dilutions: 0.5 ml of each dilution was placed into Petri dishes with solid media, spread using a sterilized glass spreader, and left for 30 min before incubation at 28 °C for 4–7 days. Each morphologically distinct fungal colony was sub-cultured and purified using standard techniques.
Primary screening of fungal isolates against T. urticae
Sixty-two fungal isolates were screened for their control activity by testing their culture filtrates against adult spider mite females. Four discs (1 cm) were cut from a fresh culture of each fungal isolate and inoculated into 100 ml of potato dextrose broth. The broth cultures were incubated in a shaking incubator at 28 °C and 150 rpm for 7 days. The fungal cultures were filtered using sterilized filter papers and centrifuged. Finally, the filtrates were kept in a refrigerator at 4 °C and used within a week to avoid contamination or alteration of the metabolites. The bioassay of the fungal filtrates on spider mites was carried out by the leaf-dipping method. Three mulberry leaf discs (25 mm) were immersed for 5 s in each fungal filtrate and dried at room temperature. The treated leaf discs were placed on wetted cotton wool in Petri dishes, each lined with a cotton lining to prevent the mites from escaping. Ten females were transferred to each leaf disc using a fine paintbrush and incubated at 25 °C. The control discs were immersed in potato dextrose broth, and mortality rates were recorded after 5 days.
Selection and identification of the most potent isolates
Four isolates (KF23, KF45, KF40, and KF9) out of the 62 were selected for further investigation, as they achieved > 50% mortality of T. urticae. The cultures were grown on Czapek's yeast extract agar and incubated at 28 °C for 7–10 days. The identification of the most potent isolates was carried out on a morphological basis, using culture characteristics, e.g., growth rate, color, and pigmentation, as well as microscopic features, e.g., conidiophores, conidia, production of sclerotia, and the dimensions of the different microscopic fungal structures. The results were confirmed by the scientists of the Mycological Centre, Assiut University, Egypt.
Effect of crude extract of the selected isolates on adult females of T. urticae
Twelve plugs of a freshly prepared culture of each fungal isolate were inoculated into 300 ml of potato dextrose broth and incubated in a shaking incubator at 28 °C and 150 rpm for 10 days. After the incubation period, each fungal culture was filtered and extracted three times with ethyl acetate (1:1). The crude extract was dried using a rotary evaporator and stored in a freezer until use. Four concentrations (5, 10, 20, 30 mg/ml) were prepared from the crude extract of each fungal isolate. The bioassay of each extract was undertaken using the leaf-dipping method. Leaf discs of 25 mm were immersed in each of the four concentrations, while another group of discs was dipped in ethanol and used as a control; all were left to dry at room temperature. The treated leaf discs were placed on wetted cotton in Petri dishes (9 cm) lined with a cotton lining to prevent the spider mites from escaping; then, 20 females were transferred to each disc and incubated at 25 °C. Mortality rates were recorded after 3 days. Each concentration was replicated three times.
Effect of crude extract of the selected isolates on eggs of T. urticae
The ovicidal activity of the previously prepared concentrations of the ethyl acetate extracts of the selected isolates was tested as follows: 10 females were placed on mulberry leaf discs (25 mm) prepared in 9 cm Petri dishes to obtain same-aged eggs for the tests. Twenty-four hours later, the females were removed from the discs, the eggs were counted, and three leaf discs carrying eggs were dipped into each concentration. The control discs were dipped in 70% ethanol and left to dry. Observations continued daily until the eggs in the control group hatched.
Confirmation of the most potent strain by DNA sequencing
The strain was cultured on PDA overlaid with cellophane at 27 °C for 3–5 days. Total genomic DNA (deoxyribonucleic acid) was extracted directly from the fungal mycelia of strain KF23 using the Quick-DNA™ Fungal/Bacterial Microprep Kit (Zymo Research #D6007). The internal transcribed spacer region was amplified using the universal primer pair ITS1 (5′-CTTGGTCATTTAGAGGAAGTAA-3′) and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) (White et al. 1990) in a 50-μl reaction mixture comprising 5 μl of genomic DNA, 1 μl of each primer, 25 μl of Maxima Hot Start PCR Master Mix (Thermo K1051), and 18 μl of sterile distilled water. The polymerase chain reaction (PCR) was carried out with the following parameters: an initial denaturation at 95 °C (10 min), followed by 35 cycles of denaturation at 95 °C (30 s), annealing at 57 °C (1 min), and extension at 72 °C (1 min 30 s). A final elongation step was performed at 72 °C for 10 min. The PCR products were purified using the GeneJET™ PCR Purification Kit (Thermo K0701), according to the manufacturer's instructions. Sequencing was done at the GATC Company, Germany, using an ABI 3730xl DNA sequencer with the forward and reverse primers. The sequence obtained was loaded into Geneious Pro 8.9 software and compared to the ITS sequences available in GenBank by BLAST analysis. Finally, a phylogenetic tree was constructed to establish the taxonomic rank of the fungus.
Separation, purification, and identification of acaricidal secondary metabolites produced by A. melleus
Preparation of extracellular crude extract from A. melleus
Twenty-five Erlenmeyer flasks (250 ml), each containing 100 ml of PDB medium supplemented with sucrose 3% (w/v) and ammonium sulfate 0.75% (w/v) at an initial pH of 4, were incubated at 30 °C for 6 days under static conditions. After the incubation period, the culture was collected and filtered through muslin cloth; the clear filtrate was then collected and extracted three times with ethyl acetate (1:1). The extraction solvent was evaporated under reduced pressure at 40 °C and the crude extract was collected.
Separation of active compounds by thin-layer chromatography
Aluminium sheets (20 × 20 cm) pre-coated with silica gel 60 F254 (Merck, Germany) were used to separate the active metabolites. The crude extract was applied in the form of bands 1 cm from the bottom of the plate using a capillary tube. Subsequently, the thin-layer chromatography (TLC) plate was placed in a glass jar previously saturated with a developing solvent system of toluene:ethyl acetate:formic acid (6:3:1, v/v). The glass jar was covered tightly. The developing system was left to rise on the TLC plate until it reached the solvent front. The developed chromatogram was examined visually and observed under short- (254 nm) and long-wavelength (366 nm) ultraviolet (UV) light. The retention factor (Rf) values of the separated spots were then determined. For the detection of the bioactive compounds, each band was scraped off the plate, eluted with methanol, and filtered using filter paper. The LC50 concentration was prepared from each band and assayed against T. urticae.
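For reference, the retention factor of each spot is the ratio of the distance travelled by the compound to the distance travelled by the solvent front, measured from the application line:
$$ R_f=\frac{\text{distance travelled by the compound}}{\text{distance travelled by the solvent front}} $$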
Separation and identification of the most active compound/s
Liquid chromatography–tandem mass spectrometry (LC-MS/MS) was used to separate the active compounds in the most active band. The band (Rf3) was subjected to identification on an Agilent LC-MS/MS system (Agilent Technologies, USA) consisting of a C18 column (model G1316A) equipped with a diode array detector (model G1315D) and coupled to a 6420 Triple Quadrupole mass spectrometer fitted with an electrospray ionization (ESI) source operating in positive ionization mode. The mobile phase consisted of 5 mM ammonium formate containing 0.1% formic acid (solvent A) and acetonitrile (solvent B). A total of 5 μl of the sample was injected. The gradient elution program for the LC analysis was applied as follows: 0–2 min, 40% B; 2–12 min, 40–85% B; 12–15 min, 85–90% B; 15–35 min, 90–95% B, at a flow rate of 0.3 ml min−1; the column temperature was set to 35 °C. The MS analysis used full-scan mode with the mass range set to 100–1200 m/z in positive mode. The conditions of the ESI source were as follows: the drying gas was high-purity nitrogen (N2), the drying gas temperature was set to 350 °C, the drying gas flow rate was 11 l/min, the nebulizing gas (N2) was set at a pressure of 45 psi, a potential of 3500 V (positive ionization mode) was applied to the tip of the capillary, and the fragmentation voltage was 130 V. Acquisition and data analysis were controlled using the Agilent LC-MS software (Agilent, USA). The identification of the separated compounds was carried out by comparing their retention times and the mass spectra provided by ESI-MS with data obtained from previous studies carried out on the same species, A. melleus, as well as with those of authentic standards when available.
Mortality was corrected for control mortality according to Abbott's formula (Abbott 1925):
$$ \mathrm{Corrected}\ \mathrm{mortality}=\frac{T-C}{100-C}\times 100 $$
where T is the mortality (%) in the treatment and C is the mortality (%) in the control.
The data obtained from each dose–response bioassay were subjected to probit analysis (Finney 1971) to estimate the LC50 values. For the mortality and ovicidal activity experiments, the numbers of dead mites and unhatched eggs were counted and analyzed by one-way analysis of variance (ANOVA) (P = 0.05). Means were compared by Duncan's test (Duncan 1955). Mortality percentages were calculated and corrected according to Abbott's formula (Abbott 1925).
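As an illustration of the two statistical steps described above, the following sketch applies Abbott's correction and a simple empirical-probit fit on log10(dose) to estimate an LC50. The dose–response values are assumed for illustration only, and the study itself used Finney's full probit analysis rather than this simplified least-squares fit.

```python
import numpy as np
from scipy.stats import norm

def abbott(treated_mortality, control_mortality):
    """Abbott-corrected mortality; inputs and output as fractions (0-1)."""
    return (treated_mortality - control_mortality) / (1 - control_mortality)

# Illustrative corrected dose-response data, not the measured values.
doses = np.array([5.0, 10.0, 20.0, 30.0])          # mg/ml
mortality = np.array([0.25, 0.48, 0.73, 0.92])     # corrected mortality

# Finney-style fit: empirical probits (normal quantiles) vs log10(dose);
# the LC50 is the dose at which the fitted probit crosses zero (50% response).
probits = norm.ppf(mortality)
slope, intercept = np.polyfit(np.log10(doses), probits, 1)
lc50 = 10 ** (-intercept / slope)
print(round(lc50, 2), "mg/ml")
```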
Isolation and primary assay of fungi
Sixty-two fungal strains were isolated from the rhizosphere of different plants and tested for their control activity against females of T. urticae. Thirty-four isolates showed insignificant efficiency, while 28 showed considerable control activity against the spider mite (Table 1). Among the effective isolates, KF23, KF40, KF45, and KF9, which showed the highest mortality rates, were selected for further experiments. The culture filtrates of KF23, KF9, KF40, and KF45 resulted in mortality rates of 53.69, 73.83, 50.34, and 67.11%, respectively, 5 days after exposure. Similarly, the culture filtrates of B. bassiana and M. anisopliae highly reduced mite populations (Yun et al. 2017). The obtained results were relatively higher than the mortality rates of the HtCRMB isolate of Hirsutella thompsonii Fisher, which exhibited only 55.90% mortality of T. urticae and the citrus rust mite, Phyllocoptruta oleivora Ashmead, 9 days after application (Aghajanzadeh et al. 2006). Mehdi (2006) studied the effect of culture filtrates of some fungal isolates and found that the highest contact toxicities were 52.4, 52.4, 48.4, and 50.4%, exhibited by Alternaria alternata, A. terreus, Trichoderma viride, and Eurotium eurotiorum, while A. pluriseptata, Stachybotrys atra, Trichoderma harzianum, T. koningii, and Ulocladium chartarum showed mortality rates of 42.5, 42.5, 46.5, 43.5, and 39.6%, respectively.
Table 1 Effect of culture filtrate of potent isolates on the females of Tetranychus urticae
Morphological identification of the most effective isolates
Isolate KF23, grown on Czapek's agar medium, formed colonies covered with a dense layer of sclerotia, with conidiophores absent or produced sparingly in the center; conidial structures were more abundant in older cultures. Sclerotia were yellow to brown, hard, and slightly rounded. This isolate was confirmed as Aspergillus melleus Yukawa at the Mycological Centre, Faculty of Science, Assiut University, Egypt. Isolate KF40, grown on Czapek's medium, formed a dark orange-yellow colony with a yellow reverse that attained a diameter of 31 mm after 7 days at 25 °C; it was identified as A. terreus according to Raper and Fennell (1965). Isolate KF9, grown on potato dextrose agar medium, produced a flattened, black-gray colony with white, lobed edges and a dark yellow reverse. Growth was relatively slow and characterized by an abundance of olive-green ascocarps with ascomal hairs; spores were pale brown and lemon-shaped. These characters match well with Chaetomium globosum (Domsch et al. 1980). Isolate KF45 grew rapidly on Czapek's agar, forming a green colony with a purple shade. Cleistothecia were scattered over the colony, green-colored, and globose. Microscopic examination showed orange ascospores with two equatorial crests, in addition to the presence of globose Hülle cells. Accordingly, isolate KF45 was identified as Emericella nidulans (Raper and Fennell 1965).
Effect of crude extract of the selected fungi on adults of T. urticae
Mortality rates exhibited by the ethyl acetate extract of each of the selected isolates are shown in Table 2. The LC50 values on females exhibited by A. melleus, C. globosum, A. terreus, and E. nidulans were 10.27, 22.40, 33.05, and 14.68 mg/ml, respectively. There appear to be no previous reports on the acaricidal activity of A. melleus or E. nidulans. The present results showed that the obtained isolates possessed a strong, concentration-dependent acaricidal efficacy: at 30 mg/ml, 91.67% and 81.67% of mites were killed by A. melleus and E. nidulans, respectively, while C. globosum and A. terreus caused 76.67% and 56.67% mortality at the same concentration. Similarly, the crude extract of Hypocrella raciborskii Zimm. at a concentration of 3% (w/v) showed a residual toxicity of 80% 3 days after application (Buttachon and Kijjoa 2013). Santamarina et al. (1987) found that the crude extract (10 mg/ml) of P. funiculosum caused 100% mortality in a population of Panonychus ulmi on the second day after treatment. Similar results were obtained by Jimenez et al. (1993), who tested P. funiculosum on adult females of T. urticae and recorded a mortality rate of 100% after 3 days.
Table 2 Effect of ethyl acetate extract on females of Tetranychus urticae (after 72 h exposure)
Effect of crude extracts of the selected fungi on eggs of T. urticae
LC50 values on eggs were 8.81, 11.05, 23.17, and 11.66 mg/ml when eggs of the spider mite were exposed to crude extracts of A. melleus, C. globosum, A. terreus, and E. nidulans, respectively (Table 3). Interestingly, the evaluated isolates showed a higher toxicity towards eggs than towards adults of T. urticae. Crude extracts (30 mg/ml) of A. melleus, E. nidulans, and C. globosum completely inhibited egg hatchability. Similarly, the fungal filtrates studied by Mehdi (2006) had a more toxic action on eggs of T. urticae than on adults; he also found that filtrates of Alternaria alternata, A. pluriseptata, Aspergillus terreus, Trichoderma harzianum, T. koningii, T. viride, Eurotium eurotiorum, Ulocladium chartarum, and Stachybotrys atra inhibited hatchability by an average of 65.5, 58.3, 65.5, 60.6, 60.6, 54.1, 40, 52, 49.1, and 49.1%, respectively. Santamarina et al. (1987) reported that the crude extract (10 mg/ml) of P. funiculosum caused a 52.17% reduction in egg hatching of Panonychus ulmi.
Table 3 Effect of ethyl acetate extract on egg hatchability of Tetranychus urticae
Identification of A. melleus
A. melleus showed the minimum LC50 value, so it was selected for further investigation. The identification of A. melleus was confirmed by DNA sequencing. The ITS sequence of fungal isolate KF23 was aligned against the National Center for Biotechnology Information (NCBI) database. The phylogenetic analysis showed that KF23 most closely resembled A. melleus strain CBS 112786 (Accession No. FJ491567), with 99% homology (Fig. 1).
Phylogenetic tree of the ITS region sequences of the fungal isolate (Aspergillus melleus KF23) aligned with closely related sequences retrieved from GenBank. Branches show bootstrap values (%) (random seed 226,649; number of replicates = 100); the scale bar indicates nucleotide substitutions per site
Identification of active metabolites of A. melleus
Partial separation of the active compounds in the A. melleus crude extract by TLC resulted in five major bands. Four of them were visualized under long-wavelength UV at 366 nm and one under short-wavelength UV at 254 nm: a fluorescent blue band at Rf1 = 0.97, a fluorescent green band at Rf2 = 0.72, a yellow band at Rf3 = 0.55, a brown band at Rf4 = 0.43, and a blue band at Rf5 = 0.10. Each of the five bands was eluted with methanol and subjected to a toxicity bioassay against females of T. urticae. The mortality rates are shown in Fig. 2. The Rf3 band exhibited the highest mortality (57.05%), followed by Rf1 (28.52%), Rf5 (11.74%), Rf2 (10.07%), and Rf4 (8.39%). Therefore, it was selected for further purification and identification by LC-MS/MS.
Lethal activity of TLC fractions of the ethyl acetate extract of A. melleus on T. urticae. Error bars correspond to the standard error. Different letters (a and b) indicate a significant difference according to Duncan's multiple range test
Liquid chromatography/mass spectrometry
The metabolites in the active fraction separated by TLC (Rf3 = 0.55) were analyzed by LC-MS/MS and identified by comparison with authentic compounds. From the LC/MS data, 13 compounds were detected (Fig. 3). Only five compounds were identified; the others require further study for identification.
Chromatogram shows the peaks separated by LC from the band (Rf3)
The first identified compound, which appeared in the liquid chromatography profile at a retention time (Rt) of 4.854 min, had a molecular mass of 306 deduced from ESI-MS m/z = 306 [M+]; it might be 7-oxocurvularin (Fig. 4), with the molecular formula C16H18O6. The second identified compound appeared at Rt 10.399 min with a molecular mass of 679 deduced from m/z = 679 [M+] and might be nodulisporic acid, with a fragment ion at m/z = 702 [M + Na] deduced from m/z = 701 [M − 1] (Fig. 5); the molecular formula of nodulisporic acid is C43H53NO6. The third peak, detected at Rt 15.298 min with a molecular weight of 381 deduced from m/z = 382 [M + 1], might be mellamide (Fig. 6), which has the molecular formula C23H31N3O2. The fourth identified peak, at Rt 18.025 min with a molecular weight of 431 deduced from m/z = 431 and a fragment ion at m/z = 358 (Fig. 7), might be ochratoxin C, with the molecular formula C22H22NO6Cl. The last identified compound appeared at Rt 21.243 min with a molecular mass of 301 deduced from m/z = 300 [M − 1] and fragment ions at m/z = 258 [M + 2], 149 [M − 1], and 121 [M − 3] (Fig. 8); this peak might be 6-(4′-hydroxy-2′-methyl phenoxy)-(−)-(3R)-mellein (C19H24O3). Further investigations are required to characterize the other eight compounds. It has been reported that A. melleus produces a wide range of polyketides (Garson et al. 1984) and naphthoquinone pigments such as xanthomegnin, viomellein, and viopurpurin (Durley et al. 1975). Similarly, Ondeyka et al. (2003) reported insecticidal activity of mellamide, xanthomegnin, viomellein, and ochratoxin A isolated from A. melleus against Aedes aegypti. Also, A. melleus produces aspyrone, which exhibited 80% nematicidal activity against Pratylenchus penetrans at a concentration of 300 mg/l (Kimura et al. 1996).
Mass chart shows the fragment ions of 7-oxocurvularin
Mass chart shows the fragment ions of nodulisporic acid
Mass chart shows the fragment ions of mellamide
Mass chart shows the fragment ions of ochratoxin C
Mass chart shows the fragment ions of 6-(4′-hydroxy-2′-methyl phenoxy)-(−)-(3R)-mellein
A. melleus, E. nidulans, C. globosum, and A. terreus showed promising acaricidal activities against females and eggs of the two-spotted spider mite, T. urticae. Further investigations are needed to identify their bioactive secondary metabolites and to evaluate their efficacy in greenhouse and field trials.
The data and materials of this manuscript are available upon reasonable request.
ANOVA: Analysis of variance
ESI: Electrospray ionization
ITS: Internal transcribed spacer
LC50: Median lethal concentration
LC-MS/MS: Liquid chromatography–tandem mass spectrometry
NCBI: National Center for Biotechnology Information
PCR: Polymerase chain reaction
PDA: Potato dextrose agar
Rf: Retention factor
rpm: Revolutions per minute
TLC: Thin layer chromatography
UV: Ultraviolet
Abbott WS (1925) A method of computing the effectiveness of an insecticide. J Econ Entomol 18(2):265–267
Aghajanzadeh S, Mallik B, Chandrashekar SC (2006) Toxicity of culture filtrate of Hirsutella thompsonii Fisher against citrus rust mite, Phyllocoptruta oleivora Ashmead (Acari: Eriophyidae) and two spotted spider mite, Tetranychus urticae Koch (Acari: Tetranychidae). Int J Agric Biol 8(2):276–279
Attia S, Grissa KL, Lognay G, Bitume E, Hance T, Mailleux AC (2013) A review of the major biological approaches to control the worldwide pest Tetranychus urticae (Acari: Tetranychidae) with special reference to natural pesticides. J Pest Sci 86(3):361–386
Buttachon S, Kijjoa A (2013) Acaricidal activity of Hypocrella raciborskii Zimm. (Hypocreales: Clavicipitaceae) crude extract and some pure compounds on Tetranychus urticae Koch (Acari: Tetranychidae). Afr J Microbiol Res 7(7):577–585
Culliney TW (2014) Crop losses to arthropods. In: Pimentel D, Peshin R (eds) Pesticide problems. Integrated pest management, vol 3. Springer, Dordrecht, pp 201–225
Domsch KH, Gams W, Anderson TH (1980) Compendium of soil fungi. Academic, London
Duncan DB (1955) Multiple range and multiple F-tests. Biometrics 11(1):1–42
Durley RC, MacMillan J, Simpson TJ, Glen AT, Turner WB (1975) Fungal products. Part XIII. Xanthomegnin, viomellein, rubrosulphin, and viopurpurin, pigments from Aspergillus sulphureus and Aspergillus melleus. J Chem Soc Perkin Trans 1(2):163–169
Fasulo TR, Denmark HA (2000) Two spotted spider mite, Tetranychus urticae Koch (Arachnida: Acari: Tetranychidae). University of Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, EDIS http://entnemdept.ufl.edu/creatures/orn/twospotted_mite.htm
Finney DJ (1971) Probit Analysis, 3rd edn. Cambridge University Press, London
Garson MJ, Staunton J, Jones PG (1984) New polyketide metabolites from Aspergillus melleus: structural and stereochemical studies. J Chem Soc Perkin Trans 1:1021–1026
Horikoshi R, Goto K, Mitomi M, Oyama K, Sunazuka T, Ōmura S (2017) Identification of pyripyropene A as a promising insecticidal compound in a microbial metabolite screening. J Antibiot 70(3):272
Jimenez M, Atienza J, Hernandez E (1993) Bioproduction of an extract from Penicillium funiculosum Thom with activity against Ceratitis capitata and Tetranychus urticae. Appl Microbiol Biotechnol 39(4–5):615–616
Kimura Y, Nakahara S, Fujioka S (1996) Aspyrone, a nematicidal compound isolated from the fungus, Aspergillus melleus. Biosci Biotechnol Biochem 60(8):1375–1376
Marcic D (2012) Acaricides in modern management of plant-feeding mites. J Pest Sci 85(4):395–408
Mehdi HMR (2006) The effect of some fungi in biocontrol in two spotted spider mite Tetranychus urticae (Koch.) (Tetranychidae: Acari). J Basrah Res (Sciences) 32(2B):12–26
Neal JW Jr, Lindquist RK, Gott KM, Casey ML (1987) Activity of the thermostable β-exotoxin of Bacillus thuringiensis Berliner on Tetranychus urticae and T. cinnabarinus. J Agric Entomol 4(1):33–40
Ondeyka JG, Dombrowski AW, Polishook JP, Felcetto T, Shoop WL, Guan Z, Singh SB (2003) Isolation and insecticidal activity of mellamide from Aspergillus melleus. J Ind Microbiol Biotechnol 30(4):220–224
Putter I, Mac Connell JG, Preiser FA, Haidri AA, Ristich SS, Dybas RA (1981) Avermectins: novel insecticides, acaricides and nematicides from a soil microorganism. Experientia 37(9):963–964
Ragavendran C, Natarajan D (2015) Insecticidal potency of Aspergillus terreus against larvae and pupae of three mosquito species Anopheles stephensi, Culex quinquefasciatus, and Aedes aegypti. Environ Sci Pollut Res 22(21):17224–17237
Raper KB, Fennell DI (1965) The genus Aspergillus. The Williams & Wilkins Company, Baltimore, 686 pp
Roobakkumar A, Babu A, Kumar DV, Sarkar S (2011) Pseudomonas fluorescens as an efficient entomopathogen against Oligonychus coffeae Nietner (Acari: Tetranychidae) infesting tea. J Entomol Nematol 3(5):73–77
Santamarina MP, Jimenez M, Sanchis V, Garcia F, Hernandez E (1987) A strain of Penicillium funiculosum Thorn with activity against Panonychus ulmi Koch (Acar.: Tetranychidae). J Appl Entomol 103(1–5):471–476
Srivastava CN, Maurya P, Sharma P, Mohan L (2009) Prospective role of insecticides of fungal origin. Entomol Res 39(6):341–355
Vargas Mesina RR (1993) Thuringiensin toxicity to Tetranychus urticae Koch and Panonychus ulmi (Koch) (Tetranychidae) and effects on cuticle development of immature stages of T. urticae. Ph.D. Dissertation, Lincoln University, New Zealand
White TJ, Bruns T, Lee S, Taylor J (1990) Amplification and direct sequencing of fungal ribosomal RNA gene for phylogenetics. In: Innis MA, Gelfand DH, Sninsky JJ, White TJ (eds) PCR protocols: a guide to methods and applications. Academic, San Diego, pp 315–322
Wu Y, Liu X (1997) Toxicity and biological effect of abamectin on Tetranychus cinnabarinus in lab. Acta Agric Boreali Sin 12(1):108–111
Yun HG, Kim DJ, Lee JH, Ma JI, Gwak WS, Woo SD (2017) Comparative evaluation of conidia, blastospores and culture filtrates from entomopathogenic fungi against Tetranychus urticae. Int J Ind Entomol 35(1):58–62
Thanks are due to Prof. Mohamed Abdelaziz Balah, Plant Protection Department, Desert Research Center, for the interpretation of LC-MS/MS data and identification of the separated compounds.
There are no funding sources for this manuscript.
Department of Botany and Microbiology, Faculty of Science, Helwan University, Cairo, Egypt
Mohamed E. Osman
& Amany A. Abo Elnasr
Department of Plant Protection, Desert Research Centre, Cairo, Egypt
Mohamed A. Nawar
& Gohyza A. Hefnawy
The conception and design of the study were done by all authors; the last author (GAH) performed the experimental part under the supervision of MEO, AAAE, and MAN. The analysis and interpretation of the data were done by the first author (MEO) and AAAE. Both MAN and GAH prepared the manuscript draft; then the manuscript was revised for important intellectual content by MEO and AAAE. All authors read and approved the final manuscript.
Correspondence to Gohyza A. Hefnawy.
This article does not contain any studies with human participants or animals.
The manuscript has not been published, completely or in part, elsewhere.
Osman, M.E., Abo Elnasr, A.A., Nawar, M.A. et al. Myco-metabolites as biological control agents against the two-spotted spider mite, Tetranychus urticae Koch (Acari: Tetranychidae). Egypt J Biol Pest Control 29, 64 (2019). https://doi.org/10.1186/s41938-019-0166-0
Tetranychus urticae
Fungal metabolites
Aspergillus melleus
|
CommonCrawl
|
Charged sleptons
This section contains limits on charged scalar leptons (${{\widetilde{\mathit \ell}}}$, with ${{\mathit \ell}}={{\mathit e}},{{\mathit \mu}},{{\mathit \tau}}$). Studies of the width and decays of the ${{\mathit Z}}$ boson (use is made here of $\Delta \Gamma _{{\mathrm {inv}}}<2.0~$MeV, LEP 2000) conclusively rule out ${\mathit m}_{{{\widetilde{\mathit \ell}}_{{R}}}}<40~$GeV (41 GeV for ${{\widetilde{\mathit \ell}}_{{L}}}$), independently of decay modes, for each individual slepton. The limits improve to $43~$GeV ($43.5$ GeV for ${{\widetilde{\mathit \ell}}_{{L}}}$) assuming all 3 flavors to be degenerate. Limits on higher mass sleptons depend on model assumptions and on the mass splitting $\Delta \mathit m$ = ${\mathit m}_{{{\widetilde{\mathit \ell}}}}-{\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$. The mass and composition of ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ may affect the selectron production rate in ${{\mathit e}^{+}}{{\mathit e}^{-}}$ collisions through ${{\mathit t}}$-channel exchange diagrams. Production rates are also affected by the potentially large mixing angle of the lightest mass eigenstate ${{\widetilde{\mathit \ell}}_{{1}}}={{\widetilde{\mathit \ell}}_{{R}}}$ sin$\theta _{{{\mathit \ell}}}$ + ${{\widetilde{\mathit \ell}}_{{L}}}$ cos$\theta _{{{\mathit \ell}}}$. It is generally assumed that only ${{\widetilde{\mathit \tau}}}$ may have significant mixing. The coupling to the ${{\mathit Z}}$ vanishes for $\theta _{{{\mathit \ell}}}$ = 0.82. In the high-energy limit of ${{\mathit e}^{+}}{{\mathit e}^{-}}$ collisions the interference between ${{\mathit \gamma}}$ and ${{\mathit Z}}$ exchange leads to a minimal cross section for $\theta _{{{\mathit \ell}}}$ = 0.91, a value which is sometimes used in the following entries relative to data taken at LEP2. When limits on ${\mathit m}_{{{\widetilde{\mathit \ell}}_{{R}}}}$ are quoted, it is understood that limits on ${\mathit m}_{{{\widetilde{\mathit \ell}}_{{L}}}}$ are usually at least as strong.
Possibly open decays involving gauginos other than ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ will affect the detection efficiencies. Unless otherwise stated, the limits presented here result from the study of ${{\widetilde{\mathit \ell}}^{+}}{{\widetilde{\mathit \ell}}^{-}}$ production, with production rates and decay properties derived from the MSSM. Limits made obsolete by the recent analyses of ${{\mathit e}^{+}}{{\mathit e}^{-}}$ collisions at high energies can be found in previous Editions of this Review.
For decays with final state gravitinos (${{\widetilde{\mathit G}}}$), ${\mathit m}_{{{\widetilde{\mathit G}}}}$ is assumed to be negligible relative to all other masses.
R-parity violating ${{\widetilde{\boldsymbol \tau}}}$ (Stau) mass limit
VALUE (GeV) | CL% | DOCUMENT ID | TECN | COMMENT
 | | 1 AABOUD 2018Z | ATLS | ${}\geq{}4{{\mathit \ell}}$, RPV, ${{\mathit \lambda}_{{12k}}}{}\not=$0, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ = 600 GeV (mass-degenerate left-handed sleptons and sneutrinos of all 3 generations)
 | | 1 AABOUD 2018Z | ATLS | ${}\geq{}4{{\mathit \ell}}$, RPV, ${{\mathit \lambda}_{{i33}}}{}\not=$0, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ = 300 GeV (mass-degenerate left-handed sleptons and sneutrinos of all 3 generations)
$> 74$ | 95 | 2 ABBIENDI 2004F | OPAL | RPV, ${{\widetilde{\mathit \tau}}_{{L}}}$
 | | 3 ABDALLAH 2004M | DLPH | RPV, ${{\widetilde{\mathit \tau}}_{{R}}}$, indirect, $\Delta \mathit m>$5 GeV
1 AABOUD 2018Z searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events containing four or more charged leptons (electrons, muons and up to two hadronically decaying taus). No significant deviation from the expected SM background is observed. Limits are set on the Higgsino mass in simplified models of general gauge mediated supersymmetry Tn1n1A/Tn1n1B/Tn1n1C, see their Figure 9. Limits are also set on the wino, slepton, sneutrino and gluino mass in a simplified model of NLSP pair production with R-parity violating decays of the LSP via ${{\mathit \lambda}_{{12k}}}$ or ${{\mathit \lambda}_{{i33}}}$ to charged leptons, see their Figures 7, 8.
2 ABBIENDI 2004F use data from $\sqrt {s }$ = $189 - 209$~GeV. They derive limits on sparticle masses under the assumption of RPV with ${{\mathit L}}{{\mathit L}}{{\overline{\mathit E}}}$ or ${{\mathit L}}{{\mathit Q}}{{\overline{\mathit D}}}$ couplings. The results are valid for tan ${{\mathit \beta}}$ = 1.5, ${{\mathit \mu}}$ = $-200$~GeV, with, in addition, $\Delta \mathit m$ $>$ 5~GeV for indirect decays via ${{\mathit L}}{{\mathit Q}}{{\overline{\mathit D}}}$ . The limit quoted applies to direct decays with ${{\mathit L}}{{\mathit L}}{{\overline{\mathit E}}}$ couplings and improves to 75~GeV for ${{\mathit L}}{{\mathit Q}}{{\overline{\mathit D}}}$ couplings. The limit on the ${{\widetilde{\mathit \tau}}_{{R}}}$ mass for indirect decays is 92~GeV for ${{\mathit L}}{{\mathit L}}{{\overline{\mathit E}}}$ couplings at ${\mathit m}_{{{\widetilde{\mathit \chi}}^{0}}}$ = 10~GeV and no exclusion is obtained for ${{\mathit L}}{{\mathit Q}}{{\overline{\mathit D}}}$ couplings. Supersedes the results of ABBIENDI 2000 .
3 ABDALLAH 2004M use data from $\sqrt {s }$ = $192 - 208$~GeV to derive limits on sparticle masses under the assumption of RPV with ${{\mathit L}}{{\mathit L}}{{\overline{\mathit E}}}$ couplings. The results are valid for ${{\mathit \mu}}$ = $-200$~GeV, tan ${{\mathit \beta}}$ = 1.5, $\Delta \mathit m$ $>$ 5~GeV and assuming a BR of 1 for the given decay. The limit quoted is for indirect decays using the neutralino constraint of 39.5 GeV, also derived in ABDALLAH 2004M. For indirect decays via ${{\mathit L}}{{\mathit L}}{{\overline{\mathit E}}}$ the limit decreases to 86 GeV if the constraint from the neutralino is not used. Supersedes the result of ABREU 2000U.
AABOUD 2018Z: PR D98 032009, Search for supersymmetry in events with four or more leptons in $\sqrt{s}=13$ TeV $pp$ collisions with ATLAS
ABBIENDI 2004F: EPJ C33 149, Search for $\mathit R$-Parity Violating Decays of Scalar Fermions at LEP
ABDALLAH 2004M: EPJ C36 1, Search for Supersymmetric Particles Assuming $\mathit R$-Parity non-conservation in ${{\mathit e}^{+}}{{\mathit e}^{-}}$ Collisions at $\sqrt {s }$ = 192 to 208 GeV
|
CommonCrawl
|
Integrative eQTL-weighted hierarchical Cox models for SNP-set based time-to-event association studies
Haojie Lu1 na1,
Yongyue Wei2 na1,
Zhou Jiang1,
Jinhui Zhang1,
Ting Wang1,
Shuiping Huang1,3,4 &
Ping Zeng (ORCID: 0000-0003-2710-3440)1,3,4
Integrating functional annotations into SNP-set association studies has been proven a powerful analysis strategy. Statistical methods for such integration have been developed for continuous and binary phenotypes; however, the SNP-set integrative approaches for time-to-event or survival outcomes are lacking.
We here propose IEHC, an integrative eQTL (expression quantitative trait loci) hierarchical Cox regression, for SNP-set based survival association analysis that models the effect sizes of genetic variants as a function of eQTL in a hierarchical manner. Three p-value combination tests are developed to examine the joint effects of eQTL and genetic variants after a novel decorrelated modification of the statistics for the two components. An omnibus test (IEHC-ACAT) is further adapted to aggregate the strengths of all available tests.
Simulations demonstrated that the IEHC joint tests were more powerful if both eQTL and genetic variants contributed to association signal, while IEHC-ACAT was robust and often outperformed other approaches across various simulation scenarios. When applying IEHC to ten TCGA cancers by incorporating eQTL from relevant tissues of GTEx, we revealed that substantial correlations existed between the two types of effect sizes of genetic variants from TCGA and GTEx, and identified 21 (9 unique) cancer-associated genes which would otherwise be missed by approaches not incorporating eQTL.
IEHC represents a flexible, robust, and powerful approach to integrate functional omics information to enhance the power of identifying association signals for the survival risk of complex human cancers.
A wide range of recent genome-wide association studies (GWASs) have revealed that germline variants (i.e., single nucleotide polymorphisms [SNPs]) are also an important inherited component of cancer risk [1,2,3], although somatic mutations (e.g., copy number and DNA methylation alterations) play an essential role in the pathophysiology of many human cancers [4,5,6]. Conventionally, the association of SNPs is examined one at a time in cancer GWASs [1, 2]; however, the power for detecting such single SNP association signal remains limited because genetic variants generally have weak effect sizes [7,8,9], making the detection of cancer-associated SNPs difficult even with large samples. In addition, these identified genetic variants often explain only a very small fraction of cancer predisposition, leading to the so-called missing heritability [10,11,12,13,14], which also implies that a large amount of causal loci have yet been discovered and the endeavor to identify causative genes for cancers should continue.
As an effective alternative strategy, SNP-set analysis has been proposed in GWAS [15,16,17,18,19,20,21], where a set of SNPs defined a priori within a gene or other genetic units (e.g., pathway) are analyzed collectively to assess their joint influence on diseases or traits. Existing SNP-set approaches can be roughly grouped into two categories: (i) the burden test, in which the association with disease risk is evaluated for the overall effect of a weighted summation of variant alleles [22, 23]; and (ii) the variance component test, in which the association is examined for the variance of genetic variants under the framework of mixed-effects models [16, 24]. Due to the aggregation of multiple weak association signals and the reduced burden of multiple testing, SNP-set analysis is often more powerful than its counterpart of single SNP analysis. However, these SNP-set association approaches might be still underpowered when additional informative knowledge is available about the alternative. For example, if the association between a set of genetic variants and the survival risk of cancers is regulated through gene expression, the power improvement would be further achieved by integrating transcriptomic data into the test method. As it is widely demonstrated that disease-associated SNPs are more likely to be expression quantitative trait loci (eQTL) [25], it is thus conceivable that incorporating such knowledge would increase power for detecting association [26,27,28].
Several methods have been proposed for this goal within the mixed-effects model framework. For example, MiST was developed for continuous and binary phenotypes in rare variant association studies by modeling the effects of rare variants as a function of functional features while allowing for the heterogeneity of variant-specific effects [29]. This method was recently further generalized to integrate eQTL or other functional annotations [26, 30, 31]. Both simulations and real applications have exhibited the advantage of these integrative approaches compared with the general methods that do not incorporate functional characteristics of genetic variants. However, to our best knowledge, there is little relevant work with regards to integrative approaches for time-to-event association studies.
In the present study, we develop such a method within the hierarchical Cox model framework to jointly analyze multiple SNPs for association with censored survival outcomes (i.e., time-to-event phenotypes) [32, 33]. Specifically, we first group SNPs into SNP-sets based on a biologically meaningful unit (i.e., genes), and then test for the overall joint effects of all SNPs within the gene. To integrate eQTL, following prior work [26, 29], we suppose the effect sizes of SNPs are partly explained by eQTL via a hierarchical modeling. As a result, our association analysis consists of two components: the first component stands for the fixed effect through the weighted burden score to reflect the impact of genetic variants on the survival risk explained by eQTL, while the second component examines the residual effects of genetic variants beyond eQTL. These residual effects are treated as random effects following an arbitrary distribution with mean zero and variance τ [32, 33]. Therefore, methodologically, testing the joint effect for a group of SNPs with the survival risk of a cancer of focus is equivalent to examining the fixed effect and random effects simultaneously.
Within our model, a novel decorrelated modification is made so that two independent statistics (i.e., a burden test statistic and a variance component test statistic) are derived, one for each of the two components. Then, the joint effect test can be easily constructed from these two uncorrelated statistics via various p-value combination strategies. To this aim, we consider three data-driven approaches (i.e., the Fisher's combination, the optimally weighted combination, and the adaptively weighted combination) for combining them to capture the association signals from both sources. To further enhance power, we exploit the recently developed aggregated Cauchy association test (ACAT) to integrate the strengths of all five types of test methods (i.e., the three combination tests as well as the burden test and the variance component test; the latter is also called the kernel machine [KM] test) [34, 35]. We refer to the proposed approach and test framework described above as the integrative eQTL hierarchical Cox model (IEHC). Extensive simulations demonstrate that the three combination tests have comparable or better power than both the burden and variance component tests under some specific scenarios, while IEHC-ACAT enjoys consistently higher power across all simulation scenarios. We finally apply IEHC to ten TCGA cancers that have one explicitly relevant tissue in The Genotype-Tissue Expression (GTEx) project and integrate eQTL into our method [36]. We identified a total of 21 (9 unique) cancer-associated genes which would otherwise be missed by general SNP-set based survival association methods that do not consider eQTL.
An overview of the IEHC model and the joint test
First, consider that there are S genotypes (denoted by Gi and coded as 0, 1 or 2 in terms of the number of effect alleles) of SNPs located within a given gene and p covariates Xi (e.g., age, gender, and cancer stage) for n individuals; S in general varies from gene to gene. In addition, denote the observed survival time by ti and the true survival time by Ti, with di indicating the censoring status; that is, di = 1 if Ti = ti (the event is observed), whereas di = 0 if Ti > ti (the observation is right-censored). Under the proportional hazards assumption, we assume the hazard function λ(t) of the survival time ti is related to Gi and Xi through the classical Cox model [37]
$$\log \left( \frac{\lambda (t_{i})}{\lambda_{0}(t_{i})} \right) = \mathrm{G}_{i}^{T} \boldsymbol{\alpha} + \mathrm{X}_{i}^{T} \boldsymbol{c}$$
where λ0 is an arbitrary baseline hazard function, α = (α1, …, αS) is an S-vector of effect sizes for SNPs and c = (c1, c2, …, cp) is a p-vector of fixed effect sizes for clinical covariates.
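For reference, estimation and score tests in such a model are conventionally based on the Cox partial likelihood; under the usual assumption of no tied event times it takes the standard form (quoted here as general background rather than as a formula specific to IEHC)
$$L(\boldsymbol{\alpha}, \boldsymbol{c}) = \prod_{i:\, d_{i} = 1} \frac{\exp\left(\mathrm{G}_{i}^{T} \boldsymbol{\alpha} + \mathrm{X}_{i}^{T} \boldsymbol{c}\right)}{\sum_{l:\, t_{l} \ge t_{i}} \exp\left(\mathrm{G}_{l}^{T} \boldsymbol{\alpha} + \mathrm{X}_{l}^{T} \boldsymbol{c}\right)}$$
so that the arbitrary baseline hazard λ0 drops out of the inference.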
We here only provide an overview of IEHC, with technical details demonstrated in the Additional file 1. In brief, IEHC examines a group of SNPs in one gene at each time and integrates eQTL information by extending the Cox model above in a hierarchical manner
$$\begin{aligned} \log \left( \frac{\lambda (t_{i})}{\lambda_{0}(t_{i})} \right) &= \sum_{j=1}^{S} \mathrm{G}_{ij} \alpha_{j} + \sum_{k=1}^{p} X_{ik} c_{k} = \eta = \mathrm{G}_{i}^{T} \boldsymbol{\alpha} + \mathrm{X}_{i}^{T} \boldsymbol{c} \\ \alpha_{j} &= \beta_{j} \times \theta + b_{j} \\ b_{j} &\sim N(0, \tau) \end{aligned}$$
Of note, plugging αj into the first line leads to $\eta = (\mathrm{G}_{i}^{T} \boldsymbol{\beta})\theta + \mathrm{G}_{i}^{T} \boldsymbol{b} + \mathrm{X}_{i}^{T} \boldsymbol{c}$. In the above, βj is the known eQTL effect size of the jth SNP, obtained directly in terms of summary statistics from the GTEx project [36, 38]; θ is a scalar coefficient for eQTL that quantifies the association between the survival risk and the weighted burden score $\mathrm{G}_{i}^{T} \boldsymbol{\beta}$; and bj is the normally distributed residual variant-specific effect size that is not explained by eQTL alone. Then, the hypothesis of no association between a set of SNPs and the survival outcome is
$$H_{0}: \theta = 0 \ \text{and}\ \boldsymbol{b} = \boldsymbol{0} \quad \Longleftrightarrow \quad H_{0}: \theta = 0 \ \text{and}\ \tau = 0$$
This is a joint test including both fixed effect and random effects: the first component examines the influence of genetic variants on the survival risk explained by eQTL (i.e., θ = 0); while the second component examines the impact of genetic variants beyond the effects of eQTL (i.e., τ = 0).
To implement the hypothesis testing while circumventing the potential correlation between the statistics and easing the computation, we propose the following two-stage strategy. Briefly, we derive the test statistic for θ under H0: θ = 0 and τ = 0 as usual, while deriving the score statistic for τ under τ = 0 but without the constraint θ = 0. By doing this, we ensure that the two statistics are independent (see simulation results in Additional file 1). This strategy substantially eases the construction of test statistics for the joint test, and two asymptotically independent statistics are eventually derived: one for θ in the general Cox model (say Uθ) [37] and the other for the variance component parameter τ in the kernel machine (KM) Cox model (say Uτ) [32, 33]. We combine the two uncorrelated statistics via several aggregation approaches, including the Fisher's combination (IEHC-Fisher) [39, 40], the optimally weighted linear combination (IEHC-optim), and the adaptively weighted linear combination (IEHC-adapt). For IEHC-optim we construct Tρ = ρUθ + (1 − ρ)Uτ, with ρ ∈ [0, 1] controlling the contribution of the fixed-effect component; the final ρ in IEHC-optim is selected by optimizing Tρ. On the other hand, IEHC-adapt is a data-adaptive generalization of the Fisher's combination [39, 40], for which the test statistic takes the form T = ρθZθ + ρτZτ, where Zθ = − 2log(pθ) and Zτ = − 2log(pτ), and ρθ and ρτ are determined in an adaptive manner.
The IEHC test described above includes two special cases: the burden test for examining the fixed effect θ (with τ = 0) in the general Cox model and the KM test examining the variance component parameter τ (with θ = 0) in the KM Cox model. To further boost the power, we employ the recently developed aggregated Cauchy association test (ACAT) to combine the strengths of these five methods (i.e., the burden test, the KM test and three joint tests including IEHC-Fisher, IEHC-optim and IEHC-adapt) [34, 35]. The advantage of IEHC-ACAT is that it allows us to aggregate correlated p-values obtained from multiple various tests into a single well-calibrated p-value while maintaining the type I error control correctly. The detailed procedures for these approaches are relegated to Additional file 1. The code for IEHC is freely available at https://github.com/biostatpzeng/IEHC.
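To make the combination step concrete, the following minimal sketch (hypothetical p-values; not the IEHC package, which is available at the GitHub link above) shows Fisher's combination of the two independent component p-values and the Cauchy combination used by ACAT to aggregate several possibly correlated tests:

import numpy as np
from scipy.stats import chi2

def fisher_combine(pvals):
    # Fisher's method: -2 * sum(log p) ~ chi-square with 2k df for k independent p-values
    pvals = np.asarray(pvals, dtype=float)
    stat = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(stat, df=2 * len(pvals))

def cauchy_combine(pvals, weights=None):
    # ACAT-style Cauchy combination; remains well calibrated even for correlated p-values
    pvals = np.asarray(pvals, dtype=float)
    if weights is None:
        weights = np.full(len(pvals), 1.0 / len(pvals))
    t = np.sum(weights * np.tan((0.5 - pvals) * np.pi))
    return 0.5 - np.arctan(t / np.sum(weights)) / np.pi

p_theta, p_tau = 0.012, 0.20                               # hypothetical component p-values
p_fisher = fisher_combine([p_theta, p_tau])                # IEHC-Fisher-style joint p-value
p_acat = cauchy_combine([0.012, 0.20, 0.03, 0.04, 0.05])   # aggregating five correlated tests
print(p_fisher, p_acat)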
Simulations for type I error control and power evaluation
We now perform simulations to evaluate the type I error control and power of IEHC. To mimic realistic settings, we undertook simulations based on real genotypes available from the Geuvadis program, because the sample sizes in the real-life applications used in this paper closely matched that of Geuvadis [41]. First, we obtained a group of 550 correlated SNPs in a local genetic region from 465 individuals in Geuvadis. During the simulation we randomly selected S nearby SNPs (denoted by G1), with S varying according to a uniform distribution ranging from 20 to 50 (i.e., S was on average equal to 35); among these selected genetic variants we further randomly set 0%, 30% or 50% of SNPs to have zero effect sizes. We generated the gene expression level with the first 165 individuals and sampled the effect sizes β from a normal distribution with a variance chosen so that the proportion of variation explained (PVE) of the expression level would be 30% or 50%.
Then, we calculated α = β × θ + b, with b following a normal distribution with variance τ. Two independent covariates (X1, binary, and X2, continuous) were also generated, each with an effect size of 0.50. We employed the inverse probability method to generate survival times following a Weibull distribution with the shape parameter equal to 1 and the scale parameter equal to 0.01 [42]. The location parameter (denoted by μ) of this Weibull distribution was determined by α and the two covariates: μ = exp(η), with η = G2α + 0.5X1 + 0.5X2, where G2 represents the genotypes of the remaining 300 samples in Geuvadis. The censoring rate was fixed at 50%, with censoring assigned at random. Note that this relatively high censoring rate corresponds to the situation observed in the TCGA cancer datasets (see below). We set θ = 0 and τ = 0 to assess the type I error control and ran 10^5 replications. To evaluate power, we specified θ = 0, 0.1, 0.2, 0.3 or 0.4, and τ = 0, 0.02, or 0.04 (with at least one of θ and τ nonzero). The power simulation was repeated 10^3 times.
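A minimal sketch of this kind of data-generating step is given below (a standard inverse-transform draw from a Weibull proportional-hazards model; the genotype matrix, eQTL weights and censoring mechanism are hypothetical stand-ins for the Geuvadis-based design, and the exact parameterization used in the paper may differ):

import numpy as np

rng = np.random.default_rng(2023)
n, S = 300, 35

# Hypothetical genotypes, eQTL weights and covariates (placeholders for the Geuvadis data)
G = rng.binomial(2, 0.3, size=(n, S)).astype(float)
beta = rng.normal(0.0, 0.1, S)                      # eQTL-like weights
theta, tau = 0.4, 0.04
alpha = beta * theta + rng.normal(0.0, np.sqrt(tau), S)
X1 = rng.binomial(1, 0.5, n)
X2 = rng.normal(0.0, 1.0, n)
eta = G @ alpha + 0.5 * X1 + 0.5 * X2

# Inverse-transform draw: baseline hazard scale*shape*t^(shape-1), multiplied by exp(eta)
shape, scale = 1.0, 0.01
U = rng.uniform(size=n)
T = (-np.log(U) / (scale * np.exp(eta))) ** (1.0 / shape)

# Random censoring chosen to give roughly half censored observations
C = rng.exponential(np.median(T), size=n)
time = np.minimum(T, C)
event = (T <= C).astype(int)
print("observed censoring rate:", 1.0 - event.mean())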
TCGA cancers and GTEx eQTL summary statistics
TCGA cancers and quality control
We applied the proposed method to multiple cancer data publicly available from TCGA [43]. We downloaded these datasets at https://xenabrowser.net/ and focused on cancers having one explicitly relevant tissue in the GTEx project [36]. However, we did not include PRAD (prostate adenocarcinoma) and THCA (thyroid carcinoma) as nearly all the PRAD patients (99.3% = 146/147) and THCA patients (95.7% = 315/329) were alive during the follow-up. We also removed DLBC (lymphoid neoplasm diffuse large b-cell lymphoma), KICH (kidney chromophobe) and TGCT (testicular germ cell tumor) because of too small sample sizes (i.e., only 24 for DLBC, 57 for KICH and 69 for TGCT). Finally, we reserved ten cancers for further analysis (Table 1).
Table 1 Summary information of 10 TCGA cancers and the number of genes, sample sizes and SNPs of these cancers after combining the tissue in GTEx and quality control
To avoid the issue of ethnic heterogeneity, we included only patients of European ancestry and used the overall survival time and status in our analysis, following prior work [44]. Several important clinical covariates were incorporated, such as age, gender, and pathologic tumor stage, because only these clinical variables were available for the majority of TCGA patients. When the pathologic tumor stage was unavailable, we employed the clinical stage instead (i.e., for OV). We further standardized each clinical covariate. In addition, for every cancer we kept only samples from primary tumor tissues and excluded patients with too many missing values in the clinical covariates (Table 1).
TCGA genotypes, imputation, and quality control
For each cancer we first filtered out SNPs that had a missingness rate > 0.95 across patients, a genotype calling rate < 0.95, a minor allele frequency (MAF) < 0.01, or a Hardy–Weinberg equilibrium (HWE) p-value < 10^−4. Then, we undertook imputation by first phasing genotypes with SHAPEIT [45] and then imputing SNPs based on the Haplotype Reference Consortium panel [46] on the Michigan Imputation Server using minimac3 [47]. The filtering criteria for imputed genotypes were HWE p-value < 10^−4, genotype call rate < 95%, MAF < 0.01, and imputation score < 0.30.
GTEx eQTL summary statistics and the combination with TCGA
At the same time, for the retained cancers we obtained eQTL summary statistics of the related tissue from GTEx [36] and performed a stringent quality control (Table 1): (i) retained SNPs with MAF > 0.05; (ii) excluded non-biallelic SNPs and SNPs with strand-ambiguous alleles; (iii) excluded SNPs without rs labels as well as duplicated ones; (iv) kept only SNPs that were also included in TCGA; (v) removed SNPs whose alleles did not match those in TCGA; (vi) aligned the effect allele of each SNP between TCGA and GTEx.
For comparison, we implemented the following six methods in both simulations and real-life applications within the context of the Cox model log[λ(t)/λ0(t)] = η: (i) the burden test: to examine H0: θ = 0 in η = (Gβ) × θ + Xc using the Wald test in the general Cox model; (ii) the KM test: to assess H0: τ = 0 in η = Gb + Xc with b ~ N(0, τ) using the kernel-machine based approach; (iii) IEHC-Fisher: to jointly test H0: τ = 0 and θ = 0 in η = (Gβ) × θ + Gb + Xc with b ~ N(0, τ) using the Fisher's combination method, or (iv) IEHC-adapt using the adaptive combination method, or (v) IEHC-optim using the optimal combination method; (vi) IEHC-ACAT: to aggregate the first five tests using the Cauchy combination method.
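As one concrete illustration of method (i), the burden component can be fitted with an off-the-shelf Cox implementation such as the lifelines package; this is a hedged sketch assuming genotypes G, eQTL weights beta, covariates X, and the survival outcome are already aligned (e.g., the simulated quantities from the sketch above), not the authors' own code:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def burden_test(G, beta, X, time, event):
    """Wald test of H0: theta = 0 in eta = (G beta) * theta + X c."""
    X = np.asarray(X)
    df = pd.DataFrame(X, columns=[f"cov{j}" for j in range(X.shape[1])])
    df["burden"] = G @ beta            # eQTL-weighted burden score
    df["time"], df["event"] = time, event
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    return cph.summary.loc["burden", "p"]

# Example call, reusing the simulated quantities above:
# p_burden = burden_test(G, beta, np.column_stack([X1, X2]), time, event)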
Independence of the two statistics in the joint test and type I error control
First, to validate the independence of the two statistics (denoted by Uθ and Uτ) constructed in the joint test of IEHC, we computed the Pearson's correlation coefficient between them under the null of our simulation and found little evidence for dependence between the two statistics (Additional file 1: Figure S1). For instance, across the 10^5 replications, the overall correlation between Uθ and Uτ was 1.75 × 10^−3 (95% confidence interval: −4.44 × 10^−3 to 7.95 × 10^−3, P = 0.580), confirming the validity of our proposed joint test framework, within which we can combine two uncorrelated statistics in a statistically straightforward fashion. Next, the Q-Q plots demonstrate that all the tests, including the burden test, the KM test, IEHC-Fisher, IEHC-adapt, IEHC-optim, and IEHC-ACAT, effectively control the type I error (Fig. 1). In particular, we find that IEHC-ACAT correctly maintains the type I error control even though the aggregated test methods (i.e., the first five) are highly correlated (Additional file 1: Figure S2). Furthermore, IEHC-Fisher is more powerful when the fixed effect explained by eQTL and the random effects beyond eQTL exist simultaneously, but is less powerful when only one of the two types of effects is present; under the null that both θ and τ are zero, slightly deflated p-values are observed for it (Fig. 1).
The QQ plots evaluating the type I error for the burden test, the KM test, IEHC-Fisher, IEHC-adapt, IEHC-optim as well as IEHC-ACAT under the null that both θ and τ are zero
Simulation results for power evaluation
We now compare the power of these tests under the alternative. To save space, here we only present the results under three scenarios: the PVE of the gene expression level explained by β (the effect sizes of the eQTLs) was equal to 0.3 or 0.5, the effect size θ (the effect size of the eQTL-based genetic score) was set to 0 or 0.4, and τ (the variance of the direct effect sizes of genetic variants) was set to 0 or 0.04. The results for other scenarios are displayed in Additional file 1: Figures S3–S9. For the results shown in Fig. 2, we find that the burden test is in general powerful when the association signal comes only from eQTL (i.e., θ = 0.4 and τ = 0), while it is underpowered when the association signal comes only from SNPs (i.e., θ = 0 and τ = 0.04). The opposite pattern is observed for the KM test. Compared to the burden test and the KM test, the three joint tests (i.e., IEHC-Fisher, IEHC-adapt, and IEHC-optim) are often better when the association signal is contributed by both eQTL and SNPs (i.e., θ = 0.4 and τ = 0.04).
Power comparison among the six test methods under the alternative. In the simulation scenarios, 30%, 50% or 0% SNPs were randomly selected to have zero effect sizes. The PVE of the expression level explained by β was set to 0.3 (above) or 0.5 (below). A 30% SNPs having zero effect sizes; B 50% SNPs having zero effect sizes; C 0% SNPs having zero effect sizes. Here, θ = 0.4 or (and) τ = 0.04
In addition, we find that the relative power of the joint tests (i.e., IEHC-Fisher, IEHC-adapt, and IEHC-optim) compared with the burden test and the KM test depends on the magnitude of θ and τ. More specifically, when θ and τ are not large enough, the burden test and/or the KM test may behave better than the joint tests even when the association signal is contributed by both components. For instance, the KM test is more powerful than the joint tests when θ = 0.1 and τ = 0.04 (Additional file 1: Figure S4), whereas the burden test has higher power when θ = 0.2 and τ = 0.02 (Additional file 1: Figure S5). Finally, IEHC-ACAT, which integrates the five tests, consistently behaves better across the various simulation scenarios (Fig. 2 and Additional file 1: Figures S3–S9).
Correlation between cis-SNP marginal effect sizes of each gene in TCGA and GTEx
In the real application, we first quantified the association of cis-SNP effect sizes between TCGA and GTEx. To do so, for each SNP in TCGA we generated its marginal effect size using the general Cox model while adjusting for available cancer-specific covariates (e.g., age and tumor stage), and then, for each gene of these cancers, conducted a simple linear regression with the two sets of estimated SNP effect sizes. Note that the SNP effect sizes of GTEx can be directly accessed through the public portal (https://www.gtexportal.org/). Such regression analysis provides a rough insight into the relationship between the two types of SNP effect sizes.
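A hedged sketch of this per-gene comparison is given below; the two arrays of marginal effect sizes for the cis-SNPs of a single gene are hypothetical and assumed to be aligned to a common effect allele:

import numpy as np
from scipy.stats import linregress

# Hypothetical marginal effect sizes of the same cis-SNPs: Cox log-hazard ratios in TCGA
# and eQTL effects in GTEx, aligned to a common effect allele
b_tcga = np.array([0.05, -0.12, 0.08, 0.20, -0.03])
b_gtex = np.array([0.10, -0.25, 0.15, 0.35, -0.02])

res = linregress(b_gtex, b_tcga)  # regress TCGA effects on GTEx eQTL effects
print(f"slope = {res.slope:.3f}, R^2 = {res.rvalue ** 2:.3f}, p = {res.pvalue:.3g}")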
We discover that these two types of SNP effect sizes are substantially correlated for a large number of genes in each cancer (Table 2). For example, we find that on average ~72.8% (ranging from 67.6% for BRCA to 76.4% for ACC) of regression coefficients are significant (false discovery rate [FDR] < 0.05). Notably, for a given cancer the regression coefficients may be positive for some genes and negative for others (Fig. 3A). In particular, among a total of 118 genes whose regression coefficients are significant across all ten cancers, the regression coefficients can still be either positive or negative across diverse cancers (Fig. 3B), indicating distinct genetic influences of SNPs on the regulation of gene expression and the survival risk of cancers. More importantly, a small fraction (~3.4% on average) of the coefficients of determination (R2) are larger than 10%, implying that the cis-SNP effect sizes of certain genes in TCGA cancers can indeed be partly explained by eQTL of the relevant tissue in GTEx (Table 2).
Table 2 Summary information of cis-SNPs for the 10 cancers and the association of cis-SNP effect sizes for each gene in TCGA and GTEx
A Distribution of estimated regression coefficients for each gene across all the 10 TCGA cancers; B Heatmap of estimated regression coefficients of 47 of 118 genes that are simultaneously significant (FDR < 0.05) across all the 10 TCGA cancers; the color intensity represents the magnitude of the regression coefficients
Taken together, although the average strength of the relationship between the two types of SNP effect sizes across genes may be relatively moderate, it nevertheless suggests potential genetic overlap, especially at certain genes. It is therefore worthwhile to integrate the eQTL of GTEx into the SNP-set based survival association analysis of TCGA cancers to boost power.
Associated genes identified with the IEHC method
We here demonstrate that incorporating the eQTL information of GTEx into the SNP-set association analysis has the potential to enhance the power. We also show that integrating all available tests via IEHC-ACAT can further increase the power. For each cancer and each type of joint test (i.e., IEHC-Fisher, IEHC-optim, IEHC-adapt, and IEHC-ACAT), we classified all genes into four groups according to whether the regression coefficient was significant (FDR < 0.05) and whether the joint test was significant (P < 0.05) (Additional file 1: Tables S1–S4 and Figures S10, S11). Taking ACC as an example, there are a total of 8533 (= 259 + 8274) genes whose regression coefficients are significant (FDR < 0.05) and 2630 (= 19 + 2611) genes whose regression coefficients are non-significant (FDR > 0.05); among these genes, 259 (~3.04% = 259/8533) and 19 (~0.72% = 19/2630) genes have a p-value less than 0.05 in terms of IEHC-Fisher, indicating that IEHC-Fisher is about four times more likely (~4.22 = 3.04/0.72) to discover association signals among genes with significant regression coefficients (p = 4.61 × 10^−11; Additional file 1: Table S1). The basic logic is that a smaller p-value would be generated in the joint tests if the eQTL of GTEx is predictive of the SNP effect sizes in TCGA. Therefore, we expect that the detection rate of potentially associated genes (determined by p < 0.05) would be greater among genes with significant regression coefficients than among those with non-significant regression coefficients. Formally, we employ the chi-square test to examine the difference in detection rates (e.g., 3.04% vs. 0.72%) and observe a pronounced improvement of the detection rate for the four joint tests across nearly all cancers except OV (Fig. 4), in line with our expectation and suggesting an improvement of power when integrating the eQTL information of GTEx.
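As a small worked check of this comparison, the ACC counts quoted above can be arranged in a 2 × 2 contingency table and tested with a chi-square test (a sketch using scipy; with the default continuity correction the resulting p-value comes out on the same order as the value reported above):

from scipy.stats import chi2_contingency

# Rows: genes with significant / non-significant regression coefficients (ACC example)
# Columns: IEHC-Fisher p < 0.05 (yes / no)
table = [[259, 8533 - 259],
         [19, 2630 - 19]]
chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2_stat:.1f}, df = {dof}, p = {p_value:.2e}")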
Improvement of the detection rate for genes with p-values of joint tests and the FDR of regression coefficients less than 0.05 across all the 10 TCGA cancers. Here the improvement is computed with the ratio of the detection rate for genes with significant regression coefficients and that for genes with non-significant regression coefficients. Thus, a ratio larger than one indicates the improvement
Finally, the numbers of associated genes identified with the various test approaches are summarized in Table 3. Note that the KM test did not identify any associations, whereas a total of 21 (9 unique) genes were discovered for four cancers after incorporating the eQTL information of GTEx (Table 4). Specifically, IEHC-ACAT and the burden test each detected 5 genes, followed by IEHC-adapt (4 genes), IEHC-optim (4 genes), and IEHC-Fisher (3 genes). We find that some genes are specifically discovered by particular methods (e.g., COL9A1, MSANTD2, and LMBRD1 by the burden test), although several genes are simultaneously identified by multiple tests (e.g., RP11-1391J7.1 by the burden test, IEHC-adapt, IEHC-optim, and IEHC-ACAT), reflecting the varying power of these test approaches across diverse genes, in line with the observations in the simulations. Among the nine unique genes, the SNP effect sizes show a moderate correlation between TCGA and GTEx (Table 4 and Additional file 1: Figure S12).
Table 3 The number of significant genes identified by different test approaches in the 10 TCGA cancers (FDR < 0.1)
Table 4 Summary information for associated genes for the four cancers identified by different tests
With regard to these discovered genes, previous studies provide evidence supporting their associations with the corresponding cancers. For instance, it was found that the methylation level of COL9A1 was more markedly reduced in tumors than in blood or healthy breast tissue, suggesting an association between COL9A1 and the risk of breast cancer [48]. Dysregulation of EGFR expression and signaling is well documented to contribute to the progression and metastasis of breast cancer, while MSANTD2 plays a crucial role in decreased epidermal growth factor endocytosis [49]. It was recently shown that LMBRD1 was significantly over-expressed in a BRCA1-mutated cell line compared to a BRCA1 wild-type cell line [50]. As another example, COMMD1 was under-expressed in ovarian cancer, the lack of detectable COMMD1 protein expression was more frequent in ovarian cancer, and COMMD1 was also shown to be related to cisplatin sensitivity in ovarian cancer [51]. In addition, we observe that four genes (i.e., COL9A1, MSANTD2, LMBRD1, and RP11-1391J7.1) were differentially expressed between normal and tumor samples (Additional file 1: Figure S13), and that COL9A1 was differentially expressed among different tumor stages (Additional file 1: Figure S14). In summary, these identified genes may represent potentially promising candidate biomarkers for cancer prediction, clinical treatment, and survival prognosis evaluation.
Recent technological advances in high-throughput platforms have greatly expanded the breadth of available omics datasets, including gene expression at the transcriptome level [36]. These abundant data resources facilitate the interpretation of genetic variation in relation to survival risk and provide insight into the genetic underpinnings of many complex human cancers [1,2,3]. However, how to effectively leverage such omics information remains an open problem, and there is a great demand for powerful analysis tools to fully harness the utility of these datasets. To fill this knowledge gap, we have proposed a novel integrative genetic Cox approach, called IEHC, to undertake association analysis particularly for survival (time-to-event) phenotypes.
By characterizing the effect sizes of SNPs in GTEx and TCGA, we found a substantial correlation across genes between the two types of effect sizes, indicating the potential to improve power by incorporating GTEx eQTL into survival SNP-set association studies. Methodologically, under the hierarchical model framework, IEHC has the appealing property that it models the effect sizes of SNPs as a function of variant characteristics (i.e., eQTL) to leverage information across loci while allowing for heterogeneous individual variant effects [26, 29]. Moreover, IEHC can be further interpreted within the framework of transcriptome-wide association studies (TWAS) [30, 31, 52]. In brief, the weighted burden score in IEHC (i.e., $\mathrm{G}_{i}^{T} \boldsymbol{\beta}$) is viewed as an imputed expression level with the weights of SNPs estimated from an external tissue-related transcriptome reference dataset (i.e., GTEx), and the association between imputed expression and cancers is examined for that gene while controlling for the direct effects of SNPs (i.e., $\mathrm{G}_{i}^{T} \boldsymbol{b}$). Because TWAS can be effectively viewed as performing a two-sample causal inference [53, 54], IEHC in this sense has the ability to identify putative causal genes for cancers under certain regularity conditions [53,54,55].
Compared to the permutation test, which is often computationally intensive, the proposed joint tests in IEHC are much more efficient because only two independent statistics are involved, both of which can be implemented with existing software and can be further combined via three kinds of p-value combination strategies. In addition, two previously used tests, the burden test and the KM test, can be considered special cases of the joint test. Furthermore, in IEHC we utilized ACAT to combine all these test methods. IEHC-ACAT enjoys the attractive property that it takes a summary of a set of p-values as the test statistic and evaluates significance analytically without knowledge of the correlation structure [34, 35]; thus, it is extraordinarily flexible and computationally fast. As a result, IEHC-ACAT allows us to aggregate dependent p-values obtained from these tests into a single well-calibrated p-value that can achieve the maximal power while maintaining the type I error correctly [34, 35, 56].
Extensive simulations revealed the relative performance of these joint test methods in IEHC and highlighted the strength of IEHC-ACAT. In agreement with the simulation results, in the real application to ten TCGA cancers we found that integrating eQTL can in general enhance power and discover more genes that might be related to the survival risk of cancer. In particular, IEHC-ACAT identified the highest number of associated genes among the competing methods. In contrast, the KM test, which does not consider the eQTL of GTEx, could not identify any association signals, suggesting the usefulness of integrating external informative variant annotations. Besides its attractive methodological properties, IEHC is also biologically interpretable when integrating transcriptomic information. First, it has been shown that molecular features measured at the transcription level generally affect clinical outcomes more directly than those measured at other omic levels; thus, the gene expression level would have the best predictive power for cancer prognostic evaluation compared with other genomic measurements [44]. Second, it has been widely demonstrated that SNPs associated with complex phenotypes are more likely to be eQTL [25], implying that gene expression may mediate the influence of genetic variants on cancer risk. Therefore, eQTL can bridge the gap between cancers and many identified causal SNPs whose functional roles are unknown.
It should be emphasized that the current IEHC model considers only one type of variant characteristic (i.e., eQTL) and ignores other relevant information (e.g., protein quantitative trait loci). Therefore, the power of IEHC-Fisher, IEHC-adapt, IEHC-optim, and IEHC-ACAT may be further improved if more useful variant annotations are employed in IEHC. The hierarchical modeling in IEHC offers an effective and general way to incorporate additional functional annotations as they become available. However, when many functional annotations are available, some of them may not be useful for detecting associated genes; selecting informative annotations during the association analysis is therefore necessary and may be an interesting avenue for future investigation. Furthermore, although TCGA includes many cancer types, their effective sample sizes are still relatively small and the censoring proportions are high [43, 44], which inevitably undermines the power of any method and may lead to failure to identify some survival-associated genes. In addition, we only considered the linear kernel in the joint tests of IEHC when assessing the direct effects of SNPs (i.e., H0: τ = 0). The linear kernel may be sub-optimal if the relationship between SNPs and survival risk is non-linear. Intuitively, the power of IEHC depends on how well the chosen kernel captures the true relationship between SNPs and survival risk, which can differ in the numbers, effect sizes, and effect directions of causal variants across genes. For example, if only a very small fraction of SNPs is causal, a sparse kernel should be a better choice; if SNPs have mutual interaction effects, a product kernel consisting of main effects and interaction terms is preferred. In practice, however, the true relationship is rarely known in advance, and selecting an optimal kernel can be very challenging [20, 57,58,59,60,61]. Therefore, adaptive IEHC models and tests accommodating multiple candidate kernel functions warrant future study [20].
Overall, IEHC represents a flexible, robust, and powerful approach for integrating functional omics datasets to improve the power of identifying genes associated with the survival risk of complex human cancers.
All data generated or analyzed during this study are included in this published article and its additional file.
GWAS:
Genome-wide association study
ACAT:
Aggregated Cauchy association test
eQTL:
Expression quantitative trait loci
SNP:
Single nucleotide polymorphism
GTEx:
Genotype-Tissue Expression
FDR:
False discovery rate
TWAS:
Transcriptome-wide association study
IEHC:
Integrative eQTL hierarchical Cox model
Michailidou K, Lindström S, Dennis J, Beesley J, Hui S, Kar S, Lemaçon A, Soucy P, Glubb D, Rostamianfar A, et al. Association analysis identifies 65 new breast cancer risk loci. Nature. 2017;551:92–4.
Schumacher FR, Al Olama AA, Berndt SI, Benlloch S, Ahmed M, Saunders EJ, Dadaev T, Leongamornlert D, Anokian E, Cieza-Borrella C, et al. Association analyses of more than 140,000 men identify 63 new prostate cancer susceptibility loci. Nat Genet. 2018;50:928–36.
Huang K-l, Mashl RJ, Wu Y, Ritter DI, Wang J, Oh C, Paczkowska M, Reynolds S, Wyczalkowski MA, Oak N, et al. Pathogenic Germline Variants in 10,389 Adult Cancers. Cell. 2018;173:355–370.
Baylin SB. DNA methylation and gene silencing in cancer. Nat Clin Pract Oncol. 2005;2:S4–11.
Robertson KD. DNA methylation and human disease. Nat Rev Genet. 2005;6:597–610.
Jones PA. DNA methylation and cancer. Oncogene. 2002;21:5358.
Hindorff LA, Sethupathy P, Junkins HA, Ramos EM, Mehta JP, Collins FS, Manolio TA. Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc Natl Acad Sci U S A. 2009;106:9362–7.
Visscher PM, Wray NR, Zhang Q, Sklar P, McCarthy MI, Brown MA, Yang J. 10 Years of GWAS Discovery: Biology, Function, and Translation. Am J Hum Genet. 2017;101:5–22.
Zeng P, Zhao Y, Qian C, Zhang L, Zhang R, Gou J, Liu J, Liu L, Chen F. Statistical analysis for genome-wide association study. J Biomed Res. 2015;29:285–97.
Girirajan S. Missing heritability and where to find it. Genome Biol. 2017;18:89.
Manolio TA, Collins FS, Cox NJ, Goldstein DB, Hindorff LA, Hunter DJ, McCarthy MI, Ramos EM, Cardon LR, Chakravarti A, et al. Finding the missing heritability of complex diseases. Nature. 2009;461:747–53.
Eichler EE, Flint J, Gibson G, Kong A, Leal SM, Moore JH, Nadeau JH. Missing heritability and strategies for finding the underlying causes of complex disease. Nat Rev Genet. 2010;11:446–50.
Gusev A, Bhatia G, Zaitlen N, Vilhjalmsson BJ, Diogo D, Stahl EA, Gregersen PK, Worthington J, Klareskog L, Raychaudhuri S. Quantifying missing heritability at known GWAS loci. PLoS Genet. 2013;9:e1003993.
Young AI. Solving the missing heritability problem. PLoS Genet. 2019;15:e1008222.
Wu MC, Kraft P, Epstein MP, Taylor DM, Chanock SJ, Hunter DJ, Lin X. Powerful SNP-Set Analysis for Case-Control Genome-wide Association Studies. Am J Hum Genet. 2010;86:929–42.
Wu MC, Lee S, Cai T, Li Y, Boehnke M, Lin X. Rare-Variant Association Testing for Sequencing Data with the Sequence Kernel Association Test. Am J Hum Genet. 2011;89:82–93.
Lee S, Emond MJ, Bamshad MJ, Barnes KC, Rieder MJ, Nickerson DA, Christiani David C, Wurfel Mark M, Lin X. Optimal Unified Approach for Rare-Variant Association Testing with Application to Small-Sample Case-Control Whole-Exome Sequencing Studies. Am J Hum Genet. 2012;91:224–37.
Schifano ED, Epstein MP, Bielak LF, Jhun MA, Kardia SLR, Peyser PA, Lin X. SNP Set Association Analysis for Familial Data. Genet Epidemiol. 2012;36:797–810.
Wang X, Lee S, Zhu X, Redline S, Lin X. GEE-Based SNP Set Association Test for Continuous and Discrete Traits in Family-Based Association Studies. Genet Epidemiol. 2013;37:778–86.
Wu MC, Maity A, Lee S, Simmons EM, Harmon QE, Lin X, Engel SM, Molldrem JJ, Armistead PM. Kernel Machine SNP-Set Testing Under Multiple Candidate Kernels. Genet Epidemiol. 2013;37:267–75.
Lee S, Abecasis Gonçalo R, Boehnke M, Lin X. Rare-Variant Association Analysis: Study Designs and Statistical Tests. Am J Hum Genet. 2014;95:5–23.
Morgenthaler S, Thilly W. A strategy to discover genes that carry multi-allelic or mono-allelic risk for common diseases: a cohort allelic sums test (CAST). Mutat Res. 2007;615:28–56.
Li B, Leal SS. Novel methods for detecting associations with rare variants for common diseases: application to analysis of sequence data. Am J Hum Genet. 2008;83:311–21.
Zeng P, Zhao Y, Liu J, Liu L, Zhang L, Wang T, Huang S, Chen F. Likelihood ratio tests in rare variant detection for continuous phenotypes. Ann Hum Genet. 2014;78:320–32.
Nicolae DL, Gamazon E, Zhang W, Duan S, Dolan ME, Cox NJ. Trait-associated SNPs are more likely to be eQTLs: annotation to enhance discovery from GWAS. PLoS Genet. 2010;6:e1000888.
Su YR, Di C, Bien S, Huang L, Dong X, Abecasis G, Berndt S, Bezieau S, Brenner H, Caan B, et al. A Mixed-Effects Model for Powerful Association Tests in Integrative Functional Genomics. Am J Hum Genet. 2018;102:904–19.
Wu C, Pan W. Integrating eQTL data with GWAS summary statistics in pathway-based analysis with application to schizophrenia. Genet Epidemiol. 2018;42:303–16.
Xue H, Pan W, for the Alzheimer's Disease Neuroimaging Initiative. Some statistical consideration in transcriptome-wide association studies. Genet Epidemiol. 2020;44:221–232.
Sun J, Zheng Y, Hsu L. A unified mixed-effects model for rare-variant association in sequencing studies. Genet Epidemiol. 2013;37:334–44.
Gamazon ER, Wheeler HE, Shah KP, Mozaffari SV, Aquino-Michaels K, Carroll RJ, Eyler AE, Denny JC, Consortium GT, Nicolae DL, et al. A gene-based association method for mapping traits using reference transcriptome data. Nat Genet. 2015;47:1091–8.
Gusev A, Ko A, Shi H, Bhatia G, Chung W, Penninx BWJH, Jansen R, de Geus EJC, Boomsma DI, Wright FA, et al. Integrative approaches for large-scale transcriptome-wide association studies. Nat Genet. 2016;48:245–52.
Lin X, Cai T, Wu MC, Zhou Q, Liu G, Christiani DC, Lin X. Kernel machine SNP-set analysis for censored survival outcomes in genome-wide association studies. Genet Epidemiol. 2011;35:620–31.
Cai T, Tonini G, Lin X. Kernel machine approach to testing the significance of multiple genetic markers for risk prediction. Biometrics. 2011;67:975–86.
Liu Y, Xie J. Cauchy Combination Test: A Powerful Test With Analytic p-Value Calculation Under Arbitrary Dependency Structures. J Am Stat Assoc. 2020;115:393–402.
Liu Y, Chen S, Li Z, Morrison AC, Boerwinkle E, Lin X. ACAT: A Fast and Powerful p Value Combination Method for Rare-Variant Analysis in Sequencing Studies. Am J Hum Genet. 2019;104:410–21.
GTEx Consortium. Genetic effects on gene expression across human tissues. Nature. 2017;550:204–213.
Cox DR. Regression Models and Life-Tables. J Roy Stat Soc: Ser B (Methodol). 1972;34:187–220.
GTEx Consortium. The Genotype-Tissue Expression (GTEx) project. Nat Genet. 2013;45:580–585.
Koziol JA, Perlman MD. Combining independent chi-squared tests. J Am Stat Assoc. 1978;73:753–63.
Fisher RA: Statistical Methods for Research Workers, 5th Edn. Biological monographs and manuals. Edinburgh: Oliver and Boyd Ltd; 1934.
Lappalainen T, Sammeth M, Friedländer MR, Pa TH, Monlong J, Rivas MA, Gonzàlezporta M, Kurbatova N, Griebel T, Ferreira PG. Transcriptome and genome sequencing uncovers functional variation in humans. Nature. 2013;501:506–11.
Bender R, Augustin T, Blettner M. Generating survival times to simulate Cox proportional hazards models. Stat Med. 2005;24:1713–23.
Hoadley KA, Yau C, Hinoue T, Wolf DM, Lazar AJ, Drill E, Shen R, Taylor AM, Cherniack AD, Thorsson V, et al. Cell-of-Origin Patterns Dominate the Molecular Classification of 10,000 Tumors from 33 Types of Cancer. Cell. 2018;173:291–304.
Yu X, Wang T, Huang S, Zeng P. How can gene expression information improve prognostic prediction in TCGA cancers: an empirical comparison study on regularization and mixed-effect survival models. Front Genetics. 2020;11:8.
Delaneau O, Zagury JF, Marchini J. Improved whole-chromosome phasing for disease and population genetic studies. Nat Methods. 2013;10:5–6.
McCarthy S, Das S, Kretzschmar W, Delaneau O, Wood AR, Teumer A, Kang HM, Fuchsberger C, Danecek P, Sharp K, et al. A reference panel of 64,976 haplotypes for genotype imputation. Nat Genet. 2016;48:1279–83.
Das S, Forer L, Schonherr S, Sidore C, Locke AE, Kwong A, Vrieze SI, Chew EY, Levy S, McGue M, et al. Next-generation genotype imputation service and methods. Nat Genet. 2016;48:1284–7.
Piotrowski A, Benetkiewicz M, Menzel U, de Ståhl TD, Mantripragada K, Grigelionis G, Buckley PG, Jankowski M, Hoffman J, Bała D. Microarray-based survey of CpG islands identifies concurrent hyper-and hypomethylation patterns in tissues derived from patients with breast cancer. Genes Chromosom Cancer. 2006;45:656–67.
Runkle KB, Meyerkord CL, Desai NV, Takahashi Y, Wang H-G. Bif-1 suppresses breast cancer cell migration by promoting EGFR endocytic degradation. Cancer Biol Ther. 2012;13:956–66.
Privat M, Rudewicz J, Sonnier N, Tamisier C, Ponelle-Chachuat F, Bignon Y-J. Antioxydation and cell migration genes are identified as potential therapeutic targets in basal-like and BRCA1 mutated breast cancer cell lines. Int J Med Sci. 2018;15:46.
Fedoseienko A, Wieringa HW, Wisman GBA, Duiker E, Reyners AK, Hofker MH, van der Zee AG, van de Sluis B, van Vugt MA. Nuclear COMMD1 is associated with cisplatin sensitivity in ovarian cancer. PLoS ONE. 2016;11:e0165385.
Zeng P, Dai J, Jin S, Zhou X. Aggregating multiple expression prediction models improves the power of transcriptome-wide association studies. Hum Mol Genet. 2021;30:939–51.
Zhu H, Zhou X. Transcriptome-wide association studies: a view from Mendelian randomization. Quant Biol. 2020;9:78.
Yuan Z, Zhu H, Zeng P, Yang S, Sun S, Yang C, Liu J, Zhou X. Testing and controlling for horizontal pleiotropy with probabilistic Mendelian randomization in transcriptome-wide association studies. Nat Commun. 2020;11:3861.
Wainberg M, Sinnott-Armstrong N, Mancuso N, Barbeira AN, Knowles DA, Golan D, Ermel R, Ruusalepp A, Quertermous T, Hao K, et al. Opportunities and challenges for transcriptome-wide association studies. Nat Genet. 2019;51:592–9.
Xiao L, Yuan Z, Jin S, Wang T, Huang S, Zeng P. Multiple-tissue integrative transcriptome-wide association studies discovered new genes associated with amyotrophic lateral sclerosis. Front Genetics. 2020;11:587243.
Urrutia E, Lee S, Maity A, Zhao N, Shen J, Li Y, Wu MC. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT). Stat Interface. 2015;8:495–505.
Wang X, Xing EP, Schaid DJ. Kernel methods for large-scale genomic data analysis. Brief Bioinform. 2014;16:183–92.
Yang H, Cao H, He T, Wang T, Cui Y. Multilevel heterogeneous omics data integration with kernel fusion. Brief Bioinform. 2020;21:156–70.
Yang H, Li S, Cao H, Zhang C, Cui Y. Predicting disease trait with genomic data: a composite kernel approach. Brief Bioinform. 2016;18:591–601.
He T, Li S, Zhong P-S, Cui Y. An optimal kernel-based U-statistic method for quantitative gene-set association analysis. Genet Epidemiol. 2019;43:137–49.
We thank TCGA and GTEx for the sharing of datasets analyzed in our work; these datasets can be available at https://xenabrowser.net/ and https://www.gtexportal.org/. We are also very grateful to the editor and two referees for their insightful and constructive comments, which substantially improved our original manuscript. The data analyses in the present study were carried out with the high-performance computing cluster that was supported by the special central finance project of local universities for Xuzhou Medical University.
The research of Ping Zeng was supported in part by the National Natural Science Foundation of China (82173630 and 81402765), the Youth Foundation of Humanity and Social Science funded by Ministry of Education of China (18YJC910002), the Natural Science Foundation of Jiangsu Province of China (BK20181472), the China Postdoctoral Science Foundation (2018M630607 and 2019T120465), the QingLan Research Project of Jiangsu Province for Outstanding Young Teachers, the Six-Talent Peaks Project in Jiangsu Province of China (WSN-087), the Training Project for Youth Teams of Science and Technology Innovation at Xuzhou Medical University (TD202008), the Postdoctoral Science Foundation of Xuzhou Medical University, and the Statistical Science Research Project from National Bureau of Statistics of China (2014LY112). The research of Shuiping Huang was supported in part by the Social Development Project of Xuzhou City (KC19017). The research of Ting Wang was supported in part by the Social Development Project of Xuzhou City (KC20062). The research of Yongyue Wei was supported in part by the National Natural Science Foundation of China (81973142 and 81402764).
Haojie Lu and Yongyue Wei wish it to be known that, in their opinion, the first two authors should be regarded as joint first authors
Department of Biostatistics, School of Public Health, Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
Haojie Lu, Zhou Jiang, Jinhui Zhang, Ting Wang, Shuiping Huang & Ping Zeng
Department of Biostatistics, School of Public Health, Nanjing Medical University, Nanjing, 211166, Jiangsu, China
Yongyue Wei
Center for Medical Statistics and Data Analysis, Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
Shuiping Huang & Ping Zeng
Key Laboratory of Human Genetics and Environmental Medicine, Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
Haojie Lu
Zhou Jiang
Jinhui Zhang
Ting Wang
Shuiping Huang
Ping Zeng
PZ conceived the idea for the study. PZ, SH and YW obtained the data, PZ and HL performed the data analyses. PZ, ZJ, JZ, TW, and HL interpreted the results of the data analyses. PZ and HL drafted the manuscript, and all authors approved the manuscript and provided relevant suggestions. All authors read and approved the final manuscript.
Correspondence to Ping Zeng.
Lu, H., Wei, Y., Jiang, Z. et al. Integrative eQTL-weighted hierarchical Cox models for SNP-set based time-to-event association studies. J Transl Med 19, 418 (2021). https://doi.org/10.1186/s12967-021-03090-z
Integrative analysis
SNP-set association study
Joint effect test
Hierarchical modeling
Cox model
Personalized medicine
Dynamics and chaos control in a discrete-time ratio-dependent Holling-Tanner model
Sarker Md. Sohel Rana1
A discrete-time Holling-Tanner model with ratio-dependent functional response is examined. We show that the system undergoes a flip bifurcation, a Neimark-Sacker bifurcation, or both at the positive fixed point in the interior of \(\mathbb {R}^{2}_{+}\) when one of the model parameters crosses its threshold value. We concentrate on determining the existence conditions and direction of the bifurcations via center manifold theory. To validate the analytical results, numerical simulations are employed, including bifurcation diagrams, phase portraits, stable orbits, invariant closed circles, and attracting chaotic sets. In addition, the existence of chaos in the system is justified numerically by the signs of the maximum Lyapunov exponents and the fractal dimension. Finally, we control the chaotic trajectories that exist in the system by a feedback control strategy.
Mathematical modeling is a promising approach to understand and analyze the dynamics of ecological systems. In population ecology, a classical and significant theme is the interaction between predator and prey species. In recent years, the Leslie-type predator-prey model has received increasing interest as a way to investigate the dynamical behaviors of interacting species. The dynamic complexity of a predator-prey system depends on the predator's functional response. The Holling type II functional response is the most commonly used functional response among arthropod predators. The Leslie predator-prey model with Holling type II functional response is called the Holling-Tanner model. A number of well-known ecologists and mathematicians have paid attention to and extensively investigated Holling-Tanner models [1–3]. Their works found complex dynamical behaviors, including stable or unstable limit cycles and stability states around the positive equilibrium. They showed that asymptotic stability of the positive equilibrium does not imply global stability. They also showed that the model may undergo a Bogdanov-Takens bifurcation and a subcritical Hopf bifurcation if parameters vary in a small vicinity of critical values. The Holling-Tanner model with ratio-dependent functional response (1) has been studied in [4]. The authors established global stability conditions of the model near the positive equilibrium with the help of a Lyapunov function. It is also proved that model (1) possesses a unique stable limit cycle around the positive equilibrium, showing coexistence of predator and prey in the sense of an oscillatory balance behavior.
However, many exploratory works have suggested that if the population size is small, or population generations are relatively discrete (non-overlapping), or the population changes within certain time intervals, one should consider a difference equation model rather than a differential equation model to reveal chaotic dynamics [5–15]. These studies explored many complex properties, including flip and Neimark-Sacker bifurcations, stable orbits, and chaotic attractors, derived either numerically or by normal form and center manifold theory.
Recently, a few works in the literature have studied discrete-time Holling-Tanner models [16–18] and their chaotic behaviors. For instance, a discrete-time Holling and Leslie type predator-prey system with constant-yield prey harvesting was analyzed in [16], the authors of [17] investigated a discrete Holling-Tanner model, and a discrete predator-prey model with modified Holling-Tanner functional response was discussed in [18]. These studies focused on determining the stability and directions of flip and Neimark-Sacker bifurcations via center manifold theory.
The following ratio-dependent Holling-Tanner model [4] is considered in this paper:
$$ \begin{array}{lcl} \dot{x} &=& rx \left(1-\frac{x}{K} \right)- \frac{mx}{x+Ay}y\\ \dot{y} &=& sy\left(1-h\frac{y}{x}\right) \end{array} $$
where x and y represent prey and predator population densities, respectively; r and s are the intrinsic growth rates of the prey and predator, respectively. K is the environmental carrying capacity of the prey. The predator consumes prey according to the ratio-dependent Holling type II functional response \(\frac {mx}{x+Ay}\). The carrying capacity x/h of the predator is proportional to the population size of the prey. The parameter h is the number of prey required to support predator births; m is the maximal predator per capita consumption rate; A is the number of prey necessary to achieve one-half of the maximum rate m; the constants r, K, A, m, s, and h are all positive.
We introduce the new variables and parameters by the following scaling transformations:
$$\frac{x}{K} \rightarrow x, \frac{my}{rK} \rightarrow y, rt \rightarrow t \; \text{and}, \; a = \frac{rA}{m}, d = \frac{sh}{m}, b = \frac{m}{hr}.$$
Then, the system (1) becomes
$$ \begin{array}{lcl} \dot{x} &=& x \left(1-x \right)-\frac{xy}{x+ay}\\ \dot{y} &=& dy\left(b-\frac{y}{x}\right) \end{array} $$
To obtain the following two-dimensional discrete system, the forward Euler scheme with integral step size δ is applied to system (2):
$$ \left(\begin{array}{l} x \\ y \end{array}\right) \mapsto \left(\begin{array}{l} x+\delta x \left[ \left(1-x \right)-\frac{y}{x+ay} \right]\\ y+\delta y \left[d\left(b-\frac{y}{x}\right) \right] \end{array}\right) $$
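As an illustration of how orbits of map (3) can be generated in practice, here is a minimal sketch that simply iterates the map from an initial point; the parameter values shown are assumptions chosen only for illustration, not those used later in the paper.

```python
import numpy as np

def holling_tanner_map(x, y, a, b, d, delta):
    """One forward-Euler step, i.e., one iteration of system (3)."""
    x_new = x + delta * x * ((1.0 - x) - y / (x + a * y))
    y_new = y + delta * y * d * (b - y / x)
    return x_new, y_new

def iterate(x0, y0, a, b, d, delta, n_steps=1000):
    """Iterate map (3) and return the orbit as an (n_steps+1, 2) array."""
    orbit = np.empty((n_steps + 1, 2))
    orbit[0] = (x0, y0)
    for n in range(n_steps):
        orbit[n + 1] = holling_tanner_map(*orbit[n], a, b, d, delta)
    return orbit

# Illustrative run (assumed parameters, not the values of Table 1)
orbit = iterate(0.37, 0.37, a=1.0, b=1.0, d=0.5, delta=2.0)
print(orbit[-5:])
```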
This study mainly focuses on how the model parameters affect the dynamics of system (3). In particular, we systematically derive the existence conditions of the flip bifurcation and the Neimark-Sacker bifurcation using bifurcation theory and center manifold theory [19]. In discrete predator-prey systems, the flip and Neimark-Sacker bifurcations are both important mechanisms for the generation of complex dynamics: both cause the system to jump from a stable window to chaotic states through periodic and quasi-periodic states, and trigger a route to chaos.
The outline of this paper is as follows. The existence conditions for fixed points of system (3) in the interior of \(\mathbb {R}^{2}_{+}\) and their stability analysis are given in the "Existence conditions and stability analysis of fixed points" section. In the "Direction and stability analysis of bifurcation" section, we determine the direction of bifurcation for system (3) under certain parametric conditions. Bifurcation diagrams, phase portraits, maximum Lyapunov exponents, and fractal dimensions are presented numerically in the "Numerical simulations" section by changing one or more control parameter values. In the "Controlling chaos" section, a feedback control technique is used to stabilize unstable trajectories. A short conclusion is presented in the "Discussions" section.
Existence conditions and stability analysis of fixed points
To find fixed points of system (3), we set
$$ \left\{\begin{array}{lcl} x+\delta x \left[ \left(1-x \right)-\frac{y}{x+ay} \right] &=& x \\ y+\delta y \left[d\left(b-\frac{y}{x}\right) \right] &=& y \end{array}\right. $$
By solving the system of nonlinear equations (4), we obtain the following result.
For all feasible values of parameters, the system (3)
(i) always has axial fixed point E1(1,0),
(ii) has a unique coexistence fixed point E2(x∗,y∗) if b<1+ab where \(x^{*} = \frac {1-b+ab}{1+ab} \quad \text {and} \quad y^{*} = b x^{*}.\)
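A quick numerical sanity check of this proposition is sketched below: for an assumed set of parameter values satisfying b < 1 + ab, the point E2 computed from the formulas above is verified to be invariant under map (3).

```python
import numpy as np

def coexistence_fixed_point(a, b):
    """E2 = (x*, y*) from Proposition 1; requires b < 1 + a*b."""
    x_star = (1.0 - b + a * b) / (1.0 + a * b)
    return x_star, b * x_star

# Illustrative parameters (assumed, not from Table 1)
a, b, d, delta = 1.0, 1.0, 0.5, 2.0
xs, ys = coexistence_fixed_point(a, b)

# E2 should map to itself under system (3)
x_next = xs + delta * xs * ((1.0 - xs) - ys / (xs + a * ys))
y_next = ys + delta * ys * d * (b - ys / xs)
print(np.allclose([x_next, y_next], [xs, ys]))  # expected: True
```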
Next, we analyze local stability at each fixed points for system (3). The Jacobian matrix of system (3) around arbitrary fixed point E(x,y) is given by
$$ J(x,y)= \left(\begin{array}{ll} j_{11} & j_{12}\\ j_{21} & j_{22} \end{array}\right) $$
$$j_{11}=1+\delta \left(1-2x + \frac{xy}{(x + ay)^{2}} - \frac{y}{x + ay} \right), j_{12}=\delta \left(\frac{a xy}{(x + ay)^{2}} - \frac{x}{x + ay} \right) $$
$$j_{21}=\frac{d \delta y^{2}}{x^{2}}, j_{22}=1- \frac{d \delta y}{x} + d \delta \left(b-\frac{y}{x} \right). $$
The characteristic equation of matrix J is
$$ \lambda^{2} +p(x,y) \lambda +q(x,y)=0 $$
where p(x,y) = −tr J = −(j11+j22) and q(x,y) = det J = j11j22 − j12j21.
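The stability classifications that follow can also be checked numerically by evaluating the Jacobian (5) at a fixed point and inspecting the moduli of its eigenvalues; the sketch below does this at E2 for assumed parameter values (not those of Table 1).

```python
import numpy as np

def jacobian(x, y, a, b, d, delta):
    """Jacobian matrix (5) of map (3) at the point (x, y)."""
    j11 = 1.0 + delta * (1.0 - 2.0 * x + x * y / (x + a * y) ** 2 - y / (x + a * y))
    j12 = delta * (a * x * y / (x + a * y) ** 2 - x / (x + a * y))
    j21 = d * delta * y ** 2 / x ** 2
    j22 = 1.0 - d * delta * y / x + d * delta * (b - y / x)
    return np.array([[j11, j12], [j21, j22]])

# Classify E2 for assumed parameters
a, b, d, delta = 1.0, 1.0, 0.5, 2.0
xs = (1.0 - b + a * b) / (1.0 + a * b)
ys = b * xs
eig = np.linalg.eigvals(jacobian(xs, ys, a, b, d, delta))
print(eig, "sink" if np.all(np.abs(eig) < 1.0) else "not a sink")
```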
Now, the topological classification of the stability of the fixed points, obtained by using Jury's criterion [20], is expressed as follows.
For the predator-free fixed point E1(1,0), the following topological classification holds:
(i) E1 is a saddle if 0<δ<2,
(ii) E1 is a source if δ>2,
(iii) E1 is a non-hyperbolic if δ=2.
$$FB_{E_{1}} = \left\{(a, b, d, \delta): \delta =2, a, b, d>0 \right\}. $$
It is obvious that when the parameters are in \(FB_{E_{1}}\), one of the eigenvalues of J(E1), λ1=1−δ and λ2=1+bdδ, equals −1 and the other is not equal to ±1. Therefore, system (3) experiences a flip bifurcation when the parameters change in a small vicinity of \(FB_{E_{1}}\).
At E2(x∗,y∗), the Eq. 6 becomes
$$F(\lambda):=\lambda^{2} - (2 + L \delta) \lambda +(1 + L \delta + M \delta^{2})=0, $$
$$\begin{aligned} L &= 1 + bd - 2 x^{*} - \frac{2 d y^{*}}{x^{*}} - \frac{a {y^{*}}^{2}}{\left(x^{*} + a y^{*}\right)^{2}} {,} \\ M &= d (b - 2b x^{*} + 4y^{*}) - \frac{2 d y^{*}}{x^{*}} - \frac{(2+ab) d {y^{*}}}{(x^{*} + a y^{*})^{2}} + \frac{(3x^{*}+2ay^{*}) d {y^{*}}^{2}}{x^{*} (x^{*} + a y^{*})^{2}}. \end{aligned} $$
Therefore, F(1)=Mδ2>0 and F(−1)=4+2Lδ+Mδ2.
For the topological classification of E2, we state the following proposition.
Suppose b<1+ab holds. Then, for the coexistence fixed point E2(x∗,y∗) of system (3), the following topological classification holds:
(i) E2 is a sink if one of the following conditions holds
(i.1) \(L^{2} - 4 M \geq 0 \;\; \text {and} \;\; \delta <\frac {- L - \sqrt {L^{2} - 4 M}}{M}\);
(i.2) \(L^{2} - 4 M<0 \;\; \text {and} \;\; \delta <- \frac {L}{M}\);
(ii) E2 is a source if one of the following conditions holds
(ii.1) \(L^{2} - 4 M \geq 0 \;\; \text {and} \;\; \delta >\frac {- L + \sqrt {L^{2} - 4 M}}{M}\);
(ii.2) \(L^{2} - 4 M<0 \;\; \text {and} \;\; \delta >- \frac {L}{M}\);
(iii) E2 is a non-hyperbolic if one of the following conditions holds
(iii.1) \(L^{2} - 4 M \geq 0 \;\; \text {and} \;\; \delta =\frac {- L \pm \sqrt {L^{2} - 4 M}}{M}\);
(iii.2) \(L^{2} - 4 M<0 \;\; \text {and} \;\; \delta =- \frac {L}{M}\);
(iv) E2 is a saddle if otherwise.
From Proposition 2, it can easily be seen that if condition (iii.1) holds, then the eigenvalues of J(E2) are λ1=−1 and λ2≠±1. If (iii.2) is true, then the eigenvalues of J(E2) are complex with modulus one.
$$FB^{1}_{E_{2}} = \left\{(a, b, d, \delta): \delta=\frac{- L - \sqrt{L^{2} - 4 M}}{M}, \; L^{2} - 4 M \geq0, a, b, d>0 \right\}, $$
$$FB^{2}_{E_{2}} = \left\{(a, b, d, \delta): \delta=\frac{- L + \sqrt{L^{2} - 4 M}}{M}, \; L^{2} - 4 M \geq0, a, b, d>0 \right\}. $$
Then system (3) experiences a flip bifurcation at E2 when the parameters (a,b,d,δ) vary in a small vicinity of either the set \(FB^{1}_{E_{2}}\) or the set \(FB^{2}_{E_{2}}\).
Also let
$${\text{NSB}}_{E_{2}} = \left\{(a, b, d, \delta): \delta=- \frac{L}{M}, \; L^{2} - 4 M<0, a, b, d>0 \right\},$$
then system (3) experiences a NS bifurcation at E2 if the parameters (a,b,d,δ) change around the set \({\text {NSB}}_{E_{2}}\).
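In practice, the critical step sizes in \(FB^{1}_{E_{2}}\), \(FB^{2}_{E_{2}}\), and \({\text {NSB}}_{E_{2}}\) can also be located numerically by scanning δ and monitoring the eigenvalues of J(E2): a real eigenvalue leaving the unit circle (through −1) signals the flip case, while a complex pair crossing modulus one signals the Neimark-Sacker case. The sketch below illustrates this with assumed parameter values.

```python
import numpy as np

def jacobian_at_E2(a, b, d, delta):
    """Jacobian (5) evaluated at the coexistence fixed point E2."""
    x = (1.0 - b + a * b) / (1.0 + a * b)
    y = b * x
    j11 = 1.0 + delta * (1.0 - 2.0 * x + x * y / (x + a * y) ** 2 - y / (x + a * y))
    j12 = delta * (a * x * y / (x + a * y) ** 2 - x / (x + a * y))
    j21 = d * delta * y ** 2 / x ** 2
    j22 = 1.0 - d * delta * y / x + d * delta * (b - y / x)
    return np.array([[j11, j12], [j21, j22]])

a, b, d = 1.0, 1.0, 0.5   # assumed values, not those of Table 1
for delta in np.arange(0.01, 5.0, 0.001):
    eig = np.linalg.eigvals(jacobian_at_E2(a, b, d, delta))
    if np.max(np.abs(eig)) > 1.0:
        kind = "Neimark-Sacker" if np.iscomplex(eig).any() else "flip"
        print(f"stability lost near delta = {delta:.3f} ({kind} scenario)")
        break
```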
Direction and stability analysis of bifurcation
Here, we determine the direction and stability of the flip bifurcation and the Neimark-Sacker bifurcation of system (3) around E2 via center manifold theory [19]. The integral step size δ is taken as the real bifurcation parameter.
Flip bifurcation: direction and stability
We take the parameters (a,b,d,δ) to lie arbitrarily in \({\text {FB}}^{1}_{E_{2}}\). Similar reasoning applies to the other set \({\text {FB}}^{2}_{E_{2}}\). Consider system (3) at the fixed point E2(x∗,y∗) with parameters lying in \({\text {FB}}^{1}_{E_{2}}\).
$$\delta=\delta_{F}=\frac{- L - \sqrt{L^{2} - 4 M}}{M},$$
then the eigenvalues of J(E2) are
$$\lambda_{1}(\delta_{F})=-1 \;\; \text{and} \;\; \lambda_{2}(\delta_{F})=3 + L \delta_{F}.$$
In order to have |λ2(δF)|≠1, we require
$$ L \delta_{F} \ne -2, -4. $$
Assume that \(\tilde {x}=x-x^{*}, \;\;\tilde {y}=y-y^{*}\), and set A(δ)=J(x∗,y∗). Then, we transform the fixed point (x∗,y∗) of system (3) to the origin. By Taylor expansion, system (3) can be written as
$$ \left(\begin{array}{l} \tilde{x} \\ \tilde{y} \end{array}\right) \rightarrow A(\delta) \left(\begin{array}{l} \tilde{x} \\ \tilde{y} \end{array}\right) + \left(\begin{array}{l} F_{1}(\tilde{x}, \tilde{y}, \delta) \\ F_{2}(\tilde{x}, \tilde{y}, \delta) \end{array}\right) $$
where \(X=(\tilde {x}, \tilde {y})^{T}\) is the vector of the transformed system and
$$ {\begin{aligned} F_{1}(\tilde{x}, \tilde{y}, \delta) &= \frac{1}{6} \left[-\frac{6 a \delta {y^{*}}^{2}}{(x^{*}+ay^{*})^{4}} \tilde{x}^{3} - \frac{6 a^{2} \delta {x^{*}}^{2}}{(x^{*}+ay^{*})^{4}} \tilde{y}^{3} - \frac{6a \delta y^{*}(-2x^{*} +a y^{*})}{(x^{*}+ay^{*})^{4}} \tilde{x}^{2} \tilde{y} + \frac{6a \delta x^{*}(-x^{*} + 2a y^{*})}{(x^{*}+ay^{*})^{4}} \tilde{x} \tilde{y}^{2} \right] \\ & + \frac{1}{2} \left[\delta \left(-2 - \frac{2x^{*} y^{*}}{(x^{*}+ay^{*})^{3}} + \frac{2y^{*}}{(x^{*}+ay^{*})^{2}} \right) \tilde{x}^{2} + \frac{2a \delta {x^{*}}^{2}}{(x^{*}+ay^{*})^{3}} \tilde{y}^{2} - \frac{4 a \delta x^{*} y^{*}}{(x^{*}+ay^{*})^{3}} \tilde{x} \tilde{y} \right] \\ & \qquad+ O(\|X\|^{4}) \\ F_{2}(\tilde{x}, \tilde{y}, \delta) &= \frac{d \delta}{{x^{*}}^{4}} \tilde{x} \left({x^{*} \tilde{y} - y^{*} \tilde{x}} \right)^{2} - \frac{d \delta}{{x^{*}}^{3}} \left({x^{*} \tilde{y} - y^{*} \tilde{x}} \right)^{2} + O(\|X\|^{4}) \end{aligned}} $$
The system (8) can be expressed as \(X_{n+1}=AX_{n} + \frac {1}{2}B(X_{n},X_{n}) + \frac {1}{6}C(X_{n},X_{n},X_{n}) + O({X_{n}}^{4})\) where \( B(x, y) = \left (\begin {array}{l} B_{1}(x, y) \\ B_{2}(x, y) \end {array}\right) \) and \( C(x, y, u) = \left (\begin {array}{l} C_{1}(x, y, u) \\ C_{2}(x, y, u) \end {array}\right) \)are symmetric multi-linear vector functions of \(x, y, u \in \mathbb {R}^{2}\) and defined as follows:
$${\begin{aligned} B_{1}(x, y) & = \sum_{j, k=1}^{2} \left. \frac{\partial^{2} F_{1}(\xi, \delta)}{\partial \xi_{j} \partial \xi_{k}} \right|_{\xi=0}\; {x_{j} y_{k}} = -\frac{2a \delta x^{*}y^{*}}{(x^{*}+ay^{*})^{3}} ({x_{1} y_{2}} + {x_{2} y_{1}}) + \frac{2a \delta {x^{*}}^{2}}{(x^{*}+ay^{*})^{3}}{x_{2} y_{2}}\\ & \qquad + \delta \left(-2 - \frac{2x^{*} y^{*}}{(x^{*}+ay^{*})^{3}} + \frac{2y^{*}}{(x^{*}+ay^{*})^{2}} \right) {x_{1} y_{1}}, \\ B_{2}(x, y) & = \sum_{j, k=1}^{2} \left. \frac{\partial^{2} F_{2}(\xi, \delta)}{\partial \xi_{j} \partial \xi_{k}} \right|_{\xi=0}\; {x_{j} y_{k}} = - \frac{2 d \delta}{x^{*}} {x_{2} y_{2}} + \frac{2 d \delta y^{*}}{{x^{*}}^{2}} ({x_{1} y_{2}} + {x_{2} y_{1}}) - \frac{2 d \delta {y^{*}}^{2}}{{x^{*}}^{3}} {x_{1} y_{1}}, \end{aligned}} $$
$${\begin{aligned} C_{1}(x, y, u) & = \sum_{j, k, l=1}^{2} \left. \frac{\partial^{3} F_{1}(\xi, \delta)}{\partial \xi_{j} \partial \xi_{k} \partial \xi_{l}} \right|_{\xi=0}\; {x_{j} y_{k} u_{l}} = - \frac{6a \delta {y^{*}}^{2}}{(x^{*}+ay^{*})^{4}} {x_{1} y_{1} u_{1}} - \frac{6a^{2} \delta {x^{*}}^{2}}{(x^{*}+ay^{*})^{4}} {x_{2} y_{2} u_{2}}\\ & \qquad - \frac{2ay^{*} (-2x^{*} + ay^{*}) \delta}{(x^{*}+ay^{*})^{4}} (x_{1} y_{2} u_{1} + x_{2} y_{1} u_{1} + x_{1} y_{1} u_{2}) \\ & \qquad + \frac{2ax^{*} (-x^{*} + 2ay^{*}) \delta}{(x^{*}+ay^{*})^{4}} (x_{1} y_{2} u_{2} + x_{2} y_{1} u_{2} + x_{2} y_{2} u_{1}),\\ C_{2}(x, y, u) & = \sum_{j, k, l=1}^{2} \left. \frac{\partial^{3} F_{2}(\xi, \delta)}{\partial \xi_{j} \partial \xi_{k} \partial \xi_{l}} \right|_{\xi=0}\; {x_{j} y_{k} u_{l}} = \frac{2 d \delta}{{x^{*}}^{2}} (x_{1} y_{2} u_{2} + x_{2} y_{1} u_{2} + x_{2} y_{2} u_{1}) \\ & \qquad - \frac{4 d \delta {y^{*}}}{{x^{*}}^{3}} (x_{1} y_{1} u_{2} + x_{1} y_{2} u_{1} + x_{2} y_{1} u_{1}) + \frac{6 d \delta {y^{*}}^{2}}{{x^{*}}^{4}} {x_{1} y_{1} u_{1}}. \end{aligned}} $$
Let \(p, q \in \mathbb {R}^{2}\) be eigenvectors of A and of the transposed matrix AT, respectively, for λ1(δF)=−1. Then, we have
$$A(\delta_{F})q=-q \quad \text{and} \quad A^{T}(\delta_{F})p=-p.$$
Direct computation shows
$$\begin{array}{*{20}l} q & \sim \left(2 + bd \delta_{F} - \frac{2 d \delta_{F} {y^{*}}}{x^{*}}, - \frac{d \delta_{F} {y^{*}}^{2}}{{x^{*}}^{2}} \right)^{T}, \\ p & \sim \left(2 + bd \delta_{F} - \frac{2d \delta_{F} {y^{*}}}{x^{*}}, \frac{\delta_{F} {x^{*}}^{2}}{(x^{*}+ay^{*})^{2}} \right)^{T}. \end{array} $$
We use 〈p,q〉=p1q1+p2q2, standard scalar product in \(\mathbb {R}^{2}\) to normalize the vectors p and q. Setting the normalized vectors as
$$\begin{array}{*{20}l} q & = \left(2 + bd \delta_{F} - \frac{2 d \delta_{F} {y^{*}}}{x^{*}}, - \frac{d \delta_{F} {y^{*}}^{2}}{{x^{*}}^{2}} \right)^{T}, \\ p & = \gamma_{1} \left(2 + bd \delta_{F} - \frac{2d \delta_{F} {y^{*}}}{x^{*}}, \frac{\delta_{F} {x^{*}}^{2}}{(x^{*}+ay^{*})^{2}} \right)^{T}. \end{array} $$
where \(\gamma _{1} =\frac {1}{(2 + bd \delta _{F} - \frac {2d \delta _{F} {y^{*}}}{x^{*}})^{2} - \frac {d \delta ^{2}_{F} {y^{*}}^{2}}{(x^{*}+ay^{*})^{2}}}.\) We see that 〈p,q〉=1.
The sign of coefficient l1(δF) determines the direction of flip bifurcation and is computed by
$$ l_{1}(\delta_{F}) = \frac{1}{6} \langle p, C(q, q, q) \rangle - \frac{1}{2} \langle p, B(q, (A-I)^{-1} B(q, q)) \rangle $$
We summarize the above discussion in the following theorem for the direction and stability of the flip bifurcation.
Assume that (7) holds. Then, if l1(δF)≠0 and the parameter δ varies its value in a small vicinity of \(FB^{1}_{E_{2}}\), system (3) experiences a flip bifurcation at the positive fixed point E2(x∗,y∗). Moreover, if l1(δF)>0 (resp., l1(δF)<0), then stable (resp., unstable) period-2 orbits bifurcate from E2(x∗,y∗).
Neimark-Sacker bifurcation: direction and stability
Next, we take the parameters (a,b,d,δ) to lie arbitrarily in \({\text {NSB}}_{E_{2}}\). Consider system (3) at the fixed point E2(x∗,y∗) with parameters varying in the vicinity of \({\text {NSB}}_{E_{2}}\). Then, the roots (eigenvalues) of Eq. 6 are a pair of complex conjugates given by
$$\lambda, \bar{\lambda}=\frac{-p(\delta) \pm i \sqrt{4q(\delta)-p(\delta)^{2}}}{2}=1 + \frac{L \delta}{2} \pm \frac{i \delta}{2} \sqrt{4M - L^{2}}.$$
$$ \delta = \delta_{{\text{NS}}} =- \frac{L}{M} $$
Then, we have \(|\lambda |=\sqrt {q(\delta _{{\text {NS}}})}=1.\)
From the transversality condition, we get
$$ \frac{d|\lambda(\delta)|}{d\delta}|_{\delta = \delta_{{\text{NS}}}} =-\frac{L}{2} \ne 0 $$
Moreover, the nonresonance condition p(δNS)≠0,1 is obviously satisfied when
$$ \frac{L^{2}}{M}\ne 2, 3 $$
$$ \lambda^{k}(\delta_{{\text{NS}}}) \ne 1 \;\; \text{for} \; k=1, 2, 3, 4 $$
Let \(q, p \in \mathbb {C}^{2}\) be eigenvectors of A(δNS) and AT(δNS) for eigenvalues λ(δNS) and \(\bar {\lambda }(\delta _{{\text {NS}}})\) respectively such that
$$A(\delta_{{\text{NS}}}) q = \lambda(\delta_{{\text{NS}}}) q, \;\quad A(\delta_{{\text{NS}}}) \bar{q} = \bar{\lambda}(\delta_{{\text{NS}}}) \bar{q} $$
$$A^{T}(\delta_{{\text{NS}}}) p = \bar{\lambda}(\delta_{{\text{NS}}}) p, \;\quad A^{T}(\delta_{{\text{NS}}}) \bar{p} = \lambda(\delta_{{\text{NS}}}) \bar{p}. $$
By direct calculation, we obtain
$$\begin{array}{*{20}l} q & \sim \left(1 + bd \delta_{{\text{NS}}} - \frac{2d \delta_{{\text{NS}}} y^{*}}{x^{*}} - \lambda, - \frac{d \delta_{{\text{NS}}} {y^{*}}^{2}}{{x^{*}}^{2}} \right)^{T}, \\ p & \sim \left(1 + bd \delta_{{\text{NS}}} - \frac{2 d \delta_{{\text{NS}}} y^{*}}{x^{*}} - \bar{\lambda}, \frac{\delta_{{\text{NS}}} x^{*}}{(x^{*}+ay^{*})^{2}} \right)^{T}. \end{array} $$
For normalization of the vectors p and q, we set \(p = \gamma _{2} \left (1 + bd \delta _{{\text {NS}}} - \frac {2 d \delta _{{\text {NS}}} y^{*}}{x^{*}} - \bar {\lambda }, \frac {\delta _{{\text {NS}}} x^{*}}{(x^{*}+ay^{*})^{2}} \right)^{T}\) where
$$\gamma_{2} =\frac{1}{(1 + bd \delta_{{\text{NS}}} - \frac{2 d \delta_{{\text{NS}}} y^{*}}{x^{*}} - \bar{\lambda})^{2} - \frac{d \delta^{2}_{{\text{NS}}} {x^{*}}^{2} {y^{*}}^{2}}{x^{*}(x^{*}+ay^{*})^{2}}}.$$
Then we see that \(\langle p, q \rangle =\bar {p_{1}} q_{2} + \bar {p_{2}} q_{1}=1\).
When δ is close to δNS and \(z \in \mathbb {C}\), the vector \(X \in \mathbb {R}^{2}\) can be decomposed uniquely as \(X = zq + \bar {z}\bar {q}\).
It is obvious that z=〈p,X〉. Thus, we obtain the following transformed form of system (8) for all sufficiently small |δ| near δNS:
$$z \mapsto \lambda(\delta)z + g(z, \bar{z}, \delta),$$
where λ(δ)=(1+φ(δ))eiθ(δ) with φ(δNS)=0 and \(g(z, \bar {z}, \delta)\) is a smooth complex-valued function. According to Taylor expression, the function g can be written as
$$g(z, \bar{z}, \delta) = \sum_{k+l \ge2} {\frac{1}{k! l!}} g_{kl}(\delta) z^{k} {\bar{z}}^{l}, \;\; \text{with} \;\; g_{kl} \in \mathbb{C}, \; k, l = 0, 1,\cdots.$$
By symmetric multi-linear vector functions, the Taylor coefficients gkl are obtained as
$$\begin{array}{*{20}l} g_{20}(\delta_{{\text{NS}}}) & = \langle p, B(q, q) \rangle,\\ g_{11}(\delta_{{\text{NS}}}) & = \langle p, B(q, \bar{q}) \rangle\\ g_{02}(\delta_{{\text{NS}}}) & = \langle p, B(\bar{q}, \bar{q}) \rangle,\\ g_{21}(\delta_{{\text{NS}}}) & = \langle p, C(q, q, \bar{q}) \rangle, \end{array} $$
The coefficient l2(δNS) which determines the direction of Neimark-Sacker bifurcation in a generic system exhibiting invariant closed curve can be calculated via
\(\phantom {\dot {i}\!}l_{2}(\delta _{{\text {NS}}}) = \text {Re} \left (\frac {e^{- i \theta (\delta _{{\text {NS}}})} g_{21}}{2} \right) - \text {Re} \left (\frac {(1-2e^{i \theta (\delta _{{\text {NS}}})}) e^{-2 i \theta (\delta _{{\text {NS}}})}}{2(1-e^{i \theta (\delta _{{\text {NS}}})})} g_{20} g_{11} \right) - \frac {1}{2} |g_{11}|^{2} - \frac {1}{4} |g_{02}|^{2},\) where \(\phantom {\dot {i}\!}e^{i \theta (\delta _{{\text {NS}}})} = \lambda (\delta _{{\text {NS}}})\).
Summarizing the above analysis, we present the following theorem for the direction and stability of the Neimark-Sacker bifurcation.
Suppose that (13) holds and l2(δNS)≠0. If the parameter δ varies its value in small neighborhood of \({\text {NSB}}_{E_{2}}\), then system (3) experiences a Neimark-Sacker bifurcation at positive fixed point E2. Moreover, if the sign of l2(δNS) is negative (resp., positive), then a unique invariant closed curve bifurcates from E2 which is attracting (resp., repelling) and the Neimark-Sacker bifurcation is supercritical (resp., subcritical).
Numerical simulations
In this section, numerical simulations are performed to validate our theoretical results, in particular bifurcation diagrams of system (3) at the fixed point E2, phase portraits, maximum Lyapunov exponents, and fractal dimensions corresponding to the bifurcation diagrams. For the bifurcation analysis, we consider different sets of parameter values as given in Table 1:
Table 1 Parameter values
Flip bifurcation with bifurcation parameter δ covering [2.6, 3.86]
We set the values of the parameters as given in case (i). By calculation, we obtain a unique fixed point E2(0.75,0.75) of system (3). The flip bifurcation point is evaluated as δF=2.95569. It is observed that system (3) experiences a flip bifurcation around E2 when δ passes its critical value δF. Also, at δ=δF the corresponding eigenvalues are λ1=−1, λ2=0.672397, with l1(δF)=20.5735 and \((a, b, d, \delta) \in {\text {FB}}^{1}_{E_{2}}\). This shows the correctness of Theorem 1.
The bifurcation diagrams shown in Fig. 1a, b reveal that the fixed point E2 is stable for δ<δF, loses its stability at δ=δF, and for δ>δF there exists a period-doubling phenomenon leading to chaos. It is also seen that period-2, -4, -8, and -16 orbits emerge for δ∈[2.6, 3.668], a chaotic set exists for δ∈[3.69, 3.86], and a period-12 orbit occurs at δ=3.7924 within the chaotic window δ∈[3.69, 3.86], causing a dynamic transition from periodic behaviors to chaos. The maximum Lyapunov exponents (MLE) and fractal dimension (FD) related to Fig. 1a, b are displayed in Fig. 1c, d, which confirm the stable, periodic, or chaotic states existing in system (3).
Flip bifurcation of system (3). a Bifurcation in prey. b Bifurcation in predator. c Maximum Lyapunov exponents related to a, b. d FD corresponding to a. (x0,y0)=(0.74,0.74)
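A bifurcation diagram of this type can be reproduced with a short script that iterates map (3) over a grid of δ values, discards a transient, and plots the remaining prey states. The sketch below is schematic and uses assumed parameter values rather than those of case (i) listed in Table 1.

```python
import numpy as np
import matplotlib.pyplot as plt

def step(x, y, a, b, d, delta):
    """One iteration of map (3)."""
    return (x + delta * x * ((1.0 - x) - y / (x + a * y)),
            y + delta * y * d * (b - y / x))

a, b, d = 1.0, 1.0, 0.5                    # assumed parameters, not those of Table 1
d_vals, x_vals = [], []
for delta in np.linspace(2.6, 3.86, 500):
    x, y = 0.74, 0.74
    for n in range(900):                   # 800 transient + 100 recorded iterations
        x, y = step(x, y, a, b, d, delta)
        if not (np.isfinite(x) and np.isfinite(y)) or x <= 0:
            break                          # orbit escaped; skip this delta
        if n >= 800:
            d_vals.append(delta)
            x_vals.append(x)

plt.plot(d_vals, x_vals, ",k")
plt.xlabel(r"$\delta$"); plt.ylabel("x")
plt.title("Bifurcation diagram of map (3), illustrative parameters")
plt.show()
```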
Neimark-Sacker (NS) bifurcation with bifurcation parameter δ covering [2.0, 4.5]
With the variation of the parameter a, the predator-prey system (3) exhibits much richer dynamics through the emergence of an NS bifurcation. We take the parameters as given in case (ii). After calculation, we find a unique fixed point E2(0.375,0.375) of system (3). The NS bifurcation point is obtained as δNS=2.25. It is seen that an NS bifurcation emerges around the fixed point E2 when δ passes through δNS. Also, we have \(\lambda, \bar {\lambda} = 0.905078 \pm 0.425245 i\), g20 = 0.359863 + 0.156709i, g11 = 0.50625 − 0.499099i, g02 = −1.27625 + 0.58727i, g21 = −1.27332 + 1.64876i, l2(δNS) = −0.742379, and \((a, b, d, \delta) \in {\text {NSB}}_{E_{2}}\). The correctness of Theorem 2 is verified.
The bifurcation diagrams depicted in Fig. 2a, b illustrate that the fixed point E2 is stable for δ<δNS, loses its stability near δ=δNS, and an attracting invariant closed cycle appears for δ>δNS. The maximum Lyapunov exponents related to Fig. 2a, b are displayed in Fig. 2c, which exhibits the existence of periodic orbits and chaos as the parameter δ increases. These results indicate that the NS bifurcation instigates a route to chaos through a dynamic transition from a stable state to an invariant closed cycle, with periodic and quasi-periodic states occurring in between, and finally to chaotic sets. For instance, a chaotic set is observed at δ∼4.15, which is consistent with the sign of the maximum Lyapunov exponent. Figure 2d is a local amplification of Fig. 2a for δ∈[3.5, 4.0].
NS bifurcation of system (3). a NS bifurcation in prey, b NS bifurcation in predator, c maximum Lyapunov exponents corresponding to a, b. d Local amplification diagram in a for δ∈[3.5,4.0].e FD associated with a. (x0,y0) = (0.37,0.37)
Figure 3 explicitly shows that as the value of δ increases, there is an alternation between periodic or quasi-periodic behaviors and invariant-cycle or chaotic behavior. For different values of δ, phase portraits of system (3) associated with Fig. 2a, b are plotted in Fig. 3, illustrating the existence of period-10, -19, -38, and -9 orbits and chaos in system (3) at δ∼3.6, δ∼3.9, δ∼3.95, δ∼4.075, and δ∼4.5, respectively.
Phase portraits for different values of δ corresponding to Fig. 2a, b
Flip-NS bifurcation with bifurcation parameter a covering [0.6, 3.0]
When we set the parameter values as in case (iii), a new bifurcation diagram is obtained, as plotted in Fig. 4. This illustrates that the predator-prey system (3) experiences a Neimark-Sacker bifurcation and a flip bifurcation together as the parameter a passes its critical values. The system first exhibits chaotic dynamics for small values of a. However, as the value of a increases, the chaotic dynamics of the predator-prey system suddenly disappear through an NS bifurcation occurring first at a∼0.691714, and the system dynamics jump to a stable state. Thereafter, we find that the predator-prey system undergoes a flip bifurcation at a∼1.92452, and then period-doubling phenomena trigger a route to chaos. The maximum Lyapunov exponents (MLE) and fractal dimension (FD) related to Fig. 4a, b are displayed in Fig. 4c, d, which confirm the dynamic transition in system (3) from a chaotic set to a stable window and vice versa.
NS-flip bifurcation of system (3). a Bifurcation for prey, b bifurcation for predator, c maximum Lyapunov exponents related to a, b. d FD associated with a. (x0,y0)=(0.37,0.37)
Bifurcation for parameters δ and a
Suppose the parameters are taken as given in case (iv). When two model parameters change through their critical values, system (3) can exhibit complex dynamic behavior. Under this parametric condition, the 3D bifurcation diagrams in (δ,a,x)-space are displayed in Fig. 5a. Figure 5b shows the 2D projection of the maximum Lyapunov exponents onto the (δ,a) plane. It is now easy to determine values of the bifurcation parameters for which the dynamics of system (3) change from a non-chaotic state to a periodic or chaotic state. For instance, unstable chaotic trajectories appear in the system for parameters δ = 4.15, a = 0.6, whereas stable trajectories appear for δ = 3.6, a = 0.6 (see Fig. 3), in agreement with the signs presented in Fig. 5b. It is also remarkable from Fig. 5a that with the growth of the parameter a, the predator-prey system (3) experiences an NS bifurcation first and then a flip bifurcation, and in between a stable window appears, showing that the predator and prey coexist in an oscillatory balance behavior.
Bifurcation diagram in (δ−a−x) space. (a) the transition between NS bifurcation and flip bifurcation for prey when δ∈[2.0,4.5], a = 0.6,1.0,1.5,2.0,2.5,3.0∈[0.6,3.0]. b The 2D projection of 3D maximum Lyapunov exponents onto (δ,a) plane. (x0,y0) = (0.37,0.37)
Fractal dimension
In order to characterize the strange attractors existing in a system, one can measure the fractal dimension (FD) [21, 22], which is defined by
$$d_{L} = j + \frac{\sum_{i=1}^{j} h_{i}}{|h_{j+1}|} $$
where h1,h2,...,hn are Lyapunov exponents and j is the largest integer such that \(\sum _{i=1}^{j} h_{i} \ge 0\) and \(\sum _{i=1}^{j+1} h_{i} < 0\).
For system (3), the fractal dimension dL takes the form
$$d_{L} =1 + \frac{h_{1}}{|h_{2}|}, \quad h_{1}>0>h_{2} \quad \text{and} \quad h_{1}+h_{2}<0. $$
Considering the parameter values given in case (ii), the FD of system (3) is plotted in Fig. 2e. The strange attractors of system (3) (see Fig. 3) and the corresponding FD (see Fig. 2e) confirm that growth of the parameter δ causes chaotic dynamics for the discrete-time ratio-dependent Holling-Tanner system.
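For completeness, the two Lyapunov exponents h1, h2 of map (3), and hence dL, can be estimated by accumulating QR factorizations of the Jacobian along an orbit. The sketch below is a generic implementation of that standard procedure with assumed parameter values; below the bifurcation both exponents are negative and dL is undefined, while in a chaotic regime one expects h1 > 0 and 1 < dL < 2.

```python
import numpy as np

def step(x, y, a, b, d, delta):
    return (x + delta * x * ((1.0 - x) - y / (x + a * y)),
            y + delta * y * d * (b - y / x))

def jacobian(x, y, a, b, d, delta):
    j11 = 1.0 + delta * (1.0 - 2.0 * x + x * y / (x + a * y) ** 2 - y / (x + a * y))
    j12 = delta * (a * x * y / (x + a * y) ** 2 - x / (x + a * y))
    j21 = d * delta * y ** 2 / x ** 2
    j22 = 1.0 - d * delta * y / x + d * delta * (b - y / x)
    return np.array([[j11, j12], [j21, j22]])

def lyapunov_and_fd(a, b, d, delta, x0=0.37, y0=0.37, n_iter=20000, n_transient=1000):
    """Estimate (h1, h2) by QR accumulation and the Kaplan-Yorke dimension d_L."""
    x, y = x0, y0
    for _ in range(n_transient):
        x, y = step(x, y, a, b, d, delta)
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(jacobian(x, y, a, b, d, delta) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        x, y = step(x, y, a, b, d, delta)
    h = np.sort(sums / n_iter)[::-1]          # h1 >= h2
    d_L = 1.0 + h[0] / abs(h[1]) if h[0] > 0 > h[1] and h.sum() < 0 else float("nan")
    return h, d_L

print(lyapunov_and_fd(a=1.0, b=1.0, d=0.5, delta=2.9))   # assumed parameters
```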
Controlling chaos
We apply the state feedback control method [20] to control the chaos existing in system (3) in the regime of unstable trajectories. By adding a feedback control law as the control force un to system (3), the controlled system becomes
$$ \begin{aligned} x_{n+1} &=& x_{n}+\delta x_{n} \left[ \left(1-x_{n} \right)-\frac{y_{n}}{x_{n}+ay_{n}} \right] + u_{n} \\ y_{n+1} &=& y_{n}+\delta y_{n} \left[d\left(b-\frac{y_{n}}{x_{n}}\right) \right] \end{aligned} $$
$$u_{n} = -k_{1}(x_{n} - x^{*}) - k_{2}(y_{n} - y^{*}) $$
where k1 and k2 denote the feedback gains and (x∗,y∗) represents the coexistence fixed point of system (3).
The Jacobian matrix Jc of the controlled system (15) is
$$ \begin{aligned} J_{c}(x^{*}, y^{*})= \left(\begin{array}{cc} j_{11}-k_{1} & j_{12}-k_{2} \\ \quad j_{21} & \quad j_{22} \end{array}\right) \end{aligned} $$
where \(j_{11}=1+\delta \left (1-2x + \frac {xy}{(x + ay)^{2}} - \frac {y}{x + ay} \right), j_{12}=\delta \left (\frac {a xy}{(x + ay)^{2}} - \frac {x}{x + ay} \right), j_{21}=\frac {d \delta y^{2}}{x^{2}}, j_{22}=1- \frac {d \delta y}{x} + d \delta \left (b-\frac {y}{x} \right)\) are evaluated at (x∗,y∗). The characteristic equation of Jc(x∗,y∗) is
$$ \lambda^{2}-(trJ_{c}) \lambda +detJ_{c}=0 $$
$$\begin{array}{*{20}l} trJ_{c} & = j_{11} + j_{22} - k_{1},\\ detJ_{c} & = j_{22}(j_{11} - k_{1}) - j_{21}(j_{12} - k_{2}). \end{array} $$
Let λ1 and λ2 be the roots of (17). Then,
$$ \lambda_{1} + \lambda_{2}=j_{11} + j_{22} - k_{1} $$
$$ \lambda_{1} \lambda_{2}= j_{22}(j_{11} - k_{1}) - j_{21}(j_{12} - k_{2}) $$
The lines of marginal stability are determined by solving the equations λ1 = ±1 and λ1λ2 = 1; within the region bounded by these lines, |λ1,2| < 1. First, suppose that λ1λ2=1. Then, from (19), we have
$$l_{1}: j_{22} k_{1}-j_{21} k_{2}=j_{11} j_{22}-j_{12} j_{21}-1.$$
Now assume that λ1=1, then from (18) and (19), we get
$$l_{2}: (1-j_{22}) k_{1}+j_{21} k_{2}=j_{11} + j_{22}-1-j_{11} j_{22}+j_{12} j_{21}.$$
Next, assume that λ1 = − 1, then from (18) and (19), we obtain
$$l_{3}: (1+j_{22}) k_{1}-j_{21} k_{2}=j_{11} + j_{22}+1+j_{11} j_{22}-j_{12} j_{21}.$$
Then, the lines l1, l2, and l3 (see Fig. 6a) in the (k1,k2) plane bound a triangular region in which |λ1,2|<1.
Controlling chaos in system (15). a Stability region in (k1,k2) plane, (b, c). Time series for states x and y, respectively
We have carried out numerical simulations to check how the implementation of the feedback control method works and controls chaos in an unstable state. We take the parameter values as in case (ii) with δ = 2.126 fixed, the feedback gains k1 = −1.3 and k2 = 0.16, and the initial value (x0,y0) = (0.65,0.95). The region of stable eigenvalues in the (k1,k2) plane is plotted in Fig. 6a. We show numerically that the chaotic trajectory is stabilized at the fixed point (0.662032,0.993048); see Fig. 6b, c.
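The same kind of experiment is easy to reproduce in code by iterating the controlled system (15). The sketch below uses assumed model parameters and gains (not the case (ii) values above), first checks that the chosen gains place the eigenvalues of Jc inside the unit circle, and then runs the controlled orbit.

```python
import numpy as np

# Assumed model parameters and gains (illustrative only, not the case (ii) values)
a, b, d, delta = 1.0, 1.0, 0.5, 3.2        # delta chosen so that E2 is unstable here
k1, k2 = -0.4, -0.575                      # gains picked so that Jc is stable for this setup
x_star = (1.0 - b + a * b) / (1.0 + a * b)
y_star = b * x_star

def controlled_step(x, y):
    """One iteration of the controlled system (15)."""
    u = -k1 * (x - x_star) - k2 * (y - y_star)
    x_new = x + delta * x * ((1.0 - x) - y / (x + a * y)) + u
    y_new = y + delta * y * d * (b - y / x)
    return x_new, y_new

# Local stability check of the controlled fixed point via the Jacobian (16)
j11 = 1.0 + delta * (1.0 - 2.0 * x_star + x_star * y_star / (x_star + a * y_star) ** 2
                     - y_star / (x_star + a * y_star))
j12 = delta * (a * x_star * y_star / (x_star + a * y_star) ** 2 - x_star / (x_star + a * y_star))
j21 = d * delta * y_star ** 2 / x_star ** 2
j22 = 1.0 - d * delta * y_star / x_star + d * delta * (b - y_star / x_star)
Jc = np.array([[j11 - k1, j12 - k2], [j21, j22]])
print("|eigenvalues of Jc|:", np.abs(np.linalg.eigvals(Jc)))   # both should be < 1

x, y = 0.45, 0.55
for _ in range(500):
    x, y = controlled_step(x, y)
print("controlled orbit:", (round(x, 4), round(y, 4)))  # expected to settle close to E2 = (0.5, 0.5)
```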
This work is concerned with the dynamics and chaos control of a discrete-time ratio-dependent Holling-Tanner model in \(\mathbb {R}^{2}_{+}\). By center manifold theory, we determine the existence conditions and directions of the flip and NS bifurcations of system (3) around the coexistence fixed point. In particular, we show that system (3) experiences a flip or NS bifurcation at the unique coexistence fixed point if the parameter δ varies in the neighborhood of \({\text {FB}}^{1}_{E_{2}}\) or \({\text {NSB}}_{E_{2}}\). Based on Figs. 1 and 2, we notice that a small integral step size δ can stabilize the dynamical system (3), but a large integral step size may destabilize the system, producing more complex dynamical behaviors. To see how the integral step size plays a key role in the dynamical behaviors, we carry out numerical simulations which reveal unpredictable dynamics of the system, including period-2, -4, -8, -12, and -16 orbits via the flip bifurcation, and period-9, -10, -19, and -38 orbits, invariant closed cycles, and chaotic sets via the NS bifurcation, respectively. In addition, from Fig. 4, we can see that an appropriate choice of the parameter a can stabilize the dynamical system (3), whereas low or high values of a may destabilize it. Thus, with the increase of the parameter a, it is shown that system (3) experiences NS and flip bifurcations together. The two bifurcations cause the system to jump from a steady state to chaotic dynamical behavior via periodic and quasi-periodic states and trigger routes to chaos, and vice versa; that is, chaotic dynamics appear or disappear along with the emergence of bifurcations.
Moreover, in the 3D bifurcation diagrams, we observe a dynamic transition between the NS bifurcation and the flip bifurcation. Through the two-dimensional parameter spaces, we also notice that the system dynamics can be periodic, quasi-periodic, or chaotic. These complex dynamic behaviors of the populations caused by the flip and NS bifurcations can be explained ecologically: the predator and prey population densities can fluctuate at regular or irregular intervals, or the predator-prey system becomes unstable [23, 24]. We can also say that when the prey population is abundant, the consumption of prey by the predator may have only a marginal effect on the dynamics of the prey. The presence of chaos is verified by the signs of the maximum Lyapunov exponents and the fractal dimension. Finally, we provide a state feedback control method to control chaos along unstable trajectories. We expect to explore more analytical results on multiple-parameter bifurcations of the system in the future.
Availability of data and material
Li, Y., Xiao, D.: Bifurcations of a predator-prey system of Holling and Leslie types. Chaos Solit. Fract. 34(2), 606–620 (2007). https://doi.org/10.1016/j.chaos.2006.03.068.
Hsu, S. B., Hwang, T. W.: Global stability for a class of predator–prey systems. SIAM J. Appl. Math. 55, 763–783 (1995).
Gasull, A., Kooij, R. E., Torregrosa, J.: Limit cycles in the Holling-Tanner model. Publ. Mat. 41, 149–167 (1997).
Liang, Z., Pan, H.: Qualitative analysis of a ratio-dependent Holling-Tanner model. J. Math. Anal. Appl. 334, 954–964 (2007).
He, Z. M., Lai, X.: Bifurcation and chaotic behavior of a discrete-time predator-prey system. Nonlinear Anal. Real World Appl. 12, 403–417 (2011).
He, Z. M., Li, B.: Complex dynamic behavior of a discrete-time predator-prey system of Holling-III type. Adv. Differ. Equ. 180 (2014).
Rana, S. M. S.: Bifurcation and complex dynamics of a discrete-time predator-prey system with simplified Monod-Haldane functional response. Adv. Differ. Equ. 345 (2015). https://doi.org/10.1186/s13662-015-0680-7.
Rana, S. M. S.: Chaotic dynamics and control of discrete ratio-dependent predator-prey system. Discret. Dyn. Nat. Soc., 1–13 (2017). https://doi.org/10.1155/2017/4537450.
Rana, S. M. S., Kulsum, U.: Bifurcation analysis and chaos control in a discrete-time Predator-Prey System of Leslie Type with Simplified Holling Type IV Functional Response. Discret. Dyn. Nat. Soc. (2017). https://doi.org/10.1155/2017/9705985.
Rana, S. M. S.: Bifurcations and chaos control in a discrete-time predator-prey system of Leslie type. J. Appl. Anal. Comput. 9(1), 31–44 (2019). https://doi.org/10.11948/2019.31.
Tan, W., Gao, J., Fan, W.: Bifurcation Analysis and Chaos Control in a Discrete Epidemic System. Discret. Dyn. Nat. Soc. (2015). https://doi.org/10.1155/2015/974868.
Zhao, M., Xuan, Z., Li, C.: Dynamics of a discrete-time predator-prey system. Adv. Differ. Equ. 191 (2016). https://doi.org/10.1186/s13662-016-0903-6.
Zhao, M., Li, C., Wang, J.: Complex dynamic behaviors of a discrete-time predator-prey system. J. Appl. Anal. Comput. 7(2), 478–500 (2017). https://doi.org/10.11948/2017030.
Liu, W., Cai, D.: Bifurcation, chaos analysis and control in a discrete-time predator-prey system. Adv. Differ. Equ. 11 (2019).
Kangalgil, F.: Neimark-Sacker bifurcation and stability analysis of a discrete-time prey–predator model with Allee effect in prey. Adv. Differ. Equ. 92 (2019).
Hu, D. P., Cao, H. J.: Bifurcation and chaos in a discrete-time predator-prey system of Holling and Leslie type. Commun. Nonlinear Sci. Numer. Simulat. 22, 702–715 (2015).
Cao, H., Yue, Z., Zhou, Y.: The stability and bifurcation analysis of a discrete Holling-Tanner model. Adv. Differ. Equ. 330 (2013).
Zhao, J., Yan, Y.: Stability and bifurcation analysis of a discrete predator-prey system with modified Holling-Tanner functional response. Adv. Differ. Equ. 402 (2018).
Kuznetsov, Y. A.: Elements of Applied Bifurcation Theory, 2nd edn. Springer-Verlag (1998).
Elaydi, S. N.: An Introduction to Difference Equations. Springer-Verlag, New York (2005).
Cartwright, J. H. E.: Nonlinear stiffness Lyapunov exponents and attractor dimension. Phys. Lett. A. 264, 298–304 (1999).
Kaplan, J. L., Yorke, J. A.: A regime observed in a fluid flow model of Lorenz. Commun. Math. Phys. 67, 93–108 (1979).
Hastings, A., Hom, C. L., Ellner, S., Turchin, P., Godfray, H. C. J.: Chaos in ecology: Is mother nature a strange attractor?. Ann. Rev. Ecol. Evol. Syst. 24, 1–33 (1993).
Nicholson, A. J.: The self-adjustment of populations to change. Cold Spring Harb. Symp. Quant. Biol. 22, 153–173 (1957).
The author would like to thank the editor and the referees for their valuable comments and suggestions, which led to the improvements of the paper.
Department of Mathematics, University of Dhaka, Dhaka, 1000, Bangladesh
Sarker Md. Sohel Rana
The author carried out the proof of the main results and approved the final manuscript.
The author declares that there are no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Sohel Rana, S.M. Dynamics and chaos control in a discrete-time ratio-dependent Holling-Tanner model. J Egypt Math Soc 27, 48 (2019). https://doi.org/10.1186/s42787-019-0055-4
Holling-Tanner model
Ratio-dependent
Feedback control
(2010) 39A33
37D45
Meaning, Use and Modality: M-Facts and U-Facts
I don't have a precise definition of either "M-fact" or "U-fact", but roughly, the idea is that an M-fact is a meaning-fact, while a U-fact is a usage-fact.
Examples of M-facts might be things like:
1. "Schnee" refers-in-German to snow.
2. "sensible" is true-in-Spanish of x iff x is sensitive.
3. "Schnee ist weiss" is true-in-German iff snow is white.
4. "I" refers-in-English, relative to context $C$, to the agent of the speech act in $C$.
5. "kai" means-in-Greek logical conjunction.
In each case, the language relativity has been made explicit. I think that ignoring language relativity is a major fallacy in much writing about the foundations of linguistics and philosophy of language. Tarskian T-sentences are examples of M-facts. The bearers of the semantic (and syntactic) properties are types, not tokens. Again, I think that it's a major mistake to be confused about this.
Canonical examples of U-facts might be things like:
6. Speaker X uses the string "Schnee" to refer to snow.
7. Speaker Y has a disposition to utter the string "gavagai" when there are rabbits nearby.
8. Speaker Z tends to assert the string $\sigma_1 \ast$ "kai" $\ast \sigma_2$ just when Z is prepared to assert both $\sigma_1$ and $\sigma_2$.
One might initially think that U-facts explain M-facts. Or that U-facts provide evidence for M-facts. Roughly:
U-facts: evidence/data for linguistic theories.
M-facts: theoretical content of linguistic theories.
This is, I think, roughly right, but with a very important caveat, which is that M-facts cannot be explained by U-facts. The argument is this:
(i) M-Facts are Necessities.
An M-fact, such as the fact that "Schnee" refers-in-German to snow couldn't have been otherwise. The argument for this is a counterfactual thought experiment. Suppose $L$ is an interpreted language such that "Schnee" refers-in-L to sugar. Then it seems clear to me that $L$ isn't German. If one changes the meanings of a language, the language is simply a different one. Languages are very finely individuated.
(ii) U-Facts are Contingencies
A U-fact, such as the fact that Y has a disposition to utter "gavagai" when there are rabbits nearby, is contingent. Y needn't have had that disposition. A U-fact is connected to properties of the speaker's cognitive system.
If the previous two claims, (i) and (ii), are correct, then
(iii) U-facts cannot explain, or provide evidence for, M-facts.
This conclusion follows because contingencies cannot explain necessities.
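On my reading, the argument has the following shape; the bridge principle (P) below is my way of making "contingencies cannot explain necessities" explicit, and is not stated in this form above:

$$
\begin{array}{ll}
\text{(i)} & \Box M \quad \text{(M-facts are necessities)}\\
\text{(ii)} & \Diamond\neg U \quad \text{(U-facts are contingencies)}\\
\text{(P)} & \mathrm{Explains}(U,M) \rightarrow (\Box M \rightarrow \Box U) \quad \text{(whatever explains a necessity is itself necessary)}\\
\text{(iii)} & \therefore\ \neg\,\mathrm{Explains}(U,M) \quad \text{(from (i), (ii) and (P))}
\end{array}
$$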
I've given this argument in several talks since 2008 (originally in a talk "Meaning, Use and Modality" at Universidad Complutense, Madrid). The audience frequently responds with considerable surprise!
If U-facts do not explain M-facts, then what explains M-facts? I say, "Nothing". Nothing explains why "Schnee" refers in L to sugar. It is simply an intrinsic property of the language L. There is some sense in which M-facts, along with syntactic and phonological and pragmatic facts, about a language $L$ are mathematical facts. Languages are complicated (mixed) mathematical objects. For example, suppose that
The string $\phi$ is a logical consequence in $L$ of the set $\Delta$ of $L$-strings
Then this fact, about $L$, is a necessity.
If U-facts do not explain M-facts, then what do they explain or provide evidence for? I think the answer to this is,
U-facts explain, or provide evidence for, what language the speaker/agent cognizes.
So, let me call these C-facts, and these have the form:
(C) Speaker X speaks/cognizes L
So, for example,
The (contingent) U-fact that X has a disposition to utter "gavagai" when there are rabbits nearby is evidence for the (contingent) C-fact that X speaks/cognizes a language L for which the following M-fact holds of necessity: "gavagai" denotes-in-L rabbits.
Published by Jeffrey Ketland at 7:32 pm
Daniel Cohnitz 26 August 2012 at 13:58
Hi Jeffrey,
This is an interesting point, and you made a similar one in response to my talk at the MCMP. I take it that the point is that we can consider a language to be an abstract object of some sort (some n-tuple, which includes a vocabulary, formation rules, and semantic interpretation rules, and whatever else you like to add). If that n-tuple is the language in question, then the M-facts about that language are necessarily true for that language. OK, so why am I unhappy with putting it that way? Well, at least prima facie it seems to contradict (or at least seems in tension with) what Jussi and I argue in Meta-Externalism vs Meta-Internalism in the Study of Reference (http://philpapers.org/archive/COHMVM), where we argue something like: U-facts constitute (and thus, I guess, explain) the M-facts.
Here are some thoughts, why I think it's not incompatible with our view, but rather a matter of how to identify languages:
Now, when we are talking about the M-facts in natural languages, we identify the natural language (in unfortunately unclear ways that Chomsky always points out) as the language that is shared by a certain linguistic community. Thus, when we investigate the M-facts of English, we investigate which abstract object is the closest model for the pattern of dispositions to use expressions that we find among those people we take to be English speakers. It is a contingent fact which of the abstract objects best models the usage we find in that linguistic community we are currently investigating. And that's I guess why I'd say that 'Schnee' refers to snow in German is constituted by the contingent fact that speakers of German have the disposition to use 'Schnee' in the way they do.
Let's see whether I can make my point with an analogy (but I'm not so convinced by this myself). Let's relativize all factual claims to the actual world. Ignoring world-relativity seems to be a mistake like ignoring language relativity. But 'roses are red in the actual world' is certainly necessary, since if roses were green, it would surely be a different world. Thus, that roses are red in the actual world is a necessary fact that no contingent fact whatsoever could explain.
One might point out that it's not contingent facts explaining that roses are red in the actual world, since those facts, being all world-relative themselves, are also all necessarily true. But it seems that something similar could be said in the language case: in the sense in which it is a necessary fact that 'Schnee' refers in German to snow, it gets explained by another necessary fact, namely that German speakers have the disposition to use 'Schnee' in that way, since if they didn't, they wouldn't be speakers of German.
Naomi O-K 26 August 2012 at 14:42
Thanks, this is cool! But I think there is a hitch:
Your argument builds on modality, and it seems to me that this is also where the reason for people's surprise lies.
When we think about modality, effectively what we do is to compare what we get when we vary certain parameters. But in order to be able to make a comparison, we have to keep one parameter fixed. So for instance, we may take objects to be fixed and vary their properties (let me call this K-modality). This allows us to say, for instance, that it is a contingent fact that Aristotle was a philosopher, he might also have been a politician or a physician or what have you. We are comparing the same object but exchanging one of its properties.
Alternatively, we can take properties as fixed and regard objects as sets of their properties (let us call this L-modality). In this case, any change to the properties gives us a different object and we could no longer regard politics as a possible occupation of Aristotle's – someone like Aristotle in all other respects but who was a politician would simply no longer be Aristotle. We could therefore no longer say that it is a contingent fact about Aristotle that he was a philosopher. Had he not been a philosopher, he would not have been Aristotle.
Now, when you say that "Schnee" referring-in-German to snow could not have been otherwise, this is a statement in L-modality. Conversely, when you say that Y needn't have had the disposition to utter "gavagai" when he sees a rabbit, this is a statement in K-modality.
I don't think I want to swallow a combination of the two in one argument - I would no longer know what it is we are comparing here. But let's see what happens if we apply the same concept of modality to both.
If we go by L-modality, it is no longer a contingent fact about Y that he has a disposition to say "gavagai" when he sees a rabbit, because lacking that disposition would make him someone else – he would no longer be Y. It is therefore just as necessary for him to have that disposition as it is necessary for L that 'gavagai' refer to rabbits. You could therefore define L as the totality of Y's utterance dispositions; or define Y's utterance dispositions as the totality of reference and truth conditions of L. It seems to me that in L-modality, U-facts are a very good explanation of M-facts and vice versa.
If instead we go by K-modality, it is no longer a necessary fact about L that 'Schnee' refer-in-German to snow. It may well have been (and, in fact, has been) otherwise. So here again, we can explain U-facts by M-facts and vice versa.
So applying the same concept of modality to both dissolves the puzzle (as you know I think it should) and we never need C-facts.
I can see why one may want to apply different identity conditions to languages than to people, but that is a philosophical, not a logical issue, isn't it? Besides, usually people who apply L-modality to languages don't tend to worry much about speakers anyway; and people who apply K-modality to speakers would not tend to regard languages as distinct enough from speakers to apply L-modality to them. If they really did, I don't think that modality will make for a good way of linking the two.
Jeffrey Ketland 26 August 2012 at 20:36
Many thanks! And thanks for the link to your paper with Jussi.
You may have seen them, but I've mentioned this topic a couple of times before.
http://m-phi.blogspot.co.uk/2012/06/theres-glory-for-you.html
http://m-phi.blogspot.co.uk/2012/08/is-there-philosophical-problem-of.html
Yes, on language individuation, I think a lot of the folk talk is a bit misleading. So, no particular individual speaks English (or German, etc.) strictly speaking. Rather, there are individual idiolects, which in a "language community" are similar enough to permit effortless interpretation usually.
"And that's I guess why I'd say that 'Schnee' refers to snow in German is constituted by the contingent fact that speakers of German have the disposition to use 'Schnee' in the way they do."
Yes, I think that's a plausible and fairly standard view.
My idea is to separate out the U-facts, M-facts and C-facts, keeping the M-facts as necessities. I first define a language L such that "Schnee" refers-in-L to snow.
(The Saussurean principle, "any string can mean/refer anything", tells us that all of these languages are already out there, as it were.)
I then say that the contingent U-fact, that Gottlieb uses "Schnee" to refer to snow, is evidence for (or partially explains, or partially constitutes, etc.) the contingent C-fact that Gottlieb speaks/cognizes the language L.
Consequently, I'm able to separate semantics from the questions involving cognition and intentionality. (Semantics is then really a branch of applied mathematics, and becomes much less controversial.)
In one of the earlier M-Phi posts ("Is There a Philosophical Problem of Reference?", in the comments with Sam), I was playing around with some accounts of "A cognizes L" along the lines of
A cognizes L iff, for each L-string s, the concept that A assigns to s = the meaning of s in L.
So, the basic psychological or intentional notion is the notion of an agent's *assigning* a concept (or a referent) to a string.
The second point, adding "in the actual world, ...", is really interesting. Not sure what to say! But will think about it!
I don't think what I'm saying is incompatible with what is usually written in this area, unless one wants to insist on the contingency of semantic facts. I think of it as an attempt to clarify what's going on in debates about semantic theory, indeterminacy of meaning/reference, empirical evidence in linguistics, etc.
Cheers, Jeff
Hi Naomi,
Many thanks! Yes, the modal individuation of languages is the crucial point. On the view here, a language isn't a bundle of properties; rather, it's an abstract object.
"If instead we go by K-modality, it is no longer a necessary fact about L that 'Schnee' refer-in-German to snow. It may well have been (and, in fact, has been) otherwise."
The problem here is what you mean by "German". Is there a single entity that any pair of speakers in Germany and Austria speak? I think German, Spanish, etc. (what Chomsky calls E-languages) are useful approximations for certain purposes (i.e., thinking about speech communities), but can be removed from foundational discussions, where one has to focus on the idiolects that individuals speak. Chomsky calls these I-languages and he regards them as mental entities. My view (like J.J. Katz) is that they're abstract; that they're cognized somehow in the mental states of the agent: cognizing L just is a complicated mental state. (Not that I have a good account of this!)
A separate objection to your E-language view is that there are no constraints on how an E-language can change - temporally and particularly modally. So, German could have been indistinguishable from Thai.
Suppose tomorrow, at 3pm, everyone who currently cognizes German comes to cognize Thai, while you still speak German. It seems to me that, on your view, we have to conclude that everyone now speaks *German* and you are a relic of what German used to be (i.e., today). But I would say that this gets language individuation seriously wrong.
On the view I defend, languages (including all these idiolects) are abstract entities, analogous to numbers or functions. So, a function $f : \mathbb{N} \rightarrow \mathbb{N}$ cannot "change" while remaining "the same". Still, a physical computer might (contingently) realize $f$ today and $g$ tomorrow. This does not imply that $f$ has turned into $g$. It means the concrete system has undergone change. So, I'd argue that an agent's cognizing a language L is (somewhat) analogous to a computer's realizing a function $f$.
Thanks a lot for your reply. Can I suggest we skip the 'What is German'-discussion, where we agree; I only brought it up because you used 'German' (to my surprise). I'll also not use 'idiolect' now (it'll become clear why). But your reply to Daniel suddenly gave me an "Aha-Erlebnis" (new German word for your idiolect). Am I right in thinking that, in a simplified version, the idea is something like this?:
We have in the universe a group of interpreted languages with a certain family resemblance which vox populi calls 'German'. Each of them consists of a countable number of words and forms of composition (I think this is best kept finite), references, truth conditions and whatever else you need for the semantics; in other words, they can be spelled out as a set of meaningful words and sentences – your M-facts. Assuming plenitude, any combination of any words and forms of composition are available, and which of them are called 'German' is of little concern to us now. Let's call them G1, G2,…Gn.
Then we have a countable number of speakers of 'German'. Let's pick out one, Y. Y has dispositions to say certain things in certain situations. The totality of Y's such dispositions at any time is one of our Gs above (that's why I suggested to keep them finite). Now, with language being essentially subject to perpetual change, we need to time-slice Y. Thus at t1 Y's dispositions are, say G75, at t2 they are G79, etc. Which Gs Y 'cognises' (as you say) in the course of his life is a contingent fact, as is which situations he is in and hence which words and sentences he actually utters.
If this is correct, it seems to me that all taken together, U-facts are at least triply (in fact much more) contingent. So I am not quite sure why the first and the third of these get names – 'C-facts' and 'U-facts' – but not the second? C-facts link a G to Y's dispositions but not yet to his utterances, don't they.
Also, we will need an account of how the G Y cognises at any tn relates to the G he cognises at tn-1, because there must of course be a very large overlap between them. (This is also in reply to your Thai example.)
And, by the bye, if this *is* what you have in mind, would you define 'idiolect' as any one G, or as the series of Gs Y cognises in the course of his life? (Nothing hinges on this, it is just for the sake of clarity in future.)
Great, yes, that's right; but with a caveat:
"The totality of Y's such dispositions at any time is one of our Gs above"
But I don't think it can be "is" here. The totality of dispositions is not the same as a language. For example, suppose agent Y (at t) has:
(i) a disposition to say "Eine Katze" when there's a cat nearby.
(ii) a disposition to say "Gavagai" when there's a rabbit nearby.
(iii) etc.
But this isn't a language. It's a list of dispositions (and characterizes speech-behaviour).
On the other hand, a language $L$ involves an alphabet $A$, the set of strings for $A$, a syntax for $L$ (certain special subsets of $L$-strings), and semantic and pragmatic functions for $L$ (mapping the strings to intensions, referents, mental states and whatnot).
This isn't a totality of dispositions. And the main problem is that there isn't a logical relationship between the U-fact,
(U) Agent A has the disposition to utter "Gavagai" when there are rabbits nearby.
and the M-fact,
(M) "Gavagai" means-in-L RABBIT.
This is, in a sense, precisely what the whole discussion is about.
Note that (M) doesn't even involve a *speaker* and note that (U) doesn't involve a language L.
Although (U) is contingent, I'm arguing that (M) (if it's the case) is necessary. A contingent bridge principle is required, connecting the agent A and the language L for which (M) holds. I.e.,
(C) Agent A speaks/cognizes an L such that (M) holds.
Then, I can say that (C) *explains* (U). And I can say that (U) is evidence to support (C).
Whereas (M) itself is a background necessity concerning the intrinsic properties of L.
I think I'd define "the idiolect of A at t" as: "the language L cognized by A at time t". As you can see, I think this can be fluctuating and changing quite a lot. For example, a Kripkean baptism is a small idiolectic extension, rather analogous to skolemization, the introduction of a new constant in model theory.
Thank you for the response. Let me try again. You want to argue that although your account is just a redescription of the usual story, there is nevertheless an interesting lesson, viz. that semantic facts are necessary facts, and thus need no explaining. The explaining should be somewhere else, namely when we explain how we manage to cognize one language rather than another, and the semanticist should go on happily with his job not worrying too much about that contingent question, but do semantics, basically a part of mathematics.
First of all, it is right that one might study the abstract systems we call semantics in their own right. But that would be just mathematics. In what sense and why could one claim that one is studying "semantics" rather than just the relation or interplay between two or more abstract set-theoretic structures? It seems to me that the reason that this study is considered "semantics", is precisely because we believe these abstract structures correspond to systems and their relations that we call natural languages.
Let me make my analogy-argument in a slightly different form (perhaps that makes it easier to point out where I misunderstood you): let's take some system of objects and relations between them, call it S, and let's stipulate that the S-facts are contingent facts. Let's suppose further that there is a set-theoretic (or in any case mathematical) model of S, call it M. In set theory we can design very many models like M, that all are somewhat different from M in how they specify the relations over its domain, for example. Let's assume that, however, only one of the models corresponds to S. Just as we can model various possible mappings of expressions on referents, only one of which corresponds to the mapping of English expressions onto objects in the world that we find in English.
M can be studied mathematically, and the M-facts are, relative to M, necessary facts. But although we can identify M-facts with S-facts, if M corresponds with S, this doesn't turn all (or any) S-facts into necessities.
Thus, when we study semantics (the relation between actual expressions and their actual meanings), we use abstract models to do that, and they have their properties necessarily (on my account because we stipulated all of them), but that doesn't make the facts of semantics necessary.
We can, of course, revise our normal way of speaking, and call a certain branch of pure mathematics "semantics", but I don't see what we would win that way.
Cheers, Daniel
Yes, thanks for that - that's very close to what I have in mind!
But there's an important difference I'd stress, between "pure" mathematics and "mixed" mathematics (terminology from the theory of applicability of mathematics).
Languages aren't (usually) the objects of pure mathematics (like, say, $\pi$ or $e^{i \pi}$ or $\aleph_0$); rather, they are (usually) *mixed* mathematical entities. E.g., they involve sets of linguistic types, utterances, phonemes, functions to sets of chairs, tables, or to intensions, etc. I definitely think that ordinary natural languages are mixed mathematical entities. ("Natural" languages are a tiny subclass of the infinitely large class of languages; they are "natural" only in that human minds happen to cognize/speak them.)
Actually, I think this is the standard way of studying semantics!
It's hard to imagine eliminating types, strings, functions, extensions and intensions from semantics. To eliminate this, one would have to nominalize semantics, as Hartry Field has tried with physics, but it would be very hard to carry out. One would have to introduce "possible tokens" and strange modal relations.
"M can be studied mathematically, and the M-facts are, relative to M, necessary facts. But although we can identify M-facts with S-facts, if M corresponds with S, this doesn't turn all (or any) S-facts into necessities."
But if a language L is a *mixed* mathematical entity, then it is the S-facts themselves (they are mixed facts) about L that we're studying here. For example, a modal version of mixed comprehension says,
(C) For all worlds w, there is a set Chair(w) such that for all y, y is in X(w) iff, at w, y is a chair
This itself is a necessary mixed mathematical assumption. Although it asserts the existence of various sets of chairs, it implies nothing about these sets (e.g., their cardinality). Then, we might consider a language L with the function $\lambda_{w}Chair(w)$ as the intension of some string, say, "silla". Then the semantic fact about L,
(S) The intension of "silla" in L is $\lambda_{w}Chair(w)$
is necessary. (That's the gist of the idea.)
(This L is similar to Spanish in this respect.)
It isn't that there's some other, pure, representation M of this. One can do that if one wishes (e.g., replace phonemes by numbers, strings by sequences of numbers, a la Gödel, worlds by numbers, and functions from worlds to chairs by functions from numbers to sets of numbers, say). But that would then be the semantics of a language whose domain was purely abstract; while, normally, the domain and predicate extensions for a natural language contain concreta.
Oops, (C) should be
(C) For all worlds w, there is a set $Chair(w)$ such that for all y, y is in $Chair(w)$ iff, at w, y is a chair.
Mons 15 December 2013 at 10:59
Does your argument extend to social practices outside language which have constitutive rules? It is a necessary truth e.g. that football is played with a ball and 11 players on each team, since a game that didn't have these rules wouldn't be football, but another game. Does that mean we can't explain how football came into being in terms of e.g. our practices or dispositions to uphold these rules, indeed can't explain it at all?
Jeffrey Ketland 19 December 2013 at 06:07
Mons,
"Does your argument extend to social practices outside language which have constitutive rules?"
I think so, yes. On an analogous view for games, football doesn't "come into being". Footballers do; their brains undergo neurophysiological change, and these footballers exhibit certain patterns of collective activity. (Cf., speaking/cognizing a particular language.)
"It is necessary truth e.g. that football is played with a ball and 11 players on each team, since a game that didn't have these rules wouldn't be football, but another game."
I'd say yes. If one tries to define "football" exactly - as a single unique entity - the entity one defines does have this modal property. There is, however, no such unique entity as "football". There is a somewhat heterogeneous variety of games, played with slightly different rules.
"Does that mean we can't explain how football came into being in terms of e.g. our practices or dispositions to uphold these rules, indeed can't explain it at all?"
We can try to explain psychological facts, yes. E.g., how an activity came into being "in terms of e.g. our practices or dispositions to uphold these rules". But if one considers a particular game $G$, then we cannot explain how $G$ "came into being", much as we cannot explain how 6 came into being. After all, $G$ didn't "come into being"! Footballers did.
Instance spaces for machine learning classification
Mario A. Muñoz (ORCID: orcid.org/0000-0002-7254-2808), Laura Villanova, Davaatseren Baatar & Kate Smith-Miles (ORCID: orcid.org/0000-0003-2718-7680)
Machine Learning, volume 107, pages 109–147 (2018)
This paper tackles the issue of objective performance evaluation of machine learning classifiers, and the impact of the choice of test instances. Given that statistical properties or features of a dataset affect the difficulty of an instance for particular classification algorithms, we examine the diversity and quality of the UCI repository of test instances used by most machine learning researchers. We show how an instance space can be visualized, with each classification dataset represented as a point in the space. The instance space is constructed to reveal pockets of hard and easy instances, and enables the strengths and weaknesses of individual classifiers to be identified. Finally, we propose a methodology to generate new test instances with the aim of enriching the diversity of the instance space, enabling potentially greater insights than can be afforded by the current UCI repository.
The practical importance of machine learning (ML) has resulted in a plethora of algorithms in recent decades (Carbonell et al. 1983; Flach 2012; Jordan and Mitchell 2015). Are new and improved machine learning algorithms really better than earlier versions? How do we objectively assess whether one classifier is more powerful than another? Common practice is to test a classifier on a well-studied collection of classification datasets, typically from the UCI repository (Wagstaff 2012). However, this practice is attracting increasing criticism (Salzberg 1997; Langley 2011; Wagstaff 2012; Macia and Bernadó-Mansilla 2014; Rudin and Wagstaff 2014) due to concerns about over-tuning algorithm development to a set of test instances without enough regard to the adequacy of these instances to support further generalizations. While there is no doubt that the UCI repository has had a tremendous impact on ML studies, and has improved research practice by ensuring comparability of performance evaluations, there is concern that the repository may not be a representative sample of the larger population of classification problems (Holte 1993; Salzberg 1997). We must challenge whether chosen test instances are enabling us to evaluate algorithm performance in an unbiased manner, and we must seek new tools and methodologies that enable us to generate new test instances that drive improved understanding of the strengths and weaknesses of algorithms. The development of such methodologies to support objective assessment of ML algorithms is at the core of this study.
As stated by Salzberg (1997), "the UCI repository is a very limited sample of problems, many of which are quite easy for a classifier". Additionally, because of the intensive use of the repository, there is increasing knowledge about its problem instances. Such knowledge inevitably translates into the development of new algorithms that can be biased towards known properties of the UCI datasets. Therefore, algorithms that work well on a handful of UCI datasets might not work well on new or less popular problem instance classes. If these less-popular instances are found to be prevalent in a particular critical application area, such as medical diagnostics, the consequences for selecting an algorithm that does not generalize well to this application domain could be severe.
Indeed, based on the No-Free-Lunch (NFL) theorems (Culberson 1998; Igel and Toussaint 2005), it is unlikely that any one algorithm always outperforms other algorithms for all possible instances of a given problem. Given the large number of available algorithms, it is challenging to identify which algorithm is likely to be best for a new problem instance or class of instances. This challenge is referred to as the Algorithm Selection Problem (ASP). A powerful framework to address the ASP was proposed by Rice (1976). The framework relies on measurable features of the problem instances, correlated with instance difficulty, to predict which algorithm is likely to perform best. Rice's framework was originally developed for solvers of partial differential equations (Weerawarana et al. 1996; Ramakrishnan et al. 2002); it was then generalized to other domains such as classification, regression, time-series forecasting, sorting, constraint satisfaction, and optimization (Smith-Miles 2008). For the machine learning community, the idea of measuring statistical features of classification problems to predict classifier performance, using machine learning methods to learn the model, developed into the well-studied field of meta-learning (learning about learning algorithm performance) (Aha 1992; Brazdil et al. 2008; Ali and Smith 2006; Lee and Giraud-Carrier 2013).
Beyond the challenge of accurately predicting which algorithm is likely to perform best for a given problem instance, based on a learned model of the relationship between instance features and algorithm performance, is the challenge to explain why. Smith-Miles and co-authors have developed a methodology over recent years through a series of papers (Smith-Miles and Lopes 2012; Smith-Miles et al. 2014; Smith-Miles and Bowly 2015) that extend Rice's framework to provide greater insights into algorithm strengths and weaknesses. Focusing on combinatorial optimization problems such as graph coloring, the methodology first involves devising novel features of problem instances that correlate with difficulty or hardness (Smith-Miles and Tan 2012; Smith-Miles et al. 2013), so that existing benchmark instances can be represented as points in a high-dimensional feature space before dimension reduction techniques are employed to project to a 2-D instance space. Within this instance space (Smith-Miles et al. 2014), the performance of algorithms can be visualized and pockets of the instance space corresponding to algorithm strengths and weaknesses can be identified and analyzed to understand which instance properties are being exploited or are causing difficulties for an algorithm. Objective measures can be calculated that summarize each algorithm's relative power across the broadest instance space, rather than on a collection of existing test instances. Finally, the location of the existing benchmark instances in the instance space reveals much about their diversity and challenge, and a methodology has been developed to evolve new test instances to fill and broaden the instance space (Smith-Miles and Bowly 2015). This methodology has so far only been applied to combinatorial optimization problems such as graph coloring, although its broader applicability makes it suitable for other problem domains including machine learning.
Of course the decades of work in meta-learning has already contributed significant knowledge about how the measurable statistical properties of classification datasets affect difficulty for accurate machine learning classification. This relates to the first stage of the aforementioned methodology. It remains to be seen how these features should be selected to create the most useful instance space for classification problems; what can be learned about machine learning classifiers in this space; and whether the existing UCI repository instances are sufficiently diverse when viewed from the instance space. Further, the question of how to evolve new classification test instances to fill this space needs to be carefully considered, since it is a more challenging task than evolving graphs in our previous work (Smith-Miles and Bowly 2015) which have a relatively simple structure of nodes and edges. In the current work we revisit the domain of ML and adapt and extend the proposed methodology to enable objective assessment of the performance of supervised learning algorithms, which are the most widely used ML methods (Hastie et al. 2005). The diversity of the UCI repository instances will be visualized, along with algorithm strengths and weaknesses, and a methodology for generation of new test classification instances will be proposed and illustrated.
The remainder of this paper is organized as follows. Section 2 summarizes the methodology that will be employed based on an extended Rice framework. Section 3 describes the building blocks of the methodology when applied to machine learning classification, namely the meta-data composed of problem instances, features, algorithms and performance metrics. In Sect. 4, we describe the statistical methodology used to identify a subset of features that capture the challenges of classification. Section 5 then demonstrates that these selected features are adequate by showing how accurately they can predict the performance of ML algorithms. In Sect. 6, details are presented of the process employed to generate a 2-dimensional instance space where the relative difficulty of the UCI instances and algorithm performances across the space are visualized. This includes a new dimension reduction methodology that has been developed to improve the interpretability of the visualizations. Section 7 shows how the instance space can be used for objective assessment of algorithm performance, and to gain insights into strengths and weaknesses. Section 8 then presents a proof-of-concept for a new method for generating additional test instances in the instance space, and illustrates how an augmented UCI repository could be developed. Finally, Sect. 9 presents our conclusions and outlines suggestions for further research. Supplementary material that provides more detail about the developed features, and all datasets and code used to calculate the features, are available online.
Methodological framework
The methodology used in this study builds on the Algorithm Selection Problem framework of Rice (1976), shown in the blue box of Fig. 1. It has been extended to enable more than performance prediction of algorithms given instance features: the extended framework (Smith-Miles et al. 2014) enables visualization of the instance space, instance difficulty, algorithm performance, and objective measurement of algorithmic power. The framework is composed of several spaces, which are described further below.
Fig. 1 Methodological framework, extending Rice's Algorithm Selection Problem shown within the blue box
The problem space, \(\mathcal {P}\), is composed of instances of a given problem for which we have computational results for a given subset \(\mathcal {I}\). In this paper, \(\mathcal {I}\) contains the classification datasets from the UCI repository. The feature space, \(\mathcal {F}\), contains multiple measures characterizing properties (correlating with difficulty) of the instances in \(\mathcal {I}\). The algorithm space, \(\mathcal {A}\), contains a portfolio of selected algorithms to solve the problem, in this case, classification algorithms. The performance space, \(\mathcal {Y}\), contains measures of performance for the algorithms in \(\mathcal {A}\) evaluated on the instances in \(\mathcal {I}\). For a given problem instance \(x \in \mathcal {I}\) and a given algorithm \(\alpha \in \mathcal {A}\), a feature vector \(\mathbf {f}(x) \in \mathcal {F}\) and algorithm performance metric \(y(\alpha ,x) \in \mathcal {Y}\) are measured. By repeating the process for all instances in \(\mathcal {I}\) and all algorithms in \(\mathcal {A}\), the meta-data \(\{\mathcal {I},\mathcal {F},\mathcal {A},\mathcal {Y}\}\) are generated. Within the framework of Rice (1976), we can now learn, using regression or more powerful supervised learning methods, the relationship between the features and the algorithm performance metric, to enable performance prediction. Full details of this meta-data for the domain of classification, including the choice of features, will be provided in Sect. 3.
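As a rough illustration (not the authors' code; the helper functions compute_features and evaluate are placeholders standing in for the feature extractors and performance metrics described in Sect. 3), the meta-data could be assembled along these lines:

import numpy as np

def build_metadata(instances, algorithms, compute_features, evaluate):
    # instances: list of (X, c) classification datasets drawn from the subset I
    # algorithms: list of classifiers, the portfolio A
    # compute_features: (X, c) -> feature vector f(x) in F
    # evaluate: (algorithm, X, c) -> performance metric y(alpha, x) in Y
    F, Y = [], []
    for X, c in instances:
        F.append(compute_features(X, c))
        Y.append([evaluate(alpha, X, c) for alpha in algorithms])
    # rows are instances; columns are features (F) or algorithms (Y)
    return np.array(F), np.array(Y)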
The aim of the extended methodology shown in Fig. 1 however is to gain insights into why some algorithms might be more or less suited to certain instance classes. In our extended framework, the meta-data is used to learn the mapping \(g(\mathbf {f}(x), y(\alpha ,x))\), which projects the instance x from a high-dimensional feature space to a 2-dimensional space. The resulting 2-dimensional space, referred to as instance space, is generated in such a way as to result in linear trend of features and algorithm performance across different directions of the instance space, increasing the opportunity to infer how the properties of instances affect difficulty. A new approach to achieving an optimal 2-D projection has been proposed for this paper, and the details are presented in "Appendix A". Following the optimal projection of instances to a 2-D instance space, each classification dataset is now represented as a single point in \(\mathbb {R}^2\), so the distribution of existing benchmark instances can be viewed across the instance space, and their diversity assessed. Further, the distribution of features and performance metrics for each algorithm can also be easily viewed to provide a snapshot of the adequacy of instances and features to describe algorithm performance. Instances are adequate if they are diverse enough to expose areas where an algorithm performs poorly, as well as areas where an algorithm performs well. Features are adequate if they allow accurate prediction of algorithm performance while explaining critical similarities and differences between instances.
The 2-dimensional instance space, color-coded with algorithm performance, is then investigated to identify in which region each algorithm \(\alpha \) is expected to perform well. Such a region is referred to as the algorithm's footprint. The area of the footprint can be calculated to objectively measure each algorithm's expected strength across the entire instance space, rather than on chosen test instances. The resulting measure is the algorithmic power.
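The footprint construction used in the paper is more refined than this, but as a first approximation one could take the area of the convex hull of the 2-D points where an algorithm performs well (the 0.2 error threshold below is an arbitrary illustration, not a value from the paper):

import numpy as np
from scipy.spatial import ConvexHull

def approximate_footprint_area(z2d, error_rate, good_threshold=0.2):
    # z2d: (n, 2) array of instance coordinates in the 2-D instance space
    # error_rate: per-instance error of one algorithm on those instances
    good = np.asarray(z2d)[np.asarray(error_rate) < good_threshold]
    if len(good) < 3:                  # a 2-D hull needs at least three points
        return 0.0
    return ConvexHull(good).volume     # for 2-D input, .volume is the enclosed area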
The 2-dimensional instance space is further investigated to seek explanation as to why algorithm \(\alpha \) performs well (or poorly) in different regions of the instance space. Since the projection has been achieved in a manner that creates linear trends across the space for features and algorithm performance, footprint areas can more readily be described in terms of the statistical properties of the instances found in each footprint.
The final component of the proposed methodology involves revisiting the distribution of the existing instances \(\mathcal {I}\) in the instance space, and identifying target points where it would be useful to have additional instances created from the space of all possible instances \(\mathcal {P}\). A methodology for evolving instances has previously been proposed for generating graphs with controllable characteristics for graph coloring problems (Smith-Miles and van Hemert 2011), but will be adapted in this paper to generate new classification datasets that lie at specific locations in the instance space.
In summary, the proposed methodology requires:
Construction of the meta-data, including a set of candidate features (see Sect. 3);
Selection of a subset of candidate features (see Sect. 4);
Justification of selected features using performance prediction accuracy (see Sect. 5);
Creation of a 2-D instance space for visualization of instances and their properties (see Sect. 6);
Objective measurement of algorithmic power (see Sect. 7); and
Generation of new test instances to fill the instance space (see Sect. 8).
Each of these steps in this general methodology requires considerable innovation and thought when tailored to a new application domain. The following six sections will describe these steps in more detail for our classification study.
Meta-data for supervised classification algorithms
Let \(\mathbf {X}^{i} = [\mathbf {x}^{i}_{1} \ \mathbf {x}^{i}_{2} \ \dots \ \mathbf {x}^{i}_{p}]\in \mathbb {R}^{q\times p}\) be the data matrix, where p is the number of observations and q is the number of attributes; then let \(\mathbf {c}^{i} \in \left\{ 1,\ldots ,K\right\} ^{p},K\ge 2\) be the class vector taking on \(K\in \mathbb {N}\) labels.
A supervised learning problem, referred to as problem instance, consists of a collection of \((\mathbf {x}_{j}, c_{j})\) pairs. In this work, the input \(\mathbf {x}_{j}\) is a q-dimensional input vector that may comprise binary, nominal and numeric values; the output label \(c_{j}\) (class) takes on one of K labels. The focus is therefore on binary and multi-class classification. Typically, the data matrix \(\mathbf {X}^{i}\) is divided into a training set and a test set. The learning task is to infer, from the training set, an approximating function relating the attributes to the class labels. The inferred function is then used to predict the labels in the test set, which consists of input vectors previously unseen by the learning algorithm. The performance of the learning algorithm is measured by a metric comparing true and predicted labels. The lower the degree of discrepancy between true and predicted labels, the better the algorithm performance.
The focus in meta-learning is to study how measurable features of the problem instances \((\mathbf {X}^{i},\mathbf {c}^{i})\) affect a given learning algorithm's performance metric. Problem instances \(\mathcal {I}\), learning algorithms \(\mathcal {A}\), and performance metrics \(\mathcal {Y}\), are three of the four elements composing the meta-data \(\left\{ \mathcal {I}, \mathcal {F}, \mathcal {A}, \mathcal {Y}\right\} \). We will briefly describe these elements of the meta-data used in this study below, before presenting a more extensive discussion about the critical choice of features \(\mathcal {F}\).
Problem instances \(\mathcal {I}\)
The problem instances we have used in this research consist of classification datasets comprising of one or more input variables (attributes) and one output variable (class). Datasets have been downloaded from two main sources, namely the University of California Irvine (UCI) repository (Lichman 2013) and the Knowledge Extraction Evolutionary Learning (KEEL) repository (Alcalá et al. 2010); additionally, a few datasets from the Data Complexity library (DCol—http://dcol.sourceforge.net/) have been used.
KEEL and DCol datasets rely on a convenient common format, the KEEL format. It originates from the ARFF format employed in the popular Waikato Environment for Knowledge Analysis (WEKA) suite (Holmes et al. 1994). Along with the dataset itself, KEEL and ARFF files carry information about dataset name, attributes name and type, and values taken on by both nominal and class attributes; additionally, the class attribute always occupies the last column of the data matrix. The use of this common format facilitates standardization and minimizes errors deriving from data manipulation; this is particularly true when many datasets need to be analyzed and automatic procedures are employed.
In contrast, UCI data files vary greatly in their format. Often, multiple files have to be merged to generate the final dataset; sometimes, information about the data themselves is not (clearly) available. UCI classification datasets have been extensively investigated and detailed information has been provided for the pre-processing of 166 datasets (Macia and Bernadó-Mansilla 2014) which we have adopted in this study.
Overall, we have used a total of 235 problem instances comprising 210 UCI instances, 19 KEEL instances and 6 DCol instances. A list of the problem instances and links to the files are provided in Sect. 1 of the Supplementary Material.
The selected 235 problem instances have up to 11,055 observations and up to 1558 attributes. Larger instances could have been selected, but were excluded due to the need to impose some computational constraints when deriving the features and running the algorithms described below.
Multiple datasets present missing values. For these datasets, two problem instances are derived. In the first problem instance the missing values are maintained, whereas in the second problem instance the missing values are estimated. The estimating procedure is as follows. Let k be the class label of an instance with missing value(s). If the missing value pertains to a numeric attribute, the missing value is replaced with the average value of the attribute computed over all the instances with class label k. For a nominal attribute, the mode is used (Orriols-Puig et al. 2010). This class-based imputation approach has been shown to efficiently achieve good accuracy and outperform other more complex methodologies (Fujikawa and Ho 2002; Young et al. 2011). For those cases where missing values are the only available data for a given class, imputation through global average/mode is used. Finally, instances with missing values in the class attribute are omitted. Note that all algorithms use the same data, hence, any unintended advantage due to the chosen imputation approach will be shared by all algorithms.
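A minimal pandas sketch of this class-based imputation (my own rendering of the procedure, assuming a DataFrame whose column class_col holds the labels):

import pandas as pd

def impute_by_class(df, class_col):
    # Drop rows whose class label is missing, then fill numeric attributes with the
    # within-class mean and nominal attributes with the within-class mode; anything
    # still missing falls back to the global mean/mode.
    df = df.dropna(subset=[class_col]).copy()
    for col in df.columns.drop(class_col):
        if pd.api.types.is_numeric_dtype(df[col]):
            df[col] = df[col].fillna(df.groupby(class_col)[col].transform("mean"))
            df[col] = df[col].fillna(df[col].mean())
        else:
            df[col] = df[col].fillna(df.groupby(class_col)[col].transform(
                lambda s: s.mode().iloc[0] if not s.mode().empty else pd.NA))
            if df[col].isna().any():
                df[col] = df[col].fillna(df[col].mode().iloc[0])
    return df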
Algorithms \(\mathcal {A}\)
We consider a portfolio of ten popular supervised learners representing a comprehensive range of learning mechanisms. The algorithms are Naive Bayes (NB), Linear Discriminant (LDA), Quadratic Discriminant (QDA), Classification and Regression Trees (CART), J48 decision tree (J48), k-Nearest Neighbor (KNN), Support Vector Machines with linear, polynomial and radial basis kernels (L-SVM, poly-SVM, and RB-SVM respectively), and random forests (RF). NB, J48, CART and RF are expected to give uncorrelated errors while providing a good diversity of classification mechanisms (Lee and Giraud-Carrier 2013); LDA and QDA are expected to further extend the diversity of the algorithm portfolio, whereas KNN and SVM are considered because of their popularity. The R packages employed are e1071 (Meyer et al. 2015) (NB, L-SVM, poly-SVM, RB-SVM, RF), MASS (Venables and Ripley 2002) (LDA, QDA), rpart (Therneau et al. 2014) (CART), RWeka (Holmes et al. 1994) (J48) and kknn (Hechenbichler 2014). For all of the packages, the default parameter values are used.
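The study itself uses the R implementations listed above; purely for illustration (an assumption on my part, not the authors' setup), a roughly comparable default-parameter portfolio could be assembled in scikit-learn as follows. Note that J48 (C4.5) has no direct scikit-learn counterpart, so an entropy-based decision tree is used as a stand-in.

from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

portfolio = {
    "NB": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(),
    "J48 (approx.)": DecisionTreeClassifier(criterion="entropy"),  # stand-in for C4.5
    "KNN": KNeighborsClassifier(),
    "L-SVM": SVC(kernel="linear"),
    "poly-SVM": SVC(kernel="poly"),
    "RB-SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(),
}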
Performance metric \(\mathcal {Y}\)
There exist various measures of algorithm performance focusing on either prediction accuracy/error or computation time/cost. In this work, we consider measures of prediction accuracy/error which evaluate how well or poorly the labels are classified. The performance of a supervised learner is derived by comparing labels in the problem instance (data labels) and labels predicted by the algorithm (predicted labels).
In a binary classification, where the class labels are either positive or negative, the comparison is based on four counts. The counts are the number of (i) positive labels that are correctly classified (true positives \(\textit{tp}\)), (ii) negative labels that are wrongly classified (false positives \(\textit{fp}\)), (iii) negative labels that are correctly classified (true negatives \(\textit{tn}\)), and (iv) positive labels that are wrongly classified (false negatives \(\textit{fn}\)) (Sokolova and Lapalme 2009). The proportion of incorrectly classified labels is the Error Rate. The proportion of positive predicted labels that are correctly classified is the Precision. The proportion of positive data labels that are correctly classified is the Recall. The harmonic mean of precision and recall is the F1-measure.
In multi-class classification, problem instances with K class labels are usually decomposed into K binary problem instances. For each of the K problem instances, counts are derived and used to calculate an overall performance measure. There exist two different strategies to derive the overall performance measure. One strategy is to calculate K performance measures (one for each sub-problem) and average them out. This is referred to as macro-averaging and generates measures such as macro-Precision, macro-Recall and macro-F1. The other strategy is to obtain cumulative counts of the form \(tp = \sum _{k=1}^{K} tp_k\) and use them to calculate the overall performance value. This is referred to as micro-averaging and generates measures such as micro-Precision, micro-Recall and micro-F1 (Tsoumakas and Vlahavas 2007; Sokolova and Lapalme 2009). While macro-averaging treats all classes equally, micro-averaging favours bigger classes (Sokolova and Lapalme 2009) biasing the overall performance toward the performance on the bigger classes. Overall, the choice of the most suitable averaging strategy depends on the purpose of the study.
In the current work, the purpose is to assess algorithm performance by adopting a broad perspective and targeting a whole range of problem instances. Therefore, we do not wish to place too much emphasis on algorithms that perform particularly well for large classes; similarly, we do not wish to disregard class size information completely. Therefore, we adopt an intermediate strategy consisting of averaging class-based performance measures (similarly to macro-averaging) but using weights that are proportional to the class size (i.e. \(w_k=n_k/n\), where \(n_k\) denotes the number of instances with label k).
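For concreteness, the class-size-weighted macro-averaging described above amounts to the following sketch of mine; for F1 it coincides with scikit-learn's average='weighted' option.

import numpy as np

def weighted_macro_f1(y_true, y_pred):
    # Per-class (one-vs-rest) F1, weighted by class size w_k = n_k / n.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, counts = np.unique(y_true, return_counts=True)
    score = 0.0
    for k, n_k in zip(classes, counts):
        tp = np.sum((y_pred == k) & (y_true == k))
        fp = np.sum((y_pred == k) & (y_true != k))
        fn = np.sum((y_pred != k) & (y_true == k))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (n_k / len(y_true)) * f1
    return score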
In addition to the aforementioned metrics other performance measures exist (e.g. Break Even Point, Area Under the Curve—AUC); however, they are either a function of other measures or metrics that are not well developed for multi-class classification (Sokolova and Lapalme 2009). Because our problem instances embrace both binary and multi-class classification, we restrict our attention to Error Rate (ER), Precision, Recall and F1-measure using a weighted macro-average strategy, shown in Table 1.
Table 1 Overview of performance measures for both binary and multi-class classification
Features \(\mathcal {F}\)
Useful features of a classification dataset are measurable properties that (i) can be computed in polynomial time and (ii) are expected to expose what makes a classification problem hard for a given algorithm.
It is well known for example that problems in high-dimensions tend to be hard for algorithms like nearest neighbor (Vanschoren 2010); indeed, the density of the data points decreases exponentially as the number of attributes increases and point density is an important requirement for nearest neighbor. Similarly, problems with highly unbalanced classes tend to be hard for algorithms like unpenalized Support Vector Machines and Discriminant Analysis (Ganganwar 2012); indeed, the algorithms' assumptions (e.g. equal distribution of data within the classes, balanced dataset) are not met. In the above mentioned cases, simple examples of relevant features are number of instances in the dataset, number of attributes, and percentage of instances in the minority class.
Features for classification problems have a relatively long history in the meta-learning field, with the first studies dating back to the early 1990s (Rendell and Cho 1990; Aha 1992; Brazdil et al. 1994; Michie et al. 1994; Gama and Brazdil 1995). Over the following years, many authors used existing features and investigated new features based on either metrics (Perez and Rendell 1996; Vilalta 1999; Pfahringer et al. 2000a; Smith et al. 2002; Vilalta and Drissi 2002; Goethals and Zaki 2004; Ali and Smith 2006; Segrera et al. 2008; Song et al. 2012) or model fitting (Bensusan and Giraud-Carrier 2000; Peng et al. 2002; Ho and Basu 2002). Various manuscripts have provided a snapshot of the most popular features over the years (Castiello et al. 2005; Smith-Miles 2008; Vanschoren 2010; Balte et al. 2014). As the development of new features emerged, it became common practice to classify the meta-features into eight different groups: (i) simple, (ii) statistical, (iii) information theoretic, (iv) landmarking, (v) model-based, (vi) concept characterisation, (vii) complexity, and (viii) itemset-based meta-features.
An overview of these groups of features is reported below:
Simple features measure basic aspects related to dimensionality, type of attributes, missing values, outliers, and class attribute. They have been regularly adopted in meta-learning studies since the pioneering works by Rendell and Cho (1990) and Aha (1992).
Statistical features make use of metrics from descriptive statistics (e.g. mean, standard deviation, skewness, kurtosis, correlation), hypothesis testing (e.g. p-value, Box's M-statistic) and data analysis techniques (e.g. canonical correlation, Principal Component Analysis) to extract information about single attributes as well as multiple attributes simultaneously.
Information theoretic features quantify the information present in attributes that are investigated either alone (e.g. entropy) or in combination with class label information (e.g. mutual information).
Landmarking features are performance measures of simple and efficient learning algorithms (landmarkers) such as Naive Bayes, Linear Discriminant, 1-Nearest Neighbor and single-node trees (Pfahringer et al. 2000a, b). The idea behind the approach is that landmarker performance can shed light on the properties of a given problem instance (Bensusan and Giraud-Carrier 2000). For example, good performance of a linear discriminant classifier indicates that the classes are likely to be linearly separable; on the contrary, bad performance indicates probable non-linearly separable classes. In a meta-learning study, multiple and diverse landmarkers are used, so that each landmarker can contribute an area of expertise. The collection of areas of expertise to which a problem instance belongs, can then be used to characterize the problem instance itself (Bensusan and Giraud-Carrier 2000). There exist multiple variants of the landmarking approach. One such variant that is relevant for the current work and not yet herein implemented, is sampling landmarking (Fürnkranz and Petrak 2001; Soares et al. 2001; Leite and Brazdil 2008). Sampling landmarking considers computationally complex algorithms and evaluates their performance on a collection of data subsets. The use of data subsets allows saving computational time without affecting results significantly; indeed, running an algorithm on the full dataset or on a collection of data subsets is expected to give a similar profile of algorithm expertise (Fürnkranz and Petrak 2001).
Model-based features aim to characterize problem instances using the structural shape and size of decision trees fitted to the instances themselves (Peng et al. 2002). Examples are number of nodes and leaves, distribution of nodes at each level and along each branch, width and depth of the tree.
Concept characterization features measure the sparsity of the input space and the irregularity of the input-output distribution (Perez and Rendell 1996; Vilalta and Drissi 2002). Irregular input-output distributions occur when neighboring examples in the input space have different labels in the output space. Concept characterization features were shown to provide much information about the difficulty of problem instances (Vilalta 1999; Robnik-Šikonja and Kononenko 2003). Unfortunately, they have a high computational cost because they require the calculation of the distance matrix.
Complexity features measure the geometrical characteristics of the class distribution and focus on the complexity of the boundary between classes (Ho and Basu 2002). The aim is to identify problem instances having ambiguous classes. The ambiguity of the class attribute might be an intrinsic property of the data or might derive from inadequate measurements of the attributes; class ambiguity is likely to be influenced by sparsity and high-dimensionality (Ho and Basu 2002; Macia and Bernadó-Mansilla 2014). In general, complexity features investigate (i) class overlap measured in the input space, (ii) class separability, and (iii) geometry, topology and density of manifolds (Macià et al. 2010).
Itemsets- and association rules-based features measure the distribution of values of both single attributes and pairs of attributes, as well as characteristics of the interesting variable relations (Song et al. 2012; Burton et al. 2014). In this approach, the original problem instance is transformed into a binary dataset. For nominal attributes, each distinct attribute value in the original data generates a new attribute in the binary data. For numeric attributes a discretization method is applied first. The frequency of each binary attribute is then measured, as well as the frequency of pairs of binary attributes.
Concept characterization features and the large majority of statistical features are suitable for numerical attributes only; information theoretic features are suitable for nominal attributes. To allow all the features to be calculated on all the attributes, the original attributes can be pre-processed. On the one hand, we convert nominal attributes into numeric attributes by replacing labels (e.g. \(\left\{ A,B,C\right\} \)) with numbers (e.g. \(\left\{ 1, 2, 3\right\} \)). Although not ideal for standard statistical analysis, this approach is of value for the purpose of the current study because it allows us to take into account potentially important relationships between numeric and nominal attributes. Note that the aforementioned transformation is not applied to the class attribute, which remains unaltered throughout the analysis. On the other hand, we discretize numeric attributes using ten intervals of equal width, thus obtaining nominal attributes with ten categories. Discretization using a fixed interval width is one of many discretization approaches in the literature [e.g. equal frequencies, given interval boundaries, k-means clustering, Fayyad and Irani method (Fayyad and Irani 1992)]. The motivation behind our choice lies in the simplicity and consistency of the approach throughout the problem instances. By using ten categories we discretize the original attribute without losing too much information.
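To make the pre-processing concrete, the sketch below illustrates both conversion directions in Python with pandas. The paper does not publish this step as code, so the function names, the use of pandas, and the assumption of a column named 'class' are our own.

```python
import pandas as pd

def nominal_to_numeric(series: pd.Series) -> pd.Series:
    """Replace category labels (e.g. {A, B, C}) with integer codes (e.g. {1, 2, 3})."""
    return pd.Series(pd.factorize(series)[0] + 1, index=series.index)

def discretize_equal_width(series: pd.Series, bins: int = 10) -> pd.Series:
    """Discretize a numeric attribute into `bins` intervals of equal width."""
    return pd.cut(series, bins=bins, labels=False)

def preprocess(df: pd.DataFrame, class_col: str = "class"):
    """Return (numeric_view, nominal_view) of the attributes; the class column is left unaltered."""
    attrs = df.drop(columns=[class_col])
    numeric_view = attrs.apply(
        lambda s: nominal_to_numeric(s) if s.dtype == object else s
    )
    nominal_view = attrs.apply(
        lambda s: discretize_equal_width(s) if s.dtype != object else s
    )
    return numeric_view.join(df[class_col]), nominal_view.join(df[class_col])
```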
When deriving feature values, it is possible to obtain a single number (e.g. number of instances, number of attributes), a vector (e.g. vector of attributes' entropies) or a matrix (e.g. absolute correlation matrix). When the output is a vector or matrix, further processing is required. A typical procedure is to generate a single feature value by calculating the mean of the vector or matrix; however, this can result in a considerable loss of information. To preserve a certain degree of distributional information, several authors have proposed the use of summary statistics (Michie et al. 1994; Brazdil et al. 1994; Lindner and Studer 1999; Soares and Brazdil 2000). We adopt this approach and calculate the minimum, maximum, mean, standard deviation, skewness and kurtosis of the vector/matrix values. Therefore, for each vector or matrix property we obtain six new features.
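As an illustration, the following sketch (Python with NumPy/SciPy; the names and structure are ours, not the authors') collapses a vector- or matrix-valued property into the six summary features described above.

```python
import numpy as np
from scipy import stats

def summarize(values: np.ndarray, name: str) -> dict:
    """Collapse a vector or matrix of per-attribute values into six scalar features."""
    v = np.asarray(values, dtype=float).ravel()
    return {
        f"{name}_min": np.min(v),
        f"{name}_max": np.max(v),
        f"{name}_mean": np.mean(v),
        f"{name}_sd": np.std(v, ddof=1),
        f"{name}_skew": stats.skew(v),
        f"{name}_kurtosis": stats.kurtosis(v),   # Fisher (excess) kurtosis
    }

# Example: six features derived from the absolute correlation matrix of a data matrix X.
# X = np.random.rand(100, 5)
# features = summarize(np.abs(np.corrcoef(X, rowvar=False)), "abs_corr")
```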
Most of the statistical and information theoretic features can be calculated on the attributes either independently or in conjunction with the information in the class attribute. We name features belonging to the second case with the suffix 'by class'. For example, assume our problem instance has two numeric attributes, \(X_1\) and \(X_2\), and the class attribute C takes on labels \(\left\{ c_1, c_2\right\} \); further assume that we want to calculate the feature 'mean standard deviation of attributes'. In the first case (calculation independent of class attribute) we (i) calculate the standard deviation of attribute \(X_1\) and the standard deviation of attribute \(X_2\), and (ii) average the two numbers; the resulting value is our feature 'mean standard deviation of attributes'. In the second case (calculation in conjunction with class attribute), we calculate (i) the standard deviation of attribute \(X_1\) computed over all the instances that predict class \(c_1\), (ii) the standard deviation of attribute \(X_1\) computed over all the instances that predict class \(c_2\), (iii) the standard deviation of attribute \(X_2\) computed over all the instances that predict class \(c_1\), (iv) the standard deviation of attribute \(X_2\) computed over all the instances that predict class \(c_2\). The four values are then averaged and we obtain our final feature 'mean standard deviation of attributes by class'.
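A minimal sketch of the two variants of this example feature is given below, assuming a pandas DataFrame with numeric attributes and a column named 'class' (our naming and implementation, not the authors' code).

```python
import pandas as pd

def mean_sd(df: pd.DataFrame, class_col: str = "class") -> float:
    """'Mean standard deviation of attributes': average the per-attribute standard deviations."""
    return df.drop(columns=[class_col]).std(ddof=1).mean()

def mean_sd_by_class(df: pd.DataFrame, class_col: str = "class") -> float:
    """'Mean standard deviation of attributes by class': compute each attribute's standard
    deviation within each class, then average over all (attribute, class) pairs."""
    per_class = df.groupby(class_col).std(ddof=1)   # one row per class, one column per attribute
    return per_class.to_numpy().mean()
```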
In this study we have generated a set of 509 features derived from the eight types of features. Not all of these will be interesting for the challenge of understanding how features affect the performance of our chosen algorithms across our selected set of test instances. Our goal is to represent the instances in a feature space, generated to maximize our chances of gaining insights via visualization. The process of selecting relevant features from this candidate set of 509 features will be discussed in the following section.
Feature selection
The candidate set of 509 features derived from the available literature contains much redundancy, with many features measuring aspects of a problem instance that are either similar or not relevant to expose the hardness of the classification task itself. Thus, a small set of relevant features must be selected. To identify relevant features, we deliberately alter the hardness of the classification task and observe how the feature values react to such alteration. Overall, the procedure we adopt to select a relevant set of features is as follows:
Identify broad characteristics that are either known or expected to make a classification task harder (classification challenges);
Alter a problem instance to deliberately vary the hardness of the classification task based on the challenges identified in the previous step (instance alteration);
Calculate all 509 features on both original and altered problems;
Use a statistical procedure to compare features values of original and altered problems (statistical test);
Identify the set of relevant features as those most responsive to the challenges;
Evaluate the adequacy of the relevant features via performance prediction.
Classification challenges. The algorithms listed in Sect. 3.2 are known to perform better under certain circumstances. Such circumstances broadly relate to algorithm assumptions (e.g. normality, equal covariance within classes) or characteristics of the data (e.g. numeric and/or nominal attributes, presence of missing values). Based on previous investigations of classification algorithms (Lessmann et al. 2015; Sokolova and Lapalme 2009; Kotsiantis 2007; Kotsiantis et al. 2006; Vilalta 1999; Michie et al. 1994) we identify 12 challenging circumstances. They are:
Non-normality within classes: instances belonging to the same class do not follow a multivariate normal distribution;
Unequal covariances within classes: instances belonging to the same class follow a multivariate normal distribution, but the variance-covariance matrices of the distributions differ;
Redundant attributes: two or more attributes carry very similar information;
Type of attributes: the problem instance comprises both numeric and nominal attributes;
Unbalanced classes: at least one class has a considerably different number of instances;
Constant attribute within classes: for at least one attribute, all the instances belonging to the same class take the same (numeric or categorical) value;
(Nearly) linearly dependent attributes: at least one numeric attribute is (nearly) a linear combination of two or more other numeric attributes;
Non-linearly separable classes: there exists no hyperplane that separates the classes well;
Missing values: a considerable number of instances have missing values for one or more attributes;
Data scaling: the scale of one or more attributes is very different from the scale of the remaining attributes;
Redundant instances: a considerable number of repeated instances exist;
Lack of information: only a limited number of instances is available.
Instance alteration. Based on the above challenges, we alter a problem instance to change the difficulty of the classification task. For each challenge we obtain two problem instances that we want to compare: the original and the altered dataset. The altered dataset is either more or less challenging in terms of the specific classification challenge. As an example, consider the challenge of non-linearly separable classes. We alter each dataset to make it more linearly separable by fitting a hyperplane through the data using linear regression, and then altering the class labels so that each side of the hyperplane contains only one class. The altered dataset is therefore less challenging than the original, and we will be able to see which features differ when compared to the original dataset and therefore correlate with non-linear separability. Each challenge is treated in a similar manner. Details of the applied alterations and comparisons are reported in Sect. 2 of the Supplementary Material. Table 2 reports the identified challenges and highlights which ones are likely to be relevant for the investigated algorithms.
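For the linear-separability example above, one possible implementation of the alteration is sketched below (Python/NumPy). A two-class problem with numeric attributes is assumed, and encoding the labels as 0/1 for a least-squares fit is our reading of "fitting a hyperplane through the data using linear regression", not the authors' published code.

```python
import numpy as np

def make_linearly_separable(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Relabel a two-class problem so that the classes are separated by a hyperplane.

    A hyperplane is fitted through the data by least-squares regression of the
    0/1-encoded labels on the attributes; points on each side of the fitted
    hyperplane then receive a single label."""
    classes = np.unique(y)                           # assumes exactly two class labels
    y01 = (y == classes[1]).astype(float)            # encode the two labels as 0/1
    A = np.column_stack([np.ones(len(X)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(A, y01, rcond=None)   # least-squares hyperplane fit
    side = A @ beta >= 0.5                           # which side of the hyperplane?
    return np.where(side, classes[1], classes[0])    # altered, separable labels
```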
Table 2 Classification challenges specific to the investigated algorithms
Statistical test. Original and altered datasets are compared in terms of their values of the 509 candidate features. For a given feature, its value is calculated on both the original and the altered problem. A statistically significant difference in values suggests that the applied alteration (cause) results in a change of the feature value (effect) and that the feature is relevant to measuring the degree of the challenge presented by an instance. Furthermore, the bigger the difference, the higher the discriminating ability of the feature. We consider only one challenge at a time, as the aim is to identify features that are in a cause-effect relationship with classification hardness.
To draw statistically sound conclusions, a distribution of feature values is required for both the original and altered problem. For the altered problem, multiple values naturally arise by running the alteration process multiple times; due to the intrinsic randomness of the alteration process, a different altered problem is obtained in each simulation run. Instead, for the original problem no intrinsic randomness exists. Therefore, we introduce a small source of variability by randomly removing one observation (i.e. dataset row) from the problem instance in each simulation run. For consistency, the same observation is removed before applying the alteration process.
The two distributions of feature values are compared through a two-sided t-test with unequal variances. Unequal variances are considered because the feature values derived from the original problem are usually less variable than the feature values derived from the altered problems. It is well known that two types of errors can occur when performing a statistical test. On the one hand, assume we are testing a feature that has no cause-effect relationship with a given challenge; the error we can make is to conclude that the feature is relevant (Type I error, \(\alpha \)). On the other hand, assume we are testing a feature that has a cause-effect relationship with a given challenge; the error we can make is to conclude that the feature is not relevant (Type II error, \(\beta \)). Before implementing the test, the value of \(\alpha \) is fixed and the desired value of \(\beta \) is specified. Additionally, a third value needs to be specified; this is the change in the feature value (\(\Delta \)) that we want to detect when comparing original and altered problems. The specified values of \(\alpha \), \(\beta \) and \(\Delta \) are used to determine a suitable sample size, namely the number of repeats or simulation runs, required to simultaneously control Type I and II errors. When fixing the value of \(\alpha \) it is important to consider that we are performing a large number of tests. 509 features are tested on 12 challenges, resulting in a total of \(n_{{ tests}}=6108\) tests. Assuming that none of the features is relevant (i.e. no cause-effect relationship with the challenge), a test with \(\alpha = 0.01\) would still select 123 features as relevant. To avoid this, a smaller value of \(\alpha \) must be used in the test. Such a value is typically determined through a correction. Among the available corrections, we choose the Bonferroni correction \(\alpha ^*=\alpha /n_{{ tests}}\), where \(\alpha ^*\) is the corrected value. Such a value is typically very small and results in a restrictive test. This well serves our purpose to identify a small set of suitable features.
Overall, we set \(\alpha ^*=1.64e^{-6}\), \(\beta =0.1\), \(\Delta =3\) and obtain the optimal sample size \(n_{{ runs}}=14\) through power analysis (Cohen 1992). In this context, the sample size is the number of comparisons required and corresponds to the number of simulated altered problem instances generated. Based on these settings, the test has (i) a 99% chance of correctly discarding a feature that has no cause-effect relationship with the challenge, and (ii) a 90% chance of correctly selecting a feature that has a cause-effect relationship with the challenge. Features with \(p\text {-}value<\alpha ^*\) are identified as significant. For each challenge, significant features are sorted in ascending order based on their \(p\text {-}value\), with the most relevant features appearing at the top of the list for each challenge.
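A sketch of the testing step is given below (Python with SciPy and statsmodels, used here in place of the authors' implementation). The Bonferroni-corrected threshold follows the text; the `solve_power` call illustrates the sample-size calculation but, depending on the power-analysis convention adopted, may not reproduce the exact value \(n_{{runs}}=14\) reported above.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

N_FEATURES, N_CHALLENGES = 509, 12
ALPHA = 0.01
ALPHA_STAR = ALPHA / (N_FEATURES * N_CHALLENGES)    # Bonferroni-corrected threshold (~1.64e-6)

# Sample size from a two-sample power analysis (illustrative only; the paper follows
# Cohen (1992), whose convention may yield a slightly different number of runs).
n_runs = TTestIndPower().solve_power(effect_size=3, alpha=ALPHA_STAR, power=0.9,
                                     alternative="two-sided")

def relevant_features(original: np.ndarray, altered: np.ndarray) -> list:
    """Welch two-sided t-test per feature; rows are simulation runs, columns are features.
    Returns (feature index, p-value) pairs below the corrected threshold, sorted by p-value."""
    _, p = stats.ttest_ind(original, altered, equal_var=False, axis=0)
    hits = [(j, pj) for j, pj in enumerate(p) if pj < ALPHA_STAR]
    return sorted(hits, key=lambda t: t[1])
```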
Set of relevant features. The procedure described above applies to a single problem instance and its alterations. To ensure consistency of results, we repeat the procedure and apply it to six different problem instances selected from those described in Sect. 3.1. The selected problem instances are (1) balloons, (2) blogger, (3) breast, (4) breast with two attributes only, (5) iris, and (6) iris with two attributes only. All of these are relatively small problems with up to 699 instances and up to 11 attributes. The choice is motivated by both theoretical and practical aspects. From a theoretical point of view, the procedure is based on relative comparisons and is not expected to be influenced by problem dimensionality. From a practical point of view, the procedure can be time-prohibitive if applied to large problems.
For each single challenge, the aim is to select one single feature that has the highest chance to detect the given challenge when measured on a new problem instance. For each challenge, the output of the procedure is composed of six sorted lists (one list per tested problem instance) of significant features. We compare these six lists and select the features that most frequently appear at the top of the lists. The selected features are (i) standard deviation of class probabilities, (ii) proportion of instances with missing values, (iii) mean class standard deviation, (iv) maximum coefficient of variation within classes, (v) mean coefficient of variation of the class attribute, (vi) minimum skewness of the class attribute, (vii) mean skewness of the class attribute, (viii) minimum normalized entropy of the attributes, (ix) maximum normalized entropy of the attributes, (x) standard deviation of the joint entropy between attributes and class attribute, (xi) skewness of the joint entropy between attributes and class attribute, (xii) standard deviation of the mutual information between attributes and class attribute, (xiii) mean concept variation, (xiv) standard deviation of the concept variation, (xv) kurtosis of the concept variation, (xvi) mean weighted distance, (xvii) standard deviation of the weighted distance, (xviii) skewness of the weighted distance.
Along with these features associated with specific challenges, we have also considered features that are frequently used in meta-learning studies. Based on a literature review over the 1992–2015 period (Aha 1992; Brazdil et al. 1994; Gama and Brazdil 1995; Michie et al. 1994; Vilalta 1999; Bensusan and Giraud-Carrier 2000; Pfahringer et al. 2000a; Peng et al. 2002; Smith et al. 2002; Castiello et al. 2005; Ali and Smith 2006; Vanschoren 2010; Reif et al. 2012, 2014; Reif and Shafait 2014; Garcia et al. 2015) we identify 21 features. The details are reported in Table 3. A short explanation regarding the meaning of the symbols used in this table can be found in Table 4. Finally, we consider complexity measures (Ho and Basu 2002) because they explicitly aim to characterize the difficulty of the classification task.
Table 3 Frequent features selected from the literature over the period 1992–2015
Table 4 Description of the attributes used in Table 3
All of the identified features are combined in a single list and further processed. The aim is to identify uncorrelated features that are linearly related to algorithm performance; thus we select features whose absolute pairwise correlation with the other selected features is below 0.7 and whose absolute correlation with algorithm performance is above 0.3 (a minimal sketch of this filtering step follows the list below). The final set of relevant features is as follows, with further details of each feature and the correlation matrix presented in Sect. 3 of the Supplementary Material:
\(H(\mathbf {X})_{\max }^{'}\)—maximum normalized entropy of the attributes
\(H_{c}^{'}\)—normalized entropy of class attribute
\(\overline{M}_{CX}\)—mean mutual information of attributes and class
\(DN_{ER}\)—error rate of the decision node
\(SD(\nu )\)—standard deviation of the weighted distance
F3—maximum feature efficiency
F4—collective feature efficiency
L2—training error of linear classifier
N1—fraction of points on the class boundary
N4—nonlinearity of nearest neighbor classifier
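The correlation-based filtering described above can be sketched as follows (Python/NumPy). The two thresholds come from the text; the greedy ordering of candidates by their correlation with performance is our own implementation choice, since the paper only states the thresholds.

```python
import numpy as np

def filter_features(F: np.ndarray, y: np.ndarray,
                    perf_thresh: float = 0.3, corr_thresh: float = 0.7) -> list:
    """Select feature columns of F (instances x features) that correlate with performance y
    (|r| > perf_thresh) and are mutually uncorrelated (pairwise |r| < corr_thresh).

    Features are examined greedily in decreasing order of |corr(feature, y)|; the greedy
    order is an implementation choice, the thresholds come from the text."""
    r_fy = np.array([np.corrcoef(F[:, j], y)[0, 1] for j in range(F.shape[1])])
    candidates = [j for j in np.argsort(-np.abs(r_fy)) if abs(r_fy[j]) > perf_thresh]
    kept = []
    for j in candidates:
        if all(abs(np.corrcoef(F[:, j], F[:, k])[0, 1]) < corr_thresh for k in kept):
            kept.append(j)
    return kept
```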
Assessing adequacy of the feature set via performance prediction
The set of ten selected features is adequate for our purposes if it exposes the strengths and weaknesses of algorithms. To achieve this, a prerequisite is that algorithm performance can be accurately predicted based on the selected set of features. We adopt an approach based on model fitting and evaluation of model accuracy.
We fit flexible models to \((\mathbf {f}_{i}, y_{i})\) pairs where \(\mathbf {f}_{i} \in \mathbb {R}^m\) is an input vector comprising the selected features and \(y_{i} \in \mathbb {R}\) measures the algorithm performance, with \(i=1,\ldots ,235\) instances and \(m=10\) features. We consider two cases. In the first case, the output is a measure of algorithm performance, namely the error rate \(\textit{ER}\); it varies continuously in [0, 1] and gives rise to a regression problem. In the second case, the output is a measure of problem difficulty that we derive from \(\textit{ER}\):
$$\begin{aligned} h(ER) = {\left\{ \begin{array}{ll} 0 &{} \quad \text{ if } ER \le 0.2\\ 1 &{} \quad \text{ if } ER > 0.2;\\ \end{array}\right. } \end{aligned}$$
which takes on labels \(\left\{ 0,1\right\} \) (corresponding to easy and hard instances respectively) and gives rise to a binary classification problem. For both regression and classification problems, we use a Support Vector Machine (SVM) model with Gaussian Radial Basis Function (RBF) kernel \(k(\mathbf {f},\mathbf {f}') = \exp (-\gamma \Vert \mathbf {f}-\mathbf {f}'\Vert ^2)\). The types of SVM used are \(\epsilon \)-regression and C-classification, respectively. Both \(\epsilon \)-regression and C-classification have two parameters: the cost C in the regularization term and the RBF hyper-parameter \(\gamma \). An additional parameter \(\epsilon \) is used in \(\epsilon \)-regression to tolerate small approximation errors (Vapnik 1995). We tune C and \(\epsilon \) through grid search in [1, 10] and [0, 1] respectively; we estimate a good value of the RBF hyper-parameter \(\gamma \) based on the 0.1 and 0.9 quantiles of \(\Vert \mathbf {f}-\mathbf {f}'\Vert ^2\) (Caputo et al. 2002). We use ten-fold cross-validation to train the model and assess the model generalization ability. The cross-validated Mean Squared Error (cv-MSE) and Error Rate (cv-ER) are used as estimates of the model generalization ability in regression and classification, respectively. The described procedure relies on the R packages e1071 (Meyer et al. 2015) and kernlab (Karatzoglou et al. 2004).
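The sketch below reproduces the spirit of the regression case using scikit-learn rather than the R packages named above. The ranges for C and \(\epsilon \) follow the text; the grid resolutions and the way the distance quantiles are turned into candidate values of \(\gamma \) are our own choices.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def fit_performance_model(F: np.ndarray, er: np.ndarray) -> GridSearchCV:
    """epsilon-SVR with RBF kernel predicting error rate from the feature vectors.
    gamma candidates come from the 0.1/0.9 quantiles of pairwise squared distances
    (Caputo-style heuristic); C and epsilon are grid-searched; 10-fold CV MSE is the score."""
    d2 = pdist(F, metric="sqeuclidean")
    q10, q90 = np.quantile(d2, [0.1, 0.9])
    grid = {
        "C": np.linspace(1, 10, 10),
        "epsilon": np.linspace(0.0, 1.0, 11),
        "gamma": [1.0 / q10, 2.0 / (q10 + q90), 1.0 / q90],
    }
    search = GridSearchCV(SVR(kernel="rbf"), grid, cv=10,
                          scoring="neg_mean_squared_error")
    return search.fit(F, er)

# cv_mse = -fit_performance_model(F, er).best_score_   # cross-validated MSE estimate
```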
Table 5 reports the values of the SVM parameters and the cross-validated error for both the regression and classification studies; for the regression problem the table also reports the coefficient of determination \(R^2\) as a measure of goodness-of-fit; \(R^2=\left( \text {cor}\left( \mathbf {y},\widehat{\mathbf {y}}\right) \right) ^2\), where \(\mathbf {y}\) and \(\widehat{\mathbf {y}}\) are the observed and estimated algorithm performance. The SVM models for LDA and QDA tend to present the largest errors, indicating that additional features might be required to capture the challenges that instances provide for those algorithms. Overall, the small values of the cross-validated errors and the large \(R^2\) values indicate that the selected features are adequate to accurately predict algorithm performance and problem difficulty, although there is always room for improvement through additional feature creation.
Table 5 Parameters values and performance of the SVM models approximating the functional relationship between selected features and (i) algorithm performance (regression case), (ii) problem difficulty (classification case)
Creating an instance space
The final aim of the current research is to expose strengths and weaknesses of classification algorithms and provide an explanation for the good or bad performance based on features of the problem instances. The quality of the problem instances to support these insights must be evaluated. A critical step is the visualization of the instances, their features and algorithm performance in a common space, the instance space.
Both problem instances and features play a critical role in determining a suitable instance space. Instances must be diverse and dense enough to uniformly cover a wide degree of problem difficulty; for all algorithms there must exist both easy and hard instances, and the transition from easy to hard should be densely covered. On the other hand, features must correlate to algorithm performance, measure diverse aspects of the problem instances, and be uncorrelated with one another. The feature set should be small in size, yet it should comprehensively measure aspects of the problem instances that either challenge algorithms or make their task easy.
How we choose to project the instances from a high-dimensional feature vector to a 2-D instance space is also a critical decision that affects the usefulness of the instance space for further analysis. The ideal instance space maps the available problem instances to a 2-dimensional representation in such a way that both features and algorithm performance vary smoothly and predictably across the space. This exposes trends in features and algorithm performance, and helps to partition the instance space into easier and harder instances, and show how the features support those partitions. The comparison can give an instant perception of why a given algorithm performs well or poorly in a given area of the instance space. We will focus here on finding projections that result in linear trends, but the general approach can be extended to encompass more complex interplays including pair-wise interactions and non-linear relationships.
Since existing problem instances are limited in number and are not necessarily representative of a broader population of classification problems, the first instance space we generate may not be capable of providing the insights we seek. If we generated additional instances, our choice of features might also need to be updated to explain their performance. Clearly, instance space generation is an iterative process that might require multiple adjustments to identify an optimal subset of features and a suitable set of instances. The steps that we have implemented to generate the instance space are:
Select a set of relevant features and evaluate their adequacy;
Generate an instance space and evaluate the adequacy of the instances;
If the instance space is inadequate, artificially generate new instances and return to Step 1.
With strong evidence that the relevant features accurately predict algorithm performance and problem difficulty, we now build an instance space to inspect the relationships between problem instances—their features and their difficulty for the chosen algorithms—and objectively assess algorithm performance across the broadest space of instances \(\mathcal {P}\) rather than just \(\mathcal {I}\). In previous work in graph coloring (Smith-Miles et al. 2014), we used Principal Component Analysis to project graph features into a 2-dimensional space. However, the PCA model was somewhat unsatisfactory to predict performance, since PCA is only concerned with maximizing variance explained in the features with no regard for projecting to show trends in difficulty. Therefore, we reformulate the dimensionality reduction problem to consider an optimal projection for our purpose below.
A new interpretable projection approach
Given the feature data matrix \(\mathbf {F} = [\mathbf {f}_{1} \ \mathbf {f}_{2} \ \dots \ \mathbf {f}_{n}]\in \mathbb {R}^{m\times n}\) and algorithm performance vector \(\mathbf {y} \in \mathbb {R}^n\), where m is the number of features and n is the number of problem instances, we achieve an ideal projection of the instances if we can find \(\mathbf {A}_{r} \in \mathbb {R}^{d \times m}\), \(\mathbf {B}_{r} \in \mathbb {R}^{m \times d}\) and \(\mathbf {c}_{r} \in \mathbb {R}^{d}\) which minimize the approximation error
$$\begin{aligned} \Vert \mathbf {F} - \widehat{\mathbf {F}} \Vert ^{2}_{F} + \Vert \mathbf {y}^{\top } - \widehat{\mathbf {y}}^{\top } \Vert ^{2}_{F} \end{aligned}$$
such that
$$\begin{aligned} \mathbf {Z}&= \mathbf {A}_{r}\mathbf {F},\\ \widehat{\mathbf {F}}&= \mathbf {B}_{r}\mathbf {Z},\\ \widehat{\mathbf {y}}^{\top }&= \mathbf {c}_{r}^{\top }\mathbf {Z}, \end{aligned}$$
with \(d=2\) being the target dimension. Without loss of generality we assume that the feature data \(\mathbf {F}\) and \(\mathbf {y}\) are centered, \(m<n\) and \(\mathbf {F}\) is full row rank, i.e. \(\text {rank}\left( \mathbf {F}\right) = m\). If \(\mathbf {F}\) is not full dimensional then we consider the problem in a subspace spanned by \(\mathbf {F}\).
Thus, we have the following optimization problem
$$\begin{aligned} (\mathcal {D}) \quad \min _{\mathbf {A}_{r},\,\mathbf {B}_{r},\,\mathbf {c}_{r}} \quad&\left\| \mathbf {F} - \mathbf {B}_{r}\mathbf {Z} \right\| ^{2}_{F} + \left\| \mathbf {y}^{\top } - \mathbf {c}_{r}^{\top }\mathbf {Z} \right\| ^{2}_{F} \\ \text {s.t.} \quad&\mathbf {Z} = \mathbf {A}_{r}\mathbf {F},\\&\mathbf {A}_{r} \in \mathbb {R}^{d \times m}, \quad \mathbf {B}_{r} \in \mathbb {R}^{m \times d}, \quad \mathbf {c}_{r} \in \mathbb {R}^{d}. \end{aligned}$$
In Appendix A we prove the existence of a global optimum for \(\mathcal {D}\), and that this optimum is attained by infinitely many solutions. A lower bound for \(\mathcal {D}\) is given by the two largest principal components of the matrix \(\bar{\mathbf {F}}=\left( \mathbf {F}^{\top } \ \mathbf {y}\right) ^{\top }\), which would correspond to the solution if \(\mathbf {F}\mathbf {F}^{\top }\) is invertible; otherwise the solution is numerically unstable and the method provides an approximation. The results from Appendix A also hold for a matrix of performances \(\mathbf {Y}\), meaning that a joint instance space can be calculated for a group of algorithms. Performance is then estimated as \(\widehat{\mathbf {Y}} = \mathbf {C}_{r}\mathbf {Z}\), with \(\mathbf {C}_{r}\in \mathbb {R}^{\cdot \times d}\), where \(\cdot \) represents the number of algorithms.
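The lower bound mentioned above can be obtained directly from a rank-2 approximation of the stacked matrix \(\bar{\mathbf {F}}\). The sketch below (Python/NumPy, not the authors' BIPOP-CMA-ES pipeline) illustrates this under the stated assumption that the data are centered.

```python
import numpy as np

def projection_lower_bound(F: np.ndarray, Y: np.ndarray, d: int = 2):
    """Lower bound of problem (D): rank-d PCA reconstruction error of the augmented
    matrix F_bar = [F; Y], where F is m x n (features) and Y is a x n (performances),
    both assumed centered."""
    F_bar = np.vstack([F, Y])                       # (m + a) x n augmented matrix
    U, s, Vt = np.linalg.svd(F_bar, full_matrices=False)
    approx = U[:, :d] @ np.diag(s[:d]) @ Vt[:d, :]  # best rank-d approximation
    lower_bound = np.linalg.norm(F_bar - approx, "fro") ** 2
    Z = np.diag(s[:d]) @ Vt[:d, :]                  # a candidate 2-D representation
    return lower_bound, Z
```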
Let \(\mathbf {F}\in \mathbb {R}^{10\times 235}\) be a matrix whose rows correspond to the ten relevant features from Sect. 4 and whose columns correspond to the 235 UCI/KEEL/DCol instances. Each feature was transformed as follows: F4 was scaled to \(\left[ -0.99999,0.99999\right] \) and \(\tanh ^{-1}\)-transformed, while \(\left\{ H_C^{'},\overline{M}_{CX},DN_{er},SD(\nu ),F3,L2,N1\right\} \) were square-root transformed. Let \(\mathbf {Y}\in \mathbb {R}^{10\times 235}\) be a matrix whose rows correspond to the square-root transformed error rates of the ten algorithms in Table 5. Both features and error rates were then standardized to \(\mathcal {N}\left( 0,1\right) \). Using Corollary 1 from Appendix A, we found that the lower bound is \(1.7216\times 10^{3}\). Furthermore, \(\mathbf {F}\mathbf {F}^{\top }\) was found to be right-side invertible only; hence, the error when using Eq. (13) is equal to \(1.8749\times 10^{3}\). These error values are large because they correspond to the sum of the errors over all instances, features and algorithms; the mean error per instance, feature and algorithm is 0.36.
We solve \(\left( \mathcal {D}\right) \) numerically using BIPOP-CMA-ES, a stochastic, iterative, variable metric method with demonstrated effectiveness on middle-sized optimization problems (Hansen 2009). To use this method, we represent \(\left\{ \mathbf {A}_{r},\mathbf {B}_{r},\mathbf {C}_{r}\right\} \) as a column vector by concatenating the matrix columns. We run BIPOP-CMA-ES 30 times, starting from random positions, using the default parameters and a maximum of \(10^{5}\) evaluations of \(\left( \mathcal {D}\right) \). The average error from the runs was \(1.8658\times 10^{3}\) with a standard deviation of \(2.0758\times 10^{-12}\), which can be considered within numerical precision, meaning that all runs converged to the same error. On the other hand, if \(\mathbf {A}_{r}\) is set to be the two largest principal components of \(\mathbf {F}\) as in Smith-Miles et al. (2014), the average error from 30 runs of BIPOP-CMA-ES is \(2.0403\times 10^{3}\) with a standard deviation of \(8.0996\times 10^{-13}\), meaning that PCA is a suboptimal solution of \(\left( \mathcal {D}\right) \). Given Theorem 2 from Appendix A, we can conclude that BIPOP-CMA-ES converged to a global optimum. The ratio between the lower bound and the numerical solution was 0.9227. To select the best solution from these thirty runs, we define a measure of topological preservation as the Pearson correlation between the distances in the feature space, \(\left\| \mathbf {f}_{i}-\mathbf {f}_{j}\right\| \), and the distances in the instance space, \(\left\| \mathbf {z}_{i}-\mathbf {z}_{j}\right\| \) (Yarrow et al. 2014). The chosen transformation of instances from the 10-D feature space to the 2-D instance space, with the highest topological preservation of 0.8026, is:
$$\begin{aligned} \mathbf {Z} = \left[ \begin{array}{rr} 0.070 & 0.180 \\ 0.094 & 0.618 \\ -0.277 & -0.052 \\ 0.114 & 0.192 \\ 0.045 & -0.100 \\ -0.128 & 0.151 \\ -0.045 & 0.077 \\ 0.184 & 0.017 \\ 0.449 & 0.223 \\ 0.132 & -0.112 \\ \end{array} \right] ^{\top } \left[ \begin{array}{c} H(\mathbf {X})_{\max }^{'}\\ H_{c}^{'}\\ \overline{M}_{CX}\\ DN_{ER}\\ SD(\nu )\\ F3\\ F4\\ L2\\ N1\\ N4 \end{array} \right] \end{aligned}$$
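Given a matrix of transformed and standardized feature rows in the order listed in the equation above, applying the projection and computing the topological-preservation score can be sketched as follows (Python/NumPy/SciPy; our code and naming, not the authors').

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

A_r = np.array([
    [ 0.070,  0.180], [ 0.094,  0.618], [-0.277, -0.052], [ 0.114,  0.192],
    [ 0.045, -0.100], [-0.128,  0.151], [-0.045,  0.077], [ 0.184,  0.017],
    [ 0.449,  0.223], [ 0.132, -0.112],
]).T                                  # 2 x 10 projection matrix (transpose of the printed one)

def project(F: np.ndarray) -> np.ndarray:
    """Map the 10 x n matrix of transformed, standardized features to 2-D coordinates Z."""
    return A_r @ F

def topological_preservation(F: np.ndarray, Z: np.ndarray) -> float:
    """Pearson correlation between pairwise distances in feature space and instance space."""
    return pearsonr(pdist(F.T), pdist(Z.T))[0]
```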
Fig. 2 Distribution of normalized features on the projected instance space
Fig. 3 Normalized error rate of each classification algorithm on the projected instance space
Fig. 4 Sizes of the instances in terms of the number of observations \(\left( p\right) \), attributes \(\left( q\right) \), and classes \(\left( K\right) \). All have been \(\log _{10}\)-scaled
Instance space results
Figures 2, 3 and 4 show the instance space resulting from Eq. (7). Figure 2 enables us to visualize the instances described by their features selected in Sect. 4, while Fig. 3 shows the Error Rate \(\left( ER\right) \) of each algorithm in Sect. 3.2 distributed across the instance space. Both the features and the error rate were scaled to \(\left[ 0,1\right] \) using their maximum and minimum. The fit of the projection model given by \(\left\{ \mathbf {A}_{r},\mathbf {B}_{r},\mathbf {C}_{r}\right\} \) is evaluated using the coefficient of determination \(R^{2}\), defined as in Sect. 4. Recall that our objective for projection was to achieve linearity across the instance space for each feature as well as performance of each algorithm, as much as possible simultaneously. For the features, the best fit is obtained for N1 \(\left( R^{2}=0.910\right) \), and the worst fit for \(H(\mathbf {X})_{\max }^{'}\) \(\left( R^{2}=0.116\right) \). For the error rate, the best fit is obtained for KNN \(\left( R^{2}=0.805\right) \), and the worst fit for QDA \(\left( R^{2}=0.367\right) \). The median \(R^{2}\) is equal to 0.650, meaning that the projection model describes a linear trend between most features and algorithms.
If the linear fit across the instance space of a feature is poor, this simply indicates that the feature plays no significant part in explaining the instance space (it has a low coefficient in the linear combinations that create the principal component axes). So we cannot expect to describe the location of instances in terms of such features. If there is a poor linear fit for an algorithm's performance, however, this tells us that the features do not have a strong linear relationship with that algorithm's performance. For some algorithms, the choice of features may be better than for others. We have chosen a common feature set that performs well on average across all algorithms, but some algorithms may benefit from their own set of features that explain performance. We see this mirrored also in the performance prediction results in Table 5.
Figure 4 illustrates the size of the instances by the number of (a) observations, (b) attributes, and (c) classes. We report that 2.5% of the instances have fewer than 50 observations, 5.9% have fewer than 100, and only 1.7% have more than 10,000. The majority of instances (66.1%) have between 100 and 2000 observations. In terms of attributes, 33.5% of the instances have fewer than ten attributes, 76.7% have fewer than 50, and only 2.5% have more than 500. In terms of classes, 53.8% of the instances have two classes and 14.8% have three classes. Only 5.9% of the instances have more than ten classes, with the largest having 102. In general, the selected UCI/KEEL/DCol set is skewed towards binary problems with a middle-sized number of observations and attributes. It should be noted that we omitted very large datasets due to computational constraints when evaluating 509 features for the statistical study, but we do not believe this limits our conclusions except for the absence of huge big-data problems. We should still be able to understand how the features of a dataset combine to create complexity even for moderate-sized datasets.
From these figures we can conclude that, for our selected 235 instances, the number of observations per instance increases from top to bottom, while the number of classes increases from right to left. There is no trend emerging from the number of attributes; hence, it does not influence the performance of the algorithms as much as the number of observations or classes. Those algorithms whose \(R^{2}\) is above 0.500 tend to find the instances on the bottom left side of the space easier, whereas the remaining algorithms tend to find those in the bottom center of the space easier. This means that most of the instances with a high number of observations and classes are relatively easy for most algorithms, with the exception of LDA and QDA. In general, high values of \(H(\mathbf {X})_{\max }^{'}\), \(DN_{ER}\) and N1 tend to produce harder instances for most algorithms.
Objective assessment of algorithmic power
Our method for objective assessment of algorithmic power is based on the accurate estimation and characterization of each algorithm's footprint—a region in the space of all possible instances of the problem where an algorithm is expected to perform well based on inference from empirical performance analysis.
We have previously reported methods for calculation and analysis of algorithm footprints as a generalized boundary around known easy instances. For example, in Smith-Miles and Tan (2012) we used the area of the convex hull created by triangulating the instances where good performance was observed. The convex hull was then pruned by removing those triangles whose edges exceeded a user-defined threshold. However, there may be insufficient evidence that the remaining triangles actually represent areas of good performance. In Smith-Miles et al. (2014), unconnected triangles were generated by finding the nearest neighbors and maintaining a taboo list. The triangles were merged together if the resulting region fulfilled some density and purity requirements. However, randomization steps in the triangle construction process lead to some triangles becoming exceedingly large. In this paper, we use an improved approach (Muñoz and Smith-Miles 2017) described by the two algorithms presented in Appendix B.
Algorithm 1 constructs the footprints following these steps: (i) measuring the distances between all instances, while marking for elimination those instances with a distance lower than a threshold, \(\delta \); (ii) calculating a Delaunay triangulation with the remaining instances; (iii) finding the concave hull, by removing any triangle with edges larger than another threshold, \(\Delta \); (iv) calculating the density and purity of each triangle in the concave hull; and, (v) removing any triangle that does not fulfill the density and purity limits.
The parameters for Algorithm 1 are: the lower and upper distance thresholds, \(\left\{ \delta ,\Delta \right\} \), set to 1 and 25% of the maximum distance respectively. The density threshold, \(\rho \), is set to 10, and the purity threshold, \(\pi \), is set to 75%. Algorithm 2 removes the contradictions that could appear when two conclusions could be drawn from the same section of \(\mathcal {I}\) due to overlapping footprints, e.g., when comparing two algorithms. This is achieved by comparing the area lost by the overlapping footprints when the contradicting sections are removed, while maintaining the density and purity thresholds.
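The following sketch outlines the footprint construction steps of Algorithm 1 using SciPy's Delaunay triangulation. Reading the distance thresholds as 1% and 25% of the maximum distance, and the particular way density and purity are computed per triangle, are our assumptions; the authors' exact procedure is given in Appendix B.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.spatial.distance import pdist, squareform

def footprint(Z, good, delta_frac=0.01, Delta_frac=0.25, rho=10.0, pi=0.75):
    """Sketch of Algorithm 1 (not the authors' exact code).

    Z    : (n, 2) instance coordinates in the instance space.
    good : boolean mask marking instances where the algorithm performs well."""
    D = squareform(pdist(Z))
    delta, Delta = delta_frac * D.max(), Delta_frac * D.max()

    # (i) remove near-duplicate good instances closer than delta
    kept = []
    for i in np.flatnonzero(good):
        if all(D[i, j] >= delta for j in kept):
            kept.append(i)
    pts = Z[kept]

    # (ii) Delaunay triangulation of the remaining good instances
    area = n_all_total = n_good_total = 0.0
    for simplex in Delaunay(pts).simplices:
        c = pts[simplex]
        # (iii) concave hull: discard triangles with an edge longer than Delta
        if max(np.linalg.norm(c[a] - c[b]) for a, b in [(0, 1), (1, 2), (0, 2)]) > Delta:
            continue
        # (iv) density and purity of the triangle, measured over all instances
        inside = Delaunay(c).find_simplex(Z) >= 0
        n_all, n_good = inside.sum(), (inside & good).sum()
        tri_area = 0.5 * abs((c[1, 0] - c[0, 0]) * (c[2, 1] - c[0, 1])
                             - (c[2, 0] - c[0, 0]) * (c[1, 1] - c[0, 1]))
        # (v) keep triangles that satisfy the density and purity limits
        if tri_area > 0 and n_all / tri_area >= rho and n_good / n_all >= pi:
            area += tri_area
            n_all_total += n_all
            n_good_total += n_good
    density = n_all_total / area if area > 0 else 0.0
    purity = n_good_total / n_all_total if n_all_total > 0 else 0.0
    return {"area": area, "density": density, "purity": purity}
```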
Table 6 Footprint analysis of the algorithms using Eq. (1) and \(\beta =0.5\)
Table 6 presents the results from the analysis using Eq. (1) to define the instance hardness. The best algorithm is the one for which \(\textit{ER}\) is the lowest on the given instance. In addition, an instance is defined as \(\beta \)-hard with \(\beta =0.5\) if 50% of the algorithms have an error rate higher than 20%. The results have been normalized over the area (11.6685) and density (19.8827) of the convex hull that encloses all instances. Further results are illustrated in Fig. 5, which shows the instances whose error rate is below 20% as blue marks and those above 20% as red marks. The footprint for QDA has a normalized area of 8.5%, making QDA the weakest algorithm in the portfolio, while J48, with an area of 63.7%, could be considered the strongest. However, given the lack of diversity in the UCI/KEEL/DCol set, most algorithms fail in similar regions of the space, and we lack instances that reveal more subtle differences in performance. In fact, over half of the instance space is considered to contain \(\beta \)-easy instances, while \(\beta \)-hard instances occupy only approximately 20%, for \(\beta =0.5\). In addition, large empty areas are visible. For example, a single instance is located at \(\mathbf {z}=\left[ 0.744,2.833\right] \), with the next closest located at \(\mathbf {z}=\left[ 0.938,2.281\right] \). This single instance has \(ER>20\%\) for all algorithms except J48, whereas the nearby instances have \(ER>20\%\) for all algorithms. Therefore, more instances are needed to conclude whether this region represents a strength for J48.
Figure 6 illustrates, for each instance in the instance space, (a) its best algorithm and (b) its \(\beta \)-hardness. Inspection of the best algorithm per instance explains the best-algorithm results observed in Table 6, in which the LDA, QDA, poly-SVM and RBF-SVM footprints cover 0% of the instance space. This means there is no dense concentration of instances for which these algorithms are the best, although they are still competitive across a broad part of the instance space. Instead, the instances for which these algorithms are best are scattered throughout the space, limiting our ability to construct a footprint with confidence. Of course, default parameters have been used for all algorithms, and the footprint calculation could be different with parameter tuning to allow each algorithm to maximize its footprint. We also observe less-than-ideal purity for most algorithms, due to significant overlap between footprints. Overall, the selection of the best algorithm seems to be more related to the number of attributes than to any other feature. Furthermore, the \(\beta \)-hardness of the UCI/KEEL/DCol set of instances is consistent with the conclusions drawn from Figs. 3 and 5, which show that most of these algorithms have similar performances on the problem set \(\mathcal {I}\). Despite our extensive efforts to generate an instance space to provide visual insights into algorithm strengths and weaknesses—using a rigorous feature selection process and optimized projection to 2-D—we believe that the instances in UCI/KEEL/DCol are not sufficiently diverse to reveal the kinds of insights we seek.
Fig. 5 Footprints of the algorithms in the instance space
Fig. 6 Overall performance of the algorithm portfolio, with the best algorithm for each instance shown in (a), while (b) shows blue marks representing \(\beta \)-easy instances and red marks representing \(\beta \)-hard instances
Generation of artificial problem instances
While most of the UCI/KEEL/DCol instances are based on real-world data, the results from Sects. 6 and 7 demonstrate the limitations of this set for refined algorithm evaluation. Most instances elicit similar performance from fundamentally different algorithms, such as KNN, RBF-SVM and RF. Furthermore, there are a few areas of the instance space in which instances are scarce, for example around the single instance at \(\mathbf {z}=\left[ 0.744,2.833\right] \), for which only J48 achieved \(ER\le 20\%\). These limitations provide an opportunity to generate new instances that (i) may produce different performance from existing algorithms, such that their strengths and weaknesses can be better understood; (ii) have features that place them in empty areas of the space, or that help push the boundaries currently known; and (iii) represent modern challenges in machine learning classification.
Perhaps the most common way to artificially generate test instances is to select and sample an arbitrary probability distribution. However, this approach lacks control, as there is no guarantee that the resulting dataset will have specific features. In Macia and Bernadó-Mansilla (2014), an alternative method is proposed in which a "seed" dataset is adjusted by evolving each observation. However, this approach resulted in very little change in the features of the dataset (merely a small magnitude perturbation), which makes it unsuitable for our task of exploring empty areas or pushing the boundaries of the instance space. Furthermore, as the number of observations increases, the evolution process quickly becomes intractable. An alternative is provided in Soares (2009), where new datasets are obtained by switching an independent attribute with the class vector. Assuming q categorical attributes, it is possible to obtain q new derived datasets. However, this approach is susceptible to missing target values, skewed class distributions, or difficulties when the new class is completely uncorrelated with the independent variables.
We therefore present in this paper a proof-of-concept for a new method to generate test instances by evolving datasets to lie at target locations in the instance space. Before describing this method, we first consider whether selecting other datasets beyond the UCI repository, KEEL and DCoL would have given a more diverse instance space. A recent popular addition to classification dataset repositories comes from the OpenML project (Vanschoren et al. 2013). Figure 7 shows the results of projecting a set of OpenML datasets into our instance space. The blue marks represent the original UCI/KEEL/DCoL set, while the red marks are a selection of OpenML datasets of similar size to those in the UCI/KEEL/DCoL set, that is, those with fewer than 50 classes, 100 features and \(10^{4}\) observations, and with no missing values (Footnote 3). This resulted in 247 datasets, 49 of which are also in the UCI/KEEL/DCoL set. The figure shows that a large portion of the OpenML datasets fall within the areas currently covered by the UCI/KEEL/DCoL set. However, a number of datasets cover previously empty areas in the upper left corner of the space. This suggests that there is some additional diversity created by considering OpenML datasets within the problem sizes considered in this study. Although relaxing the size restrictions used in this example may improve the diversity, complexity should ideally be increased without resorting to expanding the dataset size. Therefore, there is substantial scope to generate more complex datasets of similar sizes.
Fig. 7 Location of 247 instances from OpenML in the constructed instance space. The blue marks represent the original UCI/KEEL/DCoL set, while the red marks are those OpenML problems with fewer than 50 classes, 100 features and \(10^{4}\) observations, without missing values
Generating datasets by fitting Gaussian mixture models
To generate instances with a desired target vector of features, \(\mathbf {f}_{T}\), we tune a Gaussian mixture model (GMM) until the mean squared error (MSE) between \(\mathbf {f}_{T}\) and the feature vector of a sample from the GMM, \(\mathbf {f}_{S}\), is zero, assuming that the GMM is sampled using a fixed seed to guarantee some level of repeatability. Let us define our GMM as having \(\kappa \) components on q attributes; therefore, the probability of an observation \(\mathbf {x}\), being sampled from the GMM is defined as:
$$\begin{aligned} \text {pr}\left( \mathbf {x}\right) = \sum _{k=1}^{\kappa }{\phi _{k}\mathcal {N}\left( \varvec{\mu _{k}},\mathbf {\Sigma }_{k}\right) } \end{aligned}$$
where \(\left\{ \phi _{k}\in \mathbb {R}, \varvec{\mu }_{k}\in \mathbb {R}^{q}, \mathbf {\Sigma }_{k}\in \mathbb {R}^{q\times q}\right\} \) are the weight, mean vector, and covariance matrix of a q-variate normal distribution respectively. For simplicity, we set \(\kappa \) to be a multiple of the number of labels, K, and \(\phi _{k}=\kappa ^{-1}\) for all components. Tuning the GMM can be thought of as a continuous black-box minimization problem; therefore, we use BIPOP-CMA-ES (Hansen 2009) as in Sect. 6.1. To use this method, we must represent the GMM parameters, \(\left\{ \varvec{\mu }_{1}, \ldots , \varvec{\mu }_{\kappa }, \mathbf {\Sigma }_{1}, \ldots ,\mathbf {\Sigma }_{\kappa }\right\} \), as a vector \(\varvec{\theta }\). Since \(\mathbf {\Sigma }_{k}\) must be a positive semi-definite matrix, we can assume the existence of its Cholesky decomposition, i.e., an upper triangular matrix \(\mathbf {A}_{k}\) such that \(\mathbf {\Sigma }_{k} = \mathbf {A}_{k}^{\top }\mathbf {A}_{k}\). Therefore, \(\varvec{\theta }\) is defined as \(\left[ \mu _{1,1},\ldots ,\mu _{q,1},\ldots ,\mu _{1,\kappa },\ldots , \mu _{q,\kappa },a_{1,1,1},\ldots ,a_{1,q,1},a_{2,2,1},\ldots ,\right. \) \(\left. a_{2,q,1},\ldots ,a_{q,q,1},\ldots ,a_{1,1,\kappa },\ldots ,a_{1,q,\kappa },a_{2,2,\kappa },\ldots ,a_{2,q,\kappa },\ldots ,a_{q,q,\kappa }\right] ^{\top }\), where \(\mu _{\cdot ,k}\) are the elements of the vector \(\varvec{\mu }_{k}\), and \(a_{\cdot ,\cdot ,k}\) are the non-zero elements of \(\mathbf {A}_{k}\). A dataset is completely defined by setting the number of observations, p. This approach has a number of advantages: (i) it is scalable by increasing the number of attributes, observations and classes; (ii) it allows additional flexibility, by setting the values of \(\kappa \) and \(\phi _{k}\); (iii) it enables control over the covariance between attributes, as \(\sigma _{\cdot ,\cdot ,k}^{2}\) can be set permanently to a constant value, even zero, which also has the advantage of reducing the length of \(\varvec{\theta }\); (iv) it immediately produces a model of the data distribution, which is a solution to the classification and clustering problem; and (v) the optimization problem is unconstrained. Nevertheless, this approach does have some limitations: (i) a GMM produces datasets whose attributes are Gaussian distributed real values, eliminating the possibility of more complex variable types, such as categorical; (ii) the fitting problem is known to have local optima; and (iii) it can be computationally expensive for very large datasets, or inaccurate for very small ones.
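A sketch of how a parameter vector \(\varvec{\theta }\) can be decoded into a GMM and sampled into a dataset is given below (Python/NumPy). The unpacking order follows the description above, while the function name, the round-robin assignment of components to class labels and the fixed sampling seed are our own assumptions.

```python
import numpy as np

def sample_dataset(theta, q, kappa, p, K, seed=0):
    """Decode theta = [means, upper-triangular Cholesky factors] into a kappa-component
    GMM over q attributes and draw p observations; components are assigned to the K class
    labels in round-robin order (an assumption) and weighted equally (phi_k = 1/kappa)."""
    rng = np.random.default_rng(seed)                 # fixed seed for repeatability
    theta = np.asarray(theta, dtype=float)
    mus = theta[: kappa * q].reshape(kappa, q)        # component means, grouped per component
    tri = theta[kappa * q:].reshape(kappa, q * (q + 1) // 2)
    iu = np.triu_indices(q)                           # row-major upper-triangular positions
    counts = rng.multinomial(p, np.full(kappa, 1.0 / kappa))   # observations per component
    X, y = [], []
    for k in range(kappa):
        A = np.zeros((q, q))
        A[iu] = tri[k]                                # upper-triangular Cholesky factor A_k
        Sigma = A.T @ A                               # Sigma_k = A_k' A_k (PSD by construction)
        X.append(rng.multivariate_normal(mus[k], Sigma, size=counts[k]))
        y.append(np.full(counts[k], k % K))           # class label of this component
    return np.vstack(X), np.concatenate(y)
```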
Nevertheless, to test this proposed generation approach, we define two experiment types. The first one is aimed at validation, where we create datasets whose features mimic those of the well known Iris dataset. Given that the instances can be described in the high dimensional feature space or in its 2-dimensional projection, two experiments of this type in total are carried out. The purpose of this experiment is to test whether the generation mechanism can converge to a set of target features. Furthermore, this experiment provides additional evidence of the instance space being a good representation, by confirming that a dataset with similar features produces similar response from the algorithms. For simplicity, we fix the dataset size to \(p=150\), \(q=4\), \(K=3\) and \(\kappa =3K\), and carry out ten repetitions with a soft bound of \(10^{4}\) function evaluations, i.e., the number of times a GMM is tested. Under these conditions, \(\varvec{\theta }\) has a length of 84. The values of \(\varvec{\theta }\) are sampled at random from a uniform distribution between \(\left[ -\,10,10\right] \).
For the second experiment, we aim to generate instances located elsewhere in the instance space, with target feature vectors determined by a Latin hypercube sample (LHS) in the 2-D instance space, with bounds determined by the largest and smallest values of \(\mathbf {Z}\). Again we use Iris as a template problem, i.e., \(p=150\), \(q=4\), \(K=3\), but we try to evolve the dataset so that its features match a different target vector. We should note that fixing the size limits our ability to achieve MSE \(=0\), due to the relationships observed in Fig. 4 between \(\left\{ p,q,K\right\} \) and the instance location. However, this experiment will give us an indication of the location bounds of Iris-sized problems in the space and their complexity. We set the value of \(\kappa =3K\), and select the values of \(\varvec{\theta }\) at random from a uniform distribution on \(\left[ -10,10\right] \). Under these conditions, \(\varvec{\theta }\) has a length of 126. We carry out ten repetitions with a soft bound of \(10^{4}\) function evaluations.
Table 7 Error rate of the test algorithms over the Iris-matching instances, with the target defined in the feature space (H) or its 2-D projection (L)
The results of the first experiment are presented in Table 7 as the \(\textit{ER}\) of the test algorithms, with the target defined in either the high-dimensional feature space (H) or the 2-D instance space (L); \(e_{t}\) denotes the MSE to the target per dimension, and \(\rho _{e,p}\) is the Pearson correlation between \(e_{t}\) and the error rate of an algorithm. The generated instances were sorted from the lowest to the highest \(e_{t}\). In boldface are those instances whose difference in \(\textit{ER}\) to Iris is higher than the average difference in \(\textit{ER}\), which is presented in parentheses below AVG. The table shows that as \(e_{t}\) increases, the difference in \(\textit{ER}\) to Iris increases, as demonstrated by \(\rho _{e,p}\), with the exception of KNN and, to a lesser extent, CART. On average, the performance of the generated instances differs by 3.0% compared to Iris. The locations of the Iris dataset and the newly generated Iris-like datasets in the instance space are shown in Fig. 8a and confirm that the new instances indeed have similar features to Iris. These results demonstrate that our generation approach can produce instances with controlled feature values—like Iris features in this case—and that those new instances elicit similar performance from the algorithms.
Figure 8b shows the results for the second experiment. Grey marks represent the UCI/KEEL/DCol problems, blue marks represent the LHS targets, green marks represent the starting locations, red marks represent the stopping locations, and the black mark represents the Iris problem. The black lines represent the trajectory of the evolution process, while the red line represents the boundaries of the instance space considering the largest and smallest observed values of \(\mathbf {X}\), and the correlation between features (Footnote 4). These results demonstrate that by randomly initializing an Iris-sized dataset and setting targets at different locations in the instance space we can generate datasets that are located away from Iris and have different features, despite having the same number of observations, attributes and classes. The evolution process converges towards distant targets, although not all targets were reachable within a reasonable computational time through our generation mechanism for Iris-sized problems. At this point, we do not know whether this is due to the natural boundary that Iris-sized problems can feasibly occupy in the instance space given their range of features and correlations, or whether they could be forced to continue with a larger computational budget. More theoretical work is needed to establish the instance space boundary for different sized problems, but the potential of the method for generating new test instances has been demonstrated.
Fig. 8 Instance space showing the Iris dataset (black) and attempts to generate new datasets that are (a) similar to Iris, (b) Iris-sized instances located elsewhere (red) based on target features (blue)
Conclusions
This paper addresses the issue of objective performance evaluation of machine learning classifiers, and examines the criticality of test instances to support conclusions. Where we find that well-studied test instances are inadequate to evaluate the strengths and weaknesses of algorithms, we must seek methods to generate new instances that will provide the necessary insights. A comprehensive methodology has been developed to enable the quality of the UCI repository and other test instances used by most machine learning researchers to be assessed, and for new classification datasets to be generated with controllable features. The creation of a classification instance space is central to this methodology, and enables us to visualize classification datasets as points in a two-dimensional space, after suitable projection from a higher-dimensional feature space. In this paper we have proposed a new dimension reduction technique ideally suited to our visualization objective, one that maximizes our ability to visualize trends and correlations in instance features and algorithm performances across the instance space. The visualization reveals pockets of hard and easy instances, shows the (lack of) diversity of the set of instances, and enables an objective measure of the relative performance of each algorithm—its footprint in the instance space where the algorithm is expected to do well. Quantitative metrics, such as the area of the footprint, provide objective measures of the relative power and robustness of an algorithm across the broadest range of test instances.
The results presented in this paper demonstrate the lack of diversity of the benchmark instances, as most algorithms had similar footprints, suggesting either that the algorithms are all essentially the same (at least with default parameter settings), that the instances are not revealing the unique strengths and weaknesses of each algorithm as much as is desired, or that the features may not be discriminant enough. In this last case, it is also possible that entirely new features are required in order to describe the performance of some algorithms more effectively. Furthermore, there is significant bias in the sizes and types of problems in the repository. Therefore, we proposed a method to generate new test instances, aiming to enrich the repository's diversity. Our method modifies a template probability distribution until the features of a sample match a target, which can represent either an existing dataset or an arbitrary point in the space.
In addition to the contributions made in this paper to developing new methodologies—instance generation and dimensional reduction—to support our broader objectives, this paper has also contributed to the meta-learning literature through its comprehensive examination of a collection of 509 features, to determine which ones can identify the presence of conditions that challenge classification algorithms. The feature set was reduced to the ten most statistically significant features after analyzing the correlation between features and a measure of algorithm performance. However, it should be noted that our final feature set is based on our current data, the selected UCI/KEEL/DCol datasets; hence, the selected features may change with a larger repository.
While there are theoretical and computational issues that limit our ability to extensively explore and fill the gaps in the instance space at this time—e.g., the lack of precise theoretical bounds of the instance space—our method shows significant promise. Further research on this topic will develop theoretical upper and lower bounds on the features and their dependencies to enable the theoretical boundary of the instance space to be defined more tightly than the one shown in Fig. 8. We will also continue to examine the most efficient representation of a dataset to ensure scalability and enable the instance space to be filled with new instances of arbitrary size. Of course, once we have succeeded in generating a large number of new test instances, we will need to verify that they are more useful for meta-learning, not just that they have different features and live in unique parts of the instance space.
The methodology developed in this paper is an iterative one and will need to be repeated as we accumulate more datasets with a diversity of features that really challenge algorithms. In fact, the OpenML repository (Vanschoren et al. 2013) provides an excellent collection of meta-data, and additional features and algorithms from OpenML can be considered. Sampling landmarking provides relevant meta-features to further extend our current feature set, whereas meta-learning techniques such as those proposed by Lee and Giraud-Carrier (2013) provide valuable resources to select a more comprehensive set of algorithms. New features may need to be selected from the extended set of meta-features to explain the challenges of new instances, and the statistical methodology we have presented can be applied again, perhaps with even more altered datasets than used in this study. Conquering the computational challenges exposed by this proof-of-concept study, and repeating the methodology on the broadest set of instances to determine the features that best explain the performance of different portfolios of algorithms and to create the definitive instance space, will enable insights into the strengths and weaknesses of machine learning classifiers to be revealed.
http://users.monash.edu.au/~ksmiles/matilda/machinelearning/supplementary.pdf.
http://users.monash.edu.au/~ksmiles/matilda/classification.zip.
To extract the relevant datasets, we follow the example in https://mlr-org.github.io/Benchmarking-mlr-learners-on-OpenML/; the datasets are listed in Sect. 1 of the supplementary material.
Available in the supplementary material.
Aha, D. W. (1992). Generalizing from case studies: A case study. In Proceedings of the 9th international conference on machine learning (pp. 1–10).
Alcalá, J., Fernández, A., Luengo, J., Derrac, J., García, S., Sánchez, L., et al. (2010). Keel data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework. Journal of Multiple-Valued Logic and Soft Computing, 17(2–3), 255–287.
Ali, S., & Smith, K. A. (2006). On learning algorithm selection for classification. Applied Soft Computing, 6(2), 119–138.
Balte, A., Pise, N., & Kulkarni, P. (2014). Meta-learning with landmarking: A survey. International Journal of Computer Applications, 105(8), 47–51.
Bensusan, H., & Giraud-Carrier, C. (2000). Discovering task neighbourhoods through landmark learning performances. In D. A. Zighed, J. Komorowski, & J. Żytkow (Eds.), Principles of data mining and knowledge discovery: 4th European conference, PKDD 2000 Lyon, France, September 13–16, 2000 Proceedings (pp. 325–330). Berlin, Heidelberg: Springer.
Brazdil, P., Carrier, C. G., Soares, C., & Vilalta, R. (2008). Metalearning: Applications to data mining. Berlin: Springer Science & Business Media.
Brazdil, P., Gama, J., & Henery, B. (1994). Characterizing the applicability of classification algorithms using meta-level learning. In Machine learning: ECML-94 (pp. 83–102). Springer.
Burton, S. H., Morris, R. G., Giraud-Carrier, C. G., West, J. H., & Thackeray, R. (2014). Mining useful association rules from questionnaire data. Intelligent Data Analysis, 18(3), 479–494.
Caputo, B., Sim, K., Furesjo, F., & Smola, A. (2002). Appearance-based object recognition using SVMS: Which kernel should I use? In: Proceedings of NIPS workshop on statistical methods for computational experiments in visual processing and computer vision, Whistler (Vol. 2002).
Carbonell, J. G., Michalski, R. S., & Mitchell, T. M. (1983). An overview of machine learning. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (pp. 3–23). Berlin, Heidelberg: Springer.
Castiello, C., Castellano, G., & Fanelli, A. M. (2005). Meta-data: Characterization of input features for meta-learning. In V. Torra, Y. Narukawa, & S. Miyamoto (Eds.), Modeling decisions for artificial intelligence: Second international conference, MDAI 2005, Tsukuba, Japan, July 25–27, 2005 Proceedings (pp. 457–468). Berlin, Heidelberg: Springer.
Cohen, J. (1992). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98–101.
Culberson, J. C. (1998). On the futility of blind search: An algorithmic view of "no free lunch". Evolutionary Computation, 6(2), 109–127.
Fayyad, U. M., & Irani, K. B. (1992). On the handling of continuous-valued attributes in decision tree generation. Machine Learning, 8(1), 87–102.
Flach, P. (2012). Machine learning: The art and science of algorithms that make sense of data. Cambridge: Cambridge University Press.
Fujikawa, Y., & Ho, T. (2002). Cluster-based algorithms for dealing with missing values. In Pacific-Asia conference on knowledge discovery and data mining (pp. 549–554). Springer
Fürnkranz, J., & Petrak, J. (2001). An evaluation of landmarking variants. In Working notes of the ECML/PKDD 2000 workshop on integrating aspects of data mining, decision support and meta-learning (pp. 57–68).
Gama, J., & Brazdil, P. (1995). Characterization of classification algorithms. In C. Pinto-Ferreira & N. J. Mamede (Eds.), Progress in artificial intelligence: 7th Portuguese conference on artificial intelligence, EPIA '95 Funchal, Madeira Island, Portugal, October 3–6, 1995 Proceedings (pp. 189–200). Berlin, Heidelberg: Springer.
Ganganwar, V. (2012). An overview of classification algorithms for imbalanced datasets. International Journal of Emerging Technology and Advanced Engineering, 2(4), 42–47.
Garcia, L. P., de Carvalho, A. C., & Lorena, A. C. (2015). Noise detection in the meta-learning level. Neurocomputing, 176, 14–25.
Goethals, B., & Zaki, M. J. (2004). Advances in frequent itemset mining implementations: Report on FIMI'03. ACM SIGKDD Explorations Newsletter, 6(1), 109–117.
Hansen, N. (2009). Benchmarking a bi-population CMA-ES on the BBOB-2009 function testbed. In GECCO '09 (pp. 2389–2396). ACM. https://doi.org/10.1145/1570256.1570333
Hastie, T., Tibshirani, R., Friedman, J., & Franklin, J. (2005). The elements of statistical learning: Data mining, inference and prediction. The Mathematical Intelligencer, 27(2), 83–85.
Hechenbichler, K. S. K. (2014). kknn: Weighted k-nearest neighbors. http://CRAN.R-project.org/package=kknn. R package version 1.2-5.
Ho, T. K., & Basu, M. (2002). Complexity measures of supervised classification problems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3), 289–300.
Holmes, G., Donkin, A., & Witten, I. H. (1994). Weka: A machine learning workbench. In Proceedings of the 1994 second Australian and New Zealand conference on intelligent information systems, 1994 (pp. 357–361). IEEE.
Holte, R. C. (1993). Very simple classification rules perform well on most commonly used datasets. Machine Learning, 11(1), 63–90.
Igel, C., & Toussaint, M. (2005). A no-free-lunch theorem for non-uniform distributions of target functions. Journal of Mathematical Modelling and Algorithms, 3(4), 313–322.
Jordan, M., & Mitchell, T. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255–260.
Karatzoglou, A., Smola, A., Hornik, K., & Zeileis, A. (2004). kernlab—An S4 package for kernel methods in R. Journal of Statistical Software, 11(9), 1–20.
Kotsiantis, S. B. (2007). Supervised machine learning: A review of classification techniques. Informatica, 31, 249–268.
Kotsiantis, S. B., Zaharakis, I. D., & Pintelas, P. E. (2006). Machine learning: A review of classification and combining techniques. Artificial Intelligence Review, 26(3), 159–190.
Langley, P. (2011). The changing science of machine learning. Machine Learning, 82(3), 275–279.
Lee, J. W., & Giraud-Carrier, C. (2013). Automatic selection of classification learning algorithms for data mining practitioners. Intelligent Data Analysis, 17(4), 665–678.
Leite, R., & Brazdil, P. (2008). Selecting classifiers using metalearning with sampling landmarks and data characterization. In Proceedings of the planning to learn workshop (PlanLearn 2008), held at ICML/COLT/UAI (pp. 35–41).
Lessmann, S., Baesens, B., Seow, H.-V., & Thomas, L. C. (2015). Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research. European Journal of Operational Research, 247(1), 124–136.
Lichman, M. (2013). UCI machine learning repository. http://archive.ics.uci.edu/ml
Lindner, G., & Studer, R. (1999). AST: Support for algorithm selection with a CBR approach. In J. M. Żytkow & J. Rauch (Eds.), Principles of data mining and knowledge discovery: Third European conference, PKDD'99, Prague, Czech Republic, September 15–18, 1999 Proceedings (pp. 418–423). Berlin, Heidelberg: Springer.
Macia, N., & Bernadó-Mansilla, E. (2014). Towards UCI+: A mindful repository design. Information Sciences, 261, 237–262.
Macià, N., Orriols-Puig, A., Bernadó-Mansilla, E. (2010). In search of targeted-complexity problems. In Proceedings of the 12th annual conference on genetic and evolutionary computation (pp. 1055–1062). ACM.
Meyer, D., Dimitriadou, E., Hornik, K., Weingessel, A., & Leisch, F. (2015). e1071: Misc functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien (2015). http://CRAN.R-project.org/package=e1071. R package version 1.6-7.
Michie, D., Spiegelhalter, D. J., Taylor, C. C., & Campbell, J. (Eds.). (1994). Machine learning, neural and statistical classification. Upper Saddle River, NJ: Ellis Horwood.
Muñoz, M. A., & Smith-Miles, K. A. (2017). Performance analysis of continuous black-box optimization algorithms via footprints in instance space. Evolutionary Computation, 25(4), 529–554.
Orriols-Puig, A., Macia, N., & Ho, T. K. (2010). Documentation for the data complexity library in c++ (Vol. 196). La Salle: Universitat Ramon Llull.
Peng, Y., Flach, P. A., Soares, C., & Brazdil, P. (2002). Improved dataset characterisation for meta-learning. In S. Lange, K. Satoh, & C. H. Smith (Eds.), Discovery science: 5th international conference, DS 2002 Lübeck, Germany, November 24–26, 2002 Proceedings (pp. 141–152). Berlin, Heidelberg: Springer.
Perez, E., & Rendell, L. A. (1996). Learning despite concept variation by finding structure in attribute-based data. In Proceedings of the thirteenth international conference on machine learning. Citeseer.
Pfahringer, B., Bensusan, H., & Giraud-Carrier, C. (2000a). Meta-learning by landmarking various learning algorithms. In Proceedings of the seventeenth international conference on machine learning (pp. 743–750). San Francisco, CA: Morgan Kaufmann Publishers Inc.
Pfahringer, B., Bensusan, H., & Giraud-Carrier, C. (2000b). Tell me who can learn you and I can tell you who you are: Landmarking various learning algorithms. In Proceedings of the 17th international conference on machine learning (pp. 743–750).
Ramakrishnan, N., Rice, J. R., & Houstis, E. N. (2002). Gauss: An online algorithm selection system for numerical quadrature. Advances in Engineering Software, 33(1), 27–36.
Reif, M., & Shafait, F. (2014). Efficient feature size reduction via predictive forward selection. Pattern Recognition, 47(4), 1664–1673.
Reif, M., Shafait, F., & Dengel, A. (2012). Meta-learning for evolutionary parameter optimization of classifiers. Machine Learning, 87(3), 357–380.
Reif, M., Shafait, F., Goldstein, M., Breuel, T., & Dengel, A. (2014). Automatic classifier selection for non-experts. Pattern Analysis and Applications, 17(1), 83–96.
Rendell, L., & Cho, H. (1990). Empirical learning as a function of concept character. Machine Learning, 5(3), 267–298.
Rice, J. R. (1976). The algorithm selection problem. Advances in Computers, 15, 65–118.
Robnik-Šikonja, M., & Kononenko, I. (2003). Theoretical and empirical analysis of relieff and rrelieff. Machine Learning, 53(1–2), 23–69.
Rudin, C., & Wagstaff, K. L. (2014). Machine learning for science and society. Machine Learning, 95(1), 1–9.
Salzberg, S. L. (1997). On comparing classifiers: Pitfalls to avoid and a recommended approach. Data Mining and Knowledge Discovery, 1(3), 317–328.
Segrera, S., Pinho, J., & Moreno, M. N. (2008). Information-theoretic measures for meta-learning. In E. Corchado, A. Abraham, & W. Pedrycz (Eds.), Hybrid artificial intelligence systems: Third international workshop, HAIS 2008, Burgos, Spain, September 24–26, 2008 Proceedings (pp. 458–465). Berlin, Heidelberg: Springer.
Smith, K. A., Woo, F., Ciesielski, V., & Ibrahim, R. (2002). Matching data mining algorithm suitability to data characteristics using a self-organizing map. In A. Abraham & M. Köppen (Eds.), Hybrid information systems (pp. 169–179). Heidelberg: Physica-Verlag.
Smith-Miles, K., Baatar, D., Wreford, B., & Lewis, R. (2014). Towards objective measures of algorithm performance across instance space. Computers & Operations Research, 45, 12–24.
Smith-Miles, K., & Bowly, S. (2015). Generating new test instances by evolving in instance space. Computers & Operations Research, 63, 102–113.
Smith-Miles, K., & van Hemert, J. (2011). Discovering the suitability of optimisation algorithms by learning from evolved instances. Annals of Mathematics and Artificial Intelligence, 61(2), 87–104.
Smith-Miles, K., & Lopes, L. (2012). Measuring instance difficulty for combinatorial optimization problems. Computers & Operations Research, 39(5), 875–889.
Smith-Miles, K., & Tan, T. (2012). Measuring algorithm footprints in instance space. In IEEE CEC '12 (pp. 3446–3453).
Smith-Miles, K., Wreford, B., Lopes, L., & Insani, N. (2013). Predicting metaheuristic performance on graph coloring problems using data mining. In E. Talbi (Ed.), Hybrid metaheuristics (pp. 417–432). Berlin, Heidelberg: Springer.
Smith-Miles, K. A. (2008). Cross-disciplinary perspectives on meta-learning for algorithm selection. ACM Computing Surveys (CSUR), 41(1), 6.
Soares, C. (2009). UCI++: Improved support for algorithm selection using datasetoids. In Advances in knowledge discovery and data mining: 13th Pacific-Asia conference, PAKDD 2009 Bangkok, Thailand, April 27–30, 2009 Proceedings (pp. 499–506). https://doi.org/10.1007/978-3-642-01307-2_46.
Soares, C., & Brazdil, P. B. (2000). Zoomed ranking: Selection of classification algorithms based on relevant performance information. In D. A. Zighed, J. Komorowski, & J. Żytkow (Eds.), Principles of data mining and knowledge discovery: 4th European Conference, PKDD 2000 Lyon, France, September 13–16, 2000 Proceedings (pp. 126–135). Berlin, Heidelberg: Springer.
Soares, C., Petrak, J., & Brazdil, P. (2001). Sampling-based relative landmarks: Systematically test-driving algorithms before choosing. In Portuguese conference on artificial intelligence (pp. 88–95). Springer.
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing & Management, 45(4), 427–437.
Song, Q., Wang, G., & Wang, C. (2012). Automatic recommendation of classification algorithms based on data set characteristics. Pattern Recognition, 45(7), 2672–2689.
Therneau, T., Atkinson, B., & Ripley, B. (2014). rpart: Recursive partitioning and regression trees. http://CRAN.R-project.org/package=rpart. R package version 4.1-8.
Tsoumakas, G., Vlahavas, I. (2007). Random k-labelsets: An ensemble method for multilabel classification. In European conference on machine learning (pp. 406–417). Springer.
Vanschoren, J. (2010). Understanding machine learning performance with experiment databases. PhD thesis, Katholieke Universiteit Leuven – Faculty of Engineering.
Vanschoren, J., van Rijn, J. N., Bischl, B., & Torgo, L. (2013). Openml: Networked science in machine learning. SIGKDD Explorations, 15(2), 49–60. https://doi.org/10.1145/2641190.2641198.
Vapnik, V. N. (1995). The nature of statistical learning theory. New York, NY: Springer-Verlag.
Venables, W. N., & Ripley, B. D. (2002). Modern applied statistics with S (4th ed.). Springer, New York. http://www.stats.ox.ac.uk/pub/MASS4. ISBN 0-387-95457-0
Vilalta, R. (1999). Understanding accuracy performance through concept characterization and algorithm analysis. In Proceedings of the ICML-99 workshop on recent advances in meta-learning and future work (pp. 3–9).
Vilalta, R., & Drissi, Y. (2002). A characterization of difficult problems in classification. In M. A. Wani, H. R. Arabnia, K. J. Cios, K. Hafeez, & G. Kendall (Eds.), Proceedings of the 2002 international conference on machine learning and applications - ICMLA 2002, June 24–27, 2002, Las Vegas, Nevada (pp. 133–138).
Wagstaff, K. (2012). Machine learning that matters. arXiv preprint arXiv:1206.4656
Weerawarana, S., Houstis, E. N., Rice, J. R., Joshi, A., & Houstis, C. E. (1996). Pythia: A knowledge-based system to select scientific algorithms. ACM Transactions on Mathematical Software (TOMS), 22(4), 447–468.
Yarrow, S., Razak, K. A., Seitz, A. R., & Seriès, P. (2014). Detecting and quantifying topography in neural maps. PLoS ONE, 9(2), 1–14. https://doi.org/10.1371/journal.pone.0087178.
Young, W., Weckman, G., & Holland, W. (2011). A survey of methodologies for the treatment of missing values within datasets: Limitations and benefits. Theoretical Issues in Ergonomics Science, 12(1), 15–43.
This work is funded by the Australian Research Council through Australian Laureate Fellowship FL140100012. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.
School of Mathematical Sciences, Monash University, Clayton, VIC, 3800, Australia
Mario A. Muñoz, Laura Villanova, Davaatseren Baatar & Kate Smith-Miles
Correspondence to Kate Smith-Miles.
Below is the link to the electronic supplementary material.
Supplementary material 1 (pdf 86 KB)
Appendix A: Projection methodology
We consider the optimal solution to the projection problem:
$$\begin{aligned} \min&\left\| \mathbf {F} - \mathbf {B}_{r}\mathbf {Z} \right\| ^{2}_{F} + \left\| \mathbf {y}^{\top } - \mathbf {c}_{r}^{\top }\mathbf {Z} \right\| ^{2}_{F} \nonumber \\ \hbox {s.t.}&\mathbf {Z} = \mathbf {A}_{r}\mathbf {F} \nonumber \\ (\mathcal{D})\qquad \quad&\mathbf {A}_{r} \in \mathbb {R}^{d \times m} \nonumber \\&\mathbf {B}_{r} \in \mathbb {R}^{m \times d} \nonumber \\&\mathbf {c}_{r} \in \mathbb {R}^d \end{aligned}$$
\((\mathcal{D})\) has at least one global minimum.
The problem can be presented as the minimization of the coercive function
$$\begin{aligned} f(\mathbf {A}_{r}, \mathbf {B}_{r}, \mathbf {c}_{r}) = \Vert \mathbf {F} - \mathbf {B}_{r} \mathbf {A}_{r} \mathbf {F} \Vert ^{2}_{F} + \Vert \mathbf {y}^{\top } - \mathbf {c}_{r}^{\top } \mathbf {A}_{r} \mathbf {F} \Vert ^{2}_{F}. \end{aligned}$$
Thus, \((\mathcal{D})\) must have at least one global minimum. \(\square \)
\((\mathcal{D})\) has infinitely many optimal solutions.
If we neglect the constraint (8) and treat \(\mathbf {Z}\) as an independent variable then we get the following relaxation of the problem \((\mathcal{D})\)
$$\begin{aligned} \min&\left\| \mathbf {F} - \mathbf {B}_{r} \mathbf {Z} \right\| ^{2}_{F} + \left\| \mathbf {y}^{\top } - \mathbf {c}_{r}^{\top } \mathbf {Z} \right\| ^{2}_{F} \\ \hbox {s.t.}&\mathbf {B}_{r} \in \mathbb {R}^{m \times d} \\ (\mathcal{R})\qquad \quad&\mathbf {c}_{r} \in \mathbb {R}^{d} \\&\mathbf {Z} \in \mathbb {R}^{d\times n} \end{aligned}$$
which can be rewritten as
$$\begin{aligned} \min&\Vert \bar{\mathbf {F}} - \mathbf {V} \mathbf {Z} \Vert ^{2}_{F} \\ (\mathcal{R}^{'})\qquad \quad \hbox {s.t.}&\mathbf {Z} \in \mathbb {R}^{d\times n} \\&\mathbf {V} \in \mathbb {R}^{(m+1)\times d} \end{aligned}$$
where
$$\begin{aligned} \bar{\mathbf {F}} = \left( \begin{array}{c} \mathbf {F}\\ \mathbf {y}^{\top } \end{array} \right) \quad \hbox {and} \quad \mathbf {V} = \left( \begin{array}{c} \mathbf {B}_{r} \\ \mathbf {c}_{r}^{\top } \end{array} \right) \in \mathbb {R}^{(m+1)\times d} \end{aligned}$$
From any feasible solution \((\bar{\mathbf {V}}, \bar{\mathbf {Z}})\) of \((\mathcal{R}^{'})\) we can construct a feasible solution of \((\mathcal{D})\)
$$\begin{aligned} \left( \begin{array}{c} \bar{\mathbf {B}}_{r} \\ \bar{\mathbf {c}}_{r}^{\top } \end{array} \right)= & {} \bar{\mathbf {V}} \end{aligned}$$
$$\begin{aligned} \bar{\mathbf {A}}_{r}= & {} \bar{\mathbf {Z}} \mathbf {F}^{\top }(\mathbf {F}\mathbf {F}^{\top })^{-1} \end{aligned}$$
with the same objective value. In other words, the relaxed problem \((\mathcal{R}^{'})\) provides an exact lower bound to \((\mathcal{D})\). Moreover, from any optimal solution to \((\mathcal{R}^{'})\) we can construct an optimal solution to \((\mathcal{D})\)—see Corollaries 1 and 2 below.
From PCA, we know that \((\mathcal{R}^{'})\) has infinitely many alternative solutions. Consequently, \((\mathcal{D})\) has infinitely many alternative solutions. \(\square \)
An optimal solution to \((\mathcal{D})\) can be constructed from eigenvectors of \(\bar{\mathbf {F}} \bar{\mathbf {F}}^{\top }\). Precisely,
$$\begin{aligned} \left( \begin{array}{c} \tilde{\mathbf {B}}_{r} \\ \tilde{\mathbf {c}}_{r}^{\top } \end{array} \right)= & {} \tilde{\mathbf {V}} \end{aligned}$$
$$\begin{aligned} \tilde{\mathbf {A}}_{r}= & {} \tilde{\mathbf {V}}^{\top } \left( \begin{array}{c} \mathbf {F}\\ \mathbf {y}^{\top } \end{array} \right) \mathbf {F}^{\top }(\mathbf {F}\mathbf {F}^{\top })^{-1} \end{aligned}$$
where columns of \(\tilde{\mathbf {V}}\) are the d eigenvectors of \(\bar{\mathbf {F}}\bar{\mathbf {F}}^{\top }\) corresponding to the d largest eigenvalues.
It is immediate from PCA that the eigenvectors of \(\bar{\mathbf {F}}\bar{\mathbf {F}}^{\top }\) provide an optimal solution to \((\mathcal{R}^{'})\), where the columns of \(\tilde{\mathbf {V}}\) are the d eigenvectors of \(\bar{\mathbf {F}}\bar{\mathbf {F}}^{\top }\) corresponding to the d largest eigenvalues and
$$\begin{aligned} \tilde{\mathbf {Z}} = \tilde{\mathbf {V}}^{\top } \bar{\mathbf {F}}. \end{aligned}$$
\(\square \)
Note that the matrix \(\tilde{\mathbf {V}}\) obtained from eigenvectors has orthonormal columns.
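The construction in Corollary 1 can be checked numerically. The sketch below is illustrative only: random matrices stand in for the feature matrix \(\mathbf {F}\) and performance vector \(\mathbf {y}\), and the dimensions are arbitrary. It stacks \(\mathbf {y}\) onto \(\mathbf {F}\), takes the d leading eigenvectors of \(\bar{\mathbf {F}}\bar{\mathbf {F}}^{\top }\), and recovers \(\tilde{\mathbf {A}}_{r}\), \(\tilde{\mathbf {B}}_{r}\) and \(\tilde{\mathbf {c}}_{r}\) as in the corollary.

```python
# Numeric sketch of Corollary 1; F and y are random stand-ins, not real meta-data.
import numpy as np

rng = np.random.default_rng(1)
m, n, d = 10, 200, 2                       # features, instances, projected dimension
F = rng.normal(size=(m, n))                # feature matrix (m x n), assumed full row rank
y = rng.normal(size=(1, n))                # algorithm performance measure (1 x n)

F_bar = np.vstack([F, y])                  # (m+1) x n augmented matrix
eigvals, eigvecs = np.linalg.eigh(F_bar @ F_bar.T)
V = eigvecs[:, np.argsort(eigvals)[::-1][:d]]       # d leading eigenvectors (orthonormal columns)

A_r = V.T @ F_bar @ F.T @ np.linalg.inv(F @ F.T)    # projection matrix (d x m), as in the corollary
B_r, c_r = V[:-1, :], V[-1, :]                      # back-projection blocks from V

Z = A_r @ F                                         # 2-D coordinates of the instances
residual = np.linalg.norm(F - B_r @ Z) ** 2 + np.linalg.norm(y - c_r[None, :] @ Z) ** 2
print(Z.shape, round(residual, 2))
```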
Due to the rank factorization, the problem \((\mathcal{D})\) is equivalent to the following problem
$$\begin{aligned} \min&\Vert \mathbf {F} - \mathbf {R}\mathbf {F} \Vert ^2_{F} + \Vert \mathbf {y}^{\top } - \mathbf {s}^{\top } \mathbf {F} \Vert ^2_F \nonumber \\ (\mathcal{D}^{'}) \qquad \quad \hbox {s.t.}&rank \ \left( \begin{array}{c} \mathbf {R} \nonumber \\ \mathbf {s}^{\top } \end{array} \right) = d \\&\mathbf {R} \in \mathbb {R}^{m\times m} \nonumber \\&\mathbf {s} \in \mathbb {R}^{m} \end{aligned}$$
Let \((\tilde{\mathbf {R}}, \tilde{\mathbf {s}})\) be an optimal solution to \((\mathcal{D}^{'})\). Then the system
$$\begin{aligned} \left\{ \begin{array}{ccl} \tilde{\mathbf {R}} &{}=&{} \mathbf {B}_{r}\mathbf {A}_{r} \\ \tilde{\mathbf {s}}^{\top } &{}=&{} \mathbf {c}_{r}^{\top }\mathbf {A}_{r} \end{array}\right. \end{aligned}$$
is feasible and any solution of the system is an optimal solution to \((\mathcal{D})\).
Immediate from rank factorization. \(\square \)
Corollary 1 provides a lower bound to the dimensionality reduction problem, which holds numerically only if \(\mathbf {F}\mathbf {F}^{\top }\) is invertible. Otherwise, \(\tilde{\mathbf {V}}^{\top }\bar{\mathbf {F}}\ne \tilde{\mathbf {A}}\mathbf {F}\), meaning that the solution is numerically unstable; hence, this analytic method is potentially only an approximation in the presence of numerical instability issues.
Appendix B: Algorithms for footprint analysis
Muñoz, M.A., Villanova, L., Baatar, D. et al. Instance spaces for machine learning classification. Mach Learn 107, 109–147 (2018). https://doi.org/10.1007/s10994-017-5629-5
Instance space
Algorithm footprints
Test instance generation
Instance difficulty
Part of a collection:
Special Issue on Metalearning and Algorithm Selection
Permeability changes and effect of chemotherapy in brain adjacent to tumor in an experimental model of metastatic brain tumor from breast cancer
Afroz S. Mohammad1,
Chris E. Adkins1,
Neal Shah1,
Rawaa Aljammal1,
Jessica I. G. Griffith1,
Rachel M. Tallman1,
Katherine L. Jarrell1 &
Paul R. Lockman1
BMC Cancer volume 18, Article number: 1225 (2018) Cite this article
Brain tumor vasculature can be significantly compromised and leakier than that of normal brain blood vessels. Little is known about whether vascular permeability is altered in the brain adjacent to tumor (BAT). Changes in BAT permeability may also lead to increased drug permeation in the BAT, which may exert toxicity on cells of the central nervous system. Herein, we studied permeation changes in BAT using quantitative fluorescent microscopy and autoradiography, while the effect of chemotherapy within the BAT region was determined by staining for activated astrocytes.
Human metastatic breast cancer cells (MDA-MB-231Br) were injected into the left ventricle of female NuNu mice. Metastases were allowed to grow for 28 days, after which animals were injected with the fluorescent tracers Texas Red (625 Da) or Texas Red dextran (3 kDa), or with the chemotherapeutic agent 14C-paclitaxel. The accumulation of the tracers and 14C-paclitaxel in BAT was determined using quantitative fluorescent microscopy and autoradiography, respectively. The effect of chemotherapy in BAT was determined by staining for activated astrocytes.
The mean permeability of Texas Red (625 Da) within the BAT region increased 1.0- to 2.5-fold compared to normal brain, whereas Texas Red dextran (3 kDa) demonstrated a mean permeability increase ranging from 1.0- to 1.8-fold compared to normal brain. The Kin values in the BAT for Texas Red (625 Da) and Texas Red dextran (3 kDa) were 4.32 ± 0.2 × 10⁻⁵ mL/s/g and 1.6 ± 1.4 × 10⁻⁵ mL/s/g respectively, both significantly higher than in normal brain. We also found a significant increase in the accumulation of 14C-paclitaxel in BAT compared to normal brain. We also observed that animals treated with chemotherapy (paclitaxel (10 mg/kg), eribulin (1.5 mg/kg) and docetaxel (10 mg/kg)) showed activated astrocytes in BAT.
Our data showed increased permeation of fluorescent tracers and 14C-paclitaxel in the BAT. This increased permeation led to elevated levels of activated astrocytes in the BAT region in animals treated with chemotherapy.
The incidence of metastatic brain tumors in the United States is approximately 170,000 patients annually [1]. The most common primary sites for brain metastases are lung, breast, and skin, with lung and breast cancers accounting for more than 70% of patients [2]. The incidence of breast cancer metastases to brain is increasing, as there has been a significant improvement in 5-year survival from primary breast cancer [3, 4]. Once diagnosed with metastatic brain tumors from breast cancer, 4 out of 5 patients will die within one year [5].
Conventional chemotherapy fails in metastatic brain tumors due to the presence of the blood-brain barrier (BBB)/blood-tumor barrier (BTB), which prevents a sufficient concentration of chemotherapeutics from reaching lesions [5]. However, we have previously found that there is an increase in drug permeation in metastatic lesions when compared to normal brain [6, 7]. Many newer strategies to treat metastatic brain tumors include methods to improve chemotherapeutic penetration by overcoming the BBB/BTB, including nanoparticles, osmotic BBB disruption, BBB disruption using ultrasound, etc. [8,9,10,11]. All of these strategies have shown increased penetration through the BBB, but the effect of chemotherapy on tumor-adjacent healthy tissue has not been thoroughly investigated.
In this study, we hypothesize that the area around a tumor is more accessible to drug penetration due to increased vascular permeability and diffusion from the tumor into normal brain tissue, which may result in chemotherapy accumulation and effect in the brain adjacent to tumor (BAT). We tested the penetration of two different fluorescent permeability markers, Texas Red free dye (mol. wt. 625 Da) and Texas Red dextran 3 kDa (mol. wt. 3000 Da). We then determined the distribution of 14C-paclitaxel in normal brain, tumors, and BAT regions. Finally, we studied the effect of chemotherapy on BAT by staining for a marker of neuro-inflammation.
Chemicals & reagents
The fluorescent tracers Texas Red (625 Da) and Texas Red dextran (3 kDa) were purchased from Molecular Probes-Life Technologies (Carlsbad, CA). Dulbecco's modified eagle medium (DMEM) and fetal bovine serum (FBS) were purchased from Gibco-Life Technologies (Carlsbad, CA). Cell culture flasks were purchased from Falcon (Corning, NY). Radiolabeled (14C)-paclitaxel (101 mCi/mmol) was purchased from Moravek, Inc. (Brea, CA). Paclitaxel, docetaxel and eribulin were purchased from Selleckchem Chemicals (Houston, TX). Cresyl violet acetate (0.1%) and Cremophor EL were purchased from Sigma-Aldrich (St. Louis, MO). Anti-GFAP antibody (ab4674) was purchased from Abcam (Cambridge, MA). All other chemicals and reagents used were of analytical grade and were used as supplied.
Human MDA-MB-231Br metastatic breast cancer cells were kindly provided by Dr. Patricia S. Steeg (Center for Cancer Research, National Cancer Institute, Bethesda, MD). The human MDA-MB-231Br metastatic breast cancer cell line was created from the commercially available MDA-MB-231 cell line by Dr. Patricia Steeg's lab through repeated cycles of intra-cardiac injection and harvesting from brain metastases in mice [6, 7]. The cells were cultured in DMEM supplemented with 10% FBS. MDA-MB-231Br cell lines were transfected to stably express enhanced green fluorescent protein (eGFP). All cells used in experimental conditions came from passages 1–10 and were maintained at 37 °C with 5% CO2. For all cell preparations for intracardiac injection, cells were harvested at 70% confluency.
Experimental brain metastases model
All animal handling and procedures were approved by the Institutional Animal Care and Use Committee protocol (WVU #13–1207), and all work was conducted following the 1996 NIH Guide for the Care and Use of Laboratory Animals. Human ethics approval and informed consent are not applicable because no human subjects were involved in this study. Female athymic nu/nu mice (24–30 g) were purchased from Charles River Laboratories (Wilmington, MA) and were used for the experimental metastases model in this study. Mice were 6 to 8 weeks of age at the initiation of the brain metastases models and were housed in a barrier facility with chow and water available ad libitum before and after inoculation of tumor cells. For inoculation of MDA-MB-231Br cells, mice were anesthetized under 2% isoflurane and injected with 175,000 cells in the left cardiac ventricle using a sterile 27-gauge tuberculin syringe with the aid of a stereotaxic device (Stoelting, Wood Dale, IL), as previously reported by Adkins et al. [6]. Injection accuracy was evaluated by a pulsatory flash of bright-red blood into the syringe upon slight retraction of the plunger prior to injection. After intra-cardiac injection, mice were placed in a warmed (37 °C) sterile cage and vitals were monitored until fully recovered. Metastases were allowed to develop until neurologic symptoms such as seizures, labored breathing, hunched posture and anorexia appeared (~ 28 days for MDA-MB-231Br). Animals were then anesthetized with ketamine/xylazine (100 mg/kg and 8 mg/kg respectively) prior to injection of Texas Red 625 Da (6 mg/kg in saline), Texas Red dextran 3 kDa (6 mg/kg in saline) or 14C-paclitaxel (10 μCi/animal, 10 mg/kg in Taxol formulation, Moravek) via IV bolus dose (femoral vein). Texas Red 625 Da (n = 6) and Texas Red dextran 3 kDa (n = 6) were allowed to circulate for 10 min prior to euthanasia by decapitation, and 14C-paclitaxel (n = 10) was allowed to circulate for 8 h before sacrifice by decapitation. The circulation time endpoints for Texas Red and 14C-paclitaxel were based on previous studies [7]. Brains were rapidly removed (less than 60 s), flash-frozen in isopentane (− 65 °C), and stored at − 20 °C.
Tissue processing and analysis
Brain slices (20 μm) were acquired with a cryotome (Leica CM3050S; Leica Microsystems, Wetzlar, Germany) and transferred to charged microscope slides. Fluorescent images of brain slices were acquired using a stereomicroscope (Olympus MVX10; Olympus, Center Valley, PA) equipped with a 0.5 NA 2X objective and a monochromatic cooled CCD scientific camera (Retiga 4000R, QImaging, Surrey, BC, Canada). Texas Red fluorescence was imaged using a DsRed sputter filter (excitation/band λ 545/25 nm, emission/band λ 605/70 nm and dichromatic mirror at λ 565 nm) (Chroma Technologies, Bellows Falls, VT) and enhanced green fluorescent protein (expressed in MDA-MB-231Br) using an ET-GFP sputter filter (excitation/band λ 470/40 nm, emission/band λ 525/50 nm and dichromatic mirror at λ 495 nm) (Chroma Technologies, Bellows Falls, VT). Fluorescent image capture and analysis software (SlideBook 5.0; Intelligent Imaging Innovations Inc., Denver, CO) was used to capture and quantitate images. A binary mask methodology was used to analyze brain slices based upon the eGFP fluorescence from MDA-MB-231Br cells: tumor was defined by the presence of eGFP fluorescence on a voxel-by-voxel basis, with eGFP fluorescence roughly > 3-fold above background considered brain tumor. Once the images were acquired, circumferential fluorescent analysis was performed using software analysis (SlideBook 5.0; Intelligent Imaging Innovations Inc., Denver, CO), where 8-μm thick regions of interest (ROI) were drawn 300 μm beyond and within the tumor (Fig. 1a and b). Texas Red permeability fold-changes were determined as the Texas Red sum intensity (SI) per unit area of metastases relative to that of contralateral normal brain regions. The transfer coefficient (Kin) of the Texas Red tracers was determined in tumor, BAT and normal brain by the multiple uptake time approach after analyzing the blood and tumor concentrations of the Texas Red tracers, as previously described by Mittapalli et al. [12]
Circumferential Fluorescent Analysis by Quantitative Fluorescence Microscopy. Fluorescent image of eGFP transfected MDA-MB-231Br metastasis in brain with circumferential 8 μm thick regions of interest (ROI) drawn to 300 μm beyond the metastasis margin (a and b). To distinguish between BAT and tumor regions, the inner 300 μm from the metastasis margin were used to create 8 μm thick circumferential ROIs (c and d)
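The circumferential analysis above lends itself to a simple computational sketch. The outline below is illustrative only: it uses synthetic images, a hypothetical 8-μm pixel size, and approximates "normal brain" as all tissue beyond 300 μm rather than a contralateral region, none of which are taken from the study. The tumor mask is thresholded from eGFP at roughly 3-fold above background, and concentric 8-μm rings are built by repeated binary dilation.

```python
# Illustrative sketch of binary-mask / circumferential-ROI analysis (synthetic data).
import numpy as np
from scipy.ndimage import binary_dilation

um_per_px = 8                                    # assume 8-um pixels, so one dilation = one ring
egfp = np.zeros((200, 200)); egfp[90:110, 90:110] = 300.0        # synthetic tumor (eGFP) signal
texas_red = np.full((200, 200), 10.0); texas_red[80:120, 80:120] = 40.0   # synthetic tracer signal

background = np.median(egfp)
tumor_mask = egfp > 3 * max(background, 1.0)     # voxel-wise tumor definition (~3x background)

rings, inner = [], tumor_mask
for _ in range(300 // um_per_px):                # 8-um rings out to 300 um beyond the margin
    outer = binary_dilation(inner)
    rings.append(outer & ~inner)
    inner = outer

normal_mask = ~inner                             # simplification: 'normal brain' = beyond 300 um
normal_si_per_area = texas_red[normal_mask].mean()
fold_change = [texas_red[r].mean() / normal_si_per_area for r in rings]
print([round(f, 2) for f in fold_change[:5]])    # fold-change of the first few rings
```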
The unidirectional blood-to-brain, blood-to-tumor and blood-to-BAT transfer constant Kin was determined for the fluorescent tracers using a single-time uptake approach [13,14,15]. A single-time uptake method was used to calculate Kin because of the heterogeneity of the metastatic tumors. Kin was calculated using the following equation [12, 15]:
$$ {\mathrm{K}}_{in}=\frac{C_{br}(T)}{\int_0^T{C}_{bl}(t)\, dt} $$
where Cbr is the amount of compound in brain/metastatic tumor/BAT per unit mass of tissue at time T, and Cbl is the blood concentration of the tracer.
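For illustration, a small numeric sketch of this single-time uptake calculation follows. The sampling times, blood concentrations and brain amount are made-up values, not data from this study, and the blood exposure integral is approximated by the trapezoidal rule.

```python
# Illustrative Kin calculation with hypothetical values.
import numpy as np

t = np.array([0.0, 60.0, 180.0, 360.0, 600.0])      # s; sampling times up to T = 10 min
c_blood = np.array([0.0, 4.2, 3.1, 2.0, 1.4])        # hypothetical blood tracer conc. (ug/mL)
c_brain_at_T = 9.5e-3                                 # hypothetical tissue amount at T (ug/g)

# trapezoidal approximation of the denominator, the blood exposure integral from 0 to T
auc_blood = np.sum(0.5 * (c_blood[1:] + c_blood[:-1]) * np.diff(t))   # ug*s/mL
k_in = c_brain_at_T / auc_blood                                        # mL/s/g
print(f"Kin ~ {k_in:.2e} mL/s/g")
```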
For 14C-paclitaxel permeation studies, 20-μm-thick brain slices were exposed for 20 days to phosphor screens along with tissue-calibrated standards for quantitative autoradiographic analysis. The phosphor screens were developed using a GE Typhoon FLA 7000, and images were processed using MCID software (Imaging Research) and Adobe Photoshop to obtain color-coded drug concentrations (ng/g or μg/g) in regions of interest.
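The calibration step can be sketched as follows. The standard concentrations, screen intensities and the simple linear fit below are illustrative assumptions only (real calibration curves may be nonlinear), not the MCID procedure itself.

```python
# Sketch of converting phosphor-screen intensities to ng/g via calibrated standards
# (all numbers made up for illustration).
import numpy as np

std_conc = np.array([0.0, 1.0, 10.0, 100.0, 500.0])              # ng/g, calibrated standards
std_intensity = np.array([102.0, 110.0, 190.0, 1010.0, 4600.0])  # measured screen counts

slope, intercept = np.polyfit(std_intensity, std_conc, 1)   # simple linear calibration (assumption)
roi_intensity = np.array([105.0, 180.0, 900.0])              # hypothetical ROI intensities
roi_conc = slope * roi_intensity + intercept                 # estimated drug concentration (ng/g)
print(np.round(roi_conc, 1))
```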
Effect of drugs on BAT
Female athymic nu/nu mice were inoculated with human MDA-MB-231-Br-Luc cells and allowed to develop metastases. On day 21, the presence of metastases was confirmed using an IVIS bioluminescent imaging system, and animals were randomly divided into four treatment groups (n = 10/group): Vehicle (saline, n = 10), Docetaxel (10 mg/kg I.V., once a week, n = 10), Eribulin (1.5 mg/kg I.P., twice every week, n = 10) and Paclitaxel (10 mg/kg I.V., once a week, n = 10). Docetaxel and eribulin were dissolved in a vehicle composed of 5% Tween 80 and 5% ethanol in saline, whereas paclitaxel was dissolved in a 1:1 blend of Cremophor EL and ethanol, which was then diluted with normal saline (nine parts of saline to one part of blend) for administration. The treatment regimen was continued until mice showed neurological symptoms, and then the mice were sacrificed and the brains were harvested. The brains were sectioned and stained for glial fibrillary acidic protein (GFAP) to assess the presence of activated astrocytes in the BAT region.
The unidirectional blood-to-brain, blood-to-tumor and blood-to-BAT transfer constant Kin differences were compared by one-way ANOVA with multiple comparisons (GraphPad® Prism 6.0, San Diego, CA) and were considered statistically significant at p < 0.05. MCID software (Imaging Research Inc., UK) was used to quantify permeation of 14C-Paclitaxel in brain metastases, BAT and normal brain.
BAT permeability
Regional barrier integrity was evaluated using permeability tracers, Texas Red 625 Da and Texas Red dextran (3 kDa), which fall within the upper-limit molecular weight of most conventional and non-biological chemotherapeutic drugs. The margins of metastases were demarcated based on eGFP fluorescence around cancer cell clusters that were confined within 100 μm of each other, as previously described (8). Once the tumor margin was defined for each metastasis, a series of consecutive circumferential masks (8 μm wide) extending 300 μm beyond the original metastasis margin were generated automatically using custom written SlideBook 5.0 software scripts (Fig. 1a and b). The additional 200 μm region was drawn to also allow for analysis of brain distant to tumor. Additional circumferential masks (8 μm wide) that extend 300 μm internally from the metastasis margin were created using the software scripts (Fig. 1c and d).
Texas Red 625 Da and Texas Red dextran 3 kDa permeation were plotted relative to the distance from the tumor edge for different metastases exhibiting different magnitudes of mean permeability increase (Fig. 2a). Analysis of Texas Red dextran 3 kDa permeation within the BAT region 100 μm beyond the tumor edge for each metastasis demonstrated a mean permeability increase ranging from 1.0- to 1.8-fold compared to normal brain (Fig. 2b). The mean permeability of Texas Red 625 Da within the BAT region increased 1.0- to 2.5-fold compared to normal brain.
Circumferential fluorescent analysis of Texas Red 625 Da and Texas Red dextran (3 kDa) in tumor and BAT regions in metastases (a). Analysis of TR permeation within 100 μm beyond the tumor edge (b). Fold increase in TR 625 Da permeability: 1.8–3.8; fold increase in TRD 3 kDa permeability: 1–2.5
We then calculated Kin for tumor, normal brain, and BAT, and found a significant increase in Kin in BAT for both Texas Red free dye and Texas Red dextran 3 kDa when compared with normal brain (Fig. 3a and b). The Kin value for Texas Red 625 Da was 1.2 ± 0.16 × 10⁻⁵ mL/s/g in normal brain, 11.3 ± 1.9 × 10⁻⁵ mL/s/g in tumor, and 4.32 ± 0.2 × 10⁻⁵ mL/s/g in BAT. The Kin values for Texas Red dextran 3 kDa were 0.4 ± 0.14 × 10⁻⁵ mL/s/g, 2 ± 0.3 × 10⁻⁵ mL/s/g and 1.6 ± 1.4 × 10⁻⁵ mL/s/g for normal brain, tumor and BAT respectively.
Blood-to-brain transfer coefficients (Kin) for Texas Red (625 Da) in normal brain (Control), BAT and Tumor regions (a). Blood-to-brain transfer coefficients (Kin) for Texas Red dextran (3 kDa) in normal brain (Control), BAT and Tumor regions (b). Kin was determined by a single-time uptake approach. **, P < 0.01; ***, P < 0.001, respectively. All data represented here are mean ± SEM; n = 6 for all data points
Distribution of paclitaxel in normal brain, BAT and tumor
After analyzing Texas Red tracer permeability and transfer coefficients in the BAT, we determined the distribution of 14C-paclitaxel using autoradiography. The tumor was identified by cresyl violet stain (Fig. 4a), and the corresponding overlaid autoradiogram (Fig. 4b) was used to analyze the concentrations of paclitaxel in 100 × 100 μm squares (50 × 50 μm squares in BAT), as shown in Fig. 4a and b. We found an increase in the concentration of 14C-paclitaxel in BAT regions, and this increase was heterogeneous, similar to what is seen within the metastases. The concentration of 14C-paclitaxel in BAT (0–50 μm) was 86.7 ± 31 ng/g and in BAT (50–100 μm) 35.4 ± 11 ng/g (Fig. 4c), whereas the concentrations of 14C-paclitaxel beyond 100 μm of tumor and in normal brain were consistently found to be 1 ng/g. The concentration of 14C-paclitaxel in the tumor was 529 ± 223 ng/g, consistent with our previous studies [7].
Representative image of 231Br brain metastases (a) and corresponding 14C-Paclitaxel accumulation (b) in metastases 8 h after intravenous administration of radiolabeled paclitaxel. Paclitaxel concentrations from 100 μm squares as shown in image A and B were determined (1 = 1 ng/g, 2 = 1 ng/g, 3 = 10.5 ng/g, 4 = 293 ng/g, 5 = 261 ng/g). (c) Analysis of 14C-Paclitaxel concentration in tumor regions (− 300 μm to 0) and normal brain regions (0 to 300 μm). Data are mean ± SEM; n = 15 for all data points
Chemotherapeutic drugs induce astrocyte activation in BAT
After studying the permeability of the tracers and 14C-paclitaxel in BAT, we sought to study the effect of chemotherapeutic drugs on BAT. For this study, we treated mice with various chemotherapeutic drugs after confirmation of metastases, as mentioned above. To visualize activated astrocytes, we stained for glial fibrillary acidic protein (GFAP), which is over-expressed when astrocytes are activated [16]. We observed GFAP over-expression in BAT in all groups treated with chemotherapeutic drugs, with an increase in expression of GFAP in BAT (Fig. 5b-d). However, GFAP expression in BAT in the saline-treated group was not noticeable (Fig. 5a).
Fluorescent images showing nuclei (DAPI) in blue and activated astrocytes (GFAP) in green after treatment with a. Saline (vehicle), b. Eribulin (1.5 mg/kg I.P.), c. Docetaxel (10 mg/kg I.V.), d. Paclitaxel (10 mg/kg I.V.). GFAP expression in BAT regions in the chemotherapy-treated groups appears higher than in the vehicle group. Scale bar = 50 μm
Many studies have examined the permeability and effect of chemotherapy in brain metastases [7], but surprisingly, few have investigated those same effects in BAT. With the increase in strategies to overcome the BBB and BTB to treat metastases [1, 9, 10], it is important to study the permeability in BAT and the effect of chemotherapy in metastatic tumors. In this study, we found that the permeability of tracers and 14C-paclitaxel increased in BAT when compared to normal brain regions distant from the tumor. We also found that administration of chemotherapeutic drugs induced activation of astrocytes in these adjacent regions.
In this work, we studied the permeability of two tracers, Texas Red 625 Da and Texas Red dextran 3 kDa, using quantitative fluorescence microscopy. The methodology was developed based on a previous study by Mittapalli et al. [12], where all fluorescent images were captured using the same microscope settings to maintain uniformity in fluorescence emission [17]. Permeation of Texas Red tracers in brain metastases was previously characterized by Adkins et al. [18], and we found a similar fold-increase in the tumor core. Unidirectional BBB/BTB transfer constants Kin for both dyes were calculated using an established multiple-time uptake approach [13]. The Kin values obtained in these studies for normal brain and tumor were consistent with our previously published data [12]. The increased Kin values in BAT compared to normal brain clearly suggest that permeability in the BAT region was increased.
Once we had confirmed the increase in permeability of the tracers, we studied the distribution of a chemotherapeutic agent, 14C-paclitaxel, in BAT. We used quantitative autoradiography (QAR) to determine the distribution of 14C-paclitaxel in BAT, normal brain, and within the tumor [19, 20]. We found that there is an increase in accumulation of 14C-paclitaxel in the BAT region and that this increase is heterogeneous, similar to what we have found in brain lesions previously [7]. The increase in permeation across the BTB can be accounted for by angiogenesis in the tumor [21,22,23], and the heterogeneous permeability within the lesion is due to the dynamics of the angiogenic process, as reported in previous studies [24]. Also, the vascular endothelial growth factor (VEGF) secreted during tumor angiogenesis disrupts the tight junctions of the BBB, which may lead to increased vascular permeability in the BAT [25, 26].
The most common transport mechanism for drugs across the BBB is passive diffusion [27]. For passive diffusion across the BBB, drugs that are lipid soluble, of low molecular weight (< 400 Da) and that form ≤ 7 hydrogen bonds are better candidates [28]. Diffusion through a lipid membrane like the BBB depends on the molecular volume of the solute, which in turn depends on its molecular weight [29, 30]. BBB permeability decreases 100-fold as the solute's molecular weight increases from 300 Da to 450 Da [31]. In addition to solute-related limitations, active efflux transporters like P-glycoprotein (P-gp) and other members of the ABC (ATP-binding cassette) family of transporters present at the BBB play a significant role in effluxing chemotherapeutic agents from the brain to the blood [32, 33]. However, in metastatic lesions the BBB is disrupted (BTB), which results in an increase in penetration of chemotherapeutic agents [34]. The higher concentration of chemotherapeutic agents in the tumor creates a concentration gradient with the surrounding normal brain, allowing the chemotherapeutic agent to diffuse into normal brain [35]. Other studies have observed increased blood flow in brain metastases when compared to normal brain. Regarding permeability, the blood-to-tissue transfer constant (Ki) for 14C-α-aminoisobutyric acid (AIB) was increased in both tumor and BAT when compared to normal brain, suggesting irregular neovascularization with increased permeability in the brain metastases [36,37,38].
Finally, once we confirmed the increased permeation of tracers and the increased distribution of 14C-paclitaxel in BAT, we studied the effect of chemotherapy on BAT. After treating with various chemotherapeutic agents, we stained for GFAP to determine whether there was any inflammatory effect of chemotherapeutic drugs in the CNS. GFAP is expressed in astrocytes in the brain [39], and when there is injury, inflammation or neurodegeneration in the central nervous system (CNS), the common reaction of astrocytes is hypertrophy, referred to as reactive astrocytosis or activated astrocytes [40,41,42]. This hypertrophy increases the expression of GFAP in astrocytes as well as the binding affinity to the GFAP antibody [43]. Expression of GFAP is altered by many factors such as brain injury and disease [16]. Many earlier studies reported increased GFAP expression in various diseases such as Alzheimer's, amyotrophic lateral sclerosis (ALS), Parkinson's, Pick's, Huntington's and autism [44,45,46,47,48]. In autism, an increase in GFAP autoantibodies has also been found in plasma [49, 50]. In acute CNS injuries like brain infarction and traumatic brain injury, there was an increase in levels of GFAP in CSF [51, 52]. On the other hand, a decrease in GFAP expression was associated with depression and growth of gliomas [53, 54]. We found that treating with chemotherapy increased the expression of GFAP protein in BAT (Fig. 5), confirming the presence of activated astrocytes after pharmacological chemotherapy regimens.
Recent studies indicate that chemotherapy may induce numerous deleterious effects within the CNS, such as altered cognitive function, memory and attention [55]. Fading of cognitive function after chronic chemotherapy administration in patients with cancer has been termed "chemo-fog" or "chemo-brain" [56]. With improvements in survival for women with breast cancer over the past decade, there is also an increased number of survivors expressing concerns with memory and concentration post treatment [57,58,59]. Recent studies suggest that the mechanism for chemo-fog is secondary to the toxic effects imposed by sub-lethal concentrations of chemotherapy on the normal cellular population of the CNS [60]. Many studies suggest that chemotherapeutic agents not only induce oxidative stress and apoptosis in the CNS but also inhibit proliferation and differentiation of the cellular population of the CNS, leading to abnormal expression of neurotrophic proteins in the brain [61,62,63,64].
In summary, we observed that permeation of fluorescent tracers was increased in the BAT compared to normal brain, which was accompanied by increased distribution of 14C-paclitaxel. This increase in permeation resulted in increased uptake of chemotherapeutic agents and increased expression of GFAP in regions adjacent to tumor, indicating reactive astrocytosis. As many new clinical strategies to treat brain metastases tend to increase drug permeation, it is also important to study potential damage to the normal brain.
BAT:
Brain adjacent to tumor
BDT:
Brain distant from tumor
BLI:
Bio-luminescence imaging
BTB:
Blood-tumor barrier
CNS:
Central nervous system
EPR:
Enhanced permeation and retention
GFAP:
Glial fibrillary acidic protein
P-gp:
P-Glycoprotein
QAR:
Quantitative autoradiography
Platta CS, Khuntia D, Mehta MP, Suh JH. Current treatment strategies for brain metastasis and complications from therapeutic techniques: a review of current literature. Am J Clin Oncol. 2010;33(4):398–407.
Rivkin M, Kanoff RB. Metastatic brain tumors: current therapeutic options and historical perspective. The Journal of the American Osteopathic Association. 2013;113(5):418–23.
Leone JP, Leone BA. Breast cancer brain metastases: the last frontier. Experimental Hematology & Oncology. 2015;4(1):33.
Frisk G, Svensson T, Bäcklund LM, Lidbrink E, Blomqvist P, Smedby KE. Incidence and time trends of brain metastases admissions among breast cancer patients in Sweden. Br J Cancer. 2012;106(11):1850–3.
Adkins CE, Nounou MI, Mittapalli RK, Terrell-Hall TB, Mohammad AS, Jagannathan R, Lockman PR. A novel preclinical method to quantitatively evaluate early-stage metastatic events at the murine blood-brain barrier. Cancer prevention research (Philadelphia Pa). 2015;8(1):68–76.
Adkins CE, Mohammad AS, Terrell-Hall TB, Dolan EL, Shah N, Sechrest E, Griffith J, Lockman PR. Characterization of passive permeability at the blood–tumor barrier in five preclinical models of brain metastases of breast cancer. Clinical & Experimental Metastasis. 2016;33(4):373–83.
Lockman PR, Mittapalli RK, Taskar KS, Rudraraju V, Gril B, Bohn KA, Adkins CE, Roberts A, Thorsheim HR, Gaasch JA, et al. Heterogeneous blood-tumor barrier permeability determines drug efficacy in experimental brain metastases of breast cancer. Clinical cancer research : an official journal of the American Association for Cancer Research. 2010;16(23):5664–78.
Adkins CE, Nounou MI, Hye T, Mohammad AS, Terrell-Hall T, Mohan NK, Eldon MA, Hoch U, Lockman PR. NKTR-102 efficacy versus irinotecan in a mouse model of brain metastases of breast cancer. BMC Cancer. 2015;15:685.
Guillaume DJ, Doolittle ND, Gahramanov S, Hedrick NA, Delashaw JB, Neuwelt EA. Intra-arterial chemotherapy with osmotic blood-brain barrier disruption for aggressive Oligodendroglial tumors: results of a phase I study. Neurosurgery. 2010;66(1):48–58.
Konofagou EE, Tung Y-S, Choi J, Deffieux T, Baseri B, Vlachos F. Ultrasound-induced blood-brain barrier opening. Curr Pharm Biotechnol. 2012;13(7):1332–45.
El-Habashy SE, Nazief AM, Adkins CE, Wen MM, El-Kamel AH, Hamdan AM, Hanafy AS, Terrell TO, Mohammad AS, Lockman PR, et al. Novel treatment strategies for brain tumors and metastases. Pharmaceutical patent analyst. 2014;3(3):279–96.
We would like to acknowledge the National Cancer Institute and the National Institute of General Medical Sciences of the National Institutes of Health for funding this project.
The design of the study, experimental work, collection, analysis, and interpretation of data, and writing of the manuscript were funded by a grant from the National Cancer Institute (R01CA166067-01A1/R01CA166067-05). Interpretation of data and writing of the manuscript were also funded by the National Institute of General Medical Sciences of the National Institutes of Health (CTSI Award: U54GM104942). Microscopy imaging and image analysis were funded by the National Institute of General Medical Sciences of the National Institutes of Health (CoBRE P30 GM103488). The funding bodies had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.
The data generated and analyzed during this study are available from the corresponding author on reasonable request.
Department of Pharmaceutical Sciences, West Virginia University Health Sciences Center, School of Pharmacy, 1 Medical Center Drive, Morgantown, West Virginia, 26506-9050, USA
Afroz S. Mohammad, Chris E. Adkins, Neal Shah, Rawaa Aljammal, Jessica I. G. Griffith, Rachel M. Tallman, Katherine L. Jarrell & Paul R. Lockman
ASM: Conception and design, experimental work, analysis and interpretation of data, writing, review and approval of manuscript. CEA: Conception and design, experimental work, analysis and interpretation of data, writing, review and approval of manuscript. NS: Experimental work, analysis and interpretation of data, review and approval of manuscript. RA: Experimental work, analysis and interpretation of data, review and approval of manuscript. JIGG: Analysis of data, writing, review and approval of manuscript. RMT: Experimental work, review and approval of manuscript. KLJ: Experimental work, review and approval of manuscript. PRL: Conception and design, analysis and interpretation of data, writing, review and approval of manuscript. Each author has read and approved the final version of the manuscript.
Correspondence to Paul R. Lockman.
All animal handling and procedures were approved by the Institutional Animal Care and Use Committee at West Virginia University, Morgantown, West Virginia, USA (protocol number 13–1207).
Mohammad, A.S., Adkins, C.E., Shah, N. et al. Permeability changes and effect of chemotherapy in brain adjacent to tumor in an experimental model of metastatic brain tumor from breast cancer. BMC Cancer 18, 1225 (2018). https://doi.org/10.1186/s12885-018-5115-x
Fluorescent microscopy
Astrocytosis
Can every proof by contradiction also be shown without contradiction?
Are there some proofs that can only be shown by contradiction or can everything that can be shown by contradiction also be shown without contradiction? What are the advantages/disadvantages of proving by contradiction?
As an aside, how is proving by contradiction viewed in general by 'advanced' mathematicians? Is it a bit of an 'easy way out' when it comes to trying to show something, or is it perfectly fine? I ask because one of our tutors said something to that effect and said that he isn't fond of proof by contradiction.
logic proof-writing propositional-calculus proof-theory
sonicboom
$\begingroup$ Let us assume every proof by contradiction can also be shown without contradiction... $\endgroup$
– GeoffDS
$\begingroup$ Since this topic came up in some of the answers and comments, Andrej Bauer's blog post Proof of negation and proof by contradiction seems relevant. $\endgroup$
$\begingroup$ ... and Gowers's blog post: When is proof by contradiction necessary?. $\endgroup$
$\begingroup$ I should know this, but I am not sure. You are basically asking whether intuitionistic logic is equivalent to classical logic? I am pretty sure it is not. $\endgroup$
– Panayiotis Karabassis
$\begingroup$ It would be mathematically ironic if a question about the weakness of proof by contradiction would be closed as not constructive. [Insert a rimshot sound here] $\endgroup$
– Asaf Karagila ♦
To determine what can and cannot be proved by contradiction, we have to formalize a notion of proof. As a piece of notation, we let $\bot$ represent an identically false proposition. Then $\lnot A$, the negation of $A$, is equivalent to $A \to \bot$, and we take the latter to be the definition of the former in terms of $\bot$.
There are two key logical principles that express different parts of what we call "proof by contradiction":
The principle of explosion: for any statement $A$, we can take "$\bot$ implies $A$" as an axiom. This is also called ex falso quodlibet.
The law of the excluded middle: for any statement $A$, we can take "$A$ or $\lnot A$" as an axiom.
In proof theory, there are three well known systems:
Minimal logic has neither of the two principles above, but it has basic proof rules for manipulating logical connectives (other than negation) and quantifiers. This system corresponds most closely to "direct proof", because it does not let us leverage a negation for any purpose.
Intuitionistic logic includes minimal logic and the principle of explosion
Classical logic includes intuitionistic logic and the law of the excluded middle
It is known that there are statements that are provable in intuitionistic logic but not in minimal logic, and there are statements that are provable in classical logic that are not provable in intuitionistic logic. In this sense, the principle of explosion allows us to prove things that would not be provable without it, and the law of the excluded middle allows us to prove things we could not prove even with the principle of explosion. So there are statements that are provable by contradiction that are not provable directly.
The scheme "If $A$ implies a contradiction, then $\lnot A$ must hold" is true even in intuitionistic logic, because $\lnot A$ is just an abbreviation for $A \to \bot$, and so that scheme just says "if $A \to \bot$ then $A \to \bot$". But in intuitionistic logic, if we prove $\lnot A \to \bot$, this only shows that $\lnot \lnot A$ holds. The extra strength in classical logic is that the law of the excluded middle shows that $\lnot \lnot A$ implies $A$, which means that in classical logic if we can prove $\lnot A$ implies a contradiction then we know that $A$ holds. In other words: even in intuitionistic logic, if a statement implies a contradiction then the negation of the statement is true, but in classical logic we also have that if the negation of a statement implies a contradiction then the original statement is true, and the latter is not provable in intuitionistic logic, and in particular is not provable directly.
Carl Mummert
$\begingroup$ +1 Great explanation. Can you emphasize the affirmative answer?: So there are statements that are provable by contradiction that are not provable directly. $\endgroup$
– ypercubeᵀᴹ
$\begingroup$ What is "identically false proposition"? $\endgroup$
– SasQ
$\begingroup$ @SasQ: saying that a statement is "true" or "false" is a claim about some specific model. Saying that a statement is identically false indicates that the statement is disprovable, so it is false in all models. $\endgroup$
– Carl Mummert
$\begingroup$ Note: "identically false/true" is often called "valid/invalid" instead. $\endgroup$
– Noldorin
$\begingroup$ An excellent answer. It's great to see a positive characterisation of intuitionistic and classical logic, rather than the usual meaningless 'intuitionistic logic is classical logic without the law of excluded middle'. $\endgroup$
– Miles Rout
If a statements says "not $X$" then it is perfectly fine to assume $X$, arrive at a contradiction and conclude "not $X$". However, in many occasions a proof by contradiction is presented while it is really not used (let alone necessary). The reasoning then goes as follows:
Proof of $X$: Suppose not $X$. Then ... complete proof of $X$ follows here... This is a contradiction and therefore $X$.
A famous example is Euclid's proof of the infinitude of primes. It is often stated as follows (not by Euclid by the way):
Suppose there is only a finite number of primes. Then ... construction of new prime follows ... This is a contradiction so there are infinitely many primes.
Without the contradiction part, you'd be left with a perfectly fine argument. Namely, given a finite set of primes, a new prime can be constructed.
This kind of presentation is really something that you should learn to avoid. Once you're aware of this pattern, it's amazing how often you'll encounter it, including here on math.se.
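To see the constructive reading at work, here is a small Python sketch (using plain trial division; the function name is arbitrary): given any finite list of primes, it returns a prime that is not in the list.

def prime_not_in(primes):
    """Given a finite list of primes, return a prime not in the list."""
    # The smallest factor > 1 of (product of the list) + 1 is prime,
    # and it cannot divide the product, so it is not already in the list.
    n = 1
    for p in primes:
        n *= p
    n += 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d  # smallest nontrivial factor, necessarily prime
        d += 1
    return n  # no nontrivial factor up to its square root, so n is prime

print(prime_not_in([2, 3, 5, 7]))  # 211 (= 2*3*5*7 + 1, itself prime)
print(prime_not_in([2, 7]))        # 3 (2*7 + 1 = 15, smallest factor 3)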
WimC
$\begingroup$ Another example of this is the common presentation of the proof that there is no surjection from $\Bbb N$ to $(0,1)$. The usual presentation begins "suppose we have such a surjection…" and concludes "therefore it is not a surjection, and we have a contradiction". A simpler presentation simply lets $f$ be an arbitrary function $\Bbb N\to (0,1)$, follows the same argument, and concludes "…therefore, $f$ is not surjective". $\endgroup$
– MJD
$\begingroup$ Could you detail the part "construction of new prime follows" in your Euclid example please? I'd bet your version of the proof is either flawed, or uses contradiction. The key point being that "construction of new prime follows" normally relies on the assumption that only a finite number of primes exist. $\endgroup$
– Axel
$\begingroup$ @Axel: The construction is: given a (possibly empty) finite set of primes, we can construct a number larger than one which is not divisible by any prime in the set. Therefore a prime exists which is not in the set. Therefore the set of all primes must be larger than any finite set, i.e., infinite. $\endgroup$
– Dietrich Epp
$\begingroup$ @Axel: Where is the contradiction? The point illustrated in this answer is that the contradiction comes from saying "suppose that that the set of all primes is finite", but this is unnecessary for the proof. What I said is "suppose a set of primes is finite". No contradiction there. $\endgroup$
$\begingroup$ @DietrichEpp: You're simply not there yet. You have shown that any finite set of primes does not contain all primes. Thus the set of all primes cannot be a finite set. Fine. Now show that the set of all primes is infinite. The key part is still missing. $\endgroup$
It somewhat depends on whether you are an intuitionist or not (or both? or neither? Who knows without the law of excluded middle). According to the Wikipedia article, even intuitionists accept some versions of what one could call indirect proof, but reject most. In that sense, a direct proof would be preferable (and is often even a bit more elegant).
Theorem. There exist irrational numbers $a,b$ such that $a^b$ is rational.
Proof: Assume that $a,b\notin \mathbb Q$ always implies $a^b\notin \mathbb Q$. Then $u:=\sqrt 2^{\sqrt 2}\notin \mathbb Q$ and $u^{\sqrt 2}=\sqrt 2^{\sqrt 2\cdot\sqrt 2}=\sqrt 2^2=2\notin \mathbb Q$ - contradiction!
Indeed, an intuitionist would complain that we do not exhibit a pair $(a,b)$ with $a,b\notin \mathbb Q$ and $a^b\in \mathbb Q$. Instead, we only show that either $(\sqrt 2,\sqrt 2)$ or $(u,\sqrt 2)$ is such a pair. Converting the proof given above to a direct and constructive proof would in fact require you to actually prove one of the two possible options $u\in \mathbb Q$ or $u\notin\mathbb Q$.
Hagen von Eitzen
$\begingroup$ Of course you are correct the intuitionist would complain that the proof does not exhibit a specific pair. But an intuitionist would not agree that the proof produces two pairs such that either the first pair is an example or the second is an example. The intuitionistic reading of that claimed conclusion would say that, to prove the "or", we would have to prove which pair is the example, which is exactly what the proof does not do. The intuitionist would accept the proof shown here as showing "it is not the case that for all $a,b \not \in \mathbb{Q}$, $a^b \not \in \mathbb{Q}$". $\endgroup$
$\begingroup$ By Gelfond–Schneider $\sqrt2^{\sqrt2}$ is transcendental. Voilà! No more proof by contradiction ;-) $\endgroup$
– kahen
$\begingroup$ @kahen Well, the proof of Gelfond–Schneider cited in the Wikipedia article you referenced seems to contain a step which is proven by contradiction... (if Delta is not 0 ... conclude that Delta is 0) $\endgroup$
– mkl
See this post: Are proofs by contradiction weaker than other proofs?.
There are some wonderful answers related to your question - and addresses, directly, your "aside": See, in particular, what JDH writes.
One of the advantages of constructing direct proofs of propositions, when this is feasible, is that one can discover other useful propositions in the process. That is, direct proofs help clarify the necessary and sufficient conditions that make a theorem true, and provide a structure demonstrating how these conditions relate and how the chain of implications implies the conclusion.
Indirect proofs, on the other hand (aka "proofs by contradiction") only tell us that supposing a proposition to be otherwise leads to a contradiction at some point. But such a proof doesn't really provide the sort of insight that can be gained from direct proofs.
That is not to say that indirect proofs don't have their place (e.g., they come in handy when asked to prove propositions during a time-limited exam!). They often help "rule out" certain propositions on the basis that they contradict well established axioms or theorems. Also, indirect proofs are sometimes more intuitive than direct proofs. For example, proving that $\sqrt{2}$ is not rational using a proof by contradiction is clean, and intuitive.
Sometimes an indirect proof will emerge first, after which one can seek to proceed with trying to construct a direct proof to prove the same proposition. That is, providing an indirect proof of a proposition often motivates the construction of direct proofs.
I found this blog entry (Gowers's Weblog), When is a proof by contradiction necessary?, from which I'll quote an introductory remark:
It seems to be possible to classify theorems into three types: ones where it would be ridiculous to use contradiction, ones where there are equally sensible proofs using contradiction or not using contradiction, and ones where contradiction seems forced. But what is it that puts a theorem into one of these three categories?
The post follows immediately with a nice reply from Terence Tao.
amWhy
$\begingroup$ The definition of "irrational" is "not rational", so the statement "$\sqrt{2}$ is irrational" is inherently a negative one. As such there is no "direct" proof (unless one somehow comes up with a definition of "irrational" that does not invoke "rational"). $\endgroup$
– Zhen Lin
$\begingroup$ What if you prove that for $n,m\in\mathbb Z$ and $m\ne0$, the distance between $n/m$ and $\sqrt{2}$ is at least $1/(3m^2)$? Could that qualify as a "direct" proof of irrationality? $\endgroup$
– Michael Hardy
$\begingroup$ @Michael Hardy: in fact some constuctivists actually take that to be the definition of irrational, so that this is stronger to them than just "not rational". But the classical definition of "irrational" is just "not rational", and that is the perspective of my previous comment in this thread. $\endgroup$
$\begingroup$ @sonicboom mathoverflow.net/questions/32011/direct-proof-of-irrationality math.stackexchange.com/questions/20567/… $\endgroup$
$\begingroup$ @ZhenLin Don't you need something more specific than "not rational" to define "irrational"? Are imaginary numbers, matrixes, dogs, and mattresses rational? Are they irrational? $\endgroup$
– Peter Olson
A few points from my (limited) experience:
I love proof by contradiction and I have used it in graduate level classes and no one seemed to mind so long as the logic was infallible.
For me, it's much easier to think about a proposition in terms of "What if this wasn't true?". That is usually my first instinct, which makes proof by contradiction the natural first choice. For instance, if I were asked to prove something like "Prove that a non-singular matrix has a unique inverse", my first instinct would be "What if a non-singular matrix had 2 inverses?", and from then on the proof follows cleanly.
Sometimes, however, contradictions don't come cleanly, and a proof by simple logical deductions would probably take 5 lines whereas contradiction will take millions. I could point you to specific proofs but I'll have to do some digging. Further, if you look at every proof and try using proof by contradiction, another problem you will face is that sometimes you will state your intended contradiction but never use it. In other words, you end up giving a direct proof.
Another aspect about Proof by Contradiction (IMHO) is that you really must know all definitions and their equivalent statements fairly well to come up with a nice contradiction. Else, you will end up proving several lemmas on the way which looks clean in a direct proof but not so much in a Proof by Contradiction, but again, this might be a personal choice.
In summary, if you find it easier to think in terms of "What if not", then go ahead and use it, but make sure your proof skills using other strategies are just as good, because $\exists$ a nail that you cannot hit with the PbC hammer that you'll carry.
Inquest
What is a proof by contradiction? This is actually quite difficult to answer in a satisfactory way, but usually what people mean is something like this: given a statement $\phi$, a proof of $\phi$ by contradiction is a derivation of a contradiction from the assumption $\lnot \phi$. In order to analyse this, it is very important to distinguish between the statement $\phi$ and the statement $\lnot \lnot \phi$; the two statements are formally distinct (as obvious from the fact that their written forms are different!) even though they always have the same truth value in classical logic.
Let $\bot$ denote contradiction. When we show a contradiction assuming $\lnot \phi$, what we have is a conditional proof of $\bot$ from $\lnot \phi$. This can then be transformed into a proof of the statement $\lnot \phi \to \bot$, which is the long form of $\lnot \lnot \phi$ – in other words, we have a proof that "it is not the case that $\lnot \phi$". This, strictly speaking, is not a complete proof of $\phi$: we must still write down the last step deducing $\phi$ from $\lnot \lnot \phi$. This is the point of contention between constructivists and non-constructivists: in the constructive interpretation of logic, $\lnot \lnot \phi$ is not only formally distinct from $\phi$ but also semantically distinct; in particular, constructivists reject the principle that $\phi$ can be deduced from $\lnot \lnot \phi$ (though they may accept some limited instances of this rule).
There is one case where proof by contradiction is always acceptable to constructivists (or at least intuitionists): this is when the statement $\phi$ to be proven is itself of the form $\lnot \psi$. This is because it is a theorem of intuitionistic logic that $\lnot \lnot \lnot \psi$ holds if and only if $\lnot \psi$. On the other hand, it is also in principle possible to give a "direct" proof of $\lnot \psi$ in the following sense: we simply have to derive a contradiction by assuming $\psi$. Any proof of $\lnot \psi$ by contradiction can thus be transformed into a "direct" proof because one can always derive $\lnot \lnot \psi$ from $\psi$; so if we can obtain a contradiction by assuming $\lnot \lnot \psi$, we can certainly derive a contradiction by assuming $\psi$.
Ultimately, both of the above methods involve making a counterfactual assumption and deriving a contradiction. However, it is sometimes possible to "push" the negation inward and even eliminate it. For example, if $\phi$ is the statement "there exists an $x$ such that $\theta (x)$ holds", then $\lnot \phi$ can be deduced from the statement "$\theta (x)$ does not hold for any $x$". In particular, if $\theta (x)$ is itself a negative statement, say $\lnot \sigma (x)$, then $\lnot \phi$ can be deduced from the statement "$\sigma (x)$ holds for all $x$". Thus, proving "there does not exist an $x$ such that $\sigma (x)$ does not hold" by showing "$\sigma (x)$ holds for all $x$" might be considered a more "direct" proof than either of the two previously-mentioned approaches.
Can all proofs by contradiction be transformed into direct proofs? In some sense the answer has to be no: intuitionistic logic is known to be weaker than classical logic, i.e. there are statements that have proofs in classical logic but not in intuitionistic logic. The only difference between classical logic and intuitionistic logic is the principle that $\phi$ is deducible from $\lnot \lnot \phi$, so this (in some sense) implies that there are theorems that can only be proven by contradiction.
So what are the advantages of proof by contradiction? Well, it makes proofs easier. So much so that one algorithm for automatically proving theorems in propositional logic is based on it. But it also has its disadvantages: a proof by contradiction can be more confusing (because it has counterfactual assumptions floating around!), and in a precise technical sense it is less satisfactory because it generally cannot be (re)used in constructive contexts. But most mathematicians don't worry about the latter problem.
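The remark about automated proving can be illustrated with a toy refutation prover in Python (a sketch of resolution, one such refutation-based algorithm, not necessarily the specific one meant above): to prove a formula, add the clauses of its negation to the knowledge base and try to derive the empty clause.

from itertools import combinations

def resolvents(c1, c2):
    """All resolvents of two clauses, each a frozenset of literals like 'p' or '~p'."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def refutable(clauses):
    """True if the empty clause is derivable, i.e. the clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:          # empty clause: contradiction reached
                    return True
                new.add(r)
        if new <= clauses:         # nothing new can be derived
            return False
        clauses |= new

# To prove p from {p or q, not q}, refute the added negation ~p:
print(refutable([frozenset({"p", "q"}), frozenset({"~q"}), frozenset({"~p"})]))  # True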
Zhen Lin
As "Inquest"'s answer mentions, it's often easier to find a proof by contradiction than a direct proof. But after you do that, you can often make the proof simpler by rearranging it into a direct proof. It is not good to make a proof appear more complicated than it really is.
To see another disadvantage of some proofs by contradiction, consider this:
Proof: To prove $A$, assume not $A$. [insert 50 pages of argument here] We have reached a contradiction. Therefore $A$. End of proof
Now ask yourself: Which of the propositions proved in those 50 pages are erroneous and could be proved only because one relied on the false assumption that not $A$, and which are validly proved, and which are true but not validly proved because the assumption that not $A$ was relied on? It's not so easy to tell without a lot more work. And if you remember a proof of one of those propositions, you might just mistakenly think that it's been proved and is therefore known to be true. So it might be far better to limit the use of proof by contradiction to some portions of those 50 pages where no other method works.
Perhaps proofs of non-existence can be done only by contradiction. Here I might offer as an example the various proofs of the irrationality of $\sqrt{2}$, but for the fact that I've seen it asserted that if $m$, $n$ are integers, then $m/n$ differs from $\sqrt{2}$ by at least an amount that depends on $n$ --- I think it might have been $1/(3n^2)$. Here's another example: How would one prove the non-existence of a non-trivial (i.e. $>1$) common divisor of $n$ and $n+1$?
I've seen a book on logic asserting that a proof by contradiction of a non-existence assertion does not constitute an "indirect proof", since the assertion is inherently negative. I don't know how conventional that is.
Michael Hardy
$\begingroup$ Formally, a negative statement such as $\lnot \phi$ is an abbreviation for $\phi \to \bot$, i.e. it is the statement that $\phi$ leads to an absurdity; accordingly, to prove $\lnot \phi$, one must show that assuming $\phi$ leads to contradiction. This is completely orthodox logic. $\endgroup$
$\begingroup$ But when must a statement be written in the form $\lnot\varphi$ ? $\endgroup$
$\begingroup$ There isn't any "must" about it. If $\phi$ is a compound formula such as $\psi \lor \theta$ you can just push the $\lnot$ inward and get $(\lnot \psi) \land (\lnot \theta)$. But that just causes the number of negative statements to multiply... $\endgroup$
$\begingroup$ So when must it be written in a form that has negative statements? Does the answer depend on which formal language you write it in? $\endgroup$
$\begingroup$ Of course. And it depends on what things you take as primitive. (Is $=$ more primitive than $\ne$? Sometimes constructivists take an "apartness" relation $\mathrel{\#}$ to be primitive, in which case $=$ becomes the negation of $\mathrel{\#}$!) $\endgroup$
Another example of a proof by contradiction that provides no idea of a constructive proof is the strategy-stealing argument. For certain symmetric games, the second player cannot have a winning strategy: if he did, the first player could "pretend" to be the second player and steal his winning strategy, a contradiction.
An interesting example is the game Hex. It is easy to show that Hex cannot end in a tie, and the strategy-stealing argument does apply to it. Therefore, it is a first-player win. But for general $n \times n$ boards, the actual winning strategy is still not known. Thus, this is an example of something that has been proven using contradiction and not constructively (yet).
asmeurer
$\begingroup$ I would formulate it as: Let $S$ be any strategy for the second player. Then by symmetry the first player can apply $S$ too. Since not both players can win and both are applying $S$, $S$ is not a winning strategy. $\endgroup$
– WimC
There is nothing wrong with proof by contradiction. You can show that they work using a truth table. In the end, that's all that really matters, right?
As far as I know, you can't know for certain that something is not provable by a direct proof. However, a proof by contradiction might be an easier way to prove some things, like the irrationality of certain numbers. For example, I have never seen a direct proof of the irrationality of $\sqrt{2}$.
EDIT: As Carl Mummert said in his answer, the above part in italics is not true. There are propositions which are only provable by contradiction.
A proof by contradiction can also be formulated as a proof by contrapositive. If we know $Q$ is false and we can show $P\Rightarrow Q$, then we have proved that $P$ is false. Whether you view this as "proof without contradiction" or not is up to you. In any case, they are logically equivalent.
Espen Nielsen
$\begingroup$ As I explained in my answer, we do know that there are things that are not provable directly modulo the usual formalization of proofs. The usual proof that $\sqrt{2}$ is not rational is a direct proof when it is formalized in the usual way, although it is a direct proof of a negative statement, which can be deceptive at first. $\endgroup$
$\begingroup$ I found the link I had lost, which explains some of this: math.andrej.com/2010/03/29/… $\endgroup$
$\begingroup$ Thank you for pointing this out, and for the reference. Both your comment and the article were very informative and enlightening! $\endgroup$
– Espen Nielsen
First of all, this is not an answer to the title but to the aside question and is just an example of why you would prefer a constructive proof to a proof by contradiction. Consider the example below,
Prove that $x^2 = 1$ has a root.
Proof by contradiction: Assume that $x^2 = 1$ has no root. Let $f(x) = x^2 - 1 $ then $x^2 = 1$ has a root if and only if $f(x_0) = 0$ for some $x_0$. By assumption $x^2 =1$ has no root and thus, $f(x) \neq 0$ for every $x$. Note that $f$ is continuous and $f(0) = -1$ and $f(2) = 3$. Hence, by the intermediate value theorem, $\exists x_0$ such that $f(x_0) = 0$ which is a contradiction. Therefore, $x^2=1$ has a root.
Constructive proof: $x^2=1$ if and only if $x^2-1=0$, i.e. $(x-1)(x+1)=0$. Hence, for $x=\pm 1$ the equation is satisfied; namely, the roots are $-1$ and $1$.
The difference is not about the length of the proofs but the information you have. In constructive proof, you know what the roots are but not in the proof by contradiction. Of course, in proof by contradiction, you could have said "let $x_0 = 1$, then $x_0^2=1$ which is a contradiction since $1$ is a root." but then, it is not a clear distinction between the two types of proofs.
I do believe there are some proofs that are only demonstrable through contradiction, and I'm going to attempt to describe them logically:
Let X be a logical statement such that: X $\rightarrow$ y, where y is a known contradiction (such as 2+2=5 in the normal arithmetic structure). Without knowing anything else of X, $\neg$X implies nothing and nothing implies $\neg$X (and hence is not provable). But, of course, assuming X implies a contradiction, and thus, $\neg$X.
This form of statement X is isolated, in that it only relates to itself and the contradiction. I do believe they can be constructed though, for it seems they can be described.
With that said, in real math and logic, or in general real-world scenarios, I don't believe any statements of this form exist, except possibly ones that are constructed to meet this criterion and are otherwise meaningless. The infinitude of primes was eventually proved without contradiction, to my understanding; until math had been developed further, I think the statement "the number of primes is $\infty$" was basically an isolated logical statement at Euclid's time and for many years after, in that there were no other things known to imply it and it didn't imply anything else useful towards its proof.
rofls
My non-mathematical response.
A == B equals !(A != B)
You always end up with a binary decision, is or is not. And in any language is = !(is not).
But I guess it is too simple to be ok.
ctype.h
Bart Calixto
$\begingroup$ is = !(is not) does not hold in minimal and intuitionistic logic. See Carl's answer. $\endgroup$
Whether a proof is "by contradiction" really just depends on the statement you started with. If your initial statement is $P \rightarrow Q$, then showing the equivalent $\neg Q \rightarrow \neg P$ is "proof by contradiction". But in reality, the "direct" proof of $P \rightarrow Q$ is just a proof "by contradiction" of $\neg Q \rightarrow \neg P$. The only reason why we started with $P \rightarrow Q$ instead of $\neg Q \rightarrow \neg P$ is our intuition.
This is just my opinion, but also remember that sometimes, it is also very valuable to know what holds if $Q$ does not hold.
kutschkem
$\begingroup$ It really depends by what you accept as proof. Every logical system has certain logical axioms, nobody is forcing you to accept them. $\neg \neg A \leftrightarrow A$ is in fact not provable in intuitionistic logic. $\endgroup$
$\begingroup$ ok fine. while i don't know what intuitionistic logic is, i was assuming that the law of excluded middle would hold. $\endgroup$
– kutschkem
An interesting example of this is the entire study of Smooth Infinitesimal Analysis. It relies on not having the law of the excluded middle (i.e., no proofs by contradiction are accepted) in order to be valid. Thus, if everything provable by contradiction were also provable directly, there could not be smooth infinitesimal analysis! Look at Bell's book for more details, though the wiki gives a good example.
Chris Rackauckas
Following Carl Mummert in considering the three main systems of propositional logic, let's re-interpret the question once again as
Does there exist a proof by contradiction that is valid in Classical Logic, yet invalid in Minimal Logic (resp. Intuitionistic Logic)?
The systems of Minimal Logic, Intuitionistic Logic and Classical Logic are three systems of propositional logic of strictly increasing strength. (I will be using the textbook 'Foundations of logic and mathematics' by Nievergelt as a reference, especially Sections 1.1 and 4.1.) To begin to answer this question, we first need to formalise what 'proof by contradiction' is, as a logical principle. Let us look at two examples.
Take first the usual proof of the infinitude of primes: Suppose $p_1, \ldots, p_N$ is the list of all primes. Then the smallest prime factor of $p_1 \cdots p_N + 1$ is larger than $p_N$. Thus there are infinitely many primes. The underlying logical principle applied at the word 'thus' is the so-called Law of Reductio Ad Absurdum: $$(P \to Q) \to ((P \to \neg Q) \to \neg P),$$ where $P$ is '$p_1, \ldots, p_N$ is the list of all primes' and Q is 'the smallest prime factor of $p_1 \cdots p_N + 1$ is a prime larger than $p_N$'. So if the Law of Reductio Ad Absurdum is valid, then the proper conclusion of the above proof is that $p_1, \ldots, p_N$ is not the list of all primes, which is reasonable as a possible definition of the infinitude of primes (where the prefix 'in-' means 'not', so that 'in-finite' means 'not-listable').
There is another kind of proof of contradiction, namely of the Pigeonhole Principle: Given $n$ holes, if $n + 1$ pigeons are put into them, then there must be some hole with at least two pigeons. So the proof goes: Were there no hole having at least two pigeons, then at most $n$ pigeons were put into the $n$ holes. Thus, if $n + 1$ pigeons were put into the $n$ holes, then there is some hole with at least two pigeons. And the underlying logical principle at the word 'thus' is now the so-called Converse Law of Contraposition: $$(\neg P \to \neg Q) \to (Q \to P),$$
where $P$ is 'there is some hole with at least two pigeons' and $Q$ is '$n + 1$ pigeons are put into the holes'.
By these two examples, I hope that the reader sees and is convinced that what is generically regarded as 'proof by contradiction' is formalisable as either the the Law of Reductio Ad Absurdum or the Converse Law of Contraposition, which are two separate laws distinct from each other.
The subtlety now arises that in fact
the Law of Reductio Ad Absurdum is valid in Minimal Logic, in Intuitionistic Logic and in Classical Logic.
The Converse Law of Contraposition is not valid in Minimal Logic (resp. Intuitionistic Logic). However, adding the Converse Law of Contraposition to Minimal Logic (resp. Intuitionistic Logic) gives a logic equivalent to the full Classical Logic (see Appendix).
So finally, we can arrive at an answer to the re-interpreted question in the yellow box. For our first example, the proof of the infinitude of primes uses 'proof by contradiction' in the sense of the Law of Reductio Ad Absurdum. This proof is valid in Classical Logic, but by (1), is also valid in Minimal Logic and in Intuitionistic Logic. However, for our second example, the proof of the Pigeonhole Principle uses 'proof by contradiction' in the sense of the Converse Law of Contraposition. Although this proof is valid in Classical Logic, by (2), it is not valid in Minimal Logic nor in Intuitionistic Logic. So we must be careful not to reject as non-intuitionistic or non-minimalistic those proofs in classical mathematics that uses the Law of Reductio Ad Absurdum, and inspect carefully whether it is this law or the Converse Law of Contraposition that is being employed.
For the convenience of the reader, I write down the axioms of these three systems of logic, as taken from Nievergelt's book. One of the purposes of writing this down is that in @Carl Mummert's answer, he uses a constant symbol $\bot$ to denote the falsum. However, it is possible to avoid the falsum and to write down the axioms of Minimal Logic, Intuitionistic Logic and Classical Logic completely over the language $\{\neg, \to, \vee, \wedge\}$, with the symbol $\neg$ for negation, the symbol $\to$ for implication, the symbol $\vee$ for disjunction and the symbol $\wedge$ for conjunction. In this language, the use of a constant symbol $\bot$ for the falsum is avoided.
To give the details, let $CL^-$ be the system consisting of the following two axiom schemas:
$P \to (Q \to P)$
$(P \to (Q \to R)) \to ((P \to Q) \to (P \to R))$
Then Classical Logic (CL) is $CL^-$ together with the Converse Law of Contraposition (p.58).
Let $T$ denote $CL^{-}$ together with the following six additional axiom schemas:
$(P \wedge Q) \to P$
$(P \wedge Q) \to Q$
$P \to (Q \to (P \wedge Q))$
$P \to (P \vee Q)$
$Q \to (P \vee Q)$
$(P \to R) \to ((Q \to R) \to ((P\vee Q) \to R))$
Then Minimal Logic (ML) is $T$ plus the Law Of Reductio Ad Absurdum (p.228). And, Intuitionistic Logic (IL) is $T$ plus the Special Law of Reductio Ad Absurdum -- $$(P \to \neg P) \to \neg P$$ and also plus the Law of Denial Of the Antecedent $$\neg P \to (P \to Q).$$
The facts are: $ML + \text{Law of Denial of the Antecedent} \Leftrightarrow IL$ (Exercise 755, p.231), $ML + \text{Law of Double Negation} \Leftrightarrow CL$ (Theorem 653, p.229) and $IL + \text{Law of Double Negation} \Leftrightarrow CL$ (Exercise 754, p.231). Since the Law of Reductio Ad Absurdum is an axiom of $ML$, it is valid in both $IL$ and $CL$. Next, $ML$ is strictly weaker than $IL$, since the Law of Denial of the Antecedent is not valid in $ML$. Likewise, $IL$ is strictly weaker than $CL$, since the Law of Double Negation is not valid in $IL$. Hence, the Law of Double Negation being the special case of the Converse Law of Contraposition obtained by taking $Q = \top$ (the verum), the Converse Law of Contraposition is valid in neither $ML$ nor $IL$.
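Points (1) and (2) can also be checked mechanically; the following Lean 4 sketch (core Lean only, ad hoc names) gives a purely constructive proof term for the Law of Reductio Ad Absurdum, while the Converse Law of Contraposition is obtained by invoking the classical axiom.

-- Law of Reductio Ad Absurdum: a constructive proof term, no classical axiom used.
theorem reductio {P Q : Prop} : (P → Q) → (P → ¬Q) → ¬P :=
  fun hpq hpnq p => hpnq p (hpq p)

-- Converse Law of Contraposition: proved here via Classical.byContradiction,
-- which is exactly the classical strength beyond minimal/intuitionistic logic.
theorem converse_contraposition {P Q : Prop} (h : ¬P → ¬Q) (q : Q) : P :=
  Classical.byContradiction (fun hnp => h hnp q)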
Colin Tan
Recent questions and answers in Mathematics
Which of the following can be used as a means of interaction with bots?
answered 8 hours ago in Mathematics by anonymous
#bots
Consider the following 2 × 2 square matrices A, B and C. (A+B) + C = A + (B+C) signifies which property?
answered 2 days ago in Mathematics by rahuljain1 (581 points)
#vector-associativity
For a given vector $\vec{v}$ in 2D space, stretching it by a value of 2 is called ____
#scalability
Every vector in 2D space is associated with multiple pairs of numbers.
#2d-space
Let $\vec{u} = (6,0,-2)$ and $\vec{v} = (0,8,0)$. What is $\vec{u} \times \vec{v}$?
#vector-analysis
Consider the matrix $A = \begin{pmatrix} 7 & 9 & -3 \\ 3 & -6 & 5 \\ 4 & 0 & 1 \end{pmatrix}$
#minor-value
____________________ is the span of the columns of your matrix.
#column-space
Which property of a bot is indicated by the ability to pull intent from different time steps and perform actions?
asked 2 days ago in Mathematics by sharadyadav1986 (1.3k points)
#bot-action
Alexa, a popular bot from Amazon, provides a feature to integrate with IoT devices.
#amazon-bot
#bot
You can create polls and get responses to polls using emoji.
#poll-response
What are the series of steps performed to understand the intent and entity of input?
#natual-language-understanding
Google Now, Siri, and Cortana can be grouped under _______________.
#voice-assistant
'Chatbot, Conversational Agent, and Dialogue Systems; all these terms are the same.' State whether this statement is true or false.
#chatbot-conversational
What is the order of steps in Natural Language Understanding?
#natural-language
Buttons are used for rendering canned responses.
#canned-response
In a linear transform, when the orientation of space is inverted, the determinant value is negative.
answered 2 days ago in Mathematics by Robindeniel (1.3k points)
#linear-transform
What is the transpose of the given matrix?
#transpose-matrix
If one set of vectors can be expressed as a linear combination of another set of vectors, they are called __________________.
#linear-vector
The _____________ of a vector space is a set of linearly independent vectors that span the entire space.
#vector-space
What kind of matrix is A? $A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 5 & 7 \\ 0 & 0 & 9 \end{pmatrix}$
#upper-triangle-matrix
Let $\vec{u} = (1,2,-4)$ and $\vec{v} = (2,3,5)$. What is $\vec{u} \cdot \vec{v}$?
#vector
Principal Component Analysis determines the direction of maximum variance of data for a given feature set
#principle-component
According to the property of Singular Value Decomposition, it is always possible to decompose a real matrix A into $A = U \Sigma V^T$
#singular-value-decomposition
The set of all possible vectors you can reach with the linear combination of two vectors is called ______________
#linear-combination
_________________ is the number of dimensions in the output of a Transformation.
The fraction $(5x-11)/(2x^2 + x - 6)$ was obtained by adding the two fractions $A/(x + 2)$ and $B/(2x - 3)$. The values of A and B must be, respectively: (a) 5x, -11, (b) -11, 5x, (c) -1, 3, (d) 3, -1, (e) 5, -11
answered Nov 21, 2019 in Mathematics by anonymous
#maths
#mathematics
When a number is divided by 457, the remainder is 38. If the same number is divided by 17, then the remainder will be
asked May 24, 2019 in Mathematics by kamalkhandelwal29 (63 points)
Name a curve which is continuous but not differentiable
The least perfect square which is divisible by each of 21, 36 and 66
(x-3)² + (x - 3)(y - 4)- 2(y - 4)²
Metabolomics of mammalian brain reveals regional differences
Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2018: systems biology
William T. Choi, Mehmet Tosun, Hyun-Hwan Jeong, Cemal Karakas, Fatih Semerci, Zhandong Liu & Mirjana Maletić-Savatić
BMC Systems Biology volume 12, Article number: 127 (2018)
The mammalian brain is organized into regions with specific biological functions and properties. These regions have distinct transcriptomes, but little is known about whether they also differ in their metabolome. The metabolome, a collection of small molecules or metabolites, lies at the intersection of the genetic background of a given cell or tissue and the environmental influences that affect it. Thus, the metabolome directly reflects information about the physiologic state of a biological system under a particular condition. The objective of this study was to investigate whether various brain regions have diverse metabolome profiles, similarly to their genetic diversity. The answer to this question would suggest that not only the genome but also the metabolome may contribute to the functional diversity of brain regions.
We investigated the metabolome of four regions of the mouse brain that have very distinct functions: frontal cortex, hippocampus, cerebellum, and olfactory bulb. We utilized gas- and liquid-chromatography mass spectrometry platforms and identified 215 metabolites.
Principal component analysis, an unsupervised multivariate analysis, clustered each brain region based on its metabolome content, thus providing the unique metabolic profile of each region. A pathway-centric analysis indicated that the olfactory bulb and cerebellum had the most distinct metabolic profiles, while the cortical parenchyma and hippocampus were more similar in their metabolome content. Among the notable differences were a distinct oxidative/anti-oxidative status and region-specific lipid profiles. Finally, a global metabolic connectivity analysis using weighted correlation network analysis identified five hub metabolites that organized a unique metabolic network architecture within each examined brain region. These data indicate the diversity of the global metabolome corresponding to specialized regional brain function and provide a new perspective on the underlying properties of brain regions.
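For readers who want a concrete picture of the unsupervised step, here is a minimal Python sketch of region-wise PCA (not the authors' pipeline; the samples-by-metabolites matrix and region labels are assumed inputs).

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def region_pca(intensities, regions, n_components=2):
    """Project a samples x metabolites matrix onto principal components
    and report each region's centroid in PC space."""
    scaled = StandardScaler().fit_transform(intensities)  # autoscale each metabolite
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(scaled)
    for region in sorted(set(regions)):
        idx = [i for i, r in enumerate(regions) if r == region]
        print(region, np.round(scores[idx].mean(axis=0), 2))
    print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
    return scores

# Hypothetical call: 24 samples (6 mice x 4 regions) x 215 metabolites
# scores = region_pca(intensity_matrix, region_labels)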
In summary, we observed many differences in the metabolome among the various brain regions investigated. All four brain regions in our study had a unique metabolic signature, but the metabolites came from all categories and were not pathway-centric.
Highly specialized brain functions, including learning, memory, attention and numerous other physiological processes, directly depend on neuronal network formation, cellular homeostasis and overall tissue metabolism [1]. Metabolism is critical for the proper function of all living cells. The small biomolecules that participate in metabolic processes determine an individual's metabolic state and provide a close representation of that individual's overall health status [2]. Recent studies have reported that neurological and mental health disorders can be traced to alterations in metabolic pathways [3, 4]. Pathologic conditions typically disturb normal metabolic processes, resulting in changes that can be observed as metabolic signatures [5,6,7,8,9,10,11,12]. Tracing these metabolic signatures could thus reveal information about the physiologic state of the brain under a particular condition [13].
One approach to tracing metabolic signatures utilizes 'omics methodologies, widely used for molecular profiling, identification of biomarkers, characterization of complex biochemical pathways, and examination of pathophysiological processes in various diseases [14]. One of the 'omics sciences is metabolomics, which measures the biochemical content of cell processes downstream of genomic, transcriptomic, and proteomic systems [15,16,17]. The collection of all metabolites, known as the metabolome, includes a broad range of small (< 1 kDa) molecules such as monosaccharides, disaccharides and oligosaccharides; organic bases, nucleosides, and nucleotides; amino acids and peptides; numerous kinds of lipids; and other compounds [18]. The level of each metabolite within the metabolome depends on the specific physiological, developmental, and pathological state of a biological system, thus reflecting the phenotype in response to different genetic and environmental influences [19]. The systematic study of these small-molecule metabolites may thus lead to a deeper insight into the dynamic phenotype of the biologic system and its change as a result of pathology [20, 21].
Mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectrometry are the two technologies used for metabolomics studies [22]. MS can be combined with gas and liquid chromatography (GC and LC, respectively) separation tools to better resolve the metabolites [23]. Metabolomics analyses can generally be separated into two groups: targeted and untargeted analyses. Targeted metabolomics is used when a set of metabolites is examined, typically focusing on one or more selected pathways of interest [24]. Untargeted metabolomics involves simultaneously measuring as many metabolites as possible without bias [25]. In contrast to targeted metabolomics, untargeted metabolomics is global in scope and reveals the comprehensive metabolism of a whole cell/tissue/organism [26].
Despite the importance of the brain metabolism for its proper function and in pathology, our insights are sparse [27]. Thus, the objective of this study was to investigate whether various brain regions have diverse metabolome profiles, similarly to their genetic diversity. The answer to this question would suggest that not only the genome but also the metabolome may contribute to the functional diversity of brain regions. Further, abnormalities in the region-specific metabolome may be underlying the pathology, as recently reported [9]. We report here the complete metabolome profile of four mouse brain regions (olfactory bulb, frontal cortex, hippocampus, and cerebellum) involved in distinct brain functions (processing of smell, higher order functions, learning and memory, and movement, respectively), using an untargeted GC/LC-MS metabolomics analysis. We then sought to find the metabolites that distinguish each region using univariate, bivariate, and multivariate statistical approaches. We defined a set of metabolites that contribute to each region's metabolic signature. To understand the metabolic architecture within each region, we concluded the study with a metabolic network analysis, identifying key modules with a potential to influence the metabolic network architecture.
We harvested four different brain regions (olfactory bulb, frontal cortex, hippocampus, and cerebellum) from six 4-week-old C57BL/6 mice. Each tissue sample was weighed and quickly frozen. Sample analysis was conducted by Metabolon, Inc. using a proprietary series of organic and aqueous extractions to remove the protein content while allowing maximum recovery of small molecules. The extract was divided into two parts: one for analysis by LC and the other for analysis by GC. A TurboVap® (Zymark) was used to remove the organic solvent content. Each sample was then frozen and dried under vacuum. The following methodology section was provided by Metabolon, Inc. as their standard protocol for untargeted mass spectrometry metabolomics.
Untargeted mass spectrometry profiling
Metabolon, Inc. used three independent platforms (ultrahigh performance liquid chromatography/tandem mass spectrometry (UHPLC/MS-MS2) optimized for basic species, UHPLC/MS-MS2 optimized for acidic species, and gas chromatography/mass spectrometry (GC/MS)) to generate untargeted high-throughput mass spectrometry-identified metabolites in the brain regions. The metabolic profiling analysis combined the three independent platforms in a non-targeted approach to obtain the relative quantity of a broad spectrum of molecules. Experimental samples and controls were randomized across platforms. In addition, several technical replicate samples were created from a homogeneous pool containing a small amount of all study samples. Prior to extraction, recovery standards were added for quality control (QC) purposes. Sample preparation was conducted using a Metabolon, Inc. proprietary series of organic and aqueous extractions to remove the protein content while allowing maximum recovery of small molecules. Each sample was then frozen and dried under vacuum. A number of additional samples were included with each day's analysis for QA/QC purposes. Furthermore, a selection of QC compounds was added to every sample, including those under test; these compounds were carefully chosen so as not to interfere with the measurement of the endogenous compounds. Prior to loading the samples into the mass spectrometers, instrument variability was determined by calculating the median relative standard deviation (RSD) for the standards that were added to all samples. Overall variability was determined by calculating the median RSD for all endogenous metabolites (i.e., non-instrument standards) present in 100% of the samples, which are technical replicates of pooled samples. For UHPLC/MS-MS2 analysis, aliquots were separated using a Waters Acquity UPLC (Waters Corp.) and analyzed using an LTQ mass spectrometer (MS) (Thermo Fisher Scientific, Inc.), which consisted of an electrospray ionization source and a linear ion-trap mass analyzer. The MS instrument scanned 99 to 1,000 m/z and alternated between MS and MS2 scans using dynamic exclusion, with approximately 6 scans per second. Derivatized samples for GC/MS were loaded onto a 5% phenyldimethyl silicone column with helium as the carrier gas and a temperature ramp from 60 °C to 340 °C, and then analyzed on a Thermo-Finnigan Trace DSQ MS (Thermo Fisher Scientific, Inc.) operated at unit mass resolving power with electron impact ionization and a 50 to 750 amu scan range.
Metabolites were identified by comparing the ion features in the experimental samples with a library of compound standard entries that included retention time, molecular weight to charge ratio (m/z), preferred adducts, and in-source fragments as well as associated MS spectra, and were curated by visual inspection for quality control using software developed by Metabolon, Inc. [28]. Raw mass spectrometry data were extracted and loaded into a relational database; the information was then examined, and appropriate QC limits were implemented. Numerous curation procedures were carried out to ensure that a high-quality dataset was made available for statistical analysis and data interpretation. QC and curation processes were designed to ensure precise and consistent identification of true compound entities, and to remove those representing system artifacts, mis-assignments, and background noise. Metabolon, Inc. uses proprietary visualization and interpretation software to confirm the consistency of peak identification among the various samples. Library matches for all compounds were checked for each sample and corrected if necessary.
Weighted correlation network analysis
To better understand the metabolite network organization in the brain, we performed weighted correlation network analysis (WGCNA). This method is a network inference algorithm derived from a biological profile and is widely applied to the study of biological networks [29]. The algorithm relies on the pairwise correlation between metabolites and provides information such as network modules (subsets of metabolites that correlate highly with each other) and eigen-metabolites (a representative, summary metabolite for each module). To perform the network analysis, we first calculated every pairwise correlation of metabolites from the metabolite profiles of all 24 samples of the four brain regions in this study. All of the correlations were stored in the matrix S, with sij storing the correlation between the i-th and j-th metabolites in the profile. Next, we defined a weighted network adjacency A in a soft-thresholding manner [29]. The element of the i-th and j-th metabolites in the soft-thresholded adjacency matrix A is defined by \( {a}_{ij}={s}_{ij}^{\beta } \). β ≥ 1 is a parameter used to fit the network of A to the scale-free topology that many biological networks follow [30]. For a scale-free network, the degree/connectivity distribution follows a power law \( p(k)\sim {k}^{-\gamma } \), where p(k) is the fraction of nodes with degree k in the network. From an empirical examination of the scale-free topology with different β values, we chose 11 as the optimal value of β. We next defined Ω, which reflects the relative inter-connectedness between a pair of metabolites:
$$ {\omega}_{ij}=\frac{l_{ij}+{a}_{ij}}{\min \left({k}_i,{k}_j\right)+1-{a}_{ij}} $$
where \( {l}_{ij}={\sum}_u{a}_{iu}\ast {a}_{uj} \) and the node connectivity \( {k}_i={\sum}_u{a}_{iu} \).
Ω is converted to a dissimilarity matrix D, defined by \( {d}_{ij}=1-{\omega}_{ij} \).
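A minimal NumPy sketch of the soft-thresholded adjacency and topological-overlap dissimilarity defined above (this is not the authors' R implementation); the use of the absolute correlation for an unsigned network and all variable names are assumptions:

```python
import numpy as np

def tom_dissimilarity(profiles: np.ndarray, beta: int = 11) -> np.ndarray:
    """Return the dissimilarity matrix D = 1 - omega for a (samples x metabolites) profile."""
    # S: pairwise correlation matrix between metabolites (columns of `profiles`)
    S = np.corrcoef(profiles, rowvar=False)
    # A: soft-thresholded adjacency a_ij = s_ij ** beta (absolute correlation assumed)
    A = np.abs(S) ** beta
    np.fill_diagonal(A, 0.0)             # exclude self-connections
    k = A.sum(axis=1)                    # node connectivity k_i = sum_u a_iu
    L = A @ A                            # l_ij = sum_u a_iu * a_uj
    min_k = np.minimum.outer(k, k)       # min(k_i, k_j) for every pair
    omega = (L + A) / (min_k + 1.0 - A)  # topological overlap omega_ij
    return 1.0 - omega                   # dissimilarity d_ij = 1 - omega_ij
```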
With the dissimilarity matrix we performed complete-linkage hierarchical clustering using the 'flashClust' function in R [31], and then cut the hierarchical tree with the dynamic branch cut method [32] using the 'cutreeDynamic' function in R with its default parameters. After the tree-cutting, we defined a metabolite module if two criteria were satisfied: i) metabolites in the module were connected in the cut tree, and ii) the number of metabolites in the module was larger than 10. Metabolites that were not assigned to any module were omitted. We repeated the clustering and cutting procedure until every metabolite was assigned. At the final step, we calculated the eigen-metabolite for each network module, defined as the first principal component of the concentration matrix of the metabolites in the module.
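The clustering and eigen-metabolite steps can be sketched in Python as below. The study used the R functions for complete-linkage clustering and the dynamic branch cut; the fixed-height cut here is only a simplified stand-in, and the cut height and minimum module size of 10 are illustrative parameters taken from the text:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def modules_and_eigenmetabolites(D, profiles, cut_height=0.95, min_size=10):
    """D: metabolite dissimilarity matrix; profiles: (samples x metabolites) concentrations."""
    Z = linkage(squareform(D, checks=False), method="complete")  # complete-linkage tree
    labels = fcluster(Z, t=cut_height, criterion="distance")     # simplified tree cut
    eigen = {}
    for m in np.unique(labels):
        idx = np.where(labels == m)[0]
        if idx.size < min_size:                                   # drop small modules
            continue
        X = profiles[:, idx]
        X = (X - X.mean(axis=0)) / X.std(axis=0)                  # standardize concentrations
        # eigen-metabolite = first principal component of the module's concentration matrix
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        eigen[m] = X @ vt[0]
    return labels, eigen
```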
Bioinformatics and statistical analyses
Statistical analysis was conducted using the 'R' language (http://cran.r-project.org/). Analysis of variance (ANOVA) with false discovery rate (FDR) correction using the Benjamini–Hochberg procedure was performed on the metabolomics data [33, 34]. Normalization, where indicated, was done using studentized residuals or z-scores. To compare a single metabolite between two regions, we used Welch's two-sample t-test.
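A hedged sketch of this screening in Python rather than the authors' R code: a per-metabolite one-way ANOVA across the four regions, Benjamini–Hochberg FDR correction, and Welch's two-sample t-test for a single pairwise comparison. Function and variable names are illustrative:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def screen_metabolites(groups, alpha=0.01):
    """groups: list of four (n_samples x n_metabolites) arrays, one per brain region."""
    pvals = np.array([stats.f_oneway(*[g[:, j] for g in groups]).pvalue
                      for j in range(groups[0].shape[1])])
    # Benjamini-Hochberg correction at the stated FDR cutoff
    reject, fdr, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return reject, fdr

def welch_t(region_a, region_b):
    """Welch's two-sample t-test for one metabolite (unequal variances)."""
    return stats.ttest_ind(region_a, region_b, equal_var=False)
```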
Brain metabolome is enriched in several classes of metabolites
Using UHPLC/MS-MS2 optimized for basic species, UHPLC/MS-MS2 optimized for acidic species, and GC/MS, we performed untargeted high-throughput mass spectrometry and identified metabolites in four mouse brain regions: olfactory bulb, frontal cortex, hippocampus, and cerebellum (N = 6 per region). We detected 215 metabolites overall (Fig. 1), using a library of more than 2000 purified compounds. Mean concentration-variance plot indicated marginal within-group variance. The 215 metabolites belonged to eight different categories of small molecules: amino acids, carbohydrates, cofactors and vitamins, energy metabolism, lipids, nucleotides, peptides, and xenobiotics. Amino acid and lipid categories predominated (Fig. 1b).
Untargeted metabolomics identifies 215 metabolites in four brain regions of an adult mouse. From a proprietary library of Metabolon, Inc. containing > 2000 compounds, 215 metabolites were identified in the four brain regions: olfactory bulb, frontal cortex, hippocampus and cerebellum (N = 6 per region). a. Heat map represents the relative concentration of each metabolite organized by its respective metabolic category and brain region. Relative concentrations of the metabolites were normalized using the studentized residual method. The color gradient represents the Z score distribution of each metabolite across all four regions, each containing six biological samples. Within-group variance is marginal. b. Eight major types of metabolites were identified. Shaded regions indicate their respective metabolic category, and the number of metabolites is shown in parentheses. OB, olfactory bulb; FCX, frontal cortex; HC, hippocampus; CB, cerebellum
To then determine the metabolites that differed significantly across all regions, we used analysis of variance (ANOVA) with false discovery rate (FDR) correction for multiple testing at a cutoff of FDR < 0.01. Seventy metabolites achieved this statistical significance. We then sought to determine whether the degree of abundance of these 70 significant metabolites could be used to infer region specificity (Additional file 1). We used a two-sample t-test to examine the pairwise differences in the 70 metabolites for each brain region (p < 0.05 was used as the threshold for statistical significance, and the log2 fold change of metabolite concentration between any two regions was used to calculate the relative metabolite abundance). 27/70 metabolites were detected in either high (log2 fold-change > 0) or low (log2 fold-change < 0) amounts in the cerebellum, and 9/70 metabolites were highly abundant in the olfactory bulb. In the hippocampus, two metabolites were either high or low, and in the frontal cortex no metabolites were significantly different. Although the cerebellum, olfactory bulb and, to an extent, the hippocampus may have sets of metabolites that distinguish them from each other, the abundance of each metabolite alone was not a sufficient analytical parameter to distinguish the various brain regions. Regardless, a region-specific metabolome relationship still appeared to exist.
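The pairwise comparison described above can be sketched as follows; the exact preprocessing is an assumption, but the logic (Welch's t-test at p < 0.05 combined with the sign of the log2 fold change of mean concentrations) follows the text:

```python
import numpy as np
from scipy import stats

def region_specific(region, others, p_cut=0.05):
    """region, others: (n_samples x n_metabolites) arrays for the two groups being compared."""
    # Welch's t-test per metabolite (unequal variances)
    p = stats.ttest_ind(region, others, axis=0, equal_var=False).pvalue
    # log2 fold change of mean concentrations between the two groups
    log2_fc = np.log2(region.mean(axis=0) / others.mean(axis=0))
    high = (p < p_cut) & (log2_fc > 0)   # enriched in `region`
    low = (p < p_cut) & (log2_fc < 0)    # depleted in `region`
    return high, low
```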
Untargeted mass spectrometry suggests regional metabolic differences
To further parse the metabolic profiles of the four brain regions, we applied a multivariate analysis utilizing PCA (Fig. 2). With the set of metabolomics data, PCA grouped the brain regions into a few latent components. These components identified the brain regions with strong similarity; brain samples with strong similarity would therefore share the same component, while those that are different would be separated by a distance. As seen in Fig. 2a, the separation of the brain regions is evident in the first and second principal components. While the olfactory bulb and cerebellum were distinctly clustered in the scores plot, the hippocampus and the frontal cortex clustered together, suggesting that they share a similar metabolome. Based on the loadings, we identified a set of metabolic profiles for each brain region (Fig. 2b-d; Additional file 2).
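A minimal scikit-learn sketch of the scores/loadings computation described above; z-scoring each metabolite before PCA is an assumption about the preprocessing, and variable names are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scores_loadings(profiles, n_components=2):
    """profiles: (n_samples x n_metabolites) matrix of metabolite levels."""
    X = StandardScaler().fit_transform(profiles)   # z-score each metabolite
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)                  # one point per brain sample (scores plot)
    loadings = pca.components_.T                   # one row per metabolite (loadings plot)
    return scores, loadings, pca.explained_variance_ratio_
```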
Multivariate analysis via PCA reduces the complexity of the brain region metabolome and identifies key components that contribute to their differences. a. Scattered scores plot shows the degree of separation across all brain region samples when metabolites with real significance are analyzed. b. Scattered loadings plot highlights the category of metabolites that contribute to the separation of the brain region clusters. c-d. The top 10 metabolites that explain the metabolome differences of the brain regions based on principal components 1 and 2. OB, olfactory bulb; FCX, frontal cortex; HC, hippocampus; CBL, cerebellum
Targeted analysis of the brain metabolome
We then focused on individual metabolites among those significantly different between brain regions, to examine whether they belong to biochemically defined pathways. We found significantly high levels of the histidine-containing dipeptides carnosine and anserine in the olfactory bulb relative to the other regions of the brain (Fig. 3; p < 0.001). It has been proposed that carnosine plays a role as a neuromodulator in olfaction [35], and our findings support this hypothesis given the abundance of this metabolite in this region compared to others. Further, we found that metabolites that participate in the cysteine pathway were differently distributed across regions (Fig. 4). Cysteine is a non-essential amino acid central to many biochemical pathways, including the biosynthesis of the antioxidants glutathione and taurine and the production of Coenzyme A (CoA) [36]. Cystathionine, an intermediate product of cysteine, was present in significantly higher amounts in the cerebellum than in the other regions (p < 0.01), as reported [37]. In addition, the oxidized glutathione level was the highest in the frontal region (p < 0.05-p < 0.001), while cysteine-glutathione disulphide was higher in the cerebellum compared to other regions (p < 0.01). These data suggest that the cerebellum may experience relatively elevated oxidative stress and anti-oxidant demands, matched by increased activity along the cysteine transsulfuration pathway to generate glutathione. Cysteine and glutathione have beneficial effects on nerve cell survival by reducing oxidative stress [38,39,40,41] and have been reported as some of the key metabolites in Parkinson's and Huntington's diseases [27]. Taurine, also derived from cysteine, has been implicated in multiple cellular functions in the brain, including central roles as a neurotransmitter, neuromodulator, an osmolyte, and a neuroprotectant against oxidative stress. The highest level of taurine was observed in the olfactory bulb followed by the frontal cortex, and the lowest level was observed in the cerebellum. These region-specific taurine levels suggest differential importance of this neuroactive amino acid derivative across the brain regions analyzed.
Histidine-containing dipeptides, except homocarnosine, are enriched in the olfactory bulb. Homocarnosine is highly enriched in the cerebellum. Other regions do not have significant amounts of these dipeptides. Bar graphs are mean ± SD metabolite concentration. Asterisks indicate statistical significance between the regions estimated by Welch's two-sample t-test (*: p ≤ 0.05, **: p ≤ 0.01, ***: p ≤ 0.001, ****: p ≤ 0.0001). OB, olfactory bulb; FCX, frontal cortex; HC, hippocampus; CBL, cerebellum
Cysteine pathway metabolites are enriched in the cerebellum. Bar graphs are mean ± SD metabolite concentration. Asterisks indicate statistical significance between the regions estimated by Welch's two-sample t-test (*: p ≤ 0.05, **: p ≤ 0.01, ***: p ≤ 0.001, ****: p ≤ 0.0001). OB, olfactory bulb; FCX, frontal cortex; HC, hippocampus; CBL, cerebellum. SAM: S-Adenosyl methionine; PE: Phosphatidylethanolamine; PC: Phosphatidylcholine; PEMT: PE N-methyltransferase; CS: Cystathionine-β-synthase
Interestingly, the levels of cholesterol in the brain regions examined did not vary (Fig. 5). Cholesterol in the brain is synthesized de novo and its concentration is regulated by the rate of its turnover. Generation of 24-S-hydroxycholesterol by the cholesterol 24-hydroxylase enzyme in the endoplasmic reticulum is the main pathway for cholesterol turnover in the brain [42]. 24-S-hydroxycholesterol crosses the blood brain barrier and is transported via the circulation to the liver for further metabolism [43]. In our study, the highest level of 24-S-hydroxycholesterol was found in the frontal cortex followed by the hippocampus (p < 0.01), implying a higher rate of cholesterol metabolism in these tissues relative to the cerebellum and the olfactory bulb (Fig. 5). Dietary cholesterol homologues from plants, campesterol and sitosterol, are known to become enriched to some extent in the mammalian brain [44] and were detected in our samples as well. Campesterol was most abundant in the olfactory bulb (Fig. 5; p < 0.001). Brain cholesterol metabolism appears to play a role in Alzheimer's disease pathogenesis. It was shown that both beta-amyloid and amyloid precursor protein can oxidize cholesterol to form 7-beta-hydroxycholesterol, a proapoptotic oxysterol that is neurotoxic at nanomolar concentrations [45]. Our data indicate that this metabolite is produced in the normal brain under physiological conditions, and it will be important to study this metabolite in mouse models of neurodegeneration.
While cholesterol is evenly distributed in all brain regions studied, it is converted to 24S-hydroxycholesterol mostly in the cortical regions and hippocampus. Two plant-based cholesterol derivatives, campesterol and desmosterol, are also detected. Bar graphs are mean ± SD metabolite concentration. Asterisks indicate statistical significance between the regions estimated by Welch's two-sample t-test (*: p ≤ 0.05, **: p ≤ 0.01, ***: p ≤ 0.001, ****: p ≤ 0.0001). OB, olfactory bulb; FCX, frontal cortex; HC, hippocampus; CBL, cerebellum
Finally, the brain tissue was particularly rich in two polyunsaturated fatty acids (PUFAs), arachidonic acid (AA) (20:4n-6) and docosahexaenoic acid (DHA) (22:6n-3) (Fig. 6). DHA and AA are essential for brain function, optimal growth, and development [46]. PUFAs are also protective against post-stroke brain injury, promote angiogenesis, and support white matter restoration [47]. The differences in PUFA distribution between the cerebellum and the other regions of the brain may come from the composition of the cellular membranes of the Purkinje cells and granule cells found in the cerebellum. High levels of PUFAs in Purkinje cells provide protection against degeneration and autophagy in some pathologic conditions [48]. Joffre et al. found the highest concentrations of DHA and AA in the hypothalamus in their mouse model study [49]. It has also been reported that, in mouse astrocytes, prostaglandin D2 and prostaglandin E2 are powerful inducers of nerve growth factor and brain-derived neurotrophic factor [50].
The distribution of polyunsaturated fatty acids and prostaglandins in different brain regions. Bar graphs are mean ± SD metabolite concentration. Asterisks indicate statistical significance between the regions estimated by Welch's two-sample t-test (*: p ≤ 0.05, **: p ≤ 0.01, ***: p ≤ 0.001, ****: p ≤ 0.0001). OB, olfactory bulb; FCX, frontal cortex; HC, hippocampus; CBL, cerebellum
Global metabolic connectivity and network module analysis of the brain metabolome
To examine whether each brain region has a distinct metabolic architecture in addition to its enrichment in certain metabolites, we performed WGCNA on all 215 metabolites. We found five modules consisting of 61 out of the 215 metabolites (Fig. 7a; Additional file 3). The remaining 154 metabolites had low correlation with other metabolites and were thus omitted from the network analyses. The correlation of all pairwise metabolites in each module is high (Fig. 7b). Although we used WGCNA to detect the network with high correlation across the entire dataset, we discovered that each module in the network has a brain region-specific property. Namely, we calculated the eigen-metabolites of each module using principal component analysis of the metabolite concentrations within each module (Fig. 7c). The eigen-metabolites show that each module has a different pattern of metabolite concentration compared to the other modules, and these patterns differed for each brain region. Region-specific differences in two modules, the blue and yellow ones, were confirmed with ANOVA. Further, the eigen-metabolites identified hub metabolites for each module based on the $k$ connectivity measure: deoxy-carnitine (blue, k = 0.954), 2-palmitoyl glycerol phosphoethanolamine (brown, k = 0.999), glycine (green, k = 0.968), leucyl-leucine (turquoise, k = 0.968), and ergothioneine (yellow, k = 0.942). To then examine how each brain region builds its metabolic architecture around these hub metabolites, we plotted the correlations of every pairwise metabolite of each module in contour graphs (Fig. 7d). Indeed, each module except the brown one shows a different correlation pattern for each brain region. These data indicate that brain regions differ not only in the set of metabolites they accumulate, as we have shown using a traditional, targeted approach, but also in their organization of metabolic networks centered around a few common hub metabolites, as shown using the WGCNA analysis.
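A hedged sketch of the hub-metabolite selection step. The reported k values (≈0.94–0.999) are consistent with an eigen-metabolite-based connectivity, i.e. the correlation of each metabolite with its module eigen-metabolite, which is assumed here; the study's exact definition of k may differ, and the names below are illustrative:

```python
import numpy as np

def hub_metabolite(profiles, module_idx, eigen_metabolite, names):
    """Pick the metabolite most strongly correlated with the module eigen-metabolite."""
    # k_j: correlation of metabolite j with the module eigen-metabolite
    k = np.array([np.corrcoef(profiles[:, j], eigen_metabolite)[0, 1]
                  for j in module_idx])
    best = int(np.argmax(np.abs(k)))          # candidate hub = strongest |correlation|
    return names[module_idx[best]], k[best]
```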
Metabolite correlation classifies the metabolome into brain region-specific network modules. a. The weighted correlation network analysis (WGCNA) cluster dendrogram identifies 61 out of 215 metabolites across four different brain regions as five distinct network modules. b. The heatmap of the topological overlap matrix of the metabolite networks shows that the intra-cluster similarities of each module are high, confirming the clustering in A. c. Eigen-metabolites of each module exhibit different concentration patterns. The p-values were calculated by ANOVA. d. The contour graphs of correlations of every pairwise metabolite in each module shows the correlations in each brain region as well as in all of them merged together (rightmost graph for each module). While there is a high correlation overall (rightmost graph for each module), the decomposed graphs show a unique correlation pattern for each brain region. The divergent patterns of any given module are also different from patterns of every other module
In this study, we utilized several approaches to examine brain metabolome. We investigated whether different brain regions have unique metabolome contents using untargeted mass spectrometry metabolomic profiling of the mouse brain. First, we found that each region is enriched in a set of metabolites, supporting our hypothesis that metabolic specificity may be important for the biological function of a given region. Not surprisingly, the biochemical profiles of the frontal and hippocampal regions were very similar, while the cerebellum was the most distinct when compared to other tissues. Second, we found that each region has a unique metabolic network architecture, further highlighting their metabolic specificity.
Metabolomics has become one of the approaches to understand the integrated response of cellular processes to genetic and environmental factors. However, given the large amount of information generated through metabolomics, as in other 'omics approaches, the study of metabolism requires additional approaches to reduce complexity. Namely, understanding the metabolome can be a daunting task because of the hundreds of measured metabolite species. A classical way to study metabolism is pathway-centric. The biochemical pathways provide the roadmap for energy transfer, which includes the enzymes catalyzing a reaction, the substrates that get converted from one state to another, and the physical chemistry, i.e. kinetics and thermodynamics, that explains why and how the energy transfer can take place. In our study, many biochemical differences were observed among the various brain regions, illustrating the diversity of global metabolism corresponding to specialized regional brain function. Overall, the olfactory bulb and cerebellum showed more distinct metabolic profiles, while the cortical parenchyma and hippocampus were more similar. This is not surprising, given that both structures participate in memory formation. Among the notable differences across all four regions were distinct redox status and region-specific fatty acid profiles, suggesting that different brain regions depend on these molecules to varying degrees to perform their function and maintain steady-state.
When one moves away from the pathway-centric approach and starts to incorporate the interconnected metabolic pathways, the complexities increase exponentially. For example, the metabolite concentrations are theoretically determined by the activity of the enzymes. However, there are countless variables that both the enzyme activity and the metabolites are affected by. Understanding the quantity of just one metabolite and/or its interaction with a few other metabolites that belong to the same pathway is not enough to understand metabolism of a given cell or tissue. On the other hand, by looking at the whole metabolome, we can find characteristic patterns in metabolite profiles, directly linking them to the underlying biochemical reaction networks. We thus reasoned, based on the biochemistry paradigm of feedback regulation, that the metabolites could be part of a biochemical network of interconnecting pathways where the changes of a set of metabolites could influence another set of metabolites. In theory, the metabolites in a biochemical network are connected with each other; a change in one metabolite can influence a metabolite from a different pathway, thus creating a dense network. With the relative concentrations of the identified metabolites, a partial relationship could be inferred by the correlations between metabolites [51, 52]. The weighted correlation network analysis (WGCNA) [29] we used to identify network modules of the metabolites showed exactly what we predicted – each brain region has a unique metabolic network architecture.
We observed many differences in the metabolome among the various brain regions investigated. All four brain regions in our study had a unique metabolic signature, but the metabolites came from all categories and were not pathway-centric. To better understand these unique, region-specific metabolic signatures, we performed a metabolic network analysis, which revealed a network structure and led to the discovery of five hub metabolites important for maintaining the common metabolic architecture across all brain regions.
Abbreviations
AA: Arachidonic acid
ANOVA: Analysis of variance
CoA: Coenzyme A
DHA: Docosahexaenoic acid
DSQ: Dual stage quadrupole
FDR: False discovery rate
kDa: Kilodalton
LC: Liquid chromatography
LTQ: Linear trap quadrupole
m/z: Molecular weight to charge ratio
MS: Mass spectrometry
NMR: Nuclear magnetic resonance
PCA: Principal component analysis
PUFAs: Polyunsaturated fatty acids
QA/QC: Quality assessment/quality control
RSD: Relative standard deviation
UHPLC/MS-MS2: Ultrahigh performance liquid chromatography/tandem mass spectrometry
WGCNA: Weighted correlation network analysis
Qi M, Philip MC, Yang N, Sweedler JV. Single Cell Neurometabolomics. ACS Chem Neurosci. 2018;9(1):40–50.
Beger RD, Dunn W, Schmidt MA, Gross SS, Kirwan JA, Cascante M, Brennan L, Wishart DS, Oresic M, Hankemeier T, et al. Metabolomics enables precision medicine: "a white paper, community perspective". Metabolomics. 2016;12(10):149.
Kristal BS, Shurubor YI. Metabolomics: opening another window into aging. Sci Aging Knowl Environ. 2005;2005(26):pe19.
Kaddurah-Daouk R, Krishnan KR. Metabolomics: a global biochemical approach to the study of central nervous system diseases. Neuropsychopharmacology. 2009;34(1):173–86.
Oresic M, Anderson G, Mattila I, Manoucheri M, Soininen H, Hyotylainen T, Basignani C. Targeted serum metabolite profiling identifies metabolic signatures in patients with Alzheimer's disease, Normal pressure hydrocephalus and brain tumor. Front Neurosci. 2017;11:747.
Botas A, Campbell HM, Han X, Maletic-Savatic M. Metabolomics of neurodegenerative diseases. Int Rev Neurobiol. 2015;122:53–80.
Petrovchich I, Sosinsky A, Konde A, Archibald A, Henderson D, Maletic-Savatic M, Milanovic S. Metabolomics in schizophrenia and major depressive disorder. Front Biol. 2016;11(3):222–31.
Gandy K, Kim S, Sharp C, Dindo L, Maletic-Savatic M, Calarge C. Pattern separation: a potential marker of impaired hippocampal adult neurogenesis in major depressive disorder. Front Neurosci. 2017;11:571.
Liu L, MacKenzie KR, Putluri N, Maletic-Savatic M, Bellen HJ. The glia-neuron lactate shuttle and elevated ROS promote lipid synthesis in neurons and lipid droplet accumulation in glia via APOE/D. Cell Metab. 2017;26(5):719–37 e716.
Zhu Y, Fan Q, Han X, Zhang H, Chen J, Wang Z, Zhang Z, Tan L, Xiao Z, Tong S, et al. Decreased thalamic glutamate level in unmedicated adult obsessive-compulsive disorder patients detected by proton magnetic resonance spectroscopy. J Affect Disord. 2015;178:193–200.
Vingara LK, Yu HJ, Wagshul ME, Serafin D, Christodoulou C, Pelczer I, Krupp LB, Maletic-Savatic M. Metabolomic approach to human brain spectroscopy identifies associations between clinical features and the frontal lobe metabolome in multiple sclerosis. Neuroimage. 2013;82:586–94.
Zhang X, Tang Y, Maletic-Savatic M, Sheng J, Zhang X, Zhu Y, Zhang T, Wang J, Tong S, Wang J, et al. Altered neuronal spontaneous activity correlates with glutamate concentration in medial prefrontal cortex of major depressed females: an fMRI-MRS study. J Affect Disord. 2016;201:153–61.
Brown AG, Tulina NM, Barila GO, Hester MS, Elovitz MA. Exposure to intrauterine inflammation alters metabolomic profiles in the amniotic fluid, fetal and neonatal brain in the mouse. PLoS One. 2017;12(10):e0186656.
Zhang Y, Yuan S, Pu J, Yang L, Zhou X, Liu L, Jiang X, Zhang H, Teng T, Tian L, et al. Integrated metabolomics and proteomics analysis of Hippocampus in a rat model of depression. Neuroscience. 2018;371:207–20.
Villas-Boas SG, Mas S, Akesson M, Smedsgaard J, Nielsen J. Mass spectrometry in metabolome analysis. Mass Spectrom Rev. 2005;24(5):613–46.
Holmes E, Wilson ID, Nicholson JK. Metabolic phenotyping in health and disease. Cell. 2008;134(5):714–7.
Varma VR, Oommen AM, Varma S, Casanova R, An Y, Andrews RM, O'Brien R, Pletnikova O, Troncoso JC, Toledo J, et al. Brain and blood metabolite signatures of pathology and progression in Alzheimer disease: a targeted metabolomics study. PLoS Med. 2018;15(1):e1002482.
Issaq HJ, Van QN, Waybright TJ, Muschik GM, Veenstra TD. Analytical and statistical approaches to metabolomics research. J Sep Sci. 2009;32(13):2183–99.
Fiehn O. Metabolomics--the link between genotypes and phenotypes. Plant Mol Biol. 2002;48(1–2):155–71.
Luan H, Wang X, Cai Z. Mass spectrometry-based metabolomics: targeting the crosstalk between gut microbiota and brain in neurodegenerative disorders. Mass Spectrom Rev. 2017. https://doi.org/10.1002/mas.21553.
Liu CC, Chen JL, Chang XR, He QD, Shen JC, Lian LY, Wang YD, Zhang Y, Ma FQ, Huang HY, et al. Comparative metabolomics study on therapeutic mechanism of electro-acupuncture and moxibustion on rats with chronic atrophic gastritis (CAG). Sci Rep. 2017;7(1):14362.
Gika HG, Wilson ID, Theodoridis GA. The role of mass spectrometry in nontargeted Metabolomics. Compr. Anal. Chem. 2014;63:213–33.
Arnold JM, Choi WT, Sreekumar A, Maletic-Savatic M. Analytical strategies for studying stem cell metabolism. Front Biol (Beijing). 2015;10(2):141–53.
Dudley E, Yousef M, Wang Y, Griffiths WJ. Targeted metabolomics and mass spectrometry. Adv Protein Chem Struct Biol. 2010;80:45–83.
Patti GJ, Yanes O, Siuzdak G. Innovation: metabolomics: the apogee of the omics trilogy. Nat Rev Mol Cell Biol. 2012;13(4):263–9.
Wang X, Wang D, Zhou Z, Zhu W. Subacute oral toxicity assessment of benalaxyl in mice based on metabolomics methods. Chemosphere. 2018;191:373–80.
Gonzalez-Riano C, Garcia A, Barbas C. Metabolomics studies in brain tissue: a review. J Pharm Biomed Anal. 2016;130:141–68.
Dehaven CD, Evans AM, Dai H, Lawton KA. Organization of GC/MS and LC/MS metabolomics data into chemical libraries. J Cheminform. 2010;2(1):9.
Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics. 2008;9:559.
Jeong HH, Leem S, Wee K, Sohn KA. Integrative network analysis for survival-associated gene-gene interactions across multiple genomic profiles in ovarian cancer. J Ovarian Res. 2015;8:42.
Langfelder P, Horvath S. Fast R functions for robust correlations and hierarchical clustering. J Stat Softw. 2012;46(11):i11.
Langfelder P, Zhang B, Horvath S. Defining clusters from a hierarchical cluster tree: the dynamic tree cut package for R. Bioinformatics. 2008;24(5):719–20.
Pavlidis P. Using ANOVA for gene selection from microarray studies of the nervous system. Methods. 2003;31(4):282–9.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B Methodol. 1995;57(1):289–300.
Sassoe-Pognetto M, Cantino D, Panzanelli P, Verdun di Cantogno L, Giustetto M, Margolis FL, De Biasi S, Fasolo A. Presynaptic co-localization of carnosine and glutamate in olfactory neurones. Neuroreport. 1993;5(1):7–10.
Stipanuk MH, Dominy JE Jr, Lee JI, Coloso RM. Mammalian cysteine metabolism: new insights into regulation of cysteine metabolism. J Nutr. 2006;136(6 Suppl):1652S–9S.
Lefauconnier JM, Portemer C, Chatagner F. Cystathionine in rat brain: catabolism in vivo. Neurochem Res. 1978;3(3):345–56.
Maher P. Potentiation of glutathione loss and nerve cell death by the transition metals iron and copper: implications for age-related neurodegenerative diseases. Free Radic Biol Med. 2018;115:92–104.
Song W, Tavitian A, Cressatti M, Galindez C, Liberman A, Schipper HM. Cysteine-rich whey protein isolate (Immunocal (R)) ameliorates deficits in the GFAP.HMOX1 mouse model of schizophrenia. Free Radic Biol Med. 2017;110:162–75.
Pauletti A, Terrone G, Shekh-Ahmad T, Salamone A, Ravizza T, Rizzi M, Pastore A, Pascente R, Liang LP, Villa BR, et al. Targeting oxidative stress improves disease outcomes in a rat model of acquired epilepsy. Brain. 2017;140(7):1885–99.
Jiang X, Chen J, Bajic A, Zhang C, Song X, Carroll SL, Cai ZL, Tang M, Xue M, Cheng N, et al. Quantitative real-time imaging of glutathione. Nat Commun. 2017;8:16087.
Benussi L, Ghidoni R, Dal Piaz F, Binetti G, Di Iorio G, Abrescia P. The level of 24-Hydroxycholesteryl esters is an early marker of Alzheimer's disease. J Alzheimers Dis. 2017;56(2):825–33.
Meljon A, Theofilopoulos S, Shackleton CH, Watson GL, Javitt NB, Knolker HJ, Saini R, Arenas E, Wang Y, Griffiths WJ. Analysis of bioactive oxysterols in newborn mouse brain by LC/MS. J Lipid Res. 2012;53(11):2469–83.
Saeed AA, Genove G, Li T, Hulshorst F, Betsholtz C, Bjorkhem I, Lutjohann D. Increased flux of the plant sterols campesterol and sitosterol across a disrupted blood brain barrier. Steroids. 2015;99(Pt B):183–8.
Nelson TJ, Alkon DL. Oxidation of cholesterol by amyloid precursor protein and beta-amyloid peptide. J Biol Chem. 2005;280(8):7377–87.
Harauma A, Hatanaka E, Yasuda H, Nakamura MT, Salem N Jr, Moriguchi T. Effects of arachidonic acid, eicosapentaenoic acid and docosahexaenoic acid on brain development using artificial rearing of delta-6-desaturase knockout mice. Prostaglandins Leukot Essent Fatty Acids. 2017;127:32–9.
Cai M, Zhang W, Weng Z, Stetler RA, Jiang X, Shi Y, Gao Y, Chen J. Promoting neurovascular recovery in aged mice after ischemic stroke - prophylactic effect of Omega-3 polyunsaturated fatty acids. Aging Dis. 2017;8(5):531–45.
Bak DH, Zhang E, Yi MH, Kim DK, Lim K, Kim JJ, Kim DW. High omega3-polyunsaturated fatty acids in fat-1 mice prevent streptozotocin-induced Purkinje cell degeneration through BDNF-mediated autophagy. Sci Rep. 2015;5:15465.
Joffre C, Gregoire S, De Smedt V, Acar N, Bretillon L, Nadjar A, Laye S. Modulation of brain PUFA content in different experimental models of mice. Prostaglandins Leukot Essent Fatty Acids. 2016;114:1–10.
Toyomoto M, Ohta M, Okumura K, Yano H, Matsumoto K, Inoue S, Hayashi K, Ikeda K. Prostaglandins are powerful inducers of NGF and BDNF production in mouse astrocyte cultures. FEBS Lett. 2004;562(1–3):211–5.
Moschen S, Higgins J, Di Rienzo JA, Heinz RA, Paniego N, Fernandez P. Network and biosignature analysis for the integration of transcriptomic and metabolomic data to characterize leaf senescence process in sunflower. BMC Bioinformatics. 2016;17(Suppl 5):174.
DiLeo MV, Strahan GD, den Bakker M, Hoekenga OA. Weighted correlation network analysis (WGCNA) applied to the tomato fruit metabolome. PLoS One. 2011;6(10):e26683.
The metabolomic profiling data collection was done by Metabolon, Inc. We thank the members of the Maletic-Savatic lab for comments and critical reading of the paper.
The research was supported in part by the Baylor College of Medicine Microscopy Core (P30HD024064 Intellectual and Developmental Disabilities Research Grant from the Eunice Kennedy Shriver National Institute of Child Health and Human Development). WTC was supported by the NLM Training Program in Biomedical Informatics (T15LM007093) and the Baylor College of Medicine Medical Scientist Training Program. Publication of this article was funded in part by the NIH grant GM120033–01 (MMS, ZL). The funding bodies did not have any role in the design or conclusions of this study.
All data and materials will be shared in accordance with the NIH Grants Policy on Sharing of Unique Research Resources.
About this supplement
This article has been published as part of BMC Systems Biology Volume 12 Supplement 8, 2018: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2018: systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-8.
William T. Choi, Mehmet Tosun and Hyun-Hwan Jeong contributed equally to this work.
Program in Developmental Biology, Baylor College of Medicine, Houston, TX, USA
William T. Choi, Fatih Semerci & Mirjana Maletić-Savatić
The National Library of Medicine Training Program in Biomedical Informatics, Houston, TX, USA
William T. Choi
Medical Scientist Training Program, Baylor College of Medicine, Houston, TX, USA
Jan and Dan Duncan Neurological Research Institute, Texas Children's Hospital, Houston, TX, USA
William T. Choi, Mehmet Tosun, Hyun-Hwan Jeong, Cemal Karakas, Fatih Semerci, Zhandong Liu & Mirjana Maletić-Savatić
Department of Pediatrics-Neurology, Baylor College of Medicine, Houston, TX, USA
Mehmet Tosun, Cemal Karakas, Zhandong Liu & Mirjana Maletić-Savatić
Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX, USA
Hyun-Hwan Jeong
Quantitative Computational Biology Program, Baylor College of Medicine, Houston, TX, USA
Zhandong Liu & Mirjana Maletić-Savatić
Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
Mirjana Maletić-Savatić
WTC performed the experiments, analyzed the data and wrote the manuscript. MT analyzed the data and wrote the manuscript. HHJ performed metabolic network analysis and wrote the manuscript. CK and FS participated in validation of the data. ZL and MMS designed and supervised all studies, analyzed and interpreted the data, provided financial support, and wrote the manuscript. All authors have read and approved the manuscript.
Correspondence to Zhandong Liu or Mirjana Maletić-Savatić.
All authors declare that they have no competing interests.
70 metabolites significantly differed across all brain regions. The significance of the metabolites was determined by ANOVA with FDR correction (FDR < 0.01). (PDF 2060 kb)
List of predominant metabolites of the four brain regions determined by PCA loadings. (PDF 52 kb)
Metabolites that represent each identified module. (PDF 50 kb)
Choi, W.T., Tosun, M., Jeong, HH. et al. Metabolomics of mammalian brain reveals regional differences. BMC Syst Biol 12 (Suppl 8), 127 (2018). https://doi.org/10.1186/s12918-018-0644-0
Spatial Evaluation of Earthquake Disaster Based On Correlation Characteristics And BP Neural Network
Hanxu Zhou, Ailan Che, Xianghua Shuai
Rapid spatial evaluation of disaster after an earthquake is required for emergency rescue management, because of its significant role in decreasing casualties and property losses. The earthquake-hit population is taken as an example of earthquake disaster to construct the evaluation model, using data from the 2013 Ms7.0 Lushan earthquake. Ten influencing factors are classified into environmental factors and seismic factors. The correlation analysis reveals a nonlinear relationship between the earthquake-hit population and the various factors, with the per capita GDP and PGA factors showing stronger correlations with the earthquake-hit population. Moreover, the spatial variability of the influencing factors affects the distribution of the earthquake-hit population. The earthquake-hit population is evaluated using a BP neural network with training samples optimized based on the spatial characteristics of the per capita GDP and PGA factors. Different numbers of sample points are generated in areas with different value intervals of the influencing factors, instead of distributing sample points randomly. The minimum RMSE (root mean square error) on the testing set is 18 people/km2, showing good accuracy in the spatial evaluation of the earthquake-hit population. Meanwhile, the optimized samples that consider spatial characteristics improve the convergence speed and generalization capability compared to random samples. The trained network was generalized to the 2017 Ms7.0 Jiuzhaigou earthquake to verify the prediction accuracy. The evaluation results indicate that a BP neural network that considers the correlation characteristics of the factors has the capability to evaluate seismic disaster information in space, providing more detailed information for emergency services and rescue operations.
Earthquake-hit population
Spatial evaluation
Correlation characteristic
BP neural network
Sample optimization
Generalization capability.
Earthquake disasters have a profound impact on the human living environment because of their suddenness and destructiveness. Severe casualties, house collapse and economic loss can be caused by intense seismic ground motion. Strong earthquakes have continued to occur worldwide in the past two decades (Rossetto et al., 2007; Lara et al., 2016; Shimada, 2016). The number of people injured or killed by an earthquake can range from a few to tens of thousands, distributed across different spatial locations (Zhao et al., 2018). Although concern about seismic hazards continues to deepen and public awareness of earthquakes is constantly improving, active geological structures have continued to affect the human environment over the past decades (Sun et al., 2016; Wu et al., 2020; Santos-Reyes and Gouzeva, 2020; Luo et al., 2021). Because earthquake occurrence is unpredictable, it is difficult to prepare before an earthquake, so countries are committed to improving their emergency rescue ability after an earthquake (Huang and Li, 2014). Among the various types of earthquake disaster information, modeling earthquake casualties is particularly important for informing emergency rescue and decision making. Post-earthquake casualty evaluation is fast becoming an important issue in economic, social, and environmental risk management (Huang and Huang, 2018).
The disaster under intense seismic motion is the complex result of various influencing factors. Seismic intensity, topography, population and economic level are all related to earthquake casualties to some extent. Traditional physical models or statistical regression models have difficulty reflecting the nonlinear relation between the earthquake-hit population and these factors (Erdik et al., 2011). With the continuous improvement of computing speed in recent years, machine learning methods have been more widely used. More and more scholars apply them to disaster mapping under earthquakes, considering that machine learning methods can learn from historical data to produce insight into extreme events (Yang et al., 2015; Choubin et al., 2019; Pourghasemi et al., 2019; Jena et al., 2020; Hou et al., 2020; Si and Du, 2020; Luo et al., 2020). Aghamohammadi et al. (2013) used an artificial neural network (ANN) to estimate the human loss from building damage under earthquake based on data from the 2003 Bam earthquake. Huang et al. (2015) proposed a robust wavelet (RW) v-SVM (support vector machine) earthquake casualty prediction model. The factors included earthquake magnitude, intensity, population density, pre-warning level, in-building probability, location of occurrence, supply support and building collapse ratio. It was concluded that the RW v-SVM model had higher prediction accuracy and faster learning than a standard SVM and a neural network. Gul and Guneri (2016) built an ANN model for casualty prediction taking occurrence time, magnitude, and population density as factors. Data from 21 earthquakes in Turkey were collected as samples for network training. Huang et al. (2020) established an Extreme Learning Machine (ELM) network to predict earthquake casualties based on data from 84 groups of earthquake victims in China. It was found that the ELM algorithm had better robustness and generalization capability than a BP neural network and SVM. It can be noticed that existing research has focused on predicting the number of people affected by an earthquake, considering factors of multiple dimensions. Moreover, the accuracy and performance of different machine learning methods were compared based on the evaluation results of earthquake casualties. However, the input layers of these machine learning methods used numerical data without spatial information, and the spatial characteristics of the disaster information in the output layer have not been evaluated effectively. For earthquake emergency management, the spatial distribution of disaster information within the earthquake-affected area has greater significance for the formulation of detailed rescue plans.
The generalization capability of a network refers to its ability to produce accurate output for new data other than the training samples. Generalization capability is the most important index for measuring the performance of a network. The complexity of the structure and of the samples are the main factors affecting the generalization capability of a model. Research by Partridge (1996) on three-layer neural networks found that the influence of the training set on generalization capability is great, even more than the influence of the number of neurons. Many researchers combine principal component analysis (PCA), clustering analysis and other methods with machine learning to optimize the training set, aiming to improve the generalization capability of the network (Basharat et al., 2016; Li et al., 2020). Lou et al. (2012) used PCA to reduce the dimension of assessment factors, the disaster-formative environment and disaster-affected bodies, and established a BP neural network to assess the economic loss under tropical cyclones in Zhejiang Province. A combined use of PCA and ANN was adopted by Gao et al. (2020) to evaluate personal exposure levels to PM2.5, and it was found that the combined use of PCA and ANN produced more accurate results than a simple ANN method. It can be seen that optimizing the input samples of a network can improve its generalization capability. Most of the existing sample optimization methods are based on statistical analysis in the numerical dimension. The distributions of influencing factors and training results in the spatial dimension are also related. Sample optimization based on spatial correlation characteristics might therefore provide a novel way to improve generalization capability.
The study presented herein aimed at effectively evaluating the spatial distribution of earthquake disaster information in each county. The spatial distribution of the earthquake-hit population was selected as the study content and evaluated based on the correlation characteristics of influencing factors and a BP neural network, using data from the 2013 Ms7.0 Lushan earthquake. The selection of samples was optimized based on the spatial characteristics resulting from the correlation analysis, to improve the generalization capability of the network and the accuracy of the evaluation results.
2. Influencing Factors Of The Earthquake Disaster In The 2013 Ms7.0 Lushan Earthquake
2.1 Earthquake-hit population in the 2013 Ms7.0 Lushan earthquake
The Lushan Ms7.0 earthquake occurred on April 20, 2013, and the epicenter was located at 30°18'N, 103°56'E, in Lushan county, Sichuan province, China. The focal depth of the Lushan earthquake was 13 km. The affected area of the Lushan earthquake was at the junction of the Qinghai-Tibet Plateau and the Sichuan Basin. The Lushan earthquake was caused by the tectonic activity of the Longmenshan fault zone, the same as the 2008 Ms8.0 Wenchuan earthquake. The distance between the Lushan and Wenchuan earthquake epicenters was about 85 km. A total of 196 people were killed, 21 were missing and 11470 were injured in the Lushan earthquake. The Lushan earthquake affected an area of 12500 km2 and caused a direct economic loss of about 185.4 billion yuan. After the earthquake, Sichuan Province immediately started its first-level emergency procedures and deployed the army to carry out emergency rescue work.
The total number of earthquake-hit people under the Lushan earthquake reached 3.7 million. The earthquake-hit population refers to the people who suffered property or life losses due to the earthquake. The earthquake-hit population not only reflects the severity of the natural disaster, but also reveals the impact of the earthquake on people's lives. The earthquake-hit population also offers a reference in the formulation of emergency rescue plans, so the number of earthquake-hit people has become an important index for evaluating the damage caused by an earthquake. Figure 1 illustrates the earthquake-hit population density (number of earthquake-hit people per square kilometer) in each county-level administrative region within the earthquake-affected area. The data were collected and released on the Internet by the Sichuan provincial government after the earthquake (Wang and Li, 2014).
In Figure 1, the color represents the earthquake-hit population density. The earthquake-hit population density in Yucheng District, Danling County and Mingshan County was relatively high. The earthquake-hit population density in Mingshan County was the highest, reaching approximately 432 people/km2. It can be observed that the region with the highest earthquake-hit population density was not the region where the epicenter was located. This indicates that the spatial impact of an earthquake on the population is complicated; the epicenter is not necessarily the most severely affected area under seismic motion. A similar phenomenon was observed in the 2008 Ms8.0 Wenchuan earthquake (Yang et al., 2014). The casualties caused by earthquakes are related to many categories of influencing factors. The influencing factors of the earthquake-hit population are divided into environmental and seismic factors.
2.2 Environmental influencing factors
The environmental influencing factors refer to the environmental conditions in the study area; they have no direct relationship with earthquake occurrence. The environmental influencing factors considered in this research included elevation, slope angle, population density, per capita GDP, distance to fault, distance to river and the Normalized Difference Vegetation Index (NDVI). The data details of the environmental influencing factors are shown in Table 1. Among the environmental factors, elevation, slope angle, distance to fault, distance to river and NDVI are maps whose values vary with spatial location. However, population density and per capita GDP are not mapped at the same resolution as the other environmental factors: there is one attribute value for each county-level administrative region for these two factors, because counties were used as the basic unit of statistics.
Table 1. Data details of the environmental influencing factors (source; scale/resolution):
Elevation – Geospatial Data Cloud site, Computer Network Information Center, Chinese Academy of Sciences (http://www.gscloud.cn); 30×30 m
Slope angle – calculated from the digital elevation model
Population density – China Earthquake Network Center; county level
Per capita GDP – China Earthquake Network Center; county level
Distance to fault – China Earthquake Network Center; 1:100000
Distance to river – China Earthquake Network Center
NDVI – Geospatial Data Cloud site (Landsat 7 ETM+)
2.2.1 Elevation
Elevation is considered to be the most important factor in the analysis of natural disaster susceptibility (Peng et al., 2014; Tehrany et al., 2015; Saha et al., 2020). There is also a correlation between elevation and the distribution of the earthquake-hit population. On the one hand, the population is concentrated on plains with lower elevations; on the other hand, there is a slope amplification effect on seismic ground motion, resulting in more severe geological disasters in high-elevation areas (Zhang et al., 2018). The digital elevation model with a resolution of 30×30 m, updated in 2009 (Figure 2(a)), was obtained from the Geospatial Data Cloud site, Computer Network Information Center, Chinese Academy of Sciences.
2.2.2 Slope angle
The slope angle is a geomorphic parameter which has an important impact on seismic geological disasters such as landslide, debris flow, barrier lake, etc. In the field investigation of historical strong earthquakes, it was found that a large number of earthquake casualties were caused by geological disasters triggered by seismic motion (Xu et al., 2015). The slope angle map (Figure 2(b)) was derived from the digital elevation model using ArcMap software.
2.2.3 Population density
The population density is a key factor in risk assessment of natural disaster. It is calculated as the ratio of population to bare land area here. Some strong earthquakes occurred in mountainous areas with low population density, which posed a relatively small threat to people's lives and property (Ara, 2014). Because of the strong mobility of population, it is difficult to obtain the spatial distribution of population at the moment before earthquake occurrence. Therefore, the resident population of each county in census was applied to approximate the population distribution (Figure 2(c)). The population density is the ratio of population to area in each county. The population data were updated in 2011 and offered by China Earthquake Network Center.
2.2.4 Per capita GDP
The per capita GDP is another crucial factor in earthquake-hit population evaluation. The per capita GDP reflects the economic level of local people from one aspect, and the economic level would affect the seismic resistance ability of engineering constructions. Generally, the higher the economic level is, the stronger seismic resistance ability the constructions have. Similar to population density map, the per capita GDP of each county was applied as per capita GDP distribution map (Figure 2(d)). The per capita GDP data were updated in 2011 and offered by China Earthquake Network Center.
2.2.5 Distance to fault
The distance to fault is another significant factor related to seismic geological disasters. Generally, fractured or weak zones are located near fault bedding planes, which are susceptible to weathering and sliding (Conforti et al., 2014). The fault data were offered by China Earthquake Network Center and the distance to fault was calculated using buffers of ArcMap software (Figure 2(e)).
2.2.6 Distance to river
The distance to river can also influence the earthquake-hit population, as river erosion and soil saturation would decrease the seismic stability of slopes (Yalcin, 2008). The river data were offered by China Earthquake Network Center and the distance to river was calculated using buffers of ArcMap software (Figure 2(f)).
2.2.7 NDVI
The NDVI provides useful information for the earthquake-hit population evaluation in the Lushan earthquake. NDVI quantifies vegetation by measuring the difference between near-infrared light (strongly reflected by vegetation) and red light (absorbed by vegetation). The closer the NDVI is to +1, the better the vegetation coverage in the area and the lower the degree of urbanization. The NDVI map was obtained from Landsat 7 ETM+ satellite images acquired in 2012 from the Geospatial Data Cloud site, Computer Network Information Center, Chinese Academy of Sciences (Figure 2(g)).
2.3 Seismic influencing factors
The seismic influencing factors refer to the elements and characteristics that are directly related to earthquake occurrence, and can be rapidly reflected by seismic motion monitoring instruments after earthquake occurrence. The seismic influencing factors considered in this research contained peak ground acceleration (PGA), peak ground velocity (PGV) and distance to epicenter. The data of seismic influencing factors are shown in Table 2.
Table 2 summarizes the data details of the seismic influencing factors: PGA and PGV (China Earthquake Network Center) and distance to epicenter.
2.3.1 PGA
PGA is the most commonly used parameter to describe the seismic ground motion intensity of an earthquake (Boatwright et al., 2003; Yuan et al., 2013). PGA represents the peak value of the acceleration time-history waveform recorded on the ground surface during an earthquake. It can be considered as the maximum instantaneous force exerted by the earthquake and can effectively evaluate the intensity of seismic ground motion at different positions in space. The PGA data were recorded by the strong-motion seismograph network and can be provided shortly after earthquake occurrence. The data used in this research were supported by China Earthquake Network Center (Figure 3(a)).
2.3.2 PGV
PGV is also an important index to evaluate the intensity of seismic ground motion. The acceleration time-history wave would miss some information, such as low-frequency components. The velocity time-history wave can better record this information. Therefore, PGV data are considered to comprehensively evaluate the seismic ground motion intensity. The PGV data were supported by China Earthquake Network Center (Figure 3(b)).
2.3.3 Distance to epicenter
Distance to epicenter is a parameter that measures the relative distance between the study region and the epicenter. Previous studies have shown that the impact of an earthquake disaster gradually decreases as the distance to the epicenter increases. The distance to epicenter was calculated using buffers in ArcMap software (Figure 3(c)).
3. Spatial Correlation Characteristics Of Influencing Factors
The earthquake-hit population is related to environmental and seismic influencing factors. The spatial variability of these influencing factors leads to differences in the earthquake-hit population among counties. The Spearman rank correlation coefficient was calculated to analyze the relationship between the earthquake-hit population and the influencing factors; it can be expressed as follows in Eq. (1):
$$r_s = 1 - \frac{6\sum\limits_{i=1}^{n} D_i^{2}}{n\left(n^{2} - 1\right)}$$
where Di denotes the difference between the ranks of the earthquake-hit population and the factor for sample i, and n is the number of samples. The coefficient measures the degree of consistency and describes the strength of the monotonic relationship between the earthquake-hit population and an influencing factor (Peng, 2015; Shahaki Kenari and Celikag, 2019). The coefficient ranges from -1 to 1. When the coefficient is positive, the data are positively correlated, and the closer the absolute value of the coefficient is to 1, the stronger the correlation.
Within the study region shown in Figure 1, 1000 sampling points were randomly generated and distributed using ArcMap software. The earthquake-hit population density data and influencing factor data at sampling points were extracted to construct the database for correlation analysis.
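As an illustration of Eq. (1), the following Python sketch computes the Spearman rank correlation between factor values and earthquake-hit population density at sampling points, both with scipy and with the explicit rank-difference formula; the arrays are randomly generated stand-ins for the 1000 extracted sampling points, not the study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical stand-ins for values extracted at the 1000 sampling points.
rng = np.random.default_rng(0)
pga = rng.uniform(0, 900, size=1000)                        # factor values (e.g. PGA)
hit_density = 0.002 * pga + rng.normal(0, 0.5, size=1000)   # earthquake-hit population density

# Spearman coefficient and significance test via scipy.
r_s, p_value = spearmanr(pga, hit_density)

# The same coefficient from Eq. (1): r_s = 1 - 6*sum(D_i^2) / (n*(n^2 - 1)),
# where D_i is the difference between the ranks of the two variables.
def spearman_manual(x, y):
    rank_x = np.argsort(np.argsort(x)) + 1
    rank_y = np.argsort(np.argsort(y)) + 1
    d = (rank_x - rank_y).astype(float)
    n = len(x)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

print(r_s, p_value, spearman_manual(pga, hit_density))
```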
3.1 Correlation analysis between earthquake-hit population and environmental influencing factors
The Spearman correlation coefficients and significance test results between the earthquake-hit population and the environmental influencing factors are listed in Table 3. The significance test results were all less than 0.05, indicating that the number of samples was adequate and the correlation coefficient values were meaningful. The strongest correlation was -0.322, between earthquake-hit population density and per capita GDP, showing that in regions with high per capita GDP the earthquake-hit population was generally low. The coefficient with the smallest magnitude was -0.073. The correlation coefficients revealed that none of the environmental influencing factors had a direct linear relationship with the earthquake-hit population density; the earthquake-hit population density distribution is the result of multiple environmental factors acting together.
Table 3 lists the Spearman correlation coefficients and significance test results between the earthquake-hit population density and the environmental influencing factors; the significance values (for example 5.58×10−9 and 3.19×10−16) are all below 0.05.
Moreover, the spatial correlations between earthquake-hit population density and the environmental factors are illustrated in Figure 4. The histograms show the average earthquake-hit population density within different factor intervals, which reflects the relationship between the spatial distribution of earthquake-hit population density and the influencing factors. It can be seen from Figure 4 that, for the various factors, the average earthquake-hit population density in different intervals was scattered. However, relatively high earthquake-hit population density tends to concentrate within specific ranges of certain factors. For example, in the 2013 Lushan earthquake, the area between 500 m and 1200 m in elevation had a relatively high earthquake-hit population density (Figure 4(a)). As regards the relationship between earthquake-hit population density and distance to river, the maximum earthquake-hit population density was observed in the interval closest to the river (Figure 4(f)).
3.2 Correlation analysis between earthquake-hit population and seismic influencing factors
The Spearman correlation coefficients and significance test results between the earthquake-hit population and the seismic influencing factors are listed in Table 4. The significance test results were all less than 0.05. The correlation coefficient between earthquake-hit population density and PGA had the highest value of 0.433, showing that the earthquake-hit population density had a remarkable positive correlation with PGA: the higher the PGA, the greater the earthquake-hit population.
Table 4 lists the Spearman correlation coefficients and significance test results between the earthquake-hit population density and the seismic influencing factors.
The spatial correlations between earthquake-hit population density and the seismic influencing factors are illustrated in Figure 5. It can be observed that PGA had a stronger positive correlation with the earthquake-hit population density: the earthquake-hit population density in each interval generally increased with the value of PGA. However, when the PGA value ranged from 750 cm/s2 to 850 cm/s2, the earthquake-hit population density was relatively low (Figure 5(a)). The reason is that the area with the greatest seismic motion intensity had a low population density, so the population affected by the earthquake there was relatively small.
Correlation coefficient is a statistical index to reflect the close degree of linear correlation between variables. The results of correlation analysis indicated that there is a nonlinear relationship between the earthquake-hit population and various influencing factors in spatial distribution. The correlation coefficients imply the existence of spatial correlation between factors and disaster results. For example, in the area with greater seismic motion, the earthquake-hit population density was higher. Nevertheless, the numeric value of correlation coefficients indicates that the earthquake-hit population is a result of the complex interaction of multiple factors. It is difficult to evaluate the spatial earthquake-hit population with a linear relation.
4. Earthquake-hit Population Spatial Evaluation Using BP Neural Network
4.1 BP neural network
By imitating the information transmission principle of biological neurons, an artificial neural network has a powerful data-processing ability (Wang et al., 2016). The BP (Back Propagation) neural network is a common artificial neural network model trained by weight adjustment and is widely applied in forecasting. It can reflect the mapping relationship between inputs and outputs well and can approximate non-linear functions arbitrarily closely (Wang et al., 2018). The BP neural network consists of an input layer, a hidden layer and an output layer, and each layer comprises several neurons. The mapping relationship between the input layer and the output layer is jointly determined by the activation function and the thresholds in the hidden layer. In this research, the input layer contained the environmental and seismic influencing factors of the earthquake-hit population, and the output layer calculated the value of the earthquake-hit population density (Figure 6). Moreover, the hidden layer was constructed through model training based on the sample data of the 2013 Ms7.0 Lushan earthquake. The network building process was divided into three steps: forward calculation, error back propagation and weight update (Wen and Yuan, 2020). The sigmoid function was selected as the activation function, the gradient descent algorithm was used to find the best solution of the network, and the learning rate was 0.02.
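The following Python sketch illustrates, in simplified form, the three-layer BP network described above (sigmoid activation, plain gradient descent, learning rate 0.02); the layer sizes, stand-in data and stopping error are illustrative assumptions rather than the exact configuration used in the study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, n_hidden=13, lr=0.02, goal_error=3e-4, max_iter=50000, seed=0):
    """Three-layer BP network: forward calculation, error back propagation, weight update."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, 1));    b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for it in range(max_iter):
        # Forward calculation
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y
        mse = float(np.mean(err ** 2))
        if mse < goal_error:            # stop once the training error reaches the goal error
            break
        # Error back propagation (gradients of the mean squared error)
        d_out = 2 * err * out * (1 - out) / len(X)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Weight update by gradient descent
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return (W1, b1, W2, b2), mse, it

# Illustrative usage with random stand-in data (10 normalized influencing factors).
X = np.random.default_rng(1).random((700, 10))
y = X.mean(axis=1)              # placeholder target in [0, 1], e.g. normalized hit density
params, final_mse, iters = train_bp(X, y)
print(final_mse, iters)
```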
4.2 Sample optimization selection based on correlation characteristics
Generalization capability is an important indicator to measure the accuracy of the neural network for predicting data outside the sample. In order to effectively evaluate the spatial distribution of the earthquake-hit population in a newly occurred earthquake, it is necessary to ensure that the network has a preferable generalization capability. Training samples have a great impact on generalization capability (Partridge, 1996), so the selection of samples is optimized based on the results of correlation analysis.
In the process of network training, to unify the format of the influencing factor data and the earthquake-hit population data, the vector images were transformed into raster layer data. The number of raster-based samples was far greater than the number of influencing indicators, which could result in overfitting of the neural network. In practical applications, a subset of all samples would be randomly selected as the training set. However, randomly selected samples might lose part of the spatial characteristics of the full sample data during training, thus reducing the generalization capability and evaluation accuracy of the network. Based on the correlation characteristics between the influencing factors and the earthquake-hit population density, it was observed that, compared with other factors, per capita GDP and PGA had a stronger correlation with the earthquake-hit population density. This implied that areas with lower per capita GDP and higher PGA had a larger earthquake-hit population, which is the key study area of earthquake casualty evaluation. The per capita GDP and PGA indicators thus indirectly reflect the spatial distribution characteristics of the earthquake-hit population to some extent. Therefore, more samples were selected in areas with lower per capita GDP and higher PGA, and fewer samples were selected in areas with higher per capita GDP and lower PGA, to account for the spatial variability of the earthquake-hit population.
Figure 7 illustrates the frequency histograms of the per capita GDP and PGA factors. It can be seen that the frequency of raster cells was approximately normally distributed with respect to per capita GDP, and the frequency of raster cells decreased with increasing PGA. This indicates that the area of poor economy and intense seismic motion was much smaller than that of strong economy and weak motion. Although this area was small, it was the predominant area for disaster assessment and emergency rescue after the earthquake. The per capita GDP and PGA data were each classified into five clusters using the Natural Breaks Classification method according to their values. The Natural Breaks Classification method is an extensively applied clustering method that maximizes the internal similarity of each cluster and the difference between clusters. The proportion of samples was determined on the basis of the average value of the clusters, as listed in Table 5: the proportion of samples was proportional to the cluster average (see the sketch below Table 5). A total of 1000 sample points were generated in the study area to extract attribute values from the raster layers. The distribution comparison between random sample points and optimizing sample points is shown in Figure 8. According to the result of the clustering analysis, the sample points were denser in areas with low per capita GDP values and high PGA values. The samples were thus optimized based on the numerical and spatial characteristics of the per capita GDP and PGA indicators.
Table 5 lists the ranges of the per capita GDP clusters (×10^4 RMB) and the PGA clusters (cm/s2) obtained with the Natural Breaks Classification method, together with the number of sample points assigned to each cluster; the lowest PGA cluster covers 0-114.66 cm/s2.
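The Python sketch below illustrates the cluster-proportional sampling idea referred to above: raster cells are grouped into five PGA clusters (quantile breaks are used here as a simple stand-in for the Natural Breaks Classification), and the number of sample points drawn from each cluster is made proportional to the cluster average, so that high-PGA areas receive more samples; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical raster values of PGA (cm/s^2) flattened to 1-D.
pga = rng.gamma(shape=2.0, scale=100.0, size=50_000)

# Five clusters; quantile breaks serve as a simple stand-in for the
# Natural Breaks Classification described in the text.
breaks = np.quantile(pga, [0.2, 0.4, 0.6, 0.8])
cluster = np.digitize(pga, breaks)              # cluster index 0..4 for each cell

# Number of sample points per cluster proportional to the cluster average,
# so high-PGA clusters receive more of the 1000 sample points.
cluster_means = np.array([pga[cluster == k].mean() for k in range(5)])
n_per_cluster = np.round(1000 * cluster_means / cluster_means.sum()).astype(int)

sample_idx = np.concatenate([
    rng.choice(np.flatnonzero(cluster == k), size=n, replace=False)
    for k, n in enumerate(n_per_cluster)
])
print(n_per_cluster, len(sample_idx))
```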
4.3 Earthquake-hit population evaluation results
Two networks, based on the random samples and the optimizing samples respectively, were trained. One hidden layer is sufficient for most applications (Aghamohammadi et al. 2013); therefore, a three-layer network containing one input layer, one hidden layer and one output layer was adopted. In order to ensure the objectivity of the evaluation, 70% of the sample data were used as the training set and 30% as the testing set. The testing set was used to test the prediction ability of the trained network. The normalized goal error of the training set was 0.0003; when the error of the training set during the iterations fell below 0.0003, the training process stopped.
Figure 9 shows the number of iterations and the Root Mean Square Error (RMSE) of the two networks based on random samples and optimizing samples. The mathematical expression of RMSE is shown in the following equation:
$${\text{RMSE=}}\sqrt {\frac{{\text{1}}}{n}\sum\limits_{{i{\text{=1}}}}^{n} {{{\left( {{t_i} - {y_i}} \right)}^2}} }$$
where ti is the evaluation result of the earthquake-hit population and yi is the actual value. Networks with different numbers of neurons in the hidden layer were trained. In Figure 9, the horizontal axis is the number of neurons, the left vertical axis is the number of iterations required for the training-set error to reach the goal error, and the right vertical axis is the RMSE of the testing set. The blue curves represent the results from random samples and the red curves those from optimizing samples. It can be observed that, for networks with different numbers of neurons, the number of iterations based on the optimizing samples was smaller than that based on random samples, indicating that the optimizing samples accelerated the convergence. Meanwhile, comparing the RMSE of the networks shows that the RMSE of the optimizing samples was smaller than that of the random samples, except when the number of neurons was 13, in which case the RMSE of the two networks was close, as illustrated in Figure 9. The RMSE measures the difference between the estimated data and the testing set, and reflects the generalization capability and evaluation accuracy of the network for new data. This implies that the earthquake-hit population evaluation based on optimizing samples not only converged faster but also had better generalization capability and prediction accuracy compared with random samples.
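A minimal sketch of the RMSE computation used to compare the two networks is given below; the prediction and actual arrays are hypothetical testing-set values.

```python
import numpy as np

def rmse(t, y):
    """Root mean square error between evaluation results t and actual data y."""
    t = np.asarray(t, dtype=float); y = np.asarray(y, dtype=float)
    return float(np.sqrt(np.mean((t - y) ** 2)))

# Hypothetical testing-set predictions from two trained networks.
actual = np.array([0.12, 0.40, 0.33, 0.08, 0.55])
pred_random_samples = np.array([0.20, 0.31, 0.40, 0.15, 0.47])
pred_optimized_samples = np.array([0.14, 0.37, 0.35, 0.10, 0.52])

print(rmse(pred_random_samples, actual), rmse(pred_optimized_samples, actual))
```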
4.4 Verification of network generalization capability
In order to verify the evaluation accuracy of the trained networks obtained from the samples of the 2013 Ms7.0 Lushan earthquake, they were applied to evaluate the earthquake-hit population in the 2017 Ms7.0 Jiuzhaigou earthquake. The Jiuzhaigou Ms7.0 earthquake happened on August 8, 2017, and its epicenter was located at 33°12'N, 103°49'E, in Jiuzhaigou County, Sichuan Province, China, approximately 330 km away from the epicenter of the Lushan earthquake. The focal depth of the Jiuzhaigou earthquake was 20 km. The Jiuzhaigou earthquake killed 25 people, injured 525 people and damaged about 70000 houses, and about 220000 people were affected. The area severely affected by the earthquake mainly included Hongyuan County, Jiuzhaigou County, Pingwu County, Songpan County and Zoige County. The Jiuzhaigou earthquake and the Lushan earthquake had the same magnitude, and their affected areas have similar geographical conditions. Therefore, the Jiuzhaigou earthquake was selected to verify the generalization capability and effectiveness of the earthquake-hit population evaluation. The earthquake-hit population density in each county-level administrative region, collected by the Sichuan provincial government, is illustrated in Figure 10.
The networks based on optimizing samples and random samples of Lushan earthquake were used to evaluate the earthquake-hit population under Jiuzhaigou earthquake separately. The output of network was a raster layer with earthquake-hit population density data varying with spatial position. The average value of the earthquake-hit population density raster data within the county area was calculated as the earthquake-hit population density of this county. The earthquake-hit population is the product of earthquake-hit population density and county area.
The actual data and evaluation results of the earthquake-hit population are listed in Table 6. There were deviations between the estimated and actual values in each area. For the total earthquake-hit population, the evaluation result of the optimizing samples was much closer to the actual data than that of the random samples: the actual earthquake-hit population in the Jiuzhaigou earthquake was 216597, while the evaluation results were 198362 for the optimizing samples and 347207 for the random samples. The mean absolute error (MAE) was calculated to assess the evaluation results, and its expression is as follows:
Table 6 lists, for Hongyuan County, Jiuzhaigou County, Pingwu County, Songpan County and Zoige County, the actual data and the evaluation results of the earthquake-hit population obtained with the optimizing samples and with the random samples, together with the mean absolute error.
$${\text{MAE=}}\frac{{\text{1}}}{n}\left( {\sum\limits_{{i=1}}^{n} {\left| {{t_i} - {y_i}} \right|} } \right)$$
The MAE of the earthquake-hit population evaluation was 18357 for the optimizing samples and 26121 for the random samples. The comparison of the actual data and the evaluation results in each county is shown in Figure 11. The histogram represents the actual data and the evaluation results, and the curve represents the error rate of the evaluation results. The expression of the error rate is as follows:
$${\text{Error rate=}}\frac{{\left| {t - y} \right|}}{t}$$
The error rate is defined as the ratio of the absolute difference between the actual data and the evaluation result to the actual data. It can be seen that for Jiuzhaigou County, Pingwu County and Songpan County, where the earthquake-hit population was relatively large, the error rate of the evaluation results based on the BP neural network was less than 100%. For Hongyuan County and Zoige County, the error rate was relatively large: the maximum error rate based on the random samples reached 1049.84%, and that based on the optimizing samples reached 490.14%. In each area, the error rate of the optimizing samples was smaller than that of the random samples, so the earthquake-hit population evaluation based on the optimizing samples gave more accurate predictions for new data. This reveals that optimizing samples can effectively offer a more accurate evaluation of the earthquake-hit population.
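The sketch below computes the MAE and the county-wise error rate defined above; the county values are hypothetical placeholders, not the figures of Table 6.

```python
import numpy as np

def mae(predicted, actual):
    """Mean absolute error between evaluation results and actual data."""
    return float(np.mean(np.abs(np.asarray(predicted, float) - np.asarray(actual, float))))

def error_rate(actual, predicted):
    """County-wise error rate: |actual - predicted| / actual (divided by the actual data)."""
    actual = np.asarray(actual, float); predicted = np.asarray(predicted, float)
    return np.abs(actual - predicted) / actual

# Hypothetical county-level earthquake-hit population values (actual vs. evaluated);
# the real figures are listed in Table 6 of the paper.
actual = np.array([5000.0, 80000.0, 60000.0, 50000.0, 4000.0])
evaluated = np.array([20000.0, 70000.0, 55000.0, 45000.0, 9000.0])

print(mae(evaluated, actual))
print(error_rate(actual, evaluated))
```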
5. Conclusions
In the present study, the spatial distribution of the earthquake-hit population is evaluated based on the correlation characteristics of influencing factors and a BP neural network. The main conclusions are as follows:
1) The influencing factors of the earthquake-hit population are classified into environmental and seismic factors. Elevation, slope angle, population density, per capita GDP, distance to fault, distance to river and NDVI are considered as environmental factors, and PGA, PGV and distance to epicenter are considered as seismic factors. The correlation analysis between the earthquake-hit population and the influencing factors indicates that per capita GDP and PGA have a stronger correlation with the earthquake-hit population in the Lushan earthquake, and that the relationship between the earthquake-hit population and the various influencing factors is strongly nonlinear.
2) The samples have a significant impact on the generalization capability and evaluation accuracy of the neural network. The samples are optimized according to the spatial distribution of per capita GDP and PGA based on the correlation characteristics: in areas with lower per capita GDP and higher PGA, more sample points are generated and distributed according to the correlation between per capita GDP, PGA and the earthquake-hit population. Compared with random samples, the optimizing samples can effectively improve the convergence speed and generalization capability of the trained network. For networks with different numbers of neurons, the number of iterations based on the optimizing samples is smaller than that based on random samples, and the network trained by the optimizing samples, which considers the spatial characteristics, has more accurate prediction ability.
3) A BP neural network is established using the influencing factors as input indicators based on the data from the Lushan earthquake. The trained network is applied to the Jiuzhaigou earthquake to test its generalization capability and prediction accuracy. The results show that the neural network has good prediction accuracy for the spatial evaluation in the study area: the total evaluated earthquake-hit population of the five counties affected by the Jiuzhaigou earthquake based on the optimizing samples is 198362 people, while the actual value is 216597 people. The BP neural network is able to construct complex nonlinear relations to evaluate the earthquake-hit population. The trained network can offer a spatial evaluation of the earthquake-hit population, as well as other earthquake disaster information, quickly after the occurrence of an earthquake, providing a significant reference for emergency rescue.
This work is supported by the National Key R&D Program of China (2018YFC1504504).
Aghamohammadi, H., Mesgari, M. S., Mansourian, A. & Molaei, D. Seismic human loss estimation for an earthquake disaster using neural network. International Journal of Environmental Science and Technology, 10, 931–939 https://doi.org/10.1007/s13762-013-0281-5 (2013).
Ara, S. Impact of Temporal Population Distribution on Earthquake Loss Estimation: A Case Study on Sylhet, Bangladesh. International Journal of Disaster Risk Science, 5, 296–312 https://doi.org/10.1007/s13753-014-0033-2 (2014).
Basharat, M., Ali, A., Jadoon, I. A. K. & Rohn, J. Using PCA in evaluating event-controlling attributes of landsliding in the 2005 Kashmir earthquake region, NW Himalayas, Pakistan. Nat. Hazards, 81, 1999–2017 https://doi.org/10.1007/s11069-016-2172-9 (2016).
Boatwright, J. et al. The Dependence of PGA and PGV on Distance and Magnitude Inferred from Northern California ShakeMap Data. Bulletin of the Seismological Society of America, 93 (5), 2043–2055 https://doi.org/10.1785/0120020201 (2003).
Choubin, B. et al. Earth fissure hazard prediction using machine learning models. Environ. Res, 179, 108770 https://doi.org/10.1016/j.envres.2019.108770 (2019).
Conforti, M., Pascale, S., Robustelli, G. & Sdao, F. Evaluation of prediction capability of the artificial neural networks for mapping landslide susceptibility in the Turbolo River catchment (northern Calabria, Italy). Catena, 113, 236–250 https://doi.org/10.1016/j.catena.2013.08.006 (2014).
Erdik, M., Sesetyan, K., Demircioglu, M. B., Hanclar, U. & Zulfikar, C. Rapid earthquake loss assessment after damaging earthquakes. Soil Dynamics and Earthquake Engineering, 31 (2), 247–266 https://doi.org/10.1016/j.soildyn.2010.03.009 (2011).
Gao, S. et al. Combined use of principal component analysis and artificial neural network approach to improve estimates of PM2.5 personal exposure: A case study on older adults. Science of the Total Environment, 726, 138533 https://doi.org/10.1016/j.scitotenv.2020.138533 (2020).
Gul, M. & Guneri, A. F. An artificial neural network-based earthquake casualty estimation model for Istanbul city. Nat. Hazards, 84, 2163–2178 https://doi.org/10.1007/s11069-016-2541-4 (2016).
Hou, P., Jolliet, O., Zhu, J. & Xu, M. Estimate ecotoxicity characterization factors for chemicals in life cycle assessment using machine learning models. Environmental International, 135, 105393 https://doi.org/10.1016/j.envint.2019.105393 (2020).
Huang, C. & Huang, Y. An information diffusion technique to assess integrated hazard risks. Environ. Res, 161, 104–113 https://doi.org/10.1016/j.envres.2017.10.037 (2018).
Huang, R. Q. & Li, W. L. Post-earthquake landsliding and long-term impacts in the Wenchuan earthquake area, China. Eng. Geol, 182, 111–120 https://doi.org/10.1016/j.enggeo.2014.07.008 (2014).
Huang, X., Song, J. & Jin, H. The casualty prediction of earthquake disaster based on Extreme Learning Machine method. Nat. Hazards, 102, 873–886 https://doi.org/10.1007/s11069-020-03937-6 (2020).
Huang, X., Zhou, Z. & Wang, S. The prediction model of earthquake casuailty based on robust wavelet v-SVM. Nat. Hazards, 77, 717–732 https://doi.org/10.1007/s11069-015-1620-2 (2015).
Jena, R. et al. Earthquake hazard and risk assessment using machine learning approaches at Palu, Indonesia. Science of the Total Environment, 749, 141582 https://doi.org/10.1016/j.scitotenv.2020.141582 (2020).
Lara, A., Garcia, X., Bucci, F. & Ribas, A. What do people think about the flood risk? An experience with the residents of Talcahuano city. Chile. Natural Hazards, 85, 1557–1575 https://doi.org/10.1007/s11069-016-2644-y (2016).
Li, X. et al. Forecasting of bioaerosol concentration by a Back Propagation neural network model. Science of the Total Environment, 698, 134315 https://doi.org/10.1016/j.scitotenv.2019.134315 (2020).
Lou, W. P., Chen, H. Y., Qiu, X. F., Tang, Q. Y. & Zheng, F. Assessment of economic losses from tropical cyclone disasters based on PCA-BP. Nat. Hazards, 60, 819–829 https://doi.org/10.1007/s11069-011-9881-x (2012).
Luo, L., van Lombardo, L., Westen, C., Pei, X. & Huang, R. From scenario-based seismic hazard to scenario-based landslide hazard: rewinding to the past via statistical simulations. Stochastic Environmental Research and Risk Assessment. https://doi.org/10.1007/s00477-020-01959-x (2021)
Luo, Z., Huang, F. & Liu, H. PM2.5 concentration estimation using convolutional neural network and gradient boosting machine. Journal of Environmental Sciences, 98, 85–93 https://doi.org/10.1016/j.jes.2020.04.042 (2020).
Partridge, D. Network generalization differences quantified. Neural Netw, 9 (2), 263–271 https://doi.org/10.1016/0893-6080(95)00110-7 (1996).
Peng, L. et al. Landslide susceptibility mapping based on rough set theory and support vector machines: a case of the Three Gorges area, China. Geomorphology, 204, 287–301 https://doi.org/10.1016/j.geomorph.2013.08.013 (2014).
Peng, Y. Regional earthquake vulnerability assessment using a combination of MCDM methods. Annals of Operations Research, 234, 95–110 https://doi.org/10.1007/s10479-012-1253-8 (2015).
Pourghasemi, H. R., Gayen, A., Panahi, M., Rezaie, F. & Blaschke, T. Multi-hazard probability assessment and mapping in Iran. Science of the Total Environment, 692, 556–571 https://doi.org/10.1016/j.scitotenv.2019.07.203 (2019).
Rossetto, T. et al. The Indian Ocean tsunami of December 26, 2004: observations in Sri Lanka and Thailand. Nat. Hazards, 42, 105–124 https://doi.org/10.1007/s11069-006-9064-3 (2007).
Saha, S. et al. Prediction of landslide susceptibility in Rudraprayag, India using novel ensemble of conditional probability and boosted regression tree-based on cross-validation method. Science of the Total Environment, https://doi.org/10.1016/j.scitotenv.2020.142928 (2020).
Santos-Reyes, J. & Gouzeva, T. Mexico city's residents emotional and behavioural reactions to the 19 September 2017 earthquake. Environ. Res, 186, 109482 https://doi.org/10.1016/j.envres.2020.109482 (2020).
Shahaki Kenari, M. & Celikag, M. Correlation of Ground Motion Intensity Measures and Seismic Damage Indices of Masonry-Infilled Steel Frames. Arabian Journal for Science and Engineering, 44, 5131–5150 https://doi.org/10.1007/s13369-019-03719-8 (2019).
Shimada, N.. Outline of the Great East Japan Earthquake. In: Urabe J., Nakashizuka T. (eds) Ecological Impacts of Tsunamis on Coastal Ecosystems.Ecological Research Monographs. Springer, Tokyo. https://doi.org/10.1007/978-4-431-56448-5_1 (2016)
Si, M. & Du, K. Development of a predictive emissions model using a gradient boosting machine learning method. Environmental Technology & Innovation, 20, 10128 https://doi.org/10.1016/j.eti.2020.101028 (2020).
Sun, L., Chen, J. & Li, T. A MODIS-based method for detecting large-scale vegetation disturbance due to natural hazards: a case study of Wenchuan earthquake stricken regions in China. Stochastic Environmental Research and Risk Assessment, 30, 2243–2254 https://doi.org/10.1007/s00477-015-1160-z (2016).
Tehrany, M. S., Pradhan, B. & Jebur, M. N. Flood susceptibility analysis and its verification using a novel ensemble support vector machine and frequency ratio method. Stochastic Environmental Research and Risk Assessment, 29, 1149–1165 https://doi.org/10.1007/s00477-015-1021-9 (2015).
Wang, S. & Li, D. ArcGIS-based system analysis of building damage from the Ms7.0 Lushan earthquake. Earthquake Research in Sichuan, 2 (151), 1–5 (in Chinese) https://doi.org/10.13716/j.cnki.1001-8115.2014.02.001 (2014).
Wang, S., Zhang, N., Wu, L. & Wang, Y. Wind speed forecasting based on the hybrid ensemble empirical mode decomposition and GA-BP neural network method. Renewable Energy, 94, 629–636 https://doi.org/10.1016/j.renene.2016.03.103 (2016).
Wang, X. P. et al. Estimation of soil salt content (SSC) in the Ebinur Lake Wetland National Nature Reserve (ELWNNR), Northwest China, based on a Bootstrap-BP neural network model and optimal spectral indices. Science of the Total Environment, 615, 918–930 https://doi.org/10.1016/j.scitotenv.2017.10.025 (2018).
Wen, L. & Yuan, X. Forecasting CO2 emissions in China's commercial department through a BP neural network based on random forest and PSO. Science of the Total Environment, 718, 137194 https://doi.org/10.1016/j.scitotenv.2020.137194 (2020).
Wu, Q., Wu, J. & Gao, M. Correlation analysis of earthquake impacts on a nuclear power plant cluster in Fujian province. China. Environmental Research, 187, 109689 https://doi.org/10.1016/j.envres.2020.109689 (2020).
Xu, C. et al. Landslides triggered by the 20 April 2013 Lushan, China, Mw 6.6 earthquake from field investigations and preliminary analyses. Landslides, 12, 365–385 https://doi.org/10.1007/s10346-014-0546-1 (2015).
Yalcin, A. GIS-based landslide susceptibility mapping using analytical hierarchy process and bivariate statistics in Ardesen (Turkey): comparisons of results and confirmations. Catena, 72, 1–12 https://doi.org/10.1016/j.catena.2007.01.003 (2008).
Yang, J., Chen, J., Liu, H. & Zheng, J. Comparison of two large earthquakes in China: the 2008 Sichuan Wenchuan Earthquake and the 2013 Sichuan Lushan Earthquake. Nat. Hazards, 73, 1127–1136 https://doi.org/10.1007/s11069-014-1121-8 (2014).
Yang, Z. H. et al. Urgent landslide susceptibility assessment in the 2013 Lushan earthquake-impacted area, Sichuan Province, China. Nat. Hazards, 75, 2467–2487 https://doi.org/10.1007/s11069-014-1441-8 (2015).
Yuan, R. M. et al. Density Distribution of Landslides Triggered by the 2008 Wenchuan Earthquake and their Relationships to Peak Ground Acceleration. Bulletin of the Seismological Society of America, 103 (4), 2344–2355 https://doi.org/10.1785/0120110233 (2013).
Zhang, Z., Fleurisson, J. A. & Pellet, F. The effects of slope topography on acceleration amplification and interaction between slope topography and seismic input motion. Soil Dynamics and Earthquake Engineering, 113, 420–431 https://doi.org/10.1016/j.soildyn.2018.06.019 (2018).
Zhao, J. et al. International journal of environmental research and public health, 15 (6), 1111 https://doi.org/10.3390/ijerph15061111 (2018).
No competing interests reported.
Asia Pacific Journal on Computational Engineering
Modeling the spread of computer virus via Caputo fractional derivative and the beta-derivative
Ebenezer Bonyah1,
Abdon Atangana2 &
Muhammad Altaf Khan3
Asia Pacific Journal on Computational Engineering volume 4, Article number: 1 (2017)
The concept of information science is inevitable in human development, as science and technology have become the driving force of all economies. The interaction among human beings during epidemics is vital and can be studied using mathematical principles. In this study, a well-recognized model of computer virus propagation by Piqueira et al. (J Comput Sci 1:31−34, 2005) and Piqueira and Araujo (Appl Math Comput 2(213):355−360, 2009) is investigated through the Caputo and beta-derivatives. A brief stability analysis of the extended model is discussed. The analytical solution of the extended model is obtained via the Laplace perturbation method and the homotopy decomposition technique. A step-by-step summary of each iteration method for the extended model is presented. Using the parameters in Piqueira and Araujo (Appl Math Comput 2(213):355−360, 2009), some numerical simulation results are presented.
The idea of the computer virus emerged around 1980 and has continued to threaten society ever since. During these early stages, the threat posed by such viruses was minimal [1]. Modern civilized societies are being automated with computer applications that make life easier in areas such as education, health, transportation, agriculture and many more. Following recent developments in complex computer systems, the trend has shifted towards sophisticated computer virus dynamics that are difficult to deal with. In 2001, for example, the cost associated with computer viruses was estimated at 10.7 billion United States dollars for the first quarter alone [1]. Consequently, a comprehensive understanding of computer virus dynamics has become indispensable to researchers, considering the role played by this wonderful device. To ensure the safety and reliability of computers, the virus burden can be tackled on two levels: microscopic and macroscopic [2−6].
The microscopic level has been investigated by [3], who developed an anti-virus program that removes a virus from the computer system when it is detected. The program is capable of upgrading itself to ensure that new viruses can be dealt with when they attack a computer. The characteristics of this program are similar to those of vaccination against a disease. Such programs are not able to guarantee the safety of a computer network system, and good future predictions remain difficult. The macroscopic aspect has received tremendous attention since 1980 in the area of modeling the spread of the virus and determining its long-term behavior in the network system [4]. The concept of epidemiological modeling of disease has been applied to study the spread of computer viruses on the macroscopic scale [6−8].
The reality of nature can possibly be better understood from a fractional calculus perspective. Considerable attention has been devoted to fractional differential equations, motivated by the fact that a fractional-order system recovers the corresponding integer-order system as the order tends to an integer. Applications of fractional differential equations in modeling processes have the merit of capturing nonlocal properties [9–11]. The model proposed in [10] is a deterministic one and lacks hereditary and memory effects; therefore, it cannot adequately describe the underlying processes.
In this paper, we present the fractional-order derivative and obtain analytic numeric solution of the model presented in [10]. The rationale behind the application of fractional derivatives can also be ascertained from some of the current papers published on mathematical modeling [12–16]. In addition to this, the practical implication of fractional derivative can be established in [17].
Model formulation
In this study, we take into account the model proposed by [10]. In their study, the total population of this model is denoted by T and is subdivided into four groups. S denotes the non-infected computers capable of being infected after making contact with an infected computer. A denotes the non-infected computers equipped with antivirus. I denotes the infected computers capable of infecting non-infected computers, and R denotes the computers removed from the network, whether due to infection or not. The recruitment rate of computers into the non-infected class is denoted by N, and \(\mu\) is the proportion coefficient for the mortality rate not attributable to the virus. \(\beta\) is the infection rate, with new infections proportional to the product SI. The conversion of susceptible computers into antidotal ones is proportional to the product SA, with rate \(\alpha _{SA}\) (written \(\alpha _{AS}\) in the equations below). The conversion of infected computers into antidotal ones in the network is proportional to the product IA, with rate \(\alpha _{IA}\) (\(\alpha _{AI}\) below). The rate at which removed computers are restored to the susceptible class is represented by \(\sigma\), and \(\delta\) denotes the rate at which the virus renders computers useless and removes them from the system.
The mathematical model under consideration here is given as:
$$\begin{aligned} \left\{ \begin{array}{l} \frac{{{\text{d}}S}}{{{\text{d}}t}} = N - \alpha _{AS}SA - \beta SI - \mu S + \sigma R, \\ \frac{{{\text{d}}I}}{{{\text{d}}t}} = \beta SI - \alpha _{AI} AI - \delta I - \mu I, \\ \frac{{{\text{d}}R}}{{{\text{d}}t}} = \delta I - \sigma R - \mu R, \\ \frac{{{\text{d}}A}}{{{\text{d}}t}} = \alpha _{AS} SA + \alpha _{AI} AI - \mu A. \\ \end{array} \right. \end{aligned}$$
Here, the recruitment rate is taken to be \(N=0,\) indicating that no new computer enters the system during the propagation of the virus. This is because, in reality, the transfer of the virus is faster than the addition of new computers to the system. The same reasoning applies to the choice of \(\mu =0\), taking into account the fact that the obsolescence time of a computer is much longer than the time over which the virus action manifests itself. Accordingly, equation system (1) is reformulated as follows:
$$\begin{aligned} \left\{\begin{array}{l} \frac{{{\text{d}}S}}{{{\text{d}}t}} = - \alpha _{AS} SA - \beta SI + \sigma R, \\ \frac{{{\text{d}}I}}{{{\text{d}}t}} = \beta SI - \alpha _{AI} {AI} - \delta I, \\ \frac{{{\text{d}}R}}{{{\text{d}}t}} = \delta I - \sigma R, \\ \frac{{{\text{d}}A}}{{{\text{d}}t}} = \alpha _{AS} SA + \alpha _{AI} AI. \\ \end{array} \right. \end{aligned}$$
In this paper, we explore the concept of fractional derivatives together with other recently proposed derivatives, and we examine this model in the context of both the Caputo fractional derivative and the beta-derivative. Consequently, Eq. (2) can be transformed into the following:
$$\begin{aligned} \left\{ \begin{array}{l} _0^A D_t^\alpha S(t) = - \alpha _{AS} SA - \beta SI + \sigma R, \\ _0^A D_t^\alpha I(t) = \beta SI - \alpha _{AI} AI - \delta I, \\ _0^A D_t^\alpha R(t) = \delta I - \sigma R, \\ _0^A D_t^\alpha A(t) = \alpha _{AS} SA + \alpha _{AI} AI. \\ \end{array} \right. \end{aligned}$$
where \(_0^A D_t^\alpha\) represents either the Caputo derivative or the new derivative called the beta-derivative. In the next section, some background on the use of the fractional and beta-derivatives will be presented. The basic aim of this study is to explore both fractional and beta-derivatives for modeling an epidemiological problem in computer networks. Fractional derivatives are noted for handling non-local problems and may be appropriate for epidemiological issues. A local derivative with fractional order, however, is an indispensable tool for numerical simulations, and therefore such a derivative is also presented in this study to model the propagation of a computer virus in a network.
This establishes the invariance of the region \(\Omega\). We conclude from this result that it is sufficient to study the dynamics of (1) in \(\Omega\); in this respect, the model can be regarded as epidemiologically and mathematically well-posed [18].
Basic concept about the beta-derivative and Caputo derivative
Definition 1
Let f be a function, such that, \(f:\left[ {a,\infty } \right) \rightarrow \mathbb {R}.\) Then, the beta-derivative is expressed as follows:
$$\begin{aligned} _0^A D_x^\beta \left( {f(x)} \right) = \mathop {\lim }\limits _{\varepsilon \rightarrow 0} \frac{{f\left( {x + \varepsilon \left( {x + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \right) - f(x)}}{\varepsilon } \end{aligned}$$
for all \(x \geqslant a,\beta \in \left( {0,1} \right] .\) If the limit above exists, f is said to be beta-differentiable. It can be noted that the above definition does not depend on the interval stated. If the function is differentiable, then the definition given at a point zero is different from zero.
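For a differentiable f, the difference quotient in Definition 1 reduces to \(\left(x + \frac{1}{\varGamma(\beta)}\right)^{1-\beta} f'(x)\), since \(f(x+\varepsilon h) - f(x) \approx \varepsilon h f'(x)\) for small \(\varepsilon\). The short Python sketch below checks this numerically for an arbitrary test function; the function, the point x and the value of \(\beta\) are illustrative choices, not taken from the paper.

```python
import math

def beta_derivative_numeric(f, x, beta, eps=1e-8):
    """Finite-difference approximation of the beta-derivative from Definition 1."""
    h = (x + 1.0 / math.gamma(beta)) ** (1.0 - beta)
    return (f(x + eps * h) - f(x)) / eps

# Arbitrary differentiable test function f(x) = x^3 and its ordinary derivative.
f = lambda x: x ** 3
fprime = lambda x: 3 * x ** 2

x, beta = 2.0, 0.7
closed_form = (x + 1.0 / math.gamma(beta)) ** (1.0 - beta) * fprime(x)
print(beta_derivative_numeric(f, x, beta), closed_form)
```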
Theorem 1
Assume that f and g are two beta-differentiable functions with \(\beta \in \left( {0,1} \right]\) and \(g \ne 0;\) then, the following relations hold:
\(_0^A D_x^\alpha \left( {af(x) + bg(x)} \right) = a\,_0^A D_x^\alpha \left( {f(x)} \right) + b\,_0^A D_x^\alpha \left( {g(x)} \right)\) for all real numbers a and b,
\(_0^A D_x^\alpha (c) = 0\) for c any given constant,
\(_0^A D_x^\alpha \left( {f(x)g(x)} \right) = g(x)_0^A D_x^\alpha \left( {f(x)} \right) + f(x)_0^A D_x^\alpha \left( {g(x)} \right)\)
\(_0^A D_x^\alpha \left( {\frac{{f(x)}}{{g(x)}}} \right) = \frac{{g(x)\,_0^A D_x^\alpha \left( {f(x)} \right) - f(x)\,_0^A D_x^\alpha \left( {g(x)} \right) }}{{g^2 (x)}}\)
The proofs of the above relations are identical to the one in [19].
Assume that \(f:\left[ {a,\infty } \right) \rightarrow \mathbb {R}\) is differentiable and also beta-differentiable, and let g be a differentiable function defined on the range of f; then a chain rule analogous to the classical one holds for the beta-derivative. The inverse operator of the beta-derivative, the beta-integral, is defined as:
$$\begin{aligned} _0^A I_x^\beta \left( {f(x)} \right) = \int \limits _a^x {\left( {t + \frac{1}{{\varGamma (\beta )}}} \right) } ^{\beta - 1} f(t)\,{\text{d}}t. \end{aligned}$$
The above operator is referred to as the inverse operator of the proposed fractional derivative. We shall present the principle behind this statement using the following theorem.
\(_0^A D_x^\alpha \left[ {_0^A I_x^\beta f(x)} \right] = f(x)\) for all \(x \geqslant a\) with f a given continuous and differentiable function.
The Caputo fractional derivative of a differentiable function is expressed as:
$$\begin{aligned} D_x^\alpha \left( {f(x)} \right) = \frac{1}{{\varGamma (n - \alpha )}}\int \limits _0^x {(x - t)^{n - \alpha - 1} } \left( {\frac{{\text{d}}}{{{\text{d}}t}}} \right) ^n f(t)\,{\text{d}}t,\quad n - 1 < \alpha \leqslant n. \end{aligned}$$
The properties behind the use of the Caputo derivative can be established in [14–16, 19]. Given all the information discussed, we shall introduce in the subsequent section the analysis of this model.
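As a quick illustration of the Caputo definition, the sketch below numerically evaluates the Caputo derivative of f(x) = x for an order \(0 < \alpha < 1\) and compares it with the known closed form \(x^{1-\alpha}/\varGamma(2-\alpha)\); the test function and the order are arbitrary choices.

```python
import math
from scipy.integrate import quad

def caputo_derivative(fprime, x, alpha):
    """Caputo derivative of order 0 < alpha < 1 at x:
    (1/Gamma(1-alpha)) * integral_0^x (x - s)^(-alpha) * f'(s) ds."""
    integrand = lambda s: fprime(s) * (x - s) ** (-alpha)
    value, _ = quad(integrand, 0.0, x)
    return value / math.gamma(1.0 - alpha)

# Test function f(x) = x, whose Caputo derivative is x^(1-alpha)/Gamma(2-alpha).
fprime = lambda s: 1.0
x, alpha = 1.5, 0.6
print(caputo_derivative(fprime, x, alpha), x ** (1 - alpha) / math.gamma(2 - alpha))
```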
Analysis of the mathematical model
This section is devoted to the stability analysis of the model. The disease-free and endemic equilibrium points are established. To determine the equilibrium points, we make the assumption that the system does not depend on time; hence,
$$\begin{aligned} _0^A D_t^\alpha S(t) = \,_0^A D_t^\alpha I(t) = \,_0^A D_t^\alpha R(t) = \,_0^A D_t^\alpha A(t) = 0. \end{aligned}$$
$$\begin{aligned} \left\{ \begin{array}{l} 0 = - \alpha _{AS} S^* A^* - \beta S^* I^* + \sigma R^* \\ 0 = \beta S^* I^* - \alpha _{AI} A^* I^* - \delta I^* \\ 0 = \delta I^* - \sigma R^* \\ 0 = \alpha _{AS} S^* A^* + \alpha _{AI} A^* I^* \\ \end{array} \right. \end{aligned}$$
Solving the above system, we obtain
$$\begin{aligned} S^* = \frac{\delta }{\beta };\quad I^* = \frac{{T - \delta /\beta }}{{1 + \delta /\sigma }};\quad R^* = \frac{{T - \delta /\beta }}{{1 + \sigma /\delta }};\quad A^* = 0. \end{aligned}$$
Hence, the disease-free equilibria are given as:
$$\begin{aligned} E_1 = \left( {S,I,R,A} \right) = \left( {0,0,0,T} \right) ; \end{aligned}$$
$$\begin{aligned} E_2 = \left( {S,I,R,A} \right) = \left( {T,0,0,0} \right) . \end{aligned}$$
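As a cross-check of the equilibria above, the sketch below solves the stationary system (8) symbolically with sympy, together with the conservation relation S + I + R + A = T that is used implicitly when the equilibria are written in terms of the total population T; the symbol names are generic placeholders.

```python
import sympy as sp

S, I, R, A, T = sp.symbols('S I R A T')
a_AS, a_AI, beta, delta, sigma = sp.symbols('alpha_AS alpha_AI beta delta sigma', positive=True)

equations = [
    -a_AS * S * A - beta * S * I + sigma * R,   # dS/dt = 0
    beta * S * I - a_AI * A * I - delta * I,    # dI/dt = 0
    delta * I - sigma * R,                      # dR/dt = 0
    a_AS * S * A + a_AI * A * I,                # dA/dt = 0
    S + I + R + A - T,                          # conservation of the total population
]

solutions = sp.solve(equations, [S, I, R, A], dict=True)
for sol in solutions:
    print(sol)
```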
The stability analysis of this model has been extensively dealt with in [10]. The next section will be concentrated on an approximate solution based on the two analytical techniques for each situation.
Analysis of approximate solutions
One of the most challenging tasks in systems of non-linear fractional differential equations is obtaining exact analytical solutions. This explains why, in recent times, much attention has been devoted to techniques that provide asymptotic (approximate) solutions in such situations. We refer to some recent, widely used and efficient techniques on this subject, for instance the decomposition method [12], the Sumudu homotopy perturbation method [20], the Adomian decomposition method [11, 21], the homotopy perturbation method [19, 22, 23], the Laplace homotopy perturbation method [24], and the homotopy decomposition method. In this study, we make use of two of these techniques, specifically the Laplace homotopy perturbation method and the homotopy decomposition method. The homotopy decomposition method will be employed to solve the model with the beta-derivative, and the Laplace homotopy perturbation method will be used to solve the system with the Caputo derivative.
Solution with the Laplace homotopy perturbation method
This method was initially proposed in [18] and has been employed in various scientific studies. In this section, we apply it to obtain a solution of system (3) with the Caputo fractional derivative:
$$\begin{aligned} \left\{ \begin{array}{l} _0^C D_t^\alpha S(t) = - \alpha _{AS} {SA} - \beta SI + \sigma R, \\ _0^C D_t^\alpha I(t) = \beta {SI} - \alpha _{AI} AI - \delta I, \\ _0^C D_t^\alpha R(t) = \delta I - \sigma R, \\ _0^C D_t^\alpha A(t) = \alpha _{AS} SA + \alpha _{AI} AI. \\ \end{array} \right. \end{aligned}$$
Applying the Laplace transform operator on both sides of the above system, we obtain the following:
$$\begin{aligned} \left\{ \begin{array}{l} S(\tau ) = \frac{1}{\tau }S(0) + \frac{1}{{\tau ^\alpha }}\ell \left( { - \alpha _{AS} SA - \beta SI + \sigma R} \right) , \\ I(\tau ) = \frac{1}{\tau }I(0) + \frac{1}{{\tau ^\alpha }}\ell \left( {\beta SI - \alpha _{AI} AI - \delta I} \right) , \\ R(\tau ) = \frac{1}{\tau }R(0) + \frac{1}{{\tau ^\alpha }}\ell \left( {\delta I - \sigma R} \right) , \\ A(\tau ) = \frac{1}{\tau }A(0) + \frac{1}{{\tau ^\alpha }}\ell \left( {\alpha _{AS} SA + \alpha _{AI} AI} \right) . \\ \end{array} \right. \end{aligned}$$
The Laplace variable is denoted by \(\tau\). Further, the inverse Laplace transform is applied on both sides of the system to yield
$$\begin{aligned} \left\{ \begin{array}{l} S(t) = S(0) + \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( { - \alpha _{AS} SA - \beta SI + \sigma R} \right) } \right\} ,\\ I(t) = I(0) + \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\beta SI - \alpha _{AI} AI - \delta I} \right) } \right\} , \\ R(t) = R(0) + \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\delta I - \sigma R} \right) } \right\} ,\\ A(t) = A(0) + \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\alpha _{AS} SA + \alpha _{AI} AI} \right) } \right\} . \\ \end{array} \right. \end{aligned}$$
Following the system above, we shall make an assumption that a solution can be obtained in the form of series as follows:
$$\begin{aligned} S(t) = \sum \limits _{n = 0}^\infty {S_n } (t),\quad \mathrm{{ }}I(t) = \sum \limits _{n = 0}^\infty {I_n } (t),\quad \mathrm{{ }}R(t) = \sum \limits _{n = 0}^\infty {R_n } (t),\quad \mathrm{{ }}A(t) = \sum \limits _{n = 0}^\infty {A_n } (t) .\end{aligned}$$
Next, substituting the above solution into system (11), adding the embedding parameter \(p \in \left( {0,1} \right] ,\) and collecting all the terms of the same power of the embedding parameter p, we have
$$\begin{aligned} p^0:\left\{ \begin{array}{l} S_0 (t) = S(0) \\ I_0 (t) = I(0) \\ R_0 (t) = R(0) \\ A_0 (t) = A(0) \\ \end{array}{,} \right. \end{aligned}$$
$$\begin{aligned} p^1:\left\{ \begin{array}{l} S_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( { - \alpha _{AS} S_0 A_0 - \beta S_0 I_0 + \sigma R_0 } \right) } \right\} \\ I_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\beta S_0 I_0 - \alpha _{AI} A_0 I_0 - \delta I_0 } \right) } \right\} \\ R_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\delta I_0 - \sigma R_0 } \right) } \right\} \\ A_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\alpha _{AS} S_0 A_0 + \alpha _{AI} A_0 I_0 } \right) } \right\} \\ \end{array} \right. \end{aligned}$$
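For the first-order terms above, the argument of the Laplace transform is a constant fixed by the initial conditions, and the standard result \(\ell^{-1}\left\{c\,\tau^{-(\alpha+1)}\right\} = c\,t^{\alpha}/\varGamma(\alpha+1)\) gives the terms in closed form. The sympy sketch below assembles S1, ..., A1 from this result; the parameter and initial-value symbols are generic placeholders, not the numerical values used later in the paper.

```python
import sympy as sp

t, alpha = sp.symbols('t alpha', positive=True)
S0, I0, R0, A0 = sp.symbols('S_0 I_0 R_0 A_0')
a_AS, a_AI, beta, delta, sigma = sp.symbols('alpha_AS alpha_AI beta delta sigma')

def first_order(constant):
    """Inverse Laplace of constant/tau**(alpha+1): constant * t**alpha / Gamma(alpha+1)."""
    return constant * t**alpha / sp.gamma(alpha + 1)

S1 = first_order(-a_AS * S0 * A0 - beta * S0 * I0 + sigma * R0)
I1 = first_order(beta * S0 * I0 - a_AI * A0 * I0 - delta * I0)
R1 = first_order(delta * I0 - sigma * R0)
A1 = first_order(a_AS * S0 * A0 + a_AI * A0 * I0)

print(S1, I1, R1, A1, sep='\n')
```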
In broad-spectrum, we shall obtain the following system of iteration formulas (15):
$$\begin{aligned} p^n:\left\{ \begin{array}{l} S_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( { - \alpha _{AS} \sum \limits _{j = 0}^{n - 1} {S_j A_{\left( {n - j - 1} \right) } } - \beta \sum \limits _{j = 0}^{n - 1} {S_j I_{\left( {n - j - 1} \right) } } + \sigma R_{\left( {n - 1} \right) } } \right) } \right\} \\ I_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\beta \sum \limits _{j = 0}^{n - 1} {S_j I_{\left( {n - j - 1} \right) } } - \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {A_j I_{\left( {n - j - 1} \right) } } - \delta I_{\left( {n - 1} \right) } } \right) } \right\} \\ R_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\delta I_{\left( {n - 1} \right) } - \sigma R_{\left( {n - 1} \right) } } \right) } \right\} \\ A_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\alpha _{AS} \sum \limits _{j = 0}^{n - 1} {S_j A_{\left( {n - j - 1} \right) } } + \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {A_j I_{\left( {n - j - 1} \right) } } } \right) } \right\} \\ \end{array} \right. \end{aligned}$$
The above general rule can be simplified in the following algorithm steps:
Algorithm 1. This technique can be employed to obtain a special solution to system (2) via a Caputo fractional derivative
Input \(p^0:\left\{ \begin{array}{l}S_0 (t) = S(0) \\ I_0 (t) = I(0) \\ R_0 (t) = R(0) \\ A_0 (t) = A(0) \quad \\ \end{array} \right.\) as initial input,
\(j\): the number of terms in the approximate computation,
Output: \(\left\{ {\begin{array}{*{20}l} {S_{{{\text{appr}}}} (t) = S(0)} \hfill \\ {I_{{{\text{appr}}}} (t) = I(0)} \hfill \\ {R_{{{\text{appr}}}} (t) = R(0)} \hfill \\ {A_{{{\text{appr}}}} (t) = A(0),} \hfill \\ \end{array} } \right.\quad {\text{the estimated solution}}\).
$$\begin{aligned} {\text{Step}}\mathrm{{ }}1:{\text{Put}}\left\{ \begin{array}{l} S_0 (t) = S(0) \\ I_0 (t) = I(0) \\ R_0 (t) = R(0) \\ A_0 (t) = A(0) \\ \end{array} \right. \mathrm{{ }}\quad {\text{and}}\quad\mathrm{{ }}\left\{ \begin{array}{l} S_{{\rm appr}} (t) = S(0) \\ I_{{\rm appr}} (t) = I(0) \\ R_{{\rm appr}} (t) = R(0) \\ A_{{\rm appr}} (t) = A(0) \\ \end{array} \right. = \mathrm{{ }}\left\{ \begin{array}{l} S_0 (t) = S(t) \\ I_0 (t) = I(t) \\ R_0 (t) = R(t) \\ A_0 (t) = A(t), \\ \end{array} \right. \end{aligned}$$
Step 2: For \(j = 1\) to \(n-1\) do step 3, step 4 and step 5:
$$\begin{aligned} \left\{ \begin{array}{l} S_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( { - \alpha _{AS} S_0 A_0 - \beta S_0 I_0 + \sigma R_0 } \right) } \right\} \\ I_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\beta S_0 I_0 - \alpha _{AI} A_0 I_0 - \delta I_0 } \right) } \right\} \\ R_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\delta I_0 - \sigma R_0 } \right) } \right\} \\ A_1 (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\alpha _{AS} S_0 A_0 + \alpha _{AI} A_0 I_0 } \right) } \right\} \\ \end{array} \right. \end{aligned}$$
Step 3: Compute
$$\begin{aligned} \left\{ \begin{array}{l} S_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( { - \alpha _{AS} \sum \limits _{j = 0}^{n - 1} {S_j A_{\left( {n - j - 1} \right) } } - \beta \sum \limits _{j = 0}^{n - 1} {S_j I_{\left( {n - j - 1} \right) } } + \sigma R_{\left( {n - 1} \right) } } \right) } \right\} \\ I_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\beta \sum \limits _{j = 0}^{n - 1} {S_j I_{\left( {n - j - 1} \right) } } - \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {A_j I_{\left( {n - j - 1} \right) } } - \delta I_{\left( {n - 1} \right) } } \right) } \right\} \\ R_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\delta I_{\left( {n - 1} \right) } - \sigma R_{\left( {n - 1} \right) } } \right) } \right\} \\ A_n (t) = \ell ^{ - 1} \left\{ {\frac{1}{{\tau ^\alpha }}\ell \left( {\alpha _{AS} \sum \limits _{j = 0}^{n - 1} {S_j A_{\left( {n - j - 1} \right) } } + \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {A_j I_{\left( {n - j - 1} \right) } } } \right) } \right\} \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} \left\{ \begin{array}{l} S_{\left( {m + 1} \right) } (t) = B_m (t) + S_{({\rm appr})} (t) \\ I_{\left( {m + 1} \right) } (t) = B_m (t) + I_{({\rm appr})} (t) \\ R_{\left( {m + 1} \right) } (t) = K_m (t) + R_{({\rm appr})} (t) \\ A_{\left( {m + 1} \right) } (t) = K_m (t) + A_{({\rm appr})} (t) \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} \left\{ \begin{array}{l} S_{\left( {{\rm appr}} \right) } (t) = S_{{\rm appr}} (t) + S_{(m + 1)} (t) \\ I_{\left( {{\rm appr}} \right) } (t) = I_{{\rm appr}} (t) + I_{(m + 1)} (t) \\ R_{\left( {{\rm appr}} \right) } (t) = R_{{\rm appr}} (t) + R_{(m + 1)} (t) \\ A_{\left( {{\rm appr}} \right) } (t) = A_{{\rm appr}} (t) + A_{(m + 1)} (t) \\ \end{array} \right. \end{aligned}$$
The above algorithm will be applied to obtain the special solution of system (3) via the Caputo derivative. The case in which the beta-derivative is used is discussed in the next section.
Solution with the homotopy decomposition method
This method is explored here to obtain a special solution to system (3) with the beta-derivative, because applying the Laplace transform to this derivative does not guarantee desirable results. The inverse operator of this derivative, the beta-integral, can however be applied; it converts the ordinary differential equations into integral equations so that the concept of homotopy can be used. Consequently, taking the inverse operator on both sides of system (3), we have the following system of integral equations:
$$\begin{aligned} \left\{ \begin{array}{l} S(\tau ) = S(0) +\, _0^A I_t^\alpha \left( { - \alpha _{AS} SA - \beta SI + \sigma R} \right) \\ I(\tau ) = I(0) +\, _0^A I_t^\alpha \left( {\beta SI - \alpha _{AI} AI - \delta I} \right) \\ R(\tau ) = R(0) +\, _0^A I_t^\alpha \left( {\delta I - \sigma R} \right) \\ A(\tau ) = A(0) + \,_0^A I_t^\alpha \left( {\alpha _{AS} SA + \alpha _{AI} AI} \right) \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} _0^A I_t^\beta \left( {f(x)} \right) = \int \limits _0^t {\left( {\tau +\frac{1}{{\varGamma (\beta )}}} \right) } ^{1 - \beta } f(\tau )\rm {{\text{d}}}\tau . \end{aligned}$$
Additionally, we assume that a solution of the above can be written as a series:
$$\begin{aligned} S(t) = \sum \limits _{n = 0}^\infty {S_n } (t),\quad\mathrm{{ }}I(t) = \sum \limits _{n = 0}^\infty {I_n } (t),\quad\mathrm{{ }}R(t) = \sum \limits _{n = 0}^\infty {R_n } (t),\quad\mathrm{{ }}A(t) = \sum \limits _{n = 0}^\infty {A_n } (t). \end{aligned}$$
$$\begin{aligned} p^0:\left\{ \begin{array}{l}S_0 (t) = S(0) \\ I_0 (t) = I(0) \\ R_0 (t) = R(0) \\ A_0 (t) = A(0) \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} p^1:\left\{ \begin{array}{l} S_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ { - \alpha _{AS} S_0 A_0 - \beta S_0 I_0 + \sigma R_0 } \right\}\rm {{\text{d}}}\tau \\ I_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\beta S_0 I_0 - \alpha _{AI} A_0 I_0 - \delta I_0 } \right\} \rm {{\text{d}}}\tau \\ R_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\delta I_0 - \sigma R_0 } \right\} \rm {{\text{d}}}\tau \\ A_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\alpha _{AS} S_0 A_0 + \alpha _{AI} A_0 I_0 } \right\} \rm {{\text{d}}}\tau.\\ \end{array} \right..\end{aligned}$$
More generally, we obtain the following system of iteration formulas:
$$\begin{aligned} p^n:\left\{ \begin{array}{l} S_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ { - \alpha _{AS} \sum \limits _{j = 0}^{n - 1} {A_{(n - j - 1)} } S_j - \beta \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } S_j + \sigma R_{\left( {n - 1} \right) } } \right\} \rm {{\text{d}}}\tau \\ I_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\beta \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } S_j - \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } A_j - \delta I_{\left( {n - 1} \right) } } \right\} \rm {{\text{d}}}\tau \\ R_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\delta I_{\left( {n - 1} \right) } - \sigma R_{\left( {n - 1} \right) } } \right\} \rm {{\text{d}}}\tau \\ A_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\alpha _{AS} \sum \limits _{j = 0}^{n - 1} {A_{(n - j - 1)} } S_j + \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } A_j } \right\}\rm {{\text{d}}}\tau. \\ \end{array} \right.\end{aligned}$$
The above development can be summarized in the following procedure.
Input \(\left\{ {\begin{array}{*{20}l} {S_{0} (t) = S(0)} \\ {I_{0} (t) = I(0)} \\ {R_{0} (t) = R(0)} \\ {A_{0} (t) = A(0)} \\ \end{array} } \right.\quad {\text{as initial input}}\),
Output:\(\left\{ \begin{array}{l}S_{{\rm appr}} (t) \\ I_{{\rm appr}} (t) \\ R_{{\rm appr}} (t) \\ A_{{\rm appr}} (t) \quad\\ \end{array} \right.\) the estimated solution.
$$\begin{aligned} {\text{Step 1: put}}\left\{ \begin{array}{l}S_0 (t) = S(0) \\ I_0 (t) = I(0) \\ R_0 (t) = R(0) \\ A_0 (t) = A(0) \\ \end{array} \right. \mathrm{{ }}\quad{\text{and}}\quad\mathrm{{ }}\left\{ \begin{array}{l}S_{{\rm appr}} (t) \\ I_{{\rm appr}} (t) \\ R_{{\rm appr}} (t) \\ A_{{\rm appr}} (t) \\ \end{array} \right. = \mathrm{{ }}\left\{ \begin{array}{l}S_0 (t) \\ I_0 (t) \\ R_0 (t) \\ A_0 (t) \\ \end{array} \right. \end{aligned}$$
Step 2: For \(j=1\) to \(n-1\) do step 3, step 4 and step 5:
$$\begin{aligned} \left\{ \begin{array}{l} S_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ { - \alpha _{AS} S_0 A_0 - \beta S_0 I_0 + \sigma R_0 } \right\} \rm {{\text{d}}}\tau \\ I_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\beta S_0 I_0 - \alpha _{AI} A_0 I_0 - \delta I_0 } \right\} \rm {{\text{d}}}\tau \\ R_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\delta I_0 - \sigma R_0 } \right\} \rm {{\text{d}}}\tau \\ A_1 (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\alpha _{AS} S_0 A_0 + \alpha _{AI} A_0 I_0 } \right\} \rm {{\text{d}}}\tau \\ \end{array} \right. \end{aligned}$$
Step 3: Compute:
$$\begin{aligned} \left\{ \begin{array}{l} B_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ { - \alpha _{AS} \sum \limits _{j = 0}^{n - 1} {A_{(n - j - 1)} } S_j - \beta \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } S_j + \sigma R_{\left( {n - 1} \right) } } \right\} \rm {{\text{d}}}\tau \\ B_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\beta \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } S_j - \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } A_j - \delta I_{\left( {n - 1} \right) } } \right\} \rm {{\text{d}}}\tau \\ K_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\delta I_{\left( {n - 1} \right) } - \sigma R_{\left( {n - 1} \right) } } \right\} \rm {{\text{d}}}\tau \\ K_n (t) = \int \limits _0^t {\left( {\tau + \frac{1}{{\varGamma (\beta )}}} \right) ^{1 - \beta } } \left\{ {\alpha _{AS} \sum \limits _{j = 0}^{n - 1} {A_{(n - j - 1)} } S_j + \alpha _{AI} \sum \limits _{j = 0}^{n - 1} {I_{\left( {n - j - 1} \right) } } A_j } \right\} \rm {{\text{d}}}\tau \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} \left\{ \begin{array}{l} S_{(m + 1)} (t) = B_m (t) + S_{({\rm appr})} (t) \\ I_{(m + 1)} (t) = B_m (t) + I_{({\rm appr})} (t) \\ R_{(m + 1)} (t) = K_m (t) + R_{({\rm appr})} (t) \\ A_{(m + 1)} (t) = K_m (t) + A_{({\rm appr})} (t) \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} \left\{ \begin{array}{l} S_{({\rm appr})} (t) = S_{({\rm appr})} (t) + S_{(m + 1)} (t) \\ I_{({\rm appr})} (t) = I_{({\rm appr})} (t) + I_{(m + 1)} (t) \\ R_{({\rm appr})} (t) = R_{({\rm appr})} (t) + R_{(m + 1)} (t) \\ A_{({\rm appr})} (t) = A_{({\rm appr})} (t) + A_{(m + 1)} (t) \\ \end{array} \right. \end{aligned}$$
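To make the iteration above concrete, here is a minimal Python sketch that applies successive approximations directly to the integral form of system (2), approximating the beta-integral by the trapezoid rule. This is not the authors' implementation: the grid, the number of sweeps and the chosen fractional order (0.3, one of the values used in the figures below) are illustrative assumptions, and the scheme is a plain Picard-type fixed-point iteration rather than the full homotopy bookkeeping of Algorithm 2.

```python
import numpy as np
from scipy.special import gamma

# Model parameters and initial data as quoted in the "Numerical simulations" section
alpha_AS, alpha_AI, beta_inf, sigma, delta = 0.6, 0.4, 0.2, 0.85, 0.40
S0, I0, R0, A0 = 100.0, 10.0, 5.0, 20.0
b = 0.3                      # fractional order of the beta-integral (illustrative choice)

t = np.linspace(0.0, 0.5, 2001)

def beta_integral(f_vals, t, b):
    """Trapezoid-rule approximation of the beta-integral with
    kernel (tau + 1/Gamma(b))**(1 - b), evaluated at every grid point."""
    g = (t + 1.0 / gamma(b)) ** (1.0 - b) * f_vals
    out = np.zeros_like(t)
    out[1:] = np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))
    return out

# Successive approximations on the integral equations of system (2)
S = np.full_like(t, S0); I = np.full_like(t, I0)
R = np.full_like(t, R0); A = np.full_like(t, A0)
for _ in range(8):           # a few fixed-point sweeps (illustrative)
    S, I, R, A = (
        S0 + beta_integral(-alpha_AS * S * A - beta_inf * S * I + sigma * R, t, b),
        I0 + beta_integral(beta_inf * S * I - alpha_AI * A * I - delta * I, t, b),
        R0 + beta_integral(delta * I - sigma * R, t, b),
        A0 + beta_integral(alpha_AS * S * A + alpha_AI * A * I, t, b),
    )

print(S[-1], I[-1], R[-1], A[-1])
```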
Numerical results
We employ Algorithms 1 and 2 to obtain approximate solutions of system (3) via the Caputo fractional derivative and the beta-derivative, respectively.
With the Caputo fractional derivative
In this part, Algorithm 1 is applied to obtain an approximate solution of system (3):
$$\begin{aligned} \left\{ \begin{array}{l}S_0 (t) = e \\ I_0 (t) = f \\ R_0 (t) = g \\ A_0 (t) = h \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} \begin{array}{l} S_1 (t) = \frac{{t^\mu \left( { - \alpha _{AS} eh - \beta ef + \sigma g} \right) }}{{\varGamma \left[ {1 + \mu } \right] }} \\ I_1 (t) = \frac{{t^\mu \left( {\beta ef - \alpha _{AI} hf - \delta f} \right) }}{{\varGamma \left[ {1 + \mu } \right] }} \\ R_1 (t) = \frac{{t^\mu \left( {\delta f - \sigma g} \right) }}{{\varGamma \left[ {1 + \mu } \right] }} \\ A_1 (t) = \frac{{t^\mu \left( {\alpha _{AS} eh + \alpha _{AI} hf} \right) }}{{\varGamma \left[ {1 + \mu } \right] }} \\ \end{array} \end{aligned}$$
With the beta-derivative
Algorithm 2 is used to derive the following series solutions:
$$\begin{aligned} S_1 (t) = \frac{{\left( {\left( {\frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } - \left( {t + \frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } \left( {1 + t\varGamma \left[ \mu \right] } \right) ^2 } \right) \left( { - \alpha _{AS} eh - \beta ef + \sigma g} \right) }}{{\left( { - 2 + \mu } \right) \varGamma \left[ \mu \right] ^2 }} \end{aligned}$$
$$\begin{aligned} I_1 (t) = \frac{{\left( {\left( {\frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } - \left( {t + \frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } \left( {1 + t\varGamma \left[ \mu \right] } \right) ^2 } \right) \left( {\beta ef - \alpha _{AI} hf - \delta f} \right) }}{{\left( { - 2 + \mu } \right) \varGamma \left[ \mu \right] ^2 }} \end{aligned}$$
$$\begin{aligned} R_1 (t) = \frac{{\left( {\left( {\frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } - \left( {t + \frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } \left( {1 + t\varGamma \left[ \mu \right] } \right) ^2 } \right) \left( {\delta f - \sigma g} \right) }}{{\left( { - 2 + \mu } \right) \varGamma \left[ \mu \right] ^2 }} \end{aligned}$$
$$\begin{aligned} A_1 (t) = \frac{{\left( {\left( {\frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } - \left( {t + \frac{1}{{\varGamma \left[ \mu \right] }}} \right) ^{ - \mu } \left( {1 + t\varGamma \left[ \mu \right] } \right) ^2 } \right) \left( {\alpha _{AS} eh + \alpha _{AI} hf} \right) }}{{\left( { - 2 + \mu } \right) \varGamma \left[ \mu \right] ^2 }} \end{aligned}$$
Numerical simulations
In this section, we use the parameters from [10] to obtain numerical simulations illustrating the approximate solutions as functions of time and of the fractional order. The parameter values used are \(\alpha _{AS} = 0.6\), \(\alpha _{AI} = 0.4\), \(\beta =0.2\), \(\sigma = 0.85\), \(\delta = 0.40\), with the following initial conditions: \(S(0) = 100\), \(I(0) = 10\), \(R(0) = 5\) and \(A(0) = 20\).
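As a cross-check on the figures, the two-term Caputo approximation \(X(t) \approx X_0 + X_1(t)\) obtained above can be evaluated directly at these parameter values. The short Python sketch below does this; the fractional order is set to 0.105 to match the first caption, which is an assumption about which figure it corresponds to.

```python
import numpy as np
from scipy.special import gamma
import matplotlib.pyplot as plt

# Parameter values and initial data quoted above
alpha_AS, alpha_AI, beta_inf, sigma, delta = 0.6, 0.4, 0.2, 0.85, 0.40
e, f, g, h = 100.0, 10.0, 5.0, 20.0          # S(0), I(0), R(0), A(0)
mu = 0.105                                    # fractional order (assumed from the first caption)

t = np.linspace(0.0, 2.0, 400)
c = t**mu / gamma(1.0 + mu)                   # common factor t^mu / Gamma(1 + mu)

# Two-term Caputo approximations X(t) ~ X_0 + X_1(t)
S = e + c * (-alpha_AS * e * h - beta_inf * e * f + sigma * g)
I = f + c * (beta_inf * e * f - alpha_AI * h * f - delta * f)
R = g + c * (delta * f - sigma * g)
A = h + c * (alpha_AS * e * h + alpha_AI * h * f)

for y, lbl in [(S, "S"), (I, "I"), (R, "R"), (A, "A")]:
    plt.plot(t, y, label=lbl)
plt.xlabel("t"); plt.legend(); plt.show()
```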
Approximate solution for \(\alpha = 0.105\)
Figures 1, 2, 3, 4, 5, 6, 7 and 8 depict the graphical representations of system (3). The graphs clearly show that the susceptible class, which also represents the total population, becomes infected quickly because of how fast the virus spreads within the system. It is also apparent that the total number of infected computers decreases as the number of antidotal computers increases, which is the strategy used to reduce the number of susceptible computers. The numerical predictions (Figs. 1, 2, 3, 4, 5, 6, 7, 8) also depend on the fractional order beta. When beta is above 0.5, the prediction becomes unrealistic: as beta increases above 0.5, the antidotal computers appear to exceed the entire initial population. The best option for protecting more computers is to install anti-virus software on both infected and susceptible computers, avoiding disaster and large costs; this is achieved when beta is <0.5, as observed in Figs. 4, 5, 6, 7 and 8. This is not surprising, given the role of the anti-virus in the computer system, since the entire initial population ends up in that compartment. It is remarkable that when beta equals 1 we recover the ordinary derivative, and the ordinary-derivative model hardly gives good predictions. The newly introduced beta-calculus, however, has the potential to describe a given physical problem vividly. The results in Figs. 1, 2, 3, 4, 5, 6, 7 and 8 are far better and more accurate than those obtained in [17]; this work gives better predictions, which is not the case for the integer order in [17].
Approximate solution for \(\beta = 1\)
Approximate solution for \(\beta = 0.65\)
Approximate solution for \(\beta = 0.3\)
The concepts of the beta-derivative and the Caputo fractional derivative have been used to investigate the spread of computer viruses in a system. Computer viruses are found all over the world, wherever computers are available, and cause major financial losses to many organizations. It is worth noting that the definition of the fractional derivative involves the convolution of the derivative of a given function with a power-law kernel. Convolution is used in many branches of engineering, including image processing, as a filter; in epidemiology, the fractional derivative serves as a memory capable of tracing the spread from the beginning to the infected individual. For the beta-derivative, which ranges between the fractional order and the local derivative, the spread of the computer virus at the local level is identified with a given fractional order. In this study, two distinct concepts of derivative have been employed to investigate the spread of computer viruses, and the proposed model was solved iteratively with the chosen methods. The numerical simulations show that the prediction depends on the fractional order beta: when beta is close to 1 we obtain unrealistic predictions, which is not the case in [10], while when beta is ≤0.5 a good prediction is attained. Since every institution wants its computers to be virus-free, the initial computers move into the antidotal compartment at the end of the simulation when beta is <0.5, as observed in Figs. 4, 5, 6, 7 and 8. It is worth noticing that when beta equals 1 we recover the ordinary-derivative case for both derivatives, which does not provide a good prediction. Thus, the newly introduced beta-calculus has the potential to provide a more precise and vivid account of a physical problem.
Denning PJ (1990) Computers under attack. Addison-Wesley, New York. http://www.washtech.com/news/netarch/12267-1.html
Forrest S, Hofmayer SA, Somayaj A (1997) Computer immunology. Commun ACM 40(10):88–96
Kephart JO, Hogg T, Huberman BA (1989) Dynamics of computational ecosystems. Phys Rev A 40(1):404–421
Kephart JO, White SR, Chess DM (1993) Computers and epidemiology. IEEE Spectr 30(5):20–26
Kephart JO, Sorkin GB, Swimmer M (1997) An immune system for cyberspace. In: Proceedings of the IEEE international conference on systems, men, and cybernetics. IEEE, Orlando, p 879–884
Mishra BK, Saini D (2007) Mathematical models on computer viruses. Appl Math Comput 187(2):929–936
Mishra BK, Jha N (2007) Fixed period of temporary immunity after run of the anti-malicious software on computer nodes. Appl Math Comput 190:1207–1212
Draief M, Ganesh A, Massouili L (2008) Thresholds for virus spread on networks. Ann Appl Probab 18(2):359–378
Piqueira JRC, Navarro BF, Monteiro LHA (2005) Epidemiological models applied to viruses in computer networks. J Comput Sci 1:31–34
Piqueira JRC, Araujo VO (2009) A modified epidemiological model for computer viruses. Appl Math Comput 2(213):355–360
Wazwaz AM (2005) Adomian decomposition method for a reliable treatment of the Bratu-type equations. Appl Math Comput 166:652–663
Atangana A, Bildik N (2013) The use of fractional order derivative to predict the groundwater flow. Math Probl Eng 2013:9
Cloot A, Botha JF (2005) A generalised groundwater flow equation using the concept of non-integer order derivatives. Water SA 32(1):1–7
Atangana A, Vermeulen PD (2014) Analytical solutions of a space-time fractional derivative of groundwater flow equation. Abstr Appl Anal 2014:11
Su W, Baleanu D, Yang X, Jafari H (2013) Damped wave equation and dissipative wave equation in fractal strings within the local fractional variational iteration method. Fixed Point Theory Appl 2013(1):1
Miller KS, Ross B (1993) An introduction to the fractional calculus and fractional differential equations. Wiley, New York
Podlubny I (2002) Geometric and physical interpretation of fractional integration and fractional differentiation. Fract Calc Appl Anal 5:367–386
Hethcote HW (2000) The mathematics of infectious diseases. SIAM Rev 42(4):599–653
Ganji DD, Sadighi A (2007) Application of homotopy-perturbation and variational iteration methods to nonlinear heat transfer and porous media equations. J Comput Appl Math 207:24–34
Singh J, Devendra Kumar S (2011) Homotopy perturbation Sumudu transform method for nonlinear equations. Adv Appl Mech 4:165–175
Adomian G, Rach R (1993) Analytic solution of nonlinear boundary value problems in several dimensions by decomposition. J Math Anal Appl 174:118–137
Atangana A, Bildik N (2013) Approximate solution of Tuberculosis disease population dynamics model. Appl Anal 2013:1–8
Jafari H, Momani S (2007) Solving fractional diffusion and wave equations by modified homotopy perturbation method. Phys Lett A 370:388–396
Khan Y (2009) An effective modification of the Laplace decomposition method for nonlinear equations. Int J Nonlinear Sci Num Simul 10:1373–1376
Authors' contributions
All the authors have contributed equally for the production of this study. All authors read and approved the final manuscript.
The authors are thankful to the editor and reviewers for their careful reading and suggestions that greatly improved the quality of the paper.
Department of Mathematics and Statistics, Kumasi Technical University, Kumasi, Ghana
Ebenezer Bonyah
Institute for Groundwater Studies, University of the Free State, Bloemfontein, 9301, South Africa
Abdon Atangana
Department of Mathematics Abdul Wali Khan, University Mardan, Mardan, 23200, Pakistan
Muhammad Altaf Khan
Correspondence to Ebenezer Bonyah.
Bonyah, E., Atangana, A. & Khan, M.A. Modeling the spread of computer virus via Caputo fractional derivative and the beta-derivative. Asia Pac. J. Comput. Engin. 4, 1 (2017). https://doi.org/10.1186/s40540-016-0019-1
Caputo fractional derivative
Beta-derivative
Antidotal
Functors Induced by Cauchy Extension of C$^\ast$-algebras
Kourosh Nourouzi
Ali Reza
Faculty of Mathematics, K. N. Toosi University of Technology, Tehran, Iran.
10.22130/scma.2018.73698.306
In this paper, we give three functors $\mathfrak{P}$, $[\cdot]_K$ and $\mathfrak{F}$ on the category of C$^\ast$-algebras. The functor $\mathfrak{P}$ assigns to each C$^\ast$-algebra $\mathcal{A}$ a pre-C$^\ast$-algebra $\mathfrak{P}(\mathcal{A})$ with completion $[\mathcal{A}]_K$. The functor $[\cdot]_K$ assigns to each C$^\ast$-algebra $\mathcal{A}$ the Cauchy extension $[\mathcal{A}]_K$ of $\mathcal{A}$ by a non-unital C$^\ast$-algebra $\mathfrak{F}(\mathcal{A})$. Some properties of these functors are also given. In particular, we show that the functors $[\cdot]_K$ and $\mathfrak{F}$ are exact and the functor $\mathfrak{P}$ is normal exact.
Pre-C$^\ast$-algebras
Extensions of C$^\ast$-algebras
Exact functors
Cauchy extension
C*-algebra
Nourouzi, K., Reza, A. (2019). Functors Induced by Cauchy Extension of C$^\ast$-algebras. Sahand Communications in Mathematical Analysis, 14(1), 27-53. doi: 10.22130/scma.2018.73698.306
Problem on parameterization and speed
An ant is on a merry-go-round that is rotating clockwise at $\omega$ radians per second. Initially, the ant is at $(R,0)$. From the ant's perspective, it walks toward the center with speed $v$. Several snapshots in time are as follows:
Find the parameterization of the path taken by the ant (relative to the ground)
Compute the speed of the ant as a function of $t$. When is it largest?
Set up, but do not evaluate, an integral for the arc length of the path taken by the ant between $t=0$ and when the ant reaches the origin
Parameterization of a circle
A point with distance to the origin $r$ and angle $\theta$ with respect to the $x$-axis has coordinates $r\langle \cos \theta, \sin \theta\rangle$.
Because the ant is always walking toward the center with speed $v$, its distance to the origin at time $t$ is $r(t) = R - v t$.
Because the ant is always walking directly toward the center from its perspective, it stays on the same ray emanating from the origin. Hence, it rotates around the origin at the same angular speed as the merry-go-round. Hence $\theta(t) = - \omega t$. Note the negative sign because the merry-go-round is rotating clockwise.
We conclude that the position of the ant is given by \begin{align}\bfx(t) &= (R - vt) \langle \cos(-\omega t), \sin(-\omega t) \rangle\\ &= (R - vt) \langle \cos \omega t, -\sin \omega t \rangle.\end{align}
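For a quick picture of this path, here is a short Python sketch that plots the spiral for sample values of $R$, $v$ and $\omega$ (chosen purely for illustration; the problem leaves them symbolic).

```python
import numpy as np
import matplotlib.pyplot as plt

R, v, w = 1.0, 0.1, 2.0                 # sample values, for illustration only
t = np.linspace(0.0, R / v, 500)        # until the ant reaches the origin

x = (R - v * t) * np.cos(w * t)
y = -(R - v * t) * np.sin(w * t)        # minus sign: the platform rotates clockwise

plt.plot(x, y)
plt.gca().set_aspect("equal")
plt.title("Path of the ant in the ground frame")
plt.show()
```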
Speed of a parameterization
If $\bfx(t)$ is the parameterization of a curve, the speed at $t$ is given by the magnitude of the velocity: $| \bfv | = \left |\frac{d \bfx}{d t} \right|.$
Velocity of a parameterization
We now compute the velocity vector by the product rule: \begin{align}
\frac{d \bfx}{dt}(t) &= -v \langle \cos \omega t, - \sin \omega t \rangle + (R - vt) \langle -\omega \sin \omega t, - \omega \cos \omega t \rangle.
\end{align}
Now we compute the length of this vector. Recall that
Dot product and vector length
$|\bfv|^2 = \bfv \cdot \bfv$
We can compute that \begin{align}
\left |\frac{d\bfx}{dt} \right|^2 = &v^2 \langle \cos \omega t, - \sin \omega t \rangle \cdot \langle \cos \omega t, - \sin \omega t \rangle \\ &- 2 v \omega (R - vt) \langle \cos \omega t, - \sin \omega t \rangle\cdot \langle - \sin \omega t, - \cos \omega t \rangle \\ & + \omega^2 (R-vt)^2 \langle -\sin \omega t, - \cos \omega t \rangle \cdot \langle -\sin \omega t, - \cos \omega t \rangle.
\end{align}
Direct computation of dot product
We note that \begin{align}
\langle \cos(\omega t), - \sin(\omega t) \rangle \cdot \langle \cos(\omega t), - \sin(\omega t) \rangle &= 1, \\
\langle \cos(\omega t), - \sin(\omega t) \rangle\cdot \langle - \sin (\omega t), - \cos(\omega t) \rangle &= 0, \\
\langle -\sin (\omega t), - \cos(\omega t) \rangle \cdot \langle -\sin (\omega t), - \cos(\omega t) \rangle &=1.
\end{align}
And we get that \begin{align}
&\left |\frac{d\bfx}{dt} \right|^2 = v^2 + \omega^2(R-vt)^2, \\
&\left |\frac{d\bfx}{dt} \right| = \sqrt{v^2 + \omega^2(R-vt)^2}.
\end{align}
This expression is largest when $t=0$, since $(R-vt)^2$ decreases as $t$ increases from $0$ to $R/v$.
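The simplification above can be double-checked symbolically; a small sympy sketch (with $\omega$ written as omega) confirms that the cross terms cancel.

```python
import sympy as sp

t, R, v, w = sp.symbols("t R v omega", positive=True)
x = (R - v * t) * sp.cos(w * t)
y = -(R - v * t) * sp.sin(w * t)

speed_sq = sp.trigsimp(sp.expand(sp.diff(x, t)**2 + sp.diff(y, t)**2))
print(speed_sq)   # an expression equal to v**2 + omega**2*(R - v*t)**2
```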
Part c
Arc length of a parameterization
The length of the curve $\bfx(t)$ traced out between $t=a$ and $t=b$ is given by $$s = \int_a^b \left | \frac{d \bfx(t)}{dt} \right| dt.$$
We identify that $a=0$ and $b$ is the time at which the ant reaches the origin. Because the ant has a distance $R$ to travel at speed $v$, it takes $R/v$ time to reach the origin. Hence, we identify that $b = R/v$.
The arc length of the curve traced by the ant is $$s = \int_0^{R/v} \sqrt{v^2 + \omega^2(R-vt)^2} dt.$$
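The integral has a closed form, but it is also easy to evaluate numerically; the sketch below uses the same illustrative values of $R$, $v$ and $\omega$ as before (they are not part of the problem statement).

```python
import numpy as np
from scipy.integrate import quad

R, v, w = 1.0, 0.1, 2.0                          # illustrative values only

speed = lambda t: np.sqrt(v**2 + w**2 * (R - v * t)**2)
s, err = quad(speed, 0.0, R / v)
print(s)                                         # about 10.1 for these values
```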
Dot product is defined by $\langle x_1, x_2, x_3 \rangle \cdot \langle y_1, y_2, y_3 \rangle = x_1 y_1 + x_2 y_2 + x_3 y_3.$
$\mathbf{x} \cdot \mathbf{x} = |\mathbf{x}|^2.$
Parameterized curves
The circle of radius $r$, traversed counterclockwise, can be parameterized by $$\bfx(\theta) = \langle r \cos \theta, r \sin \theta \rangle \text{ for } 0 \leq \theta \lt 2 \pi$$
Velocity of a parameterization: $\bfv = \frac{d \bfx} {dt}$
Speed of a parameterization: $|\bfv| = \left| \frac{d\bfx}{dt} \right |$
Arthur and the eclipse
By Gianluigi Filippelli on Saturday, December 27, 2014
by @ulaulaman about #ArthurEddington #AlbertEinstein #GeneralRelativity
On the 17th November 1922, Albert Einstein, accompanied by his wife, arrived in Kobe (see the report of the visit published on the AAPPS Bulletin - pdf). Here he was surrounded by journalists and fans: while the former asked him questions, the latter were hunting for an autograph from one of the most famous physicists and scientists of the time. Einstein, as written by Naoki Urasawa in the opening pages of Billy Bat #9, to a specific question on why he won the Nobel Prize for the photoelectric effect and not for the theory of special and general relativity, replied:
Because, that can't be verified.
But the mangaka committed a chronological mistake, probably caused by Urasawa's need to focus on the innovation represented by Einstein's theories: the point, in fact, is that just three years earlier, on the 6th November 1919, during a joint meeting of the Royal Society and the Royal Astronomical Society, Arthur Eddington presented the results of the celestial observations made in mid-spring of that year. The interest and the importance of the discovery were such that the next day the Times headlined:
Revolution in Science: New Theory of the Universe: Newton's Ideas Overthrown, and reported the comment by Joseph John Thomson:
Our conceptions about the structure of the universe must be changed in a fundamental way
So, when Einstein went to Japan, the evidence of the correctness of his theory had already been around.
Labels: albert einstein, arthur eddington, eclipse, general relativity, relativity
The hobbit, the dragon, and the green knight
By Gianluigi Filippelli on Wednesday, December 24, 2014
by @ulaulaman about #TheHobbit #JRRTolkien #Smaug #mathematics
Gandalf and Bilbo by David Wenzel
Peter Jackson's Hobbit movie trilogy has arrived at its conclusion, so this is a good moment to write a little, funny, curious post about the science in Tolkien's novel. We start with a paper published last year(1) (2013) in which the researchers identify the cause of the triumph of good over evil:
Bilbo Baggins, a hobbit, lives in a hole in the ground but with windows, and when he is first encountered he is smoking his pipe in the sun overlooking his garden (it is worth noting [parenthetically] that smoking is itself associated with skeletal muscle dysfunction6). Dwarves and wizards smoke too, and the production of smoke rings is unfortunately glamourised. The hobbit diet is clearly varied as he is able to offer cake, tea, seed cake, ale, porter, red wine, raspberry jam, mince pies, cheese, pork pie, salad, cold chicken, pickles and apple tart to the dwarves who visit to engage him in the business of burglary. The dwarves also show evidence of a mixed diet and, importantly, although they "like the dark, the dark for dark business", they do spend much time above ground and have plenty of sun exposure during the initial pony ride in June that begins their trip to the Lonely Mountain.
Gollum, himself "as dark as darkness" lives in the dark, deep in the Misty Mountains. He does, however, eat fish, although the text describes these only as "blind" and it is not clear whether they are of an oily kind and thus a potential source of vitamin D. He sometimes eats goblins, but they rarely come down to his lake, suggesting that fish play little part in the goblin diet. Interestingly, these occasional trips to catch fish are undertaken at the behest of the Great Goblin, leading one to speculate that his enhanced diet may have helped him to achieve his pre-eminent position within goblin society. In due course, the Great Goblin is replaced by the Son of the Great Goblin. While simple nepotism is a likely explanation, we are unable to exclude an epigenetic process whereby the son's fitness to rule has been influenced by parental vitamin D exposure.(1)
So the secret is in the diet!
Another great character from The Hobbit is Smaug, the dragon. Its physiology is really peculiar (read also Disco Blog):
Labels: biology, jrr tolkien, literature, mathematics, physiology, the hobbit
When and why the coffee spills
http://t.co/fBhTs48KG4 the #physics of spilling and walking with #coffee
How do we spill coffee?
(a) Either by accelerating too much for a given coffee level (fluid statics)
(b) Or, through more complicated dynamical phenomena:
Initial acceleration sets an initial sloshing amplitude, which is analogous to the main antisymmetric mode of sloshing.
This initial perturbation is amplified by the back-and-forth and pitching excitations since their frequency is close to the natural one because of the choice of normal mug dimensions.
Vertical motion also does not lead to resonance as it is a subharmonic excitation (Faraday phenomena).
Noise has higher frequency, which makes the antisymmetric mode unstable thus generating a swirl.
Time to spill depends on "focused"/"unfocused" regime and increases with decreasing maximum acceleration (walking speed).
How can we prevent spilling?
Lessons learned from sloshing dynamics may suggest strategies to control spilling, e.g. via using
a flexible container to act as a sloshing absorber in suppressing liquid oscillations.
a series of concentric rings (baffles) arranged around the inner wall of a container.
Text via Walking with coffee: when and why coffee spills (pdf)
More information on physics buzz blog
Paper: Mayer H.C. & Krechetnikov R. (2012). Walking with coffee: Why does it spill?, Physical Review E, 85 (4) DOI: http://dx.doi.org/10.1103/physreve.85.046117 (pdf)
Labels: coffee, funny, physics
A brief history of video game graphics
By Gianluigi Filippelli on Sunday, November 23, 2014
Labels: video, video game, youtube
The globe of Galileo
By Gianluigi Filippelli on Saturday, November 22, 2014
video by @ulaulaman #levitation
Published by Gianluigi (@ulaulaman) on Nov 11, 2014 at 4:41 PST
It's just a little Earth that turns and levitates above its base, a reminder of those who contributed to giving it its rightful place in space. The globe can light up using the switch on the base, and it runs on mains power.
Labels: funny, galileo galilei, levitation, science, toys
Fabiola Gianotti, Director General at CERN
By Gianluigi Filippelli on Tuesday, November 04, 2014
http://t.co/rYzcXWlvR0 about #FabiolaGianotti #CERN #ATLAS
Fabiola Gianotti is an Italian particle physicist, a former spokesperson of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN in Switzerland, considered one of the world's biggest scientific experiments. She has been selected as the next Director-General of CERN, starting on 1 January 2016.
She is the fourth Italian particle physicist to become Director General at CERN, after Amaldi (1952-1954), Rubbia (1989-1993) and Maiani (1999-2003).
A small concession to SEO!
Labels: atlas, cern, fabiola gianotti, lhc
Planck results, ATLAS and the dark matter
http://t.co/jJxD8rhCr6 by @ulaulaman about #Planck, #ATLAS, #DarkMatter at #LHC
The last issue of Astronomy & Astrophysics (which is free) is devoted to the Planck 2013 results:
This collection of 31 articles presents the initial scientific results extracted from this first Planck dataset, which measures the cosmic microwave background (CMB) with the highest accuracy to date. It provides major new advances in different domains of cosmology and astrophysics.
In the first paper there is an overview of 2013 science results, and we can read:
The Universe observed by Planck is well-fit by a six parameter, vacuum-dominated, cold dark matter (ΛCDM) model, and we provide strong constraints on deviations from this model.
But, in the meanwhile, ATLAS published a preprint about the quest for dark matter at the LHC:
The data are found to be consistent with the Standard Model expectations and limits are set on the mass scale of effective field theories that describe scalar and tensor interactions between dark matter and Standard Model particles. Limits on the dark-matter--nucleon cross-section for spin-independent and spin-dependent interactions are also provided. These limits are particularly strong for low-mass dark matter. Using a simplified model, constraints are set on the mass of dark matter and of a coloured mediator suitable to explain a possible signal of annihilating dark matter.
Tommaso Dorigo, examining ATLAS' results, concludes:
the ATLAS search increases significantly the sensitivity with respect to past searches, but no signal is found. As attractive as DM existence is as an economical explanation of a wealth of cosmological observations, the nature of dark matter continues to remain unknown.
Labels: atlas, dark matter, lhc, planck
Regge theory
By Gianluigi Filippelli on Sunday, October 26, 2014
http://t.co/alaasqcHwl @ulaulaman says #goodbye to #TullioRegge
In quantum physics, Regge theory is the study of the analytic properties of scattering as a function of angular momentum, where the angular momentum is not restricted to be an integer but is allowed to take any complex value. The nonrelativistic theory was developed by Tullio Regge in 1957.
Following Chew and Frautschi (pdf), the key papers by Tullio Regge are:
Regge T. (1959). Introduction to complex orbital momenta, Il Nuovo Cimento, 14 (5) 951-976. DOI: http://dx.doi.org/10.1007/bf02728177 (pdf)
In this paper the orbital momentum j, until now considered as an integer discrete parameter in the radial Schrödinger wave equations, is allowed to take complex values. The purpose of such an enlargement is not purely academic but opens new possibilities in discussing the connection between potentials and scattering amplitudes. In particular it is shown that under reasonable assumptions, fulfilled by most field theoretical potentials, the scattering amplitude at some fixed energy determines the potential uniquely, when it exists. Moreover for special classes of potentials $V(x)$, which are analytically continuable into a function $V(z)$, $z=x+iy$, regular and suitable bounded in $x > 0$, the scattering amplitude has the remarcable property of being continuable for arbitrary negative and large cosine of the scattering angle and therefore for arbitrary large real and positive transmitted momentum. The range of validity of the dispersion relations is therefore much enlarged.
Regge T. (1960). Bound states, shadow states and mandelstam representation, Il Nuovo Cimento, 18 (5) 947-956. DOI: http://dx.doi.org/10.1007/bf02733035
In a previous paper a technique involving complex angular momenta was used in order to prove the Mandelstam representation for potential scattering. One of the results was that the number of subtractions in the transmitted momentum depends critically on the location of the poles (shadow states) of the scattering matrix as a function of the complex orbital momentum. In this paper the study of the position of the shadow states is carried out in much greater detail. We give also related inequalities concerning bound states and resonances. The physical interpretation of the shadow states is then discussed.
The importance of the model is summarized by the following:
As a fundamental theory of strong interactions at high energies, Regge theory enjoyed a period of interest in the 1960s, but it was largely succeeded by quantum chromodynamics. As a phenomenological theory, it is still an indispensable tool for understanding near-beam line scattering and scattering at very large energies. Modern research focuses both on the connection to perturbation theory and to string theory.
During the 1980s, Regge also became interested in mathematical art, using Anschauliche Geometrie by David Hilbert and Stefan Cohn-Vossen as inspiration for a lot of mathematical objects.
Good bye, Mr. Regge, and thanks for all...
Labels: physics, tullio regge
Alan Guth, eternal inflation and the multiverse
http://t.co/CnvvOY0mAI about #AlanGuth #multiverse #CosmicInflation #icep2014
At the beginning of October, Alan Guth was at the workshop Fine-Tuning, Anthropics and the String Landscape in Madrid, and he concluded his talk with the following slide:
The complete talk, without question time, follows:
Labels: alan guth, cosmic inflation, cosmology, icep2014, video, youtube
Just a bit of blue
By Gianluigi Filippelli on Monday, October 13, 2014
http://t.co/hgbABOxUlm by @ulaulaman about #nobelprize2014 on #physics #led #light #semiconductors
Created with SketchBookX
One of the first classifications that you learn when you start to study the behavior of matter interacting with electricity is the one between conductors and insulators: a conductor is a material that easily allows the passage of electric charges, while an insulator prevents it (or makes it difficult). It is possible to characterize these two kinds of materials through the physical characteristics of the atoms that compose them. Indeed, we know that an atom consists of a positive nucleus with electron clouds around it: what characterizes a material is precisely the behavior of the outer electrons, those of the outermost band. The energy bands of every atom, in turn, have specific properties: there are the valence bands, whose electrons are used in chemical bonds, and the conduction bands, whose electrons are free to move, the "mavericks" of the atom, used for ionic bonds. At this point I hope it is simple to characterize a conducting material as one whose atoms have electrons both in the valence band and in the conduction band, while an insulating material has only the valence band filled.
Now, in band theory, the probability that an electron occupies a given band is calculated using the Fermi-Dirac distribution: this means that there is a non-zero probability that an insulator's electron in the valence band is promoted to the conduction band, but it is extremely low because of the large energy difference between the two levels. Moreover, there is an energy level, called the Fermi level, which in conductors lies within the conduction band, while in insulators it lies between the two bands, the valence and the conduction band, allowing a valence electron to jump more easily into the conduction band.
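To see how sharply the occupation of the conduction band depends on the gap, here is a minimal Python sketch of the Fermi-Dirac occupation probability; the 0.6 eV offset between the Fermi level and the conduction-band edge and the temperatures are illustrative numbers, not values from the post.

```python
import numpy as np

kB = 8.617e-5                     # Boltzmann constant in eV/K

def fermi_dirac(E, mu, T):
    """Occupation probability of a state of energy E (eV) at temperature T (K),
    for a Fermi level mu (eV)."""
    return 1.0 / (np.exp((E - mu) / (kB * T)) + 1.0)

# Fermi level taken as the zero of energy; conduction-band edge 0.6 eV above it
print(fermi_dirac(E=0.6, mu=0.0, T=300))      # tiny occupation at room temperature
print(fermi_dirac(E=0.6, mu=0.0, T=1500))     # noticeably larger when heated
```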
Labels: diods, led, light, nobel prize, nobel prize 2014, physics, semiconductors
Teachers for the peace
By Gianluigi Filippelli on Friday, October 10, 2014
http://t.co/W1K0rh9An6 #nobelprize2014 #peace #children #education #teaching
The Nobel Prize for Peace 2014 is awarded to Kailash Satyarthi and Malala Yousafzai, teachers and activists for children's rights,
for their struggle against the suppression of children and young people and for the right of all children to education
Labels: kailash satyarthi, malala yousafzai, nobel prize, nobel prize 2014, peace, school, teachers
Carlo Rubbia and the discoveries of the weak bosons
By Gianluigi Filippelli on Tuesday, October 07, 2014
http://t.co/KGVNarwZMG by @ulaulaman about #CarloRubbia #NobelPrize #physics #particlephysics
On that day 30 years ago, I was almost certainly at school. Physics was not yet my passion. Of course I started very well: when the teacher asked what space is, I immediately thought of the universe, but the question did not refer to that "space" but to another one, the geometric one. It is not those memories I want to indulge in, however, but a particular photo, in which Carlo Rubbia and Simon van der Meer, two goblets presumably of wine in hand, are celebrating the announcement of the Nobel Prize in Physics
for their decisive contributions to the large project, which led to the discovery of the field particles W and Z, communicators of weak interaction
The story of this Nobel, however, began eight years earlier, in 1976. In that year, in fact, the SPS, the Super Proton Synchrotron, began to operate at CERN, originally designed to accelerate particles up to an energy of 300 GeV.
The same year David Cline, Carlo Rubbia and Peter McIntyre proposed transforming the SPS into a proton-antiproton collider, with proton and antiproton beams counter-rotating in the same beam pipe to collide head-on. This would yield centre-of-mass energies in the 500-700 GeV range(1).
On the other hand antiprotons must be somehow collected. The corresponding beam was then
(...) stochastically cooled in the antiproton accumulator at 3.5 GeV, and this is where the expertise of Simon Van der Meer and coworkers played a decisive role(1).
Labels: biography, bruno pontecorvo, carlo rubbia, cern, cern60, energy, lhc, neutrinos, particle physics, physics, w boson, z boson
Sudoku clues
http://t.co/zk3P3rPjFZ #sudoku #mathematics #arXiv #abstract
The arXiv paper was published two years ago, but I think any time is a good time to play sudoku!
The sudoku minimum number of clues problem is the following question: what is the smallest number of clues that a sudoku puzzle can have? For several years it had been conjectured that the answer is 17. We have performed an exhaustive computer search for 16-clue sudoku puzzles, and did not find any, thus proving that the answer is indeed 17. In this article we describe our method and the actual search. As a part of this project we developed a novel way for enumerating hitting sets. The hitting set problem is computationally hard; it is one of Karp's 21 classic NP-complete problems. A standard backtracking algorithm for finding hitting sets would not be fast enough to search for a 16-clue sudoku puzzle exhaustively, even at today's supercomputer speeds. To make an exhaustive search possible, we designed an algorithm that allowed us to efficiently enumerate hitting sets of a suitable size.
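To make the hitting-set notion concrete, here is a deliberately naive Python sketch that finds a smallest hitting set by brute-force enumeration over candidate sizes. It is nothing like the optimized enumerator described in the paper, just an illustration of the problem being solved.

```python
from itertools import combinations

def min_hitting_set(sets):
    """Smallest set of elements intersecting every set in `sets`.
    Exponential brute force: for illustration only."""
    universe = sorted(set().union(*sets))
    for k in range(len(universe) + 1):
        for cand in combinations(universe, k):
            c = set(cand)
            if all(c & s for s in sets):
                return c
    return set()

# Each inner set must be "hit" by at least one chosen element
print(min_hitting_set([{1, 2}, {2, 3}, {3, 4}]))   # a size-2 hitting set, e.g. {1, 3}
```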
In the following video by Numberphile, James Grime discusses the paper results:
Labels: abstract, arxiv, james grime, mathematics, sudoku, video, youtube
From Nash equilibria to collective behavior
By Gianluigi Filippelli on Wednesday, October 01, 2014
https://twitter.com/ulaulaman/status/517303481565458432 by @ulaulaman about #Nash equilibria and their role in collective behavior
The Nash equilibrium is an important tool in game theory:
[It] is a solution concept of a non-cooperative game involving two or more players, in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. If each player has chosen a strategy and no player can benefit by changing strategies while the other players keep theirs unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash equilibrium.
Stated simply, Amy and Will are in Nash equilibrium if Amy is making the best decision she can, taking into account Will's decision, and Will is making the best decision he can, taking into account Amy's decision. Likewise, a group of players are in Nash equilibrium if each one is making the best decision that he or she can, taking into account the decisions of the others in the game.
Nash equilibria may, for example, be found in the game of coordination, in the prisoner's dilemma, in Braess's paradox(6), or more generally in any strategy game. In particular, given a game, we can ask whether or not it has a Nash equilibrium: deciding the existence of Nash equilibria appears to be an intractable problem if there is no restriction on the relationships among players. Moreover, for a strong Nash equilibrium the problem sits on the second level of the polynomial hierarchy, a scale for classifying problems according to the complexity of their resolution(1).
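For the simplest case, pure-strategy equilibria of a two-player game in normal form can be found by brute force, checking every strategy profile for unilateral deviations. The Python sketch below does this for a standard prisoner's-dilemma payoff matrix (textbook numbers, not taken from the post).

```python
def pure_nash_equilibria(payoff_row, payoff_col):
    """All pure-strategy profiles (i, j) from which neither player
    gains by deviating unilaterally."""
    rows, cols = len(payoff_row), len(payoff_row[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            row_best = all(payoff_row[i][j] >= payoff_row[k][j] for k in range(rows))
            col_best = all(payoff_col[i][j] >= payoff_col[i][l] for l in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect (higher payoff is better)
A = [[3, 0],
     [5, 1]]        # row player's payoffs
B = [[3, 5],
     [0, 1]]        # column player's payoffs
print(pure_nash_equilibria(A, B))   # [(1, 1)]: mutual defection is the only equilibrium
```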
In addition to this study about Nash equilibria, Gianluigi Greco (one of my high school classmates), together with Francesco Scarcello, also studied Nash equilibria (in this case forced equilibria) in graphical games, where a graphical game is a game represented in a graphical manner, through a graph(2).
Labels: collective behavior, gianluigi greco, john nash, logic, mathematics, nash equilibria
CERN's 60th Birthday
By Gianluigi Filippelli on Monday, September 29, 2014
http://t.co/zU9b7V4idL by @ulaulaman about #CERN60
The day to celebrate CERN's birthday has arrived:
The convention establishing CERN was ratified on 29 September 1954 by 12 countries in Western Europe. The acronym CERN originally stood in French for Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), which was a provisional council for setting up the laboratory, established by 12 European governments in 1952. The acronym was retained for the new laboratory after the provisional council was dissolved, even though the name changed to the current Organisation Européenne pour la Recherche Nucléaire (European Organization for Nuclear Research) in 1954.
The most recent discovery at the laboratories is the Higgs boson (or a particle that looks very much like it), but there are other successes in CERN's history:
1973: The discovery of neutral currents in the Gargamelle bubble chamber;
1983: The discovery of W and Z bosons in the UA1 and UA2 experiments;
1989: The determination of the number of light neutrino families at the Large Electron–Positron Collider (LEP) operating on the Z boson peak;
1995: The first creation of antihydrogen atoms in the PS210 experiment;
1999: The discovery of direct CP violation in the NA48 experiment;
2010: The isolation of 38 atoms of antihydrogen;
2011: Maintaining antihydrogen for over 15 minutes;
There are two Nobel Prizes directly connected to CERN:
1984: to Carlo Rubbia and Simon Van der Meer for
their decisive contributions to the large project which led to the discovery of the field particles W and Z, communicators of the weak interaction
1992: to Georges Charpak for
his invention and development of particle detectors, in particular the multiwire proportional chamber, a breakthrough in the technique for exploring the innermost parts of matter
On CERN's webcast you can see the official ceremony
Labels: carlo rubbia, cern, cern60, georges charpak, particle physics, physics, simon van der meer
Foucault and the pendulum
By Gianluigi Filippelli on Sunday, September 28, 2014
http://t.co/AphFwEZfQ2 #foucaultpendulum #physics #earthrotation
The first public exhibition of a Foucault pendulum took place in February 1851 in the Meridian of the Paris Observatory. A few weeks later Foucault made his most famous pendulum when he suspended a 28 kg brass-coated lead bob with a 67 meter long wire from the dome of the Panthéon, Paris. The plane of the pendulum's swing rotated clockwise 11° per hour, making a full circle in 32.7 hours. The original bob used in 1851 at the Panthéon was moved in 1855 to the Conservatoire des Arts et Métiers in Paris. A second temporary installation was made for the 50th anniversary in 1902.
During museum reconstruction in the 1990s, the original pendulum was temporarily displayed at the Panthéon (1995), but was later returned to the Musée des Arts et Métiers before it reopened in 2000. On April 6, 2010, the cable suspending the bob in the Musée des Arts et Métiers snapped, causing irreparable damage to the pendulum and to the marble flooring of the museum. An exact copy of the original pendulum had been swinging permanently since 1995 under the dome of the Panthéon, Paris until 2014 when it was taken down during repair work to the building. Current monument staff estimate the pendulum will be re-installed in 2017
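The 11° per hour figure quoted above follows from the sine-of-latitude law for the Foucault pendulum; a short Python sketch reproduces it, taking the latitude of Paris (about 48.85° N, a standard value not given in the post) as the only input.

```python
import math

omega_earth = 360.0 / 23.934          # Earth's rotation in degrees per hour (sidereal day)
latitude = math.radians(48.85)        # Paris

rate = omega_earth * math.sin(latitude)   # precession rate of the swing plane
print(rate, 360.0 / rate)                 # roughly 11 deg/h, a full circle in about 32 h
```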
Labels: foucault pendulum, leon foucault, pendulum, physics, video, youtube
Idiosyncratic Thinking: a computer heuristics lecture
http://t.co/7JB3CPaQt9 #Feynman
Richard Feynman, Winner of the 1965 Nobel Prize in Physics, gives us an insightful lecture about computer heuristics: how computers work, how they file information, how they handle data, how they use their information in allocated processing in a finite amount of time to solve problems and how they actually compute values of interest to human beings. These topics are essential in the study of what processes reduce the amount of work done in solving a particular problem in computers, giving them speeds of solving problems that can outmatch humans in certain fields but which have not yet reached the complexity of human driven intelligence. The question if human thought is a series of fixed processes that could be, in principle, imitated by a computer is a major theme of this lecture and, in Feynman's trademark style of teaching, gives us clear and yet very powerful answers for this field which has gone on to consume so much of our lives today.
Labels: computer science, lectures, richard feynman, video, youtube
Experiments with inertia
a couple of home experiments about #inertia
from Science Comics #2
Labels: comics, inertia, physics, principle of inertia, public domain
Witches Kitchen 1971
By Gianluigi Filippelli on Saturday, September 06, 2014
http://t.co/sHn7nJ4uFj a #funny image about #mathematics by Alexander Grothendieck
Riemann-Roch Theorem: The final cry: The diagram is commutative! To give an approximate sense to the statement about f: X → Y, I had to abuse the listeners' patience for almost two hours. Black on white (in Springer lecture notes) it probably takes about 400, 500 pages. A gripping example of how our thirst for knowledge and discovery indulges itself more and more in a logical delirium far removed from life, while life itself is going to Hell in a thousand ways and is under the threat of final extermination. High time to change our course!
Alexander Grothendieck about the Grothendieck–Riemann–Roch theorem via Math 245
Read also: how does one understand GRR?
Labels: bernhard riemann, funny, mathematics
Aidan Dwyer and a new fotovoltaic design
By Gianluigi Filippelli on Sunday, August 31, 2014
Aidan Dwyer, was one of twelve students to receive the 2011 Young Naturalist Award from the American Museum of Natural History in New York for creating an innovative approach to collecting sunlight in photovoltaic arrays. Dwyer's investigation into the mathematical relationship of the arrangement of branches and leaves in deciduous trees led to his discovery that these species utilized the Fibonacci Sequence in their branch and leaf design. Dwyer transformed this organic concept into a photovoltaic array based upon the Fibonacci pattern of an oak tree and conducted experiments comparing his design to conventional solar panel arrays. After computer analysis, Dwyer discovered that his Fibonacci tree design surpassed the performance of conventional methods in sunlight collection and utilized the greatest quantity of PV panels within the least amount of physical space, making it a versatile and aesthetically pleasing solution for confined and obstructed urban areas.
Labels: aidan dwyer, fibonacci numbers, photovoltaic panels, solar power, video
The discover of Morniel Mathaway
http://youtu.be/OoYkZyZ6XSU a radio drama by William Tenn
Following Deutsch and Lockwood(1), there are two types of time paradoxes: inconsistency paradoxes and knowledge paradoxes.
An example of the first type is the grandfather paradox, introduced by the french writer René Barjavel in Le voyageur imprudent (1943 - Future Times Three).
An example of the second type is The discover of Morniel Mathaway, a radio science fiction drama by William Tenn. It was originally transmitted by the show X Minus One by NBC:
A professor of art history from the future travels by time machine some centuries into the past in search of an artist whose works are celebrated in the professor's time. On meeting the artist in the flesh, the professor is surprised to find the artist's current paintings talentlessly amateurish. The professor happens to have brought with him from the future a catalogue containing reproductions of the paintings later attributed to the artist, which the professor has come to see are far too accomplished to be the artist's work. When he shows this to the artist, the latter quickly grasps the situation, and, by means of a ruse, succeeds in using the time machine to travel into the future (taking the catalogue with him), where he realizes he will be welcomed as a celebrity, so stranding the professor in the "present". To avoid entanglements with authority the critic assumes the artist's identity and later achieves fame for producing what he believes are just copies of the paintings he recalls from the catalogue. This means that he, and not the artist, created the paintings in the catalogue. But he could not have done so without having seen the catalogue in the first place, and so we are faced with a causal loop.
Labels: radio, science fiction, time loops, time travel, william tenn, youtube
The solar efficiency of Superman
By Gianluigi Filippelli on Tuesday, August 19, 2014
by @ulaulaman http://t.co/WGbVdfv0nk about #Superman #physics and #solar #energy
In the last saga of the JLA by Grant Morrison, World War III, Superman, leaping against the bomb inside Mageddon says:
The way in which Superman gets his powers, or rather the way in which they are explained, has however changed over time. In Action Comics #1, the character's debut, Jerry Siegel, combining genetics and evolution, wrote that on his planet of origin
the physical structure of the inhabitants had advanced millions of years compared to ours. Reaching maturity, people of that race earned a titanic force!
In Superman #1, however, Siegel focuses attention on the different gravity between Earth and Krypton, the latter having a greater radius than our planet and therefore a greater gravity. A similar claim also appears in Ports of Call by Jack Vance. In order to verify it, we must start from the definition of density: \[\rho = \frac{M}{V}\] where $M$ is the mass and $V$ the volume of the object, or, in our case, of the planet.
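The argument rests on the fact that, at fixed density, the surface gravity $g = GM/R^2 = \frac{4}{3}\pi G \rho R$ grows with the radius. A small Python sketch makes the point; the factor of two chosen for Krypton's radius is purely illustrative.

```python
import math

G = 6.674e-11                                 # gravitational constant, SI units

def surface_gravity(density, radius):
    """g = G*M/R**2 with M = (4/3)*pi*R**3*density, i.e. g = (4/3)*pi*G*density*R."""
    return 4.0 / 3.0 * math.pi * G * density * radius

rho_earth, R_earth = 5514.0, 6.371e6          # Earth's mean density (kg/m^3) and radius (m)
print(surface_gravity(rho_earth, R_earth))        # about 9.8 m/s^2
print(surface_gravity(rho_earth, 2 * R_earth))    # a same-density planet twice as large: ~19.6 m/s^2
```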
Labels: physics, solar energy, superman
Mathematics is a unique aspect of human thought
By Gianluigi Filippelli on Saturday, August 16, 2014
http://t.co/h9CCSAaER0 #IsaacAsimov about #mathematics
Mathematics is a unique aspect of human thought, and its history differs in essence from all other histories.
As time goes on, nearly every field of human endeavor is marked by changes which can be considered as correction and/or extension. Thus, the changes in the evolving history of political and military events are always chaotic; there is no way to predict the rise of a Genghis Khan, for example, or the consequences of the short-lived Mongol Empire. Other changes are a matter of fashion and subjective opinion. The cave-paintings of 25,000 years ago are generally considered great art, and while art has continuously-even chaotically-changed in the subsequent millennia, there are elements of greatness in all the fashions. Similarly, each society considers its own ways natural and rational, and finds the ways of other societies to be odd, laughable, or repulsive.
But only among the sciences is there true progress; only there is the record one of continuous advance toward ever greater heights.
And yet, among most branches of science, the process of progress is one of both correction and extension. Aristotle, one of the greatest minds ever to contemplate physical laws, was quite wrong in his views on falling bodies and had to be corrected by Galileo in the 1590s. Galen, the greatest of ancient physicians, was not allowed to study human cadavers and was quite wrong in his anatomical and physiological conclusions. He had to be corrected by Vesalius in 1543 and Harvey in 1628. Even Newton, the greatest of all scientists, was wrong in his view of the nature of light, of the achromaticity of lenses, and missed the existence of spectral lines. His masterpiece, the laws of motion and the theory of universal gravitation, had to be modified by Einstein in 1916.
Now we can see what makes mathematics unique. Only in mathematics is there no significant correction-only extension. Once the Greeks had developed the deductive method, they were correct in what they did, correct for all time. Euclid was incomplete and his work has been extended enormously, but it has not had to be corrected. His theorems are, every one of them, valid to this day.
Ptolemy may have developed an erroneous picture of the planetary system, but the system of trigonometry he worked out to help him with his calculations remains correct forever.
Each great mathematician adds to what came previously, but nothing needs to be uprooted. Consequently, when we read a book like A History Of Mathematics, we get the picture of a mounting structure, ever taller and broader and more beautiful and magnificent and with a foundation, moreover, that is as untainted and as functional now as it was when Thales worked out the first geometrical theorems nearly 26 centuries ago.
Nothing pertaining to humanity becomes us so well as mathematics. There, and only there, do we touch the human mind at its peak.
Isaac Asimov from the foreword to the second edition of A History of Mathematics by Carl C. Boyer and Uta C. Merzbach
Labels: isaac asimov, mathematics
Maryam Mirzakhani and Riemann surfaces
By Gianluigi Filippelli on Thursday, August 14, 2014
http://t.co/ZAdRPeiy8b Maryam Mirzakhani wins #FieldsMedal with Riemann surfaces
Maryam Mirzakhani has made several contributions to the theory of moduli spaces of Riemann surfaces. In her early work, Maryam Mirzakhani discovered a formula expressing the volume of a moduli space with a given genus as a polynomial in the number of boundary components. This led her to obtain a new proof for the conjecture of Edward Witten on the intersection numbers of tautology classes on moduli space as well as an asymptotic formula for the length of simple closed geodesics on a compact hyperbolic surface. Her subsequent work has focused on Teichmüller dynamics of moduli space. In particular, she was able to prove the long-standing conjecture that William Thurston's earthquake flow on Teichmüller space is ergodic.
Mirzakhani was awarded the Fields Medal in 2014 for "her outstanding contributions to the dynamics and geometry of Riemann surfaces and their moduli spaces".
Riemann surfaces are one dimensional complex manifolds introduced by Riemann: in some sense, his approach is a cut-and-paste procedure.
He imagined taking as many copies of the open set as there are branches of the function and joining them together along the branch cuts. To understand how this works, imagine cutting out sheets along the branch curves and stacking them on top of the complex plane. On each sheet, we define one branch of the function. We glue the different sheets to each other in such a way that the branch of the function on one sheet joins continuously at the seam with the branch defined on the other sheet. For instance, in the case of the square root, we join each end of the sheet corresponding to the positive branch with the opposite end of the sheet corresponding to the negative branch. In the case of the logarithm, we join one end of the sheet corresponding to the $2 \pi n$ branch with an end of the $2 (n+1) \pi$ sheet to obtain a spiral structure which looks like a parking garage.
A more formal approach to the construction of Riemann surfaces was developed by Hermann Weyl, and the work of Maryam Mirzakhani follows this line of research.
Some papers:
Mirzakhani M. (2007). Weil-Petersson volumes and intersection theory on the moduli space of curves, Journal of the American Mathematical Society, 20 (01) 1-24. DOI: http://dx.doi.org/10.1090/s0894-0347-06-00526-1
Mirzakhani M. (2006). Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces, Inventiones mathematicae, 167 (1) 179-222. DOI: http://dx.doi.org/10.1007/s00222-006-0013-2 (pdf)
Mirzakhani M. (2008). Growth of the number of simple closed geodesics on hyperbolic surfaces, Annals of Mathematics, 168 (1) 97-125. DOI: http://dx.doi.org/10.4007/annals.2008.168.97 (pdf)
Press release by Stanford
The Fields Medal news on Nature
The official press release in pdf
plus math magazine
Labels: fields medal, maryam mirzakhani, mathematics, riemann surfaces
The equation of happiness
By Gianluigi Filippelli on Wednesday, August 06, 2014
by @ulaulaman http://t.co/crZXpaphqA #mathematics #happiness #smile
\[H(t) = w_0 + w_1 \sum_{j=1}^t \gamma^{t-j} CR_j + w_2 \sum_{j=1}^t \gamma^{t-j} EV_j + w_3 \sum_{j=1}^t \gamma^{t-j} RPE_j\] I don't know if my intuition is correct, but the equation from Rutledge et al. reminds me of a neural network, or more precisely of a sum of three different neural networks. In any case, this could become an important step towards a mathematical description of our brain.
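Just to make the structure of the formula concrete, here is a small computational sketch of the model; the weights $w_0, \dots, w_3$ and the forgetting factor $\gamma$ below are made-up illustrative values, not the ones fitted by Rutledge et al.

def happiness(t, CR, EV, RPE, w0=0.5, w1=0.4, w2=0.3, w3=0.6, gamma=0.7):
    """H(t) for the 1-indexed trial t, given per-trial lists CR, EV and RPE."""
    def decayed(x):
        # exponentially decaying sum over the past trials, gamma^(t-j) * x_j
        return sum(gamma ** (t - j) * x[j - 1] for j in range(1, t + 1))
    return w0 + w1 * decayed(CR) + w2 * decayed(EV) + w3 * decayed(RPE)

# a toy sequence of three trials
CR  = [1.0, 0.0, 0.5]    # certain rewards
EV  = [0.2, 0.8, 0.4]    # expected values of the chosen gambles
RPE = [0.0, 0.5, -0.3]   # reward prediction errors

for t in (1, 2, 3):
    print(t, round(happiness(t, CR, EV, RPE), 3))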
A common question in the social science of well-being asks, "How happy do you feel on a scale of 0 to 10?" Responses are often related to life circumstances, including wealth. By asking people about their feelings as they go about their lives, ongoing happiness and life events have been linked, but the neural mechanisms underlying this relationship are unknown. To investigate it, we presented subjects with a decision-making task involving monetary gains and losses and repeatedly asked them to report their momentary happiness. We built a computational model in which happiness reports were construed as an emotional reactivity to recent rewards and expectations. Using functional MRI, we demonstrated that neural signals during task events account for changes in happiness.
Rutledge R.B., Skandali N., Dayan P. & Dolan R.J. (2014). A computational and neural model of momentary subjective well-being., Proceedings of the National Academy of Sciences of the United States of America, PMID: http://www.ncbi.nlm.nih.gov/pubmed/25092308
via design & trends
Labels: mathematics
Generalized Venn diagram for genetics
By Gianluigi Filippelli on Monday, August 04, 2014
by @ulaulaman http://t.co/MkGI7L546N #VennDay #VennDiagram #genetics
A generalized Venn diagram with three sets $A$, $B$ and $C$ and their intersections. From this representation, the different set sizes are easily observed. Furthermore, if individual elements (genes) are contained in more than one set (functional category), the intersection sizes give a direct view on how many genes are involved in possibly related functions. During optimization, the localization of the circles is altered to satisfy the possibly contradictory constraints of circle size and intersection size.
For the purpose of the paper, the researchers used polygons instead of circles. In order to compute the polygons' area, they used the simple formula: \[A = \sum_{k=1}^L x_k (y_{k+1} - y_k)\] where $L$ is the number of edges of the polygon, and $y_{L+1} := y_1$.
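As a quick check, the formula is easy to implement (this little sketch is mine, not from the paper); for counter-clockwise vertices it returns the signed area, e.g. 1 for the unit square.

def polygon_area(vertices):
    """Signed area of a simple polygon: A = sum_k x_k * (y_{k+1} - y_k), with y_{L+1} := y_1."""
    L = len(vertices)
    area = 0.0
    for k in range(L):
        x_k, y_k = vertices[k]
        y_next = vertices[(k + 1) % L][1]   # wrap around: y_{L+1} = y_1
        area += x_k * (y_next - y_k)
    return area

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, counter-clockwise
print(polygon_area(square))                 # 1.0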
Kestler, H., Muller, A., Gress, T., & Buchholz, M. (2004). Generalized Venn diagrams: a new method of visualizing complex genetic set relations Bioinformatics, 21 (8), 1592-1595 DOI: 10.1093/bioinformatics/bti169
Labels: genetics, john venn, mathematics, venn day, venn diagrams
Turing's morphogenesis and the fingers' formation
By Gianluigi Filippelli on Friday, August 01, 2014
by @ulaulaman http://t.co/9Q5rVkVzEc about #Turing #morphogenesis
In today's issue of Science, a paper is published about the application of Turing's morphogenesis to the formation of fingers. At the moment I'm not able to download the papers, so I simply publish the editors' summaries. First of all, I present the incipit of the paper by Aimée Zuniga and Rolf Zeller(2):
Alan Turing is best known as the father of theoretical computer sciences and for his role in cracking the Enigma encryption codes during World War II. He was also interested in mathematical biology and published a theoretical rationale for the self-regulation and patterning of tissues in embryos. The so-called reaction-diffusion model allows mathematical simulation of diverse types of embryonic patterns with astonishing accuracy. During the past two decades, the existence of Turing-type mechanisms has been experimentally explored and is now well established in developmental systems such as skin pigmentation patterning in fishes, and hair and feather follicle patterning in mouse and chicken embryos. However, the extent to which Turing-type mechanisms control patterning of vertebrate organs is less clear. Often, the relevant signaling interactions are not fully understood and/or Turing-like features have not been thoroughly verified by experimentation and/or genetic analysis. Raspopovic et al.(1) now make a good case for Turing-like features in the periodic pattern of digits by identifying the molecular architecture of what appears to be a Turing network functioning in positioning the digit primordia within mouse limb buds.
And now the summary of the results:
Most researchers today believe that each finger forms because of its unique position within the early limb bud. However, 30 years ago, developmental biologists proposed that the arrangement of fingers followed the Turing pattern, a self-organizing process during early embryo development. Raspopovic et al.(1) provide evidence to support a Turing mechanism (see the Perspective by Zuniga and Zeller). They reveal that Bmp and Wnt signaling pathways, together with the gene Sox9, form a Turing network. The authors used this network to generate a computer model capable of accurately reproducing the patterns that cells follow as the embryo grows fingers.
(1) Raspopovic, J., Marcon, L., Russo, L., & Sharpe, J. (2014). Digit patterning is controlled by a Bmp-Sox9-Wnt Turing network modulated by morphogen gradients Science, 345 (6196), 566-570 DOI: 10.1126/science.1252960
(2) Zuniga, A., & Zeller, R. (2014). In Turing's hands--the making of digits Science, 345 (6196), 516-517 DOI: 10.1126/science.1257501
Read also on Doc Madhattan:
Doc Madhattan: Matching pennies in Turing's brithday
Turing patterns in coats and sounds
Genetics, evolution and Turing's patterns
Calculating machines
Turing, Fibonacci and the sunflowers
Turing and the ecological basis of morphogenesis
Labels: alan turing, morphogenesis
A trigonometric proof of the pythagorean theorem
by @ulaulaman via @MathUpdate http://t.co/LJX8gSX7xf
\[\alpha + \beta = \frac{\pi}{2}\] \[\sin (\alpha + \beta) = \sin \frac{\pi}{2}\] \[\sin \alpha \cdot \cos \beta + \sin \beta \cdot \cos \alpha = 1\] \[\frac{a}{c} \cdot \frac{a}{c} + \frac{b}{c} \cdot \frac{b}{c} = 1\] \[\frac{a^2}{c^2} + \frac{b^2}{c^2} = 1\]
\[a^2 + b^2 = c^2\]
via @MathUpdate
Labels: mathematics, pythagoras, pythagorean theorem, trigonometry
Fifty years of CP violation
via @CERN http://t.co/9Rac42mBVh #CPviolation #CPsymmetry #matter #antimatter
CP violation is a violation of CP-symmetry, the combination of the charge conjugation symmetry (C) and the parity symmetry (P).
CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle, and then its spatial coordinates are inverted.
CP violation was discovered in 1964 by Christenson, Cronin, Fitch, and Turlay (Cronin and Fitch were awarded the Nobel Prize in 1980) while studying kaon decays, and it could have a key role in the matter-antimatter imbalance.
Now the CERN Courier has dedicated a special issue to the fifty years since the discovery (download here).
Christenson, J., Cronin, J., Fitch, V., & Turlay, R. (1964). Evidence for the 2π Decay of the K_{2}^{0} Meson Physical Review Letters, 13 (4), 138-140 DOI: 10.1103/PhysRevLett.13.138
Labels: antimatter, cern, cp symmetry, cp violation, lhc, matter
Gods, philosophy and computers
by @ulaulaman http://t.co/Q3AODpvKAs #Godel #ontologicalproof #god #computer
The ontological argument for the existence of God was introduced for the first time by St. Anselm in 1078:
God, by definition, is that for which no greater can be conceived. God exists in the understanding. If God exists in the understanding, we could imagine Him to be greater by existing in reality. Therefore, God must exist.
Many philosophers, mathematicians and logicians have proposed their own ontological argument, for example Descartes, Leibniz, Frege, and also Kurt Gödel, who proposed the most formal ontological proof:
The proof was published in 1987 (Gödel died in 1978), and many logicians have discussed it. One of the latest papers about the argument is an arXiv preprint that prompted Anna Limind to write that European Mathematicians 'Prove' the Existence of God, but the aim of the paper is to check the consistency of the proof and not the truth of the theorem (I think that the theorem is, simply, undecidable), and also to start a new discipline: computer-philosophy.
Indeed Benzmüller and Paleo developed an algorithm in order to use a computer to check the ontological proof. So the work:
(...) opens new perspectives for a computer-assisted theoretical philosophy. The critical discussion of the underlying concepts, definitions and axioms remains a human responsibility, but the computer can assist in building and checking rigorously correct logical arguments. In case of logico-philosophical disputes, the computer can check the disputing arguments and partially fulfill Leibniz' dictum: Calculemus
Read also: Spiegel Online International
Christoph Benzmüller & Bruno Woltzenlogel Paleo (2013). Formalization, Mechanization and Automation of Gödel's Proof of God's Existence, arXiv: 1308.4526v4
Labels: computer science, gottfried leibniz, kurt godel, logic, ontological proof, phylosophy
How quantum mechanics explains global warming
posted by @ulaulaman about #globalwarming http://t.co/VDlaEt2s5m
The physician Mark Schleupner, a Roanoke native, writes about global warming:
So, according to NASA scientists, if all the ice in 14 million sq km Antarctica melts, sea levels will rise more than 200 feet. Greenland alone has another huge chunk of the Earth's water tied up in ice; some scientists say that its ice sheet has passed a tipping point and will be gone in the next centuries, raising ocean levels by 24 feet. These are scary amounts of sea level rise that put huge areas of population centers (New York, Boston, Miami, San Francisco, etc.) under water.
In the end, one can deny climate change (although I'd not recommend it), but one cannot deny math.
Well, it's really interesting, about the climate change, to see the following Ted-Ed lesson:
You've probably heard that carbon dioxide is warming the Earth. But how exactly is it doing it? Lieven Scheire uses a rainbow, a light bulb and a bit of quantum physics to describe the science behind global warming.
Labels: global warming, lieven scheire, quantum mechanics, ted ed lesson, video, youtube
Mathematicians discuss the Snowden revelations
Recently I have not been able to read the Notices of the AMS, so I missed the most recent discussion in this journal about the revelations made by Edward Snowden about the NSA. Thanks to the n-Category Café I have recovered the letters about this topic:
In the first part of 2013, Edward Snowden, a former contractor for the National Security Agency (NSA), handed over to journalists a trove of secret NSA documents. First described in the media in June 2013, these documents revealed extensive spying programs of the NSA and other governmental organizations, such as the United Kingdom's GCHQ (Government Communications Headquarters). The disclosures reverberated around the world, influencing the bottom lines of big businesses, the upper echelons of international relations, and the everyday activities of ordinary people whose lives are increasingly mirrored in the Internet and on cell phone networks.
The revelations also hit home in the mathematical sciences community. The NSA is often said to be the world's largest employer of mathematicians; it's where many academic mathematicians in the US see their students get jobs. The same is true for GCHQ in the UK. Many academic mathematicians in the US and the UK have done work for these organizations, sometimes during summers or sabbaticals. Some US mathematicians decided to take on NSA work after the 9/11 attacks as a contribution to national defense.
The discussion on Notices: part 1, part 2
Labels: american mathematical society, edward snowden, mathematics, notices of ams
Beach sand for long cycle life batteries
#sand #battery #chemistry #energy
This is the holy grail – a low cost, non-toxic, environmentally friendly way to produce high performance lithium ion battery anodes
Zachary Favors
Schematic of the heat scavenger-assisted Mg reduction process.
Herein, porous nano-silicon has been synthesized via a highly scalable heat scavenger-assisted magnesiothermic reduction of beach sand. This environmentally benign, highly abundant, and low cost SiO2 source allows for production of nano-silicon at the industry level with excellent electrochemical performance as an anode material for Li-ion batteries. The addition of NaCl, as an effective heat scavenger for the highly exothermic magnesium reduction process, promotes the formation of an interconnected 3D network of nano-silicon with a thickness of 8-10 nm. Carbon coated nano-silicon electrodes achieve remarkable electrochemical performance with a capacity of 1024 mAhg−1 at 2 Ag−1 after 1000 cycles.
Favors, Z., Wang, W., Bay, H., Mutlu, Z., Ahmed, K., Liu, C., Ozkan, M., & Ozkan, C. (2014). Scalable Synthesis of Nano-Silicon from Beach Sand for Long Cycle Life Li-ion Batteries Scientific Reports, 4 DOI: 10.1038/srep05623
(via Popular Science)
Labels: abstract, chemistry, energy
Mesons produced in a bubble chamber
by @ulaulaman about #mesons #bubblechamber #CERN #particles #physics
A bubble chamber is a vessel filled with a superheated liquid (typically hydrogen) whose molecules are ionized by the passage of a charged particle, thus producing bubbles. In this way the trajectories of the particles become visible and it is possible to study the various decays(2).
The bubble chamber was invented by Donald Glaser(1) in 1952, who won the Nobel Prize in 1960.
(1) Glaser, D. (1952). Some Effects of Ionizing Radiation on the Formation of Bubbles in Liquids Physical Review, 87 (4), 665-665 DOI: 10.1103/PhysRev.87.665
(2) Image from the italian version of Weisskopf, V. (1968). The Three Spectroscopies Scientific American, 218 (5), 15-29 DOI: 10.1038/scientificamerican0568-15
Labels: bubble chamber, cern, history of physics, mesons, particle physics
Brazuca, a Pogorelov's ball
By Gianluigi Filippelli on Sunday, June 29, 2014
posted by @ulaulaman about #Brazuca #geometry #WorldCup2014 #Brazil2014
Brazuca is the ball of the 2014 World Cup. The particular pattern of its surface is a consequence of Pogorelov's theorem about convex polyhedra:
A domain is convex if the segment joining any two of its points is completely contained within the domain.
Now consider two convex domains in the plane whose boundaries are the same length.(1)
Now we can create a solid using the two previous domains: we must simply connect every point of one boundary with a point of the other boundary, obtaining a convex polyhedron, as shown by Pogorelov in the 1970s.
The object you have built consists of two developable surfaces glued together on edge.
Instead of using two domains, you can, for example, start from six convex domains as the "square faces" of a cube. On the boundary of each of these domains, you choose four points, as the vertices of the "square". We assume that the four "corners" you have chosen behave like vertices, that is to say that the domains have angles at these points.(1)
Labels: brazil 2014, brazuca, geometry, mathematics, soccer, world cup
The damages of the heavy metal
By Gianluigi Filippelli on Friday, June 27, 2014
by @ulaulaman via @verascienza about #heavymetal #chemistry #health
A heavy metal is any metal or metalloid of environmental concern. The term originated with reference to the harmful effects of cadmium, mercury and lead, all of which are denser than iron. It has since been applied to any other similarly toxic metal, or metalloid such as arsenic, regardless of density. Commonly encountered heavy metals are chromium, cobalt, nickel, copper, zinc, arsenic, selenium, silver, cadmium, antimony, mercury, thallium and lead.
Heavy metals have a lot of detrimental effects on our body:
Aluminum - Damage to the central nervous system, dementia, memory loss
Antimony - Damage to heart, diarrhea, vomiting, stomach ulcer
Arsenic - Lymphatic cancer, liver cancer, skin cancer
Barium - Increased blood pressure, paralysis
Bismuth - Dermatitis, stomatitis, colitis, diarrhea
Cadmium - Diarrhea, stomach pains, vomiting, bone fractures, damage to the immune system, psychological disorders
Chromium - Damage to the kidneys and liver, respiratory problems, lung cancer, death
Copper - Irritation of the nose, mouth and eyes, liver cirrhosis, brain and kidney damage
Gallium - Irritation of the throat, difficulty breathing, pain in the chest
Hafnium - Irritation of eyes, skin and mucous membranes
Indium - Damage to the heart, kidneys and liver
Iridium - Irritation of the eyes and digestive tract
Lanthanum - Lung cancer, liver damage
Lead - Found in fruits, vegetables, meats, cereals, wine and cigarettes. Causes brain damage, birth defects, kidney damage, learning disabilities, destruction of the nervous system
Manganese - Blood clotting, glucose intolerance, disorders of the skeleton
Mercury - Destruction of the nervous system, brain damage, DNA damage
Nickel - Pulmonary embolism, breathing difficulties, asthma and chronic bronchitis, allergic skin reaction
Palladium - Very toxic and carcinogenic, irritant
Platinum - Alterations of DNA, cancer, and damage to intestine and kidney
Rhodium - Stains the skin, potentially toxic and carcinogenic
Ruthenium - Very toxic and carcinogenic, damage to the bones
Scandium - Pulmonary embolism, threatens the liver when accumulated in the body
Silver - Used as a coloring agent E174, headache, breathing difficulties, skin allergies, with extreme concentration it causes coma and death
Strontium - Lung cancer, in children difficulty of bone development
Tantalum - Irritation to the eyes and to the skin, upper respiratory tract lesion
Thallium - Used as a rat poison; causes stomach and nervous system damage, coma and death; those who survive are left with nerve damage and paralysis
Tin - Irritation of the eyes and skin, headaches, stomach aches, difficulty to urinate
Tungsten - Damage to the mucous membranes and membranes, eye irritation
Vanadium - heart and cardiovascular disorders, inflammation of the stomach and intestine
Yttrium - Very toxic, lung cancer, pulmonary embolism, liver damage
via verascienza
Labels: chemistry, health, heavy metal
The Championships' Final
By Gianluigi Filippelli on Monday, June 16, 2014
by @ulaulaman via @Airi_Talk about #WorldCup2014 #Brazil2014 predictions: #ESP-#GER
Jürgen Gerhards, Michael Mutz and Gert Wagner developed an economic model in order to predict the results of national football teams in international cups. The researchers evaluated the market value of every player and assigned each team an economic value: in this way they predicted the winners in 2006 (World Cup: Italy), 2008 (Euro Cup: Spain), 2010 (WC: Spain) and 2012 (EC: Spain). Starting from the previous results, the three researchers drew up the bracket of the matches from the round of sixteen onwards:
The predicted final is between Spain and Germany, with Spain the favorite, but there is a little hope for the other teams, first of all Brazil: in 2012 the model failed to predict the other finalist of the Euro Cup, Italy, which according to the model should not even have reached the semi-finals:
Marketization and globalization have changed professional soccer and the composition of soccer teams fundamentally. Against the background of these shifting conditions this paper investigates the extent to which the success of soccer teams in their national leagues is determined by (a) the monetary value of the team expressed in its market value, (b) inequality within the team, (c) the cultural diversity of the team, and (d) the degree of turnover among team members. The empirical analyses refer to the soccer season 2012/13 and include the twelve most important European soccer leagues. The findings demonstrate that success in a national soccer championship is highly predictable; nearly all of our hypotheses are confirmed. The market value of the team is, in today's world, by far the most important single predictor of athletic success in professional soccer.
Jürgen Gerhards, Michael Mutz, Gert Wagner (2014). Die Berechnung des Siegers: Marktwert, Ungleichheit, Diversität und Routine als Einflussfaktoren auf die Leistung professioneller Fußballteams. Zeitschrift für Soziologie, Jg. 43, Heft 3, 231-250
via University of Berlin, galileonet.it, airicerca.org
Labels: brazil 2014, economics, football, soccer, statistics, world cup
#abstract about #soccer #WorldCup
Soccer balls are typically constructed from 32 pentagonal and hexagonal panels. Recently, however, newer balls named Cafusa, Teamgeist 2, and Jabulani were respectively produced from 32, 14, and 8 panels with shapes and designs dramatically different from those of conventional balls. The newest type of ball, named Brazuca, was produced from six panels and will be used in the 2014 FIFA World Cup in Brazil. There have, however, been few studies on the aerodynamic properties of balls constructed from different numbers and shapes of panels. Hence, we used wind tunnel tests and a kick-robot to examine the relationship between the panel shape and orientation of modern soccer balls and their aerodynamic and flight characteristics. We observed a correlation between the wind tunnel test results and the actual ball trajectories, and also clarified how the panel characteristics affected the flight of the ball, which enabled prediction of the trajectory.
Hong S. & Asai T. (2014). Effect of panel shape of soccer ball on its flight characteristics., Scientific reports, 4 DOI: 10.1038/srep05068
Labels: abstract, aerodynamics, soccer, world cup
Portrait of an atom
By Gianluigi Filippelli on Saturday, June 07, 2014
by @ulaulaman about #hydrogen #atom #orbitals #Bohr #Rutherford #quantum_mechanics
The study of the structure of the atom is a long story, and it begins with Democritus or, from the point of view of modern science, with John Dalton in 1808: indeed he tried to put the ideas of the Greek philosopher and naturalist into scientific terms.
Dalton's theory was based on five fundamental points:
matter is made of tiny building blocks called atoms, which are indivisible and indestructible;
atoms of the same element are all equal to each other;
the atoms of different elements combine with each other (via chemical reactions) in ratios of whole, generally small, numbers, thus giving rise to compounds;
atoms can be neither created nor destroyed;
atoms of an element can not be converted into atoms of other elements.
As you can see, some of these ideas are correct and others are not. Let us, however, jump ahead a century and go to 1902 with J.J. Thomson, the first to propose an atomic model: he assumed that the atom was made up like a sort of cake, a positively charged sphere in which the electrons were scattered like raisins, with a negative charge distribution such as to make the object as a whole neutral.
A few years later, however, in 1911, Rutherford devised and conducted an important experiment(1) in which he sent a beam of alpha particles against a thin gold foil. The observed cross section, i.e. the effective surface off which the scattered particles bounce, was too large to be compatible with Thomson's hypothesis, but it was compatible with Rutherford's, namely that the atom is made up of a positive nucleus and a number of electrons revolving around the nucleus at a large distance (compared to nuclear scales, of course).
However, this was not the last step: in 1913 Niels Bohr refined Rutherford's model(2). Having accepted the planetary structure of the atom, Bohr suggested that the electrons in their rotational motion could not occupy arbitrary orbits, but had to place themselves at very specific distances from the nucleus: this is the dawn of quantum mechanics, which would further refine the atomic model thanks to the famous Schrödinger equation.
The atom was now understood as a positive nucleus made of protons and neutrons, with a small cloud of electrons moving around it, not on definite orbits but in a sort of spherical cap. These caps at different energies were recently directly observed by Aneta Stodolna's team(3, 4):
(...) an experimental method was proposed about thirty years ago, when it was suggested that experiments ought to be performed projecting low-energy photoelectrons resulting from the ionization of hydrogen atoms onto a position-sensitive two-dimensional detector placed perpendicularly to the static electric field, thereby allowing the experimental measurement of interference patterns directly reflecting the nodal structure of the quasibound atomic wave function.(3)
via phys.org, io9
(1) Rutherford E. (1911). The scattering of α and β particles by matter and the structure of the atom, Philosophical Magazine Series 6, 21 (125) 669-688. DOI: 10.1080/14786440508637080
(2) Bohr N. (1913). On the constitution of atoms and molecules, Philosophical Magazine Series 6, 26 (151) 1-25. DOI: 10.1080/14786441308634955
(3) Stodolna, A., Rouzée, A., Lépine, F., Cohen, S., Robicheaux, F., Gijsbertsen, A., Jungmann, J., Bordas, C., & Vrakking, M. (2013). Hydrogen Atoms under Magnification: Direct Observation of the Nodal Structure of Stark States Physical Review Letters, 110 (21) DOI: 10.1103/PhysRevLett.110.213001
(4) Smeenk, C. (2013). A New Look at the Hydrogen Wave Function Physics, 6 (58) DOI: 10.1103/Physics.6.58
Labels: atom, ernest rutherford, hydrogen, niels bohr, quantum mechanics
15 Sorting Algorithms in 6 Minutes
Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.
Sorts random shuffles of integers, with both speed and the number of items adapted to each algorithm's complexity. The algorithms are: selection sort, insertion sort, quick sort, merge sort, heap sort, radix sort (LSD), radix sort (MSD), std::sort (intro sort), std::stable_sort (adaptive merge sort), shell sort, bubble sort, cocktail shaker sort, gnome sort, bitonic sort and bogo sort (30 seconds of it).
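As a taste of what the video shows, here is one of the listed algorithms, gnome sort, written out as a minimal Python sketch (my own illustration, not code from the video):

import random

def gnome_sort(items):
    """Sort a list by swapping adjacent out-of-order elements, stepping back after each swap."""
    a = list(items)
    i = 0
    while i < len(a):
        if i == 0 or a[i - 1] <= a[i]:
            i += 1                            # in order: step forward
        else:
            a[i - 1], a[i] = a[i], a[i - 1]   # out of order: swap and step back
            i -= 1
    return a

data = random.sample(range(30), 10)
print(data)
print(gnome_sort(data))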
More information at the "Sound of Sorting"
Labels: algorithm, timo bingmann, video
Mathematics of soccer: Shot angles
By Gianluigi Filippelli on Tuesday, May 27, 2014
Consider a situation in which a soccer player runs straight, with the ball, towards the goal line of the field. Intuitively, it is clear that there is an optimal point maximizing the shot angle, providing the best place to kick in order to improve the chances of scoring a goal. If the player shoots from the goal line, the angle is zero and his chances are just horrible; if the player kicks from far away, the angle is also too small!
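To see the optimum appear, here is a small numerical sketch (the geometry and the distances are my own assumptions, not taken from the magazine): if the player runs on a line perpendicular to the goal line, passing at lateral distances $a$ and $b$ from the two posts, the shot angle at distance $x$ from the goal line is $\theta(x) = \arctan(b/x) - \arctan(a/x)$, which is maximized at $x = \sqrt{ab}$.

from math import atan, sqrt

a, b = 4.0, 11.32   # assumed lateral distances (in metres) from the running line to the two posts

def shot_angle(x):
    """Angle under which the goal is seen from distance x to the goal line."""
    return atan(b / x) - atan(a / x)

xs = [0.05 * k for k in range(1, 1000)]   # crude scan of distances up to 50 m
best = max(xs, key=shot_angle)

print(best)          # numerical optimum, close to...
print(sqrt(a * b))   # ...the classical closed-form answer sqrt(a*b), about 6.73 m here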
Locus of the optimal points
Two different types of kicks: Diego Armando Maradona in Napoli-Cesena 2-0, Serie A 1987/88, an amazing example of the "Maradona feeling" about the optimal place to kick
and the "impossible" goal by Marco Van Basten during the final of Euro '88
from "Mathematics of Soccer" by Alda Carvalho, Carlos Pereira dos Santos, Jorge Nuno Silva. "Recreational Mathematics Magazine" (2014)
Labels: maradona, mathematics, soccer, van basten
Let $A$ be an $m\times n$ matrix.
The following three operations on rows of a matrix are called elementary row operations.
Interchanging two rows:
$R_i \leftrightarrow R_j$ interchanges rows $i$ and $j$.
Multiplying a row by a non-zero scalar:
$tR_i$ multiplies row $i$ by the non-zero scalar (number) $t$.
Adding a multiple of one row to another row:
$R_j+tR_i$ adds $t$ times row $i$ to row $j$.
Two matrices are row equivalent if one can be obtained from the other by a sequence of elementary row operations.
The matrix in reduced row echelon form that is row equivalent to $A$ is denoted by $\rref(A)$.
The rank of a matrix $A$ is the number of nonzero rows in $\rref(A)$.
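For readers who want to check their hand computations, the reduced row echelon form and the rank can also be obtained with SymPy; the matrix below is only an illustrative example, not one of the exercises.

from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [0, 0, 3]])

R, pivots = A.rref()   # reduced row echelon form and the pivot columns
print(R)               # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(A.rank())        # 2, the number of nonzero rows of rref(A)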
For each of the following matrices, find a row-equivalent matrix which is in reduced row echelon form. Then determine the rank of each matrix.
(a) $A = \begin{bmatrix} 1 & 3 \\ -2 & 2 \end{bmatrix}$.
(b) $B = \begin{bmatrix} 2 & 6 & -2 \\ 3 & -2 & 8 \end{bmatrix}$.
(c) $C = \begin{bmatrix} 2 & -2 & 4 \\ 4 & 1 & -2 \\ 6 & -1 & 2 \end{bmatrix}$.
(d) $D = \begin{bmatrix} -2 \\ 3 \\ 1 \end{bmatrix}$.
(e) $E = \begin{bmatrix} -2 & 3 & 1 \end{bmatrix}$.
Let $A$ and $I$ be $2\times 2$ matrices defined as follows.
\[A=\begin{bmatrix}
1 & b \\
c & d
\end{bmatrix}, \qquad I=\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}.\] Prove that the matrix $A$ is row equivalent to the matrix $I$ if $d-cb \neq 0$.
If $A, B$ have the same rank, can we conclude that they are row-equivalent? If so, then prove it. If not, then provide a counterexample.
Find the rank of the following real matrix.
\[ \begin{bmatrix}
a & 1 & 2 \\
-1 & 1 & 1-a
\end{bmatrix}\]
(Kyoto University)
For an $m\times n$ matrix $A$, we denote by $\mathrm{rref}(A)$ the matrix in reduced row echelon form that is row equivalent to $A$. For example, a matrix $A$ whose second row is $\begin{bmatrix} 0 & 2 & 2 \end{bmatrix}$ is reduced by applying $\frac{1}{2}R_2$ (which turns that row into $\begin{bmatrix} 0 & 1 & 1 \end{bmatrix}$) and then $R_1-R_2$; the matrix obtained after these operations is in reduced row echelon form, and it is $\mathrm{rref}(A)$.
Find an example of matrices $A$ and $B$ such that
\[\mathrm{rref}(AB)\neq \mathrm{rref}(A) \mathrm{rref}(B).\]
(a) Find all $3 \times 3$ matrices which are in reduced row echelon form and have rank 1.
(b) Find all such matrices with rank 2.
If $A, B, C$ are three $m \times n$ matrices such that $A$ is row-equivalent to $B$ and $B$ is row-equivalent to $C$, then can we conclude that $A$ is row-equivalent to $C$? If so, then prove it. If not, then provide a counterexample.
Prove that if $A$ is an $n \times n$ matrix with rank $n$, then $\rref(A)$ is the identity matrix.
Recall that a matrix $A$ is symmetric if $A^\trans = A$, where $A^\trans$ is the transpose of $A$. Is it true that if $A$ is a symmetric matrix and in reduced row echelon form, then $A$ is diagonal? If so, prove it. Otherwise, provide a counterexample.
annex_A.tex in NEMO/branches/2019/dev_r10984_HPC-13_IRRMANN_BDY_optimization/doc/latex/NEMO/subfiles – NEMO
source: NEMO/branches/2019/dev_r10984_HPC-13_IRRMANN_BDY_optimization/doc/latex/NEMO/subfiles/annex_A.tex @ 11353
Last change on this file since 11353 was 11353, checked in by smasson, 4 years ago
dev_r10984_HPC-13 : merge with trunk@11352, see #2285
\documentclass[../main/NEMO_manual]{subfiles}
% ================================================================
% Chapter Appendix A : Curvilinear s-Coordinate Equations
\chapter{Curvilinear $s-$Coordinate Equations}
\label{apdx:A}
\minitoc
\vfill
\begin{figure}[b]
\subsubsection*{Changes record}
\begin{tabular}{l||l|m{0.65\linewidth}}
Release & Author & Modifications \\
{\em 4.0} & {\em Mike Bell} & {\em review} \\
{\em 3.x} & {\em Gurvan Madec} & {\em original} \\
\end{tabular}
\end{figure}
\newpage
% Chain rule
\section{Chain rule for $s-$coordinates}
\label{sec:A_chain}
In order to establish the set of Primitive Equations in curvilinear $s$-coordinates
(\ie an orthogonal curvilinear coordinate in the horizontal and
an Arbitrary Lagrangian Eulerian (ALE) coordinate in the vertical),
we start from the set of equations established in \autoref{subsec:PE_zco_Eq} for
the special case $k = z$ and thus $e_3 = 1$,
and we introduce an arbitrary vertical coordinate $a = a(i,j,z,t)$.
Let us define a new vertical scale factor by $e_3 = \partial z / \partial s$ (which now depends on $(i,j,z,t)$) and
the horizontal slope of $s-$surfaces by:
\begin{equation}
\label{apdx:A_s_slope}
\sigma_1 =\frac{1}{e_1 } \; \left. {\frac{\partial z}{\partial i}} \right|_s
\quad \text{and} \quad
\sigma_2 =\frac{1}{e_2 } \; \left. {\frac{\partial z}{\partial j}} \right|_s .
\end{equation}
The model fields (e.g. pressure $p$) can be viewed as functions of $(i,j,z,t)$ (e.g. $p(i,j,z,t)$) or as
functions of $(i,j,s,t)$ (e.g. $p(i,j,s,t)$). The symbol $\bullet$ will be used to represent any one of
these fields. Any ``infinitesimal'' change in $\bullet$ can be written in two forms:
\begin{equation}
\label{apdx:A_s_infin_changes}
\begin{aligned}
& \delta \bullet = \delta i \left. \frac{ \partial \bullet }{\partial i} \right|_{j,s,t}
+ \delta j \left. \frac{ \partial \bullet }{\partial j} \right|_{i,s,t}
+ \delta s \left. \frac{ \partial \bullet }{\partial s} \right|_{i,j,t}
+ \delta t \left. \frac{ \partial \bullet }{\partial t} \right|_{i,j,s} , \\
& \delta \bullet = \delta i \left. \frac{ \partial \bullet }{\partial i} \right|_{j,z,t}
+ \delta j \left. \frac{ \partial \bullet }{\partial j} \right|_{i,z,t}
+ \delta z \left. \frac{ \partial \bullet }{\partial z} \right|_{i,j,t}
+ \delta t \left. \frac{ \partial \bullet }{\partial t} \right|_{i,j,z} .
\end{aligned}
\end{equation}
Using the first form and considering a change $\delta i$ with $j, z$ and $t$ held constant, shows that
\begin{equation}
\label{apdx:A_s_chain_rule}
\left. {\frac{\partial \bullet }{\partial i}} \right|_{j,z,t} =
\left. {\frac{\partial \bullet }{\partial i}} \right|_{j,s,t}
+ \left. {\frac{\partial s }{\partial i}} \right|_{j,z,t} \;
\left. {\frac{\partial \bullet }{\partial s}} \right|_{i,j,t} .
\end{equation}
The term $\left. \partial s / \partial i \right|_{j,z,t}$ can be related to the slope of constant $s$ surfaces,
(\autoref{apdx:A_s_slope}), by applying the second of (\autoref{apdx:A_s_infin_changes}) with $\bullet$ set to
$s$ and $j, t$ held constant
\begin{equation}
\label{apdx:a_delta_s}
\delta s|_{j,t} =
\delta i \left. \frac{ \partial s }{\partial i} \right|_{j,z,t}
+ \delta z \left. \frac{ \partial s }{\partial z} \right|_{i,j,t} .
\end{equation}
Choosing to look at a direction in the $(i,z)$ plane in which $\delta s = 0$ and using
(\autoref{apdx:A_s_slope}) we obtain
\begin{equation}
\label{apdx:a_ds_di_z}
\left. \frac{ \partial s }{\partial i} \right|_{j,z,t} =
- \left. \frac{ \partial z }{\partial i} \right|_{j,s,t} \;
\left. \frac{ \partial s }{\partial z} \right|_{i,j,t}
= - \frac{e_1 }{e_3 }\sigma_1 .
\end{equation}
Another identity, similar in form to (\autoref{apdx:a_ds_di_z}), can be derived
by choosing $\bullet$ to be $s$ and using the second form of (\autoref{apdx:A_s_infin_changes}) to consider
changes in which $i , j$ and $s$ are constant. This shows that
\begin{equation}
\label{apdx:A_w_in_s}
w_s = \left. \frac{ \partial z }{\partial t} \right|_{i,j,s} =
- \left. \frac{ \partial z }{\partial s} \right|_{i,j,t}
\left. \frac{ \partial s }{\partial t} \right|_{i,j,z}
= - e_3 \left. \frac{ \partial s }{\partial t} \right|_{i,j,z} .
\end{equation}
In what follows, for brevity, indication of the constancy of the $i, j$ and $t$ indices is
usually omitted. Using the arguments outlined above one can show that the chain rules needed to establish
the model equations in the curvilinear $s-$coordinate system are:
\begin{align}
&\left. {\frac{\partial \bullet }{\partial t}} \right|_z =
\left. {\frac{\partial \bullet }{\partial t}} \right|_s
+ \frac{\partial \bullet }{\partial s}\; \frac{\partial s}{\partial t} , \\
&\left. {\frac{\partial \bullet }{\partial i}} \right|_z =
\left. {\frac{\partial \bullet }{\partial i}} \right|_s
+\frac{\partial \bullet }{\partial s}\; \frac{\partial s}{\partial i}=
\left. {\frac{\partial \bullet }{\partial i}} \right|_s
-\frac{e_1 }{e_3 }\sigma_1 \frac{\partial \bullet }{\partial s} , \\
&\left. {\frac{\partial \bullet }{\partial j}} \right|_z =
\left. {\frac{\partial \bullet }{\partial j}} \right|_s
+ \frac{\partial \bullet }{\partial s}\;\frac{\partial s}{\partial j}=
\left. {\frac{\partial \bullet }{\partial j}} \right|_s
- \frac{e_2 }{e_3 }\sigma_2 \frac{\partial \bullet }{\partial s} , \\
&\;\frac{\partial \bullet }{\partial z} \;\; = \frac{1}{e_3 }\frac{\partial \bullet }{\partial s} .
\end{align}
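For instance, in the particular case of a classical terrain-following coordinate defined by $z = s \, H(i,j)$ with $s \in [-1,0]$ and $H$ the local ocean depth
(a simple special case quoted here only as an illustration, the free surface being neglected),
one has $e_3 = H$ and $\sigma_1 = \frac{s}{e_1} \frac{\partial H}{\partial i}$,
so that the second chain rule above reduces to the familiar $\sigma-$coordinate transformation
\[
\left. {\frac{\partial \bullet }{\partial i}} \right|_z =
\left. {\frac{\partial \bullet }{\partial i}} \right|_s
- \frac{s}{H} \frac{\partial H}{\partial i} \, \frac{\partial \bullet }{\partial s} .
\]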
% continuity equation
\section{Continuity equation in $s-$coordinates}
\label{sec:A_continuity}
Using (\autoref{apdx:A_s_chain_rule}) and
the fact that the horizontal scale factors $e_1$ and $e_2$ do not depend on the vertical coordinate,
the divergence of the velocity relative to the ($i$,$j$,$z$) coordinate system is transformed as follows in order to
obtain its expression in the curvilinear $s-$coordinate system:
\begin{subequations}
\begin{align*}
\begin{array}{*{20}l}
\nabla \cdot {\mathrm {\mathbf U}}
&= \frac{1}{e_1 \,e_2 } \left[ \left. {\frac{\partial (e_2 \,u)}{\partial i}} \right|_z
+\left. {\frac{\partial(e_1 \,v)}{\partial j}} \right|_z \right]
+ \frac{\partial w}{\partial z} \\ \\
& = \frac{1}{e_1 \,e_2 } \left[
\left. \frac{\partial (e_2 \,u)}{\partial i} \right|_s
- \frac{e_1 }{e_3 } \sigma_1 \frac{\partial (e_2 \,u)}{\partial s}
+ \left. \frac{\partial (e_1 \,v)}{\partial j} \right|_s
- \frac{e_2 }{e_3 } \sigma_2 \frac{\partial (e_1 \,v)}{\partial s} \right]
+ \frac{\partial w}{\partial s} \; \frac{\partial s}{\partial z} \\ \\
+ \left. \frac{\partial (e_1 \,v)}{\partial j} \right|_s \right]
+ \frac{1}{e_3 }\left[ \frac{\partial w}{\partial s}
- \sigma_1 \frac{\partial u}{\partial s}
- \sigma_2 \frac{\partial v}{\partial s} \right] \\ \\
& = \frac{1}{e_1 \,e_2 \,e_3 } \left[
\left. \frac{\partial (e_2 \,e_3 \,u)}{\partial i} \right|_s
-\left. e_2 \,u \frac{\partial e_3 }{\partial i} \right|_s
+ \left. \frac{\partial (e_1 \,e_3 \,v)}{\partial j} \right|_s
- \left. e_1 v \frac{\partial e_3 }{\partial j} \right|_s \right] \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
+ \frac{1}{e_3 } \left[ \frac{\partial w}{\partial s}
- \sigma_2 \frac{\partial v}{\partial s} \right] \\
\intertext{Noting that $
\frac{1}{e_1} \left.{ \frac{\partial e_3}{\partial i}} \right|_s
=\frac{1}{e_1} \left.{ \frac{\partial^2 z}{\partial i\,\partial s}} \right|_s
=\frac{\partial}{\partial s} \left( {\frac{1}{e_1 } \left.{ \frac{\partial z}{\partial i} }\right|_s } \right)
=\frac{\partial \sigma_1}{\partial s}
$ and $
\frac{1}{e_2 }\left. {\frac{\partial e_3 }{\partial j}} \right|_s
=\frac{\partial \sigma_2}{\partial s}
$, it becomes:}
+\left. \frac{\partial (e_1 \,e_3 \,v)}{\partial j} \right|_s \right] \\
& \qquad \qquad \qquad \qquad \quad
+\frac{1}{e_3 }\left[ {\frac{\partial w}{\partial s}-u\frac{\partial \sigma_1 }{\partial s}-v\frac{\partial \sigma_2 }{\partial s}-\sigma_1 \frac{\partial u}{\partial s}-\sigma_2 \frac{\partial v}{\partial s}} \right] \\
+\left. \frac{\partial (e_1 \,e_3 \,v)}{\partial j} \right|_s \right]
+ \frac{1}{e_3 } \; \frac{\partial}{\partial s} \left[ w - u\;\sigma_1 - v\;\sigma_2 \right]
\end{align*}
\end{subequations}
Here, $w$ is the vertical velocity relative to the $z-$coordinate system.
Using the first form of (\autoref{apdx:A_s_infin_changes})
and the definitions (\autoref{apdx:A_s_slope}) and (\autoref{apdx:A_w_in_s}) for $\sigma_1$, $\sigma_2$ and $w_s$,
one can show that the vertical velocity, $w_p$ of a point
moving with the horizontal velocity of the fluid along an $s$ surface is given by
\begin{equation}
\label{apdx:A_w_p}
\begin{split}
w_p = & \left. \frac{ \partial z }{\partial t} \right|_s
+ \frac{u}{e_1} \left. \frac{ \partial z }{\partial i} \right|_s
+ \frac{v}{e_2} \left. \frac{ \partial z }{\partial j} \right|_s \\
= & w_s + u \sigma_1 + v \sigma_2 .
\end{split}
\end{equation}
The vertical velocity across this surface is denoted by
\begin{equation}
\label{apdx:A_w_s}
\omega = w - w_p = w - ( w_s + \sigma_1 \,u + \sigma_2 \,v ) .
\end{equation}
\[
\frac{1}{e_3 } \frac{\partial}{\partial s} \left[ w - u\;\sigma_1 - v\;\sigma_2 \right] =
\frac{1}{e_3 } \frac{\partial}{\partial s} \left[ \omega + w_s \right] =
\frac{1}{e_3 } \left[ \frac{\partial \omega}{\partial s}
+ \left. \frac{ \partial }{\partial t} \right|_s \frac{\partial z}{\partial s} \right] =
\frac{1}{e_3 } \frac{\partial \omega}{\partial s} + \frac{1}{e_3 } \left. \frac{ \partial e_3}{\partial t} \right|_s .
\]
Using (\autoref{apdx:A_w_s}) in our expression for $\nabla \cdot {\mathrm {\mathbf U}}$ we obtain
our final expression for the divergence of the velocity in the curvilinear $s-$coordinate system:
\[
\nabla \cdot {\mathrm {\mathbf U}} =
\frac{1}{e_1 \,e_2 \,e_3 } \left[
\left. {\frac{\partial (e_2 \,e_3 \,u)}{\partial i}} \right|_s
+ \left. {\frac{\partial (e_1 \,e_3 \,v)}{\partial j}} \right|_s \right]
+ \frac{1}{e_3 } \frac{\partial \omega }{\partial s}
+ \frac{1}{e_3 } \left. \frac{\partial e_3}{\partial t} \right|_s .
\]
As a result, the continuity equation \autoref{eq:PE_continuity} in the $s-$coordinates is:
\begin{equation}
\label{apdx:A_sco_Continuity}
\frac{1}{e_3 } \frac{\partial e_3}{\partial t}
+ \frac{1}{e_1 \,e_2 \,e_3 }\left[
{\left. {\frac{\partial (e_2 \,e_3 \,u)}{\partial i}} \right|_s
+ \left. {\frac{\partial (e_1 \,e_3 \,v)}{\partial j}} \right|_s } \right]
+\frac{1}{e_3 }\frac{\partial \omega }{\partial s} = 0 .
\end{equation}
An additional term has appeared that takes into account
the contribution of the time variation of the vertical coordinate to the volume budget.
% momentum equation
\section{Momentum equation in $s-$coordinate}
\label{sec:A_momentum}
Here we only consider the first component of the momentum equation,
the generalization to the second one being straightforward.
$\bullet$ \textbf{Total derivative in vector invariant form}
Let us consider \autoref{eq:PE_dyn_vect}, the first component of the momentum equation in the vector invariant form.
Its total $z-$coordinate time derivative,
$\left. \frac{D u}{D t} \right|_z$ can be transformed as follows in order to obtain
its expression in the curvilinear $s-$coordinate system:
\left. \frac{D u}{D t} \right|_z
&= \left. {\frac{\partial u }{\partial t}} \right|_z
- \left. \zeta \right|_z v
+ \frac{1}{2e_1} \left.{ \frac{\partial (u^2+v^2)}{\partial i}} \right|_z
+ w \;\frac{\partial u}{\partial z} \\ \\
- \frac{1}{e_1 \,e_2 }\left[ { \left.{ \frac{\partial (e_2 \,v)}{\partial i} }\right|_z
-\left.{ \frac{\partial (e_1 \,u)}{\partial j} }\right|_z } \right] \; v
+ \frac{1}{2e_1} \left.{ \frac{\partial (u^2+v^2)}{\partial i} } \right|_z
+ w \;\frac{\partial u}{\partial z} \\
\intertext{introducing the chain rule (\autoref{apdx:A_s_chain_rule}) }
- \frac{1}{e_1\,e_2}\left[ { \left.{ \frac{\partial (e_2 \,v)}{\partial i} } \right|_s
-\left.{ \frac{\partial (e_1 \,u)}{\partial j} } \right|_s } \right.
\left. {-\frac{e_1}{e_3}\sigma_1 \frac{\partial (e_2 \,v)}{\partial s}
+\frac{e_2}{e_3}\sigma_2 \frac{\partial (e_1 \,u)}{\partial s}} \right] \; v \\
& \qquad \qquad \qquad \qquad
+ \frac{1}{2e_1} \left( \left. \frac{\partial (u^2+v^2)}{\partial i} \right|_s
- \frac{e_1}{e_3}\sigma_1 \frac{\partial (u^2+v^2)}{\partial s} \right)
+ \frac{w}{e_3 } \;\frac{\partial u}{\partial s}
} \\ \\
- \left. \zeta \right|_s \;v
+ \frac{1}{2\,e_1}\left. {\frac{\partial (u^2+v^2)}{\partial i}} \right|_s \\
&\qquad \qquad \qquad \quad
+ \left[ {\frac{\sigma_1 }{e_3 }\frac{\partial v}{\partial s}
- \frac{\sigma_2 }{e_3 }\frac{\partial u}{\partial s}} \right]\;v
- \frac{\sigma_1 }{2e_3 }\frac{\partial (u^2+v^2)}{\partial s} \\ \\
+ \frac{1}{e_3} \left[ {w\frac{\partial u}{\partial s}
+\sigma_1 v\frac{\partial v}{\partial s} - \sigma_2 v\frac{\partial u}{\partial s}
- \sigma_1 u\frac{\partial u}{\partial s} - \sigma_1 v\frac{\partial v}{\partial s}} \right] \\ \\
+ \frac{1}{2\,e_1}\left. {\frac{\partial (u^2+v^2)}{\partial i}} \right|_s
+ \frac{1}{e_3} \left[ w - \sigma_2 v - \sigma_1 u \right]
\; \frac{\partial u}{\partial s} . \\
\intertext{Introducing $\omega$, the dia-s-surface velocity given by (\autoref{apdx:A_w_s}) }
+ \frac{1}{e_3 } \left( \omega + w_s \right) \frac{\partial u}{\partial s} \\
Applying the time derivative chain rule (first equation of (\autoref{apdx:A_s_chain_rule})) to $u$ and
using (\autoref{apdx:A_w_in_s}) provides the expression of the last term of the right hand side,
\[
\frac{w_s}{e_3} \;\frac{\partial u}{\partial s}
= - \left. \frac{\partial s}{\partial t} \right|_z \; \frac{\partial u }{\partial s}
= \left. {\frac{\partial u }{\partial t}} \right|_s - \left. {\frac{\partial u }{\partial t}} \right|_z \ .
\]
This leads to the $s-$coordinate formulation of the total $z-$coordinate time derivative,
\ie the total $s-$coordinate time derivative :
\begin{align}
\label{apdx:A_sco_Dt_vect}
\left. \frac{D u}{D t} \right|_s
= \left. {\frac{\partial u }{\partial t}} \right|_s
- \left. \zeta \right|_s \;v
+ \frac{1}{2\,e_1}\left. {\frac{\partial (u^2+v^2)}{\partial i}} \right|_s
+ \frac{1}{e_3 } \omega \;\frac{\partial u}{\partial s} .
\end{align}
Therefore, the vector invariant form of the total time derivative has exactly the same mathematical form in
$z-$ and $s-$coordinates.
This is not the case for the flux form as shown in next paragraph.
$\bullet$ \textbf{Total derivative in flux form}
Let us start from the total time derivative in the curvilinear $s-$coordinate system we have just established.
Following the procedure used to establish (\autoref{eq:PE_flux_form}), it can be transformed into :
% \begin{subequations}
\left. \frac{D u}{D t} \right|_s &= \left. {\frac{\partial u }{\partial t}} \right|_s
& - \zeta \;v
+ \frac{1}{2\;e_1 } \frac{\partial \left( {u^2+v^2} \right)}{\partial i}
+ \frac{1}{e_3} \omega \;\frac{\partial u}{\partial s} \\ \\
&= \left. {\frac{\partial u }{\partial t}} \right|_s
&+\frac{1}{e_1\;e_2} \left( \frac{\partial \left( {e_2 \,u\,u } \right)}{\partial i}
+ \frac{\partial \left( {e_1 \,u\,v } \right)}{\partial j} \right)
+ \frac{1}{e_3 } \frac{\partial \left( {\omega\,u} \right)}{\partial s} \\ \\
&&- \,u \left[ \frac{1}{e_1 e_2 } \left( \frac{\partial(e_2 u)}{\partial i}
+ \frac{\partial(e_1 v)}{\partial j} \right)
+ \frac{1}{e_3} \frac{\partial \omega}{\partial s} \right] \\ \\
&&- \frac{v}{e_1 e_2 }\left( v \;\frac{\partial e_2 }{\partial i}
-u \;\frac{\partial e_1 }{\partial j} \right) . \\
Introducing the vertical scale factor inside the horizontal derivative of the first two terms
(\ie the horizontal divergence), it becomes :
% \begin{align*} {\begin{array}{*{20}l}
% {\begin{array}{*{20}l} \left. \frac{D u}{D t} \right|_s
&+ \frac{1}{e_1\,e_2\,e_3} \left( \frac{\partial( e_2 e_3 \,u^2 )}{\partial i}
+ \frac{\partial( e_1 e_3 \,u v )}{\partial j}
- e_2 u u \frac{\partial e_3}{\partial i}
- e_1 u v \frac{\partial e_3 }{\partial j} \right)
+ \frac{1}{e_3} \frac{\partial \left( {\omega\,u} \right)}{\partial s} \\ \\
&& - \,u \left[ \frac{1}{e_1 e_2 e_3} \left( \frac{\partial(e_2 e_3 \, u)}{\partial i}
+ \frac{\partial(e_1 e_3 \, v)}{\partial j}
- e_2 u \;\frac{\partial e_3 }{\partial i}
- e_1 v \;\frac{\partial e_3 }{\partial j} \right)
&& - \frac{v}{e_1 e_2 }\left( v \;\frac{\partial e_2 }{\partial i}
-u \;\frac{\partial e_1 }{\partial j} \right) \\ \\
&+ \frac{1}{e_1\,e_2\,e_3} \left( \frac{\partial( e_2 e_3 \,u\,u )}{\partial i}
+ \frac{\partial( e_1 e_3 \,u\,v )}{\partial j} \right)
+ \frac{\partial(e_1 e_3 \, v)}{\partial j} \right)
+ \frac{1}{e_3} \frac{\partial \omega}{\partial s} \right]
- \frac{v}{e_1 e_2 }\left( v \;\frac{\partial e_2 }{\partial i}
\intertext {Introducing a more compact form for the divergence of the momentum fluxes,
and using (\autoref{apdx:A_sco_Continuity}), the $s-$coordinate continuity equation,
it becomes : }
&+ \left. \nabla \cdot \left( {{\mathrm {\mathbf U}}\,u} \right) \right|_s
+ \,u \frac{1}{e_3 } \frac{\partial e_3}{\partial t}
-u \;\frac{\partial e_1 }{\partial j} \right)
which leads to the $s-$coordinate flux formulation of the total $s-$coordinate time derivative,
\ie the total $s-$coordinate time derivative in flux form:
\begin{flalign}
\label{apdx:A_sco_Dt_flux}
\left. \frac{D u}{D t} \right|_s = \frac{1}{e_3} \left. \frac{\partial ( e_3\,u)}{\partial t} \right|_s
+ \left. \nabla \cdot \left( {{\mathrm {\mathbf U}}\,u} \right) \right|_s
-u \;\frac{\partial e_1 }{\partial j} \right).
\end{flalign}
which is the total time derivative expressed in the curvilinear $s-$coordinate system.
It has the same form as in the $z-$coordinate but for
the vertical scale factor that has appeared inside the time derivative which
comes from the modification of (\autoref{apdx:A_sco_Continuity}),
the continuity equation.
$\bullet$ \textbf{horizontal pressure gradient}
The horizontal pressure gradient term can be transformed as follows:
\[
\begin{split}
-\frac{1}{\rho_o \, e_1 }\left. {\frac{\partial p}{\partial i}} \right|_z
& =-\frac{1}{\rho_o e_1 }\left[ {\left. {\frac{\partial p}{\partial i}} \right|_s -\frac{e_1 }{e_3 }\sigma_1 \frac{\partial p}{\partial s}} \right] \\
& =-\frac{1}{\rho_o \,e_1 }\left. {\frac{\partial p}{\partial i}} \right|_s +\frac{\sigma_1 }{\rho_o \,e_3 }\left( {-g\;\rho \;e_3 } \right) \\
&=-\frac{1}{\rho_o \,e_1 }\left. {\frac{\partial p}{\partial i}} \right|_s -\frac{g\;\rho }{\rho_o }\sigma_1 .
\end{split}
\]
Applying similar manipulation to the second component and
replacing $\sigma_1$ and $\sigma_2$ by their expression \autoref{apdx:A_s_slope}, it becomes:
\begin{equation}
\label{apdx:A_grad_p_1}
\begin{split}
-\frac{1}{\rho_o \, e_1 } \left. {\frac{\partial p}{\partial i}} \right|_z
&=-\frac{1}{\rho_o \,e_1 } \left( \left. {\frac{\partial p}{\partial i}} \right|_s
+ g\;\rho \;\left. {\frac{\partial z}{\partial i}} \right|_s \right) \\
-\frac{1}{\rho_o \, e_2 }\left. {\frac{\partial p}{\partial j}} \right|_z
&=-\frac{1}{\rho_o \,e_2 } \left( \left. {\frac{\partial p}{\partial j}} \right|_s
+ g\;\rho \;\left. {\frac{\partial z}{\partial j}} \right|_s \right) .
\end{split}
\end{equation}
An additional term appears in (\autoref{apdx:A_grad_p_1}) which accounts for
the tilt of $s-$surfaces with respect to geopotential $z-$surfaces.
As in $z$-coordinate,
the horizontal pressure gradient can be split in two parts following \citet{marsaleix.auclair.ea_OM08}.
Let us define a density anomaly, $d$, by $d=(\rho - \rho_o)/ \rho_o$,
and a hydrostatic pressure anomaly, $p_h'$, by $p_h'= g \; \int_z^\eta d \; e_3 \; dk$.
The pressure is then given by:
\[
\begin{split}
p &= g\; \int_z^\eta \rho \; e_3 \; dk = g\; \int_z^\eta \rho_o \left( d + 1 \right) \; e_3 \; dk \\
&= g \, \rho_o \; \int_z^\eta d \; e_3 \; dk + \rho_o g \, \int_z^\eta e_3 \; dk .
\end{split}
\]
Therefore, $p$ and $p_h'$ are linked through:
\begin{equation}
\label{apdx:A_pressure}
p = \rho_o \; p_h' + \rho_o \, g \, ( \eta - z )
\end{equation}
and the hydrostatic pressure balance expressed in terms of $p_h'$ and $d$ is:
\[
\frac{\partial p_h'}{\partial k} = - d \, g \, e_3 .
\]
Substituting \autoref{apdx:A_pressure} in \autoref{apdx:A_grad_p_1} and
using the definition of the density anomaly it becomes an expression in two parts:
\begin{equation}
\label{apdx:A_grad_p_2}
\begin{split}
-\frac{1}{\rho_o \, e_1 } \left. {\frac{\partial p}{\partial i}} \right|_z
&=-\frac{1}{e_1 } \left( \left. {\frac{\partial p_h'}{\partial i}} \right|_s
+ g\; d \;\left. {\frac{\partial z}{\partial i}} \right|_s \right) - \frac{g}{e_1 } \frac{\partial \eta}{\partial i} , \\
-\frac{1}{\rho_o \, e_2 }\left. {\frac{\partial p}{\partial j}} \right|_z
&=-\frac{1}{e_2 } \left( \left. {\frac{\partial p_h'}{\partial j}} \right|_s
+ g\; d \;\left. {\frac{\partial z}{\partial j}} \right|_s \right) - \frac{g}{e_2 } \frac{\partial \eta}{\partial j} .
\end{split}
\end{equation}
This formulation of the pressure gradient is characterised by the appearance of
a term depending on the sea surface height only
(last term on the right hand side of expression \autoref{apdx:A_grad_p_2}).
This term will be loosely termed \textit{surface pressure gradient} whereas
the first term will be termed the \textit{hydrostatic pressure gradient} by analogy to
the $z$-coordinate formulation.
In fact, the true surface pressure gradient is $1/\rho_o \nabla (\rho \eta)$,
and $\eta$ is implicitly included in the computation of $p_h'$ through the upper bound of the vertical integration.
$\bullet$ \textbf{The other terms of the momentum equation}
The Coriolis and forcing terms as well as the vertical physics remain unchanged as
they involve neither time nor space derivatives.
The form of the lateral physics is discussed in \autoref{apdx:B}.
$\bullet$ \textbf{Full momentum equation}
To sum up, in a curvilinear $s$-coordinate system,
the vector invariant momentum equation solved by the model has the same mathematical expression as
the one in a curvilinear $z-$coordinate, except for the pressure gradient term:
\label{apdx:A_dyn_vect}
\begin{multline}
\label{apdx:A_PE_dyn_vect_u}
\frac{\partial u}{\partial t}=
+ \left( {\zeta +f} \right)\,v
- \frac{1}{2\,e_1} \frac{\partial}{\partial i} \left( u^2+v^2 \right)
- \frac{1}{e_3} \omega \frac{\partial u}{\partial k} \\
- \frac{1}{e_1 } \left( \frac{\partial p_h'}{\partial i} + g\; d \; \frac{\partial z}{\partial i} \right)
- \frac{g}{e_1 } \frac{\partial \eta}{\partial i}
+ D_u^{\vect{U}} + F_u^{\vect{U}} ,
\end{multline}
\begin{multline}
\label{apdx:A_dyn_vect_v}
\frac{\partial v}{\partial t}=
- \left( {\zeta +f} \right)\,u
- \frac{1}{2\,e_2 }\frac{\partial }{\partial j}\left( u^2+v^2 \right)
- \frac{1}{e_3 } \omega \frac{\partial v}{\partial k} \\
- \frac{1}{e_2 } \left( \frac{\partial p_h'}{\partial j} + g\; d \; \frac{\partial z}{\partial j} \right)
- \frac{g}{e_2 } \frac{\partial \eta}{\partial j}
+ D_v^{\vect{U}} + F_v^{\vect{U}} .
\end{multline}
whereas the flux form momentum equation differs from it by
the formulation of both the time derivative and the pressure gradient term:
\label{apdx:A_dyn_flux}
\label{apdx:A_PE_dyn_flux_u}
\frac{1}{e_3} \frac{\partial \left( e_3\,u \right) }{\partial t} =
- \nabla \cdot \left( {{\mathrm {\mathbf U}}\,u} \right)
+ \left\{ {f + \frac{1}{e_1 e_2 }\left( v \;\frac{\partial e_2 }{\partial i}
-u \;\frac{\partial e_1 }{\partial j} \right)} \right\} \,v \\
\label{apdx:A_dyn_flux_v}
\frac{1}{e_3}\frac{\partial \left( e_3\,v \right) }{\partial t}=
- \nabla \cdot \left( {{\mathrm {\mathbf U}}\,v} \right)
- \left\{ {f + \frac{1}{e_1 e_2 }\left( v \;\frac{\partial e_2 }{\partial i}
-u \;\frac{\partial e_1 }{\partial j} \right)} \right\} \,u \\
Both formulations share the same hydrostatic pressure balance expressed in terms of
hydrostatic pressure and density anomalies, $p_h'$ and $d=( \frac{\rho}{\rho_o}-1 )$:
\begin{equation}
\label{apdx:A_dyn_zph}
\frac{\partial p_h'}{\partial k} = - d \, g \, e_3 .
\end{equation}
It is important to realize that the change in coordinate system has only concerned the position on the vertical.
It has not affected (\textbf{i},\textbf{j},\textbf{k}), the orthogonal curvilinear set of unit vectors.
($u$,$v$) are always horizontal velocities so that their evolution is driven by \emph{horizontal} forces,
in particular the pressure gradient.
By contrast, $\omega$ is not $w$, the third component of the velocity, but the dia-surface velocity component,
\ie the volume flux across the moving $s$-surfaces per unit horizontal area.
% Tracer equation
\section{Tracer equation}
\label{sec:A_tracer}
The tracer equation is obtained using the same calculation as for the continuity equation and then
regrouping the time derivative terms on the left hand side:
\begin{multline}
  \label{apdx:A_tracer}
  \frac{1}{e_3} \frac{\partial \left( e_3 T \right)}{\partial t}
  = -\frac{1}{e_1 \,e_2 \,e_3}
    \left[ \frac{\partial }{\partial i} \left( {e_2 \,e_3 \;Tu} \right)
         + \frac{\partial }{\partial j} \left( {e_1 \,e_3 \;Tv} \right) \right] \\
  - \frac{1}{e_3} \frac{\partial }{\partial k} \left( T\omega \right)
  + D^{T} +F^{T}
\end{multline}
The expression for the advection term is a direct consequence of \autoref{apdx:A_sco_Continuity},
the expression of the 3D divergence in the $s$-coordinates established above.
\biblio
\pindex
Advanced oxidation processes for the removal of cyanobacterial toxins from drinking water
Marcel Schneider (ORCID: orcid.org/0000-0002-4040-0290) & Luděk Bláha
Environmental Sciences Europe, volume 32, Article number: 94 (2020)
Drinking water production faces many different challenges, one of them being naturally produced cyanobacterial toxins. Since pollutants are becoming more abundant and persistent, conventional water treatment is often no longer sufficient to provide adequate removal. Among other emerging technologies, advanced oxidation processes (AOPs) have a great potential to appropriately tackle this issue. This review addresses the economic and health risks posed by cyanotoxins and discusses their removal from drinking water by AOPs. The current state of knowledge on AOPs and their application for cyanotoxin degradation is synthesized to provide an overview of available techniques and of the effects of water quality, toxin- and technique-specific parameters on their degradation efficacy. The different AOPs are compared based on their efficiency and applicability, considering economic, practical and environmental aspects and their potential to generate toxic disinfection byproducts. For future research, more relevant studies are recommended, including the degradation of less-explored cyanotoxins, toxin mixtures in actual surface water, the assessment of residual toxicity and scale-up. Since actual surface water most likely contains more than just cyanotoxins, a multi-barrier approach consisting of a series of different physical, biological and chemical—especially oxidative—treatment steps is indispensable to ensure safe and high-quality drinking water.
Cyanobacteria are the most diverse and widespread phototrophic prokaryotes and have inhabited Earth for several billion years [1, 2]. Cyanobacteria can be found almost everywhere in terrestrial and aquatic environments, even in Antarctic lakes and hot springs [3]. Because cyanobacterial growth depends on nutrients and temperature, the increasing eutrophication of waterbodies and climate change promote more frequent and extensive cyanobacterial blooms [2, 4,5,6]. Although not all blooms are poisonous, at least 40 cyanobacterial species are known to produce diverse secondary metabolites that are toxic to biota, including plants [7], animals and humans. Consequently, cyanobacteria and their toxins pose a major risk to surface waters intended for drinking and recreational purposes, and adequate measures must be employed to prevent or eliminate cyanobacterial blooms and toxins.
The first approach should prevent the occurrence of cyanobacterial blooms in surface water by measures such as nutrient reduction, biomanipulation or the application of algaecides [4, 8]. Importantly, especially in the case of toxic cyanobacteria, the removal of intact cyanobacterial cells is essential to avoid the release of intracellular toxins, e.g., microcystins (MCs), anatoxin-a (ANTX) and saxitoxin (STX) [6, 9].
The second approach is the removal of cyanobacterial cells and metabolites in drinking water treatment facilities. Although most conventional drinking water treatment methods effectively remove cyanobacterial cells and intracellular metabolites, extracellular and dissolved cyanotoxins may bypass conventional methods such as rapid sand filtration and coagulation [2]. Hence, adequate and more advanced treatment measures must be implemented to ensure sufficient removal of cyanotoxins. While traditional treatment approaches such as physical retention, biodegradation or chemical oxidation can be effective, they all come with various practical, economic or environmental disadvantages.
Toxin removal by physical retention can be achieved by filter membranes with very low molecular weight cut-off pore sizes, i.e., nanofiltration and reverse osmosis, or adsorbents like activated carbon and bioadsorbents [10]. However, when dissolved cyanotoxins are only physically removed, appropriate measures for the disposal or further treatment of the toxin-enriched retentate are required. In addition, filter beds and membranes may need to be backwashed regularly to prevent clogging, fouling and cyanobacterial growth on the filter medium [10, 11].
Although several cyanotoxins are biodegradable [12], their periodical occurrence may limit the microorganisms' ability to degrade cyanotoxins, resulting in an initial lag-phase of up to a few days without pre-conditioning [10, 12]. Moreover, most enzymatic degradation mechanisms are still poorly understood which makes it difficult to predict the effectiveness of a biological treatment barrier [10, 12] and potential drawbacks such as the biotransformation of the less-toxic gonyautoxin (GTX) into the more toxic STX [12].
Although commonly used oxidants such as chlorine and permanganate effectively degrade some cyanotoxins, others are not susceptible or require oxidant concentrations and reaction times that are substantially higher than those usually applied in drinking water treatment [2, 9, 13]. As a major disadvantage, chlorination can produce halogenated disinfection byproducts formed from the reaction of chlorine with organic matter or in the presence of bromide [14]. Furthermore, residual chlorine may impair the drinking water quality due to its possibly perceptible taste and odor. Permanganate, on the other hand, neither promotes the formation of toxic disinfection byproducts nor produces taste or odor, but it tints the water pink at > 0.05 mg L−1, which limits its application and residual concentration in drinking water [10].
As another form of oxidation, advanced oxidation processes (AOPs) have received a lot of attention for their application in drinking and wastewater treatment for the degradation of even recalcitrant organic compounds and the disinfection of pathogens. In AOPs, reactive species, mainly ∙OH, are formed in situ [10]. This review gives a detailed insight into AOPs which were investigated for cyanotoxin removal from drinking water to evaluate their feasibility and applicability. To this end, we briefly outline current regulations for cyanotoxins in drinking water and basic principles of different AOPs. We then discuss the most relevant findings from the scientific literature on the degradation effectiveness, advantages and disadvantages of individual methods and the role of relevant water quality parameters. From there, information gaps are identified and recommendations for future research for the effective removal of cyanotoxins from drinking water are formulated.
Cyanobacterial toxins
Cyanotoxins can cause a vast range of clinical signs, including acute hepatotoxicosis, peracute neurotoxicosis, gastrointestinal disturbances as well as respiratory and allergic reactions [6]. Many different cyanobacterial metabolites can be considered cyanotoxins [15]. Here, we will only focus on cyanotoxins for which information on their removal from drinking water was found.
Microcystins
Microcystins are the most commonly studied cyanotoxins produced by different cyanobacteria such as Microcystis, Nostoc, Planktothrix and many other species [16]. This group of water soluble cyclic heptapeptides consists of more than 100 congeners which exhibit similar toxicological properties due to their similar chemical structures, which mainly differ in two amino acids X and Z (Fig. 1) [6]. Hence, MCs are named according to these two variable amino acids, e.g., MC-LR with X and Z being leucine (L) and arginine (R), respectively. The hydrophobic Adda amino acid is often regarded as the key structural element for MCs' biological activity [14]. Unless cyanobacterial cells lyse due to extrinsic stress or senescence, MCs are usually intracellular. In the presence of bacteria and photosynthetic pigments, dissolved MCs rapidly degrade in natural waters [6]. Depending on the degree of sunlight, content of natural organic matter (NOM) and presence of bacteria, MCs can have a half-life of 4 to 14 days in surface water [17].
Fig. 1 Structures of cyanobacterial hepatotoxins
Nodularin
Nodularin (NOD) and its seven analogs are cyclic pentapeptides produced by Nodularia and Nostoc strains. NOD's structure (Fig. 1) is similar to MC, leading to similar chemical and toxicological characteristics [6, 16]. Similar to MC, the hepatotoxic NOD is also intracellular until the bloom starts to decay [6].
Cylindrospermopsin
Cylindrospermopsin (CYN) was first isolated from Cylindrospermopsis raciborskii and later from other species including Aphanizomenon and Oscillatoria. So far, five analogs of this highly water soluble and planar-shaped alkaloid have been described (Fig. 1) [18, 19]. CYN is extracellular and is relatively stable to a wide range of heat, light and pH conditions. The alkaloid can persist in water for more than a month [6, 18]. However, when exposed to sunlight in the presence of cell pigments, CYN has a half-life of about 0.6 to 0.9 days [17].
Anatoxins
Anatoxin-a (Fig. 2) and its derivatives are produced by Aphanizomenon, Dolichospermum (formerly Anabaena), Oscillatoria and Planktothrix. The extremely potent alkaloid neurotoxin acts as a cholinergic nicotinic agonist causing nerve depolarization and neuromuscular blockage. ANTX is usually intracellular, but rapidly degrades once it is released from cells and is exposed to natural sunlight (half-life of approximately 100 min) and oxidants. However, in the absence of sunlight, ANTX can reach a half-life of several days to months. Main degradation products of ANTX and homoanatoxin-a are the notably less-toxic dihydro- and epoxy analogs [9, 17, 20].
Fig. 2 Structures of cyanobacterial neurotoxins
Saxitoxins
STX, also known as paralytic shellfish toxin, is produced by organisms from different taxonomic kingdoms—eukaryotic dinoflagellates and prokaryotic cyanobacteria. Cyanobacterial producers include Aphanizomenon, Cylindrospermopsis, Dolichospermum and Lyngbya [21]. STX can be substituted at various positions (Fig. 2), resulting in currently 57 known analogs, which can be grouped into non-sulphated STXs, singly sulfated GTXs and doubly sulfated C-toxins. The toxicity of the STX analogs decreases with increasing number of substituted sulfate groups [12, 21]. Because of its two cationic guanidine groups, the alkaloid is water soluble [9]. Unless cyanobacterial cells lyse, STX is usually intracellular [9].
β-N-methylamino-l-alanine
β-N-methylamino-l-alanine (BMAA) is a non-proteinogenic amino acid (Fig. 2) reported to be present in terrestrial and marine, free-living and plant symbiotic cyanobacteria including Aphanizomenon, Cylindrospermopsis, Microcystis and Nodularia [22]. The possible association of BMAA with several neurotoxic outcomes is discussed in the literature, e.g., by Ploux et al. [22] and references cited therein.
Exposure to cyanobacterial toxins and current regulations for drinking water
The presence of harmful cyanobacterial blooms and their toxins can evidently be traced back to the nineteenth century, when poisoning through ingestion of surface water led to sickness and death of livestock, pets and wildlife [23]. Ever since, cyanobacterial blooms and toxins have reportedly caused several, partly fatal incidents around the globe. In 1979, more than 100 people were poisoned and had to be hospitalized in Queensland, Australia due to the consumption of contaminated drinking water. Further investigations identified the water supply and later determined Cylindrospermopsis raciborskii as the source for the poisoning, which is now known as the Palm Island mystery disease [24]. Almost 20 years later, 76 hemodialysis patients died in Brazil due to the utilization of water contaminated with MCs and CYN for hemodialysis treatment [25]. Besides posing a risk to human health, cyanotoxins can also have economic consequences [26]. Due to a massive Planktothrix rubescens bloom in the Serbian Vrutci reservoir with cell counts of about 10,000 cells L−1 in the treated drinking water in December 2013, Serbian authorities prohibited the use of tap water in the city of Užice, Serbia (approximately 70,000 inhabitants). As a result of the inability to remove cyanobacterial cells and toxins, an alternative, cyanobacteria-free water source—Sušičko vrelo reservoir—had to be used for several years until reconstruction of the Vrutci reservoir treatment facility was completed [27]. In a similar, but more far-reaching incident, the Ohio EPA put a temporary ban on tap water for the city of Toledo, Ohio, USA in August 2014. About 500,000 people were advised not to drink or otherwise use tap water after MC concentrations in the drinking water exceeded the regulatory threshold of 1 μg L−1. After a few days, when MC concentrations decreased to below the limit, the ban was lifted [28, 29].
To protect humans from exposure to cyanotoxins through consumption of contaminated drinking water, adequate treatment measures must be employed. However, even if effective removal techniques are in place, a comprehensive drinking water guideline, containing a thorough monitoring and action program, is indispensable. A set of threshold values can thus help to take actions in case they are exceeded. So far, the WHO suggested a provisional guideline value (GV) for MC-LR of 1 μg L−1 based on a tolerable daily intake (TDI) of 0.04 μg kg−1 day−1 derived from acute toxicity data [30]. However, GVs for other cyanotoxins have not been proposed yet due to the lack of toxicological and epidemiological data. The upcoming update on the WHO Guidelines for Drinking-water Quality, to be published in 2020–2021, is expected to include recommendations for ANTX, CYN and STX as well as a revision of the GV for MC. Updates can be found on the homepage [31]. The currently proposed GV for MC-LR was accepted or adapted by many countries across the globe [32]. The lack of toxicological and epidemiological data on effects of exposure to other cyanotoxins also raises the question of effects from cyanotoxin mixtures and chronic exposure. The US EPA was the first to propose a chronic TDI (0.003 μg kg−1 day−1) for MC-LR and lowered their acute TDI (0.006 μg kg−1 day−1) based on updated data [16].
AOPs for cyanotoxin removal from drinking water
Although many different reactive chemical species can be produced, short-lived ∙OH is often considered to be the most important species generated in AOPs in water, most likely due to its comparably high reactivity, as indicated by its redox potential (Table 1). This non-selective, randomly attacking oxidant primarily reacts with organic compounds via two distinct mechanisms: (i) via an electrophilic attack at electron-rich moieties such as C=C double bonds, aromatic systems and neutral amines, and (ii) via hydrogen abstraction from C–H groups [33]. At neutral pH, ∙OH can also react via an often kinetically disfavored one-electron transfer mechanism [34].
Table 1 Redox potentials of commonly used oxidants indicating their reactivity
The following sections address the existing knowledge on the removal of different cyanotoxins by AOPs in detail. We were able to identify studies that investigated the use of hydrogen peroxide, ozone, photolysis (including the combination with oxidants and catalysts), Fenton oxidation, non-thermal plasmas, sulfate radicals, electrochemical oxidation, sonolysis and radiolysis.
Hydrogen peroxide
Although H2O2 has a higher redox potential than, e.g., chlorine under acidic conditions (Table 1) and is often used as a precursor for ∙OH as well as to improve the effectiveness of AOPs, it is relatively ineffective for the degradation of cyanotoxins if employed alone. Removal of MC-LR, CYN, ANTX and BMAA by H2O2 was reported to be < 10% [37,38,39,40,41]. Even at 30 °C for 3.5 h, only about 3% of MC-LR was removed by H2O2 [36].
Ozonation and O3-based AOPs
Ozonation is widely employed in drinking water treatment for disinfection of microorganisms and oxidation of various organic pollutants [42]. Even though ozonation itself technically is not an AOP as O3 is usually produced in the gaseous phase, we discuss it in this review, because O3 can be used as an AOP precursor and it decomposes to ∙OH in situ under alkaline conditions (Eqs. 1 and 2) [2, 43]. Ozone itself has a relatively high redox potential at acidic pH (Table 1) and reacts with organic compounds in a similar but more selective manner compared to ∙OH. It attacks electron-rich groups such as unsaturated C=C, aromatic systems and neutral amines [9]. Numerous studies showed the high effectiveness of O3 for the degradation of MCs, NOD, CYN, ANTX and BMAA [14, 19, 37, 44,45,46]. On the other hand, ozonation is not recommended for STX degradation, as the toxicity to mice of an STX extract treated with O3 and O3/H2O2 was reduced by only < 10% [47].
$${\text{O}}_{3} + {\text{OH}}^{ - } \to {\text{HO}}_{2}^{ - } + {\text{O}}_{2}$$
$${\text{O}}_{3} + {\text{HO}}_{2}^{ - } \to \cdot {\text{OH}} + {\text{O}}_{2}^{ - } \cdot + {\text{O}}_{2}$$
$${\text{Fe}}^{2 + } + {\text{O}}_{3} \to {\text{FeO}}^{2 + } + {\text{O}}_{2}$$
$${\text{FeO}}^{2 + } + {\text{H}}_{2} {\text{O}} \to {\text{Fe}}^{3 + } + \cdot {\text{OH}} + {\text{OH}}^{ - }$$
$${\text{O}}_{3} + {\text{O}}_{2}^{ - } \cdot \to {\text{O}}_{3}^{ - } \cdot + {\text{O}}_{2}$$
$${\text{CO}}_{3}^{2 - } + \cdot {\text{OH}} \to {\text{CO}}_{3}^{ - } \cdot + {\text{OH}}^{ - }$$
$${\text{HCO}}_{3}^{ - } + \cdot {\text{OH}} \to {\text{CO}}_{3}^{ - } \cdot + {\text{H}}_{2} {\text{O}}$$
$${\text{O}}_{3} + \cdot {\text{OH}} \to {\text{HO}}_{2} \cdot + {\text{O}}_{2}$$
Ozone can be used as an AOP precursor in combination with other oxidants, UV light (see the section on "Photolysis in combination with oxidants"), catalysts and adsorbents, which increase ∙OH formation. One of the most commonly used and comparatively cheap O3-based AOPs is the peroxone process, in which O3 reacts with deprotonated H2O2 to produce ∙OH (Eq. 2). This process has been shown to further increase the degradation of MCs and ANTX compared to O3 alone [44, 45]. Similarly, degradation of both toxins also increased when Fe2+ was combined with O3 (Eqs. 3 and 4) [44, 45]. However, the combination of O3 with the Fenton's reagent (see the section on "Fenton oxidation") only improved MC-LR degradation at low O3 concentrations due to the oxidation of Fe2+ at higher levels [48]. Instead of Fe2+, the commonly used photocatalyst TiO2 can also be used. CYN degradation improved by almost 30% when O3 was combined with TiO2 due to increased O3 decomposition to ∙OH and CYN adsorption to the catalyst [49].
With regard to water quality parameters, pH plays an important role because it can affect both the toxin speciation and the oxidant, thus influencing the treatment effectiveness. Although O3 reactivity with the conjugated diene in MCs' Adda amino acid was shown to be pH-independent, reactions with the amine and uracil moiety in ANTX and CYN, respectively, depend on the pH, consistent with the toxins' pKa values [14, 19]. In contrast, Al Momani et al. [45] observed a substantially reduced MC-LR degradation when increasing the pH from 2 to 11. This indicates that not only toxin speciation, but also reactivity and availability of dissolved ozone are pH-dependent. Under alkaline conditions, O3 redox potential decreases by almost 50% (Table 1) and ozone decomposition to ∙OH increases (Eqs. 1, 2 and 5). In addition, if ∙OH quenching by NOM and alkalinity (as carbonate/bicarbonate, Eqs. 6 and 7) is reduced due to low availability, ozone consumption is further promoted by ∙OH (Eq. 8) [43]. BMAA degradation with O3 was also observed to be pH-dependent but direct O3 attack was less important, while secondary oxidants such as HO2− formed from O3 under alkaline conditions (Eq. 1) played a substantial role [37]. Moreover, the selectivity of O3 toward specific electron-rich moieties was shown to be pH-dependent, as the C=C double bonds in CYN and ANTX are primarily attacked at pH < 7–8, whereas oxidation of the amine groups dominates at higher pH [14, 19, 46]. Overall, the reaction rates for ozonation at pH 8 were in the order MC-LR > CYN > ANTX [19].
Besides ∙OH quenching, NOM may also quench O3. In the presence of 2 mg L−1 humic acid, MC-LR and -RR degradation by O3 was reduced by approximately 25% [45]. In fact, NOM concentration was shown to be more influential on the degradation than its composition and alkalinity [19, 49]. In addition to water quality parameters discussed above such as pH, alkalinity and NOM, the actual concentration of cyanotoxins and other pollutants dictates the O3 demand of water. However, the effects of water quality on the pollutant removal are negligible once a residual O3 concentration is present in the treated water. Hence, an ozone residual of > 0.3 mg L−1 for ≥ 5 min, which is typically applied in water treatment plants, is recommended for cyanotoxin removal [2].
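To put this recommendation into perspective, a back-of-the-envelope exposure estimate can be made. Assuming, purely for illustration, a second-order rate constant of 10^5 M−1 s−1 for the direct reaction of a fast-reacting toxin with O3 (actual rate constants are toxin- and pH-specific), an ozone residual of 0.3 mg L−1 maintained for 5 min corresponds to
$$\int [{\text{O}}_{3}]\,{\text{d}}t = \frac{0.3 \times 10^{-3}\;{\text{g L}}^{-1}}{48\;{\text{g mol}}^{-1}} \times 300\;{\text{s}} \approx 1.9 \times 10^{-3}\;{\text{M s}}, \qquad \ln \frac{[{\text{toxin}}]_{0}}{[{\text{toxin}}]} \approx 10^{5}\;{\text{M}}^{-1}\,{\text{s}}^{-1} \times 1.9 \times 10^{-3}\;{\text{M s}} \approx 190,$$
i.e., essentially complete removal of toxins that react directly with O3, which illustrates why a maintained residual largely eliminates the influence of the water matrix (slow-reacting toxins such as the STXs remain the exception).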
Photolysis
Photolysis occurs in the environment by exposure to sunlight and is commonly employed for disinfection in water treatment utilizing UV light. Upon absorption of light, the absorbed energy is dissipated through physical and chemical processes, which can include the breakdown of the compound [50]. Although ANTX readily degrades under sunlight in the absence of photosensitizers (t1/2 = 1–2 h at alkaline pH) [51], other cyanotoxins such as MCs and CYN are less susceptible to direct photodegradation by sunlight [52]. The efficacy of photolytic treatment strongly depends on the wavelength, i.e., the energy, of the light used. For instance, ANTX has an absorption maximum in the range of 230–240 nm, which explains the toxin's resistance to UV-A irradiation (315–400 nm), while it degrades by 70% under UV-C irradiation at 254 nm [39]. Similarly, NOD degradation also improved when UV light of a shorter wavelength, i.e., higher energy, was used [53]. With vacuum-UV at 172 nm, water is directly photolyzed to form ∙OH (Eq. 9), which further increased ANTX degradation and substantially reduced the UV dose required for complete removal. However, direct water photolysis is strongly limited to a light penetration depth in water of < 100 μm, which makes ∙OH formation by vacuum-UV less attractive to drinking water treatment compared to other AOPs [54].
$${\text{H}}_{2} {\text{O}} + {\text{hv }}\left( {172\;{\text{nm}}} \right) \to {\text{H}}_{2} {\text{O}}^{*} \to {\text{H}} \cdot + \cdot {\text{OH}}$$
Besides wavelength, light intensity is a crucial parameter as well. MC-LR degradation increased by about 30–40% when light intensity was tripled [55]. Moreover, at 254 nm and a dose of 564 mJ cm−2, approximately 66% MC-LR degradation was achieved, while at 312 nm, a much higher dose of 11,304 mJ cm−2 was required to yield similar results [55, 56]. In addition to irradiation, degradation also depends on the toxin structure as shown in a study on UV-photolytic treatment of four MCs, where degradation increased in the order MC-LR < -RR < -YR < -LA owing to the different amino acid structures (A = alanine, L = leucine, R = arginine, Y = tyrosine) [41].
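For orientation, the UV dose (fluence) is simply the product of the average irradiance and the exposure time. Taking an assumed, illustrative irradiance of 1 mW cm−2 (not a value from the cited studies), the 564 mJ cm−2 dose reported above corresponds to an exposure time of
$$t = \frac{564\;{\text{mJ cm}}^{-2}}{1\;{\text{mW cm}}^{-2}} = 564\;{\text{s}} \approx 9.4\;{\text{min}},$$
one to two orders of magnitude longer than the exposure needed for the 10–40 mJ cm−2 doses commonly applied for disinfection (see below), which illustrates why UV-only treatment of cyanotoxins is energy-intensive.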
UV-based treatments are so far the only methods for which MC-LR detoxification due to isomerization of the 4(E),6(E)-Adda chain (Fig. 1) to 4(Z)- or 6(Z)-Adda was observed. Furthermore, degradation mechanisms include decarboxylation, which has only been reported for UV-based methods and sulfate radical-based AOPs (SR-AOPs; see the section on "Sulfate radical-based AOPs") [56, 57].
Cyanotoxins usually co-occur with NOM which can act as photosensitizer and improve the degradation. For instance, MC-RR photodegradation by sunlight substantially increased in presence of the cyanobacterial pigment phycocyanin [58]. However, photosensitizer concentration is essential as it was shown for MC-LR degradation. At lower concentrations, pigment availability was the limiting factor, while at higher concentrations, light attenuation was significant [59]. In a similar manner, ANTX photodegradation was more effective in the presence of NOM but the degradation decreased with increasing NOM concentration. Experiments with quenchers showed that besides excited NOM, 1O2 and ∙OH also contributed to the toxin degradation and that 1O2 was more important than ∙OH [60]. In contrast, photosensitized CYN degradation was observed to be mainly driven by ∙OH (about 65–70%), with 1O2 and excited NOM only playing minor roles [61]. This disagreement may not only be related to the different toxins, but also to experimental conditions and using fulvic acid and solar light vs humic acid and UV-C light, respectively. Although phycocyanin did not improve CYN photodegradation, other cyanobacterial compounds were observed to accelerate NOD and CYN degradation [53, 62, 63]. In fact, the presence of different pigment types was shown to affect MC-LR photodegradation effectivity in the following order: without pigment < chlorophyll a < β-carotene < water-extractable pigments < solvent-extractable pigments [58]. Furthermore, higher light intensities led to pigment bleaching and degradation which adversely affected MC-LR degradation [59].
Turbidity is one of the most important water quality parameters for photodegradation. Light absorption by non-target water constituents not acting as photosensitizer attenuates light and reduces penetration depth. Therefore, photodegradation is usually efficient in relatively clear water, after most turbidity has been removed [10]. Other water quality parameters may also affect the degradation as shown for ANTX degradation by UV-C radiation, where toxin removal was more effective at acidic pH with an optimum at pH = 6.4, most likely due to ANTX speciation under acidic conditions (pKa = 9.4) and possible inter- and intramolecular hydrogen bonding under alkaline pH. Also, higher temperatures led to increased ANTX degradation, but the changes became insignificant at T > 24 °C. Last, as for most AOPs, alkalinity was observed to decrease ANTX degradation due to quenching of reactive species [60].
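The speciation argument can be made quantitative with the Henderson–Hasselbalch relationship; the numbers below are only meant to illustrate the magnitude of the effect for ANTX (pKa = 9.4):
$$f_{{\text{protonated}}} = \frac{1}{1 + 10^{\,{\text{pH}} - {\text{p}}K_{{\text{a}}}}},$$
so about 99.9% of the toxin is protonated at the reported optimum of pH 6.4, 50% at pH 9.4 and only a few percent at pH 11, paralleling the shift in speciation invoked above to explain the reduced removal under alkaline conditions.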
To achieve high degradation yields, UV doses substantially higher than those commonly used for disinfection in water treatment (10–40 mJ cm−2 [9]) are required. Consequently, to reduce energy demand and operating costs for large-scale water treatment, the combination of UV with oxidants or photocatalysts—as discussed in the following paragraphs—is indispensable.
Photolysis in combination with oxidants
The combination of UV radiation with H2O2 or O3 improves pollutant degradation due to the photolytic production of ∙OH (Eqs. 10 and 11) [56]. Moreover, ∙OH and SO4−∙ are produced from peroxymonosulfate (PMS) or peroxydisulfate (also persulfate, PS) upon UV activation (see the section on "Sulfate radical-based AOPs"). In a UV/chlorine system, ∙OH, Cl∙, OCl∙ and other reactive species are formed following Eqs. (12) to (16) [64].
$${\text{O}}_{3} + {\text{H}}_{2} {\text{O}} + {\text{hv}} \to {\text{O}}_{2} + {\text{H}}_{2} {\text{O}}_{2}$$
$${\text{H}}_{2} {\text{O}}_{2} + {\text{hv}} \to 2 \cdot {\text{OH}}$$
$${\text{HOCl}} + {\text{hv}} \to \cdot {\text{OH}} + {\text{Cl}} \cdot$$
$${\text{OCl}}^{ - } + {\text{hv}} \to {\text{O}}^{ - } \cdot + {\text{Cl}} \cdot$$
$${\text{Cl}} \cdot + {\text{Cl}}^{ - } \to {\text{Cl}}_{2}^{ - } \cdot$$
$${\text{HOCl}}/{\text{OCl}}^{ - } + \cdot {\text{OH}} \to {\text{H}}_{2} {\text{O}}/{\text{OH}}^{ - } + {\text{OCl}} \cdot$$
$${\text{HOCl}}/{\text{OCl}}^{ - } + {\text{Cl}} \cdot \to {\text{HCl}}/{\text{Cl}}^{ - } + {\text{OCl}} \cdot$$
$$\cdot {\text{OH}} + {\text{H}}_{2} {\text{O}}_{2} \to \cdot {\text{HO}}_{2} + {\text{H}}_{2} {\text{O}}$$
UV in combination with oxidants has been studied for the removal of MCs, CYN, ANTX and BMAA. For all four toxins, UV-based treatment was substantially more effective when H2O2 was added [37, 39, 40, 54, 55, 60]. Increasing H2O2 concentration improved cyanotoxin degradation only up to a certain oxidant concentration. Once the optimal H2O2 level was exceeded, ∙OH quenching by H2O2 (Eq. 17) outweighed radical formation [54, 55, 60]. Different studies reported that MCs were degraded at higher rates compared to CYN, ANTX and BMAA because of their higher reactivity with ∙OH. This is caused by MCs' size and higher number of functional moieties that are partly more susceptible to radical attack [37, 41, 65]. The importance of the structure for the reactivity with ∙OH is further affirmed when looking at different MCs. The major part of their structures is similar with the main difference being two amino acids (see Fig. 1). However, these minor differences suffice to yield different degradation rate constants: MC-YR (1.63 × 10^10 M−1 s−1) > MC-RR (1.45 × 10^10 M−1 s−1) > MC-LR (1.13 × 10^10 M−1 s−1) > MC-LA (1.10 × 10^10 M−1 s−1) [41].
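These second-order rate constants translate into treatment times only once an ∙OH exposure is specified. As a rough, purely illustrative estimate, assuming a steady-state ∙OH concentration of 10−12 M (an assumed value; the actual exposure depends on oxidant dose, UV fluence and the water matrix), pseudo-first-order kinetics give for MC-LR
$$k' = k_{\cdot {\text{OH}}}\,[\cdot {\text{OH}}]_{{\text{ss}}} \approx 1.13 \times 10^{10}\;{\text{M}}^{-1}\,{\text{s}}^{-1} \times 10^{-12}\;{\text{M}} \approx 1.1 \times 10^{-2}\;{\text{s}}^{-1}, \qquad t_{1/2} = \frac{\ln 2}{k'} \approx 61\;{\text{s}},$$
and half-lives of roughly 40–65 s for the four congeners, showing that the spread in rate constants matters far less in practice than the ∙OH exposure a given AOP actually delivers.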
When O3 was added to UV instead of H2O2, MC-LR degradation also became more effective compared to UV- and O3-only treatment. O3 decomposition to ∙OH is accelerated under UV irradiation and as a consequence, both O3 and ∙OH oxidize pollutants [56, 66]. Although O3, i.e., its production, may be more expensive compared to H2O2 and TiO2 (for UV/TiO2 see the section on "Photocatalysis"), to achieve similar results, shorter reaction times and lower oxidant doses were required compared to UV/H2O2 treatment [56]. Due to the UV irradiation, decarboxylation and isomerization of MC-LR were observed, which did not occur in O3-only treatment. Furthermore, compared to UV- and O3-only treatment, UV/O3 had a higher potential to degrade MC-LR and its degradation intermediates simultaneously under the same conditions [56].
As another, cheaper alternative to H2O2, the addition of chlorine has been studied in UV-based AOPs [67]. UV/chlorine was shown to be more effective compared to UV/H2O2, UV- and chlorine-only MC-LR treatment. Besides producing a variety of reactive oxygen and chlorine species (Eqs. 12 to 16), Cl∙ is more selective than ∙OH and preferentially reacts with electron-rich moieties [64]. Similar to UV/H2O2, increasing the chlorine dose led to a more effective MC-LR degradation due to an increase in reactive chlorine species production and higher contribution to toxin degradation [64, 67]. However, the use of chlorine may lead to the formation of halogenated degradation products such as chloroform and dichloroacetic acid produced from MC-LR following a series of oxidation steps [67]. Even though yields of these chlorinated byproducts increased with prolonged treatment time, residual cytotoxicity after UV/chlorine treatment was lower compared to untreated MC-LR [67].
Besides oxidant type and dose, the UV radiation itself is an important factor, as the peroxide bond in H2O2 is cleaved only upon irradiation with light of λ < 300 nm [39]. Hence, MC-LR and ANTX degradation by UV-A/H2O2 (λUV-A ≈ 400–315 nm) has been reported to be substantially less effective compared to UV-B/H2O2 and UV-C/H2O2 (λUV-B ≈ 315–280 nm, λUV-C ≈ 280–100 nm), respectively [39, 55, 68].
Similar to other AOPs, water quality parameters can influence UV/oxidant degradation efficacy. In the UV/oxidant setup, NOM acts as an oxidant and radical quencher rather than as a photosensitizer, thus decreasing removal efficacy, in contrast with NOM action during photolysis without the addition of oxidants. NOM may compete with the oxidant for UV photons, which consequently reduces reactive species formation [60, 65, 69]. The UV/O3 system was also affected by NOM but to a lesser extent than O3-only treatment of MC-LR [56]. In the case of UV/chlorine degradation of MC-LR, NOM not only decreased the degradation, but also resulted in a higher yield of chlorinated byproducts. This yield was observed to be dependent on NOM as well as chlorine dosage [67]. In the presence of bromide, MC-LR degradation increased due to the formation of HOBr which is more reactive than HOCl toward phenolic and amine moieties. Furthermore, UV activation of HOBr formed reactive bromine species which may have contributed to MC-LR degradation [64]. Alkalinity decreased UV/H2O2 and UV/chlorine degradation efficacy similarly to NOM, due to H2O2 and radical quenching [64, 69].
UV/oxidant removal efficacy is also affected by water pH. For ANTX removal by UV/H2O2, the highest efficacy was achieved at pH 6.7, while at lower pH the ∙OH yield decreased due to reactions with H+ and at alkaline pH ANTX is deprotonated and exists as neutral amine (pKa = 9.4). In this form, inter- and intramolecular hydrogen bonds can form which affect ANTX reactivity with ∙OH [39]. In contrast, for BMAA removal, alkaline pH appeared to increase the degradation rate constant due to BMAA speciation at higher pH [37]. In UV/O3 systems, the pH does not only determine toxin speciation, but also O3 stability, which decreases at alkaline pH and may affect toxin degradation. However, this effect seemed to be less influential for MC-LR degradation by UV/O3 compared to O3-only treatment [56]. In UV/chlorine-based treatment, the oxidant itself is also strongly affected by the pH, when HOCl dissociates to OCl− at alkaline pH (pKa = 7.5). OCl− has a lower molar absorption and thus a lower radical yield. Furthermore, OCl− reacts at a higher rate with ∙OH and Cl∙ compared to HOCl. The optimum pH for MC-LR degradation by UV/chlorine was determined to be pH 7.4 [67]. In contrast, in another study MC-LR degradation by UV/chlorine was shown to be most effective at pH 6 and decreased at pH 7 [64]. Most of the experimental conditions seemed to be very similar, i.e., oxidant type and concentration, UV wavelength, MC-LR concentration and pH-buffer composition but notable differences were the UV intensity and pH-buffer concentration, which could have affected the outcomes. Both studies also examined the contribution of different reactive species to MC-LR degradation and reported different findings. In the first study, at neutral pH, MC-LR degradation by UV/chlorine was dominated by ∙OH (42.5%), while Cl2 (25.4%), ClO∙ (13.3%), Cl∙ (11.1%) and UV (8.5%) contributions were lower [67]. In the second study, at neutral pH, MC-LR degradation was driven by HOCl/OCl− (47.3%), while reactive chlorine species (21.3%), UV (21.1%) and ∙OH (10.3%) were only partially responsible for MC-LR degradation. Also in the second study, the UV intensity was about twice as high compared to the first study, which may explain the difference in the higher UV contribution [64].
Photocatalysis
Instead of oxidants, photoactive semiconductors can be used to improve UV-based cyanotoxin degradation. Upon exposure to light with energy exceeding the band gap between the occupied valence band and the unoccupied conduction band, an electron migrates from the valence to the conduction band. The formed valence band hole yields an oxidative site, while the now occupied conduction band provides a reducing site. As a result, three reaction mechanisms are possible: (i) direct oxidation at the valence band, (ii) ∙OH formation from H2O or OH− at the valence band, and (iii) O2−∙ and subsequent H2O2 formation from O2 at the conduction band [70].
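The band gap also sets the longest wavelength that can drive this charge separation, via λ ≈ hc/Eg ≈ 1240 eV nm/Eg. Taking the commonly cited band gap of about 3.2 eV for anatase TiO2 as an illustrative value,
$$\lambda_{\max} \approx \frac{1240\;{\text{eV nm}}}{3.2\;{\text{eV}}} \approx 390\;{\text{nm}},$$
so only UV photons are energetic enough to activate undoped TiO2; to respond to visible light around 450 nm, the band gap would have to be narrowed to roughly 2.8 eV or less, which is the rationale behind the doping strategies discussed below.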
Photocatalysis was shown to be effective for the removal of MCs, NOD and CYN [71,72,73,74]. Besides toxin degradation, adsorption onto the catalyst is often reported as a fourth removal mechanism. In a study with different MCs, degradation was faster, when adsorption to the catalyst was the highest [75], while in another study, no correlation of MC and NOD degradation with dark adsorption was observed [74]. In the first study, TiO2 powder was used as received [75], while in the second study, the catalyst was coated onto glass spheres [74], which may have caused these contradicting findings. Effectivity of dark adsorption depends on toxin composition and hydrophobicity in particular. Adsorption to TiO2 increased with increasing pollutant hydrophobicity and was thus pH-dependent, caused by compound speciation and change of hydrophobicity at certain pH. Hence, for MCs, adsorption increases at acidic pH [75, 76].
Because of its high oxidizing power, chemical stability and low cost, TiO2 is a commonly used photocatalyst [70]. Toxin degradation is accelerated with increasing TiO2 concentration, but levelled off once a certain catalyst concentration was reached [72,73,74]. However, TiO2 is only photoactive at UV light, which limits its applicability. Therefore, TiO2 has been doped with mostly non-metal elements to reduce the band gap and consequently decrease the energy required for its activation [77]. Although N-doped TiO2 was less effective than pure TiO2 under UV and solar light, MC-LR could only be removed under visible light with N-TiO2 [73]. Further, N-F-co-doped TiO2 achieved higher removal compared to N- or F-TiO2 under visible light [77]. For the removal of 6-hydromethyl uracil, a CYN model compound, under UV light, degradation efficacy for different co-doped TiO2 was in order: N-F-TiO2 > P-F-TiO2 > S-TiO2, while N-F-TiO2 was the only catalyst which removed the uracil derivative under visible light [78]. MC-LR could be removed by Vis/S-TiO2 due to the different toxin structure, allowing MC-LR adsorption to the photocatalyst and consequently allowing for degradation [79]. Similarly, C-doped TiO2 showed lower removal rates under UV light compared to pure TiO2, but in contrast, achieved MC-LR and CYN degradation under visible light. Differences in the reaction products and reactive species involved revealed distinct reaction mechanisms under UV and visible light [71]. Under UV light, ∙OH was the primary reactive species, while under visible light, O2−∙ became more important [71, 78, 80]. Besides doped TiO2, other photocatalysts, e.g., WO3 and Fe2O3 showed high response to solar light and were used for MC-LR degradation [81, 82]. Similar to TiO2, doping of WO3 improved MC-LR degradation and dopants can be ordered according to the removal rate: WO3 < CuO–WO3 < Pd–WO3 ≪ Pt–WO3 [81]. When BiOBr was used as photocatalyst, MC-LR and CYN degradation was achieved by direct reaction with the catalyst instead of radicals, which followed a different reaction mechanism, involving decarboxylation [83, 84].
A major limitation of photocatalysis is the need to remove the catalyst in a subsequent treatment step, which becomes even more difficult if nano-scale powder is used. Hence, employing heterogeneous or immobilized photocatalysts simplifies or avoids this removal step and makes photocatalysis more attractive for large-scale water treatment. Besides substrates like glass or PVC, cellulose acetate and PET monoliths were found to be the best supporting materials for TiO2 photocatalytic treatment of MC-LR and CYN [85]. When coated onto granular-activated carbon, TiO2 photocatalysis of MC-LR improved compared to pure TiO2 powder due to increased adsorption to TiO2 or to activated carbon sites in the vicinity of the catalyst [86]. In the case of Fe-based photocatalysts, immobilization on a substrate is not necessary because they can be separated magnetically [82].
Similar to photolysis, photocatalysis efficacy depends on the light characteristics, i.e., wavelength and intensity in particular. Although doped TiO2 is also activated under visible light, degradation rates are reduced by two to three orders of magnitude compared to UV or solar light [71, 73]. With higher light intensities, more electron–hole pairs are formed regardless of the employed photocatalyst, which results in higher toxin removal [72, 81].
When UV/TiO2 was combined with H2O2, higher MC-LR degradation was achieved compared to UV/TiO2 or UV/H2O2 alone [87]. Although dark adsorption to the catalyst decreased in the presence of the oxidant, H2O2 decomposition to ∙OH increased when TiO2 was present. However, H2O2 concentration was shown to be a crucial factor, as the MC-LR degradation rate was highest at 0.005% H2O2 in solution and decreased at higher concentrations [87]. Besides adding oxidants, photocatalysis can also be enhanced when used in photoelectrocatalysis. Similar to elemental doping of a photocatalyst, photoelectrocatalysis improves the photocatalytic activity by removing electrons from the catalyst, which reduces the recombination of electron–hole pairs so that the holes in the valence band can instead be utilized for oxidant production and pollutant degradation [88]. Under the given experimental conditions, MC-LR degradation was substantially more effective by photoelectrocatalysis using Ag/AgCl/TiO2 nanotube electrodes compared to photocatalysis and electrochemical degradation alone [89]. Because this approach is based on an electrolytic cell, parameters such as electrolyte composition affect the degradation and need to be optimized (see the section on "Electrochemical oxidation").
Also for photocatalytic toxin removal, water quality parameters such as pH and NOM have a major impact. For instance, regardless of type and doping of the photocatalyst, higher MC and CYN removal was achieved under acidic conditions [72, 76, 82]. However, when different scavengers were used during MC-LR degradation by Vis/NF–TiO2, the solution pH not only affected dark adsorption, but also played a crucial role in the formation and reactivity of reactive oxygen species [80]. Although NOM can act as photosensitizer in photolysis, it may adsorb to the catalyst surface and quench reactive species produced by photocatalysis which reduces toxin removal. Similarly, high alkalinity quenches and thus limits ∙OH availability [76]. Another crucial water quality parameter is dissolved O2 which functions as precursor for O2−∙ and H2O2 formation at the conduction band. Under an O2-free atmosphere, cyanotoxin degradation substantially decreased or was completely inhibited [72, 78]. In the presence of Fe3+ and Cu2+, toxin degradation increased due to ∙OH production in a Fenton-like reaction [78, 81] (see the section on "Fenton oxidation"). At lower concentrations, Cl− can function as a precursor for Cl∙ which was shown to increase MC-LR degradation. However, when exceeding an optimal Cl− concentration, ∙OH quenching and Cl2 formation became more efficient and suppressed toxin degradation [81].
Fenton oxidation
In a Fenton reaction, ∙OH is produced via the reaction shown in Eq. (18). Fenton's reagent refers to Fe2+/H2O2, but other transition metals, e.g., chromium, copper and manganese, as well as other oxidants, e.g., HClO, S2O82− or HSO5−, can also produce ∙OH or SO4−∙ in Fenton-like reactions (see the section on "Sulfate radical-based AOPs" for SO4−∙ production) [36, 90]. Fenton oxidation of several cyanotoxins was studied, and the removal effectiveness was found to be in the order of MC-RR > CYN > MC-LR ≫ ANTX > STX [36, 38, 44, 45, 91, 92].
$${\text{Fe}}^{2 + } + {\text{H}}_{2} {\text{O}}_{2} \to {\text{Fe}}^{3 + } + \cdot {\text{OH}} + {\text{OH}}^{ - }$$
$${\text{Fe}}^{3 + } + {\text{H}}_{2} {\text{O}}_{2} \to {\text{Fe}}^{2 + } + {\text{HO}}_{2} \cdot + {\text{H}}^{ + }$$
$${\text{Fe}}^{3 + } + {\text{HO}}_{2} \cdot \to {\text{Fe}}^{2 + } + {\text{O}}_{2} + {\text{H}}^{ + }$$
$${\text{Fe}}^{2 + } + \cdot {\text{OH}} \to {\text{Fe}}^{3 + } + {\text{OH}}^{ - }$$
The Fe2+ to H2O2 ratio is among the most crucial parameters that must be optimized to prevent parasitic reactions which inhibit ∙OH formation. In the case of H2O2 excess, Fe3+ is reduced to Fe2+ at a slower rate than the Fenton reaction, consuming H2O2 to yield HO2∙ (Eqs. 19 and 20). The formation of the less reactive HO2∙ competes with the formation of the more reactive ∙OH (Table 1). An excess in H2O2 also leads to the formation of HO2∙ by ∙OH depletion (Eq. 17). In addition, ∙OH can be quenched by an excess of Fe2+ (Eq. 21) [90]. Several studies reported different optimal Fe2+ to H2O2 ratios, even for the same toxins, which emphasizes that experimental conditions like toxin concentration and solution pH must be considered [38, 44, 45, 93].
Fenton oxidation is often reported to be most effective at pH ≈ 3 [90]. At alkaline pH, Fe2+ forms hydroxides which tend to precipitate, thus reducing Fe2+ availability to produce ∙OH [38]. Furthermore, H2O2 stability decreases at alkaline pH which further limits ∙OH formation [94]. On the other hand, under too strong acidic conditions, H+ inhibits Fe3+ reduction which also decreases Fe2+ availability [93]. For practical applications, treatment closer to neutral pH may be beneficial as it would reduce resources and costs required for pH adjustments before and after Fenton treatment while keeping a sufficient effectiveness, e.g., 77% MC-LR removal at pH 3 and 68% removal at pH 5, [38]. To extend the pH range, heterogeneous or immobilized catalysts can be used which additionally improves its removability from water. For instance, across a pH range of 5–8 in a photo-Fenton process, heterogeneous FeY was shown to yield a higher catalytic activity than Fe2+ [95]. Although alkaline pH is often reported to decrease Fenton effectiveness due to lower ∙OH yields, the formation of alternative oxidants at neutral and alkaline pH was proposed in a study with Reactive Black 5 and As3+ [96]. Oxidation increased again under alkaline pH which was associated with high-valent iron species, i.e., Fe4+, in form of hydroxo-complexes [96].
In Fenton-like reactions, transition metals and oxidants other than Fe2+ and H2O2 produce ∙OH. For instance, Fe3+ can be used instead of Fe2+. However, MC-LR degradation was shown to be significantly slower, because Fe3+ is first reduced to Fe2+ by H2O2 (Eqs. 19 and 20) before it eventually produces ∙OH [97]. Besides MC-LR, CYN and ANTX were also shown to be degraded by a Fenton-like system, namely Fe3+, which was bound to a macrocyclic ligand system, in combination with H2O2 [91]. In another study, Cu2+ in combination with ascorbic acid was used to degrade MC-LR. Ascorbic acid reduced Cu2+ to Cu+ which then activated O2 to form H2O2 via O2−∙. Cu+/H2O2 produced ∙OH with a rate constant of approximately 100 M−1 s−1, which is in a similar range as that of the Fe2+/H2O2 system (76 M−1 s−1) [98].
Fenton oxidation can be improved when combined with UV/Vis light which leads to photoreduction of Fe3+ to Fe2+ as well as photolytic ∙OH generation from H2O2 [68]. Photo-Fenton was shown to be more effective compared to dark Fenton MC-LR degradation due to combined effects of photolytic and Fenton mechanisms [99]. Due to the continuous photoreduction of Fe3+ and photocatalytic ∙OH generation from Fe3+ in form of Fe(OH)2+, UV-C/Fe3+/H2O2 was more effective than UV-C/Fe2+/H2O2 for the degradation of MC-LR [68]. Furthermore, the spectrum of the irradiation light was also shown to affect MC-LR degradation, as UV-C and solar light were more effective than UV-A [68, 100]. Other hybrid Fenton techniques use ultrasound or electrolysis (for electro-Fenton see the section on "Electrochemical oxidation") to improve pollutant removal. In sono-Fenton, ∙OH formation is further accelerated by combining sonochemical (see the section on "Sonolysis") and Fenton mechanisms [101].
As for most AOPs, NOM quenches produced ∙OH and may thus reduce the effectiveness of Fenton oxidation [38]. Although CYN degradation by Fenton oxidation decreased in the presence of NOM, the effect appeared to be less extensive compared to, e.g., ozonation and UV/TiO2 [92]. Photo-Fenton may be more affected by NOM due to light attenuation [68]. However, the effect of NOM on the degradation strongly depends on its type and composition. The removal rate of MC-LR by solar photo-Fenton in presence of different NOM types was in the order of fulvic acid > NOM-free > humic acid > a mixture of fulvic and humic acids plus bicarbonate as alkalinity. Humic acid usually has a larger molecular weight and contains more aromatic moieties than fulvic acid which, in turn, may act as a chelating agent, stabilizing Fe2+ [102]. Similarly, when zero-valent iron nanoparticles were used in a heterogeneous Fenton-like reaction, humic acid seemed to form H2O2-cleaving iron complexes which resulted in higher MC-LR degradation [103].
Non-thermal plasma
Low or atmospheric pressure plasmas in which most of the energy is transmitted to free electrons (temperatures of ≥ 10^4 K), while the remaining heavy species only receive minor amounts of energy (temperatures of ≤ 10^3 K), are called non-thermal plasmas (NTPs). A broad spectrum of reactive species is generated by NTPs, including hot electrons, photons, and heavy species such as radicals, excited atoms, molecules and ions, reactive oxygen and nitrogen species. In addition, some discharges may generate shock waves [104, 105]. NTPs are generated by electric discharges in the gaseous or liquid phase, or at their interface and due to the overall low plasma temperature can be employed in many different fields including water treatment [104, 106]. In fact, because of higher efficiencies compared to other means of O3 production, O3 generators are often based on electric discharges in air or oxygen [106].
For an electric discharge in gas, the gas type and composition dictate which reactive species are produced. For a discharge in air or oxygen, one of the most important processes is O3 formation. However, in the presence of N2, i.e., in discharges in air, NOx are also produced, which lead to acidification and nitrification of the solution if the gas is bubbled through water afterward [107]. O3 produced in gas can directly react with pollutants or dissolve into the liquid when the gas is bubbled through the solution after the discharge, where it can also decompose to ∙OH [108]. If oxygen-free gases such as Ar are used, no reactive oxygen species are produced in the gas phase, but when the gas passes through water, ∙OH can be formed upon reaction with ionized or excited species in the gas [109]. In electric discharges in water, low-energy electrons excite water molecules, whereas high-energy electrons dissociate water. Both reactions lead to the formation of ∙OH (Eqs. 22 and 23), which is one of the main reactive species produced by a discharge in water. H2O2 is formed as a recombination product of ∙OH [107]. With a discharge in liquid, reactive species can directly react with pollutants in the plasma channels or close to the plasma–liquid boundary without the need to diffuse from the gaseous into the liquid phase [110]. For an electric discharge at the gas–liquid interface, plasma channels usually form on top of the liquid surface as the liquid acts as counter electrode. Here, reactive species are formed in the gaseous and liquid phases and can easily diffuse into the other phases [111, 112].
$${\text{H}}_{2} {\text{O}}^{*} + {\text{H}}_{2} {\text{O}} \to {\text{H}} \cdot + \cdot {\text{OH}} + {\text{H}}_{2} {\text{O}}$$
$${\text{e}}^{ - } + {\text{H}}_{2} {\text{O}} \to {\text{H}}^{ - } + \cdot {\text{OH}}$$
So far, NTPs were studied for the removal of MCs, ANTX and BMAA [108, 109, 111,112,113,114,115]. Besides the type of reactive species produced in an electric discharge, other parameters also affect the treatment efficacy. Studies on MC-LR removal in a gas–liquid surface discharge showed that a higher operating voltage increased the degradation due to a higher energy input [111, 115], and similar results were shown for the degradation of ANTX in a dielectric barrier discharge in O2 and subsequently bubbling the gas through the sample solution [108]. Here, an increased operating voltage led to higher O3 concentrations, which in turn also resulted in higher ∙OH levels in water due to decomposition of dissolved O3 [108]. However, for discharges in air, maximal O3 concentration may not be achieved with the highest voltage, because of increasing O2 consumption in NOx reactions and O3 depletion in reactions with N and NO at higher voltages [116]. Besides operating voltage, pH was also shown to affect the MC-LR degradation effectiveness in an Ar–water surface discharge, in which an acidic pH was beneficial for the removal [109]. Since MC-LR was expected to be unaffected in the studied pH range, the ∙OH concentration was assumed to be reduced under alkaline pH due to reaction with OH− [109]. The concentrations of the formed reactive species can also be increased by higher gas flow rates [115].
The electrode distance also impacts the plasma chemistry: when the distance between the high-voltage electrode and the water surface decreases, the energy input increases and intensifies the reactions induced by the plasma. For a shorter electrode distance, the transfer time into the solution is reduced, especially for short-lived reactive species [111, 112, 115], whereas long transfer times can reduce the degradation effectiveness. This is why catalysts have been studied as additives in NTPs to transform long-lived species like O3 and H2O2 into the more reactive ∙OH. For example, Mn-doped carbon xerogels not only increase the ∙OH concentration, but also adsorb, e.g., MC-LR, thus immobilizing the toxin to enhance reactions with oxidants [112, 114]. Because electric discharges also generate UV light, photocatalysts like TiO2 have also been studied as additives to increase the formation of ∙OH (see the section on "Photocatalysis") [114]. Due to the formation of H2O2 in water, another alternative is the addition of Fe2+ to yield Fenton's reagent (see the section on "Fenton oxidation") [115]. Plasma generation, intensive heat and direct electro-physical and -chemical processes at the electrode can lead to corrosion, resulting in the release of metal ions from the electrodes. Correspondingly, electrodes made from catalytically active materials may release, e.g., Fe2+ from stainless steel, which, in combination with H2O2 produced by the discharge, thus increases ∙OH formation [107].
Besides pollutant degradation during the actual treatment, plasma-treated water has been shown to yield residual—post-treatment—oxidative and microbicidal effects. Up to a few days after exposure to an electric discharge, plasma-treated water still effectively degraded, for example, BMAA [113]. Although this phenomenon is still not fully elucidated, long-lived reactive species such as O3, H2O2 and peroxynitrous acid (ONOOH) may be responsible for this residual effect [104].
When simulating a real water matrix by adding, for example, K2HPO4, NaNO3 or humic acid, degradation of MC-LR was reduced due to competition for ∙OH [109]. For ANTX degradation by a dielectric barrier discharge in O2, KNO3, KH2PO4 and glucose were shown to affect the degradation similarly [108].
Sulfate radical-based AOPs
In SR-AOPs, SO4−∙ is the major reactive species generated from PMS or PS. It is more selective than ∙OH and has a higher redox potential at neutral pH (Table 1), which may make it more suitable for water treatment across a broader pH range [35, 36, 117]. Furthermore, PS and PMS are more stable than H2O2, increasing precursor transportability across longer distances within water [117]. Moreover, the peroxide bond in PS has a lower bond dissociation energy which requires less energy for radical production compared to H2O2 [118]. SO4−∙ can be generated by cleaving the peroxide bond in PMS and PS using energy-based activations through heat, UV irradiation, ultrasound and plasma (Eqs. 24 and 25) [35, 117, 119]. Activation of PMS and PS in redox reactions can be achieved using transition metals in a Fenton-like mechanism (see the section on "Fenton oxidation"), O-functionalized activated carbon, electrochemical processes, radiolysis (e− formation in water, Eq. 46) and ozone (Eqs. 26–33) [35, 117, 120, 121]. Unexpectedly, phosphate-buffered saline (PBS), a commonly used pH-buffer, was also shown to activate PMS, and the PBS/PMS system effectively degraded model water pollutants Acid Orange 7, rhodamine b and 2,4,6-trichlorophenol [122].
$${\text{HSO}}_{5}^{ - } + {\text{energy}}\;{\text{input}} \to {\text{SO}}_{4}^{ - } \cdot + \cdot {\text{OH}}$$
$${\text{S}}_{2} {\text{O}}_{8}^{2 - } + {\text{energy input}} \to 2{\text{SO}}_{4}^{ - } \cdot$$
$${\text{HSO}}_{5}^{ - } + {\text{e}}^{ - } \to {\text{SO}}_{4}^{ - } \cdot + {\text{OH}}^{ - } \;({\text{or}}\;{\text{SO}}_{4}^{2 - } + \cdot {\text{OH}})$$
$${\text{S}}_{2} {\text{O}}_{8}^{2 - } + {\text{e}}^{ - } \to {\text{SO}}_{4}^{ - } \cdot + {\text{SO}}_{4}^{2 - }$$
$${\text{SO}}_{4}^{2 - } \to {\text{SO}}_{4}^{ - } \cdot + {\text{e}}^{ - }$$
$${\text{AC surface}} - {\text{OOH}} + {\text{S}}_{2} {\text{O}}_{8}^{2 - } \to {\text{SO}}_{4}^{ - } \cdot + {\text{AC surface}} - {\text{OO}} \cdot + {\text{HSO}}_{4}^{ - }$$
$${\text{AC surface}} - {\text{OH}} + {\text{S}}_{2} {\text{O}}_{8}^{2 - } \to {\text{SO}}_{4}^{ - } \cdot + {\text{AC surface}} - {\text{O}} \cdot + {\text{HSO}}_{4}^{ - }$$
$${\text{SO}}_{5}^{2 - } + {\text{O}}_{3} \to {\text{SO}}_{5}^{ - } \cdot + {\text{O}}_{3}^{ - } \cdot \left( {{\text{or SO}}_{4}^{2 - } + 2{\text{O}}_{2} } \right)$$
$${\text{SO}}_{5}^{ - } \cdot + {\text{O}}_{3} \to {\text{SO}}_{4}^{ - } \cdot + 2{\text{O}}_{2}$$
$$2{\text{SO}}_{5}^{ - } \cdot \to 2{\text{SO}}_{4}^{ - } \cdot + {\text{O}}_{2} \left( {{\text{or}}\;{\text{S}}_{2} {\text{O}}_{8}^{2 - } + {\text{O}}_{2} } \right)$$
$${\text{O}}_{3}^{ - } \cdot + {\text{H}}_{2} {\text{O}} \to \cdot {\text{OH}} + {\text{OH}}^{ - } + {\text{O}}_{2}$$
$${\text{SO}}_{4}^{ - } \cdot + {\text{H}}_{2} {\text{O}} \to {\text{SO}}_{4}^{2 - } + \cdot {\text{OH}} + {\text{H}}^{ + }$$
$${\text{SO}}_{4}^{ - } \cdot + {\text{OH}}^{ - } \to {\text{SO}}_{4}^{2 - } + \cdot {\text{OH}}$$
$${\text{SO}}_{4}^{ - } \cdot + \cdot {\text{OH}} \to {\text{HSO}}_{5}^{ - }$$
$${\text{HSO}}_{4}^{ - } + \cdot {\text{OH}} \to {\text{SO}}_{4}^{ - } \cdot + {\text{H}}_{2} {\text{O}}$$
$${\text{H}}_{2} {\text{SO}}_{4} + \cdot {\text{OH}} \to {\text{SO}}_{4}^{ - } \cdot + {\text{H}}_{3} {\text{O}}^{ + }$$
SR-AOPs are a worthy alternative to ∙OH-based AOPs due to the simultaneous generation of ∙OH as a secondary radical when PMS is used as precursor (Eqs. 24 and 26), in the presence of water (Eq. 35), under alkaline conditions (Eq. 36) or when PMS is activated using O3 (Eqs. 31 to 34). Consequently, ∙OH is the primary reactive species at alkaline pH, whereas SO4−∙ is the dominant radical at acidic pH; at neutral pH, both radicals contribute equally to pollutant oxidation [117, 121]. The reaction of both radicals forms PMS (Eq. 37), which in turn can again be activated to generate SO4−∙ and ∙OH [117]. In addition, the reaction of ∙OH with HSO4− or H2SO4 can also produce SO4−∙ (Eqs. 38 and 39) [120].
SO4−∙ generally reacts with organic pollutants via three distinct routes: (i) hydrogen abstraction from C–H bonds, (ii) addition to unsaturated bonds and (iii) electron transfer from carboxylates, amines and aromatic compounds [35]. The third mechanism promotes decarboxylation, which, besides SR-AOPs, has only been reported for UV-based degradation of cyanotoxins [56, 83, 84, 121, 123]. PMS and PS also function as oxidants themselves, but SO4−∙ is usually more effective and faster given its substantially higher redox potential (Table 1). PS is usually preferred over PMS due to its higher stability, water solubility and photosensitivity, and because it is more frequently used in standard methods and commercial instruments [35].
SR-AOPs have been studied for the removal of MCs, CYN and ANTX, mainly focusing on UV and catalyst activation [36, 40, 41, 124]. Even without activation, high removal (≥ 90%) of MC-LR and CYN was achieved after > 500 min of treatment with PS and > 100 min with PMS, while ANTX was almost unaffected by PMS without activation. However, when UV radiation was added, degradation of all three toxins became more effective [40, 41, 124]. MC-LR and CYN degradation efficacy followed the order UV/PS > UV/PMS > UV/H2O2 [40, 41]. MC-LR was degraded faster than CYN because its structure provides more moieties prone to radical attack [40, 41]. Studies with different MC variants showed that under UV alone, degradation increased in the order MC-LR < -RR < -YR < -LA, while the differences were only small in the presence of PMS, PS or H2O2 [41]. Similar to MC-LR and CYN, degradation efficacies for MC-LA, -RR and -YR followed UV/PS > UV/PMS > UV/H2O2 > UV-only [41]. As in other photolytic AOPs, the wavelength influenced the degradation efficacy in UV-activated SR-AOPs; for example, ANTX removal increased when the wavelength was decreased from 290 to 260 nm [124]. A follow-up experiment with radical quenchers revealed that under the experimental conditions used, ANTX degradation was dominated by SO4−∙ [124].
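Such treatment times are commonly compared through pseudo-first-order kinetics. The minimal sketch below uses purely hypothetical rate constants (not values from the cited studies) to show how an observed rate constant translates into the time needed for 90% removal.

```python
import math

def time_for_removal(k_per_min: float, removal_fraction: float) -> float:
    """Treatment time (min) to reach a target removal, assuming pseudo-first-order
    decay C(t) = C0 * exp(-k*t), i.e. t = ln(C0/C) / k."""
    return math.log(1.0 / (1.0 - removal_fraction)) / k_per_min

# Hypothetical pseudo-first-order rate constants (min^-1), for illustration only:
for label, k in [("non-activated PS", 0.004), ("non-activated PMS", 0.02), ("UV/PS", 0.2)]:
    print(f"{label:17s}: t(90% removal) ~ {time_for_removal(k, 0.90):5.0f} min")
# Small rate constants translate into several hundred minutes for 90% removal,
# consistent with the long treatment times for non-activated PS and PMS noted above,
# while activation shortens the required time by more than an order of magnitude.
```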
In contrast to UV activation, MC-LR degradation efficacy was in a different order: Co2+/PMS (pH = 5.8) > Fe2+/H2O2 (pH = 3) ≫ Ag+/PS (pH = 5.8) because PMS accepts e− more easily than H2O2 and PS [36]. Moreover, activation of PS requires substantially higher transition metal concentrations [36, 125].
MC-LR degradation was further improved by addition of TiO2 to the UV/PMS or UV/PS system due to photolytic and photocatalytic (see the section on "Photocatalysis") production of SO4−∙ and ∙OH [126], and addition of transition metals can promote (photo-) Fenton-like mechanisms. For instance, Cu2+ and Fe2+ improved CYN degradation by UV/PMS, even in the presence of NOM [40]. Similar results were observed for ANTX, and again, UV/PMS/Cu2+ yielded better results compared to UV/PMS/Fe2+ [124]. Besides type and properties of the activation mechanism, an increase in the oxidant concentration seems to generally increase the degradation rate due to the formation of more reactive species [36, 40, 124].
As for other AOPs, water quality parameters may considerably influence cyanotoxin degradation by SR-AOPs. Reaction rate constants of SO4−∙ with NOM were shown to be two orders of magnitude lower than those of ∙OH with NOM, but quenching effects can still occur and depend particularly on NOM composition and concentration [40, 125]. For ANTX degradation by UV/PMS, ≤ 2 mg L−1 of NOM was shown to improve toxin removal due to photosensitization, but at higher NOM concentrations, radical scavenging outweighed this photosensitizing effect and inhibited ANTX degradation [124]. Interestingly, humic acid and quinones, which are active functional humic acid moieties, were shown to activate PS and effectively degraded, for example, PCB28 [127]. Hence, NOM may not only act as a photosensitizer, but certain of its moieties may also react with PS to produce SO4−∙.
Besides NOM, alkalinity can also act as a radical scavenger, especially since carbonate and bicarbonate concentrations (mg L−1) highly exceed toxin concentrations (μg L−1) in surface water [40, 124]. The pH of the treated water is another important factor, as it determines the speciation of toxins, catalysts and oxidants. In UV/SR-AOPs under acidic conditions around pH 3, MC-LR degradation rate constants substantially increased compared to unbuffered solutions at pH 4.8 and pH 6.4 for PMS and PS, respectively [36]. ANTX removal by UV/PMS, on the other hand, was most effective at pH 6.4 and decreased under more acidic (pH 3.0) and alkaline (pH 8.0) conditions [124]. However, when the Co2+/PMS system was acidified, MC-LR removal decreased from 100% after 5 min at pH = 5.8 to 27% after 60 min at pH = 3 and became substantially less effective than the Fenton reagent at pH = 3 [36]. This decrease in MC-LR degradation is caused by inhibition of PMS decomposition rather than by reduced reactivity of SO4−∙ at acidic pH.
Electrochemical oxidation
In an electrolytic process, pollutant oxidation can occur directly, via electron transfer at the anode surface, and indirectly, via electrochemically formed reactive species including ∙OH, H2O2 and O3 (Eqs. 40–42), among others depending on the electrolyte composition; this is why the process is also referred to as an electrochemical AOP (EAOP). Based on the setup of the treatment cell, EAOPs can be grouped into four different classes. The simplest is direct and indirect anodic oxidation (AO) of a pollutant. At neutral or acidic pH and in the presence of air or O2, H2O2 can additionally be generated by cathodic reduction (Eq. 43, AO-H2O2). To further enhance the treatment, Fe2+ can be added to yield ∙OH (electro-Fenton, EF). In EF, continuous cathodic electrogeneration of H2O2 and cathodic regeneration of Fe3+ to Fe2+ (Eq. 44) perpetually produce Fenton's reagent if an undivided cell is used (see the section on "Fenton oxidation"). If EF is exposed to light (photoelectro-Fenton, PEF), the Fenton reaction itself can be improved by photolytic H2O2 cleavage to ∙OH (see the sections on "Photolysis" and "Fenton oxidation"). In EAOPs, non-active anodes with a high O2 overpotential (potential for O2 evolution), such as boron-doped diamond (BDD) anodes, are usually employed. The higher the O2 overpotential, the weaker the physisorption of ∙OH to the anode surface, which, in turn, leads to higher ∙OH availability in the solution [90].
$${\text{M}} + {\text{H}}_{2} {\text{O}} \to {\text{M}}\left( { \cdot {\text{OH}}} \right) + {\text{H}}^{ + } + {\text{e}}^{ - }$$
$$2{\text{M}}\left( { \cdot {\text{OH}}} \right) \to 2{\text{M}} + {\text{H}}_{2} {\text{O}}_{2}$$
$$3{\text{H}}_{2} {\text{O}} \to {\text{O}}_{3} + 6{\text{H}}^{ + } + 6{\text{e}}^{ - }$$
$${\text{O}}_{{2\left( {\text{g}} \right)}} + 2{\text{H}}^{ + } + 2{\text{e}}^{ - } \to {\text{H}}_{2} {\text{O}}_{2}$$
$${\text{Fe}}^{3 + } + {\text{e}}^{ - } \to {\text{Fe}}^{2 + }$$
Here, M(∙OH) denotes ∙OH physisorbed on the anode surface M. Electrochemical oxidation of MC, NOD and CYN has been investigated with different electrodes and treatment parameters [120, 128–131]. One of the most influential factors in terms of degradation effectiveness and operating costs is the electrode material. BDD anodes are often used due to their high O2 overpotential and, regardless of the electrolyte used, achieved higher MC-LR degradation than mixed metal oxide electrodes such as IrO2–Ta2O5/Ti [130]. Even when coated onto Ti as carrier material, MC-LR removal followed the order Ti/BDD > Ti/IrO2 > Ti/Pt > Ti/SnO2 under otherwise identical conditions [129]. However, BDD electrodes are costly, and cheaper alternatives such as electrodes synthesized from nanosized TiO2 coated onto a graphite carrier were also efficient for MC-LR degradation [128].
Besides the electrode material, the applied current affects the degradation efficacy, with higher current densities resulting in higher toxin removal [120, 129]. The efficacy of an EAOP also depends on the electrolyte composition and its electric conductivity. For instance, MC-LR degradation in filtered lake and tap water improved after increasing the conductivity by adding Na2CO3 [128, 132]. The electrolyte composition also dictates which reactive species are produced, and thus EAOPs can be tailored toward specific requirements and pollutants. Although the order of cyanotoxin removal for electrolyte salts is usually Cl− > SO42− > NO3− > CO32− [129, 133, 134], SO42− is often suggested as the best choice because of the lower production of toxic disinfection byproducts compared to halogen-based electrolytes and the avoidance of eutrophication caused by N- and P-based electrolytes [129, 134]. The risk of halogenated byproducts can be reduced at low salt concentrations and higher current densities, but this would, in turn, increase the electric energy demand [133].
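The electric energy demand of an electrolytic treatment scales directly with cell voltage, current and treatment time per treated volume. The minimal sketch below illustrates this with hypothetical lab-scale values (not taken from the cited studies), making the trade-off between current density and energy demand explicit.

```python
def specific_energy_kwh_per_m3(cell_voltage_V: float, current_A: float,
                               time_h: float, volume_L: float) -> float:
    """Electric energy consumed per treated volume (kWh m^-3): E = U * I * t / V,
    with the treated volume converted from litres to cubic metres."""
    energy_kwh = cell_voltage_V * current_A * time_h / 1000.0
    return energy_kwh / (volume_L / 1000.0)

# Hypothetical lab-scale cell: 5 V, 0.5 A (roughly 20 mA cm^-2 on a 25 cm^2 anode),
# 1 h of treatment of a 0.5 L sample.
print(specific_energy_kwh_per_m3(5.0, 0.5, 1.0, 0.5))   # -> 5.0 kWh m^-3
# Doubling the current density doubles this figure for the same treatment time,
# illustrating the energy trade-off noted above.
```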
The influence of water quality parameters on the degradation of cyanotoxins by EAOPs has only scarcely been investigated, but studies showed scavenging of the produced reactive species by NOM [130]. For the effect of pH on MC-LR degradation by EAOPs, contradictory observations were reported: while Zhang et al. [135] found no significant effect on MC-LR degradation rate constants across a pH range from 5 to 9, Zhou et al. [130] observed higher MC-LR degradation at lower pH. The two studies used different electrolytes (NaNO3 and Na2SO4, respectively), which may explain the different results. In a photoelectrocatalytic treatment of MC-LR using Ag/AgCl/TiO2 nanotube electrodes, the degradation appeared to be pH-dependent due to more effective adsorption to TiO2 at acidic pH and a lower valence band potential of the photoelectrode at alkaline pH, which decreased MC-LR oxidation [89].
Sonolysis
In sonolysis, ultrasound is used to form liquid-free cavities, i.e., bubbles, in a liquid medium through rapid pressure changes created by an oscillating ultrasonic wave. When these bubbles collapse, high energy is released in the form of average bubble temperatures of 4200 K and pressures of 500 atm [10]. Volatile and nonpolar pollutants can be degraded within the cavitation bubble by direct pyrolysis, thermolysis, hydrolysis or hydroxylation with ∙OH formed from the gas-phase thermolysis of water (Eq. 45) [10]. Besides acoustically, i.e., by ultrasound, cavities can also be formed hydrodynamically, when a liquid forced to flow under reduced pressure experiences a local drop of the static pressure below a critical value. This can be achieved, e.g., by a local increase of the flow rate, flow line curvature or channel constrictions [136].
$${\text{H}}_{2} {\text{O}}\mathop \to \limits^{)))} {\text{H}} \cdot + \cdot {\text{OH}}$$
Non-volatile compounds and compounds with an amphiphilic or less polar character are degraded in the interfacial boundary layer between the bubble and the bulk, where temperatures of up to 2000 K and high ∙OH concentrations are present. Non-volatile and polar compounds are degraded in the bulk aqueous phase by ∙OH migrating away from the cavitation bubble or by H2O2 formed in the system [10]. The degradation effectiveness thus depends on the pollutant's physico-chemical properties and preferred chemical environment, i.e., polar/nonpolar or volatile/non-volatile [137]. Besides pyrolysis, thermolysis and chemical reactions, shockwaves and high shear forces are released, which can be utilized, e.g., to disrupt and lyse cyanobacterial cells [10]. Depending on the pollutant and the desired processes, the ultrasonic frequency can be adjusted to favor formation and reactions of ∙OH (200–600 kHz) or higher temperatures and pressures (< 200 kHz) [10].
So far, studies have focused solely on the sonolytic treatment of MCs [137–140], and the highest degradation was observed in the approximate frequency range of 150–410 kHz. Both lower and higher frequencies resulted in less effective degradation due to lower ∙OH concentrations [139, 140]. An increase in the applied power yielded higher MC degradation, but the degradation rate was substantially faster only in the first few minutes of the treatment and later became indistinguishable when comparing 30, 60 and 90 W [140]. In addition to the power (in W) or intensity (in W cm−2), the distribution of the ultrasound within the treated area affects the treatment efficiency [10]. A study with different radical scavengers showed that about 39% of the degradation was achieved by ∙OH in the bulk solution and about 35% by ∙OH at the bubble interface [138]. Due to its non-volatile and polar character, MC-LR is not expected to reside inside the cavity, but its nonpolar Adda side chain most likely resides in the bubble interfacial region [138]. MC-LR degradation can be improved under acidic conditions, which increase the hydrophobicity of the Adda moiety [137]. Some MC degradation can also be attributed to hydrolysis and pyrolysis in the interfacial region, while shear forces are unlikely to cause mechanical destruction of the toxin [138]. Although H2O2 is produced in sonolytic processes and can act as a quencher reducing MC-LR degradation, this can be overcome simply by adding Fe2+, which eliminates H2O2 and further increases ∙OH formation (see the section on "Fenton oxidation") [137]. Interestingly, NOM, e.g., from cyanobacterial cells, appeared to have only a small effect on the treatment effectiveness [138].
Radiolysis
Radiolysis uses ionizing radiation with energies of approximately 100 eV, which is substantially higher than the energies usually required for ionization of organic compounds (< 15 eV) and cleavage of chemical bonds (1–5 eV) [141]. Commonly used radiation sources are radionuclides and electrostatic accelerators emitting γ-radiation and electron beams, respectively [141, 142]. Radiolysis requires specialized instrumentation and expertise, which are rare in water treatment facilities. However, because it produces a range of reactive species in water and no precursors or other additives are needed, it can be useful for in-depth studies of oxidation mechanisms. The radiolytic decomposition of water is shown in the following equation (radiation yields G (in μmol J−1) are given in parentheses) [142, 143]:
$${\text{H}}_{2} {\text{O}}\mathop \to \limits^{{\text{rad}}} {{\text{e}}}_{{\text{aq}}}^{ - } \left( {0.27} \right) + \cdot {{\text{OH}}}\left( {0.28} \right) + {{\text{H}}} \cdot \left( {0.06} \right) + {{\text{H}}}_{2} \left( {0.05} \right) + {{\text{H}}}_{2} {{\text{O}}}_{2} \left( {0.07} \right) + {{\text{H}}}_{3} {{\text{O}}}^{ + } \left( {0.27} \right) + {{\text{HO}}}_{2} \cdot \left( {0.003} \right)$$
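To illustrate how these radiation yields translate into species concentrations, the minimal sketch below multiplies the G values from the equation above by an assumed absorbed dose (1 kGy is a purely hypothetical example); for dilute aqueous solutions, 1 Gy corresponds to 1 J kg−1 and 1 kg of water to roughly 1 L, while recombination and scavenging are ignored.

```python
# Radiation yields G (umol per J absorbed) from the water radiolysis equation above.
G_VALUES = {
    "e_aq-": 0.27, "OH": 0.28, "H": 0.06, "H2": 0.05,
    "H2O2": 0.07, "H3O+": 0.27, "HO2": 0.003,
}

def radiolytic_yield_umol_per_L(g_umol_per_J: float, dose_Gy: float) -> float:
    """Species produced (umol L^-1) in dilute aqueous solution:
    1 Gy = 1 J kg^-1 and 1 kg of water ~ 1 L, so (umol/J) * (J/kg) ~ umol/L."""
    return g_umol_per_J * dose_Gy

dose = 1000.0  # hypothetical absorbed dose of 1 kGy
for species, g in G_VALUES.items():
    print(f"{species:6s}: {radiolytic_yield_umol_per_L(g, dose):7.1f} umol/L")
# e.g. ~280 umol/L of .OH at 1 kGy, before recombination and scavenging
```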
So far, only studies with MCs, CYN and ANTX have used radiolysis [19, 143–145]. The rate constants for ∙OH attack on specific functional moieties of MC-LR were determined to be in the following order: benzene ring in the Adda moiety (10¹⁰ M−1 s−1) ≥ diene in the Adda moiety (10¹⁰ to 10⁹ M−1 s−1) > aliphatic hydrogens (10⁸ M−1 s−1) [143]. Although hydrogen abstraction is the slowest reaction pathway, it is still assumed to be significant due to the large number (> 50) of potential reaction sites [143]. The overall rate constant for the reaction of ∙OH with MC-LR calculated from literature values for appropriate surrogates (mainly amino acids), 2.1 × 10¹⁰ M−1 s−1 [143], is very close to the experimentally derived rate constant of 2.3 × 10¹⁰ M−1 s−1; in this model, the ∙OH attack at the Adda group accounted for almost 70% of the overall reactivity [143]. For CYN, an overall rate constant of 5.1 × 10⁹ M−1 s−1 was measured, with the uracil side chain being the main moiety susceptible to ∙OH attack (84%), whereas the attack at the guanidine group is less important [61]. In another study, the rate constants for the reaction of ∙OH with MC-LR and CYN determined by radiolysis were within the same order of magnitude, 1.1 × 10¹⁰ M−1 s−1 and 5.5 × 10⁹ M−1 s−1, respectively [19]. These minor differences are most likely caused by differences in the experimental conditions and methodologies used in the different studies. The order of rate constants was found to be MC-LR > CYN > ANTX, which corresponds to the toxins' molecular size and the number of H-atoms that can be abstracted by ∙OH [19].
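The group-contribution approach described above can be sketched in a few lines: site-specific rate constants are summed to an overall second-order rate constant, and the share of each moiety is its fractional contribution. The values below are illustrative placeholders within the orders of magnitude quoted above (chosen so that the total matches the reported overall value), not the surrogate data actually used in [143].

```python
# Illustrative site-specific rate constants (M^-1 s^-1) for .OH attack on MC-LR.
# These are placeholder values for demonstration, not the data of [143].
site_rate_constants = {
    "Adda benzene ring": 8.0e9,
    "Adda diene": 6.0e9,
    "aliphatic H-abstraction (all sites)": 5.0e9,   # many slow sites add up
    "other amino-acid moieties": 2.0e9,
}

k_overall = sum(site_rate_constants.values())
print(f"estimated overall k(.OH + MC-LR) = {k_overall:.1e} M^-1 s^-1")
for site, k in site_rate_constants.items():
    print(f"  {site:40s} {k / k_overall:5.1%}")
# With these placeholders, the Adda contributions dominate, in line with the
# ~70% share reported for the Adda group in [143].
```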
The efficiency of radiolytic treatment of MCs is dose-dependent and can be improved by adding Na2CO3 or H2O2, which leads to the formation of HO2∙ and ∙OH, respectively [144]. In contrast, nitrite and nitrate were shown to decrease the removal due to scavenging of ∙OH [144]. Furthermore, since ∙OH is a non-selective oxidant, radiolytic treatment can also be impacted by water quality parameters such as NOM [19, 145].
Comparison of AOPs
Degradation efficiency of AOPs
When comparing different AOPs, especially considering their application in large-scale water treatment, degradation efficiency is among the most crucial parameters. It relates the required energy, oxidant or catalyst dose to the efficacy of the treatment. For the energy efficiency of AOPs, the electrical energy per order (EEO) is often chosen as a figure of merit. EEO is defined as the electrical energy in kWh required to reduce the concentration of a pollutant by one order of magnitude, i.e., by 90%, in 1 m3 of water [146]. In the case of oxidants or catalysts, the "stored electric energy" of a compound can be calculated based on the prices for electric energy (price per kWh) and for the respective compound (price per kg) [147].
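As an illustration of this figure of merit, the short sketch below computes EEO for a batch system using the common definition EEO = P·t / (V·log10(C0/C)); the numerical values are purely hypothetical and not taken from the studies cited here.

```python
import math

def eeo_batch(power_kw: float, time_h: float, volume_m3: float,
              c0: float, c: float) -> float:
    """Electrical energy per order (EEO, kWh m^-3) for a batch system:
    EEO = P * t / (V * log10(C0/C)); 90% removal gives log10(C0/C) = 1."""
    return power_kw * time_h / (volume_m3 * math.log10(c0 / c))

# Hypothetical example: a 0.5 kW UV lamp treating 10 L (0.01 m^3) of water
# and achieving 90% toxin removal after 15 min (0.25 h).
print(eeo_batch(power_kw=0.5, time_h=0.25, volume_m3=0.01, c0=10.0, c=1.0))
# -> 12.5 kWh m^-3, i.e. the energy needed per order of magnitude removed
```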
Based on a comprehensive review of AOPs for water treatment and data on their energy efficiency reported in the peer-reviewed literature, Miklos et al. [146] grouped established and emerging AOPs according to their EEO values (Fig. 3). According to this review, most reported EEO values did not include auxiliary oxidants or catalysts in the calculations. The first group comprises AOPs with median EEO values of < 1 kWh m−3, which represents a realistic range for full-scale application. The second group includes AOPs with median EEO values of 1–100 kWh m−3, which is energy intensive, but for specific problems these AOPs may provide an attractive solution, possibly also for large-scale applications. The last group contains AOPs with median EEO values of > 100 kWh m−3, which are currently not considered energy efficient [146]. Nevertheless, future developments may lead to optimization and reduction of energy demands and related costs.
Fig. 3 Grouping of established and emerging AOPs according to their median EEO values (based on the review by Miklos et al. [146])
EEO values for cyanotoxin removal by AOPs have only rarely been reported. For UV/H2O2 treatment of MC-LR and CYN, EEO values were 4.5 × 10−3–6.1 × 10−3 and 1.6 × 10−3 kWh m−3, respectively [40, 148]. For UV/PMS and UV/PS treatment, EEO values were estimated at approximately 10−4 and 10−5 kWh m−3, respectively, for CYN, and 0.7 and 0.2 kWh m−3, respectively, for MC-LR [40, 126]. For electrochemical oxidation of MC-LR, the EEO ranged from 48 to 67 kWh m−3 depending on the electrode material [129]. These EEO values seem to agree with the results of Miklos et al. [146]. However, for UV/TiO2 treatment, low EEO values of approximately 0.08 to 0.14 kWh m−3 for MC-LR and 0.015 to 0.03 kWh m−3 for MC-LR, -LA and -RR were reported in two independent studies [126, 149], which is at least 2–3 orders of magnitude lower than the results of Miklos et al. [146]. Although energy-efficient UV-LEDs were used in the second study [149], the light source itself seems not to have a substantial effect, since energy-demanding UV xenon lamps were used in the study of Antoniou et al. [126].
Interestingly, EEO values were inversely correlated with process capacity, i.e., they decreased from laboratory- over pilot- to full-scale application. This indicates that up-scaling apparently improves energy efficiency and that demands derived from laboratory-scale experiments may not translate correctly to full-scale processes. Furthermore, water quality (pure, drinking, ground- and wastewater) did not affect EEO values significantly, even when relevant parameters such as NOM, UV transmittance and turbidity were considered [146].
Potential of disinfection byproduct formation
An important aspect is the formation of toxic disinfection byproducts (DBPs) during AOP treatment, where halogenated organic and inorganic compounds such as trihalomethanes, haloacetic acids, haloacetonitriles, chlorates, bromates and others are of special concern [146]. Because of their toxicity, the WHO has recommended guideline values for several DBPs such as chloroform (300 μg L−1), bromoform (100 μg L−1), perchlorate (70 μg L−1) and bromate (10 μg L−1) [150–152]. DBP formation depends on the employed AOP as well as on the water matrix, i.e., the presence of nitrogen, organic matter and halogens [146].
Formation of bromate is relevant for O3 and O3-based AOPs, where up to 50% of bromide (at concentrations > 100 μg L−1) can be converted to bromate [146], and ∙OH may promote bromate formation by about 30–70% [153]. Attenuation is possible by decreasing pH, O3 or bromide concentration and in the presence of H2O2 [153, 154]. Chlorate formation by O3 and O3-based AOPs may only be relevant if the treatment contains a pre-chlorination step [146].
In most ∙OH-dominated AOPs, bromate formation can usually be neglected in the presence of abundant organic matter or H2O2 due to radical quenching [146, 154]. Chlorate and perchlorate are only produced under specific conditions when reactive chlorine species are abundant, which may further react with organic matter to form halogenated DBPs. Generally, DBP formation by ∙OH is considered noncritical, with the exception of some approaches such as high-density ∙OH generation at electrode surfaces in EAOPs [146].
For SR-AOPs, bromate formation is effectively inhibited by small concentrations of organic matter [146, 154] but reactions of SO4−∙ with chloride may produce Cl∙ and subsequently chlorate at pH < 5 [154].
UV irradiation does not produce inorganic DBPs but can form nitrite due to photolysis of nitrate which may subsequently lead to the formation of nitrated aromatic compounds. In UV/chlorine processes, organic halides can be formed at alkaline pH and chloride concentrations of > 1 g L−1 [146].
Practical, environmental and economic considerations
Besides treatment efficiency, other relevant factors may impact the choice of AOP for a specific situation. Table 2 summarizes advantages, disadvantages and potential ways to overcome certain drawbacks of discussed AOPs considering mainly practical, environmental and economic aspects.
Table 2 Advantages, disadvantages and measures to overcome drawbacks [in brackets] of different AOPs considering practical, environmental and economic aspects
Cyanobacterial blooms and toxins evidently pose a serious risk to drinking water and human health. Although cells and intracellular toxins can be removed effectively by conventional treatment, dissolved cyanotoxins require more advanced treatment such as AOPs, which rely on reactive species including ∙OH and SO4−∙ or on other oxidation mechanisms.
However, treatment efficacy is strongly impacted by water quality parameters, where, for example, NOM, alkalinity and pH can impact reactive species stability and abundance, while pH also determines toxin speciation and susceptibility to degradation. Furthermore, NOM, chloride and bromide may function as precursors for toxic DBPs. Hence, AOPs, especially their process parameters, need to be optimized for individual situations also considering economic aspects such as operational and maintenance costs.
So far, most studies have focused on the removal of single toxins in either pure water or "simulated surface water". More research is thus needed on the degradation of environmentally relevant cyanotoxin mixtures, which are likely to co-occur in the environment, in actual surface water or in water withdrawn from a drinking water treatment process prior to the oxidation step. Further, various degradation products have been tentatively identified in different studies, but the potential residual toxicity of the treated cyanotoxin solution is rarely examined. Adequate toxicological assays are recommended to ensure that toxins are not only degraded but actually detoxified, especially if degradation products are not analyzed or if DBPs are likely to be produced during the treatment process.
It was also found that the efficiency as well as the estimated operational and maintenance costs of an AOP at laboratory scale do not easily translate to full-scale treatment. Hence, there is a need for more research on pilot- and full-scale applications to promote AOPs and provide essential information to drinking water treatment plant operators. For instance, photocatalysis for water treatment has been studied for decades, but there are still few, if any, full-scale drinking water treatment applications.
Finally, since cyanotoxins will most likely not be the only challenge to a drinking water treatment facility, a combination of different treatment methods, including different AOPs, in a multi-barrier approach needs to be considered to produce harmless, high-quality drinking water.
ANTX: Anatoxin-a
AO: Anodic oxidation
AOP: Advanced oxidation process
BDD: Boron-doped diamond
BMAA: β-N-Methylamino-l-alanine
CYN: Cylindrospermopsin
DBP: Disinfection byproduct
EAOP: Electrochemical advanced oxidation process
EEO: Electrical energy per order
EF: Electro-Fenton
GTX: Gonyautoxin
GV: Guideline value
LED: Light-emitting diode
MC: Microcystin
NOD: Nodularin
NOM: Natural organic matter
PBS: Phosphate-buffered saline
PCB28: 2,4,4′-Trichlorobiphenyl
PEF: Photoelectro-Fenton
PET:
pKa: Negative decadic logarithm of the acid dissociation constant
PMS: Peroxymonosulfate
PS: Peroxydisulfate, persulfate
SHE: Standard hydrogen electrode
SO4−∙: Sulfate radical
STX: Saxitoxin
t1/2: Half-life
TDI: Total daily intake
US EPA: United States Environmental Protection Agency
UV: Ultraviolet
Skulberg OM, Carmichael WW, Codd GA, Skulberg R (1993) Taxonomy of toxic cyanophyceae (Cyanobacteria). In: Falconer IR (ed) Algal toxins in seafood and drinking water. Academic Press, London, pp 145–164
Newcombe G (2009) International guidance manual for the management of toxic cyanobacteria. Global Water Research Coalition
Durai P, Batool M, Choi S (2015) Structure and effects of cyanobacterial lipopolysaccharides. Mar Drugs 13:4217–4230. https://doi.org/10.3390/md13074217
Mantzouki E, Visser PM, Bormans M, Ibelings BW (2016) Understanding the key ecological traits of cyanobacteria as a basis for their management and control in changing lakes. Aquat Ecol 50:333–350. https://doi.org/10.1007/s10452-015-9526-3
Codd GA, Meriluoto J, Metcalf JS (2016) Introduction. In: Meriluoto J, Spoof L, Codd GA (eds) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Hoboken, pp 1–8
Buratti FM, Manganelli M, Vichi S et al (2017) Cyanotoxins: producing organisms, occurrence, toxicity, mechanism of action and human health toxicological risk evaluation. Arch Toxicol 91:1049–1130. https://doi.org/10.1007/s00204-016-1913-6
Machado J, Campos A, Vasconcelos V, Freitas M (2017) Effects of microcystin-LR and cylindrospermopsin on plant–soil systems: a review of their relevance for agricultural plant quality and public health. Environ Res 153:191–204. https://doi.org/10.1016/j.envres.2016.09.015
Ibelings BW, Bormans M, Fastner J, Visser PM (2016) CYANOCOST special issue on cyanobacterial blooms: synopsis—a critical review of the management options for their prevention, control and mitigation. Aquat Ecol 50:595–605. https://doi.org/10.1007/s10452-016-9596-x
Westrick JA, Szlag DC, Southwell BJ, Sinclair J (2010) A review of cyanobacteria and cyanotoxins removal/inactivation in drinking water treatment. Anal Bioanal Chem 397:1705–1714. https://doi.org/10.1007/s00216-010-3709-5
He X, Liu Y-L, Conklin A et al (2016) Toxic cyanobacteria and drinking water: impacts, detection, and treatment. Harmful Algae 54:174–193. https://doi.org/10.1016/j.hal.2016.01.001
Chorus I (2005) Water safety plans. In: Huisman J, Matthijs HCP, Visser PM (eds) Harmful cyanobacteria. Springer, Dordrecht, pp 201–227
Ho L, Sawade E, Newcombe G (2012) Biological treatment options for cyanobacteria metabolite removal—a review. Water Res 46:1536–1548. https://doi.org/10.1016/j.watres.2011.11.018
Lawton LA, Robertson PKJ (1999) Physico-chemical treatment methods for the removal of microcystins (cyanobacterial hepatotoxins) from potable waters. Chem Soc Rev 28:217–224. https://doi.org/10.1039/A805416I
Rodríguez E, Onstad GD, Kull TPJ et al (2007) Oxidative elimination of cyanotoxins: comparison of ozone, chlorine, chlorine dioxide and permanganate. Water Res 41:3381–3393. https://doi.org/10.1016/J.WATRES.2007.03.033
Meriluoto J, Spoof L, Codd GA (2016) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Chichester
Catherine A, Bernard C, Spoof L, Bruno M (2016) Microcystins and nodularins. In: Meriluoto J, Spoof L, Codd GA (eds) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Chichester, pp 107–126
Ministry of Health (2017) Guidelines for drinking-water quality management for New Zealand, 3rd edn. Ministry of Health, Wellington
Kokociński M, Cameán AM, Carmeli S et al (2016) Cylindrospermopsin and congeners. In: Meriluoto J, Spoof L, Codd GA (eds) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Hoboken, pp 127–137
Onstad GD, Strauch S, Meriluoto J et al (2007) Selective oxidation of key functional groups in cyanotoxins during drinking water ozonation. Environ Sci Technol 41:4397–4404. https://doi.org/10.1021/es0625327
Bruno M, Ploux O, Metcalf JS et al (2016) Anatoxin-a, Homoanatoxin-a, and natural analogues. In: Meriluoto J, Spoof L, Codd GA (eds) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Hoboken, pp 138–147
Ballot A, Bernard C, Fastner J (2016) Saxitoxin and analogues. In: Meriluoto J, Spoof L, Codd GA (eds) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Hoboken, pp 148–154
Ploux O, Combes A, Eriksson J, Metcalf JS (2016) β-N-Methylamino-l-alanine and (S)-2,4-diaminobutyric acid. In: Meriluoto J, Spoof L, Codd GA (eds) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Hoboken, pp 160–164
Carmichael WW (1992) Cyanobacteria secondary metabolites—the cyanotoxins. J Appl Bacteriol 72:445–459. https://doi.org/10.1111/j.1365-2672.1992.tb01858.x
Griffiths DJ, Saker ML (2003) The Palm Island mystery disease 20 years on: a review of research on the cyanotoxin cylindrospermopsin. Environ Toxicol 18:78–93. https://doi.org/10.1002/tox.10103
Carmichael WW, Azevedo SM, An JS et al (2001) Human fatalities from cyanobacteria: chemical and biological evidence for cyanotoxins. Environ Health Perspect 109:663–668
Brooks BW, Lazorchak JM, Howard MDA et al (2016) Are harmful algal blooms becoming the greatest inland water quality threat to public health and aquatic ecosystems? Environ Toxicol Chem 35:6–13. https://doi.org/10.1002/etc.3220
Svirčev Z, Drobac D, Tokodi N et al (2016) Lessons from the Užice case. In: Meriluoto J, Spoof L, Codd GA (eds) Handbook of cyanobacterial monitoring and cyanotoxin analysis. Wiley, Hoboken, pp 298–308
Fitzsimmons EG (2014) Tap water ban for toledo residents. New York Times
Wilson E (2014) Danger from microcystins in toledo water unclear. Chem Eng News 92:9
WHO (2003) Cyanobacterial toxins: microcystin-LR in drinking-water. Background document for preparation of WHO guidelines for drinking-water quality. World Health Organization, Geneva
Drinking-water quality guidelines. https://www.who.int/water_sanitation_health/water-quality/guidelines/en/. Accessed 27 Mar 2020
Chorus I (2012) Current approaches to Cyanotoxin risk assessment, risk management and regulations in different countries. Federal Environment Agency, Dessau-Roßlau
Schwarzenbach RP, Gschwend PM, Imboden DM (2002) Indirect photolysis: reactions with photooxidants in natural waters and in the atmosphere. Environmental organic chemistry. Wiley, Hoboken, pp 655–686
von Sonntag C, von Gunten U (2012) Reactions of hydroxyl and peroxyl radicals. Chemistry of ozone in water and wastewater treatment: from basic principles to applications. IWA Publishing, London, pp 225–248
Zhang B-T, Zhang Y, Teng Y, Fan M (2015) Sulfate radical and its application in decontamination technologies. Crit Rev Environ Sci Technol 45:1756–1800. https://doi.org/10.1080/10643389.2014.970681
Antoniou MG, de la Cruz AA, Dionysiou DD (2010) Degradation of microcystin-LR using sulfate radicals generated through photolysis, thermolysis and e-transfer mechanisms. Appl Catal B Environ 96:290–298. https://doi.org/10.1016/J.APCATB.2010.02.013
Chen Y-T, Chen W-R, Lin T-F (2018) Oxidation of cyanobacterial neurotoxin beta-N-methylamino-l-alanine (BMAA) with chlorine, permanganate, ozone, hydrogen peroxide and hydroxyl radical. Water Res 142:187–195. https://doi.org/10.1016/J.WATRES.2018.05.056
Park J-A, Yang B, Park C et al (2017) Oxidation of microcystin-LR by the Fenton process: kinetics, degradation intermediates, water quality and toxicity assessment. Chem Eng J 309:339–348. https://doi.org/10.1016/j.cej.2016.10.083
Tak S-Y, Kim M-K, Lee J-E et al (2018) Degradation mechanism of anatoxin-a in UV-C/H2O2 reaction. Chem Eng J 334:1016–1022. https://doi.org/10.1016/J.CEJ.2017.10.081
He X, de la Cruz AA, Dionysiou DD (2013) Destruction of cyanobacterial toxin cylindrospermopsin by hydroxyl radicals and sulfate radicals using UV-254 nm activation of hydrogen peroxide, persulfate and peroxymonosulfate. J Photochem Photobiol A Chem 251:160–166. https://doi.org/10.1016/J.JPHOTOCHEM.2012.09.017
He X, de la Cruz AA, Hiskia A et al (2015) Destruction of microcystins (cyanotoxins) by UV-254 nm-based direct photolysis and advanced oxidation processes (AOPs): influence of variable amino acids on the degradation kinetics and reaction mechanisms. Water Res 74:227–238. https://doi.org/10.1016/J.WATRES.2015.02.011
von Sonntag C, von Gunten U (2012) Chemistry of ozone in water and wastewater treatment: from basic principles to applications. IWA Publishing, London
Von Gunten U (2003) Ozonation of drinking water: part I. Oxidation kinetics and product formation. Water Res 37:1443–1467. https://doi.org/10.1016/S0043-1354(02)00457-8
Al Momani F (2007) Degradation of cyanobacteria anatoxin-a by advanced oxidation processes. Sep Purif Technol 57:85–93. https://doi.org/10.1016/J.SEPPUR.2007.03.008
Al Momani F, Smith DW, Gamal El-Din M (2008) Degradation of cyanobacteria toxin by advanced oxidation processes. J Hazard Mater 150:238–249. https://doi.org/10.1016/j.jhazmat.2007.04.087
Yan S, Jia A, Merel S et al (2016) Ozonation of cylindrospermopsin (cyanotoxin): degradation mechanisms and cytotoxicity assessments. Environ Sci Technol 50:1437–1446. https://doi.org/10.1021/acs.est.5b04540
Orr PT, Jones GJ, Hamilton GR (2004) Removal of saxitoxins from drinking water by granular activated carbon, ozone and hydrogen peroxide—implications for compliance with the Australian drinking water guidelines. Water Res 38:4455–4461. https://doi.org/10.1016/J.WATRES.2004.08.024
Bober B, Pudas K, Lechowski Z, Bialczyk J (2008) Degradation of microcystin-LR by ozone in the presence of Fenton reagent. J Environ Sci Health A Tox Hazard Subst Environ Eng 43:186–190. https://doi.org/10.1080/10934520701781582
Wu C-C, Huang W-J, Ji B-H (2015) Degradation of cyanotoxin cylindrospermopsin by TiO2-assisted ozonation in water. J Environ Sci Health A Tox Hazard Subst Environ Eng 50:1116–1126. https://doi.org/10.1080/10934529.2015.1047664
Schwarzenbach RP, Gschwend PM, Imboden DM (2002) Direct photolysis. Environmental organic chemistry. Wiley, Hoboken, pp 611–654
Stevens DK, Krieger RI (1991) Stability studies on the cyanobacterial nicotinic alkaloid anatoxin-A. Toxicon 29:167–179. https://doi.org/10.1016/0041-0101(91)90101-V
Wörmer L, Huerta-Fontela M, Cirés S et al (2010) Natural photodegradation of the cyanobacterial toxins microcystin and cylindrospermopsin. Environ Sci Technol 44:3002–3007. https://doi.org/10.1021/es9036012
Mazur-Marzec H, Meriluoto J, Pliński M (2006) The degradation of the cyanobacterial hepatotoxin nodularin (NOD) by UV radiation. Chemosphere 65:1388–1395. https://doi.org/10.1016/J.CHEMOSPHERE.2006.03.072
Afzal A, Oppenländer T, Bolton JR, El-Din MG (2010) Anatoxin-a degradation by advanced oxidation processes: vacuum-UV at 172 nm, photolysis using medium pressure UV and UV/H2O2. Water Res 44:278–286. https://doi.org/10.1016/J.WATRES.2009.09.021
Moon B-R, Kim T-K, Kim M-K et al (2017) Degradation mechanisms of Microcystin-LR during UV-B photolysis and UV/H2O2 processes: byproducts and pathways. Chemosphere 185:1039–1047. https://doi.org/10.1016/J.CHEMOSPHERE.2017.07.104
Chang J, Chen Z, Wang Z et al (2015) Oxidation of microcystin-LR in water by ozone combined with UV radiation: the removal and degradation pathway. Chem Eng J 276:97–105. https://doi.org/10.1016/J.CEJ.2015.04.070
Liu Y, Ren J, Wang X, Fan Z (2016) Mechanism and reaction pathways for microcystin-LR degradation through UV/H2O2 treatment. PLoS ONE 11:e0156236
Tsuji K, Naito S, Kondo F et al (1994) Stability of microcystins from cyanobacteria: effect of light on decomposition and isomerization. Environ Sci Technol 28:173–177. https://doi.org/10.1021/es00050a024
Gajdek P, Bober B, Mej E, Bialczyk J (2004) Sensitised decomposition of microcystin-LR using UV radiation. J Photochem Photobiol B Biol 76:103–106. https://doi.org/10.1016/J.JPHOTOBIOL.2004.06.001
Verma S, Sillanpää M (2015) Degradation of anatoxin-a by UV-C LED and UV-C LED/H2O2 advanced oxidation processes. Chem Eng J 274:274–281. https://doi.org/10.1016/J.CEJ.2015.03.128
Song W, Yan S, Cooper WJ et al (2012) Hydroxyl radical oxidation of cylindrospermopsin (cyanobacterial toxin) and its role in the photochemical transformation. Environ Sci Technol 46:12608–12615. https://doi.org/10.1021/es302458h
Chiswell RK, Shaw GR, Eaglesham G et al (1999) Stability of cylindrospermopsin, the toxin from the cyanobacterium, Cylindrospermopsis raciborskii: effect of pH, temperature, and sunlight on decomposition. Environ Toxicol 14:155–161. https://doi.org/10.1002/(SICI)1522-7278(199902)14:1%3c155:AID-TOX20%3e3.0.CO;2-Z
Adamski M, Żmudzki P, Chrapusta E et al (2016) Characterization of cylindrospermopsin decomposition products formed under irradiation conditions. Algal Res 18:1–6. https://doi.org/10.1016/J.ALGAL.2016.05.027
Zhang X, He J, Xiao S, Yang X (2019) Elimination kinetics and detoxification mechanisms of microcystin-LR during UV/chlorine process. Chemosphere 214:702–709. https://doi.org/10.1016/J.CHEMOSPHERE.2018.09.162
Park J-A, Yang B, Jang M et al (2019) Oxidation and molecular properties of microcystin-LR, microcystin-RR and anatoxin-a using UV-light-emitting diodes at 255 nm in combination with H2O2. Chem Eng J 366:423–432. https://doi.org/10.1016/J.CEJ.2019.02.101
Liu X, Chen Z, Zhou N et al (2010) Degradation and detoxification of microcystin-LR in drinking water by sequential use of UV and ozone. J Environ Sci 22:1897–1902. https://doi.org/10.1016/S1001-0742(09)60336-3
Duan X, Sanan T, de la Cruz A et al (2018) Susceptibility of the algal toxin microcystin-LR to UV/chlorine process: comparison with chlorination. Environ Sci Technol 52:8252–8262. https://doi.org/10.1021/acs.est.8b00034
Park J-A, Yang B, Kim J-H et al (2018) Removal of microcystin-LR using UV-assisted advanced oxidation processes and optimization of photo-Fenton-like process for treating Nak-Dong River water, South Korea. Chem Eng J 348:125–134. https://doi.org/10.1016/J.CEJ.2018.04.190
He X, Zhang G, de la Cruz AA et al (2014) Degradation mechanism of cyanobacterial toxin cylindrospermopsin by hydroxyl radicals in homogeneous UV/H2O2 process. Environ Sci Technol 48:4495–4504. https://doi.org/10.1021/es403732s
Ochando-Pulido JM, Hodaifa G, Víctor-Ortega MD, Martínez-Ferez A (2013) A novel photocatalyst with ferromagnetic core used for the treatment of olive oil mill effluents from two-phase production process. ScientificWorldJournal 2013:196470. https://doi.org/10.1155/2013/196470
Fotiou T, Triantis TM, Kaloudis T et al (2016) Assessment of the roles of reactive oxygen species in the UV and visible light photocatalytic degradation of cyanotoxins and water taste and odor compounds using C-TiO2. Water Res 90:52–61. https://doi.org/10.1016/J.WATRES.2015.12.006
Chen L, Zhao C, Dionysiou DD, O'Shea KE (2015) TiO2 photocatalytic degradation and detoxification of cylindrospermopsin. J Photochem Photobiol A Chem 307–308:115–122. https://doi.org/10.1016/J.JPHOTOCHEM.2015.03.013
Triantis TM, Fotiou T, Kaloudis T et al (2012) Photocatalytic degradation and mineralization of microcystin-LR under UV-A, solar and visible light using nanostructured nitrogen doped TiO2. J Hazard Mater 211–212:196–202. https://doi.org/10.1016/J.JHAZMAT.2011.11.042
Pestana CJ, Edwards C, Prabhu R et al (2015) Photocatalytic degradation of eleven microcystin variants and nodularin by TiO2 coated glass microspheres. J Hazard Mater 300:347–353. https://doi.org/10.1016/J.JHAZMAT.2015.07.016
Lawton LA, Robertson PKJ, Cornish BJPA et al (2003) Processes influencing surface interaction and photocatalytic destruction of microcystins on titanium dioxide photocatalysts. J Catal 213:109–113. https://doi.org/10.1016/S0021-9517(02)00049-0
Pelaez M, de la Cruz AA, O'Shea K et al (2011) Effects of water parameters on the degradation of microcystin-LR under visible light-activated TiO2 photocatalyst. Water Res 45:3787–3796. https://doi.org/10.1016/J.WATRES.2011.04.036
Pelaez M, de la Cruz AA, Stathatos E et al (2009) Visible light-activated N-F-codoped TiO2 nanoparticles for the photocatalytic degradation of microcystin-LR in water. Catal Today 144:19–25. https://doi.org/10.1016/J.CATTOD.2008.12.022
Zhao C, Pelaez M, Dionysiou DD et al (2014) UV and visible light activated TiO2 photocatalysis of 6-hydroxymethyl uracil, a model compound for the potent cyanotoxin cylindrospermopsin. Catal Today 224:70–76. https://doi.org/10.1016/J.CATTOD.2013.09.042
Han C, Pelaez M, Likodimos V et al (2011) Innovative visible light-activated sulfur doped TiO2 films for water treatment. Appl Catal B Environ 107:77–87. https://doi.org/10.1016/J.APCATB.2011.06.039
Pelaez M, Falaras P, Likodimos V et al (2016) Use of selected scavengers for the determination of NF-TiO2 reactive oxygen species during the degradation of microcystin-LR under visible light irradiation. J Mol Catal A Chem 425:183–189. https://doi.org/10.1016/J.MOLCATA.2016.09.035
Zhao C, Li D, Liu Y et al (2015) Photocatalytic removal of microcystin-LR by advanced WO3-based nanoparticles under simulated solar light. ScientificWorldJournal 2015:720706. https://doi.org/10.1155/2015/720706
Han C, Machala L, Medrik I et al (2017) Degradation of the cyanotoxin microcystin-LR using iron-based photocatalysts under visible light illumination. Environ Sci Pollut Res 24:19435–19443. https://doi.org/10.1007/s11356-017-9566-4
Wang S, Ma W, Fang Y et al (2014) Bismuth oxybromide promoted detoxification of cylindrospermopsin under UV and visible light illumination. Appl Catal B Environ 150–151:380–388. https://doi.org/10.1016/J.APCATB.2013.12.016
Yanfen F, Yingping H, Jing Y et al (2011) Unique ability of BiOBr to decarboxylate d-Glu and d-MeAsp in the photocatalytic degradation of microcystin-LR in water. Environ Sci Technol 45:1593–1600. https://doi.org/10.1021/es103422j
Pinho LX, Azevedo J, Miranda SM et al (2015) Oxidation of microcystin-LR and cylindrospermopsin by heterogeneous photocatalysis using a tubular photoreactor packed with different TiO2 coated supports. Chem Eng J 266:100–111. https://doi.org/10.1016/J.CEJ.2014.12.023
Lee D-K, Kim S-C, Kim S-J et al (2004) Photocatalytic oxidation of microcystin-LR with TiO2-coated activated carbon. Chem Eng J 102:93–98. https://doi.org/10.1016/J.CEJ.2004.01.027
Cornish BJP, Lawton LA, Robertson PK (2000) Hydrogen peroxide enhanced photocatalytic oxidation of microcystin-LR using titanium dioxide. Appl Catal B Environ 25:59–67. https://doi.org/10.1016/S0926-3373(99)00121-6
Bessegato GG, Guaraldo TT, de Brito JF et al (2015) Achievements and trends in photoelectrocatalysis: from environmental to energy applications. Electrocatalysis 6:415–441. https://doi.org/10.1007/s12678-015-0259-9
Liao W, Zhang Y, Zhang M et al (2013) Photoelectrocatalytic degradation of microcystin-LR using Ag/AgCl/TiO2 nanotube arrays electrode under visible light irradiation. Chem Eng J 231:455–463. https://doi.org/10.1016/J.CEJ.2013.07.054
Moreira FC, Boaventura RAR, Brillas E, Vilar VJP (2017) Electrochemical advanced oxidation processes: a review on their application to synthetic and real wastewaters. Appl Catal B Environ 202:217–261. https://doi.org/10.1016/j.apcatb.2016.08.037
Liu J, Hernández SE, Swift S, Singhal N (2018) Estrogenic activity of cylindrospermopsin and anatoxin-a and their oxidative products by FeIII-B*/H2O2. Water Res 132:309–319. https://doi.org/10.1016/J.WATRES.2018.01.018
Munoz M, Nieto-Sandoval J, Cirés S et al (2019) Degradation of widespread cyanotoxins with high impact in drinking water (microcystins, cylindrospermopsin, anatoxin-a and saxitoxin) by CWPO. Water Res 163:114853. https://doi.org/10.1016/J.WATRES.2019.114853
Zhong Y, Jin X, Qiao R et al (2009) Destruction of microcystin-RR by Fenton oxidation. J Hazard Mater 167:1114–1118. https://doi.org/10.1016/J.JHAZMAT.2009.01.117
Jung YS, Lim WT, Park J, Kim Y (2009) Effect of pH on fenton and fenton-like oxidation. Environ Technol 30:183–190. https://doi.org/10.1080/09593330802468848
Fang Y-F, Chen D-X, Huang Y-P et al (2011) Heterogeneous fenton photodegradation of microcystin-LR with visible light irradiation. Chinese J Anal Chem 39:540–543. https://doi.org/10.1016/S1872-2040(10)60433-1
Lee H, Lee H-J, Sedlak DL, Lee C (2013) pH-Dependent reactivity of oxidants formed by iron and copper-catalyzed decomposition of hydrogen peroxide. Chemosphere 92:652–658. https://doi.org/10.1016/J.CHEMOSPHERE.2013.01.073
Gajdek P, Lechowski Z, Bochnia T, Kępczyński M (2001) Decomposition of microcystin-LR by Fenton oxidation. Toxicon 39:1575–1578. https://doi.org/10.1016/S0041-0101(01)00139-8
Zhou S, Yu Y, Sun J et al (2018) Oxidation of microcystin-LR by copper (II) coupled with ascorbic acid: kinetic modeling towards generation of H2O2. Chem Eng J 333:443–450. https://doi.org/10.1016/J.CEJ.2017.09.166
Bandala ER, Martínez D, Martínez E, Dionysiou DD (2004) Degradation of microcystin-LR toxin by Fenton and Photo-Fenton processes. Toxicon 43:829–832. https://doi.org/10.1016/j.toxicon.2004.03.013
de Freitas AM, Sirtori C, Lenz CA, Peralta Zamora PG (2013) Microcystin-LR degradation by solar photo-Fenton, UV-A/photo-Fenton and UV-C/H2O2: a comparative study. Photochem Photobiol Sci 12:696–702. https://doi.org/10.1039/C2PP25233C
Ma Y-S (2012) Short review: current trends and future challenges in the application of sono-Fenton oxidation for wastewater treatment. Sustain Environ Res 22:271–278
Karci A, Wurtzler EM, de la Cruz AA et al (2018) Solar photo-Fenton treatment of microcystin-LR in aqueous environment: transformation products and toxicity in different water matrices. J Hazard Mater 349:282–292. https://doi.org/10.1016/J.JHAZMAT.2017.12.071
Wang F, Wu Y, Gao Y et al (2016) Effect of humic acid, oxalate and phosphate on Fenton-like oxidation of microcystin-LR by nanoscale zero-valent iron. Sep Purif Technol 170:337–343. https://doi.org/10.1016/J.SEPPUR.2016.06.046
Scholtz V, Pazlarova J, Souskova H et al (2015) Nonthermal plasma—a tool for decontamination and disinfection. Biotechnol Adv 33:1108–1119. https://doi.org/10.1016/J.BIOTECHADV.2015.01.002
Meichsner J, Schmidt M, Schneider R, Wagner HE (2013) Introduction. Nonthermal plasma chemistry and physics. CRC Press, Boca Raton, pp 1–6
Meichsner J, Schmidt M, Schneider R, Wagner HE (2013) Selected applications. Nonthermal plasma chemistry and physics. CRC Press, Boca Raton, pp 285–406
Banaschik R, Lukes P, Miron C et al (2017) Fenton chemistry promoted by sub-microsecond pulsed corona plasmas for organic micropollutant degradation in water. Electrochim Acta 245:539–548. https://doi.org/10.1016/J.ELECTACTA.2017.05.121
Jo J-O, Jwa E, Mok Y-S (2016) Decomposition of aqueous anatoxin-a using underwater dielectric barrier discharge plasma created in a porous ceramic tube. J Korean Soc Water Wastewater 30:167–177. https://doi.org/10.11001/jksww.2016.30.2.167
Zhang H, Huang Q, Ke Z et al (2012) Degradation of microcystin-LR in water by glow discharge plasma oxidation at the gas-solution interface and its safety evaluation. Water Res 46:6554–6562. https://doi.org/10.1016/j.watres.2012.09.041
Banaschik R, Jablonowski H, Bednarski PJ, Kolb JF (2018) Degradation and intermediates of diclofenac as instructive example for decomposition of recalcitrant pharmaceuticals by hydroxyl radicals generated with pulsed corona plasma in water. J Hazard Mater 342:651–660. https://doi.org/10.1016/J.JHAZMAT.2017.08.058
Xin Q, Zhang Y, Wu K (2013) Degradation of microcystin-LR by gas–liquid interfacial discharge plasma. Plasma Sci Technol 15:1221
Xin Q, Zhang Y, Wu KB (2013) Mn-doped carbon xerogels as catalyst in the removal of microcystin-LR by water-surface discharge plasma. J Environ Sci Health A Toxic Hazard Subst Environ Eng 48:293–299. https://doi.org/10.1080/10934529.2013.726833
Nisol B, Watson S, Leblanc Y et al (2019) Cold plasma oxidation of harmful algae and associated metabolite BMAA toxin in aqueous suspension. Plasma Process Polym 16:1800137. https://doi.org/10.1002/ppap.201800137
Xin Q, Zhang Y, Li Z et al (2015) Mn/Ti-doped carbon xerogel for efficient catalysis of microcystin-LR degradation in the water surface discharge plasma reactor. Environ Sci Pollut Res 22:17202–17208. https://doi.org/10.1007/s11356-015-4956-y
Zhang Y, Wei H, Xin Q et al (2016) Process optimization for microcystin-LR degradation by response surface methodology and mechanism analysis in gas–liquid hybrid discharge system. J Environ Manage 183:726–732. https://doi.org/10.1016/j.jenvman.2016.09.030
Pekárek S (2012) Experimental study of surface dielectric barrier discharge in air and its ozone production. J Phys D Appl Phys 45:75201. https://doi.org/10.1088/0022-3727/45/7/075201
Matzek LW, Carter KE (2016) Activated persulfate for organic chemical degradation: a review. Chemosphere 151:178–188. https://doi.org/10.1016/J.CHEMOSPHERE.2016.02.055
Wacławek S, Lutze HV, Grübel K et al (2017) Chemistry of persulfates in water and wastewater treatment: a review. Chem Eng J 330:44–62. https://doi.org/10.1016/J.CEJ.2017.07.132
Son G, Lee H (2016) Methylene blue removal by submerged plasma irradiation system in the presence of persulfate. Environ Sci Pollut Res 23:15651–15656. https://doi.org/10.1007/s11356-016-6759-1
That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩
Fitzgerald 2012 and the general absence of successful experiments suggest not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of 20% × (1/dozens) of being iodine! I may be unduly optimistic if I give this as much as 10%.
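A minimal sketch of that arithmetic (the ingredient count is an assumed illustrative value, since "dozens" is deliberately vague):

# Back-of-the-envelope estimate from the reasoning above
base_rate = 0.20           # ~2 of 10 studies showed any benefit
n_ingredients = 30         # assumption: "dozens" of nutrients per supplement package
p_iodine = base_rate / n_ingredients
print(p_iodine)            # ~0.007, comfortably below the 10% ceiling mentioned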
Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I'd have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I'm not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn't notice any large change in emotional affect or energy levels. And it may've helped my motivation (though I am also trying out the tyrosine).
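For reference, the ~100mg elemental figure follows from the chemistry: lithium carbonate (Li2CO3) is only about 19% lithium by mass. A quick sketch of the arithmetic:

# Elemental lithium content of a 500mg lithium carbonate dose
li, c, o = 6.94, 12.01, 16.00        # standard atomic masses (g/mol)
li2co3 = 2 * li + c + 3 * o          # ~73.9 g/mol
fraction_li = (2 * li) / li2co3      # ~0.188
print(500 * fraction_li)             # ~94 mg elemental lithium per clinical dose
print(round(500 * fraction_li / 5))  # ~19 of the 5mg pills, i.e. roughly 20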
There is a similar substance which can be purchased legally almost anywhere in the world called adrafinil. This is a prodrug for modafinil. You can take it, and then the body will metabolize it into modafinil, providing similar beneficial effects. Unfortunately, it takes longer for adrafinil to kick in—about an hour—rather than a matter of minutes. In addition, there are more potential side-effects to taking the prodrug as compared to the actual drug.
Let's start with the basics of what smart drugs are and what they aren't. The field of cosmetic psychopharmacology is still in its infancy, but the use of smart drugs is primed to explode during our lifetimes, as researchers gain increasing understanding of which substances affect the brain and how they do so. For many people, the movie Limitless was a first glimpse into the possibility of "a pill that can make you smarter," and while that fiction is a long way from reality, the possibilities - in fact, present-day certainties visible in the daily news - are nevertheless extremely exciting.
Exercise and nutrition also play an important role in neuroplasticity. Many vitamins and ingredients found naturally in food products have been shown to have cognitive enhancing effects. Some of these include vitamins B6 and B12, caffeine, phenethylamine (found in chocolate), and L-theanine (found in green tea), whose combined effects with caffeine are more extensively researched.
The benefits that they offer are gradually becoming more clearly understood, and those who use them now have the potential to get ahead of the curve when it comes to learning, information recall, mental clarity, and focus. Everyone is different, however, so take some time to learn what works for you and what doesn't and build a stack that helps you perform at your best.
Many over the counter and prescription smart drugs fall under the category of stimulants. These substances contribute to an overall feeling of enhanced alertness and attention, which can improve concentration, focus, and learning. While these substances are often considered safe in moderation, taking too much can cause side effects such as decreased cognition, irregular heartbeat, and cardiovascular problems.
Phenotropil is an over-the-counter supplement similar in structure to Piracetam (and Noopept). This synthetic smart drug has been used to treat stroke, epilepsy and trauma recovery. A 2005 research paper also demonstrated that patients diagnosed with natural lesions or brain tumours see improvements in cognition. Phenylpiracetam intake can also result in minimised feelings of anxiety and depression. This is one of the more powerful unscheduled Nootropics available.
Oxiracetam is one of the 3 most popular -racetams; less popular than piracetam but seems to be more popular than aniracetam. Prices have come down substantially since the early 2000s, and stand at around 1.2g/$ or roughly 50 cents a dose, which was low enough to experiment with; key question, does it stack with piracetam or is it redundant for me? (Oxiracetam can't compete on price with my piracetam stockpile: the latter is now a sunk cost and hence free.)
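As a sanity check, the quoted 1.2g/$ and ~50 cents a dose together imply a dose of about 600mg; the dose size itself is my inference from those two numbers, not something stated here:

# Implied oxiracetam dose from the quoted price figures
grams_per_dollar = 1.2
cost_per_dose = 0.50
print(grams_per_dollar * cost_per_dose)   # 0.6 g, i.e. ~600 mg per dose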
At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual amount, or less work that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful).
I don't believe there's any need to control for training with repeated within-subject sampling, since there will be as many samples on both control and active days drawn from the later trained period as with the initial untrained period. But yes, my D5B scores seem to have plateaued pretty much and only very slowly increase; you can look at the stats file yourself.
"They're not regulated by the FDA like other drugs, so safety testing isn't required," Kerl says. What's more, you can't always be sure that what's on the ingredient label is actually in the product. Keep in mind, too, that those that contain water-soluble vitamins like B and C, she adds, aren't going to help you if you're already getting enough of those vitamins through diet. "If your body is getting more than you need, you're just going to pee out the excess," she says. "You're paying a lot of money for these supplements; maybe just have orange juice."
Table 1 shows all of the studies of middle school, secondary school, and college students that we identified. As indicated in the table, the studies are heterogeneous, with varying populations sampled, sample sizes, and year of data collection, and they focused on different subsets of the epidemiological questions addressed here, including prevalence and frequency of use, motivations for use, and method of obtaining the medication.
When it comes to coping with exam stress or meeting that looming deadline, the prospect of a "smart drug" that could help you focus, learn and think faster is very seductive. At least this is what current trends on university campuses suggest. Just as you might drink a cup of coffee to help you stay alert, an increasing number of students and academics are turning to prescription drugs to boost academic performance.
Brain-imaging studies are consistent with the existence of small effects that are not reliably captured by the behavioral paradigms of the literature reviewed here. Typically with executive function tasks, reduced activation of task-relevant areas is associated with better performance and is interpreted as an indication of higher neural efficiency (e.g., Haier, Siegel, Tang, Abel, & Buchsbaum, 1992). Several imaging studies showed effects of stimulants on task-related activation while failing to find effects on cognitive performance. Although changes in brain activation do not necessarily imply functional cognitive changes, they are certainly suggestive and may well be more sensitive than behavioral measures. Evidence of this comes from a study of COMT variation and executive function. Egan and colleagues (2001) found a genetic effect on executive function in an fMRI study with sample sizes as small as 11 but did not find behavioral effects in these samples. The genetic effect on behavior was demonstrated in a separate study with over a hundred participants. In sum, d-AMP and MPH measurably affect the activation of task-relevant brain regions when participants' task performance does not differ. This is consistent with the hypothesis (although by no means positive proof) that stimulants exert a true cognitive-enhancing effect that is simply too small to be detected in many studies.
There is much to be appreciated in a brain supplement like BrainPill (never mind the confusion that may stem from the generic-sounding name) that combines tried-and-tested ingredients in a single one-a-day formulation. The consistency in claims and what users see in real life is an exemplary one, which convinces us to rate this powerhouse as the second on this review list. Feeding one's brain with nootropics and related supplements entails due diligence in research and seeking the highest quality, and we think BrainPill is up to the task.
These days, young, ambitious professionals prefer prescription stimulants—including methylphenidate (usually sold as Ritalin) and Adderall—that are designed to treat people with attention deficit hyperactivity disorder (ADHD) and are more common and more acceptable than cocaine or nicotine (although there is a black market for these pills). ADHD makes people more likely to lose their focus on tasks and to feel restless and impulsive. Diagnoses of the disorder have been rising dramatically over the past few decades—and not just in kids: In 2012, about 16 million Adderall prescriptions were written for adults between the ages of 20 and 39, according to a report in the New York Times. Both methylphenidate and Adderall can improve sustained attention and concentration, says Barbara Sahakian, professor of clinical neuropsychology at the University of Cambridge and author of the 2013 book Bad Moves: How Decision Making Goes Wrong, and the Ethics of Smart Drugs. But the drugs do have side effects, including insomnia, lack of appetite, mood swings, and—in extreme cases—hallucinations, especially when taken in amounts that exceed standard doses.
But he has also seen patients whose propensity for self-experimentation to improve cognition got out of hand. One chief executive he treated, Ngo said, developed an unhealthy predilection for albuterol, because he felt the asthma inhaler medicine kept him alert and productive long after others had quit working. Unfortunately, the drug ended up severely imbalancing his electrolytes, which can lead to dehydration, headaches, vision and cardiac problems, muscle contractions and, in extreme cases, seizures.
Probably most significantly, use of the term "drug" has a significant negative connotation in our culture. "Drugs" are bad: So proclaimed Richard Nixon in the War on Drugs, and Nancy "No to Drugs" Reagan decades later, and other leaders continuing to present day. The legitimate demonization of the worst forms of recreational drugs has resulted in a general bias against the elective use of any chemical to alter the body's processes. Drug enhancement of athletes is considered cheating – despite the fact that many of these physiological shortcuts obviously work. University students and professionals seeking mental enhancements by taking smart drugs are now facing similar scrutiny.
Another class of substances with the potential to enhance cognition in normal healthy individuals is the class of prescription stimulants used to treat attention-deficit/hyperactivity disorder (ADHD). These include methylphenidate (MPH), best known as Ritalin or Concerta, and amphetamine (AMP), most widely prescribed as mixed AMP salts consisting primarily of dextroamphetamine (d-AMP), known by the trade name Adderall. These medications have become familiar to the general public because of the growing rates of diagnosis of ADHD children and adults (Froehlich et al., 2007; Sankaranarayanan, Puumala, & Kratochvil, 2006) and the recognition that these medications are effective for treating ADHD (MTA Cooperative Group, 1999; Swanson et al., 2008).
With so many different ones to choose from, choosing the best nootropics for you can be overwhelming at times. As usual, a decision this important will require research. Study up on the top nootropics which catch your eye the most. The nootropics you take will depend on what you want the enhancement for. The ingredients within each nootropic determine its specific function. For example, some nootropics contain ginkgo biloba, which can help memory, thinking speed, and increase attention span. Check the nootropic ingredients as you determine what end results you want to see. Some nootropics supplements can increase brain chemicals such as dopamine and serotonin. An increase in dopamine levels can be very useful for memory, alertness, reward and more. Many healthy adults, as well as college students take nootropics. This really supports the central nervous system and the brain.
After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale so bought it in my batch order, $12 for 1000g. As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it's always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 1 May 2013. In March 2014, I spent $19 for 1kg of micronized creatine monohydrate to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts's claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo being unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn't believe Roberts's claims for a second - my only reason to do it would be to prove the claim wrong but he'd just ignore me and no one else cares.) I didn't try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September or 178 days, so ~5.6g & $0.11 per day.
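The per-day figures at the end can be checked directly from the dates and price given (assuming both dates fall in 2014, as the context implies):

# Creatine usage and cost per day, from the figures above
from datetime import date
days = (date(2014, 9, 18) - date(2014, 3, 25)).days   # 177 (178 counting both endpoints)
print(1000 / days)   # ~5.6 g/day
print(19 / days)     # ~$0.11/day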
Similarly, we could try applying Nick Bostrom's reversal test and ask ourselves, how would we react to a virus which had no effect but to eliminate sleep from alternating nights and double sleep in the intervening nights? We would probably grouch about it for a while and then adapt to our new hedonistic lifestyle of partying or working hard. On the other hand, imagine the virus had the effect of eliminating normal sleep but instead, every 2 minutes, a person would fall asleep for a minute. This would be disastrous! Besides the most immediate problems like safely driving vehicles, how would anything get done? You would hold a meeting and at any point, a third of the participants would be asleep. If the virus made it instead 2 hours on, one hour off, that would be better but still problematic: there would be constant interruptions. And so on, until we reach our present state of 16 hours on, 8 hours off. Given that we rejected all the earlier buffer sizes, one wonders if 16:8 can be defended as uniquely suited to circumstances. Is that optimal? It may be, given the synchronization with the night-day cycle, but I wonder; rush hour alone stands as an argument against synchronized sleep - wouldn't our infrastructure be much cheaper if it only had to handle the average daily load rather than cope with the projected peak loads? Might not a longer cycle be better? The longer the day, the less we are interrupted by sleep; it's a hoary cliche about programmers that they prefer to work in long sustained marathons during long nights rather than sprint occasionally during a distraction-filled day, to the point where some famously adopt a 28 hour day (which evenly divides a week into 6 days). Are there other occupations which would benefit from a 20 hour waking period? Or 24 hour waking period? We might not know because without chemical assistance, circadian rhythms would overpower anyone attempting such schedules. It certainly would be nice if one had long time chunks in which one could read a challenging book in one sitting, without heroic arrangements.↩
Trump is finally being impeached
Post 310 Oct 6 JDN 2458763
Given that there have been efforts to impeach Trump since before he took office (which is totally unprecedented, by the way; while several others have committed crimes and been impeached while in office, no other US President has gone into office with widespread suspicion of mass criminal activity), it seems odd that it has taken this long to finally actually start formal impeachment hearings.
Why did it take so long? We needed two things to happen: One, absolutely overwhelming evidence of absolutely flagrant crimes, and two, a Democratic majority in the House of Representatives.
This is how divided America has become. If the Republicans were really a mainstream center-right party as they purport to be, they would have supported impeachment just as much as the Democrats, we would have impeached Trump in 2017, and he would have been removed from office by 2018. But in fact they are nothing of the sort. The Republicans no longer believe in democracy. The Democrats are a mainstream center-right party, and the Republicans are far-right White-nationalist crypto-fascists (and less 'crypto-' all the time). After seeing how they reacted to his tax evasion, foreign bribes, national security leaks, human rights violations, obstruction of justice, and overall ubiquitous corruption and incompetence, by this point it's clear that there is almost nothing that Trump could do which would make either the voter base or the politicians of the Republican Party turn against him—he may literally be correct that he could commit a murder in broad daylight on Fifth Avenue. Maybe if he raised taxes on billionaires or expressed support for Roe v. Wade they would finally revolt.
Even as it stands, there is good reason to fear that the Republican-majority Senate will not confirm the impeachment and remove Trump from office. The political fallout from such a failed impeachment is highly uncertain. So far, markets are taking it in stride; it may even turn out to be good for the economy. (Then again, a good economy may be good for Trump in 2020!) But at this point the evidence is so damning that if we don't impeach now, we may never impeach again; if this isn't enough, nothing is. (The Washington Examiner said that months ago, and may already have been right; but the case is even stronger now.)
So, the most likely scenario is that the impeachment goes through the House, but fails in the Senate. The good news is that if the Republicans do block the impeachment, they'll be publicly admitting that even charges this serious and this substantiated mean nothing to them. Anyone watching who is still on the fence about them will see how corrupt they have become.
After that, this is probably what will happen: The impeachment will be big news for a month or two, then be largely ignored. Trump will probably try to make himself a martyr, talking even louder about 'witch hunts'. He will lose popularity with a few voters, but his base will continue to support him through thick and thin. (Astonishingly, almost nothing really seems to move his overall approval rating.) The economy will be largely unaffected, or maybe slightly improve. And then we'll find out in the 2020 election whether the Democrats can mobilize enough opposition to Trump, and—just as importantly—enough support for whoever wins the primaries, to actually win this time around.
If by some miracle enough Republicans find a moral conscience and vote to remove Trump from office, this means that until 2020 we will have President Mike Pence. In a sane world, that in itself would sound like a worst-case scenario; he's basically a less-sleazy Ted Cruz. He is misogynistic, homophobic, and fanatically religious. He is also a partisan ideologue who toes the party line on basically every issue. Some have even argued that Pence is worse than Trump, because he represents the same ideology but with more subtlety and competence.
But subtlety and competence are important. Indeed, I would much rather have an intelligent, rational, competent ideologue managing our government, leading our military, and controlling our nuclear launch codes than an idiotic, narcissistic, impulsive one. Pence at least can be trusted to be consistent in his actions and diplomatic in his words—two things which Trump has absolutely never been.
Indeed, Pence's ideological consistency has benefits; unlike Trump, he reliably supports free trade and his fiscal conservatism actually seems genuine for once. Consistency in itself has value: Life is much easier, and the economy is much stronger, when the rules of the game remain the same rather than randomly lurching from one extreme to another.
Pence is also not the pathological liar that Trump is. Yes, Pence has lied many times (only 22% of his statements were evaluated by PolitiFact as "Mostly True" or "True", and 30% were "False" or "Pants on Fire"). But Trump lies constantly. A mere 14% of Trump's statements were evaluated by PolitiFact as "Mostly True" or "True", while 48% were "False" or "Pants on Fire". For Bernie Sanders, 49% were "Mostly True" or better, and only 11% were "False", with no "Pants on Fire" at all; for Hillary Clinton, 49% were "Mostly True" or better, and only 10% were "False", with 3% "Pants on Fire". People have tried to keep running tallies of Trump's lies, but it's a tall order: The Washington Post records over 12,000 lies since he took office less than three years ago. Four thousand lies a year. More than ten every single day. Most people commit lies of omission or say 'white lies' several times per day (depending on who you ask, I've seen everything from an average of 2 times per day to an average of 100 times per day), but that's not what we're talking about here. These are consequential, outright statements of fact that aren't true. And these are not literally everything he has said that wasn't true; they are only public lies with relevance to policy or his own personal record. Indeed, Trump lies recklessly, stupidly, pointlessly, nonsensically. He seems like a pathological liar, or someone with dementia who is confabulating to try to fill gaps in his memory. (Indeed, a lot of his behavior is consistent with dementia, and similar to how Reagan acted in the early days of his Alzheimer's.) At least if Pence takes office, we'll be able to believe some of what he says.
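A rough check on the rate that tally implies, using the inauguration date and this post's date (the 12,000 figure is the Post's running count as quoted above):

# Lies per year and per day implied by the Washington Post tally
from datetime import date
days_in_office = (date(2019, 10, 6) - date(2017, 1, 20)).days   # 989 days
print(12000 / days_in_office * 365)   # ~4,400 per year
print(12000 / days_in_office)         # ~12 per day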
Of course, Pence won't be much better on some of the most important issues, such as climate change. When asked how important he thinks climate change is and what should be done about it, Pence always gives mealy-mouthed, evasive responses—but at least he doesn't make up stories about windmills getting special permits to kill endangered birds.
I admit, choosing Pence over Trump feels like choosing to get shot in the leg instead of the face—but that's really not a difficult choice, is it?
Government shutdowns are pure waste
Jan 6 JDN 2458490
At the time of writing, the US federal government is still shut down.
The US government has been shut down in this way 22 times—all of them since 1976. Most countries don't do this. The US didn't do it for most of our history. Please keep that in mind: This was an entirely avoidable outcome that most countries never go through.
The consequences of a government shutdown are pure waste on an enormous scale. Most government employees get furloughed without pay, which means they miss their credit card and mortgage payments while they wait for their back pay after the shutdown ends. (And this one happened during Christmas!) Contractors have it even worse: They get their contracts terminated and may never see the money they were promised. This has effects on our whole economy; the 2013 shutdown removed a full $24 billion from the US economy, and the current shutdown is expected to drain $6 billion per week. The government itself is taking losses of about $1 billion per week, mostly in the form of unpaid and unaudited taxes.
I personally don't know what's going to happen to an NSF grant proposal I've been writing for several weeks: Almost the entire NSF has been furloughed as "non-essential" (most of the military remains operative; almost all basic science gets completely shut down—insert comment about the military-industrial complex here), and in 2013 some of the dissertation grants were outright canceled because of the shutdown.
Why do these shutdowns happen?
A government shutdown occurs when the omnibus appropriations bill fails to pass. This bill is essentially the entire US federal budget in a single bill; like any other bill, it has to be passed by both houses of Congress and signed by the President.
For some reason, our government decided that if this process doesn't happen on schedule, the correct answer is to shut down all non-essential government services. This is a frankly idiotic answer. The obviously correct solution is that if Congress and the President can't agree on a new budget, the old budget gets renewed in its entirety with a standard COLA inflation adjustment. This really seems incredibly basic: If the government can't agree on how to change something, the status quo should remain in effect until they do. And the status quo is an inflation-adjusted version of the existing budget.
This particular shutdown occurred because of Donald Trump's brinksmanship on the border wall: He demanded at least $5 billion, and the House wouldn't give it to him.
It won't be much longer before we've already lost more money on the shutdown than that $5 billion; this may tempt you to say that the House should give in. But the wall won't actually do anything to make our nation safer or better, and building it would displace thousands of people by eminent domain and send an unquestionable signal of xenophobia to the rest of the world. Frankly it sickens me that there were not enough principled Republicans to stand their ground against Trump's madness; but at least there are now Democrats standing theirs.
Make no mistake: This is Trump's shutdown, and he said so himself. The House even offered to do what should be done by default, which is renew the old budget while negotiations on the border wall continue—Trump refused this offer. And Trump keeps changing his story with every new tweet.
But the real problem is that this is even something the President is allowed to do. Vetoing the old budget should restore the old budget, not furlough hundreds of thousands of workers and undermine government services. This is a ludicrous way to organize a government, and seems practically designed to make our government as inefficient, wasteful, and hated as possible. This was an absolutely unforced error and we should be enacting policy rules that would prevent it from ever happening again.
9/11, 14 years on—and where are our civil liberties?
JDN 2457278 (09/11/2015) EDT 20:53
Today is the 14th anniversary of the 9/11 attacks. A lot has changed since then—yet it's quite remarkable what hasn't. In particular, we still don't have our civil liberties back.
In our immediate panicked response to the attacks, the United States passed almost unanimously the USA PATRIOT ACT, giving unprecedented power to our government in surveillance, searches, and even arrests and detentions. Most of those powers have been renewed repeatedly and remain in effect; the only major change has been a slight weakening of the NSA's authority to use mass dragnet surveillance on Internet traffic and phone metadata. And this change in turn was almost certainly only made because of Edward Snowden, who is still forced to live in Russia for fear of being executed if he returns to the US. That is, the man most responsible for the only significant improvement in civil liberties in the United States in the last decade is living in Russia because he has been branded a traitor. No, the traitors here are the over one hundred standing US Congress members who voted for an act that is in explicit and direct violation of the Constitution. At the very least every one of them should be removed from office, and we as voters have the power to do that—so why haven't we? In particular, why are Dan Lipinski and Steny Hoyer, both Democrats from non-southern states who voted every single time to extend provisions of the PATRIOT ACT, still in office? At least Carl Levin had the courtesy to resign after sponsoring the act allowing indefinite detention—I hope we would have voted him out anyway, since I'd much rather have a Republican (and all the absurd economic policy that entails) than someone who apparently doesn't believe the Fourth and Sixth Amendments have any meaning at all.
We have become inured to this loss of liberty; it feels natural or inevitable to us. But these are not minor inconveniences; they are not small compromises. Giving our government the power to surveil, search, arrest, imprison, torture, and execute anyone they want at any time without the system of due process—and make no mistake, that is what the PATRIOT ACT and the indefinite detention law do—means giving away everything that separates us from tyranny. Bypassing the justice system and the rule of law means bypassing everything that America stands for.
So far, these laws have actually mostly been used against people reasonably suspected of terrorism, that much is true; but it's also irrelevant. Democracy doesn't mean you give the government extreme power and they uphold your trust and use it benevolently. Democracy means you don't give them that power in the first place.
If there's really sufficient evidence to support an arrest for terrorism, get a warrant. If you don't have enough evidence for a warrant, you don't have enough evidence for an arrest. If there's really sufficient evidence to justify imprisoning someone for terrorism, get a jury to convict. If you don't have enough evidence to convince a jury, guess what? You don't have enough evidence to imprison them. These are not negotiable. They are not "political opinions" in any ordinary sense. The protection of due process is so fundamental to democracy that without it political opinions lose all meaning.
People talk about "Big Government" when we suggest increasing taxes on capital gains or expanding Medicare. No, that isn't Big Government. Searching without warrants is Big Government. Imprisoning people without trial is Big Government. From all the decades of crying wolf in which any policy someone doesn't like is accused of being "tyranny", we seem to have lost the ability to recognize actual tyranny. I hope you understand the full force of my meaning when I say that the PATRIOT ACT is literally fascist. Fascism has come to America, and as predicted it was wrapped in the flag and carrying a cross.
In this sort of situation, a lot of people like to quote (or misquote) Benjamin Franklin:
"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
With the qualifiers "essential" and "temporary", this quote seems right; but a lot of people forget them and quote him as saying:
"Those would give up liberty to purchase safety, deserve neither liberty nor safety."
That's clearly wrong. We do in fact give up liberty to purchase safety, and as well we should. We give up our liberty to purchase weapons-grade plutonium; we give up our liberty to drive at 220 mph. The question we need to be asking is: How much liberty are we giving up to gain how much safety?
Spoken like an economist, the question is not whether you will give up liberty to purchase safety—the question is at what price you're willing to make the purchase. The price we've been paying in response to terrorism is far too high. Indeed, the price we are paying is tantamount to America itself.
As horrific as 9/11 was, it's important to remember: It only killed 3,000 people.
This statement probably makes you uncomfortable; it may even offend you. How dare I say "only"?
I don't mean to minimize the harm of those deaths. I don't mean to minimize the suffering of people who lost friends, colleagues, parents, siblings, children. The death of any human being is the permanent destruction of something irreplaceable, a spark of life that can never be restored; it is always a tragedy and there is never any way to repay it.
But I think people are actually doing the opposite—they are ignoring or minimizing millions of other deaths because those deaths didn't happen to be dramatic enough. A parent killed by a heart attack is just as lost as a parent who died in 9/11. A friend who died of brain cancer is just as gone as a friend who was killed in a terrorist attack. A child killed in a car accident is just as much a loss as a child killed by suicide bombers. If you really care about human suffering, I contend that you should care about all human suffering, not just the kind that makes the TV news.
Here is a list, from the CDC, of things that kill more Americans per month than terrorists have killed in the last three decades:
Heart disease: 50,900 per month
Cancer: 48,700 per month
Lung disease: 12,400 per month
Accidents: 10,800 per month
Stroke: 10,700 per month
Alzheimer's: 7,000 per month
Diabetes: 6,300 per month
Influenza: 4,700 per month
Kidney failure: 3,900 per month
Terrorism deaths since 1985: 3,455
Yes, that's right; influenza kills more Americans per month (on average; flu is seasonal, after all) than terrorism has killed in the last thirty years.
And for comparison, other violent deaths, not quite but almost as many per month as terrorism has killed in my entire life so far:
Suicide: 3,400 per month
Homicide: 1,300 per month
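To put those figures on a common scale, here is the same data reduced to deaths per month relative to terrorism (treating the 3,455 figure as covering roughly thirty years; a minimal sketch, not an epidemiological analysis):

# Per-month comparison of the causes of death listed above
terrorism_per_month = 3455 / (30 * 12)   # ~9.6 deaths per month since 1985
monthly = {
    "Heart disease": 50900, "Cancer": 48700, "Lung disease": 12400,
    "Accidents": 10800, "Stroke": 10700, "Alzheimer's": 7000,
    "Diabetes": 6300, "Influenza": 4700, "Kidney failure": 3900,
    "Suicide": 3400, "Homicide": 1300,
}
for cause, deaths in monthly.items():
    print(cause, round(deaths / terrorism_per_month), "times the monthly terrorism toll")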
Now, with those figures in mind, I want you to ask yourself the following question: Would you be willing to give up basic, fundamental civil liberties in order to avoid any of these things?
Would you want the government to be able to arrest you and imprison you without trial for eating too many cheeseburgers, so as to reduce the risk of heart disease and stroke?
Would you want the government to monitor your phone calls and Internet traffic to make sure you don't smoke, so as to avoid lung disease? Or to watch for signs of depression, to reduce the rate of suicide?
Would you want the government to be able to use targeted drone strikes, ordered directly by the President, pre-emptively against probable murderers (with a certain rate of collateral damage, of course), to reduce the rate of homicide?
I presume that the answer to all the above questions is "no". Then now I have to ask you: Why are you willing to give up those same civil liberties to prevent a risk that is three hundred times smaller?
And then of course there's the Iraq War, which killed 4,400 Americans and at least 100,000 civilians, and the Afghanistan War, which killed 3,400 allied soldiers and over 90,000 civilians.
In response to the horrific murder of 3,000 people, we sacrificed another 7,800 soldiers and killed another 190,000 innocent civilians. What exactly did that accomplish? What benefit did we get for such an enormous cost?
The people who sold us these deadly wars and draconian policies did so based on the threat that terrorism could somehow become vastly worse, involving the release of some unstoppable bioweapon or the detonation of a full-scale nuclear weapon, killing millions of people—but that has never happened, has never gotten close to happening, and would be thousands of times worse than the worst terrorist attacks that have ever actually happened.
If we're worried about millions of people dying, it is far more likely that there would be a repeat of the 1918 influenza pandemic, or an accidental detonation of a nuclear weapon, or a flashpoint event with Russia or China triggering World War III; it's probably more likely that there would be an asteroid impact large enough to kill a million people than there would be a terrorist attack large enough to do the same.
As it is, heart disease is already killing millions of people—about a million every two years—and we aren't so panicked about that as to give up civil liberties. Elsewhere in the world, malnutrition kills over 3 million children per year, essentially all of it due to extreme poverty, which we could eliminate by spending between a quarter ($150 billion) and a half ($300 billion) of our current military budget ($600 billion); but we haven't even done that even though it would require no loss of civil liberties at all.
Why is terrorism different? In short, the tribal paradigm.
There are in fact downsides to not being infinite identical psychopaths, and this is one of them. An infinite identical psychopath would simply maximize their own probability of survival; but finite diverse tribalists such as we underreact to some threats (such as heart disease) and overreact to others (such as terrorism). We'll do almost anything to stop the latter—and almost nothing to stop the former.
Terrorists are perceived as a threat not just to our individual survival like heart disease or stroke, but a threat to our tribe from another tribe. This triggers a deep, instinctual sense of panic and hatred that makes us willing to ignore principles we would otherwise uphold and commit acts of violence we would otherwise find unimaginable.
Indeed, it's precisely that instinct which motivates the terrorists in the first place. From their perspective, we are the other tribe that threatens their tribe, and they are therefore willing to stop at nothing until we are destroyed.
In a fundamental way, when we respond to terrorism in this way we do not defeat them—we become them.
If you ask people who support the PATRIOT ACT, it's very clear that they don't see themselves as imposing upon the civil liberties of Americans. Instead, they see themselves as protecting Americans (our tribe), and they think the impositions upon civil liberties will only harm those who don't count as Americans (other tribes). This is a pretty bizarre notion if you think about it carefully—if you don't need a warrant or probable cause to imprison people, then what stops you from imprisoning people who aren't terrorists?—but people don't think about it carefully. They act on emotion, on instinct.
The odds of terrorists actually destroying America by killing people are basically negligible. Even the most deadly terrorist attack in recorded history—9/11—killed fewer Americans than die every month from diabetes, or every week from heart disease. Even the most extreme attacks feared (which are extremely unlikely) wouldn't be any worse than World War II, which of course we won.
But the odds of terrorists destroying America by making us give up the rights and freedoms that define us as a nation? That's well underway.
The terrible, horrible, no-good very-bad budget bill
JDN 2457005 PST 11:52.
I would have preferred to write about something a bit cheerier (like the fact that by the time I write my next post I expect to be finished with my master's degree!), but this is obviously the big news in economic policy today. The new House budget bill was unveiled Tuesday, and then passed in the House on Thursday by a narrow vote. It has stalled in the Senate thanks in part to fierce—and entirely justified—opposition by Elizabeth Warren, and so today its vote has been delayed. Obama has actually urged his fellow Democrats to pass it, in order to avoid another government shutdown. Here's why Warren is right and Obama is wrong.
You know the saying "You can't negotiate with terrorists!"? Well, in practice that's not actually true—we negotiate with terrorists all the time; the FBI has special hostage negotiators for this purpose, because sometimes it really is the best option. But the saying has an underlying kernel of truth, which is that once someone is willing to hold hostages and commit murder, they have crossed a line, a Rubicon from which it is impossible to return; negotiations with them can never again be good-faith honest argumentation, but must always be a strategic action to minimize collateral damage. Everyone knows that if you had the chance you'd just as soon put bullets through all their heads—because everyone knows they'd do the same to you.
Well, right now, the Republicans are acting like terrorists. Emotionally a fair comparison would be with two-year-olds throwing tantrums, but two-year-olds do not control policy on which thousands of lives hang in the balance. This budget bill is designed—quite intentionally, I'm sure—in order to ensure that Democrats are left with only two options: Give up on every major policy issue and abandon all the principles they stand for, or fail to pass a budget and allow the government to shut down, canceling vital services and costing billions of dollars. They are holding the American people hostage.
But here is why you must not give in: They're going to shoot the hostages anyway. This so-called "compromise" would not only add $479 million in spending on fighter jets that don't work and the Pentagon hasn't even asked for, not only cut $93 million from WIC, a 3.5% budget cut adjusted for inflation—literally denying food to starving mothers and children—and dramatically increase the amount of money that can be given by individuals in campaign donations (because apparently the unlimited corporate money of Citizens United wasn't enough!), but would also remove two of the central provisions of Dodd-Frank financial regulation that are the only thing that stands between us and a full reprise of the Great Recession. And even if the Democrats in the Senate cave to the demands just as the spineless cowards in the House already did, there is nothing to stop Republicans from using the same scorched-earth tactics next year.
I wouldn't literally say we should put bullets through their heads, but we definitely need to get these Republicans out of office immediately at the next election—and that means that all the left-wing people who insist they don't vote "on principle" need to grow some spines of their own and vote. Vote Green if you want—the benefits of having a substantial Green coalition in Congress would be enormous, because the Greens favor three really good things in particular: Stricter regulation of carbon emissions, nationalization of the financial system, and a basic income. Or vote for some other obscure party that you like even better. But for the love of all that is good in the world, vote.
The two most obscure—and yet most important—measures in the bill are the elimination of the swaps pushout rule and the margin requirements on derivatives. Compared to these, the cuts in WIC are small potatoes (literally, they include a stupid provision about potatoes). They also really aren't that complicated, once you boil them down to their core principles. This is however something Wall Street desperately wants you to never, ever do, for otherwise their global crime syndicate will be exposed.
The swaps pushout rule says quite simply that if you're going to place bets on the failure of other companies—these are called credit default swaps, but they are really quite literally a bet that a given company will go bankrupt—you can't do so with deposits that are insured by the FDIC. This is the absolute bare minimum regulatory standard that any reasonable economist (or for that matter sane human being!) would demand. Honestly I think credit default swaps should be banned outright. If you want insurance, you should have to buy insurance—and yes, deal with the regulations involved in buying insurance, because those regulations are there for a reason. There's a reason you can't buy fire insurance on other people's houses, and that exact same reason applies a thousandfold for why you shouldn't be able to buy credit default swaps on other people's companies. Most people are not psychopaths who would burn down their neighbor's house for the insurance money—but even when their executives aren't psychopaths (as many are), most companies are specifically structured so as to behave as if they were psychopaths, as if no interests in the world mattered but their own profit.
But the swaps pushout rule does not by any means ban credit default swaps. Honestly, it doesn't even really regulate them in any real sense. All it does is require that these bets have to be made with the banks' own money and not with everyone else's. You see, bank deposits—the regular kind, "commercial banking", where you have your checking and savings accounts—are secured by government funds in the event a bank should fail. This makes sense, at least insofar as it makes sense to have private banks in the first place (if we're going to insure with government funds, why not just use government funds?). But if you allow banks to place whatever bets they feel like using that money, they have basically no downside; heads they win, tails we lose. That's why the swaps pushout rule is absolutely indispensable; without it, you are allowing banks to gamble with other people's money.
What about margin requirements? This one is even worse. Margin requirements are literally the only thing that keeps banks from printing unlimited money. If there was one single cause of the Great Recession, it was the fact that there were no margin requirements on over-the-counter derivatives. Because there were no margin requirements, there was no limit to how much money banks could print, and so print they did; the result was a still mind-blowing quadrillion dollars in nominal value of outstanding derivatives. Not million, not billion, not even trillion; quadrillion. $1e15. $1,000,000,000,000,000. That's how much money they printed. The total world money supply is about $70 trillion, which is 1/14 of that. (If you read that blog post, he makes a rather telling statement: "They demonstrate quite clearly that those who have been lending the money that we owe can't possibly have had the money they lent." No, of course they didn't! They created it by lending it. That is what our system allows them to do.)
And yes, at its core, it was printing money. A lot of economists will tell you otherwise, about how that's not really what's happening, because it's only "nominal" value, and nobody ever expects to cash them in—yeah, but what if they do? (These are largely the same people who will tell you that quantitative easing isn't printing money, because, uh… er… squirrel!) A tiny fraction of these derivatives were cashed in in 2007, and I think you know what happened next. They printed this money and now they are holding onto it; but woe betide us all if they ever decide to spend it. Honestly we should invalidate all of these derivatives and force them to start over with strict margin requirements, but short of that we must at least, again at the bare minimum, have margin requirements.
Why are margin requirements so important? There's actually a very simple equation that explains it. If the margin requirement is m, meaning that you must retain a fraction m (between 0 and 1) of each deposit as reserves rather than lending it out, the total money supply that can be created from the current amount of money M is just M/m. So if margin requirements were 100%—full-reserve banking—then the total money supply is M, and therefore fully under the control of the central bank. This is how it should be, in my opinion. But usually m is set around 10%, so the total money supply is 10M, meaning that 90% of the money in the system was created by banks. But if you ever let that margin requirement go to zero, you end up dividing by zero—and the total amount of money that can be created is infinite.
To see how this works, suppose we start with $1000 and put it in bank A. Bank A then creates a loan; how big they can make the loan depends on the margin requirement. Let's say it's 10%. They can make a loan of $900, because they must keep $100 (10% of $1000) in reserve. So they do that, and then it gets placed in bank B. Then bank B can make a loan of $810, keeping $90. The $810 gets deposited in bank C, which can make a loan of $729, and so on. The total amount of money in the system is the sum of all these: $1000 in bank A (remember, that deposit doesn't disappear when it's loaned out!), plus the $900 in bank B, plus $810 in bank C, plus $729 in bank D. After 4 steps we are at $3,439. As we go through more and more steps, the money supply gets larger at an exponentially decaying rate and we converge toward the maximum at $10,000.
The original amount is M, and then we add M(1-m), M(1-m)^2, M(1-m)^3, and so on. That produces the following sum up to n terms (below is LaTeX, which I can't render for you without a plugin, which requires me to pay for a WordPress subscription I cannot presently afford; you can copy-paste and render it yourself here):
\sum_{k=0}^{n} M (1-m)^k = M \frac{1 - (1-m)^{n+1}}{m}
And then as you let the number of terms grow arbitrarily large, it converges toward a limit at infinity:
\sum_{k=0}^{\infty} M (1-m)^k = \frac{M}{m}
To be fair, we never actually go through infinitely many steps, so even with a margin requirement of zero we don't literally end up with infinite money. Instead, we just end up with n M, the number of steps times the initial money supply. Start with $1000 and go through 4 steps: $4000. Go through 10 steps: $10,000. Go through 100 steps: $100,000. It just keeps getting bigger and bigger, until that money has nowhere to go and the whole house of cards falls down.
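A minimal simulation of the process just described, using the same $1,000 initial deposit and 10% requirement (the function name here is mine, for illustration only):

# Deposit-multiplier simulation matching the worked example above
def money_supply(M, m, steps):
    total, deposit = 0.0, M
    for _ in range(steps):
        total += deposit        # each deposit stays on the books
        deposit *= (1 - m)      # the reserve is kept; the rest is lent out and redeposited
    return total

print(money_supply(1000, 0.10, 4))      # 3439.0, as in the example
print(money_supply(1000, 0.10, 200))    # ~10000, approaching M/m
print(money_supply(1000, 0.00, 100))    # 100000.0: with m = 0 it just keeps growing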
Honestly, I'm not even sure why Wall Street banks would want to get rid of margin requirements. It's basically putting your entire economy on the counterfeiting standard. Fiat money is often accused of this, but the government has both (a) the legitimate authority empowered by the electorate and (b) incentives to maintain macroeconomic stability, neither of which private banks have. There is no reason other than altruism (and we all know how much altruism Citibank and HSBC have—it is approximately equal to the margin requirement they are trying to get passed—and yes, they wrote the bill) that would prevent them from simply printing as much money as they possibly can, thus maximizing their profits; and they can even excuse the behavior by saying that everyone else is doing it, so it's not like they could prevent the collapse all by themselves. But by lobbying for a regulation to specifically allow this, they no longer have that excuse; no, everyone won't be doing it, not unless you pass this law to let them. Despite the global economic collapse that was just caused by this sort of behavior only seven years ago, they now want to return to doing it. At this point I'm beginning to wonder if calling them an international crime syndicate is actually unfair to international crime syndicates. These guys are so totally evil it actually goes beyond the bounds of rational behavior; they're turning into cartoon supervillains. I would honestly not be that surprised if there were a video of one of these CEOs caught on camera cackling maniacally, "Muahahahaha! The world shall burn!" (Then again, I was pleasantly surprised to see the CEO of Goldman Sachs talking about the harms of income inequality, though it's not clear he appreciated his own contribution to that inequality.)
And that is why Democrats must not give in. The Senate should vote it down. Failing that, Obama should veto. I wish he still had the line-item veto so he could just remove the egregious riders without allowing a government shutdown, but no, the Senate blocked it. And honestly their reasoning makes sense; there is supposed to be a balance of power between Congress and the President. I just wish we had a Congress that would use its power responsibly, instead of holding the American people hostage to the villainous whims of Wall Street banks.
Why immigration is good
2014-11-15 2015-04-04 pnrj public policy brain drain, Congress, Fox News, globalization, growth, immigration, inequality, jobs, labor, Mexico, nationalism, New York Times, Obama, racism, wages
The big topic in policy news today is immigration. After years of getting nothing done on the issue, Obama has finally decided to bypass Congress and reform our immigration system by executive order. Republicans are threatening to impeach him if he does. His decision to go forward without Congressional approval may have something to do with the fact that Republicans just took control of both houses of Congress. Naturally, Fox News is predicting economic disaster due to the expansion of the welfare state. (When is that not true?) A more legitimate critique comes from the New York Times, who point out how this sudden shift demonstrates a number of serious problems in our political system and how it is financed.
So let's talk about immigration, and why it is almost always a good thing for a society and its economy. There are a couple of downsides, but they are far outweighed by the upsides.
I'll start with the obvious: Immigration is good for the immigrants. That's why they're doing it. Uprooting yourself from your home and moving thousands of miles isn't easy under the best circumstances (like when I moved from Michigan to California for grad school); now imagine doing it when you are in crushing poverty and you have to learn a whole new language and culture once you arrive. People are only willing to do this when the stakes are high. The most extreme example is of course the child refugees from Latin America, who are finally getting some of the asylum they so greatly deserve, but even the "ordinary" immigrants coming from Mexico are leaving a society racked with poverty, rife with corruption, and bathed in violence—most recently erupting in riots that have set fire to government buildings. These people are desperate; they are crossing our border despite the fences and guns because they feel they have no other choice. As a fundamental question of human rights, it is not clear to me that we even have the right to turn these people away. Forget the effect on our economy; forget the rate of assimilation; what right do we have to say to these people that their suffering should go on because they were born on the wrong side of an arbitrary line?
There are wealthier immigrants—many of them here, in fact, for grad school—whose circumstances are not so desperate; but hardly anyone even considers turning them away, because we want their money and their skills in our society. Americans who fear brain drain have it all backwards; the United States is where the brains drain to. This trend may be reversing more recently as our right-wing economic policy pulls funding away from education and science, but it would likely only reach the point where we export as many intelligent people as we import; we're not talking about creating a deficit here, only reducing our world-dominating surplus. And anyway I'm not so concerned about those people; yes, the world needs them, but they don't need much help from the world.
My concern is for our tired, our poor, our huddled masses yearning to breathe free. These are the people we are thinking about turning away—and these are the people who most desperately need us to take them in. That alone should be enough reason to open our borders, but apparently it isn't for most people, so let's talk about some of the ways that America stands to gain from such a decision.
First of all, immigration increases economic growth. Immigrants don't just take in money; they also spend it back out, which further increases output and creates jobs. Immigrants are more likely than native citizens to be entrepreneurs, perhaps because taking the chance to start a business isn't so scary after you've already taken the chance to travel thousands of miles to a new country. Our farming system is highly dependent upon cheap immigrant labor (that's a little disturbing, but as far as the US economy is concerned, the upshot is that we get cheap food by hiring immigrants on farms). On average, immigrants are younger than our current population, so they are more likely to work and less likely to retire, which has helped save the US from the economic malaise that afflicts nations like Japan where the aging population is straining the retirement system. More open immigration wouldn't just increase the number of immigrants coming here to do these things; it would also make the immigrants who are already here more productive by opening up opportunities for education and entrepreneurship. Immigration could speed the recovery from the Second Depression and maybe even revitalize our dying Rust Belt cities.
Now, what about the downsides? By increasing the supply of labor faster than they increase the demand for labor, immigrants could reduce wages. There is some evidence that immigrants reduce wages, particularly for low-skill workers. This effect is rather small, however; in many studies it's not even statistically significant (PDF link). A 10% increase in low-skill immigrants leads to about a 3% decrease in low-skill wages (PDF link). The total economy grows, but wages decrease at the bottom, so there is a net redistribution of wealth upward.
Immigration is one of the ways that globalization increases within-nation inequality even as it decreases between-nation inequality; you move the poor people to rich countries, and they become less poor than they were, but still poorer than most of the people in those rich countries, which increases the inequality there. On average the world becomes better off, but it can seem bad for the rich countries, especially the people in rich countries who were already relatively poor. Because they distribute wealth by birthright, national borders actually create something analogous to the privilege of feudal lords, albeit to a much larger segment of the population. (Much larger: Here's a right-wing site trying to argue that the median American is in the top 1% of income by world standards; neat trick, because Americans comprise 4% of the world population—so our top half makes up 2% of the world's population by themselves. Yet somehow apparently that 2% of the population is the top 1%? Also, the US isn't the only rich country; have you heard of, say, Europe?)
There's also a lot of variation in the literature as to the size—or even direction—of the effect of immigration on low-skill wages. But since the theory makes sense and the preponderance of the evidence is toward a moderate reduction in wages for low-skill native workers, let's assume that this is indeed the case.
First of all I have to go back to my original point: These immigrants are getting higher wages than they would have in the countries they left. (That part is usually even true of the high-skill immigrants.) So if you're worried about low wages for low-skill workers, why are you only worried about that for workers who were born on this side of the fence? There's something deeply nationalistic—if not outright racist—inherent in the complaint that Americans will have lower pay or lose their jobs when Mexicans come here. Don't Mexicans also deserve jobs and higher pay?
Aside from that, do we really want to preserve higher wages at the cost of economic efficiency? Are high wages an end in themselves? It seems to me that what we're really concerned about is welfare—we want the people of our society to live better lives. High wages are one way to do that, but not the only way; a basic income could reverse that upward redistribution of wealth, taking the economic benefits of the immigration that normally accrue toward the top and giving them to the bottom. As I already talked about in an earlier post, a basic income is a lot more efficient than trying to mess around with wages. Markets are very powerful; we shouldn't always accept what they do, but we should also be careful when we interfere with them. If the market is trying to drive certain wages down, that means that there is more desire to do that kind of work than there is work of that kind that needs to be done. The wage change creates a market incentive for people to switch to more productive kinds of work. We should also be working to create opportunities to make that switch—funding free education, for instance—because an incentive without an opportunity is a bit like pointing a gun at someone's head and ordering them to give birth to a unicorn.
So on the one hand we have the increase in local inequality and the potential reduction in low-skill wages; those are basically the only downsides. On the other hand, we have increases in short-term and long-term economic growth, lower global inequality, more spending, more jobs, a younger population with less strain on the retirement system, more entrepreneurship, and above all, the enormous lifelong benefits to the immigrants themselves that motivated them to move in the first place. It seems pretty obvious to me: we can enact policies to reduce the downsides, but above all we must open our borders.
doi: 10.3934/dcdsb.2021191
Simplification of weakly nonlinear systems and analysis of cardiac activity using them
Irada Dzhalladova 1 and Miroslava Růžičková 2,*
V. Hetman Kyiv National Economic University, Department of Computer Mathematics and Information Security, Kyiv 03068, Peremogy 54/1, Ukraine
University of Białystok, Faculty of Mathematics, K. Ciołkowskiego 1M, 15-245 Białystok, Poland
* Corresponding author: Miroslava Růžičková
Received November 2020 Revised June 2021 Early access July 2021
The paper deals with the transformation of a weakly nonlinear system of differential equations in a special form into a simplified form and its relation to the normal form and averaging. An original method of simplification is proposed, that is, a way to determine the coefficients of a given nonlinear system in order to simplify it. We call this method the degree equalization method; it does not require integration and is simpler and more efficient than the classical Krylov-Bogolyubov method of normalization. The method is illustrated with several examples and applied to the analysis of cardiac activity modelled using the van der Pol equation.
Keywords: Averaging, normal form, weakly nonlinear system, qualitative properties, essential and non-essential coefficients, degree equalization, van der Pol equation, cardiac activity.
Mathematics Subject Classification: Primary: 34C29, 34C20, 34C60; Secondary: 34B30, 34C15, 34A34.
Citation: Irada Dzhalladova, Miroslava Růžičková. Simplification of weakly nonlinear systems and analysis of cardiac activity using them. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021191
H. M. Ahmed and Q. Zhu, The averaging principle of Hilfer fractional stochastic delay differential equations with Poisson jumps, Appl. Math. Lett., 112 (2021), 106755, 7 pp. doi: 10.1016/j.aml.2020.106755. Google Scholar
M. Bendahmane, F. Mroue, M. Saad and R. Talhouk, Mathematical analysis of cardiac electromechanics with physiological ionic model, Discrete Continuous Dynam. Systems - B, 24 (2019), 4863-4897. doi: 10.3934/dcdsb.2019035. Google Scholar
G. D. Birkhoff, Dynamical Systems, American Mathematical Society, Providence, R.I., IX, 1966. Google Scholar
N. N. Bogolyubov, On Certain Statistical Methods in Mathematical Physics, (in Russian), Kiev, 1935. Google Scholar
N. N. Bogolyubov and Y. A. Mitropolskiy, Asymptotic Methods in the Theory of Nonlinear Oscillations, (Translated from Russian), Gordon and Breach, New York, 1961. Google Scholar
A. D. Bryuno, The normal form of differential equations, Dokl. Akad. Nauk SSSR, 157 (1964), 1276-1279. Google Scholar
A. D. Bryuno, A Local Method of Nonlinear Analysis of Differential Equations, Nauka, Moscow, 1979. Google Scholar
A. D. Bryuno, Power Geometry in Algebraic and Differential Equations, Fizmatlit, Moscow, 1998. Google Scholar
G. Chen and J. Della Dora, Further reductions of normal forms for dynamical systems, J. Differential Equations, 166 (2000), 79-106. doi: 10.1006/jdeq.2000.3783. Google Scholar
A. Deprit, Canonical transformations depending on a small parameter, Celest. Mech., 1 (1969), 12-30. doi: 10.1007/BF01230629. Google Scholar
A. Deprit, J. Henrard, J. F. Price and A. Rom, Birkhoff's normalization, Celest. Mech., 1 (1969), 222-251. doi: 10.1007/BF01228842. Google Scholar
S. P. Diliberto, New results on periodic surfaces and the averaging principle, Proc. U.S.-Japan Seminar on Differential and Functional Equations, Minneapolis, Minn., Benjamin, New York, (1967), 49–87. Google Scholar
P. Fatou, Sur le mouvement d'un système soumis à des forces à courte période, Bulletin de la Société Mathématique de France, 56 (1928), 98-139. doi: 10.24033/bsmf.1131. Google Scholar
J. K. Hale, Oscillations in Non-Linear Systems, McGraw-Hill, New York, 1963. Google Scholar
M. Han, Y. Xu and B. Pei, Mixed stochastic differential equations: Averaging principle result, Applied Mathematics Letters, 112 (2021), 106705, 7 pp. doi: 10.1016/j.aml.2020.106705. Google Scholar
G. Hori, Theory of general perturbations with unspecified canonical variables, Publ. Astron. Soc. Japan, 18 (1966), 287-296. Google Scholar
M. Kesmia, S. Boughaba and S. Jacquir, New approach of controlling cardiac alternans, Discrete Continuous Dynam. Systems - B, 23 (2018), 975-989. doi: 10.3934/dcdsb.2018051. Google Scholar
N. M. Krylov and N. N. Bogolyubov, Introduction to Non-Linear Mechanics, Princeton Univ. Press, Princeton, 1947. (Translated from Russian, Izd-vo AN SSSR, Kiev, 1937) Google Scholar
P. Kügler, Modelling and simulation for preclinical cardiac safety assessment of drugs with Human iPSC-derived cardiomyocytes, Jahresber Dtsch Math-Ver., 122 (2020), 209-257. doi: 10.1365/s13291-020-00218-w. Google Scholar
J. L. Lagrange, Mécanique Céleste $(2$ vols.$)$, {Edition Albert Blanchard}, Paris, 1788. Google Scholar
A. K. Lopatin, Averaging, Normal forms and Symmetry in Non-Linear Mechanics, Preprint Inst. Mat. Nat. Acad. Ukrainy, Kiev, 1994, (in Russian) Google Scholar
D. Luo, Q. Zhu and Z. Luo, An averaging principle for stochastic fractional differential equations with time-delays, Applied Mathematics Letters, 105 (2020), 106290, 8 pp. doi: 10.1016/j.aml.2020.106290. Google Scholar
L. I. Mandelshtam and N. D. Papaleksi, On justification of a method of approximate solving differential equations, J. Exp. Theor. Physik, 4 (1934), 117–121. (in Russian). Google Scholar
W. Mao, L. Hu, S. You and X. Mao, The averaging method for multivalued SDEs with jumps and non-Lipschitz coefficients, Discrete Continuous Dynam. Systems - B, 24 (2019), 4937-4954. doi: 10.3934/dcdsb.2019039. Google Scholar
J. A. Mitropolskiy and A. M. Samoilenko, To the problem on asymptotic decompositions of non-linear mechanics, Ukr. Mat. Zhurn., 31 (1979), 42–53. (in Russian). Google Scholar
Y. A. Mitropolskiy, Basic lines of research in the theory of nonlinear oscillations and the progress achieved, Proceedings of the International Symposium on Non-linear Oscillations, Kiev, I (1963), 15–22. Google Scholar
Y. A. Mitropolskiy and A. K. Lopatin, Group Theory, Approach in Asymptotic Methods of Non-Linear Mechanics, Naukova Dumka, Kiev, 1988. (in Russian). Google Scholar
Y. A. Mitropolskiy and N. Van Dao, Averaging method, In: Applied Asymptotic Methods in Nonlinear Oscillations, Solid Mechanics and Its Applications, Vol 55, Springer, Dordrecht, (1997), 282–326. doi: 10.1007/978-94-015-8847-8. Google Scholar
A. M. Molchanov, Separation of motions and asymptotic methods in the theory of linear oscillations, DAN SSSR, 5 (1961), 1030–1033. (in Russian). Google Scholar
A. Poincaré, New Methods of Celestial Mechanics, Gauthiers-Villars, Paris, 1892. (Translated to Russian, Nauka, Moscow, 1971.) Google Scholar
M. I. Rabinovich and D. I. Trubetskov, Oscillations and Waves in Linear and Nonlinear Systems, Kluwer Academic Publishers, Dordrecht, 1989. (Translated from the Russian by R. N. Hainsworth, "Vvedenie v teoriyu kolebanij i voln, " Nauka, Moscow, 1984.) doi: 10.1007/978-94-009-1033-1. Google Scholar
J. A. Sanders and F. Verhulst, Averaging Methods in Nonlinear Dynamical Systems, Springer-Verlag, New York, 1985. doi: 10.1007/978-1-4757-4575-7. Google Scholar
T. G. Strizhak, Averaging Method in Problems of Mechanics, Vishcha Shkola, Kiev-Donetsk, 1982. (in Russian). Google Scholar
T. G. Strizhak, An Asymptotic Normalization Method, Vishcha Shkola, Glavnoe Izd., Kiev, 1984. Google Scholar
B. van der Pol, A theory of the amplitude of free and forced triode vibrations, Radio Rev., 1 (1920), 701–710. Google Scholar
B. van der Pol, On "Relaxation Oscillations", Philos. Mag., 2 (1926), 978-992. doi: 10.1080/14786442608564127. Google Scholar
B. van der Pol, The nonlinear theory of electric oscillations, Proceedings of the Institute of Radio Engineers, 22 (1934), 1051-1086. doi: 10.1109/JRPROC.1934.226781. Google Scholar
Figure 1. The amplitude of any solution to van der Pol equation increases if its initial value is from the interval $ (0, 2) $, and decreases if the initial value is greater than two. In both cases it converges to the value $ 2 $
Figure 2. The limit cycle $ x^2(t) +\frac{1}{\omega} \dot x^2(t) = a^2 $ and some trajectories of the van der Pol equation when $ a_0<2 $
Figure 3. If the initial amplitude value is close to zero, the amplitude exponentially increases to $ 2 $ with increasing $ t $
Figure 4. The area of the viability of the heart. The intensity of energy replenishment depends on $ \mu $
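As a rough numerical companion to the behaviour summarised in Figures 1-3, here is a minimal sketch (not code from the paper; it assumes the standard dimensionless form of the van der Pol equation, $\ddot x - \mu(1-x^2)\dot x + x = 0$, with a small parameter $\mu$) that integrates the equation and watches the amplitude settle near 2 from either side:

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu):
    """Standard van der Pol oscillator written as a first-order system."""
    x, v = y
    return [v, mu * (1 - x**2) * v - x]

mu = 0.1                      # weak nonlinearity, as in the averaging setting
t_eval = np.linspace(0, 200, 20000)

for x0 in (0.1, 1.0, 4.0):    # initial amplitudes below and above 2
    sol = solve_ivp(van_der_pol, (0, 200), [x0, 0.0], args=(mu,),
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    late = sol.y[0][t_eval > 150]   # discard the transient
    print(f"x(0) = {x0}: late-time amplitude ~ {late.max():.3f}")
# All three runs settle near amplitude 2, the limit cycle described in Figures 1-3.
```

For small $\mu$ this is exactly what the averaged amplitude equation predicts: growth towards 2 when the initial amplitude lies in $(0, 2)$ and decay towards 2 when it starts above 2.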
Stefan Siegmund. Normal form of Duffing-van der Pol oscillator under nonautonomous parametric perturbations. Conference Publications, 2001, 2001 (Special) : 357-361. doi: 10.3934/proc.2001.2001.357
Zhaosheng Feng, Guangyue Gao, Jing Cui. Duffing--van der Pol--type oscillator system and its first integrals. Communications on Pure & Applied Analysis, 2011, 10 (5) : 1377-1391. doi: 10.3934/cpaa.2011.10.1377
Zhaosheng Feng. Duffing-van der Pol-type oscillator systems. Discrete & Continuous Dynamical Systems - S, 2014, 7 (6) : 1231-1257. doi: 10.3934/dcdss.2014.7.1231
Dario Bambusi, A. Carati, A. Ponno. The nonlinear Schrödinger equation as a resonant normal form. Discrete & Continuous Dynamical Systems - B, 2002, 2 (1) : 109-128. doi: 10.3934/dcdsb.2002.2.109
Boris Anicet Guimfack, Conrad Bertrand Tabi, Alidou Mohamadou, Timoléon Crépin Kofané. Stochastic dynamics of the FitzHugh-Nagumo neuron model through a modified Van der Pol equation with fractional-order term and Gaussian white noise excitation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (7) : 2229-2243. doi: 10.3934/dcdss.2020397
Xiaoqin P. Wu, Liancheng Wang. Hopf bifurcation of a class of two coupled relaxation oscillators of the van der Pol type with delay. Discrete & Continuous Dynamical Systems - B, 2010, 13 (2) : 503-516. doi: 10.3934/dcdsb.2010.13.503
Zhaoxia Wang, Hebai Chen. A nonsmooth van der Pol-Duffing oscillator (I): The sum of indices of equilibria is $ -1 $. Discrete & Continuous Dynamical Systems - B, 2022, 27 (3) : 1421-1446. doi: 10.3934/dcdsb.2021096
Zhaoxia Wang, Hebai Chen. A nonsmooth van der Pol-Duffing oscillator (II): The sum of indices of equilibria is $ 1 $. Discrete & Continuous Dynamical Systems - B, 2022, 27 (3) : 1549-1589. doi: 10.3934/dcdsb.2021101
Robert T. Glassey, Walter A. Strauss. Perturbation of essential spectra of evolution operators and the Vlasov-Poisson-Boltzmann system. Discrete & Continuous Dynamical Systems, 1999, 5 (3) : 457-472. doi: 10.3934/dcds.1999.5.457
Arnaud Münch. A variational approach to approximate controls for system with essential spectrum: Application to membranal arch. Evolution Equations & Control Theory, 2013, 2 (1) : 119-151. doi: 10.3934/eect.2013.2.119
Yi Wang, Chengmin Zheng. Normal and slow growth states of microbial populations in essential resource-based chemostat. Discrete & Continuous Dynamical Systems - B, 2009, 12 (1) : 227-250. doi: 10.3934/dcdsb.2009.12.227
Virginie De Witte, Willy Govaerts. Numerical computation of normal form coefficients of bifurcations of odes in MATLAB. Conference Publications, 2011, 2011 (Special) : 362-372. doi: 10.3934/proc.2011.2011.362
Shu-Yi Zhang. Existence of multidimensional non-isothermal phase transitions in a steady van der Waals flow. Discrete & Continuous Dynamical Systems, 2013, 33 (5) : 2221-2239. doi: 10.3934/dcds.2013.33.2221
Wenxiong Chen, Congming Li, Biao Ou. Qualitative properties of solutions for an integral equation. Discrete & Continuous Dynamical Systems, 2005, 12 (2) : 347-354. doi: 10.3934/dcds.2005.12.347
Jianyu Chen. On essential coexistence of zero and nonzero Lyapunov exponents. Discrete & Continuous Dynamical Systems, 2012, 32 (12) : 4149-4170. doi: 10.3934/dcds.2012.32.4149
O. A. Veliev. Essential spectral singularities and the spectral expansion for the Hill operator. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2227-2251. doi: 10.3934/cpaa.2017110
Adriana Buică, Jaume Giné, Maite Grau. Essential perturbations of polynomial vector fields with a period annulus. Communications on Pure & Applied Analysis, 2015, 14 (3) : 1073-1095. doi: 10.3934/cpaa.2015.14.1073
Alfonso Castro, Jorge Cossio, Carlos Vélez. Existence and qualitative properties of solutions for nonlinear Dirichlet problems. Discrete & Continuous Dynamical Systems, 2013, 33 (1) : 123-140. doi: 10.3934/dcds.2013.33.123
John Burke, Edgar Knobloch. Normal form for spatial dynamics in the Swift-Hohenberg equation. Conference Publications, 2007, 2007 (Special) : 170-180. doi: 10.3934/proc.2007.2007.170
Jorge A. Esquivel-Avila. Qualitative analysis of a nonlinear wave equation. Discrete & Continuous Dynamical Systems, 2004, 10 (3) : 787-804. doi: 10.3934/dcds.2004.10.787
Discrete and Continuous Dynamical Systems - S
2021, Volume 14, Issue 12: 4321-4335. Doi: 10.3934/dcdss.2021108
Blow-up phenomena for the sixth-order Boussinesq equation with fourth-order dispersion term and nonlinear source
Jinxing Liu 1,
Xiongrui Wang 1,
Jun Zhou 1,2,* and
Huan Zhang 2
Department of Mathematics, Yibin University, Yibin, Sichuan 644000, China
School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
* Corresponding author: Jun Zhou
Received: June 30, 2021
Revised: August 31, 2021
Early access: October 2021
This paper deals with the sixth-order Boussinesq equation with a fourth-order dispersion term and a nonlinear source. By using some ordinary differential inequalities, conditions for finite-time blow-up of solutions are given under suitable assumptions on the initial values. Moreover, upper and lower bounds on the blow-up time are also investigated.
Keywords: Sixth-order Boussinesq equation, blow-up, blow-up time.
Mathematics Subject Classification: Primary: 35A01; Secondary: 35B40, 35B44, 35Q35.
[1] J. L. Bona and R. L. Sachs, Global existence of smooth solutions and stability of solitary waves for a generalized Boussinesq equation, Commun. Math. Phys., 118 (1988), 15-29. doi: 10.1007/BF01218475.
[2] J. Boussinesq, Théorie des ondes et des remous qui se propagent le long d'un canal rectangulaire horizontal, en communiquant au liquide contenu dans ce canal des vitesses sensiblement pareilles de la surface au fond, J. Math. Pures Appl., 17 (1872), 55–108. http://dialnet.unirioja.es/descarga/articulo/4887986.pdf.
[3] H. Chen and H. Xu, Global existence and blow-up of solutions for infinitely degenerate semilinear pseudo-parabolic equations with logarithmic nonlinearity, Discrete Contin. Dyn. Syst., 39 (2019), 1185-1203. doi: 10.3934/dcds.2019051.
[4] C. I. Christov, G. A. Maugin and A. V. Porubov, On Boussinesq's paradigm in nonlinear wave propagation, C. R. Mécanique, 335 (2007), 521-535. doi: 10.1016/j.crme.2007.08.006.
[5] C. I. Christov, G. A. Maugin and M. G. Velarde, Well-posed boussinesq paradigm with purely spatial higher-order derivatives, Phys. Rev. E Statistical Physics Plasmas Fluids and Related Interdisciplinary Topics, 54 (1996), 3621-3638. doi: 10.1103/PhysRevE.54.3621.
[6] P. A. Clarkson, R. J. Leveque and R. Saxton, Solitary-wave interactions in elastic rods, Stud. Appl. Math., 75 (1986), 95-121. doi: 10.1002/sapm198675295.
[7] P. Daripa, Higher-order Boussinesq equations for two-way propagation of shallow water waves, Eur. J. Mech. B Fluids, 25 (2006), 1008-1021. doi: 10.1016/j.euromechflu.2006.02.003.
[8] P. Daripa and W. Hua, A numerical study of an ill-posed Boussinesq equation arising in water waves and nonlinear lattices: Filtering and regularization techniques, Appl. Math. Comput., 101 (1999), 159-207. doi: 10.1016/S0096-3003(98)10070-X.
[9] S. H. Deng, Generalized multi-hump wave solutions of KDV-KDV system of Boussinesq equations, Discrete Contin. Dyn. Syst., 39 (2019), 3671-3716. doi: 10.3934/dcds.2019150.
[10] A. Dé Godefroy, Existence, decay and blow-up for solutions to the sixth-order generalized Boussinesq equation, Discrete Contin. Dyn. Syst., 35 (2015), 117-137. doi: 10.3934/dcds.2015.35.117.
[11] A. Esfahani and L. G. Farah, Local well-posedness for the sixth-order boussinesq equation, J. Math. Anal. Appl., 385 (2012), 230-242. doi: 10.1016/j.jmaa.2011.06.038.
[12] J. A. Esquivel-Avila, Blow-up in damped abstract nonlinear equations, Electron. Res. Arch., 28 (2020), 347-267. doi: 10.3934/era.2020020.
[13] C. Guo and S. Fang, Global existence and pointwise estimates of solutions for the generalized sixth-order Boussinesq equation, Commun. Math. Sci., 15 (2017), 1457-1487. doi: 10.4310/CMS.2017.v15.n5.a11.
[14] V. Komornik, Exact Controllability and Stabilization, RAM: Research in Applied Mathematics. Masson, Paris; John Wiley & Sons, Ltd., Chichester, 1994. The multiplier method.
[15] H. A. Levine, Instability and nonexistence of global solutions to nonlinear wave equations of the form $Pu_{tt} = -Au + F(u)$, Trans. Amer. Math. Soc., 192 (1974), 1-21. doi: 10.1090/S0002-9947-1974-0344697-2.
[16] H. A. Levine, Some additional remarks on the nonexistence of global solutions to nonlinear wave equations, SIAM J. Math. Anal., 5 (1974), 138-146. doi: 10.1137/0505015.
[17] M.-R. Li and L.-Y. Tsai, Existence and nonexistence of global solutions of some system of semilinear wave equations, Nonlinear Anal., 54 (2003), 1397-1415. doi: 10.1016/S0362-546X(03)00192-5.
[18] W. Lian, J. Wang and R. Xu, Global existence and blow up of solutions for pseudo-parabolic equation with singular potential, J. Differential Equations, 269 (2020), 4914-4959. doi: 10.1016/j.jde.2020.03.047.
[19] W. Lian and R. Xu, Global well-posedness of nonlinear wave equation with weak and strong damping terms and logarithmic source term, Adv. Nonlinear Anal., 9 (2020), 613-632. doi: 10.1515/anona-2020-0016.
[20] M. Liao, Q. Liu and H. Ye, Global existence and blow-up of weak solutions for a class of fractional $p$-Laplacian evolution equations, Adv. Nonlinear Anal., 9 (2020), 1569-1591. doi: 10.1515/anona-2020-0066.
[21] Q. Lin, Y. H. Wu and R. Loxton, On the Cauchy problem for a generalized Boussinesq equation, J. Math. Anal. Appl., 353 (2009), 186-195. doi: 10.1016/j.jmaa.2008.12.002.
[22] F. Linares, Global existence of small solutions for a generalized Boussinesq equation, J. Differential Equations, 106 (1993), 257-293. doi: 10.1006/jdeq.1993.1108.
[23] G. Liu, The existence, general decay and blow-up for a plate equation with nonlinear damping and a logarithmic source term, Electron. Res. Arch., 28 (2020), 263-289. doi: 10.3934/era.2020016.
[24] X. Liu and J. Zhou, Initial-boundary value problem for a fourth-order plate equation with Hardy-Hénon potential and polynomial nonlinearity, Electron. Res. Arch., 28 (2020), 599-625. doi: 10.3934/era.2020032.
[25] Y. Liu, Instability and blow-up of solutions to a generalized Boussinesq equation, SIAM J. Math. Anal., 26 (1995), 1527-1546. doi: 10.1137/S0036141093258094.
[26] Y. Liu and R. Xu, Global existence and blow up of solutions for cauchy problem of generalized Boussinesq equation, Physica D, 237 (2008), 721-731. doi: 10.1016/j.physd.2007.09.028.
[27] V. G. Makhan'kov, Dynamics of classical solitons (in non-integrable systems), Phys. Reports, 35 (1978), 1-128. doi: 10.1016/0370-1573(78)90074-1.
[28] G. A. Maugin, Nonlinear Waves in Elastic Crystals, Oxford Mathematical Monographs. Oxford University Press, Oxford, 1999.
[29] L. E. Payne and D. H. Sattinger, Saddle points and instability of nonlinear hyperbolic equations, Israel J. Math., 22 (1975), 273-303. doi: 10.1007/BF02761595.
[30] X. Su and S. Wang, The initial-boundary value problem for the generalized double dispersion equation, Z. Angew. Math. Phys., 68 (2017), Paper No. 53, 21 pp. doi: 10.1007/s00033-017-0798-4.
[31] R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, volume 68 of Applied Mathematical Sciences, Springer-Verlag, New York, second edition, 1997.
[32] S. Wang and G. Chen, The Cauchy problem for the generalized IMBq equation in $W^{s, p}(\mathbb{R}^n)$, J. Math. Anal. Appl., 266 (2002), 38-54. doi: 10.1006/jmaa.2001.7670.
[33] X. Wang and R. Xu, Global existence and finite time blowup for a nonlocal semilinear pseudo-parabolic equation, Adv. Nonlinear Anal., 10 (2021), 261-288. doi: 10.1515/anona-2020-0141.
[34] R. Xu, Cauchy problem of generalized Boussinesq equation with combined power-type nonlinearities, Math. Meth. Appl. Sci., 34 (2011), 2318-2328. doi: 10.1002/mma.1536.
[35] R. Xu, W. Lian and Y. Niu, Global well-posedness of coupled parabolic systems, Sci. China Math., 63 (2020), 321-356. doi: 10.1007/s11425-017-9280-x.
[36] R. Xu and Y. Yang, Low regularity of solutions to the Rotation-Camassa-Holm type equation with the Coriolis effect, Discrete Contin. Dyn. Syst., 40 (2020), 6507-6527. doi: 10.3934/dcds.2020288.
[37] R. Xu, M. Zhang, S. Chen, Y. Yang and J. Shen, The initial-boundary value problems for a class of sixth order nonlinear wave equation, Discrete Contin. Dyn. Syst., 37 (2017), 5631-5649. doi: 10.3934/dcds.2017244.
[38] R. Xue, Local and global existence of solutions for the Cauchy problem of a generalized Boussinesq equation, J. Math. Anal. Appl., 316 (2006), 307-327. doi: 10.1016/j.jmaa.2005.04.041.
[39] H. Zhang and J. Zhou, Asymptotic behaviors of solutions to a sixth-order Boussinesq equation with logarithmic nonlinearity, Comm. Pur. Appl. Anal., 20 (2021), 1601-1631. doi: 10.3934/cpaa.2021034.
[40] J. Zhou, Global existence and blow-up of solutions for a Kirchhoff type plate equation with damping, Appl. Math. Comput., 265 (2015), 807-818. doi: 10.1016/j.amc.2015.05.098.
[41] J. Zhou, Initial boundary value problem for a inhomogeneous pseudo-parabolic equation, Electron. Res. Arch., 28 (2020), 67-90. doi: 10.3934/era.2020005.
[42] J. Zhou and H. Zhang, Well-posedness of solutions for the sixth-order Boussinesq equation with linear strong damping and nonlinear source, J. Nonlinear Sci., 31 (2021), Paper No. 76, 61 pp. doi: 10.1007/s00332-021-09730-4.
Open Access Under a Creative Commons license
Enhancing students' written production in English through flipped lessons and simulations
M. Laura Angelini1 &
Amparo García-Carbonell1
International Journal of Educational Technology in Higher Education volume 16, Article number: 2 (2019)
Today, learning is perceived as a challenge that must be faced simultaneously on numerous fronts. Indeed, learning is no longer confined to the classroom. Students have the opportunity to learn inside and outside the classroom walls. Technology plays its part, as does the abundance of information available on social networks and in the mass media. Educators must stay abreast of change as information and potentially useful technological resources leave traditional education behind. Optimising class time through new methods, techniques and resources is paramount in today's education systems. This paper presents the results of a quantitative study of students' written production in English. The English writing skills of engineering students were developed using situational (or class) simulations and a large-scale web-based simulation in real time. Quantitative analysis of students' written production was used to test for differences between experimental and control groups. The goal of this study was to show that simulation-based instruction contributes significantly to students' progress in written production in English. The results showed that students who received simulation-based instruction (experimental group) improved their English writing skills significantly more than students who attended a regular English course (control group), primarily in terms of organisation and linking of ideas.
Communication has long been a primary goal of foreign language educators. Foreign language students must gain fluency and accuracy to communicate effectively in both written and spoken forms. However, language teachers often teach large classes, and communication can become an ordeal. Blended learning has been gaining ground in language teaching, and certain pedagogical strategies are making headway. Flipped learning is one such strategy. Flipped learning is a specific blended learning model that helps educators optimise class time to encourage communication. In this study, flipped learning was applied, moving lectures outside the classroom and introducing simulation-based lessons to enhance English as a foreign language (EFL) learning, particularly in terms of production skills development. This application of flipped learning inverted the traditional teacher-centred method. Instruction on essay writing, registers and simulation procedures was delivered online outside the classroom, whilst traditional homework was moved into the classroom environment to identify students' weakness and strengths before students participated in simulations and written tests. The flipped model uses technology to present the theory and background materials. This paradigm shift transforms the roles of teacher and learner (Strayer, 2007, 2012; Tucker, 2012). In this study, instructors became facilitators and guides for learners, who worked in teams during the simulations in class. The learners became the real participants in the classroom (Strayer, 2007, 2012; Tourón, Santiago, & Diez, 2014).
A simulation is an activity in which participants are assigned roles and are given enough information to solve a specific problem. A simulation is based on a representation of a model that imitates a real-world process or system. Key information is provided so that participants can carry out tasks, debate, negotiate from different points of view and solve a specific problem (Klabbers, 2009). It is the participants' responsibility to perform duties and thereby solve the problem without play-acting or inventing key facts (Jones, 2013). Michelson and Dupuy (2014) further discuss simulation and language learning and refer to the potential of simulations to enact discourse styles associated with social identities.
In the present study, a web-based simulation was used from The International Communication and Negotiation Simulations (ICONS) platform. The ICONS platform, developed at the University of Maryland, combines simulation tools and simulation development dialogue (SDD) methodology to provide clear insights into global socio-political affairs and evaluate alternative courses of action in crisis situations. Simulations performed using the ICONS platform are thus ideal for addressing social issues that relate to education, environmental threats, the sustainable economy and human rights. Scholars have praised simulations as an effective way of instilling ethical responsibilities in students and developing students' global mindset (Crookall, 2010; Crookall & Oxford, 1990). In the debriefing, students reflected on the simulation and the learning component of the whole experience.
The present study, thus, describes the related works comprising simulations in education and flipped learning. The methodological section describes the participants, the quantitative data and studies carried out. Ethical issues and Threats of validity are subsequently addressed together with Results, Conclusions and Future Research.
Several educational disciplines have embraced simulations. Such disciplines include industry, medicine, nursing, engineering and languages. Despite their relatively short tradition, hundreds of studies have shown the benefits of simulations as they provide immersive experiential learning (Ekker, 2000, 2004; Chang, Peng, & Chao, 2010; Wedig, 2010; Beckem, 2012; Wiggins, 2012; Gegenfurtner, Quesada-Pallarès, & Knogler, 2014; Blyth, 2018). Ekker (2000) studied data on 46 students from four European universities that participated in the Intercultural Dimensions in European Education through On-line Simulation (IDEELS) project. Students' responses to online questionnaires pre- and post-treatment indicated that 90% of students were satisfied with the simulations, reporting a good learning experience. Approximately 73% reported that web-based simulation suited their needs. More than 80% reported that they did not experience difficulties due to cultural differences. Interestingly, all male participants, unlike 22% of female participants, reported that all members of the team contributed to the tasks. Klabbers (2001) described simulations as learning and instructional resources. According to the author, simulations offer a springboard for interactive learning that develops expertise. Kriz (2003), in turn, contextualised simulation within the educational framework. Simulation is an interactive learning environment that converts problem-oriented learning into purposeful action. According to Kriz, training programmes for systems competence through simulation have shown that simulations favour change processes in educational organisations.
Ekker (2004) conducted empirical research on simulations applied to education. The author analysed data on 241 subjects who had participated in various editions of IDEELS, examining satisfaction levels and attitudes. The participants had different roles as negotiators, technical consultants, activists or journalists within the "Eutropian Federation Simulation". The three-week simulation consisted of message exchanges, written proposals and "live" conference situations. The software used was a web-based interface driven by a database server. The project used a web-based questionnaire to measure students' satisfaction, personal experiences and attitudes towards the simulation. Findings revealed that students were satisfied with the simulation, that they felt activated because the simulation invigorated learning, and that personal characteristics did not significantly predict or affect users' satisfaction with web-based simulations.
Other studies conducted by Levine (2004) and Halleck and Coll-García (2011) integrated tele-collaborative exchanges and global simulations to turn the foreign language class into its own immersive, simulated environment.
Levine (2004) described a global simulation design as a student-centered, task-based alternative to conventional curricula for second year university students of foreign language courses. The author provided clear guidelines to apply simulations in language courses and identifies strengths such as the use of the content knowledge in the simulation dynamics, target language activation during the simulation phases and collaborative work to carry out the tasks. Furthermore, Halleck and Coll-García (2011) presented the results of a pilot project in which simulation-based learning was used to teach English to engineering students. The study involved 42 undergraduate engineering students at Oklahoma State University, USA, and 56 undergraduate engineering students from Universitat Jaume I, Castellón, Spain. The results of this pilot study shed light on participants' perceptions of how web-based simulations affect the development of language abilities, critical thinking and intercultural awareness. The authors highlighted the importance of a simulated experience in an engineering curriculum. They concluded that a real comprehensive engineering education should provide opportunities to work collaboratively with other professionals in an intercultural setting more than simply solving problems from a textbook.
Burke and Mancuso (2012) in their study of social cognitive theory, metacognition, and simulation learning identified core principles of intentionality, forethought, self-reactiveness and self-reflectiveness in simulation environments. They sustained that debriefing helps build students' self-efficacy and regulation of behaviour. Thus, simulation-based learning combines key elements of cognitive theory and interactive approach to learning. Theory-based facilitation of simulated learning enhances the development of social cognitive processes, metacognition, and autonomy.
Other studies on language teaching and learning have shown that simulations encourage the development and acquisition of language (e.g. Rising, 2009; Andreu-Andrés & García-Casas, 2011; Author, 2011; Woodhouse, 2011; Michelson & Dupuy, 2014; Blyth, 2018). The scholars coincide that simulations provide greater exposure to the target language, more purposeful interaction, more comprehensible input for learners, a reduced affective filter and lower anxiety in language learning. To mention some, Author (2011) examined perceptions of collaborative work in web-based simulations through evaluations of each student's end-of-course portfolio [N = 26]. Students highly valued the collaborative work required in the simulation, which was reflected by the active participation of all team members and by team members' motivation and personal satisfaction. By analysing their own work and that of their teams, the students reported that they had become more resolute and had learnt discourse strategies to persuade others and solve problems. Students also reported that the collaborative work increased their capacity to listen to others' ideas and to learn from others. All this helped increase their intellectual development and knowledge of the world. They also understood specific content faster, improved their language skills and acquired experience in self-assessment. Andreu-Andrés and García-Casas (2011) focused on simulation and gaming as a teaching strategy. Qualitative analysis based on grounded theory was used to study the perceptions of 47 engineering students. These students endorsed experiential learning and reported that learning and having fun reaped rewards. As educators and students became more familiar with the simulations, they developed a greater appreciation of their effectiveness. Students complete simulations with a heightened awareness of what they have learnt and how they can learn more. Another clear example is Woodhouse's (2011) study, in which 33 Thai university students participated in a computer simulation to learn English. Data were collected through personal interviews to learn about students' opinions of the use of simulations to learn a foreign language. The students perceived that the simulation, despite not being face-to-face, did not hinder their learning about sociocultural aspects related to communication in the target language. Students noted that they acquired greater powers of decision, persuasion and assertiveness in communication. Ranchhod, Gurău, Loukis, and Trivedi (2014) make a threefold contribution to the simulation and experiential learning literature. They analyse the representational effectiveness of several learning strategies. Their study builds on Reeve's educationally supportive learning environment through simulations (Reeve, 2013) as the investigation deals with the concrete learning experience generated by the simulation to develop or reinforce theoretical understanding, management experience, and professional skills.
An example of a large-scale simulation was described by Michelson and Dupuy (2014) in which 29 intermediate learners of French at a public university in the Southwest of the United States participated in the study. 12 students of the experimental group participated in the simulation and had specific roles to enact the responsibilities of residents in a commercial area in Paris. 17 students belonged to the control group and did not participate in the simulation. They followed a traditional approach to learn French. Only the experimental students demonstrated abilities to describe how their roles motivated certain linguistic choices and non-linguistic semiotic modes. The study highlights the potential for simulations to boost students' awareness of the target language together with other communication codes.
Blyth (2018) explores the challenges of immersive technologies in foreign language learning and global simulations to enhance language use. The study summarizes the impact of simulations in language learning and concludes that simulations of language use in authentic contexts boosts real experiential language learning.
A few other studies have examined the effectiveness of technologies and simulations in the language classroom. O'Flaherty and Phillips (2015) provided a broad overview of research on the flipped classroom and links to other pedagogical models such as simulations. They reported considerable indirect evidence of improved academic performance and student and teacher satisfaction with flipped learning. However, further research is required to provide conclusive evidence of how the fusion of these methods enables language and social competence development. Author (2016) investigated combining flipped learning instruction and simulation-based lessons to optimise class time by using and designing simulations with prospective secondary school teachers. Author outlined the benefits of using simulations that are based on literary extracts with a substantial social component.
The simulation in this study consisted of three phases: briefing, action and debriefing, all of which required immersion in the English language. During the briefing phase, consistent with the flipped classroom model, students were presented with topics related to the simulation scenario, literature on these topics and videos to be viewed outside the classroom. One benefit of this pedagogical shift is that students have more class time to apply the content knowledge in relevant communication situations than they would if they followed more traditional instruction models. Amongst the communication activities performed in class were minor-scale simulations, debates and forums aligned with problem-based learning. This type of practice helped prepare the students for the larger-scale simulation which covered more topics to analyse and had a different complexity as it was international. This class practice also helped instructors estimate students' understanding of the topic and the type of language that the students used. The instructors provided grammar clarifications and explanations where necessary. Students chose their own teams of four or five members. These teams were the same for the activities and the large-scale simulation. Teamwork was fostered, as was individualised learning. The instructor was able to identify the weaknesses of each student. This initial briefing phase served as preparation for phase 2, during which the web-based simulation took place. This large-scale web-based simulation had several steps: reading and analysing the scenario and assigning individual roles; anticipating other team members' proposals and writing a strategy to persuade other team members to vote for a particular proposal; listening to others and taking notes; and debating, negotiating and, finally, making a decision.
Quantitative data collection
This paper presents the findings of a quantitative study of students' progress in written production in English. The cohort of engineering students who participated in the study had attained the B1 level of English. Moreover, they were enrolled in an intensive optional four-month conversational English course at university. This course corresponded to the B2 level of the Common European Framework of Reference for Languages: Learning, teaching, assessment (CEFR). The CEFR has been designed to provide a coherent and comprehensive basis for the creation of language syllabuses, teaching and learning materials, and the assessment of foreign language proficiency. It is used in Europe and also in other continents. The CEFR is available in 40 languages (Council of Europe, 2001).
There were five subgroups in total. The experimental group had two subgroups (E1 and E2; N = 50), which were taught separately in different classrooms. The control group had three subgroups (C1, C2 and C3; N = 71), which were taught separately in three classrooms. Smaller groups were more conducive to language learning in both the experimental and the control subgroups. All participants were in the third year of an engineering degree. The experimental subgroups received flipped learning instruction of topics related to the simulation scenario. This means that the students in the experimental subgroups were acquainted with the topics as they had to watch videos and read before the simulation. In class, the simulation guidelines and classroom practice in minor-scale simulations, class debates and forums prepared the students to participate in a large-scale web-based simulation. This latter is conceived as a large virtual exchange with other students from different foreign universities. This web-based simulation was carried out during class-time in the technology lab. Video conferences were held only with groups from other universities in Europe (synchronous simulation). However, there was interaction amongst other groups with different time zone through written messages, recorded voice messages and recorded sessions. Additional file 1 presents a list of materials used. The ICONS web-based simulation consisted of an international summit on current economic, social and security issues. This simulated summit was attended by numerous countries, which were represented by student teams. Attendance was both synchronous and asynchronous. The experimental group worked in teams of four to five members, each with a clear role within the team. These roles were specified in the simulation briefing phase.
The control group, however, was taught under a traditional EFL instruction model, which was based on a B2 course book, with one 3.5-hour lesson per week for one term. Students sat a final exam at the end of the term. Written production by students in the experimental and control groups was tested pre- and post-treatment. The pre- and post-treatment written tests were assessed on a five-point Likert scale, where 1 indicated 'not accomplished' and 5 indicated 'successfully accomplished', for the three variables: topic development, organising and linking ideas, and variety and accuracy in grammar and vocabulary (University of Cambridge, ESOL Examinations).
Although different skills were worked on during the course, this study focused on written production in English. The experimental group followed simulation-based training, which is illustrated in Fig. 1.
Written pre-test. Control and experimental groups wrote 250 words about how living in a cosmopolitan city affects their life and lifestyle. Three external examiners assessed the timed essay by applying the adapted writing criteria (University of Cambridge, ESOL Examinations, 2012) of language development, organising and linking ideas, and variety and accuracy in grammar and vocabulary (Additional file 2).
Flipped learning approach in the briefing phase. Students watched videos, read the news and performed research on several topics related to the web-based simulation scenario. Outside the classroom, they also revised some aspects of grammar that were occasionally clarified in class. In contrast, class sessions were active learning lessons where students took on responsibilities and participated in minor-scale simulations to debate, negotiate and solve problems. Teamwork was fostered. This phase served as preparation for the action phase, where the web-based simulation took place. Attendance was compulsory, and formative assessment was used to keep a record of students' progress.
Web-based simulation. Experimental students revised the simulation guidelines and formed teams of four or five members. The students chose their own teams, with no interference from the teacher. Participants became acquainted with the simulation scenario and their roles within the team (the simulation scenario can be consulted in Additional file 1). When the action phase took place (synchronously and asynchronously), students analysed the scenario and identified the problems to be solved, planned strategies, participated in debates, set forth and negotiated proposals, and took a final decision. The web-based simulation lasted three weeks. Conversely, the control groups followed a more conventional approach to learning English. They had 3.5-hour lessons per week and used a general B2 course book to develop listening, speaking, reading, writing and interaction skills. Lessons aimed to give them opportunities to practise these skills, and they usually had to do the exercises in the workbook for homework. They sat a final exam at the end of the course. They did think-pair-share and group work in the classroom, mainly in the speaking exercises.
Debriefing. A structured debriefing consisted of three phases. The initial phase consisted of reflecting on the simulation experience, discussing it with others and learning and modifying behaviours based on the experience. In this initial phase, facts and concepts were clarified. The second phase dealt with emotions during the simulation, either individually or as a group. The third phase consisted of understanding the different views of each participant and the way each view reflected reality. Thus, the third phase addressed the generalisation and application of the experience to real life (Thatcher & Robinson, 1985).
Written post-test. This phase was common to both groups (experimental and control). It took place at the end of the course. Participants wrote 250 words on the following topic: 'What do you think about immigration in Spain?' Three external examiners assessed the timed essay by applying the same criteria (University of Cambridge, ESOL Examinations, 2012) as in step 1.
Procedure workflow
The goal of the quantitative study was to determine students' progress in written production in English. To achieve this goal, the following tests were conducted:
Pre-treatment homogeneity test. A Student's t-test was used to compare the means for the experimental and control groups because the distribution of assessments for both groups was normalised (non-significant Kolmogorov-Smirnov test results). Fisher's least significant difference (LSD) method was applied to determine which means were significantly different from others.
Post-treatment comparative analysis of the progress of students in both groups. Descriptive analysis of the mean scores and standard deviations for the experimental and the control groups was conducted. For the analysis of effect size, Cohen's (1988) procedure was followed. ANOVA was used to identify significant differences between the average progress levels for each group.
Post-treatment analysis of progress for each variable. A Student's t-test was used to compare mean scores post-treatment. The Kolmogorov-Smirnov test was used to determine the extent to which the distribution of the variables could be considered normal.
Concordance analysis of the three external examiners' assessments. The concordance of external examiners' assessments was studied to determine whether each examiner exercised independent judgement. An F-test of equality of variances was used to check examiners' variability, variability in variables and students' variability. All analyses were performed in SPSS 25 (under a licence held by the Universidad Católica de Valencia).
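For readers without access to SPSS, the same family of analyses can be approximated with open-source tools. The sketch below is not the authors' code; the score arrays are random placeholders standing in for the unpublished raw data, and the normality check via z-scores is only an approximation (a Lilliefors-type correction would be more rigorous). It mirrors steps 1-2 with scipy:

```python
import numpy as np
from scipy import stats

# Placeholder scores -- the study's raw data are not reproduced in the article.
rng = np.random.default_rng(0)
experimental = rng.normal(5.1, 0.9, 50)   # stand-in for subgroups E1 + E2
control = rng.normal(4.5, 1.3, 71)        # stand-in for subgroups C1 + C2 + C3

# Normality check (Kolmogorov-Smirnov on z-scored data against a standard normal)
print(stats.kstest(stats.zscore(experimental), "norm"))
print(stats.kstest(stats.zscore(control), "norm"))

# Pre-treatment homogeneity: two-sample Student's t-test
print(stats.ttest_ind(experimental, control))

# One-way ANOVA across the five subgroups (the split here is illustrative)
e1, e2 = experimental[:25], experimental[25:]
c1, c2, c3 = control[:24], control[24:48], control[48:]
print(stats.f_oneway(e1, e2, c1, c2, c3))
```

Fisher's LSD comparisons in step 1 amount to pairwise t-tests that reuse the pooled error term from the ANOVA, which is how multiple comparison tables of the kind reported below are typically constructed.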
Letters of consent were previously signed by members of the five subgroups to comply with the basic principles of research ethics. A sample letter can be found in Additional file 3.
Pre-treatment homogeneity test to compare the mean level of written production in the experimental and control groups
The mean level of written production pre-treatment for the experimental group (5.109) was higher than it was for the control group (4.460). The standard deviation for the control group (1.256) was higher than it was for the experimental group (0.869) (Table 1).
Table 1 Mean level of written production pre-treatment in experimental and control groups
These results indicate considerable variability in the command of the English language displayed by students in the control group. In the experimental group, students' command of written English varied to a lesser degree.
The Student's t-test indicated that the difference between the mean levels of written production in the experimental and control groups was significant (p = 0.001).
A multiple comparison test (Fisher's LSD method) was applied to determine which means were significantly different from others (Table 2).
Table 2 Multiple comparison test (Fisher's LSD method) of mean level of written production (pre-treatment) in the 5 subgroups (C1, C2, C3, E1 and E2)
Subgroups E1, E2 and C3 had similar levels. C1 and C2 had slightly lower levels. The Student's t-test indicated that the means for the experimental subgroups E1 and E2 were higher and that there was greater variability amongst the control subgroups. This variability in the means for the control group might be associated with the presence of foreign students in subgroup C3. These students had an excellent command of the English language (Table 3).
Table 3 Homogeneous blocks in terms of written production (pre-treatment)
However, the primary goal of this study was not to identify differences between the means of the experimental and control groups. This study was designed to investigate students' progress post-treatment.
Post-treatment comparative analysis of the progress of students in both groups
ANOVA was used to identify significant differences between the mean level of progress of the experimental and control groups (Table 4). The p-value was less than or equal to .05. This result implies that there were significant differences in the mean level of progress of different groups (Table 5).
Table 4 ANOVA of progress in experimental and control groups
Table 5 Descriptives
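As a rough illustration of the comparison reported in Tables 4 and 5, a one-way ANOVA on improvement scores can be run as follows; the data are hypothetical and the code is not the authors' SPSS output.

```python
# Minimal sketch of a one-way ANOVA on progress (post minus pre) scores; data are hypothetical.
import numpy as np
from scipy import stats

exp_progress     = np.array([2.1, 1.8, 2.5, 1.6, 2.2, 1.9])
control_progress = np.array([0.9, 1.1, 0.4, 1.3, 0.8, 1.0])

f_stat, p_value = stats.f_oneway(exp_progress, control_progress)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}  (p <= .05 indicates a significant difference)")
```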
Analysis of effect size was conducted to determine the magnitude of the change between the mean level of written production pre- and post-treatment.
$$ d = \frac{\bar{X}_{\mathrm{exp.\,imprv}} - \bar{X}_{\mathrm{control\,imprv}}}{\sigma_{\mathrm{control\,imprv}}} $$
The effect size was 1.236. This value exceeds the threshold of 0.8, which is the minimum value for the effect size to be considered large (Cohen, 1988). According to Cohen, the thresholds for effect size are d = 0.20 (small), d = 0.50 (moderate) and d = 0.80 (large).
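For concreteness, the effect-size formula above can be computed as follows; the improvement scores are hypothetical and the snippet is only a sketch of the calculation.

```python
# Minimal sketch of Cohen's d on improvement scores, using the formula given above.
import numpy as np

exp_improvement     = np.array([2.1, 1.8, 2.5, 1.6, 2.2])   # post minus pre, experimental
control_improvement = np.array([0.9, 1.1, 0.4, 1.3, 0.8])   # post minus pre, control

d = (exp_improvement.mean() - control_improvement.mean()) / control_improvement.std(ddof=1)
print(f"Cohen's d = {d:.2f}  (d >= 0.8 is conventionally considered large)")
```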
Table 6 shows the least significant differences (Fisher's LSD method) in means and the estimated differences between means. Two homogeneous blocks were identified. The first block comprised subgroups E1 and E2. Analysis of the mean level of progress post-treatment did not reveal significant differences. This means both experimental groups were homogeneous.
Table 6 Comparison of the mean level of progress of written production (post-treatment)
The second block comprised subgroups C1, C2 and C3. Analysis of the mean level of progress post-treatment did not reveal significant differences. Conversely, when the subgroup E1 was compared with C1, C2 and C3 and when E2 was compared with C1, C2 and C3, significant differences were identified.
To conclude, the initial homogeneity test of both groups pre-treatment indicated that the mean for subgroup C1 was similar to the mean for E1 and E2 and that the mean for subgroup C3 was significantly higher than the mean for C1 and C2. This finding does not invalidate the results of the subsequent comparative analysis of progress, although it is unclear whether the pre-treatment level might have influenced the progress of students in a given subgroup. Nevertheless, the progress of students in subgroup C3 did not differ significantly from the progress of students in subgroups C1 and C2. Students in these subgroups made less progress than did students in the experimental subgroups. This finding shows that the progress of students in the experimental group was significantly greater than the progress of students in all control groups, regardless of students' pre-treatment level (Table 7).
Table 7 Homogeneous blocks of progress (post-treatment)
Thus, the simulation-based instruction proved effective at improving students' written production.
Comparative analysis of progress in each variable post-treatment
The independent variables assessed in the comparative study were topic development, organising and linking ideas, and variety and accuracy in grammar and vocabulary.
The mean level of progress in topic development for the experimental group was 3.89. For the control group, the mean level of progress was 3.02. Figure 2 shows that pre-treatment both the control and experimental groups were fairly homogeneous. In the post-test, the experimental group showed greater progress, although the control group also improved (Fig. 3).
Box-and-whisker plot of topic development for students in the experimental and control groups pre-treatment
Box-and-whisker plot of topic development for students in the experimental and control groups post-treatment
The Student's t-test indicated that the difference between the means for the experimental and control groups was non-significant (Table 8).
Table 8 Comparative analysis of mean level of progress in topic development post-treatment
Dispersion was higher for the control group. This variability amongst students in the control group may be due to the fact that the experimental group was more homogeneous in terms of students' knowledge of English.
Post-treatment progress in topic development was greater for students in the experimental group than for students in the control group. However, the difference was non-significant (α = .05).
Organising and linking ideas
The mean level of progress post-treatment was greater for the experimental group (4.76) than for the control group (3.48). Figure 4 shows that the mean level was slightly higher for the experimental group and that dispersion was similar for both groups compared to the variability observed in the pre-test (Fig. 5).
Box-and-whisker plot of post-treatment progress in organising and linking ideas in both groups
Box-and-whisker plot of pre-treatment progress in organising and linking ideas in both groups
The Student's t-test indicated that the p-value was less than the level of statistical significance (α = .05). Thus, the difference in progress was significant (Table 9).
Table 9 Comparative analysis of the mean level of progress in organising and linking ideas (post-treatment)
The effect size was 0.876. Because this value was greater than 0.8, the effect can be considered large. This result implies that the experimental group showed greater progress in organising and linking ideas after the simulation-based lessons.
Variety and accuracy in grammar and vocabulary
The experimental group had a higher mean level (3.78) than the control group (3.60). Figure 6 shows that the mean level was substantially higher for the experimental group compared to the pre-test (Fig. 7).
Box-and-whisker plot of post-treatment progress in variety and accuracy in grammar and vocabulary in both groups
Box-and-whisker plot of pre-treatment in variety and accuracy in grammar and vocabulary in both groups
The Student's t-test indicated that the p-value was greater than the level of statistical significance (α = .05). Therefore, the mean level of post-treatment progress in variety and accuracy in grammar and vocabulary did not reach significance for the experimental group (Table 10). The effect size indicated that the treatment effect was large (effect size of 1.599 > 0.8).
Table 10 Comparative analysis of the mean level of progress in grammar and vocabulary (post-treatment)
Thus, the results for the three variables of written production indicate post-treatment progress by students in the experimental group. However, this progress was statistically significant (at the 5% level) only for the variable organising and linking ideas, which also showed a large effect size.
Concordance analysis of external examiners' assessments
In this study, we tested the objectivity and impartiality of the three external examiners' assessments of students' written production pre- and post-treatment.
Figure 8 shows the homogeneity of the three external examiners' assessments.
Average assessments by the three external examiners of written production pre-treatment
The variability that can be observed in Fig. 8 is not associated with discrepancies in examiners' assessments (p = 0.674). Instead, it is due to differences in students' knowledge of English as measured by the three variables that were analysed in this study (p < 0.00001). Therefore, the results indicate concordance in the three examiners' assessments pre-treatment.
Post-treatment
The three examiners tended to assess students in the same way in most cases (Fig. 9).
Average assessments by the three external examiners of written production post-treatment
According to examiners' assessments, students in both groups (i.e. control and experimental) progressed post-treatment. However, the students in the experimental group received higher marks.
Threats to validity
The findings of this study should be considered in light of its limitations.
Internal validity
As regards selection bias, the participants were not selected from populations with different characteristics: both the experimental and control groups were in the third year of an engineering degree. To enrol in the course, students had to prove language proficiency. However, the group was heterogeneous, as it included students ranging in age from 21 to 26 years, some Erasmus students, and very few students with professional experience. Attrition or mortality may have affected the study, as data could not be drawn from 7 dropouts in the experimental group and 2 in the control group.
As for instrumentation, the design of the pre- and post-test did not vary across groups, in spite of the different approaches followed in the lessons. Whereas the control group was more focused on textbook-related activities and on developing language skills systematically, the experimental group had autonomous work to do outside of class to learn about specific topics before attending the lessons. However, keeping track of students' activity outside of class was at times difficult. In a few cases, students did not do their homework (reading or watching the videos) and were asked to complete it, preferably outside of class, without interfering with the other students.
External validity
Situational factors may limit generalizability, as the participants were all engineering students who might have had difficulty grasping the complexities of the socio-political problems described in the web-based simulation. However, these types of simulations are often applied in optional conversational courses such as the one presented in this study. It can also be noted that participants' awareness of being studied may have altered their behaviour and therefore the study results. Regarding experimenter effects, only one of the researchers was in charge of teaching one experimental group; for this reason, three external examiners were used to strengthen the reliability of the study.
In this study, written production improved regardless of students' initial level in both the experimental and control groups. Progress in written production was greater for students who participated in the simulation-based instruction in the experimental groups. Progress in organising and linking ideas was statistically higher for students in the experimental groups. It may be inferred that the extensive exposure to written input in the target language, the critical dialogical exchanges about the different simulation issues, and the elaboration of a written proposal to be later negotiated with other participants led students to organise their ideas coherently and cohesively. The control groups also progressed in the organisation of ideas, grammar and variety of expressions, though not as much as the experimental group. It may be inferred that the control group was more focused on dealing with the topics and written models presented by the course book. Thus, their written production was well structured and showed good control of grammar, though the ideas seemed similar to some of the written texts in the course book. Notably, however, the experimental group's progress in variety and accuracy in grammar and vocabulary, and in language development, was non-significant. By establishing a knowledge base that would support production in the target language, students should have enriched their content knowledge of the topics using the written and video material outside the classroom and the simulations, debates and forums in class. The results indicate that these students were more inclined to use, and overuse, vocabulary and structures they were already familiar with. A deeper interpretation is linked to Wells (1999) and Lipman (2003), who supported the idea of developing thinking skills to be revealed through language use within a 'community of inquiry' in the classroom. Mastering the content knowledge of a specific topic did not guarantee language creativity in the present study.
In a future study, an ANOVA will be used to examine differences between the experimental and control groups by comparing the means of two or more variables at different times: between the two groups, to clearly identify differences pre- and post-treatment, and within the same group pre- and post-treatment. Furthermore, future lessons will integrate simulations with an inquiry-based model that enhances reflection on the simulation experience and on students' learning, in an attempt to reach a common reflection that favours inter-subjectivity and language development.
Andreu-Andrés, M. A., & García-Casas, M. (2011). Perceptions of gaming as experiential learning by engineering students. International Journal of Engineering Education, 27(4), 795–804 Tempus Publications.
Author (2011). Student perceptions of collaborative work in telematic simulation. Journal of Simulation/Gaming for Learning and Development, 1(1), 1–12.
Beckem, J. M. (2012). Bringing life to learning: Immersive experiential learning simulations for online and blended courses. Journal of Asynchronous Learning Networks, 16(5), 61–70.
Burke, H., & Mancuso, L. (2012). Social cognitive theory, metacognition, and simulation learning in nursing education. The Journal of Nursing Education, 51(10), 543–548.
Chang, Y. C., Peng, H. Y., & Chao, H. C. (2010). Examining the effects of learning motivation and of course design in an instructional simulation game. Interactive Learning Environments, 18(4), 319–339.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences, (2nd ed., ). Hillsdale: Lawrence Earlbaum Associates.
Council of Europe. Council for Cultural Co-operation. Education Committee. Modern Languages Division (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge: Cambridge University Press.
Crookall, D. (2010). Serious games, debriefing, and simulation/gaming as a discipline. Simulation and Gaming, 41(6), 898–920.
Crookall, D., & Oxford, R. L. (1990). Simulation, gaming, and language learning. New York: Newbury House Publishers.
Ekker, K. (2000). Changes in attitude towards simulation-based distributed learning. Project DoCTA, (pp. 112–120). Oslo: Design and use of Collaborative Telelearning Artefacts.
Ekker, K. (2004). User satisfaction and attitudes towards an internet-based simulation. In D. Kinshuk, G. Sampson, & P. Isaías (Eds.), Proceedings of the IADIS international conference cognition and exploratory learning in digital age, (pp. 224–232). Lisbon: IADIS.
Gegenfurtner, A., Quesada-Pallarès, C., & Knogler, M. (2014). Digital simulation-based training: A meta-analysis. British Journal of Educational Technology, 45(6), 1097–1114.
Halleck, G., & Coll-García, J. (2011). Developing problem-solving and intercultural communication: An online simulation for engineering students. Journal of Simulation/Gaming for Learning and Development, 1(1), 1–12.
Jones, K. (2013). Simulations: A handbook for teachers and trainers. London: Routledge.
Klabbers, J. H. (2001). The emerging field of simulation and gaming: Meanings of a retrospect. Simulation and Gaming, 32(4), 471–480.
Klabbers, J. H. (2009). The magic circle: Principles of gaming and simulation. Rotterdam: Sense Publishers.
Kriz, W. C. (2003). Creating effective learning environments and learning organizations through gaming simulation design. Simulation and Gaming, 34(4), 495–511.
Levine, G. (2004). Global simulation: A student-centered, task-based format for intermediate foreign language courses. Foreign Language Annals, 37(1), 26–36.
Lipman, M. (2003). Thinking in education. Cambridge: Cambridge University Press.
Michelson, K., & Dupuy, B. (2014). Multi-storied lives: Global simulation as an approach to developing multiliteracies in an intermediate French course. L2 Journal, 6(1), 21–49.
O'Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education, 25(1), 85–95.
Ranchhod, A., Gurău, C., Loukis, E., & Trivedi, R. (2014). Evaluating the educational effectiveness of simulation games: A value generation model. Information Sciences, 264(1), 75–90.
Reeve, J. (2013). How students create motivationally supportive learning environments for themselves: The concept of agentic engagement. Journal of Educational Psychology, 105(3), 579–595 https://doi.org/10.1037/a0032690.
Rising, B. (2009). Business simulations as a vehicle for language acquisition. In V. Guillén-Nieto, C. Marimón-Llorca, & C. Vargas-Sierra (Eds.), Intercultural business communication and simulation and gaming methodology, (pp. 317–354). Bern: Peter Lang.
Strayer, J. F. (2007). The effects of the classroom flip on the learning environment: A comparison of learning activity in a traditional classroom and a flip classroom that used an intelligent tutoring system. PhD dissertation, Ohio State University. https://etd.ohiolink.edu/!etd.send_file?accession=osu1189523914. Accessed 27 Apr 2018.
Strayer, J. F. (2012). How learning in an inverted classroom influences cooperation, innovation and task orientation. Learning Environments Research, 15(2), 171–193.
Thatcher, D. C., & Robinson, M. J. (1985). An introduction to games and simulations in education. Hants: Solent Simulations.
Tourón, J., Santiago, R., & Diez, A. (2014). The Flipped Classroom: Cómo convertir la escuela en un espacio de aprendizaje. Spain: Grupo Océano.
Tucker, B. (2012). The flipped classroom: Online instruction at home frees class time for learning. Education Next, 12(1), 82–84.
University of Cambridge. ESOL Examinations (2012). Research Notes [PDF] Accessed 26 January 2018. http://www.cambridgeenglish.org/images/23166-research-notes-49.pdf.
Wedig, T. (2010). Getting the Most from classroom simulations: Strategies for maximizing learning outcomes. PS: Political Science and Politics, 43(3), 547–555.
Wells, G. (1999). Dialogic inquiry: Towards a socio-cultural practice and theory of education. Cambridge: Cambridge University Press.
Wiggins, B. E. (2012). Toward a model of intercultural communication in simulations. Simulation & Gaming, 43(4), 550–572. https://doi.org/10.1177/1046878111414486.
Woodhouse, T. (2011). Thai University Students' Perceptions of Simulation for Language Education. https://absel-ojs-ttu.tdl.org/absel/index.php/absel/article/view/3026.
We would like to thank the reviewers for their help in enhancing this paper.
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
The data that support the findings of this study are available from DIAAL Research Group but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of DIAAL Research Group.
School of Education, Universidad Católica de Valencia "San Vicente Mártir", Centro de Postgrado Santísima Trinidad, C/Menéndez y Pelayo, frente al n° 7, 46100, Burjassot, Valencia, Spain
M. Laura Angelini & Amparo García-Carbonell
MLA carried out the main design, interpretation of the statistic studies and writing of the manuscript. AGC focused on the literature review about simulations and also interpretation of the data. All authors have agreed on the final conclusions and approved the final manuscript.
Correspondence to M. Laura Angelini.
1. Material used by the Experimental Group. (DOCX 43 kb)
2. Essay Writing Assessment Criteria. (DOCX 32 kb)
3. Letter of Consent. (DOCX 19 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Laura Angelini, M., García-Carbonell, A. Enhancing students' written production in English through flipped lessons and simulations. Int J Educ Technol High Educ 16, 2 (2019). https://doi.org/10.1186/s41239-019-0131-8
Received: 15 May 2018
Accepted: 11 January 2019
Applied Water Science
May 2017, Volume 7, Issue 2, pp 663–676
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
Mohammad Ali Ghorbani
Vijay P. Singh
Bellie Sivakumar
Mahsa H. Kashani
Atul Arvind Atre
Hakimeh Asadi
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson distribution, and two-parameter Weibull, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
Genetic algorithm Least squares method Mathematica Nonlinear optimization Probability distribution function Unit hydrograph
Prediction of flow hydrographs is important for undertaking water emergency measures and management strategies. A large number of methods have been proposed for flow prediction. The unit hydrograph (UH) is one of the most popular and widely used methods, especially in developing countries. A unit hydrograph (Sherman 1932) is defined as the hydrograph of direct runoff resulting from a unit depth of effective rainfall (ER) occurring uniformly over the basin and at a uniform rate for a specified duration. When the duration of ER becomes infinitesimally small, the UH is known as the instantaneous unit hydrograph (IUH). The hydrograph obtained with the use of the UH is the direct runoff hydrograph (DRH). Because the UH represents a linear response of the basin, the DRH is obtained by convoluting the UH with the effective rainfall hyetograph (ERH). The discrete form of convolution can be written as follows (e.g. Chow et al. 1988; Singh 1988):
$$ Q_{n} = \sum\limits_{m = 1}^{n \le M} {P_{m} U_{n - m + 1} }, $$
where \( Q_{n} \) is the DRH ordinate at a discrete time step n, \( P_{m} \) is the effective rainfall pulse at a discrete time step m, and \( U_{n - m + 1} \) is the ordinate of the UH at any discrete time step \( n - m + 1 \). If the number of effective rainfall pulses is M and the number of DRH ordinates is N, then there will be \( N - M + 1 \) ordinates in the UH of the watershed. On the other hand, when effective rainfall pulses (\( P_{m} \)'s) and DRH ordinates (\( Q_{n} \)'s) are known from observations, Eq. (1) can be used to determine the ordinates of UH through a reverse process. This reverse process of determining the UH ordinates is sometimes referred to as the "de-convolution" process.
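As a concrete illustration of Eq. (1), the sketch below builds a DRH by discrete convolution of an effective rainfall hyetograph with a unit hydrograph; the numerical values are hypothetical and the snippet is not part of the original study.

```python
# Minimal sketch of the discrete convolution in Eq. (1); all values are hypothetical.
import numpy as np

P = np.array([0.5, 1.2, 0.3])               # effective rainfall pulses (depth per time step)
U = np.array([0.1, 0.4, 0.3, 0.15, 0.05])   # unit hydrograph ordinates

# Q_n = sum_{m<=n} P_m * U_{n-m+1} is exactly a discrete convolution,
# giving N = M + (N - M + 1) - 1 direct runoff ordinates.
Q = np.convolve(P, U)
print(Q)
```

The de-convolution problem is the reverse: given P and Q, recover U, which is what the linear and nonlinear methods discussed below do.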
There are many methods to solve Eq. (1) for determining the UH. These methods include the successive substitution method (Dooge and Bruen 1989), Collins method (Collins 1939), successive approximation method (Newton and Vinyard 1976), Delaine method (Raghavendran and Reddy 1975), harmonic analysis (O'Donnell 1960), Fourier method (Levi and Valdes 1964), Meixner method (Dooge and Garvey 1978), least squares method (Bruen and Dooge 1984), linear programming method (Deininger 1969), and nonlinear programming method (Unver and Mays 1984), among others; see also Singh (1988) for further details.
Mays and Coles (1980) presented a linear programming (LP) model for the determination of composite UH. This model uses the f-index method for the estimation of infiltration losses. Prasad et al. (1999) applied an LP model to estimate the optimal loss-rate parameters and UH by considering the inherent characteristics of infiltration and UH. Mays and Taur (1982) developed a nonlinear programming (NLP) model to determine the optimal UH. This method does not require losses to be specified a priori. Unver and Mays (1984) extended the method of Mays and Taur (1982) by incorporating an infiltration equation to estimate the optimal loss-rate parameters and UH.
Although these methods have been shown to perform well for certain situations, their main disadvantage is that the number of unknowns is equal to the number of unit hydrograph ordinates. Therefore, for larger time bases, these methods may involve difficulties in estimating the unit hydrograph from the rainfall–runoff data, since the number of unknowns is generally large (Bhattacharjya 2004).
Unit hydrographs have common characteristics with probability distribution functions, such as positive ordinates and unit area. As a result, probability distribution functions have recently gained enormous interest in deriving UH. In this approach, the number of unknowns is less and equal to the number of probability distribution parameters. Bardsley (2003) used the inverse Gaussian distribution as an alternative to the gamma distribution as a two-parameter descriptor of the IUH. The inverse Gaussian distribution was capable of deriving some hydrographs where the gamma would fail. Bhattacharjya (2004) used gamma and log-normal probability distributions to represent the UH for developing two nonlinear optimization models and solved them using binary-coded genetic algorithms. The gamma and log-normal distribution estimated the time to peak correctly. Log-normal distribution predicted peak discharge more or less properly; whereas gamma distribution did not satisfactorily estimate the peak discharge. Moreover, the results showed fairly similar performance of the distributions and the linear optimization model. Bhunya et al. (2007) explored the potential of four popular probability distribution functions (Gamma, Chi square, Weibull, and Beta) to derive synthetic unit hydrograph (SUH) using field data. The results showed that the Beta and Weibull distributions are more flexible in hydrograph prediction. Nadarajah (2007) provided simple Maple programs for determining SUH from eleven of the most flexible probability distributions and derived expressions for the unknown parameters in terms of the time to peak, the peak discharge, and the time base. Rai et al. (2010) derived the UH using the Nakagami-m distribution and compared its results with those of seven other distribution functions over 13 watersheds. The Nakagami-m distribution yielded UHs and direct runoff hydrographs successfully. Singh (2011) employed the entropy theory to derive a general IUH equation on two small agricultural experimental watersheds. This equation was specialized into some distributions, such as the gamma distribution, Lienhard distribution, and Nakagami-m distribution. The results indicated that surface runoff hydrographs computed using the derived IUH equation were in satisfactory agreement with the observed hydrographs.
In the present study, a nonlinear unconstrained optimization model is presented to transmute UHs into probability distribution functions. Six probability distribution functions are considered: two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distribution. The nonlinear least squares optimization formulation is solved by (1) programming in Mathematica and (2) by applying genetic algorithm. The potential of these six probability distribution functions is tested on data from the Lighvan catchment in the northwest of Iran. The nonlinear optimization method is compared with the traditional linear least squares method. One particular novelty of this study is the use of Mathematica for solving the nonlinear optimization formulation problem involved in deriving UH. Since Mathematica has extensive symbolic and numerical capabilities, it enables the calculations in a simpler, faster, and more accurate manner. It also has several statistical distributions already built-in.
The rest of this paper is organized as follows: the next section presents a brief description of the six probability distribution functions, the nonlinear least squares optimization method and the formulation to transmute a UH into a probability distribution, the genetic algorithm, and the traditional least squares method. After describing the case study area, the results of calibration and validation of the methods are discussed. Finally, the conclusions are drawn.
Probability distribution functions
In this study, six popular probability distribution functions are considered: gamma, Gumbel, log-normal, normal, Pearson, and Weibull. A brief description of these functions can be found in Table 9.
Nonlinear least squares optimization method
In this method, a formula is presented to transmute UH into probability distributions. The objective function is to minimize the sum of the squares of deviation between the actual and the estimated direct runoff hydrographs. This can be written as
$$ \sum\limits_{n = 1}^{N} {e_{n}^{2} }, $$
where \( e_{n} \) is the deviation between the nth ordinates of the estimated and actual direct runoff hydrographs, given by
$$ e_{n} = \sum\limits_{m = 1}^{n \le M} {P_{m} U_{n - m + 1} - Q^{\prime}_{n} }, $$
where \( Q^{\prime}_{n} \) is the nth ordinate of the actual direct runoff hydrograph, \( U_{n - m + 1} = f\left( x \right) \), where \( f\left( x \right) \) is a probability distribution function and \( x = \left( {n - m + 1} \right) \times \Delta t \).
Two constraints must be considered for this objective function: (1) the area under the UH must be unity; and (2) the UH ordinates must be positive. These are given by
$$ \begin{array}{*{20}c} {1 - \Delta t\sum\limits_{r = 1}^{N - M + 1} {U_{r} } = 0} \\ {U_{r} \ge 0} \\ \end{array} \;\;\;\;r = 1,2,3, \ldots ,N - M + 1. $$
In this method, the number of unknowns is equal to the number of parameters of the probability distribution. In this study, this method is implemented by programming in Mathematica and by applying a genetic algorithm, which is briefly described in the next sub-section (2.3).
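A hedged sketch of this formulation is given below: the two gamma-distribution parameters are fitted with a general-purpose optimizer so that the convolved DRH matches the observed one, with the unit-area constraint enforced by renormalisation. The data, bounds, and starting values are hypothetical, and the snippet stands in for the authors' Mathematica program rather than reproducing it.

```python
# Minimal sketch of the nonlinear least squares formulation in Eqs. (2)-(4),
# using a gamma-distribution UH; data and settings are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

P = np.array([0.8, 1.5])                                  # effective rainfall pulses
Q_obs = np.array([0.1, 0.7, 1.3, 1.0, 0.5, 0.2, 0.05])    # observed DRH ordinates
dt = 1.0                                                  # 1-hour time step
t = dt * np.arange(1, len(Q_obs) - len(P) + 2)            # times of the N-M+1 UH ordinates

def objective(params):
    shape, scale = params
    U = gamma.pdf(t, a=shape, scale=scale)
    U = U / (U.sum() * dt)                                # enforce unit area under the UH
    e = np.convolve(P, U) - Q_obs                         # deviations e_n of Eq. (3)
    return np.sum(e ** 2)                                 # objective of Eq. (2)

res = minimize(objective, x0=[2.0, 1.0], bounds=[(0.1, 20.0), (0.1, 20.0)])
print("fitted (shape, scale):", res.x, " SSE:", res.fun)
```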
The genetic algorithm (GA) is a search technique based on the concept of natural selection inherent in natural genetics, and combines an artificial survival of the fittest with genetic operators abstracted from nature (Holland 1975). The major difference between the GA and classical optimization search techniques is that the GA works with a population of possible solutions, whereas classical optimization techniques work with a single solution. An individual solution in a population of solutions is equivalent to a natural chromosome. Just as a natural chromosome completely specifies the genetic characteristics of a human being, an artificial chromosome in a GA completely specifies the values of the decision variables representing a decision or a solution. For most GAs, candidate solutions are represented by chromosomes coded with either a binary number system or a real decimal number system. These chromosomes are evaluated based on their performance with respect to the objective function. A GA that employs binary strings as its chromosomes is called a binary-coded GA, whereas a GA that employs real-valued strings as its chromosomes is called a real-coded GA. Real-coded GAs offer certain advantages over binary-coded GAs, as they overcome some of the limitations of the latter (Deb and Agarwal 1995; Deb 2000). Regardless of the coding method used, the GA consists of three basic operations: reproduction, crossover or mating, and mutation. Reproduction is a process in which individual strings are copied according to their fitness (Goldberg 1989). Crossover is the partial exchange of corresponding segments between two parent strings to produce two offspring strings. The genetic algorithm picks two strings from the population to perform crossover with probability p_c at a randomly selected point along the string. Mutation is the occasional introduction of new features into the population pool to maintain diversity in the population (Bhattacharjya 2004). Genetic algorithms start by randomly generating an initial population (p) of possible solutions. The population is then operated on by the three basic operators in order to produce better offspring for the next generation. This process is repeated until the individuals in the population are good enough with respect to the objective function.
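The sketch below illustrates a bare-bones real-coded GA with tournament selection, arithmetic crossover, and Gaussian mutation, minimising a toy quadratic function. The population size, probabilities, and objective are illustrative assumptions; they are not the settings used in this study (those are given in Table 5), and MATLAB's GA toolbox was used in the actual work.

```python
# Minimal sketch of a real-coded genetic algorithm; all settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective to minimise; in the study this would be the SSE of Eq. (2).
    return np.sum((x - 3.0) ** 2)

def run_ga(n_vars=2, pop_size=30, generations=100, p_cross=0.8, p_mut=0.1):
    pop = rng.uniform(0.0, 10.0, size=(pop_size, n_vars))
    for _ in range(generations):
        # Reproduction: binary tournament selection
        i, j = rng.integers(pop_size, size=(2, pop_size))
        fit_i = np.apply_along_axis(fitness, 1, pop[i])
        fit_j = np.apply_along_axis(fitness, 1, pop[j])
        parents = np.where((fit_i < fit_j)[:, None], pop[i], pop[j])
        # Crossover: arithmetic blend of consecutive parents with probability p_cross
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                w = rng.random()
                children[k]     = w * parents[k] + (1.0 - w) * parents[k + 1]
                children[k + 1] = w * parents[k + 1] + (1.0 - w) * parents[k]
        # Mutation: small Gaussian perturbations with probability p_mut per gene
        mask = rng.random(children.shape) < p_mut
        children[mask] += rng.normal(0.0, 0.5, size=mask.sum())
        pop = children
    return min(pop, key=fitness)

print(run_ga())   # should approach [3, 3]
```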
Linear least squares method
The least squares method minimizes the objective function which is the sum of squares of deviations of the actual and predicted direct runoff hydrographs. According to Eq. (1), the matrix form of the convolution equation can be written as
$$ [\varvec{Q}] = [\varvec{P}][\varvec{U}]. $$
Then, the unit hydrograph is derived using Eq. (6):
$$ [\varvec{U}] = \left[ {[\varvec{P}]^{\varvec{T}} [\varvec{P}] } \right]^{{{\mathbf{ - 1}}}}[\varvec{P}]^{\varvec{T}} [\varvec{Q}], $$
where T and −1 indicate the transpose and inverse of the matrices, respectively. Further details about this method can be found in Singh (1988). In this study, all the calculations for this method are performed in Mathematica.
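A hedged numerical sketch of Eq. (6) is given below: the rainfall matrix [P] is assembled and the UH ordinates are recovered by ordinary least squares. The data are hypothetical, and a pseudo-inverse solver is used in place of the explicit matrix inversion for numerical stability.

```python
# Minimal sketch of the linear least squares de-convolution of Eq. (6); data are hypothetical.
import numpy as np

P = np.array([0.5, 1.2, 0.3])                                # effective rainfall pulses
Q = np.array([0.05, 0.32, 0.55, 0.50, 0.28, 0.12, 0.03])     # observed DRH ordinates
N, M = len(Q), len(P)
L = N - M + 1                                                # number of UH ordinates

# Build the N x L convolution matrix [P] so that [P][U] = [Q]
Pmat = np.zeros((N, L))
for m, p in enumerate(P):
    for r in range(L):
        Pmat[m + r, r] = p

U, *_ = np.linalg.lstsq(Pmat, Q, rcond=None)   # equivalent to ([P]^T[P])^-1 [P]^T [Q]
print(U)
```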
Study area and data
In this study, the potential of the six probability distribution functions for deriving the UH is investigated using data from the Lighvan River in northwest Iran. The Lighvan River watershed is located in East Azarbaijan in the northwest part of Iran (see Fig. 1), between 46°20′30″ and 46°27′30″ east longitude and 37°45′55″ and 37°49′30″ north latitude. This watershed is an important part of the Talkheh River catchment and has a drainage area of 76.19 km2. The maximum and minimum elevations of the watershed are about 3500 and 2000 m, respectively. The length of the longest stream is 17 km. The average stream slope is 11%. The Lighvan River drains into the Talkheh River and Urmia Lake. For this watershed, data availability is generally scarce. For the present analysis, data of rainfall and runoff corresponding to four different storms (Storm A, Storm B, Storm C, and Storm D) are considered for calibration of the models. Data corresponding to two other storms (Storm E and Storm F) are used for validation of the models. Details of these datasets are presented in Table 1.
Geographical location of Lighvan watershed, Iran
Table 1 Storm data for Lighvan watershed, Iran: time (hr), rainfall P (mm) and direct runoff Q (mm/hr) for Storms A, B, C and D (calibration) and Storms E and F (test)
It is relevant to note that the effective rainfall rates are computed using the Φ-index for each rainfall hyetograph, and the direct runoff hydrographs are obtained by separating base flow from flow hydrographs using the constant-discharge method.
We use six probability distribution functions for deriving unit hydrographs for the above datasets: two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson distribution, and two-parameter Weibull. The probability distribution parameters are determined using the nonlinear least squares optimization method by programming in Mathematica and by applying the genetic algorithm. The results are also compared with those obtained using the traditional linear least squares method.
Nonlinear optimization by programming in Mathematica
In the present analysis, the storm data are used to derive a 1-hour unit hydrograph. All the models used involve an inverse problem that optimizes the probability distribution function parameters by minimizing the difference between the actual and predicted direct runoff hydrographs. The probability distribution parameters are obtained using least squares optimization method.
Calibration of the models
The parameters of the probability distributions obtained for the storms (A–D) are shown in Table 2. The 1-hour unit hydrographs for the four datasets are presented in Fig. 2a–d, and the resulting direct runoff hydrographs are shown in Fig. 3a–d, respectively. Figure 2a–d indicate that none of the models have tail oscillations. The oscillations of the UH determined by the least squares method for storms B, C and D may be caused by errors in data measurements, the rainfall abstractions, base flow separation, and the non-uniform temporal and spatial distribution of rainfall. All the distribution functions predict the peak discharge, the time to peak, and the shape of the UH successfully for storm A. For storm B, all the distributions estimate the time to peak correctly. The peak discharge estimated by the Weibull and log-normal functions is closer to the actual value. The performance of all the models except the normal and Gumbel is satisfactory in predicting the peak discharge, time to peak, and the UH shape for storm C. The normal and Gumbel distributions are also not successful in predicting the time to peak and the rising limb of the UH for storm D. The peak discharge is estimated with less error by the log-normal model. Similar results can be obtained from Fig. 3a–d. Table 3 shows the objective function values for the different models. As can be seen from this table, the Gumbel and normal distribution functions have high objective function values for all the storms except storm A. The objective function values of the gamma and Pearson models are almost the same for all the storm data. For storm A, the Weibull and normal distributions outperform the other distributions, because these distributions showed a high ability in predicting the rising and recession limbs, as seen from Fig. 2a. For storms B and D, the lowest objective function values are for the log-normal distribution, whereas for storm C the gamma and Pearson show the lowest value of the objective function. If the average value of the objective functions over all four storms is considered, then the log-normal distribution gives the lowest objective function value (0.000473). The objective function value of the linear least squares method is very low, which indicates that this method is more capable than the nonlinear optimization method for deriving the UH.
Table 2 Parameters (\( \alpha \), \( \beta \), \( \gamma \)) of the probability distribution functions (gamma, Gumbel, log-normal, normal, Pearson and Weibull) calibrated by the nonlinear mathematical optimization method for Lighvan watershed
Comparison of UHs derived using the linear least squares method and distribution functions calibrated by the nonlinear mathematical optimization method for Lighvan watershed: a Storm A; b Storm B; c Storm C; and d Storm D
Comparison of observed and estimated DRHs related to the linear least squares method and distribution functions calibrated by the nonlinear mathematical optimization method for Lighvan watershed: a Storm A; b Storm B; c Storm C; and d Storm D
Table 3 Objective function values for the six distribution functions (gamma, Gumbel, log-normal, normal, Pearson and Weibull) and the linear least squares method for Lighvan watershed
Generally, based on the visual comparison at the calibration stage using the nonlinear optimization method, it was observed that the log-normal distribution estimates the time to peak and peak flow properly for all storms. This distribution along with the gamma, Pearson, and Weibull predicts the rising and recession limbs of the unit hydrographs more or less perfectly. Moreover, the log-normal distribution was recognized as the most successful model based on the average value of the objective function.
Validation of the models
In order to validate the models, the average values of the parameters of the distribution functions obtained for the four storms were calculated, and 1-hour unit hydrographs were derived using the distribution functions with these known parameters. The direct runoff hydrographs were obtained from these unit hydrographs by convoluting them with the effective rainfall rates for storms E and F. Figure 4a, b illustrate the derived unit hydrographs for storms E and F, and the resulting direct runoff hydrographs are shown in Fig. 5a, b, respectively. As can be seen from Figs. 4 and 5, the log-normal distribution predicts the peak flow for both storms and the time to peak for storm E with less error. The Weibull and Pearson distributions perform well in estimating the peak discharge and the time to peak for storm F, respectively. Furthermore, none of the distributions predict the rising and recession limbs properly. However, the gamma and Pearson models estimate the limbs fairly well. Note that the tail end of the hydrograph for storm E is also properly predicted by the gamma and Pearson distributions. Since storm F is the only one which occurred in the winter season, when the watershed is covered by snow, good performance of the models cannot be expected for this storm.
Comparison of UHs derived using the linear least squares method and distribution functions calibrated by the nonlinear mathematical optimization method for Lighvan watershed: a Storm E; and b Storm F
Comparison of observed and estimated DRHs related to the linear least squares method and distribution functions calibrated by the nonlinear mathematical optimization method for Lighvan watershed: a Storm E; and b Storm F
Besides the visual comparison, the model performance is also evaluated using the following three statistical measures:
Root mean squared error (RMSE):
$$ {\text{RMSE}} = \sqrt {\frac{{\sum\limits_{i = 1}^{n} {\left( {Q_{{e_{i} }} - Q_{{o_{i} }} } \right)}^{2} }}{n}} $$
Mean absolute error (MAE):
$$ {\text{MAE}} = \frac{1}{n}\sum\limits_{i = 1}^{n} {\left| {Q_{{{\text{e}}_{i} }} - Q_{{{\text{o}}_{i} }} } \right|} $$
Correlation coefficient (CC):
$$ {\text{CC}} = \frac{{\mathop \sum \nolimits_{i = 1}^{n} \left( {Q_{{{\text{o}}_{i} }} - \bar{Q}_{\text{o}} } \right)\left( {Q_{{{\text{e}}_{i} }} - \bar{Q}_{\text{e}} } \right)}}{{\sqrt {\mathop \sum \nolimits_{i = 1}^{n} \left( {Q_{{{\text{o}}_{i} }} - \bar{Q}_{\text{o}} } \right)^{2} } \sqrt {\mathop \sum \nolimits_{i = 1}^{n} \left( {Q_{{{\text{e}}_{i} }} - \bar{Q}_{\text{e}} } \right)^{2} } }}, $$
where \( Q_{{{\text{o}}_{i} }} \) and \( Q_{{{\text{e}}_{i} }} \) are the ith observed and estimated DRH ordinates, respectively; \( \bar{Q}_{\text{o}} \) and \( \bar{Q}_{\text{e}} \) represents the average discharge of the observed and estimated DRH, respectively, and n is the number of ordinates.
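The three criteria in Eqs. (7)-(9) are straightforward to compute; the sketch below uses hypothetical observed and estimated DRH ordinates purely for illustration.

```python
# Minimal sketch of RMSE, MAE and CC from Eqs. (7)-(9); ordinates are hypothetical.
import numpy as np

Q_obs = np.array([0.10, 0.45, 0.80, 0.60, 0.30, 0.12])
Q_est = np.array([0.12, 0.40, 0.85, 0.55, 0.33, 0.10])

rmse = np.sqrt(np.mean((Q_est - Q_obs) ** 2))
mae  = np.mean(np.abs(Q_est - Q_obs))
cc   = np.corrcoef(Q_obs, Q_est)[0, 1]
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}, CC = {cc:.3f}")
```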
Table 4 presents the values of the performance criteria. According to this table, the performance criteria values of the gamma and Pearson distributions were close to each other. The gamma distribution, with the lowest values of RMSE (0.010) and MAE (0.006 mm/h), and the Pearson distribution, with the lowest values of RMSE (0.025) and MAE (0.021 mm/h) and the highest value of CC (0.929), show successful performance for storms E and F, respectively. The performance of the log-normal model, with the highest value of CC (0.776) and low values of RMSE and MAE (0.012 and 0.009 mm/h, respectively), is successful for storm E. The Gumbel distribution may not be a suitable model for estimating the UH because of its high RMSE and MAE values for both storm datasets. Generally, the performance of almost all the models is more accurate for storm E than for storm F. The least squares method shows satisfactory results, considering its statistical measure values for both events. For storm F, this method shows more error than for storm E, because it generated a negative value for the first ordinate of the UH, which is the main disadvantage of this method. According to the results of the study by Singh (1976), unit hydrographs derived using the least squares method may not have a unit volume and some unit hydrograph ordinates may be negative.
Table 4 Performance criteria values (RMSE in mm/hr, MAE in mm/hr, and CC) for the six distribution functions and the linear least squares method for Storms E and F, Lighvan watershed
In general, the results of the validation stage showed that the performance of the log-normal distribution is satisfactory in predicting peak flow and time to peak. The gamma and Pearson models showed acceptable performance in simulating both limbs of the unit hydrographs. Hence, according to the values of the statistical measures, these distributions outperformed the others.
Nonlinear optimization by applying genetic algorithm
In this study, the real-coded genetic algorithm in the MATLAB software was applied to determine the optimal probability distribution parameters. The genetic algorithm parameters applied in this study, such as the crossover and mutation probabilities, are given in Table 5.
Table 5 Genetic algorithm parameters: population size (p) = 15 × (number of variables); generation (g) = 200 × (number of variables); crossover probability (p_c); mutation probability (p_m)
The optimal probability distribution parameters are shown in Table 6. Figure 6a–d illustrate the UHs obtained for storms A, B, C, and D, respectively, and Fig. 7a–d present the corresponding DRHs. From Figs. 6 and 7, it can be seen that for storm A, all the models estimate the time to peak properly, but the peak discharge is estimated more correctly by the Pearson and Weibull distributions. The performance of the normal model in estimating the rising limb of the unit hydrograph is noticeable. Figure 6b shows that all the models estimate the time to peak properly. However, the accuracy of the Pearson and log-normal distributions is high in predicting the peak flow. Almost all the models predict the rising limb of the unit hydrograph well. For storm C, all the models estimate the time to peak perfectly. The gamma, log-normal, Pearson, and Weibull distributions predict the peak flow and the rising and recession limbs of the UH with less error. For storm D, the gamma, Pearson and log-normal distribution models estimate the time to peak and the UH limbs satisfactorily. The peak discharge is also estimated properly by the log-normal model. Table 7 illustrates the objective function values of the distributions. According to this table, the Weibull distribution for storms A and D, and the log-normal and gamma functions for storms B and C, give the minimum values of the objective function, respectively. Based on the average value of the objective function, the Weibull distribution outperforms the other models over all storms. According to Tables 3 and 7, using the genetic algorithm increased the objective function values of the models compared with the nonlinear mathematical optimization. In other words, the nonlinear mathematical optimization method outperforms the genetic algorithm at the calibration stage.
Parameters of probability distribution functions calibrated by genetic algorithm for Lighvan watershed
Comparison of UHs derived using the linear least squares method and distribution functions calibrated by the genetic algorithm for Lighvan watershed: a Storm A; b Storm B; c Storm C; and d Storm D
Comparison of observed and estimated DRHs related to the linear least squares method and distribution functions calibrated by the genetic algorithm for Lighvan watershed: a Storm A; b Storm B; c Storm C; and d Storm D
Objective function values for six distribution functions calibrated by the genetic algorithm for Lighvan watershed
Generally, at the calibration stage using the genetic algorithm method, the log-normal, Pearson, and gamma models predicted the time to peak more or less properly for all storms. These models, along with the Weibull distribution, were also successful in simulating the rising and falling limbs of the UHs for all storms except A. The log-normal distribution showed a high ability in estimating the peak value for storms B, C, and D. However, the Pearson model computes the peak discharge well for storms A, B, and C. The Weibull distribution was distinguished as the most successful model based on the average value of the objective functions because of its excellent ability in preserving the UH shape for storm A.
Figure 8a, b show the one-hour unit hydrographs estimated using the average values of the obtained distribution parameters and the effective rainfall data for storms E and F, and Fig. 9a, b show the corresponding direct runoff hydrographs, respectively. According to Figs. 8 and 9, for storm E, the Gumbel, log-normal, and normal models estimate the time to peak perfectly. The log-normal distribution shows high potential in predicting the peak flow. The models did not show a high ability in estimating the recession limbs. For storm F, only the gamma and Weibull distributions estimate the time to peak and peak discharge precisely, respectively. All the models except the gamma and Pearson show poor performance in predicting the rising and recession limbs of the UH. Table 8 gives the values of the three statistical measures. The table illustrates that the gamma distribution, with the lowest values of RMSE (0.010) and MAE (0.007) and a fairly high value of CC (0.697), may be the best model for storm E. This distribution also shows the lowest values of RMSE (0.016) and MAE (0.012) and the highest value of CC (0.922) for storm F, making it the most suitable model. The Pearson model shows results almost similar to the gamma distribution for both storms. The log-normal model gives the highest CC value (0.738) and low RMSE (0.013) and MAE (0.009) values for storm E. The Gumbel model's performance, according to the statistical measures, is poor for both storms. Similar to the previous validation stage, the models' performance for storm E is better than for storm F. Using the genetic algorithm improved the models' capability only for storm F compared with the nonlinear mathematical optimization method.
Comparison of UHs derived using the linear least squares method and distribution functions calibrated by the genetic algorithm for Lighvan watershed: a Storm E; and b Storm F
Comparison of observed and estimated DRHs related to the linear least squares method and distribution functions calibrated by the genetic algorithm for Lighvan watershed: a Storm E; and b Storm F
Performance criteria values for six distribution functions calibrated by the genetic algorithm for Lighvan watershed
Generally, at the validation stage, the log-normal distribution showed good performance in predicting the time to peak and peak flow of the UH for storm E. The gamma and Pearson distributions were able to preserve the UH shape. Hence, the gamma distribution, with the lowest values of RMSE and MAE and the highest value of CC, is the best model for both storms. The Pearson model indicated similar results to the gamma distribution.
In this study, a nonlinear model was developed to transmute a unit hydrograph into a probability distribution function. The gamma, Gumbel, log-normal, normal, Pearson, and Weibull probability distribution functions were used to derive 1-hour unit hydrographs. The main advantage of this model is that the number of parameters to be determined is equal to the number of probability distribution parameters. Six different storm datasets from the Lighvan catchment were used: four for model calibration and two for validation. The calibration of the models was performed using the nonlinear least squares optimization method, by programming in Mathematica and by applying the genetic algorithm, and using the traditional linear least squares method.
In general, the following conclusions may be drawn:
The log-normal distribution function has a high potential in predicting the peak flow and the time to peak of the UH.
The gamma and Pearson distributions are better at preserving the rising and recession limbs of the UH.
The log-normal, gamma, and Pearson distribution functions can be applied for quick and approximate estimation of unit hydrographs for the Lighvan catchment.
The genetic algorithm did not improve the models' performance significantly compared with the nonlinear mathematical optimization.
The nonlinear optimization methods are not superior to the linear least squares method when there is only one excess rainfall pulse, but are comparable. The main disadvantage of the traditional least squares method is that it may generate negative unit hydrograph ordinates, especially when the number of excess rainfall pulses is greater than one.
We thank the anonymous reviewers and editor for their constructive and useful comments that helped us improve the quality of the paper.
See Appendix Table 9.
The probability distribution functions (pdf) used in this study
Gamma distribution (Bhattacharjya 2004)
\( f\left( x \right) = \frac{{x^{\beta - 1} }}{{\alpha^{\beta } \varGamma \left( \beta \right)}}\exp \left( { - \frac{x}{\alpha }} \right) \)
for \( x > 0 \), \( \alpha > 0 \), \( \beta > 0 \)
\( \alpha \beta = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } \)
\( \alpha^{2} \beta^{2} + \beta \alpha^{2} = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i}^{2} } \)
Gumbel distribution (Gumbel 1960)
\( f\left( x \right) = \frac{1}{\beta }\exp \left[ { - \exp \left( { + \frac{x - \alpha }{\beta }} \right) + \frac{x - \alpha }{\beta }} \right] \)
\( \alpha = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } - 0.5772157\beta \)
\( \beta = 0.7797\sqrt {\frac{{\sum\limits_{i = 1}^{n} {\left( {x_{i} - \bar{x}} \right)^{2} } }}{n}} \) where \( \bar{x} \) is the mean of \( x_{i} \)'s.
Log-normal distribution (Nadarajah 2007)
\( f\left( x \right) = \left( {{1 \mathord{\left/ {\vphantom {1 {\left( {x\beta \sqrt {2\pi } } \right)}}} \right. \kern-0pt} {\left( {x\beta \sqrt {2\pi } } \right)}}} \right)\exp \left[ { - {{\left( {\ln x - \alpha } \right)^{2} } \mathord{\left/ {\vphantom {{\left( {\ln x - \alpha } \right)^{2} } {\left( {2\beta^{2} } \right)}}} \right. \kern-0pt} {\left( {2\beta^{2} } \right)}}} \right] \)
for \( x > 0 \), \( -\infty < \alpha < \infty \), \( \beta > 0 \)
\( \alpha = \frac{1}{n}\sum\limits_{i = 1}^{n} {\ln x_{i} } \)
\( \beta = \sqrt {\frac{{\sum\limits_{i = 1}^{n} {\left( {\ln x_{i} - \alpha } \right)^{2} } }}{n}} \)
Normal distribution (Rao and Hamed 2000)
\( f\left( x \right) = \left( {{1 \mathord{\left/ {\vphantom {1 {\left( {\beta_{{}} \sqrt {2\pi } } \right)}}} \right. \kern-0pt} {\left( {\beta_{{}} \sqrt {2\pi } } \right)}}} \right)\exp \left[ { - {{\left( {x - \alpha } \right)^{2} } \mathord{\left/ {\vphantom {{\left( {x - \alpha } \right)^{2} } {\left( {2\beta^{2} } \right)}}} \right. \kern-0pt} {\left( {2\beta^{2} } \right)}}} \right] \)
\( \alpha = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } = \bar{x} \)
\( \beta = \sqrt {\frac{{\sum\limits_{i = 1}^{n} {\left( {x_{i} - \alpha } \right)^{2} } }}{n}} \)
Pearson distribution (Rao and Hamed 2000)
\( f\left( x \right) = \frac{1}{\beta \varGamma \left( \alpha \right)}\left( {\frac{x - \gamma }{\beta }} \right)^{\alpha - 1} e^{{ - \left( {\frac{x - \gamma }{\beta }} \right)}} \)
for \( x\rangle \gamma \)
\( \alpha = \frac{{4\left( {\sum\limits_{i = 1}^{n} {\left( {x_{i} - \bar{x}} \right)^{2} } } \right)^{3} }}{{n\left( {\sum\limits_{i = 1}^{n} {\left( {x_{i} - \bar{x}} \right)^{3} } } \right)^{2} }} \)
\( \beta = \sqrt {\frac{1}{n\alpha }\sum\limits_{i = 1}^{n} {\left( {x_{i} - \bar{x}} \right)^{2} } } \)
\( \gamma = \bar{x} - \alpha \beta \)
Weibull distribution (Bhunya et al. 2007)
\( f\left( x \right) = \frac{{\beta x^{\beta - 1} }}{{\alpha^{\beta } }}\exp \left[ { - \left( {\frac{x}{\alpha }} \right)^{\beta } } \right] \)
\( \alpha \varGamma \left( {\frac{1}{\beta } + 1} \right) = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } \)
\( \alpha^{2} \left[ {\varGamma \left( {1 + \frac{2}{\beta }} \right) - \varGamma^{2} \left( {1 + \frac{1}{\beta }} \right)} \right] = \frac{1}{n}\sum\limits_{i = 1}^{n} {\left( {x_{i} - \bar{x}} \right)^{2} } \)
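For reference, the moment relations above can be evaluated directly from a sample; for the gamma distribution they reduce to \( \alpha = s^{2}/\bar{x} \) and \( \beta = \bar{x}^{2}/s^{2} \). A minimal sketch with a hypothetical sample (the values are illustrative, not catchment data):

```python
# Minimal sketch: method-of-moments estimates for the gamma and normal distributions
# listed above, using a generic sample x (placeholder values, not catchment data).
import numpy as np

x = np.array([2.1, 3.4, 1.8, 4.2, 2.9, 3.1, 2.5])   # hypothetical sample

mean = x.mean()
var = x.var()          # population variance (divides by n), as in the appendix formulas

# Gamma: alpha*beta = mean, alpha^2*beta = variance  =>  alpha = var/mean, beta = mean^2/var
alpha_gamma = var / mean
beta_gamma = mean ** 2 / var

# Normal: alpha = mean, beta = standard deviation
alpha_norm = mean
beta_norm = np.sqrt(var)

print(f"gamma:  alpha={alpha_gamma:.3f}, beta={beta_gamma:.3f}")
print(f"normal: alpha={alpha_norm:.3f}, beta={beta_norm:.3f}")
```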
Bardsley WE (2003) An alternative distribution for describing the instantaneous unit hydrograph. J Hydrol 62(1–4):375–378
Bender DL, Roberson JA (1961) The use of a dimensionless unit hydrograph to derive unit hydrographs for some Pacific basins. J Geophys Res 66:521–527
Bhattacharjya RK (2004) Optimal design of unit hydrographs using probability distribution and genetic algorithms. Sadhana 29(5):499–508
Bhunya PK, Berndtsson R, Ojha CSP, Mishra SK (2007) Suitability of Gamma, Chi-square, Weibull, and Beta distributions as synthetic unit hydrographs. J Hydrol 334:28–38
Bruen M, Dooge JCI (1984) An efficient and robust method for estimating unit hydrograph ordinates. J Hydrol 70:1–24
Chow VT, Maidment DR, Mays LW (1988) Applied hydrology. McGraw-Hill International Editions, Singapore
Collins WT (1939) Runoff distribution graphs from precipitation occurring in more than one time unit. Civ Eng 9:559–561
Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng 186:311–338
Deb K, Agarwal RB (1995) Simulated binary crossover for continuous search space. Complex Syst 9:115–148
Deininger RA (1969) Linear program for hydrologic analysis. Water Resour Res 5:1105–1109
Dooge JCI, Bruen M (1989) Unit hydrograph stability and linear algebra. J Hydrol 111(1–4):377–390
Dooge JCI, Garvey BJ (1978) The use of Meixner functions in the identification of heavily damped systems. Proc R Irish Acad Sec A 78(18):157–179
Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, USA
Gumbel EJ (1960) Multivariate extreme distributions. Bull Inter Statist Inst 39(2):471–475
Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
Levi E, Valdes E (1964) A method for direct analysis of hydrographs. J Hydrol 2:182–190
Mays LW, Coles L (1980) Optimization of unit hydrograph determination. J Hydraul Div ASCE 106(HY1):85–97
Mays LW, Taur CK (1982) Unit hydrograph via nonlinear programming. Water Resour Res 18(4):744–752
Nadarajah S (2007) Probability models for unit hydrograph derivation. J Hydrol 344:185–189
Newton DW, Vinyard JW (1976) Computer-determined unit hydrograph from floods. J Hydraul Div 93(5):219–235
O'Donnell T (1960) Instantaneous unit hydrograph by harmonic analysis. IASH Publ 51:546–557
Prasad TD, Gupta R, Prakash S (1999) Determination of optimal loss rate parameters and unit hydrograph. J Hydrol Eng 4:83–87
Raghavendran R, Reddy PJ (1975) Synthesis of basin response with inadequate data. Nord Hydrol 6:14–27
Rai RK, Sarkar S, Upadhyay A, Singh VP (2010) Efficacy of Nakagami-m distribution function for deriving unit hydrograph. Water Resour Manag 24:563–575
Rao AR, Hamed KH (2000) Flood frequency analysis. CRC Press, Boca Raton. ISBN 978-0-8493-0083-7
Sherman LK (1932) Stream flow from rainfall by the unit hydrograph method. Eng News Rec 108:501–505
Singh KP (1976) Unit hydrographs: a comparative study. Water Resour Bull 12(2):381–392
Singh VP (1988) Hydrologic systems, vol 1. Prentice Hall, Englewood Cliffs
Singh VP (2011) An IUH equation based on entropy theory. Trans ASABE 54(1):131–140
Unver O, Mays LW (1984) Optimal determination of loss rate functions and unit hydrographs. Water Resour Res 20(2):203–214
© The Author(s) 2015
Open Access. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
1. Department of Water Engineering, University of Tabriz, Tabriz, Iran
2. Department of Biological and Agricultural Engineering, Texas A&M University, College Station, USA
3. School of Civil and Environmental Engineering, The University of New South Wales, Sydney, Australia
4. Department of Land, Air and Water Resources, University of California, Davis, USA
5. Mahatma Phule Krishi Vidyapeeth, Rahuri, India
Ghorbani, M.A., Singh, V.P., Sivakumar, B. et al. Appl Water Sci (2017) 7: 663. https://doi.org/10.1007/s13201-015-0278-y
Received 09 April 2014
The mass of a high-speed train is 4.5×10⁵ kg, and it is traveling forward at a velocity of 8.3×10¹ m/s.
In: Physics
The mass of a high-speed train is 4.5×10⁵ kg, and it is traveling forward at a velocity of 8.3×10¹ m/s. Given that momentum equals mass times velocity, determine the values of m and n when the momentum of the train (in kg⋅m/s) is written in scientific notation as m×10ⁿ.
Concepts and reason
The main concepts required to solve this problem are momentum, speed, and mass. First, write the equation for the momentum of a moving object. Then use this equation to find the momentum of the train and, by writing the result in scientific notation, determine the values of \(\mathrm{m}\) and \(\mathrm{n}\).
Momentum can be defined as the product of the mass and speed of the object. The equation for the momentum of the object is, \(P=m v\)
Here, \(\mathrm{m}\) is the mass and \(\mathrm{v}\) is the speed of the object.
Momentum applies only to objects that have mass; it is denoted by P and measured in \(\mathrm{kg} \cdot \mathrm{m} / \mathrm{s}\). The equation for the momentum of the moving train is \(P=m v\).
Here, \(\mathrm{m}\) is the mass of the train and \(\mathrm{v}\) is the speed of the train.
The momentum of an object depends on its mass and speed. If the object's speed increases, the momentum will increase, and if the speed of the object decreases, then the momentum will decrease.
The equation for the momentum of the train is, \(P=m v\)
Here, \(\mathrm{m}\) is the mass of the train and \(\mathrm{v}\) is the speed of the train. Substitute \(4.5 \times 10^{5} \mathrm{~kg}\) for \(\mathrm{m}\) and \(8.3 \times 10^{1} \mathrm{~m} / \mathrm{s}\) for \(\mathrm{v}\) in the above equation.
$$ P = \left(4.5 \times 10^{5}\ \mathrm{kg}\right)\left(8.3 \times 10^{1}\ \mathrm{m/s}\right) = 37.35 \times 10^{6}\ \mathrm{kg} \cdot \mathrm{m/s} = 3.735 \times 10^{7}\ \mathrm{kg} \cdot \mathrm{m/s} $$
According to scientific notation, the value of \(\mathrm{m}\) is 3.735 and the value of \(\mathrm{n}\) is 7 in the final expression of the momentum.
The value of \(\mathrm{m} = 3.735\) and the value of \(\mathrm{n} = 7\).
These values follow directly from expressing the computed momentum, \(3.735 \times 10^{7}\ \mathrm{kg} \cdot \mathrm{m/s}\), in the form \(\mathrm{m} \times 10^{\mathrm{n}}\).
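The same arithmetic can be verified numerically; a minimal sketch, with the input values taken from the problem statement and illustrative variable names:

```python
# Minimal sketch: compute the train's momentum and split it into mantissa and exponent.
import math

mass = 4.5e5        # kg
velocity = 8.3e1    # m/s

momentum = mass * velocity                  # kg·m/s
n = math.floor(math.log10(momentum))        # exponent in scientific notation
m = momentum / 10 ** n                      # mantissa

print(f"P = {m:.3f} x 10^{n} kg·m/s")       # P = 3.735 x 10^7 kg·m/s
```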
Heat transfer and film cooling measurements on aerodynamic geometries relevant for turbomachinery
Part of a collection:
Engineering: Recent Problems in Fluid Mechanics
Patrick Jagerhofer (ORCID: orcid.org/0000-0002-1241-111X), Jakob Woisetschläger (ORCID: orcid.org/0000-0002-7057-761X), Gerhard Erlacher & Emil Göttlich
SN Applied Sciences volume 3, Article number: 889 (2021)
A measurement technique for recording convective heat transfer coefficient and adiabatic film cooling effectiveness in demanding environments with highly curved surfaces and limited optical access, such as turbomachinery, is presented. Thermography and tailor-made flexible heating foils are used in conjunction with a novel multistep calibration and data reduction method. This method compensates for sensor drift, angle dependence of surface emissivity and window transmissivity, heat flux inhomogeneity, and conductive losses. The 2D infrared images are mapped onto the 3D curved surfaces and overlapped, creating surface maps of heat transfer coefficient and film cooling effectiveness covering areas significantly larger than the window size. The measurement technique's capability is demonstrated in a sector-cascade test rig of a turbine center frame (TCF), an inherent component of modern two-spool turbofan engines. The horseshoe vortices were found to play a major role for the thermal integrity of turbine center frames, as they lead to a local increase in heat transfer, and at the same instance, to a reduction of film cooling effectiveness. It was also found that the horseshoe vortices lift off from the curved surface at 50% hub length, resulting in a pair of counter-rotating vortices. The measurement technique was validated by comparing the data against flat plate correlations and also by the linear relation between temperature difference and heat flux. This study is complemented with an extensive error and uncertainty analysis.
Article highlights
This paper presents an accurate measurement technique for heat transfer and film cooling on 3D curved surfaces with limited optical access, using flexible tailor-made heating foils, infrared thermography and a high-fidelity multistep calibration process.
Driven by environmental protection legislation, the technological development in the field of modern turbomachinery must address process and efficiency optimization [1]. However, possible improvements are often hindered since machinery is designed with excessive safety margins strictly required because sufficiently accurate flow numbers are not at the engineer's disposal [2]. In thermal turbomachinery, the convective heat transfer coefficient and the film cooling effectiveness are such numbers, subject to greater uncertainty. Due to the coupling of heat transfer with secondary flow structures and the presence of cooling and purge flows, the situation quickly becomes complex [3]. With this work, the authors contribute to the development of experimental measurement techniques for heat transfer coefficients and film cooling effectiveness in the challenging environment of thermal turbomachinery.
Geometry investigated
This work discusses a steady measurement technique for heat transfer and film cooling demonstrated in a so-called turbine center frame (TCF), an inherent component of modern turbofan engines. The TCF is an S-shaped duct connecting the high-pressure (HPT) to the low-pressure turbine (LPT) in a turbofan aero-engine and is designed as a diffuser characterized by large concave and convex bends on the end-walls. Accordingly, the aerodynamics in these annular ducts is complex with a variety of secondary flows present [4]. In modern aero-engines, the turbine inlet temperature is increased to further improve the overall joule-efficiency of the engine, a requirement causing higher thermal loads in the TCF. To master these thermal loads, purge and cooling flows are needed, which lead to complex aerodynamics and non-uniform distributions of temperature and heat transfer.
When heat transfer coefficients are required, the surface temperature and the heat flux must be measured. Thermocouples or resistance thermometers positioned at specific positions along the surface are commonly used for temperature measurements [5, 6]. However, the ready accessibility of easy-to-use thermography caused a paradigm shift for surface temperature measurements. Cameras sensitive to infrared (IR) radiation for thermography can record areal surface temperatures non-intrusively with high sensitivity and low response time [7]. Therefore, the majority of more recent experiments use such IR cameras for temperature measurements [8,9,10,11,12,13,14]. In contrast to thermocouples, however, IR cameras have to undergo a more extensive calibration to deliver accurate temperature readings. Martiny et al. [15] present an in-situ calibration for infrared thermography with thermocouples embedded in the surface of interest; a setup later improved for highly curved surfaces by Aberle et al. [16]. Considering non-uniform background radiation reflected by the surface, Elfner et al. [17] developed a spatially resolved calibration based on ray-tracing algorithms.
For heat flux measurements different steady and transient techniques exist. Steady heat flux measurements either calculate the heat flux through a material of known thermal conductivity by measuring the temperatures on both sides [6, 11, 12], or record the local heat flux with the help of heat flux sensors, e.g., thin-film heaters [9, 10, 13]. Carlomagno [18] reviewed different types of heat flux sensors and concluded that thin-film heaters are precise and effective devices. Carlomagno [18] stated that an inherent problem with all foil-type heat flux sensors is tangential or lateral conduction. A correction for this type of conduction is discussed by Astarita et al. [10], using a generalized form of Fourier law adapted for anisotropic conductivity in a thermally thin plate. For transient techniques, the rapid heating or cooling of a specimen is recorded, and the heat transfer coefficient is calculated according to analytical or numerical models of the specimen. Metzger et al. [5] recorded the transient heat up of an aluminum block, and von Hoesslin et al. [14, 19, 20] recorded the temperature decline of a low conductivity coating after exposure to a high-energy laser pulse.
Only one experimental study about heat transfer in TCFs exists to the best of the authors' knowledge. Arroyo Osso et al. [11] investigated an aerodynamically "aggressive" TCF with non-turning structural struts. In this study, the endwalls and the struts were heated by internal water flows to measure the heat flux. The endwalls were made out of a thin polycarbonate sheet with a water channel on the backside. The struts were made of aluminum with bores for the hot water and a thin cover layer of epoxy resin. The heat flux was calculated for the endwalls with a one-dimensional heat transfer model and for the struts with a finite element analysis (FEA) simulation. Multiple spring-loaded hatches enabled optical access to the opposite side of the channel to cover the whole TCF surface while ensuring favourable viewing angles [11].
Concept of current research
In contrast to Arroyo Osso et al. [11], a single pair of IR windows and heating foils (thin-film heaters) on top of an adiabatic surface are used in this work to enable the heat transfer measurements in the TCF. The tailor-made and flexible heating foils can be slightly stretched in order to fit the three-dimensional curvature of the investigated surfaces and are designed for constant heat flux. Since the foils were mounted on an adiabatic carrier material, no FEA simulation considering heat conduction through the material is needed. On the other hand, temperature-dependent electrical resistance variations within the heating foil and non-uniform heat release are corrected.
To cover the whole region of interest, the IR camera had to be positioned at various angles to the surfaces investigated and the single pair of IR transparent windows used. To ensure high precision and accuracy, a novel four-step temperature calibration of the recorded IR images is presented in this work, including:
the calibration against an isothermal surface in a vacuum chamber,
the correction of emissivity as a function of observation angle,
the correction of window transmissivity at sharp viewing angles and
an in-situ calibration with embedded thermocouples.
Since heat transfer and cooling are closely related, the film cooling effectiveness of the purge flow ejected from the cavity of the upstream turbine rotor was investigated. Firstly, the heat transfer coefficient and the film cooling effectiveness of an undisturbed inflow condition are presented and then compared to a setup where the TCF inflow is disturbed by inlet pegs. This study is completed by an error analysis covering all of the aforementioned challenges in using infrared thermography and heating foils in engine-relevant geometries.
This section briefly introduces the sector-cascade test rig and its instrumentation. A more thorough explanation of the design and commissioning of the test rig and the aerodynamic measurement equipment can be found in Jagerhofer et al. [21]. The main focus of this paper lies on the data reduction scheme starting with raw measurement data, such as uncalibrated IR images and electrical power to the heating foils, until the final surface maps of convective heat transfer coefficient h and film cooling effectiveness η.
Test rig and operating conditions
Figure 1 shows an exploded view of the sector-cascade test rig. The incoming main flow of approx. 0.54 kg/s was delivered by a variable speed centrifugal compressor. After the inlet duct, the main flow entered the test section of one full TCF passage situated between two quarter passages. The hub (inner endwall) and struts of the TCF were machined out of Rohacell IG-F 71, a quasi-adiabatic material with a very low thermal conductivity of approx. \({\lambda }_{Rohacell}=\) 0.03 W/m·K. On this insulating surface, the heating foils were applied and spray-painted with high emissivity paint (Nextel Velvet 811–21) with an emissivity of 0.967 and a thermal conductivity of 0.197 W/m·K [22]. After spray-painting, the surface was sanded to obtain a hydraulically smooth surface finish [23]. The heating foils were powered by standard laboratory power supplies. A row of cylinders or pegs was installed at the outlet of the TCF to simulate the blockage effect of the downstream low-pressure turbine vanes. The purge flow emanating from the aft hub cavity of the upstream HPT bears a significant cooling potential for the TCF [21]. For this reason, the aft hub and shroud cavities of the HPT were realized as purge plenums with engine-relevant seal geometries. The optical access for the IR camera was implemented by fitting two barium fluoride windows on the shroud surface of the TCF.
Exploded view of the sector-cascade test rig (adapted from [21])
Table 1 summarizes the operating conditions of the two test cases, a case with undisturbed inflow conditions and a case where cylinders or inlet pegs were installed at the TCF inlet. For both cases, the free-stream Mach number at the TCF inlet equaled 0.14 and the Reynolds number based on the strut chord length was 4.25 × 105. The blowing ratio of the hub purge flow was set to 0.21, a relatively high value for aero-turbines. The shroud purge flow was not investigated in this paper and was switched off for the experiments presented in this work. The undisturbed inflow case corresponds to the "chilled high purge" case in Jagerhofer et al. [23] and is explained more in detail there. The purpose of the inlet pegs, installed in the second test case, was to create disturbed inlet flow conditions with wakes and increased turbulence; a highly simplified abstraction of an upstream turbine stage. As illustrated in Fig. 1, four inlet pegs with a pitch of 7.5° were positioned in front of one TCF passage and were aligned in a way that no wake of the pegs impinged on the struts' leading edges. The pegs diameter, d, equals 35% of the struts' maximum thickness with the pegs axially situated 6.5d upstream of the struts' leading edges.
Table 1 Operating conditions
Figure 2 shows a cross-section of the sector-cascade rig with its instrumentation. A total temperature probe, a pitot-static tube, and an orifice plate (not seen in Fig. 2) were positioned far upstream in the main supply pipe to set the operating point. To characterize the inflow, a thermocouple-equipped five-hole probe was radially traversed at the TCF inlet. The five-hole-probe measurement-results of the undisturbed inflow condition (without inlet pegs) can be found in Jagerhofer et al. [21]. The hub purge flow temperature was measured using a single calibrated thermocouple in the axial clearance of the rim seal. The circumferential purge flow uniformity was monitored with nine equally spaced pressure taps in the same location.
Sector-cascade rig instrumentation (adapted from [21])
The hub and strut surfaces were covered with six tailor-made heating foils designed with and produced by the Austrian industrial collaborator ATT GmbH. The heating foils of this manufacturer consist of several different layers. Starting from bottom, the first layer is heat resistant and flexible glue with a thickness of approx. 200 µm, thick enough to compensate for the different thermal expansions of the heating foil and the Rohacell substrate. The next layer is a 50 µm thick Kapton insulation layer, followed by the 35 µm thick active layer of etched copper conductor tracks. The width and the spacing of the meandering copper tracks dictate the local electric heat production and are designed in an iterative process to deliver constant heat flux. After another insulating 50 µm layer of Kapton, a full sheet of 35 µm copper is added to laterally distribute the heat between the hot copper tracks and the cold interstitial gaps between them. This distribution layer is just thick enough so that the single copper tracks are not visible in the IR image but thin enough to prevent significant lateral conduction on a macroscopic scale. The last layer on top is again a Kapton layer of 50 µm. The overall thickness of the heating foil without glue equals 220 µm. A relatively thick (100 µm) layer of the high emissivity paint and the uppermost Kapton layer acted as thermal insulation that reduces the impact of the underneath copper layer on the lateral conduction along the flow-wetted surface.
The surfaces of interest were observed using a FLIR T650-sc IR camera with an uncooled microbolometer and a relatively high thermal sensitivity of 20 mK. To further improve the IR measurement accuracy, nine single-calibrated thermocouples braced on 0.3 mm thick and 6 mm diameter copper discs were placed along the surfaces of interest for an in-situ calibration. These in-situ thermocouples were located on the gaps between the heating foils and were painted together with the heating foils after installation.
At the end of the measurement campaign, an oil dot flow visualization was performed to visualize the trajectories of the wall shear stress. Instead of regular industrial oil, a mixture of glycerol, talcum powder and food coloring was used in order not to harm the high emissivity paint. Talcum powder was used to adjust the desired viscosity of the mixture. Equally sized dots of the mixture were applied on the hub and struts of the TCF. Then the test rig was operated at the nominal operating point until a desired running length of the oil dots was achieved. The running direction of the oil dots then visualized the local wall shear stress trajectories.
Light sheet flow visualization based on a fundamental particle image velocimetry setup [24] was used as an additional qualitative visualization technique of the flow field. The main flow was seeded with small oil droplets coming from a seeding generator far upstream in the supply pipe of the test facility. The light sheet was produced by a 100 mW diode laser with a cylindrical lens and guided through the BaF2 window into the TCF. Images of the light sheet were taken with a standard DSLR camera (Canon EOS 250D).
Data reduction
Figure 3 shows the main data reduction scheme as a data flowchart based on ISO 5807. Blue parallelograms denote input data acquired during the measurement, white parallelograms denote interim data, and white rectangles denote data processing. The whole scheme in Fig. 3 can be divided into the postprocessing of the temperatures measured with the IR camera, the postprocessing of the heat flux produced by the heating foils, and the final combination of these data into the convective heat transfer coefficient h and the film cooling effectiveness η. The following section thoroughly illustrates how data is processed during each step.
Main scheme of data reduction, calibration and correction
Definition of heat transfer coefficient and film cooling effectiveness
To measure the heat transfer coefficient h of a given operating point, two sets of IR images and the electric power used to heat the foils had to be acquired at constant operating conditions. The first set of images was acquired with heated foils and the second without heating. The condition without heating is considered quasi-adiabatic due to the very low conductivity of the substrate material. The heat transfer coefficient h is thus defined by:
$$h = \frac{{\dot{q}}}{{T_{S} - T_{a.S} }}$$
where \(\dot{q}\) is the final heat flux field, TS the final temperature field with heating and Ta.S the quasi-adiabatic final temperature field of the investigated surfaces.
To measure the film cooling effectiveness η, the heating was off and two sets of IR images were acquired, one with the hub purge flow switched off and one with the purge flow set to the desired blowing ratio M. The film cooling effectiveness η is defined by:
$$\eta = \frac{{T_{a.S\,NP} - T_{a.S\,P} }}{{T_{a.S\,NP} - T_{P} }}$$
where Ta.S NP is the quasi-adiabatic final temperature field without purge, Ta.S P the quasi-adiabatic final temperature field with purge and TP the temperature of the purge flow.
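Both definitions are simple element-wise operations on the final surface fields; a minimal sketch, assuming the mapped fields are stored as per-cell NumPy arrays (all names and numerical values below are illustrative, not the rig data):

```python
# Minimal sketch of Eqs. 1 and 2: element-wise evaluation of h and eta on the surface mesh.
# q_final, T_heated, T_adiabatic, T_ad_no_purge, T_ad_purge are assumed to be per-cell
# arrays produced by the data reduction; T_purge is the purge-flow temperature.
import numpy as np

def heat_transfer_coefficient(q_final, T_heated, T_adiabatic):
    """h = q / (T_S - T_a.S), Eq. 1."""
    return q_final / (T_heated - T_adiabatic)

def film_cooling_effectiveness(T_ad_no_purge, T_ad_purge, T_purge):
    """eta = (T_a.S,NP - T_a.S,P) / (T_a.S,NP - T_P), Eq. 2."""
    return (T_ad_no_purge - T_ad_purge) / (T_ad_no_purge - T_purge)

# Example with placeholder per-cell values
q_final = np.array([850.0, 900.0, 780.0])        # W/m^2
T_heated = np.array([318.0, 316.5, 319.2])       # K
T_adiabatic = np.array([303.0, 302.8, 303.1])    # K
h = heat_transfer_coefficient(q_final, T_heated, T_adiabatic)
print(h)   # approx. [56.7, 65.7, 48.4] W/(m^2 K)
```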
Temperature data reduction
Starting with a single raw IR image, the first step is to map the 2D image onto the 3D surface of the TCF using projective geometry [25], as shown in Fig. 3. This direct linear transformation (DLT) is used to project the pixels of the image onto a block-structured surface mesh of the TCF. Please note that the surface mesh resolution must be at least as fine as the finest spatial resolution of all IR images (pixels/mm) to avoid downsampling and mapping errors. Point correspondences between the 2D image plane and the 3D surface mesh of the TCF are necessary for transformation. These correspondences were realized by painting 114 reference points with 1.5 mm diameter on the TCF surface using a high reflectivity paint. This made the reference points visible in the raw IR image, marked with circles in Fig. 4a. The 3D coordinates of the points were measured with a laser scanning measurement arm (Quantum FaroArm). The points were distributed so that at least ten reference points are visible in every image. Please note that the DLT algorithm only needs at least 5½ reference points, but over-determination leads to a more stable and accurate mapping. After rejecting undesired pixel areas (i.e., window frame or unheated areas), the 2D-3D mapping results in the raw IR 3D patch shown in Fig. 4c.
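The mapping step relies on estimating a 3×4 projection matrix from the reference-point correspondences. The sketch below shows the standard homogeneous DLT solved with an SVD; it is a generic illustration with hypothetical point arrays, not the authors' implementation.

```python
# Minimal sketch of the direct linear transformation (DLT): estimate the 3x4 camera
# matrix P from >= 6 correspondences between 3D reference points X3d and image pixels x2d.
import numpy as np

def estimate_projection_matrix(X3d, x2d):
    """X3d: (N, 3) reference-point coordinates, x2d: (N, 2) pixel coordinates, N >= 6."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X3d, x2d):
        Xh = [Xw, Yw, Zw, 1.0]
        rows.append([*Xh, 0, 0, 0, 0, *(-u * np.array(Xh))])
        rows.append([0, 0, 0, 0, *Xh, *(-v * np.array(Xh))])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)          # right singular vector of the smallest singular value
    return P / P[-1, -1]

def project(P, X3d):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]
```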
Temperature postprocessing: from raw IR image to one calibrated temperature 3D patch
An inherent problem of a microbolometer sensor is its long-term drift and the relative drift from pixel to pixel. Therefore, a calibration procedure tailored to the experimental setup was conducted before every test campaign, where the IR camera was calibrated against an isothermally heated copper block placed in a vacuum chamber. The copper block was painted with the same high emissivity paint as the TCF surface, and the optical access of the vacuum chamber was realized with the same IR window as used in the test rig. The 180 mm × 150 mm × 20 mm copper block used for calibration was heated on its backside with a heating foil, and its back and side faces were insulated with Rohacell. The block's temperature was measured with four single calibrated thermocouples immersed into the sidewalls of the block. There was no influence of natural convection due to the vacuum in the chamber. The temperature drop over the paint thickness was corrected by a 1D heat flux balance, where the heat flux crossing the layer of paint was assumed to be equal to the radiative heat flux from the paint surface to the inner walls of the vacuum chamber. To convert the raw counts of the IR footage into temperatures, the Atlas software development kit (SDK, FLIR) was used, with the paint's emissivity, the window's transmissivity and the temperatures driving the background radiation set. These driving temperatures are the window temperature and the temperature of the vacuum chamber, which were measured by single calibrated thermocouples. The copper block calibration results in a calibration curve for every pixel of the IR camera, needed to convert the raw IR 3D patch into the temperature field along the surface. Figure 4d shows the impact of the copper block calibration as temperature difference.
Figure 2 illustrates that the camera has to be used at shallow viewing angles, αS, on the observed surface and, αW, on the window, to cover all surfaces of interest. These variations in angle lead to variations in IR emission from the surface, here discussed in terms of surface emissivity variations of the paint εS, and transmissivity variations of the window τW, always compared to the perpendicular viewing angle of the copper block calibration. To account for the emissivity drop, an isothermally heated copper cylinder, again painted with the same high emissivity paint, was placed in the vacuum chamber. The emissivity was recorded as a function of the surface observation angle αS, using the freshly calibrated IR camera. The resulting emissivity curve is shown in Fig. 5, and the values were found to be similar to the measurements of Lohrengel et al. [22]. The increasing deviation from the Fresnel correlation for viewing angles < 40° can be explained with the paint's solid pigments causing a dull and rough surface. Please note that the paint of the copper block and cylinder was also sanded to obtain the same surface finish as in the test rig. The transmissivity drop of the 10 mm thick BaF2 IR window was recorded by exposing the IR camera to the isothermal copper block and by step-wise inclining the IR window in the optical path. The transmissivity curve in Fig. 5 shows a significant drop for shallow viewing angles below 40°, underlining the importance of this calibration step.
Paint emissivity and window transmissivity as a function of observation angle
The DLT algorithm can also estimate the camera position. This eases image acquisition procedure since no precise traversing mechanism is needed, and the camera could also be used handheld and moved freely in space while acquiring the images. The estimate of the camera position is then used to calculate the viewing vector (Fig. 4b) and subsequently the surface observation angle αS, and the window angle αW, for each pixel. With the curves of Fig. 5, the corresponding surface emissivity and window transmissivity are found and must be updated in Atlas SDK, delivering the viewing angle corrected temperatures for every pixel. Note that the window, ambient and reflected temperature had to be acquired for every test run and set for the correct conversion of digital intensity counts to temperature. The result is the final calibrated temperature 3D patch shown in Fig. 4g. The impact of the surface angle and window angle calibration is shown as temperature difference in Fig. 4e and f. This procedure has to be repeated for every image of the set.
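In code, the per-pixel viewing angle and the corresponding emissivity follow from simple vector geometry and interpolation of the calibration curve; a minimal sketch, in which the calibration table values are hypothetical placeholders for the curves in Fig. 5:

```python
# Minimal sketch: per-pixel surface viewing angle and interpolation of the
# angle-dependent emissivity curve (calibration table values are placeholders).
import numpy as np

def surface_viewing_angle_deg(camera_pos, surface_points, surface_normals):
    """Angle between the view vector (surface -> camera) and the local surface plane,
    in degrees (90 deg = perpendicular viewing, small values = shallow viewing)."""
    view = camera_pos - surface_points
    view /= np.linalg.norm(view, axis=1, keepdims=True)
    normals = surface_normals / np.linalg.norm(surface_normals, axis=1, keepdims=True)
    sin_a = np.clip(np.abs(np.sum(view * normals, axis=1)), 0.0, 1.0)
    return np.degrees(np.arcsin(sin_a))

# Hypothetical calibration table: angle from the surface (deg) vs. paint emissivity
angle_table = np.array([10.0, 20.0, 30.0, 40.0, 60.0, 90.0])
eps_table = np.array([0.80, 0.90, 0.94, 0.96, 0.965, 0.967])

def emissivity(angle_from_surface_deg):
    return np.interp(angle_from_surface_deg, angle_table, eps_table)
```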
After having mapped and calibrated one complete set of IR images, the patches had to be combined to produce a temperature field covering all sections of interest, the TCF's hub and struts. Due to spatially varying background radiation and other imperfections in the measurement technique, the calibrated temperature 3D patches may show a temperature offset in overlapping areas. This is most pronounced at very shallow observation angles, mainly when the outer window surface reflects the radiation from the hot window frame. In the first step, the offset is removed by subtracting it from the affected patch. The starting point for this procedure always is a patch observed at a nearly perpendicular observation angle with respect to surface and window. This first patch reflects the fully calibrated conditions, and the other patches are corrected for varying background radiation at shallow angles.
In a second step, the following blending function is used to merge the temperature patches while providing a smooth transition in overlapping regions:
$$T = \frac{{\mathop \sum \nolimits_{i}^{n} w_{i} T_{i} }}{{\mathop \sum \nolimits_{i}^{n} w_{i} }}; w_{i} = \frac{{d_{max,i} - d_{i} }}{{d_{max,i} }}$$
where wi is the weight of the temperature Ti of the ith patch. The weight wi is based on the distance di of the affected point from the center of the patch. The weight of the ith patch equals zero at the corner farthest from the center (dmax,i) and increases towards the center of the patch. By using the offset correction and the blending function in Eq. 3, the set of calibrated temperature 3D patches shown in Fig. 6a are overlapped and result in the 3D temperature field shown in Fig. 6b.
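A minimal sketch of the blending in Eq. 3, assuming each patch has already been resampled onto the common surface mesh (array layout and names are illustrative):

```python
# Minimal sketch of Eq. 3: distance-weighted blending of overlapping temperature patches.
# Each patch provides, per surface-mesh cell, a temperature (NaN where not covered),
# the distance d_i to its own patch center, and the maximum distance d_max_i.
import numpy as np

def blend_patches(T_patches, d, d_max):
    """T_patches, d: (n_patches, n_cells); d_max: (n_patches,). Returns blended T per cell."""
    w = (d_max[:, None] - d) / d_max[:, None]          # w_i = (d_max,i - d_i) / d_max,i
    w = np.where(np.isnan(T_patches), 0.0, np.clip(w, 0.0, None))
    T = np.where(np.isnan(T_patches), 0.0, T_patches)
    weight_sum = w.sum(axis=0)
    return np.where(weight_sum > 0, (w * T).sum(axis=0) / weight_sum, np.nan)
```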
Temperature postprocessing: from the set of calibrated temperature 3D patches to the final temperature field
In the next step, the in-situ calibration of the 3D temperature field has to be performed. The copper discs of the nine in-situ thermocouples are marked with circles in the temperature field in Fig. 6b. A single calibrated thermocouple was brazed to the bottom side of each disc. The size of the copper discs ensured that a sufficient number of the camera's pixels read the same temperature as the thermocouple. In the case of heating the foils, an unknown heat flux passed from the borders of the heating foils to the copper discs and subsequently through the paint into the main flow. This heat flux led to a temperature drop over the paint thickness tP ≈100 µm, which needed to be compensated for the in-situ calibration. By assuming one-dimensional conduction through the layer of paint, this temperature drop is calculated by:
$$\frac{{\lambda_{P} }}{{t_{P} }}\left( {T_{TC,heated} - T_{corr.} } \right) = h\left( {T_{corr.} - T_{TC, unheated} } \right)$$
Here, TTC,heated and TTC,unheated are the thermocouple readings of the heated and unheated conditions, Tcorr. is the temperature on top of the painted copper disc when heated, and λP is the thermal conductivity of the paint taken from Lohrengel et al. [22]. Note that for unheated conditions, no temperature drop exists over the paint thickness, and the temperatures above and below the paint are identical. The left-hand side of Eq. 4 is the heat flux through the paint, which is equal to the right-hand side, representing the convective heat flux from the paint's surface into the main flow. Since the convective heat transfer coefficient, h, is needed, Eq. 4 has to be solved for Tcorr. iteratively.
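Rearranged for Tcorr., Eq. 4 gives the corrected disc temperature for the current estimate of h; in the full scheme, h is recomputed from the corrected fields and this correction is re-evaluated until both values stop changing. A minimal sketch with placeholder input values:

```python
# Minimal sketch of Eq. 4, rearranged for T_corr (all input values are placeholders).
# In the full reduction, h itself is recomputed from the corrected fields, so this
# correction is repeated until h and T_corr no longer change.
lambda_paint = 0.197      # W/(m K), thermal conductivity of the high-emissivity paint
t_paint = 100e-6          # m, paint thickness
k = lambda_paint / t_paint

T_tc_heated = 318.4       # K, thermocouple reading, heated case
T_tc_unheated = 303.2     # K, thermocouple reading, unheated case
h_local = 60.0            # W/(m^2 K), current estimate of the local heat transfer coefficient

# (lambda_P / t_P) * (T_tc_heated - T_corr) = h * (T_corr - T_tc_unheated)
T_corr = (k * T_tc_heated + h_local * T_tc_unheated) / (k + h_local)
print(f"T_corr = {T_corr:.3f} K")
```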
In the next step, the temperature difference between the (paint thickness corrected) thermocouple reading and the 3D temperature field in Fig. 6b is calculated for the nine discrete positions of the thermocouples. To obtain a field of temperature offsets from the nine discrete thermocouples, the natural neighbor interpolation [26] is used. Outside the area spanned by the nine thermocouples marked as crosses in Fig. 6c, "ghost positions" marked with G are introduced. The ghost positions have the same ΔT value as the closest thermocouple and have the purpose to extend the interpolation area over the whole TCF surface. The result of the nearest neighbor interpolation is the temperature difference field in Fig. 6c, representing the in-situ calibration offset which is added to the 3D temperature field in Fig. 6b.
The reference points and the gap between the heating foils at the centerline are removed from the in-situ calibrated 3D temperature field in Fig. 6d using an inverse distance interpolation. The gaps between the struts and the hub are still visible in the results because the gaps are too wide for a reliable estimation of the temperature by the neighboring heated areas. In a final step, the temperature field is smoothed by shifting the temperature value at a specific point towards the average of its neighboring data points. After applying all these steps, the final temperature field in Fig. 6e results and the temperature postprocessing is finished.
Heat flux data reduction
Since heat production in the heating foils is based on the resistance of the copper conductor tracks, the local heat release is linked to the local temperature through the copper's temperature coefficient of resistance α. The average heat flux \({\dot{q}}_{el \,avg}\) produced by the heating foil is the supplied voltage times current divided by the heated area of the heating foil. \(\dot{q}_{el\, avg}\) equals the local supplied electric power \(\dot{q}_{el}\) only when the temperature on the heating foil's surface TS equals the average foil temperature, TS,avg HF. Otherwise, the heat flux must be corrected by the following equation [21]:
$$\dot{q}_{el } = \dot{q}_{el\, avg} \left( {1 + \alpha \left( {T_{S} - T_{S,\, avg\, HF} } \right)} \right)$$
Figure 7a shows the heat flux after correction with Eq. 5 for the temperature dependence of the copper track's resistance. The temperature field used for the correction is the final temperature field seen in Fig. 6e, with this data flow indicated by an arrow in Fig. 3.
Heat flux postprocessing
Although the heating foils were designed to deliver constant heat flux, a residual inhomogeneity of ± 10% per heating foil is possible. Therefore, a step-wise transient correction similar to Lazzi Gazzini et al. [13] is implemented, where the local heat release of the heating foils is assumed to be proportional to the local temperature increase in a transient experiment without flow. Starting at ambient temperature and without flow, the heating foils were switched on with the temperature increase recorded. By choosing two frames where the impact from free convection and lateral conduction is still negligible, a ΔT map covering all heating foils on the TCF is created. The relative heat flux inhomogeneity per heating foil \(\dot{q}_{inhom. rel.}\) is then calculated with
$$\dot{q}_{inhom. rel.} = \frac{\Delta T}{{\Delta T_{avg\, HF} }}$$
where ΔTavg HF is the average temperature increase per heating foil. This simple approach is possible because the thermal properties of the heating foil and the thermal effusivity of the substrate are constant, and the time step between the frames is the same for every position along the surface. The relative heat flux inhomogeneity map is shown in Fig. 7b.
The hub and the struts are machined out of Rohacell foam. Although nearly adiabatic, the conductive heat loss through the relatively thin struts cannot be completely neglected. The struts were only heated on the sides facing the flow passage investigated. Figure 8 shows a schematic of the strut and the underlying one-dimensional heat resistance network. Since the strut is symmetric and the inflow is without swirl, the same heat transfer coefficient h existed on both sides of the strut. The heating foil had the temperature THF, with the thicknesses of glue and heating foil's layers being neglected. Due to the very low conductivity of Rohacell and the excellent bond between the glue and the Rohacell substrate, their contact resistance was neglected. These assumptions lead to an error of less than one percent in the following calculations. In Fig. 8, the Rohacell and the high emissivity paint acted as competing thermal resistances. Their ratio drove the splitting of the supplied power \(\dot{q}_{el}\) into the desired heating of the investigated surface \(\dot{q}\), and the undesired lost heat flux into the not investigated quarter passage \(\dot{q}_{loss}\).
Simplified heat resistance network of the struts
Since the driving temperature difference THF − T∞ was the same for the desired and the lost heat flux, the relation of \(\dot{q}\) and \(\dot{q}_{loss}\) is a function of the heat resistances:
$$\frac{{\dot{q}_{loss} }}{{\dot{q}}} = \frac{{1/h + t_{P} /\lambda_{P} }}{{1/h + t_{Rohacell} /\lambda_{Rohacell} }} = \dot{q}_{loss, rel.}$$
with tRohacell the local thickness of the Rohacell substrate. The relative conduction loss is shown in Fig. 7c. Since this 1D correction is used on the struts only, a radial blending function is defined with the fillet radius between hub and struts.
The relative heat flux inhomogeneity in Fig. 7b is used together with the relative conduction loss in Fig. 7c to correct the heat flux, \(\dot{q}_{el}\):
$$\dot{q} = \dot{q}_{el} \frac{{1 + \dot{q}_{inhom. rel.} }}{{1 + \dot{q}_{loss, rel.} }}$$
Firstly, the heat transfer coefficient h is calculated without using the one-dimensional conduction loss correction in Eq. 7 and secondly, h is iteratively corrected by computing values for \(\dot{q}\) using Eqs. 7 and 8. Convergence was achieved after less than ten repetitions. The final \(\dot{q}\) field with all aforementioned corrections is shown in Fig. 7d.
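The combination of Eqs. 1, 7 and 8 is a short fixed-point loop; a minimal sketch for a single strut location, with all material and flow values chosen as hypothetical placeholders:

```python
# Minimal sketch: iterative correction of the strut heat flux and heat transfer
# coefficient using Eqs. 1, 7 and 8 (all numerical values are placeholders).
lambda_paint = 0.197        # W/(m K)
t_paint = 100e-6            # m
lambda_rohacell = 0.03      # W/(m K)
t_rohacell = 0.02           # m, local Rohacell thickness of the strut (hypothetical)

q_el = 900.0                # W/m^2, supplied electric heat flux (after Eq. 5)
q_inhom_rel = 0.03          # relative heat flux inhomogeneity from Eq. 6 (hypothetical)
T_s, T_as = 318.0, 303.0    # K, heated and quasi-adiabatic surface temperatures

q = q_el                    # first pass without the conduction-loss correction
for _ in range(10):
    h = q / (T_s - T_as)                                         # Eq. 1
    q_loss_rel = (1.0 / h + t_paint / lambda_paint) \
                 / (1.0 / h + t_rohacell / lambda_rohacell)      # Eq. 7
    q = q_el * (1.0 + q_inhom_rel) / (1.0 + q_loss_rel)          # Eq. 8
print(f"h = {h:.1f} W/(m^2 K), relative conduction loss = {q_loss_rel:.3f}")
```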
Computational cost
The computational cost for the entire data reduction scheme shown in Fig. 3 is approximately 4 h using 24 Intel Xeon E5-2680 v3 cores for a surface mesh with approximately 500,000 cells. The most time-consuming step is the calculation of the surface and window angles for each pixel of the raw IR images, which takes 94% of the computation time. The rest of the data reduction scheme is processed using a single core.
This section presents the final results after conducting all of the aforementioned data reduction schemes, calibrations and corrections, summarized in Fig. 3. Figure 9 shows a qualitative oil dot flow visualization of the undisturbed inflow condition and surface maps of h and η on the hub for the undisturbed inflow and the inlet pegs condition. A hub purge flow with a blowing ratio of M = 0.21 and a density ratio of DR≈1.1 was injected from the hub cavity exit in both conditions.
Oil dot flow visualization, heat transfer coefficient and film cooling effectiveness of the investigated test cases (Horseshoe vortex HSV)
The mapped photography of the oil dot visualization is shown in Fig. 9a overlaid with wall shear stress directions in red and a qualitative illustration of one leg of the horseshoe vortex (HSV) in blue. As indicated by the curvature of the wall shear stress trajectories, the horseshoe vortices strongly direct the flow from the hub towards the center of the channel in the vicinity of the strut leading edge. After 50% hub length, the influence of the horseshoe vortices on the wall shear stress orientation diminishes as the vortices lift off from the hub surface and migrate towards midspan. This lift-off was confirmed with the light sheet flow visualization shown in Fig. 10. Figure 10a shows the position of the light sheet, and Fig. 10b shows the photograph of the light sheet fitted on the 3D model of the TCF. The core of one leg of the now expanded horseshoe vortex can be identified as the dark spot circled red in Fig. 10b because the seeding oil droplets are driven out from the core of the vortex due to centrifugal forces. Although not shown here, the same is true for the other leg of the horseshoe vortex since the flow in the TCF is fully symmetrical and momentum must be conserved. After the lift-off, the horseshoe vortices transition into a vortex pair, obviously driven by the convex curvature of the hub close to the TCF exit [27]. On the strut in Fig. 9a, an s-shaped deviation of the shear stress orientation from the aerodynamic contour was found. As described by Steiner [28], this alternating (first up, then down) radial flow migration is caused by the radial pressure gradient in the passage, which is initially oriented from the hub to the shroud and reverses its orientation after about 50% hub length. Compared to Steiner [28], this radial flow migration starts later in the measurement discussed here due to a lower inlet Mach number.
Light sheet flow visualization of the horseshoe vortex lift-off
In Fig. 9b and c, the heat transfer coefficient h is normalized with the maximum value of the non-purged undisturbed inflow condition hNP max. Both conditions, the undisturbed inflow and the inlet pegs, have the highest heat transfer coefficients on the hub at the cavity exit, followed by an asymptotical decrease in the streamwise direction typical for an unheated starting length setup. Around the leading edge at the hub, where the horseshoe vortices cause local deflections of the flow, the heat transfer is intensified. For both inflow conditions, h is increased by up to ~ + 10% in the vicinity of the horseshoe vortices' onset. This intensification slowly fades out at 50% hub length, where the horseshoe vortices lift off from the surface.
Adding the inlet pegs leads to an ≈ + 10% increase of heat transfer on the first third of the hub compared to the undisturbed inflow condition. This increase reduces to ≈ + 6% at half of the hub length and remains until the TCF exit. The signature of the inlet pegs' wakes can be seen in the heat transfer distribution of the hub marked with the dotted line. Two spots of decreased heat transfer are found for both operating conditions at the end of the hub at the TCF outlet. It is speculated that this decrease of heat transfer is caused by the combination of multiple factors: 1. the growth of the boundary layer, driven by the convex curvature of the hub at this position, 2. the influence of the migrated arms of the horseshoe vortices and 3. the increased diffusion and deceleration of the flow after the trailing edges of the struts. However, it must also be stated that the hub surface downstream the heating foils is made from aluminum, and therefore a systematic error due to lateral heat loss cannot be ruled out.
At the bottom of Fig. 9, the η distributions are compared. The undisturbed inflow condition has superior film cooling performance with a maximum film cooling effectiveness of 0.47 close to the hub cavity exit and a cooling film coverage extending until half of the hub length. Please note that a cooling film coverage is herein defined when η > 0.1. The film cooling performance of the inlet pegs condition is inferior, with a maximum of 0.4 close to the hub cavity exit and a cooling film coverage to only 30% of the hub length. The magnitude and the longitudinal spread of the cooling film deteriorate in the presence of the inlet pegs due to enhanced mixing of the main and the purge flow in the wake of the pegs. As shown in Jagerhofer et al. [21] for the undisturbed inflow condition, η is virtually zero on the fillet radii of the struts and the struts itself due to the horseshoe vortices, which dilute and sweep the purge flow away from the struts.
At the top of Fig. 11, the heat transfer coefficient distributions on the struts of the same conditions as in Fig. 9 are compared. As above, the inlet pegs produce a disturbed inflow that intensifies the heat transfer. This intensification is higher towards the leading edge with ≈+ 15% and decreases towards the trailing edge to zero. The overall heat transfer behavior of both conditions is again comparable since the same zones of high and low h can be identified on both struts.
Heat transfer coefficient on the struts, chordwise variation of h at strut midspan for all investigated conditions
At the bottom of Fig. 11, the chordwise variation of h along the strut's midspan is plotted for the two conditions above (M = 0.21, solid lines) and their corresponding no purge conditions (M = 0, dotted lines). Additionally, the laminar and turbulent flat plate correlations for constant heat flux are shown as dashed lines, and their formulas are given in the diagram [29]. The first data points of both undisturbed inflow conditions (M = 0.21 and M = 0) agree with the laminar correlation, and the laminar-turbulent transition sets in immediately afterwards, as the solid and dotted black lines start to diverge from the laminar correlation. The transition is indicated by the fading grey bar, and the center of the transition is marked at the first inflection point of the curves with the vertical dotted line.
As shown in Jagerhofer [23], the heat transfer is enhanced for the purged undisturbed inflow condition (M = 0.21) due to a flow acceleration of the main flow caused by a mild blockage effect of the injected purge flow. After the transition, especially the no purge (M = 0) undisturbed inflow condition excellently agrees with the turbulent flat plate correlation, validating the measurement technique. As already noted above, the inlet pegs lead to an intensification of heat transfer. No indication for a laminar-turbulent transition exists in the observable area for the inlet pegs conditions. The boundary layer seems to immediately start turbulent due to the high freestream turbulence caused by the pegs. The red solid and dotted lines agree well with the turbulent correlation from the leading edge up to 8% chord length and then start to deviate from the correlation to even higher heat transfer coefficients. The difference in h between the no purge (M = 0) and the purged (M = 0.21) condition is less pronounced than for the undisturbed inflow condition. The influence of the inlet pegs seems to dominate the heat transfer behavior over the blockage effect of the purge flow.
For further validation of the measurement technique, the no purge inlet pegs condition was repeated with 50% and 130% of the nominal heating power. These conditions are shown as grey dotted lines in the diagram. According to the linearity of the energy equation [30, 31], the heat transfer coefficient h as it is defined in this study has to be independent of the heating power. In other words, different power settings must always give the same value of h. As the two grey dotted lines and the red dotted line collapse within ± 4%, the herein shown measurement results are in accordance with this linearity; a further validation of the measurement technique.
The uncertainties presented in this section were calculated based on the guide to the expression of uncertainty in measurement [32]. The impact of the input variables' uncertainties (blue parallelograms in Fig. 3) on the final measurands h and η are computed using a sensitivity analysis. By offsetting one input variable at a time by its standard deviation (SD) and rerunning the data reduction in Fig. 3, the SD of the input variable is converted to the SD of the final measurands h and η. The root-sum-square (RSS) of these "converted" SDs gives the final SD of h and η. The presented full-width uncertainties in Figs. 12 and 13 are twice the SD.
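The propagation amounts to offsetting one input at a time by its standard deviation, rerunning the reduction, and root-sum-squaring the resulting shifts. A minimal sketch, assuming a callable data_reduction that maps a dictionary of inputs to a value of h (the toy reduction shown is only for demonstration):

```python
# Minimal sketch: one-at-a-time sensitivity propagation of input standard deviations
# to the standard deviation of h (the data_reduction callable and inputs are hypothetical).
import numpy as np

def propagate_uncertainty(data_reduction, inputs, input_sd):
    """inputs: dict of nominal input values; input_sd: dict of their standard deviations."""
    h_nominal = data_reduction(inputs)
    contributions = {}
    for name, sd in input_sd.items():
        perturbed = dict(inputs)
        perturbed[name] = inputs[name] + sd          # offset one input by its SD
        contributions[name] = data_reduction(perturbed) - h_nominal
    sd_h = np.sqrt(sum(c ** 2 for c in contributions.values()))   # root-sum-square (RSS)
    return h_nominal, sd_h, contributions

# Example with a toy reduction h = q / (T_s - T_as)
toy = lambda p: p["q"] / (p["T_s"] - p["T_as"])
inputs = {"q": 900.0, "T_s": 318.0, "T_as": 303.0}
input_sd = {"q": 45.0, "T_s": 0.3, "T_as": 0.17}
h0, sd_h, _ = propagate_uncertainty(toy, inputs, input_sd)
print(f"h = {h0:.1f} +/- {2 * sd_h:.1f} W/(m^2 K)  (95% interval, 2 SD)")
```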
Uncertainty of h along the hub centerline and midspan of the strut
Uncertainty of η as a function of its value
For input parameters where no probabilistic uncertainty information but guaranteed maximum errors are given, a rectangular or triangular probability distribution between the maximum errors is assumed and its SD is calculated [32]. This is the case for the measured voltage and current, and for error sources where only the maximum possible error is known from a worst-case estimation.
Table 2 lists a breakdown of the most influential error and uncertainty sources of the temperature and heat flux measurements. The main contributors to the temperature's uncertainty are: 1. the 2D-3D mapping accuracy converted into a temperature uncertainty using the highest temperature gradient of the field, 2. The overall uncertainty of the copper block calibration, 3. the uncertainty of the window and surface angle calibration, 4. the uncertainty of the single calibrated in-situ thermocouple readings and 5. the uncertainty of the temperature drop correction for the paint thickness on the in-situ thermocouple's copper disc. The SD of the final temperature field equals 0.3 K for the heated and 0.17 K for the unheated case and is the RSS of all considered uncertainty sources. The uncertainties of the angle calibrations for the emissivity and transmissivity are low, because the in-situ calibration compensates for an absolute error in the ε or τ measurement shown in Fig. 5. Therefore, only errors in the inclination of the ε or τ curves impact the accuracy but not the absolute values. Note that for the sake of brevity, only the most influential uncertainty sources are mentioned here, and the remainders are summarized under "other" in Table 2.
Table 2 Breakdown of measurement uncertainty in the final temperature and heat flux fields
The major contributors to the heat flux's uncertainty are not the uncertainties of measuring electric voltage, current and the heating foil area (altogether SD equals 0.4 to 1.4%), but the uncorrected heat loss through radiation and the uncorrected lateral conduction within the heating foil. The maximum error of the lateral conduction only occurred in areas of very high temperature gradients, such as the laminar-turbulent transition on the struts and at the hub cavity exit and was calculated using an inverse FEA analysis with the IR temperature field as boundary condition. Since the maximum error of lateral conduction only exists in a small area, the maximum error was converted into a standard deviation by assuming a triangular probability distribution. The standard deviation of the final heat flux measurement is slightly different for every heating foil and equals approx. 5%.
Figure 12 shows the 95% confidence interval of h at midspan of the strut and at the hub centerline along the strut chord length. The uncertainty is highest towards the beginning of the hub and towards the strut leading edge, where the temperature difference is lowest, and towards the strut trailing edge, where the heat transfer coefficient is lowest. Figure 13 shows the 95% confidence interval of η. With increasing film cooling, the measured temperature differences increase and the relative uncertainty decreases. In comparison, Arroyo Osso et al. [11] investigated the heat transfer coefficient in a TCF using thermography and water-heated walls. For h, with a 95% confidence interval, they reported error levels between −6% and +16% on the hub and between −34% and +54% on the struts. Lazzi Gazzini et al. [13] measured h and η on a rotor endwall with an error ranging from ±7% to ±12% for h and ±0.05 to ±0.2 for η, using the same confidence interval.
This study presented a measurement technique for the convective heat transfer coefficient, h, and the adiabatic film cooling effectiveness, η, using thermography and flexible heating foils. The application of the measurement technique was demonstrated in a sector-cascade test rig of a turbine center frame (TCF), an inherent component of modern two-spool turbofan engines. The geometry of interest was milled from low conductivity Rohacell foam, producing a quasi-adiabatic surface for the heating foils, which were coated with a high emissivity paint. The optical access was enabled through two trapezoidal BaF2 windows. The backbone of the measurement method is an elaborate data reduction, calibration and correction scheme:
Any sensor drift of the IR camera's microbolometer, the paint's emissivity and the window's transmissivity, as well as their angular dependencies, are recorded and corrected in the multistep pre-calibration conducted in a vacuum chamber. This allows for very shallow viewing angles with respect to the investigated surface and the window without compromising the accuracy of the measurement.
By mapping the 2D infrared images onto the 3D surface and overlapping them, it is possible to obtain temperature, h and η surface maps of highly curved surfaces covering areas significantly larger than the window's size.
The heat flux is corrected for the temperature dependence and inhomogeneity of the heating foils, and a 1D heat loss correction for thin parts is implemented.
The measurement technique was validated by comparing the data against flat plate correlations and also by confirming the linearity between ΔT and \(\dot{q}\).
The horseshoe vortices were found to play a major role for the heat transfer and film cooling in turbine center frames: In the vicinity of the horseshoe vortices' onset, the heat transfer was intensified by up to 10%, and at the same locations, the cooling film was diluted and swept away. It was also found that the horseshoe vortices lift off from the surface at 50% hub length, resulting in a pair of counter-rotating vortices. From there on, the detrimental influence on heat transfer and film cooling effectiveness vanished. Close to the TCF exit on the hub, the now lifted-off pair of counter-rotating vortices even contributed to a local heat transfer reduction. The inlet pegs led to an additional increase in heat transfer (approx. +10%) in the first third of the TCF and an additional reduction of film cooling effectiveness and coverage. This dangerous combination of intensified heat transfer and reduced cooling in the first half of the TCF must be addressed in the design process.
Two adaptations, a quasi-adiabatic surface and optical access, are the essential prerequisites for the applicability of the measurement technique proposed herein. Therefore, this measurement technique can be transferred with low effort to other applications in and outside the field of turbomachinery. Possible practical applications can be found in all disciplines that have to deal with heat transfer and/or film cooling problems. In turbomachinery, this technique could be used to precisely study the thermal behavior of combustors (as steady-state cold flow models), turbine stages and turbine exit casings. Outside the field of turbomachinery, this technique could contribute to a better understanding of complex heat exchanger geometries, film-cooled rocket nozzles (as steady-state cold flow models), supersonic vehicles and Earth re-entry vehicles with complex shapes.
None – Due to confidentiality agreements with the industrial collaborators, the authors are not allowed to share the measurement datasets.
The following commercial software was used in the following order: Flir—ResearchIR, Matlab with Flir Atlas SDK, Tecplot.
C : Chord length (m)
D : Distance, diameter (m)
DR : Density ratio (= ρ_P/ρ_M)
h : Convective heat transfer coefficient (W/m²K); coordinate along TCF height (m)
H : Channel height (m)
I : Current (A); momentum flux ratio (= ρ_P V_P²/ρ_M V_M²)
M : Blowing ratio (= ρ_P V_P/ρ_M V_M)
Nu_s : Local Nusselt number (= h s/λ)
Pr : Prandtl number (−)
\(\dot{q}\) : Heat flux (W/m²)
Re_C : Reynolds number based on chord length (= V_∞ C/ν)
Re_s : Reynolds number based on local chord coordinate (= V_∞ s/ν)
s : Streamwise coordinate (m)
t : Thickness (m)
T : Temperature (K)
T_S : Diabatic surface temperature (K)
T_a.S : Adiabatic surface temperature (K)
Tu : Turbulence intensity (%)
U : Voltage (V)
U_F : Uncertainty in parameter F
V : Velocity (m/s)
w : Weighting function for blending (−)
α : Surface angle (degrees); temperature coefficient of resistance (1/K)
ε_S : Emissivity of surface (−)
η : Adiabatic film cooling effectiveness (−)
λ : Thermal conductivity (W/m·K)
ν : Kinematic viscosity (m²/s)
ρ : Density (kg/m³)
τ_W : Transmissivity of window (−)
NP : No purge (subscript)
P : Purge or paint (subscript)
∞ : Freestream (subscript)
European Environment Agency, European Union Aviation Safety Agency and EUROCONTROL (2019) European Aviation Environmental Report 2019. Publications Office of the EU. https://doi.org/10.2822/309946
Eckert C, Isaksson O (2017) Safety margins and design margins: a differentiation between interconnected concepts. Procedia CIRP 60:267–272. https://doi.org/10.1016/j.procir.2017.03.140
Taylor JR (1980) Heat Transfer Phenomena in Gas Turbines. In: Proc. ASME 1980 International Gas Turbine Conference and Products Show V01BT02A078. https://doi.org/10.1115/80-GT-172
Göttlich E (2011) Research on the aerodynamics of intermediate turbine diffusers. Prog Aerosp Sci 47(4):249–279. https://doi.org/10.1016/j.paerosci.2011.01.002
Metzger DE, Fletcher DD (1971) Evaluation of heat transfer for film-cooled turbine components. J Aircr 8(1):33–38. https://doi.org/10.2514/3.44223
Bittlinger G, Schulz A, Wittig S (1994) Film Cooling Effectiveness and Heat Transfer Coefficients for Slot Injection at high Blowing Ratios. Proc. ASME 1994 International Gas Turbine and Aeroengine Congress and Exposition V004T09A032. https://doi.org/10.1115/94-GT-182
Carlomagno GM, Cardone G (2010) Infrared thermography for convective heat transfer measurements. Exp Fluids 49:1187–1218. https://doi.org/10.1007/s00348-010-0912-2
Martiny M, Schulz A, Wittig S, Dilzer M (1997) Influence of a Mixing-Jet on Film Cooling. In: Proc. ASME 1997 International Gas Turbine and Aeroengine Congress and Exhibition V001T03A043. https://doi.org/10.1115/97-GT-247
Sargent SR, Hedlund CR, Ligrani PM (1998) An infrared thermography imaging system for convective heat transfer measurements in complex flows. Meas Sci Technol 9:1974. https://doi.org/10.1088/0957-0233/9/12/008
Astarita T, Cardone G (2000) Thermofluidynamic analysis of the flow in a sharp 180° turn channel. Exp Thermal Fluid Sci 20(3–4):188–200. https://doi.org/10.1016/S0894-1777(99)00045-X
Arroyo Osso C, Johansson TG, Wallin F (2010) Heat Transfer Investigation of an Aggressive Intermediate Turbine Duct: Part 1—Experimental Investigation. In: Proc. ASME Turbo Expo 2010 GT2010–23653. https://doi.org/10.1115/1.4004779
Hummel T, Kneer J, Schulz A, Bauer HJ (2015) Experimentelle Untersuchung des Wärmeübergangs und der Filmkühleffektivität einer dreidimensionalen konturierten Turbinenseitenwand [Experimental investigation of the heat transfer and film cooling effectiveness of a three-dimensionally contoured turbine endwall]. Deutscher Luft- und Raumfahrtkongress 2015, urn:nbn:de:101:1-201601293626
Lazzi Gazzini S, Schädler R, Kalfas AI, Abhari RS (2017) Infrared thermography with non-uniform heat flux boundary conditions on the rotor endwall of an axial turbine. Meas Sci Technol 28:025901. https://doi.org/10.1088/1361-6501/aa5174
von Hoesslin S, Stadlbauer M, Gruendmayer J, Kähler CJ (2017) Temperature decline thermography for laminar–turbulent transition detection in aerodynamics. Exp Fluids 58:129. https://doi.org/10.1007/s00348-017-2411-1
Martiny M, Schiele R, Gritsch M, Schulz A, Wittig S (1996) In Situ calibration for quantitative infrared thermography. QIRT 96 – Eurotherm Series 50 – Edizioni ETS, Pisa 1997. https://doi.org/10.21611/qirt.1996.001
Aberle S, Bitter M, Hoefler F, Benignos JC, Niehuis R (2019) Implementation of an In-Situ infrared calibration method for precise heat transfer measurements on a linear cascade. ASME. J Turbomach 141(2):021004. https://doi.org/10.1115/1.4041132
Elfner M, Glasenapp T, Schulz A, Bauer HJ (2019) A spatially resolved in situ calibration applied to infrared thermography. Meas Sci Technol 30:085201. https://doi.org/10.1088/1361-6501/ab1db5
Carlomagno GM (2007) Heat flux sensors and infrared thermography. J Visualization 10(1):11–16
von Hoesslin S, Gruendmayer J, Zeisberger A, Kähler CJ (2019) Accessing quantitative heat transfer with Temperature Decline Thermography. Exp Thermal Fluid Sci 108:55–60. https://doi.org/10.1016/j.expthermflusci.2019.06.004
von Hoesslin S, Gruendmayer J, Zeisberger A (2020) Visualization of laminar–turbulent transition on rotating turbine blades. Exp Fluids 61:149. https://doi.org/10.1007/s00348-020-02985-9
Jagerhofer PR, Patinios M, Erlacher G, Glasenapp T, Göttlich E, Farisco F (2021) A sector-cascade test rig for measurements of heat transfer in turbine center frames. ASME J Turbomach. https://doi.org/10.1115/1.4050432
Lohrengel J, Todtenhaupt R (1996) Thermal conductivity, degree of total emissivity and spectral emissivity of the nextel velvet coating. PTB-Mitteilungen 106:259–265
Jagerhofer PR, Patinios M, Glasenapp T, Göttlich E, Farisco F (2021b) The Influence of Purge Flow Parameters on Heat Transfer and Film Cooling in Turbine Center Frames. In: Proceedings of ASME turbo expo 2021 GT2021–59496. https://doi.org/10.1115/GT2021-59496
Schroeder A, Willert CE (2008) Particle image velocimetry. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73528-1
Hartley R, Zisserman A (2004) Multiple view geometry in computer vision. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511811685
Sibson R (1981) A brief description of natural neighbor interpolation (Chapter 2). In: Barnett V (ed) Interpreting multivariate data. John Wiley, Chichester, pp 21–36
Ligrani PM, Hedlund CR (2004) Experimental surface heat transfer and flow structure in a curved channel with laminar, transitional, and turbulent flows. ASME J Turbomach 126(3):414–423. https://doi.org/10.1115/1.1738119
Steiner M, Peters A, Gatti G, Zscherp C, Engel K, Cabona I, Ramesh A, Sterzinger PZ, Heitmeir F, Göttlich E (2018) On clean inflow testing for intermediate turbine ducts. In: Proceedings of GPPS Forum 18. GPPS-NA-2018-128. https://doi.org/10.5281/zenodo.1345568
Çengel YA, Ghajar AJ (2011) Heat and mass transfer: fundamentals & applications. McGraw-Hill, New York
Choe H, Kays WM, Moffat RJ (1974) The Superposition Approach to Film-Cooling, ASME 74-WA/HT-27
Eckert ERG (1968) Discussion with Metzger et al. (1968). https://doi.org/10.1115/1.3609156
Joint Committee for Guides in Metrology (2008) Evaluation of measurement data - Guide to the Expression of Uncertainty in Measurement. JCGM 100:2008, GUM 1995. https://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf. Accessed 1 Feb 2020
This work has been carried out in collaboration with GE Aviation Munich and MTU Aero Engines AG, as part of the research project LuFo V-3 OptiTCF (contract no. FKZ 20T1705B) funded by the German Federal Ministry for Economic Affairs and Energy (BMWi). Supported by TU Graz Open Access Publishing Fund.
Institute of Thermal Turbomachinery and Machine Dynamics, Graz University of Technology, Graz, Austria
Patrick Jagerhofer, Jakob Woisetschläger, Gerhard Erlacher & Emil Göttlich
Patrick Jagerhofer
Jakob Woisetschläger
Gerhard Erlacher
Emil Göttlich
Patrick Jagerhofer: development of the measurement technique, execution of the measurements and post-processing, wrote the paper and created the images. Jakob Woisetschläger: pre-calibration concept development, wrote the paper. Gerhard Erlacher: design, assembly and commissioning of the sector-cascade test rig. Emil Göttlich: heating foil and test rig concept development. All authors read and commented on previous versions of the manuscript and approved the final manuscript.
Correspondence to Patrick Jagerhofer.
The authors have no relevant financial or non-financial interests to disclose.
Jagerhofer, P., Woisetschläger, J., Erlacher, G. et al. Heat transfer and film cooling measurements on aerodynamic geometries relevant for turbomachinery. SN Appl. Sci. 3, 889 (2021). https://doi.org/10.1007/s42452-021-04845-5
Film cooling
Infrared thermography
Heating foil
Tuesday, December 30, 2014 ... / /
Czech army physician returns NATO medals
Translation by L.M., original here; RT story
This is an example of gestures that are either weakly or strongly endorsed by roughly 50% of Czechs. I partly agree with the spirit of the letter – long-time TRF readers probably know where I would disagree, too.
Dr Marek Obrtel: open letter to the defense minister
Dear Mr Minister,
due to the reasons I elaborate upon in the attached 3-page letter, which is an attachment to this document, I urge you to deprive me of the badges of honor from the military operations of the Army of the Czech Republic performed under the NATO umbrella.
I thank you for your understanding and assertively request your endorsement of my application.
Reserve Lieutenant Colonel Marek Obrtel MD
with his own hand
Top Slovak moderator, Czech Globe classmate, and climate hysteria
This blog has been extremely quiet during the (post-)Christmas week. There have been many things to write about but even at those moments when I wasn't otherwise engaged, I decided not to be saving the world all the time. ;-) Whether you are a Christian or not, I hope that you have enjoyed Christmas.
I won't be writing about tons of personal experiences in the recent days, about Neil deGrasse Tyson's idiotic tweets about Christmas or his equally idiotic populist tirades against string theory, papers and news reports nonsensically claiming to "unify" the uncertainty principle with the wave-particle duality (be sure that everything about these basic concepts and nothing else has been understood for almost 90 years), or about 50 different provoking things in the media.
And I will also postpone some interesting results of my quantum gravity research – as well as some fun about linguistics and many other things I wanted to write about. Instead, let me offer you a slightly relaxing but potentially infuriating story. Alexander Ač, a climate alarmist weirdo who sometimes visits our TRF community as well, just wrote his most popular blog post ever. It is his
Open letter to Ms Adéla Banášová (orig. SK)
It has 50,000 views and 200+ comments right now. The microscopic reason is that someone (...) placed the blog post at the main page www.sme.sk of the leading Slovak newspaper. But we may still ask: Why was this topic so attractive?
Ms Adéla Banášová (*1980) is Slovakia's most popular female TV and radio host and moderator – and one could argue that she is actually the most popular female TV host and moderator in Czechia, too. (Check YouTube.) She became particularly well-known because she has hosted the "Czech and Slovak American Idol" along with Mr Leoš Mareš. She boasts not only a larger nose and a degree in culturology but also higher intelligence than Mr Mareš, who is funny but sort of childish, and together they did a good job. And it was surely her, and not him, who added some maturity to the mix. ;-)
Equally importantly for our purposes, she was a high school student of Alexander Ač, our local special Czecho-Slovak climate hysteria weirdo.
Saturday, December 27, 2014 ... / /
Johannes Kepler: an anniversary
Johannes Kepler was born prematurely near Stuttgart on 12/27/1571. His grandfather was a mayor of their town but by the time Johannes was born, the family's fortunes were already declining. His father was a mercenary and left the family when Johannes was five. His mother was a healer and a witch, which also led to some legal problems.
Johannes was a brilliant child with early inclinations to astronomy. In Graz (1594-1600), he was defending the Copernican heliocentric system. At that time, there was no clear difference between astronomy and astrology. Therefore, Kepler also invented the ADE classification of planets orbiting the Sun. ;-) This attempt resembled, but was not identical to, Garrett Lisi's hopeless attempt to unify. Kepler also wrote that the Universe had to be stationary.
Wednesday, December 24, 2014 ... / /
Only temperatures, not temperature changes, may be dangerous
A Lumo Christmas playlist
Gavin Schmidt wrote a RealClimate.ORG blog post about the difference between the temperatures and temperature anomalies – or temperature changes – and which of them is known, predicted, and important.
Absolute temperatures and relative anomalies
Just to be sure, the temperature anomaly is the difference between the temperature and the "average" temperature recorded for the same place (or region) and the same date or month or season (if applicable). The average is computed (from the data at the same place and the same date[s]) over a period, like 1951-1980 or 1980-1999 or something like that, and this baseline is often changed, which makes things confusing.
If the temperature returned to the same values every January, every February, etc. (at a given location, or globally), the temperature anomalies would be equal to zero.
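A minimal sketch of the bookkeeping, just to make the definition concrete – the monthly data, the station and the 1951-1980 base period are all made up for illustration:

```python
import numpy as np

def monthly_anomalies(temps, years, base_start=1951, base_end=1980):
    """temps: array of shape (n_years, 12) of monthly mean temperatures (deg C).
    Returns anomalies relative to the month-by-month mean over the base period."""
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base_start) & (years <= base_end)
    baseline = temps[in_base].mean(axis=0)   # one climatological mean per calendar month
    return temps - baseline                  # anomaly = temperature minus its monthly baseline

# If every January repeated the 1951-1980 January mean exactly, the January anomalies
# would all be zero; a January that is 0.4 deg C warmer gets an anomaly of +0.4.
```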
The global mean temperature is something like 14.5 °C or 15 °C. No one can really determine the value at this amazing, subdegree accuracy. Different methodologies – and indeed, different detailed definitions of the global mean temperature – produce different answers.
Every theory of quantum gravity is a part of string theory: a partial proof
A successful test in \(AdS_3\)
The first hep-th paper today is
String Universality for Permutation Orbifolds
by Alexandre Belin, Christoph A. Keller, and Alexander Maloney who are at McGill and Rutgers University, my graduate Alma Mater (I know A.M. from Harvard). Note that Christopher was terrified by the disagreement between the other two authors when it comes to "-re" or "-er" in their first name, so he erased it from his name altogether. ;-)
Serin Hall, Rutgers University, NJ
We sometimes say that string theory is the only consistent theory of quantum gravity. It's the only game in town. This is an observation mostly based on various types of circumstantial evidence. Whenever you try something that deviates from string/M-theory, you run into inconsistencies. Sometimes you don't run into inconsistencies but something else happens. Many good ideas that were thought to be "competitors" to string theory were shown to be just aspects of some (usually special) solutions to string theory (noncommutative geometry, CFT, matrix models, and even the Hořava-Lifshitz class of theories have been found to be parts of string theory), and so on. And decades of attempts to find a truly inequivalent competing theory have utterly failed. That's not a complete proof of their absence, either, but it is evidence that shouldn't be completely ignored.
But that doesn't mean that the statement that every consistent theory of quantum gravity has to be nothing else than another approach to string/M-theory is just an expression of vague feelings, a guesswork, or a partial wishful thinking. We don't have the "most complete proof" of this assertion yet – this fact may be partly blamed on the absence of the completely universal, most rigorous definition of both "quantum gravity" and "string theory". But there exist partial proofs and this paper is an example.
Did Vladislav Voloshin (UA) shoot down MH17?
A month ago, I mentioned a photograph purportedly showing a Ukrainian Su-27 or MiG-29 in the act of shooting down the Malaysian aircraft in Donbas. The picture could be shown to be fake – too many details were wrong – and some readers helpfully provided us with links to the relevant evidence.
I am hoping that a similar response may emerge now. The new accusations don't come with any high-resolution photograph – it's just an eyewitness – but they are more concrete because they name the boy who is claimed to have shot the airplane by accident.
Komsomolskaya Pravda (in Russian, TV version, an English translation) published the interview with the alleged eyewitness, a former employee of an airbase in Dniepropetrovsk. I am not sure about his or her name – it may be Yuri Shevtsov, the guy who gave this testimony in August, or someone else, like Alexander someone. Most sources say that he is still a "secret witness". Who knows.
2015: arXiv identifiers get a new digit
Paul Ginsparg began to maintain xxx.lanl.gov – the server later renamed arXiv.org – in Summer 1991. Since that time, the number of papers submitted each month has kept growing.
You can see that despite the mild acceleration in the recent 5 years, the increase was much closer to a simple linear increase from 0 in Fall 1991 to almost 9,000 in recent months (the latter number translates to 400+ papers on an average "live" day). Because 9,000 is rather close to 10,000, which is 10 to the fourth power, you may be worried about the identifiers of the papers.
Since April 2007, the users of the preprint repository have been using an identifier scheme that only allows just under 10,000 papers a month, a threshold that is likely to be surpassed sometime in 2015 or 2016.
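In other words, the post-2007 identifiers look like YYMM.NNNN, with a four-digit sequence number, and from January 2015 the sequence number gets a fifth digit (YYMM.NNNNN). A toy regex that accepts both forms – purely illustrative, not an official arXiv specification:

```python
import re

# Old scheme (April 2007 - December 2014): arXiv:YYMM.NNNN
# New scheme (January 2015 onwards):       arXiv:YYMM.NNNNN
ARXIV_ID = re.compile(r"^\d{2}(0[1-9]|1[0-2])\.\d{4,5}(v\d+)?$")

for identifier in ["1411.7887", "1501.01234", "1411.123456"]:
    print(identifier, bool(ARXIV_ID.match(identifier)))
# -> 1411.7887 True, 1501.01234 True, 1411.123456 False
```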
Monday, December 22, 2014 ... / /
Cutting ties with Klaus: CATO jumps on the totalitarian PC bandwagon
...and the knee-jerk Russophobia...
The Czech media informed us about the article
Vaclav Klaus, Libertarian Hero, Has His Wings Clipped by Cato Institute (The Daily Beast)
by James Kirchick, a Berlin reporter of the Haaretz and a few other left-wing news outlets. The text is dedicated to the divorce between Václav Klaus and the CATO Institute, a libertarian think tank. Václav Klaus became a Distinguished Senior Fellow in March 2013 and he was silently "fired" sometime in September 2014, apparently mainly because CATO joined the new wave of the mindless Russophobia that is crippling almost the whole mainstream foreign policy discourse in the U.S. these days – while Klaus knows what he is talking about in this context, too.
In February 2007, after I translated an interview with Klaus about global warming that became the main story of the day at the Drudge Report and was mentioned by Fox News and other sources, our then president invited me to Washington D.C. A group including me, Klaus, and several prominent U.S. climate skeptics had a lunch together. It was actually my first – and (so far?) only – visit to the U.S. capital. I liked it and saw lots of the sights, too.
One of the buildings I visited – because of a talk by Klaus – was the CATO Institute at 1000 Massachusetts Avenue. This is a nice address to remember. You know, in Cambridge and Greater Boston, I would both live and work meters from another Massachusetts Avenue as well so I was distracted by the idea that it's actually the same Mass Ave ;-) – a grand hypothesis that hasn't been "safely" falsified for me yet but feel free to do it LOL.
Sunday, December 21, 2014 ... / /
Discrete spacetimes contradict Unruh effect
Two young Indian men, Golam Mortuza Hossain and Gopal Sardar, wrote a paper about loop quantum gravity and similar "discrete" models of quantum gravity whose mathematical argumentation seems vastly better than that of an average paper about similar subjects:
Absence of Unruh effect in polymer quantization (gr-qc)
Yes, the Unruh effect isn't reproduced by those theories, they say. "Has loop quantum gravity been proved..." sensibly asserted that if the paper is right, it's a way to prove that these theories are dead. Well, it's about the 500th proof that they are wrong, I would say.
These two guys' mathematical and theoretical-physics advantage over an average author of "loop quantum gravity" papers seems self-evident. Show me loop quantum gravity papers that actually work with the Mathieu equation, elliptic cosine and sine functions, the Riemann zeta function, or even with a simpler operation that is standard in modern physics – the Bogoliubov transformation.
PC revolution, Altair 8800: 40th anniversary
Forty years and one day ago, the PC revolution started when Micro Instrumentation and Telemetry Systems (MITS) released its Altair 8800 personal computer.
In 1994, this guy, Bill Gates, said a few words about his and Paul Allen's decision to write BASIC for that machine (which was released in early 1975). Note that BASIC was invented as a popular language at Dartmouth College in New Hampshire in 1964.
I was just one year and two weeks old when the model was introduced. But even for those of you who are older, it must feel like some mysterious pre-history of the PCs because almost no one bought it. It was using the Intel 8080 microprocessor whose descendants are, up to relatively minor variations, still around in Windows PCs. That microprocessor was introduced in 1974, two years after Intel's first 8-bit microprocessor, the Intel 8008 (see its restricted instruction set).
Friday, December 19, 2014 ... / /
Alternative teaching of mathematics: three problems
Mathematics is not just the mechanical elimination of a finite number of answers
Two days ago, I literally spent hours in discussions at aktualne.cz about the Hejný alternative method of teaching mathematics to kids.
Off-topic: A group of 100+ engineers is actually building Hyperloop, Elon Musk's mach-one train that gets from San Francisco to L.A. in 30 minutes.
If I try to summarize some key points really concisely (some exchanges helped me to crystallize some of the points): mathematics is not just about the lessons that a human derives from the experience, but about the accumulated knowledge that dozens of generations of mathematicians have extracted from the experience and, even more correctly, from their pure thought. So mathematics can't be left to the rediscovery of each kid.
Now, there are differences between the kids and they will show up. Whether the kids at the top in a given subject – mathematics, in this case – master the subject well is more important than what the others do because those at the top are actually likely to use it. One may reduce the differences by forcing the kids who were not so good to spend much more time. But I actually think it is counterproductive. Kids – and adults – should better focus on things that make them happy and that they are good at. So I think that in a healthy situation, the less talented kids in mathematics will spend less, and not more, time with mathematics than their talented counterparts. Consequently, the gap will be even larger than it would be if everyone spent the same time with everything.
Thursday, December 18, 2014 ... / /
Ellis', Silk's undemanding chatter on falsifiability in Nature
Scientific American joined the community of low-brow, ideologically driven, anti-science tabloids about a decade ago. Nature kept its traditional quality (well, almost) for much longer, but recently it has been turning into another venue for mediocre pseudointellectuals to attack science – and especially quality science.
Two days ago, Nature published a rant by an average physicist and a physicist who really sucks titled
Scientific method: Defend the integrity of physics (by George Ellis, Joe Silk)
Similar offensive, intimidating rants love to use the word "defend". It reminded me of "Science defends itself against The Skeptical Environmentalist" in Scientific American (2002).
It's a usual tirade about "falsification" by people who couldn't make it to the top of science by doing real technical work so they decided to spit on the top scientists and collect points among the stupid, science-hating part of the populace by misleading populist texts without any valid technical content.
Let me be somewhat more specific about the reasons why I consider this text (and its authors) to be crap or worse.
Bitcoin: up to noise, the eternal downward trend is very likely
On November 28th, 2013, I wrote a blog post about the Bitcoin with some explanation of what the money supply of the Bitcoins is and how the value may evolve. I said many things about the substance that I still believe to be true, but I also included a prophecy – which was also included in the title – that the Bitcoin bubble would probably reach some new peaks before it bursts.
With hindsight, I tend to think that my prophecy was completely wrong. I am not a good prophet – because there are probably no good prophets allowed by the laws of physics. November 2013, when I wrote the blog post, was actually the month when the price of the Bitcoin peaked. On November 17th, 2013, the price surpassed $1,200 at MtGox, but that was before I wrote the blog post, where I already talked about the "current price near $1,000", so you couldn't have earned any more money by investing at that point.
Bang or bounce: a new idea on cyclic cosmology
Guest blog by Paul Frampton
Dear Luboš, thank you for the kind invitation to contribute as a guest on your remarkable blog. My subject is cyclic cosmology and will be based on a recent paper archived at 1411.7887 [gr-qc] although I will provide only a non-technical description without many equations and will begin with the interesting history of cyclic model building.
One surprising and interesting output is that no inflation is required to explain the observed flatness and homogeneity of the universe.
A philosopher's quantum mechanical delusions
It's been weeks since I was infuriated by nonsense about the foundations of quantum mechanics. A nice time, at least from this point of view. It's over now because a "philosopher" named Chip Sebens wrote a blog post at his co-author Sean Carroll's blog about his and their quantum mechanical fantasies and misconceptions:
Guest Post: Chip Sebens on the Many-Interacting-Worlds Approach to Quantum Mechanics
This babbling is "inspired" by quantum mechanics and especially all the wrong things that are being written about quantum mechanics in the popular books. So some of the sentences are similar to the truth even though they are always slightly wrong – it's never right.
I will try to focus on the things that are wrong and you should be aware of the fact that they were cherry-picked to a certain extent and you could cherry-pick some assertions which would make Sebens' essay look less bad. But such fundamental mistakes shouldn't be there at all, so his text is bad, anyway.
Shrinking ruble: Russian calmness impresses me
Thanks to the excess oil (not only) from fracking, the oil price dropped below $59 per barrel for the first time in a long time.
And because of this drop combined with the Russian dependence on the income from fossil fuels and because of sanctions against Russia that have almost cut the world's largest country off from loans, plus the hysteria surrounding these moves, the Russian currency weakened from 32 rubles per dollar in October 2013 to as many as 93.5 rubles per dollar earlier today. The ruble has recovered some ground since that time (to 80 now) but it may have been temporary.
The drop since some moment in Fall 2013 is by a factor of three. A crook named Michael Mann should look at the graph above if he wants to see what a real-world hockey stick graph should look like. And the graph doesn't even show the high at 93.5 today. Every day, the ruble seems to lose over 10% of its value. The half-life (in the sense of radioactivity) is about three days.
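For the record, the mapping between a constant daily percentage loss and the half-life is simple arithmetic; a tiny illustration (the percentages are just example values):

```python
import math

def half_life_days(daily_loss):
    """Days needed for a value to halve if it loses a constant fraction per day."""
    return math.log(2.0) / -math.log(1.0 - daily_loss)

print(round(half_life_days(0.10), 1))  # 10% per day -> half-life of ~6.6 days
print(round(half_life_days(0.20), 1))  # 20% per day -> half-life of ~3.1 days
```

So a half-life of roughly three days corresponds to a daily loss closer to 20% than to 10%.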
Meanwhile, in these extreme conditions, the Russian Central Bank is doing what is right – except for some interventions that always proved to be highly temporary. The last action was the increase of the interest rates from 10.5% to 17% less than one day ago.
Everyone else seems to be calm in Russia. There doesn't even seem to be any anger. I have tried to look at Russian newspapers but I wouldn't even see a single text that would blame the U.S. for this havoc. The Central Bank boss tells everyone to embrace the new financial reality while a region in Central Russia banned the word "crisis" in the public. I am just impressed by all of that.
Kids: most new methods to teach mathematics are dangerous pseudoscience
Czechia's most influential (?) news server iDNES.cz has been bombarding the readers with hype about new revolutionary ways to teach mathematics to schoolkids.
I placed this blog post at the top again after I debated the advocates at aktualne.cz.
This stuff is combined with constant calls to eliminate mathematics from the mandatory subjects in the high school final exams. Folks from a despicable Faculty of Humanities at the Charles University – a tumorous department that shouldn't exist at all – are constantly involved. You can imagine that I am terrified by all that and you haven't even heard any details.
Today, an 80-year-old chap called Milan Hejný is promoting his and his father's "new method" to teach mathematics (how "new" is a method whose founder was born in 1904?) in an interview titled The kid has mathematics inside itself, just listen to him or her, the father of a revolutionary method says. Quite an uncritical title, right? However, the content of the interview is trash.
Landscape of some M-theory \(G_2\) compactifications: 50 million shapes
The Landscape of M-theory Compactifications on Seven-Manifolds with \(G_2\) Holonomy
by David Morrison and James Halverson. The most important classes or descriptions of superstring/M-theory compactifications (or solutions) that produce realistic physics are
heterotic \(E_8\times E_8\) strings on Calabi-Yau three-folds, string theorists' oldest promising horse
heterotic Hořava-Witten M-theory limit with the same gauge group, on Calabi-Yaus times a line interval
type IIB flux vacua – almost equivalently, F-theory on Calabi-Yau four-folds – with the notorious \(10^{500}\) landscape
type IIA string theory with D6-branes plus orientifolds or similar braneworlds
M-theory on \(G_2\) holonomy manifolds
There are various relationships and dualities between these groups that connect all of them to a rather tight network. All these compactifications yield \(\NNN=1\) supersymmetry in \(d=4\) at some point which is then expected to be spontaneously broken.
Halverson and Morrison focus on the last group, the \(G_2\) compactifications, although they don't consider "quite" realistic compactifications. To have non-Abelian gauge groups like the Standard Model's \(SU(3)\times SU(2)\times U(1)\), one needs singular seven-dimensional \(G_2\) holonomy manifolds: the singularities are needed for the non-Abelian enhanced group.
They are satisfied with smooth manifolds whose gauge group in \(d=4\) is Abelian, namely \(U(1)^3\).
Afghanistan war costs exceed $1 trillion
...15-year spending matches 30 years of GDP...
After 9/11, the U.S. had a self-evident moral capital to organize a revenge. The terrorist attacks that took place half an hour before my PhD defense were brutal, shocking, saddening, and spectacular.
This picture of Kabul makes the place look richer than it is.
The immediate damages to the infrastructure exceeded $10 billion but just by a little. On the other hand, the war in Afghanistan that was justified by the attacks has already surpassed $1 trillion (ten to the twelfth power), see CNBC, which beats the immediate damages caused by 9/11 by two orders of magnitude.
(The Soviet Union has spent lots of money for a futile conflict in Afghanistan as well – but it was surely less than a trillion dollars.)
Despite this asymmetry, the operations in Afghanistan seem far less spectacular – that's why I have included the provoking adjective above. One could argue that the money has been almost completely wasted.
Wolfgang Pauli: an anniversary
Wolfgang Pauli was born in Vienna, here in Austria-Hungary, in 1900, and he died on December 15th, 1958, in Zurich.
His 1945 Nobel prize was given for the exclusion principle but he contributed many other things and he had the potential to discover all of quantum mechanics by himself, as his friends Bohr and Heisenberg would agree.
LHC to restart in March 2015: \(13\TeV\)
Now it's the last month of 2014. So far the last proton-proton collisions – at \(8\TeV\) – occurred in late 2012. The gadget went to Two Years' Vacation and as you have known since your school days, vacations are over very quickly. It takes about two years for Two Years' Vacation to be over.
Iveta Bartošová, Two Years' Vacations (1988). Lyrics: "You [the beam] used to say: I will be back right away. It's just two years' vacation, nothing more." Well, OK, it was originally two years of military service but I found the pop song relevant, anyway.
The collider is designed for the center-of-mass energy of \(14\TeV\). I guess that it's still the plan to get to that level – maybe sometime during 2015. However, March 2015 will begin the acceleration to a somewhat lower energy, \(13\TeV\) which means \(6.5\TeV\) per proton. Collisions used by physicists should be available from May 2015. See some Google News.
Entropy, temperature are not fixed linear operators
Similarly for fields in GR. A simple demonstration of "state dependence" in quantum gravity
Kyriakos Papadodimas and Suvrat Raju have demonstrated that it's possible to embed operators describing the fields in the black hole interior into the Hilbert space of a black hole so that all the usual principles approximately hold.
Their construction doesn't imply that certain questions about the perceptions of the infalling observer have unambiguous answers – indeed, one may worry about the non-uniqueness implied by their construction. But I am convinced that the existence of the embedding proves (and it's not the only proof) that various AMPS-like arguments that the black hole interior can't exist in a consistent theory of quantum gravity – e.g. in string theory and its AdS/CFT – are just wrong.
The state dependence of the field operators \(\Phi(x,y,z,t)\) describing some fields in the black hole interior has gradually emerged as the epicenter of the controversy that prevents some physicists from confirming that Papadodimas and Raju have settled the broad AMPS-like questions.
Many previous blog posts – e.g. in August 2013 and August 2014 – unmasked my personal certainty that the concept of state dependence of the field operators is right. We must choose a realistic subspace of the Hilbert space – states that differ from a reference state \(\ket\psi\) at most by the action of some simple enough polynomials of the local operators – and within this "patch" of the Hilbert space, the field operators work exactly like quantum mechanics demands. However, the field operators don't have a broader range of validity – they can't be well-defined on the whole Hilbert space. After all, even the topology of the spacetime is variable which means that there can be no "universal" coordinates parameterizing the spacetime.
Here I want to promote a more obvious argument why the state dependence is inevitable. We know state dependence from statistical physics and the state dependence of the field operators is just a translation of these facts into the quantum gravitational language via the standard Bekenstein-Hawking-like dictionary.
A typical recent climate alarmist PhD
Gerrit Holl: a mindless parrot on a mission to spread the "consensus"
A week ago or so, I wanted to see how many of my 1,900+ answers at Physics Stack Exchange have a negative overall score of "upvotes minus downvotes". There are about a hundred zero-score ones (some of them have been accepted, however) but only two answers boast a negative score.
In one case, a guy was confused about the spontaneous symmetry breaking, the difference between two configurations' being the same and their being related by a symmetry. If the degree of confusion exceeds a critical threshold, it's hard to help these people because they don't have a clue what they are even asking about. They want to reshape the incoming information in their way which is incoherent and protect themselves against any coherent understanding. Minus one for me.
The second negative-score answer, also at minus one (2 pluses, 3 minuses), was about the rainfall according to global warming. It's not a coincidence. This part of physical sciences has been totally politicized. OK, so the question was:
Why we should observe an increment on the mean intensity in rainfalls and an increment on mean dry days with global warming?
The first, one-sentence answer, sends the author of the question to a propagandistic website with zero quantitative information and some vague claims about models that may predict that something is positive or negative – but the thing you should believe is that there is definitely a problem.
Energy conditions from entanglement-glue duality
One of the great conceptual insights in the research of quantum gravity of the recent 5 years or so was the realization that the geometric connection of two regions of the spacetime – according to a theory that respects the postulates of quantum mechanics and allows the spacetime to be curved, too – is physically equivalent to the entanglement between the degrees of freedom that lived in these previously separated regions of the spacetime geometry.
Folks like Mark Van Raamsdonk deserve to be credited for the original discovery of this broader concept. The Maldacena-Susskind ER-EPR correspondence is a particular, simple, well-defined example of the general concept. It claims that the Einstein-Rosen bridge – more generally, a non-traversable wormhole which is a pair of two black holes whose interiors are connected or identified – is equivalent to two perfectly entangled black holes. A high degree of entanglement is capable of changing the "most useful" spacetime topology used to describe the situation. But the spacetime topology itself isn't a well-defined or unique observable on the Hilbert space – it is emergent and one may only mention that it is "easier" to describe one situation with one topology than with another.
Because this realization is a tight link between the quantum information theory on one side; and spacetime geometry within a gravitating quantum theory on the other side (on both sides, we have a quantum theory: we just "visualize" their Hilbert space[s] in two geometrically distinct ways), we may construct dictionaries between various rules, conditions, and concepts in quantum information theory and in general relativity.
Fiat money has been a great invention
It allows accurate, impartial financial planning and makes the economy more efficient than other arrangements
The recent discussions about the Bitcoin and the gold standard have made it clear that the opposition to the fiat money is rooted in many parts of the TRF readership – I would even say that this question divides our community across the usual ideological lines.
Some Czech crown banknotes
Many of my remarks in these exchanges were enumerating the reasons why the gold or the Bitcoins couldn't be a viable replacement of fiat currencies we are using today. But now I think that it may be much more logical to try to present all my points positively – because the essential message I want to convey is positive, after all.
Net neutrality, off-topic: 60 companies including IBM, Intel, Cisco, D-Link, Qualcomm, and Panasonic NA sent a letter to the FCC opposing net neutrality. With this group, do you still misunderstand why I classify the champions of this ideology as anti-capitalist mujahideens?
So why does society need any money? Why are fiat currencies better than other setups? What makes a fiat currency system better than others? And why are some of the most widespread criticisms of the whole concept of fiat money unjustified and immaterial?
RealClimate's opinion on the WUWT widget
Two months ago, it was ten years since this weblog was founded. Two months later, a group of fraudulent proponents of the climate hysteria founded RealClimate.ORG, a domain designed to spread misinformation about the climate issue.
I never planned to celebrate the 10th birthday because I find such celebrations stupid and I am shy – but if you want to drink some whiskey at home, be my guest! After these ten years, this blog run by one person (but made so inspiring and kind by many of you, thank you!) has welcomed the same number of visitors as RealClimate.ORG, which is run by a dozen folks, about 50% of the "global community" that wants to force mankind to pay trillions of dollars. Not bad.
Congratulations to the 10th birthday of RealClimate.ORG.
Lacking the Lumoesque shyness, modesty, and focus on the beef, the RealClimate.ORG website has published not just one but three Happy 10th Birthday blog posts.
Meanwhile, their friends celebrated, too. All of them flew to Lima, Peru and set a new world record in the money wasted on a hysterical climatic conference. Their Greenpeace comrades, also in Peru, damaged and desecrated the Nazca lines, created by an ancient civilization and carefully protected for 1,500 years. (Ironically enough, the most irreversible damage has been done by the Greenpeace officials' footprints. The Peruvian government normally demands special shoes etc. for all the visitors.) The similarity of this vandalism to the liquidation of heritage by the Islamic State is way too obvious.
The Peruvian government is suing Greenpeace and, because of the pricelessness of these geoglyphs, the Latin American nation undoubtedly has the moral right to liquidate the disrespectful terrorist organization. But my realism prevents me from believing that this outcome will actually materialize. Here, however, I want to discuss the previous RealClimate.ORG blog post – one about a... widget.
MIT's terror against Walter Lewin's lectures is unacceptable
Banned lectures and rewritten history resemble Nazism
Update: Jason seems to claim that all this MIT-wide scandal was caused by one sentence that Walter Lewin tweeted, "queefing [=vaginal farting] is yours", in a childish conversation about a plan to create a water company that two girls started with him. If true, it's quite unbelievable.
Prof Walter Lewin has been a hero of the open courses. His online MIT courses on physics – usually "rather elementary" physics – have attracted millions of viewers. You may perhaps find some cool videos on YouTube or you may directly go to an otherwise obscure backup at VideoLectures.NET.
I recommend you e.g. this lecture on mechanical energy where Lewin offered his life for the claim that the energy is conserved. He said "if I don't succeed in giving the heavy ball the zero speed, this will be my last lecture". The world is a šitty place, however, so the zero speed wasn't a sufficient condition.
Dilaton has noticed an MIT press release (see also NYT) that proudly informs that a student has complained about some online communication with Lewin. A committee has determined that he has violated a "sexual harassment" regulation at MIT.
The result? They removed all of his videos from MIT websites and declared that his title "professor emeritus" is no longer valid.
Gold: a 6000-year-old bubble
On November 30th, 2014, Switzerland held three referenda.
Spoilers alert: all of the results were "No".
It's funny that I told you the results before I explained what the questions were. One of the proposals demanded that the Swiss National Bank hold at least 20% of its assets in gold, and that Swiss gold be returned from New York. 77% were against the proposal; a "yes" result could have increased the gold price by 5%.
The Swiss gold repatriation may have been a reply to similar imminent plans to repatriate the Dutch gold.
The current gold price is about $1,200 per [troy] ounce (0.0311 kg), about 1/3 below the peak above $1,900 per ounce in August 2011. The drop of savings by 1/3 may be unfortunate but the quadrupling of some people's wealth was even more fortunate.
Because of these events, provocative comments by Citi's chief economist Willem Buiter have triggered some discussions and responses.
Why most minority rights NGOs oppose net neutrality
Zero rating: unlimited Facebook data for cell phones
Yesterday, The New York Times wrote an insightful text about the minority activists' opinions on the net neutrality meme.
Obama's Net Neutrality Bid Divides Civil Rights Groups
The Grey Lady says that it is usually expected that net neutrality is favored by the far left-wing fringe of the Democratic Party. However, the real-world data suggest something else.
The conservative judge Antonin Scalia has supported this egalitarian concept for a decade. But most of the "civil rights groups" that The New York Times enumerates actually oppose net neutrality:
N.A.A.C.P.
Rainbow/PUSH Coalition [Jesse Jackson]
League of United Latin American Citizens
National Organization of Blacks in Government
and others (45 NGOs and professional groups met and voiced their opposition to net neutrality at a meeting). The article only mentions two pro-net-neutrality groups of a similar kind, ColorofChange.org, a black political coalition, and the National Hispanic Media Coalition.
Also, 32 academics sent a letter to the FCC opposing Obama-style net neutrality.
Dimensionful universal constants are unphysical cultural artifacts
Michael Duff has released a hep-th preprint
How fundamental are fundamental constants?
about a topic I consider elementary and one that I understood already as a high school student. The realization is that the numerical values of dimensionful (having units) universal constants of Nature depend on arbitrary and physically unimportant human decisions and don't really affect the character of the physical laws.
By a more economic parameterization of the physical observables, e.g. the choice of \(1=c=\hbar=\epsilon_0=k=G=\dots \) units, one may completely eliminate the symbols of these constants from the equations describing the laws of Nature. This choice can be made even in situations when some people say that the "constants are evolving in time". To summarize, the number of physical fundamental dimensionful constants that would affect the laws of physics is always zero.
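A tiny sketch of what this bookkeeping looks like in practice, with the 2019 SI values of the constants plugged in: once \(c=\hbar=k_B=1\), a mass, a temperature or an inverse length can all be quoted as a pure number of electronvolts, and converting back to SI units just reinserts the constants.

```python
# Quoting SI quantities in natural units (c = hbar = k_B = 1), everything in eV.
c = 299792458.0              # speed of light, m/s
hbar = 1.054571817e-34       # reduced Planck constant, J*s
k_B = 1.380649e-23           # Boltzmann constant, J/K
eV = 1.602176634e-19         # joules per electronvolt

mass_in_eV = lambda kg: kg * c**2 / eV                          # m  ->  m c^2
temperature_in_eV = lambda kelvin: kelvin * k_B / eV            # T  ->  k_B T
inverse_length_in_eV = lambda meters: hbar * c / (meters * eV)  # L  ->  hbar c / L

print(mass_in_eV(9.1093837015e-31))   # electron mass ~ 5.11e5 eV
print(temperature_in_eV(300.0))       # room temperature ~ 0.026 eV
print(inverse_length_in_eV(1e-10))    # one angstrom ~ 2.0e3 eV (hbar c ~ 197 eV nm)
```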
You may compare Duff's paper with some texts of mine such as
Dimensionless Constants in Physics (Physics Stack Exchange, answer, 2011)
Changes of dimensionful quantities are unphysical (TRF 2009)
Let's fix the value of Planck's constant (TRF 2012)
Parameters of Nature (TRF 2004)
and elsewhere. I am sure that he would agree that we fully agree. More precisely, Duff would say (2011):
As a fresh member of the Royal Society, I am grateful to my overlords and I am ready to trample on politically incorrect babies in order to be admitted to as many similar societies as possible. So I will happily start by saying that I do not share Lubos Motl's extreme views on politics, global warming, and sometimes not even string theory. However, he occasionally has some good physics summaries, including a recent one giving a nice history of the triumphs of unification [26]
Yes, Mike, this introduction of yours was despicable, utterly unethical, and you will be grilled in Hell throughout the infinite asymptotic future. But yes, we agree on the units.
Political correctness makes some racial problems unsolvable
Unfortunately, the unrest in Ferguson, Missouri and elsewhere in the U.S. continues. During the 10 years I spent in America, I was mostly exposed to bubble-like environments where no open racism existed – except for some reverse racism. But my understanding is that despite the legal arrangement that guarantees equality of people of different races, some manifestations of racism are bound to exist in 2014. They are a part of the human nature. They have some reasons that will never disappear.
But some problems associated with the co-existence could disappear or become better. They haven't been improving for quite some time – and from many viewpoints, things have become worse. The picture above appeared among some fresh photographs from Harvard published in The Harvard Crimson. The folks' banners say #BlackLivesMatter and I agree with that. However, this slogan is too superficial to solve anything.
2015 Breakthrough Prize: videos
Free Einstein: Fred Singer has informed me about a wonderful birthday gift, "Digital Einstein", for all the readers who have managed to have their birthday on December 5th – e.g. Werner Heisenberg, Shelly Glashow, or your humble correspondent. The Princeton+HUJI collection contains thousands of (scanned...) Einstein's papers – for free. ;-) Info. It's amazing to see how the 1896 papers' fonts look just like \(\rm\LaTeX\).
The 2015 Breakthrough Prize has gone to three experimental teams led by Perlmutter, Riess, and Schmidt who discovered the accelerating expansion of the Universe about 15 years ago.
It's a change of the policy – they were experimenters and they came in large groups. While this diluted outcome may look less interesting, it adds some balance to the prize.
Merkel against net neutrality
It is well-known that net neutrality is just another communist plot by people who are detached from reality and who advocate egalitarianism at every level and in every aspect of the human lives, being completely unable to comprehend that differences are crucial for the system to work efficiently and to make people happy, safe, and prosperous.
Their proposals – encapsulated in childish, pleasingly sounding propaganda – threaten not only the further progress of the Internet but even the efficiency and safety of the Internet as it has marvelously developed in the recent decades. There are very good reasons why some types of data should enjoy a higher priority than others. The reasons why this inequality is being introduced may be presented as the efforts of the Internet providers to increase their profits. But there's really nothing "dirty" about profits: the Internet providers – ISPs – increase their profits because they optimize how their service works for others.
Left-wing politicians – such as Barack Obama and most of the folks in the European Parliament – love to uncritically promote "net neutrality", a government-imposed equality between all packets. However, this concept has earned a rather formidable foe: the most powerful woman on Earth, Angela Merkel.
Weird German climate commercials
German sex: only dressed and in the dark
My homeland belongs to the broader German civilization space. Germans are ahead in many things. What is waiting for us if we pursue their strategy to "fight the climate change"?
Barbara Hendricks, the German minister of the environment (SPD), has spent $2 million of the taxpayer money to produce the Together Is the Climate Change (my translation can't be perfect, can it?) video clips.
What can we learn from our German neighbors? Well, in this video, the wife – a musician – closes the window because she doesn't want to listen to the zombies who are devouring her hard-working husband in their garden.
SUSY and extra dimensions together are more compatible with LHC data
In the morning, I read the daily papers on the arXiv and exactly one paper stayed open in my browser:
Auto-Concealment of Supersymmetry in Extra Dimensions (Stanford-Oxford-Airforce Collaboration)
Eminent physicist Savas Dimopoulos, along with his pals (Howe, March-Russell, Scoville), argues that the LHC data don't imply that the superpartners – new elementary particles implied by supersymmetry – have to be as heavy as usually assumed. Instead, they may be rather light, and such a theory yields predictions that are compatible with the LHC data – so far consistent with the Standard Model – anyway.
A possible reason why it may be hard for the collider to see SUSY is known as "compressed spectra". What does it mean? There are always some superparticles that are predicted to be produced rather often (if they're light enough). Why aren't they seen? Because they decay into products (ending with the lightest superpartner, the LSP) whose masses are nearly equal – an approximate degeneracy – so that little energy is left over for the "missing transverse energy".
And Savas et al. are proposing a clever microscopic explanation why the spectra might be compressed. Extra dimensions. They mean pretty large dimensions – much larger than the usual Planck length but much smaller than a millimeter, extra dimensions comparable to the size of a nucleus (or larger than at least 10% of a fermi, the nuclear length scale).
Other texts on similar topics: experiments, LHC, string vacua and phenomenology, stringy quantum gravity
Yale/Oxford/MIT/Rockefeller theoretical physicist to lead Pentagon
Not too original news from Planck: inflation constraints, \(n_s= 0.9652 \pm 0.0016\) and \(r\lt 0.09\) (including dust-cleaned WMAP9 low \(\ell\) polarization), via Mathew Madhavacheril. Ongoing ground-based experiments will either guarantee discovery or impose \(r\lt 0.01\).
Chuck Hagel resigned as the U.S. Secretary of Defense last week and no one wants to succeed him. Well, it seems that it's just "almost no one".
After quite some time, I am impressed by the credentials of the likely pick (the credentials don't guarantee great outcomes, of course, but I still care about them). So far, Ashton Carter (*1954 Pennsylvania) has been the Deputy Secretary of Defense, a CEO of a sort, overseeing $0.6 trillion in expenses and supervising 0.0024 billion people.
He's an expert in Star Wars, cyber warfare, and all real-world high-tech systems that America possesses right now. Those things are unusual but not too unexpected for a Pentagon pick. However, the academic background is unusual. After being the president of the honor society at his high school, he got a theoretical physics PhD at Oxford (1979), where he was a Rhodes scholar – a prestigious scholarship awarded to the best students from the U.S.
UAH AMSU: 2014 probably 3rd or 4th warmest year
RSS AMSU rank: 6th-9th
Even in the heretical Czech nation, the media recently published articles such as
This year will be the warmest one in the history, U.S. climatologists have calculated
I don't know what they smoke, whether the difference is due to the general satellite-surface deviations, or due to the satellites' inability to see the very vicinity of the poles but the article above won't correspond to the reality as determined by UAH AMSU, a satellite dataset.
A new weather building at UAH
We should be using the lower troposphere (near-surface) UAH dataset v5.6 and supplement the first column with the +0.33 °C anomaly for November 2014 that Roy Spencer revealed an hour ago.
Other texts on similar topics: climate, weather records
Black ice kills electric transportation across Czechia
Up to the end of November, Czechia managed to avoid any snow and ice in 2014. I haven't recorded the weather in any detail but I think that for several years, we didn't have such a late arrival of the winter.
In Fall 2014, it wasn't really "warm" but the temperatures managed to stay above the freezing point all the time.
Well, it's dangerous to extrapolate.
James Watson, world's #1 biologist, mostly forced to sell Nobel medal
Many people have received a Nobel prize in medicine. Whom would you consider the most important living laureate?
Well, I think that there's very little doubt that James Watson, a co-discoverer of DNA and a co-winner of the 1962 Nobel prize in medicine, would be chosen by the largest number of respondents. Several years ago, James Watson was declared an unperson for having pointed out – in a rather straightforward, unfiltered way – that different groups of people differ in their abilities, too. Note that the TRF blog post has attracted almost 200 comments.
Let me emphasize that he didn't join just the "Untermenschen". This hero of life sciences has joined the "Unpersons". He still gets some basic income related to his academic career, but he stopped receiving any other money – from talks etc. I find it conceivable, although not guaranteed to be true, that this man (who has probably gotten used to some luxury) is feeling some financial pressure, and that's the reason why he decided to sell the Nobel prize medal and some related items through an auction, a move that is expected to bring him several million dollars. A part of the revenue would be sent to some of the institutions that have allowed his scientific research to proceed.
Other texts on similar topics: biology, freedom vs PC, IQ, science and society
|
CommonCrawl
|
Article | Open | Published: 17 April 2019
A potential sensing mechanism for DNA nucleobases by optical properties of GO and MoS2 Nanopores
Vahid Faramarzi, Vahid Ahmadi, Bashir Fotouhi & Mostafa Abasifard
Scientific Reports, volume 9, Article number: 6230 (2019)
Subjects: Nanopores
We propose a new DNA sensing mechanism based on the optical properties of graphene oxide (GO) and molybdenum disulphide (MoS2) nanopores. In this method, GO and MoS2 are utilized as quantum dot (QD) nanopores and the DNA molecule translocates through the nanopore. A recently developed hybrid quantum/classical method (HQCM) is employed, which uses time-dependent density functional theory and a quasi-static finite-difference time-domain approach. Owing to the good biocompatibility, stability and excitation-wavelength-dependent emission behavior of GO and MoS2, we use them as nanopore materials. The absorption and emission peak wavelengths of GO and MoS2 nanopores are investigated in the presence of DNA nucleobases. The maximum sensitivity of the proposed method to DNA is achieved for the 2-nm GO nanopore. Results show that insertion of DNA nucleobases in the nanopore shifts the wavelength of the light emitted from the GO or MoS2 nanopore by up to 130 nm. The maximum relative shift between two different nucleobases is obtained between the cytosine (C) and thymine (T) nucleobases, ~111 nm for the 2-nm GO nanopore. Results show that the proposed mechanism has a superior capability to be used in future DNA sequencers.
Rapid DNA sequencing methods are excellent tools for the growing field of personalized medicine and have been developed theoretically and experimentally1,2,3,4,5,6,7. These rapid DNA sequencers use changes in ionic or tunneling currents, surface plasmon resonances, self-aligned optical antennas and surface-enhanced Raman spectroscopy to determine the type of the DNA nucleotides: adenine (A), cytosine (C), guanine (G) and thymine (T)2,4,5,6,7,8. The minimal thickness of single-layer membranes such as graphene is the key driving force for two-dimensional-material nanopores2,5. However, in order to achieve single-nucleotide resolution, there are still many challenges such as high membrane thickness, fast DNA translocation speed, slow sensing mechanisms and noise effects1,2,4. In this paper, we propose and analyze a novel concept for sequencing DNA molecules based on the absorption and emission properties of fluorescent materials. For DNA sequencing by this new approach, we have to use materials with excitation-dependent emission behaviour, because each DNA nucleotide has a unique absorption spectrum. Recently, semiconductor quantum dots (QDs) have been proposed for fluorescence emission applications because of their advantages over commercial dyes, such as higher quantum yields (the ratio of emitted to absorbed photons), properties controllable by size and shape, and resistance to photobleaching9,10. However, according to Kasha's rule11, the fluorescence of conventional fluorophores, such as organic dyes and semiconductor QDs, does not depend on the excitation energy: excited electrons mostly relax to the bottom of the conduction band before fluorescence begins, independently of the initial excitation photon energy. On the other hand, graphene derivatives exhibit many interesting photoluminescence (PL) properties12,13.
Graphene oxide (GO) is a functionalized, few-layered form of graphene with oxygen functional groups attached to the basal plane. Studies show that the photoluminescence emission of GO in a polar solvent, like water, depends on the excitation wavelength14,15. The position of the fluorescence peak of GO in such a polar solvent, without changing the GO sheet size, red-shifts with increasing excitation wavelength. The strong excitation-wavelength-dependent fluorescence of GO originates from the red-edge effect, which results from a slowed solvation process due to an interaction between the solvent dipole and the fluorophore dipole14. Furthermore, it has been shown that molybdenum disulfide (MoS2) QDs, with a series of advantages such as high quantum yield, multicolor PL emission ranging from blue to red and good biocompatibility, have great potential for bio-detection applications16,17. Excitation-dependent PL emission spectra of MoS2 QDs have also been observed, and the fluorescence peak position, for a uniform size of the gathered MoS2 QDs, varies under different excitation wavelengths18,19,20. The aim of this study is to present a new method that uses the optical properties of GO and MoS2 nanosheets for fast, label-free and accurate detection and sequencing of DNA nucleobases. The photoabsorption spectra of GO and MoS2 nanopores in the presence of DNA are calculated by employing the powerful hybrid quantum/classical method (HQCM)21. Next, the impact of the DNA nucleobases presented at the nanopores on the photoabsorption spectra, band-gap energies, electric field enhancements and emission wavelengths of GO and MoS2 nanopores is investigated. Then, by a signal-processing step, we find one frequency channel per DNA nucleobase as an excitation wavelength for each type and size of nanosheet. Given the excitation-wavelength-dependent emission properties of GO and MoS2 nanosheets, the wavelengths of the light emitted from the GO and MoS2 nanopores are calculated and analyzed in the presence of each type of DNA nucleobase individually. Thus, an emission peak wavelength, as a detection signal, can be assigned to each type of DNA nucleobase. Results show a superior capability of this concept to be used in future DNA sequencers.
The Proposed Structure and Operation Principle
The schematic structure of our proposed DNA sequencing method is presented in Fig. 1. It contains a symmetric QD nanopore, and a DNA molecule is introduced in the middle of the nanopore. In the structure, GO or MoS2 is utilized as the QD nanopore, and the DNA molecule translocates through the nanopore. The pore is created at the middle of the nanosheet and belongs to the classical subsystem. In our theoretical model, the proposed GO and MoS2 structures are considered to be square sheets with thicknesses of 1 and 0.65 nm, respectively. A pore with a diameter of 1.5 nm and the same thickness as the nanosheet is then made at its middle. The pore and the surrounding medium are filled with water and are treated with the optical properties of water in the classical subsystem. The nanopore membrane and the DNA molecules are assumed to be placed in an aqueous solution.
The schematic structure of our proposed DNA sequencing method based on the excitation-dependent emission property of GO or MoS2 nanopore while DNA molecule passes through the nanopore. The structure is assumed to be suspended in a polar solvent such as water. Regarding the excitation wavelength dependent behavior of GO and MoS2 materials, the emission wavelength λout would be a function of the incident light wavelength λin. The blue, green, red and cyan colors represent the emission wavelength of the GO or MoS2, corresponding to the presented A, T, C and G nucleobases at the nanopore. The function (f ) is determined by the type of the DNA nucleobases.
As the DNA molecule has four nucleobases, we assign a unique optical signal to each type of DNA nucleobase. The influence of the DNA nucleobase present at the nanopore on the optical properties of the nanopore membrane material is investigated. First, we need to obtain one photoabsorption spectrum of the membrane nanopore + DNA nucleobase complex for each type of nucleobase present at the pore. The selectivity factor, i.e. the capability of distinguishing between two different nucleobases, is then defined. For this purpose, we search for the maximum difference between the absorbance peaks in the absorption spectra of the membrane nanopore + DNA nucleobase complexes, and we identify the peak wavelength of the final absorbed spectrum for which the difference between the absorbance peaks of two different nucleobases is maximal. In this way we obtain one frequency channel per DNA nucleobase and consider it as an excitation wavelength for the specific type and size of membrane. In order to detect DNA nucleobases at the output of the proposed system, we look for the emission wavelength of the layer, in the presence of the nucleobase at the nanopore, corresponding to the obtained excitation wavelength. According to Kasha's rule, the emission wavelength should be fixed and independent of the excitation wavelength once a certain dye or nanosheet is chosen. However, we have to use materials capable of having different emission wavelengths, i.e. excitation-wavelength-dependent emission properties, because we need to assign an emission wavelength to each type of nucleobase. On the other hand, GO and MoS2 do not obey Kasha's rule in a polar solution (such as water), and their peak emission wavelength varies with the excitation wavelength. Thus, taking into account the conditions mentioned above, the larger molar absorption of DNA nucleobases at higher energies (especially above 4 eV), and biocompatibility considerations, we select GO and MoS2 nanopores14,18,20.
Figure 2 shows the absorption spectra of the GO and MoS2 nanopores with and without DNA nucleobases. The nanopore, with a diameter of 1.5 nm, is assumed to be symmetrically made in the center of the GO or MoS2 sheet, and the DNA molecule passes through it. The QD sheet lengths are assumed to be 2, 3 and 5 nm, as shown in Fig. 2(a–f). We should note that a single-stranded DNA molecule cannot pass through nanopores smaller than 1.5 nm in diameter22. Also, increasing the pore diameter above 1.5 nm gradually reduces the influence of the presented DNA nucleotides on the QD absorption spectra. Thus, we consider a nanopore with a diameter of 1.5 nm. For example, in Fig. 2(a–f) we can see that the impact of the DNA nucleobases on the absorption spectrum of the QD nanopore decreases when the sheet length is changed from 2 to 5 nm, for both GO and MoS2 nanopores. This is because the optical absorption of a QD increases with its size, so the relative impact of the DNA nucleobases on the QD absorption spectrum is reduced. Generally, the absorbance peaks of the QD + DNA nucleobase complexes are similar to the peaks of the bare A, C, G and T nucleobases reported by Tsolakidis et al.23. For example, the dominant peaks of the A and T nucleobases are close to each other, at about 7 eV (~176 nm)23. Similarly, in our study, and for the whole complex of a QD nanopore with the A or T nucleobase, the dominant introduced peaks are close to each other, at the same wavelength of around 176 nm. For the MoS2 nanopore + DNA molecule complex, in comparison with the case of no nucleobase, the absorbance increases for wavelengths smaller than 180 nm and decreases for longer wavelengths. It should be noted that in the combined system the resonant absorbances of the DNA molecule and the MoS2 nanopore are coupled, and this leads to hybridized quantum-molecule/classical-material states. The absorbance peak of the MoS2 nanopore is strong compared with that of GO. On the other hand, the absorbance resonances of the DNA nucleobases are significantly weaker than the absorbance peak of the MoS2 nanopore, so strong interband damping of the MoS2 absorbance peak takes place when it overlaps in energy with the absorbance resonances of the DNA nucleobases24. Because of the limited absorption intensity of the DNA nucleobases at higher wavelengths, compared with that of the MoS2 nanopore, this interband damping effect can be observed in the absorption spectra of the MoS2 nanopore + DNA nucleobase complexes. Also, our calculated absorbances for the GO and MoS2 sheets are in good agreement with experimental studies25,26. To further investigate the impact of the inserted DNA nucleobases on the QD absorption spectrum, we calculate the induced absorbance of the QD due to the presence of the DNA molecule as the difference between the absorption spectra of the QD–DNA complex and of the bare DNA nucleobases. The induced absorbance shows the net absorbance of the QD in the presence of the DNA molecule; it also reveals the changes in intensity and peak position of the GO or MoS2 nanopore. Moreover, to determine the net absorbance of the system due to the presence of the DNA molecule, we calculate the difference between the induced absorbance and the absorption spectrum of the bare QD (the differential absorbance). Figure 3(a–f) shows the differential absorbance of GO and MoS2 nanopores for the different lengths of 2, 3 and 5 nm, respectively.
As can be seen in the figure, GO nanopores show more peaks than MoS2 nanopores, because the DNA nucleobases have more influence on the absorption spectra of the GO nanopores, owing to the smaller absorbance of GO compared with MoS2. The QD nanopores of 5 nm length have higher differential absorption and show more and stronger peaks than the smaller QD nanopores, as shown in Fig. 3(c,f). More peaks and larger differential absorbances can be observed at lower wavelengths because of the high optical absorption of the DNA nucleobases at these wavelengths. The differential absorbance spectrum of the larger QDs, compared with the smaller ones, shows a spectral line shape like the DNA absorption spectrum, as shown in Fig. 3(c,f,g). To discuss this, we show the induced absorbance of the QD in the presence of DNA for GO nanopores with lengths of 2, 3 and 5 nm, and MoS2 nanopores of 2, 3 and 5 nm, in the insets of Fig. 3(a–f), respectively. Since the optical absorption of the GO and MoS2 nanopores increases as the nanosheet gets larger, the effect of the DNA nucleobases on the absorbance peak of the larger QD nanopores is not considerable, and the peak shows no noticeable wavelength shift, as can be observed in the figures. Because the DNA nucleobases have a limited optical absorption, for smaller QD nanopores the induced absorbance in the presence of DNA nucleobases shows more absorbance peaks and a wavelength shift of the QD absorbance peak.
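As a minimal sketch of the bookkeeping described above, the induced and differential absorbances can be computed directly from three spectra sampled on a common wavelength grid; the array names below are hypothetical placeholders rather than outputs of the authors' code.

```python
import numpy as np

def induced_and_differential(A_complex, A_dna, A_qd):
    """Induced absorbance = (QD + DNA complex) - bare DNA;
    differential absorbance = induced - bare QD.
    All inputs are 1-D arrays on the same wavelength grid."""
    A_induced = np.asarray(A_complex) - np.asarray(A_dna)
    A_differential = A_induced - np.asarray(A_qd)
    return A_induced, A_differential

# Example usage (placeholder arrays):
# wavelengths = np.linspace(150, 400, 1000)          # nm
# A_ind, A_diff = induced_and_differential(A_complex, A_dna, A_qd)
```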
The molar absorbance of GO nanopores with lengths of (a) 2, (b) 3 and (c) 5 nm, and of MoS2 nanopores with lengths of (d) 2, (e) 3 and (f) 5 nm, with and without DNA molecules. The DNA nucleobases have the largest influence on the absorption spectra of the GO or MoS2 nanopore with a length of 2 nm. The impact of the DNA molecules presented at the nanopore decreases with increasing sheet length. The thicknesses of the GO and MoS2 nanopores are assumed to be 1 and 0.65 nm, respectively.
The differential absorbance of the QD nanopores due to the presence of DNA molecules for GO sheets with lengths of (a) 2, (b) 3 and (c) 5 nm, and MoS2 sheets of (d) 2, (e) 3 and (f) 5 nm. Different amounts of variation in the differential absorbance are ascribed to different nucleobases. For larger nanosheets, a spectral line shape like the bare DNA absorption spectra (g) can be observed in the differential absorbance. Insets: the induced absorbance of differently sized GO and MoS2 nanopores due to the DNA nucleobases presented at the nanopore. For smaller QDs, the DNA nucleobases create more peaks and shift the QD absorbance peak in the induced absorption spectra. The absorbance peaks of the 5-nm sheets show no considerable shift in the presence of the DNA nucleobases. (g) The molar absorbance of all four bare nucleobases.
As a result, for larger nanosheets the DNA nucleobase absorption is overwhelmed by the large absorption of the nanosheet, and a modified absorption spectrum of the DNA nucleobases can be obtained from the differential absorbance. Because there is no noticeable wavelength shift in the induced absorbance for larger nanosheets, the differential absorbance of the system will be a spectrum similar to that of the original DNA nucleobases. In other words, a spectral line shape like the bare DNA nucleobase absorption spectra can be obtained by calculating the differential absorbance of the system. Differential absorbance can therefore be useful to distinguish the different DNA nucleobases presented to the nanopore. It has been shown that the differential absorbance gives direct access to the modified dye absorbance, yielding a spectrum similar to that of the original dye, scaled only by the plasmonic enhancement factor27. Therefore, the differential absorbance spectrum can provide invaluable information about the inserted DNA nucleobases at the nanopore (such as the position and number of absorbance peaks) in a UV-vis absorption set-up by eliminating the background absorbance (the absorbance spectrum of the GO or MoS2 nanopore) of the system. The schematic structures of the bare A nucleobase, and of the A nucleobase in the presence of the GO and MoS2 nanopores, are shown in Fig. 4(a–c), respectively. The electric field enhancement of these configurations is calculated at the major absorbance peak wavelength of the A nucleobase and shown in Fig. 4(d–f). As shown in these figures, the electric field of the A nucleobase is enhanced in the presence of the GO and MoS2 nanopores. The A nucleobase at the MoS2 nanopore has more field enhancement than at the GO nanopore. Also, as shown in Fig. 4(d–f), the MoS2 nanopore has a stronger enhancement effect on the DNA absorbance than GO, because the MoS2 nanopore has a stronger optical absorption than GO over a wide wavelength range from the UV to the near-infrared. The MoS2 nanopore provides a more significant 4.1-fold enhancement of the A nucleobase absorbance, compared with a 1.49-fold enhancement with the GO nanopore, at the peak wavelength of ~178 nm. The enhanced absorption of the DNA molecule at the GO or MoS2 nanopore verifies the field-enhancement results. Similarly, the electric field enhancements and corresponding enhanced absorption spectra of the other types of DNA nucleobases presented at GO and MoS2 nanopores are shown in Supplementary Information Figs S1, S2 and S3. The enhanced absorption spectra have similar dominant peaks at 178, 188, 192 and 177 nm, corresponding to the A, C, G and T nucleobases, respectively, compared with the bare nucleobase absorption spectra23 (see Fig. 4 and Supplementary Information Figs S1, S2 and S3 for more details).
The schematic structure of (a) the bare A nucleobase, and of the A nucleobase in the presence of (b) GO and (c) MoS2 nanopores. The electric field enhancement of (d) the bare A nucleobase, and of the A nucleobase (e) at the GO and (f) at the MoS2 nanopore, at 178 nm. The black points mark the amplified A nucleobase atoms. At the peak wavelength of 178 nm (~7 eV), the electric field of the A nucleobase at the GO and MoS2 nanopores is enhanced by factors of 1.2 and 2, respectively. (g) The molar absorbance of the bare A nucleobase and the enhanced absorbance of the A nucleobase in the presence of (h) GO and (i) MoS2 nanopores. The enhancement factor of the A nucleobase absorbance in the presence of the GO and MoS2 nanopores at the peak wavelength of 178 nm is about 1.49 and 4.1, respectively. The length of the sheets is 5 nm.
To investigate the influence of the inserted DNA molecule on the band-gap energy of the QD nanopore, we calculate the band-gap energy of the GO or MoS2 nanopore + DNA molecule complex using Tauc plots28. As shown in Fig. 5, the band-gap energies of the 5-nm GO and MoS2 sheets are ~3.5 eV and ~1.89 eV, respectively, in good agreement with the results presented by Mathkar et al.29 and Arul et al.30. As indicated in Fig. 5, in the presence of the DNA molecule the band-gap energy ranges from ~3.53 to 3.8 eV for a 2-nm GO sheet and from ~1.98 to 2.14 eV for a 2-nm MoS2 sheet.
The band-gap energy of the QDs in the presence of a DNA molecule. The band-gap energy of the 2-nm GO sheet shows the largest variation in the presence of DNA nucleobases, ranging from ~3.53 to 3.8 eV. The 5-nm MoS2 sheet shows the smallest variation of the band-gap energy.
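For reference, the Tauc-plot band-gap extraction mentioned above can be sketched as below: form (A·E)^n versus photon energy E, fit the linear region and extrapolate to zero. The exponent (n = 2, direct allowed transitions), the fitting window and the arrays are assumptions for illustration, not the authors' actual post-processing.

```python
import numpy as np

def tauc_band_gap(energy_eV, absorbance, fit_window, exponent=2):
    """Estimate the optical band gap from a Tauc plot.

    y = (absorbance * E)**exponent is fitted linearly over fit_window = (E_min, E_max)
    and extrapolated to y = 0; the intercept with the energy axis is E_g.
    exponent = 2 corresponds to direct allowed transitions."""
    energy_eV = np.asarray(energy_eV)
    y = (np.asarray(absorbance) * energy_eV) ** exponent
    lo, hi = fit_window
    mask = (energy_eV >= lo) & (energy_eV <= hi)
    slope, intercept = np.polyfit(energy_eV[mask], y[mask], 1)
    return -intercept / slope      # energy at which the linear fit crosses zero

# E = np.linspace(1.0, 6.0, 500)                          # photon energy grid (eV)
# Eg_GO = tauc_band_gap(E, A_GO, fit_window=(3.6, 4.2))   # ~3.5 eV expected for the 5-nm GO sheet
```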
The applicability of the proposed method for DNA sequencing is influenced by a combination of the spectral shape of the input light and the QD size. Here, we consider QD nanopores with lengths of 2, 3 and 5 nm, because increasing the size of the QDs reduces their average sensitivity to the presence of DNA, as shown in Fig. 2. Next, for DNA sequencing, we define a figure of merit (FOM) given by
$$FOM=\prod_{\substack{i,j=1,2,3,4 \\ i<j}}\left|\frac{\lambda_{i}-\lambda_{j}}{\lambda_{j}}\right|$$
It enables us to distinguish between the nucleobase absorption characteristics in DNA sequencing. Here, i and j stand for the possible types of DNA nucleotides: A, C, G and T. The λi is defined as the peak wavelength of the final absorption spectrum of the QD nanopore when the influence of the type-i nucleobase presented to the nanopore is taken into account. To calculate the FOM, we apply a specific function to the absorption spectrum of each type of QD nanopore. The desired function is defined as a Gaussian function with central frequency ωc and spectral width σc, which are chosen to achieve the maximum value of the FOM for each type of QD. Figure 6 shows the maximum FOMs obtained for the corresponding Gaussian functions, with the central frequency and spectral width varied from 3 to 8 eV and from 0.1 to 1.5 eV, respectively. As Fig. 6 shows, the best FOM corresponds to the 2-nm GO sheet with the Gaussian function of ωc = 2.88 eV and σc = 1.39 eV. Moreover, for the 2-nm MoS2 sheet, the best FOM is achieved for ωc = 3.95 eV and σc = 1.38 eV. We then search for the peak wavelengths (λi) and peak widths of the light absorbed by the QD nanopore under the influence of the presented DNA nucleobases, corresponding to the best value of the achieved FOM. The calculated peak wavelengths of the light absorbed by the GO and MoS2 nanopores corresponding to the best achieved FOM, with and without DNA nucleobases, are shown in Fig. 7.
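A sketch of the FOM search described above, assuming the four QD + nucleobase spectra are available on a common energy grid; the Gaussian weighting and the scan ranges mirror the text, but the helper names and grid steps are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def peak_wavelength(energy_eV, spectrum, wc, sc):
    """Peak wavelength (nm) after Gaussian weighting centred at wc with width sc (both in eV)."""
    weighted = spectrum * np.exp(-((energy_eV - wc) ** 2) / (2.0 * sc ** 2))
    return 1239.84 / energy_eV[np.argmax(weighted)]        # eV -> nm

def fom(energy_eV, spectra, wc, sc):
    """The FOM defined above: product of relative peak-wavelength differences over all base pairs."""
    lam = {b: peak_wavelength(energy_eV, s, wc, sc) for b, s in spectra.items()}
    return np.prod([abs((lam[i] - lam[j]) / lam[j]) for i, j in combinations('ACGT', 2)])

def best_fom(energy_eV, spectra):
    """Brute-force scan of the Gaussian centre (3-8 eV) and width (0.1-1.5 eV)."""
    candidates = ((fom(energy_eV, spectra, wc, sc), wc, sc)
                  for wc in np.arange(3.0, 8.01, 0.05)
                  for sc in np.arange(0.1, 1.51, 0.05))
    return max(candidates)

# spectra = {'A': A_A, 'C': A_C, 'G': A_G, 'T': A_T}   # QD + nucleobase absorbances (placeholders)
# best, wc_opt, sc_opt = best_fom(E, spectra)
```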
The maximum achieved FOM for GO and MoS2 nanopores. The maximum FOM is obtained under conditions in which the center frequencies are 2.88, 2.68 and 2.66 eV, for 2, 3 and 5-nm GO sheets, and 3.95, 6.14 and 6.71 eV, for 2, 3 and 5-nm MoS2 sheets, respectively.
Peak emission wavelengths of all (a) GO and (b) MoS2 nanopores with and without DNA nucleobases corresponding to the center frequency and spectral width of the best achieved FOM for each type of QD nanopore. The peak absorption wavelength is labeled for each peak emission wavelength in the figures.
We consider each peak wavelength of the absorbed light as an excitation wavelength for the GO or MoS2 nanopore. Next, we calculate the peak emission wavelength of the structures in the presence of each nucleobase. For this purpose, we find the PL peak positions based on the excitation-wavelength-dependent emission property of the GO and MoS2 nanopores. It has been shown that when a GO sheet is suspended in a polar solvent, the emission peak of GO in water at room temperature red-shifts from 440 to 580 nm as the excitation wavelength increases from 350 to 500 nm. This creates a linear relationship, with a constant slope of ~1, between the emission and excitation wavelengths up to ~460 nm14. Also, MoS2 QDs, with and without considering the solvent effect, show variable PL emission under different excitation wavelengths, and the PL peak position is red-shifted for excitation wavelengths within 405–552 nm18,20. We then calculate the peak emission wavelengths corresponding to the absorbed-light wavelengths.
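The mapping from an excitation channel to the expected emission peak can be sketched as an interpolation over digitized excitation–emission data of the kind reported in refs 14,18,20; the calibration arrays below are placeholders anchored only at the two endpoints quoted above, and extrapolating below the calibrated range is an additional assumption.

```python
import numpy as np

# Placeholder excitation/emission calibration for GO in water: only the 350 -> 440 nm and
# 500 -> 580 nm endpoints come from the text; intermediate points are a linear guess.
exc_nm = np.array([350.0, 380.0, 410.0, 440.0, 470.0, 500.0])
emi_nm = np.array([440.0, 468.0, 496.0, 524.0, 552.0, 580.0])

def emission_peak(excitation_nm):
    """Interpolate the emission peak; linearly extrapolate outside the calibrated range."""
    slope = (emi_nm[-1] - emi_nm[0]) / (exc_nm[-1] - exc_nm[0])
    if excitation_nm < exc_nm[0]:
        return float(emi_nm[0] + slope * (excitation_nm - exc_nm[0]))
    if excitation_nm > exc_nm[-1]:
        return float(emi_nm[-1] + slope * (excitation_nm - exc_nm[-1]))
    return float(np.interp(excitation_nm, exc_nm, emi_nm))

# emission_peak(302.0)   # e.g. the absorbed-peak channel assigned to one nucleobase
```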
Figure 7 shows the calculated peak wavelengths of the light absorbed and emitted from all three sizes of the GO or MoS2 nanopores and DNA nucleobases complexes. For example, 3-nm GO sheet has the emission peaks centered at 337.9, 395, 363.8 and 290.8 nm corresponding to the absorbed light peaks centered at 284, 302, 269 and 191.3 nm, respectively.
To demonstrate the capabilities of the proposed structures for DNA sequencing, the relative shift of the wavelength of the light emitted from the GO and MoS2 nanopores between two different nucleobases is calculated. Figure 8 shows this relative shift of the output light wavelength. The possible pairs are A–C, A–G, A–T, C–G, C–T and G–T. For the GO nanopore, the maximum relative shift is obtained between C and T, Δλ(C,T) = 111.54 nm, when the 2-nm GO sheet is used as the QD nanopore; for MoS2, it is obtained between G and T, Δλ(G,T) = 56 nm, for the 2-nm MoS2 sheet. The shift between C and G in the 5-nm MoS2 sheet is the minimum relative shift. Here, we define the average sensitivity as
$$S_{avg}=\frac{1}{4}\sum_{j}\sum_{i}\frac{|\lambda_{max,j}-\lambda_{max,i}|}{\lambda_{max,j}},\qquad i,j=A,C,G,T.$$
where λmax is the peak emission wavelength of the GO or MoS2 nanopore, and i and j are the types of the DNA nucleobases. The maximum sensitivity of our proposed method to the presented DNA nucleobases is ~52.2%, which corresponds to the 2-nm GO nanopore. This value is higher than the maximum sensitivities of 19% and 38% reported for plasmonic-based DNA sequencing studies5,6. Also, the maximum sensitivity of the surface-enhanced Raman based method is about 34.22%7.
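The average-sensitivity formula above reduces to a few lines; a minimal sketch, assuming the four peak emission wavelengths are collected in a dict. The example values are the 3-nm GO emission peaks quoted later in the text, and their assignment to specific bases here is illustrative.

```python
def average_sensitivity(lam_max):
    """Average relative shift of the peak emission wavelength, returned in percent.

    lam_max maps each nucleobase to its peak emission wavelength in nm,
    e.g. {'A': 337.9, 'C': 395.0, 'G': 363.8, 'T': 290.8} (illustrative assignment)."""
    bases = ['A', 'C', 'G', 'T']
    total = sum(abs(lam_max[j] - lam_max[i]) / lam_max[j] for j in bases for i in bases)
    return 100.0 * total / 4.0

# average_sensitivity({'A': 337.9, 'C': 395.0, 'G': 363.8, 'T': 290.8})
```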
The relative shift of the main peak of the output light emitted from the QD nanopores between different nucleobases, shown for 2–5 nm (a) GO and (b) MoS2 sheets. The 2-nm GO sheet has the emitted light most sensitive to the type of DNA nucleobase, and the maximum relative shift of ~112 nm is obtained between C and T. In all GO nanopores, A and G show the minimum relative shifts of 12.4–16.5 nm, and the 2-nm shift between C and G in the 5-nm MoS2 sheet is the minimum relative shift overall.
It should be noted that in the field of nanopore DNA sequencing, the main purpose of modeling and simulation is, in most cases, to bring a new idea or class of DNA-sequencing mechanism to the field. Nevertheless, practical parameters and challenges such as pore size, salt solution, translocation dynamics, nucleobases sticking to the pore, noise from neighboring nucleobases, contamination and defects are still present and not fully characterized. From a practical point of view, a study by Yanagi et al. showed that smaller nanopores, with diameters of 1 to 2 nm, can be fabricated using dielectric breakdown; this method can generate nanopores with diameters down to sub-1 nm in a 10-nm-thick Si3N4 membrane with good stability31. To prevent the nucleobases from sticking to the pore, and thus the accumulation of DNA molecules inside the nanopore in a real experiment, the pore can be passivated with a protein layer, an insulating layer or specific atoms, resulting in improved accuracy of the optical measurements and a reduced noise level. According to studies from several groups, passivation of the surface and the sidewall of a nanopore device can be done using bovine serum albumin (BSA)32, photo-definable PDMS (P-PDMS)33 or silicon atoms34, which prevents aggregation of DNA inside the pore but otherwise does not significantly affect DNA translocation.
Generally, the proposed method for sequencing DNA molecules has some advantages over previous methods such as ionic or tunnelling currents, Raman spectroscopy and surface plasmon resonances1,2,3,4,5,6,7,8. The nanosecond-order lifetime of the method is simultaneously an advantage and a disadvantage for DNA sequencing: the DNA translocation time is short, but the emission lifetime is long. This longer lifetime can be used for simple tracking of the sensing signal, and DNA amplification can be utilized to give the emission mechanism enough time to complete. Moreover, because of the size-dependent tunability of the optical properties of QDs and the practical viability of nanometer-sized QDs, the proposed mechanism seems to be more reliable than ionic and tunnelling currents, surface plasmons and Raman spectroscopy. The concept shows larger wavelength shifts due to the presented DNA nucleobases; hence, the method is more sensitive and selective than ionic, tunnelling, plasmonic and Raman-based mechanisms for DNA sequencing2,3,4,5,6,7,8. Also, owing to its higher selectivity, the suggested method can determine the type of the DNA nucleobases presented to the nanopore.
The hybrid quantum/classical method (HQCM) has been developed for computing the electronic and optical properties of semiconductor and metallic nanostructures using the real and imaginary parts of the refractive index21,24,35,36. This method shows acceptable agreement between modeling and experimental data24,35. In this method, the calculations are divided into two parts: the quantum subsystem, which is propagated using the time-dependent density functional theory (TDDFT) scheme, and the classical subsystem, which is treated using the quasistatic finite-difference time-domain (QSFDTD) method. The method employs the dipole approximation and neglects the magnetic field35,37. The subsystems share a common electrostatic potential, while they are propagated separately on their own real-space grids. In the time-propagation TDDFT part of the calculation, the electrostatic potential is the Hartree potential, \(\nabla^2 V_{\rm qm}(\mathbf{r},t) = -4\pi\rho_{\rm qm}(\mathbf{r},t)\), and in the QSFDTD method the electrostatic potential is likewise obtained from the Poisson equation, \(\nabla^2 V_{\rm cl}(\mathbf{r},t) = -4\pi\rho_{\rm cl}(\mathbf{r},t)\). The hybrid scheme is created by replacing the electrostatic potential in both schemes by a common potential, \(\nabla^2 V_{\rm tot}(\mathbf{r},t) = -4\pi[\rho_{\rm cl}(\mathbf{r},t) + \rho_{\rm qm}(\mathbf{r},t)]\)24. This total potential is then used in the Kohn–Sham density functional theory (KS-DFT) scheme, and the electronic structure is solved for the ground-state and excited-state electron density. Finally, using the electron density and solving the time-dependent Schrödinger equation, the photoabsorption spectrum is extracted from the time-propagation simulations.
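As a toy illustration of the shared-potential idea (not the GPAW implementation used by the authors), the Poisson equation can be solved once for the summed classical and quantum charge densities on a common periodic grid, e.g. with FFTs; the array names and the Gaussian-unit convention \(\nabla^2 V=-4\pi\rho\) follow the equations above.

```python
import numpy as np

def poisson_fft(rho, box_length):
    """Solve nabla^2 V = -4*pi*rho on a periodic cubic grid (Gaussian units)."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    rho_k = np.fft.fftn(rho)
    V_k = np.zeros_like(rho_k)
    nonzero = k2 > 0
    V_k[nonzero] = 4.0 * np.pi * rho_k[nonzero] / k2[nonzero]   # -k^2 V_k = -4*pi*rho_k
    return np.real(np.fft.ifftn(V_k))

# The HQCM coupling in one line: both subsystems feel the potential of the total density.
# rho_cl, rho_qm = ...   # classical polarization charge and quantum electron density on one grid
# V_tot = poisson_fft(rho_cl + rho_qm, box_length=5.0)
```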
In our study, the GO and MoS2 nanopores are treated as the classical subsystem, and the DNA molecule is treated as the quantum subsystem. Since the membranes are thicker than the distance between two adjacent nucleobases (0.34 nm), we use amplified DNA to make sure the nanopore is filled with just one type of nucleobase. We therefore use four-fold amplified nucleobases (1 nm) for each specific type of DNA nucleobase, matching the largest membrane thickness (the GO membrane).
For classical subsystem modeling, permittivity is modeled as a linear combination of Lorentz oscillators, as demonstrated in
$$\varepsilon(\omega)=\varepsilon_{Re}(\omega)+i\,\varepsilon_{Im}(\omega)=\varepsilon_{\infty}+\varepsilon_{0}\sum_{j}\frac{\beta_{j}}{\omega_{j}^{2}-i\omega\alpha_{j}-\omega^{2}}$$
here, βj, ωj and αj are fitting parameters used to match the model to the experimental permittivities. In Eq. 3, the frequency ω is given in eV, and εRe and εIm are the real and imaginary parts of the permittivity, respectively24. To find the fitting parameters we minimize
$$\int\sqrt{A\,\big(\varepsilon_{Re}(\omega)-\varepsilon_{1}(\omega)\big)^{2}+B\,\big(\varepsilon_{Im}(\omega)-\varepsilon_{2}(\omega)\big)^{2}}\;d\omega$$
where ε1 and ε2 are the real and imaginary parts of the experimental permittivity, respectively, and A and B are constant weights that can be set to achieve the optimal fit. The experimental permittivities of GO and MoS2 have already been reported in the literature38,39.
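A minimal sketch of such a fit with scipy (assumed available): Eq. 3 is evaluated for a chosen number of oscillators and a weighted misfit closely related to Eq. 4 is minimized with a least-squares routine; the parameter layout, starting guesses and weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def lorentz_eps(omega, eps_inf, eps0, beta, omega0, alpha):
    """Eq. 3: complex permittivity as a sum of Lorentz oscillators (omega in eV)."""
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for b, w, a in zip(beta, omega0, alpha):
        eps += eps0 * b / (w**2 - 1j * omega * a - omega**2)
    return eps

def residual(p, omega, eps_exp, A=1.0, B=1.0):
    """Pointwise weighted misfit; least_squares minimizes the sum of its squares,
    a discretized analogue of the integrand in Eq. 4."""
    eps_inf, eps0 = p[0], p[1]
    beta, omega0, alpha = np.split(p[2:], 3)
    eps = lorentz_eps(omega, eps_inf, eps0, beta, omega0, alpha)
    return np.sqrt(A * (eps.real - eps_exp.real)**2 + B * (eps.imag - eps_exp.imag)**2)

# omega, eps_exp = ...   # tabulated experimental permittivity of GO or MoS2 (refs 38, 39)
# n_osc = 3
# p0 = np.concatenate(([1.0, 1.0], np.ones(n_osc), np.linspace(2.0, 8.0, n_osc), 0.3 * np.ones(n_osc)))
# fit = least_squares(residual, p0, args=(omega, eps_exp))
```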
Note that the single-stranded DNA molecule introduced into the QD nanopore is assumed to be of a single type (only A, C, G or T), and the DNA molecule length is considered to be almost equal to the diameter of the GO membrane. In the HQCM calculations, we use 1 Å and 0.25 Å real-space grid spacings for the classical and quantum subsystems, respectively, and the distance between the atoms and the grid borders is 0.4 nm. In these calculations, the time evolution is followed for 20 fs with 10-attosecond time steps, and the spectra are convoluted with a Gaussian of 0.35 eV FWHM. For the quantum subsystem, the atomic coordinates of the relaxed DNA molecules are placed at the center of the nanopore. The main parameters for the relaxation of the DNA molecules and the ground-state calculations are basis-set = 'dzp', exchange-correlation functional = 'LDA', MeshCutoff = 200 Ry and the QuasiNewton minimizer. The optimization algorithm runs until all atomic forces are below 0.05 eV per Angstrom. It should be noted that more accurate results would be obtained if the DNA nucleobases were relaxed with GGA functionals. However, we have compared the results of LDA calculations with those of GGA (unpublished results) and find no considerable difference between them. Therefore, in this study, considering the computational time and cost of GGA functionals, the LDA functionals have been utilized. The HQCM is accurate when the characteristic dimensions of the system are smaller than the wavelength of the input light; for example, if the structure size is about 50 nm, the results are valid up to 6 eV21. Previous research shows that DNA is naturally a fluorescent molecule40. Thus, the excitation light is absorbed and also emitted by the combination of the GO or MoS2 nanopore and the DNA molecule, as one complex. Hence, to study the molecular absorbance and emission, we consider the whole complex of the GO or MoS2 nanopore and the DNA molecule. For the HQCM calculations we use the GPAW codes36,41,42. The absorbance spectrum of the whole complex of the QD and DNA is calculated by
$$Molar\ Absorbance\,(\omega)=\frac{2\pi^{2}N}{10^{3}\,\ln 10}\left(\frac{e^{2}}{mc}\right)S(\omega)\qquad(M^{-1}\,cm^{-1})$$
where N is Avogadro's number and c is the velocity of light43. In Eq. 5, S is the dipole strength function along the direction parallel to the base plane of the QD sheets and the DNA molecule, which is numerically extracted by the HQCM codes.
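Since the HQCM calculations are performed with GPAW, a minimal sketch of the quantum-subsystem half alone (real-time TDDFT of a single, amplified nucleobase) is given below, following the pattern of GPAW's time-propagation TDDFT examples. The classical QSFDTD sheet and the shared electrostatic potential of the full HQCM are omitted, the file names are placeholders, and keyword names may differ between GPAW versions.

```python
from ase.io import read
from gpaw import GPAW
from gpaw.tddft import TDDFT, photoabsorption_spectrum

# Ground state of the relaxed (amplified) nucleobase: LDA, 0.25 A grid spacing,
# ~0.4 nm of vacuum between the atoms and the grid borders, as quoted above.
atoms = read('adenine_relaxed.xyz')          # placeholder coordinate file
atoms.center(vacuum=4.0)
calc = GPAW(mode='fd', h=0.25, xc='LDA', txt='gs.txt')
atoms.calc = calc
atoms.get_potential_energy()
calc.write('gs.gpw', mode='all')             # wave functions are needed for TDDFT

# Time propagation: 10 as steps for 20 fs (2000 steps), in-plane kick,
# spectrum folded with a 0.35 eV Gaussian.
td_calc = TDDFT('gs.gpw', txt='td.txt')
td_calc.absorption_kick(kick_strength=[1e-3, 0.0, 0.0])
td_calc.propagate(10.0, 2000, 'dm.dat', 'td.gpw')
photoabsorption_spectrum('dm.dat', 'spec.dat', folding='Gauss', width=0.35)
```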
We presented a novel method based on the optical properties of GO and MoS2 QDs for sequencing DNA molecules. The mechanism, combined with nanopore-based DNA translocation, is suggested and analyzed for sequencing DNA molecules. The recently developed HQCM, which employs TDDFT and QSFDTD calculations, is utilized to investigate the impact of DNA nucleobases on the absorption spectrum of the QD nanopores. Due to their biocompatibility, stability, large band-gap energy and, importantly, excitation-dependent PL properties, GO and MoS2 are selected as nanopore materials. The effect of the DNA nucleobases presented at the nanopores on different parameters of the proposed method, such as the absorbance spectra, electric field enhancement, band-gap energies and emission peak wavelengths of the GO and MoS2 nanopores, is studied. The effect of different GO and MoS2 nanopore sizes on the proposed method is investigated. The best condition for the proposed DNA sequencing application is obtained when the GO nanopore length is 2 nm and the central frequency and spectral width of the applied Gaussian function are 2.88 and 1.39 eV, respectively. Results show that the presence of each type of DNA nucleobase in the GO or MoS2 nanopore can shift the wavelength of the emitted light by between 1 and 130 nm. The large wavelength shifts due to the DNA presented to the nanopore lead to higher sensitivity and selectivity compared with ionic, tunnelling, plasmonic and Raman-based methods of DNA sequencing. The results show that the proposed concept can clearly determine the type of unknown DNA nucleobases. Our study demonstrates that the proposed method can be used effectively to sequence DNA molecules. The proposed mechanism and the results shed light on a new class of DNA sequencers for future personalized medicine.
Li, J., Yu, D. & Zhao, Q. Solid-state nanopore-based dna single molecule detection and sequencing. Microchimica Acta 183, 941–953 (2016).
Arjmandi-Tash, H., Belyaeva, L. A. & Schneider, G. F. Single molecule detection with graphene and other two-dimensional materials: nanopores and beyond. Chem. Soc. Rev. 45, 476–493 (2016).
Pud, S. et al. Self-aligned plasmonic nanopores by optically controlled dielectric breakdown. Nano letters 15, 7112–7117 (2015).
Nam, S. et al. Graphene nanopore with a self-integrated optical antenna. Nano letters 14, 5584–5589 (2014).
Fotouhi, B., Ahmadi, V., Abasifard, M. & Roohi, R. Interband π plasmon of graphene nanopores: A potential sensing mechanism for dna nucleotides. The J. Phys. Chem. C 120, 13693–13700 (2016).
Fotouhi, B., Ahmadi, V. & Faramarzi, V. Nano-plasmonic-based structures for dna sequencing. Opt. letters 41, 4229–4232 (2016).
Belkin, M., Chao, S.-H., Jonsson, M. P., Dekker, C. & Aksimentiev, A. Plasmonic nanopores for trapping, controlling displacement, and sequencing of dna. ACS nano 9, 10598–10611 (2015).
Shim, J. et al. Detection of methylation on dsdna using nanopores in a mos2 membrane. Nanoscale 9, 14836–14845 (2017).
Clapp, A. R. et al. Fluorescence resonance energy transfer between quantum dot donors and dye-labeled protein acceptors. J. Am. Chem. Soc. 126, 301–310 (2004).
Rogach, A. L., Klar, T. A., Lupton, J. M., Meijerink, A. & Feldmann, J. Energy transfer with semiconductor nanocrystals. J. Mater. Chem. 19, 1208–1221 (2009).
Kasha, M. Characterization of electronic transitions in complex molecules. Discuss. Faraday society 9, 14–19 (1950).
Bradley, S. J. et al. Heterogeneity in the fluorescence of graphene and graphene oxide quantum dots. Microchimica Acta 184, 871–878 (2017).
Zhu, S. et al. The photoluminescence mechanism in carbon dots (graphene quantum dots, carbon nanodots, and polymer dots): current state and future perspective. Nano Res. 8, 355–381 (2015).
Cushing, S. K., Li, M., Huang, F. & Wu, N. Origin of strong excitation wavelength dependent fluorescence of graphene oxide. ACS nano 8, 1002–1013 (2013).
Zhao, L. et al. The phosphorescence and excitation-wavelength dependent fluorescence kinetics of large-scale graphene oxide nanosheets. RSC Adv. 7, 22684–22691 (2017).
Štengl, V. & Henych, J. Strongly luminescent monolayered mos2 prepared by effective ultrasound exfoliation. Nanoscale 5, 3387–3394 (2013).
Wang, Y. & Ni, Y. Molybdenum disulfide quantum dots as a photoluminescence sensing platform for 2, 4, 6-trinitrophenol detection. Anal. chemistry 86, 7463–7470 (2014).
Wu, J.-Y., Zhang, X.-Y., Ma, X.-D., Qiu, Y.-P. & Zhang, T. High quantum-yield luminescent mos2 quantum dots with variable light emission created via direct ultrasonic exfoliation of mos2 nanosheets. Rsc Adv. 5, 95178–95182 (2015).
Gopalakrishnan, D. et al. Electrochemical synthesis of luminescent mos2 quantum dots. Chem. Commun. 51, 6293–6296 (2015).
Chacko, L., Jayaraj, M. & Aneesh, P. Excitation-wavelength dependent upconverting surfactant free mos2 nanoflakes grown by hydrothermal method. J. Lumin. 192, 6–10 (2017).
Gao, Y. & Neuhauser, D. Dynamical quantum-electrodynamics embedding: Combining time-dependent density functional theory and the near-field method. The J. Chem. Phys. 137, 074113 (2012).
Sathe, C., Zou, X., Leburton, J.-P. & Schulten, K. Computational investigation of dna detection using graphene nanopores. ACS nano 5, 8842–8851 (2011).
Tsolakidis, A. & Kaxiras, E. A tddft study of the optical response of dna bases, base pairs, and their tautomers in the gas phase. The J. Phys. Chem. A 109, 2373–2380 (2005).
Sakko, A., Rossi, T. P. & Nieminen, R. M. Dynamical coupling of plasmons and molecular excitations by hybrid quantum/classical calculations: time-domain approach. J. Physics: Condens. Matter 26, 315013 (2014).
Liang, H., Smith, C., Mills, C. & Silva, S. The band structure of graphene oxide examined using photoluminescence spectroscopy. J. Mater. Chem. C 3, 12484–12491 (2015).
Kumar, N., George, B. P. A., Abrahamse, H., Parashar, V. & Ngila, J. C. Sustainable one-step synthesis of hierarchical microspheres of pegylated mos2 nanosheets and moo3 nanorods: Their cytotoxicity towards lung and breast cancer cells. Appl. Surf. Sci. 396, 8–18 (2017).
Darby, B. L., Auguié, B., Meyer, M., Pantoja, A. E. & Le Ru, E. C. Modified optical absorption of molecules on metallic nanoparticles at sub-monolayer coverage. Nat. Photonics 10, 40 (2016).
Tauc, J. Optical properties and electronic structure of amorphous ge and si. Mater. Res. Bull. 3, 37–46 (1968).
Mathkar, A. et al. Controlled, stepwise reduction and band gap manipulation of graphene oxide. The journal physical chemistry letters 3, 986–991 (2012).
Arul, N. S. & Nithya, V. Molybdenum disulfide quantum dots: synthesis and applications. RSC Adv. 6, 65670–65682 (2016).
Yanagi, I., Akahori, R., Hatano, T. & Takeda, K.-i. Fabricating nanopores with diameters of sub-1 nm to 3 nm using multilevel pulse-voltage injection. Sci. reports 4, 5000 (2014).
Sen, Y.-H. & Karnik, R. Investigating the translocation of l-dna molecules through pdms nanopores. Anal. bioanalytical chemistry 394, 437–446 (2009).
Lim, M.-C., Lee, M.-H., Kim, K.-B., Jeon, T.-J. & Kim, Y.-R. A mask-free passivation process for low noise nanopore devices. J. nanoscience nanotechnology 15, 5971–5977 (2015).
Lee, J. et al. Stabilization of graphene nanopore. Proc. Natl. Acad. Sci. 201400767 (2014).
Coomar, A., Arntsen, C., Lopata, K. A., Pistinner, S. & Neuhauser, D. Near-field: A finite-difference time-dependent method for simulation of electrodynamics on small scales. The J. chemical physics 135, 084121 (2011).
Mortensen, J. J., Hansen, L. B. & Jacobsen, K. W. Real-space grid implementation of the projector augmented wave method. Phys. Rev. B 71, 035109 (2005).
Walter, M. et al. Time-dependent density-functional theory in the projector augmented-wave method. The J. chemical physics 128, 244101 (2008).
Schöche, S. et al. Optical properties of graphene oxide and reduced graphene oxide determined by spectroscopic ellipsometry. Appl. Surf. Sci. 421, 778–782 (2017).
Zhang, H. et al. Measuring the refractive index of highly crystalline monolayer mos2 with high confidence. Sci. reports 5, 8440 (2015).
Vayá, I., Gustavsson, T., Miannay, F.-A., Douki, T. & Markovitsi, D. Fluorescence of natural dna: from the femtosecond to the nanosecond time scales. J. Am. Chem. Soc. 132, 11834–11835 (2010).
Enkovaara, J. E. et al. Electronic structure calculations with gpaw: a real-space implementation of the projector augmented-wave method. J. Physics: Condens. Matter 22, 253202 (2010).
Bahn, S. R. & Jacobsen, K. W. An object-oriented scripting interface to a legacy electronic structure code. Comput. Sci. & Eng. 4, 56–66 (2002).
Hsu, L.-Y., Ding, W. & Schatz, G. C. Plasmon-coupled resonance energy transfer. The J. Phys. Chem. Lett. 8, 2357–2367 (2017).
V.F. thanks Tuomas Rossi for sharing valuable information on GPAW calculation methods. The computing facilities were provided by the NOPL laboratory at Tarbiat Modares University (TMU). The authors also acknowledge the Iran nanotechnology initiative council (INIC) for partial support of this project. The authors would like to acknowledge the financial support received from Tarbiat Modares University through Grant #IG-39703.
Faculty of Electrical and Computer Engineering, Tarbiat Modares University, P. O. Box 14115-194, Tehran, 1411713116, Iran
Vahid Faramarzi, Vahid Ahmadi, Bashir Fotouhi & Mostafa Abasifard
V.A., V.F. and B.F. performed the project and designed all computational analyses. V.F., B.F. and M.A. carried out DFT simulations. V.F., V.A. and B.F. analyzed and concluded the results. All authors wrote the manuscript. V.A. directed and supervised the study.
Correspondence to Vahid Ahmadi.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
https://doi.org/10.1038/s41598-019-41165-6
|
CommonCrawl
|
Math Calculator
Evaluate: (4 + 2/x) / (x/3 + 1/6)
Expression: $$\frac { 4 + \frac { 2 } { x } } { \frac { x } { 3 } + \frac { 1 } { 6 } }$$
To add or subtract expressions, expand them to make their denominators the same. Multiply $4$ times $\frac{x}{x}$.
$$\frac{\frac{4x}{x}+\frac{2}{x}}{\frac{x}{3}+\frac{1}{6}}$$
Since $\frac{4x}{x}$ and $\frac{2}{x}$ have the same denominator, add them by adding their numerators.
$$\frac{\frac{4x+2}{x}}{\frac{x}{3}+\frac{1}{6}}$$
To add or subtract expressions, expand them to make their denominators the same. Least common multiple of $3$ and $6$ is $6$. Multiply $\frac{x}{3}$ times $\frac{2}{2}$.
$$\frac{\frac{4x+2}{x}}{\frac{2x}{6}+\frac{1}{6}}$$
Since $\frac{2x}{6}$ and $\frac{1}{6}$ have the same denominator, add them by adding their numerators.
$$\frac{\frac{4x+2}{x}}{\frac{2x+1}{6}}$$
Divide $\frac{4x+2}{x}$ by $\frac{2x+1}{6}$ by multiplying $\frac{4x+2}{x}$ by the reciprocal of $\frac{2x+1}{6}$.
$$\frac{\left(4x+2\right)\times 6}{x\left(2x+1\right)}$$
Factor the expressions that are not already factored.
$$\frac{2\times 6\left(2x+1\right)}{x\left(2x+1\right)}$$
Cancel out $2x+1$ in both numerator and denominator.
$$\frac{2\times 6}{x}$$
Expand the expression.
$$\frac{12}{x}$$
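A quick cross-check of the simplification with sympy (assumed available); note the result holds for $x\neq 0$ and $x\neq -\frac{1}{2}$, where the original expression is defined.

```python
from sympy import symbols, simplify, Rational

x = symbols('x')
expr = (4 + 2/x) / (x/3 + Rational(1, 6))
print(simplify(expr))   # prints 12/x
```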
|
CommonCrawl
|
May 2021, 14(5): 1717-1746. doi: 10.3934/dcdss.2020451
Causal fermion systems and the ETH approach to quantum theory
Felix Finster 1,*, Jürg Fröhlich 2, Marco Oppio 1, and Claudio F. Paganini 1,4
Fakultät für Mathematik, Universität Regensburg, D-93040 Regensburg, Germany
Institute of Theoretical Physics, ETH Zurich, Switzerland
Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, D-14476 Potsdam, Germany
* Corresponding author: Felix Finster
Received: April 2020; Revised: August 2020; Early access: November 2020; Published: May 2021.
After reviewing the theory of "causal fermion systems" (CFS theory) and the "Events, Trees, and Histories Approach" to quantum theory (ETH approach), we compare some of the mathematical structures underlying these two general frameworks and discuss similarities and differences. For causal fermion systems, we introduce future algebras based on causal relations inherent to a causal fermion system. These algebras are analogous to the algebras previously introduced in the ETH approach. We then show that the spacetime points of a causal fermion system have properties similar to those of "events", as defined in the ETH approach. Our discussion is underpinned by a survey of results on causal fermion systems describing Minkowski space that show that an operator representing a spacetime point commutes with the algebra in its causal future, up to tiny corrections that depend on a regularization length.
Keywords: ETH approach to quantum theory, causal fermion systems, measurement problem, quantum field theory, operator algebras.
Mathematics Subject Classification: 83A05, 81T05, 81P15, 47N50, 81R15, 49S05.
Citation: Felix Finster, Jürg Fröhlich, Marco Oppio, Claudio F. Paganini. Causal fermion systems and the ETH approach to quantum theory. Discrete & Continuous Dynamical Systems - S, 2021, 14 (5) : 1717-1746. doi: 10.3934/dcdss.2020451
Figure 1. The spacetime restricted causal future $ I^\vee_\rho(x) $ of the causal fermion system and the future light cone $ \mathcal{I}^\vee(x) $ of Minkowski space
Figure 2. The approximate center and the loss of access to information
|
CommonCrawl
|
Two coils require 20 minutes and 60 minutes respectively to produce same amount of heat energy when connected separately to the same source. If they are connected in parallel arrangement to the same source; the time required to produce same amount of heat by the combination of coils, will be ___________ min.
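A hedged worked outline, assuming purely resistive coils on the same constant-voltage source, so that in the parallel arrangement the heating powers simply add:

$$\frac{1}{t_{\parallel}}=\frac{1}{t_1}+\frac{1}{t_2}\;\Rightarrow\; t_{\parallel}=\frac{t_1 t_2}{t_1+t_2}=\frac{20\times 60}{20+60}=15\ \text{min}$$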
As per the given figure, two plates A and B of thermal conductivity K and 2 K are joined together to form a compound plate. The thickness of plates are 4.0 cm and 2.5 cm respectively and the area of cross-section is 120 cm2 for each plate. The equivalent thermal conductivity of the compound plate is $$\left( {1 + {5 \over \alpha }} \right)$$ K, then the value of $$\alpha$$ will be ______________.
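A hedged worked outline, assuming one-dimensional heat flow through the two plates in series (thicknesses $$d_1 = 4.0$$ cm and $$d_2 = 2.5$$ cm):

$$K_{eq}=\frac{d_1+d_2}{\dfrac{d_1}{K}+\dfrac{d_2}{2K}}=\frac{6.5}{5.25/K}=\frac{26}{21}K=\left(1+\frac{5}{21}\right)K\;\Rightarrow\;\alpha=21$$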
300 cal. of heat is given to a heat engine and it rejects 225 cal. If source temperature is 227$$^\circ$$C, then the temperature of sink will be ______________ $$^\circ$$C.
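A hedged worked outline, assuming a reversible (Carnot-type) engine so that the ratio of rejected to absorbed heat equals the sink-to-source temperature ratio:

$$\frac{T_{sink}}{T_{source}}=\frac{Q_{rej}}{Q_{in}}=\frac{225}{300}\;\Rightarrow\;T_{sink}=0.75\times 500\ \text{K}=375\ \text{K}=102\,^\circ\text{C}$$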
The total internal energy of two mole monoatomic ideal gas at temperature T = 300 K will be _____________ J. (Given R = 8.31 J/mol.K)
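A hedged worked outline, using the standard result that a monoatomic ideal gas has internal energy $$U=\tfrac{3}{2}nRT$$:

$$U=\frac{3}{2}\times 2\times 8.31\times 300\ \text{J}=7479\ \text{J}$$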
|
CommonCrawl
|
2012, 2(1): 69-90. doi: 10.3934/naco.2012.2.69
Univariate geometric Lipschitz global optimization algorithms
Dmitri E. Kvasov 1 and Yaroslav D. Sergeyev 1
DEIS, University of Calabria, Via P. Bucci, Cubo 42C, 87036 Rende (CS), Italy
Received May 2011; Revised August 2011; Published March 2012
In this survey, univariate global optimization problems are considered where the objective function or its first derivative can be multiextremal black-box costly functions satisfying the Lipschitz condition over an interval. Such problems are frequently encountered in practice. A number of geometric methods based on constructing auxiliary functions with the usage of different estimates of the Lipschitz constants are described in the paper.
Keywords: geometric approach, black-box function, Lipschitz condition, global optimization.
Mathematics Subject Classification: Primary: 65K05, 90C26; Secondary: 90C5.
Citation: Dmitri E. Kvasov, Yaroslav D. Sergeyev. Univariate geometric Lipschitz global optimization algorithms. Numerical Algebra, Control & Optimization, 2012, 2 (1) : 69-90. doi: 10.3934/naco.2012.2.69
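As a concrete illustration of the geometric approach surveyed in this paper, the following is a minimal sketch (not taken from the paper itself) of a Piyavskii-Shubert-type method: it builds a saw-tooth lower bounding function from an assumed overestimate L of the Lipschitz constant and always samples where that minorant is lowest. The test function and the value L = 4.5 used in the example are illustrative assumptions.

import math

def piyavskii_minimize(f, a, b, L, max_evals=50, tol=1e-6):
    # Sampled points (x, f(x)), kept sorted by x.
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(max_evals - 2):
        incumbent = min(fx for _, fx in pts)
        best = None  # (candidate x, lower bound) with the smallest lower bound
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # Minimum of the two-line minorant max(f1 - L(x - x1), f2 - L(x2 - x)).
            x_star = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            lower = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if best is None or lower < best[1]:
                best = (x_star, lower)
        if incumbent - best[1] < tol:  # lower bound certifies near-optimality
            break
        pts.append((best[0], f(best[0])))
        pts.sort(key=lambda p: p[0])
    return min(pts, key=lambda p: p[1])

# Illustrative use on a classical multiextremal test function.
x_min, f_min = piyavskii_minimize(
    lambda x: math.sin(x) + math.sin(10.0 * x / 3.0), 2.7, 7.5, L=4.5)

With a valid overestimate of the Lipschitz constant the candidate point always falls inside its sub-interval; an underestimate would break that guarantee, which is one reason why much of the surveyed literature concerns how the constant is estimated.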
|
CommonCrawl
|
AI Communications - Volume 17, issue 2
AI Communications is a journal on Artificial Intelligence (AI) which has a close relationship to ECCAI (the European Coordinating Committee for Artificial Intelligence). It covers the whole AI community: scientific institutions as well as commercial and industrial companies.
AI Communications aims to enhance contacts and information exchange between AI researchers and developers, and to provide supranational information to those concerned with AI and advanced information processing. AI Communications publishes refereed articles concerning scientific and technical AI procedures, provided they are of sufficient interest to a large readership of both scientific and practical background. In addition it contains high-level background material, both at the technical level as well as the level of opinions, policies and news. The Editorial and Advisory Board is appointed by the Editor-in-Chief.
Query rewriting with symmetric constraints
Authors: Koch, Christoph
Abstract: We address the problem of answering queries using expressive symmetric inter‐schema constraints which allow to establish mappings between several heterogeneous information systems. This problem is of high relevance to data integration, as symmetric constraints are essential for dealing with true concept mismatch and are generalizations of the kinds of mappings supported by both local‐as‐view and global‐as‐view approaches that were previously studied in the literature. Moreover, the flexibility gained by using such constraints for data integration is essential for virtual enterprise and e‐commerce applications. We first discuss resolution‐based methods for computing maximally contained rewritings and characterize computability aspects. Then we propose an alternative but semantically equivalent perspective based on a generalization of results relating to the database‐theoretic problem of answering queries using views. This leads to a fast query rewriting algorithm based on AI techniques, which has been implemented and experimentally evaluated.
Keywords: Heterogeneous databases, data integration, query rewriting, symmetric constraints, global‐as‐view, local‐as‐view
Citation: AI Communications, vol. 17, no. 2, pp. 41-55, 2004
Web wrapper induction: a brief survey
Authors: Flesca, Sergio | Manco, Giuseppe | Masciari, Elio | Rende, Eugenio | Tagarelli, Andrea
Abstract: Nowadays several companies use the information available on the Web for a number of purposes. However, since most of this information is only available as HTML documents, several techniques that allow information from the Web to be automatically extracted have recently been defined. In this paper we review the main techniques and tools for extracting information available on the Web, devising a taxonomy of existing systems. In particular we emphasize the advantages and drawbacks of the techniques analyzed from a user point of view.
Keywords: Information extraction, wrapper generation
Parametric connectives in Disjunctive Logic Programming
Authors: Perri, Simona | Leone, Nicola
Abstract: Disjunctive Logic Programming (DLP) is an advanced formalism for Knowledge Representation and Reasoning (KRR). DLP is very expressive in a precise mathematical sense: it allows to express every property of finite structures that is decidable in the complexity class $\Sigma^P_2$ ($\mathrm{NP}^{\mathrm{NP}}$). Importantly, the DLP encodings are often simple and natural. In this paper, we single out some limitations of DLP for KRR, which cannot naturally express problems where the size of the disjunction is not known "a priori" (like N‐Coloring), but it is part of the input. To overcome these limitations, we further enhance the knowledge modelling abilities of DLP, by extending this language by Parametric Connectives (OR and AND). These connectives allow us to represent compactly the disjunction/conjunction of a set of atoms having a given property. We formally define the semantics of the new language, named DLP$^{\vee,\wedge}$, and we show the usefulness of the new constructs on relevant knowledge‐based problems. We address implementation issues and discuss related works.
A constraint solver for model‐based engineering
Authors: Mauss, Jakob | Seelisch, Frank | Tătar, Mugur
Abstract: Model‐based applications in engineering, such as configuration, diagnosis or interactive decision‐support systems, require embedded constraint solvers with challenging capabilities. They do not only demand classical services as consistency checking and solving, but also the computation of minimal conflicts and explanations. Moreover, modelling engineered systems makes often use of expressive constraint languages, which mix continuous and discrete variable domains, linear and non‐linear equations, inequalities, and even procedural constraints. A positive feature of typical engineered systems is, however, that their corresponding constraint models have a bounded and even relatively small density (induced width). We present here a relational framework for constraint solving $\mathbb{RCS}$ that has been specifically designed to address these requirements. $\mathbb{RCS}$ is based on problem decomposition and variable elimination, exploiting the low‐density property. To analyse a set of constraints $\mathbb{RCS}$ builds a so‐called aggregation tree by joining the input constraints and eliminating certain variables after every single join. The aggregation tree is used by a set of conceptually simple algorithms to incrementally check consistency, compute solutions, minimal conflicts and explanations. We also report experimental results obtained with a prototype implementation of this framework.
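The join-and-eliminate idea sketched in this abstract can be illustrated, in a very reduced form, by generic bucket elimination over finite-domain relational constraints. The representation below (a constraint as a scope tuple plus a set of allowed rows) and the function names are assumptions made for illustration; this is not the $\mathbb{RCS}$ implementation itself, which interleaves elimination with every single join along an aggregation tree.

# A constraint is a pair (scope, rows): scope is a tuple of variable names,
# rows is a set of value tuples satisfying the constraint.

def join(c1, c2):
    # Natural join of two constraints (relations).
    (s1, r1), (s2, r2) = c1, c2
    shared = [v for v in s1 if v in s2]
    scope = tuple(s1) + tuple(v for v in s2 if v not in s1)
    rows = set()
    for t1 in r1:
        for t2 in r2:
            a1, a2 = dict(zip(s1, t1)), dict(zip(s2, t2))
            if all(a1[v] == a2[v] for v in shared):
                merged = {**a1, **a2}
                rows.add(tuple(merged[v] for v in scope))
    return scope, rows

def eliminate(c, var):
    # Project a variable out of a constraint.
    scope, rows = c
    keep = tuple(v for v in scope if v != var)
    projected = {tuple(dict(zip(scope, t))[v] for v in keep) for t in rows}
    return keep, projected

def consistent(constraints, order):
    # Bucket elimination: for each variable, join the constraints that
    # mention it, then project the variable out of the joined relation.
    cs = list(constraints)
    for var in order:
        bucket = [c for c in cs if var in c[0]]
        rest = [c for c in cs if var not in c[0]]
        if not bucket:
            continue
        joined = bucket[0]
        for c in bucket[1:]:
            joined = join(joined, c)
        if not joined[1]:          # empty relation: no consistent assignment
            return False
        cs = rest + [eliminate(joined, var)]
    return all(rows for _, rows in cs)

# Example: binary "not equal" constraints over the domain {0, 1}.
neq = {(0, 1), (1, 0)}
c_xy, c_yz, c_xz = (("x", "y"), neq), (("y", "z"), neq), (("x", "z"), neq)
print(consistent([c_xy, c_yz, c_xz], ["x", "y", "z"]))  # False: 2-value "triangle"
print(consistent([c_xy, c_yz], ["x", "y", "z"]))        # True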
Model checking for the concurrent constraint paradigm
Authors: Villanueva, Alicia
Article Type: Miscellaneous
Abstract: This paper abstracts the contents of the PhD dissertation which has been recently defended by the author. Although model checking was defined to automatically verify hardware, in the last decades it has been showed that it is possible to apply the technique also to software. The concurrent constraint paradigm is a simple but powerful computational model which we can use to specify reactive and hybrid systems. The thesis considers three of the timed languages of this paradigm. It presents two methods to apply the model checking technique to two different timed concurrent constraint languages, and it is also defined a denotational semantics which is fully abstract w.r.t. the operational behavior of another timed concurrent constraint language. This new semantics allows one to perform useful static analysis of programs.
Keywords: Timed concurrent constraint languages, model checking, denotational semantics
Multi‐paradigm learning of declarative models
Authors: Ferri, Cèsar
Abstract: This paper abstracts the contents of the PhD dissertation which has been recently defended by the author. Machine learning is the area of computer science that is concerned with the question of how to construct computer programs that automatically improve with experience. Recently, there have been important advances in the theoretical foundations of this field. At the same time, many successful applications have been developed: systems for extracting information from databases (data mining), applications to support decisions in medicine, telephone fraud and network intrusion detection, prediction of natural disasters, email filtering, document classification, and many others. This thesis introduces novel supervised learning methods that produce accurate and comprehensible models from past experiences which minimise the costs of generation and the costs of application.
Keywords: Machine learning, decision trees, inductive functional logic programming, cost‐sensitive learning
The 16th Belgian–Dutch Conference on Artificial Intelligence (BNAIC'04)
Citation: AI Communications, vol. 17, no. 2, pp. 99-100, 2004
Citation: AI Communications, vol. 17, no. 2, pp. 101-101, 2004
|
CommonCrawl
|
Tensile Properties of Remolded Loess and Undisturbed Loess
Xianglin Peng* | Chao Sun | Yanbo Cao
Chang'an University, Xi'an 710064, China
Shaanxi Nuclear Industry Engineering Survey Institute Co. Ltd, Xi'an 710000, China
[email protected]
In the light of previous studies, this paper improves the traditional horizontal axial soil tensile tester, eliminating its main defects. The improved tester was then applied to measure the tensile strength of remolded and undisturbed loess, and the tensile properties of the two types of loess were summarized from the test results. For undisturbed loess, the tensile strength increased with dry density when the water content stayed the same, and decreased with water content when dry density remained the same. For remolded loess, the tensile strength first increased to a peak value and then gradually declined as the water content grew. In general, the tensile strength curve of undisturbed loess declined monotonically with water content, while that of remolded loess followed a rise-then-fall (wavy) pattern. The difference arises from the loss of bonding force and the weakening of soil structure in the remolding process.
remolded loess, undisturbed loess, water content, tensile strength
Loess is widely distributed across China, especially in the north and northwest, taking up about 6 % of the country's land area [1]. In fact, China has larger loess-covered land than any other country in the world. With the infrastructure boom, loess excavation is inevitable in many construction projects. This calls for in-depth analysis on loess, a soil with unique mechanical properties, in the context of engineering construction [2].
Concerning the mechanical properties of loess, most existing studies focus on structuredness, collapsibility and the factors influencing compressive shear strength [3-6], while neglecting tensile strength. However, the tensile properties of loess are critical in many projects and are related to various engineering disasters, such as slope slippage, impervious layer cracking, hydraulic fracturing and uneven foundation settlement [7]. If the laws governing the tensile properties of loess are determined, it will be possible to improve soil strength theory and guide engineering practice.
So far, the tensile properties of loess have mainly been studied through experiments, with special attention to the mechanical mechanism and influencing factors [8-10]. Taking water content as the influencing factor, Wang Yanhui et al. [11] examined the tensile properties of undisturbed loess through various types of tests, revealing that the tensile strength is negatively correlated with water content in every test. Through an axial fracturing test, Li Chunqing et al. [12] investigated the tensile properties of undisturbed loess, conducted 3D modelling of the test data, and then detailed the relationship between fracture changes, displacement, stress and strain of the specimens. Dang Jinqian et al. [13] attributed the tensile strength of undisturbed loess to the water film of the soil, particle cohesion and matrix suction.
Overall, the existing studies have gained insights into the tensile properties of loess, yet they have not compared the tensile strength of undisturbed loess with that of remolded loess. To make up for this gap, this paper improves the horizontal axial soil tensile tester, and uses the improved tester to test the tensile properties of remolded and undisturbed loess of different dry densities and water contents. Based on the test data, the author analyzed, summarized and explained the difference between the two types of loess in tensile properties.
2.1 Tester improvement
To overcome its defects, the traditional horizontal axial soil tensile tester was improved in four steps:
Step 1: To counteract the static friction at the bottom, height-adjustable scale bolts were added around the tester to adjust the inclination of the plate [14].
Step 2: The scale and axis were marked, and a manual adjuster was installed to keep the tensile force on the axis, aiming to minimize the error.
Step 3: The glass plate was coated in advance with a lubricant (e.g. Vaseline) to reduce the friction on the sliding specimen.
Step 4: Different instruments could be placed according to the test requirements, making the tester more universal.
The top and side views of the improved tester are provided in Figure 1 below.
Figure 1. The improved tester
2.2 Specimen preparation
All loess samples were collected at once, from a slope in Luochuan Loess National Geopark, Yan'an, northwestern China's Shaanxi Province. The undisturbed loess was greyish yellow, containing a few grassroots, wormholes and other impurities. It was measured that the loess samples have a plastic limit of 13.7 % and a liquid limit of 28.5 %. According to the empirical formulas in the Code for Soil Test of Railway Engineering (TB10102-2010), the optimal water content and maximum dry density of the loess samples were derived as 15.3 % and 1.78 g/cm3, respectively.
(1) Undisturbed loess
With proper clamps, the undisturbed loess was shaped into eighteen 15 cm × 5 cm × 5 cm rectangular blocks in the lab. Next, the dry densities of the specimens were measured; the values basically fell in the range 1.20 g/cm3-1.60 g/cm3. In the light of dry density, the specimens were evenly divided into three groups, whose mean dry densities were 1.20 g/cm3, 1.44 g/cm3 and 1.56 g/cm3, respectively [15]. After that, the specimens in each group were injected with different amounts of water and subjected to moisturizing curing for 12 d or more, to ensure uniform diffusion of water inside each specimen and reduce the test error. After curing, the mean water contents of the six specimens in each group were 12 %, 14 %, 17 %, 18 %, 20 % and 23 %, respectively.
(2) Remolded loess
Some of the collected loess samples were remolded and prepared into specimens with different water contents (10 %, 12 %, 14 %, 16 %, 18 %, 20 %, 22 % and 24 %) and dry densities (1.35 g/cm3, 1.45 g/cm3, 1.55 g/cm3, 1.65 g/cm3 and 1.75 g/cm3), and then cured for 2-3 days [16].
2.3 Test procedure
Step 1: A specimen was placed on the glass plate, after the plate surface was treated with a lubricant (e.g. Vaseline). Then, the bolts at the four corners were adjusted, keeping the front and rear horizontal. Next, the plate inclination was changed until the specimen was about to slide, thus offsetting the friction. After that, the scale of the plate was adjusted such that the specimen axis, the plate axis and the pulley axis coincided with each other. The scale was then fixed to keep the tensile force on the specimen axis.
Step 2: In the course of loading, the weights were added in descending order of mass. After adding a weight, the specimen was left to stabilize for 1min before adding the next weight.
Step 3: During the tensile loading, the tensile change of the specimen was observed continuously. The test was terminated once the specimen was broken. Then, the weight data were recorded, and the instruments were sorted and cleaned.
Step 4: The tensile strength of the specimen can be calculated by:
$\sigma_{t}=\frac{T}{A_{0}} \times 10$ (1)
The above formula can be rewritten as:
$\sigma_{t}=\frac{(m+n / 2) g}{A_{0}} \times 10$ (2)
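A minimal computational sketch of Eq. (2). The meanings of m and n are assumptions made for illustration (the total mass of the weights added before the final increment, and the mass of the final weight, respectively), since the symbols are not defined in the extracted text; the cross-sectional area follows from the 5 cm × 5 cm specimen face, and the factor 10 converts N/cm2 to kPa.

G = 9.81  # gravitational acceleration, m/s^2

def tensile_strength_kpa(m_kg, n_kg, area_cm2=25.0):
    # T = (m + n/2) g, with the last weight counted at half its mass because
    # failure occurs at some point during that final increment (assumption).
    force_n = (m_kg + n_kg / 2.0) * G
    return force_n / area_cm2 * 10.0  # N/cm^2 -> kPa (1 N/cm^2 = 10 kPa)

# Example: 0.8 kg of weights already on the hanger, failure while adding 0.1 kg.
sigma_t = tensile_strength_kpa(0.8, 0.1)  # about 3.3 kPa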
3. Test results
3.1 Test results on undisturbed loess
Table 1 lists the test results on undisturbed loess specimens.
Table 1. Test results on undisturbed loess specimens
Columns: Group No., Specimen No., Dry density (g/cm3), Water content (%), Tensile strength (kPa).
Figure 2. Relationship between tensile strength and water content of undisturbed loess
The relationship between tensile strength and water content of undisturbed loess specimens is presented in Figure 2. The measured tensile strength decreased with the growth in water content, because the water film, particle cohesion and matrix suction are all negatively correlated with water content. In addition, the curves in Figure 2 were relatively smooth, and their slopes decreased with the growth in water content, indicating that the tensile strength of undisturbed loess changed more and more slowly as the water content rose. When the water content remained basically the same, the greater the dry density of a loess sample, the stronger its tensile strength: for undisturbed loess, the tensile strength climbed as the dry density increased between 1.20 g/cm3 and 1.56 g/cm3.
3.2 Test results on remolded loess
Table 2 lists the test results on remolded loess specimens.
Table 2. Test results on remolded loess specimens
Figures 3 and 4 respectively display the correlation of tensile strength with water content and dry density of remolded loess specimens.
As shown in Figure 3, with the growth in water content, the tensile strength curve of each specimen first climbed to a peak and then went downhill. In other words, as the water content increased in the range of 10 %-24 %, the tensile strength first rose to a maximum and then gradually dropped at a decreasing rate. Moreover, the tensile strength showed a gradual upward trend with increasing dry density; the two parameters are basically positively correlated. Whichever the dry density, the tensile strength always peaked at a water content of 14 %, indicating that 14 % is the optimal water content. This value is about 1 percentage point lower than the optimal water content estimated earlier from the empirical formulas (15.3 %); the difference may be the result of the curing condition [17].
As shown in Figure 4, at any given water content in the range of 10 %-24 %, the tensile strength of remolded loess increased at an increasing rate with dry density. At a given dry density, the tensile strength of remolded loess peaked at a water content of 14 %, confirming that the optimal water content of remolded loess is also 14 %. The tensile strength-water content curve of remolded loess was roughly bell-shaped: the tensile strength continued to grow while the water content was below 14 %, but started to decline once the water content exceeded 14 %. For remolded loess, the peak tensile strength and the slope of the tensile strength-dry density curve both changed continuously as the water content varied between 10 % and 14 %.
Figure 3. Relationship between tensile strength and water content of remolded loess
Figure 4. Relationship between tensile strength and dry density of remolded loess
4. Comparative analysis
The variation in tensile strength of undisturbed and remolded loess was plotted at two different dry densities. It can be seen from Figure 5 that the tensile strength of undisturbed loess declined continuously, without any rebound, as the water content rose. Meanwhile, the tensile strength of remolded loess followed a wavy pattern: it first increased to a peak and then declined.
Figure 5. Tensile strength curves at two different dry densities
The comparison shows that both undisturbed and remolded loess have a certain tensile strength, which depends on water content and dry density. When the water content remains the same, the tensile strength of either type of loess increases with dry density. However, when the dry density remains the same, the two types of loess exhibit different trends in tensile strength. This phenomenon can be explained as follows:
Under natural conditions (e.g. rain, solar radiation, high temperature and severe cold), all the cementing materials inside undisturbed loess have already reacted with soil particles, leaving no chance to bond with water. Thus, the tensile strength of undisturbed loess does not increase with water content. By contrast, the cementing materials within remolded loess have not fully reacted with water. Thus, the tensile strength of remolded loess increases with water content, and then decreases continuously until breakage after the water content surpasses the critical level.
Furthermore, the remolded loess specimens had lower tensile strength than undisturbed loess specimens. This is because the soil particles are rearranged in remolding, which damages the original cementation state between them. As a result, the tensile strength will grow with the increase in water content, but the specimens cannot reach the optimal stability. The analysis shows that the tensile strength of loess is significantly affected by disturbance [18].
5. Conclusions
This paper improves the horizontal axial soil tensile tester, and uses the improved tester to test the tensile properties of remolded and undisturbed loess with different dry densities and water contents. Based on the test data, the authors analyzed and explained the difference between the two types of loess in tensile properties. The main conclusions are as follows:
(1) To make the traditional tester more scientific and rigorous, height-adjustable scale bolts were added to adjust the inclination of the plate and counteract the static friction; the scale and axis were marked, and a manual adjuster was installed to keep the tensile force on the axis, aiming to minimize the error.
(2) The tensile strength of remolded loess specimens first increased to a peak value and then gradually declined with the growth in water content; the tensile strength curves generally took a wavy shape. As for undisturbed loess specimens, the tensile strength increased with dry density when the water content stayed the same, and decreased with water content when the dry density remained the same.
(3) The most significant difference between undisturbed and remolded loess in tensile properties lies in the shape of the tensile strength-water content curve at a constant dry density: the tensile strength of undisturbed loess declines continuously, while that of remolded loess exhibits a wavy shape (first increasing to a peak and then gradually decreasing). The difference is attributable to the incomplete reaction between cementing materials and water in the remolding process, as the particle cohesion and matrix suction are not optimized.
The authors sincerely acknowledge the financial support from the State Key Program of National Science of China (Grant No. 41630634), the Science and Technology Planning Project of Shaanxi (Grant No. 2019SF-233) and the Science and Technology Bureau of Yulin (Grant No. 2014cxy-04).
[1] Li, C.Q. (2015). Study on tensile strength characteristics of loess. Lanzhou Jiaotong University.
[2] Wu, X.Y., Liang, Q.G., Li, C.Q., Wang, L.L., Sun, W.Y. (2014). Study on tensile properties of remolded loess in Jiuzhou development district, Lanzhou, China. China Earthquake Engineering Journal, 36(3): 562-568. https://doi.org/10.3969/j.issn.1000-0844.2014.03.0562
[3] Wang, L.M., Yuan, Z.X., Wang, G.L. (2013). Study on method for preliminary and detailed evaluation on liquefaction of loess sites. China Earthquake Engineering Journal, 35(1): 1-8. https://doi.org/10.3969/j.issn.1000-0844.2013.01.0001
[4] Wang, J.E., Xiang, W., Bi, R.N. (2011). Experimental study of influence of matric suction on disintegration of unsaturated remolded loess. Rock and Soil Mechanics, 32(11): 3258-3262. https://doi.org/10.3969/j.issn.1000-7598.2011.11.010
[5] Jing, Y.L., Wu, Y.Q., Lin, D.J., Hu, Z.P., Li, X.G., Zhang, Z.Q. (2011). Study of relationship between loess collapsibility and index of compaction test. Rock and Soil Mechanics, 32(2): 393-397. https://doi.org/10.3969/j.issn.1000-7598.2011.02.012
[6] Luo, Y.S., Hu, Z.S., Zhang, A.J. (2009). Regularity of relation between structural parameter and strength indexes of unsaturated loess. Rock and Soil Mechanics, 30(4): 943-948. https://doi.org/10.3969/j.issn.1000-7598.2009.04.014
[7] He, S.X. (2016). Experimental research on tensile characteristics of loess. Shijiazhuang Tiedao University.
[8] Kim, T.H., Hwang, C. (2003). Modeling of tensile strength on moist granular earth material at low water content. Engineering Geology, 69(3/4): 233-244. https://doi.org/10.1016/S0013-7952(02)00284-3
[9] Tamrakar, S.B., Mitachi, T., Toyosawa, Y. (2007). Measurement of soil tensile strength and factors affecting its measurements. Soils and Foundations, 47(5): 911-918. https://doi.org/10.1007/3-540-69873-6_20
[10] Lu, N., Wu, B., Tan, C.P. (2007). Tensile strength characteristics of unsaturated sands. Journal of Geotechnical and Geoenvironmental Engineering, 133(2): 144-154. https://doi.org/10.1061/(asce)1090-0241(2007)133:2(144)
[11] Wang, Y.H., Ni, W.K., Yuan, Z.H. (2015). Study on the test method for tensile strength of undisturbed loess. Science Technology and Engineering, 15(07): 234-237+247. https://doi.org/10.3969/j.issn.1671-1815.2015.07.045
[12] Li, C.Q., Liang, Q.G., Wu, X.Y., Wang, L.L., Xu, S.C. (2014). Study on the test of tensile strength of remolded loess. China Earthquake Engineering Journal, 36(02): 233-238+248. https://doi.org/10.3969/j.issn.1000-0844.2014.02.0233
[13] Dang, X.Q., Hao, Y.Q., Li, J. (2001). Study on tensile strength of unsaturated loess. Journal of Hohai University (Natural Sciences), 29(6): 106-108. https://doi.org/10.3321/j.issn:1000-1980.2001.06.025
[14] Sun, C. (2017). Study on the test of tensile strength of loess in the south of Jinghe. Chang'an University.
[15] Yuan, Z.H., Ni, W.K., Tang, C., Huang, C., Wang, Y.H. (2017). Experimental study on tensile strength of loess under dry and wet cycling. Chinese Journal of Rock Mechanics and Engineering, 36(S1): 3670-3677. https://doi.org/10.13722/j.cnki.jrme.2016.0230
[16] Sun, W.Y., Liang, Q.G., Yan, S.H., Ou, E.F., Shao, S.L. (2015). Experimental study on tensile strength of Q2 undisturbed loess in Yan'an Shanxi China. Journal of Geomechanics, 21(03): 386-392.
[17] Sun, W.Y., Liang, Q.G., Ou, E.F., Yan, S.H., Zhang, Y.W. (2015). Comparative experimental study on tensile strength of undisturbed and remolded Q2 Loess from Yan'an Shanxi China. China Civil Engineering Journal, 48(S2): 53-58.
[18] Sun, M.X., Dang, J.Q., Kang, S.X. (2006). Tensile character of disturbed loess. Journal of Xi'an University of Arts and Science (Natural Science Edition), 9(3): 59-61. https://doi.org/10.3969/j.issn.1008-5564.2006.03.015
Meta-analysis on PET plastic as concrete aggregate using response surface methodology and regression analysis
Beng Wei Chong & Xijun Shi
This paper aims to thoroughly analyze the effect of polyethylene terephthalate (PET) plastic aggregate on concrete compressive strength using a meta-analysis. Forty-three data sets for concrete containing PET coarse aggregate and 60 data sets for concrete containing PET fine aggregate were collected. The input variables used were the percentage and nominal maximum size of PET aggregate along with the concrete mix proportions. Main effect plots, contour plots, and surface plots of the expressions were presented to demonstrate the effect of PET aggregate on the 28-day compressive strength of concrete. The statistical parameters of the regression equations, such as R2, adjusted R2 and root-mean-square error (RMSE), indicated that the RSM approach is a powerful tool to describe the change of concrete compressive strength by PET aggregate addition. In addition, the study showed that using PET plastic as a fine aggregate replacement performed better than using it as a coarse aggregate replacement in concrete. At up to 30% replacement, concrete containing PET plastic as a fine aggregate can have satisfactory compressive strength.
Plastic pollution is one of the leading challenges plaguing all countries across the globe. The usage of plastic was popularised around 1950, and the annual production of plastic has since increased by about 200 times to 380 million tonnes in 2015 [1]. Despite the massive increase in production and consumption, the management of waste plastic has failed to keep up, with most plastic being disposed of inefficiently [2]. Generally, developed countries have generated more plastic waste due to higher production capability and spending power, but plastic pollution has caused greater harm to developing countries, as those countries are less equipped technologically to manage the waste [3]. As a result, most of these countries have resorted to landfill when handling plastic waste, which leads to a multitude of problems. In landfills, plastic leaches into soil and water sources, causing further pollution of the environment. An estimated 8 million tonnes of plastic end up in the ocean every year due to leakage [4], causing the disruption and degradation of the marine ecosystem. Consequently, microplastic in the ocean enters the human body through our diet. This causes health concerns, as plastic particles are known to carry heavy metals [5] and toxins that come from the manufacturing process or are absorbed from the environment.
Polyethylene terephthalate (PET) plastic is a type of thermoplastic that is most extensively used in the food and beverage industry. It is also used as the single-use plastic for bottled water that is quickly discarded [6]. Compared to other types of plastic, thermoplastics such as PET are highly recyclable. Well-established processes such as multiple forms of chemical hydrolysis, mechanical recycling, and melt processing provide effective means of recycling PET plastic [7, 8]. Yet, the recycling rate of PET plastic is relatively low. According to the PET Resin Association (PETRA) [9], the United States recycles only about 31% of PET waste, while Europe recycles about half of the amount generated. Apart from recycling PET plastic through arduous industrial processes, new innovations on the reuse of waste plastic have also been attempted to support the effort to combat plastic pollution.
Waste PET plastic shows great potential for reuse in the field of construction. Thus far, research on the application of PET plastic in various construction materials has been undertaken. Waste PET plastic has been utilized to produce mortar [6, 10], bricks and masonry [6, 11], and concrete [12,13,14]. The introduction of waste plastic as a constituent of cement-based construction materials has the two-fold advantage of not only providing an outlet to reuse or dispose of plastic but also reducing the consumption of non-renewable raw materials such as rocks and sands. Experts have warned that rapid urbanization and the expansion of a massive construction industry will cause a global shortage of sand in the near future [15]. Even today, extensive excavation of sand has been linked to the destruction of the environment and the exacerbation of climate change [16]. This is compounded by the fact that the construction industry is the leading emitter of greenhouse gases, as well as the primary consumer of sand and gravel, accounting for about 40% of global usage of stone, sand and gravel [17].
The incorporation of PET plastic into concrete production is a step towards a more sustainable construction. Moreover, the application has the potential to alleviate the brittleness of concrete and enhance the durability of concrete in certain aspects. Saxena et al. [18] studied concrete with up to 20% fine aggregate replaced by PET plastic and concluded that the impact resistance and energy absorption capacity of concrete increased with proportion of replacement. Likewise, Abu-Saleem et al. [19] experimented on concrete with various types of waste plastic and noted that plastic as a coarse aggregate replacement increased the impact resistance of concrete. The specimen with an optimum plastic replacement level of 30% achieved 4.5 times greater impact resistance than the control concrete. In addition, PET plastic concrete has superior abrasion resistance [20], better heat insulation [21], and is more ductile under flexural load [22].
Despite the potential benefits in improving concrete durability, replacing aggregate with PET plastic in concrete comes with a major challenge: reduced mechanical strength. For example, in a study by Bamigboye et al. [23], the 28-day compressive strength of water-cured concrete with 10% PET aggregate was 38% lower than the control concrete. At the 30% PET coarse aggregate replacement level, the strength loss increased to 68%. In another experiment by Islam et al. [24], concrete with 20% PET coarse aggregate lost about 20% to 25% strength depending on the mix design. To further complicate the matter, different studies employed replacement by weight [18, 25] or by volume [23, 24] when producing PET aggregate concrete. The application of PET aggregate has also involved the replacement of coarse aggregate [23, 24], fine aggregate [13, 26], or both, in different studies.
Design of experiment (DoE) techniques such as response surface methodology (RSM) are gaining popularity for studying the properties of concrete with waste materials. For instance, RSM has been applied to aid the data analysis of a complicated mix design consisting of recycled aggregate, silica fume, and ground-granulated blast-furnace slag [27]. By plotting the contour plot with the replacement materials as primary factors, the trend of strength variation was neatly demonstrated. In another experiment which used a combination of a few admixtures [28], RSM was similarly utilized for the same purpose. Even for an experiment with only one replacement material, Senthil Kumar and Baskar [29] studied the influence of e-waste on 28-day concrete compressive strength by plotting the replacement percentage and w/c ratio together in surface and contour plots. Similar endeavors involving eggshell powder have also been conducted using output from both single [30] and multiple [31] experimental data sets.
Although the statistical analysis in this study was performed based on published works that might contain some unavoidable bias in data collection and presentation, the effect of PET aggregate on compressive strength was thoroughly analyzed, and some general trends were found. The findings from this study could be useful for future studies in this field [29]. Moreover, this study serves as an example of how RSM can facilitate the understanding of material behavior across a broader range of parameters, as opposed to methods that are limited to the conditions of a single experiment set. At present, state-of-the-art reviews of PET plastic waste in concrete are available [32, 33]. While these reviews provide a broad overview of the waste material, an in-depth numerical synthesis and analysis of the influence of PET plastic on the mechanical strength of concrete is still scarce.
In this study, RSM was conducted to formulate the influence of PET plastic aggregate on the 28-day compressive strength of concrete. Two mathematical expressions, one for PET plastic as a coarse aggregate replacement and one for PET plastic as a fine aggregate replacement, were analyzed based on data from the gathered literature. The performance of both expressions was evaluated through statistical parameters such as the determination coefficient (R2), adjusted coefficient (R2 adj) and root-mean-square error (RMSE). Finally, the compressive strength indices of both sets of data were presented on a single plot for regression analysis, and the influence of PET plastic aggregate on concrete compressive strength was thoroughly discussed.
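Because R2, adjusted R2 and RMSE are used repeatedly below to judge the fitted expressions, a minimal Python sketch of how the three metrics are computed is given here. The observed and predicted strengths and the number of predictor terms k are placeholders, not values from this study.

```python
import numpy as np

def fit_metrics(y_obs, y_pred, k):
    """Return R^2, adjusted R^2 and RMSE for a fit with k predictor terms."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    n = y_obs.size
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    rmse = np.sqrt(ss_res / n)
    return r2, r2_adj, rmse

# Placeholder observed vs predicted 28-day strengths (MPa)
print(fit_metrics([35, 31, 27, 40, 34], [34.2, 31.5, 27.9, 39.1, 34.6], k=3))
```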
PET plastic aggregate
PET is a thermoplastic that is strong, durable and lightweight. It is the most commonly consumed plastic and can be found in household waste such as bottled drinks [34]. The specific gravity of PET plastic aggregate generally fell within the range of 1.25 to 1.50. Two studies [12, 35] reported the density of PET plastic aggregate as 1225 kg/m3 and 1340 kg/m3. PET plastic has a low water absorption, typically below 0.70%. The fineness modulus of PET plastic used as a coarse aggregate replacement is about 6.70. Meanwhile, two other studies on PET plastic as a fine aggregate replacement reported fineness moduli of 3.20 and 3.51, respectively. In addition, the melting point of PET plastic is reported to be higher than 250 °C by Sai Gopi et al. [26], which corroborates the generally recognized melting point of PET in other literature [36, 37]. Furthermore, a study by Islam et al. [24] that melted waste plastic to produce plastic flakes specified the usage of temperatures between 280 °C and 320 °C in the melting process.
Meta-analysis and data curation
A literature search of studies involving PET plastic as a coarse aggregate or a fine aggregate replacement in the production of concrete was conducted. The collected literature spans approximately a decade, with the earliest study from 2012 and the latest from 2022. All papers were studied to extract the relevant information such as the physical properties of PET aggregate, replacement proportion, concrete mix design, and compressive strength. While most of the literature reported a decrease in strength with PET aggregate, the pattern of strength loss varied among different studies. In certain studies, the strength loss was minimal, while in other studies, a significant strength reduction was observed even at lower percentages of replacement. Hence, a meta-analysis was conducted to assess the pattern of strength loss, as well as to identify the conditions in which the usage of PET aggregate was most effective. A meta-analysis is defined as a mathematical or statistical study which combines the outcomes of multiple independent studies in an effort to form a new, unified conclusion on the topic concerned. By gathering a larger amount of data from multiple accounts, the variables which caused the inconsistencies could be isolated and studied. Subsequently, mathematical expressions that could show the trend of change in concrete strength by the PET aggregate addition were developed.
Studies with enough detail on both the concrete mix design and the 28-day compressive strength were selected, while those with incomplete data or that included more than one waste material were excluded. For the concrete mix design, values presented in ratio format were all converted to a unified presentation in kilograms per cubic meter (kg/m3). For the compressive strength, data presented as exact numbers were simply extracted. For data presented in figures with incomplete labels, PlotDigitizer software was used to obtain the values through interpolation. A total of 42 data points from six studies were gathered for studies of PET plastic as a coarse aggregate replacement by volume. Meanwhile, 60 data points from seven studies of PET plastic as a fine aggregate replacement by volume were collected. The data are presented in Tables 1 and 2, respectively.
Table 1 Literature for PET-CA expression
Table 2 Literature for PET-FA expression
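As an aside on the ratio-to-kg/m3 conversion mentioned above, the sketch below shows one common way of doing it (the absolute-volume method). The particle densities are typical assumed values, the routine is not necessarily the exact conversion applied to every collected study, and entrapped air is ignored.

```python
RHO = {"cement": 3150.0, "fa": 2650.0, "ca": 2700.0, "water": 1000.0}  # assumed particle densities, kg/m^3

def mix_to_kg_per_m3(fa_ratio, ca_ratio, wc):
    """Convert a cement : FA : CA mass ratio (cement = 1) plus w/c into kg per m^3 of concrete."""
    vol_per_kg_cement = (1 / RHO["cement"] + fa_ratio / RHO["fa"]
                         + ca_ratio / RHO["ca"] + wc / RHO["water"])
    cement = 1.0 / vol_per_kg_cement
    return {"cement": cement, "fa": cement * fa_ratio,
            "ca": cement * ca_ratio, "water": cement * wc}

# Example: a 1 : 1.5 : 3 mix with w/c = 0.5 -> roughly 401 / 601 / 1203 / 200 kg/m^3
print({k: round(v) for k, v in mix_to_kg_per_m3(1.5, 3.0, 0.5).items()})
```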
For the PET coarse aggregate (PET-CA) expression, the independent variables were the percentage of PET replacement, the nominal maximum size of PET particles in millimeters, cement content, fine aggregate (FA) content, coarse aggregate (CA) content, and water-cement ratio (W/C). For the size of the PET particles, the maximum size specified in the paper was taken. If the size of the PET particles was presented as gradation data, then, according to the U.S. Department of Transportation, Federal Highway Administration (FHWA) [48], the nominal maximum size is defined as the smallest sieve size through which most of the aggregate passes and on which less than 15% of the aggregate is retained. Hence, the sieve closest to the 85th percentile of the gradation curve was taken as the nominal maximum size. For the PET fine aggregate (PET-FA), however, the size of the plastic was not a variable, as most studies used plastic passing the same 4.75 mm sieve as sand. Hence, only the PET replacement and the content of each constituent of concrete were chosen as independent variables. The outputs of the studies were the 28-day compressive strengths.
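The nominal-maximum-size rule described above can be sketched as a small helper; the sieve sizes and passing percentages in the example are illustrative only, not gradation data from any of the collected studies.

```python
def nominal_maximum_size(gradation):
    """gradation maps sieve opening [mm] -> cumulative % passing.
    Returns the smallest sieve on which less than 15 % is retained (>= 85 % passing)."""
    return min(size for size, passing in gradation.items() if passing >= 85.0)

example = {19.0: 100.0, 12.5: 96.0, 9.5: 88.0, 4.75: 40.0, 2.36: 5.0}
print(nominal_maximum_size(example))  # -> 9.5 (mm)
```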
Response surface methodology (RSM)
RSM is a DoE method which can assess the effect of multiple independent variables on a dependent variable. In experiments involving multiple independent variables, the conventional way to study the impact of each variable is to change one variable at a time (OVAT) while keeping the other variables constant. The OVAT method requires a large number of experimental trials and hence a lot of time and resources. Furthermore, OVAT is unable to analyze the combined effect and interaction among the multiple independent variables [49]. Statistical methods such as RSM enable the understanding of the dependent variable using the least number of experimental trials along with more efficient data collection [50]. In addition, the enhanced processing power of the DoE approach allows the synthesis of results from multiple similar experiments for a thorough understanding of a particular topic of interest. In this study, the PET-CA expression involved 43 data points and six independent variables, while PET-FA had 60 data points and five independent variables. Since the data were collected from various sources, standard methodologies such as Central Composite Design or Box–Behnken Design were not applied. Instead, uncoded variables were used to perform the RSM. The quality of the mathematical expressions was evaluated based on statistical parameters including the determination coefficient (R2), adjusted coefficient (R2 adj) and RMSE. The accuracy of the expressions was further assessed by consulting the Pareto Chart and residual plot. Subsequently, the effect of the independent variables was studied by referring to the interaction plot. Lastly, the expressions were presented in 2D and 3D plots through the contour plot and surface plot to investigate the impact of PET plastic aggregate on the compressive strength of concrete.
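For readers who want to reproduce the general idea, a hedged sketch of fitting a second-order response surface to uncoded variables with ordinary least squares is shown below. This is not the authors' exact software workflow (the term-selection steps such as backward elimination are omitted), and the tiny inline data set is a placeholder used only to show the mechanics; a real fit would use the full 43- or 60-point data sets.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

X = np.array([                 # placeholder rows: [cement, CA, FA, w/c, PET %]
    [380, 1100, 700, 0.50, 0],
    [380, 1100, 700, 0.50, 10],
    [380, 1100, 700, 0.50, 20],
    [400, 1050, 720, 0.45, 0],
    [400, 1050, 720, 0.45, 15],
    [420, 1000, 750, 0.40, 30],
])
y = np.array([35.0, 31.2, 26.8, 40.5, 33.9, 27.1])   # placeholder 28-day strengths, MPa

quad = PolynomialFeatures(degree=2, include_bias=False)  # linear, quadratic and interaction terms
model = LinearRegression().fit(quad.fit_transform(X), y)
print(round(model.score(quad.fit_transform(X), y), 3))   # in-sample R^2
```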
Regression analysis is a basic statistical method that is widely used to determine the relationship between a dependent variable and an independent variable. At the most rudimentary level, plotting two variables together allows for a closer examination of how one variable influences another. By applying simple regression analysis, the relationship between the variables may be determined. To apply simple regression to the data sets in Tables 1 and 2, the value differences caused by different mix designs were eliminated by using the strength index (SI) of the concrete. The definition of SI is as shown in Eq. 1
$$SI=\frac{CSR}{CSC}$$
Table 3 RSM of PET-CA expression
where CSR is the 28-day compressive strength of concrete with any percentage of PET plastic aggregate, while CSC is the 28-day compressive strength of the control mix without any PET. Obviously, the control mix of each data set has an SI of 1.0. For the specimens with PET aggregate, an SI above 1.0 indicates a certain percentage increase in compressive strength and vice versa. The independent variable of the expression is the percentage of PET replacement, and the dependent variable is the SI of the concrete specimens. Based on this, the influence of PET replacement on the compressive strength of concrete may be assessed.
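A minimal sketch of the SI normalisation in Eq. 1 is shown below: each mix's 28-day strength is divided by the strength of its own 0% PET control, so results from different mix designs become comparable. The numbers are made-up placeholders, not values from Tables 1 or 2.

```python
def strength_index(cs_pet, cs_control):
    """SI = CSR / CSC; SI < 1.0 means a strength loss relative to the control mix."""
    return cs_pet / cs_control

series = {0: 35.0, 10: 31.5, 20: 27.3, 30: 22.8}   # % PET -> 28-day strength, MPa (placeholder)
control = series[0]
print({p: round(strength_index(cs, control), 3) for p, cs in series.items()})
# -> {0: 1.0, 10: 0.9, 20: 0.78, 30: 0.651}
```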
RSM of PET-CA expression
The backward elimination method with α = 0.05 was used in the approach. Figure 1 presents the Pareto Chart for the PET-CA expression. Except for cement content (A), the linear terms of all primary variables, namely coarse aggregate content (B), fine aggregate content (C), water-cement ratio (D), PET content (E) and size (F), were significant. The quadratic term of coarse aggregate (BB) ranked the highest out of all factors, while three interaction terms, CE, CF, and BF, completed the expression. Figure 2 shows the residual versus order plot for the expression. The residual versus order plot is used to determine the legitimacy of the expression; any shift or trend in the plot would be caused by a variable that is not present in the equation. From Fig. 2, the residuals of the expression were distributed randomly in a zig-zag pattern. Hence, there was no other variable that was not accounted for in the analysis.
Pareto Chart of PET-CA expression
Residual versus Order Plot of PET-CA expression
Table 3 depicts the analysis of variance (ANOVA) and RSM analysis for the PET-CA expression. The expression had negligibly low p-values across the board; CA content had a higher p-value of 0.023, which was still smaller than 0.05 (a commonly used confidence level). Since it is known that the compressive strength of concrete is largely influenced by its mix proportions, all terms in the expression were deemed to be significant. At the same time, the presence of quadratic terms signified that the influence of the factors was not entirely linear. Meanwhile, the interaction terms were included for better computational power and data-fitting. The R2 value of the PET-CA expression was 0.9479, while the adjusted R2 value was 0.9337, indicating a strong fit of the equation. The RMSE of the PET-CA expression was 1.583, which was minor. Hence, the PET-CA expression was deemed to be satisfactory. The expression for the 28-day compressive strength of concrete with PET as a coarse aggregate replacement is given in Eq. 2:
$${\displaystyle \begin{array}{c}{CS}_{28}=-122.3+0.311B+0.020C-29.77D-0.383E+2.232F\\ {}-0.000152{B}^2-0.00156 BF+0.000311 CE-0.00138 CF\end{array}}$$
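For convenience, Eq. 2 can be wrapped in a small helper that returns the predicted 28-day strength for a given mix. The coefficients are copied from the equation above; the factor labels (B = coarse aggregate content, C = fine aggregate content, D = water-cement ratio, E = PET replacement %, F = nominal maximum PET size in mm) follow the paper, and the example inputs are assumptions that should stay within the range of the collected data for the prediction to be meaningful.

```python
def cs28_pet_ca(B, C, D, E, F):
    """Predicted 28-day compressive strength (MPa) for PET as coarse aggregate, Eq. 2."""
    return (-122.3 + 0.311 * B + 0.020 * C - 29.77 * D - 0.383 * E + 2.232 * F
            - 0.000152 * B**2 - 0.00156 * B * F
            + 0.000311 * C * E - 0.00138 * C * F)

# Illustrative point, not a specific mix from Table 1
print(round(cs28_pet_ca(B=1100, C=700, D=0.5, E=10, F=10), 1))  # ~28.8 MPa
```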
Figure 3 shows the interaction plot of the PET-CA expression. The interaction plot was used to check the general relationship between each independent variable in the expression and the dependent variable (i.e., the 28-day compressive strength of concrete). From the interaction plot, the content of coarse aggregate has a curved effect on the compressive strength, while a higher fine aggregate content increases the strength of concrete. The compressive strength of concrete drops with a higher water-cement ratio, which is a fundamental principle of concrete mix design. The PET plastic aggregate replacement caused a linear decrease in compressive strength, attributed to the morphology of the plastic particles. Compared with the rough texture of conventional aggregate, the smooth surface of plastic has poorer adhesion with cement paste [38]. This statement was further elaborated by Islam et al. [24], who added that the interfacial transitional zone (ITZ) of PET aggregate concrete was compromised due to the negligible water absorption of PET plastic. This caused the accumulation of water at the ITZ, which explains the weak bond between PET aggregate and cement paste. The gaps within the microstructure subsequently became voids, which in turn also increased the porosity and water absorption of PET concrete. Finally, the interaction plot postulated that the particle size of PET aggregate had a negative effect on compressive strength. Such a relationship between the two variables was partially hinted at in the experiment of Saikia and Brito [51], but the effect was not apparent due to limited data and lower replacement proportions. However, the negative effect of particle size was verified by Osubor et al. [40] in detail using three different sizes of PET plastic at up to 20% aggregate replacement. The main cause was the surface area of a single plastic particle, which increases with particle size. Hence, PET aggregate of greater size caused a greater level of weakness at the ITZ and thus reduced the compressive strength more markedly.
Figure 4 presents the contour plot and surface plot of the PET-CA expression. The contour plot was set to highlight the influence of the PET replacement level and particle size on the strength of concrete. In the contour plot, the compressive strength reduction caused by higher percentages of PET aggregate replacement was shown by the changing contours along the y-axis, while moving to the right along the x-axis corresponds to a larger PET particle size. For the same percentage of PET replacement, the compressive strength of concrete crossed the diagonal boundary towards a contour of lower strength as the size of PET plastic increased. The same information was also presented in the surface plot in a 3D manner, with the surface inclined downward towards higher percentages of PET and bigger PET sizes. Based on this, the influence of PET-CA on the 28-day compressive strength of concrete was evidently demonstrated.
RSM of PET-FA expression
For the PET-FA expression, the forward selection method with α = 0.05 was used to filter out the least significant terms, as backward elimination resulted in an overly complex expression. The Pareto Chart and residual versus order plot of the expression are shown in Figs. 5 and 6, respectively. Compared to the PET-CA expression, more terms are present in the PET-FA expression. From the Pareto Chart, all primary terms (A to E) were deemed to be significant, followed by the quadratic terms of coarse aggregate (BB) and water-cement ratio (DD). A total of five interaction terms were added to complete the expression. For the residual versus order plot in Fig. 6, a zig-zag pattern was observed, meaning that the residuals of the expression were distributed randomly. This indicates good integrity of the expression, with all significant variables being accounted for.
Pareto Chart of PET-FA expression
Residual versus Order Plot of PET-FA expression
Table 4 depicts the ANOVA and RSM analysis of the PET-FA expression. Most of the terms in the expression had a low p-value below 0.05. However, a certain degree of multicollinearity was observed in the terms CA and CA × PET(%), which showed higher p-values of 0.128 and 0.103. This was only a minor imperfection of the expression, which occurred due to the complexity of concrete mix design. The primary variable, PET(%), which was the major interest in this study, was not affected. Meanwhile, the R2 value of the PET-FA expression was 0.9787 and the adjusted R2 value was 0.9733, which confirms a strong correlation (R2 > 0.80) of the expression with the dependent variable. Likewise, the RMSE of the PET-FA expression was 1.726, which was minor. The expression for the 28-day compressive strength of concrete with PET as a fine aggregate replacement was given in Eq. 3:
Table 4 RSM of PET-FA expression
$${\displaystyle \begin{array}{c}{CS}_{28}=-311.4+0.44A+0.288B+0.0823C+157.6D+1.522E\\ {}-0.000106{B}^2-156.5{D}^2-0.000109 AB-0.000156 AC\\ {}-0.002249 AE-0.000299 BE-0.000803 CE\end{array}}$$
Figure 7 shows the interaction plot of the PET-FA expression. From the figure, the cement content had a positive effect on concrete strength, which matches the principle of concrete mix design. A curved relation was observed for the coarse aggregate content and water-cement ratio, while a negative effect was reported for the fine aggregate content. For the percentage of PET replacement, an evident decrease in compressive strength was observed at higher proportions of replacement, the same trend as shown in the PET-CA expression. The poor adhesion of the PET plastic with cement paste cited in many studies [42, 43, 52] was the reason for the decrease in strength. This is supported by the experimental finding of Black [44], who observed segregation in the concrete matrix and the formation of honeycomb-shaped pores and cavities on the concrete surface. Bamigboye [13] found a similar phenomenon by observing micropores and honeycombing in scanning electron microscope (SEM) images of PET concrete. Another reason for the strength reduction was proposed by some studies [46, 52], which pointed out that PET plastic has a lower density and load-bearing capacity compared to river sand.
Main effect plot of PET-FA expression
Figure 8 presents the contour plot and surface plot of PET-FA expression. The water-cement ratio and PET replacement proportion were selected as the variables to be presented in this paper. From the contour plot, the reduction in compressive strength was displayed by the shifting contours across the axis. At the same time, curved boundaries were observed due to the effect of water-cement ratio. For the surface plot, the drop in the surface displayed a consistent decrease in compressive strength regardless of water-cement ratio of the mix.
Contour and surface plot of PET-FA expression
The regression analysis was conducted by plotting the SI of PET concrete versus the percentage of PET replacement. Both series of data from the PET-CA and PET-FA expressions were plotted together with discernible symbols, as shown in Fig. 9. Consistent with the findings from the RSM, the compressive strength of concrete decreased with an increasing percentage of PET replacement in both the coarse and fine aggregate replacement cases. The data were scattered across a broad range, which made it difficult to identify a distinct relationship between the variables. Since the compressive strength in the regression output was relative to the control concrete (SI = 1.0), the intercept of the regression was set to 1.0 to formulate the change in compressive strength with respect to the PET replacement. A quadratic expression was applied to formulate the relationship between the percentage of PET and the SI of concrete. The quadratic equation was selected over a linear expression because, while the strength of concrete decreased roughly in proportion to the percentage of replacement, a higher strength loss was generally seen at higher percentages of replacement. Obviously, other equations could fit the data reasonably well, but the quadratic equation was the simplest while still maintaining sufficient accuracy in data fitting. The regression analysis identified a quadratic relation in which the PET-CA expression had an R2 value of 0.6144 and the PET-FA expression had an R2 value of 0.7295. However, the correlation for both sets of data was only moderate (0.60 < R2 < 0.80).
Regression analysis of PET concrete
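A minimal sketch of the constrained quadratic fit described above is given below: the intercept is pinned at SI = 1.0 (the control mix), so only the linear and quadratic coefficients are estimated by least squares. The (percentage, SI) pairs are placeholders, not the collected data.

```python
import numpy as np

p = np.array([5, 10, 15, 20, 30, 50], dtype=float)    # % PET replacement (placeholder)
si = np.array([0.97, 0.92, 0.88, 0.80, 0.68, 0.45])   # strength index (placeholder)

A = np.column_stack([p, p**2])                          # model: SI - 1 = a*p + b*p^2
(a, b), *_ = np.linalg.lstsq(A, si - 1.0, rcond=None)
print(f"SI = 1 {a:+.4f}*p {b:+.6f}*p^2")
```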
Figure 10 presents the separate residual plots for both expressions. The RMSE of the PET-CA and PET-FA regression expressions was computed to be 0.1175 and 0.1080, respectively. The residual plots showed that the quadratic expressions in Fig. 9 were moderately descriptive of the SI of PET aggregate concrete. From the residual versus order plot, most of the predicted SI values fluctuated within a ±0.10 range. However, several data points showed greater deviations, mainly originating from experiments with very high replacement proportions. For the PET-CA expression, the highest deviation was from the study of Bamigboye et al. [23] with replacement up to 100%. In contrast, the deviation in the PET-FA expression was caused by the variation of results for 10% PET aggregate concrete among different researchers. This variation became a key point of examination to determine the conditions in which PET aggregate displayed greater strength.
Residual Plots of Regression expression
It is concluded that PET aggregate caused a similar trend of decrease in compressive strength for both the coarse aggregate and fine aggregate replacement cases, although the strength decrease for the PET-FA expression was slightly less significant than for the PET-CA. More specifically, at up to 20% PET replacement, a significant number of data points from the PET-FA expression were placed above the PET-CA expression. Moreover, a number of data points from the PET-FA expression were closer to the 1.0 line at about 20% replacement. Those data points were from the experiment of Thorneycroft et al. [41], which utilized PET plastic of a small size at 0.5 mm to 2 mm and 2 mm to 4 mm. At the 5% replacement level, a minuscule increase in compressive strength was even observed, while the 10% PET concrete had comparable strength to that of the control. The experiment reported that the increased packing due to the smaller PET particles had a positive effect on compressive strength, which helped mitigate the strength loss inherent to PET concrete [41]. In another study by Rai et al. [43], concrete lost only about 10% of its compressive strength at 15% PET as a fine aggregate replacement. It is unknown whether the fineness of the ground plastic or the use of CONPLAST SP320 superplasticiser was the key to the low strength reduction. In another example [44], PET plastic that was ground using a granulator was used as a fine aggregate replacement and achieved comparable strength to the control, even at the 20% replacement level. Based on these analyses, it becomes apparent that using PET plastic as a fine aggregate replacement might be a better option due to the less adverse effect on compressive strength. With a proper mix design and a judicious selection of the replacement level, it is feasible to use PET plastic as a fine aggregate source in the production of sustainable concrete with minimum strength reduction.
In this study, the 28-day compressive strength of concrete with PET as either a coarse aggregate or a fine aggregate replacement was analyzed using RSM and regression approaches. The input variables for the regression equations were the percentage of PET replacement, the size of PET aggregate, and basic concrete mix design parameters such as cement content, coarse aggregate, fine aggregate, and water-to-cement ratio. The main effect plot was examined to confirm that all the variables were significant and played a role in determining concrete compressive strength. At the same time, the contour and surface plots were generated to thoroughly study the influence of PET replacement on concrete compressive strength. From the PET-CA expression, it was concluded that an increase in the percentage of PET aggregate reduced the compressive strength of concrete. The reduction in compressive strength occurred because PET aggregate has a smooth surface and low bearing capacity, which causes weakness in the ITZ. Moreover, PET aggregate of larger nominal maximum size resulted in concrete with even lower compressive strength. From the PET-FA expression, a similar trend of strength reduction was reported with the increase of the PET aggregate replacement level. The results of the RSM showed satisfactory accuracy, with an R2 value of 0.9479 for the PET-CA expression and 0.9787 for the PET-FA expression. The RMSE of both expressions was also found to be minimal. Meanwhile, the regression analysis showed that the SI of PET concrete had a moderate quadratic correlation with the percentage of PET aggregate. Comparing both cases, the addition of PET aggregate in concrete yielded less strength reduction when utilized as fine aggregate. Interestingly, the experimental data for which PET was ground to a fine size below 2 mm showed similar compressive strength between the PET concrete and the control concrete. From this study, it was found that incorporating PET in concrete as a fine aggregate replacement at up to 30% can lead to sustainable concrete with minimum strength reduction.
All data generated or analysed during this study are included in this published article.
Abbreviations
ANOVA: Analysis of Variance
CA: Coarse Aggregate
DoE: Design of Experiment
FA: Fine Aggregate
FHWA: Federal Highway Administration
ITZ: Interfacial Transitional Zone
OVAT: One Variable at a Time
PET: Polyethylene Terephthalate
R2: Determination Coefficient
R2 adj: Adjusted Coefficient
RMSE: Root-Mean-Square Error
RSM: Response Surface Methodology
SEM: Scanning Electron Microscope
SI: Strength Index
W/C: Water-To-Cement
Ritchie H, Roser M (2015) Plastic pollution. In: Our world data https://ourworldindata.org/plastic-pollution. Accessed 4 May 2022
Aarnio T, Hämäläinen A (2008) Challenges in packaging waste management in the fast food industry. Resour Conserv Recycl 52:612–621. https://doi.org/10.1016/j.resconrec.2007.08.002
Ncube LK, Ude AU, Ogunmuyiwa EN et al (2021) An overview of plasticwaste generation and management in food packaging industries. Recycling 6:1–25. https://doi.org/10.3390/recycling6010012
Jambeck JR, Hoegh-Guldberg O, Cai R et al (2015) Plastic waste inputs from land into the ocean. Science: 1655–1734
Turner A (2018) Mobilisation kinetics of hazardous elements in marine plastics subject to an avian physiologically-based extraction test. Environ Pollut 236:1020–1026. https://doi.org/10.1016/j.envpol.2018.01.023
Aslani H, Pashmtab P, Shaghaghi A et al (2021) Tendencies towards bottled drinking water consumption: challenges ahead of polyethylene terephthalate (PET) waste management. Health Promot Perspect 11:60–68. https://doi.org/10.34172/hpp.2021.09
Grigore ME (2017) Methods of recycling, properties and applications of recycled thermoplastic polymers. Recycling 2:1–11. https://doi.org/10.3390/recycling2040024
Awaja F, Pavel D (2005) Recycling of PET. Eur Polym J 41:1453–1477. https://doi.org/10.1016/j.eurpolymj.2005.02.005
PET Resin Association (PETRA) (2015) An Introduction to PET. In: petresin.org. http://www.petresin.org/news_introtoPET.asp. Accessed 4 May 2022
Kaur G, Pavia S (2021) Chemically treated plastic aggregates for eco-friendly cement mortars. J Mater Cycles Waste Manag 23:1531–1543. https://doi.org/10.1007/s10163-021-01235-2
Marsiglio L, Cheng S, Falk E et al (2020) Comparing the properties of polyethylene terephthalate (PET) plastic bricks to conventional concrete masonry units. In: 2020 IEEE global humanitarian technology conference (GHTC). IEEE, pp 1–6
Kangavar ME, Lokuge W, Manalo A et al (2022) Investigation on the properties of concrete with recycled polyethylene terephthalate (PET) granules as fine aggregate replacement. Case Stud Constr Mater 16:e00934. https://doi.org/10.1016/j.cscm.2022.e00934
Bamigboye GO, Tarverdi K, Umoren A et al (2021) Evaluation of eco-friendly concrete having waste PET as fine aggregates. Clean Mater 2:100026. https://doi.org/10.1016/j.clema.2021.100026
Figueiredo F, da Silva P, Botero ER, Maia L (2022) Concrete with partial replacement of natural aggregate by PET aggregate—an exploratory study about the influence in the compressive strength. AIMS Mater Sci 9:172–183. https://doi.org/10.3934/MATERSCI.2022011
Torres A, Simoni MU, Keiding JK et al (2021) Sustainability of the global sand system in the Anthropocene. One Earth 4:639–650. https://doi.org/10.1016/j.oneear.2021.04.011
Filho WL, Hunt J, Lingos A et al (2021) The unsustainable use of sand: reporting on a global problem. Sustain 13:1–16. https://doi.org/10.3390/su13063356
Joseph P, Tretsiakova-McNally S (2010) Sustainable non-metallic building materials. Sustainability 2:400–427. https://doi.org/10.3390/su2020400
Saxena R, Siddique S, Gupta T et al (2018) Impact resistance and energy absorption capacity of concrete containing plastic waste. Construct Build Mater 176:415–421. https://doi.org/10.1016/j.conbuildmat.2018.05.019
Abu-Saleem M, Zhuge Y, Hassanli R et al (2021) Impact resistance and sodium sulphate attack testing of concrete incorporating mixed types of recycled plastic waste. Sustain 13. https://doi.org/10.3390/su13179521
Saxena R, Gupta T, Sharma RK et al (2020) Assessment of mechanical and durability properties of concrete containing PET waste. Sci Iran 27:1–9. https://doi.org/10.24200/sci.2018.20334
Deraman R, Nawi MNM, Yasin MN et al (2021) Polyethylene terephthalate waste utilisation for production of low thermal conductivity cement sand bricks. J Adv Res Fluid Mech Therm Sci 88:117–136. https://doi.org/10.37934/arfmts.88.3.117136
Dawood AO, AL-Khazraji H, Falih RS (2021) Physical and mechanical properties of concrete containing PET wastes as a partial replacement for fine aggregates. Case Stud Constr Mater 14:e00482. https://doi.org/10.1016/j.cscm.2020.e00482
Bamigboye GO, Tarverdi K, Wali ES et al (2022) Effects of dissimilar curing systems on the strength and durability of recycled PET-modified concrete. Silicon 14:1039–1051. https://doi.org/10.1007/s12633-020-00898-0
Islam MJ, Meherier MS, Islam AKMR (2016) Effects of waste PET as coarse aggregate on the fresh and harden properties of concrete. Construct Build Mater 125:946–951. https://doi.org/10.1016/j.conbuildmat.2016.08.128
Haque R (2021) Performance of partially replaced plastic bottles (pet) as coarse aggregate in producing green concrete. Brill Eng 2:15–19. https://doi.org/10.36937/ben.2021.004.004
Sai Gopi K, Srinivas DT, Raju VSP (2020) Feasibility study of recycled plastic waste as fine aggregate in concrete. E3S Web Conf 184:1–5. https://doi.org/10.1051/e3sconf/202018401084
Habibi A, Ramezanianpour AM, Mahdikhani M (2021) RSM-based optimized mix design of recycled aggregate concrete containing supplementary cementitious materials based on waste generation and global warming potential. Resour Conserv Recycl 167:105420. https://doi.org/10.1016/j.resconrec.2021.105420
Vasudevan S, Poornima V, Balachandran M (2020) Influence of admixtures on properties of concrete and optimization using response surface methodology. Mater Today Proc 24:650–661. https://doi.org/10.1016/j.matpr.2020.04.319
Senthil Kumar K, Baskar K (2014) Response surfaces for fresh and hardened properties of concrete with E-waste (HIPS). J Waste Manag 2014:1–14. https://doi.org/10.1155/2014/517219
Othman R, Chong BW, Jaya RP et al (2021) Evaluation on the rheological and mechanical properties of concrete incorporating eggshell with tire powder. J Mater Res Technol 14:439–451. https://doi.org/10.1016/j.jmrt.2021.06.078
Chong BW, Othman R, Jaya RP et al (2021) Meta-analysis of studies on eggshell concrete using mixed regression and response surface methodology. J King Saud Univ - Eng Sci. https://doi.org/10.1016/j.jksues.2021.03.011
Sadeghi B, Marfavi Y, AliAkbari R et al (2021) Recent studies on recycled PET fibers: production and applications: a review. Mater Circ Econ 3. https://doi.org/10.1007/s42824-020-00014-y
Abas NF, Taiwo OO (2020) Utilization of pet wastes aggregate in building construction – a review. Int J Recent Technol Eng 9:656–663. https://doi.org/10.35940/ijrte.c4636.099320
Loong TK, Shahidan S, Fikri A et al (2020) Sound absorption for concrete containing polyethylene terephthalate waste. J Crit Rev 7:1379–1389
Abu-Saleem M, Zhuge Y, Hassanli R et al (2021) Microwave radiation treatment to improve the strength of recycled plastic aggregate concrete. Case Stud Constr Mater 15:e00728. https://doi.org/10.1016/j.cscm.2021.e00728
Alagirusamy R, Das A (2011) Yarns: production, processability and properties. Woodhead Publishing Limited
Ahani M, Khatibzadeh M, Mohseni M (2016) Preparation and characterization of poly(ethylene terephthalate)/hyperbranched polymer nanocomposites by melt blending. Nanocomposites 2:29–36. https://doi.org/10.1080/20550324.2016.1187966
Abu-Saleem M, Zhuge Y, Hassanli R et al (2021) Evaluation of concrete performance with different types of recycled plastic waste for kerb application. Construct Build Mater 293:123477. https://doi.org/10.1016/j.conbuildmat.2021.123477
Bachtiar E, Mustaan JF et al (2020) Examining polyethylene terephthalate (pet) as artificial coarse aggregates in concrete. Civ Eng J 6:2416–2424. https://doi.org/10.28991/cej-2020-03091626
Osubor SO, Salam KA, Audu TM (2019) Effect of flaky plastic particle size and volume used as partial replacement of gravel on compressive strength and density of concrete mix. J Environ Prot (Irvine, Calif) 10:711–721. https://doi.org/10.4236/jep.2019.106042
Thorneycroft J, Orr J, Savoikar P, Ball RJ (2018) Performance of structural concrete with recycled plastic waste as a partial replacement for sand. Construct Build Mater 161:63–69. https://doi.org/10.1016/j.conbuildmat.2017.11.127
Ohemeng EA, Yalley PP, Dadzie J, Djokoto SD (2014) Utilization of waste low density polyethylene in high strengths concrete pavement blocks production. Civ Environ Res 6:126–136
Rai B, Rushad ST, Kr B, Duggal SK (2012) Study of waste plastic mix concrete with plasticizer. ISRN Civ Eng 2012:1–5. https://doi.org/10.5402/2012/469272
Black J (2020) The use of recycled polyethylene terephthalate as a partial replacement for sand on the mechanical properties of structural concrete. Plymouth Student Sci 13:143–172
Ferrotto MF, Asteris PG, Borg RP, Cavaleri L (2022) Strategies for waste recycling: the mechanical performance of concrete based on limestone and plastic waste. Sustain 14. https://doi.org/10.3390/su14031706
Albano C, Camacho N, Hernández M et al (2009) Influence of content and particle size of waste pet bottles on concrete behavior at different w/c ratios. Waste Manag 29:2707–2716. https://doi.org/10.1016/j.wasman.2009.05.007
Irwan Juki M, Muhamad K, Annas Mahamad MK et al (2013) Development of concrete mix design nomograph containing polyethylene terephtalate (PET) as fine aggregate. Adv Mat Res 701:12–16. https://doi.org/10.4028/www.scientific.net/AMR.701.12
Highway Materials Engineering Course (1891) Portland cement concrete module G. U.S Department of Transportation, federal highway administration
Kumar S, Meena H, Chakraborty S, Meikap BC (2018) Application of response surface methodology (RSM) for optimization of leaching parameters for ash reduction from low-grade coal. Int J Min Sci Technol 28:621–629. https://doi.org/10.1016/j.ijmst.2018.04.014
Boyaci IH (2005) A new approach for determination of enzyme kinetic constants using response surface methodology. Biochem Eng J 25:55–62. https://doi.org/10.1016/j.bej.2005.04.001
Saikia N, De BJ (2014) Mechanical properties and abrasion behaviour of concrete containing shredded PET bottle waste as a partial substitution of natural aggregate. Construct Build Mater 52:236–244. https://doi.org/10.1016/j.conbuildmat.2013.11.049
Almeshal I, Tayeh BA, Alyousef R et al (2020) Eco-friendly concrete containing recycled plastic as partial replacement for sand. J Mater Res Technol 9:4631–4643. https://doi.org/10.1016/j.jmrt.2020.02.090
Ingram School of Engineering, Texas State University, San Marcos, TX, 78666, USA
Beng Wei Chong & Xijun Shi
BWC conducted the study conception and design. XS aided in the data collection process. BWC performed the analysis and interpretation of results. BWC and XS contributed to draft manuscript preparation. All authors have reviewed the results and approved the final version of the manuscript.
Correspondence to Xijun Shi.
Chong, B.W., Shi, X. Meta-analysis on PET plastic as concrete aggregate using response surface methodology and regression analysis. J Infrastruct Preserv Resil 4, 2 (2023). https://doi.org/10.1186/s43065-022-00069-y
Revised: 16 December 2022
How would lighthouses work in space?
In my world, spaceships are guided by lighthouses floating in space instead of electronic navigational systems. A lighthouse in space has the shape of a huge sphere emitting intense red light. Like an ordinary lighthouse, a space lighthouse marks dangerous things, such as black holes, supernovas… They also mark space stations, docking bays, fuel stations and other things.
Is this system viable?
Is this scientifically possible?
How much surface should a sphere (lighthouse) have, in order to emit enough red light?
science-based reality-check space light navigation
Javert
Is it mandatory to be light? Couldn't it be something else the navigators recognize as light? – Mindwin Mar 10 '16 at 19:36
Mildly related: warhammer40k.wikia.com/wiki/Astronomican – Reactormonk Mar 10 '16 at 22:09
It would be orders of magnitude more effective and useful to leave a radio beacon with an easily identifiable time stamp stream, identifying codes and access codes for some kind of useful data dump, probably the locations of nearby civilizations and a lexicon of languages spoken there. It would become the basis of a relativity compensated galactic positioning system. – Sean Boddy Mar 11 '16 at 5:03
Why would you need a lighthouse to mark the location of a supernova? Surely the supernova itself would be visible from a far greater distance. Same for a black hole, really. (You wouldn't see the black hole itself, of course, but its accretion disc would be visible from a great distance.) – Darrel Hoffman Mar 11 '16 at 14:29
There are naturally-occurring "space lighthouses": rotating stars called pulsars that sweep a beam across the sky. – pjc50 Mar 11 '16 at 14:31
1 - Is this system viable?
It is breathtakingly inefficient.
2 - Is this scientifically possible?
Yes; the question is whether you can get enough power to make it worth the trouble.
3 - How much surface should a sphere (lighthouse) have, in order to emit enough red light?
It's not surface, it's power. Because you're omni-directional you'll need a lot of it. Basically you need a star.
The problem is making this thing bright enough to be noticed far enough away so fast-moving spaceships have time to make course corrections with minimal delta-V. I'm going to use some relatively small numbers for time and velocity by sci-fi standards to give this the best chance of working before trying to scale up.
A spaceship going 1% the speed of light is moving about 10^10 (10 billion) m/h. If we want to give this ship one hour of warning (probably not enough time, but we're starting small) it needs to see the beacon 10 billion meters out. This might seem far, but it's only 1/5th the distance from Earth to Mars at their closest. Not even interplanetary scales.
First, using visible light means you're competing with everything else that's producing visible light: stars and everything reflecting starlight. It's like turning on a flashlight on a sunny day, can't see it.
An omni-directional lighthouse is just a radio transmitter. Visible light is a crowded part of the spectrum, so you can do a little better by changing to a less common frequency probably in the Microwave Window. No human is going to eyeball this thing anyway, it'll all be done with computers just like a radio. So pick a rare frequency. However, the higher the frequency the more energy required, so pick something low. Transmitting in an uncrowded, low frequency, part of the spectrum will significantly reduce the required energy. Pulsing it in a recognizable sequence will help picking it out of background noise.
But here's the problem: because this is an omni-direction beacon you can think of its energy racing outwards in an expanding sphere. The surface area of a sphere increases with the square of its radius, spreading the energy thinner and thinner. Double the distance from the lighthouse, quarter the energy. If it has 1000 lux at 1 m, at 2 m it will have just 250. At 4 m it will be down to 62.5 lux. At 10 m it's just 10 lux.
A sphere 10 billion meters in radius has a surface area 10^20 times larger than one with a 1 meter radius. A light source you can spot at 1 meter needs to be 10^20 times brighter to be seen at 10 billion meters. And therein lies the problem: power. That's a lot of power. And it's for a relatively slow spaceship with relatively short warning. At a certain point you might as well just create a small star.
To put this in concrete terms, for your beacon to be as bright as Sirius it would have to be as bright as a light bulb at 1.6x10^4 m. A light bulb puts out about 10^3 or 1000 lumens. To be that bright at 10 billion meters out, 6.25x10^5 times further away, you'd need to increase with the square of the distance: 3.9x10^11 or about 400 billion times brighter than a light bulb, roughly 4x10^14 lumens. This is many orders of magnitude more than the largest spotlight in the world, though still many orders below the smallest red dwarf.
That's the lower bound to be seen 1 hour away by a ship going at 1% the speed of light. We're not even into interplanetary scales, much less interstellar ones. Double the speed or reaction time, quadruple the brightness required.
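To get a feel for these numbers, here is a minimal Python sketch of the inverse-square bookkeeping; the 10^-5 lux detection threshold (roughly the illuminance Sirius delivers at Earth) and the one-hour warning are assumed values, not anything fixed by the problem.

```python
import math

def required_lumens(detect_lux, warning_distance_m):
    """Lumens an omni-directional beacon must emit so that an observer
    `warning_distance_m` away still receives `detect_lux` of illuminance.
    Inverse-square law: lux = lumens / (4 * pi * r**2)."""
    return detect_lux * 4 * math.pi * warning_distance_m**2

warning_distance = 3.0e6 * 3600   # ship at 1% c (3e6 m/s) with 1 hour of warning ~ 1.1e10 m
threshold = 1e-5                  # assumed detection threshold in lux

print(f"{required_lumens(threshold, warning_distance):.1e} lumens needed")
# ~1.5e16 lm: star-like output, which is the point made above.
```

Changing the ship speed or warning time only rescales the distance, but the required output grows with its square.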
Unless you have the power of a star, omni-directional transmission doesn't work at interstellar or even interplanetary scales. You need something directional. The problem for a directional beacon is to work out where to point it.
Schwern
$\begingroup$ Most lighthouses I'm familiar with are directional lights rotating in a circle. Why couldn't a space lighthouse use similar directional lighting. In all honesty, obstacles in space are going to need multiple beacons since if you enter from the wrong side you won't see it. At that point, you have a few lighthouses (maybe 1 for each cardinal direction) that send signals away from the obstacle using some type or rotating directional light. Maybe even something like GPS satellites that point outwards would be viable. $\endgroup$ – David Starkey Mar 10 '16 at 20:58
$\begingroup$ @DavidStarkey The ocean is 2 dimensional, so a lighthouse only has to sweep its beam in 2 dimensions covering the perimeter of a circle, and the perimeter increases linearly with distance. Space is 3 dimensional. A space lighthouse would have to sweep in 3 dimensions, the surface of a sphere. Since it increases as the square of the distance it gets very big very fast and takes the beam significantly longer to cover this surface. Since a ship only has to see the beacon a few times to get its position via triangulation, this (or simply turning the beacon on and off) might have some power. $\endgroup$ – Schwern Mar 10 '16 at 21:08
$\begingroup$ For rotation, blinking the light would have the same effect, and at least for me, it's easier to imagine the power requirements. $\endgroup$ – Brendan Long Mar 10 '16 at 21:13
$\begingroup$ @BrendanLong Yeah, I played fast and loose with that one. Because omni-directional energy requirements grow so quickly it kinda doesn't matter. Take it from one light bulb (1000 lm) five orders of magnitude down to 1 firefly (0.01 lm) and you still need 10^18 lm, which is ridiculously bright. I'll revise it. $\endgroup$ – Schwern Mar 10 '16 at 21:54
$\begingroup$ @rom016, seafaring ships knowing where a lighthouse is and a spacefaring one being able to find a weak signal is not a fair comparison. I don't have to set the bearing of a parabolic antenna to within milliarcseconds of where I suspect the lighthouse might be to get a navigational fix, and a terrestrial lighthouse isn't moving at potentially thousands of kilometers per second relative to my current position and velocity. I still think a sweeping beacon could have limited usefulness within a solar system, but as a navigation aide on interstellar scales, he's right; we'd map the stars instead. $\endgroup$ – Sean Boddy Mar 13 '16 at 7:32
This would depend on how bright the light is: total radiation is surface area × brightness, so to make it more visible you can increase either.
It would certainly help make things easier to see, but the thing is space is really big and empty. There isn't much to run into and it's generally easy enough to avoid the things that are there (or you need to go to them anyway).
Navigation can't really be done by eye in space since you're dealing with orbital dynamics and transfer orbits and all sorts of other very complicated maths. Most of the time in space with current or near-future tech you aren't even using your engines. You just use them occasionally to accelerate/correct course/decelerate and spend the rest of the time coasting.
A supernova would be far more visible than your floating red sphere. Even a black hole is surrounded by an accretion disk and is highly visible most of the time. If you had an isolated black hole or neutron star then you might possibly be able to argue for some sort of warning beacon, but it still doesn't make a lot of sense: you would need to account for the gravity from the neutron star/black hole when navigating anyway, and if you didn't know about it you would detect it by the changes it produced in your course long before impact became a risk.
Tim B♦
$\begingroup$ +1, putting a large red sphere in space to point out a supernova is like using a lighthouse to point out a bigger exploding lighthouse. $\endgroup$ – DaaaahWhoosh Mar 10 '16 at 17:05
$\begingroup$ I think your #2 should be "no," with the question as stated. The question states "... instead of electronic navigational systems." As you mention, navigation by eye isn't really possible. Even if the "lighthouses" are used, electronic navigational systems to detect them and respond appropriately would still be mandatory. $\endgroup$ – GrandOpener Mar 10 '16 at 19:07
$\begingroup$ @Schwern The OP said that electronic navigation systems would not be used in favor of the lighthouse system. In absence of further clarification, I take that to mean that in the context of this question, having a computer "detect and process" is specifically off limits. $\endgroup$ – GrandOpener Mar 10 '16 at 21:19
$\begingroup$ @DaaaahWhoosh actually, it would be more like using a single fluorescent bacteria to indicate the explosion of an atomic bomb. $\endgroup$ – Davidmh Mar 11 '16 at 0:19
$\begingroup$ @Yakk: Well, maybe. If you're looking out for black holes and supernovae, you're engaged in interstellar travel. And that means you either have FTL or your ships are generation ships. I would hesitate to characterize either of those as "near-future tech." $\endgroup$ – Kevin Mar 11 '16 at 5:41
I think the buck stops here, really. No, it's not viable.
What you are proposing is perhaps "possible" in some limited sense (but as already pointed out by others, you are up against some very fierce competition in terms of light sources). It isn't however viable.
This isn't for reasons of the amount of light you would need to put out. Throw sufficient amounts of handwavium at that, and you could explain it away, or just lampshade (pun only half intended) the whole problem.
The reason is spelled orbital mechanics combined with the finite speed of both light and space travel.
If we assume space travel by Newtonian or relativistic mechanics as currently understood (which would be an implied requirement, since you are asking for answers based in known science), then we are limited by the laws of orbital mechanics. Basically, spacecraft coast for all but a tiny fraction of their travel time. Practical spacecraft have very limited delta-v budget (ability to alter their velocity and vector) due to the tyranny of the rocket equation. In order to reduce the delta-v required for a particular position change after some amount of time, you need to increase the time between the maneuver and the time at which the position change needs to be completed. The earlier you can perform a maneuver, the less fuel you need to get the result you want. Compare the fact that in order to land, from an orbital velocity of on the order of 7-8 km/s, the space shuttle only needed to reduce its velocity by about 100 m/s under power before gravity and drag did the rest.
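The tyranny of the rocket equation is easy to quantify. A minimal sketch, assuming an exhaust velocity of about 4.5 km/s (an illustrative chemical-rocket figure, not tied to any particular engine):

```python
import math

def propellant_fraction(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1).
    Returns the fraction of the initial mass that must be propellant."""
    mass_ratio = math.exp(delta_v / exhaust_velocity)   # m0 / m1
    return 1.0 - 1.0 / mass_ratio

v_e = 4500.0  # m/s, assumed exhaust velocity
for dv in (100, 1000, 10000):
    print(f"delta-v {dv:>5} m/s -> {propellant_fraction(dv, v_e):.1%} of launch mass is propellant")
# 100 m/s is cheap (~2%); 10 km/s already demands ~89%,
# which is why spacecraft coast and burn as early as possible.
```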
Objects in space generally refuse to stay put. If you take your fancy spacecraft into a low Earth orbit, park and lock the doors when you go for an EVA to grab lunch, and look for the spacecraft about 45 minutes later, you will find it on the other side of the world. (Thankfully, you will also be on the other side of the world, which somewhat reduces the practical impact of this.) Geosynchronous orbits don't help, because you are still moving at orbital velocities; you just happen to have an orbital velocity that matches the angular velocity of the rotation of the planet beneath you (the point directly at your nadir). Lagrangian points don't help either, because as the relevant objects move those points move as well, which means that you (or in this case, the light source) are moving with them.
Let's say you can somehow engineer a light source that is bright enough to be possible to make out at the distance between Earth and Pluto when the two are at opposition (points farthest from each other), and we put it roughly where Pluto is in our universe. Pluto's orbit reaches out to about 49 AU from the Sun, and the Earth orbits at about 1 AU from the Sun. So we want something that is visible at 50 AU. (Pluto's orbit is in a different plane than the rest of the solar system, but for an example, this works anyway.) 50 AU really isn't far at all in terms of interstellar travel, which I take it you are concerned with because of the dangerous objects you mention in your question, but it works nicely once you approach a solar system.
Now, let's say your ships travel at 1% of the speed of light, or 3,000 km/s, relative to the light source. (This is far, far faster than anything we can accomplish with chemical rockets, but it is still somewhat within the realm of possibility with science and technology as we know them.) 50 AU is about 7500 Gm, so this distance will take your spacecraft about 2.5 million seconds to travel. (Back-of-the-envelope plausibility check: speed of light time delay from the Sun to Pluto, on the order of 7 hours. Speed of travel, 1/100 of the speed of light. Expected travel time, 700 hours. 700 hours is 2.52 million seconds. Check.)
Pluto's orbital speed averages about 4.67 km/s. The light itself needs about 25,000 seconds (roughly 7 hours) to cover those 50 AU, and in that time Pluto (or our light source) moves more than 100,000 km along its orbit – that is before the crew of the spacecraft even sees the light. In the 2.5 million seconds the spacecraft then needs to cross the same distance, the light source moves almost 11.7 million km.
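The same back-of-the-envelope arithmetic in a few lines of Python (constants rounded, so the figures are rough):

```python
AU = 1.496e11            # metres
c = 3.0e8                # m/s
ship_speed = 0.01 * c    # 1% of the speed of light
pluto_speed = 4.67e3     # m/s, Pluto's average orbital speed

distance = 50 * AU
light_delay = distance / c           # ~2.5e4 s, about 7 hours
ship_time = distance / ship_speed    # ~2.5e6 s, about 700 hours

print(f"drift before the light is even seen: {pluto_speed * light_delay / 1e3:,.0f} km")
print(f"drift during the ship's transit:     {pluto_speed * ship_time / 1e3:,.0f} km")
```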
Spacecraft generally travel along elliptical transfer orbits selected to get it to some particular point in space at some particular time (usually at a time when an object of interest is going to, in its orbit, intersect that point, or a point near that one) within some given constraints (time, delta-v, payload mass, ...). Any time a spacecraft is going anywhere, it is doing so by assuming a transfer orbit (very often a Hohmann transfer orbit, which in many cases is the lowest-energy way we know of to get from point A to point B in space; another alternative sometimes considered is a bi-elliptic transfer orbit). In Hohmann transfer orbits, you basically trade time for delta-v; the more delta-v you can afford, the more direct a route you can take and the quicker you can get to your destination. Since as we saw above that we want to minimize the delta-v expenditure in order to reduce our spacecraft's mass ratio, this means that we need more time to get to where we are going than if we were travelling in a straight line.
So not only has the light source already moved along its orbit by the time its light reaches the spacecraft 50 AU away, but you also have to consider how long the spacecraft will need to get to where the light source (or point of interest) is now – over ten million more kilometres of drift. (And that's "now" in which reference frame?) And by the time you get there, where is the point of interest going to be then? It's a variation of the classic math trick question of halving something repeatedly:
$$ x + \frac{x}{2} + \frac{x}{4} + \cdots + \frac{x}{2^n} + \cdots $$
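In the idealized picture where each correction leg is half the previous one, the total at least has a simple closed form:

$$ x + \frac{x}{2} + \frac{x}{4} + \cdots ~=~ x\sum_{k=0}^{\infty} \frac{1}{2^{k}} ~=~ 2x . $$

The real chase, with elliptical transfer orbits and a finite delta-v budget, has no such tidy answer.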
This is exactly the type of problem humans are horrible at solving, and computers are excellent at solving.
Which is why anyone who wants something even remotely like this would be far more likely to use radio beacons and electronic computers than navigation by eyesight and estimation.
The type of light (or even EM) source you are using has no effect on this, because the problem is related not to the type of EM source but rather to the relative speeds of the objects involved, and orbital mechanics.
a CVn♦
$\begingroup$ What if I use a red laser (harmless) instead of red light, could this enhance the viability of such a system? $\endgroup$ – Javert Mar 11 '16 at 8:08
$\begingroup$ @Javert Lasers emit (highly monochromatic) light, so the exact same issues apply. Additionally, lasers (as commonly referred to) emit light only in a specific direction, which means you will have to be in the exact line of emission in order to see anything whatsoever. If anything, using a laser adds problems. $\endgroup$ – a CVn♦ Mar 11 '16 at 8:25
Apart from all the other things mentioned in other answers, there are some more relevant issues:
Lighthouses have to be kept in place relative to the thing they are marking. Not really an issue on earth, when marking rocks, harbour entrances etc; they don't move much, and they don't affect lighthouses.
Supernovae, black holes and the like all have gravity. This will move the "lighthouse" around. In practice, the lighthouse would probably need to be in orbit – pretty much everything smaller than a galaxy tends to be in orbit: the Moon orbits the Earth, the Earth orbits the Sun, and the Sun (and the solar system as a whole) orbits the center of our galaxy (the Milky Way).
If you are coming from the wrong direction, the object being marked will obscure the lighthouse.
Lighthouses work great at sea, where nothing sticks up. They work ok on the coast - if you're coming from the land side, you probably don't need the lighthouse, and lighthouses are usually at the highest point, to shine above anything that might obscure it.
In space, one person's up is another person's down, and if you are unlucky enough to approach from the wrong side, that lighthouse you are looking for may be behind the object.
Very massive objects can bend light.
This applies primarily to black holes. Similar to mirages on earth, the light from a "lighthouse" (or, for that matter, radio signals) will bend around a sufficiently heavy object, making it appear in the wrong place, distorted, or different colour and/or brightness. See "Gravitational Lensing"
Light is the same thing as radio.
If this is for alien (from our perspective) beings, then be aware that light and radio are the same thing. We humans have taken a small piece of the electromagnetic spectrum and said "that bit there is different from the rest, because we can see it". Alien beings may say the same thing about other frequencies. Or they may sense their environment in a completely different way. Similarly, if your radio could be tuned up from its usual 91.5 MHz to around 480 million MHz (480 terahertz), you would be tuned to red light. Snakes can see infrared (for a given value of "see" – they don't sense it through their eyes, but it does go through the "vision" areas of their brains), which seems "black" to us.
Red means "Danger" - to us! To other races of the galaxy, red may mean "safety". Beware of cultural assumptions, when working with alien species - your assumptions are bound to be wrong.
Points 1 & 2 can be solved by putting a number of "lighthouses" in orbit, similar to the GPS satellite "constellation". GPS, for example, uses 24 satellites plus a few spares, spread over several orbital planes; I would think that three orbits around the marked object, with 2 lighthouses per orbit at opposite sides of it, for a total of 6, would be enough when seen from space. You might, hypothetically, be able to get away with 2, at opposite sides of the object, though even a small, invisible object could occlude your lighthouse.
AMADANON Inc.
As others have noted, a supernova would be putting out a lot more light than any human-built lighthouse is likely to be capable of. That's like having a guy with a flashlight to warn people away from an erupting volcano.
But as some sort of navigational marker in general ... it could work. My immediate reaction is that it would make more sense to be broadcasting a radio signal. That could be distinguished from background noise much more easily.
Is it technically possible to build a big red light and put it in orbit? Sure, why not? It's certainly possible to build satellites: humans have built plenty by now. And it's certainly possible to build large lights.
How big would it have to be? Depends how much light you want it to put out and how far away you want it to be seen. I don't think there's any formula there. Also, how visible it is will depend on how much energy is being emitted, which might be affected by the size but is not determined by it. A 100 watt light bulb and a 40 watt light bulb are often the same size. I think the bigger question is, How much power will it need, and where will that power come from?
$\begingroup$ The formula for brightness is lux = lumens / (4·π·r²): lux is your desired illuminance (light per unit area), lumens is the total light emitted, and r is the distance from the source in meters. It's the brightness of your thing spread over the surface area of a sphere. A 100 W bulb puts out about 1000 lm. So at 1 m it's putting out about 80 lux. At 2 m it's 20 lux. At 4 m it's 5 lux, and so on, falling off with the square of the distance. $\endgroup$ – Schwern Mar 10 '16 at 22:38
$\begingroup$ @schwern Sorry, I see my statement was unclear. I meant that there's no formula for exactly how much light you need. That is, I doubt there's a formula that would tell us that it has to be visible at 117.3 AU but it isn't necessary for it to be visible at 117.4 AUs. Or that at any given distance, it must be 142% brighter than the nearest planet but 143% is unnecessary. Etc. Ultimately these things would be judgment calls. $\endgroup$ – Jay Mar 11 '16 at 3:30
$\begingroup$ First you decide what frequency you're using, something generally quiet and agreed upon in the Microwave Window. Then you work out the minimum strength approaching ships can reliably detect, or being brighter than anything else nearby emitting in that frequency, whichever is stronger. Then you can work out the minimum distance an approaching ship would need to see the beacon based on A) the maximum speed you expect approaching ships to be moving, B) the minimum delta-V they're capable of, and C) how large the volume to be avoided is. $\endgroup$ – Schwern Mar 11 '16 at 6:38
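A minimal sketch of that sizing procedure; the ship speed, delta-v, hazard radius and receiver sensitivity below are all assumed values, chosen only to show the shape of the calculation:

```python
import math

def min_detection_distance(ship_speed, ship_delta_v, hazard_radius):
    """Distance at which the beacon must be detected so that a sideways
    burn of `ship_delta_v` drifts the ship clear of `hazard_radius`
    before arrival (crude constant-speed approximation)."""
    time_to_drift_clear = hazard_radius / ship_delta_v
    return ship_speed * time_to_drift_clear

def isotropic_power(detect_flux, distance):
    """Transmitter power needed for a flux of `detect_flux` W/m^2 at `distance`."""
    return detect_flux * 4 * math.pi * distance**2

d = min_detection_distance(ship_speed=3e6, ship_delta_v=1e3, hazard_radius=1e9)
print(f"beacon must be detectable {d:.1e} m out (~{d/1.5e11:.0f} AU)")
print(f"isotropic transmitter power: {isotropic_power(1e-20, d):.1e} W")
```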
More than likely they would be emitting something more powerful than red light, but the idea is sound. The Hugh Howey novel Beacon 23 is based on this idea. In that novel, it's broadcasting the location of a large asteroid field, as that world's FTL travel is similar to Star Wars (physical objects can impact travel due to gravity fields).
As for surface area, "enough" depends largely on how far away you want it to be seen from. I could definitely see you wanting to mark something dark, like an asteroid field or a black hole (which emits nothing but Hawking radiation), or smaller things like the aforementioned docking bays, although something more similar to Range Lights would be better for those. Supernovas and similar events are already pretty bright all by their lonesome.
Vogie
Have you looked into pulsars?
https://en.wikipedia.org/wiki/Pulsar
There's been some talk about using them as galactic navigation beacons.
J.D. Ray
$\begingroup$ This is a good suggestion, but I don't see how it is an answer to the question as asked. $\endgroup$ – a CVn♦ Mar 10 '16 at 22:04
I don't think this would really work.
Wouldn't be useful
Would cost too much energy
There are far better ways to do things
However, I like the idea of lighthouses in space. It could lead to some nice visuals. That's something that might be worth preserving even if the notion of using them for navigation isn't practical or useful.
Are you proposing faster than light travel? Because space is really, really big. If there's no faster than light travel you don't really have to worry about running into a black hole because it's going to take you thousands of years to get to one. (And if you're traveling faster than light, using light to navigate is not going to work very well – also, the further you dig into FTL the more problems you discover.) If you're positing FTL you'll be positing leaving normal space to do it. You go somewhere else, traverse some different distance and then pop out where you're going. Being able to sense some navigation beacon across the boundaries of these two types of space would be necessary. Neither that space nor anything that could penetrate it is known. So invent it ;- ) If it happens to have a nice rosy glow in the visual spectrum? Bonus.
In normal space nobody's going to spend the kind of energy needed to make a visible beacon in space. It wouldn't work well, it'd cost way too much energy and there are way better ways to do it.
You can avoid hitting rocks by mounting radar on your craft. And things you really need to worry about are traveling so fast that visual cues are kind of useless. Imagine a rock traveling faster than a bullet. It's the size of a golf ball and it comes at you from above. You're not going to see it in time to do anything about it. Radar and a computer will though.
Also there's not really much in the way of visual cues when navigating a space ship. Consider the fact that the gravity well functions like a steep slope you can't see. When you "park in orbit" you're really just doing a bunch of math about your speed, direction and placement on a steep slope. You can't see any of that with the naked eye.
What might be interesting is a heads up display which DOES show orbital dynamics and hazards visually. Show the gravity well as a lit up slope rolling down towards the planet. You could point out debris or navigation hazards in white light on your face mask or control panel when they're too far away to be able to see with the naked eye. You could represent size and speed with visual cues. The human mind is really well tuned to that. A tool which could present data in that way could be super useful. It also simplifies controls and readouts.
jorfus
By James Ahn, Eric L. Anderson, Annette L. Beautrais, Dennis Beedle, Jon S. Berlin, Benjamin L. Bregman, Peter Brown, Suzie Bruch, Jonathan Busko, Stuart Buttlaire, Laurie Byrne, Gerald Carroll, Valerie A. Carroll, Margaret Cashman, Joseph R. Check, Lara G. Chepenik, Robert N. Cuyler, Preeti Dalawari, Suzanne Dooley-Hash, William R. Dubin, Mila L. Felder, Avrim B. Fishkind, Reginald I. Gaylord, Rachel Lipson Glick, Travis Grace, Clare Gray, Anita Hart, Ross A. Heller, Amanda E. Horn, David S. Howes, David C. Hsu, Andy Jagoda, Margaret Judd, John Kahler, Daryl Knox, Gregory Luke Larkin, Patricia Lee, Jerrold B. Leikin, Eddie Markul, Marc L. Martel, J. D. McCourt, MaryLynn McGuire Clarke, Mark Newman, Anthony T. Ng, Barbara Nightengale, Kimberly Nordstrom, Jagoda Pasic, Jennifer Peltzer-Jones, Marcia A. Perry, Larry Phillips, Paul Porter, Seth Powsner, Michael S. Pulia, Erin Rapp, Divy Ravindranath, Janet S. Richmond, Silvana Riggio, Harvey L. Ruben, Derek J. Robinson, Douglas A. Rund, Omeed Saghafi, Alicia N. Sanders, Jeffrey Sankoff, Lorin M. Scher, Louis Scrattish, Richard D. Shih, Maureen Slade, Susan Stefan, Victor G. Stiebel, Deborah Taber, Vaishal Tolia, Gary M. Vilke, Alvin Wang, Michael A. Ward, Joseph Weber, Michael P. Wilson, James L. Young, Scott L. Zeller
Edited by Leslie S. Zun
Edited in association with Lara G. Chepenik, Mary Nan S. Mallory
Book: Behavioral Emergencies for the Emergency Physician
Published online: 05 April 2013
Print publication: 21 March 2013, pp viii-xii
Chapter 43 - The Emergency Medical Treatment and Active Labor Act (EMTALA) and psychiatric patients in the emergency department
from Section 6. - Administration of psychiatric care
By Derek J. Robinson
Print publication: 21 March 2013, pp 320-323
In the field of emergency psychiatry, a person's ethnic background, race, religion, values, beliefs, customs, and language can affect the symptoms with which a psychiatric illness may present. A culturally competent evaluation of the psychiatric patient includes assessment of the cultural identity of the individual, the role of culture in the expression and evaluation of psychiatric symptoms, and the effect of cultural differences on the relationship between patient and clinician. The cultures of the clinician and system of care influence diagnosis, treatment, and delivery of care. Language barriers influence the authenticity of the informed consent process. According to federal classification, the four most recognized racial and ethnic minority groups in the United States are Hispanic Americans/Latinos, African Americans/Blacks, Asian Americans and Pacific Islanders, and American Indians and Alaska Natives. Culture-bound syndromes in Hispanic populations include ataque de nervios (attack of nerves), nervios (nerves), and susto (fright or soul loss).
ON GROUPS WITH TWO ISOMORPHISM CLASSES OF DERIVED SUBGROUPS
PATRIZIA LONGOBARDI, MERCEDE MAJ, DEREK J. S. ROBINSON, HOWARD SMITH
Journal: Glasgow Mathematical Journal / Volume 55 / Issue 3 / September 2013
The structure of groups which have at most two isomorphism classes of derived subgroups ($\mathfrak{D}_2$-groups) is investigated. A complete description of $\mathfrak{D}_2$-groups is obtained in the case where the derived subgroup is finite: the solution leads to an interesting number-theoretic problem. In addition, detailed information is obtained about soluble $\mathfrak{D}_2$-groups, especially those with finite rank, where algebraic number fields play an important role. Also, detailed structural information about insoluble $\mathfrak{D}_2$-groups is found, and the locally free $\mathfrak{D}_2$-groups are characterized.
GROUPS WITH FINITELY MANY DERIVED SUBGROUPS
FRANCESCO DE GIOVANNI, DEREK J. S. ROBINSON
Journal: Journal of the London Mathematical Society / Volume 71 / Issue 3 / June 2005
Published online by Cambridge University Press: 24 May 2005, pp. 658-668
A study is made of groups with finitely many derived groups of subgroups or of infinite subgroups. These groups are classified completely in the locally graded case. In the general case, detailed structural information about groups in each class is found.
The structure of finite groups in which permutability is a transitive relation
Derek J. S. Robinson
Journal: Journal of the Australian Mathematical Society / Volume 70 / Issue 2 / April 2001
Published online by Cambridge University Press: 09 April 2009, pp. 143-160
Print publication: April 2001
The structure of finite groups in which permutability is transitive (PT-groups) is studied in detail. In particular a finite PT-group has simple chief factors and the p-chief factors fall into at most two isomorphism classes. The structure of finite T-groups, that is, groups in which normality is transitive, is also discussed, as is that of groups generated by subnormal or normal PT-subgroups.
Permutability properties of subgroups
By Derek J. S. Robinson, Department of Mathematics, University of Illinois, 1409 West Green Street, Urbana, Illinois 61801, U.S.A.
Edited by C. M. Campbell, University of St Andrews, Scotland, E. F. Robertson, University of St Andrews, Scotland, N. Ruskuc, University of St Andrews, Scotland, G. C. Smith, University of Bath
Book: Groups St Andrews 1997 in Bath
Print publication: 18 February 1999, pp 633-638
ON GROUPS THAT ARE ISOMORPHIC WITH EVERY SUBGROUP OF FINITE INDEX AND THEIR TOPOLOGY
DEREK J. S. ROBINSON, MATHEW TIMM
Journal: Journal of the London Mathematical Society / Volume 57 / Issue 1 / February 1998
Published online by Cambridge University Press: 01 February 1998, pp. 91-104
The main result is that a finitely generated group that is isomorphic to all of its finite index subgroups has free Abelian first homology, and that its commutator subgroup is a perfect group. A number of corollaries on the structure of such groups are obtained, including a method of constructing all such groups for which the commutator subgroup has a trivial centralizer. As an application, conditions are presented for the covering spaces of compact manifolds that determine when the fundamental groups of the base spaces are free Abelian.
The Generalized Wielandt Subgroup of a Group
James C. Beidleman, Martyn R. Dixon, Derek J. S. Robinson
Journal: Canadian Journal of Mathematics / Volume 47 / Issue 2 / 01 April 1995
Published online by Cambridge University Press: 20 November 2018, pp. 246-261
Print publication: 01 April 1995
The intersection IW(G) of the normalizers of the infinite subnormal subgroups of a group G is a characteristic subgroup containing the Wielandt subgroup W(G) which we call the generalized Wielandt subgroup. In this paper we show that if G is infinite, then the structure of IW(G)/ W(G) is quite restricted, being controlled by a certain characteristic subgroup S(G). If S(G) is finite, then so is IW(G)/ W(G), whereas if S(G) is an infinite Prüfer-by-finite group, then IW(G)/W(G) is metabelian. In all other cases, IW(G) = W(G).
Deciding if an automorphism of an infinite soluble group is inner
Let $G$ be a group with a finite set of generators $x_1, x_2,\ldots,x_n$ and a recursive set of defining relators in the generators. Then an endomorphism $\eta$ of $G$ is completely determined by the images of the generators, and hence by an $n$-tuple of words $(w_1,\ldots,w_n)$ in the $x_i$. This allows the formulation of algorithmic problems about endomorphisms and automorphisms. For example, can one decide if a given $n$-tuple of words represents an endomorphism, and if so, an automorphism? Some results on these questions may be found in [2] and [12]. Here we shall be concerned with a similar problem: given that an $n$-tuple of words represents an automorphism of the group $G$, does there exist an algorithm which decides if the automorphism is inner?
Homology and cohomology of locally supersoluble groups
Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 102 / Issue 2 / September 1987
Published online by Cambridge University Press: 24 October 2008, pp. 233-250
In a recent article [13] a series of vanishing theorems was obtained for the (co)homology of locally nilpotent groups. These results assert that if (co)homology vanishes in low dimensions (0 or 1), then it vanishes in all dimensions, provided that the module satisfies an appropriate finiteness condition.
Groups with prescribed automorphism group: A clarification
Journal: Proceedings of the Edinburgh Mathematical Society / Volume 27 / Issue 1 / February 1984
Published online by Cambridge University Press: 20 January 2009, pp. 59-60
In Theorems 1 and 2 of [] necessary and sufficient conditions were given for a group $G$ to have a finite automorphism group $\mathrm{Aut}\, G$ and a semisimple subgroup of central automorphisms $\mathrm{Aut}_c G$. Recently it occurred to us, as a result of conversations with Ursula Webb, that these conditions could be stated in a much simpler and clearer form. Our purpose here is to record this reformulation. For an explanation of terminology and notation we refer the reader to [1].
26 - Addendum to: "Applications of cohomology to the theory of groups"
By Derek J. S. Robinson
Edited by C. M. Campbell, E. F. Robertson
Book: Groups - St Andrews 1981
Published online: 07 September 2010
Print publication: 28 October 1982, pp 365-367
In 1981 the importance of homological algebra as a tool in group theory was beginning to be recognised. After the pioneering work in the 1940's by S. Eilenberg, S. MacLane and B. Eckmann on the homology and cohomology of groups, twenty years elapsed before really convincing applications appeared: the prime example was Gaschütz's famous theorem on the existence of outer automorphisms of finite p-groups. The well known sets of notes by K. W. Gruenberg and U. Stammbach, which were published in the 1970's, had proved to be a stimulus to research, and already a body of work had appeared in the literature. It seemed timely to write a survey for Groups St Andrews 1981.
The twenty five years which have elapsed since that critical conference have witnessed a continuation of the trend in group theory to introduce techniques from homological algebra, as well as other areas of mathematics. Today many group theorists are conversant with a variety of homological methods, including spectral sequences. Our aim here is to survey some of the achievements during this period.
Until about 1980 group theoretic interpretations of the cohomology groups $H^n(G,M)$ had been found only for $n \le 3$; these arise of course from the classical theory of group extensions. The problem of finding group theoretic interpretations of $H^n(G,M)$ for arbitrary $n$ was solved by D. F. Holt and J. Huebschmann.
Groups with prescribed automorphism group
Journal: Proceedings of the Edinburgh Mathematical Society / Volume 25 / Issue 3 / October 1982
We are concerned here with the question: to what extent can the structure of a group $G$ be recaptured from information about the structure of its group of automorphisms $\mathrm{Aut}\, G$? For example, one might try to find all groups which have some specific group as their (full) automorphism group, a point of view adopted by Iyer in a recent paper [5]. Nothing is known about this question in general except the result of Nagrebeckü [7] that there are only finitely many finite groups with a given group as automorphism group.
The subnormal coalescence of some classes of groups of finite rank
Mark Drukker, Derek J. S. Robinson, Ian Stewart
Journal: Journal of the Australian Mathematical Society / Volume 16 / Issue 3 / November 1973
Print publication: November 1973
A class of groups forms a (subnormal) coalition class, or is (subnormally) coalescent, if whenever $H$ and $K$ are subnormal subgroups of a group $G$ belonging to the class, then their join $\langle H, K \rangle$ is also a subnormal subgroup of $G$ belonging to the class. Among the known coalition classes are those of finite groups and polycyclic groups (Wielandt [15]); groups with maximal condition for subgroups (Baer [1]); finitely generated nilpotent groups (Baer [2]); groups with maximal or minimal condition on subnormal subgroups (Robinson [8], Roseblade [11, 12]); minimax groups (Roseblade, unpublished); and any subjunctive class of finitely generated groups (Roseblade and Stonehewer [13]).
Groups in which normality is a transitive relation
Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 60 / Issue 1 / January 1964
Published online by Cambridge University Press: 24 October 2008, pp. 21-38
Print publication: January 1964
A group is said to have the property T or to be a T-group if every subnormal subgroup is normal. Thus the class of T-groups is just the class of all groups in which normality is a transitive relation. Finite T-groups have been studied by Best and Taussky (1), Gaschütz (4) and Zacher (11). Gaschütz has shown that if G is a finite soluble T-group and G/L is the unique maximal nilpotent quotient group of G, then G/L is Abelian or Hamiltonian and L is an Abelian group of odd order prime to |G:L| ((4), Satz 1). Our aim is to study infinite T-groups and more especially infinite soluble T-groups with a view to extending Gaschütz's results. One of the simplest results on soluble T-groups is Theorem 2.3.1: Every soluble T-group is metabelian.
2.5. Independence
Events \(A\) and \(B\) are independent if the information that one of them occurred does not change the chance of the other.
If you don't know whether \(A\) has occurred, then the chance of \(B\) is just \(P(B)\).
If you do know that \(A\) occurred, then you have to update the chance of \(B\) to \(P(B \mid A)\), the conditional chance of \(B\) given \(A\).
The definition of independence says that \(A\) and \(B\) are independent if \(P(B \mid A) = P(B)\).
For example, suppose you roll a die once and see an odd number. What does that tell you about the chance that the next roll is a 2? If your answer is, "Nothing – the chance that the next roll is a 2 is still 1/6," then you have said that the two events "first roll is an odd number" and "second roll is 2" are independent.
2.5.1. Dice Versus Cards
Suppose you are rolling a die twice. Then
\[\begin{split} \begin{align*} &P(2 \text{ on the second roll} \mid \text{odd number on the first roll}) \\ &= ~ P(2 \text{ on the second roll}) ~ = ~ \frac{1}{6} \end{align*} \end{split}\]
Independence is a natural assumption about successive rolls of a die. But when cards are dealt (at random without replacement, as we consistently assume) from a deck, then the situation is different.
Suppose two cards are dealt from a standard deck in which four out of 52 cards are aces. We know that by symmetry,
\[ P(\text{second card is an ace}) ~ = ~ \frac{4}{52} \]
It's worth reviewing the reason for this: since we have no other information, all 52 cards are equally likely to appear on the second draw, and four of them are aces. But now suppose you learn that the first card was an ace.
\[\begin{split} \begin{align*} P(\text{second card is an ace} \mid \text{first card is an ace})~ &= ~ \frac{3}{51} \\ &\neq ~ P(\text{second card is an ace}) \end{align*} \end{split}\]
The information that the first card is an ace changes the probability that the second card is an ace. The events "the first card is an ace" and "the second card is an ace" are not independent. Knowing that the first card is an ace eliminates an ace from the deck, and probabilities have to be recalculated accordingly.
2.5.2. A Special Case of the Multiplication Rule
Suppose you deal two cards. By the multiplication rule,
\[\begin{split} \begin{align*} & P(\text{both cards are aces}) \\ &= ~ P(\text{first card is an ace})P(\text{second card is an ace} \mid \text{first card is an ace}) \\ &= ~ \frac{4}{52} \times \frac{3}{51} \end{align*} \end{split}\]
If you roll two dice, then by the multiplication rule again,
\[\begin{split} \begin{align*} & P(\text{odd number on the first roll and } 2 \text{ on the second roll}) \\ &= ~ P(\text{odd number on the first roll})P(2 \text{ on the second roll} \mid \text{odd number on the first roll}) \\ &= ~ \frac{3}{6} \times \frac{1}{6} \end{align*} \end{split}\]
The second factor in the product is not affected by the condition that the first roll was an odd number.
What we have observed in the example above can be generalized as follows.
The multiplication rule says that for any two events \(A\) and \(B\),
\[ P(AB) ~ = ~ P(A)P(B \mid A) \]
In the case of independent events \(A\) and \(B\) the second factor in the multiplication rule is the same as the unconditional chance of \(B\):
\[ P(AB) ~ = ~ P(A)P(B) ~~ \text{if } A \text{ and } B \text{ are independent} \]
Notice that if you are trying to find the chance of an intersection, then independence does not affect whether you multiply chances. Independence only affects what you multiply.
It is simplest to just remember the general multiplication rule. Independence is then an easy special case.
Return to the example of rolling a die twice. We have
\[ P(\text{ odd number on the first roll and } 2 \text{ on the second roll}) ~ = ~ \frac{3}{6} \times \frac{1}{6} ~ = ~ \frac{3}{36} \]
We can now check that the assumption of independence is consistent with an assumption that we have made all along: that all 36 outcomes of the two rolls are equally likely. Under that assumption, we would have solved the problem by saying that three pairs \((1, 2)\), \((3, 2)\), and \((5, 2)\) form the event and hence the chance is \(3/36\). That's the same as what we got by using independence.
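As a quick empirical check of both answers, here is a minimal simulation sketch (the seed and the trial counts are arbitrary choices):

```python
import numpy as np

np.random.seed(0)                      # arbitrary seed, for reproducibility
trials = 1_000_000

# Two independent die rolls: P(odd on first and 2 on second) should be 3/36.
first = np.random.randint(1, 7, size=trials)
second = np.random.randint(1, 7, size=trials)
print("dice :", np.mean((first % 2 == 1) & (second == 2)), "vs", 3/36)

# Two cards dealt without replacement: P(both aces) should be (4/52)*(3/51).
deck = np.array([1]*4 + [0]*48)        # 1 marks an ace
hits = sum(np.random.choice(deck, size=2, replace=False).sum() == 2
           for _ in range(100_000))
print("cards:", hits / 100_000, "vs", (4/52)*(3/51))
```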
The definition of independence extends to multiple events. A collection of events are mutually independent if knowing that any group of them has occurred doesn't affect the chances of the others.
2.5.3. People v. Collins, 1968
In 1964, the married couple Michael and Janet Collins were arrested for robbery in California. A jury found them guilty, based in part on a probability calculation.
Michael Collins appealed the conviction to the California Supreme Court. In 1968 the Court overturned the jury's conviction.
In the statement of the majority opinion, Justice Raymond Sullivan wrote, "We deal here with the novel question whether evidence of mathematical probability has been properly introduced and used by the prosecution in a criminal case. While we discern no inherent incompatibility between the disciplines of law and mathematics and intend no general disapproval or disparagement of the latter as an auxiliary in the fact-finding processes of the former, we cannot uphold the technique employed in the instant case."
To see what the Justices could not uphold, we start with their summary of the incident that led to the arrest. We apologize that the quote includes the term used to describe a black person in the statement as well as during the trial.
"On June 18, 1964, about 11:30 a.m. Mrs. Juanita Brooks, who had been shopping, was walking home along an alley in the San Pedro area of the City of Los Angeles. She was pulling behind her a wicker basket carryall containing groceries and had her purse on top of the packages. She was using a cane. As she stooped down to pick up an empty carton, she was suddenly pushed to the ground by a person whom she neither saw nor heard approach. She was stunned by the fall and felt some pain. She managed to look up and saw a young woman running from the scene. According to Mrs. Brooks the latter appeared to weigh about 145 pounds, was wearing "something dark," and had hair "between a dark blond and a light blond," but lighter than the color of defendant Janet Collins' hair as it appeared at trial. Immediately after the incident, Mrs. Brooks discovered that her purse, containing between $35 and $40 was missing.
About the same time as the robbery, John Bass, who lived on the street at the end of the alley, was in front of his house watering his lawn. His attention was attracted by "a lot of crying and screaming" coming from the alley. As he looked in that direction, he saw a woman run out of the alley and enter a yellow automobile parked across the street from him. He was unable to give the make of the car. The car started off immediately and pulled wide around another parked vehicle so that in the narrow street it passed within 6 feet of Bass. The latter then saw that it was being driven by a male Negro, wearing a mustache and beard. At the trial Bass identified defendant as the driver of the yellow automobile. However, an attempt was made to impeach his identification by his admission that at the preliminary hearing he testified to an uncertain identification at the police lineup shortly after the attack on Mrs. Brooks, when defendant was beardless.
In his testimony Bass described the woman who ran from the alley as a Caucasian, slightly over 5 feet tall, of ordinary build, with her hair in a dark blonde ponytail, and wearing dark clothing. He further testified that her ponytail was "just like" one which Janet had in a police photograph taken on June 22, 1964."
In the jury trial, the prosecutor had provided a table of probabilities:
| Characteristic | Individual Probability |
|---|---|
| A. Partly yellow automobile | 1/10 |
| B. Man with mustache | 1/4 |
| C. Girl with ponytail | 1/10 |
| D. Girl with blond hair | 1/3 |
| E. Negro man with beard | 1/10 |
| F. Interracial couple in car | 1/1000 |
The prosecutor had then multiplied all the probabilities together to get an answer of about 1 in 12,000,000. Quoting again from the Supreme Court's statement, "Applying the product rule to his own factors the prosecutor arrived at a probability that there was but one chance in 12 million that any couple possessed the distinctive characteristics of the defendants."
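Multiplying six such fractions is a short check; using the values as reported in accounts of the case, the product is exactly one in 12 million:

```python
from fractions import Fraction

# the six "individual probabilities", as reported in accounts of the case
factors = [Fraction(1, 10), Fraction(1, 4), Fraction(1, 10),
           Fraction(1, 3), Fraction(1, 10), Fraction(1, 1000)]

product = Fraction(1, 1)
for f in factors:
    product *= f

print(product)   # 1/12000000 -- "one chance in 12 million"
```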
Though the jury had been swayed by this calculation, the Supreme Court understood the importance of checking assumptions.
First, they objected to the fact that the prosecution had provided no basis for arriving at the probabilities in the table: "[W]e find the record devoid of any evidence relating to any of the six individual probability factors."
Second, they did not accept the assumption of independence of the factors. They pointed out the difficulty of believing that having a beard and having a mustache are independent. It is not hard to come up with other examples of this difficulty. Treating characteristics D, E, and F as independent is hard to justify.
When probability theory is used in practice, it is important to make sure that the assumptions of the theory are reasonable before doing calculations. Otherwise the analysis might be just as weak as the one in the Collins case, which "lacked an adequate foundation both in evidence and in statistical theory".
2.5.4. Regina v. Clark, 2003
The lessons learned in the Collins case have been widely broadcast and appear in many statistics textbooks – but clearly not the ones read by expert witnesses in the case of Sally Clark in the United Kingdom.
Two of Clark's apparently healthy sons died in infancy. On both occasions, Clark was alone with her baby at the time of death, and there was no evidence of injury. After the second death, Clark was arrested for murder. Her defence was that the babies had died of Sudden Infant Death Syndrome (SIDS), referred to in the UK as Sudden Unexplained Death in Infancy (SUDI) or "cot death". But she was found guilty.
Clark's conviction was overturned on a second appeal, after she had served three years of her prison sentence. Among the reasons for the reversal were the acknowledgment by the court that there had been serious misuse of statistical reasoning.
In particular, a medical expert witness claimed that the chance of Clark's two babies dying of SIDS was about 1 in 73 million. He arrived at this figure in two steps:
First he estimated the chance of a cot death in a family like Clark's. The estimate was based on a large-scale multidisciplinary analysis of SIDS that reported an overall SIDS death rate of 1 in 1303 live births in the UK. Clark was a lawyer and the family did not have financial struggles. After taking such factors into account, the expert set the rate at 1 in 8543, arguing that SIDS deaths were less likely in families like Clark's than in the general population.
Then he found the chance of two SIDS deaths in Clark's family by the calculation
\[ \frac{1}{8543} \times \frac{1}{8543} = \frac{1}{72982849} \]
That's about 1 in 73 million.
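The arithmetic itself is trivial; what matters is the independence assumption buried in it. The sketch below repeats the expert's calculation and then varies a purely hypothetical relative-risk factor to show how sensitive the headline figure is to that assumption:

```python
# The expert's calculation, and how sensitive it is to the independence assumption.
# The relative-risk factors below are purely illustrative assumptions, not estimates.
p_single = 1 / 8543                    # expert's estimated SIDS rate for a family like Clark's

independent = p_single ** 2
print(f"assuming independence : 1 in {round(1/independent):,}")   # ~1 in 73 million

for relative_risk in (5, 10, 50):      # hypothetical increase in risk after one SIDS death
    p_second = relative_risk * p_single
    print(f"relative risk {relative_risk:>2}x      : 1 in {round(1/(p_single*p_second)):,}")
```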
The Royal Statistical Society (RSS) objected vigorously to this calculation on the grounds that the assumption of independence is hard to justify. Given that there is one SIDS death in a family, the risk to other babies in the family might be larger.
"There may well be unknown genetic or environmental factors that predispose families to SIDS, so that a second case within the family becomes much more likely," the RSS wrote. "The well-publicised figure of 1 in 73 million thus has no statistical basis. Its use cannot reasonably be justified as a "ballpark" figure because the error involved is likely to be very large, and in one particular direction. The true frequency of families with two cases of SIDS may be very much less incriminating than the figure presented to the jury at trial."
The Society also pointed out that the media had misinterpreted the erroneous figure of 1 in 73 million to be the chance that Clark was innocent. The error was an instance of the Prosecutor's Fallacy in which the likelihood of the evidence given innocence was confused with the chance of innocence given the evidence.
Thirty-five years after the Collins case, another unjustifiable assumption of independence made by a prosecution witness resulted in a wrongful conviction for Clark. The Sally Clark case is regarded as a great miscarriage of justice and had devastating consequences. Though Clark was strongly supported by her husband throughout, she was in the end unable to cope with the loss of her sons and the injustice she suffered. She died in 2007.
Apart from statistical error, a serious flaw in the prosecution's case was that another of its medical experts withheld blood test reports showing that the second baby had harmful bacteria in several places in his body including the cerebro-spinal fluid. After Clark's conviction was overturned, the Journal of the Royal Society of Medicine (JRSM) addressed the questions the case raised about the conduct of medical expert witnesses.
The JRSM urged more appropriate investigation of SIDS deaths. It also made an important general recommendation. "[W]hen giving evidence to the police or to the court doctors would be wise to acknowledge the limitations in their understanding. They should present all relevant facts in a balanced manner, offer opinions only within their sphere of expertise and take care not to overstate their case. Wrong conclusions in either direction may be disastrous."
Dirichlet theorem
A name referring to several theorems associated with Peter Gustav Lejeune Dirichlet (1805-1859).

===Dirichlet's theorem in the theory of Diophantine approximations===
For any real number $\alpha$ and any natural number $Q$ there exist integers $a$ and $q$ which satisfy the condition $$ |\alpha q - a | < \frac{1}{Q}\,,\ \ \ 0 < q \le Q\ . $$ With the aid of the [[Dirichlet box principle|Dirichlet box principle]] a more general theorem can be demonstrated: For any real numbers $\alpha_1,\ldots,\alpha_n$ and any natural number $Q$ there exist integers $a_1,\ldots,a_n$ and $q$ such that $$ \max(|\alpha_1 q - a_1|,\ldots,|\alpha_n q - a_n|) < \frac{1}{Q^{1/n}}\,,\ \ \ 0 < q \le Q\ . $$

====References====
<table> <TR><TD valign="top">[1]</TD> <TD valign="top"> J.W.S. Cassels, "An introduction to diophantine approximation" , Cambridge Univ. Press (1957)</TD></TR> </table>

''V.I. Bernik''

===Dirichlet's unit theorem===
A theorem describing the structure of the multiplicative group of units of an algebraic number field; obtained by P.G.L. Dirichlet [[#References|[1]]]. Each algebraic number field $ K $ of degree $ n $ over the field of rational numbers $ \mathbf Q $ has $ n $ different isomorphisms into the field of complex numbers $ \mathbf C $. If under the isomorphism $ \sigma : K \rightarrow \mathbf C $ the image of the field is contained in the field of real numbers, this isomorphism is said to be real; otherwise it is said to be complex. Each complex isomorphism $ \sigma $ has a complex conjugate isomorphism $ \overline \sigma \; : K \rightarrow \mathbf C $, defined by the equation $ \overline \sigma \; ( \alpha ) = \overline{ {\sigma ( \alpha ) }}\; $, $ \alpha \in K $. In this way the number $ n $ may be represented as $ n = s + 2t $, where $ s $ is the number of real and $ 2t $ is the number of complex isomorphisms of $ K $ into $ \mathbf C $.

Dirichlet's theorem: In an arbitrary [[Order|order]] $ A $ of an algebraic number field $ K $ of degree $ n = s + 2t $ there exist $ r = s + t - 1 $ units $ \epsilon _ {1} \dots \epsilon _ {r} $ such that any unit $ \epsilon \in A $ is uniquely representable as a product $$ \epsilon = \zeta \epsilon _ {1} ^ {s _ {1} } \dots \epsilon _ {r} ^ {s _ {r} } , $$ where $ s _ {1} \dots s _ {r} $ are integers and $ \zeta $ is some root of unity contained in $ A $. The units $ \epsilon _ {1} \dots \epsilon _ {r} $, the existence of which is established by Dirichlet's theorem, are said to be the basic units of the order $ A $. In particular, the basic units of the maximal order $ D $ of the field $ K $, i.e. the ring of integers of $ K $, are usually called basic units of the algebraic number field $ K $.

====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> P.G.L. Dirichlet, "Werke" , '''1''' , Springer (1889)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> Z.I. Borevich, I.R. Shafarevich, "Number theory" , Acad. Press (1966) (Translated from Russian) (German translation: Birkhäuser, 1966)</TD></TR></table>

''S.A. Stepanov''

===Dirichlet's theorem on prime numbers in an arithmetical progression===
Each arithmetical progression whose first term and difference are relatively prime contains an infinite number of prime numbers. It was in fact proved by P.G.L. Dirichlet [[#References|[1]]] that for any given relatively prime numbers $ l , k $, $$ \lim\limits _ {s \rightarrow 1 + 0 } \sum _ { p } \frac{1}{p ^ {s} } \frac{1}{ \mathop{\rm ln} 1 / ( s - 1 ) } = \frac{1}{\phi (k) } , $$ where the summation is taken over all prime numbers $ p $ subject to the condition $ p \equiv l $ ($ \mathop{\rm mod} k $) and $ \phi (k) $ is Euler's function. This relation may be interpreted as the law of uniform distribution of prime numbers over the residue classes $ l $ ($ \mathop{\rm mod} k $), since $$ \lim\limits _ {s \rightarrow 1 + 0 } \sum _ { p } \frac{1}{p ^ {s} } \frac{1}{ \mathop{\rm ln} 1 / ( s - 1 ) } = 1 , $$ where the summation is extended over all prime numbers.

Let $ x > 1 $ be an integer and let $ \pi ( x ; l , k ) $ be the number of prime numbers $ p \leq x $ subject to the condition $ p \equiv l $ ($ \mathop{\rm mod} k $), where $ 0 < l < k $ and $ l $ and $ k $ are relatively prime. Then $$ \pi ( x ; l , k ) = \frac{\int\limits _ { 2 } ^ { x } \frac{d u }{ \mathop{\rm ln} u } }{\phi (k) } + O ( x e ^ {-c \sqrt { \mathop{\rm ln} x } } ) , $$ where the estimate of the remainder is uniform in $ k \leq ( \mathop{\rm ln} x ) ^ {A} $ for any given $ A > 0 $, and $ c = c (A) > 0 $ is a constant which depends only on $ A $ (non-effectively). This is the modern form of Dirichlet's theorem, which immediately indicates the nature of the distribution of the prime numbers $ p \equiv l $ ($ \mathop{\rm mod} k $) in the series of natural numbers. It is believed (the extended Riemann hypothesis) that, for given relatively prime $ l $ and $ k $ and any integer $ x > 1 $, $$ \pi ( x ; l , k ) = \frac{\int\limits _ { 2 } ^ { x } \frac{d u }{ \mathop{\rm ln} \ u } }{\phi (k) } + O ( x ^ {1 / 2 + \epsilon } ) , $$ where $ \epsilon > 0 $ is arbitrary and the constant implied by $ O $ depends on $ k $ and $ \epsilon $.

====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> P.G.L. Dirichlet, "Vorlesungen über Zahlentheorie" , Vieweg (1894)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> K. Prachar, "Primzahlverteilung" , Springer (1957)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> A.A. Karatsuba, "Fundamentals of analytic number theory" , Moscow (1975) (In Russian)</TD></TR></table>

''V.G. Sprindzhuk''

===Dirichlet's theorem on Fourier series===
If a $ 2 \pi $- periodic function $ f $ is piecewise monotone on the segment $ [ - \pi , \pi ] $ and has at most finitely many discontinuity points on it, i.e. if the so-called Dirichlet conditions are satisfied, then its trigonometric Fourier series converges to $ f (x) $ at each continuity point and to $ [ f ( x + 0 ) + f ( x - 0 ) ]/ 2 $ at each discontinuity point. First demonstrated by P.G.L. Dirichlet [[#References|[1]]]. Dirichlet's theorem was generalized by C. Jordan [[#References|[3]]] to functions of bounded variation.

====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> P.G.L. Dirichlet, "Sur la convergence des séries trigonométriques qui servent à représenter une fonction arbitraire entre des limites données" ''J. Math.'' , '''4''' (1829) pp. 157–169</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> P.G.L. Dirichlet, "Werke" , '''1''' , Springer (1889)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> C. Jordan, ''C.R. Acad. Sci.'' , '''92''' (1881) pp. 228–230</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> N.K. [N.K. Bari] Bary, "A treatise on trigonometric series" , Pergamon (1964) (Translated from Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> A. Zygmund, "Trigonometric series" , '''1''' , Cambridge Univ. Press (1988)</TD></TR></table>
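As a quick numerical illustration of the approximation theorem in the first section above, the short Python sketch below (not part of the original article; the function name and the choice $\alpha=\sqrt{2}$, $Q=10$ are only for demonstration) searches for the integers $a, q$ whose existence the theorem guarantees.

```python
from math import sqrt

def dirichlet_approx(alpha, Q):
    """Return integers (a, q) with 0 < q <= Q and |alpha*q - a| < 1/Q.

    Brute-force search; the Dirichlet box principle guarantees a solution exists.
    """
    best = None
    for q in range(1, Q + 1):
        a = round(alpha * q)              # nearest integer to alpha*q
        err = abs(alpha * q - a)
        if best is None or err < best[2]:
            best = (a, q, err)
    a, q, err = best
    assert err < 1.0 / Q                  # guaranteed by the theorem
    return a, q

# Example: alpha = sqrt(2), Q = 10 recovers the classical approximation 7/5,
# with |5*sqrt(2) - 7| ~ 0.071 < 1/10.
print(dirichlet_approx(sqrt(2), 10))      # -> (7, 5)
```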
Dirichlet theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Dirichlet_theorem&oldid=44786
This article was adapted from an original article by T.P. Lukashenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
CommonCrawl
|
On the isomorphism problem of enveloping algebras
Let $\mathfrak{g}$ and $\mathfrak{g}'$ be Lie algebras. It is known that if $U(\mathfrak{g})\cong U(\mathfrak{g}')$ as associative algebras, then it is not necessarily true that $\mathfrak{g}\cong \mathfrak{g}'$ as Lie algebras.
I am looking for examples such that $U(\mathfrak{g})\cong U(\mathfrak{g}')$ as algebras but $\mathfrak{g}\not\cong \mathfrak{g}'$ as Lie algebras (over an algebraically closed field). Moreover, are there examples such that the categories $U(\mathfrak{g})-\text{Mod}$ and $U(\mathfrak{g}')-\text{Mod}$ are not monoidally equivalent?
I'm not very familiar with the isomorphism problem for enveloping algebras, a quick google search only gave me counterexamples in positive characteristic. I'd be very happy with examples in characteristic zero (infinite dimensions are allowed). I'm more into the monoidal stuff and might figure out myself whether the representation categories are monoidally equivalent.
Edit: I'm asking this because I naturally encountered a quantized version of this problem. Obviously the categories $U(\mathfrak{g})-\text{Mod}$ and $U(\mathfrak{g}')-\text{Mod}$ are Morita equivalent but there is more information here. First of all $U(\mathfrak{g})\cong U(\mathfrak{g}')$ as algebras which clearly is stronger but they are also enveloping algebras of Lie algebras, further restricting possibilities. In the quantized version I'm looking at, I suspect the representation rings of both categories to be the same making the difference in the monoidal structure very subtle. So I'm wondering whether anything on this subject is known in the non-quantized world.
rt.representation-theory lie-algebras hopf-algebras monoidal-categories
Mathematician 42
In the recent paper Lie, associative, and commutative quasi-isomorphism, R. Campos, D. Petersen, F. Wierstra, and I settled the question above for nilpotent Lie algebras: if two nilpotent Lie algebras have universal enveloping algebras that are isomorphic as unital associative algebras, then the two Lie algebras also are isomorphic.
In fact, we proved a more general result in the differential graded context:
Theorem B: Let $\mathfrak{g}, \mathfrak{h}$ be two dg Lie algebras. If $U\mathfrak{g}$ and $U\mathfrak{h}$ are quasi-isomorphic as unital associative dg algebras, then the homotopy completions $\mathfrak{g}^{\wedge h}$ and $\mathfrak{h}^{\wedge h}$ are quasi-isomorphic as dg Lie algebras.
This has the statement above as a corollary, since one can show that a Lie algebra that is either strictly positively graded or non-negatively graded and nilpotent is always quasi-isomorphic to its homotopy completion (in the language of the paper, it is homotopy complete). There are other interesting implications of this result in rational homotopy theory.
In my view (my coauthors might disagree) the spirit of the proof is mostly deformation theoretical, but operad theory plays a big supporting role. For those who are interested in the structure of the proof without the technical details, we give a sketch of the arguments in paragraphs 0.27-0.31.
In a previous version of this answer and of the paper, we claimed the more general statement that if two dg Lie algebras have universal enveloping algebras that are quasi-isomorphic as associative dg algebras, then the two dg Lie algebras are themselves quasi-isomorphic. Unfortunately, the proof had a gap that we were not able to fix. This more general statement remains open.
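Added background, well known and not specific to the paper above, which helps explain why the isomorphism problem is delicate: by the Poincaré–Birkhoff–Witt theorem, the associated graded algebra of $U(\mathfrak{g})$ with respect to its standard filtration satisfies
$$\operatorname{gr} U(\mathfrak{g}) \cong S(\mathfrak{g}),$$
a polynomial algebra that depends only on the underlying vector space of $\mathfrak{g}$. So any invariant of $U(\mathfrak{g})$ that factors through the associated graded algebra cannot distinguish two Lie algebras of the same dimension; finer structure has to be used, e.g. the Hopf algebra structure (whose primitive elements recover $\mathfrak{g}$ in characteristic zero) or, as in the question, the monoidal structure on the module category.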
Daniel Robert-Nicoud
$\begingroup$ Would you edit the post so as to make it easier to read and updated (emphasizing what you prove, rather than being the concatenation of a result and a preceding paragraph saying it has a mistake)? You could state what's proved in v3 of your arxiv paper and then say at the end that in a first version of the post/the paper a stronger version was claimed, which is still an open problem? $\endgroup$
– YCor
$\begingroup$ @YCor Good point, thank you. Done. $\endgroup$
– Daniel Robert-Nicoud
FWIW, a ten year old article states: "We stress that, in spite of all this, the characteristic zero case of the isomorphism problem remains entirely open."
(https://link.springer.com/article/10.1007/s10468-007-9083-0)
Vladimir Dotsenko
$\begingroup$ What a shame, this probably also explains why I was completely stuck on the quantized problem I encountered. The quantized version shouldn't be too much different as the representation theory of $\mathfrak{g}$ and $U_v(\mathfrak{g})$ are often very similar. Anyway, in the near future we will publish an article on something completely different where all of the sudden this question pops up. Maybe smarter people can use our example to book some progress. $\endgroup$
– Mathematician 42
$\begingroup$ Thanks btw, I encountered this paper but completely missed the sentence saying that the characteristic zero case is still open. Thank you for spotting this. $\endgroup$
|
CommonCrawl
|
Infinite-dimensional representation of a Lie group
A representation of a Lie group (cf. Representation of a topological group) in an infinite-dimensional vector space. The theory of representations of Lie groups is part of the general theory of representations of topological groups. The specific features of Lie groups make it possible to employ analytical tools in this theory (in particular, infinitesimal methods), and also to considerably enlarge the class of "natural" group algebras (function algebras with respect to convolution, cf. Group algebra), the study of which connects this theory with abstract harmonic analysis, i.e. with part of the general theory of topological algebras (cf. Harmonic analysis, abstract; Topological algebra).
Let $ G $ be a Lie group. A representation of $ G $ in a general sense is any homomorphism $ G \rightarrow \mathop{\rm GL} ( E) $, where GL $ ( E) $ is the group of all invertible linear transformations of the vector space $ E $. If $ E $ is a topological vector space, the homomorphisms which are usually considered are those with values in the algebra $ C ( E) $ of all continuous linear transformations of $ E $ or in the algebra $ S( E) $ of all weakly-continuous transformations of $ E $. The algebras $ C( E) $ and $ S( E) $ have one of the standard topologies (for example, the weak or the strong). A representation $ \phi $ is said to be continuous (separately continuous) if the vector function $ \phi ( g) \xi $ is continuous (separately continuous) on $ G \times E $. If $ E $ is a quasi-complete barrelled space, any separately continuous representation is continuous. A continuous representation $ \phi $ is called differentiable (analytic) if the operator function $ \phi ( g) $ is differentiable (analytic) on $ G $. The dimension of a representation $ \phi $ is the dimension of $ E $. The most important example of a representation of a group $ G $ is its regular representation $ \phi ( g) f( x) = f( xg) $, $ x, g \in G $, which can be defined on some class of functions $ f $ on $ G $. If $ G $ is a Lie group, its regular representation is continuous in $ C( G) $ and in $ L _ {p} ( G) $( where $ L _ {p} ( G) $ is defined with respect to the Haar measure on $ G $), and is differentiable in $ C ^ \infty ( G) $( with respect to the standard topology in $ C ^ \infty ( G) $: the topology of compact convergence). Every continuous finite-dimensional representation of a group $ G $ is analytic. If $ G $ is a complex Lie group, it is natural to consider its complex-analytic (holomorphic) representations as well. As a rule, only continuous representations are considered in the theory of representations of Lie groups, and the continuity condition is not explicitly stipulated. If the group $ G $ is compact, all its irreducible (continuous) representations are finite-dimensional. Similarly, if $ G $ is a semi-simple complex Lie group, all its irreducible holomorphic representations are finite-dimensional.
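The simplest compact example, included here only for orientation (it is the special case of the Peter–Weyl theorem discussed below): for $ G = U(1) = \{ e ^ {i \theta } \} $ the regular representation on $ L _ {2} ( G) $ decomposes into the one-dimensional irreducible representations given by the characters
$$ \chi _ {n} ( e ^ {i \theta } ) = e ^ {in \theta } ,\ \ n \in \mathbf Z , $$
and the expansion of a function with respect to these matrix entries is its ordinary Fourier series.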
Relation to representations of group algebras.
The most important group algebras for Lie groups are the algebra $ L _ {1} ( G) $; the algebra $ C ^ {*} ( G) $, which is the completion of $ L _ {1} ( G) $ in the smallest regular norm (cf. Algebra of functions); $ C _ {0} ^ \infty ( G) $— the algebra of all infinitely-differentiable functions on $ G $ with compact support; $ M( G) $— the algebra of all complex Radon measures with compact support on $ G $; $ D( G) $— the algebra of all generalized functions (Schwartz distributions) on $ G $ with compact support; and also, for a complex Lie group, the algebra $ A( G) $ of all analytic functionals over $ G $. The linear spaces $ M( G) $, $ D( G) $, $ A( G) $ are dual to, respectively, $ C( G) $, $ C ^ \infty ( G) $, $ H( G) $, where $ H( G) $ is the set of all holomorphic functions on $ G $ (with the topology of compact convergence). All these algebras have a natural topology. In particular, $ L _ {1} ( G) $ is a Banach algebra. The product (convolution) of two elements $ a, b \in A $, where $ A $ is one of the group algebras indicated above, is defined by the equality
$$ ab ( g) = \int\limits a ( gh ^ {-} 1 ) b ( h) dh $$
with respect to a right-invariant measure on $ G $, with a natural extension of this operation to the class of generalized functions. The integral formula
$$ \phi ( a) = \int\limits a ( g) \phi ( g) dg,\ \ a \in A , $$
establishes a natural connection between the representations of the group $ G $ and the representations of the algebra $ A $( if the integral is correctly defined): If the integral is weakly convergent and defines an operator $ \phi ( a) \in S( E) $ for each $ a \in A $, then the mapping $ a \rightarrow \phi ( a) $ is a homomorphism. One then says that the representation $ \phi ( g) $ of the group $ G $ is extended to the representation $ \phi ( a) $ of the algebra $ A $, or that it is an $ A $- representation. Conversely, all weakly-continuous non-degenerate representations of the algebra $ A $ are determined, in accordance with the formula above, by some representation of the group $ G $( weakly continuous for $ A = M( G) $, weakly differentiable for $ A = D( G) $, weakly analytic for $ A = A( G) $). This correspondence preserves all natural relations between the representations, such as topological irreducibility or equivalence. If $ G $ is a unimodular group, its unitary representations (in Hilbert spaces, cf. Unitary representation) correspond to symmetric representations of the algebra $ L _ {1} ( G) $ with respect to the involution in $ L _ {1} ( G) $( cf. Group algebra; Involution representation). If $ E $ is a sequentially complete, locally convex Hausdorff space, any continuous representation of a group $ G $ in $ E $ is an $ M( G) $- representation. If, moreover, the representation of the group $ G $ is differentiable, it is a $ D ( G) $- representation. In particular, if $ E $ is a reflexive or a quasi-complete barrelled space, any separately-continuous representation $ \phi ( g) $ is an $ M( G) $- representation, and $ \phi ( a) \in C( E) $ for all $ a \in M( G) $.
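To spell out why the correspondence $ a \rightarrow \phi ( a) $ is multiplicative (a routine verification, included here for convenience): using the representation property $ \phi ( g) \phi ( h) = \phi ( gh) $ and the right-invariance of the measure (substituting $ k = gh $ at fixed $ h $) one finds
$$ \phi ( a) \phi ( b) = \int\limits \int\limits a ( g) b ( h) \phi ( gh) \, dg \, dh = \int\limits \int\limits a ( k h ^ {-1} ) b ( h) \phi ( k) \, dk \, dh = \int\limits ( ab) ( k) \phi ( k) \, dk = \phi ( ab) , $$
in agreement with the definition of the convolution $ ab $ given above.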
The infinitesimal method.
If a representation $ \phi ( g) $ is differentiable, it is infinitely often differentiable, and the space $ E $ has the structure of a $ \mathfrak g $- module, where $ \mathfrak g $ is the Lie algebra of the group $ G $, by considering the Lie infinitesimal operators:
$$ \phi ( a) = \ { \frac{d}{dt} } \phi ( e ^ {ta} ) _ {t = 0 } ,\ \ a \in \mathfrak g . $$
The operators $ \phi ( a) $ form a representation of the algebra $ \mathfrak g $, called the differential representation $ \phi ( g) $. A vector $ \xi \in E $ is said to be differentiable (with respect to $ \phi ( g) $) if the vector function $ \phi ( g) \xi $ is differentiable on $ G $. A vector $ \xi \in E $ is said to be analytic if $ \phi ( g) \xi $ is an analytic function in a neighbourhood of the unit $ e \in G $. If $ \phi ( g) $ is a $ C _ {0} ^ \infty ( G) $- representation, the space $ V( E) $ of all infinitely-differentiable vectors is everywhere-dense in $ E $. In particular, this is true for all continuous representations in a Banach space; moreover, in this case [4] the space $ W( E) $ of analytic vectors is everywhere-dense in $ E $. The differential representation $ \phi ( g) $ in $ V( E) $ may be reducible, even if $ \phi ( g) $ is topologically irreducible in $ E $. To two equivalent representations of $ G $ correspond equivalent differential representations in $ V( E) $( $ W( E) $); the converse is, generally speaking, not true. For unitary representations in Hilbert spaces $ E $, $ H $ it follows from the equivalence of differential representations in $ W( E) $, $ W( H) $ that the representations are equivalent [7]. In the finite-dimensional case a representation of a connected Lie group can be uniquely reproduced from its differential representation. A representation of the algebra $ \mathfrak g $ is said to be integrable ( $ G $- integrable) if it coincides with a differential representation of the group $ G $ in a subspace which is everywhere-dense in the representation space. Integrability criteria are now (1988) known only in isolated cases [4]. If $ G $ is simply connected, all finite-dimensional representations of the algebra $ \mathfrak g $ are $ G $- integrable.
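A minimal illustration of the infinitesimal method: for $ G = \mathbf R $ acting on $ E = C ^ \infty ( \mathbf R ) $ by translations, $ \phi ( s) f ( x) = f ( x + s) $, every vector is differentiable and
$$ \phi ( a) f = \left . \frac{d}{dt} \phi ( e ^ {ta} ) f \right | _ {t = 0 } = a \frac{df}{dx} ,\ \ a \in \mathfrak g = \mathbf R , $$
so the differential representation is generated by the single operator $ d/dx $.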
Irreducible representations.
One of the main tasks of the theory of representations is the classification of all irreducible representations (cf. Irreducible representation) of a given group $ G $, defined up to an equivalence, using a suitable definition of the concepts of irreducibility and equivalence. Thus, the following two problems are of interest: 1) the description of the set $ \widehat{G} $ of all unitary equivalence classes of irreducible unitary representations of a group $ G $; and 2) the description of the set $ \widetilde{G} $ of all Fell equivalence classes [7] of totally-irreducible representations (also called completely-irreducible representations) of a group $ G $. For semi-simple Lie groups with a finite centre, Fell equivalence is equivalent to Naimark equivalence [7], and the natural imbedding $ \widehat{G} \rightarrow \widetilde{G} $ holds. The sets $ \widehat{G} $, $ \widetilde{G} $ have a natural topology, and their topologies are not necessarily Hausdorff [5]. If $ G $ is a compact Lie group, then $ \widetilde{G} = \widehat{G} $ is a discrete space. The description of the set $ \widehat{G} $ in such a case is due to E. Cartan and H. Weyl. The linear envelope $ \gamma ( G) $ of matrix entries of the group $ G $( i.e. of matrix entries of the representations $ \phi \in \widehat{G} $) here forms a subalgebra in $ C _ {0} ^ \infty ( G) $( the algebra of spherical functions) which is everywhere-dense in $ C( G) $ and in $ C ^ \infty ( G) $. The matrix entries form a basis in $ C ^ \infty ( G) $. If the matrices of all representations $ \phi \in \widehat{G} $ are defined in a basis with respect to which they are unitary, the corresponding matrix entries form an orthogonal basis in $ L _ {2} ( G) $( the Peter–Weyl theorem). If the group $ G $ is not compact, its irreducible representations are usually infinite-dimensional. A method for constructing such representations analogous to the classical matrix groups was proposed by I.M. Gel'fand and M.A. Naimark [1], and became the starting point of an intensive development of the theory of unitary infinite-dimensional representations. G.W. Mackey's [5] theory of induced representations is a generalization of this method to arbitrary Lie groups. The general theory of non-unitary representations in locally convex vector spaces, which began to develop in the 1950's, is based to a great extent on the theory of topological vector spaces and on the theory of generalized functions. A detailed description of $ \widetilde{G} $( $ \widehat{G} $) is known (1988) for isolated classes of Lie groups (semi-simple complex, nilpotent and certain solvable Lie groups, as well as for their semi-direct products).
Let $ G $ be a semi-simple Lie group with a finite centre, let $ \phi $ be an $ M( G) $- representation in the space $ E $ and let $ K $ be a compact subgroup in $ G $. A vector $ \xi \in E $ is said to be $ K $- finite if its cyclic envelope is finite-dimensional with respect to $ K $. The subspace $ V $ of all $ K $- finite vectors is everywhere-dense in $ E $ and is the direct (algebraic) sum of subspaces $ V ^ \lambda $, $ \lambda \in \widehat{K} $, where $ V ^ \lambda $ is the maximal subspace in $ V $ in which the representation of $ K $ is a multiple of $ \lambda $. A representation $ \phi $ is said to be $ K $- finite if $ { \mathop{\rm dim} } V ^ \lambda < \infty $ for all $ \lambda $. A subgroup $ K $ is said to be massive (large or rich) if every totally-irreducible representation of $ G $ is $ K $- finite. The following fact is of paramount importance in the theory of representations: If $ K $ is a maximal compact subgroup in $ G $, then $ K $ is massive. If the vectors of $ V $ are differentiable, $ V $ is invariant with respect to the differential $ d \phi $ of the representation $ \phi $. The representation $ \phi $ is said to be normal if it is $ K $- finite and if the vectors of $ V $ are weakly analytic. If $ \phi $ is normal, there exists a one-to-one mapping (defined by restriction to $ V $) between closed submodules of the $ G $- module $ \phi $ and submodules of the $ \mathfrak g $- module $ \phi _ {0} = d \phi \mid _ {V} $, where $ \mathfrak g $ is the Lie algebra of the group $ G $[7]. Thus, the study of normal representations can be algebraized by the infinitesimal method. An example of a normal representation of the group $ G $ is its principal series representation $ e( \alpha ) $. This representation is totally irreducible for points $ \alpha $ in general position. In the general case $ e( \alpha ) $ can be decomposed into a finite composition series the factors of which are totally irreducible. Any quasi-simple irreducible representation of the group $ G $ in a Banach space is infinitesimally equivalent to one of the factors of $ e( \alpha ) $ for a given $ \alpha $. This is also true for totally-irreducible representations of $ G $ in quasi-complete locally convex spaces. If $ G $ is real or complex, it is sufficient to consider subrepresentations of $ e( \alpha ) $ instead of its factors [7]. In the simplest case of $ G = \mathop{\rm SL} ( 2, \mathbf C ) $, the representation $ e( \alpha ) $ is defined by a pair of complex numbers $ p, q $ with integral difference $ p - q $, and operates in accordance with the right-shift formula $ \phi ( g) f( x) = f( xg) $, $ x = ( x _ {1} , x _ {2} ) $, $ g \in G $, on the space of all functions $ f \in C ^ \infty ( \mathbf C ^ {2} \setminus \{ 0 \} ) $ which satisfy the homogeneity condition $ f( \lambda x _ {1} , \lambda x _ {2} ) = \lambda ^ {p- 1 } {\overline \lambda \; } {} ^ {q- 1 } f( x _ {1} , x _ {2} ) $. If $ p $ and $ q $ are positive integers, $ e( \alpha ) $ contains the irreducible finite-dimensional subrepresentation $ d( \alpha ) $( in the class of polynomials in $ x _ {1} , x _ {2} $), the factors of which are totally irreducible. If $ p $ and $ q $ are negative integers, $ e( \alpha ) $ has a dual structure. In all other cases the module $ e ( \alpha ) $ is totally irreducible. In such a case $ \widetilde{G} $ is in one-to-one correspondence with the set of pairs $ ( p, q) $, where $ p - q $ is an integer, factorized with respect to the relation $ ( p, q) \sim (- p, - q) $. 
The subset $ \widehat{G} $ consists of the representations of the basis series ( $ ( p+ q) $ is purely imaginary) (cf. Series of representations), the complementary series $ ( 0 \leq p = q < 1) $ and the trivial (unique) representation $ \delta _ {0} $, which results if $ p = q = 1 $. Let $ G $ be a semi-simple connected complex Lie group, let $ B $ be its maximal solvable (Borel) subgroup, let $ M $ be a maximal torus, let $ H = MA $ be a Cartan subgroup, and let $ \alpha $ be a character of the group $ H $( extended to $ B $). Then $ \widetilde{G} $ is in one-to-one correspondence with $ A/W $, where $ A $ is the set of all characters $ \alpha $ and $ W = W _ {1} $ is the Weyl group of the complex algebra $ \mathfrak g $[7]. For characters in "general position" the representation $ e ( \alpha ) $ is totally irreducible. The description of the set $ \widehat{G} $ is reduced to the study of the positive definiteness of certain bilinear forms, but the ultimate description is as yet (1988) unknown. Of special interest to real groups are the so-called discrete series (of representations) (direct sums in $ L _ {2} ( G) $). All irreducible representations of the discrete series are classified [3] by describing the characters of these representations.
For nilpotent connected Lie groups [8] the set $ \widehat{G} $ is equivalent to $ \mathfrak g ^ \prime /G $, where $ \mathfrak g ^ \prime $ is the linear space dual to $ \mathfrak g $, and the action of $ G $ in $ \mathfrak g ^ \prime $ is conjugate with the adjoint representation on $ \mathfrak g $[9]. The correspondence is established by the orbit method [8]. A subalgebra $ \mathfrak h \subset \mathfrak g $ is called the polarization of an element $ f \in \mathfrak g ^ \prime $ if $ f $ annihilates $ [ \mathfrak h, \mathfrak h ] $ and if
$$ \mathop{\rm dim} \mathfrak h = \ \mathop{\rm dim} \mathfrak g - { \frac{1}{2} } \mathop{\rm dim} \Omega , $$
where $ \Omega $ is the orbit of $ f $ with respect to $ G $( all orbits are even-dimensional). If $ H $ is the corresponding analytic subgroup in $ G $ and $ \alpha = e ^ {f} $ is a character of $ H $, the representation $ u( \alpha ) $ corresponding to $ f $ is induced by $ \alpha $. Here, $ u( \alpha _ {1} ) $ is equivalent to $ u( \alpha _ {2} ) $ if and only if the corresponding functionals $ f _ {1} , f _ {2} $ lie on the same orbit $ \Omega $. In the simple case of the group $ G = Z( 3) $ of all unipotent matrices with respect to a fixed basis in $ \mathbf C ^ {3} $, the orbits of general position in $ \mathbf C ^ {3} = \{ ( \lambda , \mu , \nu ) \} $ are the two-dimensional planes $ \lambda = \textrm{ const } \neq 0 $ and the points $ ( \mu , \nu ) $ in the plane $ \lambda = 0 $. To each orbit in general position corresponds an irreducible representation $ u( \alpha ) $ of the group $ G $, determined by the formula
$$ u ( \alpha , g) f ( t) = a ( t, g) f ( t, g),\ \ - \infty < t < \infty , $$
in the Hilbert space $ E = L _ {2} (- \infty , \infty ) $. The infinitesimal operators of this representation coincide with the operators $ d/dt $, $ i \lambda t $, $ i \lambda I $, where $ I $ is the identity operator on $ E $. This result is equivalent to the Stone–von Neumann theorem on self-adjoint operators $ P $, $ Q $ with the commutator relationship $ [ P, Q] = i \lambda I $. To each point $ ( \mu , \nu ) $ corresponds a one-dimensional representation (a character) of $ Z( 3) $. The set $ \widetilde{G} $ is then described in an analogous way, with values of the parameters $ \lambda , \mu , \nu $ in the complex domain. This method of orbits can be naturally generalized to solvable connected Lie groups and even to arbitrary Lie groups; in the general case the orbits to be considered are orbits in $ \mathfrak g _ {\mathbf C } ^ \prime $( where $ {\mathfrak g } _ {\mathbf C } ^ \prime $ is the complexification of $ \mathfrak g ^ \prime $), which satisfy certain integer conditions [8].
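The commutation relation quoted in the preceding paragraph is checked directly on smooth vectors: for a smooth function $ f ( t) $,
$$ \left [ \frac{d}{dt} ,\ i \lambda t \right ] f = \frac{d}{dt} ( i \lambda t f ( t) ) - i \lambda t \frac{df}{dt} = i \lambda f ( t) , $$
so the infinitesimal operators $ d/dt $ and $ i \lambda t $ indeed satisfy the relation $ [ d/dt , i \lambda t ] = i \lambda I $ underlying the Stone–von Neumann theorem mentioned above.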
The study of the general case is reduced, to a certain extent, to the two cases considered above by means of the theory of induced representations [5], which permits one to describe the irreducible unitary representations of a semi-direct product $ G = HN $ with normal subgroup $ N $ in terms of irreducible representations of $ N $ and of certain subgroups of the group $ H $( in view of the Levi–Mal'tsev theorem, cf. Levi–Mal'tsev decomposition). In practice, this method is only effective if the radical is commutative. Another method for studying $ \widetilde{G} $( and also $ \widehat{G} $) is the description of the characters of the irreducible unitary representations of $ G $; the set of such characters is in one-to-one correspondence with $ \widehat{G} $. The validity of the general formula for characters, proposed by A.A. Kirillov [8], has been verified (1988) only for a few special classes of Lie groups.
Harmonic analysis of functions on $ G $.
For a compact Lie group, the harmonic analysis is reduced to the expansion of functions $ f( x) $, $ x \in G $, into generalized Fourier series by the matrix entries of the group $ G $( the Peter–Weyl theorem for $ L _ {2} ( G) $ and its analogues for other function classes). For non-compact Lie groups the foundations of harmonic analysis were laid in [1] by the introduction of the generalized Fourier transform
$$ F ( \alpha ) = \int\limits f ( x) e ( \alpha , x) dx, $$
where $ e( \alpha , x) $ is the operator of the elementary representation $ e( \alpha ) $ and $ dx $ is the Haar measure on $ G $, and by the introduction of the inversion formula (in analogy to the Plancherel formula) for $ L _ {2} ( G) $ for the case of classical matrix groups $ G $. This result was generalized to locally compact unimodular groups (the abstract Plancherel theorem). The Fourier transform converts convolution of functions on the group to multiplication of their (operator) Fourier images $ F( \alpha ) $ and is accordingly a very important tool in the study of group algebras. If $ G $ is a semi-simple Lie group, the operators $ F( \alpha ) $ satisfy structure relations of the form
$$ A _ {s} ( \alpha ) F ( \alpha ) = F ( s \alpha ) A _ {s} ( \alpha ), $$
$ s \in W _ {i} $, $ i = 1, 2 $, where $ A _ {s} ( \alpha ) $ are intertwining operators, $ W _ {1} $ is the Weyl group of the symmetric space $ G/K $( $ K $ is a maximal compact subgroup in $ G $), and $ W _ {2} $ is the Weyl group of the algebra $ \mathfrak g _ {\mathbf C} $, where $ \mathfrak g _ {\mathbf C} $ is the complexification of the Lie algebra of the group $ G $. If the functions $ f( x) $ have compact support, the operator functions $ F( \alpha ) $ are entire functions of the complex parameter $ \alpha $. For the group algebras $ C _ {0} ^ \infty ( G) $, $ D( G) $, where $ G $ is a semi-simple connected complex Lie group, analogues of the classical Paley–Wiener theorem [7] are known; these are descriptions of the images of these algebras under Fourier transformation. These results permit one to study the structure of a group algebra, its ideals and representations; in particular, they are used in the classification of irreducible representations of a group $ G $. Analogues of the Paley–Wiener theorem are also known for certain nilpotent (metabelian) Lie groups and for groups of motions of a Euclidean space.
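For orientation, in the simplest case $ G = \mathbf R $ the elementary representations are the characters $ e ( \omega , x) = e ^ {i \omega x } $, and the generalized Fourier transform together with the inversion formula reduces to the classical Fourier–Plancherel theory:
$$ F ( \omega ) = \int\limits f ( x) e ^ {i \omega x } \, dx ,\ \ \int\limits | f ( x) | ^ {2} \, dx = \frac{1}{2 \pi } \int\limits | F ( \omega ) | ^ {2} \, d \omega , $$
which is the prototype of the abstract Plancherel theorem mentioned above.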
Problems of spectral analysis.
For unitary representations of Lie groups a general procedure is known for the decomposition of the representation into a direct integral of irreducible representations [5]. The problem consists of finding analytical methods which would realize this decomposition for specific classes of groups and their representations, and in the establishment of uniqueness criteria of such a decomposition. For nilpotent Lie groups a method is known for the restriction of an irreducible representation $ \phi $ of a group $ G $ to a subgroup $ G _ {0} $( cf. Orbit method). For non-unitary representations, the task itself must be formulated more precisely, since the property of total reducibility lacks in the class of such representations. In several cases, not the group $ G $ itself is considered, but rather one of its group algebras $ A $, and the problem of spectral analysis is treated as the study of two-sided ideals of the algebra $ A $. The problem of spectral analysis (and spectral synthesis) is also closely connected with the problem of approximation of functions on the group $ G $ or on the homogeneous space $ G/H $, where $ H $ is a subgroup, by linear combinations of matrix entries of the group $ G $.
Applications to mathematical physics.
Cartan was the first to note the connection between the theory of representations of Lie groups and the special functions of mathematical physics. It was subsequently established that the principal classes of functions are closely connected with the representations of classical matrix groups [10]. In fact, the existence of this connection throws light on fundamental problems in the theory of special functions: the properties of completeness and orthogonality, differential and recurrence relations, addition theorems, etc., and also makes it possible to detect new relationships and classes of functions. All these functions are matrix entries of classical groups or their modifications (characters, spherical functions). The theory of expansion with respect to these functions forms part of the general harmonic analysis on a homogeneous space $ G/H $. The fundamental role played by the theory of Lie groups in mathematical physics, particularly in quantum mechanics and quantum field theory, is due to the presence of a group of symmetries (at least approximately) in the fundamental equations of this theory. Classical examples of such symmetries include Einstein's relativity principle (with respect to the Lorentz group), the connection between Mendeleev's table and the representations of the rotation group, the theory of isotopic spin, unitary symmetry of elementary particles, etc. The connection with theoretical physics had a stimulating effect on the development of the general theory of representations of Lie groups.
[1] I.M. Gel'fand, M.A. Naimark, "Unitäre Darstellungen der klassischen Gruppen" , Akademie Verlag (1957) (Translated from Russian)
[2] N. Bourbaki, "Elements of mathematics. Integration" , Addison-Wesley (1975) pp. Chapt.6;7;8 (Translated from French) MR0583191 Zbl 1116.28002 Zbl 1106.46005 Zbl 1106.46006 Zbl 1182.28002 Zbl 1182.28001 Zbl 1095.28002 Zbl 1095.28001 Zbl 0156.06001
[3] A. Borel, "Repŕesentations de groupes localement compacts" , Springer (1972) MR0414779 Zbl 0242.22007
[4] E. Nelson, "Analytic vectors" Ann. of Math. , 70 (1959) pp. 572–615 MR0107176 Zbl 0091.10704
[5] G.W. Mackey, "Infinite-dimensional group representations" Bull. Amer. Math. Soc. , 69 (1963) pp. 628–686 MR0153784 Zbl 0136.11502
[6] M.A. Naimark, "Infinite-dimensional representations of groups and related problems" Itogi Nauk. Ser. Mat. : 2 (1964) pp. 38–82 (In Russian)
[7] D.P. Zhelobenko, "Harmonic analysis of functions on semi-simple complex Lie groups" , Moscow (1974) (In Russian)
[8] A.A. Kirillov, "Elements of the theory of representations" , Springer (1976) (Translated from Russian) MR0412321 Zbl 0342.22001
[9] G. Warner, "Harmonic analysis on semi-simple Lie groups" , 1 , Springer (1972) MR0499000 MR0498999 Zbl 0265.22021 Zbl 0265.22020
[10] N.Ya. Vilenkin, "Special functions and the theory of group representations" , Amer. Math. Soc. (1968) (Translated from Russian) MR0229863 Zbl 0172.18404
The notions of a differentiable or analytic representation are commonly related to the strong topology [9].
The algebra $ D ( G) $( of generalized functions on $ G $ with compact support) is usually denoted by $ E ^ \prime ( G) $ in the West. The notation $ D ( G) $, if used, is then a synonym for $ C _ {0} ^ \infty ( G) $.
Recently (1986), $ \widehat{G} $ has been determined for $ G = \mathop{\rm SL} ( n , K ) $, where $ K $ is the field of real or complex numbers or the skew-field of quaternions (D.A. Vogan), for $ G $ a complex simple Lie group of real rank 2 (M. Duflo) and for $ G $ a split-rank or semi-simple real Lie group (Baldoni–Silva–Barbasch). For a survey of the current state-of-affairs see [a3], [a5].
An analogue of the Paley–Wiener theorem is also known for real reductive Lie groups (cf. [a7], [a10]).
[a1] J. Dixmier, "$ C ^ {*} $-algebras" , North-Holland (1977) (Translated from French) MR0498740 MR0458185 Zbl 0372.46058 Zbl 0346.17010 Zbl 0339.17007
[a2] Harish-Chandra, "Collected papers" , 1–4 , Springer (1984) Zbl 0652.01036 Zbl 0561.01030 Zbl 0561.01029 Zbl 0546.01015 Zbl 0546.01014 Zbl 0546.01013 Zbl 0541.01013 Zbl 0527.01020 Zbl 0527.01019
[a3] D.A. Vogan, "Representations of real reductive Lie groups" , Birkhäuser (1981) MR0632407 Zbl 0469.22012
[a4] W. Casselman, D. Miličić, "Asymptotic behaviour of matrix coefficients of admissible representations" Duke. Math. J. , 49 (1982) pp. 869–930
[a5] A.W. Knapp, B. Speh, "Status of classification of irreducible unitary representations" F. Ricci (ed.) G. Weiss (ed.) , Harmonic analysis , Lect. notes in math. , 908 , Springer (1982) pp. 1–38 MR0654177 Zbl 0496.22018
[a6] M. Duflo, "Construction de représentations unitaires d'un groupe de Lie" , Harmonic analysis and group representations , C.I.M.E. & Liguousi (1982) MR0777341
[a7] J. Arthur, "A Paley–Wiener theorem for real reductive groups" Acta. Math. , 150 (1983) pp. 1–89 MR0697608 MR0733803 Zbl 0533.43005 Zbl 0514.22006
[a8] W. Rossman, "Kirillov's character formula for reductive Lie groups" Invent. Math. , 48 (1978) pp. 207–220
[a9] M. Duflo, G. Heckman, M. Vergne, "Projection d'orbites, formule de Kirillov et formule de Blattner" Mém. Soc. Math. France Nouvelle Série , 15 (1985) pp. 65–128 MR0789081
[a10] P. Delorme, "Théorème de type Paley–Wiener pour les groupes de Lie semi-simple réels avec une seule classe de conjugaison de sous-groupes de Cartan" J. Funct. Anal. , 47 (1982) pp. 26–63
Infinite-dimensional representation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Infinite-dimensional_representation&oldid=47340
This article was adapted from an original article by D.P. Zhelobenko, M.A. Naimark (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
CommonCrawl
|
Partition function for a particle in a box
Partition function for cases where classical, Bose and Fermi particles are placed into these energy levels. It is a function of temperature and other parameters, such as the volume enclosing a gas. Two-dimensional "particle-in-a-box" problems in quantum mechanics, where $E(p) = \frac{1}{2m}p^2$ and $\psi_p(x) = \frac{1}{\sqrt{h}}\exp\left(\frac{i}{\hbar}px\right)$ refer familiarly to the standard quantum mechanics of a free particle. In quantum mechanics, the particle in a box model (also known as the infinite potential well or the infinite square well) describes a particle free to move in a small space surrounded by impenetrable barriers. This means we can treat this gas as ... Calculate and plot the heat capacity $C_V$ for this system. Rotation: the rotational partition function is reduced by a factor of 4. This result holds in general for distinguishable localized particles. Suppose we have a thermodynamically large system that is in constant thermal contact with the environment, which has temperature $T$, with both the volume of the system and the number of constituent particles fixed. This kind of system is called a canonical ensemble. Let us label the exact states (microstates) that the system can occupy by $j$ ($j = 1, 2, 3, \ldots$).
to account for the permutations of the $N$ particles. As a simple example, we will solve the 1D particle-in-a-box problem.
The partition function for this system is $Z = \exp(N m^2 B^2 \beta^2 / 2)$. Find the average energy for this system. The vibrational partition function is ... The ideal gas: the $N$-particle partition function for indistinguishable particles. The partition function itself is counting the number of these thermal wavelengths that we can fit into volume $V$. $Z_1$ is the partition function for a single particle, and $Z_{3D} = (Z_{1D})^3$. Consider first the simplest case, of two particles and two energy levels. We write the partition function for $N$ distinguishable particles as $Z = \left[ V \left( \frac{mT}{2\pi\hbar^2} \right)^{3/2} \right]^N$ and are in a position to employ our generic thermodynamical algorithm. Fig. 15B.4 shows schematically how $p_i$ varies with temperature. Now, if a particle is moving with a velocity $v$, the momentum is $p = mv$ and hence $\lambda = h/mv$.
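The single-particle results quoted in these fragments follow from the standard sum-to-integral step. Assuming the particle-in-a-box levels $\epsilon_n = n^2 h^2 / 8 m L^2$ with unit degeneracy that appear elsewhere in these notes,
$$ Z_{1D} = \sum_{n=1}^{\infty} e^{-\beta n^2 h^2 / 8 m L^2} \;\approx\; \int_0^{\infty} e^{-\beta n^2 h^2 / 8 m L^2}\, dn = \left( \frac{2\pi m k_B T}{h^2} \right)^{1/2} L = \frac{L}{\Lambda}, \qquad \Lambda = \frac{h}{\sqrt{2\pi m k_B T}}, $$
valid when $k_B T$ is much larger than the level spacing; cubing gives $Z_{3D} = V\,(2\pi m k_B T / h^2)^{3/2}$, i.e. the number of thermal volumes $\Lambda^3$ that fit into $V$.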
(a) What is the partition function of this system if the box contains only one particle? Examples: a. Schottky two-state model; b. Curie's law of paramagnetism; c. quantum mechanical particle in a box; d. rotational partition function. Many-particle systems are characterized by a huge number of degrees of freedom. (b) What is the partition function of this system if the box contains two distinguishable particles? The subscript "ppb" stands for "point particle in a box". The lower limit of the integration is now $v_0 = \ldots$ The partition function $Z$ is an important quantity that encodes the statistical properties of a system in thermodynamic equilibrium. Look now to the classical mechanics of a confined free particle. For such a system there exist multiple dynamical paths $(x,t) \leftarrow (y,0)$, which is to say: the action functional $S[\text{path}]$ ...
The particle in a box is a staple of entry-level quantum mechanics classes because it provides a meaningful contrast between the classical and quantum descriptions. The translational energy levels available to a molecule are given by the particle in a 3-dimensional box problem from quantum mechanics.
The partition function for particle in a box is Q = X n=1 gI ne n(6) Here the energy of a particle is n=n 2h2 8mL2. For example, such a particle could be approximated by an atom (with widely spaced electronic energy levels) adsorbed on the surface of a catalyst: Calculate the . Before reading this section, you should read over the derivation of which held for the paramagnet, where all particles were distinguishable (by their position in the lattice).. 53-61 Ensemble partition functions: Atkins Ch 53-61 Ensemble partition functions: Atkins Ch. PARTITION FUNCTION. 31:12 - Particle in a Box Partition Function 32:36 - Particle in a Box Partition Function, Slide 2 35:20 - Particle in a Box Partition Function, Slide 3 38:12 - Harmonic Oscillator Partition Function 43:08 - Partition Function Example 1. rotational partition function. Here is the thermal de Broglie length part. Larger the value of q, larger the . Partition function of 1-, 2-, and 3-D monatomic ideal gas: A simple and comprehensive review A molecule inside a cubic box of length L has the translational energy levels given by Etr = h2 (nx2 + ny2 + nz2) . [concept:accessible_states] This can be easily seen when considering and . Apr 8, 2018 #3 FranciscoSili 8 0 TSny said: I think your work looks good. integral is tricky because the sum is dominated by the lowest histogram box. Note 2 B h mk T is called the thermal wavelength. We take gI= 1 and gn= 1 for 1 D. Assumption of continuity in energy levels leads to the re- placement of summation by integration and then the partition function becomes Q = Z n=1 endn Using the approximation R 0 R 1 The potential can be written mathematically as; f s d e 0 e V Since the wavefunction should be well behaved, so, it must vanish everywhere outside the box. The potential is zero inside the cube of side and infinite outside. . Upload your study docs or become a Course Hero member to access this document Continue to access Term Fall Professor NoProfessor Tags . However, in essentially all cases a complete knowledge of all quantum or classical states is neither possible nor useful and necessary. One dimensional and in nite range ising models. Energy quantization is a consequence of the boundary conditions. Then the number of ways to put N 2 particles in box 2 is given by a similar formula with N!N N 1 (there are only N N 1 particles after N 1 particles have been put in box 1) and N 1!N 2:These numbers of ways should multiply. Canonical partition function Definition. As discussed in section 26.9, the canonical partition function for a single high-temperature nonrelativistic pointlike particle in a box is: ( 26.1 ) where V is the volume of the container. The first three quantum states of a quantum particle in a box for principal quantum numbers : (a) standing wave solutions and (b) allowed energy states. the particle in a box model or particle in a harmonic oscillator well provide a . Partition Functions The Canonical Ensemble . In order to conveniently write down an expression for W consider an arbitrary Hamiltonian H of eigen-energies En and eigenstates jni (n stands for a collection of all the pertinent quantum numbers required to label the states) The second (order) harmonic has a frequency of 100 Hz, The third harmonic has a frequency of 150 Hz, The fourth . For example, such a particle could be approximated by an atom (with widely spaced electronic energy levels) adsorbed on the surface of a catalyst: Calculate the .
5.2.3 Partition function of ideal quantum gases . But to do so, first I have to compute the one particle partition function and to do so I have to solve the following integral: Z 1 ( V, T) = R 2 e H 1 ( p, q) d p d q. We obtained the semiclassical limit of the partition function Z 1 for one particle in a box by writing it as a sum over single particle states and then converting the sum to an integral. Expressed in terms of energy levels and level degeneracies, this partition function reads Atnormal (room) temperatures, corresponding to energies of the order of kT = 25 meV, which are smaller than electronic ener- gies ( 10 eV) by a factor of 103, the electronic partition function represents merely the constant factor 0 Given a molecule, write down its partition function in terms of molecular Hope I'm not misleading you here. When the particle-in-a-box model is used to describe gas molecules in a large box, it turns out that the number of thermally accessible states is VERY large. Then, for the 3D partition function we get Z3D = V mT 2h2 3=2; (7) where V = LxLyLz is the volume of the box. For a single particle in a 3D box, the partition function is (7) \[Z_1 = \frac{V}{\lambda_T^3}.\] Recall that the partition function is the average of density of states under the Boltzman distribution and that the thermal length is the characteristic length of the thermal system. The partition function gives the symbol q, is a summation that weights the quantum states in terms of their availability and then adds the resulting terms. 2. We can do this with the (unphysical) potential which is zero with in those limits and outside the limits.
1. nh ma t D n. qe. It can be written as a sum of terms. 2. hn ma. Larger the value of q, larger the . (nQV)N. We introduced the factor of N! Part 1, Populations, Partition Functions, Particle in a Box, Harmonic Oscillators, Angular Momentum and the Rigid Rotor C. W. David Department of Chemistry University of Connecticut Storrs, Connecticut 06269-3060 (Dated: March 11, 2008) I. SYNOPSIS This is a set of problems that were used near the turn of Example Partition Function: Uniform Ladder Because the partition function for the uniform ladder of energy levels is given by: then the Boltzmann distribution for the populations in this system is: Fig. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. q = gi e (i - 0)/ (kT) The partition function turns out to be very convenient single quantity that can be used to express the properties of a . Consider a molecule confined to a cubic box. (n x+ n y+ n z); n x;n y;n z= 0;1;2;:::: Again, because the energies for each dimension are simply additive, the 3D partition function can be simply written as the product of three 1D partition functions, i.e. the partition function for a single particle on the 1D line (the states are those . They depend on three quantum numbers, (since there are 3 degrees of freedom). The classical limit corresponds to the case where the probability of having more than 1 . Partition function a. . The cluster is assumed to be unstable and can emit ("evaporate") successively its constituent particles, which populate the previously empty locations (single-particle s.p. All empty s.p.states are accessible to all particles. So in this case: Z 1 = e p 2 2 m d p e K q 4 4 d q. I know this integral can be solved by the Gauss method, knowing that: The wave functions in Equation 7.45 are sometimes referred to as the "states of definite energy.". In this case there is no difficulty Particle in a box is the simplest physical in evaluating the partition function retain- model which has been solved quantum me- ing the summation because of the availability chanically, but unsolved thermodynamically of the Taylor series expansion method. Particle in a 3D Box A real box has three dimensions. N NN Q QZ NV Konfiguran integrl Z dr V 11 2 / 2 1 2 Z e drdrU kT 3 / Suppose you have a "box" in which each particle may occupy any of 10 single-particle states. Then we said the partition function for N weakly interacting particles is the product of N single particle partition functions divided by N!, ZN() = 1 N! Evaluate the partition function Q by summing exp(E/kT ) over levels and compare your result to Q = q N.Do not forget the degeneracy of the levels, which in this case is the number of ways that N + particles out of N can be in the + state. The partition function for particle in a box is Q = X n=1 gI ne n (6) Here the energy of a particle is n = n 2h2 8mL2. this case the particles are distinguishable but identical, so each particle has the same set of single particle energy levels. For simplicity, assume that each of these states has energy zero. We haveN,non-interacting,particles in the box so the partition function of the whole system is Z(N,V,T)=ZN 1 = VN 3N (2.7) particle in a box, ideal Bose and Fermi gases. Let's consider a very simple case in which we have 2 particles in the box and the box has 2 single particle states. Previous: 4.9 The ideal gas The N particle partition function for indistinguishable particles. . 
The symmetry number, σ, is the number of ways a molecule can be positioned by rigid-body rotation so that the same types of atoms occupy the same positions; it enters the rotational factor of the molecular partition function. The molecular canonical partition function itself is a measure of the number of states that are thermally accessible to the molecule at a given temperature: it is a sum over all possible states, each weighted by its Boltzmann factor, and the microstate energies entering the sum are determined by thermodynamic variables such as the number of particles and the volume, as well as by microscopic quantities like the mass of the constituent particles.

Consider a particle of mass m that can move freely within a rectangular box of dimensions a × b × c with impenetrable walls. The energy eigenvalues of this three-dimensional particle-in-a-box model are
$$ E_{n_x,n_y,n_z}=\frac{h^2}{8m}\left(\frac{n_x^2}{a^2}+\frac{n_y^2}{b^2}+\frac{n_z^2}{c^2}\right),\qquad n_x,n_y,n_z=1,2,3,\dots $$
where n = 1 labels the ground state and n = 2 the first excited state of each one-dimensional factor. (By analogy, the energy levels of the 3D harmonic oscillator are $E_{n_x,n_y,n_z}=\hbar\omega\,(n_x+n_y+n_z+\tfrac{3}{2})$.) For a non-relativistic particle the kinetic energy is $\varepsilon_{v_x,v_y,v_z}=\tfrac{1}{2}m(v_x^2+v_y^2+v_z^2)$, so the Boltzmann factor gives the probability of a velocity $(v_x,v_y,v_z)$ as
$$ P(\mathbf{v})\propto \exp\!\left[-\frac{m}{2k_BT}\left(v_x^2+v_y^2+v_z^2\right)\right], $$
with the normalization fixed either through the partition function or by direct integration.

In the semiclassical limit the sum over quantum states can be replaced by an integral over phase space. For a particle in a one-dimensional box of length L,
$$ Z_1=\int \frac{dp\,dx}{h}\,e^{-p^2/2mk_BT}=\left(\frac{mk_BT}{2\pi\hbar^2}\right)^{1/2} L=\frac{L}{\Lambda}, $$
and for a particle of mass m in a three-dimensional volume V at temperature T the translational partition function is
$$ q_{\text{trans}}(V,T)=\left(\frac{mk_BT}{2\pi\hbar^2}\right)^{3/2} V=\frac{V}{\Lambda^3},\qquad \Lambda=\frac{h}{\sqrt{2\pi mk_BT}}. $$
Here Λ, the thermal de Broglie wavelength, is the thermally averaged wavelength of the particle, the statistical analogue of the de Broglie relation λ = h/mv for a single particle of speed v.

For N independent, distinguishable particles the system partition function factorizes, $Z_N=Z_1^N$; for a system of N localized spins, for example, $Z=z^N$ with z the single-particle partition function, and two independent particles each with $Z_1=10$ give $Z_2=Z_1^2=100$. For N indistinguishable particles in the dilute (classical) limit, the overcounting of permutations is corrected by a factor of N!:
$$ Q(N,V,T)=\frac{q_{\text{trans}}^N}{N!}=\frac{1}{N!}\left(\frac{mk_BT}{2\pi\hbar^2}\right)^{3N/2}V^N. $$
As a worked example, suppose a planar box has four accessible single-particle states ($Z_1=4$) and contains N = 2 identical particles. The approximate formula gives $Z_2=Z_1^2/N!=16/2=8$, whereas a direct count of the distinct two-boson states gives $4+(4\times 3)/2=10$; the discrepancy shows that $Z_1^N/N!$ is reliable only when the number of accessible states greatly exceeds N. When that condition fails, for instance for a mole of helium-4 at 4.2 K, where it liquefies and the particles become spatially highly correlated, the classical counting must be abandoned and the mean occupation of each single-particle state computed with the proper Bose–Einstein (or Fermi–Dirac) statistics. The model is mainly used as a hypothetical example to illustrate the differences between classical and quantum systems.
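As a minimal numerical sketch of the formulas above (assuming, purely for illustration, argon atoms in a one-litre box at 300 K; these values are not from the original page), the thermal wavelength, the translational partition function and ln Z_N for indistinguishable particles can be evaluated directly:

```python
import math
from scipy.constants import k, h, atomic_mass  # Boltzmann constant, Planck constant, u

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength: Lambda = h / sqrt(2*pi*m*k*T)."""
    return h / math.sqrt(2.0 * math.pi * m * k * T)

def q_trans(m, T, V):
    """Single-particle translational partition function: q = V / Lambda**3."""
    return V / thermal_wavelength(m, T) ** 3

def ln_Z_indistinguishable(m, T, V, N):
    """ln Z_N = N ln q - ln N!  (indistinguishable particles, dilute classical limit)."""
    return N * math.log(q_trans(m, T, V)) - math.lgamma(N + 1)

if __name__ == "__main__":
    m_Ar = 39.948 * atomic_mass      # argon mass in kg (illustrative choice)
    T, V = 300.0, 1.0e-3             # 300 K, one litre expressed in m^3
    print(f"Lambda  = {thermal_wavelength(m_Ar, T):.3e} m")
    print(f"q_trans = {q_trans(m_Ar, T, V):.3e}")
    print(f"ln Z_N  = {ln_Z_indistinguishable(m_Ar, T, V, N=6.022e23):.3e}")
```

Because q_trans is enormous compared with the number of particles here, the N! correction is handled in log space via `math.lgamma`, which keeps the classical-limit formula numerically tractable.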
|
CommonCrawl
|
Estimating health state utility from activities of daily living in the French National Hospital Discharge Database: a feasibility study with head and neck cancer
Michaël Schwarzinger ORCID: orcid.org/0000-0002-0573-68561,2 &
Stéphane Luchini3
for the EPICORL Study Group
Health state utility (HSU) is a core component of QALYs and cost-effectiveness analysis, although HSU is rarely estimated among a representative sample of patients. We explored the feasibility of assessing HSU in head and neck cancer from the French National Hospital Discharge database.
An exhaustive sample of 53,258 incident adult patients with a first diagnosis of head and neck cancer was identified in 2010–2012. We used a cross-sectional approach to define five health states over two periods: three "cancer stages at initial treatment" (early, locally advanced or metastatic stage); a "relapse state" and otherwise a "relapse-free state" in the follow-up of patients initially treated at early or locally advanced stage. In patients admitted in post-acute care, a two-parameter graded response model (Item Response Theory) was estimated from all 144,012 records of six Activities of Daily Living (ADLs) and the latent health state scale underlying ADLs was calibrated with the French EQ-5D-3 L social value set. Following linear interpolation between all assessments of the patient, daily estimates of utility in post-acute care were averaged by health state, patient and month of follow-up. Finally, HSU was estimated by health state and month of follow-up for the whole patient population after controlling for survivorship and selection in post-acute care.
Head and neck cancer was generally associated with poor HSU estimates in a real-life setting. As compared to "distant metastasis at initial treatment", mean HSU was higher in other health states, although numerical differences were small (0.45 versus around 0.54). It was primarily explained by the negative effects on HSU of an older age (38.4% aged ≥70 years in "early stage at initial treatment") and comorbidities (> 50% in other health states). HSU estimates significantly improved over time in the "relapse-free state" (from 8 to 12 months of follow-up).
HSU estimates in head and neck cancer were primarily driven by age at diagnosis, comorbidities, and time to assessment of cancer survivors. This feasibility study highlights the potential of estimating HSU within and across severe conditions in a systematic way at the national level.
Cost-effectiveness analysis is used in most high-income countries for pricing and reimbursement of new health interventions [1]. In such analysis, effectiveness is generally measured by Quality-Adjusted Life Years (QALYs) where the expected number of years to be lived in different health states is weighted by community preferences for each health state [1, 2]. However, these health state utility (HSU) estimates are typically among the most important but also uncertain drivers of cost-effectiveness results – a paradoxical situation that seems detrimental to fair pricing and reimbursement decisions across competing new health interventions.
There are multiple sources of variability in HSU estimates, although a general adherence to the same guidelines would purposely limit variability to patient surveys [1,2,3]. Indeed, if the same preference-based, generic health-related quality-of-life (HRQoL) instrument was administered in all patient surveys, then all HRQoL profiles of the patients could be similarly converted into HSU estimates with use of country-specific social value sets [4]. However, the variability of HSU estimates may still remain considerable due to the scarcity, small sample size, and lack of representativeness of patient surveys as recently illustrated in the context of relapsed/metastatic head and neck cancer [5,6,7].
In a systematic review of HSU estimates in head and neck cancer [8], Meregaglia and Cairns identified that only 12 patient surveys collected preference-based, generic HRQoL data. Most (9/12) patient surveys relied on the same EQ-5D-3L instrument [9], but none provided HSU estimates by cancer stage due to small sample sizes [8]. Otherwise, EQ-5D-3L data are increasingly collected along clinical trials [3]. However, HSU estimates lack representativeness due to the exclusion criteria applied to the patient population such as an older age or the presence of comorbidities [10,11,12]. Altogether, none of the patient surveys were conducted in France [8] and few French patients were recruited in international clinical trials (e.g., less than 20 patients in [12]). By default, a cost-effectiveness analysis conducted in the French healthcare context should further assume that patient surveys from other countries are representative of French patients [13].
In this study, we explored another route than patient surveys to estimate consistent HSU at the country level. More specifically, the French National Hospital Discharge database allows identifying all patients cared with a severe condition such as cancer as well as health states typically used in a cost-effectiveness analysis such as cancer stage at initial treatment and relapse in the follow-up. In addition, six Activities of Daily Living (ADLs) are systematically collected in patients admitted in post-acute care. Taking head and neck cancer as a case study, we developed a multi-step process to estimate HSU. Steps I and II consist of patient data organization of the French National Hospital Discharge database including selection of incident patients and definition of five core health states over two periods (initial treatment and follow-up). Step III enables utility to be estimated daily from all records of ADLs in post-acute care with use of Item Response Theory [14]. Step IV enables HSU to be estimated by patient and month of follow-up in the whole patient population after controlling for survivorship and selection in post-acute care.
The data source was the French National Hospital Discharge (PMSI) database in the years 2008 to 2013. The database contains all public and private hospital claims for acute and post-acute care. The standardized discharge summary includes: patient's demographics (gender, age, postal code of residency); primary and associated discharge diagnosis codes according to the WHO International Classification of Diseases, tenth revision (ICD-10); medical procedures performed; length of stay; and discharge mode (including in-hospital death). In addition, six ADLs are systematically scored at admission in post-acute care and then every week until hospital discharge (Table 1). For research purposes, all hospital discharge data of the patient could be traced in 2008–2013 with use of an unique anonymous identifier [15, 16].
Table 1 Activities of Daily Living (ADL) recorded in post-acute care among head and neck cancer patients (n = 144,012)
Step I: selection of incident patients
We included all adults residing in metropolitan France and discharged with a primary or associated discharge diagnosis code of head and neck squamous-cell carcinoma (ICD-10: C00-C06; C09-C14; C30.0; C31; C32) in the years 2008 to 2012. We selected incident cases in 2010–2012 after excluding all prevalent cases in 2008–2009 [17, 18]. In addition, we excluded all incident cases recorded with a personal history of cancer to minimize a possible misclassification of a relapse. The coding dictionary of all variables used in this study is provided in Additional file 1: Table S1.
Step II: health state definition over two periods
Most patients with head and neck cancer are diagnosed at locally advanced stage [19] and receive combined-modality treatments over a few months to decrease the high risk of relapse in the short-term [20]. In patient surveys, EQ-5D-3L was mostly (8/9) assessed after initial treatment in relapse-free patients [8]. In accordance with the usual design of patient surveys, ADLs are recorded in post-acute care in the French National Hospital Discharge database, although we aimed at expanding utility assessment to several health states including a relapse state [5,6,7,8].
We used a cross-sectional approach to define five health states over two periods: three cancer stages at initial treatment (early, locally advanced or metastatic stage) [20]; a relapse state and otherwise a relapse-free state in the follow-up. The initial treatment phase was defined by the first 6 months after diagnosis to encompass various lengths of combined-modality treatments [21] and related post-acute care. Cancer stage was identified at initial treatment from medical information that is consistently recorded at hospital [22]: a metastatic stage was identified by any record of distant metastasis; in absence of distant metastasis, a locally advanced stage was identified by any diagnosis indicating locoregional extension (e.g., lymph nodes) or any initial treatment eliminating an early stage (e.g., chemotherapy) [20]; and an early stage was considered by default in other patients.
Patients identified at the metastatic stage at initial treatment had poor prognosis and were followed in the same health state until end of follow-up. Other patients identified at early or locally advanced stage became at risk of relapse after 6 months. Relapse was identified by the first record of a local relapse (i.e., primary discharge diagnosis identical to the original cancer site) or a new event indicative of extension (i.e., distant metastasis, locoregional extension, or chemotherapy). Relapsing patients had poor prognosis and were followed in the same health state until end of follow-up. Other patients were considered relapse-free in the follow-up, starting from 6 months after diagnosis to end of follow-up.
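Purely as an illustration of the staging rule just described (not actual study code), the decision logic can be written as a small classifier; the flag names below are hypothetical placeholders for the underlying ICD-10 and procedure codes listed in Additional file 1: Table S1.

```python
from dataclasses import dataclass

@dataclass
class InitialTreatmentRecords:
    """Flags aggregated over the first 6 months after diagnosis (hypothetical fields)."""
    has_metastasis: bool                # any record of distant metastasis
    has_locoregional_extension: bool    # e.g. lymph-node involvement
    received_chemotherapy: bool         # any initial treatment ruling out an early stage

def cancer_stage_at_initial_treatment(r: InitialTreatmentRecords) -> str:
    """Cross-sectional rule: metastatic > locally advanced > early (default)."""
    if r.has_metastasis:
        return "metastatic"
    if r.has_locoregional_extension or r.received_chemotherapy:
        return "locally advanced"
    return "early"

print(cancer_stage_at_initial_treatment(
    InitialTreatmentRecords(False, True, False)))   # -> "locally advanced"
```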
Overall mortality was assessed from in-hospital death records as well as deaths outside hospital with right-censoring for all patients at July 1, 2013 (Additional file 1: Methods). The Kaplan-Meier method was used to test the association of health state with survival over a maximum follow-up of 12 months. The Fine and Gray method was used to test the association of health state with post-acute care admission, where deaths without post-acute care were considered as competing events [23].
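A minimal sketch of the survival comparison, assuming a hypothetical per-patient table with follow-up time, a death indicator and the health-state label; the study used the Kaplan-Meier and Fine and Gray methods, while the sketch below shows only a Kaplan-Meier fit with the Python lifelines package (the competing-risks model is not reproduced here).

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical per-patient data: follow-up time in days, death indicator, health state
df = pd.DataFrame({
    "time":  [120, 365, 90, 365, 200, 365, 45, 365],
    "death": [1,   0,   1,  0,   1,   0,   1,  0],
    "state": ["metastatic", "early", "metastatic", "relapse-free",
              "relapse", "early", "relapse", "early"],
})

kmf = KaplanMeierFitter()
for state, grp in df.groupby("state"):
    kmf.fit(grp["time"], event_observed=grp["death"], label=state)
    surv_12m = kmf.survival_function_at_times(365).iloc[0]
    print(f"{state:>13}: S(365 days) = {surv_12m:.2f}")
```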
Step III: utility estimation over time in post-acute care
Six ADLs are systematically scored at admission in post-acute care and then every week until hospital discharge: 4 self-care tasks (dressing/bathing; functional mobility; self-feeding; continence); social interaction; and communication (Table 1). Each ADL is scored on the same 4-level scale (0 = total dependence, 1 = partial dependence, 2 = supervision, or 3 = independence).
All records of ADLs in post-acute care were analyzed with Item Response Theory [14]. We estimated a two-parameter graded response model [24], in which ordinal scores on ADLs are assumed to be a logistic function of a latent health state scale (i.e., the probability of a higher score on each ADL increases as the latent health state increases). The model is specified as follows:
$$ P_{ijk}=P\left({X}_j\ge k\mid {\theta}_i,{\alpha}_j,{\beta}_{jk}\right)=\frac{e^{\alpha_j\left({\theta}_i-{\beta}_{jk}\right)}}{1+{e}^{\alpha_j\left({\theta}_i-{\beta}_{jk}\right)}} $$
where Pijk is the cumulative probability that patient i receives a score of k or above (k = 0, 1, 2, 3) on ADL j (j = 1, 2, 3, 4, 5, 6); θi represents the latent health state value of patient i; αj is the slope parameter of ADL j and indicates the ability of ADL j to discriminate patients on the latent health state scale; and βjk is the threshold parameter of ADL j for score k or above relative to lower scores and indicates the value at which a patient has a 50% chance of scoring k or above on the latent health state scale (i.e., three threshold parameters, one for each score k = 1, 2, 3, are estimated per ADL).
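For readers who prefer code to formulas, the cumulative and per-category probabilities of the two-parameter graded response model can be written out directly; the parameter values below are made up for illustration and are not the fitted PROC IRT estimates.

```python
import numpy as np

def grm_cumulative(theta, alpha, beta):
    """P(X_j >= k | theta) for k = 1..K-1 in a graded response model.

    theta : latent health state of one patient (scalar)
    alpha : slope (discrimination) parameter of ADL j
    beta  : array of K-1 ordered thresholds for scores 1..K-1
    """
    beta = np.asarray(beta, dtype=float)
    return 1.0 / (1.0 + np.exp(-alpha * (theta - beta)))

def grm_category_probs(theta, alpha, beta):
    """P(X_j = k | theta) for k = 0..K-1, by differencing the cumulative curve."""
    cum = np.concatenate(([1.0], grm_cumulative(theta, alpha, beta), [0.0]))
    return cum[:-1] - cum[1:]

# Illustrative values only: a discriminating ADL (alpha = 5.5) with thresholds below 0
probs = grm_category_probs(theta=0.3, alpha=5.5, beta=[-2.0, -1.0, 0.2])
print(probs, probs.sum())  # the four category probabilities sum to 1
```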
We assessed the unidimensionality of the latent health state scale, i.e., the assumption that all ADLs measure a single construct of health state, by examining the eigenvalues of the polychoric correlation matrix [14]. Assuming a perfect correlation between the latent health state scale and the French EQ-5D-3L social value set, we computed an ADL-related utility scale calibrated on the worst (− 0.53) and best (1.00) anchors of the French EQ-5D-3L social value set [25]:
$$ {\hat{U}}_{EQ-5D}^{IRT}=\left[\frac{\left({\hat{U}}_{RAW}^{IRT}-\min {\hat{U}}_{RAW}^{IRT}\right)}{\left(\max {\hat{U}}_{RAW}^{IRT}-\min {\hat{U}}_{RAW}^{IRT}\right)}\times \left(1+0.53\right)-0.53\right] $$
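The calibration above is a simple min-max rescaling onto the anchors of the French EQ-5D-3L value set; a one-function sketch (with `theta` standing for the raw IRT scores, purely for illustration):

```python
import numpy as np

def calibrate_to_eq5d(theta, lo=-0.53, hi=1.00):
    """Min-max rescale raw latent scores onto the French EQ-5D-3L anchors [-0.53, 1.00]."""
    theta = np.asarray(theta, dtype=float)
    return (theta - theta.min()) / (theta.max() - theta.min()) * (hi - lo) + lo

print(calibrate_to_eq5d([-3.1, 0.0, 2.4]))  # worst score -> -0.53, best score -> 1.00
```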
Finally, patients may have repeated assessments (i.e., weekly assessments during the same hospital stay and/or multiple hospital stays in post-acute care) and ADL-related utility was linearly interpolated on a daily basis between all assessments from first to last record of the patient in post-acute care.
Step IV: HSU estimation by month of follow-up in the whole patient population
We controlled for a possible survivorship effect on utility by estimating HSU by patient and month of follow-up in each health state. We expanded on the previous cross-sectional approach (Step II) to define 48 subpopulations consisting of all patients alive at the beginning of each month of follow-up in a given health state (from 1 to 6 months in early or locally advanced stage at initial treatment; and from 1 to 12 months in the three other health states). In each subpopulation, we identified all patients recorded in post-acute care and HSU was computed as the average of daily ADL-related utility estimates in the month per patient. In the best case scenario with complete daily estimates (n = 30 in the month), HSU represented the area-under-the-curve utility estimate of the patient. In the worst case scenario with a single daily estimate (n = 1 in the month), we assumed that ADL-related utility of the patient was uniform over the month.
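A small sketch of this bookkeeping for a single hypothetical patient: ADL-related utility is interpolated linearly between assessment days and then averaged by month of follow-up (assessment days and utilities below are invented for illustration).

```python
import numpy as np
import pandas as pd

# Hypothetical assessments for one patient: day of follow-up and ADL-related utility
days    = np.array([10, 17, 24, 60])
utility = np.array([0.18, 0.30, 0.42, 0.55])

# Daily linear interpolation between the first and last record in post-acute care
daily_days = np.arange(days.min(), days.max() + 1)
daily_util = np.interp(daily_days, days, utility)

# Average by month of follow-up (month 1 = days 1-30, month 2 = days 31-60, ...)
monthly = (pd.DataFrame({"day": daily_days, "utility": daily_util})
           .assign(month=lambda d: (d["day"] - 1) // 30 + 1)
           .groupby("month")["utility"].mean())
print(monthly.round(3))
```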
Then, we estimated HSU for the whole subpopulation with use of a two-step selection model [26]. In the first step, the selection equation is a binary probit regression estimating the probability of a patient to be recorded in post-acute care in the month:
$$ P\left(\mathrm{post}-\mathrm{acute}\ \mathrm{care}=1\right)=\Phi \left(\beta {\mathrm{X}}_i\right) $$
where i represents patients, X represents a vector of covariates, and Φ is the cumulative distribution function of the normal distribution. Since our general aim was to improve inference rather than efficiency [27], we used a large set of covariates including time-independent covariates (demographics; tobacco smoking, alcohol use; year at diagnosis, primary head and neck cancer site, second synchronous head and neck cancer) and time-dependent covariates recorded before or during the given month (admission to a public teaching hospital, comprehensive cancer care center, private clinic; second primary cancer other than head and neck cancer [28, 29], each comorbidity of the Charlson comorbidity index other than cancer [30, 31], depression; palliative care) (Additional file 1: Table S1).
In the second step, the outcome equation is a standard OLS regression estimating HSU in post-acute care while controlling for selection bias:
$$ {\mathrm{HSU}}_i={\gamma \mathrm{Y}}_i+\lambda {\mathrm{IMR}}_i $$
where i represents patients, Y represents a vector of covariates, and IMR (for inverse Mills ratio) is the correction factor of selection bias calculated from the probit model at βXi in the selection equation. Selection bias was assessed by testing the null that the coefficient of IMR λ = 0. We used the set of covariates of the selection equation, although some covariates that were assumingly less related to HSU were removed from the outcome equation (region of residency, risk factors, previous admission to several types of hospital) [32]. Since the set of covariates of the selection equation was defined in all patients, we used the outcome equation to impute HSU in all patients unrecorded in post-acute care in the month.
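The authors fitted these models in SAS; purely as an illustration of the two-step logic, here is a compact sketch with Python's statsmodels on synthetic data, where `in_pac` indicates whether the patient-month is observed in post-acute care and the covariate names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, y_col, select_col, x_select, x_outcome):
    """Step 1: probit for being observed; step 2: OLS on observed rows with the inverse Mills ratio."""
    Xs = sm.add_constant(df[x_select])
    probit = sm.Probit(df[select_col], Xs).fit(disp=0)
    xb = Xs.dot(probit.params)                                     # linear index beta'X_i
    imr = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index)   # inverse Mills ratio

    sel = df[select_col] == 1
    Xo = sm.add_constant(df.loc[sel, x_outcome]).assign(IMR=imr[sel])
    ols = sm.OLS(df.loc[sel, y_col], Xo).fit()                     # test H0: coefficient on IMR == 0
    return probit, ols

# Tiny synthetic example (illustrative only)
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"age70": rng.integers(0, 2, n), "comorb": rng.integers(0, 2, n)})
df["in_pac"] = (0.5 * df["age70"] + 0.8 * df["comorb"] + rng.normal(size=n) > 0).astype(int)
df["hsu"] = np.where(df["in_pac"] == 1,
                     0.6 - 0.10 * df["age70"] - 0.10 * df["comorb"] + 0.05 * rng.normal(size=n),
                     np.nan)

probit, ols = heckman_two_step(df, "hsu", "in_pac", ["age70", "comorb"], ["age70", "comorb"])
print(ols.summary().tables[1])
```

In the second step, imputing HSU for unobserved patient-months amounts to predicting from the outcome equation without the IMR term, which mirrors the imputation described above.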
All statistical analyses were performed with SAS 9.4 including PROC IRT for estimating the two-parameter graded response model.
Of the 27.3 million adults discharged from all French hospitals in 2008–2012, 134,324 (0.49%) had a diagnosis of head and neck cancer (Additional file 1: Table S2). Of them, 53,258 (40.4%) were considered incident cases in 2010–2012.
Five health states were defined over two periods: initial treatment and follow-up. Health states were significantly associated with survival (Fig. 1). Patients with distant metastasis at initial treatment or relapsing in the follow-up had the worst prognosis. Patients initially treated at early stage had better prognosis as compared to patients treated at locally advanced stage. Patients in a relapse-free state had the best prognosis.
Survival according to health state in head and neck cancer
Health states of poor prognosis were significantly associated with higher admission rates in post-acute care (Fig. 2). At initial treatment, patients with distant metastasis were 3.5 times more likely to be admitted in post-acute care as compared to patients at early stage (HR = 3.54, 95% CI 3.31–3.80). In the follow-up, relapsing patients were 3.6 times more likely to be admitted in post-acute care as compared to patients in a relapse-free state (HR = 3.62, 95% CI 3.42–3.82).
Admission in post-acute care according to health state in head and neck cancer
Six ADLs were assessed at 144,012 points in time in post-acute care (Table 1). The two-parameter graded response model fitted very well all records of ADLs. The unidimensionality of the latent health scale was supported by the examination of eigenvalues: the first eigenvalue (4.0) explained 66.8% of the variance; the second eigenvalue was below 1.0; and the ratio of the first and second eigenvalues (4.6) was above 3 (Additional file 1: Table S3). In addition, all slope parameters were above 1 indicating that all ADLs were informative regarding the latent health state scale (Additional file 1: Table S4). The assessment of dressing/bathing was the most informative on the latent health state (slope = 5.50; range between threshold parameters = 4.95) (Fig. 3). The assessment of self-feeding was the least informative on the latent health state (slope = 1.12; range between threshold parameters = 2.09). Most (14/18) threshold parameters were below 0 indicating that ADLs were generally more informative on poor health states.
Characteristic curves of 6 Activities of Daily Living (ADL) recorded in post-acute care (n = 144,006). The trait on the horizontal axis is an arbitrarily scaled representation of the latent health state scale. As the value of the latent health state increases, the probability of a higher score on each ADL increases. The relative concentration of the curves reflects the relatively high discriminative ability of an ADL. On the contrary, the relative spread of the curves reflects the relatively low discriminative ability of an ADL
Following calibration of the latent health state scale on the French EQ-5D-3L social value set, the ADL-related utility had a mean (std) of 0.44 (0.40) and a median (IQR) of 0.47 (0.18–0.76). ADL-related utility estimates were completed on a daily basis with use of linear interpolation between all assessments of the patient in post-acute care. The final dataset included 1,032,301 daily estimates of ADL-related utility in post-acute care with a mean (std) of 0.44 (0.38) and a median (IQR) of 0.47 (0.18–0.74).
Daily estimates of ADL-related utility in post-acute care were averaged into HSU estimates by health state, patient and month of follow-up. Patients initially treated at early stage had surprisingly lower HSU estimates than patients at locally advanced stage and a selection bias in post-acute care was suspected (Fig. 4).
Health state utility, by health state and month of follow-up of head and neck cancer patients in post-acute care
Considering all patients alive at the beginning of the month in a given health state, two-step selection models were carried out by health state and month of follow-up (parameter estimates are provided at first and last month of follow-up for the 5 health states in Additional file 1: Tables S5–S14). Overall, HSU estimates significantly increased for each health state and month of follow-up after controlling for selection in post-acute care (Fig. 5). A selection bias was primarily found in patients initially treated at early stage (p < 0.05 for 4 out of 6 months of follow-up) or locally advanced stage (p < 0.05 for 6 out of 6 months of follow-up) (Additional file 1: Table S15), although HSU estimates remained lower in patients initially treated at early stage as compared to locally advanced stage. Patients initially treated with distant metastasis had the worst HSU estimates at all months of follow-up. Patients in a relapse-free state had the best HSU estimates after 8 months of follow-up, with an increasing trend from 8 to 12 months of follow-up (max HSU of 0.61 at 12 months of follow-up).
Health state utility, by health state and month of follow-up of all head and neck cancer patients
HSU summary statistics were computed over the all period of follow-up (Table 2). As compared to the health state "distant metastasis at initial treatment" (mean HSU = 0.45), other health states were associated with a better mean HSU, although numerical differences were small around 0.54. It was primarily explained by the negative effects on HSU of an older age in the health state "early stage at initial treatment" (38.4% patients were aged ≥70 years) and comorbidities (> 50%) in other health states.
Table 2 Summary statistics of health state utility (HSU) in head and neck cancer
Although many Health Technology Assessment bodies (such as the French HAS [13]) have deemed QALYs the principal measure of effectiveness, still only a limited number of studies report QALYs based on actual assessments of preference-based, generic HRQoL among a representative sample of patients. The assessment of new immunotherapy for relapsed/metastatic head and neck cancer provides a pressing example [5,6,7] since few patient surveys were conducted and none provided HSU estimates by cancer stage due to small sample sizes [8].
In this study, we explored another route than patient surveys to estimate consistent HSU at the country level. On the one hand, all incident patients diagnosed with head and neck cancer in France were identified from the French National Hospital Discharge database. Five health states could be reliably defined over time for the whole patient population and we found expectedly that relapsed/metastatic patients had poor prognosis. On the other hand, ADLs rather than the recommended EQ-5D-3L instrument are recorded and we had to develop a multi-step process to transform ADLs records in post-acute care into consistent HSU estimates representative of the whole patient population.
One of the main study results is that head and neck cancer was generally associated with poor HSU estimates in a real-life setting since mean HSU ranged from 0.45 for "distant metastasis at initial treatment" to around 0.54 for other health states (early or locally advanced stage at initial treatment; relapse state and otherwise relapse-free state in the follow-up) with "minimally important differences" (< 0.06) between health states [33]. In comparison, EQ-5D-3L utility estimates were much higher in most (8/9) surveys conducted in relapse-free patients (median (IQR) sample size of 79 (28–112) patients), with a median (IQR) utility of 0.80 (0.78–0.84) for patients aged 63 years on average [8]. EQ-5D-3L utility estimates were also higher in one longitudinal study of 81 patients diagnosed at early/locally advanced stage and aged ≥65 years (median (IQR) utility of 0.66 (0.55–0.76) at diagnosis and 0.64 (0.00–0.74) at 12 months of follow-up) [34]. EQ-5D-3L utility estimates were also higher in patients selected in clinical trials, with a mean (std) utility of 0.79 (0.18) in 715 patients initially treated at locally advanced stage [11] and 0.68 (0.28) in 120 relapsed/metastatic patients [12]. While attention was drawn on the expected variability of EQ-5D-3L utility estimates with community preferences of the country [4, 8], our study results suggest that the lack of representativeness of patient surveys should be of primary concern since the usual recruitment of younger patients with less comorbidities may lead to overly optimistic HSU estimates.
Another main study result is that HSU estimates significantly improved over time in patients in a relapse-free state (from 8 to 12 months of follow-up) in agreement with HRQoL improvements found over longer periods of time in cancer survivors [35, 36]. In comparison, the time to assessment of EQ-5D-3L varied dramatically within and between surveys conducted in relapse-free patients (i.e., from months to years after diagnosis) [8]. On the one hand, a longer time to assessment in cross-sectional patient surveys may also explain our lower HSU estimate since follow-up was limited to 1 year and accounted for utility at each month of follow-up in the relapse-free state. On the other hand, our study results suggest that time to assessment should be better accounted for or even standardized to achieve comparable HSU estimates between patient surveys. Otherwise, we found that HSU estimates did not improve over time in health states other than the relapse-free state. Similarly, no significant changes in EQ-5D-3L utility were found over time in old patients diagnosed at early/locally advanced stage [34], trial patients initially treated at locally advanced stage [11], or trial patients treated at relapsed/metastatic stage [12]. Altogether, it suggests that EQ-5D-3L social value sets exhibit a poor responsiveness to change during treatment in head and neck cancer [37].
The strengths of this nationwide study outline its limitations. Indeed, this study is a secondary analysis of the French National Hospital Discharge database and therefore all measurements relied on administrative records with possible misclassification. Regarding health state definition, TNM cancer staging is not recorded in the standardized discharge summary and we constructed a composite variable to identify three cancer stages at initial treatment. Overall, 37,508 (70.4%) of 53,258 patients were identified at a late stage at initial treatment (Fig. 1), in agreement with previous reports of cancer registries [19]. However, we could no longer estimate HSU related to the treatment modalities since this information was already used to construct the composite variable of cancer stage.
Regarding utility estimation, ADL scores contribute with discharge diagnoses, rehabilitation procedures, and age to the hospital billing system in post-acute care. Accordingly, the completion rate of ADLs was extremely high (> 99%), although a recording bias towards more severe scores is possible and could lead to lower HSU estimates. In absence of mapping studies of ADLs into EQ-5D-3L social value sets [38,39,40], a latent health state scale was estimated from all records of ADLs with use of Item Response Theory and then calibrated on the worst (− 0.53) and best (1.00) anchors of the French EQ-5D-3L social value set [25]. Such approach was supported by the conceptual overlap between ADLs and the EQ-5D-3L instrument regarding dimensions and their ordinal scoring as well as the unidimensionality of the latent health state scale underlying ADLs. However, the calibration implies a perfect correlation of the latent health state scale with the French EQ-5D-3L social value set and the distribution of ADL-related utility should be cross-validated with a mapping study conducted in post-acute care. In the following steps, we made a full use of the repeated assessments of ADLs by patient (linear interpolation of ADL-related utility on a daily basis and then average by month of follow-up) that resulted in smoothed and generally unimodal distributions of utility in the 48 subpopulations. In particular, we found limited evidence of a ceiling effect in post-acute care (utility of 1.00 for 8.6% of 40,812 patients selected in all 48 subpopulations; at maximum, 13.8% of 290 relapse-free patients at 12 months of follow-up) [37].
HSU estimates in head and neck cancer were primarily driven by age at diagnosis, comorbidities, and time to assessment of cancer survivors. This feasibility study highlights the potential of estimating HSU within and across severe conditions in a systematic way at the national level. While the multi-step process to estimate HSU was developed with use of the French National Hospital Discharge database, it may generalize to other Hospital Discharge databases including a systematic assessment of ADLs for billing purposes.
Data sharing of the French National Hospital Discharge database or any related dataset with de-identified data such as the dataset generated for the current study is forbidden by law.
ADLs:
Activities of daily living on 6 dimensions
EQ-5D-3L:
EuroQol 5 dimensions 3 levels instrument
HRQoL:
Health-related quality of life
HSU:
Health state utility
ICD-10:
International Classification of Diseases, tenth revision
IQR:
Interquartile range
IRT:
Item Response Theory
OLS:
Ordinary least squares regression
PMSI:
Programme de médicalisation des systèmes d'information
QALYs:
Quality-Adjusted Life Years
Barnieh L, Manns B, Harris A, Blom M, Donaldson C, Klarenbach S, Husereau D, Lorenzetti D, Clement F. A synthesis of drug reimbursement decision-making processes in organisation for economic co-operation and development countries. Value Health. 2014;17:98–108.
Sanders GD, Neumann PJ, Basu A, Brock DW, Feeny D, Krahn M, Kuntz KM, Meltzer DO, Owens DK, Prosser LA, et al. Recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses: second panel on cost-effectiveness in health and medicine. JAMA. 2016;316:1093–103.
Wolowacz SE, Briggs A, Belozeroff V, Clarke P, Doward L, Goeree R, Lloyd A, Norman R. Estimating health-state utility for economic models in clinical studies: an ISPOR good research practices task force report. Value Health. 2016;19:704–19.
Xie F, Gaebel K, Perampaladas K, Doble B, Pullenayegum E. Comparing EQ-5D valuation studies: a systematic review and methodological reporting checklist. Med Decis Mak. 2014;34:8–20.
Nivolumab for treating squamous cell carcinoma of the head and neck after platinum-based chemotherapy. https://www.nice.org.uk/guidance/TA490/chapter/1-Recommendations. Accessed 20 Jul 2019.
Ward MC, Shah C, Adelstein DJ, Geiger JL, Miller JA, Koyfman SA, Singer ME. Cost-effectiveness of nivolumab for recurrent or metastatic head and neck cancer. Oral Oncol. 2017;74:49–55.
Tringale KR, Carroll KT, Zakeri K, Sacco AG, Barnachea L, Murphy JD. Cost-effectiveness analysis of Nivolumab for treatment of platinum-resistant recurrent or metastatic squamous cell carcinoma of the head and neck. J Natl Cancer Inst. 2018;110:479–85.
Meregaglia M, Cairns J. A systematic literature review of health state utility values in head and neck cancer. Health Qual Life Outcomes. 2017;15:174.
Brooks R. EuroQol: the current state of play. Health Policy. 1996;37:53–72.
Del Barco ME, Mesia R, Adansa Klain JC, Vazquez Fernandez S, Martinez-Galan J, Pastor Borgonon M, Gonzalez-Rivas C, Caballero Daroqui J, Berrocal A, Martinez-Trufero J, et al. Phase II study of panitumumab and paclitaxel as first-line treatment in recurrent or metastatic head and neck cancer. TTCC-2009-03/VECTITAX study. Oral Oncol. 2016;62:54–9.
Truong MT, Zhang Q, Rosenthal DI, List M, Axelrod R, Sherman E, Weber R, Nguyen-Tan PF, El-Naggar A, Konski A, et al. Quality of life and performance status from a substudy conducted within a prospective phase 3 randomized trial of concurrent accelerated radiation plus cisplatin with or without Cetuximab for locally advanced head and neck carcinoma: NRG oncology radiation therapy oncology group 0522. Int J Radiat Oncol Biol Phys. 2017;97:687–99.
Harrington KJ, Ferris RL, Blumenschein G Jr, Colevas AD, Fayette J, Licitra L, Kasper S, Even C, Vokes EE, Worden F, et al. Nivolumab versus standard, single-agent therapy of investigator's choice in recurrent or metastatic squamous cell carcinoma of the head and neck (CheckMate 141): health-related quality-of-life results from a randomised, phase 3 trial. Lancet Oncol. 2017;18:1104–15.
Haute Autorité de santé (HAS). Guide méthodologique - Choix méthodologiques pour l'évaluation économique à la HAS [Methodological guidance for health technology assessment submitted to the French Health Authority]. La Plaine-Saint Denis: HAS; 2011.
De Ayala RJ. The theory and practice of item response theory. New York: Guilford Press; 2009.
Agence Technique de l'Information sur l'Hospitalisation. Le décès dans le PMSI-MCO : validation et précautions d'utilisation. [Death in acute care : validation and precaution instructions]. Lyon: Agence Technique de l'Information sur l'Hospitalisation; 2010.
Agence Technique de l'Information sur l'Hospitalisation. Aide à l'utilisation des informations de chaînage [How to use de-identified patient information]. Lyon: Agence technique de l'information sur l'hospitalisation; 2014.
Schulman KL, Berenson K, Tina Shih YC, Foley KA, Ganguli A, de Souza J, Yaghmour NA, Shteynshlyuger A. A checklist for ascertaining study cohorts in oncology health services research using secondary data: report of the ISPOR oncology good outcomes research practices working group. Value Health. 2013;16:655–69.
Bagley SC, Altman RB. Computing disease incidence, prevalence and comorbidity from electronic medical records. J Biomed Inform. 2016;63:108–11.
Gatta G, Botta L, Sanchez MJ, Anderson LA, Pierannunzio D, Licitra L, Group EW. Prognoses and improvement for head and neck cancers diagnosed in Europe in early 2000s: the EUROCARE-5 population-based study. Eur J Cancer. 2015;51(15):2130–43.
Gregoire V, Lefebvre JL, Licitra L, Felip E, Group E-E-EGW. Squamous cell carcinoma of the head and neck: EHNS-ESMO-ESTRO clinical practice guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2010;21(Suppl 5):v184–6.
VanderWalde NA, Meyer AM, Liu H, Tyree SD, Zullig LL, Carpenter WR, Shores CD, Weissler MC, Hayes DN, Fleming M, Chera BS. Patterns of care in older patients with squamous cell carcinoma of the head and neck: a surveillance, epidemiology, and end results-medicare analysis. J Geriatr Oncol. 2013;4:262–70.
Amin MB, Edge S, Greene F, Byrd DR, Brookland RK, Washington MK, Gershenwald JE, Compton CC, Hess KR, Sullivan DC, et al. AJCC Cancer Staging Manual. 8th ed. New-York: Springer-Verlag; 2017.
Fine JP, Gray RJ. A proportional hazards model for the subdistribution of a competing risk. J Am Stat Assoc. 1999;94:496–509.
Samejima F. Estimation of latent ability using a response pattern of graded scores. Psychometrika Monogr Suppl. 1969;34:100.
Chevalier J, de Pouvourville G. Valuing EQ-5D using time trade-off in France. Eur J Health Econ. 2013;14:57–66.
Heckman JJ. Dummy endogenous variables in a simultaneous equation system. Econometrica. 1978;46:931–59.
Davidson R, MacKinnon J. Estimation and inference in econometrics. Oxford: Oxford University Press; 1993.
Jegu J, Binder-Foucard F, Borel C, Velten M. Trends over three decades of the risk of second primary cancer among patients with head and neck cancer. Oral Oncol. 2013;49:9–14.
Jegu J, Colonna M, Daubisse-Marliac L, Tretarre B, Ganry O, Guizard AV, Bara S, Troussard X, Bouvier V, Woronoff AS, Velten M. The effect of patient characteristics on second primary cancer risk in France. BMC Cancer. 2014;14:94.
Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–83.
Boje CR. Impact of comorbidity on treatment outcome in head and neck squamous cell carcinoma - a systematic review. Radiother Oncol. 2014;110:81–90.
Sartori AE. An estimator for some binary-outcome selection models without exclusion restrictions. Polit Anal. 2003;11:111–38.
Pickard AS, Neary MP, Cella D. Estimation of minimally important differences in EQ-5D utility and VAS scores in cancer. Health Qual Life Outcomes. 2007;5:70.
Pottel L, Lycke M, Boterberg T, Pottel H, Goethals L, Duprez F, Rottey S, Lievens Y, Van Den Noortgate N, Geldhof K, et al. G-8 indicates overall and quality-adjusted survival in older head and neck cancer patients treated with curative radiochemotherapy. BMC Cancer. 2015;15:875.
Weaver KE, Forsythe LP, Reeve BB, Alfano CM, Rodriguez JL, Sabatino SA, Hawkins NA, Rowland JH. Mental and physical health-related quality of life among U.S. cancer survivors: population estimates from the 2010 National Health Interview Survey. Cancer Epidemiol Biomark Prev. 2012;(21):2108–17.
Kim K, Kim JS. Factors influencing health-related quality of life among Korean cancer survivors. Psychooncology. 2017;26:81–7.
Schwenkglenks M, Matter-Walstra K. Is the EQ-5D suitable for use in oncology? An overview of the literature and recent developments. Expert Rev Pharmacoecon Outcomes Res. 2016;16:207–19.
Brazier JE, Yang Y, Tsuchiya A, Rowen DL. A review of studies mapping (or cross walking) non-preference based measures of health to generic preference-based measures. Eur J Health Econ. 2010;11:215–25.
Longworth L, Rowen D. Mapping to obtain EQ-5D utility values for use in NICE health technology assessments. Value Health. 2013;16:202–10.
Dakin H, Abel L, Burns R, Yang Y. Review and critical appraisal of studies mapping from quality of life or clinical measures to EQ-5D: an online database and application of the MAPS statement. Health Qual Life Outcomes. 2018;16:31.
The EPICORL (EPIdémiologie des Cancers ORL) Study Group includes: Sylvain Baillot, MSc, Translational Health Economics Network (THEN), Paris, France; Mélina Bec, MSc, Health Economics & Outcomes Research department, MSD France; Lynda Benmahammed, MD, Medical Advisor Oncology, MSD France; Caroline Even, MD, PhD, Department of Head & Neck Surgical & Medical Oncology, Institut de cancérologie Gustave Roussy, Villejuif, France; Lionel Geoffrois, MD, PhD, Department of Medical Oncology, Institut de cancérologie de Lorraine – Alexis Vautrin, Vandoeuvre Les Nancy, France; Florence Huguet, MD, PhD, Department of Radiation Oncology, Hôpital Tenon, AP-HP, France; Béatrice Le Vu, MD, MSc, Stratégie et Gestion Hospitalière, UNICANCER Fédération Nationale des Centres de Lutte Contre le Cancer, Paris, France & Translational Health Economics Network (THEN), Paris, France; Laurie Lévy-Bachelot, PhD, Health Economics & Outcomes Research department, MSD France; Stéphane Luchini, PhD, CNRS, GREQAM-IDEP, Marseille, France & Translational Health Economics Network (THEN), Paris, France; Yoann Pointreau, MD, PhD, Department of Radiation Oncology, ILC- Institut inter-régionaL de Cancérologie, Centre Jean Bernard-Clinique Victor Hugo, Le Mans, France; Camille Robert, PharmD, Health Economics & Outcomes Research department, MSD France; Luis Sagaon Teyssier, PhD, AMU/Inserm/IRD, UMR 912, Marseille, France & Translational Health Economics Network (THEN), Paris, France; Antoine Schernberg, MD, MPH, Department of Radiation Oncology, Hôpital Tenon, AP-HP, Paris, France; Michaël Schwarzinger, MD, PhD, Translational Health Economics Network (THEN), Paris, France; Stéphane Temam, MD, PhD, Department of Head & Neck Surgical & Medical Oncology, Institut de cancérologie Gustave Roussy, Villejuif, France.
The requirement for informed consent was waived because the study used de-identified data.
The EPICORL (EPIdémiologie des Cancers ORL) study was supported by a research grant from MSD France. The funding source had no role in the study design, data collection, analysis, and interpretation of data, in the writing of the report and decision to submit the manuscript.
Translational Health Economics Network (THEN), 39 quai de Valmy, 75010, Paris, France
Michaël Schwarzinger
Infection Antimicrobials Modeling & Evolution (IAME), UMR 1137, Institut National de la Santé et de la Recherche Médicale (INSERM), Université Paris Diderot, Sorbonne Paris Cité, Paris, France
Aix-Marseille University (Aix-Marseille School of Economics), Centre National de la Recherche Scientifique and EHESS Marseille, Marseille, France
Stéphane Luchini
Sylvain Baillot
, Mélina Bec
, Lynda Benmahammed
, Caroline Even
, Lionnel Geoffrois
, Florence Huguet
, Béatrice Le Vu
, Laurie Lévy-Bachelot
, Stéphane Luchini
, Yoann Pointreau
, Camille Robert
, Luis Sagaon Teyssier
, Antoine Schernberg
, Michaël Schwarzinger
& Stéphane Temam
MS conceptualized the study, contributed to the analysis and interpretation of the data, and wrote the first draft of the paper. SL contributed to the analysis and interpretation of the data. All authors gave final approval of this version to be submitted.
Correspondence to Michaël Schwarzinger.
The EPICORL study was approved by the French National Commission for Data Protection (CNIL DE-2015-025) who granted access to the French National Hospital Discharge database for the years 2008 to 2013.
All authors have completed the ICMJE Competing Interest form and declare that: MS is the founder/CEO of Translational Health Economics Network (THEN), Paris, France that received research grants from MSD France as well as Abbvie, Gilead and Novartis, outside and unrelated to the submitted work. SL has declared no conflicts of interest.
Additional Methods. Imputation of mortality outside hospital. Table S1. Coding dictionary. Table S2. Study flowchart. Table S3. Eigenvalues of the Polychoric Correlation Matrix (two-parameter graded response model). Table S4. Parameter estimates (two-parameter graded response model). Table S5. Parameter estimates of the two-step selection model for "initial treatment at early stage" at 1 month of follow-up. Table S6. Parameter estimates of the two-step selection model for "initial treatment at early stage" at 6 months of follow-up. Table S7. Parameter estimates of the two-step selection model for "initial treatment at locally advanced stage" at 1 month of follow-up. Table S8. Parameter estimates of the two-step selection model for "initial treatment at locally advanced stage" at 6 months of follow-up. Table S9. Parameter estimates of the two-step selection model for "initial treatment with distant metastasis" at 1 month of follow-up. Table S10. Parameter estimates of the two-step selection model for "initial treatment with distant metastasis" at 12 months of follow-up. Table S11. Parameter estimates of the two-step selection model for "relapse treatment in the follow-up" at 1 month of follow-up. Table S12. Parameter estimates of the two-step selection model for "relapse treatment in the follow-up" at 12 months of follow-up. Table S13. Parameter estimates of the two-step selection model for "relapse-free in the follow-up" at 1 month of follow-up. Table S14. Parameter estimates of the two-step selection model for "relapse-free in the follow-up" at 12 months of follow-up. Table S15. Selection bias in post-acute care by health state and month of follow-up. (DOCX 208 kb)
Schwarzinger, M., Luchini, S. & for the EPICORL Study Group. Estimating health state utility from activities of daily living in the French National Hospital Discharge Database: a feasibility study with head and neck cancer. Health Qual Life Outcomes 17, 129 (2019). https://doi.org/10.1186/s12955-019-1195-9
EQ-5D-3L
QALYs
National hospital discharge database
|
CommonCrawl
|
One Refined Passport of Genus 13 with Automorphism Group $C_2^2$
Genus \(13\)
Quotient Genus \(0\)
Group \(C_2^2\)
Signature \([ 0; 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ]\)
Generating Vectors \(1\)
Family containing this refined passport
Genus: 13
Quotient Genus: 0
Group name: $C_2^2$
Group identifier: [4,2]
Signature: $[ 0; 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ]$
Conjugacy classes for this refined passport: 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4
Jacobian variety group algebra decomposition: $A_{3}\times A_{7}\times A_{3}$
Corresponding character(s): 2, 3, 4
Hyperelliptic curve(s): No
Cyclic trigonal curve(s): No
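As a quick consistency check (not part of the database record), the Riemann-Hurwitz formula applied to the quotient genus, group order and signature above recovers the genus:
$$ 2g-2=|G|\left(2h-2+\sum_{i=1}^{16}\Bigl(1-\tfrac{1}{m_i}\Bigr)\right)=4\left(2\cdot 0-2+16\cdot\tfrac{1}{2}\right)=24\;\Longrightarrow\; g=13. $$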
Generating Vector(s)
Displaying the unique generating vector for this refined passport.
13.4-2.0.2-2-2-2-2-2-2-2-2-2-2-2-2-2-2-2.39.1
(1,3) (2,4)
Data computed by Jen Paulhus, using group and signature data originally computed by Thomas Breuer.
|
CommonCrawl
|
macro level effects of a change in the value for epsilon naught
I'm developing a story set 4-5 centuries after the fall of a high-tech civilization. The fall of said civilization is indirectly related to a loss of electricity: in the chaos of their technology no longer working like it should, they devolved into a scavengerpunk world. I'm toying with the idea that the loss of electricity was caused by some form of energy field disrupting the electromagnetic force, and that this field would do so by manipulating the electric constant, epsilon naught.
Is this workable? That is, can the field cause enough disruption to electrical devices without making life impossible? And if so, what other side effects on the environment would occur? For example, would a campfire that would normally have an orange flame now have a red flame because of a change in atomic energy levels?
A few specifications:
I don't need the field to turn the EM force completely off, just to disrupt tech enough so that it doesn't work
the field would be non-uniform. i.e. concentrated on one spot of the planet, radiating outward and subsiding so that a spot on the opposite side of the planet would still be able to use electricity.
the field would also be self-maintaining through the centuries, such that the resulting society on the planet wouldn't have been able to remake any electrical devices after the fall.
All the research I've done on this site and others addresses what would happen if the EM force were turned off completely, or what would happen if the fundamental forces were a few percent different and how that would affect the ability of stars or carbon atoms to form.
A few other articles suggested that an easier way to get rid of electricity would be an EMP, grey goo, societal restrictions, mineral scarcity, or even simply saying magic did it. One of the more interesting ones I saw was bacteria suddenly getting a taste for copper. And I may end up going with one of those instead of the electric constant approach if it doesn't work.
But before I go that way, I would appreciate some feedback on the feasibility of an energy field disrupting the electric constant.
Okay. I didn't think it was possible to modify the electric constant enough to get rid of electricity and still have intelligent life, but I wanted to ask anyway. I'm probably going to go with a modified version of the grey goo method mentioned in What kind of event could stop electricity?.
Essentially, the nanotech will absorb any electricity it finds from any active power sources. But when those power sources are turned off, or stop working from a lack of maintenance, the nanotech still wants to absorb electricity. So without any power sources feeding them, they would draw energy directly from the world. And this will have the effect of making any new power sources produce less electricity than they normally would, because the nanotech pulling electricity from everywhere would simulate the effect of having a dielectric superimposed over a vacuum, i.e. more resistance to electric flow.
To quote this page: https://en.wikipedia.org/wiki/Permittivity
permittivity describes the amount of charge needed to generate one unit of electric flux in a particular medium.
A consequence of this would be that if you turned on a whole bunch of power sources, you wouldn't see any electricity output, but the permittivity would drop back down to normal because the nanotech would no longer be drawing power from the world.
I also get that messing with the electric constant in this manner would affect lots of other constants, and I don't necessarily have an issue with this, as long as the effects are consistent. And I'm not really interested in changing the fine structure constant by as much as 4%, but to redirect my initial question, if the modification was small, such as 1/1000th of a percent, would there be any visible effects on the environment?
reality-check electricity electromagnetism
bronzeapricot
You might want to check out What would happen if electricity stopped working? – a CVn♦ Dec 27 '17 at 7:30
Please, use capital letters, punctuation etc. And what do you mean by "manipulating the electric constant epsilon naught"? You know that this constant is not some magic setting of this universe, but rather something that came up from equations, theory and so on. – Mołot Dec 27 '17 at 7:39
This question feels a bit to me like asking "what would happen if the value of pi were changed?" - you can't. You just can't. – Xenocacia Dec 27 '17 at 8:49
The specific value of ε₀ depends on the particular system of measurement units. If you don't like the value it has in SI, then you can use one of the various CGS systems which either do away with ε₀ (for example, the CGS electrostatic system), or assign a special value to it in order to make Coulomb's law simpler (for example, the Gaussian system). – AlexP Dec 27 '17 at 15:13
If you change the value of $$ \epsilon_0$$ you will end up affecting the value of the fine-structure constant $$ \alpha = \frac{1}{4\pi\epsilon_0}\frac{2\pi e^2}{hc}$$ where e is the elementary charge, c the speed of light and h is Planck's constant.
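As a quick numerical illustration (not part of the original answer), the scaling can be checked with CODATA values from scipy.constants: since α ∝ 1/ε₀, a fractional change in ε₀ shifts α by (to first order) the same fraction in the opposite direction.

```python
from scipy.constants import epsilon_0, elementary_charge as e, h, c

def alpha(eps0):
    """Fine-structure constant alpha = e^2 / (2 * eps0 * h * c) for a given electric constant."""
    return e**2 / (2.0 * eps0 * h * c)

a0 = alpha(epsilon_0)
print(f"alpha = {a0:.10f}  (1/alpha = {1/a0:.4f})")
for frac in (1e-5, 0.04):            # a 0.001 % tweak and the 4 % threshold quoted below
    a = alpha(epsilon_0 * (1.0 + frac))
    print(f"eps0 increased by {frac:.3%}: alpha changes by {(a - a0) / a0:+.5%}")
```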
To quote this page
The anthropic principle is a controversial argument of why the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were much different. For instance, were α to change by 4%, stellar fusion would not produce carbon, so that carbon-based life would be impossible. If α were > 0.1, stellar fusion would be impossible and no place in the universe would be warm enough for life as we know it.
Long story short: if you change epsilon, you don't have to worry about intelligent life any longer, and this includes electricity too.
L.Dutch♦
|
CommonCrawl
|
The University of Liverpool, Department of Physics Particle Physics Rolling Grant
Lead Research Organisation: University of Liverpool
Department Name: Physics
The theories describing the forces binding fundamental particles to make the world we live in fail to capture the phenomenon of mass, working most naturally only for massless fundamental particles. The problem is solved by new physics at the electroweak unification energy scale, where the weak nuclear force and that of electromagnetism are found to show the same strength, a fact associated with the mass of the force-carrying W and Z particles being the root cause of the apparent weakness of the 'weak' force. The 'new physics' at this characteristic mass/energy scale is often thought to be related to the existence of a Higgs particle with mass just above that of the W and Z. However, even if this is the correct scenario, strong theoretical arguments exist for other phenomena, close in energy scale. All of this provides a major motivation for building the LHC and for one aspect of its physics programme, the search, using high energy collisions, for direct production of new particles. ATLAS, with a major component of its tracking detector assembled at Liverpool, is searching for such 'new physics', but to be sure of a discovery, the predictions of the known physics, the 'Standard Model', have to be understood extremely well in the high energy collisions of the LHC. This is a key first step and one that Liverpool is emphasising with its impressive understanding of strong interaction physics and proton structure.

Another way deviations can be found is in subtler effects at lower energies, for example in the decays of particles containing the b-quark, the most massive quark with a measurable lifetime. This is the target of the LHCb experiment, where the detection of short lived particles produced at the LHC relies on the Vertex Locator (VeLo) detector for which all the modules were built at Liverpool. Liverpool physicists are playing major roles in several aspects of this experiment, including studies relating to exploration of rare Standard Model processes.

Another area which has proved very fruitful in finding deviations from the Standard Model is in the properties of neutrinos. These can be explored through 'neutrino oscillation' experiments, sensitive to both neutrino masses (thought to be zero until a decade ago) and how different types of neutrinos can inter-convert. The intense beam being produced at the J-PARC laboratory in Japan is the ideal testing ground for ideas that asymmetries between neutrinos and anti-neutrinos may have helped produce the matter anti-matter asymmetry we see in today's Universe (where, as far as we can tell, all the anti-matter has annihilated away).

In all these programmes, there is a lively future building on these experiments and we play a key role in plans and research aimed at upgrades to allow better exploitation of these unique facilities. At the LHC, improvements to the accelerator will lead to the rebuilding of large parts of the detectors. In particular, much greater radiation hardness may be needed and we enjoy a world-leading reputation in radiation hard silicon detectors. Other aspects require advanced materials and low mass structures, novel interconnect technologies and microelectronics developments. These are all areas where we bring unique skills, already demonstrated in the detectors provided to current experiments. In addition to work on silicon sensors (both strip and pixels), we are working on novel liquid argon based detectors, and on a combined magnet-calorimeter concept.
The longer-term future will clearly involve new accelerators and new accelerator components on existing facilities. We are particularly well placed to contribute through our lead role in the Cockcroft Institute, whose Director is a Liverpool professor. Liverpool enjoys a long tradition of generic research and development which leads to major developments in new technologies. We work closely with UK industry wherever possible to ensure the widest possible dissemination of novel results.
ST/H001069/2
Philip Patrick Allport
Facility Development (5%)
Nuclear Physics (10%)
Particle physics - experiment (85%)
Accelerator R&D (5%)
Hadron Physics (10%)
University of Liverpool, United Kingdom (Lead Research Organisation)
Nagoya University, Japan (Collaboration)
Fondazione Bruno Kessler (Collaboration)
Government of Ireland (Collaboration)
University of Amsterdam (Collaboration)
University of Padova (Collaboration)
Arktis Radiation Detectors (Collaboration)
Glyndwr University, United Kingdom (Collaboration)
Swiss Federal Institute of Technology (ETH), Zurich (Collaboration)
University of Adelaide, Australia (Collaboration)
University of Insubria, Italy (Collaboration)
CAEN S.p.A. (Collaboration)
Philip Patrick Allport (Principal Investigator)
Neil Kevin McCauley (Co-Investigator)
David Hutchcroft (Co-Investigator)
Max Klein (Co-Investigator)
John Neil Jackson (Co-Investigator)
Michael Anthony Houlden (Co-Investigator)
John Bourke Dainton (Co-Investigator)
Joost Vossebeld (Co-Investigator)
Themistocles Bowcock (Co-Investigator)
Uta Klein (Co-Investigator)
Sergey Burdin (Co-Investigator) http://orcid.org/0000-0003-4831-4132
Tara Shears (Co-Investigator) http://orcid.org/0000-0002-2653-1366
JOHN RICHARD FRY (Co-Investigator)
Andrew Mehta (Co-Investigator)
Christos Touramanis (Co-Investigator) http://orcid.org/0000-0001-5191-2171
Timothy Greenshaw (Co-Investigator)
Timothy John Jones (Researcher)
Girish Dahyabhai Patel (Researcher)
Gianluigi Casse (Researcher)
Stephen John Maxfield (Researcher)
Barry Thomas King (Researcher)
Aaltonen T (2012) Combination of CDF and D0 measurements of the W boson helicity in top quark decays in Physical Review D
Aaltonen T (2014) Precise measurement of the W -boson mass with the Collider Detector at Fermilab in Physical Review D
Aaltonen T (2013) Production of K S 0 , K * ± ( 892 ) and φ 0 ( 1020 ) in minimum bias events and K S 0 and Λ 0 in jets in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2012) Search for a Higgs boson in the diphoton final state using the full CDF data set from p p ¯ collisions at s = 1.96 TeV in Physics Letters B
Aaltonen T (2013) Updated search for the standard model Higgs boson in events with jets and missing transverse energy using the full CDF data set in Physical Review D
Aaltonen T (2012) Production of Λ 0 , Λ ¯ 0 , Ξ ± , and Ω ± hyperons in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2012) Precise measurement of the W-boson mass with the CDF II detector. in Physical review letters
Aaltonen T (2014) Evidence for s-channel single-top-quark production in events with one charged lepton and two jets at CDF. in Physical review letters
Aaltonen T (2012) Measurement of the C P -violating phase β s J / ψ φ in B s 0 → J / ψ φ decays with the CDF II detector in Physical Review D
Aaltonen T (2012) Search for a dark matter candidate produced in association with a single top quark in pp collisions at sqrt[s]=1.96 TeV. in Physical review letters
Aaltonen T (2013) Search for a dijet resonance in events with jets and missing transverse energy in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2013) Direct measurement of the total decay width of the top quark. in Physical review letters
Aaltonen T (2013) W-boson polarization measurement in the t t ¯ dilepton channel using the CDF II detector in Physics Letters B
Aaltonen T (2014) Measurements of direct CP-violating asymmetries in charmless decays of bottom baryons. in Physical review letters
Aaltonen T (2013) Measurement of the cross section for prompt isolated diphoton production using the full CDF run II data sample. in Physical review letters
Aaltonen T (2012) Measurement of C P -violating asymmetries in D 0 → π + π - and D 0 → K + K - decays at CDF in Physical Review D
Aaltonen T (2014) Search for new physics in trilepton events and limits on the associated chargino-neutralino production at CDF in Physical Review D
Aaltonen T (2012) Search for new phenomena in events with two Z bosons and missing transverse momentum in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2013) Measurement of the differential cross section dσ/d(cosθ(t)) for Top-Quark Pair Production in pp Collisions at sqrt[s] = 1.96 TeV. in Physical review letters
Aaltonen T (2012) Measurement of ZZ production in leptonic final states at sqrt[s] of 1.96 TeV at CDF. in Physical review letters
Aaltonen T (2012) Search for a heavy vector boson decaying to two gluons in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2012) Search for the rare radiative decay W → π γ in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2012) Diffractive dijet production in p ¯ p collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2013) Measurement of the cross section for direct-photon production in association with a heavy quark in p p ¯ collisions at sqrt[s]=1.96 TeV. in Physical review letters
Aaltonen T (2012) Precision top-quark mass measurement at CDF. in Physical review letters
Aaltonen T (2012) Search for heavy metastable particles decaying to jet pairs in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2012) Search for neutral Higgs bosons in events with multiple bottom quarks at the Tevatron in Physical Review D
Aaltonen T (2014) Search for s-channel single-top-quark production in events with missing energy plus jets in pp collisions at sqrt[s] = 1.96 TeV. in Physical review letters
Aaltonen T (2012) Search for scalar top quark production in $ p\bar{p} $ collisions at $ \sqrt{s}=1.96\;\mathrm{TeV} $ in Journal of High Energy Physics
Aaltonen T (2012) Search for a Low-Mass Standard Model Higgs Boson in the τ τ Decay Channel in p p ¯ Collisions at s = 1.96 TeV in Physical Review Letters
Aaltonen T (2014) Combination of measurements of the top-quark pair production cross section from the Tevatron Collider in Physical Review D
Aaltonen T (2013) Measurement of the mass difference between top and antitop quarks in Physical Review D
Aaltonen T (2012) Combined search for the standard model Higgs boson decaying to a bb pair using the full CDF data set. in Physical review letters
Aaltonen T (2012) Publisher's Note: Novel inclusive search for the Higgs boson in the four-lepton final state at CDF [Phys. Rev. D 86 , 072012 (2012)] in Physical Review D
Aaltonen T (2012) Search for anomalous production of multiple leptons in association with W and Z bosons at CDF in Physical Review D
Aaltonen T (2012) Measurements of the angular distributions of muons from Υ decays in pp collisions at sqrt[s] = 1.96 TeV. in Physical review letters
Aaltonen T (2012) Search for the standard model Higgs Boson produced in association with top quarks using the full CDF data set. in Physical review letters
Aaltonen T (2012) An additional study of multi-muon events produced in p p ¯ collisions at s = 1.96 TeV in Physics Letters B
Aaltonen T (2014) Measurement of the inclusive leptonic asymmetry in top-quark pairs that decay to two charged leptons at CDF. in Physical review letters
Aaltonen T (2014) Mass and lifetime measurements of bottom and charm baryons in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2014) Measurement of the Z Z production cross section using the full CDF II data set in Physical Review D
Aaltonen T (2012) Search for W Z + Z Z production with missing transverse energy + jets with b enhancement at s = 1.96 TeV in Physical Review D
Aaltonen T (2013) Measurement of the top quark forward-backward production asymmetry and its dependence on event kinematic properties in Physical Review D
Aaltonen T (2013) Search for the production of Z W and Z Z boson pairs decaying into charged leptons and jets in p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2012) Transverse momentum cross section of e + e - pairs in the Z -boson region from p p ¯ collisions at s = 1.96 TeV in Physical Review D
Aaltonen T (2014) Measurement of the top-quark mass in the all-hadronic channel using the full CDF data set in Physical Review D
Aaltonen T (2013) Search for resonant top-antitop production in the lepton plus jets decay mode using the full CDF data set. in Physical review letters
Aaltonen T (2014) Invariant-mass distribution of jet pairs produced in association with a W boson in p p ¯ collisions at s = 1.96 TeV using the full CDF Run II data set in Physical Review D
Aaltonen T (2012) Measurements of the angular distributions in the decays B→K(*)µ(+)µ(-) at CDF. in Physical review letters
Aaltonen T (2012) Novel inclusive search for the Higgs boson in the four-lepton final state at CDF in Physical Review D
ST/H001069/1 01/10/2009 31/03/2011 £2,062,713
ST/H001069/2 Transfer ST/H001069/1 01/10/2010 30/09/2012 £5,357,268
Description This is being entered in 2017 for a grant issued in 2010. The work contained here prepared the groundwork for some of the most important discoveries of the last decade, including the discovery of the Higgs boson, which completed our understanding of the Standard Model. It also contained the preparatory work for the discovery of neutrino oscillations. This showed that, at least in one area of the Standard Model, new physics was to be found.
Exploitation Route These underpin the future of fundamental physics research. The data here will inform the European Strategy 2019 and set the framework for future investment Europe-wide.
Sectors Aerospace, Defence and Marine,Digital/Communication/Information Technologies (including Software),Electronics,Other
URL http://cern.ch
Description CERN Scientific Associateship
Amount SFr. 94,380 (CHF)
Organisation European Organization for Nuclear Research (CERN)
Description LAGUNA-LBNO
Funding ID EU FP7 Project Grant Agreement 284518
Department Seventh Framework Programme (FP7)
Description MODES-SNM
Funding ID 284842 FP7-SEC-2011-1
Description Collaboration with Fondazione Bruno Kessler (FBK)
Organisation Fondazione Bruno Kessler
PI Contribution Creation of the partnership
Collaborator Contribution We have started a partnership in 2 main areas. The first is with the MicroSystems Division (CMM); previous staff member Prof. G. Casse became its director in 2016. We have expanded this to deep learning with their IT department.
Impact Award of STFC CDT, collaboration with Microsoft. This is multi-disciplinary and impacts health.
Description Compact High Energy Camera
PI Contribution Lead design and construction of camera for GCT telescope.
Collaborator Contribution Electronics testing. Software development.
Impact Design of camera. Prototype camera.
Department Max Planck Institute for Nuclear Physics
Organisation Nagoya University
Organisation University of Adelaide
Organisation University of Amsterdam
Organisation Arktis Radiation Detectors
PI Contribution Detector development and system integration
Collaborator Contribution Providing detector systems, electronics, DAQ, end-user evaluation
Impact Still ongoing. Major deliverable is a complete system demonstrator of a system capable to detect SNM in realistic conditions, to be evaluated at three different locations (ports of entry).
Organisation CAEN S.p.A.
Organisation ETH Zurich
Organisation Government of Ireland
Department Office of the Revenue Commissioners
Organisation University of Insubria
Department Department of Physics and Mathematics
Organisation University of Padova
Description Mirrors for CTA
Organisation Glyndwr University
PI Contribution Specification of mirrors
Collaborator Contribution Mirror construction process, test mirrors.
Impact Test mirrors for study of production process
Description Ann Marks IOP lecture: "Latest news from LHC"
Results and Impact Talk given for the annual Ann Marks memorial lecture for the Merseyside IOP branch
Description Antimatter matters
Results and Impact Two talks at the Royal Astronomical Society in London, one aimed at the general public, one with higher fraction of school children, both on antimatter.
URL https://www.ras.org.uk/events-and-meetings/external-meetings/155-events-and-meetings/index.php?optio...
Description Bang goes the Theory
Results and Impact Interview on antimatter, tour of LHCb for interview on BBC1 Bang goes the theory.
Subsequent sampling of footage for further symphony of science video. Repeat invitations back to BBC.
Description CERN 3.5 TeV startup webcast
Results and Impact Interviewed for LHCb status and plans as part of global LHC startup webcast. Interview available on CERN youtube channel.
Interview sampled for Symphony of Science video.
Description CERN MP visits
Results and Impact Visits of local MP and Science and Technology select committee to CERN. Participated. Ad hoc briefing to S&T committee prior to visit.
Description Collaboration with artist Yu-Chen Wang for Broken Symmetries exhibit (FACT/Arts@CERN)
Results and Impact Discussions with artist Yu-Chen Wang, a FACT/Arts@CERN Collider honourable mention winner, as a collaboration for the development of her piece for the Broken Symmetries exhibit. This exhibit is currently at FACT and will tour Europe and beyond after March 2019. Yu-Chen's piece documents her journey and way of understanding particle physics and very foreign concepts.
URL https://www.fact.co.uk/news/2018/12/artist-interview-yu-chen-wang
Description Debate with artists at Broken Symmetries exhibit launch
Results and Impact Debate with two participating artists at the launch of the Broken Symmetries exhibit (arising from the 3 year Arts@CERN and FACT collaboration).
URL https://www.fact.co.uk/news/2018/11/liverpool-laser-talks
Description Discovery channel Canada
Results and Impact Interview on antimatter for science news programme.
Description Faster than the speed of light (documentary)
Results and Impact Interview on neutrino physics for BBC4 documentary.
Description International press interviews
Results and Impact Irish Times (several).
Many international publications following antimatter/Angels and Demons link (2009).
Repeat requests.
Year(s) Of Engagement Activity 2009,2010,2011,2012
Description International radio interviews
Results and Impact Interviews: 2009 antimatter Drivetime (Irel)
2010 3.5 TeV running: ORF (Austria)
2011 Higgs news Newstalk (Irel)
2012 Higgs Newstalk (irel), radio New Zealand.
Repeat calls.
Description Interview with Nature about particle physics/media experiences and preparation
Results and Impact Interview about experiences with the media and how to best prepare for them.
URL https://www.nature.com/articles/d41586-018-06871-7
Description Interview with New Scientist on LHC 10th anniversary
Results and Impact Interview with the New Scientist on the LHC 10th anniversary
URL https://www.newscientist.com/article/mg23931953-000-the-higgs-hunter-has-just-turned-10-why-is-nobod...
Description Interview with the Guardian for LHC upgrade
Results and Impact Interview with Ian Sample on the HL-LHC upgrade. The story was subsequently picked up by other media outlets.
URL https://www.theguardian.com/science/2018/jun/15/720m-large-hadron-collider-upgrade-could-upend-parti...
Description Keynote speaker at New Scientist Live!
Results and Impact Keynote talk on "Why hasn't the LHC found anything new - or has it?"
URL https://live.newscientist.com/speakers/tara-shears
Description LIFT conference
Results and Impact 20 minute talk on LHC latest results to ~500 technology/media professionals. Recorded, hosted online on conference website. Reported in CERN bulletin.
Lecture on conference website.
Description Local radio (UK) interviews
Results and Impact Various interviews around:
2008: LHC startup (18 UK stations)
2011: neutrinos (11 UK stations)
2012: local Liverpool LHC work (radio merseyside)
go-to contact for particle physics for several UK local radio stations.
Description New Scientist instant expert series: keynote talk
Results and Impact (takes place on April 6th). A keynote talk on the LHC and big questions that remain unanswered.
URL https://www.sciencelive.net/event/760/
Description News interviews (Higgs)
Results and Impact 2011: BBC news, BBC news 24 filmed interviews.
2012: RTE news filmed interview.
2013: Newsround filmed interview (unused)
on BBC news go-to list.
Description Physics society talks
Results and Impact IOP SE (2009), UCL chemistry&physics (2010), Liverpool physics (2011). One hour talk on LHC results, Q&A.
repeat invitations.
Description Podcast for Naked Scientists on LHC 10th anniversary
Results and Impact Podcast with the naked scientists on the LHC 10th anniversary plus talking about FCC
URL https://www.thenakedscientists.com/articles/interviews/10-years-lhc
Description Radio 4 Today interviews
Results and Impact Interviewed for 2009: antimatter 2010: LHC restart, 2011: neutrinos.
Description Royal Institution talks
Results and Impact 2011: evening lecture, ~350 attendees. Recorded, lecture available online.
2013: invitation to give evening discourse (Sep. 2013)
Lecture available online on RI video channel.
Description Royal Society talks
Results and Impact Cafe scientifique style discussion with extended Q&A, traditional talk with Q&A.
Description Science Festival (Edinburgh)
Results and Impact ~200 attendees for both talks.
Return invitation. Press report after 2010 talk (with Jim Al-Khlalili, Brian Cox).
Description Science Friday: antimatter
Results and Impact Discussed antimatter and LHCb as part of the Science Friday series on Radio Merseyside, April 2018
Description Science festival (British)
Results and Impact talks on latest LHC results. Q&A.
Now on BSF physics committee. 2012 talk led to pieces in the Irish Times and Financial Times on LHCb CP violation results, Naked scientist podcast. 2010 talk led to X-change appearance.
Description Science festival (ESOF)
Results and Impact 2010 talk (sponsored by IOP) resulted in follow-up IOP coverage. 2012 talk (in conjunction with RIA) resulted in press followup, online video interview.
Online video interview.
Description Speaker at philosophy/physics workshop on model independent searches
Results and Impact Speaker at a 2 day workshop encompassing experimental and theoretical particle physicists, cosmologists and astrophysicists, on the philosophical basis of model independent searches and exploring what model independence means.
URL http://www.perspectivalrealism.org/cross-disciplinary-perspectives-on-model-independent-searches/
Description Speaker on Scientific American cruise (4 lectures on particle physics)
Results and Impact 6 hours of lectures on particle physics; the Standard Model; antimatter; dark matter; particle astrophysics and the future. Plus one debate with a quantum physicist.
URL http://www.insightcruises.com/events/sa35/
Description Talk on DELPHI
Results and Impact Talk on particle physics to accompany opening of a video installation centred on the DELPHI experiment in Berlin; mainly to arts inclined public
URL http://www.scheringstiftung.de/index.php?option=com_content&view=article&id=3192%3Athe-unseen-univer...
Description UK press interviews
Results and Impact Interviews for Liverpool Echo, Wilts Gazette and Herald, Belfast Telegraph; FT;
Daily Telegraph; Daily Mail; Independent; Sunday Times; Guardian (several); BBC news online (several).
Description University open day talks
Primary Audience Undergraduate students
Results and Impact Talks on latest LHC news given to undergrad and postgrad open days.
Description World service interviews
Results and Impact Interviewed on Newshour; 2011 (neutrinos), 2012 (LHCb results and supersymmetry).
Description briefings for researcher for forthcoming BBC4 documentary on Einstein to Hawking
Results and Impact Briefings on particle physics links to Einstein and Hawking for a researcher working in the forthcoming BBC4 documentary "From Einstein to Hawking"
Description interview on BBC world service Newsday
Results and Impact BBC world service newsday interview, to talk about LHC story
Description interviews following New Scientist Live! talk
Results and Impact Piece written up based on listening to the talk and Q&A afterwards.
URL https://www.express.co.uk/news/science/1022632/science-news-lhc-large-hadron-collider-quantum-latest...
|
CommonCrawl
|
How to maximize the area of a square inscribed in an equilateral triangle?
We have an equilateral triangle and want to inscribe a square in such a way that the area of the square is maximized.
I sketched two possible ways, not to scale and not perfect.
Note I am not sure if the second way will really have all square corners touching the triangle sides.
The second case appears to give a bigger side length for the square, and so a bigger area. But I do not know how to determine the angles involved. How can I solve this?
geometry euclidean-geometry triangles area maxima-minima
DrZ214
$\begingroup$ @TobyMak This isn't a duplicate, since OP is asking which of two specific configurations is better. $\endgroup$ – Parcly Taxel Aug 13 '19 at 2:53
$\begingroup$ How do you define "inscribed"? In your first sketch, one corner of the square is actually floating, not touching the triangle. I think it is safe to assume that there is only one way to inscribe a square in an equilateral triangle, and it is when one side of the square lays exactly on one side of the triangle (as in your second sketch). As a result, the problem of maximization is a non-problem. $\endgroup$ – virolino Aug 13 '19 at 11:04
$\begingroup$ @ParclyTaxel Are they? Because that's not the question asked in the title. $\endgroup$ – Jack M Aug 13 '19 at 11:22
$\begingroup$ @JackM It is in the consideration of the title problem that the real question has been asked. $\endgroup$ – Parcly Taxel Aug 13 '19 at 11:24
$\begingroup$ Possible duplicate of What is the maximum area of a square inscribed in an equilateral triangle? $\endgroup$ – Jam Nov 20 '19 at 19:01
Let $a_1$ and $a_2$ be the side lengths of the two squares. To determine which one is larger, we simply look at their ratio below.
With the angles in the diagram,
$$d_1=\frac{1}{2\tan 30}a_1=\frac{\sqrt{3}}{2}a_1$$ $$d_2=\frac{\sin 15}{\sin 30}a_2=\frac{1}{2\cos 15}a_2$$
Assume both equilateral triangles have unit height.
$$1=a_1+d_1=\left(1+\frac{\sqrt{3}}{2}\right)a_1=\frac{1}{2}(2+\sqrt{3})a_1$$ $$1=\sqrt{2}a_2+d_2=\left(\sqrt{2}+\frac{1}{2\cos 15}\right)a_2=\frac{1}{2}(\sqrt{2}+\sqrt{6})a_2$$
So, their ratio is
$$\frac{a_1}{a_2}= \frac{\sqrt{2}+\sqrt{6}}{2+\sqrt{3}} =\left(\frac{8+4\sqrt{3}}{7+4\sqrt{3}}\right)^{\frac{1}{2}} > 1$$
edited Aug 16 '19 at 17:30
Quanto
The second configuration (square has edge contact with triangle) indeed has a bigger inscribed square. If the square has unit sides, the triangle's side is $1+\frac2{\sqrt3}$:
The symmetric first configuration may be resolved as follows. Set the unit square's bottom corner as $(0,0)$, so that the top corner is $(0,\sqrt2)$. Let the side length of the triangle be $r$. Then we have, by similar triangles, $$\frac{(\sqrt3/2)r-\sqrt2/2}{\sqrt2/2}=\sqrt3$$ $$r=(1+\sqrt3)\sqrt{\frac23}=2.230\dots$$ and this is greater than $1+\frac2{\sqrt3}=2.154\dots$, so the first configuration has a smaller inscribed square than the second.
edited Dec 7 '20 at 0:20
Zsbán Ambrus
Parcly Taxel
$\begingroup$ Maybe, you mean the second, in your the sentence before last? $\endgroup$ – dmtri Aug 18 '19 at 9:51
$\begingroup$ @dmtri I got my words right there, checking again. $\endgroup$ – Parcly Taxel Aug 18 '19 at 9:52
$\begingroup$ Sorry, you are right, you are talking about the sides of the triangle, not the square... $\endgroup$ – dmtri Aug 20 '19 at 18:36
Let sides-lengths of the equilateral triangle be equal to $1$.
Let $x$ be sides-lengths of the square in the first configuration.
Thus, by law of sines we obtain: $$\frac{x}{\sin60^{\circ}}=\frac{\frac{1}{2}}{\sin75^{\circ}}$$ or $$\frac{x}{\frac{\sqrt3}{2}}=\frac{\frac{1}{2}}{\frac{1+\sqrt3}{2\sqrt2}}$$ or $$x=\frac{\sqrt3}{\sqrt2(1+\sqrt3)}$$ and for the area of the square we obtain: $$\frac{3}{2(4+2\sqrt3)}=\frac{3}{4}(2-\sqrt3).$$
Let $y$ be sides-lengths of the square in the second configuration.
Thus, by similarity we obtain: $$\frac{y}{1}=\frac{\frac{\sqrt3}{2}-y}{\frac{\sqrt3}{2}}$$ or $$y=\sqrt3(2-\sqrt3)$$ and for the area of the square we obtain: $$3(7-4\sqrt3),$$ which is a bit greater.
Michael Rozenberg
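A quick numerical cross-check of the answers above (a sketch only; it just evaluates the exact side-length expressions derived there for a triangle of unit side):

```python
from math import sqrt

# Tilted square (corner on the base), side length from the law-of-sines answer:
x = sqrt(3) / (sqrt(2) * (1 + sqrt(3)))
# Square with one side on the base, side length from the similar-triangles answer:
y = sqrt(3) * (2 - sqrt(3))

print(round(x, 5), round(x**2, 5))  # 0.44829 0.20096
print(round(y, 5), round(y**2, 5))  # 0.4641  0.21539
```

The square resting on a side of the triangle is indeed the larger one, in agreement with all three answers.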
What is the maximum area of a square inscribed in an equilateral triangle?
Find the maximum area possible of equilateral triangle that inside the given square
Area of an equilateral triangle
Triangle perimeter and area
Square with equilateral triangle drawn it it, find area of the triangle.
Equilateral triangle touching three sides of a square
How fast is the area increasing for an equilateral triangle under the given conditions?
How to find the length of one of the sides of a triangle given the area
Maximal area of equilateral triangle inside rectangle.
Area of quadrilateral from an equilateral triangle inside a square
What is the length the side of of a square inscribed in a triangle?
|
CommonCrawl
|
how to find bond length
Bond length (also known as bond distance) is the experimentally determined average distance between the nuclei of two atoms that are covalently bonded together in a molecule. Bond lengths have traditionally been expressed in Ångström units, but picometers are sometimes preferred (1 Å = 10^-10 m = 100 pm); typical values lie in the range of 100-200 pm (1-2 Å). Strictly speaking, the length of a bond is a property of the whole molecule: the actual distance between two atoms depends on factors such as the orbital hybridization and the electronic nature of its components, so the distance between the same pair of atoms (e.g., C-H) may vary depending on which compound we are dealing with. In its normal state the C-O bond length is about 1.43 Å, for example, whereas in oxatriquinane it is stretched to 1.54 Å. Bonded atoms also vibrate due to the thermal energy available in the surroundings, so bonds oscillate around some particular equilibrium length; even so, equilibrium bond lengths can be determined experimentally to within ±1 pm.

The length of a bond is determined by the number of bonded electrons (the bond order): when more electrons participate in bond formation, the bond is shorter. Bond length therefore increases in the order triple bond < double bond < single bond. Bond length is also inversely related to bond strength and to the bond dissociation energy: all other factors being equal, a stronger bond will be shorter. It is, in addition, directly proportional to the size of the bonded atoms. Bonds involving hydrogen can be quite short; the shortest bond of all, H-H, is only 74 pm, while among the longest covalent bonds reported is the bismuth-iodine single bond. (The ionic bond, by contrast, is generally the weakest of the true chemical bonds that bind atoms to atoms.)

The covalent radius of an atom is determined by halving the bond distance between two identical atoms, and a single-bond length is approximately equal to the sum of the covalent radii of the two bonded atoms. To estimate a bond length, follow these steps:

1. Draw the Lewis structure.
2. Look up the covalent radii of the two bonded atoms in a table, choosing the values for the corresponding bond type.
3. Find the sum of the two radii.

For example, the covalent radii of H and C are 37 pm and 77 pm respectively, so the C-H bond length is approximately (37 + 77) pm = 114 pm. If the atomic coordinates are known instead (for example from a crystal structure), the bond length can be computed directly as the distance between the two nuclei, i.e. the square root of the sum of squares of the differences between the corresponding Cartesian coordinates.
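As a rough illustration of the two approaches just described (summing covalent radii, and measuring the nucleus-to-nucleus distance), here is a minimal Python sketch. The tiny radius table contains only the H and C values quoted above, the function names are invented for this example, and the coordinates passed in at the end are arbitrary illustrative numbers rather than data from any real structure.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Single-bond covalent radii in picometres (only the values quoted in the text above).
COVALENT_RADIUS_PM = {"H": 37, "C": 77}

def estimate_bond_length_pm(atom1, atom2):
    """Estimate a single-bond length as the sum of the two covalent radii (pm)."""
    return COVALENT_RADIUS_PM[atom1] + COVALENT_RADIUS_PM[atom2]

def bond_length_from_coords(xyz1, xyz2):
    """Bond length as the straight-line distance between two nuclei.

    The result is in whatever units the Cartesian coordinates use.
    """
    return dist(xyz1, xyz2)

print(estimate_bond_length_pm("C", "H"))                       # 114, as in the C-H example above
print(round(bond_length_from_coords((0.0, 0.0, 0.0),
                                    (0.63, 0.63, 0.63)), 3))   # ~1.091 for these illustrative coordinates
```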
|
CommonCrawl
|
Found 2683 results, showing the newest relevant preprints.
Orbital motion of test particles in regular Hayward black hole space-time
In this paper, all possible orbits of test particles are investigated by using the phase plane method in regular Hayward black hole space-time. Our results show that the time-like orbits are divided into four types: unstable circular orbits, separates stable orbits, stable hyperbolic orbits and elliptical orbits in regular Hayward black hole space-time. We find that the orbital properties vary with the change of $\ell$ (a convenient encoding of the central energy density $3/(8\pi\ell^{2})$). If $\ell =\frac{1}{3}$ and $b < 3.45321$, test particles moving toward the black hole will definitely plunge into the black hole. In addition, we find that the innermost stable circular orbit occurs at $r_{min}$ = 5.93055 for $b$ = 3.45321.
10/10 relevant
Hadamard renormalization for a charged scalar field
The Hadamard representation of the Green's function of a quantum field on a curved space-time is a powerful tool for computations of renormalized expectation values. We study the Hadamard form of the Feynman Green's function for a massive charged complex scalar field in an arbitrary number of space-time dimensions. Explicit expressions for the coefficients in the Hadamard parametrix are given for two, three and four space-time dimensions. We then develop the formalism for the Hadamard renormalization of the expectation values of the scalar field condensate, current and stress-energy tensor. These results will have applications in the computation of renormalized expectation values for a charged quantum scalar field on a charged black hole space-time, and hence in addressing issues such as the quantum stability of the inner horizon.
7/10 relevant
The Scattering Map on Oppenheimer--Snyder Spacetime
In this paper we analyse the boundedness of solutions $\phi$ of the wave equation in the Oppenheimer--Snyder model of gravitational collapse in both the case of a reflective dust cloud and a permeating dust cloud. We then proceed to define the scattering map on this space-time, and look at the implications of our boundedness results on this scattering map. Specifically, it is shown that the energy of $\phi$ remains uniformly bounded going forward in time and going backwards in time for both the reflective and the permeating cases, and it is then shown that the scattering map is bounded going forwards, but not backwards, and is therefore not surjective onto the space of finite energy on $\mathcal{I}^+\cup\mathcal{H}^+$. Thus there does not exist a backwards scattering map from finite energy radiation fields on $\mathcal{I}^+\cup\mathcal{H}^+$ to finite energy radiation fields on $\mathcal{I}^-$. We will then contrast this with the situation for scattering in pure Schwarzschild.
Joint H\"older continuity of local time for a class of interacting branching measure valued diffusions
Using a Tanaka representation of the local time for a class of superprocesses with dependent spatial motion, as well as sharp estimates from the theory of uniformly parabolic partial differential equations, the joint H\"older continuity in time and space of said local times is obtained in two and three dimensional Euclidean space.
Dark matter scattering cross section in Yang-Mills theory
We calculate for the first time the scattering cross section between lightest glueballs in SU(2) pure Yang-Mills theory, which are good candidates of dark matter. In the first step, we evaluate the interglueball potential on lattice using the time-dependent formalism of the HAL QCD method, with one lattice spacing. The statistical accuracy is improved by employing the cluster-decomposition error reduction technique and by using all space-time symmetries. We then derive the scattering phase shift and the scattering cross section at low energy, which is compared with the observational constraint on the dark matter self-scattering. We determine the lower bound on the scale parameter of the SU(2) Yang-Mills theory, as $\Lambda$ > 60 MeV.
Nonexistence results for a higher-order evolution equation with an inhomogeneous term depending on time and space
We consider a higher-order evolution equation with an inhomogeneous term depending on time and space. We first derive a general criterion for the nonexistence of weak solutions. Next, we study the particular case when the inhomogeneity depends only on space. In that case, we obtain the first critical exponent in the sense of Fujita, as well as the second critical exponent in the sense of Lee and Ni.
Parameter estimation for SPDEs based on discrete observations in time and space
Parameter estimation for a parabolic linear stochastic partial differential equation in one space dimension is studied observing the solution field on a discrete grid in a fixed bounded domain. Considering an infill asymptotic regime in both coordinates, we prove central limit theorems for realized quadratic variations based on temporal and spatial increments as well as on double increments in time and space. Resulting method of moments estimators for the diffusivity and the volatility parameter inherit the asymptotic normality and can be constructed robustly with respect to the sampling frequencies in time and space. Upper and lower bounds reveal that in general the optimal convergence rate for joint estimation of the parameters is slower than the usual parametric rate. The theoretical results are illustrated in a numerical example.
Higher-spin kinematics & no ghosts on quantum space-time in Yang-Mills matrix models
A classification of bosonic on- and off-shell modes on a cosmological quantum space-time solution of the IIB matrix model is given, which leads to a higher-spin gauge theory. In particular, the no-ghost-theorem is established. The physical on-shell modes consist of 2 towers of higher-spin modes, which are effectively massless but include would-be massive degrees of freedom. The off-shell modes consist of 4 towers of higher-spin modes, one of which was missing previously. The noncommutativity leads to a cutoff in spin, which disappears in the semi-classical limit. An explicit basis allows to obtain the full propagator, which is governed by a universal effective metric. The physical metric fluctuations arise from would-be massive spin 2 modes, which were previously shown to include the linearized Schwarzschild solution. Due to the relation with ${\cal N}=4$ super-Yang-Mills, this is expected to define a consistent quantum theory in 3+1 dimensions, which includes gravity.
Locally contorted space-time invokes inflation, dark energy, and a non-singular Big Bang
The cosmological impact of the Covariant Canonical Gauge Theory of Gravity is investigated. We deduce that, in a metric compatible geometry, the requirement of covariant conservation of matter invokes torsion of space-time. In the Friedman model this leads to a scalar field built from contortion and the metric with the property of dark energy, which transforms the cosmological constant to a time-dependent function. Moreover, the quadratic Riemann-Cartan term in the CCGG field equations adds a geometrical curvature correction to the Friedman equations. Applying the standard $\Lambda$CDM parameter set, those equations give a unique solution for the cosmological field. With a relatively small "deformation" parameter of the theory that determines the strength of the quadratic term and thus the deviation from the Einstein-Hilbert theory, the resulting evolution of the universe starts from a finite extension, undergoes a violent, Big Bang-like, or a smooth and slow bounce process followed by an inflation phase, and exits gracefully to the current dark energy era. The calculations of the SNeIa Hubble diagram and of the most recent transition point from deceleration to acceleration compare well with astronomical observations. The theory also provides a new handle for resolving the cosmological constant problem.
Kinetic limit for a chain of harmonic oscillators with a point Langevin thermostat
We consider an infinite chain of coupled harmonic oscillators with a Langevin thermostat attached at the origin and energy, momentum and volume conserving noise that models the collisions between atoms. The noise is rarefied in the limit, which corresponds to the hypothesis that in the macroscopic unit time only a finite number of collisions takes place (Boltzmann-Grad limit). We prove that, after the hyperbolic space-time rescaling, the Wigner distribution, describing the energy density of phonons in the space-frequency domain, converges to a positive energy density function $W(t, y, k)$ that evolves according to a linear kinetic equation, with the interface condition at $y=0$ that corresponds to reflection, transmission and absorption of phonons. The paper extends the results of [3], where a thermostatted harmonic chain (with no inter-particle scattering) has been considered.
|
CommonCrawl
|
Normal approximation to Poisson distribution Examples
1 Normal approximation to Poisson distribution Examples
2 Formula for continuity corrections
3 Normal approximation to Poisson Distribution Calculator
4 How to calculate probabilities of Poisson distribution approximated by Normal distribution?
5 Normal approximation to Poisson distribution Example 1
In this tutorial we will discuss some numerical examples on Poisson distribution where normal approximation is applicable. For large value of the $\lambda$ (mean of Poisson variate), the Poisson distribution can be well approximated by a normal distribution with the same mean and variance.
Let $X$ be a Poisson distributed random variable with mean $\lambda$.
The mean of Poisson random variable $X$ is $\mu=E(X) = \lambda$ and variance of $X$ is $\sigma^2=V(X)=\lambda$.
The general rule of thumb to use normal approximation to Poisson distribution is that $\lambda$ is sufficiently large (i.e., $\lambda \geq 5$).
For sufficiently large $\lambda$, $X\sim N(\mu, \sigma^2)$. That is $Z=\frac{X-\mu}{\sigma}=\frac{X-\lambda}{\sqrt{\lambda}} \sim N(0,1)$.
Normal Approx to Poisson
Formula for continuity corrections
Poisson distribution is a discrete distribution, whereas normal distribution is a continuous distribution. When we are using the normal approximation to Poisson distribution we need to make correction while calculating various probabilities.
$P(X=A)=P(A-0.5 < X < A+0.5)$
$P(X < A)=P(X < A-0.5)$
$P(X\leq A)=P(X < A+0.5)$
$P(A< X\leq B)=P(A+0.5 < X < B+0.5)$
$P(A\leq X< B)=P(A-0.5 < X < B-0.5)$
$P(A\leq X\leq B)=P(A-0.5 < X < B+0.5)$
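The continuity corrections above are easy to automate. The following is a minimal Python sketch (not part of the original calculator); it applies the corrections and evaluates the normal approximation with scipy.stats.norm, and the function name and event labels are invented for this illustration.

```python
from math import sqrt
from scipy.stats import norm

def poisson_normal_approx(lam, a, b=None, event="eq"):
    """Normal approximation to Poisson(lam) probabilities with continuity correction.

    event: "eq"    -> P(X = a)          "lt"    -> P(X < a)        "le"    -> P(X <= a)
           "gt_le" -> P(a < X <= b)     "ge_lt" -> P(a <= X < b)   "ge_le" -> P(a <= X <= b)
    """
    mu, sigma = lam, sqrt(lam)        # mean and standard deviation of the approximating normal
    z = lambda x: (x - mu) / sigma    # standardisation
    if event == "eq":
        lo, hi = a - 0.5, a + 0.5
    elif event == "lt":
        lo, hi = None, a - 0.5
    elif event == "le":
        lo, hi = None, a + 0.5
    elif event == "gt_le":
        lo, hi = a + 0.5, b + 0.5
    elif event == "ge_lt":
        lo, hi = a - 0.5, b - 0.5
    elif event == "ge_le":
        lo, hi = a - 0.5, b + 0.5
    else:
        raise ValueError(f"unknown event type: {event}")
    upper = norm.cdf(z(hi))
    lower = norm.cdf(z(lo)) if lo is not None else 0.0
    return upper - lower

# e.g. P(X = 50) for lambda = 45, as in Example 1(a) below:
print(round(poisson_normal_approx(45, 50, event="eq"), 4))   # ~0.045
```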
Normal approximation to Poisson Distribution Calculator
The calculator takes the Poisson parameter ($\lambda$), the type of probability event (P(X=A), P(X < A), P(X ≤ A), P(A < X ≤ B), P(A ≤ X < B) or P(A ≤ X ≤ B)) and the value(s) of $A$ and/or $B$, and reports the mean ($\mu=\lambda$), the standard deviation ($\sqrt{\lambda}$) and the required probability.
How to calculate probabilities of Poisson distribution approximated by Normal distribution?
Step 1 – Enter the Poisson Parameter $\lambda$
Step 2 – Select appropriate probability event
Step 3 – Enter the values of $A$ or $B$ or Both
Step 4 – Click on "Calculate" button to get normal approximation to Poisson probabilities
Step 5 – The calculator gives the mean of the distribution
Step 6 – The calculator gives the variance of the distribution
Step 7 – Calculate the required probability
Normal approximation to Poisson distribution Example 1
The mean number of kidney transplants performed per day in the United States in a recent year was about 45. Find the probability that on a given day,
a. exactly 50 kidney transplants will be performed,
b. at least 65 kidney transplants will be performed, and
c. no more than 40 kidney transplants will be performed.
Let $X$ denote the number of kidney transplants per day. The mean number of kidney transplants performed per day in the United States in a recent year was about 45. $\lambda = 45$. $X$ follows Poisson distribution, i.e., $X\sim P(45)$.
Since $\lambda= 45$ is large enough, we use normal approximation to Poisson distribution. That is $Z=\dfrac{X-\lambda}{\sqrt{\lambda}}\to N(0,1)$ for large $\lambda$. (We use continuity correction)
a. The probability that on a given day, exactly 50 kidney transplants will be performed is
$$ \begin{aligned} P(X=50) &= P(49.5< X < 50.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= P\bigg(\frac{49.5-45}{\sqrt{45}} < \frac{X-\lambda}{\sqrt{\lambda}} < \frac{50.5-45}{\sqrt{45}}\bigg)\\ &= P(0.67 < Z < 0.82)\\ & = P(Z < 0.82) - P(Z < 0.67)\\ &= 0.7939-0.7486\\ & \quad\quad (\text{Using normal table})\\ &= 0.0453 \end{aligned} $$
b. The probability that on a given day, at least 65 kidney transplants will be performed is
$$ \begin{aligned} P(X\geq 65) &= 1-P(X\leq 64)\\ &= 1-P(X < 64.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= 1-P\bigg(\frac{X-\lambda}{\sqrt{\lambda}} < \frac{64.5-45}{\sqrt{45}}\bigg)\\ &= 1-P(Z < 2.91)\\ &= 1-0.9982\\ & \quad\quad (\text{Using normal table})\\ &= 0.0018 \end{aligned} $$
c. The probability that on a given day, no more than 40 kidney transplants will be performed is
$$ \begin{aligned} P(X\leq 40) &= P(X < 40.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= P\bigg(\frac{X-\lambda}{\sqrt{\lambda}} < \frac{40.5-45}{\sqrt{45}}\bigg)\\ &= P(Z < -0.67)\\ &= 0.2514\\ & \quad\quad (\text{Using normal table}) \end{aligned} $$
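As a quick check of Example 1 (a sketch; scipy.stats.norm is used here purely to evaluate the normal CDF):

```python
from math import sqrt
from scipy.stats import norm

lam = 45                              # mean number of transplants per day
z = lambda x: (x - lam) / sqrt(lam)   # standardise with the continuity-corrected bound

p_a = norm.cdf(z(50.5)) - norm.cdf(z(49.5))   # (a) exactly 50:      P(49.5 < X < 50.5)
p_b = 1 - norm.cdf(z(64.5))                   # (b) at least 65:     1 - P(X < 64.5)
p_c = norm.cdf(z(40.5))                       # (c) no more than 40: P(X < 40.5)

print(round(p_a, 4), round(p_b, 4), round(p_c, 4))   # 0.045 0.0018 0.2512
```

These agree with the worked values above up to the rounding of the z-scores to two decimal places for the normal table.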
A radioactive element disintegrates such that it follows a Poisson distribution. If the mean number of particles ($\alpha$) emitted is recorded in a 1 second interval as 69, evaluate the probability of:
a. Less than 60 particles are emitted in 1 second.
b. Between 65 and 75 particles inclusive are emitted in 1 second.
Let $X$ denote the number of particles emitted in a 1 second interval. The mean number of $\alpha$-particles emitted per second $69$. Thus $\lambda = 69$ and given that the random variable $X$ follows Poisson distribution, i.e., $X\sim P(69)$.
a. The probability that less than 60 particles are emitted in 1 second is
$$ \begin{aligned} P(X < 60) &= P(X < 59.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= P\bigg(\frac{X-\lambda}{\sqrt{\lambda}} < \frac{59.5-69}{\sqrt{69}}\bigg)\\ &= P(Z < -1.14)\\ &= 0.1271\\ & \quad\quad (\text{Using normal table}) \end{aligned} $$
b. The probability that between $65$ and $75$ particles (inclusive) are emitted in 1 second is
$$ \begin{aligned} P(65\leq X\leq 75) &= P(64.5 < X < 75.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= P\bigg(\frac{64.5-69}{\sqrt{69}} < \frac{X-\lambda}{\sqrt{\lambda}} < \frac{75.5-69}{\sqrt{69}}\bigg)\\ &= P(-0.54 < Z < 0.78)\\ &= P(Z < 0.78)- P(Z < -0.54) \\ &= 0.7823-0.2946\\ & \quad\quad (\text{Using normal table})\\ &= 0.4877 \end{aligned} $$
The number of a certain species of a bacterium in a polluted stream is assumed to follow a Poisson distribution with a mean of 200 cells per ml. if a one ml sample is randomly taken, then what is the probability that this sample contains 225 or more of this bacterium?
Let $X$ denote the number of a certain species of a bacterium in a polluted stream. The mean number of certain species of a bacterium in a polluted stream per ml is $200$. Thus $\lambda = 200$ and given that the random variable $X$ follows Poisson distribution, i.e., $X\sim P(200)$.
Since $\lambda= 200$ is large enough, we use normal approximation to Poisson distribution. That is $Z=\dfrac{X-\lambda}{\sqrt{\lambda}}\to N(0,1)$ for large $\lambda$. (We use continuity correction)
The probability that one ml sample contains 225 or more of this bacterium is
$$ \begin{aligned} P(X\geq 225) &= 1-P(X\leq 224)\\ &= 1-P(X < 224.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= 1-P\bigg(\frac{X-\lambda}{\sqrt{\lambda}} < \frac{224.5-200}{\sqrt{200}}\bigg)\\ &= 1-P(Z < 1.73)\\ &= 1-0.9582\\ & \quad\quad (\text{Using normal table})\\ &= 0.0418 \end{aligned} $$
Vehicles entering an expressway follow a Poisson distribution with a mean of 25 vehicles per hour. Find the probability that in 1 hour the number of vehicles is between 23 and 27 inclusive, using the normal approximation to the Poisson distribution.
Let $X$ denote the number of vehicles entering the expressway per hour. The mean number of vehicles entering the expressway per hour is $25$. Thus $\lambda = 25$ and the random variable $X$ follows a Poisson distribution, i.e., $X\sim P(25)$.
The probability that in 1 hour the vehicles are between $23$ and $27$ (inclusive) is
$$ \begin{aligned} P(23\leq X\leq 27) &= P(22.5 < X < 27.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= P\bigg(\frac{22.5-25}{\sqrt{25}} < \frac{X-\lambda}{\sqrt{\lambda}} < \frac{27.5-25}{\sqrt{25}}\bigg)\\ &= P(-0.5 < Z < 0.5)\\ &= P(Z < 0.5)- P(Z < -0.5) \\ &= 0.6915-0.3085\\ & \quad\quad (\text{Using normal table})\\ &= 0.383 \end{aligned} $$
Assuming that the number of white blood cells per unit of volume of diluted blood counted under a microscope follows a Poisson distribution with $\lambda=150$, what is the probability, using a normal approximation, that a count of 140 or less will be observed?
Let $X$ denote the number of white blood cells per unit of volume of diluted blood counted under a microscope. Given that the random variable $X$ follows Poisson distribution, i.e., $X\sim P(150)$.
Since the parameter of Poisson distribution is large enough, we use normal approximation to Poisson distribution. That is $Z=\dfrac{X-\lambda}{\sqrt{\lambda}}\to N(0,1)$ for large $\lambda$. (We use continuity correction)
The probability that a count of 140 or less will be observed is
$$ \begin{aligned} P(X \leq 140) &= P(X < 140.5)\\ & \quad\quad (\text{Using continuity correction})\\ &= P\bigg(\frac{X-\lambda}{\sqrt{\lambda}} < \frac{140.5-150}{\sqrt{150}}\bigg)\\ &= P(Z < -0.78)\\ &= 0.2177\\ & \quad\quad (\text{Using normal table}) \end{aligned} $$
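The remaining examples can be verified the same way. In this sketch the helper simply evaluates P(lo < X < hi) under the approximating normal; the continuity-corrected bounds are taken from the worked solutions above, and the small differences from the quoted answers come from rounding the z-scores to two decimal places when reading the printed normal table.

```python
from math import sqrt
from scipy.stats import norm

def normal_window(lam, lo=None, hi=None):
    """P(lo < X < hi) for X ~ N(lam, lam); pass continuity-corrected bounds."""
    s = sqrt(lam)
    upper = norm.cdf((hi - lam) / s) if hi is not None else 1.0
    lower = norm.cdf((lo - lam) / s) if lo is not None else 0.0
    return upper - lower

print(round(normal_window(69,  hi=59.5), 4))            # Example 2(a): 0.1264
print(round(normal_window(69,  lo=64.5, hi=75.5), 4))   # Example 2(b): 0.489
print(round(normal_window(200, lo=224.5), 4))           # Example 3:    0.0416
print(round(normal_window(25,  lo=22.5,  hi=27.5), 4))  # Example 4:    0.3829
print(round(normal_window(150, hi=140.5), 4))           # Example 5:    0.219
```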
In this tutorial, you learned about how to calculate probabilities of Poisson distribution approximated by normal distribution using continuity correction. You also learned about how to solve numerical problems on normal approximation to Poisson distribution.
To read more, see the step-by-step tutorial on the theory of the Poisson distribution and the Poisson Distribution Calculator with Examples. That tutorial will help you to understand the Poisson distribution and its properties, like the mean, variance and moment generating function.
To learn more about other probability distributions, please refer to the following tutorial:
Let me know in the comments if you have any questions on the normal approximation to the Poisson distribution, and share your thoughts on this article.
Hypergeometric Distribution
Normal Distribution
|
CommonCrawl
|
Abstract Systems Theory
2010 Mathematics Subject Classification: Primary: 93A10 [MSN][ZBL]
This article discusses systems in the context of general systems theory. For systems in the sense of logics, see formal systems.
1 History and Motivation
3 Elementary and Nonelementary Systems
4 Constructing Systems
5 Special classes of Systems
6 Morphisms between systems
History and Motivation
In many sciences, e.g. sociology, biology, cybernetics, chemistry, politics, economics (see the Wikipedia article Systems theory), the notion of a system was defined independently of the others. Thus, problems were unavoidable as soon as topics at the interface between these fields were discussed (e.g. cybernetic models of biological systems). A more fundamental definition of a system was required, encompassing the different concepts.
These definitions have in common that a system consists of elements related to each other [S]. Accordingly, Hall and Fagen proposed a definition of a general system as 'a set of objects together with relationships between the objects and between their attributes' [HF],[MP] in 1956. Their paper was published in the first volume of the Journal General Systems. Ludwig von Bertalanffy, one of the editors of the Journal, uses a similar definition in his famous book General Sytem Theory [vB] in 1968. The formalization of this view of a system as a relation between two or more sets was subject to others, however, as for example Mesarovic and Takahara [MT1],[MT2]. We will follow their approach here, but not without mentioning that a number of other definitions of an abstract system have been proposed (Klir [K1],[K2],[K3],[K4], Lin [L], Polderman and Willems [PW],[Wi], Rosen [R1],[R2], Wymore [W1],[W2], Wang [Wa]). Their basic ideas may correspond to the definition of Mesarovic and Takahara in essence, but their theories on the whole usually differ considerably. A standard definition accepted by the whole systems community seems to be still missing.
A system is defined as a relation $S\subseteq I \times O$, whereby $I$ and $O$ are sets representing inputs and outputs [MT2]. Consequently, $S$ will be called an input-output (or elementary) system more specifically. This definition reflects some kind of black-box view on the system, since the internal structure or function is not represented. It deals only with the correlations between inputs and outputs.
Sometimes, more than two factors are considered and a system is defined as $S\subseteq \prod_{i\in J} F_i$. Therein, the sets $F_i$ are the objects belonging to the system. A set $F_i$ gives the totality of different properties, which this object may potentially have [MP]. This version of a systems definition reveals some insight into the internal structure and function of the system. It is used for describing composed (or nonelementary) systems.
Elementary and Nonelementary Systems
The two versions of the systems definition given above are interrelated. Two or more elementary systems can be combined into a nonelementary system; in this way a hierarchy of systems can be built. A goal-seeking system $S \subseteq I \times O$ is a simple example of a nonelementary system. It is composed of two elementary systems $S_1\subseteq (I\times O) \times M$ and $S_2\subseteq (M\times I) \times O$. The subsystem $S_1$ gives admissible goals $m\in M$ depending on the inputs $i\in I$ and outputs $o\in O$ of $S$; the subsystem $S_2$, on the other hand, relates inputs $i\in I$ and goals $m\in M$ to appropriate outputs $o\in O$.
Constructing Systems
The above definition of an abstract system is general enough to be used in most applications. On the other hand, it is too general for the development of a rich theory with many nontrivial properties. Thus, abstract systems may be extended by additional structures. Typical structures are algebras (e.g. linear systems), function spaces (e.g. time systems, dynamical systems), probability spaces (stochastic systems), ordering relations and so on. In some cases, such structures are used for describing the system class under consideration constructively. Žampa et al. [ZSV], for example, follow this approach. Their starting point is time-discrete systems; systems with a continuous time space are introduced as limits of sequences of time-discrete systems with increasingly higher time resolution.
Special classes of Systems
Functional System
A functional system $S\subseteq I \times O$ is a system for which $S$ is a function $S\colon I\rightarrow O$.
Time-System
A time system $S$ is a system in which inputs and outputs are functions defined on a set $T$, i.e. $S\subseteq A^T \times B^T$. The set $T$ has to be equipped with a total ordering relation and represents time. Usually, time is formalized using stronger assumptions, e.g. by demanding the structure of a linear space as well. The weaker assumption used here, however, allows one to include, e.g., discrete event systems.
Linear System
A system $S\subseteq I \times O$ is called linear, if both $I$ and $O$ are $K$-vector spaces and if $S$ is closed under linear operations: $$\begin{array}{rcl} s,s'\in S &\Longrightarrow & s+s'\in S\\ s\in S, \alpha\in K&\Longrightarrow & \alpha s\in S \end{array}$$
Morphisms between systems
Let $S\subseteq I \times O$ and $S'\subseteq I' \times O'$ be two systems. A system morphism $h\colon S\rightarrow S'$ (in the relational sense) is a pair $h=(h_I, h_O)$ of mappings $h_I\colon I\rightarrow I'$, $h_O\colon O\rightarrow O'$ fulfilling $(i,o)\in S \Longrightarrow (h_I(i),h_O(o))\in S'$. For functional systems, this definition of a morphism turns out to be inappropriate because the function property of $S$ is not preserved under the morphism. This has led to the notion of a system morphism $h\colon S\rightarrow S'$ in the functional sense; here, $h$ is a pair $h=(h_I, h_O)$ of mappings $h_I\colon I\rightarrow I'$, $h_O\colon O'\rightarrow O$ fulfilling $S'(h_I(i))= h_O(S(i))$ for $i\in I$.
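The relational definitions above are easy to make concrete for finite sets. The following toy sketch (an illustration added here, not part of the encyclopedia article; all sets and mappings are made up) represents an elementary system as a set of input-output pairs, tests the functional property, and checks the morphism condition in the relational sense.

```python
# Toy illustration of the relational definitions: a system is a finite relation
# S of (input, output) pairs, i.e. a subset of I x O.
def is_functional(S):
    """True if the relation S assigns at most one output to every input."""
    seen = {}
    for i, o in S:
        if seen.setdefault(i, o) != o:
            return False
    return True

def is_morphism(S, S_prime, h_I, h_O):
    """Morphism condition (relational sense): (i, o) in S  =>  (h_I(i), h_O(o)) in S_prime."""
    return all((h_I(i), h_O(o)) in S_prime for i, o in S)

# Two made-up elementary systems over small input/output sets.
S  = {(0, "a"), (1, "b")}
Sp = {(0, "A"), (2, "B")}

print(is_functional(S))                                        # True
print(is_morphism(S, Sp, h_I=lambda i: 2 * i, h_O=str.upper))  # True: (0,a)->(0,A), (1,b)->(2,B)
```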
[HF] A.D. Hall, R.E. Fagen, "Definition of a system", General Systems 1(1956)18-28
[K1] G. Klir, "An approach to General systems theory", Van Nostrand 1969
[K2] G. Klir, "Trends in General Systems Theory", Wiley, 1972
[K3] G. Klir, "Architecture of Systems Problem-solving", Plenum Press 1985
[K4] G. Klir, "Facets of Systems Science", Springer 1991
[L] Yi Lin, "General Systems Theory: A Mathematical Approach", Springer 1999
[M] M.D. Mesarovic, "On some mathematical results as properties of general systems", Mathematical systems Theory 2(1968)357-361
[MT1] M.D. Mesarovic, Y. Takahara, "Abstract Systems Theory", Springer 1989, LNCIS 116
[MT2] M.D. Mesarovic, Y. Takahara, "General Systems Theory: Mathematical Foundations", Academic Press 1975
[MP] G. Minati, E. Pessa, "Collective Beings", Springer 2006
[PW] J.W. Polderman and J.C. Willems, "Introduction to Mathematical Systems Theory: A Behavioral Approach", Springer 1998
[R1] R. Rosen, "Fundamentals of Measurement and representation of Natural Systems", North-Holland 1978
[R2] R. Rosen, "Anticipatory systems", Pergamon Press 1985
[S] J. Sanders, "Theoretical Approaches to Systems", Diploma Thesis, Bielefeld 2003
[vB] L. von Bertalanffy, "General System Theory", George Braziller Inc. 1968
[Wa] Yingxu Wang, "Software Engineering Foundations: A Software Science Perspective", Auerbach Pubn 2007
[Wi] J. Willems, "Paradigms and puzzles in the theory of dynamical systems", IEEE Transactions on Automatic Control 36(1991)259-294
[W1] W. Wymore, "Model-Based Systems Engineering", CRC Press 1993
[W2] W. Wymore, "A Mathematical Theory of Systems Engineering, The elements", Wiley 1967
[ZSV] P. Žampa, P. Steska, K. Veselý, "Multivariable linear discrete-time stochastic system continualization", WSEAS Trans. Syst. 3(2004)2898-2903
Abstract Systems Theory. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Abstract_Systems_Theory&oldid=28147
|
CommonCrawl
|
Effect of Modification in Flow Distributor Valve Geometry on the Pressure Drop and Chamber Pressures, in Numerical and Analytical Way, in ORBIT Motor HST Unit
Debanshu Roy, Amit Kumar, Rathindranath Maiti, Prasanta Kumar Das
Subject: Engineering, Mechanical Engineering Keywords: pintle type rotary spool valve; flow distributor valve; computational fluid dynamics (CFD); orbit motor
In this paper, an attempt has been made to analyze the effect of spool port/groove geometry on the pressure drop and chamber pressures, which affect the performance parameters of the flow distributor valve. The work mainly involves formulating detailed mathematical models of the valve and comparing them on the same platform. Matlab has been used for the mathematical modelling. The size of the orifices is kept the same throughout the model for better comparison. First, the construction and functioning of the flow distributor valve, along with the working principles of the hydrostatic (rotary piston) motor, are described. Next, the analytical analysis of area change and pressure drops due to different geometries of the spool valve ports is presented, followed by the computational fluid dynamics (CFD) analysis. A complete mathematical model describing such a flow distributor valve is developed after gaining a comprehensive knowledge of the orifice characteristics and of the flow interactions based on valve geometry. Equations of flow through the different orifices (fixed and variable area) of the valve have been developed based on the relationships obtained earlier.
Preprint BRIEF REPORT | doi:10.20944/preprints202012.0510.v3
On Some Damped 2 Body Problems
Alain Haraux
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: gravitation; singular potential; global solutions; spiraling orbit
The usual equation for both the motion of a single planet around the sun and that of electrons in the deterministic Rutherford-Bohr atomic model is conservative with a singular potential at the origin. When a dissipation is added, new phenomena appear; these were investigated thoroughly by R. Ortega and his co-authors between 2014 and 2017. In particular, all solutions are bounded and tend to $0$ for $t$ large, some of them with asymptotically spiraling, exponentially fast convergence to the center. We provide explicit estimates for the bounds in the general case, which we refine under specific restrictions on the initial state, and we give a formal calculation which could be used to determine practically some special asymptotically spiraling orbits. Besides, a related model with exponentially damped central charge or mass gives some explicit exponentially decaying solutions which might help future investigations. An atomic contraction hypothesis related to the asymptotic dying off of solutions proven for the dissipative model might give a solution to some intriguing phenomena observed in paleontology, familiar electrical devices and high-scale cosmology.
On a Linearly Damped 2 Body Problem
Subject: Physical Sciences, Mathematical Physics Keywords: gravitation; singular potential; global solutions; spiraling orbit
The usual equation for both motions of a single planet around the sun and electrons in the deterministic Rutherford-Bohr atomic model is conservative with a singular potential at the origin. When a dissipation is added, new phenomena appear. It is shown that whenever the momentum is not zero, the moving particle does not reach the center in finite time and its displacement does not blow-up either, even in the classical context where arbitrarily large velocities are allowed. Moreover we prove that all bounded solutions tend to $0$ for $t$ large, and some formal calculations suggest the existence of special orbits with an asymptotically spiraling exponentially fast convergence to the center.
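As a rough numerical illustration of the behaviour described in this abstract (a sketch under an assumed model, not the paper's analysis), one can integrate the linearly damped Kepler equation in the normalized form $\ddot u + \lambda\dot u + u/|u|^3 = 0$ and watch a circular orbit spiral slowly toward the center; the damping coefficient below is hypothetical.

```python
# Numerical sketch of a linearly damped Kepler orbit (assumed normalized form
# u'' + lam*u' + u/|u|^3 = 0); the value of lam is hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

lam = 0.05  # hypothetical damping coefficient

def rhs(t, y):
    u, v = y[:2], y[2:]
    return np.concatenate([v, -lam * v - u / np.linalg.norm(u) ** 3])

# Start on a circular orbit of radius 1 (unit speed in these units).
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 1.0], rtol=1e-9, atol=1e-12)

radii = np.linalg.norm(sol.y[:2], axis=0)
# For slow damping the radius shrinks roughly like exp(-2*lam*t): the orbit spirals inward.
print(radii[0], radii[-1])
```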
Evaluation of Intraorbital Soft Tissue Modifications after Traumatic Injuries Management
Cristian Mihail Dinu, Tiberiu Tamas, Gabriela Agrigoroaei, Sebastian Stoia, Simion Bran, Gabriel Armencea, Avram Manea
Subject: Medicine & Pharmacology, Other Keywords: soft; tissue; orbit; trauma; reconstruction; PSI; segmentation; enopthtalmy
Orbital fractures are a common finding in facial trauma, and serious complications may arise when orbital reconstruction is not done properly. Virtual planning can be used to manufacture patient-specific titanium orbital implants (PSI) through the process of selective laser melting. This method is currently considered the most accurate technique for orbital reconstruction. Even with the most accurate techniques of bone reconstruction, there are still situations where enophthalmos is present after reconstruction, which may be produced by intraorbital soft tissue atrophy. The aim of this paper was to evaluate the orbital soft tissue after post-traumatic reconstruction of orbital wall fractures. Ten patients diagnosed with unilateral orbital fractures were included in this study. A CT scan of the head region with thin slices (0.6 mm) and soft and bone tissue windows was done. After data processing, the STL files were exported and the intraorbital fat tissue volume and the muscular tissue volume were measured. The volumes of the affected orbit tissues were compared with the volumes of the healthy orbit tissues for each patient. Our findings show that a higher or lower grade of fat and muscular tissue loss is present in all cases of reconstructed orbital fractures.
Effects of a Time Varying Moment on a Large Tethered Satellite in Circular or Elliptical Orbit
Narayan Iyer
Subject: Engineering, Other Keywords: Time varying moment; tethered satellite; elliptical orbit; flight management
The purpose of this paper is to investigate the effect of a varying moment on the rotation angle of a large tethered satellite that is orbiting a planet. Two different types of orbits were investigated: a simple circular orbit and an elliptical orbit. Cases with zero and non-zero initial angular rotation velocity were investigated as well. This investigation will assist satellite docking missions. The large rigid tethered satellite is a futuristic concept, and this investigation is meant to assist possible docking missions to the satellite. To simplify the problem, the rotation is constrained to the orbital plane.
The Dirac Fermion of a Monopole Pair (MP) Model
Samuel Yuguru
Subject: Physical Sciences, General & Theoretical Physics Keywords: Dirac fermion, magnetic spin, 4D space-time, spin-orbit coupling
The electron of magnetic spin −1/2 is a Dirac fermion of a complex four-component spinor field. Though it is effectively addressed by relativistic quantum field theory, an intuitive form of the fermion is still lacking. In this novel undertaking, the fermion is examined within the boundary posed by a recently proposed MP model of a hydrogen atom in 4D space-time. Such an unorthodox process conceptually transforms the electron into a four-component spinor of non-abelian character in both Euclidean and Minkowski space-times. Supplemented by several postulates, the relativistic and non-relativistic applications of the model are explored from an alternative perspective. The outcomes have important implications for defining the spin-orbit coupling of particles from external light interactions. These findings, if considered, could properly consolidate the fundamentals of the quantum state of matter from an alternative perspective using quantum field theory, and they warrant further investigations.
Relativity of Energy
Alireza Jamali
Subject: Physical Sciences, General & Theoretical Physics Keywords: minimum gravitational potential; Mercury's orbit; mass-energy space; function space
After proposing the Principle of Minimum Gravitational Potential, and in pursuit of the explanation behind the correction to Newton's gravitational potential that accounts for Mercury's orbit, it is shown, by finding all the higher-order corrections, that the consequences of the existence of the speed of light for gravity are not yet fully explored.
Results of Long-Duration Simulation of Distant Retrograde Orbits
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Distant Retrograde Orbit; DRO; orbits–stability; radiation pressure; orbits–resonance; dynamics
Distant Retrograde Orbits in the Earth-Moon system are gaining in popularity as stable `parking' orbits for various conceptual missions. To investigate the stability of potential Distant Retrograde Orbits, simulations were executed, with propagation running over a thirty-year period. Initial conditions for the vehicle state were limited such that the position and velocity vectors were in the Earth-Moon orbital plane, with the velocity oriented such that it would produce retrograde motion about Moon. The resulting trajectories were investigated for stability against the eccentric relative orbits of Earth and Moon in an environment that also included gravitational perturbations from Sun, Jupiter, and Venus, and the effects of radiation pressure. The results appear to indicate that stability is enhanced for certain resonant states within the Earth-Moon system.
Generating the Triangulations of the Torus with the Vertex-Labeled Complete 4-Partite Graph K_{2,2,2,2}
Serge Lawrencenko, Abdulkarim Magomedov
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: group action; orbit decomposition; polynomial; graph; tree; triangulation; torus; automorphism; quaternion group
Using the orbit decomposition, a new enumerative polynomial P(x) is introduced for abstract (simplicial) complexes of a given type, e.g., trees with a fixed number of vertices or triangulations of the torus with a fixed graph. The polynomial has the following three useful properties. (I) The value P(1) is equal to the total number of unlabeled complexes (of a given type). (II) The value of the derivative P'(1) is equal to the total number of nontrivial automorphisms when counted across all unlabeled complexes. (III) The integral of P(x) from 0 to 1 is equal to the total number of vertex-labeled complexes, divided by the order of the acting group. The enumerative polynomial P(x) is demonstrated for trees and then is applied to the triangulations of the torus with the vertex-labeled complete four-partite graph G = K_{2,2,2,2}, in which specific case P(x) = x^{31}. The graph G embeds in the torus as a triangulation, T(G). The automorphism group of G naturally acts on the set of triangulations of the torus with the vertex-labeled graph G. For the first time, by a combination of algebraic and symmetry techniques, all vertex-labeled triangulations of the torus (twelve in number) with the graph G are classified intelligently without using computing technology, in a uniform and systematic way. It is helpful to notice that the graph G can be converted to the Cayley graph of the quaternion group Q_8 with the three imaginary quaternions i, j, k as generators.
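As a quick consistency check of the three stated properties (using the standard fact, assumed here, that the automorphism group of $K_{2,2,2,2}$ is the wreath product $S_2\wr S_4$ of order $2^4\cdot 4!=384$), the polynomial $P(x)=x^{31}$ reproduces the numbers quoted above:
$$ \begin{aligned} P(1) &= 1 \quad\text{(a single triangulation up to relabeling)},\\ P'(1) &= 31 \quad\text{(its nontrivial automorphisms)},\\ \int_0^1 x^{31}\,dx &= \tfrac{1}{32}, \quad\text{so}\quad \#\{\text{vertex-labeled triangulations}\} = 384\cdot\tfrac{1}{32} = 12. \end{aligned} $$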
Analysis of Wide-Lane Ambiguities Derived from Geometry-Free and Geometry-Based PPP Model and Their Implication on Orbit and Clock Quality
Gang Chen, Sijing Liu, Qile Zhao
Subject: Earth Sciences, Space Science Keywords: geometry-free; geometry-based; wide-lane ambiguity; orbit and clock residual error
Orbit and clock products are used in real-time GNSS precise point positioning without knowing their quality. This study develops a new approach to detect orbit and clock errors by comparing geometry-free and geometry-based wide-lane ambiguities in the PPP model. The reparameterization and estimation procedures of the geometry-free and geometry-based ambiguities are described in detail. The effects of orbit and clock errors on ambiguities are given in analytical expressions. The numerical similarity and differences of geometry-free and geometry-based wide-lane ambiguities are analyzed using different orbit and clock products. Furthermore, two types of typical errors in orbit and clock are simulated and their effects on wide-lane ambiguities are numerically produced and analyzed. The contribution discloses that the geometry-free and geometry-based wide-lane ambiguities are equivalent in terms of their formal errors. Although they are very close in terms of their estimates when the orbit and clock used for geometry-based ambiguities are precise enough, they are not the same, in particular in the case that the used orbit and clock, as a combination, contain significant errors. It is discovered that the discrepancies of geometry-free and geometry-based wide-lane ambiguities coincide with the actual time-variant errors in the used orbit and clock at the line-of-sight direction. This provides a quality index for real-time users to detect the errors in real-time orbit and clock products, which potentially improves the accuracy of positioning.
Robust Beamforming Based On Graph Attention Networks For IRS-assisted Satellite IoT Communications
Hailin Cao, Wang Zhu, Wenjuan Feng, Jin Fan
Subject: Engineering, Electrical & Electronic Engineering Keywords: intelligent reflecting surface; low Earth orbit satellite; graph attention networks; unsupervised learning; beamforming
Satellite communication is expected to play a vital role in realizing Internet of Remote Things (IoRT) applications. This article considers an intelligent reflecting surface (IRS)-assisted downlink low Earth orbit (LEO) satellite communication network, where IRS provides additional reflective links to enhance the intended signal power. We aim to maximize the sum-rate of all the terrestrial users by jointly optimizing the satellite's precoding matrix and IRS's phase shifts. However, it is difficult to directly acquire the instantaneous channel state information (CSI) and optimal phase shifts of IRS due to the high mobility of LEO and the passive nature of reflective elements. Moreover, most conventional solution algorithms suffer from high computational complexity and are not applicable to these dynamic scenarios. A robust beamforming design based on graph attention networks (RBF-GAT) is proposed to establish a direct mapping from the received pilots and dynamic network topology to the satellite and IRS's beamforming, which is trained offline using the unsupervised learning approach. The simulation results corroborate that the proposed RBF-GAT can achieve approximate performance compared to the upper bound with low complexity.
Forward Link Optimization for the Design of VHTS Satellite Networks
Flor G. Ortiz-Gomez, Ramón Martínez, Miguel A. Salas-Natera, Andrés Cornejo, Salvador Landeros-Ayala
Subject: Engineering, Other Keywords: CCM; CINR; Cost per Gbps in orbit; Multibeam Satellite Communications; System Optimization; VCM; VHTS
The concept of geostationary VHTS (Very High Throughput Satellites) is based on multibeam coverage with intensive frequency and polarization reuse in addition to the use of larger bandwidths in the feeder links, in order to provide high capacity satellite links at a reduced cost per Gbps in orbit. The dimensioning and design of satellite networks based on VHTS imposes the analysis of multiple trade-offs to achieve an optimal solution in terms of cost, capacity and figure of merit of the user terminal. In this paper, we propose a new method for sizing VHTS satellite networks based on an analytical expression of the forward link CINR (Carrier-to-Interference-plus-Noise Ratio) that is used to evaluate the trade-off of different combinations of system parameters. The proposed method considers both technical and commercial requirements as inputs including the constraints to achieve the optimum solution in terms of the user G/T, the number of beams and the system cost. The cost model includes both satellite and ground segments. Exemplary results are presented with feeder links using Q/V bands, DVB-S2X and transmission methods based on CCM and VCM (Constant and Variable Coding and Modulation, respectively) in two scenarios with different service areas.
InSAR Baseline Estimation for Gaofen-3 Real-Time DEM Generation
Huan Lu, Zhiyong Suo, Zhenfang Li, Jinwei Xie, Qingjun Zhang
Subject: Earth Sciences, Other Keywords: Gaofen-3 (GF-3); Interferometric synthetic aperture radar (InSAR); DEM; baseline estimation; real-time orbit
For Interferometric Synthetic Aperture Radar (InSAR), the normal baseline is one of the main factors that affect the accuracy of the ground elevation. For Gaofen-3 (GF-3) InSAR processing, the poor accuracy of real-time orbit determination results in a large baseline error, which leads to a modulation error in azimuth and a slope error in range for timely Digital Elevation Model (DEM) generation. In order to address this problem, a baseline estimation method based on an external DEM is proposed in this paper. Firstly, according to the characteristics of the real-time orbit of GF-3 images, orbit fitting is executed to remove the non-linear error factor. Secondly, the height errors between the Shuttle Radar Topography Mission (SRTM) DEM and the GF-3-generated DEM after orbit fitting are obtained in the slant-range plane. These height errors are then used to estimate the baseline error, which has a linear variation. In this way, the orbit error can be calibrated by the estimated baseline error. Finally, DEM generation is performed using the modified baseline and orbit. This procedure is implemented iteratively to achieve a higher-accuracy DEM. Based on the results of GF-3 interferometric SAR data for Hebei, the effectiveness of the proposed algorithm is verified, and the accuracy of GF-3 real-time DEM products can be improved extensively.
Everything Is A Circle: A New Universal Orbital Model
Aslı Pınar Tan
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Solar System, Planetary System, Planet, Satellite, Sun, Earth, Moon, Topology, Circle, Ellipse, Orbit, Trajectory, Orbital Mechanics
Based on measured astronomical position data of heavenly objects in the Solar System and other planetary systems, all bodies in space seem to move in some kind of elliptical motion with respect to each other. According to Kepler's 1st Law, "orbit of a planet with respect to the Sun is an ellipse, with the Sun at one of the two foci." Orbit of the Moon with respect to Earth is also distinctly elliptical, but this ellipse has a varying eccentricity as the Moon comes closer to and goes farther away from the Earth in a harmonic style along a full cycle of this ellipse. In this paper, our research results are summarized, where it is first mathematically shown that the "distance between points around any two different circles in three dimensional space" is equivalent to the "distance of points around a vector ellipse to another fixed or moving point, as in two dimensional space". What is done is equivalent to showing that bodies moving on two different circular orbits in space vector wise behave as if moving on an elliptical path with respect to each other, and virtually seeing each other as positioned at an instantaneously stationary point in space on their relative ecliptic plane, whether they are moving with the same angular velocity, or different but fixed angular velocities, or even with different and changing angular velocities with respect to their own centers of revolution. This mathematical revelation has the potential to lead to far reaching discoveries in physics, enabling more insight into forces of nature, with a formulation of a new fundamental model regarding the motions of bodies in the Universe, including the Sun, Planets, and Satellites in the Solar System and elsewhere, as well as at particle and subatomic level. Based on the demonstrated mathematical analysis, as they exhibit almost fixed elliptic orbits relative to one another over time, the assertion is made that the Sun, the Earth, and the Moon must each be revolving in their individual circular orbits of revolution in space. With this expectation, individual orbital parameters of the Sun, the Earth, and the Moon are calculated based on observed Earth to Sun and Earth to Moon distance data, also using analytical methods developed as part of this research to an approximation. This calculation and analysis process have revealed additional results aligned with observation, and this also supports our assertion that the Sun, the Earth, and the Moon must actually be revolving in individual circular orbits.
A Multi-Criteria Assessment Strategy for 3d Printed Porous Polyetheretherketone (Peek) Patient-Specific Implants for Orbital Wall Reconstruction
Neha Sharma, Dennis Welker, Soheila Aghlmandi, Michaela Maintz, Hans-Florian Zeilhofer, Philipp Honigmann, Thomas Seifert, Florian M. Thieringer
Subject: Medicine & Pharmacology, Allergology Keywords: blow-out; biocompatible materials; computer-aided design; finite element analysis; orbit; implant; orbital fracture; patient-specific modeling; printing; three-dimensional.
Pure orbital blowout fractures occur within the confines of the internal orbital wall. Restoration of orbital form and volume is paramount to prevent functional and esthetic impairment. The anatomical peculiarity of the orbit has encouraged surgeons to develop implants with customized features to restore its architecture. This has resulted in worldwide clinical demand for patient-specific implants (PSIs) designed to fit precisely in the patient's unique anatomy. Fused filament fabrication (FFF) three-dimensional (3D) printing technology has enabled the fabrication of implant-grade polymers such as Polyetheretherketone (PEEK), paving the way for a more sophisticated generation of biomaterials. This study evaluates the FFF 3D printed PEEK orbital mesh customized implants with a metric considering the relevant design, biomechanical, and morphological parameters. The performance of the implants is studied as a function of varying thicknesses and porous design constructs through a finite element (FE) based computational model and a decision matrix based statistical approach. The maximum stress values achieved in our results predict the high durability of the implants, and the maximum deformation values were under one-tenth of a millimeter (mm) domain in all the implant profile configurations. The circular patterned implant (0.9 mm) had the best performance score. The study demonstrates that compounding multi-design computational analysis with 3D printing can be beneficial for the optimal restoration of the orbital floor.
Distance Between Two Circles In Any Number Of Dimensions Is A Vector Ellipse
Asli Pinar Tan
Subject: Physical Sciences, Mathematical Physics Keywords: Conic Sections, Topology, Circle, Ellipse, Hyperbola, Parabola, Orbit, Trajectory, Orbital Mechanics, Solar System, Planetary System, Planet, Satellite, Comet, Sun, Earth, Moon
Based on measured astronomical position data of heavenly objects in the Solar System and other planetary systems, all bodies in space seem to move in some kind of elliptical motion with respect to each other, whereas objects follow parabolic escape orbits while moving away from Earth and bodies asserting a gravitational pull, and some comets move in near-hyperbolic orbits when they approach the Sun. In this article, it is first mathematically proven that the "distance between points on any two different circles in three-dimensional space" is equivalent to the "distance of points on a vector ellipse from another fixed or moving point, as in two-dimensional space." Then, it is further mathematically demonstrated that "distance between points on any two different circles in any number of multiple dimensions" is equivalent to "distance of points on a vector ellipse from another fixed or moving point". Finally, two special cases in which the "distance between points on two different circles in multi-dimensional space" becomes mathematically equivalent to distances in "parabolic" or "near-hyperbolic" trajectories are investigated. Concepts of "vector ellipse", "vector hyperbola", and "vector parabola" are also mathematically defined. The mathematical basis derived in this article is utilized in the book "Everything Is A Circle: A New Model For Orbits Of Bodies In The Universe" in asserting a new Circular Orbital Model for moving bodies in the Universe, leading to further insights in Astrophysics.
Signal-to-Noise Ratio Evaluation of Luojia 1-01 Satellite Nighttime Light Remote Sensing Camera Based on Time Sequence Images
Wei Wang, Xing Zhong, Zhiqiang Su, Deren Li, Guo Zhang
Subject: Engineering, Other Keywords: signal-to-noise ratio; nighttime light imaging; time sequence images; Luojia 1-01; radiative transfer model; radiometric calibration; in-orbit test
Signal-to-noise ratio (SNR) is an important index for evaluating the radiation performance and image quality of optical imaging systems under a low-illumination background. Under nighttime lighting conditions, the illumination of remote sensing objects is low and varies greatly, usually ranging from several lux to tens of thousands of lux. Nighttime light remote sensing imaging therefore requires detectors with high sensitivity and a large dynamic range. Luojia 1-01 is the first professional nighttime light remote sensing satellite in the world. In this paper, we take the nighttime light remote sensing camera carried on the satellite as the research object and propose an in-orbit SNR test method based on time-sequence images to overcome the problem of low spatial resolution. We first analyze the process of luminous flux transmission between objects and the satellite and establish a radiative transfer model. By combining the parameters of the large-relative-aperture optical system and the high-sensitivity CMOS device, we establish an SNR model and specifically analyze the effect of exposure time and quantization bits on SNR. Finally, we use the proposed in-orbit test method to calculate the SNR of lighting images acquired by the satellite; the measured result is in good agreement with the model-predicted data. Under the condition of 10 lx illumination, the SNR of typical objects can reach 27.02 dB, which is much better than the 20 dB requirement for engineering application.
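For orientation only, the generic sketch below shows how a shot-noise-limited SNR estimate in decibels scales with exposure time; it is a simplified stand-in, not the Luojia 1-01 radiative transfer or calibration model, and all parameter values are hypothetical.

```python
# Generic shot-noise-limited SNR sketch in dB (hypothetical values; not the
# Luojia 1-01 radiometric model from the paper).
import math

def snr_db(signal_e, dark_e, read_noise_e):
    """SNR = S / sqrt(S + D + sigma_read^2), expressed in decibels (20*log10)."""
    noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

# Doubling the exposure time roughly doubles the signal and dark electrons,
# which improves the SNR by about 3 dB in the shot-noise-limited regime.
print(round(snr_db(2_000, 50, 10), 2))
print(round(snr_db(4_000, 100, 10), 2))
```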
Distributed Orbit Determination for Global Navigation Satellite System with Inter-Satellite Link
Yuanlan Wen, Jun Zhu, Youxing Gong, Qian Wang, Xiufeng He
Subject: Earth Sciences, Space Science Keywords: inter-satellite link; whole-constellation centralized extended Kalman filter; distributed orbit determination; iterative cascade extended Kalman filter; increased measurement covariance extended Kalman filter; balanced extended Kalman filter
To keep the global navigation satellite system functional under extreme conditions, it is a trend to employ autonomous navigation technology with inter-satellite links. In the newly built BeiDou system (BDS-3), equipped with Ka-band inter-satellite links, every individual satellite has the ability to communicate with and measure distances to the others. The system also has less dependence on ground stations and improved navigation performance. Because of the huge amount of measurement data, it is suggested that the centralized data processing algorithm for orbit determination be replaced by a distributed one, in which each satellite in the constellation is required to finish a partial computation task. In the current paper, the balanced extended Kalman filter algorithm for distributed orbit determination is proposed and compared with the whole-constellation centralized extended Kalman filter, the iterative cascade extended Kalman filter, and the increasing measurement covariance extended Kalman filter. The proposed method demands lower computation power yet yields results with relatively good accuracy.
Effect of Kinematic Algorithm Selection on the Conceptual Design of Orbitally Delivered Vehicles
David Cole, Timothy Sands
Subject: Engineering, Mechanical Engineering Keywords: GNC system design; design and analysis of orbit/attitude determination and control systems; motion planning; guidance; control; trajectory planning; conceptual design; kinematics; direction cosine matrix; DCM; 6 Degrees of Freedom; 6 DOF
A particular challenge in the conceptual design of direct-from-orbit delivery systems is that the seemingly well-known kinematics retain fallacies, whose efficacies are reduced over great distances. Rotation about the local wing of an aerospace vehicle is almost never the pitch angle, yet modern applications of kinematics often assume so (with accompanying angular error). The same assertion is usually true about the nature of the roll and yaw angles. Expressing motion in coordinates of rotating reference frames necessitates transformation between reference frames, and one such transformation is embodied in the Direction Cosine Matrices (DCM) formed by a sequence of three successive frame rotations. One of two ubiquitous sequences of three rotations used to construct the DCM involves first a rotation around the inertial z-axis, then the intermediate y-axis, then finally the body's x-axis; a sequence commonly called the "aerospace sequence" or the "3-2-1 rotation sequence". The second ubiquitous sequence is the so-called "orbital sequence" or "3-1-3 rotation sequence", with rotations about the inertial z-axis first, then about the intermediate x-axis, then finally about the body z-axis. This manuscript evaluates which sequence is the most advantageous for an object that starts in space and then travels through the atmosphere to a target on the Earth's surface. Six degrees of freedom of vehicle motion were simulated, starting in orbit with a given thrust and commanded maneuver. The simulation performs all twelve possible rotation sequences (transforming inertial coordinates to body coordinates), with comparison by computational burden and by error in representing rotations about the body axes (roll, pitch, and yaw respectively). Simulation precision is validated using the quaternion normalization condition, indicating near machine precision ($0.9\times{10}^{-15}$), and reveals that the so-called 132 rotation is the most accurate, with an average error of 0.14° and a computational time of 0.013 seconds, resulting in a 97.95% increase in accuracy over the so-called 321 rotation and a 99.84% increase over the so-called 313 rotation.
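For readers unfamiliar with the rotation sequences being compared, the short sketch below (an added illustration, not the paper's 6-DOF simulation) builds the inertial-to-body direction cosine matrix for the aerospace 3-2-1 (yaw-pitch-roll) sequence and checks its orthonormality, analogous to the quaternion-normalization check quoted above; the test angles are arbitrary.

```python
# Inertial-to-body DCM for the aerospace 3-2-1 (yaw-pitch-roll) sequence,
# with an orthonormality check; the angles below are arbitrary test values.
import numpy as np

def R1(a):  # frame rotation about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R2(a):  # frame rotation about the y-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def R3(a):  # frame rotation about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def dcm_321(yaw, pitch, roll):
    """First yaw about the inertial z-axis, then pitch about y, then roll about x."""
    return R1(roll) @ R2(pitch) @ R3(yaw)

C = dcm_321(np.radians(30), np.radians(-10), np.radians(5))
print(np.max(np.abs(C @ C.T - np.eye(3))))  # orthonormality error, near machine precision
```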
|
CommonCrawl
|
How does gauge symmetry constrain the dynamics of a field's physical degrees of freedom?
My rough understanding of gauge theory is that some of a field's degrees of freedom (d.o.f.) may turn out to be "non-physical" due to local symmetries. But does gauge symmetry constrain the dynamics of the remaining "physical" d.o.f. at all? Suppose we take a field with very complicated dynamics, and then artificially add some "non-physical"/"fake" d.o.f. to obtain a field with gauge symmetry. As far as I can tell, this new field will still have complicated dynamics, since the gauge symmetry doesn't say anything about the physical d.o.f.. Therefore I don't understand what's so constraining about gauge symmetry, and feel I must be missing something.
To make this more concrete, consider the example given by Wikipedia of scalar $O(n)$ gauge theory. In this case, the field $\Phi$ takes values in $\mathbb{R}^n$, and there is local $O(n)$ symmetry: that is, at a given point $x$, the field values $\Phi(x)$ and $O\Phi(x)$ are physically equivalent for any $O\in O(n)$. It therefore seems that the only physical d.o.f. is $|\Phi|^2$, since this is $O(n)$-invariant. Can $|\Phi|^2$ behave however it likes? If so, why do we bother thinking about an $n$-component field $\Phi$ with gauge symmetry when we could just think about a single scalar field $|\Phi|^2$ with no gauge symmetry?
field-theory gauge-theory gauge-invariance constrained-dynamics degrees-of-freedom
Jacob Drori
I'll focus on the scalar $O(n)$ gauge theory that was mentioned in the question. The set of observables in the gauged version of the model is not a subset of the observables in the ungauged version. Gauging the $O(n)$ symmetry does eliminate some observables, but it also introduces others.
Observables in the gauged version are required to be invariant under the combined transformation \begin{align} \Phi(x) &\to O(x)\Phi(x) \tag{1} \\ A_\mu(x) &\to O(x)A_\mu(x)O^{-1}(x) + iO(x)\frac{\partial}{\partial x^\mu} O^{-1}(x) \tag{2} \end{align} where $A$ is the gauge field. Most observables are not invariant under either (1) or (2) individually. The observable $|\Phi|^2$ that was mentioned in the question is invariant under both (1) and (2) individually, but most observables are not. The operator $$ \sum_{j,k}\Phi_j^*(x)U_{jk}(C)\Phi_k(y) \tag{3} $$ qualifies as an observable (it is gauge invariant), where $U(C)$ is the unitary operator constructed from the gauge field by taking a path-ordered integral of $\sim\exp(iA(s))$ along some specified contour $C$ from $s=y$ to $s=x$. If $C$ is a closed contour, then the operator $$ \text{trace}\,U(C) \tag{4} $$ also qualifies as an observable (gauge invariant). This is called a Wilson loop observable.
For the other nuances of the question, you might be interested in ref 1. Section 3.1 introduces a more intrinsic approach to gauge theory, without relying on gauge-noninvariant fields as scaffolding.
By the way, this is all more clear in lattice QFT, which is also the only known way to really define this theory. I mean, it's the only known way to define it nonperturbatively. In any other part of physics, the adjective "nonperturbatively" goes without saying whenever the word "define" is used, but the QFT culture is a little weird in this respect, for both historical and technical reasons. Ref 1 uses a generous dose of lattice QFT: the word "lattice" occurs on $42$ of the paper's $176$ pages.
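In the spirit of that lattice formulation, here is a tiny numerical check (an added illustration, not the construction in ref 1) for the real $O(n)$ case: the Wilson-line observable $\Phi(x)^{\mathsf T}\,U_{xy}\,\Phi(y)$ of Eq. (3) is unchanged when each site is rotated by its own orthogonal matrix and the link is transformed as $U_{xy}\to O_x U_{xy} O_y^{-1}$.

```python
# Gauge invariance of the Wilson-line observable phi(x)^T U phi(y) for a real
# O(n) gauge field on a single lattice link (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n = 3

def random_orthogonal(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

phi_x, phi_y = rng.normal(size=n), rng.normal(size=n)
U = random_orthogonal(n)  # parallel transporter on the link x -> y

def wilson_line(px, U, py):
    return px @ U @ py

# Gauge transformation: phi(x) -> O_x phi(x), phi(y) -> O_y phi(y), U -> O_x U O_y^T
O_x, O_y = random_orthogonal(n), random_orthogonal(n)
before = wilson_line(phi_x, U, phi_y)
after = wilson_line(O_x @ phi_x, O_x @ U @ O_y.T, O_y @ phi_y)
print(abs(before - after))  # ~1e-16: the observable is gauge invariant
```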
Symmetries in Quantum Field Theory and Quantum Gravity (https://arxiv.org/abs/1810.05338)
Chiral Anomaly
|
CommonCrawl
|
doi: 10.3934/jimo.2020035
Design of differentiated warranty coverage that considers usage rate and service option of consumers under 2D warranty policy
Peng Tong 1 and Xiaogang Ma 2
School of Management, China University of Mining and Technology, Jiangsu, China
School of Management, Wuhan Textile University, Hubei, China
* Corresponding authors: [email protected]; [email protected]
Received June 2019 Revised September 2019 Published February 2020
Fund Project: The first author is supported by the National Natural Science Foundation of China (No. 71701200), the Postdoctoral Fund of China (No. 2016M590525) and the Postdoctoral Fund of Jiangsu (No. 1601246C)
Warranty service providers usually provide a homogeneous warranty service to improve consumer satisfaction and market share. Considering the differences among consumers, some scholars have in recent years carried out studies on maintenance strategies, service pricing, payment methods, claim behaviour and warranty cost analysis. However, few scholars have focused on differentiated warranty coverage that considers the usage rate and service options of consumers. On the basis of previous classification criteria on usage rate, this paper divides consumers into heavy, medium and light usage rate groups with clear boundaries. To avoid discrimination in warranty service, this study divides the 2D warranty coverage into disjoint sub-regions and adopts a different maintenance mode in each sub-region. By formulating and evaluating the warranty cost model under warranty cost constraints, we can obtain the maximum warranty coverage under usage rate $r$. Therefore, differentiated warranty scopes for consumers in the three groups can be proposed, and consumers can choose the most suitable warranty service according to their usage rate. The proposed warranty strategy can provide flexible warranty service for consumers, meet the warranty cost constraints of warranty service providers and enable enterprises to occupy a favourable position in market competition.
Keywords: Usage rate, warranty coverage, modelling, cost analysis, 2D warranty.
Mathematics Subject Classification: Primary: 90B50; Secondary: 90B25.
Citation: Peng Tong, Xiaogang Ma. Design of differentiated warranty coverage that considers usage rate and service option of consumers under 2D warranty policy. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020035
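To make the kind of computation described in the abstract concrete, the following is a hedged sketch (hypothetical parameters and an assumed minimal-repair, power-law failure intensity; it is not the paper's model): under a 2D warranty region $[0,W]\times[0,U]$, a consumer with usage rate $r$ is covered until $W_r=\min(W,\,U/r)$, and the expected minimal-repair cost of one failure mode is the repair cost times the intensity integrated over $[0,W_r]$.

```python
# Hedged sketch of expected minimal-repair cost for one failure mode under a
# 2D warranty [0, W] x [0, U]; parameters are hypothetical, not the paper's.
def expected_warranty_cost(W, U, r, theta, k, c_minimal):
    W_r = min(W, U / r)                     # effective warranty length at usage rate r
    expected_failures = (W_r / theta) ** k  # integral of (k/theta)*(t/theta)**(k-1) over [0, W_r]
    return c_minimal * expected_failures

# Illustrative light / medium / heavy usage rates under a 3-year, 6-usage-unit coverage.
for r in (0.5, 2.0, 4.0):
    cost = expected_warranty_cost(W=3.0, U=6.0, r=r, theta=4.0, k=2.0, c_minimal=1000)
    print(r, round(cost, 1))
```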
Figure 1. Termination point of 2D warranty service
Figure 2. Schematic of the maintenance strategy under 2D warranty
Figure 3. Trend diagram of $ W_r-U_r $
Figure 4. Diagram of the differentiated warranty service strategy
Figure 5. Curve of $ W_r-U_r $ ($ ε = 0.9 $)
Figure 6. Curve of $ W_r-U_r $ ($ \varepsilon = 1.1 $)
Table 1. The interval of usage rate intensity

| Usage intensity | Low limit of interval | Upper limit of interval |
| --- | --- | --- |
| Light | $r_{l1}$ | $r_{l2}$ |
| Medium | $r_{l2}$ | $r_{h1}$ |
| Heavy | $r_{h1}$ | $r_{h2}$ |
| Failure | $C_{mi}$ (Yuan) | $C_{ci}$ (Yuan) | $f_i(w,r_d)$ | $\lambda_{0}$ | $k$ |
| --- | --- | --- | --- | --- | --- |
| A31 | 1000 | 5000 | $6.32\times 10^{-4}\, w^{1.06}\, e^{-(w/4.03)^{2.06}}$ | 4.03 | 2.06 |
| A88 | 800 | 4800 | $2.53\times 10^{-3}\, w^{0.45}\, e^{-(w/2.48)^{1.45}}$ | 2.48 | 1.45 |
| A10 | 3000 | 12000 | $5.82\times 10^{-2}\, w^{0.51}\, e^{-(w/2.94)^{1.51}}$ | 2.94 | 1.51 |
Table 3. Age and usage parameters of the 2D warranty coverage ($\varepsilon = 0.9$)

| $w_n$ | Value | $u_n$ | Value |
| --- | --- | --- | --- |
| $W_l$ | 3.35 | $U_l$ | 2.18 |
| $W_m$ | 2.44 | $U_m$ | 4.39 |
| $W_h$ | 1.83 | $U_h$ | 5.49 |
|
CommonCrawl
|