Contents: Deep convolutional neural networks · Deep neural networks · Activation layers · General training process · Our architecture · Our training data and process · Validation with synthetic images · Continuum images · A magnetogram example: AR 11158 · Other general properties · Comparison with a standard RL deconvolution algorithm

Enhancing SDO/HMI images using deep learning $\newcommand{\arcsec}{"}$ $\newcommand{\AA}{Å}$

C. J. Díaz Baso$^{1,2}$ and A. Asensio Ramos$^{1,2}$
1 Instituto de Astrofísica de Canarias, Calle Vía Láctea, 38205 La Laguna, Tenerife, Spain
2 Departamento de Astrofísica, Universidad de La Laguna, 38206 La Laguna, Tenerife, Spain

In this chapter we will learn how to use and apply deep learning techniques to improve the resolution of our images in a fast and robust way. We have developed a deep, fully convolutional neural network which deconvolves and super-resolves continuum images and magnetograms observed with the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory. This improvement allows us to analyze the smallest-scale events in the solar atmosphere. We want to note that, although almost all the examples are written in Python, we have omitted some materials in their original format (usually large FITS files) to avoid increasing the size of this notebook. The software resulting from this project is hosted in the repository https://github.com/cdiazbas/enhance, and was published on arXiv and in A&A with a similar explanation. This software was developed with the Python library keras. We recommend visiting the keras documentation for anything related to how it works.

Figure 1 — Example of the software Enhance applied to real solar images.

# Before starting we have to load some modules
# which we will need for the calculations

# Libraries:
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import os
import numpy as np
import matplotlib.pyplot as plt
from congrid import resample
import radialProfile
from hmiutils import *
from astropy.convolution import convolve_fft, AiryDisk2DKernel
import astropy.io.fits as fits
import scipy.special as sp

Astronomical observations from Earth are always limited by the presence of the atmosphere, which strongly disturbs the images. An obvious (but expensive) solution to this problem is to place the telescopes in space, which produces observations without any (or limited) atmospheric aberrations. Although the observations obtained from space are not affected by atmospheric seeing, the optical properties of the instrument still limit the observations. In the case of near-diffraction-limited observations, the point spread function (PSF) establishes the maximum allowed spatial resolution. The PSF typically contains two different contributions. The central core is usually dominated by the Airy diffraction pattern, a consequence of the finite and circular aperture of the telescope (plus other perturbations on the pupil of the telescope, like the spiders used to keep the secondary mirror in place). The tails of the PSF are usually dominated by uncontrolled sources of dispersed light inside the instrument, the so-called stray light. It is known that the central core limits the spatial resolution of the observations (the smallest feature that one can see in the image), while the tails reduce the contrast of the image (Danilovic et al. 2010).
Moreover, it is important to note that knowing the PSF of any instrument is a very complicated task (Yeo et al. 2014; Couvidat et al. 2016). If the PSF is known with some precision, it is possible to apply deconvolution techniques to partially remove the perturbing effect of the telescope. The deconvolution is usually carried out with the Richardson-Lucy algorithm (RL; Richardson 1972), an iterative procedure that returns a maximum-likelihood solution to the problem. Single-image deconvolution is usually a very ill-defined problem, in which a potentially infinite number of solutions can be compatible with the observations. Consequently, some kind of regularization has to be imposed. Typically, an early-stopping strategy in the iterative process of the RL algorithm leads to a decent output, damping the high spatial frequencies that appear in any deconvolution process. However, a maximum a-posteriori approach in which some prior information about the image is introduced often gives much better results.

Fortunately, spectroscopic and spectropolarimetric observations provide multi-image observations of a field-of-view (FOV) and the deconvolution process is much better defined. This deconvolution process has been tried recently with great success by van Noort (2012), who also introduced a strong regularization by assuming that the Stokes profiles in every pixel have to be explained with the emerging Stokes profiles from a relatively simple model atmosphere assuming local thermodynamical equilibrium. Another solution was provided by Ruiz Cobo & Asensio Ramos (2013), who assumed that the matrix built with the Stokes profiles for all observed pixels has very low rank. In other words, it means that the Stokes profiles on the FOV can be linearly expanded with a reduced set of vectors. This method was later exploited by Quintero Noda et al. (2015) with good results. A different approach was developed by Asensio Ramos & de la Cruz Rodríguez (2015), who used the concept of sparsity (or compressibility), which means that one can linearly expand the unknown quantities in a basis set with only a few of the elements of the basis set being active. Under the assumption of sparsity, they exploited the presence of spatial correlation on the maps of physical parameters, carrying out successful inversions and deconvolution simultaneously.

A great science case for the application of deconvolution and super-resolution techniques is the Helioseismic and Magnetic Imager (HMI; Scherrer et al. 2012) onboard the Solar Dynamics Observatory (SDO; Pesnell et al. 2012). HMI is a space-borne instrument that provides full-disk continuum images (plus magnetograms and Dopplergrams) of the Sun every 45 s (or every 720 s for a better signal-to-noise ratio). The spatial resolution of these images is $\sim 1.1''$, with a sampling of $\sim 0.5''$/pix. In spite of the enormous advantage of having such a synoptic space telescope without the problematic Earth's atmosphere, the spatial resolution is not enough to track many of the small-scale solar structures of interest. The main reason for that is the sacrifice that HMI makes to cover the full disk of the Sun in the FOV on a single sensor. We think that, in the process of pushing for the advancement of science, it is preferable to have images with a better spatial resolution and which are already compensated for the telescope PSF.
Under the assumption of the linear theory of image formation, and writing images in lexicographic order (so that they are assumed to be sampled at a given resolution), the observed image can be written as:

\begin{equation} \mathbf{I} = \mathbf{D} [\mathbf{P} * \mathbf{O}] + \mathbf{N}, \tag{1} \end{equation}

where $\mathbf{O}$ is the solar image at the entrance of the telescope, $\mathbf{P}$ is a convolution matrix that simulates the effect of the PSF on the image, $\mathbf{D}$ is a sub-sampling (non-square) matrix that reduces the resolution of the input image to the desired output spatial resolution, and $\mathbf{N}$ represents noise (usually with Gaussian or Poisson statistics). The solution to the single-image deconvolution+super-resolution problem (SR; Borman & Stevenson 1998) requires the recovery of $\mathbf{O}$ (a high-resolution image of $2N \times 2N$ pixels) from a single measurement $\mathbf{I}$ (a low-resolution image of $N \times N$ pixels). This problem is extremely ill-posed, even worse than the usual deconvolution to correct for the effect of the PSF. A multiplicity (potentially an infinite number) of solutions exists. This problem is then typically solved by imposing strong priors on the image (e.g., Tipping & Bishop 2003).

Despite the difficulty of the problem, we think there is great interest in enhancing the HMI images using post-facto techniques. A super-resolved image could help detect or characterize small features on the surface of the Sun, or improve the estimation of the total magnetic flux limited by the resolution in the case of magnetograms. This motivated us to develop an end-to-end fast method based on a deep, fully convolutional neural network that simultaneously deconvolves and super-resolves the HMI continuum images and magnetograms by a factor of two. We prefer to be conservative and only do super-resolution by a factor of two because our tests with a larger factor did not produce satisfactory results. Deep-learning-based single-image deconvolution and super-resolution have recently been applied with great success to natural images (Xu et al. 2014; Dong et al. 2015, 2016; Shi et al. 2016; Ledig et al. 2016; Hayat 2017). Given the variability of all possible natural images, a training-based approach should give much better results in our case than in the case of natural images. In the following, we give details about the architecture and training of the neural network and provide examples of applications to HMI data.

Figure 2 — Left panel: building block of a fully-connected neural network. Each input of the previous layer is connected to each neuron of the output. Each connection is represented by a line, whose width is associated with the magnitude of the weight; dashed lines indicate negative weights. Right panel: three-dimensional convolution carried out by a convolutional layer. The 3D kernel traverses the whole input, producing a single scalar at each position. At the end, a 2D feature map will be created for each 3D kernel. When all feature maps are stacked, a feature map tensor will be created.

Artificial neural networks (ANN) are well-known computing systems based on connectionism that can be considered to be very powerful approximators to arbitrary functions (Bishop 1996). They are constructed by putting together many basic fundamental structures (called neurons) and connecting them massively.
Each neuron $i$ is only able to carry out a very basic operation on the input vector: it multiplies all the input values $x_j$ by some weights $w_j$, adds some bias $b_i$ and finally returns the value of a certain user-defined nonlinear activation function $f(x)$. In mathematical notation, a neuron computes:

\begin{equation} o_i = f(\Sigma_j\,x_j\cdot w_j + b_i). \tag{2} \end{equation}

The output $o_i$ is then input in another neuron that carries out a similar task. An ANN can be understood as a pipeline where the information goes from the input to the output, with each neuron making a transformation like the one described above (see left panel of Fig. 2). Given that neurons are usually grouped in layers, the term deep neural network comes from the large number of layers that are used to build the neural network. Some of the most successful and recent neural networks contain several million neurons organized in several tens or hundreds of layers (Simonyan & Zisserman 2014). As a consequence, deep neural networks can be considered to be a very complex composition of very simple nonlinear functions, which provides the capacity to make very complex transformations.

The most used type of neural network from the 1980s to the 2000s is the fully connected network (FCN; see Schmidhuber 2014, for an overview), in which every input is connected to every neuron of the following layer. Likewise, the output transformation becomes the input of the following layer (see left panel of Fig. 2). This kind of architecture succeeded in solving problems that were considered not easily solvable, such as the recognition of handwritten characters (Bishop 1996). A selection of applications in solar physics includes the inversion of Stokes profiles (e.g., Socas-Navarro 2005; Carroll & Kopf 2008), the acceleration of the solution of chemical equilibrium (Asensio Ramos & Socas-Navarro 2005), and the automatic classification of sunspot groups (Colak & Qahwaji 2008).

Neural networks are optimized iteratively by updating the weights and biases so that a loss function that measures the ability of the network to predict the output from the input is minimized$^{[1]}$. This optimization is widely known as the learning or training process. In this process a training dataset is required.

1.- This is the case of supervised training. Unsupervised neural networks are also widespread but are of no concern in this study.

In spite of the relative success of neural networks, their application to high-dimensional objects like images or videos turned out to be an obstacle. The fundamental reason was that the number of weights in a fully connected network increases extremely fast with the complexity of the network (number of neurons) and the computation quickly becomes unfeasible. Because each neuron has to be connected with the whole input, adding a single neuron adds as many new weights as the size of the input. A larger number of neurons therefore implies a huge number of connections. This constituted an apparently insurmountable handicap that was only solved with the appearance of convolutional neural networks (CNN or ConvNets; LeCun & Bengio 1998).

The most important ingredient in the CNN is the convolutional layer, which is composed of several convolutional neurons. Each CNN neuron carries out the convolution of the input with a certain (typically small) kernel, providing as output what is known as a feature map. Similar to a FCN, the output of convolutional neurons is often passed through a nonlinear activation function.
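As a minimal, illustrative sketch of such a convolutional neuron (not part of the original analysis), the following lines convolve an arbitrary image with a small 3×3 kernel and apply a ReLU activation to obtain a feature map; the kernel is a vertical border detector similar to the one shown later in Fig. 3, and the input is just random numbers standing in for a solar image.

# Sketch of a convolutional neuron: ReLU(K * X) produces a feature map
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(10, 10)            # stand-in for an N x N solar image
kernel = np.array([[-1., 0., 1.],
                   [-2., 0., 2.],
                   [-1., 0., 1.]])        # vertical border-detection kernel
feature_map = np.maximum(0., convolve2d(image, kernel, mode='valid'))
print(feature_map.shape)                  # (N-2, N-2), as in Fig. 3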
The fundamental advantage of CNNs is that the same weights are shared across the whole input, drastically reducing the number of unknowns. This also makes CNNs shift invariant (features can be detected in an image irrespective of where they are located). In mathematical notation, for a two-dimensional input $X$ of size $N \times N$ with $C$ channels$^{[2]}$ (really a cube or tensor of size $N \times N \times C$), each output feature map $O_i$ (with size $N \times N \times 1$) of a convolutional layer is computed as:

\begin{equation} O_i=K_i * X + b_i, \tag{3} \end{equation}

where $K_i$ is the $K \times K \times C$ kernel tensor associated with the output feature map $i$, $b_i$ is a bias value ($1 \times 1 \times 1$) and the convolution is displayed with the symbol $*$. Once the convolution with $M$ different kernels is carried out and stacked together, the output $O$ will have size $N \times N \times M$. All convolutions are here indeed intrinsically three dimensional, but one could see them as the total of $M \times C$ two-dimensional convolutions plus the bias (see right panel of Fig. 2).

2.- The term "channels" is inherited from those of a color image (e.g., RGB channels). However, the term has a much more general scope and can be used for arbitrary quantities (see Asensio Ramos et al. 2017, for an application).

CNNs are typically composed of several layers. This layered architecture exploits the property that many natural signals are generated by a hierarchical composition of patterns. For instance, faces are composed of eyes, while eyes themselves contain a similar internal structure. This way, one can devise specific kernels that extract this information from the input. As an example, Fig. 3 shows the effect of a vertical border-detection kernel on a real solar image. The result at the right of the figure is the feature map. CNNs work on the idea that each convolutional layer extracts information about certain patterns, which is done during the training by iteratively adapting the set of convolutional kernels to the specific features to locate. This obviously leads to a much better solution as compared with hand-crafted kernels. Despite the exponentially smaller number of free parameters as compared with a fully connected ANN, CNNs produce much better results.

It is interesting to note that, since a convolutional layer simply computes sums and multiplications of the inputs, a multi-layer FCN (also known as a perceptron) is perfectly capable of reproducing it, but it would require more training time (and data) to learn to approximate that mode of operation (Peyrard et al. 2015). Although a convolutional layer significantly decreases the number of free parameters as compared with a fully connected layer, it introduces some hyperparameters (global characteristics of the network) to be set in advance: the number of kernels to be used (number of feature maps to extract from the input), the size of each kernel with its corresponding padding (to deal with the borders of the image) and stride (step to be used during the convolution operation), and the number of convolutional layers and specific architecture to use in the network. As a general rule, the deeper the CNN, the better the result, at the expense of a more difficult and computationally intensive training. CNNs have recently been used in astrophysics for denoising images of galaxies (Schawinski et al. 2017), for cosmic string detection in CMB temperature maps (Ciuca et al. 2017), and for the estimation of horizontal velocities in the solar surface (Asensio Ramos et al. 2017).
Figure 3 — An example of a convolution with a filter. In this example, a vertical border-locating kernel is convolved with the input image of the Sun. A resulting feature map of size $(N-2)\times(N-2)$ is generated from the convolution.

As stated above, the output of a convolutional layer is often passed through a nonlinear function that is termed the activation function. Since the convolution operation is linear, this activation is the one that introduces the nonlinear character of the CNNs. Although hyperbolic tangent, $f(x)=\tanh(x)$, or sigmoidal, $f(x)=[1+\exp(-x)]^{-1}$, activation units were originally used in ANNs, nowadays a panoply of more convenient nonlinearities is used. The main problem with any sigmoid-type activation function is that its gradient vanishes for very large values, hindering the training of the network. Probably the most common activation function is the rectified linear unit (ReLU; Nair & Hinton 2010) or slight variations of it. The ReLU replaces all negative values in the input by zero and leaves the rest untouched. This activation has the desirable property of producing nonvanishing gradients for positive arguments, which greatly accelerates the training.

Note: there are many other, newer activation functions which are less commonly used but show excellent performance, like ELUs (https://arxiv.org/abs/1511.07289).

plt.rcParams['figure.dpi'] = 150
plt.figure(figsize=(10,3.5))
x = np.arange(-6,6,0.1)

plt.subplot(131)
plt.plot(x,np.tanh(x), label='f(x)=tanh(x)')
plt.yticks([+1,0.5,0,-0.5,-1])
plt.axvline(0., color='k',ls='dashed',alpha=0.2)
plt.axhline(0., color='k',ls='dashed',alpha=0.2)
plt.legend(loc='upper left'); plt.xlabel('x'); plt.ylabel('y')

plt.subplot(132)
plt.plot(x,1./(1+np.exp(-x)), label=r'f(x)=1/[1+e$^{-x}$]')
plt.legend(loc='upper left'); plt.xlabel('x')

plt.subplot(133)
plt.plot(x,[np.max([0,i]) for i in x], label='f(x)=max(0,x)')
plt.legend(loc='upper left'); plt.xlabel('x')

CNNs are trained by iteratively modifying the weights and biases of the convolutional layers (and any other possibly learnable parameter in the activation layer). The aim is to optimize a user-defined loss function computed from the output of the network and the desired output of the training data. The optimization is routinely solved using simple first-order gradient-descent algorithms (GD; see Rumelhart et al. 1988), which modify the weights along the negative gradient of the loss function with respect to the model parameters to carry out the update. The gradient of the loss function with respect to the free parameters of the neural network is obtained through the backpropagation algorithm (LeCun et al. 1998). Given that neural networks are defined as a stack of modules (or layers), the gradient of the loss function can be calculated using the chain rule as the product of the gradient of each module and, ultimately, of the last layer and the specific loss function.

In practice, procedures based on the so-called stochastic gradient descent (SGD) are used, in which only a few examples (termed a batch) from the training set are used during each iteration to compute a noisy estimation of the gradient and adjust the weights accordingly. Although the calculated gradient is a noisy estimation of the one calculated with the whole training set, the training is faster, as we have less to compute per iteration, and is more reliable.
If the general loss function $Q$ is the average of each loss $Q_j$ computed on a batch of inputs and can be written as $Q=\Sigma_j^n Q_j/n$, the weights $w_i$ are updated following the same recipe as the GD algorithm but calculating the gradient within a single batch:

\begin{equation} w_{i+1} = w_i -\eta\nabla Q(w_i) = w_i -\eta\nabla\Sigma_j^n Q_j(w_i)/n \simeq w_i -\eta\nabla Q_j(w_i), \tag{4} \end{equation}

where $\eta$ is the so-called learning rate. It can be kept fixed or it can be changed according to our requirements. This parameter has to be tuned to find a compromise between the accuracy of the network and the speed of convergence. If $\eta$ is too large, the steps will be too large and the solution could potentially overshoot the minimum. On the contrary, if it is too small it will take too many iterations to reach the minimum. Adaptive methods like Adam (Kingma & Ba 2014) have been developed to automatically tune the learning rate.

Because of the large number of free parameters in a deep CNN, overfitting can be a problem. One would like the network to generalize well and avoid any type of "memorization" of the training set. To check for that, a part of the training set is not used during the update of the weights but is used after each iteration as validation. Desirably, the loss should decrease both in the training and validation sets simultaneously. If overfitting occurs, the loss in the validation set will increase.

Moreover, several techniques have been described in the literature to accelerate the training of CNNs and also to improve generalization. Batch normalization (Ioffe & Szegedy 2015) is a very convenient and easy-to-use technique that consistently produces large accelerations in the training. It works by normalizing every batch to have zero mean and unit variance. Mathematically, the input is normalized so that:

\begin{align} y_i &= \gamma \hat{x}_i + \beta, \nonumber \\ \hat{x}_i &= \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}, \tag{5} \end{align}

where $\mu$ and $\sigma$ are the mean and standard deviation of the inputs on the batch and $\epsilon=10^{-3}$ is a small number to avoid underflow. The parameters $\gamma$ and $\beta$ are learnable parameters that are modified during the training.

Note: BN adds robustness to the network. Here is an example of the accuracy (the ability of the network to predict the result) during training with and without BN. Source: https://medium.com/@mozesr/batch-normalization-notes-c527c6bbec4 & https://github.com/udacity/deep-learning/blob/master/batch-norm/Batch_Normalization_Lesson.ipynb

We describe in the following the specific architecture of the two deep neural networks used to deconvolve and super-resolve continuum images and magnetograms. It could potentially be possible to use a single network to deconvolve and super-resolve both types of images. However, as each type of data has different, well defined properties (like the usual range of values, or the sign of the quantity), we have decided to use two different neural networks, finding remarkable results. We refer to the set of two deep neural networks as Enhance.

The deep neural networks used in this work are inspired by DeepVel (Asensio Ramos et al. 2017), used to infer horizontal velocity fields in the solar photosphere. Figure 4 represents a schematic view of the architecture. It is made of the concatenation of $N$ residual blocks (He et al. 2015).
Each one is composed of two convolutional layers, each followed by batch normalization, with a ReLU activation after the first convolutional layer. The internal structure of a residual block is displayed in the blowup$^{[3]}$ of Fig. 4.

3.- We note that we use the nonstandard implementation of a residual block where the second ReLU activation is removed from the reference architecture (He et al. 2015), which provides better results according to https://github.com/gcr/torch-residual-networks

Following the typical scheme of a residual block, there is also a shortcut connection between the input and the output of the block (see more information in He et al. 2015; Asensio Ramos et al. 2017), so that the input is added to the output. Very deep networks usually saturate during training, producing higher errors than shallower networks because of difficulties during training (also known as the degradation problem). The fundamental reason is that the gradient of the loss function with respect to parameters in early layers becomes exponentially small (also known as the vanishing gradient problem). Residual networks help avoid this problem, obtaining state-of-the-art results without adding any extra parameters and with practically the same computational complexity. They are based on the idea that if $y=F(x)$ represents the desired effect of the block on the input $x$, it is much simpler for a network to learn the deviations from the input (or residual mapping), that is $R(x)=y-x$, than the full map $F(x)$, so that $y=F(x)=R(x)+x$.

Note: Here is the difference between two neural networks of 18 and 34 layers trained without (left) and with (right) shortcut connections. More information in https://arxiv.org/abs/1512.03385

In our case, all convolutions are carried out with kernels of size $3 \times 3$ and each convolutional layer uses 64 such kernels. Additionally, as displayed in Fig. 4, we also impose another shortcut connection between the input to the first residual block and the batch normalization layer after the last residual block. We have checked that this slightly increases the quality of the prediction. Noting that a convolution of an $N \times N$ image with a $3 \times 3$ kernel reduces the size of the output to $(N-2) \times (N-2)$, we augment the input image with 1 pixel on each side using reflection padding to compensate for this and maintain the size of the input and output.

Because Enhance carries out $\times 2$ super-resolution, we need to add an upsampling layer somewhere in the architecture (displayed in violet in Fig. 4). One can find in the literature two main options to do the upsampling. The first one involves upsampling the image just after the input and allowing the rest of the convolutional layers to do the work. The second involves doing the upsampling immediately before the output. Following Dong et al. (2016), we prefer the second option because it provides a much faster network, since the convolutions are applied to smaller images. Moreover, to avoid artifacts in the upsampling$^{[4]}$ we have implemented a nearest-neighbor resize followed by convolution instead of the more standard transpose convolution.

4.- The checkerboard artifacts are nicely explained in https://distill.pub/2016/deconv-checkerboard/

The last layer, which carries out a $1 \times 1$ convolution, is of extreme importance in our networks. Given that we use ReLU activation layers throughout the network, it is only in this very last layer where the output gets its sign using the weights associated with the layer.
This is of no importance for intensity images, but turns out to be crucial for the signed magnetic field.

The number of free parameters of our CNN can easily be obtained using the previous information. In the scheme of Fig. 4, the first convolutional layer generates 64 channels by applying 64 different kernels of size $3 \times 3 \times 1$ to the input (a single-channel image), using $(3\times3+1)\times 64=640$ free parameters. The following convolutional layers again have 64 kernels, but this time each one of size $3 \times 3 \times 64$, so that each of these layers contains $(3\times3\times64+1)\times 64=36928$ free parameters. Finally, the last layer contains one kernel of size $1 \times 1 \times 64$ that computes a weighted average along all channels. The total amount of free parameters in this layer is 65 (including the bias).

Figure 4 — Upper panel: architecture of the fully convolutional neural network used in this work. Colors refer to different types of layers, which are indicated in the upper labels. The kernel size of the convolutional layers is also indicated in the lower labels. Black layers represent the input and output layers. Lower panel: the inner structure of a residual block. This model is hosted in https://github.com/cdiazbas/enhance

from keras.layers import Input, Conv2D, Activation, BatchNormalization, GaussianNoise, add, UpSampling2D
from keras.models import Model
from keras.regularizers import l2
from keras.engine.topology import Layer
from keras.engine import InputSpec
from keras.utils import conv_utils
from models import ReflectionPadding2D

# You can check your version of keras by doing this
import keras
keras.__version__

'2.1.1'

def Enhance_model(nx, ny, noise, depth, activation='relu', n_filters=64):
    """
    Deep residual network that keeps the size of the input throughout the whole network
    """

    # Residual block definition
    def residual(inputs, n_filters):
        x = ReflectionPadding2D()(inputs)
        x = Conv2D(n_filters, (3, 3))(x)
        x = BatchNormalization()(x)
        x = Activation(activation)(x)
        x = ReflectionPadding2D()(x)
        x = Conv2D(n_filters, (3, 3))(x)
        x = BatchNormalization()(x)
        x = add([x, inputs])
        return x

    # Inputs of the network
    inputs = Input(shape=(nx, ny, 1))

    # Noise used in the training
    x = GaussianNoise(noise)(inputs)

    # First convolution, whose output is kept for the global shortcut connection
    x = ReflectionPadding2D()(x)
    x = Conv2D(n_filters, (3, 3))(x)
    x0 = Activation(activation)(x)

    # Concatenation of residual blocks
    x = residual(x0, n_filters)
    for i in range(depth-1):
        x = residual(x, n_filters)

    # Global shortcut connection between the first activation and the
    # batch normalization after the last residual block
    x = ReflectionPadding2D()(x)
    x = Conv2D(n_filters, (3, 3))(x)
    x = BatchNormalization()(x)
    x = add([x, x0])

    # Upsampling for superresolution
    x = UpSampling2D()(x)
    x = ReflectionPadding2D()(x)
    x = Conv2D(n_filters, (3, 3))(x)
    x = Activation(activation)(x)

    final = Conv2D(1, (1, 1))(x)

    return Model(inputs=inputs, outputs=final)

Using the method summary of the class, we can see a description of all the layers and free parameters of the model.
ny, nx = 50,50  # If for example the images have a size of 50 x 50
depth = 5
model = Enhance_model(ny, nx, 0.0, depth, n_filters=64)
model.summary()

Layer (type)                      Output Shape           Param #    Connected to
input_1 (InputLayer)              (None, 50, 50, 1)      0
gaussian_noise_1 (GaussianNoise)  (None, 50, 50, 1)      0          input_1[0][0]
reflection_padding2d_1 (Reflect)  (None, 52, 52, 1)      0          gaussian_noise_1[0][0]
conv2d_1 (Conv2D)                 (None, 50, 50, 64)     640        reflection_padding2d_1[0][0]
activation_1 (Activation)         (None, 50, 50, 64)     0          conv2d_1[0][0]
reflection_padding2d_2 (Reflect)  (None, 52, 52, 64)     0          activation_1[0][0]
conv2d_2 (Conv2D)                 (None, 50, 50, 64)     36928      reflection_padding2d_2[0][0]
batch_normalization_1 (BatchNor)  (None, 50, 50, 64)     256        conv2d_2[0][0]
activation_2 (Activation)         (None, 50, 50, 64)     0          batch_normalization_1[0][0]
add_1 (Add)                       (None, 50, 50, 64)     0          batch_normalization_2[0][0]
                                                                    activation_1[0][0]
reflection_padding2d_4 (Reflect)  (None, 52, 52, 64)     0          add_1[0][0]
reflection_padding2d_10 (Reflec)  (None, 52, 52, 64)     0          add_4[0][0]
conv2d_10 (Conv2D)                (None, 50, 50, 64)     36928      reflection_padding2d_10[0][0]
batch_normalization_9 (BatchNor)  (None, 50, 50, 64)     256        conv2d_10[0][0]
reflection_padding2d_11 (Reflec)  (None, 52, 52, 64)     0          activation_6[0][0]
batch_normalization_10 (BatchNo)  (None, 50, 50, 64)     256        conv2d_11[0][0]
add_5 (Add)                       (None, 50, 50, 64)     0          batch_normalization_10[0][0]
up_sampling2d_1 (UpSampling2D)    (None, 100, 100, 64)   0          add_6[0][0]
reflection_padding2d_13 (Reflec)  (None, 102, 102, 64)   0          up_sampling2d_1[0][0]
conv2d_13 (Conv2D)                (None, 100, 100, 64)   36928      reflection_padding2d_13[0][0]
activation_7 (Activation)         (None, 100, 100, 64)   0          conv2d_13[0][0]
conv2d_14 (Conv2D)                (None, 100, 100, 1)    65         activation_7[0][0]

Total params: 446,657
Trainable params: 445,249
Non-trainable params: 1,408

A crucial ingredient for the success of a CNN is the generation of a suitable high-quality training set. Our network is trained using synthetic continuum images and synthetic magnetograms from the simulation of the formation of a solar active region described by Cheung et al. (2010). This simulation provides a large FOV with many solar-like structures (quiet Sun, plage, umbra, penumbra, etc.) that visually resemble those in the real Sun. We note that if the network is trained properly and generalizes well, the network does not memorize what is in the training set. On the contrary, it applies what it learns to the new structures. Therefore, we are not especially concerned by the potential lack of similarity between the solar structures in the simulation of Cheung et al. (2010) and the real Sun.

The radiative MHD simulation was carried out with the MURaM code (Vögler et al. 2005). The box spans 92 Mm $\times$ 49 Mm in the two horizontal directions and 8.2 Mm in the vertical direction (with horizontal and vertical grid spacing of 48 and 32 km, respectively). After $\sim$20 h of solar time, an active region is formed as a consequence of the buoyancy of an injected flux tube in the convection zone. An umbra, umbral dots, light bridges, and penumbral filaments are formed during the evolution. As mentioned above, this constitutes a very nice dataset of simulated images that look very similar to those on the Sun. Synthetic gray images are generated from the simulated snapshots (Cheung et al. 2010) and magnetograms are obtained by just using the vertical magnetic field component at optical depth unity at 5000 ${\AA}$.
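These synthetic images and magnetograms are then degraded to mimic a real HMI observation following the model of Eq. (1), as described in detail below. As a rough, illustrative sketch of that degradation step (not the actual code used in this chapter), assuming a normalized 2D kernel psf and using the resample function imported at the beginning of the chapter:

# Sketch of the degradation model of Eq. (1): I = D[P*O] + N
# `snapshot` is a 2D array at the simulation sampling (0.0662"/pix);
# the output mimics an HMI observation at 0.504"/pix with additive Gaussian noise.
import numpy as np
from astropy.convolution import convolve_fft
from congrid import resample

def degrade(snapshot, psf, in_scale=0.0662, out_scale=0.504, noise=1e-3):
    blurred = convolve_fft(snapshot, psf, boundary='wrap')               # P * O
    ny, nx = blurred.shape
    new_shape = [int(ny*in_scale/out_scale), int(nx*in_scale/out_scale)]
    low_res = resample(blurred, new_shape)                               # D [P * O]
    return low_res + noise*np.random.randn(*low_res.shape)              # + N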
A total of 250 time steps are used in the training (slightly fewer for the magnetograms, when the active region has already emerged at the surface). We note that the magnetograms of HMI in the Fe I 6173 ${\AA}$ line correspond to layers in the atmosphere around log$\tau=-1$ (Bello González et al. 2009), while our magnetograms are extracted from log$\tau=0$, where $\tau$ is the optical depth at 5000 ${\AA}$. In our opinion this will not affect the results because the concentration of the magnetic field is similar in terms of size and shape at both atmospheric heights.

The synthetic images (and magnetograms) are then treated to simulate a real HMI observation. All 250 frames of 1920 $\times$ 1024 images are convolved with the HMI PSF (Wachter et al. 2012; Yeo et al. 2014; Couvidat et al. 2016) and resampled to 0.504$\arcsec$/pixel. For simplicity, we have used the PSF described in Wachter et al. (2012). The PSF functional form is azimuthally symmetric and it is given by

\begin{equation} \mathrm{PSF}(r) = (1-\epsilon) \exp \left[ -\left(\frac{r}{\omega}\right)^2 \right] + \epsilon \left[1+\left( \frac{r}{W}\right)^k \right]^{-1}, \tag{6} \end{equation}

which is a linear combination of a Gaussian and a Lorentzian. We note that the radial distance is $r=\pi D \theta/\lambda$, with $D$ the telescope diameter, $\lambda$ the observing wavelength and $\theta$ the distance in the focal plane in arcsec. The reference values for the parameters (Wachter et al. 2012) are $\epsilon=0.1$, $\omega=1.8$, $k=3$ and $W=3$.

Figure 5 demonstrates the similarity between an HMI image of the quiet Sun (upper left panel) and the simulations degraded and downsampled (lower left panel). The simulation at the original resolution is displayed in the upper right panel. For clarity, we display the horizontal and vertical axes in pixel units, instead of physical units. This reveals the difference in spatial resolution, both from the PSF convolution and the resampling. In this process we also realized that, using the PSF of Wachter et al. (2012), the azimuthally averaged power spectrum of the degraded simulated quiet Sun turns out to have stronger tails than those of the observation. For this reason, we slightly modified it so that we finally used $\omega=2$ and $W=3.4$. The curve with these modified values is displayed in orange as the new PSF in Fig. 5, while the original PSF with the default values is shown in blue. For consistency, we also applied this PSF to the magneto-convection simulations described by Stein & Nordlund (2012) and Stein (2012), finding a similar improvement in the comparison with observations.

One could argue that using the more elaborate PSFs of Yeo et al. (2014) (obtained via observations of the Venus transit) or Couvidat et al. (2016) (obtained with ground data before the launch) would be preferable. However, we point out that applying the PSF of Wachter et al. (2012) (with the modifications specified above) to the simulations produces images that compare excellently at a quantitative level with the observations. In any case, given that our code is open source, anyone interested in using a different PSF can easily retrain the deep networks.

# Here we present first a comparison between the different radial shapes of
# each PSF with the described parameters. We also include the Airy function
# as the ideal PSF of the instrument.

def old_PSF(x):
    # http://jsoc.stanford.edu/relevant_papers/Wachter_imageQ.pdf
    e = 0.1
    k = 3.0
    w = 1.8
    W = 3.0
    return (1.-e)*np.exp(-(x/w)**2.) + e/(1.+(x/W)**k)
def new_PSF(x):
    # Modified version with a longer tail: same functional form,
    # but with w = 2.0 and W = 3.4 (see text)
    e = 0.1
    k = 3.0
    w = 2.0
    W = 3.4
    return (1.-e)*np.exp(-(x/w)**2.) + e/(1.+(x/W)**k)

def airyR(x,R):
    # Ideal point spread function
    Rz = 1.21966989
    x = (np.pi*x)/(R/Rz)
    return (2*sp.j1(x)/(x))**2.

theta = np.arange(1e-5,3,0.01)
lambdai = 6173e-10
D = 0.14
r = np.pi*D*theta/lambdai/206265.
res = 1.22*lambdai/D*206265.
print('Ideal resolution: {0:2.2f} [arcsec]\n'.format(res))

plt.figure(figsize=(4,3))
plt.rcParams['font.size'] = 8
plt.plot(theta,old_PSF(r),label='Wachter et al. (2012)')
plt.plot(theta,new_PSF(r),label='Our work')
plt.plot(theta,airyR(theta,res),label='Airy function')
plt.ylim(0.,1.05); plt.xlim(0,2.5)
plt.ylabel(r'PSF($\theta$)'); plt.xlabel(r'$\theta$ [arcsec]')
plt.legend()

Ideal resolution: 1.11 [arcsec]

<matplotlib.legend.Legend at 0x7faf8ac79ef0>

# Here we define the 2D shape of the PSF (a kernel image), and not only the radial profile
def new_PSF3D():
    # We refill the Airy PSF created by astropy with our new values
    # The radius of the Airy disk kernel [in pixels]
    radio_aprx = 1.1/(0.0662)
    psfs0 = AiryDisk2DKernel(radio_aprx)
    psfs1 = np.copy(psfs0)
    x0, y0 = psfs0.center
    for ypos in range(psfs1.shape[0]):
        for xpos in range(psfs1.shape[1]):
            rr = (np.sqrt(abs(xpos-x0)**2.+abs(ypos-y0)**2.))
            rr_pix = rr*np.pi*D/lambdai/206265.*0.0662
            psfs1[ypos,xpos] = new_PSF(rr_pix)
    psfs1 /= np.sum(psfs1)
    return psfs1

def old_PSF3D():
    # Same construction as new_PSF3D(), but evaluating the original
    # profile of Wachter et al. (2012)
    radio_aprx = 1.1/(0.0662)
    psfs0 = AiryDisk2DKernel(radio_aprx)
    psfs1 = np.copy(psfs0)
    x0, y0 = psfs0.center
    for ypos in range(psfs1.shape[0]):
        for xpos in range(psfs1.shape[1]):
            rr = (np.sqrt(abs(xpos-x0)**2.+abs(ypos-y0)**2.))
            rr_pix = rr*np.pi*D/lambdai/206265.*0.0662
            psfs1[ypos,xpos] = old_PSF(rr_pix)
    psfs1 /= np.sum(psfs1)
    return psfs1

# We load two images of QS from the simulation and from the HMI satellite
imSIMU = np.load('simulation.npy'); imHMI = np.load('hmi.npy')
dx = 108  # Size of the sample
pHMI = imHMI[:dx,:dx]
plt.title('HMI - Observation')
plt.imshow(pHMI,cmap='gray',origin='lower',interpolation='bicubic')
plt.xlabel('X [pixel]'); plt.ylabel('Y [pixel]')

plt.title('Simulation - Original')
pSIMU = imSIMU[:int(dx/0.0662*0.504),:int(dx/0.0662*0.504)]
plt.imshow(pSIMU,cmap='gray',origin='lower',interpolation='bicubic')
yticki = [0,200,400,600,800]
plt.yticks(yticki); plt.xticks(yticki)

# We convolve both PSFs to compare later the images
new_SIMU = convolve_fft(pSIMU,new_PSF3D(),boundary='wrap')
old_SIMU = convolve_fft(pSIMU,old_PSF3D(),boundary='wrap')

# Now we resample the original images to the HMI sampling
pnew_SIMU = resample(new_SIMU[:int(dx/0.0662*0.504),:int(dx/0.0662*0.504)],[dx,dx])
pold_SIMU = resample(old_SIMU[:int(dx/0.0662*0.504),:int(dx/0.0662*0.504)],[dx,dx])

plt.title('Simulation - Degraded')
plt.imshow(pnew_SIMU,cmap='gray',origin='lower',interpolation='bicubic')

# We calculate the FFT of each image for a better comparison
v, psf1D = fft1D(pHMI)
v2, psf1D2 = fft1D(pnew_SIMU)
v3, psf1D3 = fft1D(pold_SIMU)

plt.semilogy(v,psf1D,label='HMI data',c='k')
plt.semilogy(v2,psf1D2,label='New PSF - Our work',c='C1')
plt.semilogy(v3,psf1D3,label='Old PSF - Wachter et al. (2012)',c='C0')
plt.xlim(0,0.5); plt.legend()
plt.title('Power Spectrum of the image')
plt.xlabel(r'$\nu$ [pix$^{-1}$]')
plt.ylabel(r'$P(\nu)$')

Figure 5 — Upper left: HMI observation. Upper right: snapshot from the simulation used for training. Lower left: degraded simulations, which can be compared with the HMI observations. Lower right: azimuthally averaged power spectrum of the HMI observations and the degraded simulations with the original PSF and the one modified and used in the training process. The physical dimension of the three maps is 54$\arcsec$$\times$54$\arcsec$.

Then, we randomly extract 50000 patches of $50\times 50$ pixels, both spatially and temporally, which will constitute the input patches of the training set.
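As a rough illustration of this patch-extraction step (the validation subset described next is drawn in the same way), the following sketch shows how input/target pairs could be cut out of the degraded and original image stacks; the array names and shapes are only illustrative and do not correspond to files distributed with this notebook.

# Illustrative sketch of the random extraction of training patches.
# `degraded` is assumed to be a stack of frames at the HMI sampling and
# `target` the corresponding stack at twice that sampling (2x more pixels).
import numpy as np

def extract_patches(degraded, target, n_patches=50000, size=50, seed=0):
    rng = np.random.RandomState(seed)
    x = np.empty((n_patches, size, size, 1), dtype='float32')
    y = np.empty((n_patches, 2*size, 2*size, 1), dtype='float32')
    for k in range(n_patches):
        t = rng.randint(degraded.shape[0])            # random time step
        i = rng.randint(degraded.shape[1] - size)     # random vertical position
        j = rng.randint(degraded.shape[2] - size)     # random horizontal position
        x[k, ..., 0] = degraded[t, i:i+size, j:j+size]
        y[k, ..., 0] = target[t, 2*i:2*i+2*size, 2*j:2*j+2*size]
    return x, y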
We also randomly extract a smaller subset of 5000 patches which will act as a validation set to avoid overfitting. These are used during the training to check that the CNN generalizes well and is not memorizing the training set. The targets of the training set are obtained similarly, but convolving with the Airy function of a telescope twice the diameter of HMI (28 cm), which gives a diffraction limit of $0.55"$, and then resampling to $0.25"$/pixel. Therefore, the sizes of the output patches are $100 \times 100$ pixels. All inputs and outputs for the continuum images are normalized to the average intensity of the quiet Sun. This is very convenient when the network is deployed in production because this quantity $I/I_c$ is almost always available. On the contrary, the magnetograms are divided by 10$^3$, so they are treated in kG during the training.

The training of the network is carried out by minimizing a loss function defined as the squared difference between the output of the network and the desired output defined on the training set. To this end, we use the Adam stochastic optimizer (Kingma & Ba 2014) with a learning rate of $\eta=10^{-4}$. The training is done on a Titan X GPU for 20 epochs, taking $\sim 500$ seconds per epoch. We augment the loss function with an $\ell_2$ regularization for the elements of the kernels of all convolutional layers to avoid overfitting. Finally, we add Gaussian noise (with an amplitude of 10$^{-3}$ in units of the continuum intensity for the continuum images and 10$^{-2}$ for the magnetograms, following the HMI standard specifications) to stabilize the training and produce better quality predictions. This is important for regions of low contrast in the continuum images and regions of weak magnetic fields in the magnetograms.

Apart from the size and number of kernels, there are a few additional hyperparameters that need to be defined in Enhance. The most important ones are the number of residual blocks, the learning rate of the Adam optimizer and the amount of regularization. We have found stable training behavior with a learning rate of $10^{-4}$, so we have kept this fixed. Additionally, we found that a regularization weight of $10^{-6}$ for the continuum images and $10^{-5}$ for the magnetograms provides nice and stable results. Finally, five residual blocks with $\sim$450k free parameters provide predictions that are almost identical to those of 10 and 15 residual blocks, but much faster. We note that the number of residual blocks can be further decreased even down to one and a good behavior is still found (even if the number of kernels is decreased to 32). This version of Enhance is six times faster than the one presented here, reducing the number of parameters to $\sim$40k, with differences around 3%. Although Enhance is already very fast, this simplified version can be used for an in-browser online super-resolution and deconvolution of HMI data.

# This is a pseudocode example of the training process using keras. See the full version in the repository:
# https://github.com/cdiazbas/enhance/blob/master/train.py

# We assign the loss function and the optimizer with the learning rate to the model class
model.compile(loss='mean_squared_error', optimizer=Adam(lr))

# And then we start the training process with the described data. Usually the method fit() is used
# when the dataset can be allocated in memory and fit_generator() for larger datasets.
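# (Illustrative values for the quantities used in the call below; these lines are not
#  part of the original pseudocode. The text above trains for 20 epochs with the Adam
#  optimizer and lr = 1e-4; the batch size is a free choice not specified in the text.)
epochs = 20
batchsize = 32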
model.fit_generator(training_data, batchsize, epochs, validation_data)

# After this last process, the NN can be used to make predictions in the same way:
prediction_data = model.predict(input_data)

Before proceeding to applying the networks to real data, we show in Fig. 6 the results with some of the patches from the validation set, which were not used during the training. The upper three rows show results for the continuum images, while the lower three rows show results for the magnetograms. The leftmost column is the original synthetic image at the resolution of HMI. The rightmost column is the target that should be recovered by the network, which has doubled the number of pixels in each dimension. The middle column displays our single-image super-resolution results. Even though the appearance of all small-scale details is not exactly similar to the target, we consider that Enhance is doing a very good job in deconvolving and super-resolving the data in the first column.

In the regions of increased activity, we find that we are able to greatly improve the fine structure, especially in the penumbra. Many details are barely visible in the synthetic HMI image but can be guessed. Of special relevance are the protrusions in the umbra in the third row, which are very well recovered by the neural network. The network also does a very good job in the quiet Sun, correctly recovering the expected shape of the granules from the blobby appearance in the HMI images.

The trained networks are then applied to real HMI data. In order to validate the output of our neural network we have selected observations of the Broadband Filter Instrument (BFI) from the Solar Optical Telescope (SOT, Ichimoto et al. 2008; Tsuneta et al. 2008) onboard Hinode (Kosugi et al. 2007). The pixel size of the BFI is $0.109"$ and the selected observations were obtained in the red continuum filter at $6684 \pm 2$ ${\AA}$, which is the one closest to the observing wavelength of HMI. To properly compare our results with Hinode, we have convolved the BFI images with the Airy function of a telescope of 28 cm diameter and resampled them to $0.25"$/pixel to match the output of Enhance. The Hinode images have not been deconvolved from the influence of its PSF. We point out that the long tails of the PSF of the Hinode/SOT instrument produce a slight decrease in the contrast (Danilovic et al. 2010) and this is the reason why our enhanced images have a larger contrast.

Figure 7 displays this comparison for two different regions (columns) observed simultaneously with Hinode and HMI. These two active regions are: NOAA 11330 (N09, E04) observed on October 27, 2011 (first column), and NOAA 12192 (S14, E05) observed on October 22, 2014 (second column). We have used HMI images with a cadence of 45 seconds, which is the worst scenario in terms of noise in the image. The upper rows show the original HMI images. The lower rows display the degraded Hinode images, while the central rows show the output of our neural network. Given the fully convolutional character of the deep neural network used in this work, it can be applied seamlessly to input images of arbitrary size. As an example, an image of size $400 \times 400$ can be super-resolved and deconvolved in $\sim$100 ms using a Titan X GPU, or $\sim$1 s using a 3.4 GHz Intel Core i7.

Figure 7 — Application of the neural network to real HMI images. From the upper to the lower part of each column: the original HMI images, the output of the neural network, and the degraded Hinode image.
All the axes are in pixel units. The contrast $\sigma_I/I$, calculated as the standard deviation of the continuum intensity divided by the average intensity of the area, is quoted in the title of each panel and has been obtained in a small region of the image displaying only granulation.

The granulation contrast increases from $\sim$3.7% to $\sim$7% (similar to Couvidat et al. 2016), almost a factor two larger than the one provided by the degraded Hinode data. We note that the contrast may be slightly off for the right column because of the small quiet Sun area available. The granulation contrast measured in Hinode without degradation is around 7%. After the resampling, it goes down to the values quoted in the figure. We note that Danilovic et al. (2008) analyzed the Hinode granulation contrast at 630 nm and concluded that it is consistent with that predicted by the simulations (in the range 14$-$15%) once the PSF is taken into account.

From a visual point of view, it is clear that Enhance produces small-scale structures that are almost absent in the HMI images but clearly present in the Hinode images. Additionally, the deconvolved and super-resolved umbral intensity decreases between 3 and 7% when compared to the original HMI umbral intensity. An interesting case is the large light bridge in the images of the right column, which increases in spatial complexity. Another example is the regions around the light bridge, which are plagued with small weak umbral dots that are evident in Hinode data but completely smeared out in HMI. For instance, the region connecting the light bridge at (125, 240) with the penumbra. Another similar instance of this enhancement occurs at (375, 190): a pore with some umbral dots that are almost absent in the HMI images.

As a caveat, we warn the users that the predictions of the neural network in areas close to the limb are poorer than those at disk center. Given that Enhance was trained with images close to disk center, one could be tempted to think that a lack of generalization is the cause for the failure.

# Here we describe how to use ENHANCE
# using a region close to the limb as an example:
mymap = fits.open('hmi_20111102_000029_continuum.fits')
dat = np.nan_to_num(mymap[1].data)

# We only want a small region
submap = dat[1700:2100,0:500]

# and it has to be normalized to the QS.
maxim = np.max(dat[0:,0:])
submap = submap/maxim
#submap = submap/np.mean(submap[:,400:])

# The image has to be saved in FITS format
hdu = fits.PrimaryHDU(submap)
os.system('rm samples/nhmi.fits')
hdu.writeto('samples/nhmi.fits')

# Then, we run our code as it is explained in the repository:
%run enhance.py -i samples/nhmi.fits -t intensity -o output/hmi_enhanced.fits

Model : intensity
Setting up network...
WARNING:tensorflow:From /usr/pkg/python/Python-3.4.3/lib/python3.4/site-packages/keras/backend/tensorflow_backend.py:1242: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Loading weights...
Predicting validation data...
Prediction took 5.7 seconds...
Saving data...
Overwriting...
plt.figure(figsize=(10,6))
mymap2 = fits.open('output/hmi_enhanced.fits')
submap2 = np.nan_to_num(mymap2[0].data)

plt.subplot(212); plt.title('HMI - Enhanced')
plt.imshow(submap2[0:400,:],cmap='gray',interpolation='None',vmin=submap2.min(),vmax=submap2.max())
plt.locator_params(axis='y', nbins=4)

plt.subplot(211); plt.title('HMI - Original')
plt.imshow(submap[0:200,:],cmap='gray',interpolation='None',vmin=submap2.min(),vmax=submap2.max())
plt.tick_params(axis='x',labelbottom='off'); plt.ylabel('Y [pixel]')

However, we note that structures seen at the limb, such as elongated granules, share some similarity with some penumbral filaments, so these cases are already present in the training set. The fundamental reason for the failure is that the spatial contrast at the limb is very small, so the neural network does not know how to reconstruct the structures, thus creating artifacts. We speculate that these artifacts will not be significantly reduced even if limb synthetic observations are included in the training set.

plt.imshow(submap2[200:400,200:400],cmap='gray',interpolation='None',vmax=np.max(submap2[200:400,200:400]))
plt.xlabel('X [pixel]'); plt.ylabel('Y [pixel]'); plt.locator_params(axis='y', nbins=4)

plt.imshow(submap[100:200,100:200],cmap='gray',interpolation='None',vmax=np.max(submap2[200:400,200:400]))

As a final example, we show in Fig. 8 the neural network applied to the intensity and the magnetogram of the same region: NOAA 11158 (S21, W28), observed on February 15, 2011. The FOV is divided into two halves. The upper parts show the HMI original image, both for the continuum image (left panel) and the magnetogram (right panel). The lower parts display the enhanced images after applying the neural network.

Note: This active region has been studied in the past. See for example: http://iopscience.iop.org/article/10.1088/0004-637X/783/2/98/pdf

Figure 8 — An example of our neural network applied to the intensity (left) and magnetogram (right) of the same region. The FOV is divided into two halves. The upper half shows the HMI original image, without applying the neural network. The lower half shows the enhanced image after applying the neural network. The original image was resampled to have the same scale as the network output.

After the deconvolution of the magnetogram, we find that: i) regions with very nearby opposite polarities suffer from an apparent cancellation in HMI data that can be restored with Enhance, giving rise to an increase in the absolute value of the longitudinal field; and ii) regions far from magnetized areas become contaminated by the surroundings in HMI, an effect that is also compensated for by Enhance, returning smaller longitudinal fields.

The left panel of Fig. 9 shows the density plot of the input versus output longitudinal magnetic field. Almost all the points lie on the 1:1 relation. However, points around 1 kG for HMI are promoted to larger absolute values, a factor $\sim 1.3 - 1.4$ higher than in the original image (Couvidat et al. 2016). Another interesting point to study is the range of spatial scales at which Enhance is adding information. The right panel of Fig. 9 displays the power spectrum of both magnetograms shown in the right part of Fig. 8. The main difference between both curves is situated in the range of spatial scales $\nu = 0.05-0.25$ pix$^{-1}$, with a peak at $\nu=0.15$ pix$^{-1}$.
In other words, the neural network is operating mainly at scales between 4 and 20 pixels, where the smearing effect of the PSF is higher. The same effect can be seen when a standard Richardson-Lucy maximum-likelihood algorithm (RL), including a bilinear interpolation to carry out the super-resolution, is used (see the next section for more details). The power spectrum of the output of Enhance and that of the image deconvolved with RL are almost the same for frequencies below 0.15 pix$^{-1}$ (equivalent to scales above $\sim 6$ pix). For larger frequencies (smaller scales), the RL version adds noisy small-scale structures at a level of $\sim$80 G; this is not the case with Enhance. We note that the original image has a noise around $\sim$10 G. To quantify this last point, we show in Fig. 9 the flat spectra of artificial white-noise images with zero mean and standard deviations of $\sigma=12$ G and $\sigma=75$ G.

Figure 9 — Left: scatter plot between the original magnetogram signal and the deconvolved magnetogram signal. Dotted lines indicate a change of a factor two. Right: spatial Fourier power spectrum of all considered magnetograms: the original, the output of Enhance and the one deconvolved with RL. We also show the power spectrum of white noise at two different levels.

Depending on the type of structure analyzed, the effect of the deconvolution is different. In plage regions, where the magnetic areas are less clustered than in a sunspot, the impact of the stray light is higher. There, Enhance produces a magnetic field that can increase up to a factor two (Yeo et al. 2014), with magnetic structures smaller in size, as signal smeared onto the surrounding quiet Sun is returned to its original location. According to the left panel of Fig. 9, fields with smaller amplitudes suffer a larger relative change. As a guide to the eye, the two dotted lines in the same figure indicate a change of a factor two.

To check these conclusions, we have used a Hinode-SOT Spectropolarimeter (SP; Lites et al. 2013) Level 1D$^{[5]}$ magnetogram. The region was observed on April 25, 2015, at 04:00 UT and its pixel size is around 0.30''/pix. Figure 10 shows the increase of the magnetic field after the deconvolution: magnetic fields of kG strength that were diluted by the PSF are recovered with Enhance. It was impossible to find the Hinode map of exactly the same region at exactly the same moment, meaning that some differences are visible; however, the general details are retrieved. In regions of strong concentrations, like the ones found in Fig. 8, almost every polarity patch is spatially concentrated and increased by a factor below 1.5.

5.- Hinode-SOT Spectropolarimeter (SP) Data Product Description and Access: http://sot.lmsal.com/data/sot/level1d/

The magnetogram case is more complex than the intensity map. Many studies (Krivova & Solanki 2004; Pietarila et al. 2013; Bamba et al. 2014) have demonstrated the impact of the spatial resolution on the estimation of the magnetic flux and on products derived from magnetograms, such as nonlinear force-free extrapolations (Tadesse et al. 2013; DeRosa et al. 2015) or comparisons with in-situ spacecraft measurements (Linker et al. 2017). Contrary to deconvolving intensity images, deconvolving magnetograms is always a very delicate issue. The difficulty lies in the cancellation produced during the smearing with a PSF when magnetic elements of opposite polarities are located nearby. This never happens for intensity images, which are always non-negative.
Consequently, one can arbitrarily increase the value of nearby positive and negative polarities while keeping, after smearing with the PSF, a result that is still close (in the quadratic sense) to the observed image. This effect is typically seen when a standard RL algorithm is used for deconvolution. Enhance avoids this effect by learning suitable spatial priors from the training dataset. It is true that the method will not be able to separate back two very nearby opposite polarities that have been fully canceled by the smearing of the PSF. Extensive tests show that the total absolute flux of each deconvolved image is almost the same as that in the original image, that is, the magnetic field is mainly "reallocated".

Figure 10 — Left: original HMI magnetogram of a plage region observed on April 25, 2015. Middle: the result of applying Enhance to the HMI magnetogram. Right: the Hinode magnetogram at the same resolution as Enhance. The magnetic flux has been clipped from −1 kG to 1 kG.

As a final step, we compare our results with those of an RL algorithm in a complicated case. Figure 11 shows the same image deconvolved with both methods. The output of Enhance is similar to the output of the RL method. Some noisy artifacts are detected in areas with low magnetic-field strength. However, a detailed analysis reveals some differences. In the light bridge (LB), the magnetic field is lower in the RL version. Additionally, the polarity inversion line (PIL) appears sharper and broader in the RL version than in the Enhance one. The magnetic flux in both areas (LB and PIL) is reduced by a factor 0.5, which might be an indication of too many iterations. The magnetic field strength of the umbra is between 50 G and 80 G higher in the RL version. As a final test, we checked the difference between the original image and the output of Enhance convolved with the PSF. The average relative difference is around 4% (which is in the range 10-80 G depending on the flux of the pixel), which goes down to less than 1% in the RL case (this is a clear indication that Enhance is introducing prior information not present in the data). Additionally, our network is orders of magnitude faster than RL, it does not create noisy artifacts, and the estimation of the magnetic field is as robust as with an RL method.

def rlucy(raw, psf, niter=2, damped=False, verbose=False):
    """Richardson-Lucy deconvolution of the image `raw` with the kernel `psf`."""
    from astropy.convolution import convolve_fft as convolve
    psf /= psf.sum()
    # Start from a flat image with the same mean as the observation
    lucy = np.ones(raw.shape) * raw.mean()
    for i in range(niter):
        ratio = raw / convolve(lucy, psf, boundary='wrap')
        ratio[np.isnan(ratio)] = 0
        top = convolve(ratio, psf, boundary='wrap')   # the PSF is assumed to be symmetric
        top[np.isnan(top)] = 0
        lucy = lucy * top
        # Median absolute residual between the re-convolved estimate and the data
        dife = np.abs(convolve(lucy, psf, boundary='wrap') - raw)
        chisq = np.nanmedian(dife)
        if verbose:
            print('iteration', i, chisq)
        if damped:
            # Damping: keep the original values wherever the residual is very large
            umbral = chisq*10.0
            lucy[dife > umbral] = raw[dife > umbral]
    return lucy

def new_PSF_scaled():
    # Build the PSF on the pixel grid of the interpolated image (pixel size 0.504302/2 arcsec)
    radio_aprx = 1.1/(0.504302/2.)
    rr_pix = rr*np.pi*D/lambdai/206265.*(0.504302/2.)

# As we will deconvolve the interpolated version of the
# magnetogram, we also need to scale the PSF
psfHMI = new_PSF_scaled()

# We load the magnetograms (stored in kG) and convert them to G
pHMI = np.load('blos_paper0.npy')*1000.
pEnhance = np.load('blos_paper1.npy')*1000.

# NOTE: we have implemented a damping method to avoid very large values.
# It can be enabled/disabled using damped=True/False in this function:
pLucy = rlucy(pHMI, psf=psfHMI, niter=20, damped=False, verbose=False)
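As a quick sanity check of the numbers quoted above (approximate conservation of the total absolute flux, and the residual obtained when the deconvolved maps are re-convolved with the PSF), one could run something along the following lines. This snippet is only an illustrative addition: it reuses convolve_fft imported at the beginning of the chapter, and the printed values obviously depend on the maps being used.

# Total absolute flux should be approximately conserved by the deconvolution
print('abs flux HMI    :', np.sum(np.abs(pHMI)))
print('abs flux Enhance:', np.sum(np.abs(pEnhance)))
print('abs flux RL     :', np.sum(np.abs(pLucy)))

# Re-convolve the deconvolved maps with the PSF and compare with the observation
resid_enh = np.abs(convolve_fft(pEnhance, psfHMI, boundary='wrap') - pHMI)
resid_rl  = np.abs(convolve_fft(pLucy,    psfHMI, boundary='wrap') - pHMI)
print('median residual Enhance [G]:', np.nanmedian(resid_enh))
print('median residual RL      [G]:', np.nanmedian(resid_rl))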
plt.imshow(pEnhance,vmin=-1500,vmax=1500,cmap='gray',interpolation='None')
plt.imshow(pLucy,vmin=-1500,vmax=1500,cmap='gray',interpolation='None')
plt.xlabel('X [pixel]'); plt.tick_params(axis='y',labelleft='off')
plt.imshow(pLucy-pEnhance,vmin=-200,vmax=200,cmap='gray',interpolation='None')

Figure 11 — Left: the output of Enhance. Middle: the output after applying a Richardson-Lucy method to deconvolve the image. Right: the difference between the RL version and the Enhance output. The magnetic flux has been clipped to ±1.5 kG in the first two panels and to ±200 G in the difference image.

This work presents the first successful deconvolution and super-resolution applied to solar images using deep convolutional neural networks. It represents, after Asensio Ramos et al. (2017), a new step toward the implementation of new machine learning techniques in the field of solar physics. Single-image super-resolution and deconvolution, either for continuum images or for magnetograms, is an ill-defined problem. It requires the addition of extra knowledge about what to expect in the high-resolution images. The deep learning approach presented here extracts this knowledge from the simulations and also applies a deconvolution. All this is done very quickly, almost in real time, and for images of arbitrary size. We hope that Enhance will allow researchers to study small-scale details in HMI images and magnetograms, something that is currently impossible. Often, HMI is used not as the primary source of information but as a complement for ground-based observations, providing the context. For this reason, having enhanced images in which this context can be analyzed with increased resolution is interesting.

We have preferred to be conservative and only do super-resolution by a factor two. We carried out some tests with a larger factor, but the results were not satisfactory. Whether or not other techniques proposed in this explosively growing field can work better remains to be tested. Among others, techniques like gradual up-sampling (Zhao et al. 2017), recursive convolutional layers (Kim et al. 2015), recursive residual blocks (Tai et al. 2017), or using adversarial networks as a more elaborate loss function (Ledig et al. 2016; Schawinski et al. 2017) could potentially produce better results.

We provide Enhance, hosted at https://github.com/cdiazbas/enhance, as an open-source tool, providing the methods to apply the trained networks used in this work to HMI images or to re-train them using new data. In the future, we plan to extend the technique to other telescopes/instruments to generate super-resolved and deconvolved images.

We would like to thank Monica Bobra and her collaborators for promoting the use of these new analysis methods in solar physics and for inviting us to contribute this chapter. As this chapter is based on a publication in A&A, 614, A5 (2018), we would also like to thank the anonymous referee of our article. We thank Mark Cheung for kindly sharing with us the simulation data, without which this study would not have been possible. Financial support by the Spanish Ministry of Economy and Competitiveness through project AYA2014-60476-P is gratefully acknowledged. CJDB acknowledges Fundación La Caixa for the financial support received in the form of a PhD contract. We also thank the NVIDIA Corporation for the donation of the Titan X GPU used in this research.
This research has made use of NASA's Astrophysics Data System Bibliographic Services. We acknowledge the community effort devoted to the development of the following open-source packages that were used in this work: numpy (numpy.org), matplotlib (matplotlib.org), Keras (keras.io), Tensorflow (tensorflow.org) and SunPy (sunpy.org).

Asensio Ramos, A., & de la Cruz Rodríguez, J. 2015, A&A, 577, A140
Asensio Ramos, A., Requerey, I. S., & Vitas, N. 2017, A&A, 604, A11
Asensio Ramos, A., & Socas-Navarro, H. 2005, A&A, 438, 1021
Bamba, Y., Kusano, K., Imada, S., & Iida, Y. 2014, PASJ, 66, S16
Bello González, N., Yelles Chaouche, L., Okunev, O., & Kneer, F. 2009, A&A, 494, 1091
Bishop, C. M. 1996, Neural Networks for Pattern Recognition (Oxford University Press)
Borman, S., & Stevenson, R. L. 1998, Midwest Symposium on Circuits and Systems, 374
Carroll, T. A., & Kopf, M. 2008, A&A, 481, L37
Cheung, M. C. M., Rempel, M., Title, A. M., & Schüssler, M. 2010, ApJ, 720, 233
Ciuca, R., Hernández, O. F., & Wolman, M. 2017, ArXiv e-prints arXiv:1708.08878
Colak, T., & Qahwaji, R. 2008, Sol. Phys., 248, 277
Couvidat, S., Schou, J., Hoeksema, J. T., et al. 2016, Sol. Phys., 291, 1887
Danilovic, S., Gandorfer, A., Lagg, A., et al. 2008, A&A, 484, L17
Danilovic, S., Schüssler, M., & Solanki, S. K. 2010, A&A, 513, A1
DeRosa, M. L., Wheatland, M. S., Leka, K. D., et al. 2015, ApJ, 811, 107
Dong, C., Change Loy, C., He, K., & Tang, X. 2015, ArXiv e-prints arXiv:1501.00092
Dong, C., Change Loy, C., & Tang, X. 2016, ArXiv e-prints arXiv:1608.00367
Hayat, K. 2017, ArXiv e-prints arXiv:1706.09077
He, K., Zhang, X., Ren, S., & Sun, J. 2015, ArXiv e-prints arXiv:1512.03385
Ichimoto, K., Lites, B., Elmore, D., et al. 2008, Sol. Phys., 249, 233
Ioffe, S., & Szegedy, C. 2015, ICML-15, eds. D. Blei, & F. Bach, 448
Kim, J., Lee, J. K., & Lee, K. M. 2015, ArXiv e-prints arXiv:1511.04491
Kingma, D. P., & Ba, J. 2014, ArXiv e-prints arXiv:1412.6980
Kosugi, T., Matsuzaki, K., Sakao, T., et al. 2007, Sol. Phys., 243, 3
Krivova, N. A., & Solanki, S. K. 2004, A&A, 417, 1125
LeCun, Y., & Bengio, Y. 1998, ed. M. A. Arbib (Cambridge, MA: MIT Press), 255
LeCun, Y., Bottou, L., Orr, G. B., & Müller, K.-R. 1998, NIPS Workshop (London, UK: Springer-Verlag), 9
Ledig, C., Theis, L., Huszar, F., et al. 2016, ArXiv e-prints arXiv:1609.04802
Linker, J. A., Caplan, R. M., Downs, C., et al. 2017, ApJ, 848, 70
Lites, B. W., Akin, D. L., Card, G., et al. 2013, Sol. Phys., 283, 579
Nair, V., & Hinton, G. E. 2010, ICML-10 (Haïfa: ACM Digital Library), 21, 807
Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Sol. Phys., 275, 3
Peyrard, C., Mamalet, F., & Garcia, C. 2015, in VISAPP, eds. J. Braz, S. Battiato, & J. F. H. Imai (Setùbal: SciTePress), 1, 84
Pietarila, A., Bertello, L., Harvey, J. W., & Pevtsov, A. A. 2013, Sol. Phys., 282, 91
Quintero Noda, C., Asensio Ramos, A., Orozco Suárez, D., & Ruiz Cobo, B. 2015, A&A, 579, A3
Richardson, W. H. 1972, J. Opt. Soc. Am., 62, 55
Ruiz Cobo, B., & Asensio Ramos, A. 2013, A&A, 549, L4
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. 1986, Nature, 323, 533
Schawinski, K., Zhang, C., Zhang, H., Fowler, L., & Santhanam, G. K. 2017, MNRAS, 467, L110
Scherrer, P. H., Schou, J., Bush, R. I., et al. 2012, Sol. Phys., 275, 207
Schmidhuber, J. 2015, Neural Networks, 61, 85
Shi, W., Caballero, J., Huszár, F., et al. 2016, ArXiv e-prints arXiv:1609.05158
Simonyan, K., & Zisserman, A. 2014, ArXiv e-prints arXiv:1409.1556
Socas-Navarro, H. 2005, ApJ, 621, 545
Stein, R. F. 2012, Liv. Rev. Sol. Phys., 9, 4
Stein, R. F., & Nordlund, Å. 2012, ApJ, 753, L13
Tadesse, T., Wiegelmann, T., Inhester, B., et al. 2013, A&A, 550, A14
Tai, Y., Yang, J., & Liu, X. 2017, Proceedings of IEEE Computer Vision and Pattern Recognition
Tipping, M. E., & Bishop, C. M. 2003, (Cambridge, MA: MIT Press), 1303
Tsuneta, S., Ichimoto, K., Katsukawa, Y., et al. 2008, Sol. Phys., 249, 167
van Noort, M. 2012, A&A, 548, A5
Vögler, A., Shelyag, S., Schüssler, M., et al. 2005, A&A, 429, 335
Wachter, R., Schou, J., Rabello-Soares, M. C., et al. 2012, Sol. Phys., 275, 261
Xu, L., Ren, J. S. J., Liu, C., & Jia, J. 2014, NIPS'14 (Cambridge, MA: MIT Press), 1790
Yeo, K. L., Feller, A., Solanki, S. K., et al. 2014, A&A, 561, A22
Zhao, Y., Wang, R., Dong, W., et al. 2017, ArXiv e-prints arXiv:1703.04244
CommonCrawl
March 2020, 28(1): 205-220. doi: 10.3934/era.2020014

Global existence and energy decay of solutions for a wave equation with non-constant delay and nonlinear weights

Vanessa Barros 1, Carlos Nonato 1 and Carlos Raposo 2,*
1 Department of Mathematics, Federal University of Bahia, Salvador, 40170-115, Bahia, Brazil
2 Department of Mathematics, Federal University of São João del-Rei, São João del-Rei, 36307-352, Minas Gerais, Brazil
* Corresponding author: [email protected]

Received January 2020. Revised February 2020. Published March 2020.

Fund Project: The first author was partially supported by FCT project PTDC/MAT-PUR/28177/2017, with national funds, and by CMUP (UID/MAT/00144/2019), which is funded by FCT with national (MCTES) and European structural funds through the programs FEDER, under the partnership agreement PT2020. The second author was partially supported by CAPES (Brazil).

We consider the wave equation with a weak internal damping with non-constant delay and nonlinear weights, given by
$$ u_{tt}(x,t) - u_{xx}(x,t) + \mu_1(t)u_t(x,t) + \mu_2(t)u_t(x,t-\tau(t)) = 0 $$
in a bounded domain. Under proper conditions on the nonlinear weights $\mu_1(t), \mu_2(t)$ and the non-constant delay $\tau(t)$, we prove global existence and estimate the decay rate of the energy.

Keywords: Wave equation, non-constant delay and weights, exponential stability.
Mathematics Subject Classification: Primary: 35D05, 35E15; Secondary: 35Q35.
Citation: Vanessa Barros, Carlos Nonato, Carlos Raposo. Global existence and energy decay of solutions for a wave equation with non-constant delay and nonlinear weights. Electronic Research Archive, 2020, 28 (1) : 205-220. doi: 10.3934/era.2020014
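The exponential decay of the energy that the paper establishes analytically can also be explored numerically. The following finite-difference sketch is not taken from the paper: the weights mu1 and mu2, the delay tau(t), the initial data, the zero past-velocity history and the discretization are all illustrative assumptions, chosen so that the delayed weight is dominated by the instantaneous one, as in the usual well-posedness conditions.

import numpy as np

# Toy explicit scheme for  u_tt - u_xx + mu1(t) u_t(x,t) + mu2(t) u_t(x, t - tau(t)) = 0
# on (0,1) with homogeneous Dirichlet boundary conditions.
mu1 = lambda t: 1.0                       # nonlinear weights (kept constant here for simplicity)
mu2 = lambda t: 0.3                       # |mu2| < mu1
tau = lambda t: 0.2 + 0.1 * np.sin(t)     # non-constant, strictly positive delay

nx = 201
dx = 1.0 / (nx - 1)
dt = 0.4 * dx                             # CFL-stable step for unit wave speed
nt = 5000
x = np.linspace(0.0, 1.0, nx)

u = np.zeros((nt, nx))
u[0] = np.sin(np.pi * x)                  # initial displacement
u[1] = u[0]                               # zero initial velocity (and zero velocity history for t < 0)

def u_t(n):
    """Backward-difference velocity at time level n (zero for the prescribed past)."""
    return (u[n] - u[n - 1]) / dt if n > 0 else np.zeros(nx)

energy = np.zeros(nt)
for n in range(1, nt - 1):
    t = n * dt
    m = n - int(round(tau(t) / dt))       # time level carrying the delayed velocity
    uxx = np.zeros(nx)
    uxx[1:-1] = (u[n, 2:] - 2.0 * u[n, 1:-1] + u[n, :-2]) / dx**2
    u[n + 1] = 2.0 * u[n] - u[n - 1] + dt**2 * (uxx - mu1(t) * u_t(n) - mu2(t) * u_t(m))
    u[n + 1, 0] = u[n + 1, -1] = 0.0      # Dirichlet boundary conditions
    ut, ux = u_t(n + 1), np.diff(u[n + 1]) / dx
    energy[n + 1] = 0.5 * (np.sum(ut**2) + np.sum(ux**2)) * dx

# The discrete energy decays roughly exponentially in this dissipative regime
print(energy[2], energy[nt // 2], energy[-1])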
CommonCrawl
Eigenvalue problems with extremely small gaps

I'm interested in numerically diagonalizing a class of structured, symmetric eigenvalue problems with potentially extremely small eigenvalue gaps. The question I have is how to design a numerically stable way to get the eigenvalues and eigenvectors. While usually small gaps destroy precision in numerical eigenproblems, for reasons I will explain below the structure of these problems allows for accurate solutions in principle, i.e. the eigenvalues and eigenvectors don't change dramatically as a function of perturbations of the non-zero matrix elements. An explicit example will follow, but the reason for the stability is the following: the structure of the small parameters is such that they can always be brought to the block off-diagonal form $H = \left( \begin{array}{c|c} D_0 & \epsilon \\ \epsilon^T & D_1 \end{array} \right),$ where $D_0$, $D_1$ are diagonal and $\epsilon$ is small compared to the level spacing in $D_0$ and $D_1$. In this case, perturbation theory says that the eigenvalues $\lambda_i$ are close to those of $D_0$ or $D_1$, i.e. $|\lambda_i - d_i | < \frac{||\epsilon||^2}{d_i - d_j} $.

The eigenproblems of interest come from Hamiltonians of very short quantum spin chains, for example $H = -h_1 \sigma^z_1 - h_2\sigma^z_2 - h_3\sigma^z_3 - J_1 \sigma^x_1 \sigma^x_2 - J_2 \sigma^x_2 \sigma^x_3,$ $\sigma^z_1 = kron(\sigma^z, I, I),$ $\sigma^x_2 = kron(I, \sigma^x, I),$ Thus the full matrix takes a form like $H = \left( \begin{array}{cccccccc} -d_0 & 0 & 0 & -J_1 & 0 & 0 & -J_2 & 0 \\ 0 & -d_1 & -J_1 & 0 & 0 & 0 & 0 & -J_2 \\ 0 & -J_1 & -d_2 & 0 & -J_2 & 0 & 0 & 0 \\ -J_1 & 0 & 0 & -d_3 & 0 & -J_2 & 0 & 0 \\ 0 & 0 & -J_2 & 0 & d_3 & 0 & 0 & -J_1 \\ 0 & 0 & 0 & -J_2 & 0 & d_2 & -J_1 & 0 \\ -J_2 & 0 & 0 & 0 & 0 & -J_1 & d_1 & 0 \\ 0 & -J_2 & 0 & 0 & -J_1 & 0 & 0 & d_0 \\ \end{array} \right),$ where $d_0 = h_1 + h_2 + h_3$, $d_1 = -h_1 + h_2 + h_3$, $d_2 = h_1 - h_2 + h_3$, $d_3 = h_1 + h_2 - h_3$.

The ratios of the parameters $h_1, h_2, h_3, J_1, J_2$ are potentially much smaller than machine precision. This leads to very small gaps in the eigenvalues and trouble for the typical numerical eigensolvers working at finite precision. However, I do not think it should be impossible to determine the eigenvectors, because when the ratios are large we have precisely the situation where perturbation theory can be used to approximately determine the eigenvalues and eigenvectors in the following manner: For example, if $h_2$ is the largest scale, then perturbation theory in the small parameters $J_i/h_2$ shows that the eigenvalues are very close to the eigenvalues of these smaller matrices: $ \tilde{H}_{\pm} = \pm h_2 I - h_1 \sigma^z_1 - h_3 \sigma^z_2 \pm \frac{J_1 J_2}{h_2} \sigma^x_1 \sigma^x_2 + \mathcal{O}(\frac{J_i^4}{h_2^3}).$ This argument can be repeated again if one of the remaining coefficients $h_1, h_3, \frac{J_1 J_2}{h_2}$ is large compared to the others. For example, if $\frac{J_1 J_2}{h_2}$ is the largest remaining scale, the eigenvalues will be very close to $ \lambda_{\pm, \pm, \pm} \approx \pm h_2 \pm \frac{J_1 J_2}{h_2} \pm \frac{h_1 h_2 h_3}{J_1 J_2}.$ The eigenvectors are also accurately determined by perturbation theory.

My goal is to accurately determine a single eigenvector of these matrices. When using numerical eigensolvers, the issue is as follows: when the small gaps (in the example $2\frac{h_1 h_2 h_3}{J_1 J_2}$) are less than machine precision, a numerical eigensolver will mix the eigenstates.
I could just use perturbation theory in this case. In the situations where the ratios of the parameters are not so small, perturbation theory fails to be accurate but numerical eigensolvers have no problem. Is it possible to make an eigensolver that captures both cases accurately?

P.S. For the specific test problem chosen here, a mapping to free fermions shows that the exact eigenvalues are given by the formula $ \lambda_{\pm, \pm, \pm} = \pm \sigma_1 \pm \sigma_2 \pm \sigma_3$ where $\sigma_i$ are the singular values of the matrix $M = \left( \begin{array}{ccc} h_1 & J_1 & 0 \\ 0 & h_2 & J_2 \\ 0 & 0 & h_3 \\ \end{array} \right).$ This isn't true for the general case I am interested in but can serve as a useful check that the eigensolver is actually accurate.

computational-physics eigensystem preconditioning

asked by deemaregee

Comment: What about using high (arbitrary precision) arithmetic? Do you have computation time or memory restrictions which would preclude that? For example (not an endorsement) advanpix.com/2011/10/12/… . – Mark L. Stone Jun 5 '18 at 18:41

Answer: Is it possible to make an eigensolver that captures both cases accurately?

In general no, for the following reason. When a typical eigensolver produces approximate eigenvalues of a matrix $A$, the answer is usually backward stable: the output eigenvalues are the exact eigenvalues of a nearby matrix $A+\delta A$ where $\|\delta A\|\leq \epsilon_{\mathrm{mach}}\|A\|$. This is also what causes them to produce inaccurate almost-double eigenvalues. In your case, you have a special matrix $H$ that can only be perturbed in specific ways: as far as an eigensolver is concerned, the nearby matrix can contain 64 independent perturbations, one per matrix element, but in your matrix only the 5 parameters can be perturbed and the space of potential perturbations $\delta H$ is much smaller than what is allowed by $\|\delta H\| \leq \epsilon_{\mathrm{mach}}\|H\|$. This means that if you did have an eigensolver that could handle this, it would have to be designed in advance with knowledge of the special structure of your matrix. This is in fact what you've done already: you've described an eigensolver that dispatches either to a standard solver or to perturbation theory. I think this is probably the best you can do anyway, short of having some closed form solution or something.

answered by Kirill
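As a footnote to the question's P.S., the free-fermion check can be reproduced with a few lines of NumPy. The parameter values below are arbitrary illustrations (not from the question); the point of the snippet is only to compare a dense symmetric eigensolver against the closed-form prediction.

import numpy as np
from itertools import product

# Pauli matrices and the 2x2 identity
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Illustrative parameter values with well-separated scales
h1, h2, h3, J1, J2 = 1e-3, 1.0, 2e-3, 3e-2, 4e-2

H = (-h1 * kron3(sz, I2, I2) - h2 * kron3(I2, sz, I2) - h3 * kron3(I2, I2, sz)
     - J1 * kron3(sx, sx, I2) - J2 * kron3(I2, sx, sx))

# Dense symmetric eigensolver
evals = np.linalg.eigvalsh(H)

# Free-fermion prediction: +/- sigma_1 +/- sigma_2 +/- sigma_3,
# with sigma_i the singular values of M
M = np.array([[h1, J1, 0.0],
              [0.0, h2, J2],
              [0.0, 0.0, h3]])
sigma = np.linalg.svd(M, compute_uv=False)
predicted = np.sort([s1 * sigma[0] + s2 * sigma[1] + s3 * sigma[2]
                     for s1, s2, s3 in product((+1, -1), repeat=3)])

print(np.max(np.abs(evals - predicted)))   # of order machine epsilon times ||H||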
CommonCrawl
Abelian Group or Commutative Group

If the commutative law holds in a group, then such a group is called an Abelian group or commutative group. Thus the group $$\left( {G, * } \right)$$ is said to be an Abelian group or commutative group if $$a * b = b * a,\forall a,b \in G$$. A group which is not Abelian is called a non-Abelian group. The group $$\left( {G, + } \right)$$ is called the group under addition while the group $$\left( {G, \times } \right)$$ is known as the group under multiplication.

The structure $$\left( {\mathbb{Z}, + } \right)$$ is a group, i.e., the set of integers with the addition composition is a group. This is so because addition in numbers is associative. The additive identity $$0$$ belongs to $$\mathbb{Z}$$, and the inverse of every element $$a$$, viz. $$-a$$, belongs to $$\mathbb{Z}$$. This is known as the additive Abelian group of integers.

The structures $$\left( {\mathbb{Q}, + } \right),\left( {\mathbb{R}, + } \right),\left( {\mathbb{C}, + } \right)$$ are all groups, i.e., the sets of rational numbers, real numbers, and complex numbers, each with the additive composition, form an Abelian group. But the same sets with the multiplication composition do not form a group, for the multiplicative inverse of the number zero does not exist in any of them.

The structure $$\left( {{\mathbb{Q}_o}, \times } \right)$$ is an Abelian group, where $${\mathbb{Q}_o}$$ is the set of non-zero rational numbers. This is so because the operation is associative. The multiplicative identity $$1$$ belongs to $${\mathbb{Q}_o}$$, and the multiplicative inverse of every element $$a$$ in the set is $$1/a$$, which also belongs to $${\mathbb{Q}_o}$$. This is known as the multiplicative Abelian group of non-zero rationals. Obviously $$\left( {{\mathbb{R}_o}, \times } \right)$$ and $$\left( {{\mathbb{C}_o}, \times } \right)$$ are groups, where $${\mathbb{R}_o}$$ and $${\mathbb{C}_o}$$ are respectively the sets of non-zero real numbers and non-zero complex numbers.
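The infinite groups above cannot be checked exhaustively, but finite analogues of them can. The following small Python sketch (our own illustration) verifies the Abelian-group axioms for the integers modulo n under addition and for the non-zero residues modulo a prime under multiplication.

from itertools import product

def is_abelian_group(elements, op):
    """Exhaustively check the Abelian-group axioms on a finite set."""
    elements = list(elements)
    # Closure
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity element
    identity = next((e for e in elements
                     if all(op(e, a) == a and op(a, e) == a for a in elements)), None)
    if identity is None:
        return False
    # Inverses
    if any(all(op(a, b) != identity for b in elements) for a in elements):
        return False
    # Commutativity
    return all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))

n, p = 6, 7
print(is_abelian_group(range(n), lambda a, b: (a + b) % n))        # True: integers mod 6 under +
print(is_abelian_group(range(1, p), lambda a, b: (a * b) % p))     # True: non-zero residues mod 7 under x
print(is_abelian_group(range(1, n), lambda a, b: (a * b) % n))     # False: 6 is not prime, closure fails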
CommonCrawl
Investigating the Dilution Series

1) It is clear that the highest achievable concentration is the original concentration of 100000 cells/ml, which results from transferring solution between subsequent beakers without adding any additional water. The smallest concentration results from transferring the minimum amount of solution each time (10ml) but adding the maximum amount of water (100ml). This gives a minimum final concentration of $100000/11^4 \approx 6.8$ cells/ml.

2) It should be noted that for several of the required dilutions there is more than one possible way to make them. However, only a single solution is provided below:

a) To achieve a concentration of 10 cells/ml requires a dilution of 10,000 times. Since we have four opportunities to dilute the original solution, this logically requires a tenfold dilution each time. Thus, 10ml of solution should be transferred each time, and 90ml of water added to it.

b) To give a concentration of 100 cells/ml, the same process should be repeated as in a) except that the final addition of water should not occur. Thus only three tenfold dilutions occur, and so the final concentration is 100 cells/ml as opposed to 10 cells/ml.

c) To give a concentration of 160 cells/ml requires a 625 times dilution, which can be decomposed into two 2.5x dilutions and two 10x dilutions. Thus, to give the required concentration involves taking 20ml of solution and adding 30ml of water, and then taking 10ml of solution and adding 90ml of water, before repeating both of these steps.

d) Achieving a 20 cells/ml concentration is similar to the 10 cells/ml dilution, except that the final step is different. Rather than taking 10ml of solution and adding 90ml of water to give a tenfold dilution, 20ml of solution is taken and 80ml of water added to give a fivefold dilution.

e) To give a concentration of 125 cells/ml requires a dilution of 800 times. This can be easily decomposed into two tenfold dilutions and one eightfold dilution. Thus, the first two dilution steps involve taking 10ml of solution and adding 90ml of water, whereas the third step involves taking 10ml of solution and adding 70ml of water. The final step involves no addition of water.

f) A concentration of 1875 cells/ml requires a dilution of $\frac{160}{3}$. We are essentially saying that $100000 \times \frac{3}{160} = 1875$, and so require up to four fractions which multiply together to give $\frac{3}{160}$. Three such fractions are $\frac{3}{5}$, $\frac{1}{8}$ and $\frac{1}{4}$. Thus, the first dilution involves taking 30ml of solution and adding 20ml of water; the second involves taking 10ml of solution and adding 70ml of water; the third involves taking 10ml of solution and adding 30ml of water; whereas the fourth requires no addition of water.

3) At each dilution stage it is possible to transfer between 10 and 100ml of solution (in 10ml intervals) and then add between 0 and 100ml (in 10ml intervals) of water. This gives a possibility of 110 different combinations, but unfortunately these are not all unique dilutions.
For example, taking 10ml of solution and adding 10ml of water will give the same dilution as taking 20ml of solution and adding 20ml of water. By looking carefully at all these different dilutions (by writing them out!) it can be deduced that there are 64 different unique dilutions possible (a short enumeration script confirming this count is given at the end of this solution).

4) A dilution of 1/11 can be made very simply: just take 10ml of solution and add 100ml of water. However, a dilution of 1/17 is impossible to make in a single dilution, since no less than 10ml of solution can be taken and no more than 100ml of water can be added. We can get a denominator of 17 by taking 100ml of solution and 70ml of water - this would give a dilution 10/17 of the original. Then we need to do a tenfold dilution to get 1/17; we can do this by taking 10ml of solution and adding 90ml of water.

To get a 1/23 dilution we would need to make a fraction with 23 in the denominator. This will not be possible, since the denominator is made by adding the amount of solution to the amount of water (and dividing by 10, since we are using multiples of 10ml throughout), so the maximum denominator in a single step is achieved by adding 100ml and 90ml, giving a denominator of 190 which cancels to 19. We can't make a denominator of 23 in more than one step since 23 is prime, and each subsequent step multiplies the denominators together.

5) A dilution of 1/21 can be made by recognising that the fraction can be written as a product of 1/3 and 1/7. Thus, using two dilutions 1/21 can be made: firstly take 10ml of solution and add 20ml of water. Next, take 10ml of this new solution and add 60ml of water. A dilution of 1/46 cannot be made exactly. 1/46 can be decomposed to 1/2 x 1/23. Although a dilution of 1/2 can be made, it is not possible to then dilute this by a factor of 23.

6) Dilutions cannot be made for fractions whose denominators are prime numbers greater than 19, or multiples thereof. With the added restriction of only performing four dilutions altogether, to decide whether a particular dilution is possible it is necessary to consider how to express the fraction as a product of those fractions which can be made in one dilution.
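As a quick computational check of the counting argument in part 3) and the reachability arguments in parts 4) and 5), the dilutions can be enumerated exactly with Python's Fraction type. This script is an illustrative addition rather than part of the original solution.

from fractions import Fraction

# All single-step dilutions: transfer s ml (10..100) and add w ml of water (0..100)
single = {Fraction(s, s + w) for s in range(10, 101, 10) for w in range(0, 101, 10)}
print(len(single))                     # 64 unique single-step dilutions

# Dilutions reachable with up to four steps (the factor 1 is itself a single-step option,
# so products of exactly four factors cover all shorter sequences as well)
reachable = {Fraction(1)}
for _ in range(4):
    reachable = {a * b for a in reachable for b in single}

print(Fraction(1, 11) in reachable)    # True  (one step)
print(Fraction(1, 17) in reachable)    # True  (two steps: 10/17 then 1/10)
print(Fraction(1, 21) in reachable)    # True  (1/3 x 1/7)
print(Fraction(1, 23) in reachable)    # False (23 is a prime larger than 19)
print(Fraction(1, 46) in reachable)    # False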
CommonCrawl
An overview of recommender systems in the healthy food domain

Thi Ngoc Trang Tran (ORCID: orcid.org/0000-0002-3550-8352)1, Müslüm Atas1, Alexander Felfernig1 & Martin Stettinger1

Journal of Intelligent Information Systems, volume 50, pages 501–526 (2018)

Recently, food recommender systems have received increasing attention due to their relevance for healthy living. Most existing studies on the food domain focus on recommendations that suggest proper food items for individual users on the basis of considering their preferences or health problems. These systems also provide functionalities to keep track of nutritional consumption as well as to persuade users to change their eating behavior in positive ways. Also, group recommendation functionalities are very useful in the food domain, especially when a group of users wants to have a dinner together at home or have a birthday party in a restaurant. Such scenarios create many challenges for food recommender systems since the preferences of all group members have to be taken into account in an adequate fashion. In this paper, we present an overview of recommendation techniques for individuals and groups in the healthy food domain. In addition, we analyze the existing state-of-the-art in food recommender systems and discuss research challenges related to the development of future food recommendation technologies.

According to the prediction of the World Health Organization, the number of overweight adults worldwide was expected to reach an alarming 2.3 billion by 2015. More significantly, overweight and obesity also cause many chronic diseases (Robertson 2004). An appropriate dietary intake is considered an important factor to improve overall well-being. Although most people are aware of the importance of healthy eating habits, they usually tend to neglect appropriate behaviors because of busy lifestyles and/or unwillingness to spend cognitive effort on food preparation. These problems prevent users from healthy food consumption (Van Pinxteren et al. 2011). Hence, recommender systems have been investigated as an effective solution to help users change their eating behavior and aim for healthier food choices. However, food and diet are complex domains bringing many challenges for recommendation technologies. For making recommendations, thousands of food items/ingredients have to be collected. Besides, because foods/ingredients are usually combined with each other in a recipe instead of being consumed separately, this exponentially increases the complexity of a recommender system (Freyne and Berkovsky 2010). Furthermore, food recommender systems not only recommend food suiting users' preferences, but also suggest healthy food choices, keep track of eating behavior, understand health problems, and persuade users to change their behavior.

While many existing recommender systems mainly target individuals, there is a remarkable increase of recommender systems which generate suggestions for groups. Some early systems were developed in a variety of domains, such as group web page recommendation (Lieberman et al. 1999), tour packages for groups of tourists (Ardissono et al. 2003), music tracks and playlists for large groups of listeners (Crossen et al. 2002), and movies and TV programs for friends and family (O'Connor et al. 2001; Yu et al. 2006).
Group scenarios are especially popular in the food domain, in which a group of family members, friends, or colleagues wants to organize a party or simply have a meal together. However, the complexity significantly increases when food recommender systems need to take into account the preferences of all group members and strategies for achieving consensus among the group members. In this paper, we summarize existing research related to food/recipe recommender systems which give recommendations on the basis of considering the users' preferences as well as their nutritional needs. In this context, we also discuss scenarios for applying group recommender systems in the healthy food domain. An overview of some research related to the application of recommender systems in the healthy food domain is provided in Table 1.

Table 1 A summary of the state-of-the-art of recommender systems in the healthy food domain (CF: Collaborative filtering recommender systems, CB: Content-based recommender systems)

The contributions of this paper are the following. First, we provide a short overview of recommendation approaches for individuals. Second, we discuss group decision making issues which have an impact on the development of group recommendation technologies. Third, on the basis of categorizing food recommender systems, we analyze how well those systems can help individuals or groups to choose healthy food which best fits their preferences and health situations. Finally, we point out some challenges of food recommender systems with regard to user information, recommendation algorithms, changing eating behaviors, explanation provision, and group decision making as topics for future work.

The remainder of this paper is organized as follows. In Section 2, we provide an overview of basic recommendation techniques for individuals and groups. In Section 3, we summarize existing studies on food recommender systems for single users and categorize them according to different criteria, such as preferences, nutritional needs, health problems, and eating behaviors of users. Besides, in this section we also discuss some research related to food recommender systems in group scenarios. Research challenges for food recommender systems are discussed in Section 4. The paper is concluded with Section 5.

Due to the heavy information overload triggered by the Internet, extracting/finding valuable information has become increasingly difficult. In this context, recommender systems have become an effective tool to extract useful information and deliver it in an efficient way. A recommender system predicts the preferences of users for unrated items and recommends new items to users. Along with the benefits of recommender systems, the development of new recommendation approaches and their application in different fields have increased rapidly. The following subsections present an overview of recommendation techniques for individuals and groups.

Recommendation techniques for individuals

According to Burke et al. (2011) and Burke (2000), a recommender system can be defined as follows: "Any system that guides a user in a personalized way to interesting or useful objects in a large space of possible options or that produces such objects as output". Recommender systems are intensively applied for the purpose of recommending products and services (e.g., movies, books, digital cameras, and financial services) which best meet users' needs and preferences. Recently, in the healthy food domain, recommender systems have been discovered as a potential solution to help users to cope with the vast amount of available data related to foods/recipes.
Recently, in the healthy food domain, recommender systems have been discovered as a potential solution to help users to cope with the vast amount of available data related to foods/recipes. Many different techniques have been proposed for making personalized recommendations and these will be discussed in the followings. Collaborative filtering recommender systems (CF) CF became one of the most researched techniques of recommender systems. The basic idea of CF is to use the wisdom of the crowd for making recommendations. First of all, a user rates some given items in an implicit or explicit fashion. Then, the recommender identifies the nearest neighbors whose tastes are similar to those of a given user and recommends items that the nearest neighbors have liked (Ekstrand et al. 2011). CF is usually implemented on the basis of the following approaches: user-based (Asanov 2011), item-based (Sarwar et al. 2001), model-based approaches (Koren et al. 2009), and matrix factorization (Bokde et al. 2015). Content-based recommender systems (CB) These systems can make a personalized recommendation by exploiting information about available item descriptions (e.g., genre and director of movies) and user profiles describing what the users like. The main task of a CB system is to analyze the information regarding user preferences and item descriptions consumed by the user, and then recommend items based on this information. Research in this area primarily focused on recommending items with textual content, such as web-pages (Pazzani et al. 1996), books (Mooney and Roy 2000), and documents (Lang 1995). There are different approaches applied to make recommendations to users, such as Information Retrieval (Balabanović and Shoham 1997) or Machine Learning algorithms (Mooney and Roy 2000). Knowledge-based recommender systems (KBS) KBS are recognized as a solution for tackling some problems generated by classical approaches (e.g., ramp-up problems (Burke 2000)). Moreover, these systems are especially useful in domains where the number of available item ratings is very low (e.g., apartments, financial services) or when users want to define their requirements explicitly (e.g., "the color of the car should be white"). There are two main approaches for developing knowledge-based recommender systems: case-based recommendation (Bridge et al. 2005) and constraint-based recommendation (Felfernig and Burke 2008). In addition, critiquing-based recommendation is considered as a variant of case-based recommendation. This approach uses users' preferences to recommend specific items, and then elicits users' feedback in the form of critiques for the purpose of improving the recommendation accuracy (Burke 2000). There are four basic steps in a knowledge-based recommendation setting: Requirement specification: Users can interact with a recommender system for specifying their requirements. Repair of inconsistent requirements: If the recommender can not find a solution, it suggests a set of repair actions, i.e., it proposes alternatives to user requirements ensuring the identification of a recommendation (Felfernig et al. 2011). Presentation of results: A set of alternatives is delivered to the user. These are usually presented as a ranked list according to the item utility for the user (Felfernig et al. 2006). Explanation: For each presented alternative, the user can activate a corresponding explanation to understand why a specific item has been recommended (Felfernig et al. 2006). 
Hybrid recommender systems (HRS)

HRS are based on the combination of the above-mentioned techniques. According to Ricci et al. (2010): "A hybrid system combining techniques A and B tries to use the advantages of A to fix the disadvantages of B". For instance, CF methods have to face the new-item problem, whereas CB approaches can tackle this problem because the prediction for new items is usually based on available descriptions of these items. Burke (2002) presents some hybrid approaches which combine both CF and CB, including weighted, switching, mixed, feature combination, cascade, feature augmentation, and meta-level.

Recommendation techniques for groups

Research on recommender systems as discussed in Section 2.1 only focuses on recommending items to individual users. However, in reality, there is a high probability of situations where recommender systems should support a group of users, for instance, a tourist package for a group of friends or a Christmas party destination for all colleagues in a company. In such situations, Group Recommender Systems (Masthoff 2011) are considered as an optimal solution. In this subsection, we present an overview of some basic aspects of group-based recommendation.

Aggregation strategies

The main problem that group recommender systems need to solve is how to aggregate preferences based on information about the interests of each individual. Masthoff (2011) presented many different strategies for merging individual user profiles into a group profile. These strategies can also be used for combining individual recommendations into group recommendations. The most widely used aggregation strategies for group recommendations are least misery (O'Connor et al. 2001), average (Ardissono et al. 2003), and multiplicative (Masthoff 2004).

Group formation

In group recommendation scenarios, group creation and group maintenance are important steps that should be addressed. Groups can be built intentionally by an explicit definition from the users (Smith et al. 1998) or unintentionally by an automatic identification from the system (McCarthy and Anagnost 1998). Within a group, roles of group members can be conferred differently according to their importance level within the group (Cantador and Castells 2012; Berkovsky and Freyne 2010). For instance, in a holiday planning scenario of a family, parents have more influence on choosing a tourism destination than children.

Group recommendation approaches

Group recommendations are mostly determined by using an aggregated model or an aggregated prediction (Jameson and Smyth 2007). The aggregated model generates predictions for a group on the basis of aggregating individual user preferences into a group profile. The group recommendation process can be executed in three steps: First, users with similar preferences will be classified in subgroups. Next, the available items will be ranked based on each subgroup preference. Finally, related items in subgroups are merged to get the ranking for the whole group. This approach was applied in some well-known systems, e.g., musicfx (McCarthy and Anagnost 1998) and intrigue (Ardissono et al. 2003), for the purpose of supporting a group of users to choose suitable alternatives. Aggregated prediction firstly computes the recommendation for each group member and then computes the intersection of individual recommendations to get the common recommendations for the whole group. For instance, polylens (O'Connor et al.
2001) generates a ranked list of movies for each group member by using a classic CF approach. After that, the individual ranked lists are merged according to the least misery strategy, i.e., the group's happiness is the minimum of the individual members' happiness scores.

After forming groups, discovering some constraints within a group is an important phase which helps a recommender to make group recommendations. For instance, in the scenario of recommending recipes to a group of family members, because of the seafood allergy of one family member, recipes including shrimp or sea-crab might not be recommended to the whole group. In addition, in group recommender systems, knowing the preferences of other group members can sometimes have an impact on the decisions of individual users. travel decision forum (Jameson 2004) provides an interaction environment which allows members to optionally view (or copy) the preferences already specified by other group members. The preference visibility helps users to save time and minimize conflicts generated in the decision making process (Jameson 2004). However, in some decision scenarios, the insight into the individual preferences of all group members can deteriorate the quality of the decision outcome (Stettinger et al. 2015). This issue is known as an anchoring effect (Adomavicius et al. 2011; Felfernig 2014), which is responsible for decisions biased by a shown reference value. In the context of group decision scenarios, the anchoring effect can be controlled by not completely disclosing the preferences of other group members in early stages of the decision process (Felfernig et al. 2012). In choicla (Stettinger et al. 2015), a user can only see the summary of all ratings given by other group members for a specific alternative after giving his/her own rating. Seeing only the summarized rating prevents users from statistical inferences, which could influence the quality of the decision process.

Until now, group recommendations are still a novel area compared to research on individual recommendations (Masthoff 2011). There are still open issues on group decision making which need to be resolved in future research, such as bundle recommendations, intelligent user interface design, group aggregation strategies for cold-start problems (Masthoff 2011), consensus achievement within group members, and counteracting decision biases in group decision processes (Felfernig et al. 2014a).

Food recommender systems

"Where should we go for lunch?" or "What should we eat for dinner?" are questions we have to answer very frequently. While many recommender systems only tried to match users' preferences in the music, movie, or book domains, recently they have also been applied in the food domain in order to give reliable answers to the above questions. For instance, RecipeKey is a food recommender system that filters recipes on the basis of considering favorite ingredients, existing food allergies, and item descriptions (e.g., meal type, cuisine, preparation time, etc.) chosen by users. In relation to present-day food consumption, it is noticeable that there has been an increase of lifestyle-related illnesses, such as diabetes and obesity, which are the cause of many chronic diseases (Robertson 2004). This problem can be improved by applying an appropriate diet (Knowler et al. 2002). In this context, food recommender systems are also investigated as a potential means to help people nourish themselves more healthily (Elsweiler et al. 2015).
It makes sense to utilize food recommender systems as a part of a strategy for changing the eating behaviour of users. In this case, food recommender systems not only learn users' preferences for ingredients and food styles, but also select healthy food by taking into account health problems, nutritional needs, and previous eating behaviors. As mentioned in Mika (2011), there are two types of food recommender systems. The first type (type 1) recommends healthier recipes or food items which are most similar to the ones the user liked in the past. The second type of recommender system (type 2) only recommends to users those items which have been identified beforehand by health care providers. In addition, in this section, we also discuss two other types of food recommender systems (type 3 and type 4) which consider other scenarios when making recommendations. Type 3 generates recommendations on the basis of considering both above criteria for the purpose of balancing between the food users like and the food users should consume. All these types of recommender systems are primarily designed for individual users. Type 4 represents group recommendations in which food items are consumed by groups of users rather than by individuals. These four types of food recommender systems will be made more explicit and will be discussed in more detail in the following subsections.

Type 1: Considering user preferences

In the healthy food domain, learning user tastes is recognized as a crucial pre-requisite step in order to suggest dishes that users will like. All research discussed in this subsection aims at recommending food items or menus to individual users on the basis of exploring user tastes. Most of them use popular recommendation techniques (Freyne and Berkovsky 2010; Svensson et al. 2000; El-Dosuky et al. 2012), and/or combine them with other techniques in order to improve the quality of the recommendation (Elahi et al. 2015; Kuo et al. 2012) (see Table 1).

First of all, we present a food recommender system (El-Dosuky et al. 2012) with a simple scenario which only recommends individual food items to users. The authors use the TF-IDF (Term Frequency-Inverse Document Frequency) term extraction method for creating the user profile and apply some computations for identifying the similarity between a recipe and the user profile. In addition, healthy and standard food databases, which have been extracted from the United States Department of Agriculture (USDA), are incorporated into the knowledge base. The knowledge base is a domain ontology consisting of classes, relationships, and instances of classes. For getting a recommendation, each user manually rates the food items of a specific category (e.g., fruits, vegetables, meat, etc.) as relevant or non-relevant for his/her interest. After that, the recommender will compute the similarity between the food items and the previously computed user profile. If the similarity value is higher than a predefined threshold, the food item is recommended, otherwise it gets ignored.

In other research, Freyne et al. (Freyne and Berkovsky 2010) use a CB algorithm to predict the rating value for a target recipe on the basis of exploiting the information of the corresponding ingredients included in this recipe. The prediction process includes the following steps:

Break down an unrated target recipe $r_t$ into ingredients $ingr_1, ingr_2, \ldots, ingr_n$.
Assign the rating value for each ingredient in the target recipe $r_t$ according to (1) as shown below.
Particularly, the rating value of user $u_a$ for a specific ingredient $ingr_i$ in the target recipe $r_t$ (i.e., $rat(u_a, ingr_i)$) is calculated by using the rating values of user $u_a$ for all other recipes $r_l$ which contain the ingredient $ingr_i$ (i.e., $rat(u_a, r_l)$). The value $l$ mentioned in (1) is the number of recipes containing $ingr_i$. $$\begin{array}{@{}rcl@{}} \text{rat}(u_{a}, ingr_{i}) = \frac{{\sum}_{l\;s.t.\; ingr_{i} \;\in \; r_{l}} rat(u_{a}, r_{l})}{l} \qquad (1) \end{array} $$ Predict the rating value of user $u_a$ for the target recipe $r_t$ (i.e., $pred(u_a, r_t)$) as the average of the rating values of all ingredients $ingr_1, \ldots, ingr_j$ included in this recipe (see (2)). $$\begin{array}{@{}rcl@{}} \text{pred}(u_{a}, r_{t}) = \frac{{\sum}_{j \; \in \; r_{t}} rat(u_{a}, ingr_{j})}{j} \qquad (2) \end{array} $$ Recipes with a high predicted rating value will be recommended to user $u_a$. An illustration of predicting the rating value for a target recipe is presented in the following example: Let us assume that $recipe_1$ is a recipe which has not been rated by user $u_a$. It includes 3 ingredients, i.e., $ingr_1$, $ingr_2$, and $ingr_3$. $ingr_1$ is included in $recipe_4$ and $recipe_2$, $ingr_2$ is included in $recipe_3$, and $ingr_3$ is included in $recipe_2$ and $recipe_3$. The rating values of user $u_a$ for $recipe_2$, $recipe_3$, and $recipe_4$ are 4, 2, and 5, respectively (see Fig. 1).
Figure 1. Predicting the rating value for a target recipe by using the CB algorithm proposed by Freyne and Berkovsky (2010)
According to (1), the rating values for the ingredients of $recipe_1$ are evaluated as follows: $$\begin{array}{@{}rcl@{}} \text{rat}(u_{a},ingr_{1}) &=& \frac{rat(u_{a}, recipe_{4}) + rat(u_{a}, recipe_{2})}{2} = \frac{5 + 4}{2} = 4.5\\ \text{rat}(u_{a},ingr_{2}) &=& rat(u_{a}, recipe_{3}) = 2\\ \text{rat}(u_{a},ingr_{3}) &=& \frac{rat(u_{a}, recipe_{2}) + rat(u_{a}, recipe_{3})}{2} = \frac{4+2}{2} = 3\\ \end{array} $$ The prediction value of $recipe_1$ for user $u_a$ is calculated by applying (2) as follows: $$\begin{array}{@{}rcl@{}} \text{pred}(u_{a},recipe_{1}) &=&\frac{rat(u_{a}, ingr_{1}) + rat(u_{a}, ingr_{2}) + rat(u_{a}, ingr_{3})}{3} = \frac{4.5 + 2 + 3}{3} \\ &=& 3.166 \end{array} $$ Recently, some new approaches have been incorporated into food recommender systems, such as using labels for different clusters of users (Svensson et al. 2000), active learning algorithms, and matrix factorization (Elahi et al. 2015). Particularly, in Svensson et al. (2000), the authors design an on-line food shop for the purpose of suggesting the kinds of food that should be purchased by users. Based on recipes which users have chosen before, user groups are labeled and named according to their content, for instance, "Meat lovers", "Vegetarians", "Spice lovers", etc. The recommended recipes are determined on the basis of three different characteristics chosen by users: user groups, food categories (e.g., fish, oriental, Italian, red meat, chicken), and ingredients (e.g., rice, spaghetti, curry, tomatoes). Users select recipes from the recommendation list and put them into a shopping basket. Then, all ingredients of the chosen recipes are automatically added to the list of items which is delivered to the user's doorstep. In addition, in order to enhance social interaction around recipes, some additional features (e.g., the average rating value or comments from other users) are added to each recommended recipe.
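To make the ingredient-based prediction of Freyne and Berkovsky (2010) concrete, the following short Python sketch reproduces the worked example above. The data structures and function names are our own illustration, not code from the original paper.

```python
# Minimal sketch of the ingredient-based content-based prediction,
# equations (1) and (2); the data follow the worked example above.

ratings = {"recipe2": 4, "recipe3": 2, "recipe4": 5}     # ratings of user u_a
recipe_ingredients = {                                    # known recipe compositions
    "recipe1": ["ingr1", "ingr2", "ingr3"],               # target (unrated) recipe
    "recipe2": ["ingr1", "ingr3"],
    "recipe3": ["ingr2", "ingr3"],
    "recipe4": ["ingr1"],
}

def ingredient_rating(ingredient):
    """Equation (1): average rating of the rated recipes containing the ingredient."""
    rated = [r for r in ratings if ingredient in recipe_ingredients[r]]
    return sum(ratings[r] for r in rated) / len(rated)

def predict(target_recipe):
    """Equation (2): average of the ingredient ratings of the target recipe."""
    ingredients = recipe_ingredients[target_recipe]
    return sum(ingredient_rating(i) for i in ingredients) / len(ingredients)

print(predict("recipe1"))  # -> 3.1666..., matching the worked example
```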
Elahi et al. (2015) propose a food recommender system based on an active learning algorithm and matrix factorization. This research provides users with a complete human-computer interaction cycle for the purpose of collecting long-term user preferences in terms of recipe ratings and tags. In addition, when requesting recommendations, users are required to provide short-term preferences referring to ingredients which they want to cook or to include in the meal. Then, the system utilizes both types of user preferences to make recommendations. The long-term preferences are exploited by a matrix factorization rating prediction model designed to consider both user tags and ratings. Each user and each recipe are modeled by vectors that represent their latent features. The rating value of a user for a specific item is estimated by computing the inner product of the user and item vectors. With the short-term preferences, the system filters recipes according to the current user preferences. The recipes with the highest rating values are recommended to the user. While most existing research in the food domain only focuses on making recommendations for food items or recipes, there is a need for users to plan menus by combining many recipes into complete meals. With this idea, Kuo et al. (2012) propose an intelligent menu planning mechanism which suggests a set of recipes by using a graph-based algorithm. First, an undirected recipe graph is constructed, where each node is a recipe possessing a set of ingredients, each edge represents the relationship between two recipes, and the edge weight represents the distance between two recipes (see Fig. 2). The weight of each edge connecting two different recipes describes the cost of a menu which includes these two recipes. The lower the weight, the higher the probability that the two recipes co-occur in a menu. For instance, in Fig. 2, the recipe "Italian Bread" has a co-occurrence relationship with five recipes, i.e., "Tiramisu", "Lasagne", "Mozzarella, Tomato, and Basil Salad", "Caesar Salad", and "Stuffed Shells". Among these five recipes, "Tiramisu" has the strongest co-occurrence relationship with "Italian Bread" since the weight of their edge is lowest (i.e., 0.11). In contrast, "Stuffed Shells" has the weakest relationship with "Italian Bread" because the weight of their edge is highest (i.e., 0.5).
Figure 2. An example of a recipe graph G for menu planning (Kuo et al. 2012). "Tomato, flour, basil" (ingredients shown with black borders) are the query ingredients. The recommended menu plan is the set of recipes {"Mozzarella, Tomato and Basil Salad", "Lasagna", "Italian Bread"} (nodes shown with black frames) for which the total menu cost is minimal (i.e., 0.23)
Figure 3. Reference values for nutritional intake, 2nd edition, 1st volume (2015), published by the German Nutrition Association, the Austrian Nutrition Association, and the Swiss Nutrition Association (DACH), Bonn
In addition, the cost of a menu is defined as the weighted sum of the edges of the minimum spanning tree on the induced sub-graph. From that, a menu plan is created by choosing a set of recipes which contains all query ingredients (i.e., ingredients requested by users) and whose menu cost is minimal. For instance, in Fig. 2, with the query ingredients {tomato, flour, basil}, we can find many different sets of recipes, for instance, {"Mozzarella, Tomato and Basil Salad", "Lasagna", "Italian Bread"}, {"Mozzarella, Tomato and Basil Salad", "Lasagne", "Almond cake"}, {"Mozzarella, Tomato and Basil Salad", "Italian Bread", "Spinach Salad"}, etc. However, the first set {"Mozzarella, Tomato and Basil Salad", "Lasagna", "Italian Bread"} will be recommended to users because its total menu cost is minimal (i.e., 0.23).
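The menu planning procedure of Kuo et al. (2012) can be sketched as a small brute-force search over recipe subsets. The recipe graph below (ingredient lists, edge weights, and the helper functions) is our own illustrative reconstruction, not data or code from the original paper; a real system would use a much larger graph and a more efficient search.

```python
from itertools import combinations

# Toy recipe graph loosely inspired by Fig. 2; ingredient sets and edge
# weights are invented for illustration, not the values used by Kuo et al.
ingredients = {
    "Italian Bread": {"flour"},
    "Lasagna": {"flour", "tomato", "cheese"},
    "Mozzarella, Tomato and Basil Salad": {"tomato", "basil", "cheese"},
    "Caesar Salad": {"lettuce", "cheese"},
}
edge_weight = {  # symmetric co-occurrence distances (lower = more likely together)
    frozenset({"Italian Bread", "Lasagna"}): 0.12,
    frozenset({"Italian Bread", "Mozzarella, Tomato and Basil Salad"}): 0.11,
    frozenset({"Lasagna", "Mozzarella, Tomato and Basil Salad"}): 0.2,
    frozenset({"Italian Bread", "Caesar Salad"}): 0.3,
    frozenset({"Lasagna", "Caesar Salad"}): 0.4,
    frozenset({"Mozzarella, Tomato and Basil Salad", "Caesar Salad"}): 0.35,
}

def mst_cost(recipes):
    """Weight of a minimum spanning tree of the induced sub-graph (Prim's algorithm)."""
    recipes = list(recipes)
    in_tree, cost = {recipes[0]}, 0.0
    while len(in_tree) < len(recipes):
        w, nxt = min((edge_weight[frozenset({a, b})], b)
                     for a in in_tree for b in recipes if b not in in_tree)
        cost += w
        in_tree.add(nxt)
    return cost

def plan_menu(query, max_size=3):
    """Cheapest recipe set (by MST cost) whose ingredients cover all query ingredients."""
    best = None
    for k in range(1, max_size + 1):
        for subset in combinations(ingredients, k):
            covered = set().union(*(ingredients[r] for r in subset))
            if query <= covered:
                c = mst_cost(subset)
                if best is None or c < best[0]:
                    best = (c, subset)
    return best

# -> (0.11, ('Italian Bread', 'Mozzarella, Tomato and Basil Salad')) with this toy data
print(plan_menu({"tomato", "flour", "basil"}))
```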
Type 2: Considering nutritional needs of users
Nowadays, unhealthy eating habits and imbalanced nutrition increase the likelihood of obesity and of other diet-related conditions such as diabetes and hypertension. As a treatment or preventive measure, nutritionists or dietitians usually recommend regular exercise and design individualized meal plans for their patients. Unfortunately, these nutrition experts are overloaded with too many patients to manually tailor an individualized meal plan for each user. This is where food recommender systems can be used as intelligent nutrition consultation systems. In this subsection, we provide a discussion of recommender systems that take nutritional needs into account (see Table 1). First, we discuss a simple recommendation scenario showing how menu items can be recommended to users on the basis of their nutritional needs as well as their health problems. In this context, a user enters some personal information (e.g., age, gender, occupation, physical activities, health problems, etc.). This information is the basis for selecting food items which best fit the user's nutritional needs. The following example illustrates this scenario. In a menu recommender system, we assume that there are 5 menus with corresponding information, e.g., ingredients, calories, and fat (see Table 2). A user $u_a$ enters the following information: Age: 52, Gender: male, Occupation: office worker, Physical activity: walking (10 minutes/day), Health problem: cardiovascular. For recommending appropriate menus to user $u_a$, the following steps should be performed:
Step 1: An energy table from DACH (Footnote 4) (see Fig. 3) is used to estimate the amount of energy (in kcal) which user $u_a$ should get per day. The daily calorie intake for each person is estimated according to age, gender, and PAL (Physical Activity Level) value. The PAL value is categorized into 3 types:
+ PAL = 1.4: used for people who have exclusively sedentary lifestyles (such as office workers or precision mechanics) with very little or no strenuous leisure activity.
+ PAL = 1.6: used for people who have mainly sedentary lifestyles, but who require additional energy for longer walking and standing activities, such as laboratory assistants, students, or production line workers.
+ PAL = 1.8: used for people whose work is mostly done walking or standing, for instance, sellers, waiters, mechanics, or artisans.
In this example, user $u_a$ is an office worker with very little physical activity (only 10 minutes/day of walking), which means that his PAL value belongs to the first type. By looking up age, gender, and physical activity in Fig. 3, we find that the daily calorie intake for $u_a$ is 2200 kcal.
Step 2: Filtering menus with an amount of calories lower than or equal to 2200 kcal/day.
Step 3: Ranking the filtered menus in ascending order of fat (since $u_a$ has a heart condition, less fatty menus will be shown to him first).
In Table 2, we can see that $menu_4$ will not be added to the recommendation list because its calorie content exceeds 2200 kcal. The list of recommended menus is ranked in ascending order of fat (see Table 3).
Table 2. A list of available menus with corresponding information
Table 3. A list of menus recommended to user $u_a$
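The three steps above are straightforward to express in code. The following Python sketch uses invented menu data and a deliberately simplified stand-in for the DACH energy table, so the numbers are only illustrative; only the 2200 kcal budget for this particular user profile is taken from the example.

```python
# Illustrative sketch of the three-step menu recommendation described above.
# Menu data and the daily_energy() lookup are toy values, not the actual
# DACH reference table or the menus of Table 2.

menus = [  # (name, calories in kcal, fat in g)
    ("menu1", 2100, 45),
    ("menu2", 1800, 60),
    ("menu3", 1500, 30),
    ("menu4", 2500, 40),   # exceeds the daily energy budget -> filtered out
    ("menu5", 2000, 25),
]

def daily_energy(age, gender, pal):
    """Very rough stand-in for the DACH energy table (kcal/day); assumption only."""
    base = 2300 if gender == "male" else 1800
    if age >= 51:
        base -= 100
    return base * pal / 1.4          # toy scaling; PAL 1.4 returns the base value

def recommend(age, gender, pal, cardiovascular=False):
    budget = daily_energy(age, gender, pal)            # Step 1: energy budget
    fitting = [m for m in menus if m[1] <= budget]     # Step 2: calorie filter
    if cardiovascular:                                 # Step 3: low-fat menus first
        fitting.sort(key=lambda m: m[2])
    return fitting

for name, kcal, fat in recommend(52, "male", pal=1.4, cardiovascular=True):
    print(name, kcal, fat)
```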
For the purpose of improving the health conditions of users, Ueta et al. (2011) propose a goal-oriented recipe recommendation approach which provides a list of dishes containing the right type of nutrient to treat a user's health problems. To do so, the user first enters her health problem in natural language, for instance, "I want to cure my acne". Next, the system analyzes the user's request and identifies the keywords describing the health problem (e.g., acne). The extracted noun is looked up in a co-occurrence database to find the nutrient that co-occurs with it most frequently. For instance, by searching for the noun acne in the co-occurrence database, pantothenic acid is found as a nutrient component which can be used for treating acne, because it co-occurs with "acne" more often than any other nutrient. Finally, the nutrients identified in the previous step are used to find the dishes which are closest to those nutrients in a food database. This food database includes two sub-databases, an ingredient nutrient database and a nutritional information database for recipes. The ingredient nutrient database contains information about the nutritional value of each ingredient. The nutritional information database includes recipe types and the amount of nutrients contained in each recipe. The ingredients in each recipe are identified and then their nutritional elements are calculated using the ingredient nutrient database. When recommending recipes to users, the system also considers the daily nutrient intake of users. These requirements vary according to the age and gender of users. In research related to dealing with malnutrition in the elderly, Aberg (2006) proposes a menu-planning tool which is required to take into account the following user-related information:
Dietary restrictions, such as allergenic ingredients.
Nutritional values, such as the amount of fat or protein contained in a recipe.
Preparation time of a meal.
Preparation difficulty of a meal.
Cost of the necessary ingredients for a meal.
The availability of ingredients for a meal.
The variety of meals in terms of used ingredients and meal category.
User food preferences, i.e., the rating of a user for a certain recipe.
To be able to consider all these requirements, the author applies a hybrid design combining CF, CB, and constraint-based recommendation. The CF recommendation uses the ratings to predict the user's feedback on unrated recipes. For the CB approach, the author uses an XML-based mark-up language to represent the needed information for the recipes in the database. A constraint-based recommendation approach, represented as a constraint satisfaction problem, is used to construct optimal meal plans. The constraint satisfaction problem is modeled with two different approaches: a parameter-based approach and a recipe-based approach. However, the author did not describe the recipe-based approach in detail. Therefore, in this paper, we solely discuss the parameter-based approach; the details of this approach are presented in Table 4. A prototype was developed to offer meal-plan recommendations to users for a certain time period. Users can switch between the top-5 meal plans and give ratings on recommended recipes or create special settings for a meal.
Table 4. The constraint satisfaction problem modeled with a parameter-based approach (Aberg 2006)
For demonstration purposes, we propose an example of a constraint satisfaction problem, similar to the parameter-based approach of Aberg (2006), to suggest a recipe that takes the user's preferences into account.
In this example, we assume that variables are used for representing the parameters of a recipe, such as time, cost, energy, protein, allergies, and disease, where: time (in minutes) is the preparation time of a recipe, cost (in euro) is the cost of a recipe, energy (in kcal) is the nutritional value of a recipe, protein (in %) is the percentage of protein contained in a recipe, ingredients represents the set of ingredients of a recipe, allergies represents the set of allergenic ingredients of the user, and disease represents the health problems of the user. Each variable has a corresponding domain definition, for instance, dom(time) = [1..60]. In addition, a constraint knowledge base CKB contains constraints that encode general knowledge about acceptable recipes. For instance, time < 60 denotes the fact that the preparation time of a recipe should be below 60 minutes. PREF is the set of user preferences, which should be consistent with CKB such that a corresponding solution can be identified.
V = {time, cost, ingredients, energy, protein, allergies, diseases}
D = {dom(time) = [1..60], dom(cost) = [1..100], dom(energy) = [1..3000], dom(protein) = [1..100], dom(allergies) = {milk, egg, peanut, seafood, wheat}, dom(diseases) = {diabetes, cardiovascular, parkinson, digestion, alzheimer, osteoarthritis, osteoporosis}, dom(ingredients) = {vegetables, shrimp, sea-crab, fish, pork, beef, chicken, spices, butter, cheese, fruits}}
CKB = {c1: time < 60, c2: cost < 100, c3: energy < 3000, c4: protein < 35%, c5: disease = cardiovascular ⇒ protein < 30%, c6: allergies = seafood ⇒ ingredients ≠ sea-crab}
PREF = {pref1: time < 30, pref2: cost < 50, pref3: energy = 2200, pref4: protein = 25%, pref5: allergies = seafood, pref6: disease = cardiovascular}
On the basis of the constraint satisfaction problem specified above, one solution can be determined for the user: {time = 25, cost = 40, ingredients = {vegetables, chicken, spices, fruits}, energy = 2200, protein = 25%}.
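A full constraint solver is beyond the scope of this example, but the essence of the selection can be sketched by filtering a small catalogue of candidate recipes against CKB and PREF. The candidate recipes below are invented purely for illustration.

```python
# Naive sketch of the constraint-based selection above: instead of a real
# constraint solver, we filter candidate recipes against the knowledge-base
# constraints (CKB) and the user preferences (PREF). Recipes are invented.

candidates = [
    {"name": "chicken_fruit_bowl", "time": 25, "cost": 40, "energy": 2200,
     "protein": 25, "ingredients": {"vegetables", "chicken", "spices", "fruits"}},
    {"name": "crab_linguine", "time": 20, "cost": 45, "energy": 2200,
     "protein": 28, "ingredients": {"sea-crab", "spices"}},
    {"name": "beef_stew", "time": 90, "cost": 30, "energy": 2500,
     "protein": 33, "ingredients": {"beef", "vegetables"}},
]

user = {"allergies": "seafood", "disease": "cardiovascular"}

def satisfies_ckb(r):
    return (r["time"] < 60 and r["cost"] < 100 and r["energy"] < 3000        # c1-c3
            and r["protein"] < 35                                            # c4
            and (user["disease"] != "cardiovascular" or r["protein"] < 30)   # c5
            and (user["allergies"] != "seafood"
                 or "sea-crab" not in r["ingredients"]))                      # c6

def satisfies_pref(r):
    return (r["time"] < 30 and r["cost"] < 50                                # pref1, pref2
            and r["energy"] == 2200 and r["protein"] == 25)                  # pref3, pref4

solutions = [r["name"] for r in candidates if satisfies_ckb(r) and satisfies_pref(r)]
print(solutions)  # -> ['chicken_fruit_bowl']
```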
Type 3: Balancing between user preferences and nutritional needs of users
Considering either user preferences or nutritional needs in an isolated fashion sometimes leads to sub-optimal recommendations of food items. For instance, if recommenders only take user preferences into account, then bad eating habits are also encouraged. Conversely, if only nutritional needs are considered, then the proposed food items will sometimes not be attractive to users. Therefore, considering both user preferences and nutritional needs seems to provide the best solution: users receive more relevant recommendations, become more interested, and are increasingly engaged in using such systems. We now discuss a simple recommendation scenario showing how a food recommender system can suggest menu items on the basis of considering both user preferences and nutritional needs. In this example, we assume the existence of a menu table as shown in Table 2. A user $u_a$ provides personal information as follows: Age: 52, Gender: male, Occupation: office worker, Physical activity: walking (10 minutes/day), Health problem: cardiovascular, Favorite ingredients: tomato. In this scenario, the recommender system considers both the ingredients preferred by user $u_a$ and further user-related information (e.g., age, gender, occupation, physical activity, and health problem). The list of recommended menus is created by performing the following steps:
Step 1: Estimating the daily amount of calories for user $u_a$ by looking up the energy table shown in Fig. 3. User $u_a$ is an office worker and has very little physical activity per day (only 10 minutes/day of walking), hence the daily energy intake of user $u_a$ is 2200 kcal.
Step 2: Filtering the menus from Table 2 which contain at most 2200 kcal and include the favorite ingredient "tomato".
Step 3: Ranking the filtered menus in ascending order of fat (because $u_a$ has cardiovascular disease, less fatty menus will be shown to him first).
After accomplishing these steps, two menus, i.e., $menu_3$ and $menu_5$, will be recommended to user $u_a$ (see Table 5).
Table 5. A list of menus recommended to user $u_a$ on the basis of considering his favorite ingredient (i.e., tomato) and his nutritional needs
Also for the purpose of balancing users' preferences and nutritional needs, Elsweiler et al. (2015) propose two approaches to integrate nutritional aspects into recommendations. The first approach figures out trade-offs between giving the user some foods she really likes and some foods which are really healthy for her. This approach is implemented using the following steps. First, a prediction algorithm estimates the top recipes for the user, i.e., a set of recipes with a predicted probability above a certain threshold. Next, the amount of calories and fat per gram is calculated for each recipe in the chosen set. Finally, meals with less fat or fewer calories per gram are chosen for the final recommendation. The second approach, instead of recommending individual meals, proposes complete meal plans, which are generated not only based on the user's food preferences but also conform to daily nutritional guidelines (Harvey and Elsweiler 2015). For making recommendations, the user provides information regarding his/her preferences by rating a number of recipes in the system on a 5-star rating scale. In addition, the "Recommender" also takes into account additional personal information, such as height, weight, age, daily activity level, and goal (lose, gain, or maintain weight), in order to calculate the nutritional needs. The nutritional requirements of users are calculated by using an updated version of the Harris-Benedict equation (Roza and Shizgal 1984). After that, the "Recommender" predicts ratings for unrated recipes and sends a ranked list of recipes with high ratings (e.g., 4 or 5 stars) to the "Planner". The "Planner" takes the top-n recipes from the ranked list and splits them into two separate sets: one for breakfasts and one for main meals. A full search is performed to find all combinations of these recipes in the sequence {Breakfast, Main meal, Main meal} which meet the target nutritional needs, as sketched below. For instance, {Muesli Breakfast Muffins, Catalan Chickpeas, Chicken Cacciatore} (Harvey and Elsweiler 2015) represents a complete menu recommended to users, where Muesli Breakfast Muffins is for breakfast, Catalan Chickpeas for lunch, and Chicken Cacciatore for dinner. Combinations with the same recipes cannot be repeated; for instance, $\{r_1, r_2, r_3\}$ and $\{r_1, r_3, r_2\}$ are considered as only one menu plan.
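A minimal sketch of this exhaustive planning step is shown below. The recipe names echo the example above, but the calorie values, the tolerance, and the restriction to energy only are our own simplifications; the actual planner of Harvey and Elsweiler (2015) checks full daily nutritional guidelines.

```python
from itertools import combinations

# Rough sketch of the "Planner" step: combine one breakfast with two distinct
# main meals and keep combinations whose total energy stays within a tolerance
# of the user's daily target. Calorie values are invented for illustration.

breakfasts = {"Muesli Breakfast Muffins": 450, "Porridge": 380}          # kcal
mains = {"Catalan Chickpeas": 700, "Chicken Cacciatore": 850,
         "Lentil Curry": 650, "Beef Lasagne": 1100}

def plan_day(target_kcal, tolerance=100):
    plans = []
    for breakfast, b_kcal in breakfasts.items():
        # combinations() ignores order, so {r1, r2} and {r2, r1} count once
        for (m1, k1), (m2, k2) in combinations(mains.items(), 2):
            total = b_kcal + k1 + k2
            if abs(total - target_kcal) <= tolerance:
                plans.append((breakfast, m1, m2, total))
    return plans

for plan in plan_day(target_kcal=2000):
    print(plan)
```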
Although the two approaches proposed above are helpful for supporting the trade-off between users' preferences and healthy foods, the suitability of combining separate dishes into a complete meal should be considered in more detail in order to produce an appealing meal plan (Elsweiler et al. 2015).
Type 4: Food recommender systems for groups
As mentioned above, in many real-world scenarios recipe and food consumption are good examples of a group activity, for instance, a birthday party with friends or daily meals with family members (Elahi et al. 2014). In these scenarios, recommendations should be tailored to the entire group in order to assure the maximum satisfaction of each member and of the group as a whole. CF is one of the most widely used recommendation techniques and is also applied in many group recommender systems (McCarthy et al. 2006; O'Connor et al. 2001). In the food domain, Berkovsky and Freyne (2010) investigate the applicability of two CF recommendation strategies for the purpose of discovering which strategy is most relevant when making CF recommendations for a group. The authors discuss the following two group-based recommendation strategies:
Aggregated models strategy. First, this strategy computes a rating $rat(f_a, r_i)$ for a family $f_a$ and recipe $r_i$ by aggregating the individual ratings $rat(u_x, r_i)$ of the family members $u_x \in f_a$ who rated recipe $r_i$ according to their relative weight $\omega(u_x, f_a)$ (see (3)). The authors add weights to the rating calculation process in order to allow some users in a family to have more influence on the group decision than others. For instance, parents have more influence on the group decision than children; therefore, the weights assigned to parents are higher than those of the children. The details of the weighting models will be presented in the next paragraph. $$\begin{array}{@{}rcl@{}} rat(f_{a}, r_{i}) = \frac{{\sum}_{x\in f_{a}}\omega(u_{x}, f_{a})rat(u_{x}, r_{i})}{{\sum}_{x\in f_{a}}\omega(u_{x}, f_{a})} \qquad (3) \end{array} $$ After that, CF is applied to the family model. Particularly, a prediction $pred(f_a, r_i)$ for the whole family $f_a$ and an unrated recipe $r_i$ is generated by computing the similarity degree $sim(f_a, f_b)$ between family $f_a$ and all other families $f_b \in F$, and then aggregating the ratings $rat(f_b, r_i)$ of those families for recipe $r_i$ according to the similarity degree $sim(f_a, f_b)$ (see (4)). $$\begin{array}{@{}rcl@{}} pred(f_{a}, r_{i}) = \frac{{\sum}_{f_{b}\in F}sim(f_{a}, f_{b})rat(f_{b}, r_{i})}{{\sum}_{f_{b} \in F}sim(f_{a}, f_{b})} \qquad (4) \end{array} $$
Aggregated predictions strategy. First, this strategy generates individual predictions $pred(u_x, r_i)$ for user $u_x$ and an unrated recipe $r_i$ by using the standard CF algorithm (see (5)). In this prediction, the degree of similarity $sim(u_x, u_y)$ between the target user $u_x$ and all other users $u_y \in U$ is calculated according to (6) (Freyne et al. 2011). Then, the individual ratings $rat(u_y, r_i)$ of the users who rated $r_i$ are aggregated according to the similarity degree $sim(u_x, u_y)$.
$$\begin{array}{@{}rcl@{}} pred(u_{x}, r_{i}) = \frac{{\sum}_{y \in U}sim(u_{x}, u_{y})rat(u_{y}, r_{i})}{{\sum}_{y \in U}sim(u_{x}, u_{y})} \qquad (5) \end{array} $$ $$\begin{array}{@{}rcl@{}} sim(u_{x}, u_{y}) = \frac{{\sum}_{i=1}^{k}(u_{x_{i}} - \overline{u_{x}})(u_{y_{i}} - \overline{u_{y}})}{\sqrt{{\sum}_{i=1}^{k}(u_{x_{i}} - \overline{u_{x}})^{2}}\sqrt{{\sum}_{i=1}^{k}(u_{y_{i}} - \overline{u_{y}})^{2}}} \qquad (6) \end{array} $$ where $k$ is the number of items rated by both user $u_x$ and user $u_y$. After that, to generate the prediction $pred(f_a, r_i)$ for the whole family $f_a$ and recipe $r_i$, the individual predictions $pred(u_x, r_i)$ of the family members $u_x \in f_a$ are aggregated according to their relative weight $\omega(u_x, f_a)$ (see (7)). $$\begin{array}{@{}rcl@{}} pred(f_{a}, r_{i}) = \frac{{\sum}_{x\in f_{a}}\omega(u_{x}, f_{a})pred(u_{x}, r_{i})}{{\sum}_{x \in f_{a}}\omega(u_{x}, f_{a})} \qquad (7) \end{array} $$ Both the aggregated models strategy and the aggregated predictions strategy recommend a list of recipes to the whole family by considering the task of recommending the top-k recipes, i.e., the k recipes having the highest predicted ratings. The evaluation results on MAE (Mean Absolute Error) show that the aggregated models strategy usually outperforms the aggregated predictions strategy (Berkovsky and Freyne 2010). This means that the individual models of users should first be aggregated into a group model, and this group model should then be used in the recommendation process.
Weighting models. Inspired by the idea of allowing some users to have more influence than others, the authors propose four different weighting models for aggregating the data of individual users. The first two models (the uniform model and the role-based model) assign pre-defined weights to users. Particularly, the uniform model uses the same weight for all group members. The role-based model weights users according to their role. For instance, two roles may be specified in a family party: organizer and family member. The weight of the organizer will be 2 because she is responsible for organizing the party as well as preparing the food, whereas the weights of the other family members are 1 because their influence on the decision is assumed to be smaller. The two other models weight users according to their interactions with the content: the third model weights users according to their activities across the entire community, where the activity of a user is estimated from the number of ratings (s)he has provided for items, and the family-log model weights users according to their activities in relation to the other family members.
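Both strategies ultimately rely on the same weighted aggregation step, which is easy to illustrate. In the sketch below the member names, weights, and rating values are invented, and the CF computations themselves (equations (4) and (5)) are omitted.

```python
# Minimal sketch of the aggregation steps of equations (3) and (7):
# given per-member ratings or per-member predictions and per-member weights,
# both reduce to the same weighted average. All numbers are illustrative.

def weighted_aggregate(values, weights):
    """Weighted average over the members who contributed a value."""
    members = [u for u in values if u in weights]
    total_weight = sum(weights[u] for u in members)
    return sum(weights[u] * values[u] for u in members) / total_weight

weights = {"mother": 2, "father": 2, "child": 1}        # e.g., role-based weighting

# Aggregated models strategy, eq. (3): build a family rating first ...
family_rating = weighted_aggregate({"mother": 4, "father": 5, "child": 2}, weights)
# ... then run standard CF between *family* profiles (eq. (4), not shown here).

# Aggregated predictions strategy, eq. (7): run CF per member first (eq. (5)),
# then aggregate the resulting individual predictions.
family_prediction = weighted_aggregate({"mother": 3.8, "father": 4.6, "child": 2.5}, weights)

print(round(family_rating, 2), round(family_prediction, 2))  # -> 4.0 3.86
```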
With the idea of combining individual user preferences into a group profile by using aggregation heuristics (Masthoff 2011) (e.g., Least Misery, Average, Most Pleasure, Group Distance, Ensemble, etc.), we discuss in this subsection a simple group recommendation scenario in the food domain to show how a group recommendation can be created. Suppose that in a recipe recommender system we have a group of four users (e.g., $user_{1..4}$) who rated 5 recipes (e.g., $recipe_{1..5}$) on a 5-star rating scale. We use the Least Misery strategy (Masthoff 2011) to aggregate the individual user preferences into a group profile. The Least Misery strategy makes sense in recipe decision scenarios since it helps to minimize the misery within a group, i.e., recipes which are not liked or cannot be consumed by at least one group member will not be recommended to the whole group. In our example, the group rating value for each recipe is the minimum of the ratings given by all group members (see Table 6). After that, the recipe having the highest group rating value will be recommended to the group (Cantador and Castells 2012). In this example, $recipe_1$ is recommended to the group because its group rating value is highest (i.e., 4).
Table 6. An example of using the Least Misery strategy to aggregate individual user preferences into a group profile. $recipe_1$ is recommended to the group because its group rating value is highest
Also for the purpose of supporting a group decision making process in a family, Elahi et al. (2014) propose a novel interactive environment for groups planning their meals through a conversational process based on critiquing (Chen and Pu 2012). The system consists of two components. The first one is a tagging and critiquing-based user interface. The second one is a utility function that takes into account the diet compliance and healthiness of the users. The utility of each meal is calculated on the basis of the meal time, the user rating, the diet plan, and the health situation of each group member. After that, the utility of each meal for the whole group is quantified by aggregating the individual utility scores of all group members. Based on the utility of each meal for the whole group, the system delivers a meal recommendation list for the group. Each group has a group leader (also called the cook) and participants who will attend the group meal. The cook does not necessarily select the recipe with the highest utility score; (s)he can accept or refuse recipes for various reasons (e.g., the unavailability of ingredients or insufficient cooking skills). The participants are allowed to criticize the meal which was chosen by the cook. This critiquing process is repeated until all members are satisfied. Until now, to the best of our knowledge, there has been only little research on food recommender systems for groups. Although the above-mentioned study (Elahi et al. 2014) proposes a new interactive mechanism for groups in the food domain, it also exposes a lot of issues that still have to be tackled in terms of group decision making, such as bundle recommendation, fast consensus in a group, the time of preference visibility, etc. Figure 4 illustrates the user interfaces of the choicla (Footnote 5) group decision support environment (Stettinger 2014), which can be applied as a potential solution for supporting group decision making processes in the food domain. choicla can support a group of friends in choosing a menu for a Christmas party in an asynchronous fashion. That means all group members can join the decision making process without being on-line together at the same time. In this scenario, one member creates a decision (e.g., Christmas party) and enters some menus into the decision. Each menu is described by a name, a photo, and a description. While joining a decision, each group member is able to invite other members to participate in this decision. Invited members give their preferences by rating the proposed menus (e.g., using thumbs up and thumbs down) and can discuss with each other by using the "comment" functionality. The rating values from the group members are aggregated into group preferences using group decision heuristics (e.g., average, least misery, most pleasure, etc.) (Masthoff 2011) in order to propose a menu for the whole group.
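Such aggregation heuristics are simple to state in code. The sketch below applies Least Misery, Average, and Most Pleasure to an invented rating matrix in the spirit of Table 6; the actual ratings of the example are not reproduced here.

```python
# Toy illustration of three common aggregation heuristics (Masthoff 2011).
# The rating matrix is invented (4 users, 5 recipes, 5-star scale) and only
# loosely mirrors Table 6; under Least Misery, recipe1 wins with a score of 4,
# as in the example above.

ratings = {               # recipe -> ratings of user1..user4
    "recipe1": [4, 5, 4, 5],
    "recipe2": [5, 5, 1, 4],
    "recipe3": [3, 4, 4, 2],
    "recipe4": [2, 5, 5, 5],
    "recipe5": [4, 3, 3, 3],
}

heuristics = {
    "least_misery": min,                       # group score = unhappiest member
    "average": lambda r: sum(r) / len(r),      # group score = mean rating
    "most_pleasure": max,                      # group score = happiest member
}

for name, aggregate in heuristics.items():
    group_scores = {recipe: aggregate(r) for recipe, r in ratings.items()}
    winner = max(group_scores, key=group_scores.get)
    print(f"{name}: recommend {winner} ({group_scores[winner]})")
```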
To avoid the anchoring effects bias (Felfernig 2014), the group suggestion is solely shown to a group member after he/she saved the ratings for menus. Having said that, choicla is the potential application for group decision processes in the food domain. However, the future version of choicla should integrate a complete group decision process for the food domain, which takes into account additional information of all group members (such as health situations, allergies, nutritional consumption, cooking skills, the availability of ingredients, etc.), in order to recommend healthy food to the whole group. Screenshots of the choicla group decision support environment (iOS version). Figure 4a shows a list of different group decisions created by users. Users can rate alternatives by using the user interface shown in Fig. 4b. The suggestion for the whole group is shown in the "Suggestion" tab (Fig. 4c). The alternative enclosed with the medal icon is the suggested alternative for the whole group. For instance, "Turkish menu" is the menu chosen by choicla for recommending to the whole group Research challenges Existing research on food recommender systems plays a crucial role in supporting users to choose a diet which suits interests and health conditions. These studies exploit the information regarding user profiles and recipes in order to generate food recommendations. It has been recognized that the quality of recommendations is strongly influenced by the adequacy and accuracy of user information as well as nutritional information of food. However, recent studies have not provided detailed discussions on this issue. In addition, although some papers (e.g., Ueta et al. 2011, Aberg 2006) propose food recommendations to tackle health problems, suggestions regarding changing eating behaviors, which are the premise to maintain a healthy lifestyle, are still missing. Explanations could help users more trust in recommendations and encourage them to follow good eating habits, however the inclusion of explanations into food recommender systems has not received the interests from researchers. Besides, research on food recommender systems mainly focuses on single-user scenarios rather than on group scenarios. Until now, research on group recommender systems in the healthy food domain is very limited. Berkovsky and Freyne (2010) is one of the studies which proposes some aggregation strategies to generate food recommendations for groups of users. However, there still exist some open issues which should be taken into account within the scope of future work, such as achieving fast consensus within the group or fostering the fairness among group members. In this section, we will discuss research challenges in food recommender systems and propose some potential solutions. A summary of open issues is presented in Table 7. Table 7 A summary of research challenges in food recommender systems and proposed solutions Challenges regarding user information The uncertainty of nutritional information from users: In order to make recommendations, the system needs to collect nutritional needs, ratings for food items/recipes and information of previous meals from users (Mika 2011). Most of the information is only provided through continuous interactions with users. However, in reality, recording nutritional intake from users can not avoid faults because users usually forget or give wrong information about the foods they have consumed (Mika 2011). 
Although some systems were proposed to tackle with these problems, for instance, foodlog (Aizawa et al. 2010), they are not able to give the accurate information about the consumed meals, even though they can estimate the nutritional balance among different kinds of food in a meal. Collecting user rating data: Food recommender systems need information about users' preferences to recommend similar food items ((Van Pinxteren et al. 2011; Mika 2011)). This information can be gathered by asking users to rate foods/recipes. However, it is not convenient if the system asks users to rate too many items. Hence, a challenge entailed is "How to collect enough users' ratings while saving their effort?" Freyne and Berkovsky (2010). Besides, similar to keeping reporting food consumption (as mentioned above), persuading users to keep rating dishes also becomes another challenge for food recommender systems (Mika 2011). Challenges regarding recommendation algorithms As mentioned in Mika (2011), in order to calculate nutritional recommendations for users, any algorithm needs the following information: User information (e.g., likes, dislikes, food consumption, or nutritional needs): Similar to recommender systems in other domains, food recommender systems also face with the cold-start problem when the system is used in the first time (Mika 2011). This problem can be surmounted by using information about users' previous meals to calculate similarity and then recommend new recipes to users (Van Pinxteren et al. 2011). However, this solution requires many user efforts and abates the desire of system usage. Recipe databases: Mika (2011) discussed two challenges that need to be solved: How many recipes the system should have? The quantity of gathered recipes should be large enough to accommodate the preferences of many users and vary the recommended recipes while still minimize the time for making recommendations. This is a tricky problem when the system tries to balance between the variety of recommendations and system response time. Hoxmeier et al. (Hoxmeier and Manager 2000) point out that long response times triggers user dissatisfaction which further decreases continuous use of the system. How to gather accurate nutritional information of recipes? It is observed that with the same food item, if we use different ways to cook it then we will get different nutritional values from it Mika (2011). Therefore, it is very difficult to ensure that whether gathered nutritional tables for food items are accurate because when comparing different nutritional value table of foods, sometimes it returns varying values for the same food items (Mika 2011). For instance, the nutritional value of celery in "a salad recipe" is different from the nutritional value of itself "in a fried recipe", since cooking with high temperature make celery lose a big amount of essential oil. It means that the amount of essential oil of celery in the "fried recipe" could be lower than in the "salad recipe". A set of constraints or rules: Considering more constraints and rules in the recommendation process will improve the quality of recommendations (Mika 2011). For instance, with a user who has heart disease, the system should recommend menus with less fat and salt. Moreover, it is very necessary to detect the conflicts among the constraints or rules which prevent the recommendation algorithms from finding a solution. 
However, with the large database (e.g., thousands of foods/recipes), checking constraints/rules in the database brings negative effects for system performance (Mika 2011). In addition, food recommender systems should take into account constraints with regard to the availability of ingredients in the households for the purpose of helping users to save money and prevent the food waste behavior. The challenge here is how to propose food which meets health situations and nutritional needs of users, as well as taking advantages of the ingredients that are currently in the fridge. In this scenario, recommender systems seems to require many efforts from users because users have to report the consumption of all ingredients regularly and this can prevent users from using the system permanently. Challenges regarding changing eating behavior of users Nowadays, many people are suffering health problems because of inappropriate eating habits (Snooks 2009). For instance, some people eat too much food compared to their physical activity level and gradually become obese. Whereas, others (e.g., the elderly, the dieters) restrict extremely nutrition intake and this leads to malnutrition. Therefore, one of the main functions of food recommender systems is understanding users' eating behaviors and persuading them to change eating behaviors in positive ways. However, this is a big challenge for food recommender systems because eating is a lifelong behavior which is influenced by many factors, especially psychological factors. Hence, food recommender systems should integrate health psychology theory in order to stimulate users to comply healthy eating behaviors. The first approach can be used by applying one simple change at a specific time until the user behavior becomes habitual (Snooks 2009). Another approach can be applied for food recommender systems by comparing to the ideal nutrient. Users can find the structure of ideal diet according to the age and physical activity level from reliable resources (e.g., USDA, DACH) and then compare what food they ate to what is recommended (Snooks 2009). The comparison approach is also proposed in Mankoff et al. (2002) in order to provide users potential dietary changes. Challenges regarding explanations Explanations play an important role in recommender systems since they increase the trust of users in decision outcomes (Tintarev and Masthoff 2007). In the healthy food domain, explanations are even more necessary since they not only increase the trust in recommendations but also stimulate users to consume healthy foods or change their eating behaviors. For this purpose, it makes sense that explanations of food recommender systems clarify how a decision outcome is created (Elahi et al. 2014). Besides, a detailed description of food items (e.g., nutritional value table for a recipe) needs to be included in a way that emphasizes the healthiness of a specific food for users. Challenges regarding group decision making As mentioned in previous sections of the paper, recommending recipes/food items usually involves groups rather than individual users. However, there is a low amount of research on food recommender systems for groups. Therefore, it is still an open topic which needs to be analyzed in future research. Bundle recommendations: Group recommender systems usually attach the requirements/preferences of different users into group recommendation. This is the crucial idea discussed in many related studies (Masthoff 2011; O'Connor et al. 2001; Berkovsky and Freyne 2010). 
In the food domain, the aggregation process raises more challenges when users want to get recommendations for a complete meal with the combination of many recipes/food or a food schedule for more than one day (e.g., foods for next week). This issue is known as bundle recommendation which is a new research branch of recommender systems. The idea here is recommending a sequence of items instead of separated ones. In the healthy food domain, recommending a complete meal is even more complicated because the system has to consider not only preferences of group members but also other aspects, e.g., the variety of meals, weather and season (Van Pinxteren et al. 2011), the healthiness of recipes, health problems or daily nutrition needs of group members, etc. On the other hand, the recommendation of bundles has to assure the fairness among users within the group. It means that negotiation and argumentation mechanisms have to be developed in order to support group members to express acceptable trade-offs (Felfernig et al. 2014b). For instance, in a meal plan for a week, the preferences of users who were discriminated in previous meals should have a higher emphasis in the upcoming meals. Achieving fast consensus in groups: In group recommender systems, although different aggregation approaches have been applied to generate group recommendations, such processes do not ensure that the recommended items reflect a high agreement level among group members (Castro et al. 2015). In this context, achieving consensus helps to bring individual preferences closer to each other before delivering group recommendations. However, further issues need to be considered in order to accelerate the achievement of consensus in groups. One of the promising solutions is enriching user interfaces which support basic negotiation mechanisms among users. User interfaces are designed such that all members can share their preferences within the group (Thuy Ngoc Nguyen 2017). Knowing the preferences of each other helps the group to reach a consensus quickly. An example thereof is the following: user A prefers cheese, whereas user B is interested in beef. There is a probability to achieve a consensus between user A and user B is that user A will eat recipes with beef as long as they include cheese. How to represent the current decision situation is considered as an issue of future work. In this paper, we provide an overview of recommender systems in the healthy food domain on the basis of discussing four different types of food recommender systems. The first three types present some existing studies in the healthy food domain, which mainly focus on tailoring recommendations to individuals, by considering the preferences and/or nutritional needs of users. Meanwhile, recent studies presented in the fourth type target at consulting healthy food items in group scenarios. Popular recommendation approaches (e.g., Collaborative Filtering recommendation, Content-based recommendation, Constraint-based recommendation) are used in many food recommender systems. Besides, hybrid approaches are also employed in order to improve the recommender's performance. Although being considered in different contexts, in general all food recommender systems play a vital role in providing food items meeting preferences and adequate nutritional needs of users as well as persuading them to comply positive eating behaviors. 
Some challenges regarding user information, recommendation algorithms, changing eating behaviors, explanations provision, and group decision making are discussed as issues for further work. http://www.who.int http://www.recipekey.com https://ndb.nal.usda.gov/ www.sge-ssn.ch www.choicla.com Aberg, J. (2006). Dealing With malnutrition: A meal planning system for elderly. Adomavicius, G., Bockstedt, J., Curley, S., & Zhang, J. (2011). Recommender systems, consumer preferences, and anchoring effects, 811, 35–42. Aizawa, K., de Silva, G.C., Ogawa, M., & Sato, Y. (2010). Food log by snapping and processing images, 2010 16th international conference on virtual systems and multimedia, IEEE (pp. 71–74). Ardissono, L., Goy, A., Petrone, G., Segnan, M., & Torasso, P. (2003). Intrigue: Personalized recommendation of tourist attractions for desktop and handset devices. Applied Artificial Intelligence, 17(8), 687–714. Asanov, D. (2011). Algorithms and methods in recommender systems Berlin Institute of Technology. Germany: Berlin. Balabanović, M., & Shoham, Y. (1997). Fab: Content-based, collaborative recommendation. Communications of the ACM, 40(3), 66–72. Berkovsky, S., & Freyne, J. (2010). Group-based recipe recommendations: Analysis of data aggregation strategies, Proceedings of the fourth ACM conference on recommender systems, RecSys '10 (pp. 111–118). Bokde, D.K., Girase, S., & Mukhopadhyay, D. (2015). Role of matrix factorization model in collaborative filtering algorithm: A survey. CoRR abs/1503.07475. Bridge, D., Göker, M.H., McGinty, L., & Smyth, B. (2005). Case-based recommender systems. Knowl Eng Review, 20(3), 315–320. Burke, R. (2000). Knowledge-based recommender systems, Encyclopedia of library and information systems, vol 69, Marcel Dekker (pp. 180–200). Burke, R. (2002). Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12(4), 331–370. Article MATH Google Scholar Burke, R., Felfernig, A., & Göker, M.H. (2011). Recommender systems: an overview. AI Magazine, 32, 13–18. Cantador, I., & Castells, P. (2012). Group recommender systems: New perspectives in the social web, Recommender systems for the social Web, intelligent systems reference library, vol 32, Springer Berlin Heidelberg (pp. 139–157). Castro, J., Quesada, F.J., Palomares, I., & Martínez-López, L. (2015). A consensus-driven group recommender system. International Journal of Intelligent Systems, 30(8), 887–906. Chen, L., & Pu, P. (2012). Critiquing-based recommenders: survey and emerging trends. User Model User-Adapt Interact, 22(1-2), 125–150. Crossen, A., Budzik, J., & Hammond, K.J. (2002). Flytrap: Intelligent group music recommendation, Proceedings of the 7th international conference on intelligent user interfaces, ACM, New York, NY, USA, IUI '02 (pp. 184–185). Ekstrand, M.D., Riedl, J.T., & Konstan, J.A. (2011). Collaborative filtering recommender systems. Found Trends Hum-Comput Interact, 4(2), 81–173. El-Dosuky, M.A., Rashad, M.Z., Hamza, T.T., & El-Bassiouny, A.H. (2012). Food recommendation using ontology and heuristics, AMLTA, Springer, communications in computer and information science, (Vol. 322 pp. 423–429). Elahi, M., Ge, M., Ricci, F., Massimo, D., & Berkovsky, S. (2014). Interactive food recommendation for groups, RECSYS, Vol. 1247. Elahi, M., Ge, M., Ricci, F., Fernández-Tobías, I., Berkovsky, S., & Massimo, D. (2015). Interaction design in a mobile food recommender system, IntRS@recsys, CEUR-WS.org, CEUR workshop proceedings, (Vol. 1438 pp. 49–52). 
Elsweiler, D., Harvey, M., Ludwig, B., & Said, A. (2015). Bringing the healthy into food recommenders. In Ge, M., & Ricci, F. (Eds.), DMRS, CEUR-WS.org, CEUR workshop proceedings, (Vol. 1533 pp. 33–36). Felfernig, A. (2014). Biases in decision making, Proceedings of the first international workshop on decision making and recommender systems (DMRS2014), Bolzano, Italy, September 18-19, 2014., vol 1278, CEUR Proceedings (pp. 32–37). Felfernig, A., & Burke, R. (2008). Constraint-based recommender systems: Technologies and research issues, Proceedings of the 10th international conference on electronic commerce, ACM, New York, NY, USA, ICEC '08 (pp. 3:1–3:10). Felfernig, A., Teppan, E., & Gula, B. (2006). Knowledge-based recommender technologies for marketing and sales. Pattern Recognition and Artificial Intelligence, 21 (2), 1–22. Felfernig, A., Friedrich, G., Jannach, D., & Zanker, M. (2011). Recommender systems handbook, Springer US, chapter: Developing Constraint-based Recommenders, 187–215. Felfernig, A., Zehentner, C., Ninaus, G., Grabner, H., Maalej, W., Pagano, D., Weninger, L., & Reinfrank, F. (2012). Advances in user modeling, Springer Berlin Heidelberg, chapter: Group Decision Support for Requirements Negotiation, 105–116. Felfernig, A., Hotz, L., Bagley, C., & Tiihonen, J. (2014a). Knowledge-based Configuration: From Research to Business Cases, 1st edn. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. Felfernig, A., Stettinger, M., Ninaus, G., Jeran, M., Reiterer, S., Falkner, A.A., Leitner, G., & Tiihonen, J. (2014b). Towards open configuration, Proceedings of the 16th international configuration workshop, Novi Sad, Serbia, September 25-26, 2014 (pp. 89–94). Freyne, J., & Berkovsky, S. (2010). Intelligent food planning: personalized recipe recommendation, Proceedings of the 15th international conference on Intelligent user interfaces, ACM, New York, NY, USA, IUI '10 (pp. 321–324). Freyne, J., Berkovsky, S., & Smith, G. (2011). Recipe recommendation: Accuracy and reasoning, 19th international conference, UMAP 2011, Girona, Spain, July 11-15, 2011., Springer Berlin Heidelberg (pp. 99–110). Harvey, M., & Elsweiler, D. (2015). Automated recommendation of healthy, personalised meal plans, Proceedings of the 9th ACM conference on recommender systems, ACM, New York, NY, USA, RecSys '15 (pp. 327–328). Hoxmeier, J.A.D.P., & Manager, C.D. (2000). System response time and user satisfaction: An experimental study of browser-based applications, Proceedings of the association of information systems americas conference (pp. 10–13). Jameson, A. (2004). More than the sum of its members: Challenges for group recommender systems, Proceedings of the working conference on advanced visual interfaces, ACM, New York, NY, USA, AVI '04 (pp. 48–54). Jameson, A., & Smyth, B. (2007). The adaptive web. chap Recommendation to Groups, 596–627. Knowler, W., Barrett-Connor, E., Fowler, S., Hamman, R., Lachin, J., Walker, E., & Nathan, D. (2002). Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. New England Journal of Medicine, 346(6), 393–403. Koren, Y., Bell, R., & Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8), 30–37. Kuo, F.F., Li, C.T., Shan, M.K., & Lee, S.Y. (2012). Intelligent menu planning: Recommending set of recipes by ingredients, Proceedings of the ACM multimedia 2012 workshop on multimedia for cooking and eating activities, ACM, New York, NY, USA, CEA '12 (pp. 1–6). Lang, K. (1995). 
Probabilistic Proofs of Analytic Facts

What are some interesting examples of probabilistic reasoning to establish results that would traditionally be considered analysis? What I mean by "probabilistic reasoning" is that the approach should be motivated by the sort of intuition one gains from a study of probability, e.g. games, information, behavior of random walks and other processes. This is very vague, but hopefully some of you will know what I mean (and perhaps have a better description for what this intuition is).

I'll give one example that comes to mind, which I found quite inspiring when I worked through the details. Every Lipschitz function $f$ (in this case, $f \colon [0,1] \to \mathbb{R}$) is absolutely continuous, and thus is differentiable almost everywhere. We can use a probabilistic argument to actually construct a version of its derivative. One begins by considering the standard dyadic decompositions of $[0,1)$, which give us, for each natural $n$, a partition of $[0,1)$ into $2^{n-1}$ half-open intervals of width $1/2^{n-1}$. We define a filtration by letting $\mathcal{F}_n$ be the sigma-algebra generated by the disjoint sets in our $n$th dyadic decomposition. So e.g. $\mathcal{F}_2$ is generated by $\{[0,1/2), [1/2,1)\}$. We can then define a sequence of random variables $Y_n(x) = 2^{n-1}\left(f(r_n(x)) - f(l_n(x))\right)$, where $l_n(x)$ and $r_n(x)$ are defined to be the left and right endpoints of whatever interval contains $x$ in our $n$th dyadic decomposition (for $x \in [0,1)$). So basically we are approximating the derivative by difference quotients over the dyadic intervals. The sequence $Y_n$ is in fact a martingale with respect to $\mathcal{F}_n$, and the Lipschitz condition on $f$ makes this a bounded martingale. So the martingale convergence theorem applies and we have that $Y_n$ converges almost everywhere to some $Y$. Straightforward computations yield that we indeed have $f(b) - f(a) = \int_a^b Y$.

What I really like about this is that once you get the idea, the rest sort of works itself out. When I came across the result it was the first time I had thought of dyadic decompositions as generating a filtration, but it seems like a really natural idea. It seems much more structured than just the vague idea of "approximation", since e.g. the martingale condition controls the sort of refinement the next approximating term must yield over its predecessor. And although we could have achieved the same result easily by a traditional argument, I find it interesting to see the result from multiple points of view. So that's really my goal here.

pr.probability ca.classical-analysis-and-odes big-list

Erik Davis

$\begingroup$ One example that comes to mind is the relationship between Brownian motion and harmonic functions. Is that the kind of thing you're thinking of? $\endgroup$ – Qiaochu Yuan Dec 18 '09 at 1:08

$\begingroup$ The book The Probabilistic Method, by Alon and Spencer, includes probability-inspired proofs of results that don't belong to probability in sections called "The Probabilistic Lens", which are inserted between the various chapters. I don't have my copy at hand right now, and the only analytic one I remember being there is Bernstein's proof of the Weierstrass approximation theorem, which Harald Hanche-Olsen has already mentioned. $\endgroup$ – Michael Lugo Dec 18 '09 at 1:53

$\begingroup$ I don't mean to be overly critical (really!), but isn't it immediate from the definition that a Lipschitz function is absolutely continuous?
It's just a replay of the argument that a linear function with slope M is continuous: take delta = epsilon/M. I think HHO's example below is so good that it should be the exemplar of the question, perhaps. $\endgroup$ – Pete L. Clark Dec 18 '09 at 2:51 $\begingroup$ @Yemon: Absolute continuity does not mention differentiability. It turns out, but is comparatively much deeper, that an AC function is differentiable almost everywhere. I still think that Lipschitz implies AC is done just by taking delta = epsilon/(Lipschitz constant). $\endgroup$ – Pete L. Clark Dec 18 '09 at 7:33 $\begingroup$ Shouldn't this be community wiki? $\endgroup$ – Harry Gindi Dec 27 '09 at 0:40 One nice example is Bernstein's proof of the Weierstrass theorem. This proof analyses a simple game: Let $f$ be a continuous function on $[0,1]$, and run $n$ independent yes/no experiments in which the "yes" probability is $x$. Pay the gambler $f(m/n)$ if the answer "yes" comes up $m$ times. The gambler's expected gain from this is, of course, $$p_n(x)=\sum_{k=0}^n f(k/n)\binom{n}{k}x^k(1-x)^{n-k}$$ (known as the Bernstein polynomial). The analysis shows that $p_n(x)\to f(x)$ uniformly. S. N. Bernstein, A demonstration of the Weierstrass theorem based on the theory of probability, first published (in French) in 1912. It has been reprinted in Math. Scientist 29 (2004) 127–128 (MR2102260). Harald Hanche-OlsenHarald Hanche-Olsen $\begingroup$ This is freaking amazing!! $\endgroup$ – Pietro KC May 2 '10 at 20:06 $\begingroup$ Apologies for pushing this to the front page, but I spotted a misprint that I just couldn't let stand. $\endgroup$ – Harald Hanche-Olsen Aug 30 '16 at 14:34 $\begingroup$ Heh. If only we could so easily correct misprints that have really been in a printed journal for 7 years. $\endgroup$ – Lee Mosher Sep 2 '16 at 17:07 $\begingroup$ It is also reported in Sinai book Probability Theory - an Introductory Course, also available online $\endgroup$ – Pietro Majer Jun 14 '19 at 21:15 Question: Given $n$ points in Euclidean space (which we might as well take to be $\ell_2^n$), what is the smallest $k=k(n)$ so that these points can be moved into $k$-dimensional Euclidean space via a transformation which expands or contracts all pairwise distances by a factor of at most $1+\epsilon$? Answer: $k(n)\le C \ {\log (n+1) \over {\epsilon^2}}$. Proof: A (suitably normalized) random rank $k(n)$ orthogonal projection works. Nowadays this is called the Johnson-Lindenstrauss Lemma. All known proofs in a form this strong use random linear operators. Bill JohnsonBill Johnson $\begingroup$ Nice! One needs to rescale by a factor $\sqrt{\frac{n}{k}}$ after applying the projection. $\endgroup$ – Mizar Aug 9 '15 at 17:25 See also http://en.wikipedia.org/wiki/Probabilistic_proofs_of_non-probabilistic_theorems Boris TsirelsonBoris Tsirelson While I've forgotten most of the necessary technical details (ah for the days when I knew more about probability and less about homological algebra), one striking example is the exploitation of conformal invariance of planar Brownian motion to reprove results in complex analysis. See Burgess Davis. Brownian Motion and Analytic Functions, Ann. Probab. Volume 7, Number 6 (1979), 913-932. which in particular has a probabilistic proof of the little Picard theorem. (I first learned of Davis' proof from a sketch in Körner's wonderful book Fourier Analysis, which I'd recommend for students as an antidote to the inevitable tedium and occasional narrowness of a first & second course in analysis.) 
Yemon ChoiYemon Choi 1) perimeter of planar sets with constant width I like the probabilistic proof that every set of constant width 1 has perimeter pi using Buffon's needle problem. See also the wikipedia article on Buffon's noodle problem. Another beautiful analytic (of a sort) theorem where probability plays an important role is regarding the overhang problem. The description of the problem and the solution is taken from the abstract of the paper "maximum overhang" by Mike Paterson, Yuval Peres, Mikkel Thorup, Peter Winkler and Uri Zwick: 2) Maximum overhang How far can a stack of $n$ identical blocks be made to hang over the edge of a table? The question dates back to at least the middle of the 19th century and the answer to it was widely believed to be of order $\log n$. Recently, Paterson and Zwick constructed $n$-block stacks with overhangs of order $n^{1/3}$, exponentially better than previously thought possible. We show here that order $n^{1/3}$ is indeed best possible, resolving the long-standing overhang problem up to a constant factor. Gil KalaiGil Kalai Here's something that's pretty neat: find a measurable subset $A$ of $[0,1]$ such that for any subinterval $I$ of $[0,1]$, the Lebesgue measure $\mu(A\cap I)$ has $0 < \mu(A\cap I) < \mu(I)$. There's an explicit construction of such a set in Rudin, who describes such sets as "well-distributed". Balint Virag (and maybe others) found a very slick probabilistic construction. Let $X_1, X_2, \ldots$ be i.i.d. coin flips, i.e. $X_1$ is $1$ with probability $1/2$ and $-1$ with probability $1/2$. Consider the (random) series $$S:=\sum_{n=1}^\infty X_n/n.\,\,\,$$ By the Kolmogorov three-series theorem, it converges almost surely. However, it's a simple exercise to see that for any $a$, the event $\{S > a\}$ has non-trivial measure: for $a>0$, there's a positive chance of the first $e^a$ terms of the series being positive, so the $e^a$-th partial sum is positive, and the tail is independent and positive or negative with equal probability, due to symmetry. For $a\leq 0$, it's trivial, again because of symmetry. A common way of realizing i.i.d. coin flips on the unit interval is as Rademacher functions: for $x\in[0,1]$, let ${b_n}$ be its binary expansion, and $X_n(x) = (-1)^{b_n}$. Realized this way, the random sum $S$ becomes an almost everywhere finite measurable function from $[0,1]$ to $\mathbb R$. It only takes a bit more work to see that the set $\{S>a\}$ is exactly a well-distributed set. Alex Bloemendal has written this up in a short note, but I'm not sure if he's published it anywhere. BigM Andrew StewartAndrew Stewart $\begingroup$ You shouldn't use $x$ for both the variable in $[0,1]$ and the number that you want $S$ greater than. $\endgroup$ – Robert Israel Feb 7 '12 at 20:47 There are probabilistic proofs of Atiyah-Singer or most anything else that can be done with a heat kernel. (Rogers & Williams is rife with probabilistic proofs of analytic facts [as well as the fundemental theorem of algebra], and more generally just about all of potential theory can be recast in terms of martingales a la Doob, as Qiaochu points out; surely there are many more examples.) Steve HuntsmanSteve Huntsman $\begingroup$ This is obviously a very old answer, but I see that you're still active on MO. Would you happen to have a reference for the probabilistic proof of Atiyah-Singer that you mention? Is it substantially different from the usual heat kernel approach? 
$\endgroup$ – Paul Siegel Feb 7 '12 at 19:44 $\begingroup$ amazon.com/Stochastic-Analysis-Manifolds-Graduate-Mathematics/… $\endgroup$ – Steve Huntsman Feb 8 '12 at 0:02 $\begingroup$ @SteveHuntsman Could you please comment more on 'There are probabilistic proofs ....... anything else that can be done with a heat kernel' and please provide some references ('any other non-trivial examples other than index theorem')? $\endgroup$ – T.... Dec 31 '15 at 8:46 $\begingroup$ @Turbo - Treat the heat kernel as a transition density as in section 4.1 of the book mentioned in my comment above $\endgroup$ – Steve Huntsman Dec 31 '15 at 13:16 I believe that Krylov and Safonov's original proof of the Harnack inequality for solutions of elliptic equations in nondivergence form was a probabilistic one. PDE people wouldn't have the slightest idea just from glancing at the title of their paper that this is what they proved (or at least this PDE person). This was not an isolated incident. Much of Krylov's pioneering work in elliptic equations was originally written up in the language of Markov processes, etc, with analytic proofs appearing later. Scott ArmstrongScott Armstrong There are a number of probabilistic inequalities that are quite frequently used in harmonic analysis. For example, Khintchine's inequality (http://en.wikipedia.org/wiki/Khintchine_inequality). The same idea of using random signs and taking expectations is rather common. One specific inequality proved in this manner which I've come across comes is the Rademacher-Menshov Theorem (for almost orthogonal functions). The theorem gives a way to control the $L^2$ norm of partial sums of a sequence of N "almost orthogonal" functions by the sum of the $L^2$ norms of each function modulo a logarithmic loss in N. A precise statement and proof of this inequality can be found on page 43 of this article by Ciprian Demeter, Terence Tao, and Christoph Thiele: http://arxiv.org/abs/math/0510581. Peter LuthyPeter Luthy The Radon-Nikodym Theorem and the Lebesgue differentiation theorem can be proved by Martingale theory (see "Probabilty Theory" by Heinz Bauer, pp. 173-5). Whoever This paper (Prime Numbers and Brownian Motion, by Patrick Billingsley) is perhaps more about proving number theoretic facts than analytical, but at least to me they have a very analytical flavor anyway, and was the first thing to come into my mind when I read your question. I think you would find it interesting. Adrian PetrescuAdrian Petrescu One of the basic constructions in the theory of singular integral operators is the Calderón-Zygmund decomposition, which follows from a simple stopping time argument. This result has numerous important applications in harmonic analysis; for instance, it plays a role in proving the $L^p$-convergence of Fourier series ($1 < p < \infty$). Evan JenkinsEvan Jenkins I'm not sure how kosher it is for me to answer my question, but since there had been several comments about my original post I did not want to make any major edits to it. I've posed this question to my probability professor and he mentioned his favorite, from the paper "Triple points: from non-Brownian filtrations to harmonic measures." by Tsirelson. It's pretty far over my head, but it claims to have a probabilistic proof of (I'm quoting the description) A conjecture by C. Bishop (1991) about harmonic measures for three arbitrary (not just regular) non-intersecting domains in Rn. Roughly speaking, trilateral contact is always rare harmonically (though not topologically). 
This seems like it goes hand in hand with some of the above comments, where basically knowledge of things like hitting probabilities of brownian motion and similar things for other processes can assist in understanding the fine properties of various domains, useful to people in PDE and harmonic analysis. Davie's construction of subspaces of $c_0$ and $\ell_p$ ($p\in (2, \infty)$) without the approximation property, as outlined in Section 2.d of Lindenstrauss and Tzafriri's book Classical Banach Spaces I, uses a probabilistic lemma (Lemma 2.d.4, p.87-88). I do not know Davie's proof all that intimately, having been through it only once - courtesy of a fellow grad student who took a couple of hours to go over it in a research group seminar... I remember that it looked like magic at the time. (Edited once for a typo) Philip BrookerPhilip Brooker How about the Convolution Theorem, which can be seen as a consequence of $$ {\rm E}[e^{iu(X+Y)}]={\rm E}[e^{iuX}]{\rm E}[e^{iuY}],\;\; u \in \mathbb{R}, $$ where $X$ and $Y$ are independent random variables. Shai CovoShai Covo An outstanding result of this sort is the theorem of Tsirelson, MR1487755 Tsirelson, B. Triple points: from non-Brownian filtrations to harmonic measures. Geom. Funct. Anal. 7 (1997), no. 6, 1096–1142. He proved the following theorem conjectured by M. Sodin and myself: Let $D_j,1\leq j\leq 3$ be three disjoint regions in $R^n$. Choose points $x_j\in D_j$, and consider harmonic measures $\mu_j=\omega(D_j,x_j)$. We consider them as Borel probability measures in $R^n$ sitting on the boundaries $\partial D_j$. Then there exist Borel sets $E_j\subset\partial D_j$, such that $\mu_j(E_j)=1$, and $E_1\cap E_2\cap E_3=\emptyset$. The very difficult proof is based on advanced probability theory. When $n=2$ there is a relatively simple analytic proof. It is also not hard to obtain such result with $3$ replaced by $12$, where $12$ does not depend on dimension:-) Tsirelson writes: "This is a challenge: can the result be achieved by non-stochastic arguments?" As far as I know, nobody has done this. Usually the results of potential theory which are proved using probability, can e also proved without probability, in most cases with simpler proofs. This result is a remarkable exception. Alexandre EremenkoAlexandre Eremenko Some examples can be found in the book "Statistical independence in probability, analysis and number theory" by Mark Kac. Johann CiglerJohann Cigler Shannon's theorem giving the capacity of noisy channel is proved using random coding. (Efficiently-computable codes are not known.) $\begingroup$ How is this an analytic fact? $\endgroup$ – Marcin Kotowski Feb 8 '12 at 1:23 $\begingroup$ @MarcinKotowski Well, there is a limit. $\endgroup$ – Lorenzo Najt Apr 27 '17 at 4:20 Since probability theory is usefully formalized as a special case of quantum probability, a related question is what examples are there of quantum proofs for classical (non-quantum) results. There are now sufficiently many examples to merit a survey by Drucker and deWolf "Quantum Proofs for Classical Theorems." I blogged about two such examples on FXPAL's blog. Eleanor RieffelEleanor Rieffel $\begingroup$ None of the results in the (excellent) paper you reference are ven remotely analytic. $\endgroup$ – Marcin Kotowski Feb 8 '12 at 1:23 I really like the probabilistic proof of the fact that $$ e^{-n}\sum_{k=0}^n\frac{n^{k}}{k!}\to\frac12 $$ as $n\to\infty$. The proof goes as follows (taken from here). 
Suppose that $X_1,\ldots,X_n$ are independent and identically distributed Poisson random variables with parameter $\lambda=1$. We have that $$ P(X_1+\ldots+X_n\le n)=e^{-n}\sum_{k=0}^n\frac{n^{k}}{k!} $$ and $$ P(X_1+\ldots+X_n\le n)=P(n^{-1/2}(X_1+\ldots+X_n-n)\le 0) $$ for each $n\ge1$. By the central limit theorem, $$ P(n^{-1/2}(X_1+\ldots+X_n-n)\le 0)\to\Phi(0)=1/2 $$ as $n\to\infty$, where $\Phi$ is the cumulative distribution function of the standard normal random variable.

Cm7F7Bb

$\begingroup$ One of my favorite exercises to give or demonstrate for students taking an early course in probability (i.e. not their first but not measure theory based). $\endgroup$ – Pierre Sep 2 '16 at 15:30

A concrete example of using conformal invariance of Brownian motion in the plane (alluded to in Yemon Choi's answer) is the following: Consider a simply connected domain in the plane which contains the unit disk and whose boundary is a smooth curve containing an arc of length $2\pi(1-\epsilon)$ in the unit circle. Then the Riemann map sending this domain to the unit disk and fixing the center of the disk sends the rest of the boundary curve to an arc of length at most $2\pi\epsilon$. I find this pretty amazing considering there is no bound on the length of the remaining boundary (e.g. you can draw an elephant compared to which the unit disk is a tiny golf ball). The proof is that the distribution of the first time Brownian motion starting at the center hits the boundary must be sent to the uniform distribution on the circle by the Riemann mapping. I'm not sure what a non-probabilistic proof looks like (probably cross-cuts plus domain-monotonicity of some sort, I guess), but I doubt it competes in elegance (though we all have our own taste of course).

Pablo Lessa

Doeblin's proof of the fundamental limit theorem for regular Markov chains (p. 450 in Introduction to Probability, available online). The proof uses [coupling](http://en.wikipedia.org/wiki/Coupling_(probability)).

Yoo

$\begingroup$ I realize this isn't an analytic fact $\endgroup$ – Yoo Dec 24 '09 at 5:37

In operator theory there is a result of C. Brislawn where he uses Doob's martingales in order to derive a criterion for integral operators to be nuclear. Also using that, he gives a trace formula which is similar to the usual trace formula, but applied to a version of the operator's kernel.

BigM

Dudley's VC-dimension-based upper bound on the packing numbers of function classes used a very clever (and simple!) sampling argument; see Theorem 29.3 in http://link.springer.com/book/10.1007%2F978-1-4612-0711-5 or these notes: https://www.cs.bgu.ac.il/~asml162/wiki.files/dudley-pollard.pdf

Aryeh Kontorovich

I stumbled across an elementary and probabilistic proof of Euler's formula $\prod_{p\in\mathbb{P}}\frac{1}{1-p^{-s}}=\sum_{n=1}^{\infty}\frac{1}{n^s}$ for the zeta function. One reference is here: https://math.stackexchange.com/questions/427910/a-simple-way-to-obtain-prod-p-in-mathbbp-frac11-p-s-sum-n-1-in.

Ismael Lemhadri

Chebyshev's sum inequality can be proven more concisely using probability. Indeed, if $f$ and $g$ are two monotone functions (in the same direction) and $X$ a random variable, then we have $$ (f(X_1)-f(X_2))(g(X_1)-g(X_2)) \ge 0 $$ where $X_1, X_2$ are i.i.d. copies of $X$. Then taking expectations gives Chebyshev's sum inequality.

Sandeep Silwal
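Several of the constructions above lend themselves to quick numerical checks. As a small illustration (not part of the original thread), the following Python sketch evaluates the Bernstein polynomial $p_n$ from Harald Hanche-Olsen's answer and watches the sup-norm error shrink; it assumes NumPy and SciPy are available, and the test function $f$ is an arbitrary choice.

```python
import numpy as np
from scipy.stats import binom

def bernstein(f, n, x):
    """Expected payoff p_n(x) of the n-trial coin game with success probability x."""
    k = np.arange(n + 1)
    x = np.asarray(x, dtype=float)[..., None]      # broadcast the grid against k
    return (binom.pmf(k, n, x) * f(k / n)).sum(axis=-1)

f = lambda t: np.abs(t - 0.5)                      # continuous, but not smooth at 1/2
xs = np.linspace(0.0, 1.0, 1001)
for n in (10, 100, 1000):
    print(n, np.max(np.abs(bernstein(f, n, xs) - f(xs))))
```

For this Lipschitz but non-smooth $f$ the error decays roughly like $n^{-1/2}$, matching the heuristic that the binomial in the game concentrates at scale $\sqrt{x(1-x)/n}$.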
Tobacco Sales Bill Recognition Based on Multi-Branch Residual Network

Yuxiang Shan, Cheng Wang, Qin Ren and Xiuhui Wang

Abstract: Tobacco sales enterprises often need to summarize and verify their daily sales bills, which may consume substantial manpower, and manual verification is prone to occasional errors. The use of artificial intelligence technology to realize the automatic identification and verification of such bills therefore offers important practical significance. This study presents a novel multi-branch residual network for tobacco sales bills to improve the efficiency and accuracy of tobacco sales. First, geometric correction and edge alignment are performed on the input sales bill image. Second, the multi-branch residual network recognition model is established and trained using the preprocessed data. The comparative experimental results demonstrate that the correct recognition rate of the proposed method reached 98.84% on the China Tobacco Bill Image dataset, which is superior to that of most existing recognition methods.

Keywords: Artificial Intelligence, Image Recognition, Residual Network

As important aspects of artificial intelligence technology, image classification and recognition [1,2] have been used extensively in industry, agriculture and daily life. For example, a fine-grained recognition method [3] was proposed to improve the real-time performance and accuracy of vehicle recognition in an intelligent transportation system. Chen et al. [4] presented a load forecasting framework based on self-adaptive dropout, which improves the load supply robustness of power systems. However, as the input data and specific requirements vary significantly between application scenarios, it is difficult to use a single general model to solve all problems. For example, in the identification of tobacco sales bills, different regions have varying management departments, and different customers adopt varying management clients, which results in many differences in the formats and contents of the sales bills.

In view of the above problems, this paper presents a novel multi-branch residual network for tobacco sales bills to improve the efficiency and accuracy of tobacco sales reconciliation. Our work can be summarized as follows: (1) A novel multi-branch residual network is proposed for the recognition of tobacco sales bills. The proposed network integrates a multi-branch conv-block module and a spatial and channel squeeze-and-excitation (SCSE) module. (2) Score advancements are achieved on large-scale datasets. We conducted comparative experiments using the China Tobacco Bill Image (CT-BI) dataset, which consists of more than 125,000 images.

The rest of this paper is structured as follows: Section 2 discusses related work on image classification and recognition for different tasks. Then, the methodology of the proposed framework is described in detail in Section 3. In Section 4, several comparative experiments conducted on the CT-BI dataset are outlined, and conclusions are drawn in Section 5.

The application of image recognition technology in various fields has resulted in a series of phased achievements. A weakly supervised fine-grained image recognition method that can accurately locate objects and parts without annotation was proposed in [5].
The experimental results demonstrated that the target mask module and salient point detection module in the method could suppress background interference and improve the correct recognition rate. Zhao et al. [6] proposed a nonlinear loosely coupled nonnegative matrix decomposition method for low-resolution image recognition, in which the target images were regarded as being composed of different local features. Furthermore, to evaluate the reliability of deep-learning-driven image recognition applications when the image background area changes, Zhang et al. [7] introduced a deformation test method for image recognition systems. To alleviate the difficulty of understanding a networked system owing to the unpredictability of its structure, a coordination method based on deep learning was proposed [8] to solve the network structure inference problem by integrating a residual network and a fully-connected network. Yan et al. [9] proposed a method for predicting future traffic and constructed a prediction interval based on the combination of a residual neural network with upper- and lower-bound estimation. This method generated a combined residual network by optimizing the objective function and adjusting the type of the residual blocks, which effectively improved the accuracy of quantifying the uncertainty of future traffic predictions. To obtain subtle clues more effectively in fine-grained image recognition, Kim et al. [10] proposed a method for generating the features of hard negative samples that reduces the dependence on the number of hard negative sample tuples.

The proposed tobacco sales bill recognition model consists of three multi-branch conv-blocks and an SCSE module, as shown in Fig. 1.

Fig. 1. Architecture of the proposed recognition model.

3.1 Data Preprocessing

In this study, the data preprocessing consisted of two sequential processes: image calibration and image segmentation, as shown in Fig. 2.

Fig. 2. Flow of data preprocessing.

In the image correction process, the input image was tilt-corrected; the four top corners of the document were extracted to calculate the length and width of the image, and rotation alignment was performed. Thereafter, to reduce the impact of image noise on the recognition rate, a difference-of-Gaussians filter was applied for noise reduction and the image was binarized. The transfer function and impulse response are presented in Eqs. (1) and (2), respectively:

$$G(s)=M e^{-s^{2} / 2 \alpha_{1}^{2}}-N e^{-s^{2} / 2 \alpha_{2}^{2}} \tag{1}$$

$$g(t)=\frac{M}{\sqrt{2 \pi \sigma_{1}^{2}}} e^{-t^{2} / 2 \sigma_{1}^{2}}-\frac{N}{\sqrt{2 \pi \sigma_{2}^{2}}} e^{-t^{2} / 2 \sigma_{2}^{2}} \tag{2}$$

where $M \geq N$, $\alpha_{1}>\alpha_{2}$, and $\sigma_{i}=1 /(2 \pi \alpha_{i})$.

Based on the image calibration, the Sobel operator was used to determine the gradient in the x-direction of the input image to realize text positioning. Each pixel in the input image was convolved with the two convolution kernels of the Sobel operator. One of the two kernels has the largest response to vertical edges, whereas the other has the largest response to horizontal edges. The maximum of the two convolution results was used as the output of the pixel. Then, the processed image was dilated and eroded to detect the text area and realize line segmentation of the entire image. Finally, a vertical projection operation was used to divide each line into a series of characters for processing.
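The preprocessing pipeline just described can be sketched compactly in code. The following is only an illustration of the idea, assuming OpenCV and NumPy; the kernel sizes, the use of Otsu thresholding for the binarization step, and the morphological structuring element are illustrative guesses rather than the authors' values, and the tilt-correction step is omitted.

```python
import cv2
import numpy as np

def locate_text(gray):
    """Difference-of-Gaussians denoising, Sobel x-gradient, then morphology."""
    narrow = cv2.GaussianBlur(gray, (3, 3), 1.0)
    wide = cv2.GaussianBlur(gray, (9, 9), 3.0)
    dog = cv2.subtract(narrow, wide)                       # band-pass, cf. Eqs. (1)-(2)
    _, binary = cv2.threshold(dog, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # Otsu is assumed
    grad_x = cv2.convertScaleAbs(cv2.Sobel(binary, cv2.CV_16S, 1, 0, ksize=3))
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 5))
    return cv2.erode(cv2.dilate(grad_x, kernel), kernel)   # dilate then erode

def segment(text_mask, min_gap=2):
    """Horizontal projection -> text lines; vertical projection -> characters."""
    lines = _runs(text_mask.sum(axis=1) > 0, min_gap)
    return [[(top, bottom, left, right)
             for left, right in _runs(text_mask[top:bottom].sum(axis=0) > 0, min_gap)]
            for top, bottom in lines]

def _runs(mask, min_gap):
    """Contiguous True runs of a 1-D boolean mask, ignoring gaps up to min_gap."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(idx) > min_gap)
    starts = np.r_[idx[0], idx[breaks + 1]]
    ends = np.r_[idx[breaks], idx[-1]] + 1
    return list(zip(starts, ends))
```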
3.2 Recognition Model

The preprocessed image was input into the proposed recognition model, which consists of a multi-branch conv-block module and an SCSE module, as shown in Fig. 1. The design of the former integrates the branch idea of the Inception series of models and the residual mechanism of the ResNet network. To reduce the total number of parameters, we referred to the Inception network and used two 3$\times$3 convolutions instead of a large 5$\times$5 convolution. This improvement reduces the number of model parameters and establishes more nonlinear transformations, which increases the capability of the conv-block to learn features. Moreover, using this structure, the sparse matrix can be clustered into dense submatrices to improve computational performance.

As illustrated in Fig. 3, the multi-branch conv-block module is composed of four branches: branch 0 consists of one BasicConv2d; branch 1 of two BasicConv2d; branch 2 of three BasicConv2d; and branch 3 of an average pooling layer followed by one BasicConv2d. The input tensor passes through the four branches, after which the results are concatenated together. The advantage of this procedure is that visual information can be processed on different scales and subsequently aggregated, so features are extracted from different scales simultaneously.

Fig. 3. Multi-branch conv-block module.

The SCSE module consists of the sSE and cSE branches. The cSE is a channel attention module. The specific process is as follows: global average pooling converts the feature map from [C, H, W] to [C, 1, 1], and two 1$\times$1 convolutions process this C-dimensional vector. Thereafter, the sigmoid function normalizes it to obtain the corresponding mask. Finally, the recalibrated feature map is obtained by channel-wise multiplication. The sSE module is a spatial attention module, implemented as follows: a 1$\times$1 convolution is applied directly to the feature map, converting its dimensions from [C, H, W] to [1, H, W]. Subsequently, this map is activated with a sigmoid function to obtain a spatial attention map, which is applied to the original feature map to complete the spatial calibration. The SCSE is a parallel connection of the two modules: the outputs of the sSE and cSE branches are added to obtain a more accurately calibrated feature map. Finally, the result is added to the input tensor as the output of the block. The structure of the SCSE is presented in Fig. 4. By introducing an attention mechanism, the network can focus on the information most relevant to the current task, alleviating information overload and improving the efficiency and accuracy of task processing.

Fig. 4. SCSE module.
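The SCSE description above maps almost line-for-line onto code. The following PyTorch sketch is an interpretation of that description rather than the authors' implementation: the channel-reduction ratio r and the ReLU between the two 1$\times$1 convolutions of the cSE branch are standard squeeze-and-excitation choices assumed here, and the final residual addition follows the sentence stating that the result is added back to the input tensor.

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        # cSE: global average pool -> two 1x1 convs -> sigmoid mask over channels
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // r, kernel_size=1),
            nn.ReLU(inplace=True),                     # assumed, as in standard SE blocks
            nn.Conv2d(channels // r, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # sSE: single 1x1 conv to one channel -> sigmoid mask over positions
        self.sse = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        recalibrated = x * self.cse(x) + x * self.sse(x)
        return x + recalibrated                        # residual add described in the text

# Example: a 64-channel feature map keeps its shape through the block.
if __name__ == "__main__":
    block = SCSE(64)
    print(block(torch.randn(2, 64, 32, 32)).shape)     # torch.Size([2, 64, 32, 32])
```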
We conducted a series of comparative experiments to demonstrate the effectiveness of the proposed recognition approach through comparisons with existing methods.

4.1 Experimental Configuration

The hardware of the experimental environment consisted of an NVIDIA Titan X graphics card, 128 GB of RAM, and an Intel E5-2678 v3 CPU. The software environment comprised an Ubuntu 16 system, Python 3.6, and the PyTorch 1.0 development environment. The experiments were conducted using the integrated development environment Python 3.6 + PyTorch 0.4.0.

The experimental data were obtained from the CT-BI dataset. This dataset contains more than 1.2 million tobacco sales bills and corresponding statistical data, collected from different regions and dealers. The sales bills were saved in the JPG image format according to tobacco type, and the statistical data were saved in a sheet in .xlsx format. Each sample image contains the store name, the monopoly license number, and the tobacco commodity sales data and amounts. Fig. 5 presents several samples from the CT-BI dataset. In the experiments, the correct recognition rate of the algorithm was tested by identifying the sales and amounts of specific types of cigarettes in the image bill and verifying these against the data in the statistical table. The experimental results were analyzed quantitatively using the Top-1 to Top-5 error rate indexes.

Fig. 5. Image examples from the CT-BI dataset.

4.2 Experiment I: Different Tobacco Types

In this experiment, we selected seven tobacco brands (SUYAN, NANJING, ZHONGHUA, TAISHAN, HUANJINYE, WANBAOLU and LIQUN) to verify the accuracy of the proposed recognition method. The experimental results are listed in Table 1.

Table 1. Experimental results for different tobacco types: Top-1 to Top-5 error rates (unit: %)

Tobacco type         Top-1   Top-2   Top-3   Top-4   Top-5
SUYAN                6.48    5.52    3.90    2.62    1.47
NANJING              5.94    4.32    3.05    2.29    1.27
ZHONGHUA             5.62    3.96    2.78    2.04    1.13
TAISHAN              5.34    3.78    2.67    1.85    1.09
HUANJINYE            6.50    4.95    3.31    2.56    1.49
WANBAOLU             6.42    4.97    3.40    2.31    1.26
LIQUN                5.46    4.39    3.14    2.01    1.16
Standard deviation   0.50    0.62    0.41    0.29    0.16

It can be observed from Table 1 that the recognition accuracy differs slightly between tobacco brands. This is mainly because the number and complexity of the Chinese characters in the product names differ, which affects the accuracy of locating the commodity names in sample images containing interference information. For example, the Chinese name for HUANJINYE contains three complex Chinese characters, whereas the Chinese name for ZHONGHUA contains only two simple Chinese characters. This results in a difference of 0.88% in the Top-1 index and 0.36% in the Top-5 index. However, Table 1 also indicates that the maximum standard deviation for each index from Top-1 to Top-5 is only 0.62, which reflects the stability of the proposed method.
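For reference, the Top-1 to Top-5 error rates used in Tables 1 and 2 can be computed from a model's class scores as follows. This is a generic sketch, not code from the paper; logits and labels are placeholder names for the network outputs and ground-truth class indices.

```python
import torch

def topk_error(logits, labels, ks=(1, 2, 3, 4, 5)):
    """Percentage of samples whose true label is NOT among the k best-scoring classes."""
    maxk = max(ks)
    _, pred = logits.topk(maxk, dim=1)             # [N, maxk] predicted class indices
    correct = pred.eq(labels.view(-1, 1))          # [N, maxk] hit matrix
    return {k: 100.0 * (1.0 - correct[:, :k].any(dim=1).float().mean().item())
            for k in ks}
```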
4.3 Experiment II: Different Recognition Methods

We compared the proposed method with several existing methods. The experimental data were the sales data of LIQUN tobacco. The experimental results are presented in Table 2.

Table 2. Experimental results for different methods: Top-1 to Top-5 error rates (unit: %)

Method         Top-1   Top-2   Top-3   Top-4   Top-5
Inception V3   6.60    5.29    3.87    2.44    1.58
NasNet         6.64    5.57    4.04    2.68    1.58
MobileNet      7.08    5.02    3.75    3.45    1.99
ResNet-18      8.91    8.49    6.54    5.98    3.12
Our method     5.46    4.39    3.14    2.01    1.16

It can be observed from Table 2 that the proposed method is superior to the other four methods in terms of the Top-1 to Top-5 indicators. For example, compared with the second-ranked Inception V3 method, the Top-1 result of our method improved by 1.14%. Moreover, the Top-5 result improved by 0.42% compared with the closest methods, Inception V3 and NasNet. The direct reason for this performance improvement is that our method integrates the multi-branch concept of the Inception network and provides the conv-block with a stronger feature-learning ability by establishing more nonlinear transformations. Another possible reason is that, by introducing an attention mechanism, the proposed method can focus on the information most relevant to the current task, improving the efficiency and accuracy of task processing.

In this study, as one of the typical applications of artificial intelligence technology in traditional industries, a new multi-branch residual framework was developed for the recognition of tobacco sales bills. A multi-branch residual network recognition model was designed and trained based on the geometric correction and edge alignment of input images. Finally, the effectiveness of the proposed approach was verified through comparative experiments on a large-scale tobacco sales bill dataset.

This work was supported by the Research on Key Technology and Application of Marketing Robot Process Automation (RPA) Based on Intelligent Image Recognition of the Zhejiang China Tobacco Industry Co. Ltd. (No. ZJZY2021E001).

Yuxiang Shan
He received his M.S. degree from the School of Computer Science and Technology, Zhejiang University, in 2013. He is now an engineer at the Information Center of China Tobacco Zhejiang Industrial Co. Ltd. His current research interests include image recognition and artificial intelligence.

Cheng Wang
He received his B.S. degree from the School of Human Resources Management, Nanjing Audit University, in 2010. He then joined Zhejiang Tobacco Industry Company as a customer manager. In 2020, he joined the brand operation department, where he is engaged in data operation and customer operation.

Qin Ren
She received a bachelor's degree in marketing from Zhejiang Normal University in 2010. Since August 2011, she has worked at China Tobacco Zhejiang Industrial Co. Ltd., engaged in tobacco marketing and Internet marketing research.

Xiuhui Wang
He received his master's and doctoral degrees from Zhejiang University in 2003 and 2007, respectively. He is now a professor in the Computer Department of China Jiliang University. His current research interests include computer graphics, pattern recognition, and artificial intelligence.

1. K. Ohri, M. Kumar, "Review on self-supervised image recognition using deep neural networks," Knowledge-Based Systems, vol. 224, no. 107090, 2021. doi: 10.1016/j.knosys..107090
2. W. Ma, X. Tu, B. Luo, G. Wang, "Semantic clustering based deduction learning for image recognition and classification," Pattern Recognition, vol. 124, no. 108440, 2022. doi: 10.1016/j.patcog..108440
3. Y. Zhou, "Vehicle image recognition using deep convolution neural network and compressed dictionary learning," Journal of Information Processing Systems, vol. 17, no. 2, pp. 411-425, 2021. doi: 10.3745/JIPS.01.0073
4. Q. Chen, W. Zhang, K. Zhu, D. Zhou, H. Dai, Q. Wu, "A novel trilinear deep residual network with self-adaptive Dropout method for short-term load forecasting," Expert Systems with Applications, vol. 182, no. 115272, 2021. doi: 10.1016/j.eswa..115272
5. J. Chen, J. Hu, S. Li, "Learning to locate for fine-grained image recognition," Computer Vision and Image Understanding, vol. 206, no. 103184, 2021. doi: 10.1016/j.cviu..103184
6. Y. Zhao, C. Wang, J. Pei, X. Yang, "Nonlinear loose coupled non-negative matrix factorization for low-resolution image recognition," Neurocomputing, vol. 443, pp. 183-198, 2021. doi: 10.1016/j.neucom.2021.02.068
7. Z. Zhang, P. Wang, H. Guo, Z. Wang, Y. Zhou, Z. Huang, "DeepBackground: metamorphic testing for deep-learning-driven image recognition systems accompanied by background-relevance," Information and Software Technology, vol. 140, no. 106701, 2021. doi: 10.1016/j.infsof..106701
8. K. Huang, S. Li, W. Deng, Z. Yu, L. Ma, "Structure inference of networked system with the synergy of deep residual network and fully connected layer network," Neural Networks, vol. 145, pp. 288-299, 2022. doi: 10.1016/j.neunet.2021.10.016
9. L. Yan, J. Feng, T. Hang, Y. Zhu, "Flow interval prediction based on deep residual network and lower and upper boundary estimation method," Applied Soft Computing, vol. 104, no. 107228, 2021. doi: 10.1016/j.asoc..107228
10. T. Kim, K. Hong, H. Byun, "The feature generator of hard negative samples for fine-grained image recognition," Neurocomputing, vol. 439, pp. 374-382, 2021. doi: 10.1016/j.neucom.2020.10.032

Received: February 15, 2022
Accepted: April 12, 2022
Published (Print): June 30, 2022
Published (Electronic): June 30, 2022

Corresponding Author: Xiuhui Wang, [email protected]
Yuxiang Shan, Chinese Tobacco Zhejiang Industrial Company Limited, Hangzhou, China, [email protected]
Cheng Wang, Chinese Tobacco Zhejiang Industrial Company Limited, Hangzhou, China, [email protected]
Qin Ren, Chinese Tobacco Zhejiang Industrial Company Limited, Hangzhou, China, [email protected]
Xiuhui Wang, Dept. of Computer, China Jiliang University, Hangzhou, China, [email protected]
Distant Particle Entanglement To test particle entanglement at a distance, do they have to start in proximity or can they be identified already distant? If so, how? quantum-entanglement austincooperaustincooper $\begingroup$ Are you asking if you can get two spatially-separated particles to become entangled (without bringing them near each other)? $\endgroup$ – BMS Dec 6 '14 at 23:31 As in Holger Felder's Answer, to make an entangled state, all known ewxperimental techniques are local insofar that entangled states must be 'created' by a single interaction: you need to produce a pure quantum state e.g. of two photons with opposite spin in a given direction, so that the pure quantum state is nonfactorisable (i.e. can't be written in the form $\left.\left|\psi_1\otimes \psi_2\right.\right>$, where $\left.\left|\psi_1\right.\right>$ and $\left.\left|\psi_2\right.\right>$ are independent one-photon states). Even in theory, two spatially separated particles can become entangled, but only by a special communication protocol between the two particles' locations called Entanglement Swapping (see Wiki page "Quantum Teleportation" and the "Entanglement Swapping" section. The entanglement must be "transmitted" from one location to another. But I think you're asking whether one can tell whether two particles are entangled when they have been entangled elsewhere and travelled to the experimenter's location. There are two important things to heed here: Any entanglement experiment is like this, even if the detection apparatus is only a few centimetres away from where the entangled particles are produced. So you can in theory detect entanglement arising from photon productions in the next galaxy just as well as you can detect entanglement arising from their production in the next room, as long as you can still access the pairs. In practice, you would be unlikely to know which pairs of photons are meant to be entangled if they just dropped in for afternoon tea in your laboratory casually after having made a journey from M87; NO form of entanglement can be confirmed by the observation of only ONE pair of particles. Quantum mechanical experiments are probabilistic in nature, so the only thing you can do is make many measurements and confirm whether or not the correlations between the measured states is statistically significantly higher than the limits laid down by the Bell Inequality for non-entangled particles. A kind of exception to point 2. is if you measured the state of one photon and confirmed it to have collapsed to, say, left circular polarisation and measured the other also to have collapsed to the same polarisation then you can say that they were, to within your experimental accuracy, not in the entangled state as follows: $$\frac{1}{2}\left(e^{i\,\phi_1}\,\left|\left.L,\,R\right>\right.+e^{i\,\phi_2}\,\left|\left.R,\,L\right>\right.\right)$$ So you could conclude, even after one observation, that they probably were not produced in a single interaction producing photons of opposite spin. But note that this STILL does not rule out entanglement. There is nothing to say that the photons were not in the following nonfactorisable state before the measurement: $$\frac{1}{2}\left(e^{i\,\phi_1}\,\left|\left.L,\,L\right>\right.+e^{i\,\phi_2}\,\left|\left.R,\,R\right>\right.\right)$$ even though I don't know of any experimental apparatus that could make such an entangled state. WetSavannaAnimalWetSavannaAnimal $\begingroup$ @Savanna, did you notice what Austin asked? 
I repeat it here: test particle entanglement at a distance, i.e. without bringing them together. The answer is YES. The entanglement of the signal photons (signals), is tested on the idler photons, by Victor, which is far from both Alice and Bob. (You can claim that Victor's measurement CREATES the entanglement of the signals, but this should be in the context of ANOTHER QUESTION.) So, the signals don't start in proximity, and are identified (their entanglement is identified) at a distance. The signals are NEVER close to one another. $\endgroup$ – Sofia Dec 7 '14 at 12:35 $\begingroup$ (continuation) One of the experts in fundaments of QM (I regret, I don't remember exactly who), said that, "entanglements live OUTSIDE space and time". $\endgroup$ – Sofia Dec 7 '14 at 12:38 $\begingroup$ @Sofia It sounds a little like you're thinking be thinking of David Bohm and his aquarium analogy: a fish in a tank with two cameras on it: its movements then beget high correlations in the signals from both, spatially displaced cameras $\endgroup$ – WetSavannaAnimal Dec 7 '14 at 12:56 $\begingroup$ @Savanna, I am not sure that I understand your comment (also, English isn't my mother-tongue). So, can you reformulate your comment some more clearly? I'd be very glad. Now, am NO Bohmian, I investigate at present this interpretation, and my bad feeling, according to preliminary results/conclusions, is that we are doomed to remain with the collapse. I.e. Bohm's interpretation, appealing as it may be despite being NONLOCAL, is unacceptable. But it will take a looong time until I finish this investigation. Now, please tell me in WHAT do you disagree with me? It's simpler. $\endgroup$ – Sofia Dec 7 '14 at 14:00 $\begingroup$ (continuation) Bohm's interpretation, though assuming that a quantum particle WAS, before the measurement, there where is was detected by the measurement, doesn't save us from the non-locality. $\endgroup$ – Sofia Dec 7 '14 at 14:08 See this article: Časlav Brukner, Markus Aspelmeyer, and Anton Zeilinger, "Complementarity and Information in "Delayed-choice for entanglement swapping", arxiv.org/abs/quant-ph/0405036v1 . You can find it in Internet, in the arXiv quant-ph. By the way, it is BETTER to read the article than the issue in Wikipedia about entanglement swapping. The treatment in Wikipedia is more complicated than needed. The idea is as follows: two identical pairs of photons are produced by identical sources (e.g. by down-conversion in identical non-linear crystals illuminated with identical ultraviolet beams). In such pairs, one of the photons is called signal, the other idler. From each pair, one photon, say, the idler, is sent to a common observer, Victor, that projects the pair on a certain Bell-type state. A Bell-type state is an entangled state, and there are four such states - see in the article formulas (3) - (6). The signal photons were sent, one to an observer Alice, and one to an observer Bob, these two observers being far from one another. Well, in fact, Alice and Bob performed their measurements, each one on her/his signal-photon, before Victor performed his measurement on the two idlers. So, Alice made her measurement and Bob too, then Victor. But when all three measurements were compared, it was found that Alice's and Bob's particles were entangled in the same Bell-type state as Victor's idlers. One could suggest that Victor's measurement collapsed the two pairs of photons, imposing thereby an entanglement also between Alice's and Bob's photons. 
But Victor measured after Alice and Bob. The facts go as you asked: by the time Alice and Bob measured their particles, these should have been independent, not entangled.

P.S. Just keep one issue in mind. Entangled particles seem to ignore space and time. I say "seem" because we know very little about entanglement, nor can we say what the wave-function is. We are only able to play with formulas. Thus, the fact that Victor makes his measurement after Alice and Bob doesn't mean much for these particles. Moreover, we can find a moving frame of coordinates in which Victor measures first, and Alice and Bob later.

Sofia

$\begingroup$ In my understanding (see my answer) the result of Zeilinger et al.'s experiment is predicted by the artful production of the pairs of photons. That's all. The collapse is in our knowledge, not in the photons' states. $\endgroup$ – HolgerFiedler Dec 7 '14 at 7:49

$\begingroup$ Your understanding is FRAIL. I recommend you NOT to rely on other things than PROVED things. What the wave function is, what the collapse is, and whether in nature a certain part of the wave-function really disappears or only becomes non-accessible to us, are things NOT YET ELUCIDATED. So, please don't make statements based just on your understanding, without proof. $\endgroup$ – Sofia Dec 7 '14 at 11:29

To make two photons entangled they have to be produced together, for example by down-conversion in non-linear crystals or from quantum dots which emit pairs of photons. This is needed for quantum cryptography. On the other hand, the photons from a laser beam, if one lets them pass through a polarizer, are entangled in their frequency and in their field direction. But this is not bug-proof, because one can't count the missing photons and the misalignment of the system caused by the spy's influence. The strange thing is the point of view that the artfully made quantum dot does not produce the photons in the states in which we measure them, but in mixed states. In mathematical terms this is correct, and it describes our knowledge in the time between the pair production and the measurement. The opposite point of view, that the photons are produced in the states in which we measure them, is no more or less provable than the mixed-state view. This is like not knowing whether the tree has fallen in the forest until one goes to check.

HolgerFiedler

$\begingroup$ That's not true. The photons DON'T need to be produced together - read my answer $\endgroup$ – Sofia Dec 7 '14 at 11:15

$\begingroup$ Did you hear about ENTANGLEMENT SWAPPING? Please read my answer that describes IN DETAIL the BAZ experiment (Brukner, Aspelmeyer, Zeilinger). I appreciate that you removed the words "in my understanding". But it is a good procedure, before posting an answer, to read the previous answer. $\endgroup$ – Sofia Dec 7 '14 at 12:22
Research on the algorithm of electromagnetic leakage reduction and sequence of image migration feature retrieval Chunwei Miao1 & Jianlin Hu2 EURASIP Journal on Image and Video Processing volume 2019, Article number: 47 (2019) Cite this article When the computer is working, it will transmit the electromagnetic leakage signal containing the video information and receive and process the electromagnetic leakage signal within a certain distance, which can reproduce the screen information and form the electromagnetic leakage restoration sequence image. Due to the noise in the receiving process and the fluctuation of the video line and field signal, the image of the electromagnetic leakage restoration sequence will be blurred and will drift between frames. The multi-frame cumulative averaging method can theoretically improve the signal-to-noise ratio of the reconstructed image, but the offset of the reconstructed image sequence caused by the electromagnetic leakage will bring adverse effects. Firstly, this paper analyzes the noise of electromagnetic leakage emission and restoration sequence images and the method of multi-frame cumulative averaging, as well as the cumulative averaging effect of multi-frame sequence images under the influence of image offset. Secondly, on the basis of theoretical analysis, an algorithm of image migration feature retrieval for electromagnetic leakage restoration sequence is proposed and validated, which achieves more accurate inter-frame matching, automatic offset calculation, and multi-frame sequence image accumulation and enhances the recognition of the electromagnetic leakage restoration image. Lastly, different algorithms are compared, and their effects are evaluated as well. The changes of current during the process of a working computer will generate electromagnetic leakage emission. If the electromagnetic leakage emission is analyzed, it may be restored to relevant information, resulting in information leakage [1,2,3,4]. A large number of scholars have conducted a series of reduction studies on electromagnetic leakage and recovered useful video information successfully. As the radiation efficiency of video information of computer becomes relatively higher, electromagnetic radiation signals are easier to receive, and video information becomes the most easily intercepted and reproduced red information in a computer system. Video reduction is also evolving towards portability in the current days [5,6,7]. Due to the influence of the electromagnetic environment, equipment noise and other factors during the process of video information receiving and reduction, the introduction noise and fluctuation of video signal line and field signal will inevitably lead to the blur and drift of video information received. Electromagnetic leakage reduction sequence images are accompanied by a lot of noise. Only after the image is processed can the SNR of the image be improved to the maximum extent and the recognition of the image be enhanced. Once the image details are lost, it is impossible to recover them accurately, but it is possible to eliminate or mitigate the visual effects caused by a false contour. Specifically, for electromagnetic leakage reduction sequence images, multi-frame accumulation can effectively improve the image signal-to-noise ratio, but the premise is that the influence of position migration of each frame image should be eliminated by image feature retrieval and matching, and the migration amount can be found respectively. Xiao et al. 
proposed an improved brain CT image point matching algorithm based on the original SIFT algorithm, which combined SIFT and gray scale features. Using Euclidean distance and cosine similarity of gray feature vectors as a similarity measure, the final matching point pairs are obtained [8]. Qu et al. proposed a new image registration algorithm based on SURF feature point extraction and bidirectional matching to solve the problem of low matching accuracy caused by different imaging mechanisms of different source images [9]. Takasu et al. offered an edge detection algorithm that can be applied to image matching [10]. Ding et al. brought up an image matching method based on a gray relational degree and feature point analysis, which has high matching precision and robustness and can eliminate the impact of stretching, rotation, and illumination changes [11]. Konar aims at designing a fuzzy matching algorithm that would automatically recognize an unknown ballet posture [12]. Ma proposes a simple yet surprisingly effective approach, termed as guided locality preserving matching, for robust feature matching of remote sensing images. The key idea is merely to preserve the neighborhood structures of potential true matches between two images [13]. The FAST feature point detection algorithm and the FREAK feature point description algorithm were combined and applied in image matching, to improve the image recognition performance of the image recognition algorithm in the mobile phone [14]. Sun propose a Feature Guided Biased Gaussian Mixture Model (FGBG) for image matching [15]. Aiming at the problems of slow image processing speed and poor real-time capability and accuracy of feature point matching in mobile robot vision-based SLAM, Zhu proposes a novel image matching method based on color feature and improved SURF algorithm [16]. Olson improve upon these using a probabilistic formulation for image matching in terms of maximum-likelihood estimation that can be used for both edge template matching and gray-level image matching [17]. In order to further improve and broaden the accuracy of the image matching algorithm based on spectral features, Bao proposes an image matching algorithm based on the elliptic metric spectral feature [18]. Marc-Michel proposes an innovative approach for registration based on the deterministic prediction of the parameters from both images instead of the optimization of an energy criteria [19]. Kim deals with the problem of boundary image matching which finds similar boundary images regardless of partial noise exploiting time-series matching techniques [20]. Some of these image matching algorithms have complex operations, while some can only process binary images, and some have poor adaptability. Firstly, this paper analyzes the noise of electromagnetic leakage emission and restoration sequence images and the method of multi-frame cumulative averaging, as well as the cumulative averaging effect of multi-frame sequence images under the influence of image offset. Secondly, on the basis of theoretical analysis, an algorithm of image migration feature retrieval for electromagnetic leakage restoration sequence is proposed and validated, which achieves more accurate inter-frame matching, automatic offset calculation, and multi-frame sequence image accumulation and enhances the recognition of electromagnetic leakage restoration image. Lastly, different algorithms are compared, and their effects are evaluated as well. 
Proposed method

Multi-frame cumulative average

A typical setup for computer video electromagnetic leakage emission reduction is shown in Fig. 1. The video restoration device has no physical connection to the target computer. While operating, the target computer generates electromagnetic radiation that propagates through the air. Within a certain distance, the video restoration device can receive this radiation through its antenna and parse the line and field synchronization signals, thereby reproducing the information on the target computer screen. Generally, the restoration device converts the received analog signal into a digital signal, which is displayed on the screen as a video signal consisting of a sequence of frames. Because of the air transmission channel, the weakness of the signal, equipment noise, and other factors, the restored video signal is strongly degraded, and both Gaussian noise and impulse noise are introduced; the noise may be correlated with the signal or independent of it.

Figure 1. Diagram of computer video electromagnetic leakage emission reduction

Gaussian noise accounts for the largest proportion of the noise in a typical electromagnetic leakage restoration sequence image. In the time domain, Gaussian noise is uncorrelated from one sample to the next and has zero mean. If the Gaussian noise can be suppressed effectively, the signal-to-noise ratio of the restored image can be improved. Since the collected images show a static screen, the restored images are, over a period of time, essentially periodic repetitions of the same content: the signal is stable and strongly correlated from frame to frame, whereas the noise of each frame is random. Although a single frame may be very noisy, averaging the related frames can therefore greatly improve the SNR of the image. The principle is as follows. Assume that g(x, y) is the noisy image, n(x, y) is the noise, and f(x, y) is the original image:

$$ g\left(x,y\right)=f\left(x,y\right)+n\left(x,y\right) $$

Take M frames with the same content but different noise, superimpose them, and average:

$$ \overline{g}\left(x,y\right)=\frac{1}{M}\sum \limits_{j=1}^M{g}_j\left(x,y\right) $$

In the ideal case it follows that

$$ E\left\{\overline{g}\left(x,y\right)\right\}=f\left(x,y\right) $$

$$ {\sigma}_{\overline{g}}^2\left(x,y\right)=\frac{1}{M}{\sigma}_n^2\left(x,y\right) $$

where \( E\left\{\overline{g}\left(x,y\right)\right\} \) is the expectation of \( \overline{g}\left(x,y\right) \), and \( {\sigma}_{\overline{g}}^2\left(x,y\right) \) and \( {\sigma}_n^2\left(x,y\right) \) are the variances of \( \overline{g}\left(x,y\right) \) and n(x, y) at the coordinates (x, y). The standard deviation of any point in the averaged image is therefore

$$ {\sigma}_{\overline{g}}\left(x,y\right)=\frac{1}{\sqrt{M}}{\sigma}_n\left(x,y\right) $$

As these two formulas show, the variance of the pixel values decreases as M increases, i.e., the deviation of the pixel gray values caused by noise is reduced by averaging.
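To make the effect of the averaging formulas above concrete, the short NumPy sketch below (not part of the original paper; the image and noise level are made up for illustration) averages M noisy copies of the same frame and checks that the residual noise standard deviation drops by roughly a factor of the square root of M.

```python
import numpy as np

rng = np.random.default_rng(0)

f = np.zeros((64, 64))                # noise-free "screen" image f(x, y)
f[20:44, 20:44] = 1.0                 # a bright block standing in for text

sigma_n = 0.5                         # per-frame Gaussian noise level
M = 10                                # number of accumulated frames

# each observed frame is the clean image plus independent Gaussian noise
frames = [f + rng.normal(0.0, sigma_n, f.shape) for _ in range(M)]

# multi-frame cumulative average
g_bar = np.mean(frames, axis=0)

print("single-frame noise std:", np.std(frames[0] - f))
print("averaged noise std    :", np.std(g_bar - f))
print("sigma_n / sqrt(M)     :", sigma_n / np.sqrt(M))
```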
It can be seen from formula 6 that as the number of averaged noisy images increases, their statistical average approaches the original noise-free image. According to the definition of the SNR:

$$ {\left(\frac{S}{N}\right)}_p=\frac{S}{\sigma_{\overline{g}}\left(x,y\right)}=\frac{S}{\frac{1}{\sqrt{M}}{\sigma}_n\left(x,y\right)}=\sqrt{M}\frac{S}{\sigma_n\left(x,y\right)}=\sqrt{M}{\left(\frac{S}{N}\right)}_d $$

where \( {\left(\frac{S}{N}\right)}_p \) is the SNR after linear accumulation and averaging of multiple images and \( {\left(\frac{S}{N}\right)}_d \) is the SNR of a single frame. Hence, averaging M frames improves the signal-to-noise ratio by a factor of \( \sqrt{M} \). In theory, increasing the number of averaged images reduces the gray-value deviation caused by noise and raises the SNR.

Multi-frame cumulative average application

The above is an idealized analysis; in practice, more images are not always better. The synchronization accuracy of the restoration device limits how precisely the line and frame synchronization of the received images can be captured, so when the number of accumulated frames exceeds a certain value, the residual offsets between frames cause the averaged image to develop noticeable edge blur, which degrades the resolution of image detail. In this paper, a 30-frame electromagnetic leakage restoration sequence is used as the sample for multi-frame cumulative averaging. Figure 2 shows the noisy first frame, Fig. 3 shows the cumulative average of 10 consecutive frames, and Fig. 4 shows the cumulative average of 30 consecutive frames.

Figure 2. First frame of the original image

Figure 3. 10-frame cumulative average image

Figure 4. Cumulative average of 30 frames

The 10-frame cumulative average clearly suppresses the noise, and the target text is relatively legible. The 30-frame cumulative average also suppresses the noise, but the image becomes blurrier than the 10-frame average. This indicates that, to improve the image quality further, the offsets between frames must be corrected before accumulation: the migration between adjacent images has to be retrieved by a suitable algorithm, the offset determined, and its effect removed when the frames are accumulated.

Image migration feature retrieval algorithm

The core of image migration feature retrieval is to find the offsets in the x and y directions; the key is therefore to determine the offset accurately with a suitable matching algorithm. Ideally, when all pixels of two images have the same gray values, the two images can be considered perfectly matched; with noise present this is never exactly the case, and the matching behavior at different SNR levels must also be considered. By matching two images of the sequence, the relative displacement between them can be located.
In other words, a specific region of the first image is selected as the reference image; the reference image is then slid over the second image, and among all obtainable sub-images the most similar one is taken as the final match. The basic principle of image feature retrieval is thus to find, by a correlation-type calculation, the position of the reference image within the searched image. The process is illustrated in Fig. 5.

Figure 5. Image feature retrieval

Here R is the reference image, I is the searched image, H is the height of I, W is the width of I, R_{0,0} denotes the reference image placed at the coordinates (0, 0) of I, and R_{r,s} denotes the reference image R shifted to the coordinates (r, s) of I. Template matching in a gray image mainly consists of finding the position at which the template image R is identical or most similar to a sub-image of the searched image I. The following formula expresses the reference image R shifted by r and s in the horizontal and vertical directions of the searched image I:

$$ {R}_{r,s}\left(u,v\right)=R\left(u-r,v-s\right) $$

The most important ingredient of template matching is the similarity measure. To quantify the similarity between images, we compute a "distance" D(r, s) between the reference image after each shift (r, s) and the corresponding sub-image of the searched image, as illustrated in Fig. 6.

Figure 6. Diagram of image measurement function

Assume that the reference image R is placed on the searched image I and translated; the block of the searched image covered by the translated reference image is called the sub-image I^{r,s}, where r and s are the offsets. As can be seen from Fig. 6, the offset (r, s) equals the coordinates, on I, of the upper-left pixel of the sub-image. M and N are the width and height of the reference image. The measure function D(r, s) quantifies the degree of similarity between R and the covered sub-image, and can take the following forms; the smaller D(r, s) is, the higher the degree of similarity.

(1) Sum of absolute differences (SAD)

The measure function of the SAD algorithm is

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N\left|{I}^{r,s}\left(m,n\right)-R\left(m,n\right)\right| $$

where D(r, s) is the similarity value, i.e., the sum of the absolute differences of the gray values between the search sub-image and the reference image; I^{r,s}(m, n) is the gray value at (m, n) of the sub-image after the offset (r, s); and R(m, n) is the gray value at (m, n) of the reference image. Given that the width and height of the searched image I are W and H, the sub-image must have the same size as the reference image, so the values of r and s during the traversal are restricted: the search range is limited to 1 ≤ r ≤ W − M, 1 ≤ s ≤ H − N. The smaller D(r, s) is, the more similar the images are, and the matching position is determined by finding the smallest D(r, s) over the whole image. M and N refer, respectively, to the width and height of the reference image R and of the sub-images it covers in the searched image I.
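As an illustration only (this is not the authors' implementation), the following NumPy sketch performs the exhaustive SAD search of formula 9: the reference image R is slid over every admissible offset (r, s) of the searched image I, and the offset with the smallest sum of absolute differences is returned.

```python
import numpy as np

def sad_match(I, R):
    """Return the offset (r, s) minimizing the SAD between R and the covered sub-image."""
    H, W = I.shape            # searched image: H rows (height), W columns (width)
    N, M = R.shape            # reference image: N rows (height), M columns (width)
    best_d, best_rs = np.inf, (0, 0)
    for s in range(H - N + 1):              # vertical offset
        for r in range(W - M + 1):          # horizontal offset
            sub = I[s:s + N, r:r + M]       # sub-image covered by the shifted template
            d = np.abs(sub - R).sum()       # D(r, s) of formula 9
            if d < best_d:
                best_d, best_rs = d, (r, s)
    return best_rs, best_d

# quick check: a template cut out of I is found at its own position
I = np.random.default_rng(1).random((100, 120))
R = I[40:60, 30:80].copy()
print(sad_match(I, R))        # -> ((30, 40), 0.0)
```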
(2) Sum of squared differences (SSD)

The measure function of the SSD algorithm is

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[{I}^{r,s}\left(m,n\right)-R\left(m,n\right)\right]}^2 $$

Expanding this expression gives

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[{I}^{r,s}\left(m,n\right)\right]}^2-2\sum \limits_{m=1}^M\sum \limits_{n=1}^N{I}^{r,s}\left(m,n\right)R\left(m,n\right)+\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[R\left(m,n\right)\right]}^2 $$

The third term on the right is a constant, independent of the matching offset, and can be ignored when minimizing the distance. The first term is the energy of the sub-image covered by the template, which varies slowly from place to place. The second term is the cross-correlation between the sub-image and the template, which changes with the position of the retrieval; when the template and the sub-image match, this term reaches its maximum. Therefore, the following normalized correlation function can be used as the similarity measure:

$$ C\left(r,s\right)=\frac{\sum \limits_{m=1}^M\sum \limits_{n=1}^N{I}^{r,s}\left(m,n\right)R\left(m,n\right)}{\sqrt{\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[{I}^{r,s}\left(m,n\right)\right]}^2}\;\sqrt{\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[R\left(m,n\right)\right]}^2}} $$

When the gray values of both the reference image and the sub-image of the searched image are positive, C(r, s) always lies in the range [0, 1] and is independent of the gray values of the other pixels of the image. C(r, s) = 1 indicates that, at the offset (r, s), the reference image and the sub-image are maximally similar; C(r, s) = 0 indicates that they do not match at all. The normalized cross-correlation C(r, s) also changes dramatically when all the gray values in the sub-image change. By computing C(r, s) over all offsets and locating its maximum, the corresponding sub-image, i.e., the matching target, is found.

The influence of the image signal-to-noise ratio on migration feature retrieval

The SNR affects image feature retrieval. For a given template image, several factors influence the retrieval; for example, if the gray distribution of the searched image is very uniform, with many pixels sharing the same gray level, matching becomes harder, whereas the more detail the template contains, the better the registration can be. Ideally, the distance at the correct match would be zero; in practice, feature retrieval is not ideal. The retrieval looks for the target within a particular frame, and the noise in that frame affects the result.
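A minimal sketch of the normalized correlation measure of formula 12 is given below (again an assumed implementation, not taken from the paper); it mirrors the SAD routine above but maximizes C(r, s) instead of minimizing D(r, s).

```python
import numpy as np

def ncc_match(I, R):
    """Return the offset (r, s) maximizing the normalized correlation C(r, s)."""
    H, W = I.shape
    N, M = R.shape
    r_energy = np.sqrt((R ** 2).sum())          # template energy, constant over offsets
    best_c, best_rs = -1.0, (0, 0)
    for s in range(H - N + 1):
        for r in range(W - M + 1):
            sub = I[s:s + N, r:r + M]
            denom = np.sqrt((sub ** 2).sum()) * r_energy
            c = (sub * R).sum() / (denom + 1e-12)   # C(r, s) of formula 12
            if c > best_c:
                best_c, best_rs = c, (r, s)
    return best_rs, best_c
```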
If the signal is f(x, y) and the noise is n(x, y), the noisy image is given by formula 1, and formula 9 becomes

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N\left|{f}^{r,s}\left(m,n\right)+{n}^{r,s}\left(m,n\right)-{f}^{R}\left(m,n\right)-{n}^{R}\left(m,n\right)\right| $$

Formula 10 becomes

$$ D\left(r,s\right)=\sum \limits_{m=1}^M\sum \limits_{n=1}^N{\left[{f}^{r,s}\left(m,n\right)+{n}^{r,s}\left(m,n\right)-{f}^{R}\left(m,n\right)-{n}^{R}\left(m,n\right)\right]}^2 $$

As these formulas show, when the SNR of the sequence images is low, the contribution of the signal is small compared with that of the noise and may even be negligible. Image feature retrieval is then driven mainly by the noise, which is a combination of several noise sources and is highly random. In that case the minimum of D(r, s) is not necessarily the true matching position, and errors occur easily.

Image migration feature retrieval steps

Image migration feature retrieval estimates the current target location by taking a reference template from the previous image and finding the most similar region in the current image. Since the received text image is static, the useful content of two adjacent frames changes little while the random noise differs. The retrieval steps are as follows (a code sketch of this loop is given further below): (1) read two adjacent frames, Image 1 and Image 2; (2) select a template R in Image 1 as the reference image for the traversal search; (3) take Image 2 as the searched image I, evaluate formula 9 or formula 12 to obtain the matching position, and from the position of the template R calculate the offset between the two frames; (4) superimpose Image 1 and the offset-corrected Image 2 and average them to obtain a new image; (5) take the new, superimposed image as Image 1, select the template R again, and continue matching and superimposing with the next frame. Repeated calibration is simply a repetition of this process: each time, the image obtained from the previous averaging step is matched against the next frame, corrected, and averaged again. In the actual implementation, the template, the search region, and the algorithm can be chosen according to the characteristics of the sequence images.

Experimental results

Image matching based on the image migration feature algorithm estimates the current target location by taking a reference template from the previous image and finding the most similar region in the current image; it performs well on complex backgrounds and at high signal-to-noise ratios. For the image processing in this system, the multi-frame averaging method removes noise effectively, but accumulated offset errors between frames blur the image edges and reduce readability, so the offset of each frame must be minimized. For the received sequence images, the image migration feature algorithm is used to compute the per-frame offset, which reduces the accumulated error. Several factors influence the matching process. First of all, the number of frames is relatively large; experiments show that the multi-frame accumulation method should use more than 10 frames.
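The sketch below is an illustrative implementation of the retrieval steps listed above (it is not the authors' code; it assumes that `frames` is a list of equally sized NumPy arrays and that `sad_match` is the routine shown earlier): match the template against the next frame, shift that frame back by the measured offset, and fold it into the running average.

```python
import numpy as np

def align_and_accumulate(frames, tpl_y, tpl_x, tpl_h, tpl_w):
    """Offset-corrected multi-frame cumulative average (steps 1-5 above)."""
    acc = frames[0].astype(float)
    R = acc[tpl_y:tpl_y + tpl_h, tpl_x:tpl_x + tpl_w]        # template in Image 1
    for k, frame in enumerate(frames[1:], start=2):
        (r, s), _ = sad_match(frame, R)                      # match position in Image 2
        dx, dy = r - tpl_x, s - tpl_y                        # offset of this frame
        # np.roll is used as a simple stand-in for a proper shift correction
        corrected = np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
        acc = (acc * (k - 1) + corrected) / k                # running cumulative average
        R = acc[tpl_y:tpl_y + tpl_h, tpl_x:tpl_x + tpl_w]    # refresh the template
    return acc
```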
We use 100 frames (in theory, the more frames are accumulated, the better the result should become, but the inter-frame migration and the limited registration accuracy mean that accumulating too many frames actually degrades the result). With 100 images selected for the experiment, matching every single frame is impractical in terms of computation time; moreover, the image size of 1024 × 768 is large and increases the cost of each match. Considering the complexity of the image migration feature algorithm, the following choices are made.

Select an appropriate template R

The template size strongly affects the matching speed. Under the same conditions, a smaller template makes matching faster; a larger template responds more slowly, but it contains more detail and yields a more accurate registration. Taking both aspects into account, the template size can be chosen according to the situation and can be changed in the actual application. In the test below we selected a template of size 412 × 78.

Select the search object

Electromagnetic leakage restoration images are strongly correlated. Since the image to be restored is a displayed text image, it generally remains on the screen for a period of time, and the position of a given target changes only slowly from frame to frame. Synchronization signal drift is the cause of the image offset, and over a short period the per-frame offset is approximately constant, so it can be treated as a constant; this reduces the number of matches required, replacing them with an average value. Based on this property of the sequence, only the last frame needs to be searched. Taking the accumulation of 100 frames as an example (a sketch of this shortcut is given after the list of methods below): first choose a template in the first frame, namely the 412 × 78 region starting at the coordinates (344, 616); then search the 100th frame for this template, find the matching point, and from its location compute the average offset per frame; the remaining frames can then be accumulated using this per-frame offset. If the synchronization is controlled by software, the offset along the y axis can be excluded; the migration retrieval then only has to be computed in the x direction, with the template sliding along a single line, which saves a great deal of time. Alternatively, the retrieval steps can be followed to match frame by frame, which gives a more accurate result.

Based on the above choices, and considering the computation speed and the data volume, four methods are used in practice for image migration feature retrieval:

Method 1: correlation matching, as in formula 12.

Method 2: absolute-value (SAD) matching, as in formula 9.

Method 3: the same as method 1, but the region to be matched is limited; the matching rectangle is set according to the features of the restored image to improve the operation speed.

Method 4: the same as method 2, with the matched region restricted in the same way to improve the operation speed.
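As a sketch of the "match once, average the drift" shortcut just described (the variable names are hypothetical, and `frames` and `sad_match` are assumed from the earlier sketches), the template taken from the first frame is located only in the last frame, and the total displacement is spread evenly over the intervening frames:

```python
# template of size 412 x 78 starting at (x, y) = (344, 616) in the first frame
first, last = frames[0], frames[-1]
R = first[616:616 + 78, 344:344 + 412]

(r, s), _ = sad_match(last, R)
dx_total, dy_total = r - 344, s - 616          # total drift over the whole sequence
dx_per_frame = dx_total / (len(frames) - 1)    # e.g. 37 / 99, about one unit per 3 frames
dy_per_frame = dy_total / (len(frames) - 1)
```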
The specific settings of methods 3 and 4 are as follows. First, the position and size of the template in frame 0 are determined: the template starts at (x, y) and has width TWidth and height THeight. The matching rectangle then starts at (0, y − YShift), its width is the width of the sequence image, and its height is THeight + 2·YShift. The parameter YShift represents the maximum expected offset of the sequence images and can be adjusted; in the best case YShift is 0.

In Fig. 7, (a) and (b) are the images of frame 0 and frame 99 collected in the experiment; the original images are 1024 pixels wide and 768 pixels high. A region of frame 0 starting at (344, 616) with a size of 412 × 78 is selected. Figure 8 shows the matching template taken from frame 0, and in Fig. 9 (zoomed out) the area enclosed by the dotted lines is the selected region of frame 0.

Figure 8. Matching template image of frame 0

Figure 9. Selected area of the image at frame 0

After selecting the region, Fig. 10 shows the matching result: the matching point is at (381, 615), and the region enclosed by the white lines is the matched target. The matching result shows that the template in frame 99 is offset from its original position in frame 0 in both the vertical and the horizontal coordinates: the image of frame 99 is shifted down by 1 coordinate unit and to the right by 37 coordinate units relative to frame 0, which is approximately one coordinate unit for every three frames. According to this migration rule, the sequence images can be corrected. The results show that, at a high signal-to-noise ratio, all four methods find the correct offset, which lays a good foundation for the multi-frame cumulative average; the correct offset is also found when the reference region in frame 0 is changed at random.

In practical applications, restored images with different signal-to-noise ratios can be obtained by adjusting the receiving distance and direction. Figure 11 shows the images of frames 0 and 99 after such an adjustment. With the four methods above, an accurate offset is obtained only when a region with obvious features is chosen as the reference image, while a certain error arises when a featureless region is taken as the reference image. The experiments show that the matching accuracy decreases when the SNR is low and the image features are blurred.

The SSD-type migration feature retrieval requires a large amount of computation: a full-image search requires one correlation evaluation at each of (W − M + 1) × (H − N + 1) reference positions, and each evaluation involves about 3 × M × N additions, 3 × M × N multiplications, two square-root operations, and one division. Because multiplication and division are slower than addition and subtraction, SSD-based retrieval is comparatively slow. In contrast, the SAD-based retrieval avoids the multiplications; only subtractions and additions are needed, so the operation speed is greatly improved.
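The following sketch (an assumed helper, not the authors' code) shows how the limited-range matching of methods 3 and 4 can be realized: instead of scanning the whole image, only a horizontal band of height THeight + 2·YShift around the template row is searched, and the result is mapped back to full-image coordinates. It reuses the `sad_match` routine from the earlier sketch.

```python
def banded_match(I, R, tpl_y, y_shift):
    """Search only the band I[tpl_y - y_shift : tpl_y + THeight + y_shift, :]."""
    t_height = R.shape[0]                        # THeight
    top = max(0, tpl_y - y_shift)
    band = I[top:tpl_y + t_height + y_shift, :]  # restricted matching rectangle
    (r, s), d = sad_match(band, R)
    return (r, top + s), d                       # offset in full-image coordinates
```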
However, the experiments show that the matching accuracy of the SSD-based migration feature retrieval is higher than that of the SAD-based retrieval. In this paper, the inter-frame matching of the electromagnetic leakage restoration sequence images is carried out with four variants: full-image SSD retrieval, full-image SAD retrieval, limited-range SSD retrieval, and limited-range SAD retrieval. Their evaluation is given in Table 1.

Table 1. Effect evaluation of the four image matching methods

As Table 1 shows, the correlation method takes longer but is highly accurate, whereas the absolute-value method is fast but less accurate. Given the limited range over which the electromagnetic leakage restoration sequence images move, the limited-range variants are chosen to increase the matching speed.

In this paper, the principle of video electromagnetic leakage emission restoration is briefly introduced, and the multi-frame cumulative averaging method is applied based on the noise characteristics of the sequence images. The results show that image shift is an important cause of blurring after multi-frame accumulation. To correct the migration, the characteristics of the sequence images were analyzed, the image migration feature retrieval algorithm and its implementation steps were proposed, and the correlation between sequence images was used to verify the effectiveness of the algorithm. Experiments and comparisons show that, at a high signal-to-noise ratio, the algorithm locates the image migration accurately, solves the inter-frame migration problem of the multi-frame cumulative average, and can be applied to automatic migration correction and multi-frame accumulation. The results also show that the SNR of the image obtained by superposing the frames processed by the migration feature retrieval algorithm is improved, and the legibility of the restored video image is enhanced effectively. To evaluate the algorithm further, four retrieval variants were compared with respect to matching quality and speed; limiting the search range greatly improves the matching speed without reducing the matching accuracy.

Some limitations remain. For example, when the two frames have a low signal-to-noise ratio and are flat, without obvious feature details, a suitable reference image cannot be selected accurately and the matching contains errors. In recent years image matching technology has made great progress and has been applied in several fields, but its use in electromagnetic information security is still in its infancy and needs further development.
The next step, for low-SNR conditions, is to analyze the characteristics of the reference images and to combine fuzzy matching, feature point matching, template matching, and edge detection with results from time-series matching research, so as to put forward a new retrieval method suited to the shift characteristics of electromagnetic leakage restoration sequence images and to promote the application of image processing in electromagnetic information security.

[1] Z. Qian et al., Analysis and reconstruction of conduction leakage signal of computer video cable based on the spatial correlation filtering method. Chinese Journal of Radio Science 32(3), 331–337 (2017)
[2] C. Ulaş, U. Aşık, C. Karadeniz, Analysis and reconstruction of laser printer information leakages in the media of electromagnetic radiation, power, and signal lines. Computers & Security 58(2), 250–267 (2016)
[3] J. F. Ding et al., New threat analysis of electromagnetic information leakage in electronic equipment based on active detection. Communications Technology (2018)
[4] Y. Gong et al., An analytical model for electromagnetic leakage from double cascaded enclosures based on Bethe's small aperture coupling theory and mirror procedure. Transactions of China Electrotechnical Society (2018)
[5] H. S. Lee, J. G. Yook, K. Sim, An information recovery technique from radiated electromagnetic fields from display devices, in 2016 Asia-Pacific International Symposium on Electromagnetic Compatibility (IEEE, Piscataway, 2016), pp. 473–475
[6] S. Wang, Y. Qiu, J. Tian, et al., Countermeasure for electromagnetic information leakage of digital video cable, in 2016 Asia-Pacific International Symposium on Electromagnetic Compatibility (IEEE, Piscataway, 2016), pp. 44–46
[7] I. Frieslaar, B. Irwin, Investigating the electromagnetic side channel leakage from a Raspberry Pi, in Information Security for South Africa (IEEE, 2018)
[8] H. Z. Xiao, L. F. Yu, Z. Qin, H. G. Ren, Z. W. Geng, A point matching algorithm for brain CT images based on SIFT and gray feature, in 2016 IEEE 13th International Conference on Signal Processing (ICSP) (2016), pp. 6–10
[9] X. J. Qu, Y. Sun, Y. Gu, S. Yu, L. W. Gao, A high-precision registration algorithm for heterologous images based on effective sub-graph extraction and feature points bidirectional matching, in 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD) (2016), pp. 13–15
[10] T. Takasu, Y. Kumagai, G. Ohashi, Object extraction using an edge-based feature for query-by-sketch image retrieval. IEICE Transactions on Information & Systems E98-D(1), 214–217 (2015)
[11] Z. S. Ding, S. Qian, Y. L. Li, Z. H. Li, An image matching method based on the analysis of grey correlation degree and feature points, in NAECON 2014, IEEE National Aerospace and Electronics Conference (2014), pp. 24–27
[12] A. Konar, S. Saha, Fuzzy image matching based posture recognition in ballet dance, in IEEE International Conference on Fuzzy Systems (IEEE, 2018), pp. 1–8
[13] J. Ma et al., Guided locality preserving feature matching for remote sensing image registration. IEEE Transactions on Geoscience & Remote Sensing 56(8), 4435–4447 (2018)
[14] S. Li, R. Shi, The comparison of two image matching algorithms based on real-time image acquisition. Packaging Engineering (2016)
[15] K. Sun et al., Feature guided biased Gaussian mixture model for image matching. Information Sciences 295(C), 323–336 (2015)
[16] Q. Zhu et al., Investigation on the image matching algorithm based on global and local feature fusion. Chinese Journal of Scientific Instrument (2016)
[17] C. F. Olson, Maximum-likelihood image matching. IEEE Transactions on Pattern Analysis & Machine Intelligence 24(6), 853–857 (2016)
[18] W. Bao et al., Image matching algorithm based on elliptic metric spectral feature. Journal of Southeast University (2018)
[19] M.-M. Rohé et al., SVF-Net: learning deformable image registration using shape matching (2017), pp. 266–274
[20] B. S. Kim, Y. S. Moon, J. G. Lee, Boundary Image Matching Supporting Partial Denoising Using Time-Series Matching Techniques (Kluwer Academic Publishers, 2017)

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions. Please contact the author for data requests.

Chunwei Miao: School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
Jianlin Hu: Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China

All authors took part in the discussion of the work described in this paper. All authors read and approved the final manuscript. Correspondence to Chunwei Miao.

Miao, C., Hu, J. Research on the algorithm of electromagnetic leakage reduction and sequence of image migration feature retrieval. J Image Video Proc. 2019, 47 (2019). doi:10.1186/s13640-019-0430-y

Keywords: electromagnetic leakage, emission image accumulation, image migration, feature retrieval, inter-frame matching
A class of fourth-order hyperbolic equations with strongly damped and nonlinear logarithmic terms

Yi Cheng and Ying Chu, School of Science, Changchun University of Science and Technology, Changchun 130022, China
* Corresponding author: Ying Chu

Electronic Research Archive, December 2021, 29(6): 3867-3887. doi: 10.3934/era.2021066
Received June 2021; Revised July 2021; Published December 2021; Early access September 2021

Fund Project: The second author is supported by the fund of the "Thirteen Five" Scientific and Technological Research Planning Project of the Department of Education of Jilin Province in China [number JJKH20190547KJ and JJKH20200727KJ]

In this paper, we study a class of hyperbolic equations of the fourth order with strong damping and logarithmic source terms. Firstly, we prove the local existence of the weak solution by using the contraction mapping principle. Secondly, in the potential well framework, the global existence of weak solutions and the energy decay estimate are obtained. Finally, we give the blow up result of the solution at a finite time under the subcritical initial energy.

Keywords: Fourth-order hyperbolic equations, strong damping, global existence, energy decay estimate, blow up.
Mathematics Subject Classification: Primary: 35A01, 35L75; Secondary: 35B40, 35B44.

Citation: Yi Cheng, Ying Chu. A class of fourth-order hyperbolic equations with strongly damped and nonlinear logarithmic terms. Electronic Research Archive, 2021, 29 (6): 3867-3887. doi: 10.3934/era.2021066
On the Accuracy of the Johnson-Cook Constitutive Model for Metals

ZHOU Lin, WANG Zihao, WEN Heming
CAS Key Laboratory for Mechanical Behavior and Design of Materials, University of Science and Technology of China, Hefei 230027, China
Author Bio: ZHOU Lin (1988-), female, doctoral student, major in impact dynamics. E-mail: [email protected]
Corresponding author: WEN Heming, [email protected]

Abstract: A critical assessment is made herein on the accuracy of the Johnson-Cook (JC) constitutive model by comparing the model predictions with the test data for 2024-T351 aluminum alloy, 6061-T6 aluminum alloy, OFHC copper, 4340 steel, Ti-6Al-4V alloys and Q235 mild steel. These materials are selected because their test data are more complete in terms of true stress-true strain relationships, strain rate effects, temperature effects and failure. To further assess its accuracy numerical results for the ballistic perforation of plates made of 2024-T351 aluminum alloy using the JC constitutive model are also presented and compared with corresponding test data. It transpires that the JC constitutive model is applicable to Mises materials at quasi-static to intermediate strain rates and low to moderate temperature. It also transpires that for non-Mises materials the agreement between the model predictions and the test results are poor in terms of shear stress-shear strain curve and fracture strain. Furthermore, the accuracy of the JC model decreases with increasing strain rate, temperature and, above all, it fails to produce consistent results at high strain rates when the experimentally obtained dynamic increase factors (DIF) are employed in the calculations implying the form of the model's equation (namely, quasi-static stress-strain curve multiplied by DIF) may be inadequate at least for the scenarios where high strain rates are involved.

Keywords: Johnson-Cook constitutive model / metal / stress-strain curve / strain rate effect / temperature effect / fracture criterion

Figure 1. Comparison of the JC model with the true stress-true strain curves obtained from tension and torsion tests
Figure 2. Comparison of the JC model with the test data for 2024-T351 aluminum alloy
Figure 3. Comparison of the JC model with the test data for 6061-T6 aluminum alloy
Figure 4. Comparison of the JC model with the test data for OFHC copper
Figure 5. Comparison of the JC model with the test data for 4340 steel
Figure 6. Comparison of the JC model with the test data for Ti-6Al-4V alloys
Figure 7. Comparison of the JC model with the test data for Q235 mild steel
Figure 8. Comparison of DIF vs. strain rate at different plastic strains at room temperature
Figure 9. Comparison of the JC model predictions with the tensile test data for 2024-T351 aluminum alloy
Figure 10. Comparison of the JC model predictions with the tensile test data for 6061-T6 aluminum alloy
Figure 11. Comparison of the JC model predictions with the compression test data for OFHC copper
Figure 12. Comparison of the JC model predictions with the tensile test data for 4340 steel
Figure 13. Comparison of the JC model predictions with the compression test data for Ti-6Al-4V
Figure 14. Comparison of the JC model predictions with the compression test data for Q235 mild steel
Figure 15. Dependence of the equivalent strain to fracture on the stress triaxiality on some metal
Figure 16. Finite element model used in the numerical simulations
Figure 17. Comparison of the JC constitutive model with the test data for 2024-T351 aluminum alloy
Figure 18. Comparison of the numerically predicted residual velocities with the test results for the perforation of the 4 mm-thick 2024-T351 aluminum alloy plates struck normally by the 5.5 mm-diameter flat-ended projectile[21]

Table 1. Values of constants in the Johnson-Cook constitutive model and Johnson-Cook fracture criterion

| Materials | A/MPa | B/MPa | n | C | m | $\dot\varepsilon_0$/s⁻¹ | Tm/K | D1 | D2 | D3 |
|---|---|---|---|---|---|---|---|---|---|---|
| 2024-T351 Al[3–4] | 340 | 510 | 0.510 | 0.002 | 1.890 | 9.0×10⁻⁵ | 775 | –0.070 | 1.020 | –1.620 |
| 6061-T6 Al[5–8] | 265 | 170 | 0.314 | 0.007 | 1.316 | 1.0×10⁻³ | 855 | –0.070 | 0.810 | –1.240 |
| OFHC copper[1, 9–13] | 50 | 340 | 0.425 | 0.011 | 0.883 | 1.0×10⁻⁵ | 1356 | 0.540 | 4.890 | –3.030 |
| 4340 steel[1–2] | 792 | 846 | 0.582 | 0.009 | 1.030 | 2.0×10⁻³ | 1793 | 0.050 | 3.440 | –2.120 |
| Ti-6Al-4V alloy[14–17] | 938 | 947 | 0.636 | 0.013 | 0.779 | 1.0×10⁻⁵ | 1933 | 0.200 | 3.590 | –3.800 |
| Q235 mild steel[18–20] | 293 | 543 | 0.489 | 0.045 | 0.942 | 2.1×10⁻³ | 1795 | 0.070 | 6.116 | –3.445 |

Table 2. Values of various parameters for 2024-T351 aluminum alloy

| $\rho$/(kg·m⁻³) | E/GPa | v | $\chi$ | Cp/(J·kg⁻¹·K⁻¹) | C0/(m·s⁻¹) | s1 | $\Gamma_0$ |
|---|---|---|---|---|---|---|---|
| 2700 | 72 | 0.3 | 0.9 | 875 | 5328 | 1.338 | 2 |

| JC Model | A/MPa | B/MPa | n | C | m | $\dot\varepsilon_0$/s⁻¹ | Tm/K |
|---|---|---|---|---|---|---|---|
| This paper | 340 | 510 | 0.510 | 0.002 | 1.890 | 9.0×10⁻⁵ | 775 |
| Ref.[25] | 352 | 440 | 0.42 | 0.0083 | 1.7 | 3.3×10⁻⁴ | 775 |

| JC Model | D1 | D2 | D3 | D4 | D5 |
|---|---|---|---|---|---|
| This paper | –0.070 | 1.020 | –1.620 | 0.011 | 0 |
| Ref.[25] | 0.13 | 0.13 | –1.5 | 0.011 | 0 |

[1] JOHNSON G R, COOK W H. A constitutive model and data for metals subjected to large strains, high strain rates and high temperatures [C]//Proceedings of the 7th International Symposium on Ballistics, 1983, 21: 541–547.
[2] JOHNSON G R, COOK W H. Fracture characteristics of three metals subjected to various strains, strain rates, temperatures and pressures [J]. Engineering Fracture Mechanics, 1985, 21(1): 31–48. doi: 10.1016/0013-7944(85)90052-9
[3] SEIDT J D, GILAT A. Plastic deformation of 2024-T351 aluminum plate over a wide range of loading conditions [J]. International Journal of Solids and Structures, 2013, 50(10): 1781–1790. doi: 10.1016/j.ijsolstr.2013.02.006
[4] WIERZBICKI T, BAO Y, LEE Y W. Calibration and evaluation of seven fracture models [J]. International Journal of Mechanical Sciences, 2005, 47(4): 719–743.
[5] WILKINS M L, STREIT R D, REAUGH J E. Cumulative-strain-damage model of ductile fracture: simulation and prediction of engineering fracture tests: UCRL-53058 [R]. Livermore: Lawrence Livermore National Laboratories, 1980.
[6] SCAPINA M, MANES A. Behaviour of Al6061-T6 alloy at different temperatures and strain-rates: experimental characterization and material modeling [J]. Materials Science and Engineering A, 2018, 734: 318–328. doi: 10.1016/j.msea.2018.08.011
[7] LESUER D R, KAY G J, LEBLANC M M. Modeling large-strain, high-rate deformation in metals: UCRL-JC-134118 [R]. Livermore: Lawrence Livermore National Laboratory, 2001.
[8] GILIOLI A, MANES A, GIGLIO M, et al.
Predicting ballistic impact failure of aluminum 6061-T6 with the rate-independent Bao-Wierzbicki fracture model [J]. International Journal of Impact Engineering, 2015, 76(1): 207–220. [9] BAIG M, KHAN A S, CHOI S H, et al. Shear and multiaxial responses of oxygen free high conductivity (OFHC) copper over wide range of strain-rates and temperatures and constitutive modeling [J]. International Journal of Plasticity, 2013, 40(1): 65–80. [10] NEMAT-NASSER S, LI Y. Flow stress of FCC polycrystals with application to OFHC Cu [J]. Acta Materialia, 1998, 46: 565–577. doi: 10.1016/S1359-6454(97)00230-9 [11] GUO W G. Flow stress and constitutive model of OFHC Cu for large deformation, different temperatures and different strain rates [J]. Explosion and Shock Waves, 2005, 25(3): 244–250. doi: 10.3321/j.issn:1001-1455.2005.03.009 [12] ANAND L, KALIDINDI S R. The process of shear band formation in plane strain compression of fcc metals: effects of crystallographic texture [J]. Mechanics of Materials, 1994, 17(2): 223–243. [13] FOLLANSBEE P S, KOCKS U F. A constitutive description of the deformation of copper based on the use of the mechanical threshold stress as an internal state variable [J]. Acta Metallurgica, 1988, 36(1): 81–93. doi: 10.1016/0001-6160(88)90030-2 [14] MIRONE G, BARBAGALLO R, CORALLO D. A new yield criteria including the effect of lode angle and stress triaxiality [J]. Procedia Structural Integrity, 2016, 2: 3684–3696. doi: 10.1016/j.prostr.2016.06.458 [15] KHAN A S, SUH Y S, KAZMI R. Quasi-static and dynamic loading responses and constitutive modeling of titanium alloys [J]. International Journal of Plasticity, 2004, 20(12): 2233–2248. doi: 10.1016/j.ijplas.2003.06.005 [16] NEMAT-NASSER S, GUO W G, NESTERENKO V F, et al. Dynamic response of conventional and hot isostatically pressed Ti-6Al-4V alloys: experiments and modeling [J]. Mechanics of Materials, 2001, 33(8): 425–439. doi: 10.1016/S0167-6636(01)00063-1 [17] GIGLIO M, MANES A, VIGANÒ F. Ductile fracture locus of Ti-6Al-4V titanium alloy [J]. International Journal of Mechanical Sciences, 2012, 54(1): 121–135. doi: 10.1016/j.ijmecsci.2011.10.003 [19] LIN L, ZHI X D, FAN F, et al. Determination of parameters of Johnson-Cook models of Q235B steel [J]. Journal of Vibration and Shock, 2014, 33(9): 153–158. [20] GUO Z T, SHU K O, GAO B, et al. J-C model based failure criterion and verification of Q235 steel [J]. Explosion and Shock Waves, 2018, 38(6): 1325–1332. doi: 10.11883/bzycj-2017-0163 [21] MARCOS R M, DANIEL G G, ALEXIS R, et al. Influence of stress state on the mechanical impact and deformation behaviors of aluminum alloys [J]. Metals, 2018, 8(7): 520–540. doi: 10.3390/met8070520 [22] CAMPBELL J D, COOPER R H. Yield and flow of low-carbon steel at medium strain rates [C]//Proceedings of the Conference on the Physical Basis of Yield and Fracture. London: Institute of Physics and Physical Society, 1966: 77–87. [23] JONES N. Structural impact [M]. 2 Ed. Cambridge: Cambridge University Press, 2012. [24] CHEN G, CHEN Z F, TAO J L, et al. Investigation and validation on plastic constitutive parameters of 45 steel [J]. Explosion and Shock Waves, 2005, 25(5): 451–456. doi: 10.3321/j.issn:1001-1455.2005.05.010 [25] BAI Y, WIERZBICKI T. A comparative study of three groups of ductile fracture loci in the 3D space [J]. Engineering Fracture Mechanics, 2015, 135: 147–167. doi: 10.1016/j.engfracmech.2014.12.023 [26] WANG P, QU S. Analysis of ductile fracture by extended unified strength theory [J]. 
International Journal of Plasticity, 2018, 104: 196–213. doi: 10.1016/j.ijplas.2018.02.011
ZHOU Lin, WANG Zihao, WEN Heming
Author bio: ZHOU Lin (1988-), female, doctoral student, major in impact dynamics. E-mail: [email protected]
Numerical simulations have been widely used in the study of the response of structures under projectile impact or explosive loadings, owing to the rapid advancement in both computer and computing technologies. Hence, it becomes essential to develop a constitutive model which can accurately describe the dynamic behaviors of materials under different loading conditions in terms of true stress-true strain relationships, strain rate effects, temperature effects and failure. Johnson and Cook[1–2] proposed an empirical viscoplastic constitutive model for metals which expressed the equivalent stress as a function of plastic strain, strain rate and temperature. They also proposed a ductile fracture criterion which is a function of hydrostatic pressure (stress triaxiality), strain rate and temperature, but did not consider the effect of the third invariant of the deviatoric stress tensor, or Lode angle. The Johnson-Cook (JC) constitutive model, including its failure criterion, has been widely used in engineering applications because it has a simple form and has already been implemented in some commercial software. Nevertheless, its accuracy has been a major concern both in the academic community and in industry.
Historically, the difficulty of making a complete assessment of the accuracy of the JC constitutive model had two causes: one was the lack of a complete set of test data for a selected material under different loading conditions (namely, quasi-static to high strain rate loading, low to high temperature, and fracture of the material in different stress states such as the axisymmetric stress state and the plane strain state) in terms of true stress-true strain relationships, strain rate effects, temperature effects and failure; the other was the prohibitively high cost of obtaining such a complete set of test data. Nonetheless, many researchers have contributed to the collection of such databases for different materials over many years. The objective of the present work is to assess the accuracy of the JC constitutive model for metals by comparing the model predictions with the material test data for 2024-T351 aluminum alloy[3–4], 6061-T6 aluminum alloy[5–8], OFHC copper[1, 9–13], 4340 steel[1–2], Ti-6Al-4V alloys[14–17] and Q235 mild steel[18–20], as well as the ballistic test data for 2024-T351 aluminum alloy plates[21]. The reasons for choosing these 6 materials are two-fold: first, their material test data are more complete; second, they are widely used in various industries such as construction, automobile, naval architecture, aviation and defense. The accuracy of the JC constitutive model is evaluated in terms of the quasi-static true stress-true strain relationships in both tension and shear, the dynamic increase factor (DIF) and dynamic true stress-true strain curves, the ratio of (yield) stress at elevated temperature to that at room temperature and the true stress-true strain curves at different temperatures, the failure strains vs. stress triaxiality, and the projectile residual velocities. The results are given and discussed.
1. JC Constitutive Model
The JC constitutive model, including its failure criterion, is commonly used in numerical simulations of the response and failure of metal structures subjected to dynamic loadings. The constitutive equation represents the equivalent stress of the metal as the product of a strain term, a strain rate term and a temperature term. It has a simple form and the parameters in the model can be obtained easily through laboratory material tests. The equivalent stress $\sigma_{\rm eq}$ in the JC constitutive model can be expressed by the following equation[1–2]
$\sigma_{\rm eq} = \left(A + B\varepsilon_{\rm p}^n\right)\left(1 + C\,\ln\dot\varepsilon^*_{\rm p}\right)\left(1 - {T^*}^m\right)$  (1)
where A, B, C, n, m are material constants, $\varepsilon_{\rm p}$ is the equivalent plastic strain, $\dot\varepsilon^*_{\rm p} = \dot\varepsilon_{\rm p}/\dot\varepsilon_0$ with $\dot\varepsilon_0$ being a reference strain rate (in this paper it is taken as the strain rate employed in the quasi-static tensile tests), and $T^* = (T-T_{\rm r})/(T_{\rm m}-T_{\rm r})$ is the homologous temperature, with T, $T_{\rm r}$, $T_{\rm m}$ being the current temperature, the room temperature and the melting temperature, respectively. The JC failure criterion is based on the equivalent plastic strain, and it is assumed that the damage of the material accumulates with plastic deformation.
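Before detailing the failure criterion, it may help to see Eq.(1) in executable form. The short Python sketch below (an illustration, not the authors' code) evaluates the flow stress for 2024-T351 aluminum using the constants in Table 1; the strain, strain rate and temperature inputs, and the assumed room temperature of 293 K, are illustrative values rather than data from the tests.

```python
# Minimal sketch of the JC flow stress, Eq.(1), with the 2024-T351 constants from Table 1.
# The room temperature T_r = 293 K is an assumption; valid for T_r <= T < T_m.
import math

A, B, n  = 340e6, 510e6, 0.510     # Pa, Pa, -
C, m     = 0.002, 1.890            # -, -
eps0_dot = 9.0e-5                  # 1/s, reference strain rate
T_r, T_m = 293.0, 775.0            # K

def jc_stress(eps_p, eps_dot, T):
    """Equivalent stress (Pa) from the Johnson-Cook constitutive model, Eq.(1)."""
    strain_term = A + B * eps_p**n
    rate_term   = 1.0 + C * math.log(eps_dot / eps0_dot)
    T_star      = (T - T_r) / (T_m - T_r)
    temp_term   = 1.0 - T_star**m
    return strain_term * rate_term * temp_term

# Quasi-static vs. dynamic at the same strain and temperature: their ratio is the
# dynamic increase factor (DIF) used later to characterise strain rate effects.
quasi   = jc_stress(0.075, 9.0e-5, 293.0)
dynamic = jc_stress(0.075, 4600.0, 293.0)
print(quasi / 1e6, dynamic / 1e6, dynamic / quasi)   # MPa, MPa, DIF
```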
In the JC failure criterion a damage parameter D can be defined as
$D = \sum \dfrac{\Delta\varepsilon_{\rm p}}{\varepsilon_{\rm f}}$  (2)
where $\Delta\varepsilon_{\rm p}$ is the increment of the equivalent plastic strain and $\varepsilon_{\rm f}$ is the equivalent plastic strain to fracture. Eq.(2) means that the material fails when D reaches 1, that is to say, when the accumulated equivalent plastic strain reaches the equivalent plastic strain to fracture. $\varepsilon_{\rm f}$ is defined as the product of a stress triaxiality term, a strain rate term and a temperature term, which can be written in the following form
$\varepsilon_{\rm f} = \left[D_1 + D_2\exp\left(D_3\eta\right)\right]\left(1 + D_4\ln\dot\varepsilon^*_{\rm p}\right)\left(1 + D_5 T^*\right)$  (3)
where D1, D2, D3, D4, D5 are material constants and $\eta = \sigma_{\rm H}/\sigma_{\rm eq}$ is the stress triaxiality, with $\sigma_{\rm H}$ being the hydrostatic pressure. In addition, the numerical model implemented in this work takes into account the temperature evolution assuming adiabatic conditions; the temperature increment $\Delta T$ can be written as
$\Delta T = \int_0^{\varepsilon_{\rm eq}} \chi\,\dfrac{\sigma_{\rm eq}\,{\rm d}\varepsilon_{\rm eq}}{\rho C_{\rm p}}$  (4)
where $\rho$ is the material density, $C_{\rm p}$ is the material specific heat, and $\chi$ is the proportion of plastic work converted into heat, usually taken to be 0.9. The equation of state is the Grüneisen equation of state (EOS), as follows
$P = \rho C_0^2\dfrac{\mu}{\left(1 - s_1\mu\right)^2}\left(1 - \dfrac{\varGamma_0\mu}{2}\right) + \varGamma_0\rho E_{\rm m}$  (5)
where P is the hydrostatic pressure, $C_0$, $s_1$ and $\varGamma_0$ are material constants, $E_{\rm m}$ is the material specific internal energy and $\mu = 1 - V/V_0$ (V and $V_0$ are the current volume and the initial volume, respectively).
2. A Critical Assessment on the Accuracy of the JC Model
In this section the accuracy of the JC constitutive model, including its failure criterion, is critically assessed by comparing the model predictions with the material test data for some metals under different loading conditions in terms of stress-strain relationships, strain rate effects, temperature effects and the failure criterion, as well as ballistic perforation data.
2.1. Material Test Data
2.1.1. Quasi-Static Stress-Strain Relationships
Fig.1 shows the quasi-static true stress-true strain curves both in tension and shear for 2024-T351 aluminum alloy[3], 6061-T6 aluminum alloy[5–6], OFHC copper[9–12], 4340 steel[1], Ti-6Al-4V alloys[14–16] and Q235 mild steel[18–19]. The values of $A, B$ and $n$ are shown in Table 1. The quasi-static true stress-true strain relationships in shear in Fig.1 are obtained from the torsion data by using the von Mises flow rule, i.e., the equivalent tensile stress is $\sigma = \sqrt{3}\,\tau$ and the corresponding tensile strain is $\varepsilon = \gamma/\sqrt{3}$. It is clear from Fig.1 that the mechanical behavior of the materials examined does not obey the von Mises yield criterion and its associated flow rule, as the true stress-true strain curves in tension and in shear are not identical. In other words, the mechanical behavior of the metals examined is sensitive to the state of stress, and the JC constitutive model fails to capture this phenomenon as it does not take into consideration the effect of Lode angle.
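Before turning to rate effects, note that the same insensitivity to the stress state carries over to the fracture locus of Eq.(3), which depends on the triaxiality alone. A minimal sketch (illustrative only, using the 2024-T351 constants D1-D3 from Table 1 and setting the strain-rate and temperature factors to 1):

```python
# Illustrative evaluation of the JC fracture locus, Eq.(3), for 2024-T351 (Table 1),
# with the strain-rate and temperature factors taken as 1 (quasi-static, room temperature).
import math

D1, D2, D3 = -0.070, 1.020, -1.620

def jc_fracture_strain(eta):
    """Equivalent plastic strain to fracture as a function of stress triaxiality eta."""
    return D1 + D2 * math.exp(D3 * eta)

# Shear (eta = 0), uniaxial tension (eta = 1/3), and a notched-tension level (eta ~ 0.6):
for eta in (0.0, 1.0 / 3.0, 0.6):
    print(eta, jc_fracture_strain(eta))
# The locus decreases monotonically with eta; Section 2.1.4 below points out that the
# measured shear fracture strains are in fact much lower than tensile ones for this alloy.
```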
2.1.2. Strain Rate Effects
As mentioned before, strain rate effects are very important in the formulation of a constitutive model for materials, and the JC constitutive model is no exception. In this section, the accuracy of the strain rate term in the model is assessed in terms of the dynamic increase factor (DIF) and dynamic true stress-true strain curves. Fig.2–Fig.7 show comparisons of the JC model predictions with the test data for 2024-T351 aluminum alloy[3], 6061-T6 aluminum alloy[6–7], OFHC copper[1, 9–11, 13], 4340 steel[1], Ti-6Al-4V alloys[15–16] and Q235 mild steel[18]. The values of $C$ and $\dot\varepsilon_0$ are shown in Table 1. In this paper, the DIF is mainly employed to characterize the rate-sensitive behavior of metallic materials. It is evident from Fig.2(a) that the strain rate term, namely $(1 + C\ln\dot\varepsilon^*_{\rm p})$, in the JC constitutive model can be used to describe the relationship between the DIF and the strain rate for 2024-T351 aluminum alloy at strain rates less than 4600 s⁻¹; for strain rates greater than this value the DIF increases rapidly with increasing strain rate and the JC model fails to predict it. As a matter of fact, it significantly under-estimates the strain rate effects for 2024-T351 aluminum alloy at higher strain rates, as can be seen from Fig.2(a). It is also evident from Fig.2(b) and Fig.2(c) that the true stress-true strain relationships both in tension and compression can be reasonably described by the JC model for strain rates less than 4600 s⁻¹, whilst for strain rates larger than this value the agreement between the JC model predictions and the experimental data is poor. Similar results are also observed for 6061-T6 aluminum alloy and OFHC copper, as can be seen from Fig.3 and Fig.4, respectively. Fig.5 and Fig.6 show comparisons of the JC constitutive model predictions with the test data for 4340 steel and for Ti-6Al-4V alloys, respectively. It is clear from the figures that reasonable agreement is obtained within the ranges of the strain rates examined (for 4340 steel the highest strain rate achieved in the reference is only 570 s⁻¹, whilst for Ti-6Al-4V alloys strain rates of up to 6000 s⁻¹ were reached). It should be mentioned here that in some situations (e.g. ballistic impact) much higher strain rates can be involved, and uncertainty about the accuracy of the JC model remains due to the lack of test data at even higher strain rates for both 4340 steel and Ti-6Al-4V alloys. Fig.7 shows a comparison between the JC model predictions and the test results for Q235 mild steel. As can be seen from Fig.7(a), the JC model has failed to describe the strain rate effects accurately for strain rates greater than 727.7 s⁻¹, although it seemingly describes reasonably well the dynamic stress-strain response, as can be seen from Fig.7(b). To further investigate the accuracy of the JC constitutive model, the true stress-true strain curves predicted by the quasi-static stress-strain curve multiplied by the experimentally determined DIF at the highest strain rates for 2024-T351, 6061-T6, OFHC copper and Q235 mild steel are also shown in Fig.2(c), Fig.3(b), Fig.4(b) and Fig.7(b), as indicated by the dashed lines. It is found from Fig.2(c), Fig.3(b) and Fig.4(b) that much better agreement is obtained for 2024-T351, 6061-T6 and OFHC copper, whilst from Fig.7(b) much worse agreement is found for Q235 mild steel. In other words, the JC model fails to produce consistent results for different materials, implying that the form of its equation, i.e.
$\left( {A + B\varepsilon _{\rm{p}}^n} \right)\left( {1 + C \ln\,{{\dot \varepsilon }^*}_{\rm{p}}} \right)$ , may be inadequate. Moreover, it should be borne in mind here that the test data presented in Fig.2(a), Fig.3(a), Fig.4(a), Fig.5(a), Fig.6(a) and Fig.7(a) were taken at different plastic strains, for example, DIF at a plastic strain of 0.075 for 2024-T351, DIF at a plastic strain corresponding to UTS for 6061-T6, DIF at a plastic strain of 0.05 both for 4340 steel and Ti-6Al-4V alloy, DIF at a plastic strain of 0.15 for OFHC copper and DIF at yield stress for Q235 mild steel. Generally speaking, the strain rate effects in terms of DIF for a particular material at different plastic strains are different as observed experimentally by Campbell and Cooper[22] and highlighted by Jones[23] for mild steel which is redrawn in Fig.8(a) in terms of DIF vs. strain rate and by Chen et al.[24] which is presented in Fig.8(b) for 45 steel. In other words, as can be clearly seen from Fig.8 that the experimentally determined DIF according to different plastic strains can lead to different results. Hence, the choice of DIF at different plastic strains may add further uncertainty to the accuracy of the JC constitutive model. 2.1.3. Temperature Effects Temperature effect is also very important in the situations where high strain rates, large plastic strains are involved which lead to temperature rise due to (quasi) adiabatic conditions and it should be considered in the formulation of a constitutive model for materials. In this section, the accuracy of the term of temperature effect in the JC constitutive model will be assessed in terms of ratio of true stress at elevated temperature to that at room temperature and true stress-true strain curves at elevated temperature. Fig.9–Fig.14 show comparisons of the JC model predictions with the test data for 2024-T351 aluminum alloy[3], 6061-T6 aluminum alloy[6], OFHC copper[9, 11], 4340 steel[1], Ti-6Al-4V alloys[15] and Q235 mild steel[18]. The values of $m$ and ${T_{\rm{m}}}$ are shown in Table 1. In this paper, ratio of (yield) stress at elevated temperature to that at room temperature ($\sigma _0^{\rm{T}}/\sigma _0^{{\rm{RM}}}$ ) is normally employed to characterize the effect of temperature on the behavior of metallic materials. It can be seen from Fig.9(a) that the term of temperature, namely, $(1 - {T^*}^m)$ in the JC constitutive model can be used to describe the relationship between the ratio of stress at elevated temperature to that at room temperature ($\sigma _0^{\rm{T}}/\sigma _0^{{\rm{RM}}}$ ) and dimensionless temperature (T*) for 2024-T351 aluminum alloy when Tm is taken to be 775 K[3, 21]. Similarly, the true stress-true strain relationships in tension can be also reasonably described by the JC model as can be seen from Fig.9(b). However, it is evident from Fig.10(a), Fig.11(a) and Fig.14(a) that the term of temperature, namely, $(1 - {T^*}^m)$ in the JC constitutive model can be used to describe the relationship between the ratio of stress at elevated temperature to that at room temperature ($\sigma _0^{\rm{T}}/\sigma _0^{{\rm{RM}}}$ ) and dimensionless temperature (T*) for 6061-T6 aluminum alloy, OFHC copper and Q235 mild steel at T* less than 0.4, and for T* greater than this value the ratio decreases rapidly with increasing T* and the JC model fails to predict it. 
As a matter of fact, it significantly over-estimate the temperature effects for 6061-T6 aluminum alloy, OFHC copper and Q235 mild steel at higher T* as can be seen from Fig.10(a), Fig.11(a) and Fig.14(a). It is also evident from Fig.10(b), Fig.10(c), Fig.11(b) and Fig.14(b) that the true stress-true strain relationships in tension can be reasonably described by the JC model for T*< 0.4, whilst for temperature larger than this value the agreement between the JC model predictions and the experimental data are poor. Fig.12 and Fig.13 show comparisons of the JC constitutive model predictions with the test data for 4340 steel and Ti-6Al-4V alloys, respectively. It is clear from these figures that reasonable agreement is obtained within the ranges of the temperatures examined (i.e. T* < 0.3). 2.1.4. Failure Criterion Failure is extremely important in the safety calculations and assessment of structures subjected to large loads which produce large plastic deformations leading to rupture, and it should be taken into account in the formulation of a constitutive model for materials. In this section, the accuracy of the JC failure criterion will be assessed by comparing it with the test data for the fracture of different metallic materials under different loading conditions. Fig.15 shows the dependence of the equivalent strain to fracture on the stress triaxiality for 2024-T351 aluminum alloy[4], 6061-T6 aluminum alloy[8], OFHC copper[2], 4340 steel[2], Ti-6Al-4V alloys[17] and Q235 mild steel[20]. The values of ${D_1},{D_2}$ and ${D_3}$ are shown in Table 1. It is demonstrated from Fig.15(a) that for 2024-T351 aluminum alloy the JC fracture criterion can describe the relationship between the fracture strain and the stress triaxiality for the smooth and notched tension tests reasonably well. Nonetheless, as can be seen from Fig.15(a), the JC fracture criterion has failed to predict the fracture strains of 2024-T351 specimens under other loading conditions such as shear loading. Similar results are also obtained for 4340 steel, Ti-6Al-4V alloys, Q235 mild steel as can be seen from Fig.15(d)–Fig.15(f). For 6061-T6 aluminum alloy and OFHC copper the JC fracture criterion is in reasonable agreement with the available test data as can be seen from Fig.15(b) and Fig.15(c) . It is also demonstrated from Fig.15(a), Fig.15(d)–Fig.15(f) that the critical fracture strains at shear are much lower than those at tension for 2024-T351 aluminum alloy, 4340 steel, Ti-6Al-4V alloys and Q235 mild steel. Moreover, the experimental results in Fig.15(a) does not show that the fracture strain decreases monotonically with increasing stress triaxiality as the JC fracture criterion implies. It is no surprising since the JC fracture criterion does not take into account the effect of Lode angle on the rupture of materials subjected to different loadings. Bai and Wierzbicki[25] conducted a comparative study of 16 fracture models and pointed out that the JC fracture criterion has not taken into consideration the effect of the third deviatoric stress invariant. Wang and Qu[26] performed an analysis of ductile fracture by the way of an extended unified strength theory which catered for the effects of the stress triaxiality and the normalized Lode angle parameter. 2.2. Ballistic Perforation Data In Section 2.1 a critical assessment of the accuracy of the JC constitutive model has been made by comparing its predictions with the material test data available for some metals. 
It is clear that the JC constitutive model is applicable to Mises materials at quasi-static to intermediate strain rates and low to moderate temperatures, and that its accuracy decreases with increasing strain rate and temperature. In order to further assess the accuracy of the JC constitutive model and the suitability of its application in situations where higher strain rates and temperatures are involved, the following numerical simulations are carried out for the ballistic perforation of metal plates made of 2024-T351 aluminum alloy by flat-ended projectiles using the JC constitutive model. Rodriguez-Millan et al.[21] recently conducted an experimental investigation into the ballistic perforation of 2024-T351 aluminum alloy plates. The diameter of the flat-ended cylindrical projectile was 5.5 mm and its length 7 mm. The maraging steel projectile had a mass of 1.1 g. The aluminum alloy target was fully clamped, with a square window of 100 mm×100 mm and a thickness of 4 mm. Fig.16 shows the finite element model used in the numerical simulations. In order to achieve computational efficiency as well as accuracy, the target plate adopts a transition grid from the impact center to the boundary. The target plate area around the impact point, which has a radius of 1.8 times that of the projectile, has the smallest mesh size (i.e. 0.1 mm×0.1 mm×0.1 mm). Due to symmetry boundary conditions, only 1/2 of the finite element model is built, to save computing time. In the numerical simulations no friction is considered and the projectile is assumed to remain rigid. Table 2 lists two sets of values of the parameters in the JC constitutive model for 2024-T351 aluminum alloy employed in the numerical simulations. One set is directly quoted from Ref.[21] and the other is obtained from Section 2.1. To illustrate the difference between these two sets of parameter values, Fig.17 compares them in terms of quasi-static stress-strain curves, DIF vs. strain rate, $\sigma_0^{\rm T}/\sigma_0^{\rm RM}$ vs. T*, and equivalent strain to fracture vs. stress triaxiality curves. It can be seen from Table 2 and Fig.17 that the two sets of data differ considerably in terms of strain rate effects and fracture strains. Fig.18 shows a comparison of the numerically predicted residual velocities with the test results for the perforation of the 2024-T351 aluminum alloy plates struck by the flat-faced projectile[21]. Also shown in the figure are the numerical results obtained by Rodriguez-Millan et al.[21]. It is clear from Fig.18 that the numerical results using both sets of parameter values in the JC constitutive model are in poor agreement with the experimental data. The numerical results using the set of parameter values obtained in Section 2.1 are generally lower than the test data; the numerical results obtained by Rodriguez-Millan et al.[21] using the other set of parameter values are higher than the test data for impact velocities less than 400 m/s, whilst for impact velocities greater than 400 m/s the numerical results are lower than the experimental data. All these results, as presented in Fig.17 and Fig.18, demonstrate that the JC constitutive model is incapable of describing the response of metals at higher strain rates and at fracture.
This is because the linear term $\left(1 + C\,\ln\dot\varepsilon^*_{\rm p}\right)$ in the JC constitutive model cannot reflect the non-linearity of the strain rate sensitivity of metals at the higher loading rates involved in ballistic perforation, and because its failure criterion is a monotonically decreasing function of the stress triaxiality and takes no account of the effect of Lode angle.
3. Conclusions
A critical assessment has been made in this paper of the accuracy of the JC constitutive model, including its failure criterion, based on the analysis of the material test data for different metals under different loading conditions. To assess its accuracy further, the model predictions have also been compared with the experimental data for the perforation of 2024-T351 aluminum alloy plates struck normally by a flat-ended projectile. The main conclusions are as follows:
(1) The JC constitutive model is applicable to Mises materials at quasi-static to intermediate strain rates and low to moderate temperatures;
(2) The agreement between the model predictions and the experimental results is poor for non-Mises materials in terms of shear stress-shear strain curves;
(3) The JC fracture criterion can describe reasonably well the fracture of metals in the axisymmetric stress state (i.e. tensile loading) but fails to predict the failure of metals in the plane strain state (i.e. shear loading), as it takes no account of the Lode angle effect;
(4) Its accuracy decreases with increasing strain rate and temperature;
(5) The form of the model's equation (namely, a quasi-static stress-strain curve multiplied by a DIF) may be inadequate, at least in scenarios where high strain rates are involved.
Using Green's Theorem to evaluate a line integral
Green's Theorem converts a line integral around a closed curve in the plane into a double integral over the region the curve encloses. Let $C$ be a positively oriented (counterclockwise), piecewise-smooth, simple closed curve and let $D$ be the region bounded by $C$. If $P$ and $Q$ have continuous partial derivatives on an open region containing $D$, then
$\oint_C P\,dx + Q\,dy = \iint_D\left(\dfrac{\partial Q}{\partial x}-\dfrac{\partial P}{\partial y}\right)dA.$
The theorem only applies to simple, closed curves; a path that is not closed has to be closed off first, and the added piece accounted for separately. It is often the quickest way to compute the circulation (work) of a force field around a closed loop, and it also explains why the line integral of a conservative vector field around any closed curve is zero: for such a field $\partial Q/\partial x - \partial P/\partial y = 0$, so the double integral vanishes.
Worked example. Evaluate $\oint_C xy\,dx + x^2y^3\,dy$, where $C$ is the triangle with vertices $(0,0)$, $(1,0)$ and $(1,2)$, traversed counterclockwise. The enclosed region is $0\le x\le 1$, $0\le y\le 2x$, so by Green's Theorem
$I=\displaystyle\int_0^1\!\!\int_0^{2x}\left(\dfrac{\partial (x^2y^3)}{\partial x}-\dfrac{\partial (xy)}{\partial y}\right)dy\,dx=\int_0^1\!\!\int_0^{2x}\left(2xy^3-x\right)dy\,dx=\int_0^1\left(8x^5-2x^2\right)dx=\dfrac{2}{3}.$
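The computation above is easy to check symbolically. The following sketch (assuming sympy is available) evaluates both sides of Green's Theorem for this triangle: the double integral of $\partial Q/\partial x-\partial P/\partial y$ and the line integral summed over the three edges.

```python
# Symbolic check of the triangle example (assumes sympy is installed).
import sympy as sp

x, y, t = sp.symbols('x y t')
P = x*y           # coefficient of dx
Q = x**2 * y**3   # coefficient of dy

# Double integral of (dQ/dx - dP/dy) over the triangle 0 <= y <= 2x, 0 <= x <= 1
curl = sp.diff(Q, x) - sp.diff(P, y)
area_side = sp.integrate(sp.integrate(curl, (y, 0, 2*x)), (x, 0, 1))

def edge(xt, yt, t0, t1):
    """Integral of P dx + Q dy along one parametrised edge."""
    Pe = P.subs({x: xt, y: yt})
    Qe = Q.subs({x: xt, y: yt})
    return sp.integrate(Pe*sp.diff(xt, t) + Qe*sp.diff(yt, t), (t, t0, t1))

line_side = (edge(t, sp.Integer(0), 0, 1)        # (0,0) -> (1,0)
             + edge(sp.Integer(1), t, 0, 2)      # (1,0) -> (1,2)
             + edge(1 - t, 2 - 2*t, 0, 1))       # (1,2) -> (0,0)

print(area_side, line_side)   # both print 2/3
```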
Orientation matters: Green's Theorem assumes the curve is traversed counterclockwise, and reversing the orientation changes the sign of the answer. When the enclosed region is a disc or half-disc, it usually makes sense to evaluate the resulting double integral in polar coordinates. For example, for $\oint_C y^3\,dx - x^3\,dy$ with $C$ the circle $x^2+y^2=4$, Green's Theorem gives $\iint_D\left(-3x^2-3y^2\right)dA = -3\int_0^{2\pi}\!\!\int_0^2 r^3\,dr\,d\theta = -24\pi$. If the region of integration is not simple, it can be written as a union of simple regions $R_1, R_2, \dots$, and the double integral splits into the sum $\iint_{R_1}\left(\partial Q/\partial x - \partial P/\partial y\right)dA + \iint_{R_2}\left(\partial Q/\partial x-\partial P/\partial y\right)dA + \cdots$ over the pieces.
We can also write Green's Theorem in vector form. With $\mathbf{F} = (P, Q)$, the statement above is the circulation form $\oint_C \mathbf{F}\cdot d\mathbf{r} = \iint_D (\operatorname{curl}\mathbf{F})\cdot\mathbf{k}\,dA$. There is also a flux form: if $\mathbf{n}$ denotes the outward unit normal to $C$ in the $xy$-plane, then $\oint_C \mathbf{F}\cdot\mathbf{n}\,ds = \iint_D \operatorname{div}\mathbf{F}\,dA$, the two-dimensional analogue of the divergence theorem.
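A quick sanity check of the flux form, again assuming sympy: take $\mathbf{F}=(x,y)$ on the unit disc, so that $\operatorname{div}\mathbf{F} = 2$ and the outward flux through the unit circle should equal $2\pi$.

```python
# Flux form of Green's Theorem for F = (x, y) on the unit disc (assumes sympy).
import sympy as sp

t, r, th = sp.symbols('t r theta')

# Direct flux through the unit circle: n = (cos t, sin t), ds = dt, so F.n = cos^2 t + sin^2 t
flux_line = sp.integrate(sp.cos(t)**2 + sp.sin(t)**2, (t, 0, 2*sp.pi))

# Double integral of div F = 2 over the unit disc, in polar coordinates
flux_area = sp.integrate(sp.integrate(2*r, (r, 0, 1)), (th, 0, 2*sp.pi))

print(flux_line, flux_area)   # both 2*pi
```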
Green's Theorem is a strictly two-dimensional statement. Its generalisations are Stokes' Theorem, which relates a line integral around a closed curve in space to a surface integral of the curl, and the Divergence Theorem, which relates a flux integral over a closed surface to a volume integral of the divergence; the Fundamental Theorem for Line Integrals plays the analogous role for conservative fields.
A curious consequence of Green's Theorem is that the area of the region enclosed by a simple closed curve can be computed directly from a line integral over the curve itself, without reference to the interior: taking $P=-y/2$ and $Q=x/2$ gives $\partial Q/\partial x-\partial P/\partial y = 1$, so $A = \tfrac12\oint_C x\,dy - y\,dx$ (equivalently $A=\oint_C x\,dy = -\oint_C y\,dx$). For the ellipse $x^2/a^2 + y^2/b^2 = 1$, parametrised by $x=a\cos t$, $y=b\sin t$ with $0\le t\le 2\pi$, the integrand is the constant $ab$ and the formula yields $A=\pi ab$.
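The ellipse computation can also be checked numerically: sampling the boundary and evaluating the discrete (shoelace) version of $\tfrac12\oint_C x\,dy-y\,dx$ recovers $\pi ab$. A small numpy sketch, with the semi-axes chosen arbitrarily:

```python
# Numerical check of the boundary area formula for an ellipse (assumes numpy).
import numpy as np

a, b = 3.0, 2.0
t = np.linspace(0.0, 2.0 * np.pi, 100_001)   # closed, counterclockwise parametrisation
x = a * np.cos(t)
y = b * np.sin(t)

# Discrete version of A = (1/2) * closed integral of (x dy - y dx): shoelace sum over the polygon
area = 0.5 * np.sum(x[:-1] * y[1:] - x[1:] * y[:-1])
print(area, np.pi * a * b)   # both ~ 18.8496
```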
September 2019, 18(5): 2693-2715. doi: 10.3934/cpaa.2019120
Concentration of ground state solutions for quasilinear Schrödinger systems with critical exponents
Yongpeng Chen 1, Yuxia Guo 2 and Zhongwei Tang 1
1 School of Mathematical Sciences, Beijing Normal University, Beijing, 100875, China
2 Department of Mathematics, Tsinghua University, Beijing, 100084, China
* Corresponding author: [email protected]
Received September 2018, Revised September 2018, Published April 2019
Fund Project: The second author is supported by the National Science Foundation of China (11571040, 11771235, 11331010). The third author is supported by the National Science Foundation of China (11571040).
This paper is concerned with the critical quasilinear Schrödinger systems in ${\Bbb R}^N$:
$\left\{\begin{array}{ll} -\Delta w+(\lambda a(x)+1)w-(\Delta|w|^2)w = \frac{p}{p+q}|w|^{p-2}w|z|^q+\frac{\alpha}{\alpha+\beta}|w|^{\alpha-2}w|z|^\beta \\ -\Delta z+(\lambda b(x)+1)z-(\Delta|z|^2)z = \frac{q}{p+q}|w|^p|z|^{q-2}z+\frac{\beta}{\alpha+\beta}|w|^\alpha|z|^{\beta-2}z, \end{array}\right.$
where $\lambda>0$ is a parameter, $p>2$, $q>2$, $\alpha>2$, $\beta>2$, $2\cdot(2^*-1) < p+q < 2\cdot 2^*$ and $\alpha+\beta = 2\cdot 2^*$. By using variational methods, we prove the existence of positive ground state solutions which localize near the set $\Omega = {\rm int}\left\{a^{-1}(0)\right\}\cap {\rm int}\left\{b^{-1}(0)\right\}$ for $\lambda$ large enough.
Keywords: Quasilinear Schrödinger systems, critical exponent, concentration solution.
Mathematics Subject Classification: Primary: 35Q55; Secondary: 35J655.
Citation: Yongpeng Chen, Yuxia Guo, Zhongwei Tang. Concentration of ground state solutions for quasilinear Schrödinger systems with critical exponents. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2693-2715. doi: 10.3934/cpaa.2019120
CommonCrawl
Is the set of codes of Deterministic Finite-State Automata a regular language?

Let $\Sigma$ be a given alphabet. Is there a way to code up Deterministic Finite-State Automata (DFA) over $\Sigma$ as strings of $\Sigma$ in such a way that the corresponding subset of $\Sigma^*$ is a regular language? For example, for Turing machines, the set of codes of Turing machines over a fixed alphabet is decidable, and we can speak of decidable sets of Turing machines (through their codes). Of course we can also speak of regular sets of DFAs (through their codes). Is the set of all DFAs regular in this sense?

Tags: formal-languages, computability, automata, finite-automata

Comments:
– I'm pretty sure I know most of the topics you are asking about, and yet I can't understand what you are trying to ask... Mind rephrasing it somehow? (user1494736)
– No problem, I can try. Let $\Sigma$ be a fixed alphabet and let $DFA(\Sigma)$ be the set of all DFAs having input alphabet $\Sigma$. I would like to know whether there exists an alphabet $S$ and a function $f: DFA(\Sigma) \to S^*$ such that the range of $f$ is a regular language. At this moment I think the answer is no. However, I have the impression that there exists an alphabet $S$ and a function $f: DFA(\Sigma) \to S^*$ such that the range of $f$ is a context-free language. (user1491069)
– I'm sorry, the question I posted is trivial, since there exists a bijection $f: DFA(\Sigma) \to S^*$. I was trying to post a simpler version of what I really wanted. The real point is as follows. Let $\Sigma$ be a fixed alphabet and let $DFA(\Sigma)$ be the set of all DFAs having input alphabet $\Sigma$. Is there an alphabet $S$ and a function $f: DFA(\Sigma) \to S^*$ such that the set $\{(f(A), w) : w \in L(A)\}$ is accepted by a push-down automaton? (user1491069)
– Here is a similar question about encoding trees on cstheory.SE which illuminates multiple aspects of this question. Tl;dr: which kind(s) of "cheating" do you want to allow? (Raphael)

Answer (templatetypedef): This answer is, in a sense, a completely cheating approach, but it is indeed possible to encode all DFAs as strings. We can write out a DFA by writing out its transition table. We can write out the transition table using just 0s and 1s as follows: first, write out a number of 1s equal to the number of states, then a 0. Then, write out a number of 1s equal to the number of symbols in the alphabet, then a 0. Then, write out each row of the transition table by writing out each entry as a number of 1s indicating which state should be transitioned into, each followed by a 0 as a separator. Now, this particular encoding of a DFA is not regular. However, what we can do is the following. Consider the set of all such encodings. We can then order them in length-lex order, and number the DFAs produced this way 0, 1, 2, 3, 4, ..., etc. based on their ordering. In this case, we now have a bijection between $\mathbb{N}$ and the set of all DFAs. From there, we can consider the regular language consisting of all natural numbers written out in binary. This set is definitely regular; here is a regular expression for it: 0 | 1(0|1)*. So we now have a regular language consisting of encodings of DFAs.
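As a concrete illustration of the encoding and the length-lex numbering described in this answer, the short Python sketch below builds the unary transition-table strings for all small DFAs over a two-letter alphabet and numbers them in shortlex order. For brevity it encodes only the transition structure (start and accepting states are omitted), and the function names are ours, chosen purely for illustration:

```python
from itertools import product

def encode_dfa(num_states, alphabet_size, transitions):
    """Unary/binary encoding of a DFA's transition table, as described above:
    1^num_states 0  1^alphabet_size 0  then each table entry as a run of 1s
    (the 1-indexed target state) followed by a 0 separator."""
    parts = ["1" * num_states, "0", "1" * alphabet_size, "0"]
    for state in range(1, num_states + 1):
        for symbol in range(alphabet_size):
            target = transitions[(state, symbol)]
            parts.append("1" * target + "0")
    return "".join(parts)

def all_encodings(max_states, alphabet_size):
    """Enumerate encodings of all transition structures with up to max_states
    states over a fixed alphabet."""
    for n in range(1, max_states + 1):
        cells = list(product(range(1, n + 1), range(alphabet_size)))
        # every assignment of a target state to each (state, symbol) pair
        for targets in product(range(1, n + 1), repeat=len(cells)):
            yield encode_dfa(n, alphabet_size, dict(zip(cells, targets)))

# Length-lex (shortlex) order: shorter strings first, ties broken lexicographically.
codes = sorted(set(all_encodings(max_states=2, alphabet_size=2)),
               key=lambda s: (len(s), s))

# Numbering the DFAs by their position gives a bijection with an initial segment
# of the natural numbers; writing those indices in binary yields strings matched
# by the regular expression 0 | 1(0|1)*.
for index, code in enumerate(codes[:5]):
    print(index, format(index, "b"), code)
```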
The encoding is not at all easy to work with - you'd have to start listing off all encodings of DFAs until you found the one you were looking for - but mathematically it is well-defined.

Comments:
– Does this prove too much? Does this imply that any countable language (e.g. all languages of finite strings over finite alphabets) can be treated this way? The fact that there exists a bijection between, say, halting TMs and $\mathbb{N}$ does not mean that it is actually accessible in any meaningful way. (mhum)
– @mhum: That's a good point. My answer was mostly to point out that there is some way of building a regular language of DFA encodings, so that you could (for example) diagonalize and prove that there must exist a nonregular language. I completely agree that this is not of much practical value. (templatetypedef)
– If anything, this points out the need to specify precisely what is meant by an "encoding". After all, you could have made the bijection between DFAs and finite-length binary strings instead of binary encodings of integers. Now, the accepting language is $(0|1)^*$. Is this even a legitimate (never mind practical) "encoding" of DFAs? (mhum)
– @mhum: I guess your concern is that the bijection might not be computable (or very expensive), so that it is impossible in practice to perform it. See also my comment above. (Raphael)

Answer (frafl): DFAs can be stored in a regular way. We assume $\# \notin \Sigma$ and define $$L = \{\#\#e \mid e \in \{0,1\}^*\}^* \cdot \{\#s\#b \mid b \in \{0,1\}^*,\ s \in \Sigma\}^*,$$ which is clearly regular. Then for $w \in L$ such that $w = \#\#e_1 \dots \#\#e_o \#s_1\#b_1\# \dots \#s_n\#b_n$ we define $$p_0 = 1, \qquad p_i = \min\{r_{i-1}, n\},$$ where $$r_i = \min\{j > p_i \mid \exists k,\ p_i \leq k < j:\ s_j = s_k\}$$ is the index of the first symbol repetition after $p_i$. Let $\{p_1, \dots, p_k\}$ be the set definable in this way. Now we construct a DFA. The set of states will be $Q = \{1, \dots, m\}$, where $m = \max(\{k\} \cup \{\mathrm{bin}(b_i) \mid 1 \leq i \leq n\})$, and for the sake of simplicity we choose $1$ as the starting state. The set of accepting states shall be $E = \{\mathrm{bin}(e_i) \mid 1 \leq i \leq o\}$. By our interpretation of the string $w$, each part $\#s_{p_i}\#, \dots, \#b_{p_{i+1}-1}$ contains each $s \in \Sigma$ at most once, and for each such $s$ a binary string. We interpret this string as the target of our transition function $\delta: Q \times \Sigma \to Q$: $$\delta(i,s) = \begin{cases} \mathrm{bin}(b_j) & \exists p_i, j:\ p_i \leq j < \min\{p_{i+1}, n\},\ s_j = s \\ 1 & \text{otherwise} \end{cases}$$ Now $(\Sigma, Q, \delta, 1, E)$ is a DFA. On the other hand, it is clear that any DFA can be stored this way (after renaming the states).

Another answer: I have no idea, but my intuition is that you couldn't. You are basically asking if you can implement a regexp matcher with a push-down automaton, and I don't think you can. The regexp can be arbitrarily complex, so there is no way you can store all the state necessary in the push-down automaton's states; you would need to use the push-down stack to store the state of where you are in the regexp (and/or the word).
And the problem with the push-down is that you won't be able to do backtracking if you take a wrong turn in the regexp. In a normal automaton you have a state covering each possible wrong turn, because all the possible combinations are known when you determine the number of states for the automaton, but doing this with the push-down would require backtracking, and I don't think you can implement that.
February 2019 , Volume 26, Issue 3, pp 1895–1908 | Cite as Dexamethasone-containing bioactive dressing for possible application in post-operative keloid therapy Agnieszka Rojewska Anna Karewicz Marta Baster Mateusz Zając Karol Wolski Mariusz Kępczyński Szczepan Zapotoczny Krzysztof Szczubiałka Maria Nowakowska First Online: 10 December 2018 Bioactive dressing based on bacterial cellulose modified with carboxymethyl groups (mBC) was successfully prepared and studied. The surface of mBC was activated using carbodiimide chemistry and decorated with alginate/hydroxypropyl cellulose submicroparticles containing dexamethasone phosphate (DEX-P). Prior to their deposition particles were coated with chitosan in order to facilitate their binding to mBC, and to increase the control over the release process. The detailed physicochemical characterization of the particles and the bioactive dressing was performed, including the determination of the particles' size and size distribution, DEX-P encapsulation efficiency and loading, particles' distribution on the surface of the mBC membrane, as well as DEX-P release profiles from free and mBC-bound particles. Finally, the preliminary cytotoxicity studies were performed. The fabricated bioactive material releases DEX-P in a controlled manner for as long as 25 h. Biological tests in vitro indicated that the dexamethasone-containing submicroparticles are not toxic toward fibroblasts, while effectively inhibiting their proliferation. The prepared bioactive dressing may be applied in the treatment of the post-operative wounds in the therapy of keloids and in other fibrosis-related therapies. Alginate Hydroxypropyl cellulose Dexamethasone phosphate Modified bacterial cellulose Wound healing Drug release Dexamethasone sodium phosphate (DEX-P) is a water-soluble derivative of dexamethasone (DEX), a potent corticosteroid used widely to treat various inflammatory and autoimmune conditions. Although many pharmaceutical applications of DEX-P are based on its anti-inflammatory activity (Hickey et al. 2002; Zhang et al. 2016; Hu et al. 2017), its use as a bioactive component of the implants, stents and wound dressings can also benefit from the anti-proliferative (Bao et al. 2016) and anti-apoptotic properties of the drug toward fibroblasts (Nieuwenhuis et al. 2010). DEX and DEX-P were shown to have a negative influence on wound healing process, but only when administered systemically for longer time periods, in particular when the patient was treated with that drug prior to injury (Wang et al. 2013). No significant influence of the short-term post injury/surgery application of DEX and DEX-P was, however, confirmed so far. On the contrary, there are reports on the positive effects of DEX in healing process. DEX was applied with success in repair of the mucous membrane defects in oral submucous fibrosis (Raghavendra Reddy et al. 2012). In that study Reddy et al. have used DEX-impregnated collagen membranes to cover the raw wound created by surgical excision of fibrous bands. The presence of the drug was shown to decrease the inflammatory reaction and extent of the fibrosis process, most probably due to the reduced proliferation and deposition of fibroblasts. Beule et al. (2009) have shown the ability of DEX to decrease postoperative osteogenesis in a standardized animal wound model for endoscopic surgery of sinus. Li et al. 
(2014) have recently proposed the electrospun polymer fiber meshes based on poly(lactide-co-glycolide) (PLGA) as a delivery vehicle for DEX and green tea polyphenols. The obtained, bioactive dressing was proposed for the post-operative therapy of keloids. Hydrophilic green tea polyphenols were introduced as necessary permeation enhancers for hydrophobic DEX, simultaneously providing antibacterial activity. The long-term systemic treatment with DEX may lead to various adverse effects, including swelling, insomnia, bleeding of the stomach or intestines, Cushing's Syndrome, diabetes, or osteoporosis (Vardy et al. 2006; Ren et al. 2015). The systemic absorption of the topically administered DEX is low, but not negligible (Weijtens et al. 2002). Thus, there is a need for a controlled delivery system for DEX-P. In response to this challenge, we have developed a nanoparticulate delivery system for DEX-P based on the alginate-hydroxypropylcellulose (ALG/HPC) composite. Such system can be utilized in various topical applications to increase the duration, efficiency and safety of therapy. In a current paper we used that system as a component of a bioactive dressing. For that purpose the obtained ALG/HPC particles containing DEX-P were coated with a thin layer of chitosan to increase the control over the drug release and enable their binding to bacterial cellulose (Bionanocellulose®, BC). They were then covalently bound to the modified Bionanocellulose® membrane (mBC), which was obtained by functionalization of BC with carboxyl groups. BC was chosen because it constitutes an excellent wound dressing (Ul-Islam 2013; Liu 2016). It is a natural biopolymer material consisting of the interconnected network of cellulose fibrils. BC has high surface area and high ability for water retention. Due to its high sorption ability it allows to remove exudates from the wound, while providing the moist environment which promotes healing processes and prevents the formation of scars (Moritz et al. 2014; Napavichayanun et al. 2016). Ionically-modified BC was shown to have a better stability in water and higher water retention (Spaic 2014). The studies presented in the current paper were carried out based on the hypothesis that one can fabricate the bioactive wound dressing material by immobilization on the surface of BC the polysaccharide submicroparticles containing dexamethasone phosphate (ensuring prolonged and controlled release profile of that drug) which can be particularly useful in the therapy of post-operative wounds, especially those resulting from the surgical treatment of the pathologies related to the uncontrolled proliferation of the fibrous tissue (fibrosis, keloid). Experimental part The sheets of Bionanocellulose® modified with carboxymethyl groups (mBC) were kindly provided by Biovico company [(6.07 ± 0.42) nmol of carboxylic groups per 1 mm2 of mBC surface (Guzdek et al. 2018)]. 
Dexamethasone 21-phosphate disodium salt (DEX-P, ≥ 98%, Sigma-Aldrich), hydroxypropyl cellulose (HPC, MW = 100,000 g/mol, Sigma-Aldrich), alginic acid sodium salt (ALG, medium molecular weight, from brown algae, Sigma-Aldrich; Mv = 260,000 g/mol, M/G ratio = 1.20), chitosan (low molecular weight, Sigma-Aldrich, Mv = 120,000 g/mol, degree of deacetylation DDA = 79%), calcium chloride (p.a., Fluka, Poland), 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC, Sigma-Aldrich, commercial grade, powder), N-hydroxysuccinimide (NHS, 98%, Sigma-Aldrich), 2-propanol (99.9% pure, Sigma Aldrich), acetonitrile (gradient grade for liquid chromatography, Sigma-Aldrich), acetic acid (≥ 99.5%, Chempur), fluorescein sodium salt (powder, Sigma Aldrich), rhodamine B isothiocyanate (powder, Sigma-Aldrich), sodium phosphate monobasic dihydrate (≥ 99.0%, Sigma-Aldrich) were used as received. Dulbecco's Modified Eagle's Medium–high glucose (DMEM), phosphate buffered saline tablets (PBS) and Cell Proliferation Kit II (XTT) were purchased from Sigma-Aldrich; HyClone Research Grade Fetal Bovine Serum, South American Origin (FBS), HyClone trypsin–EDTA and HyClone Penicillin–Streptomycin solution were purchased from Symbios. Synthesis of ALG/HPC submicroparticles containing DEX-P To encapsulate DEX-P into the nanospheres, 20 mg of the drug was dissolved in 5 mL of water. Then 50 mg of sodium alginate (ALG) and 12.5 mg of hydroxypropyl cellulose (HPC) were dissolved in this solution. After stirring for 1 h at room temperature, the resulting solution was injected at 0.05 mL/min with a syringe pump (Aladdin-1000, World Precision Instruments) to a solution of cross-linking agent (0.2 M calcium chloride, 10 mL) under continuous stirring. The obtained ALG/HPC particles were filtered off using a fritted glass funnel (11-G4), washed with distilled water and isopropanol and dried at room temperature. Coating ALG/HPC particles with chitosan 20 mg of nanospheres was added to 1 mL of 1% (w/v) solution of chitosan in 1% (w/v) acetic acid and stirred at room temperature for 2 h. The excess of the chitosan was then removed by filtration on a Büchner funnel (11-G4) and the obtained ALG/HPC-Ch particles were dried at room temperature. Deposition of ALG/HPC-Ch on BC modified with carboxyl groups (mBC) To attach the chitosan-coated nanospheres on the surface of nanocellulose modified with carboxyl groups, a 4 cm2 sheet of mBC was immersed in an acetate buffer (pH 5.5) at room temperature. NHS (0.025 M) and EDC (0.008 M) were then added sequentially and the sample was continuously stirred at 150 rpm for 120 min. The activated BC was rinsed with deionized water, incubated with the suspension of the nanospheres (100 mg) in phosphate buffer (pH 7.4) for 30 min, and washed thoroughly with deionized water. The deposition reaction was carried out in the ultrasound bath to prevent aggregation of the particles. The obtained samples of mBC with attached particles (ALG/HPC-Ch-mBC) were dried in the air. Release profiles: entrapment efficiency and loading capacity determination 20 mg of the ALG/HPC particles loaded with DEX-P was placed in a 15 mL centrifuge tube and 5 mL of 10 mM PBS (pH 7.4) was added. The sample was incubated at 37 °C (IKA, a KS 3000 incubator) with continuous agitation (140 rpm). After defined time intervals the sample was centrifugated at 10,000 rpm for 5 min and then the supernatant was collected. The new portion (5 mL) of PBS was added to the nanospheres and the system was placed back in the incubator. 
The experiment was performed in four repetitions. To study the release profiles of DEX-P from ALG/HPC-Ch-mBC, a 4 cm2 sample of ALG/HPC-Ch-mBC was incubated in 5 mL of 10 mM PBS (pH 7.4). The sample was incubated at 37 °C with continuous agitation (140 rpm). After defined time intervals the supernatant was collected, replaced with the new portion (5 mL) of PBS and the system was placed back in the incubator. The experiment was performed in four repetitions. The concentration of DEX-P in the collected samples was determined using an HPLC system (Waters) consisting of a 515 pump, a rheodyne-type dosing system and a 2996 Photodiode Array Detector. The separation was performed on a C18 column (3.9 mm × 150 mm, 5 µm), using a 30:70 (v/v) mixture of acetonitrile and 10 mM phosphate buffer (pH 7.4) as the mobile phase at a flow rate of 0.5 mL/min. The drug signal was detected at 241 nm (absorption maximum of DEX-P). All the measurements were done in triplicate. DEX-P concentration was calculated based on the calibration curve obtained for the standard solutions (R2 = 0.999). Entrapment efficiency (EE [%]) and loading capacity (LC [%]) were calculated based on the total amount of the drug released from the obtained particles, using the following equations: $$EE \left[ \% \right] = \frac{Total\, weight\, of\,DEX - P \,in \,the\, obtained \,submicroparticles}{Weight\, of\,DEX - P \,used \,in\, the\, synthesis} \times 100$$ $$LC \,\left[ \% \right] = \frac{Weight\, of\, DEX - P \,encapsulated \,in\, the\, sample}{Weight\, of\, the\, sample} \times 100$$ Microscopic and spectroscopic characterization of the unbound and mBC-bound particles SEM analysis was carried out using a PhenomWorld Pro scanning electron microscope. ALG/HPC-Ch spheres were dried at room temperature on a watch glass, and then the obtained material was placed on a carbon tape. The mBC membrane with the attached particles was stretched flat on a glass slide, dried in vacuum and placed on a carbon tape. Atomic force microscopic (AFM) images were obtained using a Dimension Icon AFM microscope (Bruker, Santa Barbara, CA) working in the PeakForce Tapping (PFT) and QNM® modes with standard silicon cantilevers for measurements in the air (nominal spring constant of 0.4 N/m). For confocal laser scanning microscopy (CLSM) studies the mBC membranes were stained with the aqueous solution of fluorescein sodium salt (0.1 mg/mL) for 72 h before the particle deposition. Chitosan-coated particles were also labeled with rhodamine B isothiocyanate (0.1 mg/mL solution) in 0.1 M phosphate buffer (pH 9.0) 24 h prior to the deposition on mBC. The deposition was carried out as previously described. Images were acquired using an A1-Si Nikon (Japan) confocal laser scanning system built on a Nikon inverted microscope Ti-E using a Plan Apo 100×/1.4 Oil DIC objective. Diode lasers (488 nm and 561 nm) were used for excitation. FTIR spectra were recorded using a Nicolet iS10 FT-IR spectrometer equipped with an ATR accessory (SMART iTX). Cytotoxicity studies Mouse Embryonic Fibroblasts MEF ATCC SCRC-1008 (MEFs) were maintained in a cell culture dish containing DMEM with streptomycin (100 µg/mL) and penicillin (100 U/mL) supplemented with 5% (v/v) FBS. The cells were incubated at 37 °C, 90% humidity with 5% CO2. Before toxicity and proliferation assessment, the cells (at approximately 70% confluence) were washed twice with PBS solution and subsequently harvested after 3 min incubation with 1 mL of 0.25% trypsin with 0.1% EDTA. 
After adding 3 mL of DMEM [with 5% (v/v) FBS] the cell suspension was centrifugated at 1250 g for 5 min, the supernatant was removed, and the pellet was resuspended in DMEM and 5% (v/v) FBS. Toxicity and proliferation MEFs suspended in DMEM supplemented with 5% (v/v) FBS were seeded into a 48-well cell culture plate (0.5 mL) at 5.2 × 104 (cytotoxicity) or 2.5 × 104 (proliferation) cells/well and incubated (37 °C, 5% CO2, 90% humidity). After 8 h (proliferation) or 29 h (cytotoxicity) the medium was replaced with 0.5 mL of fresh DMEM (supplemented with 5% (v/v) FBS in the case of proliferation test) containing different nanospheres concentration. After 23.5 h (cytotoxicity) or 42 h (proliferation) XTT assay was performed. It is based on reduction of tetrazolium salt XTT to formazan salt which occurs only in metabolically active (live) cells. The medium was removed and 200 µL of fresh DMEM with 100 µL of the activated XTT mixture was added to each well [with 5% (v/v) FBS in a final solution in the case of proliferation test]. After 2.5 h of incubation the plate was analyzed using a microplate reader (EPOCH2, Biotek Instruments, Inc) by measuring the absorbance at 460 nm. The results are normalized to an untreated control (without the nanospheres). All the data were presented as the mean of three replicates with standard deviation of the mean. Low solubility of DEX in water (85 mg/L) (Messner and Loftsson 2010) limits its application as a component of the wound-healing dressings, as these are supposed to provide moist environment and are often based on hydrogels. On the other hand, DEX-P has similar anti-inflammatory and immunosuppressive properties, while showing higher solubility in aqueous media, thus it can be successfully introduced into the hydrogel matrix. To ensure the control over the release of DEX-P we have encapsulated it into the nanoparticulate system based on natural polysaccharides, and then used these particles to modify the surface of mBC. Encapsulation of DEX-P in ALG/HPC particles The initial experiments allowed us to select the optimal composition of the ALG/HPC hydrogel matrix of the particles, which in the case of DEX-P was found to be 4:1 w/w. We have shown previously that this particular hydrogel composition can be used to encapsulate bioactive macromolecules [heparin (Karewicz et al. 2010)] and alkaline phosphatase (Karewicz et al. 2014). The DEX-P containing submicroparticles were obtained using the extrusion technique. Their diameter was maintained at an average value of 170 nm by adjusting the injection rate of the polymeric mixture containing the active agent into the solution of cross-linking agent (0.2 M CaCl2) and the flow of the inert gas, which was applied parallel to the injection flow. Figure 1a shows a typical SEM image of the particles produced using the method described above. They were spherical in shape and showed moderate tendency to aggregate. The dispersity of the obtained particles was significant, which is a typical outcome of the extrusion method. However, the diameter of 96% of particles did not exceed 500 nm, with the average diameter of 175 nm. The histogram showing the distribution of sizes of the obtained particles is presented in Fig. 1b. 
a SEM analysis of the uncoated ALG/HPC particles with DEX-P, b histogram showing size distribution of ALG/HPC particles Coating ALP/HPC particles with chitosan To facilitate the attachment of the DEX-P-loaded particles to the surface of mBC via EDC/NHS chemistry, as well as to increase the control over the release profile, the nanospheres were coated with a thin layer of chitosan using polycation-polyanion electrostatic interactions. The chitosan-coated particles (ALG/HPC-Ch) retained the spherical shape and did not significantly change their average size as illustrated by the SEM image (Fig. 2). The diameter of 96% of the chitosan-coated particles did not exceed 600 nm, with the average diameter of 190 nm. Due to the fact that the coating process was carried out in an aqueous solution, the drug loss was unavoidable and it was estimated that ca. 18% of the initially loaded drug was lost. The concentration of the drug released from the ALG/HPC-Ch particles was, however, still at the desired therapeutic level (Dayanarayana et al. 2014). a SEM image of chitosan-coated particles (ALG/HPC-Ch) containing encapsulated DEX-P, b histogram showing size distribution of ALG/HPC-Ch with DEX-P Release profiles of DEX-P from ALG/HPC and ALG/HPC-Ch particles Release profiles of DEX-P from the submicroparticles were studied under physiological conditions (PBS, pH 7.4, 37 °C). Each experiment was conducted fourfold, and each collected supernatant was measured in triplicate. Figure 3 shows a typical chromatogram (panel A) and the release profiles (panel B). 50% of DEX-P was released from the uncoated submicroparticles within the first 45 min and from the particles coated with chitosan within the first 75 min. In both systems the drug was still being released after 24 h. a Typical chromatogram of DEX-P released under physiological conditions (37 °C, pH 7.4) from the uncoated ALG/HPC particles containing DEX-P, b The release profiles of DEX-P from the uncoated ALG/HPC particles (squares) and particles coated with chitosan (circles) Based on the total amount of DEX-P released from the submicroparticles, the encapsulation efficiency (EE) and loading capacity (LC) were calculated for both uncoated and chitosan-coated particles. The obtained EE and LC values for uncoated particles were found to be (65.1 ± 1.1)% and (14.9 ± 0.6)%, respectively. For the chitosan coated particles these values were slightly lower: (53.0 ± 1.9)% and (12.2 ± 1.7)%, respectively. EE was satisfactory for both systems, and LC was comparable or higher than that obtained for DEX-P in other micro/nanoparticulate systems described in literature (Jaraswekin et al. 2007). Deposition of ALG/HPC-Ch particles on mBC The ALG/HPC/Ch submicroparticles containing DEX-P were covalently attached to the surface of Bionanocellulose® modified with carboxymethyl groups (mBC) using EDC/NHS methodology. The coupling reaction led to the formation of amide bonds between the amino groups of chitosan and the carboxyl groups of mBC. To avoid the use of an excess of the coupling agents, the concentrations of EDC and NHS necessary for activation of carboxymethyl groups were first optimized. In order to minimize the release of DEX-P from the nanospheres during their binding to mBC, the shortest required exposure time to the coupling agents solution was also established. The optimal concentrations and exposure time were found based on the changes in the amount of the particles bound to mBC and DEX-P released from the mBC modified with nanospheres (Table 1). 
To estimate the weight of attached particles, the weight of particles remaining in the solution after the deposition process was determined. The concentration of DEX-P was measured with HPLC in the same conditions as in the release profile experiments. In all experiments EDC:NHS ratio was 1:3. Optimization of the mBC activation procedure with EDC and NHS as coupling agents Time of exposure to EDC/NHS (h) Concentration of EDC (mol/L) Concentration of NHS (mol/L) DEX-P released from attached particles (mg/cm2) Weight of attached particles (mg/cm2) The optimal time of mBC exposure to coupling agents was established first and was found to be 2 h. It enabled effective attachment of particles (19.93 ± 0.10 mg of particles/cm2) and the highest amount of drug released from mBC (2.15 ± 0.15 mg/cm2). The amount of attached particles decreased only slightly with decreasing concentration of EDC and NHS therefore the amount of both compounds could be significantly reduced while maintaining an effective attachment of the nanospheres to mBC. A fourfold reduction of EDC and NHS concentrations (to 0.008 M and 0.025 M, respectively) allowed to remove any detectable (based on HPLC analysis) traces of coupling agents from the material by washing with small amount of water, without any significant reduction in the amount of entrapped DEX-P. To confirm the formation of amide bonds, the FTIR-ATR spectra of the unmodified mBC sheets, ALG/HPC-Ch and ALG/HPC-Ch-mBC were measured (Fig. 4). The covalent attachment of the particles to the mBC surface was confirmed by the presence of the strong amide band I (at 1644 cm−1, stretching vibrations of C=O group of amide bond) and amide band II (1561 cm−1, deforming vibrations of N–H group of amide bond) in the sample of ALG/HPC-Ch-mBC FTIR-ATR spectra of (a) mBC, (b) ALG/HPC-Ch, and (c) ALG/HPC-Ch-mBC a SEM image of mBC, b SEM image of ALG/HPC-Ch-mBC, c histogram showing size distribution of ALG/HPC-Ch bound to the surface of mBC The morphology of the ALG/HPC-Ch-mBC material was visualized using SEM, AFM and confocal microscopy. SEM images revealed the presence of a large amount of nanometric spherical structures at the surface of mBC (Fig. 5). SEM images of the surface of pristine mBC and the surface of mBC with attached ALG/HPC-Ch particles are shown in Fig. 5a, b. The comparison of the histograms presented on Figs. 2b and 5c leads to the conclusion that the smaller submicroparticles are preferentially attached to mBC. This observation was confirmed using the AFM visualization. The results of the AFM measurements are presented in Fig. 6. The strands of submicroparticles decorating nanocellulose fibrils are clearly visible. The analysis of the AFM images allows to define the average size of the attached particles as being around 100–120 nm, which is in a good agreement with the SEM analysis. It should be stressed, that due to the significant pressure of the tip exerted on the soft hydrogel particle, the AFM images do not represent well the shape of the particle, thus the diameter can be reasonably estimated only in the horizontal direction. a AFM image of mBC, b AFM image of ALG/HPC-Ch-mBC material containing DEX-P, c AFM cross-section profiles marked as 1 and 2 in the image b The confocal microscopy was used to study the extent to which the particles penetrate the 3D structure of the mBC sheet. The mBC fibrils and DEX-P-loaded particles were fluorescently labeled with fluorescein sodium salt and rhodamine B isothiocyanate (green and red fluorescence), respectively. 
The 3D image (Fig. 7a) shows a spatial distribution of the ALG/HPC-Ch particles in mBC. Most of the submicroparticles occupy the surface or the space close to the surface of mBC, however, small number of particles was distributed evenly in the whole volume of the mBC sheet. Therefore, although most of the drug would be released form the surface, a small amount of DEX-P will diffuse slowly through the nanocellulose hydrogel to reach the contact with the wound much later, thus possibly prolonging the healing effect of the material. The 2D image (Fig. 7b) shows the surface of mBC with particles distributed alongside the cellulose nanofibrils. Confocal microscopy images of the mBC labelled with fluorescein after the deposition of rhodamine B-labelled ALG/HPC-Ch submicroparticles containing DEX-P: a 3D image in the green channel (FITC), b 3D image in the red channel (rhodamine B), c 3D image showing a merge of green and red channels, d 2D image of the sample—merge of green and red channels. (Color figure online) Release of DEX-P from ALG/HPC-Ch-mBC hydrogel material The release of DEX-P from the mBC-attached submicroparticles was studied. The drug was released from the material in a controlled manner, as shown in Fig. 8. The release profile is advantageous, with fast release during the first few hours, and slow but still significant delivery of DEX-P for up to 2 days. A separate experiment was also designed to confirm the beneficial impact of the encapsulation of DEX-P in the submicroparticles attached to the mBC hydrogel matrix on the resulting release profile. For this purpose mBC was incubated in 0.75 mg/mL solution of DEX-P in PBS for 1 h in order to allow its diffusion into the material. The release profile of the free drug entrapped physically in mBC was then studied. The obtained profile (Fig. 8) was characterized by undesirable "burst release" and all the entrapped drug was released within 2 h. That confirms that our approach involving entrapment of DEX-P in the mBC hydrogel matrix is reasonable. Release profiles of DEX-P from ALG/HPC-Ch-mBC (squares) and from mBC pre-incubated in a PBS solution of free DEX-P (0.75 mg/mL) (circles) We attempted to fit different kinetic models, frequently applied to the drug release from the nano/microparticulate systems, to our data in order to obtain some insight into the possible release mechanism. The fitting parameters obtained for the three models used (Higuchi, Peppas and Weibull) are presented in Table 2. The Higuchi model resulted in the relatively poor approximation for all of the obtained systems, thus it was not taken into consideration in further analysis. Fitting the experimental data to the Peppas model gave the highest R2 values for both: unbound and m-BC-bound chitosan-coated particles, therefore it was used as the best model for the release from the proposed dressing. It also gave a relatively good fit for uncoated particles, as illustrated in Fig. 9. This model is a short time approximation, so the fitting procedure has to be limited to the first 60% of the release profile. In this semi-empirical equation a is the kinetic constant and k is an exponent characterizing the diffusion mechanism. For uncoated particles the release exponent k suggested that the drug release was driven by a Fickian diffusion from the spherical matrix, whereas for coated particles and mBC attached particles a non-Fickian diffusion is the most possible release mechanism. The empirical Weibull model allows to take into consideration the whole data set. 
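To make the comparison of these three empirical models concrete, the following Python sketch fits the Higuchi, Peppas and Weibull equations to a cumulative release profile and reports the fitted parameters and R². The release data used here are made up for demonstration only and are not the profiles measured in this work:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative-release data (fraction released vs. time in hours).
t = np.array([0.25, 0.5, 0.75, 1, 2, 4, 8, 12, 24])
q = np.array([0.28, 0.42, 0.50, 0.56, 0.68, 0.78, 0.88, 0.92, 0.97])

def higuchi(x, a):
    return a * np.sqrt(x)

def peppas(x, a, k):
    return a * x**k

def weibull(x, a, b, k, d):
    return a - (a - b) * np.exp(-(k * x)**d)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat)**2)
    ss_tot = np.sum((y - np.mean(y))**2)
    return 1 - ss_res / ss_tot

# The Peppas (power-law) model is a short-time approximation, so it is fitted
# only to the first ~60% of the release, as described in the text.
mask = q <= 0.6
models = {
    "Higuchi": (higuchi, t, q, [0.5]),
    "Peppas": (peppas, t[mask], q[mask], [0.5, 0.5]),
    "Weibull": (weibull, t, q, [1.0, 0.0, 0.5, 1.0]),
}

for name, (f, x, y, p0) in models.items():
    # non-negative parameters keep the fractional exponents well defined
    params, _ = curve_fit(f, x, y, p0=p0, bounds=(0, np.inf))
    print(name, np.round(params, 3), "R2 =", round(r_squared(y, f(x, *params)), 4))
```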
Depending on the correlation between the d value in the Weibull equation and the type of diffusional mechanism of drug release, one can specify the type of release from the obtained particles (Papadopoulou et al. 2006). For the uncoated particles d was found to be 0.75, which suggests that Fickian diffusion in either fractal or Euclidian spaces is dominant. For mBC with attached submicroparticles d is in the range of 0.75–1, which is typical for a combined (Fickian diffusion and Case II transport) mechanism. For chitosan-coated particles d is higher than 1, suggesting a more complex release mechanism. In all cases the Peppas model analysis leads to conclusions that agree with those obtained from the Weibull model.

Table 2 Parameters obtained by fitting different release kinetic models to the experimental release profiles for the uncoated submicroparticles, the unbound chitosan-coated submicroparticles and the mBC with attached chitosan-coated submicroparticles. The fitted equations were: Higuchi, $y = a\sqrt{x}$; Peppas, $y = ax^{k}$; Weibull, $y = a - (a - b)e^{-(kx)^{d}}$.

Fig. 9 Release data fits to the Peppas model for ALG/HPC particles (squares), ALG/HPC-Ch particles (circles) and mBC-bound ALG/HPC-Ch particles (triangles)

Biological studies—cell viability and proliferation

Nanocellulose is well known to be highly biocompatible and non-toxic (Lin and Dufresne 2014). To verify whether the ALG/HPC-Ch submicroparticles exert any toxic effect on fibroblasts, a cell viability assay was performed. Figure 10a shows that at concentrations up to 20 mg/mL the empty ALG/HPC-Ch submicroparticles (carrier) caused no toxicity to the mouse embryonic fibroblasts (MEFs), while for the DEX-P-loaded particles only slight toxicity was observed, with cell viability in the range of 80–95%. At higher ALG/HPC-Ch concentrations (40 and 80 mg/mL) some toxicity of the carrier was registered for empty particles (still, the viability was above 70%), whereas the protective effect of DEX-P was revealed. This is in agreement with previous reports in which a protective effect of DEX on various cell lines, including endothelial cells (Zakkar et al. 2011) and fibroblasts (Mendoza-Milla et al. 2005), was observed. These results, and the absence of significant fibroblast necrosis, suggest that the DEX-containing dressing is safe. This is essential, as the healthy tissue on the edges of the wound may be in contact with the dressing. It is also important for the wound itself, as necrosis may increase the inflammatory response (Davidovich et al. 2014), resulting in elevated pro-fibrotic activity (White and Mantovani 2013). To assess the influence of the DEX-P-loaded ALG/HPC-Ch particles on the proliferation of fibroblasts, a proliferation test was also performed (Fig. 10b). Proliferation was not hindered by either empty or loaded submicroparticles at concentrations below 10 mg/mL. Although DEX-P positively influenced the proliferation observed for the loaded ALG/HPC-Ch particles in the concentration range of 10–20 mg/mL, at higher concentrations the inhibitory effect of both the carrier and the drug on the proliferation rate is clearly visible. The inhibitory effect was more pronounced for the DEX-P-containing submicroparticles, as expected (Wu et al. 2006). One can conclude that the DEX-P-loaded submicroparticles at concentrations equal to or higher than 40 mg/mL can effectively inhibit fibroblast proliferation.
As shown before, 40 mg of DEX-P-loaded submicroparticles is deposited on ca. 5 cm2 of the ALG/HPC-Ch-mBC surface, although it can be also observed, that the material will contain relatively limited amount of water and will be in direct contact with the wound. This may lead to higher DEX-P concentrations and a satisfactory decrease in proliferation rate even at considerably lower surface of contact. a MEF cell viability test results, b MEF cell proliferation test results (black—empty particles, grey—DEX-P-loaded particles) DEX-P was successfully encapsulated in the ALG/HPC submicroparticles. These particles were then surface-modified with chitosan to obtain ALG/HPC-Ch system. Both ALG/HPC and ALG/HPC-Ch particles showed spherical morphology, high DEX-P encapsulation efficiency (65.1% and 53%, respectively) and drug loading values, which were comparable to other delivery systems for DEX-P. A thin layer of chitosan coating increased the control over the DEX-P release profile and allowed to covalently attach ALG/HPC-Ch particles to mBC. DEX-P was released from the particles attached to mBC in a controlled manner for up to 2 days. Additional experiments confirmed the advantage of DEX-P encapsulation in submicroparticles prior to introduction into mBC matrix over the direct introduction of the drug into mBC matrix. Preliminary biological studies showed that no toxicity is induced by both empty and DEX-P-loaded submicroparticles up to 20 mg/mL and only a slight decrease in fibroblasts viability at the concentration up to 80 mg/mL could be observed. At the concentration above 40 mg/mL the DEX-P-loaded particles have effectively inhibited proliferation of fibroblasts. Based on the results of our studies we can conclude that we have successfully fabricated the novel bioactive wound dressing material combining the advantageous properties of BC and these of dexamethasone. Due to the decoration of BC surface with submicroparticles containing encapsulated DEX-P the drug release from the material could be controlled, ensuring its local concentration at the required therapeutic level (e.g. that needed to inhibit the fibroblasts proliferation). In view of the negligible cytotoxicity combined with anti-inflammatory properties and ability to inhibit fibroblast proliferation the proposed system may constitute a promising bioactive dressing useful for the treatment of the wound fibrosis. Authors would like to thank The National Centre for Research and Development (NCBiR) for the financial support in the form of grant no. K/NCB/000013 obtained in the frame of the INNOTECH Programme. The research was carried out with the equipment purchased thanks to the financial support of the European Regional Development Fund in the framework of the Polish Innovation Economy Operational Programme (Contract No. POIG.02.01.00-12-023/08). Karol Wolski would like to thank the Fundation for Polish Science for the financial support (START 96.2018). Bao Z, Gao P, Xia G, Wang Z, Kong M, Feng Ch, Cheng X, Liu Y, Chen X (2016) A thermosensitive hydroxybutyl chitosan hydrogel as a potential co-delivery matrix for drugs on keloid inhibition. J Mater Chem B 4:3936–3944. https://doi.org/10.1039/C6TB00378H CrossRefGoogle Scholar Beule AG, Steinmeier E, Kaftan H, Biebler KE, Gopferich A, Wolf E, Hosemann W (2009) Effects of a dexamethasone-releasing stent on osteoneogenesis in a rabbit model. Am J Rhinol Allergy 23:433–436. 
https://doi.org/10.2500/ajra.2009.23.3331 CrossRefGoogle Scholar Davidovich P, Kearney CJ, Martin SJ (2014) Inflammatory outcomes of apoptosis, necrosis and necroptosis. Biol Chem 395:1163–1171. https://doi.org/10.1515/hsz-2014-0164 CrossRefGoogle Scholar Dayanarayana U, Doggalli N, Patil K, Shankar J, Sanjay MKP (2014) Non surgical approaches in treatment of OSF. IOSR-JDMS 13:63–69. https://doi.org/10.9790/0853-131136369 CrossRefGoogle Scholar Guzdek K, Lewandowska-Łańcucka J, Zapotoczny S, Nowakowska M (2018) Novel bionanocellulose based membrane protected with covalently bounded thin silicone layer as promising wound dressing material. Appl Surf Sci 459:80–85. https://doi.org/10.1016/j.apsusc.2018.07.180 CrossRefGoogle Scholar Hickey T, Kreutzer D, Burgess DJ, Moussy F (2002) Dexamethasone/PLGA microspheres for continuous delivery of an anti-inflammatory drug for implantable medical devices. Biomaterials 23:1649–1656. https://doi.org/10.1016/S0142-9612(01)00291-5 CrossRefGoogle Scholar Hu J-B, Kang X-Q, Liang J, Wang X-J, Xu X-L, Yang P, Ying X-Y, Jiang S-P, Du Y-Z (2017) E-selectin-targeted sialic acid-PEG-dexamethasone micelles for enhanced anti-inflammatory efficacy for acute kidney injury. Theranostics 7:2204–2219. https://doi.org/10.7150/thno.19571 CrossRefGoogle Scholar Jaraswekin S, Prakongpan S, Bodmeier R (2007) Effect of poly(lactide-co-glycolide) molecular weight on the release of dexamethasone sodium phosphate from microparticles. J Microencapsul 24:117–128. https://doi.org/10.1080/02652040701233655 CrossRefGoogle Scholar Karewicz A, Zasada K, Szczubiałka K, Zapotoczny S, Lach R, Nowakowska M (2010) "Smart" alginate-hydroxypropylcellulose microbeads for controlled release of heparin. Int J Pharm 385:163–169. https://doi.org/10.1016/j.ijpharm.2009.10.021 CrossRefGoogle Scholar Karewicz A, Zasada K, Bielska D, Douglas TEL, Jansen JA, Leeuwenburgh SCG, Nowakowska M (2014) Alginate-hydroxypropylcellulose hydrogel microbeads for alkaline phosphatase encapsulation. J Microencapsul 31:68–76. https://doi.org/10.3109/02652048.2013.805841 CrossRefGoogle Scholar Li J, Fu R, Li L, Yang G, Ding S, Zhong Z, Zhou S (2014) Co-delivery of Dexamethasone and green tea polyphenols using electrospun ultrafine fibers for effective treatment of keloid. Pharm Res 31:1632–1643. https://doi.org/10.1007/s11095-013-1266-2 CrossRefGoogle Scholar Lin N, Dufresne A (2014) Nanocellulose in biomedicine: current status and future prospect. Eur Polym J 59:302–325. https://doi.org/10.1016/j.eurpolymj.2014.07.025 CrossRefGoogle Scholar Liu J, Chinga-Carrasco G, Cheng F, Xu W, Willför S, Syverud K, Xu Ch (2016) Hemicellulose-reinforced nanocellulose hydrogels for wound healing application. Cellulose 23:3129–3143. https://doi.org/10.1007/s10570-016-1038-3 CrossRefGoogle Scholar Mendoza-Milla C, Machuca Rodríguez C, Córdova Alarcón E, Estrada Bernal A, Toledo-Cuevas EM, Martínez Martínez E, Zentella Dehesa A (2005) NF-κB activation but not PI3 K/Akt is required for dexamethasone dependent protection against TNF-α cytotoxicity in L929 cells. FEBS Lett 579:3947–3952. https://doi.org/10.1016/j.febslet.2005.05.081 CrossRefGoogle Scholar Messner M, Loftsson T (2010) Solubility and permeability of steroids in water in the presence of potassium halides. Pharmazie 65:83–85. https://doi.org/10.1691/ph.2010.9211 Google Scholar Moritz S, Wiegand C, Wesarg F, Hessler N, Muller FA, Kralisch D, Hipler UC, Fischer D (2014) Active wound dressings based on bacterial nanocellulose as drug delivery system for octenidine. 
Int J Pharm 471:45–55. https://doi.org/10.1016/j.ijpharm.2014.04.062 CrossRefGoogle Scholar Napavichayanun S, Yamdech R, Aramwit P (2016) The safety and efficacy of bacterial nanocellulose wound dressing incorporating sericin and polyhexamethylene biguanide: in vitro, in vivo and clinical studies. Arch Dermatol Res 308:123–132. https://doi.org/10.1007/s00403-016-1621-3 CrossRefGoogle Scholar Nieuwenhuis B, Luth A, Kleuser B (2010) Dexamethasone protects human fibroblasts from apoptosis via an S1P3-receptor subtype dependent activation of PKB/Akt and BclXL. Pharmacol Res 61:449–459. https://doi.org/10.1016/j.phrs.2009.12.005 CrossRefGoogle Scholar Papadopoulou V, Kosmidis K, Vachou M, Macheras P (2006) On the use of the Weibull function for the discernment of drug release mechanisms. Int J Pharm 309:44–50. https://doi.org/10.1016/j.ijpharm.2005.10.044 CrossRefGoogle Scholar Raghavendra Reddy Y, Srinath N, Nandakumar H, Rajini Kanth M (2012) Role of collagen impregnated with dexamethasone and placentrix in patients with oral submucous fibrosis. J Maxillofacc Oral Surg 11:166–170. https://doi.org/10.1007/s12663-011-0274-1 CrossRefGoogle Scholar Ren H, Liang D, Jiang X, Tang J, Cui J, Wei Q, Zhang S, Yao Z, Shen G, Lin S (2015) Variance of spinal osteoporosis induced by dexamethasone and methylprednisolone and its associated mechanism. Steroids 102:65–75. https://doi.org/10.1016/j.steroids.2015.07.006 CrossRefGoogle Scholar Spaic M, Small DP, Cook JR, Wan W (2014) Characterization of anionic and cationic functionalized bacterial cellulose nanofibres for controlled release applications. Cellulose 21:1529–1540. https://doi.org/10.1007/s10570-014-0174-x CrossRefGoogle Scholar Ul-Islam M, Khan T, Khattak WA, Park JK (2013) Bacterial cellulose-MMTs nanoreinforced composite films: novel wound dressing material with antibacterial properties. Cellulose 20:589–596. https://doi.org/10.1007/s10570-012-9849-3 CrossRefGoogle Scholar Vardy J, Chiew KS, Galica J, Pondand GR, Tannock IF (2006) Side effects associated with the use of dexamethasone for prophylaxis of delayed emesis after moderately emetogenic chemotherapy Br. J Cancer 94:1011–1015. https://doi.org/10.1038/sj.bjc.6603048 CrossRefGoogle Scholar Wang AS, Armstrong EJ, Armstrong AW (2013) Corticosteroids and wound healing: clinical considerations in the perioperative period. Am J Surg 206:410–417. https://doi.org/10.1016/j.amjsurg.2012.11.018 CrossRefGoogle Scholar Weijtens O, Schoemaker RC, Romijn FP, Cohen AF, Lentjes EG, van Meurs JC (2002) Intraocular penetration and systemic absorption after topical application of dexamethasone disodium phosphate. Ophthalmology 109:1887–1981. https://doi.org/10.1016/S0161-6420(02)01176-4 CrossRefGoogle Scholar White ES, Mantovani AR (2013) Inflammation, wound repair, and fibrosis: reassessing the spectrum of tissue injury and resolution. J Pathol 229:141–144. https://doi.org/10.1002/path.4126 CrossRefGoogle Scholar Wu WS, Wang F-S, Yang KD, Huang CC, Kuo YR (2006) Dexamethasone induction of keloid regression through effective suppression of VEGF Expression and keloid fibroblast proliferation. J Investig Dermatol 126:1264–1271. 
https://doi.org/10.1038/sj.jid.5700274 CrossRefGoogle Scholar Zakkar M, le Luong A, Chaudhury H, Ruud O, Punjabi PP, Anderson JR, Mullholand JW, Clements AT, Krams R, Foin N, Athanasiou T, Leen EL, Mason JC, Haskard DO, Evans PC (2011) Dexamethasone arterializes venous endothelial cells by inducing mitogen-activated protein kinase phosphatase-1: a novel antiinflammatory treatment for vein grafts? Circulation 123:524–532. https://doi.org/10.1161/CIRCULATIONAHA.110.979542 CrossRefGoogle Scholar Zhang B, Molino PJ, Harris AR, Yue Z, Moulton SE, Wallace GG (2016) Conductive and protein resistant polypyrrole films for dexamethasone delivery. J Mater Chem B 4:2570–2577. https://doi.org/10.1039/c5tb00574d CrossRefGoogle Scholar 1.Faculty of ChemistryJagiellonian University in KrakówKrakówPoland Rojewska, A., Karewicz, A., Baster, M. et al. Cellulose (2019) 26: 1895. https://doi.org/10.1007/s10570-018-2182-8 Received 17 August 2018 Accepted 06 December 2018 First Online 10 December 2018
Places and Preferences: A Longitudinal Analysis of Self-Selection and Contextual Effects

Published online by Cambridge University Press: 21 October 2014

Aina Gallego, Franz Buscha, Patrick Sturgis and Daniel Oberski

Contextual theories of political behaviour assert that the contexts in which people live influence their political beliefs and vote choices. Most studies, however, fail to distinguish contextual influence from self-selection of individuals into areas. This article advances understanding of this controversy by tracking the left–right position and party identification of thousands of individuals over an eighteen-year period in England before and after residential moves across areas with different political orientations. There is evidence of both non-random selection into areas and assimilation of new entrants to the majority political orientation. These effects are contingent on the type of area an individual moves into and contextual effects are weak and dominated by the larger effect of self-selection into areas.

British Journal of Political Science, Volume 46, Issue 3, July 2016, pp. 529–550. DOI: https://doi.org/10.1017/S0007123414000337. Copyright © Cambridge University Press 2014

It is unexceptional to remark that the political preferences of a national population are not randomly distributed across geographical areas. We know considerably less, however, about how this spatial clustering comes about. Contextual theories of political behaviour assert that elements of the environment in which individuals are situated exert a causal influence on the political parties and policies they prefer.Footnote 1 People, the argument goes, progressively assimilate through a variety of social-psychological mechanisms to the dominant political orientation of the environments in which they live. Consistent with these theories, a long tradition of research in electoral geography has examined the cross-sectional correlation between individual political preferences and social and political characteristics of local contexts.Footnote 2 Despite the large body of existing research in this area, however, the question of whether local contexts actually cause political preferences to change remains contested. Scholars have long argued that the correlation between individual political preferences and contextual characteristics is driven, in whole or in part, by self-selection of people into areas with congruent political beliefs.Footnote 3 It is our contention in this article that this longstanding debate remains unresolved because previous studies based on cross-sectional data have not been able to separate self-selection and contextual effects from one another convincingly. Although falling some way short of the gold standard of random assignment of individuals to areas, a longitudinal research design provides a considerably more satisfactory means of identifying the independent effects of assimilation and self-selection.
This is because panel data enable the tracking of changes in political preferences before and after individuals move to contexts with different political majorities. Yet, because of the strong data requirements that longitudinal approaches impose, few examples of this type of strategy can be found in the existing literature. This article advances understanding in this area by assessing the causal effect of contexts on individual political orientations by tracking the preferences of individuals before and after residential moves, over an eighteen-year period. We do not seek to address the full range of causes and political consequences of internal migration, but focus our attention on two narrow yet fundamental research questions: Do people select into areas that exhibit majority political beliefs congruent with their own? And are individual political preferences influenced by the political orientation of the area into which an individual moves? To foreshadow our later results, our findings show that people are more likely to choose areas in which to live that are congruent with their pre-existing political preferences. Yet political orientation plays only a very limited, if any, role in location choice. Rather it is the socio-economic characteristics of individuals that are correlated with political preference, such as work and parental status, which are consequential in this regard. Self-selection into areas occurs, in short, for non-political reasons. We also find that in the years following a residential move, an individual's political preferences become more aligned with the majority political orientation of the area into which they moved. However, this process of assimilation is both weak and contingent upon area type; in England only those moving into strongly Conservative areas from other types of area exhibit evidence of contextual effects. Establishing that both self-selection and, albeit limited, contextual effects exist is important for several reasons. In a fundamental sense, it provides political scientists with a better understanding of the origins of political preferences. From a more practical perspective, both processes are likely to have important political consequences because they produce, over time, more spatially polarized political preferences. In politically homogeneous communities, residents are less likely to be exposed to diverse opinions, with negative implications for social and political tolerance, and the majority party will have larger margins of victory, resulting in less competitive elections. Geographical polarization of political preferences has also been shown to generate electoral biases in the translation from votes to seats in majoritarian systems and affects the incentives of parties to modify their policy platforms.Footnote 4 The remainder of this article is structured as follows. First, we review the existing literature on the geographical clustering of political preferences. A discussion of the limitations of existing methodological approaches to untangling contextual effects and selection mechanisms leads us to conclude that a longitudinal research strategy is required. We then describe the dataset and key measures to be used in our analysis and detail our model-fitting strategy, before presenting our empirical results. We conclude with a discussion of the limitations of our own approach and a consideration of the substantive implications of our findings. 
Scholars have argued that contextual influence occurs through a variety of social-psychological mechanisms,Footnote 5 including interpersonal contact and persuasion,Footnote 6 party mobilization,Footnote 7 exposure to shared local socio-economic conditions,Footnote 8 common local interests,Footnote 9 and exposure to low-intensity information cues.Footnote 10 For instance, theories of interpersonal contact argue that members of the political majority will tend to have their views reinforced, whereas those in the minority will change their views through processes of persuasion and conformity, generating a more homogeneous community outlook over time. Strong local party branches can also persuade local residents to change their political views through outreach activities. A large body of empirical evidence supports the main prediction of contextual theories in showing that there is a robust correlation between individual political preferences and area-level characteristics. For instance, Butler and Stokes showed that residents of British parliamentary constituencies vote for the local majority more frequently than would be expected for the population as a whole.Footnote 11 Similar correlations between measures of aggregate political orientations and individual political preferences have been replicated many times since.Footnote 12 Other studies have shown that socio-economic characteristics of neighbourhoods, variously defined and measured, predict vote choice above and beyond individual level characteristics.Footnote 13 Many studies have also found evidence consistent with the mechanisms hypothesized to underpin assimilation effects, such as interpersonal discussion or political mobilization.Footnote 14 While the large majority of existing studies rely on cross-sectional evidence, some have used short-run panels. Pattie and Johnston examined the characteristics of individuals who switched parties between the 1987 and 1992 general elections in Britain and found that people reporting discussion partners who supported a different party were more likely to have changed allegiance.Footnote 15 Johnston et al. showed with data from the British Household Panel Survey (BHPS) that people in the same constituency had changed their votes in similar directions across elections in 1992 and 1997, leading them to conclude that 'place matters'.Footnote 16 However, these two-wave studies remain somewhat inconclusive about causal order. That is to say, it may be the case that individuals who do not identify strongly with a party are both more likely to select discussion partners with different opinions and to switch parties. Hence, with only a few exceptions, it is still true that 'the standard approach in studies of context is to examine the effect that an aggregate-level compositional measure has on an individual behavior or attitude … covariation between the individual variable and aggregate variable is taken as evidence of a contextual effect'.Footnote 17 There are good reasons to assume that an individual's choice of a residential location will be correlated with prior political preferences. While it is unlikely that political preferences have a strong direct causal effect on residential choices, an indirect influence seems more plausible. 
This is because, when deciding where to relocate, movers must balance a broad range of preferences and constraints, including local housing prices, distance to work, quality of services, and tastes over the type of neighbourhood and the features of the house.Footnote 18 Political preferences are correlated with the socio-economic characteristics that constrain the choice of destination, such as income and family situation. Preferences are also correlated with tastes for many types of public goods (safety, scenic landscapes, cultural events, nightlife) which different localities provide.Footnote 19 Because of differences in both socio-economic constraints and location preferences, we expect that those on the left and on the right will exhibit different location choices when they move and, specifically, that they will be more likely to move to politically like-minded communities.

Recent research on the dynamics of partisan support has shown that, in the United States at least, movers tend to select areas that have majority political preferences more similar to their own than their original location.Footnote 20 While these studies provide evidence of non-random selection into areas, they tell us nothing about contextual effects and do not examine whether self-selection is political in nature.

Consistent with the view that self-selection trumps contextual effects, critics of contextual effect theories have pointed out that the correlation between contextual characteristics and individual preferences generally becomes negligible, or zero, when controlling for individual level characteristics.Footnote 21 Hence, summarizing the critics' position in the debate, King concludes: 'The geographical variation is usually quite large to begin with, but after we control for what we have learned about voters, there isn't much left for contextual effects'.Footnote 22 Concerns about selection bias can, of course, be mitigated by controlling for socio-economic characteristics, which influence both the propensity to move and political preferences. A conditioning strategy, though, requires that all the requisite control variables are known and measurable, which seems unrealistic given that many of the candidate variables are notoriously difficult to measure in surveys (for example, housing tastes, early socialization experiences, personality). Thus, statistical control using cross-sectional data is unlikely to be a wholly effective strategy for dealing with selection bias.

In summary, despite advances in both data and method, doubts remain over the core claim of theories of contextual effects: that contexts cause changes in political attitudes and behaviour. A more robust strategy for estimating the effects of contexts on political preferences is to make use of data containing longitudinal information about the same individuals over time.
The primary advantage of repeated measurements or panel data is that, under certain model specifications, it is possible to partial out all observed and unobserved time-invariant characteristics of individual units.Footnote 23 As Halaby puts it, 'the problem of causal inference is fundamentally one of unobservables, and unobservables are at the heart of the contribution of panel data to solving problems of causal inference'.Footnote 24 The incorporation of a longitudinal dimension yields crucial additional leverage on questions of causal order, making it possible to model within-individual change as a function of preceding events.Footnote 25 Because this approach is based on the analysis of change in both dependent and independent variables within individuals over time, the estimated model coefficients are purged of the effects of all fixed (or 'time-invariant') respondent characteristics. Such fixed characteristics comprise the 'usual suspects', such as gender, age cohort, and ethnicity, as well as less easily measurable variables such as personality traits and pre-adult socialization experiences. We are unaware of any existing study which has used this type of design to evaluate the extent and magnitude of self-selection and political assimilation to areas. This article has been motivated by the need to address this longstanding lacuna.

DATA AND METHODOLOGY

To estimate the effect of area-level political orientation on individual political preferences, we have used the BHPS. We tracked individuals who moved across electoral constituencies over an eighteen-year period and observed whether movers were more likely to choose constituencies where their pre-existing views were closer to the views of existing residents than other potential choices of location (self-selection). Additionally, we evaluated whether the self-selection effects observed were political in nature, which is to say that individuals chose 'like-minded' areas because of their political orientation. We also assessed whether individuals adopted the political preferences prevalent in their new contexts over time.

The BHPS is a large, high-quality repeated measures survey in which a stratified, multi-stage, random sample of British households has been interviewed annually since 1991. Computer-assisted face-to-face interviews were attempted with all household members aged 16 years or older. The initial Wave 1 household response rate was 74 per cent. Extensive efforts were made to track responding individuals across waves when a household had moved address, or when an individual moved from an existing household to a new one, such as when adult children left home, or when a cohabiting couple separated. The study achieved a tracking rate averaging 95 per cent across all waves. The BHPS was thus ideally suited to our objectives because it contains a large number of residentially mobile individuals for whom self-reports of political preferences are observed before and after a move. Our analysis uses eighteen waves of data from 1991 to 2008 inclusive, with almost 10,000 individuals clustered within over 5,000 households in the first wave (1991). We restrict our focus to England only, excluding households in Wales, Scotland and Northern Ireland because the party systems in these countries are sufficiently different from England to make combined analyses difficult to interpret. We also exclude observations of those aged under 18 in order to match our analysis sample with the voting age population in England.
We include 'new sample members' who join the BHPS through the formation of new households with 'original sample members' as well as 're-entrants' (i.e. those who had been non-respondents in the previous wave). These inclusion criteria yield an analysis sample of 17,373 individuals, who provide a combined total of 158,000 unique observations over the eighteen waves. The average number of waves completed by individuals is 9.14 and 4,100 individuals responded in all eighteen waves. To deal with the issue of non-random attrition, we include a range of control variables that predict drop-out from the study. Our estimates are, therefore, unbiased under the 'missing at random (MAR)' assumption, which we consider to be plausible in the current context.Footnote 26

Individual Political Preferences

The BHPS offers two options for specifying individual political preferences. The first is a standard measure of party identification, which was administered in all eighteen waves; the second is a multi-item scale designed to measure people's 'left–right' economic value orientation, which was administered in Waves 1, 3, 5, 7, 10, 14 and 17.Footnote 27 Each measure has contrasting advantages and disadvantages. Party identification was measured in every wave and enables us to detect potential assimilation effects which are not based on changes in an individual's underlying preferences and beliefs, for example as a result of differences in the quality of candidates. By contrast, the left–right scale provides a finer-grained measure of political orientation, which enables detection of smaller changes across and within individuals over time and is not subject to the potentially distorting influence of tactical voting. Because of their differing theoretical and empirical properties we undertook all analyses using both measures of political preference. For the left–right scale, we took the first principal component of the six items, which is appropriate for these items.Footnote 28 For party identification, we considered only supporters of the two main parties, Labour and the Conservatives. This yields a binary variable for party support which is considerably more straightforward to handle in a panel data regression framework than a nominal variable with more than two categories.

Areal Units

An important question in the study of contextual effects is how the areal units defining spatial location should be defined. The mechanisms through which contextual influence operates can manifest themselves at small (e.g. interaction with neighbours), intermediate (e.g. party mobilization in a constituency), or large (e.g. regional media) spatial scales. Studies of the influence of spatial scales on political behaviour have found, as in other substantive contexts, that choice of scale is consequential for the estimates obtained.Footnote 29 In this study, we have used electoral constituencies as our areal units. Constituencies are the key electoral boundary in first-order, parliamentary elections in the United Kingdom. In England, parliamentary constituencies contain an average of 70,000 voters, approximately the population of a small town. While interpersonal interactions with neighbours that might result in political assimilation happen at finer-grained levels of geography, the constituency level should still be capable of capturing local conditions and interactions in school and work-place settings that require some short-range mobility.
Additionally, since party organizations work to win a majority of the vote within the constituency boundaries and constituencies share the same MP, the political environment within a constituency will probably be more internally homogeneous than the country as a whole. Of more practical importance, however, is the fact that constituencies are the lowest geographical level at which it is possible to derive a useable measure of aggregate political orientation for the period in question. Electoral wards, which would be preferable with regard to size (they are smaller), are problematic because of the limited nature of information that can be attached to them and because their boundaries changed substantially between 1991 and 2008, rendering longitudinal analysis difficult. Other boundaries, such as census output areas,Footnote 30 have no feasible way of being linked to electoral results or to other measures of aggregate political orientation. Thus, while our choice of areal unit is not perfect, we believe it to be the best amongst the available alternatives. We return to the implications of our use of constituencies as the areal unit for the interpretation of our findings in the discussion section.

While English constituency boundaries were quite stable in the period between 1991 and 2008, the redistricting for the 1997 general election affected a non-trivial number of constituency boundaries. However, most of the boundary changes affected only a small number of electors and the results we present here are robust to excluding observations located in constituencies which were subject to boundary changes during the reference period.Footnote 31

Area-Level Political Orientation

We focus on the political orientation of the constituency as the main independent variable at the contextual level. Our measure is based on the electoral results for the four parliamentary elections held in 1992, 1997, 2001 and 2005. An intuitively appealing strategy would be to define constituency-level political orientation as the ratio (or similar function) of the vote share of the two main parties. However, this is challenging because of the sometimes significant role of tactical voting and of minor parties, which vary across constituencies and elections. Neither would it be clear how to apply vote shares to constituencies in non-election years. Therefore, we applied a typology to constituencies, placing them into one of six mutually exclusive categories:Footnote 32

— Safe Conservative constituencies (N=154): The Conservative party won a parliamentary seat in all four elections.
— Safe Labour constituencies (N=211): The Labour party won a parliamentary seat in all four elections.
— Marginal Conservative constituencies (N=12): The Conservative party won a parliamentary seat in three of the four elections.
— Marginal Labour constituencies (N=111): The Labour party won a parliamentary seat in three of the four elections.
— Safe or marginal Liberal-Democrat constituencies (N=31): The Liberal-Democratic party won a parliamentary seat in three or four elections.
— Mixed constituencies (N=47): None of the three main parties won a seat in three or four of the elections.

During the period of analysis, parties other than the main three won a parliamentary seat once in eight constituencies,Footnote 33 and only in one constituency did a different party win two elections.
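To make the typology concrete, the following sketch (not the authors' code) classifies a single constituency from the winners of the four elections; the party labels and the mapping of constituencies to election winners are assumptions made purely for illustration.

from collections import Counter

def classify_constituency(winners):
    # `winners` is a sequence of four party labels, one per election
    # (1992, 1997, 2001, 2005), e.g. ('CON', 'LAB', 'LAB', 'LAB').
    counts = Counter(winners)
    if counts.get('CON', 0) == 4:
        return 'Safe Conservative'
    if counts.get('LAB', 0) == 4:
        return 'Safe Labour'
    if counts.get('CON', 0) == 3:
        return 'Marginal Conservative'
    if counts.get('LAB', 0) == 3:
        return 'Marginal Labour'
    if counts.get('LD', 0) >= 3:
        return 'Safe or marginal Liberal Democrat'
    return 'Mixed'

# Example: a seat won three times by Labour and once by the Conservatives
print(classify_constituency(('CON', 'LAB', 'LAB', 'LAB')))  # Marginal Labour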
The mixed category is mostly made up of constituencies in which one of the three large parties won two elections and another of the large parties won the other two elections. Thus, these are the most competitive constituencies, where no party has a clear dominance. This classification of constituencies captures large differences in voting patterns. For instance, in the 1992 election the Conservative party won on average 57 per cent of the vote in safe Conservative seats but only 33 per cent in safe Labour seats. The Labour party received 18 per cent of the vote in safe Conservative seats and 53 per cent in safe Labour seats.

Residential Mobility

In our analysis sample, we observed a total of 14,500 residential moves from one wave to the next, which represents an average annual move rate of just over 9 per cent across the sample as a whole. Many of these relocations are, however, over a small distance within the same constituency and so would not be expected to result in discernible change in the external political environment. Therefore, we further restricted our definition of 'movers' to individuals who relocated to a different parliamentary constituency. This reduced the number of moves we observe by approximately half, yielding a total of 7,437. Using this definition, 69 per cent of respondents did not move at all, 17.5 per cent moved once, 7 per cent moved twice and 6.5 per cent moved three times or more during the period of observation.Footnote 34

Table 1 shows one-year transition probabilities (as percentages) for moves between different constituency types. Only individuals who were observed in at least two consecutive waves can be included in transition tables, which results in a reduction of the sample size from 142,000 to 125,000 observations.Footnote 35 The diagonal of Table 1 comprises observations which remained in the same constituency type in any two-year period. The majority of observations did not move into different constituency types. However, of the 36,900 observations in safe Conservative constituencies, 315 moved into safe Labour constituencies in a subsequent year. Similarly, of the 41,100 observations in safe Labour constituencies, 417 moved to a safe Conservative constituency in a subsequent year. Of the 28,000 observations in marginal Labour constituencies, approximately 400 moved to safe Labour and safe Conservative constituencies respectively.

Table 1. BHPS Constituency Transition Probabilities
Source: BHPS 1991–2008. Note: The table shows the total number of moves over all transition pair-years and, below these, the average percentage of respondents transitioning in such pair-years.

Table 2. Panel Regression Models for Political Assimilation and Selection Effects
Notes: Cluster standard errors in parentheses. Additional move types include Safe Con to Safe Con, Safe Lab to Safe Lab and Other Mover Types. Individual controls include: age, gender, education, marital status, children, income, class status, employment status, part/full-time, health status and time dummies. Coefficients in Models 1 to 5 are ordinary least squares; in Models 6 to 10 coefficients are odds ratios. Note that the long-run multiplier for Models 6 to 10 is derived by summing the logit coefficients and converting the sum to an odds ratio, rather than by summing the individual odds ratios. *p<0.05, **p<0.01, ***p<0.001. † Reference group consists of those who have not moved.
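The one-year transitions summarized in Table 1 amount to a cross-tabulation of each respondent's constituency type in consecutive waves. A minimal sketch of that tabulation follows, assuming a hypothetical person-wave DataFrame with the column names shown (this is illustrative data, not the BHPS extract itself).

import pandas as pd

# Hypothetical person-wave panel with the six-category constituency typology
panel = pd.DataFrame({
    'pid':      [1, 1, 1, 2, 2, 3, 3],
    'wave':     [1, 2, 3, 1, 2, 1, 2],
    'con_type': ['Safe Con', 'Safe Con', 'Safe Lab',
                 'Marginal Lab', 'Safe Lab', 'Mixed', 'Mixed'],
})

panel = panel.sort_values(['pid', 'wave'])
# Constituency type one wave ahead, within person
panel['con_type_next'] = panel.groupby('pid')['con_type'].shift(-1)
pairs = panel.dropna(subset=['con_type_next'])

# Counts of transitions and row percentages, as reported in Table 1
counts = pd.crosstab(pairs['con_type'], pairs['con_type_next'])
row_pct = pd.crosstab(pairs['con_type'], pairs['con_type_next'],
                      normalize='index') * 100
print(counts)
print(row_pct.round(1))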
Because the inclusion of the full set of transition probabilities results in categories with small cell sizes, we collapsed the full set of transitions into the following six categories:

1. No move
2. Moves from any constituency type (apart from safe Conservative) into safe Conservative
3. Moves from any constituency type (apart from safe Labour) into safe Labour
4. Moves from safe Conservative into safe Conservative
5. Moves from safe Labour into safe Labour
6. All other move types.

It is moves of Types 2 and 3 which are of greatest analytical interest because they represent a clear change in the political orientation of the area in which an individual lives.Footnote 36 They can be considered, therefore, as ideal test-cases for theories of contextual effects. We focus our attention on these two move types in the analyses that follow, although analyses have been undertaken for all move types and will be made available upon request.

To estimate the effect on individual political preferences of people moving to an area with a different political context, we use a panel data model with fixed effects and distributed lags and leads.Footnote 37 We include lagged effects because the influence of a new area on a mover is unlikely to occur immediately, potentially taking several years or more to materialize. The baseline model has the following form:

(1) $$y_{it} = \sum_{k=0}^{5} \mathbf{MovCon}'_{i,t-k}\,\beta_{-k} + x'_{it}\lambda + e_{it},$$

where the political preferences of the i-th person in the t-th year, $y_{it}$, are modelled as depending on the type of move in the preceding five years, $\mathbf{MovCon}'_{i,t-k}$, a design vector corresponding to the six categories of move type described in the previous section, with 'no move' as the reference category. Time-varying covariates are collected in the vector $x_{it}$. At the individual level we control for sex, age, age squared, educational level, income, social class, employment status, health status, marital status, and parental status. Where models are estimated with fixed effects, time-invariant characteristics such as sex are excluded.

To control for spurious effects caused by unobserved differences between individuals that are time-invariant, each person's observations are centred on the within-person mean. That is, we use time-demeaned data, such that the person-specific mean over all time points is subtracted from an individual's score at time t: $\tilde{y}_{it} = y_{it} - \bar{y}_{i\cdot}$. The well-known consequence of using time-demeaned data is that all time-invariant characteristics of sample units are 'differenced out', yielding the fixed effects model.

The coefficients of primary interest in Equation 1 are the lagged coefficient vectors $\beta_{-k}$, where k is set to a maximum of 5. The choice of a five-year maximum lag is a trade-off between our theoretical expectation that it may take several years for a contextual effect to be manifested and the fact that extending lags beyond five years results in small cell sizes and, therefore, imprecise estimates. Moreover, as we shall show later, increasing the number of lags to nine years does not alter our key substantive findings.
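A minimal sketch of how a specification in the spirit of Equation 1 could be estimated is given below: move-type dummies are lagged within person, the data are time-demeaned to remove individual fixed effects, and standard errors are clustered by person. The variable names, the treatment of lags that fall outside the observation window (set to zero), and the use of statsmodels are assumptions for illustration, not the authors' code.

import statsmodels.api as sm

def fit_fe_distributed_lag(df, outcome, move_dummies, controls, max_lag=5):
    # df: person-wave panel with columns 'pid', 'wave', the outcome,
    # 0/1 move-type dummies and time-varying controls (hypothetical names).
    df = df.sort_values(['pid', 'wave']).copy()

    lagged = []
    for var in move_dummies:
        for k in range(max_lag + 1):
            name = f'{var}_lag{k}'
            # Move-type dummy k waves before the current one; lags outside
            # the observation window are set to 0 here for illustration.
            df[name] = df.groupby('pid')[var].shift(k).fillna(0)
            lagged.append(name)

    rhs = lagged + list(controls)
    cols = [outcome] + rhs
    # Within (time-demeaning) transformation: subtracting person-specific
    # means removes all time-invariant individual characteristics.
    demeaned = df[cols] - df.groupby('pid')[cols].transform('mean')

    model = sm.OLS(demeaned[outcome], demeaned[rhs])
    return model.fit(cov_type='cluster', cov_kwds={'groups': df['pid']})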
One way of thinking about how to interpret the lagged coefficients is to consider a hypothetical person who moves to a safe Labour constituency one year but then moves back the next year: $\beta_{-k,\text{Labour}}$ will then be the effect on political preferences of that one-time move after k years, controlling for time-invariant unobserved variables and time-varying covariates. However, people typically stay in their new place of residence for longer than one year, so it is also of interest to know what the effect of the move on preferences will be when the effect is aggregated over ensuing years. This 'long-run effect' can be obtained by summing the coefficients over the lag vector:

(2) $$\beta_{\text{long-run}} = \sum_{k=0}^{5} \beta_{-k}$$

Hypothesis tests of zero long-run effects can be performed by obtaining the sampling variance of $\hat{\beta}_{\text{long-run}}$, which, by standard methods, can be shown to be:

(3) $$\operatorname{var}(\hat{\beta}_{\text{long-run}}) = \sum_{k=0}^{5} \operatorname{var}(\hat{\beta}_{-k}) + 2\sum_{k<l} \operatorname{cov}(\hat{\beta}_{-k},\hat{\beta}_{-l})$$

Note that this means that individual hypothesis tests performed on the lagged effects $\beta_{-k}$ may be non-significant while the overall hypothesis test on $\beta_{\text{long-run}}$ is significantly greater than 0.

By using a fixed effects model with time-varying covariates, we control for possible confounding due to unobserved between-person differences, as well as observed differences due to the covariates. Although this is already a strong research design, confounding could conceivably still occur if the probability of moving is correlated with the propensity to change political preferences. For example, becoming a parent is an event that can cause both a residential move and a change in political preferences. Blanden et al. propose controlling for the effect of such 'pre-programme trends' by including 'lead' dummies in the model.Footnote 38 The coefficients for the effect of the future on the present, $\gamma_{+k}$, should not be interpreted causally, but as evidence for selection effects on the change in preferences. By including leads in the model, our final specification becomes:

(4) $$y_{it} = \sum_{k=0}^{5} \mathbf{MovCon}'_{i,t-k}\,\beta_{-k} + \sum_{k=0}^{5} \mathbf{MovCon}'_{i,t+k}\,\gamma_{+k} + x'_{it}\lambda + e_{it}$$

In addition to their substantive interpretation as indicators of political selection into areas, the inclusion of leads also enables us to obtain estimates of the lagged effects, adjusted for non-random selection into areas. Thus, our control for time-constant, time-varying, as well as trend selection effects motivates the interpretation of the lagged coefficients $\beta_{-k}$ and their long-run versions $\beta_{\text{long-run}}$ as the effect of the constituency on an individual's political preferences.

Finally, it should be noted that the use of lags and leads in their raw form results in reduced sample size. This is because, for some observations, we do not observe political preferences five years before and after a move. Rather than dropping such observations, we set the unobserved prior and subsequent moves to 0 and include a vector of dummies representing 'missingness' in the model, although we do not report the coefficient estimates in our results.
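The long-run quantities in Equations 2 and 3 can be computed directly from a fitted model's coefficient vector and covariance matrix. The helper below is a sketch under the same assumptions as the previous block (hypothetical coefficient names; a statsmodels-style results object).

import numpy as np
from scipy import stats

def long_run_effect(result, lag_names):
    # result: fitted results object from the previous sketch;
    # lag_names: e.g. ['into_safe_con_lag0', ..., 'into_safe_con_lag5']
    # (hypothetical names for one move type's lagged coefficients).
    beta = result.params[lag_names]
    cov = result.cov_params().loc[lag_names, lag_names]

    long_run = beta.sum()            # Equation 2: sum of the lagged coefficients
    var_long_run = cov.values.sum()  # Equation 3: all variances plus twice the covariances
    se = np.sqrt(var_long_run)

    z = long_run / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return long_run, se, p_value

For the party-support models, the analogous long-run quantity would be obtained by summing the logit coefficients and exponentiating the sum to express it as an odds ratio, as noted under Table 2.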
Specification tests show that our findings are substantively unaffected by the inclusion or exclusion of these cases. We begin by presenting some descriptive statistics before moving to more causally-focused analyses. An implication of contextual theories of political behaviour is that the magnitude of an assimilation effect should (initially at least) increase over time, because the opportunity for and experience of the various influence mechanisms will grow as a function of time spent in a locale.Footnote 39 Thus, we should expect the association between constituency type and individual policy preferences to increase over the number of years an individual has lived in the area. Figure 1 shows percentage support for the Conservative party and the mean of the left–right scale by type of constituency and the number of years the individual has lived at their current address.Footnote 40 Individuals who live in safe Conservative constituencies are likely to support the Conservative party even immediately after moving to their new place of residence. Additionally, the longer the period of residence within a safe Conservative constituency, the more likely an individual is to support the Conservatives. Conversely, individuals in safe Labour constituencies, while considerably more likely to support the Labour party, show no trend towards increasing support for Labour the longer they have lived in a safe Labour constituency. This may be due, partially at least, to a ceiling effect because the level of support for Labour in safe Labour constituencies is already close to 75 per cent at year zero. The pattern for the left–right scale is clearer and more consistent, with individuals in safe Conservative constituencies expressing more right-wing views the longer they have lived in that area and the opposite being the case for individuals in safe Labour constituencies. Thus, the BHPS provides quite strong preliminary evidence of contextual effects; people are closer to the aggregate political orientation of their constituency the longer they have lived in it. However, although Figure 1 shows an apparent trend over time, the data is analysed cross-sectionally and the variation is, therefore, between rather than within individuals. Thus, the patterns we observe may have emerged due to non-random selection of individuals into (and out of) areas rather than to the effect of areas on individuals. It is to this possibility that we now turn via regression analysis. Table 2 presents the results of our regression models for the left–right value scale and party support over ten columns, with each column representing a different model specification. In Model 1 we include only lags, no individual-level controls and no fixed effects. Model 2 adds leads to this specification. Model 3 contains lags and individual level controls but no leads or fixed effects, whilst Model 4 introduces individual fixed effects to Model 3. Finally, Model 5 reintroduces lead indicators. The same pattern is repeated for party support in Models 6 to 10, though now we use a logistic link function because the outcome is binary. In Table 2 we suppress the results for other move types (such as moving from a safe Conservative constituency to another safe Conservative constituency) and for the control variables.Footnote 41 The results of these additional contrasts make no material difference to our substantive conclusions. In all models the reference category for the different move types is non-movers. 
Thus, the coefficients should be interpreted as the effect of making the move type in question on political preferences, compared to (covariate adjusted) non-movers. Model 1 in Table 2 shows that moving into a safe Conservative constituency from any other type of constituency is associated with a significant move to the right on the left–right scale (higher scores indicate more right-wing preferences). The long-run multiplier for this move type is 0.946 (p<0.001), indicating significant and quite substantial contextual effects over the five years following the move. The magnitude of this effect is approximately equivalent to the average cross-sectional difference in left–right scores between a Liberal Democrat and a Conservative party identifier. The negative coefficients for the effect of moving into a safe Labour constituency from other constituency types provide some evidence of assimilation for this type of move, though none of these is statistically significant, either singly or in combination. This pattern corresponds to that which was observed for the cross-sectional analysis in Figure 1, where the trend for Conservative support was stronger than for Labour support.

Model 2 introduces lead indicators. Significant coefficients for the lead dummies suggest that, before controlling for other characteristics, individuals behave as if they choose their new areas, at least in part, on the basis of their prior political preferences. Our results suggest that those moving to safe Conservative areas become significantly more right-wing prior to a move. The coefficient estimates for those moving to safe Labour constituencies are not statistically significant, although they are of the correct sign (they become more left-wing). The long-run multiplier for those moving to safe Conservative areas between time period 0 and time period 5 is reduced slightly to 0.845 (p<0.003), indicating smaller, but still substantial, assimilation effects after controlling for non-random selection into constituency types. However, these estimates cannot be considered causal as we have not yet controlled for selection on observed time-varying and time-invariant characteristics.

In Model 3, we remove the lead dummies and introduce individual level controls for characteristics which might lead people to relocate to a different constituency and also to change their political orientation. These are: age, gender, marital status, parental status, household income, social class, employment status and health status. We also include year dummies to control for macro-level events in the external environment. Introduction of these controls results in the lagged coefficients becoming somewhat reduced in magnitude, which suggests that the assimilation effects observed in Model 1 are at least partly due to non-random selection of individuals into constituencies. Controlling for these individual characteristics and survey year reduces the long-run multiplier to 0.639, which, although statistically significant (p<0.018), represents a 30 per cent reduction in magnitude compared to Model 1. Estimates for those moving into safe Labour constituencies change only marginally with the introduction of controls in Model 3; these individuals become more left-wing over time, although the effect cannot be distinguished from zero when inference is made to the broader population (five-year cumulative estimate=−0.865 (p<0.772)).
The introduction of individual fixed effects, to control for time invariant characteristics, in Model 4 reduces the magnitude of the assimilation effects quite substantially, with the contemporaneous coefficient for Conservative constituency moves, in particular, reduced from 0.206 to a statistically non-significant 0.047. However, while the majority of coefficients are reduced in magnitude, there are exceptions at the three-year and four-year lags for safe Conservative (0.139; p<0.05) and safe Labour (−0.201; p<0.01) constituency moves, respectively. This suggests that assimilation, rather than occurring immediately after a move, takes place after some years in the new location. For safe Conservative constituency moves, the long-run effect is reduced to 0.428, but this is still statistically different from zero (p<0.011). For Labour, the long-run multiplier is −0.200 (p<0.310) which suggests that the small apparent assimilation effect observed after four years is removed when combined with the effects in the other years. Finally, Model 5 re-introduces lead dummies in order to control for all forms of self-selection in one model. Results suggest that re-introducing the leads has only a small effect on the estimates. The leads are statistically non-significant for those moving to safe Conservative constituencies and have little effect on the coefficient estimates of Model 4. The long-run multiplier becomes marginally non-significant in Model 5. However, none of the lead coefficients in Model 5 are themselves significantly different from zero; hence our preferred estimates are those in Model 4. For those moving to safe Labour constituencies, however, there is some limited evidence of political self-selection; two years prior to a move, individuals moving to a safe Labour constituency become somewhat more left-wing, suggesting that move choice is related to prior shifts in political orientation, though the magnitude of the effect is weak. Moreover, political self-selection has no material effect on our conclusions regarding assimilation for those moving to safe Labour constituencies, with the magnitude and significance of the coefficients unaltered by the introduction of the leads. We now turn to the results for party identification, which are presented in Models 6 to 10 in Table 2. It should be noted that the sample size for some of these models is reduced substantially compared to the linear specifications in Models 1 to 5. This is because relatively few individuals changed their party identification between Labour and the Conservatives during the period of observation and only individuals who change on the outcome contribute to the parameter estimates in a fixed effects model. We must, therefore, be cautious in our interpretation of these models, because our power of inference is weak. The pattern of coefficients across Models 6 to 10 is quite similar to that found for the left–right scale models. There is evidence in the 'naïve' Models 6, 7 and 8 of political assimilation for individuals moving into safe Conservative constituencies but only weak and inconsistent support for an effect of moves into safe Labour constituencies. Once individual fixed effects are introduced in Models 9 and 10, there is no evidence of assimilation occurring for either type of move. Indeed, these results suggest that individuals moving to safe Conservative constituencies become less likely to support the Conservative party over time, although these estimates are not statistically significant. 
Long-run multiplier coefficients for those moving into safe Conservative and safe Labour constituencies are also not statistically distinguishable from zero.

Because the BHPS has eighteen waves of data, it is possible to extend the annual lagged and cumulative effects beyond five years after a move. Presentation of these models in tabular form is cumbersome, so we show them in graphical summary form in Figures 2 and 3. We present estimates corresponding to the left–right scale Models 1 and 4 and party support Models 6 and 9 with nine lags instead of five. This provides a contrast between naïve estimation (lags only model) and estimation which is more robust to potential confounders.Footnote 42 Figure 2 shows the effect of moving to safe Conservative and Labour constituencies on left–right attitudes up to nine years after a move.

Fig. 1. Party identification and left–right economic values over time lived in constituency
Fig. 2. Assimilation to left–right values by type of moves over time with extended lags

These longer-run estimates produce similar results to those presented in Table 2; a long-run effect is apparent for left–right attitudes for those moving to a safe Conservative constituency in the model with individual-level controls only. However, once individual fixed effects and time-varying individual level characteristics are controlled for, this long-run effect is approximately halved. There is no commensurate effect on left–right attitudes for individuals moving to safe Labour constituencies, with or without fixed effects. The models for party support in Figure 3 also suggest the presence of an assimilation effect for individuals moving to safe Conservative constituencies, but these are statistically non-significant when individual fixed effects are added to the model. There is no evidence of assimilation effects for individuals who move to safe Labour constituencies, even before the inclusion of fixed effects.

Fig. 3. Assimilation to Conservative party support by type of moves over time with extended lags

DISCUSSION AND CONCLUSIONS

In this article we have taken a new approach to addressing an enduring controversy in the study of political behaviour. While theories which posit a causal effect of geographical context on individual political preferences have a long tradition in political science, existing studies have yet to provide convincing evidence that individuals do indeed assimilate, over time, to the majority preferences of the areas in which they live. Our analyses advance the existing state-of-the-art in this field by tracking the political preferences of a large sample of individuals over an eighteen-year period. Our analysis used panel data models with fixed effects and controls for time-varying individual characteristics. This longitudinal approach yields a considerably stronger protection against the primary threat to valid causal inference in standard cross-sectional designs, namely that individuals choose which areas they wish to move to (and remain in) and that these choices are themselves correlated with political preferences.

Our results suggest that political assimilation effects were evident in England during the period 1991–2008 but that these were weak and differential across different types of areas. On the one hand, movers to safe Conservative seats became more economically right-wing and more likely to vote Conservative following the move.
This suggests that, consistent with the predictions of contextual theories of political behaviour, moving to a more conservative area leads individuals to become more aligned in their political preferences with the local majority. On the other hand, we found no change in left–right attitudes and only very weak evidence of change in party identification amongst movers to safe Labour constituencies. Several factors may account for the differential assimilation effects across constituency types. First, people who move to a safe Labour area already have a high probability of voting for the Labour party and of having economically left-wing attitudes. Thus, the contingent nature of our findings may be due to a ceiling effect; there is little scope for movers into Labour areas to become more left-wing than they already are immediately prior to moving. An alternative possibility is that the mechanisms through which contextual effects are manifested are less powerful in safe Labour seats. Safe Labour seats are mostly located in urban areas such as London, Birmingham, Manchester, Newcastle upon Tyne and Liverpool. The social pressure to conform to the local majority may be less strong in socially diverse, urban areas than in more homogeneous rural or suburban areas.Footnote 43 Conservatives, who traditionally value conformity to existing social norms, may be more likely than those on the left to pressure newcomers to conform to the local majority position. These possibilities are, however, speculative and it is not possible to establish, with the data available to us, why moving to a Conservative area has an effect on political preferences, while moving to a Labour context does not. Be that as it may, the finding that contexts have heterogeneous effects is important in its own right because it suggests that it is necessary, for a complete account, to clearly specify the conditions under which we should expect areal units to affect political preferences. With regard to selection into areas, previous studies have demonstrated that American movers, on average, relocate into areas with more congruent political beliefs than the constituencies from which they moved.Footnote 44 Our results show that this kind of geographic sorting generalizes to the British context; an individual's existing political preference is a strong and significant predictor of the political orientation of the area into which he or she moves. However, the finding that citizens relocate to constituencies that are more congruent with their existing political beliefs does not imply that the choice of locale is caused by political orientation. In our analysis, when individual level controls are introduced, political preferences prior to the time of moving no longer predict the political orientation of the destination constituency. This suggests that sorting of politically like-minded individuals into areas arises indirectly, because people with different political preferences also have different socio-economic characteristics which are the actual causal determinants of residential location choices. In short, self-selection into areas appears to be almost entirely non-political in nature. While our study significantly improves on previous attempts to identify contextual effects, it has limitations of its own which should be acknowledged. 
In particular our choice of areal unit (Westminster constituencies) can be criticized for inadequately representing the spatial scale at which the mechanisms generally thought to underlie assimilation are likely to operate. Neighbourhood effects theories generally contend that context effects operate primarily through social-psychological processes, such as interpersonal influence and persuasion, which are likely to happen at smaller spatial scales than a parliamentary constituency. Using smaller areal units would almost certainly result in different estimates of assimilation and selection effects than those we have presented here.Footnote 45 While we acknowledge that this problem is pertinent to the interpretation of our findings, we do not believe that it invalidates our findings and conclusions. As we have argued, the constituency is a substantively important context because it is the relevant electoral scale in parliamentary elections and political parties are incentivized to target mobilization efforts strategically across constituencies. In addition, the fact that we find evidence consistent with contextual effects for movers to Conservative constituencies suggests that the null results for some mover types are not due merely to the use of a large areal unit. While we cannot conclude that our results will necessarily generalize across different spatial scales, or to different political contexts, this is not something that we should expect to be the case in any event. The modifiable areal unit problem should not, in short, be taken as a threat to the validity of our conclusions as it is equally pertinent to any spatial scale that an analyst happens, or is able, to select. A second limitation of our approach is that it overlooks other important ways in which assimilation is likely to take place. In particular, our focus on adult movers means that we cannot draw conclusions about two important groups: young people and those who reside in the same area (or same type of area) for long periods. The places where people live in their childhood, adolescence and early adulthood are likely to shape their political outlook more profoundly than at other times in their lives.Footnote 46 Because we focus only on adults aged 18 and over, we would not detect effects which occur prior to adulthood. And, although it would, in principle, be possible to break our analyses down across age groups, in practice our sample of movers is too small to be able to detect differences between age groups that might exist in the population reliably. Similarly, it is plausible that contextual effects occur for individuals who never move but are nonetheless influenced by the changing political environment within their own locale over time. However, identification of causal assimilation effects for non-movers is a considerably more challenging analytical task than for a sample of movers and consideration of this important group is beyond the scope of this article. A third limitation of our research is that it only addresses one possible source of heterogeneity in treatment effects (the political hue of the destination constituency). Both individual and contextual characteristics may influence the extent to which individuals adopt the political preferences prevalent in their local environments. For instance, the reasons why individuals move – such as getting a new job, moving to a more family-friendly neighbourhood, or attending university – may themselves shape the propensity to assimilate to the new context. 
Very stable local communities may affect newcomers in different ways than gentrifying communities or places that are growing fast over a specific period of time. Long-distance moves that disrupt social relationships may have different implications for political preferences than short-distance moves where it is easier to maintain pre-existing social networks.

While we acknowledge, then, that our conclusions cannot be generalized without appropriate caution to other spatial levels or to other population sub-groups, we believe that our findings are important nevertheless. The evidence that we have presented suggests that self-selection into areas is considerably more important than assimilation effects in producing the spatial clustering of political preferences long observed by political geographers. This confirms the dominance of selection over assimilation that has been observed in other contexts, including those which have used experimental designs.Footnote 47 We have also shown for the first time that self-selection into areas is almost entirely non-political in nature, in the sense that individuals do not choose where to live on the basis of political preference per se, but as a result of socio-economic characteristics which are jointly correlated with choice of location and political orientation. Location affects individual political preferences, but only weakly, in some areas, and for some outcomes. Thus, while contexts are certainly relevant to our understanding of political preferences, they appear to be considerably less important than proponents of contextual theories have sometimes maintained.

Gallego: Institut de Barcelona d'Estudis Internacionals (email: [email protected]); Buscha: Department of Economics and Quantitative Methods, University of Westminster (email: [email protected]); Sturgis: Department of Social Statistics and Demography, University of Southampton (email: [email protected]); Oberski: Methodology Department, Tilburg University (email: [email protected]). The authors gratefully acknowledge the support of the Economic and Social Research Council through the grant for the National Centre for Research Methods (NCRM; grant reference: RES-576-47-5001-01) and from the Marie Curie Actions of the European Union's Seventh Framework Programme under REA grant agreement no. 334054 (PCIG12-GA-2012-334054). The code utilized to produce the results is posted in the BJPS repository. Data replication sets and online appendices are available at http://dx.doi.org/10.1017/S0007123414000337. The British Household Panel Study, however, does not allow dissemination of the micro-data.

Footnotes

1 Agnew 1987; Books and Prysby 1988; Burbank 1997; Cox 1969; Ethington and McDaniel 2007; Huckfeldt and Sprague 1995; Johnston and Pattie 2006.
2 Andersen and Heath 2002; Butler and Stokes 1974; Crewe and Payne 1976; Johnston et al. 2004; Johnston, Pattie and Allsopp 1988; McAllister et al. 2001; Miller 1978.
3 Dunleavy 1979; Kelley and McAllister 1985; King 1996; McAllister and Studlar 1992.
4 Chen and Rodden 2009; Rodden 2010; Rodden 2012.
5 While existing work focuses mostly on social mechanisms, other contextual characteristics such as climate or geographic features can also conceivably affect political behaviour.
6 Butler and Stokes 1974; Huckfeldt, Ikeda and Pappi 2005; Huckfeldt and Sprague 1995.
7 Denver and Hands 1997; Pattie, Johnston and Fieldhouse 1995.
8 Books and Prysby 1988.
9 Cutler 2007.
10 Cho and Rudolph 2008; Huckfeldt and Sprague 1992.
11 Butler and Stokes 1974.
12 E.g. Cox 1969; Crewe and Payne 1976; Taylor and Johnston 1979.
13 E.g. Andersen and Heath 2002; Johnston, Pattie and Allsopp 1988; Johnston et al. 2005; McAllister et al. 2001; Miller 1978.
14 For a review, see Johnston and Pattie 2006.
15 Pattie and Johnston 2000.
16 Johnston et al. (2001), p. 107.
17 Baybeck and McClurg (2005), p. 494.
18 Rabe and Taylor 2010.
19 E.g., see Florida (2003) on cities of the creative class; and Tiebout (1956) on political orientation and preferred bundles of taxes and services.
20 Cho, Gimpel and Hui 2012; McDonald 2011.
21 Kelley and McAllister 1985; McAllister and Studlar 1992.
22 King (1996), p. 160.
23 Halaby 2003; Halaby 2004; Wooldridge 2002.
24 Halaby (2003), p. 2.
25 Allison 1994.
26 That is, we consider it unlikely that political orientation is a strong determinant of drop-out from the study. Although the BHPS contains a longitudinal weight, using this to correct for differential attrition is not attractive because any unit with a single missing wave of data over the eighteen years of observation is dropped from the weighted estimator.
27 Evans, Heath and Lalljee 1996; Heath, Evans and Martin 1994.
28 Sturgis 2002.
29 Cutts and Fieldhouse 2009; Johnston et al. 2007.
30 Martin 2008.
31 Constituency identifiers in the BHPS are over-written with the new constituency code when boundaries change. This means that it is not possible to identify which respondents in our analysis sample changed to a different constituency without moving house due to the 1997 boundary revisions. However, we are able to identify respondents who are resident in constituencies which were created in 1997 and therefore we can estimate models including and excluding this group.
We find no significant differences in the patterns reported in the analyses.
32 In case of boundary revision, the coding matches any constituency revised in 1997 to the 1992 constituency with which it has the largest overlap (see Norris 2005).
33 These are: Bethnal Green and Bow, Birmingham Sparkbrook and Small Heath, Brentwood and Ongar, East Ham, Staffordshire South, Tatton, West Bromwich West, West Ham.
34 While the United States is widely considered to have high rates of residential mobility, one-year mobility rates are similar to those observed in Britain; 12 per cent of the US population moved to a different address in 2005, while 11 per cent of the population in Britain did so (Molloy, Smith and Wozniak 2011). Data from the 2001 Census and administrative records suggest that about 6.7 million UK residents, or 11.4 per cent of the population, moved from one address to another in the previous twelve months (Champion 2005). Most of these moves are over short distances, with approximately two fifths moving less than 2 km away and only one third moving more than 10 km away.
35 Observations in the first wave (1991) of the BHPS lack earlier information and cannot, therefore, be used. Individuals who drop out of waves cannot provide transition information.
36 In principle, moves from safe Conservative to safe Labour constituencies and vice versa are of greatest theoretical interest, although our sample size is insufficient to go to this level of granularity.
37 Blanden et al. 2012; Laporte and Windmeijer 2005.
38 Blanden et al. 2012.
39 A corollary example is the long-established association between the political preferences of married and co-habiting couples. Alford et al. (2011) argue that, if the intra-spousal correlation is due to influence rather than self-selection, we should observe partners becoming more similar to each other the longer they are together.
40 The 'time lived in constituency' variable is derived from the address record rather than by self-report. We limit the upper bound of time at address to ten years because 'time at address' is increasingly confounded with age as 'time at residence' increases.
41 These results are available from the corresponding author upon request.
42 The overall pattern is very similar when using different specifications, e.g. when plotting estimates from Table 2 with five lags or from lag and lead models. The additional plots are available upon request.
43 Labour constituencies are on average poorer and younger; they have more residents who are tenants, have a low education and are unemployed, but there is more variation in these characteristics across Labour constituencies than across Conservative constituencies.
44 Cho, Gimpel and Hui 2012; Gimpel 1999; McDonald 2011.
45 Fotheringham and Wong 1991.
46 Campbell 2006; Jennings, Stoker and Bowers 2009.
47 Katz, Kling and Liebman 2001; Ludwig, Duncan and Hirschfield 2001; Ludwig et al. 2008.

References
Place and Politics: The Geographical Mediation of State and Society. Boston, Mass.: Allen & Unwin.Google Scholar Alford, John R., Hatemi, Peter K., Hibbing, John R., Martin, Nicholas G. and Eaves, Lindon J.. 2011. The Politics of Mate Choice. Journal of Politics 73 (2):362–379.CrossRefGoogle Scholar Allison, Paul D. 1994. Using Panel Data to Estimate the Effects of Events. Sociological Methods & Research 23:174–199.CrossRefGoogle Scholar Andersen, Robert, and Heath, Anthony. 2002. Class Matters: The Persisting Effects of Contextual Social Class on Individual Voting in Britain, 1964–97. European Sociological Review 18 (2):125–138.CrossRefGoogle Scholar Baybeck, Brady, and McClurg, Scott D.. 2005. What Do They Know and How Do They Know It? American Politics Research 33 (4):492–520.CrossRefGoogle Scholar Blanden, Jo, Buscha, Franz, Sturgis, Patrick J. and Urwin, Peter. 2012. Measuring the Earnings Returns to Accredited Adult Learning in the UK. Economics of Education Review 31 (4):501–514.CrossRefGoogle Scholar Books, John, and Prysby, Charles. 1988. Studying Contextual Effects on Political Behavior. American Politics Research 16 (2):211–238.CrossRefGoogle Scholar Burbank, Matthew J. 1997. Explaining Contextual Effects on Vote Choice. Political Behavior 19 (2):113–132.CrossRefGoogle Scholar Butler, David, and Stokes, Donald E.. 1974. Political Change in Britain: The Evolution of Electoral Choice. London: Macmillan.CrossRefGoogle Scholar Campbell, David E. 2006. Why We Vote: How Schools and Communities Shape Our Civic Life. Princeton, N.J.: Princeton University Press.Google Scholar Champion, Tony. 2005. Population Movement within the UK. Pp. 91–113 in Focus on People and Migration, edited by Edward R. Chappell. Basingstoke, Hants.: Palgrave.CrossRefGoogle Scholar Chen, Jowei, and Rodden, Jonathan. 2009. Tobler's Law, Urbanization, and Electoral Bias: Why Compact, Contiguous Districts are Bad for the Democrats. Working paper, Department of Political Science, Stanford University, Palo Alto, Calif.Google Scholar Cho, Wendy Tam, Gimpel, James G. and Hui, Iris S.. 2012. Voter Migration and the Geographic Sorting of the American Electorate. Annals of the Association of American Geographers 103 (4):856–870.Google Scholar Cho, Wendy Tam, and Rudolph, Thomas J.. 2008. Emanating Political Participation: Untangling the Spatial Structure behind Participation. British Journal of Political Science 38 (2):273–289.CrossRefGoogle Scholar Cox, Kevin R. 1969. The Voting Decision in a Spatial Context. Progress in Geography 1:81–117.Google Scholar Crewe, Ivor, and Payne, Clive. 1976. Another Game with Nature: An Ecological Regression Model of the British Two-Party Vote Ratio in 1970. British Journal of Political Science 6 (1):43–81.CrossRefGoogle Scholar Cutler, Fred. 2007. Context and Attitude Formation: Social Interaction, Default Information, or Local Interests? Political Geography 26 (5):575–600.CrossRefGoogle Scholar Cutts, David, and Fieldhouse, Edward. 2009. What Small Spatial Scales are Relevant as Electoral Contexts for Individual Voters? The Importance of the Household on Turnout at the 2001 General Election. American Journal of Political Science 53 (3):726–739.CrossRefGoogle Scholar Denver, David T., and Hands, Gordon. 1997. Modern Constituency Electioneering: Local Campaigning in the 1992 General Election. Abingdon, Oxon.: Routledge.Google Scholar Dunleavy, Patrick. 1979. The Urban Basis of Political Alignment: Social Class, Domestic Property Ownership and State Intervention in Consumption Processes. 
British Journal of Political Science 9 (4):409–443.CrossRefGoogle Scholar Ethington, Philip J., and McDaniel, Jason A.. 2007. Political Places and Institutional Spaces: The Intersection of Political Science and Political Geography. Annual Review of Political Science 10:127–142.CrossRefGoogle Scholar Evans, Geoffrey, Heath, Anthony and Lalljee, Mansur. 1996. Measuring Left–Right and Libertarian–Authoritarian Values in the British Electorate. British Journal of Sociology 47 (1):93–112.CrossRefGoogle Scholar Florida, Richard. 2003. Cities and the Creative Class. City & Community 2:3–19.CrossRefGoogle Scholar Fotheringham, Alexander S., and Wong, David W. S.. 1991. The Modifiable Areal Unit Problem in Multivariate Statistical Analysis. Environment and Planning A 23 (7):1025–1044.CrossRefGoogle Scholar Gimpel, James G. 1999. Separate Destinations: Migration, Immigration, and the Politics of Places. Ann Arbor: University of Michigan Press.CrossRefGoogle Scholar Halaby, Charles N. 2003. Panel Models for the Analysis of Change and Growth in Life Course Studies. Pp. 503–527 in Handbook of the Life Course, edited by J. Mortimer and M. Shanahan. New York: Kluwer Academic/Plenum Publishers.CrossRefGoogle Scholar Halaby, Charles N.. 2004. Panel Models in Sociological Research: Theory into Practice. Annual Review of Sociology 30:507–544.CrossRefGoogle Scholar Heath, Anthony, Evans, Geoffrey and Martin, Jean. 1994. The Measurement of Core Beliefs and Values: The Development of Balanced Socialist/Laissez Faire and Libertarian/Authoritarian Scales. British Journal of Political Science 24 (1):115–132.CrossRefGoogle Scholar Huckfeldt, Robert, Ken'ichi, Ikeda and Pappi, Franz. U.. 2005. Patterns of Disagreement in Democratic Politics: Comparing Germany, Japan, and the United States. American Journal of Political Science 49 (3):497–514.CrossRefGoogle Scholar Huckfeldt, Robert, and Sprague, John D.. 1992. Political Parties and Electoral Mobilization: Political Structure, Social Structure, and the Party Canvass. American Political Science Review 86 (1):70–86.CrossRefGoogle Scholar Huckfeldt, Robert, and Sprague, John D.. 1995. Citizens, Politics, and Social Communication. New York: Cambridge University Press.CrossRefGoogle Scholar Jennings, M. Kent, Stoker, Laura and Bowers, Jake. 2009. Politics across Generations: Family Transmission Reexamined. Journal of Politics 71 (3):782–799.CrossRefGoogle Scholar Johnston, Ron J., Jones, Kelvyn, Sarker, Rebecca, Propper, Carol, Burgess, Simon and Bolster, Anne. 2004. Party Support and the Neighbourhood Effect: Spatial Polarisation of the British Electorate, 1991–2001. Political Geography 23 (4):367–402.CrossRefGoogle Scholar Johnston, Ron J., Jones, Kelvyn, Propper, Carol and Burgess, Simon. 2007. Region, Local Context, and Voting at the 1997 General Election in England. American Journal of Political Science 51 (3):640–654.CrossRefGoogle Scholar Johnston, Ron J., and Pattie, Charles J.. 2006. Putting Voters in Their Place: Geography and Elections in Great Britain. Oxford: Oxford University Press.CrossRefGoogle Scholar Johnston, Ron J., Pattie, Charles J. and Allsopp, Graham. 1988. A Nation Dividing? The Electoral Map of Great Britain, 1979–1987. London: Longman.Google Scholar Johnston, Ron J., Pattie, Charles J., Dorling, Danny F. L., MacAllister, Ian, Tunstall, Helena and Rossiter, David J.. 2001. Social Locations, Spatial Locations and Voting at the 1997 British General Election: Evaluating the Sources of Conservative Support. 
Political Geography 20 (1):85–111.CrossRefGoogle Scholar Johnston, Ron, Propper, Carol, Burgess, Simon, Sarker, Rebecca, Bolster, Anne and Jones, Kelvyn. 2005. Spatial Scale and the Neighbourhood Effect: Multinomial Models of Voting at Two Recent British General Elections. British Journal of Political Science 35 (3):487–514.CrossRefGoogle Scholar Katz, Lawrence F., Kling, Jeffrey R. and Liebman, Jeffrey B.. 2001. Moving to Opportunity in Boston: Early Results of a Randomized Mobility Experiment. Quarterly Journal of Economics 116 (2):607–654.CrossRefGoogle Scholar Kelley, Jonathan, and McAllister, Ian. 1985. Social Context and Electoral Behavior in Britain. American Journal of Political Science 29 (3):564–586.CrossRefGoogle Scholar King, Gary. 1996. Why Context Should Not Count. Political Geography 15:159–164.CrossRefGoogle Scholar Laporte, Audrey, and Windmeijer, Frank. 2005. Estimation of Panel Data Models with Binary Indicators when Treatment Effects Are Not Constant over Time. Economics Letters 88 (3):389–396.CrossRefGoogle Scholar Ludwig, Jens, Duncan, Greg J. and Hirschfield, Paul. 2001. Urban Poverty and Juvenile Crime: Evidence from a Randomized Housing-Mobility Experiment. Quarterly Journal of Economics 116 (2):655–679.CrossRefGoogle Scholar Ludwig, Jens, Liebman, Jeffrey B., Kling, Jeffrey R., Duncan, Greg J., Katz, Lawrence F., Kessler, Ronald C., and Sanbonmatsu, Lisa. 2008. What Can We Learn about Neighborhood Effects from the Moving to Opportunity Experiment? American Journal of Sociology 114 (1):144–188.CrossRefGoogle Scholar McAllister, Ian, Johnston, Ron J., Pattie, Charles J., Tunstall, Helena, Dorling, Danny F. L. and Rossiter, David J.. 2001. Class Dealignment and the Neighbourhood Effect: Miller Revisited. British Journal of Political Science 31 (1):41–59.Google Scholar McAllister, Ian, and Studlar, Donley T.. 1992. Region and Voting in Britain, 1979–87: Territorial Polarization or Artifact? American Journal of Political Science 36 (1):168–199.CrossRefGoogle Scholar McDonald, Ian. 2011. Migration and Sorting in the American Electorate: Evidence from the 2006 Cooperative Congressional Election Study. American Politics Research 39 (3):512–533.CrossRefGoogle Scholar Miller, William L. 1978. Electoral Dynamics in Britain since 1918. New York: St Martin's Press.Google Scholar Molloy, Raven, Smith, Christopher L. and Wozniak, Abigail K.. 2011. Internal Migration in the United States. Washington, D.C.: National Bureau of Economic Research.CrossRefGoogle Scholar Norris, Pippa. 2005. The British Parliamentary Constituency Database. 1992–2005. Release 1.3.Google Scholar Pattie, C., and Johnston, R.. 2000. 'People Who Talk Together Vote Together': An Exploration of Contextual Effects in Great Britain. Annals of the Association of American Geographers 90:41–66.CrossRefGoogle Scholar Pattie, Charles J., Johnston, Ron J., and Fieldhouse, Edward A.. 1995. Winning the Local Vote: The Effectiveness of Constituency Campaign Spending in Great Britain, 1983–1992. American Political Science Review 89 (4):969–983.CrossRefGoogle Scholar Rabe, Birgitta, and Taylor, Mark. 2010. Residential Mobility, Quality of Neighbourhood and Life Course Events. Journal of the Royal Statistical Society: Series A (Statistics in Society) 173:531–555.CrossRefGoogle Scholar Rodden, Jonathan. 2010. The Geographic Distribution of Political Preferences. Annual Review of Political Science 13:321–340.CrossRefGoogle Scholar Rodden, Jonathan. 2012. The Long Shadow of the Industrial Revolution. Unpublished manuscript. 
Stanford University Calif.Google Scholar Sturgis, Patrick. 2002. Attitudes and Measurement Error Revisited: A Reply to Johnson and Pattie. British Journal of Political Science 32 (4):691–698.CrossRefGoogle Scholar Taylor, Peter J., and Johnston, Ron J.. 1979. Geography of Elections. London: Croom Helm.Google Scholar Tiebout, Charles M. 1956. A Pure Theory of Local Expenditures. Journal of Political Economy 64:416–424.CrossRefGoogle Scholar Wooldridge, Jeffrey M. 2002. Econometric Analysis of Cross Section and Panel Data. Cambridge, Mass.: MIT Press.Google Scholar View in content Gallego Supplementary Material File 120 Bytes File 25 KB PDF 50 KB This article has been cited by the following publications. This list is generated based on data provided by Crossref. Kaufmann, Eric and Harris, Gareth 2015. "White Flight" or Positive Contact? Local Diversity and Attitudes to Immigration in Britain. Comparative Political Studies, Vol. 48, Issue. 12, p. 1563. Czapiewski, Tomasz 2016. The electoral map of Szczecin. Spatial diversity of political preferences of Szczecin citizens in years 2006-2015. Zeszyty Naukowe Uniwersytetu Szczecińskiego. Acta Politica, Vol. 36, Issue. , p. 21. Mummolo, Jonathan and Nall, Clayton 2017. Why Partisans Do Not Sort: The Constraints on Political Segregation. The Journal of Politics, Vol. 79, Issue. 1, p. 45. Vallbé, Joan-Josep and Magre Ferran, Jaume 2017. The Road Not Taken. Effects of residential mobility on local electoral turnout. Political Geography, Vol. 60, Issue. , p. 86. Gimpel, James G. and Hui, Iris 2017. Inadvertent and intentional partisan residential sorting. The Annals of Regional Science, Vol. 58, Issue. 3, p. 441. Oberski, Daniel L. 2017. Total Survey Error in Practice. p. 339. Prior, Markus 2018. Hooked. Kaufmann, Eric and Goodwin, Matthew J. 2018. The diversity Wave:A meta-analysis of the native-born white response to ethnic diversity. Social Science Research, Vol. 76, Issue. , p. 120. van Wijk, Daniël Bolt, Gideon and Johnston, Ron 2019. Contextual Effects on Populist Radical Right Support: Consensual Neighbourhood Effects and the Dutch PVV. European Sociological Review, Vol. 35, Issue. 2, p. 225. Anastasopoulos, L. Jason 2019. Migration, Immigration, and the Political Geography of American Cities. American Politics Research, Vol. 47, Issue. 2, p. 362. MAXWELL, RAHSAAN 2019. Cosmopolitan Immigration Attitudes in Large European Cities: Contextual or Compositional Effects?. American Political Science Review, Vol. 113, Issue. 2, p. 456. Kawalerowicz, Juta 2019. Long-running traditions of racial exclusionism: Is there evidence of historical continuity in local support for extreme right parties in England and Wales?. Party Politics, Vol. 25, Issue. 2, p. 227. Long, Jacob A. Eveland, William P. and Slater, Michael D. 2019. Partisan Media Selectivity and Partisan Identity Threat: The Role of Social and Geographic Context. Mass Communication and Society, Vol. 22, Issue. 2, p. 145. Abreu, Maria and Öner, Özge 2019. Disentangling the Brexit Vote: The Role of Economic, Social, and Cultural Contexts in Explaining the UK's EU Referendum Vote. SSRN Electronic Journal , Hjorth, Frederik 2020. The Influence of Local Ethnic Diversity on Group-Centric Crime Attitudes. British Journal of Political Science, Vol. 50, Issue. 1, p. 321. Scarborough, William J. and Sin, Ray 2020. Gendered Places: The Dimensions of Local Gender Norms across the United States. Gender & Society, Vol. 34, Issue. 5, p. 705. Maxwell, Rahsaan 2020. 
The effect of plant weight on estimations of stalk lodging resistance Christopher J. Stubbs1, Yusuf A. Oduntan1, Tyrone R. Keep2, Scott D. Noble2 & Daniel J. Robertson ORCID: orcid.org/0000-0003-1089-02491 Stalk lodging (breaking of agricultural plant stalks prior to harvest) is a multi-billion dollar a year problem. Stalk lodging occurs when bending moments induced by a combination of external loading (e.g. wind) and self-loading (e.g. the plant's own weight) exceed the stalk bending strength of plant stems. Previous studies have investigated external loading and self-loading of plants as separate and independent phenomena. However, these two types of loading are highly interconnected and mutually dependent. The purpose of this paper is twofold: (1) to investigate the combined effect of external loads and plant weight on the flexural response of plant stems, and (2) to provide a generalized framework for accounting for self-weight during mechanical phenotyping experiments used to predict stalk lodging resistance. A mathematical methodology for properly accounting for the interconnected relationship between self-loading and external loading of plants stems is presented. The method was compared to numerous finite element models of plants stems and found to be highly accurate. The resulting interconnected set of equations from the derivation were used to produce user-friendly applications by presenting (1) simplified self-loading correction factors for common loading configurations of plants, and (2) a generalized Microsoft Excel framework that calculates the influence of self-loading on crop stems. Results indicate that ignoring the effects of self-loading when calculating stalk flexural stiffness is appropriate for large and stiff plants such as maize, bamboo, and sorghum. However, significant errors result when ignoring the effects of self-loading in smaller plants with larger relative grain sizes, such as rice (8% error) and wheat (16% error). Properly accounting for self-weight can be critical to determining the structural response of plant stems. Equations and tools provided herein enable researchers to properly account for the plant's weight during mechanical phenotyping experiments used to determine stalk lodging resistance. Yield losses due to stalk lodging (breakage of crop stems or stalks prior to harvest) are estimated to range from 5% to 20% annually [1, 2] resulting in billions of dollars of lost revenue. Stalk flexural stiffness and stalk bending strength (see Table 1 for definitions) are key mechanical phenotypes that govern stalk lodging resistance [3,4,5,6,7,8]. These key phenotypes are measured with the aid of mechanical phenotyping devices [9]. However, a method to properly account for plant weight when measuring stalk flexural stiffness and stalk bending strength has not been presented. Consequently, the effect of self-weight is typically neglected in mechanical tests used to quantify these phenotypes. Neglecting self-weight during mechanical phenotyping experiments can introduce significant errors in stalk flexural stiffness and stalk bending strength measurements which in turn result in inaccurate predictions of stalk lodging resistance. 
Properly accounting for self-weight during mechanical phenotyping experiments requires (1) a basic understanding of the types of mechanical forces plants experience, (2) clear definitions of the mechanical phenotypes being measured and (3) a conceptual understanding of how mechanical phenotyping devices work and the types of forces present during mechanical phenotyping experiments. Each of these three requirements is discussed in the paragraphs that follow. An explanation of the basic types of forces plants experience is presented first, followed by definitions for stalk flexural stiffness and stalk bending strength. Finally, a discussion of the basic principles of mechanical phenotyping devices used to measure stalk flexural stiffness and stalk bending strength is presented. Table 1 Glossary of terms Types of forces experienced by plants Plants are subjected to three principle types of forces, namely: (1) Contact Forces, (2) Surface Forces and (3) Body Forces. Contact Forces occur when solid materials 'contact' (i.e., push on) one another. Most mechanical phenotyping devices impart Contact Forces (i.e., they physically contact and push on the plant). Contact Forces can also occur when an adjacent plant or a researcher contacts a plant and pushes on it. Surface Forces are forces that are distributed across a plants surface. The wind is an example of a Surface Force. Both Contact Forces and Surface Forces are commonly referred to as External Forces or externally applied loads as they originate from external objects. The last type of mechanical force plants are subjected to is Body Forces. Body Forces are forces due to gravity (i.e., the plants weight). It is important to note that all plants are constantly subjected to Body Forces whereas they are only intermittently subjected to External Forces (e.g., Contact Forces and Surface Forces). In other words, Body Forces (i.e., self-weight) are always present in any mechanical phenotyping test and as such need to be accounted for. Bending strength and flexural stiffness definitions Determining the bending strength and flexural stiffness of plant stems requires the calculation of "bending moments" (see [10] for a complete discussion of bending moments). Bending moments arise from any force (either External Forces or Body Forces) that cause a plant to bend or flex and can be conceptually thought of as a torque. A bending moment is calculated by multiplying a force by the perpendicular distance from the force to the axis about which the bending moment is being calculated. In most plant studies bending moments are typically calculated about the base of the plant (i.e., at the stalk–soil interface) as this is where bending moments are the largest. Both External Forces and Body Forces (i.e., self-weight) create bending moments in plant stems. We now proceed to provide definitions for stalk bending strength and stalk flexural stiffness. Note these terms are sometimes used incorrectly and interchangeably in the mechanical plant phenotyping literature. However, they are structural engineering terms with precise and distinct definitions. The stalk bending strength of a plant is defined as the maximum bending moment the plant stalk can support before structural failure occurs (i.e., before breaking). In contrast stalk flexural stiffness is a measurement of the flexural (i.e., bending) deformability of the plant. 
In other words, stalk flexural stiffness is a measure of a plant's resistance to bending deformations, whereas stalk bending strength is a measure of a plants resistance to breaking. The flexural stiffness of standard engineering structures is defined as the elastic modulus of the material the structure is composed of multiplied by the moment of inertia of the structure. The moment of inertia is a geometric term that quantifies the distribution of mass about an object's centroid [10]. However, plant stalks are often composed of multiple materials and are non-prismatic (i.e., tapered) thus their moment of inertia changes as a function of length along the stalk. This complicates the calculation of stalk flexural stiffness. Consequently, most studies utilize engineering beam equations to indirectly solve for stalk flexural stiffness (e.g., [4, 10]). The process of indirectly solving for stalk flexural stiffness is explained in detail in the methods section. Mechanical phenotyping principles Several mechanical phenotyping devices have been developed to measure stalk flexural stiffness and/or stalk bending strength [6, 8, 9, 11]. A review of these devices is presented in [9]. In general, all these devices apply an external load (e.g., a contact force) to either a single plant or to a group of plants and measure the accompanying deflection of the plant stem(s). Standard engineering beam equations are then used to calculate the flexural stiffness and bending strength of the plant sample (e.g. [6, 9]). However, the standard engineering beam equations used in these analyses ignore the effect of Body Forces (i.e. self-weight) and are therefore error prone. It is important to note that the bending moments induced from Body Forces are inextricably connected to External Forces. In particular, the bending moment induced from Body Forces (i.e., self-weight) is a function of the distance between the plant's base and its center of gravity. As External Forces from a phenotyping device displace the center of gravity of the plant away from the base of the stem, the bending moment induced from Body Forces increases. Previous studies have examined the influence of Body Forces (i.e., self-weight) on stalk bending strength in the absence of External Forces while others have examined the influence of External Forces on stalk bending strength while ignoring Body Forces [3, 4, 12,13,14,15,16,17,18,19,20]. However, a method for simultaneously accounting for both External Forces and Body Forces during mechanical phenotyping experiments has not been presented. Consequently, Body Forces are ignored in mechanical phenotyping studies which leads to inaccuracies in stalk lodging resistance predictions. The purpose of this paper is to provide a generalized framework to simultaneously account for both Body Forces and External Forces when taking measurements of stalk flexural stiffness and stalk bending strength. A derivation of the governing engineering equations used to calculate these mechanical phenotypes is presented. The derivation is validated by comparing its results to the results of several nonlinear finite element models of plant stems. In addition, a user-friendly Microsoft Excel spreadsheet is developed and presented to aid researchers in determining the effect of self-weight in mechanical phenotyping experiments. The spreadsheet does not require an advanced understanding of engineering mechanics. 
It was developed to aid researchers from non-engineering disciplines to determine the necessity of accounting for plant weight in mechanical phenotyping experiments. Finally, several case studies are presented to demonstrate the type of error present in mechanical phenotyping tests that do not account for Body Forces. The sections that follow detail the methods used to investigate the effect of self-weight on measurements of stalk flexural stiffness and stalk bending strength of plant stems. For clarity, the methods are broken into five distinct subsections. First, the traditional approach (which ignores Body Forces) to calculate bending strength and flexural stiffness is presented, and its limitations are discussed. Second, a derivation of a more accurate approach to calculating bending strength and flexural stiffness that simultaneously accounts for both Body Forces and External Forces is presented. The derivation is predicated upon engineering solid mechanics theory. The third section describes how this new approach was parametrically investigated and validated by comparing its results to those of engineering finite element models of plant stems. In the fourth section, the development of a user-friendly Excel spreadsheet is explained. The spreadsheet was developed to help researchers without a background in engineering mechanics successfully apply the new approach to calculating stalk bending strength and stalk flexural stiffness. The last section explains a series of three case studies. These case studies were conducted to illustrate how the equations presented in the current work can be applied to investigate the effects of self-weight. Table 2 displays the variables and abbreviations used in the equations presented below. Table 2 Abbreviations Traditional solution (ignoring body forces) Traditionally, the bending strength of a plant stem is calculated as the maximum externally applied moment (Mext) (applied from a phenotyping device) that the stem can withstand prior to structural failure, i.e., bending strength = Maximum (Mext). Using traditional methods, the flexural stiffness (EI) of a plant is solved for indirectly by relating the externally applied moment (Mext) induced by a phenotyping device to the resulting deflection of the stem (δ) using Castigliano's energy method [6, 9, 11]. In this way, the deflection of the plant is equal to the partial derivative of the internal potential energy of the system with respect to the applied load (F) from the phenotyping device [10]: $$EI = \frac{{\smallint M_{ext} \frac{{dM_{ext} }}{dF}dx}}{2\delta }$$ Unfortunately, the effect of Body Forces is ignored in these traditional approaches. In other words, these analyses consider only the external bending moment (Mext) applied by the phenotyping device. In reality the total bending moment (MTOTAL) which is the combination of both the externally applied bending moment (Mext) and the bending moment resulting from Body Forces (Mbody) should be considered (i.e., MTOTAL = Mext + Mbody). Thus, to more accurately quantify stalk flexural stiffness and stalk bending strength the traditional approach must be modified to use MTOTAL, and not just Mext. Derivation of new approach that accounts for both body forces and external forces Properly accounting for Body Forces when calculating stalk bending strength and stalk flexural stiffness requires derivation of a closed form solution for the total bending moment of the stem (MTOTAL). The derivation is presented in this section for completeness. 
However, it should be noted that the derivation is based upon engineering solid mechanics theory and those from a non-engineering background may therefore find parts of the derivation difficult to follow. For this reason, the authors have incorporated the resulting sets of equations from the derivation into a user-friendly Excel spreadsheet that can be used by the plant research community. The derivation is presented below, followed by an explanation of the Excel spreadsheet. Consider Fig. 1, which depicts the free body diagram of a plant stem with an arbitrary loading applied at two locations. The figure depicts two weights (w) (e.g. stem weight, grain weight), as well as two externally applied Contact Forces (F) and two externally applied moments (M). Note that, as mentioned before, the externally applied loads and moments can arise from any external object. Common sources of externally applied forces include phenotyping devices, wind, and adjacent plants. The loading diagram of a deflected stem, showing two loading locations with all three types of loading (an applied force, an applied moment, and a weight) Bending moments induced from self-weight (i.e., Body Forces) will increase with increased stem deflection. For the weight (w) at each location, we can calculate the induced bending moment from self-weight (W) as the product of the weight and the weight's offset [i.e., the deflection of the stem at the location of the weight (δ)]. Thus for the two locations shown in Fig. 1, we have: $$W_{1} = \delta_{1} w_{1}$$ $$W_{2} = \delta_{2} w_{2}$$ It should be noted that Eqs. (2) and (3) assume that the maximum bending moment induced by self-loading is applied to the entire length of the stem. Details regarding this assumption are presented in the Limitations section. The offsets (δ1 and δ2) used in Eqs. (2) and (3) to calculate the bending moments induced from self-weight are unknowns and are a function of the externally applied moments and forces. Using engineering theory for beam deflection and the theory of superposition of loading [10], we can calculate the deflection of the stem at height h1 (i.e., location 1) as a function of the applied forces, applied moments, and weight-induced moments. Equation (4) shows this calculation, where the first row of Eq. (4) concerns loads, moments and weights at location 1 (i.e., at height h1) and the second row of Eq. (4) concerns forces, moments and weights at location 2 (i.e., at height h2). Similarly, we can write the deflection of the stem at h2 as: Thus we have four linearly independent equations (Eqs. (2)–(5)) allowing us to solve for four unknown values (W1, W2, δ1, δ2). It should be emphasized that for all equations in this manuscript (including Eqs. (4) and (5)) locations are numbered from the top of the plant down (i.e., location 1 is above location 2, which is above location 3, and so on). Equations (2) through (5) can be generalized to account for any number of locations (n) along the length of the stalk. First, for any loading location L, at a height hL along the stalk, deflected by δL, Eqs. (2) and (3) can be generalized as: $$W_{L} = \delta_{L} w_{L}$$ Next, Eqs. (4) and (5) can be generalized by noting that each force, moment or weight (F, M, or W, shown in bold in Eqs. (4) and (5)) is multiplied by a geometric coefficient.
This geometric coefficient can be denoted as either ƒF (for forces) or ƒM (for externally applied moments or internal weight-induced moments). As such, for any location P at a height of hP, the deflection δP is calculated by summing the product of each load, moment or weight (F, M, or W) and its corresponding geometric coefficient (ƒF or ƒM) at every loading location (from L = 1 to L = n). Note that this geometric coefficient assumes a constant flexural stiffness (EI), as discussed in the Limitations section. Thus, the generalized form of Eqs. (4) and (5) can be written as: where "location 1" is the most apical location of interest and "location L" is the most basal location of interest. Equation (7) can now be consolidated into a fully generalized form of: where the geometric coefficients for the forces and moments are defined as [21]: $$f_{F}\left(P,L\right) = \begin{cases} \dfrac{h_{L}^{2}\left(3h_{P} - h_{L}\right)}{6EI}, & h_{P} \ge h_{L} \\ \dfrac{h_{P}^{2}\left(3h_{L} - h_{P}\right)}{6EI}, & h_{P} < h_{L} \end{cases}$$ $$f_{M}\left(P,L\right) = \begin{cases} \dfrac{h_{L}\left(2h_{P} - h_{L}\right)}{2EI}, & h_{P} \ge h_{L} \\ \dfrac{h_{P}^{2}}{2EI}, & h_{P} < h_{L} \end{cases}$$ Equations (6)–(9) can also be put into a generalized matrix form. From Eqs. (6) and (8) we see that for any number of weights at any number of locations (n), we will have 2n unknown values (δ1, δ2, …, δn, W1, W2, …, Wn), and 2n linearly independent equations. By rearranging these equations and converting them to matrix notation we can write: where the first matrix in the equation is a square matrix of size 2n × 2n, and the second and third matrices in the equation are column matrices of size 2n × 1. Within the square matrix, the top left and bottom right n × n submatrices are identity matrices, the bottom left n × n submatrix is a diagonal matrix of the negative weights (−w), and the top right n × n submatrix contains the negative geometric coefficients of the weight-induced moments, as calculated by Eq. (10). We can then solve this matrix equation by inverting the square matrix and multiplying by the right-hand-side vector to calculate the deflections and moments induced by Body Forces: We can now look at the total bending moment (MTOTAL) of any cross-section along the length of the stem. In particular, MTOTAL can be written as a function of hP and hL, by considering all of the loads that are applied to the stem above the cross-section of interest (i.e., for hL ≥ hP), $$M_{TOTAL}\left(h_{P}\right) = \sum\limits_{L = 1}^{n\left[h_{L} \ge h_{P}\right]} F_{L}\left(h_{L} - h_{P}\right) + \sum\limits_{L = 1}^{n\left[h_{L} \ge h_{P}\right]} M_{L} + \sum\limits_{L = 1}^{n\left[h_{L} \ge h_{P}\right]} W_{L}$$ Now that we have derived a closed form solution for MTOTAL (Eq. 13) we can calculate the stalk flexural stiffness and the stalk bending strength of the plant stem.
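To make the preceding derivation concrete, the system of Eqs. (6)–(13) can be evaluated numerically. The following Python sketch is illustrative only (it is not the software released with this study): it assumes a constant flexural stiffness EI, heights measured from the stalk base, and equal-length lists of forces, moments, and weights at the n loading locations; the helper names (f_F, f_M, solve_self_loading) are ours.

import numpy as np

def f_F(hP, hL, EI):
    # Deflection at height hP per unit point force applied at height hL (Eq. 9)
    if hP >= hL:
        return hL**2 * (3*hP - hL) / (6*EI)
    return hP**2 * (3*hL - hP) / (6*EI)

def f_M(hP, hL, EI):
    # Deflection at height hP per unit point moment applied at height hL (Eq. 10)
    if hP >= hL:
        return hL * (2*hP - hL) / (2*EI)
    return hP**2 / (2*EI)

def solve_self_loading(EI, h, F, M, w):
    # Assemble and solve the 2n x 2n system of Eq. (11) for the deflections (delta)
    # and weight-induced moments (W), then evaluate Eq. (13) at the stalk base.
    h, F, M, w = map(np.asarray, (h, F, M, w))
    n = len(h)
    A = np.zeros((2*n, 2*n))
    b = np.zeros(2*n)
    for P in range(n):
        A[P, P] = 1.0
        for L in range(n):
            A[P, n + L] = -f_M(h[P], h[L], EI)   # top-right block of Eq. (11)
            b[P] += F[L]*f_F(h[P], h[L], EI) + M[L]*f_M(h[P], h[L], EI)
        A[n + P, P] = -w[P]                      # bottom-left block: W_P = w_P * delta_P
        A[n + P, n + P] = 1.0
    x = np.linalg.solve(A, b)                    # Eq. (12)
    delta, W = x[:n], x[n:]
    M_total_base = float(np.sum(F*h) + np.sum(M) + np.sum(W))   # Eq. (13) at the base (h = 0)
    return delta, W, M_total_base

For the two-location example of Fig. 1, solve_self_loading(EI, [h1, h2], [F1, F2], [M1, M2], [w1, w2]) returns the deflections and weight-induced moments at both locations together with the total bending moment at the stalk base; setting the weights to zero recovers the traditional calculation that ignores Body Forces.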
Additionally, we can now calculate the value of bending stress. Bending stress is a useful measure of the loading of the plant tissue that is normalized to size and geometry. The larger the bending stress in the tissue, the closer it is to tissue fracture and structural failure. We can write the bending stress in the stem in this case as a function of the total bending moment and the section modulus of the cross-section (S(hP)): $$\sigma_{bending}\left(h_{P}\right) = \frac{M_{TOTAL}\left(h_{P}\right)}{S\left(h_{P}\right)}$$ Note that "section modulus" is an engineering term used to quantify the cross-sectional distribution of mass about its centroid and can be used in making stalk flexural stiffness and stalk bending strength predictions [10]. It should be noted that the section modulus is constant for a given plant stem cross-section. Therefore, there exists a 1:1 correlation between the total bending moment and the bending stress. As such, all comparisons performed between total bending moments can also be conceptualized as comparisons in stalk bending strength or bending stress. Table 3 shows a comparison between the equations used to calculate stalk flexural stiffness, stalk bending strength and bending stress for the new method, which accounts for Body Forces, and the traditional method, which does not account for Body Forces. Table 3 Comparison of equations used to calculate stalk flexural stiffness, stalk bending strength and bending stress for the traditional method and the new approach derived in this study Finite element modeling to confirm accuracy of new closed form solution method The new approach to calculating stalk flexural stiffness and stalk bending strength outlined in the previous section was derived based on governing physical principles and well-established engineering equations. Special care was taken to ensure no algebraic mistakes were made during the derivation and that any assumptions were properly considered. Nonetheless, as a form of data triangulation [21], the new approach was compared to a series of nonlinear finite element models of plant stems to confirm its accuracy. A basic description of the finite element method and the construction of the specific finite element models of plant stems used in this study are presented below. The finite element method is a standard numerical technique used by engineers to quantify the detailed mechanical response of complex structures and materials [22]. Finite element models are commonly used to calculate the flexural stiffness of complex structures which violate basic assumptions made in closed form engineering equations. It should be noted that nonlinear finite element models (i.e. "large deflection" simulations) are valid for both small and large deflections. Comparing the new closed form solution approach, which accounts for Body Forces, to nonlinear finite element models of plant stems thus enables us to check the accuracy of the new approach. To this end, a series of 768 non-linear finite element models of plant stems were developed, analyzed, and compared to the new approach derived in the previous section. The models were developed in Abaqus/CAE 2019 [23, 24] and analyzed in Abaqus/Standard 2019 using a direct, full Newton solver [23, 24]. A mesh convergence study was performed to ensure adequate mesh density of all models. Analyses were run non-linearly, recalculating the system stiffness matrix at each solution increment. In other words, the models were fully capable of accounting for nonlinear effects due to large deformations. Model development and post-processing were automated through a series of custom Python scripts, which can be obtained upon reasonable request to the authors. A brief description of the models is given below.
In these simulations the stems were modeled as 2-noded linear beam elements in a 2-dimensional analysis [23, 24]. In each of these models the bottom node of the stem was fixed in all degrees of freedom (U1 = U2 = UR3 = 0). Stems were modeled with a weight at height h1, applied force at height h2, and applied moment at height h3. It should be noted that because 2-noded beam elements were used, the model was partitioned at h3 so that moments could be directly applied to nodes. The plant stem was modeled with the radius values such that that the resulting moments of inertia were as presented in Table 4 using the equation \(I = \frac{\pi }{4}r^{4}\) [10]. As the models allowed free expansion in the radial direction, Poisson's ratio was found to be negligible based on preliminary parametric analyses and was set to a value of 0.3 for all analyses. Table 4 Each input parameter (i.e., factor) and value of each input parameter (i.e., level) for the finite element analyses A factorial design of experiments was used to compare the results of the new approach derived above to the results of the finite element models. In particular, the stalk flexural stiffness and stalk bending strength of each finite element model was compared to the corresponding values calculated using the new approach derived in the previous section. A full parametric sweep of all relevant input parameters (i.e. factors) was conducted to ensure the accuracy of new approach for a broad range of plant species. In particular, a factorial design of experiments was utilized with 8 factors to compare the two methods. The factors were the elastic moduli of the stem (E), the moment of inertia (I) of the stem, the heights of the applied moments, forces and weights (h1, h2 and h3), the magnitude of the applied moment (M), the magnitude of self-weight (W), and the magnitude of the applied force (F). The moduli, moment of inertia, heights, weights, and moments were evaluated at two different levels. The force was evaluated at 6 levels. Thus a total of 768 unique models were constructed covering every combination of factors and levels (i.e., 2E's × 2I's × 2h1's × 2h2's × 2h3's × 2 M's × 2 W's × 6F's = 768 models). Table 4 presents each of these factors and the levels of each factor used in the experiment. The level of each factor (i.e., the value of input parameters to the model) were based on previous studies of plant stem material properties [8, 25]. Development of excel spreadsheet to calculate stalk flexural stiffness and stalk bending strength An Excel spreadsheet (Microsoft Corporation, 2019) was developed to help researchers without a background in engineering mechanics successfully apply the new approach to calculating stalk flexural stiffness and stalk bending strength. The spreadsheet was developed using the equations presented in Table 3 and is included as Additional file 1. The spreadsheet allows the user to input the flexural stiffness of the plant stem as well as the magnitude of externally applied forces and moments, and weights. Input values can be given for up to ten locations of interest along the length of the plant stem. The spreadsheet calculates the weight induced moments (Mbody) and deflections, as well as the total induced moment (Mtotal) at all locations. The spreadsheet makes the calculation both with and without self-loading considered. In addition, the error induced by ignoring self-loading is calculated for the deflections and total induced moments. 
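As an illustration of the spreadsheet's logic (not a substitute for Additional file 1), the same with/without comparison can be scripted using the solve_self_loading sketch given earlier. The helper below and the input values are hypothetical, and the error is expressed relative to the self-loading-corrected result.

def self_loading_error(EI, h, F, M, w):
    # Run the Eq. (11)-(13) solver with the measured weights and again with the
    # weights zeroed (the traditional assumption), then report the percent error
    # introduced by ignoring self-loading.
    d_with, W_with, Mtot_with = solve_self_loading(EI, h, F, M, w)
    d_without, W_without, Mtot_without = solve_self_loading(EI, h, F, M, [0.0]*len(h))
    pct = lambda a, b: 100.0 * (a - b) / a
    return {"deflection_error_%": pct(d_with, d_without),
            "base_moment_error_%": pct(Mtot_with, Mtot_without)}

# Hypothetical wheat-like stem: grain head weight at the top (location 1) and a
# contact force applied just below it (location 2); values are illustrative only.
print(self_loading_error(EI=4.0e4,          # N mm^2
                         h=[900.0, 800.0],  # mm, numbered from the top down
                         F=[0.0, 0.03],     # N
                         M=[0.0, 0.0],      # N mm
                         w=[0.02, 0.0]))    # N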
More details about the spreadsheet and use instructions are provided in Additional file 2. To provide further insights and to demonstrate how to effectively use the equations derived above three separate case studies were conducted. The primary purpose of the first case study was to demonstrate how researchers can determine if the influence of self-weight is a significant factor in a given experiment. In this case study, two loading configurations commonly used to measure stalk bending strength and stalk flexural stiffness are presented [9]. Figure 2 displays these two test configurations. The equations derived above are applied to each test configuration and are used to develop simple correction factors to account for the moments induced by Body Forces that are typically ignored in mechanical phenotyping experiments. These correction factors can be used to determine the magnitude of error introduced if Body Forces are ignored. The loading diagrams for two common mechanical phenotyping test protocols used to determine flexural stiffness; a typical maize phenotyping protocol (left), and a typical wheat phenotyping protocol (right) To provide general insights into the effect of Body Forces on several plant species a second more generalized case study was conducted. Five plants species were included in this case study: maize (Zea mays), wheat (Triticum aestivum), sweet sorghum (Sorghum bicolor), bamboo (Bambusoideae), and rice (Oryza sativa). Average mechanical properties and biomass distributions for each plant species were attained from the literature and were used as inputs to the Excel spreadsheet provided in Additional file 1. The spreadsheet was then used to determine the impact of self-weight on measurements of stalk flexural stiffness and stalk bending strength (i.e., to quantify the amount of error introduced when Body Forces are ignored). For the third case study a detailed experimental analysis of a commercially available wheat variety was conducted. In this study, the Excel spreadsheet provided in Additional file 1 was used to determine the effects of self-loading on the flexural response of wheat stems throughout a growing season. The methods and results of this third case study are presented in Additional files 3 and 4. Comparison of finite element and closed form solutions As a form of data triangulation finite element models of plant stems were compared to the new closed form solution which accounts for Body Forces that is presented in the methods section. In other words, the closed form solution was evaluated using the same inputs as each of the 768 finite models and the solutions from the closed form equations and the finite element models were compared. The finite element models were found to be in good agreement with the closed form solutions. In particular, the median error between the 768 finite element models and the closed form equations was found to be 0.126% for deflection at the top of the specimen, and 0.0003% for the total bending moment at the base of the specimen. Figure 3 displays these comparisons in terms of calculations of stalk bending strength and stalk flexural stiffness. As shown in the figure the closed form solution method can accurately account for both Body Forces and External Forces when calculating stalk flexural stiffness and stalk bending strength. These data imply that for the ranges evaluated, the closed form solution is providing accurate results and no mistakes were made during its derivation. 
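For reference, the factorial sweep described above can be enumerated programmatically to confirm that it yields 768 unique combinations. The level values below are placeholders (the numeric levels are listed in Table 4); only the structure of the sweep is shown.

from itertools import product

levels = {"E": ["E_low", "E_high"],
          "I": ["I_low", "I_high"],
          "h1": ["h1_low", "h1_high"],
          "h2": ["h2_low", "h2_high"],
          "h3": ["h3_low", "h3_high"],
          "M": ["M_low", "M_high"],
          "W": ["W_low", "W_high"],
          "F": ["F1", "F2", "F3", "F4", "F5", "F6"]}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
assert len(runs) == 2**7 * 6 == 768   # full factorial: seven 2-level factors and one 6-level factor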
A comparison between the closed form solution and the solution of finite element models for stalk flexural stiffness (a) and for stalk bending strength (b), n = 768, as a function of deflection normalized by plant height. Histograms of the error between the closed form solution and the finite element models for stalk flexural stiffness (c) and for stalk bending strength (d), n = 768. Panel (a) demonstrates that significant errors can occur at very small (near-zero) deflections. A deflection of 2.5% to 20% of the stalk height is recommended to minimize error during stalk flexural stiffness phenotyping experiments. However, it should be noted that, as shown in Fig. 3a, the error in measured stalk flexural stiffness is relatively high in analyses with very small deflections. This was expected. The error in stalk flexural stiffness measurements that occurs at near-zero deflections is caused by simplifying assumptions made in the derivation of the closed form solution. Researchers should therefore avoid using the closed form solution method to analyze plant samples undergoing very small (near-zero) deflections. A deflection of approximately 2.5%–20% of the stalk height (i.e., a deflection angle of ~ 6°) is generally a good starting point to employ in mechanical phenotyping experiments used to measure stalk flexural stiffness. As mentioned previously, the engineering theory used to derive the closed form solution presented above contains several inherent assumptions. These assumptions gradually become less valid as deflections become very large. Therefore, to determine the maximum range of applicability of the closed form solutions, one additional finite element model was created and subjected to extremely large deflections. In particular, the model was created with the following input parameters: E = 5.00E+07 N/mm2, I = 5.50E+04 mm4, EI = 2.8E+12 N mm2, h1 = 1000 mm, h2 = 550 mm, h3 = 200 mm, M = 1000 N mm @ h1, W = 100 N @ h3, and F ramped up to 5.00E+07 N @ h2. It should be noted that this loading scenario exceeds the realistic range of forces and deflections a plant stem would be subjected to. In other words, structural failure of the stem would occur far before such high forces and deflections could be achieved. This extreme model was used to investigate the extent of validity of the closed form solution for very large deflections. Agreement between this finite element model and the closed form solutions is strong at small deflections (as expected). At very large deflections (greater than a ~ 45° angle at the tip of the stem), geometric nonlinearities that are not captured by the closed form engineering beam equations become more influential [4]. That is to say, the closed form solution is accurate so long as the linear closed form engineering beam equations upon which it is predicated are accurate. For more discussion on this topic, see the Limitations section. Figure 4 depicts the comparison between the extremely large deflection finite element model and the closed form solution. Figure 4 displays a maximum horizontal deflection equal to the height of the stem. A comparison between the closed form solution and the finite element model solution (FEM) for very large deflections (i.e., for deflections and loads beyond what would typically be seen in the field).
Plots depict the deflection at the tip of the stalk (a) and the maximum moment at the base of the stalk (b); the % error between the finite element model and the closed form calculation of stalk flexural stiffness and stalk bending strength are shown as a function of stalk deflection normalized by stalk height (c) A computational tool for accounting for weights To make the closed form solutions derived in the methods section more amenable to researchers without a structural engineering background (i.e., plant scientists, agronomists, and other end-users), an Excel (Microsoft Corporation, 2019) spreadsheet was developed, and is included as Additional file 1. The user simply inputs the stalk flexural stiffness of the plant stem, the heights to each location of interest, the magnitude of externally applied forces and moments, and the weights at each location. The spreadsheet calculates the weight induced moments (Mbody) and deflections as well as the total induced moment (Mtotal) at all locations. The spreadsheet makes the calculation both with and without self-loading considered. In addition, the error induced by ignoring self-loading is calculated. Figure 5 shows an example of the spreadsheet in which 3 externally applied forces, 2 externally applied moments, and 3 weights are considered. This tool can be used by researchers to determine the necessity of including self-loading in their studies. An example of the Excel spreadsheet (see Additional file 1), showing loading at three locations, and calculating deflection and induced moments at four locations: the three loading locations and the base of the plant. Note that the error in deflection is not calculated at the base, as deflection at the base is zero regardless of loading condition For example, if this spreadsheet were used to determine the necessity of including self-weight in a mechanical phenotyping study (e.g., a study using the device as presented in [6]), the following would be performed: (1) A non-destructive, small deflection, flexural test as described in [6] would be performed, to determine the specimen's stalk flexural stiffness; (2) a destructive, large deflection bending strength test as described in [6] would then be performed on the same specimen; (3) the specimen would then be weighed and the center-of-gravity would be determined; (4) the specimen weight, center-of-gravity, and stalk flexural stiffness as well as the magnitude and location of the load applied to the plant by the phenotyping device from the destructive bending strength test would be input into the spreadsheet; (5) the spreadsheet would report out the amount of error present in stalk flexural stiffness and stalk bending strength calculations if the weight of the specimen was ignored. This procedure would then be repeated for several representative specimens. This data could then be used to inform the researchers if self-weight induced loadings are significant and need to be accounted for in phenotyping experiments or if the amount of error introduced by neglecting self-weight is negligible. If self-weight was determined to be significant then the spreadsheet could be used to properly account for the self-weight of measured samples. Case study results Results from the first and second case studies are presented below. Results from the third case study (experimental analysis of wheat throughout a growing season) are found in Additional files 3 and 4. With regards to the first case study, Fig. 
2 displays two common loading configurations used during mechanical phenotyping experiments. The first test configuration represents a typical stalk flexural stiffness test for maize [6, 26] and applies a Contact Force at the top of the specimen, while the stalk's center of gravity is below the loading point. The second test configuration shown in Fig. 2 represents a typical stalk flexural stiffness test for wheat [27, 28] and applies a Contact Force below the grain head but near the top of the specimen. During these types of mechanical phenotyping tests the Contact Force (F) applied by a phenotyping device and the deflection of the stem at the point of loading (δ1) are recorded. Ignoring the weight of the stalk, the stalk flexural stiffness (EI) is then typically calculated from the test data by rearranging the following engineering beam equation to solve for EI: $$\delta_{t} = \frac{F h_{t}^{3}}{3EI}$$ To account for the weight of the stalk when calculating stalk flexural stiffness, we must modify Eq. (15) to include the stalk weight (w) as discussed in the methods section. For example: Configuration 1: load at top, weight at midspan First, solving Eq. (11) for loading configuration 1 results in: $$\begin{bmatrix} 1 & -\dfrac{h_{2}^{2}}{2EI} \\ -w & 1 \end{bmatrix} \begin{bmatrix} \delta_{2} \\ W \end{bmatrix} = \begin{bmatrix} \dfrac{F h_{2}^{2}\left(3h_{1} - h_{2}\right)}{6EI} \\ 0 \end{bmatrix}$$ where the two unknowns are the deflection at the weight (δ2) and the weight-induced moment (W). From this equation, the weight-induced moment can be calculated as: $$W = \frac{F h_{2}^{2} w\left(3h_{1} - h_{2}\right)}{6EI - 3wh_{2}^{2}}$$ Finally, we can solve Eq. (5) at the point of loading (δ1) to find a relationship between the test data and the deflection: $$\delta_{1} = \frac{F h_{1}^{3}}{3EI} + \frac{F w h_{2}^{3}\left(3h_{1} - h_{2}\right)\left(2h_{1} - h_{2}\right)}{6EI\left(2EI - wh_{2}^{2}\right)}$$ where the first term is the uncorrected relationship of Eq. (15) and the second term is the correction factor for the weight-induced moment. This newly calculated deflection can then be substituted into the corresponding equation in Table 3 to calculate the corrected stalk flexural stiffness. Configuration 2: load at midspan, weight at top As before, solving Eq. (11) for loading configuration 2 at the weight's location results in: $$\begin{bmatrix} 1 & -\dfrac{h_{1}^{2}}{2EI} \\ -w & 1 \end{bmatrix} \begin{bmatrix} \delta_{1} \\ W \end{bmatrix} = \begin{bmatrix} \dfrac{F h_{2}^{2}\left(3h_{1} - h_{2}\right)}{6EI} \\ 0 \end{bmatrix}$$ Solving for the weight-induced moment and then solving Eq. (5) at the point of loading (δ2) gives a relationship between the test data and the deflection: $$\delta_{2} = \frac{F h_{2}^{3}}{3EI} + \frac{F w h_{2}^{4}\left(3h_{1} - h_{2}\right)}{6EI\left(2EI - wh_{1}^{2}\right)}$$ It should be noted that Eqs. (18) and (20) are simply Eq. (15) with the addition of a correction factor that accounts for the influence of the weight-induced bending moment. Thus, by comparing the results of Eq. (15) with either Eq. (18) or Eq. (20), the influence of the weight-induced bending moment on the deflection of the stem can be calculated. Additionally, the results of Eqs. (18) and (20) (i.e., the deflections) can be input into Eq. (6) to determine the magnitude of the weight-induced moment. The weight-induced bending moment (W) can then be compared to the bending moment induced from the applied force (Mext) to determine the effect of self-weight on the stalk bending strength.
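The corrected relationships for the two configurations can also be applied directly in code. The sketch below uses hypothetical helper names and illustrative input values, and assumes the corrected forms of Eqs. (18) and (20) given above; it returns the percent error in flexural stiffness that results from fitting the measured deflection with the uncorrected Eq. (15).

def config1_deflection(F, w, h1, h2, EI):
    # Load F at the top (h1), weight w at midspan (h2): Eq. (18)
    base = F * h1**3 / (3*EI)
    correction = F * w * h2**3 * (3*h1 - h2) * (2*h1 - h2) / (6*EI * (2*EI - w*h2**2))
    return base + correction

def config2_deflection(F, w, h1, h2, EI):
    # Load F at midspan (h2), weight w at the top (h1): Eq. (20)
    base = F * h2**3 / (3*EI)
    correction = F * w * h2**4 * (3*h1 - h2) / (6*EI * (2*EI - w*h1**2))
    return base + correction

def ei_error_if_weight_ignored(F, w, h1, h2, EI, config=1):
    # Percent error in flexural stiffness when the measured deflection is fed
    # into the uncorrected Eq. (15), EI = F*h^3/(3*delta).
    if config == 1:
        delta, h_load = config1_deflection(F, w, h1, h2, EI), h1
    else:
        delta, h_load = config2_deflection(F, w, h1, h2, EI), h2
    EI_naive = F * h_load**3 / (3*delta)
    return 100.0 * (EI - EI_naive) / EI

# Illustrative wheat-like values (hypothetical, not taken from Table 5):
print(ei_error_if_weight_ignored(F=0.03, w=0.02, h1=900.0, h2=800.0, EI=4.0e4, config=2))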
Using the methods presented in this case study, researchers can easily determine whether weight-induced bending moments are negligible or whether they need to be incorporated into their mechanical phenotyping studies.
A second case study was conducted to determine the general influence of Body Forces on several plant species. The values shown in Table 5 represent typical values reported in the literature for the five plant species included in this case study. It should be noted that these are average single data points and a significant amount of variation in heights, weights, and flexural stiffnesses is expected within a given plant species. This information is presented here as an accessible reference for researchers to develop an understanding of the types of plants that are more or less affected by self-loading.
Table 5 Self-loading related properties and the % error introduced when self-loading is ignored in calculations of stalk bending strength and stalk flexural stiffness
A key factor in determining the influence of Body Forces in different plant species is the ratio of the weight of a plant to its flexural stiffness. While this ratio does not include all of the factors that influence self-loading, it can be used as a quick evaluation tool for researchers to determine the general amount of influence self-loading may have (a short numerical sketch of this screening calculation is given at the end of this passage). Figure 6 depicts the influence of this ratio on stalk flexural stiffness and stalk bending strength, with the plant varieties in Table 5 shown as data points. In general, it can be seen from the figure that Body Forces (i.e., self-weight) have a negligible effect on stiff and strong stems (e.g., bamboo and maize) but become more influential in smaller stems (e.g., rice and wheat).
The error of stalk flexural stiffness (left) and stalk bending strength (right), as a function of the ratio of the combined weight of the grain and plant to the stalk flexural stiffness.
Mechanical measurements of plants have been used to investigate stalk lodging resistance for over a century. However, engineers or mechanical measurement experts have typically not been involved in past studies. Consequently, very few previous studies have attempted to account for the complex influence of the plant's own weight (i.e., Body Forces) on mechanical measurements. The studies that have attempted to account for self-weight typically normalized bending strength measurements by specimen weight (e.g., [29, 30]). This was an important first step and raised general awareness of the need to somehow account for self-weight during mechanical phenotyping studies. However, the effect of self-weight on stalk bending strength and stalk flexural stiffness is complex and is not fully captured by normalizing stalk bending strength measurements by specimen weight. This is the first report the authors are aware of that presents a method to properly account for plant weight when calculating stalk bending strength and stalk flexural stiffness. Results demonstrate that the equations derived herein to account for the complex effects of self-weight during mechanical phenotyping experiments are accurate. The authors therefore recommend that future studies utilize the equations, correction factors and Excel spreadsheet presented herein to account for the effects of self-weight during mechanical phenotyping experiments. More specifically, based on prior experience and the results presented in Table 5 and Fig. 6, the authors recommend that self-weight be accounted for when testing small grain stems.
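As a rough numerical companion to the ratio-based screening described above (see Fig. 6), the sketch below sweeps the weight-to-flexural-stiffness ratio for a fixed, purely hypothetical configuration 1 test geometry and prints the resulting error in stalk flexural stiffness when self-weight is ignored. The geometry, force, stiffness and ratio grid are illustrative placeholders chosen here, not values from Table 5.
# Illustrative sweep of the weight-to-flexural-stiffness ratio for configuration 1
# (load at the top, weight at mid-height). All values are placeholders.
h1, h2, F, EI = 1.0, 0.5, 5.0, 10.0        # m, m, N, N*m^2
for ratio in (0.01, 0.1, 0.5, 1.0, 2.0):   # w/EI in 1/m^2
    w = ratio * EI                          # plant + grain weight, N
    d0 = F * h1**3 / (3*EI)                 # Eq. (15), self-weight ignored
    # Eq. (18); this simplified model assumes 2*EI > w*h2**2 (no self-weight buckling)
    d1 = d0 + F*w*h2**3*(3*h1 - h2)*(2*h1 - h2) / (6*EI*(2*EI - w*h2**2))
    EI_ignored = F * h1**3 / (3*d1)         # stiffness inferred if weight is ignored
    print(f"w/EI = {ratio:4.2f} 1/m^2 -> EI error {100*(EI - EI_ignored)/EI:5.1f} %")
In this toy sweep the error grows from well under 1% to tens of percent as the ratio increases, which is qualitatively the trend shown in Fig. 6.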
However, the effect of self-weight on large grain stems that possess a small ratio of plant weight to stalk flexural stiffness (e.g., mature maize stalks) is minimal and, for most intents and purposes, is most likely negligible.
More broadly, the authors advocate for increased collaboration between plant scientists and engineers. The mechanical response of plant stems is complex and requires specific expertise to fully understand. While the Excel spreadsheet and equations derived above have been made as approachable as is feasible to non-experts, they will be most useful to engineers and structural mechanics experts who fully comprehend the inherent assumptions and limitations of the tools.
Finally, it should be noted that the associations between stalk flexural stiffness, stalk bending strength, and stalk lodging resistance are plant- and time-specific. For instance, in late-season lodging of maize stalks, previous studies have found that plants experience a predominantly linear-elastic response prior to failure, and that stalk flexural stiffness tends to strongly correlate with lodging resistance [5]. In such a case, Eq. (14) demonstrates that the total bending moment and the bending stress are directly proportional, e.g., a 10% increase in the total bending moment will result in a 10% increase in stress. Therefore, the authors hypothesize that increasing the weight-induced bending moment will decrease the lodging resistance at a ratio of −1:1, e.g., a 10% increase in the induced bending moment from self-loading will result in a 10% decrease in the lodging resistance of the stalk. However, for less linear material responses (e.g. during green-snap), these relationships will be less direct. For stems with nonlinear material responses, researchers will need to incorporate these self-loading equations into biomechanical models that contain non-linear material responses.
The primary limitation of the current study is that the stalk was assumed to satisfy the assumptions of pure bending, i.e., a constant cross-section of homogeneous, isotropic, linear elastic material [4]. It should be noted that the finite element models were also only valid for linear elastic materials. Inclusion of changes in cross-sectional geometry along the length of the stalks [8], material heterogeneity and anisotropy, and non-linear material properties would likely change the behavior of the analytical system. A discussion of the influence of these factors has been presented in a previous study by the authors [4]. The simplifying assumptions made in the derivation of the closed form solutions, combined with the assumption of a single cross-section along the entire length of the stalk, result in a single flexural stiffness parameter for the entire stalk. However, the flexural stiffness of a plant varies continuously along the length of the stalk (i.e., the diameter of most plant stems is larger near the base of the plant and smaller near the top of the plant). The simplifying assumption of a single flexural stiffness parameter was deliberately made to allow for an easily used generalized equation. This assumption is routinely made in phenotyping studies as well. If researchers need to incorporate changes in flexural stiffness along the length of the stalk, the approach presented in this study can be incorporated into a full Castigliano's method beam approximation [10]. Additionally, the equations used in this study assume small strains and small deflections.
As such, these equations carry the same limitations as standard engineering beam bending equations, and are not suitable to predict post-failure loading conditions or deflections. When post-buckling analyses are required, non-linear finite element modeling approaches are recommended. In summary, the analyses in this study are only valid for conditions in which traditional phenotyping methods are considered valid. Finally, Eqs. (1), (2), and (6) assume that the maximum moment induced by self-loading is applied to the entire length of the stem below the weight, which is not accurate, and is used as a simple estimation of the moment induced by self-loading. In reality, self-loading is not a constant moment along the length of the stalk, but instead is an axial compressive load that induces a moment that varies along the length of the stalk. However, modeling loading as an axial compressive load greatly increases the complexity of the equation, to the point that the matrix equations presented in this study would not be practical. Therefore, Eq. (6) presents an upper-bound of the influence of self-loading by simply applying the maximum moment along the entire length of the stem. As shown in Figs. 3 and 4, this assumption is reasonable for the parameter space explored. Equations were derived to account for the influence of self-loading on measurements of stalk flexural stiffness and stalk bending strength of plant stems. The derived equations were parametrically validated against hundreds of nonlinear finite element models of plant stems. The closed form equations are accurate and showed good agreement with the finite element models (median error < 0.2%). The equations were incorporated into a user-friendly spreadsheet that can be used by the research community to account for self-loading of plants during mechanical phenotyping studies. Results indicate that ignoring self-weight can lead to significant errors in phenotyping measurements of small grains (e.g. 16% error in stalk flexural stiffness for wheat). It is the recommendation of the authors that self-loading be taken into account for plants such as wheat and rice that have a large ratio of weight to flexural stiffness. In addition, to minimize error, a deflection of 2.5% to 20% of the stalk height (a deflection angle of around 6º) is recommended for mechanical phenotyping tests used to characterize stalk flexural stiffness. Flint-Garcia SA, Jampatong C, Darrah LL, McMullen MD. Quantitative trait locus analysis of stalk strength in four maize populations. Crop Sci. 2003;43:13–22. Berry P, Sylvester-Bradley R, Berry S. Ideotype design for lodging-resistant wheat. Euphytica. 2007;154:165–79. Niklas KJ, Spatz H-C. Plant physics. Chicago: University of Chicago Press; 2012. Stubbs C, Baban N, Robertson D, Al-Zube L, Cook D. Bending stress in plant stems: models and assumptions. In: Geitmann A, Gril J, editors. Plant biomechanics—from structure to function at multiple scales. Springer Verlag; 2018. p. 49–77. https://www.springer.com/gp/book/9783319790985. Robertson DJ, Lee SY, Julias M, Cook DD. Maize stalk lodging: flexural stiffness predicts strength. Crop Sci. 2016;56:1711–8. Cook DD, de la Chapelle W, Lin T-C, Lee SY, Sun W, Robertson DJ. DARLING: a device for assessing resistance to lodging in grain crops. Plant Methods. 2019;15:102. Pinthus MJ. Lodging in Wheat, Barley, and Oats: The phenomenon, its causes, and preventive measures. Adv Agron. Elsevier; 1974. p. 209–63. https://linkinghub.elsevier.com/retrieve/pii/S0065211308607828. 
Stubbs C, Seegmiller K, McMahan C, Sekhon R, Robertson DJ. Diverse maize hybrids are structurally inefficient at resisting wind induced bending forces that cause stalk lodging. Plant Methods. 2020;16:67. Erndwein L, Cook DD, Robertson DJ, Sparks EE. Field-based mechanical phenotyping of cereal crops to assess lodging resistance. Appl Plant Sci. 2020;8:8. https://doi.org/10.1002/aps3.11382. Beer FP, Johnston E, Dewolf JT. Mechanics of materials. 3rd ed. New York: McGraw-Hill; 2002. Grafius JE, Brown HM. Lodging resistance in oats 1. Agron J. 1954;46(9):414–8. Robertson DJ, Julias M, Lee SY, Cook DD. Maize stalk lodging: morphological determinants of stalk strength. Crop Sci. 2017;57:926. Zuber MS, Grogan CO. A new technique for measuring stalk strength in corn. Crop Sci. 1961;1:378–80. Cloninger FD. Methods of evaluating stalk quality in corn. Phytopathology. 1970;60:295. Singh TP. Association between certain stalk traits related to lodging and grain yield in maize (Zea mays L.). Euphytica. 1970;19:394–7. Remison SU, Akinleye D. Relationship between lodging, morphological characters and yield of varieties of maize (Zea-Mays-L). J Agric Sci. 1978;91:633–8. Zuber MS, Kang MS. Corn lodging slowed by sturdier stalks. Crops Soils. 1978. Hondroyianni E, et al. Corn stalk traits related to lodging resistance in two soils of differing salinity. Maydica. 2000;45(2):125–33. Ma D, Xie R, Liu X, Niu X, Hou P, Wang K, et al. Lodging-related stalk characteristics of maize varieties in china since the 1950s. Crop Sci. 2014;54:2805. Wegst U, Ashby M. The structural efficiency of orthotropic stalks, stems and tubes. J Mater Sci. 2007;42:9005–14. Nelson N, Stubbs CJ, Larson R, Cook DD. Measurement accuracy and uncertainty in plant biomechanics. J Exp Bot. 2019;70:3649–58. Kim NH, Sankar BV, Kumar AV. Introduction to finite element analysis and design. Hoboken: Wiley; 2018. Hibbitt K, Karlsson BI, Sorenson EP. ABAQUS/Standard theory manual. Salt Lake City: Sorenson Inc.; 2016. Simulia DS. ABAQUS Analysis manual. Provid RI. 2016. Stubbs CJ, Larson R, Cook DD. Maize stem buckling failure is dominated by morphological factors. BioRxiv. 2019;833863. Sekhon RS, Joyner CN, Ackerman AJ, McMahan CS, Cook DD, Robertson DJ. Stalk bending strength is strongly associated with maize stalk lodging incidence across multiple environments. Field Crops Res. 2020;249:107737. Berry PM, Spink JH, Gay AP, Craigon J. A comparison of root and stem lodging risks among winter wheat cultivars. J Agric Sci. 2003;141:191–202. Berry PM, Spink J, Sterling M, Pickett AA. Methods for rapidly measuring the lodging resistance of wheat cultivars. J Agron Crop Sci. 2003;189:390–401. Crook MJ, Ennos AR. Stem and root characteristics associated with lodging resistance in four winter wheat cultivars. J Agric Sci. 1994;123(2):167–74. Oladokun MAO, Ennos AR. Structural development and stability of rice Oryza sativa L. var. Nerica 1. J Exp Bot. 2006;57(12):3123–30. Boon EJMC, Engels FM, Struik PC, Cone JW. Stem characteristics of two forage maize (Zea mays L.) cultivars varying in whole plant digestibility. I. Relevant morphological parameters. NJAS Wagening J Life Sci. 2005;53:71–85. Fateh M, Mohammadi S, Arbt HK, Farahvash F, Zand E. Effects of density and nitrogen fertilizer on number of ear, number of grains and grain weight in maize cultivars. Int J Biosci IJB. 2014;4:76–82. Tongdi Q, Yaoming L, Jin C. Experimental study on flexural mechanical properties of corn stalks. In: 2011 Int Conf New Technol Agric. New York: IEEE; 2011. p. 130–4. 
Hirai Y, Inoue E, Matsui M, Mori K, Hashiguchi K. Reaction force of a wheat stalk undergoing forced displacement. J Jpn Soc Agric Mach. 2003;65:47–55. Austenson HM, Walton PD. Relationships between initial seed weight and mature plant characters in spring wheat. Can J Plant Sci. 1970;50:53–8. Zhihua Y, Yingjun L. Relationship between bending property and density of wheat stem. Agric Sci Technol. 2009;10:100–1. Bakeer B, Taha I, El-Mously H, Shehata SA. On the characterisation of structure and properties of sorghum stalks. Ain Shams Eng J. 2013;4:265–71. Ekefre DE, Mahapatra AK, Latimore M Jr, Bellmer DD, Jena U, Whitehead GJ, et al. Evaluation of three cultivars of sweet sorghum as feedstocks for ethanol production in the Southeast United States. Heliyon. 2017;3:e00490. Tsuchihashi N, Goto Y. Cultivation of sweet sorghum (Sorghum bicolor (L.) Moench) and determination of its harvest time to make use as the raw material for fermentation, practiced during rainy season in dry land of Indonesia. Plant Prod Sci. 2004;7:442–8. Obataya E, Kitin P, Yamauchi H. Bending characteristics of bamboo (Phyllostachys pubescens) with respect to its fiber–foam composite structure. Wood Sci Technol. 2007;41:385–400. Yen T-M. Culm height development, biomass accumulation and carbon storage in an initial growth stage for a fast-growing moso bamboo (Phyllostachy pubescens). Bot Stud. 2016;57:10. Jin X, Fourcaud T, Li B, Guo Y. Towards modeling and analyzing stem lodging for two contrasting rice cultivars. 2009 Third Int Symp Plant Growth Model Simul Vis Appl. Beijing, China: IEEE; 2009. p. 253–60. http://ieeexplore.ieee.org/document/5474810/. Accessed 11 Feb 2020. Chen J, Gao H, Zheng X-M, Jin M, Weng J-F, Ma J, et al. An evolutionarily conserved gene, FUWA, plays a role in determining panicle architecture, grain shape and grain weight in rice. Plant J. 2015;83:427–38. van Delden SH, Vos J, Ennos AR, Stomph TJ. Analysing lodging of the panicle bearing cereal teff (Eragrostis tef). New Phytol. 2010;186:696–707.
Field data collection was completed by Undergraduate Research Assistants Matthew Kolbeck and Jonathan Fenske at the University of Saskatchewan's Plant Phenotyping and Imaging Research Centre (P2IRC). This work was funded in part by the National Science Foundation (Award #1826715), by the United States Department of Agriculture—NIFA (#2016-67012-2381), and by the Canada First Research Excellence Fund (CFREF). Any opinions, findings, conclusions, or recommendations are those of the author(s) and do not necessarily reflect the view of the funding bodies.
Department of Mechanical Engineering, University of Idaho, Moscow, ID, USA: Christopher J. Stubbs, Yusuf A. Oduntan & Daniel J. Robertson. Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK, Canada: Tyrone R. Keep & Scott D. Noble.
All authors were fully involved in the study and preparation of the manuscript. The material within has not been and will not be submitted for publication elsewhere. All authors read and approved the final manuscript. Correspondence to Daniel J. Robertson.
Additional file 1: Spreadsheet for calculating the effect of self-weight. Additional file 2: Instructions for using the spreadsheet presented in Additional file 1. Additional file 3: Case study 3—The effect of self-weight on wheat stems. Additional file 4: Biomass data from case study 3.
Stubbs, C.J., Oduntan, Y.A., Keep, T.R. et al. The effect of plant weight on estimations of stalk lodging resistance. Plant Methods 16, 128 (2020).
https://doi.org/10.1186/s13007-020-00670-w
Review on computational methods for Lyapunov functions
Peter Giesl (Department of Mathematics, University of Sussex, Falmer BN1 9QH) and Sigurdur Hafstein (School of Science and Engineering, Reykjavik University, Menntavegi 1, IS-101 Reykjavik)
Received August 2014; Revised January 2015; Published August 2015.
Lyapunov functions are an essential tool in the stability analysis of dynamical systems, both in theory and applications. They provide sufficient conditions for the stability of equilibria or more general invariant sets, as well as for their basin of attraction. The necessity, i.e. the existence of Lyapunov functions, has been studied in converse theorems; however, these theorems do not provide a general method to compute them. Because of their importance in stability analysis, numerous computational construction methods have been developed within the Engineering, Informatics, and Mathematics communities. They cover different types of systems, such as ordinary differential equations, switched systems, non-smooth systems, and discrete-time systems, and employ different methods, such as series expansion, linear programming, linear matrix inequalities, collocation methods, algebraic methods, set-theoretic methods, and many others. This review brings these different methods together. First, the different types of systems where Lyapunov functions are used are briefly discussed. In the main part, the computational methods are presented, ordered by the type of method used to construct a Lyapunov function.
Keywords: dynamical system, basin of attraction, numerical method, contraction metric, stability, converse theorem, Lyapunov function.
Mathematics Subject Classification: Primary: 37M99, 34D20; Secondary: 34D05, 37C75, 34D4.
Citation: Peter Giesl, Sigurdur Hafstein. Review on computational methods for Lyapunov functions. Discrete & Continuous Dynamical Systems - B, 2015, 20 (8): 2291-2331. doi: 10.3934/dcdsb.2015.20.2291
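As a minimal, concrete illustration of the simplest setting covered by such methods, a quadratic Lyapunov function $V(x) = x^T P x$ for a linear system $\dot{x} = Ax$ can be computed by solving the Lyapunov equation $A^T P + P A = -Q$. The Python sketch below does this with SciPy for an arbitrary stable example matrix; the matrices A and Q are illustrative choices made here and are not taken from the review.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable system matrix (eigenvalues -1 and -2), chosen for this example.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)                      # any symmetric positive definite matrix

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.T = q (real case),
# so A^T P + P A = -Q corresponds to a = A.T and q = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# V(x) = x^T P x is then a Lyapunov function: P is symmetric positive definite
# and dV/dt = x^T (A^T P + P A) x = -x^T Q x < 0 for x != 0.
print("P =\n", P)
print("eigenvalues of P:", np.linalg.eigvalsh(P))
print("residual:", np.max(np.abs(A.T @ P + P @ A + Q)))
This classical linear construction is only a starting point; the methods surveyed in the review extend the computation of Lyapunov functions to nonlinear, switched, non-smooth and discrete-time systems.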
CommonCrawl
June 2016, 5(2): 251-272. doi: 10.3934/eect.2016004 On a parabolic-hyperbolic filter for multicolor image noise reduction Valerii Maltsev 1, and Michael Pokojovy 2, Taras Shevchenko National University of Kyiv, Faculty of Cybernetics, 4D Glushkov Ave, 03680 Kyiv, Ukraine Karlsruhe Institute of Technology, Department of Mathematics, Englerstrasse 2, 76131 Karlsruhe, Germany Received March 2016 Revised May 2016 Published June 2016 We propose a novel PDE-based anisotropic filter for noise reduction in multicolor images. It is a generalization of Nitzberg & Shiota's (1992) model, which is a hyperbolic relaxation of the well-known parabolic Perona & Malik filter (1990). First, we consider a 'spatial' mollifier-type regularization of our PDE system and exploit the maximal $L^{2}$-regularity theory for non-autonomous forms to prove a well-posedness result both in weak and strong settings. Again, using the maximal $L^{2}$-regularity theory and Schauder's fixed point theorem, respective solutions for the original quasilinear problem are obtained and the uniqueness of solutions with a bounded gradient is proved. Finally, the long-time behavior of our model is studied. (A minimal numerical sketch of the parabolic Perona-Malik step that this model relaxes is given after the reference list below.) Keywords: Image processing, weak solutions, strong solutions, maximal regularity, nonlinear partial differential equations. Mathematics Subject Classification: Primary: 35G61, 35M33, 65J15; Secondary: 35B30, 35D30, 35D3. Citation: Valerii Maltsev, Michael Pokojovy. On a parabolic-hyperbolic filter for multicolor image noise reduction. Evolution Equations & Control Theory, 2016, 5 (2) : 251-272. doi: 10.3934/eect.2016004 L. Alvarez, F. Guichard, P.-L. Lions and J.-M. Morel, Axioms and fundamental equations of image processing,, Archive for Rational Mechanics and Analysis, 123 (1993), 199. doi: 10.1007/BF00375127. Google Scholar H. Amann, Compact embeddings of vector-valued Sobolev and Besov spaces,, Glasnik Matematički, 35 (2000), 161. Google Scholar H. Amann, Non-local quasi-linear parabolic equations,, Russian Mathematical Surveys, 60 (2005), 1021. doi: 10.1070/RM2005v060n06ABEH004279. Google Scholar H. Amann, Time-delayed Perona-Malik type problems,, Acta Mathematica Universitatis Comenianae, 76 (2007), 15. Google Scholar F. Andreu, C. Ballester, V. Caselles and J. M. Mazón, Minimizing total variational flow,, Differential and Integral Equations, 14 (2001), 321. Google Scholar F. Andreu, C. Ballester, V. Caselles and J. M. Mazón, Some qualitative properties for the total variation flow,, Journal of Functional Analysis, 188 (2002), 516. doi: 10.1006/jfan.2001.3829. Google Scholar W. Arendt and R. Chill, Global existence for quasilinear diffusion equations in isotropic nondivergence form,, Annali della Scuola Normale Superiore di Pisa (5), 9 (2010), 523. Google Scholar V. Barbu, Nonlinear Differential Equations Of Monotone Types in Banach Spaces,, Springer Monographs in Mathematics, (2010). doi: 10.1007/978-1-4419-5542-5. Google Scholar A. Belahmidi, Équations Aux Dérivées Partielles Appliquées à la Restauration et à L'agrandissement des Images,, PhD thesis, (2003). Google Scholar A. Belahmidi and A. Chambolle, Time-delay regularization of anisotropic diffusion and image processing,, ESAIM: Mathematical Modelling and Numerical Analysis, 39 (2005), 231. doi: 10.1051/m2an:2005010. Google Scholar A. Belleni-Morante and A. C.
McBride, Applied Nonlinear Semigroups: An Introduction,, Wiley Series in Mathematical Methods in Practice, (1998). Google Scholar G. Bellettini, V. Caselles and M. Novaga, The total variation flow in $\mathbbR^N$,, Journal of Differential Equations, 184 (2002), 475. doi: 10.1006/jdeq.2001.4150. Google Scholar M. Burger, A. C. G. Menucci, S. Osher and M. Rumpf (eds.), Level Set and PDE Based Reconstruction Methods in Imaging, vol. 2090 of Lecture Notes in Mathematics,, Springer International Publishing, (1992). Google Scholar J. Canny, Finding Edges and Lines in Images,, Technical Report 720, (1983). Google Scholar G. R. Cattaneo, Sur une forme de l'équation de la chaleur éliminant le paradoxe d'une propagation instantanée,, Comptes Rendus de l'Académie des Sciences, 247 (1958), 431. Google Scholar F. Catté, P.-L. Lions, J.-M. Morel and T. Coll, Image selective smoothing and edge detection by nonlinear diffusion,, SIAM Journal on Numerical Analysis, 29 (1992), 182. doi: 10.1137/0729012. Google Scholar G. H. Cottet and M. El Ayyadi, A Volterra type model for image processing,, IEEE Transactions on Image Processing, 7 (1998), 292. doi: 10.1109/83.661179. Google Scholar R. Dautray and J.-L. Lions, Evolution Problems, vol. 5 of Mathematical Analysis and Numerical Methods for Science and Technology,, Springer-Verlag, (1992). doi: 10.1007/978-3-642-58090-1. Google Scholar D. Dier, Non-autonomous maximal regularity for forms of bounded variation,, Journal of Mathematical Analysis and Applications, 425 (2015), 33. doi: 10.1016/j.jmaa.2014.12.006. Google Scholar M. E. Gurtin and A. C. Pipkin, A general theory of heat conduction with finite wave speeds,, Archive for Rational Mechanics and Analysis, 31 (1968), 113. doi: 10.1007/BF00281373. Google Scholar A. Handlovičová, K. Mikula and F. Sgallari, Variational numerical methods for solving nonlinear diffusion equations arising in image processing,, Journal of Visual Communication and Image Representation, 13 (2002), 217. Google Scholar M. Hieber and M. Murata, The $L^p$-approach to the fluid-rigid body interaction problem for compressible fluids,, Evolution Equations and Control Theory, 4 (2015), 69. doi: 10.3934/eect.2015.4.69. Google Scholar M. Hochbruck, T. Jahnke and R. Schnaubelt, Convergence of an ADI splitting for Maxwell's equations,, Numerische Mathematik, 129 (2015), 535. doi: 10.1007/s00211-014-0642-0. Google Scholar S. L. Keeling and R. Stollberger, Nonlinear anisotropic diffusion filtering for multiscale edge enhancement,, Inverse Problems, 18 (2002), 175. doi: 10.1088/0266-5611/18/1/312. Google Scholar D. Marr and E. Hildreth, Theory of edge detection,, Proceedings of the Royal Society B, 207 (1980), 187. doi: 10.1098/rspb.1980.0020. Google Scholar S. A. Morris, The Schauder-Tychonoff fixed point theorem and applications,, Matematický Časopis, 25 (1975), 165. Google Scholar M. Nitzberg and T. Shiota, Nonlinear image filtering with edge and corner enhancement,, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14 (1992), 826. doi: 10.1109/34.149593. Google Scholar T. Ohkubo, Regularity of solutions to hyperbolic mixed problems with uniformly characteristic boundary,, Hokkaido Mathematical Journal, 10 (1981), 93. doi: 10.14492/hokmj/1381758116. Google Scholar P. Perona and J. Malik, Scale space and edge detection using anisotropic diffusion,, IEEE Trans. Pattern Anal. Machine Intell., 12 (1990), 629. doi: 10.1109/34.56205. Google Scholar J. 
Prüss, Maximal regularity of linear vector-valued parabolic Volterra equations,, Journal of Integral Equations and Applications, 3 (1991), 63. doi: 10.1216/jiea/1181075601. Google Scholar J. Prüss, Evolutionary Integral Equations and Applications, vol. 87 of Monographs in Mathematics,, Birkhäuser Verlag, (1993). doi: 10.1007/978-3-0348-8570-6. Google Scholar L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms,, Physica D: Nonlinear Phenomena, 60 (1992), 259. doi: 10.1016/0167-2789(92)90242-F. Google Scholar G. Savaré, Regularity results for elliptic equations in Lipschitz domains,, Journal of Functional Analysis, 152 (1998), 176. doi: 10.1006/jfan.1997.3158. Google Scholar D. W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization,, 2nd edition, (). Google Scholar P. Secchi, Well-posedness of characteristic symmetric hyperbolic systems,, Archive for Rational Mechanics and Analysis, 134 (1996), 155. doi: 10.1007/BF00379552. Google Scholar K. Takezawa, Introduction to Nonparametric Regression,, Wiley Series in Probability and Mathematical Statistics, (2006). Google Scholar J. Weickert, Anisotropic Diffusion in Image Processing,, B. G. Teubner, (1998). Google Scholar A. P. Witkin, Scale-space filtering,, Readings in Computer Vision: Issues, (1987), 329. doi: 10.1016/B978-0-08-051581-6.50036-2. Google Scholar R. Zacher, Maximal regularity of type $L_p$ for abstract parabolic Volterra equations,, Journal of Evolution Equations, 5 (2005), 79. doi: 10.1007/s00028-004-0161-z. Google Scholar José Luiz Boldrini, Jonathan Bravo-Olivares, Eduardo Notte-Cuello, Marko A. Rojas-Medar. Asymptotic behavior of weak and strong solutions of the magnetohydrodynamic equations. Electronic Research Archive, 2021, 29 (1) : 1783-1801. doi: 10.3934/era.2020091 Jens Lorenz, Wilberclay G. Melo, Suelen C. P. de Souza. Regularity criteria for weak solutions of the Magneto-micropolar equations. Electronic Research Archive, 2021, 29 (1) : 1625-1639. doi: 10.3934/era.2020083 Martin Kalousek, Joshua Kortum, Anja Schlömerkemper. Mathematical analysis of weak and strong solutions to an evolutionary model for magnetoviscoelasticity. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 17-39. doi: 10.3934/dcdss.2020331 Lorenzo Zambotti. A brief and personal history of stochastic partial differential equations. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 471-487. doi: 10.3934/dcds.2020264 Hua Chen, Yawei Wei. Multiple solutions for nonlinear cone degenerate elliptic equations. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020272 Alex H. Ardila, Mykael Cardoso. Blow-up solutions and strong instability of ground states for the inhomogeneous nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2021, 20 (1) : 101-119. doi: 10.3934/cpaa.2020259 Rim Bourguiba, Rosana Rodríguez-López. Existence results for fractional differential equations in presence of upper and lower solutions. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1723-1747. doi: 10.3934/dcdsb.2020180 Yueyang Zheng, Jingtao Shi. A stackelberg game of backward stochastic differential equations with partial information. Mathematical Control & Related Fields, 2020 doi: 10.3934/mcrf.2020047 Junyong Eom, Kazuhiro Ishige. Large time behavior of ODE type solutions to nonlinear diffusion equations. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3395-3409. doi: 10.3934/dcds.2019229 Tianwen Luo, Tao Tao, Liqun Zhang. 
Finite energy weak solutions of 2d Boussinesq equations with diffusive temperature. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3737-3765. doi: 10.3934/dcds.2019230 Yang Liu. Global existence and exponential decay of strong solutions to the cauchy problem of 3D density-dependent Navier-Stokes equations with vacuum. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1291-1303. doi: 10.3934/dcdsb.2020163 Serge Dumont, Olivier Goubet, Youcef Mammeri. Decay of solutions to one dimensional nonlinear Schrödinger equations with white noise dispersion. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020456 Helmut Abels, Johannes Kampmann. Existence of weak solutions for a sharp interface model for phase separation on biological membranes. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 331-351. doi: 10.3934/dcdss.2020325 Ryuji Kajikiya. Existence of nodal solutions for the sublinear Moore-Nehari differential equation. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1483-1506. doi: 10.3934/dcds.2020326 Pierre Baras. A generalization of a criterion for the existence of solutions to semilinear elliptic equations. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 465-504. doi: 10.3934/dcdss.2020439 Bo Chen, Youde Wang. Global weak solutions for Landau-Lifshitz flows and heat flows associated to micromagnetic energy functional. Communications on Pure & Applied Analysis, 2021, 20 (1) : 319-338. doi: 10.3934/cpaa.2020268 Thierry Cazenave, Ivan Naumkin. Local smooth solutions of the nonlinear Klein-gordon equation. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020448 Riadh Chteoui, Abdulrahman F. Aljohani, Anouar Ben Mabrouk. Classification and simulation of chaotic behaviour of the solutions of a mixed nonlinear Schrödinger system. Electronic Research Archive, , () : -. doi: 10.3934/era.2021002 Hua Qiu, Zheng-An Yao. The regularized Boussinesq equations with partial dissipations in dimension two. Electronic Research Archive, 2020, 28 (4) : 1375-1393. doi: 10.3934/era.2020073 PDF downloads (58) HTML views (0) Valerii Maltsev Michael Pokojovy
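As a quick illustration of the filtering idea in the Maltsev-Pokojovy abstract above, the following is a minimal sketch of the classical parabolic Perona-Malik diffusion step that their parabolic-hyperbolic model relaxes. The grid, time step, contrast parameter kappa, the exponential edge-stopping function and the periodic boundaries are assumptions made only for this sketch, not the scheme used in the paper.

# Minimal explicit Perona-Malik diffusion sketch (illustrative parameters only;
# periodic boundaries via np.roll are used purely for brevity).
import numpy as np

def perona_malik(image, n_steps=20, dt=0.15, kappa=0.1):
    u = image.astype(float).copy()
    for _ in range(n_steps):
        # Differences to the four neighbours.
        north = np.roll(u, -1, axis=0) - u
        south = np.roll(u, 1, axis=0) - u
        east = np.roll(u, -1, axis=1) - u
        west = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(s) = exp(-(s/kappa)^2).
        cn, cs = np.exp(-(north / kappa) ** 2), np.exp(-(south / kappa) ** 2)
        ce, cw = np.exp(-(east / kappa) ** 2), np.exp(-(west / kappa) ** 2)
        u += dt * (cn * north + cs * south + ce * east + cw * west)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0  # a simple bright square
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print("mean error before:", np.abs(noisy - clean).mean().round(3),
      "after:", np.abs(perona_malik(noisy) - clean).mean().round(3))

The edge-stopping conductance suppresses diffusion across strong gradients, which is the behaviour the hyperbolic relaxation studied in the paper is designed to regularize.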
CommonCrawl
July 2021, 41(7): 3045-3062. doi: 10.3934/dcds.2020397 The Littlewood-Paley $ pth $-order moments in three-dimensional MHD turbulence Yao Nie and Jia Yuan, School of Mathematics and Systems Science, Beihang University, Beijing 100191, China * Corresponding author: Jia Yuan Received July 2020 Revised October 2020 Published July 2021 Early access December 2020 Fund Project: The work is supported by NSF grant No.11871087 and No.11771423. The first author is supported by the Academic Excellence Foundation of BUAA for PhD students and China Scholarship Council No.201906020100 In this paper, we consider the Littlewood-Paley $ p $th-order ($ 1\le p<\infty $) moments of the three-dimensional MHD periodic equations, which are defined by the infinite-time and space average of the $ L^p $-norm of the velocity and magnetic fields involved in the spectral cut-off operator $ \dot\Delta _m $. Our results imply that in some cases, $ k^{-\frac{1}{3}} $ is an upper bound at length scale $ 1/k $. This coincides with the scaling law of many observations on astrophysical systems and simulations in terms of 3D MHD turbulence. (A brief numerical illustration of the dyadic spectral cut-off underlying these moments is given after the references below.) Keywords: Incompressible MHD equations, three-dimensional, Littlewood-Paley, Kolmogorov, turbulence. Mathematics Subject Classification: Primary: 35Q35; Secondary: 42B37, 76D03, 76W05. Citation: Yao Nie, Jia Yuan. The Littlewood-Paley $ pth $-order moments in three-dimensional MHD turbulence. Discrete & Continuous Dynamical Systems, 2021, 41 (7) : 3045-3062. doi: 10.3934/dcds.2020397 H. Bahouri, J.-Y. Chemin and R. Danchin, Fourier Analysis and Nonlinear Partial Differential Equations, Grundlehren der mathematischen Wissenschaften, 343, Springer-Verlag, 2011. doi: 10.1007/978-3-642-16830-7. Google Scholar A. Basu and J. K. Bhattacharjee, Universal properties of three-dimensional magnetohydrodynamic turbulence: do Alfvén waves matter?, J. Stat. Mech., 2005 (2005), P07002. doi: 10.1088/1742-5468/2005/07/P07002. Google Scholar A. Basu, A. Sain, S. K. Dhar and R. Pandit, Multiscaling in models of magnetohydrodynamic turbulence, Phys. Rev. Lett., 81 (1998), 2687-2690. doi: 10.1103/PhysRevLett.81.2687. Google Scholar D. Biskamp and W-C. Müller, Scaling properties of three-dimensional isotropic magnetohydrodynamic turbulence, Phys. Plasmas, 7 (2000), 4889-4900. doi: 10.1063/1.1322562. Google Scholar M. Cannone, Harmonic analysis tools for solving incompressible Navier-Stokes equations, Handbook of Mathematical Fluid Dynamics vol 3, 161–244, North-Holland, Amsterdam, 2004. Google Scholar Q. Chen, C. Miao and Z. Zhang, A new Bernstein's inequality and the 2D dissipative quasi-geostrophic equation, Comm. Math. Phys., 271 (2007), 821-838. doi: 10.1007/s00220-007-0193-7. Google Scholar Q. Chen, C. Miao and Z. Zhang, On the regularity criterion of weak solution for the 3D viscous magneto-hydrodynamics equations, Comm. Math. Phys., 284 (2008), 919-930. doi: 10.1007/s00220-008-0545-y. Google Scholar Q. Chen, C. Miao and Z. Zhang, On the well-posedness of the ideal MHD equations in the Triebel-Lizorkin spaces, Arch. Ration. Mech. Anal., 195 (2010), 561-578. doi: 10.1007/s00205-008-0213-6. Google Scholar J. Cho, E. T. Vishniac and A. Lazarian, Simulations of magnetohydrodynamic turbulence in a strongly magnetized medium, Astrophys. J., 564 (2002), 291-301. doi: 10.1086/324186.
Google Scholar J. Cho and E. T. Vishniac, The anisotropy of magnetohydrodynamic Alfvénic turbulence, Astrophys. J., 539 (2000), 273-282. doi: 10.1086/309213. Google Scholar P. Constantin, Euler equations, Navier-Stokes equations and turbulence, in Mathematical Foundation of Turbulent Viscous Flows, Lecture Notes in Math. Vol. 1871, Berlin: Springer, 2006, 1–43. doi: 10.1007%2F11545989_1. Google Scholar P. Constantin, The Littlewood-Paley spectrum in two-dimensional turbulence, Theor. Comput. Fluid Dyn., 9 (1997), 183-189. doi: 10.1007/s001620050039. Google Scholar E. Falgarone and T. Passot, Turbulence and Magnetic Fields in Astrophysics, Lecture Notes in Physics, Springer, 2003. doi: 10.1007%2F3-540-36238-X. Google Scholar Y. Gupta, B. J. Rickett and W. A. Coles, Refractive interstellar scintillation of pulsar intensities at 74 MHz, Astrophysical J., 403 (1993), 183-201. doi: 10.1086/172193. Google Scholar E. Hopf, Über die anfangswertaufgabe für die hydrodynamischen grundgleichungen, Math. Nachr., 4 (1951), 213-231. doi: 10.1002/mana.3210040121. Google Scholar P. S. Iroshnikov, Turbulence of a conducting fluid in a strong magnetic field, Soviet Astronom. AJ, 7 (1964), 566-571. Google Scholar A. N. Kolmogorov, The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers, Proceedings of the Royal Society A, 434 (1991), 9-13. doi: 10.1098/rspa.1991.0075. Google Scholar A. N. Kolmogorov, Dissipation of energy in the locally isotropic turbulence, Proceedings of the Royal Society A, 434 (1991), 15-17. doi: 10.1098/rspa.1991.0076. Google Scholar R. H. Kraichnan, Lagrangian-history closure approximation for turbulence, Phys. Fluids, 8 (1965), 575-598. doi: 10.1063/1.1761271. Google Scholar J. Leray, Sur le mouvement d'un liquide visqueux emplissant l'espace., Acta Math., 63 (1934), 193-248. doi: 10.1007/BF02547354. Google Scholar [21] C. Miao, J. Wu and Z. Zhang, Littlewood-Paley Theory and Applications to Fluid Dynamics Equations, Monographs on Modern pure mathematics, No. 142, Beijing: Science Press, 2012. Google Scholar W-C. Müller and D. Biskamp, Scaling properties of three-dimensional magnetohydrodynamic turbulence, Phys. Rev. Lett., 84 (2000), 475-478. doi: 10.1103/PhysRevLett.84.475. Google Scholar F. Otto and F. Ramos, Universal bounds for the Littlewood-Paley first-order moments of the 3D Navier-Stokes equations, Comm. Math. Phys., 300 (2010), 301-315. doi: 10.1007/s00220-010-1098-4. Google Scholar S. R. Spangler and C. R. Gwinn, Evidence for an inner scale to the density turbulence in the interstellar medium, Astrophys. J., 353 (1990), L29–L32. doi: 10.1086/185700. Google Scholar J. Wu, Regularity criteria for the generalized MHD equations, Comm. Partial Differential Equations, 33 (2008), 285-306. doi: 10.1080/03605300701382530. Google Scholar Zeqi Zhu, Caidi Zhao. Pullback attractor and invariant measures for the three-dimensional regularized MHD equations. Discrete & Continuous Dynamical Systems, 2018, 38 (3) : 1461-1477. doi: 10.3934/dcds.2018060 Radjesvarane Alexandre, Mouhamad Elsafadi. Littlewood-Paley theory and regularity issues in Boltzmann homogeneous equations II. Non cutoff case and non Maxwellian molecules. Discrete & Continuous Dynamical Systems, 2009, 24 (1) : 1-11. doi: 10.3934/dcds.2009.24.1 Xue-Li Song, Yan-Ren Hou. Attractors for the three-dimensional incompressible Navier-Stokes equations with damping. Discrete & Continuous Dynamical Systems, 2011, 31 (1) : 239-252. doi: 10.3934/dcds.2011.31.239 Cheng Wang. 
Convergence analysis of Fourier pseudo-spectral schemes for three-dimensional incompressible Navier-Stokes equations. Electronic Research Archive, 2021, 29 (5) : 2915-2944. doi: 10.3934/era.2021019 Hao Chen, Kaitai Li, Yuchuan Chu, Zhiqiang Chen, Yiren Yang. A dimension splitting and characteristic projection method for three-dimensional incompressible flow. Discrete & Continuous Dynamical Systems - B, 2019, 24 (1) : 127-147. doi: 10.3934/dcdsb.2018111 Weiping Yan. Existence of weak solutions to the three-dimensional density-dependent generalized incompressible magnetohydrodynamic flows. Discrete & Continuous Dynamical Systems, 2015, 35 (3) : 1359-1385. doi: 10.3934/dcds.2015.35.1359 Igor Kukavica, Vlad C. Vicol. The domain of analyticity of solutions to the three-dimensional Euler equations in a half space. Discrete & Continuous Dynamical Systems, 2011, 29 (1) : 285-303. doi: 10.3934/dcds.2011.29.285 Madalina Petcu, Roger Temam, Djoko Wirosoetisno. Averaging method applied to the three-dimensional primitive equations. Discrete & Continuous Dynamical Systems, 2016, 36 (10) : 5681-5707. doi: 10.3934/dcds.2016049 Nouressadat Touafek, Durhasan Turgut Tollu, Youssouf Akrour. On a general homogeneous three-dimensional system of difference equations. Electronic Research Archive, 2021, 29 (5) : 2841-2876. doi: 10.3934/era.2021017 Yu-Zhu Wang, Yin-Xia Wang. Local existence of strong solutions to the three dimensional compressible MHD equations with partial viscosity. Communications on Pure & Applied Analysis, 2013, 12 (2) : 851-866. doi: 10.3934/cpaa.2013.12.851 Tobias Breiten, Karl Kunisch. Feedback stabilization of the three-dimensional Navier-Stokes equations using generalized Lyapunov equations. Discrete & Continuous Dynamical Systems, 2020, 40 (7) : 4197-4229. doi: 10.3934/dcds.2020178 Mário Bessa, Jorge Rocha. Three-dimensional conservative star flows are Anosov. Discrete & Continuous Dynamical Systems, 2010, 26 (3) : 839-846. doi: 10.3934/dcds.2010.26.839 Juan Vicente Gutiérrez-Santacreu. Two scenarios on a potential smoothness breakdown for the three-dimensional Navier–Stokes equations. Discrete & Continuous Dynamical Systems, 2020, 40 (5) : 2593-2613. doi: 10.3934/dcds.2020142 Ciprian Foias, Ricardo Rosa, Roger Temam. Topological properties of the weak global attractor of the three-dimensional Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2010, 27 (4) : 1611-1631. doi: 10.3934/dcds.2010.27.1611 Cheng-Jie Liu, Ya-Guang Wang, Tong Yang. Global existence of weak solutions to the three-dimensional Prandtl equations with a special structure. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 2011-2029. doi: 10.3934/dcdss.2016082 Xin Zhong. A blow-up criterion for three-dimensional compressible magnetohydrodynamic equations with variable viscosity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3249-3264. doi: 10.3934/dcdsb.2018318 Baoquan Yuan, Xiao Li. Blow-up criteria of smooth solutions to the three-dimensional micropolar fluid equations in Besov space. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 2167-2179. doi: 10.3934/dcdss.2016090 Daniel Pardo, José Valero, Ángel Giménez. Global attractors for weak solutions of the three-dimensional Navier-Stokes equations with damping. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 3569-3590. doi: 10.3934/dcdsb.2018279 Michal Beneš. Mixed initial-boundary value problem for the three-dimensional Navier-Stokes equations in polyhedral domains. 
Conference Publications, 2011, 2011 (Special) : 135-144. doi: 10.3934/proc.2011.2011.135 Vu Manh Toi. Stability and stabilization for the three-dimensional Navier-Stokes-Voigt equations with unbounded variable delay. Evolution Equations & Control Theory, 2021, 10 (4) : 1007-1023. doi: 10.3934/eect.2020099
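To make the spectral cut-off in the Nie-Yuan abstract above concrete, here is a minimal sketch of a dyadic (Littlewood-Paley style) shell projection computed with the FFT on a periodic grid, together with the space-averaged L^p norm of the projected field. The synthetic random field, the sharp (rather than smooth) shell cut-off and the grid size are assumptions made only for this illustration, not the paper's construction.

# Sharp dyadic shell projection and its L^p norm on a periodic grid
# (a crude stand-in for the smooth Littlewood-Paley blocks \dot\Delta_m).
import numpy as np

def shell_projection(u, m):
    # Keep Fourier modes with 2^(m-1) <= |k| < 2^m and invert the FFT.
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmod = np.sqrt(kx**2 + ky**2 + kz**2)
    mask = (kmod >= 2 ** (m - 1)) & (kmod < 2 ** m)
    return np.real(np.fft.ifftn(np.fft.fftn(u) * mask))

def lp_moment(u, m, p=2):
    # Space-averaged L^p norm of the shell projection.
    return (np.mean(np.abs(shell_projection(u, m)) ** p)) ** (1.0 / p)

rng = np.random.default_rng(1)
u = rng.standard_normal((32, 32, 32))  # stand-in for one velocity component
for m in range(1, 5):
    print(f"m = {m}: ||Delta_m u||_2 ~ {lp_moment(u, m, p=2):.4f}")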
CommonCrawl
GO Mechanical GATE2017 ME-2: 51 Arjun asked in Numerical Methods Feb 27, 2017 recategorized Mar 5, 2021 by Lakshman Patel RJIT Maximise $Z=5x_{1}+3x_{2}$ subject to $\begin{array}{} x_{1}+2x_{2} \leq 10, \\ x_{1}-x_{2} \leq 8, \\ x_{1}, x_{2} \geq 0 \end{array}$ In the starting Simplex tableau, $x_{1}$ and $x_{2}$ are non-basic variables and the value of $Z$ is zero. The value of $Z$ in the next Simplex tableau is _______. gateme-2017-set2 numerical-answers numerical-methods linear-programming by ♦Arjun 27.4k points Arjun asked in Numerical Methods Feb 24, 2017 GATE2015-3-51 For the linear programming problem: $\begin{array}{ll} \text{Maximize} & Z = 3X_1 + 2X_2 \\ \text{Subject to} &−2X_1 + 3X_2 \leq 9\\ & X_1 − 5 X_2 \geq −20 \\ & X_1, X_2 \geq 0 \end{array}$ The above problem has unbounded solution infeasible solution alternative optimum solution degenerate solution by Arjun GATE Mechanical 2014 Set 3 | Question: 39 Consider an objective function $Z(x_1,x_2)=3x_1+9x_2$ and the constraints $x_1+x_2 \leq 8$ $x_1+2x_2 \leq 4$ $x_1 \geq 0$ , $x_2 \geq 0$ The maximum value of the objective function is _______ The problem of maximizing $z=x_1-x_2$ subject to constraints $x_1+x_2 \leq 10, \: x_1 \geq 0, x_2 \geq 0$ and $x_2 \leq 5$ has no solution one solution two solutions more than two solutions Maximize $Z = 15X_1 + 20X_2$ subject to $\begin{array}{l} 12X_1 + 4X_2 \geq 36 \\ 12X_1 − 6X_2 \leq 24 \\ X_1, X_2 \geq 0 \end{array}$ The above linear programming problem has infeasible solution unbounded solution alternative optimum solutions degenerate solution piyag476 asked in Numerical Methods Feb 19, 2017 GATE ME 2013 | Question: 36 A linear programming problem is shown below. $\begin{array}{ll} \text{Maximize} & 3x + 7y \\ \text{Subject to} & 3x + 7y \leq 10 \\ & 4x + 6y \leq 8 \\ & x, y \geq 0 \end{array}$ It has an unbounded objective function. exactly one optimal solution. exactly two optimal solutions. infinitely many optimal solutions. by piyag476 1.4k points gateme-2013
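A quick numerical cross-check of the first problem above can be done with scipy's LP solver. The sketch below only reports the final optimum of the LP (linprog minimizes, so the objective is negated; the "highs" method and explicit bounds are standard choices, not part of the question).

# Sketch: check the first LP above with scipy.
from scipy.optimize import linprog

c = [-5.0, -3.0]            # maximise 5*x1 + 3*x2  ->  minimise its negative
A_ub = [[1.0, 2.0],         # x1 + 2*x2 <= 10
        [1.0, -1.0]]        # x1 -   x2 <= 8
b_ub = [10.0, 8.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal x:", res.x)     # roughly [8.667, 0.667]
print("optimal Z:", -res.fun)  # roughly 45.33

Note that the question asks for the objective value in the next tableau, not the optimum: if the usual largest-coefficient rule brings $x_1$ into the basis, the ratio test gives $x_1 = \min(10/1, 8/1) = 8$, so $Z$ in the next tableau would be $5 \times 8 = 40$, while the solver above only reports the final optimum.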
CommonCrawl
On the transmission dynamics of Buruli ulcer in Ghana: Insights through a mathematical model Farai Nyabadza1 & Ebenezer Bonyah2 Mycobacterium ulcerans is known to cause the Buruli ulcer. The association between the ulcer and environmental exposure has been documented. However, the epidemiology of the ulcer is not well understood. A hypothesised transmission involves humans being bitten by the water bugs that prey on mollusks, snails and young fishes. In this paper, a model for the transmission of Mycobacterium ulcerans to humans in the presence of a preventive strategy is proposed and analysed. The model equilibria are determined and conditions for the existence of the equilibria established. The model analysis is carried out in terms of the reproduction number \(\mathcal{R}_0\). The disease free equilibrium is found to be locally asymptotically stable for \(\mathcal{R}_0<1.\) The model is fitted to data from Ghana. The model is found to exhibit a backward bifurcation and the endemic equilibrium point is globally stable when \(\mathcal{R}_0>1.\) Sensitivity analysis showed that the Buruli ulcer epidemic is highly influenced by the shedding and clearance rates of Mycobacterium ulcerans in the environment. The model is found to fit reasonably well to data from Ghana and projections on the future of the Buruli ulcer epidemic are also made. The model reasonably fitted data from Ghana. The fitting process showed data that appeared to have reached a steady state and projections showed that the epidemic levels will remain the same for the projected time. The implications of the results for policy and future management of the disease are discussed. Buruli ulcer is caused by a pathogenic bacterium; infection often leads to extensive destruction of skin and soft tissue through the formation of large ulcers, usually on the legs or arms [28]. It is a devastating disease caused by Mycobacterium ulcerans. The ulcer is fast becoming a debilitating affliction in many countries [3]. It is named after a region called Buruli, near the Nile River in Uganda, where in 1961 the first large number of cases was reported. In Africa, close to 30,000 cases were reported between 2005 and 2010 [29]. Cote d'Ivoire, with the highest incidence, reported 2533 cases in 2010 [27]. This disease has dramatically emerged in several West African countries, such as Ghana, Cote d'Ivoire, Benin, and Togo in recent years [26]. The transmission mode of the ulcer is not well understood; however, residence near an aquatic environment has been identified as a risk factor for the ulcer in Africa [6, 16, 25]. Transmission is thus likely to occur through contact with the environment [20]. Recent studies in West Africa have implicated aquatic bugs as transmission vectors for the ulcer [18, 24]. An attractive hypothesis for a possible mode of transmission to humans was proposed by Portaels et al. [22]: water-filtering hosts (fish, mollusks) concentrate the Mycobacterium ulcerans bacteria present in water or mud and discharge them again to this environment, where they are then ingested by aquatic predators such as beetles and water bugs. These insects, in turn, may transmit the disease to humans by biting [18]. Person-to-person transmission is less likely. Aquatic bugs are insects found throughout temperate and tropical environments with abundant freshwater.
They prey, according to their size, on mollusks, snails, young fishes, and the adults and larvae of other insects that they capture with their raptorial front legs and bite with their rostrum. These insects can inflict painful bites on humans as well. In Ghana, where Buruli ulcer is endemic, the water bugs are present in swamps and rivers, where human activities such as farming, fishing, and bathing take place [18]. Research on Buruli ulcer has focused mainly on the socio-cultural aspects of the disease. The research recommends the need for Information, Education and Communication (IEC) intervention strategies to encourage early case detection and treatment, with the assumption that once people gain knowledge they will take the appropriate action to access treatment early [2]. IEC is defined as an approach which attempts to change or reinforce a set of behaviours in a targeted group regarding a problem. The IEC strategy is preventive in that it has the potential to enhance control of the ulcer [5]. It is also important to note that Buruli ulcer is treatable with antibiotics. A combination of rifampin and streptomycin administered daily for 8 weeks has the potential to eliminate Mycobacterium ulcerans bacilli and promote healing without relapse. Mathematical models have been used to describe the transmission of many diseases globally. Many advances in the management of diseases have been born from mathematical modeling [11, 12, 14, 15]. Mathematical models can evaluate actual or potential control measures in the absence of experiments, see for instance [19]. To the best of our knowledge, very few mathematical models have been formulated to analyse the transmission dynamics of Mycobacterium ulcerans. This could be largely due to the elusive epidemiology of the Buruli ulcer. Aidoo and Osei [3] proposed a mathematical model of the SIR-type in an endeavour to explain the transmission of Mycobacterium ulcerans and its dependence on arsenic. In this paper, we propose a model which takes into account the human population, water bugs as vectors and fish as potential reservoirs of Mycobacterium ulcerans, following the transmission dynamics described in [8]. In addition, we include the preventive control measures in a bid to capture the IEC strategy. Our main aim is to study the dynamics of the Buruli ulcer in the presence of a preventive control strategy, while emphasizing the role of the vector (water bugs) and fish and their interaction with the environment. The model is then validated using data from Ghana. This is crucial in informing policy and suggesting strategies for the control of the disease. This paper is arranged as follows: in "Methods", we formulate and establish the basic properties of the model. We also determine the steady states and analyse their stability. The results of this paper are given in "Results". Parameter estimation, sensitivity analysis and the numerical results on the behavior of the model are also presented in this section. The paper is concluded in "Discussion". Model formulation We consider a constant human population \(N_H(t),\) the vector population of water bugs \(N_V(t)\) and the fish population \(N_F(t)\) at any time t. The total human population is divided into three epidemiological subclasses of those that are susceptible \(S_H(t),\) the infected \(I_H(t)\) and the recovered who are still immune \(R_H(t)\).
The total vector (water bug) population at any time t is divided into two subclasses: susceptible water bugs \(S_V(t)\) and those that are infectious and can transmit the Buruli ulcer to humans, \(I_V(t).\) The total population reservoir of small fish is also divided into two compartments of susceptible fish \(S_F(t)\) and infected fish \(I_F(t).\) We also consider the role of the environment by introducing a compartment U, representing the density of Mycobacterium ulcerans in the environment. We make the following basic assumptions: Mycobacterium ulcerans are transferred only from the vector (water bug) to humans. There is homogeneity of human, water bug and fish populations' interactions. Infected humans recover and are temporarily immune, but lose immunity. Fish are preyed on by the water bugs. Unlike some bacterial infections such as leprosy (caused by Mycobacterium leprae) and tuberculosis (caused by Mycobacterium tuberculosis), which are characterized by person-to-person contact transmission, it is hypothesized that Mycobacterium ulcerans is acquired through environmental contact and direct person-to-person transmission is rare [20]. A susceptible host (human) can be infected through the bite of an infectious vector (water bug). We represent the effective biting rate of an infectious vector on a susceptible host as \(\beta _H\), and the incidence of new infections transmitted by water bugs is expressed by the standard incidence rate \( \displaystyle \beta _H \frac{S_H I_V}{N_H}.\) One can interpret \(\beta _H\) as a function of the biting frequency of the infected water bugs on humans, the density of infectious water bugs per human, the probability that a bite will result in an infection and the efficacy of the IEC strategy. In particular we can set \(\beta _H=(1-\epsilon )\tau \alpha \beta _H^*,\) where \(\epsilon \in (0,1)\) is the efficacy of the IEC strategy, \(\tau \) the number of water bugs per human host, \(\alpha \) the biting frequency (the biting rate of humans by a single water bug) and \(\beta _H^*\) the probability that a bite by an infected vector to a susceptible human will produce an infection. Susceptible water bugs are infected at a rate \(\displaystyle \beta _V \frac{S_V I_F}{N_V}\) through predation of infected fish and at a rate \(\displaystyle \eta _v\beta _V \frac{S_V U}{K}\) representing other sources in the environment. Here \(\eta _V\) differentiates the infectivity potential of the fish from that of the environment. Assuming fish prey on infected water bugs, susceptible fish are infected at a rate \(\displaystyle \beta _F\frac{S_F I_V}{N_F}\) through predation of infected water bugs and at a rate \(\displaystyle \eta _F\beta _F \frac{S_F U}{K}\) representing infection through the environment. Here \(\eta _F\) is a modification parameter that models the relative infectivity of fish from that of the environment. The vector population and the fish populations are assumed to be constant. The growth functions are respectively given by \(g(N_V)\) and \(g(N_F),\) where $$\begin{aligned} g(N_V)=\mu _VN_V~~\mathrm{and}~~g(N_F)=\mu _FN_F. \end{aligned}$$ It is important to note that other types of functions can be chosen as growth functions. In this work, however, we assume that the growth functions are linear. There is a proposed hypothesis that environmental mycobacteria in the bottoms of swamps may be mechanically concentrated by small water-filtering organisms such as microphagous fish, snails, mosquito larvae, small crustaceans, and protozoa [8].
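As a small numerical illustration of how the IEC intervention enters the force of infection through \(\beta _H=(1-\epsilon )\tau \alpha \beta _H^*\), the sketch below evaluates \(\beta _H\) for a few efficacy levels. All numerical values are made-up placeholders, not the parameter values later fitted to the Ghana data.

# Effect of the IEC efficacy epsilon on the effective human infection rate
# beta_H = (1 - epsilon) * tau * alpha * beta_H_star (all values illustrative).
tau = 5.0           # assumed water bugs per human host
alpha = 0.05        # assumed bites per water bug per unit time
beta_H_star = 0.1   # assumed probability that an infectious bite infects
for epsilon in (0.0, 0.3, 0.6, 0.9):
    beta_H = (1.0 - epsilon) * tau * alpha * beta_H_star
    print(f"epsilon = {epsilon:.1f} -> beta_H = {beta_H:.4f}")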
We assume that fish increase the environmental concentrations of Mycobacterium ulcerans at a rate \(\sigma _F.\) Humans are assumed not to shed any bacteria into the environment. Aquatic bugs release bacteria into the environment at a rate \(\sigma _V.\) The model does not include a potential route of direct contact with the bacterium in water. The birth rate of the human population is directly proportional to the size of the human population. The recovery of infected individuals is assumed to occur both spontaneously and through treatment. Research has shown that localized lesions may spontaneously heal but, without treatment, most cases of Buruli ulcer result in physical deformities that often lead to physiological abnormalities and stigmas [4]. We now briefly describe the transmission dynamics of Buruli ulcer: New susceptibles enter the population at a rate of \(\mu _H N_H.\) Buruli ulcer sufferers do not recover with permanent immunity; they lose immunity at a rate \(\theta \) and become susceptible again. Susceptibles are infected through interaction with infected water bugs, with infection driven by water bugs biting susceptible humans. Once infected, individuals are allowed to recover either spontaneously or through antibiotic treatment at a rate \(\gamma .\) In this model, the human population is assumed to be constant over the modeling time with the birth and death rates being equal. The compartment \(S_V\) tracks the changes in the susceptible water bug population, which is recruited at a rate \(\mu _V N_V\). The infection of water bugs is driven by two processes: their interaction with infected fish and with the environment. The natural mortality of the water bugs occurs at a rate \(\mu _V.\) Similarly, the compartment \(S_F\) tracks the changes in the susceptible fish population, which is recruited at a rate \(\mu _FN_F\). The infection of fish is also driven by two processes: their interaction with infected water bugs and with the environment. Fish's natural mortality rate is \(\mu _F.\) The growth of Mycobacterium ulcerans in the environment is driven by their shedding by infected water bugs and fish into the environment. They are assumed to die naturally at a rate \(\mu _E.\) The possible interrelations between humans, the water bug and fish are represented by the schematic diagram below (Fig. 1). Fig. 1 Proposed transmission dynamics of the Buruli ulcer among humans, fish, water bugs and the environment (U) The descriptions of the parameters that describe the flow rates between compartments are given in Table 1. Table 1 Description of parameters used in the model The dynamics of the ulcer can be described by the following set of nonlinear differential equations: $$\begin{aligned} \left.
\begin{array}{lcl} \displaystyle \frac{dS_H}{dt}&{}= &{} \displaystyle \mu _HN_H +\theta R_H - \beta _H\frac{S_HI_V}{N_H}-{\mu _H}S_H,\\ \displaystyle \frac{dI_H}{dt}&{} = &{} \displaystyle \beta _H\frac{S_HI_V}{N_H} - ({\mu _H} +\gamma )I_H,\\ \displaystyle \frac{dR_H}{dt}&{} =&{} \displaystyle \gamma I_H-(\mu _H+\theta )R_H,\\ \displaystyle \frac{dS_V}{dt}&{} = &{}\displaystyle \mu _VN_V -\beta _V\frac{S_VI_F}{N_V}-\eta _V\beta _V\frac{S_VU}{K}- {\mu _V}S_V,\\ \displaystyle \frac{dI_V}{dt}&{} = &{} \displaystyle \beta _V\frac{S_VI_F}{N_V}+\eta _V\beta _V\frac{S_VU}{K}- {\mu _V}I_V,\\ \displaystyle \frac{dS_F}{dt}&{} = &{} \displaystyle \mu _FN_F-\beta _F\frac{S_FI_V}{N_F} -\eta _F\beta _F\frac{S_FU}{K}- {\mu _F}S_F,\\ \displaystyle \frac{dI_F}{dt}&{} = &{} \displaystyle \beta _F\frac{S_FI_V}{N_F}+\eta _F\beta _F\frac{S_FU}{K}- {\mu _F}I_F,\\ \displaystyle \frac{dU}{dt}&{} = &{} \displaystyle \sigma _FI_F+\sigma _VI_V- {\mu _E}U. \end{array} \right\} \end{aligned}$$ We assume that all the model parameters are positive and the initial conditions of the model system (1) are given by $$\begin{aligned} S_H(0)&= {} S_{H0} > 0, I_H(0) = I_{H0}\ge 0, R_H(0)= R_{H0}= 0,~S_V(0) = S_{V0} > 0,\\ I_V(0)&= {} I_{V0}\ge 0,~S_F(0) = S_{F0} > 0, ~I_F(0) = I_{F0}\ge 0 \quad \text {and}\quad U(0)=U_0>0. \end{aligned}$$ We arbitrarily scale the time t by the quantity \({1 \over {\mu _V }}\) by letting \(\tau = \mu _Vt\) and introduce the following dimensionless parameters: $$\begin{aligned} \tau&= {} \mu _Vt,~ \beta _h=\frac{\beta _H}{\mu _V},~\mu _h=\frac{\mu _H}{\mu _V},~ \theta _h=\frac{\theta }{\mu _V}, ~\gamma _h=\frac{\gamma }{\mu _V}, ~m_1=\frac{N_H}{N_V},~m_2=\frac{N_F}{N_V},\\ m_3&= {} \frac{1}{m_2},~m_4=\frac{N_F}{K},~m_5=\frac{N_V}{K},~ \mu _f=\frac{\mu _F}{\mu _V},~\beta _f=\frac{\beta _F}{\mu _V},\\ \sigma _f&= {} \frac{\sigma _F}{\mu _V},~\sigma _v=\frac{\sigma _V}{\mu _V},~\beta _v=\frac{\beta _V}{\mu _V} \;\mathrm{and}\;\mu _e=\frac{\mu _E}{\mu _V}. \end{aligned}$$ So, system (1) can be non-dimensionalised by setting $$\begin{aligned} s_h=\frac{S_H}{N_H},~i_h=\frac{I_H}{N_H},~r_h=\frac{R_H}{N_H},~i_v=\frac{I_V}{N_V},~s_f=\frac{S_F}{N_F},~i_f=\frac{I_F}{N_F}\;\mathrm{and}\;\displaystyle u=\frac{U}{K}. \end{aligned}$$ The forces of infection for humans, water bugs and fish are respectively $$\begin{aligned} \lambda _H=\beta _h m_1i_v,~~\lambda _V=\beta _v m_2i_f+\eta _V\beta _v u,~~\lambda _F=\beta _f m_3i_v+\eta _F\beta _f u. \end{aligned}$$ Given that the total number of bites made by the water bugs must equal the number of bites received by the humans, \(m_1\) is a constant, see [9]. Similarly \(m_2\) is constant and so is \(m_3.\) We also note that since \(N_F\) and \(N_V\) are constants, \(m_4\) and \(m_5\) are constants. Given that \(\displaystyle s_h+i_h+r_h=1,~s_v+i_v=1,~s_f+i_f=1\) and \(\displaystyle 0\le u\le 1,\) system (1) can be reduced to the following system of equations by conveniently maintaining the capitalised subscripts so that we can still respectively write \(\displaystyle s_h,~i_h,~i_v,~i_f\) and \(\displaystyle u\) as \(\displaystyle S_H,~I_H,~I_V,~I_F\) and \(\displaystyle U.\) $$\begin{aligned} \left.
\begin{array}{lcl} \displaystyle \frac{dS_H}{d\tau }&{}= &{} \displaystyle (\mu _h +\theta _h)(1- S_H) -\theta _h I_H - \lambda _H S_H,\\ \\ \displaystyle \frac{dI_H}{d\tau }&{} = &{} \displaystyle \lambda _HS_H - ({\mu _h} +\gamma _h)I_H,\\ \\ \displaystyle \frac{dI_V}{d\tau }&{} = &{} \displaystyle \lambda _V(1-I_V)-\mu _vI_V,\\ \\ \displaystyle \frac{dI_F}{d\tau }&{} = &{} \displaystyle \lambda _F(1-I_F)- {\mu _f}I_F,\\ \\ \displaystyle \frac{dU}{d\tau }&{} = &{} \displaystyle m_4\sigma _f I_F+m_5\sigma _vI_V- {\mu _e}U. \end{array} \right\} \end{aligned}$$ Feasible region Note that \(\displaystyle \frac{dU}{d\tau }=m_4\sigma _f I_F+m_5\sigma _vI_V- {\mu _e}U\le m_4\sigma _f +\,m_5\sigma _v-\mu _eU.\) Through integration we obtain \(\displaystyle U\le \frac{m_4\sigma _f +m_5\sigma _v}{\mu _e}.\) The feasible region (the region where the model makes biological sense) for the system (2) is in \(\mathbb {R}^5_+\) and is represented by the set $$\begin{aligned} \Omega= & {} \left\{ (S_H,I_H,I_V,I_F,U)\in \mathbb {R}^5_+|0\le S_H+I_H\le 1,0\le I_V\le 1, 0\le I_F\le 1,\right. \\&~~~\left. 0\le U\le \frac{m_4\sigma _f +m_5\sigma _v}{\mu _e}\right\} , \end{aligned}$$ where the basic properties of local existence, uniqueness and continuity of solutions are valid for the Lipschitzian system (2). The populations described in this model are assumed to be constant over the modelling time. The solutions of system (2) starting in \(\displaystyle \Omega \) remain in \(\displaystyle \Omega \) for all \(t>0.\) Thus , \(\displaystyle \Omega \) is positively invariant and it is sufficient to consider solutions in \(\displaystyle \Omega .\) Positivity of solutions We desire to show that for any non-negative initial conditions of system (2), say \(\displaystyle (S_{H0},I_{H0},I_{V0},I_{F0},U_0),\) the solutions remain non-negative for all \(\displaystyle \tau \in [0,\infty ).\) We prove that all the state variables remain non-negative and the solutions of the system (2) with positive initial conditions will remain positive for all \(\tau > 0\). We thus state the following lemma. Lemma 1 Given that the initial conditions of system (2) are positive, the solutions \(S_H(\tau ),~I_H(\tau ),~I_V(\tau ),~I_F(\tau )\) and \(U(\tau )\) are non-negative for all \(\tau >0\). Assume that $$\begin{aligned} \hat{\tau } = \sup \left\{ \tau >0: S_H>0, I_H>0, I_V>0, I_F>0, U >0\right\} \in ( 0, \tau ]. \end{aligned}$$ Thus \(\hat{\tau } > 0,\) and it follows directly from the first equation of the system (2) that $$\begin{aligned} \frac{dS_H}{d\tau } \ge - (\theta _h + \lambda _H)S_H. \end{aligned}$$ We thus have $$\begin{aligned} \frac{dS_H}{dt}\ge S_{H0}\exp \left[ - \theta _h t+ \int _0^\tau \lambda _H(\varsigma )d\varsigma \right] . \end{aligned}$$ Since the exponential function is always positive and \(S_{H0}=S_H(0)>0,\) the solution \(S_H(\tau )\) will thus be always positive. From the second equation of (2), $$\begin{aligned} \frac{dI_H}{d\tau }&\ge -(\mu _h+\gamma _h)I_H,\\ \Rightarrow I_H&\ge I_{H0}e^{-(\mu _h +\gamma _h)\tau }>0. \end{aligned}$$ Similarly, it can be shown that \(I_V(\tau ) > 0,~I_F(\tau ) > 0\) and \(U(\tau ) > 0\) for all \( \tau > 0 ,\) and this completes the proof. \(\square \) Steady states analysis The disease free equilibrium In this section, we solve for the equilibrium points by setting the right hand side of system (2) to zero. This direct calculation shows that system (2) always has a disease free equilibrium point $$\begin{aligned} \mathbf{\mathcal {E}_0}=(1,0,0,0,0). 
\end{aligned}$$ We have the following result on the local stability of the disease free equilibrium. Theorem 1 The disease free equilibrium \(\mathbf{\mathcal {E}_0}\) whenever it exists, is locally asymptotically stable if \(\mathcal{R}_0 <1\) and unstable otherwise. The Jacobian matrix of system (2) at the equilibrium point \(\mathbf{\mathcal {E}_0}\) is given by $$\begin{aligned} J_{\mathbf{\mathcal {E}_0}}&= \left( \begin{array}{ccccc} -(\mu _h+\theta _h) &{}-\theta _h &{}-m_1\beta _h&{} 0&{}0 \\ 0&{} -(\mu _h+\gamma _h) &{}m_1\beta _h&{}0&{}0 \\ 0&{} 0 &{}-1&{}m_2\beta _v&{}\eta _v\beta _v\\ 0&{} 0 &{}m_3\beta _f&{}-\mu _f&{}\eta _f\beta _f\\ 0&{} 0 &{}m_5\sigma _v&{}m_4\sigma _f&{}-\mu _e \end{array} \right) . \end{aligned}$$ It can be seen that the eigenvalues of \(\displaystyle J_{\mathbf{\mathcal {E}_0}}\) are \( -(\mu _h+\theta _h),~ -(\mu _h+\gamma _h)\) and the solution of the characteristic polynomial $$\begin{aligned} P(\vartheta )=\vartheta ^3+a_2\vartheta ^2 + a_1\vartheta +\mu _e\mu _f(1-\mathcal{R}_0)=0, \end{aligned}$$ $$\begin{aligned} a_2&= {} 1+\mu _e+\mu _f,\\ a_1&= {} \mu _e+\mu _f+\mu _e\mu _f-(\beta _f\beta _v+m_4\eta _f\sigma _f\beta _f+m_5\eta _v\sigma _v\beta _v)~~\mathrm{and}\\ \mathcal{R}_0&= {} R_0^1+R_0^2+R_0^3, \end{aligned}$$ $$\begin{aligned} R_0^1=\frac{m_4\eta _f\sigma _f\beta _f}{\mu _e\mu _f},~R_0^2=\frac{m_5\eta _v\sigma _v\beta _v}{\mu _e} \quad \mathrm{and} \quad R_0^3=\beta _f\beta _v\left( \frac{\mu _e+m_3m_4\eta _v\sigma _f+m_2m_5\eta _f\sigma _v}{\mu _e\mu _f}\right) . \end{aligned}$$ The solutions of \(P(\vartheta )=0\) have negative real parts only if \(\displaystyle \mathcal{R}_0<1\) following the use of the Routh Hurwitz Criterion. We can thus conclude that the disease free equilibrium is locally asymptotically stable whenever \(\displaystyle \mathcal{R}_0<1.\) \(\square \) We note that \(\displaystyle \mathcal{R}_0\) is the model system (2)'s reproduction number and does not depend on the human population size. The model reproduction number is a sum of three terms. The terms \(R_0^1\) and \(R_0^2\) represent the contribution of fish and water bugs respectively to the infection dynamics. The term \(R_0^3,\) which is not very common in many epidemiological models, shows the combined contribution of the water bugs, fish and their shedding of Mycobacterium ulcerans into the environment. So, the infection is driven by the water bugs, fish and the density of the bacterium in the environment. The model reproduction number increases linearly with the shedding rates of the Mycobacterium ulcerans into the environment by fish and water bugs and the effective contact rates \(\beta _f\) and \(\beta _v\). It decreases with increasing removal rates of the fish and Mycobacterium ulcerans. So the control of the ulcer depends largely on environmental management. The endemic equilibrium The endemic equilibrium is much more tedious to obtain. 
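Before working through the algebra of the endemic equilibrium, note that the reproduction number derived above is straightforward to evaluate numerically. The sketch below codes \(\mathcal{R}_0=R_0^1+R_0^2+R_0^3\) exactly as written in the proof of Theorem 1; the dimensionless parameter values passed in are illustrative placeholders only, not the values fitted to the Ghana data.

# Reproduction number R0 = R0^1 + R0^2 + R0^3 from the expressions above
# (the sample parameter values are illustrative, not the fitted Ghana values).
def reproduction_number(beta_f, beta_v, eta_f, eta_v, sigma_f, sigma_v,
                        mu_f, mu_e, m2, m3, m4, m5):
    r1 = m4 * eta_f * sigma_f * beta_f / (mu_e * mu_f)
    r2 = m5 * eta_v * sigma_v * beta_v / mu_e
    r3 = beta_f * beta_v * (mu_e + m3 * m4 * eta_v * sigma_f
                            + m2 * m5 * eta_f * sigma_v) / (mu_e * mu_f)
    return r1 + r2 + r3

R0 = reproduction_number(beta_f=0.08, beta_v=0.06, eta_f=0.4, eta_v=0.3,
                         sigma_f=0.05, sigma_v=0.05, mu_f=0.5, mu_e=0.8,
                         m2=1.2, m3=1.0 / 1.2, m4=0.6, m5=0.5)
print("R0 =", round(R0, 4), "->", "endemic" if R0 > 1 else "below threshold")

The linear dependence of \(\mathcal{R}_0\) on the shedding rates \(\sigma _f\) and \(\sigma _v\) noted above can be checked directly by varying those arguments.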
Given that \(\displaystyle \lambda ^*_H=\beta _hm_1I_V^*,\) from the first and second equations of system (2) we have $$\begin{aligned} S_H^*=\frac{1}{1+\mathcal{A}I_V^*} \quad \mathrm{and}\quad I_H^*=\frac{m_1\beta _hI_V^*}{(\mu _h+\gamma _h)(1+\mathcal{A}I_V^*)}, \end{aligned}$$ where \(\displaystyle \mathcal{A}=\frac{m_1\beta _h(\mu _h+\theta _h+\gamma _h)}{(\mu _h+\gamma _h)(\mu _h+\theta _h)}.\) The last equation of system (2) can be written as $$\begin{aligned} U^*=\vartheta _1I_F^*+\vartheta _2I_V^*, \quad \mathrm{where}~\vartheta _1=\frac{m_4\sigma _f}{\mu _e}~ \mathrm{and}~\vartheta _2=\frac{m_5\sigma _v}{\mu _e}. \end{aligned}$$ $$\begin{aligned} \lambda ^*_F=\vartheta _3I_V^*+\vartheta _4I_F^ \mathrm{and}~\lambda ^*_V=\vartheta _5I_V^*+\vartheta _6I_F^*, \end{aligned}$$ where \(\displaystyle \vartheta _3=\beta _f(m_3+\vartheta _2\eta _f),~\vartheta _4=\vartheta _1\beta _f\eta _f,~\vartheta _5=\vartheta _2\eta _v\beta _v ~\mathrm{and}~\vartheta _6=\beta _v(m_2+\vartheta _1\eta _v).\) From the third and fourth equations of system (2)we have $$\begin{aligned} I_F^*=\, & {} \frac{I_V^*[1-\vartheta _5(1-I_V^*)]}{\vartheta _6(1-I_V^*)},\end{aligned}$$ $$\begin{aligned} I_V^*=\, & {} \frac{I_F^*[\mu _f-\vartheta _4(1-I_F^*)]}{\vartheta _3(1-I_F^*)}. \end{aligned}$$ Substituting (3) into (4) we obtain \(\displaystyle I_V^*=0\) and the cubic equation $$\begin{aligned} f(I_V^*)=a_3{I_V^*}^3+a_2{I_V^*}^2+a_1I_V^*+a_0=0, \end{aligned}$$ $$\begin{aligned} a_0&= {} \frac{\beta _f\mu _f}{\mu _e}\left( \mu _em_2+m_4\eta _v\sigma _f\right) \left[ \mathcal{R}_0-1\right] ,\\ a_1&= {} \vartheta _4\vartheta _5(1+\vartheta _6)+\vartheta _5(\vartheta _4+\vartheta _3\vartheta _6)+\vartheta _3\vartheta _5\vartheta _6-[\vartheta _3\vartheta _6(1+\vartheta _6)+ \vartheta _5(\vartheta _4\vartheta _5+\mu _f\vartheta _6)+\vartheta _4\vartheta _5^2],\\ a_2&= {} (1+\vartheta _6)(\vartheta _4+\vartheta _3\vartheta _6)+\vartheta _5(\vartheta _4\vartheta _5+\mu _f\vartheta _6)+\vartheta _6(\vartheta _3\vartheta _6+\mu _f\vartheta _5)-[ 2\vartheta _4\vartheta _5(1+\vartheta _6)+\vartheta _6(\vartheta _3\vartheta _5+\mu _f)],\\ a_3&= {} -\frac{m_5\beta _f\eta _v\sigma _v\beta _v^2}{\mu _e^2}\left( (\mu _em_2+m_4\eta _v\sigma _f)m_3+m_2m_5\eta _f\sigma _v\right) <0. \end{aligned}$$ $$\begin{aligned} a_0 \left\{ \begin{array}{ll}> 0\quad \mathrm{if}\quad \mathcal{R}_0>1\\ <0\quad \mathrm{if}\quad \mathcal{R}_0<1. \end{array}\right. \end{aligned}$$ $$\begin{aligned} f'(I_V^*) = 3a_3(I_V^*)^2 + 2a_2\lambda _1^* + a_1 , \end{aligned}$$ the turning points of equation (5) are given by $$\begin{aligned} (I_V^*)^{1,2} = \dfrac{-a_2 \pm \sqrt{a_2^2 - 3 a_1a_3}}{3a_3}. \end{aligned}$$ The discriminant of solutions (7) is \(\triangle = a_2^2 - 3 a_1a_3\). We now focus on the sign of the discriminant. If \(\triangle <0\), then \(f(I_V^*)\) has no real turning points, which implies that \(f(I_V^*)\) is a strictly monotonic function. The sign of \(f'(\lambda _1^*)\) is crucial in determining the monotonicity. Through completing the square, equation (6) can be written as $$\begin{aligned} f'(I_V^*) = 3a_3 \left[ \left( {I_V^*} + \dfrac{a_2}{3a_3} \right) ^2 + \dfrac{1}{9a_3^2}(3 a_1a_3-a_2^2) \right] . \end{aligned}$$ Clearly if \(\triangle <0\), then \(3 a_1a_3-a_2^2>0\). Since \(a_3<0\), then \(f'(I_V^*)<0\). Thus \(f(I_V^*)\) is a strictly monotone decreasing function. Note that \(\lim _{I_V^*\rightarrow \mp \infty } f(I_V^*)=\pm \infty \). 
For \(f(0) = a_0<0,\) the polynomial \(f(I_V^*)\) has no positive real roots for \(\mathcal{R}_0<1\). However, if \(f(0) = a_0>0\) it has only one positive real root for \(\mathcal{R}_0>1,\) and consequently only one endemic equilibrium. If \(\triangle =0\), then \(f'(I_V^*)\) has only one real root with multiplicity two. This implies that \((I_V^*)^1=(I_V^*)^2 = -\frac{a_2}{3a_3}\) and that \(f'(I_V^*)<0\). Thus the polynomial \(f(I_V^*)\) is a decreasing function. Given that \(f''\left( -\frac{a_2}{3a_3}\right) = 0,\) the turning point is a point of inflexion for \(f(I_V^*).\) The polynomial \(f(I_V^*)\) has only one endemic equilibrium. For \(\triangle >0\), we consider two cases: \(a_1<0\) and \(a_1>0\). If \(a_1<0\), then \(a_1a_3>0\). This means that \(\sqrt{\triangle }<a_2\). Irrespective of the sign of \(a_2\), \(f'(I_V^*)\) has two real positive and distinct roots. This implies that (5) has two positive turning points. If \(f(0) = a_0>0\), i.e., \(\mathcal {R}_0>1\), then \(f(I_V^*)\) has at least one positive real root, and hence at least one endemic equilibrium. On the other hand, if \(f(0) = a_0<0\) then \(f(I_V^*)\) has at most two positive real roots when \(\mathcal {R}_0<1\), and hence at most two endemic equilibria. If \(a_1>0\), then \(a_1a_3<0\), which implies that \(\sqrt{\triangle }>a_2\). For \(a_2>0\), \(f'(I_V^*)\) has two real roots of opposite signs. Since \(f(0) = a_0>0\) for \(\mathcal{R}_0>1\), \(f(I_V^*)\) has one positive root. For \(a_2<0\), \(f'(I_V^*)\) has two negative real roots. Since \(f(0) = a_0<0\) for \(\mathcal{R}_0<1\), \(f(I_V^*)\) has no positive real roots, and consequently no endemic equilibria. Furthermore, we can use Descartes' rule of signs [7] to explore the existence of endemic equilibrium (or equilibria) for \(\mathcal{R}_0<1\). We note the possible existence of backward bifurcation. The theorem below summarises the existence of endemic equilibria of the model system (2): the model system (2) (i) has a unique endemic equilibrium point if \(\mathcal{R}_0>1\), and (ii) has two endemic equilibria for \(\mathcal{R}_0^c<\mathcal{R}_0<1\), where \(\mathcal{R}_0^c\) is the critical threshold below which no endemic equilibrium exists.

Remark The evaluation of \(\mathcal{R}_0^c\) depends on the signs of \(a_2\) and \(a_1\) and the sign of the discriminant. The computations are algebraically involved and long, and are not included here. Since the model system (2) possesses two endemic equilibria when \(\mathcal{R}_0^c<\mathcal{R}_0<1\), the model exhibits backward bifurcation for \(\mathcal{R}_0<1\). The consequence of the above remark is that bringing \(\mathcal{R}_0\) below unity is not sufficient to eradicate the disease. For eradication, \(\mathcal{R}_0\) must be brought below the critical value \(\mathcal{R}_0^c\).

Global stability of the endemic equilibrium

The endemic equilibrium point \(\mathbf{\mathcal {E}_1}\) of system (2) is globally asymptotically stable. The global stability of the endemic equilibrium can be determined by constructing a Lyapunov function \(\mathcal{V}(t)\) such that $$\begin{aligned} \mathcal{V}(t)&= S_H -S_{H}^{*}-S_{H}^{*}\ln \frac{S_H}{S_{H}^{*}} +A\left( I_H -I_{H}^{*}-I_{H}^{*}\ln \frac{I_H}{I_{H}^{*}}\right) + B\left( I_V -I_{V}^{*}-I_{V}^{*}\ln \frac{I_V}{I_{V}^{*}}\right) \nonumber \\&\quad +C\left( I_{F} -I_{F}^{*}-I_{F}^{*}\ln \frac{I_{F}}{I_{F}^{*}}\right) + D\left( U -U^{*}-U^{*}\ln \frac{U}{U^{*}}\right) .
\end{aligned}$$ The corresponding time derivative of \(\mathcal{V}(t)\) is given by $$\begin{aligned} \dot{\mathcal{V}}&= \left( 1 - \frac{S_{H}^{*}}{S_{H}}\right) \dot{S}_{H} + A\left( 1 - \frac{I_{H}^{*}}{I_{H}}\right) \dot{I}_{H} + B\left( 1 - \frac{I_{V}^{*}}{I_{V}}\right) \dot{I}_{V} \nonumber \\&\quad + C\left( 1 - \frac{I_{F}^{*}}{I_{F}}\right) \dot{I}_{F}+D\left( 1 - \frac{U^{*}}{U}\right) \dot{U}. \end{aligned}$$ At the endemic equilibrium, we have the following relations $$\begin{aligned} \begin{array}{rcl} \mu _h+\theta _h&{}=&{} (\mu _h+\theta _h)S^{*}_{H} +\theta _h{I^*}_H+ m_1\beta _hS^{*}_{H}I^{*}_{V},\\ \mu _h+\gamma _h &{}=&{}m_1\beta _h\frac{S^{*}_{H}I^{*}_{V}}{{I^*}_H},\\ 1&{} =&{} m_2\beta _v\left( 1-I^{*}_{V}\right) \frac{I^{*}_{F}}{{I^*}_V}+ \eta _v\beta _v\left( 1-I^{*}_{V}\right) \frac{U^*}{{I^*}_V},\\ \mu _f&{} =&{} m_3\beta _f\left( 1-I^{*}_{F}\right) \frac{{I^*}_V}{I^{*}_{F}}+\eta _f\beta _f\left( 1-I^{*}_{F}\right) \frac{U^*}{I^{*}_{F}},\\ \mu _e&{} =&{} m_4\sigma _f\frac{I^{*}_{F}}{{U^*}}+m_5\sigma _v\frac{I^{*}_{V}}{{U^*}}. \end{array} \end{aligned}$$ Evaluating the components of the time derivative of the Lyapunov function using the relations (11) we have $$\begin{aligned} \dot{\mathcal{V}}&= \left( 1 - \frac{S_{H}^{*}}{S_{H}}\right) \left[ (\mu _h+\theta _h)S_{H}^{*}\left( 1 - \frac{S_{H}}{S_{H}^{*}}\right) +\theta _h I_{H}^{*}\left( 1 - \frac{I_{H}}{I_{H}^{*}}\right) +m_1\beta _h S_{H}^{*}I_{V}^{*}\left( 1 - \frac{S_{H}I_{V}}{S_{H}^{*}I_{V}^{*}}\right) \right] \nonumber \\&\quad \quad + A\left( 1 - \frac{I_{H}^{*}}{I_{H}}\right) \left[ m_1\beta _h S_{H}^{*}I_{V}^{*}\left( \frac{S_{H}I_{V}}{S_{H}^{*}I_{V}^{*}}-\frac{I_{H}}{I_{H}^{*}}\right) \right] + B\left( 1 - \frac{I_{V}^{*}}{I_{V}}\right) \left[ m_2\beta _vI_{F}^{*}\left( \frac{I_{F}}{I_{F}^{*}}-\frac{I_{V}}{I_{V}^{*}}\right) \right. \nonumber \\&\quad \quad +\left. m_2\beta _vI_{F}^{*}I_V\left( 1-\frac{I_{F}}{I_{F}^{*}}\right) +\eta _v\beta _v U^{*}\left( \frac{U}{U^{*}}-\frac{I_{V}}{I_{V}^{*}}\right) +\eta _v\beta _vU^{*}I_V \left( 1-\frac{U}{U^{*}}\right) \right] \nonumber \\&\quad \quad + C\left( 1 - \frac{I_{F}^{*}}{I_{F}}\right) \left[ \eta _f\beta _fU^{*}\left( \frac{U}{U^{*}}-\frac{I_{F}}{I_{F}^{*}}\right) +\eta _f\beta _fU^{*}I_F\left( 1-\frac{U}{U^{*}}\right) \right. \nonumber \\&\quad \quad +\left. m_3\beta _fI_{V}^{*}\left( \frac{I_{V}}{I_{V}^{*}}-\frac{I_{F}}{I_{F}^{*}}\right) +m_3\beta _fI_{V}^{*}I_F\left( 1-\frac{I_{V}}{I_{V}^{*}}\right) \right] \nonumber \\&\quad \quad +D\left( 1 - \frac{U^{*}}{U}\right) \left[ m_4\sigma _f{I_F}^{*}\left( \frac{I_{F}}{I_{F}^{*}}-\frac{U}{U^{*}}\right) +m_5\sigma _v{I_V}^{*}\left( \frac{I_{V}}{I_{V}^{*}}-\frac{U}{U^{*}}\right) \right] . \end{aligned}$$ $$\begin{aligned} v=\frac{S_H}{S^{*}_{H}},&w=\frac{I_H}{I^{*}_{H}}, x=\frac{I_V}{I^{*}_{V}},y=\frac{I_F}{I^{*}_{F}}\quad \mathrm{and}\quad z=\frac{U}{U^{*}}. 
\end{aligned}$$ Substituting (13) into (12), we obtain $$\begin{aligned} \dot{\mathcal{V}}= & {} -(\mu _h+\theta _h)S_{H}^{*}\frac{( 1 - v)^2}{v}+\mathcal{H}(v,w,x,y,z), \end{aligned}$$ where $$\begin{aligned} \mathcal{H}&= \theta _h I_{H}^{*}\left( 1 -w-\frac{1}{v}+\frac{w}{v}\right) +m_1\beta _h S_{H}^{*}I_{V}^{*}\left( 1 - \frac{1}{v}+x-xv\right) \nonumber \\&\quad \quad + A m_1\beta _h S_{H}^{*}I_{V}^{*}\left( 1+xv-w-\frac{vx}{w}\right) + B m_2\beta _vI_{F}^{*}\left( 1+y-x-\frac{x}{y}\right) \nonumber \\&\quad \quad +B m_2\beta _vI_{F}^{*}{I^*}_V\left( x+y-xy-1\right) +B\eta _v\beta _v U^{*}\left( 1+z-x-\frac{z}{x}\right) \nonumber \\&\quad \quad +B\eta _v\beta _vU^{*}{I^*}_V \left( x+z-xz-1\right) + Cm_3\beta _f{I_V}^{*}\left( 1+x-y-\frac{x}{y}\right) \nonumber \\&\quad \quad + Cm_3\beta _f{I_V}^{*}{I_F}^{*}\left( y+x-xy-1\right) +C\eta _f\beta _fU^{*}\left( 1+z-y-\frac{z}{y}\right) \nonumber \\&\quad \quad +C\eta _f\beta _fU^{*}{I_F}^*\left( y+z-yz-1\right) +Dm_4\sigma _f{I_F}^{*}\left( 1+y-z-\frac{y}{z}\right) \nonumber \\&\quad \quad +Dm_5\sigma _v{I_V}^{*}\left( 1+x-z-\frac{x}{z}\right) . \end{aligned}$$ Next, we choose A, B, C and D so that none of the variable terms of \(\mathcal{H}\) are positive. It is important to group together the terms in \(\mathcal{H}\) that involve the same state variable terms, as well as grouping all of the constant terms together. So we can show that \(\mathcal{H}\le 0\) by expanding (15), writing out the constant term and the coefficients of the variable terms such as \(v,w,x,y,z,\frac{1}{v},\frac{w}{v},\frac{x}{v}\) and so on. The only variable terms that appear with positive coefficients are x, y and z. We thus choose the Lyapunov coefficients so as to make the coefficients of x, y and z equal to zero. We have $$\begin{aligned} A&=1, B=\frac{m_1\beta _hS_{H}^{*}I_{V}^{*}}{m_2\beta _vI_{V}^{*}(1-I_{F}^{*})+\eta _v\beta _vU^{*}(1-I_{V}^{*})}. \end{aligned}$$ The coefficients C and D can similarly be evaluated from the coefficients of y and z. Note that expressions such as $$\begin{aligned} m_1\beta _hS_{H}^{*}I_{V}^{*}\left( 2-\frac{1}{v}-\frac{xv}{w}\right) \end{aligned}$$ emanating from the substitution of the coefficients into \(\mathcal{H},\) are less than or equal to zero by the arithmetic mean-geometric mean inequality. This implies that \(\mathcal{H}\le 0\) with equality only if \(\frac{S_H}{{S_H}^*}=\frac{I_H}{{I_H}^*} = \frac{I_V}{{I_V}^*}=\frac{I_F}{{I_F}^*}=\frac{U}{{U}^*}=1.\) Therefore, \(\dot{\mathcal{V}} \le 0\) and, by LaSalle's Extension [17], the omega limit set of each solution lies in an invariant set contained in \({\Omega }.\) The only invariant set contained in \(\Omega \) is the singleton \(\mathcal{E}_1\). This shows that each solution which intersects \(\mathbb {R}_+^5\) converges to the endemic equilibrium. This completes the proof. \(\square \)

The biggest challenge in epidemic modeling is the estimation of parameters in the model validation process. In this section we endeavour to estimate some of the parameter values of system (2). The demographic parameters can be easily estimated from census population data. We begin by estimating the mortality rate \(\mu _h.\) We note that the average life expectancy of the human population in Ghana is 60 years [21]. This translates into \(\mu _h=0.017\) per year or equivalently \(4.6\times 10^{-5}\) per day. Buruli ulcer is currently regarded as a vector borne disease.
Recovery rates of vector borne diseases, modelled by \(\gamma _h\), range from \(1.6\times 10^{-5}\) to 0.5 per day [23]. This translates to between 0.00584 and 183 per year. The rate of loss of immunity \(\theta _h\) for vector borne diseases ranges between 0 and \(1.1\times 10^{-2}\) per day [23]. The mortality rate of the water bugs is assumed to be 0.15 per day [3]. The rates per day can easily be transferred to yearly rates. In this model we shall assume that we have more water bugs than humans so that \(m_1<1.\) Since the water bugs prey on the fish, a reasonable food chain structure leads to the assumption that we have more fish than water bugs, hence \(m_2>1\) and consequently \(0<m_3<1.\) If the water bug is assumed to interact more with the environment than the fish, then \(\eta _v >1\) and \(0<\eta _f<1.\) The natural mortality of small fish in rivers is not well documented and data on the mortality of river fish in Ghana is not available. For the purpose of our simulations, we shall assume that \(3\times 10^{-3}<\mu _f<7\times 10^{-3}\) per day. Given that \(K\ge N_F,N_V\) we have \(0\le m_4,m_5\le 1.\) We shall also assume that \(0\le \sigma _f,\sigma _v\le 1.\) We summarise the parameters in the following Table 2.

Table 2 Parameter values used for the simulations and sensitivity analysis

Many of the parameters used in this paper are not determined experimentally. Therefore their accuracy is always in doubt. This can be overcome by observing responses of such parameters and their influence on the model variables through sensitivity and uncertainty analysis. In this subsection we present the sensitivity analysis of the model parameters to ascertain the degree to which the parameters affect the outputs of the model. We use Partial Rank Correlation Coefficient (PRCC) analysis to determine the sensitivity of our model to each of the parameters used in the model. Through correlations, the association of the parameters and state variables can be established. In our case, we determine the correlation of our parameters and the state variable U. Alongside the PRCCs are the statistical significance test p-values for each of the parameters. If the PRCC value of a parameter is greater than 0.5 or less than −0.5 and the p-value is less than 0.05, then the model is sensitive to the parameter. On the other hand, PRCC values close to \(+1\) or \(-1\) indicate that the parameter strongly influences the state variable output. The sign of a PRCC value indicates the qualitative relationship between the parameter and the output variable. A negative sign indicates that the parameter is inversely proportional to the outcome measure [10]. The parameters with negative PRCCs reduce the severity of Buruli ulcer disease while those with positive PRCCs aggravate it. We use a Latin Hypercube Sampling (LHS) scheme with 1000 simulations for each run, with U as the outcome variable. Our results show that the variable U is sensitive to the changes in the parameters \(m_3,~ \eta _f,~\mu _e,~\mu _f\) and \(\beta _f\). The results are shown in Fig. 2.

Fig. 2 PRCC plots: The variable U largely depends on \(m_3,~ \eta _f,~\mu _e,~\mu _f\) and \(\beta _f\). The bars pointing to the left indicate that U has an inverse dependence on the respective parameters. We observe that the parameters \(m_3,~ \eta _f\) and \(\beta _f\) aggravate the disease when they are increased, while \(\mu _f\) and \(\mu _e\) reduce its severity when increased.
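The LHS/PRCC procedure described above can be reproduced with standard scientific-Python tools. The sketch below only illustrates the mechanics: the sampled parameter ranges are arbitrary placeholders and the response is a dummy function, since computing the true output U would require integrating model system (2).

```python
# Sketch of the LHS/PRCC sensitivity procedure. The sampled ranges and the
# response function are placeholders; in practice U comes from system (2).
import numpy as np
from scipy.stats import qmc, rankdata

def prcc(samples, response):
    """Partial rank correlation of each sampled parameter with the response."""
    ranked = np.column_stack([rankdata(col) for col in samples.T])
    ranked_y = rankdata(response)
    coeffs = []
    for i in range(ranked.shape[1]):
        others = np.delete(ranked, i, axis=1)
        design = np.column_stack([others, np.ones(len(ranked_y))])
        # correlate the residuals after removing the other parameters' effect
        res_x = ranked[:, i] - design @ np.linalg.lstsq(design, ranked[:, i], rcond=None)[0]
        res_y = ranked_y - design @ np.linalg.lstsq(design, ranked_y, rcond=None)[0]
        coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(coeffs)

# Latin Hypercube Sample of three uncertain parameters (hypothetical ranges)
sampler = qmc.LatinHypercube(d=3, seed=1)
lower, upper = [1e-5, 3e-3, 0.1], [1e-4, 7e-3, 0.5]      # beta_f, mu_f, mu_e
params = qmc.scale(sampler.random(n=1000), lower, upper)

# dummy stand-in for the model output U (placeholder only)
U = params[:, 0] / (params[:, 1] * params[:, 2])
print(dict(zip(['beta_f', 'mu_f', 'mu_e'], np.round(prcc(params, U), 2))))
```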
The results from the PRCC analysis are summarized in Table 3. The significant parameters together with their PRCC values and p-values have been encircled.

Table 3 Outputs from PRCC analysis

In Fig. 3 the residuals for the ranked Latin Hypercube Sampling parameter values are plotted against the residuals for the ranked density of Mycobacterium ulcerans. The PRCC plots for parameters \(\beta _f,~\mu _f,~\mu _e\) and \(\eta _f\) show a strong linear correlation. The growth of Mycobacterium ulcerans increases as the number of infected fish that eventually shed bacteria into the environment increases. An increase in the parameters \(\mu _f\) and \(\mu _e\) leads to a decrease in the amount of bacteria in the environment.

Fig. 3 PRCC plots for the parameters \(\beta _f\), \(\mu _f\), \(\mu _e\), \(\eta _f\) and \(m_3\)

Data and the fitting process

One of the most important steps in the model building chronology is model validation. We now focus on the data provided by the Ashanti Regional Disease Control Office for Buruli ulcer cases in Ghana per 100,000 people. The data are given in Table 4 below for the years 2003–2012.

Table 4 Data on Buruli ulcer cases in Ghana

We fit the model system (2) to the data of Buruli ulcer cases expressed as fractions. We use the least squares curve fit routine (lsqcurvefit) in Matlab with optimisation to estimate the parameter values. Many parameters are known to lie within limits. A few parameters such as the demographic parameters are known [13] and it is thus important to estimate the others. The process of estimating the parameters aims at finding the best concordance between computed and observed data. One tedious way to do it is by trial and error or by the use of software programs designed to find parameters that give the best fit. Here, the fitting process involves the use of the least-squares curve fitting method. Matlab code is used where unknown parameter values are given a lower and upper bound, from which the set of parameter values that produce the best fit is obtained. Figure 4 shows how system (2) fits to the available data on the incidence of the BU. The incidence solution curve shows a very reasonable fit to the data.

Fig. 4 Model fit to data. Model system (2) fitted to data of Buruli ulcer cases in Ghana. The circles indicate the actual data and the solid line indicates the model fit to the data. The parameter values used for the fitting are \( \mu _h=0.000045,~\theta =0.1,~m_1=5,~\beta _h=0.1,~\gamma =0.056,~m_2=10,~\beta _v=0.000065,~\eta _v=1.5,~\eta _f=0.6,~\mu _v=0.15,~\beta _f=0.00005,~\mu _f=0.05,~\sigma _f=0.05,~\sigma _v=0.006,~\mu _e=0.4\)
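The same bounded least-squares step can be carried out outside Matlab, for instance with scipy.optimize.least_squares as a stand-in for lsqcurvefit. In the sketch below the incidence model, the data values and the bounds are placeholders only, since the full system (2) and the values of Table 4 are not reproduced here.

```python
# Sketch of bounded least-squares fitting (a Python stand-in for lsqcurvefit).
# The model function, the data and the bounds are illustrative placeholders.
import numpy as np
from scipy.optimize import least_squares

years = np.arange(2003, 2013)                       # observation years
observed = np.array([0.8, 0.9, 1.1, 1.0, 1.2,       # hypothetical incidence
                     1.1, 1.3, 1.2, 1.3, 1.25])     # fractions, not Table 4

def incidence_model(params, t):
    # placeholder response; in practice this would integrate system (2) and
    # return the modelled incidence at the observation times
    a, b = params
    return a * (1.0 - np.exp(-b * (t - t[0] + 1)))

def residuals(params):
    return incidence_model(params, years) - observed

fit = least_squares(residuals, x0=[1.0, 0.5],
                    bounds=([0.0, 0.0], [10.0, 5.0]))    # lower/upper bounds
print("best-fit parameters:", np.round(fit.x, 3))
```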
In planning for a long term response to the Buruli ulcer epidemic, it is important to have some reasonable projections of the epidemic. The fitting process allows us to envisage the Buruli ulcer epidemic in the future. It is important to note that the projections are reasonably good over a short period of time, since the epidemic is currently evolving gradually based on the available data. We chose to project the epidemic beyond 5 years to 2017. Figure 5 shows the projected Buruli ulcer epidemic.

Fig. 5 Projected model fit. Projection of the fit in Fig. 4

Figures 6 and 7 show the changes in the prevalence of infected humans, respectively, when \(\sigma _f,\) the shedding rate of Mycobacterium ulcerans into the environment, and \(\mu _e,\) the removal rate of MU from the environment, are varied. Based on the sensitivity analysis, our model is very sensitive to the shedding rate of Mycobacterium ulcerans into the environment. Figure 6 shows that an increase in the shedding rate will lead to increased human infections. We can actually quantify the related increases. For instance, if \(\sigma _f\) is increased from 0.51 to 0.52 in year 15, the percentage increase in the prevalence of human infections is 6 %. Minimising Mycobacterium ulcerans in the environment is an important control measure, albeit impractical at the moment. We observe through our results that a decrease of the bacteria in the environment can lead to quantifiable changes in the prevalence of infected humans. Increasing \(\mu _e\) leads to a decrease in the prevalence of infected humans.

Fig. 6 Prevalence of Buruli ulcer infection in humans. Shows the prevalence of infected humans when \(\sigma _f\) is varied

Fig. 7 Prevalence of Buruli ulcer in infected humans for different values of \(\mu _e\). Shows the prevalence of infected humans when \(\mu _e\) is varied

In this paper, a deterministic model on the dynamics of the Buruli ulcer in the presence of a preventive intervention strategy is presented. The model's steady states are determined and their stabilities investigated in terms of the classic threshold \(\mathcal{R}_0.\) In disease transmission modelling, it is well known that a classical necessary condition for disease eradication is that the basic reproductive number \(\mathcal{R}_0\) must be less than unity. The model has multiple endemic equilibria (in fact it exhibits a backward bifurcation). When a backward bifurcation occurs, endemic equilibria coexist with the disease free equilibrium for \(\mathcal{R}_0<1.\) This means that getting the classic threshold \(\mathcal{R}_0\) less than 1 might not be sufficient to eliminate the disease. Thus the existence of backward bifurcation has important public health implications. This might explain why the disease has persisted in the human population over time. The endemic equilibrium is found to be globally stable if \(\mathcal{R}_0>1.\) The sensitivity analysis of model parameters showed some interesting results. These results suggest that efforts to remove Mycobacterium ulcerans and infected fish from the environment will greatly reduce the epidemic, although the latter will be impracticable. This is because of the costs involved and the fact that many governments in affected areas operate on lean budgets. The model is then fitted to data on the Buruli ulcer in Ghana. The model reasonably fits the data. The challenge in the fitting process was that the data appear to indicate that Buruli ulcer has reached a steady state. This then produced some parameter values that appeared unreasonable. Despite these challenges, the fit produced reasonable projections on the future of the ulcer. The model shows that in the near future, the number of cases will not change if everything remains the same. An important consideration that can be added to the model is the inclusion of probable policy shifts and the investigation of different scenarios on the progression of the epidemic as the policies change. Because not much of the disease is understood, parameter estimation was a daunting task. So we had to reasonably estimate some of the parameters using the hypothesis that Buruli ulcer is a vector borne disease. Due to the estimation of essential parameters, sensitivity analysis was necessary and very important to determine how these parameters influence the model. The implications of varying some of the important epidemiological parameters such as the shedding rates were investigated. Important results were drawn through Figs. 6 and 7.
The main result of this paper is that the management of Buruli ulcer depends mostly on the management of the environment. This model can be improved by considering social interventions in the human population, modeled as functions, and the inclusion of the different forms of treatment available, as some individuals opt for traditional methods while others depend on the government health care system [1]. Social interventions include education, awareness, poverty reduction and provision of social services. While the mathematical representation of these interventions is challenging, they are vital to the dynamics of the disease and public health policy designs. Finally this model can be used to suggest the type of data that should be collected as research on the Buruli ulcer intensifies. The global burden of the disease and its epidemiology are not well understood [28]. Clearly, gaps do exist in the nature and type of data available. Reports on the disease are often based on passive presentations of patients at health care facilities. As a result of the difficulties of accessing health care in affected areas, data on the disease is scanty.

References

Agbenorku P, Donwi IK, Kuadzi P, Saunderson P. Buruli Ulcer: treatment challenges at three centres in Ghana. J Trop Med. 2012; doi:10.1155/2012/371915.
Ahorlu CK, Koka E, Yeboah-Manu D, Lamptey I, Ampadu E. Enhancing Buruli ulcer control in Ghana through social interventions: a case study from the Obom sub-district. BMC Public Health. 2013;13:59.
Aidoo AY, Osei B. Prevalence of aquatic insects and arsenic concentration determine the geographical distribution of Mycobacterium ulcerans infection. Comput Math Method Med. 2007;8:235–44.
Boleira M, Lupi O, Lehman L, Asiedu KB, Kiszewski AE. Buruli ulcer. Anais Brasileiros de Dermatologia. 2010;85(3):281–301.
Clift E. IEC interventions for health: a 20 year retrospective on dichotomies and directions. Int J Health Commun. 1998;3(4):367–75.
Debacker M, Portaels F, Aguiar J, Steunou C, Zinsou C, Meyers W, et al. Risk factors for Buruli ulcer, Benin. Emerg Infect Dis. 2006;12:1325–31.
Descartes' Rule of Signs. Available at: http://www.purplemath.com/modules/drofsign.htm. Accessed 2 Sept 2013.
Eddyani M, Ofori-Adjei D, Teugels G, De Weirdt D, Boakye D, Meyers WM, Portaels F. Potential role for fish in transmission of Mycobacterium ulcerans disease (Buruli Ulcer): an environmental study. Appl Environ Microbiol. 2004;5679–81.
Garba SM, Gumel AB, Abu Bakar MR. Backward bifurcation in dengue transmission dynamics. Math Biosci. 2008;215:11–25.
Gomero B. Latin Hypercube Sampling and Partial Rank Correlation Coefficient analysis applied to an optimal control problem. MSc Thesis, The University of Tennessee. 2012.
Grassly NC, Fraser C. Mathematical models of infectious disease transmission. Nat Rev Microbiol. 2008;6:477–87.
Grundmann H, Hellriegel B. Mathematical modelling: a tool for hospital infection control. Lancet Infect Dis. 2005;6(1):39–45.
Ghana Statistical Service. Available at: http://www.statsghana.gov.gh. Accessed Sept 2013.
Houben RM, Dowdy DW, Vassall A, et al. How can mathematical models advance tuberculosis control in high HIV prevalence settings? Int J Tuber Lung Dis. 2014;18(5):509–14.
Huppert A, Katriel G. Mathematical modelling and prediction in infectious disease epidemiology. Clin Microbiol Infect. 2013;19:999–1005.
Jacobsen KH, Padgett JJ. Risk factors for Mycobacterium ulcerans infection. Int J Infect Dis. 2010;14(8):e677–81.
LaSalle JP. The stability of dynamical systems. In: CBMS-NSF Regional Conference Series in Applied Mathematics 25, SIAM: Philadelphia. 1976.
Marsollier L, Robert R, Aubry J, Andre JS, Kouakou H, Legras P, Manceau A, Mahaza C, Carbonnelle B. Aquatic insects as a vector for Mycobacterium ulcerans. Appl Environ Microbiol. 2002;68:4623–8.
Marty R, Roze S, Bresse X, Largeron N, Smith-Palmer J. Estimating the clinical benefits of vaccinating boys and girls against HPV-related diseases in Europe. BMC Cancer. 2013;19:19. doi:10.1186/1471-2407-13-10.
Merritt RW, Walker ED, Small PLC, Wallace JR, Johnson PDR, et al. Ecology and transmission of buruli ulcer disease: a systematic review. PLoS Neglect Trop Dis. 2010;4(12):e911.
Population and Housing Census National Analytical Report, 2012. http://www.statsghana.gov.gh.
Portaels F, Chemlal K, Elsen P, Johnson PD, Hayman JA, Hibble J, Kirkwood R, Meyers WM. Mycobacterium ulcerans in wild animals. Revue Scientifique et Technique. 2001;20:252–64.
Rascalou G, Pontier D, Menu F, Gourbière S. Emergence and prevalence of human vector-borne diseases in sink vector populations. PLoS One. 2012;7(5):e36858. doi:10.1371/journal.pone.0036858.
Silva MT, Portaels F, Pedrosa J. Aquatic insects and Mycobacterium ulcerans: an association relevant to buruli ulcer control? PLoS Med. 2007;4(2):e63.
Sopoh GE, Barogui YT, Johnson RC, Dossou AD, Makoutode M. Family relationship, water contact and occurrence of Buruli ulcer in Benin. PLoS Neglect Trop Dis. 2010;4(7):e746.
Stienstra Y, van der Graaf WTA, Asamoa K, van der Werf TS. Beliefs and attitudes toward buruli ulcer in Ghana. Am J Trop Med Hyg. 2002;67:207–13.
Williamson HR, Benbow ME, Campbell LP, Johnson CR, Sopoh G, Barogui Y, Merritt RW, Small PLC. Detection of Mycobacterium ulcerans in the environment predicts prevalence of buruli ulcer in Benin. PLoS Neglect Trop Dis. 2012;e1506.
World Health Organization. Buruli ulcer (Mycobacterium ulcerans infection). http://www.who.int/buruli/en/.
World Health Organization. Buruli ulcer: Number of new cases of Buruli ulcer reported (per year). http://apps.who.int/neglected_diseases/ntddata/buruli/buruli.html.

FN designed the model and carried out the numerical simulations. EB did the mathematical analysis and writing of the manuscript. Both authors read and approved the final manuscript. The first author acknowledges with gratitude the support from the Stellenbosch University International Office for the research visit that culminated in this manuscript. The second author acknowledges, with thanks, the support of the Department of Mathematics and Statistics, Kumasi Polytechnic.

Farai Nyabadza, Department of Mathematical Sciences, Stellenbosch University, Private Bag X1, Matieland, 7602, South Africa
Ebenezer Bonyah, Department of Mathematics and Statistics, Kumasi Polytechnic, P. O. Box 854, Kumasi, Ghana
Correspondence to Ebenezer Bonyah.

Nyabadza, F., Bonyah, E. On the transmission dynamics of Buruli ulcer in Ghana: Insights through a mathematical model. BMC Res Notes 8, 656 (2015). https://doi.org/10.1186/s13104-015-1619-5

Keywords: Buruli ulcer; Transmission dynamics; Basic reproduction number
Surds (from Loughborough University mathscard online)

Simplifying surds

$\sqrt x $ means the positive square root of $x$. A number that can be expressed in the form $\displaystyle \frac{a}{b}$, where $a$ and $b$ are integers, is said to be a rational number, e.g. $\displaystyle \frac{3}{4}$, $\displaystyle \frac{5}{9}$, $\displaystyle \frac{2}{3}$. An irrational number is one which cannot be expressed in this form, e.g. $\sqrt {19}, \sqrt {13}, \sqrt {27}$. Numbers such as $\sqrt {3}, \sqrt {5}, \sqrt {7}$ are said to be in surd form. We use surd form when we want to be exact.

$\sqrt {9\times 16} =\sqrt {144} = 12$. Also $\sqrt {9\times 16} =\sqrt {9}\times \sqrt {16}=3\times 4=12$. So $\sqrt {a\times b} =\sqrt a \times \sqrt b $. Similarly $ \sqrt {\displaystyle \frac{a}{b}} = \displaystyle \frac{\sqrt {a}}{\sqrt {b}}$, e.g. $\displaystyle \sqrt {\frac{9}{16}}=\frac{\sqrt {9}}{\sqrt {16}}=\frac{3}{4}$. But $ \sqrt {a+b} \ne \sqrt {a} + \sqrt {b}$ and $\sqrt {a-b} \ne \sqrt {a} - \sqrt {b}$.

Simplifying products involving surds

$\begin{array}{rcl}& & (x+\sqrt {3})(x+\sqrt {2})\\ & & = x\cdot x+x\sqrt {2} + x\sqrt {3}+\sqrt {3}\sqrt {2}\\ & & = x^{2}+x(\sqrt {2}+\sqrt {3}) + \sqrt {6}\\ \end{array}$

Pairs of values such as $2 +\sqrt {3}$ and $2-\sqrt {3}$ are called pairs of conjugates. To simplify an expression such as $\displaystyle \frac{2 + \sqrt {5}}{3 - \sqrt {5}}$, multiply the numerator and denominator by the conjugate of the denominator, i.e. $3 + \sqrt {5}$.
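For example, $\displaystyle \frac{2 + \sqrt {5}}{3 - \sqrt {5}} = \frac{(2 + \sqrt {5})(3 + \sqrt {5})}{(3 - \sqrt {5})(3 + \sqrt {5})} = \frac{6 + 2\sqrt {5} + 3\sqrt {5} + 5}{9 - 5} = \frac{11 + 5\sqrt {5}}{4}$.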
The simplest and most common type of nucleic acid mutation is a point mutation, which replaces one base with another at a single nucleotide. In the case of DNA, a point mutation must change the complementary base accordingly, as in the figure where a C-G pair is changed into an A-T pair. DNA strands taken from different organisms or species genomes are homologous if they share a recent ancestor. In comparing several homologous DNA strands, it might be helpful to compute their consensus sequence. After all, according to the biological principle of parsimony[1] — which demands that evolutionary histories should be as simply explained as possible — this sequence represents the most likely ancestor of the given DNA strands.

A matrix is a rectangular table of values divided into rows and columns. An $$m \times n$$ matrix has $$m$$ rows and $$n$$ columns. Given a matrix $$A$$, we write $$A_{i,j}$$ ($$0 \leq i < m; 0 \leq j < n$$) to indicate the value at the intersection of row $$i$$ and column $$j$$.

Say that we have a series of DNA sequences, all having the same length $$n$$. Their profile matrix is a $$4 \times n$$ matrix $$P$$ in which $$P_{0, j}$$ represents the number of times the base A occurs in the $$j$$-th position of the given sequences, $$P_{1, j}$$ represents the number of times the base C occurs in the $$j$$-th position of the given sequences, and so on (see table below). The consensus sequence $$c$$ is a string of length $$n$$ formed from the series of DNA sequences by taking the most common base at each position. The $$j$$-th character of $$c$$ therefore corresponds to the base having the maximal value in the $$j$$-th column of the profile matrix of the DNA sequences. If there is more than one maximal value in the $$j$$-th column of the profile matrix, the letter N is used as the $$j$$-th character of $$c$$.

                  G C A A A A C G
                  G C G A A A C T
                  T A C C T T C A
    sequences     T A T G T T C A
                  G C C T T A G G
                  G A C T T A T A
                  T C G G A T C C

                A 0 3 1 2 3 4 0 3
    profile     C 0 4 3 1 0 0 5 1
                G 4 0 2 2 0 0 1 2
                T 3 0 1 2 4 3 1 1

    consensus     G C C N T A C A

Your task:

Write a function profile that takes a series (a list or a tuple) of DNA sequences as its argument. In this assignment, DNA sequences are represented as strings that only contain the uppercase letters A, C, G and T. In case not all DNA sequences have equal length, the function must raise an AssertionError with the message sequences should have equal length. In case all DNA sequences have equal length $$n$$, the function must return the profile matrix of the sequences.
The profile matrix is represented as a dictionary that maps each of the bases A, C, G and T onto a list of $$n$$ integers, where the integer at position $$j$$ indicates how often that base occurs in the $$j$$-th position of the given DNA sequences.

Write a function consensus that takes a profile matrix as its argument. This profile matrix must be a dictionary that is formatted like the return value of the function profile. The function consensus must return the consensus sequence that corresponds to the given profile matrix.

>>> seqs = ['GCAAAACG', 'GCGAAACT', 'TACCTTCA', 'TATGTTCA', 'GCCTTAGG', 'GACTTATA', 'TCGGATCC']
>>> profile(seqs)
{'A': [0, 3, 1, 2, 3, 4, 0, 3], 'C': [0, 4, 3, 1, 0, 0, 5, 1], 'T': [3, 0, 1, 2, 4, 3, 1, 1], 'G': [4, 0, 2, 2, 0, 0, 1, 2]}
>>> consensus(profile(seqs))
'GCCNTACA'
>>> seqs = ['GGTATCTTTA', 'TTGTCGTCTTAGA', 'GGATCCAGAC', 'ATTCAATCGA', 'TGATCTGGAA', 'AGAGTCATGC']
>>> profile(seqs)
Traceback (most recent call last):
AssertionError: sequences should have equal length

[1]: http://en.wikipedia.org/wiki/Maximum_parsimony_%28phylogenetics%29
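One possible implementation of the two functions (a sketch, not necessarily the reference solution) is:

```python
def profile(sequences):
    """Return the profile matrix of a series of equal-length DNA sequences."""
    lengths = {len(seq) for seq in sequences}
    assert len(lengths) == 1, 'sequences should have equal length'
    n = lengths.pop()
    matrix = {base: [0] * n for base in 'ACGT'}
    for seq in sequences:
        for j, base in enumerate(seq):
            matrix[base][j] += 1
    return matrix

def consensus(matrix):
    """Return the consensus sequence corresponding to a profile matrix."""
    n = len(next(iter(matrix.values())))
    result = []
    for j in range(n):
        counts = {base: matrix[base][j] for base in matrix}
        best = max(counts.values())
        winners = [base for base, count in counts.items() if count == best]
        # use N when the maximal count is not unique
        result.append(winners[0] if len(winners) == 1 else 'N')
    return ''.join(result)
```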
Using machine learning in an open optical line system controller

Andrea D'Amico,1,* Stefano Straullu,2 Antonino Nespola,2 Ihtesham Khan,1 Elliot London,1 Emanuele Virgillito,1 Stefano Piciaccia,3 Alberto Tanzi,3 Gabriele Galimberti,3 and Vittorio Curri1

1 Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
2 LINKS Foundation, Via Pier Carlo Boggio 61, 10138 Torino, Italy
3 Cisco Photonics, Via S. M. Molgora 48/C, 20871 Vimercate, Italy
*Corresponding author: [email protected]

J. Opt. Commun. Netw. 12, C1-C11 (2020) • https://doi.org/10.1364/JOCN.382557
Original Manuscript: November 5, 2019; Revised Manuscript: January 10, 2020; Manuscript Accepted: January 10, 2020

Abstract: The reduction of system margin in open optical line systems (OLSs) requires the capability to predict the quality of transmission (QoT) within them. This quantity is given by the generalized signal-to-noise ratio (GSNR), including both the effects of amplified spontaneous emission (ASE) noise and nonlinear interference accumulation. Among these, estimating the ASE noise is the most challenging task due to the spectrally resolved working point of the erbium-doped fiber amplifiers (EDFAs), which depend on the spectral load, given the overall gain profile. An accurate GSNR estimation enables control of the power optimization and the possibility to automatically deploy lightpaths with a minimum margin in a reliable manner. We suppose an agnostic operation of the OLS, meaning that the EDFAs are operated as black boxes and rely only on telemetry data from the optical channel monitor at the end of the OLS. We acquire an experimental data set from an OLS made of 11 EDFAs and show that, without any knowledge of the system characteristics, an average extra margin of 2.28 dB is necessary to maintain a conservative threshold of QoT. Following this, we applied deep neural network machine-learning techniques, demonstrating a reduction in the needed margin average down to 0.15 dB.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Data traffic demand will experience a dramatic increase over the next few years, driven by the implementation of 5G access and the expansion of bandwidth-hungry applications, such as high definition video and virtual- and augmented-reality content [1]. These applications will boost cloud computing and cloud-storage-related data exchange, causing traffic expansion both within and between data centers. Optical networks will sustain this growth trend, particularly within their backbone portion. These backbone networks already carry massive amounts of data, and a further push will be required to match the required transmission capacity over the next five years. A key operator request is the ability to fully exploit existing infrastructure in order to maximize returns from investments. This need is directly related to the capability of orchestrating all network layers, allowing the data transport to reach the maximum available capacity [2–6]. In optical networks, the enabler for optimal exploitation of data transport—the dense wavelength division multiplexed (DWDM) transmission—is the control layer. In particular, software-defined network controllers rely on a network abstraction. Nowadays, optical networks are fast moving toward partial disaggregation, with a final goal of full disaggregation; a disaggregated network has subsystems that are managed independently from one another by relying on common data structures and API (application program interface). Contrary to aggregated networks, disaggregated networks can be open and multivendor, but cannot rely on closed management. These features pave the road for a software-defined controller that is able to manage separately the working points of the various network elements, enabling the management to be user-customizable. The first step in disaggregating the network is to consider the optical line systems (OLSs) that connect the network nodes. In this framework, the quality of transmission (QoT) degradation depends on the capability of OLS controllers to operate at the optimal working point [7,8]. The more accurately this optimal working point is reached, the lower the margin for traffic deployment and, thus, the larger the deployed traffic rate. Moreover, there is the potential for the recovery of network failures to be automated, reducing downtime. Therefore, to reduce the margin, it is mandatory to rely on a QoT estimator (QoT-E) that is able to reliably predict lightpath (LP) performance before its actual deployment, i.e., the generalized signal-to-noise ratio (GSNR), which includes both the effects of amplified spontaneous emission (ASE) noise, quantified by the optical SNR (OSNR), and nonlinear interference (NLI) accumulation [9]. The interaction between ASE noise and NLI [10,11] occurs in the case of very low operational GSNR, namely for extremely long OLSs, which require several amplification points. These conditions are verified in submarine point-to-point networks but have negligible effects within terrestrial networks. In this work we focus on terrestrial regional and national backbone networks for which transparent propagation is over much smaller distances, meaning that considerable ASE-NLI interactions are not produced. Among the ASE noise and NLI contributions, the former is the most dominant, because it is twice the NLI when the system operates at optimal power [7,12]. Remarkably, it is also the most challenging to estimate.
In fact, the ASE noise magnitude depends on the working point of erbium-doped fiber amplifiers (EDFAs) [13]; this in turn depends on the spectral load [14]. On the contrary, the NLI can be accurately predicted when the ASE noise accumulation is well characterized [15]. The purpose of this work is to investigate the reduction of uncertainty in the OSNR prediction and, consequently, to enable the network controller to reliably deploy the LP at the minimum margin. In this work we suppose the worst case of a completely agnostic scenario, by relying only on data coming from the optical channel monitor (OCM) available at the end of the line system. The uncertainty on the working point of the EDFAs is typically induced by a mixed effect of physical phenomena [14] and implementation issues, meaning that an analytic approach is almost impossible to achieve in an open environment. To counteract this, we opted to use machine-learning (ML) techniques, a tactic that has already been effectively tested when managing optical networks; see [16–19] for performance monitoring applications, [20,21] for prediction estimation of the ML approach, and [22] for both. An overall survey of ML applied in optical networks can be found in [23]. Specifically, we also cite [24–27]. In [24], the authors utilize ML to predict the gain of a single EDFA and show that this method can provide improvements over an analytical model. In [25] ML is used to predict the output of an EDFA cascade; in particular, wavelength assignment over a specific network considered in its entirety is able to be automated. Reference [26] investigates how ML can mitigate the effect of the EDFA gain ripple on QoT-E within a simulated network and [27] demonstrates how ML may be used to automatically configure the gain required by amplifiers after deployment. The main difference between this previous research and the present work is that we focus on the OSNR response to specific configurations in a particular OLS that is considered as an element of a completely disaggregated network. Through this, we obtain an evaluation that can be combined with a nonlinear SNR prediction, in order to obtain a reliable QoT-E that can be used both in network planning and for the wavelength assignment in the online case. In Section 2, we first address the issues related to the abstraction of the physical layer in order to effectively perform a multilayer optimization. In particular, we argue that an accurate QoT-E has a key role in minimizing the margin. In Section 3, we describe the experiments performed to emulate an open OLS composed of 11 cascaded amplifiers and one booster amplifier. With this setup we have obtained a data set of measurements mimicking the power readings from an OCM, where different spectral loads have been generated by shaped ASE noise. Additionally, the EDFAs are used as black boxes, setting the average gain to the nominal level. In Section 4, we statistically analyze the experimentally measured data set over all investigated bandwidths. Then, we present the variation in OSNR with respect to the spectral load configuration and discuss these fluctuations in light of physical considerations. Consequently, we derive the required margins, supposing a total absence of knowledge on the EDFA gain and of the noise figure per wavelength. These results show that the uncertainty induced by an agnostic use of the OLS may require the deployment of 2.28 dB of system margin, on average. 
Note that a closed OLS based on single-vendor equipment may largely reduce this uncertainty by characterizing the parameters of these devices. Nevertheless, aging and environmental effects may introduce some uncertainty even in this case. In Section 5 we tested ML techniques. Here, we suppose that a training data set acquired before the deployment of real traffic has been collected in order to reduce the uncertainty of the estimated OSNR. We did not aim to develop a specific ML algorithm from scratch and instead aimed to show the effectiveness of ML in this scenario. For this reason, we relied upon the TensorFlow open source library [28]. We show that by utilizing and optimizing deep neural network (DNN) algorithms, we are able to reduce the required average margin on the OSNR prediction from the initial value of 2.28 dB down to 0.15 dB. In Section 6, we give some overall comments that address possible further investigations.

2. PHYSICAL LAYER ABSTRACTION AND OPTIMIZATION IN TRANSPARENT OPTICAL NETWORKS

Fig. 1. Schematic description of an optical network as a topology of ROADM nodes connected by OLSs. The inset shows a general setup for an OLS that in this case is supposed to be open.

From a data transport point of view, an optical network is an infrastructure connecting sites—in general with a meshed topology—where traffic is added/dropped or routed (see Fig. 1). Site-to-site links are bidirectional fiber connections implemented as one or more fiber pairs, with one fiber for each direction, that are periodically amplified by lumped and/or distributed amplification techniques: EDFAs optionally assisted by some distributed Raman amplification. These links are commonly defined as an OLS and are managed by a controller that properly sets the working point of the amplifiers and, consequently, the power spectral density (spectral load) at the input of each fiber span.
This disturbance must be considered only for high symbol rate transmission that is designed for short-reach, high-capacity transparent optical transmission or in the case of probabilistic shaping [32]. The amplitude noise that derives from fiber propagation, commonly defined as the NLI, always impairs performance as it is a Gaussian disturbance that sums with the ASE noise at the receiver. Additionally, the filtering effects of ROADMs impact QoT degradation as an extra loss contribution. A. Quality of Transmission Estimation Based on the GSNR It is well accepted that the merit of QoT for deployed LPs is given by the GSNR, including both the effects of the accumulated ASE noise and NLI disturbance, defined as (1)$${\rm GSNR} = \frac{{{P_{{\rm Rx}}}}}{{{P_{{\rm ASE}}} + {P_{{\rm NLI}}}}} = {\left( {{{{\rm OSNR}}^{ - 1}} + {\rm SNR}_{{\rm NL}}^{ - 1}} \right)^{ - 1}},$$ where $ {\rm OSNR} = {P_{{\rm Rx}}}/{P_{{\rm ASE}}} $, $ {{\rm SNR}_{{\rm NL}}} = {P_{{\rm Rx}}}/{P_{{\rm NLI}}} $, $ {P_{{\rm Rx}}} $ is the power of the channel under test (CUT) at the receiver, $ {P_{{\rm ASE}}} $ is the power of the ASE noise, and $ {P_{{\rm NLI}}} $ is the power of the NLI. In particular, given the bit-error ratio (BER) versus the OSNR back-to-back characterization of the transceiver, the GSNR accurately predicts the BER, as has been extensively shown in multivendor experiments using commercial products [9]. $ {P_{{\rm NLI}}} $ is generated by nonlinear effects and depends on the power of the CUT and on the spectral load with a cubic law [7]. This means that for each OLS there exists an optimal spectral load that maximizes the GSNR [8]. Given the cascade of $ N $ optical domains, each characterized by a generalized $ {{\rm GSNR}_i} $, where $ i = 1,\ldots,N $, it is straightforward to demonstrate that the overall QoT is given by the following expression: (2)$${\rm GSNR} = {\left( {\sum\limits_{i = 1}^N \frac{1}{{{{{\rm GSNR}}_i}}}} \right)^{ - 1}}.$$ If we analyze the propagation effects on a given LP over a network route, we can abstract it as a cascade of the effects of each optical domain that introduces QoT impairments. Therefore, besides the effects of ROADMs, each LP experiences the cumulative impairments of all previously passed OLSs, where each introduces some amount of ASE noise and NLI. For QoT purposes, the OLS can be abstracted by a unique parameter commonly defined as SNR degradation that, in general, is frequency resolved ($ {{\rm GSNR}_i}(f\,) $), if the OLS controllers are able to keep the OLS operating at the optimal working point. Hence, with this condition, if the OLS controllers are able to expose the corresponding $ {{\rm GSNR}_i} $ for QoT operations, a network can be abstracted as a weighted graph corresponding to its topology. The graph nodes are ROADM network nodes, while the edges are the OLSs and the weights on these edges are the $ {{\rm GSNR}_i}(f) $ degradations of the corresponding OLSs, as shown in Fig. 2. In particular, for a LP routed from A to F that passes through C and E, the QoT is $$\begin{split}{\rm GSNR}_{{\rm AF}}^{ - 1}(f) &= {\rm GSNR}_{{\rm AC}}^{ - 1}(f) + {\rm GSNR}_{{\rm CE}}^{ - 1}(f))\\&\quad + {\rm GSNR}_{{\rm EF}}^{ - 1}(f).\end{split}$$ Note that the network abstraction of the physical layer may be enriched with additional information, such as the latency or the accumulated chromatic dispersion. Both of these additional quantities sum on routes as the SNR degradation and are not exploited in this work. 
Once the network abstraction is available and reliable for network management, LPs can be deployed with the minimum margin, which relies upon the GSNR of the related route and frequency in the case of traffic deployment or recovery. To ensure reliability, the margin minimization requires full control of physical layer fluctuations. In particular, the OLS controllers must fix the response of the amplifiers and expose an accurate evaluation of the GSNR in the frequency domain.

Fig. 2. Abstraction of an optical network as a topology graph weighted by the generalized SNR degradation for optical line systems, $ {{\rm GSNR}_i}(f)$.

To obtain this accuracy, it is straightforward to address the two contributions to the OLS impairments separately: the NLI generation and the ASE noise accumulation. The NLI power can be reliably calculated with different levels of uncertainty using mathematical models [33–36]. The required data for these models are the spectral load of the fiber span and its characteristics (including Raman pumps, if used). Among these variables, only the input connector loss is affected by some considerable uncertainty. For each fiber span, this loss fixes the actual power of the spectral load, producing different magnitudes of NLI. Nevertheless, the prediction capability of $ {P_{{\rm NLI}}} $ is in general very good once a suitable mathematical model is applied to the system under analysis [15,35]. Consequently, in this work, we focus our investigation only on the OSNR component of the GSNR. In order to address only the OSNR characteristics, in a typical scenario (an EDFA cascade), we consider a line composed only of amplifiers and variable optical attenuators (VOAs) in place of the fiber spans. With this constraint, we avoid any generation of NLI due to propagation through the fiber. Therefore, all the experimental measurements analyzed within this work are not affected by any nonlinear effects. Each EDFA in the line is characterized by a gain $ {G_i}(f) $ and a noise figure $ {{\rm NF}_i}(f) $, where $ i = 0,\ldots, N $. After the $ i $th EDFA, the $ i $th attenuator introduces the $ {L_{i + 1}} $ loss, except for the final amplifier. The overall OSNR is given by (3)$$\begin{split}{\rm OSNR}(f) = \frac{{{P_{{\rm Tx}}}\prod\nolimits_{i = 0}^N {G_i}(f){L_i}(f)}}{{\sum\nolimits_{i = 0}^N hf{B_n}{{{\rm NF}}_i}[{G_i}(f) - 1]\prod\nolimits_{k = i + 1}^N {L_k}(f){G_k}(f)}},\end{split}$$ where $ h $ is the Planck constant and $ {B_n} $ is the reference bandwidth for the OSNR. It is straightforward to observe that the uncertainties on $ {G_i}(f) $ and $ {{\rm NF}_i}(f) $ induce overall OSNR fluctuations, which must be taken into account when the system margin is estimated.

B. Approaches for QoT Estimation

In Fig. 3, we list three possible data sets, each representing a different level of knowledge of the OLS behavior, with each allowing a different reduction of the GSNR uncertainty. Typically [option (1)], some data is available from the static characterization of devices (e.g., calculating amplifier gain and noise figure in the frequency domain, connector loss, etc.) and is very significant for closed systems. By using these data and characterizing the OLS components, an accurate QoT-E can be implemented in vendor-specific systems. In particular, if all of the physical characteristics of the OLS are known, the OSNR may be calculated using Eq. (3).
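A direct static implementation of Eq. (3) is sketched below; the gain, noise-figure and loss values of the short cascade are hypothetical placeholders, chosen only to illustrate how the per-stage contributions accumulate.

```python
# Sketch: static OSNR of an EDFA/VOA cascade following Eq. (3).
# Gains, noise figures and losses below are hypothetical placeholders.
import numpy as np

H = 6.62607015e-34          # Planck constant [J s]
B_N = 12.5e9                # reference bandwidth for the OSNR [Hz]

def osnr_cascade(p_tx_w, freq_hz, gains_lin, nfs_lin, losses_lin):
    """OSNR (linear units) of a cascade of EDFAs with inter-stage losses."""
    n = len(gains_lin)
    signal = p_tx_w * np.prod([g * l for g, l in zip(gains_lin, losses_lin)])
    noise = 0.0
    for i in range(n):
        ase_i = H * freq_hz * B_N * nfs_lin[i] * (gains_lin[i] - 1.0)
        # the ASE of stage i passes through all following losses and gains
        for k in range(i + 1, n):
            ase_i *= losses_lin[k] * gains_lin[k]
        noise += ase_i
    return signal / noise

db = lambda x: 10 * np.log10(x)
lin = lambda x_db: 10 ** (x_db / 10)

# hypothetical 3-stage example: 10 dB gains, 5 dB noise figures, 10 dB losses
gains = [lin(10)] * 3
nfs = [lin(5)] * 3
losses = [1.0] + [lin(-10)] * 2      # no loss before the first (booster) stage
print(f"OSNR = {db(osnr_cascade(1e-3, 193.5e12, gains, nfs, losses)):.2f} dB")
```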
Nevertheless, this static data may be incomplete or inaccurate; even in a best-case scenario, the components experience degeneration due to aging, leading to a progressively unreliable QoT-E over time. Fig. 3. General scheme for a QoT-E module predicting the $ {\rm GSNR}(f)$. The three available data sets are shown: (1) static data from device characterization, (2) data from current-state telemetry, and (3) stored data from historical telemetry that feeds a ML module. A second possibility is that telemetry data concerning only the current network status is available [option (2)]. Assuming an agnostic operation of the OLS (as is required in an open OLS) means that the OLS controller must mainly rely upon telemetry data originating from the OCM and the EDFAs. This approach does not require knowledge of the device parameters and avoids the deterioration of the QoT-E accuracy due to aging discussed in option (1). In this case it is possible to use the telemetry data to estimate the OSNR response of the system by relying on the current parameter values. The problem of this approach is that the OSNR response is highly dependent upon the spectral load configuration, requiring a large margin, as can be seen from the analysis of the experimental data set in Section 4. Lastly, option (3) considers a data set that collects the QoT responses to random spectral loads. These data can be generated before the in-service operation of the OLS, supposing the availability of a device that is able to supply the OLS with various spectral load configurations and measure the OLS response in terms of OSNR. As OLSs are typically bidirectional, it is conceivable that a two-port portable device operating as an ASE-shaped generator at the output port and an accurate OCM at the input port can be used to retrieve these data. Moreover, a future implementation considers the possibility of these devices being built into the ROADM nodes, allowing the data to be collected with periodical updates via streaming. Utilizing this data set enables a QoT-E based on the OSNR response to specific spectral load configurations, increasing the accuracy of OSNR predictions with respect to option (2), where only telemetry data is considered. Additionally, this approach does not require knowledge of the physical parameters of the OLS. This case provides an ideal scenario to apply ML, where the OLS is treated as a black box. In fact, a ML method using a training data set composed of past spectral load realizations can yield an accurate prediction for every newly generated spectral load realization. In this work, we focus on option (3) and consider a realistic use case, namely, a scenario where the OLS controller wishes to allocate a new LP over the CUT, given an existing spectral load. In particular, we investigate the level of OSNR associated with this new LP.

3. EXPERIMENTAL SETUP

To obtain an experimental data set, we design and implement the experimental setup depicted in Fig. 4, based on commercial EDFAs [37] used as black boxes. Span losses are obtained by attenuators in order to focus only on the OSNR and to avoid any NLI generation. The channel combs that provide the OLS spectral load have been obtained by shaping ASE noise. This approach does not limit the generality of the results because of the large time constant that characterizes the physical effects within EDFAs.
The output of the ASE noise source is shaped by means of a programmable optical wave-shaper filter (Finisar 1000 S) to generate a 100 GHz-spaced, 35-channel WDM comb centered at 193.5 THz, amplified by a booster amplifier (${{\rm EDFA}_{0}}$ in Fig. 4). The choice of the 100 GHz spacing was forced by the hardware availability, as was the overall frequency domain under investigation, which was limited to 3.5 THz (35 channels, each with 100 GHz spacing). These restrictions do not limit the generality of the results, as the OSNR values do not change appreciably within each channel bandwidth and all criticalities concerning the EDFA amplification process are properly captured. The optical line is composed of 11 spans, each made of a VOA set to 10 dB of span attenuation, followed by an EDFA that operates at a constant output power of $-10\;{\rm dBm}$ per channel. For the EDFAs, MATLAB control software has been developed to enable black-box control. The OCM at the end of the OLS is mimicked by an optical spectrum analyzer (OSA). OCMs that are currently present in ROADM nodes are not able to capture the noise floor due to their lack of sensitivity. As mentioned in option (3) within the previous section, for a real application scenario we suppose the presence of a specific device that is able to measure both the channel powers and the noise floor, or, alternatively, an upgrade of the OCMs currently installed in the ROADM nodes. Regarding the technical aspects of the data collection within this project, the experimental campaign lasted several days due to the use of the OSA, which takes significantly longer than an OCM would. We expect that within a real application scenario the data collection process would last the duration of a single night before the in-service operation of the OLS, producing the required amount of data needed for training the ML.

Fig. 4. Experimental setup: Here, the OLS under investigation is composed of an initial booster amplifier and a cascade of 11 spans, each containing a VOA and an EDFA. We show the input and output spectral power measurements obtained using an optical spectrum analyzer in blue and red, respectively.

For every spectral load, we measured the input and output spectrum in order to generate the final data set. Specifically, we measured the total power over each channel spectral bandwidth, i.e., the noise floor if the channel is off, or the channel power if the channel is on. In fact, since the channel bandwidth (32 GHz) is less than half of the channel spacing, we have been able to measure the noise floor even for the on channels, estimating their OSNR. An experimental data set has been generated with 4435 cases representing different spectral load configurations. For clarity, let us define $ {N_\textit{on}} $ as the number of channels in the on state in a distinct configuration. Given this definition, the data set is composed of a scenario with all channels on ($ {N_\textit{on}} = 35 $), the 35 cases where only one channel is on ($ {N_\textit{on}} = 1 $), and 140 configurations for each $ {N_\textit{on}} = 2,\ldots ,34 $. This final set of configurations includes pairs of spectral loads that are identical, except for the CUT being either in the on or off state.
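A sketch of how such a family of spectral-load configurations could be enumerated is given below. The channel indexing, the choice of the CUT index, the random seed, and the way the on/off CUT realizations alternate are illustrative assumptions, not details taken from the measurement campaign.

```python
import random

random.seed(0)
N_CH = 35     # number of WDM channels in the comb
CUT = 34      # index of the channel under test (hypothetical choice)

configs = []

# Full load: all channels on
configs.append([1] * N_CH)

# Single-channel cases: only one channel on
for ch in range(N_CH):
    cfg = [0] * N_CH
    cfg[ch] = 1
    configs.append(cfg)

# 140 random configurations for every N_on between 2 and 34,
# with the CUT equally divided between the on and off states
for n_on in range(2, 35):
    for k in range(140):
        cut_on = (k % 2 == 0)
        n_others = n_on - 1 if cut_on else n_on
        others = random.sample([c for c in range(N_CH) if c != CUT], n_others)
        cfg = [0] * N_CH
        for c in others:
            cfg[c] = 1
        if cut_on:
            cfg[CUT] = 1
        configs.append(cfg)

print(len(configs))   # nominal count for the composition described in the text
```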
4. STATISTICAL ANALYSIS OF EXPERIMENTAL DATA

In this section, we statistically analyze the OSNR fluctuations produced by different spectral loads in order to obtain a quantitative estimation of the total OSNR uncertainty, given a static OLS (the OSNR values are calculated with a noise bandwidth of 12.5 GHz). Moreover, we use the experimental data set as outlined in option (3) in Section 2 to acquire a prediction of the OSNR responses. To summarize the data set characteristics, there are 4435 measurements of distinct spectral load configurations, which are a subset of the $ {2^{35}} $ possibilities, given 35 channels. To populate the data set, we select a sample of spectral load configurations which is uniform over the number of channels in the on state. Moreover, for the set of configurations with the same $ {N_\textit{on}} $, the channels that are in the on state are chosen randomly, except for the CUT, which is equally divided between the on and off states. This specific data set selection method is enacted in order to validate the prediction method on the CUT OSNR response. During the entire analysis, we have not taken into account any uncertainty in the measurements, as it is negligible with respect to the characteristic variances of the system.

A few basic considerations arise by calculating the average of the OSNRs for each channel over the entire sample, presented in Fig. 5. These OSNR averages sketch a characteristic figure of the EDFA amplification process, lying between 29.5 and 30.9 dB, with standard deviations from 0.14 to 0.40 dB. In order to learn more about the EDFA cascade behavior, it is necessary to consider each configuration separately. In fact, the OSNR of each channel depends upon the state of every other channel within the spectral load. For example, as a first analysis in this direction, we investigate how the OSNR distributions change with the number of on channels in the spectral load. Figures 6 and 7 present the distributions of Fig. 5 for a selected subset of channels, plotted against the total number of on channels in the configurations: these figures show the means and standard deviations, $ \sigma $, of the channels, respectively. It must be noted that because the data set is further divided into chunks, the reliability of the averaged quantities is substantially decreased. This causes the standard deviation (presented in Fig. 7) to be far less uniform across all channels when only a small number of channels are in the on state. Regardless, Fig. 6 shows that for the CUT ($ f = 195.25 \;{\rm THz} $), there is an unquestionable increase in the OSNR as the line approaches a full load configuration. Moreover, for all channels $ \sigma $ decreases under the same conditions, meaning that the system tends toward a stable state. To further characterize the OSNR response with respect to a specific configuration, it is necessary to fully understand the intrinsic behavior of the amplification phenomenon.

Fig. 5. Overall OSNR measurements in the frequency domain. The blue dots are the mean values over the entire sample for each channel; the error bars are equal to the standard deviations. In red and green the maximum and the minimum for each channel are outlined, respectively. The dashed red line indicates the overall OSNR minimum of 28.1 dB.

Fig. 6. Mean values of four channel OSNRs are plotted with respect to the configurations for an increasing $ {N_\textit{on}} $.
In the legend, we report the central frequency of the channels considered. The colored lines and shaded areas are qualitative visual expressions of the trend of the measured data.

Fig. 7. Standard deviation values of the same configurations plotted in Fig. 6. As expected, the channel centered at 195.25 THz maintains the highest variance out of all of the configurations. The colored lines and shaded areas are qualitative visual expressions of the trend of the measured data.

A. Physical Considerations

Although it is possible to obtain a precise physical description of the emission phenomena involved in the amplification process, without accurate knowledge of the OLS physical parameters it is not feasible to determine the evolution of the spectral load through the EDFA cascade. In a general scenario, this obstacle would be exacerbated by the embedded EDFA software controller, which, in order to maintain specific requirements, changes the spectral powers at the output of the amplifiers with an unknown algorithm.

Properly addressing the cause of the OSNR fluctuations requires splitting the OSNR into its constituents: the received signal power and the ASE noise. An important point is that the intensity of the signal amplification and the ASE noise are strictly related. Essentially, these quantities coincide with the stimulated and spontaneous emission of the amplifiers, respectively, and both depend on the population inversion of the erbium within the EDFAs [14]. As a rough summary, if no power is transmitted in a given frequency band, all the relative population inversion is used by the ASE noise, allowing it to reach a maximum value. In contrast, when a transmitted signal is amplified, a smaller amount of population inversion is available, resulting in a lower maximum noise value that may be attained. This effect is shown in Fig. 8, where two spectral load configurations are considered. Here, a clear reduction in ASE noise is observed by switching an extra channel on. This is the case for all channels, with the minimum amount of ASE noise being achieved when all channels are in the on state. Furthermore, it should be noted that among all possible configurations, the example shown in Fig. 8 exhibits the largest change in the noise figure. In fact, the channel switched to the on state has a frequency bandwidth centered at 195.25 THz, a frequency close to the peak of the well-known spectral hole burning phenomenon [14]. Likewise, this behavior is also reflected by the large OSNR variance of this channel. Revisiting the data set, this feature is pictured in Fig. 9, where we plot the standard deviations of the overall OSNR measurements for each channel. Furthermore, in Fig. 8 it can be observed that even though channels have a bandwidth of 32 GHz in this experiment, switching a channel on can affect the noise power in frequency bands hundreds of gigahertz away. Since the EDFA population inversion quantifies the intensity of both the amplification and the noise, we can conclude that the state of a single channel impacts both the signal power and the ASE noise of channels within its frequency neighborhood. This cross-dependency between the channel power and the ASE noise, both of which depend on the state of the other channels, makes calculating the OSNR of every channel challenging: the OSNR is not an intrinsic value of the channel but of the entire spectral load.
Owing to the above considerations, it is not possible to further characterize the OSNR response for a particular configuration if the parameters of the OLS are not accurately known.

Fig. 8. Qualitative visualization of the OSNR fluctuations that arise from turning on a new channel, for both the ASE noise (shaded lines) and the power of the on channels (dots). Here, the $ {N_\textit{on}} = 1 $ case is given in red, and the $ {N_\textit{on}} = 2 $ case is given in blue. In this figure, all quantities are normalized in order to have a unitary mean value.

Fig. 9. Standard deviation trend over all of the channels, highlighting an increase as the channel frequency approaches the peak of the spectral hole burning, given by the dashed red line in the figure.

Apart from the statistical description of the entire data set and the heuristic analysis of the OSNR fluctuations, we wish to use this data set as the basis for a realistic use case. In general, the required margin must be conservative, must take into account the OSNR fluctuations, and depends upon the needs of the OLS operators; to be agnostic with respect to these needs and to compare different prediction methods in a fair manner, we quantify an estimation of the average margin by calculating the root-mean-square (RMS) error, given by

(4) $${\rm RMS} = \sqrt {\frac{{\sum\nolimits_{i = 1}^D {{\left( {{\rm OSNR}_i^{\rm r} - {\rm OSNR}_i^{\rm p}} \right)}^2}}}{D}} ,$$

where $ {\rm OSNR}_i^{\rm r} $ and $ {\rm OSNR}_i^{\rm p} $ are the measured and predicted values of the CUT OSNR for the $ i $th spectral load, respectively, and $ D $ is the dimension of the test data set.

If nothing is known about the OSNR dependency upon frequency, the same OSNR threshold must be implemented for all channels, with a magnitude lower than the overall expected minimum. In this case, the $ {\rm OSNR}_i^{\rm p} $ are set to the constant OSNR threshold of 28.1 dB, producing an average margin of up to 2.28 dB over a set of realizations equivalent to our sample. Supposing the availability of stored data that describes the frequency-resolved OSNR response [option (3) in Section 2], one can reduce the margin by setting a minimum value for each channel that must lie beneath the respective minimum measurement (the continuous green line in Fig. 5). Although this solution is suboptimal, it is the best achievable result that is conservative and agnostic with regard to the specific spectral load configuration. This solution produces a limited improvement compared to the initial value of 2.28 dB, as the average margin would lie between 1.72 and 0.46 dB, depending upon the channel. This result can be further improved by characterizing the OSNR fluctuation dependency upon the specific spectral load configuration; as the user knows the number of on channels for a given spectral load, they can set the threshold as the minimum value of the OSNR measurement for the given $ {N_\textit{on}} $. This approach produces an RMS error which lies between 1.22 and 0.09 dB for the CUT (the worst-case scenario), as shown in Fig. 10. These improvements would reduce the margin in an effective manner; however, being highly dependent upon the sample features, their accuracy is limited by the statistical incidence of the sample over all possible realizations of the system. This means that having a reliable value for each channel may require considering a large number of instances.
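Given a table of measured OSNR values, the three margin baselines discussed in this section can be estimated with a few lines of numpy/pandas. The sketch below assumes a hypothetical DataFrame `df` with one row per configuration, a column `n_on`, and one OSNR column per channel (e.g., `ch_195.25` for the CUT); this is not the actual format of the experimental data set.

```python
import numpy as np
import pandas as pd

def rms_margin(osnr_real, osnr_pred):
    """Eq. (4): RMS difference between measured and predicted OSNR [dB]."""
    diff = np.asarray(osnr_real) - np.asarray(osnr_pred)
    return float(np.sqrt(np.mean(diff ** 2)))

def baseline_margins(df, cut="ch_195.25"):
    osnr_cut = df[cut]

    # (a) one flat threshold below the overall minimum over all channels/configurations
    flat_thr = df.drop(columns=["n_on"]).min().min()
    m_flat = rms_margin(osnr_cut, flat_thr)

    # (b) per-channel threshold: the minimum measured OSNR of the CUT itself
    m_channel = rms_margin(osnr_cut, osnr_cut.min())

    # (c) per-N_on threshold: the minimum measured CUT OSNR for each number of on channels
    thr_by_non = df.groupby("n_on")[cut].transform("min")
    m_non = rms_margin(osnr_cut, thr_by_non)

    return m_flat, m_channel, m_non
```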
In light of this, a ML approach appears to be an appropriate candidate to increase the accuracy of the OSNR predictions for a fixed sample size.

Fig. 10. RMS error for the worst-case scenario channel, with an increasing number of channels in the on state within the configuration, obtained considering the respective minimum measured OSNR value used as a margin threshold.

5. QOT-E BASED ON MACHINE LEARNING

The prediction of the OSNR based upon a specific spectral load configuration is an ideal scenario for ML, especially in a case where the OLS is treated as a black box, as ML is able to compensate for the lack of knowledge of the OLS parameters. In order to measure the enhancement obtained using a ML approach, we focus on the realistic scenario outlined at the end of Section 2. This work is far from an exhaustive description of ML applications; its goal is to achieve a better prediction of the OSNR using ML techniques in the scenario under investigation.

First, it is necessary to divide the measurement data set into training and testing sets. The former represents the stored data set on which the OLS controller can base the OSNR predictions for a LP that will be allocated to the CUT. The latter represents a set of real outcomes that can be used to validate the accuracy of a particular prediction method. To estimate this accuracy we use the RMS error, considering $ {\rm OSNR}_i^{\rm r} $ and $ {\rm OSNR}_i^{\rm p} $ as the measured and predicted values of the CUT OSNR, restricted to the test subset of the data set. Setting a constant $ {\rm OSNR}_i^{\rm p} $ for all $ i $, equal to the minimum measured value of the CUT OSNR, yields an RMS error of 1.63 dB over all the configurations in the test data set.

Following this, we take advantage of the well-known TensorFlow platform [28] to perform ML, adapting various high-level features of this platform according to our requirements. Before implementing a ML technique to predict the OSNR of an OLS, we first undertook preliminary investigations in order to probe whether a neural network or a linear regression model provides superior performance. As a result, we decided to utilize the DNN implemented in TensorFlow, which is a feed-forward multilayer (deep) neural network, because it outperforms a linear regression model in this scenario. We applied this DNN model to our data set, obtaining various levels of accuracy depending on the DNN network parameters. We characterized this DNN model utilizing a proximal Adagrad optimizer (again, implemented in TensorFlow [28]) with a fixed learning rate of 0.1 and a regularization strength of 0.001. Most importantly, we have tuned the number of hidden layers and nodes in order to achieve the best trade-off between precision and computational time. These two parameters are linked to the complexity of the DNN, which in turn is tied to the complexity of the problem to be solved. Although increasing the number of layers and nodes improves the accuracy of the DNN, raising these values also has an adverse effect on the computational time. In the end, we decided upon a DNN with three hidden layers containing 32 nodes each, taking approximately 8 min to train (using a machine with 32 GB of 2133 MHz RAM and an Intel Core i7 6700 3.4 GHz CPU), as increasing the DNN complexity does not further improve the accuracy of the OSNR estimations.
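A minimal sketch of a regressor with the quoted hyperparameters, written against the TensorFlow 1.x estimator API, is shown below. The batch size, the number of training steps, and the assumption that the 0.001 regularization strength enters as an L1 term are choices made here for illustration, not details given in the text.

```python
import tensorflow as tf   # TensorFlow 1.x style API

# Features: per-channel signal power and ASE noise measured with the CUT off;
# label: the CUT OSNR measured with the CUT on (see the data set preparation below).
feature_columns = [
    tf.feature_column.numeric_column("signal_power", shape=[35]),
    tf.feature_column.numeric_column("ase_noise", shape=[35]),
]

model = tf.estimator.DNNRegressor(
    hidden_units=[32, 32, 32],                      # three hidden layers, 32 nodes each
    feature_columns=feature_columns,
    optimizer=tf.train.ProximalAdagradOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.001))          # assumed to be the L1 term

def input_fn(signal, ase, osnr, training=True):
    ds = tf.data.Dataset.from_tensor_slices(
        ({"signal_power": signal, "ase_noise": ase}, osnr))
    if training:
        ds = ds.shuffle(5000).repeat()
    return ds.batch(64)

# model.train(input_fn=lambda: input_fn(sig_train, ase_train, y_train), steps=20000)
# preds = model.predict(input_fn=lambda: input_fn(sig_test, ase_test, y_test, False))
```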
These quantities would change if we considered a system with a larger number of amplifiers, with the computation time increasing accordingly (a rough estimation obtained from our trials is that the computation time scales linearly with the number of nodes). Once the model has been trained, it can be validated and utilized for any possible spectral load configuration, within the overall investigated bandwidth, for the OLS under consideration.

A. Data Set Preparation

Considering a single CUT (with $ f = 195.25 \;{\rm THz} $), we selected 30% of the data set to be designated as the testing subset. Because the CUT is close to the spectral hole burning peak, this is a worst-case scenario for the OSNR fluctuations; therefore, lower prediction errors are expected for all other CUT selections. The testing subset was created by randomly choosing instances from the data set, with the only requirement being that the uniformity of the distribution with respect to the number of on channels in the configurations was preserved. This means that for each configuration subset with a given $ {N_\textit{on}} $ we select 30% to be in the test data set.

DNN training and prediction processes require the definition of features and labels, which indicate system inputs and outputs, respectively. As outlined in the previous section, the uncertainty of the system can be divided into the variances of the received signal power and of the ASE noise. Therefore, we consider these two quantities as independent inputs of the system and set them as the DNN features. Correspondingly, the OSNR is the only system output under investigation and so is set as the DNN label. In order to properly address the aforementioned realistic scenario, the DNN features correspond to the quantities measured when the CUT is off, whereas the labels correspond to the CUT OSNR when the CUT is in the on state. As a consequence of this restriction, the final data set composed of the training and testing subsets is half the size of the original data set.

Fig. 11. Comparison of the OSNR distributions of the DNN guesses and the measured values, respectively.

Fig. 12. Comparison of the OSNR averages of the DNN guesses and the measured values, respectively, presented in terms of the number of channels, $ {N_\textit{on}} $. With the error bars we indicate the RMS error.

B. Results and Comments

In Fig. 11, we show the distributions of the measured OSNR for the CUT and the predictions of the DNN over the test data set. This figure highlights how the DNN predictions closely resemble the measured OSNR values, having a similar mean, $\mu$, range, and standard deviation, $ \sigma $. An average margin of 0.15 dB is obtained through this DNN estimation of the CUT OSNR, a significant improvement with respect to the solutions presented at the end of Section 4. To properly frame these results in the realistic use-case scenario, it must be underlined that despite the DNN providing a high level of accuracy, it may make predictions that are not conservative. For example, in this case 38% of the predictions are greater than the real values, even if the majority exceed them only by a marginal amount. This percentage of nonconservative predictions may be reduced by shifting the OSNR estimations of the DNN by a fixed amount.
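One simple way such a fixed shift could be chosen on the test set is sketched below, assuming arrays `y_meas` and `y_pred` holding the measured and DNN-predicted CUT OSNR in dB; the quantile-based rule is an assumption of this sketch, not necessarily the procedure used here.

```python
import numpy as np

def conservative_shift(y_meas, y_pred, target_fraction=0.06):
    """Smallest downward shift such that at most `target_fraction` of the
    shifted predictions still exceed the measured values."""
    y_meas = np.asarray(y_meas, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # A prediction is nonconservative when it is larger than the measurement;
    # shifting all predictions down by the (1 - target) quantile of
    # (y_pred - y_meas) leaves roughly `target_fraction` of them nonconservative.
    shift = max(np.quantile(y_pred - y_meas, 1.0 - target_fraction), 0.0)
    y_shifted = y_pred - shift
    rms = float(np.sqrt(np.mean((y_meas - y_shifted) ** 2)))
    frac_nonconservative = float(np.mean(y_shifted > y_meas))
    return shift, rms, frac_nonconservative
```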
For example, to reach a scenario where less than 6% of the predictions are nonconservative, the DNN estimations must be shifted by 0.2 dB, giving an RMS error of 0.27 dB, which remains a significant improvement over the initial average margin estimations. Furthermore, it should be stressed that the data set used in this work contains fewer configurations where a small number of channels are on, as is visible in Fig. 12. The result is that these scenarios are underrepresented in the training data set, causing the accuracy of the DNN predictions to be lower when $ {N_\textit{on}} \lt 10 $; ensuring that these cases are represented equally would reduce the overall RMS error. Additionally, Fig. 12 reveals that all nonconservative cases in this investigation occurred when $ {N_\textit{on}} \le 10 $, further stressing that the criticalities of the DNN prediction depend upon the statistical incidence of the sample over all possible realizations. In light of these results, a ML approach exhibits promising accuracy, and it seems that, with further, more in-depth parameter selection and training, the DNN may eventually lead to an OSNR margin estimation that approaches zero, at least for similar use cases.

6. CONCLUSION

In this work we have addressed the system margin minimization enabled by a reliable prediction of the QoT given by the GSNR. The main idea of our approach is that, in order to obtain the best estimation of the GSNR, this QoT-E must be separated into OSNR and nonlinear SNR components. In fact, because of the inaccuracy of the parameters and the software-defined EDFA behavior, the former cannot be analytically estimated in an accurate way, and so requires an adaptive approach. We focus on predicting the OSNR component of the GSNR, as opposed to the nonlinear SNR, as this term is both the most dominant and the most affected by uncertainties. We propose a ML approach to estimate the OSNR response over distinct spectral load configurations, leaving the estimation of the nonlinear SNR to an analytical model that may give a fast and accurate prediction once the actual signal spectral powers are known. We supposed an agnostic use of the OLS by operating the EDFAs as black boxes that set the nominal gain and by relying only on data from the OCM to predict the spectrally resolved GSNR. Experimentally, we obtained a data set from an OLS containing a cascade of 11 pairs of EDFAs and VOAs; we utilize the attenuators in place of the fiber in order to avoid any NLI generation and to focus our investigation only on the prediction of the OSNR. We consider a realistic scenario where an OLS controller wishes to predict the OSNR of a LP over the CUT, given an existing spectral load. Supposing the availability of previously measured OSNR outputs, we give different predictions with different levels of accuracy by considering different degrees of OLS behavior awareness. First, we show that, without any specific knowledge of the OLS or of the OSNR fluctuations, deploying the minimum required conservative threshold produces an average margin of 2.28 dB. Next, by considering the minimum measurements for each channel as an OSNR threshold, we evaluate a varying average margin that lies between 1.72 and 0.46 dB, depending upon the channel under consideration. This result can be further improved by assuming that $ {N_\textit{on}} $ is known, allowing the OSNR threshold to be set to the minimum value that has been measured within the respective set of configurations.
An average margin between 1.22 and 0.09 dB is found in this case, which, nevertheless, is not reliable, as it depends strongly upon the statistical incidence of the analyzed sample over all possible realizations of the system. Finally, we demonstrate that DNN ML techniques from the TensorFlow platform enable an accurate OSNR estimation with an RMS error of 0.15 dB over the CUT, which represents the worst-case scenario. By applying a rigid shift to the DNN predictions, it is possible to guarantee a requested conservative percentage threshold, at the cost of slightly decreasing the DNN accuracy. For example, introducing a shift of 0.2 dB to the DNN estimations produces a result where 94% of the predictions are fully conservative and gives a reasonable RMS error of 0.27 dB. To conclude, future analyses performed by also including telemetry data from the EDFAs may yield a further reduction in the residual uncertainty, consequently reducing the required system margin. Furthermore, a future investigation could exploit a ML algorithm that, during the training stage, penalizes prediction values that are higher than the measured values, obtaining a model that is predisposed to conservative predictions and ensuring that the model maintains reliability with high accuracy.

H2020 Marie Skłodowska-Curie Actions (814276).

The authors would like to thank Alessio Ferrari and Dr. Mattia Cantono for their fruitful suggestions.

1. "Cisco Visual Networking Index: Forecast and Trends, 2017–2022," Cisco White Paper (2017), https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html.
2. V. Curri, M. Cantono, and R. Gaudino, "Elastic all-optical networks: a new paradigm enabled by the physical layer. How to optimize network performances?" J. Lightwave Technol. 35, 1211–1221 (2017).
3. D. J. Ives, P. Bayvel, and S. J. Savory, "Routing, modulation, spectrum and launch power assignment to maximize the traffic throughput of a nonlinear optical mesh network," Photon. Netw. Commun. 29, 244–256 (2015).
4. Y. Pointurier, J.-L. Augé, M. Birk, and E. Varvarigos, "Introduction to the JOCN special issue on low-margin optical networks: publisher's note," J. Opt. Commun. Netw. 11, 598 (2019).
5. Y. Pointurier, "Design of low-margin optical networks," J. Opt. Commun. Netw. 9, A9–A17 (2017).
6. D. W. Boertjes, M. Reimer, and D. Côté, "Practical considerations for near-zero margin network design and deployment," J. Opt. Commun. Netw. 11, C25–C34 (2019).
7. V. Curri, A. Carena, A. Arduino, G. Bosco, P. Poggiolini, A. Nespola, and F. Forghieri, "Design strategies and merit of system parameters for uniform uncompensated links supporting Nyquist-WDM transmission," J. Lightwave Technol. 33, 3921–3932 (2015).
8. R. Pastorelli, "Network optimization strategies and control plane impacts," in Optical Fiber Communication Conference (OSA, 2015).
9. M. Filer, M. Cantono, A. Ferrari, G. Grammel, G. Galimberti, and V. Curri, "Multi-vendor experimental validation of an open source QoT estimator for optical networks," J. Lightwave Technol. 36, 3073–3082 (2018).
10. A. Bononi, P. Serena, and N. Rossi, "Nonlinear signal–noise interactions in dispersion-managed links with various modulation formats," Opt. Fiber Technol. 16, 73–85 (2010).
Forghieri, "Impact of low-OSNR operation on the performance of advanced coherent optical transmission systems," in The European Conference on Optical Communication (ECOC) (IEEE, 2014), pp. 1–3. 12. A. Ferrari, G. Borraccini, and V. Curri, "Observing the generalized SNR statistics induced by gain/loss uncertainties," in European Conference on Optical Communication (ECOC) (IEEE, 2019). 13. B. Taylor, G. Goldfarb, S. Bandyopadhyay, V. Curri, and H.-J. Schmidtke, "Towards a route planning tool for open optical networks in the telecom infrastructure project," in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference (2018). 14. M. Bolshtyansky, "Spectral hole burning in erbium-doped fiber amplifiers," J. Lightwave Technol.21, 1032–1038 (2003). [CrossRef] 15. G. Grammel, V. Curri, and J. L. Auge, "Physical simulation environment of the telecommunications infrastructure project (TIP)," in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference (2018). 16. M. Freire, S. Mansfeld, D. Amar, F. Gillet, A. Lavignotte, and C. Lepers, "Predicting optical power excursions in erbium doped fiber amplifiers using neural networks," in Asia Communications and Photonics Conference (ACP) (IEEE, 2018), pp. 1–3. 17. J. Thrane, J. Wass, M. Piels, J. C. Diniz, R. Jones, and D. Zibar, "Machine learning techniques for optical performance monitoring from directly detected PDM-QAM signals," J. Lightwave Technol.35, 868–875 (2017). [CrossRef] 18. F. N. Khan, C. Lu, and A. P. T. Lau, "Optical performance monitoring in fiber-optic networks enabled by machine learning techniques," in Optical Fiber Communication Conference and Exposition (OFC) (IEEE, 2018), pp. 1–3. 19. L. Barletta, A. Giusti, C. Rottondi, and M. Tornatore, "QoT estimation for unestablished lighpaths using machine learning," in Optical Fiber Communication Conference (Optical Society of America, 2017), paper Th1J–1. 20. I. Sartzetakis, K. K. Christodoulopoulos, and E. M. Varvarigos, "Accurate quality of transmission estimation with machine learning," J. Opt. Commun. Netw.11, 140–150 (2019). [CrossRef] 21. W. Mo, Y.-K. Huang, S. Zhang, E. Ip, D. C. Kilper, Y. Aono, and T. Tajima, "ANN-based transfer learning for QoT prediction in real-time mixed line-rate systems," in Optical Fiber Communication Conference and Exposition (OFC) (IEEE, 2018), pp. 1–3. 22. C. Rottondi, L. Barletta, A. Giusti, and M. Tornatore, "Machine-learning method for quality of transmission prediction of unestablished lightpaths," J. Opt. Commun. Netw.10, A286–A297 (2018). [CrossRef] 23. J. Mata, I. De Miguel, R. J. Duran, N. Merayo, S. K. Singh, A. Jukan, and M. Chamania, "Artificial intelligence (AI) methods in optical networks: a comprehensive survey," Opt. Switching Netw.28, 43–57 (2018). [CrossRef] 24. S. Zhu, C. L. Gutterman, W. Mo, Y. Li, G. Zussman, and D. C. Kilper, "Machine learning based prediction of erbium-doped fiber WDM line amplifier gain spectra," in European Conference on Optical Communication (ECOC) (IEEE, 2018), pp. 1–3. 25. C. L. Gutterman, W. Mo, S. Zhu, Y. Li, D. C. Kilper, and G. Zussman, "Neural network based wavelength assignment in optical switching," in Proceedings of the Workshop on Big Data Analytics and Machine Learning for Data Communication Networks (ACM, 2017), pp. 37–42. 26. A. Mahajan, K. Christodoulopoulos, R. Martinez, S. Spadaro, and R. 
Munoz, "Machine learning assisted EFDA gain ripple modelling for accurate QoT estimation," in European Conference on Optical Communication (ECOC) (IEEE, 2019). 27. M. Ionescu, "Machine learning for ultrawide bandwidth amplifier configuration," in 21st International Conference on Transparent Optical Networks (ICTON) (IEEE, 2019). 28. https://www.tensorflow.org/. 29. https://www.itu.int/rec/T-REC-G.694.1/en. 30. D. J. Elson, G. Saavedra, K. Shi, D. Semrau, L. Galdino, R. Killey, B. C. Thomsen, and P. Bayvel, "Investigation of bandwidth loading in optical fibre transmission using amplified spontaneous emission noise," Opt. Express25, 19529–19537 (2017). [CrossRef] 31. A. Nespola, S. Straullu, A. Carena, G. Bosco, R. Cigliutti, V. Curri, P. Poggiolini, M. Hirano, Y. Yamamoto, T. Sasaki, J. Bauwelinck, K. Verheyen, and F. Forghieri, "GN-model validation over seven fiber types in uncompensated PM-16QAM Nyquist-WDM links," IEEE Photon. Technol. Lett.26, 206–209 (2014). [CrossRef] 32. D. Pilori, F. Forghieri, and G. Bosco, "Residual non-linear phase noise in probabilistically shaped 64-QAM optical links," in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference (2018). 33. R.-J. Essiambre and R. W. Tkach, "Capacity trends and limits of optical communication networks," Proc. IEEE100, 1035–1055 (2012). [CrossRef] 34. A. Carena, V. Curri, G. Bosco, P. Poggiolini, and F. Forghieri, "Modeling of the impact of nonlinear propagation effects in uncompensated optical coherent transmission links," J. Lightwave Technol.30, 1524–1539 (2012). [CrossRef] 35. M. Cantono, D. Pilori, A. Ferrari, C. Catanese, J. Thouras, J. L. Auge, and V. Curri, "On the interplay of nonlinear interference generation with stimulated Raman scattering for QoT estimation," J. Lightwave Technol.36, 3131–3141 (2018). [CrossRef] 36. R. Dar, M. Feder, A. Mecozzi, and M. Shtaif, "Properties of nonlinear noise in long, dispersion-uncompensated fiber links," Opt. Express21, 25685–25699 (2013). [CrossRef] 37. https://www.cisco.com/c/en/us/products/collateral/optical-networking/ons-15454-series-multiservice-transport-platforms/data_sheet_c78-658542.html. "Cisco Visual Networking Index: Forecast and Trends, 2017–2022," Cisco White Paper (2017), https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html . V. Curri, M. Cantono, and R. Gaudino, "Elastic all-optical networks: a new paradigm enabled by the physical layer. How to optimize network performances?" J. Lightwave Technol. 35, 1211–1221 (2017). D. J. Ives, P. Bayvel, and S. J. Savory, "Routing, modulation, spectrum and launch power assignment to maximize the traffic throughput of a nonlinear optical mesh network," Photon. Netw. Commun. 29, 244–256 (2015). Y. Pointurier, J.-L. Augé, M. Birk, and E. Varvarigos, "Introduction to the JOCN special issue on low-margin optical networks: publisher's note," J. Opt. Commun. Netw. 11, 598 (2019). Y. Pointurier, "Design of low-margin optical networks," J. Opt. Commun. Netw. 9, A9–A17 (2017). D. W. Boertjes, M. Reimer, and D. Côté, "Practical considerations for near-zero margin network design and deployment," J. Opt. Commun. Netw. 11, C25–C34 (2019). V. Curri, A. Carena, A. Arduino, G. Bosco, P. Poggiolini, A. Nespola, and F. Forghieri, "Design strategies and merit of system parameters for uniform uncompensated links supporting Nyquist-WDM transmission," J. Lightwave Technol. 33, 3921–3932 (2015). R. 
Pastorelli, "Network optimization strategies and control plane impacts," in Optical Fiber Communication Conference (OSA, 2015). M. Filer, M. Cantono, A. Ferrari, G. Grammel, G. Galimberti, and V. Curri, "Multi-vendor experimental validation of an open source QoT estimator for optical networks," J. Lightwave Technol. 36, 3073–3082 (2018). A. Bononi, P. Serena, and N. Rossi, "Nonlinear signal–noise interactions in dispersion-managed links with various modulation formats," Opt. Fiber Technol. 16, 73–85 (2010). P. Poggiolini, A. Carena, Y. Jiang, G. Bosco, V. Curri, and F. Forghieri, "Impact of low-OSNR operation on the performance of advanced coherent optical transmission systems," in The European Conference on Optical Communication (ECOC) (IEEE, 2014), pp. 1–3. A. Ferrari, G. Borraccini, and V. Curri, "Observing the generalized SNR statistics induced by gain/loss uncertainties," in European Conference on Optical Communication (ECOC) (IEEE, 2019). B. Taylor, G. Goldfarb, S. Bandyopadhyay, V. Curri, and H.-J. Schmidtke, "Towards a route planning tool for open optical networks in the telecom infrastructure project," in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference (2018). M. Bolshtyansky, "Spectral hole burning in erbium-doped fiber amplifiers," J. Lightwave Technol. 21, 1032–1038 (2003). G. Grammel, V. Curri, and J. L. Auge, "Physical simulation environment of the telecommunications infrastructure project (TIP)," in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference (2018). M. Freire, S. Mansfeld, D. Amar, F. Gillet, A. Lavignotte, and C. Lepers, "Predicting optical power excursions in erbium doped fiber amplifiers using neural networks," in Asia Communications and Photonics Conference (ACP) (IEEE, 2018), pp. 1–3. J. Thrane, J. Wass, M. Piels, J. C. Diniz, R. Jones, and D. Zibar, "Machine learning techniques for optical performance monitoring from directly detected PDM-QAM signals," J. Lightwave Technol. 35, 868–875 (2017). F. N. Khan, C. Lu, and A. P. T. Lau, "Optical performance monitoring in fiber-optic networks enabled by machine learning techniques," in Optical Fiber Communication Conference and Exposition (OFC) (IEEE, 2018), pp. 1–3. L. Barletta, A. Giusti, C. Rottondi, and M. Tornatore, "QoT estimation for unestablished lighpaths using machine learning," in Optical Fiber Communication Conference (Optical Society of America, 2017), paper Th1J–1. I. Sartzetakis, K. K. Christodoulopoulos, and E. M. Varvarigos, "Accurate quality of transmission estimation with machine learning," J. Opt. Commun. Netw. 11, 140–150 (2019). W. Mo, Y.-K. Huang, S. Zhang, E. Ip, D. C. Kilper, Y. Aono, and T. Tajima, "ANN-based transfer learning for QoT prediction in real-time mixed line-rate systems," in Optical Fiber Communication Conference and Exposition (OFC) (IEEE, 2018), pp. 1–3. C. Rottondi, L. Barletta, A. Giusti, and M. Tornatore, "Machine-learning method for quality of transmission prediction of unestablished lightpaths," J. Opt. Commun. Netw. 10, A286–A297 (2018). J. Mata, I. De Miguel, R. J. Duran, N. Merayo, S. K. Singh, A. Jukan, and M. Chamania, "Artificial intelligence (AI) methods in optical networks: a comprehensive survey," Opt. Switching Netw. 28, 43–57 (2018). S. Zhu, C. L. Gutterman, W. Mo, Y. Li, G. Zussman, and D. C. Kilper, "Machine learning based prediction of erbium-doped fiber WDM line amplifier gain spectra," in European Conference on Optical Communication (ECOC) (IEEE, 2018), pp. 1–3. C. L. Gutterman, W. 
Borel theorem

2010 Mathematics Subject Classification: Primary: 26E10, 34E05 Secondary: 30E15

A class of theorems guaranteeing existence of a smooth function with any preassigned (possibly divergent) Taylor series, including statements for complex functions defined in sectorial domains.

Real version

For any collection of real numbers $\{c_\alpha:\ \alpha\in\Z_+^n\}$ labeled by multi-indices there exists a $C^\infty$-smooth function $f:(\R^n,0)\to\R$ such that $c_\alpha=\frac1{\alpha!}\partial^\alpha f(0)$. In other words, any formal series $\sum_{|\alpha|\ge 0} c_\alpha x^\alpha\in\R[[x_1,\dots,x_n]]$ is the Taylor series of a $C^\infty$-smooth function defined in an open neighborhood of the origin. In this form the Borel theorem is a particular case of the Whitney extension theorem, see [N].

Complex version

Let $S\subset(\C,0)$ be an open sector $\{0<|z|<\rho,\ \theta_-<\arg z <\theta_+\}$ with the opening angle $\theta_+-\theta_-$ less than $2\pi$ on the complex plane with the vertex at the origin, and let $\{c_k:\ k=0,1,2,\dots\}$ be a sequence of complex numbers. Then there exists a function $f$ holomorphic in $S$, for which the formal series $\sum c_k z^k$ is an asymptotic series:
$$ \forall m\in\N\quad \lim_{z\to 0} \frac1{z^m}\Big(f(z)-\sum_{k=0}^m c_k z^k\Big)=0\qquad\text{as }z\to 0,\ z\in S. $$
This theorem is also referred to as the Borel-Ritt theorem, see [W, Sect. 9]. One can consider also sectors with opening larger than $2\pi$, but only on the suitable Riemann surface.

Remark

A (single-valued) function defined in a punctured neighborhood of the origin and admitting an asymptotic series is necessarily holomorphic, so its asymptotic series must converge.

Multidimensional version

The analog of the Borel-Ritt theorem is valid also for formal series in several variables: any such series can be realized as an asymptotic series for a suitable function of several complex variables, holomorphic in a proper polysector $\{z\in\C^n:\ \theta_{i-}<\arg z_i<\theta_{i+},\ i=1,\dots,n,\ 0<|z|<\rho\}$, provided that $\theta_{i+}-\theta_{i-}<2\pi$. See [R].

References

[W] Wasow, W., Asymptotic expansions for ordinary differential equations, Dover Publications, Inc., New York (1987). MR0919406 Zbl 0644.34003
[N] Narasimhan, R., Analysis on real and complex manifolds, North-Holland Mathematical Library, 35, North-Holland Publishing Co., Amsterdam (1985). MR0832683 Zbl 0583.58001
[R] Ramis, J.-P., À propos du théorème de Borel-Ritt à plusieurs variables, Lecture Notes in Math., 712, Équations différentielles et systèmes de Pfaff dans le champ complexe, Sem., Inst. Rech. Math. Avancée, Strasbourg, 1975, pp. 289–292, Springer, Berlin (1979). MR0548148 Zbl 0455.35036

Borel theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Borel_theorem&oldid=30977
String-Math 2016, Collège de France, Paris, 27 June 2016 to 2 July 2016. Amphithéâtre Marguerite de Navarre, 11, place Marcelin-Berthelot, 75231 Paris cedex.

1. Framed BPS States In Two And Four Dimensions
Gregory Moore (Rutgers University)
This talk has four parts. Part one reviews the derivation of the Kontsevich-Soibelman wall-crossing formula for BPS degeneracies in four-dimensional theories with N=2 supersymmetry using framed BPS states. This follows the papers [ArXiv:1006.0146,1008.0030]. I might also mention briefly a possible...

6. Exact WKB analysis, cluster algebras and Painlevé equations
Kohei Iwaki (University of Nagoya)
In the first part of the talk I'll describe a joint work with Tomoki Nakanishi and explain how the Voros symbols in exact WKB analysis realize (generalized) cluster variables. In the second part I'll generalize the notion of the Voros symbols to the Painlevé equations, and discuss their applications.

5. Reduction for SL(3) pre-buildings
Carlos Simpson (Université de Nice - Sophia Antipolis)
We discuss some aspects of the reduction process leading to a pre-building associated to an SL(3) spectral curve. This construction is related to harmonic maps and the WKB problem, and has potential applications to the construction of stability conditions. This is joint work with Katzarkov, Noll and Pandit [arXiv:1503.00989], see also [arXiv:1311.7101]

2. Meromorphic connections and quivers
Daisuke Yamakawa (Tokyo Institute of Technology)
In this talk I will review recent developments on the relationship between meromorphic connections on the Riemann sphere and quivers. Such relationship was first found by Crawley-Boevey in the case of logarithmic connections. He used it to solve the additive Deligne-Simpson problem, a sort of existence problem on logarithmic connections. I will explain the generalization of Crawley-Boevey's...

7. SUSY field theories and geometric Langlands: The other side of the coin
Joerg Teschner (DESY, Hamburg)
Inclusion of surface operators leads to interesting generalisations of the correspondence discovered by Alday, Gaiotto and Tachikawa between four-dimensional N=2 SUSY field theories and conformal field theory. My goal will be to outline how this generalisation is related to the geometric Langlands correspondence and to a certain quantum generalisation of this correspondence. The resulting...

8. Fredholm determinant and Nekrasov type representations for isomonodromic tau functions
Oleg Lisovyy (LMPT, Tours)
We will derive Fredholm determinant representation for isomonodromic tau functions of Fuchsian systems with $n$ regular singular points on the Riemann sphere and generic monodromy in $\mathrm{GL}(N,C)$. The corresponding operator acts in the direct sum of $N(n-3)$ copies of $L^2(S^1)$. Its kernel is expressed in terms of fundamental solutions of $n-2$ elementary 3-point Fuchsian systems...

9. Elliptically fibered Calabi-Yau threefolds: mirror symmetry and Jacobi forms
Sheldon Katz (University of Illinois at Urbana-Champaign)
I explain an ansatz for the partition function of elliptically fibered Calabi-Yau threefolds in terms of Jacobi forms using a combination of B-model, homological mirror symmetry, and geometric techniques.
This talk is based on joint work with Minxin Huang and Albrecht Klemm appearing in [arXiv:1501.04891] as well as work in progress.

10. Derivation of modular anomaly equation in compact elliptic Calabi-Yau spaces
Minxin Huang (University of Science and Technology of China)
Modular anomalies have been discovered in topological string theory on elliptic Calabi-Yau spaces. We extend the derivation of the genus zero anomaly equation for non-compact cases in the literature to compact cases. For higher genus, we derive the modular anomaly equation from the BCOV holomorphic anomaly equation. Based on [arXiv:1501.04891].

11. Quantized Coulomb branches of 3d N=4 gauge theories and difference operators
Hiraku Nakajima (RIMS, Kyoto)
In [arXiv:1503.03676,1601.03586] (with Braverman and Finkelberg), I have proposed a mathematical approach to define Coulomb branches of 3d N=4 SUSY gauge theories. It is based on the homology group of a certain moduli space, and has a natural quantization by the equivariant homology group. For a quiver gauge theory, the quantized Coulomb branch has an embedding into the ring of difference...

12. 3D supersymmetric gauge theories and Hilbert series
Stefano Cremonesi (King's College, London)
The Hilbert series is a generating function that enumerates gauge invariant chiral operators of a supersymmetric field theory with four supercharges and an R-symmetry. In this talk I will explain how the counting of dressed 't Hooft monopole operators leads to a formula for the Hilbert series of a 3d N=2 gauge theory, which captures precious information about the chiral ring and the geometry...

13. Cohomological Hall algebra actions and Kac polynomials
Olivier Schiffmann (Université de Paris-Sud, Orsay)
We consider cohomological Hall algebras associated to quivers and their actions on the cohomology of Nakajima varieties; we relate these algebras with the Yangians constructed by Maulik and Okounkov, and show that their Hilbert series are encoded by the Kac polynomials of the underlying quiver. For instance, for the 1-loop quiver, one obtains the Yangian of $\widehat{gl(1)}$ relevant in...

14. Plane partitions and W algebras
Mikhail Bershtein (Landau Institute, Moscow)
I will talk about a new example of W algebras depending on three integer numbers n,m,k. The category of representations of such algebras is equivalent (similar to the Drinfeld–Kohno or Kazhdan–Lusztig theorem) to the category of representations of the product of three quantum groups gl_{n|k}, gl_{k|m} and gl_{m|n}. Irreducible representations of these W algebras have a basis labeled by plane partitions with...

35. Spectral theory and topological strings
Marcos Marino (University of Geneva)
I present a conjectural correspondence between topological string theory on toric Calabi-Yau manifolds, and the spectral theory of certain trace class operators on the real line, in the spirit of large N dualities. The operators are obtained by quantization of the algebraic curves which define the mirror manifolds to the Calabi-Yau's. This conjecture can be regarded as a non-perturbative...

16. Monopoles, Vortices, and Vermas
Mathew Bullimore (Oxford)
In three-dimensional gauge theories, monopole operators create and destroy vortices. I will explore this idea in the context of three-dimensional gauge theories with N=4 supersymmetry in the presence of an omega background. This leads to a finite version of the AGT correspondence, involving an action of the quantized Coulomb branch on the equivariant cohomology of vortex moduli spaces. (Work...
17. Higgs branches, vertex operator algebras and modular differential equations
Leonardo Rastelli (Stony Brook University)
Any four-dimensional N=2 superconformal field theory (SCFT) admits a subsector of operators and observables isomorphic to a vertex operator algebra. After reviewing this correspondence (first identified in arXiv:1312.5344), I will aim to characterize the relationship between the Higgs branch of the SCFT (as an algebraic geometric object) and the associated vertex operator algebra. Our proposal...

18. What Chern-Simons theory assigns to a point?
André Henriques (Oxford and Utrecht University)
According to the cobordism hypothesis (proposed by Baez-Dolan, and proved by Lurie), an extended topological quantum field theory is fully determined by its value on the point. A natural question is then: does this classification theorem apply to the topological quantum field theories of physical interest? And if yes, what is then the value of those theories on a point (the latter will then...

19. Period integrals of algebraic manifolds and their differential equations
Shing-Tung Yau (Harvard)
Period integrals are transcendental objects that play a central role in the study of algebraic manifolds. They describe deformations of the manifold, among other things, and were originally studied by Euler, Gauss, and Riemann. In recent time, they also turn out to be very important in topological field theories, and in particular mirror symmetry. In this talk, we explain a recent method to...

20. Hexagons and 3-point functions
Benjamin Basso (ENS Paris)
I will present a framework for computing correlators of three single trace operators in planar N=4 SYM theory that uses hexagonal patches as building blocks. This approach allows one to exploit the integrability of the theory and derive all loop predictions for its structure constants. After presenting the main ideas and results, I will discuss recent perturbative tests and open problems....

21. A Vafa-Witten invariant for projective surfaces
Richard Thomas (Imperial College, London)
I will describe joint work with Yuuji Tanaka. We define a Vafa-Witten invariant for algebraic surfaces. For Fano and K3 surfaces, a standard vanishing theorem means it reduces to (roughly speaking) the Euler characteristic of the moduli space of sheaves on the surface. For general type surfaces there are other contributions, which we calculate.

22. Quantum Spectral Curve for AdS/CFT and its applications
Nikolai Gromov (King's College, London)

23. Moduli spaces of holomorphic and meromorphic differentials
Rahul Pandharipande (ETH Zurich)
I will discuss a new moduli space of holomorphic/meromorphic differentials on Riemann surfaces (joint work with G. Farkas) and propose connections between the fundamental class to Pixton's formulas and Witten's r-spin class (joint work with F. Janda, A. Pixton, and D. Zvonkine).

24. The Chern character of the Verlinde bundle
Dimitri Zvonkine (Université Pierre et Marie Curie, Paris)
The Verlinde bundle, or the bundle of conformal blocks, is a vector bundle whose rank is given by the well-known Verlinde formula. We will explain how Teleman's classification of semi-simple cohomological field theories allows one to find the Chern character of this vector bundle.

25. Correlation Functions in Superconformal Field Theories
Jaume Gomis (Perimeter Institute, Waterloo)
We discuss the exact computation of correlation functions of local operators in the Coulomb branch in four-dimensional N=2 superconformal field theories.
26. Conformal constraints on defects
Abhijit Gadde (IAS, Princeton)
I will explore the constraints imposed by conformal invariance on defects in a conformal field theory. The correlation function of a conformal defect with a bulk local operator is fixed by conformal invariance up to an overall constant. This gives rise to the notion of defect expansion, where the defect itself is expanded in terms of local operators. A correlator of two defect operators admits a...

27. Moduli spaces of curves with non-special divisors
Alexander Polischchuk (University of Oregon)
In this talk I will discuss the moduli spaces of pointed curves with possibly non-nodal singularities such that the marked points form a nonspecial ample divisor. I will show that such curves have natural projective embeddings, with a canonical choice of homogeneous coordinates up to rescaling. Using the Groebner bases technique, this leads to the identification of the moduli with the quotient of...

30. Higgs bundles, branes and applications
Laura Schaposnik (University of Illinois at Chicago)
We shall begin the talk by first introducing Higgs bundles for complex Lie groups and the associated Hitchin fibration, and recalling how to realize Langlands duality through spectral data. We will then look at a natural construction of families of subspaces which give different types of branes, and explain how the topology of some of these branes can be described by considering the spectral...

29. From the Hitchin component to opers
Olivia Dumitrescu (MPIM, Bonn)
Gaiotto's conjecture (2014) is a particular construction of opers from Higgs bundles in one Hitchin component. The conjecture has been recently solved by a joint paper of Dumitrescu, Fredrickson, Kydonakis, Mazzeo, Mulase, and Neitzke (2016). In this talk, I will present a holomorphic description of the limiting oper, and its geometry. The importance of this correspondence, in particular the...

28. Chern–Simons theory on S^3/G and topological strings
Gaétan Borot (MPIM, Bonn)
We study the matrix models representing (a piece of) the SU(N) Chern-Simons partition function on quotients of S^3 by a finite group of isometries (these are the spherical Seifert manifolds). We show 1) that these partition functions have 1/N asymptotic (as opposed to formal) expansion, which is computed by the topological recursion of Eynard and Orantin for a suitable spectral curve and 2) that...

31. Resurgence and exact quantization via holomorphic Floer cohomology
Maxim Kontsevich (IHES)
I will present a new perspective on Riemann-Hilbert correspondence and wall-crossing based on the considerations of Fukaya categories associated with a holomorphic symplectic manifold and a possibly singular analytic Lagrangian subvariety. This framework includes holonomic D-modules (for the case of cotangent bundles) on the same footing as q-difference equations.

32. Derived equivalences from a duality of non-abelian GLSM's
Jørgen Rennemo (Oxford)
Joint work with Ed Segal. Producing examples of non-isomorphic varieties X and Y with equivalent derived categories is in general hard. A technique involving LG models and variation of GIT stability has recently proved to be a powerful way of obtaining such examples. Kentaro Hori has proposed a duality between different non-abelian gauged linear sigma models. One consequence of this duality is...
33. Analytical Approaches to Coalescing Binary Black Holes
Thibault Damour (IHES, Bures-sur-Yvette)
The rationale for interpreting the recently announced events of the Laser Interferometer Gravitational-Wave Observatory (LIGO) as gravitational wave (GW) signals emitted during the coalescence of two black holes is the excellent match between these events and the corresponding theoretical predictions within General Relativity. We shall review the mix of analytical and numerical methods that...

36. Poisson-Lie T-duality
Pavol Severa (University of Geneva)
I will give a review of Poisson-Lie T-duality, which is a non-Abelian generalization of T-duality, and explain it in terms of Chern-Simons theory and its generalizations (AKSZ models) with appropriate boundary conditions. Based on [arXiv:1602.05126]

15. Two mathematical applications of little string theory
Mina Aganagic (UC Berkeley)
I will describe two mathematical applications of little string theory. The first leads to a variant of the AGT correspondence that relates q-deformed W-algebra conformal blocks to K-theoretic instanton counting. This correspondence can be proven for any simply laced Lie algebra. The second leads to a variant of the quantum Langlands correspondence which relates q-deformed conformal blocks of an affine...

34. Umbral symmetry groups and K3 CFTs
Sarah Harrison (Harvard)
Umbral moonshine is a connection between mock modular forms and discrete symmetry groups which arise as automorphisms of the Niemeier lattices, the 24-dimensional unimodular lattices labeled by their ADE root systems. The first example of Umbral moonshine was originally discovered by Eguchi, Ooguri, and Tachikawa when expanding the elliptic genus of a K3 surface into N=4 characters and seeing...

42. Geometric Langlands applications of boundary conditions for maximally supersymmetric Yang-Mills theory
Davide Gaiotto (Perimeter Institute, Waterloo)
I will discuss the properties of boundary conditions of maximally supersymmetric Yang-Mills theory compactified on a Riemann surface. Depending on the details of the compactification, this produces BAA branes (i.e. complex Lagrangian submanifolds) or BBB branes (i.e. hyper-holomorphic sheaves) for the two-dimensional sigma model in the Hitchin moduli space. I will discuss the map from...
Resultant

2010 Mathematics Subject Classification: Primary: 12-XX

The resultant of two polynomials $f(x)$ and $g(x)$ is the element of the field $Q$ defined by the formula
$$\def\a{{\alpha}}\def\b{{\beta}}R(f,g) = a_0^s b_0^n \prod_{i=1}^n\prod_{j=1}^s(\a_i-\b_j),\label{1}$$
where $Q$ is the splitting field of the polynomial $fg$ (cf. Splitting field of a polynomial), and $\a_i,\b_j$ are the roots (cf. Root) of the polynomials
$$f(x) = a_0x^n+a_1x^{n-1}+\cdots+a_n$$
and
$$g(x) = b_0x^s+b_1x^{s-1}+\cdots+b_s,$$
respectively. If $a_0b_0 \ne 0$, then the polynomials have a common root if and only if the resultant equals zero. The following equality holds:
$$R(g,f) = (-1)^{ns}R(f,g).$$
The resultant can be written in either of the following ways:
$$R(f,g) = a_0^s\prod_{i=1}^n g(\a_i),\label{2}$$
$$R(f,g) = (-1)^{ns}b_0^n\prod_{j=1}^s f(\b_j).\label{3}$$
The expressions (1)–(3) are inconvenient for computing the resultant, since they contain the roots of the polynomials. Using the coefficients of the polynomials, the resultant can be expressed as the determinant of the following matrix of order $n+s$:
$$\begin{pmatrix} a_0 & a_1 & \cdots & a_n & & \\ & a_0 & a_1 & \cdots & a_n & \\ & &\cdots&\cdots& &\\ & & a_0 & a_1 & \cdots & a_n \\ b_0 & b_1 & \cdots & b_s & & \\ & b_0 & b_1 & \cdots & b_s & \\ & &\cdots&\cdots& &\\ & & b_0 & b_1 & \cdots & b_s \\ \end{pmatrix}\label{4}$$
This matrix contains in the first $s$ rows the coefficients of the polynomial $f(x)$, in the last $n$ rows the coefficients of the polynomial $g(x)$, and in the free spaces there are zeros.
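The determinant form (4) lends itself directly to computation. The following sketch (our own illustration, not part of the encyclopedia article; it assumes real coefficients given as plain Python lists) builds the $(n+s)\times(n+s)$ matrix and cross-checks the result against the root-product definition (1):

# Minimal sketch (not from the article): evaluate the resultant as the
# determinant of the Sylvester matrix of formula (4), and compare with the
# root-product definition (1).
import numpy as np

def sylvester_matrix(a, b):
    """a = [a_0, ..., a_n], b = [b_0, ..., b_s]: coefficient lists of f and g."""
    n, s = len(a) - 1, len(b) - 1
    M = np.zeros((n + s, n + s))
    for i in range(s):                 # first s rows: shifted copies of f's coefficients
        M[i, i:i + n + 1] = a
    for j in range(n):                 # last n rows: shifted copies of g's coefficients
        M[s + j, j:j + s + 1] = b
    return M

def resultant(a, b):
    return np.linalg.det(sylvester_matrix(a, b))

f = [1, -3, 2]                         # f(x) = (x - 1)(x - 2)
g = [1, -1]                            # g(x) = x - 1, shares the root 1 with f
print(resultant(f, g))                 # ~ 0, as expected for a common root

# cross-check with (1): a_0^s * b_0^n * product of all root differences
alpha, beta = np.roots(f), np.roots(g)
print(f[0]**(len(g) - 1) * g[0]**(len(f) - 1)
      * np.prod([ai - bj for ai in alpha for bj in beta]))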
The resultant of two polynomials $f(x)$ and $g(x)$ with numerical coefficients can also be represented in the form of a determinant of order $n$ (or $s$). For this one has to find the remainders from the division of $x^kg(x)$ by $f(x)$, $k=0,\cdots,n-1$. Let these be
$$a_{k0}+ a_{k1}x+\cdots+a_{k,n-1}x^{n-1}.$$
Then
$$R(f,g) = a_0^s \det\begin{pmatrix} a_{00} & a_{01} & \cdots & a_{0,n-1}\\ a_{10} & a_{11} & \cdots & a_{1,n-1}\\ \vdots & \cdots & \cdots & \vdots \\ a_{n-1,0} & a_{n-1,1} & \cdots & a_{n-1,n-1}\\ \end{pmatrix}.$$
The discriminant $D(f)$ of the polynomial
$$f(x) = a_0x^n + a_1 x^{n-1} + \cdots + a_n, \quad a_0 \ne 0,$$
can be expressed by the resultant of the polynomial $f(x)$ and its derivative $f'(x)$ in the following way:
$$D(f) = (-1)^{n(n-1)/2} a_0^{-1} R(f,f').$$

==Application to solving a system of equations.==

Let there be given a system of two algebraic equations with coefficients from a field $P$:
$$f(x,y) = 0,\ g(x,y) = 0.\label{5}$$
The polynomials $f$ and $g$ are written as polynomials in $x$:
$$f(x,y) = a_0(y) x^k+ a_1(y)x^{k-1}+\cdots+a_k(y),$$
$$g(x,y) = b_0(y) x^l+ b_1(y)x^{l-1}+\cdots+b_l(y),$$
and according to formula (4) the resultant of these polynomials (as polynomials in $x$) is calculated. This yields a polynomial that depends only on $y$:
$$R(f,g) = F(y).$$
One says that the polynomial $F(y)$ is obtained by eliminating $x$ from the polynomials $f(x,y)$ and $g(x,y)$. If $\def\a{{\alpha}}\def\b{{\beta}} x=\a$ and $y=\b$ is a solution of the system (5), then $F(\b) = 0$, and, conversely, if $F(\b) = 0$, then either the polynomials $f(x,\b)$ and $g(x,\b)$ have a common root (which must be looked for among the roots of their greatest common divisor), or $a_0(\b) = b_0(\b) = 0$. Solving system (5) is thereby reduced to the computation of the roots of the polynomial $F(y)$ and of the common roots of the polynomials $f(x,\b)$ and $g(x,\b)$ in one indeterminate. By analogy, systems of equations with any number of unknowns can be solved; however, this problem leads to extremely cumbersome calculations (see also Elimination theory).
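As a hedged illustration of this elimination step (the system below is our own toy example, not one from the article), sympy's built-in resultant can be used to eliminate $x$ and reduce the system to a single polynomial in $y$:

# Illustration of elimination via the resultant (our own example system).
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1          # unit circle
g = x - y                    # line x = y

F = sp.resultant(f, g, x)    # eliminate x: F(y) = R(f, g) as polynomials in x
print(F)                     # 2*y**2 - 1

y_roots = sp.solve(F, y)     # candidate y-values of solutions of the system
solutions = [(sp.solve(g.subs(y, yb), x)[0], yb) for yb in y_roots]
print(solutions)             # the two intersection points of the circle and the line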
This article was adapted from an original article by I.V. Proskuryakov (originator), which appeared in the Encyclopedia of Mathematics (ISBN 1402006098).
Absolute value

In mathematics, the absolute value (or modulus) |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x (in which case −x is positive), and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.

Terminology and notation

In 1806, Jean-Robert Argand introduced the term module, meaning unit of measure in French, specifically for the complex absolute value,[1][2] and it was borrowed into English in 1866 as the Latin equivalent modulus.[1] The term absolute value has been used in this sense from at least 1806 in French[3] and 1857 in English.[4] The notation |x| was introduced by Karl Weierstrass in 1841.[5] Other names for absolute value include numerical value[1] and magnitude.[1] The same notation is used with sets to denote cardinality; the meaning depends on context.

Definition and properties

For any real number x the absolute value or modulus of x is denoted by |x| (a vertical bar on each side of the quantity) and is defined as[6]
$$|x|={\begin{cases}x,&{\mbox{if }}x\geq 0\\-x,&{\mbox{if }}x<0.\end{cases}}$$
As can be seen from the above definition, the absolute value of x is always either positive or zero, but never negative. From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of two real numbers is the distance between them. Indeed, the notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see "Distance" below).
Since the square root notation without sign represents the positive square root, it follows that
$$|a|={\sqrt {a^{2}}} \qquad (1)$$
which is sometimes used as a definition of absolute value of real numbers.[7]

The absolute value has the following four fundamental properties:

$|a|\geq 0$ (2) Non-negativity
$|a|=0\iff a=0$ (3) Positive-definiteness
$|ab|=|a||b|$ (4) Multiplicativeness
$|a+b|\leq |a|+|b|$ (5) Subadditivity

Other important properties of the absolute value include:

$|(|a|)|=|a|$ (6) Idempotence (the absolute value of the absolute value is the absolute value)
$|-a|=|a|$ (7) Evenness (reflection symmetry of the graph)
$|a-b|=0\iff a=b$ (8) Identity of indiscernibles (equivalent to positive-definiteness)
$|a-b|\leq |a-c|+|c-b|$ (9) Triangle inequality (equivalent to subadditivity)
$\left|{\frac {a}{b}}\right|={\frac {|a|}{|b|}}$ (if $b\neq 0$) (10) Preservation of division (equivalent to multiplicativeness)
$|a-b|\geq |(|a|-|b|)|$ (11) Reverse triangle inequality (equivalent to subadditivity)

Two other useful properties concerning inequalities are:

$|a|\leq b\iff -b\leq a\leq b$
$|a|\geq b\iff a\leq -b$ or $b\leq a$

These relations may be used to solve inequalities involving absolute values. For example:

$|x-3|\leq 9 \iff -9\leq x-3\leq 9 \iff -6\leq x\leq 12$

Absolute value is used to define the absolute difference, the standard metric on the real numbers.

The absolute value of a complex number $z$ is the distance $r$ from $z$ to the origin. It is also seen in the picture that $z$ and its complex conjugate $\bar z$ have the same absolute value.

Since the complex numbers are not ordered, the definition given above for the real absolute value cannot be directly generalised for a complex number. However, the geometric interpretation of the absolute value of a real number as its distance from 0 can be generalised. The absolute value of a complex number is defined as its distance in the complex plane from the origin using the Pythagorean theorem. More generally, the absolute value of the difference of two complex numbers is equal to the distance between those two complex numbers. For any complex number
$$z=x+iy,$$
where $x$ and $y$ are real numbers, the absolute value or modulus of $z$ is denoted $|z|$ and is given by[8]
$$|z|={\sqrt {x^{2}+y^{2}}}.$$
When the complex part $y$ is zero, this is the same as the absolute value of the real number $x$. When a complex number $z$ is expressed in polar form as $z=re^{i\theta }$ with $r\geq 0$ and $\theta$ real, its absolute value is $|z|=r$.
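As a quick sanity check of these properties and of the worked inequality (this snippet is our own addition, not part of the article), note that Python's built-in abs() implements |x| for real and complex arguments alike:

# Numerical spot-checks of properties (4), (5), (11) and of |x - 3| <= 9 (ours).
import math, random

a, b = random.uniform(-10, 10), random.uniform(-10, 10)
assert math.isclose(abs(a * b), abs(a) * abs(b))          # (4) multiplicativeness
assert abs(a + b) <= abs(a) + abs(b) + 1e-12              # (5) subadditivity
assert abs(a - b) >= abs(abs(a) - abs(b)) - 1e-12         # (11) reverse triangle inequality

# |x - 3| <= 9 holds exactly for x in [-6, 12]:
xs = [k * 0.5 for k in range(-40, 41)]                    # grid from -20 to 20
inside = [x for x in xs if abs(x - 3) <= 9]
print(inside[0], inside[-1])                              # -6.0 12.0

# complex modulus: |z| = sqrt(x^2 + y^2)
z = 3 + 4j
print(abs(z), math.hypot(z.real, z.imag))                 # 5.0 5.0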
The absolute value of a complex number can be written in the complex analogue of equation (1) above as:
$$|z|={\sqrt {z\cdot {\overline {z}}}}$$
where $\bar z$ is the complex conjugate of $z$. Notice that, contrary to equation (1):
$$|z|\neq {\sqrt {z^{2}}}.$$
The complex absolute value shares all the properties of the real absolute value given in equations (2)–(11) above. Since the positive reals form a subgroup of the complex numbers under multiplication, we may think of absolute value as an endomorphism of the multiplicative group of the complex numbers.[9]

Absolute value function

The graph of the absolute value function for real numbers
Composition of absolute value with a cubic function in different orders

The real absolute value function is continuous everywhere. It is differentiable everywhere except for $x = 0$. It is monotonically decreasing on the interval $(-\infty, 0]$ and monotonically increasing on the interval $[0, +\infty)$. Since a real number and its opposite have the same absolute value, it is an even function, and is hence not invertible. Both the real and complex functions are idempotent. It is a piecewise linear, convex function.

Relationship to the sign function

The absolute value function of a real number returns its value irrespective of its sign, whereas the sign (or signum) function returns a number's sign irrespective of its value. The following equations show the relationship between these two functions:
$$|x|=x\operatorname {sgn}(x),$$
$$|x|\operatorname {sgn}(x)=x,$$
and for $x \neq 0$,
$$\operatorname {sgn}(x)={\frac {|x|}{x}}.$$

Derivative

The real absolute value function has a derivative for every $x \neq 0$, but is not differentiable at $x = 0$. Its derivative for $x \neq 0$ is given by the step function[10][11]
$${\frac {d|x|}{dx}}={\frac {x}{|x|}}={\begin{cases}-1&x<0\\1&x>0.\end{cases}}$$
The subdifferential of $|x|$ at $x = 0$ is the interval $[-1, 1]$.[12] The complex absolute value function is continuous everywhere but complex differentiable nowhere because it violates the Cauchy–Riemann equations.[10] The second derivative of $|x|$ with respect to $x$ is zero everywhere except zero, where it does not exist. As a generalised function, the second derivative may be taken as two times the Dirac delta function.

Antiderivative

The antiderivative (indefinite integral) of the absolute value function is
$$\int |x|\,dx={\frac {x|x|}{2}}+C,$$
where $C$ is an arbitrary constant of integration.
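A small numeric check of the last two formulas (our own illustration, not part of the article), using finite differences on a grid that avoids the kink at 0:

# d|x|/dx equals sgn(x) away from 0, and x*|x|/2 differentiates back to |x| (ours).
import numpy as np

x = np.linspace(-3, 3, 601)                     # grid with step 0.01, includes 0
d_abs = np.gradient(np.abs(x), x)               # numerical derivative of |x|
mask = np.abs(x) > 0.05                         # stay away from the kink at 0
print(np.allclose(d_abs[mask], np.sign(x)[mask]))            # True
F = x * np.abs(x) / 2                           # candidate antiderivative
print(np.allclose(np.gradient(F, x), np.abs(x), atol=1e-2))  # True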
Distance

The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin, along the real number line, for real numbers, or in the complex plane, for complex numbers, and more generally, the absolute value of the difference of two real or complex numbers is the distance between them. The standard Euclidean distance between two points
$$a=(a_{1},a_{2},\dots ,a_{n}) \quad \text{and} \quad b=(b_{1},b_{2},\dots ,b_{n})$$
in Euclidean $n$-space is defined as:
$${\sqrt {\sum _{i=1}^{n}(a_{i}-b_{i})^{2}}}.$$
This can be seen to be a generalisation of $|a-b|$, since if $a$ and $b$ are real, then by equation (1),
$$|a-b|={\sqrt {(a-b)^{2}}}.$$
While if $a=a_{1}+ia_{2}$ and $b=b_{1}+ib_{2}$ are complex numbers, then
$$|a-b| =|(a_{1}+ia_{2})-(b_{1}+ib_{2})| =|(a_{1}-b_{1})+i(a_{2}-b_{2})| ={\sqrt {(a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}}}.$$
The above shows that the "absolute value" distance, for the real numbers or the complex numbers, agrees with the standard Euclidean distance they inherit as a result of considering them as the one- and two-dimensional Euclidean spaces respectively.

The properties of the absolute value of the difference of two real or complex numbers (non-negativity, identity of indiscernibles, symmetry and the triangle inequality given above) can be seen to motivate the more general notion of a distance function as follows: A real-valued function $d$ on a set $X \times X$ is called a metric (or a distance function) on $X$ if it satisfies the following four axioms:[13]

$d(a,b)\geq 0$ Non-negativity
$d(a,b)=0\iff a=b$ Identity of indiscernibles
$d(a,b)=d(b,a)$ Symmetry
$d(a,b)\leq d(a,c)+d(c,b)$ Triangle inequality

Generalizations

Ordered rings

The definition of absolute value given for real numbers above can be extended to any ordered ring. That is, if $a$ is an element of an ordered ring $R$, then the absolute value of $a$, denoted by $|a|$, is defined to be:[14]
$$|a|={\begin{cases}a,&{\mbox{if }}a\geq 0\\-a,&{\mbox{if }}a\leq 0\end{cases}}$$
where $-a$ is the additive inverse of $a$, and $0$ is the additive identity element.

Fields

The fundamental properties of the absolute value for real numbers given in (2)–(5) above can be used to generalise the notion of absolute value to an arbitrary field, as follows. A real-valued function $v$ on a field $F$ is called an absolute value (also a modulus, magnitude, value, or valuation)[15] if it satisfies the following four axioms:

$v(a)\geq 0$ Non-negativity
$v(a)=0\iff a=\mathbf {0}$ Positive-definiteness
$v(ab)=v(a)v(b)$ Multiplicativeness
$v(a+b)\leq v(a)+v(b)$ Subadditivity or the triangle inequality

where $\mathbf {0}$ denotes the additive identity element of $F$. It follows from positive-definiteness and multiplicativeness that $v(\mathbf {1})=1$, where $\mathbf {1}$ denotes the multiplicative identity element of $F$. The real and complex absolute values defined above are examples of absolute values for an arbitrary field.
If $v$ is an absolute value on $F$, then the function $d$ on $F \times F$, defined by $d(a, b) = v(a - b)$, is a metric and the following are equivalent:

$d$ satisfies the ultrametric inequality $d(x,y)\leq \max(d(x,z),d(y,z))$ for all $x$, $y$, $z$ in $F$.
$\{v(\sum _{k=1}^{n}\mathbf {1}) : n\in \mathbb {N}\}$ is bounded in $\mathbb {R}$.
$v(\sum _{k=1}^{n}\mathbf {1})\leq 1$ for every $n\in \mathbb {N}$.
$v(a)\leq 1\Rightarrow v(1+a)\leq 1$ for all $a\in F$.
$v(a+b)\leq \max \{v(a),v(b)\}$ for all $a,b\in F$.

An absolute value which satisfies any (hence all) of the above conditions is said to be non-Archimedean, otherwise it is said to be Archimedean.[16]

Vector spaces

Again the fundamental properties of the absolute value for real numbers can be used, with a slight modification, to generalise the notion to an arbitrary vector space. A real-valued function on a vector space $V$ over a field $F$, represented as $\|\cdot \|$, is called an absolute value, but more usually a norm, if it satisfies the following axioms: For all $a$ in $F$, and $\mathbf {v}$, $\mathbf {u}$ in $V$,

$\|\mathbf {v} \|\geq 0$ Non-negativity
$\|\mathbf {v} \|=0\iff \mathbf {v} =0$ Positive-definiteness
$\|a\mathbf {v} \|=|a|\|\mathbf {v} \|$ Positive homogeneity or positive scalability
$\|\mathbf {v} +\mathbf {u} \|\leq \|\mathbf {v} \|+\|\mathbf {u} \|$ Subadditivity or the triangle inequality

The norm of a vector is also called its length or magnitude. In the case of Euclidean space $\mathbb {R}^{n}$, the function defined by
$$\|(x_{1},x_{2},\dots ,x_{n})\|={\sqrt {\sum _{i=1}^{n}x_{i}^{2}}}$$
is a norm called the Euclidean norm. When the real numbers $\mathbb {R}$ are considered as the one-dimensional vector space $\mathbb {R}^{1}$, the absolute value is a norm, and is the $p$-norm (see Lp space) for any $p$. In fact the absolute value is the "only" norm on $\mathbb {R}^{1}$, in the sense that, for every norm $\|\cdot \|$ on $\mathbb {R}^{1}$, $\|x\| = \|1\| \cdot |x|$. The complex absolute value is a special case of the norm in an inner product space. It is identical to the Euclidean norm, if the complex plane is identified with the Euclidean plane $\mathbb {R}^{2}$.
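A one-line check (ours, not from the article) that the Euclidean norm reduces to the absolute value on $\mathbb {R}^{1}$ and to the complex modulus on $\mathbb {R}^{2}$:

# The Euclidean norm generalizes |.|: numpy's norm agrees with abs() (ours).
import numpy as np

print(np.linalg.norm([-7.0]), abs(-7.0))         # 7.0 7.0   (R^1: norm = |x|)
print(np.linalg.norm([3.0, 4.0]), abs(3 + 4j))   # 5.0 5.0   (R^2: norm = complex modulus)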
Notes

1. Oxford English Dictionary, Draft Revision, June 2008.
2. Nahin; O'Connor and Robertson; and functions.Wolfram.com; for the French sense, see Littré, 1877.
3. Lazare Nicolas M. Carnot, Mémoire sur la relation qui existe entre les distances respectives de cinq point quelconques pris dans l'espace, p. 105.
4. James Mill Peirce, A Text-book of Analytic Geometry. The oldest citation in the 2nd edition of the Oxford English Dictionary is from 1907. The term absolute value is also used in contrast to relative value.
5. Nicholas J. Higham, Handbook of Writing for the Mathematical Sciences, SIAM. ISBN 0-89871-420-6, p. 25.
6. Mendelson, p. 2.
7. p. A5 [citation details not preserved].
8. [citation details not preserved]
9. [citation details not preserved]
10. Weisstein, Eric W., "Absolute Value", MathWorld – A Wolfram Web Resource.
11. Bartle and Sherbert, p. 163.
12. Peter Wriggers, Panagiotis Panagiotopoulos, eds., New Developments in Contact Problems, 1999, ISBN 3-211-83154-1, pp. 31–32.
13. These axioms are not minimal; for instance, non-negativity can be derived from the other three: 0 = d(a, a) ≤ d(a, b) + d(b, a) = 2d(a, b).
14. Mac Lane, p. 264.
15. Schechter, p. 260. This meaning of valuation is rare. Usually, a valuation is the logarithm of the inverse of an absolute value.
16. Schechter, pp. 260–261.

References

Bartle, Robert G.; Sherbert, Donald R., Introduction to Real Analysis (4th ed.), John Wiley & Sons, 2011. ISBN 978-0-471-43331-6.
Nahin, Paul J., An Imaginary Tale, Princeton University Press (hardcover, 1998). ISBN 0-691-02795-1.
Mac Lane, Saunders; Birkhoff, Garrett, Algebra, American Mathematical Society, 1999. ISBN 978-0-8218-1646-2.
Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. ISBN 978-0-07-148754-2.
O'Connor, J.J. and Robertson, E.F., "Jean Robert Argand".
Schechter, Eric, Handbook of Analysis and Its Foundations, pp. 259–263, "Absolute Values", Academic Press (1997). ISBN 0-12-622760-8.
Weisstein, Eric W., "Absolute Value", MathWorld.
Is a definite integral just a summation?

I am learning about definite integrals and found the formula for finding an average of a function over a given interval: $$\frac{1}{b-a} \int_{a}^{b} f\left(x\right) dx$$ If we look at the average function for a set of numbers: $$\frac{1}{n} \sum_{i=1}^{n} x_i$$ It would seem as if the integral is essentially a summation of all values from $a$ to $b$. Is it correct to think of it this way?

calculus · definite-integrals · means | asked Jan 29 at 3:38 by MCMastery

Yeah; that's exactly how one would think of it. This becomes clearer if you look at the formal definition of an integral. – Hyperion Jan 29 at 3:44

Notice that the function values have been multiplied by $dx$, so $f(x)dx$ is the area of the elemental strip having thickness $dx$ and height $f(x)$. The integral can be treated as a summation of these differential areas. – Shubham Johri Jan 29 at 3:45

I would disagree that it's correct to think of it this way. The reason is that people, especially ones only beginning to do math, are in general very very very bad at having any intuition regarding infinities, both potential and actual. Thinking of an integral as a summation (a weighted summation, by the way) is inviting trouble once it doesn't quite behave as you think it should. And here by "correct" I mean "a good idea"; it's certainly not correct in a mathematical sense of the word. – DRF Jan 29 at 12:35

So bad there is no graph (currently) in this whole page of answers! It would help OP a lot to understand the concept... Or even better an animation with Riemann sums and the width of each rectangle going to $0$. (If I knew how to do one, I'd be happy to post it here, but I never explored how to plot animations easily.) – Basj Jan 29 at 21:33

It's certainly helpful to think of integrals this way, though not strictly correct. An integral takes all the little slivers of area beneath a curve and sums them up into a bigger area - though there's, of course, a lot of technicality needed to think about it this way. There is an idea being obscured by your idea of an integral as a sum, however. In truth, a sum is a kind of integral and not vice versa - this is somewhat counterintuitive given that sums are much more familiar objects than integrals, but there's an elegant theory known as Lebesgue integration which basically makes the integral a tool which eats two pieces of information:
- We start with some indexing set and some way to "weigh" pieces of that set.
- We have some function on that set.
Then, the Lebesgue integral spits out the weighted "sum" of that function. An integral in the most common sense arises when you say, "I have a function on the real numbers, and I want an interval to have weight equal to its length." A finite sum arises when you say "I have a function taking values $x_1,\ldots,x_n$ on the index set $\{1,\ldots,n\}$. Each index gets weight $1$" - and, of course, you can change those weights to get a weighted sum or you can extend the indexing set to every natural number to get an infinite sum. But, the basic thing to note is that there's a more general idea of integral that places summation as part of the theory of integration. – Milo Brandt

This answer is great, but I would posit that "a sum is a kind of integral and not vice versa" is arbitrary.
We have a general technique that includes both "standard summation" and "standard integration" as special cases, and we chose to call this generalization "Lebesgue integration", but there is no a priori mathematical reason why we couldn't have called this a "Lebesgue summation". – Mees de Vries Jan 29 at 10:32

Agree with Mees de Vries. The entire idea of integrals came about from taking a sum of areas of rectangles and then looking at what happens in the limit as the width of those rectangles went to zero. So, in one sense, integrals are exactly a sum - just one of infinitely many elements that are infinitely thin. – Shufflepants Jan 29 at 23:25

Yes, and no. In so much as the definite integral is a "sum", it is a limit of a Riemann sum as the "mesh" goes to zero. The "mesh" is the width of the largest part of a partition. That is, your domain is broken up into parts over which you take the average value of your function, and multiply by the width of that part. It's typically helpful to think about definite integrals as sums over "infinitesimal" displacements, but strictly speaking, that is incorrect. – Steven Thomas Hatton

In a way, yes. In both cases, what these two notions are expressing is the idea of accumulated change: you can think of it as the end result of a large number of changes to some quantity of interest that have built up over time (or along some dimension, more generally), whether those changes are positive or negative. In the case of the summation, the changes come in discrete parcels - think about, for example, regular withdrawals or deposits of money into a bank account, but of varying amounts. The summation from a starting time to an ending time of interest adds all those parcels together over a given interval and thus gives you the total change, say the total amount by which the money in your account changed over a year's worth of transactions. In the case of integration, the changes are steady - think of a smooth, continuous flow, like filling up a bucket with a stream of water from a hose, while we control the flow rate with the tap. The integral of the flow rate (how far open the tap is, effectively, or proportional thereto) from the starting time to the ending time equals the total amount of water we have added under that variable change. Of course, in integration we can also have negative changes, while a hose can only add water to a bucket. And moreover, this points the way to how we usually define the integral. If the rate of change - the amount of change per unit of time, e.g. kilograms of water coming out of the hose per minute - at a given time $t$ is $f(t)$, then we can approximate the amount of change, i.e. the number of kilograms delivered, over a suitably small time interval $\Delta t$ by $f(t)\ \Delta t$. For example, if $f(t)$ at some given point in time is 50 kilograms per minute, and the time step is 0.001 minute, then the small change is 0.05 kilograms of water added. We can add all these small changes up over a protracted interval to estimate the result of the continuously varying change; e.g. if we add those all up over 10 minutes, with no change in the rate, we will get 10,000 time steps times 0.05 kg, which equals 500 kg of water delivered. Of course, this is just the same as if we had multiplied, and thus it is in this case actually exact, but that's only because we did not vary the flow rate, for simplicity.
The exact integral when there is a variable rate of change results from taking the limit: the "idealized" value that this process approximates ever better - if it does - as we repeat it with $\Delta t$ taken ever smaller. We thus write $$\int_{t_a}^{t_b} f(t)\ dt = \lim_{\Delta t \rightarrow 0} \sum_j f(t_j)\ \Delta t$$ where $t_j = t_a + j(\Delta t)$ and $j$ ranges just high enough for the final point to be just below $t_b$. Although, to make the integral a bit more well-behaved for rougher functions than those we would obtain from a faucet, we like to also consider cases where the time steps are irregular instead of just regular steps of $\Delta t$, and this leads to the textbook definition in terms of the "limit of a partition". – The_Sympathizer
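One of the comments above asks for a picture of this convergence. A minimal numerical sketch (our addition, with an arbitrary example integrand; not part of any answer) makes the same point: the Riemann sum $\sum_j f(t_j)\,\Delta t$ approaches the exact integral as $\Delta t$ shrinks, and dividing by $b-a$ recovers the average-value formula from the question.

# Left Riemann sums of f(t) = t^2 on [0, 1] converging to the exact value 1/3 (ours).
import numpy as np

f = lambda t: t**2
a, b = 0.0, 1.0
exact = 1.0 / 3.0

for n in [10, 100, 1000, 10000]:
    dt = (b - a) / n
    t = a + dt * np.arange(n)              # left endpoints t_j
    riemann = np.sum(f(t) * dt)            # sum_j f(t_j) * dt
    print(f"n = {n:5d}   sum = {riemann:.6f}   error = {abs(riemann - exact):.2e}")

# the question's average-value formula: (1/(b-a)) * integral is just the mean of the samples
print(riemann / (b - a), np.mean(f(t)))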
Indeed you can see $\int_a^bf(x)dx$ (1) as an infinite sum of rectangles where $f(x)dx$ is a rectangle of width $dx$ and height $f(x)$. This corresponds to the Riemann integral [0]. However, this is actually one interpretation (rectangles) of one example (the formula (1)), using one definition of an integral (the Riemann integral). You can give other forms, and you can look for other integrals (Lebesgue, Itô, etc.) and work inside other theories, as well as create your own definition and your own theory. Examples and images are important to get a feeling of what a mathematical object can be; however, as you go further in mathematics, you'll enrich your inner feeling of it as you add new interpretations and examples. One can see an example as a projection; and what matters to the mathematician is not the projection but the whole set. A few other things matter too. You wrote the formula (1) but without giving what $a,b,x$ are or the theory in which you're working. Most of the time it's not given, and many mathematicians aren't specific about which theory they are working in. Most of the time it'll be ZFC + first-order logic. I would like to share a few other points of view. The form you wrote in (1) (where $a,b$ are constants and $x$ a variable) is also a value. It can have a dimension or not. For $a,b\in \mathbb X$ and $x \to f(x): \mathbb X\to \mathbb X$, if $\mathbb X$ is $\mathbb R$ then we can see $a,b,x,f(x)$ as lengths and (1) as a surface. You can have $\mathbb X$ as a set of vectors, and $a,b,f(x),x$ as vectors; e.g., each vector represents a set of particles (e.g., a gas). Then $\mathbb X$ will be $\mathbb R^n \forall n\in \mathbb N$ [2]. $\int _a^zf(x)dx$ is a function of $z$, and this is definitely different from a value. It can also be a function of $a,b,z$ where all three are variables. (1) could be an Itô integral [2], and will be interpreted as a random variable, or the path of a stochastic process in an unspecified dimension. (1) could be a set of proofs of a sentence $\mathbb X$ [3a, 3b].

Comment/edit: I would like to discuss a point that is extremely commonly encountered. There is often a confusion between a function, $f$ or $f(\cdot)$, and its value at $x$, $f(x)$. They are really not the same. However, usually if $x$ is a variable, we'll see $f(x)$ as a function, and if $x$ is a constant, $f(x)$ as a value, but this is only a convenience. A good way to see that is that $f(x)$ is the result of a projection. You'll see the same duality with integrals: it can be a function or a value. We write the function, but it's actually "how we get the value". An integral over a domain is also a projection, only more explicit than that of $f(x)$. Each time we do a sum or a projection, we lose data, and the result has as many dimensions fewer as there are integrals (we usually write one $\int$ instead of many $\int \int \int \ldots$). You'll see that a lot in quantum physics, where an integral will be a measure, and as any measure, it's a projection. You also lose data when you take the measure, as well as losing "how to get the value" (the integral as function) when you compute over a domain and get the value (the integral as value). Maybe the point is that I encourage anyone to be careful and critical about the taught mathematics, since it is often simplified and interpreted; and I really encourage a personal interpretation. Another example is the developed and factorized forms of a polynomial. We put the equals sign between them, but it only means that their value is the same; they are not equal. One form has more information (the factorized one). And the whole process of transformation is also information.

[0] https://en.wikipedia.org/wiki/Riemann_integral
[1] https://en.wikipedia.org/wiki/Boltzmann_equation
[2] https://en.wikipedia.org/wiki/It%C3%B4_calculus
[3a] https://en.wikipedia.org/wiki/Type_theory
[3b] https://homotopytypetheory.org/book/
– Soleil

You definitely need to be careful, but it's a bit pointless to split hairs when the value of $f$ at $x$ is a function (named $f$) of the value of $x$. – nomen Jan 29 at 18:23

@nomen I disagree. Another example: in computing it changes everything. Passing a function or a lambda is not the same as passing its value, and we won't get the same result at all. However, the syntax looks similar. It should be the same with how we see $f(x)$ somewhere as mathematicians, and especially what we write about it. – Soleil Jan 29 at 18:28

You can disagree all you want, but the value of $f(x)$ is literally a function (named $f$) of the value at $x$. Similarly, plenty of programming languages treat functions as first-class values to be passed around and applied. – nomen Jan 29 at 18:31

What does any of this have to do with the question? – nomen Jan 29 at 20:40

@Soleil: Good luck explaining to the OP what a width of $\mathrm d x$ could mean. Your post confuses the OP more than it enlightens him. – Alex M. Apr 7 at 22:09
A joint time-assignment and expenditure-allocation model: value of leisure and value of time assigned to travel for specific population segments

Reinhard Hössinger, Florian Aschauer, Sergio Jara-Díaz, Simona Jokubauskaite, Basil Schmid, Stefanie Peer, Kay W. Axhausen & Regine Gerike

Transportation, volume 47, pages 1439–1475 (2020)

Abstract

Based on a time-use model with a sound theoretical basis and carefully collected data for Austria, the value of leisure (VoL) for different population segments has been estimated. Through the combination of these results with mode-specific values of travel time savings from a related study based on the same data, the first mode-specific values of time assigned to travel (VTAT) were calculated. Data was collected using a Mobility-Activity-Expenditure Diary, a novel survey format which gathers all activities, expenditures, and travel decisions from the same individuals for 1 week in a diary-based format. The average VoL is 8.17 €/h, which is below the mean wage of 12.14 €/h, indicating that the value of work is, on average, negative. Regarding the reliability of the VoL, we show its sensitivity to the variance of working time in a sample, something that has been ignored in previous studies and could be used to avoid inadequate segmentation. We controlled this effect in the analysis of the heterogeneity of the VoL across the population by estimating the parameters from the total (unsegmented) dataset with single interaction terms. We find that the VTAT is strictly negative for walking, predominantly negative for cycling and car, and predominantly positive for public transport with 0.27 €/h on average. The positive VTAT for public transport is a strong indication for the importance of travel conditions, in turn suggesting that improvements in travel conditions of public transport might be as important as investing in shorter travel times.

Introduction

Jara-Díaz and Guevara (2003) highlighted that a person who makes a travel decision not only maximizes her utility in this particular choice, but also in the surrounding time-expenditure space. In order to combine both components, they developed a theoretical time-use framework model which can be applied to obtain values for different aspects of time use. A key output is the value of leisure (VoL), which represents the value of the marginal utility of all activities that are assigned more time than the minimum necessary. Following DeSerpa (1973), the authors show that estimating the VoL permits a deeper examination of the value of travel time savings (VTTS) obtained from travel choice models, because the VTTS equals the VoL minus the value of time assigned to travel (VTAT). The intuition behind this is that the VTTS summarizes the value of the liberated time (opportunity cost of travel), while the VTAT represents the 'loss' when travel time is reduced, which is why it relates to travel conditions. The VoL is, therefore, a key piece of information for the integration of travel decisions into the framework of consumer home production. Furthermore, the VTAT is also important because it represents the direct utility (or disutility) derived from the time spent in the travel activity. The VTAT may differ between modes and according to specific conditions of travel such as comfort, reliability, crowding, or the possibility to use the in-vehicle time productively.
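To make the relationship concrete (this back-of-the-envelope illustration is ours, not a calculation reported in the paper): since VTTS = VoL − VTAT, the sample averages quoted in the abstract (a VoL of 8.17 €/h and a VTAT for public transport of 0.27 €/h) imply an average VTTS for public transport of roughly 8.17 − 0.27 ≈ 7.90 €/h, whereas a negative VTAT, as found for walking, implies a VTTS above the VoL.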
There is some indication that the increasing availability of mobile devices enables public transport passengers to use the in-vehicle time more productively, which may yield a higher value of time assigned to public transport (e.g. Litman 2008). In particular, train travel time can be used for many activities (Lyons et al. 2013). Flügel (2014) provided a summary of why public transport travellers may perceive travel as more relaxed than car travellers. The VTTS is usually obtained from (conditional) indirect utilities estimated using discrete choice models; it represents the total marginal willingness-to-pay for a reduction of travel time in the context of travel choices. The VTTS has high practical relevance in transport planning because savings in travel time account for the biggest share of user benefits in most cost-benefit analyses (e.g. Jara-Díaz 1990; Wardman and Lyons 2016; Hensher and Wang 2016). Mode choice models are able to estimate the VTTS by travel mode. Obtaining the VTAT is more difficult: to be computed it requires the VoL and the VTTS. Estimating the VoL requires in turn a large amount of information from each individual, most importantly time assignment patterns, the allocation of expenditures to various commodities, and travel decisions over a period of sufficient length to be considered as the long-term equilibrium of the individual (Jara-Díaz and Rosales-Salas 2015, 2017). As a consequence, only a few attempts have been made so far to estimate the VoL with the aforementioned model framework. Table 1 lists the results obtained with the original model formulation presented by Jara-Díaz and Guevara (2003) and later expanded by Jara-Díaz et al. (2008). They reveal a huge variability of VoL estimates ranging from 0.12 to 123 €/h, and the ratio VoL/w ranges between 0.04 and 6.83. Note, however, that the results from Jara-Díaz and Guevara (2003) were obtained with a limited preliminary version of the model. Also, the results from the Netherlands reported by Jara-Díaz et al. (2016) are rather implausible; this is discussed at the end of the "Data preparation" section. If these studies are not considered, the ratio VoL/w moves only from 0.57 to 2.48. Nonetheless, only a small part of this range can be explained by socio-demographic characteristics or structural factors such as survey year or the economic level of the country. Most VoL estimates follow the order of each country's well-being from the World Values Survey (Frey and Stutzer 2002), but the differences are too large to result from this factor alone. The main part of the variability remains unexplained; this raises the question of how to estimate the VoL such that it reflects time and cost preferences in a reliable manner.

Table 1: Values of leisure (VoL) estimated from microeconomic time-use models, wage rates (w), and ratios between them reported in the literature.

A possible source of unsystematic fluctuations are deficits and gaps in the data. One of the Chilean samples includes a specific population segment (long-distance commuters to downtown Santiago) who completed a 3-day activity diary; expenditures were not reported (Jara-Díaz et al. 2004; Munizaga et al. 2008). Other Chilean databases were constructed from origin-destination surveys (Jara-Díaz and Guevara 2003; Jara-Díaz et al. 2013). The German and Swiss data are based on a 6-week travel diary; expenditures were not reported and non-travel activities were inferred from the trip purposes (for Germany: Axhausen et al. 2002; for Switzerland: Löchl et al. 2005).
The Dutch results (Jara-Díaz et al. 2016) are based on the LISS panel (Longitudinal Internet Studies for the Social Sciences), which is a retrospective survey of average activity durations and expenditures; trip details such as travel modes are not reported. Finally, the U.S. results are based on a synthetic dataset obtained from a probabilistic merge of participants of a time-use survey and a consumer expenditure survey (Konduri et al. 2011). The dataset has been used to estimate various time-use and expenditure models including the multiple discrete–continuous extreme value model (MDCEV, see Castro et al. 2012). Both time-use and expenditure information is assuredly of high quality, but the probabilistic merge is questionable given that the aim of such models is to estimate trade-offs between time-use and expenditures at the individual level. This calls for a simultaneous survey of both components. To the best of our knowledge, no dataset exists so far with information on time assignment (including travel details) and expenditures which has been collected from the same individuals at the same time in a diary-based format.

The unavailability of appropriate data to estimate all values of time (leisure, work, VTTS, VTAT) is the starting point of the current study. We are contributing to this aspect of research by using a novel comprehensive dataset for model estimation, which was obtained from a Mobility-Activity-Expenditure Diary (MAED). This dataset includes information on activity assignment, expenditure allocation, and travel decisions over 1 week (considered as the whole work-leisure cycle) in a diary-based survey format, which has been proven reliable and valid in time-use surveys, consumer expenditure surveys, and travel surveys. Using these data, two main objectives can be achieved:

(i) Estimation of the VoL for different population segments with three alternative approaches:
(i1) A priori segmentation: subdivide the dataset into segments and estimate separate models for each one;
(i2) Ex-post segmentation: estimate a global model and calculate the VoL from the segmented datasets;
(i3) Ex-post segmentation with interaction terms: like (i2) but with moderator variables and interaction terms.
Previous research has used a priori segmentation, which estimates all parameters group-specifically; examples are the results of Jara-Díaz and Astroza (2013) and Jara-Díaz et al. (2013) included in Table 1. Ex-post segmentation is more efficient, as it does not require dividing the sample into small groups. It would be a methodological advance if ex-post segmentation (with or without interaction terms) proves to be suitable.

(ii) Estimation of the value of time assigned to travel (VTAT) with two innovations: (ii1) mode-specific estimation, which yields a separate VTAT for each travel mode, and (ii2) estimation based on the complete model framework introduced by Jara-Díaz and Guevara (2003), including travel choices, activity assignment, and (for the first time) expenditure allocation. Calculating the VTAT requires both the VoL and the VTTS.Footnote 2

Our model is based on the formulation created by Jara-Díaz et al. (2008). We used this model as a benchmark because it has been applied to four countries (Chile, Germany, Switzerland, USA) as well as to many segments within two of those countries (Chile and USA). This allows our estimations to have a basis for comparison (see Table 1).
Note that the MAED data that are used in this paper do not provide a one-to-one mapping between activities and goods, and accounting for these relations would require assumptions which are not needed in the basic 2008 model.Footnote 3 Here we explicitly recognize that activities have a cost through the market goods bought but we do not attempt to find the proportions by which expenses are allocated to individual activities. Our utility function U is shown in Eq. (1). It is the log-linear version of a Cobb–Douglas function including three terms which relate to the utility gained from time assigned to work, time assigned to leisure, and expenses assigned to freely consumed goods. The logarithms enforce diminishing marginal utility as the consumption level of a particular alternative increases (i.e., satiation). This assumption yields a multiple discreteness model—that is, the choice of multiple alternatives can occur simultaneously (see Bhat 2005, 2008).Footnote 4 $$ U = \theta_{w} \log \left( {T_{w} } \right) + \mathop \sum \limits_{i = 1}^{n} \theta_{i} \log \left( {T_{i} } \right) + \mathop \sum \limits_{j = 1}^{m} \varphi_{j} \log \left( {E_{j} } \right) $$ The utility-generating resources (time T and expenses E) are subject to the following constraints: $$ \tau - T_{w} - \sum\limits_{i = 1}^{n} {T_{i} = 0 \left( \mu \right)} \quad {\text{time}}\;{\text{constraint}} $$ $$ wT_{w} + I - \sum\limits_{j = 1}^{m} {E_{j} \ge 0 \left( \lambda \right)} \quad {\text{budget}}\;{\text{constraint}} $$ $$ T_{i} - T_{i}^{Min} \ge 0, \;\forall i \in A^{r} \left( {\kappa_{i} } \right)\quad {\text{technical}}\;{\text{constraint}}\;{\text{on}}\;{\text{committed}}\;{\text{activities}} $$ $$ E_{j} - E_{j}^{Min} \ge 0,\, \forall j \in G^{r} \left( {\eta_{j} } \right)\quad {\text{technical}}\;{\text{constraint}}\;{\text{on}}\;{\text{committed}}\;{\text{goods}} $$ \( \theta_{w} \) is the baseline utility of assigning time to work; Tw the amount of time assigned to work; \( \theta_{i} \) and Ti the baseline utility and amount of time assigned to activity i; \( \varphi_{j} \) and Ej the baseline utility and amount of expenses assigned to good j; \( \tau \) the total time constraint; w the wage rate; I fixed income from other sources but work; μ and λ are Lagrange multipliers representing the marginal utility of increasing available time and increasing available income; κi the Lagrange multiplier representing the marginal utility of reducing the minimum time constraint of restricted activity i ϵ Ar; and ηi the Lagrange multiplier representing the marginal utility of reducing the minimum expenditure constraint of restricted good j ϵ Gr. Committed activities and goods are those which are necessary for personal and household maintenance such as travel, cleaning the house, rental cost, etc. They are limited at the bottom by technical constraints (i.e., people would like to assign less time and money but cannot because of the technical constraints). The amount of time and expenses assigned to these activities and goods is given externally. It is inferred from the observations and included in the equations as TC and Ec (see below). Furthermore, we assume that each individual assigns non-zero amounts of time and money to each unconstrained activity and consumed good because the logarithms in Eq. (1) do not allow zeros. 
This is reasonable as we are dealing with an aggregated view of activities and expenses assigned to a work-leisure cycle (only one category of work, leisure, and expenses during a whole week), which prevents the presence of zero assignments.Footnote 5 The original form of the Jara-Díaz et al. (2008) model was stated in terms of goods consumption Xj, which is represented here by expenses assigned to goods in monetary terms \( E_{j} = P_{j} X_{j} \), where Pj is the unit price of good j. This will be shown to be equivalent to the original model in Eq. (15) below.

Following Jara-Díaz et al. (2008), we obtain the first order conditions to find the optimal allocation of activities and expenditures. They yield a solution for Tw, Ti, and Ej, which can be used to calculate \( \mu \) and \( \lambda \), and consequently the VoL and VTAW. The first order conditions are:

$$ \frac{{\theta_{w} }}{{T_{w} }} + \lambda w - \mu = 0 $$

$$ \frac{{\theta_{i} }}{{T_{i} }} - \mu = 0,\; \forall i \in A^{f} $$

$$ \frac{{\varphi_{j} }}{{E_{j} }} - \lambda = 0,\; \forall j \in G^{f} $$

where Af and Gf denote the set of freely chosen activities and freely consumed goods, respectively. Equation (8) is derived from the budget constraint in Eq. (3), which is always binding when maximizing U, such that \( \lambda \) is always positive. Calculate \( \mu \) and \( \lambda \) from the first order conditions:

$$ \mu = \frac{\partial U}{{\partial T_{i} }} = \frac{\varTheta }{{\left( {\tau - T_{w} - T_{c} } \right)}} $$

$$ \lambda = \frac{\partial U}{{\partial E_{j} }} = \frac{\varPhi }{{\left( {wT_{w} - E_{c} } \right)}} $$

The parameters Θ and Φ correspond to the sum of individual time coefficients \( \theta_{i} (\varTheta = \sum\nolimits_{{i \in A^{f} }} {\theta_{i} } ) \) and individual expenditure coefficients \( \varphi_{j} (\varPhi = \sum\nolimits_{{j \in G^{f} }} {\varphi_{j} } ) \). Re-write (6) as (11) and insert (9) and (10) into (11) to obtain (12):

$$ T_{w} \left( {\lambda w - \mu } \right) + \theta_{w} = 0 $$

$$ T_{w} \left[ {\frac{\varPhi w}{{\left( {wT_{w} - E_{c} } \right)}} - \frac{\varTheta }{{\left( {\tau - T_{w} - T_{c} } \right)}} } \right] + \theta_{w} = 0 $$

Solve the quadratic equation (12) to obtain the optimal working time \( T_{w}^{*} \):

$$ T_{w}^{*} = \frac{{\left( {\varPhi + \theta_{w} } \right)\left( {\tau - T_{c} } \right) + \frac{{E_{c} }}{w}\left( {\varTheta + \theta_{w} } \right) \pm \sqrt {\left[ {\frac{{E_{c} }}{w}\left( {\varTheta + \theta_{w} } \right) + \left( {\tau - T_{c} } \right)\left( {\varPhi + \theta_{w} } \right)} \right]^{2} - 4\frac{{E_{c} }}{w}\left( {\tau - T_{c} } \right)\theta_{w} \left( {\varTheta + \varPhi + \theta_{w} } \right)} }}{{2\left( {\varTheta + \varPhi + \theta_{w} } \right)}} $$

Insert (9) into (7) with \( T_{w}^{*} \) to obtain \( T_{i}^{*} \):

$$ T_{i}^{*} = \frac{{\theta_{i} }}{\varTheta }\left( {\tau - T_{w}^{*} - T_{c} } \right) $$

Insert (10) into (8) with \( T_{w}^{*} \) to obtain \( E_{j}^{*} \)Footnote 6:

$$ E_{j}^{*} = \frac{{\varphi_{j} }}{\varPhi }\left( {wT_{w}^{*} - E_{c} } \right) $$

Please note that there is a difference between our equation system and the one proposed by Jara-Díaz et al. (2008). They normalised their parameters to \( 2(\varTheta + \varPhi + \theta_{w} ) \). This yields a simplified equation system [their Eqs. (20)–(22)], which was used to estimate normalised parameters \( \alpha \) and \( \beta \). We normalised our parameters by setting Θ to one. This enables us to estimate the original parameters directly from Eqs.
(13) to (15).Footnote 7 The VoL and VTAW are then obtained from the estimated parameters by inserting (9) and (10) into (6): $$ VoL = \frac{{\partial U/\partial T_{i} }}{{\partial U/\partial E_{j} }} = \frac{\mu }{\lambda } = \frac{{\varTheta \left( {wT_{w} - E_{c} } \right)}}{{\varPhi \left( {\tau - T_{w} - T_{c} } \right)}} $$ $$ VTAW = \frac{{\partial U/\partial T_{w} }}{{\partial U/\partial E_{j} }} = \frac{\mu }{\lambda } - w = VoL - w $$ A relevant new aspect of the paper at hand is the estimation of time-use models from a dataset in which activities and expenditures are obtained simultaneously from the same individuals in a diary-based survey. The underlying dataset is discussed in detail in papers by Aschauer et al. (2018, 2019). The sample provides information about all activities, expenditures, and travel decisions over a period of 1 week. It is based on a novel survey design, the Mobility-Activity-Expenditure Diary (MAED), and a survey conducted in spring and autumn 2015. It is a self-administered mail-back survey with a 1-week reporting period, including questions concerning trips, activities, and expenditures for each diary day. The trip section resembles the traditional household travel survey format based on the New KONTIV designFootnote 8 (Brög et al. 2009; Socialdata 2009), but the trip purpose section is more comprehensive. It resembles a time-use diary but with predefined activity types instead of open text fields. Each activity type is reported in a separate row along with the start time, end time, and possible expenditures, which are specified by means of their amount and type. The classification of expenses follows the UN standard Classification of Individual Consumption According to Purpose (COICOP; see UN 2018). The sample was based on a random selection of Austrian households for 18 pre-defined strata defined by region and level of urbanisation as shown in Fig. 1. Only employed persons were selected for participation because a wage rate is required for model estimation. The survey procedure followed the household travel survey tradition with some modifications resulting from the necessity of screening for employed persons and the high respondent burden as explained in detail in Aschauer et al. (2018, 2019), where the MAED data are presented and compared to the latest Austrian travel survey, time-use survey, and expenditure survey. Survey locations in Austria Aside from usual plausibility checks, two additional adjustments were necessary in order to reduce the incidental variation in the diary data and to better reflect the long-term equilibrium of the individuals. Adjustment of activity durations A key problem with respect to activities is the working time reported in the diary. It can deviate from the usual amount due to incidental events during the reporting week, such as workload peaks, bank holidays, sickness, training courses, etc. The result is an unsystematic variation of the reported working time which causes unrealistic balances of income and expenditures, because the working time (along with the wage) determines the implied income in the time-use model [see Eq. (3)]. We addressed this problem by asking for the regular hours worked (according to the contract) and the usual hours of overtime in the personal questionnaire which accompanied the diary. For data analysis, we replaced the reported working time in the diary for all respondents with the 'effective working time', which is the sum of the regular working time and the usual hours of overtime. 
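Before the description of the data adjustments continues, it may help to see the closed-form solution in executable form. The sketch below evaluates Eq. (13) for the optimal working time and Eqs. (16)–(17) for the VoL and VTAW. It is purely illustrative: the parameter values and weekly observables are invented numbers, not MAED estimates, Θ is set to one as in the paper, and the root of the quadratic is selected by feasibility (positive leisure time and positive disposable income), a selection rule the text does not spell out explicitly.

import numpy as np

def optimal_work_time(theta_w, Theta, Phi, w, tau, T_c, E_c):
    # Both roots of the quadratic behind Eq. (13); which root applies is
    # resolved here by feasibility (an assumption, not stated in the paper).
    a = Theta + Phi + theta_w
    b = (Phi + theta_w) * (tau - T_c) + (E_c / w) * (Theta + theta_w)
    c = theta_w * (E_c / w) * (tau - T_c)
    roots = [(b + s * np.sqrt(b**2 - 4 * a * c)) / (2 * a) for s in (+1, -1)]
    # feasible: positive leisure time (T_w < tau - T_c) and positive
    # disposable income (w*T_w > E_c), as required by the logarithms in Eq. (1)
    return next(r for r in roots if 0 < r < tau - T_c and w * r > E_c)

def value_of_leisure(Theta, Phi, w, T_w, tau, T_c, E_c):
    # Eq. (16): VoL = Theta*(w*T_w - E_c) / (Phi*(tau - T_w - T_c))
    return Theta * (w * T_w - E_c) / (Phi * (tau - T_w - T_c))

# purely illustrative weekly figures: wage 12 EUR/h, 168 h time budget,
# 60 h committed time, 150 EUR committed expenses
theta_w, Theta, Phi = -0.15, 1.0, 0.35
w, tau, T_c, E_c = 12.0, 168.0, 60.0, 150.0

T_w_star = optimal_work_time(theta_w, Theta, Phi, w, tau, T_c, E_c)
VoL = value_of_leisure(Theta, Phi, w, T_w_star, tau, T_c, E_c)
VTAW = VoL - w   # Eq. (17)
print(f"T_w* = {T_w_star:.1f} h/week, VoL = {VoL:.2f} EUR/h, VTAW = {VTAW:.2f} EUR/h")

With these invented inputs the feasible root gives roughly 32 h of work per week and a VoL somewhat below the wage, i.e. a negative VTAW, mirroring the qualitative pattern reported later for the MAED sample.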
The durations of non-work activities were adjusted accordingly to satisfy the time constraint. We assume an asymmetric adjustment pattern in the sense that an incidental increase of working time (beyond the usual level) causes different re-arrangement patterns than an incidental reduction (below the usual level). For this purpose, we estimated two separate models which were used for the adjustment of activities of two different groups: Persons who worked more than usual in the reporting week: reduce the working time to the 'usual effective working time' and increase non-work activities accordingly in order to meet the time constraint; Those who worked less than usual: increase the working time and reduce non-work activities accordingly. Adjustment of expenditures Linked to the reporting of expenditures is the large variability of purchase rhythms of goods and services. In line with conventional expenditure surveys, expenditure information was collected in two sections of the questionnaire: frequently purchased items were reported in the diary, whereas long-term expenses were reported in the household section. This requires a procedure of combining both sources in a manner that avoids double-counting through expenses that occur in both sections. Aschauer et al. (2019) describe the procedures that have been tested and applied in this context. The collection of expenditure data at two levels (personal and household) induces the need of some rule to allocate the expenses to those individuals who generate income (i.e. earners). The default MAED dataset is based on 'proportional expenses' according to the labour income of the household members. In order to run a sensitivity analysis we generated an alternative dataset based on 'equal expenses' for freely chosen goods. It assumes transfer payments within the household such that all earners have equal amounts available for freely chosen expenses. In "Value of leisure (VoL) and value of time assigned to work (VTAW)" section (Table 3) we provide and discuss the results of both datasets. A second issue associated with the expenses is the large variation of short-term expenses in the diary. A randomly selected week can deviate from the long-term equilibrium for two reasons: exceptionally large purchases (one-time-big-ticket items, e.g. a new car) and implausible zero spending on essential goods such as food or travel. Reported zeros may be reduced by a longer observation period and face-to-face support of participants, as is usual for conventional expenditure surveys, but this has not been done in the MAED because of the unacceptable response burden given that the participants also reported their trips and activities. We employed a model-based smoothing of expenditures with the intention to reduce the large incidental variation caused by the aforementioned problems but to retain the individual variability as much as possible. The applied procedure consisted of three steps: Predict the total expenditures as the difference between reported income and estimated savings; the monthly savings are estimated by a linear model using personal and household characteristics as predictors. Predict the expenditure shares by category with a multinomial logit model, again using personal and household characteristics as predictors. Replace the reported total expenditures with the predicted total expenditures (Step 1) and fix the balance by adjusting individual expenditure categories using the predicted expenditure shares (Step 2) as a benchmark. 
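The three smoothing steps just listed can be condensed into a small function. This is only a sketch and only one plausible reading of step 3, since the exact rebalancing rule is not given here: the savings prediction (step 1) and the multinomial-logit share prediction (step 2) are taken as given, and the gap between the reported and predicted totals is spread over the categories in proportion to the predicted shares, which serve as the benchmark. Category names and numbers are invented.

def smooth_expenditures(reported, income, predicted_savings, predicted_shares):
    # Step 1: predicted total expenditures = income minus predicted savings
    predicted_total = income - predicted_savings
    # Step 3: fix the balance by distributing the gap between reported and
    # predicted totals over the categories, using the predicted shares
    # (step 2) as the benchmark -- one plausible reading of the procedure
    gap = predicted_total - sum(reported.values())
    return {cat: reported[cat] + gap * predicted_shares[cat] for cat in reported}

# illustrative weekly figures for one respondent
reported = {"food": 90.0, "housing": 180.0, "leisure": 40.0, "travel": 30.0}
shares   = {"food": 0.25, "housing": 0.45, "leisure": 0.20, "travel": 0.10}
adjusted = smooth_expenditures(reported, income=600.0, predicted_savings=200.0,
                               predicted_shares=shares)
print(adjusted)   # category totals now sum to the predicted 400 EUR

A rescaling of this kind keeps as much of the reported variability as possible, which is the stated intention of the adjustment; a respondent with implausible zero spending in one category would simply be shifted towards the predicted benchmark share.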
This procedure ensured that the reported expenditures were carefully adjusted (1) only to the necessary extent in order to fix the balance between income, savings, and expenditures, and (2) towards a benchmark which is already adapted to individual characteristics by the multinomial logit model. All models used for the adjustment are provided in the "Appendix" section (Tables 7, 8, 9 and Fig. 10). The models comprise many predictors including insignificant ones. This is in accordance with the purpose of the adjustment: we did not attempt to obtain the most parsimonious model (as usual for prediction models) but a rich model that reproduces the highest possible share of individual variability. Table 2 shows the pairwise correlations between reported and adjusted amounts. The average correlation is 0.94 for activities and 0.85 for expenditures. The lower correlation of expenditures results from their larger variability: the coefficient of variation of reported expenditures is more than two times higher than that of activities. The generally high level of correlations (also for expenses) indicates that the major portion of reported variability could be maintained with the adjusted data. Table 2 Pairwise correlations between reported and adjusted activity durations and expenditures The MAED data rectify some limitations of existing datasets that include time-use and expenditures and have been used in previous estimations of time-use models. Our data were obtained directly from the same individuals in a diary format. This is an advantage over retrospective data collections (such as in the Dutch LISS panel) because retrospective questions can lead to biased mean values (Browning and Gørtz 2006). Moreover, obtaining activities and expenditures simultaneously from the same individuals should be preferred over imputing expenditures from external sources as done by Konduri et al. (2011), because a time-use model is mainly about individuals' trade-offs between time-use, income, and expenditures. Note, however, that the probabilistic merge of participants of a time-use survey with those of a consumer expenditure survey has one possible strength: consumer expenditure surveys collect the expenses usually with more effort over a longer time period than combined surveys (such as the MAED survey) which will most likely result in more accurate data (while also requiring some adjustment and averaging). Finally, we believe that any data obtained from a diary-based survey should undergo an adjustment to fix the individual balances between work time, income, and expenses because data used for a time-use model should represent the long-term equilibrium on individual levels and not simply as an average across the sample. Figure 2 shows the average activity duration per activity category during the reporting week. The top two bars compare the MAED with the latest Austrian time-use survey (ATUS); we included only employed persons of the ATUS to be consistent with the MAED. Apart from minor deviations, the MAED results fit the time distribution of the ATUS very well. The largest difference is a shortfall of leisure activities in favour of travel and personal activities. Both shifts are probably caused by methodological differences. In the MAED we took great care to record all trips, whereas time-use surveys are well known for under-reporting trips (Gerike et al. 2015 as well as Aschauer et al. 2018). The shift from leisure to personal activities in the MAED is very likely caused by different coding schemes. 
MAED participants coded the activity types themselves (such as personal or leisure) based on our instructions; one instruction was that 'leisure' should be coded if the activity was performed voluntarily. ATUS participants stated the specific kind of activity in open text fields (such as reading or playing with the children). The abstract activity types were inferred from these statements during data processing, but, there is a broad overlap between personal and leisure; many activities that were inferred as 'leisure' from the ATUS statements could be perceived as duty by the participants—in particular, social activities such as going to church or visiting a hospital patient. MAED participants would have coded 'personal' in this case. Average duration by activity category and population segments (ATUS = Austrian Time Use Survey) The remaining bars in Fig. 2 show the average activity durations across different population segments in the MAED sample; they reveal only small differences. If there is a horizontal shift across the segments, it is in most cases a trade-off between paid work and unpaid (domestic) work. This shift is most pronounced in the difference between men and women. The particularly high substitution rate between paid work and domestic work is reflected by the largest negative correlation (− 0.93) among all pairwise correlations between activity categories. Figure 3 shows the total weekly expenses (white dots) and shares of expenditures by category (coloured bars). The total expenses differ greatly between the segments, most of all between low and high-income (as expected) with a ratio of 1.9; but other segments reveal large differences as well: men, older persons, persons with higher education, and persons living in single-worker households spend more money than those in the complementary segments. The two bars at the top compare the MAED sample with the latest Austrian consumer expenditure survey (ACES), including only employed persons to be consistent with the MAED. The differences are larger than those between MAED and ATUS (see Fig. 2), possibly reflecting the difficulties of surveying expenditures (see "Data preparation" section). The largest deviation (4.6%) refers to the share of housing; it has a specific reason: the original ACES includes rental equivalents (instead of reported expenses) of owner-occupied housing. The MAED data include, in contrast, reported mortgage repayments and operating costs, which are not comparable to (on average lower than) the rental equivalents. Since we found no way to match both procedures, we removed the rental equivalents in the ACES, which explains the lower share. Personal expenses by population segments (ACES = Austrian Consumer Expenditure Survey); the white dots show the total expenses with respect to the lower axis; the coloured bars show the average shares of expenditures by category with respect to the upper axis. (Color figure online) The remaining bars in Fig. 3 show the average shares of expenditures across different population segments in the MAED sample. The variability of the shares across the segments is much smaller than that of total expenses, which means that people with higher income spend more money on all kinds of commodities: they live in more expensive houses, eat more expensive food, wear more expensive clothes, etc. 
From this pattern we can conclude that the Cobb–Douglas function holds for the expenditures in the sense that "having chosen the ultimately satisfying budget shares at any given set of relative prices, the superlatively wealthy continue to allocate additional income in the same proportions" (Powell et al. 2002).

Value of leisure (VoL) and value of time assigned to work (VTAW)

The model estimation requires classifying the reported activity and expenditure categories into the model variables. The model defines three types of decision variables: (1) duration of paid work [TW in Eq. (6)], (2) duration of freely chosen activities [Af in Eq. (7)] to which people assign more time than the technical minimum, and (3) expenses on freely consumed goods [Gf in Eq. (8)] of which people consume more than the technical minimum. These three types of variables allow for a closed-form solution; the resulting equation system (13)–(15) can be used to estimate the utility parameters and to calculate the marginal values of leisure and work by inserting the estimates in Eqs. (16) and (17). Furthermore, the model defines two types of exogenous variables referred to as committed activities [Tc in Eq. (9)] and committed expenses [Ec in Eq. (10)]. We assume that the consumption levels of these committed variables are externally determined by technical constraints, which require a certain minimum (Jara-Díaz 2003) and leave no choice to the consumers but to stick to this minimum.

Table 3 shows how the reported activity and expenditure categories were assigned to the model variables. The allocation is critical because it is arbitrary (it cannot be deduced from the data) but affects the result. Our definition of committed activities (TC) follows the classification of Jara-Díaz et al. (2013), who identified six types: household chores, personal care, assisting friends and family ('other' in the MAED sample), administrative chores and family finances, commuting, and education. The only exception is 'sleep', which Jara-Díaz et al. classified as a free activity, whereas we believe that most people try to stick to the minimum. Personal care and household chores are typically classified as 'committed' because of their maintenance-oriented nature (Bittman and Wajcman 2000; Robinson and Godbey 2010). These activities are driven by a physical need, but, in most cases, people do not want to pay more attention than necessary. Gronau and Hamermesh (2006) classified these activities as 'goods intensive', that is, individuals particularly care about the amount of goods assigned to them. Ahn et al. (2004) also found that people try to save money in maintenance activities. Travel time might, in principle, be considered as an endogenous variable, which is related to activity destinations and also to the overall framework of time and budget assignment as shown by Jara-Díaz and Guerra (2003). However, in this paper we follow the approach by Jara-Díaz et al. (2008) which, in essence, states that ceteris paribus individuals would be willing to reduce travel time but cannot due to the characteristics of the transport system (transit design, road network, etc.).

Table 3 Classification of observed activities and expenditures into model variables

The classification of committed expenses (EC) follows Aschauer et al. (2019) as well as Mokhtarian and Chen (2004): expenses on goods associated with physical needs or maintenance were classified as 'committed'.
People need to eat (food), take care of their health (personal), and need a dwelling (housing) with equipment (furnishing). Further committed expenses are financing, insurance, services not related to leisure activities, education, and travel. Freely chosen expenses include out-of-home accommodation (mainly visiting a restaurant and holidays), leisure and recreational goods, as well as electronics and communication devices, which are mainly used for entertainment. 'Clothing' was also classified as 'non-committed' although it is at least partially essential. The reason is that clothing expenses add up to fairly high amounts in our sample, indicating that the 'technical minimum' is exceeded. Those activity and expenditure categories which have been classified as 'freely consumed' as described above were further subdivided into two groups: Categories that are entirely or at least predominantly freely consumed were classified as T1 and E1. T1 includes leisure; E1 includes leisure, accommodation (mainly eat outside), and electronic. Categories that are committed by their nature, but it seems that most respondents have exceeded the technical minimum, were classified as T2 and E2. T2 includes eating and shopping; E2 includes clothes. Figure 4 shows the correlation pattern of the model variables (descriptive statistics of these variables are provided in the "Appendix" section, Table 10). TW is positively related with EC and negatively with TC—as assumed in the theory and specified in Eq. (13). Another aspect to be noted is the opposite pattern of time-use and expenditure variables: all time-use variables are negatively correlated due to the common time constraint τ, whereas the expenditure variables are positively correlated among each other and also with TW. This follows from the equalizing effect of labour income: it increases with TW and increases in turn the available budget for all kinds of goods. Correlogram of model variables The model estimation was carried out using a maximum likelihood estimation. It can be used under the normality assumption to estimate the parameters from the nonlinear equation system (13)–(15), which is re-written as: $$ \hat{Y}_{i} = g_{i} \left( \beta \right) + \eta_{i} , i \in \left\{ {1, \ldots ,3} \right\} $$ where gi denotes a function of parameter vector β and error terms \( \eta_{i} \sim N\left( {\mu_{i} ,\sigma_{i} } \right) \). The joint density of all error terms can be expressed as: $$ f\left( \eta \right) = f\left( {\eta_{1} } \right)f\left( {\eta_{2} |\eta_{1} } \right)f\left( {\eta_{3} |\eta_{1} ,\eta_{2} } \right) $$ The log-likelihood function of a sample of size J is: $$ LL\left( \eta \right) = \sum\limits_{i = 1}^{J} {\log \left[ {f\left( {\eta_{1} } \right)f\left( {\eta_{2} |\eta_{1} } \right)f\left( {\eta_{3} |\eta_{1} ,\eta_{2} } \right)} \right]} $$ The maximum log-likelihood function in Eq. (20) yields estimates of the parameters in Eqs. (13)–(15). The VoL and VTAW can be calculated by entering these estimates in Eqs. (16) and (17). As stated in "Data preparation" section we tested two assumptions regarding how expenses are shared between members of the same household. The default dataset assumes 'proportional expenses' according to the labour income. The alternative dataset assumes 'equal expenses on freely chosen goods'. This was achieved by allocating all expenses on freely consumed goods (E1 and E2) at equal amounts to the household members; the committed expenses (EC) were left unchanged (i.e. 
proportional to the labour income) to avoid negative disposable incomes (EC > wTW), which can cause negative square roots in Eq. (13). Table 4 shows the result of the estimation. The default dataset (proportional expenses) yields a VoL of 8.17 €/h, which is below the average wage of 12.14 €/h. As shown by many (see Jara-Díaz 2007 for a synthesis) the VoL equals the total value of work given by the wage plus the value of time assigned to work (VTAW). Therefore, the VTAW is negative with an average of − 3.97 €/h; it means that the average person works for the money and dislikes work as an activity. The alternative dataset (equal expenses on freely chosen goods) yields a VoL of 9.68 €/h, which is 18% higher. The difference arises from the lower estimate of Φ (0.308 vs. 0.365). The interpretation is straightforward: the implicit transfer of income between household members causes that the expenses have less statistical influence on the working time TW [the response variable in Eq. (13)], because the expenses are equalised but TW continues to differ between members of the same household. This results in a lower sensitivity of TW with respect to changes in expenses and consequently in a lower value of income (λ and Φ). The sensitivity analysis gives an idea to what extent and in which direction the VoL is influenced by how resources are allocated among household members. Given the importance of this aspect, we consider household models (either in the cooperative or non-cooperative version) as an avenue for future work. The remaining results of this paper are based on the dataset with proportional expenses, because it yields a clear balance between labour income and expenses at the individual level and permits comparison with existing studies reported in Table 1, which have used samples of one-worker households (Jara-Díaz et al. 2016) or one-person-one-worker households (Konduri et al. 2011). Table 4 Results of the model estimation from the total sample Heterogeneity of the VoL across different population segments A segmented consideration of the values of time seems to be important given the large differences between VoL estimates of different population segments in previous studies (see Table 1). These studies have consistently used a priori segmentation (i.e., the segments were treated as independent samples and separate models were estimated for each segment), but there are different options how to conduct a segmentation. These options have never been compared to each other, although they have different strengths and weaknesses and might yield different results. In this section we compare alternative options to capture the heterogeneity in the VoL across seven segmentation variables, each of which was treated as follows: the variable was transformed to a binary variable (if not already binary) in a way that it identifies two groups of similar size (low vs. high age; low vs. high income etc.). Table 5 shows these variables along with their original distribution and binary segments. Table 5 List of the variables used for segmentation Segmentation approaches The possible influence of the segmentation method on the VoL manifests in Eq. 
(16): it reveals the VoL as a function of four observed variables and one estimated parameter as follows: the VoL increases with the observed wage rate w, working time TW, and committed time TC; it decreases with observed committed expenses EC and the estimated parameter Φ.Footnote 9 This means that the segmentation method can affect the VoL only through the parameter Φ, because this is the only estimated quantity in Eq. (16). We compared three options of how to estimate Φ:

Ex-post segmentation: The parameters are estimated from the total dataset using a global model. The VoL is then calculated from the segmented dataset based on the global Φ estimate; the VoL accounts only for differences in the distribution of observed variables, whereas Φ is constant across all segments.

A priori segmentation: The dataset is segmented beforehand; the parameters are estimated for each of the segmented datasets, which implies that all parameters are segment-specific. The VoL is then calculated from the segmented dataset based on segment-specific Φ estimates; it accounts for differences in the distribution of observed variables as well as other differences that affect the estimation of Φ.

Interaction terms: The parameters are estimated from the total dataset using a model with interaction terms involving the segmentation variables. Each of the four main effect parameters (θw, θ1, φ1, Φ) can have an interaction term independent from the other parameters. This way, segment-specific Φ values can be obtained and used to calculate the VoL for each segment. For the sake of comparability, we used the binary grouping variables also for the interaction model, although a moderator variable could, in principle, have a higher scale level (e.g., actual income rather than a binary dummy indicating low and high income).

Ex-post versus a-priori segmentation

Figure 5 compares the segmented VoL estimates from ex-post segmentation (the most restrictive model, where all parameters are estimated from the total sample) with those from a priori segmentation (the least restrictive model, where all parameters are segment-specific); results are provided in the "Appendix" section, Table 11. Both methods yield similar results when segmenting by gender, age, and income—which suggests that these classifications have a consistent impact on the VoL. However, four segmentation variables yield very dissimilar or even reversed effects depending on the method used: urbanity, level of education, presence of children, and number of workers in the household. The reversed effect of the presence of children on the VoL provides no indication of the superiority of either procedure, because this effect can indeed be twofold, as pointed out by Jara-Díaz et al. (2013): on the one hand, children require a lot of time, which translates into more time pressure compared to childless households; on the other hand, taking care of children can be a pleasurable activity for parents.
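To make the computational difference between the first two options concrete, the sketch below shows the ex-post variant: each person's VoL follows from Eq. (16) with one globally estimated Φ (here simply an assumed value), and the segment averages are taken afterwards. A priori segmentation would instead re-estimate Φ (and the other parameters) separately on each segment's data before applying Eq. (16). The micro-data are invented and serve only to show the mechanics.

import pandas as pd

def vol_eq16(df, Phi, Theta=1.0, tau=168.0):
    # Per-person VoL from Eq. (16); Theta is normalised to one as in the paper
    return Theta * (df["w"] * df["T_w"] - df["E_c"]) / (Phi * (tau - df["T_w"] - df["T_c"]))

df = pd.DataFrame({
    "w":       [10.0, 14.0, 11.0, 16.0],      # wage rate (EUR/h)
    "T_w":     [30.0, 42.0, 25.0, 45.0],      # working time (h/week)
    "T_c":     [65.0, 55.0, 70.0, 50.0],      # committed time (h/week)
    "E_c":     [120.0, 200.0, 110.0, 220.0],  # committed expenses (EUR/week)
    "segment": ["low income", "high income", "low income", "high income"],
})

Phi_global = 0.35                             # assumed global estimate
df["VoL"] = vol_eq16(df, Phi_global)
print(df.groupby("segment")["VoL"].mean())    # ex-post segmented VoL

# A priori segmentation would repeat the full maximum-likelihood estimation
# within each segment, so that Phi itself differs between the groups.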
However, the segmentation with respect to the educational level appears counter-intuitive in the a priori case, because the VoL of both low and high education segments deviates in the same direction from the global average.Footnote 10

VoL estimates and 95% confidence intervals (according to the Delta method) of population segments, obtained from a priori segmentation and ex-post segmentation compared against the global average; oblique lines connect the VoL of two complementary segments

This raises the question of why, in some cases, a priori segmentation causes these problems with reversed effects and counter-intuitive results. We found the main reason in the sensitivity of the VoL to the variance of the working time (TW) in each segment, which becomes effective only if the parameters are estimated separately for each segment (i.e. in the case of a priori segmentation). This sensitivity can be intuitively explored from the behaviour of TW in Eq. (13) during the process of parameter estimation:
A large variability of TW must be reflected by a large variability of the predicted working time TW* to achieve a close fit, which means large responses of TW* to given changes in the explanatory variablesFootnote 11;
The responsiveness of TW* is larger if θw (baseline utility of work) is more negative. The reason is the right-hand term under the square root in Eq. (13), because this term increases linearly with −θwFootnote 12;
A negative θw enforces a large Φ (baseline utility of freely consumed goods) to satisfy the condition that the marginal utility of work plus labour income equals the marginal utility of leisure as defined in Eq. (6).

In order to verify this in our data, Fig. 6 shows the results of a simulation. We generated a series of datasets based on the total sample (n = 737). In each dataset we pivoted the values of TW symmetrically around the mean, such that the mean does not change, but the variance becomes smaller or larger; the changes in TW were balanced by opposite changes of TC to meet the time constraint; everything else was left unchanged. The result shows the close response of the parameter Φ to changes in the variance of TW, in line with the aforementioned description.

VoL and parameter Φ of a model series estimated from simulated samples, in which the variance of TW and TC was changed systematically; all mean values remained unchanged

Figure 7 shows how this mechanism affects the segmentation results. The blue and red lines are the same as in Fig. 5; the grey line shows the inverse standard deviation of TW in each segment.Footnote 13 The deviation of the a priori segments from the ex-post segments consistently follows the direction of the grey line, especially in the three cases of reversed results (urbanity, education, and presence of children). The sensitivity to the variance of TW makes a priori segmentation vulnerable to unexpected external influences, because the variance of TW can differ for many reasons. An example is the segmentation by gender. Men are more often full-time employed (high TW but small variance of TW), whereas women have more flexible part-time arrangements (low TW but large variance of TW). The lower TW (and lower wage) of women causes a lower VoL; this is already captured in the ex-post segment. The a priori segment yields an even lower VoL for women, because it accounts for the larger variance of women's TW. Does this really indicate a low value of time?
Or rather the opposite: a higher time pressure on women resulting from unpaid duties such as domestic work and child care, which requires more flexibility with paid work? The same pattern applies to the particularly high VoL of single workers in the a priori segment: single workers are in most cases full-time workers with a large TW and a small variance of TW. The ex-post segmentation yields almost no difference, because the larger TW is balanced by a slightly lower wage of single workers, and the influence of the variance of TW disappears.

Influence of the inverse variance of TW on the VoL estimates obtained from a priori segmentation

To summarize, we find that a priori segmentation yields, for some segments, peculiar results and reversed effects compared to those of ex-post segmentation. We have presented an explanation for these problems based on the role played by the variance of working time within each segment. An additional problem of a priori segmentation can be large standard errors if the underlying segments have a small sample size.

Interaction terms versus ex-post segmentation

The problems associated with a priori segmentation call for a parsimonious use of degrees of freedom—such as reflected by the use of interaction terms, which allow for more flexibility from single interaction terms up to a full interaction model.Footnote 14 In preparation for this approach we modified the model Eqs. (13–15) by replacing each instance of the main effect parameter with an interaction term of the form \( \beta_{i} Z^{{y_{i} }} \), where \( \beta_{i} \) denotes the main effect parameter, Z the segmentation variable, and \( y_{i} \) the interaction parameter, which gives the sensitivity of \( \beta_{i} \) with respect to changes in Z. We estimated all 15 possible interaction models for each segmentation variable, but only 5 models are shown in the "Appendix" section (Table 11): four 'single interaction models' with one interaction term on one of the four main effect parameters and a 'full interaction model' with interaction terms on all four parameters. The full interaction model has the same degrees of freedom as a priori segmentation; it indeed yields very similar results. The single interaction models are more similar to ex-post segmentation. Those models with an interaction term on a parameter other than Φ can only change the magnitude of the VoL in both segments but not the ratio between the two segments, unless Φ also has an interaction term. An interaction term on Φ causes the largest deviation from ex-post segmentation. Figure 8 compares the Φ-interaction model with ex-post segmentation. It reveals only one noticeable (but still insignificant) difference for households with and without children.

VoL estimates of different population segments obtained from ex-post segmentation and from a single interaction model with an interaction term on Φ

Since the VoL is a latent variable which cannot be observed, there is no basis for comparison across models based on the VoL estimates themselves. However, from the analysis above, we conclude that a priori segmentation and the full interaction model are not appropriate in our case. The large number of degrees of freedom makes the estimation process sensitive to the variance of working time within each segment, to which ex-post segmentation is not sensitive. This difference is evident in the simulation results (Fig. 6), empirical results (Fig. 7), and in the behaviour of Eq. (13).
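The data manipulation behind the simulation of Fig. 6 is easy to reproduce. The sketch below shows only the pivoting step: the spread of TW around its mean is rescaled (mean unchanged) and the change is absorbed by opposite changes of TC so that the time constraint of Eq. (2) still holds; the re-estimation of the model on each pivoted dataset, which produces the Φ and VoL curves, is omitted. The input arrays are random illustrative values, not the MAED sample.

import numpy as np

def pivot_working_time(T_w, T_c, scale):
    # Mean-preserving change of the variance of T_w, balanced by T_c
    T_w = np.asarray(T_w, dtype=float)
    T_c = np.asarray(T_c, dtype=float)
    T_w_new = T_w.mean() + scale * (T_w - T_w.mean())
    T_c_new = T_c - (T_w_new - T_w)        # keeps T_w + T_c unchanged per person
    return T_w_new, T_c_new

rng = np.random.default_rng(0)
T_w = rng.normal(38.0, 8.0, size=737)      # illustrative weekly working times
T_c = rng.normal(60.0, 6.0, size=737)      # illustrative committed times

for scale in (0.5, 1.0, 1.5):
    T_w_s, _ = pivot_working_time(T_w, T_c, scale)
    print(f"scale={scale}: mean={T_w_s.mean():.1f} h, sd={T_w_s.std():.1f} h")

Re-estimating Eqs. (13)–(15) on each of these datasets and recording Φ and the VoL would reproduce the pattern discussed above: a larger spread of TW pushes Φ up and the VoL down, and vice versa.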
We recommend a limited number of degrees of freedom to make the estimation process more robust against the influence of the working-time variance. The most restrictive option is ex-post segmentation, which suppresses this influence entirely. From our results, however, it seems that a single interaction term on the parameter Φ can be used to account for heterogeneity in the sample without seriously affecting the robustness of the model.

Value of travel time saving (VTTS) and value of time assigned to travel (VTAT)

As explained earlier, the value of travel time savings (VTTS)—estimated from travel choice models—represents the willingness-to-pay to diminish travel time by one unit. As originally shown by DeSerpa (1971), the VTTS has two components: the opportunity cost regarding other activities (leisure or work) and the value of a reduction of the travel activity by itself. The first component is the value of leisure (VoL). The second, called the value of time assigned to travel (VTAT), depends on the travel conditions. Analytically, the formula is

$$ VTTS_{m} = VoL - VTAT_{m} $$

where VTTSm is the (mode-specific) value of travel time saving estimated from a travel choice model, VoL is the (individual-specific) value of leisure, and VTATm is the value of time assigned to travel, driven by mode-specific characteristics such as comfort and how productively in-vehicle time can be used for secondary activities (for a general derivation see Jara-Díaz 2007, Chapter 2). Equation (21) shows that unless one has an estimate of the opportunity cost of travel given by the VoL, the VTATm simply cannot be estimated. As explained earlier, this is exactly the reason why a time-use model is needed.

The VoL estimates were presented in the "Results" section of this paper, while the mode-specific VTTSm were estimated in a parallel effort by Schmid et al. (2018) from a model which combines different data types (RP, SP) and experiment types (mode, route, and shopping destination choice) using 21,681 choice observations of 744 respondents. The data used for the travel-choice model originates from the same MAED survey, which was used for the continuous choice models in this paper.Footnote 15 The SP data were collected by a follow-up survey from a subsample of 504 respondents. A mixed logit model was estimated, which accounts for unobserved heterogeneity in the VTTS and the availability of the different modes and includes scale parameters for the different data and experiment types (see e.g. Train 2009). The common data source makes the VoL and VTTSm, although estimated separately, compatible with each other. However, a consequence of the independent estimation is that possible correlations between the error terms of continuous decisions and discrete mode choices are not considered. Munizaga et al. (2008) tested the effect of a joint estimation of both types of decisions using full information maximum likelihood (FIML) in comparison to an independent estimation of both types. They had very large correlations between continuous and discrete choices (up to 0.676), possibly because they used a sample of long-distance commuters, for whom the chosen travel mode can make a substantial difference in how their day is organised. Despite the large correlations, they found only small differences between the parameters from joint and independent estimations.
Table 6 shows the correlations between the error terms of the continuous equations and the mode choice probabilities estimated from the MAED sample.Footnote 16 They are much smaller than those reported by Munizaga et al. (2008); the largest is 0.108 between the error term of working time and the choice probability of public transport. It indicates that the bias from ignoring the correlations between continuous and discrete decisions is likely to be small. Future work might include a joint estimation of continuous and discrete decisions.

Table 6 Pairwise correlations between error terms of continuous equations and Lee-transformed choice probabilities of the mode choice model

Figure 9 shows the VoL, VTTS, and VTAT estimates for different population segments; the VTTS and VTAT also for different travel modes (results are provided in the "Appendix" section, Table 12). The VTTS related to public transport is consistently lower than that of other modes including the car, which confirms a common finding (see Table 1 in Schmid et al. 2018). From Eq. (21) one can see that, for a given individual (i.e. a given value of leisure), the low willingness to pay to reduce travel time in public transport is caused by a large (predominantly non-negative) value of time assigned to public transport, as shown on the right hand side of Fig. 9. Another important aspect is the large difference between the VTAT of car and public transport (4.4 €/h on average), which persists even after controlling for user characteristics. The smallest difference arises in the urban segment with 2.2 €/h.

Value of leisure (VoL), mode-specific values of travel time savings (VTTS) and values of time assigned to travel (VTAT) for the total sample (top row) and for different population segments. Note that the VTTS estimates of the segments by age and number of workers in the household are equal to the global VTTS, because the mode choice model revealed insignificant interaction effects for the corresponding segmentation variables

The findings regarding VTAT are indeed novel and interesting. They emerge exactly due to the possibility of disentangling the two components behind the VTTS. The main finding is that travel conditions in public transport (captured only by VTAT) are perceived as more pleasant than those in a car, which seems to capture well the quality of service of public transport in Austria, contradicting the common opinion that traveling by car is generally more pleasant. We have no basis for comparison, because these are the first mode-specific VTAT estimates. But there are reasonable arguments why public transport users might perceive the time assigned to travel as more pleasant than car drivers do (and are therefore less time-sensitive): they are released from the driving task and can engage in many kinds of secondary activities, which makes the time assigned to travel more comfortable, entertaining, and useful. Flügel (2014) provides a summary of why public transport travellers may be less time-sensitive than car travellers.

Synthesis and conclusions

The aim of this study was to obtain representative estimates for the value of leisure (VoL), value of time assigned to work (VTAW), and (for the first time) mode-specific values of time assigned to travel (VTAT) of Austrian workers. The VTAT estimates have been obtained by comparing the VoL with mode-specific values of travel time savings (VTTS) from a related study based on the same data source (Schmid et al. 2018). The average VoL in the population was estimated at 8.17 €/h.
This is considerably less than the average wage rate of 12.14 €/h; the result is a negative VTAW of − 3.97 €/h, indicating that time assigned to work is valued negatively on average and people work mainly for the salary. The result seems reasonable in the sense that the VoL is not too far away from (but also not identical to) the wage rate. In their estimations of the VTTS, Schmid et al. (2018) found that the mode-effect dominates over the effect of user characteristics; the average VTTS estimates for walk, bike, car, and public transport are 12.30, 11.20, 12.40, and 7.90 €/h, respectively. An important implication is that the direct utility of time assigned to travel, expressed by the VTAT, has inverse signs for different modes: it is strictly negative for walking, cycling and car driving, and close to zero (predominantly positive) for public transport with an average of 0.27 €/h. The clear priority of public transport has not been identified in previous studies (e.g. Wardman 2004; Shires and De Jong 2009). It may indicate that the public transport benefits more than other modes from technological innovations and mobile devices such as smartphones, etc. These devices affect the perceived comfort and how in-vehicle time can be used for secondary activities such as work, communication, or entertainment. From a transport planning perspective, the results support those who claim that the conditions of travel matter greatly (e.g. Litman 2008; Lyons et al. 2013; Flügel 2014) and investments in better travel conditions are as important as investments in higher speed to attract customers to public transport. An important finding with respect to the reliability of the VoL is its sensitivity to the variance of working time in the sample: a high variance causes a low VoL and vice versa. This has not been noted in previous studies, possibly because it is not visible in the equations but results from a specific behaviour of the equations during parameter estimation. This has several implications: It might be responsible for some of the fluctuations of VoL estimates in previous results (see Table 1). It can cause biased VoL estimates of population segments if the variance of working time in the segment deviates from the global average. This problem appears only if a priori segmentation is used (i.e., if separate models are estimated for each segment). To be on the safe side, we recommend using ex-post segmentation (i.e., estimation of global parameters and calculation of the VoL in the segmented data with these parameters). In our sample it seems that single interaction terms in the global model do not seriously affect the robustness. It might affect the comparison of countries with different degrees of regulation of the labour market. Part-time workers exhibit a large variability in any labour market, but the variability of full-time workers depends on the degree of regulation. Full-time workers in a strongly regulated market (as in Austria and many other European countries) exhibit a low variance in working time because the maximum is limited by collective agreements, whereas full-time workers in a de-regulated market may exhibit a larger variability. 
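As a back-of-the-envelope check of Eq. (21) with the averages just quoted (VoL of 8.17 €/h; VTTS of 12.30, 11.20, 12.40, and 7.90 €/h for walk, bike, car, and public transport), the mode-specific VTAT can be recovered by simple subtraction. The paper computes the VTAT at the individual level, so this aggregate arithmetic is only an approximation: it reproduces the 0.27 €/h for public transport, but gives roughly 4.5 €/h instead of the reported 4.4 €/h for the gap between car and public transport.

vol = 8.17                                     # average VoL in EUR/h
vtts = {"walk": 12.30, "bike": 11.20, "car": 12.40, "public transport": 7.90}

for mode, value in vtts.items():
    vtat = vol - value                         # Eq. (21) rearranged: VTAT = VoL - VTTS
    print(f"{mode:16s} VTTS = {value:5.2f} EUR/h  ->  VTAT = {vtat:+.2f} EUR/h")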
To the best of our knowledge, this is the first study that uses a data source which has been collected with the explicit intention to estimate all components of the time-use framework introduced by Jara-Díaz and Guevara (2003): a representative sample, where all information required for modelling has been collected from the same individuals at the same time in a diary-based format. Given the high data quality and the fact that we obtained reasonable results (in terms of a plausible size and moderate variability of VoL estimates) we conclude: Going ahead towards practical usability of the values of time obtained from the time-use framework model not only requires advanced models, but (possibly even more so) advanced data. If high-quality data is used for parameter estimation, the data collection effort seems to be rewarded by more reliable results. This interpretation should be confirmed by further efforts into gathering of high-quality data.

A data-collection technique that is likely to become more important in the future is probabilistic merging. It would thus be a promising option for further research to compare the MAED data with an artificial dataset in which the expenses are imputed from the latest Austrian consumer expenditure survey. This might answer the question of how much is lost by probabilistic merging compared to simultaneous collection—which is indeed more burdensome.

Notes

1. For the VTTS to represent the opportunity cost of travel, it must include all utility components which are experienced while travelling, e.g. more comfortable seats or the availability of WIFI—but no one-off effects such as the possibility of online reservation. The respective time-dependent variables should either be multiplied by travel time (and the coefficient is added to the travel time coefficient), or they are omitted in the utility function (and the corresponding utility is implicitly included in the travel time coefficient by its average value).

2. To obtain the VTAT we use the VTTS estimates obtained by Schmid et al. (2018), which are based on the same data source.

3. There is only one experimental attempt in the literature which introduces (technical) relationships between goods and time (Jara-Díaz et al. 2016); this model yields rather implausible results, which underlines the experimental character of this branch of research. Also, the MAED data do not include expenses for external service providers, which prevents the use of another recent experimental extension: the introduction of domestic activities as a decision of households which are hiring external providers (Rosales-Salas and Jara-Díaz 2017).

4. The logarithm in Eq. (1) corresponds to the \( \alpha_{k} \) parameter in Bhat's model. A difference, however, is that the logarithm has a predefined curvature, whereas \( \alpha_{k} \) can be estimated.

5. In an empirical survey it can still happen that zero assignments occur, in particular if more detailed activity classifications are used. In this case we suggest imputing reasonable values using methods available in the literature.

6. Jara-Díaz et al. (2008) used the equation as \( X_{j}^{*} = \frac{{\varphi_{j} }}{{P_{j} \varPhi }}\left( {wT_{w}^{*} - E_{c} } \right) \), which is equivalent to Eq. (15) noting that \( E_{j} = X_{j} P_{j} \).

7. The change in normalization was done for analytical convenience only; it does not affect the results. Please note that α and β are derived from the exponents of the Cobb–Douglas function and are therefore a-dimensional—as the original parameters.
'KONTIV' is the name of a travel survey design and instrument developed in the seventies by Werner Brög and associates for the German national travel diary. It has become the standard of self-administered travel surveys in German speaking countries.

Φ captures the utility associated with freely consumed goods; the remaining two input quantities in Eq. (16) are constants and do not affect the VoL: the time constraint τ and the parameter Θ, which is set to one for normalisation.

Another problem associated with a priori segmentation is the large confidence intervals of 'urban residents' and 'single workers'; they are a result of the low sample size of these segments (see Table 5).

The main explanatory variables of TW are EC and TC [see Eq. (13)]; both indeed show a strong correlation with TW in the expected direction (see Fig. 4); the wage rate w is not an actual explanatory variable but serves to translate the money units of EC into time units.

The respective term is \( -4E_{c}/w\left(\tau - T_{c}\right)\theta_{w}\left(\varTheta + \varPhi + \theta_{w}\right) \). All other terms in Eq. (13) respond in the opposite direction to changes of θw, but these responses are smaller in size and are therefore outperformed by the former.

The standard deviation of TW was rescaled to fit in the VoL scale of Fig. 7. The rescaled values can be perceived as 'predicted VoL' from a model with the segment-specific standard deviation of TW as sole predictor: \( y = -12.36 + 239/\mathrm{stdev}\left(T_{W}\right) \).

Each of the four main effect parameters can have an interaction term independent from the other parameters, which yields 15 possible models according to combination rules: four models with one interaction term, six models with two interaction terms, four models with three interaction terms, and one model with four interaction terms.

The continuous choice models presented in this paper include the reported trips as part of the committed time and the travel cost as part of the committed expenses (see Table 3). The choice probabilities are assumed to distribute logistically and were subject to a Lee transformation in order to obtain normally distributed variables. Lee (1983) proposed a method to account for correlations in a discrete–continuous model system by transforming a priori assumed marginal distributions for each error term into the standard normal and generating a joint multivariate normal distribution of the resulting transformed error terms. Lee's method has been applied by many authors, e.g., Bhat (1998) and Habib (2013).

Ahn, N., Jimeno, J.F., Ugidos, A.: 'Mondays in the Sun:' unemployment, time use, and consumption patterns in Spain. Contrib. Econ. Anal. 271, 237–259 (2004) Aschauer, F., Hössinger, R., Axhausen, K.W., Schmid, B., Gerike, R.: Implications of survey methods on travel and non-travel activities: a comparison of the Austrian national travel survey and an innovative mobility-activity-expenditure diary (MAED). Eur. J. Transp. Infrastruct. Res. 18(1), 4–35 (2018) Aschauer, F., Rösel, I., Hössinger, R., Kreis, B., Gerike, R.: Time use, mobility and expenditure: an innovative survey design for understanding individuals' trade-off processes. Transportation 46(2), 307–339 (2019). https://doi.org/10.1007/s11116-018-9961-9 Axhausen, K., Zimmermann, A., Schönfelder, S., Rindsfüser, G., Haupt, T.: Observing the rhythms of daily life: a six-week travel diary. Transportation 29(2), 95–124 (2002) Bhat, C.: A model of post-home arrival activity participation behavior. Transp. Res.
B Methodol. 32(6), 387–400 (1998) Bhat, C.: A multiple discrete–continuous extreme value model: formulation and application to discretionary time-use decisions. Transp. Res. B Methodol. 39(8), 679–707 (2005) Bhat, C.: The multiple discrete-continuous extreme value (MDCEV) model: role of utility function parameters, identification considerations, and model extensions. Transp. Res. Part B 42(3), 274–303 (2008) Bittman, M., Wajcman, J.: The rush hour: the character of leisure time and gender equity. Soc. For. 79(1), 165–189 (2000) Brög, W., Erl, E., Ker, I., Ryle, J., Wall, R.: Evaluation of voluntary travel behaviour change: experiences from three continents. Transp. Policy 16(6), 281–292 (2009) Browning, M., Gørtz, M.: Spending time and money within the household. Economics Series Working Papers 288, University of Oxford, Department of Economics (2006) Castro, M., Bhat, C., Pendyala, R., Jara-Díaz, S.: Accommodating multiple constraints in the multiple discrete-continuous extreme value (MDCEV) choice model. Transp. Res. B 46(6), 729–743 (2012) Daly, A., Hess, S., de Jong, G.: Calculating errors for measures derived from choice modelling estimates. Transp. Res. B Methodol. 46(2), 333–341 (2012) DeSerpa, A.: A theory of the economics of time. Econ. J. 81(324), 828–846 (1971) DeSerpa, A.: Microeconomic theory and the valuation of travel time: some clarification. Reg. Urban Econ. 2(4), 401–410 (1973) Flügel, S.: Accounting for user type and mode effects on the value of travel time savings in project appraisal: opportunities and challenges. Res. Transp. Econ. 47, 50–60 (2014) Frey, B.S., Stutzer, A.: Happiness and Economics: How the Economy and Institutions Affect Well-Being. Princeton University Press, Princeton (2002) Gerike, R., Gehlert, T., Leisch, F.: Time use in travel surveys and time use surveys—two sides of the same coin? Transp. Res. A 76, 4–24 (2015) Gronau, R., Hamermesh, D.S.: Time vs. goods: the value of measuring household production technologies. Rev. Income Wealth 52(1), 1–16 (2006) Habib, K.: A joint discrete-continuous model considering budget constraint for the continuous part: application in joint mode and departure time choice modelling. Transp. A Transp. Sci. 9(2), 149–177 (2013) Hensher, D.A., Wang, B.: Productivity foregone and leisure time corrections of the value of business travel time savings for land passenger transport in Australia. Road Transp. Res. A J. Aust. N. Z. Res. Pract. 25(2), 15 (2016) Jara-Díaz, S.: Consumer's surplus and the value of travel time savings. Transp. Res. B Methodol. 24(1), 73–77 (1990) Jara-Díaz, S.: On the goods-activities technical relations in the time allocation theory. Transportation 30(3), 245–260 (2003) Jara-Díaz, S.: Allocation and valuation of travel-time savings. In: Hensher, D.A., Button, K.J. (eds.) Handbook of Transport Modelling, 2nd edn, pp. 363–379. Emerald Group Publishing Limited, Bingley (2007) Jara-Díaz, S., Astroza, S.: Revealed willingness to pay for leisure link between structural and microeconomic models of time use. Transp. Res. Rec. 2382, 75–82 (2013) Jara-Díaz, S., Guerra, R.: Modeling activity duration and travel choice from a common microeconomic framework. Paper presented at IATBR 2003—10th International Conference on Travel Behaviour Research, 10–15 August 2003, Lucerne, CH (2003) Jara-Díaz, S., Guevara, C.: Behind the subjective value of travel time savings—the perception of work, leisure, and travel from a joint mode choice activity model. J. Transp. Econ. 
Policy 37(1), 29–46 (2003) Jara-Díaz, S., Rosales-Salas, J.: Understanding time use: daily or weekly data? Transp. Res. A 76, 38–57 (2015) Jara-Díaz, S., Rosales-Salas, J.: Beyond transport time: a review of time use modeling. Transp. Res. A Policy Pract. 97, 209–230 (2017) Jara-Díaz, S., Munizaga, M., Palma, C.: The Santiago TASTI survey (time assignment travel and income). In: ISCTSC 7th International Conference on Travel Survey Methods, San José, Costa Rica (2004) Jara-Díaz, S., Munizaga, M., Greeven, P., Guerra, R., Axhausen, K.: Estimating the value of leisure from a time allocation model. Transp. Res. B 42(10), 946–957 (2008) Jara-Díaz, S., Munizaga, M., Olguín, J.: The role of gender, age and location in the values of work behind time use patterns in Santiago, Chile. Pap. Reg. Sci. 92, 87–102 (2013) Jara-Díaz, S., Astroza, S., Bhat, C., Castro, M.: Introducing relations between activities and goods consumption in microeconomic time use models. Transp. Res. B Methodol. 93, 162–180 (2016) Konduri, K., Astroza, S., Sana, B., Pendyala, R., Jara-Díaz, S.: Joint analysis of time use and consumer expenditure data. Transp. Res. Rec. 2231, 53–60 (2011) Lee, L.F.: Generalized econometric models with selectivity. Econom. J. Econom. Soc. 51, 507–512 (1983) Litman, T.: Valuing transit service quality improvements. J. Public Transp. 11(2), 3 (2008) Löchl, M., Axhausen, K.W., Schönfelder, S.: Analysing Swiss longitudinal travel data. Paper Presented at the 5th Swiss Transport Research Conference, Ascona (2005) Lyons, G., Jain, J., Susilo, Y., Atkins, S.: Comparing rail passengers' travel time use in Great Britain between 2004 and 2010. Mobilities 8(4), 560–579 (2013) Mokhtarian, P.L., Chen, C.: TTB or not TTB, that is the question: a review and analysis of the empirical literature on travel time (and money) budgets. Transp. Res. A Policy Pract. 38(9–10), 643–675 (2004) Munizaga, M., Jara-Díaz, S., Greeven, P., Bhat, C.: Econometric calibration of the joint time assignment-mode choice model. Transp. Sci. 42(2), 208–219 (2008) Powell, A.A., McLaren, K.R., Pearson, K.R., Rimmer, M.T.: Cobb-douglas utility-eventually! (No. ip-80). Victoria University, Centre of Policy Studies/IMPACT Centre (2002) Robinson, J., Godbey, G.: Time for Life: The Surprising Ways Americans use Their Time. Penn State Press, University Park (2010) Rosales-Salas, J., Jara-Díaz, S.: A time allocation model considering external providers. Transp. Res. B Methodol. 100, 175–195 (2017) Schmid, B., Jokubauskaite, S., Aschauer, F., Peer, S., Hössinger, R., Gerike, R., Jara-Díaz, S., Axhausen, K.W.: A pooled RP/SP mode, route and destination choice model to disentangle mode and user-type effects in the value of travel time savings. Transp. Res. A Policy Pract. 124, 262–294 (2019). https://doi.org/10.1016/j.tra.2019.03.001 Shires, J.D., De Jong, G.C.: An international meta-analysis of values of travel time savings. Eval. Program Plan. 32(4), 315–325 (2009) Socialdata: The New KONTIV-Design (NKD). Munich, http://www.socialdata.de/info/KONTIV_engl.pdf (2009). Accessed 24 Aug 2017 Train, K.E.: Discrete Choice Methods with Simulation. Cambridge University Press, Cambridge (2009) UN: United Nations Statistics Division. https://unstats.un.org/unsd/classifications (2018). Accessed 29 Oct 2018 Wardman, M.: Public transport values of time. Transp. Policy 11(4), 363–377 (2004) Wardman, M., Lyons, G.: The digital revolution and worthwhile use of travel time: implications for appraisal and forecasting. 
Transportation 43(3), 507–530 (2016)

Open access funding provided by Austrian Science Fund (FWF). Reinhard Hössinger, Regine Gerike, and Florian Aschauer gratefully thank the Austrian Science Fund (FWF) for funding the research project Valuing (Travel) Time (Award Number I 1491-G11), from which this article arises. Sergio Jara-Díaz gratefully acknowledges funding by Fondecyt, Chile, Grant 1160410, and the Complex Engineering Systems Institute, ISCI, Grant CONICYT: FB0816. We also thank Friedrich Leisch, head of the Institute of Applied Statistics and Computing at the University of Natural Resources and Life Sciences Vienna, for his statistical advice, as well as Ashleigh Möller from the Institute of Transport Planning and Road Traffic at the TU Dresden for proofreading the manuscript.

Institute for Transport Studies, University of Natural Resources and Life Sciences, Vienna, Austria: Reinhard Hössinger & Florian Aschauer
Department of Civil Engineering, University of Chile, Santiago, Chile: Sergio Jara-Díaz
Institute of Applied Statistics and Computing, University of Natural Resources and Life Sciences, Vienna, Austria: Simona Jokubauskaite
Institute for Transport Planning and Systems, ETH Zurich, Zurich, Switzerland: Basil Schmid & Kay W. Axhausen
Institute for Multi-Level Governance and Development, WU Vienna, Vienna, Austria: Stefanie Peer
Integrated Transport Planning and Traffic Engineering, TU Dresden, Dresden, Germany: Regine Gerike

RH: literature review, data analysis, manuscript writing. FA: data collection, data description, manuscript writing. SJ-D: theoretical model, interpretation, manuscript editing. SJ: software development for data analysis, manuscript editing. BS: provision of choice data and models, manuscript editing. SP: introduction, conclusions, manuscript editing. KWA: content planning, interpretation, manuscript editing. RG: project leader, research approach, sample design, manuscript editing.

Correspondence to Reinhard Hössinger.

Table 7 Linear models for adjustment of activity assignments; the parameters indicate how 1 h less of work is replaced by additional time spent on other activities (and vice versa) to meet the time constraint; left side: adjustment model for the increase in working time (if reported working time was lower than effective working time); right side: adjustment model for reduction of working time (if reported working time was higher than effective working time)

The two models in Table 7 correspond to our assumption that an incidental increase of working time beyond the usual level (left model) causes different re-arrangement patterns than an incidental reduction below the usual level (right model). Furthermore, we assume strictly substitutional relationships between work and other activities, such that an increase of working time causes a reduction of other activities and vice versa. This was enforced by restricting the lower bound of the parameters to zero. In three cases, we obtained parameters at the lower bound, because the estimated values were negative, although they did not significantly differ from zero: (1) Domestic work is not reduced as the working time increases. This might indicate that domestic activities cannot be reduced easily due to their strongly committed nature. (2) Travel is not increased as the working time decreases. This makes sense due to the complementary relationship between work and the travel to work.
The parameter might indeed be negative, but it is fixed here to zero because of the insignificant deviation. (3) Personal activities are also not increased as the working time decreases. This might indicate that an additional demand for personal activities is usually not the reason why the working time is reduced below its usual level. Figure 10 illustrates by means of an example (a particular person) how the reported expenses were adjusted to match the balance between income and expenses, while keeping the inter-person variability of expenses. The aim of the adjustment is to equal the total expenses (sum of reported expenses) to the predicted expenses such that the ratio reported/predicted is one, irrespective of whether or not the reported expenditure shares match the predicted shares. The person in Fig. 10 reported only 85.5% of predicted expenses (obtained as income minus predicted savings from the savings model in Table 8). The missing 14.5% were imputed by increasing those expenditure shares that were below the predicted shares obtained from the expenditure shares model in Table 9 (in this case: Housing, Food, Accommodation, Clothes etc.). The remaining shares, which equal or exceed the predicted shares, were not increased (Health, Electronic, Financing, Other). Please note that the predicted shares (grey line) are already adapted to the individual's household and personal characteristics. The adjustment works inversely for persons whose reported income exceeds the predicted income (ratio reported/predicted > 1). Example for the adjustment of reported expenditures to predicted expenditures (calculated as reported income minus predicted savings obtained from the model in Table 8) using the predicted expenditure shares (gained from the model in Table 9) as the benchmark Table 8 Linear model for prediction of savings using household and personal characteristics as predictors (R2 = 0.269, F-statistic = 9.209, p value = 0.000) Table 9 Estimated parameters of a multinomial logit model for prediction of expenditure shares by category using household and personal characteristics as predictors; reference category = 'other expenditures' (McFadden's Pseudo-R2 = 0.130 compared to the constants only model) Table 10 Model variables along with their mean values and standard deviation (in brackets) across different population segments (time use variables: h/week; expenditures: €/week, wage: €/h) Table 11 Values of leisure estimated from different models (ex-post segmentation without and with interaction terms, a priori segmentation) across different population segments (€/h; in brackets: standard errors) Table 12 Values of leisure (VoL), values of travel time savings (VTTS) and values of time assigned to travel (VTAT) across different population segments; VTTS and VTAT also across travel modes (all in €/h) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Hössinger, R., Aschauer, F., Jara-Díaz, S. et al. A joint time-assignment and expenditure-allocation model: value of leisure and value of time assigned to travel for specific population segments. Transportation 47, 1439–1475 (2020). 
https://doi.org/10.1007/s11116-019-10022-w

Issue Date: June 2020

DOI: https://doi.org/10.1007/s11116-019-10022-w

Value of leisure
Value of time assigned to travel
Expenditure allocation
Notations for Laplacian: $\nabla^2$ vs. $\Delta$

For a (sufficiently smooth) function $f\colon \Bbb R^n\to\Bbb R$, the Laplacian of $f$ is defined to be $\sum_{j=1}^n \frac{\partial^2 f}{\partial x_j^2}$. There are two notations for the Laplacian that I have seen being commonly used, viz., $\nabla^2 f$ and $\Delta f$. The first one is mostly used by physicists, whereas mathematicians tend to prefer the latter one (at least in my area of study, where $\nabla^2 f$ is reserved for the Hessian of $f$). I would like to ask if anyone knows the origin of these two usages, in particular regarding the people who first introduced the symbols and the people who popularized them.

mathematics notation calculus differential-equations

J. W. Tanner BigbearZzz

What follows is from A History of Vector Analysis by Michael John Crowe (1967; 1985 Dover corrected reprint).

(from middle of p. 167) In Heaviside's later papers of 1883 and 1884 use was made of vectors, but no new principles were $\text{introduced.}^{35}$

(from middle of p. 180) ${}^{35}$This should perhaps be qualified by the statement that [the symbol] $\nabla^2$ made its first appearance in an 1884 paper. See (5,I; 338). No comment was given by Heaviside about its meaning. He seems to use it as $+\left(\frac{d^2}{dx^2} + \frac{d^2}{dy^2} + \frac{d^2}{dz^2}\right)$ rather than, as Maxwell and Tait had done, as the negative of the above.

I have no idea what specific reference (paper/book title) Crowe's cryptic code refers to. There is no bibliography (by chapter or for the entire book). After several minutes of looking through the book (preface, beginning of chapter notes, various things at end of the book, etc.), I gave up trying to figure out what it means. I suspect the reference is to Volume I of Heaviside's collected works, and this is probably mentioned somewhere in pages before this, but I don't have time now to keep looking.

(rant) I think there's an important lesson to be learned here for anyone wishing to write something like Crowe's book: Don't use your own private bibliographic code unless you clearly indicate how to decipher it, in a location that a casual user can reasonably be expected to find without much difficulty.

I think the reason some people used a negative sign is because of the influence of quaternions, where squares of i, j, k are negative.

Possibly useful to look over is Vector Analysis by Gibbs/Wilson (1901), which I believe is the primary text that popularized and spread vector ideas beyond the relatively few researchers that had up to that time been using them.

Finally, see p. 2 of Peter Guthrie Tait, [untitled address to the Mathematics and Physics section], pp. 1-8 in the 2nd paging of Report of the Forty-First Meeting of the British Association for the Advancement of Science (Edinburgh, 2−9 August 1871), John Murray (London), 1872, cv + 207 + 281 + iv + 83 pages. (If anyone is curious as to how I happen to know about Tait's comments, see these 2 comments.)

Dave L Renfro

$\begingroup$ Thank you for the answer! I really appreciate your effort of trying to decipher the bibliographic code even though it wasn't very successful.
$\endgroup$ – BigbearZzz $\begingroup$ It's weird that Crowe doesn't include the vector component symbols $\vec{i}$, $\vec{j}$, $\vec{k}$ in his representation of Heaviside's meaning. $\endgroup$ – Spencer $\begingroup$ @Spencer: Unit vectors are not needed, since $\nabla^2$ is a scalar. As for the use of unit vectors in $\nabla$ and in $\nabla \cdot \nabla$ (formal dot product), this appears in several places (e.g. p. 131). From bottom of p. 136: >Thus in Cartesian analysis $\nabla^2$ can be defined as either plus or minus $\left(\frac{d^2}{dx^2} + \frac{d^2}{dy^2} + \frac{d^2}{dz^2}\right).$ [Lord] Kelvin used the positive sign, whereas Maxwell used the negative sign and noted: "The negative sign is employed here in order to make our expressions consistent with those in which Quaternions are employed." $\endgroup$ – Dave L Renfro $\begingroup$ (5,I;338) means Heaviside's Electrical Papers, London, 1892 (referenced in Note 5 to Crowe's chapter), volume I, p. 338. This is probably a typo, there is no $\nabla^2$ on p. 338, but there is one on p. 358 in the equation $\nabla^2H=\frac{4\pi\mu}{\rho}\dot{H}$ for pure conductors. Published as The Induction of Currents in Cores in The Electrician, p. 583ff, May 3, 1884. $\endgroup$ – Conifold $\begingroup$ @Conifold: The '5' as referring to "Note 5" in the same chapter was one of the things I thought might be intended (based on "Concerning bibliography" paragraph in the Preface), but the fact that nothing relevant was on p. 338 led me to doubt that meaning. Yesterday I returned to this issue for a few minutes and tried looking in Macfarlane's bibliography, but "5,I; 338" didn't lead me to anything. I'll update the answer to include the reference when I get a chance later. $\endgroup$ Jeff Miller's site gives the first occurrence of $\Delta$ as The symbol $\Delta$ for the Laplacian operator (also represented by $\nabla^2$) was introduced by Robert Murphy in 1833 in Elementary Principles of the Theories of Electricity. (Kline, page 786) The first use of the term Laplace's Operator is given as The term LAPLACE'S OPERATOR (for the differential operator $\nabla^2$) was used in 1873 by James Clerk Maxwell in A Treatise on Electricity and Magnetism (p. 29): "...an operator occurring in all parts of Physics, which we may refer to as Laplace's Operator" (OED). The first use by physicist of the term Laplacian for the $\nabla^2$ operator is given on the same page as: LAPLACIAN (as a noun, for the differential operator $\nabla^2$) was used in 1935 by Pauling and Wilson in Introd. Quantum Mech. (OED). Unfortunately, these entries do not make clear when the $\nabla^2$ notation was first introduced. nwrnwr $\begingroup$ This is very informative, thanks! $\endgroup$ I believe the notation $\nabla^2 f$ comes from $$ \nabla^2 f = \nabla\cdot\nabla f = \operatorname{div}(\operatorname{grad} f) $$ See here Gerald EdgarGerald Edgar $\begingroup$ Do you happen to know who was the first mathematician/scientist to use $\nabla^2$ to represent $\nabla \cdot \nabla$? It's a very poor choice of notation in my opinion. $\endgroup$ $\begingroup$ @BigbearZzz: $\nabla^2$ was a very natural choice of notation for the times. Keep in mind that we're talking about mid to late 1800s English works (mostly), and at the time the Calculus of Operations and related "algebrizations" (e.g. Cayley's quantics) was in fashion in Great Britain. For example, see The calculus of operations and the rise of abstract algebra by Elaine H. Koppelman (1971), especially p. 238. 
$\endgroup$ $\begingroup$ @DaveLRenfro It still seems weird at the very least since one would expect the mixed terms like $\frac{\partial^2}{\partial x\partial y}$ to be presented in $\nabla^2$. Using $|\nabla|^2$ would make more sense I think (unless the symbol $|v|$ for vector norm didn't exist back then). $\endgroup$ – BigbearZzz

$\begingroup$ @BigbearZzz: If you look through the 1901 Gibbs/Wilson book, you'll find the dot product version almost exclusively used. As for using $|\nabla|^2,$ I guess even if the norm symbol was in use then (and I don't think it was), $\nabla^2$ would probably be preferred simply to avoid clutter and because the context would be clear and I don't think $\nabla^2$ was manipulated algebraically in any nontrivial way. Finally, at least with quaternions you don't have cross terms. The quaternion product $(a\text{i}+b\text{j}+c\text{k})(a\text{i}+b\text{j}+c\text{k})$ is equal to $-(a^2+b^2+c^2).$ $\endgroup$ – Dave L Renfro

$\begingroup$ @DaveLRenfro The quaternion product analogy makes so much sense, I have never thought about it that way before. Thank you very much. $\endgroup$
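A side note on that last exchange (not part of the original thread): the absence of cross terms follows directly from the quaternion rules $i^2=j^2=k^2=-1$, $ij=-ji$, $jk=-kj$, $ki=-ik$, since

$(a\text{i}+b\text{j}+c\text{k})^2 = a^2\text{i}^2+b^2\text{j}^2+c^2\text{k}^2 + ab(\text{ij}+\text{ji}) + bc(\text{jk}+\text{kj}) + ca(\text{ki}+\text{ik}) = -(a^2+b^2+c^2),$

with every mixed term cancelling in pairs. This is exactly why the quaternion-based $\nabla^2$ carried a negative sign without introducing any cross derivatives.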
Difference between revisions of "Spline" Jjtorrens (talk | contribs) Jjg (talk | contribs) (Refs, code tidy) A function $s_m(\Delta_n;x)$ {{MSC|}} <!-- <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086800/s0868001.png" /> ---> {{TEX|done}} which is defined and has continuous $(m-1)$-st A function $s_m(\Delta_n;x)$ which is defined and has continuous $(m-1)$-st derivative on an interval $[a,b]$, and which coincides on each interval $[x_i,x_{i+1}]$ formed by the partition $\Delta_n$: $\alpha=x_0<x_1<\cdots<x_n=b$ with a certain algebraic polynomial of degree at most $m$. Splines can be represented in the following way: derivative on an interval $[a,b]$, and which coincides on each interval $[x_i,x_{i+1}]$ formed by the partition $\Delta_n$: $\alpha=x_0<x_1<\cdots<x_n=b$ <!-- <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086800/s0868005.png" />: <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086800/s0868006.png" /> ---> with a certain algebraic polynomial of degree at most $m$. Splines can be represented in the following way: \[ s_m(\Delta_n;x)=P_{m-1}(x) + \sum_{k=0}^{n-1}c_k (x-x_k)^m_{+},\] <!-- <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086800/s0868008.png" /></td> </tr></table> ---> where the $c_k$ are real numbers, $P_{m-1}(x)$ is a polynomial of degree at most $m-1$, and $(x-t)^m_{+}=\max\left(0,(x-t)^m\right)$. where the $c_k$ The points $\{x_i\}_{i=1}^{n-1}$ are called the knots of the spline. If a spline $s_m(\Delta_n;x)$ has a continuous $(m-k)$-th derivative on $[a,b]$ for $k\geq 1$ and at the knots the $(m-k+1)$-st derivative of the spline is discontinuous, then it is said to have defect $k$. Besides these polynomial splines, one also considers more general splines ($L$-splines), which are "tied together" from solutions of a homogeneous linear differential equation $Ly=0$, splines ($L_g$-splines) with different smoothness properties at various knots, and also splines in several variables. Splines and their generalizations often occur as extremal functions when solving extremum problems, e.g. in obtaining best quadrature formulas and best numerical differentiation formulas. Splines are applied to approximate functions (see [[Spline approximation|Spline approximation]]; [[Spline interpolation|Spline interpolation]]), and in constructing approximate solutions of ordinary and partial differential equations. They can also be used to construct orthonormal systems with good convergence properties. are real numbers, $P_{m-1}(x)$ <!-- <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086800/s08680010.png" /> ---> is a polynomial of degree at most $m-1$, and $(x-t)^m_{+}=\max\left(0,(x-t)^m\right)$. The points $\{x_i\}_{i=1}^{n-1}$ are called the knots of the spline. If a spline $s_m(\Delta_n;x)$ has a continuous $(m-k)$-th derivative on $[a,b]$ for $k\geq 1$ and at the knots the $(m-k+1)$-st derivative of the spline is discontinuous, then it is said to have defect $k$. 
Besides these polynomial splines, one also considers more general splines ($L$-splines), which are "tied together" from solutions of a homogeneous linear differential equation $Ly=0$, splines ($L_g$-splines) with different smoothness properties at various knots, and also splines in several variables. Splines and their generalizations often occur as extremal functions when solving extremum problems, e.g. in obtaining best quadrature formulas and best numerical differentiation formulas. Splines are applied to approximate functions (see [[Spline approximation|Spline approximation]]; [[Spline interpolation|Spline interpolation]]), and in constructing approximate solutions of ordinary and partial differential equations. They can also be used to construct orthonormal systems with good convergence properties. ====References==== <table><TR><TD valign="top">[1]</TD> <TD valign="top"> S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics" , Moscow (1976) (In Russian)</TD></TR></table> |valign="top"|{{Ref|StSu}}||valign="top"| S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics", Moscow (1976) (In Russian) ====Comments==== I.J. Schoenberg is generally acknowledged to be the "father" of splines; these functions were named and singled out for special study by him in the middle of the 1940's. Since 1960 the field of spline interpolation and approximation has grown enormously. For a reasonably complete bibliography of papers dealing with spline functions that were published before 1973, see [[#References|[a4]]]; a valuable bibliography is also contained in [[#References|[a3]]]. I.J. Schoenberg is generally acknowledged to be the "father" of splines; these functions were named and singled out for special study by him in the middle of the 1940's. Since 1960 the field of spline interpolation and approximation has grown enormously. For a reasonably complete bibliography of papers dealing with spline functions that were published before 1973, see {{Cite|Sc4}}; a valuable bibliography is also contained in {{Cite|Sc3}}. <table><TR><TD valign="top">[a1]</TD> <TD valign="top"> I.J. Schoenberg, "Contributions to the problem of approximation of equidistant data by analytic functions. Part A: On the problem of smoothing of graduation. A first class of analytic approximation formulae" ''Quart. Appl Math.'' , '''4''' (1946) pp. 45–99</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> I.J. Schoenberg, "Contributions to the problem of approximation of equidistant data by analytic functions. Part B: On the problem of osculatory formulae" ''Quart. Appl. Math.'' , '''4''' (1946) pp. 112–141</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> L.L. Schumaker, "Spline functions, basic theory" , Wiley (1981)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> F. Schurer, "A bibliography on spline functions" K. Böhmer (ed.) G. Meinardus (ed.) W. Schempp (ed.) , ''Spline-Funktionen'' , B.I. Wissenschaftsverlag Mannheim (1974) pp. 315–415</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top"> P.M. Prenter, "Splines and variational methods" , Wiley (1975)</TD></TR></table> |valign="top"|{{Ref|Pr}}||valign="top"| P.M. Prenter, "Splines and variational methods", Wiley (1975) |valign="top"|{{Ref|Sc}}||valign="top"| I.J. Schoenberg, "Contributions to the problem of approximation of equidistant data by analytic functions. Part A: On the problem of smoothing of graduation. A first class of analytic approximation formulae" ''Quart. Appl Math.'', '''4''' (1946) pp. 
45–99 |valign="top"|{{Ref|Sc2}}||valign="top"| I.J. Schoenberg, "Contributions to the problem of approximation of equidistant data by analytic functions. Part B: On the problem of osculatory formulae" ''Quart. Appl. Math.'', '''4''' (1946) pp. 112–141 |valign="top"|{{Ref|Sc3}}||valign="top"| L.L. Schumaker, "Spline functions, basic theory", Wiley (1981) |valign="top"|{{Ref|Sc4}}||valign="top"| F. Schurer, "A bibliography on spline functions" K. Böhmer (ed.) G. Meinardus (ed.) W. Schempp (ed.), ''Spline-Funktionen'', B.I. Wissenschaftsverlag Mannheim (1974) pp. 315–415 A function $s_m(\Delta_n;x)$ which is defined and has continuous $(m-1)$-st derivative on an interval $[a,b]$, and which coincides on each interval $[x_i,x_{i+1}]$ formed by the partition $\Delta_n$: $\alpha=x_0<x_1<\cdots<x_n=b$ with a certain algebraic polynomial of degree at most $m$. Splines can be represented in the following way: \[ s_m(\Delta_n;x)=P_{m-1}(x) + \sum_{k=0}^{n-1}c_k (x-x_k)^m_{+},\] where the $c_k$ are real numbers, $P_{m-1}(x)$ is a polynomial of degree at most $m-1$, and $(x-t)^m_{+}=\max\left(0,(x-t)^m\right)$. The points $\{x_i\}_{i=1}^{n-1}$ are called the knots of the spline. If a spline $s_m(\Delta_n;x)$ has a continuous $(m-k)$-th derivative on $[a,b]$ for $k\geq 1$ and at the knots the $(m-k+1)$-st derivative of the spline is discontinuous, then it is said to have defect $k$. Besides these polynomial splines, one also considers more general splines ($L$-splines), which are "tied together" from solutions of a homogeneous linear differential equation $Ly=0$, splines ($L_g$-splines) with different smoothness properties at various knots, and also splines in several variables. Splines and their generalizations often occur as extremal functions when solving extremum problems, e.g. in obtaining best quadrature formulas and best numerical differentiation formulas. Splines are applied to approximate functions (see Spline approximation; Spline interpolation), and in constructing approximate solutions of ordinary and partial differential equations. They can also be used to construct orthonormal systems with good convergence properties. [StSu] S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics", Moscow (1976) (In Russian) I.J. Schoenberg is generally acknowledged to be the "father" of splines; these functions were named and singled out for special study by him in the middle of the 1940's. Since 1960 the field of spline interpolation and approximation has grown enormously. For a reasonably complete bibliography of papers dealing with spline functions that were published before 1973, see [Sc4]; a valuable bibliography is also contained in [Sc3]. [Pr] P.M. Prenter, "Splines and variational methods", Wiley (1975) [Sc] I.J. Schoenberg, "Contributions to the problem of approximation of equidistant data by analytic functions. Part A: On the problem of smoothing of graduation. A first class of analytic approximation formulae" Quart. Appl Math., 4 (1946) pp. 45–99 [Sc2] I.J. Schoenberg, "Contributions to the problem of approximation of equidistant data by analytic functions. Part B: On the problem of osculatory formulae" Quart. Appl. Math., 4 (1946) pp. 112–141 [Sc3] L.L. Schumaker, "Spline functions, basic theory", Wiley (1981) [Sc4] F. Schurer, "A bibliography on spline functions" K. Böhmer (ed.) G. Meinardus (ed.) W. Schempp (ed.), Spline-Funktionen, B.I. Wissenschaftsverlag Mannheim (1974) pp. 315–415 Spline. Encyclopedia of Mathematics. 
URL: http://www.encyclopediaofmath.org/index.php?title=Spline&oldid=25871 This article was adapted from an original article by Yu.N. Subbotin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://www.encyclopediaofmath.org/index.php?title=Spline&oldid=25871" TeX done
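As a small illustration of the truncated-power representation $s_m(\Delta_n;x)=P_{m-1}(x)+\sum_{k}c_k(x-x_k)^m_{+}$ defined above (this sketch is not part of the encyclopedia article; the knots and coefficients below are arbitrary examples), a minimal Python evaluation routine might look as follows:

import numpy as np

def truncated_power_spline(x, poly_coeffs, knots, c, m):
    # Evaluate s_m(x) = P_{m-1}(x) + sum_k c_k * (x - x_k)_+^m,
    # where poly_coeffs are the coefficients of P_{m-1} in numpy.polyval order.
    x = np.asarray(x, dtype=float)
    s = np.polyval(poly_coeffs, x)                      # polynomial part P_{m-1}
    for xk, ck in zip(knots, c):
        s += ck * np.clip(x - xk, 0.0, None) ** m       # truncated power (x - x_k)_+^m
    return s

# Example: a cubic spline (m = 3, so P_{m-1} is quadratic) on [0, 1] with knots 0.25, 0.5, 0.75
xs = np.linspace(0.0, 1.0, 11)
print(truncated_power_spline(xs, poly_coeffs=[1.0, -0.5, 0.2], knots=[0.25, 0.5, 0.75], c=[4.0, -7.0, 3.0], m=3))

Because each term $(x-x_k)^m_{+}$ has $m-1$ continuous derivatives at its knot, any function built this way automatically has the smoothness required in the definition; a non-zero $c_k$ corresponds to a jump in the $m$-th derivative at the knot $x_k$.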
Unfettered migration is an economic free lunch.

The Economics 101 of Immigration
The Empirical Literature
High-Skilled Innovation
Net Effects
Identification and the Mariel Boatlift
Inequality and Welfare
Assimilation, Crime and Refugees
Spatial Misallocation
Migration as a Free Lunch

Within hours of being inaugurated, President Biden rolled back various immigration restrictions that had been put in place by the Trump administration across the last 4 years, gesturing towards a significantly more progressive stance on the free movement of people into the US. Inevitably, this will reopen debates around the costs and benefits of more or less migration. In particular, this involves looking not just at the effects of migration on the receiving country, but also on the country which people are leaving.

Normally, I would just dive right into looking at what the empirical literature says, but instead I want to consider some first-pass ways of framing the question of whether immigration is good for the receiving country.

The first is just to consider a simple supply and demand model of the labour market - although this sucks as a representation of the labour market, the fact that an influx of immigrants both increases the supply of labour and by their spending increases the demand for labour means that at first blush, the effect on wages is ambiguous. Notice this already debunks the absurdly common criticism that "basic Economics 101 principles" tell us that an outwards shift of the supply curve would ceteris paribus lower prices, which means lower wages for domestic workers. This lump of labour fallacy, which assumes there is a fixed amount of work that needs to be done, is silly because all else is not equal - as we noted, labour demand changes too. So if your Econ 101 class did not teach you to move more than one curve at a time, please ask for your money back.

Secondly, we can consider what happens in a Solow model with a Cobb-Douglas production function and the standard equation of capital stock change.

\[ Y = K^\alpha L^{1-\alpha} \]
\[ \dot{K} = sY - \delta K \]

We know that at the Solow model's steady state, we have \(\dot{K}=0\).

\[ sK^{*\alpha} L^{1-\alpha}=\delta K^* \]
\[ K^{*(1-\alpha)} = \frac{s}{\delta} L^{1-\alpha} \]
\[ K^* = \left(\frac{s}{\delta}\right)^\frac{1}{1-\alpha} L \]
\[ Y^* = \left(\frac{s}{\delta}\right)^\frac{\alpha}{1-\alpha} L \]

In particular, what we are actually interested in is output per person \(y\). Notice that this doesn't depend on the size of the labour force - as such, a one-off influx of immigrants will have no effect on living standards. This would suggest that the only effect of immigration (if at all) will be in the short run, where per capita output may drop temporarily, depending on how long is needed for capital to accumulate to reach its previous capital-labour ratio.

\[ y^* = \left(\frac{s}{\delta}\right)^\frac{\alpha}{1-\alpha} \]

Thirdly, we can augment this Solow model to include endogenous growth, since we know that countries do grow in real life. We can bolt on a process of Total Factor Productivity improvement onto the basic Solow setup, where this is dependent on a portion of the labour force working in research.

\[ Y = A K^\alpha L_y^{1-\alpha} \]
\[ \dot{A} = zAL_a \]
\[ L = L_y + L_a \]
\[ L_a = \ell L \]

We can write out the growth rate of the various variables to get the balanced growth path. We get the first two from their equations of change, while we take the population to be constant as before.
And for \(g_K\) to be constant on the balanced growth path when \(s\) and \(\delta\) are fixed parameters, \(\frac{Y}{K}\) must be constant. The only way the output-capital ratio is constant is if output grows at the same rate as capital.

\[ g_Y = g_A + \alpha g_K + (1-\alpha) g_{L_y} \]
\[ g_A = \frac{\dot{A}}{A} = zL_a = z\ell L \]
\[ g_K = \frac{\dot{K}}{K} = s\frac{Y}{K} - \delta \]
\[ g_{L_y} = 0 \]
\[ g^*_Y = g^*_K \]

Substituting \(g^*_Y = g^*_K\) and \(g_{L_y} = 0\) into the growth-accounting equation gives \((1-\alpha)g^*_Y = g_A = z\ell L\). This gives us the balanced growth path of output - insofar as there is no population growth, this is the growth rate of output per capita too.

\[ g^*_Y = \frac{1}{1-\alpha} z\ell L \]

Although this model is by no means a perfect description of growth in practice, it does tell us that a larger population due to immigrants might have an actively positive effect on growth, because there could be scale effects on innovation in larger populations.

Fourthly, we can consider Krugman's parable about trade: a new entrepreneur uses some secret technology that converts domestic exports into consumer goods. Hailed as a magician, investigative reporters uncover the truth - it isn't a machine, it's just that entrepreneur exporting goods and buying imports. It's not entirely clear where the substantive difference is, and yet people worry a lot more about the dislocation caused once it is framed in the second way. Likewise, we can draw parallels between immigration and the process of the population having more babies. The substantive effect is the same, and yet we don't worry about every generation being larger as potentially pushing wages or employment downwards - immigration just represents babies from elsewhere. This is why it should be a reasonable guess that the effects of immigration are fairly ambiguous.

To be entirely clear, all four of our aggregate approaches so far are approximations, and importantly they miss out a crucial component of the debate i.e. heterogeneity in labour markets. If the composition and characteristics of immigrants are identical to the native workforce, there will be no information lost from taking the aggregate approach. However, disparities could cause shifts we don't see by treating labour as homogenous. For example, an abundance of low-skilled workers might alter their relative wages within the country. Another example would be how immigrants may end up being complements rather than substitutes for native workers, due to having differing skillsets. So let's look at the empirical literature to see the sorts of effects that actually occur.

Firstly, we can try to resolve the ambiguity mentioned previously - between 1980 and 2000, the increased demand caused by immigrants meant that each immigrant created 1.2 native jobs in the US i.e. the effect of greater demand dominated1. Secondly, even the increased labour supply can be helpful - because immigrants accept lower wages, this decreases the average labour cost and means that more jobs for natives are created2. Consequently, the legalisation of more immigration can raise income for native workers, based on a model calibrated to the economies of the US and Mexico from 2000 to 20103. Notice these benefits occur even when the immigrants are low-skilled. This is because of a third mechanism, where low-skilled immigrants caused low-skilled native workers in the US to pursue careers in which they have the comparative advantage, such as those which are less manual-intensive and involve more communications4.
Notice that this results from the fact that even within the category of low-skilled workers, immigrants are imperfect substitutes for natives - a good example would be in California, where immigration stimulated the demand and wages of native workers between 1960 and 20045. Similarly, the wages, employment and occupational mobility of unskilled native workers in Denmark actually saw an increase as a result of low-skilled immigrants6. Crucially, this effect is seen further downstream, with the net effect of a 1 percentage point increase in the share of the population from ages 11 to 64 increasing the probability of natives aged 11 to 17 completing 12 years of schooling by 0.3 percentage points7. In doing so, it enhances the earning possibilities of natives. Furthermore, even in the cases where immigrants are substitutes and cause wages to fall in the short run, these seem to be very minimal due to the ability for firms to adjust their production technologies, as seen in the US in the early 1900s8 as well as in the 1980s and 1990s9. This has been corroborated by the 1964 bracero exclusion in the US, which had no effect on domestic workers despite increasing immigration barriers and reducing the labour supply10. These are buttressed by a second sort of benefits, relating to high-skilled immigrants in particular. In the US, immigrants are twice as likely to be granted patents and to start new businesses11. In fact, a 1 percentage point increase in the share of college-graduated immigrants boosts patents per capita by up to 18%12. This effect was seen in citations too, with areas having more prevalent immigrant inventors between 1880 and 1940 seeing more citations from 1940 to 200013. In part, this is a function of self-selection - for example, Mexican immigrants to the US are disproportionately educated compared to Mexico's average14. Consequently, STEM immigrants have had a significant direct effect on increasing TFP growth in the US between 1990 and 2010, in addition to raising wages for both college-educated and non-college-educated native workers15. This is augmented by the fact that cultural diversity itself also accrues benefits to productivity16. The result of all of this is that more relaxed immigration restrictions were associated with innovation17 and within a state, an increase of employment by 1% due to immigrants led to an increase in income per worker of 0.5%18. As seen in Brazil, the influx of these high-skilled immigrants has long run benefits with respect to income per capita and the level of education19. The consequence of all of this is that across 22 OECD countries between 1986 and 2006, the overall impact of immigration on per capita GDP is positive, even where immigration policies are non-selective20. To the extent to which there are mixed or negative effects, these are very small and very much limited to previous waves of immigrants21 or natives without a high school diploma22. And because of the composition of immigrants in the US, coupled with the pace of production technology adjustment, the negative effects on the absolute or relative wages of even those groups have been very limited23. All of the studies above involve very careful identification strategies, because as we all know, correlation does not imply causation. Immigrants may well be drawn to economically booming areas and leave economically weaker areas - as such, it is very useful if there is a natural experiment, where an exogenous shock causes immigration for non-economic reasons. 
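Before turning to identification, it may help to make the aggregate framing from the opening section concrete. The sketch below is not from the original post and its parameter values are arbitrary: part (i) simulates the basic Solow model after a one-off 10% jump in the labour force, showing output per person dip and then return to the same steady state, while part (ii) evaluates the endogenous-growth formula \( g^*_Y = \frac{1}{1-\alpha}z\ell L \), where a permanently larger population raises the growth rate itself.

import numpy as np

alpha, s, delta = 0.3, 0.2, 0.05          # illustrative parameters, not estimates

# (i) Basic Solow model: a one-off 10% immigration shock at t = 100
T = 600
L = np.full(T, 100.0)
L[100:] *= 1.10                            # the labour force jumps once and stays there
K = np.empty(T)
K[0] = (s / delta) ** (1 / (1 - alpha)) * L[0]   # start at the steady-state capital stock
for t in range(T - 1):
    Y = K[t] ** alpha * L[t] ** (1 - alpha)
    K[t + 1] = K[t] + s * Y - delta * K[t]       # capital accumulation equation
y = K ** alpha * L ** (1 - alpha) / L            # output per person
print(y[99], y[100], y[-1])                      # pre-shock level, dip at the shock, back to the old level

# (ii) Endogenous growth: the balanced growth rate scales with the research workforce z*l*L
z, ell = 0.0005, 0.1
for L_total in (100.0, 110.0):
    print(L_total, z * ell * L_total / (1 - alpha))   # larger population -> faster growth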
The most prominent case of this is the Mariel Boatlift. After 20 years of no immigration between Cuba and Miami, Fidel Castro lifted the ban on Cubans to emigrate in 1980. In the span of half a year, around 125,000 Cubans immigrated to Miami from the Mariel harbour in Cuba. This resulted in the Miami workforce rising by 7% that year, compared to the national average of 0.3% per annum. If there was going to be an effect on wages and employment from immigration, it would happen here. This would be especially pronounced for low-skilled workers, since most of the immigrants were low-skilled. And yet, Professor David Card found in his seminal study that there was basically no effect on low-skilled workers, even including those who had immigrated earlier24. Importantly, this was not because the workers who faced a wage drop moved away25. Rather, it involved an increased labour demand26 and the adoption of production technology which use more low-skilled labour27. Famously, there was a contradictory study by Professor George Borjas, who argued that because 60% of the Marielitos did not have a high school diploma, the relevant comparison was not just those with a high school diploma or less, but only those who had dropped out of high school. By isolating only those individuals, there was a dramatic drop in wages of up to 30%28. Unfortunately, Borjas's study looks at a very specific sample - it takes everyone with a high school diploma or less and then excludes women, Hispanics, non-prime age workers (between 19-24 and 60-65 years old) and those with a high school diploma. That results in 17 workers every year - that is, 91% of the data points have been stripped away. The problem with such a tiny statistical artifact is that it's results are not terribly representative and very sensitive to changes in methodology. As it turns out, the way some of the data was found was changed after the boatlift, with far more black workers being included than before. Due to their lower earnings on average in that specific situation, this exaggerated the wage decline29. Unfortunately, it is impossible to exclude black workers from Borjas's analysis, because that leaves us with 4 observations a year i.e. 98% of the data is gone. If the composition of those interviewed for data collection is instead adjusted for, Borjas's result is much more fragile30. And if we simply took the group of all high school dropouts, the result disappears entirely31. So the main takeaway from the Mariel boatlift is that the general premise from before remains true - Noah Smith has a few more natural experiments for the curious. Clearly employment and wages matter - but they aren't the be all and end all. Some have argued that there are less obvious costs of having immigrants, in the form of more inequality and larger welfare costs. By now, it should be clear that it is unlikely inequality will be significantly affected by immigration, insofar as the skill distribution of immigrants in places like the UK are very similar to the native workforce32. To the extent to which they increase inequality, they do so because they are concentrated on the high-skilled and low-skilled sections of the workforce, and do not increase inequality for the native workforce33. Instead, things like skill-based technology change i.e. automation34, housing prices35 and fiscal policy36 are much more responsible. As for public finances and welfare programs, most studies find that immigrants produce a positive net fiscal effect37. 
In part, this is because even illegal immigrants pay taxes38. It is also because poor immigrants use welfare less than comparable natives39. Indeed, even undocumented immigrants contribute more than they cost the public40. To the extent to which immigrants take more than they put in, this only occurs for first-generation ones, because of the high costs of childrearing, though even this doesn't occur more so than for native parents41. Furthermore, this is paid back by the second generation of immigrants, who are stronger economic contributers than even natives42. By considering both the labour market effects and the impacts on government redistribution, a general equilibrium model applied across 20 OECD countries finds that immigration improves wellbeing of both high-skilled and low-skilled natives43. Another angle of opposition to immigration is about non-pecuniary factors - specifically, their ability to assimilate and not get involved with criminal activities. In general, migrant assimilate culturally - for example, the social values of Muslim immigrants are somewhere between that of their own country and that of their country of destination44. Another example would be the fact that immigrants to the US are learning English at a faster rate than ever before45. And this is reflected in the fact that second-generation immigrants following the 1965 Immigration Reform Act in the US have on average higher education levels and wages than children of natives, suggesting they've caught up and assimilated rather quickly46. And while it is true that not every immigrant community has assimilated, a lot of this is down to a bad equilibrium - for example, this is exemplified by how French discrimination against Muslims may lead to a reluctance to assimilate, making their incongruity even more salient and causing even more discrimination47. Certainly however, this does not mean their lack of assimilation causes crimes. A meta-analysis suggests that there is practically zero magnitude association between immigration and crime48. In fact, foreign-born immigrants are less likely than the average native to commit crimes49, with immigrants to the US being incarcerated at a fifth of the rate of natives50. As with the previous areas, being an undocumented immigrant doesn't change this finding either51. Where crime does occur, this is usually for financial motives and related to poor labour market outcomes52. Importantly, the benefits of immigration are robust to those entering the country being refugees, who still contribute positively to the economy53. Although it is true that refugees generally start out less successful and being more low-skilled, they surpass within 15 years, earning and improving their human capital more than economic immigrants54. We've spent a lot of time discussing the benefits of immigration. But throughout all of this, we haven't discussed the most important people: the migrants themselves. The reason why making migration easier is useful, above all else, is because borders are arbitrary constraints that cause the spatial misallocation of labour. The biggest cause of cross-country income disparity is TFP differences, caused by gaps in technologies and institutions - by artificially trapping labour in places that are deeply unproductive, we are losing out on a huge amount of potential output. For example, it is found that the place premium i.e. 
the disparity in wages between identical workers in different countries, is on the order of $10,000 per annum for a medium-skilled worker from a median country moving to the US[55]. In other words, roughly double income per capita in the developing world. Indeed, we can see this by considering income per capita organised not by which country one lives in but by which country one was born in: on that accounting, 40% of living Mexicans escaped poverty by leaving Mexico, and that figure rises to 80% for Haitians[56]. Certainly, this would be one of the most powerful development policies available, on the order of 40 times more effective than direct aid interventions[57]. Open borders would yield welfare gains equivalent to a miraculous doubling of income levels in developing countries[58]. In the words of Dr. Michael Clemens, there are "trillion dollar bills [lying] on the sidewalk"[59]. And although it is true that there is a danger of migrants transmitting low productivity, we are nowhere close to that being a problem[60].

If migration is so helpful for the individuals who leave, doesn't this leave their countries of origin hung out to dry? As it turns out, it doesn't. Emigration has a positive net effect even on the sending country[61]. This occurs via various channels: remittances[62], the possibility of emigration encouraging human capital formation[63], emigrants building the trust needed to bring FDI back to their home country[64], and the propagation of cultural ideas that encourages the formation of inclusive institutions[65].

It is clear that the fact of a downward-sloping demand curve[66] doesn't automatically prove that immigration is harmful, especially when that argument rests on faulty assumptions about capital being fixed, about the composition of native workers, and about immigrants being perfect substitutes for native workers[67]. In reality, a broad survey of the literature paints a much more encouraging picture: one of significant benefits to immigrants without noticeable harm to native workers, government services or public finances[68]. Insofar as most problems around immigrants' labour market outcomes, assimilation and welfare program usage are negatively correlated with human capital accumulation[69], making legal immigration easier solves most of them. It's damn hard to immigrate. In economics, we often say that there are no free lunches. But reducing migration barriers comes as close to a free lunch as it gets. Given the tiny harms and the ease of solving them, as well as the magnitude of the benefits to the receiving country, individual migrants and even the country of origin, the obvious solution is to make immigration easier and to open up borders[70].

- Hong and McLaren 2015
- Albert 2021
- Chassamboulli and Peri 2015
- Peri and Sparber 2009
- Peri 2007
- Foged and Peri 2016
- Hunt 2012
- Lafortune, Lewis and Tessada 2019
- Lewis 2011
- Clemens, Lewis and Postel 2018
- Denhart 2015
- Hunt and Gauthier-Loiselle 2010
- Akcigit, Grigsby and Nicholas 2017
- Chiquiar and Hanson 2002
- Peri, Shih and Sparber 2014
- Ottaviano and Peri 2006
- Kerr and Lincoln 2010
- Rocha, Ferraz and Soares 2017
- Boubtane, Dumont and Rault 2016
- Card 1990
- Card and DiNardo 2000
- Bodvarsson, Van den Berg and Lewer 2008
- Borjas 2017
- Clemens and Hunt 2017
- Clemens 2017
- Peri and Yasenov 2015
- Dustmann, Fabbri and Preston 2005
- Autor, Katz and Kearney 2008
- Rognlie 2015
- Gupta 2014
- Nowrasteh 2014
- Gee et al. 2017
- Bruen and Ku 2013
- Lipman 2006
- Greenstone and Looney 2010
- Blau and Mackie 2017
- Battisti et al. 2017
- Norris and Inglehart 2012
- Waters and Pineau 2015
- Adida, Laitin and Valfort 2012
- Ousey and Kubrin 2018
- Bersani 2014
- Butcher and Piehl 2007
- Light and Miller 2018
- Spenkuch 2013
- Betts et al. 2014
- Cortes 2004
- Clemens, Montenegro and Pritchett 2009
- Clemens and Pritchett 2008
- Pritchett 2018
- Kennan 2012
- Asch 1994
- Giovanni, Levchenko and Ortega 2014
- Chand and Clemens 2014
- Burchardi, Chaney and Hassan 2018
- Barsbai et al. 2017
- Kerr and Kerr 2011
- Drinkwater and Robinson 2011
- Bratsberg, Ragan and Nasir 2002
Search Results: 1 - 10 of 3131 matches for "Mara Lorenzi"

The Polyol Pathway as a Mechanism for Diabetic Retinopathy: Attractive, Elusive, and Resilient. Mara Lorenzi. Experimental Diabetes Research, 2007, DOI: 10.1155/2007/61038. Abstract: The polyol pathway is a two-step metabolic pathway in which glucose is reduced to sorbitol, which is then converted to fructose. It is one of the most attractive candidate mechanisms to explain, at least in part, the cellular toxicity of diabetic hyperglycemia because (i) it becomes active when intracellular glucose concentrations are elevated, (ii) the two enzymes are present in human tissues and organs that are sites of diabetic complications, and (iii) the products of the pathway and the altered balance of cofactors generate the types of cellular stress that occur at the sites of diabetic complications. Inhibition (or ablation) of aldose reductase, the first and rate-limiting enzyme in the pathway, reproducibly prevents diabetic retinopathy in diabetic rodent models, but the results of a major clinical trial have been disappointing. Since then, it has become evident that truly informative indicators of polyol pathway activity and/or inhibition are elusive, but are likely to be other than sorbitol levels if meant to predict accurately tissue consequences. The spectrum of abnormalities known to occur in human diabetic retinopathy has enlarged to include glial and neuronal abnormalities, which in experimental animals are mediated by the polyol pathway. The endothelial cells of human retinal vessels have been noted to have aldose reductase. Specific polymorphisms in the promoter region of the aldose reductase gene have been found associated with susceptibility or progression of diabetic retinopathy. This new knowledge has rekindled interest in a possible role of the polyol pathway in diabetic retinopathy and in methodological investigation that may prepare new clinical trials. Only new drugs that inhibit aldose reductase with higher efficacy and safety than older drugs will make possible to learn if the resilience of the polyol pathway means that it has a role in human diabetic retinopathy that should not have gone undiscovered.

A Trust Model for Multiagent Recommendations. Fabiana Lorenzi, Gabriel Baldo, Rafael Costa, Mara Abel. Journal of Emerging Technologies in Web Intelligence, 2010, DOI: 10.4304/jetwi.2.4.310-318. Abstract: This paper describes a trust model for multiagent recommender systems. A user's request for a travel recommendation is decomposed by the system into subtasks, corresponding to travel services. Agents select tasks autonomously, and accomplish them using knowledge derived from previous solutions or with the help of other agents. Agents maintain local knowledge bases and, when requested to support a user in a travel planning task, they may collaborate exchanging information stored in their local bases. During this exchange process trusting other agents is fundamental. It helps agents to improve the quality of the recommendations and to avoid communication with unreliable agents. In the proposed model, the trust is also used to allow agents to become experts in particular subtasks, helping them to generate better recommendations.
In this paper, we propose and validate a multiagent trust model showing the benefits of such model in a travel planning scenario. A strongly ill-posed problem for a degenerate parabolic equation with unbounded coefficients in an unbounded domain $Ω\times {\mathcal O}$ of $\R^{M+N}$ Alfredo Lorenzi,Luca Lorenzi Mathematics , 2012, DOI: 10.1088/0266-5611/29/2/025007 Abstract: In this paper we deal with a strongly ill-posed second-order degenerate parabolic problem in the unbounded open set $\Omega\times {\mathcal O}\subset \mathbb R^{M+N}$, related to a linear equation with unbounded coefficients, with no initial condition, but endowed with the usual Dirichlet condition on $(0,T)\times \partial(\Omega\times {\mathcal O})$ and an additional condition involving the $x$-normal derivative on $\Gamma\times {\mathcal O}$, $\Gamma$ being an open subset of $\Omega$. The task of this paper is twofold: determining sufficient conditions on our data implying the uniqueness of the solution $u$ to the boundary value problem as well as determining a pair of metrics with respect of which $u$ depends continuously on the data. The results obtained for the parabolic problem are then applied to a similar problem for a convolution integrodifferential linear parabolic equation. EU researches superbugs Rossella Lorenzi Genome Biology , 2003, DOI: 10.1186/gb-spotlight-20031202-01 Abstract: Announced in Rome at a 3-day EU conference on the role of research in combating antibiotic resistance, the funding is part of a €12.6 million budget from the first call for proposals within the Sixth Framework Programme (2002-2006)."People trust antibiotics to cure almost any kind of disease. Unfortunately, as recent outbreaks of severe acute respiratory syndrome show, this is not the case," European Research Commissioner Philippe Busquin said in a statement. "More research for the benefit of patients is needed to make use of the wealth of information provided by more than 140 bacterial genomes known today. We must also make sure that the pharmaceutical industry continues its research into the development of new antibiotics."The new research projects will be launched in early 2004. While the first project looks into resistance to lactam antibiotics in clinical use, the other one investigates basic molecular mechanisms of resistance. It will focus specifically on Streptococcus pneumoniae, a major contributor to community-acquired pneumonia and invasive disease."Despite being a major cause of morbidity and mortality worldwide, sometimes leading to a fatal disease, Streptococcus pneumoniae is also found in a high proportion of healthy children attending daycare centers," Birgitta Henriques Normark, head of the Department of Molecular Epidemiology and Biotechnology at the Swedish Institute for Infectious Disease Control, told us. 
"A better knowledge of molecular mechanisms involved in antibiotic resistance development and of host-pathogen interactions affecting pneumococcal infections would lead to improved intervention, prevention, and treatment strategies of these common community acquired infections."The 3-year project will look into how the bacteria manage to survive, grow, and spread in the presence of an antibiotic and what factors determine whether an infection will be mild or severe.It will also involve comparative genomic approaches, including DNA microarrays t An abstract ultraparabolic integrodifferential equation Luca Lorenzi Le Matematiche , 1998, Abstract: We prove an existence and uniqueness result for a ultraparabolic integrodifferential equation in the strip [0, T1 ] × [0, T2 ] in the context of the spaces of continuous functions with values in a Banach space X and we give some applications to specific partial integrodifferential problems. Power: A Radical View, by Stephen Lukes Maximiliano Lorenzi Crossroads , 2006, The New Wedge-Shaped Hubble Diagram of 398 SCP Supernovae According to the Expansion Center Model Luciano Lorenzi Physics , 2010, Abstract: Following the successful dipole test on 53 SCP SNe Ia presented at SAIt2004 in Milan, this 9th contribution to the ECM series beginning in 1999 in Naples (43th SAIt meeting: "Revolutions in Astronomy") deals with the construction of the new wedge-shaped Hubble diagram obtained with 398 supernovae of the SCP Union Compilation (Kowalski et al. 2008) by applying a calculated correlation between SNe Ia absolute blue magnitude MB and central redshift z0, according to the expansion center model. The ECM distance D of the Hubble diagram (cz versus D) is computed as the ratio between the luminosity distance DL and 1 + z. Mathematically D results to be a power series of the light-space r run inside the expanding cosmic medium or Hubble flow; thus its expression is independent of the corresponding z. In addition one can have D = D(z, h) from the ECM Hubble law by using the h convention with an anisotropic HX. It is proposed to the meeting that the wedge-shape of this new Hubble diagram be confirmed independently as mainly due to the ECM dipole anisotropy of the Hubble ratio cz/D. A crucial dipole test of the expansion center Universe - based on high-z SCP Union & Union2 supernovae Abstract: The expansion center Universe (ECU) gives a dipole anisotropy to the Hubble ratio, at any Hubble depth D. After a long series of successful dipole tests, here is a crucial multiple dipole test at z bins centred on the mean =z0=1, or Hubble depth D=c/H0, and based on data from SCP Union & Union2 compilation. Table 5abc lists data of two main samples, with 48 SCPU SNe Ia and 58 SCPU2 SNe Ia respectively. The confirmed dipole anisotropy, shown by 6 primary sample tests and by another 27 from 9 encapsulated z bins with DL=D(1+z) assumed and the Hubble Magnitude definition, gives a model independent result, in full accordance with the expansion center model (ECM). That means a maximum cz range of about 50000 km/s at the central redshift z0=1. As a complement to the dipole tests, here is a new computation of the relativistic deceleration parameter q0, based on the extrapolated total M spread, that is the deviation of the Hubble Magnitude M of high-z SCP Union supernovae at a normal or central redshift =z0=z << 1 from the absolute magnitude M0 at z0=0 (cf. parallel paper XVI). A total M spread according to ECM is derived from 249 high-z SCPU SNe listed in paper XVI. 
In a concordance test with the expansion center model, the obtained new relativistic q0 agrees with the value q0=+2 inferred from the ECM paper I eq. (41), when R0 is the proper distance at t0 of the expansion center from the Galaxy.

Dipole & absolute magnitude analysis of the SCP Union supernovae within the expansion center model. Abstract: 1743 data calculated for 249 high-z SCP Union supernovae are analysed according to the expansion center model (ECM). The analysis in Hubble units begins with 13 listed normal points corresponding to 13 z-bin samples at as many Hubble depths. The novel finding is a clear drop in the average scattering of the SNe Ia Hubble Magnitude M with the ECM Hubble depth D, after using the average trend computed in paper IX. Other correlations of the M scattering with the position in the sky are proposed. Consequently, 13 ECM dipole tests on the 13 z-bin samples were carried out both with unweighted and weighted fittings. A further check was made with Hubble depths D obtained by assuming M= according to paper IX and XV. In conclusion the analysis of 249 SCPU SNe confirms once again the expansion center model at any Hubble depth, including a strengthening perturbation effect of the M scattering at decreasing z<0.5. A new successful dipole test introduces the absolute magnitude analysis of 398 SCPU supernovae. After testing 14 high-z normal points from paper IX Table 2, a trend analysis of another 15 and 30 normal points of the Hubble Magnitude M and a new absolute magnitude M*, at increasing =z0 corresponding to a different series of z bins, leads to the discovery of the magnitude anomaly of the low points. When the low points are excluded, the best fittings make it possible to extrapolate the SNe Ia absolute magnitude M0 at a central redshift z0=0, with M0=-17.9+-0.1 and a few final ECM solutions of the SNe Ia and M*. The magnitude anomaly is here interpreted as due to a deficiency in the magnitude formulas used; these produce a maximum peak of deviation in the range 0.04 < < 0.08. That is a proof of the Universe rotation within the expansion center model.

II-Local Solution of a Spherical Homogeneous and Isotropic Universe Radially Decelerated towards the Expansion Center: Tests on Historic Data Sets. Abstract: The topic of the paper is the mathematical analysis of a radially decelerated Hubble expansion from the Bahcall & Soneira void center. Such analysis, in the hypothesis of local homogeneity and isotropy, gives a particular Hubble ratio dipole structure to the expansion equation, whose solution has been studied at different precision orders and successfully tested on a few historic data sets, by de Vaucouleurs (1965), by Sandage & Tammann (1975), and by Aaronson et al. (1982-86). The fittings of both the separate AA1 and AA2 samples show a good solution convergence as the analysis order increases, giving even coinciding solutions when applied to 308 nearby individual galaxies (308AA1) and to 10 clusters (148AA2), respectively.
Journal of High Energy Physics

Emergent gravity from Eguchi-Kawai reduction
Journal of High Energy Physics, Mar 2017
Edgar Shaghoulian, Department of Physics, University of California, Santa Barbara, CA, U.S.A.
Open Access, © The Authors.

Abstract: Holographic theories with a local gravitational dual have a number of striking features. Here I argue that many of these features are controlled by the Eguchi-Kawai mechanism, which is proposed to be a hallmark of such holographic theories. Higher-spin holographic duality is presented as a failure of the Eguchi-Kawai mechanism, and its restoration illustrates the deformation of higher-spin theory into a proper string theory with a local gravitational limit. AdS/CFT is used to provide a calculable extension of the Eguchi-Kawai mechanism to field theories on curved manifolds and thereby introduce "topological volume independence." Finally, I discuss implications for a general understanding of the extensivity of the Bekenstein-Hawking-Wald entropy.

https://link.springer.com/content/pdf/10.1007%2FJHEP03%282017%29011.pdf
Keywords: AdS-CFT Correspondence; Gauge-gravity correspondence; Gauge Symmetry

Contents
1 Introduction
1.1 Summary of results
2 Center symmetry and Wilson loops
3 Reproducing gravitational phase structure/sparse spectra/extended range of validity of the Cardy formula
3.1 Extended range of validity of Cardy formula
3.2 Sparse spectra in holographic CFTs
3.3 SL(2,Z) family of black holes
3.4 SL(d,Z) family of black holes
4 Correlation functions and entanglement entropy
4.1 Correlation functions
4.2 Two-point functions
4.3 M-point functions
4.4 Entanglement/Renyi entropies
5 Higher-spin theory as a failure of the Eguchi-Kawai mechanism
6 Learning about the Eguchi-Kawai mechanism from gravity
6.1 Center symmetry stabilization and translation symmetry breaking
6.2 Extending the Eguchi-Kawai mechanism to curved backgrounds
7 Extensivity of the Bekenstein-Hawking-Wald entropy
8 Discussion
8.1 Reproducing additional features of AdS gravity
8.2 Reducing or blowing up models
8.3 The necessity of the Eguchi-Kawai mechanism for holographic gauge theories
8.4 Outlook
A SL(d,Z)
B Four-point function sample calculation
C Validity of gravitational description

1 Introduction

Holographic theories with a local gravitational dual have several remarkable features that can be read off by analyzing (semi-)classical gravity in Anti-de Sitter space (AdS). To understand the emergence of gravity, it is important to understand precisely in the language of field theory what mechanism is responsible for these features. Much of the work in this direction has focused on constraints from conformal field theory (CFT). Conformality is not an essential feature of holography. On the other hand, every holographic theory to date can be understood as a large-N gauge theory. It is therefore natural to leverage whatever power such a structure brings us. This brings us to the idea of Eguchi-Kawai reduction.

The proposal of Eguchi and Kawai was that large-N SU(N) lattice gauge theory could be reduced to a matrix model living on a single site of the lattice [1]. This equivalence was postulated through an analysis of the Migdal-Makeenko loop equations (the Schwinger-Dyson equations for Wilson loop correlation functions) [2, 3] and assumed the preservation of center symmetry in the gauge theory. However, it was immediately noticed [4] that the center symmetry is spontaneously broken at weak coupling, disallowing the consistency of the reduction with a continuum limit. The authors of [4] further proposed the first in a long list of modifications to the gauge theory in an attempt to prevent center symmetry from spontaneously breaking. Their proposal is known as the quenched Eguchi-Kawai model, further studied in [5], where the eigenvalues of the link matrices were frozen to a center-symmetric distribution. Another proposed variant is known as the twisted Eguchi-Kawai model, wherein each plaquette in Wilson's action is "twisted" (multiplied by) an element of the center of the gauge group [6]. Numerical studies have shown these early modifications fail at preserving center symmetry as well [7-10].

Let us turn to the continuum. Whether or not center symmetry is preserved is often checked analytically by pushing the theory into a weakly coupled regime and calculating the one-loop Coleman-Weinberg potential for the Wilson loop around the compact direction. This is an order parameter for the center symmetry, and a nonvanishing value indicates a breaking of center symmetry. An early analytic calculation of the Coleman-Weinberg potential indicates the center-symmetry-breaking nature of Yang-Mills theories [11].
Nevertheless, there are a few tricks that seem to work at suppressing any center-breaking phase transitions: a variant of the original twisted Eguchi-Kawai model [12], deforming the action by particular double-trace terms [13], or considering adjoint fermions with periodic boundary conditions [14]. For a modern review see [15].

In this work, we will not be concerned with suppressing center-breaking phase transitions. Instead, we will focus on implications of the Eguchi-Kawai mechanism within center-symmetric phases. This will not be a restriction to the confined phase, since we will be considering center symmetry with respect to both spatial and thermal cycles. As we will be working in the continuum, let us formulate the continuum version of the Eguchi-Kawai mechanism. Consider a d-dimensional large-N gauge theory, with center symmetry at the Lagrangian level, compactified on $M_{d-k}\times (S^1)^k$. If translation symmetry and center symmetry are not spontaneously broken along a given $S^1$, then correlation functions of appropriate single-trace, gauge-invariant operators are independent of the size of that $S^1$ at leading order in N. We will review these notions in the rest of the introduction and spend section 4 elaborating on which sorts of observables are "appropriate." This is often called large-N volume independence, where "volume" in particular refers to the size of the center-symmetric $S^1$s. The Eguchi-Kawai mechanism is a robust, nonperturbative property of large-N gauge theories that preserve certain symmetries.

Famously, large-N gauge theories also play a starring role in holographic duality. Curiously, both contexts involve emergent spacetime in radically different ways. In this work we will be interested in what predictions the Eguchi-Kawai mechanism makes about gravity in AdS. Since the proposal concerns only leading-in-N observables, we will be dealing exclusively with the (semi-)classical gravity limit in AdS. A simple example illustrating the mechanism at work is the temperature independence of the free energy density on $M\times S^1$ at leading order in N in the confined phase (with the $S^1$ the thermal circle). In AdS/CFT, this occurs because the thermal partition function is given by the contribution of thermal AdS below the Hawking-Page phase transition, whose on-shell action has an overall factor of inverse temperature $\beta$. When the theory deconfines, the free energy density becomes a nontrivial function of the temperature and volume independence along the thermal circle is lost; see equation (3.1).

We will spend the next section reviewing introductory material, ending with the central tool of this work, which is that a smooth, translation-invariant gravitational description implies center symmetry preservation along all but one cycle. Center symmetry can spontaneously break along a given cycle as its size is varied, but there must only ever be one cycle which breaks the symmetry. We will refer to these transitions as center-symmetry-swapping transitions (CSSTs). The rest of the paper will leverage this structure to learn primarily about universal features of gravity, but also to learn about the Eguchi-Kawai mechanism in large-N gauge theories. For some previous work exploring the Eguchi-Kawai mechanism in holography, see [16-19].

1.1 Summary of results

Our primary tool will be that a smooth, translation-invariant gravitational description of a state or density matrix in a toroidally compactified CFT preserves center symmetry along all but one cycle.
We will use this to produce the following universal features of gravity in AdS: (a) an extended range of validity of the general-dimensional Cardy formula, (b) the exact phase structure (including thermal and quantum phase transitions) with a toroidally compactified boundary, (c) a sparse spectrum of light states on the torus, (d) leading-in-N connected correlators will be given by the method of images under smooth quotients of the spacetime, which reproduces the behavior of tree-level Witten diagrams, and (e) extensivity of the entropy for spherical/hyperbolic/planar black holes which dominate the canonical ensemble; for planar black holes this implies the Bekenstein-Hawking-Wald area law. (a)-(c) are closely related and can be found in section 3, (d) can be found in section 4, and (e) can be found in section 7. Using gravity to learn about the Eguchi-Kawai mechanism, we will find new center-stabilizing structures for strongly coupled holographic theories and propose an extension of the mechanism to curved backgrounds in section 6.

2 Center symmetry and Wilson loops

Consider pure Yang-Mills theory on a manifold $M_{d-1}\times S^1$ with gauge group $G$ (for example SU(N)) with nontrivial center $C$ (for example $\mathbb{Z}_N$):
$$S = \frac{1}{2g^2}\int d^dx\,\mathrm{tr}\,F_{\mu\nu}F^{\mu\nu}\,,\qquad F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + f^{abc}A^b_\mu A^c_\nu\,.$$
This theory is invariant under the gauge symmetry
$$A_\mu \to g\,A_\mu\, g^{-1} + g\,\partial_\mu g^{-1}\,,$$
with $g: M_{d-1}\times S^1 \to G$ a map from our spacetime into the gauge group, under which the field strength transforms covariantly.

Let us consider the function $g$ to be periodic along the $S^1$ only up to an element of the gauge group: $g(x,\tau+\beta) = g(x,\tau)\,h$ for $h\in G$. For $A_\mu$ to remain periodic we need to be able to commute $h$ past $A_\mu(x,\tau)$ and cancel it against $h^{-1}$, but this requires $h\in C$. So we see that we can consistently maintain twisted gauge transformations as long as we twist by an element of the center. The action above is invariant under these extended gauge transformations. The space of physical states is constrained to be singlets under the usual gauge group $G$, but not under the twisted gauge transformations. In particular, Wilson loops which wrap an $S^1$, which will henceforth be referred to as Polyakov loops, transform under the generalized gauge transformation. To see this, consider the path-ordered exponential, i.e. the holonomy of the connection, around the $S^1$:
$$\Omega(x) = \mathcal{P}\exp\left(i\oint_0^{\beta} d\tau\, A_\tau(x,\tau)\right).$$
The P stands for path. We will refer to the trace of this object, $W(x) = \mathrm{tr}\,\Omega(x)$, as the Polyakov loop; for ordinary gauge transformations the $g$ and $g^{-1}$ annihilate by cyclicity of the trace. For twisted gauge transformations, however, we are left with
$$W(x) \to e^{2\pi i k/N}\,W(x)$$
for a twist by $h = e^{2\pi i k/N}\mathbb{1}\in\mathbb{Z}_N$ (writing the case $G=\mathrm{SU}(N)$). The W stands for Polyakov. Thus the expectation value of a Polyakov loop can serve as an order parameter for the spontaneous breaking of center symmetry. We will always take our trace in the fundamental representation, since the vanishing of the expectation value of such a loop is necessary and sufficient for the preservation of center symmetry, independent of the matter content. Contrast this with the case of rectangular Wilson loops (traces of path-ordered exponentials where the path traces out a large rectangle instead of wrapping an $S^1$), where the trace needs to be evaluated in the same representation as that of the matter content to access the energy required to deconfine the matter.

Let us now specify to gauge group SU(N). The center of the gauge group is $\mathbb{Z}_N$, and a representation is classified by which of the N representations of $\mathbb{Z}_N$ it falls under. This is called the N-ality of the representation, and it is determined by counting the number of boxes mod N of the Young tableau of the representation. The addition of matter to our gauge theory explicitly breaks the center symmetry of the Lagrangian unless the matter is in a representation of vanishing N-ality [20]. Fundamental representations have N-ality 1 and therefore explicitly break center symmetry. Adjoint representations, on the other hand, have vanishing N-ality and therefore preserve center symmetry. Even for matter in vectorlike representations that break center symmetry, there is an effective emergence of the symmetry as N goes to infinity, as long as the number of vectorlike flavors is kept finite. This is simply because quarks decouple at leading order and one is left with the pure Yang-Mills theory. Interestingly, by orientifold dualities, even matrix representations (which break center symmetry and for which the matter does not decouple) have an emergent center symmetry at infinite N [21, 22].

Calculating Wilson loops in AdS. There is a simple prescription for calculating the expectation value of a Wilson loop in the fundamental representation of the gauge theory using classical string theory: one calculates $e^{-S_{\rm NG}}$ for the Nambu-Goto action $S_{\rm NG}$ of a Euclidean string worldsheet which ends on the contour of the Wilson loop $C$ [23]. Let us specify to Polyakov loops wrapping an $S^1$ on the boundary. Notice that if this circle is contractible in the bulk, a smooth worldsheet ending on the loop exists and the expectation value is nonvanishing, whereas if it remains non-contractible there is no such worldsheet and the expectation value vanishes at this order. Consider an example where this criterion distinguishes confined and deconfined phases. The thermally stable (i.e. large) AdS-Schwarzschild black hole, which has a thermal circle which caps off in the interior, admits a string worldsheet and therefore gives a nonvanishing Polyakov loop expectation value. This indicates a deconfined phase, which is appropriate as the AdS-Schwarzschild black hole is the correct background for the gauge theory at high temperature. Thermal global AdS, however, has a thermal circle which does not cap off in the interior and therefore gives a vanishing Polyakov loop expectation value. This indicates a confined phase, which is appropriate for the theory at low temperature. Indeed, the bulk canonical phase structure for pure gravity indicates a transition between these two backgrounds when the inverse temperature is of order the size of the sphere. Similarly, the entropy transitions from O(1) in the confined phase (no black hole horizon) to O(N^2) in the deconfined phase (black hole horizon).

There is one more basic geometric fact we will need. Consider an asymptotically Euclidean AdS$_{d+1}$ spacetime with toroidal boundary conditions. Preserving translation invariance along the non-radial directions (a necessary condition for the Eguchi-Kawai mechanism to work) gives a metric of the form
$$ds^2 = g_{rr}(r)\,dr^2 + \sum_{i=1}^{d} g_{ii}(r)\,dx_i^2\,,\qquad g_{ii}(r\to\infty) = r^2\,.$$
To avoid conical singularities (e.g. metrics which look locally like $r^2(d\theta_1^2 + d\theta_2^2)$), no more than one of the boundary circles can cap off in the interior of the spacetime. While it may be possible that none of the boundary circles cap off in the interior (say through the internal manifold capping off instead), I do not know of any smooth, geodesically complete examples. We will therefore not consider this possibility, so in our context exactly one cycle caps off and the other d-1 circles remain finite-sized. This motivates the following simple yet extremely powerful statement: in any smooth, translation-invariant geometric description, the expectation value of Polyakov loops in the fundamental representation vanishes in d-1 of the directions. For theories with an explicit center symmetry, this means that we will have volume independence along d-1 directions, as discussed in the introduction. Appropriate observables will therefore be independent of the sizes of those circles.
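Since the Polyakov loop and its behavior under center twists are central to everything that follows, here is a small numerical illustration (my own toy example, not a calculation from the paper): random SU(N) link matrices are multiplied into a holonomy, and twisting a single link by a Z_N element multiplies the fundamental Polyakov loop by e^{2 pi i k/N} while leaving an adjoint loop untouched. All names and parameters below are invented for the illustration.

```python
import numpy as np

def random_su_n(n, rng):
    # Random SU(N) matrix: QR-decompose a complex Gaussian matrix, fix phases.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))   # make it unitary (U(N))
    return q / np.linalg.det(q) ** (1.0 / n)            # remove overall phase -> SU(N)

N, k, n_links = 3, 1, 10
rng = np.random.default_rng(0)
links = [random_su_n(N, rng) for _ in range(n_links)]

# Holonomy around the circle and its traces.
holonomy = np.linalg.multi_dot(links)
W_fund = np.trace(holonomy)                 # fundamental Polyakov loop
W_adj = np.abs(W_fund) ** 2 - 1             # adjoint loop: |tr Omega|^2 - 1

# Twist a single link by the center element exp(2*pi*i*k/N) * identity.
center = np.exp(2j * np.pi * k / N)
holonomy_t = np.linalg.multi_dot([links[0] * center] + links[1:])

print(np.trace(holonomy_t) / W_fund)                      # ~ exp(2*pi*i*k/N)
print(np.abs(np.trace(holonomy_t)) ** 2 - 1 - W_adj)      # ~ 0: adjoint loop invariant
```

The fundamental loop picks up the center phase (so it can serve as an order parameter), while the adjoint loop does not, which is why adjoint matter leaves center symmetry intact.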
For the gravitational description to be valid, the circles in the interior need to remain above string scale. For a translation of this criterion into field theory language, and in particular a discussion of Eguchi-Kawai reduction to zero size, see appendix C. Just like the original Eguchi-Kawai example of pure Yang-Mills, our theory will of course deconfine, as signaled by the Hawking-Page phase transition in the bulk. This is sometimes called partial Eguchi-Kawai reduction, since the reduction only holds in the center-symmetric phase. We will refer to the "Eguchi-Kawai mechanism" and "large-N volume independence" to describe this state of affairs. (Large-N volume independence refers in particular to independence of the size of center-symmetric $S^1$s, not necessarily the overall volume.) From our point of view, the deconfinement transition is just a center-symmetry-swapping transition (CSST) from the thermal cycle to a spatial cycle. It remains true that d-1 of the cycles preserve center symmetry. CSSTs can also occur between spatial cycles as they are varied. In this case, the transition is unrelated to confinement of degrees of freedom, since the entropy is O(1) before and after the transition. It instead signals a quantum phase transition, which can take place at zero temperature. Interestingly, this quantum phase transition persists up to a critical temperature.

3 Reproducing gravitational phase structure/sparse spectra/extended range of validity of the Cardy formula

We will now show that the semiclassical phase structure of gravity in AdS is implied by our center symmetry structure. Consider an asymptotically AdS$_{d+1}$ spacetime with toroidal boundary conditions. The cycle lengths will be denoted $L_1,\dots,L_d$ with $\beta = L_1$. We will pick thermal periodicity conditions for any bulk matter along all cycles and will comment at the end about different periodicity conditions. Assuming a smooth and translation-invariant description, the phase structure implied by gravity can succinctly be written in terms of the free energy density as
$$f(L_1,\dots,L_d) \equiv -\frac{\log Z(L_1,\dots,L_d)}{L_1 L_2\cdots L_d} = -\frac{\varepsilon_{\rm vac}}{L_{\rm min}^d}\,, \qquad (3.1)$$
where $\varepsilon_{\rm vac}$ is a pure positive number (independent of any length scales) characterizing the vacuum energy on $S^1\times\mathbb{R}^{d-2}$ as $E_{\rm vac}/V = -\varepsilon_{\rm vac}/L^d$ for spatial volume V [24], and $L_{\rm min}$ is the length of the smallest cycle. This is the phase structure independent of the precise bulk theory of diffeomorphism-invariant gravity, as long as we maintain translation invariance and consider the thermal ensemble. As in AdS$_3$, all the data about higher curvature terms is packaged into $\varepsilon_{\rm vac}$.

Notice that the triviality of this phase structure implies highly unorthodox field theory behavior. The phase structure (3.1) implies thermal phase transitions as the thermal cycle becomes the smallest cycle. There are also quantum phase transitions when two spatial cycles are smaller than the rest of the cycles (including $\beta$), and the larger of the two is changed to become smaller. These are quantum phase transitions because they can (and do) occur when $\beta\to\infty$, so they are not driven by thermal fluctuations. These quantum phase transitions, however, persist at finite temperature. Finally, in any given phase the functional form of the free energy density is independent of all cycle lengths except for one! Much of [25] was focused on reproducing this structure in field theory, and we refer the reader to that work to see the many nuances involved. We now turn to the gauge theory.
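Before turning to the gauge theory argument, here is a minimal numerical rendering of the phase structure (3.1), assuming only the formula quoted above (with $\varepsilon_{\rm vac}$ set to 1 for illustration): the free energy density depends only on the smallest cycle, and the kink when the thermal cycle overtakes the smallest spatial cycle is the Hawking-Page/CSST point.

```python
import numpy as np

def free_energy_density(cycles, eps_vac=1.0):
    """Free-energy density of eq. (3.1): f = -eps_vac / L_min**d."""
    L = np.asarray(cycles, dtype=float)
    return -eps_vac / np.min(L) ** len(L)

# d = 3 torus with spatial cycles L2, L3; scan the thermal cycle beta = L1.
L2, L3 = 1.0, 2.0
for beta in [0.25, 0.5, 0.99, 1.01, 2.0, 10.0]:
    f = free_energy_density([beta, L2, L3])
    phase = ("black brane (thermal cycle smallest)" if beta < min(L2, L3)
             else "AdS soliton (spatial cycle smallest)")
    print(f"beta={beta:5.2f}  f={f:8.3f}  {phase}")

# For beta > L2 the answer is beta-independent (volume independence along the
# thermal circle); the non-analyticity at beta = L2 is the CSST / Hawking-Page point.
```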
We will see that our framework gives (3.1) immediately, thereby locating the points where phase transitions occur and the precise functional form of the free energy in all phases. Consider a field theory with our assumed center symmetry structure, which is that all but one cycle preserve center symmetry. We also have thermal periodicity conditions for the matter fields along all cycles, since this will give thermal periodicity conditions for the bulk matter fields and preserve modular S invariance between any pair of cycles. Notice that by extensivity of the free energy and modular invariance [24, 26], we have
$$f(L_1\to 0, L_2,\dots,L_d) = -\frac{\varepsilon_{\rm vac}}{L_1^d}\,. \qquad (3.2)$$
Since the free energy density is supposed to be independent of the center-symmetry-preserving directions, we deduce that the $L_1$ cycle breaks center symmetry. This is consistent with the expected deconfinement of the theory. Now let us consider varying any of the cycle sizes. As long as there is no center-symmetry-swapping transition (CSST), $f(L_1,\dots,L_d)$ continues to depend only on $L_1$. Since the theory is scale invariant, this fixes the $L_1$ dependence and we continue to have the behavior (3.2). Finally, any CSST that occurs between two cycles $L_i$ and $L_j$ has to occur when $L_i = L_j$ by the modular symmetry between all cycles. So, when cycle lengths are equal, they must be symmetric: either they both preserve the center or they are undergoing a CSST. They cannot both break the center, since only one cycle can ever break the center in our framework.

Using the above facts, that $f(L_1,\dots,L_d)$ can only change its functional form at CSSTs and that two cycles of equal length must have the same center-symmetry structure, we can now increase $L_1$: by the symmetry between cycles there must be a CSST between $L_1$ and the next-smallest cycle when they become equal. As $L_1$ is increased further, it is a center-preserving cycle passing other center-preserving cycles, so no more CSSTs can occur and the free energy density remains unchanged. Starting from an arbitrary torus, with an arbitrary cycle taken asymptotically small, this argument produces for us the entire phase structure (3.1).

What about the case where we do not preserve the symmetry between cycles? An interesting example of this is if we pick bulk fermions to be periodic along some cycles. In the gravitational picture these cycles are not allowed to cap off in the interior since this would not lead to a consistent spin structure. Thus, the phase structure is just as in (3.1), where now $L_{\rm min}$ minimizes only over the cycles with antiperiodic bulk fermions. We will comment more on the field theory implications of this in section 6.1. To predict this bulk phase structure, we need to supplement our assumption of d-1 cycles preserving center symmetry with an assumption about which cycles preserve center symmetry for all cycle sizes. These cycles can then never undergo CSSTs with other cycles. By repeating the arguments above, we can reproduce this modified bulk phase structure.

3.1 Extended range of validity of Cardy formula

Holographic gauge theories, in addition to having the remarkable phase structure exhibited above, have an extended range of validity of the general-dimensional Cardy formula. The Cardy formula in higher dimensions was derived in [24, 26] and reproduces the entropy of toroidally compactified black branes at asymptotically high energy; in terms of the vacuum energy density defined above, it takes the form
$$S(E) = \frac{d}{(d-1)^{\frac{d-1}{d}}}\,\big(\varepsilon_{\rm vac}\,V\big)^{1/d}\,E^{\frac{d-1}{d}}\,.$$
This precisely mimics how the two-dimensional Cardy formula [27] reproduces the entropy of BTZ black holes at asymptotically high energy [28].
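As a quick consistency check on the Cardy formula quoted above (whose overall coefficient is reconstructed here, so treat it as an assumption), one can verify numerically that it is exactly the Legendre transform of the deconfined free energy $f = -\varepsilon_{\rm vac}/\beta^d$ of eq. (3.1); the input values below are arbitrary.

```python
import numpy as np

# Deconfined ensemble: log Z = eps_vac * V * beta**(1-d).  Check that the
# microcanonical entropy it implies matches the Cardy-type formula above.
d, eps_vac, V = 4, 2.7, 10.0

for beta in [0.1, 0.3, 0.7]:
    logZ = eps_vac * V * beta ** (1 - d)
    E = (d - 1) * eps_vac * V * beta ** (-d)       # E = -d(logZ)/d(beta)
    S_thermo = logZ + beta * E                      # S = logZ + beta * E
    S_cardy = d / (d - 1) ** ((d - 1) / d) * (eps_vac * V) ** (1 / d) * E ** ((d - 1) / d)
    print(beta, S_thermo, S_cardy, np.isclose(S_thermo, S_cardy))
```

The agreement is exact for every $\beta$, and setting d = 2 reduces the expression to the familiar two-dimensional Cardy result.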
Large N operates as a thermodynamic limit that can transform our statements about the canonical partition function into the microcanonical density of states (this is discussed for example in the appendices of [25, 29]). We find that the Cardy formula is not valid only asymptotically, but instead is valid down to $E = (d-1)\,|E_{\rm vac}|$, which in canonical variables is at the symmetric point $\beta = L_{i,\rm min}$, where $L_{i,\rm min}$ is the smallest spatial cycle. This is precisely the energy at which the Hawking-Page phase transition between the toroidally compactified black brane and the toroidally compactified AdS soliton occurs in the bulk! Similar arguments in the case of non-conformal branes should give an extended range of validity for the Cardy formula of [30].

3.2 Sparse spectra in holographic CFTs

A sparse spectrum is often invoked as a fundamental requirement of holographic CFTs, and we have several avenues of thought that lead to this conclusion. Here we will be concerned with the sparseness necessary to reproduce the phase structure of gravity [25, 29], not with the sparseness necessary to decouple higher-spin fields [31]. We have already reproduced the complete phase structure (3.1). By the arguments in [25, 29] this implies a sparse low-lying spectrum,
$$\rho(E) \lesssim \exp\!\left[L_{i,\rm min}\,\big(E - E_{\rm vac}\big)\right] \quad \text{for } E \le (d-1)\,|E_{\rm vac}|\,,$$
where $L_{i,\rm min}$ is the smallest spatial cycle. To roughly recap the argument of [25], modular constraints on the vacuum energy coupled with the phase structure imply vacuum domination along all cycles except the smallest one. But to be vacuum dominated means that excited states do not contribute to the partition function. This leads to the constraint above, which is really a constraint on the entire spectrum, but is written as above since for $E \ge (d-1)|E_{\rm vac}|$ we have a precise functional form for the density of states: it takes the higher-dimensional Cardy form, which trivially satisfies the Hagedorn bound above.

One can also access additional sparseness data by investigating different boundary conditions. To point out the simplest case, consider super Yang-Mills theory in a given number of dimensions with fermions having periodic boundary conditions along one cycle and antiperiodic boundary conditions along another cycle. Then modular covariance will equate a thermal partition function $Z_{\rm NS,R}$ with a twisted partition function $Z_{\rm R,NS}$ (twisted by $(-1)^F$), which will access the $(-1)^F$-twisted density of states. By similar steps as performed above, one will conclude a sparseness bound for this twisted density of states. The fact that preserving center symmetry can imply a supersymmetry-like bound is carefully discussed in a non-supersymmetric context in [32, 33].

3.3 SL(2,Z) family of black holes

In this section and the next we will consider the case of twists between the cycles of the torus. We will begin with three bulk dimensions, where there is an extended family of solutions known as the SL(2,Z) family of black holes, first discussed in [34] and elaborated upon in [35]. They give an infinite number of phases, instead of the two we usually consider in Lorentzian signature, and we can check volume independence in each of the phases individually. Twists do not seem to be considered in the literature on large-N volume independence, but we will show that volume independence continues to hold. A general SL(2,Z) black hole has a unique contractible cycle, sometimes called an A-cycle.
The non-contractible cycle (sometimes called a B-cycle) is only additively de ned, since for any B-cycle one can construct another B-cycle by winding around the A-cycle n times (n 2 Z) while going over the original B cycle. The usual convention is to set this winding number to zero. Due to this in nity is contractible in the interior [35]. Here Z acts as + n for modular parameter . This data is given by two relatively prime integers (c; d) with c 0. We also need to include the famous examples (0; 1) (thermal AdS3) and (1; 0) (BTZ). In the rest of this section we will ignore numerical prefactors in the free energy density and will only track the dependence on cycle lengths. Let us consider the simplest cases rst, thermal AdS3 and BTZ, both with zero angular potential. This means 1= are pure imaginary. We have Thermal AdS : f ( ; L) BTZ : f ( ; L) These exhibit volume independence for the center-symmetry preserving (i.e. noncontractible) cycles. Let us now add an angular potential , which makes Thermal rotating AdS : L 2=L2 + 2=L2 = We again get consistent results, since the lengths of the contractible cycles of thermal rotating AdS and rotating BTZ are L and p 2 + 2, respectively. The general SL(2; Z) black hole can be given in a frame where the modular parameter is (a +b)=(c +d), the contractible cycle z = z + L(a + b). Their lengths are given as z +L(c +d) and the non-contractible jSA1j = pd2L2 + 2cdL + c2( 2 + 2); jSB1j = pb2L2 + 2abL + a2( 2 + 2) : (3.9) The free energy density is found, for general = i =L + =L, to be d2L2 + 2cdL + c2( 2 + 2) Notice that a and b enter into the size of the non-contractible cycle, but the condition expected since the physically distinct states should only care about c; d by the arguments above. We therefore nd for the general SL(2; Z) geometry that the free energy density exhibits volume independence. SL(d; Z) family of black holes There exists an unexplored analog to the SL(2; Z) family of black holes in higher dimensions, which I will call the SL(d; Z) family of black holes. For a review of some salient points about conformal eld theory on Td and SL(d; Z), see appendix A. The bulk topology is that of a solid d-torus, with a unique contractible cycle. Winding a B-cycle by an A-cycle is topologically trivial. A \small" bulk di eomorphism, i.e. one continuously connected to the identity, can undo this winding. However, winding a Bcycle by another B-cycle leads to a true winding number and is topologically distinct. This corresponds to a large di eomorphism in the bulk. Thus, as in the two-dimensional case, we only need to sum over a subgroup of the full SL(d; Z), because B-cycles are only 1, n 2 Z and V~d the xed contractible cycle vector. As reviewed in appendix A, the V~i represent lattice vectors that de ne the quotient of the plane that gives us the torus Td. Our \seed" solution in three bulk dimensions was global AdS3 at nite temperature and nite angular velocity. In higher dimensions our seed solution will be the AdS soliton, with all spatial directions compacti ed, arbitrary twists turned on (including both twists between spatial directions and time-space twists, interpreted as angular velocities), and the geometry described above should give an SL(d; Z)-invariant partition function. Ignoring the important issue of convergence of this sum, we can see that the invariance is naively guaranteed since the seed solution and its images are independently invariant under the Z we mod out by. 
In other words, the analog of Z0;1( ) from the previous section, call it Z0(V~1; : : : V~d), and its images are invariant under shifts V~i ! V~i + nV~d. Anyway, this restricted sum is not important for our purposes. It is su cient to show that an arbitrary element of the SL(d; Z) family has a free energy density that depends only on the contractible cycle. The simplest case is the AdS soliton at nite temperature with spatial directions compacti ed, which has free energy density where Ld is the length of the contractible cycle. This is volume-independent as required. Twisting any of the non-contractible directions by any of the other directions by any amount does not change this answer. Thus, the general AdS soliton with arbitrary angular potentials and spatial twists exhibits volume independence with respect to the non-contractible cycles. We can now consider SL(d; Z) images of this geometry. The general SL(d; Z) image geometry has global Killing vector elds for all the nonVol(Td) R drF (r; r~h) where r~h is a parameter xed by the size of the dth cycle and F (r; r~h) is some function. Thus, twists can only enter into Vol(Td), but torus volumes are invariant under twists. Higher-order corrections in the Newton constant GN will bring in a dependence on the twists, as the momentum quantization of perturbative torus depends on the twists. In this way we see that volume-independence will break down density a little more carefully. Consider a general twisted seed geometry, with the contractible direction chosen to lie along the dth direction, speci ed by lattice vectors de ning the twists: L2k = j=1 i=1 = 666 ... det(A) = +1 to give Pid=1 a1i id Pid=1 adi id We can compose d(d 1)=2 rotations in the d(d 1)=2 two-planes to make this matrix upper triangular. This will allow us to identify the new modular parameter matrix This will not change the lengths of the cycles, which are given as where Ld gives the length of the contractible direction. The volume of the resulting torus Vol(ATd) = det(A ) = det(A) det( ) = Y In particular, it is unchanged by the SL(d; Z) transformation. The free energy density is exhibiting volume independence in the center-symmetric directions. with the case of no twist in that direction. This is because there exists a bulk di eomorphism, continuously connected to the identity, which induces this twist on the boundary. Twists in non-contractible directions, however, correspond to large gauge transformations cycle is not su cient. We still have a reduction in moduli, with d2 1) = d(d numbers specifying distinct geometries. Interestingly, the distinct geometries obtained by twisting non-contractible directions by other non-contractible directions do not di er in their classical on-shell action. Correlation functions and entanglement entropy In this section we will discuss the implications of the Eguchi-Kawai mechanism for correlation functions and Renyi entropies. As usual, the statements are restricted to leading order in N , meaning tree-level Witten diagrams in the bulk. We will only consider volume independence with respect to a single direction for conceptual clarity; generalization to multiple directions is straightforward. For correlation functions we will see that position space correlators must be given by the method-of-images under smooth quotients, as in (4.10). 
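To make the method-of-images statement concrete in the one case where the image sum closes in elementary functions, the following toy check (my own illustration, not a formula from the paper) sums images of a $1/x^2$ two-point function over a circle of circumference L and compares with the closed form $(\pi/L)^2/\sin^2(\pi x/L)$.

```python
import numpy as np

# Image sum for a dimension-1 operator in 2d: sum_n 1/(x + n L)^2 equals
# (pi/L)^2 / sin(pi x / L)^2.  This is the simplest instance of the
# "finite-size correlator = sum over images of the larger-size correlator" rule.
L, x = 2.0, 0.37
n = np.arange(-200000, 200001)
image_sum = np.sum(1.0 / (x + n * L) ** 2)
closed_form = (np.pi / L) ** 2 / np.sin(np.pi * x / L) ** 2
print(image_sum, closed_form)   # differ only by the ~1e-6 truncation of the 1/n tail
```

For general operator dimensions the image sum does not collapse to an elementary function, which is exactly why the statement is usually left in the summed form of (4.10).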
The connection between large-N reduced correlation functions and the role of the method of images in AdS has previously been explored in the stringy (zero 't Hooft coupling) limit in [16, 17], although there are several points of deviation from the present work. Correlation functions Let us assume that we are volume-independent with respect to a single direction. Then connected correlation functions of local, single-trace, gauge-invariant, neutral-sector observables will be volume independent at leading order in N . Nonlocal operators like Wilson loops can also be treated as long as they have trivial winding around the cycle. One term that may need explanation is \neutral-sector." We will explain brie y below; for details see [14]. Consider the theory on R S1 as we vary the circle size from some length L to some other length L0. A given operator in the theory of size L can be decomposed as n= 1 On=Le2 inx=L : sector" operators, and it is their correlation functions which are volume-independent. For independent. While this may seem like a severe restriction, we will only be concerned with nite-size results from in nite-size results, and all momenta in commensurate with some momentum in in nite size. write this precisely as O(1=N 2(M purely adjoint theory shows that the connected correlator of M single-trace operators is N M 1 in front to isolate the leading contribution to the connected correlator. But the basic point is clear: the statement is about the rst order in N that is expected to have a nonvanishing answer by large-N counting. If it vanishes, no statements are made about the leading nonvanishing order. This is what the limit above makes precise in a pure adjoint theory. We will not worry about the various cases of large-N counting, because within AdS/CFT the leading-in-N diagrams are given by tree-level diagrams in the bulk. It is only these diagrams we wish to make a statement about. We will therefore use as our primary tool the equality hOn1=LOn2=L : : : OnM =LiL = J M 1hOn1=LOn2=L : : : OnM =LiJL with the caveat that this is the leading-in-N piece of a connected correlator left implicit. To see the e ect on a general correlation function of local operators, it will su ce to consider the two-point function. We consider the Fourier representation of the nite-size hO(x)O(y)iL = e 2 i(nx+my)=LhOn=LOm=LiL e 2 i(nx+my)=LJ hOn=LOm=LiJL ; where in the second line we used (4.3). We could immediately use translation invariance to write the correlator as a function of only the separation x y, but to make generalization to higher-point correlators clear we will keep the dependence until the end. We can now simplify this expression by transforming the momentum-space correlator in size J L to position space and evaluating the various sums and integrals: hO(x)O(y)iL = dy0e 2 i(nx+my)=Le2 i(nx0+my0)=LJ hO(x0)O(y0)iJL This generalizes to hO(x1) : : : O(xM )iL = hO(x1 + n1L) : : : O(xM + nM L)iJL : The converse is also true. That is, starting from the method-of-images form of a position space correlator above, one can show (4.3). Altogether, volume-independence of neutralsector correlators is true if and only if nite-size correlators are obtained by the method of images from correlators in a larger size. Two-point functions To focus on the simplest case, consider the equal-time two-point function in a translationinvariant two-dimensional theory. Say we want to construct the nite-size correlator from the in nite-size correlator. 
We begin from (4.10) and use translation invariance, which says that our correlator is only a function of the distance between the two insertion points: hO(x)O(y)iL = hO(x y) O(0)iL = m))L) O(0)iJL where we used the J L-periodicity of the size-J L correlator. To compare to the in nite-size correlator we can take J ! 1 in a particular way: y) O(0)iL = lim y + nL) O(0)iJL + hO(x hO(x + nL)O(y + mL)iJL : ni=0 = lim n= (J 1) e 2 i(nx+my)=Le2 i(nx0+my0)=LhO(x0)O(y0)iJL mL)hO(x0)O(y0)iJL Notice that taking this limit will give us the correlator on the semi-in nite line with semiin nite periodicity. Doubling it (and picking up a factor of 2 just as in the factor of J that comes from relating two-point functions in size L to size J L) gives us the real-line correlator. We thus have our nal result n= 1 y) O(0)iL = Now we compare to gravity in AdS. Conformal eld theory correlators, at leading order in N , are obtained by extrapolating the bulk-to-bulk propagator to the boundary. Since the bulk-to-bulk propagator for free elds satis es a Green function equation, we can the propagator after performing an arbitrary smooth quotient by the method of images. This gives precisely the form of correlator above, which for example in the famous case of the BTZ black hole takes the form [36] hO(t; )O(0; 0)i = n= 1 cosh 2 t for operators of dimension . Notice that this sums over spatial images but not thermal images. For thermal AdS3, which is obtained instead as a quotient in the Euclidean time direction, we would sum over thermal images but not over spatial images. In each case, the correlator is given by a sum over images with respect to the center-preserving direction. This is exactly what is predicted by our arguments above. Furthermore, we see that the \free-ness" of large-N theories is not su cient by itself to imply that the correlator should be a sum over images, since there is no sum over images in the center-breaking direction. M -point functions For higher-point functions, recall that we focus only on diagrams in the bulk that do not have any loops. Any given contribution to the tree-level M -point function is constructed out of M bulk-to-boundary propagators K and n < M bulk-to-bulk propagators G. This means there are n + 1 interaction vertices in the bulk. An illustrative case of tree-level (leading in N ) and loop level (subleading in N ) diagrams is depicted in The position space correlation function can be written schematically as hO(x1) : : : O(xM )iAdS = AdS i=1 where boundary points are denoted by small x and bulk points by big X. From here we hOn1=L OnM =LiAdS= = j j M 1hOn1=L OnM =LiAdS : Before we outline the proof of this we need the following facts. The bulk-to-bulk propagator satis es a Green function equation since the bulk theory is free at this order (leading in N ). The bulk-to-boundary propagator is obtained by a certain limit of the bulk-to-bulk propagator where one of its points is pulled to the boundary. Thus, both propagators There are many more diagrams contributing at this order. Right: a loop-level Witten diagram, which contributes at rst subleading order in N to the nine-point function. It is constructed out interaction vertices. There are again many more diagrams contributing at this order. can be obtained on a smooth quotient of our AdS background by the method of images. Finally, in momentum space, the integrals over spacetime give n + 1 momentum-conserving delta functions since there are no loops in the bulk. 
The general proof of (4.19) is notationally clumsy and would ruin the already regretful aesthetics of this paper, so we will provide an outline of the general proof here and give a sample calculation in appendix B. The left-hand-side is evaluated by an inverse Fourier transform of the position space expression. The position space expression is written in by those in AdS by the method of images. These propagators are then transformed into and integrals are re-ordered at will and this expression is simpli ed down to an integral over the bulk radial interaction vertices zi. The right-hand-side is evaluated in the same way, except its propagators are never replaced with other propagators. This leads to (4.19). Explicit details for a four-point function can be found in appendix B. So we see that the behavior of tree-level perturbation theory in AdSd+1 under generic, smooth quotients of spacetime is reproduced. Notice that bulk loops are made of bulk-tobulk propagators as well, but their momenta are not xed and instead are integrated over. This leads to a non-universal answer, since there are bulk-to-bulk propagators in the AdS by the usual method of images trick, the sums are over di erent momenta and cannot be carried out in general. Entanglement/Renyi entropies Another place where volume independence crops up is in the calculation of entanglement entropy of theories dual to gravity in AdSd+1. For simplicity I will restrict to AdS3. Recall that the Ryu-Takayanagi prescription dictates that the entanglement entropy is given by the regularized area of a minimal surface that is anchored on the entangling surface on the AdS boundary [37]. Consider a spatial interval of size ` on a spatial circle of size L at temperature T . For entangling surfaces at xed time for static states or density matrices, the minimal surface will lie on a constant bulk time slice. This makes it clear that in the con ned phase, which is thermal AdS3, the Ryu-Takayanagi answer will be independent of the center-preserving thermal circle of size T : SEE = . Note that it is not given as a sum over thermal images like in the case of correlation functions. It is instead completely independent of the thermal cycle size. In the decon ned phase, i.e. above the Hawking-Page CSST, we get an answer independent of the center-preserving spatial circle of size L: SEE = The minimization inherent in the Ryu-Takayanagi prescription is the reason why we do not sum over images and so get exact volume independence. (There is a proposal that the image minimal surfaces instead contribute to entanglement between internal degrees of freedom, coined \entwinement" [38].) Apparently, single-interval entanglement entropy is an appropriate neutral-sector \observable" that obeys large-N volume independence. As shown by a bulk calculation in [39], volume-dependence appears at rst subleading order in the central charge c (the proxy for N in two-dimensional theories). Volume-dependence also appears at leading order in the central charge in the Renyi entropies, but not in any trivial way as in the local correlators of the previous section. The Renyi entropies must not be neutral-sector observables. The Renyi entropy in this context is related to the free energy on higher-genus handlebodies; the analytic continuation connecting to the original torus to de ne the entanglement entropy is therefore special. It is interesting that in the cases where we have a volume-independent object, it is the entanglement entropy and not any of the higher Renyi entropies. 
This may be related to the fact that it is the entanglement entropy that naturally geometrizes in the bulk, or to the fact that it is a good ensemble observable (or these two could be the same thing). Higher-spin theory as a failure of the Eguchi-Kawai mechanism We have presented large-N volume independence along all but one cycle of toroidal compacti cations as a necessary condition for a eld theory to have a local gravitational dual. This is discussed further in section 8.3. Higher-spin theories are a good example of how things go wrong if this does not occur, and provide additional evidence for this conjecture. Higher-spin theories in AdS are nonlocal on the scale of the AdS curvature. There are a zoo of higher-spin theories, so let us analyze one of the simplest cases. Consider the parity-invariant Type-A non-minimal Vasiliev theory with Neumann boundary conditions for the bulk scalar eld [40{42]. This is a theory that can be expanded around an AdS4 background and has elds of all non-negative integer spin. It is proposed to be dual to the three-dimensional, free U(N ) vector model of a scalar eld restricted to the singlet sector [43]. The singlet projection is performed by weakly gauging the U(N ) symmetry with a Chern-Simons gauge eld. The Chern-Simons-matter theory does not enjoy large-N volume independence. In fact, given that the matter is in the fundamental representation, it does not even have center symmetry at the Lagrangian level. However, there is a simple procedure for deforming such theories into close cousins with explicit center symmetry at the Lagrangian level. This is discussed for example in [14]. First we add a global U(Nf ) avor symmetry to the boundary theory, and then we weakly gauge it and change the representation of the matter to be in the bifundamental. Such a theory has explicit center symmetry at the Lagrangian level now exist single-trace, gauge-invariant operators made up of arbitrarily long strings of the bifundamental elds, which did not exist in the previous theory. These are the objects associated to the string states in the bulk. This procedure, with some more bells and whistles (the bells and whistles being an appropriate amount of supersymmetry), is precisely what takes these vector models into the more mature ABJ theory [44, 45]. The bulk interpretation of this procedure is also straightforward and deforms the higher-spin theory into its more mature cousin, string theory. The addition of the global avor symmetry is the addition of Chan-Paton factors to the higher-spin theory, which implies upgrading the spin-1 bulk gauge eld to a nonabelian U(Nf ) gauge eld, with all other elds transforming in the adjoint of U(Nf ). The gauging is then a familiar procedure in AdS/CFT whereby the boundary conditions of this bulk gauge eld are changed. In fact, this entire story is just that of the ABJ triality beautifully painted in [46], whereby the higher-spin \bits" are conjectured to bind together into the strings of ABJ theory. All I would like to highlight is that the deformations that were necessary to connect to a theory with a local gravitational limit included deforming to a theory with an explicitly center-symmetric Lagrangian and center-symmetric phases leads to a lifting [47] of the light states present in vector models [48]. It may be interesting to explore what other deformations of the vector models can introduce center symmetry and the particular center symmetry structure that is a hallmark of classical gravity. 
This may shed light on how to deform the set of proposed higher-spin dualities for de Sitter space [49{51] to an Einstein-like dual. In the context of de Sitter, the deformation discussed above leads to a \tachyonic catastrophe" in the bulk, as discussed in [52], and does not seem to give a viable option. Learning about the Eguchi-Kawai mechanism from gravity In this section, we will shift our focus and analyze what gravity teaches us about the Eguchi-Kawai mechanism. Center symmetry stabilization and translation symmetry breaking Although this was discussed in previous sections, we would like to emphasize that the bulk gravitational description gives us a way to predict whether volume independence is upheld in particular holographic gauge theories. rst nontrivial statement is that center symmetry can be broken along at most one cycle for any given con guration of cycle sizes. The second nontrivial statement is that there are simple ways to preserve center symmetry along a given cycle for any cycle size which remains larger than string scale in the bulk. In particular, periodic bulk fermions and antiperiodic bulk scalars prevent cycles from capping o in the bulk, as this is an inconsistent spin structure. These cases therefore preserve center symmetry beyond the CSST points which correspond to gravitational Hawking-Page transitions. This argument does not explicitly rely on the representation theory, where the fermions are in the adjoint and the bifundamental, respectively). The bulk matter is made of gauge-invariant combinations of the boundary periodicity conditions of the bulk matter will be correlated with the periodicity conditions of the boundary elds. For example, bulk fermions are constructed by taking single-trace gauge-invariant operators consisting of an odd number of boundary fermionic elds (e.g. ]). Therefore, bulk fermions with periodic spin structure imply boundary fermions with periodic spin structure. A similar statement is true for antiperiodic bulk scalars. Higher-spin dualities, however, o er an interesting case where the bulk theory is purely bosonic while the boundary theory can be purely fermionic. The quantum-mechanically generated potentials for the gauge eld holonomies can be straightforwardly calculated at weak coupling, see for example [11, 53]. From the weakly coupled point of view, for (3 + 1)-dimensional SU(N ) Yang-Mills theories, preserving center symmetry with non-adjoint periodic fermions or antiperiodic scalars of any representation is not possible. The only choice that works is periodic adjoint fermions. Interestingly, for periodic adjoint fermions (which we will have for super Yang-Mills theories) we seem to preserve center symmetry at strong coupling as well. But there is a small catch. At weak coupling, one would need to make the fermions periodic along all k cycles of Tk At strong coupling, however, this will not give us a background well-described by gravity alone, since it will be the toroidally compacti ed Poincare patch with circles shrinking to substringy scales near the horizon. To have a proper gravitational description, we would need to make the fermions antiperiodic along one of the cycles (or the scalars periodic). In this case, we will still preserve center symmetry along all the cycles that have periodic fermions, but this does not match what happens at weak coupling. 
1A calculation of the Casimir energy in N = 4 super Yang-Mills on T2 R2 [54], for example, shows that we lose volume independence along both cycles if the fermion is periodic along only one cycle. We have not made any comments about operator expectation values and correlation functions within a grand canonical ensemble, say for turning on a chemical potential for some global symmetry. In this case, one can spontaneously break translation invariance, in which case the Eguchi-Kawai mechanism fails [55]. There exist holographic examples of such spatially modulated phases [56{58]. Extending the Eguchi-Kawai mechanism to curved backgrounds An important question about the Eguchi-Kawai mechanism is whether it extends to curved backgrounds. The original Eguchi-Kawai mechanism, and most modern proofs of largeN volume independence, rely on a lattice regularization which we do not have on curved backgrounds (although see [59] for some progress in the case of spherical backgrounds). We will set this aside for the moment as a technical issue. We will see that the natural uplift of volume independence to curved backgrounds is what I will call \topological volume independence." We will make this notion precise by de ning an order parameter (which will again be the expectation value of a Polyakov loop) and checking in gravitational examples that \topological volume independence" is indeed realized. eld theory. We already have some hints from eld theory about what the Eguchi-Kawai mechanism on curved manifolds should look like. The rst hint comes from the perturbative intuition for volume independence on torus compacti cations. In particular, mesons and glueballs form the con ned phase degrees of freedom (baryons have masses that scale with N and can be ignored for our purposes), and interactions between theory behaves as if it is free. The con ned phase degrees of freedom are therefore incapable of communicating with their images to discover they are in a toroidal box. This intuition, however, is valid even in a curved box. This seems to suggest the size of the manifold should again not be relevant even if it is curved. But curved backgrounds have local curvature which can vary as you change the overall size of the background, e.g. increasing the radius of a sphere. There is no reason the mesons and glueballs cannot feel this local curvature at leading order in N and thereby (for maximally symmetric manifolds like a sphere of hyperboloid) would know the overall size of the compact manifold on which they live. So it seems we should not expect a totally general uplift to curved backgrounds. The second hint comes from thinking about volume independence in toroidal compacti cations as a generalized orbifold projection, where one orbifolds by a discrete translation group [14]. (The language of orbifolds here is conventional but everything is really a smooth quotient.) Generic changes in the overall size of curved backgrounds cannot be thought of this way, so we again see that we cannot expect a totally general uplift to Combining the two hints above provides a compelling case for what kind of setup has a chance of maintaining a useful notion of volume independence. One begins with a curved background and considers smooth quotients that change the volume of the manifold. Such operations do not change the local curvature and maintain the picture of volume-changing as an orbifold procedure. This therefore utilizes the two hints above. 
We can now check that gravity provides a calculable setup where this proposal for the Eguchi-Kawai mechanism on curved backgrounds can be checked to be valid. The simplest case to analyze is the conformal eld theory on any simply connected manifold, like the sphere or the hyperboloid. As an illustrative example, we will investigate the family of lens spaces formed by smooth quotients of S3, although our results are general. For any smooth quotients of simply connected manifolds, we will see that the Polyakov loop expectation value continues to serve as an order parameter for center symmetry. Holographic realization of the Eguchi-Kawai mechanism on curved manifolds. Holographic gauge theories in the gravitational limit realize all of the intuition of the above. They explicitly show that naive volume-independence on curved backgrounds does not hold. Furthermore, they show that topological volume independence does hold when interpreted in the above sense! To see that naive volume independence on curved backgrounds does not hold, we can consider an observable as basic as the zero-point function, or the free energy density. We saw that for torus-compacti ed holographic theories, the free energy density was volumeindependent due to the thermodynamics of black branes. For holographic theories on a sphere or the hyperboloid, this is no longer the case. The relevant bulk geometries are the spherical and hyperbolic black holes. The key di erence between these geometries and the black brane is that the horizon radius is not proportional to the Hawking temperature. Instead, we have This means that the Bekenstein-Hawking area law, which scales as rd 1 and gives the thermal entropy of the CFT, is not extensive in eld theory variables (i.e. does not scale as T d 1). Here it is important to keep in mind that the theories we are considering are xing the temperature dependence xes the volume dependence. Moreover changing the radius of the sphere or hyperboloid can equally well be regarded as changing we do not have extensivity of the thermal entropy or the free energy, unless rh ! 1 which pushes us into the black brane limit. Furthermore, correlation functions in these backgrounds have nontrivial volume-dependence. While the ideas of large-N volume independence do not apply, there may still be a lower-dimensional matrix model description of the higher-dimensional theory, see e.g. [60{62] Both of these problems are solved by considering the smooth orbifolds suggested in the previous section. The entropy density (or free energy density) becomes appropriately volume-independent because smooth orbifolds of the spatial manifold cannot be interpreted as changes in the temperature. Thus, the nonlinear relation between horizon radius and temperature is not a problem. Said another way, we consider a setup where our eld theory is on a manifold M d 1 and its thermal ensemble at high temperature (i.e. the decon ned theory) is dominated by a black object with horizon topology M for the eld theory on a sphere, plane, or hyperboloid. The quotient of the manifold Md 1 d 1. This is what happens by some freely acting group changes the Bekenstein-Hawking entropy as follows: SBH = We see from this formula that the eld theory's entropy density and free energy density is appropriately independent of such changes in volume, as long as no CSST occurs (more on this possibility below). How about correlation functions? 
As we saw before, these are constructed by bulk Witten diagrams, whose atoms are bulk-to-bulk and bulk-to-boundary propagators. These objects again obey a Green function equation in the bulk, meaning any orbifold of the background geometry can be dealt with by summing over orbifold images. As long as we remain at leading order in N , meaning we do not consider bulk loops, the correlator will pick up a trivial volume dependence fully determined by the volume-dependence before quotienting. We have analyzed volume independence in the decon ned phase of the theory, where the relevant bulk geometries which dominate the thermal ensemble are given by black holes with some horizon topology. Uplifting the intuition from our torus-compacti ed theories, we should expect to nd nontrivial volume-dependence and temperature-independence in the con ned phase of the theory. We will address this in the next section. It is interesting that the gravitational description and the eld theory description give the same hints as to what sort of generalization to curved backgrounds should work. In particular, we discussed how from the eld theory point of view we should expect volume-changing orbifolds to be the natural uplift of the Eguchi-Kawai mechanism to curved backgrounds. Gravity gives the exact same intuition, and furthermore it explicitly demonstrates that it works, at least for the types of observables considered above. Order parameter on curved manifolds and testing topological volume independence. For any simply connected manifold M d 1, the quotient by some freely acting gives a manifold with nontrivial fundamental group isomorphic to . This means that we can wrap a Polyakov loop on the existing nontrivial cycle and could reasonably expect that its expectation value continues to serve as a good order parameter. We will see in a concrete example that this is the case. To illustrate the point, consider the family of lens spaces L(p; 1) which have < 4 =p : (6.3) d 23=p = Volume independence for lens spaces can now be stated in terms very close to that of the generalized orbifold projections used to discuss volume independence for torus compacti cations. Just as we vary the size of a circle in a torus compacti cation by shifting its periodicity, in this case we move between lens spaces by changing the periodicity of the coordinate. To maintain a smooth quotient we need p 2 Z+ so these are discrete changes. by the change in the circle. We can wrap a Polyakov loop around the circle due to the nontrivial homotopy, and it is again the expectation value of this loop which we propose serves as our order parameter. Let us turn to the gravity picture. In the decon ned phase, the orbifolded circle is non-contractible in the bulk, which implies a vanishing Polyakov loop expectation value and therefore volume independence: ds2 = + r2d 23=p : As we showed in the previous section, topological volume independence is indeed realized in the free energy density and correlation functions. How about the con ned phase? The naive geometry for the con ned phase is obtained by taking a quotient of global AdS. This geometry has a conical singularity at the origin which is not well-described within gravity. For antiperiodic fermions along the orbifolded circle (with even p > 2), it has been proposed that closed string tachyon condensation regularizes the geometry into what is called the Eguchi-Hanson-AdS soliton [63, 64]. 
This geometry has the orbifolded circle smoothly capping o in the interior, giving a nonvanishing expectation value to the Polyakov loop. There is a decon ning CSST at inverse temperature c = 2 8p2 + 20)3=2 (This corrects the expression given in (4.14) of [65].) In the con ned phase, an analysis of the Eguchi-Hanson soliton shows that we have topological volume-dependence with respect to the spatial manifold and volume-independence with respect to the thermal circle! This picture of topological volume independence is also found in ABJM theory through a nontrivial calculation utilizing supersymmetric localization on lens spaces [66]. An intuitive way to understand the absence of nite-size e ects is to transmute the connections along the orbifolded circle is discussed in e.g. [67]. The topological volume independence that we discuss seems to be controlling the relaon S2, as discussed on the gravity side in [68] and the eld theory side in [69]. An important distinction we draw here from previous work is that the precise pattern of center symmetry breaking/preservation in the gravitational picture is not realized at weak coupling. It would be fascinating to carry out weakly coupled tests of our proposal for topological volume independence of gauge theory on quotients of simply connected manifolds. A simple case to analyze is that of (3 + 1)-dimensional gauge theory on a lens space. In particular, our arguments (and weakly coupled intuition from an ordinary circle compacti cation of at space) suggest that periodic adjoint fermions along the Hopf ber of the lens space should lead to topological volume independence at weak coupling. Extensivity of the Bekenstein-Hawking-Wald entropy The Bekenstein-Hawking area law is a universal formula in Einstein gravity that applies to black hole horizons, cosmological horizons, and in a certain sense to spacetime itself. Let us restrict the discussion to black hole horizons and focus on the scaling with area, ignoring the AdS, since this corresponds to the asymptotically high-temperature limit of the eld theory where the entropy should become extensive [70]. As discussed in the previous section, in this limit the scaling of the eld theory entropy with the spatial volume maps directly to the scaling with the area of the horizon in the bulk. The Eguchi-Kawai mechanism, when manifested as the volume-independence of entropy density, seems to be exactly the sort of tool necessary to provide a general mechanism for the area law. But there are several puzzling and ultimately insurmountable features in trying to pinpoint an exact scaling with area purely from the Eguchi-Kawai mechanism (except for large toroidally compacti ed black branes in AdS). We will instead see that the mechanism explains a more general \area" law: the extensivity of the Bekenstein-Hawking-Wald entropy.2 Before considering higher curvature corrections, however, let us investigate how the Bekenstein-Hawking area law is at least consistent with the Eguchi-Kawai mechanism, even if not predicted by it. In AdSd+1/CFTd, we may ask why toroidally compacti ed black branes above the Hawking-Page phase transition have no subextensive piece in their classical entropy. Fixing to a spatial torus, as ! 0 we expect to get an entropy scaling of the conformal eld theory . Since the bulk Hawking temperature scales as T rh, this gives S d 1Vd 1 in bulk variables, which is precisely the Bekenstein-Hawking area law. 
However, h as the temperature is lowered we should generically expect subextensive corrections to the thermal entropy, which would spoil the universal area law in the bulk since T tained for black branes at any temperature. However, the Eguchi-Kawai mechanism saves the day, and implies that no such corrections can appear until one undergoes a CSST, whose location can be determined as discussed in section 3. This uses the Eguchi-Kawai mechanism to generalize Witten's explanation of the Bekenstein-Hawking area law to all toroidally compacti ed black branes above the Hawking-Page transition. Of course, if a periodic spin structure is chosen for the fermions along all spatial cycles, then no such transition appears in the gravitational regime and we can explain the area law for arbitrary toroidally compacti ed black branes. This is just a recap of what was shown more carefully in section 3. What about the Bekenstein-Hawking area law for black hole horizons with curvature, like the spherical or hyperbolic black holes in AdS? Again adopting center-symmetry preservation along the orbifolding cycle (up to any CSST) as our working assumption, we deduce that the entropy density in the eld theory is volume-independent in the orbifold2The language here and in the literature is very confusing. We refer to the Bekenstein-Hawking entropy as extensive even though it is very famously subextensive. By this we mean extensive in horizon area not volume. Also, the Wald entropy is sometimes referred to as providing subextensive corrections to the Bekenstein-Hawking area law, by which it is meant terms that do not scale with the area of the event horizon. When we refer to the extensivity of the Wald or Bekenstein-Hawking-Wald entropy, we mean the fact that it can be written as an integral of a local quantity over the horizon of the black hole. We will discuss this further below. ing direction. The orbifolding direction is a discrete direction, indexed by an integer p in the previous section. Any potential analytic continuation to complex p is on very shaky ground, but the Bekenstein-Hawking area law for the original spherical or hyperbolic black hole may be understood by analytic continuation from the discrete family of quotiented geometries. This is akin to understanding entanglement entropy through the discrete Renyi family, although there the analytic continuation is on much rmer footing. If these ideas are correct, then they provide a mechanism for the area law for large black holes with horizon topology which dominate the canonical ensemble for some dual eld theory on background . What about small black holes? Here the interpretation in terms of plasma balls in the dual large-N gauge theory may be useful [71]. It may then be true that the Eguchi-Kawai mechanism applies to this decon ned plasma ball in a way which maps to the area law in the bulk, as we saw for large black holes above. Stringy corrections and extensivity of the Wald entropy. We can ask about subleading order in the 't Hooft coupling , which should correspond to bulk stringy corrections. One way these stringy corrections manifest themselves is as higher-curvature corrections to the bulk Einstein gravity. The Polyakov loop analysis remains the same and continues to indicate center symmetry preservation along d 1 cycles. Thus a center-symmetry analysis in the eld theory predicts that for any planar/spherical/hyperbolic black holes, the entropy density should be volume-independent in any smooth orbifolding direction. 
To check this, we can look at zero-point functions like the entropy density. Since we have higher-curvature corrections we need to use the Wald formula for black hole entropy. For toroidally compacti ed black branes, the area law is maintained although the coe cient can change. For spherical or hyperbolic black holes, we have corrections to the BekensteinHawking area law which do not scale with the area of the horizon. This seems to be in contradiction with the Eguchi-Kawai mechanism. To address this, let us step back for a There is a spiritually correct but technically incorrect holographic explanation of the Bekenstein-Hawking area law that is often given. It says that the scaling with area is because there is a holographic dual theory in one lower dimension with the same entropy, and its entropy is scaling with volume as it should be. This captures the holographic spirit, but in general it is technically incorrect as can be seen in many ways. If the area maps to a eld theory volume, does the 1/GN map to temperature? This is of course wrong. Even in the cases where the area does map rigorously to volume, like toroidally compacti ed black branes, why does the eld theory not exhibit any subextensive corrections to its entropy? This we explained within our framework of large-N volume independence. Finally, what about higher curvature corrections? In the bulk the entropy picks up what are sometimes confusingly called \subextensive corrections to the Bekenstein-Hawking area law" from the Wald entropy formula. This ruins the Bekenstein-Hawking area law. Interpreted as bulk stringy corrections and therefore as corrections in the gauge coupling of a dual eld theory, why should going to weaker coupling ruin extensivity? These issues are clari ed by recalling that the Wald entropy is an integral over the event horizon and is therefore extensive. Consider a black hole with metric ansatz ds2 = is independent of r and t. This does not capture the most general case but will su ce for the argument. The Wald entropy for a general di eomorphism-invariant higher-curvature theory of gravity with Lagrangian density L is given as an integral along is the binormal to the horizon. The corrections implied by the Wald entropy are terms that do not scale as rhd 1, which is the scaling of the Bekenstein-Hawking entropy. But notice that the general theory will still scale with the volume of : SW Vol( ). This is what we mean by extensivity, which as before can be thought of in terms of quotients of SW = ! Md 1= =) SW ! SW =j j : In this sense the general Wald entropy | therefore the entropy in an arbitrary di eomorphism-invariant theory of classical gravity | is just as extensive as the Bekenstein-Hawking entropy. For black branes this means that the Wald entropy To bring this extensivity of curved horizons into clearer focus, consider quantum (subleading in GN , i.e. subleading in N ) corrections to the Bekenstein-Hawking-Wald entropy. At rst order, these are logarithmic in the area of the event horizon: SW + log(SW ) + : : : : The correction neither scales with the area of the horizon nor with Vol( ). It is truly This discussion should make clear that the gravity that emerges from our center symmetry analysis is not necessarily Einstein gravity. Nevertheless, it would be fascinating if somehow the stringency of this center symmetry structure necessitated a CFT with an Einstein gravity dual. 
One way this could occur is by requiring a sparse higher spin spectrum [31] | recently shown to give c a for the anomaly coe cients c and a in four-dimensional CFTs [72] | just as it required a sparse spectrum of low-lying states to reproduce the extended range of validity of the general-dimensional Cardy formula. In this spirit, it is encouraging that restoration of a center symmetry plays an important role in deforming higher-spin theory (within which the higher spin elds cannot be made sparse) into ABJ theory (within which they can). Reproducing additional features of AdS gravity We have shown that several universal features of AdS gravity can be reproduced with the starting assumption of center symmetry preservation along all but one cycle in a large-N theory (and the suitable generalization of this statement to curved backgrounds as discussed before). However, there are still several features that we would like to explain. A powerful technical assumption in the context of reproducing universal features of gravity in AdS3/CFT2 is that of Virasoro vacuum block domination of the four-point function on the sphere. This is expected to be a valid assumption in large-c theories with a sparse light spectrum and sparse low-lying operator-product-expansion (OPE) coe cients. This suggests that it might be implied by our framework. More precisely, consider a fourpoint function hO1(1); O2(z)O3(1)O4(0)i, which can be decomposed into representations of the Virasoro algebra (i.e. into Virasoro blocks) by inserting a complete set of states. It is believed that taking c ! 1 with external and internal operator dimensions scaling with c leads to an exponentiation of the Virasoro block [73, 74]: i = 1; 2; 3; 4 ; where hp is the internal operator dimension. Now taking z ! 0 leads to vacuum block leading OPE singularity from bringing together O2 and O4: F (c; hp; hi; z) = zhp h2 h4 (1 + O(z)) : In holographic theories, vacuum block dominance | like the Cardy formula we discussed in (3) | seems to have an extended range of validity, which in this case means for a range of z beyond the asymptotic limit z ! 0. This requires a sparseness bound both on the spectrum of states and on the operator product expansion coe cients. Our framework requires large c to begin with and reproduces a sparse light spectrum as discussed in section 3. Data about the OPE coe cients is also accessible in this framework since treelevel Witten diagrams have bulk interactions. Concretely, one may hope to analyze more carefully volume independence for the blocks between the sphere and the torus, possibly using the tools of [75{79]. An orthogonal clue that vacuum block dominance may be implied by this framework is a calculation of the entanglement entropy in a heavy microstate on a circle [80{82], which gives an answer independent of the size of the circle! Accessing some quantity or feature which directly exhibits the smooth, geometric nature of the bulk is another natural goal for this framework. The singularities of [83] are one such feature that indicate a sharp geometric structure. Reducing or blowing up models The strong coupling description of holographic theories makes manifest that one can achieve full volume-independence (i.e. preserve center symmetry for all cycle sizes) along directions with periodic (antiperiodic) boundary conditions for fermions (bosons), as long as one direction has the opposite boundary conditions and caps o in the interior. 
then perform a large-N reduction of these theories down to matrix quantum mechanical theories, i.e. (0 + 1)-dimensional theories. For a discussion of the validity of the reduction down to zero size, see appendix C. This captures physics in both con ned and decon ned phases. When describing thermal physics in the gravitational limit, there will always be one direction that does not reduce, prohibiting the reduction to a matrix model description, i.e. a (0 + 0)-dimensional theory. (See [84] for a discussion of subtleties in dimensionally reducing volume-independent theories.) Blowing up low-dimensional models is another interesting direction to pursue, especially in light of recent developments in low-dimensional models like the Sachdev-Ye-Kitaev (SYK) model, which captures some features of AdS2 gravity. The addition of avor to the SYK model [85] gives it the necessary ingredient to be blown up into a higher-dimensional model by the methods of [86, 87]. (See also [88, 89] for a di erent kind of blow-up.) The necessity of the Eguchi-Kawai mechanism for holographic gauge theI have intermittently referred to the Eguchi-Kawai mechanism as a necessary feature of holographic gauge theories. In a certain sense, this is obviously ridiculous. Center symmetal matter eld, although we still have a controlled gravitational description of the infrared physics. In this case, what I really mean is that there exists a theory which at large N is equivalent to the one with a single fundamental eld, but which has center symmetry at the Lagrangian level. More simply, the fundamental matter decouples at leading order in N , so the center symmetry is emergent at in nite N . As explored heavily in the literature on large-N volume independence and mentioned in the introduction, orbifold/orientifold dualities in many cases imply an emergent center symmetry at in nite N , even when centerbreaking matter does not naively decouple [21, 22]. It is this generalized emergent sense in which the Eguchi-Kawai mechanism is necessary. In other words, there is a possibility that center symmetry (whether existing explicitly or emergent) is playing an indispensable role in realizing the precise form of volume independence necessary to admit a gravitational description. Absent conclusive evidence to the contrary, I conjecture this to be the case. It would be nice to have a formalism centered around center symmetry that does not use the crutch of gauge theory, which may be an unnecessary redundancy of description.3 Interesting cases to study, which may teach us about large-N equivalences, are that of the D1-D5 system and of attempts at describing unquenched avor in AdS/CFT. At the orbifold point, the D1-D5 theory can be thought of as a free symmetric orbifold CFT. It is a gauge theory, but the gauge group is SN which has a trivial center. Nevertheless, this theory seems to have at least some aspects of large-N volume independence. It realizes the phase structure of gravity, and certain correlators can be written as a sum over images [91]. Indeed, the physics of long strings/short strings and sharp transitions (see for example [92, 93]). The case of unquenched avor requires keeping Nf =N N ! 1, which means the avor does not decouple at leading order in N . If there is a 3It was pointed out to me by Brian Willett that center symmetry can be discussed in the language of one-form global symmetries, without the need for a Lagrangian, as developed in [90]. 
smooth gravitational description in AdS (or some similarly warped spacetime), then the nature of nite-size e ects should be analyzed. There are many directions to pursue with these ideas in the context of AdS/CFT, only some of which were addressed above. Taking a broader view of the subject, it is clear that holographic dualities which have rules like those of AdS/CFT will have similar volumeindependent structure in correlation functions and phase structures. It is remarkable that rst introduced by Eguchi and Kawai is relevant only in the context of largeN gauge theories, and even then only at leading order in N . It is as if it was tailor-made to explain classical gravity, whether within AdS or with some other asymptotia. Indeed, one universal feature of classical gravity we can hang our hats on, robust against changes in asymptotia, is the extensivity of the Bekenstein-Gibbons-Hawking-Wald entropy. The universality of this simple formula only exists at leading order in GN , and we saw that in the context of AdS/CFT it maps to universal volume-independence at leading order in N for certain black holes. It is natural to conjecture that the same mechanism is controlling the entropy for all black holes, although as discussed in the main text this statement should be interpreted with care. The capability of these ideas in addressing classical gravity more this is a useful and technically accurate perspective beyond AdS/CFT remains to be seen. I am greatly indebted to Aleksey Cherman for his many patient explanations of modern developments regarding the Eguchi-Kawai mechanism. I would like to thank Tarek Anous, Aleksey Cherman, and Raghu Mahajan for useful conversations and comments on a draft. I would also like to thank Dionysios Anninos, David Berenstein, William Donnelly, David Gross, Gary Horowitz, Nabil Iqbal, Zohar Komargodski, Don Marolf, Mark Srednicki, Tomonori Ugajin, Mithat Unsal and Brian Willett for useful conversations. In this section we will review some basic points about SL(d; Z), the mapping class group of Td. When d is even, we will want to consider PSL(d; Z) instead, obtained by quotienting by the center f1; 1g. For simplicity we will just refer to the group as SL(d; Z) with this Naively, the torus is parameterized by d arbitrary real vectors V1; : : : ; Vd in ddimensional space. However, we can use global rotational invariance to eliminate Pd 1 overall size modulus. The torus now has d2 1 = (d 1)(d + 2)=2 real moduli. Calling the coordinates x1; : : : ; xd, we have a twist modulus ij between xi and all xj with i=1 i = i < j, and a size modulus ii for d 1 of the cycles xi. Keeping the overall size modulus explicit, we can arrange the moduli in terms of the following lattice vectors: 2 V~1 3 6 V~2 77 u1 = 66 U1 = 660 1 U2 = 660 0 1 Generators. In this section we will list four sets of generators of SL(d; Z) and show them to be equivalent. Our rst two sets of generators of SL(d; Z) can be written as u2 = 660 0 1 d matrices. The small u's can be shown to generate the big U 's and vice versa. The relations for e.g. d = 4 are U1 = u1 1; U2 = u1 1u2u1 2u2u1u2u1 1u2 1u1u2 1u1u2 1u1 1u2u1 1u2u1u2 1u 1 Generating the small u's by the big U 's is obtained by swapping u $ U . We will henceforth stick with the big U 's. U1 cyclically permutes all the entries of a vector while U2 twists the rst vector by an integer amount in the direction of the second vector. 
The power d + 1 on Another set of generators can be given by a simple generalization of the usual S and T generators familiar from SL(2; Z). In this case, we simply have Sij and Tij along any pair of directions i < j. Beware the notation: Sij is a d d matrix for any given i; j, not the fi; jgth element of a matrix S. Confusingly, S Transposes and T Shears! Better to think of it as S Swaps and T Twists. So we have the elementary row switching (with a minus sign, conventionally placed in the upper triangular part) and upper-triangular row addition (with integer entry) transformations. To see their action more explicitly as matrix multiplication, imagine arranging the lattice vectors row by row into a d-dimensional matrix. Then, for example, T25 twists direction two by an integer in direction transposes lattice vectors as V~1 ! are more diagrams contributing at this order, including the one with the four bulk-to-boundary propagators meeting at a single interaction vertex in the interior. twists in any direction. These include the upper-triangular Tij from the previous section and upper-twists can also generate U1 and U2 as U1 = (S12)(S23) (Sd 1;d) and U2 = T12. Four-point function sample calculation Here we calculate the tree-level contribution to the four-point function illustrated in gure 2. We will calculate it in an AdS background where one direction has size L and another AdS background where the same direction has size J L for J 2 Z+. We will suppress all We rst calculate the correlator in size J L. We have O(x4)iJL = Fourier transforming gives K(s1=J L) K(s4=J L)G(s5=J L); where i = 1; : : : ; 5. Evaluating the x5 and x6 integrals gives s1+s2;s5 s3+s4; s5 e 2JLi (s1x1+ +s4x4)K(s1=J L) K(s4=J L)G(s5=J L) : X J 2L2 s1+s2+s3+s4;0e 2JLi (s1x1+ +s4x4)K(s1=JL) K(s4=JL)G((s1+s2)=JL) ; transform with respect to the variables xi. Recall that the discrete transforms in nite size hO(n1=L) : : : O(n4=L)iJL = X e 2JLi (s1x1+:::s4x4)+ 2Li (n1x1+ +n4x4)K(s1=JL) K(s4=JL)G((s1 + s2)=JL) : Evaluating the integrals and then the sum gives dz5 dz6K(n1=L) : : : K(n4=L)G((n1 + n2)=L) n1+n2+n3+n4;0 : f (x) = X e2 inx=Lf (n=L) =) f (n=L) = dx e 2 inx=Lf (x) : 1 Z L e 2JLi (n01(x1 x5)+ +n04(x4 x6)+n05(x5 x6))K(n01=JL) K(n04=JL)G(n05=JL) (B.14) Now we consider the correlator in size L, where we replace the bulk-to-bulk propagator and the bulk-to-boundary propagators with those of size JL by the method of images: O(x4)iL = ni=0 x5)K(x2 + n2L x6)K(x4 + n4L x6)G(x5 + n5L ni=0 n0i= 1 e 2JLi (n01(x1+n1L x5)+ +n04(x4+n4L x6)+n05(x5+n5L x6))K(n01=JL) K(n04=JL)G(n05=JL) : Switching the two sums and evaluating the sums over ni gives n0i= 1 for arbitrary integer si. Evaluating the sums over n0i gives si= 1 e 2Li (s1(x1 x5)+ +s4(x4 x6)+s5(x5 x6))K(s1=L) K(s4=L)G(s5=L) : s1+s2;s5 s3+s4; s5 e 2Li (s1x1+ +s4x4+s5x5)K(s1=L) K(s4=L)G(s5=L) : s1+s2+s3+s4;0 e 2Li (s1x1+ +s4x4)K(s1=L) K(s4=L)G((s1 + s2)=L) : Performing the x5 and x6 integrals gives si= 1 Performing the sum over s5 gives si= 1 = J L 1 Z L hO(n1=L) O(n4=L)iL = dz5 dz6K(n1=L) K(n4=L)G((n1 + n2)=L) n1+n2+n3+n4;0 : (B.20) This is our nal answer for the correlator in size L. Comparing this answer to (B.8) gives us hO(n1=L) O(n4=L)iL = J 3hO(n1=L) O(n4=L)iJL as predicted by (4.19). This calculation should make clear that (4.19) is correct diagram-by-diagram in the bulk. Moreover, any bulk-to-bulk propagator with momenta that need to be integrated over, as would be the case for loop diagrams, would ruin this structure. 
This is expected since the presence of such propagators signals a subleading-in-N Witten diagram, for which volume-independence does not apply. Validity of gravitational description For our gravitational description to be valid, we need to deal with smooth geometries and keep cycle sizes larger than string scale. The rst criterion is simply because singularities are not well-described within gravity. The second criterion is because stringy excitations (e.g. strings that wrap the cycles) will become important for cycles that are string scale. In this case, one needs to T-dualize along the small cycle to blow it up. The language here is a bit confusing, as T-dualizing takes us from a valid IIB gravity description to a valid IIA gravity description, but we are concerned with maintaining a valid gravity description in the same frame throughout. Maintaining validity of the gravitational description depends on the periodicity conditions chosen for the matter elds. To be very concrete, let us consider the duality between Type IIB string theory in AdS5 pacti ed on a spatial three-torus of cycle lengths Li. First consider the case where the matter elds are given supersymmetry-preserving boundary conditions along the spatial cycles. In this case the ground state geometry is given by the Poincare patch with periodic identi cations in the spatial directions. But this means that the cycles become arbitrarily small as the horizon is approached, necessitating a breakdown of the IIB gravity description. This was the case analyzed in [18]. However, nite temperature is di erent and necessitates a discussion of the order of limits taken. The Euclidean geometry is that of the black brane: ds2 = f (r) = r2(1 (rh=r))4; rh, the S5 is suppressed, and tE gives the inverse temperature. The minimum proper size of a given cycle i occurs at rh. This size must be bigger than the string scale `s, which gives us the condition `s =) Here we have brought in the 't Hooft coupling . We see that we can make Li arbitrarily small and maintain validity of the gravitational description as long as we take rst. In other words, we do not scale any cycle sizes with the 't Hooft coupling as we take the strong coupling limit The case we were more preoccupied with in the text, especially in section 3, is that of modular U1-invariant boundary conditions. This means supersymmetry-breaking boundary conditions along all cycles. As we saw, this implies that when a cycle size is the smallest, it caps o in the interior. The geometry that dominates is either the black brane or the AdS soliton, whose Euclidean continuations are identical. The condition above therefore where L ;min is the minimum cycle size. By de nition we have L ;min < 1, so this condition is satis ed trivially. Any time a cycle tries to become substringy, it instead caps o . Mixed boundary conditions which preserve some subgroup of the full modular U1 invariance are analyzed similarly. The nal conclusion is that the gravitational description will remain valid for all cycle sizes as long as at least one cycle has supersymmetry-breaking boundary conditions and remains nite sized in the CFT. The one caveat is that any supersymmetry-preserving cycles are not taken to zero size as an inverse power of the 't Hooft coupling . This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. 113 (1982) 47 [INSPIRE]. Phys. 
B 206 (1982) 440 [INSPIRE]. theory, Phys. Rev. Lett. 48 (1982) 1063 [INSPIRE]. Phys. Lett. B 88 (1979) 135 [Erratum ibid. B 89 (1980) 437] [INSPIRE]. B 188 (1981) 269 [Sov. J. Nucl. Phys. 32 (1980) 431] [Yad. Fiz. 32 (1980) 838] [INSPIRE]. [hep-th/0608072] [INSPIRE]. B 652 (2007) 359 [hep-th/0612097] [INSPIRE]. large-N lattice gauge theory, Phys. Rev. D 27 (1983) 2397 [INSPIRE]. U(1) non-commutative gauge theory: the fate of one-loop instability, JHEP 10 (2006) 042 Eguchi-Kawai model, JHEP 01 (2008) 025 [arXiv:0711.1925] [INSPIRE]. gauge theories, Phys. Rev. D 78 (2008) 034507 [arXiv:0805.2146] [INSPIRE]. Mod. Phys. 53 (1981) 43 [INSPIRE]. JHEP 07 (2010) 043 [arXiv:1005.1981] [INSPIRE]. volume independence, Phys. Rev. D 78 (2008) 065035 [arXiv:0803.0344] [INSPIRE]. theories, JHEP 06 (2007) 019 [hep-th/0702021] [INSPIRE]. [arXiv:1210.4997] [INSPIRE]. [hep-th/0505148] [INSPIRE]. [hep-th/0506183] [INSPIRE]. (2010) 066002 [arXiv:1005.3519] [INSPIRE]. (2014) 030 [arXiv:1404.0225] [INSPIRE]. International Conference on Supersymmetry and Uni cation of Fundamental Interactions, Karlsruhe Germany, 26 July{1 August 2007, pg. 148 [arXiv:0708.0632] [INSPIRE]. [hep-th/9803002] [INSPIRE]. [arXiv:1512.06855] [INSPIRE]. B 270 (1986) 186 [INSPIRE]. [hep-th/9712251] [INSPIRE]. sparse d > 2 conformal eld theory at large-N , arXiv:1610.06186 [INSPIRE]. Rev. D 93 (2016) 126005 [arXiv:1508.02728] [INSPIRE]. the large c limit, JHEP 09 (2014) 118 [arXiv:1405.5137] [INSPIRE]. (2015) 081 [arXiv:1504.02094] [INSPIRE]. theory, JHEP 10 (2009) 079 [arXiv:0907.0151] [INSPIRE]. [arXiv:1306.2960] [INSPIRE]. JHEP 07 (2015) 016 [arXiv:1409.1617] [INSPIRE]. JHEP 12 (1998) 005 [hep-th/9804085] [INSPIRE]. [34] J.M. Maldacena and A. Strominger, AdS3 black holes and a stringy exclusion principle, emergence of spacetime, JHEP 01 (2015) 048 [arXiv:1406.5859] [INSPIRE]. elds, Phys. Lett. B 189 (1987) 89 [INSPIRE]. 550 (2002) 213 [hep-th/0210114] [INSPIRE]. classical gravity, JHEP 09 (2013) 109 [arXiv:1306.4682] [INSPIRE]. elds, Nucl. Phys. B 291 (1987) 141 [INSPIRE]. Chern-Simons-matter theories, M 2-branes and their gravity duals, JHEP 10 (2008) 091 [arXiv:0806.1218] [INSPIRE]. [arXiv:0807.4924] [INSPIRE]. strings, J. Phys. A 46 (2013) 214009 [arXiv:1207.4485] [INSPIRE]. 06 (2014) 168 [arXiv:1308.2077] [INSPIRE]. coupled to fundamental matter, JHEP 03 (2013) 097 [arXiv:1207.4195] [INSPIRE]. correspondence, Class. Quant. Grav. 34 (2017) 015009 [arXiv:1108.5735] [INSPIRE]. arXiv:1309.7413 [INSPIRE]. de Sitter space, JHEP 01 (2015) 074 [arXiv:1405.1424] [INSPIRE]. holography from functional determinants, JHEP 02 (2014) 007 [arXiv:1305.6321] [INSPIRE]. S1: a smooth journey from small to [arXiv:0802.1232] [INSPIRE]. Rev. D 60 (1999) 046002 [hep-th/9903203] [INSPIRE]. theories, JHEP 08 (2010) 030 [arXiv:1006.2101] [INSPIRE]. [56] S. Nakamura, H. Ooguri and C.-S. Park, Gravity dual of spatially modulated phase, Phys. [57] A. Donos and J.P. Gauntlett, Holographic striped phases, JHEP 08 (2011) 140 [arXiv:1106.2004] [INSPIRE]. [62] M. Honda and Y. Yoshida, Localization and large-N reduction on S3 for the planar and M-theory limit, Nucl. Phys. B 865 (2012) 21 [arXiv:1203.1016] [INSPIRE]. [66] L.F. Alday, M. Fluder and J. Sparks, The large-N limit of M 2-branes on lens spaces, JHEP correspondence, JHEP 01 (2002) 013 [hep-th/0112131] [INSPIRE]. from conformal eld theory, arXiv:1610.09378 [INSPIRE]. eld theory, Nucl. Phys. B 241 (1984) 333 [INSPIRE]. [74] Al.B. 
Zamolodchikov, Conformal symmetry in two-dimensional space: recursion representation of conformal block, Theor. Math. Phys. 73 (1987) 1088 [Teor. Mat. Fiz. 73 [75] A.L. Fitzpatrick, J. Kaplan and M.T. Walters, Universality of long-distance AdS physics from the CFT bootstrap, JHEP 08 (2014) 145 [arXiv:1403.6829] [INSPIRE]. JHEP 08 (2015) 049 [arXiv:1504.05943] [INSPIRE]. JHEP 07 (2016) 123 [arXiv:1603.04856] [INSPIRE]. equivalences of large-Nc orbifold gauge theories, JHEP 07 (2005) 008 [hep-th/0411177] Sachdev-Ye-Kitaev models, arXiv:1609.07832 [INSPIRE]. SYK model, JHEP 01 (2017) 138 [arXiv:1610.02422] [INSPIRE]. 02 (2015) 172 [arXiv:1412.5148] [INSPIRE]. string theory and gravity, Phys. Rept. 323 (2000) 183 [hep-th/9905111] [INSPIRE]. [1] T. Eguchi and H. Kawai , Reduction of dynamical degrees of freedom in the large -N gauge [2] Yu . M. Makeenko and A.A. Migdal , Exact equation for the loop average in multicolor QCD , [3] Yu . Makeenko and A.A. Migdal , Quantum chromodynamics as dynamics of loops, Nucl . Phys. [4] G. Bhanot , U.M. Heller and H. Neuberger , The quenched Eguchi-Kawai model , Phys. Lett . B [5] D.J. Gross and Y. Kitazawa , A quenched momentum prescription for large-N theories , Nucl. [6] A. Gonzalez-Arroyo and M. Okawa , The twisted Eguchi-Kawai model: a reduced model for [7] W. Bietenholz , J. Nishimura , Y. Susaki and J. Volkholz , A non-perturbative study of 4D [8] M. Teper and H. Vairinhos , Symmetry breaking in twisted Eguchi-Kawai models , Phys. Lett. [9] T. Azeyanagi , M. Hanada , T. Hirata and T. Ishikawa , Phase structure of twisted [10] B. Bringoltz and S.R. Sharpe , Breakdown of large-N quenched reduction in SU(N ) lattice [11] D.J. Gross , R.D. Pisarski and L.G. Ya e, QCD and instantons at nite temperature , Rev. [12] A. Gonzalez-Arroyo and M. Okawa , Large-N reduction with the twisted Eguchi-Kawai model , [14] P. Kovtun , M. U nsal and L.G. Ya e, Volume independence in large-Nc QCD-like gauge [15] B. Lucini and M. Panero , SU(N ) gauge theories at large-N , Phys. Rept. 526 ( 2013 ) 93 [16] K. Furuuchi , From free elds to AdS: thermal case , Phys. Rev. D 72 (2005) 066009 [17] K. Furuuchi , Large-N reductions and holography, Phys. Rev. D 74 (2006) 045027 [18] E. Poppitz and M. Unsal, AdS/CFT and large-N volume independence, Phys. Rev. D 82 [19] D. Young and K. Zarembo , Holographic dual of the Eguchi-Kawai mechanism , JHEP 06 [20] J. Greensite , An introduction to the con nement problem , Lect. Notes Phys . 821 ( 2011 ) 1 [21] A. Armoni , M. Shifman and M. Unsal, Planar limit of orientifold eld theories and emergent center symmetry , Phys. Rev. D 77 ( 2008 ) 045012 [arXiv:0712.0672] [INSPIRE]. [22] M. Shifman , Some theoretical developments in SUSY , in SUSY 2007 Proceedings , 15th [23] J.M. Maldacena , Wilson loops in large-N eld theories , Phys. Rev. Lett . 80 ( 1998 ) 4859 [24] E. Shaghoulian , Black hole microstates in AdS , Phys. Rev . D 94 ( 2016 ) 104044 [25] A. Belin , J. de Boer , J. Krutho , B. Michel , E. Shaghoulian and M. Shyani , Universality of [26] E. Shaghoulian , Modular forms and a generalized Cardy formula in higher dimensions , Phys. [27] J.L. Cardy , Operator content of two-dimensional conformally invariant theories, Nucl . Phys. [28] A. Strominger, Black hole entropy from near horizon microstates, JHEP 02 (1998) 009 [29] T. Hartman, C.A. Keller and B. Stoica, Universal spectrum of 2d conformal eld theory in [30] E. Shaghoulian, A Cardy formula for holographic hyperscaling-violating theories, JHEP 11 [31] I. Heemskerk, J. Penedones, J. 
Polchinski and J. Sully, Holography from conformal eld [32] G. Basar, A. Cherman, D. Dorigoni and M. U nsal, Volume independence in the large-N limit and an emergent fermionic symmetry, Phys. Rev. Lett. 111 (2013) 121601 [33] G. Basar, A. Cherman and D.A. McGady, Bose-Fermi degeneracies in large-N adjoint QCD, [35] R. Dijkgraaf, J.M. Maldacena, G.W. Moore and E.P. Verlinde, A black hole Farey tail, [36] E. Keski-Vakkuri, Bulk and boundary dynamics in BTZ black holes, Phys. Rev. D 59 (1999) [37] S. Ryu and T. Takayanagi, Aspects of holographic entanglement entropy, JHEP 08 (2006) [38] V. Balasubramanian, B.D. Chowdhury, B. Czech and J. de Boer, Entwinement and the [40] E.S. Fradkin and M.A. Vasiliev, Cubic interaction in extended theories of massless higher [41] E.S. Fradkin and M.A. Vasiliev, On the gravitational interaction of massless higher spin [42] M.A. Vasiliev, Higher spin gauge theories: star product and AdS space, hep-th/9910096 [43] I.R. Klebanov and A.M. Polyakov, AdS dual of the critical O(N ) vector model, Phys. Lett. B [44] O. Aharony, O. Bergman, D.L. Ja eris and J. Maldacena, N = 6 superconformal [45] O. Aharony, O. Bergman and D.L. Ja eris, Fractional M 2-branes, JHEP 11 (2008) 043 [46] C.-M. Chang, S. Minwalla, T. Sharma and X. Yin, ABJ triality: from higher spin elds to [47] S. Banerjee and D. Radicevic, Chern-Simons theory coupled to bifundamental scalars, JHEP [48] S. Banerjee, S. Hellerman, J. Maltz and S.H. Shenker, Light states in Chern-Simons theory [49] D. Anninos, T. Hartman and A. Strominger, Higher spin realization of the dS/CFT [50] C.-M. Chang, A. Pathak and A. Strominger, Non-minimal higher-spin DS4/CF T3, [51] D. Anninos, R. Mahajan, D. Radicevic and E. Shaghoulian, Chern-Simons-ghost theories and [52] D. Anninos, F. Denef, G. Konstantinidis and E. Shaghoulian, Higher spin de Sitter [53] M. Shifman and M. Unsal, QCD-like theories on R3 large r(S1) with double-trace deformations, Phys. Rev. D 78 (2008) 065004 [54] R.C. Myers, Stress tensors and Casimir energies in the AdS/CFT correspondence, Phys. [55] M. Unsal and L.G. Ya e, Large-N volume independence in conformal and con ning gauge [58] D. Anninos , T. Anous , F. Denef and L. Peeters , Holographic vitri cation , JHEP 04 ( 2015 ) [59] R.C. Brower , G.T. Fleming and H. Neuberger , Lattice radial quantization: 3D Ising , Phys. [60] T. Ishii , G. Ishiki , S. Shimasaki and A. Tsuchiya , N = 4 super Yang-Mills from the plane wave matrix model , Phys. Rev. D 78 ( 2008 ) 106001 [arXiv:0807.2352] [INSPIRE]. [61] H. Kawai , S. Shimasaki and A. Tsuchiya , Large-N reduction on group manifolds, Int . J. [63] R. Clarkson and R.B. Mann , Eguchi-Hanson solitons in odd dimensions, Class. Quant. Grav. [64] R. Clarkson and R.B. Mann , Soliton solutions to the Einstein equations in ve dimensions , [65] Y. Hikida , Phase transitions of large-N orbifold gauge theories , JHEP 12 ( 2006 ) 042 [68] H. Lin and J.M. Maldacena , Fivebranes from gauge theory , Phys. Rev. D 74 (2006) 084014 [69] G. Ishiki , S. Shimasaki , Y. Takayama and A. Tsuchiya , Embedding of theories with SU(2j4) symmetry into the plane wave matrix model , JHEP 11 ( 2006 ) 089 [hep-th /0610038] [70] E. Witten , Anti-de Sitter space, thermal phase transition and con nement in gauge theories, Adv . Theor. Math. Phys. 2 ( 1998 ) 505 [hep-th /9803131] [INSPIRE]. [71] O. Aharony , S. Minwalla and T. Wiseman , Plasma-balls in large-N gauge theories and localized black holes , Class. Quant. Grav . 23 ( 2006 ) 2171 [hep-th /0507219] [INSPIRE]. [72] N. Afkhami-Jeddi , T. 
Hartman , S. Kundu and A. Tajdini , Einstein gravity 3-point functions [73] A.A. Belavin , A.M. Polyakov and A.B. Zamolodchikov , In nite conformal symmetry in [76] E. Hijano , P. Kraus and R. Snively , Worldline approach to semi-classical conformal blocks , [77] E. Hijano , P. Kraus , E. Perlmutter and R. Snively , Witten diagrams revisited: the AdS geometry of conformal blocks , JHEP 01 ( 2016 ) 146 [arXiv:1508.00501] [INSPIRE]. [78] E. Hijano , P. Kraus , E. Perlmutter and R. Snively , Semiclassical Virasoro blocks from AdS3 gravity , JHEP 12 ( 2015 ) 077 [arXiv:1508.04987] [INSPIRE]. [79] K.B. Alkalaev and V.A. Belavin , Classical conformal blocks via AdS/CFT correspondence, [80] C.T. Asplund , A. Bernamonti , F. Galli and T. Hartman , Holographic entanglement entropy from 2d CFT: heavy states and local quenches , JHEP 02 ( 2015 ) 171 [arXiv:1410.1392] [81] P. Caputa , J. Simon , A. Stikonas and T. Takayanagi , Quantum entanglement of localized excited states at nite temperature , JHEP 01 ( 2015 ) 102 [arXiv:1410.2287] [INSPIRE]. [82] T. Anous , T. Hartman , A. Rovai and J. Sonner , Black hole collapse in the 1=c expansion , [83] J. Maldacena , D. Simmons-Du n and A. Zhiboedov , Looking for a bulk point , JHEP 01 [84] A. Cherman and D. Dorigoni , Large-N and bosonization in three dimensions , JHEP 10 [85] D.J. Gross and V. Rosenhaus , A generalization of Sachdev-Ye-Kitaev , JHEP 02 ( 2017 ) 093 [86] P. Kovtun , M. Unsal and L.G. Ya e, Nonperturbative equivalences among large-Nc gauge theories with adjoint and bifundamental matter elds , JHEP 12 ( 2003 ) 034 [87] P. Kovtun , M. Unsal and L.G. Ya e, Necessary and su cient conditions for non-perturbative [88] Y. Gu , X.-L. Qi and D. Stanford , Local criticality, di usion and chaos in generalized [89] M. Berkooz , P. Narayan , M. Rozali and J. Simon , Higher dimensional generalizations of the [90] D. Gaiotto , A. Kapustin , N. Seiberg and B. Willett , Generalized global symmetries , JHEP [91] V. Balasubramanian , P. Kraus and M. Shigemori , Massless black holes and black rings as e ective geometries of the D1-D5 system , Class. Quant. Grav. 22 ( 2005 ) 4803 [92] O. Aharony , S.S. Gubser , J.M. Maldacena , H. Ooguri and Y. Oz , Large-N [93] D. Birmingham , I. Sachs and S.N. Solodukhin , Relaxation in conformal eld theory, Hawking-Page transition and quasinormal normal modes , Phys. Rev. D 67 (2003) 104026 This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2FJHEP03%282017%29011.pdf Edgar Shaghoulian. Emergent gravity from Eguchi-Kawai reduction, Journal of High Energy Physics, 2017, 11, DOI: 10.1007/JHEP03(2017)011
CommonCrawl
What are the graph automorphisms of the Hasse diagram of a (finite) poset? Suppose we have a finite set $A$ and a partial order on its subsets--i.e. a poset. Draw the directed Hasse diagram: that is, for elements $a$ and $b$ in the powerset of $A$ we draw $a \to b$ if $a \leq b$. My conjecture is that the group of graph automorphisms (which preserve the direction of edges) of any Hasse diagram is a product of symmetric groups. That is, for any poset $P$ with Hasse diagram $H$ we have $Aut(H) = S_{n_1} \times S_{n_2} \times \cdots \times S_{n_k}$. Example: take the poset given by set inclusion on the powerset $P(A)$ for $A$ finite. This is an $|A|$-dimensional hypercube, and the Hasse diagram is preserved by permuting the elements of $A$. Is this true in general? graph-theory finite-groups order-theory automorphism-group – abnry

Are the graph automorphisms the same as the order isomorphisms of the poset? – William Elliot Mar 26 '18 at 21:46

I am a little out of my depth here, but I believe so. – abnry Mar 26 '18 at 21:48

If $(a,b)$ is permuted to $(b,a)$, how is the directed diagram not altered? – William Elliot Mar 26 '18 at 21:49

What does $(a,b)$ represent here? Isn't an order isomorphism $f$ such that if $a \leq b$ then $f(a) \leq f(b)$? – abnry Mar 26 '18 at 21:52

What are $n_{1}$, $n_{2}$, $\ldots$, $n_{k}$? Are these the sizes of the levels of the Hasse diagram? Your automorphism group will be a subgroup of the direct product of symmetric groups you have written, but will certainly not be isomorphic to it (unless you have all possible edges between levels). – Morgan Rodgers Mar 26 '18 at 22:11

No, this is not true if you allow arbitrary partial orderings. If you take the Hasse diagram on the collection of subspaces of $\mathbb{F}_{q}^{n}$, the automorphism group will be $\mathrm{P\Gamma L}(n,q)$, which cannot be represented as a direct product of symmetric groups. This can be represented in the terms you give, since all subspaces are collections of vectors and the collection of vectors is a finite set; any subset that does not correspond to a subspace can be declared incomparable with the rest. Also note that, in general, the automorphism group of a Hasse diagram is not necessarily determined by its permutation action on the elements at a fixed level: it is a permutation group on all of the elements of the diagram. The example you give, $P(A)$ for some finite set $A$, is especially nicely behaved. It is an example of a lattice, where every pair of elements has a meet and a join. It has a natural base set (the collection of one-element subsets) which can be used to describe all of the other elements, so an automorphism of the diagram can be described in terms of its action on this base set; and for this particular example, any permutation of these elements can be extended to an automorphism of the diagram. But this is a special property of the particular structure you are looking at. – Morgan Rodgers

Thank you for your answer, you've given me some things to chew on. Can you fill in detail for your example of the subspaces of $F_q^n$? For $F_2^2$ we actually have $S_3$ as the set of graph automorphisms of the poset of subspaces. But that is probably an artifact of small $n$ and $q$. I do not understand why the projective linear group gives the graph automorphisms either.
– abnry Mar 27 '18 at 17:58

A graph automorphism of the Hasse diagram is not determined by its permutation action on the elements at each level, but elements of each level must be mapped to elements of the same level: if $a \leq b \leq c$ with no intermediate elements between consecutive comparisons, and $a \leq x$ for every comparable $x$, then $f(a) \leq f(b) \leq f(c)$ must also hold, so the level of $f(c)$ is at least 3; applying the same argument to $f^{-1}$ shows it is exactly 3. – abnry Mar 27 '18 at 18:01

My conjecture came from not being able to find any examples otherwise, not from thinking about the action on the levels. But perhaps I am not sure about the differences between lattices and Hasse diagrams, and my examples were only lattices. – abnry Mar 27 '18 at 18:04
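To make the discussion concrete, here is a small self-contained Python sketch (not from the original thread; the base set `A` and the brute-force routine are our own choices) that enumerates the directed-graph automorphisms of the Hasse diagram of the Boolean lattice on a 3-element set. For the powerset ordered by inclusion the count comes out to $3! = 6$, matching the hypercube example in the question, and the same routine can be pointed at other small posets (such as a subspace lattice) to probe the conjecture.

```python
from itertools import permutations

# Boolean lattice on A = {0, 1, 2}: elements are all subsets, as frozensets.
A = [0, 1, 2]
elements = [frozenset(A[i] for i in range(len(A)) if (mask >> i) & 1)
            for mask in range(2 ** len(A))]

# Directed Hasse-diagram edges: covering relations a -> b (a is a subset of b, |b| = |a| + 1).
edges = {(a, b) for a in elements for b in elements
         if a < b and len(b) == len(a) + 1}

# Brute-force all bijections of the 8 elements and keep the edge-preserving ones
# (8! = 40320 candidates, so this takes a moment but finishes quickly).
count = 0
for image in permutations(elements):
    f = dict(zip(elements, image))
    if all(((f[a], f[b]) in edges) == ((a, b) in edges)
           for a in elements for b in elements):
        count += 1

print(count)  # 6 = 3!: every automorphism comes from permuting the base set A
```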
CommonCrawl
Concepts and reason The concept required to solve this problem is electric force and Coulomb's law. First, obtain the expression for the net electric force on \(q_{2}\) by using the expression for the electric force and the known directions of the forces due to charge \(q_{1}\) and the \(-2.0\ \mathrm{nC}\) charge; then find the charge \(q_{1}\). According to Coulomb's law, the electric force between two charged particles is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. In equation form, Coulomb's law can be stated as \(F=\frac{k q_{1} q_{2}}{r^{2}}\) Here, \(q_{1}\) and \(q_{2}\) are the charges separated by a distance \(r\), and \(k\) is Coulomb's constant. Use Coulomb's law to solve for the magnitude of the force of one charge on another. Also use the idea that charges of the same sign repel while charges of opposite sign attract. The distance between charges \(q_{1}\) and \(q_{3}\) is $$ \begin{array}{c} r_{1}=10 \mathrm{~cm} \\ =10 \times 10^{-2} \mathrm{~m} \end{array} $$ and, likewise, the distance between charges \(q_{3}\) and \(q_{2}\) is \(r_{2}=10 \times 10^{-2} \mathrm{~m}\). So the distance between charges \(q_{1}\) and \(q_{2}\) is \(r=r_{1}+r_{2}\). Substitute \(10 \times 10^{-2} \mathrm{~m}\) for \(r_{1}\) and \(r_{2}\) in the equation \(r=r_{1}+r_{2}\) as follows: $$ \begin{array}{c} r=10 \times 10^{-2} \mathrm{~m}+10 \times 10^{-2} \mathrm{~m} \\ =20 \times 10^{-2} \mathrm{~m} \\ =0.2 \mathrm{~m} \end{array} $$ The net force on \(q_{2}\) due to \(q_{1}\) and \(q_{3}\) is given by \(F_{\text {net on } q_2}=F_{q_1 \text { on } q_2}+F_{q_3 \text { on } q_2}\) Here, \(F_{q_3 \text { on } q_2}\) is the force due to \(q_{3}\) on \(q_{2}\) and \(F_{q_1 \text { on } q_2}\) is the force due to \(q_{1}\) on \(q_{2}\). The force on \(q_{2}\) due to \(q_{1}\) is \(F_{q_1 \text { on } q_2}=\frac{k q_{1} q_{2}}{r^{2}}\) Substitute \(0.2 \mathrm{~m}\) for \(r\) in the equation \(F_{q_1 \text { on } q_2}=\frac{k q_{1} q_{2}}{r^{2}}\) as follows: \(F_{q_1 \text { on } q_2}=\frac{k q_{1} q_{2}}{(0.2 \mathrm{~m})^{2}}\) The force on \(q_{2}\) due to \(q_{3}\) is \(F_{q_3 \text { on } q_2}=\frac{k q_{3} q_{2}}{r_{2}^{2}}\) Substitute \(0.1 \mathrm{~m}\) for \(r_{2}\) and \(-2 \times 10^{-9}\ \mathrm{C}\) for \(q_{3}\) in the equation \(F_{q_3 \text { on } q_2}=\frac{k q_{3} q_{2}}{r_{2}^{2}}\) as follows: \(F_{q_3 \text { on } q_2}=\frac{k q_{2}\left(-2 \times 10^{-9}\ \mathrm{C}\right)}{(0.10 \mathrm{~m})^{2}}\) Substitute \(\frac{k q_{2}\left(-2 \times 10^{-9}\ \mathrm{C}\right)}{(0.10 \mathrm{~m})^{2}}\) for \(F_{q_3 \text { on } q_2}\) and \(\frac{k q_{1} q_{2}}{(0.2 \mathrm{~m})^{2}}\) for \(F_{q_1 \text { on } q_2}\) in the equation \(F_{\text {net on } q_2}=F_{q_1 \text { on } q_2}+F_{q_3 \text { on } q_2}\) as follows: $$ \begin{array}{l} F_{\text {net on } q_2}=\frac{k q_{1} q_{2}}{(0.2 \mathrm{~m})^{2}}+\frac{k\left(-2 \times 10^{-9} \mathrm{C}\right) q_{2}}{(0.10 \mathrm{~m})^{2}} \\ \quad=\frac{k q_{1} q_{2}}{(0.2 \mathrm{~m})^{2}}-\frac{k\left(2 \times 10^{-9} \mathrm{C}\right) q_{2}}{(0.10 \mathrm{~m})^{2}} \end{array} $$ Here, assume that the charge \(q_{2}\) is positive and consider the forces acting on it. The other positive charge, \(q_{1}\), exerts a repulsive force on \(q_{2}\) that pushes \(q_{2}\) away from \(q_{1}\), that is, to the right. The negative charge \(q_{3}\) exerts an attractive force on \(q_{2}\) that pulls \(q_{2}\) toward \(q_{3}\), that is, to the left. For the net force on \(q_{2}\) to be zero, these two forces must have the same magnitude.
Equate the net force on \(q_{2}\) to zero, \(F_{\text {net on } q_2}=0\): \(k q_{2}\left[\frac{q_{1}}{(0.2 \mathrm{~m})^{2}}-\frac{2 \times 10^{-9} \mathrm{C}}{(0.10 \mathrm{~m})^{2}}\right]=0\) Rearrange the above equation as follows: $$ \begin{array}{c} {\left[\frac{q_{1}}{(0.2 \mathrm{~m})^{2}}-\frac{2 \times 10^{-9} \mathrm{C}}{(0.10 \mathrm{~m})^{2}}\right]=0} \\ \frac{q_{1}}{(0.2 \mathrm{~m})^{2}}=\frac{2 \times 10^{-9} \mathrm{C}}{(0.10 \mathrm{~m})^{2}} \\ q_{1}=\frac{(0.2 \mathrm{~m})^{2} \times\left(2 \times 10^{-9} \mathrm{C}\right)}{(0.10 \mathrm{~m})^{2}} \\ =8 \times 10^{-9} \mathrm{C} \end{array} $$ Convert the units of charge from coulombs to nanocoulombs as follows: $$ \begin{array}{c} q_{1}=8 \times 10^{-9} \mathrm{C}\left(\frac{1 \mathrm{nC}}{10^{-9} \mathrm{C}}\right) \\ =8 \mathrm{nC} \end{array} $$ The charge \(q_{1}\) is \(8 \mathrm{nC}\). The charged particles are treated as point charges, and the charge \(q_{2}\) is in static equilibrium, so the net force on \(q_{2}\) is zero.
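As a quick numerical cross-check of the algebra above, the following short Python snippet (our illustration, not part of the original solution; the variable names are made up) solves the zero-net-force condition \(k q_{1} q_{2}/r^{2} = k\,|q_{3}|\,q_{2}/r_{2}^{2}\) for \(q_{1}\) with the distances used in the problem.

```python
# Zero net force on q2: k*q1*q2/r**2 must balance k*|q3|*q2/r2**2,
# so q1 = |q3| * (r/r2)**2 (k and q2 cancel from the condition).
q3 = -2e-9   # charge q3 in coulombs (-2.0 nC)
r2 = 0.10    # distance between q3 and q2 in metres
r = 0.20     # distance between q1 and q2 in metres

q1 = abs(q3) * (r / r2) ** 2
print(f"q1 = {q1:.1e} C = {q1 * 1e9:.0f} nC")  # -> q1 = 8.0e-09 C = 8 nC
```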
CommonCrawl
Homotopy invariants methods in the global dynamics of strongly damped wave equation Piotr Kokocki 1, Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Chopina 12/18, 87-100 Toruń Received May 2015 Revised October 2015 Published December 2015 We are interested in the following differential equation $\ddot u(t) = -A u(t) - c A \dot u(t) + \lambda u(t) + F(u(t))$ where $c > 0$ is a damping factor, $A$ is a sectorial operator and $F$ is a continuous map. We consider the situation where the equation is at resonance at infinity, which means that $\lambda$ is an eigenvalue of $A$ and $F$ is a bounded map. We provide geometrical conditions for the nonlinearity $F$ and determine the Conley index of the set $K_\infty$, that is, the union of the bounded orbits of this equation. Keywords: invariant set, resonance, Conley index, dynamical system. Mathematics Subject Classification: Primary: 37B30, 47J35; Secondary: 35B3. Citation: Piotr Kokocki. Homotopy invariants methods in the global dynamics of strongly damped wave equation. Discrete & Continuous Dynamical Systems, 2016, 36 (6) : 3227-3250. doi: 10.3934/dcds.2016.36.3227
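The model equation in the abstract can be explored numerically in a finite-dimensional toy setting. The sketch below is our own illustration, not code from the paper: it replaces the sectorial operator $A$ by a small symmetric positive-definite matrix, picks the bounded nonlinearity $F(u) = \tanh(u)$, sets $\lambda$ equal to an eigenvalue of $A$ to mimic the resonance condition, and integrates $\ddot u = -Au - cA\dot u + \lambda u + F(u)$ with SciPy to inspect whether a trajectory remains bounded. It only illustrates the structure of the equation, not the Conley-index results.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy finite-dimensional analogue of  u'' = -A u - c A u' + lambda*u + F(u).
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # stands in for the sectorial operator (SPD matrix)
c, lam = 0.5, 1.0                    # damping factor; lam = smallest eigenvalue of A (resonance)
F = np.tanh                          # bounded nonlinearity

def rhs(t, y):
    u, v = y[:2], y[2:]              # v = du/dt
    dv = -A @ u - c * (A @ v) + lam * u + F(u)
    return np.concatenate([v, dv])

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, -0.5, 0.0, 0.0], max_step=0.05)
print("max |u| along the trajectory:", float(np.abs(sol.y[:2]).max()))
```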
CommonCrawl
Dr. Greig Cowan STFC Fellow Particle and Nuclear Physics Particle Physics Experiment [email protected] http://www2.ph.ed.ac.uk/~gcowan1

The LHCb experiment at CERN is searching for new physics through precision measurements of the properties of heavy quarks. Quarks are the fundamental building blocks of the protons and neutrons which make up atomic nuclei. The study of heavy quarks has a long and illustrious history; it led to many important discoveries and to the award of the 2008 Nobel prize in physics to the theorists who first wrote down the mathematics of CP violation in the SM. The LHCb experiment is continuing this legacy by studying heavy quarks in unprecedented detail, thanks to the huge event samples that it can record, process and analyse at the CERN LHC. Other experiments at the LHC have yet to find any direct evidence of new physics beyond the SM: they are currently pushing the energy boundaries of their searches into the multi-TeV scale. LHCb can indirectly probe to much higher energies via the presence of non-SM particles in the quantum virtual loops of heavy-quark decay processes. My research aims to perform precise measurements of CP-violating and heavy-quark properties. Doing this requires a deep understanding of the reconstruction and selection of many different processes which occur in the LHC, and the use of clever algorithms and modern computing technology to help us dig out signals from the large background noise. This technology will be used in the next generation of grid/cloud computing, with all the potential for innovation that it brings. We have to understand the subtle effects which our experimental apparatus can have on the measurements, a task which only becomes more difficult as the size of the data sample grows.

Hyper-Kamiokande

The Hyper-Kamiokande (Hyper-K) experiment is the next-generation flagship facility for the study of neutrino oscillations, nucleon decays, and astrophysical neutrinos. Hyper-K is a third-generation underground water Cherenkov detector situated in Kamioka, Japan. It consists of a 1 million tonne water target, which is about 20 times larger than that of the existing Super-Kamiokande (Super-K) detector. It will serve as the far detector for a long-baseline neutrino oscillation experiment planned for the upgraded J-PARC proton synchrotron beam. With a total exposure of 7.50 MW x 10^7 s to the 2.5-degree off-axis neutrino beam, Hyper-K aims to make a measurement of the CP (charge-parity) violating phase of the neutrino mixing matrix, δCP, and to determine the neutrino mass hierarchy through the study of atmospheric neutrinos. It is expected that the CP phase δCP can be determined to better than 19 degrees for all possible values of δCP, and that CP violation can be established with a statistical significance of 3(5)σ for 76(58)% of all possible values of δCP. Hyper-K will also serve as a detector capable of observing proton decays, atmospheric neutrinos, and neutrinos of astronomical origin, enabling measurements that far exceed the current world-best measurements. We are currently performing R&D studies for the design of the proposed TITUS intermediate detector of Hyper-K. In addition we are characterising new hybrid photo-detectors that are candidates for use in TITUS and Hyper-K.

Teaching assistant/lecturer for the Junior Honours "Numerical Methods" course. Teaching assistant for the Junior Honours "Research Methods" course. Teaching assistant for the Junior Honours "Data acquisition and handling" course.
Observation of the decay $\overline{B_s^0} \rightarrow χ_{c2} K^+ K^- $ in the $\varphi$ mass region DOI LHCB Collaboration, L. Carson, P. E. L. Clarke, G. A. Cowan, D. C. Craik, S. Eisenhardt, E. Gabriel, S. Gambetta, K. Gizdov, F. Muheim et al., Journal of High Energy Physics (2018) Measurement of the $\Upsilon$ polarizations in $pp$ collisions at $\sqrt{s}$=7 and 8TeV DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Journal of High Energy Physics, 1712, p. 110 (2017) Measurement of the shape of the $\Lambda_b^0\to\Lambda_c^+ \mu^- \overline{\nu}_{\mu}$ differential decay rate DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Physical Review, D96, 11 , p. 112005 (2017) Bose-Einstein correlations of same-sign charged pions in the forward region in $pp$ collisions at $\sqrt{s}$ = 7 TeV DOI P E L Clarke, G A Cowan, S Eisenhardt, F Muheim, M Needham, S Playfer and LHCb Collaboration, Journal of High Energy Physics, 1712, p. 025 (2017) Measurement of the $B^{\pm}$ production cross-section in pp collisions at $\sqrt{s} =$ 7 and 13 TeV DOI First Observation of the Rare Purely Baryonic Decay $B^0\to p\bar p$ DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Physical Review Letters, 119, 23 , p. 232001 (2017) Updated search for long-lived particles decaying to jet pairs DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, European Physical Journal C: Particles and Fields, C77, 12 , p. 812 (2017) χc1 and χc2 Resonance Parameters with the Decays χc1,c2→J/ψμ+μ− DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Physical Review Letters, 119, 22 (2017) Measurement of $CP$ violation in $B^0\rightarrow J/\psi K^0_\mathrm{S}$ and $B^0\rightarrow\psi(2S) K^0_\mathrm{S}$ decays DOI Measurement of $CP$ observables in $B^{\pm} \rightarrow D K^{*\pm}$ decays using two- and four-body $D$ final states DOI Show all 256 research outputs
CommonCrawl
AIMS Medical Science, 2018, 5(3): 204-223. doi: 10.3934/medsci.2018.3.204 Combinatorial optimisation in radiotherapy treatment planning Emma Altobelli, Maurizio Amichetti, Alessio Langiu, Francesca Marzi, Filippo Mignosi, Pietro Pisciotta, Giuseppe Placidi, Fabrizio Rossi, Giorgio Russo, Marco Schwarz, Stefano Smriglio, Sabina Vennarini 1 MeSVA Department, University of L'Aquila, L'Aquila, Italy; 2 Protontherapy Department, Trento Hospital, Trento, Italy; 3 ICAR-CNR, National Research Council of Italy, Palermo, Italy; 4 Department of Informatics, King's College London, London, UK; 5 DISIM Department, University of L'Aquila, L'Aquila, Italy; 6 IBFM-CNR, National Research Council of Italy, Cefalù, Italy; 7 Dept. of Physics and Astronomy, University of Catania, Catania, Italy; 8 LNS, National Institute for Nuclear Physics, Catania, Italy; 9 TIFPA-INFN, Trento, Italy Special Issues: The Future of Informatics in Biomedicine Abstract: The goal of radiotherapy is to cover a target area with a desired radiation dose while keeping the exposure of non-target areas as low as possible in order to reduce radiation side effects. In the case of Intensity Modulated Proton Therapy (IMPT), the dose distribution is typically designed via a treatment planning optimisation process based on classical optimisation algorithms applied to some objective functions. We investigate the planning optimisation problem from the point of view of the Theory of Complexity in general and, in particular, of Combinatorial Optimisation Theory. We first give a formal definition of a simplified version of the problem that is in the complexity class NPO. We prove that the above version is computationally hard, i.e. it belongs to the class NPO$\setminus$PTAS if $\mathbb{NP}\neq \mathbb{P}$. We show how Combinatorial Optimisation Theory can give valuable tools, both conceptual and practical, in treatment plan definition, opening the way for new deterministic algorithms with bounded time complexity, which will have to support the technological evolution towards adaptive plans exploiting near-real-time solutions.
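To make the idea of "a treatment planning optimisation process based on classical optimisation algorithms applied to some objective functions" concrete, here is a hypothetical toy example (ours, not the paper's): non-negative beamlet weights are fitted by bounded least squares so that a made-up dose-influence matrix delivers a prescribed dose to target voxels while penalising dose to organ-at-risk voxels. The matrix, doses and weights are invented for illustration; the discrete, provably hard aspects analysed in the paper are exactly what this convex relaxation leaves out.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

n_vox, n_beamlets = 40, 12
D = rng.random((n_vox, n_beamlets))   # dose-influence matrix (synthetic)
prescribed = np.zeros(n_vox)
prescribed[:20] = 60.0                # target voxels: prescribed dose (arbitrary units)
prescribed[20:] = 0.0                 # organ-at-risk voxels: ideally no dose

w = np.ones(n_vox)
w[20:] = 0.3                          # weight OAR sparing less than target coverage

# Weighted non-negative least squares over beamlet intensities.
res = lsq_linear(D * w[:, None], prescribed * w, bounds=(0.0, np.inf))
dose = D @ res.x
print("mean target dose:", round(dose[:20].mean(), 1))
print("mean OAR dose   :", round(dose[20:].mean(), 1))
```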
CommonCrawl
How is it that the equilibrium constant does not depend on the mechanism? For a reaction of the form $$\ce{aA + bB <=> cC + dD}$$ the equilibrium constant is $$K_c=\frac{[\ce{C}]^c[\ce{D}]^d}{[\ce{A}]^a[\ce{B}]^b}$$ regardless of the mechanism of the reaction. Why is this the case? I've seen derivations that use a reaction in one elementary step to demonstrate this, but this obviously doesn't work in general. The sample reaction used was $\ce{N2O4 <=> 2NO2}$. reaction-mechanism equilibrium kinetics andselisk♦ You should indeed be able to write every equilibrium reaction in this simple form. I guess that the reason for this might lie in the principle of microscopic reversibility (shameless self-plug: see this answer of mine). But for non-elementary reactions, i.e. those whose reaction mechanism consists of more than one step, you get a more complicated equilibrium constant that is the product of the equilibrium constants of all the intermediate reaction steps (if the reaction mechanism has no branches). An example: Consider the reaction \begin{equation} \ce{H2(g) + I2(g) <=>> 2HI(g)} \ . \end{equation} Its mechanism consists of three reaction steps: $\ce{ I2 <<=> 2I } \qquad \qquad \qquad \qquad K_{1} = \frac{[\ce{I}]^{2}}{[\ce{I2}]}$ $\ce{I + H2 <<=> H2I} \ \ \quad \qquad \qquad K_{2} = \frac{[\ce{H2I}]}{[\ce{I}] [\ce{H2}]}$ $\ce{H2I + I <=>> 2HI} \quad \qquad \qquad K_{3} = \frac{[\ce{HI}]^{2}}{[\ce{I}] [\ce{H2I}]}$ Now, take a look at the equilibrium constant of the overall reaction \begin{equation} K_{\mathrm{tot}} = \frac{[\ce{HI}]^{2}}{[\ce{I2}] [\ce{H2}]} \ . \end{equation} Then make the following substitutions: From the third reaction step's equilibrium equation you get $[\ce{HI}]^{2} = K_{3} [\ce{I}] [\ce{H2I}]$. Substituting this into the equation for $K_{\mathrm{tot}}$ \begin{equation} K_{\mathrm{tot}} = \frac{K_{3} [\ce{I}] [\ce{H2I}]}{[\ce{I2}] [\ce{H2}]} \ . \end{equation} From reaction step 2 you get $[\ce{H2I}] = K_{2} [\ce{I}] [\ce{H2}]$ which then gives \begin{equation} K_{\mathrm{tot}} = \frac{K_{3} [\ce{I}] K_{2} [\ce{I}] [\ce{H2}] }{[\ce{I2}] [\ce{H2}]} = \frac{K_{3} K_{2} [\ce{I}]^{2}}{[\ce{I2}]} \ . \end{equation} And finally, from reaction step 1 you get $[\ce{I}]^{2} = K_{1} [\ce{I2}]$. Substituting this into the equation for $K_{\mathrm{tot}}$ you see that the overall equilibrium constant reduces to the product of the equilibrium constants of the reaction steps \begin{equation} K_{\mathrm{tot}} = \frac{K_{3} [\ce{I}] K_{2} [\ce{I}] [\ce{H2}] }{[\ce{I2}] [\ce{H2}]} = \frac{K_{3} K_{2} K_{1} [\ce{I2}]}{[\ce{I2}]} = K_{1} K_{2} K_{3} \ . \end{equation} If you have a reaction that doesn't proceed along a chain of reaction steps but branches out the situation will become more complicated but can be treated in an analogous way as above. PhilippPhilipp Philipp's answer is a nice application of Hess's Law to equilibrium. Let's look at why this approach works. The position of equilibrium (as given by the equilibrium constant $K$) is related to the standard Gibbs free energy change that occurs during the reaction: $$\Delta_rG^\circ=-RT\ln{K}$$ The Gibbs free energy $G$ is a state function - a property that depends only on the current state of the system, not the pathway by which the system reached that state. The value of $\Delta_rG^\circ$ depends only on the values of $G^\circ$ (or $\Delta_f G^\circ$ if you prefer) of the reactants and the products, which depend only on the identity and amounts of the substances. 
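A short symbolic check of the cancellation carried out above (a sketch added here, not part of the original answer): expressing the intermediate concentrations from the equilibrium conditions of steps 2 and 3 and then using step 1 to eliminate $[\ce{I}]^2$ reproduces $K_{\mathrm{tot}} = K_1 K_2 K_3$.

```python
import sympy as sp

cI, cH2, cI2 = sp.symbols('cI cH2 cI2', positive=True)   # [I], [H2], [I2]
K1, K2, K3 = sp.symbols('K1 K2 K3', positive=True)

cH2I = K2 * cI * cH2            # step 2:  K2 = [H2I] / ([I][H2])
cHI_squared = K3 * cI * cH2I    # step 3:  K3 = [HI]^2 / ([I][H2I])

K_tot = cHI_squared / (cI2 * cH2)          # overall: [HI]^2 / ([I2][H2])
K_tot = K_tot.subs(cI**2, K1 * cI2)        # step 1:  [I]^2 = K1 [I2]
print(sp.simplify(K_tot))                  # -> K1*K2*K3
```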
$\Delta_fG^\circ$ is independent of the way in which any of the substances were prepared. Likewise, $K$ is a state function. $K$ describes the compositions of the mixture which qualify as meeting the requirements for equilibrium. The value of $K$ is independent of the amounts of any specific substance or the pathway (mechanism) by which equilibrium is reached (see Philipp's answer). The individual equilibrium concentrations, however, in the system are not state functions. $[\ce{A}]_{eq}$ depends on the initial concentrations of all species in the mixture. The rate of reaction is dependent on the mechanism. Different mechanisms may have different orders, different activation energies, etc. Reaction rate (and consequently the rate constant $k$) are not necessarily state functions. Ben NorrisBen Norris This answer is the same as the one by Philipp, but instead of using a specific example, it makes the argument for any multi-step reaction where each step is an elementary reaction. The overall reaction is a sum of elementary steps The overall reaction happens because elementary steps happen consecutively. Along these multiple steps, some species are reactants or products (i.e. appear in the chemical equation of the overall reaction) and others are intermediates (i.e. made by one elementary reaction and consumed by another). The overall chemical equation is equal to the sum of the elementary steps, with intermediates appearing both on the reactant and the product side (those can be removed to arrive at the net chemical equation of the overall reaction). At equilibrium, all elementary steps are at equilibrium The only way the reactant and product concentrations can be constant (i.e. at equilibrium) is when all the intermediates are at equilibrium as well. You arrive at the equilibrium constant of the overall reaction by multiplying the equilibrium constants of the elementary reactions. The rate laws of the elementary steps have the stoichiometric coefficients as exponents This is the definition of an elementary step. The rate law for the forward reaction is the product of the reactant concentrations raised to the power of their coefficients. The rate law for the reverse reaction is the product of the product concentrations raised to the power of their coefficients. (Many of these reactants and products are intermediates with respect to the overall reaction.) At equilibrium, the forward and reverse rates are equal, so product concentrations divided by reactant concentrations (raised to respective coefficients) for the elementary steps will equal the quotient of reverse and forward rate constants, and we will call this equilibrium constant. The equilibrium constant expression of the overall reaction is the product of the equilibrium constant expression of the elementary steps Because the equilibrium constant of the overall reaction is equal to the product of the equilibrium constants of all the elementary steps, we can derive the equilibrium constant expression of the overall reaction by writing the product of the individual equilibrium constant expressions. Just like intermediates cancel out of the net chemical equation because they appear both as reactant and as product, concentrations of intermediates will cancel out of this derived equilibrium expression (they appear both in the denominator - as reactant of one step - and in the numerator - as product of a different step). 
What remains after canceling are concentrations of reactants and products of the overall reaction, raised to their respective coefficients in the net reaction. So whatever the coefficients are in the chemical equation of the net overall reaction will appear as exponents in the equilibrium constant expression for that same net overall reaction. For an example, look at the answer by Philipp. answered Mar 30 at 21:41 Karsten Theis
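To underline the thread's central point numerically, here is a small sketch (our own, with made-up rate constants) for the generic two-step mechanism $\ce{A <=> B <=> C}$: the mechanism's rate constants control how fast equilibrium is approached, but the final composition depends only on the step equilibrium constants, so two very different sets of rate constants with the same $K_1 K_2$ end at the same $[\ce{C}]/[\ce{A}]$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def final_quotient(k1f, k1r, k2f, k2r, t_end=500.0):
    """Integrate A <=> B <=> C (mass-action kinetics) and return [C]/[A] at t_end."""
    def rhs(t, y):
        a, b, c = y
        r1 = k1f * a - k1r * b
        r2 = k2f * b - k2r * c
        return [-r1, r1 - r2, r2]
    y = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0],
                  method="LSODA", rtol=1e-10, atol=1e-12).y[:, -1]
    return y[2] / y[0]

# Two "mechanisms" (rate-constant sets) with the same K1*K2 = 2*5 = 10:
print(final_quotient(2.0, 1.0, 5.0, 1.0))          # ~10, equilibrium reached quickly
print(final_quotient(200.0, 100.0, 0.05, 0.01))    # ~10 as well, reached more slowly
```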
CommonCrawl
Effect of Mass Transfer in a Horizontal Pipe with Suction and Chemical Reaction on Magnetic Newtonian Flow Nagaraju Gajjela* | Mahesh Garvandha | Anjanna Matta Center for Research and Strategic Studies, Lebanese French University, Kurdistan Region, Erbil 44001, Iraq Department of Mathematics, GITAM Deemed to be University, Hyderabad 502329, India Department of Mathematics, Faculty of Science and Technology, ICFAI Foundation for Higher Education, Dontanapalli, Hyderabad, Telangana 501203, India [email protected] https://doi.org/10.18280/mmep.060407

This paper examines the impact of reactive diffusion (mass) transport under a magnetic field applied perpendicular to the flow direction for a Newtonian fluid passing through a circular pipe. Uniform suction is applied externally across the wall in the transverse direction. A robust analytical methodology, namely the homotopy analysis method (HAM), is employed to obtain solutions of the non-linear coupled equations. The effects of the magnetic parameter $(M)$, suction Reynolds number $(Re)$, Schmidt number $(Sc)$ and first-order chemical reaction parameter $(\gamma)$ on the velocity components and the concentration are displayed graphically and explained numerically. The dimensionless axial concentration $\phi$ decreases for a given rise in $\gamma$; in the radial direction the behaviour is inverted, i.e. $\phi$ increases as $\gamma$ increases.

mass transfer, suction, magnetic field, chemical reaction, Newtonian fluid, HAM

In recent years, engineers have also investigated the modification of Newtonian flows via wall porosity of the channel or pipe. Injection or removal of fluid through a porous wall can be a strong mechanism for flow control. This technique has significant potential in bio-medical engineering (e.g. artificial dialysis, blood circulation) and in other engineering areas such as rocket technology and food processing. Consequently, mathematical modeling of this type of flow through pipes with surface mass flux has stimulated some interest in the research community. Initially, laminar flow in a 2-D channel was examined by Berman [1], Sellars [2] and Yuan [3]. Later Bansal [4] extended this work to steady viscous flow through a permeable circular pipe. He obtained an analytical solution for the velocity under the influence of the suction/injection parameter and the pressure gradient in the z-direction. An analytical expression for Newtonian flow in a pipe under uniform wall suction/injection was further reported by Terrill [5, 6]. Tsangaris and Kondaxakis [7] considered time-dependent Newtonian flow in a porous pipe and obtained an analytical solution for time-varying wall injection/suction along the pipe. Cox and Hill [8] examined Newtonian fluid flow with Navier slip at the wall through carbon nanotubes. They identified the maximum flow rate for the standard Poiseuille flow, which occurs for steady inward-directed flow across the boundary. Ramana Murthy et al. [9] studied micropolar flow generated by a permeable cylinder exhibiting rotary oscillations, and concluded that the tangential drag decreases as the suction parameter increases. In 2-phase flow between two porous plates, Srinivas and Ramana Murthy [10] studied wall suction effects.
They found that Darcy parameter will increase the velocity of the fluid. Magnetohydrodynamics (MHD) is also an active area of modern engineering sciences and involves the interaction of magnetic fields and conduction of electrical fluids. MHD tube flows arise in ion accelerators, MHD flow management in nuclear reactors, liquid metal fabrication processes, bubble levitation, etc. MHD flows that includes suction/injection wall effects have garnered considerable attention. Terrill and Shrestha [11] found that, with an increase in magnetic number, wall friction increases. Attia [12] investigated the unstable laminar flow of 2-phase non-Newtonian liquids through a circular tube between a pressure gradient at z-direction. He concluded that the velocity and temperature components for both phases decrease as Magnetic parameter increases. Attia and Ahmed [13] computed solutions for unsteady magnetic Bingham plastic flow in a circular tube. They found an increase in the viscosity of the particles and skin friction. El-Shahed [14] examined the impact of the transverse magnetic field in a porous material on the transient viscoelastic fluid flow. In terms of Fox's H-function, he found the solution velocity. Ramanamurthy and Bahali [15] investigated the effects of wall suction/injection on magneto-micropolar transport in a porous circular tube. They recognized that shear stress (skin friction) at wall is boosted with a greater magnetic number. Murthy et al. [16] considered the impact on the micropolar flow in a rectangular duct of the magnetic parameter, wall suction / blowing. They found that larger magnetic fields decelarate the magnitudes of the flow rate. Implication of mass transfer study is very important in the fields of engineering, Industries and sciences. Applications of this study are found in many branches of engineering and industry such as the safety of nuclear reactor, reaction engineering, absorption resistance, chemical and metallurgical industry etc. In day to day life also, we observe in the dissolution of sugar added to a cup of coffee and diffusion of smoke through tall chimneys into the environment. Hayat et al. [17] found the impact of chemical reaction on Maxwell fluid in a porous channel. They notice that the velocity in viscoelastic fluid has an inverse conduct by raising the amount of Reynolds number. Bridges and Rajagopal [18] examined the pulsatile flow of a chemical reactive fluid. They notice that the fluid concentration near the centerline rises with distinct cycles of time. Other interesting and recent investigations into pipe dynamics have been communicated in previous studies [19-28]. Our study aims to examine the impact of suction, chemical reaction and magnetic field on two-dimensional incompressible Newtonian fluid flow in a circular pipe. By using the powerful homotopy analysis method (HAM) [29] we find the solutions for the non-dimensional equations. We have studied the effect of Schmidt number, Suction Reynold number, magnetic number, and chemical reaction on velocity components and concentration and presented the results graphically. 2. Mathematical Hydromagnetic Mass Transfer Model Figure 1 shows an endless circular radius tube in which Newtonian fluid flows. The system of mass transfer is studied throughout uniform mass flux on the axis of the pipe and constant mass at the surface of the pipe, ignoring the effects of thickness of the pipe. As the tube is of semi infinite length, the flow is considered to be fully developed. 
This flow is exposed to an externally applied uniform suction across the wall and a constant magnetic field $B_{0}$ applied transverse to the flow direction. The magnetic Reynolds number is assumed to be small enough to neglect the induced magnetic field. The governing equations for viscous flow [30] are:

Figure 1. Schematic diagram

$\frac{\partial U}{\partial R}+\frac{U}{R}+\frac{\partial W}{\partial Z}=0$ (1)

$\rho\left(U \frac{\partial U}{\partial R}+W \frac{\partial U}{\partial Z}\right)=-\frac{\partial P}{\partial R}+\mu \frac{\partial}{\partial Z}\left(\frac{\partial U}{\partial Z}-\frac{\partial W}{\partial R}\right)-\sigma B_{0}^{2} U$ (2)

$\rho\left(U \frac{\partial W}{\partial R}+W \frac{\partial W}{\partial Z}\right)=-\frac{\partial P}{\partial Z}-\frac{\mu}{R} \frac{\partial}{\partial R}\left(R\left(\frac{\partial U}{\partial Z}-\frac{\partial W}{\partial R}\right)\right)-\sigma B_{0}^{2} W$ (3)

$U \frac{\partial C}{\partial R}+W \frac{\partial C}{\partial Z}=D\left(\frac{\partial^{2} C}{\partial R^{2}}+\frac{1}{R} \frac{\partial C}{\partial R}+\frac{\partial^{2} C}{\partial Z^{2}}\right)-k_{1} C$ (4)

The boundary conditions on the axis are obtained by taking the flow to be symmetrical, so that

At $R=0$: $\frac{\partial W}{\partial R}=U=\frac{\partial C}{\partial R}=0$

At $R=a$: $W=0$, $U=v_{0}$, and $C=C_{w}$

In the following equations, upper case letters denote physical (dimensional) quantities and lower case letters the corresponding non-dimensional quantities:

$U=u v_{0}, \; W=w v_{0}, \; R=r a, \; P=\rho p v_{0}^{2}, \; Z=z a, \; \phi=\frac{C-C_{0}}{C_{w}-C_{0}}$ (5)

Substituting Eq. (5) into Eqns. (1)-(4), the following dimensionless system of non-linear coupled equations emerges:

$\frac{\partial u}{\partial r}+\frac{u}{r}+\frac{\partial w}{\partial z}=0$ (6)

$\operatorname{Re}\left(u \frac{\partial u}{\partial r}+w \frac{\partial u}{\partial z}\right)=-\operatorname{Re} \frac{\partial p}{\partial r}+\frac{\partial}{\partial z}\left(\frac{\partial u}{\partial z}-\frac{\partial w}{\partial r}\right)-M^{2} u$ (7)

$\operatorname{Re}\left(u \frac{\partial w}{\partial r}+w \frac{\partial w}{\partial z}\right)=-\operatorname{Re} \frac{\partial p}{\partial z}-\frac{1}{r} \frac{\partial}{\partial r}\left(r\left(\frac{\partial u}{\partial z}-\frac{\partial w}{\partial r}\right)\right)-M^{2} w$ (8)

$\operatorname{Re} S c\left(u \frac{\partial \phi}{\partial r}+w \frac{\partial \phi}{\partial z}\right)=\frac{\partial^{2} \phi}{\partial r^{2}}+\frac{1}{r} \frac{\partial \phi}{\partial r}+\frac{\partial^{2} \phi}{\partial z^{2}}-S c\left(K_{1}+\gamma \phi\right)$ (9)

where

$\operatorname{Re}=\frac{\rho v_{0} a}{\mu}, \quad M^{2}=\frac{\sigma B_{0}^{2} a^{2}}{\mu}, \quad S c=\frac{\nu}{D}, \quad \gamma=\frac{k_{1} a^{2}}{\nu} \quad \text{and} \quad K_{1}=\frac{k_{1} C_{0} a^{2}}{\nu\left(C_{w}-C_{0}\right)}$

Now, we introduce a stream function $\psi$ [11] that satisfies the continuity equation, Eq. (6):

$\psi=(N-z) f(r)$ (10)

The radial and axial velocity components can be taken as

$u=-\frac{1}{r} \frac{\partial \psi}{\partial z}, \quad w=\frac{1}{r} \frac{\partial \psi}{\partial r}$ (11)

Substituting Eq. (11) into Eqns.
(7) and (8) and eliminating the pressure term, we get

$\operatorname{Re}\left(-\frac{2}{r^{3}} \frac{\partial \psi}{\partial z} E^{2} \psi+\frac{1}{r^{2}}\left(\frac{\partial \psi}{\partial z} \frac{\partial E^{2} \psi}{\partial r}-\frac{\partial \psi}{\partial r} \frac{\partial E^{2} \psi}{\partial z}\right)\right)=\frac{-1}{r} E^{2}\left(E^{2} \psi\right)+\frac{M^{2}}{r} E^{2} \psi$ (12)

where $E^{2}=\frac{\partial^{2}}{\partial r^{2}}-\frac{1}{r} \frac{\partial}{\partial r}+\frac{\partial^{2}}{\partial z^{2}}$. Using Eq. (10) in Eq. (12) we get

$\operatorname{Re}\left\{2 f\, D^{2} f+r\left(f^{\prime}\, D^{2} f-f \frac{d}{d r} D^{2} f\right)\right\}=r^{2} D^{2}\left(-D^{2}+M^{2}\right) f$

or, alternatively,

$\operatorname{Re}\left(\frac{3 f f^{\prime \prime}}{r^{2}}-\frac{3 f f^{\prime}}{r^{3}}-\frac{f f^{\prime \prime \prime}}{r}+\frac{f^{\prime} f^{\prime \prime}}{r}-\frac{f^{\prime 2}}{r^{2}}\right)=D^{2}\left(-D^{2}+M^{2}\right) f$ (13)

where $D^{2}=\frac{d^{2}}{d r^{2}}-\frac{1}{r} \frac{d}{d r}$ is a differential operator. The corresponding boundary conditions are

$\left.\begin{array}{l}{f=D^{2} f=\frac{\partial \phi}{\partial r}=0 \text { at } r=0} \\ {f^{\prime}=0, \; f=\phi=1 \text { at } r=1}\end{array}\right\}$ (14)

3. Analytical Solution via HAM

For the HAM approximation of Eqns. (9) and (13), we take the boundary conditions of Eq. (14) in the form of Eq. (15); the initial (zeroth-order) approximations $f_{0}$ and $\phi_{0}$ and the auxiliary linear operators $L_{1}$ and $L_{2}$ are as follows:

$f(0)=0, \; D^{2} f(0)=0, \; f(1)=1, \; f^{\prime}(1)=0, \; \phi^{\prime}(0)=0 \text { and } \phi(1)=1$ (15)

$f_{0}=r^{2}\left(2-r^{2}\right)$ and $\phi_{0}(r)=1$

$L_{1}[f]=D^{4} f$ and $L_{2}[\phi]=\nabla^{2} \phi$

with

$L_{1}\left[c_{1} r^{4}+c_{2} r^{2}(2 \log r-1)+c_{3} r^{2}+c_{4}\right]=0$

$L_{2}\left[(N-z)^{2}\left(c_{5}+c_{6} \log r\right)+c_{7} r^{2}+c_{8} r^{2}(\log r-1)\right]=0$ (16)

where $c_{i}$ (i = 1–8) are constants.

3.1 Zeroth-order deformation equations

The zeroth-order deformation equations
for (13) and (9) can be written as follows:

$(1-\lambda) L_{1}\left[f(r, \lambda)-f_{0}(r)\right]=\lambda h_{1} H N_{1}(f(r, \lambda))$ (17)

$f(0, \lambda)=D^{2} f(0, \lambda)=f^{\prime}(1, \lambda)=0, \; f(1, \lambda)=1$ (18)

$(1-\lambda) L_{2}\left[\phi(r, z, \lambda)-\phi_{0}(r)\right]=\lambda h_{2} H N_{2}(\phi(r, z, \lambda), f(r, \lambda))$ (19)

$\phi(1, \lambda)=1, \quad \phi^{\prime}(0, \lambda)=0$ (20)

$N_{1}[f(r, \lambda)]=D^{2}\left(-D^{2}+M^{2}\right) f(r, \lambda)-\frac{R e}{r^{3}}\left(\begin{array}{c}{3 r f(r, \lambda) \frac{\partial^{2} f(r, \lambda)}{\partial r^{2}}-3 f(r, \lambda) \frac{\partial f(r, \lambda)}{\partial r}-r^{2} f(r, \lambda) \frac{\partial^{3} f(r, \lambda)}{\partial r^{3}}} \\ {+r^{2} \frac{\partial f(r, \lambda)}{\partial r} \frac{\partial^{2} f(r, \lambda)}{\partial r^{2}}-r\left(\frac{\partial f(r, \lambda)}{\partial r}\right)^{2}}\end{array}\right)$ (21)

$\begin{aligned} N_{2}(\phi(r, z, \lambda), f(r, \lambda))=& \operatorname{Re} \operatorname{Sc}\left(\frac{f(r, \lambda)}{r}\left((N-z)^{2} \frac{\partial \phi_{1}(r, \lambda)}{\partial r}+\frac{\partial \phi_{2}(r, \lambda)}{\partial r}\right)-2 \frac{(N-z)^{2}}{r} \phi_{1}(r, \lambda) \frac{\partial f(r, \lambda)}{\partial r}\right)\\ &-\left((N-z)^{2}\left(\frac{\partial^{2} \phi_{1}(r, \lambda)}{\partial r^{2}}+\frac{1}{r} \frac{\partial \phi_{1}(r, \lambda)}{\partial r}\right)+\frac{\partial^{2} \phi_{2}(r, \lambda)}{\partial r^{2}}+\frac{1}{r} \frac{\partial \phi_{2}(r, \lambda)}{\partial r}+2 \phi_{1}(r, \lambda)\right)\\ &+\operatorname{Sc} \gamma\left((N-z)^{2} \phi_{1}(r, \lambda)+\phi_{2}(r, \lambda)\right)+S c K_{1} \end{aligned}$ (22)

with $\phi(r, z, \lambda)=(N-z)^{2} \phi_{1}(r, \lambda)+\phi_{2}(r, \lambda)$.

Here $\lambda \in[0,1]$ is the homotopy (embedding) parameter, $h_{1}$ and $h_{2}$ are the convergence-control parameters, and H is the auxiliary function (taken as 1). For $\lambda=0$ and $\lambda=1$, we have

$f(r, 0)=f_{0}(r), \; f(r, 1)=f(r), \; \phi(r, 0)=\phi_{0}(r), \; \phi(r, 1)=\phi(r)$ (23)

Further, by Maclaurin series expansion, one gets

$f(r, \lambda)=f_{0}(r)+\sum_{n=1}^{\infty} f_{n}(r) \lambda^{n}$ (24)

where $f_{n}(r)=\frac{1}{n !} \frac{\partial^{n} f(r, \lambda)}{\partial \lambda^{n}}\Big|_{\lambda=0}$, and

$\phi(r, \lambda)=\phi_{0}(r)+\sum_{n=1}^{\infty} \phi_{n}(r) \lambda^{n}$ (25)

where $\phi_{n}(r)=\frac{1}{n !} \frac{\partial^{n} \phi(r, \lambda)}{\partial \lambda^{n}}\Big|_{\lambda=0}$.

We choose $h_{1}$ and $h_{2}$ suitably so that these series converge at $\lambda=1$; hence the solution expressions follow from Eqns. (24)-(25) as:

$f(r)=f_{0}(r)+\sum_{n=1}^{\infty} f_{n}(r)$ (26)

$\phi(r)=\phi_{0}(r)+\sum_{n=1}^{\infty} \phi_{n}(r)$ (27)

3.2 The higher-order deformation equations

Differentiating Eqns.
(17)-(20) n times with respect to the parameter $\lambda$, then multiplying by $\frac{1}{n !}$, we obtain the nth-order equations as follows:

$L_{1}\left[f_{n}(r)-\Upsilon_{n} f_{n-1}(r)\right]=h_{1} R_{1, n}(r)$ (28)

$L_{2}\left[\phi_{n}(r)-\Upsilon_{n} \phi_{n-1}(r)\right]=h_{2} R_{2, n}(r)$ (29)

$R_{1, n}(r)=D^{2}\left(-D^{2}+M^{2}\right) f_{n-1}-\frac{R e}{r^{3}} \sum_{i=0}^{n-1}\left(3 r f_{i} f_{n-1-i}^{\prime \prime}-3 f_{i} f^{\prime}_{n-1-i}-r^{2} f_{i} f^{\prime \prime \prime}_{n-1-i}+r^{2} f_{i}^{\prime} f_{n-1-i}^{\prime \prime}-r f_{i}^{\prime} f_{n-1-i}^{\prime}\right)$ (30)

$R_{2, n}(r)=\nabla^{2} \phi_{n-1}-\operatorname{Re} S c \sum_{i=0}^{n-1}\left((N-z)^{2}\left(\frac{f_{i}}{r} \phi_{1, n-1-i}^{\prime}-\frac{2}{r} \phi_{1, n-1-i} f_{i}^{\prime}\right)+\frac{f_{i}}{r} \phi_{2, n-1-i}^{\prime}\right)-S c \gamma\left((N-z)^{2} \phi_{1, n-1-i}+\phi_{2, n-1-i}\right)-S c K_{1}\left(1-\Upsilon_{n}\right)$ (31)

$\Upsilon_{n}=\left\{\begin{array}{l}{1, \; n \neq 1} \\ {0, \; n=1}\end{array}\right.$

and the relevant boundary conditions are

$f_{n}(0)=D^{2} f_{n}(0)=f_{n}(1)=f_{n}^{\prime}(1)=\phi_{n}(1)=\phi_{n}^{\prime}(0)=0$ (32)

Eqns. (28)-(29) are solved subject to the conditions of Eq. (32) using the symbolic computation software Mathematica.

3.3 Sherwood number

The mass transfer rate (mass flux) at the pipe wall is given by:

$q_{w}=-\left.D \frac{\partial C}{\partial R}\right|_{R=a}$ (33)

The dimensionless mass flux may be expressed as:

$S h=\frac{a q_{w}}{D\left(C_{w}-C_{0}\right)}$ (34)

where Sh is the Sherwood number. From (33) and (34), the Sherwood number takes the form:

$S h=-\left.\frac{\partial \phi}{\partial r}\right|_{r=1}$ (35)

4. Results and Discussions

In the present study, we have examined the effect of the various relevant parameters on the velocity field, mass diffusion and Sherwood number through the graphical illustrations in Figures 2-9. The velocity and concentration distributions involve the auxiliary parameters h1 and h2. The convergence rate of the homotopy approximations depends strongly on the value of h [26]. As a result, h-curves are depicted for finding the admissible ranges of h1 and h2. The h-curves are depicted in Figures 2(a)-2(b) for the 20th-order approximation. It is evident that the h1 and h2 values have these ranges: -1.5 < h1 < -0.5 and -1.25 < h2 < 0. So we have taken h1 = h2 = -1. The 20th-order approximation, which lies well within this convergence range, is adopted in all subsequent calculations and figures. Figures 3(a)-3(b) display the response of M on the radial and axial velocity components $f$ and $f^{\prime}$. These figures show that the radial velocity increases with an increase in M, whereas for the velocity component in the z-direction the maximum values of $f^{\prime}$ are reduced and shifted towards the origin (axis of the cylinder) with increasing values of M. The Lorentz forces are responsible for these changes in the velocity components in the (r, z) directions. The radial component assists momentum development and enhances the flow in the r-direction, while the Lorentz force along the axis of the pipe acts to inhibit the flow, especially at larger values of the radial coordinate. It is clear from Figure 4(a) that the velocity f increases as Re increases. From Figure 4(b), we see that the maximum values of $f^{\prime}$ are reduced as Re increases. This behaviour contrasts with the effect of M on the maximum values of $f^{\prime}$ discussed above. This may be due to the fact that a larger Reynolds number implies a greater inertial force in the regime relative to the viscous force, which acts to accelerate the flow along the r-axis. Re cannot induce a force in the perpendicular direction as the magnetic parameter M does.
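The velocity problem treated above with HAM in Mathematica can also be checked independently with a standard numerical boundary-value solver. The sketch below (Python/SciPy, not the authors' HAM code) integrates Eq. (13) with the boundary conditions of Eq. (14) as a first-order system; the values M = 5 and Re = 10 follow the h-curve example of Figure 2, the starting guess is the zeroth-order approximation f0 = r²(2 − r²) of Section 3, and a small offset ε is used to avoid the axis singularity. Convergence for other parameter combinations is not guaranteed.

```python
# Hedged numerical cross-check of Eq. (13): solve the 4th-order BVP for f(r)
# as a first-order system using scipy.integrate.solve_bvp (not the authors' HAM code).
import numpy as np
from scipy.integrate import solve_bvp

M, Re = 5.0, 10.0        # assumed values, matching the h-curve example of Fig. 2
eps = 1e-3               # small offset from r = 0 to avoid the 1/r singularity

def odes(r, y):
    # y = [f, f', G, G'] with G = D^2 f = f'' - f'/r
    f, fp, G, Gp = y
    fpp = G + fp / r                     # f'' recovered from G
    fppp = Gp + G / r                    # f''' = G' + G/r
    lhs = Re * (3*f*fpp/r**2 - 3*f*fp/r**3 - f*fppp/r + fp*fpp/r - fp**2/r**2)
    Gpp = Gp / r + M**2 * G - lhs        # from Eq. (13): Re(...) = -(G'' - G'/r) + M^2 G
    return np.vstack([fp, fpp, Gp, Gpp])

def bcs(ya, yb):
    # Eq. (14): f = D^2 f = 0 on the axis (applied at r = eps); f = 1, f' = 0 at r = 1
    return np.array([ya[0], ya[2], yb[0] - 1.0, yb[1]])

r = np.linspace(eps, 1.0, 201)
f0 = r**2 * (2.0 - r**2)                 # zeroth-order HAM approximation of Section 3
guess = np.vstack([f0, 4*r - 4*r**3, -8*r**2, -16*r])
sol = solve_bvp(odes, bcs, r, guess, max_nodes=50000)
print("solver status:", sol.status, "| max f' (axial velocity profile):", sol.y[1].max())
```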
Figure 2. h-curves for a) velocity with $M=5, Re=10$; b) concentration $\phi(r)$ at $M=5, Re=10, Sc=0.7, K_{1}=0.1, \gamma=1, N=2, z=1$

Figure 3. Response of $M$ on a) radial velocity (f) with $Re=10, N=2, z=1$; b) axial velocity ($f^{\prime}$) with $Re=10, N=2, z=1$

Figure 4. Response of $Re$ on a) radial velocity with $M=3, N=2, z=1$; b) axial velocity with $M=3, N=2, z=1$

Figure 5. Response of Re on a) axial concentration with $M=1, r=0.75, Sc=0.5, \gamma=0.3, K_{1}=0.1$; b) radial concentration at $M=1, z=1.75, Sc=0.5, \gamma=0.3, K_{1}=0.1$

Figure 6. Effect of Sc on a) axial concentration with $M=4, r=0.25, Re=10, \gamma=1, K_{1}=0.1$; b) radial concentration at $M=1, z=0.75, Re=10, \gamma=1, K_{1}=0.1$

Figure 7. Effect of $\gamma$ on a) axial concentration with $M=4, r=0.5, Re=10, Sc=0.7, K_{1}=0.1$; b) radial concentration at $M=1, z=1.5, Re=10, Sc=1, K_{1}=0.1$

Figure 8. Sherwood distribution: effect of γ when K1=0.15, N=2, r=1, M=1, Re=5, Sc=0.25

Figures 5(a)-5(b) demonstrate the response of $Re$ on the axial and radial concentration $\phi$. They show that for a given rise in Re both the axial and radial concentration $\phi$ increase numerically. However, as $z$ increases, $\phi$ decreases initially up to $z = 1.2$ and then increases. Figures 6(a)-6(b) illustrate the effect of the Schmidt number $Sc$ on the axial and radial concentration $\phi$. It is observed that $\phi$ decreases for a given increase in $Sc$ (i.e., with declining molecular diffusivity). This is because the molecular diffusivity becomes relatively smaller as $Sc$ is increased. The effect of the first-order reaction parameter $\gamma$ on the dimensionless axial and radial concentration distribution $\phi$ is shown in Figures 7(a) and 7(b). The rate of reaction is the speed at which reactants are changed into products. It is observed that the axial concentration $\phi$ decreases for a given increase in $\gamma$. Additionally, the behavior is inverted for the radial concentration: $\phi$ increases as $\gamma$ increases. Figure 8 represents the Sherwood distribution $Sh$ versus Re for different values of $\gamma$. It presents the proportion of convective mass transport to the rate of diffusive mass transfer. From this figure, it is observed that $Sh$ decreases as $\gamma$ rises.

4.1 Streamlines and contours of concentration

From Figure 9(a), the streamlines are positive for $z \leq N$ and non-positive for $z > N$. The streamlines are symmetric about the line $z = N$. The streamlines are more clustered for lower $z$ values and more dispersed for greater $z$ values, indicating that the intensity of the flow is greater at lower $z$ (axial coordinate). From Figure 9(b), the concentration around the line $z = N$ is symmetrical up to almost $r = 0.5$. The concentration is minimal near the axis of the cylinder (as indicated by the gradual blue shading in that region). At $z = N$ and $r = 0$, the lowest concentration appears.

Figure 9.
a) Streamlines for velocity at Re $=10, M=2, Sc=0.7, \gamma=1, K_{1}=0.2$; b) contour graphs for concentration at Re $=10, M=2, Sc=0.7, \gamma=1, K_{1}=0.2$

5. Conclusions

Analytical solutions are given for mass transfer in Newtonian MHD pipe flow using the homotopy analysis method (HAM). Wall suction/injection and first-order chemical reaction effects are also included. The following conclusions were made: (i) Increasing the magnetic force parameter strongly retards the axial flow, although it accelerates the radial flow. (ii) Increasing the suction Reynolds number slows the axial velocity and raises the radial velocity. (iii) The non-dimensional concentration $\phi$ decreases for $\gamma>0$. Additionally, $\phi$ decreases as Sc rises. In the absence of an expanding/contracting parameter, these findings are qualitatively consistent with the results of Srinivas et al. [23]. (iv) The Sherwood number declines as the chemical reaction parameter $\gamma$ increases.

The authors are grateful to the anonymous referees for remarks which improved the work considerably.

Nomenclature
W, U: dimensional axial and radial velocity components
w, u: non-dimensional axial and radial velocity components
p, P: non-dimensional and dimensional pressure
C: concentration of the fluid
ψ: Stokes stream function
E², D²: differential operators
$N=U_{a} / v_{0}$
$v_{0}$: suction velocity
$U_{a}$: entrance velocity
$k_{1}$: chemical reaction rate
$C_{0}$: reference concentration at the axis
$C_{w}$: wall concentration at the surface of the pipe
Re: suction Reynolds number
M: magnetic parameter
Sc: Schmidt number

Greek symbols
$\rho$: density
$\mu$: viscosity, kg m-1 s-1
$\sigma$: electrical conductivity
$\phi$: dimensionless concentration
$\gamma$: chemical reaction parameter

[1] Berman, A.S. (1953). Laminar flow in channels with porous walls. Journal of Applied Physics, 24(9): 1232-1235. http://dx.doi.org/10.1063/1.1721476
[2] Sellars, J.R. (1955). Laminar flow in channels with porous walls at high suction Reynolds numbers. Journal of Applied Physics, 6(4): 489. http://dx.doi.org/10.1063/1.1722024
[3] Yuan, S.W. (1956). Further investigation of laminar flow in channels with porous walls. Journal of Applied Physics, 27(3): 267-269. http://dx.doi.org/10.1063/1.1722355
[4] Bansal, J.L. (1967). Laminar flow through a uniform circular pipe with small suction. Proc. Natn. Acad. Sci., 32A(4): 368-378.
[5] Terril, R.M. (1982). An exact solution for flow in a porous pipe. Zeitschrift für angewandte Mathematik und Physik, 33: 547-542. https://doi.org/10.1007/BF00955703
[6] Terril, R.M. (1983). Laminar flow through a porous tube. J. Fluids Eng., 105(3): 303-306. https://doi.org/10.1115/1.3240992
[7] Tsangaris, S., Kondaxakis, D. (2007). Exact solution for flow in a porous pipe with unsteady wall suction/injection. Comm. in Nonlinear Sci. Num. Simu., 12(7): 1181-1189. https://doi.org/10.1016/j.cnsns.2005.12.009
[8] Cox, B.J., Hill, J.M. (2011). Flow through a circular tube with permeable Navier slip boundary. Nanoscale Research Letters, 389: 1-9. https://doi.org/10.1186/1556-276X-6-389
[9] Ramana Murthy, J.V., Nagaraju, G., Muthu, P. (2012). Micropolar fluid flow generated by a circular cylinder subject to longitudinal and torsional oscillations with suction/injection. Tamkang J. Mathematics, 43(3): 339-356. https://dx.doi.org/10.5556/j.tkjm.43.2012.339-356
[10] Srinivas, J., Ramana Murthy, J.V. (2016). Flow of two immiscible couple stress fluids between two permeable beds. J. Applied Fluid Mechanics, 9(1): 501-507. https://dx.doi.org/10.5556/j.tkjm.43.2012.339-356
[11] Terril, R.M., Shrestha, G.M. (1963).
Laminar flow through channels with porous walls and with an applied transverse magnetic field. Appl. Sci. Res., 11: 134-144. https://dx.doi.org/10.1007/BF02922219
[12] Attia, H.A. (2003). Unsteady flow of a dusty conducting non-Newtonian fluid through a pipe. Can. J. Phys., 81(5): 789-795. https://dx.doi.org/10.1139/p03-054
[13] Attia, H.A., Ahmed, M.E.S. (2005). Circular pipe MHD flow of a dusty Bingham fluid. Tamkang J. Science and Engineering, 8(4): 257-265.
[14] El-Shahed, M. (2006). MHD of a fractional viscoelastic fluid in a circular tube. Mech. Res. Comm., 33: 261-268. https://dx.doi.org/10.1016/j.mechrescom.2005.02.017
[15] Ramana Murthy, J.V., Bahali, N.K., Srinivasacharya, D. (2010). Unsteady flow of a micropolar fluid through a circular pipe under a transverse magnetic field with suction/injection. Selçuk Journal of Applied Mathematics, 11(2): 13-25.
[16] Ramana Murthy, J.V., Sai, K.S., Bahali, N.K. (2011). Steady flow of micropolar fluid in a rectangular channel under transverse magnetic field with suction. AIP Advances, 1(032123). https://doi.org/10.1063/1.3624837
[17] Hayat, T., Abbas, Z. (2008). Channel flow of a Maxwell fluid with chemical reaction. Z. Angew. Math. Phys., 59: 124-144. https://dx.doi.org/10.1007/s00033-007-6067-1
[18] Bridges, C., Rajagopal, K.R. (2006). Pulsatile flow of a chemically-reacting nonlinear fluid. Comput. Math. Appl., 52(6-7): 1131-1144. https://dx.doi.org/10.1016/j.camwa.2006.01.014
[19] El Dabe, N.T., Moatimid, G.M., Ali, H.S.M. (2002). Rivlin-Ericksen fluid in tube of varying cross section with mass and heat transfer. Z. Naturforsch., 57(11): 863-873. https://doi.org/10.1515/zna-2002-1105
[20] Sahin, A.Z., Ben-Mansour, R. (2003). Entropy generation in laminar fluid flow through a circular pipe. Entropy, 5(5): 404-416. https://dx.doi.org/10.3390/e5050404
[21] Ben-Mansour, R., Sahin, A.Z. (2005). Entropy generation in developing laminar fluid flow through a circular pipe with variable properties. Heat Mass Transfer, 42: 1-11. https://dx.doi.org/10.1007/s00231-005-0637-6
[22] Ramana Murthy, J.V., Nagaraju, G., Sai, K.S. (2012). Numerical solution for MHD flow of micro polar fluid between two concentric rotating cylinders with porous lining. International Journal of Nonlinear Science, 13(2): 183-193.
[23] Srinivas, S., Subramanyam Reddy, A., Ramamohan, T.R. (2015). Mass transfer effects on viscous flow in an expanding or contracting porous pipe with chemical reaction. Heat Transfer-Asian Research, 44(6): 552-567. https://dx.doi.org/10.1002/htj.21136
[24] Mandapati, M.J.K. (2016). Effect of axial conduction and viscous dissipation on heat transfer for laminar flow through a circular pipe. Perspectives in Science, 8: 61-65. https://dx.doi.org/10.1016/j.pisc.2016.03.008
[25] Nagaraju, G., Srinivas, J., Ramana Murthy, J.V., Rashad, A.M. (2017). Entropy generation analysis of the MHD flow of couple stress fluid between two concentric rotating cylinders with porous lining. Heat Transfer-Asian Research, 46(4): 316-330. https://dx.doi.org/10.1002/htj.21214
[26] Gajjela, N., Matta, A., Kaladhar, K. (2017). The effects of Soret and Dufour, chemical reaction, Hall and ion currents on magnetized micropolar flow through co-rotating cylinders. AIP Advances, 7(115201): 1-16. https://dx.doi.org/10.1063/1.4991442
[27] Bouras, A., Taloub, D., Djezzar, M., Driss, Z. (2018). Natural convective heat transfer from a heated horizontal elliptical cylinder to its coaxial square enclosure. Mathematical Modeling of Engineering Problems, 5(4): 379-385.
https://dx.doi.org/10.18280/mmep.050415
[28] Nagaraju, G., Jangili, S., Murthy, R.J.V., Beg, O.A., Kadir, A. (2019). Second law analysis of flow in a circular pipe with uniform suction and magnetic field effects. Journal of Heat Transfer, 141(1): 012004. https://doi.org/10.1115/1.4041796
[29] Liao, S.J. (2004). Beyond Perturbation: Introduction to the Homotopy Analysis Method. Applied Mechanics Reviews, 57(5): B25-B26. https://dx.doi.org/10.1115/1.1818689
[30] Bird, R.B., Stewart, W.E., Lightfoot, E.N. (1960). Transport Phenomena. John Wiley and Sons, New York. https://doi.org/10.1002/aic.690070245
Comparative effectiveness of different composting methods on the stabilization, maturation and sanitization of municipal organic solid wastes and dried faecal sludge mixtures

Tesfu Mengistu1, Heluf Gebrekidan1, Kibebew Kibret1, Kebede Woldetsadik2, Beneberu Shimelis1 & Hiranmai Yadav1

Environmental Systems Research, volume 6, Article number: 5 (2018)

Composting is one of the integrated waste management strategies used for the recycling of organic wastes into a useful product. Composting methods vary in duration of decomposition and potency of stability, maturity and sanitation. This study aimed to investigate the comparative effectiveness of four different methods of composting, viz. windrow composting (WC), vermicomposting (VC), pit composting (PC) and combined windrow and vermicomposting (WVC), on the stabilization, maturation and sanitization of mixtures of municipal solid organic waste and dried faecal sludge. The composting treatments were arranged in a completely randomized block design with three replications. The changes in physico-chemical and biological characteristics of the compost were examined at 20-day intervals for 100 days using standard laboratory procedures. The analysis of variance was performed using SAS software and the significant differences were determined using Fisher's LSD test at the P ≤ 0.05 level. The evolution of composting temperature, pH, EC, \({\text{NH}}_{ 4}^{ + }\), \({\text{NO}}_{ 3}^{ - }\), \({\text{NH}}_{ 4}^{ + }\):\({\text{NO}}_{ 3}^{ - }\) ratio, OC, C:N ratio and total volatile solids varied significantly among the composting methods and with composting time. The evolution of total nitrogen and germination index also varied significantly (P ≤ 0.001) with time, but their variation among the composting methods was not significant (P > 0.05). Except for PC, all the composting methods satisfied all the indices for stability/maturity of compost by the 60th day of sampling, whereas PC achieved the critical limit values for most of the indices at the 80th day. Highly significant differences (P ≤ 0.001) were noted among the composting methods with regard to their effectiveness in eliminating pathogens (faecal coliforms and helminth eggs). The WVC method was most efficient in eliminating the pathogens, complying with WHO's standard. Turned windrow composting and composting involving earthworms hastened the biodegradation of organic wastes and resulted in the production of stable compost earlier than the traditional pit method of composting. The WVC method was most efficient in keeping the pathogens below the threshold level. Thus, elimination of pathogens from composts being a critical consideration, this study would recommend this method for composting organic wastes involving human excreta.

As in many other cities of developing countries, the rapid urbanization and high population growth of Dire Dawa (Ethiopia's second largest city) have resulted in a significant increase in the generation of wastes from domestic and commercial activities, posing numerous questions concerning the adequacy of the current waste management systems and their associated environmental, economical and social implications. A report by Beneberu et al. (2012) depicted that, despite the great efforts made by the Dire Dawa city municipality, it has been hardly possible to meet the ever-increasing waste management service demand of the city adequately and effectively.
The per capita waste generation rate of the city is reported to be 0.3 kg day−1 and the city generates an estimated 77 tonnes of solid wastes per day (Community Development Research 2011). The same report indicated that, as there is very limited or no effort to recycle, reuse or recover the waste that is being generated, waste disposal has been the major mode of waste management practice. It has been observed that the indiscriminate dumping of wastes into the landfill is resulting in unexpectedly faster filling up of the city's sanitary landfill, which would thus likely be abandoned sooner than the anticipated 30 years (Beneberu et al. 2012). In addition to the municipal solid wastes (MSW), human excreta also constitute a significant component of the wastes generated from Dire Dawa city. Faecal sludge (FS) accumulating in the commonly used on-site sanitation systems is periodically collected and dumped into the city's well-engineered sludge dewatering and drying beds. Since the faecal sludge, after being dried in the beds, has no further use in Dire Dawa, it was observed to be excavated from the drying beds and disposed of in the landfill site. It is, therefore, of paramount importance to establish an economically viable, environmentally sustainable and socially acceptable method of waste management for the sustainable development of the city. Bundela et al. (2010) suggested that agricultural application of organic solid wastes, as a nutrient source for plants and as a soil conditioner, is the most cost-effective municipal solid waste (MSW) disposal option because of its advantages over traditional means, such as landfilling or incineration. Though human wastes are a rich source of organic matter and inorganic plant nutrients and are therefore used to support food production, their use without prior stabilization represents a high risk because of the potentially negative effects of any phytotoxic substances or pathogens they may contain (Garcia et al. 1993). Application of raw wastes may inhibit seed germination, reduce plant growth and damage crops by competing for oxygen or causing phytotoxicity to plants due to insufficient biodegradation of organic matter (Brewer and Sullivan 2003; Cooperband et al. 2003). Moreover, the reuse of untreated faeces for agricultural purposes can cause a great health risk, because a great number of pathogens such as bacteria, viruses and helminths can be found in human excreta (Gallizzi 2003). Therefore, the management of urban solid wastes involving human excreta for recycling in agriculture should necessarily incorporate sanitization, stabilization and maturation aspects to minimize potential disease transmission and to obtain a more stabilized and matured product for application to soil (Carr et al. 1995). Composting and vermicomposting are two of the best-known processes for biological stabilization of solid organic wastes, transforming them into a safer and more stabilized material that can be used as a source of nutrients and soil conditioner in agricultural applications (Lazcano et al. 2008; Bernal et al. 2009; Domínguez and Edwards 2010). Composting involves the accelerated degradation of organic matter by microorganisms under controlled conditions, in which the organic material undergoes a characteristic thermophilic stage that allows sanitization of the waste by elimination of pathogenic microorganisms (Lung et al. 2001).
Vermicomposting, on the other hand, is emerging as the most appropriate alternative to conventional aerobic composting (Yadav et al. 2010) and involves the bio-oxidation and stabilization of organic material by the joint action of earthworms and microorganisms (Lazcano et al. 2008). More recently, combining thermophilic composting and vermicomposting has been considered as a way of achieving stabilized substrates (Tognetti et al. 2007). Thermophilic composting results in sanitization of wastes and elimination of toxic compounds, while the subsequent vermicomposting reduces particle size and increases nutrient availability (Mupondi et al. 2010). Composting methods differ in duration of decomposition and potency of stability and maturity (Iqbal et al. 2012). Due to the ecological and health concerns of human wastes, extensive research has been conducted to study the composting process and to evaluate methods to describe the stability, maturity and sanitation of compost prior to its agricultural use (Brewer and Sullivan 2003; Zmora-Nahum et al. 2005). Although several studies have addressed the optimization of composting, vermicomposting or composting with subsequent vermicomposting of various organic wastes (Dominguez et al. 1997; Frederickson et al. 1997; Ndegwa and Thompson 2001; Tognetti et al. 2005, 2007; Lazcano et al. 2008; Mupondi et al. 2010), information on the effectiveness of the different composting methods on the biodegradation and sanitization of mixtures of MSW and dried faecal sludge (DFS) is scant. Moreover, regarding the sanitization efficiency of the different composting techniques, conflicting reports have been presented in the literature. Several researchers reported the effectiveness of thermophilic composting in eliminating pathogenic organisms (Koné et al. 2007; Vinnerås 2007; Mupondi et al. 2010). However, a few studies on composting of source-separated faeces claimed that a sufficiently high temperature for pathogen destruction is difficult to achieve (Bjorklund 2002; Niwagaba et al. 2009). Similarly, in vermicomposting, some studies have provided evidence of suppression of pathogens (Monroy et al. 2008; Rodriguez-Canche et al. 2010; Eastman et al. 2001), while others (Bowman et al. 2006; Hill et al. 2013) demonstrated the insignificant effect of vermicomposting in reducing Ascaris suum ova as compared to composting without worms. The effectiveness of vermicomposting for pathogen destruction thus remains unclear due to conflicting information in the literature (Hill et al. 2013); the present scenario therefore calls for further exploration. Accordingly, the present study attempted to investigate the comparative effectiveness of four different methods of composting, viz. windrow composting (WC), vermicomposting (VC), pit composting (PC) and combined windrow and vermicomposting (WVC), on the stabilization, maturation and sanitization of mixtures of MSW and dried faecal sludge.

Experimental site, wastes and earthworms utilized

The study was carried out at Dire Dawa, a city in Eastern Ethiopia located at 9° 6′ N, 41° 8′ E and at an altitude of 1197 m above sea level. The municipal solid organic waste used in this study was obtained from a door-to-door waste collection service provided by the Sanitation and Beautification Agency (SBA) of Dire Dawa city, in which the wastes were collected from various locations in the city.
The dried faecal cake, which was about to be excavated from the drying bed and dumped at the landfill site, was collected from the dumping site. The garbage contained mixed organic and inorganic domestic wastes; upon arrival at the composting site, the wastes were spread flat on the ground and sorted manually into organic and non-organic fractions. All the compostable components were shredded manually into small pieces with particle sizes ranging from 3 to 5 cm as described by Pisa and Wuta (2013). The shredded MSW and dried faecal sludge were then mixed manually in a 2:1 mixing ratio. The earthworm species (Eisenia foetida) were obtained from Haramaya University. Mature earthworms and their cocoons were brought to Dire Dawa, where they were reared (multiplied) for about 4 months using cow dung as the medium.

Composting treatments

The methods of composting tested were: turned windrow composting (WC), pit composting (PC) (a composting method commonly practiced by farmers of the study area), vermicomposting (VC) and combined windrow and vermicomposting (WVC). The composting was done outdoors but under shade. Three replicates of each of the four composting methods were made, arranged in a completely randomized block design. Each composting pile was covered with a layer of dry grass (5 cm) to prevent excessive loss of moisture. Windrow composting: for the thermophilic composting, the homogenized feedstock of 1 m3 volume (~275 kg dry weight) was heaped into conical piles over about 1 m2 area after being wetted with water to 50-60% moisture (Maso and Blasi 2008). Pit composting: a homogenized feedstock with the same moisture level as in the windrow composting was placed in a pit with dimensions of 1 × 1 × 1 m (length, width and depth). Vermicomposting: vermicomposting was performed in a vermicompost bed measuring 1 × 1 × 0.3 m (length, width and height, respectively) framed with bricks, where the walls and bottom of the structure were lined with polyethylene sheet. In order to drain the excess water, the bottom of the polyethylene sheet was made to have tiny holes. Mature earthworms (E. foetida) were introduced at the recommended stocking rate of 250 adult worms per 20 kg of bio-waste (Padmavathiamma et al. 2008). The moisture content of the material was maintained between 70 and 80% (Maso and Blasi 2008). Combined windrow composting and vermicomposting: thermophilic composting of the wastes was done in the same manner as in windrow composting, and the piled substrate was allowed to compost until the temperature dropped to the mesophilic phase. After the completion of the thermophilic phase (15 days after the initiation of the process), the subsequent vermicomposting continued using earthworms (E. foetida) as described under vermicomposting (Mupondi et al. 2010). The piled heaps in WC were turned and mixed every week, while the substrates in the other methods of composting were left intact. The moisture content of each pile was checked every week and adjusted accordingly, as illustrated in the sketch below. The compost mass in WVC received the same treatment as WC and VC during the thermophilic and mesophilic phases of composting, respectively. The temperature in each heap was measured daily with a temperature probe at randomly selected places (centre, bottom and top) throughout the process.
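As a worked illustration only (not part of the published protocol), the snippet below estimates how much water must be added to bring a pile of known wet mass and current moisture content up to one of the target moisture levels quoted above (50-60% for WC/PC, 70-80% for VC); the example masses are hypothetical.

```python
# Illustrative moisture-adjustment calculation (wet-basis moisture fractions).
def water_to_add(wet_mass_kg, current_moisture, target_moisture):
    """Return kg of water needed to raise a pile from current to target moisture."""
    dry_mass = wet_mass_kg * (1.0 - current_moisture)   # solids do not change
    target_total = dry_mass / (1.0 - target_moisture)   # wet mass at target moisture
    return max(0.0, target_total - wet_mass_kg)

# Hypothetical example: a 300 kg pile at 40% moisture brought up to the 60% WC/PC target
print(round(water_to_add(300, 0.40, 0.60), 1), "kg of water to add")
```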
Compost sampling and analysis

To evaluate the various physical, chemical and biological transformations of the compost, representative samples were collected from four different points (bottom, surface, side and centre) of each pile every 20 days (20, 40, 60, 80 and 100 days). All the samples were sealed in plastic containers and transported immediately to the laboratory in an ice box. Upon their arrival at the laboratory, the samples were stored in a refrigerator at 4 °C until they were analysed. Physico-chemical and microbial analyses were carried out at Haramaya University following standard procedures.

Physico-chemical analysis of compost

Moisture content was determined as weight loss upon drying in an oven at 105 °C to a constant weight (Lazcano et al. 2008). Total nitrogen (TN) and organic carbon (OC) were determined using dried compost samples which were ground to pass through a 2-mm sieve, as described by Pisa and Wuta (2013). For the determination of total N, samples were decomposed using concentrated H2SO4 and a catalyst mixture in a Kjeldahl flask and, subsequently, the N content in the digest was determined following the steam distillation and titration method (Bremner and Mulvaney 1982). Organic carbon was estimated by dichromate wet digestion and rapid titration methods as described by Walkley and Black (1934). Total volatile solids were determined as weight loss on ignition at 550 °C for 4 h in a muffle furnace as described by Lazcano et al. (2008). Ammonium N (\({\text{NH}}_{ 4}^{ + }\)–N) was determined from a 0.2 ml aliquot of the 0.5 M K2SO4 extract of the filtrate after colour development with sodium nitroprusside, whereas nitrate N (\({\text{NO}}_{ 3}^{ - }\)–N) was determined in a separate aliquot (0.5 ml) after colour development with 5% salicylic acid using a spectrophotometer (Okalebo et al. 2002). Analyses for pH and electrical conductivity (EC) were performed in extracts of 1:10 (w/v) compost:distilled water ratio as described by Ndegwa and Thompson (2001). The C:N ratio was calculated using the individual values of OC and TN.

Compost phytotoxicity test

For determining compost phytotoxicity, a modified phytotoxicity test employing seed germination was used (Zucconi et al. 1981). A 10 g screened compost sample was shaken with 100 ml of distilled water for an hour; the suspension was then centrifuged at 3000 rpm for 15 min and the supernatant was filtered through a Whatman No. 42 filter paper. A Whatman No. 2 filter paper was placed inside a sterilized petri dish and wetted with 9 ml of the extract, and 30 tomato seeds (Solanum esculentum L.) were placed on the paper. Nine ml of distilled water was used as a control and all experiments were run in triplicate (Wu et al. 2000). The petri dishes were kept in the dark for 4 days at room temperature. At the end of the 4th day, the germination index (GI) was calculated using the following formula (Selim et al. 2012):

$$\text{Germination Index}\;(\%) = \frac{\text{Seed germination}\;(\%) \times \text{Root elongation}\;(\%)}{100}$$

Faecal coliform analysis

For the determination of faecal coliforms in the initial raw materials and in the composts, the procedures described by Mupondi et al. (2010) were employed. Aseptically weighed 10 g samples of either waste mixture or fresh compost were added to 90 ml of distilled water previously autoclaved at 121 °C for 15 min, and the suspensions were then mixed using a blender to ensure thorough mixing.
Additional serial dilutions were made up to 10−6. A 0.1 ml aliquot of each dilution was plated, in triplicate, on the appropriate medium, Violet Red Bile Agar (VBA) (Vuorinen and Saharinen 1997). The plates were then maintained in an incubator at a constant temperature of 44 °C for 24 h. For each of the treatment samples, the numbers of faecal coliforms were expressed as log10 CFU (colony forming units) per gram of fresh sample and average values were calculated.

Helminth eggs recovery

The determination of helminth eggs in this study was done based on the US EPA protocol (1999) modified by Schwartzbrod (2003). The analysis was carried out in triplicate for the initial raw waste and compost samples. The concentration of eggs per gram of dry weight of sample was computed according to the following formula (Ayres and Mara 1996):

$$N = \frac{Y}{C} \times \frac{M}{S},$$

where N = number of eggs per gram of dry weight of sample, Y = number of eggs in the McMaster slide (mean of counts from three slides), M = estimated volume of product at final centrifugation, C = volume of the McMaster slide, and S = dry weight of the original sample.

The data obtained from this study were subjected to analysis of variance (ANOVA) procedures using SAS software and the significant differences were determined using Fisher's LSD test at the P ≤ 0.05 level.

Characteristics of the raw waste materials

The results of the analysis of the raw wastes are presented in Table 1. The pH of the municipal solid waste (MSW) was alkaline and that of the dried faecal sludge (DFS) was acidic in reaction. The EC of MSW was much greater than that of DFS. The alkaline pH and high EC value of MSW could be attributed to the presence of wood ash, which was observed to occur in considerable amounts during the screening of the waste. The total N content of DFS was more than double that of MSW, indicating that it could be used to reduce the C:N ratio of the MSW.

Table 1 Mean values ± standard error of the chemical and biochemical properties of the initial raw wastes used in the study

The total helminth egg count for the dried faecal sludge and the mixture of faecal sludge and MSW was 80.56 g−1 TS and 38.89 g−1 TS, respectively, which is far greater than the recommended value for materials used in agriculture as per WHO's guidelines (≤3–8 eggs g−1 TS) (Xanthoulis and Strauss 1991). Similarly, the total faecal coliform count of all the raw materials was found to exceed the standard threshold limit of <1000 cfu g−1 (WHO 2006). Therefore, it is suggested that the raw wastes cannot be used directly for agriculture without being treated, as they may result in soil contamination. The germination index values of the wastes were also far below the standard limit (>80%), substantiating the presence of phytotoxic substances which would make the raw wastes unfit for application in agricultural soils (Additional file 1: Table S1).

Evolution of composting temperature

Considerable variations in temperature were observed among the different composting methods over the course of the composting period (Fig. 1). Though there was a series of rises and falls in temperature, the general pattern of temperature for the treatments (particularly for WC and PC) was similar. There was a rapid rise in temperature during the first few days of the composting process, followed by a fall with time, and finally the temperature began to gradually approach the ambient temperature.
These temperature patterns denote the thermophilic, mesophilic and maturation phases of a composting process, respectively. The rapid progress from the initial mesophilic phase to the thermophilic phase in WC and PC indicates a high proportion of readily degradable substances and the self-insulating capacity of the waste (Sundberg et al. 2004). The change in temperature pattern observed in this study is in accord with another composting study (Tognetti et al. 2007).

Changes in ambient air temperature and temperature in the experimental piles during the composting process (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting)

Temperatures reached the thermophilic range (>45 °C) on the second and third day for WC and PC, respectively, and the thermophilic phase lasted for 15 and 19 days after initiation of the process. During these days of the process, a higher temperature was recorded for WC than for PC. A peak average temperature ranging between 60.7 and 62.67 °C was recorded during the 3rd to 6th days for WC. Correspondingly, for PC the highest average temperature of 50.2-52.4 °C was registered during the 3rd to 9th days (Additional file 1). The temperature within the composting mass rises when the heat generated from the respiration and decomposition of sugar, starch and protein by the population of microorganisms accumulates faster than it is dissipated to the surrounding environment (Jusoh et al. 2013). During the subsequent mesophilic phase (45-35 °C), however, PC registered a relatively higher temperature than WC. This phase lasted for 13 days, from the 16th to the 28th day for WC and from the 20th to the 32nd day for PC, and from the respective days onwards, temperature values <35 °C and very close to the ambient temperature were recorded for both composting methods. The ambient temperature during the experimental period ranged from 23.7 to 33.7 °C (Fig. 1). The vermicomposting unit (VC), where low temperature was induced intentionally by spreading the material in ground beds, tended to show the lowest temperature throughout the process. The temperature profile for WVC during the thermophilic phase showed a similar pattern to that of WC and then took a different track during the subsequent vermicomposting process, resembling that of the sole vermicomposting unit. The size, initial moisture content and aeration of the piled substrate might have contributed to the variation in temperature among the different composting methods. Initially, to protect the earthworms from extreme thermophilic temperatures and to keep an optimum condition for their performance, the height and moisture content of the pile in the vermicomposting unit were maintained at 30 cm and 80%, compared with 1 m height/depth and 60%, respectively, in the WC and PC piles. As a result, the vermicompost, with a small volume of organic pile and relatively high moisture content, did not heat up as much because the heat generated by the microbial population is lost quickly to the atmosphere, whereas in WC and PC the heat build-up, particularly in the centre of the pile, might have been insulated by the outer layer, letting the temperature inside the pile rise. It is a well-established fact that the smaller the bioreactor or compost pile, the greater the surface area-to-volume ratio, and therefore the larger the degree of heat loss to conduction and radiation (http://www.cfe.cornell.edu/compost/invertebrates.html).
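For readers who wish to reproduce the phase bookkeeping used above (thermophilic >45 °C, mesophilic 35-45 °C, near-ambient <35 °C), the short sketch below classifies a series of daily mean pile temperatures into these phases; the temperature values shown are illustrative, not the measured data.

```python
# Classify daily pile temperatures into the composting phases discussed above.
def classify_phase(temp_c):
    if temp_c > 45:
        return "thermophilic"
    elif temp_c >= 35:
        return "mesophilic"
    return "maturation/ambient"

# Illustrative daily means for a hypothetical windrow pile (deg C)
daily_means = [38, 52, 61, 62, 60, 55, 48, 44, 40, 36, 33, 31]
phases = [classify_phase(t) for t in daily_means]
print(phases.count("thermophilic"), "thermophilic days;", "phases:", phases)
```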
The possible explanation for the variation in the temperature profiles of WC and PC, given the same volume and moisture content of the pile, may be the differences in aeration (air circulation) in the piled substrates. The weekly turning of the compost mass in WC might have promoted the free circulation of air, enhancing the microbial activity in the oxidation process and thereby raising the temperature; whereas in PC, with the substrates stacked in the pit without turning, the circulation of air in the pile might have been relatively restricted, impairing the microbial activity and thereby the heat generated during the process. Finstein et al. (1986), who demonstrated the linear relationship between the oxygen consumed and the heat produced during aerobic metabolism, support the finding of this study.

Evolution of pH

The first pH reading was taken on the 20th day after the initiation of the process, and a sharp and significant (P ≤ 0.001) rise in pH relative to the initial state was observed in all the treatments. The rise in pH during these days is considered to be the result of the metabolic degradation of organic matter containing nitrogen (proteins, amino acids, etc.), leading to the formation of amines and ammonia salts through mineralization of organic nitrogen (Dumitrescu et al. 2009). As Smith and Hughes (2002) and Mupondi et al. (2006) suggested, it might also be attributed to the decomposition of organic acids to release alkali and alkali earth cations previously bound by organic matter. An increase in pH during composting of different substrates was also reported in many other studies (Sundberg et al. 2004; Tognetti et al. 2007; Gao et al. 2010). The analysis of variance (ANOVA) showed a non-significant variation (P > 0.05) of pH values among the different methods of composting at the 20th day of sampling. Nevertheless, as composting progressed, significant variation (P ≤ 0.01) in pH was noted among the different composting methods (Fig. 2). Except for PC, which exhibited a further rise in pH, all other methods of composting showed a fairly stable pH during the 20th to 60th day of the process. This was followed by a slight fall to a nearly neutral pH value during the 80th to 100th day. In PC, the rise in pH value was observed to extend to the 60th day (8.03), after which it declined slightly at the 80th day and finally dropped to 7.83 at the 100th day.

Changes in pH in different composting methods with time. (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting, LSD least significant difference). Different letters indicate significant differences at P ≤ 0.05

Generally, from the 20th day until the end of the process (100th day), PC registered a higher pH value than the rest of the composting methods, which were statistically at par (P > 0.05) with one another (Fig. 2). This may possibly be due to the relatively higher concentration of ammonium ions maintained in PC. The relative decline in pH during the latter stage of the composting process might be due to the nitrification process, which is responsible for the release of H+ ions (Huang et al. 2001). This is also evident from the \({\text{NO}}_{ 3}^{ - }\) data, which was observed to increase remarkably during the later stages of the process. Overall, the pH values achieved in all treatments at the end of the experiment were within the range acceptable for plant growth as recommended by Tognetti et al. (2005).
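The ANOVA and mean-separation results quoted throughout this section were obtained in SAS with Fisher's LSD at P ≤ 0.05; an equivalent comparison can be sketched in Python with SciPy, shown below with made-up pH readings for the three replicates of each method at one sampling day. This is only an illustration of the statistical test, not the authors' analysis, and the pairwise t-tests are a rough stand-in for Fisher's LSD (which uses the pooled error mean square).

```python
# Hedged sketch of the one-way ANOVA comparing composting methods (here: pH at day 60).
from scipy import stats

# Hypothetical pH values for the three replicates of each method
ph = {"WC": [7.61, 7.58, 7.66], "VC": [7.55, 7.60, 7.52],
      "PC": [8.01, 8.05, 7.98], "WVC": [7.57, 7.63, 7.59]}

f_stat, p_value = stats.f_oneway(*ph.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Pairwise comparisons (approximation of the LSD mean separation)
for a in ph:
    for b in ph:
        if a < b:
            t, p = stats.ttest_ind(ph[a], ph[b])
            print(f"{a} vs {b}: P = {p:.4f}")
```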
Evolution of electrical conductivity (EC)

The electrical conductivity values varied significantly (P ≤ 0.01) among the composting methods and over the composting period. Generally, as indicated in Fig. 3, all the treatments showed a similar pattern of change in EC, where the value decreased steadily with the progress of the composting process. It was found to be reduced by about 55.53, 54.66, 47.97 and 37.40%, respectively, for VC, WVC, PC and WC at the 100th day as compared to the initial value of the raw material at day 0. The obtained results are in agreement with Yadav et al. (2012) and Gao et al. (2010), who reported an eventual decrease in EC value with progress in composting and vermicomposting. However, this is in contrast with other studies (Gómez-Brandón et al. 2008) which reported increased EC values with composting time.

Changes in EC in composting mixtures of different composting methods with time. (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

The progressive decline of EC with time suggests that, firstly, there might have been leaching of mineralized ions during the periodic showering of water on the composting mass, and secondly, as the composting process progressed, humification would inevitably proceed and the resulting humic fractions might have complexed the soluble salts, which in turn tends to decrease the amount of mobile free ions and thereby the EC (Rao 2007). The ANOVA results revealed that the EC value during the entire composting period was significantly higher (P ≤ 0.001) for WC followed by PC, whereas VC, which was in statistical parity with WVC, recorded the lowest value (Fig. 3). This suggests that in the piled substrates of PC, VC and WVC, which were not turned but rather watered periodically on top to maintain the moisture at optimum, the soluble ions might have gradually been leached down. Moreover, in VC and WVC, owing to the smaller size of the pile and the relatively large quantity of water added, the leaching of those ions might have been even more pronounced than in PC. In WC, on the other hand, the weekly turning and mixing of the substrate might have helped the redistribution of the mineralized ions in the compost mass, and hence the loss of those ions from the system through leaching might have been relatively reduced. This finding is in line with Lazcano et al. (2008) and Frederickson et al. (2007), who reported a significantly lower EC value for VC and WVC than for WC. The EC value in the final product of all treatments was far below the threshold value of 3000 µS cm−1, indicating a material which can be safely applied to soil (Soumaré et al. 2002).

Evolution of total organic carbon

With the advancement of the composting process, the total organic carbon content of the compost decreased consistently and significantly (P ≤ 0.01) for all the treatments (Fig. 4). The decrease in organic carbon content at the end of the composting process with respect to WVC, VC, WC and PC was 54.74, 54.52, 52.00 and 48.80%, respectively, of their initial carbon content. The present finding is also in agreement with the findings of Tiquia et al. (2002), who reported a total carbon loss that ranged from 50 to 63% in turned windrows and 30-54% in unturned windrows. Similarly, reviewing the works of other authors, Yadav et al. (2010) reported total organic carbon reduction values ranging between 26 and 66% during vermicomposting of wastes of various sources.
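The percentage reductions quoted above follow directly from the initial and final OC concentrations; a minimal calculation of that figure is sketched below, using hypothetical OC values rather than the measured ones.

```python
# Percent reduction in total organic carbon relative to the initial raw material (concentration basis).
def oc_reduction_percent(oc_initial, oc_final):
    return 100.0 * (oc_initial - oc_final) / oc_initial

# Hypothetical example: initial OC of 32% falling to 14.5% by day 100
print(f"{oc_reduction_percent(32.0, 14.5):.1f}% of the initial carbon content lost")
```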
The variation in the amount of OC lost from the different composting methods may possibly be caused by differences in the aeration of the piled substrate. Turning the compost pile (in WC) and the continuous burrowing and fragmenting of the material by earthworms (in VC and WVC) might have altered the aeration of the compost mass and accelerated the degradation process, enhancing the loss of carbon as carbon dioxide. The results are in agreement with the findings of Guo et al. (2012), who demonstrated higher losses of carbon in treatments receiving higher rates of aeration.

Changes in total organic carbon in composting mixture of different composting methods with time. (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

Evolution of total nitrogen

Changes in the total nitrogen of the different composting methods varied significantly (P ≤ 0.01) with the different sampling periods, while the variation among the composting methods was found to be statistically insignificant (P > 0.05) (Fig. 5). The total nitrogen content of the initial raw material of all treatments was reduced significantly (P ≤ 0.01) during the first 20 days of composting. However, during the subsequent samplings, there was a gradual increment of total nitrogen, the maximum value being recorded at the 100th day. The decline in total nitrogen during the first 20 days might be attributed to the loss of nitrogen in the form of ammonia, which is apparent during the active phase of composting. Witter and Lopez-Real (1988) reported nitrogen losses that could amount to 50% and considered that nearly all nitrogen lost is due to ammonia volatilization.

Changes in total nitrogen in composting mixture of different composting methods with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

The rise in total nitrogen after the 20th day may be due to a concentration effect resulting from the degradation of organic C compounds, which leads to weight loss and therefore a relative increase in N concentration (Dias et al. 2010). As Bernal et al. (1998) explained, the concentration of N usually increases during composting when the loss of volatile solids (organic matter) is greater than the loss of NH3. This would generally indicate that there was a relatively greater increase in total N compared with the decrease in the organic carbon content. The results of the present study would, therefore, suggest that during the first 20 days of composting, losses of N through NH3 volatilization occurred at a greater rate than organic matter degradation, while during the subsequent periods the rate of N loss as NH3 might have been slower than the rate of dry matter loss as CO2. In addition, the N level might also have increased due to the fixation of atmospheric N within the compost heap by the activity of free-living N-fixing microorganisms, which commonly occurs during the later stages of the composting process (Seal et al. 2012). In their co-composting study of pig manure and corn stalks, Guo et al. (2012) reported results that were in agreement with the trends of the present study: a general decrease of total nitrogen during the thermophilic phase followed by an increase thereafter.
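The "concentration effect" invoked above can be made concrete with a simple mass balance: if dry-matter loss outpaces N loss, the N concentration rises even though no nitrogen is added. The sketch below uses hypothetical masses and loss fractions purely to illustrate this reasoning; it is not a reconstruction of the measured data.

```python
# Illustrative N mass balance showing the concentration effect during composting.
dm0, n0_pct = 275.0, 1.8          # hypothetical initial dry mass (kg) and N content (%)
dm_loss, n_loss = 0.40, 0.15      # hypothetical fractional losses of dry matter and of N mass

n_mass0 = dm0 * n0_pct / 100.0            # initial N mass (kg)
dm1 = dm0 * (1.0 - dm_loss)               # remaining dry matter (kg)
n_mass1 = n_mass0 * (1.0 - n_loss)        # remaining N mass (kg)
n1_pct = 100.0 * n_mass1 / dm1            # new N concentration (%)
print(f"N concentration changes from {n0_pct:.2f}% to {n1_pct:.2f}%")
```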
Evolution of C:N ratio

The C:N ratio of the composting material in all the treatments narrowed consistently and significantly (P ≤ 0.01) as composting time advanced (Fig. 6). The initial C:N ratio of the raw material at day 0 was 19:1, which was within the recommended range suitable for composting (12–35) (Epstein 1997). This decreased to nearly 11:1, 9:1, 10:1 and 9:1 at the 100th day of sampling for PC, VC, WC and WVC, respectively. Throughout the composting process the organic matter is decomposed by microorganisms, whereby the organic carbon is oxidized to CO2 and released to the atmosphere, thus lowering the C:N ratio (Jusoh et al. 2013). This is in conformity with the findings of other studies (Kumar et al. 2009; Khwairakpam and Kalamdhad 2011).

Fig. 6 Changes in C:N ratio of composting mixture in different composting methods with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

The C:N ratio for PC was significantly (P ≤ 0.01) higher than for the other methods of composting, which were statistically at par (P > 0.05) with each other (Fig. 6). The variation seemed to arise mainly from the differences in the amount of total organic carbon, as witnessed in the previous discussion, and the same justification given above can also be claimed for the variation in C:N ratio among the different composting methods. Generally, the C:N ratios in the final product of all the treatments were found to be satisfactory, because matured compost material usually has a C:N ratio of 15 or less (Hock et al. 2009). As Gómez-Brandón et al. (2008) pointed out, the C:N ratio may not be a good indicator of compost stability because it can level off before the compost stabilizes. When wastes rich in nitrogen are used as source material for composting, the C:N ratio can be within the values of stable compost even though the material may still be unstable. By the same token, Zmora-Nahum et al. (2005) reported a C:N ratio lower than the cut-off value of 15 very early during the composting of cattle manure, while important stabilization processes were still taking place. Correspondingly, in the present study, three of the four treatments (VC, WVC and WC) achieved a C:N ratio of <15 at the 40th day of sampling and PC at the 60th day, while the degradation of the organic material was still significant until the 60th and 80th days for the respective treatments. As evidenced earlier, statistically stable values of total organic carbon were observed only during the 60th to 100th and 80th to 100th day of sampling for the respective treatments.

Evolution of NH4+, NO3− and NH4+:NO3− ratio

The concentrations of NO3−–N and NH4+–N varied significantly (P ≤ 0.001) among the different composting methods and over the composting period, although all the treatments generally showed a similar pattern of change in both ammonium and nitrate concentrations (Figs. 7, 8). As can be seen from Fig. 7, all the composting methods showed a rise in NH4+–N concentration by the 20th day of sampling, which then declined sharply at the 40th day and decreased slightly from the 40th day until the end of the experiment (100th day).
Fig. 7 Changes in NH4+ concentration of composting mixture in different composting methods with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

Fig. 8 Changes in NO3− concentration of composting mixture in different composting methods with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

The rise in NH4+–N concentration during the first 20 days was likely caused by the mineralization of organic matter (the conversion of organic N to NH4+ via ammonification), thus reflecting active transformation of organic matter and an unstable substrate (Tognetti et al. 2005; Guo et al. 2012). The decrease in NH4+–N during the subsequent sampling periods, in contrast, was probably due to NH3 volatilization (Gao et al. 2010), microbial immobilization into nitrogenous compounds such as amino acids, nucleic acids and proteins, and/or its oxidation to NO3− through nitrification (Guo et al. 2012). An increase in NH4+–N concentration during the initial stage of composting and its reduction afterwards was also reported by Gao et al. (2010). The analysis of variance indicated that PC registered the highest concentration of NH4+–N during all the sampling periods. However, a statistically significant (P ≤ 0.01) variation in NH4+–N among the treatments was recorded only at the 20th and 40th days of sampling (Fig. 7). Turning the piled substrate in WC, and the smaller size and increased surface area of the vermibed in VC and WVC, might have resulted in an increased loss of ammonia, leading to relatively low levels of ammonium at the 20th day of sampling. The compost pile in PC, on the other hand, was neither turned nor mixed, so the loss of N in the form of ammonia might have been relatively reduced; this might have contributed to the higher level of ammonium nitrogen in PC than in the other methods of composting. Similar results were reported by Guo et al. (2012), who noted the highest levels of ammonium nitrogen in treatments with low rather than high aeration rates. Regarding NO3−–N, its level in all the treatments decreased sharply and significantly (P ≤ 0.01) at the 20th day of sampling relative to the initial value. This might be due either to the leaching of nitrate during the periodic watering of the composting mass or to its immobilization by the decomposing microorganisms. During the subsequent composting period (20th to 60th days), however, the NO3−–N level was relatively stable, and during these days the variation in NO3−–N level among all the treatments was insignificant (P > 0.05) (Fig. 8). This was followed by a sharp rise in NO3−–N after the 60th day (for WC, VC and WVC) and the 80th day (for PC), as evidenced on the 80th and 100th days of sampling, respectively. At the end of the process (100th day), PC exhibited a significantly lower value of NO3−–N than the other methods of composting.
It seems that, owing to the better aeration by earthworms (in VC and WVC) and the turning of the piles (in WC), the oxidation of NH4+ to NO3− might have been enhanced in these methods of composting compared with PC. The NH4+–N content of the starting material was clearly higher (1014.28 mg kg−1) than the NO3−–N content (684.5 mg kg−1), giving an NH4+:NO3− ratio of 1.48. In the course of the composting process the ratio rose sharply by the 20th day of sampling for all the treatments. This was followed by a drastic decline at the 40th day and a gradual decline during the subsequent periods of composting (60–100 days) (Fig. 9). PC registered the highest ratio during all the sampling periods; however, a statistically significant variation among the composting treatments was noted only at the 20th and 40th days of sampling (Fig. 9). At the 20th day, the highest (13.57) and lowest (9.42) ratios were recorded for PC and WVC, respectively. At the 100th day of sampling the value had dropped to 0.06, 0.026, 0.016 and 0.02 for PC, VC, WC and WVC, respectively.

Fig. 9 Changes in NH4+:NO3− ratio of composting mixture in different composting methods with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

Critical limit values of <400 mg kg−1 for NH4+–N (Zucconi and de Bertoldi 1987), >300 mg kg−1 for NO3−–N (Forster et al. 1993) and <1 for the NH4+:NO3− ratio (Brewer and Sullivan 2003) have been established as stability/maturity indices for composts of various origins. Except for PC, all the composting treatments satisfied these critical limits for stability/maturity at the 60th day of sampling, whereas PC achieved these values (NO3−–N and NH4+:NO3− ratio) only at the 80th day, implying that PC was later than the other three methods of composting in reaching the index values for maturity. The same explanation given above regarding differences in aeration would also account for the variation in these values among the treatments.

Evolution of total volatile solids (TVS)

The average total volatile solids (TVS) content of the raw waste was 523.4 mg kg−1, and it decomposed steadily throughout the experimental period. The change in TVS with composting time showed the same pattern as the change in total organic carbon, in that it decreased significantly (P ≤ 0.01) with the advancement of composting time. The greatest reduction in TVS was noted during the first 20 days of composting, signifying the fast degradation of the substrate during this active phase (Fig. 10). The decrease in the TVS content of the sample indicates the degradation of the organic matter of the waste during the composting process (Levanon and Pluda 2002). Values of TVS varied significantly (P ≤ 0.01) among the different methods of composting (Fig. 10). Over the course of composting, the highest and lowest values of TVS were recorded for PC and WVC, respectively.
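The stability/maturity screening described in the preceding paragraphs (a C:N ratio below 15, NH4+–N below 400 mg kg−1, NO3−–N above 300 mg kg−1, and an NH4+:NO3− ratio below 1) can be summarized as a simple rule. The sketch below shows one way such a check could be coded; the sample values are hypothetical and are not measurements from this study.

# Screening of a compost sample against the stability/maturity indices cited in the text:
# C:N < 15 (Hock et al. 2009), NH4-N < 400 mg/kg (Zucconi and de Bertoldi 1987),
# NO3-N > 300 mg/kg (Forster et al. 1993) and NH4:NO3 < 1 (Brewer and Sullivan 2003).

def maturity_flags(toc, tn, nh4, no3):
    """toc and tn in % of dry weight; nh4 and no3 in mg per kg dry weight."""
    return {
        "C:N < 15": (toc / tn) < 15,
        "NH4-N < 400 mg/kg": nh4 < 400,
        "NO3-N > 300 mg/kg": no3 > 300,
        "NH4:NO3 < 1": (nh4 / no3) < 1,
    }

# Hypothetical 100th-day sample
print(maturity_flags(toc=18.0, tn=2.0, nh4=35.0, no3=1400.0))
# {'C:N < 15': True, 'NH4-N < 400 mg/kg': True, 'NO3-N > 300 mg/kg': True, 'NH4:NO3 < 1': True}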
Fig. 10 Changes in total volatile solids of composting mixture in different composting methods with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

The analysis of variance revealed that the variation in TVS values for three of the methods of composting (WC, VC and WVC) after the 60th day was insignificant (P > 0.05), indicating the stability of the product at the 60th day. For PC, in contrast, a statistically stable value was achieved only at the 80th day of composting, implying the relatively longer period of time this method took for the product to become stable. This is due to the relatively slow rate of degradation of the organic matter in PC. The important role played by earthworms in reducing TVS through degrading wastes was reported by Yadav et al. (2012).

Phytotoxicity assessment

All the composting treatments followed the same general pattern of change in germination index (GI) over the sampling periods, and the variation in GI values among the treatments was insignificant (P > 0.05; Fig. 11). However, the values varied significantly (P ≤ 0.01) with composting time. The lowest value of this variable was recorded at the 20th day of sampling, which was statistically not different from the starting material (day 0). GI then increased with the advancement of the composting period up to the 60th day, and from the 60th day on it reached a more or less stable value with insignificant variation (Fig. 11). Tiquia and Tam (1998) also reported findings similar to the results of this study.

Fig. 11 Changes in germination index (GI) of composting mixture in different composting methods with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

The low germination index values of the initial sample and of the sample taken at the 20th day of the composting process could be attributed to the presence of phytotoxic compounds in the raw wastes and to their production in the substrate during the active phase of composting. Phytotoxic compounds such as ammonium ions, fatty acids, and low molecular weight phenolic acids are reported to impair seed germination and root elongation (Delgado 2010; Gómez-Brandón et al. 2008). It was also evident from the chemical analysis of the raw material and compost samples of this study that the highest level of ammonium was recorded at the 20th day of sampling, followed by the initial substrate at day 0. The detrimental effect of high levels of ammonium on seed germination and root elongation has been reported in many other studies (Tiquia and Tam 1998; Selim et al. 2012; Guo et al. 2012). The rise in GI by the 60th day might be due to the degradation of the phytotoxic compounds that were present in the initial raw wastes or produced during the active phase of composting as intermediate products of microbial metabolism (Bernal et al. 1998). According to Haq et al. (2014), compost with a GI of more than 80% is considered mature and practically free of phytotoxic substances. In this study, as indicated in Fig. 11, all the treatments had a GI value of >80% at the 60th day of sampling, implying that about 60 days were needed to exceed the threshold of 80% by reducing the phytotoxicity of the compost to levels consistent with safe soil application (Soares et al. 2013).
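The article does not spell out how the germination index was computed; the sketch below assumes the common Zucconi-type definition (relative seed germination multiplied by relative root elongation, expressed as a percentage of a water control), which is consistent with the Zucconi et al. (1981) entry in the paper's reference list. The seed counts and root lengths used here are hypothetical.

# Germination index (GI) under the assumed Zucconi-type definition, together with
# the >80% maturity threshold cited from Haq et al. (2014). Input values are hypothetical.

def germination_index(germ_extract, root_extract, germ_control, root_control):
    relative_germination = germ_extract / germ_control   # fraction of control germination
    relative_root_growth = root_extract / root_control   # fraction of control root length
    return 100.0 * relative_germination * relative_root_growth

gi = germination_index(germ_extract=18, root_extract=42.0,   # seeds germinated, mean root length (mm)
                       germ_control=20, root_control=45.0)
print(f"GI = {gi:.1f}%  ->  phytotoxicity acceptable: {gi > 80}")  # GI = 84.0%  ->  True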
Pathogen inactivation

Total faecal coliforms

Except for VC, all the other methods of composting showed a substantial reduction in the population of faecal coliforms at the 20th day of sampling. These treatments were effective in keeping the population of faecal coliforms in the compost below the maximum allowable limit (<1000 cfu g−1) already at the 20th day. The reduction in the population of faecal coliforms in these methods of composting might be related to the high temperatures generated in the compost pile during the thermophilic phase. Although the first sampling in this study was taken at the 20th day, it is likely that these methods had attained such low populations even much earlier. According to the reports of WHO (2006) and Schönning and Stenström (2004), pathogen inactivation in composting is achieved when temperatures above 50 °C are maintained for at least 1 week. Temperatures exceeding 50 °C were also recorded in the methods involving a thermophilic phase (WC, PC and WVC) in the current study. Some inconsistencies in the reduction pattern of the faecal coliforms were detected in WC during the mesophilic and curing phases, where the population of these pathogens rose and fell at different sampling periods (Fig. 12). This may be due to contamination of the compost mass from an external source during the periodic manual turning of the compost pile.

Fig. 12 Elimination of faecal coliform during co-composting of dried faecal sludge and municipal solid organic wastes with time (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting). Different letters indicate significant differences at P ≤ 0.05

In VC, contrary to the former methods, the number of faecal coliforms increased remarkably at the 20th day of sampling and then declined steadily during the subsequent sampling periods (Fig. 12). The increase in faecal coliforms in VC during the first 20 days could be attributed to the creation of a favourable environment for the multiplication of these pathogens through rehydration and the subsequent availability of easily degradable substrates by dissolution following rehydration (Mupondi et al. 2010). The reports by Schönning and Stenström (2004) and WHO (2006) also indicated that certain types of pathogenic bacteria can increase in numbers when conditions favouring their growth are established in their storage medium/environment. The reduction of the faecal coliform population during the subsequent period of vermicomposting may be attributed to several activities of earthworms, which possibly include: selective predation/consumption (Edward and Bohlen 1996; Kumar and Shweta 2011); mechanical destruction through the action of the gizzard (Edwards and Subler 2011); microbial inhibition through humic and coelomic acids or other enzymes secreted within the digestive tract (Edwards and Subler 2011); stimulation of microbial antagonists (Kumar and Shweta 2011); and, indirectly, stimulation of endemic or other microbial species which outcompete, antagonize, or otherwise destroy pathogens (Edwards and Subler 2011).

Helminth egg count

During the composting process, there was a general reduction in the number of helminth eggs for all the treatments (Fig. 13). The total helminth egg count decreased from 38.89 g−1 TS in the starting material to 8.33 (WC), 19.44 (VC), 14.81 (PC) and 2.78 (WVC) in the final product, as evidenced at the 100th day.
These values correspond to total egg reductions of 78.57, 50, 61.9 and 92.86% for the respective treatments. The extent to which the helminth eggs were eliminated varied significantly with time and among the treatments (P ≤ 0.01). The treatments involving thermophilic composting (WC, PC and WVC) demonstrated a drastic reduction of eggs during the first 20 days of the process, when the active thermophilic phase was prevailing. This amounts to 84.85% (WC), 73.08% (PC) and 74.36% (WVC) of the total reductions recorded in the respective treatments. In the treatment without a thermophilic phase (VC), in contrast, the greatest reduction of helminth eggs was observed during the later stages of the composting process: more than 75% of the total reduction was recorded after the 60th day of the process, while only 23.81% of it was recorded during the first 40 days.

Fig. 13 Helminth eggs removal dynamics during co-composting of faecal sludge and municipal organic solid waste (WC windrow composting, VC vermicomposting, PC pit composting, WVC combined windrow and vermicomposting, LSD least significant difference). Different letters indicate significant differences at P ≤ 0.05

The highest reduction of eggs was achieved by the WVC method, followed by the windrow method of composting (WC), while the sole vermicomposting method (VC) registered the lowest value (Fig. 13). However, only WVC complied with the WHO guideline of <3–8 Ascaris eggs g−1 TS, while all the other treatments had egg counts above the threshold limit. The results of this study clearly demonstrate that the high temperature produced in the thermophilic phase of the composting process is much more effective in sanitizing the pathogenic parasites of faecal sludge than the earthworms are. It has been suggested that high temperature may increase the permeability of the Ascaris egg shell, allowing transport of harmful compounds, as well as increasing the desiccation rate of the eggs (Koné et al. 2010). Even though numerous authors have reported the full elimination of parasitic eggs under thermophilic conditions (Plym-Forshell 1995; Gantzer et al. 2001), this did not occur in the present study, where helminth eggs were still detected despite the thermophilic condition (≥45 °C) being maintained for about 15–19 days. It is likely that, because the lethal temperature was not evenly distributed throughout the piled biomass, the complete destruction of the eggs could not be ensured. The substrates lying on top of the pile, being exposed to the open atmosphere, might have experienced a relatively cooler temperature than the inner material. Strauch (1991) suggested that composting ensures hygienization of the material on condition that all the biomass is exposed to a sufficiently high temperature (55 °C for 14 days). The temperature readings of the present study indicate that, on average, a high temperature (>55 °C) was recorded for only 8 days in the windrows, during which the pile was turned only once, letting it experience the high temperature of >55 °C for only a day after this first turning. This would therefore suggest that, had the piled feedstock been turned more frequently, say every 2 or 3 days, the biomass would have experienced the lethal high temperature uniformly and for a relatively longer period of time, which would have resulted in increased efficiency of helminth egg elimination.
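The egg-reduction percentages and the WHO screening discussed above follow directly from the reported counts. The sketch below reproduces that arithmetic using only the counts quoted in the text, and applies the conservative end (<3 eggs g−1 TS) of the cited <3–8 guideline range.

# Helminth egg reduction relative to the raw mixture and a conservative check against
# the WHO guideline (<3-8 eggs per g TS; the stricter bound of 3 is applied here).

initial_count = 38.89                                                 # eggs per g TS in the raw mixture
final_counts = {"WC": 8.33, "VC": 19.44, "PC": 14.81, "WVC": 2.78}    # 100th-day counts

for method, final in final_counts.items():
    reduction = 100.0 * (initial_count - final) / initial_count
    meets_who = final < 3.0
    print(f"{method}: {reduction:.2f}% reduction, meets WHO limit: {meets_who}")
# Only WVC falls below the conservative limit, in line with the conclusion drawn in the text.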
This justification is, of course, at variance with the report of Koné et al. (2007), who demonstrated a non-significant effect of turning frequency on the inactivation efficiency of helminth eggs. However, it has been explained that the size of the piled feedstock determines the magnitude of heat generated and the duration for which the thermophilic phase is maintained during the composting process. The larger the pile, the greater the heat generated and the longer the thermophilic phase is maintained within it, and thus the less frequently it needs to be turned. Where the pile size is smaller, the thermophilic phase lasts for only a short period of time; therefore, unless the pile is turned frequently, there is no chance for the outer biomass to experience the lethal high temperature that usually develops inside the pile. In the United States of America, compost is regarded as hygienically safe if a temperature >55 °C is maintained in windrows for at least 15 days with a minimum of 5 turnings during the high-temperature period (USEPA 1999).

Conclusions

The biodegradation process of organic wastes is markedly influenced by the method of composting employed. Turned windrows (WC) and composting involving earthworms (VC and WVC) hasten the biodegradation of organic wastes and result in the production of stable compost earlier than the traditional pit method of composting (PC). Even though all the tested methods of composting remarkably reduced the pathogenic organisms (faecal coliforms and helminth eggs), only the WVC method met the standard set by WHO, keeping the concentration of helminth eggs below the threshold level. Thus, elimination of pathogens from composts being a critical consideration, this study would recommend the WVC method for composting organic wastes involving human excreta.

Abbreviations

ANOVA: analysis of variance
CFU: colony forming unit
DFS: dried faecal sludge
GI: germination index
FS: faecal sludge
MSW: municipal solid waste
OC: organic carbon
PC: pit composting
TN: total nitrogen
TVS: total volatile solids
USEPA: United States Environmental Protection Authority
WC: windrow composting
WVC: windrow plus vermicomposting

References

Ayres RM, Mara DD (1996) Analysis of wastewater for use in agriculture—a laboratory manual of parasitological and bacteriological techniques. World Health Organization (WHO), Geneva Beneberu S, Eline B, Harole Y, Zelalem L (2012) Current solid waste management practices for productive reuse in Dire Dawa City, Ethiopia (Draft project Report) Bernal MP, Paredes C, Sanchez-Monedero MA, Cegarra J (1998) Maturity and stability parameters of composts prepared with a wide range of organic wastes. Bioresour Technol 63:91–99 Bernal MP, Alburquerque JA, Moral R (2009) Composting of animal manures and chemical criteria for compost maturity assessment: a review. Bioresour Technol 100:5444–5453 Bjorklund A (2002) The potential of using thermal composting for disinfection of separately collected faeces in Cuernacava, Mexico. Minor Field Studies No. 200. Swedish University of Agricultural Sciences, International Office. ISSN 1402-3237 Bowman DD, Liotta JL, McIntosh M, Lucio-Forster A (2006) Ascaris suum egg inactivation and destruction by the vermicomposting worm, Eisenia foetida. Residuals Biosolids Manag 2:11–18 Bremner JM, Mulvaney CS (1982) Nitrogen—total. In: Page AL, Miller RH, Keeney DR (eds) Methods of soil analysis, Part 2. Chemical and Microbiological Properties. Agronomy Monograph No. 9.
ASA-SSSA, Madison, Wisconsin, USA, pp 595–624 Brewer LJ, Sullivan DM (2003) Maturity and stability evaluation of composted yard trimmings. Compost Sci Util 11(2):96–112 Bundela PS, Gautam SP, Pandey AK, Awasthi MG, Sarsaiya S (2010) Municipal solid waste management in Indian cities—a review. Int J Environ Sci 1(4):591–606 Carr L, Grover R, Smith B, Richard T, Halbach T (1995) Commercial and on-farm production and marketing of animal waste compost products. In: Steele K (ed) Animal waste and the land–water interface. Lewis Publishers, Boca Raton, pp 485–492 Community Development Research (2011) Ethiopia solid waste and landfill (country profile and action plan). Global Methane Initiative http://www.globalmethane.org/. Accessed on August 2012 Cooperband LR, Stone AG, Fryda MR, Ravet JL (2003) Relating compost measures of stability and maturity to plant growth. Compost Sci Util 11(2):113–124 Delgado M (2010) Phytotoxicity of uncomposted and composted poultry manure, African. J Plant Sci 4:154–162 Dias BO, Silva CA, Higashikawa FS, Roig A, Sánchez-Monedero MA (2010) Use of biochar as bulking agent for the composting of poultry manure: effect on organic matter degradation and humification. Bioresour Technol 101:1239–1246 Domínguez J, Edwards CA (2010) Relationships between composting and vermicomposting: relative values of the products. In: Edwards CA, Arancon NQ, Sherman RL (eds) Vermiculture technology: earthworms, organic waste and environmental management. CRC Press, Boca Raton, pp 1–14 Dominguez J, Edwards CA, Subler S (1997) Comparison of vermicomposting and composting. Bio-Cycle 38(4):57–59 Dumitrescu L, Manciulea I, Sauciuc A, Zaha C (2009) Obtaining fertilizer compost by composting vegetable waste, sewage sludge and sawdust. In: Bulletin of the Transilvania, vol 2, no 51. University of Braşov, pp 117–122 Eastman BR, Kane PN, Edwards CA, Trytek L, Gunadi B, Stermer AL, Mobley JR (2001) The effectiveness of vermiculture in human pathogen reduction for USEPA biosolids stabilization. Compost Sci Util 9(1):38–49 Edward CA, Bohlen PJ (1996) Biology and ecology of earthworms. Chapman and Hall, London Edwards CA, Subler S (2011) Human pathogen reduction during vermicomposting. In: Edwards CA, Arancon NQ, Sherman R (eds) Vermiculture technology Florida. CRC Press Taylor and Francis Group, Florida, pp 249–261 Epstein E (1997) The science of composting. Technomic Publishing Company Inc, Lancaster Finstein MS, Miller FC, Strom PF (1986) Monitoring and evaluating composting process performance. J Water Pollut Control Fed 58(4):272–278 Forster JC, Zech W, Wiirdinger E (1993) Comparison of chemical and microbiological methods for the characterization of the maturity of composts from contrasting sources. Biol Fertil Soils 16:93–99 Frederickson J, Butt KR, Morris RM, Daniels C (1997) Combining vermiculture with traditional green waste composting systems. Soil Biol Biochem 29(3/4):725–730 Frederickson J, Howell G, Hobson AM (2007) Effect of pre-composting and vermicomposting on compost characteristics. Eur J Soil Biol 43:S320–S326 Gallizzi K (2003) Co-composting reduces helminth eggs in faecal sludge: a field study in Kumasi, Ghana. SANDEC, Dübendorf, p 45 Gantzer C, Gaspard P, Galvez L, Huyard A, Dumouthier N, Schwartzbrod J (2001) Monitoring of bacteria and parasitological contamination during various treatment of sludge. 
Water Resour 35(16):3763–3770 Gao M, Liang F, Yub A, Li B, Yang L (2010) Evaluation of stability and maturity during forced-aeration composting of chicken manure and sawdust at different C/N ratios. Chemosphere 78:614–619 Garcia C, Hernandez T, Costa F (1993) Evaluation of the organic matter composition of raw and composted municipal wastes. Soil Sci Plant Nutr 39:99–108 Gómez-Brandón M, Lazcano C, Domínguez J (2008) The evaluation of stability and maturity during the composting of cattle manure. Chemosphere 70:436–444 Guo R, Li G, Jiang T, Schuchardt F, Chen T, Zhao Y, Shen Y (2012) Effect of aeration rate, C/N ratio and moisture content on the stability and maturity of compost. Bioresour Technol 112:171–178 Haq T, Ali TA, Begum R (2014) Seed germination bioassay using maize seeds for phytoxicity evaluation of different composted materials. Pak J Bot 46(2):539–542 Hill GB, Lalander C, Baldwin SA (2013) The effectiveness and safety of vermi-versus conventional composting of human feces with Ascaris suum ova as model helminthic parasites. J Sustain Dev 6(4):1–10 Hock LS, Baharuddin AS, Ahmed MN, Md. Shah UK, Abdul Rahaman NA, Abd-Aziz S, Hassan MA, Shirai Y (2009) Physicochemical changes in windrow co-composting process oil palm mesocarpfiber and palm oil effluent anaerobic sludge. Aust J Basic Appl Sci 3(3):2809–2819 http://www.cfe.cornell.edu/compost/invertebrates.html (1 of 4) [1/16/2001 8:49:10 AM]. Cornell composting Science and Engineering Huang GF, Fang M, Wu QT, Zhou LX, Liao XD, Wong JWC (2001) Co-composting of pig manure with leaves. Environ Technol 22:1203–1212 Iqbal MK, Khan RA, Nadeem A, Hussnain A (2012) Comparative study of different techniques of composting and their stability evaluation in municipal solid waste. J Chem Soc Pak 34(2):273–282 Jusoh ML, Manaf LA, Abdul Latif P (2013) Composting of rice straw with effective microorganisms (EM) and its influence on compost quality. Iran J Environ Health Sci Eng 10:17 Khwairakpam M, Kalamdhad AS (2011) Vermicomposting of vegetable wastes amended with cattle manure. Res J Chem Sci 1(8):49–56 Koné D, Cofie O, Zurbru C, Gallizzi K, Moser D, Drescher S, Strauss M (2007) Helminth eggs inactivation efficiency by faecal sludge dewatering and co-composting in tropical climates. Water Res 14(9):4397–4402 Koné D, Cofie O, Nelson K (2010) Low-cost options for pathogen reduction and nutrient recovery from faecal sludge. In: Drechsel P, Scott CA, Raschid-Sally L, Redwood M, Bahri A (eds) Wastewater irrigation and health: assessing and mitigating risk in low-income countries. International Water Management Institute (IWMI), Earthscan, International Development Research Centre (IDRC), Colombo, pp 171-188. Kumar R, Shweta (2011) Removal of pathogens during vermi-stabilization. J Environ Sci Technol 4(6):621–629 Kumar PR, Jayaram A, Somashekar RK (2009) Assessment of the performance of different compost models to manage urban household organic solid wastes. Clean Technol Environ Policy 11:473–484 Lazcano C, Gómez-Brandón M, Domínguez J (2008) Comparison of the effectiveness of composting and vermicomposting for the biological stabilization of cattle manure. Chemosphere 72:1013–1019 Levanon D, Pluda D (2002) Chemical, physical and biological criteria for maturity in composts for organic farming. Compost Sci Util 10(4):339–346 Lung AJ, Lin CM, Kim JM, Marshall MR, Nordstedt R, Thompson NP, Wei CI (2001) Destruction of Escherichia coli O157:H7 and Salmonella enteritidis in cow manure composting. 
J Food Prot 64:1309–1314 Maso MA, Blasi AB (2008) Evaluation of composting as a strategy for managing organic wastes from a municipal market in Nicaragua. Bioresou Technol 99:5120–5124 Monroy F, Aira M, Domínguez J (2008) Changes in density of nematodes, protozoa and total coliforms after transit through the gut of four epigeic earthworms (Oligochaeta). Appl Soil Ecol 39:127–132 Mupondi LT, Mnkeni PNS, Brutsch MO (2006) The effects of goat manure, sewage sludge and effective microorganisms on the composting of pine bark. Compost Sci Util 14:201–210 Mupondi LT, Mnkeni PN, Muchaonyerwa P (2010) Effectiveness of combined thermophilic composting and vermicomposting on biodegradation and sanitization of mixtures of dairy manure and waste paper. Afr J Biotechnol 9(30):4754–4763 Ndegwa PM, Thompson SA (2001) Integrating composting and vermicomposting in the treatment and bioconversion of solids. Bioresour Technol 76:107–112 Niwagaba C, Nalubega M, Vinnerås B, Sundberg C, Jonsson H (2009) Benchscale composting of source-separated human faeces for sanitation. Waste Manag 29:585–589 Okalebo JR, Guthua KW, Woomer PJ (2002) Laboratory methods of soil and plant analysis—a working manual. TSBF-CIAT and SACRED Africa, Nairobi Padmavathiamma PK, Li LY, Kumari UR (2008) An experimental study of vermin biowaste composting for agricultural soil improvement. Bioresour Technol 99:1672–1681 Pisa C, Wuta M (2013) Evaluation of composting performance of mixtures of chicken blood and maize stover in Harare, Zimbabwe. Int J Recycl Org Waste Agric 2(5):1–11 Plym-Forshell L (1995) Survival of Salmonellas and Ascarissuum eggs in a thermophilic biogas plant. Acta Vet Scandinavica 36:79–85 Rao KJ (2007) Composting of municipal and agricultural wastes. In: Proceedings of the international conference on sustainable solid waste management, Chennai, India, 5–7 September 2007 Rodriguez-Canche LG, Cardoso-Vigueros L, Maldonado-Montiel T, Martinez Sanmiguel M (2010) Pathogen reduction in septic tank sludge through vemicomposting using Eisenia fetida. Bioresour Technol 101:3548–3553 Schönning C, Stenström TA (2004) Guidelines for the safe use of urine and faeces in ecological sanitation. Report 2004-1. Ecosanres, SEI. Sweden. www.ecosanres.org Schwartzbrod J (2003) Quantification and viability determination for helminth eggs in sludge (modified EPA method 1999). University of Nancy, Nancy Seal A, Bera R, Chatterjee AK, Dolui AK (2012) Evaluation of a new composting method in terms of its biodegradation pathway and assessment of compost quality, maturity and stability. Arch Agron Soil Sci 58(9):995–1012 Selim SM, Zayed MS, Atta MH (2012) Evaluation of phytotoxicity of compost during composting process. Nat Sci 10(2):69–77 Smith DC, Hughes JC (2002) Changes in chemical properties and temperature during the degradation of organic wastes subjected to simple composting protocols suitable for small-scale farming, and quality of the mature compost. S Afr J Plant Soil 19:53–60 Soares MR, Matsinhe C, Belo S, Quina MJ, Quinta-Ferreira R (2013) Phytotoxicity evolution of biowastes undergoing aerobic decomposition. J Waste Manag. doi:10.1155/2013/479126 Soumaré M, Demeyer A, Tack FMG, Verloo MG (2002) Chemical characteristics of Malian and Belgian solid waste composts. Bioresour Technol 81:97–101 Strauch D (1991) Survival of pathogenic micro-organisms and parasites in excreta, manure and sewage sludge. 
Revue Sci Tech (Int Off Epizoot) 10:813–846 Sundberg C, Smars S, Jonsson H (2004) Low pH as an inhibiting factor in the transition of mesophilic to thermophilic phase in composting. Bioresour Technol 95:145–150 Tiquia SM, Tam NFY (1998) Elimination of phytotoxicity during co-composting of spent pig-manure sawdust litter and pig sludge. Bioresour Technol 65:43–49 Tiquia SM, Richard TL, Honeyman MS (2002) Carbon, nutrient, and mass loss during composting. Nutr Cycl Agroecosyst 62:15–24 Tognetti C, Loas F, Mazzarino MJ, Hernandez MT (2005) Composting vs. vermicomposting: a comparison of end product quality. Compost Sci Util 13(1):6–13 Tognetti C, Mazzarino MJ, Laos F (2007) Improving the quality of municipal organic waste compost. Bioresour Technol 98:1067–1076 USEPA (United States Environmental Protection Authority) Pathogen Equivalency Committee (PEC) (1999) Control of pathogens and vector attraction in sewage sludge. In: USEPA Environmental Regulations and Technology, Office of Research and Development EPA/625/R-92/013, Washington, DC, p 177 Vinnerås B (2007) Comparison of composting, storage and urea treatment for sanitising of faecal matter and manure. Bioresour Technol 98:3317–3321 Vuorinen AH, Saharinen MH (1997) Evolution of microbiological and chemical parameters during manure and straw co-composting in a drum composting system. Agric Ecosyst and Environ 66:19–29 Walkley A, Black IA (1934) An examination of the Degtjareff method for determining soil organic matter and proposed modification of the titration method. Soil Sci Soc Am J 37:29–34 WHO (2006) Guidelines for the safe use of wastewater, excreta and greywater, vol 4. Excreta and grey water use in agriculture. ISBN: 92 4 154685 9 Witter E, Lopez-Real J (1988) Nitrogen losses during the composting of sewage sludge, and the effectiveness of clay soil, Zeolite and compost in adsorbing volatilised ammonia. Biol Wastes 23:279–294 Wu L, Ma LQ, Martinez GA (2000) Comparison of methods for evaluating stability and maturity of bio-solids compost. J Environ Qual 29(2):424–429 Xanthoulis D, Strauss M (1991) Reuse of wastewater in agriculture at Ouarzazate, Morocco (Project UNDP/FAO/WHO MOR 86/018). Unpublished mission reports Yadav KD, Tare V, Ahammed MM (2010) Vermicomposting of source separated human faeces for nutrient recycling. Waste Manag 30:50–56 Yadav KD, Tare V, Ahammed MM (2012) Integrated composting–vermicomposting process for stabilization of human faecal slurry. Ecol Eng 47:24–29 Zmora-Nahum S, Markovitch O, Tarchitzky J, Chen Y (2005) Dissolved organic carbon (DOC) as a parameter of compost maturity. Soil Biol Biochem 37:2109–2116 Zucconi F, de Bertoldi M (1987) Compost specification for the production and characterization of compost from municipal solid waste. In: de Bertoldi M, Ferranti MP, Hermite PL, Zucconi F (eds) Compost: production, quality and use. Elsevier Applied Science Publishers, Barking, pp 30–50 Zucconi F, Forte M, Monaco A, De Bertoldi M (1981) Biological evaluation of compost maturity. Biocycle 22(4):27–29 TM conceived and carried out the study; performed the analyses and drafted the manuscript. HG participated in the design of the study; KK, KW, BS and HY participated in the design of the study supervised the analysis process and helped draft the manuscript. All authors read and approved the final manuscript. This study was financially supported by the Ministry of Education of the Federal Democratic Republic of Ethiopia. 
The authors wish to thank the Sanitation and Beautification Agency (SBA) and Water Supply and Sewerage Authority (WSSA) of Dire Dawa City Administration for their cooperation in collecting and providing the compostable material.

The authors declare that they have no competing interests.

School of Natural Resources Management and Environmental Sciences, Haramaya University, P.O. Box 138, Dire Dawa, Ethiopia: Tesfu Mengistu, Heluf Gebrekidan, Kibebew Kibret, Beneberu Shimelis & Hiranmai Yadav

School of Plant Sciences, Haramaya University, P.O. Box 138, Dire Dawa, Ethiopia: Kebede Woldetsadik

Correspondence to Tesfu Mengistu.

Additional file 1. Table S1 Mean daily temperature values of different composting methods

Mengistu, T., Gebrekidan, H., Kibret, K. et al. Comparative effectiveness of different composting methods on the stabilization, maturation and sanitization of municipal organic solid wastes and dried faecal sludge mixtures. Environ Syst Res 6, 5 (2018). https://doi.org/10.1186/s40068-017-0079-4

Keywords: Faecal coliform, Helminth egg
Sapir Ron-Doitch, Marina Frušić-Zlotkin, Yoram Soroka, Danielle Duanis-Assaf, Dalit Amar, Ron Kohen, and Doron Steinberg. 2021. "eDNA-Mediated Cutaneous Protection Against UVB Damage Conferred by Staphylococcal Epidermal Colonization." Microorganisms, 9, 4. Abstract The human skin is a lush microbial habitat which is occupied by a wide array of microorganisms. Among the most common inhabitants are Staphylococcus spp., namely Staphylococcus epidermidis and, in ≈20% of healthy individuals, Staphylococcus aureus. Both bacteria have been associated with cutaneous maladies, where they mostly arrange in a biofilm, thus achieving improved surface adhesion and stability. Moreover, our skin is constantly exposed to numerous oxidative environmental stressors, such as UV-irradiation. Thus, skin cells are equipped with an important antioxidant defense mechanism, the Nrf2-Keap1 pathway. In this work, we aimed to explore the morphology of S. aureus and S. epidermidis as they adhered to healthy human skin and characterize their matrix composition. Furthermore, we hypothesized that the localization of both types of bacteria on a healthy skin surface may provide protective effects against oxidative stressors, such as UV-irradiation. Our results indicate for the first time that S. aureus and S. epidermidis assume a biofilm-like morphology as they adhere to ex vivo healthy human skin and that the cultures' extracellular matrix (ECM) is composed of extracellular polysaccharides (EPS) and extracellular DNA (eDNA). Both bacterial cultures, as well as isolated S. aureus biofilm eDNA, conferred cutaneous protection against UVB-induced apoptosis. This work emphasized the importance of skin microbiota representatives in the maintenance of a healthy cutaneous redox balance by activating the skin's natural defense mechanism. Amichai Perlman, Rachel Goldstein, Lotan Choshen Cohen, Bruria Hirsh-Raccah, David Hakimian, Ilan Matok, Yosef Kalish, Daniel E. Singer, and Mordechai Muszkat. 2021. "Effect of Enzyme-Inducing Antiseizure Medications on the Risk of Sub-Therapeutic Concentrations of Direct Oral Anticoagulants: A Retrospective Cohort Study." CNS Drugs, 35, 3, Pp. 305–316. Abstract Background: Stroke and thromboembolic events occurring among patients taking direct oral anticoagulants (DOACs) have been associated with low concentrations of DOACs. Enzyme-inducing antiseizure medications (EI-ASMs) are associated with enhanced cytochrome-P450-mediated metabolism and enhanced P-glycoprotein-mediated transport. Objective: The aim of this study was to evaluate the effect of concomitant EI-ASM use on DOAC peak concentrations in patients treated in clinical care. Methods: We performed a retrospective cohort study of patients treated with DOACs for atrial fibrillation and venous thromboembolic disease in an academic general hospital. In total, 307 patients treated with DOACs between August 2015 and January 2020 were reviewed. Clinical characteristics and peak DOAC plasma concentrations of patients co-treated with an EI-ASM were compared with those of patients not treated with an EI-ASM. An apixaban dose score (ADS) was defined to account for apixaban dosage and the number of apixaban dose-reduction criteria. Results: In total, 177 peak DOAC plasma concentrations (including apixaban, rivaroxaban, and dabigatran) from 131 patients were measured, including 24 patients co-treated with an EI-ASM and 107 controls not treated with an EI-ASM. 
The proportion of patients with DOAC concentrations below the expected range was significantly higher among EI-ASM users than among patients not taking an EI-ASM (37.5 vs. 9.3%, respectively; p = 0.0004; odds ratio 5.82; 95% confidence interval [CI] 2.03–16.66). Most of these patients were treated with apixaban (85%); however, sensitivity analysis results were also significant (p = 0.031) for patients with non-apixaban DOACs. In patients co-treated with apixaban and an EI-ASM, median apixaban peak concentration was 106 ng/mL (interquartile range [IQR] 71–181) compared with 150 ng/mL (IQR 94–222) in controls (p = 0.019). In multivariable analysis, EI-ASM use was associated with 6.26-fold increased odds for apixaban concentration below the expected range (95% CI 2.19–17.90; p = 0.001). Apixaban concentrations were significantly associated with EI-ASM use, moderate enzyme inhibitor use, and ADS. Conclusions: Concurrent EI-ASM and DOAC use presents a possible risk for DOAC concentrations below the expected range. The clinical significance of the interaction is currently unclear. Nino Tetro, Roua Hamed, Erez Berman, and Sara Eyal. 2021. "Effects of antiseizure medications on placental cells: Focus on heterodimeric placental carriers." Epilepsy research, 174, Pp. 106664. Abstract OBJECTIVE: Appropriate placental nutrient transfer is essential for optimal fetal development. We have previously shown that antiseizure medications (ASMs) can alter the expression of placental carriers for folate and thyroid hormones. Here we extended our analysis to heterodimeric carriers that mediate the placental uptake of amino acids and antioxidant precursors. We focused on the L-type amino acid transporter (LAT)2/SLC7A8, the cystine/glutamate antiporter xCT/SLC7A11, and their chaperone 4F2hc/SLC3A2. METHODS: BeWo cells were exposed for two or five days to therapeutic concentrations of valproate, levetiracetam, carbamazepine, lamotrigine, or lacosamide. Transcript levels were measured by quantitative PCR. Levetiracetam effects on placental carriers were further explored using a tailored gene array. RESULTS: At five days, 30 $μ$g/mL levetiracetam (high therapeutic concentrations) significantly reduced the expression of all studied genes (p < 0.05). Carbamazepine treatment was associated with lower SLC7A8 (LAT2) expression (p < 0.05), whereas valproate increased the transcript levels of this transporter by up to 2.0-fold (p < 0.01). Some of these effects were already observed after two incubation days. Lamotrigine did not alter gene expression, and lacosamide slightly elevated SLC3A2 levels (p < 0.05). The array analysis confirmed the trends observed for levetiracetam and identified additional affected genes. SIGNIFICANCE: Altered expression of placental heterodimeric transporters may represent a mechanism by which ASM affect fetal development. The placental effects are differential, with valproate, carbamazepine and levetiracetam as the more active compounds. The concentration-dependence of those ASM effects are in line with established dose-dependent teratogenicity implying that ASM doses should be adjusted during pregnancy with caution. Bareket Daniel, Ariela Livne, Guy Cohen, Shirin Kahremany, and Shlomo Sasson. 2021. "Endothelial Cell-Derived Triosephosphate Isomerase Attenuates Insulin Secretion from Pancreatic Beta Cells of Male Rats." Endocrinology (United States), 162, 3. 
Abstract Insulin secretion from pancreatic beta cells is tightly regulated by glucose and paracrine signals within the microenvironment of islets of Langerhans. Extracellular matrix from islet microcapillary endothelial cells (IMEC) affect beta-cell spreading and amplify insulin secretion. This study was aimed at investigating the hypothesis that contact-independent paracrine signals generated from IMEC may also modulate beta-cell insulin secretory functions. For this purpose, conditioned medium (CMp) preparations were prepared from primary cultures of rat IMEC and were used to simulate contact-independent beta cell-endothelial cell communication. Glucose-stimulated insulin secretion (GSIS) assays were then performed on freshly isolated rat islets and the INS-1E insulinoma cell line, followed by fractionation of the CMp, mass spectroscopic identification of the factor, and characterization of the mechanism of action. The IMEC-derived CMp markedly attenuated first-and second-phase GSIS in a time-and dose-dependent manner without altering cellular insulin content and cell viability. Size exclusion fractionation, chromatographic and mass-spectroscopic analyses of the CMp identified the attenuating factor as the enzyme triosephosphate isomerase (TPI). An antibody against TPI abrogated the attenuating activity of the CMp while recombinant human TPI (hTPI) attenuated GSIS from beta cells. This effect was reversed in the presence of tolbutamide in the GSIS assay. In silico docking simulation identified regions on the TPI dimer that were important for potential interactions with the extracellular epitopes of the sulfonylurea receptor in the complex. This study supports the hypothesis that an effective paracrine interaction exists between IMEC and beta cells and modulates glucose-induced insulin secretion via TPI-sulfonylurea receptor-KATP channel (SUR1-Kir6.2) complex attenuating interactions. Yoel Goldstein, Katerina Tischenko, Yifat Brill-Karniely, and Ofra Benny. 2021. "Enhanced Biomechanically Mediated "Phagocytosis" in Detached Tumor Cells." Biomedicines, 9, 8. Abstract Uptake of particles by cells involves various natural mechanisms that are essential for their biological functions. The same mechanisms are used in the engulfment of synthetic colloidal drug carriers, while the extent of the uptake affects the biological performance and selectivity. Thus far, little is known regarding the effect of external biomechanical stimuli on the capacity of the cells to uptake nano and micro carriers. This is relevant for anchorage-dependent cells that have detached from surfaces or for cells that travel in the body such as tumor cells, immune cells and various circulating stem cells. In this study, we hypothesize that cellular deformability is a crucial physical effector for the successful execution of the phagocytosis-like uptake in cancer cells. To test this assumption, we develop a well-controlled tunable method to compare the uptake of inert particles by cancer cells in adherent and non-adherent conditions. We introduce a self-designed 3D-printed apparatus, which enables constant stirring while facilitating a floating environment for cell incubation. We reveal a mechanically mediated phagocytosis-like behavior in various cancer cells, that was dramatically enhance in the detached cell state. Our findings emphasize the importance of including proper biomechanical cues to reliably mimic certain physiological scenarios. 
Beyond that, we offer a cost-effective accessible research tool to study mixed cultures for both adherent and non-adherent cells. Reem Odi, Roberto William Invernizzi, Tamar Gallily, Meir Bialer, and Emilio Perucca. 2021. "Fenfluramine repurposing from weight loss to epilepsy: What we do and do not know." Pharmacology & therapeutics, 226, Pp. 107866. Abstract In 2020, racemic-fenfluramine was approved in the U.S. and Europe for the treatment of seizures associated with Dravet syndrome, through a restricted/controlled access program aimed at minimizing safety risks. Fenfluramine had been used extensively in the past as an appetite suppressant, but it was withdrawn from the market in 1997 when it was found to cause cardiac valvulopathy. Available evidence indicates that appetite suppression and cardiac valvulopathy are mediated by different serotonergic mechanisms. In particular, appetite suppression can be ascribed mainly to the enantiomers d-fenfluramine and d-norfenfluramine, the primary metabolite of d-fenfluramine, whereas cardiac valvulopathy can be ascribed mainly to d-norfenfluramine. Because of early observations of markedly improved seizure control in some forms of epilepsy, fenfluramine remained available in Belgium through a Royal Decree after 1997 for use in a clinical trial in patients with Dravet syndrome at average dosages lower than those generally prescribed for appetite suppression. More recently, double-blind placebo-controlled trials established its efficacy in the treatment of convulsive seizures associated with Dravet syndrome and of drop seizures associated with Lennox-Gastaut syndrome, at doses up to 0.7 mg/kg/day (maximum 26 mg/day). Although no cardiovascular toxicity has been associated with the use of fenfluramine in epilepsy, the number of patients exposed to date has been limited and only few patients had duration of exposure longer than 3 years. This article analyzes available evidence on the mechanisms involved in fenfluramine-induced appetite suppression, antiseizure effects and cardiovascular toxicity. Despite evidence that stimulation of 5-HT(2B) receptors (the main mechanism leading to cardiac valvulopathy) is not required for antiseizure activity, there are many critical gaps in understanding fenfluramine's properties which are relevant to its use in epilepsy. Particular emphasis is placed on the remarkable lack of publicly accessible information about the comparative activity of the individual enantiomers of fenfluramine and norfenfluramine in experimental models of seizures and epilepsy, and on receptors systems considered to be involved in antiseizure effects. Preliminary data suggest that l-fenfluramine retains prominent antiseizure effects in a genetic zebrafish model of Dravet syndrome. If these findings are confirmed and extended to other seizure/epilepsy models, there would be an incentive for a chiral switch from racemic-fenfluramine to l-fenfluramine, which could minimize the risk of cardiovascular toxicity and reduce the incidence of adverse effects such as loss of appetite and weight loss. S Rakedzon, A Neuberger, AJ Domb, N Petersiel, and E Schwartz. 2021. "From hydroxychloroquine to ivermectin: what are the anti-viral properties of anti-parasitic drugs to combat SARS-CoV-2?" Journal of travel medicine, 28, 2. Abstract BACKGROUND: Nearly a year into the COVID-19 pandemic, we still lack effective anti-SARS-CoV-2 drugs with substantial impact on mortality rates except for dexamethasone. 
As the search for effective antiviral agents continues, we aimed to review data on the potential of repurposing antiparasitic drugs against viruses in general, with an emphasis on coronaviruses. METHODS: We performed a review by screening in vitro and in vivo studies that assessed the antiviral activity of several antiparasitic agents: chloroquine, hydroxychloroquine (HCQ), mefloquine, artemisinins, ivermectin, nitazoxanide (NTZ), niclosamide, atovaquone and albendazole. RESULTS: For HCQ and chloroquine we found ample in vitro evidence of antiviral activity. Cohort studies that assessed the use of HCQ for COVID-19 reported conflicting results, but randomized controlled trials (RCTs) demonstrated no effect on mortality rates and no substantial clinical benefits of HCQ used either for prevention or treatment of COVID-19. We found two clinical studies of artemisinins and two studies of NTZ for treatment of viruses other than COVID-19, all of which showed mixed results. Ivermectin was evaluated in one RCT and few observational studies, demonstrating conflicting results. As the level of evidence of these data is low, the efficacy of ivermectin against COVID-19 remains to be proven. For chloroquine, HCQ, mefloquine, artemisinins, ivermectin, NTZ and niclosamide, we found in vitro studies showing some effects against a wide array of viruses. We found no relevant studies for atovaquone and albendazole. CONCLUSIONS: As the search for an effective drug active against SARS-CoV-2 continues, we argue that pre-clinical research of possible antiviral effects of compounds that could have antiviral activity should be conducted. Clinical studies should be conducted when sufficient in vitro evidence exists, and drugs should be introduced into widespread clinical use only after being rigorously tested in RCTs. Such a search may prove beneficial in this pandemic or in outbreaks yet to come. Moriya Weitz, Alaa Khayat, and Rami Yaka. 2021. "GABAergic projections to the ventral tegmental area govern cocaine-conditioned reward." Addiction Biology, 26, 4. Abstract Elevated dopamine (DA) levels in the reward system underlie various drug-related behaviors, including addiction. As a major DA source in the reward system, the ventral tegmental area (VTA) is highly regulated by GABAergic inputs projected from different brain regions. It was previously shown that cocaine exposure reduces GABAA-mediated inhibitory postsynaptic currents (IPSCs) in VTA DA neurons; however, the specific GABAergic input underlying this inhibitory effect remains unknown. Here, using optogenetics, we separately activate and characterize different GABAergic afferents innervating the VTA, focusing on the rostromedial tegmental nucleus (RMTg) and the nucleus accumbens (NAc). GABAA-mediated IPSCs were recorded from VTA DA neurons, and the effect of DA-induced inhibition was measured in an afferent-specific manner. In addition, to examine the effect of enhanced GABAergic tone on the rewarding properties of cocaine, we exogenously activated the different GABAergic inputs during the acquisition phase of cocaine conditioned place preference (CPP). We found that acute cocaine exposure strongly attenuates GABAA-mediated IPSCs in VTA DA neurons from both inhibitory sources. Furthermore, exogenous light activation of both RMTg and NAc afferents in the VTA during the acquisition of cocaine-CPP significantly reduced the rewarding properties of cocaine. 
This behavioral observation was correlated with the reduction in the neuronal activity of VTA DA neurons as measured by the expression of c-fos. Together, these results emphasize the critical role of these GABAergic inputs to the VTA in modulating and potentially interrupting cocaine reward. Paweł Paśko, Agnieszka Galanty, Małgorzata Tyszka-Czochara, Paweł Żmudzki, Paweł Zagrodzki, Joanna Gdula-Argasińska, Ewelina Prochownik, and Shela Gorinstein. 2021. "Health Promoting vs Anti-nutritive Aspects of Kohlrabi Sprouts, a Promising Candidate for Novel Functional Food." Plant Foods for Human Nutrition, 76, 1, Pp. 76–82. Abstract Kohlrabi sprouts are just gaining popularity as the new example of functional food. The study was focused on the influence of germination time and light conditions on glucosinolates, phenolic acids, flavonoids, and fatty acids content in kohlrabi sprouts, in comparison to the bulbs. The effect of kohlrabi products on SW480, HepG2 and BJ cells was also determined. The length of sprouting time and light availability significantly influenced the concentrations of the phenolic compounds. Significant differences in progoitrin concentrations were observed between the sprouts harvested in light and in the darkness, with significantly lower content for darkness conditions. Erucic acid was the dominant fatty acid found in sprouts (14.5–34.5%). Sprouts and bulbs were less toxic to normal than to cancer cells. The sprouts stimulated necrosis (56.4%) more than apoptosis (34.1%) in SW480 cells, while the latter effect was predominant for the bulbs. Both sprouts and bulbs caused rather necrosis (45.5 and 63.9%) than apoptosis (32 and 32.5%) in HepG2 cells. Graphical Abstract: [Figure not available: see fulltext.] Raviv Dharan, Asaf Shemesh, Abigail Millgram, Ran Zalk, Gabriel A. Frank, Yael Levi-Kalisman, Israel Ringel, and Uri Raviv. 2021. "Hierarchical Assembly Pathways of Spermine-Induced Tubulin Conical-Spiral Architectures." ACS Nano, 15, 5, Pp. 8836–8847. Abstract Tubulin, an essential cytoskeletal protein, assembles into various morphologies by interacting with an array of cellular factors. One of these factors is the endogenous polyamine spermine, which may promote and stabilize tubulin assemblies. Nevertheless, the assembled structures and their formation pathways are poorly known. Here we show that spermine induced the in vitro assembly of tubulin into several hierarchical architectures based on a tubulin conical-spiral subunit. Using solution X-ray scattering and cryo-TEM, we found that with progressive increase of spermine concentration tubulin dimers assembled into conical-frustum-spirals of increasing length, containing up to three helical turns. The subunits with three helical turns were then assembled into tubules through base-to-top packing and formed antiparallel bundles of tubulin conical-spiral tubules in a distorted hexagonal symmetry. Further increase of the spermine concentration led to inverted tubulin tubules assembled in hexagonal bundles. Time-resolved experiments revealed that tubulin assemblies formed at higher spermine concentrations assembled from intermediates, similar to those formed at low spermine concentrations. These results are distinct from the classical transition between twisted ribbons, helical, and tubular assemblies, and provide insight into the versatile morphologies that tubulin can form. Furthermore, they may contribute to our understanding of the interactions that control the composition and construction of protein-based biomaterials. 
Carmil Azran, Nirvana Hanhan-Shamshoum, Tujan Irshied, Tomer Ben-Shushan, Dror Dicker, Arik Dahan, and Ilan Matok. 2021. "Hypothyroidism and levothyroxine therapy following bariatric surgery: a systematic review, meta-analysis, network meta-analysis, and meta-regression." Surgery for Obesity and Related Diseases, 17, 6, Pp. 1206–1217. Abstract Background: Many health benefits of bariatric surgery are known and well-studied, but there is scarce data on the benefits of bariatric surgery on the thyroid function. Objective: We aimed to make a meta-analysis regarding the impact of bariatric surgery on thyroid-stimulating hormone (TSH) levels, levothyroxine dose, and the status of subclinical hypothyroidism. Setting: Systematic review and meta-analysis. Methods: PubMed, EMBASE, and Cochrane Library were searched up to December 2020 for relevant clinical studies. Random-effects model was used to pool results. Network meta-analysis was performed, incorporating direct and indirect comparisons among different types of bariatric surgery. Meta-regression analysis was performed to evaluate the impact of moderator variables on TSH levels and required levothyroxine dose after surgery. We followed the PRISMA guidelines for data selection and extraction. PROSPERO registry number: CRD42018105739. Results: A total of 28 studies involving 1284 patients were included. There was a statistically significant decrease in TSH levels after bariatric surgery (mean difference = −1.66 mU/L, 95%CI [−2.29, −1.03], P <.0001). In meta-regression analysis, we found that the following moderator variables: length of follow-up, mean age, baseline TSH, and preoperative thyroid function, could explain 1%, 43%, 68%, and 88% of the between-study variance, respectively. Furthermore, subclinical hypothyroidism was completely resolved in 87% of patients following bariatric surgery. In addition, there was a statistically significant decrease of levothyroxine dose in frank hypothyroid patients following bariatric surgery (mean difference = −13.20 mcg/d, 95%CI [−19.69, −6.71]). In network meta-analysis, we found that discontinuing or decreasing levothyroxine dose was significant following Roux-en-Y gastric bypass, 1 anastomosis gastric bypass, and sleeve gastrectomy, (OR = 31.02, 95%CI [10.34, 93.08]), (OR = 41.73, 95%CI [2.04, 854.69]), (OR = 104.03, 95%CI [35.79, 302.38]), respectively. Conclusions: Based on our meta-analysis, bariatric surgery is associated with the resolution of subclinical hypothyroidism, a decrease in TSH levels, and a decrease in levothyroxine dose. Batya Isaacson, Maya Baron, Rachel Yamin, Gilad Bachrach, Francesca Levi-Schaffer, Zvi Granot, and Ofer Mandelboim. 2021. "The inhibitory receptor CD300a is essential for neutrophil-mediated clearance of urinary tract infection in mice." European Journal of Immunology, 51, 9, Pp. 2218–2224. Abstract Neutrophils play a crucial role in immune defense against and clearance of uropathogenic Escherichia coli (UPEC)-mediated urinary tract infection, the most common bacterial infection in healthy humans. CD300a is an inhibitory receptor that binds phosphatidylserine and phosphatidylethanolamine, presented on the membranes of apoptotic cells. CD300a binding to phosphatidylserine and phosphatidylethanolamine, also known as the "eat me" signal, mediates immune tolerance to dying cells. Here, we demonstrate for the first time that CD300a plays an important role in the neutrophil-mediated immune response to UPEC-induced urinary tract infection. 
We show that CD300a-deficient neutrophils have impaired phagocytic abilities and despite their increased accumulation at the site of infection, they are unable to reduce bacterial burden in the bladder, which results in significant exacerbation of infection and worse host outcome. Finally, we demonstrate that UPEC's pore forming toxin $\alpha$-hemolysin induces upregulation of the CD300a ligand on infected bladder epithelial cells, signaling to neutrophils to be cleared. Limor Rubin, Collin T Stabler, Adi Schumacher-Klinger, Cezary Marcinkiewicz, Peter I Lelkes, and Philip Lazarovici. 2021. "Neurotrophic factors and their receptors in lung development and implications in lung diseases." Cytokine & growth factor reviews, 59, Pp. 84–94. Abstract Although lung innervation has been described by many studies in humans and rodents, the regulation of the respiratory system induced by neurotrophins is not fully understood. Here, we review current knowledge on the role of neurotrophins and the expression and function of their receptors in neurogenesis, vasculogenesis and during the embryonic development of the respiratory tree and highlight key implications relevant to respiratory diseases. Ihab Abd-Elrahman, Taher Nassar, Noha Khairi, Riki Perlman, Simon Benita, and Dina Ben Yehuda. 2021. "Novel targeted mtLivin nanoparticles treatment for disseminated diffuse large B-cell lymphoma." Oncogene, 40, 2, Pp. 334–344. Abstract We previously showed that Livin, an inhibitor of apoptosis protein, is specifically cleaved to produce a truncated protein, tLivin, and demonstrated its paradoxical proapoptotic activity. We further demonstrated that mini-tLivin (MTV), a 70 amino acids derivative of tLivin, is a proapoptotic protein as potent as tLivin. Based on these findings, in this study we aimed to develop a venue to target MTV for the treatment of diffuse large B-cell lymphoma (DLBCL). MTV was conjugated to poly (lactide-co-glycolic acid) surface-activated nanoparticles (NPs). In order to target MTV-NPs we also conjugated CD40 ligand (CD40L) to the surface of the NPs and evaluated the efficacy of the bifunctional CD40L-MTV-NPs. In vitro, CD40L-MTV-NPs elicited significant apoptosis of DLBCL cells. In a disseminated mouse model of DLBCL, 37.5% of MTV-NPs treated mice survived at the end of the experiment. Targeting MTV-NPs using CD40L greatly improved survival and 71.4% of these mice survived. CD40L-MTV-NPs also greatly reduced CNS involvement of DLBCL. Only 20% of these mice presented infiltration of lymphoma to the brain in comparison to 77% of the MTV-NPs treated mice. In a subcutaneous mouse model, CD40L-MTV-NPs significantly reduced tumor volume in correlation with significant increased caspase-3 activity. Thus, targeted MTV-NPs suggest a novel approach to overcome apoptosis resistance in cancer. Liad Hinden, Aviram Kogot-Levin, Joseph Tam, and Gil Leibowitz. 2021. "Pathogenesis of diabesity-induced kidney disease: role of kidney nutrient sensing." The FEBS journal. Abstract Diabetes kidney disease (DKD) is a major healthcare problem associated with increased risk for developing end-stage kidney disease and high mortality. It is widely accepted that DKD is primarily a glomerular disease. Recent findings however suggest that kidney proximal tubule cells (KPTCs) may play a central role in the pathophysiology of DKD. 
In diabetes and obesity, KPTCs are exposed to nutrient overload, including glucose, free-fatty acids and amino acids, which dysregulate nutrient and energy sensing by mechanistic target of rapamycin complex 1 and AMP-activated protein kinase, with subsequent induction of tubular injury, inflammation, and fibrosis. Pharmacological treatments that modulate nutrient sensing and signaling in KPTCs, including cannabinoid-1 receptor antagonists and sodium glucose transporter 2 inhibitors, exert robust kidney protective effects. Shedding light on how nutrients are sensed and metabolized in KPTCs and in other kidney domains, and on their effects on signal transduction pathways that mediate kidney injury, is important for understanding the pathophysiology of DKD and for the development of novel therapeutic approaches in DKD and probably also in other forms of kidney disease. Nethanel Friedman, Arie Dagan, Jhonathan Elia, Sharon Merims, and Ofra Benny. 2021. "Physical properties of gold nanoparticles affect skin penetration via hair follicles." Nanomedicine : nanotechnology, biology, and medicine, 36, Pp. 102414. Abstract Drug penetration through the skin is significant for both transdermal and dermal delivery. One mechanism that has attracted attention over the last two decades is the transport pathway of nanoparticles via hair follicle, through the epidermis, directly to the pilosebaceous unit and blood vessels. Studies demonstrate that particle size is an important factor for drug penetration. However, in order to gain more information for the purpose of improving this mode of drug delivery, a thorough understanding of the optimal physical particle properties is needed. In this study, we fabricated fluorescently labeled gold nanoparticles (GNP) with a tight control over the size and shape. The effect of the particles' physical parameters on follicular penetration was evaluated histologically. We used horizontal human skin sections and found that the optimal size for polymeric particles is 0.25 $μ$m. In addition, shape penetration experiments revealed gold nanostars' superiority over spherical particles. Our findings suggest the importance of the particles' physical properties in the design of nanocarriers delivered to the pilosebaceous unit. Dan Gibson. 2021. "Platinum(IV) anticancer agents; are we en route to the holy grail or to a dead end?" Journal of inorganic biochemistry, 217, Pp. 111353. Abstract Pt(IV) complexes are designed as prodrugs that are intended to overcome resistance. Pt(IV) prodrugs are activated inside cancer cells releasing cytotoxic Pt(II) drugs as well as two axial ligands that can be used to confer favorable pharmacological properties to the prodrug. The ligands can be innocent spectators, cancer targeting agents or bioactive moieties. The choice of axial ligands determines the chemical and pharmacological properties of the prodrugs. Over the years, several approaches were employed in attempts to increase the selectivity of the prodrugs to cancer cells and to utilize multi-action prodrugs to overcome resistance. In this review, we critically examine several of these approaches in order to evaluate the validity of some of the working hypotheses that are driving the current research. Awanish Kumar and Abraham J. Domb. 2021. "Polymerization Enhancers for Cyanoacrylate Skin Adhesive." Macromolecular Bioscience. 
Abstract Cyanoacrylate glues are a renowned synthetic tissue sealant that cures rapidly through polymerization at room temperature, felicitating medical glues to treat skin wounds and surgical openings. Despite a wide range of cyanoacrylates available, only 2-octyl cyanoacrylates (OCA) provides the best biocompatibility. In this study, the polymerization and adhesive properties of 2-octyl cyanoacrylates (OCA) are explored in the presence of a highly biocompatible and biochemically inert polymer, poly(ethylene glycol) polyhedral oligomeric silsesquioxane (PEG-POSS). The effect of PEG-POSS on the polymerization of OCA is examined on a plastic surface and over pig skin. A peel-test is performed to evaluate the strength of OCA adhesive properties between two pieces of pig skin samples. Additionally, thin films of OCA are prepared using different fillers and evaluated for tear test. The results reveal that when applied on the plastic or pig skin, PEG-POSS initiated polymerization in OCA yields a high molecular weight OCA polymer with much better adhesive properties compared to commercially available cyanoacrylate adhesives. The relative change in the molecular weights of OCA compared to commercially available cyanoacrylate bioadhesives such as Dermaflex is much higher. The pig skin peeling test shows that OCA needs higher peeling force than Dermaflex. Alexey Bingor, Matityahu Azriel, Lavi Amiad, and Rami Yaka. 2021. "Potentiated Response of ERK/MAPK Signaling is Associated with Prolonged Withdrawal from Cocaine Behavioral Sensitization." Journal of molecular neuroscience : MN. Abstract Among the neuroadaptations underlying the expression of cocaine-induced behaviors are modifications in glutamate-mediated signaling and synaptic plasticity via activation of mitogen-activated protein kinases (MAPKs) within the nucleus accumbens (NAc). We hypothesized that exposure to cocaine leads to alterations in MAPK signaling in NAc neurons, which facilitates changes in the glutamatergic system and thus behavioral changes. We have previously shown that following withdrawal from cocaine-induced behavioral sensitization (BS), an increase in glutamate receptor expression and elevated MAPK signaling was evident. Here, we set out to determine the time course and behavioral consequences of inhibition of extracellular signal-regulated kinase (ERK) or NMDA receptors following withdrawal from BS. We found that inhibiting ERK by microinjection of U0126 into the NAc at 1 or 6 days following withdrawal from BS did not affect the expression of BS when challenged with cocaine at 14 days. However, inhibition of ERK 1 day before the cocaine challenge abolished the expression of BS. We also inhibited NR2B-containing NMDA receptors in the NAc by microinjection of ifenprodil into the NAc following withdrawal from BS, which had no effect on the expression of BS. However, microinjection of ifenprodil to the NAc 1 day before challenge attenuated the expression of BS similar to ERK inhibition. These results suggest that following a prolonged period of withdrawal, NR2B-containing NMDA receptors and ERK activity play a critical role in the expression of cocaine behavioral sensitization.
CommonCrawl
stability of the Monge-Ampère equation

Is there any hope to prove this conjecture (or a similar one)?

Conjecture. Let $\Omega_k$ be a family of convex (smooth) domains, and let $u_k$ be the convex Alexandrov solution of $$ \begin{cases} \det(D^2u_k)=f_k&\mbox{ in }\Omega_k\\ u_k=0 &\mbox{ on }\partial\Omega_k \end{cases} $$ with $0<\lambda\leq f_k\leq\Lambda,$ $f_k\in C^{n,\beta}.$ Assume that $\Omega_k$ converges to some domain $\Omega$ in "some appropriate distance" (the Hausdorff distance?) and $f_k\chi_{\Omega_k}\to f$ in $C_{loc}^{n,\beta},$ $f\in C^{n,\beta}.$ Then, if $u$ denotes the unique Alexandrov solution of $$ \begin{cases} \det(D^2u)=f&\mbox{ in }\Omega\\ u=0 &\mbox{ on }\partial\Omega \end{cases} $$ for any $\Omega'\subset\subset \Omega,$ we have that $u_k\to u$ in $C^{r,\beta}(\Omega')$ as $k\to\infty,$ where $r>n.$

In Second order stability for the Monge-Ampère equation and strong Sobolev convergence of optimal transport maps it is proved that: if $f_k\chi_{\Omega_k}\to f$ in $L^1_{loc}(\Omega),$ then $\|u_k-u\|_{W^{2,1}(\Omega')}\to 0$ as $k\to\infty.$

ap.analysis-of-pdes differential-equations elliptic-pde applied-mathematics regularity

Yes, this follows from Schauder theory for the Monge-Ampère equation and for linear equations. Subtracting the equations $\det D^2u_k = f_k$ and $\det D^2u = f$ gives, for $v_k = u_k - u$, the equation $$a_{ij}(x) (v_k)_{ij} = f_k - f$$ where $a_{ij}$ are coefficients depending on $D^2u_k$ and $D^2u$ (to see this observe that $f_k - f = \int_{0}^1 \frac{d}{dt} \det(tD^2u_k + (1-t)D^2u)\,dt$). By Caffarelli's Schauder estimates, the $a_{ij}$ are $C^{n,\beta}$ and uniformly elliptic when we step away from the boundary. Say $B_2 \subset \Omega_k,\,\Omega$ after an affine transformation. Linear Schauder theory gives (take $n = 0$ for simplicity) $$\|v_k\|_{C^{2,\beta}(B_{1/2})} < C(\|f_k-f\|_{C^{\beta}(B_1)} + \|v_k\|_{L^{\infty}(B_1)}).$$ By hypothesis, the first term on the right side goes to zero, and it is easy to see that the second term goes to zero using the maximum principle (e.g. apply the ABP maximum principle to $v_k$, which is small on the boundary of the common domain of definition for $u_k$ and $u$ by the Alexandrov maximum principle). — Connor Mooney
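For readers who want the linearization step spelled out: the coefficients $a_{ij}$ in the answer can be written explicitly by differentiating the determinant along the segment joining $D^2u$ and $D^2u_k$. This is a standard computation added here for convenience; it is not part of the original answer.

$$
f_k - f = \int_{0}^{1} \frac{d}{dt}\det\big(tD^2u_k + (1-t)D^2u\big)\,dt
        = \left(\int_{0}^{1} \operatorname{cof}\big(tD^2u_k + (1-t)D^2u\big)_{ij}\,dt\right)(v_k)_{ij},
\qquad v_k = u_k - u,
$$

so one may take $a_{ij}(x) = \int_{0}^{1} \operatorname{cof}\big(tD^2u_k + (1-t)D^2u\big)_{ij}\,dt$, using $\frac{d}{dt}\det A(t) = \operatorname{cof}(A(t))_{ij}\,A'_{ij}(t)$ and $\frac{d}{dt}\big(tD^2u_k+(1-t)D^2u\big) = D^2v_k$. Interior estimates for $u_k$ and $u$ are what make these coefficients Hölder continuous and uniformly elliptic away from the boundary, as used in the answer.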
CommonCrawl
Tutorial #15: Parsing I: context-free grammars and the CYK algorithm Authors: A. Kádár, S. Prince The current dominant paradigm in natural language processing is to build enormous language models based on the transformer architecture. Models such as GPT3 contain billions of parameters, which collectively describe joint statistics of spans of text and have been extremely successful over a wide range of tasks. However, these models do not explicitly take advantage of the structure of language; native speakers understand that a sentence is syntactically valid, even if it is meaningless. Consider how Colorless green ideas sleep furiously feels like valid English, whereas Furiously sleep ideas green colorless does not 1. This structure is formally described by a grammar, which is a set of rules that can generate an infinite number of sentences, all of which sound right, even if they mean nothing. In this blog, we review earlier work that models grammatical structure. We introduce the CYK algorithm which finds the underlying syntactic structure of sentences and forms the basis of many algorithms for linguistic analysis. The algorithms are elegant and interesting for their own sake. However, we also believe that this topic remains important in the age of large transformers. We hypothesize that the future of NLP will consist of merging flexible transformers with linguistically informed algorithms to achieve systematic and compositional generalization in language processing. Our discussion will focus on context-free grammars or CFGs. These provide a mathematically precise framework in which sentences are constructed by recursively combining smaller phrases usually referred to as constituents.2 Sentences under a CFG are analyzed through a tree-structured derivation in which the sentence is recursively generated phrase by phrase (figure 1). Figure 1. Parsing example for the sentence "The dog is in the garden." The sentence is parsed into constituent part-of-speech (POS) categories represented in a tree structure. The POS categories and phrase types here are sentence (S), noun phrase (NP), determiner (DT), verb phrase (VP), present tense verb (VBZ), prepositional phrase (PP), preposition (P), and noun (NN). The problem of recovering the underlying structure of a sentence is known as parsing. Unfortunately, natural language is ambiguous and so there may not be a single possible meaning; consider the sentence I saw him with the binoculars. Here, it is unclear whether the subject or the object of the sentence holds the binoculars (figure 2). To cope with this ambiguity, we will need weighted and probabilistic extensions to the context free grammar (referred to as WCFGs and PCFGs respectively). These allow us to compute a number that indicates how "good" each possible interpretation of a sentence is. Figure 2. Parsing the sentence "I saw him with binoculars" into constituent part-of-speech (POS) categories (e.g. noun) and phrase types (e.g., verb phrase) represented in a tree structure. The POS categories and phrase types here are sentence (S), noun phrase (NP), verb phrase (VP), past tense verb (VBD), prepositional phrase (PP), preposition (P), determiner (DT), and noun (NN). a) In this parse, it is "I" who have the binoculars. b) A second possible parse of the same sentence, in which it is "him" who possesses the binoculars. In Part I of this series of two blogs, we introduce the notion of a context-free grammar and consider how to parse sentences using this grammar. 
We then describe the CYK recognition algorithm which identifies whether the sentence can be parsed under a given grammar. In Part II, we introduce the aforementioned weighted context-free grammars and show how the CYK algorithm can be adapted to compute different quantities including the most likely sentence structure. In Part III we introduce probabilistic context-free grammars, and we present the inside-outside algorithm which efficiently computes the expected counts of the rules in the grammar for all possible analyses of a sentence. These expected counts are used in the E-Step of an expectation-maximization procedure for learning the rule weights.

Parse trees

Before tackling these problems, we'll first discuss the properties of a parse tree (figure 3). The root of the tree is labelled as "sentence" or "start". The leaves or terminals of the tree contain the words of the sentence. The parents of these leaves are called pre-terminals and contain the part-of-speech (POS) categories of the words (e.g., verb, noun, adjective, preposition). Words are considered to be from the same category if a sentence is still syntactically valid when they are substituted. For example: The {sad, happy, excited, bored} person in the coffee shop. This is known as the substitution test. Above the pre-terminals, the word categories are collected together into phrases.

Figure 3. Parse tree for a more complex sentence. The POS categories here are sentence (S), noun phrase (NP), determiner (DT), noun (NN), verb phrase (VP), third person singular verb (VBZ), and gerund (VBG).

There are three more important things to notice. First, the verb phrase highlighted in magenta has three children. However, there is no theoretical limit to this number. We could easily add the prepositional phrases in the garden and under a tree and so on. The complexity of the sentence is limited in practice by human memory and not by the grammar itself. Second, the grammatical structure allows for recursion. In this example, a verb phrase is embedded within a second verb phrase, which itself is embedded in a third verb phrase. Finally, we note that the parse tree disambiguates the meaning of the sentence. From a grammatical point of view, it could be that it was the bone that was enjoying every moment. However, it is clear that this is not the case, since the verb phrase corresponding to enjoying is attached to the verb phrase corresponding to eating and not the bone (see also figure 2).

Context free grammars

In this section, we present a more formal treatment of context-free grammars. In the following section, we'll elucidate the main ideas with an example. A language is a set of strings. Each string is a sequence of terminal symbols. In figure 3 these correspond to individual words, but more generally they may be abstract tokens. The set of terminals $\Sigma=\{\mbox{a,b,c},\ldots\}$ is called an alphabet or lexicon. There is also a set $\mathcal{V}=\{\mbox{A,B,C},\ldots\}$ of non-terminals, one of which is the special start symbol $S$. Finally, there is a set $\mathcal{R}$ of production or re-write rules. These relate the non-terminal symbols to each other and to the terminals. Formally, these grammar rules form a finite relation $\mathcal{R}\subseteq \mathcal{V} \times (\Sigma \cup \mathcal{V})^*$ where $*$ denotes the Kleene star.
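To make the four components of the grammar tuple concrete, a context-free grammar can be written down directly as a small data structure. The sketch below is a minimal Python illustration; the class, field names and the particular rule set (for the sentence of figures 1 and 4) are our own choices and are not prescribed by the text.

from dataclasses import dataclass

@dataclass
class CFG:
    """A context-free grammar G = (V, Sigma, R, S)."""
    nonterminals: set    # V: non-terminal symbols, including the start symbol
    terminals: set       # Sigma: the alphabet / lexicon
    rules: list          # R: pairs (lhs, rhs) with lhs in V and rhs a tuple over V and Sigma
    start: str = "S"     # the designated start symbol

# A tiny grammar for "The dog is in the garden" (the sentence of figures 1 and 4).
toy = CFG(
    nonterminals={"S", "NP", "VP", "PP", "DT", "NN", "VBZ", "IN"},
    terminals={"The", "the", "dog", "garden", "is", "in"},
    rules=[
        ("S", ("NP", "VP")), ("NP", ("DT", "NN")),
        ("VP", ("VBZ", "PP")), ("PP", ("IN", "NP")),
        ("DT", ("The",)), ("DT", ("the",)),
        ("NN", ("dog",)), ("NN", ("garden",)),
        ("VBZ", ("is",)), ("IN", ("in",)),
    ],
)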
Informally, this definition of $\mathcal{R}$ means that each grammar rule is an ordered pair where the first element is a non-terminal from $\mathcal{V}$ and the second is any possible string containing terminals from $\Sigma$ and non-terminals from $\mathcal{V}$. For example, B$\rightarrow$ab, C$\rightarrow$Baa and A$\rightarrow$AbCa are all production rules. A context free grammar is the tuple $G=\{\mathcal{V}, \Sigma, \mathcal{R}, S\}$ consisting of the non-terminals $\mathcal{V}$, terminals $\Sigma$, production rules $\mathcal{R}$, and start symbol $S$. The associated context-free language consists of all possible strings of terminals that are derivable from the grammar. Informally, the term context-free means that the left-hand side of each production rule is a single non-terminal symbol. Context-free grammars are part of the Chomsky hierarchy of languages which contains (in order of increasing expressiveness) regular, context-free, context-sensitive, and recursively enumerable grammars. Each differs in the family of production rules that are permitted and the complexity of the associated parsing algorithms (table 1). As we shall see, context-free languages can be parsed in $O(n^{3})$ time where $n$ is the number of observed terminals. Parsing more expressive grammars in the Chomsky hierarchy has exponential complexity. In fact, context-free grammars are not considered to be expressive enough to model real languages. Many other types of grammar have been invented that are both more expressive and parseable in polynomial time, but these are beyond the scope of this post.

Language | Recognizer | Parsing Complexity
Recursively enumerable | Turing machine | undecidable
Context-sensitive | Linear-bounded automata | PSPACE
Context-free | Pushdown automata | $O(n^3)$
Regular | Finite-state automata | $O(n)$

Table 1. The Chomsky hierarchy of languages. As the grammar-type becomes simpler, the required computation model (recognizer) becomes less general and the parsing complexity decreases.

Consider the context free grammar that generated the example in figure 4. Here, the set of non-terminals $\mathcal{V}=\{\mbox{VP, PP, NP, DT, NN, VBZ, IN,}\ldots\}$ contains the start symbol, phrases, and pre-terminals. The set of terminals $\Sigma=\{$The, dog, is, in, the, garden, $\ldots \}$ contains the words. The production rules in the grammar associated with this example include:

$\text{S}\rightarrow\text{NP}\;\text{VP}$, $\text{NP}\rightarrow\text{DT}\;\text{NN}$, $\text{VP}\rightarrow\text{VBZ}\;\text{PP}$, $\text{PP}\rightarrow\text{IN}\;\text{NP}$, $\text{DT}\rightarrow\text{The}$, $\text{DT}\rightarrow\text{the}$, $\text{NN}\rightarrow\text{dog}$, $\text{NN}\rightarrow\text{garden}$, $\text{VBZ}\rightarrow\text{is}$, and $\text{IN}\rightarrow\text{in}$.

Of course, a full model of English grammar contains many more non-terminals, terminals, and rules than we observed in this single example. The main point is that the tree structure in figure 4 can be created by the repeated application of a finite set of rules.

Figure 4. Example sentence to demonstrate context free grammar rules

Chomsky Normal Form

Later on, we will describe the CYK recognition algorithm. This takes a sentence and a context-free grammar and determines whether there is a valid parse tree that can explain the sentence in terms of the production rules of the CFG. However, the CYK algorithm assumes that the context free grammar is in Chomsky Normal Form (CNF). A grammar is in CNF if it only contains the following types of rules:

\begin{align}
\text{A} &\rightarrow \text{B} \; \text{C} \tag{binary non-terminal}\\
\text{A} &\rightarrow \text{a} \tag{unary terminal}\\
\text{S} &\rightarrow \epsilon \tag{delete sentence}
\end{align}

where A, B, and C are non-terminals, a is a token, S is the start symbol and $\epsilon$ represents the empty string. The binary non-terminal rule means that a non-terminal can create exactly two other non-terminals.
An example is the rule $S \rightarrow \text{NP} \; \text{VP}$ in figure 4. The unary terminal rule means that a non-terminal can create a single terminal. The rule $\text{NN} \rightarrow \text{dog}$ in figure 4 is an example. The delete sentence rule allows the grammar to create empty strings, but in practice we avoid $\epsilon$-productions. Notice that the parse tree in figure 3 is not in Chomsky Normal Form because it contains the rule $\text{VP} \rightarrow \text{VBG} \; \text{NP} \; \text{VP}$. For the case of natural language processing, there are two main tasks to convert a grammar to CNF:

1. We deal with rules that produce more than two non-terminals by creating new intermediate non-terminals (figure 5a).
2. We remove unary rules like A $\rightarrow$ B by creating a new node A_B (figure 5b).

Figure 5. Conversion to Chomsky Normal Form. a) Converting non-binary rules by introducing new non-terminal B_C. b) Eliminating unary rules by creating new non-terminal A_B.

Both of these operations introduce new non-terminals into the grammar. Indeed, in the former case, we may introduce different numbers of new non-terminals depending on which children we choose to combine. It can be shown that in the worst-case scenario, converting CFGs into an equivalent grammar in Chomsky Normal Form results in a quadratic increase in the number of rules. Note also that although the CNF transformation is the most popular, it is not the only, or even the most efficient option.

Given a grammar in Chomsky Normal Form, we can turn our attention to parsing a sentence. The parsing algorithm will return a valid parse tree like the one in figure 6 if the sentence has a valid analysis, or indicate that there is no such valid parse tree.

Figure 6. Example parse tree for the sentence Jeff trains geometry students. This sentence has $n=4$ terminals. It has $n-1=3$ internal nodes representing the non-terminals and $n=4$ pre-terminal nodes.

It follows that one way to characterize a parsing algorithm is that it searches over the set of all possible parse trees. A naive approach might be to exhaustively search through these trees until we find one that obeys all of the rules in the grammar and yields the sentence. In the next section, we'll consider the size of this search space, find that it is very large, and draw the conclusion that this brute-force approach is intractable.

Number of parse trees

The parse tree of a sentence of length $n$ consists of a binary tree with $n-1$ internal nodes, plus another $n$ nodes connecting the pre-terminals to the terminals. The number of binary trees with $n$ internal nodes can be calculated via the recursion:

\begin{equation} C_{n} = \sum_{i=0}^{n-1}C_{n-1-i}C_{i}, \quad C_{0}=1. \tag{1} \end{equation}

Figure 7. Intuition for number $C_{n}$ of binary trees with $n$ internal nodes. a) There is only one tree with a single internal node, so $C_{1}=1$. b) To generate all the possible trees with $n=2$ internal nodes, we add a new root (red sub-tree). We then add the tree with $n=1$ node either to the left or to the right branch of this sub-tree to create the $C_{2} = C_{1}+C_{1} =2$ possible combinations. c) To generate all the possible trees with $n=3$ nodes, we again add a new root (green sub-tree). We can then add either of the trees with $n=2$ to the left node of this sub-tree, or d) add the tree with $n=1$ nodes to both the left and right, or e) add either of the trees with $n=2$ to the right node of this sub-tree. This gives a total of $C_{3} = C_{2} + C_{1}C_{1}+ C_{2} = 5$ possible trees.
We could continue in this way and by the same logic $C_{4} = C_{3}+C_{2}C_{1}+C_{1}C_{2}+C_{3} = 14$. Defining $C_{0}=1$ we have the general recursion $C_{n} = \sum_{i=0}^{n-1}C_{n-1-i}C_{i}$.

The intuition for this recursion is illustrated in figure 7. This series of integers is known as the Catalan numbers and can be written out explicitly as:

\begin{equation} C_n = \frac{(2n)!}{(n+1)!\,n!}. \tag{2} \end{equation}

Needless to say the series grows extremely fast:

\begin{equation} 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, \ldots \tag{3} \end{equation}

Consider the example sentence I saw him with the binoculars. Here there are only $C_5=42$ possible trees, but these must be combined with the non-terminals in the grammar (figure 8). In this example, for each of the 42 trees, each of the six leaves must contain one of four possible parts of speech (DT, NN, P, VBD) and each of the five non-leaves must contain one of four possible clause types (S, NP, VP, PP) and so there are $42 \times 4^6 \times 4^5 = 176{,}160{,}768$ possible parse trees.

Figure 8. Minimal set of grammar rules in Chomsky Normal Form for parsing example sentence I saw him with the binoculars. a) Rules relating pre-terminals to terminals. Note that the word saw is ambiguous and may be a verb (meaning observed) or a noun (meaning a tool for cutting wood). b) Rules relating non-terminals to one another.

Even this minimal example has a very large number of possible explanations. Now consider that (i) the average sentence length written by Charles Dickens was 20 words, with an associated $C_{20}=6,564,120,420$ possible binary trees and (ii) that there are many more parts of speech and clause types in a realistic model of the English language. It's clear that there are an enormous number of possible parses and it is not practical to employ exhaustive search to find the valid ones.

The CYK algorithm

The CYK algorithm (named after inventors John Cocke, Daniel Younger, and Tadao Kasami) was the first polynomial time parsing algorithm that could be applied to ambiguous CFGs (i.e., CFGs that allow multiple derivations for the same string). In its simplest form, the CYK algorithm solves the recognition problem; it determines whether a string $\mathbf{w}$ can be derived from a grammar $G$. In other words, the algorithm takes a sentence and a context-free grammar and returns TRUE if there is a valid parse tree or FALSE otherwise. This algorithm sidesteps the need to try every possible tree by exploiting the fact that a complete sentence is made by combining sub-clauses, or equivalently, a parse tree is made by combining sub-trees. A tree is only valid if its sub-trees are also valid. The algorithm works from the bottom of the tree upwards, storing possible valid sub-trees as it goes and building larger sub-trees from these components without the need to re-calculate them. As such, CYK is a dynamic programming algorithm. The CYK algorithm is just a few lines of pseudo-code:

0  # Initialize data structure
1  chart[1...n, 1...n, 1...V] := FALSE

3  # Use unary rules to find possible parts of speech at pre-terminals
4  for p := 1 to n                      # start position
5      for each unary rule A -> w_p
6          chart[1, p, A] := TRUE

8  # Main parsing loop
9  for l := 2 to n                      # sub-string length
10     for p := 1 to n-l+1              # start position
11         for s := 1 to l-1            # split width
12             for each binary rule A -> B C
13                 chart[l, p, A] = chart[l, p, A] OR (chart[s, p, B] AND chart[l-s, p+s, C])

15 return chart[n, 1, S]

The algorithm is simple, but is hard to understand from the code alone.
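Before walking through the worked example, it may help to see the pseudo-code as runnable code. The sketch below is a direct Python transcription of the algorithm above; the choice of a dictionary chart keyed by (length, start position) and the encoding of the grammar as sets of tuples are our own, and the toy rule set at the bottom is a reconstruction in the spirit of figure 8 rather than the exact figure contents.

def cyk_recognize(words, unary_rules, binary_rules, start="S"):
    """Return True if `words` can be derived from the grammar (CYK recognition).

    unary_rules  : set of (A, w) pairs for rules A -> w (pre-terminal -> word)
    binary_rules : set of (A, B, C) triples for rules A -> B C
    chart[(l, p)]: set of non-terminals deriving the sub-string of length l
                   that starts at 1-based word position p.
    """
    n = len(words)
    chart = {(l, p): set() for l in range(1, n + 1) for p in range(1, n - l + 2)}

    # Unary rules: possible parts of speech for each word (lines 4-6 above).
    for p in range(1, n + 1):
        for (A, w) in unary_rules:
            if w == words[p - 1]:
                chart[(1, p)].add(A)

    # Main loop over sub-string length, start position and split width (lines 9-13).
    for l in range(2, n + 1):
        for p in range(1, n - l + 2):
            for s in range(1, l):
                for (A, B, C) in binary_rules:
                    if B in chart[(s, p)] and C in chart[(l - s, p + s)]:
                        chart[(l, p)].add(A)

    return start in chart[(n, 1)]


# A toy grammar in the spirit of figure 8 (our reconstruction from the text;
# the exact rule set in the figure may differ).
unary = {("NP", "I"), ("VBD", "saw"), ("NN", "saw"), ("NP", "him"),
         ("P", "with"), ("DT", "the"), ("NN", "binoculars")}
binary = {("S", "NP", "VP"), ("VP", "VBD", "NP"), ("VP", "VP", "PP"),
          ("PP", "P", "NP"), ("NP", "DT", "NN"), ("NP", "NP", "PP")}

print(cyk_recognize("I saw him with the binoculars".split(), unary, binary))  # True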
In the next section, we will present a worked example which makes this much easier to comprehend. Before we do that though, let's make some high level observations. The algorithm consists of four sections: Chart: On line 1, we initialize a data structure, which is usually known as a chart in the context of parsing. This can be thought of as an $n\times n$ table where $n$ is the sentence length. At each position, we have a length $V$ binary vector where $V=|\mathcal{V}|$ is the number of non-terminals (i.e., the total number of clause types and parts of speech). Parts of speech: In lines 4-6, we run through each word in the sentence and identify whether each part of speech (noun, verb, etc.) is compatible. Main loop: In lines 8-13, we run through three loops and assign non-terminals to the chart. This groups the words into possible valid sub-phrases. Return value: In line 15 we return TRUE if the start symbol $S$ is TRUE at position $(n,1)$. The complexity of the algorithm is easy to discern. Lines 9-13 contain three for loops depending on the sentence length $n$ (lines 9-11) and one more depending on the number of grammar rules $|R|$ (line 12). This gives us a complexity of $\mathcal{O}(n^3 \cdot |R|)$. To make the CYK algorithm easier to understand, we'll use the worked example of parsing the sentence I saw him with the binoculars. We already saw in figure 2 that this sentence has two possible meanings. We'll assume the minimal grammar from figure 8 that is sufficient to parse the sentence. In the next four subsections we'll consider the four parts of the algorithm in turn. Figure 9 shows the chart for our example sentence, which is itself shown in an extra row under the chart. Each element in the chart corresponds to a sub-string of the sentence. The first index of the chart $l$ represents the length of that sub-string and the second index $p$ is the starting position. So, the element of the chart at position (4,2) represents the sub-string that is length four and starts at word two which is saw him with the. We do not use the upper triangular portion of the chart. The CYK algorithm runs through each of the elements of the chart, starting with strings of length 1 and working through each position and then moving to strings of length 2, and so on, until we finally consider the whole sentence. This explains the loops in lines 9 and 10. The third loop considers possible binary splits of the strings and is indexed by $s$. For position (4,2), the string can be split into saw $|$ him with the ($s=1$, blue boxes), saw him $|$ with the ($s=2$, green boxes), or saw him with $|$ the ($s=3$, red boxes). Figure 9. Chart construction for CYK algorithm. The original sentence is below the chart. Each element of the chart corresponds to a sub-string so that position (l, p) is the sub-string that starts at position $p$ and has length $l$. For the $l^{th}$ row of the chart, there are $l-1$ ways of dividing the sub-string into two parts. For example, the string in the gray box at position (4,2) can be split in 4-1 =3 ways that correspond to the blue, green and red shaded boxes and these splits are indexed by the variable $s$. Now that we understand the meaning of the chart and how it is indexed, let's run through the algorithm step by step. First we deal with strings of length $l=1$ (i.e., the individual words). We run through each unary rule $A \rightarrow w_p$ in the grammar and set these elements to TRUE in the chart (figure 10). 
There is only one ambiguity here, which is the word saw which could be a past tense verb or a noun. This process corresponds to lines 5-6 of the algorithm. Figure 10. Applying unary rules in CYK algorithm. We consider the sub-strings of length 1 (i.e., the individual words) and note which parts of speech could account for that word. In this limited grammar there is only one ambiguity which is the word saw which could be the past tense of see or a woodworking tool. Note that this is the same chart as in figure 9 but the rows have been staggered to make it easier to draw subsequent steps in the algorithm. In the main loop, we consider sub-strings of increasing length starting with pairs of words and working up to the full length of the sentence. For each sub-string, we determine if there is a rule of the form $\text{A}\rightarrow \text{B}\;\text{C}$ that can derive it. We start with strings of length $l=2$. These can obviously only be split in one way. For each position, we note in the chart all the non-terminals A that can be expanded to generate the parts of speech B and C in the boxes corresponding to the individual words (figure 11). Figure 11. Main loop for strings of length $l=2$. We consider each pair of words in turn (i.e, work across the row $l=2$). There is only one way to split a pair of words, and so for each position, we just consider whether each grammar rule can explain the parts of speech in the boxes in row $l=1$ that correspond to the individual words. So, position (2,1) is left empty as there is no rule of the form $\text{A}\rightarrow \text{NP}\;\text{NN}$ or $\text{A}\rightarrow \text{NP}\;\text{VBD}$. Position (2,2) contains $\text{VP}$ as we can use the rule $\text{VP}\rightarrow \text{VBD}\:\text{NP}$ and so on. In the next outer loop, we consider sub-strings of length $l=3$ (figure 12). For each position, we search for a rule that can derive the three words. However, now we must also consider two possible ways to split the length 3 sub-string. For example, for position $(3,2)$ we attempt to derive the sub-string saw him with. This can be split as saw him $|$ with corresponding to positions (2,2)$|$(1,4) which contain VP and P respectively. However, there is no rule of the form $\text{A}\rightarrow\text{VP}\;\text{P}$. Likewise, there is no rule that can derive the split saw $|$ him with since there was no rule that could derive him with. Consequently, we leave position $(3,2)$ empty. However, at position $(3,4)$, the rule $\text{PP}\rightarrow \text{P}\;\text{NP}$ can be applied as discussed in the legend of figure 12. Figure 12. Main loop for strings $l=3$. We consider each triple of words in turn (i.e., work across the row $l=3$). We can split each triple in two possible ways and for each box we consider whether there is a rule that explains each split. For example, for position (3,4) corresponding to the sub-string with the binoculars, we can explain with by the non-terminal P from row $l=1$ and the binoculars with the non-terminal NP from row $l=2$ using the rule $\text{PP}\rightarrow\text{P}\;\text{NP}$. Hence, we add PP to position (3,4). We continue this process, working upwards through the chart for longer and longer sub-strings (figure 13). For each sub-string length, we consider each position and each possible split and add non-terminals to the chart where we find an applicable rule. We note that position $(5,2)$ in figure 13b corresponding to the sub-string saw him with the binoculars is particularly interesting. 
Here there are two possible rules $\text{VP}\rightarrow\text{VP}\;\text{PP}$ and $\text{VP}\rightarrow\text{VBD}\;\text{NP}$ that both come to the conclusion that the sub-string can be derived by the non-terminal VP. This corresponds to the original ambiguity in the sentence.

Figure 13. Continuing CYK main loop for strings of a) length $l=4$ b) length $l=5$ and c) length $l=6$. Note the ambiguity in panel (b) where there are two possible routes to assign the non-terminal VP to position (5,2) corresponding to the sub-strings saw $|$ him with the binoculars and saw him $|$ with the binoculars. This reflects the ambiguity in the sentence; it may be either I or him who has the binoculars.

When we reach the top-most row of the chart ($l=6$), we are considering the whole sentence. At this point, we discover if the start symbol $S$ can be used to derive the entire string. If there is such a rule, the sentence is valid under the grammar and if there isn't then it is not. This corresponds to the final line of the CYK algorithm pseudocode. For this example, we use the rule $S\rightarrow \text{NP}\;\text{VP}$ to explain the entire string with the noun phrase I and the verb phrase saw him with the binoculars and conclude that the sentence is valid under this context free grammar.

Retrieving solutions

The basic CYK algorithm just returns a binary variable indicating whether the sentence can be parsed or not under a grammar $G$. Often we are interested in retrieving the parse tree(s). Figure 14 superimposes the paths that led to the start symbol in the top left from figures 11-13. These paths form a shared parse forest; two trees share the black paths, but the red paths are only in the first tree and the blue paths are only in the second tree. These two trees correspond to the two possible meanings of the sentence (figure 15).

Figure 14. Superimposition of paths that lead to the start symbol at position (6,1) from figures 11-13. These describe two overlapping trees forming a shared parse forest: the common parts are drawn in black, paths in only the first tree are in red and paths in only the second tree are in blue. These are the parse trees for two possible meanings of this sentence.

These two figures show that it is trivial to reconstruct the parse tree once we have run the CYK algorithm as long as we cache the inputs to each position in the chart. We simply start from the start symbol at position (6,1) and work back down through the tree. At any point where there are two inputs into a cell, there is an ambiguity and we must enumerate all combinations of these ambiguities to find all the valid parses. This technique is similar to other dynamic programming problems (e.g.: the canonical implementation of the longest common subsequence algorithm computes only the length of the subsequence, but backpointers allow for retrieving the subsequence itself).

Figure 15. The two trees from figure 14 correspond exactly with the two possible parse trees that explain this sentence under the provided grammar. a) In this analysis, it is I who have the binoculars. b) A second possible analysis of the same sentence, in which it is him who has the binoculars.

A more challenging example

The previous example was relatively unambiguous. For a bit of fun, we'll also show the results on the famously difficult-to-understand sentence Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. Surprisingly, this is a valid English sentence.
To comprehend it, you need to know that (i) buffalo is a plural noun describing animals that are also known as bison, (ii) Buffalo is a city, and (iii) buffalo is a verb that means "to intimidate". The meaning of the sentence is thus: Bison from the city Buffalo that are intimidated by other bison from the city Buffalo, themselves intimidate yet other bison from the city Buffalo. To make things even harder, we'll assume that the text is written in all lower case, and so each instance of buffalo could correspond to any of the three meanings. Could you come up with a grammar that assigns an intuitive analysis to this sentence? In Figure 16 we provide a minimal, but sufficient grammar that allows the CYK algorithm to find a single and reasonable parse tree for this strange sentence.

Figure 16. Running the CYK algorithm on the sentence buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo. The CYK algorithm returns TRUE as it is able to put the start symbol $S$ in the top-left corner of the chart. The red lines show the tracing back of the parse tree to the constituent parts of speech.

CYK algorithm summary

In this part of the blog, we have described the CYK algorithm for the recognition problem; the algorithm determines whether a string can be generated by a given grammar. It is a classic example of a dynamic programming algorithm that explores an exponential search space in polynomial time by storing intermediate results. Another way of thinking about the CYK algorithm from a less procedural and more declarative perspective is that it is performing logical deduction. The axioms are the grammar rules and we are presented with facts which are the words. For a given sub-string length, we deduce new facts by applying the rules of the grammar $G$ and facts (or axioms) that we had previously deduced about shorter sub-strings. We keep applying the rules to reach new facts about which sub-string is derivable by $G$ with the goal of proving that $S$ derives the sentence. Note that we have used an unconventional indexing for the chart in our description. For a more typical presentation, consult these slides. In Part II, we will consider assigning probabilities to the production rules, so when the parse is ambiguous, we can assign probabilities to the different meanings. We will also consider the inside-outside algorithm which helps learn these probabilities.

1 This famous example was used in Syntactic Structures by Noam Chomsky in 1957 to motivate the independence of syntax and semantics.

2 The idea that sentences are recursively built up from smaller coherent parts dates back at least to a Sanskrit sutra of around 4000 verses known as Aṣṭādhyāyī written by Pāṇini probably around the 6th-4th century BC.
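To make the back-tracing procedure from the Retrieving solutions section concrete, the recognizer sketched earlier can be extended so that every chart entry records how it was produced. The following is an illustrative sketch (not the exact code behind the figures); it reuses the same reconstructed toy grammar, which remains an assumption, and returns both parse trees for I saw him with the binoculars.

from itertools import product

def cyk_parse(words, unary_rules, binary_rules, start="S"):
    """CYK with backpointers: return all parse trees of `words` under the grammar.

    chart[(l, p)] maps each non-terminal A to a list of records describing how
    A was derived over the span of length l starting at word position p:
    ("word", w) for a pre-terminal, or (B, C, s) for a rule A -> B C applied
    with split width s.  Trees are returned as nested tuples (A, children...).
    """
    n = len(words)
    chart = {(l, p): {} for l in range(1, n + 1) for p in range(1, n - l + 2)}

    for p, w in enumerate(words, start=1):              # unary rules (pre-terminals)
        for (A, token) in unary_rules:
            if token == w:
                chart[(1, p)].setdefault(A, []).append(("word", w))

    for l in range(2, n + 1):                           # main loop with backpointers
        for p in range(1, n - l + 2):
            for s in range(1, l):
                for (A, B, C) in binary_rules:
                    if B in chart[(s, p)] and C in chart[(l - s, p + s)]:
                        chart[(l, p)].setdefault(A, []).append((B, C, s))

    def trees(l, p, A):
        """Enumerate all sub-trees rooted at A over the span (l, p)."""
        for record in chart[(l, p)].get(A, []):
            if record[0] == "word":
                yield (A, record[1])
            else:
                B, C, s = record
                for left, right in product(list(trees(s, p, B)),
                                           list(trees(l - s, p + s, C))):
                    yield (A, left, right)

    return list(trees(n, 1, start))


# Same reconstructed toy grammar as in the earlier sketch.
unary = {("NP", "I"), ("VBD", "saw"), ("NN", "saw"), ("NP", "him"),
         ("P", "with"), ("DT", "the"), ("NN", "binoculars")}
binary = {("S", "NP", "VP"), ("VP", "VBD", "NP"), ("VP", "VP", "PP"),
          ("PP", "P", "NP"), ("NP", "DT", "NN"), ("NP", "NP", "PP")}

for tree in cyk_parse("I saw him with the binoculars".split(), unary, binary):
    print(tree)   # two trees, one for each reading of the sentence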
CommonCrawl
Calculate profits by comparing total revenue and total cost
Identify profits and losses with the average cost curve
Explain the shutdown point
Determine the price at which a firm should continue producing in the short run

A perfectly competitive firm has only one major decision to make—namely, what quantity to produce. To understand why this is so, consider a different way of writing out the basic definition of profit:

[latex]\begin{array}{r @{{}={}} l}Profit & Total\;revenue\;-\;Total\;cost \\[1em] & (Price)(Quantity\;produced)\;-\;(Average\;cost)(Quantity\;produced) \end{array}[/latex]

Since a perfectly competitive firm must accept the price for its output as determined by the product's market demand and supply, it cannot choose the price it charges. This is already determined in the profit equation, and so the perfectly competitive firm can sell any number of units at exactly the same price. It implies that the firm faces a perfectly elastic demand curve for its product: buyers are willing to buy any number of units of output from the firm at the market price. When the perfectly competitive firm chooses what quantity to produce, then this quantity—along with the prices prevailing in the market for output and inputs—will determine the firm's total revenue, total costs, and ultimately, level of profits.

Determining the Highest Profit by Comparing Total Revenue and Total Cost

A perfectly competitive firm can sell as large a quantity as it wishes, as long as it accepts the prevailing market price. Total revenue is going to increase as the firm sells more, depending on the price of the product and the number of units sold. If you increase the number of units sold at a given price, then total revenue will increase. If the price of the product increases for every unit sold, then total revenue also increases. As an example of how a perfectly competitive firm decides what quantity to produce, consider the case of a small farmer who produces raspberries and sells them frozen for $4 per pack. Sales of one pack of raspberries will bring in $4, two packs will be $8, three packs will be $12, and so on. If, for example, the price of frozen raspberries doubles to $8 per pack, then sales of one pack of raspberries will be $8, two packs will be $16, three packs will be $24, and so on. Total revenue and total costs for the raspberry farm, broken down into fixed and variable costs, are shown in Table 1 and also appear in Figure 1. The horizontal axis shows the quantity of frozen raspberries produced in packs; the vertical axis shows both total revenue and total costs, measured in dollars. The total cost curve intersects with the vertical axis at a value that shows the level of fixed costs, and then slopes upward. All these cost curves follow the same characteristics as the curves covered in the Cost and Industry Structure chapter.

Figure 1. Total Cost and Total Revenue at the Raspberry Farm. Total revenue for a perfectly competitive firm is a straight line sloping up. The slope is equal to the price of the good. Total cost also slopes up, but with some curvature. At higher levels of output, total cost begins to slope upward more steeply because of diminishing marginal returns. The maximum profit will occur at the quantity where the gap of total revenue over total cost is largest.

Quantity (Q) | Total Cost (TC) | Fixed Cost (FC) | Variable Cost (VC) | Total Revenue (TR) | Profit
0 | $62 | $62 | – | $0 | −$62
10 | $90 | $62 | $28 | $40 | −$50
20 | $110 | $62 | $48 | $80 | −$30
30 | $126 | $62 | $64 | $120 | −$6
40 | $144 | $62 | $82 | $160 | $16
50 | $166 | $62 | $104 | $200 | $34
100 | $404 | $62 | $342 | $400 | −$4

Table 1.
Total Cost and Total Revenue at the Raspberry Farm Based on its total revenue and total cost curves, a perfectly competitive firm like the raspberry farm can calculate the quantity of output that will provide the highest level of profit. At any given quantity, total revenue minus total cost will equal profit. One way to determine the most profitable quantity to produce is to see at what quantity total revenue exceeds total cost by the largest amount. On Figure 1, the vertical gap between total revenue and total cost represents either profit (if total revenues are greater that total costs at a certain quantity) or losses (if total costs are greater that total revenues at a certain quantity). In this example, total costs will exceed total revenues at output levels from 0 to 40, and so over this range of output, the firm will be making losses. At output levels from 50 to 80, total revenues exceed total costs, so the firm is earning profits. But then at an output of 90 or 100, total costs again exceed total revenues and the firm is making losses. Total profits appear in the final column of Table 1. The highest total profits in the table, as in the figure that is based on the table values, occur at an output of 70–80, when profits will be $56. A higher price would mean that total revenue would be higher for every quantity sold. A lower price would mean that total revenue would be lower for every quantity sold. What happens if the price drops low enough so that the total revenue line is completely below the total cost curve; that is, at every level of output, total costs are higher than total revenues? In this instance, the best the firm can do is to suffer losses. But a profit-maximizing firm will prefer the quantity of output where total revenues come closest to total costs and thus where the losses are smallest. (Later we will see that sometimes it will make sense for the firm to shutdown, rather than stay in operation producing output.) Comparing Marginal Revenue and Marginal Costs Firms often do not have the necessary data they need to draw a complete total cost curve for all levels of production. They cannot be sure of what total costs would look like if they, say, doubled production or cut production in half, because they have not tried it. Instead, firms experiment. They produce a slightly greater or lower quantity and observe how profits are affected. In economic terms, this practical approach to maximizing profits means looking at how changes in production affect marginal revenue and marginal cost. Figure 2 presents the marginal revenue and marginal cost curves based on the total revenue and total cost in Table 1. The marginal revenue curve shows the additional revenue gained from selling one more unit. As mentioned before, a firm in perfect competition faces a perfectly elastic demand curve for its product—that is, the firm's demand curve is a horizontal line drawn at the market price level. This also means that the firm's marginal revenue curve is the same as the firm's demand curve: Every time a consumer demands one more unit, the firm sells one more unit and revenue goes up by exactly the same amount equal to the market price. In this example, every time a pack of frozen raspberries is sold, the firm's revenue increases by $4. Table 2 shows an example of this. 
This condition only holds for price-taking firms in perfect competition where:

[latex]marginal\;revenue = price[/latex]

The formula for marginal revenue is:

[latex]marginal\;revenue = \frac {change\;in\;total\;revenue}{change\;in\;quantity}[/latex]

Price | Quantity | Total Revenue | Marginal Revenue
$4 | 1 | $4 | –
$4 | 2 | $8 | $4
$4 | 3 | $12 | $4

Table 2. Marginal Revenue

Notice that marginal revenue does not change as the firm produces more output. That is because the price is determined by supply and demand and does not change as the farmer produces more (keeping in mind that, due to the relatively small size of each firm, increasing their supply has no impact on the total market supply where price is determined). Since a perfectly competitive firm is a price taker, it can sell whatever quantity it wishes at the market-determined price. Marginal cost, the cost per additional unit sold, is calculated by dividing the change in total cost by the change in quantity. The formula for marginal cost is:

[latex]marginal\;cost = \frac {change\;in\;total\;cost}{change\;in\;quantity}[/latex]

Ordinarily, marginal cost changes as the firm produces a greater quantity. In the raspberry farm example, shown in Figure 2, Figure 3 and Table 3, marginal cost at first declines as production increases from 10 to 20 to 30 packs of raspberries—which represents the area of increasing marginal returns that is not uncommon at low levels of production. But then marginal costs start to increase, displaying the typical pattern of diminishing marginal returns. If the firm is producing at a quantity where MR > MC, like 40 or 50 packs of raspberries, then it can increase profit by increasing output because the marginal revenue is exceeding the marginal cost. If the firm is producing at a quantity where MC > MR, like 90 or 100 packs, then it can increase profit by reducing output because the reductions in marginal cost will exceed the reductions in marginal revenue. The firm's profit-maximizing choice of output will occur where MR = MC (or at a choice close to that point). You will notice that what occurs on the production side is exemplified on the cost side. This is referred to as duality.

Figure 2. Marginal Revenues and Marginal Costs at the Raspberry Farm: Individual Farmer. For a perfectly competitive firm, the marginal revenue (MR) curve is a horizontal straight line because it is equal to the price of the good, which is determined by the market, shown in Figure 3. The marginal cost (MC) curve is sometimes first downward-sloping, if there is a region of increasing marginal returns at low levels of output, but is eventually upward-sloping at higher levels of output as diminishing marginal returns kick in.

Figure 3. Marginal Revenues and Marginal Costs at the Raspberry Farm: Raspberry Market. The equilibrium price of raspberries is determined through the interaction of market supply and market demand at $4.00.

Quantity | Total Cost | Fixed Cost | Variable Cost | Marginal Cost | Total Revenue | Marginal Revenue
0 | $62 | $62 | – | – | – | –
10 | $90 | $62 | $28 | $2.80 | $40 | $4.00
20 | $110 | $62 | $48 | $2.00 | $80 | $4.00
30 | $126 | $62 | $64 | $1.60 | $120 | $4.00
50 | $166 | $62 | $104 | $2.20 | $200 | $4.00
100 | $404 | $62 | $342 | $8.00 | $400 | $4.00

Table 3. Marginal Revenues and Marginal Costs at the Raspberry Farm

In this example, the marginal revenue and marginal cost curves cross at a price of $4 and a quantity of 80 produced. If the farmer started out producing at a level of 60, and then experimented with increasing production to 70, marginal revenues from the increase in production would exceed marginal costs—and so profits would rise. The farmer has an incentive to keep producing.
From a level of 70 to 80, marginal cost and marginal revenue are equal, so profit doesn't change. If the farmer then experimented further with increasing production from 80 to 90, he would find that marginal costs from the increase in production are greater than marginal revenues, and so profits would decline.

The profit-maximizing choice for a perfectly competitive firm will occur where marginal revenue is equal to marginal cost—that is, where MR = MC. A profit-seeking firm should keep expanding production as long as MR > MC. But at the level of output where MR = MC, the firm should recognize that it has achieved the highest possible level of economic profits. (In the example above, the profit-maximizing output level is between 70 and 80 units of output, but the firm will not know it has maximized profit until it reaches 80, where MR = MC.) Expanding production into the zone where MR < MC will only reduce economic profits. Because the marginal revenue received by a perfectly competitive firm is equal to the price P, so that P = MR, the profit-maximizing rule for a perfectly competitive firm can also be written as a recommendation to produce at the quantity where P = MC.

Profits and Losses with the Average Cost Curve

Does maximizing profit (producing where MR = MC) imply an actual economic profit? The answer depends on the relationship between price and average total cost. If the price that a firm charges is higher than its average cost of production for that quantity produced, then the firm will earn profits. Conversely, if the price that a firm charges is lower than its average cost of production, the firm will suffer losses. You might think that, in this situation, the farmer may want to shut down immediately. Remember, however, that the firm has already paid for fixed costs, such as equipment, so it may continue to produce and incur a loss. Figure 4 illustrates three situations: (a) where price intersects marginal cost at a level above the average cost curve, (b) where price intersects marginal cost at a level equal to the average cost curve, and (c) where price intersects marginal cost at a level below the average cost curve.

Figure 4. Price and Average Cost at the Raspberry Farm. In (a), price intersects marginal cost above the average cost curve. Since price is greater than average cost, the firm is making a profit. In (b), price intersects marginal cost at the minimum point of the average cost curve. Since price is equal to average cost, the firm is breaking even. In (c), price intersects marginal cost below the average cost curve. Since price is less than average cost, the firm is making a loss.

First consider a situation where the price is equal to $5 for a pack of frozen raspberries. The rule for a profit-maximizing perfectly competitive firm is to produce the level of output where Price = MR = MC, so the raspberry farmer will produce a quantity of 90, which is labeled as E' in Figure 4 (a). Remember that the area of a rectangle is equal to its base multiplied by its height. The farm's total revenue at this price will be shown by the large shaded rectangle from the origin over to a quantity of 90 packs (the base) up to point E' (the height), over to the price of $5, and back to the origin. The average cost of producing 90 packs is shown by point C, or about $3.50. Total costs will be the quantity of 90 times the average cost of $3.50, which is shown by the area of the rectangle from the origin to a quantity of 90, up to point C, over to the vertical axis and down to the origin.
It should be clear from examining the two rectangles that total revenue is greater than total cost. Thus, profits will be the blue shaded rectangle on top. It can be calculated as:

[latex]\begin{array}{r @{{}={}} l}profit & total\;revenue\;-\;total\;cost \\[1em] & (90)(\$5.00)\;-\;(90)(\$3.50) \\[1em] & \$135 \end{array}[/latex]

Or, it can be calculated as:

[latex]\begin{array}{r @{{}={}} l}profit & (price\;-\;average\;cost)\;\times\;quantity \\[1em] & (\$5.00\;-\;\$3.50)\;\times\;90 \\[1em] & \$135 \end{array}[/latex]

Now consider Figure 4 (b), where the price has fallen to $3.00 for a pack of frozen raspberries. Again, the perfectly competitive firm will choose the level of output where Price = MR = MC, but in this case, the quantity produced will be 70. At this price and output level, where the marginal cost curve is crossing the average cost curve, the price received by the firm is exactly equal to its average cost of production. The farm's total revenue at this price will be shown by the large shaded rectangle from the origin over to a quantity of 70 packs (the base) up to point E (the height), over to the price of $3, and back to the origin. The average cost of producing 70 packs is shown by point C'. Total costs will be the quantity of 70 times the average cost of $3.00, which is shown by the area of the rectangle from the origin to a quantity of 70, up to point E, over to the vertical axis and down to the origin. It should be clear from the figure that the rectangles for total revenue and total cost are the same. Thus, the firm is making zero profit. The calculations are as follows:

[latex]\begin{array}{r @{{}={}} l}profit & total\;revenue\;-\;total\;cost \\[1em] & (70)(\$3.00)\;-\;(70)(\$3.00) \\[1em] & \$0 \end{array}[/latex]

[latex]\begin{array}{r @{{}={}} l}profit & (price\;-\;average\;cost)\;\times\;quantity \\[1em] & (\$3.00\;-\;\$3.00)\;\times\;70 \\[1em] & \$0 \end{array}[/latex]

In Figure 4 (c), the market price has fallen still further to $2.00 for a pack of frozen raspberries. At this price, marginal revenue intersects marginal cost at a quantity of 50. The farm's total revenue at this price will be shown by the large shaded rectangle from the origin over to a quantity of 50 packs (the base) up to point E" (the height), over to the price of $2, and back to the origin. The average cost of producing 50 packs is shown by point C", or about $3.30. Total costs will be the quantity of 50 times the average cost of $3.30, which is shown by the area of the rectangle from the origin to a quantity of 50, up to point C", over to the vertical axis and down to the origin. It should be clear from examining the two rectangles that total revenue is less than total cost. Thus, the firm is losing money and the loss (or negative profit) will be the rose-shaded rectangle. The calculations are:

[latex]\begin{array}{r @{{}={}} l}profit & total\;revenue\;-\;total\;cost \\[1em] & (50)(\$2.00)\;-\;(50)(\$3.30) \\[1em] & -\$65 \end{array}[/latex]

[latex]\begin{array}{r @{{}={}} l}profit & (price\;-\;average\;cost)\;\times\;quantity \\[1em] & (\$2.00\;-\;\$3.30)\;\times\;50 \\[1em] & -\$65 \end{array}[/latex]

If the market price received by a perfectly competitive firm leads it to produce at a quantity where the price is greater than average cost, the firm will earn profits. If the price received by the firm causes it to produce at a quantity where price equals average cost, which occurs at the minimum point of the AC curve, then the firm earns zero profits.
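The two profit formulas above are equivalent, which is easy to confirm in a few lines of Python. The price, quantity and average-cost triples below are the ones quoted for panels (a), (b) and (c) of Figure 4; the panel (c) loss of $65 is what a $2.00 price and a $3.30 average cost at 50 packs imply.

# Sketch: the two equivalent profit calculations used for Figure 4.
def profit_from_totals(price, quantity, average_cost):
    return price * quantity - average_cost * quantity

def profit_from_margin(price, quantity, average_cost):
    return (price - average_cost) * quantity

panels = {"(a)": (5.00, 90, 3.50), "(b)": (3.00, 70, 3.00), "(c)": (2.00, 50, 3.30)}
for label, (p, q, ac) in panels.items():
    assert round(profit_from_totals(p, q, ac), 2) == round(profit_from_margin(p, q, ac), 2)
    print(label, round(profit_from_margin(p, q, ac), 2))   # 135.0, 0.0, -65.0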
Finally, if the price received by the firm leads it to produce at a quantity where the price is less than average cost, the firm will earn losses. This is summarized in Table 4.

Price > ATC | Firm earns an economic profit
Price = ATC | Firm earns zero economic profit
Price < ATC | Firm earns a loss
Table 4.

The Shutdown Point

The possibility that a firm may earn losses raises a question: Why can the firm not avoid losses by shutting down and not producing at all? The answer is that shutting down can reduce variable costs to zero, but in the short run, the firm has already paid for fixed costs. As a result, if the firm produces a quantity of zero, it would still make losses because it would still need to pay for its fixed costs. So, when a firm is experiencing losses, it must face a question: should it continue producing or should it shut down?

As an example, consider the situation of the Yoga Center, which has signed a contract to rent space that costs $10,000 per month. If the firm decides to operate, its variable costs for hiring yoga teachers are $15,000 for the month. If the firm shuts down, it must still pay the rent, but it would not need to hire labor. Table 5 shows three possible scenarios. In the first scenario, the Yoga Center does not have any clients, and therefore does not make any revenues, in which case it faces losses of $10,000 equal to the fixed costs. In the second scenario, the Yoga Center has clients that earn the center revenues of $10,000 for the month, but ultimately experiences losses of $15,000 due to having to hire yoga instructors to cover the classes. In the third scenario, the Yoga Center earns revenues of $20,000 for the month, but experiences losses of $5,000.

In all three cases, the Yoga Center loses money. In all three cases, when the rental contract expires in the long run, assuming revenues do not improve, the firm should exit this business. In the short run, though, the decision varies depending on the level of losses and whether the firm can cover its variable costs. In scenario 1, the center does not have any revenues, so hiring yoga teachers would only increase variable costs and losses, so it should shut down and only incur its fixed costs. In scenario 2, the center's losses when open ($15,000) are greater than the $10,000 loss from shutting down, because its revenue does not even cover the variable cost of hiring teachers, so it should shut down immediately. If price is below the minimum average variable cost, the firm must shut down. In contrast, in scenario 3 the revenue that the center can earn is high enough that the losses diminish when it remains open, so the center should remain open in the short run.

If the center shuts down now, revenues are zero but it will not incur any variable costs and would only need to pay fixed costs of $10,000.

[latex]\begin{array}{r @{{}={}} l}profit & total\;revenue\;-\;(fixed\;costs\;+\;variable\;cost) \\[1em] & 0\;-\;\$10,000 \\[1em] & -\$10,000 \end{array}[/latex]

The center earns revenues of $10,000, and variable costs are $15,000. The center should shut down now.

[latex]\begin{array}{r @{{}={}} l}profit & total\;revenue\;-\;(fixed\;costs\;+\;variable\;cost) \\[1em] & \$10,000\;-\;(\$10,000\;+\;\$15,000) \\[1em] & -\$15,000 \end{array}[/latex]

The center earns revenues of $20,000, and variable costs are $15,000. The center should continue in business.

[latex]profit = total\;revenue\;-\;(fixed\;costs\;+\;variable\;cost)[/latex]
[latex]= \$20,000\;-\;(\$10,000\;+\;\$15,000)[/latex]
[latex]= -\$5,000[/latex]

Table 5. Should the Yoga Center Shut Down Now or Later?
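The short-run comparison behind Table 5 can be sketched as a small decision function. This is illustrative only: it assumes the Yoga Center's $10,000 rent is the fixed cost and the $15,000 for teachers is the variable cost of staying open, as in the scenarios above.

# Sketch: operate in the short run only if revenue at least covers variable cost;
# otherwise shut down and absorb the fixed cost alone.
def best_short_run_choice(revenue, fixed_cost, variable_cost):
    loss_if_open = revenue - (fixed_cost + variable_cost)
    loss_if_shut = -fixed_cost
    if revenue >= variable_cost:
        return "operate", loss_if_open
    return "shut down", loss_if_shut

for revenue in (0, 10_000, 20_000):          # the three Yoga Center scenarios
    print(revenue, best_short_run_choice(revenue, 10_000, 15_000))
# -> shut down (-10,000), shut down (-10,000), operate (-5,000)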
This example suggests that the key factor is whether a firm can earn enough revenues to cover at least its variable costs by remaining open. Let's return now to our raspberry farm. Figure 5 illustrates this lesson by adding the average variable cost curve to the marginal cost and average cost curves. At a price of $2.20 per pack, as shown in Figure 5 (a), the farm produces at a level of 50. It is making losses of $56 (as explained earlier), but price is above average variable cost and so the firm continues to operate. However, if the price declined to $1.80 per pack, as shown in Figure 5 (b), and if the firm applied its rule of producing where P = MR = MC, it would produce a quantity of 40. This price is below average variable cost for this level of output. If the farmer cannot pay workers (the variable costs), then it has to shut down. At this price and output, total revenues would be $72 (quantity of 40 times price of $1.80) and total cost would be $144, for overall losses of $72. If the farm shuts down, it must pay only its fixed costs of $62, so shutting down is preferable to selling at a price of $1.80 per pack.

Figure 5. The Shutdown Point for the Raspberry Farm. In (a), the farm produces at a level of 50. It is making losses of $56, but price is above average variable cost, so it continues to operate. In (b), total revenues are $72 and total cost is $144, for overall losses of $72. If the farm shuts down, it must pay only its fixed costs of $62. Shutting down is preferable to selling at a price of $1.80 per pack.

Looking at Table 6, if the price falls below $2.05, the minimum average variable cost, the firm must shut down.

Quantity | Total Cost | Fixed Cost | Variable Cost | Marginal Cost | Average Total Cost | Average Variable Cost
10 | $90 | $62 | $28 | $2.80 | $9.00 | $2.80
20 | $110 | $62 | $48 | $2.00 | $5.50 | $2.40
50 | $166 | $62 | $104 | $2.20 | $3.32 | $2.08
100 | $404 | $62 | $342 | $8.00 | $4.04 | $3.42
Table 6. Cost of Production for the Raspberry Farm

The intersection of the average variable cost curve and the marginal cost curve, which shows the price where the firm would lack enough revenue to cover its variable costs, is called the shutdown point. If the perfectly competitive firm can charge a price above the shutdown point, then the firm is at least covering its average variable costs. It is also making enough revenue to cover at least a portion of fixed costs, so it should limp ahead even if it is making losses in the short run, since at least those losses will be smaller than if the firm shuts down immediately and incurs a loss equal to total fixed costs. However, if the firm is receiving a price below the price at the shutdown point, then the firm is not even covering its variable costs. In this case, staying open is making the firm's losses larger, and it should shut down immediately. To summarize, if:

price < minimum average variable cost, then firm shuts down
price = minimum average variable cost, then firm stays in business

Short-Run Outcomes for Perfectly Competitive Firms

The average cost and average variable cost curves divide the marginal cost curve into three segments, as shown in Figure 6. At the market price, which the perfectly competitive firm accepts as given, the profit-maximizing firm chooses the output level where price or marginal revenue, which are the same thing for a perfectly competitive firm, is equal to marginal cost: P = MR = MC.

Figure 6. Profit, Loss, Shutdown. The marginal cost curve can be divided into three zones, based on where it is crossed by the average cost and average variable cost curves. The point where MC crosses AC is called the zero-profit point.
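Before moving on, here is a minimal sketch of how the shutdown threshold falls out of Table 6: average variable cost is just variable cost divided by quantity, and the lowest AVC marks the shutdown point. Only the quantities listed in Table 6 are used, so the computed minimum ($2.08 at 50 packs) sits slightly above the $2.05 minimum quoted in the text, which lies between the listed quantities.

# Sketch: average variable cost and the shutdown threshold from Table 6.
variable_cost = {10: 28, 20: 48, 50: 104, 100: 342}    # quantity -> variable cost
avc = {q: round(vc / q, 2) for q, vc in variable_cost.items()}
print(avc)                                              # {10: 2.8, 20: 2.4, 50: 2.08, 100: 3.42}
print("shut down below about $", min(avc.values()))     # lowest AVC among the listed quantities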
If the firm is operating at a level of output where the market price is at a level higher than the zero-profit point, then price will be greater than average cost and the firm is earning profits. If the price is exactly at the zero-profit point, then the firm is making zero profits. If price falls in the zone between the shutdown point and the zero-profit point, then the firm is making losses but will continue to operate in the short run, since it is covering its variable costs. However, if price falls below the price at the shutdown point, then the firm will shut down immediately, since it is not even covering its variable costs. First consider the upper zone, where prices are above the level where marginal cost (MC) crosses average cost (AC) at the zero profit point. At any price above that level, the firm will earn profits in the short run. If the price falls exactly on the zero profit point where the MC and AC curves cross, then the firm earns zero profits. If a price falls into the zone between the zero profit point, where MC crosses AC, and the shutdown point, where MC crosses AVC, the firm will be making losses in the short run—but since the firm is more than covering its variable costs, the losses are smaller than if the firm shut down immediately. Finally, consider a price at or below the shutdown point where MC crosses AVC. At any price like this one, the firm will shut down immediately, because it cannot even cover its variable costs. Marginal Cost and the Firm's Supply Curve For a perfectly competitive firm, the marginal cost curve is identical to the firm's supply curve starting from the minimum point on the average variable cost curve. To understand why this perhaps surprising insight holds true, first think about what the supply curve means. A firm checks the market price and then looks at its supply curve to decide what quantity to produce. Now, think about what it means to say that a firm will maximize its profits by producing at the quantity where P = MC. This rule means that the firm checks the market price, and then looks at its marginal cost to determine the quantity to produce—and makes sure that the price is greater than the minimum average variable cost. In other words, the marginal cost curve above the minimum point on the average variable cost curve becomes the firm's supply curve. Watch this video that addresses how drought in the United States can impact food prices across the world. (Note that the story on the drought is the second one in the news report; you need to let the video play through the first story in order to watch the story on the drought.) As discussed in the chapter on Demand and Supply, many of the reasons that supply curves shift relate to underlying changes in costs. For example, a lower price of key inputs or new technologies that reduce production costs cause supply to shift to the right; in contrast, bad weather or added government regulations can add to costs of certain goods in a way that causes supply to shift to the left. These shifts in the firm's supply curve can also be interpreted as shifts of the marginal cost curve. A shift in costs of production that increases marginal costs at all levels of output—and shifts MC to the left—will cause a perfectly competitive firm to produce less at any given market price. Conversely, a shift in costs of production that decreases marginal costs at all levels of output will shift MC to the right and as a result, a competitive firm will choose to expand its level of output at any given price. 
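The three zones described above translate directly into a small classification rule. In the sketch below the two thresholds are taken from the surrounding raspberry-farm discussion and should be read as assumptions for illustration: a zero-profit point of $3.00 (the break-even price in Figure 4 (b)) and a shutdown point of $2.05 (the minimum average variable cost quoted with Table 6).

# Sketch: classify a market price into the three zones of the marginal cost curve.
# The default thresholds are assumed from the raspberry-farm figures, not exact curve minima.
def short_run_outcome(price, shutdown_point=2.05, zero_profit_point=3.00):
    if price > zero_profit_point:
        return "economic profit"
    if price == zero_profit_point:
        return "zero economic profit"
    if price >= shutdown_point:
        return "losses, but keep operating in the short run"
    return "shut down immediately"

for p in (5.00, 3.00, 2.20, 1.80):
    print(p, short_run_outcome(p))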
The following Work It Out feature will walk you through an example.

At What Price Should the Firm Continue Producing in the Short Run?

To determine the short-run economic condition of a firm in perfect competition, follow the steps outlined below. Use the data shown in Table 7.

Quantity | Price | Total Fixed Cost | Total Variable Cost
0 | $28 | $20 | $0
1 | $28 | $20 | $20
2 | $28 | $20 | $25
3 | $28 | $20 | $35
4 | $28 | $20 | $52
5 | $28 | $20 | $80
Table 7.

Step 1. Determine the cost structure for the firm. For a given total fixed costs and variable costs, calculate total cost, average variable cost, average total cost, and marginal cost. Follow the formulas given in the Cost and Industry Structure chapter. These calculations are shown in Table 8.

Quantity | Price | Total Fixed Cost | Total Variable Cost | Total Cost (TFC+TVC) | Average Variable Cost (TVC/Q) | Average Total Cost (TC/Q) | Marginal Cost (TC2−TC1)/(Q2−Q1)
0 | $28 | $20 | $0 | $20+$0=$20 | – | – | –
1 | $28 | $20 | $20 | $20+$20=$40 | $20/1=$20.00 | $40/1=$40.00 | ($40−$20)/(1−0)=$20
2 | $28 | $20 | $25 | $20+$25=$45 | $25/2=$12.50 | $45/2=$22.50 | ($45−$40)/(2−1)=$5
3 | $28 | $20 | $35 | $20+$35=$55 | $35/3=$11.67 | $55/3=$18.33 | ($55−$45)/(3−2)=$10
4 | $28 | $20 | $52 | $20+$52=$72 | $52/4=$13.00 | $72/4=$18.00 | ($72−$55)/(4−3)=$17
5 | $28 | $20 | $80 | $20+$80=$100 | $80/5=$16.00 | $100/5=$20.00 | ($100−$72)/(5−4)=$28
Table 8.

Step 2. Determine the market price that the firm receives for its product. This should be given information, as the firm in perfect competition is a price taker. With the given price, calculate total revenue as equal to price multiplied by quantity for all output levels produced. In this example, the given price is $28. You can see that in the second column of Table 9.

Quantity | Price | Total Revenue (P × Q)
0 | $28 | $28×0=$0
1 | $28 | $28×1=$28
2 | $28 | $28×2=$56
3 | $28 | $28×3=$84
4 | $28 | $28×4=$112
5 | $28 | $28×5=$140
Table 9.

Step 3. Calculate profits as total cost subtracted from total revenue, as shown in Table 10.

Quantity | Total Revenue | Total Cost | Profits (TR−TC)
0 | $0 | $20 | $0−$20=−$20
1 | $28 | $40 | $28−$40=−$12
2 | $56 | $45 | $56−$45=$11
3 | $84 | $55 | $84−$55=$29
4 | $112 | $72 | $112−$72=$40
5 | $140 | $100 | $140−$100=$40
Table 10.

Step 4. To find the profit-maximizing output level, look at the Marginal Cost column (at every output level produced), as shown in Table 11, and determine where it is equal to the market price. The output level where price equals the marginal cost is the output level that maximizes profits.

Quantity | Price | Total Fixed Cost | Total Variable Cost | Total Cost | Average Variable Cost | Average Total Cost | Marginal Cost | Total Revenue | Profits
0 | $28 | $20 | $0 | $20 | – | – | – | $0 | −$20
1 | $28 | $20 | $20 | $40 | $20.00 | $40.00 | $20 | $28 | −$12
2 | $28 | $20 | $25 | $45 | $12.50 | $22.50 | $5 | $56 | $11
3 | $28 | $20 | $35 | $55 | $11.67 | $18.33 | $10 | $84 | $29
4 | $28 | $20 | $52 | $72 | $13.00 | $18.00 | $17 | $112 | $40
5 | $28 | $20 | $80 | $100 | $16.00 | $20.00 | $28 | $140 | $40
Table 11.

Step 5. Once you have determined the profit-maximizing output level (in this case, output quantity 5), you can look at the amount of profits made (in this case, $40).

Step 6. If the firm is making economic losses, the firm needs to determine whether it produces the output level where price equals marginal revenue and equals marginal cost or it shuts down and only incurs its fixed costs.

Step 7. For the output level where marginal revenue is equal to marginal cost, check if the market price is greater than the average variable cost of producing that output level. If P > AVC but P < ATC, then the firm continues to produce in the short run, making economic losses. If P < AVC, then the firm stops producing and only incurs its fixed costs. In this example, the price of $28 is greater than the AVC ($16.00) of producing 5 units of output, so the firm continues producing.

As a perfectly competitive firm produces a greater quantity of output, its total revenue steadily increases at a constant rate determined by the given market price. Profits will be highest (or losses will be smallest) at the quantity of output where total revenues exceed total costs by the greatest amount (or where total revenues fall short of total costs by the smallest amount). Alternatively, profits will be highest where marginal revenue, which is price for a perfectly competitive firm, is equal to marginal cost.
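The seven steps above are easy to reproduce programmatically. Below is a compact sketch using the Table 7 data (a $28 market price, $20 of total fixed cost and the listed total variable costs); it recovers the profit-maximizing quantity, the profit at that output, and the Step 7 check that price exceeds average variable cost.

# Sketch of Steps 1-7 for the firm in Tables 7-11.
price, tfc = 28, 20
tvc = {0: 0, 1: 20, 2: 25, 3: 35, 4: 52, 5: 80}        # Table 7

tc = {q: tfc + v for q, v in tvc.items()}               # Step 1: total cost
tr = {q: price * q for q in tvc}                        # Step 2: total revenue
profit = {q: tr[q] - tc[q] for q in tvc}                # Step 3: profit
mc = {q: tc[q] - tc[q - 1] for q in tvc if q > 0}       # Step 1/4: marginal cost

q_star = max(q for q in mc if mc[q] <= price)           # Step 4: expand while MC <= price
avc_at_q = tvc[q_star] / q_star                         # Step 7: AVC at that output
print(q_star, profit[q_star], mc[q_star], avc_at_q)     # -> 5 40 28 16.0
print("continue producing" if price > avc_at_q else "shut down")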
If the market price faced by a perfectly competitive firm is above average cost at the profit-maximizing quantity of output, then the firm is making profits. If the market price is below average cost at the profit-maximizing quantity of output, then the firm is making losses. If the market price is equal to average cost at the profit-maximizing level of output, then the firm is making zero profits. The point where the marginal cost curve crosses the average cost curve, at the minimum of the average cost curve, is called the "zero profit point." If the market price faced by a perfectly competitive firm is below average variable cost at the profit-maximizing quantity of output, then the firm should shut down operations immediately. If the market price faced by a perfectly competitive firm is above average variable cost, but below average cost, then the firm should continue producing in the short run, but exit in the long run. The point where the marginal cost curve crosses the average variable cost curve is called the shutdown point.

Look at Table 12. What would happen to the firm's profits if the market price increases to $6 per pack of raspberries?

Quantity | Total Cost | Fixed Cost | Variable Cost | Total Revenue | Profits
50 | $166 | $62 | $104 | $300 | $134
100 | $404 | $62 | $342 | $600 | $196
Table 12.

Suppose that the market price increases to $6, as shown in Table 13. What would happen to the profit-maximizing output level?

Quantity | Total Cost | Fixed Cost | Variable Cost | Marginal Cost | Total Revenue | Marginal Revenue
0 | $62 | $62 | – | – | $0 | –
Table 13.

Explain in words why a profit-maximizing firm will not choose to produce at a quantity where marginal cost exceeds marginal revenue.

A firm's marginal cost curve above the average variable cost curve is equal to the firm's individual supply curve. This means that every time a firm receives a price from the market it will be willing to supply the amount of output where the price equals marginal cost. What happens to the firm's individual supply curve if marginal costs increase?

How does a perfectly competitive firm decide what price to charge?

What prevents a perfectly competitive firm from seeking higher profits by increasing the price that it charges?

How does a perfectly competitive firm calculate total revenue?

Briefly explain the reason for the shape of a marginal revenue curve for a perfectly competitive firm.

What two rules does a perfectly competitive firm apply to determine its profit-maximizing quantity of output?

How does the average cost curve help to show whether a firm is making profits or losses?

What two lines on a cost curve diagram intersect at the zero-profit point?

Should a firm shut down immediately if it is making losses?

How does the average variable cost curve help a firm know whether it should shut down immediately?

What two lines on a cost curve diagram intersect at the shutdown point?

Your company operates in a perfectly competitive market. You have been told that advertising can help you increase your sales in the short run. Would you create an aggressive advertising campaign for your product?

Since a perfectly competitive firm can sell as much as it wishes at the market price, why can the firm not simply increase its profits by selling an extremely high quantity?

The AAA Aquarium Co. sells aquariums for $20 each. Fixed costs of production are $20. The total variable costs are $20 for one aquarium, $25 for two units, $35 for three units, $50 for four units, and $80 for five units. In the form of a table, calculate total revenue, marginal revenue, total cost, and marginal cost for each output level (one to five units). What is the profit-maximizing quantity of output? (A code sketch of this tabulation appears after the review material below.)
On one diagram, sketch the total revenue and total cost curves. On another diagram, sketch the marginal revenue and marginal cost curves.

Perfectly competitive firm Doggies Paradise Inc. sells winter coats for dogs. Dog coats sell for $72 each. The fixed costs of production are $100. The total variable costs are $64 for one unit, $84 for two units, $114 for three units, $184 for four units, and $270 for five units. In the form of a table, calculate total revenue, marginal revenue, total cost and marginal cost for each output level (one to five units). On one diagram, sketch the total revenue and total cost curves. On another diagram, sketch the marginal revenue and marginal cost curves. What is the profit-maximizing quantity?

A computer company produces affordable, easy-to-use home computer systems and has fixed costs of $250. The marginal cost of producing computers is $700 for the first computer, $250 for the second, $300 for the third, $350 for the fourth, $400 for the fifth, $450 for the sixth, and $500 for the seventh. Create a table that shows the company's output, total cost, marginal cost, average cost, variable cost, and average variable cost. At what price is the zero-profit point? At what price is the shutdown point? If the company sells the computers for $500, is it making a profit or a loss? How big is the profit or loss? Sketch a graph with AC, MC, and AVC curves to illustrate your answer and show the profit or loss. If the firm sells the computers for $300, is it making a profit or a loss? How big is the profit or loss? Sketch a graph with AC, MC, and AVC curves to illustrate your answer and show the profit or loss.

marginal revenue: the additional revenue gained from selling one more unit

shutdown point: level of output where the marginal cost curve intersects the average variable cost curve at the minimum point of AVC; if the price is below this point, the firm should shut down immediately

Holding total cost constant, profits at every output level would increase.

When the market price increases, marginal revenue increases. The firm would then increase production up to the point where the new price equals marginal cost, at a quantity of 90.

If marginal cost exceeds marginal revenue, then the firm will reduce its profits for every additional unit of output it produces. Profit would be greatest if it reduces output to where MR = MC.

The firm will be willing to supply fewer units at every price level. In other words, the firm's individual supply curve decreases and shifts to the left.
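As noted above, here is a minimal Python sketch of the tabulation requested in the AAA Aquarium exercise, using the $20 price, $20 fixed cost and the variable costs given in the problem. It prints quantity, total revenue, marginal revenue, total cost and marginal cost for one to five aquariums; identifying the profit-maximizing quantity from the printed rows is left to the reader.

# Sketch for the AAA Aquarium exercise: tabulate TR, MR, TC and MC.
price, fixed_cost = 20, 20
total_variable_cost = {1: 20, 2: 25, 3: 35, 4: 50, 5: 80}   # data from the problem statement

prev_tr, prev_tc = 0, fixed_cost
for q, vc in total_variable_cost.items():
    tr, tc = price * q, fixed_cost + vc
    print(q, tr, tr - prev_tr, tc, tc - prev_tc)   # quantity, TR, MR, TC, MC
    prev_tr, prev_tc = tr, tc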
Large-eddy simulation of small-scale Langmuir circulation and scalar transport A. E. Tejada-Martínez, A. Hafsi, C. Akan, M. Juha, F. Veron Journal: Journal of Fluid Mechanics / Volume 885 / 25 February 2020 Published online by Cambridge University Press: 18 December 2019, A5 Print publication: 25 February 2020 Large-eddy simulation (LES) of a wind- and wave-forced water column based on the Craik–Leibovich (C–L) vortex force is used to understand the structure of small-scale Langmuir circulation (LC) and associated Langmuir turbulence. The LES also serves to understand the role of the turbulence in determining molecular diffusive scalar flux from a scalar-saturated air side to the water side and the turbulent vertical scalar flux in the water side. Previous laboratory experiments have revealed that small-scale LC beneath an initially quiescent air–water interface appears shortly after the initiation of wind-driven gravity–capillary waves and provides the laminar–turbulent transition in wind speeds between 3 and $6~\text{m}~\text{s}^{-1}$ . The LES reveals Langmuir turbulence characterized by multiple scales ranging from small bursting eddies at the surface that coalesce to give rise to larger (centimetre-scale) LC over time. It is observed that the smaller scales account for the bulk of the near-surface turbulent vertical scalar flux. Although the contribution of the larger (centimetre-scale) LC to the near-surface turbulent flux increases over time as these scales emerge and become more coherent, the contribution of the smaller scales remains dominant. The growing LC scales lead to increased vertical scalar transport at depths below the interface and thus greater scalar transfer efficiency. Simulations were performed with a fixed wind stress corresponding to a $5~\text{m}~\text{s}^{-1}$ wind speed but with different wave parameters (wavelength and amplitude) in the C–L vortex force. It is observed that longer wavelengths lead to more coherent, larger centimetre-scale LC providing greater contribution to the turbulent vertical scalar flux away from the surface. In all cases, the molecular diffusive scalar flux at the water surface relaxes to the same statistically steady value after transition to Langmuir turbulence occurs, despite the different wave parameters in the C–L vortex force across the simulations. This implies that the small-scale turbulence intensity and the molecular diffusive scalar flux at the surface scale with the wind shear and not with the wave parameters. Furthermore, it is seen that the Langmuir (wave) forcing (provided by the C–L vortex force) is necessary to trigger the turbulence that induces elevated molecular diffusive scalar flux at the water surface relative to wind-driven flow without wave forcing. Supersaturation fluctuations in moist turbulent Rayleigh–Bénard convection: a two-scalar transport problem Kamal Kant Chandrakar, Will Cantrell, Steven Krueger, Raymond A. Shaw, Scott Wunsch Published online by Cambridge University Press: 09 December 2019, A19 Moist Rayleigh–Bénard convection with water saturated boundaries is explored using a One-Dimensional Turbulence model. The system involves both temperature $T$ and water vapour pressure $e_{v}$ as driving scalars.
The emphasis of the work is on a supersaturation $s$ , a nonlinear combination of $T$ and $e_{v}$ that is crucial to cloud formation. Its mean as well as fluctuation statistics determine cloud droplet growth and therefore precipitation formation and cloud optical properties. To explore the role of relative scalar diffusivities for temperature ( $D_{t}$ ) and water vapour ( $D_{v}$ ), three different regimes are considered: $D_{v}>D_{t}$ , $D_{v}\approx D_{t}$ and $D_{v}<D_{t}$ . Scalar fluxes (Nusselt number, $Nu$ and Sherwood number, $Sh$ ) and their scalings with moist Rayleigh number $Ra_{moist}$ are consistent with previous studies of one-component convection. Moreover, variances of the scalars in the bulk region increase with their diffusivities and also reasonably follow derived scaling expressions. Eulerian properties plotted in $(T,e_{v})$ coordinates have a different slope compared to an idealized mixing process. Additionally, the scalars are highly correlated, even in the cases of high relative diffusivities (factor of four) $D_{v}$ and $D_{t}$ . Based on the above fact and the scaling relation of the scalars, the supersaturation variance is found to vary approximately as $Ra_{moist}^{5/3}$ , in agreement with numerical results. Finally, the supersaturation profile in the boundary layer is explored and compares well with scalar boundary layer models. A sharp peak appears in the boundary-layer-supersaturation profile, not only in the variance but also in the mean profile, due to relative diffusivities of the scalars. The turbulent Faraday instability in miscible fluids Antoine Briard, Louis Gostiaux, Benoît-Joseph Gréa Journal: Journal of Fluid Mechanics / Volume 883 / 25 January 2020 Published online by Cambridge University Press: 29 November 2019, A57 Print publication: 25 January 2020 Experiments of a turbulent mixing zone created by the Faraday instability at the statically stable interface between salt and fresh water are presented. The two-layer system, contained in a cuboidal tank of large dimensions, is accelerated vertically and periodically at various frequencies and amplitudes for three different density contrasts. We have developed a linear approach accounting for the full inhomogeneous and viscous problem, that is applied to a linear piecewise background density profile, and recovers the limiting cases of interface and homogeneous turbulence with a fully developed mixing layer. At onset, the wavelength of the most amplified modes and the corresponding Floquet exponent of the interface both verify our predictions. The dynamics is rather different when the instability is triggered from a sharp or diffuse interface: in the latter case, a change of characteristic wavelengths can be observed experimentally and explained by the theory. In the turbulent regime, the time evolution of the mixing zone size $L(t)$ for various experimental configurations compares well with confined direct numerical simulations. For some initial conditions, a short harmonic response of the instability is observed before the usual subharmonic one. Finally, the ultimate size of the mixing layer $L_{end}$ , measured with a probe after the saturation of the instability and end of the forcing, is in excellent agreement with the recent theoretical prediction $L_{sat}=2{\mathcal{A}}g_{0}(2F+4)/\unicode[STIX]{x1D714}^{2}$ , where $g_{0}$ is the gravitational acceleration, ${\mathcal{A}}$ the Atwood number, $\unicode[STIX]{x1D714}/2\unicode[STIX]{x03C0}$ the frequency and $F$ the acceleration ratio. 
A perturbation approach to understanding the effects of turbulence on frontogenesis Abigail S. Bodner, Baylor Fox-Kemper, Luke P. Van Roekel, James C. McWilliams, Peter P. Sullivan Ocean fronts are an important submesoscale feature, yet frontogenesis theory often neglects turbulence – even parameterized turbulence – leaving theory lacking in comparison with observations and models. A perturbation analysis is used to include the effects of eddy viscosity and diffusivity as a first-order correction to existing strain-induced inviscid, adiabatic frontogenesis theory. A modified solution is obtained by using potential vorticity and surface conditions to quantify turbulent fluxes. It is found that horizontal viscosity and diffusivity tend to be readily frontolytic – reducing frontal tendency to negative values under weakly non-conservative perturbations and opposing or reversing front sharpening, whereas vertical viscosity and diffusivity tend to only weaken frontogenesis by slowing the rate of sharpening of the front even under strong perturbations. During late frontogenesis, vertical diffusivity enhances the rate of frontogenesis, although perturbation theory may be inaccurate at this stage. Surface quasi-geostrophic theory – neglecting all injected interior potential vorticity – is able to describe the first-order correction to the along-front velocity and ageostrophic overturning circulation in most cases. Furthermore, local conditions near the front maximum are sufficient to reconstruct the modified solution of both these fields. The entrainment and energetics of turbulent plumes in a confined space John Craske, Megan S. Davies Wykes Published online by Cambridge University Press: 20 November 2019, A2 We analyse the entrainment and energetics of equal and opposite axisymmetric turbulent air plumes in a vertically confined space at a Rayleigh number of $1.24\times 10^{7}$ using theory and direct numerical simulation. On domains of sufficiently large aspect ratio, the steady state consists of turbulent plumes penetrating an interface between two layers of approximately uniform buoyancy. As described by Baines & Turner (J. Fluid Mech., vol. 37(1), 1969, pp. 51–80), upon penetrating the interface the flow in each plume becomes forced and behaves like a constant-momentum jet, due to a reduction in its mean buoyancy relative to the local environment. To observe the behaviour of the plumes we partition the domain into sub-domains corresponding to each plume. Domains of relatively small aspect ratio produce a single primary mean-flow circulation between the sub-domains that is maintained by entrainment into the plumes. At larger aspect ratios the mean flow between the sub-domains bifurcates, indicating the existence of a secondary circulation within each layer associated with entrainment into the jets. The largest aspect ratios studied here exhibit an additional, tertiary, circulation in the vicinity of the interface. Consistency between independent calculations of an effective entrainment coefficient allows us to identify aspect ratios for which the flow can be modelled using plume theory, under the assumption of a two-layer stratification. To study the flow's energetics we use a local definition of available potential energy (APE). For plumes with Gaussian velocity and buoyancy profiles, the theory we develop suggests that the kinetic energy dissipation is split equally between the jets and the plumes and, collectively, accounts for almost half of the input of APE at the boundaries. 
In contrast, 1/4 of the APE dissipation and background potential energy (BPE) production occurs in the jets, with the remaining 3/4 occurring in the plumes. These bulk theoretical predictions agree with observations of BPE production from simulations to within 1 % and form the basis of a similarity solution that models the vertical dependence of APE dissipation and BPE production. Unlike results concerning the dissipation of buoyancy variance and the strength of the circulations described above, the model for the flow's energetics does not involve an entrainment coefficient. Richtmyer–Meshkov instability of an unperturbed interface subjected to a diffracted convergent shock Liyong Zou, Mahamad Al-Marouf, Wan Cheng, Ravi Samtaney, Juchun Ding, Xisheng Luo Journal: Journal of Fluid Mechanics / Volume 879 / 25 November 2019 Print publication: 25 November 2019 The Richtmyer–Meshkov (RM) instability is numerically investigated on an unperturbed interface subjected to a diffracted convergent shock created by diffracting an initially cylindrical shock over a rigid cylinder. Four gas interfaces are considered with Atwood number ranging from $-0.18$ to 0.67. Results indicate that the diffracted convergent shock increases its strength gradually and reduces its amplitude quickly when it propagates towards the convergence centre. After the strike of the diffracted convergent shock, the initially unperturbed interface deforms with a bulge structure at the centre and two interface steps at both sides, which can be ascribed to the non-uniformity of the pressure distribution behind the diffracted convergent shock. With the decrease of Atwood number, the bulge structure becomes more pronounced. Quantitatively, the interface amplitude experiences a fast but short growing stage and then enters a linear stage. A good collapse of the dimensionless amplitude is found for all cases, which indicates a weak dependence of the growth rate on Atwood number in the deformed shock-induced RM instability. Then the impulsive theory is modified by eliminating the Atwood number and considering the geometry convergence, which well predicts the amplitude growth for the deformed shock-induced RM instability. Finally, the underlying mechanism is decoupled into three parts, and it is found that both the impulsive pressure perturbation and the geometry convergence promote the growth of interface perturbation while the continuous pressure perturbation inhibits the growth. As the Atwood number decreases, the impulsive perturbation plays an increasingly important role, which suggests that the impulsive perturbation dominates the deformed shock-induced RM instability at the linear stage. Behaviour of small-scale turbulence in the turbulent/non-turbulent interface region of developing turbulent jets M. Breda, O. R. H. Buxton Tomographic particle image velocimetry experiments were conducted in the near and intermediate fields of two different types of jet, one fitted with a circular orifice and another fitted with a repeating-fractal-pattern orifice. Breda & Buxton (J. Vis., vol. 21 (4), 2018, pp. 525–532; Phys. Fluids, vol. 30, 2018, 035109) showed that this fractal geometry suppressed the large-scale coherent structures present in the near field and affected the rate of entrainment of background fluid into, and subsequent development of, the fractal jet, relative to the round jet. 
In light of these findings we now examine the modification of the turbulent/non-turbulent interface (TNTI) and spatial evolution of the small-scale behaviour of these different jets, which are both important factors behind determining the entrainment rate. This evolution is examined in both the streamwise direction and within the TNTI itself where the fluid adapts from a non-turbulent state, initially through the direct action of viscosity and then through nonlinear inertial processes, to the state of the turbulence within the bulk of the flow over a short distance. We show that the suppression of the coherent structures in the fractal jet leads to a less contorted interface, with large-scale excursions of the inner TNTI (that between the jet's azimuthal shear layer and the potential core) being suppressed. Further downstream, the behaviour of the TNTI is shown to be comparable for both jets. The velocity gradients develop into a canonical state with streamwise distance, manifested as the development of the classical tear-drop shaped contours of the statistical distribution of the velocity-gradient-tensor invariants $\mathit{Q}$ and $\mathit{R}$ . The velocity gradients also develop spatially through the TNTI from the irrotational boundary to the bulk flow; in particular, there is a strong small-scale anisotropy in this region. This strong inhomogeneity of the velocity gradients in the TNTI region has strong consequences for the scaling of the thickness of the TNTI in these spatially developing flows since both the Taylor and Kolmogorov length scales are directly computed from the velocity gradients. Convergent Richtmyer–Meshkov instability of a heavy gas layer with perturbed outer interface Juchun Ding, Jianming Li, Rui Sun, Zhigang Zhai, Xisheng Luo The evolution of an $\text{SF}_{6}$ layer surrounded by air is experimentally studied in a semi-annular convergent shock tube by high-speed schlieren photography. The gas layer with a sinusoidal outer interface and a circular inner interface is realized by the soap-film technique such that the initial condition is well controlled. Results show that the thicker the gas layer, the weaker the interface–coupling effect and the slower the evolution of the outer interface. Induced by the distorted transmitted shock and the interface coupling, the inner interface exhibits a slow perturbation growth which can be largely suppressed by reducing the layer thickness. After the reshock, the inner perturbation increases linearly at a growth rate independent of the initial layer thickness as well as of the outer perturbation amplitude and wavelength, and the growth rate can be well predicted by the model of Mikaelian (Physica D, vol. 36, 1989, pp. 343–357) with an empirical coefficient of 0.31. After the linear stage, the growth rate decreases continuously, and finally the perturbation freezes at a constant amplitude caused by the successive stagnation of spikes and bubbles. The convergent geometry constraint as well as the very weak compressibility at late stages are responsible for this instability freeze-out. Lagrangian coherent structures and entrainment near the turbulent/non-turbulent interface of a gravity current Marius M. Neamtu-Halic, Dominik Krug, George Haller, Markus Holzner In this paper, we employ the theory of Lagrangian coherent structures for three-dimensional vortex eduction and investigate the effect of large-scale vortical structures on the turbulent/non-turbulent interface (TNTI) and entrainment of a gravity current. 
The gravity current is realized experimentally and different levels of stratification are examined. For flow measurements, we use a multivolume three-dimensional particle tracking velocimetry technique. To identify vortical Lagrangian coherent structures (VLCSs), a fully automated three-dimensional extraction algorithm for multiple flow structures based on the so-called Lagrangian-averaged vorticity deviation method is implemented. The size, the orientation and the shape of the VLCSs are analysed and the results show that these characteristics depend only weakly on the strength of the stratification. Through conditional analysis, we provide evidence that VLCSs modulate the average TNTI height, consequently affecting the entrainment process. Furthermore, VLCSs influence the local entrainment velocity and organize the flow field on both the turbulent and non-turbulent sides of the gravity current boundary. Nonlinear behaviour of convergent Richtmyer–Meshkov instability Xisheng Luo, Ming Li, Juchun Ding, Zhigang Zhai, Ting Si A novel shock tube is designed to investigate the nonlinear feature of convergent Richtmyer–Meshkov instability on a single-mode interface formed by a soap film technique. The shock tube employs a concave–oblique–convex wall profile which first transforms a planar shock into a cylindrical arc, then gradually strengthens the cylindrical shock along the oblique wall, and finally converts it back into a planar one. Therefore, the new facility can realize analysis on compressibility and nonlinearity of convergent Richtmyer–Meshkov instability by eliminating the interface deceleration and reshock. Five sinusoidal $\text{air}{-}\text{SF}_{6}$ interfaces with different amplitudes and wavelengths are considered. For all cases, the perturbation amplitude experiences a linear growth much longer than that in the planar geometry. A compressible linear model is derived by considering a constant uniform fluid compression, which shows a slight difference to the incompressible theory. However, both the linear models overestimate the perturbation growth from a very early stage due to the presence of strong nonlinearity. The nonlinear model of Wang et al. (Phys. Plasmas, vol. 22, 2015, 082702) is demonstrated to predict well the amplitude growth up to a normalized time of 1.0. The prolongation of the linear increment is mainly ascribed to the counteraction between the promotion by geometric convergence and the suppression by nonlinearity. Growths of the first three harmonics, obtained by a Fourier analysis of the interface contour, provide a first thorough validation of the nonlinear theory. Turbulent shear-layer mixing: initial conditions, and direct-numerical and large-eddy simulations Nek Sharan, Georgios Matheou, Paul E. Dimotakis Aspects of turbulent shear-layer mixing are investigated over a range of shear-layer Reynolds numbers, $Re_{\unicode[STIX]{x1D6FF}}=\unicode[STIX]{x0394}U\unicode[STIX]{x1D6FF}/\unicode[STIX]{x1D708}$ , based on the shear-layer free-stream velocity difference, $\unicode[STIX]{x0394}U$ , and mixing-zone thickness, $\unicode[STIX]{x1D6FF}$ , to probe the role of initial conditions in mixing stages and the evolution of the scalar-field probability density function (p.d.f.) and variance. Scalar transport is calculated for unity Schmidt numbers, approximating gas-phase diffusion. 
The study is based on direct-numerical simulation (DNS) and large-eddy simulation (LES), comparing different subgrid-scale (SGS) models for incompressible, uniform-density, temporally evolving forced shear-layer flows. Moderate-Reynolds-number DNS results help assess and validate LES SGS models in terms of scalar-spectrum and mixing estimates, as well as other metrics, to $Re_{\unicode[STIX]{x1D6FF}}\lesssim 3.3\times 10^{4}$ . High-Reynolds-number LES investigations to $Re_{\unicode[STIX]{x1D6FF}}\lesssim 5\times 10^{5}$ help identify flow parameters and conditions that influence the evolution of scalar variance and p.d.f., e.g. marching versus non-marching. Initial conditions that generate shear flows with different mixing behaviour elucidate flow characteristics in each flow regime and identify elements that induce p.d.f. transition and scalar-variance behaviour. P.d.f. transition is found to be largely insensitive to local flow parameters, such as $Re_{\unicode[STIX]{x1D6FF}}$ , or a previously proposed vortex-pairing parameter based on downstream distance, or other equivalent criteria. The present study also allows a quantitative comparison of LES SGS models in moderate- and high- $Re_{\unicode[STIX]{x1D6FF}}$ forced shear-layer flows. Evolution of thermally stratified turbulent open channel flow after removal of the heat source Michael P. Kirkpatrick, N. Williamson, S. W. Armfield, V. Zecevic Evolution of thermally stratified open channel flow after removal of a volumetric heat source is investigated using direct numerical simulation. The heat source models radiative heating from above and varies with height due to progressive absorption. After removal of the heat source the initial stable stratification breaks down and the channel approaches a fully mixed isothermal state. The initial state consists of three distinct regions: a near-wall region where stratification plays only a minor role, a central region where stratification has a significant effect on flow dynamics and a near-surface region where buoyancy effects dominate. We find that a state of local energetic equilibrium observed in the central region of the channel in the initial state persists until the late stages of the destratification process. In this region local turbulence parameters such as eddy diffusivity $k_{h}$ and flux Richardson number $R_{f}$ are found to be functions only of the Prandtl number $Pr$ and a mixed parameter ${\mathcal{Q}}$ , which is equal to the ratio of the local buoyancy Reynolds number $Re_{b}$ and the friction Reynolds number $Re_{\unicode[STIX]{x1D70F}}$ . Close to the top and bottom boundaries turbulence is also affected by $Re_{\unicode[STIX]{x1D70F}}$ and vertical position $z$ . In the initial heated equilibrium state the laminar surface layer is stabilised by the heat source, which acts as a potential energy sink. Removal of the heat source allows Kelvin–Helmholtz-like shear instabilities to form that lead to a rapid transition to turbulence and significantly enhance the mixing process. The destratifying flow is found to be governed by bulk parameters $Re_{\unicode[STIX]{x1D70F}}$ , $Pr$ and the friction Richardson number $Ri_{\unicode[STIX]{x1D70F}}$ . The overall destratification rate ${\mathcal{D}}$ is found to be a function of $Ri_{\unicode[STIX]{x1D70F}}$ and $Pr$ . A semi-Lagrangian direct-interaction closure of the spectra of isotropic variable-density turbulence David J. Petty, C. 
Pantano A study of variable-density homogeneous stationary isotropic turbulence based on the sparse direct-interaction perturbation (SDIP) and supporting direct numerical simulations (DNS) is presented. The non-solenoidal flow considered here is an example of turbulent mixing of gases with different densities. The spectral statistics of this type of flow are substantially more difficult to understand theoretically than those of the similar solenoidal flows. In the approach described here, the nonlinearly coupled velocity and scalar (which determine the density of the fluid) equations are expanded in terms of a normalised density ratio parameter. A new set of coupled integro-differential SDIP equations are derived and then solved numerically for the first-order correction to the incompressible equations in the variable-density expansion parameter. By adopting a regular expansion approach, one obtains leading-order corrections that are universal and therefore interesting in their own right. The predictions are then compared with DNS of forced variable-density flow with different density contrasts. It is found that the velocity spectrum owing to variable density is indistinguishable from that of constant-density turbulence, as it is supported by a wealth of indirect experimental evidence, but the scalar spectra show significant deviations, and even loss of monotonicity, as a function of the type and strength of the large-scale source of the mixing. Furthermore, the analysis helps clarify what may be the proper approach to interpret the power spectrum of variable-density turbulence. Turbulent temperature fluctuations in a closed Rayleigh–Bénard convection cell Yin Wang, Xiaozhou He, Penger Tong Journal: Journal of Fluid Mechanics / Volume 874 / 10 September 2019 Print publication: 10 September 2019 We report a systematic study of spatial variations of the probability density function (PDF) $P(\unicode[STIX]{x1D6FF}T)$ for temperature fluctuations $\unicode[STIX]{x1D6FF}T$ in turbulent Rayleigh–Bénard convection along the central axis of two different convection cells. One of the convection cells is a vertical thin disk and the other is an upright cylinder of aspect ratio unity. By changing the distance $z$ away from the bottom conducting plate, we find the functional form of the measured $P(\unicode[STIX]{x1D6FF}T)$ in both cells evolves continuously with distinct changes in four different flow regions, namely, the thermal boundary layer, mixing zone, turbulent bulk region and cell centre. By assuming temperature fluctuations in different flow regions are all made from two independent sources, namely, a homogeneous (turbulent) background which obeys Gaussian statistics and non-uniform thermal plumes with an exponential distribution, we obtain the analytic expressions of $P(\unicode[STIX]{x1D6FF}T)$ in four different flow regions, which are found to be in good agreement with the experimental results. Our work thus provides a unique theoretical framework with a common set of parameters to quantitatively describe the effect of turbulent background, thermal plumes and their spatio-temporal intermittency on the temperature PDF $P(\unicode[STIX]{x1D6FF}T)$ . Mixing and entrainment are suppressed in inclined gravity currents Maarten van Reeuwijk, Markus Holzner, C. P. Caulfield Journal: Journal of Fluid Mechanics / Volume 873 / 25 August 2019 Published online by Cambridge University Press: 28 June 2019, pp. 
786-815 Print publication: 25 August 2019 We explore the dynamics of inclined temporal gravity currents using direct numerical simulation, and find that the current creates an environment in which the flux Richardson number $\mathit{Ri}_{f}$ , gradient Richardson number $\mathit{Ri}_{g}$ and turbulent flux coefficient $\unicode[STIX]{x1D6E4}$ are constant across a large portion of the depth. Changing the slope angle $\unicode[STIX]{x1D6FC}$ modifies these mixing parameters, and the flow approaches a maximum Richardson number $\mathit{Ri}_{max}\approx 0.15$ as $\unicode[STIX]{x1D6FC}\rightarrow 0$ at which the entrainment coefficient $E\rightarrow 0$ . The turbulent Prandtl number remains $O(1)$ for all slope angles, demonstrating that $E\rightarrow 0$ is not caused by a switch-off of the turbulent buoyancy flux as conjectured by Ellison (J. Fluid Mech., vol. 2, 1957, pp. 456–466). Instead, $E\rightarrow 0$ occurs as the result of the turbulence intensity going to zero as $\unicode[STIX]{x1D6FC}\rightarrow 0$ , due to the flow requiring larger and larger shear to maintain the same level of turbulence. We develop an approximate model valid for small $\unicode[STIX]{x1D6FC}$ which is able to predict accurately $\mathit{Ri}_{f}$ , $\mathit{Ri}_{g}$ and $\unicode[STIX]{x1D6E4}$ as a function of $\unicode[STIX]{x1D6FC}$ and their maximum attainable values. The model predicts an entrainment law of the form $E=0.31(\mathit{Ri}_{max}-\mathit{Ri})$ , which is in good agreement with the simulation data. The simulations and model presented here contribute to a growing body of evidence that an approach to a marginally or critically stable, relatively weakly stratified equilibrium for stratified shear flows may well be a generic property of turbulent stratified flows. Interaction of a downslope gravity current with an internal wave Raphael Ouillon, Eckart Meiburg, Nicholas T. Ouellette, Jeffrey R. Koseff We investigate the interaction of a downslope gravity current with an internal wave propagating along a two-layer density jump. Direct numerical simulations confirm earlier experimental findings of a reduced gravity current mass flux, as well as the partial removal of the gravity current head from its body by large-amplitude waves (Hogg et al., Environ. Fluid Mech., vol. 18 (2), 2018, pp. 383–394). The current is observed to split into an intrusion of diluted fluid that propagates along the interface and a hyperpycnal current that continues to move downslope. The simulations provide detailed quantitative information on the energy budget components and the mixing dynamics of the current–wave interaction, which demonstrates the existence of two distinct parameter regimes. Small-amplitude waves affect the current in a largely transient fashion, so that the post-interaction properties of the current approach those in the absence of a wave. Large-amplitude waves, on the other hand, perform a sufficiently large amount of work on the gravity current fluid so as to modify its properties over the long term. The 'decapitation' of the current by large waves, along with the associated formation of an upslope current, enhance both viscous dissipation and irreversible mixing, thereby strongly reducing the available potential energy of the flow. Kinematics of local entrainment and detrainment in a turbulent jet Dhiren Mistry, Jimmy Philip, James R. 
Dawson Journal: Journal of Fluid Mechanics / Volume 871 / 25 July 2019 Print publication: 25 July 2019 In this paper we investigate the continuous, local exchange of fluid elements as they are entrained and detrained across the turbulent/non-turbulent interface (TNTI) in a high Reynolds number axisymmetric jet. To elucidate characteristic kinematic features of local entrainment and detrainment processes, simultaneous high-speed particle image velocimetry and planar laser-induced fluorescence measurements were undertaken. Using an interface-tracking technique, we evaluate and analyse the conditional dependence of local entrainment velocity in a frame of reference moving with the TNTI in terms of the interface geometry and the local flow field. We find that the local entrainment velocity is intermittent with a characteristic length scale of the order of the Taylor micro-scale and that the contribution to the net entrainment rate arises from the imbalance between local entrainment and detrainment rates that occurs with a ratio of two parts of entrainment to one part detrainment. On average, an increase in local entrainment is correlated with excursions of the TNTI towards jet centreline into regions of higher streamwise momentum, convex surface curvature facing the turbulent side of the jet and along the leading edges of the interface. In contrast, detrainment is correlated with excursions of the TNTI away from the jet centreline into regions of lower streamwise momentum, concave surface curvature and along the trailing edge. We find that strong entrainment is characterised by a local counterflow velocity field in the frame of reference moving with the TNTI which enhances the transport of rotational and irrotational fluid elements. On the other hand, detrainment is characterised by locally uniform flow fields with the local fluid velocity on either side of the TNTI advecting in the same direction. These local flow patterns and the strength of entrainment or detrainment rates are also observed to be strongly influenced by the presence and relative strength of vortical structures which are of the order of the Taylor micro-scale that populate the turbulent region along the jet boundary. The transition to turbulence in shock-driven mixing: effects of Mach number and initial conditions Mohammad Mohaghar, John Carter, Gokul Pathikonda, Devesh Ranjan The effects of incident shock strength on the mixing transition in the Richtmyer–Meshkov instability (RMI) are experimentally investigated using simultaneous density–velocity measurements. This effort uses a shock with an incident Mach number of 1.9, in concert with previous work at Mach 1.55 (Mohaghar et al., J. Fluid Mech., vol. 831, 2017 pp. 779–825) where each case is followed by a reshock wave. Single- and multi-mode interfaces are used to quantify the effect of initial conditions on the evolution of the RMI. The interface between light and heavy gases ( $\text{N}_{2}/\text{CO}_{2}$ , Atwood number, $A\approx 0.22$ ; amplitude to wavelength ratio of 0.088) is created in an inclined shock tube at $80^{\circ }$ relative to the horizontal, resulting in a predominantly single-mode perturbation. To investigate the effects of initial perturbations on the mixing transition, a multi-mode inclined interface is also created via shear and buoyancy superposed on the dominant inclined perturbation. 
The evolution of mixing is investigated via the density fields by computing mixed mass and mixed-mass thickness, along with mixing width, mixedness and the density self-correlation (DSC). It is shown that the amount of mixing is dependent on both initial conditions and incident shock Mach number. Evolution of the density self-correlation is discussed and the relative importance of different DSC terms is shown through fields and spanwise-averaged profiles. The localized distribution of vorticity and the development of roll-up features in the flow are studied through the evolution of interface wrinkling and length of the interface edge, which indicate that the vorticity concentration shows a strong dependence on the Mach number. The contribution of different terms in the Favre-averaged Reynolds stress is shown, and while the mean density-velocity fluctuation correlation term, $\langle \unicode[STIX]{x1D70C}\rangle \langle u_{i}^{\prime }u_{j}^{\prime }\rangle$ , is dominant, a high dependency on the initial condition and reshock is observed for the turbulent mass-flux term. Mixing transition is analysed through two criteria: the Reynolds number (Dimotakis, J. Fluid Mech., vol. 409, 2000, pp. 69–98) for mixing transition and Zhou (Phys. Plasmas, vol. 14 (8), 2007, 082701 for minimum state) and the time-dependent length scales (Robey et al., Phys. Plasmas, vol. 10 (3), 2003, 614622; Zhou et al., Phys. Rev. E, vol. 67 (5), 2003, 056305). The Reynolds number threshold is surpassed in all cases after reshock. In addition, the Reynolds number is around the threshold range for the multi-mode, high Mach number case ( $M\sim 1.9$ ) before reshock. However, the time-dependent length-scale threshold is surpassed by all cases only at the latest time after reshock, while all cases at early times after reshock and the high Mach number case at the latest time before reshock fall around the threshold. The scaling analysis of the turbulent kinetic energy spectra after reshock at the latest time, at which mixing transition analysis suggests that an inertial range has formed, indicates power scaling of $-1.8\pm 0.05$ for the low Mach number case and $-2.1\pm 0.1$ for the higher Mach number case. This could possibly be related to the high anisotropy observed in this flow resulting from strong, large-scale streamwise fluctuations produced by large-scale shear. A second-order integral model for buoyant jets with background homogeneous and isotropic turbulence Adrian C. H. Lai, Adrian Wing-Keung Law, E. Eric Adams Buoyant jets or forced plumes are discharged into a turbulent ambient in many natural and engineering applications. The background turbulence generally affects the mixing characteristics of the buoyant jet, and the extent of the influence depends on the characteristics of both the jet discharge and ambient. Previous studies focused on the experimental investigation of the problem (for pure jets or plumes), but the findings were difficult to generalize because suitable scales for normalization of results were not known. A model to predict the buoyant jet mixing in the presence of background turbulence, which is essential in many applications, is also hitherto not available even for a background of homogeneous and isotropic turbulence (HIT). We carried out experimental and theoretical investigations of a buoyant jet discharging into background HIT. 
Buoyant jets were designed to be in the range of $1<z/l_{M}<5$ , where $l_{M}=M_{o}^{3/4}/F_{o}^{1/2}$ is the momentum length scale, with $z/l_{M}<\sim 1$ and $z/l_{M}>\sim 6$ representing the asymptotic cases of pure jets and plumes, respectively. The background turbulence was generated using a random synthetic jet array, which produced a region of approximately isotropic and homogeneous field of turbulence to be used in the experiments. The velocity scale of the jet was initially much higher, and the length scale smaller, than that of the background turbulence, which is typical in most applications. Comprehensive measurements of the buoyant jet mixing characteristics were performed up to the distance where jet breakup occurred. Based on the experimental findings, a critical length scale $l_{c}$ was identified to be an appropriate normalizing scale. The momentum flux of the buoyant jet in background HIT was found to be conserved only if the second-order turbulence statistics of the jet were accounted for. A general integral jet model including the background HIT was then proposed based on the conservation of mass (using the entrainment assumption), total momentum and buoyancy fluxes, and the decay function of the jet mean momentum downstream. Predictions of jet mixing characteristics from the new model were compared with experimental observation, and found to be generally in agreement with each other. On the robustness of emptying filling boxes to sudden changes in the wind John Craske, Graham O. Hughes Journal: Journal of Fluid Mechanics / Volume 868 / 10 June 2019 Published online by Cambridge University Press: 11 April 2019, R3 We determine the smallest instantaneous increase in the strength of an opposing wind that is necessary to permanently reverse the forward displacement flow that is driven by a two-layer thermal stratification. With an interpretation in terms of the flow's energetics, the results clarify why the ventilation of a confined space with a stably stratified buoyancy field is less susceptible to being permanently reversed by the wind than the ventilation of a space with a uniform buoyancy field. For large opposing wind strengths we derive analytical upper and lower bounds for the system's marginal stability, which exhibit a good agreement with the exact solution, even for modest opposing wind strengths. The work extends a previous formulation of the problem (Lishman & Woods, Build. Environ., vol. 44 (4), 2009, pp. 666–673) by accounting for the transient dynamics and energetics associated with the homogenisation of the interior, which prove to play a significant role in buffering temporal variations in the wind.
Computational Astrophysics and Cosmology: Simulations, Data Analysis and Algorithms
The evolution of hierarchical triple star-systems
Silvia Toonen, Adrian Hamers and Simon Portegies Zwart
Computational Astrophysics and Cosmology, volume 3, article number 6 (2016)

Field stars are frequently formed in pairs, and many of these binaries are part of triples or even higher-order systems. Even though the principles of single stellar evolution and binary evolution have been accepted for a long time, the long-term evolution of stellar triples is poorly understood. The presence of a third star in an orbit around a binary system can significantly alter the evolution of those stars and the binary system. The rich dynamical behaviour in three-body systems can give rise to Lidov-Kozai cycles, in which the eccentricity of the inner orbit and the inclination between the inner and outer orbit vary periodically. In turn, this can lead to an enhancement of tidal effects (tidal friction), gravitational-wave emission and stellar interactions such as mass transfer and collisions. The lack of a self-consistent treatment of triple evolution, including both three-body dynamics and stellar evolution, hinders the systematic study and general understanding of the long-term evolution of triple systems. In this paper, we aim to address some of these gaps by discussing the dominant physical processes of hierarchical triple evolution and presenting heuristic recipes for these processes. To improve our understanding of hierarchical stellar triples, these descriptions are implemented in the public source code TrES, which combines three-body dynamics (based on the secular approach) with stellar evolution and their mutual influences. Note that modelling through a phase of stable mass transfer in an eccentric orbit is currently not implemented in TrES, but can be implemented with the appropriate methodology at a later stage.

The majority of stars are members of multiple systems. These include binaries, triples, and higher-order hierarchies. The evolution of single stars and binaries has been studied extensively, and there is general consensus over the dominant physical processes (Postnov and Yungelson 2014; Toonen et al. 2014). Many exotic systems, however, cannot easily be explained by binary evolution, and these have often been attributed to the evolution of triples, for example low-mass X-ray binaries (Eggleton and Verbunt 1986) and blue stragglers (Perets and Fabrycky 2009). Our lack of a clear understanding of triple evolution hinders the systematic exploration of these curious objects. At the same time, triples are fairly common; our nearest neighbour α Cen is a triple star system (Tokovinin 2014a), and, more importantly, ∼10% of low-mass stars are in triples (Tokovinin 2008, 2014b; Raghavan et al. 2010; Moe and Di Stefano 2016), a fraction that gradually increases (Duchêne and Kraus 2013) to ∼50% for spectral type B stars (Remage Evans 2011; Sana et al. 2014; Moe and Di Stefano 2016). The theoretical studies of triples can classically be divided into three-body dynamics and stellar evolution, which are often discussed separately. Three-body dynamics is generally governed by the gravitational orbital evolution, whereas the stellar evolution is governed by the internal nuclear burning processes in the individual stars and their mutual influence. Typical examples of studies that focused on the three-body dynamics include Ford et al.
(2000), Fabrycky and Tremaine (2007), Naoz et al. (2013), Naoz and Fabrycky (2014), and Liu et al. (2015a), and stellar evolution studies include Eggleton and Kiseleva (1996), Iben and Tutukov (1999), and Kuranov et al. (2001). Interdisciplinary studies, in which the mutual interaction between the dynamical and stellar aspects is taken into account, are rare (Kratter and Perets 2012; Perets and Kratter 2012; Hamers et al. 2013; Shappee and Thompson 2013; Michaely and Perets 2014; Naoz et al. 2016), but demonstrate the richness of the interacting regime. The lack of a self-consistent treatment hinders a systematic study of triple systems. This makes it hard to judge the importance of this interacting regime, or how many curious evolutionary products can be attributed to triple evolution. Here we discuss triple evolution in a broader context in order to address some of these gaps. In this paper we discuss the principal complexities of triple evolution (Section 2). We start by presenting an overview of the evolution of single stars and binaries, and how to extend these to triple evolution. In the second part of this paper we present heuristic recipes for simulating their evolution (Section 3). These recipes combine three-body dynamics with stellar evolution and their mutual influences, such as tidal interactions and mass transfer. These descriptions are summarized in the public source code TrES, with which triple evolution can be studied. We will give a brief overview of isolated binary evolution (Section 2.2) and isolated triple evolution (Section 2.3). We discuss in particular under what circumstances triple evolution differs from binary evolution and what the consequences of these differences are. We start with a brief summary of single star evolution with a focus on those aspects that are relevant for binary and triple evolution.

Single stellar evolution
Hydrostatic and thermal equilibrium in a star give rise to temperatures and pressures that allow for nuclear burning, and consequently the emission of the starlight that we observe. Cycles of nuclear burning and exhaustion of fuel regulate the evolution of a star, and set the various phases during the stellar lifetime. The evolution of a star is predominantly determined by a single parameter, namely the stellar mass (Table 1). It depends only slightly on the initial chemical composition or the amount of core overshooting.

Table 1: Necessary parameters to describe a single star system, a binary and a triple.

Fundamental timescales of stellar evolution are the dynamical (\(\tau _{\mathrm{dyn}}\)), thermal (\(\tau_{\mathrm{th}}\)), and nuclear (\(\tau_{\mathrm{nucl}}\)) timescales. The dynamical timescale is the characteristic time that it would take for a star to collapse under its own gravitational attraction without the presence of internal pressure: $$ \tau_{\mathrm{dyn}} = \sqrt{\frac{R^{3}}{Gm}}, $$ where R and m are the radius and mass of the star. It is a measure of the timescale on which a star would expand or contract if the hydrostatic equilibrium of the star is disturbed. This can happen, for example, because of sudden mass loss. A related timescale is the thermal timescale, the time required for a star to radiate all its thermal energy content at its current luminosity: $$ \tau_{\mathrm{th}} = \frac{Gm^{2}}{RL}, $$ where L is the luminosity of the star. In other words, when the thermal equilibrium of a star is disturbed, the star will move to a new equilibrium on a thermal (or Kelvin-Helmholtz) timescale.
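As a quick numerical check on these two timescales, the short Python snippet below (our own illustration, not part of the original text) evaluates Eqs. (1) and (2) for solar values; the constants are standard SI values.

# Sketch (ours): order-of-magnitude dynamical and thermal (Kelvin-Helmholtz)
# timescales of Eqs. (1) and (2), evaluated for the Sun.
import numpy as np

G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
m_sun = 1.989e30    # solar mass [kg]
r_sun = 6.957e8     # solar radius [m]
l_sun = 3.828e26    # solar luminosity [W]

tau_dyn = np.sqrt(r_sun**3 / (G * m_sun))       # Eq. (1), in seconds
tau_th = G * m_sun**2 / (r_sun * l_sun)         # Eq. (2), in seconds

print(f"tau_dyn ~ {tau_dyn / 60:.0f} min")              # roughly 30 min
print(f"tau_th  ~ {tau_th / (3.156e7 * 1e6):.0f} Myr")  # roughly 30 Myr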
Finally, the nuclear timescale represents the time required for the star to exhaust its supply of nuclear fuel at its current luminosity: $$ \tau_{\mathrm{nucl}} = \frac{\epsilon c^{2}m_{\mathrm{nucl}}}{L}, $$ where ϵ is the efficiency of nuclear energy production, c is the speed of light, and \(m_{\mathrm{nucl}}\) is the amount of mass available as fuel. For core hydrogen burning, \(\epsilon= 0.007\) and \(m_{\mathrm{nucl}}\approx0.1m\). Assuming a mass-luminosity relation of \(L\propto M^{\alpha}\), with empirically \(\alpha\approx 3\mbox{-}4\) (e.g. Salaris and Cassisi 2005; Eker et al. 2015), it follows that massive stars live shorter lives and evolve faster than low-mass stars. For the Sun, \(\tau_{\mathrm{dyn}} \approx 30~\mbox{min}\), \(\tau_{\mathrm{th}} \approx 30~\mbox{Myr}\), and \(\tau_{\mathrm{nucl}} \approx 10~\mbox{Gyr}\). Typically, \(\tau_{\mathrm{dyn}}< \tau_{\mathrm{th}} < \tau_{\mathrm{nucl}} \), which allows us to quantitatively predict the structure and evolution of stars in broad terms. The Hertzsprung-Russell (HR) diagram in Figure 1 shows seven evolutionary tracks for stars of different masses. The longest phase of stellar evolution is known as the main sequence (MS), in which hydrogen burning takes place in the stellar core. The MS occupies the region in the HR-diagram between the stellar birth on the zero-age MS (ZAMS, blue circles in Figure 1) and the end of the MS-phase (terminal-age MS (TAMS), blue circles in Figure 1). Stars more massive than \(1.2M_{\odot}\) contract slightly at the end of the MS when the stellar core runs out of hydrogen. This can be seen in Figure 1 as the hook in the tracks leading up to the TAMS.

Figure 1: Hertzsprung-Russell diagram. Evolutionary tracks for seven stars in the HR-diagram with masses 1, 1.5, 2.5, 4, 6.5, 10, and 15\(M_{\odot}\) at solar metallicity. Specific moments in the evolution of the stars are noted by blue circles as explained in the text. The tracks are calculated with SeBa (Portegies Zwart and Verbunt 1996; Toonen et al. 2012). The dashed lines show lines of constant radii by means of the Stefan-Boltzmann law.

After the TAMS, hydrogen ignites in a shell around the core. Subsequently the outer layers of the star expand rapidly. This expansion at roughly constant luminosity results in a lower effective temperature and a shift to the right in the HR-diagram. Stars of less than \(13M_{\odot}\) reach effective temperatures as low as 5,000 K (\(10^{3.7}\) K) before helium ignition. At this point (denoted by a blue circle in Figure 1) they start to ascend the red giant branch (RGB), which goes hand in hand with a strong increase in luminosity and radius. On the right of the RGB in the HR-diagram lies the forbidden region where hydrostatic equilibrium cannot be achieved. Any star in this region will rapidly move towards the RGB. The red giant star consists of a dense core and an extended envelope up to hundreds of solar radii. When the temperature in the core reaches \(10^{8}\) K, helium core burning commences and the red giant phase has come to an end. For stars less massive than \(2M_{\odot}\), helium ignites degenerately in a helium flash. For stars more massive than \(13M_{\odot}\), helium ignites before their effective temperature has decreased to a few thousand Kelvin; the shift to the right in the HR-diagram is truncated when helium ignites. During helium burning the stellar tracks make a loop in the HR-diagram, also known as the horizontal branch. This branch is marked in Figure 1 by a blue circle at its maximum effective temperature.
The loop goes hand in hand with a decrease and increase of the stellar radius. As the burning front moves from the core to a shell surrounding the core, the outer layers of the star expand again and the evolutionary track bends back to the right in the HR-diagram. As the core of the star reaches temperatures of \(5\cdot10^{8}\mbox{K}\), carbon ignites in the star (denoted by a blue circle in Figure 1). As the core of the star becomes depleted of helium, helium burning continues in a shell surrounding the inert carbon-oxygen core. The star has now reached the supergiant phase of its life. The star ascends the asymptotic giant branch (AGB), reaching its maximum size of about a thousand solar radii. Figure 2 shows the variation of the outer radius as the star evolves over its lifetime. It illustrates the dramatic increases in radius during the RGB- and AGB-phases as previously discussed. Shrinkage of the star occurs after helium ignition, and to a lesser degree at the end of the MS. The radial evolution is of particular interest for binaries and triples, as a star is more likely to initiate mass transfer (i.e. fill its Roche lobe) when its envelope is extended, e.g. on the RGB or AGB.

Figure 2: Evolution of stellar radius. Radius as a function of stellar age for two stars with masses 4 and \(6.5M_{\odot}\) at solar metallicity. Specific moments in the evolution of the stars are noted by blue circles as for Figure 1. The radius evolution is calculated with SeBa (Portegies Zwart and Verbunt 1996; Toonen et al. 2012).

The figure also shows that high-mass stars evolve faster and live shorter than lower-mass stars.

Stellar winds
During the lifetime of a star, a major fraction of the star's mass is lost by means of stellar winds (Lamers and Cassinelli 1999; Owocki 2013). The winds deposit enriched material back into the ISM and can even collide with previously ejected matter to form stellar-wind bubbles and planetary nebulae. Stellar winds develop for almost all stars, but the mass loss increases dramatically for more evolved stars and for more massive stars. The winds of AGB stars (see Höfner 2015 for a review) are characterized by extremely high mass-loss rates (\(10^{-7}\mbox{-}10^{-4}M_{\odot }~\mathrm {yr}^{-1}\)) and low terminal velocities (5-30 km s−1). For stars up to \(8M_{\odot}\), these 'superwinds' remove the entire stellar envelope. AGB-winds are driven by radiation pressure onto molecules and dust grains in the cold outer atmosphere. The winds are further enhanced by the stellar pulsations that increase the gas density in the extended stellar atmosphere where the dust grains form. For massive O and B-type stars, strong winds already occur on the MS. These winds (e.g. Puls et al. 2008; Vink 2015) are driven by another mechanism, i.e. radiation pressure in the continuum and absorption lines of heavy elements. The winds are characterized by high mass-loss rates (\(10^{-7}\mbox{-}10^{-4}~M_{\odot }~\mathrm {yr}^{-1}\)) and high velocities (several 100-1,000 km s−1) (e.g. Kudritzki and Puls 2000). For stars of more than \({\sim}30M_{\odot}\), the mass-loss rate is sufficiently large that the evolution of the star is significantly affected, as the timescale for mass loss is smaller than the nuclear timescale. In turn, the uncertainties in our knowledge of stellar wind mechanisms introduce considerable uncertainties in the evolution of massive stars.

Stellar remnants
The evolution of a star of less than \({\sim}6.5M_{\odot}\) comes to an end as helium burning halts at the end of the AGB.
Strong winds strip the core of the remaining envelope, and this material forms a planetary nebula surrounding the core. The core cools and contracts to form a white dwarf (WD) consisting of carbon and oxygen (CO). Slightly more massive stars, up to \({\sim}11M_{\odot}\), experience an additional nuclear burning phase. Carbon burning leads to the formation of a degenerate oxygen-neon (ONe) core. Stars up to \({\sim}8M_{\odot}\) follow a similar evolutionary path to that discussed above, but they end their lives as oxygen-neon white dwarfs. In the mass range \({\sim}8\mbox{-}11M_{\odot}\), the oxygen-neon core reaches the Chandrasekhar mass and collapses to a neutron star (NS). Stars more massive than \({\sim}11M_{\odot}\) go through a rapid succession of nuclear burning stages and subsequent fuel exhaustion. The nuclear burning stages are sufficiently short that the stellar envelope hardly has time to adjust to the hydrodynamical and thermal changes in the core. The position of the star in the HR-diagram remains roughly unchanged. The stellar evolution continues until an iron core is formed, after which nuclear burning cannot release further energy. The star then collapses to form a NS or a black hole (BH). An overview of the initial mass ranges and the corresponding remnants is given in Table 2.

Table 2: Initial stellar mass range and the corresponding remnant type and mass.

When a star is part of a compact stellar system, its evolution can be terminated prematurely when the star loses its envelope in a mass-transfer phase. After the envelope is lost, the star may form a remnant directly. If, on the other hand, the conditions to sustain nuclear burning are fulfilled, the star can evolve further as a hydrogen-poor, helium-rich star, i.e. a helium MS star or helium giant star. Due to the mass loss, the initial mass ranges given in Table 2 can be somewhat larger. Furthermore, if a star with a helium core of less than \({\sim}0.32M_{\odot}\) (e.g. Han et al. 2002) loses its envelope as a result of mass transfer before helium ignition, the core contracts to form a white dwarf made of helium instead of CO or ONe. When a high-mass star reaches the end of its life and its core collapses to a NS or BH, the outer layers of the star explode in a core-collapse supernova (SN) event. The matter that is blown off the newly formed remnant enriches the ISM with heavy elements. Any asymmetry in the SN, such as in the mass or neutrino loss (e.g. Lai 2004; Janka 2012), can give rise to a natal kick \(\boldsymbol {v}_{\mathrm {k}}\) to the star. Neutron stars are expected to receive a kick at birth of about 400 km s−1 (e.g. Cordes et al. 1993; Lyne and Lorimer 1994; Hobbs et al. 2005); however, smaller kick velocities in the range of ≲50 km s−1 have been deduced for neutron stars in high-mass X-ray binaries (Pfahl et al. 2002). Also, whether or not black holes that are formed in core-collapse supernovae receive a kick is still under debate (e.g. Gualandris et al. 2005; Repetto et al. 2012; Wong et al. 2014; Repetto and Nelemans 2015; Zuo 2015).

Binary evolution
The evolution of a binary can be described by the masses of the stars \(m_{1}\) and \(m_{2}\), the semi-major axis a, and the eccentricity e. A useful picture for binaries is the Roche model, which describes the effective gravitational potential of the binary.
It is generally based on three assumptions: (1) the binary orbit is circular, (2) the rotation of the stellar components is synchronized with the orbit, and (3) the stellar components are small compared to the distance between them. The first two assumptions are expected to hold for binaries that are close to mass transfer because of tidal forces (Section 2.2.3). Under the three assumptions given above, the stars are static in a corotating frame of reference. The equipotential surface around a star in which material is gravitationally bound to that star is called the Roche lobe. The Roche radius is defined as the radius of a sphere with the same volume as the nearly spherical Roche lobe, and is often approximated (Eggleton 1983) by: $$\begin{aligned} R_{\mathrm{L1}} &\approx a\frac{0.49q^{2/3}}{0.6q^{2/3} + \ln(1+q^{1/3})} \\ &\approx0.44a\frac{q^{0.33}}{(1+q)^{0.2}}, \end{aligned}$$ where the mass ratio \(q=m_{1}/m_{2}\). If one of the stars in the binary overflows its Roche lobe, matter from the outer layers of the star can freely move through the first Lagrangian point L1 to the companion star. Binaries with initial periods less than several years (depending on the stellar masses) will experience at least one phase of mass transfer, if the stars have enough time to evolve. If the stars do not get close to Roche lobe overflow (RLOF), the stars in a binary evolve effectively as single stars, slowly decreasing in mass and increasing in radius and luminosity until the remnant stage. The binary orbit can be affected by stellar winds, tides and angular momentum losses such as gravitational wave emission and magnetic braking. These processes are discussed in the following three sections. In the last three sections of this chapter we describe how RLOF affects a binary.

Stellar winds in binaries
Wind mass loss affects a binary orbit through mass and angular momentum loss. Often the assumption is made that the wind is spherically symmetric and fast with respect to the orbit. In this approximation, the wind does not interact with the binary orbit directly, such that the process is adiabatic. Furthermore, the orbital eccentricity remains constant (Huang 1956, 1963). If none of the wind matter is accreted, the wind causes the orbit to widen. From angular momentum conservation, it follows that: $$ \frac{ \dot{a}_{\mathrm{wind, no\mbox{-}acc}}}{a} =\frac{-\dot{m}_{1}}{m_{1} + m_{2}}, $$ where \(m_{1}\) and \(m_{2}\) are the masses of the stars, \(\dot{m}_{1}\) is the mass lost in the wind of the star with mass \(m_{1}\) (\(\dot {m}_{1} \leqslant0\)), a is the semi-major axis of the orbit, and \(\dot{a}_{\mathrm{wind, \text{no-acc}}}\) the change in the orbital separation with no wind accretion. Eq. (5) can be rewritten as: $$ \frac{a_{\mathrm{f}}}{a_{\mathrm{i}}} =\frac{m_{1}+m_{2}}{m_{1} + m_{2}-\Delta m_{\mathrm{wind}}}, $$ where \(a_{\mathrm{i}}\) and \(a_{\mathrm{f}}\) are the semi-major axes of the orbit before and after the wind mass loss, and \(\Delta m_{\mathrm{wind}}\) is the amount of matter lost in the wind (\(\Delta m_{\mathrm{wind}}\geqslant0\)). While the two stars in the binary are in orbit around each other, the stars can accrete some of the wind material of the other star. Including wind accretion, the orbit changes as: $$ \frac{ \dot{a}_{\mathrm{wind}}}{a} = \frac{-\dot{m}_{1}}{m_{1}} \biggl( 2\beta - 2\beta \frac{m_{1}}{m_{2}} + (1-\beta)\frac{m_{1}}{m_{1}+m_{2}} \biggr), $$ where the star with mass \(m_{2}\) accretes at a rate of \(\dot {m}_{2}=-\beta\dot{m}_{1}\). Note that Eq.
(7) reduces to Eq. (5) for complete non-conservative mass transfer i.e. \(\beta=0\). Wind accretion is often modelled by Bondi-Hoyle accretion (Bondi and Hoyle 1944; Livio and Warner 1984). This model considers a spherical accretion onto a point mass that moves through a uniform medium. Wind accretion is an important process known to operate in high-mass X-ray binaries (Tauris and van den Heuvel 2006; Chaty 2011) and symbiotic stars (Mikolajewska 2002; Sokoloski 2003). The assumptions of a fast and spherically symmetric wind are not always valid. The former is not strictly true for all binary stars i.e. an evolved AGB-star has a wind of 5-30 km s−1 (e.g. Höfner 2015), which is comparable to the velocity of stars in a binary of \(a\approx 10^{3}R_{\odot }\). Hydrodynamical simulations of such binaries suggest that the wind of the donor star is gravitationally confined to the Roche lobe of the donor star (Mohamed and Podsiadlowski 2007, 2011; de Val-Borro et al. 2009). The wind can be focused towards the orbital plane and in particular towards the companion star. This scenario (often called wind Roche-lobe overflow (wRLOF) or gravitational focusing) allows for an accretion efficiency of up to 50%, which is significantly higher than for Bondi-Hoyle accretion. A requirement for wRLOF to work is that the Roche lobe of the donor star is comparable or smaller than the radius where the wind is accelerated beyond the escape velocity. wRLOF is supported by observations of detached binaries with very efficient mass transfer (Karovska et al. 2005; Blind et al. 2011). Furthermore, the assumption of adiabatic mass loss is inconsistent with binaries in which the orbital timescale is longer than the mass-loss timescale. The effects of instantaneous mass loss has been studied in the context of SN explosions, and can even lead to the disruption of the binary system (see also Section 2.2.7). However, also wind mass-loss can have a non-adiabatic effect on the binary orbit (e.g. Hadjidemetriou 1966; Rahoma et al. 2009; Veras et al. 2011) if the mass-loss rate is high and the orbit is wide. Under the assumption that mass-loss proceeds isotropically, the wind causes the orbit to widen, as in the case for adiabatic mass loss. However, the eccentricity may decrease or increase, and may even lead to the disruption of the binary system (see e.g. Veras et al. 2011 for a detailed analysis of the effects of winds on sub-stellar binaries in which an exoplanet orbits a host star). Toonen, Hollands, Gaensicke and Boekholt, in prep. show that also (intermediate-mass) stellar binaries can be disrupted during the AGB-phases when the mass loss rates are high (\(10^{-7}\mbox{-}10^{-4}M_{\odot }~\mathrm {yr}^{-1}\)) for orbital separations approximately larger then \(10^{6}R_{\odot }\) (\(P\approx10^{6}~\mbox{yr}\) where P is the orbital period). Lastly, anisotropic mass-loss might occur in fast-rotating stars or systems that harbour bipolar outflows. Rotation modifies the structure and evolution of a star, and as such the surface properties of the star where the wind originates (see Maeder and Meynet 2012 for a review). For an increasing rate of rotation until critical rotation, the stellar winds increasingly depart from a spherical distribution (see e.g. Georgy et al. 2011). Additionally, the bipolar outflows or jets are associated with protostars, evolved post-AGB stars and binaries containing compact objects. Their origin is most likely linked to the central object or the accretion disk (e.g. O'Brien 1990). 
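Before considering these anisotropic effects further, the simple fast, isotropic wind limit of Eqs. (5)-(7) above is easy to evaluate numerically. The snippet below is our own sketch (not part of the original text); the masses and wind mass are made-up example values, and the accreting case is written so that the β = 0 limit reproduces Eq. (5).

# Sketch (ours): orbital widening due to a fast, isotropic stellar wind.
# Units are arbitrary but must be consistent (e.g. Msun and Rsun).
import numpy as np

def a_final_no_accretion(a_i, m1, m2, dm_wind):
    """Eq. (6): adiabatic widening when star 1 loses dm_wind (> 0) and nothing is accreted."""
    return a_i * (m1 + m2) / (m1 + m2 - dm_wind)

def a_final_with_accretion(a_i, m1, m2, dm_wind, beta, n_steps=10000):
    """Integrate Eq. (7) step by step while star 2 accretes a fraction beta of the wind."""
    a = a_i
    dm1 = -dm_wind / n_steps                      # mass-loss step of star 1 (negative)
    for _ in range(n_steps):
        dlna = (-dm1 / m1) * (2*beta - 2*beta*m1/m2 + (1 - beta)*m1/(m1 + m2))
        a *= np.exp(dlna)
        m1 += dm1                                 # star 1 loses mass
        m2 += -beta * dm1                         # star 2 gains the accreted fraction
    return a

# Hypothetical example: a 2 + 1 Msun binary at 1000 Rsun; star 1 loses 0.5 Msun in its wind.
print(a_final_no_accretion(1000.0, 2.0, 1.0, 0.5))         # ~1200 Rsun, from Eq. (6)
print(a_final_with_accretion(1000.0, 2.0, 1.0, 0.5, 0.1))  # less widening than the non-accreting case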
The effect of anisotropic mass loss on the orbit of a binary system is important primarily for wide binaries (e.g. Parriott and Alcock 1998; Veras et al. 2013). Specifically, Veras et al. (2013) show that the relative contribution of the anisotropic terms to the overall motion scale as \(\sqrt{a}\). If the mass loss is symmetric about the stellar equator, the mass loss does not affect the orbital motion in another way than for the isotropic case. Veras et al. (2013) conclude that the isotropic mass-loss approximation can be used safely to model the orbital evolution of a planet around a host star until orbital separations of hundreds of AU. For a fixed total mass of the system, the effects of anisotropic mass loss are further diminished with decreasing mass ratio (i.e. for systems with more equal masses), such that the assumption of isotropic mass-loss is robust until even larger orbital separations for stellar binaries. Angular momentum losses Angular momentum loss from gravitational waves (GW) and magnetic braking act to shrink the binary orbit (e.g. Peters 1964; Verbunt and Zwaan 1981). Ultimately this can lead to RLOF of one or both components and drive mass transfer. The strength of GW emission depends strongly on the semi-major axis, and to lesser degree on the eccentricity. It affects the orbits as: $$ \dot{a}_{\mathrm{gr}} = -\frac{64}{5} \frac{G^{3} m_{1}m_{2} (m_{1}+m_{2})}{c^{5}a^{3}(1-e^{2})^{7/2}} \biggl( 1 + \frac {73}{24}e^{2} + \frac{37}{96}e^{4} \biggr) $$ $$ \dot{e}_{\mathrm{gr}} = -\frac{304}{15} e\frac {G^{3}m_{1}m_{2}(m_{1}+m_{2})}{c^{5}a^{4}(1-e^{2})^{5/2}} \biggl( 1+ \frac{121}{304}e^{2} \biggr), $$ where \(\dot{a}_{\mathrm{gr}}\) and \(\dot{e}_{\mathrm{gr}}\) are the change in orbital separation and eccentricity averaged over a full orbit (Peters 1964). Accordingly, GW emission affects most strongly the compact binaries. These binaries are a very interesting and the only known source of GWs for GW interferometers such as LIGO, VIRGO and eLISA. Magnetic braking extracts angular momentum from a rotating magnetic star by means of an ionized stellar wind (Schatzman 1962; Huang 1966; Skumanich 1972). Even when little mass is lost from the star, the wind matter can exert a significant spin-down torque on the star. This happens when the wind matter is forced to co-rotate with the magnetic field. If the star is in a compact binary and forced to co-rotate with the orbit due to tidal forces, angular momentum is essentially removed from the binary orbit as well (Verbunt and Zwaan 1981). This drain of angular momentum results in a contraction of the orbit. Magnetic braking plays an important role in the orbital evolution of interacting binaries with low-mass donor stars, such as cataclysmic variables and low-mass X-ray binaries (Knigge et al. 2011; Tauris and van den Heuvel 2006). For magnetic braking to take place, the donor star is expected to have a mass between \(0.2\mbox{-}1.2M_{\odot}\), such that the star has a radiative core and convective envelope to sustain the magnetic field. The strength of magnetic braking is still under debate and several prescriptions exist (see Knigge et al. 2011, for a review). The presence of a companion star introduces tidal forces in the binary system that act on the surface of the star and lead to tidal deformation of the star. If the stellar rotation is not synchronized or aligned with the binary orbit, the tidal bulges are misaligned with the line connecting the centres of mass of the two stars. 
This produces a tidal torque that allows for the transfer of angular momentum between the stars and the orbit. Additionally, energy is dissipated in the tides, which drains energy from the orbit and rotation. Tidal interaction drives the binary to a configuration of lowest energy e.g. it strives to circularize the orbit, synchronize the rotation of the stars with the orbital period and align the stellar spin with respect to the orbital spin. See Zahn (2008) and Zahn (2013) for recent reviews. For binaries with extreme mass ratios, a stable solution does not exist (Darwin 1879; Hut 1980). In this scenario a star is unable to extract sufficient angular momentum from the orbit to remain in synchronized rotation. Tidal forces will cause the orbit to decay and the companion to spiral into the envelope of the donor star. This tidal instability occurs when the angular momentum of the star \(J_{\star} > \frac{1}{3} J_{\mathrm{b}}\), with \(J_{\mathrm{b}}\) the orbital angular momentum and \(J_{\star}=I\Omega\), where I is the moment of inertia and Ω the spin angular frequency. Hut (1981) derives a general qualitative picture of tidal evolution and its effect on the orbital evolution of a binary system. Hut (1981) considers a model in which the tides assume their equilibrium shape, and with very small deviations in position and amplitude with respect to the equipotential surfaces of the stars. If a companion star with mass \(m_{2}\) raises tides on a star with mass \(m_{1}\), the change of binary parameters due to tidal friction is: $$\begin{aligned} \dot{a}_{\mathrm{TF}} ={}& {-}6 \frac{k_{\mathrm{am}}}{\tau_{\mathrm {TF}}} \tilde{q}(1+\tilde{q}) \biggl( \frac{R}{a} \biggr) ^{8} \frac{a}{(1-e^{2})^{15/2}} \\ &{} \times \biggl( f_{1}\bigl(e^{2}\bigr)- \bigl(1-e^{2}\bigr)^{3/2} f_{2}\bigl(e^{2} \bigr) \frac{\Omega }{\Omega_{b}} \biggr), \end{aligned}$$ $$\begin{aligned} \dot{e}_{\mathrm{TF}} ={}& {-}27 \frac{k_{\mathrm{am}}}{\tau _{\mathrm{TF}}} \tilde{q}(1+\tilde{q}) \biggl( \frac{R}{a} \biggr) ^{8} \frac{e}{(1-e^{2})^{13/2}} \\ &{}\times \biggl( f_{3}\bigl(e^{2}\bigr)-\frac{11}{18} \bigl(1-e^{2}\bigr)^{3/2} f_{4}\bigl(e^{2} \bigr) \frac{\Omega}{\Omega_{b}} \biggr), \end{aligned}$$ $$\begin{aligned} \dot{\Omega}_{\mathrm{TF}} ={}& 3 \frac{k_{\mathrm{am}}}{\tau _{\mathrm{TF}}} \frac{\tilde{q}^{2}}{k^{2}} \biggl( \frac {R}{a} \biggr) ^{6} \frac{\Omega_{b}}{(1-e^{2})^{6}} \\ &{} \times \biggl( f_{2}\bigl(e^{2}\bigr)- \bigl(1-e^{2}\bigr)^{3/2} f_{5}\bigl(e^{2} \bigr) \frac{\Omega }{\Omega_{b}} \biggr), \end{aligned}$$ where \(\tilde{q}\equiv m_{2}/m_{1}\), and \(\Omega_{b}=2\pi/P\) is the mean orbital angular velocity. The star with mass \(m_{1}\) has an apsidal motion constant \(k_{\mathrm{am}}\), gyration radius k, and spin angular frequency Ω. \(\tau_{\mathrm{TF}}\) represents the typical timescale on which significant changes in the orbit take place through tidal evolution. The parameters \(f_{n}(e^{2})\) are polynomial expressions given by (Hut 1981): $$ \textstyle\begin{cases} f_{1}(e^{2}) = 1+\frac{31}{2}e^{2}+\frac{255}{8}e^{4}+\frac {185}{16}e^{6}+\frac{25}{64}e^{8},\\ f_{2}(e^{2}) = 1+\frac{15}{2}e^{2}+\frac{45}{8}e^{4}+\frac {5}{16}e^{6},\\ f_{3}(e^{2}) = 1+\frac{15}{4}e^{2}+\frac{15}{8}e^{4}+\frac {5}{64}e^{6},\\ f_{4}(e^{2}) = 1+\frac{3}{2}e^{2}+\frac{1}{8}e^{4},\\ f_{5}(e^{2}) = 1+3e^{2}+\frac{3}{8}e^{4}. \end{cases} $$ The degree of tidal interaction strongly increases with the ratio of the stellar radii to the semi-major axis of the orbit (Eqs. (10), (11) and (12)). 
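The algebra in Eqs. (10)-(13) is straightforward to code up. The following sketch (ours, not from the original text) implements the polynomials \(f_{1}\)-\(f_{5}\) of Eq. (13) and the orbit-averaged derivatives of Eqs. (10) and (11); the ratio \(k_{\mathrm{am}}/\tau_{\mathrm{TF}}\) must come from a tidal-dissipation prescription and is left as a free input here.

# Sketch (ours): Hut (1981) equilibrium-tide expressions.
def hut_polynomials(e):
    """Eq. (13): the eccentricity polynomials f1(e^2) ... f5(e^2)."""
    e2 = e * e
    f1 = 1 + 31/2*e2 + 255/8*e2**2 + 185/16*e2**3 + 25/64*e2**4
    f2 = 1 + 15/2*e2 + 45/8*e2**2 + 5/16*e2**3
    f3 = 1 + 15/4*e2 + 15/8*e2**2 + 5/64*e2**3
    f4 = 1 + 3/2*e2 + 1/8*e2**2
    f5 = 1 + 3*e2 + 3/8*e2**2        # f5 enters the spin equation (12), not used below
    return f1, f2, f3, f4, f5

def tidal_adot_edot(a, e, R, q_tilde, k_over_tau, spin_ratio):
    """Eqs. (10)-(11): da/dt and de/dt from tides raised on the star of radius R.
    q_tilde = m2/m1, spin_ratio = Omega/Omega_b, k_over_tau = k_am/tau_TF (units 1/time)."""
    f1, f2, f3, f4, _ = hut_polynomials(e)
    one_me2 = 1 - e*e
    adot = -6 * k_over_tau * q_tilde*(1 + q_tilde) * (R/a)**8 * a / one_me2**7.5 \
           * (f1 - one_me2**1.5 * f2 * spin_ratio)
    edot = -27 * k_over_tau * q_tilde*(1 + q_tilde) * (R/a)**8 * e / one_me2**6.5 \
           * (f3 - 11/18 * one_me2**1.5 * f4 * spin_ratio)
    return adot, edot

For a circular, synchronized orbit (e = 0 and Ω = Ω_b) both derivatives vanish, as expected for the tidal equilibrium state.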
Tidal interaction therefore mostly affects the orbits of relatively close binaries, unless the eccentricities are high and/or the stellar radii are large. The tidal timescale \(\tau_{\mathrm{TF}}\) (Eqs. (10)-(12)) is subject to debate due to quantitative uncertainties in tidal dissipation mechanisms (Witte and Savonije 1999; Willems 2003; Meibom and Mathieu 2005). Tidal dissipation causes the misalignment of the tidal bulges with the line connecting the centres of mass of the two stars. For stars (or planets) with an outer convection zone, the dissipation is often attributed to turbulent friction in the convective regions of the star (Goldman and Mazeh 1991; Zahn 1977, 1989). For stars with an outer radiation zone, the dominant dissipation mechanism identified so far is radiative damping of stellar oscillations that are excited by the tidal field, i.e. dynamical tides (Zahn 1975, 1977). Despite the uncertainties in tidal dissipation mechanisms, it is generally assumed that circularization and synchronization are achieved before RLOF in a binary. Whether or not mass transfer is stable depends on the response of the donor star upon mass loss, and the reaction of the Roche lobe upon the re-arrangement of mass and angular momentum within the binary (e.g. Webbink 1985; Hjellming and Webbink 1987; Pols and Marinus 1994; Soberman et al. 1997). If the donor star stays approximately within its Roche lobe, mass transfer is dynamically stable. When this is not the case, the donor star will overflow its Roche lobe even further as mass is removed. This leads to a runaway situation that progresses into a common envelope (CE; Paczynski 1976). During the CE-phase, the envelope of the donor star engulfs both stars, causing them to spiral inwards until both stars merge or the CE is expelled. Due to the mass loss, the donor star falls out of hydrostatic and thermal equilibrium. The radius of the star changes as the star settles to a new hydrostatic equilibrium, and subsequently thermal equilibrium. The stellar response upon mass loss depends critically on the structure of the stellar envelope, i.e. the thermal gradient and entropy of the envelope. In response to mass loss, stars with a deep surface convective zone tend to expand, whereas stars with a radiative envelope tend to shrink rapidly. Therefore, giant donor stars with convective envelopes favour CE-evolution upon RLOF. As giants have radii of several hundreds to thousands of solar radii, the orbit at the onset of mass transfer is of the same order of magnitude. On the other hand, donor stars on the MS with radiative envelopes often lead to dynamically stable mass transfer in binaries with short orbital periods (e.g. Toonen et al. 2014).

Common-envelope evolution
During the CE-phase, the core of the donor star and the companion are contained within a CE. Friction between these objects and the slow-rotating envelope is expected to cause the objects to spiral in. If this process does not release enough energy and angular momentum to drive off the entire envelope, the binary coalesces. On the other hand, if a merger can be avoided, a close binary remains in which one or both stars have lost their envelopes. The evolution of such a star is significantly shortened, or even terminated prematurely if it directly evolves to a remnant star. The systems that avoid a merger lose a significant amount of mass and angular momentum during the CE-phase.
The orbital separation of these systems generally decreases by two orders of magnitude, which affects the further evolution of the binary drastically. The CE-phase plays an essential role in the formation of short-period systems with compact objects, such as X-ray binaries, and cataclysmic variables. In these systems the current orbital separation is much smaller than the size of the progenitor of the donor star, which had giant-like dimensions at the onset of the CE-phase. Despite of the importance of the CE-phase and the enormous efforts of the community, the CE-phase is not understood in detail (see Ivanova et al. 2013 for a review). The CE-phase involves a complex mix of physical processes, such as energy dissipation, angular momentum transport, and tides, over a large range in time- and length-scales. A complete simulation of the CE-phase is still beyond our reach, but great progress has been made with hydrodynamical simulations in the last few years (Ricker and Taam 2012; Passy et al. 2012b; Nandez et al. 2015). The uncertainty in the CE-phase is one of the aspects of the theory of binary evolution that affects our understanding of the evolutionary history of a specific binary population most (e.g. Toonen and Nelemans 2013; Toonen et al. 2014). The classical way to treat the orbital evolution due to the CE-phase, is the α-formalism. This formalism considers the energy budget of the initial and final configuration (Tutukov and Yungelson 1979); $$ E_{\mathrm{gr}} = \alpha(E_{\mathrm{orb,i}}-E_{\mathrm{orb,f}}), $$ where \(E_{\mathrm{gr}}\) is the binding energy of the envelope, \(E_{\mathrm{orb, i}}\) and \(E_{\mathrm{orb, f}}\) are the orbital energy of the pre- and post-mass transfer binary. The α-parameter describes the efficiency with which orbital energy is consumed to unbind the CE. When both stars have loosely bound envelopes, such as for giants, both envelopes can be lost simultaneously (hereafter double-CE, see Brown 1995; Nelemans et al. 2001). In Eq. (14) \(E_{\mathrm{gr}}\) is then replaced by the sum of the binding energy of each envelope to its host star: $$ E_{\mathrm{gr,1}}+E_{\mathrm{gr,2}} = \alpha(E_{\mathrm {orb,i}}-E_{\mathrm{orb,f}}). $$ The binding energy of the envelope of the donor star in Eqs. (14) and (15) is given by: $$ E_{\mathrm{gr}} = \frac{Gm_{\mathrm{d}} m_{\mathrm{d,env}}}{\lambda _{\mathrm{ce}} R}, $$ where R is the radius of the donor star, \(M_{\mathrm{d,env}}\) is the envelope mass of the donor and \(\lambda_{\mathrm{ce}}\) depends on the structure of the donor (de Kool et al. 1987; Dewi and Tauris 2000; Xu and Li 2010; Loveridge et al. 2011). The parameters \(\lambda_{\mathrm{ce}}\) and α are often combined in one parameter \(\alpha\lambda_{\mathrm{ce}}\). According to the alternative γ-formalism (Nelemans et al. 2000), angular momentum is used to expel the envelope of the donor star, according to: $$ \frac{J_{\mathrm{b, i}}-J_{\mathrm{b, f}}}{J_{\mathrm{b,i}}} = \gamma\frac{\Delta m_{\mathrm{d}}}{m_{\mathrm{d}}+ m_{\mathrm{a}}}, $$ where \(J_{\mathrm{b,i}}\) and \(J_{\mathrm{b,f}}\) are the orbital angular momentum of the pre- and post-mass transfer binary respectively. The parameters \(m_{d}\) and \(m_{a}\) represent the mass of the donor and accretor star, respectively, and \(\Delta m_{\mathrm{d}}\) is the mass lost by the donor star. The γ-parameter describes the efficiency with which orbital angular momentum is used to blow away the CE. 
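As an illustration of how Eqs. (14) and (16) are used in practice, the following sketch (ours, not part of the original text) solves the α-formalism for the post-CE separation, assuming the usual convention \(E_{\mathrm{orb}} = -Gm_{1}m_{2}/(2a)\) and that the donor envelope is fully removed; all numbers in the example are hypothetical.

# Sketch (ours): post-CE orbital separation from the alpha-formalism, Eqs. (14) and (16).
G = 6.674e-11  # SI; any consistent unit system works
Msun, Rsun = 1.989e30, 6.957e8

def alpha_ce_final_separation(a_i, m_donor, m_core, m_companion, R_donor,
                              alpha_ce=1.0, lambda_ce=0.5):
    """Return a_f from E_gr = alpha * (E_orb,i - E_orb,f), with E_gr from Eq. (16)."""
    m_env = m_donor - m_core
    e_bind = G * m_donor * m_env / (lambda_ce * R_donor)      # Eq. (16)
    e_orb_i = -G * m_donor * m_companion / (2.0 * a_i)
    e_orb_f = e_orb_i - e_bind / alpha_ce                     # Eq. (14) rearranged
    return -G * m_core * m_companion / (2.0 * e_orb_f)

# Hypothetical example: a 1 Msun giant (0.4 Msun core, R = 100 Rsun) with a 0.6 Msun
# companion at a_i = 300 Rsun; the orbit shrinks to roughly 9 Rsun for these numbers.
a_f = alpha_ce_final_separation(300*Rsun, 1.0*Msun, 0.4*Msun, 0.6*Msun, 100*Rsun)
print(a_f / Rsun)

The double-CE variant of Eq. (15) follows by adding the binding energies of both envelopes on the left-hand side.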
Valuable constraints on CE-evolution have come from evolutionary reconstruction studies of observed samples of close binaries and from comparing those samples with the results of binary population synthesis studies. The emerging picture is that for binaries with low mass ratios, the CE-phase leads to a shrinkage of the orbit. For the formation of compact WD-MS binaries with low-mass MS companions, the orbit shrinks strongly (\(\alpha\lambda_{\mathrm{ce}} \approx0.25 \), see Zorotovic et al. 2010; Toonen and Nelemans 2013; Portegies Zwart 2013; Camacho et al. 2014). However, for the formation of the second WD in double WDs, the orbit only shrinks moderately (\(\alpha\lambda_{\mathrm{ce}} \approx 2 \), see Nelemans et al. 2000, 2001; van der Sluys et al. 2006). When binaries with approximately equal masses come in contact, mass transfer leads to a modest widening of the orbit, alike the γ-formalism (Nelemans et al. 2000, 2001). The last result is based on a study of the first phase of mass transfer for double WDs, in which the first WD is formed. Woods et al. (2012) suggested that this mass transfer episode can occur stably and non-conservatively even with donor star (early) on the red giant branch. Further research is needed to see if this evolutionary path suffices to create a significant number of double WDs. Stable mass transfer Whereas the duration of the CE-phase is likely of the order of \(10^{3}~\mbox{yr}\) (i.e. the thermal timescale of the envelope), stable mass transfer occurs on much longer timescales. Several driving mechanisms exist for stable mass transfer with their own characteristic mass transfer timescales. The donor star can drive Roche lobe overflow due to its nuclear evolution or due to the thermal readjustment from the mass loss. Stable mass transfer can also be driven by the contraction of the Roche lobe due to angular momentum losses in the system caused by gravitational wave radiation or magnetic braking. When mass transfer proceeds conservatively the change in the orbit is regulated by the masses of the stellar components. For circular orbits, $$ \frac{a_{\mathrm{f}}}{a_{\mathrm{i}}} = \biggl( \frac{m_{\mathrm {d,i}}m_{\mathrm{a,i}}}{m_{\mathrm{d,f}}m_{\mathrm{a,f}}} \biggr) ^{2}, $$ where the subscript i and f denote the pre- and post-mass transfer values. In general, the donor star will be the more massive component in the binary and the binary orbit will initially shrink in response to mass transfer. After the mass ratio is approximately reversed, the orbit widens. In comparison with the pre-mass transfer orbit, the post-mass transfer orbit is usually wider with a factor of a few (Toonen et al. 2014). If the accretor star is not capable of accreting the matter conservatively, mass and angular momentum are lost from the system. The evolution of the system is then dictated by how much mass and angular momentum is carried away. Assuming angular momentum conservation and neglecting the stellar rotational angular momentum compared to the orbital angular momentum, the orbit evolves as (e.g. 
Massevitch and Yungelson 1975; Pols and Marinus 1994; Postnov and Yungelson 2014): $$\begin{aligned} \frac{\dot{a}}{a} ={}& {-}2\frac{\dot{m_{\mathrm{d}}}}{m_{\mathrm {d}}} \biggl[ 1-\beta\frac{m_{\mathrm{d}}}{m_{\mathrm{a}}} \\ &{}-(1- \beta ) \biggl(\eta+\frac{1}{2}\biggr)\frac{m_{\mathrm{d}}}{m_{\mathrm{d}}+m_{\mathrm {a}}} \biggr], \end{aligned}$$ where the accretor star captures a fraction \(\beta\equiv-\dot {m_{\mathrm{a}}}/\dot{m_{\mathrm{d}}}\) of the transferred matter, and the matter that is lost carries specific angular momentum h equal to a multiple η of the specific orbital angular momentum of the binary: $$ h\equiv\frac{\dot{J}}{\dot{m_{\mathrm{d}}}+\dot{m_{\mathrm{a}}}} = \eta\frac{J_{\mathrm{b}}}{m_{\mathrm{a}}+m_{\mathrm{d}}}. $$ Different modes of angular momentum loss exist which can lead to a relative expansion or contraction of the orbit compared to the case of conservative mass transfer (Soberman et al. 1997; Toonen et al. 2014). For example, the generic description of orbital evolution of Eq. (19) reduces to that of conservative mass transfer (Eq. (18)) for \(\beta=1\) or \(\dot{m_{\mathrm{a}}}=\dot{m_{\mathrm{d}}}\). Also, Eq. (19) reduces to Eq. (7) describing the effect of stellar winds on the binary orbit, under the assumption of specific angular momentum loss equal to that of the donor star (\(h=J_{\mathrm{d}}/m_{\mathrm{d}} = m_{\mathrm{a}}/m_{\mathrm{d}} \cdot J_{\mathrm{b}}/(m_{\mathrm{d}}+m_{\mathrm{a}})\) or \(\eta =m_{\mathrm{a}} /m_{\mathrm{d}}\)). Depending on which mode of angular momentum loss is applicable, the further orbital evolution and stability of the system varies. Stable mass transfer influences the stellar evolution of the donor star and possibly that of the companion star. The donor star is affected by the mass loss, which leads to a change in the radius on long timescales compared to a situation without mass loss (Hurley et al. 2000). Stable mass transfer tends to terminate when the donor star has lost most of its envelope, and contracts to form a remnant star or to a hydrogen-poor helium rich star. In the latter case the evolution of the donor star is significantly shortened, and in the former it is stopped prematurely, similar to what was discussed previously for the CE-phase. If the companion star accretes a fraction or all of the transferred mass, evolution of this star is affected as well. Firstly, if due to accretion, the core grows and fresh fuel from the outer layers is mixed into the nuclear-burning zone, the star is 'rejuvenated' (see e.g. Vanbeveren and De Loore 1994). These stars can appear significantly younger than their co-eval neighbouring stars in a cluster.Footnote 4 Secondly, the accretor star adjusts its structure to a new equilibrium. If the timescale of the mass transfer is shorter than the thermal timescale of the accretor, the star will temporarily fall out of thermal equilibrium. The radial response of the accretor star will depend on the structure of the envelope (as discussed for donor stars in Section 2.2.4). A star with a radiative envelope is expected to expand upon mass accretion, whereas a star with a convective envelope shrinks. In the former case, the accretor may swell up sufficiently to fill its Roche lobe, leading to the formation of a contact binary. Supernova explosions in binaries If the collapsing star is part of a binary or triple, natal kick \(\boldsymbol {v}_{\mathrm{k}}\) alters the orbit and it can even unbind the system. 
Under the assumption that the SN is instantaneous and the SN-shell does not impact the companion star(s), the binary orbit is affected by the mass loss and velocity kick (Hills 1983; Kalogera 1996; Tauris and Takens 1998; Pijloo et al. 2012) through: $$\begin{aligned} \frac{a_{\mathrm{f}}}{a_{\mathrm{i}}} = {}& \biggl( 1-\frac{\Delta m}{m_{\mathrm{t,i}}} \biggr) \\ &{}\cdot \biggl( 1-\frac{2a_{\mathrm{i}} }{r_{\mathrm{i}}}\frac{\Delta m }{ m_{\mathrm{t,i}}} -\frac{2(\boldsymbol {v}_{\mathrm{i}}\cdot \boldsymbol {v}_{\mathrm{k}})}{v_{\mathrm{c}}^{2}} - \frac{ v_{\mathrm {k}}^{2}}{v_{\mathrm{c}}^{2}} \biggr) ^{-1}, \end{aligned}$$ where \(a_{\mathrm{i}}\) and \(a_{\mathrm{f}}\) are the semi-major axis of the pre-SN and post-SN orbit, Δm is the mass lost by the collapsing star, \(m_{\mathrm{t,i}}\) is the total mass of the system pre-SN, \(r_{\mathrm{i}}\) is the pre-SN distance between the two stars, \(\boldsymbol {v}_{\mathrm{i}}\) is the pre-SN relative velocity of the collapsing star relative to the companion, and $$ v_{\mathrm{c}} \equiv\sqrt{\frac{Gm_{\mathrm{t,i}}}{a_{\mathrm{i}}}} $$ is the orbital velocity in a circular orbit. A full derivation of this equation and that for the post-SN eccentricity is given in Appendix A.1. Note that the equation for the post-SN eccentricity of Eq. (8a) in Pijloo et al. (2012) is incomplete. Eq. (21) shows that with a negligible natal kick, a binary survives the supernova explosion if less than half of the mass is lost. Furthermore, the binary is more likely to survive if the SN occurs at apo-astron. With substantial natal kicks compared to the pre-SN orbital velocity, survival of the binary depends on the magnitude ratio and angle between the two (through \(\boldsymbol {v}_{\mathrm{i}}\cdot \boldsymbol {v}_{\mathrm{k}}\) in Eq. (21)). Furthermore, the range of angles that lead to survival is larger at peri-astron than apo-astron (Hills 1983). If the direction of the natal kick is opposite to the orbital motion of the collapsing star, the binary is more likely to survive the SN explosion. Triple evolution The structures of observed triples tend to be hierarchical, i.e. the triples consist of an inner binary and a distant star (hereafter outer star) that orbits the centre of mass of the inner binary (Hut and Bahcall 1983). To define a triple star system, no less than 10 parameters are required (Table 1): the masses of the stars in the inner orbit \(m_{1}\) and \(m_{2}\), and the mass of the outer star in the outer orbit \(m_{3}\); the semi-major axis a, the eccentricity e, the argument of pericenter g of both the inner and outer orbits. Parameters for the inner and outer orbit are denoted with a subscript 'in' and 'out', respectively; the mutual inclination \(i_{r}\) between the two orbits. The longitudes of ascending nodes h specify the orientation of the triple on the sky, and not the relative orientation. Therefore, they do not affect the intrinsic dynamical evolution. From total angular momentum conservation \(h_{\mathrm{in}} - h_{\mathrm{out}}= \pi\) for a reference frame with the z-axis aligned along the total angular momentum vector (Naoz et al. 2013). In some cases, the presence of the outer star has no significant effect on the evolution of the inner binary, such that the evolution of the inner and outer binary can be described separately by the processes described in Sections 2.1 and 2.2. In other cases, there is an interaction between the three stars that is unique to systems with multiplicities of higher orders than binaries. 
Through such interactions, many new evolutionary pathways open up compared to binary evolution. The additional processes, such as the dynamical instability and Lidov-Kozai cycles, are described in the following sections.

Stability of triples
The long-term behaviour of triple systems has fascinated scientists for centuries. Not only have stellar triples been investigated, but also systems with planetary masses, such as the Earth-Moon-Sun system studied by none other than Isaac Newton. It was soon realised that the three-body problem does not have closed-form solutions, unlike the two-body problem. Unstable systems dissolve into lower-order systems on dynamical timescales (van den Berk et al. 2007). It is hard to define the boundary between stable and unstable systems, as stability can occur on a range of timescales. Therefore, many stability criteria exist (Mardling 2001; Georgakarakos 2008), which can be divided into three categories: analytical, numerical-integration and chaotic criteria. The commonly used criterion of Mardling and Aarseth (1999) reads: $$\begin{aligned} \frac{a_{\mathrm{out}}}{a_{\mathrm{in}}}\bigg|_{\mathrm{crit}} ={} & \frac{2.8}{1-e_{\mathrm{out}}} \biggl(1- \frac{0.3i}{\pi}\biggr) \\ &{}\cdot \biggl( \frac{(1.0+q_{\mathrm{out}})\cdot(1+e_{\mathrm {out}})}{\sqrt{1-e_{\mathrm{out}}}} \biggr) ^{2/5}, \end{aligned}$$ where systems are unstable if \(\frac{a_{\mathrm{out}}}{a_{\mathrm {in}}} < \frac{a_{\mathrm{out}}}{a_{\mathrm{in}}}|_{\mathrm {crit}}\) and \(q_{\mathrm{out}}\equiv\frac{m_{3}}{m_{1}+m_{2}}\). This criterion is based on the concept of chaos and the consequence of overlapping resonances. The criterion is conservative, as the presence of chaos in some cases is not necessarily the same as an instability. By comparison with numerical integration studies, it was shown that Eq. (23) works well for a wide range of parameters (Aarseth and Mardling 2001; Aarseth 2004). Most observed triples have hierarchical structures, because democratic triples tend to be unstable and short-lived (van den Berk et al. 2007).

Hierarchical triples that are born in a stable configuration can become unstable as they evolve. Eq. (23) shows that when the ratio of the semi-major axes of the outer and inner orbit decreases sufficiently, the system enters the instability regime. Physical mechanisms that can lead to such an event are stellar winds from the inner binary and stable mass transfer in the inner binary (Kiseleva et al. 1994; Iben and Tutukov 1999; Freire et al. 2011; Portegies Zwart et al. 2011). When wind mass loss occurs from the inner binary exclusively, the fractional mass loss of the inner binary exceeds that of the triple as a whole, \(\vert \dot{m} \vert /(m_{1} + m_{2}) > \vert \dot{m} \vert /(m_{1} + m_{2} + m_{3})\). Therefore, the fractional orbital expansion of the inner orbit exceeds that of the outer orbit, \(\dot{a}_{\mathrm{in}}/a_{\mathrm{in}} > \dot{a}_{\mathrm {out}}/a_{\mathrm{out}}\), following Eq. (5). Perets and Kratter (2012) show that such a triple evolution dynamical instability (TEDI) leads to close encounters, collisions, and exchanges between the stellar components. They find that the TEDI evolutionary channel caused by stellar winds is responsible for the majority of stellar collisions in the Galactic field.

Lidov-Kozai mechanism
Secular dynamics can play a major role in the evolution of triple systems. The key effect is the Lidov-Kozai mechanism (Lidov 1962; Kozai 1962); see Section 4.1 for an example of a triple undergoing Lidov-Kozai cycles. Due to a mutual torque between the inner and outer binary orbit, angular momentum is exchanged between the orbits.
The orbital energy is conserved, and therefore the semi-major axes are conserved as well (e.g. Mardling and Aarseth 2001). As a consequence, the inner orbital eccentricity and the mutual inclination vary periodically. The maximum eccentricity of the inner binary is reached when the inclination between the two orbits is minimized. Additionally, the argument of pericenter may rotate periodically (also known as precession or apsidal motion) or librate. For a comprehensive review of the Lidov-Kozai effect, see Naoz (2016).

The Lidov-Kozai mechanism is of great importance in several astrophysical phenomena. For example, it can play a major role in the eccentricity and obliquity of exoplanets (e.g. Holman et al. 1997; Veras and Ford 2010; Naoz et al. 2011) including high-eccentricity migration to form hot Jupiters (e.g. Wu and Murray 2003; Correia et al. 2011; Petrovich 2015), and for accretion onto black holes in the context of tidal disruption events (e.g. Chen et al. 2009; Wegg and Nate Bode 2011) or mergers of (stellar and super-massive) black hole binaries (e.g. Blaes et al. 2002; Miller and Hamilton 2002; Antonini et al. 2014). In particular for the evolution of close binaries, the Lidov-Kozai oscillations may play a key role (e.g. Harrington 1969; Mazeh and Shaham 1979; Kiseleva et al. 1998; Fabrycky and Tremaine 2007; Naoz and Fabrycky 2014), e.g. for black hole X-ray binaries (Ivanova et al. 2010), blue stragglers (Perets and Fabrycky 2009), and supernova type Ia progenitors (Thompson 2011; Hamers et al. 2013).

When the three-body Hamiltonian is expanded to quadrupole order in \(a_{\mathrm{in}}/a_{\mathrm{out}}\), the timescale for the Lidov-Kozai cycles is (Kinoshita and Nakai 1999): $$ t_{\mathrm{Kozai}} = \alpha\frac{P_{\mathrm{out}}^{2}}{P_{\mathrm {in}}} \frac{m_{1}+m_{2}+m_{3}}{m_{3}} \bigl( 1-e_{\mathrm {out}}^{2} \bigr) ^{3/2}, $$ where \(P_{\mathrm{in}}\) and \(P_{\mathrm{out}}\) are the periods of the inner and outer orbit, respectively. The dimensionless quantity α depends weakly on the mutual inclination, and on the eccentricity and argument of periastron of the inner binary, and is of order unity (Antognini 2015). The timescales are typically much longer than the periods of the inner and outer binary. Within the quadrupole approximation, the maximum eccentricity \(e_{\mathrm{max}}\) is a function of the initial mutual inclination \(i_{\mathrm{i}}\) as (Innanen et al. 1997): $$ e_{\mathrm{max}} = \sqrt{1-\frac{5}{3} \mathrm{cos}^{2}(i_{\mathrm{i}})}, $$ in the test-particle approximation (Naoz et al. 2013), i.e. nearly circular orbits (\(e_{\mathrm{in}}=0\), \(e_{\mathrm{out}} = 0\)) with one of the inner two bodies a massless test particle (\(m_{1} \ll m_{2}, m_{3}\)) and the inner argument of pericenter \(g_{\mathrm{in}} = 90^{\circ}\). In this case, the (regular) Lidov-Kozai cycles only take place when the initial inclination is between \(39.2^{\circ}\) and \(140.8^{\circ}\). For larger inner eccentricities, the range of initial inclinations expands.

For higher orders of \(a_{\mathrm{in}}/a_{\mathrm{out}}\), i.e. the octupole level of approximation, even richer dynamical behaviour is expected than for the quadrupole approximation (e.g. Ford et al. 2000; Blaes et al. 2002; Lithwick and Naoz 2011; Naoz et al. 2013; Shappee and Thompson 2013; Teyssandier et al. 2013). The octupole term is non-zero when the outer orbit is eccentric or if the stars in the inner binary have unequal masses. Therefore it is often deemed the 'eccentric Lidov-Kozai mechanism'.
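The quadrupole-level expressions of Eqs. (24) and (25) are straightforward to evaluate. The short sketch below (assuming \(\alpha=1\), with masses, periods and inclination chosen purely for illustration) returns the Lidov-Kozai timescale and the maximum eccentricity.

# Quadrupole-level Lidov-Kozai timescale (Eq. (24), taking alpha = 1) and
# maximum induced eccentricity (Eq. (25)); an illustrative sketch only.
import numpy as np

def t_kozai(P_in, P_out, m1, m2, m3, e_out, alpha=1.0):
    # returned in the same time unit as P_in and P_out
    return alpha * (P_out**2 / P_in) * ((m1 + m2 + m3) / m3) * (1.0 - e_out**2)**1.5

def e_max(i_initial):
    # test-particle quadrupole limit; cycles require i roughly in [39.2, 140.8] deg
    value = 1.0 - (5.0 / 3.0) * np.cos(i_initial)**2
    return np.sqrt(value) if value > 0.0 else 0.0

# example: inner binary of 1 + 0.8 Msun with P_in = 10 d, outer star of 0.4 Msun
# with P_out = 1e4 d and e_out = 0.4, at a mutual inclination of 80 degrees
print(t_kozai(10.0, 1.0e4, 1.0, 0.8, 0.4, 0.4) / 365.25, "yr")
print(e_max(np.radians(80.0)))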
At the octupole level of approximation, the z-component of the angular momentum of the inner binary is no longer conserved. This allows for a flip in the inclination such that the inner orbit flips from prograde to retrograde or vice versa (hereafter 'orbital flip'). Another consequence of the eccentric Lidov-Kozai mechanism is that the eccentricity of the inner binary can be excited very close to unity. The octupole parameter \(\epsilon_{\mathrm{oct}}\) measures the importance of the octupole term compared to the quadrupole term, and is defined by: $$ \epsilon_{\mathrm{oct}} = \frac{m_{1}-m_{2}}{m_{1}+m_{2}} \frac {a_{\mathrm{in}}}{a_{\mathrm{out}}} \frac{e_{\mathrm {out}}}{1-e_{\mathrm{out}}^{2}}. $$ Generally, when \(\vert \epsilon_{\mathrm{oct}} \vert \gtrsim0.01\), the eccentric Lidov-Kozai mechanism can be of importance (Naoz et al. 2011; Shappee and Thompson 2013).

The dynamical behaviour of a system undergoing regular or eccentric Lidov-Kozai cycles can lead to extreme situations. For example, as the eccentricity of the inner orbit increases, the corresponding pericenter distance decreases. The Lidov-Kozai mechanism is therefore linked to a possible enhanced rate of grazing interactions, physical collisions, and tidal disruption events of one of the stellar components (Ford et al. 2000; Thompson 2011), and to the formation of eccentric semi-detached binaries (Section 2.3.6).

Lidov-Kozai mechanism with mass loss
Eqs. (24) and (26) show that the relevance of the Lidov-Kozai mechanism for a specific triple strongly depends on the masses and mass ratios of the stellar components. If one of the components loses mass, the triple can change from one type of dynamical behaviour to another. For example, mass loss from one of the stars in the inner binary can increase \(\vert \epsilon_{\mathrm{oct}} \vert \) significantly. As a result the triple can transition from a regime with regular Lidov-Kozai cycles to a regime where the eccentric Lidov-Kozai mechanism is active. This behaviour is known as mass-loss induced eccentric Kozai (MIEK) (Shappee and Thompson 2013; Michaely and Perets 2014). See also Section 4.3 for an example of this evolutionary pathway. The inverse process (inverse-MIEK), when a triple changes state from the octupole to the quadrupole regime, can also occur. Eq. (26) shows this is the case when mass loss in the inner binary happens to create a fairly equal-mass binary, or when the semi-major axis of the outer orbit increases. The latter is possible when the outer star loses mass in a stellar wind (Section 2.2.1). Another example comes from Michaely and Perets (2014), who studied the secular freeze-out (SEFO). In this scenario mass is lost from the inner binary such that the Lidov-Kozai timescale increases (Eq. (24)). This induces a change from the quadrupole regime to a state where secular evolution is either quenched or operates on excessively long timescales.

The three examples given above illustrate that the dynamical evolution of a triple system is intertwined with the stellar evolution of its components. Thus, in order to gain a clear picture of triple evolution, both three-body dynamics and stellar evolution need to be taken into account simultaneously.

Precession
Besides precession caused by the Lidov-Kozai mechanism, other sources of precession exist in stellar triples. These include general relativistic effects (Blaes et al. 2002): $$ \dot{g}_{\mathrm{GR}} = \frac{3a^{2}\Omega_{b}^{3}}{c^{2}(1-e^{2})}. $$
Furthermore, orbital precession can be caused by the distortions of the individual stars by tides (Smeyers and Willems 2001): $$ \dot{g}_{\mathrm{tides}} = \frac{15k_{\mathrm {am}}}{(1-e^{2})^{5}\Omega_{b}} \biggl( 1+\frac{3}{2}e^{2}+ \frac {1}{8}e^{4} \biggr) \frac{m_{\mathrm{a}}}{m_{\mathrm{d}}} \biggl( \frac{R}{a} \biggr) ^{5}, $$ and by intrinsic stellar rotation (Fabrycky and Tremaine 2007): $$ \dot{g}_{\mathrm{rotate}} = \frac{k_{\mathrm{am}} \Omega ^{2}}{(1-e^{2})^{2}\Omega_{b}} \frac{m_{\mathrm{d}}+m_{\mathrm {a}}}{m_{\mathrm{d}}} \biggl( \frac{R}{a} \biggr) ^{5}, $$ where \(m_{\mathrm{d}}\) is the mass of the distorted star that instigates the precession and \(m_{\mathrm{a}}\) that of the companion star in the two-body orbit. The distorted star has a classical apsidal motion constant \(k_{\mathrm {am}}\), radius R, and a spin frequency Ω. The precession rates in Eq. (27), as well as Eq. (28) and Eq. (29), are always positive. This implies that relativistic effects, tides and stellar rotation mutually stimulate precession in one direction. Note that precession due to these processes also takes place in binaries, where it affects the binary orientation, but not the evolution of the system. If the timescalesFootnote 5 for these processes become comparable to or smaller than the Lidov-Kozai timescales, the Lidov-Kozai cycles are suppressed. Because Lidov-Kozai cycles are driven by tidal forces between the outer and inner orbit, the additional precession tends to destroy the resonance (Liu et al. 2015a). As a result of the suppression of the cycles, the growth of the eccentricity is limited, and orbital flips are limited to smaller ranges of the mutual inclination (Naoz et al. 2012; Petrovich 2015; Liu et al. 2015a).

Tides and gravitational waves
As mentioned earlier, the Lidov-Kozai mechanism can lead to very high eccentricities that drive the stars of the inner binary close together during pericenter passage. During these passages, tides and GW emission can effectively alter the orbit (Mazeh and Shaham 1979; Kiseleva et al. 1998). Both processes are dissipative, and act to circularize the orbit and shrink the orbital separation (Sections 2.2.2 and 2.2.3). The combination of Lidov-Kozai cycles with tides or GW emission can then lead to an enhanced rate of mergers and RLOF. For GW sources, the merger time of a close binary can be significantly reduced if an outer star is present that gives rise to Lidov-Kozai cycles on a short timescale (Thompson 2011). This is important in the context of type Ia supernovae and gamma-ray bursts.

The combination of Lidov-Kozai cycles with tidal friction (hereafter LKCTF) can also lead to an enhanced formation of close binaries (Mazeh and Shaham 1979; Kiseleva et al. 1998). This occurs when a balance can be reached between the eccentricity excitations of the (regular or eccentric) Lidov-Kozai mechanism and the circularisation due to tides.Footnote 6 The significance of LKCTF is illustrated by Fabrycky and Tremaine (2007), who show that MS binaries with orbital periods of 0.1-10 d are produced from binaries with much longer periods, up to \(10^{5}\) d. Observationally, 96% of close low-mass MS binaries are indeed part of a triple system (Tokovinin et al. 2006). Several studies of LKCTF for low-mass MS stars exist (Mazeh and Shaham 1979; Eggleton and Kiseleva-Eggleton 2001; Fabrycky and Tremaine 2007; Kisseleva-Eggleton and Eggleton 2010; Hamers et al. 2013); however, a study of the effectiveness of LKCTF for high-mass MS triples or triples with more evolved components is currently lacking.
Due to the radiative envelopes of high-mass stars, LKCTF is likely less effective compared to the low-mass MS case. However, evolved stars develop convective envelopes during the giant phases for which tidal friction is expected to be effective. Hence, in order to understand the full significance of LKCTF for triple evolution, it is necessary to model three-body dynamics and stellar evolution consistently. Mass transfer initiated in the inner binary In Section 2.2.4, we described the effect of mass transfer on a circularized and synchronized binary. However, as Lidov-Kozai cycles can lead effectively to RLOF in eccentric inner binaries, the simple picture of synchronization and circularisation before RLOF, is no longer generally valid for triples. In an eccentric binary, there does not exist a frame in which all the material is corotating, and the binary potential becomes time-dependent. Studies of the Roche lobe for eccentric and/or asynchronous binaries, show that the Roche lobe can be substantially altered (Plavec 1958; Regös et al. 2005; Sepinsky et al. 2007a). In an eccentric orbit, the Roche lobe of a star at periastron may be significantly smaller than that in a binary that is circularized at the same distance \(r_{p}=a(1-e)\). The Roche lobe is smaller for stars that rotate super-synchronously at periastron compared to the classical Roche lobe (Eq. (4)), and larger for sub-synchronous stars. It is even possible that the Roche lobe around the accretor star opens up. When mass is transferred from the donor star through L1, it is not necessarily captured by the accretor star, and mass and angular momentum may be lost from the binary system. The modification of the Roche lobe affects the evolution of the mass transfer phase, e.g. the duration and the mass loss rate. Mass transfer in eccentric orbits of isolated binaries has been studied in recent years with SPH techniques (Layton et al. 1998; Regös et al. 2005; Church et al. 2009; Lajoie and Sills 2011; van der Helm et al. 2016) as well as analytical approaches (Sepinsky et al. 2007b, 2009, 2010; Davis et al. 2013; Dosopoulou and Kalogera 2016a, 2016b). These studies have shown that (initially) the mass transfer is episodic. The mass transfer rate peaks just after periastron, and its evolution during the orbit shows a Gaussian-like shape with a FWHM of about 10% of the orbital period. The long-term evolution of eccentric binaries undergoing mass transfer can be quite different compared to circular binaries. The long-term evolution has been studied with analytics adopting a delta-function for the mass transfer centred at periastron (Sepinsky et al. 2007b, 2009, 2010; Dosopoulou and Kalogera 2016a, 2016b). Under these assumptions, the semi-major axis and eccentricity can increase as well as decrease depending on the properties of the binary at the onset of mass transfer. In other words, the secular effects of mass transfer can enhance and compete with the orbital effects from tides. Therefore, rapid circularization of eccentric binaries during the early stages of mass transfer is not generally justified. The current theory of mass transfer in eccentric binaries predicts that some binaries can remain eccentric for long periods of time. The possibility of mass transfer in eccentric binaries is supported by observations. For example, the catalogue of eccentric binaries of Petrova and Orlov (1999) contains 19 semi-detached and 6 contact systems out of 128 systems. 
That circularisation is not always achieved before RLOF commences is supported by observations of some detached but close binaries, e.g. ellipsoidal variables (Nicholls and Wood 2012) and Be X-ray binaries, which are a subclass of high-mass X-ray binaries (Raguzova and Popov 2005).

…and its effect on the outer binary
Mass transfer in the inner binary can affect the triple as a whole. The simplest case is that of conservative stable mass transfer, during which the outer orbit remains unchanged. If the inner orbit is circularized and synchronised, mass transfer generally leads to an increase in the inner semi-major axis by a factor of a few (Eq. (18)). When the ratio \(a_{\mathrm{out}}/a_{\mathrm{in}}\) decreases, the triple approaches and possibly crosses into the dynamically unstable regime (Section 2.3.1 and Eq. (23)).

If the mass transfer in the inner binary occurs non-conservatively, the effect on the outer binary is completely determined by the details of the mass loss from the inner binary. We conceive of three scenarios for this to take place. First, if matter escapes from the inner binary during stable mass transfer to a hydrogen-rich star, it is likely to escape through L2 in the direction of the orbital plane of the inner binary. Second, during mass transfer to a compact object, a bipolar outflow or jet may develop. Third, matter may be lost from the inner binary as a result of a common-envelope phase, which we will discuss in Section 2.3.8. If matter escapes the inner binary, its velocity must exceed the escape velocity: $$ v_{\mathrm{esc, in}} = \sqrt{\frac{2G(m_{1}+m_{2})}{a_{\mathrm{in}} (1+e_{\mathrm{in}})}}, $$ and analogously, to escape from the outer binary, and thus the triple as a whole: $$ v_{\mathrm{esc, out}} = \sqrt{\frac {2G(m_{1}+m_{2}+m_{3})}{a_{\mathrm{out}} (1+e_{\mathrm{out}})}}. $$ For stable triples, such that \(a_{\mathrm{out}}/a_{\mathrm{in}} \gtrsim3\), \(v_{\mathrm{esc, in}} > v_{\mathrm{esc, out}}\) unless \(m_{3}\gtrsim f(m_{1}+m_{2})\). The factor f is of the order of one, e.g. for circular orbits \(f=2\). In the catalogue of Tokovinin (2014b) it is uncommon that \(m_{3}\gtrsim(m_{1}+m_{2})\). Out of 199 systems, there are 3 systems with \((m_{1}+m_{2}) < m_{3}< 2(m_{1}+m_{2})\) and none with \(m_{3}\gtrsim2(m_{1}+m_{2})\). Therefore, if the inner binary matter is energetic enough to escape from the inner binary, it is likely to escape from the triple as a whole as well.

For isolated binary evolution, it is unclear if the matter that leaves both Roche lobes is energetic enough to become unbound from the system, e.g. when mass is lost through L2. Instead a circumbinary disk may form that gives rise to a tidal torque between the disk and the binary. This torque can efficiently extract angular momentum from the binary, i.e. Eq. (19) with \(\eta=\sqrt{\frac{a_{\mathrm{ring}}}{a}}\frac {(m_{1}+m_{2})^{2}}{m_{1} m_{2}}\), where \(a_{\mathrm{ring}}\) is the radius of the circumbinary ring (Soberman et al. 1997). For example, for a binary with \(q=2\), angular momentum is extracted more than 10 times faster if the escaping matter forms a circumbinary disk than if it leaves as a fast stellar wind (Soberman et al. 1997; Toonen et al. 2014). Hence, the formation of a circumbinary disk leads to a stronger reduction of the binary orbit, and possibly a merger. If a circumbinary disk forms around the inner binary of a triple, we envision two scenarios. Firstly, the outer star may interact with the disk directly if its orbit crosses the disk.
Secondly, the disk gives rise to two additional tidal torques, one with the inner binary and one with the outer star. It has been shown that the presence of a fourth body can lead to a suppression of the Lidov-Kozai cycles; however, bodies less massive than a Jupiter mass have a low chance of shielding (Hamers et al. 2015, 2016; Martin et al. 2015; Muñoz and Lai 2015).

The effect of common-envelope on the outer binary
Another scenario exists in which material is lost from the inner binary; instead of a stable mass transfer phase, mass is expelled through a CE-phase. For isolated binaries the CE-phase and its effect on the orbit is an unsolved problem, and the situation becomes even more complicated for triples. In the inner binary the friction between the stars and the material is expected to cause a spiral-in. If the outer star in the triple is sufficiently close to the inner binary, the matter may interact with the outer orbit such that a second spiral-in takes place. If, on the other hand, the outer star is in a wide orbit and the CE-matter is lost in a fast and isotropic manner, the effect on the outer orbit would be like that of a stellar wind (Section 2.2.1). Veras and Tout (2012) study the effect of a CE in a binary with a planet on a wider orbit. Assuming that the CE affects the planetary orbit as an isotropic wind, they find that planetary orbits of a few \(10^{4}R_{\odot}\) are readily dissolved. In this scenario the CE-phase operates virtually as an instantaneous mass loss event, and therefore the maximum orbital separation for the outer orbit to remain bound is strongly dependent on the uncertain timescale of the CE-event (Section 2.2.1). Such a disruption due to a CE-event may also apply to stellar triples; however, the effect is likely less dramatic, as the mass lost in the CE-event relative to the total system mass is lower.

Note that most hydrodynamical simulations of common-envelope evolution show that matter is predominantly lost in the orbital plane of the inner binary (e.g. Ricker and Taam 2012; Passy et al. 2012b); however, these simulations have not been able to unbind the majority of the envelope. In contrast, in the recent work of Nandez et al. (2015), the envelope is expelled successfully due to the inclusion of recombination energy in the equation of state. These simulations show a more spherical mass loss. Roughly 60% of the envelope mass is ejected during the spiral-in phase in the orbital plane, while the rest of the mass is ejected after the spiral-in phase in a nearly spherical way (priv. comm. Jose Nandez).

The first scenario of friction onto the outer orbit has been proposed to explain the formation of two low-mass X-ray binaries with triple components (4U 2129+47 (V1727 Cyg) (Portegies Zwart et al. 2011) and PSR J0337+1715 (Tauris and van den Heuvel 2014)); however, it could not be ruled out that the required decrease in the outer orbital period occurred during the SN explosion in which the compact object was formed. Currently, it is unclear if the CE-matter is dense enough at the orbit of the outer star to cause significant spiral-in. Sabach and Soker (2015) suggest that if there is enough matter to bring the outer star closer, the CE-phase would lead to a merger in the inner binary.

Mass loss from the outer star
In about 20% of the multiples in the Tokovinin catalogue of multiple star systems in the Solar neighbourhood, the outer star is more massive than the inner two stars. For these systems the outer star evolves faster than the other stars (Section 2.1).
In about 1% of the triples in the Tokovinin catalogue, the outer orbit is sufficiently small that the outer star is expected to fill its Roche lobe at some point in its evolution (de Vries et al. 2014). What happens next, i.e. the long-term evolution of a triple system with a mass-transferring outer star, has not been studied in great detail. It is an inherently complicated problem in which the dynamics of the orbits, the hydrodynamics of the accretion stream and the stellar evolution of the donor star and its companion stars need to be taken into account consistently. Such a phase of mass transfer has been invoked to explain the triple system PSR J0337+1715, consisting of a millisecond pulsar with two WD companions (Tauris and van den Heuvel 2014; Sabach and Soker 2015). Tauris and van den Heuvel (2014) note that one of the major uncertainties in their modelling of the evolution of PSR J0337+1715 comes from the lack of understanding of the accretion onto the inner binary system and the poorly known specific orbital angular momentum of the ejected mass during the outer mass transfer phase. Sabach and Soker (2015) propose that if the inner binary spirals in to the envelope of the expanding outer star, the binary can break apart from tidal interactions.

To the best of our knowledge, only de Vries et al. (2014) have performed detailed simulations of mass transfer initiated by the outer star in a triple. They use the same software framework (AMUSE, Section 3) as we use for our code TrES. de Vries et al. (2014) simulate the mass transfer phase initiated by the outer star for two triples in the Tokovinin catalogue, ξ Tau and HD97131. For both systems, they find that the matter lost by the outer star does not form an accretion disk or circumbinary disk, but instead the accretion stream intersects with the orbit of the inner binary. The transferred matter forms a gaseous cloud-like structure and interacts with the inner binary, similar to a CE-phase. The majority of the matter is ejected from the inner binary, and the inner binary shrinks moderately to weakly with \(\alpha\lambda_{\mathrm{ce}} \gtrsim3\) depending on the mutual inclination of the system. In the case of HD97131, this contraction leads to RLOF in the inner binary. The vast majority of the mass lost by the donor star is funnelled through L1, and eventually ejected from the system by the inner binary through the L3 Lagrangian pointFootnote 7 of the outer orbit. As a consequence of the mass and angular momentum loss, the outer orbit shrinks withFootnote 8 \(\eta\approx 3\mbox{-}4\) in Eq. (19). During the small number of outer periods that are modelled, the inner and outer orbits contract at approximately the same fractional rate. Therefore the systems remain dynamically stable.

Systems that are sufficiently wide that the outer star does not fill its Roche lobe might still be affected by mass loss from the outer star in the form of stellar winds. Soker (2004) has studied this scenario for systems where the outer star is on the AGB, such that the wind mass loss rates are high.
Assuming Bondi-Hoyle-Littleton accretion and $$\begin{aligned} R_{1} \ll R_{\mathrm{acc, column}} \lesssim a_{\mathrm{in}} \ll R_{\mathrm{acc, B\mbox{-}H}} \ll a_{\mathrm{out}}, \end{aligned}$$ where \(R_{\mathrm{acc, column}}\) is the width of the accretion column at the binary location, and \(R_{\mathrm{acc, B\mbox{-}H}}\) the Bondi-Hoyle accretion radius, Soker (2004) finds that in a large fraction of triples the stars in the inner binary may accrete from an accretion disk around the stars. The formation of an accretion disk depends strongly on the orientation of the inner and outer orbit. When the inner and outer orbits are parallel to each other, no accretion disk forms. On the other hand, when the inner orbit is orientated perpendicular to the outer orbit, an accretion disk forms if \(q \lesssim3.3\). In this case the accretion is in a steady state and proceeds mainly towards the most massive star of the inner binary.

Triples and planetary nebulae
Interesting to mention in the context of triple evolution are planetary nebulae (PNe), in particular those with non-spherical structures. The formation of these PNe is not well understood, but may be attributed to interactions between an AGB-star and a companion (e.g. Bond and Livio 1990; Bond 2000; De Marco et al. 2015; Zijlstra 2015) or multiple companions (e.g. Bond et al. 2002; Exter et al. 2010; Soker 2016; Bear and Soker 2016). Whereas a binary companion can impose a non-spherical symmetry on the resulting PN, and even a non-axisymmetry (see e.g. Soker and Rappaport 2001 for eccentric binaries), triple evolution can impose structures that are neither axisymmetric, mirror-symmetric, nor point-symmetric (Bond et al. 2002; Exter et al. 2010; Soker 2016). Since the centers of many elliptical and bipolar PNe host close binaries, the systems are expected to have undergone a CE-phase. In the context of triples, PNe formation channels have been proposed that concern outer stars on the AGB whose envelope matter just reaches or completely engulfs a tight binary system, e.g. the PN SuWt 2 (Bond et al. 2002; Exter et al. 2010). Another proposed channel involves systems with a very wide outer orbit of tens to thousands of AU in which the outer star interacts with the material lost by the progenitor-star of the PN (Soker et al. 1992; Soker 1994). For a detailed review of such evolutionary channels, see Soker (2016). Under the assumption that triple evolutionary channels give rise to irregular PNe, Soker (2016) and Bear and Soker (2016) find that about 1 in 6-8 PNe might have been shaped by an interaction with an outer companion in a triple system.

Supernova explosions in triples
Pijloo et al. (2012) study the effect of a supernova explosion in a triple star system under the same assumptions as Hills (1983). The authors show that for a hierarchical triple in which the outer star collapses in a SN event, the inner binary is not affected, and the effect on the outer orbit can be approximated by that of an isolated binary. For a SN taking place in the inner binary, the inner binary itself is modified as for an isolated binary (Section 2.2.7). The effect on the outer binary can be viewed as that of an isolated binary in which the inner binary is replaced by an effective star at the center of mass of the inner binary. The effective star changes mass and position (as the center of mass changes) instantaneously in the SN event.
The semi-major axis of the outer orbit is affected by the SN in the inner binary as: $$\begin{aligned} \frac{a_{\mathrm{f}}}{a_{\mathrm{i}}} ={}& \biggl( 1-\frac{\Delta m}{m_{\mathrm{t,i}}} \biggr) \cdot \biggl( 1- \frac{2a_{\mathrm {i}}\Delta m}{r_{\mathrm{f}}m_{\mathrm{t,i}}} \\ &{}-\frac{2(\boldsymbol {v}_{\mathrm{i}}\cdot \boldsymbol {v}_{\mathrm {sys}})}{v_{\mathrm{c}}^{2}} - \frac{v_{\mathrm{sys}}^{2} }{v_{\mathrm{c}}^{2}} + 2a_{\mathrm{i}} \frac{r_{\mathrm{i}}-r_{\mathrm {f}}}{r_{\mathrm{i}}r_{\mathrm{f}}} \biggr)^{-1}, \end{aligned}$$ where \(m_{\mathrm{t,i}}\) is the total mass of the pre-SN triple, \(r_{\mathrm{i}}\) and \(r_{\mathrm{f}}\) are the pre-SN and post-SN distance between the star in the outer orbit and the center of mass of the inner binary, \(\boldsymbol {v}_{\mathrm{i}}\) is the pre-SN velocity of the center of mass of the inner binary relative to the outer star, \(\boldsymbol {v}_{\mathrm{sys}}\) is the systemic velocity of the inner binary due to the SN, and \(v_{\mathrm{c}}\) is the orbital velocity in a circular orbit. Note that in comparison with the circular velocity in binaries (Eq. (22)), here \(m_{\mathrm{t,i}}\) refers to \(m_{1}+m_{2}+m_{3}\) and \(a=a_{\mathrm{out}}\). A full derivation of the change in semi-major axis (Eq. (33)) and eccentricity (Eq. (78)) of the outer orbit due to a SN in the inner orbit is given in Appendix A.1. Note that the equation for the post-SN eccentricity of the outer orbit in Pijloo et al. (2012), their Eq. (27), is incomplete (see Appendix A.1).

Quadruples and higher-order hierarchical systems
Although quadruple star systems are less common than triple systems, hierarchical quadruples still comprise about 1% of F/G dwarf systems in the field (Tokovinin 2014a, 2014b). While for triples there is one type of hierarchy that is stable on long timescales (compared to stellar lifetimes), quadruples can be arranged in two distinct long-term stable configurations: the '2+2' or 'binary-binary' configuration, and the '3+1' or 'triple-single' configuration. In the first case, two binaries orbit their common barycentre, and in the latter case a hierarchical triple is orbited by a fourth body. In the sample of F/G dwarfs of Tokovinin (2014a, 2014b), the '2+2' systems comprise about 2/3 of quadruples, and the '3+1' about 1/3. The secular dynamics of the '2+2' systems were investigated by Pejcha et al. (2013) using N-body methods. Pejcha et al. (2013) find that in these systems, orbital flips and associated high eccentricities are more likely compared to equivalent triple systems (i.e. with the companion binary viewed as a point mass). The '3+1' configuration was studied by Hamers et al. (2015).Footnote 9 For highly hierarchical systems, i.e. in which the three binaries are widely separated, the global dynamics can be qualitatively described in terms of the (initial) ratio of the Lidov-Kozai timescales of the two innermost binaries compared to that of the outer two binaries. This was applied to the '3+1' F/G systems of Tokovinin (2014a, 2014b), and most (90%) of these systems were found to be in a regime in which the inner three stars are effectively an isolated triple, i.e. the fourth body does not affect the secular dynamical evolution of the inner triple. We note that in the case of '3+1' quadruples and with a low-mass third body (in particular, a planet), the third body can affect the Lidov-Kozai cycles that would otherwise have been induced by the fourth body.
In particular, under specific conditions the third body can 'shield' the inner binary from the Lidov-Kozai oscillations, possibly preventing the inner binary from shrinking due to tidal dissipation, and explaining the currently observed lack of circumbinary planets around short-period binaries (Martin et al. 2015; Muñoz and Lai 2015; Hamers et al. 2016). A similar process could apply to more massive third bodies, e.g. low-mass MS stars. It is currently largely unexplored how non-secular effects such as stellar evolution, tidal evolution and mass transfer affect the evolution of hierarchical quadruple systems or, more generally, of higher-order multiple systems. The secular dynamics of the latter could be efficiently modelled using the recent formalism of Hamers and Portegies Zwart (2016).

In the previous section, we gave an overview of the most important ingredients of the evolution of stars in single systems, binaries and triples. For example, nuclear evolution of a star leads to wind mass loss, which affects the dynamics of binaries and triples, and can even lead to a dynamical instability in multiple systems. Three-body dynamics can give rise to oscillations in the eccentricity of the inner binary system of the triple, which can lead to an amplified tidal effect and an enhanced rate of stellar mass transfer, collisions, and mergers. Additionally, a triple system can transition from one dynamical regime to another (i.e. without Lidov-Kozai cycles, regular and eccentric Lidov-Kozai cycles) due to stellar evolution, e.g. wind mass loss or an enhancement of tides as the stellar radius increases in time. These examples illustrate that for the evolution of triple stars, stellar evolution and dynamics are intertwined. Therefore, in order to study the evolution of triple star systems consistently, three-body dynamics and stellar evolution need to be taken into account simultaneously.

In this paper, we present a public source code, TrES, to simulate the evolution of wide and close, interacting and non-interacting triples consistently. The code is designed for the study of coeval, dynamically stable, hierarchical, stellar triples. The code is based on heuristic recipes that combine three-body dynamics with stellar evolution and their mutual influences. These recipes are described here. The code can be used to evaluate the distinct evolutionary channels of a specific population of triples or the importance of different physical processes in triple evolution. As an example, it can be used to assess the occurrence rate of stable and unstable mass transfer initiated in circular and eccentric inner orbits of triple systems (Toonen, Hamers and Portegies Zwart in prep.). We stress that modelling through a phase of stable mass transfer in an eccentric orbit is currently not implemented in TrES, but we aim to add this to the capabilities of TrES in a later version of the code. The code TrES is based on the secular approach to solve the dynamics (Section 3.3), and stellar evolution is included in a parametrized way through the fast stellar evolution code SeBa (Section 3.2). TrES is written in the Astrophysical Multipurpose Software Environment, or AMUSE (Portegies Zwart et al. 2009; Portegies Zwart 2013). This is a component library with a homogeneous interface structure based on Python. AMUSE can be downloaded for free at amusecode.org and github.com/amusecode/amuse. In the AMUSE framework new and existing code from different domains (e.g.
stellar dynamics, stellar evolution, hydrodynamics and radiative transfer) can be easily used and coupled. As a result of this easy coupling, the triple code can readily be extended to include a detailed stellar evolution code (i.e. one that solves the stellar structure equations) or a direct N-body code to solve the dynamics of triples that are unstable or in the semi-secular regime (Section 3.3.1).

Structure of TrES
The code consists of three parts: Step 1: stellar evolution; Step 2: stellar interaction; Step 3: orbital evolution. At the beginning of each timestep we estimate an appropriate timestep \(dt_{\mathrm{trial}}\) and evolve the stars as single stars for this timestep (Step 1). The trial timestep is estimated with: $$ dt_{\mathrm{trial}} = \min(dt_{\mathrm{star}}, dt_{\mathrm{wind}}, dt_{\mathrm{R}}, f_{\mathrm{prev}} dt_{\mathrm{prev}}), $$ where \(dt_{\mathrm{star}}\), \(dt_{\mathrm{wind}}\), \(dt_{\mathrm{R}}\) and \(f_{\mathrm{prev}} dt_{\mathrm{prev}}\) are the minimum timesteps due to stellar evolution, stellar wind mass losses, stellar radius changes and the previous timestep, respectively. Each star gives rise to a single value for \(dt_{\mathrm{trial}}\), of which the minimum is adopted as the trial timestep in TrES.

The timestep \(dt_{\mathrm{star}}\) is determined internally by the stellar evolution code (SeBa, Section 3.2). It is the maximum attainable timestep for the next iteration of this code and is mainly chosen such that the stellar masses that evolve due to winds are not significantly affected by the timesteps. Furthermore, when a star changes its stellar type (e.g. from a horizontal branch star to an AGB star), the timestep is minimized to ensure a smooth transition. For TrES, we require a stricter constraint on the wind mass losses, such that \(dt_{\mathrm{wind}} = f_{\mathrm{wind}} m/\dot{m}_{\mathrm{wind}}\), where \(f_{\mathrm{wind}}=0.01\) and \(\dot{m}_{\mathrm{wind}}\) is the wind mass loss rate given by the stellar evolution code. The numerical factor \(f_{\mathrm{wind}}\) establishes a maximum average of 1% mass loss from stellar winds per timestep. Furthermore, we ensure that the stellar radii change by less than a percent per timestep through \(dt_{\mathrm{R}} = f_{R}f'_{R}\cdot R/\dot{R}\), where \(f_{R}\) and \(f'_{R}\) are numerical factors. We take \(f_{R} = 0.005\) and $$ f'_{R} = \textstyle\begin{cases} 0.1 & \mbox{for } \dot{R}_{\mathrm{prev}} = 0~R_{\odot }/\mbox{yr}, \\ 0.01 & \mbox{for } \dot{R} \cdot\dot{R}_{\mathrm{prev}} < 0~(R_{\odot }/\mbox{yr})^{2}, \\ 1 & \mbox{for } \dot{R} < \dot{R}_{\mathrm{prev}} \mbox{ and not MS}, \\ \dot{R}/\dot{R}_{\mathrm{prev}} & \mbox{for } \dot{R} < \dot{R}_{\mathrm{prev}} \mbox{ and MS}, \\ \dot{R}_{\mathrm{prev}}/\dot{R} & \mbox{for } \dot{R} > \dot{R}_{\mathrm{prev}}, \end{cases} $$ where Ṙ and \(\dot{R}_{\mathrm{prev}}\) represent the time derivative of the radius of the current and previous timestep, respectively. This limit is particularly important since the degree of tidal interaction strongly depends on the stellar radius. Lastly, we require that \(dt_{\mathrm{trial}} < f_{\mathrm{prev}} dt_{\mathrm{prev}}\), where \(dt_{\mathrm{prev}}\) is the previous successfully accomplished timestep and \(f_{\mathrm{prev}}\) a numerical factor with a value of 100.

The trial timestep is accepted and the code continues to Step 2 only if: (a) no star has started to fill its Roche lobe, (b) the stellar radii have changed by less than 1%, and (c) the stellar masses have changed by less than 5% within the trial timestep.
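A minimal sketch of how such a trial timestep can be assembled (Eq. (34)) and how conditions (a)-(c) can be checked is given below; the numerical factors follow the values quoted above, but the function and variable names are ours, not those of TrES.

# Sketch of the trial-timestep selection (Eq. (34)) and the acceptance test
# (conditions (a)-(c)); names and structure are illustrative, not from TrES.
F_WIND, F_R, F_PREV = 0.01, 0.005, 100.0

def trial_timestep(dt_star, m, mdot_wind, R, Rdot, fR_prime, dt_prev):
    dt_wind = F_WIND * m / abs(mdot_wind) if mdot_wind != 0.0 else float("inf")
    dt_R = F_R * fR_prime * R / abs(Rdot) if Rdot != 0.0 else float("inf")
    return min(dt_star, dt_wind, dt_R, F_PREV * dt_prev)

def accept_timestep(started_rlof, frac_radius_change, frac_mass_change):
    # (a) no RLOF started, (b) radii changed by < 1%, (c) masses changed by < 5%
    return (not started_rlof) and frac_radius_change < 0.01 and frac_mass_change < 0.05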
We have tested that these percentages give accurate results with respect to the orbital evolution. Condition (b) is not applied at moments when the stellar radius changes discontinuously, such as during the helium flash or at white dwarf formation. Conditions (b) and (c) are not applied when a massive star collapses to a neutron star or black hole. Note that when a star undergoes such a supernova explosion, the timestep is minimized through \(dt_{\mathrm {star}}\). Additionally, Steps 2 and 3 are skipped and the triple is adjusted according to Section 3.4.7. If conditions (a), (b) and (c) are not met, the timestep is reverted and Step 1 is tried again with a smaller timestep \(dt'_{\mathrm{trial}}\). This process is done iteratively until the conditions are met or until the timestep is sufficiently small, \(dt_{\mathrm{min}} = 10^{-3}~\mbox{yr}\). If the change in the stellar parameters is too large (i.e. cases (b) and (c)), the new trial timestep is taken to be: $$ dt'_{\mathrm{trial}} = \mathrm{min}\bigl[0.9dt_{\mathrm{trial}}, dt'_{\mathrm{trial}}\bigl(\dot{R}'\bigr)\bigr], $$ where \(0.9 dt_{\mathrm{trial}}\) represents 90% of the previous timestep and \(dt'_{\mathrm{trial}}(\dot{R}')\) is a newly calculated timestep according to Eq. (35), for which the time derivative of the radius \(\dot{R}'\) from the last trial timestep is used. During mass transfer, the timestep is estimated by: $$ dt_{\mathrm{trial, MT}} = \mathrm{min}\biggl(dt_{\mathrm{trial}}, f_{\mathrm{MT}} \frac{m}{\dot{m}_{\mathrm{MT}}}, f_{\mathrm{MT, prev}} dt_{\mathrm{prev}}\biggr), $$ where \(\dot{m}_{\mathrm{MT}}\) is the mass loss rate from mass transfer (Section 3.4.5), and the numerical factors are \(f_{\mathrm{MT}}=0.01\) and \(f_{\mathrm{MT, prev}}=5\). If a star starts filling its Roche lobe within a timestep, \(dt'_{\mathrm {trial}} = 0.5dt_{\mathrm{trial}}\). If \(dt'_{\mathrm{trial}} < dt_{\mathrm{trial, MT}}\), mass transfer is allowed to commence.

Step 2 in our procedure concerns the modelling of stellar interactions, such as stable mass transfer, contact evolution and common-envelope evolution (Section 3.4). The last step involves the simulation of the orbital evolution of the system by solving a system of differential equations (Section 3.3). If the evolution leads to the initiation of RLOF during the trial timestep, both the orbit and stellar evolution are reverted to the beginning of the timestep. If the time until RLOF is shorter than 1% of \(dt_{\mathrm{MT}}\), the latter is taken to be the new trial timestep and mass transfer is allowed to commence. If not, the timestep is taken to be the time until RLOF that was found during the last trial timestep. If during the orbital evolution the system becomes dynamically unstable, the simulation is terminated. The stability criterion of Mardling and Aarseth (2001) is used (Eq. (23)). In all other cases, the trial timestep is accepted, and the next iteration begins.

Stellar evolution
Single stellar evolutionFootnote 10 is included through the fast stellar evolution code SeBa (Portegies Zwart and Verbunt 1996; Nelemans et al. 2001; Toonen et al. 2012; Toonen and Nelemans 2013). SeBa is a parametrized stellar evolution code providing parameters such as radius, luminosity and core mass as a function of initial mass and time. SeBa is based on the stellar evolution tracks from Hurley et al. (2000). These tracks are fitted to the results of a detailed stellar evolution code (based on Eggleton 1971, 1972) that solves the stellar structure equations.
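For reference, the dynamical stability check of Eq. (23) that is applied at every step can be evaluated in a few lines; a sketch, with the mutual inclination in radians and \(q_{\mathrm{out}} = m_{3}/(m_{1}+m_{2})\), and purely illustrative input values:

# Dynamical stability check of Mardling & Aarseth (Eq. (23)); sketch only.
import numpy as np

def critical_ratio(q_out, e_out, i_mutual):
    # critical a_out/a_in below which the triple is considered unstable
    return (2.8 / (1.0 - e_out)) * (1.0 - 0.3 * i_mutual / np.pi) * \
           ((1.0 + q_out) * (1.0 + e_out) / np.sqrt(1.0 - e_out))**0.4

def is_dynamically_stable(a_in, a_out, m1, m2, m3, e_out, i_mutual):
    return a_out / a_in >= critical_ratio(m3 / (m1 + m2), e_out, i_mutual)

print(is_dynamically_stable(a_in=0.5, a_out=50.0, m1=1.0, m2=0.8, m3=0.4,
                            e_out=0.4, i_mutual=np.radians(80.0)))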
Orbital evolution TrES solves the orbital evolution through a system of first-order ordinary differential equations (ODE): $$ \textstyle\begin{cases} \dot{a}_{\mathrm{in}} = \dot{a}_{\mathrm{in, GR}} +\dot {a}_{\mathrm{in, TF}} +\dot{a}_{\mathrm{in, wind}} +\dot {a}_{\mathrm{in, MT}},\\ \dot{a}_{\mathrm{out}} = \dot{a}_{\mathrm{out, GR}} +\dot {a}_{\mathrm{out, TF}} +\dot{a}_{\mathrm{out, wind}} + \dot{a}_{\mathrm{out, MT}}, \\ \dot{e}_{\mathrm{in}} = \dot{e}_{\mathrm{in,3b}} + \dot {e}_{\mathrm{in,GR}} +\dot{e}_{\mathrm{in,TF}}, \\ \dot{e}_{\mathrm{out}} = \dot{e}_{\mathrm{out,3b}} +\dot {e}_{\mathrm{out,GR}} + \dot{e}_{\mathrm{out,TF}}, \\ \dot{g}_{\mathrm{in}} = \dot{g}_{\mathrm{in,3b}} + \dot {g}_{\mathrm{in,GR}} + \dot{g}_{\mathrm{in,tides}} + \dot {g}_{\mathrm{in,rotate}},\\ \dot{g}_{\mathrm{out}} = \dot{g}_{\mathrm{out, 3b}} + \dot {g}_{\mathrm{out,GR}} + \dot{g}_{\mathrm{out,tides}} + \dot{g}_{\mathrm{out,rotate}},\\ \dot{h}_{\mathrm{in}} = \dot{h}_{\mathrm{in, 3b}},\\ \dot{\theta} = \frac{-1}{J_{\mathrm{b, in}}J_{\mathrm{b, out}}}[\dot{J}_{\mathrm{b, in}}(J_{\mathrm{b, in}}+J_{\mathrm{b,out}}\theta) \\ \hphantom{\dot{\theta} = } + \dot{J}_{\mathrm{b, out}}(J_{\mathrm{b, out}}+ J_{\mathrm{b,in}}\theta)],\\ \dot{\Omega}_{1} = \dot{\Omega}_{1, TF} +\dot{\Omega}_{1, I}+\dot{\Omega}_{\mathrm{1, wind}}, \\ \dot{\Omega}_{2} = \dot{\Omega}_{2, TF} +\dot{\Omega}_{2, I}+\dot{\Omega}_{\mathrm{2, wind}},\\ \dot{\Omega}_{3} = \dot{\Omega}_{3, TF} +\dot{\Omega}_{3, I}+\dot{\Omega}_{\mathrm{3, wind}}, \end{cases} $$ where \(\theta\equiv{\mathrm{cos}}(i)\), \(J_{\mathrm{b, in}}\) and \(J_{\mathrm{b, out}}\) are the orbital angular momentum of the inner and outer orbit, and \(\Omega_{1}\), \(\Omega_{2}\) and \(\Omega_{3}\) the spin frequency of the star with mass \(m_{1}\), \(m_{2}\) and \(m_{3}\), respectively, and I the moment of inertia of the corresponding star. ẋ represents the time derivative of parameter x. Eq. (39) includes secular three-body dynamics (with subscript 3b), general relativistic effects (GR), tidal friction (TF), precession, stellar wind effects and mass transfer (MT). The quadrupole terms of the three-body dynamics are based on Harrington (1968), and the octupole terms on Ford et al. (2000) with the modification of Naoz et al. (2013). Gravitational wave emission is included as in Eqs. (8) and (9) (Peters 1964). Our treatment of tidal friction and precession is explained in Section 3.4.2. Magnetic braking is currently not included. The treatment of stellar winds and mass transfer is described in Sections 3.4.1-3.4.5. The ODE solver routine uses adaptive timesteps to simulate the desired timestep \(dt_{\mathrm{trial}}\). Within the ODE solver, parameters that are not given in Eq. (39) (e.g. gyration radius), are assumed to be constant during \(dt_{\mathrm{trial}}\). An exception to this is the stellar radius, mass and moment of inertia. Even though \(dt_{\mathrm{trial}}\) is chosen such that the parameters do not change significantly within this timestep (Section 3.1), there is a cumulative effect that can violate angular momentum conservation on longer timescales if \(\dot{\Omega}_{I}\) is not taken into account. As a non-interacting star evolves and the mass, radius and moment of inertia change, the spin frequency of the star evolves accordingly due to conservation of spin angular momentum. 
The change in the spin frequency is: $$ \dot{\Omega}_{I} = \frac{-\dot{I}\Omega}{I}, $$ with $$ I = k_{2}(m-m_{c})R^{2} + k_{3}m_{c}R_{c}^{2}, $$ where \(m_{c}\) and \(R_{c}\) are the mass and radius of the core, \(k_{2}=0.1\) and \(k_{3}=0.21\) (Hurley et al. 2000). Thus we approximate the moment of inertia with one component for the core and one for the envelope of the star. This method works well for evolved stars that have developed dense cores, as well as for MS stars with \(m_{c}\equiv0M_{\odot}\), and compact objects for which \(m-m_{c}\equiv0M_{\odot}\).

The initial spin periods of the stellar components of the triple are assumed to be similar to those of ZAMS stars. Based on observed rotational velocities of MS stars from Lang (1992), Hurley et al. (2000) proposed the fit: $$ \Omega= \frac{2\text{,}058}{R} \frac{330M^{3.3}}{15.0+M^{3.45}}~\mathrm{ yr^{-1}}. $$ As in Hamers et al. (2013), we make the simplifying assumption that the stellar spin axes are aligned with the orbital axis of the corresponding star. For the vast majority of stellar triples the magnitudes of the spin angular momenta are small compared to those of the orbital angular momenta. A consequence of this assumption is that tidal friction from a spin-orbit misalignment is absent. The change in the mutual inclination in Eq. (39) is based on the conservation of total angular momentum (Hamers et al. 2013).

The set of orbital equations of Eq. (39) is solved by a routine based on the ODE solver routine presented in Hamers et al. (2013). This routine uses the CVODE library, which is designed to integrate stiff ODEs (Cohen et al. 1996). It has been verified by comparing integrations with example systems presented in Ford et al. (2000), Blaes et al. (2002) and Naoz et al. (2011), and by comparing with analytical solutions at the quadrupole-order level of approximation assuming \(J_{\mathrm{b, out}}\gg J_{\mathrm{b, in}}\), given by Kinoshita and Nakai (1999). The ODE consists of a combination of prescriptions for the main physical processes of triple evolution (e.g. three-body dynamics and tides), which are described in Section 2. In order to set up the ODE, we have assumed that the physical processes are independent of one another, such that their analytical treatments can be added linearly. Michaely and Perets (2014) show that the dynamics of a hierarchical triple including mass loss and transfer can be well modelled with this approach. They find that the secular approach shows excellent agreement with full N-body simulations. Additionally, we note that the ODE of Eq. (39) is valid as long as the included processes occur on timescales longer than the dynamical timescale. In the next section we discuss when this criterion is violated, and describe the alternative treatments in TrES.

The secular approach
The components in Eq. (39) (\(\dot{e}_{\mathrm{3b}}\), \(\dot {g}_{\mathrm{3b}}\), and \(\dot{h}_{\mathrm{3b}}\)) that describe the secular three-body dynamics, including the regular Lidov-Kozai cycles, are derived using the orbital-averaging technique (e.g. Michaely and Perets 2014; Naoz 2016; Luo et al. 2016). With this method, the masses of the components of the triple system are distributed over the inner and outer orbit. The three-body dynamics is then approximated by the interaction between these ellipses. Furthermore, the Hamiltonian is expanded up to third order in \(a_{\mathrm{in}}/a_{\mathrm{out}}\) (i.e. the octupole term).
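To illustrate the linear, term-by-term structure of Eq. (39), the sketch below sums independent contributions to \((\dot{a}_{\mathrm{in}}, \dot{e}_{\mathrm{in}})\); only the gravitational-wave terms of Peters (1964) (the 'GR' entries) are written out, while the secular three-body, tidal and wind terms are left as zero-valued placeholders. This is purely illustrative and not the TrES implementation, which integrates the full system of Eq. (39) with CVODE.

# Illustration of the additive structure of Eq. (39): each physical process
# supplies its own (a_dot, e_dot) and the contributions are summed.  Only the
# gravitational-wave terms (Peters 1964) are filled in; the secular three-body,
# tidal and wind terms are placeholders.  Sketch only, not the TrES code.
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-11, 2.998e8  # SI units

def gw_terms(a, e, m1, m2):
    fac = G**3 * m1 * m2 * (m1 + m2) / (c**5 * (1.0 - e**2)**3.5)
    a_dot = -(64.0 / 5.0) * fac / a**3 * (1.0 + 73.0/24.0*e**2 + 37.0/96.0*e**4)
    e_dot = -(304.0 / 15.0) * e * fac * (1.0 - e**2) / a**4 * (1.0 + 121.0/304.0*e**2)
    return a_dot, e_dot

def other_terms(a, e):
    return 0.0, 0.0  # three-body, tidal and wind contributions omitted here

def rhs(t, y, m1, m2):
    a, e = y
    contributions = [gw_terms(a, e, m1, m2), other_terms(a, e)]
    return [sum(term[0] for term in contributions),
            sum(term[1] for term in contributions)]

# example: a 0.6 + 0.6 Msun double white dwarf inner binary with a_in = 0.005 AU
msun, au, yr = 1.989e30, 1.496e11, 3.156e7
sol = solve_ivp(rhs, [0.0, 1.0e8 * yr], [0.005 * au, 0.1],
                args=(0.6 * msun, 0.6 * msun), rtol=1e-8)
print(sol.y[0, -1] / au, sol.y[1, -1])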
The quadrupole level of approximation refers to the second-order expansion of the Hamiltonian, which is the lowest non-trivial expansion order. The quadrupole level is sufficient for systems in which the octupole parameter (Eq. (26)) is sufficiently low, i.e. \(\vert \epsilon_{\mathrm{oct}} \vert < 10^{-4}\). For example, this includes systems with \(m_{1} \approx m_{2}\). To accurately model the dynamics of a population of systems with a wide range of mass ratios, eccentricities and orbital separations, one needs to include the octupole term as well. However, whereas the orbit-averaged equations of motion are valid for strongly hierarchical systems, they become inaccurate for triples with weaker hierarchies (e.g. Antonini and Perets 2012; Katz and Dong 2012; Naoz et al. 2013; Antonini et al. 2014; Antognini et al. 2014; Bode and Wegg 2014; Luo et al. 2016). For these systems, the timescale of the perturbations due to the outer star can become comparable to or shorter than the dynamical timescales of the system. This is problematic as the orbit-average treatment by definition neglects any modulations on short orbital timescales. For moderately hierarchical systems, the outer star can significantly change the angular momentum of the inner binary between two successive pericenter passages, causing rapid oscillations in the corresponding eccentricity. The orbit-average treatment is valid when: $$ \sqrt{1-e_{\mathrm{in}}} \gtrsim5\pi\frac{m_{3}}{m_{1}+m_{2}} \biggl( \frac{a_{\mathrm{in}}}{a_{\mathrm{out}}(1-e_{\mathrm {out}})} \biggr) ^{3}, $$ as derived by Antonini et al. (2014). In the non-secular regime, the inner binary can be driven to much higher eccentricities than the secular approximation predicts, subsequently leading to more collisions of e.g. black holes (Antonini et al. 2014), neutron stars (Seto 2013) or white dwarfs (Katz and Dong 2012). These are interesting in the context of gravitational wave emission and type Ia supernovae. Due to the very short timescale of the eccentricity oscillations, and therefore the rapid changes of the periapse distance, tidal or general relativistic effects do not play a role (Katz and Dong 2012). With the secular approach, as in TrES, the maximum inner eccentricity and therefore the number of collisions are probably underestimated for moderately hierarchical systems (see also Naoz et al. 2016). Furthermore, Luo et al. (2016) showed recently that the rapid oscillations accumulate over time and alter the long-term evolution of the triple systems (e.g. whether or not an orbital flip occurs). The non-secular behaviour discussed by Luo et al. (2016) occurs in systems in which the mass of the outer star is comparable to or larger than that of the inner binary, and in which the octupole term is important (\(\vert \epsilon_{\mathrm{oct}} \vert \gtrsim0.05\)).

Stellar interaction
Stellar winds in TrES
The mass loss rate of each star depends on the evolutionary stage and is determined by the stellar evolution code. We make the common assumption that the wind is fast and spherically symmetric with respect to the orbit, as discussed in Section 2.2.1. For the inner binary, we assume a fraction \(\beta_{1\rightarrow2}\) of the wind mass lost from \(m_{1}\) at a rate \(\dot{m}_{1}\) can be accreted by \(m_{2}\), and \(\beta_{2\rightarrow1}\) for the mass flowing in the other direction. Following Eq.
(7), the effect on the inner orbit is then: $$ \dot{a}_{\mathrm{in, wind}} = \dot{a}_{\mathrm{wind}}(\dot{m}_{1}, \beta_{1\rightarrow2}) + \dot{a}_{\mathrm{wind}}(\dot{m}_{2}, \beta_{2\rightarrow1}). $$ The wind mass lost from the inner binary is \(\dot{m}_{\mathrm {in}}=(1-\beta_{1\rightarrow2})\dot{m}_{1}+(1-\beta_{2\rightarrow 1})\dot{m}_{2}\), of which the outer star can accrete a fraction \(\beta_{\mathrm{in}\rightarrow3}\). We do not allow the inner binary to accrete mass from the outer star. The winds widen the orbit according to: $$ \dot{a}_{\mathrm{out, wind}} = \dot{a}_{\mathrm{wind}}(\dot{m}_{\mathrm{in}}, \beta_{\mathrm{in}\rightarrow3}) + \dot{a}_{\mathrm{wind}, \text{no-acc}}(\dot{m}_{3}), $$ see Eq. (5) and Eq. (7). The wind matter carries away an amount of angular momentum, which affects the spin of the star. Under the assumption that the wind mass decouples from the star as a spherical shell: $$ \dot{\Omega}_{\mathrm{wind}} = \frac{-2/3\dot{m}_{\mathrm {wind}}R^{2}\Omega}{I}. $$ If wind matter is accreted by a star, we assume the accretor star spins up, i.e. the stellar spin angular momentum increases with the specific angular momentum of the wind matter. For example, for \(m_{1}\), the total change in the spin due to winds is: $$\begin{aligned} \dot{\Omega}_{\mathrm{1, wind}} = {}&\frac{-2/3\dot{m}_{\mathrm{1, wind}}R_{1}^{2}\Omega_{1}}{I_{1}} \\ {}&+ \frac{2/3\beta_{2\rightarrow 1}\dot{m}_{\mathrm{2, wind}}R_{2}^{2}\Omega_{2}}{I_{1}}. \end{aligned}$$

Tides and precession
Tidal friction is included in TrES as described in Eqs. (10)-(12). The dominant tidal dissipation mechanism is linked with the type of energy transport in the outer zones of the star. We follow Hurley et al. (2002) and distinguish three types of damping: in stars with convective envelopes, in stars with radiative envelopes (i.e. the dynamical tide), and in degenerate stars. The quantity \(k_{\mathrm{am}}/\tau_{\mathrm{TF}}\) of Eqs. (10)-(12) is given for these three types of stars in their Eqs. (30), (42)Footnote 11 and (47), respectively. We assume that radiative damping takes place in MS stars with \(M>1.2M_{\odot}\), in helium-MS stars and in horizontal branch stars. Excluding compact objects, all other stars are assumed to have convective envelopes. For the mass and radius of the convective part of the stellar envelope, we follow Hurley et al. (2000) (their Section 7.2) and Hurley et al. (2002) (their Eqs. (36)-(40)), respectively, with the modification that MS stars have convective envelopes in the mass range \(0.3\mbox{-}1.2M_{\odot}\). Regarding the gyration radius k, for stars with convective or radiative envelopes we assume \(k=k_{2}\), and for compact objects \(k=k_{3}\) (see Eq. (41)).

We include precession from three-body dynamics, general relativistic effects (Eq. (27), Blaes et al. 2002), tides (Eq. (28), Smeyers and Willems 2001), and stellar rotation (Eq. (29), Fabrycky and Tremaine 2007). The latter two equations require an expression for the apsidal motion constant \(k_{\mathrm {am}}\) (instead of \(k_{\mathrm{am}}/\tau_{\mathrm{TF}}\) as required for the tidal equations of Eqs. (10)-(12)). For MS stars, helium-MS stars, and WDs, we assume \(k_{\mathrm {am}}=0.0144\) (Brooker and Olle 1955), for neutron stars \(k_{\mathrm {am}}=0.260\) (Brooker and Olle 1955), for black holes \(k_{\mathrm{am}}=0\), and for other stars \(k_{\mathrm{am}}=0.05\) (Claret and Gimenez 1992). For low-mass (\(m<0.7M_{\odot}\)) MS stars that are fully or deeply convective, we take \(k_{\mathrm{am}}=0.05\) (Claret and Gimenez 1992).
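The spin-related bookkeeping of Eqs. (40), (41) and (46) reduces to a few lines; the sketch below uses the quoted constants \(k_{2}=0.1\) and \(k_{3}=0.21\). The function names are ours, and the stellar quantities are assumed to be supplied externally (in TrES they come from SeBa).

# Spin bookkeeping sketch: moment of inertia (Eq. (41)), spin change from a
# changing stellar structure (Eq. (40)) and from wind mass loss (Eq. (46)).
# Names are illustrative; in TrES the stellar quantities come from SeBa.
K2, K3 = 0.1, 0.21

def moment_of_inertia(m, m_core, R, R_core):
    # envelope plus core contribution
    return K2 * (m - m_core) * R**2 + K3 * m_core * R_core**2

def omega_dot_structure(I_dot, omega, I):
    # keeps the spin angular momentum constant while I changes (Eq. (40))
    return -I_dot * omega / I

def omega_dot_wind(mdot_wind, R, omega, I):
    # mdot_wind is the (positive) wind mass-loss rate; the wind decouples as a
    # spherical shell carrying (2/3) mdot R^2 Omega of spin angular momentum
    return -(2.0 / 3.0) * mdot_wind * R**2 * omega / I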
Stability of mass transfer initiated in the inner binary

When one of the inner stars fills its Roche lobe, we test for the stability of the mass transfer:

Tidal instability: Tidal friction can lead to an instability in the binary system and subsequent orbital decay (see Section 2.2.3). The tidal instability takes place in compact binaries with extreme mass ratios. It occurs when there is insufficient angular momentum to keep the star in synchronization, i.e. \(J_{\star} > \frac{1}{3} J_{\mathrm{b}}\), with \(J_{\star}=I\Omega\). When RLOF occurs due to a tidal instability, we assume that a CE develops around the inner binary. This will lead to further orbital decay, and finally either to a merger or to the ejection of the envelope.

RLOF instability: The stability of the mass transfer depends on the response of the radius and of the Roche lobe to the imposed mass loss. In the fundamental work of Hjellming and Webbink (1987), theoretical stability criteria are derived for polytropes. Stability criteria have since been improved with the use of more realistic stellar models (see e.g. Ge et al. 2010, 2015; Passy et al. 2012a; Woods et al. 2010, 2012). Our incomplete understanding of the stability of mass transfer leads to differences between synthetic binary populations (Toonen et al. 2014). The response of the Roche lobe is strongly dependent on the envelope of the donor star and the mass ratio of the system (see footnote 12). Therefore the stability of mass transfer is often described by a critical mass ratio \(q_{\mathrm{crit}}\) for different types of stars, such that mass transfer is unstable when \(q \equiv m_{d}/m_{a} > q_{\mathrm{crit}}\). For unevolved stars with radiative envelopes, mass transfer can proceed in a stable manner for relatively large mass ratios. We assume \(q_{\mathrm{crit}} = 3\), unless the star is on the MS, for which we take \(q_{\mathrm{crit}} = 1.6\) (de Mink et al. 2007b; Claeys et al. 2014). Stars with convective envelopes are typically unstable to mass transfer, unless the donor is considerably less massive than the companion. For giants, we adopt \(q_{\mathrm{crit}} = 0.362 +[3(1- M_{c}/M)]^{-1}\), where \(M_{c}\) is the core mass of the donor star (Hjellming and Webbink 1987). For naked helium giants, low-mass MS stars (\(M<0.7M_{\odot }\)), and white dwarfs, we follow Hurley et al. (2002) and adopt \(q_{\mathrm{crit}} = 0.784\), \(q_{\mathrm{crit}} = 0.695\) and \(q_{\mathrm{crit}} = 0.628\), respectively.

Common-envelope evolution in the inner binary

As CE-evolution is a fast, hydrodynamic process, the ODE solver routine is disabled during the modelling of the CE-phase. If the donor star is a star without a clear distinction between the core and the envelope (i.e. MS stars, helium MS stars and remnants), we assume the phase of unstable mass transfer leads to a merger. For other types of donor stars, the treatment of the CE-phase consists of three different models that are based on combinations of the formalisms described in Section 2.2.5. In model 1 and model 2, the α-formalism (Eq. (14)) and γ-formalism (Eq. (17)) are used to determine the outcome of the CE-phase, respectively. When two giants are involved, the double-CE prescription is applied (Eq. (15)). In the standard model, the γ-formalism is applied unless the CE is triggered by a tidal instability or the binary contains a remnant star. This is based on the modelling of the evolution of double white dwarfs (Nelemans et al. 2000, 2001; Toonen et al. 2012). The standard values of \(\alpha\lambda _{\mathrm{ce}}\) and γ are taken to be 2.0 and 1.75, respectively (Nelemans et al. 2000).
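Whether RLOF in the inner binary leads to stable mass transfer or to one of these CE-models is decided by the criteria above. The sketch below collects the quoted critical mass ratios into a single helper; it is a simplified stand-in rather than the TrES source code, and the stellar-type labels and the handling of border cases are assumptions made here for clarity.

```python
def q_crit(donor_type, m_donor=None, m_core=None):
    """Critical mass ratio q_crit for a given (simplified) donor type,
    following the prescriptions quoted in the text (de Mink et al. 2007b;
    Claeys et al. 2014; Hjellming & Webbink 1987; Hurley et al. 2002)."""
    if donor_type == "MS":                  # MS star with a radiative envelope
        return 1.6
    if donor_type == "radiative_envelope":  # other unevolved radiative-envelope stars
        return 3.0
    if donor_type == "giant":               # giant with a convective envelope
        return 0.362 + 1.0 / (3.0 * (1.0 - m_core / m_donor))
    if donor_type == "helium_giant":        # naked helium giant
        return 0.784
    if donor_type == "low_mass_MS":         # MS star with M < 0.7 Msun
        return 0.695
    if donor_type == "WD":                  # white dwarf
        return 0.628
    raise ValueError("unknown donor type: %s" % donor_type)


def rlof_outcome(m_donor, m_accretor, donor_type, m_core=None):
    """Crude dispatcher: unstable RLOF (q > q_crit) leads to a CE-phase,
    otherwise mass transfer proceeds stably."""
    q = m_donor / m_accretor
    if q > q_crit(donor_type, m_donor, m_core):
        return "common envelope"
    return "stable mass transfer"


# Example: a 2 Msun giant with a 0.4 Msun core and a 1.4 Msun companion.
print(rlof_outcome(2.0, 1.4, "giant", m_core=0.4))  # q = 1.43 > q_crit ~ 0.78 -> common envelope
```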
The companion star in the inner binary is probably not able to accrete from the overflowing material of the CE-phase, because of its relatively long thermal timescale compared to the short timescale on which the CE is expected to take place. Therefore, we assume that the CE occurs completely non-conservatively.

The effect of a CE-phase on the outer star of a triple is poorly studied or constrained (Section 2.3.8). For stable, hierarchical systems, if the CE-material is energetic enough to escape from the inner binary, the matter is likely energetic enough to escape from the triple as well (Section 2.3.7). We assume that the escaping CE-matter has expanded and diluted sufficiently to avoid a second CE-phase with the outer star, as we only consider stable triples with \(a_{\mathrm {out}}/a_{\mathrm{in}} \gtrsim 3\). We allow the matter to escape as a fast wind in a non-conservative way, i.e. according to Eq. (45) with \(\dot{m}_{3}=0\) and \(\beta_{\mathrm{in}\rightarrow3} = 0\): $$ \dot{a}_{\mathrm{out, wind}} = \dot{a}_{\mathrm{wind}, \text{no-acc}}(\dot{m}_{\mathrm{in}}). $$ There may be friction between the CE-matter and the outer star if the CE-matter is primarily expelled in the orbital plane and the inner and outer orbital planes are parallel. Therefore we multiply Eq. (48) by a factor $$ f_{\mathrm{fric}} = \mathrm{min} \biggl( 1,\frac{\vert \mathrm{sin}(i)\vert }{ \vert \mathrm{sin}(i_{\mathrm{crit}}) \vert } \biggr), $$ where \(i_{\mathrm{crit}}\) is a minimum inclination necessary for the friction to take place.

Stable mass transfer in a circular inner binary

We assume mass transfer in the inner binary takes place on either the thermal or the nuclear timescale of the donor star. Mass transfer driven by angular momentum loss is currently not implemented in the code. The mass transfer rate is estimated by \(\dot{m}_{\mathrm{MT}} = m/\tau _{\mathrm{MT}}\), where \(\tau_{\mathrm{MT}}\) is the timescale of mass transfer. The thermal timescale of a star is given by Eq. (2), where R and L are given by the stellar evolution code used in TrES, SeBa. The nuclear timescale of a MS or helium-MS star is estimated by Eq. (3). For other stars we take \(\tau_{\mathrm{nucl}} = R/\dot{R}\), where Ṙ is the time derivative of the radius, calculated from the current and previous timestep. If the star is shrinking, which is possible for horizontal branch stars or evolved AGB stars, we estimate the nuclear timescale as 10% of the stellar age. Rejuvenation of the accretor star, and the opposite process for the donor star, are taken into account by SeBa. Their method is explained in Appendix A.2.1 of Toonen et al. (2012).

The orbital evolution of the inner binary is approximated with Eq. (19), where β and η are taken to be constants. If the companion star in the inner binary fills its Roche lobe during the mass transfer phase in response to the accretion, a contact binary is formed. We then allow the inner binary to go through a CE-phase as described in Section 3.4.4, which likely leads to a merger of the system. The degree of conservativeness β of the mass transfer is one of the major uncertainties in binary evolution calculations. The accretor star is expected to spin up due to the accretion. Even if the companion accretes only a few percent of its own mass, the accretor is spun up to critical rotation (Packet 1981). This has been invoked to limit the amount of accretion that can take place. However, as some binaries have managed to experience a phase of fairly conservative mass transfer (e.g.
ϕ Per; Pols 2007), additional mechanisms of angular momentum loss must play a role during mass transfer (de Mink et al. 2007a). The lack of synchronisation can affect the size of the Roche lobe significantly. For example, for a star that is rotating 100 times faster than synchronization, the Roche lobe is only 5-10% of the classical Roche lobe (based on Sepinsky et al. 2007a). For simplicity, we make the common assumption that any circularized system entering RLOF is, and will remain, synchronized during the mass transfer phase.

For stable, hierarchical triple systems, the matter lost by the inner binary is likely energetic enough to escape from the triple (Section 2.3.7), and we model this as a fast wind, i.e. according to Eq. (48). To incorporate the effect of friction between the matter and the outer star, we multiply Eq. (48) by a factor \(0< f_{\mathrm{fric}} \leq 1\) (Eq. (49)), similar to the case of a CE-phase in the inner binary (Section 3.4.4). During mass transfer, the effects of wind mass losses on the inner and outer orbit (Eqs. (44) and (45)) are taken into account simultaneously.

Mass transfer initiated by the outer star

Mass transfer from an outer star onto a binary is an intriguing new evolutionary pathway opened up by stellar triples. Even though it is relatively common for triples (occurring in about 1% of them), it is a complex process that has not been studied in much detail. The study of de Vries et al. (2014) focuses on two triples undergoing mass transfer initiated by the outer star; however, that study is limited in parameter space (as they study only two triples) and in time (as the hydrodynamical simulations they perform are expensive). For this reason, we have not implemented the process of outer mass transfer, and currently the code is stopped when the outer star fills its Roche lobe.

Supernova explosions in TrES

During a SN event, the star collapses on a dynamical timescale, for which the secular approach is not valid. The ODE solver routine is therefore disabled, and the orbital evolution due to the SN event is solved for in a separate function, as detailed below. The amount of mass lost in the SN-event, and the type of remnant that is left behind, are determined by the stellar evolution code SeBa. The effect of the SN ejecta on the companion stars (e.g. compositions and velocities) is usually small (e.g. Kalogera 1996; Hirai et al. 2014; Liu et al. 2015b; Rimoldi et al. 2015), unless the pre-supernova separation between the stars is smaller than a few solar radii. For this reason, we assume the dynamics of the companion stars are not affected by the expanding shell of material, and that the companion stars neither accrete nor are stripped of mass.

We make the common assumption that the SN takes place instantaneously. As a result, the positions of the stars just before and after the SN are not changed. As TrES is based on orbit-averaged techniques, we do not follow the position of the stars along the orbit as a function of time. In order to obtain the position at the moment of the SN, we randomly sample the mean anomaly from a uniform distribution. The natal kick is randomly drawn from one of three distributions (Paczynski 1990, Hansen and Phinney 1997 or Hobbs et al. 2005) in a random direction. Our method simply consists of two coordinate transformations (thus we do not use Eqs. (21), (33), (63), (70), (73), nor Eq. (78) directly). We convert from our standard orbital parameters i and \(a,e,g,h\) for the inner and outer orbit to orbital vectors, i.e.
the eccentricity vector ê and angular momentum vector \(\hat{J}_{\mathrm{b}}\) for both orbits. After the mass of the dying star is reduced and the natal kick is added to its velocity, we convert back to the orbital elements. The reason for performing two coordinate transformations, to orbital vectors and back, is that the orbital elements in the code are defined with respect to the 'invariable' plane, i.e. in a frame defined by the total angular momentum. In the case of a SN, however, the total orbital angular momentum vector is not generally conserved, which implies that the coordinate frame changes after the SN. In contrast, the orbital vectors are defined with respect to an arbitrary inertial frame that is not affected by the SN. The post-SN orbital vectors are transformed to the orbital elements in the new 'invariable' plane, i.e. defined with respect to the new total angular momentum vector. An additional advantage of the double coordinate transformation is that the pre-supernova orbit can be circular as well as have an arbitrary eccentricity.

If the post-supernova eccentricity of an orbit is larger than one, the orbit is unbound. We distinguish four situations:

Both the inner and the outer orbit remain bound, and the system remains a triple. The simulation of the evolution of the triple is continued.

The inner orbit remains bound and the outer orbit becomes unbound; the outer star and the inner binary remain as separated systems. We assume the outer star does not dynamically affect the inner binary. With the default options in TrES, the simulation is stopped here unless the user specifies otherwise.

Both the inner and the outer orbit become unbound, and the stars evolve further as isolated stars. As in the previous scenario, by default the simulation is stopped unless the user specifies otherwise.

The inner orbit becomes unbound, but at the moment just after the SN the outer star remains bound to the inner system. In this case, TrES cannot simulate the evolution of this system further. The evolution of these systems should be followed up with an N-body code.

In Section 2.2 and Section 2.3, we discussed several physical processes and how they affect the long-term evolution of inner binaries and triple systems. Here we illustrate those processes by simulating the evolution of a few realistic triple star systems. For example, the evolution of Gliese 667 displays Lidov-Kozai cycles, and the evolution of Eta Carinae illustrates the effect of precession and stellar winds. The evolutionary pathways of the triple systems are simulated with the new triple code TrES, such that the examples below also demonstrate the capabilities of TrES.

Gliese 667

Gliese 667 is a nearby triple system in the constellation of Scorpius. The orbital parameters of the system are given in Table 3, based on Tokovinin (2008). The outer star is in an orbit with \(a_{\mathrm{out}}>230~\mbox{AU}\), but for simplicity we will assume \(a_{\mathrm{out}}=250~\mbox{AU}\) in the following. The outer star is also a planetary host star; up to five planets have been claimed, of which two have been confirmed so far (Feroz and Hobson 2014). The orbit of the planet Gliese 667 Cb lies just within the habitable zone, which makes this planet a prime candidate in the search for liquid water and life on other planets (Anglada-Escudé et al. 2012). In the following, we will neglect the dynamical effect of the presence of planets on the evolution of the triple.
Table 3 — Initial conditions for the three triple systems discussed in Section 4.

Gliese 667 is a prime example of a triple system undergoing Lidov-Kozai cycles. Figures 3 and 4 show the evolution of the inner eccentricity and mutual inclination for the first 3 Myr after the birth of the system, under the assumption of \(e_{\mathrm{out}} = 0.5\), \(i=90^{\circ}\), \(g_{\mathrm{in}}=0.1\) and \(g_{\mathrm{out}}=0.5\). For different values of the outer eccentricity, arguments of pericenter, and mutual inclination, the general behaviour of Figures 3-5 remains the same, but the timescale and amplitude of the Lidov-Kozai cycles vary (to the point where the cycles are not noticeable).

Figures 3 and 4 show the cyclic behaviour of eccentricity and inclination in Gliese 667. When the eccentricity is at its maximum, the inclination between the orbits is minimal. The timescale of the oscillations is a few 0.1 Myr, which is consistent with the order-of-magnitude approximation of 0.4 Myr of Eq. (24). The octupole parameter \(\epsilon_{\mathrm{oct}} <0.001 \), which indicates that the eccentric Lidov-Kozai mechanism is not of much importance here.

Figure 3 — Inner eccentricity evolution. The evolution of the inner eccentricity \(e_{\mathrm{in}}\) as a function of time for the first 3 Myr of the evolution of Gliese 667. The figure shows that Gliese 667 is susceptible to Lidov-Kozai cycles. The initial conditions are given in Table 3.

Figure 4 — Mutual inclination evolution. The evolution of the mutual inclination i as a function of time for the first 3 Myr of the evolution of Gliese 667. This triple shows the cyclic behaviour in inclination and inner eccentricity (Figure 3) related to Lidov-Kozai cycles.

Figure 5 — Inner semi-major axis evolution. The evolution of the inner semi-major axis as a function of time for the first 3 Myr of the evolution of Gliese 667. The figure shows a decreasing semi-major axis due to the combination of Lidov-Kozai cycles with tidal friction, i.e. LKCTF. Note the small scale on the y-axis, where \(a_{0}=12.599999999\).

For the same timescale as Figures 3 and 4, Figure 5 shows the evolution of the inner-orbital semi-major axis \(a_{\mathrm{in}}\). The change in the inner semi-major axis of Gliese 667 is negligibly small; nevertheless, the figure illustrates the effect of Lidov-Kozai cycles with tidal friction, or LKCTF. When the inner eccentricity is at its maximum, and the inner stars are at their closest approach during pericenter passage, the inner semi-major axis decreases due to tidal forces. In this way, LKCTF could lead to RLOF in, or a merger of, the inner system.

The long-term evolution of this system is analogous to that shown for the first 3 Myr. After 10 Gyr (approximately the age of the Galactic thin disk, e.g. Oswalt et al. 1996; del Peloso and da Silva 2005; Salaris 2009), the system is still detached. The orbital separation has decreased by only ∼7 km. The stellar masses are sufficiently low that the stars do not evolve off the MS within 10 Gyr, and as such do not experience a significant growth in radius that could lead to RLOF. Even taking into account the low metallicity of Gliese 667 A (Cayrel de Strobel et al. 2001), and the corresponding speed-up of the evolutionary timescales, a \(0.73M_{\odot}\) star is not massive enough to evolve off the MS within 10 Gyr. Furthermore, the stars are not massive enough to lose a considerable amount of matter in stellar winds, such that the triple is not affected dynamically by wind mass losses.
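The regime estimates quoted above are easy to reproduce. The sketch below evaluates the octupole parameter and a commonly used order-of-magnitude estimate of the quadrupole Lidov-Kozai timescale (e.g. Kinoshita and Nakai 1999; Antognini 2015); the exact prefactor may differ from that of Eq. (24), and the masses of the secondary and tertiary adopted below are assumptions made for illustration, since only the \(0.73M_{\odot}\) primary mass is quoted in the text.

```python
import math

def kepler_period_yr(a_au, m_tot_msun):
    """Orbital period in years, for a in AU and masses in solar masses."""
    return math.sqrt(a_au**3 / m_tot_msun)

def epsilon_oct(m1, m2, a_in, a_out, e_out):
    """Octupole parameter |eps_oct| = |m1-m2|/(m1+m2) * (a_in/a_out) * e_out/(1-e_out^2)."""
    return abs(m1 - m2) / (m1 + m2) * (a_in / a_out) * e_out / (1.0 - e_out**2)

def t_lk_quad_yr(m1, m2, m3, a_in, a_out, e_out):
    """Order-of-magnitude quadrupole Lidov-Kozai timescale in years."""
    p_in = kepler_period_yr(a_in, m1 + m2)
    p_out = kepler_period_yr(a_out, m1 + m2 + m3)
    return (8.0 / (15.0 * math.pi)) * ((m1 + m2 + m3) / m3) \
        * (p_out**2 / p_in) * (1.0 - e_out**2)**1.5

# Illustrative numbers in the spirit of the Gliese 667 example
# (m2 and m3 are assumed values; a_in ~ 12.6 AU, cf. the y-axis scale of Figure 5):
m1, m2, m3 = 0.73, 0.69, 0.31
a_in, a_out, e_out = 12.6, 250.0, 0.5
print(epsilon_oct(m1, m2, a_in, a_out, e_out))             # ~1e-3: octupole term unimportant
print(t_lk_quad_yr(m1, m2, m3, a_in, a_out, e_out) / 1e6)  # ~0.1-0.2 Myr per cycle
```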
Eta Carinae

Eta Carinae is a binary system with two massive stars (\(m_{1}\sim90M_{\odot }\) and \(m_{2}\sim30M_{\odot }\)) in a highly eccentric orbit (\(e=0.9\)) with a period of 5.5 yr (Damineli et al. 1997). Both stars are expected to explode as supernovae at the end of their stellar lives. Eta Carinae is infamous for its 'Great Eruption'. From 1837 to 1857 it brightened considerably, and in 1843 it even became the second brightest star in the sky (de Vaucouleurs and Eggen 1952). The system is surrounded by the Homunculus Nebula, which was formed during the Great Eruption and heavily obscures the binary stars (Humphreys and Davidson 1999). The kinetic energy of the Homunculus Nebula is large, i.e. \(10^{49.7}\) erg (Smith et al. 2003), and comes close to that of normal supernovae. However, as both stars have survived the Great Eruption, Eta Carinae is often referred to as a 'supernova imposter'. The cause of the Great Eruption remains unexplained. A massive outflow, as during the Great Eruption, can be driven by a strong interaction between two stars (e.g. Harpaz and Soker 2009; Smith 2011) or a merger of two stars (e.g. Soker and Tylenda 2003).

Recently, Portegies Zwart and van den Heuvel (2016) tested the hypothesis that the Eta Carinae system formed from the merger of a massive inner binary of a triple system. According to their model, the merger was triggered by the gravitational interaction with a massive third companion star, which is the current \({\sim}30M_{\odot}\) companion star in Eta Carinae. Here, we simulate the evolution of their favourite model with the initial conditions as given in Table 3. Furthermore, Portegies Zwart and van den Heuvel (2016) assume that the argument of periastron does not affect the tidal evolution, and therefore we arbitrarily set \(g_{\mathrm{in}} = 0.1 \) and \(g_{\mathrm{out}} = 0.5\).

During the early evolution of the triple, the system experiences Lidov-Kozai cycles with a timescale of a few kyr, see Figure 6. The octupole parameter \(\epsilon_{\mathrm{oct}} = 0.068 >0.01\), which indicates that the system is in the eccentric Lidov-Kozai regime, with a timescale of order \(t_{\mathrm{oct}}\sim t_{\mathrm{Kozai}}/\epsilon _{\mathrm{oct}} \sim\) a few tens of kyr.

Figure 6 — Inner eccentricity evolution. The evolution of the inner eccentricity \(e_{\mathrm{in}}\) as a function of time for the first 20,000 yr of the evolution of the triple progenitor of Eta Carinae. The figure shows that the system undergoes Lidov-Kozai cycles. The initial conditions of the triple progenitor are given in Table 3.

The evolution of the semi-major axis shows two characteristics in Figure 7. Firstly, as for Gliese 667, the system is affected by LKCTF, i.e. the semi-major axis shrinks periodically due to strong tides at pericenter when the inner eccentricity is at its maximum. Secondly, as the primary star is very massive, strong winds remove large amounts of mass while the star is still on the MS (Section 2.1.3). The dynamical effect of such a fast wind is that the inner and outer orbits expand (Section 2.2.1). Initially the inner orbit expands faster than the outer orbit, as expected for a strong wind from the inner orbit (Section 2.3.1). However, due to the combination of stellar winds with LKCTF for the Eta Carinae progenitor, its outer orbit expands faster, and the triple becomes more dynamically stable.
Figure 7 — Semi-major axis evolution. The evolution of the semi-major axes as a function of time for the first 20,000 yr of the evolution of the triple progenitor of Eta Carinae. The figure shows the characteristic increase in the semi-major axes due to stellar wind mass loss, and the periodic decrease in the inner semi-major axis when the eccentricity is high, due to LKCTF.

On a longer timescale, the triple moves from the eccentric Lidov-Kozai regime to the regular regime (\(\vert \epsilon_{\mathrm{oct}} \vert \lesssim 0.01\)) as the inner binary loses matter and angular momentum in the stellar winds (Figure 8). After 3 Myr, the octupole parameter has decreased from \(\epsilon_{\mathrm{oct}} = 0.068\) initially to \(\epsilon_{\mathrm{oct}} = 0.001\).

Figure 8 — Radius and mass evolution. The evolution of the radii (dashed lines) and masses (dash-dotted lines) as a function of time for the stars in the triple progenitor of Eta Carinae. The primary star of initial mass \(110M_{\odot}\) is shown in blue. The secondary and tertiary star, both of initial mass \(30M_{\odot}\), are shown in red. The Roche lobes of the primary and secondary are overplotted (blue solid line and red solid line, respectively). The Roche lobe of the tertiary is about 2,000-3,000\(R_{\odot}\). After about 3 Myr, the primary star fills its Roche lobe.

The long-term evolution of the progenitor candidate of Eta Carinae shows another interesting feature in Figures 9 and 10, i.e. the Lidov-Kozai cycles are quenched. Here the precession due to the distortion and rotation of the stars dominates over the precession caused by the Lidov-Kozai mechanism. As a result, the amplitudes of the cycles in inner eccentricity and mutual inclination are reduced. After approximately 1.5 Myr, the evolution of the system is completely dominated by tides, i.e. the system circularizes and the inner semi-major axis decreases accordingly (Figure 11). After circularization of the inner binary has been achieved (\({\gtrsim}2~\mbox{Myr}\)), the inner semi-major axis increases again due to the stellar winds from the inner binary. The evolution of the Eta Carinae progenitor (Figures 6-11) illustrates that both three-body dynamics and stellar evolution matter, and neither can be neglected.

Figure 9 — Inner eccentricity evolution. The evolution of the inner eccentricity on a timescale of 3 Myr. The timescale of the Lidov-Kozai cycles is a few kyr, such that the lines overlap in Figure 9. As the system evolves, the amplitude of the cycles reduces, until the system circularizes.

Figure 10 — Mutual inclination evolution. The evolution of the mutual inclination on the same timescale as Figure 9. The variation in the inclination decreases with time; as the stars evolve, their radii increase, and tidal effects become stronger (Eqs. (10)-(12)).

Figure 11 — Inner semi-major axis evolution. The evolution of the inner semi-major axis on the same timescale as Figures 9 and 10. In the first Myr, the evolution of the system is dominated by the Lidov-Kozai mechanism and the inner semi-major axis remains more or less constant. In the following Myr, the system circularizes and the semi-major axis decreases by a factor of 2. After synchronisation and circularisation have been reached, the inner semi-major axis increases due to the ejection of stellar winds.

After about 3 Myr, the primary star fills its Roche lobe and initiates a mass transfer phase (Figure 8). The inner semi-major axis is about 0.5 AU (109\(R_{\odot}\)), and the massive primary has increased in size to 51\(R_{\odot}\).
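The numbers quoted at the onset of RLOF can be checked with the widely used Roche-lobe approximation of Eggleton (1983) for a circularized orbit; this is a consistency check, not necessarily the exact prescription used in TrES.

```python
import math

def roche_lobe_radius(a, m_donor, m_companion):
    """Eggleton (1983) volume-equivalent Roche-lobe radius for a circular
    orbit: R_L/a = 0.49 q^(2/3) / (0.6 q^(2/3) + ln(1 + q^(1/3))),
    with q = m_donor / m_companion.  Any consistent units for a."""
    q13 = (m_donor / m_companion) ** (1.0 / 3.0)
    return a * 0.49 * q13**2 / (0.6 * q13**2 + math.log(1.0 + q13))

# At RLOF the text quotes a_in ~ 109 Rsun, m1 ~ 75 Msun and m2 ~ 29 Msun:
print(roche_lobe_radius(109.0, 75.0, 29.0))  # ~51 Rsun, matching the primary's quoted radius
```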
Even though the inner semi-major axis (and the Roche lobe) of the primary is ∼10% smaller around 2 Myr, there is no RLOF yet, as the radius of the primary star is then ∼50% smaller. At RLOF, the masses of the inner stars have been reduced from the initial \(110M_{\odot}\) and \(30M_{\odot}\) to \(75M_{\odot}\) and \(29M_{\odot}\), and both stars are still on the MS. The mass transfer phase proceeds in an unstable manner (Section 3.4.3), and a common-envelope develops that leads to a merger of the inner stars (Section 3.4.4). A new star is formed, still on the MS, with a mass of \(104M_{\odot}\). We assume this merger proceeds conservatively, and therefore the outer orbit is not affected, such that \(a_{\mathrm{out}} \approx 32~\mbox{AU}\), \(e_{\mathrm{out}} \approx 0.2\) and \(P\approx 15.7~\mbox{yr}\). Prior to the merger, the outer semi-major axis has increased from the initial value of 25 AU to 32 AU due to the stellar winds.

The resulting binary is similar to the current Eta Carinae system in mass and orbital period. It is not an exact match, as the evolution of this specific triple is shown for illustrative purposes and has not been fitted to match the currently observed system. A progenitor study of Eta Carinae to improve the match is beyond the scope of this paper. We note that the current eccentricity of our remaining binary is low, i.e. \(e \approx 0.2\), compared to the observed \(e=0.9\). In our simulations, the post-merger eccentricity is equal to the pre-merger outer eccentricity. During the evolution of the Eta Carinae progenitor, the outer eccentricity has remained roughly equal to its initial value of \(e_{\mathrm{out}} = 0.2\). The outer eccentricity is not affected strongly by stellar evolution or Lidov-Kozai cycles. If we study the evolution of an alternative progenitor similar to the favourite model of Portegies Zwart and van den Heuvel (2016), but with \(e_{\mathrm{out}} = 0.9\), the system is dynamically unstable at birth. In order for the triple to be dynamically stable, \(e_{\mathrm{out}} \lesssim 0.81\) is required for the standard \(i=60^{\circ}\), or up to \(e_{\mathrm{out}} \lesssim 0.84\) for \(i=90^{\circ}\). The evolution of the dynamically stable systems with \(e_{\mathrm{out}} \lesssim 0.7\) shows similar behaviour to that of our initial system (Table 3), and the merger leads to a similar binary as in the case of our initial system. For dynamically stable systems with higher outer eccentricities, the merger time decreases strongly, and the inner system does not reach circularization before the merger takes place. The merger product is more massive, as less mass is lost in stellar winds. In the simulation of Portegies Zwart and van den Heuvel (2016), the outer orbit has an eccentricity \(e_{\mathrm{out}}=0.2\) initially, but becomes highly eccentric in the merger phase due to asymmetric mass loss.

MIEK-mechanism

In this section, we illustrate the dynamical effect of mass loss on a triple system from the point of view of transitions between dynamical regimes, e.g. the regime without Lidov-Kozai cycles, with regular, or with eccentric Lidov-Kozai behaviour. Here, we focus on the transition from the regular Lidov-Kozai regime to the eccentric regime, i.e. where the octupole term is significant. This transition has been labelled 'mass-loss induced eccentric Kozai' or MIEK (Section 2.3.3). The canonical example of MIEK-evolution is a triple with the initial conditions as given in Table 3 (Shappee and Thompson 2013; Michaely and Perets 2014).
To reproduce the experiment of Shappee and Thompson (2013), we simulate the evolution of this triple with TrES including three-body dynamics and wind mass losses, but without stellar evolution of the radius, luminosity, stellar core mass, etc. Starting from the birth of the triple system, its orbit is susceptible to Lidov-Kozai cycles (Figures 13 and 14). The timescale of the cycles is approximately 0.1 Myr. The cycles are in the regular regime, i.e. \(\epsilon_{\mathrm{oct}} = 0.002\). As time passes, the stars evolve. The primary star evolves off the MS at 49 Myr, and after 55 Myr it reaches the AGB with a mass of \(6.9M_{\odot}\) (Figure 12). Subsequently, it quickly loses a few solar masses in stellar winds, before it becomes an oxygen-neon white dwarf of \(1.3M_{\odot}\) at 56 Myr. The outer orbit widens to about \(a_{\mathrm{out}}\sim350~\mbox{AU}\) due to the wind mass losses.

Figure 12 — Radius and mass evolution. The evolution of the radii (dashed lines) and masses (dash-dotted lines) as a function of time for the stars in a triple that transitions from a region with regular to eccentric Lidov-Kozai behaviour, i.e. MIEK. The initial conditions of the triple are given in Table 3, based on Shappee and Thompson (2013) and Michaely and Perets (2014). The primary star is shown in blue, the secondary in green and the tertiary in red. The Roche lobes of the primary and secondary are overplotted (blue solid line and green solid line, respectively). The Roche lobe of the tertiary is 6,500-8,000\(R_{\odot}\). After about 55.5 Myr, the primary star fills its Roche lobe.

The wind mass loss allows the triple to transition to the eccentric Lidov-Kozai regime at about 56 Myr, i.e. \(\epsilon_{\mathrm{oct}} = 0.045\) at this time. The system is driven into extremely high eccentricities, and the amplitude of the Lidov-Kozai cycle in inclination also increases. The evolution of the system as shown in Figures 13 and 14 is qualitatively similar to that found by Shappee and Thompson (2013) based on N-body calculations and by Michaely and Perets (2014) based on the secular approach. In these studies, stellar winds are implemented ad hoc with a constant mass loss rate for a fixed time interval starting at a fixed time. Moreover, the system is followed for multiple Myr after the mass loss event in both papers, such that the inclination rises above \(90^{\circ}\), and the inner and outer orbits become retrograde with respect to each other. In our case the simulation is stopped before such a flip in inclination develops, as RLOF is initiated in the inner binary when the inner eccentricity is high.

Figure 13 — Inner eccentricity evolution. The evolution of the inner eccentricity \(e_{\mathrm{in}}\) as a function of time for a triple that transitions from a region with regular to eccentric Lidov-Kozai behaviour, i.e. MIEK. The initial conditions of the triple are given in Table 3, based on Shappee and Thompson (2013) and Michaely and Perets (2014). For this figure, the stars are not allowed to evolve in TrES, except for wind mass losses. If stellar evolution is taken into account fully, RLOF initiates at 55.5 Myr, before the transition to MIEK can develop.

Figure 14 — Mutual inclination evolution. The evolution of the mutual inclination i as a function of time for the same triple as in Figure 13. The triple transitions from a region with regular to eccentric Lidov-Kozai behaviour at 56 Myr.

However, if we fully include stellar evolution, as in the standard version of TrES, the triple is not driven into the octupole regime.
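The wind-driven transition in the experiment described above can be reproduced with a back-of-the-envelope estimate: for a fast, spherically symmetric and fully non-conservative wind, each orbit widens such that \(a\,m_{\mathrm{tot}}\) is conserved, while the eccentricities remain unchanged. The numbers below (masses, separations and outer eccentricity) are assumptions made here, chosen to be consistent with the canonical system of Shappee and Thompson (2013) quoted in the text.

```python
def widen_by_fast_wind(a, m_tot_old, m_tot_new):
    """Adiabatic, fully non-conservative (fast, spherical) wind mass loss:
    a * m_tot is conserved and the eccentricity is unchanged."""
    return a * m_tot_old / m_tot_new

def epsilon_oct(m1, m2, a_in, a_out, e_out):
    """Octupole parameter |eps_oct| = |m1-m2|/(m1+m2) * (a_in/a_out) * e_out/(1-e_out^2)."""
    return abs(m1 - m2) / (m1 + m2) * (a_in / a_out) * e_out / (1.0 - e_out**2)

# Assumed canonical MIEK triple (cf. Shappee & Thompson 2013):
m1, m2, m3 = 7.0, 6.5, 6.0             # Msun
a_in, a_out, e_out = 10.0, 250.0, 0.7  # AU
print(epsilon_oct(m1, m2, a_in, a_out, e_out))               # ~0.002: regular regime

# The primary sheds its envelope and becomes a 1.3 Msun ONe white dwarf;
# both orbits widen in response to the (fast) wind mass loss.
m1_new = 1.3
a_in_new = widen_by_fast_wind(a_in, m1 + m2, m1_new + m2)
a_out_new = widen_by_fast_wind(a_out, m1 + m2 + m3, m1_new + m2 + m3)
print(round(a_out_new))                                      # ~350 AU, as quoted in the text
print(epsilon_oct(m1_new, m2, a_in_new, a_out_new, e_out))   # ~0.045: octupole regime (MIEK)
```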
On the AGB, the radius of a \(7M_{\odot}\) star can reach values as large as \({\sim}1\text{,}000R_{\odot}\) (Figure 12), and therefore RLOF initiates before the MIEK-mechanism takes place. Even if the inner binary were an isolated binary, RLOF would occur for initial separations of \(a<15~\mbox{AU}\). For triples, RLOF can occur for larger initial (inner) separations, as the Lidov-Kozai cycles can drive the inner eccentricity to higher values. For wider inner binaries, i.e. \(a_{\mathrm{in}} > 16~\mbox{AU}\), the MIEK-mechanism does not occur either, as the triple is dynamically unstable. This example indicates that the parameter space for the MIEK-mechanism to occur is smaller than previously thought, and so it may occur less frequently. Moreover, this example demonstrates the importance of taking into account stellar evolution when studying the evolution of triples.

For the canonical triple with \(a_{\mathrm{in}} =10~\mbox{AU}\) and \(a_{\mathrm {out}}=250~\mbox{AU}\), RLOF occurs at 55.5 Myr, just a few 0.1 Myr after the primary star arrives on the AGB. In that time, the radius of the primary has increased by a factor ∼3, and tides can no longer be neglected. The tidal forces act to circularize and synchronize the inner system, such that \(e_{\mathrm{in}}=0\) at RLOF. The eccentric Kozai-mechanism does not play a role at this point, i.e. \(\epsilon_{\mathrm{oct}} = 0.0008\). The mass of the core has not had enough time to grow to the same size as in the example without RLOF, i.e. the core mass is \(1.25M_{\odot}\) instead of \(1.3M_{\odot}\). Stellar winds have reduced the mass of the primary star to \(6.8M_{\odot}\). As the primary has a convective envelope and is more massive than the secondary, a CE-phase develops.

We envision three scenarios, based on the different models for CE-evolution (Sections 2.2.5 and 3.4.4). First, the CE-phase leads to a merger of the inner binary when the inner orbit shrinks strongly, as for the α-model of CE-evolution with \(\alpha\lambda_{\mathrm{ce}} = 0.25 \) (Section 2.2.5). Second, the CE-phase leads to strong shrinkage of the orbit, but not enough for the inner stars to merge. In this scenario, the envelope of the donor star is completely removed from the system, and the outer orbit widens to about 350 AU, under the assumption that the mass removal affects the outer orbit as a fast wind. Assuming \(\alpha\lambda _{\mathrm{ce}} = 2 \) (Eq. (14), Section 2.2.5), \(a_{\mathrm{in}}\sim0.33~\mbox{AU}\) and \(\epsilon_{\mathrm{oct}} = 0.0009\) after the CE-phase. In this scenario, the triple does not enter the octupole regime, and the MIEK-mechanism does not manifest itself. Lastly, the CE-phase does not lead to a strong shrinkage of the inner orbit, as for the γ-model of CE-evolution with \(\gamma=1.75 \) (Eq. (17), Section 2.2.5). The inner semi-major axis even increases from 6.0 to 7.3 AU. In this scenario, \(\epsilon_{\mathrm {oct}} = 0.02\), such that the perturbations from the octupole level become significant. In this last scenario, the triple undergoes the MIEK mechanism, despite and because of the mass transfer phase.

Discussion and conclusion

In this paper, we discuss the principal complexities of the evolution of hierarchical triple star systems. Hierarchical triples are fairly common and potentially long-lived, which allows for their evolution to be affected by (secular) three-body dynamics, stellar evolution and their mutual influences.
We present an overview of single star evolution and binary evolution with a focus on those aspects that are relevant for triple evolution. Subsequently, we describe the processes that are unique to systems of higher multiplicity than binaries. In some cases, the evolution of a hierarchical triple can be adequately described by the evolution of the inner and outer binary separately. In other cases, the presence of the outer star significantly alters the evolution of the inner binary. Several examples of the latter are given in detail. These examples also show the richness of the regime in which both three-body dynamics and stellar evolution play a role simultaneously. Moreover, the examples demonstrate the importance of coupling three-body dynamics with stellar evolution.

Additionally, we present heuristic recipes for the principal processes of triple evolution. These descriptions are incorporated in a public source code, TrES, for simulating the evolution of hierarchical, coeval, dynamically stable stellar triples. We discuss the underlying (sometimes simplifying) assumptions of the heuristic recipes. Some recipes are exact or adequate (e.g. gravitational wave emission, wind mass loss or Lidov-Kozai cycles), and others are admittedly crude (e.g. mass transfer). The recipes are based on simple assumptions and should be seen as a starting point for discussion and further study. When more sophisticated models of the processes that influence triple evolution become available, these can be included in TrES, and subsequently their effect on triple populations can be studied. For now, the accuracy levels of the heuristic recipes are sufficient to initiate the systematic exploration of triple evolution (e.g. populations, evolutionary pathways), while taking into account three-body dynamics and stellar evolution consistently. We note that simulating through a phase of stable mass transfer in an eccentric inner orbit is currently beyond the scope of the project. However, appropriate methodology for eccentric mass transfer (e.g. Sepinsky et al. 2007b, 2009; Dosopoulou and Kalogera 2016b) has been developed, which we aim to implement at a later stage.

The triple evolution code TrES is based on the secular approach to solve for the dynamics of the triple system. It has been shown that this approach is in good agreement with N-body simulations of systems in which the secular approximations are valid (Naoz et al. 2013; Hamers et al. 2013; Michaely and Perets 2014). The advantage of the secular approach is that the computational time is orders of magnitude shorter than for an N-body simulation. The secular approach, however, is not valid when the evolutionary processes occur on timescales shorter than the dynamical timescale of the system. In these cases, we either stop the simulation (e.g. during a dynamical instability) or simulate the process as an instantaneous event (such as a common-envelope phase). Lastly, the secular approximation becomes inaccurate when the triple hierarchy is weaker (e.g. Antonini and Perets 2012; Katz and Dong 2012; Antognini et al. 2014; Bode and Wegg 2014; Luo et al. 2016). In this case, the timescale of the perturbation from the outer star onto the inner binary during its periastron passage is comparable to the dynamical timescale of the inner binary. This can result in extremely high eccentricities and collisions between the stars in the inner binary.
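For reference, the breakdown of the orbit-averaged approximation can be flagged with the criterion of Antonini et al. (2014) quoted in the section on three-body dynamics; the helper below is a minimal sketch with illustrative names, not part of TrES itself.

```python
import math

def orbit_average_valid(e_in, m1, m2, m3, a_in, a_out, e_out):
    """Validity of the orbit-averaged equations (Antonini et al. 2014):
    sqrt(1 - e_in) >~ 5*pi * m3/(m1 + m2) * [a_in / (a_out*(1 - e_out))]^3."""
    rhs = 5.0 * math.pi * (m3 / (m1 + m2)) * (a_in / (a_out * (1.0 - e_out)))**3
    return math.sqrt(1.0 - e_in) >= rhs

# A strongly hierarchical triple passes the test even at high inner eccentricity:
print(orbit_average_valid(0.99, m1=1.0, m2=1.0, m3=1.0,
                          a_in=1.0, a_out=20.0, e_out=0.5))  # True
```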
With the secular approach, as in TrES, these occurrences are probably underestimated in systems with moderate hierarchies (see also Naoz et al. 2016).

TrES is written in the Astrophysics Multipurpose Software Environment, or AMUSE (Portegies Zwart et al. 2009; Portegies Zwart 2013), which is based on Python. AMUSE, including TrES, can be downloaded for free at amusecode.org and github.com/amusecode/amuse. Due to the nature of AMUSE, the triple code can easily be extended to include a detailed stellar evolution code or a direct N-body code. Regarding the latter, this is interesting in the context of triples with moderate hierarchies, where the orbit-averaged technique breaks down (as discussed above). Furthermore, it is relevant for triples that become dynamically unstable during, and as a consequence of, their evolution. For example, Perets and Kratter (2012) show that triples that become dynamically unstable due to their internal wind mass losses are responsible for the majority of stellar collisions in the Galactic field. Consequently, the majority of stellar collisions do not take place between two MS stars, but involve an evolved star of giant dimensions. Another interesting prospect is the inclusion of triples in simulations of cluster evolution, where triples are often not taken into account, neither in the initial population, nor through dynamical formation, nor via a consistent treatment of the evolution of triple star systems. However, dynamical encounters involving triples are common, reaching or even exceeding the encounter rate involving solely single or binary stars, in particular in low- to moderate-density star clusters (Leigh and Sills 2011; Leigh and Geller 2013). Therefore, the evolution of triples might not only be important for the formation and destruction of compact or exotic binaries, but also for the dynamical evolution of clusters in general.

Overshooting refers to a chemically mixed region beyond the boundary of the convective core as predicted by basic stellar evolutionary theory, i.e. the Schwarzschild criterion (e.g. Maeder and Meynet 1991; Massevitch et al. 1979; Stothers 1963). A possible mechanism is convection carrying material beyond the boundary due to residual velocity. For the effects of overshooting on stellar evolution, see e.g. Bressan et al. (2015).

For alternative mechanisms, see Goodman and Dickson (1998), Savonije and Witte (2002), Book Review (2000).

Unless \(q \lesssim 0.7\), such that the orbit and the Roche lobe expand significantly upon mass transfer (e.g. Eqs. (18)-(19)).

Stable mass transfer is one of the proposed evolutionary pathways for the formation of blue stragglers (see Davies 2015 for a review). These are MS stars in open and globular clusters that are more luminous and bluer than other MS stars in the same cluster.

See for example Eqs. (8) and (9). The timescales for GW emission are shorter for binaries with small separations and large eccentricities.

A balance is also possible between the eccentricity excitations of the Lidov-Kozai mechanism and other sources of precession, see Section 2.3.4.

Here the L3 Lagrangian point is located behind the inner binary on the line connecting the centres of mass of the outer star and the inner binary.

There is an inconsistency in the meaning of η between Eqs. (4) and (5) in de Vries et al. (2014), denoted as β in their equations. In their fits η represents the ratio \(\frac{\Delta J}{J_{b}}\), where ΔJ is the amount of angular momentum that is lost from the system, and \(J_{b}\) the orbital angular momentum.
It does not represent \(\frac{\Delta J}{J_{a}}\) where \(J_{a}\) is the angular momentum of the accretor star. Hamers et al. (2015) derive the orbit-averaged Hamiltonian expressions for the '3+1' as well as the '2+2' configuration. Note that SeBa is not used to model binary evolution in TrES. Note that there is an error in Eq. (42) of Hurley et al. (2002). The factor \(MR^{2}/a^{5}\) should be raised to the power 1/2, which means that \(k_{\mathrm{am}}/\tau_{\mathrm{TF}} \propto R \) instead of \(k_{\mathrm{am}}/\tau_{\mathrm{TF}} \propto R^{2} \). And to a lesser degree also the accretion efficiency and the corresponding angular momentum loss mode (e.g. Soberman et al. 1997; Toonen et al. 2014). Aarseth, SJ: Formation and evolution of hierarchical systems. In: Allen, C, Scarfe, C (eds.) The Environment and Evolution of Double and Multiple Stars. Proceedings of IAU Colloquium 191. Revista Mexicana de Astronomia Y Astrofisica Conference Series, vol. 21, pp. 156-162 (2004) Aarseth, SJ, Mardling, RA: The formation and evolution of multiple star systems. In: Podsiadlowski, P, Rappaport, S, King, AR, D'Antona, F, Burderi, L (eds.) Evolution of Binary and Multiple Star Systems. Astronomical Society of the Pacific Conference Series, vol. 229, pp. 77-90 (2001). astro-ph/0011514 Anglada-Escudé, G, Arriagada, P, Vogt, SS, Rivera, EJ, Butler, RP, Crane, JD, Shectman, SA, Thompson, IB, Minniti, D, Haghighipour, N, Carter, BD, Tinney, CG, Wittenmyer, RA, Bailey, JA, O'Toole, SJ, Jones, HRA, Jenkins, JS: A planetary system around the nearby M dwarf GJ 667C with at least one super-earth in its habitable zone. Astrophys. J. 751, 16 (2012). doi:10.1088/2041-8205/751/1/L16. 1202.0446 ADS Article Google Scholar Antognini, JM, Shappee, BJ, Thompson, TA, Amaro-Seoane, P: Rapid eccentricity oscillations and the mergers of compact objects in hierarchical triples. Mon. Not. R. Astron. Soc. 439, 1079-1091 (2014). doi:10.1093/mnras/stu039. 1308.5682 Antognini, JMO: Timescales of Kozai-Lidov oscillations at quadrupole and octupole order in the test particle limit. Mon. Not. R. Astron. Soc. 452, 3610-3619 (2015). doi:10.1093/mnras/stv1552. 1504.05957 Antonini, F, Murray, N, Mikkola, S: Black hole triple dynamics: a breakdown of the orbit average approximation and implications for gravitational wave detections. Astrophys. J. 781, 45 (2014). doi:10.1088/0004-637X/781/1/45. 1308.3674 Antonini, F, Perets, HB: Secular evolution of compact binaries near massive black holes: gravitational wave sources and other exotica. Astrophys. J. 757, 27 (2012). doi:10.1088/0004-637X/757/1/27. 1203.2938 Bear, E, Soker, N: Planetary nebulae that might have been shaped by a triple stellar system. ArXiv e-prints (2016). 1606.08149 Blaes, O, Lee, MH, Socrates, A: The Kozai mechanism and the evolution of binary supermassive black holes. Astrophys. J. 578 775-786 (2002). doi:10.1086/342655. astro-ph/0203370 Blind, N, Boffin, HMJ, Berger, J-P, Le Bouquin, J-B, Mérand, A, Lazareff, B, Zins, G: An incisive look at the symbiotic star SS leporis. Milli-arcsecond imaging with PIONIER/VLTI. Astron. Astrophys. 536, 55 (2011). doi:10.1051/0004-6361/201118036. 1112.1514 Bode, JN, Wegg, C: Production of EMRIs in supermassive black hole binaries. Mon. Not. R. Astron. Soc. 438, 573-589 (2014). doi:10.1093/mnras/stt2227 Bond, HE: Binarity of central stars of planetary nebulae. In: Kastner, JH, Soker, N, Rappaport, S (eds.) Asymmetrical Planetary Nebulae II: From Origins to Microstructures. Astronomical Society of the Pacific Conference Series, vol. 
199, pp. 115-124 (2000). astro-ph/9909516 Bond, HE, Livio, M: Morphologies of planetary nebulae ejected by close-binary nuclei. Astrophys. J. 355, 568-576 (1990). doi:10.1086/168789 Bond, HE, O'Brien, MS, Sion, EM, Mullan, DJ, Exter, K, Pollacco, DL, Webbink, RF: V471 Tauri and SuWt 2: the exotic descendants of triple systems? In: Tout, CA, van Hamme, W (eds.) Exotic Stars as Challenges to Evolution. Astronomical Society of the Pacific Conference Series, vol. 279, p. 239-394 (2002) Bondi, H, Hoyle, F: On the mechanism of accretion by stars. Mon. Not. R. Astron. Soc. 104, 273 (1944) Book Review: Stellar Rotation. Observatory 120, 414 (2000) Bressan, A, Girardi, L, Marigo, P, Rosenfield, P, Tang, J: Uncertainties in stellar evolution models: convective overshoot. Astrophys. Space Sci. Proc. 39, 25 (2015). doi:10.1007/978-3-319-10993-0-3. 1409.2268 Brooker, RA, Olle, TW: Apsidal-motion constants for polytropic models. Mon. Not. R. Astron. Soc. 115, 101-106 (1955) Brown, GE: Neutron star accretion and binary pulsar formation. Astrophys. J. 440, 270-279 (1995). doi:10.1086/175268 Camacho, J, Torres, S, García-Berro, E, Zorotovic, M, Schreiber, MR, Rebassa-Mansergas, A, Nebot Gómez-Morán, A, Gänsicke, BT: Monte Carlo simulations of post-common-envelope white dwarf + main sequence binaries: comparison with the SDSS DR7 observed sample. ArXiv e-prints (2014). 1404.5464 Cayrel de Strobel, G, Soubiran, C, Ralite, N: Catalogue of [Fe/H] determinations for FGK stars: 2001 edition. Astron. Astrophys. 373, 159-163 (2001). doi:10.1051/0004-6361:20010525. astro-ph/0106438 Chaty, S: Nature, formation, and evolution of high mass X-ray binaries. In: Schmidtobreick, L, Schreiber, MR, Tappert, C (eds.) Evolution of Compact Binaries. Astronomical Society of the Pacific Conference Series, vol. 447, p. 29 (2011). 1107.0231 Chen, X, Madau, P, Sesana, A, Liu, FK: Enhanced Tidal disruption rates from massive black hole binaries. Astrophys. J. 697, 149-152 (2009). doi:10.1088/0004-637X/697/2/L149. 0904.4481 Church, RP, Dischler, J, Davies, MB, Tout, CA, Adams, T, Beer, ME: Mass transfer in eccentric binaries: the new oil-on-water smoothed particle hydrodynamics technique. Mon. Not. R. Astron. Soc. 395, 1127-1134 (2009). doi:10.1111/j.1365-2966.2009.14619.x. 0902.3509 Claeys, JSW, Pols, OR, Izzard, RG, Vink, J, Verbunt, FWM: Theoretical uncertainties of the Type Ia supernova rate. Astron. Astrophys. 563, 83 (2014). doi:10.1051/0004-6361/201322714. 1401.2895 Claret, A, Gimenez, A: Evolutionary stellar models using Rogers and Iglesias opacities, with particular attention to internal structure constants. Astron. Astrophys. Suppl. Ser. 96, 255-268 (1992) ADS Google Scholar Cohen, SD, Hindmarsh, AC, Dubois, PF: CVODE, a stiff/nonstiff ODE solver in C. Comput. Phys. 10, 138-143 (1996). doi:10.1063/1.4822377 Cordes, JM, Romani, RW, Lundgren, SC: The guitar nebula - a bow shock from a slow-spin, high-velocity neutron star. Nature 362, 133-135 (1993). doi:10.1038/362133a0 Correia, ACM, Laskar, J, Farago, F, Boué, G: Tidal evolution of hierarchical and inclined systems. Celest. Mech. Dyn. Astron. 111, 105-130 (2011). doi:10.1007/s10569-011-9368-9. 1107.0736 ADS MathSciNet MATH Article Google Scholar Damineli, A, Conti, PS, Lopes, DF: Eta Carinae: a long period binary? New Astron. 2, 107-117 (1997). doi:10.1016/S1384-1076(97)00008-0 Darwin, G: On the precession of a viscous spheroid and on the remote history of the Earth. Philos. Trans. R. Soc. Lond. 
170, 447-538 (1879) MATH Article Google Scholar Davies, MB: In: Boffin, HMJ, Carraro, G, Beccari, G (eds.) Formation Channels for Blue Straggler Stars, p. 203 (2015). doi:10.1007/978-3-662-44434-4-9 Davis, PJ, Siess, L, Deschamps, R: Mass transfer in eccentric binary systems using the binary evolution code BINSTAR. Astron. Astrophys. 556, 4 (2013). doi:10.1051/0004-6361/201220391. 1305.6092 de Kool, M, van den Heuvel, EPJ, Pylyser, E: An evolutionary scenario for the black hole binary A0620-00. Astron. Astrophys. 183, 47-52 (1987) De Marco, O, Long, J, Jacoby, GH, Hillwig, T, Kronberger, M, Howell, SB, Reindl, N, Margheim, S: Identifying close binary central stars of PN with Kepler. Mon. Not. R. Astron. Soc. 448, 3587-3602 (2015). doi:10.1093/mnras/stv249. 1608.03046 de Mink, SE, Pols, OR, Glebbeek, E: Critically-rotating stars in binaries - an unsolved problem. In: Stancliffe, RJ, Houdek, G, Martin, RG, Tout, CA (eds.) Unsolved Problems in Stellar Physics: A Conference in Honor of Douglas Gough. American Institute of Physics Conference Series, vol. 948, pp. 321-325 (2007a). doi:10.1063/1.2818989. 0709.2285 de Mink, SE, Pols, OR, Hilditch, RW: Efficiency of mass transfer in massive close binaries. Tests from double-lined eclipsing binaries in the SMC. Astron. Astrophys. 467 1181-1196 (2007b). doi:10.1051/0004-6361:20067007. astro-ph/0703480 de Val-Borro, M, Karovska, M, Sasselov, D: Numerical simulations of wind accretion in symbiotic binaries. Astrophys. J. 700, 1148-1160 (2009). doi:10.1088/0004-637X/700/2/1148. 0905.3542 de Vaucouleurs, G, Eggen, OJ: The brightening of η Carinae. Publ. Astron. Soc. Pac. 64, 185-190 (1952). doi:10.1086/126457 de Vries, N, Portegies Zwart, S, Figueira, J: The evolution of triples with a Roche lobe filling outer star. Mon. Not. R. Astron. Soc. 438, 1909-1921 (2014). doi:10.1093/mnras/stt1688. 1309.1475 del Peloso, EF, da Silva, L, Arany-Prado, LI,: The age of the Galactic thin disk from Th/Eu nucleocosmochronology. II. Chronological analysis. Astron. Astrophys. 434 301-308 (2005). doi:10.1051/0004-6361:20042438. astro-ph/0411699 Dewi, JDM, Tauris, TM: On the energy equation and efficiency parameter of the common envelope evolution. Astron. Astrophys. 360, 1043-1051 (2000). arXiv:astro-ph/0007034 Dosopoulou, F, Kalogera, V: Orbital evolution of mass-transferring eccentric binary systems. I. Phase-dependent evolution. Astrophys. J. 825, 70 (2016a). doi:10.3847/0004-637X/825/1/70. 1603.06592 Dosopoulou, F, Kalogera, V: Orbital evolution of mass-transferring eccentric binary systems. II. Secular evolution. Astrophys. J. 825, 71 (2016b). doi:10.3847/0004-637X/825/1/71. 1603.06593 Duchêne, G, Kraus, A: Stellar multiplicity. Annu. Rev. Astron. Astrophys. 51, 269-310 (2013). doi:10.1146/annurev-astro-081710-102602. 1303.3028 Eggleton, PP: The evolution of low mass stars. Mon. Not. R. Astron. Soc. 151, 351-364 (1971) Eggleton, PP: Composition changes during stellar evolution. Mon. Not. R. Astron. Soc. 156, 361-376 (1972) Eggleton, PP: Approximations to the radii of Roche lobes. Astrophys. J. 268, 368-369. (1983). doi:10.1086/160960 Eggleton, PP, Kiseleva, LG: Stellar and dynamical evolution within triple stars. In: Wijers, RAMJ, Davies, MB, Tout, CA (eds.) Evolutionary Processes in Binary Stars. NATO Advanced Science Institutes (ASI) Series C vol. 477, pp. 345-363 (1996). astro-ph/9510110 Eggleton, PP, Kiseleva-Eggleton, L: Orbital evolution in binary and triple stars, with an application to SS Lacertae. Astrophys. J. 562, 1012-1030 (2001). doi:10.1086/323843. 
Linear Algebra and Its Applications (Lay, Lay, and McDonald, 5th Edition, Pearson): selected notes and worked answers.

Linear algebra is relatively easy for students during the early stages of the course, when the material is presented in a familiar, concrete setting. Instructors seem to agree that certain concepts (such as linear independence, spanning, subspace, vector space, and linear transformations) are not easily understood and require time to assimilate; since they are fundamental to the study of linear algebra, students' understanding of these concepts is vital to their mastery of the subject.

Vector addition is componentwise, $v + w = (v_1 + w_1, \ldots, v_n + w_n)$, and geometrically $v + w$ is the diagonal of the parallelogram spanned by $v$ and $w$.

A worked elimination example from Chapter 1: solve the system $x_1 + 5x_2 = 7$, $-2x_1 - 7x_2 = -5$. Multiply the first equation by 2 to get $2x_1 + 10x_2 = 14$ and add it to the second equation; the $2x_1$ cancels out and you are left with $3x_2 = 9$. Divide both sides by 3 and obtain $x_2 = 3$; the first equation then gives $x_1 = 7 - 5\cdot 3 = -8$. Check in the second equation: $-2(-8) - 7(3) = 16 - 21 = -5$, as required.
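A quick way to verify the solution with NumPy (this check is added for illustration and is not part of the textbook's solution; the coefficient matrix below is just the system above written in matrix form):

```python
import numpy as np

# The system  x1 + 5*x2 = 7,  -2*x1 - 7*x2 = -5  in matrix form A x = b
A = np.array([[ 1.0,  5.0],
              [-2.0, -7.0]])
b = np.array([7.0, -5.0])

x = np.linalg.solve(A, b)
print(x)                      # [-8.  3.]  -- matches the hand computation
print(np.allclose(A @ x, b))  # True: the solution satisfies both equations
```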
Structures that turn out to exhibit a symmetry even though their definition doesn't Sometimes (often?) a structure depending on several parameters turns out to be symmetric w.r.t. interchanging two of the parameters, even though the definition gives a priori no clue of that symmetry. As an example, I'm thinking of the Littlewood–Richardson coefficients: If defined by the skew Schur function $s_{\lambda/\mu}=\sum_\nu c^\lambda_{\mu\nu}s_\nu$, where the sum is over all partitions $\nu$ such that $|\mu|+|\nu|=|\lambda|$ and $s_{\lambda/\mu}$ itself is defined e.g. by $ s_{\lambda/\mu}= \det(h _{\lambda_i-\mu_j-i+j}) _{1\le i,j\le n}$, it is not at all straightforward to see from that definition that $c^\lambda_{\mu\nu} =c^\lambda_{\nu\mu} $. Granted that this way of looking at it may seem a bit artificial, as I guess that in many of such cases, it is possible to come up with a "higher level" definition that shows the symmetry right away (e.g. in the above example, the usual (?) definition of $c_{\lambda\mu}^\nu$ via $s_\lambda s_\mu =\sum c_{\lambda\mu}^\nu s_\nu$), but showing the equivalence of both definitions may be more or less involved. So I am aware that it might just be a matter of "choosing the right definition". Therefore, maybe it would be better to think of the question as asking especially for cases where historically, the symmetry of a certain structure has been only stated 'later', after defining or obtaining it in a different way first. Another example that would fit here: the Perfect graph theorem, featuring a 'conceptual' symmetry between a graph and its complement. What are other examples of "unexpected" or at least surprising symmetries? (NB. The 'combinatorics' tag seemed the most obvious to me, but I won't be surprised if there are upcoming examples far away from combinatorics.) co.combinatorics big-list big-picture gm.general-mathematics $\begingroup$ Quadratic reciprocity. $\endgroup$ – Terry Tao $\begingroup$ The relation between $\zeta(1-x)$ and $\zeta(x)$ for the Riemann $\zeta$ function. $\endgroup$ – Lev Borisov $\begingroup$ Number of partitions of $n$ into no more than $k$ terms that are each no larger than $l$. The symmetry between $l$ and $k$ might not be immediately obvious to novices. $\endgroup$ – Yoav Kallus $\begingroup$ The Peano definition of addition, even. $\endgroup$ – Joe Z. $\begingroup$ I saw the title and my first thought was "Littlewood-Richardson coefficients". :) $\endgroup$ – darij grinberg If $a$ and $b$ are positive integers, and you make the definition $$ a \cdot b = \underbrace{a + \cdots + a}_{b \text{ times} }$$ then it's a slightly surprising fact that $a \cdot b$ is actually equal to $b \cdot a$. $\begingroup$ Indeed, this fails in general when $a,b$ are ordinals. $\endgroup$ $\begingroup$ It's even more surprising if you start with the inductive definitions of plus and times. The proof that $ab=ba$ comes as Proposition 72 in the first development of this theory, by Grassmann in 1861. $\endgroup$ – John Stillwell Jan 13, 2014 at 9:12 A nice example from classical mechanics is this: there is a hidden $SO(4)$ symmetry in the elliptical orbits of a particle in an inverse square potential, ie. the Kepler problem. The system has an obvious $SO(3)$ symmetry because the inverse square law is invariant under rotations. But there's no a priori clue that an $SO(4)$ symmetry exists in this system. 
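One concrete way to see that something extra is conserved (generators of the hidden symmetry beyond energy and angular momentum) is to track the Laplace-Runge-Lenz vector $\mathbf A = \mathbf p \times \mathbf L - k\,\hat{\mathbf r}$ along a numerically integrated orbit. The following Python sketch is only an illustration added here, not part of the original answer; the integrator, step size, and initial conditions are arbitrary choices (unit mass, $k = 1$):

```python
import numpy as np

def accel(r):
    """Inverse-square acceleration -r/|r|^3 (unit mass, unit coupling)."""
    return -r / np.linalg.norm(r)**3

def rk4_step(r, v, dt):
    """One classical fourth-order Runge-Kutta step for dr/dt = v, dv/dt = accel(r)."""
    k1r, k1v = v,              accel(r)
    k2r, k2v = v + 0.5*dt*k1v, accel(r + 0.5*dt*k1r)
    k3r, k3v = v + 0.5*dt*k2v, accel(r + 0.5*dt*k2r)
    k4r, k4v = v + dt*k3v,     accel(r + dt*k3r)
    return (r + dt*(k1r + 2*k2r + 2*k3r + k4r)/6,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6)

def runge_lenz(r, v):
    """Planar Laplace-Runge-Lenz vector A = p x L - r/|r| (unit mass, k = 1)."""
    Lz = r[0]*v[1] - r[1]*v[0]                      # angular momentum, z-component
    return np.array([v[1]*Lz, -v[0]*Lz]) - r/np.linalg.norm(r)

r, v = np.array([1.0, 0.0]), np.array([0.0, 1.2])   # an eccentric bound orbit
A_start = runge_lenz(r, v)
for _ in range(50_000):                             # integrate through a few revolutions
    r, v = rk4_step(r, v, 1e-3)
print(A_start, runge_lenz(r, v))                    # the two vectors agree closely
```

Replacing the exponent 3 in accel with, say, 3.5 makes the printed vectors drift apart, which is one way to see that this extra conserved quantity, and the enlarged symmetry it generates, is special to the inverse-square law.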
You can read about it here: http://math.ucr.edu/home/baez/classical/runge_pro.pdf This carries over to the quantum mechanical case when you solve the Schrödinger equation for an inverse square potential. You can read about that here: http://hep.uchicago.edu/~rosner/p342/projs/weinberg.pdf The result is that the hidden $SO(4)$ symmetry explains the "coincidence" that many hydrogen atom states have the same energy. Dan Piponi I think that if you put yourself back in the position of someone discovering this for the first time, the equality (under suitable hypotheses) $${\partial^2f\over\partial x\partial y}={\partial^2 f\over\partial y\partial x}\quad (1)$$ should count. Here's a surprising application of that surprising equality. Suppose you're a profit-maximizing competitive firm, hiring both labor ($L$) (at a wage rate of $W$) and capital ($K$) (at a rental rate of $R$). Then an increase in $W$ will, in general, lead you to reduce your output and so employ less capital, but at the same time lead you to substitute capital for labor and so employ more capital. On balance, the derivative $dK/dW$ could be either positive or negative. Likewise for the derivative $dL/dR$. It does not seem to me to be at all intuitively obvious that these derivatives even have the same sign, much less that they are equal. But if one takes $f$ in (1) to be profit as a function of $x$ (labor) and $y$ (capital) then one discovers that in fact $${dK\over dW}={dL\over dR}$$ (Of course this looks more symmetric if you write $X_1$ and $X_2$ for labor and capital, and $P_1$ and $P_2$ for the wage rate and the rental rate.) Steven Landsburg $\begingroup$ Under the same heading: equality of the mutual inductance $M_{12}$, ratio of the emf induced in coil 1 to the rate of change of current in coil 2, to $M_{21}$. $\endgroup$ $\begingroup$ Maybe it's just sleep deprivation, but I don't see how that second equality works out. It doesn't look like the dimensions match; the left-hand side seems to be in dimensions of capital·time/labor, while the right-hand side seems to be in dimensions of labor·time/capital. $\endgroup$ $\begingroup$ @user2357112 : They both have units of capital*labor/output. If output is $F(K,L)$ then profit is $F(K,L)-RK-WL$. Profit maximization implies that $\partial F/\partial K=R$ and $\partial F/\partial L=W$. Use this and the equality of the two cross partials to get the result. $\endgroup$ – Steven Landsburg $\begingroup$ This is only one instance of a much more general fact known as Onsager's reciprocity formula. This is found everywhere there is a thermodynamical formulation. $\endgroup$ – Denis Serre Higher homotopy groups $\pi_n(X)$ are abelian. This is quite surprising if you see the definition for the first time, having probably encountered the classical fundamental group before, which is not abelian in general. In fact, when they were introduced, the higher homotopy groups were meant to generalize the fundamental group, in contrast to the abelian homology groups; once it was recognized that they are abelian too, they no longer seemed such an attractive generalization.
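Returning to the cross-partials identity (1) and the $dK/dW = dL/dR$ claim a little further up: it can be checked numerically for a concrete production function. The Cobb-Douglas form, the exponents, and the prices below are arbitrary choices made purely for illustration (they are not from the original answer); here the first-order conditions are linear in $(\log K, \log L)$, so the optimum can be computed exactly and then differentiated by finite differences:

```python
import numpy as np

a, b = 0.3, 0.6     # Cobb-Douglas exponents of F(K, L) = K**a * L**b, with a + b < 1

def optimal_inputs(R, W):
    """Profit-maximizing (K, L) from the first-order conditions
       a*K**(a-1)*L**b = R  and  b*K**a*L**(b-1) = W,
       which are linear in (log K, log L)."""
    M = np.array([[a - 1.0, b],
                  [a,       b - 1.0]])
    rhs = np.array([np.log(R / a), np.log(W / b)])
    logK, logL = np.linalg.solve(M, rhs)
    return np.exp(logK), np.exp(logL)

R0, W0, h = 0.5, 0.4, 1e-6
dK_dW = (optimal_inputs(R0, W0 + h)[0] - optimal_inputs(R0, W0 - h)[0]) / (2*h)
dL_dR = (optimal_inputs(R0 + h, W0)[1] - optimal_inputs(R0 - h, W0)[1]) / (2*h)
print(dK_dW, dL_dR)   # both cross-responses come out equal (about -22.14 with these numbers)
```

The equality is just the symmetry of the cross-partials of the maximized profit function (an envelope-theorem argument), which is why it does not depend on the particular production function chosen here.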
you get an element of $\pi_3(X)$ that tells you that while $a$ and $b$ commute, you have two choices of making them commute ($a$ over $b$ or $b$ over $a$), and they are not identical. See: Whitehead products (and $E_n$ operads ☺). $\endgroup$ – Najib Idrissi Rolling one surface on another without slipping binds the velocity of the rolling surface and its angular velocity, giving a rank 2 subbundle in the tangent bundle of the 5-dimensional space of tangential positionings of the 2 surfaces in space. This subbundle, when you roll one sphere on another, has an 8 dimensional symmetry group, unless one sphere has exactly one third the radius of the other sphere, in which case the subbundle is preserved by a 14 dimensional group of diffeomorphisms of the 5-dimensional manifold: the split real form of the simple Lie group $G_2$. Ben McKay $\begingroup$ This subbundle is my favorite example of a non-integrable distribution (if the surfaces are "generic", at least) - you can physically see that rolling a sphere in an "infinitesimal square" on a plane makes the sphere rotate. $\endgroup$ – Peter Samuelson The joint distribution of IID normal random variables is spherically symmetric. Although invariance under permutations of the coordinates is obvious for any IID variables, spherical symmetry is rare. In fact, this characterizes the normal distribution. Douglas Zare $\begingroup$ This is not a characterization in dimension 1! $\endgroup$ – KConrad $\begingroup$ @KConrad: One-dimensional normal distributions are the subject of the statement: If IID copies $(X_1,...,X_n)$ of a random variable $X$ have a spherically symmetric distribution in $\mathbb{R}^n, n\gt 1,$ then $X$ is normally distributed with mean $0$. Maxwell's Theorem is actually a little stronger than this. $\endgroup$ – Douglas Zare $\begingroup$ What I meant was that spherical symmetry when $n=1$ (i.e., using the group $O(n)$ when $n = 1$) does not characterize the normal distribution. For any fixed $n > 1$, spherical symmetry implies normality. The Wikipedia page for Maxwell's theorem, at the moment I write this, leaves off the condition that $n > 1$ and when I look at Maxwell's theorem in other references it is pretty common to see that the author forgets to say $n > 1$ in the theorem. $\endgroup$ $\begingroup$ There are characterizations of normal distributions that work directly in dimension 1, e.g., the characterization using maximum entropy. Maxwell's theorem is a characterization that requires using dimension > 1. $\endgroup$ $\begingroup$ @Kconrad: Yes, you are right that you need to use more than $1$ copy (I did say variableS and coordinateS), but this is a hidden symmetry of a $1$-dimensional normal distribution, not just higher dimensional normal distributions. If you really think there is a problem, you are welcome to edit the answer to improve it. The entropy characterization doesn't seem to be a "surprising symmetry" which is what this question asked. $\endgroup$ A pedestrian definition of the rank of a matrix as the maximum number of linearly independent columns equals the maximum number of linearly independent rows. P Vanchinathan Consider the Desargues configuration. It consists of (1) two triangles, say $ABC$ and $A'B'C'$ such that the lines $AA'$, $BB'$, and $CC'$ all meet at a point $P$, and (2) the three points of intersection of corresponding sides $X=(BC)\cap(B'C')$, $Y=(AC)\cap(A'C')$, and $Z=(AB)\cap(A'B')$. Desargues's theorem says that then $X$, $Y$, and $Z$ are collinear. 
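Such incidence statements are easy to sanity-check in coordinates: in homogeneous coordinates the line through two points and the intersection point of two lines are both given by cross products. The following Python sketch, with arbitrarily chosen points and added here purely as an illustration, verifies the collinearity of $X$, $Y$, $Z$ for one configuration in perspective from $P$:

```python
import numpy as np

def H(x, y):
    """Affine point (x, y) in homogeneous coordinates."""
    return np.array([x, y, 1.0])

# Two triangles in perspective from P = (0, 0): each primed vertex lies on the
# line joining P to the corresponding unprimed vertex (here, a scalar multiple).
P          = H(0.0, 0.0)
A,  B,  C  = H(2.0, 1.0), H(-1.0, 3.0), H(1.0, -2.0)
Ap, Bp, Cp = H(4.0, 2.0), H(-2.5, 7.5), H(3.0, -6.0)

join = np.cross   # line through two points (as a homogeneous 3-vector)
meet = np.cross   # intersection point of two lines (again a cross product)

X = meet(join(B, C), join(Bp, Cp))   # BC meets B'C'
Y = meet(join(A, C), join(Ap, Cp))   # AC meets A'C'
Z = meet(join(A, B), join(Ap, Bp))   # AB meets A'B'

# Desargues: X, Y, Z are collinear, i.e. the 3x3 determinant of their
# homogeneous coordinates vanishes (up to floating-point rounding).
print(np.linalg.det(np.vstack([X, Y, Z])))
```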
The Desargues configuration consists of the 10 points mentioned above ($A,B,C,A',B',C',P,X,Y,Z$) and the 10 lines mentioned (the three sides of both triangles, the three lines through $P$, and the line $XYZ$). The surprising (to me) symmetry is an action of the cyclic group of order 5. In fact, the graph whose vertices are the 10 points of the Desargues configuration and whose edges join any two points that are not together on any of the configuration's 10 lines is the Petersen graph, which is usually drawn in a way that makes the cyclic 5-fold symmetry visible. Andreas Blass $\begingroup$ Have used Desargues for easily a hundred times in my schooldays and never realized this. I actually wasn't aware that the Petersen graph had any deeper meaning than that of a counterexample to some conjectures of days gone by. Nice!! $\endgroup$ Hermite's reciprocity: as representations of $GL_2$, we have $$ S^k(S^l\mathbb{C}^2)\simeq S^l(S^k\mathbb{C}^2). $$ Vladimir Dotsenko The outer automorphism of $S_6$. Adam P. Goucher In fact, the "correct" definition of Littlewood-Richardson coefficients shows a surprising $S_3$-symmetry among all the indices $\lambda,\mu,\nu$. See Thomas and Yong - An $S_3$-symmetric Littlewood–Richardson rule. A further example related to symmetric functions is the symmetry between the area and bounce statistics of Dyck paths. See for instance Chapter 3 of Haglund - The $q, t$-Catalan numbers and the space of diagonal harmonics. No combinatorial proof of symmetry is known. There are many enumeration problems with "hidden symmetry." For instance, what is the probability that 1 and 2 are in the same cycle of a (uniform) random permutation of $1,2,\dots,n$? More interesting, suppose that I shuffle an ordinary deck of 26 red cards and 26 black cards. I turn the cards face up one at a time. At any point before the last card is dealt, you can guess that the next card is red. What strategy maximizes the probability of guessing correctly? The surprising answer is that all strategies have a probability of 1/2 of success! There is a very elegant way to see this. Richard Stanley $\begingroup$ I see how to solve the card problem by proving a more general result for R red cards and B black cards, and then using induction on the size of the deck. (There are two cases: Either my strategy is to guess before the first card or my strategy is contingent on the first card.) But I wonder if that's the "very elegant way" you have in mind. $\endgroup$ $\begingroup$ @StevenLandsburg: imagine the dealer turns over the bottom card of the deck when you guess, instead of the top one. Clearly this situation is symmetric to the one described above, but also clearly every strategy gives 50/50 odds as the outcome is determined before the game even starts. $\endgroup$ – Sam Hopkins $\begingroup$ Can you fix the first link to point to the abstract rather than directly to the PDF? Thank you! $\endgroup$ – Harry Altman From school days... Take positive reals x,y,z,w. The following statement is actually symmetric in x,y,z,w: "there exists an equilateral triangle of side length w, and a point whose distances from the three vertices are x,y,z" A quick proof: Let $ABC$ be equilateral and $P$ arbitrary. Construct $BPQ$ equilateral. Let $AB=AC=BC=w$, $AP=x$, $BP=y$ and $CP=z$. Then $BP=PQ=BQ=y$ by construction, $CP=z$ and $CB=w$ obviously, so it remains to check that $CQ=x$. Now note that triangle $CBQ$ is the $60^\circ$ rotation of $ABP$ around $B$. $\begingroup$ I have problems with this when w=10x=10y=10z. 
You might add an inequality to show when a triangle might exist. Gerhard "Not Doubting The Equivalence, However" Paseman, 2013.12.18 $\endgroup$ – Gerhard Paseman $\begingroup$ @GerhardPaseman I think the statement includes cases like yours where a triangle inequality is violated. It just says that the structure exists iff it exists for any one permutation of $x,y,z,w$. And in your case, it exists for none. :) $\endgroup$ – Wolfgang $\begingroup$ Right. I am not disagreeing with the argument or the statement. I am disagreeing with the presentation. Even if it gives the game away, I would posit "Let there be x,y,z,w satisfying the following inequalities:...", then follow up with the supposedly asymmetrical statement of the existence of an object. I agree that the proof convinces me the statement has a hidden symmetry. Gerhard "Ask Me About System Design" Paseman, 2013.12.19 $\endgroup$ The Jacobson radical of a ring $R$ is defined to be the intersection of all maximal left ideals in $R$. It turns out that the Jacobson radical is the intersection of all maximal right ideals in $R$ as well, so the Jacobson radical does not depend on whether one considers left or right ideals. In particular, the Jacobson radical of a ring is a two-sided ideal. In fact, there are several characterizations of the Jacobson radical that do not appear to be symmetric with respect to "leftness" and "rightness", including the following: (1) the intersection of all maximal left ideals; (2) $\bigcap\{\textrm{Ann}(M)\,|\,M\ \textrm{is a simple left}\ R\textrm{-module}\}$; (3) $\{x\in R\,|\,1-rx\ \textrm{has a left inverse for each}\ r\in R\}$; (4) $\{x\in R\,|\,1-rx\ \textrm{has a two-sided inverse for each}\ r\in R\}$. Joseph Van Name The combinatorial definition of the Schur functions is $$ s_\lambda(x) = \sum_{T \in SSYT(\lambda)} x^{cont(T)} $$ where $SSYT(\lambda)$ is the set of semi-standard Young tableaux of shape $\lambda$ and $x^{cont(T)}$ is the product over all $i$ of $x_i^{\# i\text{'s in }T}$. This is not manifestly a symmetric function. The Bender-Knuth involution proves that $s_\lambda(x)$ is invariant under swapping $x_i$ with $x_{i+1}$, and thus $s_\lambda(x)$ is, indeed, symmetric. Andy B $\begingroup$ And more startlingly (or at least far less obviously), the Stanley symmetric functions and their generalizations. $\endgroup$ $\begingroup$ And the LLT polynomials. And the Eulerian quasisymmetric functions (which are symmetric - I did not name them...). $\endgroup$ – Per Alexandersson Aug 20, 2018 at 8:47 Let $G$ be a finite group with order $n$. For each $d$ dividing $n$, the number of subgroups of $G$ of order $d$ equals the number of subgroups of order $n/d$ if $G$ is abelian. More broadly, the lattice of subgroups of a finite abelian group looks the same if you flip it around by 180 degrees. This is not at all obvious at the level at which the statement can first be understood, essentially because there is no natural way to construct subgroups of index $d$ from subgroups of order $d$ in a general finite abelian group with order divisible by $d$. It is not clear at a beginning level how the commutativity of the group leads to such conclusions. KConrad Morley's trisector theorem allows you to build a triangle which is maximally symmetric out of one which has no symmetry at all.
Fabien Besnard Consider a differential inequality, like the Hardy-Sobolev inequality $$\left|\int\int_{{\mathbb R}^N\times{\mathbb R^N}}\frac{\overline{f(x)}g(y)}{|x-y|^\lambda}dxdy\right|\leq C\|f\|_r\|g\|_s.$$ Even if you put the sharp constant $C$ in this inequality, for most functions the inequality is strict. Now look for maximizers, i.e., functions for which the LHS is equal to the RHS: they are highly symmetric functions, actually spherically symmetric and very smooth. This is a general phenomenon, connected with monotonicity of $L^p$ and Sobolev norms with respect to symmetrization procedures. Piero D'Ancona I always found $\mathrm{Tor}_R\left(M,N\right) \cong \mathrm{Tor}_R\left(N,M\right)$ for a commutative ring $R$ and two $R$-modules $M$ and $N$ to be mysterious. Then again I have no idea about homology and thus wouldn't be surprised if this is a triviality from an appropriate viewpoint. Volker Strehl's generalized cyclotomic identity (Corollary 6 in Volker Strehl, Cycle counting for isomorphism types of endofunctions states that $\prod\limits_{k\geq 1} \left(\dfrac{1}{1-az^k}\right)^{M_k\left(b\right)} = \prod\limits_{k\geq 1}\left(\dfrac{1}{1-bz^k}\right)^{M_k\left(a\right)}$ in the formal power series ring $\mathbb Q\left[\left[z,a,b\right]\right]$, where $M_k\left(t\right)$ denotes the $k$-th necklace polynomial $\dfrac{1}{k}\sum\limits_{d\mid k} \mu\left(d\right) t^{k/d}$. I recall this being not particularly difficult, but quite useful. Every nontrivial commutativity of some family of operators probably qualifies as an unexpected symmetry. Here are three examples: 1. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $Y_i \in \mathbb Z\left[S_n\right]$ by $Y_i = \left(1,i\right) + \left(2,i\right) + ... + \left(i-1,i\right)$ (a sum of $i-1$ transpositions). Then, $Y_i Y_j = Y_j Y_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is a simple exercise, and the $Y_i$ are called the Young-Jucys-Murphy elements. 2. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{0,1,...,n\right\}$, define an element $\mathrm{Sch}_i \in \mathbb Z\left[S_n\right]$ as the sum of all permutations $\sigma \in S_n$ satisfying $\sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(i\right)$. (Note that $\mathrm{Sch}_0 = \mathrm{Sch}_1$ when $n\geq 1$.) Then, $\mathrm{Sch}_i \mathrm{Sch}_j = \mathrm{Sch}_j \mathrm{Sch}_i$ for all $i$ and $j$ in $ \left\{0,1,...,n\right\}$. In fact, $\mathrm{Sch}_i \mathrm{Sch}_j = \sum\limits_{k=0}^{\min\left\{n,i+j-n\right\}} \dbinom{n-j}{i-k} \dbinom{n-i}{j-k} \left(n+k-i-j\right)! \mathrm{Sch}_k$, which makes the symmetry maybe not that surprising (no similar equalities hold in cases 1 and 3!). See Manfred Schocker, Idempotents for derangement numbers, Discrete Mathematics, vol. 269 (2003), pp. 239-248 for a proof. (This is also proven in my answers to Is this sum of cycles invertible in QSn? now, except that instead of the condition $\sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(i\right)$ I require $\sigma\left(n-i+1\right) < \sigma\left(n-i+2\right) < ... < \sigma\left(n\right)$ in that thread. But the two conditions can be transformed into one another by the automorphism $S_n \to S_n,\ \sigma \mapsto w \circ \sigma \circ w$ of $S_n$, where $w$ is the permutation in $S_n$ that sends each $i$ to $n+1-i$.) 3. 
Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $\mathrm{RSW}_i \in \mathbb Z\left[S_n\right]$ as $\sum\limits_{1\leq u_1 < u_2 < ... < u_i\leq n} \sum\limits_{\substack{\sigma\in S_n, \\ \sigma\left(u_1\right) < \sigma\left(u_2\right) < ... < \sigma\left(u_i\right)}} \sigma$. Then, $\mathrm{RSW}_i \mathrm{RSW}_j = \mathrm{RSW}_j \mathrm{RSW}_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is Theorem 1.1 in Victor Reiner, Franco Saliola, Volkmar Welker, Spectra of Symmetrized Shuffling Operators, arXiv:1102.2460v2, and a nice proof remains to be found. darij grinberg $\begingroup$ The Tor symmetry is basically just that $M \otimes N \cong N \otimes M$, and you take the derived functors of both sides. Generalizing, any and all nice properties of (co)homology groups would seem to be mysterious symmetries if you consider the definition to be messing around with projective or injective modules, and not something more intrinsic like derived functors. $\endgroup$ – Ryan Reich $\begingroup$ Regarding Volker Strehl's identity, it seems to be true for any function, not just $\mu$, although presumably taking $\mu$ has some application. Thus let $f:\mathbb{N}\to\mathbb{Q}$ be any function. Then in $\mathbb Q[[a,b,z]]$, we have the formal identity $$ \prod_{k\ge1} \left(\frac{1}{1-az^k}\right)^{\frac{1}{k}\sum_{d\mid k} f(d)b^{k/d}} = \exp\left( \sum_{d=1}^\infty\frac{f(d)}{d} \sum_{i,j=1}^\infty \frac{a^ib^j}{ij} z^{ijd}\right). $$ so symmetry in $a$ and $b$ is clear. Proof: take logs of both sides, use the series for $\log(1-t)^{-1}$, and flip the order of series. $\endgroup$ – Joe Silverman $\begingroup$ @JoeSilverman: Nice observation! $\endgroup$ This is a rather specialized example, but dear to my heart. Consider the set of "Richardson subvarieties" of the flag manifold $GL_n/B$, intersections of Schubert and opposite Schubert varieties. The only part of the Weyl group that preserves this set is $\{1,w_0\}$ where the $w_0$ exchanges Schubert and opposite Schubert varieties. Now project these varieties to a $k$-Grassmannian, obtaining "positroid varieties". This includes the Richardson varieties in the Grassmannian, and many new varieties. Now the part of the Weyl group that preserves this collection is the dihedral group $D_n$! The symmetry has gotten bigger by a factor of $n$. Allen Knutson Maxwell's equations were originally formulated for Newtonian physics. However, special relativity has found that these equations have a surprising symmetry to Lorentz transformations. The equations remain true in a moving reference frame. The transformation of the values is such that (loosely speaking) what looks like pure electric charge in one reference frame can be electric current and charge in another reference frame; and what looks like pure electric field from one reference frame can be magnetic and electric field in another reference frame. See https://en.wikipedia.org/wiki/Covariant_formulation_of_classical_electromagnetism for a precise formulation. Zsbán Ambrus Here is an example from potential theory where symmetry is a not-so-obvious property: the Green function of a bounded open subset $\Omega \subset \mathbb{C}$. 
More precisely, having specified a point $a \in \Omega$, one defines the classical Green function for $\Omega$ with pole at $a$, , as a function on $\mathbb{C}$ with the following properties: (i) $G_\Omega(\cdot; a)$ is harmonic in $\Omega \setminus \{a\}$; (ii) $z \mapsto G(z;a) + \log |z-a|$ extends to a harmonic function on $\Omega$; (iii) for each $w \in \partial \Omega$, $\lim_{z \to w} G_\Omega(z;a)=0$. The symmetry property says that $G_\Omega(z;w)=G_\Omega(w;z)$ for any $z,w \in \Omega$ such that $z \ne w$. Note that the functions on either side of the equation are different: one has a pole at $w$ and the other at $z$. It is not very hard to prove the symmetry property, but it is not obvious either. The existence of such a function is related to the solution of a Dirichlet problem for the Laplace equation in $\Omega$. Analogous functions can be considered for domains in $\mathbb{R}^n, \ n>2$ or in $\mathbb{C}^n, n > 1$, and they also enjoy the symmetry property. Margaret Friedland A couple very disparate answers that spring to mind (fortunately, this is community wiki, and actual experts should feel very free to improve my exposition of either): The negative gradient flow for the Chern-Simons functional on a 3-manifold $M$ naturally satisfies a four-dimensional symmetry. Namely, if one has a principal $G$-bundle on $M$ and some connection $A$ on this $G$-bundle (which I'll carelessly think of as a $\mathfrak{g}$-valued $1$-form on $M$), the Chern-Simons functional $CS(A) = \int_M \Big( dA + \frac{2}{3} A \wedge A \Big) \wedge A$ is a perfectly well-defined function on the space of connections, and one can attempt to perform the negative gradient flow with respect to a natural metric on this space of connections (this being a very natural thing to do from the point of view of Morse theory, for example). If you want, you can interpret the solution to this flow as a connection on the bundle pulled back to $M \times \mathbb{R}$, and while this connection clearly transforms nicely under $Diff(M)$, there's no particular reason to think it's a well-behaved object under the diffeomorphism group of the four-manifold $M \times \mathbb{R}$. However, this negative gradient flow equation turns out to be exactly the anti-self dual equation $F^+ = 0$, where the curvature $F = dA + A \wedge A$ and its self-dual part is $F^+ = \frac{1}{2}(F + *F)$. This equation manifestly respects the symmetries of the entire four-manifold, and this point of view is a very effective one for proving even basic things, like gauge invariance, of the Chern-Simons functional. Witten is very fond of making this point and my understanding is that this insight allowed him to extend his QFT description of the Jones polynomial to a QFT description of its categorification, Khovanov homology. And now for something completely different: associativity of the quantum cup product. A familiar object to many people is the cohomology ring $H^*(X)$ of a space $X$, which is associative, (graded) commutative, and just generally great. If $X$ is a symplectic manifold, there's an interesting way to deform the multiplication on this ring using counts of $J$-holomorphic curves passing through various cycles. 
In effect, one picks a compatible almost-complex structure on the symplectic manifold, and then if one writes $\alpha * \beta = \sum_{\gamma} c_{\alpha \beta \gamma} \gamma$, where we think of $\alpha, \beta, \gamma$ as cycles in $X$ (using Poincare duality), the coefficient $c_{\alpha \beta \gamma}$ is a generating function in some formal variables, the coefficients of which are counts of holomorphic curves of fixed genus and homology class intersecting our three cycles $\alpha, \beta, \gamma$. Using this deformed multiplication gives the quantum cohomology ring $QH^*(X)$. Now, some properties of this ring, like graded commutativity, are fairly easy to see from the definition, but associativity is really quite tricky! (I realise this isn't exactly what you asked in your question as it's not just a symmetry of some coefficient, but you can phrase associativity as a symmetry of something or other—if you want to be technical, a four-point Gromov-Witten invariant—so I think it qualifies.) The associativity is somehow not so bad to see in the algebro-geometric case (or perhaps this is just my bias as an algebraic geometer), but in symplectic geometry you really need some nontrivial analytic estimates at some point in the proof. And you get a lot out of it! Associativity of this quantum cohomology ring encapsulates a wealth of information on enumerative geometry counts associated to $M$; indeed, it was basically this idea that allowed Kontsevich to find his recursion for the number of degree $d$ curves through $3d + 1$ general points in $\mathbb{P}^2$. Finally, I kind of want to mention strange duality, even though that now really isn't an answer to the question, as you have to modify one side or the other; I'll just copy a very quick summary from the abstract to Belkale - The strange duality conjecture for generic curves: "For $X$ a compact Riemann surface of positive genus, the strange duality conjecture predicts that the space of sections of certain theta bundle on moduli of bundles of rank $r$ and level $k$ is naturally dual to a similar space of sections of rank $k$ and level $r$." The paper itself is a great place to learn more about it if you're interested! Arnav Tripathy Characters of affine Kac-Moody Lie algebras and Virasoro Lie algebra are modular forms. These modular symmetries are not that much evident from the definitions. Alexander Chervov In number theory, Terry Tao already mentioned Quadratic Reciprocity in his first comment, but there's also the reciprocity formula $$ s(b,c) + s(c,b) = \frac1{12}\left( \frac{b}{c} + \frac1{bc} + \frac{c}{b} \right) - \frac14 $$ for Dedekind sums, symmetrized further in Rademacher's formula $$ D(a,b;c) + D(b,c;a) + D(c,a;b) = \frac1{12} \frac{a^2+b^2+c^2}{abc} - \frac14. $$ [Here $D(a,b;c) = \sum_{n\,\bmod\,c} ((an/c)) ((bn/c))$, where $((\cdot))$ is the sawtooth function taking $x$ to $0$ if $x \in {\bf Z}$ and to $x - \lfloor x \rfloor - 1/2$ otherwise; and the Dedekind sum is the special case $s(b,c) = D(1,b;c)$.] Noam D. Elkies $\begingroup$ But I don't understand what is so special about this, at least in terms of symmetry: for about any function $s(\cdot,\cdot)$, including the Legendre symbol, $s(b,c)+s(c,b)$ or $s(b,c)s(c,b)$ is symmetric in $b$ and $c$. Where is the surprise? $\endgroup$ $\begingroup$ I think the point is that each of $s(b,c)$ and $s(c,b)$ is complicated, but once added together, one obtains an extremely simple formula. It's the simplicity of the right hand side rather than the symmetry. 
$\endgroup$ – Matt Young $\begingroup$ @Wolfgang asks a fair question. To add to Matt Young's answer, we can define $s'(b,c) = s(b,c) + 1/8 - b/12c - 1/24bc$, and then the reciprocity formula says that $s'(b,c)$ is antisymmetric: $s'(b,c) = -s'(c,b)$. $\endgroup$ – Noam D. Elkies $\begingroup$ @Matt: yes, that is exactly the point, and I guess that is also why Terry Tao's mention of Quadratic Reciprocity got so many "great comment" votes... Now if we started a thread about this kind of "simplicity", that one would be endless (not in a mathematical sense). $\endgroup$ $\begingroup$ @NoamD.Elkies Granted. That reminds me of the relation between $\zeta(1-s)$ and $\zeta(s)$, cast as $\Xi(1-s)=\Xi(s)$ with appropriate $\Xi$. $\endgroup$ Betti numbers: the symmetry $\dim(H^k(M^n))=\dim(H^{n-k}(M^n))$ does not immediately follow from the definition. $\begingroup$ Poincare duality (in the form you've stated it) comes from the local symmetries of $n$-manifolds (any point has a neighborhood homeomorphic to $\mathbb{R}^n$) and the global symmetry of $M$ (orientability)--this is not a property of Betti numbers, but rather of the underlying space. $\endgroup$ – Daniel Litt $\begingroup$ @DanielLitt, I know, I just don't want to deal with torsion, and for the purpose of this question Betti numbers' symmetry is sufficient. $\endgroup$ $\begingroup$ My point is that the symmetry does not come from the Betti numbers, but from the space $M$; I don't think this is an example of what the question asks for. $\endgroup$ $\begingroup$ There is a philosophy that the functional equation of a zeta function should be a consequence of Poincare duality on some exotic space. For zeta functions of varieties over finite fields, this was made rigorous in the 1960s, but over number fields it's still just a philosophy. So we have two non-obvious symmetries that are the same, but not obviously the same. In other words, we have a non-obvious symmetry between non-obvious symmetries. $\endgroup$ – JBorger In the definition of "Latin square" there is complete symmetry between the roles of "row", "column" and "symbol", so that any of the 6 permutations of that role produces another Latin square. Brendan McKay The Jordan-Kronecker function is defined by the infinite sum $$ F(x, y) = \sum_{n=-\infty}^\infty \frac{y^n}{1 - x q^n}, \quad |q|<|y|<1 $$ and, obviously, restrictions on $x$ to avoid poles. Surprisingly, $$ F(x, y) = F(y, x) = -F(-1/x,1/y). $$ answered Aug 20, 2018 at 3:49 I would like to add an example coming from the area of additive theory known as Freiman's structure theory. If I am not (too) blind, this has not been mentioned yet, and hopefully it qualifies as an appropriate answer. Assume that $\mathbb{A} = (A, +)$ is a (possibly non-commutative) semigroup, and let $X$ be a non-empty subset of $A$. Given an integer $n \ge 1$, we write $nX$ for $\{x_1+\cdots + x_n: x_1, \ldots, x_n \in X\}$. In principle, we have $1 \le |nX| \le |X|^n$, and for all $k \in \mathbb{N}^+$ and $i \in \{1, \ldots, k\}$ we can actually find a pair $(\mathbb{A}, X)$ such that $|X| = k$ and $|nX| = i$, with the result that, in general, not much can be concluded about the "structure" of $X$. However, if $|nX|$ is sufficiently small with respect to $|X|$ and $\mathbb{A}$ has suitable properties, then "surprising" things start happening, and for instance we have the following: Theorem. 
If $\mathbb{A}$ is a linearly orderable semigroup (i.e., there exists a total order $\preceq$ on $A$ such that $x + z \prec y + z$ and $z + x \prec z + y$ for all $x,y,z \in A$ with $x \prec y$) and $|2X| \le 3|X|-3$, then the smallest subsemigroup of $\mathbb{A}$ containing $X$ is abelian. This implies at once an analogous result by Freiman and coauthors which is valid for linearly ordered groups; see Theorem 1.2 in [F] (a preprint can be found here). I don't know of any similar result for larger values of $n$. [F] G. Freiman, M. Herzog, P. Longobardi, and M. Maj, Small doubling in ordered groups, to appear in J. Austr. Math. Soc. Salvo Tringali The "Little Prince" problem, which I learned from Greg Kuperberg, is a geometric answer to your question. Here is the problem: the Little Prince stands in (I do mean in, not on) the plane and wants to shape his planet from a given quantity of matter (of given density) in order to maximize the gravity he feels. The most efficient way to go is to shape the planet as a round disk. The problem has a particular point, the position of the Little Prince, but turns out to have a symmetric solution. Note that the same problem in higher dimension does not have a symmetric solution. Let me add two points that make this example all the more interesting: first, the result still stands if the Little Prince is also authorized to shape the space (rather than just the surface) he lives in, with the constraint that it should have nonpositive curvature and be simply connected: he should still make the planet a round flat disk. Second, if one takes a general domain and integrates the inequality between the felt gravity and the optimal gravity, one gets the isoperimetric inequality. Benoît Kloeckner Some categories are self-dual in ways not obvious from their definitions. One good example is Pontryagin duality, which states that the category of locally compact Hausdorff abelian groups is self-dual, via the taking of continuous character groups. Another is Connes' cyclic category. It is not obvious that this particular melding of the simplex category (of nonempty finite ordinals) and cyclic groups would result in a self-dual category, and in fact this property would fail if the definition were tweaked just slightly (say by working with all finite ordinals). answered Aug 18, 2018 at 16:47 Todd Trimble
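The reciprocity formula for Dedekind sums quoted a few answers above is easy to sanity-check numerically. The sketch below is my own (not from the thread); it evaluates $s(b,c)$ exactly with Python's `fractions` module and compares both sides of the reciprocity formula for a few pairs, assuming the classical hypothesis $\gcd(b,c)=1$.

```python
from fractions import Fraction
from math import gcd

def sawtooth(x: Fraction) -> Fraction:
    """((x)): 0 if x is an integer, else x - floor(x) - 1/2."""
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def dedekind_sum(b: int, c: int) -> Fraction:
    """s(b, c) = sum over n mod c of ((n/c)) * ((b*n/c))."""
    return sum(sawtooth(Fraction(n, c)) * sawtooth(Fraction(b * n, c))
               for n in range(c))

# Check s(b,c) + s(c,b) = -1/4 + (1/12)(b/c + 1/(bc) + c/b) for coprime b, c.
for b, c in [(3, 7), (5, 12), (13, 31)]:
    assert gcd(b, c) == 1
    lhs = dedekind_sum(b, c) + dedekind_sum(c, b)
    rhs = Fraction(-1, 4) + Fraction(1, 12) * (Fraction(b, c) + Fraction(1, b * c) + Fraction(c, b))
    print(b, c, lhs, rhs, lhs == rhs)
```

Because everything is done with exact rationals, the two sides agree exactly, which also makes Elkies' remark visible: the "complicated" sums collapse to a very simple rational expression.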
Oliver Stockdale

Entanglement detection in quantum fields

Last updated on Dec 15, 2021 Credit: Scixel/TU Delft.

Entanglement is an inherently quantum phenomenon with no classical counterpart. It has baffled Einstein, Heisenberg, and nearly every physicist since. At its most fundamental level, entanglement means that a combined description of many subsystems cannot be written as a product of the subsystems. This is best demonstrated with an example. Consider two spin-$\frac{1}{2}$ particles, such as two electrons, whose total spin is zero. That is, if one were to measure the spin of particle A to be spin up, we know the second (without even measuring it) would be in the spin down state. The wave function of the combined system would then be $$|\psi\rangle_{AB} = \frac{1}{\sqrt{2}}\left[|\hspace{-0.1cm}\uparrow\rangle_A|\hspace{-0.1cm}\downarrow\rangle_B + |\hspace{-0.1cm}\downarrow\rangle_A|\hspace{-0.1cm}\uparrow\rangle_B\right].$$ This system is entangled because the state is not separable, i.e., $|\psi\rangle_{AB} \neq |\psi\rangle_{A}\otimes|\psi\rangle_{B}$. In this example, the measurements of the two spins are perfectly (anti-)correlated.

Why bother studying entanglement?

The study of entanglement is a popular research topic within the physics community for many reasons. In terms of practical uses, it has gained significant interest within quantum information theory. Entanglement is a crucial resource for quantum computing, quantum communication, and quantum key distribution, all of which are promising future technologies that are widely believed to advance and outperform our current capabilities. Beyond its practical uses, entanglement has attracted interest at a fundamental physics level. It's thought to provide insight into areas such as statistical mechanics and quantum field theory (see, e.g., Amico et al., Rev. Mod. Phys. 80, 517 (2008) for a comprehensive review).

Into the many-body regime

For the simple two-particle system discussed above, determining whether it is entangled is relatively straightforward and there are many methods for witnessing the entanglement. However, in the many-body limit, where there are tens of thousands of particles, detecting entanglement becomes increasingly difficult. One method that has been identified as a promising witness is entropic uncertainty relations (see, e.g., Coles et al., Rev. Mod. Phys. 89, 015002 (2017) for a review). Closely related to Heisenberg's uncertainty principle, these relations bound from below the entropies of measurements of incompatible observables. Consider a system with two observables $\hat{X}$ and $\hat{Z}$. For a finite-dimensional system, the entropic uncertainty relation is $$H(\hat{X}) + H(\hat{Z}) \geq -\log_2 c + S(\hat{\rho}),$$ where $H(\hat{X})$ is the Shannon entropy of the measurement outcomes, $c$ is the maximum overlap between any two eigenvectors of the observables, and $S(\hat{\rho})$ is the von Neumann entropy of the quantum state $\hat{\rho}$. In the case of two systems, like the spin-$\frac{1}{2}$ example, we can extend the above relation to account for the bipartite nature. The corresponding entropic uncertainty relation reads $$H(\hat{X}_A|\hat{X}_B) + H(\hat{Z}_A|\hat{Z}_B) \geq -\log_2 c + S(\hat{\rho}_A|\hat{\rho}_B).$$ Here, $H(\hat{X}_A|\hat{X}_B)$ is the conditional entropy: the entropy of the outcome of $\hat{X}_A$ given the outcome of the corresponding measurement on system $B$. For a separable state (i.e., not entangled), the conditional von Neumann entropy satisfies $S(\hat{\rho}_A|\hat{\rho}_B) \geq 0$.
Therefore, if the quantity $-H(\hat{X}_A|\hat{X}_B) - H(\hat{Z}_A|\hat{Z}_B) - \log_2 c$ is positive, the state cannot be separable, so this quantity serves as an entanglement witness.

Entanglement in Heidelberg

We aim to apply the concept of entropic uncertainty relations to witness entanglement in Rubidium-87 spinor Bose-Einstein condensates (see, e.g., Kunkel et al., Science 360 (2018) for experimental details on entanglement generation). In collaboration with experiments, we perform numerical simulations and analytical calculations to better understand entanglement within these systems. Our research applies entropic uncertainty relations to these systems with the hope of better constraining entanglement measurements and improving upon current theories. The two central aims of the project are:

- Use entropic uncertainty relations to understand the evolution of entanglement out of the Gaussian regime
- Investigate the behaviour of entanglement in a system where multiple spatial modes of the condensate are excited

An infographic (aimed at a general audience) for the first point can be found here. More detailed information regarding our specific work on entanglement witnessing can be found here.
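As a toy illustration of the witness described above (my own sketch, not code from the group), the following evaluates $-H(\hat{X}_A|\hat{X}_B) - H(\hat{Z}_A|\hat{Z}_B) - \log_2 c$ for the two-qubit state $(|\!\uparrow\downarrow\rangle + |\!\downarrow\uparrow\rangle)/\sqrt{2}$, with both spins measured in the Pauli-$Z$ and Pauli-$X$ bases. For these two bases the maximum overlap is $c = 1/2$, and a positive value of the witness certifies entanglement.

```python
import numpy as np

def shannon(p):
    """Shannon entropy (base 2) of a probability array, ignoring zeros."""
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(joint):
    """H(A|B) = H(A,B) - H(B) for a joint outcome distribution p(a, b)."""
    return shannon(joint.ravel()) - shannon(joint.sum(axis=0))

# Two-qubit state (|01> + |10>)/sqrt(2), with |0> = spin up, |1> = spin down.
psi = np.zeros(4)
psi[1] = psi[2] = 1 / np.sqrt(2)

z_basis = np.eye(2)                                  # eigenbasis of sigma_z
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # eigenbasis of sigma_x

def joint_probs(basis):
    """p(a, b) when both qubits are measured in the given single-qubit basis."""
    p = np.zeros((2, 2))
    for a in range(2):
        for b in range(2):
            proj = np.kron(basis[:, a], basis[:, b])
            p[a, b] = abs(proj.conj() @ psi) ** 2
    return p

c = max(abs(z_basis.conj().T @ x_basis).ravel()) ** 2   # maximum overlap, here 1/2
witness = (-conditional_entropy(joint_probs(x_basis))
           - conditional_entropy(joint_probs(z_basis))
           - np.log2(c))
print(f"witness = {witness:.3f}  (> 0 certifies entanglement)")
```

Both conditional entropies vanish because the outcomes are perfectly correlated in each basis, so the witness evaluates to $-\log_2(1/2) = 1 > 0$, flagging the state as entangled, as expected.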
Seminar calendar for events the day of Thursday, September 20, 2018.

11:00 am in 241 Altgeld Hall, Thursday, September 20, 2018 Number Theory Seminar Maass forms and the mock theta function f(q) Scott Ahlgren (Illinois Math) Abstract: Let f(q) be the well-known third order mock theta function of Ramanujan. In 1964, George Andrews proved an asymptotic formula for the Fourier coefficients of f(q), and he made two conjectures about his asymptotic series (these coefficients have an important combinatorial interpretation). The first of these conjectures was proved in 2009 by Bringmann and Ono. Here we prove the second conjecture, and we obtain a power savings bound in Andrews' original asymptotic formula. The proofs rely on uniform bounds for sums of Kloosterman sums which follow from the spectral theory of Maass forms of half integral weight and in particular from a new estimate which we derive for the Fourier coefficients of such forms. This is joint work with Alexander Dunn. Submitted by sahlgren

12:00 pm in 243 Altgeld Hall, Thursday, September 20, 2018 Geometry, Groups and Dynamics/GEAR Seminar The generalization of the Goldman bracket to three manifolds and its relation to Geometrization Moira Chas (Stony Brook) Abstract: In the eighties, Bill Goldman discovered a Lie algebra structure on the free abelian group with basis the free homotopy classes of closed oriented curves on an oriented surface S. In the nineties, jointly with Dennis Sullivan, we generalized this Lie algebra structure to families of loops (defining the equivariant homology of the free loop space of a manifold). This Lie algebra, together with other operations in spaces of loops is now known as String Topology. The talk will start with a discussion of the Goldman Lie bracket in surfaces, and how it "captures" the geometric intersection number between curves. It will continue with the description of the string bracket, which generalizes the Goldman bracket to oriented manifolds of dimension larger than two, and the space of families of loops where the string bracket is defined. The second part of the lecture describes how this structure in degrees zero and one plus the power operations in degree zero recognizes key features of the Geometrization, the above mentioned joint work. The lion's share of effort concerns the torus decomposition of three manifolds which carry mixed geometry. This is joint work with Siddhartha Gadgil and Dennis Sullivan. Submitted by clein

2:00 pm in 243 Altgeld Hall, Thursday, September 20, 2018 Analysis Seminar Three and a half asymptotic properties Ryan Causey (Miami University Ohio) Abstract: We introduce several isomorphic and isometric properties related to asymptotic uniform smoothness. These properties are analogues of p-smoothability, martingale type p, and equal norm martingale type p. We discuss distinctness, alternative characterizations, and renorming theorems for these properties. Submitted by aimo

Integrability and Representation Theory (IRT/AGC) Wreath Macdonald polynomials as eigenstates Joshua Wen (University of Illinois) Abstract: Proposed by Haiman, wreath Macdonald polynomials are distinguished bigraded characters of the wreath product $\Sigma_n\wr \mathbb{Z}/\ell\mathbb{Z}$ generalizing the usual (transformed) Macdonald polynomials.
Their existence was proved in 2014 by Bezrukavnikov-Finkelberg via a generalization of Haiman's proof of Macdonald positivity. Little else is known about them, and thus anyone trying to develop analogues for the rest of the 'Macdonald package' (Macdonald operators, DAHA, Pieri rules, evaluation formulas, refined topological vertex, refined knot invariants, etc.) is in the strange position of only having Macdonald positivity as a starting point. I'll present work on a necessary ingredient for many of these structures: that the wreath Macdonald polynomials diagonalize something. This something is a commutative subalgebra of the quantum toroidal algebra of $\mathfrak{sl}_\ell$. While the proof is still incomplete, it already involves a wide range of techniques from quantum algebra and partition combinatorics that might be of independent interest. Submitted by rinat

Computer Driven Questions, Theorems and Pre-theorems in Low Dimensional Topology Moira Chas (Stony Brook University) Abstract: Consider an orientable surface S with negative Euler characteristic, a minimal set of generators of the fundamental group of S, and a hyperbolic metric on S. Then each unbased homotopy class C of closed oriented curves on S determines three numbers: the word length (that is, the minimal number of letters needed to express C as a cyclic word in the generators and their inverses), the minimal geometric self-intersection number, and finally the geometric length. Also, the set of free homotopy classes of closed directed curves on S (as a set) is the vector space basis of a Lie algebra discovered by Goldman. This Lie algebra is closely related to the intersection structure of curves on S. These three numbers, as well as the Goldman Lie bracket of two classes, can be explicitly computed (or approximated) using a computer. These computations led us to counterexamples to existing conjectures, to formulate new conjectures and (sometimes) to subsequent theorems. Submitted by kapovich
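The third-order mock theta function in the first abstract is conventionally defined by the q-series $f(q) = 1 + \sum_{n\geq 1} q^{n^2} / \big((1+q)^2(1+q^2)^2\cdots(1+q^n)^2\big)$, and its coefficients carry the combinatorial (rank) interpretation alluded to there. As a small illustration of what "Fourier coefficients of f(q)" means concretely (my own sketch, not part of the seminar material), the following computes the first coefficients by truncated power-series arithmetic in plain Python.

```python
N = 20  # work modulo q^(N+1)

def mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                out[i + j] += ai * bj
    return out

def inverse(a):
    """Reciprocal of a truncated series with constant term a[0] = 1."""
    inv = [0] * (N + 1)
    inv[0] = 1
    for n in range(1, N + 1):
        inv[n] = -sum(a[k] * inv[n - k] for k in range(1, n + 1))
    return inv

f = [0] * (N + 1)
f[0] = 1                                   # the n = 0 term of the series
denom = [0] * (N + 1)
denom[0] = 1                               # running product ((1+q)...(1+q^n))^2
for n in range(1, N + 1):
    if n * n > N:
        break
    sq = [0] * (N + 1); sq[0] = 1; sq[n] = 1   # the factor 1 + q^n
    denom = mul(denom, mul(sq, sq))            # multiply in (1 + q^n)^2
    term = inverse(denom)                      # q^(n^2) / ((1+q)...(1+q^n))^2
    for k in range(N + 1 - n * n):
        f[k + n * n] += term[k]

print(f)   # coefficients of 1, q, q^2, ... of Ramanujan's f(q)
```

The output begins 1, 1, -2, 3, -3, 3, ..., the coefficient sequence whose asymptotics Andrews' formula (and the talk's improvement of it) describes.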
If gravity is additive, then how does it cancel itself out? I understand that gravity, as far as we know, is always attractive. Also, it has additive qualities - i.e. the size and strength of the field are proportional to the quantities of mass. This seems to counteract the idea that gravity can cancel itself out. The centre of the Earth is said to be a zero-g environment, yet it is in the midst of a whole load of mass. I guess this makes sense when thinking about the mass as pulling equally from all directions... Which leads on to two questions. If opposing masses can effectively cancel each other out, does this mean Gravity is not always additive? Is spacetime geometrically indistinguishable in an area of zero-g, lets say, between galaxies, and in the centre of very massive bodies, like a planet? What I mean here is, can you tell that there are strong gravitational forces pulling you in all directions as opposed to weak ones? newtonian-gravity vectors AmphibioAmphibio $\begingroup$ Its additive in the sense that F(two masses)=F(one mass alone) + F(the other mass alone). Say im in the middle and one mass to each side of me. both would attract me but the net force is zero, this does not conflict with additivity $\endgroup$ – Bort Feb 10 '16 at 11:00 $\begingroup$ Opposing forces can cancel out. Does this mean forces are not additive? Of course not - forces add as vectors, by the parallogram rule. $\endgroup$ – Peter Diehr Feb 10 '16 at 11:02 $\begingroup$ Gravity is not even a force, even though we formulate it that way. The reason why this matters is because while forces are being characterized by a charge (electric charge, magnetic moments etc.), mass does not play this role for gravity. That is why there are no negative masses. While in the Newtonian limit gravity looks very similar to a force, in general relativity it behaves very differently and becomes both non-linear and non-conservative (because of gravitational waves), which leads to consequences that take the form of geometric and thermodynamic properties. $\endgroup$ – CuriousOne Feb 10 '16 at 11:19 $\begingroup$ Second part of question is much interesting than first. You should have posted question separately. $\endgroup$ – Anubhav Goel Feb 10 '16 at 14:33 You need to keep the direction in mind. While the direction never makes the gravity negative, adding opposite directions will cancel out. I don't know if you are familiar with vectors? The length of a vector is always positive (strength of gravity) but it also has a direction (direction of gravity). If you add two vectors of equal length ("strength") but with opposite direction they cancel out. RHawkeyedRHawkeyed Gravity is always additive. You can never add mass and reduce the gravitational force created. What does happen is that gravitational forces in opposite directions can cancel one another out, leaving 0 net force. If gravity wasn't always additive, it would mean that it would be possible for gravity to push things, instead of always pulling. This is not the case. It seems the confusion here is stemming from vector addition - adding a 1N force pulling left to a 1N force pulling right gets you a 0N net force. The forces are still added together, it just so happens that the opposite directionality reduces the magnitude of the final net force. Not sure what you mean by this question. The center of a planet is a 0 G environment, but spacetime is still "geometrically distinguishable". 
Moving away from the center in any direction causes you to feel a pull back toward the center. It's not like directionality or distance loses meaning in a 0 G environment. Nuclear WangNuclear Wang $\begingroup$ 1. but, semantically, if something can cancel itself out, that means it is not always additive. Always attractive, yes. 2. I edited the question above to make it more clear. $\endgroup$ – Amphibio Feb 10 '16 at 11:00 $\begingroup$ Additive allows for negative numbers 1+(-1)=0. That is still additive. If there are two identical planets and you are standing in the middle between them the force on you is zero BECAUSE the gravitational effects are additive. It's simply because the pull from one is minus the pull from the other. $\endgroup$ – Ymareth Feb 10 '16 at 11:17 $\begingroup$ @ymareth, does this mean gravity has polarity? i.e. 1 and -1? I think I'm going to ask a new question related to this $\endgroup$ – Amphibio Feb 10 '16 at 11:21 $\begingroup$ Directionality, not polarity. You can define "to the right" as the positive direction, and "to the left" as the negative direction. If you have 1N pulling you right, it's a force of 1. A 1N force pulling you left is a force of -1. Adding them up gets you 1 + (-1) = 0. Look into vector addition - forces aren't just numbers, they're numbers with an associated direction. $\endgroup$ – Nuclear Wang Feb 10 '16 at 11:24 $\begingroup$ Put it another way: if we assume gravity has polarity (which it doesn't as far as every experiment ever performed goes), in your example which of the masses is "pushing"? Neither, they're both "pulling". (Quotes used as "pull" and "push" aren't completely accurate terms to use.) That's the whole point. They're both pulling in opposite directions, but they're PULLING. You're suggesting gravity pushes in some circumstances. $\endgroup$ – The Geoff Feb 10 '16 at 13:41 I suspect you're getting confused about the difference between the gravitational potential energy and the gravitational force. Potential energy is always additive, however in electromagnetism it can have different signs while in gravity it cannot. With electromagnetism the potential energy can be positive or negative so it is possible for the potential energy from two sources to cancel out and be zero. By contrast, in gravity the potential energy is always negative so combining any two sources can only ever decrease the gravitational potential energy. The only way for gravitational potential energy to cancel would be if negative matter existed. In both cases the force is the gradient of the potential energy: $$ \mathbf{F} = \nabla V $$ and the force can in principle have any magnitude and point in any direction. John RennieJohn Rennie Take your example to an extreme to see what happens - sit yourself just between two black holes. There will always be a (Lagrange) point where you're not pulled towards one or the other, but there will be pulls from either side...at an extreme, you'll be ripped in two. "Cancelling out gravity" is about gradients - remember, the gradient on a graph can be zero even for vastly different Y-values. As an analogy, you're asking: "I was at sea level and the ground was flat. Then I went up a mountain and it was flat at the top. How can the top of a mountain be at sea level?" The GeoffThe Geoff $\begingroup$ I think this is at least misleading. The fact that you will be ripped apart is due to the fact that you are an extended object. The forces do NOT cancel all over your body, only in the Lagrange point. 
For a point mass sitting at the Lagrange point, the forces cancel perfectly and there is no way of telling that 2 BHs pull on you. No forces means you don't feel anything; it doesn't matter if two forces cancel or there aren't any in the first place $\endgroup$ – Noldig Feb 10 '16 at 13:21 $\begingroup$ Yup, fair comment. Although if we take the argument to the extreme, there are no point masses, and we hit the problem of describing quantum gravity, which is a little non-trivial ;) $\endgroup$ – The Geoff Feb 10 '16 at 13:24 There are two elements you need to keep in mind: direction and intensity. Direction is obvious; the intensity is the length of the vector. The fact that gravity is always additive means that aligned forces are additive, therefore the lengths, if aligned and pointing in the same direction, will add up. There is no repulsive force of gravity; gravity is always attractive. However you need to learn how to add up vectors that are not aligned. There is a rule to add vectors (and I emphasise, add, since gravity is always additive) and if two vectors are aligned, with the same intensity, and pointing in opposite directions, their sum will be zero. In general, given two vectors, their sum is given by a third vector obtained by forming a parallelogram with equal opposite edges corresponding to the two vectors. The sum will then be given by the diagonal drawn in this parallelogram. (1) Gravity is not additive, it is constant, but mass and acceleration due to gravity are additive. (2) I don't think you can tell the difference by force but maybe from time dilations. Bill Alsept
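To make the vector-addition point from these answers concrete, here is a small sketch of mine (with made-up masses and positions) that sums the Newtonian gravitational accelerations produced by two equal point masses. At the midpoint the two pulls are equal and opposite, so the net vector is zero even though each individual pull is not; off-centre, the vectors no longer cancel.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel_from(mass, source, point):
    """Newtonian gravitational acceleration at `point` due to a point mass at `source`."""
    r = source - point                       # vector from the test point to the mass
    return G * mass * r / np.linalg.norm(r) ** 3

m = 5.0e24                                    # two equal masses (kg)
left = np.array([-1.0e7, 0.0])                # positions in metres
right = np.array([1.0e7, 0.0])

for label, p in [("midpoint", np.array([0.0, 0.0])),
                 ("off-centre", np.array([4.0e6, 0.0]))]:
    g1 = accel_from(m, left, p)
    g2 = accel_from(m, right, p)
    print(f"{label}: g1 = {g1}, g2 = {g2}, net = {g1 + g2}")
```

Each contribution is an attraction (it always points toward its source mass); the cancellation at the midpoint comes purely from adding two oppositely directed vectors, which is the sense in which gravity stays "additive" even where the net field is zero.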
Program To Calculate Area Of Circle And Square Using Function In Python The program should compute and display the circumference and area of that circle on the screen with four decimal places of accuracy. Related Examples C program to calculate the area of square with and without using function. This has the benefit of meaning that you can loop through data to reach a result. Demonstrates how to complete the square to find the center and radius of a circle. computer science questions and answers. The Python area of a circle is number of square units inside the circle. See All Calculators. Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be formally defined independently of geometry using limits—a concept in calculus. Write a C++ program to find Area of square,rectangle,circle and triangle using Function Overloading. There is another formula with circumference, A=c2/4π, that can be used for finding the area. Hence, the formula is: Area = Length x Width. They are the most basic structure of a program, and so Python provides this technique for code re-use. It must then prompt the user for appropriate input. Create a method called as area which returns the area of the class and a method called as perimeter which returns the perimeter of the class. Specifically, we'll be using the OpenCV contours functionality and the findContours function in the cv2 package. channels: it is also given in as a list []. December 14, 2016. Write a C++ program to calculate the area of triangle, rectangle and circle using function overloading. 2f" % area). 2, Calculate the height of ΔAOB i. WriteaC++programtofindareaof triangle,circle,andrectangleusing functionoverloading. Thus, the message above can be printed out more. Write a python program to find area of circle using radius, circumstance and diameter. The basic idea of psychomatrix is that the date of birth has a certain combination of numbers, with the help of which the. pi * radius ** 2 Note that your function doesn't use or need a myarea argument. Here's a practical example of using trigonometry with arcs and chords. computer science questions and answers. Create a sequence of numbers from Write a Python function to create and print a list where the values are square of numbers Details: Previous: Write a Python function to calculate the factorial of a number (a non-negative integer). Start studying Introduction to Programming - Chapter 5. Now, Convex Hull of a shape is the tightest convex shape that completely encloses the shape. And data can be of different types like numerical value, string, image, etc. We can't use reserved keywords as the I have been working on Python for more than 5 years. h > // for input/output functions # include < math. Find step by step code solutions to sample programming questions with syntax and structure for lab practicals and assignments. Use this Class to store two double type values that could be used to compute areas. China Promotional Calculators Promotional Calculators-021D is supplied by Promotional Calculators manufacturers, producers, suppliers on Global Sources. Coordinate system changes are done with the transform. A Circle has a radius, a Square has a side, and a Rectangle has height and width. on the other hand we can say that a sphere is a set of purposes that area unit all at constant distance r from a given point. This is the first mini-project I have completed by myself with Python 2. Sin (X) = X. Floating Point or Real Numbers. 
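The fragments above never assemble into a working program, so here is a minimal, self-contained sketch of what the page seems to be describing: plain Python functions that return the area of a square from its side and the area of a circle from its radius, its diameter, or its circumference (the A = c²/4π formula quoted above). This is my own illustrative code, not the original tutorial's.

```python
import math

def square_area(side):
    """Area of a square from its side length."""
    return side * side

def circle_area_from_radius(radius):
    """A = pi * r**2."""
    return math.pi * radius ** 2

def circle_area_from_diameter(diameter):
    """A = pi * d**2 / 4."""
    return math.pi * diameter ** 2 / 4

def circle_area_from_circumference(circumference):
    """A = c**2 / (4 * pi), the formula quoted in the text."""
    return circumference ** 2 / (4 * math.pi)

if __name__ == "__main__":
    print("Square, side 3:", square_area(3.0))
    print("Circle, radius 2: %.4f" % circle_area_from_radius(2.0))
    print("Circle, diameter 4: %.4f" % circle_area_from_diameter(4.0))
    print("Circle, circumference 12.566: %.4f" % circle_area_from_circumference(12.566))
```

The three circle functions agree on the same circle (radius 2, diameter 4, circumference 4π ≈ 12.566), which is a quick way to check the formulas against each other.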
Use the calculator below to calculate the segment area given the radius and segment's central angle, using the formula described above. and its function on the same set of axes. 5142857142857142 gallons Cans needed: 1 can(s) (4) Extend by prompting the user for a color they want to paint the walls. But Python has a built-in document function for every built-in functions. Kalahari Waterpark 43 properties. Answer (1 of 3): Function overloading allows to use the same function name for different functions. Enter the radius: 1 The area of circle is: 3. Online C++ functions programs and examples with solutions, explanation and output for computer science and information technology students pursuing BE, BTech, MCA, MTech, MCS, MSc, BCA, BSc. Practice Problems & examples. Calculate the Area and Perimeter for shapes using functions in Python. It is expected that students understand the foundations of. Earn XP, unlock achievements and level up. Square millimeters (mm2) Square centimeters (cm2) Square decimeters (dm2) Square meters (m2) Square feet (sqft) Square inches Square miles Square If the calculation did not give you the result you expected, please write which values you used and what you expected the calculation to do. Area Explorer is one of the Interactivate assessment explorers. This linear function has slope. With using excel we can find circumference of a circle. The average of a list can be done in many ways i. ToInt32(Console. pi * r * r); return round (area, 2) # using formula area(cylinder) = 2πrh + 2πr^2 def surfaceAreaCylinder(r,h): area = (2 * math. In this program, we are going to learn how we can calculate the surface area and volume of the cylinder in python. We need math library to get the value of PI, so it is imported at the top of the program. Python source code is also available under GNU General Public License (GPL). Area of a circle. So the ratio of the area of the circle to the area of the square will be pi/4. Calculate the Area of a Rectangle: 3. (RUN TIME POLYMORPHISM) C++ : Create a base class Shape. The approximation on each interval gives a distinct portion of the solid and to make this clear each portion is colored differently. Find the Area of a circle in python : Python tutorial 28. The Python area of a circle is number of square units inside the circle. The constructor of the class initiates these two attributes using __init__ function. Below program uses pow function. A Circle has a radius, a Square has a side, and a Rectangle has height and width. Cute ruler shaped outlook. , the minimization proceeds with respect to its first If the argument x is complex or the function fun returns complex residuals, it must be wrapped in a real function of real arguments, as shown at the end of. Examples of lines, circle, rectangle, and path. This is good for testing one line of code. The default number of decimals is 0, meaning that the function will return the nearest integer. Making Strings Upper and Lower Case The functions str. Comparing declarative and imperative programming. To draw a circle using Matplotlib, the line of code below will do so. The constructor of the class initiates the attribute using the __init__ function. AUC-ROC curve is one of the most commonly used metrics to evaluate the performance of machine learning algorithms particularly in the cases. In first line, we're importing math module, then in next line, taking input from user. c = c def. Finally, the program will calculate and print out the sum of all odd numbers and even numbers in the list. 
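One fragment above breaks off in the middle of a cylinder function (`def surfaceAreaCylinder(r,h): area = (2 * math.` ...). A completed version of what that snippet appears to be computing, using the formula 2πrh + 2πr² that the same fragment quotes, might look like the sketch below; the volume function is added for the "surface area and volume of the cylinder" sentence. Names and rounding are my own choices, not the original author's.

```python
import math

def cylinder_surface_area(r, h):
    """Total surface area of a closed cylinder: 2*pi*r*h + 2*pi*r**2."""
    area = 2 * math.pi * r * h + 2 * math.pi * r ** 2
    return round(area, 2)

def cylinder_volume(r, h):
    """Volume of a cylinder: pi * r**2 * h."""
    return round(math.pi * r ** 2 * h, 2)

print(cylinder_surface_area(3, 5))   # 150.8
print(cylinder_volume(3, 5))         # 141.37
```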
Calculate Area of Rectangle in Python. Contents in Detail xi 5 Playing with sets anD Probability 121 What's a Set?. A RegEx, or Regular Expression, is a sequence of characters that forms a search pattern. Python Programming. println ("Area of rectangle : "+a. Specifically, we'll be using the OpenCV contours functionality and the findContours function in the cv2 package. (RUN TIME POLYMORPHISM) C++ : Create a base class Shape. An Introduction to Python and JES. pi * radius ** 2 # The circumference function accepts a circle's # radius and returns the circle's circumference. A method "area" is created to calculate the area of the given rectangle, which. 82, perimeter = 22. It too can be justified by a double integral of the constant function 1 over the disk by reversing the order of integration and using a change of. 14, r=3 (unit is missing). The following are the dimensions that the user is asked to input: the side length, the diagonal, the area (measured in units squared). Auto power-off function. The method to use this function is as follows SQR(number) and used to calculate the square root of a given number in excel; however, the nomenclature is different. Utilizing the hatch command outputs a filled object that can then be utilized to find this; however, I need a command or lisp routine that allows me to calculate the area between these objects. 14 - Pi value using System; using System. In this tutorial, we'll go through how to make a simple command-line calculator program in Python 3. pow(side1, 2) + Math. To calculate the circumference of the Next, multiply the squared radius by pi to get the area. Object-oriented programming in Python shows how to work define, create, and work with objects in Python. This Calculate Circle Area using Java Example shows how to calculate area of circle using it's radius. Second Way to Denote a String Literal in Python One way in which python differs from other langauges is that it provides two ways to specify string literals. Utilizing the hatch command outputs a filled object that can then be utilized to find this; however, I need a command or lisp routine that allows me to calculate the area between these objects. It's like Duolingo for learning to code. Online C++ functions programs and examples with solutions, explanation and output for computer science and information technology students pursuing BE, BTech, MCA, MTech, MCS, MSc, BCA, BSc. Once you have a reliable validation of the. Following python program ask from user to enter side length of square to print area of the square: # Python Program - Calculate Area of Square print ("Enter 'x' for exit. Vector Calculator: add, subtract, find length, angle, dot and cross product of two vectors in 2D or 3D. How to calculate area and volume of a sphere in Python Sphere: A sphere is defined as the perfect geometrical shape in 3D space that looks like a completely round ball. Calculate Area & Volume of Sphere. py) or use. Convexity : A picture is worth a thousand words. Python Program - Hypotenuse Using Pythagorean Theorem: Simple Python program using functions to calculate the hypotenuse of a triangle using the Pythagorean Theorem. Filled Area Chart¶. The shape of the bases is the circle. Programming. Solve various attributes of different types of triangles. Area of a cyclic quadrilateral. Area of a regular polygon. Each variable i only exists when the computer is executing the given function. 
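The last fragment above ("Calculate Area & Volume of Sphere") never shows any code. A minimal sketch of mine, assuming the usual formulas 4πr² for the surface area and (4/3)πr³ for the volume:

```python
import math

def sphere_surface_area(r):
    """Surface area of a sphere: 4 * pi * r**2."""
    return 4 * math.pi * r ** 2

def sphere_volume(r):
    """Volume of a sphere: (4/3) * pi * r**3."""
    return 4 / 3 * math.pi * r ** 3

r = 2.0
print(f"r = {r}: area = {sphere_surface_area(r):.4f}, volume = {sphere_volume(r):.4f}")
```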
C Program for Beginners : Area of Circle Shape : Circle Formula : Π * r * r Definition : Ellipse in which the two axes are of equal length Plane curve generated by one point moving at a constant distance from a fixed point You can compute the area of a Circle if […]. Convexity : A picture is worth a thousand words. I have to create a function to calculate the area, perimeter, and volume of a rectangle, prompt user to enter height, width, and length, and write out inputs and outputs using document. o Calculate the area of a circle. pi * radius. How to use this circle calc? This area of a circle calculator will help you determine the Even though we substituted it with trigonometric functions, especially sine, it's still good to know how to calculate its length In common use, squaring the circle is a metaphor of struggling with some difficult or even. you calculate the Area of the circle at the end of the Cylinder and then multiply it by the lenght to the second circle at the end of the cylinder Circle area Get radius (as parameter) Calculate area = pi x radius squared Return area The above assumes you write a function or method that calculates the. In the above program we are stating that the return variables will be area and e and the input parameters for the function ellipse_fun will be a and b. Functional programming in Python. Necessary cookies are absolutely essential for the website to function properly. #Program to draw spiral circles in Python Turtle import turtle. Amount to be paid Below is the example of the. We emphasize the concept of a data type and its implementation using Python's class mechanism. We will pass those values to the function arguments to calculate the area of a rectangle. It is the locus of all points in a plane at a constant distance, called the radius, from a fixed point, called the center. In Python any table can be represented as a list of lists (a list, where each element is in turn a list). pi * radius. This is my (straightforward) solution to calculating the area of hysteresis loops from cyclic testing using the R. How to Calculate the Square root in Python: Using sqrt() function; Using pow() function; A working example of Square root in Python; What is a Square root? The square root is any number y such that x 2 = y. Learn vocabulary, terms True/False: Unlike other languages, in Python, the number of values a function can return is The Python standard library's _____ module contains numerous functions that can be used in mathematical calculations. This example illustrates the use of the of the object-oriented feature known as polymorphism. A for loop is used to prevent the wrong. 37 Square : area = 85. 6: Python Program to print Natural Numbers Using Recursion(Hindi) Подробнее. Earlier schemes for approximating pi simply gave an approximate value, usually based on comparing the area or perimeter of a certain polygon with that of a circle. Input the the value of radius R 3. Tangent (X) = W. To draw a circle using Matplotlib, the line of code below will do so. Use this calculator to easily calculate the area of a circle, given its radius in any metric: mm, cm, meters, km, inches, feet, yards, miles, etc. computer science questions and answers. This calculator will find either the equation of the circle from the given parameters or the center, radius, diameter, area, circumference (perimeter), eccentricity, linear eccentricity, x-intercepts, y-intercepts, domain, and range of the entered circle. You need to specify the radius value in * program itself. 
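The paragraph above ends with "you can use the following code to get the area of the triangle" but the code itself is missing. A plausible minimal reconstruction (mine, not the original author's) using area = base × height / 2, in the interactive style the page's other snippets use:

```python
def triangle_area(base, height):
    """Area of a triangle from its base and height."""
    return 0.5 * base * height

base = float(input("Enter the base of the triangle: "))
height = float(input("Enter the height of the triangle: "))
print("Area of the triangle: %.2f" % triangle_area(base, height))
```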
This guide is for for students in CS101 at Boston University and covers the Python, Jython, and JES features that you'll use in CS101. This is the first mini-project I have completed by myself with Python 2. 24 Triangle : area = 21. In the Fibonacci example the memoize function was used to demonstrate using function arguments as a sequence, which works for immutable / hashable arguments. Python numpy. OP using Pythagoras theorem as given below: OP = √[r 2 –(AB/2) 2] if the length of AB is given. com https Python allows us to handle this kind of situation through function calls with an arbitrary number of In the function definition, we use an asterisk (*) before the parameter name to denote this kind of. Aggregation and grouping of Dataframes is accomplished in Python Pandas using "groupby()" and "agg()" functions. In general, the seaborn categorical plotting functions try to infer the order of categories from the data. 1415 up to fourth decimal places. Create a method called as area which returns the area of the class and a method called as perimeter which returns the perimeter of the class. How to calculate area and volume of a sphere in Python Sphere: A sphere is defined as the perfect geometrical shape in 3D space that looks like a completely round ball. The method to use this function is as follows SQR(number) and used to calculate the square root of a given number in excel; however, the nomenclature is different. The function cv2. The syntax used for the time module is actually the safer and more typical way to import a module. You can find the Python script for this process here. In Python OpenCV module, there is no particular function to adjust image contrast but the official documentation of OpenCV suggests an equation that can perform image brightness Then we need to calculate the x and y coordinates of the center of the image by using the moments that we got above. First of all name a class as "CircleArea" under Java I/O package and define and integer r=o, which is the radius of the circle. Write a program that displays the following menu. Python also accepts function recursion, which means a defined function can call itself. C Program to find the Area of Triangle using Base and Height. Examples of lines, circle, rectangle, and path. Minimum Enclosing Circle. C program to calculate area of circle In this article, we will illustrate c program to calculate area of circle. Find out the uses of Python map function to apply functions to objects in sequences. round( ) function in C returns the nearest integer value of the float/double/long double argument passed to this function. In case you have the diameter of the circle, the formula A=1/4πd2 has to be used. Your goal is to match the sample output below: Welcome to my area and perimeter calculator ===== Circle : area = 39. C Program to find the area of a circle using pow function. Plotting of line chart using Matplotlib Python library. Python program to find area of circle using function. If you have the base and height of the triangle, you can use the following code to get the area of the triangle,. There is also a search page for a number of sources of Python-related information. Calculate Area & Volume of Sphere. Now use try exception to handle errors and other exceptional events. What unit should you use? Give your program an appropriate name. In programming, you need to strore value in a variable to use it in your program. 
Python Statistics Tutoria - Python:p-value ,Python T-test, one sample and Two Sample T-test,Paired Sample T-test,correlation in Python, Python KS test. invert() method allows us to determine a scale function's input value given an output value (provided the scale function has a numeric domain). Enter the radius: 1 The area of circle is: 3. Dynamic Programming : The dynamic programming using optimal substructure and overlapping subproblems. You'll start with simple projects, like a factoring program and a quadratic-equation solver, and then create more complex projects once you've gotten the hang of things. } } Each concrete class: Circle, Square and Rectangle implements the Shape interface, so they must implement the area method (and they each do Then, the area is computed for each Shape object in the array. Use our online surface area of a circle calculator to find the circle surface area just by knowing the radius value. Solution to the problem: The equation of the circle shown above is given by x 2 + y 2 = a 2 The circle is symmetric with respect to the x and y axes, hence we can find the area of one quarter of a circle and multiply by 4 in order to obtain the total area of the circle. We've provided several Python programming examples here so that you can easily understand the logic. This must be done in a loop. The Python web site provides a Python Package Index (also known as the Cheese Shop, a reference to the Monty Python script of that name). Derivative Grapher Simple program to graph a deriv. pi * radius ** 2 # The circumference function accepts a circle's # radius and returns the circle's circumference. As a circle is not a regular figure so obviously we cannot use a ruler for calculating the circumference of the circle. Lambda functions can be used together with Python's built-in functions like map(), filter() etc. Show Instructions In general, you can skip the multiplication sign, so `5x` is equivalent to `5*x`. It's a utility function to quickly get the square of the matrix elements. 0, then the Area of Circle will be 28. Here we have passed 45 as a central angle. So, our program will ask the height and width from the user and calculate its area and perimeter using the above formulae. Total price before discount c. pi * radius. A rectangle can be divided into 4 similar square. Write a program that calculates the cost per square inch of a circular pizza given it's diameter and height. In Python any table can be represented as a list of lists (a list, where each element is in turn a list). HOME C C++ DS Java AWT Collection Jdbc JSP Servlet SQL PL/SQL C-Code C++-Code Java-Code Project Word Excel. January 3, 2011. Area of a Circle and its formula. pow() is a predefined function in math. Python program to find area of circle using function. In your drawing you have a different scenario. When working with GPS, it is sometimes helpful to calculate distances between points. C Program to find the area of a circle using pow function. To calculate area of a square in python, you have to ask from user to enter the side length of square to calculate and print the area of that square on the output screen as shown in the program given below. Using when three sides are given Basically, In order to calculate the area, you need to find out the Height of the triangle. We can't use reserved keywords as the I have been working on Python for more than 5 years. But we have to import math module to use the sqrt() method. Calculate and. A circle is a round, two-dimensional shape. 
You can calculate the area of a circle in Java by just writing a class and a method. def circumference (radius): return 2 * math. Calculators. compute (5,10)); a=cir; System. An Imaginary Number: To calculate the square root of an imaginary number, find the square root of the number as if it were a real number (without the i) and then multiply by the square. To calculate area of a square, we need length of any side of a square. Standard formula to calculate the area of a circle is: A=πr². File name: area. or (θ/2π) x (πR 2. If we know the radius value, diameter value or area value then we can calculate and find circumference of a circle. Here we gonna use the basic concept of vector, dot product to determine how closely two texts are similar by computing the value of We need to create two lambda functions, one to convert the text to arrays of numbers and the other one to compute the similarity between them. How to Randomly Select From or Shuffle a List in Python. Step 2: Defining a python function to plot the ROC curves. Through any three points not on the same line, there passes one and only one circle. The area of a circle is the number of square units inside that circle. 3D Programming In Python. The euclidean_division function to calculate online the quotient and the remainder in the euclidean division of two polynomials or two integers. For more details see appendix. It's easy to use and free. A calculator is an Electronic Hardware Device that is capable of doing very Mathmetical Calculations such as addition, multiplication, division and subtraction. To calculate area of rectangle in python, you have to ask from user to enter length and breadth of rectangle to calculate and print area of that rectangle on the output screen as shown in the program given below. Get absolute value without using abs function nor if. Specifically, we'll be using the OpenCV contours functionality and the findContours function in the cv2 package. Our simple method imports all functions available in the math module. While the categorical functions lack the style semantic of the relational functions, it can still be a good idea to vary the marker and/or linestyle along with the hue to make figures that are maximally. I want to calculate the area for every polygon. Python Tutorials. The lambda part is based on the use of the keyword lambda to define them in Python. solve() which solves a linear matrix equation, or system of linear scalar equation. See calculation formulas and definition of a truncated pyramid. Python is a widely used high-level dynamic programming language. py # area of square def square(x): return x * x # area of rectangle def rectangle(l, b): return l * b # area of circle def circle(r): return (22 / 7) * (r * r) Import a module. We'll be using only python and its official GUI, tkinter (so no official 3D engine will be used If the number of the rows is equal to that of the columns then we have a square (or quadratic) matrix. Cosine (X) = Z. So, the first thing we must do is import the matplotlib package. Python Program to find Area of a Rectangle using functions. I have to create a function to calculate the area, perimeter, and volume of a rectangle, prompt user to enter height, width, and length, and write out inputs and outputs using document. To calculate the area of a circle the standard formula is: Area = Pi R Square. The formula used to calculate the area is (π*r 2) or {(π*d 2)/4}. This Python area of rectangle program allows the user to enter the width and height of a rectangle. 
Using numeric variables and constants // 2. The formula for the area of the circle is : Area_circle = Π * r * r. These include trigonometric functions, representation functions, logarithmic functions, angle conversion functions, etc. 14 * radius ** 2). This means that a circle has a circularity of 1, circularity of a square is 0. forward(side) t. Square is a polygon with four equal sides in length and has four equal angles. And data can be of different types like numerical value, string, image, etc. In each function, you need to calculate the following items: a. Draw spiral square in Python Turtle #Python program to draw spiral square in turtle programming import turtle t = turtle. How to calculate the area and volume of the cylinder in Python. Functions in python are defined using the block keyword "def", followed with the function's name as the block's name. #include int main() { printf(" \t\tStudytonight - Best place to learn "); int h, b; float area; printf(" Enter the height of the Triangle: "); scanf("%d", &h); printf(" Enter the base of the Triangle: "); scanf("%d", &b); /* Formula for the. SQRT is a square root function in both excel and VBA. [email protected]! Geodesic area and length can also be calculated using geodesicArea and geodesicLength properties with @ followed. In geometry a circular section is a circle on a quadric surface such as an ellipsoid or hyperboloid it is a special plane section of the quadric as this circle is the. I had to make this method static because you cannot call a non-static method from a static context in Java. Chapter 2: Functions and Modules introduces modular programming. import math # The area function accepts a circle's radius as an # argument and returns the area of the circle. Volume of a. Understand and develop Tkinter Widgets and useful Apps such as calculators. This java example program also expain the concepts for Basic Programs. To write a program to find the multiplication values and the cubic values using the inline function. We then use the 9 circle template I created to calculate the Image points, which is the information we need for the perspective calculation. Surface area of cone calculated by using following formula: A = πr ( r + √(r 2 + h 2 ) ). solve() which solves a linear matrix equation, or system of linear scalar equation. pi * r * r) return round (area, 2) # Using formula perimeter(square) = 4 * Side def perimeterSquare(a): per = 4 * a. Then display the area in both square feet and square meters. 141592653589793. Next story C program to swap two numbers using functions. Learn how to calculate averages in Python using the "len" and "sum" methods, and the statistics module's With that in mind, we can use a few built-in Python functions to calculate averages without Our matching algorithm will connect you to job training programs that match your schedule. Plugging in 37. Here we have created a class named "Circle" that has an attribute radius. To draw a circle using Matplotlib, the line of code below will do so. 25 square feet Paint needed: 0. What's the difference between a function and a method? How can you write your own text-based adventure game using Python?. I then dive into the basics of working with first-class functions in Python, as well as the built-in functions and features in Python that support functional. These include trigonometric functions, representation functions, logarithmic functions, angle conversion functions, etc. Square Root Calculator that is quick and interactive. 
Each shape is referred to as a patch. Turtle() side = 200 for i in range(100): t. For python main function, we have to define a function and then use if __name__ == '__main__' condition to execute this function. This is a real-world situation where it pays to. How to write python program to find area of circle using radius, circumstance and diameter. Find step by step code solutions to sample programming questions with syntax and structure for lab practicals and assignments. round( ) function in C returns the nearest integer value of the float/double/long double argument passed to this function. The Python Package Index (PyPI) is a repository of software for the Python programming language. In this program, we are going to learn how we can calculate the surface area and volume of the cylinder in python. This is the first mini-project I have completed by myself with Python 2. sqrt() function is an inbuilt function in Python programming language that provides the square root of a given number. Hence formula is:. It is a very simple, friendly and easy to learn programming language. Solve various attributes of different types of triangles. But this value is actually one of the optional parameters we can pass to the circle function, which defaults to 1 pixel. What is Square and Square Root? How to Square a Number in Java. Convex Regular Polygons Looking at the. Before we start our treatize on possible Python representations of graphs, we want to The following Python function calculates the isolated nodes of a given graph. In previous exercise we learned to declare and use program with single user defined function. pi * r * h) + (2 *math. Mathematical formula: Area of a triangle = (s*(s-a)*(s-b)*(s-c))-1/2. Instructions: Write a function named circle_area that accepts the radius of a circle as a parameter (as a number) and returns the area of a circle with that radius. we also check if the user has entered the expression in. I am working on an area calculator in python, and everything seems ok,until I get to calculating the perimeter of a circle Can anyone point me in the right direction? import math from math. In programming, you need to strore value in a variable to use it in your program. Next story C program to swap two numbers using functions. The area of a circle is the area covered by the circle in a two dimensional plane. function sum(). Remember that you can pass in custom and lambda functions to your list of aggregated calculations, and each will be passed the values from the column in your grouped data. EasycodeBook. An easy to use, free area calculator you can use to calculate the area of shapes like square, rectangle, triangle, circle, parallelogram, trapezoid, ellipse, and sector of a circle. Here we find the solution to the above set of equations in Python using NumPy's numpy. 1416 (approx. I don't find the implementation in the R package ineq particularly conversational, and also I was working on a Python project, so I wrote this function to calculate a Gini Coefficient from a list of actual values. Attributes include sides, angles, altitudes, medians, angle bisectors, perimeters, semiperimeters, areas, base, Law of Cosines and Sines, height, radius of circumscribed circles, Pythagorean Theorem, radius of inscribed circles. The net result is that our simple circle-drawing algorithm exploits 2-way symmetry about the x-axis. 37 Square : area = 85. Write the nature of the lines. 
Posts: 14 Threads: 5 Joined Hello everybody, I'm a beginner in python, I just started today with it, and I created the following file in Linux All you've done is define the function. Necessary cookies are absolutely essential for the website to function properly. Write a program that displays the following menu. Following is a question that was asked in the MSBTE (Maharashtra State Board of Technical Education, Mumbai) Diploma in Computer Engineering question paper for Winter 2018 examination. Watch live video game streams from popular creators on Facebook. [email protected]! Geodesic area and length can also be calculated using geodesicArea and geodesicLength properties with @ followed. It also sets few implicit variable values, one of them is __name__ whose value is set as __main__. Mathematicians use the letter r for the length of a circle's radius. Times Square 1,382 properties. Python Program To find Area Of Circle. Find The Volume of a Square Pyramid Using Integrals. A value used when calculating the square root of x. In this Python Statistics tutorial, we will learn how to calculate the p-value and Correlation in Python. It denotes the ratio of circumference to diameter of a circle and it has a value of 3. The 4 angles present in the rectangle are also equal. Write a program that displays the following menu. As a circle has 360°, the actual fraction is \(\frac{\theta_A}{2\pi}\) Calculating \(\theta_A\) can be done either by the points \(\vec{P}_{1,2}\) we calculated already, or much simpler by using the sine of the half of the triangle and multiplying. Please write a program to print some Python built-in functions documents, such as abs(), int(), raw_input(). #include #include const float pi=3. The area of a square: To find the area of a square, multiply the lengths of two sides together. 24, perimeter = 34. sqrt(a-b) in a program, the effect is as if you had replaced that code with the return value that is produced by Python's math. #We must import the math module to incorporate the "pi" function. Trigonometry: Wave Interference. Your program will have in-line documentation (a header and in-line comments). the number of small pipes that fits into a large pipe or tube; the number of wires possible in a conduit; the number of fibers that fits in a connector; and similar. C program to calculate area of circle In this article, we will illustrate c program to calculate area of circle. Write a C++ program to calculate the area of triangle, rectangle and circle using function overloading. First assign a meaningful name to all the three functions. r- radius, 3. Volume Calculator. Here I write tutorials related to Python Programming Language. float area(float r) {float ar; ar=pi*r*r; return ar;} float area(float l,float b) {float ar; ar=l*b; return ar;} void main() {float b,h,r,l; float result; clrscr(); cout<<" Enter the Base & Hieght of Triangle: "; cin>>b>>h; result=area(0. Python by Saurabh Shukla Sir Python by Saurabh Sir Visit https://premium. A Shorter Version of the Sketch. Pythagorean theorem: pythagorean. Thus, if there were a total of 28. The calculator can be used to calculate applications like. Automated facial recognition enables the identification or verification of someone using the unique characteristics of their face and has many applications from. If you need to calculate area of a triangle depending upon the input from the user, input() function can be used. It is represented by a formula: The ID3 algorithm uses entropy to calculate the homogeneity of a sample. 
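The Heron's-formula fragment above has lost its exponent in extraction: the area is the square root (power 1/2, not -1/2) of s(s-a)(s-b)(s-c). A minimal Python sketch of that formula (variable names are illustrative):

import math

def triangle_area(a, b, c):
    # Heron's formula: area = sqrt(s * (s - a) * (s - b) * (s - c)),
    # where s is the semi-perimeter.
    s = (a + b + c) / 2
    if s <= a or s <= b or s <= c:
        raise ValueError("The given sides do not form a valid triangle.")
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(triangle_area(3, 4, 5))   # 6.0 for the classic 3-4-5 right triangle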
OOP is a programming paradigm that uses objects and their interactions to design applications and computer programs. In previous exercise we learned to declare and use program with single user defined function. Square footage is a measurement of the area of a room (or other type of space) expressed in feet square (ft 2). January 3, 2011. 14159 for π. to Calculate Area and Perimeter of a Rectangle,C++ Program to Find Area and Circumference of a Circle,C++ Program to Print Array in Reverse Order Creating an application using function makes it easier to understand, edit, check errors etc. We can't use reserved keywords as the I have been working on Python for more than 5 years. Before we start our treatize on possible Python representations of graphs, we want to The following Python function calculates the isolated nodes of a given graph. This Python program calculates surface area of cone given radius and height. Python Completions. This means whenever we go one square to the right, we have to go three This is what Mathepower calculated: To calculate the slope m, use the formula. Solve various attributes of different types of triangles. The radius of the circle should be given as an argument to the function and the equation to calculate the area is PI*r2. To find the area of square we multiple the length of side with itself and store area in a floating point variable. This program is also same as the previous program but here we are using function so that the user can use the function in any other program. Show Instructions. Use of function with an argument and a return value. Obtaining keyboard input // 3. A Python Histogram/Matplotlib Histogram is an accurate representation of the distribution of numerical data. py file and PDF file. Write a C++ program to find Area of square,rectangle,circle and triangle using Function Overloading. compute (5,0)); }. In the Fibonacci example the memoize function was used to demonstrate using function arguments as a sequence, which works for immutable / hashable arguments. solve() which solves a linear matrix equation, or system of linear scalar equation. Regular polygons may be convex or star. Java program to calculate the area of a circle /* Java Programming for Engineers Julio Sanchez Maria P. Program to calculate area of inner circle which passes through center of outer circle and touches its circumference; Area of a Square | Using Side, Diagonal and Perimeter first_page Python Program to convert Kilometers to Miles. h header file, it is used for calculate power of any number. Most often used by people in the United States. Standard formula to calculate the area of a circle is: A=πr². The Python's filter() function takes a lambda function together with a list as the arguments. Per our terms of use, Mathway's live experts will not knowingly provide solutions to students while they are taking a test or quiz. The following Java program also checks whether the given 3 sides can form part of a triangle. If you know the coordinates of the vertices of a square, you can calculate all the other properties, including the area. Find out the uses of Python map function to apply functions to objects in sequences. This Python area of rectangle program allows the user to enter the width and height of a rectangle. Lets write the C code to compute the area of the circle. The course shows you how to use the free open-source PyScripter IDE for Python to write basic programs using concepts such as. The constants of proportionality are 2 π and π, respectively. 
Add to the base class, a member functions get_data () to initialize the data members in the base class and add another member function display_area () to compute the area. The maximum value in the interval is 3750, and thus, an x-value of 37. o Calculate the area of a circle. /* C program to find square of given number using function. A circle's circumference and radius are proportional. I'm a little concerned about how I used the functions as well as my else statement when an "invalid" input is. Create an object for the class. Technology has advanced and with that, there have been many. Distance across the circle passing through the center is called as diameter. Python program to find area of circle. f / ReciprocalSquareRootSSE` produced slower results than the accurate square root. Earn XP, unlock achievements and level up. py) or use. File name: area. First-class functions and how to use them. They are the most basic structure of a program, and so Python provides this technique for code re-use. h > // for getch() function: double const pi= 3. we also check if the user has entered the expression in. Using the Math. You may assume t. The null hypothesis is rejected only if the test statistic falls in the critical region, i. once you put in the centers coordinates, your equation should be, however we still need to find the radius. >>error: void value not ignored as it ought to be| This simply means that the function you are calling has a return type of void but you are trying to assign the. This must be done in a loop. The user need not worry about the functions' definitions. Circle with square and octagon circumscribed, showing area gap. Volume of a right square prism. Either is fine. pi/4) * (diameter * diameter) # diameter = 2 * radius # radius = diameter/2 radius = diameter / 2 area2 = math. It's like Duolingo for learning to code. 94 94 1782% of 4497,221austinc9 Issues Reported. Enter one known value of a circle and calculate the area, circumference, radius or diameter. The formula for finding the area of a circle is 3. Text; namespace ForgetCode { class Program { static void Main(string[] args) { int r; double A; Console. Calculate the area of rectangle 3. This program is used to calculate the area of a circle where the radius will be fetched from user. c program using star symbol in factorial; c program to find factorials using function; c program to find factorial using functions; c program to find factorial of a number using functions; c program to calculate factorial of a number using function. This free area calculator determines the area of a number of common shapes using both metric units and US customary units of length, including rectangle, triangle, trapezoid, circle, sector, ellipse, and parallelogram. You'll receive a free ebook to read, and upon posting a review to Amazon, you will receive a complementary print. 785, and so on. The area of a circle is number of square units inside the circle. Below are Python script versions. Here we find the solution to the above set of equations in Python using NumPy's numpy. #include using namespace std float r,area; cout<< "\nEnter radius of circle. How to write python program to find area of circle using radius, circumstance and diameter. #method 1 PI = 3. A Shorter Version of the Sketch. This is the first mini-project I have completed by myself with Python 2. Here s is the semi-perimeter and a, b and c are three sides of the triangle. 14159, which is equal to the ratio of the circumference of any circle to its diameter. 
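The module fragment quoted above (an "area.py" with one function per shape) can be reassembled as follows; the file name and the use of math.pi instead of the rougher 22/7 approximation are choices made here, not necessarily those of the original tutorial:

# area.py -- simple area helpers, importable as a module
import math

def square(x):
    """Area of a square with side x."""
    return x * x

def rectangle(l, b):
    """Area of a rectangle with length l and breadth b."""
    return l * b

def circle(r):
    """Area of a circle with radius r (math.pi is more accurate than 22/7)."""
    return math.pi * r * r

if __name__ == "__main__":
    print(square(4), rectangle(3, 5), circle(1))

Another script can then reuse these helpers with "import area" and a call such as area.circle(2).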
In order to find out the area of a circle in Python, you have to know the radius of the circle. RegEx can be used to check if a string contains the specified search pattern. } } Each concrete class: Circle, Square and Rectangle implements the Shape interface, so they must implement the area method (and they each do Then, the area is computed for each Shape object in the array. and print output on screen using cout>> function. Outline: About Functions How to define a function Example for defining a function Calling a function with arguments Calling a function without arguments Return values from a function Indentation in coding Documenting or commenting code How to use docstrings in python function How to write a function circle to return area and perimeter with radius r. How to write python program to find area of circle using radius, circumstance and diameter. Here's a Simple C++ program to find Area using Function Overloading in C++ Programming Language. Here's a practical example of using trigonometry with arcs and chords. calculate the area of a triangle 4. Graphs of Functions, Equations, and Algebra. Plugging in 37. Please write a program to print some Python built-in functions documents, such as abs(), int(), raw_input(). Write a method which can calculate square value of number Hints: Using the ** operator Solution: def square(num): return num ** 2 print square(2) print square(3) #-----# Question 24 Level 1 Question: Python has many built-in functions, and if you do not know how to use it, you can read document online or find some books. Python program to create and. Since the formula for the area of a circle squares the radius, the area of the larger circle is always 4 (or 2 2) times the smaller circle. How to calculate the length of an arc. For more on this, see Area and Perimeter of a square. the equation of a circle recall is with the center point being (h,k). The shape of the bases is the circle. We use optional third-party analytics cookies to understand how you use GitHub. To find the area of a parallelogram we need to the length of it's Base and Height. I am working on an area calculator in python, and everything seems ok,until I get to calculating the perimeter of a circle Can anyone point me in the right direction? import math from math. Using the radius value, this Python formula to calculate the Circumference, Diameter, and Area Of a Circle, Diameter of a Circle = 2r = 2 * radius, Area of a circle are: A = πr² = π * radius * radius and Circumference of a Circle = 2πr = 2 * π * radius. Please specify you want "Cracking Codes with Python". This program is written in C++ and it combines the step 1 and step 2 to create a menu driven program. Remote running a local file using ssh. Programming. for a square inscribed in a circle we have that : the diagonal of the square corresponds to the diameter of the circle. We recommend you should at least use Python 3 for writing code and run examples. How to calculate area of four sides which are 50ft,59ft,65ft,71ft without knowing (base & height,angles). Circle((0,0), radius=5), gives the circle a center of (0,0) on an X-Y axis, along with a radius of 5 units (for a total diamter of 10 units). Following python program ask from user to enter side length of square to print area of the square: # Python Program - Calculate Area of Square print ("Enter 'x' for exit. C Program to find the area of a circle using pow function. Use this formula: circumference = 2PIr. Hopefully you have found the chart you needed. 
Square is a polygon with four equal sides in length and has four equal angles. Python Program - Hypotenuse Using Pythagorean Theorem: Simple Python program using functions to calculate the hypotenuse of a triangle using the Pythagorean Theorem. In many areas of science and technology, such as physics, biology, construction and even That is, it is necessary to calculate its coordinates at any given time. In this quick and practical tutorial, you'll learn what a square root is and how to calculate one in Python. In the following section, we will be discussing how to use lambda functions with various Python built-in functions. Understanding what a circumference of a circle is and how to calculate it is crucial as you move to higher You can also think of the radius as the distance between the center of the circle and its edge. Tuples also use parentheses instead of square brackets. Python Program To Find The Roots Of Quadratic Equation A quadratic equation is an equation of the second degree, meaning it contains at least one term that is squared. Step 2: Declare a class over with data members and member functions. Twosignhoords, one circular and ond ond square are to be made using a wis, st length 40m and cutting it into two pieces. Here are the functions you should create: public static double area_circle( int radius ) // returns the area of a circle public static int area_rectangle( int length, int width ) // returns the area of a rectangle public static int area_square( int side ) // returns the area of a square public static double area_triangle( int base, int height ) // returns the area of a triangle. Python Programming Code to Calculate Area of Square. and print output on screen using cout>> function. C Program to find the area of a circle using pow function. Let's make a program using a function to make the. Calculating Square Root in Python Using sqrt() Function. This means that a circle has a circularity of 1, circularity of a square is 0. For this type of equation, pi would be equal to 3. In other words, the variable is a place holder for the data. The first function, max, found the largest element of the list [1,2,3,4]. Pie (π) is a well-known mathematical constant, which is defined as the ratio of the circumference to the diameter of a circle and its value is 3. Let's look at some examples involving the area of a circle. Do not forget you can propose a chart if you think one is missing!. So we need to inherit the abstract class and define the abstract methods in the new class. The given example will teach you the method for preparing a program to calculate the area and perimeter of a circle. Numbers in Python # In Python, Numbers are of 4 types: Integer. Join a community of players and streamers. Put the verb in the brackets into the Present Perfect Continuous 1. Learn more about how to calculate them using $π$ above!. The area of a circle can be defined by knowing the number of square units that can fit inside that circle and if each square inside has an area of 1 cm 2. That means that the area of the rectangle, or the space that covers the rectangle, is 48 square units. So, first you have to import the util package of Java so that you can use the Scanner class in this program which will help programmers to fetch input from users. center to chord midpoint distance Calculate Area of Circle Segment given radius and central angle in Calculate largest size Square that would fit in a Circle given Diameter Calculate largest size Volume is in cubic units. 
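For the hypotenuse example mentioned above, a compact Python version of the Pythagorean theorem might look like this (a sketch; math.hypot is used here as a convenience, which the quoted tutorial may not use):

import math

def hypotenuse(a, b):
    # Pythagorean theorem: c = sqrt(a^2 + b^2)
    return math.hypot(a, b)

print(hypotenuse(3, 4))   # 5.0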
It is cumulative distribution function because it gives us the probability that variable will take a value less than or equal to specific value of the variable. This tutorial explains how to create a simple python program to. The goal is to calculate the area of the sector of the circle, which is a fraction of the whole. Everyone can create professional designs with Canva. Circle area inside square. So if I draw a line across the circle that goes through the center, the length of that line all the way across the circle through the center is 16 millimeters. For example there is the Great-circle distance, which is the shortest distance between two. Filled Area Chart¶. Before using that function, we need to understand some terminologies related with histograms. According to heron's formula, the area of a triangle with 3 sides a, b and c is, Area = square root of (p*(p-a)*(p-b)*(p-c)) (where p = (a+b+c)/2). How to write python program to find area of circle using radius, circumstance and diameter. Coordinate system changes are done with the transform. C++ supplies a library for math functions in C++ to readily execute intricate mathematical functions such as trigonometric function and algebraic equations. What unit should you use? Give your program an appropriate name. A value used when calculating the square root of x. In the Fibonacci example the memoize function was used to demonstrate using function arguments as a sequence, which works for immutable / hashable arguments. pi constant you can use here: import math def calculate_area(radius): return math. This java example program also expain the concepts for Basic Programs. Python Statistics Tutoria - Python:p-value ,Python T-test, one sample and Two Sample T-test,Paired Sample T-test,correlation in Python, Python KS test. """ This program calculates area of shapes. {var celsuis=document. Python program to find area of circle. The process of finding standard deviation requires you to. Tuples also use parentheses instead of square brackets. If you don't know the height or you may have no idea how to find out the height of the triangle, then you can use the below program to calculate the area of a triangle. Let us put a circle of radius 5 on a graph: Now let's work out exactly where all the points are. In this program, you'll learn to find the square root of a number using exponent operator and cmath module. """ print "The area calculator has started" #. Answer: Function overloading allows to use the same function name for different functions. Remember to use the appropriate labels corresponding to each formula. 24 Triangle : area = 21. Calculate Gini for sub-nodes, using formula sum of the square of probability for success and failure (p^2+q^2). C# Corner is Hosting Global AI October Sessions 2020. Python programming suite provides a large number of GUI frameworks (or toolkits), from TkInter (traditionally bundled with Python, using Tk) to a number. >>error: void value not ignored as it ought to be| This simply means that the function you are calling has a return type of void but you are trying to assign the. Program for Area Of Square after N-th fold; Program to print Square inside a Square; Calculate Volume, Curved Surface Area and Total Surface Area Of Cylinder; Find area of the larger circle when radius of the smaller circle and difference in the area is given; Area of a square from diagonal length; Area of a Circumscribed Circle of a Square. 
Write a C program to input radius of circle from user and find diameter, circumference and area of the given circle using function. In particular, instead of using double quotes to begin and end a string literal, one can use single quotes as well. Here are the functions you should create: public static double area_circle( int radius ) // returns the area of a circle public static int area_rectangle( int length, int width ) // returns the area of a rectangle public static int area_square( int side ) // returns the area of a square public static double area_triangle( int base, int height ) // returns the area of a triangle. Here's a Simple C++ program to find Area using Function Overloading in C++ Programming Language. Area of a regular polygon.
CommonCrawl
Topological strings: Why is the complex structure for $T^2$ denoted as $\tau$ in string theory? In these notes by Vafa on topological string theory he says on page 7 that the moduli of the 2-torus can be repackaged into two quantities: $$A=iR_1/R_2 \,\,\,\,\,\,\,\,\, \tau=iR_2/R_1$$ where $A$ describes the overall area of the torus or its size and $\tau$ describes its complex structure or its shape. Why does $A$ measure the area? Why does $\tau$ describe the complex structure of $T^2$? The complex structure of $T^2$, which is Kähler, is a tensor $J$. What is its relation to this $\tau$? And what does the complex structure have to do with the shape of $T^2$? I would assume that the cohomology class of the Kähler form only has to do with the area. Later he says that this is an example of mirror symmetry in string theory. Why? Mirror symmetry relates two different CYs; here we only have different moduli of $T^2$. Finally, which parameters actually correspond to the moduli space of $T^2$? Both $A,\tau$, only $A$, or only $\tau$? This is a quite mathematical question but it is at the heart of string theory. This post imported from StackExchange Physics at 2015-06-05 09:44 (UTC), posted by SE-user Marion string-theory differential-geometry topological-field-theory calabi-yau asked Jun 3, 2015 in Theoretical Physics by Marion Edualdo (250 points) [ revision history ] edited Jun 5, 2015 by Dilaton
If you include $\tau$ (linearly independent from $A$), then you can break that degeneracy. So the moduli space is parametrized by either the pair $(R_1,R_2)$ or $(A,\tau)$. (He does say that for more general tori you need to consider real parts for $A$ and $\tau$, so the moduli space would be bigger). [Some Clarity] In case that wasn't clear - consider the complex structure of $\mathbb{C}$, the imaginary unit $i$. It's action on the edges is $(R_1,R_2)\rightarrow (-R_2,R_1)$ So what happens to $A$ and $\tau$ under this map? $$A=iR_1R_2\rightarrow A'=-iR_2R_1$$ $$\tau=iR_2/R_1\rightarrow \tau'=i(R_1)/(-R_2)$$ So $A$ doesn't tell us anything about the complex structure, because under that map we just get $A\rightarrow -A$. However, $\tau\rightarrow -1/\tau$, so $\tau$ tells "how wide" and "how long" the torus is (at least, the ratio of these), which is the complex structure. This post imported from StackExchange Physics at 2015-06-05 09:44 (UTC), posted by SE-user levitopher answered Jun 4, 2015 by levitopher (160 points) [ no revision ] Thanks a lot, this has been extremely useful and helpful. Thanks for correcting my typo as well. commented Jun 4, 2015 by Marion Edualdo (250 points) [ no revision ] I don't know string theory, but I do know about complex structures on 2-tori, also known as complex elliptic curves. Most of your questions were answered by levitopher, I'll just elaborate a bit on that part. The space of all complex structures on a topological torus is called the moduli space of elliptic curves. This means that points of this space correspond exactly to isomorphism classes of elliptic curves, where two elliptic curves are isomorphic if there exists a biholomorphic mapping between them (typically a point is singled out that has to be respected by the mapping, but that is not important). It can be shown that every complex structure on a torus is obtained as a quotient of the complex plane modulo a lattice, i.e. a discrete subgroup of rank two of the plane, acting by translation: you roll up the plane in two independent directions. An isomorphism is a multiplication by a complex number that induces a bijection on these lattices. Now let $R_1,R_2$ be two generators of your lattice, hence two complex numbers. I assume that in the first part of the example they authors are thinking of two perpendicular generators $R_1$ and $iR_2$. In general, multiplication by a (nonzero) complex number doesn't change the isomorphism class of the corresponding complex torus, to we use it to scale one of the generators to 1, and we get a lattice generated by $1, R_2/R_1$. Conventionally this scaling is done in such a way that $\tau$ has positive imaginary part. The ratio $R_2/R_1$ is often denoted $\tau$. Now two complex tori having the same $\tau$ have equivalent complex structures, but the converse doesn't quite hold yet. I think what we have now is the Teichmüller space, which is easy as a space itself, namely the complex upper half plane, but whose moduli interpretation is more technical, namely of complex structures on the torus up to only some complex isomorphisms (namely those isotopic to the identity). To go to the actual moduli space of complex structures, you have to factor out equivalent lattices: e.g. $1, \tau + 1$ generates the same lattice, and $\tau + 1$ corresponds to the same complex structure as $\tau$. This is essentially a change of basis, and all bases are obtained by applying elements of $SL_2(\Bbb Z)$ to a given set of generators. 
Note that this directly translates into an action on $\tau$ by Möbius transformations: $$\begin{pmatrix} a & b \\ c & d\end{pmatrix}\tau = \frac{a\tau + b}{c\tau + d}$$ The quotient of the complex upper half plane (with coordinate $\tau$) under the action of $SL_2(\Bbb Z)$ is exactly the moduli space of complex structures on a topological torus. This post imported from StackExchange Physics at 2015-06-05 09:45 (UTC), posted by SE-user doetoe answered Jun 4, 2015 by doetoe (125 points) [ no revision ]
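To make the basis-change action above concrete, here is a small worked example (added for illustration; it is not part of the original thread). The two standard generators of $SL_2(\Bbb Z)$ act on $\tau$ as $$T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}: \quad \tau \mapsto \frac{1\cdot\tau + 1}{0\cdot\tau + 1} = \tau + 1, \qquad S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}: \quad \tau \mapsto \frac{0\cdot\tau - 1}{1\cdot\tau + 0} = -\frac{1}{\tau}.$$ Both correspond to choosing a new basis of the same lattice ($\{1, \tau+1\}$ and $\{\tau, -1\}$, rescaled so that the first generator is 1), so $\tau$, $\tau+1$ and $-1/\tau$ all describe the same complex torus. Since $T$ and $S$ generate $SL_2(\Bbb Z)$, every change of basis is a composition of these two moves, which is why the moduli space is the upper half plane modulo this Möbius action.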
CommonCrawl
Basel Problem Proof, by Varun Rajkumar
\documentclass{article}
\usepackage[margin=1in]{geometry}
\usepackage{blindtext}
\usepackage{amssymb}
\usepackage{hyperref}
\hypersetup{colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan}
\urlstyle{same}
\usepackage{amsthm}
\usepackage{amsmath}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[theorem]
\newtheorem{lemma}[theorem]{Lemma}
\theoremstyle{remark}
\newtheorem*{remark}{Remark}
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\title{Basel Problem}
\author{Varun Rajkumar}
\date{October 2019}
\begin{document}
\maketitle
\begin{abstract} In this paper, we talk about Euler's famous solution to the Basel problem and his advantage over other mathematicians of the time. \end{abstract}
\section{Introduction to the Basel Problem} The Basel problem was originally posed by the Italian mathematician Pietro Mengoli in 1650. The problem was to find the value of the infinite sum \[\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^2}.\] Nearly 90 years later, it was famously solved by Leonhard Euler in 1735, and solving it earned Euler much of the fame he has today. Euler's solution was the intriguing value $\frac{\pi^2}{6}$. The problem for most mathematicians was that the sum converges extremely slowly, so they could not guess a good value to try to prove. But Euler had an advantage: he could approximate the sum better. \section{How did Euler approximate the value of the sum?} Other mathematicians' attempts were to compute the sum for small values of $n$ and guess the answer. Here are the approximations of the sum: \[n=10, \qquad\sum_{n=1}^{10}\frac{1}{n^2}=1.5497677311665408\] \[n=100, \qquad \sum_{n=1}^{100}\frac{1}{n^2}=1.6349839001848923\] \[n=1000,\qquad\sum_{n=1}^{1000}\frac{1}{n^2}=1.6439345666815615\] As you can see, the convergence is really bad, so Euler had a method for the acceleration of series (see \cite{Gos00}). To start, let's integrate the Taylor series for $\frac{-\ln(1-x)}{x}$: \begin{align*} \int_{0}^{t}\frac{-\ln(1-x)}{x}dx &=\int_{0}^{t}\sum_{n=1}^{\infty}\frac{x^{n-1}}{n}\, dx \\ &= \sum_{n=1}^{\infty}\frac{t^{n}}{n^2}. \end{align*} Plugging in $t=1$ will give the Basel problem, so we have: \begin{align*} \sum_{n=1}^{\infty}\frac{1}{n^2}&=\int_{0}^{1} \frac{-\ln(1-x)}{x}dx\\ & = \int_{0}^{1/2}\frac{-\ln(1-x)}{x}dx+\int_{1/2}^{1}\frac{-\ln(1-x)}{x}dx. \end{align*} There is no easy way of evaluating this integral from 0 to 1 to get the solution to the Basel problem, so Euler did something different. Euler split the integral into two parts: \begin{align*} \sum_{n=1}^{\infty}\frac{1}{n^2}&=\int_{0}^{1/2}\frac{-\ln(1-x)}{x}dx+\int_{1/2}^{1}\frac{-\ln(1-x)}{x}dx\\&= \sum_{n=1}^{\infty}\frac{1}{n^22^n}+\int_{1/2}^{1}\frac{-\ln(1-x)}{x}dx. \end{align*} Now, substituting $x=1-t$ in the remaining integral (and renaming $t$ back to $x$), \[\zeta(2)= \sum_{n=1}^{\infty}\frac{1}{n^22^n}-\int_{0}^{1/2}\frac{\ln(x)}{1-x}dx,\] and observe that $\frac{1}{1-x}$ is a geometric series.
Plugging this in, we have: \[\zeta(2)= \sum_{n=1}^{\infty}\frac{1}{n^22^n}-\int_{0}^{1/2}\sum_{n=0}^{\infty}\ln(x)x^n\,dx.\] We use a calculator to compute the integral: \[\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^22^n}-\sum_{n=1}^{\infty}\frac{1}{n}\left(\frac{1}{2^{n}}\ln \left(\frac{1}{2}\right)-\frac{1}{n2^{n}}\right).\] Simplifying a little bit, we have: \[\zeta(2)=2\sum_{n=1}^{\infty}\frac{1}{n^22^n}+\ln(2)\sum_{n=1}^{\infty}\frac{1}{n2^n},\] and using the power series for $-\ln(1-x)$ with $x=\frac{1}{2}$, \[-\ln\left(1-\frac{1}{2}\right)=\ln(2)=\sum_{n=1}^{\infty}\frac{1}{n2^n}.\] Plugging this in, we get \[\zeta(2)=2\sum_{n=1}^{\infty}\frac{1}{n^22^n}+\ln(2)^2.\] Because the terms of this series go to 0 much faster, its partial sums converge to $\zeta(2)$ much faster. Here are the partial sums for a few values of $n$: \[n=10, \qquad \zeta(2)\approx 2\sum_{n=1}^{10}\frac{1}{n^22^n}+\ln(2)^2=1.64492005167\] \[n=100, \qquad \zeta(2)\approx 2\sum_{n=1}^{100}\frac{1}{n^22^n}+\ln(2)^2=1.64493406685\] \[n=1000, \qquad \zeta(2)\approx 2\sum_{n=1}^{1000}\frac{1}{n^22^n}+\ln(2)^2=1.64493406685.\] This converges so well that the first 12 digits are the same, and \[\frac{\pi^2}{6}-\left(2\sum_{n=1}^{1000}\frac{1}{n^22^n}+\ln(2)^2\right)\approx -1.7739143\times 10^{-12},\] a very small number. Euler was now very convinced that the sum was $\frac{\pi^2}{6}$, and if you know the answer to a sum, it is much easier to prove the value of the sum. \section{The proof of the Basel problem} Euler's proof, unlike his previous analysis, was a lot simpler and less rigorous. \begin{theorem} The sum \[\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}.\] \end{theorem} \begin{proof} In order to prove this, we need the following lemma: \begin{lemma} The sine function can be expressed as the infinite product \[\sin(x)=x\prod_{n=1}^{\infty}\left(1-\frac{x^2}{\pi^2n^2}\right).\] \end{lemma} \begin{proof}[Proof of the lemma] If we want a product for $\sin(x)$, we can write it in terms of its zeros. For example, if we wanted to write $x^2+3x+2$ as a product, we would consider its zeros, $-2$ and $-1$, and then \[x^2+3x+2=c(x+2)(x+1),\] where $c$ is a constant determined by $f(0)$. We can do this with $\frac{\sin(x)}{x}$ (it turns out to be easier than $\sin(x)$); its zeros are all the positive and negative non-zero multiples of $\pi$. So we can write \[\frac{\sin(x)}{x}=c\prod_{n=1}^{\infty}\left(1-\frac{x^2}{\pi^2n^2}\right).\] Now, to solve for $c$, we take the limit as $x$ approaches 0: \[\lim_{x\rightarrow 0}{\frac{\sin(x)}{x}}=1=c\prod_{n=1}^{\infty}\left(1-\frac{0^2}{\pi^2n^2}\right)=c,\] which shows $c=1$. Next, we multiply both sides by $x$: \[\sin(x)=x\prod_{n=1}^{\infty}\left(1-\frac{x^2}{\pi^2n^2}\right),\] as claimed. \end{proof} Now we expand the infinite product of the sine function up to the $x^3$ term: \[\sin(x)=x\prod_{n=1}^{\infty}\left(1-\frac{x^2}{\pi^2n^2}\right)=x-x^3\sum_{n=1}^{\infty}\frac{1}{\pi^2n^2}+\cdots.\] Because a function cannot have two different Taylor expansions, these coefficients must equal those of the Maclaurin series of $\sin(x)$, so \[\sin(x)=x-x^3\sum_{n=1}^{\infty}\frac{1}{\pi^2n^2}+\cdots=x-\frac{x^3}{6}+\cdots.\] Comparing the $x^3$ terms, \[\frac{1}{6}=\sum_{n=1}^{\infty}\frac{1}{\pi^2n^2},\] and multiplying both sides by $\pi^2$, \[\frac{\pi^2}{6}=\sum_{n=1}^{\infty}\frac{1}{n^2}.\] \end{proof} \section{More about the Basel problem} Mathematicians originally knew that the sum \[\sum_{n=1}^{\infty}\frac{1}{n}\] diverges to infinity at a very slow rate (logarithmic growth). A lot of mathematicians attempted to sum the similar sum \[\sum_{n=1}^{\infty}\frac{1}{n^2}\] but it was only formally posed by Pietro Mengoli as a challenge in 1650.
Euler generalized his solution to \[\zeta (2n)={\frac {(-1)^{n+1}B_{2n}(2\pi )^{2n}}{2(2n)!}},\] where $B_{n}$ is the $n$th Bernoulli number. Euler also gave his famous Euler product, \[\zeta(s)^{-1}=\prod_{p\in\mathbb{P}}\left(1-\frac{1}{p^s}\right),\] a more analytic result than a summation result (see \cite{RS19}). Euler established a lot about $\zeta(2n)$, but never managed to get the value of $\zeta(3)$ in closed form. Euler is now well known for his work on the computation of the Riemann zeta function, and he was very good at it considering the level of technology he had in 1735. \section{Conclusion} This is a truly beautiful result from Euler, and his way of obtaining it is very elegant and short. \section{Acknowledgments} Thanks to Bill Gosper for mentoring me and Simon Rubinstein-Salzedo for helping me write this (Bill has a proof of the Basel problem at \cite{Gos99}). \bibliographystyle{alpha} \bibliography{biblio} \end{document}
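The accelerated series in the essay above is easy to check numerically. The short Python snippet below is an illustrative check added here (it is not part of the original LaTeX source); it compares the naive partial sums of $\sum 1/n^2$ with the accelerated formula $2\sum_{n\le N} \frac{1}{n^2 2^n} + \ln(2)^2$:

import math

def naive_partial_sum(N):
    # Direct partial sum of 1/n^2; the error decays only like 1/N.
    return sum(1.0 / n**2 for n in range(1, N + 1))

def euler_accelerated(N):
    # Accelerated form: 2 * sum 1/(n^2 * 2^n) + ln(2)^2; the terms decay geometrically.
    s = sum((0.5 ** n) / n**2 for n in range(1, N + 1))
    return 2.0 * s + math.log(2) ** 2

target = math.pi ** 2 / 6
for N in (10, 100, 1000):
    print(N, target - naive_partial_sum(N), target - euler_accelerated(N))

With only 10 terms the accelerated form already agrees with $\pi^2/6$ to about five decimal places, while the naive sum is still off in the second decimal, which is exactly the behaviour the essay describes.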
CommonCrawl
In-depth comparative analysis of malaria parasite genomes reveals protein-coding genes linked to human disease in Plasmodium falciparum genome Xuewu Liu1, Yuanyuan Wang2, Jiao Liang1, Luojun Wang2, Na Qin2, Ya Zhao1 & Gang Zhao2 Plasmodium falciparum is the most virulent malaria parasite capable of parasitizing human erythrocytes. The identification of genes related to this capability can enhance our understanding of the molecular mechanisms underlying human malaria and lead to the development of new therapeutic strategies for malaria control. With the availability of several malaria parasite genome sequences, performing computational analysis is now a practical strategy to identify genes contributing to this disease. Here, we developed and used a virtual genome method to assign 33,314 genes from three human malaria parasites, namely, P. falciparum, P. knowlesi and P. vivax, and three rodent malaria parasites, namely, P. berghei, P. chabaudi and P. yoelii, to 4605 clusters. Each cluster consisted of genes whose protein sequences were significantly similar and was considered a virtual gene. Comparing the enriched values of all clusters in human malaria parasites with those in rodent malaria parasites revealed 115 P. falciparum genes putatively responsible for parasitizing human erythrocytes. These genes are mainly located in the chromosome internal regions and participate in many biological processes, including membrane protein trafficking and thiamine biosynthesis. Meanwhile, 289 P. berghei genes were included in the rodent parasite-enriched clusters. Most are located in subtelomeric regions and encode erythrocyte surface proteins. Comparing cluster values in P. falciparum with those in P. vivax and P. knowlesi revealed 493 candidate genes linked to virulence. Some of them encode proteins present on the erythrocyte surface and participate in cytoadhesion, virulence factor trafficking, or erythrocyte invasion, but many genes with unknown function were also identified. Cerebral malaria is characterized by the accumulation of infected erythrocytes at the trophozoite stage in the brain microvasculature. To discover cerebral malaria-related genes, fast Fourier transformation (FFT) was introduced to extract genes highly transcribed at the trophozoite stage. Finally, 55 candidate genes were identified. Considering that parasite-infected erythrocyte surface protein 2 (PIESP2) contains a gap-junction-related Neuromodulin_N domain and that anti-PIESP2 might provide protection against malaria, we chose PIESP2 for further experimental study. Our analysis revealed a limited number of genes linked to human disease in the P. falciparum genome. These genes could be interesting targets for further functional characterization. Malaria is still a major global public health problem. According to the World Malaria Report 2016, more than 200 million people suffer from malaria and over 400,000 people die as a consequence of this disease [1]. Malaria is caused by parasitic protozoans belonging to the genus Plasmodium. At least five species of Plasmodium are capable of infecting humans, including P. falciparum, P. knowlesi, P. vivax, P. ovale, and P. malariae [2]. Among them, P. falciparum causes the most-often fatal and medically severe form of the disease, and has thus received the most attention. The animal malaria parasites, such as P. berghei, P. chabaudi, P. vinckei, and P. yoelii, are natural parasites of rodents. They are usually used as models to study malarial infections in the laboratory [3].
Two biological features of P. falciparum are particularly noteworthy regarding its ability to cause human disease. One is that, as a human malaria parasite, P. falciparum can invade and parasitize human erythrocytes, while the rodent malaria parasites are infectious to rodent species but not humans, suggesting that P. falciparum possesses some properties required for parasitizing human erythrocytes. The other feature is that P. falciparum is much more virulent than all other human malaria species. P. falciparum infection may progress to severe malaria, which manifests as one or more of the following severe complications: cerebral malaria (CM), severe malaria anemia, and acidosis/respiratory distress (RD) [4]. Among these complications, CM accounts for a significant proportion of malaria-related deaths and shows potential for the induction of neurological deficits in survivors [5]. It is characterized by the accumulation of P. falciparum-infected RBCs (iRBCs) at the pigmented trophozoite stage in the microvasculature of the brain [6]. Very few malaria deaths have been reported for P. vivax and P. knowlesi. In fact, P. vivax rarely kills the infected individual and is responsible for most cases of benign tertian malaria [7]. Identification of the genetic basis of the aforementioned biological features can help in the discovery of genes contributing to human disease, the development of new strategies to prevent P. falciparum infecting humans, and the treatment of severe malaria in humans. Recently, the genome sequences of several malaria parasites have become publicly available [8], making comparative genome analysis a practical strategy to search for human disease-related genes. A series of genes contributing to human disease have been identified by this method. For example, the comparative analysis of human and rodent malaria parasite genomes revealed that two enzymes, PF3D7_0520500 and PF3D7_0614000, which are essential enzymes in thiamine biosynthesis, are absent in rodent malaria parasites [9]. As the elimination of thiamine greatly impairs the erythrocytic multiplication rates of malaria parasites, the presence of the thiamine synthesis pathway in human malaria parasites can be seen as an adaption to increase the viability of such parasites in human erythrocytes and contribute to human pathogenesis. Furthermore, a comparison of the genome of non-cytoadherent P. falciparum D10 to that of cytoadherent P. falciparum 3D7 revealed a subtelomeric deletion on the right arm of chromosome 9 in D10 [10]. Further experimental study of 25 genes in this subtelomeric region indicated that the absence of virulence-associated protein 1 (PfVAP1) was responsible for the non-cytoadherent phenotype of D10, demonstrating that PfVAP1 is a virulence-related factor [11]. Although comparative genome analysis is feasible for the identification of genes associated with a particular phenotype, there were two limitations in previous analyses: First, earlier analyses only focused on genes specific to a group of species (group-specific), while genes conserved across all species but expanding in a group of species (group-expansion) were usually not considered. Second, using the previous method to identify the species-enriched genes among n species required at least \( \left(\genfrac{}{}{0pt}{}{n}{2}\right) \) comparisons, which makes the task quite resource-intensive when n is too large. In this study, to identify genes related to human disease in the P. 
falciparum genome, we developed a virtual genome method that overcomes the aforementioned limitations. Three human malaria parasites, namely, P. falciparum, P. knowlesi and P. vivax, and three rodent malaria parasites, namely, P. berghei, P. chabaudi and P. yoelii, were selected because these species have NCBI taxon IDs, and their host-tropism and virulence are relatively well characterized. We hypothesized that all of the analyzed malaria parasites had a common virtual genome, where each virtual gene actually represents a cluster of real genes whose protein sequences are similar. The phenotypic difference can be attributed to differences in the expression of virtual genes. Genes associated with a particular biological feature are those highly or specifically expressed in the group of species with such features. To look for genes linked to human disease, first, we established a protein sequence similarity network through sequence alignment and utilized the modularity method to partition this network into thousands of clusters. The obtained clusters varied in terms of the number of genes, ranging from one to more than 1000 genes. Each cluster was considered a virtual gene. Second, we compared the enriched values of all clusters in human malaria parasites with those in rodent malaria parasites to find genes responsible for P. falciparum parasitizing human erythrocytes. Third, we looked for genes related to virulence by comparing cluster values in P. falciparum with those in P. vivax and P. knowlesi. Finally, to discover novel molecules contributing to CM, we integrated gene expression data and extracted virulence-related genes highly transcribed at the trophozoite stage. One candidate gene was selected as an attractive starting point for follow-up experimental investigation. Establishment of virtual genome method by sequence cluster identification P. falciparum can parasitize human erythrocytes and is the most virulent malaria parasite. To identify the genetic basis of these important biological features, we performed comparative analysis of three human and three rodent malaria parasite genomes. We assumed that all of these Plasmodium species have a common virtual genome, but differ in virtual gene expression. Genes highly expressed in a subgroup of Plasmodium species are frequently associated with the unique feature of such parasites. For example, var. is specific to P. falciparum. It encodes the prime virulence factor PfEMP1 involved in the attachment of infected erythrocytes to microvascular [12]. To find species-group enriched genes, we performed protein sequence alignment to construct a network where each edge represents a significant hit between query and target (Fig. 1a). Then, we developed a modified BGLL (see methods) algorithm and applied it to identify sequence clusters within this network. Genes within each cluster are significantly similar in their protein sequences. Finally, the members of each cluster are allocated to the Plasmodium species from which they are derived, generating enriched values of all clusters in those species. The enriched value of a cluster can be considered to reflect the expression level of such a cluster. Species group-enriched clusters can be found by comparing the cluster values in all ingroup species with those in outgroup species. Genes within the enriched clusters are then defined as species group-enriched genes. Identification of group-enriched genes by virtual genome method. a Workflow of our comparative analysis. 
Protein sequence alignment was performed using phmmer to construct a protein similarity network where each edge represents a significant hit between query and target. Then, a modified BGLL algorithm was applied to find clusters within this network. Each cluster was considered as a virtual gene. Genes within these clusters were allocated to the species from which they originated, subsequently generating enriched values of all clusters in six species. Group-enriched genes can be identified by comparing cluster values in ingroup species with those in outgroup species. b The number of edges and the number of components included in the protein similarity networks that were obtained under different thresholds. c The number of clusters identified by the modified BGLL algorithm using different cut-off values of modularity. The arrow indicates the cut-off value used in this study. d Principal component analyses (PCA) of the enriched values of all clusters in six Plasmodium species. Components 1 (PC1) and 2 (PC2) represent 79% and 9% of total variance, respectively.
Each protein sequence of the six Plasmodium species was used as a query and searched against the total protein sequences of these species by phmmer. Figure 1b shows the numbers of edges and components within the disconnected network using expectation values ranging from 1E-1 to 1E-16. Although the relationship between the number of edges and the threshold values was almost linear, an inconspicuous knee point was still observed at 1E-7. The number of components significantly increased at 1E-4 and mildly increased at 1E-7. Further decrease in the threshold led to a slight increase in the number of components. Therefore, we set the threshold as 1E-7. The resulting disconnected network consisted of 931,335 edges and 3768 components. We then adopted a modified BGLL algorithm to identify sequence clusters within this disconnected network (see Methods). Figure 1c shows the number of clusters identified using different modularity cut-off values. An increase in cut-off values from 0.4 to 0.5 led to a significant drop in the number of clusters, implying that many cluster structures were not well identified. The number of clusters had a relatively apparent increase when the cut-off value was reduced from 0.2 to 0.1, indicating that several homologs had been classified into different groups. To avoid the presence of a supercluster consisting of several independent clusters and the misclassification of remote homologs, we set the cut-off value to 0.2. Under this condition, 33,314 genes were grouped into 4605 clusters (Additional file 1: Table S1). Thus, we achieved a virtual genome which represents a collection of 4605 virtual genes. Among the obtained clusters, some, such as Cluster_12 and Cluster_223, consisting of 4 and 9 genes, respectively, comprise genes from a single Plasmodium species. Other clusters, such as Cluster_15 and Cluster_16, which contain 39 and 19 genes, respectively, include genes from all six species (Additional file 2: Figure S1). Cluster_1 has the largest number of members, and it contains 1096 vertices. The genes within this cluster were all from rodent malaria parasite genomes, but absent in human malaria parasites. A total of 426 clusters consist of only a single member. On the basis of the obtained sequence clusters, we generated the expression profiles of all clusters in the six Plasmodium species, where each column shows the enrichment values of all clusters in that species (Additional file 3: Table S2).
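The clustering step described above can be illustrated with a small stand-alone sketch. The paper's modified BGLL implementation is not reproduced here; the snippet below simply builds a toy similarity network and partitions it with the standard greedy modularity routine from networkx, so the toy edge list, the gene-name convention, and the choice of community routine are illustrative assumptions rather than the authors' actual code.

import networkx as nx
from networkx.algorithms import community

# Toy edge list: each tuple is (query_gene, target_gene) for one significant
# phmmer hit (E-value below the chosen threshold, e.g. 1E-7).
hits = [
    ("pf_gene1", "pv_gene1"), ("pf_gene1", "pk_gene1"), ("pv_gene1", "pk_gene1"),
    ("pb_gene1", "pc_gene1"), ("pb_gene1", "py_gene1"), ("pc_gene1", "py_gene1"),
]

G = nx.Graph()
G.add_edges_from(hits)

# Partition the network into communities (clusters of similar sequences).
clusters = community.greedy_modularity_communities(G)

# Each community plays the role of one "virtual gene"; count how many
# members it contributes per species using the gene-name prefix.
for i, members in enumerate(clusters, start=1):
    counts = {}
    for gene in members:
        species = gene.split("_")[0]          # e.g. "pf", "pb" (toy convention)
        counts[species] = counts.get(species, 0) + 1
    print(f"Cluster_{i}:", dict(sorted(counts.items())))

On real data the edge list would come from parsing the phmmer tabular output, and a Louvain-style BGLL routine would replace greedy modularity, but the bookkeeping from clusters to per-species enrichment values is the same.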
Principal component analysis (PCA) of all of these cluster values demonstrated that P. falciparum differs from the five other species in the second component, which represents 9% of the variance, while P. yoelii differs from the other parasites in the first component, which represents 79% of the variance (Fig. 1d). Comparison of the cluster profiles of the six Plasmodium species can reveal species group-enriched genes, including group-specific genes and group-expansion genes. For example, Cluster_32 is composed of 227 genes and was found to be unique to P. falciparum. Genes within this cluster encode RIFIN/STEVOR proteins, which exist specifically in P. falciparum [9]. Additionally, Cluster_161, which consists of 27 FIKK genes, was found in all species, but was much more abundant in P. falciparum than in all other Plasmodium species, consistent with the report that the FIKK gene had been amplified in P. falciparum to approximately 20 sequence-related members [13]. Therefore, our method was shown to be feasible for identifying species group-enriched genes. In comparison with previous genomic analysis, where comparison was performed between any two of these species [9], our method has two advantages. First, our analysis is more comprehensive than the previous approach: we can identify both group-specific and group-expansion genes, while in the previous comparative analysis the investigators usually focused only on group-specific genes. Second, our method makes the identification of genes underlying phenotypic differences much simpler than the previous analysis because we avoid performing comparative analysis of all pairs of species. Thereafter, we used our method to look for P. falciparum genes linked to the infection of human erythrocytes and to virulence. A cluster was considered to be enriched in a group of species if its minimal value in all ingroup species was fivefold higher than its maximal value in the outgroup species.
Identification of P. falciparum genes responsible for parasitizing human erythrocytes
As a human malaria parasite, P. falciparum can infect human erythrocytes but not the erythrocytes of rodent species, while the rodent malaria parasites are incapable of parasitizing human erythrocytes, suggesting that the P. falciparum genes enriched in human malaria parasites might be required for parasitizing human erythrocytes. To identify genes linked to this biological feature, we compared the enriched values of all clusters in human malaria parasites with those in rodent malaria parasites. As shown in Fig. 2a, there were 94 and 57 clusters enriched in human and rodent malaria parasites, respectively. To illustrate the difference between human and rodent malaria parasites in detail, P. falciparum genes within human-enriched clusters were compared with P. berghei genes included in rodent-enriched clusters. In total, 121 P. falciparum genes and 398 P. berghei genes were identified. After removing pseudogenes, 115 P. falciparum genes and 289 P. berghei genes were retained for further analysis (Additional file 4: Table S3 and Additional file 5: Table S4).
Identification of P. falciparum genes responsible for parasitizing human erythrocytes. a Heat map showing the clusters enriched in human and rodent malaria parasites. Green, black, and red indicate cluster values equal to zero, one, and higher than one, respectively. b Bar plot displaying the genomic location of 115 P. falciparum genes and 267 P. berghei genes.
Proximity to telomeres and proximity to centromeres refer to the genome regions within 40 kb of telomeres and 10 kb of centromeres, respectively. The rest of the genome was referred to as the chromosome internal region. The numbers in parentheses represent the number of genes and their percentage of the human- or rodent-parasite-enriched genes. c Venn diagram showing the number of P. falciparum genes (upper panel) or P. berghei genes (lower panel) whose proteins contain a signal peptide, a transmembrane domain, or a PEXEL motif. d Domain models of SURFIN family members. Domains were identified through CD-search (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) with a cut-off value of 0.01. TM domain stands for transmembrane domain.
Genomic location analysis revealed that very few of these P. falciparum genes are located in the vicinity of the telomeres or centromeres; almost all of them are located in the chromosome internal regions (Fig. 2b and Additional file 6: Figure S2). In contrast, 180 of the 267 P. berghei genes with a known location are present in the subtelomeric regions and five genes are located in proximity to the centromeres (Fig. 2b and Additional file 7: Figure S3), demonstrating that human and rodent parasite-enriched genes have different chromosome locations. Sequence feature analysis of the proteins encoded by these genes indicated that approximately 10% (28/289) of P. berghei candidate genes encode intracellular proteins, significantly less than the proportion for P. falciparum candidate genes, which is about 44%. In P. falciparum, there were 51 transmembrane domain-containing proteins, 10 of which have a signal peptide and 11 of which contain a PEXEL motif (Fig. 2c, upper panel). Meanwhile, in P. berghei, there were 198 proteins containing a transmembrane domain, mostly because of the presence of the Plasmodium interspersed repeat (PIR) multigene family, whose proteins are displayed on the surface of infected erythrocytes [14]. Nearly one-fifth (41/198) of them possess signal peptides, but none of them has a canonical PEXEL motif (Fig. 2c, lower panel). Among the human parasite-enriched clusters, Cluster_99 was the sole cluster that consisted of group-expansion genes. This cluster comprised genes from the PHISTc gene family, which is a subtype of the PHIST family [15]. This family was found to be amplified in human malaria parasites to more than 10 members, but has only a few members in the rodent malaria parasites (see Additional file 3: Table S2). A recent study showed that a PHISTc protein, named PFI1780w, localizes underneath the membrane of infected erythrocytes and participates in the remodeling of host erythrocytes by interacting with the ATS (acidic terminal segments) domain of P. falciparum erythrocyte membrane protein 1 (PfEMP1) [16]. Apart from Cluster_99, the remaining clusters were specific to human malaria parasites. Of them, Cluster_13 was the largest, comprising seven members of the SURF (surface-associated interspersed gene) family. Apart from the SURFIN1.1 protein, whose intracellular region contains a SICA_C (schizont-infected cell agglutination C-terminal) domain and a DNAJ domain, all other SURFIN proteins are characterized by one or two SICA_C domains and one to three ATS domains (Fig. 2d, left panel). SURFIN4.2 is the best characterized member. It can interact with F-actin and spectrin through its internal domain and be co-transported with PfEMP1 and RIFIN to the surface of infected erythrocytes [17, 18].
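The genomic-location classes used in Fig. 2b (subtelomeric within 40 kb of a chromosome end, pericentromeric within 10 kb of the centromere, otherwise internal) can be reproduced with a small helper. This is a hypothetical sketch: the gene coordinates, chromosome lengths and centromere positions are assumed to come from an annotation file and are not part of the paper's supplementary material.

# Hypothetical helper reproducing the location classes of Fig. 2b.
SUBTELOMERIC_WINDOW = 40_000      # within 40 kb of either telomere
PERICENTROMERIC_WINDOW = 10_000   # within 10 kb of the centromere

def classify_location(gene_start, gene_end, chrom_length, centromere_pos):
    """Return 'subtelomeric', 'pericentromeric' or 'internal' for one gene."""
    # distance to the nearest chromosome end (telomere)
    if min(gene_start, chrom_length - gene_end) <= SUBTELOMERIC_WINDOW:
        return "subtelomeric"
    # distance to the centromere midpoint
    if min(abs(gene_start - centromere_pos),
           abs(gene_end - centromere_pos)) <= PERICENTROMERIC_WINDOW:
        return "pericentromeric"
    return "internal"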
Analysis of the expression of SURF members revealed that SURFIN4.2 was highly transcribed at the ring stage, while SURFIN8.1, 8.2, 8.3, 1.3, and 14.1 were maximally expressed at the trophozoite stage (Additional file 8: Figure S4). Very low expression of SURFIN1.1 was observed. This difference in expression dynamics implies that these members might play different roles in the intraerythrocytic developmental cycle of the P. falciparum parasite. Besides the SURF genes, two group-specific genes, PF3D7_0520500 and PF3D7_0614000, which are required for thiamine biosynthesis, were found to be present only in human malaria parasites and not in rodent malaria parasites (see Additional file 4: Table S3). This is in agreement with a previous report describing that the thiamine biosynthesis pathway is absent in rodent malaria parasites [9]. Apart from the aforementioned genes, there were many additional protein-coding genes specific to human malaria parasites. Proteins encoded by PF3D7_0731100, PF3D7_1002100, and PF3D7_1302000 play a role in increasing the rigidity and adhesiveness of infected erythrocytes by trafficking and displaying PfEMP1 on the host erythrocytes [19]. PF3D7_1322100 is a histone-lysine N-methyltransferase gene; its protein product methylates histone H3K36 and plays a role in immune evasion [20, 21]. PF3D7_0807700 encodes a serine protease, DegP, which has a role in the growth and development of P. falciparum through its ability to confer protection against thermal/oxidative stress [22]. PF3D7_1206100 encodes an IMP-specific 5′-nucleotidase, which is involved in purine metabolism. However, the functions of approximately 44% (51/115) of the human malaria parasite-enriched genes are unknown. Taking these findings together, the genes enriched in human malaria parasites are related to a variety of biological processes, and the combination of these genes might be responsible for the overall ability of P. falciparum to parasitize human erythrocytes.
Identification of genes related to the virulence of P. falciparum
P. falciparum is much more virulent than any other human malaria parasite. We looked for genes linked to virulence by comparing the cluster profile of P. falciparum with those of P. vivax and P. knowlesi. As shown in Fig. 3a, there were 141 P. falciparum-enriched clusters, of which 139 were unique to P. falciparum. After removing 114 pseudogenes, the remaining 493 genes were analyzed further (Additional file 9: Table S5). Gene Ontology (GO) subcellular localization analysis demonstrated that the protein products of these genes were enriched in the infected host cell surface knob, host cell membrane, and Maurer's cleft (Table 1), suggesting their possible roles in cell–cell adhesion. Additionally, biological process analysis revealed that these genes were associated with the regulation of cell adhesion and erythrocyte aggregation (Table 2).
Candidate genes related to virulence of the P. falciparum parasite. a Heat map showing the clusters enriched in P. falciparum. Green, black, and red indicate cluster values equal to zero, one, and higher than one, respectively. b Pie chart displaying the enrichment of each cluster in candidate genes. The numbers in each box represent the cluster size and its percentage of the total number of P. falciparum-enriched genes. c Heat map showing the number of members detected in Plasmodium species or other species using hidden Markov models of seven families.
Deep purple indicates no member was found, black indicates one member was detected, and gold indicates more than one member was discovered.
Table 1 Cellular component analysis of proteins produced by virulence-related candidate genes. Enriched terms were ranked according to their percentage of background. The top 20 terms are listed.
Table 2 Biological process analysis of proteins produced by virulence-related candidate genes. Enriched terms were ranked according to their percentage of background. The top 20 terms are listed.
Figure 3b shows the proportion of each cluster among these candidate genes. We focused on the clusters containing more than five members. Of these clusters, Cluster_63 and Cluster_161 were composed of group-expansion genes. Cluster_63 mainly comprised var gene family members, which encode the prime virulence factor PfEMP1. The extracellular region of PfEMP1 contains DBL (Duffy binding-like) and CIDR (cysteine-rich inter-domain region) domains. The DBL domain can bind intercellular adhesion molecule 1 (ICAM1), and the CIDR domain can bind endothelial protein C receptor (EPCR) or CD36 on the endothelium surface [12, 23]. By interacting with these endothelial surface proteins, PfEMP1 mediates the attachment of infected erythrocytes to the endothelium, subsequently resulting in CM. Cluster_161 was composed of the FIKK gene family. This family encodes protein kinases that co-localize with Maurer's cleft proteins and have a role in remodeling of the erythrocyte surface [13]. Apart from the above two clusters, all of the remaining five clusters were specific to P. falciparum. The largest of them consists of rif/stevor gene family members. Protein products of this family are expressed on the surface of infected RBCs, where they either bind these cells together to form large rosettes or bind to microvascular endothelial cells, subsequently leading to the occurrence of severe malaria [17, 24]. The second largest group contains PHISTa family members. The transcription of several of them was found to be induced under febrile conditions [25]. As PHISTa proteins contain the PEXEL motif and a transmembrane domain close to their N-terminus, in a febrile state they might be exported to the host membrane and be involved in interacting with host cells. The remaining three clusters were the PfMC2TM, EPF3, and EPF4 gene families. Proteins encoded by these families are exported to Maurer's clefts, which act as a platform for marshaling exported parasite proteins addressed to the host cell plasma membrane or displayed on the erythrocyte surface, implying their possible role in assisting the correct presentation of membrane proteins on the surface of infected erythrocytes [26, 27]. To identify possible members of the above seven families in other genome-sequenced organisms, the profile hidden Markov model of each cluster was built and used as a query for a search against the reference proteome database. Except for the FIKK family, whose members were also found in other species, such as species of bacteria, fungi, and plants, the remaining six gene families were found only in the Plasmodium genus (Fig. 3c). In particular, the RIFIN/STEVOR, EPF3, and EPF4 families were unique to P. falciparum, and the PfMC2TM and PHISTa families were found only in human malaria parasites. The PfEMP1/EBAs/DBLMSP family, all members of which contain a DBL domain, comprises proteins from the EMP1, EBA, and DBLMSP families. This family has nearly 80 members in P.
falciparum and a few members in other Plasmodium species, but no members of it were detected in other organisms, suggesting that this family arose in the Plasmodium genus and then underwent dramatic proliferation in P. falciparum. We therefore removed the EBA and DBLMSP family members and established a new profile hidden Markov model for PfEMP1 proteins. Searching the reference database using this new model demonstrated that the PfEMP1 family exists only in P. falciparum (Fig. 3c). Thus, although the DBL domain can be found in all six species, PfEMP1 proteins are unique to P. falciparum and were amplified in this species. Additionally, using this new model, we identified a conserved peptide region harbored in the DBL-1α domain of all PfEMP1 proteins (Additional file 10: Figure S5), implying that an antibody recognizing this region might elicit a cross-reactive response to a substantial number of PfEMP1 variants. The remaining clusters specifically belong to P. falciparum. Proteins of several genes within these clusters have been well characterized. These include the reticulocyte binding protein homologue 5 (RH5), which aids parasite invasion of erythrocytes by binding CD147 on the erythrocyte surface [28]; two membrane protein trafficking molecules, PF3D7_0730900 and PF3D7_1478600, which play a role in the trafficking and display of the virulence protein PfEMP1 on the host erythrocytes (disruption of these genes leads to no or very low levels of surface-expressed PfEMP1 [19]); and merozoite surface protein 2 (MSP2), which is involved in fibril formation [29]. In addition, histidine-rich protein II (HRPII) released by erythrocytes infected with P. falciparum can inhibit antithrombin. It binds cellular glycosaminoglycans and prevents their interaction with antithrombin, thereby contributing to the procoagulant state associated with P. falciparum infection [30]. However, for nearly one-quarter of the P. falciparum-enriched genes the function is unknown, and this requires further elucidation. Taking these findings together, the majority of P. falciparum-enriched genes encode exported or membrane-associated proteins that either serve as adhesins or participate in membrane protein trafficking, erythrocyte invasion, and the inhibition of antithrombin, all pointing towards the virulence of the P. falciparum parasite.
Identification of novel molecules contributing to cerebral malaria
CM is the most life-threatening complication of human malaria. Many parasite proteins that mediate the binding of infected erythrocytes to the endothelium remain unknown, impeding our understanding of the molecular mechanisms behind CM. To identify novel genes potentially related to CM, we performed sequence feature analysis of the P. falciparum-enriched genes and identified 308 genes whose proteins contain transmembrane domains. Genes whose products were annotated as peripheral or integral proteins of the Maurer's cleft membrane were removed, including members of the EPF4, PfMC2TM, and FIKK families. Genes producing proteins associated with membrane protein trafficking were also removed. Three genes, namely, PF3D7_1431800, PF3D7_0529200, and PF3D7_1140000, encode proteins annotated as apyrase, sugar transporter, and carbonic anhydrase, respectively. They are unlikely to serve as adhesion proteins and were thus not considered further. Finally, we identified a total of 279 candidate genes that may contribute to CM. Not all of the candidate genes are associated with CM, however, because some genes were not expressed at the trophozoite stage.
We thus needed to integrate gene expression information into our analysis. The RNA-seq dataset GSE23787, which features gene expression data measured during the intraerythrocytic developmental cycle of P. falciparum, was adopted to identify genes highly expressed at the trophozoite stage. PCA revealed that the expression datasets of two adjacent time points tend to lie closer together in the PCA plot (Fig. 4a), suggesting a small difference between them. However, the distance in the plot between the datasets from 5 and 10 h was larger than that of any other two adjacent time points, demonstrating that the P. falciparum parasite experienced a clear change in gene expression at 10 h. A previous study revealed that genes induced in this stage are mainly associated with the cytoplasmic transcriptional and translational machinery, glycolysis, and ribonucleotide biosynthesis [31]. Fast Fourier transform (FFT) analysis was thus introduced to extract genes associated with the trophozoite stage. The amplitude of expression of each gene was computed, and we retained only expression signals with maximal amplitude at frequency ω = 1. After removing genes with a mean log2-transformed TPM < 2 or an amplitude A < 0.5 at ω = 1, the remaining 4248 genes were ordered in terms of the time of their peak expression (Fig. 4b). As P. falciparum has an approximately 48 h intraerythrocytic cycle, to capture as many trophozoite-stage genes as possible, we considered the genes with a peak expression time point (tp) at 15–40 h to be highly expressed in the trophozoite stage [32]. Using this method, we identified a total of 3425 genes maximally expressed in this stage.
Identification of P. falciparum genes contributing to cerebral malaria. a Principal component analysis performed on eight RNA-seq datasets. Components 1 (PC1) and 2 (PC2) represent 71% and 21% of total variance, respectively. Datasets of two adjacent time points tend to be located close together within the plot. b The periodic genes identified by FFT, ordered by the time points of their peak expression. Expression values of each transcript were log2-scaled and centered by subtracting their mean value. c Venn diagram of the number of genes transcribed at the trophozoite stage and that of candidate genes whose proteins contain transmembrane domains. d Domain model of the PIESP2 protein (upper panel) and expression signal of PIESP2 in the intraerythrocytic cycle (lower panel). TM represents transmembrane domain. The blue line represents the observed expression level of PIESP2 and the red line is the fitting curve obtained with FFT.
Comparing the 279 candidate genes with the genes expressed in the trophozoite stage, we obtained 55 candidate genes that overlapped between these groups (Fig. 4c and Additional file 11: Table S6). Most of them encode exported proteins and have never been studied, but several of them, such as PfEMP1 and RIFIN/STEVOR, have been reported to mediate the interaction between infected erythrocytes and endothelial cells [23, 33, 34]. Two genes were newly identified as potentially contributing to CM, namely, the genes encoding glycophorin binding protein (GBP) and parasite-infected erythrocyte surface protein 2 (PIESP2). The presence of the PEXEL motif and the transmembrane domains in these two proteins suggests their possible location on the surface of infected erythrocytes.
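The FFT-based selection of trophozoite-stage genes described above can be sketched with numpy. The sketch assumes expr is a genes-by-8 array of log2-scaled TPM values for the eight time points of GSE23787; the thresholds and the 15–40 h window are taken from the text, while the array names and the simplified handling of the "maximal amplitude at ω = 1" check are assumptions rather than the authors' code.

# Illustrative numpy sketch of the periodicity filter and peak-time estimate.
import numpy as np

def peak_time_and_amplitude(profile):
    """Return (tp, amplitude at w = 1) for one expression profile."""
    x = profile - profile.mean()          # centre so that the amplitude at w = 0 is zero
    Y = np.fft.fft(x, norm="ortho")       # 1/sqrt(N) normalisation, as in the Methods
    amp = np.abs(Y[1])                    # magnitude at frequency w = 1
    phase = np.angle(Y[1])
    # peak-time formula from the Methods section
    if phase < 0:
        tp = (-phase) / (2 * np.pi) * 35 + 5
    else:
        tp = (-phase) / (2 * np.pi) * 35 + 40
    return tp, amp

def trophozoite_genes(expr, gene_ids, min_mean=2.0, min_amp=0.5, window=(15.0, 40.0)):
    """Select genes whose expression peaks within the trophozoite window."""
    selected = []
    for gid, profile in zip(gene_ids, expr):
        if profile.mean() < min_mean:
            continue
        tp, amp = peak_time_and_amplitude(profile)
        if amp >= min_amp and window[0] <= tp <= window[1]:
            selected.append(gid)
    return selected

Intersecting the resulting gene list with the 279 membrane-protein candidates corresponds to the overlap shown in the Venn diagram of Fig. 4c.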
The GBP protein contains a tandem repeat that can bind glycophorin on the erythrocyte surface [35], implying that this protein might have a role in mediating the binding of infected erythrocytes to uninfected ones. PIESP2 is an erythrocyte surface protein and contains a gap-junction-related Neuromodulin_N domain in its extracellular region (Fig. 4d, upper panel). It was maximally transcribed at the trophozoite stage (tp = 22.5 h, Fig. 4d, lower panel). In a serology study, the antibody level against PIESP2 in a malaria-protected group was much higher than that in a malaria-susceptible group [36], suggesting that the blockage of PIESP2 might confer a protective effect against malaria. In view of these features of PIESP2, we were prompted to consider that it might play a role in CM. Therefore, we selected this gene as an interesting target for further functional characterization in our lab. In this study, to identify P. falciparum genes that contribute to human disease, we developed a virtual genome method that can be applied to identify genes enriched in a group of species, including group-specific genes and group-expansion genes. By this method, we looked for protein-coding genes in the P. falciparum genome that are responsible for parasitizing human erythrocytes, for human virulence, and for CM. Our method can be used not only for malaria genome comparisons, but also for other pathogen genome comparisons, such as for Toxoplasma gondii and Mycobacterium tuberculosis. As mentioned previously, our method is much simpler and more comprehensive than previous comparative analysis methods; however, it has two limitations that should be pointed out. One limitation is that we used the modified BGLL algorithm to find disjoint clusters, but in practice many clusters overlap to some extent, and some vertices are shared by many clusters. Therefore, it is reasonable to expect that an algorithm allowing cluster overlap should outperform the current BGLL method. We actually attempted to apply the extended Girvan and Newman algorithm and the clique percolation method to identify overlapping clusters within the protein similarity network [37, 38]. Owing to the high computational requirements (the analysis had not finished even 7 days after the program was started), we had to fall back on the fast greedy BGLL algorithm instead. Another limitation is that edge weight was not considered when performing the modularity analysis, leading to a failure to identify some clusters. For example, in the P. falciparum genome, the DBL family has three members, including the genes encoding erythrocyte binding antigen-175 (EBA-175), EBA-140, and EBA-181 [39]. They were assigned, together with the var gene family members, to Cluster_63, as all of these genes produce proteins containing DBL domains. However, despite the significant similarities among these protein sequences, their alignment scores were quite different: the scores between any two members of the DBL family were much higher than those between members of the DBL family and members of the PfEMP1 family. To overcome this issue, we can introduce an edge weight, representing the degree of conservation between the query and the target, to construct a weighted network. The identification of clusters within this weighted network might provide a better result. We compared cluster values in human malaria parasites with those in rodent malaria parasites in the search for P. falciparum genes potentially responsible for parasitizing human erythrocytes.
In total, 115 genes were identified to be enriched in human malaria parasites and to participate in many biological processes, such as thiamine biosynthesis, parasite growth and development, and purine metabolism. One peculiarity of human malaria parasites is that these species contain several genes whose proteins are involved in the trafficking and display of membrane proteins on the surface of infected erythrocytes, including three EMP1-trafficking protein-coding genes and SURFIN4.2. The disruption of some of these genes in P. falciparum resulted in a complete lack of, or greatly reduced, expression of surface proteins on infected erythrocytes [19]. Thus, we proposed that human malaria parasites are capable of utilizing a distinctive transport system to export proteins to the membrane of infected erythrocytes. Additionally, 57 clusters consisting of 289 genes from P. berghei were enriched in rodent malaria parasites. Most of these genes are located within subtelomeric regions, which usually contain various repeated elements. Subtelomeric regions are usually subject to frequent duplication and recombination events, which are mechanisms for generating antigenic diversity of genes and enhancing the adaptation of organisms to the environment [40]. Cellular component analysis demonstrated that most of the proteins encoded by these genes are displayed on the surface of erythrocytes, and thus could be potential targets of the host's immune response. In light of these results, we speculated that the majority of rodent parasite-enriched genes are probably involved in antigenic variation and immune evasion, subsequently contributing to survival in rodent erythrocytes and the establishment of long-lasting chronic infection, a process that is essential in malaria parasites to ensure mosquito transmission and the completion of the life cycle [41]. To search for genes related to the virulence of P. falciparum parasites, we compared the enriched values of all clusters in P. falciparum with those in P. vivax and P. knowlesi. Finally, we identified 493 candidate genes. Some of these genes encode proteins related to cytoadhesion, such as the RIFIN/STEVOR and PfEMP1 proteins. Others participate in erythrocyte invasion and the inhibition of antithrombin. In particular, a number of the P. falciparum-enriched genes were shown to be associated with membrane protein trafficking, including genes from the FIKK, PfMC2TM, EPF3, and EPF4 families, suggesting that P. falciparum has a more powerful membrane protein transporting system than the other two human malaria parasites. One possible explanation for this is that the P. falciparum parasite has developed a unique cytoadhesion and antigenic variation system encoded by genes from the var, rifin/stevor, or other families. The trafficking and correct exposure of these molecules on infected erythrocytes require the assistance of a number of trafficking proteins encoded by the aforementioned genes. Therefore, these trafficking proteins could be novel therapeutic targets to reduce pathogen virulence by decreasing the exposure of virulence factors on the surface of erythrocytes. In an attempt to discover novel genes that contribute to CM, we integrated FFT analysis of gene expression and identified 55 candidate genes. Considering that the surface antigen PIESP2 contains the gap-junction-related Neuromodulin_N domain and that anti-PIESP2 might protect against malaria, we finally chose this protein as an interesting target for further experimental study.
Supposing that PIESP2 participates in CM, blockage of this antigen by an antibody could be a promising strategy to prevent CM for the following reasons. One is that the P. falciparum parasite has an approximately 48-h intraerythrocytic developmental cycle and PIESP2 is highly expressed at the trophozoite stage, which means that an antibody against PIESP2 has more than 20 h to recognize and bind this antigen before the release of new merozoites. Therefore, compared with antibodies against invasion-related parasite proteins, anti-PIESP2 might be more effective at preventing malaria infection because the parasite invasion process is extremely rapid (taking less than 2 min) and is only at risk of immunological attack for a very short time [42]. The other reason is that an antibody against PIESP2 might disrupt the interaction between PIESP2 and its interactant on the endothelium surface, subsequently decreasing the binding of infected erythrocytes to the microvasculature. To date, we have successfully produced a soluble extracellular domain of PIESP2. Efforts to elucidate the function of PIESP2 are currently ongoing. In this study, to identify P. falciparum genes linked to human disease, we developed a new comparative analysis method that can be applied to find both group-specific and group-expansion genes. Through genome comparisons, we identified a limited number of genes in the P. falciparum genome related to parasitizing human erythrocytes, virulence, and CM. Our analysis not only revealed the genome-wide differences between P. falciparum and five other Plasmodium species, but also identified several novel genes that could serve as starting points for follow-up experimental investigations.
Protein sequence acquisition
Protein sequences of three human malaria parasites, P. falciparum 3D7, P. knowlesi strain H, and P. vivax Sal-1, and three rodent malaria parasites, P. berghei ANKA, P. chabaudi chabaudi, and P. yoelii yoelii 17XNL, were acquired from the PlasmoDB database (http://plasmodb.org). Sequences with length ≤ 50 aa were removed. The remaining sequences were combined into a total set containing 5532 sequences from P. falciparum, 5320 sequences from P. knowlesi, 5580 sequences from P. vivax, 5070 sequences from P. berghei ANKA, 5211 sequences from P. chabaudi, and 6601 sequences from P. yoelii. The total set was used for sequence alignment.
Protein sequence alignment
We employed phmmer instead of BLASTP to perform protein sequence alignment since it is more sensitive and accurate than BLASTP [43]. Thresholds ranging from 1E-01 to 1E-16 were tested. A hit with an expected value less than the threshold was considered to be significant. By this rule, we established a protein correlation matrix $\mathbf{A} = [a_{ij}]_{33314 \times 33314}$, where $a_{ij} = 1$ indicates that protein $i$ significantly hits protein $j$, and $a_{ij} = 0$ shows no significant hit between proteins $i$ and $j$. In the alignment analysis we considered only mutual hits between two proteins, that is,
$$ a_{ij}=\begin{cases} 1 & \text{if } a_{ij}=a_{ji}=1 \\ 0 & \text{if } a_{ij}=a_{ji}=0 \ \text{ or } \ a_{ij}\neq a_{ji} \end{cases} $$
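As a toy illustration of the mutual-hit rule above, the symmetrization can be written as a single boolean operation. The variable names are placeholders, and the real 33,314 × 33,314 matrix would normally be stored as a sparse matrix rather than a dense array.

# Sketch of the mutual-hit rule: keep a_ij = 1 only if a_ij = a_ji = 1.
import numpy as np

def symmetrize_mutual_hits(A):
    A = np.asarray(A, dtype=bool)
    return np.logical_and(A, A.T).astype(np.uint8)

# toy example with three proteins: only the hit between proteins 0 and 1 is mutual
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 0, 0]])
print(symmetrize_mutual_hits(A))
# [[0 1 0]
#  [1 0 0]
#  [0 0 0]]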
The obtained matrix was converted into a protein similarity network, which was composed of several separate components.
Sequence cluster identification
We introduced a modularity method to find communities in the protein similarity network. Modularity refers to the fraction of the edges that fall within the given groups minus the expected value of such a fraction if the edges are distributed at random. It has been used to evaluate the cluster structure of networks from a global perspective. Despite the effectiveness of the modularity method in cluster identification, finding the maximum modularity is an NP-complete problem [44] and is computationally expensive. An approximation method, the BGLL algorithm, has been developed and is widely used to find sequence clusters within a connected network [45]. The BGLL algorithm consists of two steps. In the first step, each node is considered a cluster; a node is moved into the cluster of a neighboring node when the maximal modularity gain is positive, and this process is applied to all nodes until the modularity value no longer improves. In the second step, the clusters found in the first step are considered as nodes, and a new network is built; the edge weight between any two nodes is given by the sum of the weights of the edges between the corresponding clusters. These two combined steps constitute a pass, which is repeated until the maximum modularity is achieved. Here, to apply this algorithm to a disconnected network, we modified the BGLL algorithm in two respects: first, the depth first search (DFS) algorithm was employed to extract all separate components; second, for components with the number of nodes ≥ 3, the BGLL algorithm was used recursively until the modularity of the resulting subnetwork was below the cut-off value. We tested cut-off values ranging from 0.1 to 0.5 to find reasonable clusters. For components with the number of nodes ≤ 3, the BGLL algorithm was not applied, and the component was directly kept as a cluster. The modified BGLL algorithm was implemented in MATLAB 2015a.
Homolog identification by profile hidden Markov model
The construction of a profile hidden Markov model (HMM) involves two steps: multiple sequence alignment and parameter estimation. Protein sequences in each of the designated clusters were subjected to multiple sequence alignment using MSAProbs [46]; then, the aligned ensembles were used to estimate the parameters of the profile HMM using HMMER3.1b1. The resulting models were searched against a reference proteome database to find possible homologs in genome-sequenced species through the web server HMMER (https://www.ebi.ac.uk/Tools/hmmer/search/hmmsearch). A bit score ≥ 40 was considered significant.
Fast Fourier transform (FFT) analysis of gene expression data
RNA-seq reads from eight time points of the intraerythrocytic cycle (GSE23787) were acquired [47]. Reads with low complexity, low quality, and multiple Ns were filtered out. Duplicated reads were also removed. Thereafter, the resulting clean reads were mapped against the P. falciparum genome (PlasmoDB v26) using HISAT2 [48]. The abundance of each reference gene was estimated with StringTie [49]. Relative transcriptional activity of each euchromatic gene was assessed using transcripts per million (TPM). When TPM < 1, it was adjusted to 1. All expression values were log2-scaled and were used for FFT analysis. FFT can be used to detect transcripts specific to a biological process, such as the cell cycle and the circadian clock.
It converts an expression signal in the time domain to the frequency domain, showing the magnitude of each frequency [50]. The formula was as follows:
$$ Y_k=\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}X_n\, e^{-jk\left(\frac{2\pi}{N}\right)n} $$
Here, N = 8 is the length of the signal and k ≤ 7 is the frequency. The expression value of each transcript was centered by subtracting the mean value so that the amplitude equals zero at frequency ω = 0. Transcripts correlated with the cell cycle were selected as those whose magnitude reaches its maximum, with M > 0.5, at frequency ω = 1. To estimate the maximal expression time point of the selected transcripts, the phase value (P) at frequency ω = 1 was calculated. The maximal expression time point was estimated using the following formula:
$$ t_p=\begin{cases} \dfrac{-P}{2\pi}\cdot 35+5 & \text{if } P \text{ is negative} \\ \dfrac{-P}{2\pi}\cdot 35+40 & \text{if } P \text{ is positive} \end{cases} $$
Based on the tp value, we can identify genes highly expressed at a particular stage during the intraerythrocytic developmental cycle of malaria parasites.
Enrichment analysis and sequence feature identification
To understand the biological meaning of a given gene set, we performed GO term enrichment analysis through the PlasmoDB web server. A GO term was considered to be statistically overrepresented if its p-value was less than 0.05. Protein sequence features, such as the signal peptide and the transmembrane domain, were analyzed using SignalP4.1 and TMHMM2.0 [51, 52], respectively. To avoid a signal peptide being wrongly predicted to be a transmembrane domain, the N-terminus of each sequence was truncated by 25 aa. Tools were run with default parameters. The PEXEL motif, with the consensus R/KxLxE/Q, is necessary for parasite protein export into the host erythrocytes [15]. Proteins containing this motif were identified via the PlasmoDB web server.
Abbreviations
ATS: Acidic terminal segment; CIDR: Cysteine-rich inter-domain region; CM: Cerebral malaria; DBL: Duffy-binding-like; EBA-175: Erythrocyte binding antigen-175; EPCR: Endothelial protein C receptor; EPF3: Exported protein family 3; GBP: Glycophorin binding protein; HRPII: Histidine-rich protein II; ICAM1: Intercellular adhesion molecule 1; MC2TM: Maurer's cleft two transmembrane; MSP2: Merozoite surface protein 2; PCA: Principal component analysis; PfEMP1: P. falciparum erythrocyte membrane protein 1; PHISTa: Plasmodium helical interspersed subtelomeric family subtype a; PIESP2: Parasite-infected erythrocyte surface protein 2; PIR: Plasmodium interspersed repeat; RBC: Red blood cell; SICA_C: Schizont-infected cell agglutination C-terminal
WHO. World malaria report 2016. World Health Organization; 2016. http://www.who.int/malaria/publications/world-malaria-report-2016/en/. Fuehrer HP, Noedl H. Recent advances in detection of Plasmodium ovale: implications of separation into the two species Plasmodium ovale wallikeri and Plasmodium ovale curtisi. J Clin Microbiol. 2014;52(2):387–91. Otto TD, Bohme U, Jackson AP, Hunt M, Franke-Fayard B, Hoeijmakers WA, Religa AA, Robertson L, Sanders M, Ogun SA, Cunningham D, Erhart A, Billker O, Khan SM, Stunnenberg HG, Langhorne J, Holder AA, Waters AP, Newbold CI, Pain A, Berriman M, Janse CJ.
A comprehensive evaluation of rodent malaria parasite genomes and gene expression. BMC Biol. 2014;12:86. Miller LH, Ackerman HC, Su XZ, Wellems TE. Malaria biology and disease pathogenesis: insights for new treatments. Nat Med. 2013;19(2):156–67. Grau GE, Craig AG. Cerebral malaria pathogenesis: revisiting parasite and host contributions. Future Microbiol. 2012;7(2):291–302. Claessens A, Rowe JA. Selection of Plasmodium falciparum parasites for cytoadhesion to human brain endothelial cells. J Vis Exp. 2012;59:e3122. Nicolas X, Granier H, Laborde JP, Talarmin F, Klotz F. Plasmodium vivax: therapy update. Presse Med. 2001;30(15):767–71. Aurrecoechea C, Brestelli J, Brunk BP, Dommer J, Fischer S, Gajria B, Gao X, Gingle A, Grant G, Harb OS, Heiges M, Innamorato F, Iodice J, Kissinger JC, Kraemer E, Li W, Miller JA, Nayak V, Pennington C, Pinney DF, Roos DS, Ross C, Stoeckert CJ Jr, Treatman C, Wang H. PlasmoDB: a functional genomic database for malaria parasites. Nucleic Acids Res. 2009;37(Database issue):D539–43. Frech C, Chen N. Genome comparison of human and non-human malaria parasites reveals species subset-specific genes potentially linked to human disease. PLoS Comput Biol. 2011;7(12):e1002320. Nacer A, Roux E, Pomel S, Scheidig-Benatar C, Sakamoto H, Lafont F, Scherf A, Mattei D. Clag9 is not essential for PfEMP1 surface expression in non-cytoadherent Plasmodium falciparum parasites with a chromosome 9 deletion. PLoS One. 2011;6(12):e29039. Nacer A, Claes A, Roberts A, Scheidig-Benatar C, Sakamoto H, Ghorbal M, Lopez-Rubio JJ, Mattei D. Discovery of a novel and conserved Plasmodium falciparum exported protein that is important for adhesion of PfEMP1 at the surface of infected erythrocytes. Cell Microbiol. 2015;17(8):1205–16. Lau CK, Turner L, Jespersen JS, Lowe ED, Petersen B, Wang CW, Petersen JE, Lusingu J, Theander TG, Lavstsen T, Higgins MK. Structural conservation despite huge sequence diversity allows EPCR binding by the PfEMP1 family implicated in severe childhood malaria. Cell Host Microbe. 2015;17(1):118–29. Nunes MC, Goldring JP, Doerig C, Scherf A. A novel protein kinase family in Plasmodium falciparum is differentially transcribed and secreted to various cellular compartments of the host cell. Mol Microbiol. 2007;63(2):391–403. Carlton JM, Angiuoli SV, Suh BB, Kooij TW, Pertea M, Silva JC, Ermolaeva MD, Allen JE, Selengut JD, Koo HL, Peterson JD, Pop M, Kosack DS, Shumway MF, Bidwell SL, Shallom SJ, van Aken SE, Riedmuller SB, Feldblyum TV, Cho JK, Quackenbush J, Sedegah M, Shoaibi A, Cummings LM, Florens L, Yates JR, Raine JD, Sinden RE, Harris MA, Cunningham DA, Preiser PR, Bergman LW, Vaidya AB, van Lin LH, Janse CJ, Waters AP, Smith HO, White OR, Salzberg SL, Venter JC, Fraser CM, Hoffman SL, Gardner MJ, Carucci DJ. Genome sequence and comparative analysis of the model rodent malaria parasite Plasmodium yoelii yoelii. Nature. 2002;419(6906):512–9. Sargeant TJ, Marti M, Caler E, Carlton JM, Simpson K, Speed TP, Cowman AF. Lineage-specific expansion of proteins exported to erythrocytes in malaria parasites. Genome Biol. 2006;7(2):R12. Oberli A, Slater LM, Cutts E, Brand F, Mundwiler-Pachlatko E, Rusch S, Masik MF, Erat MC, Beck HP, Vakonakis I. A Plasmodium falciparum PHIST protein binds the virulence factor PfEMP1 and comigrates to knobs on the host cell surface. FASEB J. 2014;28(10):4420–33. Winter G, Kawai S, Haeggstrom M, Kaneko O, von Euler A, Kawazu S, Palm D, Fernandez V, Wahlgren M. 
SURFIN is a polymorphic antigen expressed on Plasmodium falciparum merozoites and infected erythrocytes. J Exp Med. 2005;201(11):1853–63. Zhu X, He Y, Liang Y, Kaneko O, Cui L, Cao Y. Tryptophan-rich domains of Plasmodium falciparum SURFIN4.2 and Plasmodium vivax PvSTP2 interact with membrane skeleton of red blood cell. Malar J. 2017;16(1):121. Maier AG, Rug M, O'Neill MT, Brown M, Chakravorty S, Szestak T, Chesson J, Wu Y, Hughes K, Coppel RL, Newbold C, Beeson JG, Craig A, Crabb BS, Cowman AF. Exported proteins required for virulence and rigidity of Plasmodium falciparum-infected human erythrocytes. Cell. 2008;134(1):48–61. Ukaegbu UE, Kishore SP, Kwiatkowski DL, Pandarinath C, Dahan-Pasternak N, Dzikowski R, Deitsch KW. Recruitment of PfSET2 by RNA polymerase II to variant antigen encoding loci contributes to antigenic variation in P. Falciparum. PLoS Pathog. 2014;10(1):e1003854. Jiang L, Mu J, Zhang Q, Ni T, Srinivasan P, Rayavara K, Yang W, Turner L, Lavstsen T, Theander TG, Peng W, Wei G, Jing Q, Wakabayashi Y, Bansal A, Luo Y, Ribeiro JM, Scherf A, Aravind L, Zhu J, Zhao K, Miller LH. PfSETvs methylation of histone H3K36 represses virulence genes in Plasmodium falciparum. Nature. 2013;499(7457):223–7. Sharma S, Jadli M, Singh A, Arora K, Malhotra P. A secretory multifunctional serine protease, DegP of Plasmodium falciparum, plays an important role in thermo-oxidative stress, parasite growth and development. FEBS J. 2014;281(6):1679–99. Lennartz F, Adams Y, Bengtsson A, Olsen RW, Turner L, Ndam NT, Ecklu-Mensah G, Moussiliou A, Ofori MF, Gamain B, Lusingu JP, Petersen JE, Wang CW, Nunes-Silva S, Jespersen JS, Lau CK, Theander TG, Lavstsen T, Hviid L, Higgins MK, Jensen AT. Structure-guided identification of a family of dual receptor-binding PfEMP1 that is associated with cerebral malaria. Cell Host Microbe. 2017;21(3):403–14. Niang M, Bei AK, Madnani KG, Pelly S, Dankwa S, Kanjee U, Gunalan K, Amaladoss A, Yeo KP, Bob NS, Malleret B, Duraisingh MT, Preiser PR. STEVOR is a Plasmodium falciparum erythrocyte binding protein that mediates merozoite invasion and rosetting. Cell Host Microbe. 2014;16(1):81–93. Oakley MS, Kumar S, Anantharaman V, Zheng H, Mahajan B, Haynes JD, Moch JK, Fairhurst R, McCutchan TF, Aravind L. Molecular factors and biochemical pathways induced by febrile temperature in intraerythrocytic Plasmodium falciparum parasites. Infect Immun. 2007;75(4):2012–25. Bachmann A, Scholz JA, Janssen M, Klinkert MQ, Tannich E, Bruchhaus I, Petter M. A comparative study of the localization and membrane topology of members of the RIFIN, STEVOR and PfMC-2TM protein families in Plasmodium falciparum-infected erythrocytes. Malar J. 2015;14:274. Mbengue A, Audiger N, Vialla E, Dubremetz JF, Braun-Breton C. Novel Plasmodium falciparum Maurer's clefts protein families implicated in the release of infectious merozoites. Mol Microbiol. 2013;88(2):425–42. Crosnier C, Bustamante LY, Bartholdson SJ, Bei AK, Theron M, Uchikawa M, Mboup S, Ndir O, Kwiatkowski DP, Duraisingh MT, Rayner JC, Wright GJ. Basigin is a receptor essential for erythrocyte invasion by Plasmodium falciparum. Nature. 2011;480(7378):534–7. Zhang X, Adda CG, Low A, Zhang J, Zhang W, Sun H, Tu X, Anders RF, Norton RS. Role of the helical structure of the N-terminal region of Plasmodium falciparum merozoite surface protein 2 in fibril formation and membrane interaction. Biochemistry. 2012;51(7):1380–7. Ndonwi M, Burlingame OO, Miller AS, Tollefsen DM, Broze GJ Jr, Goldberg DE. 
Inhibition of antithrombin by Plasmodium falciparum histidine-rich protein II. Blood. 2011;117(23):6347–54. Bozdech Z, Llinas M, Pulliam BL, Wong ED, Zhu J, DeRisi JL. The transcriptome of the intraerythrocytic developmental cycle of Plasmodium falciparum. PLoS Biol. 2003;1(1):E5. Wilson DW, Goodman CD, Sleebs BE, Weiss GE, de Jong NW, Angrisano F, Langer C, Baum J, Crabb BS, Gilson PR, McFadden GI, Beeson JG. Macrolides rapidly inhibit red blood cell invasion by the human malaria parasite, Plasmodium falciparum. BMC Biol. 2015;13:52. Chen Q, Schlichtherle M, Wahlgren M. Molecular aspects of severe malaria. Clin Microbiol Rev. 2000;13(3):439–50. Goel S, Palmkvist M, Moll K, Joannin N, Lara P, Akhouri RR, Moradi N, Ojemalm K, Westman M, Angeletti D, Kjellin H, Lehtio J, Blixt O, Idestrom L, Gahmberg CG, Storry JR, Hult AK, Olsson ML, von Heijne G, Nilsson I, Wahlgren M. RIFINs are adhesins implicated in severe Plasmodium falciparum malaria. Nat Med. 2015;21(4):314–7. Kochan J, Perkins M, Ravetch JV. A tandemly repeated sequence determines the binding domain for an erythrocyte receptor binding protein of P. falciparum. Cell. 1986;44(5):689–96. Crompton PD, Kayala MA, Traore B, Kayentao K, Ongoiba A, Weiss GE, Molina DM, Burk CR, Waisberg M, Jasinskas A, Tan X, Doumbo S, Doumtabe D, Kone Y, Narum DL, Liang X, Doumbo OK, Miller LH, Doolan DL, Baldi P, Felgner PL, Pierce SK. A prospective analysis of the Ab response to Plasmodium falciparum before and after a malaria season by protein microarray. Proc Natl Acad Sci U S A. 2010;107(15):6958–63. Adamcsek B, Palla G, Farkas IJ, Derenyi I, Vicsek T. CFinder: locating cliques and overlapping modules in biological networks. Bioinformatics. 2006;22(8):1021–3. Gregory S. An algorithm to find overlapping community structure in networks. Proceedings of the 11th European conference on principles and practice of knowledge discovery in databases; 2007. https://doi.org/10.1007/978-3-540-74976-9_12. Malpede BM, Lin DH, Tolia NH. Molecular basis for sialic acid-dependent receptor recognition by the Plasmodium falciparum invasion protein erythrocyte-binding antigen-140/BAEBL. J Biol Chem. 2013;288(17):12406–15. Rubio JP, Thompson JK, Cowman AF. The var genes of Plasmodium falciparum are located in the subtelomeric region of most chromosomes. EMBO J. 1996;15(15):4069–77. Pain A, Bohme U, Berry AE, Mungall K, Finn RD, Jackson AP, Mourier T, Mistry J, Pasini EM, Aslett MA, Balasubrammaniam S, Borgwardt K, Brooks K, Carret C, Carver TJ, Cherevach I, Chillingworth T, Clark TG, Galinski MR, Hall N, Harper D, Harris D, Hauser H, Ivens A, Janssen CS, Keane T, Larke N, Lapp S, Marti M, Moule S, Meyer IM, Ormond D, Peters N, Sanders M, Sanders S, Sargeant TJ, Simmonds M, Smith F, Squares R, Thurston S, Tivey AR, Walker D, White B, Zuiderwijk E, Churcher C, Quail MA, Cowman AF, Turner CM, Rajandream MA, Kocken CH, Thomas AW, Newbold CI, Barrell BG, Berriman M. The genome of the simian and human malaria parasite Plasmodium knowlesi. Nature. 2008;455(7214):799–803. Wright GJ, Rayner JC. Plasmodium falciparum erythrocyte invasion: combining function with immune evasion. PLoS Pathog. 2014;10(3):e1003943. Eddy SR. Profile hidden Markov models. Bioinformatics. 1998;14(9):755–63. Newman MEJ, Girvan M. Finding and evaluating community structure in networks. Phys Rev E Stat Nonlinear Soft Matter Phys. 2004;69(2 Pt 2):026113. Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech: Theory Exp. 2008;2008:P10008.
Liu Y, Schmidt B, Maskell DL. MSAProbs: multiple sequence alignment based on pair hidden Markov models and partition function posterior probabilities. Bioinformatics. 2010;26(16):1958–64. Bartfai R, Hoeijmakers WA, Salcedo-Amaya AM, Smits AH, Janssen-Megens E, Kaan A, Treeck M, Gilberger TW, Francoijs KJ, Stunnenberg HG. H2A.Z demarcates intergenic regions of the Plasmodium falciparum epigenome that are dynamically marked by H3K9ac and H3K4me3. PLoS Pathog. 2010;6(12):e1001223. Kim D, Langmead B, Salzberg SL. HISAT: a fast spliced aligner with low memory requirements. Nat Methods. 2015;12(4):357–60. Pertea M, Pertea GM, Antonescu CM, Chang TC, Mendell JT, Salzberg SL. StringTie enables improved reconstruction of a transcriptome from RNA-seq reads. Nat Biotechnol. 2015;33(3):290–5. Liu X, Huang Y, Liang J, Zhang S, Li Y, Wang J, Shen Y, Xu Z, Zhao Y. Computational prediction of protein interactions related to the invasion of erythrocytes by malarial parasites. BMC Bioinformatics. 2014;15:393. Petersen TN, Brunak S, von Heijne G, Nielsen H. SignalP 4.0: discriminating signal peptides from transmembrane regions. Nat Methods. 2011;8(10):785–6. Sonnhammer EL, von Heijne G, Krogh A. A hidden Markov model for predicting transmembrane helices in protein sequences. Proc Int Conf Intell Syst Mol Biol. 1998;6:175–82.
This work was supported by a grant from the National Natural Science Foundation of China (Grant No. 31600615) and the Natural Science Foundation of Shaanxi Province (Grant No. 2016JQ8023), as well as the China Postdoctoral Science Foundation (Grant No. 2015M582796). Protein sequences used in our analysis are available in PlasmoDB (http://plasmodb.org/common/downloads/release-26/); RNA-seq datasets are available from GEO (Gene Expression Omnibus) under accession number GSE23787 (https://www.ncbi.nlm.nih.gov/gds/?term=GSE23787). The MATLAB code of the modified BGLL algorithm can be obtained upon request from the first author.
Department of Pathogenic Biology, Fourth Military Medical University, Xi'an, 710032, China: Xuewu Liu, Jiao Liang & Ya Zhao. Department of Neurology, Xijing Hospital, Fourth Military Medical University, Xi'an, 710032, China: Yuanyuan Wang, Luojun Wang, Na Qin & Gang Zhao.
XL and YW wrote the program for the modified BGLL algorithm, performed the RNA-seq dataset analysis, and drafted the manuscript. JL performed gene annotation and prepared all figures and tables for this paper. LW and NQ helped to collect protein sequences and gave constructive advice for the discussion. The corresponding authors GZ and YZ initiated this study and helped in writing the manuscript. All authors have read and approved the final manuscript. XL, YW, and JL contributed equally to this work. Correspondence to Ya Zhao or Gang Zhao.
Table S1. The corresponding relationships between 33,314 genes and 4605 clusters. (XLSX 605 kb) Figure S1. Clusters composed of members from a single species or from six species. a) Clusters comprising P. vivax genes (left panel) or P. falciparum genes (right panel). b) Clusters comprising genes from six Plasmodium species. (TIF 1617 kb) Table S2. Enriched values of all clusters in six Plasmodium species. (XLSX 154 kb) Table S3. The candidate P. falciparum genes probably responsible for parasitizing human erythrocytes. (XLSX 16 kb) Table S4. The P. berghei genes included in rodent malaria parasite-enriched clusters. (XLSX 20 kb) Figure S2. Genomic location of 115 P. falciparum genes.
(TIF 968 kb) Figure S3. Genomic location of 267 P. berghei genes. (TIF 2034 kb) Figure S4. Expression dynamics of SURF family members in the intraerythrocytic cycle of the P. falciparum parasite. (TIF 75 kb) Table S5. Candidate genes related to virulence of the P. falciparum parasite. (XLSX 31 kb) Figure S5. Conserved peptide region identified in PfEMP1 variants. Upper panel, multiple sequence alignment of conserved regions from PfEMP1 proteins. Lower panel, sequence logo showing the conserved peptide region. (TIF 1250 kb) Table S6. Identified P. falciparum genes that possibly contribute to cerebral malaria. (XLSX 12 kb) Liu, X., Wang, Y., Liang, J. et al. In-depth comparative analysis of malaria parasite genomes reveals protein-coding genes linked to human disease in Plasmodium falciparum genome. BMC Genomics 19, 312 (2018). https://doi.org/10.1186/s12864-018-4654-5 Virtual genome Parasite-infected erythrocyte surface protein 2 (PIESP2)
Solutions containing a large parameter of a quasi-linear hyperbolic system of equations and their nonlinear geometric optics approximation
Author: Atsushi Yoshikawa
Journal: Trans. Amer. Math. Soc. 340 (1993), 103-126
MSC: Primary 35L60; Secondary 35A35, 35B40
DOI: https://doi.org/10.1090/S0002-9947-1993-1208881-X
Abstract: It is well known that a quasi-linear first order strictly hyperbolic system of partial differential equations admits a formal approximate solution with the initial data ${\lambda ^{ - 1}}{a_0}(\lambda x \bullet \eta ,x){r_1}(\eta ),\lambda > 0,x,\eta \in {{\mathbf {R}}^n}, \eta \ne 0$. Here ${r_1}(\eta )$ is a characteristic vector, and ${a_0}(\sigma ,x)$ is a smooth scalar function of compact support. Under the additional requirements that $n = 2$ or $3$ and that ${a_0}(\sigma ,x)$ have the vanishing mean with respect to $\sigma$, it is shown that a genuine solution exists in a time interval independent of $\lambda$, and that the formal solution is asymptotic to the genuine solution as $\lambda \to \infty$.
Florians Blog – Simple Math for Engineers

Erratum for "Deterministic Cramer-Rao Bound for Strictly Non-Circular Sources and Analytical Analysis of the Achievable Gains"

I am posting this since we were recently made aware of an error in our journal paper "Deterministic Cramer-Rao Bound for Strictly Non-Circular Sources and Analytical Analysis of the Achievable Gains" (T-SP vol. 64, no. 17). Unfortunately, T-SP does not allow (yet?) publishing errata alongside the original papers, nor sending updates. Therefore, right now the best we can do is to update the arxiv version and inform the community via this blog post. In fact, the error is more or less a copy-paste error from the earlier conference version "Deterministic Cramér-Rao Bounds for strict sense non-circular sources" (WSA 2007) that contains the same error, although harder to spot.

As the title says, both papers are concerned with Deterministic Cramér-Rao Bounds (CRBs) for strictly non-circular sources. A closed-form expression of the CRB was derived and given by the expression (8) in WSA2007, which reads as $$\begin{align} \newcommand{\ma}[1]{{\mathbf {#1}}} \ma{C} = & \frac{\sigma^2}{2N} \Big\{ \left(\ma{G}_2-\ma{G}_1 \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \ma{\hat{R}}_{S,0} \\ & + \left[ \left( \ma{G}_1 \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right] \left[ \left( \ma{G}_0-\ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right]^{-1} \\ & \cdot \left[ \left(\ma{H}_1^T- \ma{H}_0^T \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \ma{\hat{R}}_{S,0} \right] \\ & + \left[ \ma{H}_1 \odot \ma{\hat{R}}_{S,0} \right] \cdot \left[ \ma{G}_0 \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \left( \ma{H}_0^T \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \ma{\hat{R}}_{S,0} \right] \\ & + \left[ \ma{H}_1 \odot \ma{\hat{R}}_{S,0} \right] \cdot \left[ \ma{G}_0 \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \left( \ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right] \\& \cdot \left[ \left( \ma{G}_0-\ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \left( \ma{H}_0^T \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \ma{\hat{R}}_{S,0} \right] \\ &-\left[ \ma{H}_1 \odot \ma{\hat{R}}_{S,0} \right] \cdot \left[ \left( \ma{G}_0-\ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \ma{H}_1^T \odot \ma{\hat{R}}_{S,0} \right] \Big\}^{-1}, \end{align}$$ where the matrices $\ma{G}_i, \ma{H}_i$ for $i=0, 1, 2$ are all of size $d \times d$ and given by $$\begin{eqnarray} \ma{G}_0 & = & {\rm Re}\{\ma{\Psi}^* \cdot \ma{A}^H \cdot \ma{A} \cdot \ma{\Psi}\} \\ \ma{H}_0 & = & {\rm Im}\{\ma{\Psi}^* \cdot \ma{A}^H \cdot \ma{A} \cdot \ma{\Psi}\} \\ \ma{G}_1 & = & {\rm Re}\{\ma{\Psi}^* \cdot \ma{D}^H \cdot \ma{A} \cdot \ma{\Psi}\} \\ \ma{H}_1 & = & {\rm Im}\{\ma{\Psi}^* \cdot \ma{D}^H \cdot \ma{A} \cdot \ma{\Psi}\} \\ \ma{G}_2 & = & {\rm Re}\{\ma{\Psi}^* \cdot \ma{D}^H \cdot \ma{D} \cdot \ma{\Psi}\} \\ \ma{H}_2 & = & {\rm Im}\{\ma{\Psi}^* \cdot \ma{D}^H \cdot \ma{D} \cdot \ma{\Psi}\}. \end{eqnarray}$$ All nice and good, no problem so far. It was then said in Section 4 how to generalize this to 2-D, where it was claimed that all we need to do is to replace $\ma{D} \in \mathbb{C}^{M \times d}$ by $\ma{D}_{{\rm 2D}} \in \mathbb{C}^{M \times 2d}$ and $\ma{\hat{R}}_{S,0}$ by $\ma{1}_{2 \times 2} \otimes \ma{\hat{R}}_{S,0}$. Well, the first statement is correct, the second one only partially.
Unfortunately, this carried over unnoticed into the TSP2016 paper, where the expression is given for $R$-D. Why is it wrong? Well, you can see that for $R$-D, the size of $\ma{A}$ is unaffected while $\ma{D}$ goes from having $d$ columns to having $R\cdot d$ columns. Therefore, the size of $\ma{G}_0$ and $\ma{H}_0$ is unaffected ($d\times d$) whereas $\ma{G}_1$ and $\ma{H}_1$ are now $R \cdot d \times d$ and $\ma{G}_2$ and $\ma{H}_2$ are now $R\cdot d \times R\cdot d$. To make the CRB work, the augmentation of $ \ma{\hat{R}}_{S,0} $ has to be done such that it is consistent with the dimensions of the $\ma{G}_i$ and $\ma{H}_i$. Concretely, this means that $ \left(\ma{G}_2-\ma{G}_1 \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \ma{\hat{R}}_{S,0} $ changes into $ \left(\ma{G}_2-\ma{G}_1 \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \left(\ma{1}_{R\times R} \otimes \ma{\hat{R}}_{S,0}\right) $, which is the example treated in the paper. However, $ \left( \ma{G}_1 \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} $ becomes $ \left( \ma{G}_1 \ma{G}_0^{-1} \ma{H}_0 \right) \odot \left(\ma{1}_{R \times {\color{red}1}} \otimes \ma{\hat{R}}_{S,0} \right)$ and $ \left( \ma{G}_0-\ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} $ remains unaffected. Long story short, here is the corrected version of the R-D CRB (equation (15) in TSP2016): $$\begin{align} \ma{C} = & \frac{\sigma^2}{2N} \Big\{ \left(\ma{G}_2-\ma{G}_1 \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \left(\ma{1}_{R\times R} \otimes \ma{\hat{R}}_{S,0} \right) \\ & + \left[ \left( \ma{G}_1 \ma{G}_0^{-1} \ma{H}_0 \right) \odot \left(\ma{1}_{R\times 1} \otimes \ma{\hat{R}}_{S,0} \right)\right] \left[ \left( \ma{G}_0-\ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right]^{-1} \\ & \cdot \left[ \left(\ma{H}_1^T- \ma{H}_0^T \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \left(\ma{1}_{1\times R} \otimes \ma{\hat{R}}_{S,0}\right) \right] \\ & + \left[ \ma{H}_1 \odot \left(\ma{1}_{R\times 1} \otimes \ma{\hat{R}}_{S,0}\right) \right] \cdot \left[ \ma{G}_0 \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \left( \ma{H}_0^T \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \left(\ma{1}_{1\times R} \otimes \ma{\hat{R}}_{S,0} \right) \right] \\ & + \left[ \ma{H}_1 \odot \left(\ma{1}_{R\times 1} \otimes \ma{\hat{R}}_{S,0}\right) \right] \cdot \left[ \ma{G}_0 \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \left( \ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right] \\ & \cdot \left[ \left( \ma{G}_0-\ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \left( \ma{H}_0^T \ma{G}_0^{-1} \ma{G}_1^T \right) \odot \left(\ma{1}_{1\times R} \otimes \ma{\hat{R}}_{S,0} \right) \right] \\ &-\left[ \ma{H}_1 \odot \left(\ma{1}_{R\times 1} \otimes \ma{\hat{R}}_{S,0}\right) \right] \cdot \left[ \left( \ma{G}_0-\ma{H}_0^T \ma{G}_0^{-1} \ma{H}_0 \right) \odot \ma{\hat{R}}_{S,0} \right]^{-1} \cdot \left[ \ma{H}_1^T \odot \left(\ma{1}_{1\times R} \otimes \ma{\hat{R}}_{S,0}\right) \right] \Big\}^{-1}. \end{align}$$ The pattern is clear: the "outer" terms get expanded, while the "inner" terms remain unaffected. We would like to thank Mr. Tanveer Ahmed for noticing the mistake! We sure hope we got it right this time! 🙂
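As a quick sanity check on the dimension bookkeeping above, the following NumPy sketch (with arbitrary placeholder matrices and assumed sizes $d = 2$, $R = 3$; it is only a conformability check, not an implementation of the CRB from the paper) confirms that the "outer" Hadamard products need the $\ma{1} \otimes \ma{\hat{R}}_{S,0}$ expansion while the "inner" $d \times d$ blocks do not:

```python
import numpy as np

# Placeholder sizes/matrices, used only to check that every Hadamard product
# in the corrected R-D CRB expression is conformable.
d, R = 2, 3
rng = np.random.default_rng(0)

G0 = rng.standard_normal((d, d))
H0 = rng.standard_normal((d, d))
G1 = rng.standard_normal((R * d, d))       # R-D: G1, H1 grow to (R*d) x d
H1 = rng.standard_normal((R * d, d))
G2 = rng.standard_normal((R * d, R * d))   # R-D: G2 grows to (R*d) x (R*d)
Rhat = rng.standard_normal((d, d))         # stand-in for \hat{R}_{S,0}

iG0 = np.linalg.inv(G0)
ones = np.ones

# "Outer" blocks need the expanded Kronecker factor to be conformable ...
blk_RR = (G2 - G1 @ iG0 @ G1.T) * np.kron(ones((R, R)), Rhat)        # (R*d) x (R*d)
blk_R1 = (G1 @ iG0 @ H0) * np.kron(ones((R, 1)), Rhat)               # (R*d) x d
blk_1R = (H1.T - H0.T @ iG0 @ G1.T) * np.kron(ones((1, R)), Rhat)    # d x (R*d)
# ... while the "inner" d x d blocks keep the plain \hat{R}_{S,0} factor.
blk_dd = (G0 - H0.T @ iG0 @ H0) * Rhat                               # d x d

crb_like = blk_RR + blk_R1 @ np.linalg.inv(blk_dd) @ blk_1R
print(crb_like.shape)  # (6, 6), i.e. (R*d) x (R*d), as the R-D CRB must be
```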
tools / Date Published: November 22, 2015

This page is in notes format, and may not be of the same quality as other pages on this site. Descriptions, usages, pictures and more info of various tools used by embedded engineers.

An oscilloscope (or just scope) is to an electrical engineer what a hammer is to a builder. It is a general purpose tool which lets you view voltages (and currents) in a circuit over time. Its cheaper counterpart is a digital multimeter, however multimeters can typically only display the voltage with an update rate of a few Hertz, and only display a discrete value, not the waveform over time. Oscilloscopes can also measure much faster signals at much faster rates, and trigger (take a snapshot) on specific conditions. They typically also have multiple input channels (at least two) so you can compare two voltage waveforms side-by-side. Mixed-signal oscilloscopes (MSOs) measure both traditional analogue signals as well as digital signals.

The bandwidth of a scope defines the range of frequencies it can measure. The upper limit is defined as the frequency at which the observed signal drops by 3dB (to 70.7% of the true value). Because all oscilloscopes start measuring at DC (0Hz), this also defines the highest frequency the scope can measure. El cheapo scopes have a bandwidth of 100MHz.

Passive: non-powered. Active: powered, with active buffering and/or amplification of the signal within the probe itself (before it gets to the oscilloscope). 10:1 probes are the industry standard. Almost all oscilloscopes have an input impedance of \(1M\Omega\) when looking into the connector on the front panel of the scope. Capacitance increases when you go from 10:1 to 1:1, e.g. a 10:1 passive probe may have 10pF of capacitance while an equivalent 1:1 probe may have approx. 100pF. You also lose some input resistance, e.g. it drops from \(10M\Omega\) to \(1M\Omega\). 1:1 probes can be good for measuring small levels of noise as they effectively increase the minimum resolution of the oscilloscope by 10 (compared to a 10:1 probe).

Some scope probes allow you to adjust the probe's compensation. Without the variable capacitor, the resistance of the probe combined with the resistance and capacitance of the scope will create a low-pass filter that will greatly distort high frequency measurements. The variable capacitor is added in parallel with the \(9M\Omega\) probe resistance so that there is both a resistive and a capacitive potential divider. The capacitance is then adjusted so that both the resistor divider and the capacitor divider have the same division ratio.
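As a rough worked example of that matching condition (the component values below are assumptions for illustration, not taken from any particular probe or scope datasheet), the required compensation capacitance follows from equating the two divider time constants, \(R_{probe} C_{comp} = R_{scope} C_{scope}\):

```python
# Minimal sketch of the 10:1 probe compensation condition R_probe * C_comp = R_scope * C_scope.
# The scope/cable capacitance value is an assumed example figure.
R_probe = 9e6      # ohms: series resistance inside a typical 10:1 probe
R_scope = 1e6      # ohms: oscilloscope input resistance
C_scope = 15e-12   # farads: scope input + cable capacitance (assumed)

# For a flat, frequency-independent division ratio, both dividers must match:
C_comp = R_scope * C_scope / R_probe
print(f"Required compensation capacitance: {C_comp * 1e12:.2f} pF")  # ~1.67 pF
```

In practice you do not need to know these capacitances: the trimmer is simply adjusted until a square-wave calibration signal shows flat tops, which achieves the same condition.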
This ensures the response of the probe is flat across the frequencies of interest.

I have never had great success using the maths functions on a scope to measure differential signals (by using two single-ended inputs and subtracting one from the other).

Mid to high-end scope manufacturer, in the same class as Tektronix. Mid to high-end scope manufacturer, in the same class as Keysight. Low-end scope manufacturer.

Multimeters are multi-purpose electrical measurement devices used by both electricians and electronics engineers (among other disciplines). Some multimeters are designed for electricians (people who deal primarily with 240VAC), which are not as suitable for electronics engineers.

Typically, voltage measurements are done at the highest impedance achievable (\(>1M\Omega\)) by the multimeter so that the multimeter does not affect the circuit it is measuring. However, this can sometimes lead to "ghost voltages". This is when a real but high-impedance voltage is present on a circuit, normally due to the circuit picking up noise from near-by circuits via phenomena such as capacitive coupling. This voltage, although real, is misleading as it does not represent the voltage a load (or person getting a shock) would actually see. For this reason these voltages are called ghost voltages. Some newer digital multimeters come with a low impedance voltage measurement mode to ignore these ghost voltages. This is especially useful when working with mains power (115/240VAC) and trying to determine if a voltage on a wire is a significant danger or not. Examples of multimeters which have a low-impedance voltage measurement setting include the Fluke 117 Digital Multimeter. Older, analogue multimeters are not as susceptible to the ghost voltage problem as they typically have a lower impedance when measuring voltages, normally around \(10k\Omega\). Low-impedance mode should not be used when working with sensitive, small-signal circuitry. The low impedance of the multimeter might draw enough current to disrupt the circuit and will give you incorrect readings.

Logic analysers are electronic tools which connect to digital circuitry and decode serial or parallel communication protocols. High-end oscilloscopes now include logic analysers, either built-in or as an additional module/license. Tektronix charges a license per communication protocol. Saleae is probably the most expensive well-known logic analyzer brand. As of April 2020, the 8-channel, 500MS/s (Samples/second), 100MHz, USB3.0 Saleae logic analyzer (the Logic Pro 8) will cost USD$699, which is quite a lot of money for JUST a logic analyzer.

DreamSourceLab provides the DSView software for viewing the digital signals from the DSLogic series of logic analysers. DSView is compatible with Windows, MacOS and Linux. It uses the sigrok project to provide all of the protocol decoders and therefore supports many of the protocols listed on https://sigrok.org/wiki/Protocol_decoders. A screenshot of the DSView v1.1.2 software. Whilst a setup .exe is provided for Windows and a .dmg for MacOS, no pre-built executables are provided for Linux, and you have to build it yourself from the source code. Easy instructions are provided in the INSTALL text file.

The Digilent Analog Discovery Pro 3000 series is a combined benchtop oscilloscope, function generator, I/O, and protocol/logic analyser. The ADP3450 is the cheaper member in the family, and the ADP5250 is the more expensive unit with additional power supplies, but lacks a 4-channel option.
A photo of the Analog Discovery Pro ADP3450 [1]. The oscilloscope in the ADP3x50 has a bandwidth of 55MHz. Not as great as the 100MHz you usually get with low-end dedicated oscilloscopes, but still good enough for most general-purpose work. The max. sample rate is 1Msps. The WaveForms SDK can be used to control the Analog Discovery Pro via software. Supported languages include C++, C# and Python [1].

471-040: ADP3450, 4ch, with BNC probes. 410-394: ADP3450, 4ch, without BNC probes. As of Aug 2022, the ADP3450 with BNC probes retails for NZ$2,453.19 on Mouser, and NZ$2,582.30 on DigiKey.

An AC or DC Electronic Load (a.k.a. Active Load) is a piece of electronic test equipment which can act as either a programmable resistance, voltage sink (voltage source, but can only sink power, not produce it) or current sink. They act as a load by converting the incoming electrical power into heat, just like a resistor. However, rather than using a standard fixed resistor (or sequence of switched fixed resistors), they typically use a transistor(s) to dissipate the energy so that its "resistance" can be changed electronically, hence why they are also known as active loads. They are usually designed to dissipate 100's of Watts or more of power (depending on the model). They are separated into two distinct families: DC electronic loads (the most common variety), and AC electronic loads. AC and DC electronic loads are used to: load up power supplies to test their response under a range of operating conditions (incl. 0A to full current, and 0V to highest voltage), and act as constant-current sinks to drive LEDs when performing testing/design validation.

The TekBox TBOH02 DC load is a great, simple, low-cost DC load. It is self-powered, meaning it powers itself from the energy dissipated via the "load" it pretends to be. 25W continuous power dissipation with no fan, 100W with fan. The advantage of it being an analogue, self-powered load is that there will be no digital/PSU/control-circuitry noise superimposed onto the measurements you are making. The TekBox TBOH02 Self-Powered Active Load. Image from https://www.tekbox.com/product/tboh02-self-powered-active-load/. This device is open-source hardware (the design is based on https://www.edn.com/precision-active-load-operates-as-low-as-2v/, however EDN's link to the PDF/schematics is broken as of 2021-06-22); the full schematics, board files and BOM are provided at https://www.tekbox.com/product/tboh02-self-powered-active-load/. Schematics and board files are in the Eagle file format.

FTDI (Future Technology Devices International Ltd.) is a popular and reputable designer and manufacturer of USB-to-Serial converters. They make a range of ICs for this purpose, as well as manufacturing useful products which use these ICs (such as USB-to-serial cables). As of 2016, their ICs are commonly found in good quality USB-to-serial hardware (more so than one of their main competitors, Prolific). USB-to-Serial converters introduce a fair bit of delay into serial communications, and depending on your latency requirements, this may affect your design. The conditions which will cause an FTDI IC to send received serial data to the computer. Especially notice the 16ms 'latency timer'. Image from 'FTDI – AN232B-04 – Data Throughput, Latency and Handshaking'. The below image is a screenshot of FTDI RX/TX data captured with a logic analyser. The computer was running Java code which sent a 0x02 response as soon as it received a 0x01 byte.
FTDI RX and TX data captured by a logic analyser, with the computer running Java code which responds to 0x01 with 0x02. The 'latency timer' on the FTDI IC has been reduced to 1ms, which gives a much faster response time from the computer (about 1.5ms delay).

FTDI provides the Java D2xx API for Android systems. The API is packaged into a file called d2xx.jar and can be downloaded from http://www.ftdichip.com/Android.htm. Basic information on the driver software can be found at http://www.ftdichip.com/Support/Documents/TechnicalNotes/TN_147_Java_D2xx_for_Android.pdf.

A line impedance stabilization network (LISN) is a tool used when performing EMC/EMI tests. A LISN is essentially a low-pass filter placed between a power source and the DUT (device under test). A LISN performs the following functions: it provides a well-known impedance to the power input of the DUT, and it prevents high-frequency noise from the power supply entering the DUT, which would otherwise make the measurements of the DUT seem worse than they actually are (isolation of the power supply). A "50uH" LISN is a common choice, which provides impedance control down to 10kHz. Below 10kHz, impedance control is difficult [3]. MIL-STD-461E mandates the use of LISNs to control the impedance of power sources for many of its measurement procedures: "The impedance of power sources providing input power to the EUT shall be controlled by Line Impedance Stabilization Networks (LISNs) for all measurement procedures of this document unless otherwise stated in a particular test procedure." – MIL-STD-461E, Section 40.3.6 (4.3.6): Power source impedance

CISPR 25 sets limits and procedures for the measurement of EMI in the frequency range of 150kHz to 2.5GHz [4]. Among other uses, the standard is applicable to vehicles, and it is a popularly referenced standard in automotive electronic design. It specifies the use of a \(5uH\) LISN when performing EMI measurements, which is one of the main reasons you will see \(5uH\) LISN devices available for purchase. The TekBox TBOH01 (5uH LISN) is a LISN designed to be compliant with CISPR 25, and retails for around US$250.

Digital microscopes are a great tool to have on an electronics workbench. Coupled with a screen, they allow you to look up close at a PCB without having to peer down the sights of an optical microscope. Depth-of-field: the larger the depth of field, the less refocusing you have to do to get different height components and tracks on your PCB into focus. This is a great, well-weighted optical microscope for electronics lab use. The adjustable boom makes it easy to swing the microscope. Link: https://amscope.com/products/sm-4ntp 144 LED Intensity-adjustable Ring Light for Stereo Microscopes with White Housing: https://amscope.com/products/led-144w-zk. This illuminates the work area and gives you great shadow-free light to view the object with.

The Puhui T-962 (and T-962A, T-962C variants) are cheap static desktop reflow soldering ovens. The T-962A has the same design except it is a larger unit and provides an effective soldering area of 300x320mm.

Model  | Power | Panel Area | Cost (approx.)
T-962  | 800W  | 180x235mm  | US$200
T-962A | 1500W | 300x320mm  |
T-962C | 2900W | 585x400mm  | US$750

It appears there are "2020 New Versions" of the above reflow ovens which have exhaust pipe brackets added onto the back so you can clamp on a pipe to vent exhaust fumes. It is known to produce bad-smelling fumes when in use, especially when it is new.
This is because the manufacturer uses aluminium tape and masking tape in the unit which is not designed for high temperatures, and which melts!!! It is recommended to replace the masking tape with kapton tape after purchase (see the upgrade section below for more info). Third parties have made "upgrade kits" for these reflow ovens which aim to provide better thermal control of the soldering process and improve the UI experience. For example, ES Technical provides upgrade packages for both the T-962 and T-962A. UnifiedEngineering redesigned the firmware to run on the existing microcontroller, with the hardware addition of a temperature sensor for cold junction compensation. Clones? The Atten AT-R3028 looks VERY similar to the T-962.

Thermal cameras are great tools to have in an electronics lab for inspecting the thermal behaviour of PCBs and other electrical devices. They can be used to: see how heat spreads across a PCB, detect if things are getting too hot, work out where heat sinking is needed, calculate thermal resistances, and find short-circuits. In the context of hand-held thermal cameras, 80x80 is a small number of pixels, 160x120 is moderate, and 640x480 is a large amount.

NETD (Noise Equivalent Temperature Difference): this is the minimum temperature difference that is resolvable by the camera. You could think of this as the sensitivity. It is bad practice to refer to this as the resolution as this will get confused with the pixel (spatial) resolution. The NETD of thermal cameras is typically between 100-500mK (100-500m°C). The NETD is measured by pointing the camera at a very stable and uniform black body at a specific temperature. The NETD is the standard deviation of the varying pixel values recorded by the camera over a specific period of time [5].

Keysight has one range of handheld thermal cameras called TrueIR. Within this range there are 3 separate devices, with the key difference between them being the maximum measurement temperature. They all have a medium resolution of 160x120 pixels. A marketing photo for the Keysight U5856A thermal camera. A unique selling point of the Keysight TrueIR range is the small minimum focal distance of 100mm (most other hand-held thermal cameras have a minimum focal length of 300-500mm), which makes them especially useful for inspecting PCBs. The 350C camera (U5855A) starts at about US$2500, going up to US$3500 for the 1200C camera (U5857A). TrueIR Analysis And Reporting Tool: Windows only. Includes the ability to stream video from the IR camera when plugged in via USB cable. A screenshot of the Keysight TrueIR software tool.

I was not impressed with the FLIR software (called Fluke Connect Desktop). It took account registration and email link clicking to even get to the point of being able to download it. I then encountered issues installing it without having an old version of Microsoft Word present (the software was looking for this so it could generate reports).

A photo of the Optris Xi 400 spot finder IR camera.

Testo 865: 160x120 pixels, measurement range -20 to 280°C. Testo 868: 160x120 pixels, measurement range -50 to 650°C. Minimum focal distance of 0.5m, not so suitable for viewing PCBs. "SuperResolution" takes the raw infrared pixel resolution of 160x120 and upscales it to 320x240 pixels. However I'm not sure how much more advanced this is than just up-sampling the image in the digital realm. IRsoft.

RS Pro is RS Components' self-owned brand.

Most signal generators have a "Load Impedance" setting.
Whilst the signal generator almost always has an output impedance of \(50\Omega\), the signal generator will take this load impedance setting into account and generate a voltage that will result in the set peak-to-peak/amplitude at the output. However, if this load impedance setting is set to, say, 50R, but connected to a high-impedance load (for example, connected straight up to the oscilloscope), you will measure twice the expected voltage at the output! This is because the instrument assumes its \(50\Omega\) source impedance will form a 2:1 divider with a matched \(50\Omega\) load; with a high-impedance load there is no division, so the full open-circuit voltage appears at the output.

Fill up with a mixture of water and detergent. Standard kitchen detergent will do. Expensive cleaning solutions aimed at the professional electronics market. Do they perform any better? GT Sonic Ultrasonic Cleaner 6L: Large enough for most PCBs. Synergy Electronics Ltd, NZ supplier of the GT Sonic range.

MEMS Oscillators: Ultrasonic cleaners can cause permanent damage or long-term reliability issues to the MEMS resonator inside a MEMS oscillator. Crystal Resonators (XTALs): The ultrasonic bath could excite a XTAL at a resonant frequency (or harmonic) that causes damage. 32.768kHz crystals are especially sensitive since they operate at about the same frequency as an ultrasonic bath uses for its cleaning action. MHz XTALs are far less sensitive.

Things to look for in a soldering iron include: quick-change tips, maximum heating power, heating rate (high quality soldering irons can heat up to the set temperature within about 2 seconds!), and stable temperature control under different loads. For all but the lightest of work you will want to choose a soldering station instead of a soldering iron. The station provides a holder for the iron, keeping it in a safe place while you do other work (so you don't burn yourself!). It also allows for a lighter and higher-power iron, as most of the electronics can now be located in the freestanding control unit rather than in the handheld iron.

An American company with a vivid and memorable blue/yellow brand color. The creme-de-la-creme of soldering iron brands. JBC makes some of the highest-quality soldering stations, but as expected, this comes at a very high price. The Weller brand is associated with quality, second only to JBC. Naturally, their products are generally cheaper than JBC's to compensate.

No quick change tips. A dual port soldering station. For use with the FM-2027 soldering iron which takes the Hakko T12 range of tips. Note that the T12 range are quick-change, and you start paying more than double for quick-change tips (versus the T18 range). Note that the tips are not truly quick change until you also purchase extra Soldering Pencil Sleeves. These are proprietary hand-grips that slide onto the tip. Once each tip has one of these, you can quickly change tips by unclipping the sleeve + tip from the rest of the iron and inserting a new one. These sleeves also remove the need for using pliers or a rubber mat to remove hot tips. A green soldering pencil sleeve from Hakko. You have to purchase one of these per tip before your soldering iron truly becomes 'quick change'. Image from https://nz.element14.com/hakko/b3219/soldering-pencil-sleeve-green/dp/1676853. This soldering station can also accept Hakko tweezers. A great choice for popping off and on small 0402/0603/0805 chip resistors and capacitors is the FM-2023 Mini SMD Hot Tweezers with the T9-1L tips: Close up of the T9-1L tips on the Hakko FM-2023 Mini SMD Hot Tweezers. Great for popping on and off small 0402/0603/0805 chip resistors and capacitors.

Single port. 95W. 550°C max temp. For use with the WTP90 soldering iron, which takes the XT tips. Dual port.
75W x2. 450°C. Most heater elements are between 600-1200W. The body material of the heating element is usually ceramic. Yihua. UYUE: adjustable temp. range from 0-450°C, 250x220x110mm, 600W.

Current probes are measurement devices which are used to measure the current flowing through a conductive material, typically a wire or track on the PCB (usually with an appropriate connection loop). The main disadvantage with a hall-effect or transformer-based current-probe is that the probe tip must encircle the conductor under test. To do this you must use a wire, or provide a special PCB cut-out around the current-carrying trace. Fluxgate magnetometer-based current probes do not have this issue. A typical current-probe will add a few nH of inductance to the conductor under test. Any additional wire added to the conductor to accommodate the current-probe might add around 10nH per centimetre. The sensitivity of a current probe can be increased by increasing the number of turns of the wire. Be careful to divide the displayed current on the oscilloscope by the number of turns to get the actual current. Note that increasing the number of turns increases the insertion impedance (the inductance rises with the square of the number of turns). Current probes are not cheap! They are significantly more expensive than their voltage-measuring brothers. As of 2016, you can find cheap no-brand ones for US$60-700, and more expensive Tektronix or Keysight Technologies (the new Agilent) current probes for US$1000-8000.

Hall-effect current probes use the hall-effect phenomenon to measure the current travelling through a conductor. Their main advantage over the transformer-based current probes is that they can measure DC currents. However, they do not perform well at higher frequencies (20kHz seems to be a rough upper limit). The hall-effect sensor is an active sensor, and therefore the probe requires an external power source. This may be provided by an internal replaceable battery (e.g. 9V battery), an external power supply connector, or from the oscilloscope through a specialised connector (this is common on the more expensive, brand-specific ones). Like the AC transformer-based current probes, they require the wire to be inserted into a loop.

A combined AC/DC current probe is the most versatile current measurement probe. Traditionally, it uses a transformer to measure AC current, and a hall-effect sensor to measure DC currents (originally patented by Tektronix). Hall-effect sensors are active sensors, so AC/DC current probes require a power source. The probes are normally split-core, which allows you to open the probe tip up to insert the wire under test. AC/DC probes output a voltage which is proportional to the current flowing through the wire under test. This voltage is measured by the oscilloscope and displayed on a current-scaled waveform. High-end current probes which are built for specific oscilloscopes may draw power from the single oscilloscope connection, as well as automatically changing the units on the scope and auto-scaling.

The main advantage is that the measuring device does not need to fully encircle the track/wire under test, and you can design a probe-styled instrument that can measure track/wire current just by bringing the probe tip into close proximity. Aim has patents around its fluxgate magnetometer based current probe, so it might be a while before other manufacturers make similar probes. The AIM I-Prober 520 current probe based on fluxgate magnetometer technology.
Image from http://www.tti-test.com/.

The Beehive Electronics probe set contains three H-field probes (100A, 100B, 100C) and one E-field probe (100D). All are non-contact probes. The four non-contact EM probes made by Beehive Electronics. Three are for magnetic field measurement and one is for electric field measurement. The magnetic flux density can be calculated for the H-field probes using the equation below:

The equation to work out the magnetic flux density as measured by any of the three magnetic EM probes made by Beehive Electronics.

The scale factors for each of the magnetic probes are given below:

The scale factors and resonances for each of the three magnetic field probes made by Beehive Electronics.

Flux is a substance used in the soldering process to remove metal corrosion and improve the adhesion of the molten solder to the metal surfaces. Typically, fluxes are compatible with a broad range of solder compounds, including both leaded and higher-temperature lead-free solders. Flux activity is a measure of the strength/aggressiveness of the flux in its ability to clean metals while soldering. Low activity fluxes are weak fluxes and are usually mild acids. High activity fluxes are strong fluxes and are usually low pH acids.

Rosin fluxes are the oldest types of flux (well, charcoal was first!). Rosin is the name of refined pine sap. Rosin flux is typically a solid at room temperature, but quickly melts and flows easily at soldering temperatures. It is usually a light or dark amber colour. Rosin fluxes have a low flux activity. A tin of rosin-based flux. Image from https://en.wikipedia.org. As such, it is usually inert as a solid, and therefore safe to leave on the PCB after soldering. This is of course unless during normal operation the PCB temperature rises enough to melt the rosin flux. Rosin fluxes are usually non-polar and therefore cannot be washed off with plain water. Non-polar solvents like isopropyl alcohol, acetone, or paint thinner can be used to clean rosin fluxes. Semi-aqueous solvents, or water with a saponifier added, can also be used. Some types of solder contain a rosin core to aid the soldering process, which saves you time because you do not have to apply the flux manually. A brand of solder which has a rosin-based flux core. For the chemically-minded people, rosin flux usually has a formula of: $$\begin{align} C_{19}H_{29}COOH \end{align}$$ Obviously, being a naturally produced substance, the make-up of a rosin flux will change.

Organic acid flux is typically made of a weak, organic-based acid such as citric, lactic or stearic acid. The acid is dissolved in a solvent such as a mixture of isopropyl alcohol and water. They can be a good compromise between reliability, flux activity and cleanability.

The most aggressive type of flux, inorganic fluxes are usually a blend of aggressive chemicals such as hydrochloric acid, zinc chloride and ammonium chloride. They have a high activity. They are normally used for non-electronics related soldering such as the joining of copper pipes (also called brazing). Inorganic acid fluxes should not be used for electronic soldering because they can leave chemically active residues which cause reliability problems.

The term "no clean" flux is used for fluxes whose residue will not affect the long-term reliability of the PCB.
The two important qualities of a "no clean" flux residue are that it is non-conductive and non-corrosive. A disadvantage of no clean flux is the poor aesthetics of leaving the flux residue on the PCB; it can make the PCB appear dirty, old, and may give people the perception that the build quality is not high (only relevant if people actually see the PCB during its normal use). The IPC-610 standard specifies some of the required properties of no clean flux to be compliant.

Flux can be shipped in a syringe. The syringe tip is either a large-diameter (compared to most medical syringes) metal or plastic needle. Syringes offer more precise application of flux than a flux pen or rod. Flux pens are permanent marker ("sharpies" for all the Americans) sized pens which contain flux inside them. The tip is made from a porous material which applies flux to the surface and draws more up via capillary action (much like a normal pen). To promote proper flowing, fluxes used in flux pens are typically of a lower viscosity than the ones in syringes or standard containers. A no-clean solder flux pen from ChemTools (part number CT-NC-DP). Flux pens are great to have on the work bench for quick, one-off flux applications when reworking. The tips are usually quite thick and do not offer the same precision as flux syringes, but normally this extra precision is not necessary (flux can be "slopped" around the board with little consequence).

During the soldering process fumes are released. The amount of fumes increases drastically as flux is used. It is generally not a good thing to inhale these fumes on a long term basis. Fume extractors can be used to remove the fumes safely.

[1] Digilent. Analog Discovery Pro (ADP3450/ADP3250) Reference Manual. Retrieved 2022-08-09, from https://digilent.com/reference/test-and-measurement/analog-discovery-pro-3x50/reference-manual.
[2] Digilent. Analog Discovery Pro (ADP5250) Reference Manual. Retrieved 2022-08-09, from https://digilent.com/reference/test-and-measurement/analog-discovery-pro-5250/reference-manual.
[3] Department of Defense (1999, August 20). MIL-STD-461E: Requirements for the Control of Electromagnetic Interference Characteristics of Subsystems and Equipment. Retrieved 2021-06-30, from https://quicksearch.dla.mil/qsDocDetails.aspx?ident_number=35789
[4] IEC (2016). Vehicles, boats and internal combustion engines - Radio disturbance characteristics - Limits and methods of measurement for the protection of on-board receivers. Retrieved 2021-07-02, from https://webstore.iec.ch/publication/26122.
[5] MoviTHERM. What is NETD in a Thermal Camera?. Retrieved 2020-09-03, from https://movitherm.com/knowledgebase/netd-thermal-camera/.
The greatly increased variance, but only somewhat increased mean, is consistent with nicotine operating on me with an inverted U-curve for dosage/performance (or the Yerkes-Dodson law): on good days, 1mg nicotine is too much and degrades performance (perhaps I am overstimulated and find it hard to focus on something as boring as n-back) while on bad days, nicotine is just right and improves n-back performance. That it is somewhat valuable is clear if we consider it under another guise. Imagine you received the same salary you do, but paid every day. Accounting systems would incur considerable costs handling daily payments, since they would be making so many more and so much smaller payments, and they would have to know instantly whether you showed up to work that day and all sorts of other details, and the recipients themselves would waste time dealing with all these checks or looking through all the deposits to their account, and any errors would be that much harder to track down. (And conversely, expensive payday loans are strong evidence that for poor people, a bi-weekly payment is much too infrequent.) One might draw a comparison to batching or buffers in computers: by letting data pile up in buffers, the computer can then deal with them in one batch, amortizing overhead over many items rather than incurring the overhead again and again. The downside, of course, is that latency will suffer and performance may drop based on that or the items becoming outdated & useless. The right trade-off will depend on the specifics; one would not expect random buffer-sizes to be optimal, but one would have to test and see what works best. Factor analysis. The strategy: read in the data, drop unnecessary data, impute missing variables (data is too heterogeneous and collected starting at varying intervals to be clean), estimate how many factors would fit best, factor analyze, pick the ones which look like they match best my ideas of what productive is, extract per-day estimates, and finally regress LLLT usage on the selected factors to look for increases. But though it's relatively new on the scene with ambitious young professionals, creatine has a long history with bodybuilders, who have been taking it for decades to improve their muscle #gains. In the US, sports supplements are a multibillion-dollar industry – and the majority contain creatine. According to a survey conducted by Ipsos Public Affairs last year, 22% of adults said they had taken a sports supplement in the last year. If creatine was going to have a major impact in the workplace, surely we would have seen some signs of this already. The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts. Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? 
Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users? Drugs and catastrophe are seemingly never far apart, whether in laboratories, real life or Limitless. Downsides are all but unavoidable: if a drug enhances one particular cognitive function, the price may be paid by other functions. To enhance one dimension of cognition, you'll need to appropriate resources that would otherwise be available for others. The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal. Methylphenidate, commonly known as Ritalin, is a stimulant first synthesised in the 1940s. More accurately, it's a psychostimulant - often prescribed for ADHD - that is intended as a drug to help focus and concentration. It also reduces fatigue and (potentially) enhances cognition. Similar to Modafinil, Ritalin is believed to reduce dissipation of dopamine to help focus. Ritalin is a Class B drug in the UK, and possession without a prescription can result in a 5 year prison sentence. Please note: Side Effects Possible. See this article for more on Ritalin. My general impression is positive; it does seem to help with endurance and extended the effect of piracetam+choline, but is not as effective as that combo. At $20 for 30g (bought from Smart Powders), I'm not sure it's worthwhile, but I think at $10-15 it would probably be worthwhile. Sulbutiamine seems to affect my sleep negatively, like caffeine. I bought 2 or 3 canisters for my third batch of pills along with the theanine. For a few nights in a row, I slept terribly and stayed awake thinking until the wee hours of the morning; eventually I realized it was because I was taking the theanine pills along with the sleep-mix pills, and the only ingredient that was a stimulant in the batch was - sulbutiamine. I cut out the theanine pills at night, and my sleep went back to normal. (While very annoying, this, like the creatine & taekwondo example, does tend to prove to me that sulbutiamine was doing something and it is not pure placebo effect.) So it's no surprise that as soon as medical science develops a treatment for a disease, we often ask if it couldn't perhaps make a healthy person even healthier. Take Viagra, for example: developed to help men who couldn't get erections, it's now used by many who function perfectly well without a pill but who hope it will make them exceptionally virile. The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: $\frac{1.2 - 0.93}{0.076} = 3.55$.)
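For what it's worth, the naive plug-in calculation from that parenthetical is easy to reproduce; the snippet below is just the arithmetic on the quoted numbers, using the experimental-group standard deviation exactly as in the text.

```python
# Naive standardized difference ("Cohen's d" style plug-in), as in the text above.
placebo_mean = 1.2
experimental_mean = 0.93
sd = 0.076  # the standard deviation the text plugs in

cohens_d = (placebo_mean - experimental_mean) / sd
print(round(cohens_d, 2))  # 3.55
```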
Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress. (In particular, I don't think it's because there's a sudden new surge of drugs. FDA drug approval has been decreasing over the past few decades, so this is unlikely a priori. More specifically, many of the major or hot drugs go back a long time. Bacopa goes back millennia, melatonin I don't even know, piracetam was the '60s, modafinil was '70s or '80s, ALCAR was '80s AFAIK, Noopept & coluracetam were '90s, and so on.) l-Theanine – A 2014 systematic review and meta-analysis found that concurrent caffeine and l-theanine use had synergistic psychoactive effects that promoted alertness, attention, and task switching;[29] these effects were most pronounced during the first hour post-dose.[29] However, the European Food Safety Authority reported that, when L-theanine is used by itself (i.e. without caffeine), there is insufficient information to determine if these effects exist.[34]
A comparison of methods for multiple degree of freedom testing in repeated measures RNA-sequencing experiments
Elizabeth A. Wynn1, Brian E. Vestal2, Tasha E. Fingerlin2 & Camille M. Moore2
BMC Medical Research Methodology volume 22, Article number: 153 (2022)

As the cost of RNA-sequencing decreases, complex study designs, including paired, longitudinal, and other correlated designs, become increasingly feasible. These studies often include multiple hypotheses and thus multiple degree of freedom tests, or tests that evaluate multiple hypotheses jointly, are often useful for filtering the gene list to a set of interesting features for further exploration while controlling the false discovery rate. Though there are several methods which have been proposed for analyzing correlated RNA-sequencing data, there has been little research evaluating and comparing the performance of multiple degree of freedom tests across methods. We evaluated 11 different methods for modelling correlated RNA-sequencing data by performing a simulation study to compare the false discovery rate, power, and model convergence rate across several hypothesis tests and sample size scenarios. We also applied each method to a real longitudinal RNA-sequencing dataset. Linear mixed modelling using transformed data had the best false discovery rate control while maintaining relatively high power. However, this method had high model non-convergence, particularly at small sample sizes. No method had high power at the lowest sample size. We found a mix of conservative and anti-conservative behavior across the other methods, which was influenced by the sample size and the hypothesis being evaluated. The patterns observed in the simulation study were largely replicated in the analysis of a longitudinal study including data from intensive care unit patients experiencing cardiogenic or septic shock. Multiple degree of freedom testing is a valuable tool in longitudinal and other correlated RNA-sequencing experiments. Of the methods that we investigated, linear mixed modelling had the best overall combination of power and false discovery rate control. Other methods may also be appropriate in some scenarios. RNA-sequencing (RNA-seq) technology has revolutionized how we study and understand the underlying pathobiology of disease. Recently, declining sequencing costs have allowed for more complex investigations, including correlated and longitudinal study designs. In particular, longitudinal designs have become increasingly popular, as they allow researchers to understand the dynamics of gene expression across time and how these dynamics differ between groups of subjects. However, complex study designs demand more sophisticated analysis methods. As with single timepoint designs, careful pre-processing of longitudinal RNA-seq data is still necessary prior to analysis to remove artifacts produced during sequencing [1, 2]. Following pre-processing, distributional and computational considerations are necessary to model overdispersed count data on 10,000-20,000 genes. Additionally, analysis methods for longitudinal study designs must also account for the correlation induced by repeated measures, which is often achieved with random effects or modeling of the error covariance structure. To be most applicable to these complex study designs, analysis approaches should allow for flexible modeling, including the ability to adjust for potential confounders and subject demographics.
In longitudinal RNA-seq studies, researchers are often interested in multiple hypotheses. For example, many longitudinal RNA-seq studies include repeated measures from each subject over time, with subjects coming from multiple treatment groups. This allows for the investigation of between-subject comparisons, such as a test for differences in gene expression between treatment groups at a particular timepoint; within-subject comparisons, such as a test for differences in gene expression across two timepoints in a single treatment group; or interaction effects to compare changes over time between groups. Furthermore, studies with more than two timepoints per subject might involve multiple comparisons across different timepoints in order to characterize how gene expression changes across time. In the situation where there are multiple hypotheses to be tested for each gene, the ability to perform an omnibus test, or a test where multiple hypotheses are evaluated, is valuable for controlling false discovery rates. For example, in a study with multiple timepoints per subject in which time is treated categorically, a researcher might wish to compile a list of genes that change over time for further investigation. In such a situation, one could perform a series of hypothesis tests to identify the differentially expressed genes (DEGs) between each pair of timepoints and perform a multiple testing correction to each hypothesis test individually to control the false discovery rate to 5%, for example. However, because each hypothesis test may produce different false positive genes, when lists of significant genes are aggregated across multiple hypotheses, the percentage of false positives in the aggregated list will be greater than 5% without additional adjustment [3]. Thus, performing an omnibus test for multiple hypotheses is useful in false discovery rate control. These types of tests are often referred to as multiple degree of freedom (DF) tests because the hypothesis for these tests involve multiple degrees of freedom as opposed to the single degree of freedom required for hypothesis testing of a single covariate or effect. Several different methods have been proposed for the analysis of longitudinal RNA-seq data. Popular analysis packages such as edgeR [4, 5] and DESeq2 [6] are often appealing to researchers because they allow for flexible modelling in a generalized linear modelling (GLM) framework. However, these packages do not allow for random effects or covariance structures to properly accommodate correlated data. Despite this limitation, these packages are sometimes used to analyze correlated data, either by treating each subject/cluster as a fixed effect under a regression framework, or by ignoring the correlation altogether and treating correlated samples as independent. It is well established that ignoring correlation can lead to bias in standard error estimation which can influence the results of statistical tests [7]. Alternatively, treating each subject/cluster as a fixed effect may result in inflated false positive rates due to over-fitting [8]. Additionally, when coefficients for each subject/cluster are included in the model, other subject-level effects, such as group differences, are not estimable. The limma [9] package, another popular analysis tool for RNA-seq data, includes the capability to account for correlation between related samples using a method in which a common correlation value estimated across all genes is incorporated into the model for each gene [10]. 
However, this method assumes that the correlation between samples is the same for all genes. This is a strong assumption that may not be true in practice. Recently, several methods have been proposed for longitudinal and other correlated RNA-seq studies. These methods generally use random effects or covariance structures to account for the correlation in the data while also considering the unique characteristics of RNA-seq data such as overdispersion. Many methods developed for correlated RNA-seq data are limited by the fact that they do not allow for multiple treatment groups or additional covariates (e.g. PLNseq [11], multiDE [12]), can only be used for paired data (e.g. baySeq [13, 14], PairedFB [15]), or can only perform single DF tests (e.g. MCMSeq [16], ShrinkBayes [17]). Some researchers have proposed employing standard statistical models typically used for longitudinal and correlated data outside of the context of RNA-seq data, as these well-developed modeling frameworks allow for flexible modeling and hypothesis testing [18–20]. In applying these methods to RNA-seq data, considerations still must be made to account for the non-normality of the data, for example, by choosing a repeated measures model with an underlying distribution for overdispersed counts. Tsonaka & Spitali [20] investigated the use of negative binomial mixed models (NBMM) for RNA-seq data using an adaptive Gaussian quadrature method to estimate parameters and found that this method was relatively unbiased and exhibited type 1 error (T1E) and false discovery rate (FDR) control. Similarly, Zhang et al. [21] used NBMM to analyze correlated microbiome data, which are also overdispersed counts, but used an iterative weighted least squares (pseudo-likelihood) approach for parameter estimation. They demonstrated the utility of the method through both simulation study and application to mouse gut microbiome data. Rather than using the negative binomial distribution, Park et al. [19] investigated the use of generalized estimating equation (GEE) models using a Poisson distribution with an extra scale parameter to account for overdispersion. They found that these models identified more DEGs than edgeR, DESeq or limma, though they did not explore whether this was driven by high false positive rates. Instead of directly modeling counts, another approach is to normalize the data and then utilize models that assume a normal distribution. The package rmRNAseq [18] utilizes the voom normalization method on log-transformed counts and then models the transformed data using a linear model with a continuous auto-regressive structure to account for the correlation in the data. Vestal et al. [16] tested a similar method by using a variance stabilizing transformation (VST) on raw RNA-seq counts and then fitting linear mixed models (LMMs) to the transformed data. They found that this method performed similarly to their hierarchical Bayesian MCMSeq method in terms of T1E and FDR control, but many models failed to converge in small sample size situations. All of the methods outlined above allow for multiple DF hypothesis testing. However, there has been little research evaluating and comparing the performance of multiple DF tests across these methods. Some studies have evaluated the use of multiple DF tests for a single method or in comparison to DESeq2 and edgeR, which do not account for correlation, rather than methods that account for correlation [18, 20]. 
Others have compared multiple correlated data approaches but only for single DF hypothesis tests [16]. As complex study designs become more common in correlated RNA-seq designs, multiple DF hypothesis testing is important for identifying interesting genes for downstream analysis without increasing the FDR. In this paper, we compare the performance of several methods for analyzing correlated RNA-seq count data with particular emphasis on multiple DF test performance within each method. First, we investigate model performance through a simulation study. Each method is also applied to RNA-seq data collected from septic shock and cardiogenic shock patients over multiple timepoints following admission to the intensive care unit (ICU). Finally, we provide recommendations as to which models are most appropriate under various circumstances. Analysis methods compared We compared methods which have been proposed for correlated RNA-seq experiments and that allow for multiple treatment groups, covariates and/or timepoints, and can be used to perform multiple DF tests. We describe the selected methods below. Additional information on each method is available in Supplementary Materials Section 1. Standard RNA-seq analysis tools Standard RNA-seq analysis tools generally use a linear modelling framework with transformed data, or a generalized linear model (GLM) framework, assuming a negative binomial distribution. In studies with correlated designs, these methods can be implemented with the caveat that the model assumptions, such as the independence of observations, will not be met, or adjustments can be made to attempt to account for the correlation of the data. In this study, we tested three of the methods from the most popular RNA-seq analysis packages: limma, edgeR, and DESeq2. The R package limma was originally created for the analysis of microarray expression data, which are approximately normally distributed [9]. limma employs linear models to test for differential expression using an empirical Bayes approach to share information across genes. This methodology has been extended to RNA-seq data by applying the "voom" transformation to RNA-seq counts [22, 23]. First, RNA-seq counts are normalized using the log counts per million (log-CPM) transformation. A mean-variance relationship is then estimated, and from this relationship, a predicted variance is calculated for each log-CPM value, which is then incorporated into a linear model as an inverse weight. The duplicateCorrelation function within the package can be used to estimate correlation values for each subject which are then incorporated in the linear model. However, only one correlation is computed for all genes. The edgeR and DESeq2 packages both employ a negative binomial GLM framework to address overdispersion [4–6]. Both methods use empirical Bayes procedures to estimate variability, effectively borrowing information across genes to inform the estimation. Both methods also include offset terms in their models to account for differences in library size between samples, though edgeR uses the trimmed mean of M-values (TMM) method [4], while DESeq2 uses the median ratio method [24]. These packages do not include methods to account for correlation between samples. Generalized estimating equations Generalized estimating equations (GEE) are a semi-parametric extension of GLM that can account for correlation between observations [25]. This method uses a working correlation structure to model the association between measurements within a subject. 
The covariance matrix of the estimated regression coefficients is typically estimated using robust (sandwich) estimators so that the estimates are robust to misspecification of the working correlation structure. In this analysis, we modelled the data using a Poisson distribution with an extra scale parameter in the variance to account for overdispersion, and an exchangeable working correlation structure. One drawback to GEE models is that sandwich estimators have poor performance at small sample sizes. To address this issue, we used the small sample size adjustment proposed by Wang and Long [26], which utilizes information from all subjects to calculate the covariance for each individual subject and also uses an additional adjustment to correct for bias. Negative binomial mixed models Generalized linear mixed models (GLMM) are an extension of GLMs that use random effects to account for correlation. Similar to the methods implemented in edgeR and DESeq2, in the GLMM framework the expression of each gene can be modeled using a negative binomial distribution, which accounts for the overdispersion. When using negative binomial mixed modelling (NBMM), parameter estimation can be analytically complex and there are multiple approaches that can be used. We consider two maximum likelihood estimation approaches, Laplace (NBMM-LP) and adaptive Gaussian quadrature (NBMM-AGQ), as well as the pseudolikelihood approach (NBMM-PL). rmRNAseq and linear mixed models The rmRNAseq package employs a method similar to the limma+voom method in which the data are first transformed using the voom approach and then a linear model is fit for each gene using the transformed data. However, within the rmRNAseq framework, models are fit using a continuous autoregressive correlation structure to account for correlation in the data. A similar approach is to use linear mixed modelling (LMM) with random effects to account for correlated data after applying a normalizing transformation. We test this approach using a variance stabilizing transformation (VST), as demonstrated in Vestal et al. [16]. We implemented each method using R (version 4.0.2). All analysis was carried out on a Linux high performance computing (HPC) cluster, and parallel processing with 8 cores was used for all methods besides limma, DESeq2, and edgeR. Table 1 contains the specific packages used for each method and implementation details. Where possible, we used previously implemented R packages. In some cases, available R packages were missing important functionality, such as the capacity to account for offsets (geesmv for GEE small sample estimators). In these cases, custom R functions were built using the source code from the previously implemented R packages as a framework. Functions for implementing and summarizing results for methods in which no wrapper/summarization functions were available can be found in the corrRNASeq package, which is available at https://github.com/ewynn610/corrRNASeq. Table 1 Analysis methods with their associated R packages and details concerning their implementation Offsets to adjust for differences in library size were included in models for all except three methods (Table 1). The transformations used in limma, rmRNAseq and the LMM method accounted for differences in library size, so no additional adjustment was used. The models using the edgeR and DESeq2 packages were fit in two ways. First, correlation was ignored and a model was fit with an intercept, time and group main effects, and an interaction term.
Second, a fixed effect for subject was included in the model (edgeR* and DESeq2*). When including this extra fixed effect, the group term was not included in the model as it is inestimable. Models were designated as non-converged if a maximum number of iterations were run without convergence during model fitting, models were found to be singular, or other errors prevented the model from fitting properly. All models that did not converge were discarded before further analysis. The packages used to implement each method in this analysis utilize different types of multiple DF tests. Table 1 shows the class of tests used for each method. We used likelihood ratio tests (LRT) for the edgeR, DESeq2, NBMM-LP and NBMM-AGQ analyses. For all of these methods excluding edgeR, this required fitting two models for each test, a full model as well as a reduced model. The GLMMadaptive package used for fitting NBMM-LP models offers the option of using a multivariate Wald test instead of an LRT. However, Tsonaka & Spitali [20] found that in the context of correlated RNA-seq data, using LRTs resulted in a lower T1E rate and FDR, and thus we chose to use LRTs rather than multivariate Wald tests for these models. Additionally, Tsonaka & Spitali [20] proposed a bootstrap procedure for calculating p-values, particularly in small sample size situations. However, in running the example code provided with their publication, we found that it took about 2 hours to fit models and perform hypothesis testing for 10 genes with 1,000 bootstrap samples each. Because RNA-seq studies typically include 10,000-20,000 genes, this bootstrapping approach is likely not computationally feasible for most studies and we did not include it in our analysis. Hypothesis testing for GEE was done using a Wald \(\chi^{2}\) test as implemented by the esticon function in the doBy package [32]. F-tests were used for LMM and NBMM-PL, and the Satterthwaite method was used to calculate denominator degrees of freedom [33, 34]. The limma and rmRNAseq packages both utilize the moderated F-statistic outlined by Smyth [35] for hypothesis testing. Under the limma framework, p-values are computed using an F-test with augmented degrees of freedom. The rmRNAseq package calculates p-values by building a distribution of null test statistics from data generated by a parametric bootstrap procedure and then computing the proportion of null statistics greater than or equal to the observed F-statistic. In order to evaluate and compare the testing characteristics of the previously described methods, we performed a simulation study. We used a two group design (e.g. treatment and control) with four observations per subject. A negative binomial distribution was used to simulate a matrix of counts \(Y\). Let \(Y_{gij}\) be the expression level of gene \(g\) for the \(i\)th subject and \(j\)th observation, with \(E(Y_{gij})=\mu_{gij}\). Further, let \(\alpha_{g}\) be a dispersion parameter for gene \(g\) with \(Var(Y_{gij})=\mu_{gij}+\alpha_{g}\mu^{2}_{gij}\). Then
$$Y_{gij} \sim \mathcal{NB}(\mu_{gij}, \alpha_{g})$$
$$\log(\mu_{gij}) = \beta_{g0}+\beta_{g1}X_{1i}+\beta_{g2}X_{2ij}+\beta_{g3}X_{3ij}+\beta_{g4}X_{4ij}+\beta_{g5}X_{1i}X_{2ij}+\beta_{g6}X_{1i}X_{3ij}+\beta_{g7}X_{1i}X_{4ij}+b_{gi}$$
$$b_{gi} \sim \mathcal{N}(0, \sigma^{2}_{g})$$
where \(X_{1i}\) is an indicator variable signifying whether the \(i\)th subject is in the treatment group or not, and \(X_{2ij}\), \(X_{3ij}\) and \(X_{4ij}\) are indicator variables representing whether observation \(j\) was taken at the 2nd, 3rd, or 4th timepoint, respectively. Each \(\beta_{gk}\), \(k\in\{0,\dots,7\}\), is a fixed effect regression coefficient specific to gene \(g\). Finally, \(b_{gi}\) is the random intercept for gene \(g\) and subject \(i\), which is normally distributed with a mean of 0 and a variance of \(\sigma^{2}_{g}\). Table 2 shows a summary of the simulation settings and multiple DF tests performed. We simulated 10 datasets for each simulation scenario. For each dataset we simulated 15,000 genes, and genes were then filtered out if fewer than N samples had at least 1 count per million (CPM), where N was equal to the number of samples collected for a single group and timepoint. We simulated datasets to contain a mix of null and differentially expressed genes by changing the interaction coefficients for 20% of genes. In order to mimic real data, \(\beta_{g0}\), \(\alpha_{g}\) and \(\sigma^{2}_{g}\) were drawn from an empirical distribution for triplets of mean CPM, dispersion, and random intercept variance observed across human samples in several real RNA-seq data sets with repeated measures [36, 37]. The fixed effect intercept parameter, \(\beta_{g0}\), was derived by scaling the randomly drawn CPM values to add up to one million and then multiplying each scaled value by a total library size of 25 million. Then, \(\beta_{g0}\) was set to the log of this value. Table 2 Summary of simulated datasets We analyzed simulated data using each method as described in the implementation section. Models for each gene were fit using fixed effects for group and time variables, which were both treated as categorical, as well as the interaction between group and time. A random intercept for each subject was included in models for methods in which random effects are possible. After the models were fit, the percentage of models that successfully converged for each method was calculated, and non-converged models were removed. Then the false discovery rate (FDR) and power were calculated for four different multiple DF tests: a between-subject test, a within-subject test, an interaction test, and a global test (Table 2). Power and FDR were calculated using Benjamini Hochberg adjusted p-values [38], and a significance threshold of 0.05 was used. For each simulation scenario, we averaged the statistics across 10 simulated datasets. Real data analysis We applied the analysis methods previously outlined to a publicly available, longitudinal RNA-seq dataset of 96 whole blood samples from 32 patients experiencing circulatory shock who were admitted into the ICU (GEO Dataset: GSE131411). For each patient, three blood samples were collected: one within 16 hours after ICU admission, one 48 hours after admission, and one seven days after admission or at discharge. Subjects were categorized by whether they experienced septic shock (SS, N=21) or cardiogenic shock (CS, N=11). Further information on the study design and methods is available in Braga et al. [39].
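To make the simulation design above concrete, the count-generating model can be sketched in a few lines of R. The sketch below is purely illustrative: the parameter values, sample sizes, and object names are assumptions for the example and are not the settings or code used in the study.

# Minimal sketch of the negative binomial generative model for one gene.
# All numeric values below are illustrative assumptions, not the study's settings.
set.seed(1)
n_subj <- 5                                        # subjects per group (assumed)
subj   <- rep(1:(2 * n_subj), each = 4)            # 4 observations per subject
group  <- rep(c(0, 1), each = 4 * n_subj)          # X1: treatment indicator
time   <- factor(rep(1:4, times = 2 * n_subj))     # categorical timepoint
X      <- model.matrix(~ group * time)             # intercept, main effects, interactions
beta   <- c(5, 0, 0.2, 0.3, 0.4, 0, 0, 0)          # beta_g0 ... beta_g7 (null interaction gene)
sigma2 <- 0.1                                      # random intercept variance (assumed)
alpha  <- 0.05                                     # negative binomial dispersion (assumed)
b      <- rnorm(2 * n_subj, 0, sqrt(sigma2))       # b_gi, one random intercept per subject
mu     <- exp(X %*% beta + b[subj])                # log link, as in the model above
y      <- rnbinom(length(mu), mu = mu, size = 1 / alpha)  # Var(Y) = mu + alpha * mu^2

Repeating this over thousands of genes, with the intercept, dispersion, and random intercept variance drawn from empirical distributions, yields count matrices of the kind analyzed in the simulation study.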
Data pre-processing and model information We downloaded the count table and study meta data from the GEO DataSets website. The data included 58,096 genes. We filtered out lowly expressed genes by removing genes that did not have greater than 1 CPM in at least 11 of the 96 samples (11 was the sample size in the smallest experimental group of interest), which reduced the total number of genes analyzed to 14,340. The goal of our analysis was to investigate how the gene expression of shock patients changed over time and how these changes differed between patients with SS versus CS. To accomplish this, for each method we fit a model with fixed effects for the type of shock and timepoint (treated categorically) as well as the interaction between the two variables. A random intercept for each subject was included in models for methods in which random effects are possible. All models were fit as described in the implementation section. As with the simulation study, the percentage of models that failed to converge for each method was calculated and non-converged models were removed. For each model, we ran four different multiple DF hypothesis tests: a between-subject test to assess if there was a difference in gene expression between the SS and CS groups at any timepoint, two within-subject tests to assess if there was a change in gene expression over time in the SS group or the CS group, and a test to assess if any of the interaction coefficients were significant. The Benjamini Hochberg method was used to adjust p-values for multiple comparisons and the DEGs for each method and test were identified using a 0.05 FDR threshold. Hierarchical clustering and functional enrichment analysis Because LMM exhibited comparatively good behavior in the simulation study, we used the results from this method to explore the patterns in the changes in gene expression over time in the SS and CS groups. All analysis was done for each group separately. First, we subset the data to include only genes that were significant in the multiple DF test for difference in gene expression at any timepoint in the SS group or CS group. For these genes, we computed the predicted gene expression (log scale) for each gene at each of the three timepoints for the group in question. We then constructed heatmaps for these genes, with genes clustered hierarchically using a correlation distance metric and a complete linkage clustering method. We visually inspected the heatmaps to decide where to cut each clustering tree to identify clusters that represented distinct profiles of change over time. After clustering, we ran functional enrichment analysis on the genes in each cluster to better understand the functional role of genes with different expression profiles over time. Analysis was executed using the topGO package in R [40] using biological process gene ontology (GO) annotations. The significance of the GO terms was assessed using Fisher's exact test with an FDR level of 0.05 as the threshold for significance. We further filtered the results to include only GO terms with at least 10 genes and > 10% overlap between the genes associated with each GO term and the genes in the cluster. Of the 11 methods evaluated, only 3 methods (NBMM-LP, NBMM-PL, and LMM) had average non-convergence rates above 0.1% for any of the sample sizes tested. Figure 1 shows the average percentage of models which did not converge across sample sizes for these methods.
Because we used LRTs for NBMM-LP, for every gene a reduced model was fit for each of the four hypothesis tests. In some cases the full model converged but one or more of the reduced models failed to converge and thus the p-value for the corresponding hypothesis tests could not be calculated. The transparent portion of the bars in Fig. 1 represent cases in which the full model converged but one or more of the reduced models failed to converge. Percentage of non-converged models from selected methods. Methods in which less than 1% of models failed to converge are not included in the figure. For NBMM-LP, which uses a likelihood ratio test, the solid portion of the bar represents the proportion of models in which the full model did not converge and the transparent portion represents genes for which the reduced model for one or more tests failed to converge in which case results for those tests could not be obtained NBMM-LP had the highest non-convergence rates at all sample sizes, even when only considering cases in which only the full model did not converge. At N=3 per group, about 21% of the full models did not converge and the reduced model(s) for an additional 10% of genes did not converge. Comparatively, at N=3 per group around 16% and 15% of models did not converge for NBMM-PL and LMM respectively. For all three methods, non-convergence rates decreased with increasing sample size, though the magnitude of the decrease was larger for NBMM-PL and LMM than for NBMM-LP. At N=10 per group, NBMM-PL and LMM both had non-convergence rates around 4% while NBMM-LP had a non-convergence rate of 11% with at least one reduced model failing to converge for an additional 15% of genes. For all three methods and at all sample sizes, at least 90% of convergence failures were due to model singularities, with remaining non-converged models reaching model iteration limits or experiencing other errors which prevented the model from fitting properly. On average, the random intercept variance used to simulate the data was lower for genes that did not converge while the dispersion was generally higher (Supplementary Fig. 1). These results indicate that in some cases, model convergence issues may be due in part to low between-subject variation or high dispersion. However, there was substantial overlap in the random intercept and dispersion distributions between genes that did and did not converge, and many genes with high random intercept variance and low dispersion still failed to converge. In addition, the proportion of non-converged genes generally decreased only slightly (0.75%-1%) when using a higher expression filtering threshold of 5 CPM instead of 1 CPM, indicating that small expression values are also not completely responsible for model non-convergence (Supplementary Table 1). Figure 2 shows the relationship between FDR and power across different sample sizes for the four multiple DF tests of interest using a 0.05 FDR level. More detailed results are available in Supplementary Tables 2-4. The FDR for GEE, NBMM-AGQ, and NBMM-LP was higher than the nominal 0.05 level across all sample sizes for all tests. Other methods showed a mix of conservative and anti-conservative behavior. Across all tests, limma had an FDR close to the nominal rate for the smallest sample size (N=3 per group), but the FDR was increasingly inflated for the larger sample sizes. Conversely, DESeq2* and edgeR* had an inflated FDR at N=3 and N=5 per group, but at N=10 per group the rate was close to the nominal value. 
DESeq2 and edgeR (ignoring correlation) both had conservative FDR for the interaction and within-subject test, but showed inflated rates for the between-subject test and test for any significant coefficient. Across all of the tests, LMM was slightly conservative while NBMM-PL was slightly inflated except for the between-subject test, in which it was conservative. Finally, rmRNASeq had very conservative FDR values across all tests. For the majority of methods and tests, FDR approached the nominal rate (dashed line) and had increasing power with increasing sample size. FDR versus power across different sample sizes for four tests of interest. FDR and power were calculated using a 0.05 FDR significance level and were averaged across 10 simulations for each method and sample size. Points that lie to the left of the dashed vertical line represent methods that have an observed FDR less than the nominal rate of 5%, while points to the right represent methods with FDR inflation. Points located in the bottom left-hand corner with an FDR and power of 0 represent instances in which no genes were found significant. A log scale is used on the x-axis to better differentiate between methods with close to nominal FDR Of the methods that had FDR values which were conservative or close to the nominal rate across all sample sizes and conditions, LMM and NBMM-PL generally had the highest power. rmRNASeq, which showed conservative FDR values, had low power, particularly at the smaller sample sizes. For the within-subject test and the test for significant interaction effects in which edgeR and DESeq2 (ignoring correlation) exhibited conservative FDR values, both methods were less powered than LMM and NBMM-PL at all sample sizes. DESeq2* and edgeR*, which had close to nominal FDR values at N=10 per group, showed similar power to LMM and NBMM-PL at this sample size. Similarly, limma, which had close to nominal FDR at N=3 per group, had comparable power to LMM and NBMM-PL for most tests at this sample size and had more power than either method for the between-subject test. At the smallest sample size, N=3 per group, no method that had conservative or close to nominal FDR had high power. For the within-subject test, LMM, NBMM-PL and limma had power values near 60% at N=3 per group, but no other tests showed power values this high for methods without severely inflated FDR. The power values at N=5 and N=10 per group were much stronger with LMM and NBMM-PL having power values near or above 80% for all tests at N=10 per group. The distributions of the raw p-values from the null features in each simulated dataset are shown for each combination of method, test, and sample size in Supplementary Figs. 2-4. In general, we would expect these distributions to look fairly uniform. However, only LMM displays this behavior consistently. Some other methods, like NBMM-PL, limma at the smaller sample sizes, and DESeq2* and edgeR* at the larger sample sizes, are not too far off. Conversely, DESeq2, edgeR, GEE, and rmRNAseq show substantial skew. This suggests that the assumed distributions for the test statistics used in these methods is incorrect, and thus inference from these methods is likely compromised [41]. Real data results Table 3 shows the run time for each of the methods. 
The time to fit the full model and the total time (model fitting and hypothesis testing) are both shown for all methods except rmRNAseq, for which the model fitting and testing are carried out within one function and thus the run times cannot be uncoupled. NBMM-AGQ, NBMM-LP and both DESeq2 methods use an LRT which requires a full and reduced model to be fit for each hypothesis test, so for these methods hypothesis testing took a relatively large amount of time compared to the time to fit the full model. NBMM-LP had the longest total run time by far, taking over 24 hours to complete. The second highest run time was for rmRNAseq which took around 7 hours. Aside from these two methods, NBMM-AGQ (102 minutes), and NBMM-PL (65 minutes), all other methods ran in less than 30 minutes. Table 3 Non-convergence rate, analysis run time, and number of DEGs for 4 hypothesis tests in the shock dataset. The run time for fitting the full model for each gene, as well as the total time to fit models and perform hypothesis testing is displayed. There were 14,340 genes in the dataset and genes were labelled as a DEG if the Benjamini Hochberg adjusted p-value was < 0.05. For NBMM-LP, the percentage of genes in which one or more reduced models failed to converge is shown in parentheses after the full model non-convergence rate Model convergence NBMM-LP had the largest percentage of non-converged models with 4.33% of the full model fits not converging (Table 3). An additional 9.07% of models did not converge for one or more reduced models used for LRTs, making the corresponding hypothesis test(s) incomputable. The non-convergence rate for the rest of the methods was less than 1%. This differed from the simulation results in which NBMM-PL and LMM had a non-convergence rate of around 4% at the largest sample size. The percentage of non-convergence for NBMM-LP was also smaller than for the largest sample size simulation scenario. This discrepancy is likely due in part to the large number of subjects in the shock dataset (32 total subjects; SS group: 21 subjects, CS group: 11 subjects). The largest sample size in the simulation scenarios only had 20 total subjects (10 per group, 2 groups). In order to assess the effect of sample size in our real dataset, we sampled 10 subjects from both the SS and CS groups and reran the analysis on this reduced dataset. The non-convergence rates for NBMM-PL and LMM increased to around 1% for both methods (Table 4). Surprisingly, the non-convergence rate for the NBMM-LP models changed very little even after reducing the number of subjects. Table 4 Non-convergence rate, analysis run time, and number of DEGs for 4 hypothesis tests in the reduced shock dataset in which ten subjects from each group were randomly selected. The run time for fitting the full model for each gene, as well as the total time to fit models and perform hypothesis testing is displayed. There were 14,340 genes in the dataset and genes were labelled as a DEG if the Benjamini Hochberg adjusted p-value was < 0.05. For NBMM-LP, the percentage of genes in which one or more reduced models failed to converge is shown in parentheses after the full model non-convergence rate Number of DEGs Table 3 shows the number of DEGs identified by each method for various hypothesis tests using a 0.05 significance threshold for Benjamini Hochberg adjusted p-values. 
Though there was a range in the number of DEGs found across the different methods and tests, every method found the most DEGs for the test for the difference across time in the SS group. This is perhaps due in part to the fact that the SS group has more subjects than the CS group (N=21 vs. N=11). However, in the analysis of the reduced dataset in which each group was filtered to ten random subjects, this test still had the most DEGs across methods, while the test for differences across time in the CS group had the least amount of DEGs. This may indicate that the changes in gene expression over the course of treatment are more prevalent in SS patients than CS patients. The differences in the number of DEGs for each method was generally what would be expected based on the results of the simulation study. NBMM-AGQ showed relatively inflated FDR values in the simulation study, and in this analysis this method found more DEGs than most other methods, particularly for the within-subject and interaction tests. DESeq2 and edgeR (ignoring correlation) had high DEG counts for the between-subject test and low DEG counts for the within-subject and interaction tests, which is also in line with the simulation results. limma also showed a mix of conservative and anti-conservative behavior in terms of the number of DEGs for each test. Finally, DESeq2*,edgeR*, NBMM-PL, NBMM-LP and LMM all had relatively moderate numbers of DEGs across all tests, with DESeq2*, edgeR*, NBMM-LP and NBMM-PL generally finding slightly more DEGs than LMM. This also corresponds to the simulation results in which in the largest sample size scenario (N=10 per group) all three methods exhibited FDR values close to the nominal rate with LMM showing conservative rates compared to the other three methods. There were some discrepancies between this analysis and the simulation study. These discrepancies appear to be partially due to the difference in the number of subjects in the real data and the simulations and may point to the continuation of patterns related to sample size that were observed in the simulation study. For example, rmRNAseq displayed conservative FDR values and low power in the simulation study, though the power for the method increased with increasing numbers of subjects. In this analysis, the number of DEGs for rmRNAseq was comparable to other, less conservative methods, particularly for the between-subject test and the within-subject test for differences across time in the SS group. However, in the analysis of the reduced dataset, rmRNAseq found less DEGs than the majority of other methods (Table 4). Similarly, GEE generally had the most inflated FDR and highest power in the simulation study with FDR decreasing as the number of subjects increased. In this analysis the number of DEGs was moderate compared to the other methods, while in the analysis on the reduced data, GEE had more DEGs than most other methods, though NBMM-AGQ still found more DEGs for all tests except the between-subject test. Hierarchical clustering and functional enrichment analysis results For brevity, we will focus on results from our post-hoc analysis of genes with significant differential expression between at least two timepoints in the CS group. Similar results for the SS group can be found in Supple-mentary Fig. 5 and Supplementary Table 5. Using the LMM method, there were 1,003 genes that were significant for the test for differential expression between any two timepoints in the CS group. 
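As a minimal illustration of the clustering step applied to such a set of significant genes, the sketch below assumes a matrix pred of predicted log-scale expression (genes in rows, the three timepoints in columns); the object name and the choice to cut the tree into seven clusters are shown only for illustration and are not the study's code.

# Rough sketch of the hierarchical clustering of predicted expression profiles.
# 'pred' (genes x timepoints, log scale) is an assumed input object.
pred_scaled <- t(scale(t(pred)))            # row-scale each gene's profile
d  <- as.dist(1 - cor(t(pred_scaled)))      # correlation distance between genes
hc <- hclust(d, method = "complete")        # complete linkage clustering
clusters <- cutree(hc, k = 7)               # cut into seven clusters
table(clusters)                             # cluster sizes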
Figure 3 shows a heatmap of predicted expression (row scaled) for these genes along with the hierarchical clustering. Based on a visual inspection of the heatmap, a cutpoint was chosen such that the genes were split into seven clusters representing seven different patterns of change over time. For example, cluster 3 was the largest cluster with 328 genes. The expression of genes in this cluster stayed somewhat steady across the first two timepoints, but then steeply dropped between the second and third timepoint. Cluster 5 (309 genes) and cluster 2 (228 genes) were also relatively large. The genes in cluster 5 had expression levels that remained relatively unchanged between the first two timepoints, but then steeply climbed between the final two timepoints; cluster 2 contained genes that dropped in expression somewhat linearly across the three timepoints. Heatmap of predicted gene expression (row scaled) across the three study timepoints for genes that were significant in a test for differential expression between any two timepoints in the CS group. Predicted values and significance results came from the LMM analysis. Genes are clustered using a correlation distance metric and complete linkage clustering methods and are split into seven clusters indicated by the color bars along the rows For three clusters (cluster 3, cluster 5, and cluster 6) at least one GO term was significantly enriched. Table 5 shows an abbreviated list of the significant terms. For cluster 3, several significantly enriched terms were related to an innate immune response including terms related to inflammation as well as neutrophil migration. For cluster 5, the GO terms were related to complement activation and phagocytosis. There were also terms related to adaptive immunity such as immunoglobulin production and positive regulation B-cell activation. Because genes from cluster 3 are relatively highly expressed at timepoints 1 and 2, but have lower expression at time 3, while cluster 5 shows the opposite behavior, these results may point to a heightened innate immune system response early in the ICU stay of CS patients, with a delayed adaptive immune response. Similar to cluster 5, genes in cluster 6 were involved in complement activation and phagocytosis. This cluster has a similar pattern across time to that of cluster 5, but genes in this category drop in expression between timepoints 1 and 2 before showing heightened expression at time 3. Table 5 Functional enrichment analysis results. The 25 GO terms with the smallest Benjamini Hochberg (BH) adjusted p-values were selected for each cluster. The lists were then reduced to include only the most specific subclass for each ontology. All GO terms had a BH adjusted p-value < 0.01 In RNA-seq studies with longitudinal and other correlated designs, researchers are often interested in multiple hypotheses. Multiple DF tests allow researchers to assess multiple hypotheses at once, which is a useful method for selecting lists of genes for further exploration and can also be valuable in FDR control. Recently, several researchers have developed and compared analysis methods for analyzing longitudinal RNA-seq data. However, there has been little research evaluating and comparing these methods in the context of multiple DF testing. Understanding the comparative performance of various multiple DF hypothesis testing methods is becoming increasingly important as complex study designs become more common in correlated RNA-seq designs. 
Of the methods compared in this study, LMM using data transformed using VST generally exhibited FDR closest to the nominal rate across the different sample sizes and multiple DF tests. NBMM-PL generally resulted in FDR values close to nominal as well, though slightly more inflated than LMM. GEE, NBMM-AGQ, and NBMM-LP had high FDR values across all simulation scenarios. DESeq2* and edgeR* had inflated FDR values at small sample sizes, but were relatively close to the nominal value for the highest sample size (N=10 per group). Conversely, limma had optimal FDR values at the smallest sample size, but these increased for the larger sample sizes. DESeq2 and edgeR (ignoring correlation) showed a mix of conservative and anti-conservative behavior. rmRNAseq had conservative FDR values, but was also extremely underpowered, particularly at the lower sample sizes. LMM and NBMM-PL generally had the highest power of the methods that had FDR values which were conservative or close to the nominal rate across all sample sizes and conditions. Unsurprisingly, for the majority of methods, FDR values approached nominal rates and power increased as the sample size increased. We chose to use three small sample size scenarios in our simulation study because researchers often do not have the resources for large-scale studies, particularly in longitudinal studies where multiple samples are collected for each subject. However, we also analyzed data from a study involving shock patients and this study had 11 and 21 subjects in its two groups. In this analysis, methods such as GEE showed similar numbers of DEGs as LMM. When we reduced the dataset to 10 subjects per group, the difference in the number of DEGs for LMM compared to methods like GEE was wider. This implies that the FDR for methods that performed poorly, particularly at low sample sizes, may converge to that of LMM as the sample size increases past N=10 per group. Another problem that occurred at low sample sizes was model non-convergence for LMM, NBMM-LP and NBMM-PL. Though LMM had the lowest non-convergence rate of these three methods, around 15% of models did not converge for this method at N=3 per group. We identified low between-subject variance, high dispersion, and small gene expression values as potential causes of non-convergence, though these data characteristics were not universal in non-converged models. Because LMM had otherwise good performance, future research regarding the cause of the high non-convergence rates and alternative ways of fitting singular and other non-converged models would be valuable. In small sample size cases in which many models do not converge, limma may be a good alternative because it demonstrated near nominal FDR at small sample sizes. However, no method was highly powered at the smallest sample size; choosing a sample size of at least 5 subjects per group is preferable. One limitation of this study is that we only simulated data from one relatively simple correlation structure. This choice may have particularly affected the rmRNAseq simulation results since rmRNAseq utilizes a continuous autoregressive correlation structure and we simulated using a single random effect (equivalent to a compound symmetric structure). In analysis of the shock dataset, which may have a correlation structure that is not strictly compound symmetric, rmRNAseq did behave more similarly to other methods than in the simulation study, though we found that this was driven partially by sample size. 
Still, because complex RNA-seq studies are becoming more common, future research concerning the performance of multiple DF tests on data with different correlation structures and models with more complex random effects structures would be beneficial. We did not explore the use of multiple DF tests in the context of single cell RNA-sequencing (scRNA-seq). Because gene expression of cells from the same sample or subject is more similar than that of cells from different samples [42], multi-sample scRNA-seq studies result in a hierarchical or correlated data structure, similar to longitudinal bulk RNA-seq studies. While the methods described in this work could theoretically be applied to scRNA-seq data, there are unique features of scRNA-seq data that could influence method performance and that should be further investigated. For example, scRNA-seq experiments typically collect data on thousands of cells from a relatively small number of samples or subjects, resulting in a large number of repeated observations per sample. This is in contrast to a longitudinal bulk RNA-seq study, where a relatively smaller number of repeated measurements (as few as two) is collected per subject. The library size per cell is also much smaller in scRNA-seq, resulting in smaller numbers of counts per gene and more genes with zero counts. The data volume and sparsity could affect both the computation time and performance of the multiple DF testing methods. This would be a valuable area for future research. As the cost of RNA-seq experiments decreases, it becomes increasingly feasible to perform experiments using correlated designs, including longitudinal studies. Because these studies often involve multiple hypotheses and also require initial filtration to a set of genes for further exploration, multiple DF tests are a valuable tool for correlated RNA-seq data. In this work, we tested several modelling methods for longitudinal RNA-seq data with an emphasis on multiple DF hypothesis tests. Through a simulation study, we found that overall, LMM exhibited the best performance in terms of controlling FDR at nominal levels while maintaining the power to detect differential expression, though there were convergence issues at low sample sizes. limma offers a good alternative for small studies since it did not have convergence issues and had adequate FDR control at the smallest sample size. However, all methods were underpowered at N=3 per group, so we suggest that at least five subjects be included per group when possible. Multiple DF testing is a valuable tool for selecting interesting genes for downstream analysis while also controlling the FDR. However, as we show in this study, there are many methods that allow for multiple DF testing, all with different levels of efficacy. Making an informed decision when choosing a method based on the study goals as well as design elements such as sample size is key in producing useful, meaningful findings. Code for simulating the datasets and running the methods used in the paper is available at https://github.com/ewynn610/multiDF_corr_RNASeq and through the corrRNASeq package, which can be found at https://github.com/ewynn610/corrRNASeq. Additional simulated datasets used in the simulation studies are available from the corresponding author upon request. The real RNA-Seq data was originally published in [39], and was downloaded for this application from the GEO DataSets website (GEO Dataset: GSE131411).
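As a rough starting point for readers who want to reproduce this style of analysis, the sketch below outlines the LMM-on-VST workflow for a single multiple DF (interaction) test. It is not the study's code: the objects counts (a gene-by-sample integer matrix) and meta (a data frame with factor columns group and time plus a subject identifier) are assumed, the formula is only an example, and convergence failures are not handled.

# Illustrative sketch of the LMM approach on VST-transformed counts.
library(DESeq2)      # variance stabilizing transformation
library(lmerTest)    # linear mixed models with Satterthwaite F-tests

dds <- DESeqDataSetFromMatrix(counts, colData = meta, design = ~ 1)
v   <- assay(vst(dds))                       # VST-transformed expression matrix

pvals <- apply(v, 1, function(gene_expr) {
  dat <- cbind(meta, expr = gene_expr)
  fit <- lmerTest::lmer(expr ~ group * time + (1 | subject), data = dat)
  # multiple DF test of the group-by-time interaction (Satterthwaite denominator DF)
  anova(fit)["group:time", "Pr(>F)"]
})
padj <- p.adjust(pvals, method = "BH")       # Benjamini Hochberg adjustment
sum(padj < 0.05)                             # number of genes flagged at a 0.05 FDR threshold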
CPM: Counts Per Million; CS: Cardiogenic Shock; DF: Degrees of Freedom; GEE: Generalized Estimating Equations; GLM: Generalized Linear Model; GLMM: Generalized Linear Mixed Model; HPC: High Performance Computing; LMM: Linear Mixed Model; LRT: Likelihood Ratio Test; NBMM: Negative Binomial Mixed Model; NBMM-AGQ: Negative Binomial Mixed Model, Adaptive Gaussian Quadrature approach; NBMM-LP: Negative Binomial Mixed Model, Laplace approach; NBMM-PL: Negative Binomial Mixed Model, Pseudolikelihood approach; RNA-seq: RNA-sequencing; TMM: Trimmed Mean of M-values; T1E: Type One Error; VST: Variance Stabilizing Transformation Schmieder R, Edwards R. Quality control and preprocessing of metagenomic datasets. Bioinformatics. 2011; 27(6):863–64. https://doi.org/10.1093/bioinformatics/btr026. Alkhateeb A, Rueda L. Zseq: An Approach for Preprocessing Next-Generation Sequencing Data. J Comput Biol. 2017; 24(8):746–55. https://doi.org/10.1089/cmb.2017.0021. Van den Berge K, Soneson C, Robinson MD, Clement L. stageR: A general stage-wise method for controlling the gene-level false discovery rate in differential expression and differential transcript usage. Genome Biol. 2017; 18(1):1–14. https://doi.org/10.1186/s13059-017-1277-0. Robinson MD, Oshlack A. A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol. 2010; 11(3). https://doi.org/10.1186/gb-2010-11-3-r25. McCarthy DJ, Chen Y, Smyth GK. Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Res. 2012; 40(10):4288–97. https://doi.org/10.1093/nar/gks042. Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15(12):550. https://doi.org/10.1186/s13059-014-0550-8. Cannon MJ, Warner L, Taddei JA, Kleinbaum DG. What can go wrong when you assume that correlated data are independent: An illustration from the evaluation of a childhood health intervention in Brazil. Stat Med. 2001; 20(9-10):1461–67. https://doi.org/10.1002/sim.682. Cui S, Ji T, Li J, Cheng J, Qiu J. What if we ignore the random effects when analyzing RNA-seq data in a multifactor experiment. Stat Appl Genet Mol Biol. 2016; 15(2):87–105. https://doi.org/10.1515/sagmb-2015-0011. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, Smyth GK. Limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7):47. https://doi.org/10.1093/nar/gkv007. Smyth GK, Michaud J, Scott HS. Use of within-array replicate spots for assessing differential expression in microarray experiments. Bioinformatics. 2005; 21(9):2067–75. https://doi.org/10.1093/bioinformatics/bti270. Zhang H, Xu J, Jiang N, Hu X, Luo Z. PLNseq: A multivariate Poisson lognormal distribution for high-throughput matched RNA-sequencing read count data. Stat Med. 2015; 34(9):1577–89. https://doi.org/10.1002/sim.6449. Kang G, Du L, Zhang H. MultiDE: A dimension reduced model based statistical method for differential expression analysis using RNA-sequencing data with multiple treatment conditions. BMC Bioinformatics. 2016; 17(1):1–16. https://doi.org/10.1186/s12859-016-1111-9. Hardcastle TJ, Kelly KA. BaySeq: Empirical Bayesian methods for identifying differential expression in sequence count data. BMC Bioinformatics. 2010; 11(1):1–14. https://doi.org/10.1186/1471-2105-11-422. Hardcastle TJ, Kelly KA. Empirical Bayesian analysis of paired high-throughput sequencing data with a beta-binomial distribution. BMC Bioinformatics. 2013; 14(1):1–11. https://doi.org/10.1186/1471-2105-14-135. Bian Y, He C, Hou J, Cheng J, Qiu J.
PairedFB: A full hierarchical Bayesian model for paired RNA-seq data with heterogeneous treatment effects. Bioinformatics. 2019; 35(5):787–97. https://doi.org/10.1093/bioinformatics/bty731. Vestal BE, Moore CM, Wynn E, Saba L, Fingerlin T, Kechris K. MCMSeq: Bayesian hierarchical modeling of clustered and repeated measures RNA sequencing experiments. BMC Bioinformatics. 2020; 21(1):1–20. https://doi.org/10.1186/s12859-020-03715-y. Van de Wiel MA, Neerincx M, Buffart TE, Sie D, Verheul HM. ShrinkBayes: A versatile R-package for analysis of count-based sequencing data in complex study designs. BMC Bioinformatics. 2014; 15(1). https://doi.org/10.1186/1471-2105-15-116. Nguyen Y, Nettleton D. RmRNAseq: Differential expression analysis for repeated-measures RNA-seq data. Bioinformatics. 2020; 36(16):4432–39. https://doi.org/10.1093/bioinformatics/btaa525. Park H, Lee S, Kim YJ, Choi MS, Park T. Multivariate approach to the analysis of correlated RNA-seq data. In: Proceedings - 2016 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2016: 2017. p. 1783–86. https://doi.org/10.1109/BIBM.2016.7822789. https://ieeexplore-ieee-org.proxy.hsl.ucdenver.edu/stamp/stamp.jsp?tp=arnumber=7822789. Tsonaka R, Spitali P. Negative Binomial mixed models estimated with the maximum likelihood method can be used for longitudinal RNAseq data. Brief Bioinform. 2021; 22(4):1–14. https://doi.org/10.1093/bib/bbaa264. Zhang X, Pei YF, Zhang L, Guo B, Pendegraft AH, Zhuang W, Yi N. Negative binomial mixed models for analyzing longitudinal microbiome data. Front Microbiol. 2018; 9(JUL):1683. https://doi.org/10.3389/fmicb.2018.01683. Smyth GK. limma: Linear Models for Microarray Data. In: Bioinformatics and Computational Biology Solutions Using R and Bioconductor. New York: Springer: 2005. p. 397–420. Law CW, Chen Y, Shi W, Smyth GK. Voom: Precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014; 15(2):29. https://doi.org/10.1186/gb-2014-15-2-r29. Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010; 11(10):106. https://doi.org/10.1186/gb-2010-11-10-r106. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986; 73(1):13–22. https://doi.org/10.1093/biomet/73.1.13. Wang M, Long Q. Modified robust variance estimator for generalized estimating equations with improved small-sample performance. Stat Med. 2011; 30(11):1278–91. https://doi.org/10.1002/sim.4150. Halekoh U, Højsgaard S, Yan J. The R package geepack for generalized estimating equations. J Stat Softw. 2006; 15(2):1–11. https://doi.org/10.18637/jss.v015.i02. Wang M. geesmv: Modified Variance Estimators for Generalized Estimating Equations. 2015. https://cran.r-project.org/package=geesmv. Accessed 12 Oct 2021. Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest Package: Tests in Linear Mixed Effects Models. J Stat Softw. 2017; 82(13). https://doi.org/10.18637/jss.v082.i13. Rizopoulos D. GLMMadaptive: Generalized Linear Mixed Models Using Adaptive Gaussian Quadrature. 2021. https://cran.r-project.org/package=GLMMadaptive. Accessed 7 Jan 2022. Fournier DA, Skaug HJ, Ancheta J, Ianelli J, Magnusson A, Maunder MN, Nielsen A, Sibert J. AD model builder: Using automatic differentiation for statistical inference of highly parameterized complex nonlinear models. Optim Methods Softw. 2012; 27(2):233–249. Højsgaard S, Halekoh U. doBy: Groupwise Statistics, LSmeans, Linear Contrasts, Utilities. 2021. https://cran.r-project.org/package=doBy. 
Accessed 12 Oct 2021. Satterthwaite FE. Synthesis of variance. Psychometrika. 1941; 6(5):309–16. https://doi.org/10.1007/BF02288586. Satterthwaite FE. An Approximate Distribution of Estimates of Variance Components. Biom Bull. 1946; 2(6):110. https://doi.org/10.2307/3002019. Smyth GK. Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004; 3(1). https://doi.org/10.2202/1544-6115.1027. Singhania A, Verma R, Graham CM, Lee J, Tran T, Richardson M, Lecine P, Leissner P, Berry MPR, Wilkinson RJ, Kaiser K, Rodrigue M, Woltmann G, Haldar P, O'Garra A. A modular transcriptional signature identifies phenotypic heterogeneity of human tuberculosis infection. Nat Commun. 2018; 9(1). https://doi.org/10.1038/s41467-018-04579-w. Rosenberg BR, Depla M, Freije CA, Gaucher D, Mazouz S, Boisvert M, Bédard N, Bruneau J, Rice CM, Shoukry NH. Longitudinal transcriptomic characterization of the immune response to acute hepatitis C virus infection in patients with spontaneous viral clearance. PLoS Pathog. 2018; 14(9). https://doi.org/10.1371/journal.ppat.1007290. Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Ser B Methodol. 1995; 57(1):289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x. Braga D, Barcella M, Herpain A, Aletti F, Kistler EB, Bollen Pinto B, Bendjelid K, Barlassina C. A longitudinal study highlights shared aspects of the transcriptomic response to cardiogenic and septic shock. Crit Care. 2019; 23(1):1–14. https://doi.org/10.1186/s13054-019-2670-8. Alexa A, Rahnenführer J. Gene set enrichment analysis with topGO. Bioconductor Improvments. 2009; 27:1–26. Hu X, Gadbury GL, Xiang Q, Allison DB. Illustrations on Using the Distribution of a P-value in High Dimensional Data Analyses,. Adv Appl Stat Sci. 2010; 1(2):191–213. Zimmerman KD, Espeland MA, Langefeld CD. A practical solution to pseudoreplication bias in single-cell studies. Nat Commun. 2021; 12(1):738. https://doi.org/10.1038/s41467-021-21038-1. CMM and EAW were funded by a Webb-Waring Early Career Investigator Award from the Boettcher Foundation. Department of Biostatistics and Informatics, University of Colorado, Anschutz Medical Campus, Aurora, CO, USA Elizabeth A. Wynn Center for Genes, Environment and Health, National Jewish Health, 1400 Jackson St, Denver, 80206, CO, USA Brian E. Vestal, Tasha E. Fingerlin & Camille M. Moore Brian E. Vestal Tasha E. Fingerlin Camille M. Moore EAW designed and implemented the simulation study and application data analysis, prepared tables and figures, and wrote the manuscript. BEV designed the data simulation framework, provided feedback concerning analysis and reviewed the manuscript. TEF provided feedback concerning analysis and reviewed the manuscript. CMM designed the data simulation framework, supervised the analysis and the writing of the manuscript, and reviewed the manuscript. All authors read and approved the final manuscript. Correspondence to Camille M. Moore. No ethics approval was required for this study. All data analyzed in this manuscript was either simulated or downloaded from publicly available sources. Supplementary methods, results, tables, and figures. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Wynn, E.A., Vestal, B.E., Fingerlin, T.E. et al. A comparison of methods for multiple degree of freedom testing in repeated measures RNA-sequencing experiments. BMC Med Res Methodol 22, 153 (2022). https://doi.org/10.1186/s12874-022-01615-8 RNA-seq Correlated data Multiple DF testing
How to choose the test statistic in Mann-Whitney test? Reading about the Mann-Whitney test for simple random and independent samples I encountered a small issue. According to the book "Introductory Statistics" by Weiss, the test statistic is obtained using $M = \text{sum of the ranks for sample data from population 1}$ As usual, we use this test statistic to decide whether we reject the null hypothesis or not. But this was a bit confusing because it seems arbitrary to choose a given sample as the first one. However, trying to clarify this, I found that there is another so-called test statistic, $U$, and sometimes we are supposed to choose $\min(U_{1}, U_{2})$ or the opposite, $\max(U_{1}, U_{2})$. For example, in this tutorial, this statistic is used: $U_{1} = R_{1} - \frac{n_{1}(n_{1} + 1)}{2}$ where $R_{1}$ is the sum of ranks in population $1$ as above. It also adds: Note that it doesn't matter which of the two samples is considered sample 1. The smaller value of $U_1$ and $U_2$ is the one used when consulting significance tables. But this procedure doesn't seem to be used in Weiss' book. Which one is the correct procedure? Maybe I'm just confusing different tests with similar names. wilcoxon-mann-whitney Robert Smith Because the two samples of known sizes are combined first and then ranked (the overall sum of ranks is therefore fixed), it makes no difference whether you base the test on min() or max(). – ttnphns Nov 17 '13 at 7:58 Could you elaborate a bit more? Obviously there are two test statistics to choose from and those have different values depending on the sum of ranks and possibly sample size, so why does using min() or max() make no difference? – Robert Smith Nov 17 '13 at 8:08 Because since you know the sum of ranks, U1+U2, then if you get to know U1 you automatically know U2, and vice versa. – ttnphns Nov 17 '13 at 8:11 A single answer to your confusion cannot be given because different programs (implementations) differ in details. The fact is that whether you rely on U1 or U2, a due move is always made to compute the unique correct Z or the exact p-value. Don't bother your brains. – ttnphns Nov 17 '13 at 8:25 All of the statistics mentioned are equally correct statistics, yielding equivalent tests. As long as you're clear which one you're using, and use the corresponding tables for that statistic, they all reject or fail to reject the same cases. There's a little bit of relevant discussion in this answer. – Glen_b -Reinstate Monica Nov 17 '13 at 14:28 For the Normal distribution test in the Mann-Whitney U test, note that the distribution that both $U_1$ and $U_2$ follow is a Normal distribution with mean equal to $n_{1}n_{2}/2$. But since $U_{1} + U_{2} = n_{1}n_{2}$, this means the null hypothesis distribution is Normal with a mean equal to the average of $U_{1}$ and $U_{2}$. In particular, this means that both $U_{1}$ and $U_{2}$ are equally distant from the mean of the distribution (they are both exactly $[\max(U_{1}, U_{2}) - \min(U_{1}, U_{2})] / 2$ units away from the center of the null hypothesis distribution, by definition). Since they are equally distant and the distribution is symmetric about its mean, then both $U_{1}$ and $U_{2}$ must generate identical z-scores (up to a sign difference) in that distribution, and thus also identical p-values.
Having a consistent choice like always using the minimum, I suppose, helps with things like software implementation consistency, but it should not have any effect on the output statistics of the test itself. Also, if you calculate the z-score based on $U_{min}$, it is guaranteed to be along the left tail of the null hypothesis distribution. Call this z-score $z_{min}$. Then the p-value generally of interest will simply be $2 * normCDF(z_{min})$, because $z_{min}$ is already negative on the left tail and you're measuring the probability of a value smaller than what was observed. Doubling it also accounts for the other tail where you care about $P(z > z_{max})$, since by the argument mentioned above, $z_{max} = |z_{min}|$. If you base it off of $z_{max}$ directly, you'd need the ever so slightly more complicated calculation of $2 * (1 - normCDF(z_{max}))$, which has a track record of confusing the heck out of people. – ely
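A small numerical illustration (not part of the original exchange) of the equivalence described above, using R's built-in functions and simulated data:

# U1 and U2 give the same two-sided p-value under the normal approximation.
set.seed(42)
x <- rnorm(8); y <- rnorm(10, mean = 0.5)
n1 <- length(x); n2 <- length(y)
r  <- rank(c(x, y))                            # ranks of the combined sample
R1 <- sum(r[1:n1])                             # rank sum for sample 1
U1 <- R1 - n1 * (n1 + 1) / 2
U2 <- n1 * n2 - U1                             # since U1 + U2 = n1 * n2
mu    <- n1 * n2 / 2
sigma <- sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # no ties in this example
z1 <- (U1 - mu) / sigma
z2 <- (U2 - mu) / sigma                        # equals -z1
2 * pnorm(-abs(z1))                            # two-sided p-value from U1
2 * pnorm(-abs(z2))                            # identical p-value from U2
wilcox.test(x, y, exact = FALSE, correct = FALSE)$p.value  # matches the values above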
Analyzing the relationship between sustainable development indicators and renewable energy consumption Rania Hamed Rashed ORCID: orcid.org/0000-0002-0331-58712 The transition to renewable energy sources remains a major challenge for developed and developing countries. Therefore, the study aims at investigating the relationship between sustainable development indicators and renewable energy consumption utilizing integrated data sets for 255 indicators expressing the sustainable development goals from 137 developed and developing countries. Principal component analysis then multiple linear regression tests are employed to conclude a mathematical model representing the numerical relationship between a set of sustainable development indicators and renewable energy consumption. The statistical analysis results include (i) an inverse correlation between Sustainable Development Index which expresses the dominant factor representing collected data and renewable energy consumption, (ii) a set of sustainable development indicators as the determinants of renewable energy consumption. The findings explain the rapid transformation of low Sustainable Development Index countries towards renewable energy technology by realizing the effective role of using renewable energy as a local solution. Moreover, the findings manifest the importance of the given sustainable development indicators in obtaining a more significant increase in renewable energy consumption. Using the concluded mathematical mode, planners and decision-makers can compromise the concluded indicators to attain a serious progressing step towards renewable energy transition aligned with achieving sustainable development. Energy has a positive impact on health, education, transportation, business, and most crucial; how long people may survive [1]. There is an exponential growing energy demand to meet the global population growth and maintain higher living standards [2]. Primary energy sources are categorized based on long-term availability as renewable and conventional energy resources; thus, consuming energy resources have two critical options, using easily accessed, conventional but unhealthy environmental energy resources or adopting technology-oriented, non-conventional, and healthy environmental energy resources [3]. Nowadays, the world is heavily dependent on depletable energy sources tracking not sustainable pathways. Renewables are responsible for only 20% of global energy consumption, which is a small share compared to its benefits Fig. 1 [4, 5]. Therefore, the transition to renewable energy sources remains a major challenge for developed and developing countries. Renewables are a perfect key for increasing energy security since they are physically available, economically affordable, socially accessible, and publicly acceptable [6]. Geographic limitations are the main obstacle of renewable power technologies. Renewable-based energy generation is still not as cost-effective compared to other energy generation options; it has high initial costs besides the high cost of storing systems although the costs of renewables have been going down [7]. Also, land areas that are required for the installation of energy technology are large compared to plants powered by fossil fuel [8]. Despite some existing limitations and challenges that need to be overcome, clean sources make a significant contribution in providing energy within buildings, industry, and transport sectors. 
Accordingly, there is an imperative need for exploring the relationship between renewable energy use and sustainable development (SD) to ensure energy access, promote a healthier environment and achieve energy access equality among people. (Global energy consumption growth by source 1965-2019 - [4, 5] Importance of renewable energy and sustainable development nexus United Nations' Sustainable Development Goals (SDGs) are a blueprint that guides societies for achieving progress in all pressing challenges. The United Nations defined the SDGs as "a universal call to action to end poverty, protect the planet and ensure that all people enjoy peace and prosperity" [9]. Renewable energy expressed in Goal 7 "Ensure access to affordable, reliable, and modern energy for all" is considered the heart of SDGs [10]. Securing access to energy supply is a highly demanding concern, but it is more challenging to provide energy in a sustainable form. Governments worldwide have declared the 17 SDGs to be 'integrated and indivisible' [11]; meaning that SDG7 cannot be achieved in sectoral isolation apart from the achievement of SDGs. Renewable energy is strongly connected to all human activities, and it contributes to achieving urban and environmental sustainability [12]. Ensuring access to renewable energy sources contributes to the implementation of SDGs through enabling development processes and promoting progress path. The analysis of the relationship between renewable energy and SDGs, which is the main aim of the current study, represents a step for mapping the links between energy systems and social well-being, economic activities, and the environment. Also, this interaction would affect future energy scenarios at national and local levels. The analysis of the relationship between renewable energy and SDGs at the targets level reveals a complex interaction including synergies and trade-offs [13] in which positive interactions between renewable energy and SDGs exceed the negative ones [14]. Evidence of synergies between 143 SDGs targets to achieve SDG7 is established, meaning that about 85% of SDGs targets support SDG7 [15]. The role of renewable energy in achieving Sustainable Development Goals This section discovers the connection between adopting renewable energy sources and the achievement of SDGs at goals level, using an analysis extracted from three studies, issued from global organizations scoped in SDGs interactions (1) accelerating the global energy transformation [16], (2) mapping the Renewable Energy Sector to the Sustainable Development Goals: An Atlas [17], and (3) a guide to SDG interactions: from science to implementation [18]. Following the results of the reviewed studies, a multi-perspective analyzed summary of the connection between renewable energy and SDGs is listed in Table 1. Table 1 Multi-perspective summary of renewable energy-SDGs nexus The focus of the literature search was on the studies that examine the relationship between the use of renewable energy and one or more SD dimensions. The literature section is divided according to the examined SD dimensions, while environmental and economic dimensions have been most discussed among the majority of previous studies. Renewable energy and environmental dimension Most researchers use the amount of carbon dioxide (CO2) emissions to express global climate change and environmental quality. 
Empirical results, from a series of studies, indicate that CO2 emissions and REC are inversely correlated and there is a bidirectional causality running from CO2 to REC in developed and developing countries [19–22]. Evidence, from 50 African countries across regions and income levels, confirm that REC contributes to mitigating CO2 emissions within ten years [23]. The result of examining a group of EU countries prove that the use of renewable energy options is a key solution to improve air quality by decreasing greenhouse gas emissions (GHG), where CO2 is the major component of GHG emissions [24]. A recent study confirms the role of REC in improving environmental sustainability characterizing environmental quality by ecological footprint [25]. Another study finds that global REC has a long-run significant positive impact on environmental sustainability by testing a global framework of developed and developing countries. The study recommends that the roles of renewable energy in increasing environmental quality should be considered by reforming the energy policies to encourage the use of renewable energy sources [26]. The empirical outcomes of a study analyzing the environmental degradation in Japan demonstrate proof for the existence of an interaction between renewable energy use and CO2 emissions. Hence, in the short and medium terms, renewable energy usage mitigates CO2. The study recommends that Japan should support renewable energy development [27]. Moreover, there is one-way causality from renewable energy consumption (REC) to CO2 emissions in Argentina; thus, renewables improves the environment [28]. Modelling the dynamic linkage between REC and environmental degradation, renewable energy use can predict CO2 emissions in South Korea [29]. A weak negative relationship is shown between renewable energy and CO2 emissions in China, the world's biggest carbon emitter [30]. Renewable energy and economic dimension A bidirectional causality is running between per capita Growth Domestic Product (GDP) and REC, addressing that developed countries are consuming more renewables sources, while lower GDP countries rely more on non-renewables sources [31, 32]. In most European countries, there is a positive relationship between REC and economic growth; REC has a positive impact on GDP [33, 34 ]. A panel of data for 102 countries with different income levels were analyzed and the results prove that for low-income countries, REC has a positive relationship with 'industrial and service values added' the industry/service contribution to overall GDP [35]. Testing data, from some Latin American countries, confirm that GDP per capita, technological innovation and trade have a statistically significant positive association with renewable energy use [36]. Evidence, from the association of southeast Asian nations countries, finds that the adopting of renewable sources in energy generation spurs economic growth and creates better export opportunities [37]. There is a positive nexus at the regional level in seven East African countries between the growth of renewable energy and economic growth [38]. In Rwanda, a low-income country, an asymmetric causality relationship running from REC positive shocks to economic growth is noted [39]. Contrary to popular belief, there is a bi-directional relationship between economic growth and the use of renewable energy in developing countries that are rapidly endorsing renewable energy to power the economic growth engine [40]. 
Renewable energy and other dimensions A closer look at previous studies shows an average number of researches linking REC to a group of indicators that have not been frequently examined. In developed countries, income inequality is associated with REC, thus increasing REC plays a notable role in reducing income inequality [41]. An increase in the usage of renewable energy leads to a decrease in public health expenditure for the association of Southeast Asian nations countries [37]. Corruption control is positively linked to renewable energy participation; it increases the REC in developed and developing countries [42]. The education level has a significant impact on renewable energy deployment in developed and developing countries [43]. A study has agreed to use 'adjusted net savings' as a good SD variable and the results have confirmed that renewable energy has a statistically significant positive impact on SD for developed and developing countries [44]. Measuring the connection between the Human Development Index (HDI) and renewable energy, a study indicates that the deployment of renewable energy contributes to improving the SD level in 28 OECD countries [45]. The analysis of the above literature has revealed that existing studies have connected REC to limited SDGs dimensions which do not meet the SDGs wide-ranging concept. Previous researches offer an improved understanding of the REC-SDGs nexus. Therefore, the current study addresses the literature gap by examining the relationship between REC and an integrated panel of SDGs indicators. The study aims at investigating the relationship between SD indicators and REC by concluding a mathematical model to represent the numerical relationship utilizing data of 255 SDGs indicators from 137 countries. Moreover, the study seeks to find out how effective is a high SD level in increasing REC by testing the adopted hypothesis which proposes that REC is associated with a group of SD indicators and its value can be calculated in terms of these indicators. This quantitative study is based on deductive and statistical analytical approaches to test the adopted hypothesis. The deductive approach extracts a key summary of the relation between renewable energy and SDGs through surfing theoretical background readings and previous studies guidelines. The statistical approach examines the influence of SD indicators on REC, and Fig. 2. illustrates research methodology scheme. Research Methodology Scheme Statistical indicators provide an accurate conception for analyzing and comparing data; therefore, it was crucial to interpret the nexus between renewable energy dependency and SDGs into measurable indicators. Renewable energy is frequently measured by: consumption, production, capacity, and energy supply. The current study agrees to measure the country dependency on renewable energy by consumption which specifies the actual energy need [46]. The arrangement of the SDGs is used as a format for collecting and allocating indicators that are gathered under each associated SDG. The research methods are designed within two parameters: (1) collecting and organizing data, and then (2) applying the statistical model. Data collecting and organizing The study depends on three sources of data: (1) SDGs indicators by World Bank [47], (2) Human Development Indices and Indicators by the United Nations Development Program [48, 49], and (3) SDGs indicators from the United Nations SDGs index and dashboards [50]. 
Each source endorses a group of indicators for measuring the achievement of SDGs and generates regularly updated data. The study collects the endorsed indicators from each source and compiles them into a preliminary list containing 517 indicators for 218 countries, following the World Bank countries list order. Data availability and update Data within the years 2017, 2018, and 2019 are collected, and 2017 data is selected as it has the most available data. Indicators (variables) and countries are specified within the framework of data availability. An optimization process is applied to filter the collected data, including indicators and countries with complete data or less than 5% missing data, while other indicators are excluded. Data adjusting Due to the unavailability of some indicators, the study proposes a collection of supplementary indicators to represent demographic aspects, human development, and SDGs' overall performance. The Series Mean method is employed to fill missing data [51] using Statistical Package for Social Sciences (SPSS) software. The variables matrix is formed using the adjusted indicators list, which compiles 255 SDGs indicators for 137 countries. Figure 3 illustrates the data collection and organizing sequence, and Table 2 indicates additional indicators and variables distribution according to SDGs. Data collecting and organizing scheme Table 2 Variables distribution according to SDGs indicators and Additional indicators The study divided the statistical model into two tests: (1) principal component analysis (PCA), which is a method used for multivariate data analysis to reduce dimensionality [52]. PCA is a prerequisite for (2) multiple linear regression (MLR) analysis, which is a conceptually analytic technique for understanding the interrelationships among variables [53]. Both PCA and MLR analyses are processed by the SPSS software. Principal component analysis (PCA) PCA is a widely used method for factor reduction. It reduces dataset dimensionality and preserves as much 'variability' as possible [54]. It is applied to large data sets to reduce a large number of variables (indicators) to a small group of components. The study employs the PCA test to compute the dominant components that capture the most variance in the variables [55]. The PCA test is applied using SPSS software to the 255 collected variables. A preliminary PCA run generates 40 components. The first component accounted for 34.2% of the explained variance of the variables. PCA analysis is based on which variables are most correlated with each component. Variables correlated with the first component with a saturation (loading) value of more than 0.5, whether positive or negative, are retained, while the rest of the variables are dropped. To exclude the least influential indicators, a second PCA run is executed on the retained variables. Its results show that the first of the 14 extracted components accounted for about 61.7% of the variables' variance, which is a high eigenvalue, as shown in Table 3. Therefore, the first component has the dominant power to describe the variation in the variables. Variables expressed by the first component can explain 61.7% of a country's SD level. The 126 variables obtained from PCA are the input independent (predictor) variables in the next statistical analysis phase; a minimal sketch of this reduction step is given below. Table 3 Part of the first and second runs components of PCA Multiple linear regression analysis (MLR) MLR is a quantitative analytical tool for explaining the behavior of one variable by means of other variables.
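Before turning to the regression step, the PCA-based reduction described above can be sketched in Python (pandas/scikit-learn). This is only a rough illustration: the input file name and column layout are hypothetical, and the factor scores are not meant to reproduce SPSS's exact output.

# Rough sketch of the data reduction (hypothetical file name; not the authors' SPSS run).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Country x indicator matrix assembled from the three data sources.
data = pd.read_csv("sdg_indicators_2017.csv", index_col="country")

# Keep indicators with less than 5% missing values, then mean-impute the rest
# (the "Series Mean" replacement used in SPSS).
data = data.loc[:, data.isna().mean() < 0.05]
data = data.fillna(data.mean())

# First PCA pass on the standardized variables.
X = StandardScaler().fit_transform(data)
pca1 = PCA().fit(X)

# Loadings (variable-component correlations); keep variables whose absolute
# loading on the first component exceeds 0.5.
loadings = pca1.components_.T * np.sqrt(pca1.explained_variance_)
keep = data.columns[np.abs(loadings[:, 0]) > 0.5]

# Second PCA pass on the retained variables; the first component's score is
# taken as the proposed SD Index.
X2 = StandardScaler().fit_transform(data[keep])
pca2 = PCA().fit(X2)
sd_index = pd.Series(X2 @ pca2.components_[0], index=data.index, name="SD_Index")
print(pca2.explained_variance_ratio_[0])  # share of variance of the first component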
The regression equation, which is the result of MLR, has the form of a mathematical function that quantifies the relationships between a set of independent variables and the dependent variable [56, 57]. The regression equation is used to estimate past values and predict future values of the dependent variable in terms of the independent variables' values [58]. MLR produces a regression equation that has the form of Eq. (1) $$ Y = a + b_1 X_1 + b_2 X_2 + b_3 X_3 + \dots + b_n X_n $$ where $Y$ is the response (dependent variable), $X_i$ $(i = 1, 2, 3, \dots, n)$ is the set of predictors (independent variables), $b_i$ $(i = 1, 2, 3, \dots, n)$ are the slope coefficients, and $a$ is the y-intercept. MLR analysis is applied to the 126 variables obtained from the first component produced by the second PCA run. The study employs MLR analysis utilizing SPSS software to set an equation with calculated values for the constant and the SDGs indicators' coefficients. The regression equation summarizes the linear relationship between REC and SDGs indicators as shown in Eq. 2. The response variable represents the REC, and the predictors ($X_1, X_2, X_3, \dots, X_n$) represent SDGs indicators. The determined regression equation offers a calculated value for REC in terms of the 111 SD indicators given in Eq. (2). Sustainable Development Index Along with the PCA results, the SPSS software generates factor score values for each extracted component. The factor score is a numerical value mapping the variables of each component into one composite value. The study proposes the factor score value of the PCA dominant component to be an SD Index, as it explains about 60% of a country's SD level. The study classifies countries according to the proposed SD Index; Table 4 lists four categories of SD Index (high, medium-high, medium-low, and low) together with the share of REC. A closer look at the SD Index and REC values for each category makes it evident that most countries with a high REC have a low SD Index and vice versa. The Pearson correlation coefficient is calculated to measure the strength and the direction of the relationship between the SD Index and REC. The obtained Pearson coefficient value (− 0.672) describes an inverse linear association, as shown in Fig. 4. This is contrary to most previous studies, which demonstrate a positive association between REC and SD indicators [31, 32]. The obtained strong inverse relationship provides evidence that the SD level is not sufficient to explain the increase in REC. The transition to renewable energy use in countries with a high SD Index occurs slowly, as these countries already have conventional power plants and a solid infrastructure network for energy generation and transmission, so generating energy from renewable sources is not an essential need for providing a normal life. In such countries, renewable energy is generated for saving natural resources, improving environmental conditions, and reducing global climate deterioration [38]. On the other side, countries with a low SD Index are rapidly turning to new clean energy because they do not own enough suitable fossil resources to meet their energy needs, and their infrastructure network is inadequate or sometimes does not exist [39, 40]. Table 4 Examples of countries classification according to SD Index Correlation (− 0.672) between SD Index (factor score) and REC Sustainable development indicators and REC relationship: The determined regression equation Eq.
(2) describes the mathematical relationship between each SDGs indicator (independent variables) and REC (the dependent variable), determining the best-fitting line for the relationship. Furthermore, the regression equation provides a calculated value for a country's REC in terms of the SDGs indicators' values. Analyzing the 111 SDGs indicators given in the regression equation, further explanations of the relationship between REC and SDGs indicators can be provided in the following points: The sign of the predictors is used to describe how an individual SDGs indicator changes with REC; a positive sign means a direct relationship and a negative sign means an inverse relationship. The results indicate that 68 SDGs indicators have a direct relationship with REC, while 43 indicators have an inverse relationship with REC. The value of the predictor coefficient is used to evaluate the importance of individual predictors. An SDGs indicator coefficient with a larger positive or negative value makes a larger change in the REC value. Likewise, an SDGs indicator coefficient with a smaller positive or negative value makes a smaller change in the value of REC. The regression equation is used to adjust the individual predictors according to the sign and the value of the predictor coefficient to estimate the REC values in any year by changing the values of the predictor indicators given in the regression equation. Tables 5 and 6 show the SDGs indicators given in Eq. (2) that have a high positive or negative relationship with REC. Comparing the indicators that appear in the regression equation to previous literature results shows that the environmental dimension, characterized by CO2 emissions, has a negative relationship with REC. Hence, the use of renewable energy contributes to improving the environment, as stated in most previous studies [19–30]. Regarding the economic dimension, most previous studies have mentioned the positive relationship between GDP and REC [32–38], while the current study demonstrates that GDP has no relationship with REC; at the same time, the results indicate a positive relationship between the Income Index and REC. A positive correlation between service and industrial value-added and REC is indicated in both the current results and a former study [35]. Previous studies signify a positive correlation between the education level [43], income inequality [41], HDI [45], public health expenditure [37], adjusted net savings [44], and REC. On the other hand, the results indicate a negative correlation between the education level characterized by the Education Index, income inequality represented by the Inequality-Adjusted Income Index, HDI, and REC, and no relationship appears between adjusted net savings, public health expenditure, and REC. Table 5 SDGs indicators that have a high positive relationship with REC Table 6 SDGs indicators that have a high negative relationship with REC The extracted indicators make a significant change in the REC value when their values change, meaning that to increase REC it is helpful for planners and decision-makers to consider these indicators. Estimated renewable energy consumption The study uses the determined regression equation Eq. (2) to calculate the estimated value of REC for the 137 tested countries in terms of the SDGs indicators' values.
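As an illustration only, the regression and estimation steps described here might be sketched as follows in Python (statsmodels), continuing the hypothetical objects `data`, `keep`, and `sd_index` from the PCA sketch above; the column name used for REC is an assumption, not the dataset's actual label.

# Illustration only: regression of REC on the retained indicators.
import statsmodels.api as sm
from scipy.stats import pearsonr

y = data["renewable_energy_consumption"]
X = sm.add_constant(data[keep.drop("renewable_energy_consumption", errors="ignore")])

# Note: with 137 countries and over a hundred predictors the fit is close to
# saturated, so this shows the mechanics, not a validated model.
model = sm.OLS(y, X).fit()

coefs = model.params.drop("const").sort_values()  # signs/magnitudes, as in Eq. (2)
rec_estimated = model.predict(X)                  # estimated REC per country
r, p = pearsonr(sd_index, y)                      # SD Index vs observed REC
print(model.rsquared, r, p)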
The Pearson correlation coefficient is calculated to investigate the connection between the estimated REC and the concluded SD Index (factor score). The values of the estimated REC have a weak positive relationship (+ 0.25) with the SD Index. The range of the estimated REC values indicates that an increase in REC should be considered in most countries, except in the low SD Index category, where the real REC value is greater than the estimated value, as shown in Table 7. Table 7 Examples of estimated REC according to SD Index Most existing studies have examined the relationship between SD and REC using economic and environmental indicators, but only a few studies have included some social indicators. However, this study extends the literature by investigating the relationship between SD indicators and REC. A quantitative deductive approach is adopted for setting a statistical model to test the proposed hypothesis, which suggests that REC is associated with a group of SD indicators and its value can be calculated in terms of these indicators. The statistical model, which consists of PCA and MLR and utilizes data of 255 SDGs indicators from 137 countries, is employed to examine the REC-SD nexus. The results from the statistical tests provide a further explanation of the relationship between REC and SDGs indicators. The PCA results are (1) reducing data and extracting the dominant SDGs indicators, (2) concluding the SD Index, (3) classifying countries according to the SD Index, and (4) determining the correlation between REC and the SD Index. On the other hand, the MLR results are (1) determining the relationship between SDGs indicators and REC, (2) evaluating the importance of each SDGs indicator, and (3) estimating the REC value in a certain year by adjusting the SDGs indicators' values. The inverse correlation between REC and the SD Index, which expresses the dominant factor representing the collected data, explains the rapid transformation of low SD Index countries towards renewable energy technology. In low SD Index countries, many factors drive people to depend on renewable resources, but the most forcing factor is the lack of a source of energy and the absence of transmission infrastructure due to many economic, political or natural obstacles. The results also provide perceptible evidence of the relationship between REC and a set of SDGs indicators. However, the importance of the individual SDGs indicators varies according to the change they make in the REC value. This variation provides planners and decision-makers with the SDGs indicators that have the greatest importance (coefficient values) to obtain a more significant increase in the REC value. For planners and decision-makers, the concluded regression equation, which represents the relationship between REC and an integrated panel of SDGs indicators, is an effective optimization tool to increase the opportunities of providing societies with clean, modern and affordable sources of energy while at the same time accelerating the development wheel. The datasets used are available from the World Bank database [47], Human Development Data Center [49], and Sustainable Development Report 2021 [50]. The combined dataset is available to the authors. CO2: Carbon dioxide GDP: Gross domestic product REC: Renewable energy consumption GHG: Greenhouse gas SPSS: Statistical Package for Social Sciences HDI: Human Development Index MLR: Multiple linear regression SDGs: Sustainable Development Goals Lloyd PJ (2017) The Role of Energy in Development. J Energy Southern Africa 28(1):54–62 Avtar R, Tripathi S, Aggarwal AK, Kumar P (2019) Population–Urbanization–Energy Nexus: A Review.
Resources 8(3):136 Kumar M (2020) Social, Economic, and Environmental Impacts of Renewable Energy Resources. In: Okedu KE, Tahour A, Aissaoui AG (eds) Chapter in Wind Solar Hybrid Renewable Energy System. BoD – Books on Demand, London, UK, pp 227–234. https://doi.org/10.5772/intechopen.89494 Available from: https://www.intechopen.com/chapters/70874 (Accessed 22/10/2020) Dudley B (2018) BP Statistical Review of World Energy. Published online at bp.com, London, UK, British Petroleum. Retrieved from: https://www.bp.com/content/dam/bp/business-sites/en/global/corporate/pdfs/energy-economics/statistical-review/bp-stats-review-2018-full-report.pdf. Accessed 15 July 2020 Ritchie H, Roser M (2020) Renewable Energy. Published online at OurWorldInData.org, England Retrieved from: https://ourworldindata.org/renewable-energy (Accessed 15/7/2020) Paravantis JA, Kontoulis N. "Energy Security and Renewable Energy: A Geopolitical Perspective", Chapter at Renewable Energy - Resources, Challenges and Applications, Edited by Mansour Al Qubeissi, Ahmad El-kharouf and Hakan Serhad Soyhan, London, IntechOpen, 2020, Doi: https://doi.org/10.5772/intechopen.91848. Available from: https://www.intechopen.com/chapters/71552 (Accessed 9/6/2021) Bogdanov D, Ram M, Aghahosseini A, Gulagi A, Oyewo AS, Child M et al (2021) Low-cost renewable electricity as the key driver of the global energy transition towards sustainability. Energy 227:120467 Halkos GE, Gkampoura EC (2020) Reviewing usage, potentials, and limitations of renewable energy sources. Energies 13(11):2906 UNDP (2020) Sustainable Development Goals. Published online at UNDP.org, New York, USA Retrieved from: http://www.undp.org/content/undp/en/home/sustainable-developmentgoals.html (Accessed 19/4/2021) IEA (2018) Energy is at the heart of the sustainable development agenda to 2030. Published online at IEA.org, Paris Retrieved from: https://www.iea.org/commentaries/energy-is-at-the-heart-of-the-sustainable-development-agenda-to-2030 (Accessed 16/4/2021) UNDP (2015) Transforming our world: the 2030 Agenda for Sustainable Development. Published online at UNDP.org, New York, USA Retrieved from: https://sdgs.un.org/2030agenda (Accessed 26/5/2021) Barmelgy MMEL, Shalaby AM, Kamal RM (2020) A Framework for Developing Sustainable New Cities in Egypt. J Eng Appl Sci 67(3):585–604 Santika WG, Anisuzzaman M, Bahri PA, Shafiullah GM, Rupf GV, Urmee T (2019) From goals to joules: A quantitative approach of interlinkages between energy and the Sustainable Development Goals. Energy Res Soc Sci 50:201–214 McCollum DL, Echeverri LG, Busch S, Pachauri S, Parkinson S, Rogelj J et al (2018) Connecting the sustainable development goals by their energy inter-linkages. Environ Res Lett 13(3):033006 Nerini FF, Tomei J, To LS, Bisaga I, Parikh P, Black M et al (2018) Mapping synergies and trade-offs between energy and the Sustainable Development Goals. Nat Energy 3(1):10–15 IRENA (2017) Rethinking Energy 2017: Accelerating the Global Energy Transformation. Published online at IRENA.org, Abu Dhabi, UAE Retrieved from: https://www.irena.org/publications/2017/jan/rethinking-energy-2017-accelerating-the-global-energy-transformation (Accessed 16/4/2021) SDSN (2019) Mapping the Renewable Energy Sector to the Sustainable Development Goals: An Atlas. 
Published online at unsdsn.org, New York, USA Retrieved from: https://resources.unsdsn.org/mapping-the-renewable-energy-sector-to-the-sustainable-development-goals-an-atlas (Accessed 5/2/2020) Griggs DJ, Nilsson M, Stevance A, McCollum D (2017) A Guide to SDG Interactions: From Science to Implementation. Published online at council.science, Paris, France Retrieved from: https://council.science/publications/a-guide-to-sdg-interactions-from-science-to-implementation/ (Accessed 2/2/2021) Kahia M, Jebli MB, Belloumi M (2019) Analysis of the Impact of Renewable Energy Consumption and Economic Growth on Carbon Dioxide Emissions in 12 MENA Countries. Clean Technol Environ Policy 21(4):871–885 Bekun FV, Alola AA, Sarkodie SA (2019) Toward a Sustainable Environment: Nexus between CO2 Emissions, Resource Rent, Renewable and Nonrenewable Energy in 16-EU Countries. Sci Total Environ 657:1023–1029 Hanif I (2018) Impact of Economic Growth, Nonrenewable and Renewable Energy Consumption, and Urbanization on Carbon Emissions in Sub-Saharan Africa. Environ Sci Pollut Res 25(15):15057–15067 Sarkodie SA, Adams S (2018) Renewable Energy, Nuclear Energy, and Environmental Pollution: Accounting for Political Institutional Quality in South Africa. Sci Total Environ 643:1590–1601 Namahoro JP, Wu Q, Zhou N, Xue S (2021) "Impact of energy intensity, renewable energy, and economic growth on CO2 emissions: Evidence from Africa across regions and income levels" Renewable and Sustainable Energy Reviews. Vol. 147:111233 Vasylieva T, Lyulyov O, Bilan Y, Streimikiene D (2019) Sustainable Economic Development and Greenhouse Gas Emissions: The Dynamic Impact of Renewable Energy Consumption, GDP, and Corruption. Energies 12(17):3289 Alola AA, Bekun FV, Sarkodie SA (2019) Dynamic Impact of Trade Policy, Economic Growth, Fertility Rate, Renewable and Non-renewable Energy Consumption on Cological Footprint in Europe. Sci Total Environ 685:702–709 Kirikkaleli D, Adebayo TS (2021) Do renewable energy consumption and financial development matter for environmental sustainability? New global evidence. Sustain Dev 29(4):583–594 Adebayo TS, Kirikkaleli D (2021) Impact of renewable energy consumption, globalization, and technological innovation on environmental degradation in Japan: application of wavelet tools. Environ Dev Sustain 1:26 Adebayo TS, Akinsola GD, Bekun FV, Osemeahon OS, Sarkodie SA (2021) Mitigating human-induced emissions in Argentina: role of renewables, income, globalization, and financial development. Environ Sci Pollut Res 1:15 Adebayo TS, Coelho MF, Onbaşıoğlu DÇ, Rjoub H, Mata MN, Carvalho PV et al (2021) Modeling the dynamic linkage between renewable energy consumption, globalization, and environmental degradation in South Korea: does technological innovation matter? Energies 14(14):4265 Soylu ÖB, Adebayo TS, Kirikkaleli D (2021) The imperativeness of environmental quality in China amidst renewable energy consumption and trade openness. Sustainability 13(9):5054 Aydin M (2019) Renewable and Non-renewable Electricity Consumption–economic Growth Nexus: evidence from OECD countries. Renew Energy 136:599–606 Marinaș MC, Dinu M, Socol AG, Socol C (2018) Renewable Energy Consumption and Economic Growth. Causality Relationship in Central and Eastern European Countries. PLoS One 13(10):e0202951 Ntanos S, Skordoulis M, Kyriakopoulos G, Arabatzis G, Chalikias M, Galatsidas S et al (2018) Renewable Energy and Economic Growth: Evidence from European Countries. 
Sustainability 10(8):2626 Simionescu M, Strielkowski W, Tvaronavičienė M (2020) Renewable Energy in Final energy Consumption and Income in the EU-28 countries. Energies 13(9):2280 Jebli MB, Farhani S, Guesmi K (2020) Renewable Energy, CO2 Emissions and Value Added: Empirical Evidence from Countries with Different Income Levels. Structural Change Econ Dynamics 53:402–410 Vural G (2021) Analyzing the impacts of economic growth, pollution, technological innovation and trade on renewable energy production in selected Latin American countries. Renew Energy 171:210–216 Khan SAR, Zhang Y, Kumar A, Zavadskas E, Streimikiene D (2020) Measuring the Impact of Renewable Energy, Public Health Expenditure, Logistics, and Environmental Perform Sustainable Economic Growth. Sustain Dev 28(4):833–843 Namahoro JP, Wu Q, Xiao H, Zhou N (2021) The Impact of Renewable Energy, Economic and Population Growth on CO2 Emissions in the East African Region: Evidence from Common Correlated Effect Means Group and Asymmetric Analysis. Energies 14(2):312 Namahoro JP, Wu Q, Xiao H, Zhou N (2021) The asymmetric nexus of renewable energy consumption and economic growth: New evidence from Rwanda. Renew Energy 174:336–346 Fu Q, Álvarez-Otero S, Sial MS, Comite U, Zheng P, Samad S, Oláh J (2021) Impact of Renewable Energy on Economic Growth and CO2 Emissions—Evidence from BRICS Countries. Processes 9(8):1281 Topcu M, Tugcu CT (2020) The Impact of Renewable Energy Consumption on Income Inequality: Evidence from Developed Countries. Renew Energy 151:1134–1140 Uzar U (2020) Is Income Inequality a Driver for Renewable Energy Consumption? J Clean Prod 255:120287 Özçiçek Ö, Ağpak F (2017) The Role of Education on Renewable Energy Use: Evidence from Poisson Pseudo Maximum Likelihood Estimations. J Bus Econ Polic 4(4):49–61 Güney T (2019) Renewable Energy, Non-renewable Energy and Sustainable Development. Int J Sustain Dev World Ecol 26(5):389–397 Soukiazis E, Proenca S, Cerqueira PA, 'The interconnections between Renewable Energy, Economic Development and Environmental Pollution. A simultaneous equation system approach," Centre for Business and Economics Research (CeBER), CeBER Working Papers 2017-10, Coimbra, University of Coimbra, 2017. IEA (2018) Understanding and using the Energy Balance. Published online at IEA.org, Paris Retrieved from: https://www.iea.org/commentaries/energy-is-at-the-heart-of-the-sustainable-development-agenda-to-2030 (Accessed 5/9/2020) World Bank (2017) World Development Indicators: Sustainable Development Goal. Published online at worldbank.org, Washington, DC, USA Retrieved from: http://datatopics.worldbank.org/sdgs/ (Accessed 2/10/2020) UNDP (2018) 2018 Statistical Update: Human Development Indices and Indicators. Published online at hdr.undp.org, New York, USA Retrieved from: http://hdr.undp.org/en/content/human-development-indices-indicators-2018-statistical-update (Accessed 15/10/2020) UNDP (2018) Human Development Data Center. Published online at hdr.undp.org, New York, USA Retrieved from: http://hdr.undp.org/en/data (Accessed 20/9/2020) Sachs J, Kroll C, Lafortune G, Fuller G, Woelm F (2021) The Decade of Action for the Sustainable Development Goals: Sustainable Development Report 2021. Published online at sdgindex.org, Cambridge, UK Retrieved from: https://unstats.un.org/sdgs/report/2020/ (Accessed 5/11/2020) IBM (2016) Estimation Methods for Replacing Missing Values. 
Published online at ibm.com, New York, USA Retrieved from: https://www.ibm.com/docs/en/spss-statistics/24.0.0?topic=values-estimation-methods-replacing-missing (Accessed 20/21/2020) Lever J, Krzywinski M, Altman N (2017) Points of significance: Principal component analysis. Nat Methods 14(7):641–643 Mukhopadhyay P (2014) Learning Regression Analysis by Simulation by Kunio Takezawa. Int Stat Rev 82(2):325–325 IBM (2016) Categorical Principal Components Analysis. Published online at ibm.com, New York, USA Retrieved from: https://www.ibm.com/docs/en/spss-statistics/23.0.0?topic=application-categorical-principal-components-analysis (Accessed 15/21/2020) Meng Y, Qasem S, Shokri M (2020) Dimension Reduction of Machine Learning-Based Forecasting Models Employing Principal Component Analysis. Mathematics 8(8):1233 Bolshakova L (2021) Correlation and Regression Analysis of Economic Problems. Revista Gestão Inovação e Tecnologias 11(3):2077–2088 Weisberg S (2014) Applied linear regression, 4th edn. Wiley, Hoboken, New Jersy Gogtay NJ, Deshpande SP, Thatte UM (2017) Principles of regression analysis. J Assoc Physic India 65(48):48–52 The authors declare that they did not receive any funding sources. Architecture and Regional Planning, Faculty of Engineering, Cairo University, Cairo, Egypt Architecture Department, Faculty of Engineering, Cairo University, Cairo, Egypt Rania Hamed Rashed Each author has made substantial contributions to the conception and design of the work. R.H. has prepared the original draft, conceptualization and methodology, has performed the data curation formal analysis and interpretation of data, has utilized the software, and has attained manuscript review and editing. T.A. has substantively revised the manuscript, has verified all data and materials, and has approved the submitted version. All authors have read and approved the final manuscript to be personally accountable for the authors' contributions. Correspondence to Rania Hamed Rashed. The authors declare that they have no competing interests Aboul-Atta, T.AL., Rashed, R.H. Analyzing the relationship between sustainable development indicators and renewable energy consumption. J. Eng. Appl. Sci. 68, 45 (2021). https://doi.org/10.1186/s44147-021-00041-9
Metabolic syndrome among type 2 diabetic patients in Ethiopia: a cross-sectional study Mequanent Kassa Birarra1 (ORCID: orcid.org/0000-0002-8700-0614) & Dessalegn Asmelashe Gelayee2 Metabolic syndrome (MetS) increases the risk of cardiovascular diseases (CVD) and premature death, as well as health-care costs. This study was aimed at investigating the prevalence of MetS and its determinant factors among type 2 diabetes mellitus (T2DM) patients attending a specialized hospital. A cross-sectional study was conducted on a total of 256 T2DM patients from 1 March to 30 May 2017 at the University of Gondar Comprehensive Specialized Hospital (UGCSH). Data were collected based on the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement. Bivariable and multivariable logistic regression analyses were run to identify predictors of MetS from the independent variables, and the significance level was set at P < 0.05. The prevalence of MetS in this study was 70.3, 57, and 45.3%, and it was more common in females (66.1, 83.3, and 70.7%), using the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP III), International Diabetes Federation (IDF), and World Health Organization (WHO) criteria respectively. The most prevalent components of MetS were a low level of high-density lipoprotein (HDL) and an elevated triglyceride (TG) level. Using the IDF criteria, female gender was significantly associated with MetS (AOR = 0.2 at 95% CI: 0.1, 0.6, P = 0.00). Whereas, by the NCEP-ATP III criteria, age between 51 and 64 years old (AOR = 2.4, 95% CI: 1.0, 5.8, P = 0.04), self-employment (AOR = 2.7, 95% CI: 1.1, 6.5, P = 0.03), and completion of secondary school and above (AOR = 3.2, 95% CI: 1.6, 6.7, P = 0.001) were predictors of the development of MetS. Using the WHO criteria, being single was significantly associated with MetS (AOR = 17 at 95% CI: 1.8, 166, P = 0.000). This study demonstrates that metabolic syndrome is a major health concern for diabetic patients in Ethiopia, and they are at increased risk of developing complications such as cardiovascular diseases and premature mortality. The predictors female gender, age between 51 and 64 years old, urban area residence, and being single are modifiable. Thus, health authorities shall provide targeted interventions, such as lifestyle modifications, to these most-at-risk sub-populations of diabetic patients. The burden of non-communicable diseases in developing countries is increasing, leading to high mortality rates [1]. Nowadays, T2DM is pandemic and there are no signs of reduction in incidence rates [2]. For example, the International Diabetes Federation report indicates that more than 415 million adults worldwide have diabetes; by 2040 this will rise to 642 million. In Africa, 441 million people live with diabetes, which is likely to increase by 926 million in 2040 [3]. The diabetic population is at increased risk of mortality and morbidity, primarily due to cardiovascular diseases [4]. The relative risks are from 1 to 3 in men and from 2 to 5 in women [5]. Metabolic syndrome has its own contribution to these outcomes of DM. Metabolic syndrome is highly prevalent in T2DM patients [6,7,8]. However, several studies have reported a lower prevalence of MetS [9, 10], and this is largely due to differences in the characteristics of the studied population, such as residence, type of disease and comorbidities, etc.
Metabolic syndrome can be defined as a cluster of interconnected cardio-metabolic dysfunctions characterized by an increase in fasting blood sugar (FBS), abdominal circumference (AC), arterial pressure (AP), and triglycerides (TG), and a reduction in high-density lipoprotein cholesterol (HDL) [11]. This syndrome has different sets of criteria to measure it: the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP III) [12], WHO [13], and IDF [14] criteria. The NCEP-ATP III definition uses the presence of 3 or more parameters as a cutoff to define MetS, while the WHO and IDF definitions require the presence of at least two parameters. The syndrome can directly contribute to the development of CVD and the appearance of T2DM in non-diabetic patients. Additionally, it increases the risk of premature death, renal disease, mental disorders and cancer. Thus MetS represents a serious public health problem [15,16,17]. Metabolic syndrome is also not without cost implications. For instance, Boudreau et al. found that costs for subjects with diabetes plus weight risk, dyslipidemia, and hypertension were almost double the costs for subjects with prediabetes plus similar risk factors ($8067 vs. $4638) [18]. Globally, 20–25% of the adult population has MetS; they are twice as likely to die from it and three times more likely to have a heart attack or stroke compared with people without the syndrome [14, 19]. The prevalence of MetS in type 2 diabetes in sub-Saharan Africans according to two sets of diagnostic criteria was 71.7% according to the IDF criteria and 60.4% using the NCEP-ATP III criteria [20]. In Ethiopia, the prevalence of MetS ranged from 26 to 70% using the NCEP-ATP III criteria [21,22,23]. Nowadays, MetS has become a significant public health problem. Therefore, there is a need for investigation in this area [24]. Taking into consideration that diabetic patients who have MetS also have cardiovascular risk factors, the diagnosis of MetS in those patients is very important for the detection, prevention, and treatment of the underlying risk factors and for the reduction of the cardiovascular disease burden in the general population [25, 26]. While the limited studies of MetS among diabetic patients in Ethiopia acknowledge its burden, they followed a single criterion (NCEP-ATP III) to define MetS, and using a single criterion may either under- or overestimate the problem. Thus, denying or providing interventions to minimize the risks of MetS complications would be irrational, since a given patient may be categorized as having MetS under one definition but not under the others. In this regard, the present study employed three commonly used criteria to define MetS, so that it would be easy to acknowledge the importance of having a unified MetS criterion to make appropriate clinical decisions in the context of Ethiopia. Therefore, this study was aimed at investigating the prevalence of MetS and its determinant factors among T2DM patients attending a comprehensive specialized hospital. Study Area & Period The study was conducted from March to May 2017 at UGCSH, Northwest Ethiopia. The hospital currently serves more than 5 million people in the surrounding area and is located in Gondar town, 750 km northwest of the capital city. It has more than 400 beds and fourteen different units that provide medical services to nearly 250,000 out-patients each year. More than 5000 diabetic patients attend the diabetic follow-up clinic.
Study design and population An institution-based cross-sectional study design was followed. The source population was all patients attending the facility on an out-patient basis at UGCSH, whereas all adult T2DM patients attending the facility on an out-patient basis during the study period who volunteered to take part in the study were the study population. Patients aged ≥ 20 years who were diagnosed with T2DM and undergoing treatment at the facility were included in the study. Pregnant women and patients with excessive alcohol or other drug abuse, current psychiatric treatment, or incomplete data were excluded from the study. Sample size and sampling procedure The sample size was calculated based on the single population proportion formula [27], using the following assumptions: 1.96 was used for \( Z_{\alpha/2} \), the proportion (P) of MetS in these groups was taken as 0.5, the confidence interval (CI) was 95%, and the margin of error (d) was 5%. $$ n=\frac{Z_{\alpha/2}^{2}\,P\left(1-P\right)}{d^{2}} $$ Based on the above formula and assumptions, together with the correction formula and a 5% contingency, the sample size (n) was calculated to be 256 (a numerical sketch of this calculation is given at the end of this section). Study participants were selected using a systematic random sampling technique, in which every third patient who arrived at the clinic was selected for the study. Data collection procedure Sociodemographic and economic data (age, sex, monthly income, lifestyle, family history of diabetes and other diseases/disorders) of the study participants were collected using a standardized interview questionnaire, whereas data on HDL, fasting plasma glucose (FPG), and TG were recorded from patient files and charts. The components of MetS were identified and determined according to the NCEP-ATP III, IDF, and WHO definitions. Anthropometric data of the study participants (weight, height and waist circumference) were obtained by two data-collector nurses working at the UGCSH diabetic clinics. Weight was measured using a weight balance while patients were visiting the clinics during their follow-up. The average follow-up interval was 2–3 months. Height was measured using a meter, and data collectors instructed participants to stand upright and motionless, touching their thighs with their palms. Body mass index was calculated from height and weight. Waist circumference (WC) was measured midway between the inferior angle of the ribs and the supra-iliac crest using a meter [28]. Ten minutes after the arrival of the study participants at the UGCSH diabetic clinic, blood pressure (BP) was measured with a standard adult arm cuff of a mercury-type sphygmomanometer by the recruited data-collector nurses working in the clinic. In order to assure the reliability of the BP measurement, data collectors took two readings at a 1-minute interval, and the average of the two readings was recorded as the final BP of the patient. However, a third measurement was taken if the difference between the two readings was greater than 5 mmHg, and the average of the 3 BP readings was then recorded as the final BP of the patient [29]. In order to control the quality of data, a pre-test of the data abstraction format was done before the main data collection on a sample equivalent to 13 (5%) of the total sample size in randomly selected patients. The pretested papers were not included in the study, and appropriate adjustment was made to the data abstraction format. In addition, the principal investigator supervised the data collectors during data collection.
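Returning to the sample-size calculation above, the arithmetic can be sketched as follows; the accessible-population size N used in the finite-population correction is an assumed value for illustration only, so the result will differ somewhat from the 256 reported by the authors.

# Numerical sketch of the single-population-proportion calculation.
import math

Z = 1.96   # Z_(alpha/2) for a 95% confidence level
P = 0.5    # assumed proportion of MetS
d = 0.05   # margin of error

n0 = (Z ** 2) * P * (1 - P) / d ** 2       # initial size, about 384
N = 600                                    # assumed accessible T2DM population (illustration only)
n_corrected = n0 / (1 + (n0 - 1) / N)      # finite-population correction
n_final = math.ceil(n_corrected * 1.05)    # add 5% contingency
print(round(n0), round(n_corrected), n_final)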
Then the collected data were checked for completeness and consistency on a daily basis. Data analysis and interpretation The collected data were entered into Epi Info version 7 and exported to Statistical Package for the Social Sciences (SPSS) version 20 for statistical analysis. The results are presented using tables and figures. Frequency distributions were calculated. The prevalence of MetS was calculated by dividing the number of patients with MetS by the total number of study participants. To identify factors independently associated with the occurrence of MetS, bivariable and multivariable logistic regression analyses were run. The results of the bivariable and multivariable analyses were reported as crude and adjusted odds ratios with 95% confidence intervals (95% CI), and a P-value ≤ 0.05 was considered statistically significant. Operational definitions NCEP-ATP III criteria Study participants were classified as having MetS if they had three or more of the following risk factors: waist circumference (> 102 cm for men and > 88 cm for women), high plasma triglycerides (≥ 150 mg/dl), low HDL cholesterol (< 40 mg/dl for men and < 50 mg/dl for women), blood pressure (≥ 130/85 mmHg), and fasting plasma glucose (≥ 110 mg/dl) [12]. WHO criteria Study participants were classified as having MetS if, along with DM, they had any two of the following components: obesity (BMI > 30 kg/m2), high serum triglycerides (≥ 150 mg/dl), low serum high-density lipoprotein cholesterol (< 35 mg/dl for men and < 39 mg/dl for women), and hypertension (≥ 140/90 mmHg) [13]. IDF criteria Study participants were classified as having MetS if, along with central obesity (waist circumference > 94 cm for men and > 88 cm for women), they had any two of the following components: raised TG level ≥ 150 mg/dl (1.7 mmol/l), or specific treatment for this lipid abnormality; reduced HDL-cholesterol < 40 mg/dl (1.03 mmol/l) in males and < 50 mg/dl (1.29 mmol/l) in females, or specific treatment for this lipid abnormality; raised blood pressure (systolic BP ≥ 130 or diastolic BP ≥ 85 mmHg) or treatment of previously diagnosed hypertension; and raised fasting blood glucose ≥ 100 mg/dl (≥ 5.6 mmol/l) or previously diagnosed diabetes [14]. Body mass index (BMI) BMI was defined as the ratio between weight (kg) and the square of the height (m) and was used to categorize weight status: patients with BMI < 18.5 were stated as underweight, patients with BMI 18.5–24.9 were considered normal, patients with BMI 25.0–29.9 were overweight, and patients with BMI ≥ 30 were obese [22].
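As a minimal illustration of how the NCEP-ATP III rule above can be applied to a single patient record (the field names and example values are hypothetical, and the cut-offs follow the operational definitions given above):

# Counting NCEP-ATP III components for one patient record.
def atp3_components(p):
    checks = [
        p["waist_cm"] > (102 if p["sex"] == "male" else 88),   # abdominal obesity
        p["triglycerides_mg_dl"] >= 150,                       # high plasma triglycerides
        p["hdl_mg_dl"] < (40 if p["sex"] == "male" else 50),   # low HDL cholesterol
        p["systolic_bp"] >= 130 or p["diastolic_bp"] >= 85,    # raised blood pressure
        p["fasting_glucose_mg_dl"] >= 110,                     # raised fasting plasma glucose
    ]
    return sum(checks)

def has_mets_atp3(p):
    return atp3_components(p) >= 3   # MetS = three or more of the five components

patient = {"sex": "female", "waist_cm": 94, "triglycerides_mg_dl": 180,
           "hdl_mg_dl": 42, "systolic_bp": 128, "diastolic_bp": 82,
           "fasting_glucose_mg_dl": 152}
print(atp3_components(patient), has_mets_atp3(patient))   # 4 True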
Around half of the study subjects 136 (53.1%) diagnosed DM between 1 and 5 years duration and all of them were under medication. Most of them 132 (51.5%) were also undertaking combination treatment. Details are presented in Table 1. Table 1 Socio demographic characteristics of the study participants at UGCSH, June 2017 Prevalence of metabolic syndrome with each criteria The prevalence of MetS in this study was 180 (70.3%), 146 (57%) and 116 (43.3%) using NCEP(ATPIII), IDF and WHO criteria respectively (Fig. 1). Metabolic syndrome in different criteria at UGCSH, June 2017 Frequency of metabolic syndrome components by sex The frequency of MetS components in this study based on NCEP-ATP III criteria were 53.5, 68.8 and 67.2% for abdominal obesity, elevated triglyceride and reduced HDL respectively. Whereas, using the IDF criteria the prevalence was 61.7, 67.6 and 66.8% for abdominal obesity, elevated triglyceride and reduced HDL respectively. Details are presented in Table 2. Table 2 Frequency of metabolic syndrome components among T2DM patients with sex, at UGCSH, June 2017 Factors associated with metabolic syndrome In order to control confounders effect multivariable logistic regression analysis was run to analyze variables which were significantly associated to different components of MetS using different criteria in bivariable logistic analysis. These variables were sex, age, educational status, residency, duration since DM diagnosed, monthly income, family history of chronic disease and marital status. The analysis showed that, sex was significantly associated with MetS by using IDF criteria. Based on this, female patients were (AOR = 0.2 at 95%CI: 0.1, 0.6, P = 0.00) significantly associated with MetS compared to men using IDF criteria. Details are presented in Table 3. Table 3 Bivariable and multivariable logistic regression analysis by using IDF criteria at UGCSH, June 2017 Using NCEP-ATPIII criteria, female sex was (AOR = 0.2 at 95%CI: 0.1, 0.6, P = 0.00) significantly associated with MetS compared to male sex. Similarly, patients whose age is between 51 and 64 years old were about two (AOR = 2.4 95% CI: 1.0, 5.8, P = 0.04) times more likely to haveMetS compared to those patients whose age is < 30 years old. Likewise,self employed participants were about three (AOR = 2.7 95% CI: 1.1, 6.5, P = 0.03) times more likely to develop MetS compared to those unemployed. Patients who completed secondary school and above were about three (AOR = 3.2, 95% CI: 1.6, 6.7, P = 0.001) times more likely to develop MetS compared to those unable to read and write. In addition, patients whose DM diagnosis duration was less than 1 year were about three (AOR = 2.7 95% CI = 1.1, 7.1, P = 0.04) times more likely to develop MetS compared to those with DM diagnosis duration 1–5 years. Details are presented in Table 4. Table 4 Bivariable and multivariable logistic regression analysis using NCEP-ATPIII criteria at UGCSH, June 2017 Based onWHO criteria female sex was (AOR = 0.4 at 95%CI: 0.2, 0.7, P = 0.000) significantly associated with MetS compared to male sex. Patients who were single were significantly associated with MetS and were about seventeen (AOR = 17 at 95%CI: 1.8, 166, P = 0.01) times more likely to develop MetS compared to those divorced patients.Details are presented in Table 5. 
Table 5 Bivariable and multivariable logistic regression analysis results using the WHO criteria at UGCSH, June 2017
This study aimed to describe the prevalence and predictors of metabolic syndrome among type 2 diabetic patients attending a comprehensive specialized hospital in Northwest Ethiopia. The main finding of the present study demonstrates that MetS is a major health concern for diabetic patients in Ethiopia, and that predictors such as female gender, age between 51 and 64 years, urban residence, and being single are modifiable. The prevalence of MetS in this study was 70.3, 57, and 45.3% using the NCEP-ATP III, IDF, and WHO criteria respectively. These different prevalence rates arise from the different cutoff points and sets of criteria used by the three definitions. In previous studies among DM patients, a lower figure of 45.9% and a comparable figure of 70.1% were reported from Ethiopia using the NCEP-ATP III criteria [22, 23]. However, higher prevalences of 73.9, 69.9, and 66.8% were reported from Nepal using the NCEP-ATP III, WHO, and IDF criteria respectively [30], and 73.4 and 64.9% using the NCEP-ATP III and IDF criteria respectively were reported from Iran [31]. On the other hand, lower prevalences of MetS were reported from India, 45.8, 57.7, and 28% using the NCEP-ATP III, WHO, and IDF criteria respectively [8], and 58% from Ghana using the NCEP-ATP III criteria [7]. The prevalence of MetS in the present study is somewhat different from the others, and this could be due to differences in sample size, socio-economic status, ethnicity [32], sampling method, and lifestyle of the study participants. The present study demonstrated that the prevalence of MetS was higher in female (83.3, 66.1, and 70.7%) than in male (17, 31.9, and 29.3%) study participants using the IDF, NCEP-ATP III, and WHO criteria respectively. This result is in agreement with other studies [7, 30, 31]. As shown in Table 2, a significantly higher proportion of females than males had abnormal values in four (66.7%) of the six components used to define MetS in the three criteria. This might explain the observed higher prevalence of MetS in the female gender. Such a discrepancy is attributed to several physiological differences: pregnancy-induced weight gain as well as gestational DM; the use of hormonal oral contraceptives, which can decrease insulin sensitivity and glucose tolerance, raise blood pressure, and increase weight gain; and menopause, which promotes a change in body fat distribution towards central adiposity [33]. However, as the majority of females in the present study (81.8%) were above 46 years old, the increased prevalence of MetS among females, unlike that of males, may be due to menopause. The use of hormone replacement therapy (HRT) was, however, not assessed but might have some effect on the higher prevalence of MetS. In addition, a smaller proportion of females than males were involved in regular physical exercise in this study, which might have contributed to the observed higher MetS prevalence among females. Females in Ethiopia are socio-economically and culturally influenced to stay at home, so that they are typically involved in activities of daily living rather than regular physical exercise to maintain body fitness. The role of exercise in minimizing the risk of developing MetS was reported in a Greek study of 1128 men and 1154 women [34]. According to the NCEP-ATP III criteria, where the highest prevalence of MetS was observed, TG and HDL were the most frequent abnormal MetS components.
Abnormal levels of TG and HDL have been implicated in adverse health effects. For example, Callaghan et al. reported that hypertriglyceridemia is a significant risk factor for lower-extremity amputation in a 10-year cohort study (from 1995 to 2006) of 28,701 diabetic patients [35]. A 2-year multi-ethnic study of atherosclerosis on a total of 6814 participants showed that a low level of HDL is associated with an increased risk of CVD, coronary heart disease, and death [36]. Thus, interventions focusing on abnormal TG and HDL need to be prioritized. Regarding residency, the association of MetS with urbanization could be a result of a sedentary lifestyle, increased intake of calorie-rich foods, and central obesity. This result is supported by other studies worldwide [37, 38]. In addition, people who were self-employed had a significant association with MetS, and the reason could also be a sedentary lifestyle related to the type of job in which they are involved. On the other hand, patients with secondary school education and above were significantly associated with MetS. This might be due to the significantly higher economic status (greater than 1500 ETB) of those who are highly educated in our study population (secondary school and above: 59 (77.6%); primary school: 15 (19.7%)). This finding is consistent with those of Chakraborty et al. and Khanam et al. [39, 40]. Therefore, a higher level of education may indirectly lead to the adoption of a risky lifestyle in terms of dietary pattern and physical activity. When compared to patients aged 30 years and less, those in the age intervals 31–40, 41–50, and 51–64 were at increased risk of MetS. The reason for the direct relation between age and MetS is that age-related processes such as a gradual decrease in the basal metabolic rate, stress-induced hypercortisolism, hypogonadism, and decreased growth hormone secretion are accompanied by insulin resistance and abdominal fat deposition [41, 42]. However, patients who were 65 years old and above were found to have no significantly increased risk of MetS. This might be because of the reduced survival of patients who developed MetS in this age group. In this regard, further prospective studies need to be carried out. The finding reiterates that of Devers et al. According to this study, which was conducted among 1429 adults aged ≥ 25 years from randomly selected households in Australia, MetS components cluster most markedly in those aged < 65 years [43]. Therefore, serious preventive and control measures should be taken as age increases. Individuals should be advised to make lifestyle changes. Doing regular exercise, eating foods containing small amounts of saturated fat and cholesterol, and taking more fiber-rich foods should be encouraged. Chandalia et al. have shown that high-fiber diets have the potential to lower fasting plasma glucose, total cholesterol, and triglycerides, and help to maintain a good glycemic index through a decrease in the gastrointestinal absorption of cholesterol and carbohydrates [44]. Regarding the duration since DM was diagnosed, patients diagnosed within the past year had a significantly higher risk of developing MetS according to the NCEP-ATP III criteria. Since lifestyle modifications in diet and physical activity are the main initial interventions in T2DM patients, respondents treated for a short period of time may not have effectively adopted the needed lifestyle changes and hence are at increased risk of MetS.
It is also worth noting that some of the patients in our study might have been in the very early stages of treatment, so that a reduction of MetS components might be unlikely. In contrast to our finding, a previous study in Ethiopia reported the absence of an impact of treatment duration on MetS development [22]. Since in that study patients were classified based on a higher cutoff for treatment duration, i.e., below or above 10 years, it might fail to capture the impact of duration on MetS. On the other hand, patients who stayed on treatment for a short duration were not specifically isolated and compared with others who stayed longer on therapy. In this study, using the WHO criteria, patients who were single showed an association with MetS; the possible reason may be the small sample size of this segment of respondents (N = 18, 7%). In general, the findings of the present study taken together showed that MetS is a major burden among T2DM patients in Ethiopia. Early identification of MetS among T2DM patients is of great importance since MetS implies an increased risk of morbidities such as CVD, decreased quality of life, increased health care costs, as well as mortality. Therefore, UGCSH has to strengthen appropriate and targeted prevention strategies, such as encouraging people to adopt dietary modification and physical activity, which are reported to reduce the occurrence and progression of MetS [45]. In addition, there should be more frequent screening of patients for MetS components prior to the full-blown development of MetS. This study, for the first time in Ethiopia, employed three defining criteria for MetS and was able to highlight the importance of having a unified definition for diagnosis and clinical decision making in the context of low-income settings. Data were also collected prospectively, and this strengthens the conclusions made. Yet there are limitations, and one should consider these in interpreting the findings. The study may not be generalizable to the nation as a whole due to the small sample size, and thus further studies would be important. It is also important to show the health-related outcomes and economic consequences of MetS among T2DM patients in Ethiopia. In conclusion, this study demonstrates that MetS is a major health concern for diabetic patients in Ethiopia. They are at increased risk of developing complications such as cardiovascular diseases and premature mortality. The predictors, female gender, age between 51 and 64 years, urban residence, and being single, are modifiable. Thus, health authorities should provide targeted interventions, such as the promotion of lifestyle modifications, to these most at-risk subpopulations of diabetic patients.
CVD: cardiovascular disease; UGCSH: University of Gondar Comprehensive Specialized Hospital
Islam SMS, Purnat TD, Phuong NTA, Mwingira U, Schacht K, Fröschl G. Non-communicable diseases (NCDs) in developing countries: a symposium report. Glob Health. 2014;10(81). Nijpels G. Epidemiology of type 2 diabetes. 2016 Nov 23; Diapedia 3104287123 rev. no. 18. Available from: https://doi.org/10.14496/dia.3104287123.18. Accessed 4 June 2017. International Diabetes Federation: IDF Atlas, 7th edition, 2015. Accessed June 2017. Available at https://www.idf.org/e-library/epidemiology-research/diabetes-atlas/13-diabetes-atlas-seventh-edition.html. Matheus AS, Tannus LR, Cobas RA, Palma CC, Negrato CA, Gomes MB. Impact of diabetes on cardiovascular disease: an update. Int J Hypertens. 2013;653789. Rivellese AA, Riccardi G, Vaccaro O. Cardiovascular risk in women with diabetes.
Nutr Metab Cardiovasc Dis. 2010;20(6):474–80. Basol G, Barutcuoglu B, Cakir Y, Ozmen B, Parildar Z, Kose T, et al. Diagnosing metabolic syndrome in type 2 diabetic Turkish patients: comparison of AHA/NHLBI and IDF definitions. Bratisl Lek Listy. 2011;112:253–9. PubMed CAS Google Scholar Nsiah K, Shang VO, Boateng KA, Mensah F. Prevalence of metabolic syndrome in type 2 diabetes mellitus patients. Int J Appl Basic Med Res. 2015;5(2):133–8. Yadav D, Mahajan S, Subramanian SK, Bisen PS, Chung CH, Prasad G. Prevalence of metabolic syndrome in type 2 diabetes mellitus using NCEP-ATPIII, IDF and WHO definition and its agreement in Gwalior Chambal region of Central India. Glob J Health Sci. 2013;5(6):142–55. Saloojee S, Burns JK, Motala AA. Very low rates of screening for metabolic syndrome among patients with severe mental illness in Durban, South Africa. BMC Psychiatry. 2014;14:228. Morimoto A, Nishimura R, Suzuki N, Matsudaira T, Taki K, Tsujino D, et al. Low prevalence of metabolic syndrome and its components in rural Japan. Tohoku J Exp Med. 2008;216(1):69–75. Alberti KG, Zimmet P, Shaw J. IDF epidemiology task force consensus group. The metabolic syndrome new worldwide definition. Lancet. 2005;366:1059–62. Executive Summary of the Third Report Of The National Cholesterol Education Program (NCEP). Expert panel on detection, evaluation, and treatment of high blood cholesterol in adults (adult treatment panel III). JAMA. 2001;285:2486–97. Alberti KG, Zimmet PZ. Definition, diagnosis and classification of diabetes mellitus and its complications. Part 1: diagnosis and classification of diabetes mellitus provisional report of a WHO consultation. Diabet Med. 1998;15:539–53. The IDF consensus worldwide definition of the metabolic syndrome. [Last accessed on June 2017]. Available at https://www.idf.org/e-library/consensus-statements/60-idfconsensus-worldwide-definitionof-the-metabolic-syndrome.html Borena W, Edlinger M, Bjørge T, et al. A prospective study on metabolic risk factors and gallbladder cancer in the metabolic syndrome and cancer (me-can) collaborative study. PLoS One. 2014;9(2):e89368. Pan A, Keum N, Okereke OI, et al. Bidirectional association between depression and metabolic syndrome: a systematic review and meta-analysis of epidemiological studies. Diabetes Care. 2012;35(5):1171–80. Thomas G, Sehgal AR, Kashyap SR, Srinivas TR, Kirwan JP, Navaneethan SD. Metabolic syndrome and kidney disease: a systematic review and meta-analysis. Clin J Am Soc Nephrol. 2011;6(10):2364–73. Boudreau DM, Malone DC, Raebel MA, Fishman PA, Nichols GA, Feldstein AC, et al. Health care utilization and costs by metabolic syndrome risk factors. Metab Syndr Relat Disord. 2009;7(4):305–14. World Health Organization (WHO): Non-Communicable Diseases Country Profile. 2011. Kengne AP, Limen SN, Sobngwi E, Djouogo CF, Nouedoui C. Metabolic syndrome in type 2 diabetes: comparative prevalence according to two sets of diagnostic criteria in sub-Saharan Africans. Diabetol Metab Syndr. 2012;4:22. Abda E, Hamza L, Tessema F, Cheneke W. Metabolic syndrome and associated factors among outpatients of Jimma University teaching hospital. Diabetes Metab Syndr Obes. 2016;9:47–53. Woyesa SB, Hirigo AT, Wube TB. Hyperuricemia and metabolic syndrome in type 2 diabetes mellitus patients at Hawassa university comprehensive specialized hospital, south West Ethiopia. BMC Endocr Disord. 2017;17:76. Tadewos A, Ambachew H, Assegu D. 
Pattern of metabolic syndrome in relation to gender among type-II DM patients in Hawassa university comprehensive specialized hospital, Hawassa, southern Ethiopia. Health Sci J. 2017;11(3) Smith SR. Importance of diagnosing and treating the metabolic syndrome in reducing cardiovascular risk. Obesity (Silver Spring). 2006;3:128S–34S. Galassi A, Reynolds K, He J. Metabolic syndrome and risk of cardiovascular disease: a meta-analysis. Am J Med. 2006;119(10):812–9. Han TS, Lean ME. A clinical perspective of obesity, metabolic syndrome and cardiovascular disease. JRSM Cardiovasc Dis. 2016;5:2048004016633371. Pourhoseingholi MA, Vahedi M, Rahimzadeh M. Sample size calculation in medical studies. Gastroenterol Hepatol Bed Bench. 2013;6(1):14–7. Bray GA. Obesity: Basic consideration and clinical approaches. Dis Mon. 1989;35(7):449–537. Lemogoum D, Seedat YK, Mabadeje AF, Mendis S, Bovet P, Onwubere B, et al. Recommendations for prevention, diagnosis and management of hypertension and cardiovascular risk factors in sub-Saharan Africa. J Hypertens. 2003;21(11):1993–2000. Pokharel DR, Khadka D, Sigdel M, et al. Prevalence of metabolic syndrome in Nepalese type 2 diabetic patients according to WHO, NCEP-ATP III, IDF and harmonized criteria. J Diabetes Metab Disord. 2014;13:104. Foroozanfar Z, Najafipour H, Khanjani N, Bahrampour A, Ebrahimi H. The prevalence of metabolic syndrome according to different criteria and its associated factors in type 2 diabetic patients in Kerman, Iran. Iran J Med Sci. 2015;40(6):522–5. Rampal S, Mahadeva S, Guallar E, Bulgiba A, Mohamed R, Rahmat R, et al. Ethnic differences in the prevalence of metabolic syndrome: results from a multi-ethnic population-based survey in Malaysia. PLoS One. 2012;7(9):e46365. Bentley-Lewis R, Koruda K, Seely EW. The metabolic syndrome in women. Nat Clin Pract Endocrinol Metab. 2007;3(10):696–704. Panagiotakos DB, Pitsavos CH, Chrysohoou C, Skoumas J, Tousoulis D, Toutouza M, Toutouzas PK, Stefanadis C. The Impact of lifestyle habits on the prevalence of the metabolic syndrome among Greek adults from the ATTICA study. Am Heart J. 2004;147:106–12. Callaghan BC, Feldman E, Liu J, et al. Triglycerides and amputation risk in patients with diabetes: ten-year follow-up in the DISTANCE study. Diabetes Care. 2011;34(3):635–40. https://doi.org/10.2337/dc10-0878. Ahmed HM, Miller M, Nasir K, McEvoy JW, Herrington D, Blumenthal RS, Blaha MJ. Primary low level of high-density lipoprotein cholesterol and risks of coronary heart disease, cardiovascular disease, and death: results from the multi-ethnic study of atherosclerosis. Am J Epidemiol. 2016;183(10):875–83. Bouguerra R, Ben Selam L, Alberti H, Ben Rayana C, El Atti J, Blouza S, et al. Prevalence of metabolic abnormalities in the Tunisian adults: a population based study. Diabete Metab. 2006;32(3):215–21. Chowdhury MZI, Anik AM, Farhana Z, et al. Prevalence of metabolic syndrome in Bangladesh: a systematic review and meta-analysis of the studies. BMC Public Health. 2018;18:308. Chakraborty SN, Roy SK, Rahaman MA. Epidemiological predictors of metabolic syndrome in urban West Bengal, India. J Family Med Prim Care. 2015;4(4):535–8. Khanam MA, Qiu C, Lindeboom W, Streatfield PK, Kabir ZN, Wahlin Å. The metabolic syndrome: prevalence, associated factors, and impact on survival among older persons in rural Bangladesh. PLoS One. 2011;6(6):e20259. Chrousos GP, Gold PW. The concepts of stress and stress system disorders. Overview of physical and behavioral homeostasis. JAMA. 1992;267:1244–52. 
Charmandari E, Tsigos C, Chrousos G. Endocrinology of the stress response. Annu Rev Physiol. 2005;67:259–84. Devers MC, Campbell S, Simmons D. Influence of age on the prevalence and components of the metabolic syndrome and the association with cardiovascular disease. BMJ Open Diabetes Res Care. 2016;4(1):e000195. Chandalia M, Garg A, Lutjohann D, Bergmann VK, Grundy MS, Brinkley JL. Beneficial effects of high dietary fiber intake in patients with type 2 diabetes mellitus. N Engl J Med. 2000;342(19):1392–8. Pitsavos C, Panagiotakos D, Weinem M, Stefanadis C. Diet, exercise and the metabolic syndrome. Rev Diabet Stud. 2006;3(3):118–26. The authrs appreciate the data collectors as well as the study participants. The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request. Department of Clinical Pharmacy, School of Pharmacy,College of Medicine and Health Sciences, University of Gondar, Lideta Street, P.o.box: 196, Gondar, Ethiopia Mequanent Kassa Birarra Department of Pharmacology, School of Pharmacy,College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia Dessalegn Asmelashe Gelayee MKB designed the study. Both authors conducted the study, analyzed data, developed and approved the final version of the manuscript. Correspondence to Mequanent Kassa Birarra. Letter of ethical clearance was obtained from Ethical Review Committee of School of Pharmacy, College of Medicine and Health Sciences University of Gondar, as well as medical director of University of Gondar hospital. Informed verbal consent was obtained from each study participant with respect to their willingness to take part in the study after explaining the objective of the study. This was approved by the Ethical Review Committee of School of Pharmacy, College of Medicine and Health Sciences University of Gondar, as well as medical director of UGCSH. Birarra, M.K., Gelayee, D.A. Metabolic syndrome among type 2 diabetic patients in Ethiopia: a cross-sectional study. BMC Cardiovasc Disord 18, 149 (2018). https://doi.org/10.1186/s12872-018-0880-7 University of Gondar, Ethiopia
Discriminative frequency filter banks learning with neural networks
Teng Zhang (ORCID: orcid.org/0000-0003-3545-390X) & Ji Wu
EURASIP Journal on Audio, Speech, and Music Processing, volume 2019, Article number: 1 (2019)
Filter banks on spectrums play an important role in many audio applications. Traditionally, the filters are linearly distributed on a perceptual frequency scale such as the Mel scale. To make the output smoother, these filters are often placed so that they overlap with each other. However, fixed-parameter filters are usually designed in the context of psychoacoustic experiments and selected experimentally. To make filter banks discriminative, the authors use a neural network structure to learn the frequency center, bandwidth, gain, and shape of the filters adaptively when the filter banks are used as a feature extractor. This paper investigates several different constraints on discriminative frequency filter banks and the dual spectrum reconstruction problem. Experiments on audio source separation and audio scene classification tasks show performance improvements of the proposed filter banks when compared with traditional fixed-parameter triangular or gaussian filters on the Mel scale. The classification errors on the LITIS ROUEN and DCASE2016 datasets are reduced by 13.9% and 4.6% in relative terms.
Filter banks have long been used for time-frequency analysis of audio signals. The most commonly used short-time Fourier transform (STFT) [1] or wavelet transform [2] can decompose audio signals into sub-band components with certain time-frequency locations and resolutions. Filter banks implemented in the time domain [3] are usually structured as shown in Fig. 1. Audio signals are convolved with M frequency-constrained filters, followed by averaging over an nk-length window. For audio recognition tasks, such as speech recognition [4, 5], automatic speaker verification [6, 7], and audio scene classification [8, 9], the filter banks are used as a front-end feature extractor followed by a back-end classifier. For audio enhancement tasks, such as source separation [10, 11] and speech de-noising [12, 13], a perfect or near-perfect reconstruction procedure combined with an up-sampling module and dual filter banks is needed. These fixed-parameter filters are usually designed in the context of psychoacoustic experiments, which requires task-related expertise. Discriminatively learning the parameters of filter banks remains a difficult problem. In early pattern recognition studies [14], the input is first converted into features that are usually defined empirically by experts and believed to be suited to the recognition targets. Then, a design named discriminative feature extraction (DFE) [4, 15] was proposed to systematically train the overall recognizer in a manner consistent with the minimization of recognition errors. For audio signals, a DFE method with learnable filter banks was first investigated in [16]. In principle, the filter banks are composed of a finite or infinite number of filters. However, this needs careful investigation of the stability of the filters. Besides, the convolution operation of filter banks in the time domain is time-consuming.
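As a minimal illustration of the analysis side of Fig. 1, the following sketch convolves a signal with M filters and averages over nk-length windows. It is written under stated assumptions: the random test signal and FIR filters are placeholders, not the filters studied in this paper.

import numpy as np

# Sketch of the analysis side of Fig. 1: convolve the input with M
# frequency-constrained filters H_k, then average over an n_k-length window.
# The random signal and FIR filters below are illustrative placeholders.
def analysis_filter_bank(x, filters, n_k):
    outputs = []
    for h in filters:
        sub_band = np.convolve(x, h, mode="same")          # filtering step
        usable = len(sub_band) // n_k * n_k
        frames = sub_band[:usable].reshape(-1, n_k)
        outputs.append(frames.mean(axis=1))                # averaging / decimation
    return np.stack(outputs)                               # shape (M, T // n_k)

rng = np.random.default_rng(0)
signal = rng.standard_normal(16000)                        # 1 s of audio at 16 kHz
filters = [rng.standard_normal(64) for _ in range(4)]      # 4 toy FIR filters
print(analysis_filter_bank(signal, filters, n_k=160).shape)  # (4, 100)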
Filter banks on FFT-based spectrums [17] have been studied for simplicity, which can be modeled as Eq. 1, where n is the discrete index of different filters, f is the frequency in hertz. $$ w_{n}(f)=\alpha_{n}g(c_{n}(p(f));s_{n}(p(f))) $$ Filter banks are parameterized in the frequency domain with the frequency center cn, bandwidth sn, gain αn, shape g, and frequency scale p. The result wn is a continuous function defined in the frequency domain. When p is a linear function, filter banks are uniformly distributed in the frequency domain. However, there is a strong desire to analyze audio signals similar to human ears, which means a non-linear function named auditory filter banks [18–20]. Based on psychoacoustics experiments, three non-linear mappings between the frequency and perceptual domain are commonly used, including the Bark scale [21], ERB scale [22], and Mel scale [23]. The parameters αn, cn, and sn in Eq. 1 represent the frequency properties of wn, which simulate the frequency selectivity in human ears. In [16], g is selected as a gaussian function because of its smoothness and tractability, correspondingly, the Mel filter banks use triangular filters [17]. When g is totally independent and not limited to any specific shape, wn for each filter can be parameterized as a fully connected mapping from all frequency bins to a value. Auditory filters of different shapes have been trained discriminatively for robust speech recognition [24]. Filter banks can also be trained discriminatively using Fisher discriminant analysis (FDA) method [25]. In recent years, deep neural networks (DNN) have achieved significant success in the field of audio processing and recognition because of its advantages in discriminative feature extraction. Standard filter banks computed in the time domain have been simulated using unsupervised convolutional restricted Boltzmann machine(ConvRBM) [26]. The speech recognition performance of ConvRBM features is improved compared to the Mel-frequency cepstrum coefficients (MFCCs), and the relative improvements are 5% on TIMIT test set and 7% on WSJ0 database using GMM-HMM systems. Discriminative frequency filter banks can also be learned together with the recognition error using a time-convolutional layer and a temporal pooling layer over the raw waveform [27]. The results in [27] show that the filter size and pooling operation play an important role in the performance improvement, but the temporal convolutional operation is time-consuming. Filter banks implemented in the frequency domain are also studied with DNNs in recent years. When g in Eq. 1 is parameterized in all frequency bins, and the parameters are restricted to be positive using exponential function exp [28] or sigmoid [29], filter banks with multiple peaks and complicated shape are learned for specific tasks. However, further experiments show that the positive constraint is too weak to learn smooth and robust filter banks. When g in Eq. 1 is restricted to a gaussian shape, the gain, frequency center, and bandwidth in Eq. 1 can be learned using a neural network [30]. The triangular filter shape (commonly used to compute Mel scale features) is not investigated since it is piecewise differentiable and difficult to be incorporated into the scheme of a back-propagation algorithm. 
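To make the parametric form of Eq. 1 concrete, the sketch below builds one gaussian-shaped filter in the frequency domain from a frequency center, bandwidth, and gain, using the Mel mapping of Eq. 3 as the warping p. The numerical values are illustrative only, not learned parameters.

import numpy as np

# Sketch of Eq. 1: w_n(f) = alpha_n * g(c_n(p(f)); s_n(p(f))) with a gaussian
# shape g and the Mel mapping of Eq. 3 as the warping p. Center, bandwidth,
# and gain values below are illustrative, not learned parameters.
def mel(f_hz):
    return 1127.0 * np.log(1.0 + f_hz / 700.0)

def parametric_filter(f_hz, center_mel, bandwidth_mel, gain):
    p = mel(f_hz)
    return gain * np.exp(-8.0 * (p - center_mel) ** 2 / bandwidth_mel ** 2)

freqs = np.linspace(0.0, 8000.0, 513)        # FFT bin frequencies for a 1024-point frame
w = parametric_filter(freqs, center_mel=1500.0, bandwidth_mel=400.0, gain=1.0)
print(w.shape, round(float(w.max()), 3))     # (513,) with a peak close to 1.0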
Contribution of this paper In this paper, we use a neural network structure to learn the frequency center, bandwidth, gain, and shape of filter banks adaptively, and investigate several different constraints on filter banks and the dual spectrum reconstruction problem. Filter banks are said to be maximally decimated [3] if the channel decimation rates nk in Fig. 1 are integers satisfying Eq. 2. $$ 1 = \sum_{i=1}^{M-1} \frac{1}{n_{i}} $$ This condition means that there are more transformed sub-band coefficients per second than the original data points. In this case, the filter banks are overcomplete [31] and a perfect reconstruction from the sub-band coefficients is possible. However, in some scenarios, audio reconstruction from incomplete information is necessary because of the limitation of storage and computing resources, especially when the signals are sampled at a higher rate greater than or equal to 44.1 kHz. Speech reconstruction from MFCCs has been studied by predicting the fundamental frequency and voicing of a frame as intermediation [32–34]. The simplest case is that ni in Eq. 2 equals to the frame length N, which is equivalent to filter banks implemented in the frequency domain in this paper. As shown in Eq. 1, when filter banks are parameterized and learned using neural networks, a major concern is the constraint to the shape of its responses in the frequency range. When the constraint is weak [28, 29], the number of parameters is too large to learn smooth and robust filter banks in some scenarios. When the constraint is a basic shape function and this function is piecewise differentiable such as the triangular shape [30], the model cannot be trained using a back-propagation algorithm. At the same time, the sub-band processing module in Fig. 1 may introduce distortions, particularly if the sub-bands are not equally processed, in this case, signal reconstruction in the frequency domain is not analytical. In this paper, the major contributions are summarized as follows: Approximate continuous shape function: shape constraints play an important role in discriminative frequency filter banks. Few investigations have been conducted to compare different shape constraints, because that commonly used shapes such as triangular shapes are piecewise differentiable. We use steep sigmoid functions and other basic functions to approximate desired shapes. This makes a further study on shape constraints possible. Comparison of different constraints: in Eq. 1, different selections of trainable parameters can result in different implementations of filter banks. In this paper, we select six different constraints to investigate their applicable condition. When all parameters are constant, we adopt triangular and gaussian shapes whose frequency centers distribute uniformly in the Mel-frequency scale. For weak constraints, we conduct experiments similar to [28, 29]. For strong constraints, both gaussian and triangular constraints are used to train the frequency center, bandwidth, and gain in Eq. 1. Reconstruction from incomplete filter bank coefficients: in this paper, the amount of filter bank coefficients is much less than original data points, so the reconstruction can be seen as a process of solving overdetermined linear equations. We use a neural network to implement this reconstruction process, and a well-designed regularization method is used to make sure that the filter banks are bounded input bounded output (BIBO-stable). The paper is organized as follows. 
Next section briefly describes the Mel-frequency scale used in this paper and introduces the uniformly distributed filter banks with constant parameters as the baseline. Section 3 introduces the analytical and experimental settings of our proposed filter bank learning framework. Then, the network structures used in our proposed methods are introduced in Section 4. Section 5 conducts several experiments to show the performance of discriminative frequency filter banks on source separation and audio scene classification tasks. Finally, we conclude our paper and give directions for future work in Section 6.
Filter banks are used to model the frequency selectivity of an auditory system in many applications. Traditionally, the design of filter banks is motivated by psychoacoustic experiments, such as the detection of tones in noise maskers [35], or by physiological experiments such as observing the mechanical responses of the cochlea when a sound reaches the ear [36, 37]. The frequency center, bandwidth, and energy gain in the frequency response of filter banks are consistent with the position and vibration patterns in the ear. In the history of auditory filter banks [35], the rounded exponential family [38] and the gammatone family [39] are the most widely used families. We use the simplest form of these two families, the triangular case for the rounded exponential family and the gaussian case for the gammatone family, to construct our filter banks in the frequency domain. In this section, we introduce the commonly used Mel-frequency filter banks.
Mel-frequency scale
The perceptual frequency scale is usually a mapping between the linear frequency domain and the nonlinear perceptual frequency domain. The Mel-frequency scale is the result of a classic psychoacoustical test conducted by Stevens and Volkman [40], which provides the relation between the real frequency and the perceived pitch. The conversion from linear frequency to the Mel scale [41] is as follows, where f is the frequency in hertz.
$$ {\text{Mel}}(f) = 1127\,{\text{ln}}\left(1+\frac{f}{700}\right) $$
Mel-frequency filter banks
The commonly used MFCC features in the field of speech recognition are computed based on Mel-frequency filter banks. It is common practice to construct filters distributed uniformly on the Mel-frequency scale, with bandwidths that are often 50% overlapped between neighboring filters. When the filter shape is restrained using Eq. 4, triangular filter banks are constructed on the Mel-frequency scale. For gaussian filter banks, the bandwidth is 4σ of a gaussian distribution as in Eq. 5. These two types of filter banks are the baselines in this paper, respectively named TriFB and GaussFB. Although they are combinations of existing works, we provide our own implementation of these two methods in this paper. In Eqs. 4 and 5, cn represents the frequency center, sn represents the bandwidth, mel is the coordinate on the Mel-frequency scale, and Tri and Gauss are the triangular and gaussian filter banks defined on the Mel-frequency scale.
$$ {\text{Tri}}(n)=\left\{ \begin{array}{ll} \frac{2}{s_{n}}({\text{mel}}-c_{n})+1, & c_{n}-\frac{s_{n}}{2}\le {\text{mel}} \le c_{n} \\ \frac{2}{s_{n}}(c_{n}-{\text{mel}})+1, & c_{n} \le {\text{mel}} \le c_{n}+\frac{s_{n}}{2} \\ 0, & {\text{elsewhere}} \end{array} \right. $$
$$ {\text{Gauss}}(n) = {\text{exp}}\left(-\frac{8({\text{mel}}-c_{n})^{2}}{s_{n}^{2}}\right) $$
Discriminative filter bank learning
For generality, we consider in this section a discriminative filter bank learning framework based on a neural network as shown in Fig. 2.
Discriminative filter bank learning framework. The left part of the framework is the feature analysis procedure, including STFT and discriminative filter banks. The right part is the application of the extracted feature map, such as audio scene classification and audio source separation. The discriminative filter banks in the feature analysis procedure and the back-end application modules are stacked into a deep neural network.
The input audio signal is first transformed into a sequence of vectors using STFT; the STFT result can be represented as X1...T = {x1, x2, ..., xT}. T is determined by the frame shift in STFT, corresponding to the time resolution in frame theory [42]. The dimension of each vector x can be labeled as N, which is determined by the frame length. The discriminative frequency filter banks in Fig. 2 can be simplified as linear transformations fθ, and the output of this module can be represented as Y1...T = {fθ(x1), fθ(x2), ..., fθ(xT)}. θ are the parameters of the filter banks, defined similarly to Eq. 1. The dimension of each yt = fθ(xt) here is equal to M, which is the number of filters. The back-end application modules in Fig. 2 vary across applications. For the audio scene classification task, they are deep convolutional neural networks followed by a softmax layer to convert the feature maps into the corresponding categories. However, for the audio source separation task, the modules are composed of a binary gating layer and some spectrogram reconstruction layers. We simplify all these situations and define the back-end application modules as non-linear functions fβ. The filter bank parameters θ can be trained jointly with the back-end parameters β using the back-propagation method in neural networks. In this framework, the filter banks work as a set of weights on a spectrum vector xt as in Eq. 6. Each wk is a filter with positive values and a bounded range.
$$ \boldsymbol{y}_{t}=f_{\theta}(\boldsymbol{x}_{t})=\left\{\boldsymbol{w}_{1}^{T}\boldsymbol{x}_{t},\boldsymbol{w}_{2}^{T}\boldsymbol{x}_{t},...,\boldsymbol{w}_{m}^{T}\boldsymbol{x}_{t}\right\} $$
In this paper, we consider two types of constraints on filter banks.
Shape constraint: in this case, the amplitude of the filter's frequency response is constrained to a specific shape, and only the frequency center, bandwidth, and gain of the filter remain to be trained. The gaussian shape has been investigated in [16, 30]. We will focus on the piecewise differentiable situation, such as the triangular shape.
Positive constraint: when all the weights of the filters are independent but only constrained to be positive, more complicated filter banks can be learned. Exponential functions such as exp [28] and sigmoid [29] have been used together with a bandwidth constraint for the filters. We investigate two new positive constraints, ReLU and square, and discuss their performances together with the bandwidth constraint.
Shape constraints of discriminative frequency filter banks
Triangular filters are commonly used to compute Mel-scale filter bank features in many audio applications such as speech recognition. However, when we use a triangular shape described in Eq. 4 to restrict the discriminative frequency filter banks in Fig.
2, the backward propagation process is blocked because of the discontinuous point in the triangular shape. Instead of using the piecewise continuous form of a triangular shape, we decompose it into piecewise continuous step functions and linear functions as Fig. 3a. We define the piecewise step function as Eq. 7. Then, a mathematical representation of the decomposition can be shown as Eq. 8. αn is the gain parameter, cn, sn, and mel in this formula have been defined in Eq. 4. $$ {\text{rec}}(x,x_{0})=\left\{ \begin{array}{l} 1, x>x_{0} \\ 0, {\text{elsewhere}} \\ \end{array} \right. $$ Triangular-shape decomposition. a The accurate decomposition using piecewise continuous step functions and linear functions. b The approximate decomposition using sigmoid functions and linear functions $$\begin{array}{*{20}l} &f_{1}({\text{mel}})={\text{rec}}\left({\text{mel}},c_{n}-\frac{s_{n}}{2}\right)(1-{\text{rec}}({\text{mel}},c_{n}))\\ &l_{1}({\text{mel}})=\frac{2}{s_{n}}({\text{mel}}-c_{n})+1\\ &f_{2}({\text{mel}})=\left(1-{\text{rec}}\left({\text{mel}},c_{n}+\frac{s_{n}}{2}\right)\right){\text{rec}}({\text{mel}},c_{n})\\ &l_{2}({\text{mel}})=\frac{2}{s_{n}}(c_{n}-{\text{mel}})+1\\ &w_{n}({\text{mel}})=\alpha_{n}(f_{1}l_{1}+f_{2}l_{2}) \end{array} $$ We use a sigmoid function \({\text {sig}}(x,x_{0})=\frac {1}{1+e^{-r_{0}(x-x_{0})}}\) to approximate the step function and get an approximate triangular decomposition as Eq. 9. In this formula, r0 represents the steep rate of the sigmoid function. Figure 3b is an example when r0 is 10. $$\begin{array}{*{20}l} &f_{1}({\text{mel}})={\text{sig}}\left({\text{mel}},c_{n}-\frac{s_{n}}{2}\right)(1-{\text{sig}}({\text{mel}},c_{n}))\\ &l_{1}({\text{mel}})=\frac{2}{s_{n}}({\text{mel}}-c_{n})+1\\ &f_{2}({\text{mel}})=\left(1-{\text{sig}}\left({\text{mel}},c_{n}+\frac{s_{n}}{2}\right)\right){\text{sig}}({\text{mel}},c_{n})\\ &l_{2}({\text{mel}})=\frac{2}{s_{n}}(c_{n}-{\text{mel}})+1\\ &w_{n}({\text{mel}})=\alpha_{n}(f_{1}l_{1}+f_{2}l_{2}) \end{array} $$ The trainable parameters in Eq. 9 are the frequency center cn, bandwidth sn, and gain αn. The goal of the training procedure is to minimize some objective loss ε. The derivative of an objective loss given trainable parameters can be calculated by back-propagating error gradients. Positive constraint of discriminative frequency filter banks Another selection of discriminative frequency filter banks is a set of independent weights W={w1,w2,...,wm}. The only constraint is that these weights should be positive to keep their physical meaning of the filters. There are a couple of options to keep them positive: Exponent: for every parameter wij, we make it positive by transform it to vij=exp(wij)[28]. If wij∼N(μ,σ), vij satisfies the log-normal distribution, where the mean of vij is \(e^{\mu +\frac {\sigma ^{2}}{2}}\) and the variance of vij is \(\left (e^{\sigma ^{2}}-1\right)e^{2\mu +\sigma ^{2}}\). Sigmoid: for every parameter wij, we use the sigmoid function \(v_{ij}=\frac {1}{1+{\text {exp}}(-w_{ij})} \)[29] to ensure the parameters positive. If wij∼N(μ,σ), vij satisfies a logit-normal distribution, where the moments of vij is not analytical, but the numerical calculating results have been discussed in [43]. ReLU: for every parameter wij, we simply make vij=0, when wij<0 and vij=wij, when wij≥0. This will lead to a folded normal distribution. When wij∼N(μ,σ), the mean of vij is \(\sigma \sqrt {\frac {2}{\pi }}e^{-\frac {\mu ^{2}}{2\sigma ^{2}}}\) and the variance of vij is μ2+σ2−[mean(vij)]2. 
Square: the last option to make the parameters positive is that \(v_{ij}=w_{ij}^{2}\). Then, vij is a variable satisfying a chi-squared distribution. The mean of vij is σ2(1+μ2), and the variance of vij is σ4(2+4μ2). Without loss of generality, if we initialize the parameters with a gaussian distribution wij∼N(0,0.1), the moments of the four positive transformations can be calculated as follows: Exponent: mean = 1.0, variance = 0.01. Sigmoid: mean = 0.5, variance ≈ 0.01. ReLU: mean ≈ 0.08, variance≈0.01. Square: mean ≈ 0.01, variance≈0.0002. In this section, we consider two variants of discriminative frequency filter banks. If the frequency center cn and bandwidth sn in Eq. 1 are constant and the filter weights are restrained to be positive, the filter weights are limited in the range of bandwidth. All the above distributions can be good solutions. Another case is that the filter weights are totally independent. In this case, the resulting distributions of the exponent and sigmoid constraints mean that most filter weights are not zero, which violates the physical meaning of filter banks. In order to fulfill the physical meaning, the moments of positive transformations should be around N(0.1,0.01), which is approximately calculated using the Mel-frequency triangular filter banks defined in Section 2.2. The inverse calculation of these positive transformations shows that when the parameters are initialized w∼N(−3.0,2.0), the exponent and sigmoid constraints may result in meaningful distributions. Thus, when the filter banks are constrained by constant bandwidths and frequency centers, all these positive constraints are suitable. But when the filter weights are totally independent, only ReLU and square constraints are suitable, unless we can perform elaborate initialization for different positive transformations. Our experiments in Section 3.3 demonstrate our conclusion. Reconstruction from filter bank coefficients In the traditional design of filter banks as Fig. 1, the completeness of filter banks is determined by the number of filters M and the channel decimation rate nk. In our proposal of discriminative frequency filter banks, nk is equivalent to the frame length N. And in general, M is less than N for the purpose to reduce the computational cost and extracting significant features. In this case, the filter banks are incomplete and hence, the perfect spectral reconstruction from the filter bank coefficients is impossible. As described before, the spectrum xt is first transformed to the Mel-frequency scale using a transformation matrix derived from Eq. 3. Then, the filter banks work as a set of weights on it as Eq. 6. Thus, the conversion from spectrum vectors to filter bank coefficients can be represented as Eq. 10. M is the Mel-frequency transition matrix, and F are the discriminative frequency filter banks. $$ \boldsymbol{y}_{t}=\boldsymbol{x}_{t}\boldsymbol{M}\boldsymbol{F} $$ The spectrum reconstruction process can be simplified as a reconstruction transformation as Eq. 11. R is the reconstruction matrix, and the parameters in R can be trained jointly with the parameters of filter banks in F. $$ \boldsymbol{\hat{x}}_{t}=\boldsymbol{y}_{t}\boldsymbol{R} $$ The problem of finding the optimal reconstruction matrix R and filter bank matrix F is equivalent of finding the solution of a linear system [44] as Eq. 12. R+ is the Moore Penrose pseudoinverse [45] of R and has an approximate numerical representation of MF. Here, we define the condition number [46] for R as Eq. 13. 
$$ \boldsymbol{R}\boldsymbol{R^{+}}\boldsymbol{x}_{t}=\boldsymbol{\hat{x}}_{t} $$ $$ {\text{cond}}(\boldsymbol{R})=\parallel\boldsymbol{R}\parallel\cdot\parallel\boldsymbol{R^{+}}\parallel\le \left(\parallel\boldsymbol{R}\parallel+\parallel\boldsymbol{R^{+}}\parallel\right)^{2} $$ In Eq. 13, cond(R) means the condition number of R and ∥·∥ means the Frobenius norm of a matrix. A large condition number implies that the linear system is ill-conditioned in the sense that small errors in the input can lead to huge errors in the output. So, we modify the reconstruction loss by adding an L2-regularization constraint to keep the linear system stable. This is also known as the bounded-input, bounded-output (BIBO) stability [47]. The L2-regularization for different types of filter banks in Sections 3.1 and 3.2 are discussed respectively as follows. Shape constraint: for shape constraints in Section 3.1, parameters such as the frequency center cn and bandwidth sn, do not contribute to the regularization. Regularization of the gain αn should be added up across the bandwidth. Positive constraint: for positive constraints in Section 3.2, all parameters contribute to the regularization. The positive weights vij should replace the filter bank parameters wij to calculate the regularization, but the regularization of reconstruction parameters rij remain unchanged. Reconstruction vs classification For spectrum reconstruction-related tasks as described in Eq. 11, the output size of the reconstruction system is NT, where N is the FFT length, and T is the number of frames. Thus, the number of equations in optimizing the reconstruction matrix R and filter bank matrix F is DNT, where D is the number of audio samples. Meanwhile, for positive constraints, the number of parameters in R and F is about 2NM, where M is the number of filter banks. For shape constraints, the number of parameters is about 3M+NM. M is usually much less than DT, so the reconstruction usually can be seen as a process of solving overdetermined linear equations. Correspondingly, when the output of filter banks is followed by a classifier, the number of equations in solving the classification task is DC, where C is the number of classes. The number of parameters is MN+MC for positive constraints, and 3M+MC for shape constraints. In some small-scale applications, DC is less than MN. The classification is equivalent of solving underdetermined linear equations for positive constraints. Over-fitting is a notorious issue in this scenario. This phenomenon can be seen in Section 5.5. As described in Section 3, the discriminative frequency filter banks we proposed here can be integrated into a neural network (NN) structure. The parameters of the models are learned jointly with the target of a specific task. In this section, we introduce two NN-based structures respectively for audio source separation and audio scene classification tasks. Audio source separation In Fig. 4a, the NN structure for audio source separation tasks is divided into three steps. The module of discriminative filter banks is implemented as Eq. 6, which can be denoted as h1. The reconstruction layer is constructed using a fully connected layer and can be denoted as h3. NN-based structures with proposed methods. a is the NN structure for audio source separation tasks. b is the NN structure for audio scene classification tasks We attempt the audio separation from an audio mixture using a simple masking method [48], which can be represented as a binary masking module in Eq. 
14 and denoted as h2. In Eq. 14, ytj is an element of the feature map Y, mji is a trainable parameter of this layer. The output of this layer is a linear projection modulated by the gates gt. These gates multiply each element of the matrix Y and control the information passed on in the hierarchy. Stacking these three layers on the top of input X gives a representation of the separated clean spectrogram \(\hat {\boldsymbol {X}}=h_{3}\circ h_{2}\circ h_{1}(\boldsymbol {X})\), the symbol ∘ is used here to represent the connection between different layers. $$ \begin{array}{l} g_{ti}={\text{sigmoid}}\left(\sum_{j=1}^{N}y_{tj}m_{ji}\right) \\ o_{ti}=y_{ti}g_{ti} \\ \end{array} $$ Neural networks are trained on a frame error (FE) minimization criterion, and the corresponding weights are adjusted to minimize the square errors over the whole training dataset. The error of the mapping is given by Eq. 15, where xt is the targeted clean spectrum, and \(\hat {\boldsymbol {x}}_{t}\) is the corresponding separated representation. As commonly used, L2-regularization is typically chosen to impose a penalty on the complexity of the mapping, which is the λ term in Eq. 15. However, when the layer of discriminative filter banks is implemented with shape constraints, the elements of w1 have definitude physical meanings. Thus, the L2-regularization is operated only on the upper two layers in this model. In this case, the network in Fig. 4a can be optimized by the back-propagation method. $$ \epsilon=\sum_{t=1}^{T}\parallel \boldsymbol{x}_{t}-\hat{\boldsymbol{x}}_{t}\parallel^{2}+\lambda \sum_{l=2}^{3}\parallel \boldsymbol{w}_{l}\parallel^{2} $$ Audio scene classification In Fig. 4b, a feature extraction structure including the discriminative frequency filter banks is proposed to systematically train the overall recognizer in a manner consistent with the minimization of recognition errors. The NN structure for audio scene classification tasks can be divided into five steps, where the first layer of discriminative frequency filter banks is implemented using Eq. 6. The convolutional and pooling layers are conducted using the network structure described in [49]. In general, let zi:i+j refer to the concatenation of frames after discriminative filter banks yi,yi+1,...yi+j. The convolution operation involves a filter w∈Rh, which is applied to a window of h frames to produce a new feature. For example, a feature ci is generated from a window of frames yi:i+h−1 by Eq. 16, where b∈R is a bias term and f is a non-linear function. This filter is applied to each possible window of frames to produce a feature map c=[c1,c2,...cT−h+1]. Then, a max-overtime pooling operation [50] over the feature map is applied and the maximum value \(\hat {c}={\text {max}}(\boldsymbol {c})\) is taken as the feature corresponding to this filter. Thus, one feature is extracted using one filter. This model uses multiple filters with varying window sizes to obtain multiple features. $$ c_{i}=f(\boldsymbol{w}\cdot \boldsymbol{y}_{i:i+h-1}+b) $$ The features extracted from the convolutional and pooling layers are then passed to a fully connected layer and a softmax layer to output the probability distribution over categories. The classification loss of this model is given by Eq. 17, where n is the number of audios, k is the number of categories, li,j is the category label, and pi,j is the probability distribution produced by the NN structure. In this case, the network in Fig. 4b can be optimized by the back-propagation method. 
$$ \epsilon=\sum_{i=1}^{n}\sum_{j=1}^{k}l_{i,j}\cdot {\text{log}}(p_{i,j})+\lambda \sum_{l=2}^{4}\parallel \boldsymbol{w}_{l}\parallel^{2} $$ To illustrate the properties and performance of the discriminative frequency filter banks proposed in this paper, we conduct three experiments respectively on spectrum reconstruction, audio source separation and audio scene classification tasks. In the first experiment, several groups of comparisons are made on reconstruction errors to verify the assumption and conclusion we proposed in Section 3. Moreover, we have two more experiments to test the applications of the discriminative frequency filter banks to audio source separation and audio scene classification tasks. Filter bank settings All experiments conducted below make a comparison between the discriminative frequency filter banks that can be trained using neural networks and the fixed-parameter filter banks described in Section 2.2. The detailed settings are as follows: TriFB: frequency centers of the filters distribute uniformly in the Mel-frequency scale, bandwidths are 50% overlapped between neighboring filters, the gain is 1, and the shape is restrained with Eq. 4. GaussFB: frequency centers of the filters distribute uniformly in the Mel-frequency scale, bandwidths are 4σ of an gaussian distribution as Eq. 5, the gain is 1, and the shape is restrained with Eq. 5. TriFB-DN: in order to achieve a fair comparison with TriFB, the initialization of the frequency centers, bandwidths, and gain of the filters are the same as TriFB, the shape is restrained with Eq. 9, and the gain and bandwidths are guaranteed to be positive with a square constraint described in Section 3.2. GaussFB-DN: in order to achieve a fair comparison with GaussFB, the initialization of the frequency centers, bandwidths, and gain of the filters are the same as GaussFB, the shape is restrained with Eq. 5. Other settings are the same as TriFB-DN. BandPosFB-DN: frequency centers and bandwidths are the same as GaussFB, all parameters are initialized using N(0,0.1), and are guaranteed to be positive with the square constraint described in Section 3.2. The shape is not restrained. PosFB-DN: the parameters are initialized using N(0,0.1) and are guaranteed to be positive with the square constraint described in Section 3.2. There are no constraints for the frequency centers, bandwidths, and shape of the filters. Dataset and experimental setup In this section, we employ three datasets to conduct the experiments. MIR-1K dataset [51] is utilized to implement the spectrum reconstruction and audio source separation experiments. LITIS ROUEN [52] and DCASE2016 [53] datasets are used for audio scene classification experiments. Details of these datasets are listed as follows: MIR-1K dataset: this dataset consists of 1000 song clips recorded at a sample rate of 16,000 Hz, with durations ranging from 4 to 13 s. The dataset is then utilized with four training/testing splits. In each split, 700 examples are randomly selected for training and the others for testing. We use the mean average accuracy over the four splits as the evaluation criterion. LITIS ROUEN dataset: this is the largest publicly available dataset for ASC to the best of our knowledge. The dataset contains about 1500 min of audio scene recordings belonging to 19 classes. Each audio recording is divided into 30-s examples without overlapping, thus obtaining 3026 examples in total. The sampling frequency of the audio is 22,050 Hz. 
The dataset is provided with 20 training/testing splits. In each split, 80% of the examples are kept for training and the other 20% for testing. We use the mean average accuracy over the 20 splits as the evaluation criterion. DCASE2016 dataset: the dataset is released as task 1 of the DCASE2016 challenge. We use the development data in this paper. The development data contains about 585 min of audio scene recordings belonging to 15 classes. Each audio recording is divided into 30-s examples without overlapping, thus obtaining 1170 examples in total. The sampling frequency of the audio is 44,100 Hz. The dataset is divided into fourfolds. Our experiments obey this setting, and the average performance will be reported. In all experiments, the audio signal is first transformed using STFT with the frame length of 1024 and the frame shift of 10 ms, so the size of audio spectrums is 513×128. The mini-batch size is set to be 50, and the learning rate is initialized with 0.001. In our audio source separation experiments, the number of discriminative filters is set to be 64, other parameters are set as described in Section 4.1. When the spectrum reconstruction is needed, the regularization coefficient is set to be 0.0001. Training is done using the Adam [54] update method and is stopped after 500 training epochs. In our audio scene classification experiments, the number of discriminative filters is also set to be 64. For both LITIS ROUEN and DCASE2016 datasets, we use rectified linear units; the window sizes of convolutional layers are 64×2×64, 64×3×64, and 64×4×64, and the fully connected layers are 196×128×19(15). For DCASE2016 dataset, we use the dropout rate of 0.5. Training is done using the Adam update method and is stopped after 100 training epochs. Properties of discriminative frequency filter banks In this experiment, we analyze the properties of the discriminative frequency filter banks using the clean music audios in MIR-1K dataset. The binary gating layer in Fig. 4a is left out for simplicity. To quantify the performance of our method, we evaluate the reconstruction performance using the metric of signal to distortion ratios (SDR). In Eq. 18, \(\hat {x}\) is the reconstructed signal and x is the source signal. $$ {\text{SDR}}(x,\hat{x})=10{\text{log}}_{10}\left(\frac{||x||^{2}}{||x-\hat{x}||^{2}}\right) $$ Table 1 shows the reconstruction SDR under different positive constraints. In order to exclude the influence of filter numbers, these experiments are configured with M = 32 and M = 64, respectively. The consistent results in Table 1 demonstrate that exponent, sigmoid, ReLU, and square positive constraints show similar performances when parameters are constrained by fixed frequency center and bandwidth, but ReLU and square positive constraints perform much better than exponent and sigmoid constraints when parameters are totally independent and initialized with N(0,0.1). As we have discussed in Section 3.2, in this case, ReLU and square constraints can result in a similar parameter distribution with the traditional Mel-frequency triangular filter banks, but exponent and sigmoid constraints will result in an entirely different distribution, which violates the physical meaning of the filter banks. However, when the initialization for exponent and sigmoid constraints are finely designed to be N(− 3.0,2.0), the results improve a lot for totally independent situations. 
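The following sketch reproduces the moment comparison of Section 3.2 empirically: it draws weights from N(0, 0.1) (standard deviation 0.1) and prints the mean and variance after each positive transform, for side-by-side comparison with the analytical values quoted above. The sample size is arbitrary.

import numpy as np

# Empirical check of the positive transforms discussed in Section 3.2:
# exponent, sigmoid, ReLU, and square applied to weights drawn from N(0, 0.1).
# The printed moments are Monte-Carlo estimates over 10^6 samples.
rng = np.random.default_rng(0)
w = rng.normal(loc=0.0, scale=0.1, size=1_000_000)

transforms = {
    "exponent": np.exp(w),
    "sigmoid": 1.0 / (1.0 + np.exp(-w)),
    "relu": np.maximum(w, 0.0),
    "square": w ** 2,
}
for name, v in transforms.items():
    print(f"{name:8s} mean = {v.mean():.4f}  variance = {v.var():.5f}")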
Taken together, these results show that the ReLU and square positive constraints are more stable, and their performances are similar, so we select the square constraint in the follow-up experiments because of its differentiability.
Table 1 Reconstruction SDR under different positive constraints, in decibels
For audio scene classification tasks, we use the DCASE2016 dataset to examine the rationality of this selection. Table 2 shows the classification performance on the validation part of the DCASE2016 dataset. The NN structure is implemented as in Fig. 4b, and the training process is stopped after 180 epochs. Accuracy and the Matthews correlation coefficient (MCC) are used for the comparison. The results are consistent with Table 1. For all of these positive constraints and initialization schemes, the classification performances are similar when the parameters are constrained by a fixed frequency center and bandwidth. However, when the parameters are totally independent, the ReLU and square positive constraints are more stable. If the parameters are initialized with N(0,0.1), the exponent and sigmoid constraints have difficulty converging to an optimal solution. Therefore, our selection of the square positive constraint also works for audio scene classification tasks.
Table 2 Audio scene classification performance under different positive constraints
Table 3 shows the reconstruction SDRs with and without regularization. The results in the last two columns show the performance improvement obtained by adding a proper L2-regularization constraint as described in Section 3.3. Compared with TriFB and GaussFB, the results of the four discriminative frequency filter bank models improve considerably. MF is the Moore–Penrose pseudoinverse of R in Eq. 12; thus, R is the dual matrix determined by F, and the L2-regularization constraint in Eq. 13 comes down to ∥R∥ or ∥F∥. In TriFB and GaussFB, F is fixed experimentally and ∥F∥ is constant, so the regularization constraint makes no difference. The results in the first two columns show the performances of the different filter bank methods. Totally independent parameters with only positive constraints give the best result; the Gaussian and triangular shape constraints follow closely, with the triangular shape constraint performing slightly better than the Gaussian one. Fixed-bandwidth parameters with a positive constraint bring no obvious improvement over the traditional TriFB and GaussFB.
Table 3 Reconstruction SDR with/without regularization, in decibels
A direct view of the six types of filter banks is given in Fig. 5. Compared with TriFB in Fig. 5a, the filter banks of TriFB-DN in Fig. 5c differ greatly along the Mel axis. The frequency centers and bandwidths in TriFB-DN are distributed relatively regularly at low frequencies, but irregularly at high frequencies. Compared with GaussFB in Fig. 5b, the bandwidths of GaussFB-DN in Fig. 5d are less overlapped between neighboring filters. The filter banks of BandPosFB-DN become multimodal within the fixed bandwidth. The results in Table 3 show that the frequency center and bandwidth are more important than the shape in music reconstruction tasks. As discussed in Section 3.4, reconstruction tasks can usually be seen as solving overdetermined linear equations, which means that the more parameters, the better. The result for PosFB-DN supports this assumption: PosFB-DN has many more parameters than the other methods and thus obtains a much better reconstruction result.
Shape of different filter banks. a, b The traditional fixed-parameter filter banks.
c–f The discriminative frequency filter banks we propose, learned in the simple music spectrum reconstruction task
Finally, in this experiment, in order to compare the learned frequency centers with traditional auditory scales, we show several frequency center plots in Fig. 6. In Fig. 6a, the frequency centers learned in the audio separation task on the MIR-1K dataset are compared with the Mel scale. We also compare the frequency centers learned in the audio classification task on the DCASE2016 and LITIS ROUEN datasets with the Mel scale in Fig. 6b, c. For the DCASE2016 dataset, as shown in Fig. 6b, the learned frequency centers coincide well with the Mel scale. The frequency centers almost keep their initial values; this may be due to the lack of data. In Fig. 6a, we can see that changes of the frequency centers are only observed in high-frequency regions, which means that the learned frequency centers tend to give a different representation of high-frequency components in audio separation tasks. This result is consistent with our experiments in Section 5.4. However, the frequency centers in Fig. 6c change only in relatively low-frequency regions. This observation shows the difference between separation and classification tasks.
Comparison of frequency centers learned from the network with the Mel scale. a Audio separation task on the MIR-1K dataset. b Audio classification task on the DCASE2016 dataset. c Audio classification task on the LITIS ROUEN dataset
In this experiment, we investigate the application of discriminative frequency filter banks to audio source separation tasks using the MIR-1K dataset. We attempt to separate the music from a vocal and music mixture using the structure in Fig. 4a. Table 4 shows the reconstruction SDR in the music separation task. In order to achieve a fair comparison between the different filter bank methods, we mix the vocal and music tracks under various conditions, where the energy ratio between music and voice takes the values 0.1, 1, and 10, respectively. The results of the discriminative frequency filter banks in Table 4 show consistent performance improvements in comparison with TriFB and GaussFB. As an example, when we use the PosFB-DN method and the energy ratio between music and voice is 1, the reconstruction SDR is improved by 0.75 dB compared to GaussFB. When the energy ratio is 0.1, which means that the voice is much louder than the music, BandPosFB-DN performs better than TriFB-DN and GaussFB-DN, because the relatively independent parameters can limit the voice amplitude effectively. However, when the music is louder, the flexible frequency centers and bandwidths in TriFB-DN and GaussFB-DN give better separation results than BandPosFB-DN. In keeping with Table 3, TriFB-DN performs a little better than GaussFB-DN when the voice is louder, but the advantage is much smaller than in Table 3.
Table 4 Reconstruction SDR of audio source separation, in decibels. M/V represents the energy ratio between music and voice
Figure 7 shows the clean music spectrum (a), the mixed spectrum (b), and the separated spectrums (c–h) when the energy ratio is 1. For this example, the separated spectrums can be discussed from the following aspects. In high-frequency regions, TriFB-DN, GaussFB-DN, and PosFB-DN perform much better than the others, which is consistent with Fig. 6a. For these three types of discriminative frequency filter banks, the shape and positive constraints allow the filter banks to learn a more precise representation of high-frequency components.
For fixed-bandwidth methods such as TriFB, GaussFB, and BandPosFB-DN, in contrast, the representations of high-frequency components are confused. In low-frequency regions, TriFB and GaussFB tend to produce a smooth energy distribution and thus give better performance for spectrum reconstruction.
Reconstructed spectrums of audio source separation tasks. The clean music spectrum in a is randomly selected from the dataset. b The corresponding music and vocal mixture. c–h The music spectrums reconstructed from the mixture spectrums using the different filter bank methods
Audio scene classification (ASC)
When filter banks are used as a feature extractor, the filter banks proposed in this paper can extract more salient features. In this section, we apply the discriminative frequency filter banks to the ASC task. The NN structure is implemented as in Fig. 4b. We employ the LITIS ROUEN and DCASE2016 datasets in our experiments. In the data preprocessing step, we first divide each 30-s example into 1-s clips with 50% overlap. Each clip is then processed as in Fig. 2 for feature extraction. The classification results of all these clips are averaged to obtain an ensemble result for the 30-s example.
Training and validation curves on the LITIS ROUEN dataset are shown in Fig. 8. All methods are stopped after 100 training epochs. In Fig. 8a and b, the 1-s clip classification errors on the training and validation sets are compared between the different methods. We can see that GaussFB-DN performs better than GaussFB throughout the training epochs, and TriFB-DN similarly outperforms TriFB. The performance of BandPosFB-DN is almost the same as that of GaussFB on the validation set. The poor performance of PosFB-DN may be due to the difficulty of learning so many parameters from this dataset. We also compare the 30-s audio classification errors on the validation set in Fig. 8c. The results are almost exactly the same as in Fig. 8b, except that BandPosFB-DN becomes one of the best performing methods.
Training and validation curves on the LITIS ROUEN dataset. a 1-s clip classification error on the training set. b 1-s clip classification error on the validation set. c 30-s audio classification error on the validation set
Table 5 shows the performance comparison on the LITIS ROUEN dataset after 100 training epochs. Evaluation criteria such as accuracy, F-measure, and MCC are employed for the comparison. CNN-Gam [9] is, to the best of our knowledge, the best performing single-feature model. However, owing to the elaborate implementation of the sub-band processing module and classification module in Fig. 2, our baseline models with the traditional TriFB and GaussFB perform considerably better. Among the four types of discriminative filter banks, the shape-constrained GaussFB-DN and the fixed-bandwidth-constrained BandPosFB-DN achieve the best classification performance; BandPosFB-DN reduces the classification error by a relative 13.9%. The positively constrained PosFB-DN, in contrast, makes no difference in comparison with TriFB and GaussFB.
Table 5 Performance comparison on the LITIS ROUEN dataset
Training and validation curves on the DCASE2016 dataset are shown in Fig. 9. After 100 training epochs, all methods encounter the overfitting problem. This observation differs from Fig. 8. Table 6 shows the performance comparison after 100 training epochs. In order to achieve a fair comparison, we use the same NN structure on both the DCASE2016 and LITIS ROUEN datasets, including the hyper-parameters. In keeping with the results in Table 5, TriFB-DN, GaussFB-DN, and BandPosFB-DN achieve better classification performance as well.
The performance of PosFB-DN gets much worse. In comparison with reconstruction related tasks, classification tasks have fewer output dimensions, so when parameters are not constrained by specific shapes, the number of parameters is too large to converge to a stable and smooth classification model. Training and validation curves on DCASE2016 dataset. a 1-s clip classification error on training set. b 1-s clip classification error on validation set. c 30 s audio classification error on validation set Table 6 Performance comparison on DCASE2016 dataset We also investigate the classification result when we use less than 30 s audios. Figure 10 is the classification error on the two datasets when audios extend from 1 s to 30 s. With long audios, we expect to extract more information by accumulating more statistics. As a result, for DCASE2016 dataset, GaussFB-DN can obtain an accuracy of 75.2% at 15 s, which is better than TriFB at 30 s. Early classification error The construction of discriminative frequency filter banks that can be learned by neural networks has been presented in this paper. The filter banks are implemented on FFT-based spectrums and can be constrained under different conditions to express different aspects of physical meanings. For shape-related constraints, a piecewise differentiable triangular shape is approximated using several differentiable basic functions. For positive constraints, ReLU and square constraints are proposed to fulfill the demand for the probability distribution of weights. Then, a spectrum reconstruction method from incomplete filter bank coefficients is implemented using neural networks. A well-designed regularization strategy is also studied to guarantee the filter banks to be BIBO-stable. Overall, this paper provides a practical and complete framework to learn discriminative frequency filter banks for different tasks. The discriminative frequency filter banks proposed in this paper are compared with traditional fixed-parameter filter banks using several experiments. The results show performance improvements for both music reconstruction and audio classification tasks. However, not all variants of discriminative frequency filter banks are suitable for all situations. In our experiments, positive constrained filter banks perform best on music reconstruction tasks, and shape constrained filter banks obtain the best results on ASC tasks. Discriminative frequency filter banks on FFT-based spectrums have the ability to get adaptive resolution on the frequency domain. To achieve adaptive resolution on the time domain, the future work will include introducing temporal information into filter banks, for example, the filter banks may span several frames. We will also perform cross-domain experiments to learn filter banks on one dataset and use it for classification tasks on another dataset to see if the generalized filter banks can be learned as done in [55]. J. Allen, Short term spectral analysis, synthesis, and modification by discrete fourier transform. IEEE Trans. Acoust. Speech Signal Process.25(3), 235–238 (1977). I. Daubechies, The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inf. Theory. 36(5), 961–1005 (1990). Article MathSciNet Google Scholar S. Akkarakaran, P. Vaidyanathan, in Acoustics, Speech, and Signal Processing, 1999. Proceedings., 1999 IEEE International Conference On, vol 3. New results and open problems on nonuniform filter-banks (IEEEPiscataway, 1999), pp. 1501–1504. A. Biem, S. Katagiri, B. -H. 
Juang, in Neural Networks for Processing [1993] III. Proceedings of the 1993 IEEE-SP Workshop. Discriminative feature extraction for speech recognition (IEEEPiscataway, 1993), pp. 392–401. Á de la Torre, A. M. Peinado, A. J. Rubio, V. E. Sánchez, J. E. Diaz, An application of minimum classification error to feature space transformations for speech recognition. Speech Comm. 20(3-4), 273–290 (1996). N. Chen, Y. Qian, H. Dinkel, B. Chen, K. Yu, in INTERSPEECH. Robust deep feature for spoofing detection—the sjtu system for asvspoof 2015 challenge (International Speech Communication Association (ISCA)Dresden, 2015), pp. 2097–2101. Y. Qian, N. Chen, K. Yu, Deep features for automatic spoofing detection. Speech Comm. 85:, 43–52 (2016). H. Phan, P. Koch, F. Katzberg, M. Maass, R. Mazur, A. Mertins, Audio scene classification with deep recurrent neural networks. arXiv preprint arXiv:1703.04770 (2017). H. Phan, L. Hertel, M. Maass, P. Koch, R. Mazur, A. Mertins, Improved audio scene classification based on label-tree embeddings and convolutional neural networks. IEEE/ACM Trans. Audio Speech Lang. Process. 25(6), 1278–1290 (2017). B. Gao, W. Woo, L. Khor, Cochleagram-based audio pattern separation using two-dimensional non-negative matrix factorization with automatic sparsity adaptation. J. Acoust. Soc. Am.135(3), 1171–1185 (2014). J. Le Roux, E. Vincent, Consistent wiener filtering for audio source separation. IEEE Signal Process Lett.20(3), 217–220 (2013). P. Majdak, P. Balazs, W. Kreuzer, M. Dörfler, in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference On. A time-frequency method for increasing the signal-to-noise ratio in system identification with exponential sweeps (IEEEPiscataway, 2011), pp. 3812–3815. D. L. Donoho, De-noising by soft-thresholding. IEEE Trans. Inf. Theory. 41(3), 613–627 (1995). R. O. Duda, P. E. Hart, D. G. Stork, Pattern classification (Wiley, New York, 1973). A. Biem, S. Katagiri, in Acoustics, Speech, and Signal Processing, 1993. ICASSP-93., 1993 IEEE International Conference On, vol 2. Feature extraction based on minimum classification error/generalized probabilistic descent method (IEEEPiscataway, 1993), pp. 275–278. A. Biem, S. Katagiri, E. McDermott, B. -H. Juang, An application of discriminative feature extraction to filter-bank-based speech recognition. IEEE Trans. Speech Audio Process.9(2), 96–110 (2001). S. Davis, P. Mermelstein, Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. Acoustics Speech Signal Process.28(4), 357–366 (1980). V. Hohmann, Frequency analysis and synthesis using a gammatone filterbank. Acta Acustica U. Acustica. 88(3), 433–442 (2002). T. Irino, R. D. Patterson, A dynamic compressive gammachirp auditory filterbank. IEEE Trans. Audio Speech Lang. Process.14(6), 2222–2232 (2006). E. A. Lopez-Poveda, R. Meddis, A human nonlinear cochlear filterbank. J. Acoust. Soc. Am.110(6), 3107–3118 (2001). E. Zwicker, E. Terhardt, Analytical expressions for critical-band rate and critical bandwidth as a function of frequency. J. Acoust. Soc. Am.68(5), 1523–1525 (1980). B. R. Glasberg, B. C. Moore, Derivation of auditory filter shapes from notched-noise data. Hear. Res.47(1), 103–138 (1990). R. P. Lippmann, Speech recognition by machines and humans. Speech Commun.22(1), 1–15 (1997). B. Mak, Y. -C. Tam, R. Hsiao, in Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP'03). 2003 IEEE International Conference On, vol 2. 
Discriminative training of auditory filters of different shapes for robust speech recognition (IEEEPiscataway, 2003), p. 45. T. Kobayashi, J. Ye, in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference On. Discriminatively learned filter bank for acoustic features (IEEEPiscataway, 2016), pp. 649–653. H. B. Sailor, H. A. Patil, in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference On. Filterbank learning using convolutional restricted boltzmann machine for speech recognition (IEEEPiscataway, 2016), pp. 5895–5899. T. N. Sainath, R. J. Weiss, A. Senior, K. W. Wilson, O Vinyals, in INTERSPEECH. Learning the speech front-end with raw waveform cldnns (International Speech Communication Association (ISCA)Dresden, 2015), pp. 2097–2101. T. N. Sainath, B. Kingsbury, A. -R. Mohamed, B. Ramabhadran, in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop On. Learning filter banks within a deep neural network framework (IEEEPiscataway, 2013), pp. 297–302. H. Yu, Z. -H. Tan, Y. Zhang, Z. Ma, J. Guo, Dnn filter bank cepstral coefficients for spoofing detection. IEEE Access. 5:, 4779–4787 (2017). H. Seki, K. Yamamoto, S. Nakagawa, in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference On. A deep neural network integrated with filterbank learning for speech recognition (IEEEPiscataway, 2017), pp. 5480–5484. S. Strahl, A. Mertins, Analysis and design of gammatone signal models. J. Acoust. Soc. Am.126(5), 2379–2389 (2009). B. Milner, X. Shao, Prediction of fundamental frequency and voicing from mel-frequency cepstral coefficients for unconstrained speech reconstruction. IEEE Trans. Audio Speech Lang. Process.15(1), 24–33 (2007). D. Chazan, R. Hoory, G. Cohen, M. Zibulski, in Acoustics, Speech, and Signal Processing, 2000. ICASSP'00. Proceedings. 2000 IEEE International Conference On, vol 3. Speech reconstruction from mel frequency cepstral coefficients and pitch frequency (IEEEPiscataway, 2000), pp. 1299–1302. B. Milner, X. Shao, in ICSLP. Speech reconstruction from mel-frequency cepstral coefficients using a source-filter model (International Speech Communication Association (ISCA)Denver, 2002), pp. 2421–2424. R. F. Lyon, A. G. Katsiamis, E. M. Drakakis, in Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium On. History and future of auditory filter models (IEEEPiscataway, 2010), pp. 3809–3812. T. Necciari, N. Holighaus, P. Balazs, Z. Prusa, A perceptually motivated filter bank with perfect reconstruction for audio signal processing. arXiv preprint arXiv:1601.06652 (2016). W. A. Yost, R. R. Fay, Auditory perception of sound sources, vol 29 (Springer Science & Business Media, Berlin, 2007). S. Rosen, R. J. Baker, A. Darling, Auditory filter nonlinearity at 2 khz in normal hearing listeners. J. Acoust. Soc. Am.103(5), 2539–2550 (1998). R. Patterson, I. Nimmo-Smith, J. Holdsworth, P. Rice, in a Meeting of the IOC Speech Group on Auditory Modelling at RSRE, vol 2. An efficient auditory filterbank based on the gammatone function, (1987). S. S. Stevens, J. Volkmann, The relation of pitch to frequency: A revised scale. Am. J. Psychol.53(3), 329–353 (1940). S. Young, G. Evermann, M. Gales, T. Hain, D. Kershaw, X. Liu, G. Moore, J. Odell, D. Ollason, D. Povey, et al., The htk book. Camb. Univ. Eng. Dept.3:, 175 (2002). P. Balazs, M. Dörfler, F. Jaillet, N. Holighaus, G. Velasco, Theory, implementation and applications of nonstationary gabor frames. J. Comput. Appl. 
Math.236(6), 1481–1496 (2011). P. Frederic, F. Lad, Two moments of the logitnormal distribution. Commun. Stat.–Simul. Comput.®. 37(7), 1263–1269 (2008). M. James, The generalised inverse. Math. Gaz.62(420), 109–114 (1978). A. Ben-Israel, T. N. Greville, Generalized Inverses: Theory and Applications, vol 15 (Springer Science & Business Media, Berlin, 2003). R. Hagen, S. Roch, B. Silbermann, C*-algebras and Numerical Analysis (CRC Press, Boca Raton, 2000). P. Varaiya, R. Liu, Bounded-input bounded-output stability of nonlinear time-varying differential systems. SIAM J. Control.4(4), 698–704 (1966). X. Zhao, Y. Shao, D. Wang, Casa-based robust speaker identification. IEEE Trans. Audio Speech Lang. Process.20(5), 1608–1616 (2012). Y. Kim, Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014). R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, P. Kuksa, Natural language processing (almost) from scratch. J. Mach. Learn. Res.12(Aug), 2493–2537 (2011). H. Chao-Ling, J. Shing, R. Jang, MIR Database (2010). http://sites.google.com/site/unvoicedsoundseparation/mir-1k/. Accessed 8 Dec 2018. A. Rakotomamonjy, G. Gasso, Histogram of gradients of time-frequency representations for audio scene classification. IEEE/ACM Trans. Audio Speech Lang. Process. (TASLP). 23(1), 142–153 (2015). A. Mesaros, T. Heittola, T. Virtanen, in Signal Processing Conference (EUSIPCO), 2016 24th European. Tut database for acoustic scene classification and sound event detection (IEEEPiscataway, 2016), pp. 1128–1132. D. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). H. B. Sailor, H. A. Patil, Novel unsupervised auditory filterbank learning using convolutional rbm for speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process.24(12), 2341–2353 (2016). Q. Kong, I. Sobieraj, W. Wang, M. Plumbley, Deep neural network baseline for dcase challenge 2016. Tampere University of Technology, Department of Signal Processing. Proceedings of DCASE 2016 (2016). D. Battaglino, L. Lepauloux, N. Evans, F. Mougins, F. Biot, Acoustic scene classification using convolutional neural networks. DCASE2016 Challenge, Tech. Rep. Tampere University of Technology, Department of Signal Processing (2016). This work was partly funded by National Natural Science Foundation of China (Grant No. 61571266). The datasets analysed during the current study are available in the MIR-1K repository, http://sites.google.com/site/unvoicedsoundseparation/mir-1k/, LITIS ROUEN repository, https://sites.google.com/site/alainrakotomamonjy/home/audio-scene, and DCASE2016 repository, http://www.cs.tut.fi/sgn/arg/dcase2016/download. Department of Electronic Engineering, Tsinghua University, Beijing, China Teng Zhang & Ji Wu Teng Zhang Ji Wu TZ designed the core methodology of the study, carried out the implementation and experiments, and drafted the manuscript. JW participated in the study and helped to draft the manuscript. All authors read and approved the final manuscript. Correspondence to Teng Zhang. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Zhang, T., Wu, J. 
Discriminative frequency filter banks learning with neural networks. J AUDIO SPEECH MUSIC PROC. 2019, 1 (2019). https://doi.org/10.1186/s13636-018-0144-6
Publication Info.
BMB Reports, Korean Society for Biochemistry and Molecular Biology (생화학분자생물학회), 1976-670X (eISSN)
Aim & Scope: BMB Reports is an international journal devoted to the very rapid dissemination of timely and significant results in diverse fields of biochemistry and molecular biology. Novel findings in the areas of genomics, proteomics, metabolomics, bioinformatics, and systems biology are also considered for publication. For speedy publication of novel knowledge, we aim to offer a first decision to the authors in less than 3 weeks from the submission date. BMB Reports is an open access, online-only journal. The journal publishes peer-reviewed Original Articles and Contributed Mini Reviews. http://submit.bmbreports.org/
From Cytosol to Mitochondria: The Bax Translocation Story Khaled, Annette R.;Durum, Scott. K. 391
The balance between life and death of a cell regulates essential developmental processes in multicellular organisms. Apoptotic cell death is a complex, stepwise program involving multiple protein components that trigger and execute the demise of the cell. Of the many triggers of apoptosis, most are not well understood, but some key components have been identified, such as those of the Bcl-2 family, which function as anti-apoptotic or pro-apoptotic factors. Bax, a pro-apoptotic member of this family, has been shown to serve as a component of many apoptotic triggering cascades, and its mechanism of action is the focus of intense study. Herein we discuss current, differing ideas on the function and structure of Bax, and suggest novel mechanisms for how this death protein targets mitochondria, triggering apoptosis.
Expression of Schizosaccharomyces pombe Thioltransferase and Thioredoxin Genes under Limited Growth Conditions Cho, Young-Wook;Sa, Jae-Hoon;Park, Eun-Hee;Lim, Chang-Jin 395
Schizosaccharomyces pombe genes encoding redox enzymes, such as thioltransferase (TTase) and thioredoxin (TRX), were previously cloned and shown to be induced by oxidative stress. In this investigation, their expression was examined using $\beta$-galactosidase fusion plasmids. The expression of the two cloned genes appeared to be growth-dependent. The synthesis of $\beta$-galactosidase from the TTase-lacZ fusion was increased in medium with a low glucose level, whereas it was significantly decreased in medium without glucose or with galactose. It was also decreased in nitrogen-limited medium. The synthesis of $\beta$-galactosidase from the TRX-lacZ fusion was unaffected by galactose or low glucose. However, it was lowered in the absence of glucose. The synthesis of $\beta$-galactosidase from the TTase-lacZ fusion was shown to be enhanced at a higher medium pH. Our findings indicate that S. pombe TTase and TRX genes may be regulated by carbon and nitrogen sources, as well as by medium pH.
Directed Mutagenesis of the Bacillus thuringiensis Cry11A Toxin Reveals a Crucial Role in Larvicidal Activity of Arginine-136 in Helix 4 Angsuthanasombat, Chanan;Keeratichamreon, Siriporn;Leetacheewa, Somphob;Katzenmeier, Gerd;Panyim, Sakol 402
Based on the currently proposed toxicity model for the different Bacillus thuringiensis Cry $\delta$-endotoxins, their pore-forming activity involves the insertion of the ${\alpha}4-{\alpha}5$ helical hairpin into the membrane of the target midgut epithelial cell.
In this study, a number of polar or charged residues in helix 4 within domain I of the 65-kDa dipteran-active Cry11A toxin, Lys-123, Tyr-125, Asn-128, Ser-130, Gln-135, Arg-136, Gln-139 and Glu-141, were initially substituted with alanine by using PCR-based directed mutagenesis. All mutant toxins were expressed as cytoplasmic inclusions in Escherichia coli upon induction with IPTG. Similar to the wild-type protoxin inclusion, the solubility of each mutant inclusion in the carbonate buffer, pH 9.0, was relatively low. When E. coli cells expressing each of the mutant proteins were tested for toxicity against Aedes aegypti mosquito larvae, toxicity was completely abolished for the alanine substitution of arginine at position 136. However, mutants at the other positions still retained a high level of larvicidal activity. Interestingly, further analysis of this critical arginine residue by specific mutagenesis showed that conversions of arginine-136 to aspartate, glutamine, or even to the most conserved residue lysine, also abolished the wild-type activity. The results of this study revealed an important determinant of toxin function in the positively charged side chain of arginine-136 in helix 4 of the Cry11A toxin.
Comparison of Biochemical and Immunological Properties Between Rat and Nicotiana glutinosa Ornithine Decarboxylase Lee, Yong-Sun;Cho, Young-Dong 408
Ornithine decarboxylase (EC 4.1.1.17) is an essential enzyme for polyamine synthesis and growth in mammalian cells and plants. We compared the biochemical and immunological properties of rat and Nicotiana glutinosa ODC by cloning and expressing the recombinant proteins. The primary amino acid sequences of rat and N. glutinosa ODC share a 40% homology. The molecular weight of the overexpressed rat ODC was 53 kDa, and that of N. glutinosa was 46.5 kDa. Adding 1 mM of putrescine to the enzyme reaction mixture inhibited both rat and N. glutinosa ODC activity to 30%. Agmatine had an inhibitory effect only on N. glutinosa ODC. Cysteine- and lysine-modifying reagents reduced both ODC activities, verifying the key roles of cysteine and lysine residues in the catalytic mechanism of ODC. ELISA was performed to characterize the immunological difference between the rat and plant ODC. Both the rat and N. glutinosa ODC were recognized by the polyclonal antibody that was raised against purified N. glutinosa ODC, but the rat ODC was 50-fold less sensitive to antibody binding. These results indicate that even though both ODCs have the same evolutionary origin, there seems to be a structural distinction between the species.
Purification and Characterization of Chloramphenicol Acetyltransferase from Morganella morganii El-Gamal, Basiouny;Temsah, Samiha;Olama, Zakia;Mohamed, Amany;El-Sayed, Mohamed 415
Chloramphenicol acetyltransferase (CAT) was purified to homogeneity from Morganella morganii starting with ammonium sulphate fractionation, followed by separation on DEAE-Sephadex A50 and G-100 Sephadex gel filtration. The enzyme was purified 133.3-fold and showed a final specific activity of 60 units/mg protein with a yield of 37%. Sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) of the purified enzyme revealed it as a heterotetramer consisting of four subunits with close molecular weights (19.5, 19, 18, and 17.5 kDa). The molecular weight of the native enzyme was calculated to be 78 kDa, as determined by gel filtration, which approximated the combined molecular weight of the four subunits (74 kDa).
The enzyme showed a maximum activity at pH 7.8 when incubated at $35^{\circ}C$. A Lineweaver-Burk analysis gave a $K_m$ of 5.0 ${\mu}M$ and a $V_{max}$ of 153.8 U/ml. The amino acid composition of the purified enzyme was also determined.
Osteoclast Differentiation Factor Engages the PI 3-kinase, p38, and ERK pathways for Avian Osteoclast Differentiation Kim, Hong-Hee;Kim, Hyun-Man;Kwack, Kyu-Bum;Kim, Si-Wouk;Lee, Zang-Hee 421
Osteoclasts, cells primarily involved in bone resorption, originate from the hematopoietic precursor cells of the monocyte/macrophage lineage and differentiate into multinucleated mature forms. We developed an in vitro osteoclast culture system using embryonic chicken bone marrow cells. This culture system can be utilized in studies on the differentiation and function of osteoclasts. Phosphatidylinositol 3-kinase (PI3-kinase) and mitogen-activated protein kinases (MAPKs) have been implicated in diverse cellular functions including proliferation, migration, and survival. Using the developed avian osteoclast culture system, we examined the involvement of these kinases in osteoclast differentiation by employing specific inhibitors of the kinases. We found that the inhibition of PI 3-kinase, p38, or ERK interfered with osteoclast formation, suggesting that the signaling pathways that involve these molecules participate in the process of chicken osteoclast differentiation.
UVSC of Aspergillus nidulans is a Functional Homolog of RAD51 in Yeast Yoon, Jin-Ho;Seong, Kye-Yong;Chae, Suhn-Kee;Kang, Hyen-Sam 428
A defect in uvsC of Aspergillus nidulans caused high methyl methanesulfonate (MMS) sensitivity, hyporecombination, and a lack of UV-induced mutation. The uvsC gene of Aspergillus nidulans shares a sequence similarity with the RAD51 gene of Saccharomyces cerevisiae. In this study, in vitro and in vivo tests were conducted in order to determine whether or not the UVSC protein has functional similarities to RAD51, the recombination enzyme in yeast. The purified recombinant UVSC protein, following expression in Escherichia coli, showed binding activity to single-stranded DNA (ssDNA) when both ATP and magnesium were present. In addition, ATPase activity was also demonstrated, and this activity was stimulated in the presence of ssDNA. The UVSC protein expressed under the ADH promoter in S. cerevisiae suppressed in part the MMS sensitivity of the rad51 null mutant. Similarly, when the uvsC cDNA was expressed from the nmt promoter, the MMS sensitivity of the rhp51 null mutant of Schizosaccharomyces pombe was partially complemented. These results indicate that the A. nidulans UVSC protein is a functional homologue of the RAD51 protein.
Effects of Regular Endurance Exercise or Acute-exercise and Rest on the Levels of Lipids, Carnitines and Carnitine Palmitoyltransferase-I in rats Cha, Youn-Soo;Kim, Hyoung-Yon;Soh, Ju-Ryoun;Oh, Suk-Heung 434
The effects of regular endurance exercise, or acute exercise and rest, on the levels of lipids, carnitines and carnitine palmitoyltransferase-I (CPT-I) were investigated in male Sprague-Dawley rats. The rats were exercise trained on a treadmill for 60 min per day for 60 days (long-term trained, LT), or non-trained for 59 days (NT) and exercised for 60 min on the 60th day. In NT rats, the levels of serum nonesterified carnitine (NEC), acid-soluble acylcarnitine (ASAC), and total carnitine (TCNE) increased significantly during the post-exercise recovery period (PERP).
In LT rats, ASAC and TCNE, which increased right after the 60-min running session, decreased to pre-exercise levels during the PERP. The levels of skeletal muscle ASAC in NT rats, which increased significantly with the acute exercise, decreased to pre-exercise levels during the PERP. However, the ASAC level in LT rats reached its peak at 4 h after running for 60 min. Liver triglyceride (TG) and total lipids (TL), which increased with the acute exercise, decreased to pre-exercise levels during the PERP in both NT and LT rats. CPT-I activity in NT rats increased significantly after 1 h of a 60-min exercise and slowly decreased to pre-exercise levels during the PERP. However, the CPT-I activity in LT rats, which increased significantly with the 60-min exercise, decreased slowly and reached its pre-exercise level within 8 h of the PERP. Northern blot analysis showed that the changes in CPT-I activities during the PERP coincided with changes in CPT-I mRNA levels. This study shows that both regular endurance exercise, and acute exercise and rest, can influence differently the levels of carnitines, lipids and CPT-I in rats. The results suggest that regular endurance exercise, rather than acute exercise, can effectively change the distributions of carnitines, lipids and CPT-I in rats during exercise and rest.
Detoxification of Sarin, an Acetylcholinesterase Inhibitor, by Recombinant Organophosphorus Acid Anhydrolase Kim, Seok-Chan;Lee, Nam-Taek 440
Pesticide waste and chemical stockpiles are posing a potential threat to both the environment and human health. There is currently a great effort toward developing effective and economical methods for the detoxification of these toxic organophosphates. In terms of safety and economy, enzymatic biodegradation has been recommended as the most promising tool to detoxify these toxic materials. To develop an enzymatic degradation method to detoxify such toxic organophosphorus compounds, a gene encoding organophosphorus acid anhydrolase (OPAA) from the genomic DNA of Alteromonas haloplanktis C was subcloned and expressed. The enzyme consists of a single polypeptide chain with a molecular weight of 48 kDa. It demonstrates strong hydrolyzing activity on sarin, an acetylcholinesterase inhibitor. Moreover, its high activity is sustained for a considerable length of time. It is projected that the recombinant OPAA can be applied as an enzymatic tool not only for the detoxification of pesticide wastes, but also for the demilitarization of chemical stockpiles.
Modulation of the Specific Interaction of Cardiolipin with Cytochrome c by Zwitterionic Phospholipids in Binary Mixed Bilayers: A $^2H$- and $^{31}P$-NMR Study Kim, Andre;Jeong, In-Chul;Shim, Yoon-Bo;Kang, Shin-Won;Park, Jang-Su 446
The interaction of cytochrome c with binary phospholipid mixtures was investigated by solid-state $^2H$- and $^{31}P$-NMR. To examine the effect of the interaction on the glycerol backbones, the glycerol moieties of phosphatidylcholine (PC) and cardiolipin (CL) were specifically deuterated. On the binding of cytochrome c to the binary mixed bilayers, no changes in the quadrupole splittings of either component were observed for the PC/PG, PE/CL and PE/PG liposomes. In contrast, the splittings of CL decreased on binding of the protein to the PC/CL liposomes, although those of PC did not change at all.
This showed that cytochrome c specifically interacts with CL in PC/CL bilayers, and penetrates into the lipid bilayer to some extent so as to perturb the dynamic structure of the glycerol backbone. This is distinctly different from the mode of interaction of cytochrome c with other binary mixed bilayers. In the $^{31}P$-NMR spectra, line broadening and a decrease of the chemical shift anisotropy were observed on the binding of cytochrome c for all binary mixed bilayers that were examined. These changes were more significant for the PC/CL bilayers. Furthermore, the line broadening is more significant for PC than for CL in PC/CL bilayers. Therefore, it can be concluded that with the polar head groups, not only CL but also PC are involved in the interaction with cytochrome c. Determination of Monoclonal Antibodies Capable of Recognizing the Native Protein Using Surface Plasmon Resonance Kim, Deok-Ryong 452 Surface plasmon resonance has been used for a biospecific interaction analysis between two macromolecules in real time. Determination of an antibody that is capable of specifically interacting with the native form of antigen is very useful for many biological and medical applications. Twenty monoclonal antibodies against the $\alpha$ subunit of E. coli DNA polymerase III were screened for specifically recognizing the native form of protein using surface plasmon resonance. Only four monoclonal antibodies among them specifically recognized the native $\alpha$ protein, although all of the antibodies were able to specifically interact with the denatured $\alpha$ subunit. These antibodies failed to interfere with the interaction between the $\tau$ and $\alpha$ subunits that were required for dimerization of the two polymerases at the DNA replication fork. This real-time analysis using surface plasmon resonance provides an easy method to screen antibodies that are capable of binding to the native form of the antigen molecule and determine the biological interaction between the two molecules. Comparative Kinetic Studies of Two Staphylococcal Lipases Using the Monomolecular Film Technique Sayari, Adel;Verger, Robert;Gargouri, Youssef 457 Using the monomolecular film technique, we compared the interfacial properties of Staphylococcus simulans lipase (SSL) and Staphylococcus aureus lipase (SAL). These two enzymes act specifically on glycerides without any detectable phospholipase activity when using various phospholipids. Our results show that the maximum rate of racemic dicaprin (rac-dicaprin) hydrolysis was displayed at pH 8.5, or 6.5 with Staphylococcus simulans lipase or Staphylococcus aureus lipase, respectively The two enzymes interact strongly with egg-phosphatidyl choline (egg-PC) monomolecular films, evidenced by a critical surface pressure value of around $23\;mN{\cdot}m^{-1}$. In contrast to pancreatic lipases, $\beta$-lactoglobulin, a tensioactive protein, failed to inhibit Staphylococcus simulans lipase and Staphylococcus aureus lipase. A kinetic study on the surface pressure dependency, stereoselectivity, and regioselectivity of Staphylococcus simulans lipase and Staphylococcus aureus lipase was performed using optically pure stereoisomers of diglycerides (1,2-sn-dicaprin and 2,3-sn-dicaprin) and a prochiral isomer (1,3-sn-dicaprin) that were spread as monomolecular films at the air-water interface. Both staphylococcal lipases acted preferentially on distal carboxylic ester groups of the diglyceride isomer (1,3-sn-dicaprin). 
Furthermore, Staphylococcus simulans lipase was found to be markedly stereoselective for the sn-3 position of the 2,3-sn-dicaprin isomer.
Inhibitory Effects of the Ethanol Extract of Ulmus davidiana on Apoptosis Induced by Glucose-glucose Oxidase and Cytokine Production in Cultured Mouse Primary Immune Cells Lee, Jeong-Chae;Lim, Kye-Taek 463
The bark of Ulmus davidiana var. japonica Nakai (UDN) has long been used to treat inflammation in oriental medicine. In the present study, two types of extracts, an Ulmus water-eluted fraction (UWF) and an Ulmus ethanol-eluted fraction (UEF), were prepared from the UDN stem bark and used to test whether the extracts have anti-oxidative properties against hydroxyl radicals that could alter immune reactivity in mouse immune cells. Deoxyribose, DNA nicking, and glucose/glucose oxidase assays showed that both fractions had scavenging activity against oxygen free radicals at 50 mg/ml. In addition, hydroxyl radical-mediated apoptosis in mouse thymocytes was not prevented by UEF treatment, but was prevented by UWF at the same concentration. DNA synthesis and cytokine production induced in splenocytes by mitogens (Concanavalin A and lipopolysaccharide) were reduced by the addition of both fractions. These results indicate that both extracts prepared from the UDN stem bark have anti-oxidative activities, anti-apoptotic effects, and inhibitory effects on DNA synthesis and cytokine production in mouse immune cell cultures.
Mutation of Cysteine-115 to Alanine in Nicotiana glutinosa Ornithine Decarboxylase Reduces Enzyme Activity
Ornithine decarboxylase (ODC, EC 4.1.1.17) is the first and key enzyme in eukaryotic polyamine biosynthesis. The cDNA encoding ornithine decarboxylase from Nicotiana glutinosa was cloned (GenBank$^{TM}$ AF 323910) and expressed in E. coli. Site-directed mutagenesis was performed on several highly conserved cysteine residues. Among the mutants, C115A showed significant changes in the kinetic properties. The $K_m$ value of the C115A mutant was $1790\;{\mu}M$, which was 3-fold higher than that of the wild-type ODC. There was a dramatic decrease in the $k_{cat}$ value of the C115A mutant compared to that of the wild-type ODC, which had a $k_{cat}$ value of $77.75\;s^{-1}$. C115A also caused a shift in the optimal pH from 8.0 to 8.4. Considering these results, we suggest that Cys-115 is involved in the catalytic activity of N. glutinosa ODC.
Regulation of Glycine max Ornithine Decarboxylase by Salt and Spermine Lee, Yong-Sun;Lee, Geun-Taek;Cho, Young-Dong 478
We examined the effect of CsCl and spermine on the induction of ornithine decarboxylase (ODC), a key enzyme in polyamine synthesis, from Glycine max axes. Transcription of the ODC gene was induced by 0.1 and 1 mM CsCl, and the amount of putrescine was increased 3.5-fold by 1 mM CsCl treatment. Spermine also induced the expression of the ODC gene in a dose-dependent manner. However, CsCl provoked an increase in active phosphorylated ERK (pERK), a central element of the mitogen-activated protein kinase (MAPK) cascade. Our data demonstrate an interaction between ODC induction and the MAPK signaling pathway, and suggest that the latter may be involved in cell signaling in salt-stressed plants.
Role of STAT3 as a Molecular Adaptor in Cell Growth Signaling: Interaction with Ras and other STAT Proteins Song, Ji-Hyon;Park, Hyon-Hee;Park, Hee-Jeong;Han, Mi-Young;Kim, Sung-Hoon;Lee, Choong-Eun 484
STATs are proteins with a dual function: signal transducers in the cytoplasm and transcriptional activators in the nucleus. Among the six known major STATs (STAT1-6), STAT3 has been implicated in the widest range of signaling pathways that regulate cell growth and differentiation. As a part of our on-going investigation of the pleiotropic functions of STAT proteins, we examined the role of STAT3 as a molecular adaptor that links diverse cell growth signaling pathways. We observed that STAT3 can be specifically activated by multiple cytokines, such as IL-3 in transformed fibroblasts and IL-4 or IFN-$\gamma$ in primary immune cells, respectively. The selective activation of STAT3 in H-ras-transformed NIH3T3 cells is associated with an increased expression of phosphoserine STAT3 in these cells, compared to the parental cells. Notably, phosphoserine STAT3 interacts with oncogenic Ras, as shown by immunoprecipitation and Western blots. The results suggest a role for STAT3 in Ras-induced cellular transformation as a molecular adaptor linking the Jak/STAT and Ras/MAPK pathways. In primary immune cells, IL-4 and IFN-$\gamma$ each induced, in addition to the characteristic STAT6 and STAT1 homodimers, the formation of STAT3-containing complexes that bind to GAS probes corresponding to the $Fc{\varepsilon}RII$ and $Fc{\gamma}RI$ promoter sequences, respectively. Since IL-4 and IFN-$\gamma$ are known to counter-regulate the expression of these genes, the ability of STAT3 to form heterodimeric complexes with STAT6 or STAT1 implies its role in the fine-tuned control of genes that are regulated by IL-4 and IFN-$\gamma$.
Schedule for: 18w5088 - The Traveling Salesman Problem: Algorithms & Optimization
Arriving in Banff, Alberta on Sunday, September 23 and departing Friday September 28, 2018
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre) 17:30 - 19:30 Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) 20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110)) 07:00 - 08:45 Breakfast ↓ Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. 08:45 - 09:00 Introduction and Welcome by BIRS Staff ↓ A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions. (TCPL 201) 09:00 - 10:15 Jakub Tarnawski: A constant-factor approximation algorithm for the Asymmetric Traveling Salesman Problem (TCPL 201) 10:15 - 10:45 Coffee Break (TCPL Foyer) 10:45 - 11:45 Bill Cook: Open problems on TSP computation ↓ We discuss a number of open research topics surrounding the computation of exact and approximate solutions to large-scale instances of the TSP. 11:45 - 13:00 Lunch ↓ Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. 13:00 - 14:00 Guided Tour of The Banff Centre ↓ Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus. (Corbett Hall Lounge (CH 2110)) 14:00 - 14:20 Group Photo ↓ Meet in the foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo! 15:30 - 16:00 Kent Quanrud: Approximating metric TSP and approximating the Held-Karp LP ↓ Let $G$ be an undirected graph with $m$ edges and let $\epsilon > 0$ be a constant, and consider the Metric-TSP instance induced by the shortest path metric on $G$. First, we give an algorithm that computes, in $\tilde{O}(m/\epsilon^2)$ randomized time and with high probability, a $(1 + \epsilon)$-approximation for an LP relaxation of Metric-TSP which is equivalent to the Held-Karp bound [Held and Karp, 1970]. Second, we describe an algorithm that computes, in $\tilde{O}(m / \epsilon^2 + n^{1.5} / \epsilon^3)$ randomized time and with high probability, a tour of $G$ with cost at most $(3 + \epsilon)/2$ times the minimum cost tour of $G$. The second algorithm uses the LP solution from the first algorithm as a starting point. (Joint work with Chandra Chekuri.) 16:00 - 16:30 Viswanath Nagarajan: Stochastic k-TSP ↓ We study the stochastic version of the k-Traveling Salesman Problem. Given a metric with independent random rewards at vertices, the objective is to minimize the expected length of a tour that collects total reward at least k. We consider both adaptive and non-adaptive solutions: an adaptive tour depends on observed rewards. We provide an $O(\log k)$-approximate adaptive solution and an $O(\log^2 k)$-approximate non-adaptive solution, which also upper bounds the "adaptivity gap". Time permitting, we will also discuss the setting with more general reward functions.
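Several of the abstracts above and below refer to the Held-Karp bound. For reference, it is the optimum of the standard subtour-elimination LP relaxation of the (symmetric) TSP; the formulation below is the usual textbook statement, included here only as background and not taken from any particular talk:

$$
\begin{aligned}
\min \ & \sum_{e \in E} c_e x_e \\
\text{s.t.}\ & x(\delta(v)) = 2 && \text{for all } v \in V,\\
& x(\delta(S)) \ge 2 && \text{for all } \emptyset \neq S \subsetneq V,\\
& 0 \le x_e \le 1 && \text{for all } e \in E,
\end{aligned}
$$

where $x(\delta(S)) = \sum_{e \in \delta(S)} x_e$ denotes the total LP weight on the edges crossing the cut $\delta(S)$.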
16:30 - 17:00 Stephan Held: Vehicle routing with subtours ↓ When delivering items to a set of destinations, one can save time and cost by passing a subset to a sub-contractor at any point en route. We consider a model where a set of items are initially loaded in one vehicle and should be distributed before a given deadline $T$. In addition to travel time and time for deliveries, we assume that there is a fixed delay for handing over an item from one vehicle to another. We will show that it is easy to decide whether an instance is feasible, i.e., whether it is possible to deliver all items before the deadline $T$. We then consider computing a feasible tour of minimum cost, where we incur a cost per unit distance traveled by the vehicles, and a setup cost for every used vehicle. Our problem arises in practical applications and generalizes classical problems such as shallow-light trees and the bounded-latency problem. Our main result is a polynomial-time algorithm that, for any given $\alpha > 0$ and any feasible instance, computes a solution that delivers all items before time $(1+ \alpha) T$ and has cost $O(1 + 1 / \alpha)$ OPT, where OPT is the minimum cost of any feasible solution. (Joint work with Jochen Konemann and Jens Vygen. https://arxiv.org/pdf/1801.04991) 17:00 - 17:30 Zachary Friggstad: Compact, provably-good LPs for orienteering and regret-bounded vehicle routing ↓ We develop polynomial-size LP-relaxations for orienteering and the regret-bounded vehicle routing problem (RVRP) and devise suitable LP-rounding algorithms that lead to various new insights and approximation results for these problems. In orienteering, the goal is to find a maximum-reward r-rooted path, possibly ending at a specified node, of length at most some given budget B. In RVRP, the goal is to find the minimum number of r-rooted paths of regret at most a given bound R that cover all nodes, where the regret of an r-v path is the difference between its length and the {distance of v from r}. For orienteering without a specified end-node, we introduce a natural bidirected LP-relaxation and obtain a simple 3-approximation algorithm via LP-rounding. This is the first LP-based guarantee for this problem. We also show that point-to-point orienteering (where the end-node is also specified) can be reduced to a regret-version of rooted orienteering at the expense of a factor-2 loss in approximation, and present an LP-relaxation with an integrality gap of 6 for this problem. For RVRP, we propose two compact LPs that lead to significant improvements, in both approximation ratio and running time, over the previous O(1)-factor approximation algorithm. One of these LPs is a rather unconventional formulation that leverages various structural properties of an RVRP-solution. (Joint work with Chaitanya Swamy.) 07:00 - 09:00 Breakfast (Vistas Dining Room) 09:00 - 09:30 Neil Olver: Pipage rounding, pessimistic estimators and matrix concentration ↓ We introduce a simple but useful technique called concavity of pessimistic estimators. This technique allows us to show concentration of submodular functions and concentration of matrix sums under pipage rounding (we prove the latter by a new variant of Lieb's celebrated concavity theorem in matrix analysis). A spectrally-thin tree is a spectral analog of the thin trees that played a crucial role in recent approximation algorithms for the asymmetric traveling salesman problem. 
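As background for the thin-tree talks in this session: a spanning tree $T$ of a graph $G=(V,E)$ is called $\alpha$-thin if it contains at most an $\alpha$ fraction of the edges of every cut, i.e. (a standard definition, recalled here for convenience)

$$ |T \cap \delta_G(S)| \;\le\; \alpha\,|\delta_G(S)| \qquad \text{for all } \emptyset \neq S \subsetneq V. $$

Trees that are simultaneously thin and of low cost are the combinatorial objects behind the thin-tree approach to the asymmetric TSP discussed in these two talks, and "spectrally thin" trees are the spectral analogue mentioned in the pipage-rounding abstract above.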
Pipage rounding can be used to (constructively) obtain an $O(\kappa^{-1} \log n / \log \log n)$-spectrally thin tree, where $\kappa$ is the minimum edge conductance. (Joint work with Nick Harvey. https://arxiv.org/abs/1307.2274) 09:30 - 10:30 Shayan Oveis Gharan: Thin trees and the asymmetric traveling salesman, Part 1 ↓ Title is TENTATIVE. Abstract: TBA. 11:00 - 12:00 Nima Anari: Thin trees and the asymmetric traveling salesman, Part 2 ↓ 12:00 - 13:30 Lunch (Vistas Dining Room) 15:30 - 16:00 R. Ravi: Shorter tours and longer detours ↓ We study decompositions of graphs that cover small-cardinality cuts an even number of times, and we use these decompositions to design algorithms with improved approximation guarantees for the traveling salesman problem (TSP) and the 2-edge-connected spanning multigraph problem (2EC) on (restricted classes of) weighted graphs. Motivated by the well known "four-thirds conjecture", we apply our decomposition tools to the problem of uniform covers. For a cubic, 3-edge-connected graph, we show that the everywhere 18/19 vector can be efficiently written as a convex combination of tours, answering a question of Sebo. Additionally, for such graphs, we show that the everywhere 15/17 vector can be efficiently written as a convex combination of 2-edge-connected spanning multigraphs. Our constructions of these uniform covers use the algorithms of Boyd, Iwata and Takazawa for cycle covers and Cheriyan, Jordan and Ravi for tree augmentations. (Joint work with Arash Haddadan and Alantha Newman. arxiv.org/abs/1707.05387) 16:00 - 16:30 Alantha Newman: Using large cycle covers to find small cycle covers in cubic graphs ↓ A classic algorithm for the traveling salesman problem (TSP) on cubic graphs consists of finding a double spanning tree on the contracted graph of a cycle cover, where a cycle cover is defined as the set of edges in the complement of a perfect matching. If a cubic graph G on n vertices has a cycle cover containing k cycles, this results in a TSP tour of size $n+2k$. Since we are interested in short TSP tours, we would like to find cycle covers that have small size, i.e., having few connected components. Moemke and Svensson showed that a bridgeless, cubic graph contains a cycle cover consisting of at most n/6 cycles. Here we show how to use a large cycle cover to obtain a small cycle cover. In particular, if G is a bipartite, cubic graph on $n$ vertices, a cycle cover of size $(1/6+\epsilon)n$ can be used to find a cycle cover of size $(1/6 - \epsilon/2)n$. If G is a bridgeless, cubic graph on $n$ vertices, a cycle cover of size $(1/6 + \epsilon)n$ that covers all 3-edge cuts in G can be used to find a cycle cover of size $(1/6 - \epsilon/5)n$. (Joint work with Arash Haddadan.) 16:30 - 17:00 Katarzyna Paluch: New approximation algorithms for (1,2)-TSP ↓ We give faster and simpler approximation algorithms for the (1,2)-TSP problem, a well-studied variant of the traveling salesperson problem where all distances between cities are either 1 or 2. Our main results are two approximation algorithms for (1,2)-TSP, one with approximation factor 8/7 and run time $O(n^3)$ and the other having an approximation guarantee of 7/6 and run time $O(n^{2.5})$. The 8/7 -approximation algorithm is based on combining three copies of a minimum-cost cycle cover of the input graph together with a relaxed version of a minimum weight matching, which allows using "half-edges". 
The resulting multigraph is then edge-colored with four colors so that each color class yields a collection of vertex-disjoint paths. The paths from one color class can then be extended to an 8/7 -approximate traveling salesperson tour. Our algorithm, and in particular its analysis, is simpler than the previously best 8/7 -approximation. The 7/6 -approximation algorithm is similar and even simpler, and has the advantage of \(\textbf{not}\) using Hartvigsen's complicated algorithm for computing a minimum-cost triangle-free cycle cover. (Joint work with Anna Adamaszek and Matthias Mnich ICALP 2018) 17:00 - 17:30 Vincent Cohen-Addad: On the effectiveness of k-opt for Euclidean TSP ↓ What is the effectiveness of local search algorithms for TSP in the plane? Motivated by the strong results of Johnson et al. during the TSP challenge, we prove that $k$-opt yields a $(1+1/poly(k))$-approximation when points are chosen uniformly in $R^d$. We show that the randomness assumption is necessary as in the worst-case $k$-opt could return at least a 2-approximate solution. 17:30 - 19:30 Dinner (Vistas Dining Room) 09:00 - 10:00 Martin Naegele: A 1.5-approximation for path TSP ↓ I will present recent work by Rico Zenklusen on obtaining a 1.5-approximation for the Metric Path Traveling Salesman Problem (path TSP). All recent improvements on path TSP crucially exploit a structural property shown by An, Kleinberg, and Shmoys [Journal of the ACM, 2015], namely that cuts with a value strictly below 2 with respect to any Held-Karp solution form a chain. Such narrow cuts are the obstacle why Christofides' celebrated 1.5-approximation for TSP does not easily extend to the more general path version of TSP. The newly introduced approach significantly deviates from prior techniques in this point, by showing the benefit of not only focussing on narrow cuts, but instead dealing with larger s-t cuts even though they are much less structured. More precisely, we will see that a variation of the dynamic programming idea recently introduced by Traub and Vygen [SODA, 2018] is versatile enough to deal with larger size cuts, by exploiting a seminal result of Karger on the number of near-minimum cuts. Through this technique, we obtain a well-structured point in the Held-Karp relaxation from which we derive the 1.5-approximation. This allows us to avoid a recursive application of dynamic programming as used by Traub and Vygen in their recent (1+epsilon)-approximation, and we obtain a considerably simpler algorithm avoiding an additional error term in the approximation guarantee. The obtained approach matches the still unbeaten 1.5-approximation guarantee of Christofides' algorithm for TSP. Hence, any further progress on the approximability of path TSP will also lead to an improvement over Christofides' 1.5-approximation for TSP. 10:30 - 11:30 Vera Traub: Beating the integrality ratio for s-t-tours in graphs ↓ (Joint with Jens Vygen.) 11:30 - 12:30 Jens Vygen: Integrality ratios for the s-t-path TSP ↓ We prove new upper bounds on the integrality ratios for the standard subtour elimination LPs for the symmetric and for the asymmetric s-t-path TSP. For symmetric distances (joint work with Vera Traub), we give an improved analysis of the algorithm of Seb\H{o} and van Zuylen. For asymmetric distances (joint work with Anna K\"{o}hne and Vera Traub), we prove that the integrality ratio is constant. 
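The cycle-cover idea in the cubic-graph abstract above (take the complement of a perfect matching, then double a spanning tree of the contracted graph) is easy to experiment with. The lines below are a minimal illustrative sketch of my own, not code from any of the talks: they use networkx, take the Petersen graph purely as an example of a cubic bridgeless graph, and only report the n + 2k tour-size bound quoted in the abstract rather than constructing the tour itself.

import networkx as nx

G = nx.petersen_graph()                        # a 3-regular, bridgeless example graph
n = G.number_of_nodes()

# In a cubic bridgeless graph a maximum matching is perfect (Petersen's theorem),
# and the complement of a perfect matching is 2-regular, i.e. a cycle cover.
matching = nx.max_weight_matching(G, maxcardinality=True)
cover = G.copy()
cover.remove_edges_from(matching)
k = nx.number_connected_components(cover)      # number of cycles in the cover

# A doubled spanning tree of the contracted graph adds at most 2k edges to the
# n cycle edges, which is the n + 2k bound quoted in the abstract.
print(f"n = {n}, cycles k = {k}, tour-size bound n + 2k = {n + 2*k}")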
13:30 - 17:30 Free Afternoon (Banff National Park) 09:00 - 10:00 Hung Le: PTASes for (subset) TSP in minor-free graphs ↓ TSP and subset TSP were known to have PTASes for planar and bounded-genus graphs, and they were conjectured to have PTASes in minor-free graphs, which contain planar and bounded-genus graphs as subclasses. In this talk, we will survey existing results on designing PTASes for TSP and subset TSP, and explore the resolution of both conjectures. Demaine, Hajiaghayi and Kawarabayashi, in their seminal paper on contraction decomposition in minor-free graphs, described the first PTAS for TSP in minor-free graphs. However, their PTAS is inefficient. In joint work with Glencora Borradaile and Christian Wulff-Nilsen, we design an efficient PTAS for TSP in minor-free graphs. This result constitutes the first part of the talk. Recently, building on the technique developed for TSP, we were able to resolve the second conjecture, that is, to design the first PTAS for subset TSP in minor-free graphs. This is the second part of the talk. To conclude the talk, we will discuss several open problems. (One part is joint with Glencora Borradaile and Christian Wulff-Nilsen.) 10:30 - 11:30 Andras Sebo: The salesman, the postman and (delta-) matroids ↓ Abstract: TBA 15:30 - 16:00 Sam Gutekunst: Semidefinite programming relaxations of the Traveling Salesman Problem ↓ We analyze the integrality gap of a semidefinite programming relaxation of the traveling salesman problem due to de Klerk, Pasechnik, and Sotirov. We show that the integrality gap is unbounded by searching for highly structured feasible solutions; the problem of finding such solutions reduces to finding feasible solutions for a related linear program. These solutions imply several corollaries that help us better understand the semidefinite program and its relationship to other relaxations of the traveling salesman problem. Using the same technique, we show that a more general semidefinite program introduced by de Klerk, de Oliveira Filho, and Pasechnik for the k-cycle cover problem also has an unbounded integrality gap. 16:00 - 16:30 Tobias Moemke: Maximum Scatter TSP in Doubling Metrics ↓ We study the problem of finding a tour of n points in which every edge is long. More precisely, we wish to find a tour that visits every point exactly once, maximizing the length of the shortest edge in the tour. The problem is known as Maximum Scatter TSP, and was introduced by Arkin et al. (SODA 1997), motivated by applications in manufacturing and medical imaging. Arkin et al. gave a 0.5-approximation for the metric version of the problem and showed that this is the best possible ratio achievable in polynomial time (assuming P ≠ NP). Arkin et al. raised the question of whether a better approximation ratio can be obtained in the Euclidean plane. We answer this question in the affirmative in a more general setting, by giving a $(1-\epsilon)$-approximation algorithm for d-dimensional doubling metrics, with running time $\tilde{O}(n^3 + 2^{O(K \log K)})$, where $K \leq (13/\epsilon)^d$. As a corollary we obtain (i) an efficient polynomial-time approximation scheme (EPTAS) for all constant dimensions d, (ii) a polynomial-time approximation scheme (PTAS) for dimension $d = (\log \log n)/c$, for a sufficiently large constant c, and (iii) a PTAS for constant d and $\epsilon = \Omega(1/\log \log n)$. 
Furthermore, we show the dependence on d in our approximation scheme to be essentially optimal, unless Satisfiability can be solved in subexponential time. (Joint work with Laszlo Kozma, SODA 2017.) 16:30 - 17:00 Kenjiro Takazawa: Excluded t-factors in bipartite graphs: A unified framework for nonbipartite matchings and restricted 2-matchings ↓ We propose a framework for optimal $t$-matchings excluding prescribed $t$-factors in bipartite graphs. The proposed framework is a generalization of the nonbipartite matching problem and includes several problems, such as the triangle-free $2$-matching, square-free $2$-matching, even factor, and arborescence problems. In this talk, we demonstrate a unified understanding of these problems by commonly extending previous important results. We solve our problem under a reasonable assumption, which is sufficiently broad to include the specific problems listed above. We first present a min-max theorem and a combinatorial algorithm for the unweighted version. We then provide a linear programming formulation with dual integrality and a primal-dual algorithm for the weighted version. A key ingredient of the proposed algorithm is a technique to shrink forbidden structures, which corresponds to the techniques of shrinking odd cycles, triangles, squares, and directed cycles in Edmonds' blossom algorithm, a triangle-free $2$-matching algorithm, a square-free $2$-matching algorithm, and an arborescence algorithm, respectively. 17:00 - 17:30 Yuri Faenza: Bounded pitch inequalities for min knapsack: approximate separation and integrality gaps ↓ The pitch of a (valid) inequality for the min knapsack polytope is the minimum integer k such that, if any k variables from its support are set to one, then the inequality is satisfied. Bounded pitch inequalities came to prominence for their connections with the Chvatal-Gomory and Bienstock-Zuckerberg operators. In this talk, we investigate the strength of bounded pitch inequalities, proving bounds on the integrality gap when they are added to the natural LP relaxation (possibly, in conjunction with other inequalities), and we discuss algorithms for approximately separating them. 09:00 - 09:30 Thomas Rothvoss: A Tale of Santa Claus, Hypergraphs and Matroids ↓ A well-known problem in scheduling and approximation algorithms is the Santa Claus problem. Suppose that Santa Claus has a set of gifts, and he wants to distribute them among a set of children so that the least happy child is made as happy as possible. Here, the value that a child $i$ has for a present $j$ is of the form $p_{ij} \in \{0,p_j\}$. The only known polynomial time algorithm by Annamalai et al. gives a 12.33-approximation algorithm and is based on a modification of Haxell's hypergraph matching argument. This factor compares to the value of an exponential size \emph{configuration LP}. In this paper, we introduce a \emph{matroid} version of the Santa Claus problem with unit size presents and design an algorithm which gives a polynomial time $(3+\epsilon)$-approximation compared to a natural, compact LP. Our algorithm is also based on Haxell's augmentation tree, but despite the greater generality, it is cleaner than previous methods. Our result can then be used as a blackbox to obtain a $(6+\epsilon)$-approximation for Santa Claus (with arbitrary present sizes). This factor also compares against a natural, compact LP for Santa Claus. (Joint work with Sami Davies and Yihao Zhang.) 
09:30 - 10:00 Tom McCormick: Strongly Polynomial Algorithms for Some Problems Related to Parametric Global Minimum Cuts ↓ In the parametric global minimum cut problem, we are given a graph $G=(V,E)$ where the cost of each edge is an affine function of a parameter in $R^d$ for some fixed dimension d. Megiddo's parametric search is a widely known technique for solving parametric optimization problems. We give faster algorithms for two problems related to the parametric global minimum cut problem: finding the next breakpoint in a given direction, and finding a parameter value that maximizes the global min-cut value; we also show how the two problems are related. 11:15 - 12:00 Checkout by Noon ↓ 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to check out of their guest rooms by 12 noon. (Front Desk - Professional Development Centre) 11:30 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)
Equation of pair of tangents to an ellipse equation of pair of tangents to an ellipse 38. Find an equation of the leftmost one. The integral on the left-hand side of equation (2) is interpreted as 2) Find the equation of this ellipse: time we do not have the equation, but we can still find the foci. Jan 17, 2020 · 1. 17. This theorem can also be proved by writing down the equation of the tangent at P, m y= x + 2 and finding the intercept of this line on the axis of x. It is a similar idea to the tangent to a circle. Answer 3x - 8y = 7 3x - 8y = 25 3x + 8y = 7 3x + 8y = 25 math. The parametric equations for a curve in the plane consists of a pair of equations Each value of the parameter t gives values for x and y; the point is the corresponding point on the curve. Jan 17, 2013 · Homework Statement The angle between the tangents drawn from the point (2,2) to the ellipse, 3x2+5y2=15 is: a)##\\pi##/6 b)##\\pi##/4 c)##\\pi##/3 d)##\\pi##/2 Homework Equations The Attempt at a Solution To find the equation of tangents, I need to use the following formula Question from Coordinate Geometry,jeemain,math,class11,coordinate-geometry-conic -sections,ellipse,ch11,medium Find the equation of tangent to the ellipse $3x^2 Find the equation of the tangent line to the ellipse x 2 + 4y 2 = 25 when x = 3 and y . is the slope In this video, the instructor shows how to find the equation of a circle given its center point and a tangent line to it. Locate each focus and discover the reflection property. 12}\] For the ellipse and hyperbola, our plan of attack is the same: 1. I've built this equation from the ground up using Pythagoras - it's only by coincidence that this happens to form an ellipse. The standard formula of a ellipse: 6. The tangent of an ellipse is a line that touches a point The two ellipses are given as the equation given in Wiki. A nondegenerate conic section has the general form [latex]A{x}^{2}+Bxy+C{y}^{2}+Dx+Ey+F=0[/latex] where [latex]A,B[/latex] and [latex]C[/latex] are not all zero. May 18, 2016 · How do you find the equations of both the tangent lines to the ellipse #x^2 + 4y^2 = 36# that pass through the point (12,3)? Calculus Derivatives Tangent Line to a Curve 1 Answer The set of all points in the plane, the sum of whose distances from two xed points, called the foci, is a constant. Pair of Tangents 5. For more see General equation of an ellipse Jan 19, 2018 · Two ellipses typically have four common tangents. All these equation are explained below in detail. Jun 16, 2017 · The Questions and Answers of find the equation of the pair of tangents drawn to the ellipse 3x^2 + 2y^2= 5 from point (1,2) and find the angle between the tangents. Slope form of tangent of Tangents and normal of general equation and their forms in their particular conic section, Equation of polar, chord of contact, pair of tangents in case of parabola, ellipse, Hyperbola and their special properties, Polar equation of conic section-Tangents and normals. The given conic has equation; Divide through by 9. Example. Thus we get the equation of the tangent to the curve traced by the parametric equations x(t) and y(t) without having to explicitly solve the equations to find a formula relating x and y. THE PROBLEM. 
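Several of the exercise fragments above ask for the tangents from an external point, for instance the angle between the tangents drawn from (2, 2) to 3x² + 5y² = 15. The following is a minimal sympy sketch of my own (not part of the quoted material) that forms the combined equation SS₁ = T² and reads the angle off its second-degree terms; all variable names are arbitrary.

import sympy as sp

x, y = sp.symbols('x y')
x1, y1 = 2, 2                                  # the external point
S  = 3*x**2 + 5*y**2 - 15                      # S = 0 is the ellipse
S1 = S.subs({x: x1, y: y1})                    # S evaluated at the point
T  = 3*x1*x + 5*y1*y - 15                      # T = 0 is the chord of contact

pair = sp.expand(S*S1 - T**2)                  # SS1 = T^2 gives the pair of tangents
print(sp.simplify(pair/15))

# For a line pair a*x^2 + 2h*x*y + b*y^2 + ... = 0, tan(theta) = 2*sqrt(h^2 - a*b)/(a + b).
p = sp.Poly(pair, x, y)
a, b = p.coeff_monomial(x**2), p.coeff_monomial(y**2)
h = p.coeff_monomial(x*y)/2
print(sp.atan2(2*sp.sqrt(h**2 - a*b), a + b))  # pi/2 here: (2, 2) lies on the director circle x^2 + y^2 = 8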
(iii) Slope Form The equation of the tangent of slope m to the ellipse x2 / a2 + y2 / b2 = 1 are y = mx ± √a2m2 + b2 and the coordinates of the point of contact are (iv) Point of Intersection of Two Tangents The equation of the tangents to the ellipse at points P(a cosθ 1, b sinθ 1) and Q (a cos θ 2, b sinθ 2) are x / a cos θ 1 + y / b The equation ax2+2hxy+by2+2gx+2fy+c=0 denotes an ellipse when abc+2fgh-af2-bg2-ch2≠0 and h2-ab<0. Equation of a normal in terms of its slope m is (a 2 b 2 )m y mx a 2 b 2 m2 Condition for line y = mx + c to be the tangent to the ellipse is c2 = a2m2 + b, with the point of contact is and the equation of tangent is y = mx ± √ [a2m2+b2] =. (c) hyperbola. Figure 7. 4x 2 +9y 2 = 36. It covers a wide range of topics including tangents, normals, chords and locus. List the line with the smaller slope first thank you!!-Hello, x² + 4y² = 36 y² = 9 - x²/ Answer to Find equations of both the tangent lines to the ellipse x2 + 4y2 = 36 that pass through the point (12, 3). Point-slope form of line equation :. The formula for calculating com-plete elliptic integrals of the second kind be now known: (2) Z 1 0 s 1 −γ 2x2 1−x2 dx = πN(β ) 2M(β), where N(x) is the modified arithmetic-geometric mean of 1 and x. Equation of ellipse is,The slope of the perpendicular drawn from the centre (0,0) to (h,k) is k/h. Equations of the tangent lines to hyperbola xy=1 that pass through point (-1,1) I know the graph of y=1/x but not sure about the tangent lines at given point. where T = 0 is the equation to the chord of contact. Equation. units) of quadrilateral formed by the common May 08, 2011 · Ellipse General Equation If X is the foot of the perpendicular from S to the Directrix, the curve is symmetrical about the line XS. Let xT and yT be the - and -intercepts of T and xN and yN be the intercepts of N. An axis-aligned ellipse centered at the origin with a>b. This concept is a part of Coordinate Geometry (or Analytical Geometry), and is one of the important chapters in this area. For Those Who Want To Learn More:Form of quadratic equations, discriminant formula,…Mutual relations between line and ellipseLinear Diophantine equationsNon-linear Diophantine equationsDefinition of radical equations with examples Example 2: Find the standard equation of an ellipse represented by x 2 + 3y 2 - 4x - 18y + 4 = 0. x2 y2 ELLIPSES -+ -= 1 (CIRCLES HAVE a= b) a2 b2 This equation makes the ellipse symmetric about (0, 0)-the center. For example, consider the parametric equations Here are some points which result from plugging in some values for t: Definition 1 (Ellipse) Consider the linear transformation x = Ay where A is a nonsingular 2×2 real matrix. 9x2+ 16y2= 144 15. Let F=(0,0) be the focus and the line y=-6 be the directrix. Tangent and normal. Draw a line from the center of the ellipse to the tangent, parallel We know that the equations of tangents with slope m to the ellipse x 2 a 2 + y 2 b 2 = 1 are y = m x ± a 2 m 2 + b 2 (1) The equation of the ellipse is x 2 + 4y 2 = 9 Calibration, Ellipse's parameters. Everything ive seen online assumes that the resulting ellipse will be either centered on the origin, have axes of symmetry parallel with x and y, or both. Take a point )T(x0 , y. Let the tangents from P(x 1, y 1) touch the circle at Q(x 2, y 2) and R(x 3, y 3). Table 3. The first is as functions of the independent variable t. 
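As a concrete use of the slope form quoted above, y = mx ± √(a²m² + b²), the recurring exercise about x² + 4y² = 36 and the point (12, 3) reduces to imposing the tangency condition c² = a²m² + b² on a line through the point. A short sympy sketch of my own (illustrative only):

import sympy as sp

m = sp.symbols('m')
a2, b2 = 36, 9                                 # x^2 + 4y^2 = 36 is x^2/36 + y^2/9 = 1
px, py = 12, 3                                 # the external point

# A line y = m*x + c through (12, 3) has c = 3 - 12m; tangency needs c^2 = a^2 m^2 + b^2.
c = py - m*px
slopes = sp.solve(sp.Eq(c**2, a2*m**2 + b2), m)
for s in slopes:
    print(sp.Eq(sp.Symbol('y'), s*sp.Symbol('x') + (py - s*px)))
# the two tangents through (12, 3): y = 3 and y = 2*x/3 - 5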
Circle, if If focus of a parabola is and equation of the directrix is , Apr 01, 2011 · coefficients in ellipse equation K, P labels for two contacting surfaces R 3 real, three-dimensional space a, b major and minor semiaxes of the contact ellipse c, d major and minor semiaxes of the tangent ellipse e eccentricity of an ellipse h distance between opposing points on contacting surfaces k, p opposing points on contacting surfaces r Hence the equation of the ellipse referred to conjugate diameters 2a', 2b' as co-ordinate axes is It is easily shown that = a constant. We consider now the general case: none of the tangents in the pair is parallel to the . Find equations of both the tangent lines to the ellipse $ x^2 + 4y^2 = 36 $ that pass through the point (12, 3). 4) of the Ellipse 3x ^ 2 + 16y ^ 2 = 192? A circle is tangent to the x axis, the y axis and the line 3x - 4y +6 = 0. The maximum possible area of the triangle formed by the tangent at 'P' , ordinate of the point 'P' and the x-axis is equal to (A) 8 (B) 16 (C) 24 (D) 32 Q. The equation x^2-xy+y^2=3 describes an ellipse centered at the origin with semi-major axis of length √6 and semi-minor axis of length √2, with the axes of the ellipse rotated π/4. To reduce this to one of the forms given previously, perform the following steps (note that the decisions are based on the most recent values of the coefficients, taken after all the transformations so far): Nov 02, 2017 · Equation of tangent is x = my + b m a2 2 2 slope of tangent = 1 4 m 3 m = 3 4 Hence equation of tangent is 4x + 3y = 24 or x y 1 6 8 Its intercepts on the axes are 6 and 8. The centre of the circle is say (5,5) and the foci of the ellipse are say (5,5) and (10,5), so one of the foci is the centre of the circle. 1. For each of the following equations, identify whether the curve is a parabola, circle, ellipse, or hyperbola by removing the xy x y term from the equation by rotation of the axes. Let s = x 2 + 4y 2 - 25, so that s = 0 is an ellipse x 2 + 4y 2 = 25. x2 a2 + y2 The latter is a quadratic equation which may be factorized into the product of two linear equations each representing a tangent to the conic through P(x 1, y 1). y = mx+√(a 2 m 2 +b 2) From equation of ellipse, we may derive the values of a 2 and b 2. Example 3 : Find a point on the curve. The minor and major axes are of lengths 3 and 5 and are parallel to the \(x\) and \(y\) axes respectively. Suppose your two ellipses have equations [math]e_1(x,y)=0[/math] and [math]e_2(x,y)=0[/math]. Figure \(\PageIndex{7}\): Graph of the plane curve described by the parametric equations in part b. Equation of tangent to two ellipse x 2 9 + y 2 4 = 1 which cut off equal intercepts on the axes is Q: Equation of tangent to two ellipse x 2 9 + y 2 4 = 1 which cut off equal intercepts on the axes is (A) y = x + 13 (B) y = − x + 13 Example of the graph and equation of an ellipse on the . 2 Equation of Tangent and Normal at a Point on the ellipse (Cartesian and Parametric) - Condition for a Straight Line to be a Tangent 4. the arc length of an ellipse has been its (most) central problem. The graph is shown in Figure 3. and non-parallel to the . The ellipse x 2 + 4y 2 = 4 is inscribed in a rectangle aligned with the coordinate axes, which in turn is inscribed in another ellipse that passes through the point (4, 0). 
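The classification exercise quoted above (decide whether a second-degree equation gives a parabola, circle, ellipse or hyperbola once the xy term is handled) needs only the quadratic coefficients: for Ax² + Bxy + Cy² + Dx + Ey + F = 0 the sign of B² − 4AC decides, which matches the h² − ab condition stated earlier. A tiny helper of my own, with the rotated ellipse x² − xy + y² = 3 from the text as a check; degenerate cases (such as a pair of lines) are not detected:

def classify_conic(A, B, C):
    # Classify by the discriminant of the quadratic part of A*x^2 + B*x*y + C*y^2 + ... = 0.
    disc = B*B - 4*A*C
    if disc < 0:
        return "circle" if (A == C and B == 0) else "ellipse"
    return "parabola" if disc == 0 else "hyperbola"

print(classify_conic(1, -1, 1))   # x^2 - xy + y^2 = 3  -> ellipse
print(classify_conic(1, 0, -4))   # x^2 - 4y^2 = 4      -> hyperbola
print(classify_conic(0, 0, 1))    # y^2 = 4x            -> parabola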
com The locus of middle points of parallel chords of an ellipse is the diameter of the ellipse and has the equation y = 2 a m The condition for y = m x + c to be the tangent to the ellipse is c = a 2 m 2 + b 2 Oct 29, 2007 · i need help solving this problem: If we plot the points (x,y) satisfying the equation [(x^2)/4] + y^2 = 1, the result is an ellipse. Tangency condition of straight line and ellipse. If y = mx + c represent a system of parallel chords of the ellipse x 2 a 2 + y 2 b 2 = 1 is then the equation of the diameter is y = – b 2 a 2 m x. Area of an Ellipse. Each pair of conjugate diameters of an ellipse has a corresponding tangent parallelogram, sometimes called a bounding parallelogram (skewed compared to a bounding rectangle). dy a. Focal length. Then the equation of pair of tangents of PA and PB is SS1 S S 1 = T 2 T 2 The equations of tangent and normal to the ellipse $$\frac{{{x^2}}}{{{a^2}}} + \frac{{{y^2}}}{{{b^2}}} = 1$$ at the point $$\left( {{x_1},{y_1}} \right)$$ are $$\frac Coordinates of the point A (x, y), from which we draw tangents to an ellipse, must satisfy equations of the tangents, y = mx + c and their slopes and intercepts, m and c, must satisfy the condition of tangency therefore, using the system of equations, (1) y = mx + c <= A (x, y) Find the angle between the pair of tangents from the point (1,2) to the ellipse `3x^2+2y^2=5 Solutions of the system of equations of tangents to the ellipse determine the points of contact, i. 17). 6 118. gl/9WZjCW Equations of tangents to the ellipse `x^2/9+y^2/4=1` which are perpendicular to the line `3x + 4y = 7,` are. The line barely touches the ellipse at a single point. Witing the equation of the tangent in # y=mx +c# form we have the equation of the tangent as #y=x-2#,So it is obvious that the slope of the tangent is 1. An ellipse is also the the result of projecting a circle, sphere, or ellipse in three dimensions onto a plane, by parallel lines. To sketch a graph of an ellipse with the equation , start by plotting the four axes intercepts, which are easy to find by plugging in 0 for and then for . Eccentricity. By using this website, you agree to our Cookie Policy. The equation of the pair of tangents is SS 1 = T 2 where the equation of the chord of contact is T = 0 and t he equation of the chord bisected at the point (x 1, y 1) is T = S1. Dec 29, 2014 · This is called the standard form of the equation of an ellipse, assuming that the ellipse is centered at (0,0). 18 Find the length of major axis, minor axis, latus rectum, eccentricity, coordinates of the centre, foci and equations of directrices of the ellipse. If PA and PB be the tangents from point P(x 1, y 1) to the ellipse + = 1. Divide by 36, we get (x 2 /9) + (y 2 /4) = 1. As t varies over the interval I, the functions and generate a set of ordered pairs This set of ordered pairs generates the graph of the parametric equations. Find the equation of the ellipse. Any ellipse is an affine image of the unit circle with equation + =. play_arrow Equation of Pair of Tangents From a Point to a Parabola play_arrow Equations of Normal in Different Forms play_arrow Point of Intersection of Normals at Any Two Points on The Parabola Solution Let P be any point on the locus Equation of pair of tangents from P to from MATH JEE at Delhi Public School - Durg A pair of tangents are drawn from a point P to the circle . Since, both the lines are perpendicular, The locus of the point of intersection of perpendicular tangents to an ellipse is a director circle. 
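The worked question above (determine the ellipse that the line −3x + 10y = 25 touches at P(−3, 8/5)) is an inverse use of the point-form tangent xx₁/a² + yy₁/b² = 1: matching that form against the given line fixes a² and b². A short sympy check of my own; the symbol names are arbitrary:

import sympy as sp

x, y, a2, b2 = sp.symbols('x y a2 b2', positive=True)
x1, y1 = -3, sp.Rational(8, 5)

# Tangent at (x1, y1) in point form, and the given line rewritten so it equals 1.
tangent = x*x1/a2 + y*y1/b2 - 1
line    = sp.expand((-3*x + 10*y)/25 - 1)

# The two linear forms must be identical, so match the x and y coefficients.
sols = sp.solve([sp.Eq(tangent.coeff(x), line.coeff(x)),
                 sp.Eq(tangent.coeff(y), line.coeff(y))], [a2, b2], dict=True)
sol = sols[0]
print(sol)                                           # {a2: 25, b2: 4} -> x^2/25 + y^2/4 = 1
print(sp.simplify(x1**2/sol[a2] + y1**2/sol[b2]))    # 1, so P really lies on that ellipse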
Then sketch the ellipse freehand, or with a graphing program or calculator. We connect students with top tutors from the IITs and BITS - instantly, anytime, anywhere. Step 1: Group the x- and y-terms on the left-hand side of the equation. Let ACA' and BCB' be a pair of conjugate diameters, PCP' and DCD' another pair, and PN, DM be ordinates of ACA' (meaning they connect points on the ellipse to ACA' along lines parallel to the conjugate of ACA' - BCB' in this case). . Another way of saying it is that it is "tangential" to the ellipse. Nov 29, 2018 · where \(\vec T\) is the unit tangent and \(s\) is the arc length. The Equation to this second tangent becomes (after multiplication throughout by \(m\) ) K. 12} \tag{2. An ellipse is basically a circle that has been squished either horizontally or vertically. A tangent to a curve is a line that touches the curve at one point and has the same slope as the curve at that point. The equation of the tangent to the ellipse S = 0 is y mx a m b= ± +2 2 2 … (1) Dec 24, 2019 · (iii) Slope Form The equation of the tangent of slope m to the ellipse x 2 / a 2 + y 2 / b 2 = 1 are y = mx ± √a 2 m 2 + b 2 and the coordinates of the point of contact are (iv) Point of Intersection of Two Tangents The equation of the tangents to the ellipse at points P(a cosθ 1 , b sinθ 1 ) and Q (a cos θ 2 , b sinθ 2 ) are May 01, 2019 · Statement-1 : A tangent of the ellipse x^2 + 4y^2 = 4 meets the ellipse x^2 + 2y^2 = 6 at P & Q. The equation of the pair of tangents drawn from (4, 10) to x² + y² = 25 is The equation to the pair of tangents from (x₁, y₁) is S²₁ = S₁₁S. These units are analyzed in a hierarchy: points with tangents are paired into triangles in the first layer and pairs of triangles in the second layer vote for ellipse cen-ters. The point A has coordinates (a1,a2). Tangent is drawn at any point other than the vertex on the parabola . to the ellipse S = 0 lies on a circle, concentric with the ellipse. Then the equation of the ellipse is The ray goes from the shot at one focus of the ellipse to anywhere on the ellipse, and then to the receiver in traveltime t h. Second ellipse is centered at (15,0), rotated by 120 degrees with semi-major and semi-minor axes length as 3,1 For an ellipse, two diameters are conjugate if and only if the tangent line to the ellipse at an endpoint of one diameter is parallel to the other diameter. 6 117. If equation of an ellipse is x 2 / a 2 + y 2 / b 2 = 1, then equation of director circle is x 2 + y 2 = a 2 + b 2. that pass through the point (5,3) which is not a point on the ellipse. First show: \[ \begin{equation}{{CN^2 + CM^2} = AC^2. The "line" from (e 1, f 1) to each point on the ellipse gets rotated by a. A normal to a curve is a line perpendicular to a tangent to the curve. This equation admits of reduction; and we propose to obtain the reduced form independently, and to supply its geometrical interpretation. Let any tangent of ellipse is x cos y sin 1 4 3 Let it meets axes at A 4,0 Equation: T = 0 (Similar to that of tangent equation) 16. 2 depicts Earth's orbit around the Sun during one year. at which the tangent is parallel to the x axis. Chord of Contact 5 However, in projective geometry every conic section is equivalent to an ellipse. Find the equations of the tangents to the hyperbola x2– 4y2= 4 which are (i) parallel (ii) perpendicular to the line x + 2y = 0. , 2x0x a2 + 2y0y b2 = 2x20 a2 + 2y20 b2. Oct 10, 2010 · Find equations of both the tangent lines to the ellipse x^2 + 4y^2 = 36 that pass through the point (12, 3). 
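Exercises of the kind quoted in these excerpts (find the lengths of the axes, the latus rectum, the eccentricity, the centre, the foci and the directrices) become mechanical once the equation is in standard form. The helper below is my own illustration for an axis-aligned ellipse centred at the origin, run on 4x² + 9y² = 36 (equivalently x²/9 + y²/4 = 1, so a = 3 and b = 2):

import math

def ellipse_properties(a, b):
    # Assumes x^2/a^2 + y^2/b^2 = 1 with a > b > 0 (major axis along x).
    c = math.sqrt(a*a - b*b)           # centre-to-focus distance
    e = c / a                          # eccentricity
    return {
        "major axis length": 2*a,
        "minor axis length": 2*b,
        "eccentricity": e,
        "foci": ((c, 0.0), (-c, 0.0)),
        "latus rectum": 2*b*b/a,
        "directrices": (a/e, -a/e),    # the vertical lines x = +a/e and x = -a/e
    }

for name, value in ellipse_properties(3, 2).items():
    print(name, value)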
The slope of the ellipse at the point (m,n) can be computed by implicit differentiation of the ellipse equation with respect to x. The major axis of this ellipse is horizontal and is the red segment from (-2, 0) to (2, 0). What I meant to say, revised, is that I want to find the normal or tangent vector to the curve of form \(\displaystyle k=\sqrt{x^2 + y^2} + \sqrt{x^2 + (AB - y)^2}\), where k is some value and AB is the distance between Equation of the director circle is x 2 + y 2 = a 2 – b 2. An ellipse is the figure consisting of all points in the plane whose Cartesian coordinates satisfy the equation $\frac{(x - h)^2}{a^2} + \frac{(y - k)^2}{b^2} = 1$ 7. An ellipse is the figure consisting of all those points for which the sum of their distances to two fixed points (called the foci) is a constant. 0; a line . Some of the most important equations of an ellipse include tangent and tangent equation, the tangent equation in slope form, chord equation, normal equation and the equation of chord joining the points of the ellipse. 5. Drag the sliders or the point on the diagram to move A. Therefore, if we replace \(m\) in the above Equation by \(−1/m\) we shall obtain another tangent to the ellipse, at right angles to the first one. 0. 6 115. y2 – 2ax = x2 Parametric Equations of Curves. The domain of this relation is -3,3. Substitute in the above equation. Taxi Cab Ellipse A GCF file Using the TC distance metric, and the definition of an ellipse as the set points where the sum of the distance from two fixed points is a constant d, we can write an equation for the ellipse with foci at A(a,b) and B(g,h) as Equation of Chord of Contact of Tangents. (b) line. Move the center of the ellipse to the point (x o,y o) maintaining the inclination θ of the major axis. The equation for a circle of radius with center on the surface at the source-receiver pair coordinate x=b is In fact the ellipse is a conic section (a section of a cone) with an eccentricity between 0 and 1. Jul 04, 2016 · Now it is given that #x-y=2# is the equation of tangent to the circle at the point(4,2) on the circle. . SS 1 = T 2, where S is the equation of the hyperbola, S 1 is the equation when a point P (h,k) satisfies S, T is the equation of the tangent. y-axis, therefore both have a slope. If tangents are drawn from any point on this tangent to the circle such that all the chords of contact pass through a fixed point then (a) in GP (b) are in GP (c) (x_1//x_2) x_1x_2+y_1y_2=a^2 Jan 08, 2021 · The angle between the pair of tangents drawn to the ellipse 3x^2 + 2y^2 = 5 from the point (1,2) is? I considered using homogenization for this problem, consider the shifted coordinates: $$ x' = x-1$$ $$ y' = y-2$$ In shifted coordinates, our conic becomes: $$ 3(x'+1)^2 + 2 (y'+2)^2 =5 \tag{1}$$ The relation of slope is given as: (c) Equation to the chord of contact, polar, chord with a given middle point, pair of tangents from an external point is to be interpreted as in ellipse. An ellipse centered at the origin is defined to be the image of the unit circle under this transformation. First, The equation of tangents to the ellipse x 2 a 2 + y 2 b 2 = 1 with slope m are y = m x ± a 2 m 2 + b 2 The tangent makes equal intercepts on the coordinate axes, its slope = m = – 1 ∴ a 2 m 2 + b 2 = 16 (- 1) 2 + 9 = 5 However, when you graph the ellipse using the parametric equations, simply allow t to range from 0 to 2π radians to find the (x, y) coordinates for each value of t. 
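The implicit-differentiation relation quoted at the start of this passage, 2x/a² + (2y/b²)·dy/dx = 0, gives the tangent slope −b²x/(a²y) directly. The short sympy sketch below (my own, with x²/9 + y²/4 = 1 and a first-quadrant point chosen arbitrarily) prints the tangent and normal lines at that point:

import sympy as sp

a2, b2 = sp.Integer(9), sp.Integer(4)            # x^2/9 + y^2/4 = 1
x0 = 3/sp.sqrt(2)                                # a first-quadrant abscissa
y0 = sp.sqrt(b2*(1 - x0**2/a2))                  # the matching ordinate, sqrt(2) here

# From 2x/a^2 + (2y/b^2) dy/dx = 0, the slope at (x0, y0) is -b^2*x0/(a^2*y0).
slope = -b2*x0/(a2*y0)                           # -2/3 for this point

X, Y = sp.symbols('X Y')
print(sp.Eq(Y - y0, slope*(X - x0)))             # tangent line at (x0, y0)
print(sp.Eq(Y - y0, (-1/slope)*(X - x0)))        # normal line at (x0, y0)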
In this chapter, we introduce parametric equations on the plane and polar coordinates. Also, there are two 'focus' in an ellipse, and hence two 'directrix', one corresponding to each. Enter none if there are no such points. Tangents and Normals. Slope of the tangent line : dy/dx = 2x-2. 1) is the center of the ellipse (see above figure), then equations (2) are true for all points on the rotated ellipse. 0, where . Condition c=± a2m2−b2 Tangent in terms of slope - formula Let m be the slope of the tangent, then the equation of tangent is y=mx± a2m2−b2 The equations of the chord of contact chord bisected at a given point and pair of tangents from a point are dealt with extensively. Define b by the equations c 2 = a 2 − b 2 for an ellipse and c 2 = a 2 + b 2 for a hyperbola. Find the equations of the two tangents that can be drawn from (5, 2) to the ellipse 2x 2 + 7y 2 =14 . Page 95 THE PARABOLA 95 EXERCISES 1. We serve Class 8th - 12th students preparing for CBSE, ICSE and State boards as well as all entrance exams such as IIT JEE Main & Advanced, BITSAT, NEET, VITEEE, MU OET, SRMEEE, AIPMT and all State entrance exams. " can be solved by deductive reasoning. Area ( AOB) = 1 2 × 6 × 8 = 24 sq. 4) Eccentricity of a Rectangular Hyperbola is √2 and the angle between asymptotes is 90°. The equation to the director circle is : x 2 + y 2 =1, if y =mx+c is the tangent then substituting it in the equation of ellipse gives a quadratic equation with equal roots. Various Forms of Normals 5. Pair of tangents. g. 2 Equation Jan 29, 2018 · hyperbola, standard equation of hyperbola, transverse and conjugate axes,directrices, conjugate hyperbola, intersection of a line and a hyperbola, tangent to a hyperbola, number of co tangents, pair of tangents and their chord of contact, number of normal drawn from a point, rectangular hyperbola, rectangular hyperbola referred as to its Feb 15, 2012 · Find equations of both the tangent lines to the ellipse x 2 + 4y 2 = 36 that pass through the point (12, 3). A complete graph of an ellipse can be obtained without graphing the foci. ) slope is undefined at =? Answer by Alan3354(67285) (Show Source): An analysis of the equations associated with pairs of straight lines. Example 2 Find the equation of the common tangents to the circles x 2 + y 2 – 6x = 0 and x 2 + y 2 + 2x = 0. The 4th section deals with Normals. 6 116. Help!!!!!!! Writing Equations of Ellipses Not Centered at the Origin. Large and small axes of ellipse. An Ellipse is the geometric place of points in the coordinate axes that have the property that the sum of the distances of a given point of the ellipse to two fixed points (the foci) is equal to a constant, which we denominate \(2a\). Condition for the sine y = mx + c to be a tangent to the Conics. In this section, finding equations for normals to ellipse is addressed. a 2 b 2. HashLearn is India's first on-demand tutoring app. Group-D: Analytical Geometry Of 3 Dimension (Three Question) Free line equation calculator - find the equation of a line given two points, a slope, or intercept step-by-step This website uses cookies to ensure you get the best experience. Equation of tangent line to ellipse. The graph of the second degree equation is one of a circle, parabola, an ellipse, a hyperbola, a point, an empty set, a single Aug 31, 2020 · The tangent to the circle at that point will have slope -1/2, since the radius perpendicular to that point has slope 2. } \tag{7 Jun 17, 2008 · 3. at a point (x1, y1) is xx1 + yy1 ‗ 1. Figure 1. 6 119. 
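For the parametric description discussed here, x = a·cos t, y = b·sin t, the slope rule dy/dx = (dy/dt)/(dx/dt) quoted earlier applies directly wherever dx/dt ≠ 0. The sketch below is illustrative only (a = 3 and b = 2 are my choice); it checks at one parameter value that the parametric slope agrees with the point-form slope −b²x₀/(a²y₀):

import sympy as sp

t = sp.symbols('t')
a, b = 3, 2
xt, yt = a*sp.cos(t), b*sp.sin(t)

# Parametric slope dy/dx = (dy/dt)/(dx/dt).
dydx = sp.diff(yt, t)/sp.diff(xt, t)
t0 = sp.pi/3
x0, y0 = xt.subs(t, t0), yt.subs(t, t0)
m0 = sp.simplify(dydx.subs(t, t0))

# The point-form tangent x*x0/a^2 + y*y0/b^2 = 1 has slope -b^2*x0/(a^2*y0).
print(m0, sp.simplify(-b**2*x0/(a**2*y0)))       # the two slopes coincide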
Theorem 1 (Matrix Representation of Ellipse) The equation of the el-lipse so defined is xTMx=1, (1) Dec 29, 2014 · This is called the standard form of the equation of an ellipse, assuming that the ellipse is centered at (0,0). By placing an ellipse on an x-y graph (with its major axis on the x-axis and minor axis on the y-axis), the equation of the curve is: x 2 a 2 + y 2 b 2 = 1 (similar to the equation of the hyperbola: x 2 /a 2 − y 2 /b 2 = 1, except for 4. The blue line on the outside of the ellipse in the figure above is called the "tangent to the ellipse". The locus of centre of the ellipse sliding between two perpendicular lines is - I have an overlapping circle and ellipse. If P(x 1, y 1) be any point lies outside the ellipse + = 1, and a pair of tangents PA, PB can be drawn to it from P. (1) Tangent line to the ellipse at the point (,) has the equation . The result follows after dividing by 2 and using the fact that f(x0, y0) = 1. The equation of a horizontal hyperbola in standard form is where the center has coordinates the vertices are located at and the coordinates of the foci are where ; The eccentricity of an ellipse is less than 1, the eccentricity of a parabola is equal to 1, and the eccentricity of a hyperbola is greater than 1. 21. Table 2. The point labeled F 2 F 2 is one of the foci of the ellipse; the other focus is occupied by the Sun. through . Draw PM perpendicular a b from P on the Given an ellipse on the coordinate plane, Sal finds its standard equation, which is an equation in the form (x-h)²/a²+(y-k)²/b²=1. 3) The equation of a chord of the hyperbola whose mid-point is (x₁,y₁) is given by T = S₁. The only thing that changed between the two equations was the placement of the a 2 and the b 2. sec θ – by. Q. Eccentric Angle of a Point. Plot several points P that are half as far from the focus as they are from the directrix. The equation of an ellipse centered at (0, 0) with major axis a and minor axis b (a > b) is If we add translation to a new center located at ( h, k ), the equation is: The locations of the foci are (-c, 0) and (c, 0) if the ellipse is longer in the x direction, and (0, -c) & (0, c) if it's elongated in the y -direction. Summarizing, we get: Result 1. In general the formal definition of the curvature is not easy to use so there are two alternate formulas that we can use. x - 4y + 12 = 0. The ellipse must be tangent to both coordinate axis: that gives two equations with variables x o,y o and parameter θ. 2 The General Quadratic Equation. Additional ordered pairs that satisfy the equation of the ellipse may be found and plotted as needed (a calculator with a square root key will be helpful). How to solve: Find an equation of the tangent line to the curve at the given point. Free Ellipse calculator - Calculate ellipse area, center, radius, foci, vertice and eccentricity step-by-step This website uses cookies to ensure you get the best experience. Implicit differentiation yields: 2x/a 2 + (2 y/b 2 ) (dy/dx) = 0 The slope is dy/dx. The equation of pair of tangents would be. Let's start by marking the center point: Looking at this ellipse, we can determine that a = 5 (because that is the distance from the center to the ellipse along the major axis) and b = 2 (because that is the distance from the center to the 2. , 2x0(x − x0) a2 + 2y0(y − y0) b2 = 0, i. where S 1 = + - 1, T = + - 1. x = 1 Solution for The graph of the equation x + xy+ y = 8 is an ellipse lying obliquely in the plane, as illustrated in the figure below. 1. 
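The matrix form x^T M x = 1 stated above pairs naturally with the earlier definition of an ellipse as the image of the unit circle under x = Ay: in that case M = (A Aᵀ)⁻¹, the semi-axes are the reciprocals of the square roots of M's eigenvalues, and the enclosed area is π·|det A|. A small numpy sketch of my own; the particular matrix A is arbitrary:

import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])                  # any nonsingular 2x2 matrix
M = np.linalg.inv(A @ A.T)                  # the ellipse {A y : |y| = 1} is {x : x^T M x = 1}

# Points on the image of the unit circle satisfy x^T M x = 1.
theta = np.linspace(0.0, 2*np.pi, 7)
pts = A @ np.vstack([np.cos(theta), np.sin(theta)])
vals = np.einsum('ik,ij,jk->k', pts, M, pts)
print(np.allclose(vals, 1.0))               # True

# Semi-axis lengths and area, compared with pi*|det A|.
axes = 1.0/np.sqrt(np.linalg.eigvalsh(M))
print(sorted(axes), np.pi*np.prod(axes), np.pi*abs(np.linalg.det(A)))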
the definition of the ellipse is given in terms of its foci, the foci are not part of the graph. The Attempt at a Solution Apr 06, 2013 · EXAMPLES Write the equation of pair of tangents to the parabola y2 = 4x drawn from a point P(–1, 2) Ans. Apr 06, 2013 · NORMALS Equation of the normal at (x1, y1) to the ellipse x2 y2 a 2 x b2 y 1 is a 2 b2 a2 b2 x1 y1 Equation of the normal at the point (a cos θ, b sin θ) to x2 y2 the ellipse 1 is; a2 b2 ax. What are the tangents from P(0, 0) to the ellipse? Let's see that there are none. y-axis has an equation of the formy m(x x0 ) y. 7 . S²₁ = (xx₁ +y y₁ – 25)² = (4x₁ + 10y₁ – 25)² S₁₁ = x²₁ + y₁² – 25 at point (4, 10) Aug 05, 2019 · (iii) Slope Form The equation of the tangent of slope m to the ellipse x 2 / a 2 + y 2 / b 2 = 1 are y = mx ± √ a 2 m 2 + b 2 and the coordinates of the point of contact are (iv) Point of Intersection of Two Tangents The equation of the tangents to the ellipse at points P(a cosθ 1 , b sinθ 1 ) and Q (a cos θ 2 , b sinθ 2 ) are See full list on askiitians. The ratio,is called eccentricity and is less than 1 and so there are two points on the line SX which also lie on the curve. Figure1shows such an ellipse. If tangents are drawn to the ellipse $${x^2} + 2{y^2} = 2,$$ then the locus of the mid-point of the intercept made by th IIT-JEE 2004 Screening GO TO QUESTION Four basic shapes can result from the intersection of a plane with a pair of right circular cones connected tail to tail. Equation of an ellipse Transforming a circle we can get an ellipse (as Archimedes did to calculate its area). Graphing an Ellipse Centered at the Origin Graph and locate the foci: Solution The given equation is the standard form of an ellipse's equation with and x2 9 y2 4 +=1 a2 = 9 Every ellipse has two foci and if we add the distance between a point on the ellipse and these two foci we get a constant. i found the derivative to be -x / 4y Mar 04, 2013 · This video discusses the combined equation of pair of tangents drawn from a point to the circle. Solution : Equation of tangent drawn to the ellipse will be in the form Mar 13, 2019 · 2) The combined equation of pair of tangents drawn from an external point P(x₁,y₁) is SS₁–T². You need the chain rule. Now, squish the y axis by a factor of 2. 15 The equation of the chord whose middle point is (x1, y1): T = S1 Thus, for the equation to represent an ellipse that is not a circle, the coefficients must simultaneously satisfy the discriminant condition B 2 − 4 A C < 0 B^2 - 4AC< 0 B 2 − 4 A C < 0 and also A ≠ C. To draw this set of points and to make our ellipse, the following statement must be true: if you take any point on the ellipse, the sum of the distances to those 2 fixed points ( blue tacks ) is constant. Problems based on focal property of ellipse; Equation of ellipse having axis parallel to coordinate axis; Equation of ellipse having any two perpendicular lines as its axes; Equation of ellipse in parametric form_Part I; Equation of ellipse in parametric form_Prat II; Properties of ellipse; Tangent to ellipse_Theory; Tangent to ellipse_Problems The centre of another ellipse is now given as the point (2, 1). Let the ellipse extents along those axes be ' 0 and ' 1, a pair of positive numbers, each measuring the distance from the center to an extreme point along the corresponding axis. This ellipse is centered at the origin, with x-intercepts 3 and -3, and y-intercepts 2 and -2. 16. 
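The chord-of-contact statement quoted above, T = 0 or xx₁/a² + yy₁/b² = 1, can be verified directly: intersect it with the ellipse and check that the tangent at each intersection point passes back through the external point. A sympy verification of my own, reusing the (12, 3) and x² + 4y² = 36 example that recurs in these excerpts:

import sympy as sp

x, y = sp.symbols('x y')
a2, b2 = 36, 9                     # x^2 + 4y^2 = 36
px, py = 12, 3                     # external point

chord   = x*px/a2 + y*py/b2 - 1    # chord of contact T = 0, here x/3 + y/3 - 1
ellipse = x**2/a2 + y**2/b2 - 1

contacts = sp.solve([chord, ellipse], [x, y])          # the two points of tangency
print(contacts)                                        # (0, 3) and (24/5, -9/5)
for (cx, cy) in contacts:
    tangent = x*cx/a2 + y*cy/b2 - 1                    # tangent at each contact point
    print(sp.simplify(tangent.subs({x: px, y: py})))   # 0: it passes through (12, 3)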
That turns the circle into your ellipse, and it changes the slope of that tangent line by a factor of 2, from -1/2 to -1/4. ; The center of this ellipse is the origin since (0, 0) is the midpoint of the major axis. The major axis is perpendicular to directrix and passes through the focus. 1 The Standard Form for an Ellipse Let the ellipse center be C 0. Compute dx dy dx b. Let P be any point on the ellipse x 2 / a 2 + y 2 / b 2 = 1. Let's start by marking the center point: Looking at this ellipse, we can determine that a = 5 (because that is the distance from the center to the ellipse along the major axis) and b = 2 (because that is the distance from the center to the Let T and N be the tangent and normal lines to the ellipse x2/9 + y2/4 = 1 at any point P on the ellipse in the first quadrant. If the tangents make an intercept of 2 on the line x=1 then the locus of P is If the tangents make an intercept of 2 on the line x=1 then the locus of P is Tangent lines and normal vectors to an ellipse Tangent line to the ellipse at the point (,) has the equation . , 9. From a point 'P' if common tangents are drawn to circle x 2 + y = 8 and parabola y = 16x, then the area (in sq. and the range is -2,2. = L. Using the Pythagorean Theorem to find the points on the ellipse, we get the more common form of the equation. I have the equation for both. The ellipse has two vertical tangents. 2) Find the equation of this ellipse: time we do not have the equation, but we can still find the foci. This cheat sheet covers the high school math concept – Ellipse. Finney Chapter A5. Rotate to remove Bxy if the equation contains it. Question 1075727: Find the slope of the tangent line to the ellipse x^2/9+y^2/4=1 at the point (x,y) slope =? Are there any points where the slope is not defined? (Enter them as comma-separated ordered-pairs, e. We have step-by-step solutions for your textbooks written by Bartleby experts! May 06, 2002 · An ellipse can be represented parametrically by the equations x = a cos θ and y = b sin θ, where x and y are the rectangular coordinates of any point on the ellipse, and the parameter θ is the angle at the center measured from the x-axis anticlockwise. unit. Jun 06, 2019 · The equation of tangent to the ellipse can be written as. Parametric form of a tangent to an ellipse The equation of the tangent at any point (a cosɸ, b sinɸ) is [x / a] cosɸ + [y / b] sinɸ. please write the steps to find the answers are solved by group of students and teacher of JEE, which is also the largest student community of JEE. The equation to the pair of tangents from the point (x ′, y ′) to the conic φ(x, y) = 0 is usually obtained in the form. y = x 2-2x-3 . , the closest and the farthest point of the ellipse from the given line, thus Example: Determine equation of the ellipse which the line - 3 x + 10 y = 25 touches at the point P ( - 3, 8/5). The parametric equation of a parabola with directrix x = −a and focus (a,0) is x = at2, y = 2at. If x(t) and y(t) are parametric equations, then dy dx = dy dt dx dt provided dx dt 6= 0 . Then the equation of pair of tangents of PA and PB is SS 1 = T 2. Bourne. Let the ellipse axis directions be U 0 and U 1, a pair of unit-length orthogonal vectors. Dec 21, 2020 · This is the equation of a horizontal ellipse centered at the origin, with semi-major axis 4 and semi-minor axis 3 as shown in the following graph. Using the center point and the radius, you can find the Equation of an Ellipse. 
Sep 30, 2018 · the ellipse, (2) the major and minor axes of the ellipse, (3) the minimum bounding box for the ellipse, and (4) the points on the ellipse at which the tangent is horizontal, v ertical, or at a We use a theorem of Marden relating the foci of an ellipse tangent to the lines thru the sides of a triangle and the zeros of a partial fraction expansion to prove the converse: If P lies on Z The equation of ellipse is and the point is . (d) ellipse. Ans. 2x = 2. 1 Equation of ? Ellipse in Standard Form - Parametric Equations 4. chord of contact. Find the equation of the tangent line to the ellipse 25 x 2 + y 2 = 109 at the point (2,3). Find equations of both tangent lines to the ellipse x2+4y2= 36 that pass through the point (12,3). 2) is #-1# Figure 3 in the previous section shows the osculating circle and the normal and tangent lines for a point in the first quadrant. Recall that we saw in a previous section how to reparametrize a curve to get it into terms of the arc length. Calculus: Tangent Line Finding the tangent to a point on an ellipse The following is a series of pictures which show how one goes about finding the tangent to an ellipse, or any curve for that matter. If an ellipse is translated [latex]h[/latex] units horizontally and [latex]k[/latex] units vertically, the center of the ellipse will be [latex]\left(h,k\right)[/latex]. 10 A tangent is drawn to the parabola y2 = 4x at the point 'P' whose abscissa lies in the interval [1,4]. Identifying the conic from the general equation of conic Ax 2 + Bxy + Cy 2 + Dx + Ey + F = 0. Now, the ellipse itself is a new set of points. Given an ellipse {eq}\displaystyle \dfrac {x^2}{a^2} + \dfrac {y^2}{b^2} = 1 {/eq}, where {eq}a ot = b {/eq}, find the equation of the set of all points from which there are two tangents to the Find the equations of both of the tangent lines to the ellipse x^2+4y^2=36 that pass through the point (12,3). For a circle, c = 0 so a 2 = b 2 . Then the equation of the ellipse is 2 Area of an Ellipse An axis-aligned ellipse centered at the origin is x a 2 + y b 2 = 1 (1) where I assume that a>b, in which case the major axis is along the x-axis. Number of Normals are Drawn to an Ellipse From a Point to its Plane 5. The variable \(\phi\) is not an angle, and has no geometric interpretation analogous to the eccentric anomaly of an ellipse. Parametric forms . The relation may be written as two functions: Differentiating the function in the upper two positive y quadrants: Oct 29, 2010 · again, for the right-most tangent, you gotta take the greater value of x, which is + 2sqrt(3) the corresponding value for y is -sqrt(3) so the right-most vertical tangent has the equation x = 2sqrt(3) and it touches the ellipse at (2sqrt(3), -sqrt(3)) The equation of pair of tangents would be SS1 = T2, where S is the equation of the ellipse, S 1 is the equation when a point P (h, k) satisfies S, T is the equation of the tangent. 2. 3. 7 121. If we superimpose coordinate axes over this graph, then we can assign ordered pairs to each point on the ellipse (). From this, we can construct a tangent to the ellipse that lies in the plane normal to n: t 1 = N 1 × n = ( P 1 – F 1 ) × n Now, since t 1 is perpendicular to P 1 – F 1 , the dot product of any vector with t 1 will be unchanged if we add or subtract some multiple of P 1 – F 1 to the original vector. 8 Equations of tangent and normal to an ellipse: Theorem: The equation of tangent to the ellipse x 2 + y 2 ‗ 1. Let the tangents at P and D meet ACA' at T and t. 
smaller slope y= larger slope y= cal. parametric representation. Feb 13, 2015 · Show that the tangent lines where the ellipse crosses the X-axis are parallel. To rotate an ellipse about a point (p) other then its center, we must rotate every point on the ellipse around point p, including the center of the Oct 08, 2020 · The tangent line always has a slope of 0 at these points (a horizontal line), but a zero slope alone does not guarantee an extreme point. Log InorSign Up. May 08, 2018 · Here is a set of practice problems to accompany the Ellipses section of the Graphing and Functions chapter of the notes for Paul Dawkins Algebra course at Lamar University. To do this, take a graph and plot the given point and the tangent on that graph. Now, from the center of the circle, measure the perpendicular distance to the tangent line. I won't be deriving the direct common tangents' equations here, as the method is exactly the same as in the previous example. Equation of ellipse : 4x 2 +9y 2 = 36. I am trying to figure out a way to go from 2 coordinate points, each on a 0-180° line, to an ellipse equation. Various Forms of Tangents 5. pair of tangents. 39 22 2 1 2 ae a m b P 1m ±+ = + The ellipse x 2 + 4y 2 = 4 is inscribed in a rectangle aligned with the coordinate axes, which in turn is inscribed in another ellipse that passes through the point (4, 0). The Equations \[x = a \sec E, \quad y = b \tan E \label{2. I converted the given equation to x 2 /36 + y 2 /9 = 1 by dividing each value by 36. L. ( called auxiliary circle) Proof: Equation of the ellipse 2 2 2 2 x y S 1 0 a b ≡ + − = Let P(x 1, y 1) be the foot of the perpendicular drawn from either of the foci to a tangent. An ellipse equation, in conics form, is always "=1". 2. Equation of tangent at vertex: 9: Pair of straight lines , if . Center the curve to remove any linear terms Dx and Ey. 7 120. Let P(x 1, y 1) be a point outside the circle. 4. Equation of ellipse. The equation of the pair of tangents drawn from a point p (x 1, y 1) to the hyperbola is SS 1 = T 2. A variable point P moves such that the chord of contact of the pair of tangent drawn to hyperbola 2 x y 12 16 always parallel to its tangent at (5, 3/4). The area bounded by the ellipse is ˇab. , the parallelogram of tangents at the ends of con jugate diameters is constant in area. Equation of Another definition of an ellipse uses affine transformations: . Jan 10, 2019 · Sol:Use the standard equation of a tangent in terms of m and then proceed accordingly, The general equation of a tangent to the ellipse is y mx a m b=±+22 2…(i) Let the points on the minor axis be P(0,ae) and Q(0, ae)− as b a (1 e)22 2= − Length of the perpendicular from P on (i) is Mathematics | 11. So just like that, by eliminating the parameter t, we got this equation in a form that we immediately were able to recognize as ellipse. Homework Equations The equation of an ellipse is x 2 /a 2 + y 2 /b 2 = 1. by M. Hence, for (x0, y0) ∈ R2 such that f(x0, y0) = 1, an equation of the tangent line to the ellipse f(x, y) = 1 at (x0, y0) is →∇f(x0, y0) ⋅ (x − x0, y − y0) = 0, i. cosec θ = (a² - b²). ) Ans. Solve for f'(x) = 0 to find possible extreme points. This line is taken to be the x axis. For the parabola, the standard form has the focus on the x -axis at the point ( a , 0) and the directrix the line with equation x = − a . Straight Line. Let T (h, k) be nay point on the pair of tangents PQ or PR draw P (x₁, y₁) to the parabola y² = 4ax. 
How do you find the equations of both tangent lines to the ellipse $x^2 + 4y^2 = 36$ that pass through the point (12, 3)? To find the slope of a tangent line, differentiate the ellipse equation implicitly with respect to $x$: $2x + 8y\,\frac{dy}{dx} = 0$, so $\frac{dy}{dx} = -\frac{x}{4y}$; if $(x_0, y_0)$ is a point on the ellipse, the tangent there has slope $-x_0/(4y_0)$. (A worked solution of this particular exercise is sketched in the code below.)

Standard results on tangents to conics that recur in the exercises collected here:
- Tangency condition. For the ellipse $b^2x^2 + a^2y^2 = a^2b^2$ (equivalently $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$), the tangent lines of slope $m$ are $y = mx \pm \sqrt{a^2m^2 + b^2}$; a line $y = mx + c$ is tangent exactly when $c^2 = a^2m^2 + b^2$. The condition follows by substituting the line into the ellipse equation and requiring the resulting quadratic to have equal roots. For the hyperbola $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$ the analogous condition is $c^2 = a^2m^2 - b^2$.
- Chord of contact and pair of tangents. If tangents drawn from an external point $P(x_1, y_1)$ touch the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ at $A$ and $B$, the chord of contact $AB$ has equation $\frac{xx_1}{a^2} + \frac{yy_1}{b^2} = 1$. The chord whose middle point is $(x_1, y_1)$ satisfies $T = S_1$, and the pair of tangents from $P$ satisfies $SS_1 = T^2$; worked answers quoted in the exercises include pairs such as $x^2 + 2xy + y^2 - 4x + 4y = 0$, $x^2 + 2xy + y^2 + x - y - 1 = 0$ and $24xy - 7y^2 - 1 = 0$. The line joining $P(h, k)$ to a point $(x_1, y_1)$ is $y - y_1 = \frac{k - y_1}{h - x_1}(x - x_1)$.
- Director circle and related loci. Since the product of the slopes of two perpendicular lines is $-1$, the locus of the intersection of tangents at right angles (the director circle) can be found directly. Related exercises: the chords of contact of tangents drawn from each point of the line $2x + y = 4$ to the circle $x^2 + y^2 = 1$ pass through the fixed point $(1/2, 1/4)$; if two tangents to the parabola $y^2 = 4ax$ from a point $P$ make angles $\theta_1$ and $\theta_2$ with the axis and $\tan^2\theta_1 + \tan^2\theta_2$ is a given constant, the locus of $P$ is required; another question asks for the angle between the pair of tangents drawn from (1, 2) to the ellipse $3x^2 + 2y^2 = 5$.
- Auxiliary circle and eccentric angles. If the tangent at the point $\alpha$ of a standard ellipse meets the auxiliary circle in two points that subtend a right angle at the centre, the eccentricity satisfies $e = (1 + \sin^2\alpha)^{-1/2}$. If $\alpha, \beta, \gamma, \delta$ are the eccentric angles of four concyclic points on an ellipse, then $\alpha + \beta + \gamma + \delta = 2n\pi$.
- Conjugate diameters and the evolute. A pair of diameters is conjugate if each is parallel to the tangents at the ends of the other. The normal line to the ellipse is a tangent line to its evolute, a property which leads to an alternative way to define the evolute of a curve.
- Circles. The general equation is $x^2 + y^2 + 2gx + 2fy + c = 0$. Two circles that touch externally have three common tangents; common tangents, centres of similitude, the chord of contact and the equation of the pair of tangents from an external point are treated together in coordinate geometry.
- Affine and projective viewpoint. An affine transformation of the Euclidean plane has the form $\vec{x} \mapsto A\vec{x} + \vec{b}$, where $A$ is a regular (non-singular) matrix and $\vec{b}$ is an arbitrary vector. A parabola is an ellipse that is tangent to the line at infinity $\Omega$, and a hyperbola is an ellipse that crosses $\Omega$. The general conic $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$ requires at least one of $A$, $B$, $C$ to be nonzero. The eccentricity of a circle is 0, and an ellipse is the set of points in the plane whose distances to the two foci sum to a constant. Ellipses can also be found and fitted to data using points and tangents, rather than points alone, as the basic unit of information, and an ellipse can be written in matrix form (for example one centred at the origin, rotated by 45 degrees, with semi-major and semi-minor axis lengths 2 and 1).
- Small worked fragments quoted in the exercises: dividing $16x^2 + 25y^2 = 400$ by 400 gives $\frac{x^2}{25} + \frac{y^2}{16} = 1$, so $a = 5$ and $b = 4$; reading off $a^2 = 9$ and $b^2 = 4$ from an ellipse in standard form; a tangent parallel to the $x$-axis has slope 0, so for $y = x^2 - 2x - 3$ it occurs where $2x - 2 = 0$; other questions ask for the tangent to the parabola $y^2 = 3x$ at (12, 6), the tangent to $\frac{x^2}{9} + \frac{y^2}{36} = 1$ at $(-1, 4\sqrt{2})$, the tangent to an ellipse at (2, 2), and the tangents of an ellipse that are parallel to the coordinate axes, with answers such as $3x + y + 2 = 0$ and loci such as $y^2 - x^2 - 2xy - 6x + 2y = 1$ arising in related problems involving curves like $x^2 + 3y^2 - 4x - 18y + 4 = 0$.
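As a worked illustration of the first exercise above (the tangent lines to $x^2 + 4y^2 = 36$ through the external point (12, 3)), the tangency condition can be checked symbolically. The snippet below is our own illustrative sketch, assuming sympy is available; it is not part of the quoted material. A line of slope $m$ through (12, 3) is tangent exactly when substituting it into the ellipse gives a quadratic in $x$ with zero discriminant.

```python
import sympy as sp

x, m = sp.symbols('x m', real=True)

# Line of slope m through the external point (12, 3): y = m*(x - 12) + 3
line_y = m * (x - 12) + 3

# Substitute into the ellipse x^2 + 4y^2 = 36; tangency means a double root in x,
# i.e. the discriminant of the resulting quadratic vanishes
quadratic = sp.expand(x**2 + 4 * line_y**2 - 36)
slopes = sp.solve(sp.Eq(sp.discriminant(quadratic, x), 0), m)

for slope in slopes:
    print(f"m = {slope}:  y = {sp.simplify(slope * (x - 12) + 3)}")
```

Running this yields $m = 0$ and $m = 2/3$, i.e. the tangent lines $y = 3$ and $y = \frac{2}{3}x - 5$.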
Sentiment in global financial markets was once again largely affected by the developments of Russia's invasion of Ukraine and by Federal Reserve policy. The declarations of Fed Governor Lael Brainard on Tuesday and the release of the minutes of the mid-March Federal Open Market Committee meeting on Wednesday were followed by a sharp pullback in the stock market. Governor Brainard, considered among the most dovish policymakers, announced that the Federal Reserve will start its quantitative tightening as soon as May. The Fed minutes revealed the intention of reducing the Central Bank's balance sheet by $60bn in Treasuries and $35bn in MBS per month, for a total of $95bn, which is higher than the $70-$90bn consensus expectation. This plan would shrink the Federal Reserve balance sheet by more than $1tn per year. In addition, policymakers are determined to raise rates by 50bps in May, with futures pricing in a Fed range between 2.50% and 2.75%. This led to the 10yr yield rising to its highest levels since the start of 2019, with the 2-10yr yield curve steepening after inverting briefly on the 1st of April. Despite the fears of a possible future recession, we must highlight that the "near-term forward spread", which has proven to be the timeliest indicator of an economic decline, has remained far from negative. On Wednesday, the U.S. also imposed more sanctions on Russia, blocking two of its largest banks, Sberbank and Alfa Bank, respectively Russia's largest financial institution and its largest private bank. The additional sanctions have intensified worries about inflation, keeping buyers on the sidelines. This severe reaction came after reports alleging Russian war crimes against Ukraine's civilians. Focusing on equities, all the major indexes closed the week in negative territory, giving back some of the gains from the move that started around mid-March, which had managed to bring the Nasdaq out of bear market territory and the S&P 500 to less than 7% from its all-time highs. Growth stocks and small caps have shown relative weakness with respect to defensive stocks, as reflected in their year-to-date performance. In particular, money has rotated into Consumer Staples, Utilities and Health Care. At the end of the week, trading volumes were moderate, with market participants cautiously awaiting the start of the Q1 earnings season. Earnings expectations heading into next week, as surveyed by FactSet, average 4.5% (YoY) growth for companies in the S&P 500, which would mark the first time in the last two years that earnings growth has fallen short of 10%. It's important to mention that weekly jobless claims considerably beat expectations, falling to their lowest levels since 1968 and signalling a resilient economy in the face of the threats of the current global scenario. Finally, it's interesting to observe how Twitter shares rose 27% on Monday, following the announcement of Elon Musk's 9.2% stake in the company. The business magnate was appointed to Twitter's board of directors the following day, accompanied by some controversy around the disclosure of his stake. Europe's stocks saw small gains, affected by concerns around inflation, Russia, and quantitative tightening by the European Central Bank. Among the major economies, only U.K. shares showed a positive performance, while Germany, France and Italy's major indexes lagged behind.
The STOXX Europe 600 climbed 1.3% on Friday, but analysts are warning that traders may not yet have priced in the risks of the upcoming election in France. Emmanuel Macron's lead over his rival Marine Le Pen has been narrowing significantly in recent weeks, and the perceived probability of Macron losing the election is at its highest level. The week started with the re-election of two European leaders, both closely allied to Vladimir Putin: the Hungarian Prime Minister Viktor Orban, now at his fourth mandate, and the Serbian President Aleksandar Vucic, who won a second term. On Thursday, the minutes of the ECB's March meeting were more hawkish than expected, with many policymakers expressing the need for a normalization of monetary policy. While the European Central Bank remains very cautious, especially given the uncertainty of the war, 10yr bond yields climbed. On the same day, European diplomats signed off on an agreement to ban coal imports from Russia, as well as its ships and trucks entering the European Union, aiming at one of Russia's key sources of revenue. The action was coordinated with representatives of the U.S. and U.K., and the plan will come into effect around mid-August. Europe is also blocking new machinery exports, while continuing to target the assets of Russian oligarchs and Putin's two daughters. Meanwhile, there hasn't been any significant breakthrough in the ceasefire talks between Ukraine and Russia. On Friday Russia's Central Bank unexpectedly cut interest rates from 20% to 17% to alleviate the effects of the western sanctions on the national economy, justifying the cut partly by the recent rebound of the rouble, which has apparently eased inflationary pressures. Finally, labour shortages are worsening in the U.K., and workers' increasing bargaining power has pushed growth in the average salary awarded to new joiners to its highest level since polling began in October 1997, reflecting not only demand from employers but also rising prices. Notwithstanding easing border restrictions, Japan's major indexes fell over the week on the expected impact of a hawkish Fed and inflation. The Japanese yen further depreciated against the U.S. dollar, closing on Friday at its worst levels since 2015. Bank of Japan Governor Haruhiko Kuroda commented on the matter, stressing the importance of exchange-rate stability given its impact on an economy that is still recovering, and remained committed to the BoJ's quantitative easing, targeting a 2% inflation rate. In addition, the IMF has revised downwards its projected economic growth for Japan in 2022 from 3.4% to 2.4% YoY. On a more positive note, the Tokyo Stock Exchange reform could potentially improve companies' governance and value creation, increasing the TSE's attractiveness to capital. China's President Xi Jinping is currently facing one of his biggest challenges since he took power in 2012, as anxiety is growing due to Shanghai's Covid lockdown. The Communist party sees the Covid Zero policy as essential to keep saving lives and guarantee the future growth of the economy. However, Nomura estimated that the 23 cities currently affected by lockdowns account for 13.5% of the Chinese economy. In addition, China's latest PMI readings indicate that the country's manufacturing and services sectors are deteriorating, with lockdowns halting production and causing labor issues.
What's more, China's 10yr yield declined by 19bps for the week, with its premium over U.S. Treasuries almost entirely disappearing for the first time since 2012. Global funds continue to reduce their holdings of Chinese sovereign debt. Frictions between China and the U.S. persist, with China refusing to condemn Russia's invasion of Ukraine. Sanctions could arrive from the United States should China materially support Putin. Mexico's IPC Index and Brazil's Bovespa experienced large losses, as the central banks of Latin America prepare to hike rates at a faster pace than planned. Brazil, Chile, Mexico and Peru are all experiencing rising prices, and even though Brazil's and Mexico's central bank rates are as high as 11.75% and 6.5% respectively, they have failed to bring down prices. Chile's President Gabriel Boric announced on Tuesday the national Inclusive Recovery Plan, aimed at supporting the recovery of the economy from the pandemic and cushioning the effect of inflation on citizens. Considering the country's fiscal deficit and inflation, a fiscal expansion could worsen the situation; however, the amount of the package could be covered by the revenues of an imminent tax reform. Turkey's BIST 100 experienced a weekly gain of 6.29%, while the country's central bank maintains a loose, expansionary monetary policy that seems completely disconnected from the rest of the world. The country is experiencing skyrocketing inflation, while the Lira remains under heavy pressure. Regarding Australia, the Reserve Bank of Australia has decided to maintain the cash rate target at 10bps and the interest rate on Exchange Settlement balances at zero percent, supporting the country's growth. The decision reflects the resilience of the Australian economy: national income is being boosted by rising commodity prices, unemployment is falling, and inflation has increased but remains under control.

FX and commodities

The US dollar started the week with a mixed performance, but buyers jumped in after the hawkish announcements of the FOMC minutes, with the Canadian and Australian dollars following as the second and third strongest currencies. The worst performer was the Euro, followed by the Japanese Yen, which is at its weakest levels since 2015. The Pound Sterling and the Swiss Franc were more stable, as they didn't lose much ground against the US dollar. The performance of the Euro doesn't reflect the unexpectedly hawkish position taken by some policymakers at the ECB's March meeting. However, it's important to keep in mind that the ECB is still lagging behind other major central banks in its quantitative tightening, and that the ongoing conflict still leaves a highly uncertain situation. WTI Crude Oil closed the week under $100 per barrel, retracing for the second consecutive week and closing around $98. The pullback was driven by the US announcement at the end of March of the release of 1 million barrels per day from the Strategic Petroleum Reserve, combined with demand worries from China. Nonetheless, it has remained slightly above pre-war levels. On the other hand, the Brent Crude Oil price per barrel is still above $100.

Next week's main events

Next week U.S. investors will be paying attention to the start of the Q1 earnings season. In addition, data that could significantly impact the markets is being released: the Consumer Price Index (CPI), the Producer Price Index (PPI), and the Import and Export Price Indexes. Those will define the inflation situation ahead of the Fed's May 4 meeting.
In Europe, the attention will be concentrated on the ECB Monetary Policy meeting in Frankfurt on the 14th of April. There are also many CPI reports due, from China, Japan, Germany, the UK, France, Italy and India. Overall, it is an intense week with many data reports to keep a close eye on.

Brain Teaser #22

1972 USAMO P3. A random number selector can only select one of the nine integers 1, 2, …, 9, and it makes these selections with equal probability. Determine the probability that after n selections (n>1), the product of the n numbers selected will be divisible by 10. For the product to be divisible by 10, it must contain both a factor of 2 and a factor of 5. We can start by computing the probabilities of the following events over the $n$ selections: 1) no factor of 2 and no factor of 5, i.e., we select one of the numbers 1, 3, 7, 9 each time, with probability $(\frac{4}{9})^n$; 2) no factor of 5, i.e., we never select 5, with probability $(\frac{8}{9})^n$; 3) no factor of 2, i.e., we select only odd numbers, with probability $(\frac{5}{9})^n$. By inclusion-exclusion, the probability that the product is not divisible by 10 is $(\frac{8}{9})^n + (\frac{5}{9})^n - (\frac{4}{9})^n$, so the answer is $1 - (\frac{8}{9})^n - (\frac{5}{9})^n + (\frac{4}{9})^n$. (A quick numerical check of this formula is sketched in the code below.)

1964 IMO P4. Seventeen people correspond by mail with one another – each one with all the rest. In their letters only three different topics are discussed. Each pair of correspondents deals with only one of these topics. Prove that there are at least three people who write to each other about the same topic.

Tags: China, Europe & UK, Japan, Market Recap, USA
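The USAMO answer above can be checked numerically. The following is a minimal sketch in Python (the function names are ours, purely illustrative): it evaluates the inclusion-exclusion expression exactly with fractions and compares it against brute-force enumeration for small $n$.

```python
from fractions import Fraction
from itertools import product
from math import prod

def p_divisible_by_10(n):
    """P(product of n uniform draws from 1..9 is divisible by 10), via inclusion-exclusion."""
    no5 = Fraction(8, 9) ** n       # no draw equals 5
    no2 = Fraction(5, 9) ** n       # every draw is odd (1, 3, 5, 7, 9)
    neither = Fraction(4, 9) ** n   # every draw lies in {1, 3, 7, 9}
    return 1 - no5 - no2 + neither

def brute_force(n):
    """Exact check by enumerating all 9**n equally likely outcomes (small n only)."""
    hits = sum(prod(draw) % 10 == 0 for draw in product(range(1, 10), repeat=n))
    return Fraction(hits, 9 ** n)

for n in (2, 3, 4):
    assert p_divisible_by_10(n) == brute_force(n)
    print(n, p_divisible_by_10(n), float(p_divisible_by_10(n)))
```

For $n = 2$ both approaches give $8/81$, as expected.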
Advanced Imaging Applications for Locally Advanced Cervical Cancer Petsuksiri, Janjira;Jaishuen, Atthapon;Pattaranutaporn, Pittayapoom;Chansilpa, Yaowalak 1713 https://doi.org/10.7314/APJCP.2012.13.5.1713 Advanced imaging approaches (computed tomography, CT; magnetic resonance imaging, MRI; $^{18}F$-fluorodeoxyglucose positron emission tomography, FDG PET) have increased roles in cervical cancer staging and management. The recent FIGO (International Federation of Gynecology and Obstetrics) recommendations encouraged applications to assess the clinical extension of tumors rather than relying on clinical examinations and traditional non-cross-sectional investigations. MRI appears to be better than CT for primary tumors and adjacent soft tissue involvement in the pelvis. FDG-PET/CT has increased in usage, with a particular benefit for whole body evaluation of tumor metabolic activity. The potential benefits of advanced imaging are to assist selection of treatment based upon actual disease extent, to adequately treat a tumor with minimal normal tissue complications, and to predict treatment outcomes. Furthermore, sophisticated external radiation treatment and brachytherapy absolutely require advanced imaging for target localization and radiation dose calculation. WAVEs: A Novel and Promising Weapon in the Cancer Therapy Tool Box Sakthivel, K.M.;Prabhu, V. Vinod;Guruvayoorappan, C. 1719 The Wiskott-Aldrich Syndrome Protein family Verprolin-homologous proteins (WAVEs), encoded by a metastasis promoter gene, play considerable roles in adhesion of immune cells, cell proliferation, migration and destruction of foreign agents by reactive oxygen species. These diverse functions have led to the hypothesis that WAVE proteins have multi-functional roles in regulating cancer invasiveness, metastasis, development of tumor vasculature and angiogenesis. Differentials in expression of WAVE proteins are associated with a number of neoplasms, including colorectal cancer, hepatocellular cancer, lung squamous cell carcinoma, human breast adenocarcinoma and prostate cancer. In this review we attempt to unify our knowledge regarding WAVE proteins, focusing on their potentials as diagnostic markers and molecular targets for cancer therapy. Research Progress in Potential Urinary Markers for the Early Detection, Diagnosis and Follow-up of Human Bladder Cancer Wang, Hai-Feng;Wang, Jian-Song 1723 Objective: To summarize and evaluate various urinary markers for early detection, diagnosis and follow-up of human bladder cancer. Methods: A MEDLINE and PUBMED search of the latest literature on urinary markers for bladder cancer was performed. We reviewed these published reports and made a critical analysis. Results: Most urinary markers tend to be less specific than cytology, yielding more false-positive results, but demonstrating an advantage in terms of sensitivity, especially for detecting low grade, superficial tumors. Some tumor markers appear to be good candidates for early detection, diagnosis, and follow-up of human bladder cancer. Conclusion: A number of urinary markers are currently available that appear to be applicable for clinical detection, diagnosis, and follow-up of bladder cancer. However, further studies are required to determine their accuracy and widespread applicability.
Recent Candidate Molecular Markers: Vitamin D Signaling and Apoptosis Specific Regulator of p53 (ASPP) in Breast Cancer Patel, Jayendra B.;Patel, Kinjal D.;Patel, Shruti R.;Shah, Franky D.;Shukla, Shilin N.;Patel, Prabhudas S. 1727 Regardless of advances in treatment modalities with the invention of newer therapies, breast cancer remains a major health problem with respect to its diagnosis, treatment and management. This female malignancy with its tremendous heterogeneous nature is linked to high incidence and mortality rates, especially in developing regions of the world. It is a malignancy composed of distinct biological subtypes with diverse clinical, pathological, molecular and genetic features as well as different therapeutic responsiveness and outcomes. This inconsistency can be partially overcome by finding novel molecular markers with biological significance. In recent years, newer technologies have helped us to identify distinct biomarkers and increase our understanding of the molecular basis of breast cancer. However, certain issues need to be resolved that limit the application of gene expression profiling to current clinical practice. Despite the complex nature of gene expression patterns of cDNAs in microarrays, there are some innovative regulatory molecules and functional pathways that allow us to predict breast cancer behavior in the clinic and provide new targets for breast cancer treatment. This review describes the landscape of different molecular markers with a particular spotlight on the vitamin D signaling pathway and apoptotic specific protein of p53 (ASPP) family members in breast cancer. Macrophage Migration Inhibitory Factor: a Potential Marker for Cancer Diagnosis and Therapy Babu, Spoorthy N.;Chetal, Gaurav;Kumar, Sudhir 1737 Macrophage migration inhibitory factor (MIF) is a pluripotent cytokine which plays roles in inflammation, immune responses and cancer development. It assists macrophages in carrying out functions like phagocytosis, adherence and motility. Of late, MIF is implicated in almost all stages of neoplasia and expression is a feature of most types of cancer. The presence of MIF in almost all tumors and all stages of cancer makes it an interesting candidate for cancer therapy. This review explores the roles of MIF in neoplasia. Lack of Association Between Helicobacter pylori Infection and Oral Lichen Planus Pourshahidi, Sara;Fakhri, Farnaz;Ebrahimi, Hooman;Fakhraei, Bahareh;Alipour, Abbas;Ghapanchi, Janan;Farjadian, Shirin 1745 Oral lichen planus is a premalignant chronic inflammatory mucosal disorder with unknown etiology. It is a multifactorial disease and, in addition to genetic background, infections, stress and drug reactions are suggested as risk factors. Helicobacter pylori, which is involved in the development of many gastrointestinal lesions, may also be implicated in oral lichen planus induction. This is of clear importance for cancer prevention and the present study was performed to determine any association between H. pylori infection and oral lichen planus in southwestern Iran. Anti H. pylori IgG levels were determined in 41 patients and 82 sex- and age-matched controls. The results showed no association between H. pylori infection and oral lichen planus (51% in patients vs. 66% in controls) or any of its clinical presentations.
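Case-control comparisons like the seroprevalence figures in the abstract above (roughly 51% of 41 patients vs. 66% of 82 controls) are typically tested with a chi-square or Fisher exact test on the 2x2 table. The sketch below is purely illustrative and assumes scipy is available; the cell counts are reconstructed from the rounded percentages, so it is an approximation rather than a re-analysis of the study.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: oral lichen planus patients, healthy controls
# Columns: H. pylori IgG positive, negative
# Counts reconstructed from 51% of 41 and 66% of 82 (approximate, illustration only)
table = np.array([[21, 20],
                  [54, 28]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Fisher's exact test is preferable when expected cell counts are small
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher OR = {odds_ratio:.2f}, exact p = {p_exact:.3f}")
```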
No Association Between the USP7 Gene Polymorphisms and Colorectal Cancer in the Chinese Han Population Li, Xin;Wang, Yang;Li, Xing-Wang;Liu, Bao-Cheng;Zhao, Qing-Zhu;Li, Wei-Dong;Chen, Shi-Qing;Huang, Xiao-Ye;Yang, Feng-Ping;Wang, Quan;Wang, Jin-Fen;Xiao, Yan-Zeng;Xu, Yi-Feng;Feng, Guo-Yin;Peng, Zhi-Hai;He, Lin;He, Guang 1749 Colorectal cancer (CRC), now the third most common cancer across the world, is known to aggregate in families. USP7 is a very important protein with an important role in regulating the p53 pathway, which is critical for genomic stability and tumor suppression. We here genotyped eight SNPs within the USP7 gene and conducted a case-control study in 312 CRC patients and 270 healthy subjects in the Chinese Han population. No significant associations were found for any single SNP and CRC risk. Our data eliminate USP7 as a potential candidate gene towards for CRC in the Han Chinese population. Evaluation of Health Education in the Multi-professional Intervention and Training for Ongoing Volunteer-based Community Health Programme in the North-East of Thailand Promthet, Supannee;Wiangnon, Surapon;Senarak, Wiporn;Saranrittichai, Kesinee;Vatanasapt, Patravoot;Kamsa-ard, Supot;Wongphuthorn, Prasert;Kasinpila, Chananya;Moore, Malcolm Anthony 1753 This was a survey research conducted in Northestern Thailand during 2009-2010 and designed to evaluate the success of a health education program by comparing levels of health knowledge in the community before and after the launching of a Multi-professional Intervention and Training for Ongoing Volunteer-based Community Health Programme. The survey questionnaire included items about demographic characteristics and health knowledge. The participants were 1,015 members of various communities, who were randomly selected to be included in the survey before launching the intervention, and 1,030 members of the same communities randomly selected to be included in the survey after the intervention was completed. The demographic characteristics of both groups were similar. Overall knowledge and knowledge of all the diseases, except lung and cervical cancer, were significantly higher after the intervention. In conclusion, a Volunteer-based Community Health Programme has advantages for areas where the numbers of health personnel are limited. The use of trained community health volunteers may be one of the best sustainable alternative means for the transfer of health knowledge. Down-Regulation of CYP1A1 Expression in Breast Cancer Hafeez, S.;Ahmed, A.;Rashid, Asif Z.;Kayani, Mahmood Akhtar 1757 Breast cancer is a major cause of death in women worldwide. Mammary tissue expressing xenobiotic metabolizing enzymes metabolically activate or detoxify potential genotoxic breast carcinogens. Deregulation of these xenobiotic metabolizing enzymes is considered to be a major contributory factor to breast cancer. The present study is focused on the expression of the xenobiotic metabolizing gene, CYP1A1, in breast cancer and its possible relationships with different risk factors. Twenty five tumors and twenty five control breast tissue samples were collected from patients undergoing planned surgery or biopsy from different hospitals. Semi-quantitative reverse transcriptase polymerase chain reaction (RT-PCR) and western-blotting were used to investigate the expression of CYP1A1 in breast cancer control and disease samples. mRNA expression of CYP1A1 was down-regulated in 40% of breast tumor samples. Down-regulation was also observed at the protein level. 
Significant relations were noted with marital status and tumour grade but not histopathological type. In conclusion, CYP1A1 protein expression was markedly reduced in breast tumor tissue samples as compared to paired control tissue samples. Repeat Colonoscopy Every 10 Years or Single Colonoscopy for Colorectal Neoplasm Screening in Average-risk Chinese: A Cost-effectiveness Analysis Wang, Zhen-Hua;Gao, Qin-Yan;Fang, Jing-Yuan 1761 Background: The appropriate interval between negative colonoscopy screenings is uncertain, but the numbers of advanced neoplasms 10 years after a negative result are generally low. We aimed to evaluate the cost-effectiveness of colorectal neoplasm screening and management based on repeat screening colonoscopy every 10 years or single colonoscopy, compared with no screening in the general population. Methods and materials: A state-transition Markov model simulated 100,000 individuals aged 50-80 years accepting repeat screening colonoscopy every 10 years or single colonoscopy, offered to every subject. Colorectal adenomas found during colonoscopy were removed by polypectomy, and the subjects were followed with surveillance every three years. For subjects with a normal result, colonoscopy was resumed within ten years in the repeat screening strategy. In the single screening strategy, the screening process was then terminated. Direct costs such as screening tests, cancer treatment and costs of complications were included. Indirect costs were excluded from the model. The incremental cost-effectiveness ratio was used to evaluate the cost-effectiveness of the different screening strategies. Results: Assuming a first-time compliance rate of 90%, repeat screening colonoscopy and single colonoscopy can reduce the incidence of colorectal cancer by 65.8% and 67.2% respectively. The incremental cost-effectiveness ratio for single colonoscopy (49 Renminbi Yuan [RMB]) was much lower than that for repeat screening colonoscopy (474 RMB). Single colonoscopy was a more cost-effective strategy, which was not sensitive to the compliance rate of colonoscopy and the cost of advanced colorectal cancer. Conclusion: Single colonoscopy is suggested to be the more cost-effective strategy for screening and management of colorectal neoplasms and may be recommended in Chinese clinical practice. Clinical, Endoscopic and Pathological Characteristics of Early-Onset Colorectal Cancer in Vietnamese Quach, Duc Trong;Nguyen, Oanh Thuy 1767 Background: The Asia Pacific consensus for colorectal cancer (CRC) recommends that screening programs should begin by the age of 50. However, there have been reports about increasing incidence of CRC at a younger age (i.e. early-onset CRC). Little is known about the features of early-onset CRC in the Vietnamese population. Aim: To describe the clinical, endoscopic and pathological characteristics of early-onset CRC in Vietnamese. Method: A prospective, cross-sectional study was conducted at the University Medical Center from March 2009 to March 2011. All patients with definite pathological diagnosis of CRC were recruited. The early-onset CRC group was analyzed in comparison with the late-onset (i.e. ${\geq}$ 50-year-old) CRC group. Results: The rate of early-onset CRC was 28% (112/400) with a male-to-female ratio of 1.3. Some 22.3% (25/112) of the patients only experienced abdominal pain and/or change in bowel habit without alarming symptoms, 42.9% (48/112) considering their symptoms intermittent.
The rate of familial history of CRC in early-onset group was significantly higher that of the late-onset group (21.4% versus 7.6%, p<0.001). The distribution of CRC lesions in rectum, distal and proximal colon were 51.8% (58/112), 26.8% (30/112) and 21.4% (24/112), respectively; which was not different from that in the late-onset group (${\chi}2$, p = 0.29). The rates for poorly differentiated tumors were also not significantly different between the two groups: 12.4% (14/112) versus 8.3% (24/288) (${\chi}2$, p = 0.25). Conclusion: A high proportion of CRC in Viet Nam appear at an earlier age than that recommended for screening by the Asia Pacific consensus. Family history was a risk factor of early-onset CRC. Diagnosis of early-onset CRC needs more attention because of the lack of alarming symptoms and their intermittent patterns as described by the patients. Recurrence after Anatomic Resection Versus Nonanatomic Resection for Hepatocellular Carcinoma: A Meta-analysis Ye, J.Z.;Miao, Z.G.;Wu, F.X.;Zhao, Y.N.;Ye, H.H.;Li, L.Q. 1771 The impact of anatomic resection (AR) as compared to non-anatomic resection (NAR) for hepatocellular carcinoma (HCC) as a factor for preventing intra-hepatic and local recurrence after the initial surgical procedure remains controversial. A systematic review and meta-analysis of nonrandomized trials comparing anatomic resection with non-anatomic resection for HCC published from 1990 to 2010 in PubMed and Medline, Cochrane Library, Embase, and Science Citation Index were therefore performed. Intra-hepatic recurrence, including early and late, and local recurrence were considered as primary outcomes. As secondary outcomes, 5 year survival and 5 year disease-free survival were considered. Pooled effects were calculated utilizing either fixed effects or random effects models. Eleven non-randomized studies including 1,576 patients were identified and analyzed, with 810 patients in the AR group and 766 in the NAR group. Patients in the AR group were characterized by lower prevalence of cirrhosis, more favorable hepatic function, and larger tumor size and higher prevalence of macrovascular invasion compared with patients in the NAR group. Anatomic resection significantly reduced the risks of local recurrence and achieved a better 5 years disease-free survival. Also, anatomic resection was marginally effective for decreasing the early intra-hepatic recurrence. However, it was not advantageous in preventing late intra-hepatic recurrence compared with non-anatomic resection. No differences were found between AR and NAR with respect to postoperative morbidity, mortality, and hospitalization. Anatomic resection can be recommended as superior to non-anatomic resection in terms of reducing the risks of local recurrence, early intra-hepatic recurrence and achieving a better 5 year disease-free survival in HCC patients. Gastric Precancerous Lesions in First Degree Relatives of Patients with Known Gastric Cancer: a Cross-Sectional Prospective Study in Guilan Province, North of Iran Mansour-Ghanaei, Fariborz;Joukar, Farahnaz;Baghaei, Seyed Mohammad;Yousefi-Mashhoor, Mahmood;Naghipour, Mohammad Reza;Sanaei, Omid;Naghdipour, Misa;Shafighnia, Shora;Atrkar-Roushan, Zahra 1779 Background & Objectives: In patients with gastric cancer, the most frequently reported family history of cancer also involves the stomach. 
The aim of this study was to assess the presence of gastric precancerous lesions in first-degree relatives of patients with gastric cancer and to compare the obtained results with those of individuals with no such family history. Methods: Between 2007 and 2009, 503 consecutive persons more than 30 years old were enrolled in the study covering siblings, parents or children of patients with confirmed adenocarcinoma of stomach. The control group was made up of 592 patients who were synchronously undergoing upper gastrointestinal endoscopy for evaluation of dyspepsia without gastric cancer or any family history. All subjects were endoscopically examined. Results: The overall prevalence of Helicobacter pylori was 77.7% in the cancer relatives and in 75.7% in the control group. Chronic gastritis was found in 90.4% vs. 81.1% (P<0.001). Regarding histological findings, 37(7.4%) of the study group had atrophy vs. 12(1.7%) in the control group (P<0.001), while no difference was observed for intestinal metaplasia (20.3%vs. 21.6%, P=0.58). Dysplasia were shown in 4% of cancer relatives but only 0.4% of the control group (P<0.001). There was no gender specificity. Conclusions: Findings of our study point to great importance of screening in relatives of gastric cancer patients in Iran. Obviously Increasing Incidence Trend for Males but Stable Pathological Proportions for Both Genders: Esophageal Cancer in Zhongshan of China from 1970-2007 Wei, Kuan-Grong;Liang, Zhi-Heng 1783 Objectives: To analyze esophageal cancer incidence and pathological data of Zhongshan in China in 1970-2007, and to provide scientific information for its prevention and control. Methods: From Zhongshan Cancer Registry esophageal cancer incident and pathological data were obtained. Pathological proportions and trends were calculated and analyzed. Results: Although there was a continuously and obviously increasing trend for male incidence rates in 1970-2007 in Zhongshan, squamous cell carcinoma (SCC) and adenocarcinoma (AD) incident proportions during 1990-2007 remained relatively stable. Moreover, SCC was the major pathological type, accounting for 70.6 percent of all new cases, while AD were relatively few and accounted for only 2.66 percent throughout the period. Conclusion: The male esophageal cancer incident pattern in Zhongshan in 1970-2007 was quite different from most other domestic areas. The data suggest that etiological analysis should be enhanced for improved control in Zhongshan. Updated Meta-analysis of the TP53 Arg72Pro Polymorphism and Gastric Cancer Risk Xiang, Bin;Mi, Yuan-Yuan;Li, Teng-Fei;Liu, Peng-Fei 1787 Objective: The p53 tumor suppressor pathway plays an important role in gastric cancer (GC) development. Auto-regulatory feedback control of p53 expression is critical to maintaining proper tumor suppressor function. So far, several studies between p53 Arg72Pro polymorphism and GC have generated controversial and inconclusive results. Methods: To better assess the purported relationship, we performed a meta-analysis of 19 publications. Eligible studies were identified by searching the Pubmed database. Odds ratios (ORs) with 95% confidence intervals (CIs) were estimated to assess any link. Results: Overall, a significant association was detected between the p53 Arg72Pro polymorphism and GC risk (Pro-allele vs. Arg-allele: OR = 1.05, 95%CI = 1.01-1.08; Pro/Pro vs. Arg/Arg: OR = 1.13, 95%CI = 1.04-1.22). 
Moreover, on stratified analysis by race, significantly increased risk was found for Asian populations (Pro-allele vs. Arg-allele: OR = 1.06, 95%CI = 1.02-1.10; Pro/Pro vs. Arg/Arg: OR = 1.16, 95%CI = 1.07-1.26; Pro/Pro+Pro/Arg vs. Arg/Arg: OR = 1.58, 95%CI = 1.09-2.27). Conclusions: Our study provided evidence that the p53 72Pro allele may increase GC risk in Asians. Future studies with larger sample size are warranted to further confirm this association in more detail. Neurotrophic Artemin Promotes Motility and Invasiveness of MIA PaCa-2 Pancreatic Cancer Cells Meng, Ling-Xin;Chi, Yu-Hua;Wang, Xiang-Xu;Ding, Zhao-Jun;Fei, Li-Cong;Zhang, Hong;Mou, Ling;Cui, Wen;Xue, Ying-Jie 1793 Objective: To analyze the capacity of neurotrophic artemin to promote the motility and invasiveness of MIA PaCa-2 pancreatic cancer cells. Methods: MIA PaCa-2 cells were cultured in vitro and studied using transwell chambers for motility and invasiveness on treatment with different concentrations of artemin; effects mediated by its receptor $GFR{\alpha}3$ were also determined. Expression of matrix metalloproteinase-2 (MMP-2) and epithelial cadherin (E-cadherin) was quantified using RT-PCR and Western blotting. Results: MIA PaCa-2 pancreatic cancer cell motility and invasiveness was significantly increased with artemin and its receptor $GFR{\alpha}3$ with dose dependence (P<0.01). MMP-2 production was also significantly increased (t = 6.35, t = 7.32), while E-cadherin was significantly lowered (t = 4.27, t = 5.61) (P <0.01). Conclusion: Artemin and its receptor $GFR{\alpha}3$ can promote pancreatic cancer cell motility and invasiveness and contribute to aggressive behavior. The mechanism may be related to increased expression of MMP-2 molecule and down-regulation of E-cadherin expression. Predictive Value of Excision Repair Cross-complementing Rodent Repair Deficiency Complementation Group 1 and Ovarian Cancer Risk He, Shan-Yang;Xu, Lin;Niu, Gang;Ke, Pei-Qi;Feng, Miao-Miao;Shen, Hong-Wei 1799 Objective: We aimed to analyze the association between excision repair cross-complementing rodent repair deficiency complementation group 1 (ERCC1) and ovarian cancer risk. Methods: We performed a hospital-based case-control study with 155 cases and 313 controls in China. All Chinese cases with newly diagnosed primary ovarian cancer between May 2005 and May 2010 in our hospital were invited to participate within 2 months of diagnosis. Controls were randomly selected from people who requested general health examinations in the same hospital during the same period. Two SNPs in ERCC1, C8092A and T19007C, were analyzed by the PCR-RFLP method. Results: We observed a non-significantly increased risk of ovarian cancer among individuals with ERCC1 8092TT compared with those with the 8092CC genotype (adjusted OR=1.55, 95% CI=0.74-2.97). Moreover, 19007TT genotype carriers also showed a non-significant increased risk of ovarian cancer over those with the 19007CC genotype (adjusted OR=1.78, 95% CI=0.91-3.64). Conclusion: This first investigation of links between polymorphisms in the ERCC1 gene and the risk of ovarian cancer in a Chinese population demonstrated no significant association. Further large sample studies in Chinese populations are needed.
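The case-control and meta-analysis results above are reported as odds ratios with 95% confidence intervals. For a single 2x2 table the usual computation is $OR = ad/bc$ with $SE(\ln OR) = \sqrt{1/a + 1/b + 1/c + 1/d}$ and a log-normal confidence interval. The sketch below uses invented counts, purely for illustration, and is not data from any of the studies above.

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-normal) 95% CI for a 2x2 table:
       a exposed cases, b unexposed cases, c exposed controls, d unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)
    lo = exp(log(or_) - z * se_log_or)
    hi = exp(log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(a=60, b=90, c=40, d=110)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```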
hOGG1, p53 Genes, and Smoking Interactions are Associated with the Development of Lung Cancer Cheng, Zhe;Wang, Wei;Song, Yong-Na;Kang, Yan;Xia, Jie 1803 This study aimed to investigate the effects of Ser/Cys polymorphism in hOGG1 gene, Arg/Pro polymorphism in p53 gene, smoking and their interactions on the development of lung cancer. Ser/Cys polymorphism in hOGG1 and Arg/Pro polymorphism in p53 among 124 patients with lung cancer and 128 normal people were detected using PCR-RFLP. At the same time, smoking status was investigated between the two groups. Logistic regression was used to estimate the effects of Ser/Cys polymorphism and Arg/Pro polymorphisms, smoking and their interactions on the development of lung cancer. ORs (95% CI) of smoking, hOGG1 Cys/Cys and p53 Pro/Pro genotypes were 2.34 (1.41-3.88), 2.12 (1.03-4.39), and 2.12 (1.15-3.94), respectively. The interaction model of smoking and Cys/Cys was super-multiplicative or multiplicative, and the OR (95% CI) for their interaction item was 1.67 (0.36 -7.78). The interaction model of smoking and Pro/Pro was super-multiplicative with an OR (95%CI) of their interaction item of 5.03 (1.26-20.1). The interaction model of Pro/Pro and Cys/Cys was multiplicative and the OR (95%CI) of their interaction item was 0.99 (0.19-5.28). Smoking, hOGG1 Cys/Cys, p53 Pro/Pro and their interactions may be the important factors leading to the development of lung cancer. Retinoid Receptors in Gastric Cancer: Expression and Influence on Prognosis Hu, Kong-Wang;Chen, Fei-Hu;Ge, Jin-Fang;Cao, Li-Yu;Li, Hao 1809 Background: Gastric cancer is frequently lethal despite aggressive multimodal therapies, and new treatment approaches are therefore needed. Retinoids are potential candidate drugs: they prevent cell differentiation, proliferation and malignant transformation in gastric cancer cell lines. They interact with nuclear retinoid receptors (the retinoic acid receptors [RARs] and retinoid X receptors [RXRs]), which function as transcription factors, each with three subclasses, ${\alpha}$, ${\beta}$ and ${\gamma}$. At present, little is known about retinoid expression and influence on prognosis in gastric cancers. Patients and Methods: We retrospectively analyzed the expression of the subtypes RARa, $RAR{\beta}$, $RAR{\gamma}$, RXRa, $RXR{\beta}$, $RXR{\gamma}$ by immunohistochemistry in 147 gastric cancers and 51 normal gastric epithelium tissues for whom clinical follow-up data were available and correlated the results with clinical characteristics. In addition, we quantified the expression of retinoid receptor mRNA using real-time PCR (RT-PCR) in another 6 gastric adenocarcinoma and 3 normal gastric tissues. From 2008 to 2010, 80 patients with gastric cancers were enrolled onto therapy with all-trans-retinoic acid (ATRA). Results: RARa, $RAR{\beta}$, $RAR{\gamma}$ and $RXR{\gamma}$ positively correlated with each other (p < 0.001) and demonstrated significantly lower levels in the carcinoma tissue sections (p < 0.01), with lower $RAR{\beta}$, $RAR{\gamma}$ and RXRa expression significantly related to advanced stages (p < =0.01). Tumors with poor histopathologic grade had lower levels of RARa and $RAR{\beta}$ in different histological types of gastric carcinoma (p < 0.01). 
Patients whose tumors exhibited low levels of RARa expression had significantly lower overall survival compared with patients who had higher expression levels of this receptor (p < 0.001, HR=0.42, 95.0% CI 0.24-0.73), and patients undergoing ATRA treatment had significantly longer median survival times (p = 0.007, HR=0.41, 95.0% CI 0.21-0.80). Conclusions: Retinoic acid receptors are frequently expressed in epithelial gastric cancer, with a tendency towards decreased expression, and RARa may be an indicator of a positive prognosis. This study provides a molecular basis for the therapeutic use of retinoids against gastric cancer. Acute Effects of Dokha Smoking on the Cardiovascular and Respiratory Systems among UAE Male University Students Shaikh, Rizwana B.;Haque, Noor Mohammad Abdul;Al Mohsen, Hassan Abdul Hadi Khalil;Al Mohsen, Ali Abdul Hadi Khalil;Humadi, Marwa Haitham Khalaf;Al Mubarak, Zainab Zaki;Mathew, Elsheba;Al Sharbatti, Shatha 1819 Background: In the United Arab Emirates (UAE) tobacco use is rampant. A less reported, yet widely used form of smoking native to the UAE is midwakh or dokha. The aim of the study is to assess the acute effects of smoking dokha (Arabian pipe) on the cardiovascular and respiratory systems among male university students in the UAE. Method: A quasi-experimental study was conducted among 97 male volunteers aged more than 17 years. Blood pressure, heart rate and respiratory rate of each participant were measured before and immediately after smoking. A self-administered questionnaire was used to collect personal details and data about smoking pattern. Results: Mean increases in systolic blood pressures ($12{\pm}1$ mmHg), heart rates ($20{\pm}2$ bpm) and respiratory rates ($4{\pm}1$ breaths/min) were observed (p < 0.001). A mean decrease in diastolic blood pressures ($1{\pm}1$ mmHg) was observed (p = 0.483). Conclusion: Smoking dokha has a significant acute effect on systolic blood pressure, heart rate and respiratory rate. Anti-smoking campaigns must address the ill effects of this form of smoking. Results from the study warrant further research into this method of smoking, which is becoming more popular. siRNA Mediated Silencing of NIN1/RPN12 Binding Protein 1 Homolog Inhibits Proliferation and Growth of Breast Cancer Cells Huang, Wei-Yi;Chen, Dong-Hui;Ning, Li;Wang, Li-Wei 1823 The gene encoding the Nin one binding (NOB1) protein, which plays an essential role in protein degradation, has been investigated for possible tumor promoting functions. The present study was focused on NOB1 as a possible therapeutic target for breast cancer treatment. Lentivirus mediated NOB1 siRNA transfection was used to silence the NOB1 gene in two established breast cancer cell lines, MCF-7 and MDA-MB-231, successful transfection being confirmed by fluorescence imaging. A significant decline in cell proliferation following NOB1 silencing was observed in both cell lines, as investigated by MTT assay. Furthermore, the number and size of the colonies formed were also significantly reduced in the absence of NOB1. Moreover, NOB1 gene knockdown arrested the cell cycle and inhibited cell cycle related protein expression. Collectively these results indicate that NOB1 plays an essential role in breast cancer cell proliferation and its gene expression could be a therapeutic target.
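Hazard ratios such as the one reported for RARa expression above (HR = 0.42, 95% CI 0.24-0.73) typically come from a Cox proportional hazards model. The following is a minimal sketch of such a fit on synthetic data, assuming the lifelines package is available; the column names and effect size below are invented for illustration only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
low_expr = rng.integers(0, 2, n)                 # 1 = "low receptor expression" (synthetic flag)
time = rng.exponential(scale=30, size=n)         # survival time in months
time = time * np.where(low_expr == 1, 0.6, 1.0)  # the "low" group is given shorter survival
event = (time < 60).astype(int)                  # administrative censoring at 60 months
time = np.minimum(time, 60)

df = pd.DataFrame({"time": time, "event": event, "low_expr": low_expr})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()          # exp(coef) is the hazard ratio with its 95% CI
print(cph.hazard_ratios_)    # here the HR for low_expr should come out above 1
```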
Applying Conventional and Saturated Generalized Gamma Distributions in Parametric Survival Analysis of Breast Cancer Yavari, Parvin;Abadi, Alireza;Amanpour, Farzaneh;Bajdik, Chris 1829 Background: The generalized gamma (GG) distribution constitutes an extensive family that contains nearly all of the most commonly used distributions, including the exponential, Weibull and lognormal. A saturated version of the model allows covariates to have effects through all the parameters of the survival time distribution. Accelerated failure-time models assume that only one parameter of the distribution depends on the covariates. Methods: We fitted both the conventional GG model and the saturated form for each of its members, including the Weibull and lognormal distributions, and compared them using likelihood ratios. To compare the selected distribution with the log-logistic distribution, a well-known distribution in survival analysis that is not included in the generalized gamma family, we used the Akaike information criterion (AIC; $r = l(b) - 2p$). All models were fitted using data for 369 women aged 50 years or more, diagnosed with stage IV breast cancer in BC during 1990-1999 and followed to 2010. Results: In both conventional and saturated parametric models, the lognormal was the best candidate among the GG family members; also, the lognormal fitted better than the log-logistic distribution. By the conventional GG model, the variables "surgery", "radiotherapy", "hormone therapy", "erposneg" and the interaction between "hormone therapy" and "erposneg" are significant. In the AFT model, we estimated the relative time for these variables. By the saturated GG model, similar significant variables are selected. Estimating the relative times at different percentiles of the extended model illustrates the pattern in which the relative survival time changes over time. Conclusions: The advantage of using the generalized gamma distribution is that it facilitates estimating a model with improved fit over the standard Weibull or lognormal distributions. Alternatively, the generalized F family of distributions might be considered, of which the generalized gamma distribution is a member and which also includes the commonly used log-logistic distribution. Lack of Significance of the BRCA2 Promoter Methylation Status in Different Genotypes of the MTHFR a1298c Polymorphism in Ovarian Cancer Cases in Iran Darehdori, Ahmad Shabanizadeh;Dastjerdi, Mehdi Nikbakht;Dahim, Hajar;Slahshoor, Mohammadreza;Babazadeh, Zahra;Taghavi, Mohammad Mohsen;Taghipour, Zahra;Gaafarineveh, Hamidreza 1833 Objective: Promoter methylation, which can be regulated by MTHFR activity, is associated with silencing of genes. In this study we evaluated the methylation status (type) of the BRCA2 promoter in ovarian cancer patients carrying different genotypes of the MTHFR gene (A or C polymorphisms at position 1298). Methods: The methylation type of the BRCA2 promoter was evaluated using bisulfite-modified DNA in methylation-specific PCR, and the MTHFR a1298c polymorphism was assessed by PCR-RFLP. Results: Analysis of the BRCA2 promoter methylation type of cases showed that 7 out of 60 cases (11.7%) were methylated while the remaining 53 (88.3%) were unmethylated. In methylated cases, one out of the 7 cases had a CC genotype and the remaining 6 methylated cases had an AC genotype. The AA genotype was absent. In unmethylated cases, 34, 18 and one had the AC, AA and CC genotypes, respectively.
Conclusion: There was no significant relationship between the methylation type of the BRCA2 promoter and the different genotypes of the MTHFR a1298c polymorphism in ovarian cancer (p=0.255). Cisplatin-Based Therapy for the Treatment of Elderly Patients with Non-Small-Cell Lung Cancer: a Retrospective Analysis of a Single Institution Inal, Ali;Kaplan, M. Ali;Kucukoner, Mehmet;Urakci, Zuhat;Karakus, Abdullah;Isikdogan, Abdurrahman 1837 Background: In spite of the fact that platinum-based doublets are considered the standard therapy for patients with advanced non-small-cell lung cancer (NSCLC), no elderly-specific platinum-based prospective phase III regimen has been explored. The aim of this retrospective single-center study was to evaluate the efficacy and side effects of cisplatin-based therapy specifically for the elderly. Methods: Patients receiving platinum-based treatment were divided into three groups. In the first group (GC), gemcitabine was administered at 1000 $mg/m^2$ on days 1 and 8, and cisplatin was added at 75 $mg/m^2$ on day 1. In the second group (DC), 75 $mg/m^2$ docetaxel and cisplatin were administered on day 1. The third group (PC) received 175 mg of paclitaxel and 75 mg of cisplatin on day 1. These treatments were repeated every three weeks. Results: The GC arm had 36 patients, the DC arm 42 and the PC arm 29. Grade III-IV thrombocytopenia was higher in the GC arm (21.2% received GC, 2.8% received DC, and 3.8% received PC), while sensory neuropathy was lower in patients in the GC arm (3.0%, 22.2%, and 23.1% received GC, DC and PC, respectively). There was no statistically significant difference in the response rates among the three groups (p>0.05). The median progression-free survival (PFS) was 5.0 months and the median overall survival (OS) in each group was 7.1, 7.4 and 7.1 months, respectively (p>0.05). Conclusion: The response rate, median PFS and OS were similar among the three treatment arms. Grade III-IV thrombocytopenia was higher in the GC arm, while the GC regimen was more favorable than the other cisplatin-based treatments with regard to sensory neuropathy. Long Term Survivors with Metastatic Pancreatic Cancer Treated with Gemcitabine Alone or Plus Cisplatin: a Retrospective Analysis of an Anatolian Society of Medical Oncology Multicenter Study Inal, Ali;Ciltas, Aydin;Yildiz, Ramazan;Berk, Veli;Kos, F. Tugba;Dane, Faysal;Unek, Ilkay Tugba;Colak, Dilsen;Ozdemir, Nuriye Yildirim;Buyukberber, Suleyman;Gumus, Mahmut;Ozkan, Metin;Isikdogan, Abdurrahman 1841 Background: The majority of patients with pancreatic cancer present with advanced disease. Systemic chemotherapy has limited impact on overall survival (OS), so that eligible patients should be selected carefully. The aim of this study was to analyze prognostic factors for survival in Turkish advanced pancreatic cancer patients who survived more than one year from the diagnosis of recurrent and/or metastatic disease and who received gemcitabine (Gem) alone or gemcitabine plus cisplatin (GemCis). Methods: This retrospective evaluation was performed for patients who survived more than one year from the diagnosis of recurrent and/or metastatic disease and who received gemcitabine between December 2005 and August 2011. Twenty-seven potential prognostic variables were chosen for univariate and multivariate analyses to identify prognostic factors associated with survival.
Results: Among the 27 variables in univariate analysis, three were identified to have prognostic significance: sex (p = 0.04), peritoneal dissemination (p =0.02) and serum creatinine level (p=0.05). Multivariate analysis by Cox proportional hazard model showed only peritoneal dissemination to be an independent prognostic factor for survival. Conclusion: In conclusion, peritoneal metastasis was identified as an important prognostic factor in metastatic pancreatic cancer patients who survived more than one year from the diagnosis of recurrent and/or metastatic disease and receiving Gem or GemCis. The findings should facilitate pretreatment prediction of survival and can be used for selecting patients for treatment. Effect of Tissue Factor on Invasion Inhibition and Apoptosis Inducing Effect of Oxaliplatin in Human Gastric Cancer Cell Yu, Yong-Jiang;Li, Yu-Min;Hou, Xu-Dong;Guo, Chao;Cao, Nong;Jiao, Zuo-Yi 1845 Objective: Tissue factor (TF) is expressed abnormally in certain types of tumor cells, closely related to invasion and metastasis. The aim of this study was to construct a human gastric cancer cell line SGC7901 stably-transfected with human TF, and observe effects on oxaliplatin-dependent inhibition of invasion and the apoptosis induction. Methods: The target gene TF was obtained from human placenta by nested PCR and introduced into the human gastric cell line SGC7901 through transfection mediated by lipofectamine. Stably-transfected cells were screened using G418. Examples successfully transfected with TF-pcDNA3 recombinant (experimental group), and empty vector pcDNA3 (control group) were incubated with oxaliplatin. Transwell chambers were used to show change in invasive ability. Caspase-3 activity was detected using a colorimetric method and annexin-V/PI double-staining was applied to detect apoptosis. Results: We generated the human gastric cancer cell line SGC7901/TF successfully, expressing TF stably and efficiently. Compared with the control group, invasion increased, whereas caspase-3 activity and apoptosis rate were decreased in the experimental group. Conclusion: TF can enhance the invasive capacity of gastric cancer cells in vitro. Its increased expression may reduce invasion inhibition and apoptosis-inducing effects of oxaliplatin and therefore may warrant targeting for improved chemotherapy. Smoking Trajectories among Koreans in Seoul and California: Exemplifying a Common Error in Age Parameterization Allem, Jon-Patrick;Ayers, John W.;Unger, Jennifer B.;Irvin, Veronica L.;Hofstetter, C. Richard;Hovell, Melbourne F. 1851 Immigration to a nation with a stronger anti-smoking environment has been hypothesized to make smoking less common. However, little is known about how environments influence risk of smoking across the lifecourse. Research suggested a linear decline in smoking over the lifecourse but these associations, in fact, might not be linear. This study assessed the possible nonlinear associations between age and smoking and examined how these associations differed by environment through comparing Koreans in Seoul, South Korea and Korean Americans in California, United States. Data were drawn from population based telephone surveys of Korean adults in Seoul (N=500) and California (N=2,830) from 2001-2002. Locally weighted scatterplot smoothing (lowess) was used to approximate the association between age and smoking with multivariable spline logistic regressions, including adjustment for confounds used to draw population inferences. 
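The smoking-trajectory study above approximates the age-smoking association with locally weighted scatterplot smoothing (lowess) before fitting spline logistic regressions. The following is a minimal sketch of the smoothing step on synthetic data, assuming statsmodels is available; the age effect built into the simulation is invented for illustration only.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
age = rng.uniform(18, 70, 1000)
# Synthetic smoking probability that rises to a peak near age 35 and then declines
p_smoke = 0.25 + 0.35 * np.exp(-((age - 35) / 12) ** 2)
smokes = rng.binomial(1, p_smoke)

# lowess returns the smoothed proportion of smokers as a function of age,
# a nonparametric look at a possibly nonlinear age effect
smoothed = lowess(smokes, age, frac=0.3)   # columns: sorted age, smoothed value
for a, s in smoothed[::200]:
    print(f"age {a:5.1f}: smoothed smoking prevalence {s:.2f}")
```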
Smoking differed across the lifecourse between Korean and Korean American men. The association between age and smoking peaked around 35 years among Korean and Korean American men. From 18 to 35 the probability of smoking was 57% higher (95%CI, 40 to 71) among Korean men versus 8% (95%CI, 3 to 19) higher among Korean American men. A similar difference in age after 35, from 40 to 57 years of age, was associated with a 2% (95%CI, 0 to 10) and 20% (95%CI, 16 to 25) lower probability of smoking among Korean and Korean American men. A nonlinear pattern was also observed among Korean American women. Social role transitions provide plausible explanations for the decline in smoking after 35. Investigators should be mindful of nonlinearities in age when attempting to understand tobacco use. Impacts of Household Income and Economic Recession on Participation in Colorectal Cancer Screening in Korea Myong, Jun-Pyo;Kim, Hyoung-Ryoul 1857 To assess the impact of household income and economic recession on participation in CRC screening, we estimated annual participating proportions from 2007 to 2009 for different CRC screening modalities according to household income levels. A total of 8,042 subjects were derived from the fourth Korean National Health and Nutrition Examination Survey (KNHANES IV). Multivariate logistic regression analysis was used to estimate odds ratios and 95% confidence intervals for CRC screening with household income quartiles by gender in each year. People were less likely to attend a high-cost CRC screening such as a sigmoidoscopy or colonoscopy independent of the income quartile during the economic recession. Income disparities for participating in opportunistic cancer screening appear to have existed among both males and females during the three years (2007-2009), but were most distinctive in 2009. An increase in mortality of CRC can therefore be expected due to late detection in periods of economic crisis. Accordingly, the government should expand the coverage of CRC screening to prevent excess deaths by reducing related direct and indirect costs during the economic recession. Triplet Platinum-based Combination Sequential Chemotherapy Improves Survival Outcome and Quality of Life of Advanced Non-small Cell Lung Cancer Patients Chen, Li-Kun;Liang, Ying;Yang, Qun-Ying;Xu, Fei;Zhou, Ning-Ning;Xu, Guang-Chuan;Liu, Guo-Zhen;Wei, Wei-Dong 1863 Background: Maintenance chemotherapy is one strategy pursued in recent years with intent to break through the chemotherapy plateau for advanced non-small cell lung cancer (NSCLC). However, given the toxicity, platinum-based combinations are rarely given for this purpose. We carried out the present prospective study of triplet platinum-based combination sequential chemotherapy in advanced NSCLC to investigate if patients could tolerate and benefit from such intensive treatment. Methods: From Dec 2003 to Dec 2007, 190 stage IIIB and IV NSCLC patients in Sun yat-sen University sequentially received the 3 platinum-based combination (TP-NP-GP) treatment (T: paclitaxol175$mg/m^2$ d1; N: vinorelbine25$mg/m^2$ d1 and 8; G: gemcitabine1$g/m^2$ d1 and 8; P: cisplatin20$mg/m^2$ d1-5; repeated every 3 weeks). Patients were followed up to at least 3 years to obtain survival data. Treatment toxicities and the quality of life (QOL) were assessed during the whole treatment. Results: There were 187 patients evaluable. The TP, NP and GP response rates with sequential use were 42.8% (80/187), 41.1% (65/158) and 28.8% (21/73) respectively. 
Median survival time was 18.2 months and the 1-, 2- and 3-year overall survival (OS) rates were 78.7%, 38.5% and 21.3%. Patients receiving > 6 cycles of chemotherapy had significantly longer OS and TTP (MST 25.3 vs. 14.5 months, TTP 15.1 vs. 9.1 months). Overall, the QOL of the patients improved after chemotherapy. Conclusions: The sequential chemotherapy strategy with triplet platinum-based combination regimens can improve the survival outcome and the quality of life of advanced non-small cell lung cancer patients. Pilot Study of the Sensitivity and Specificity of the DNA Integrity Assay for Stool-based Detection of Colorectal Cancer in Malaysian Patients Yehya, Ashwaq Hamid;Yusoff, Narazah Mohd;Khalid, Imran A.;Mahsin, Hakimah;Razali, Ruzzieatul Akma;Azlina, Fatimah;Mohammed, Kamil Sheikh;Ali, Syed A. 1869 Background: To assess the diagnostic potential of tumor-associated high molecular weight DNA in stool samples of 32 colorectal cancer (CRC) patients compared to 32 healthy Malaysian volunteers by means of polymerase chain reaction (PCR). Methods: Stool DNA was isolated and tumor-associated high molecular weight DNA (a 1.476 kb fragment including exons 6-9 of the p53 gene) was amplified using PCR and visualized on ethidium bromide-stained agarose gels. Results: Out of 32 CRC patients, 18 were positive for the presence of high molecular weight DNA as compared to none of the healthy individuals, resulting in an overall sensitivity of 56.3% with 100% specificity. Out of 32 patients, 23 had tumors on the left side and 9 on the right side, with 16 and 2 being positive, respectively. This showed that high molecular weight DNA was significantly (p = 0.022) more detectable in patients with left-sided tumors (69.6% vs 22.2%). Out of 32 patients, 22 had tumors larger than 1.0 cm, 18 of these (81.8%) being positive for long DNA, as compared to not a single patient with a tumor smaller than 1.0 cm (p < 0.001). Conclusion: We detected CRC-related high molecular weight p53 DNA in stool samples of CRC patients with an overall sensitivity of 56.3% and 100% specificity, with a strong tumor size dependence. Comparison of Complications of Peripherally Inserted Central Catheters with Ultrasound Guidance or Conventional Methods in Cancer Patients Gong, Ping;Huang, Xin-En;Chen, Chuan-Ying;Liu, Jian-Hong;Meng, Ai-Feng;Feng, Ji-Feng 1873 Objective: To compare the complications of peripherally inserted central catheters (PICC) placed by a modified Seldinger technique under ultrasound guidance or by the conventional (peel-away cannula) technique. Methods: From February to December of 2010, cancer patients who received PICC at the Department of Chemotherapy in Jiangsu Cancer Hospital were recruited into this study, and designated UPICC if their PICC lines were inserted under ultrasound guidance, or otherwise CPICC if insertion was performed by the peel-away cannula technique. The rates of successful placement, hemorrhage around the insertion area, phlebitis, comfort of the insertion arm, infection and thrombus related to catheterization were analyzed and compared on days 1, 5 and 6 after PICC and thereafter. Results: A total of 180 cancer patients were recruited, 90 in each group. The rates of successful catheter placement between the two groups differed with statistical significance (P < 0.05), favoring UPICC. More phlebitis and finger swelling were detected in the CPICC group (P < 0.05).
From day 6 to the date the catheter was removed and thereafter, more venous thrombosis and a higher rate of discomfort of the insertion arm were also observed in the CPICC group. Conclusion: Compared with CPICC, UPICC could improve the rate of successful insertion, reduce catheter-related complications and increase comfort of the involved arm, thus deserving to be further investigated in randomized clinical studies. Inhibition of Breast Cancer Metastasis Via PITPNM3 by Pachymic Acid Hong, Ri;Shen, Min-He;Xie, Xiao-Hong;Ruan, Shan-Ming 1877 Breast cancer metastasis is the most common cause of cancer-related death in women. Thus, seeking targets of breast tumor cells is an attractive goal towards improving clinical treatment. The present study showed that CCL18 from tumor-associated macrophages could promote breast cancer metastasis via PITPNM3. In addition, we found that pachymic acid (PA) could dose-dependently inhibit migration and invasion of MDA-MB-231 cells, with or without rCCL18 stimulation. Furthermore, evidence was obtained that PA could suppress the phosphorylation of PITPNM3 and the interaction of CCL18 with PITPNM3. Therefore, we speculate that PA could inhibit breast cancer metastasis via PITPNM3. Breast Cancer Molecular Subtypes and Associations with Clinicopathological Characteristics in Iranian Women, 2002-2011 Kadivar, Maryam;Mafi, Negar;Joulaee, Azadeh;Shamshiri, Ahmad;Hosseini, Niloufar 1881 Breast cancer is a heterogeneous disease that is affected by ethnicity of patients. According to hormone receptor status and gene expression profiling, breast cancers are classified into four molecular subtypes, each showing distinct clinical behavior. Lack of sufficient data on molecular subtypes of breast cancer in Iran prompted us to investigate the prevalence and the clinicopathological features of each subtype among Iranian women. A total of 428 women diagnosed with breast cancer from 2002 to 2011 were included and categorized into four molecular subtypes using immunohistochemistry. Prevalence of each subtype and its association with patients' demographics and tumor characteristics, such as size, grade, lymph-node involvement and vascular invasion, were investigated using Chi-square, analysis of variance and multivariate logistic regression. Luminal A was the most common molecular subtype (63.8%) followed by Luminal B (8.4%), basal-like (15.9%) and HER-2 (11.9%). Basal-like and HER-2 subtypes were mostly of higher grades while luminal A tumors were mostly grade 1 (P<0.001). Vascular invasion was more prevalent in the HER-2 subtype, and HER-2 positive tumors were significantly associated with vascular invasion (P=0.013). Using multivariate analysis, tumor size greater than 5 cm and vascular invasion were significant predictors of 3 or more nodal metastases. Breast cancer was most commonly diagnosed in women around 50 years of age and the majority of patients had lymph node metastasis at the time of diagnosis. This points to the necessity for devising an efficient screening program for breast cancer in Iran. Further, prospective surveys are suggested to evaluate prognosis of different subtypes in Iranian patients.
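The subtype study above tests associations between categorical variables (for example, molecular subtype versus tumor grade) with the Chi-square statistic before moving to multivariate logistic regression. As a minimal illustration of that first step, the sketch below runs a Chi-square test of independence on a hypothetical subtype-by-grade table with scipy; the counts are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: molecular subtype (rows) by tumor grade (columns).
# Counts are illustrative only, not the data reported in the abstract above.
#                  grade 1  grade 2  grade 3
table = np.array([[120,      95,      50],   # Luminal A
                  [ 10,      14,      12],   # Luminal B
                  [  9,      24,      35],   # Basal-like
                  [  7,      19,      25]])  # HER-2

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```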
Characteristics of Mammary Paget's Disease in China: a National-wide Multicenter Retrospective Study During 1999-2008 Zheng, Shan;Song, Qing-Kun;Zhao, Lin;Huang, Rong;Sun, Li;Li, Jing;Fan, Jin-Hu;Zhang, Bao-Ning;Yang, Hong-Jian;Xu, Feng;Zhang, Bin;Qiao, You-Lin 1887 The aim of this study was to detail the characteristics of mammary Paget's disease (PD) representative of the whole population in China. A total of 4211 female breast cancer inpatients at seven tertiary hospitals from seven representative geographical regions of China were randomly selected during 1999 to 2008. Data on demography, risk factors, diagnostic imaging tests, physical examination and pathologic characteristics were surveyed, and biomarker status was tested by immunohistochemistry. The differences in demography and risk factors between PD with breast cancer and other lesions were compared using the Chi-square test or t-test, with attention to physical examination and pathological characteristics. The percentage of PD was 1.6% (68/4211) among all breast cancers. The mean age at diagnosis was 48.1 years, and 63.2% (43/68) of patients were premenopausal. There was no difference in demography or risk factors between PD with breast cancer and other breast cancers (P > 0.05). The main patterns on physical examination and pathology were patients presenting with a palpable mass in the breast (65/68, 95.6%) and PD with underlying invasive cancer (56/68, 82.4%), respectively. The rate of multifocal disease was 7.4% (5/68). PD with invasive breast cancer showed larger tumor size, more multifocal disease, lower ER and PR expression and higher HER2 overexpression than other invasive breast cancers (P < 0.05). These results suggested that PD in China is a concomitant disease of breast cancer, and that PD with underlying invasive cancer has more multiple foci and more aggressive behavior compared with other invasive breast cancers. We address the urgent need for establishing diagnostic and therapeutic guidelines for mammary PD in China. Liver Cancer Mortality Trends during the Last 30 Years in Hebei province: Comparison Results from Provincial Death Surveys Conducted in the 1970's, 1980's, 1990's and 2004-2005 Xu, Hong;He, Yu-Tong;Zhu, Jun-Qing 1895 Background and Aims: Liver cancer is a major health problem in low-resource countries. Approximately 55% of all liver cancer occurs in China. Hebei Province is one of the important provinces, covering nearly 6% of the population of China. The aim of this paper was to explore liver cancer mortality trends during the past 30 years, and provide basic information for prevention strategies. Methods: Hebei was covered by all three national surveys during 1973-1975, 1990-1992, and 2004-2005 and by one provincial survey during 1984-1986. Subjects included all cases dying from liver cancer in Hebei Province. Liver cancer mortality trends and geographic differences across cities and counties were analyzed. Results: There were 82,878 deaths in Hebei Province during 2004-2005, with an average mortality rate of 600.9/100,000 and an age-adjusted rate of 552.3/100,000. Those dying of cancer numbered 18,424 cases, accounting for 22.2% of all deaths, second only to cerebrovascular disease as a cause of death. Cancer mortality was 133.6/100,000 (age-adjusted rate 119.2/100,000). Liver cancer ranked fourth in this survey with a mortality rate of 21.0/100,000, 28.4/100,000 in males and 13.35/100,000 in females, accounting for 15.7%, 17.1% and 13.4% of cancer deaths overall, in males and in females, respectively.
The sex ratio was 2.13. Since the 1970s, liver cancer deaths in Hebei Province have been increasing slightly. The crude mortality rates in the four surveys were 11.3, 16.0, 17.4 and 21.0 per 100,000, respectively, with age-adjusted rates fluctuating during the past 30 years, but the trend also being upwards. There is a tendency for the mortality rates to be higher in coastal than in mountain areas, and relatively lower in the plain areas, with crude mortality rates of 25.3, 22.1, and 19.1 per 100,000, respectively. There were no notable differences in crude rates between urban and rural areas, but the age-adjusted mortality rate in rural areas was much higher. Conclusion: Our study indicated that the mortality of liver cancer in Hebei Province is lower than the national average level. There is a slight increasing trend, especially in some counties. Liver cancer is a major health problem and it is necessary to further promote prevention strategies in Hebei Province. Differential Distribution of miR-20a and miR-20b may Underly Metastatic Heterogeneity of Breast Cancers Li, Jian-Yi;Zhang, Yang;Zhang, Wen-Hai;Jia, Shi;Kang, Ye;Zhu, Xiao-Yu 1901 Background: The discovery that microRNA (miRNA) regulates metastasis provides a principal molecular basis for tumor heterogeneity. A characteristic of solid tumors is their heterogeneous distribution of blood vessels, with significant hypoxia occurring in regions of low blood flow (tumor centers). It is necessary to discover the mechanism of breast cancer metastasis in relation to the differential distribution of crucial microRNAs within tumors from center to edge. Methods: Breast tissues from 48 patients (32 patients with breast cancer) were classified into the high invasive and metastatic group (HIMG), low invasive and metastatic group (LIMG), and normal group. Samples were collected from both the centers and edges of all tumors. The first six specimens were analyzed by microRNA array, and the subsequent ten specimens by real-time qRT-PCR and Western blot analyses. Correlation analysis was performed between the miRNAs and target proteins. Results: The relative content of miR-20a and miR-20b was lower in the center of the tumor than at the edge in the LIMG, lower at the edge of the tumor than in the center in the HIMG, and lower in breast cancer tissues than in normal tissues. VEGF-A and HIF-1alpha mRNA levels were higher in the HIMG than in the LIMG, and levels were higher in both groups than in the normal group; there was no difference in mRNA levels between the edge and center of the tumor. VEGF-A and HIF-1alpha protein levels were higher in the HIMG than in the LIMG, and protein levels in both groups were higher than in the normal group; there was a significant difference in protein expression between the edge and center of the tumor. Correlation analysis showed that the key miRNAs (miR-20a and miR-20b) negatively correlated with the target proteins (VEGF-A and HIF-1alpha). Conclusions: Our data suggest that miR-20a and miR-20b are differentially distributed in breast cancer, while VEGF-A and HIF-1alpha mRNAs had coincident distributions, and VEGF-A and HIF-1alpha proteins had uneven distributions opposing those of the miRNAs. It appears that one of the most important facets underlying metastatic heterogeneity is the differential distribution of miR-20a and miR-20b and their regulation of target proteins.
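The correlation analysis in the miR-20a/miR-20b study above amounts to computing Pearson coefficients between paired miRNA and target-protein measurements from the same samples. A minimal sketch with invented paired values (not the study's measurements) is shown below; a negative coefficient corresponds to the inverse miRNA-protein relationship the authors report.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements from the same tissue samples (illustrative only)
mir20a = np.array([0.31, 0.45, 0.52, 0.61, 0.72, 0.80, 0.95, 1.10])  # relative miRNA content
vegf_a = np.array([2.10, 1.92, 1.75, 1.61, 1.43, 1.30, 1.12, 0.95])  # relative protein level

r, p = pearsonr(mir20a, vegf_a)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```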
Mammography and Ultrasonography Reports Compared with Tissue Diagnosis - An Evidence Based Study in Iran, 2010 Akbari, Mohammad Esmaeil;Haghighatkhah, Hamidreza;Shafiee, Mohammad;Akbari, Atieh;Bahmanpoor, Mitra;Khayamzadeh, Maryam 1907 Background: Breast cancer is the most prevalent cancer and the fifth leading cause of cancer death in Iranian women. Early detection and treatment are important for appropriate management of this disease. Mammography and ultrasonography are used for screening and evaluation of symptomatic cases, while the main diagnostic test for breast cancer is pathological. In this study we evaluated mammography and ultrasonography as diagnostic tools. Methods: In this cross-sectional study 384 mammography and ultrasonography reports for 255 women were assessed, divided into benign and malignant groups. Suspected cases were referred for pathology evaluation. The radiologic and pathologic reports were compared, with further comparisons based on age group (50 years and over vs. under 50), history of breastfeeding and gravidity. Statistical analysis was performed by SPSS. Results: The mean ages of malignant and benign cases were $49{\pm}11.6$ and $43{\pm}11.2$ years, respectively. Sensitivity and specificity for mammography were 73% and 45%, respectively. Sensitivity and specificity for ultrasonography were 69% and 49%, respectively. There were statistically significant differences in the specificity of mammography according to factors such as history of gravidity and breastfeeding, and in sensitivity between patients aged 50 years or more and those younger. Conclusion: Factors affecting different results in mammography and ultrasonography reports were classified into three groups: skill, experience and training of medical staff, and setting of instruments. It is recommended that health managers in developing countries pay more attention to the quality of instrument settings and manpower than is currently the case. Policy-makers and managers must establish guidelines regarding breast imaging in Iran. SELDI-TOF MS Combined with Magnetic Beads for Detecting Serum Protein Biomarkers and Establishment of a Boosting Decision Tree Model for Diagnosis of Pancreatic Cancer Qian, Jing-Yi;Mou, Si-Hua;Liu, Chi-Bo 1911 Aim: New technologies for the early detection of pancreatic cancer (PC) are urgently needed. The aim of the present study was to screen for potential protein biomarkers in serum using proteomic fingerprint technology. Methods: Magnetic beads combined with surface-enhanced laser desorption/ionization (SELDI) TOF MS were used to profile and compare the protein spectra of serum samples from 85 patients with pancreatic cancer, 50 patients with acute-on-chronic pancreatitis and 98 healthy blood donors. Proteomic patterns associated with pancreatic cancer were identified with Biomarker Patterns Software. Results: A total of 37 differential m/z peaks were identified that were related to PC (P < 0.01). A tree model of biomarkers was constructed with the software based on three biomarkers (7762 Da, 8560 Da, 11654 Da), showing excellent separation between pancreatic cancer and non-cancer, with a sensitivity of 93.3% and a specificity of 95.6%. Blind test data showed a sensitivity of 88% and a specificity of 91.4%.
Application of combined biomarkers may provide a powerful and reliable diagnostic method for pancreatic cancer with a high sensitivity and specificity. Genome-wide Analysis of Aberrant DNA Methylation for Identification of Potential Biomarkers in Colorectal Cancer Patients Fang, Wei-Jia;Zheng, Yi;Wu, Li-Ming;Ke, Qing-Hong;Shen, Hong;Yuan, Ying;Zheng, Shu-Sen 1917 Background: Colorectal cancer is one of the leading causes of mortality worldwide. Genome-wide analysis studies have identified sequence mutations causing loss-of-function that are associated with disease occurrence and severity. Epigenetic modifications, such as DNA methylation, have also been implicated in many cancers but have yet to be examined in the East Asian population of colorectal cancer patients. Methods: Biopsies of tumors and matched non-cancerous tissue types were obtained, and genomic DNA was isolated and subjected to the bisulphite conversion method for comparative DNA methylation analysis on the Illumina Infinium HumanMethylation27 BeadChip. Results: Totals of 258 and 74 genes were found to be hyper- and hypo-methylated as compared to the individual's matched control tissue. Interestingly, three genes that exhibited hypermethylation in their promoter regions, CMTM2, ECRG4, and SH3GL3, were shown to be significantly associated with colorectal cancer in previous studies. Using heatmap cluster analysis, eight hypermethylated and 10 hypomethylated genes were identified as significantly differentially methylated genes in the tumour tissues. Conclusions: Genome-wide methylation profiling facilitates rapid and simultaneous analysis of cancerous cells, which may help to identify methylation markers with high sensitivity and specificity for diagnosis and prognosis. Our results show the promise of microarray technology in identification of potential methylation biomarkers for colorectal cancers. 2R of Thymidylate Synthase 5'-untranslated Enhanced Region Contributes to Gastric Cancer Risk: a Meta-analysis Yang, Zhen;Liu, Hong-Xiang;Zhang, Xie-Fu 1923 Background: Studies investigating the association between 2R/3R polymorphisms in the thymidylate synthase 5'-untranslated enhanced region (TYMS 5'-UTR) and gastric cancer risk have generated conflicting results. Thus, a meta-analysis was performed to summarize the data on any association. Methods: Pubmed, Embase, and CNKI databases were searched for all available studies. The strength of the association between the TYMS 5'-UTR 2R/3R polymorphism and gastric cancer risk was estimated by odds ratios (ORs) with 95% confidence intervals (CIs). Results: Six individual case-control studies with a total of 1,472 cases and 1,895 controls were included in this meta-analysis. Analyses of all six relevant studies showed that there was no obvious association between the TYMS 5'-UTR 2R/3R polymorphism and gastric cancer risk. Subgroup analyses based on ethnicity showed that 2R of TYMS 5'-UTR 2R/3R contributes to gastric cancer risk in the Asian population ($OR_{Homozygote\;model}$ = 1.71, 95%CI 1.19-2.46, P = 0.004; $OR_{Recessive\;genetic\;model}$ = 1.70, 95%CI 1.18-2.43, P = 0.004). However, the association in Caucasian populations was uncertain due to the limited number of studies. Conclusions: Our meta-analysis suggests that 2R of TYMS 5'-UTR 2R/3R contributes to gastric cancer risk in the Asian population, while this association in Caucasian populations needs further study.
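Meta-analyses such as the TYMS study above pool per-study odds ratios into a single summary estimate. The sketch below shows one common approach, fixed-effect (inverse-variance) pooling of log odds ratios, with per-study ORs and confidence intervals that are purely hypothetical; random-effects pooling, used when heterogeneity is present, would additionally estimate a between-study variance component.

```python
import numpy as np

# Hypothetical per-study odds ratios and 95% CIs (illustrative, not the included studies)
ors   = np.array([1.8, 1.5, 1.9, 1.4, 1.7, 1.6])
ci_lo = np.array([1.1, 0.9, 1.2, 0.8, 1.0, 1.0])
ci_hi = np.array([2.9, 2.5, 3.0, 2.4, 2.9, 2.6])

log_or = np.log(ors)
se     = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # SE recovered from the CI width
w      = 1.0 / se**2                                    # inverse-variance weights

pooled    = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi    = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```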
Red Strain Oryza Sativa-Unpolished Thai Rice Prevents Oxidative Stress and Colorectal Aberrant Crypt Foci Formation in Rats Tammasakchai, Achiraya;Reungpatthanaphong, Sareeya;Chaiyasut, Chaiyavat;Rattanachitthawat, Sirichet;Suwannalert, Prasit 1929 Oxidative stress has been proposed to be involved in colorectal cancer development. Many dark pigments of plants have potent oxidative stress preventive properties. In this study, unpolished Thai rice was assessed for antioxidant activity using 1,1-diphenyl-2-picrylhydrazyl (DPPH) and 2,2'-azinobis-3-ethylbenzothiazoline-6-sulfonic acid (ABTS) methods. Red strain unpolished Thai rice was also administered to rats exposed to azoxymethane (AOM) for induction of aberrant crypt foci (ACF). Serum malondialdehyde (MDA) and ferric reducing antioxidant power (FRAP) were investigated as markers of cellular oxidative stress and serum antioxidant capacity, respectively. Red pigment unpolished Thai rice demonstrated high antioxidant activity and was found to significantly and dose-dependently decrease the total density and crypt multiplicity of ACF. Consumption of the Thai rice further resulted in high serum antioxidant activity and low MDA cellular oxidative stress. Interestingly, the density of ACF was strongly related to MDA at r = 0.964, while it was inversely related with FRAP antioxidants (r = -0.915, p < 0.001). The results of this study suggest that consumption of the red strain of unpolished Thai rice may exert potentially beneficial effects on colorectal cancer through a decrease in the level of oxidative stress. Prognostic Factors and Treatment Outcomes in 93 Patients with Uterine Sarcoma from 4 Centers in Turkey Durnali, Ayse;Tokluoglu, Saadet;Ozdemir, Nuriye;Inanc, Mevlude;Alkis, Necati;Zengin, Nurullah;Sonmez, Ozlem Uysal;Kucukoner, Mehmet;Anatolian Society of Medical Oncology (ASMO), Anatolian Society of Medical Oncology (ASMO) 1935 Introduction: Uterine sarcomas are a group of heterogeneous and rare malignancies of the female genital tract, and there is a lack of consensus on prognostic factors and optimal treatment. Objective and Methodology: To perform a retrospective evaluation of clinicopathological characteristics, prognostic factors and treatment outcomes of 93 patients with uterine sarcomas who were diagnosed and treated at 4 different centers from November 2000 to October 2010. Results: Of the 93 patients, 58.0% had leiomyosarcomas, 26.9% malignant mixed Mullerian tumors, 9.7% endometrial stromal sarcomas, and 5.4% other histological types. According to the latest International Federation of Gynecology and Obstetrics (FIGO) staging, 43.0% were stage I, 20.4% were stage II, 22.6% were stage III and 14.0% were stage IV. Median relapse-free survival (RFS) was 20 months (95% confidence interval (CI), 12.4-27.6 months); RFS rates after 1, 2 and 5 years were 66.6%, 44.1% and 16.5%, respectively. Median overall survival (OS) was 56 months (95% CI, 22.5-89.5 months), and OS rates after 1, 2 and 5 years were 84.7%, 78% and 49.4%, respectively. Multivariate analysis showed that age ${\geq}60$ years and high tumor grade were significantly associated with poor OS and RFS; patients administered adjuvant treatment with sequential chemotherapy and radiotherapy had longer RFS. Among patients with leiomyosarcoma, in addition to age and grade, adjuvant treatment with sequential chemotherapy and radiotherapy after surgery had significant effects on OS. Conclusion: Uterine sarcomas have a poor prognosis even at early stages. Prognostic factors affecting OS were found to be age and grade.
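Survival figures like the RFS/OS rates and the multivariate results in the uterine sarcoma study above are typically produced with Kaplan-Meier estimation and a Cox proportional hazards model. The sketch below uses the Python lifelines package (an assumed tool choice; the authors do not state their software) on a small invented dataset with follow-up in months and two binary covariates.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical follow-up data (months); 'death' = 1 if the event was observed.
df = pd.DataFrame({
    "os_months":  [12, 56, 20, 34, 8, 60, 15, 42, 25, 5, 48, 30],
    "death":      [1,  0,  1,  1,  1, 0,  1,  1,  1,  1, 0,  0],
    "age_ge_60":  [1,  0,  1,  0,  1, 0,  1,  0,  0,  1, 1,  0],
    "high_grade": [1,  0,  1,  1,  0, 0,  1,  0,  1,  1, 0,  1],
})

# Kaplan-Meier estimate of overall survival
km = KaplanMeierFitter().fit(df["os_months"], event_observed=df["death"])
print("median OS (months):", km.median_survival_time_)

# Cox proportional hazards model with two candidate prognostic factors
cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="death")
cph.print_summary()   # hazard ratios (exp(coef)) with confidence intervals
```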
p63 Cytoplasmic Aberrance is Associated with High Prostate Cancer Stem Cell Expression Ferronika, Paranita;Triningsih, F.X. Ediati;Ghozali, Ahmad;Moeljono, Abraham;Rahmayanti, Siti;Shadrina, Arifah Nur;Naim, Awang Emir;Wudexi, Ivan;Arnurisa, Alfa Monica;Nanwani, Sandeep Tarman;Harijadi, Ahmad 1943 Introduction: Prostate cancer in Indonesia is the $3^{rd}$ most common cancer among males and ranks $5^{th}$ for male cancer mortality. Prognostic markers that can identify aggressive prostate cancer in early stages and help select appropriate therapy to finally reduce the mortality are therefore urgently needed. It has been suggested that stem cells in the prostate gland have a role in initiation, progression, and metastasis of cancer, although controversy continues to exist. Maintenance of normal stem cell or reserve cell populations in several epithelia, including the prostate, has been shown to be regulated by p63, and alteration of p63 expression is considered to have an oncogenic role in prostate cancer. We hypothesized that cytoplasmic aberrance of p63 expression is associated with high expression of ALDH1A1 as a cancer stem cell marker, thus leading to progression of prostate cancer. Methods: In a cross-sectional study over two years (2009-2010), a total of 79 paraffin-embedded tissues of benign prostatic hyperplasia, prostatic intraepithelial neoplasia (PIN), and low and high Gleason score prostate cancer were investigated using immunohistochemistry. Associations between cytoplasmic p63 and ALDH1A1, as well as with pathological diagnosis, were analyzed by the Chi-square test using SPSS 15.0. Links of both markers with cell proliferation rate (Ki-67) and apoptotic rate (cleaved caspase 3) were also analyzed by the Kruskal-Wallis test. Results: The mean age of patients at diagnosis was 70.0 years. Cytoplasmic aberrance of p63 was associated with ALDH1A1 expression (p<0.001) and both were found to have significant relationships with pathological diagnosis (including Gleason score) (p=0.006 and p<0.001, respectively). Moreover, it was also found that higher levels of cytoplasmic p63 were significantly associated with the frequency of proliferating cells and cells undergoing apoptosis in prostate cancers (p=0.001 and p=0.016, respectively). Conclusion: p63 cytoplasmic aberrance is associated with high ALDH1A1 expression. These components are suggested to have an important role in prostate cancer progression and may be used as molecular markers. The Lymphotoxin-α 252 A>G Polymorphism and Breast Cancer: A Meta-analysis Zhou, Ping;Huang, Wei;Chu, Xing;Du, Liang-Feng;Li, Jian-Ping;Zhang, Chun 1949 Objective: The aim of this meta-analysis was to evaluate associations between LTA-252 A>G and breast cancer (BC). Methods: Electronic searches of several databases were conducted for all online publications. A total of 7 studies involving 4,625 BC patients and 4,373 controls were identified. Results: This meta-analysis showed no significant association between the LTA-252 A>G polymorphism and BC in the overall or Caucasian populations. However, a positive association was found, limited to Asian populations. Conclusion: Although there was no significant association found between the LTA-252 A>G polymorphism and BC overall, a positive association was found in Asian populations.
Accuracy of Frozen Sections for Intraoperative Diagnosis of Complex Atypical Endometrial Hyperplasia Turan, Taner;Karadag, Burak;Karabuk, Emine;Tulunay, Gokhan;Ozgul, Nejat;Gultekin, Murat;Boran, Nurettin;Isikdogan, Zuhal;Kose, Mehmet Faruk 1953 Objective: The purpose of this study was to correlate the histological diagnosis made during intraoperative frozen section (FS) examination of hysterectomy samples from patients with complex atypical endometrial hyperplasia (CAEH) with the diagnosis obtained from definitive paraffin block histology. Methods: FS pathology results of 125 patients with a preoperative biopsy showing CAEH were compared retrospectively with paraffin block pathology findings. Results: Paraffin block results were consistent with FS in 78 of 125 patients (62.4%). The FS sensitivity and specificity for detecting cancer were 81.1% and 97.9%, with negative and positive predictive values of 76.7% and 98.4%, respectively. Paraffin block results were reported as endometrial cancer in 77 of 125 (61.6%) patients. Final pathology was endometrial cancer in 45.3% of patients diagnosed at our center and 76.9% of patients who had their diagnosis at other clinics (p=0.018). Paraffin block results were consistent with FS in 62.4% of all cases; consistency was 98.4% in patients who had endometrial cancer on FS. Conclusion: FS does not exclude the possibility of endometrial cancer in patients with a preoperative diagnosis of CAEH. In addition, sufficient endometrial sampling is important for an accurate diagnosis. Cytostatic in vitro Effects of DTCM-Glutarimide on Bladder Carcinoma Cells Brassesco, Maria S.;Pezuk, Julia A.;Morales, Andressa G.;De Oliveira, Jaqueline C.;Valera, Elvis T.;Da Silva, Glenda N.;De Oliveira, Harley F.;Scrideli, Carlos A.;Umezawa, Kazuo;Tone, Luiz G. 1957 Bladder cancer is a common malignancy worldwide. Despite the increased use of cisplatin-based combination therapy, the outcomes for patients with advanced disease remain poor. Recently, altered activation of the PI3K/Akt/mTOR pathway has been associated with reduced patient survival and advanced stage of bladder cancer, making its upstream or downstream components attractive targets for therapeutic intervention. In the present study, we showed that treatment with DTCM-glutarimide, a piperidine that targets PDK1, results in reduced proliferation, diminished cell migration and G1 arrest in 5637 and T24 bladder carcinoma cells. Conversely, no apoptosis, necrosis or autophagy was detected after treatment, suggesting that the reduced cell numbers in vitro are a result of diminished proliferation rather than cell death. Furthermore, previous exposure to 10 ${\mu}g/ml$ DTCM-glutarimide sensitized both cell lines to ionizing radiation. Although more studies are needed to corroborate our findings, our results indicate that PDK1 may be useful as a therapeutic target to prevent progression and abnormal tissue dissemination of urothelial carcinomas. The Metabolic Syndrome and Risk Factors for Biliary Tract Cancer: A Case-control Study in China Wu, Qiao;He, Xiao-Dong;Yu, Lan;Liu, Wei;Tao, Lian-Yuan 1963 Objectives: Recent data show that the metabolic syndrome may play a role in several cancers, but the etiology of biliary tract cancer is incompletely defined. The present aim was to evaluate risk factors for biliary tract cancer in China. Methods: A case-control study was conducted in which cases were biliary tract cancer patients referred to Peking Union Medical College Hospital (PUMCH).
Controls were randomly selected from an existing database of healthy individuals at the Health Screening Center of PUMCH. Data on the metabolic syndrome, liver diseases, family history, and history of diabetes and hypertension were collected by retrospective review of the patients' records and health examination reports or by interview. Results: A total of 281 patients (102 intrahepatic cholangiocarcinoma (ICC), 86 extrahepatic cholangiocarcinoma (ECC) and 93 gallbladder carcinoma (GC)) and 835 age- and sex-matched controls were enrolled. $HBsAg^+/anti-HBc^+$ (P=0.002), history of diabetes (P=0.000), cholelithiasis (P=0.000), TC (P=0.003), and HDL (P=0.000) were significantly related to ICC. Cholelithiasis (P=0.000), Tri (P=0.001), LDL (P=0.000), diabetes (P=0.000), Apo A (P=0.000) and Apo B (P=0.012) were significantly associated with ECC. Diabetes (P=0.017), cholelithiasis (P=0.000) and Apo A (P=0.000) were strongly inversely correlated with GC. Conclusion: Cholelithiasis, HBV infection and components of the metabolic syndrome may be potential risk factors for the development of biliary tract cancer. Significance and Expression of Aquaporin 1, 3, 8 in Cervical Carcinoma in Xinjiang Uygur Women of China Shi, Yong-Hua;Chen, Rui;Talafu, Tuokan;Nijiati, Rehemu;Lalai, Suzuke 1971 Overexpression of several aquaporins (AQPs) has been reported in different types of human cancer, but their role in carcinogenesis, for example in the cervix, has yet to be clearly defined. In this study, expression of AQPs in cervical carcinoma was investigated by real-time PCR, immunofluorescent and immunohistochemical assays and evaluated for correlations with clinicopathologic variables. AQP1, 3 and 8 exhibited differential expression in cervical carcinoma, corresponding CIN and mild cervicitis. AQP1 was predominantly localized in microvascular endothelial cells in the stroma of mild cervicitis, CIN and cervical carcinoma. AQP3 and AQP8 were localized in the membrane of normal squamous epithelium and carcinoma cells, local signals being more common than diffuse staining. AQP1 and AQP3 expression was remarkably stronger in cervical cancer than in mild cervicitis and CIN2-3 (P<0.05). AQP8 expression was highest in CIN2-3 (91.7%), but levels in cervical carcinoma were also higher than in mild cervicitis. AQP1, AQP3 and AQP8 expression significantly increased with advanced stage, deeper infiltration, metastatic lymph nodes and larger tumor volume (P<0.05). Our findings showed that AQPs might play important roles in cervical carcinogenesis and tumour progression in Uygur women. Houttuynia cordata Thunb Fraction Induces Human Leukemic Molt-4 Cell Apoptosis through the Endoplasmic Reticulum Stress Pathway Prommaban, Adchara;Kodchakorn, Kanchanok;Kongtawelert, Prachya;Banjerdpongchai, Ratana 1977 Houttuynia cordata Thunb (HCT) is a native herb found in Southeast Asia which features various pharmacological activities against allergy, inflammation, viral and bacterial infection, and cancer. The aims of this study were to determine the cytotoxic effect of 6 fractions obtained from silica gel column chromatography of alcoholic HCT extract on human leukemic Molt-4 cells and to demonstrate mechanisms of cell death. Six HCT fractions were cytotoxic to human lymphoblastic leukemic Molt-4 cells in a dose-dependent manner by MTT assay, fraction 4 exerting the greatest effects.
Treatment with the $IC_{50}$ of HCT fraction 4 significantly induced Molt-4 cell apoptosis, detected by annexin V-FITC/propidium iodide staining for externalization of phosphatidylserine to the outer leaflet of the cell membrane. The mitochondrial transmembrane potential was reduced in HCT fraction 4-treated Molt-4 cells. Moreover, decreased expression of Bcl-xl and increased levels of Smac/Diablo, Bax and GRP78 proteins were noted on immunoblotting. In conclusion, HCT fraction 4 induces Molt-4 cell apoptosis through an endoplasmic reticulum stress pathway. Inhibition of ENNG-Induced Pyloric Stomach and Small Intestinal Carcinogenesis in Mice by High Temperature- and Pressure-Treated Garlic Kaneko, Takaaki;Shimpo, Kan;Chihara, Takeshi;Beppu, Hidehiko;Tomatsu, Akiko;Shinzato, Masanori;Yanagida, Takamasa;Ieike, Tsutomu;Sonoda, Shigeru;Futamura, Akihiko;Ito, Akihiro;Higashiguchi, Takashi 1983 High temperature- and pressure-treated garlic (HTPG) has been shown to have enhanced antioxidative activity and polyphenol contents. Previously, we reported that HTPG inhibited 1,2-dimethylhydrazine-induced mucin-depleted foci (premalignant lesions) and $O^6$-methylguanine DNA adduct formation in the rat colorectum. In the present study, we investigated the modifying effects of HTPG on N-ethyl-N'-nitro-N-nitrosoguanidine (ENNG)-induced pyloric stomach and small intestinal carcinogenesis in mice. Male C57BL/6 mice were given ENNG (100 mg/l) in drinking water for the first 4 weeks, then a basal diet or diet containing 2% or 5% HTPG for 30 weeks. The incidence and multiplicity of pyloric stomach and small intestinal (duodenal and jejunal) tumors in the 2% HTPG group (but not in the 5% HTPG group) were significantly lower than those in the control group. Cell proliferation of normal-appearing duodenal mucosa was assessed by MIB-5 immunohistochemistry and shown to be significantly lower with 2% HTPG (but again not 5% HTPG) than in controls. These results indicate that HTPG, at 2% in the diet, inhibited ENNG-induced pyloric stomach and small intestinal (especially duodenal) tumorigenesis in mice, associated with suppression of cell proliferation. Planning of Nuclear Medicine in Turkey: Current Status and Future Perspectives Goksel, Fatih;Peksoy, Irfan;Koc, Orhan;Gultekin, Murat;Ozgul, Nejat;Sencan, Irfan 1989 Background and Purpose: An analysis of the current nuclear medicine (NM) status and future demand in Turkey in line with international benchmarks was conducted to establish a comprehensive baseline reference. Methods: Data from all NM centers on major equipment and manpower in Turkey were collected through a survey and cross-checked with primary research and governmental data. Data regarding manpower currently working were obtained from the relevant academic centers and occupational societies. Results: The current numbers of NM laboratories, NM specialists, gamma cameras, PET/CT scanners and radioiodine treatment units for thyroid cancer are 217, 474, 287, 75 and 39, respectively. Personnel and equipment in the field were insufficient compared to developed countries. Equipment insufficiency was more significant in the Ministry of Health (MoH) hospitals. These gaps should be eliminated with strategic planning of equipment and NM laboratories. Currently, the number of PET/CT devices is at the level of developed countries. The number of specialists in the field should reach the expected goal in 2023. By 2023, Turkey will need around 820 NM specialists, 498 gamma cameras and 99 PET/CT devices.
In addition, further studies should be conducted regarding other related staff, particularly health physicists, radiopharmacists and NM technicians. Conclusion: There is an insufficiency of personnel and equipment in Turkey's NM field. Comprehensive strategic planning is required to allocate limited resources, and equipment purchasing and employment policies should be structured as part of the "National Special Feature Requiring Health Service Plan". Brain Metastases from Cholangiocarcinoma: a First Case Series in Thailand Chindaprasirt, Jarin;Sookprasert, Aumkhae;Sawanyawisuth, Kittisak;Limpawattana, Panita;Tiamkao, Somsak 1995 Background: Brain metastasis from cholangiocarcinoma (CCA) is a rare but fatal event. To the best of our knowledge, only a few cases have been reported. Herein, we report the incidence rate and a first case series of brain metastases from CCA. Methods: Between January 2006 and December 2010, 5,164 patients were treated at Srinagarind Hospital, Khon Kaen University; of those, 8 patients developed brain metastasis. Here we reviewed clinical data and survival times. Results: The incidence rate of brain metastases from CCA was 0.15%. The median age of the patients was 60 years. Tumor subtypes were intrahepatic in 6 and hilar in 2 patients. All suffered from symptoms related to brain metastasis. Three patients were treated with whole-brain radiation therapy (WBRT), one of whom also underwent surgery. The median survival after the diagnosis of brain metastasis was 9.5 weeks (1-28 weeks). The longest survival was observed in a patient in RPA class I with two brain lesions who received WBRT. Conclusion: This is the first case series of brain metastases from CCA, with an incidence rate of 0.15%. The condition is rare and associated with a short survival time. Association of a Newly Identified Variant of DNA Polymerase Beta (polβΔ63-123, 208-304) with the Risk Factor of Ovarian Carcinoma in India Khanra, Kalyani;Bhattacharya, Chandan;Bhattacharyya, Nandan 1999 Background: DNA polymerase beta is a single-copy gene that is considered to be part of the DNA repair machinery in mammalian cells. The encoded enzyme is key to the base excision repair (BER) pathway. It is evident that pol beta has mutations in various cancer samples, but little is known about ovarian cancer. Aim: Identification of any variant form of $pol{\beta}$ cDNA in ovarian carcinoma and determination of the association between the polymorphism and ovarian cancer risk in Indian patients. We used 152 samples for isolation, RT-PCR and sequencing. Results: A variant of polymerase beta (deletion of exons 4-6 and 11-13, comprising amino acids 63-123 and 208-304) was detected in the heterozygous condition. The product size of this variant is 532 bp, while that of wild-type pol beta is 1 kb. Our study of the association between the variant and the endometrioid type shows that it is a statistically significant factor for ovarian cancer [OR=31.9 (4.12-246.25) with p<0.001]. The association between the variant and stage IV patients further indicated risk (${\chi}^2$ value of 29.7, and OR value 6.77 with 95% CI values 3.3-13.86). The correlation study also confirms the association data (Pearson correlation values for variant/stage IV and variant/endometrioid of 0.44 and 0.39). Conclusion: Individuals from this part of India with this type of variant may be at risk of stage IV, endometrioid type ovarian carcinoma.
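Risk estimates like the odds ratio, confidence interval and chi-square value quoted in the pol beta abstract above can be computed directly from a 2x2 case-control table. A minimal sketch with hypothetical counts (not the study's data), using the Wald interval for the log odds ratio:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: variant carriers vs. non-carriers by case/control status
# (illustrative counts only, not those of the study above).
#                   cases  controls
table = np.array([[ 28,       4],    # variant present
                  [124,     148]])   # variant absent

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or  = np.sqrt(1/a + 1/b + 1/c + 1/d)          # Wald SE of log(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1, 1]) * 1.96 * se_log_or)

chi2, p, _, _ = chi2_contingency(table, correction=False)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f}); chi2 = {chi2:.1f}, p = {p:.3g}")
```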
Association of Reduced Immunohistochemical Expression of E-cadherin with a Poor Ovarian Cancer Prognosis - Results of a Meta-analysis Peng, Hong-Ling;He, Lei;Zhao, Xia 2003 Purpose: E-cadherin is a transmembrane protein which is responsible for adhesion of epithelial cells. The aim of our study was to assess existing evidence of associations between reduced expression of E-cadherin and prognosis of ovarian cancer, with a discussion of potential approaches to exploiting any prognostic value for improved clinical management. Methods: We conducted a meta-analysis of 9 studies (n=915 patients) focusing on the correlation of reduced expression of E-cadherin with overall survival. Data were synthesized with random- or fixed-effect hazard ratios. Results: The studies were categorized by author/year, number of patients, FIGO stage, histology, cutoff value for E-cadherin positivity, method of hazard ratio (HR) estimation, and HR with its 95% confidence interval (CI). Combined hazard ratios suggested that reduced expression of E-cadherin was associated with poor overall survival (OS), HR = 2.10, 95% CI: 1.13-3.06. Conclusion: The overall survival of the E-cadherin negative group with ovarian cancer was significantly poorer than that of the E-cadherin positive group. Upregulation of E-cadherin is an attractive therapeutic approach that could exert significant effects on the clinical outcome of ovarian cancer. Tas13D Inhibits Growth of SMMC-7721 Cell via Suppression VEGF and EGF Expression He, Huai-Zhen;Wang, Nan;Zhang, Jie;Zheng, Lei;Zhang, Yan-Min 2009 Objective: Taspine, isolated from Radix et Rhizoma Leonticis, has demonstrated potential protective effects against cancer. Tas13D, a novel taspine derivative synthesized by structure-based drug design, has been shown to possess interesting biological and pharmacological activities. The current study was designed to evaluate its antiproliferative activity and underlying mechanisms. Methods: Antiproliferative activity of tas13D was evaluated by xenografts in athymic mice in vivo, and by 3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide (MTT) and cell migration assays with human liver cancer (SMMC-7721) cell lines in vitro. Docking between tas13D and VEGFR and EGFR was studied with a Sybyl/Surflex module. VEGF and EGF and their receptor expression was determined by ELISA and real-time PCR methods, respectively. Results: Our present study showed that tas13D inhibited SMMC-7721 xenograft tumor growth, bound tightly with the active site of the kinase domains of EGFR and VEGFR, and reduced SMMC-7721 cell proliferation ($IC_{50}$ = 34.7 ${\mu}mol/L$) and migration compared to negative controls. VEGF and EGF mRNAs were significantly reduced by tas13D treatment in a dose-dependent manner, along with VEGF and EGF production. Conclusion: The obtained results suggest that tas13D inhibits tumor growth and cell proliferation by inhibiting cell migration, downregulating mRNA expression of VEGF and EGF, and decreasing angiogenic factor production. Tas13D deserves further consideration as a chemotherapeutic agent. Apoptosis-Induced Cell Death due to Oleanolic Acid in HaCaT Keratinocyte Cells -a Proof-of-Principle Approach for Chemopreventive Drug Development George, V. Cijo;Kumar, D.R. Naveen;Suresh, P.K.;Kumar, R. Ashok 2015 Oleanolic acid (OA) is a naturally occurring triterpenoid in food materials and is a component of the leaves and roots of Olea europaea, Viscum album L., Aralia chinensis L. and more than 120 other plant species.
There are several reports validating its antitumor activity against different cancer cells, apart from its hepatoprotective activity. However, antitumor activity against skin cancer has not been studied well thus far. Hence the present study of the effects of OA against HaCaT (immortalized keratinocyte) cells - a cell-based epithelial model system for toxicity/ethnopharmacology-based studies - was conducted. Radical scavenging activity ($DPPH{\cdot}$) and FRAP were determined spectrophotometrically. Proliferation was assessed by XTT assay at 24, 48 and 72 hrs with exposure to various concentrations (12.5-200 ${\mu}M$) of OA. Apoptotic induction potential of OA was demonstrated using a cellular DNA fragmentation ELISA method. Morphological studies were also carried out to elucidate its antitumor potential. The results revealed that OA induces apoptosis by altering cellular morphology as well as DNA integrity in HaCaT cells in a dose-dependent manner, with comparatively low cytotoxicity. The moderate toxicity observed in HaCaT cells, with induction of apoptosis, possibly suggests greater involvement of programmed cell death-mediated mechanisms. We conclude that OA has relatively low toxicity and has the potential to induce apoptosis in HaCaT cells, and hence this provides a substantial and sound scientific basis for further validation studies. Loss of DBC2 Expression is an Early and Progressive Event in the Development of Lung Adenocarcinoma Dong, Wei;Meng, Long;Shen, Hong-Chang;Du, Jia-Jun 2021 Purpose: DBC2 (Deleted in Breast Cancer 2) has recently been indicated to be a tumor suppressor gene in many cancers, including lung adenocarcinoma. In this study, we aimed to explore the expression status of DBC2 in different subtypes of lung adenocarcinoma (from pre-invasive to invasive lesions), and to determine if downregulation becomes more marked with pathological progression. Methods: We collected 172 tissue samples from different subtypes of lung adenocarcinoma and investigated the frequency of DBC2 loss by immunohistochemistry. Results: Our results indicated that DBC2 downregulation is a relatively frequent event in lung adenocarcinoma. Moreover, as the adenocarcinoma subtype became more invasive, downregulation occurred more frequently. Conclusion: We conclude that loss of DBC2 expression is an early and progressive event in the pathogenesis of lung adenocarcinoma. Positive DBC2 immunohistochemistry may become an indicator for early stage disease and better prognosis of lung adenocarcinomas. Methylenetetrahydrofolate Reductase Gene C677T Polymorphism and Lung Cancer: an Updated Meta-analysis Hou, Xin-Heng;Huang, Yu-Min;Mi, Yuan-Yuan 2025 Objective: Methylenetetrahydrofolate reductase (MTHFR) catalyzes the metabolism of folate and nucleotides needed for DNA synthesis and repair. Variations in MTHFR function likely play roles in the etiology of lung cancer (LC). So far, several studies of the association between the MTHFR C677T polymorphism and LC have provided controversial or inconclusive results. Methods: To better assess the purported relationship, we performed a meta-analysis of 14 publications. Eligible studies were identified by searching the Pubmed, Embase, Web of Science and Google Scholar databases. Odds ratios (ORs) with 95% confidence intervals (CIs) were estimated to assess the association. Results: Overall, no significant association was detected between the MTHFR C677T polymorphism and LC risk, nor in subgroups by race.
However, in the stratified analysis by histological type, significantly increased non-small-cell lung cancer (NSCLC) risk was indicated (T-allele vs. C-allele: OR = 1.11, 95%CI = 1.03-1.19; TT vs. CC: OR = 1.24, 95%CI = 1.09-1.41; TC vs. CC: OR = 1.11, 95%CI = 1.03-1.20 and TT+TC vs. CC: OR = 1.09, 95%CI = 1.03-1.15). At the same time, ever-smokers who carried the T-allele (TT+TC) had a 10% decreased LC risk compared with CC genotype carriers. Conclusions: Our study provided evidence that the MTHFR 677T genotype may increase NSCLC risk; however, it may protect ever-smokers against LC. Future studies with large sample sizes are warranted to further evaluate this association in more detail. Application of Crossover Analysis-logistic Regression in the Assessment of Gene-environmental Interactions for Colorectal Cancer Wu, Ya-Zhou;Yang, Huan;Zhang, Ling;Zhang, Yan-Qi;Liu, Ling;Yi, Dong;Cao, Jia 2031 Background: Analysis of gene-gene and gene-environment interactions for complex multifactorial human disease faces challenges regarding statistical methodology. One major difficulty is partly due to the limitations of parametric statistical methods for detection of gene effects that are dependent solely or partially on interactions with other genes or environmental exposures. In our previous case-control study in Chongqing, China, we found that increased risk of colorectal cancer exists in individuals carrying a novel homozygous TT at locus rs1329149 and the known homozygous AA at locus rs671. Methods: In this study, we propose a statistical method, crossover analysis in combination with a logistic regression model, to further analyze our data, focusing on assessing gene-environment interactions for colorectal cancer. Results: The results of the crossover analysis showed that there are possible multiplicative interactions of loci rs671 and rs1329149 with alcohol consumption. Multifactorial logistic regression analysis also validated that loci rs671 and rs1329149 both exhibited a multiplicative interaction with alcohol consumption. Moreover, we also found additive interactions between any pair of two factors (among the four risk factors: gene loci rs671 and rs1329149, age and alcohol consumption) through the crossover analysis, which was not evident on logistic regression. Conclusions: The method based on crossover analysis-logistic regression is successful in assessing additive and multiplicative gene-environment interactions, and in revealing synergistic effects of gene loci rs671 and rs1329149 with alcohol consumption in the pathogenesis and development of colorectal cancer. Prevalence of Depression and its Correlations: a Cross-sectional Study in Thai Cancer Patients Maneeton, Benchalak;Maneeton, Narong;Mahathep, Pojai 2039 Objectives: Depression is common in cancer patients. However, only limited evidence is available for Asian populations. The authors therefore examined the prevalence of depression in Thai patients with cancer. In addition, associated factors were determined. Methods: This cross-sectional study was conducted in cancer patients admitted to a university hospital during December 2006 - December 2007. The Patient Health Questionnaire (PHQ-9) was used to assess all cancer patients. Suicidal risk was assessed using the suicidal risk assessment module of the Mini-International Neuropsychiatric Interview (MINI). Results: Of 108 cancer patients, 29.6% were diagnosed with a depressive disorder (mild, 14.8%; moderate, 5.6%; severe, 9.3%).
However, only 25.0% of these were recognized as being depressed by the primary physician. According to the MINI, 28.1% of these depressed cancer patients had a moderate to severe level of suicidal risk. In addition, the findings suggest that increased risk of depression is significantly associated with increased pain score, lower number of cancer treatments (< 2 methods), increased educational duration (>13 years), increased age (> 50 years old) and being female. Conclusions: The prevalence of depression is high in Thai cancer patients. However, depressive disorder in those patients is frequently undiagnosed. It is associated with several factors including pain, number of cancer treatments, education duration, age and sex. To improve quality of life, increase compliance with treatments and prevent suicide, screening for depressive disorders in this patient group is strongly recommended. Down-regulation of SENP1 Expression Increases Apoptosis of Burkitt Lymphoma Cells Huang, Bin-Bin;Gao, Qing-Mei;Liang, Wei;Xiu, Bing;Zhang, Wen-Jun;Liang, Ai-Bin 2045 Objective: To investigate the effect of down-regulation of Sentrin/SUMO-specific protease 1 (SENP1) expression on the apoptosis of human Burkitt lymphoma cells (Daudi cells) and potential mechanisms. Methods: Short hairpin RNA (shRNA) targeting SENP1 was designed and synthesized and then cloned into a lentiviral vector. A lentiviral packaging plasmid was used to transfect Daudi cells (sh-SENP1-Daudi group). Daudi cells without transfection (Daudi group) and Daudi cells transfected with blank plasmid (sh-NC-Daudi group) served as control groups. Flow cytometry was performed to screen GFP-positive cells, and semiquantitative PCR and Western blot assays were employed to detect the interference efficiency. The morphology of cells was observed under a microscope before and after transfection. Fluorescence quantitative PCR and Western blot assays were conducted to measure the mRNA and protein expression of apoptosis-related molecules (caspase-3, 8 and 9). After treatment with $CoCl_2$ for 24 h, the mRNA and protein expression of hypoxia-inducible factor-$1{\alpha}$ (HIF-$1{\alpha}$) was determined. Results: Sequencing showed the expression vectors of shRNA targeting SENP1 to be successfully constructed. Following screening of GFP-positive cells by FCM, semiquantitative PCR showed the interference efficiency was $79.2{\pm}0.026%$. At 48 h after transfection, the Daudi cells became shrunken, had irregular edges and presented apoptotic bodies. Western blot assay revealed increased expression of caspase-3, 8 and 9 with prolongation of transfection time (P<0.05). Following hypoxia treatment, mRNA expression of HIF-$1{\alpha}$ remained unchanged in the three groups (P>0.05) but the protein expression of HIF-$1{\alpha}$ markedly increased (P<0.05). However, in the sh-SENP1-Daudi group, the protein expression of HIF-$1{\alpha}$ remained unchanged. Conclusion: SENP1-shRNA can efficiently inhibit SENP1 expression in Daudi cells. SENP1 inhibition may promote cell apoptosis. These findings suggest that SENP1 may serve as an important target in gene therapy of Burkitt lymphoma.
The CHEK2 I157T Variant and Colorectal Cancer Susceptibility: A Systematic Review and Meta-analysis Liu, Chuan;Wang, Qing-Shui;Wang, Ya-Jie 2051 Background: The cell cycle checkpoint kinase 2 (CHEK2) gene I157T variant may be associated with an increased risk of colorectal cancer, but it is unclear whether the evidence is sufficient to recommend testing for the mutation in clinical practice. Materials and Methods: We systematically searched PubMed, EMBASE, Elsevier and Springer for relevant articles published before April 2012. Summary odds ratios (ORs) and 95% confidence intervals (95% CIs) were calculated using fixed-effects or random-effects models with Review Manager 5.0 software. Results: A total of seven studies including 4,029 cases and 13,844 controls based on the search criteria were included for analysis. A significant association of the CHEK2 I157T C variant with unselected CRC was found (OR = 1.61, 95% CI = 1.40-1.87, P < 0.001). We also found a significant association with sporadic CRC (OR = 1.48, 95% CI = 1.23-1.77, P < 0.001) and separately with familial CRC (OR = 1.97, 95% CI = 1.41-2.74, P < 0.001). Conclusion: This meta-analysis demonstrates that CHEK2 I157T may be another important CRC-predisposing variant, which increases CRC risk, especially in familial CRC. Variation of Urinary and Serum Trace Elements (Ca, Zn, Cu, Se) in Bladder Carcinoma in China Guo, Kun-Feng;Zhang, Zhe;Wang, Jun-Yong;Gao, Sheng-Lin;Liu, Jiao;Zhan, Bo;Chen, Zhi-Peng;Kong, Chui-Ze 2057 Background: Deficiency or excess of trace elements can induce metabolic disorders and cellular growth disturbance, even mutation and cancerization. Since there are few studies of the effect of trace elements in bladder carcinoma in China, the aim of this study was to assess variation using a case-control approach. Methods: To determine this, 81 patients with bladder carcinoma chosen as a study group and 130 healthy persons chosen as a control group were all assayed for urinary and serum trace elements (calcium [Ca], zinc [Zn], copper [Cu], selenium [Se]) using an atomic absorption spectrophotometer, and the results were analyzed by independent-sample t-tests. The correlative factors on questionnaires answered by all persons were analyzed by logistic regression. Results: The results showed urinary Ca, Zn and serum Cu levels of the study group to be significantly higher (P<0.05) than those of the control group. Serum Ca and Se levels of the study group were significantly lower (P<0.05) than those of the control group. Conclusion: There were higher urinary Zn and serum Cu concentrations in bladder carcinoma cases. Bladder carcinoma may be associated with Ca metabolic disorder, leading to higher urinary Ca and lower serum Ca. Low serum Se and smoking appear to be other risk factors for bladder carcinoma in China. Significance of Human Telomerase RNA Gene Amplification Detection for Cervical Cancer Screening Chen, Shao-Min;Lin, Wei;Liu, Xin;Zhang, You-Zhong 2063 Aim: Liquid-based cytology is the most often used method for cervical cancer screening, but it is relatively insensitive and frequently gives equivocal results. Used as a complementary procedure, the high-risk human papillomavirus (HPV) DNA test is highly sensitive but not very specific. The human telomerase RNA gene (TERC) is the most frequently amplified oncogene observed in cervical precancerous lesions.
We assessed genomic amplification of TERC in liquid-based cytological specimens to explore the optimal strategy of using this for cervical cancer screening. Methods: Six hundred and seventy-one residual cytological specimens were obtained from outpatients aged 25 to 64 years. The specimens were evaluated by the Digene Hybrid Capture 2 (HC2) HPV DNA test and fluorescence in situ hybridization (FISH) with a chromosome probe to TERC (3q26). Colposcopic examination and histological evaluation were performed where indicated. Results: The TERC positive rate was higher in the CIN2+ (CIN2, CIN3 and SCC) group than in the normal and CIN 1 groups (90.0% vs. 10.4%, p < 0.01). In comparison with the HC2 HPV DNA test, the TERC amplification test had lower sensitivity but higher specificity (90.0% vs. 100.0%, 89.6% vs. 44.0%, respectively). TERC amplification test used in conjunction with the HC2 HPV DNA test showed a combination of 90.0% sensitivity and 92.2% specificity. Conclusion: The TERC amplification test can be used to diagnose cervical precancerous lesions. TERC and HPV DNA co-testing shows an optimal combination of sensitivity and specificity for cervical cancer screening. Overview of Methodological Quality of Systematic Reviews about Gastric Cancer Risk and Protective Factors Li, Lun;Ying, Xiang-Ji;Sun, Tian-Tian;Yi, Kang;Tian, Hong-Liang;Sun, Rao;Tian, Jin-Hui;Yang, Ke-Hu 2069 Background and Objective: A comprehensive overall review of gastric cancer (GC) risk and protective factors is a high priority, so we conducted the present study. Methods: Systematic searches in common medical electronic databases along with reference tracking were conducted to include all kinds of systematic reviews (SRs) about GC risk and protective factors. Two authors independently selected studies, extracted data, and evaluated the methodological qualities and the quality of evidence using R-AMSTAR and GRADE approaches. Results: Beta-carotene below 20 mg/day, fruit, vegetables, non-fermented soy-foods, whole-grain, and dairy product were GC protective factors, while beta-carotene 20 mg/day or above, pickled vegetables, fermented soy-foods, processed meat 30g/d or above, or salty foods, exposure to alcohol or smoking, occupational exposure to Pb, overweight and obesity, helicobacter pylori infection were GC risk factors. So we suggested screening and treating H. pylori infection, limiting the amount of food containing risk factors (processed meat consumption, beta-carotene, pickled vegetables, fermented soy-foods, salty foods, alcohol), stopping smoking, avoiding excessive weight gain, avoidance of Pb, and increasing the quantity of food containing protective components (fresh fruit and vegetables, non-fermented soy-foods, whole-grain, dairy products). Conclusions: The conclusions and recommendations of our study were limited by including SRs with poor methodological bases and low quality of evidence, so that more research applying checklists about assessing the methodological qualities and reporting are needed for the future. Value of Ultrasound Elastography in Assessment of Enlarged Cervical Lymph Nodes Teng, Deng-Ke;Wang, Hui;Lin, Yuan-Qiang;Sui, Guo-Qing;Guo, Feng;Sun, Li-Na 2081 Background: To investigate the value of ultrasound elastography (UE) in the differentiation between benign and malignant enlarged cervical lymph nodes (LNs). Methods: B-mode ultrasound, power Doppler imaging and UE were examined to determine LN characteristics. 
Two methods, a four-score elastographic classification and the strain ratio (SR), were used to evaluate the ultrasound elastograms. Results: The cutoff point of SR had high utility in the differential diagnosis of benign and malignant cervical lymph nodes, with good sensitivity, specificity and accuracy. Conclusion: UE is an important aid in the differential diagnosis of benign and malignant cervical LNs.

β3GnT8 Regulates Laryngeal Carcinoma Cell Proliferation Via Targeting MMPs/TIMPs and TGF-β1 Hua, Dong;Qin, Fang;Shen, Li;Jiang, Zhi;Zou, Shi-Tao;Xu, Lan;Cheng, Zhi-Hong;Wu, Shi-Liang 2087 Previous evidence showed ${\beta}1,3$-N-acetylglucosaminyltransferase 8 (${\beta}3GnT8$), which can extend polylactosamine on N-glycans, to be highly expressed in some cancer cell lines and tissues, indicating roles in tumorigenesis. However, so far, the function of ${\beta}3GnT8$ in laryngeal carcinoma has not been characterized. To test any contribution, Hep-2 cells were stably transfected with sense or interference vectors to establish cell lines that overexpressed or were deficient in ${\beta}3GnT8$. Here we showed, using the MTT assay, that cell proliferation was increased in ${\beta}3GnT8$-overexpressing cells but decreased in ${\beta}3GnT8$ knockdown cells. Furthermore, we demonstrated that change in ${\beta}3GnT8$ expression had significant effects on tumor growth in nude mice. We further provided data suggesting that overexpression of ${\beta}3GnT8$ enhanced the expression of matrix metalloproteinase-2 (MMP-2) and matrix metalloproteinase-9 (MMP-9) at both the mRNA and protein levels, associated with shedding of the tissue inhibitor of metalloproteinases TIMP-2. In addition, it caused increased production of transforming growth factor beta 1 (TGF-${\beta}1$), whereas ${\beta}3GnT8$ gene knockdown caused the reverse effect. The results may indicate a novel mechanism by which the effects of ${\beta}3GnT8$ in regulating cellular proliferation are mediated, at least in part via targeting of MMPs/TIMPs and TGF-${\beta}1$ in laryngeal carcinoma Hep-2 cells. The finding may lay a foundation for further investigations into ${\beta}3GnT8$ as a potential target for therapy of laryngeal carcinoma.

Mechanisms of Anticancer Activity of Sulforaphane from Brassica oleracea in HEp-2 Human Epithelial Carcinoma Cell Line Devi, J. Renuka;Thangam, E. Berla 2095 Sulforaphane (SFN), an isothiocyanate formed by hydrolysis of glucosinolates found in Brassica oleracea, is reported to possess anticancer and antioxidant activities. In this study, we isolated SFN from red cabbage (Brassica oleracea var. rubra) and evaluated the comparative antiproliferative activity of various fractions (standard SFN, extract and purified SFN) by MTT assay in human epithelial carcinoma HEp-2 and Vero cells. Probable apoptotic mechanisms mediated through p53, bax and bcl-2 were also examined. The SFN fraction was collected by HPLC, enriched for its SFN content and confirmed. Expression of apoptosis-related proteins was detected by Western blotting and RT-PCR. Results showed that standard SFN and purified SFN had similar $IC_{50}$ values of 58.96 microgram/ml (HEp-2 cells) and 61.2 microgram/ml (Vero cells), lower than those of the extract, which were 113 microgram/ml (HEp-2 cells) and 125 microgram/ml (Vero cells). Further studies on apoptotic mechanisms showed that purified SFN down-regulated the expression of bcl-2 (antiapoptotic), while up-regulating p53 and Bax (proapoptotic) proteins, as well as caspase-3.
This study indicates that purified SFN possesses antiproliferative effects similar to those of standard SFN, and its apoptotic mechanism in HEp-2 cells could be mediated through p53 induction and the bax and bcl-2 signaling pathways.

Long Term Outcomes and Prognostic Factors of N0 Stage Nasopharyngeal Carcinoma: a Single Institutional Experience with 610 Patients Sun, Jian-Da;Chen, Chuang-Zhen;Chen, Jian-Zhou;Li, Dong-Sheng;Chen, Zhi-Jian;Zhou, Ming-Zhen;Li, De-Rui 2101 Treatment responses of $N_0$ stage nasopharyngeal carcinoma were comprehensively analyzed for the first time to evaluate long term outcomes of patients and identify prognostic factors. A total of 610 patients with $N_0$ NPC, undergoing definitive radiotherapy to their primary lesion and prophylactic radiation to the upper neck, were reviewed retrospectively. Concomitant chemotherapy was administered to 65 out of the 610. Survival rates of the patients were calculated using the Kaplan-Meier method and compared by log-rank test. Prognostic factors were identified by the Cox regression model. The study revealed the 5-year and 10-year overall, disease-free, disease-specific, local failure-free, regional failure-free, locoregional failure-free and distant metastasis-free survival rates to be 78.7% and 66.8%, 68.8% and 55.8%, 79.9% and 70.4%, 81.2% and 72.5%, 95.8% and 91.8%, 78.3% and 68.5%, 88.5% and 85.5%, respectively. There were 192 patients experiencing failure (31.5%) after radiotherapy or chemoradiotherapy. Of these, local recurrence, regional relapse and distant metastases as the first event of failure occurred in 100 (100/610, 16.4%), 15 (15/610, 2.5%) and 52 (52/610, 8.5%), respectively. Multivariate analysis showed that T stage was the only independent prognostic factor for patients with $N_0$ NPC (P=0.000). Late T stage (P=0.000), male sex (P=0.039) and anemia (P=0.007) were independent unfavorable factors predicting disease-free survival. After treatment, a satisfactory outcome was generally achieved in patients with $N_0$ NPC. Local recurrence represented the predominant mode of treatment failure, while T stage was the only independent prognostic factor for overall survival. Late T stage, male gender, and anemia independently predicted a lower probability of disease-free survival.

Triple Negative Status is a Poor Prognostic Indicator in Chinese Women with Breast Cancer: a Ten Year Review Ma, K.K.;Chau, Wai Wang;Wong, Connie H.N.;Wong, Kerry;Fung, Nicholas;Lee, J.T. Andrea;Choi, L.Y. Catherine;Suen, Dacita T.K.;Kwong, Ava 2109 Background: Ethnic variation in tumor characteristics and clinical presentation of breast cancer is increasingly being emphasized. We studied the tumor characteristics and factors which may influence the presentation and prognosis of triple negative breast cancers (TNC) in a cohort of Chinese women. Methods: A prospective cohort of 1800 Chinese women with breast cancer was recruited in a tertiary referral unit in Hong Kong between 1995 and 2006 and was followed up for a median duration of 7.2 years. Of the total, 216 (12.0%) had TNC and 1584 (88.0%) had non-TNC. Their clinicopathological variables, epidemiological variables and clinical outcomes were evaluated. Results: Patients with TNC had a similar age at presentation to those with non-TNC, while presenting at earlier stages (82.4% were stage 1-2, compared to 78.4% in non-TNC, p=0.035). They were more likely to be associated with grade 3 cancer (Hazard Ratio (HR)=5.8, p<0.001).
TNC showed a higher chance of visceral relapse (HR=2.69, p<0.001), liver metastasis (HR=1.7, p=0.003) and brain metastasis (HR=1.8, p=0.003). Compared with the non-TNC group, TNC had similar 10-year disease-free survival (82% vs 84%, p=0.148), overall survival (78% vs 79%, p=0.238) and breast cancer-specific mortality (18% vs 16%, p=0.095). However, TNC showed poorer 10-year stage 3 and 4 specific survival (stage 3: 53% vs. 67%, p=0.010; stage 4: 0% vs. 40%, p=0.035). Conclusions: Chinese women with triple negative breast cancer do not have less aggressive biological behavior compared to the West, and presentation at a later stage results in worse prognosis compared with those with non-triple negative breast cancer.

Hypoxia-Inducible Factor 1 Promoter-Induced JAB1 Overexpression Enhances Chemotherapeutic Sensitivity of Lung Cancer Cell Line A549 in an Anoxic Environment Hu, Ming-Dong;Xu, Jian-Cheng;Fan, Ye;Xie, Qi-Chao;Li, Qi;Zhou, Chang-Xi;Mao, Mei;Yang, Yu 2115 The presence of lung cancer cells in anoxic zones is a key cause of chemotherapeutic resistance. Thus, it is necessary to enhance the sensitivity of such lung cancer cells. However, loss of efficient gene therapeutic targeting and inefficient target gene expression in the anoxic zone in lung cancer are dilemmas. In the present study, the eukaryotic expression plasmid pUC57-HRE-JAB1, driven by a hypoxia response element promoter, was constructed and introduced into the lung cancer cell line A549. The cells were then exposed to the chemotherapeutic drug cis-diamminedichloroplatinum (C-DDP). qRT-PCR and Western blotting were used to determine mRNA and protein levels, and flow cytometry was used to examine the cell cycle and apoptosis of A549 cells transfected with pUC57-HRE-JAB1. The results showed that the JAB1 gene was overexpressed in A549 cells after transfection, with cell proliferation arrested in G1 phase and the apoptosis ratio significantly increased. Importantly, introduction of pUC57-HRE-JAB1 significantly increased the chemotherapeutic sensitivity of A549 in an anoxic environment. In conclusion, JAB1 overexpression might provide a novel strategy to overcome chemotherapeutic resistance in lung cancer.

Potent Anticancer Effects of Lentivirus Encoding a Drosophila Melanogaster Deoxyribonucleoside Kinase Mutant Combined with Brivudine Zhang, Nian-Qu;Zhao, Lei;Ma, Shuai;Gu, Ming;Zheng, Xin-Yu 2121 Objective: Deoxyribonucleoside kinase of Drosophila melanogaster (Dm-dNK) mutants have been reported to exert suicide gene effects in combined gene/chemotherapy of cancer. Here, we aimed to further evaluate the capacity of the mutated enzyme and its potential for inhibiting cancer cell growth. Methods: We altered the sequence of the last 10 amino acids of Dm-dNK by site-directed mutagenesis and constructed an active-site mutated Dm-dNK (Dm-dNKmut). RT-PCR and Western blotting studies were used to reveal the expression of lentivirus-mediated Dm-dNKmut in a breast cancer cell line (Bcap37), a gastric cancer cell line (SGC7901) and a colorectal cancer cell line (CCL187). [3H]-labeled substrates were used for enzyme activity assays, cell cytotoxicity was assessed by MTT assays, cell proliferation using a hemocytometer and apoptosis induction by the annexin-V-FITC labeled FACS method. In vivo, an animal study was set up in which BALB/C nude mice bearing tumors were treated with lentivirus-mediated expression of Dm-dNKmut plus the pyrimidine nucleoside analog brivudine (BVDU, (E)-5-(2-bromovinyl)-2-deoxyuridine).
Results: The Dm-dNKmut could be stably expressed in the cancer cell lines and retained its enzymatic activity. Moreover, the cells expressing Dm-dNKmut exhibited increased sensitivity in combination with BVDU, with induction of apoptosis in vitro and in vivo. Conclusion: These findings underlined the importance of BVDU phosphorylated by Dm-dNKmut in transduced cancer cells and the potential role of Dm-dNKmut as a suicide gene, thus providing a basis for future intensive research into cancer therapy.

Gemcitabine-based Concurrent Chemoradiotherapy Versus Chemotherapy Alone in Patients with Locally Advanced Pancreatic Cancer Wang, Bu-Hai;Cao, Wen-Miao;Yu, Jie;Wang, Xiao-Lei 2129 Objective: To explore improved treatment by retrospectively comparing survival time with gemcitabine-based concurrent chemoradiotherapy (GemRT) versus chemotherapy (Gem) alone in patients with locally advanced pancreatic cancer (LAPC). Methods: From January 2005 to June 2010, 56 patients with LAPC from Subei People's Hospital were treated either with Gem (n=21) or GemRT (n=35). Gem consisted of 4-6 cycles of gemcitabine alone (1000 $mg/m^2$ on days 1, 8 and 15 of a 28-day cycle). GemRT consisted of 50.4 Gy/28F radiotherapy with 2 concurrent cycles of gemcitabine (1000 $mg/m^2$ on days 1, 8 and 15 of a 21-day cycle during radiation). Radiation was delivered to the gross tumor volume plus 1-1.5 cm by use of a three-dimensional conformal technique. The follow-up time was calculated from the time of diagnosis to the date of death or last contact. Kaplan-Meier methodology was used to evaluate survival. Results: Patient characteristics were not significantly different between treatment groups. The disease control rate and the objective response rate of GemRT versus Gem were 97.1% vs 71.4% and 74.3% vs 38.1%, respectively. The overall survival (OS) was significantly better for GemRT compared to Gem (median 13 months versus 8 months; 51.4% versus 14.3% at 1 year, respectively). Conclusion: Radiation therapy at 50.4 Gy with 2 concurrent cycles of gemcitabine results in favorable rates of OS. Concurrent chemoradiotherapy should be the first choice for patients with LAPC.

G1/S-specific Cyclin-D1 Might be a Prognostic Biomarker for Patients with Laryngeal Squamous Cell Carcinoma Zhang, Ying-Yao;Xu, Zhi-Na;Wang, Jun-Xi;Wei, Dong-Min;Pan, Xin-Liang 2133 Objective: To investigate the prognostic role of antigen KI-67 (Ki-67) and G1/S-specific cyclin-D1 (cyclin-D1) in patients with laryngeal squamous cell carcinoma (LSCC). Methods: Immunohistochemical staining (IHS) was used to determine the protein expression of Ki-67 and cyclin-D1 in LSCC tissues. Kaplan-Meier survival curves were calculated with reference to Ki-67 and cyclin-D1 levels. Results: Cyclin-D1 and Ki67 were expressed in the nuclei of cancer cells. Among the total of 92 cancer tissues examined by immunohistochemistry, 60 (65.22%) had cyclin-D1 overexpression and 56 (60.87%) had Ki67 overexpression. Cyclin-D1 overexpression was associated with advanced stage of the cancer (P=0.029), but not with gender, age, histological differentiation, anatomical site, smoking history or alcohol consumption history. Ki67 overexpression was not associated with advanced stage, gender, age, histological differentiation, anatomical site, smoking history or alcohol consumption history. A statistically significant correlation was found between lymph node status and the expression of Ki67 (p = 0.025). Overexpression of cyclin-D1 was correlated with a shorter relapse-free survival period (P<0.001).
Conclusions: Overexpression of cyclin-D1 can be used as a marker to predict relapse in patients with LSCC after primary curative resection. ADPRT Val762Ala and XRCC1 Arg194Trp Polymorphisms and Risk of Gastric Cancer in Sichuan of China Wen, Yuan-Yuan;Pan, Xiong-Fei;Loh, Marie;Tian, Zhi;Yang, Shu-Juan;Lv, Si-Han;Huang, Wen-Zhi;Huang, He;Xie, Yao;Soong, Richie;Yang, Chun-Xia 2139 Objective: Gastric cancer remains a major health problem in China. We hypothesized that XRCC1 Arg194Trp and ADPRT Val762Ala may be associated with risk. Methods: We designed a multicenter 1:1 matched case-control study of 307 pairs of gastric cancers and controls between October 2010 and August 2011. XRCC1 Arg194Trp and ADPRT Val762Ala were sequenced, and demographic data as well as lifestyle factors were collected using a self-designed questionnaire. Results: Individuals carrying XRCC1 Trp/Trp or Arg/Trp variant genotype had a significantly increased risk of gastric cancer (OR, 1.718; 95% CI, 1.190-2.479), while the OR for ADPRT Val762Ala variant genotype (Ala/Ala or Val/Ala) was 1.175 (95% CI, 0.796-1.737). No gene-gene or gene-environment interactions were found. In addition, family history of cancer and drinkers proportion were higher among cases than among controls (P<0.05). Conclusions: XRCC1 194 Arg/Trp or Trp/Trp genotype, family history of cancer, and drinking are suspected risk factors of gastric cancer from our study. Our findings may offer insight into further similar large gene-environment and gene-gene studies in this region. PBK/TOPK Expression During TPA-Induced HL-60 Leukemic Cell Differentiation Liu, Yu-Hong;Gao, Xue-Mei;Ge, Fan-Mei;Wang, Zhe;Wang, Wen-Qing;Li, Xiao-Yong 2145 Objective: This study concerns expression of PBK/TOPK during differentiation of HL-60 leukemic cells induced by tetradecanoyl phorbol acetate (TPA). Methods: Wright-Giemsa staining was performed to observe morphological changes in the HL-60 cells, and flow cytometry was used to assess the cell cycle and CD11b, CD14, CD13, and CD33 expression. PBK/TOPK levels were determined by Western blot analysis. Results: After treating HL60 cells with $5.1{\times}10^{-9}$ mmol/L of TPA for three days, the number of nitroblue-tetrazolium-positive cells and CD11b, CD13, and CD14 expression increased, whereas the PBK/TOPK levels decreased. Conclusions: TPA can inhibit proliferation and induce differentiation of HL60 cells of the granulocytic or monocytic lineage. PBK/TOPK expression was downregulated during this process, whereas the Pho-PBK/TOPK expression was increased. Prostate Biomarkers with Reference to Body Mass Index and Duration of Prostate Cancer Poudel, Bibek;Mittal, Ankush;Shrestha, Rojeet;Nepal, Ashwini Kumar;Shukla, Pramod Shanker 2149 Objective: This study was performed to assess prostate biomarkers with reference to body mass index and duration of prostate cancer. Materials and Methods: A hospital based retrospective study was undertaken using data retrieved from the register maintained in the Department of Biochemistry of Manipal Teaching Hospital, Pokhara, Nepal between $1^{st}$ January, 2009 and $28^{th}$ February, 2012. Biomarkers studied were prostate specific antigen (PSA), acid phosphatase (ACP) and prostatic acid phosphatase (PAP), alkaline phosphatase (ALP) and gamma glutamyl transpeptidase (${\gamma}GT$). Demographic data including age, duration of disease, body weight, height and body mass index (BMI) were also collected. Duration of disease was categorized into three groups: <1 year, 1-2years and >2 years. 
Similarly, BMI ($kg/m^2$) was categorized into three groups: <23 $kg/m^2$, 23-25 $kg/m^2$ and >25 $kg/m^2$. Descriptive statistics and hypothesis testing were used for the analysis using EPI INFO and SPSS 16 software. Results: Out of 57 prostate cancers, serum levels of PSA, ACP and PAP were increased above the cut-off point in 50 (87.5%), 30 (52.63%) and 40 (70.18%) respectively. Serum levels of PSA, ACP and PAP significantly declined with the duration of disease after diagnosis. We observed a significant inverse relation between PSA and BMI. Similar non-significant tendencies were apparent for ACP and PAP. Conclusions: Decreasing levels of prostate biomarkers were found with the duration of prostate cancer and with increased BMI. Among the prostate biomarkers, PSA was found to be significantly decreased with the duration of disease and BMI.

Liver Involvement in Multiple Myeloma: A Hospital Based Retrospective Study Poudel, Bibek;Mittal, Ankush;Shrestha, Rojeet;Farooqui, Mohammad Shamim;Yadav, Naval Kishor;Shukla, Pramod Shanker 2153 Objective: This study aimed to assess liver involvement in multiple myeloma with the aid of liver function tests. Materials and Methods: A hospital based retrospective study was undertaken using data on multiple myeloma retrieved from the register maintained in the Department of Biochemistry of the Manipal Teaching Hospital, Pokhara, Nepal between $1^{st}$ January, 2007 and $28^{th}$ February, 2012. We collected biomarkers of the liver profile including bilirubin (total, direct and indirect), total protein, albumin, AG ratio, SGOT, SGPT, ALP, ${\gamma}GT$, LDH and ferritin, as well as renal and hematological profiles. Descriptive statistics and hypothesis testing were used for the analysis using EPI INFO and SPSS 16 software. Results: Out of 37 cases of multiple myeloma, serum levels of AST, ALT, ALP, ${\gamma}GT$ and LDH were increased above the cut-off point in 22 (59.5%), 24 (64.86%), 13 (35.13%), 9 (24.3%) and 11 (29.7%) respectively. The mean values of AST ($65.5{\pm}28.18$ U/L), ALT ($68.37{\pm}29.74$ U/L), ALP ($328.0{\pm}148.4$ U/L), ${\gamma}GT$ ($44.5{\pm}29.6$ U/L), LDH ($361.7{\pm}116.5$ U/L) and total protein ($9.79{\pm}1.03$ gm/dl) were significantly increased when compared with controls. In contrast, albumin ($3.68{\pm}0.43$ gm/dl) and the AG ratio ($0.62{\pm}0.15$) were significantly decreased. Similarly, anemia, hyperuricemia, azotemia, hypercalcaemia and Bence Jones proteinuria were found in 30 (78.9%), 27 (71.1%), 19 (51.5%), 15 (39.5%) and 16 (42.1%) respectively, in cases of multiple myeloma. Conclusions: While clinical manifestation of liver disease among the multiple myeloma cases was not common, abnormalities in liver function were characteristic.

Genetic Variants in the PI3K/PTEN/AKT/mTOR Pathway Predict Platinum-based Chemotherapy Response of Advanced Non-small Cell Lung Cancers in a Chinese Population Xu, Jia-Li;Wang, Zhen-Wu;Hu, Ling-Min;Yin, Zhi-Qiang;Huang, Ming-De;Hu, Zhi-Bin;Shen, Hong-Bing;Shu, Yong-Qian 2157 Objective: The PI3K/PTEN/AKT/mTOR signaling pathway has been implicated in resistance to cisplatin. In the current study, we determined whether common genetic variations in this pathway are associated with platinum-based chemotherapy response and clinical outcome in advanced non-small cell lung cancer (NSCLC) patients.
Methods: Seven common single nucleotide polymorphisms (SNPs) in core genes of this pathway were genotyped in 199 patients and analyzed for associations with chemotherapy response, progression-free survival (PFS) and overall survival (OS). Results: Logistic regression analysis revealed an association between AKT1 rs2494752 and response to treatment. Patients carrying the heterozygous AG genotype had an increased risk of disease progression after two cycles of platinum-based chemotherapy compared to those with the AA genotype (adjusted odds ratio (OR)=2.18, 95% confidence interval (CI): 1.00-4.77), which remained significant in the stratified analyses. However, the log-rank test and Cox regression detected no association between these polymorphisms in the PI3K pathway genes and survival in advanced NSCLC patients. Conclusions: Our findings suggest that genetic variants in the PI3K/PTEN/AKT/mTOR pathway may predict platinum-based chemotherapy response in advanced NSCLC patients in a Chinese population.

Development and Area Adaptation of Flow Charts Related to Gynecologic Oncology Nursing Practices Beydag, Kerime Derya;Komurcu, Nuran 2163 Aim: This one-group semi-experimental study was performed to develop flow charts of nursing practices applied to gynecologic oncology patients and adapt them to the field. Methods: The research was conducted between October 2008 and March 2009 in 6 hospitals in Istanbul (3 health ministry hospitals, 2 private hospitals and 1 university hospital) with effective programs. The scope of the study included 97 midwives/nurses who had been working as caregivers of gynecologic oncology patients in this unit for at least 6 months and who participated in this study voluntarily; 87 people composed the sample because of the absence of others on vacation or sick leave when the data were collected or who did not wish to participate. The data were collected via a descriptive information form and "Forms to Determine the Efficiency of Flow Charts". Before data collection, risks related to gynecologic oncology problems were identified, a literature scan was made for existing flow charts based on actual practices and the discovered charts were reviewed. As a result of the evaluations, it was decided to create 15 flow charts intended for risks, symptoms, operation processes and discharge. Questionnaires to determine efficiency were applied to participants before and after the practice. Results: As a result of the study, it was determined that the efficiency of the flow charts increased significantly (p<0.01) after the participants' practice, with no significant relationships (p>0.01) apparent with age group, education level, occupational period in the job and in the gynecologic oncology field, or evaluations of the practice before and after it was applied. Conclusion: The results of the study revealed that nursing participants in university and private hospitals, and those who supported the existence of a flow chart in the field, evaluated the flow charts positively.

Improved Diagnostic Accuracy of Pancreatic Diseases with a Combination of Various Novel Serum Biomarkers - Case Control Study from Manipal Teaching Hospital, Pokhara, Nepal Farooqui, Mohammad Shamim;Mittal, Ankush;Poudel, Bibek;Mall, Suhas Kumar;Sathian, Brijesh;Tarique, Mohammad;Farooqui, Mohammad Hibban 2171 Background: Pancreatic cancer is a distressing disease with miserable prospects, and early recognition remains a challenge due to its ubiquitous symptomatic presentation, deep anatomical location, and aggressive etiology.
False positives and problems in distinguishing pancreatitis from adenocarcinoma limit the use of CA 19-9, as both disorders can present with similar symptoms and share radiographic features. This study aimed to assess the relative increase in accuracy of diagnosing patients with chronic pancreatitis, benign neoplasm of the pancreas and adenocarcinoma when CA 19-9, haptoglobin and serum amyloid A are used together, in comparison to CA 19-9 alone. Materials and Methods: This hospital based case control study was carried out in the Departments of Medicine and Biochemistry of Manipal Teaching Hospital, Pokhara, Nepal, between $1^{st}$ January 2010 and $31^{st}$ December 2011. The variables assessed were age, gender, serum CA19-9, serum haptoglobin and serum amyloid A. The data were analyzed using Excel 2003, R 2.8.0, Statistical Package for the Social Sciences (SPSS) for Windows Version 16.0 (SPSS Inc; Chicago, IL, USA) and the EPI Info 3.5.1 Windows Version. Results: Out of 197 cases of pancreatic disease, the largest number of presumed cases were adenocarcinoma of the pancreas (95). Males (59) outnumbered females (36) among presumed cases of adenocarcinoma of the pancreas. The mean values of CA19-9 were considerably raised in cases of chronic pancreatitis, benign neoplasm and adenocarcinoma of the pancreas when compared to controls. The greatest increase in CA19-9 values was seen in cases of adenocarcinoma of the pancreas. The p-value indicates that in cases of chronic pancreatitis there was no significant increase in precision of diagnosis. Conclusions: These statistics established that haptoglobin and SAA are useful in discriminating cancer from benign conditions as well as healthy controls.

Expression of Matrix Metalloproteinase-2, but not Caspase-3, Facilitates Distinction between Benign and Malignant Thyroid Follicular Neoplasms Sanii, Sanaz;Saffar, Hiva;Tabriz, Hedieh M.;Qorbani, Mostafa;Haghpanah, Vahid;Tavangar, Seyed M. 2175 Purpose: Definite diagnosis of follicular thyroid carcinoma (FTC) is based on the presence of capsular or vascular invasion. To date, no reliable and practical method has been introduced to discriminate this malignant neoplasm from follicular thyroid adenoma (FTA) in fine needle aspiration biopsy material. Matrix metalloproteinase-2 (MMP-2), by degrading extracellular matrix, and caspase-3, by induction of apoptosis, have been shown to play important roles in carcinogenesis and aggressive behavior in many tumor types. The aim of this study was to examine expression of MMP-2 and caspase-3 in thyroid follicular neoplasms and to determine their usefulness for differential diagnosis. Method: Sixty FTAs and 41 FTCs were analysed immunohistochemically for MMP-2 and caspase-3. Result: MMP-2 was positive in 4 FTCs (9.8%), but in none of the FTAs, with statistical significance (p = 0.025). Caspase-3 was positive in 30 (50%) of FTAs and in 27 (65.9%) of FTCs. Conclusion: Our results show MMP-2 expression only in FTCs and suggest that this protein may be a useful marker to confirm diagnosis of FTC versus FTA, with 100% specificity and a 100% positive predictive value. We failed to show any differential diagnostic value for caspase-3 in thyroid follicular neoplasms.
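As a worked illustration of the diagnostic accuracy figures quoted in the thyroid study above, the short sketch below recomputes sensitivity, specificity and predictive values directly from the reported counts (MMP-2 positive in 4 of 41 FTCs and in 0 of 60 FTAs). The negative predictive value is not stated in the abstract; it simply follows from the same counts.

# Diagnostic accuracy from the 2x2 counts reported for MMP-2 above:
# positive in 4 of 41 carcinomas (FTC) and in 0 of 60 adenomas (FTA).
tp, fn = 4, 41 - 4          # carcinomas: marker positive / marker negative
fp, tn = 0, 60              # adenomas:   marker positive / marker negative

sensitivity = tp / (tp + fn)     # 4/41, about 9.8%
specificity = tn / (tn + fp)     # 60/60 = 100%
ppv = tp / (tp + fp)             # 4/4 = 100%
npv = tn / (tn + fn)             # 60/97, about 62% (not reported in the abstract)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")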
Reproductive Variables and Risk of Breast Malignant and Benign Tumours in Yunnan Province, China Yanhua, Che;Geater, Alan;You, Jing;Li, Li;Shaoqiang, Zhou;Chongsuvivatwong, Virasakdi;Sriplung, Hutcha 2179 Introduction and aim: To compare reproductive factor influence on patients with pathological diagnosed malignant and benign tumor in the Breast Department, The First Peoples' Hospital of Kunming in Yunnan province, China. Methods: A hospital-based case-control study was conducted on 263 breast cancer (BC) cases and 457 non-breast cancer controls from 2009 to 2011. The cases and controls information on demographics, medical history, and reproductive characteristics variables were collected using a self-administered questionnaire and routine medical records. Histology of breast cancer tissue and benign breast lesion were documented by pathology reports. Since some variables in data analysis had zero count in at least one category, binomial-response GLM using the bias-reduction method was applied to estimate OR's and their 95% confidence intervals (95% CI). To adjust for age and menopause status, a compound variable comprising age and menopausal status was retained in the statistical models. Results: multivariate model analysis revealed significant independent positive associations of BC with short menstrual cycle, old age at first live birth, never breastfeeding, history of oral contraception experience, increased number of abortion, postmenopausal status, and nulliparity. Categorised by age and menopausal status, perimenopausal women had about 3-fold and postmenopausal women had more than 5-fold increased risk of BC compared to premenopausal women. Discussion and Conclusion: This study has confirmed the significant association of BC and estrogen related risk factors of breast cancer including longer menstrual cycle, older age of first live birth, never breastfeeding, nulliparity, and number of abortions more than one. The findings suggest that female hormonal factors, especially the trend of menopause status play a significant role in the development of BC in Yunnan women. Prevalence and Pathogenesis of Barrett's Esophagus in Luoyang, China Zhang, Ru-Gang;Wang, Chang-Song;Gao, Cun-Fang 2185 Background: Prevalence of Barrett's esophagus (BE) in Luoyang, China, has not been reported, and its pathogenesis is controversial. The aim of this study was therefore to investigate the prevalence of BE and its underlying factors in the city of Luoyang. Method: This was a prospective study in one center. Many patients were analyzed using endoscopy who showed upper gastrointestinal symptoms between August 2006 and June 2007. In addition, the effect of apoptosis-related proteins and heat shock proteins upon BE's pathogenesis were also investigated by an immunohistochemical protocol. Results: Prevalence of BE was at 4.55% and the mean age of those affected was about 10 years older than for esophagitis. Typical reflux symptoms were significantly lower than with esophagitis, whereas signs of caspase-3 and HSP105 elevation were significantly higher. Expression of TERT, HSP70 and $HSP90{\alpha}$ in BE cases was significantly lower than in esophagitis. However, there was no statistical difference between the two groups in expression of HSP27. Conclusions: The prevalence of BE is high in Luoyang, which could result from esophagitis despite typical reflux symptoms being relatively uncommon. 
Initiation and development of BE might be the result of accelerated proliferation, apoptosis and differentiation of original cells to intestinal epithelium. Methylenetetrahydrofolate Reductase C677T Polymorphism and Cervical Cancer Risk: a Meta-Analysis Guo, Li-Na 2193 Background: Methylenetetrahydrofolate reductase (MTHFR) is a key enzyme in the metabolism of folate, and the role of MTHFR C677T polymorphism in cervical carcinogenesis is still controversial. Method: We performed a meta-analysis of all relevant case-control studies that examined any association between the C677T polymorphism and cervical cancer risk. We estimated summary odds ratios (ORs) with their confidence intervals (CIs) to assess links. Results: Finally, 10 studies with a total of 2113 cervical cancer cases and 2804 controls were included. Results from this meta-analysis showed that significantly elevated cervical cancer risk was associated with the MTHFR T allele in the Asian population under conditions of two genetic comparison models (for TT vs. CC, OR = 1.37, 95%CI 1.00-1.87, P = 0.050; for TT vs. TC+CC: OR = 1.34, 95%CI 1.01-1.77, P = 0.039). However, there was no obvious association between the MTHFR C677T polymorphism and cervical cancer risk in the other populations. Conclusion: The MTHFR C677T polymorphism is associated with cervical cancer risk in Asians, while any possible link in the Caucasian population needs further studies. Interactions Between MTHFR C677T - A1298C Variants and Folic Acid Deficiency Affect Breast Cancer Risk in a Chinese Population Wu, Xia-Yu;Ni, Juan;Xu, Wei-Jiang;Zhou, Tao;Wang, Xu 2199 Background: Our objective was to evaluate the MTHFR C677T-A1298C polymorphisms in patients with breast cancer and in individuals with no history of cancer, to compare the levels of genetic damage and apoptosis under folic acid (FA) deficiency between patients and controls, and to assess associations with breast cancer. Methods: Genetic damage was marked by micronucleated binucleated cells (MNBN) and apoptosis was estimated by cytokinesis-block micronucleus assay (CBMN). PCR-RFLP molecular analysis was carried out. Results: The results showed significant associations between the MTHFR 677TT or the combined MTHFR C677T-A1298C and breast cancer risk (OR = 2.51, CI = 0.85 to 7.37, p = 0.08; OR = 4.11, CI = 0.78 to 21.8, p < 0.001). The MNBN from the combined MTHFR C677T-A1298C was higher and the apoptosis was lower than that of the single variants (p < 0.05). At 15 to 60 nmol/L FA, the MNBN in cases with the TTAC genotype was higher than controls (p < 0.05), whereas no significant difference in apoptosis was found between the cases and controls after excluding the genetic background. Conclusions: Associations between the combined MTHFR C677T-A1298C polymorphism and breast cancer are possible from this study. A dose of 120 nmol/L FA could enhance apoptosis in cases with MTHFR C677T-A1298C. Breast cancer individuals with the TTAC genotype may be more sensitive to the genotoxic effects of FA deficiency than controls. 4-(Methylnitrosamino)-1-(3-pyridyl)-1-butanone Induces Retinoic Acid Receptor β Hypermethylation through DNA Methyltransferase 1 Accumulation in Esophageal Squamous Epithelial Cells Wang, Jing;Zhao, Shu-Lei;Li, Yan;Meng, Mei;Qin, Cheng-Yong 2207 Overexpression of DNA methyltransferase 1 (DNMT1) has been detected in many cancers. Tobacco exposure is known to induce genetic and epigenetic changes in the pathogenesis of malignancy. 
4-(Methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) is an important carcinogen present in tobacco smoke; however, the detailed molecular mechanism by which NNK induces esophageal carcinogenesis is still unclear. We found that DNMT1 was overexpressed in ESCC tissues compared with paired non-cancerous tissues, the overexpression being correlated with smoking status and low expression of $RAR{\beta}$. DNMT1 could be upregulated by NNK treatment in Het-1A cells, and the increased DNMT1 expression was accompanied by promoter hypermethylation and downregulation of retinoic acid receptor ${\beta}$ ($RAR{\beta}$). RNA interference mediated knockdown of DNMT1 resulted in promoter demethylation and upregulation of $RAR{\beta}$ in KYSE30 and TE-1 cells. 3-(4,5-Dimethyl-thiazol-2yl)-2,5-diphenyltetrazolium bromide (MTT) and flow cytometric analysis demonstrated that NNK treatment of Het-1A cells could enhance cell proliferation and inhibit cell apoptosis in a dose-dependent manner. In conclusion, DNMT1 overexpression is correlated with smoking status and low expression of $RAR{\beta}$ in esophageal SCC patients. NNK could induce $RAR{\beta}$ promoter hypermethylation through upregulation of DNMT1 in esophageal squamous epithelial cells, finally leading to enhancement of cell proliferation and inhibition of apoptosis.

Who are the Breast Cancer Survivors in Malaysia? Ibrahim, Nor Idawaty;Dahlui, M.;Aina, E.N.;Al-Sadat, N. 2213 Introduction: Worldwide, breast cancer is the commonest cause of cancer death in women. However, the survival rate varies across regions, at averages of 73% and 57% in the developed and developing countries, respectively. Objective: This study aimed to determine the survival rate of breast cancer among the women of Malaysia and the characteristics of the survivors. Method: A retrospective cohort study was conducted on secondary data obtained from the Breast Cancer Registry and medical records of breast cancer patients admitted to Hospital Kuala Lumpur from 2005 to 2009. Survival data were validated with the National Birth and Death Registry. Statistical analysis applied logistic regression, the Cox proportional hazard model, the Kaplan-Meier method and the log-rank test. Results: A total of 868 women were diagnosed with breast cancer between January 2005 and December 2009, comprising 58%, 25% and 17% Malays, Chinese and Indians, respectively. The overall survival rate was 43.5% (CI 0.573-0.597), with Chinese, Indians and Malays having 5-year survival rates of 48.2% (CI 0.444-0.520), 47.2% (CI 0.432-0.512) and 39.7% (CI 0.373-0.421), respectively (p<0.05). The survival rate was lower as the stages increased, with late stages mostly seen among the Malays (46%), followed by Chinese (36%) and Indians (34%). Tumor size >3.0 cm, lymph node involvement, ER/PR and HER2 status, delayed presentation and involvement of both breasts were among other factors associated with poor survival. Conclusions: The overall survival rate of Malaysian women with breast cancer was lower than Western figures, with Malays having the lowest survival because they presented at a late stage, after a long duration of symptoms, had larger tumor size, and had more lymph nodes affected. There is an urgent need to conduct studies on why there is delay in the diagnosis and treatment of breast cancer in Malaysian women.
Elevated Circulating CD19+ Lymphocytes Predict Survival Advantage in Patients with Gastric Cancer Yu, Qi-Ming;Yu, Chuan-Ding;Ling, Zhi-Qiang 2219 Background: Circulating lymphocyte subsets reflect immunological status and might therefore be a prognostic indicator in cancer patients. Our aim was to evaluate the clinical significance of circulating lymphocyte subsets in gastric cancer (GC) cases. Methods: A retrospective study on a prevalent cohort of 846 GC patients hospitalized from Aug 2006 to Jul 2010 was conducted. We calculated the patients' disease-free survival (DFS) after first hospital admission, and hazard ratios (HR) from the Cox proportional hazards model. Results: Our findings indicated a significantly decreased percentage of CD3+ and CD8+ cells, a significantly increased proportion of $CD4^+$, $CD19^+$, $CD44^+$, $CD25^+$ and NK cells, and an increased $CD4^+/CD8^+$ ratio in GC patients as compared with healthy controls (all P < 0.05). Alteration of lymphocyte subsets was positively correlated with sex, age, smoking, tumor stage and distant metastasis of GC patients (all P<0.05). Follow-up analysis indicated significantly higher DFS for patients with high circulating $CD19^+$ lymphocytes compared to those with low $CD19^+$ lymphocytes (P=0.037), with $CD19^+$ showing an important cutoff of $7.91{\pm}2.98%$. Conclusion: Circulating lymphocyte subsets in GC patients are significantly changed, and elevated CD19+ cells may predict a favorable survival.

Ifosfamide and Doxorubicin Combination Chemotherapy for Recurrent Nasopharyngeal Carcinoma Patients Dede, Didem Sener;Aksoy, Sercan;Cengiz, Mustafa;Gullu, Ibrahim;Altundag, Kadri 2225 Background: We retrospectively assessed the efficacy and toxicity of an ifosfamide and doxorubicin combination chemotherapy (CT) regimen in Turkish patients with recurrent or metastatic nasopharyngeal carcinoma (NPC) previously treated with platinum-based chemotherapy. Methods: A total of thirty patients who had received cisplatin-based chemotherapy/chemoradiotherapy as primary treatment received ifosfamide 2500 $mg/m^2$ days 1-3, mesna 2500 $mg/m^2$ days 1-3 and doxorubicin 60 $mg/m^2$ day 1 (IMA), repeated every 21 days. Eligible patients had ECOG PS < 2, measurable recurrent or metastatic disease, and adequate renal, hepatic and hematologic functions. Results: Median age was 47 (range 17-60). Twenty-six (86.7%) were male. The median number of chemotherapy cycles per patient was 2 (range: 1-6). Twenty patients were evaluable for toxicity and response. No patient achieved complete response, with nine partial responses for a response rate of 30.0% in evaluable patients. Stable disease and disease progression were observed in five (16.7%) and six (20.0%) patients, respectively. Clinical benefit was 46.7%. Median time to progression was 4.0 months. Six patients had neutropenic fever after the IMA regimen and there was one treatment-related death due to tumor lysis syndrome in the first cycle of CT. No cardiotoxicity was observed after CT and treatments were generally well tolerated. Conclusion: The ifosfamide and doxorubicin combination is an effective regimen for patients with recurrent and metastatic NPC. For NPC patients demonstrating failure of cisplatin-based regimens, this CT combination may be considered as salvage therapy.
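Several of the abstracts above (the $N_0$ nasopharyngeal carcinoma series and the gastric cancer $CD19^+$ study, for example) rely on the same survival workflow: Kaplan-Meier estimation, group comparison with the log-rank test, and hazard ratios from a Cox proportional hazards model. The sketch below shows a minimal version of that workflow with the Python lifelines library on purely synthetic data; the group label, follow-up times and event indicators are invented and are not taken from any of the studies.

# Minimal survival-comparison sketch (synthetic data, not the studies' data),
# illustrating the Kaplan-Meier / log-rank / Cox workflow used in the abstracts above.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                                # 1 = hypothetical "high marker" group
time = rng.exponential(scale=np.where(group == 1, 60, 40))   # follow-up in months, synthetic
event = (rng.random(n) < 0.7).astype(int)                    # 1 = progression/death observed

df = pd.DataFrame({"time": time, "event": event, "group": group})

# Kaplan-Meier curves per group
km = KaplanMeierFitter()
for g, sub in df.groupby("group"):
    km.fit(sub["time"], sub["event"], label=f"group {g}")

# Log-rank comparison of the two groups
res = logrank_test(df.loc[df.group == 1, "time"], df.loc[df.group == 0, "time"],
                   df.loc[df.group == 1, "event"], df.loc[df.group == 0, "event"])
print("log-rank p-value:", res.p_value)

# Cox proportional hazards model: hazard ratio for the group indicator
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()

In practice the grouping variable would be the marker of interest (for instance a high versus low $CD19^+$ level) and further covariates would be added to the Cox model to obtain adjusted hazard ratios.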
A Novel Molecular Grading Model: Combination of Ki67 and VEGF in Predicting Tumor Recurrence and Progression in Non-invasive Urothelial Bladder Cancer Chen, Jun-Xing;Deng, Nan;Chen, Xu;Chen, Ling-Wu;Qiu, Shao-Peng;Li, Xiao-Fei;Li, Jia-Ping 2229 Purpose: To assess the efficacy of Ki67 combined with VEGF as a molecular grading model to predict outcomes with non-muscle invasive bladder cancer (NMIBC). Materials: 72 NMIBC patients who underwent transurethral resection (TUR) followed by routine intravesical instillations were retrospectively analyzed in this study. Univariate and multivariate analyses were performed to confirm the prognostic values of the Ki67 labeling index (LI) and VEGF scoring for tumor recurrence and progression. Results: The novel molecular grading model for NMIBC contained three molecular grades: mG1 (Ki67 $LI{\leq}25%$, VEGF $scoring{\leq}8$), mG2 (Ki67 LI>25%, VEGF $scoring{\leq}8$; or Ki67 $LI{\leq}25%$, VEGF scoring > 8), and mG3 (Ki67 LI > 25%, VEGF scoring > 8), indicating favorable, intermediate and poor prognosis, respectively. Conclusions: The described novel molecular grading model utilizing Ki67 LI and VEGF scoring is helpful to effectively and accurately predict outcomes and optimize personalized therapy.

c-Src Antisense Complexed with PAMAM Dendrimers Decreases c-Src Expression and EGFR-Dependent Downstream Genes in the Human HT-29 Colon Cancer Cell Line Nourazarian, Ali Reza;Pashaei-Asl, Roghiyeh;Omidi, Yadollah;Najar, Ahmad Gholamhoseinian 2235 c-Src is a member of the non-receptor tyrosine kinase family that is overexpressed and activated in many human cancer cells. It has been shown that c-Src is implicated in various downstream signaling pathways associated with EGFR-dependent signaling, such as the MAPK and STAT5 pathways. Transactivation of EGFR by c-Src is more effective than by EGFR ligands. To inhibit c-Src expression, we used a c-Src antisense oligonucleotide complexed with PAMAM dendrimers. The effect of the c-Src antisense oligonucleotide on HT29 cell proliferation was determined by MTT assay. Then, the expression of c-Src, EGFR, genes related to EGFR-dependent signaling and P53 was assessed by real-time PCR. We used Western blot analysis to elucidate the effect of the antisense on the level of c-Src protein expression. The results showed that c-Src antisense complexed with PAMAM dendrimers effectively decreased the expression of c-Src and of EGFR-dependent downstream genes.

What Made Her Give Up Her Breasts: a Qualitative Study on Decisional Considerations for Contralateral Prophylactic Mastectomy among Breast Cancer Survivors Undergoing BRCA1/2 Genetic Testing Kwong, Ava;Chu, Annie T.W. 2241 Objective: This qualitative study retrospectively examined the experience and psychological impact of contralateral prophylactic mastectomy (CPM) among Southern Chinese females with a unilateral breast cancer history who underwent BRCA1/2 genetic testing. Limited knowledge is available on this topic, especially among Asians; therefore, the aim of this study was to acquire insight from Chinese females' subjective perspectives. Methods: A total of 12 semi-structured in-depth interviews, with 11 female BRCA1/BRCA2 mutated gene carriers and 1 non-carrier with a history of one-sided breast cancer and genetic testing performed by the Hong Kong Hereditary Breast Cancer Family Registry, who subsequently underwent CPM, were assessed using thematic analysis and a Stage Conceptual Model.
Breast cancer history, procedures conducted, cosmetic satisfaction, pain, body image and sexuality issues, and cancer risk perception were discussed. Retrieval of medical records using a prospective database was also performed. Results: All participants opted for prophylaxis due to their reservations concerning the efficacy of surveillance and worries about the risk of recurrent breast cancer. Most participants were satisfied with the overall results and their decision. One-fourth expressed some degree of regret. Psychological relief and decreased breast cancer risk were stated as major benefits. Spouses' reactions and support were crucial for post-surgery sexual satisfaction and long-term adjustment. Conclusions: Our findings indicate that thorough education on cancer risk and realistic expectations of surgery outcomes are crucial for positive adjustment after CPM. Appropriate genetic counseling and pre- and post-surgery psychological counseling are necessary. This study adds valuable contextual insights into the experiences of living with breast cancer fear and the importance of involving spouses when counseling these patients.

MTHFR Polymorphisms and Pancreatic Cancer Risk: Lack of Evidence from a Meta-analysis Li, Lei;Wu, Sheng-Di;Wang, Ji-Yao;Shen, Xi-Zhong;Jiang, Wei 2249 Objective: Methylenetetrahydrofolate reductase (MTHFR) gene polymorphisms have been reported to be associated with pancreatic cancer, but the published studies have yielded inconsistent results. We therefore performed the present meta-analysis. Methods: A search of the Google Scholar, PubMed, Cochrane Library and CNKI databases before April 2012 was conducted to summarize associations of MTHFR polymorphisms with pancreatic cancer risk. Assessment was with odds ratios (ORs) and 95% confidence intervals (CIs). Publication bias was also assessed. Results: Four relevant studies on MTHFR gene polymorphisms (C677T and A1298C) were included in this meta-analysis. Overall, C677T (TT vs. CC: OR = 1.61, 95% CI = 0.78-3.34; TT vs. CT: OR = 1.41, 95% CI = 0.88-2.25; dominant model: OR = 0.68, 95% CI = 0.40-1.17; recessive model: OR = 0.82, 95% CI = 0.52-1.30) and A1298C (CC vs. AA: OR = 1.01, 95% CI = 0.47-2.17; CC vs. AC: OR = 0.99, 95% CI = 0.46-2.14; dominant model: OR = 1.01, 95% CI = 0.47-2.20; recessive model: OR = 1.01, 95% CI = 0.80-1.26) did not increase pancreatic cancer risk. Conclusion: This meta-analysis indicated that MTHFR polymorphisms (C677T and A1298C) were not associated with pancreatic cancer risk.

Serum Amyloid A as an Independent Prognostic Factor for Renal Cell Carcinoma - A Hospital Based Study from the Western Region of Nepal Mittal, Ankush;Poudel, Bibek;Pandeya, Dipendra Raj;Gupta, Satrudhan Pd;Sathian, Brijesh;Yadav, Shambhu Kumar 2253 Objective: The objective of our present study was to assess the role of serum amyloid A (SAA) in the staging and prognosis of renal cell carcinoma. Material and Methods: This was a hospital based retrospective study carried out in the Departments of Medicine and Biochemistry of Manipal Teaching Hospital, Pokhara, Nepal between $1^{st}$ January 2008 and $31^{st}$ December 2011. The variables collected were SAA and CRP. Approval for the study was obtained from the institutional research ethics committee. Quantitative analysis of human SAA and C-reactive protein (CRP) was performed by radial immunodiffusion (RID) assay for all cases. Results: Of the 422 total cases of renal cell carcinoma, 218 patients had normal and 204 abnormal SAA.
SAA levels were grossly elevated in T3 stage ($122.3{\pm}SD35.7$) when compared to the mean for the T2 stage ($84.2{\pm}SD24.4$) (p value: 0.0001). Similarly, SAA levels were grossly elevated in M1 stage ($190.0{\pm}SD12.7$) when compared to the M0 stage ($160.9{\pm}SD24.8$) (p: 0.0001). There was no significant association with elevated CRP levels ($209.1{\pm}SD22.7$, normal $199.0{\pm}SD19.5$). Conclusion: The validity of SAA in serum as being of independent prognostic significance in RCC was demonstrated with higher levels in advanced stage disease. Effects of Pinocembrin on the Initiation and Promotion Stages of Rat Hepatocarcinogenesis Punvittayagul, Charatda;Pompimon, Wilart;Wanibuchi, Hideki;Fukushima, Shoji;Wongpoomchai, Rawiwan 2257 Pinocembrin (5, 7-dihydroxyflavanone) is a flavanone extracted from the rhizome of Boesenbergia pandurata. Our previous studies demonstrated that pinocembrin had no toxicity or mutagenicity in rats. We here evaluated its effects on the initiation and promotion stages in diethylnitrosamine-induced rat hepatocarcinogenesis, using short- and medium-term carcinogenicity tests. Micronucleated hepatocytes and liver glutathione-S-transferase placental form foci were used as end point markers. Pinocembrin was neither mutagenic nor carcinogenic in rat liver, and neither inhibited nor prevented micronucleus formation as well as GST-P positive foci formation induced by diethylnitrosamine. Interestingly, pinocembrin slightly increased the number of GST-P positive foci when given prior to diethylnitrosamine injection. miR-181b as a Potential Molecular Target for Anticancer Therapy of Gastric Neoplasms Guo, Jian-Xin;Tao, Qing-Song;Lou, Peng-Rong;Chen, Xiao-Chun;Chen, Jun;Yuan, Guang-Bo 2263 Objective: MicroRNAs (miRNAs) play important roles in carcinogenesis. The aim of the present study was to explore the effects of miR-181b on gastric cancer. Methods: The expression level of miR-181b was quantified by qRT-PCR. MTT, flow cytometry and matrigel invasion assays were used to test proliferation, apoptosis and invasion of miR-181b stable transfected gastric cancer cells. Results: miR-181b was aberrantly overexpressed in gastric cancer cells and primary gastric cancer tissues. Further experiments demonstrated inducible expression of miR-181b by Helicobacter pylori treatment. Cell proliferation, migration and invasion in the gastric cancer cells were significantly increased after miR-181b transfection and apoptotic cells were also increased. Furthermore, overexpression of miR-181b downregulated the protein level of tissue inhibitor of metalloproteinase 3 (TIMP3). Conclusion: The upregulation of miR-181b may play an important role in the progress of gastric cancer and miR-181b maybe a potential molecular target for anticancer therapeutics of gastric cancer. Health-promoting Lifestyle Behaviour for Cancer Prevention: a Survey of Turkish University Students Ay, Semra;Yanikkerem, Emre;Calim, Selda Ildan;Yazici, Mete 2269 Background: Health risks associated with unhealthy behaviours in adolescent and university students contribute to the development of health problems in later life. During the past twenty years, there has been a dramatic increase in public, private, and professional interest in preventing disability and death through changes in lifestyle and participation in screening programs. The aim of the study was to evaluate university students' health-promoting lifestyle behaviour for cancer prevention. 
Method: This study was carried out on university students who had education in sports, health and social areas in Celal Bayar University, Manisa, Turkey. The health-promoting lifestyles of university students were measured with the "health-promoting lifestyle profile (HPLP)" The survey was conducted from March 2011 to July 2011 and the study sample consisted of 1007 university students. T-test, ANOVA and multiple regression analyses were used for statistical analyses. Results: In the univariate analyses, the overall HPLP score was significantly related to students' school, sex, age, school grades, their status of received health education lessons, place of birth, longest place of residence, current place of residence, health insurance, family income, alcohol use, their status in sports, and self-perceived health status. Healthier behaviour was found in those students whose parents had higher secondary degrees, and in students who had no siblings. In the multiple regression model, healthier behaviour was observed in Physical Education and Sports students, fourth-year students, those who exercised regularly, had a good self-perceived health status, who lived with their family, and who had received health education lessons. Conclusion: In general, in order to ensure cancer prevention and a healthy life style, social, cultural and sportive activities should be encouraged and educational programmes supporting these goals should be designed and applied in all stages of life from childhood through adulthood. Knowledge, Attitude and Practice of Malaysian Medical and Pharmacy Students Towards Human Papillomavirus Vaccination Rashwan, Hesham H.;Saat, Nur Zakiah N. Mohd;Manan, Dahlia Nadira Abd 2279 Human Papillomavirus (HPV) infection is one of the most common sexually transmitted infections and oncogenic HPV is the main cause of cervical cancer. However, HPV vaccination is already available as the primary preventive method against cervical cancer. The objective of this study was to determine the level of knowledge, attitude and practice of HPV vaccination among Universiti Kebangsaan Malaysia (UKM) and Universiti Malaya (UM) students. This study was conducted from March until August 2009. Pre-tested and validated questionnaires were filled by the third year UKM (n=156) and UM (n=149) students from medical, dentistry and pharmacy faculties. The results showed that the overall level of knowledge on HPV infection, cervical cancer and its prevention among respondents was high and the majority of them had positive attitude towards HPV vaccination. Medical students had the highest level of knowledge (p<0.05). Very few students (3.6%) had already taken the vaccine with no significant difference between the two Universities (p=0.399). In conclusion, the knowledge and attitude of the respondents were high and positive, respectively. Only few students took HPV vaccination. Thus, more awareness campaigns and HPV vaccination services should be provided at universities' campuses with the price of the HPV vaccine reduced for the students. Multidrug Resistance-Associated Protein 1 Predicts Relapse in Iranian Childhood Acute Lymphoblastic Leukemia Mahjoubi, Frouzandeh;Akbari, Soodeh 2285 Multidrug resistance (MDR) is a main cause of failure in the chemotherapeutic treatment of malignant disorders. One of the well-known genes responsible for drug resistance encodes the multidrug resistance-associated protein (MRP1). 
The association of MRP1 with clinical drug resistance has not systematically been investigated in Iranian pediatric leukemia patients. We therefore applied real-time RT-PCR technology to study the association between the MRP1 gene and MDR phenotype in Iranian pediatric leukemia patients. We found that overexpression of MRP1 occurred in most Iranian pediatric leukemia patients at relapse. However, no relation between MRP1 mRNA levels and other clinical characteristics, including cytogenetic subgroups and FAB subtypes, was found. Evaluation of Dietary and Life-Style Habits of Patients with Gastric Cancer: A Case-Control Study in Turkey Yassibas, Emine;Arslan, Perihan;Yalcin, Suayib 2291 Objective: Gastric cancer is an important public health problem in the world and Turkey. In addition to Helicobacter pylori (H. pylori), smoking, alcohol consumption and family history, certain dietary factors have been associated with its occurrence. The impact of dietary habits and life-style factors on the risk of gastric cancer in Turkey were evaluated in this study. Design: A questionnaire was applied to 106 patients with gastric adenocarcinoma and 106 controls without cancer matched for age (range 28-85 years) and gender selected from a hospital based population. Adjusted odds ratios (ORs) and 95% confidence intervals (CI) were calculated with logistic regression analysis. Results: The incidence of H. pylori was 81.3% in patients. Frequent consumption of salty dishes, very salty foods like pickles, soup mixes, sausages, foods at hot temperature (ORs = 3.686, 7.784, 5.264, 3.148 and 3.273 respectively) and adding salt without tasting (OR = 4.198) were associated with increased gastric risk. Also heavy smoking and high amount of alcohol consumption (p = 0.000) were risk factors. Frequent consumption of green vegetables, onion, garlic and dried fruits (ORs = 0.569, 0.092, 0.795 and 0.041) was nonsignificantly associated with decreased risk. Conclusion: Improved dietary habits, reducing salt consumption and eradication of H. pylori infection may provide protection against gastric cancer in Turkey. Detection of Human Papillomavirus in Normal Oral Cavity in a Group of Pakistani Subjects using Real-Time PCR Gichki, Abdul Samad;Buajeeb, Waranun;Doungudomdacha, Sombhun;Khovidhunkit, Siribang-On Pibooniyom 2299 Since there is evidence that human papillomavirus (HPV) may play some role in oral carcinogenesis, we investigated the presence of HPV in a group of Pakistani subjects with normal oral cavity using real-time PCR analysis. Two-hundred patients attending the Dental Department, Sandaman Provincial Hospital, Balochistan, Pakistan, were recruited. After interview, oral epithelial cells were collected by scraping and subjected to DNA extraction. The HPV-positive DNA samples were further analyzed using primer sets specific for HPV-16 and -18. It was found that out of 200 DNA samples, 192 were PCR-positive for the ${\beta}$-globin gene and these were subsequently examined for the presence of HPV DNA. Among these, 47 (24.5%) were HPV-positive with the virus copy number ranged between 0.43-32 copies per 1 ${\mu}g$ of total DNA (9-99 copies per PCR reaction). There were 4 and 11 samples containing HPV-16 and -18, respectively. Additionally, one sample harbored both types of HPV. Among the investigated clinical parameters, smoking habit was associated with the presence of HPV (p = 0.001) while others indicated no significant association. 
The prevalence of HPV in normal oral cavity in our Pakistani subjects appears to be comparable to other studies. However, the association between the presence of HPV and smoking warrants further investigation of whether both of these factors can cooperate in inducing oral cancer in this group of patients. Antiproliferative Effects of Crocin in HepG2 Cells by Telomerase Inhibition and hTERT Down-Regulation Noureini, Sakineh Kazemi;Wink, Michael 2305 Crocin, the main pigment of Crocus sativus L., has been shown to have antiproliferative effects on cancer cells, but the mechanisms involved are only poorly understood. This study focused on the probable effect of crocin on the immortality of hepatic cancer cells. Cytotoxicity of crocin ($IC_{50}$ 3 mg/ml) in hepatocarcinoma HepG2 cells was determined after 48 h by neutral red uptake assay and MTT test. Immortality was investigated through quantification of relative telomerase activity with a quantitative real-time PCR-based telomerase repeat amplification protocol (qTRAP). Telomerase activity in 0.5 ${\mu}g$ protein extract of HepG2 cells treated with 3 mg/ml crocin was reduced to about 51% as compared to untreated control cells. Two mechanisms of inhibition, i.e. interaction of crocin with telomeric quadruplex sequences and down-regulation of hTERT expression, were examined using FRET analysis to measure the melting temperature of a synthetic telomeric oligonucleotide in the presence of crocin and quantitative real-time RT-PCR, respectively. No significant changes were observed in the $T_m$ of the telomeric oligonucleotides, while the relative expression level of the catalytic subunit of telomerase (hTERT) gene showed a 60% decrease as compared to untreated control cells. In conclusion, telomerase activity of HepG2 cells decreases after treatment with crocin, which is probably caused by down-regulation of the expression of the catalytic subunit of the enzyme. Association between Polymorphisms in UDP-glucuronosyltransferase 1A6 and 1A7 and Colorectal Cancer Risk Osawa, Kayo;Nakarai, Chiaki;Akiyama, Minami;Hashimoto, Ryuta;Tsutou, Akimitsu;Takahashi, Juro;Takaoka, Yuko;Kawamura, Shiro;Shimada, Etsuji;Tanaka, Kenichi;Kozuka, Masaya;Yamamoto, Masahiro;Kido, Yoshiaki 2311 Genetic polymorphisms of uridine diphosphate-glucuronosyltransferases 1A6 (UGT1A6) and 1A7 (UGT1A7) may lead to genetic instability and colorectal cancer carcinogenesis. Our objective was to measure the interaction between polymorphisms of these repair genes and tobacco smoking in colorectal cancer (CRC). A total of 68 individuals with CRC and 112 non-cancer controls were divided into non-smoker and smoker groups according to pack-years of smoking. Genetic polymorphisms of UGT1A6 and UGT1A7 were examined using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). We found a weak association of UGT1A6 polymorphisms with CRC risk (crude odds ratio [OR], 1.65; 95% confidence interval [95% CI], 0.9-3.1, P=0.107; adjusted OR 1.95, 95% CI 1.0-3.8, P=0.051). The ORs for the UGT1A7 polymorphisms were statistically significant (crude OR: 26.40, 95% CI: 3.5-198.4, P=0.001; adjusted OR: 21.52, 95% CI: 2.8-164.1, P=0.003). The joint effect of tobacco exposure and UGT1A6 polymorphisms was significantly associated with colorectal cancer risk in non-smokers (crude OR, 2.11; 95% CI, 0.9-5.0, P=0.092; adjusted OR 2.63, 95% CI, 1.0-6.7, P=0.042). In conclusion, our findings suggest that UGT1A6 and UGT1A7 gene polymorphisms are associated with CRC risk in the Japanese population.
In particular, UGT1A6 polymorphisms may strongly increase CRC risk through the formation of carcinogens not associated with smoking. Use of Smoke-less Tobacco Amongst the Staff of Tertiary Care Hospitals in the Largest City of Pakistan Valliani, Arif;Ahmed, Bilawal;Nanji, Kashmira;Valliani, Salimah;Zulfiqar, Beenish;Fakih, Misbah;Mehdi, Mehwish;Khan, Anam;Sheikh, Sana Arshad;Fatima, Nida;Ahmad, Sobia;Farah, Fariya;Saleem, Shaheera;Ather, Sana;Majid, Syed Khubaib;Hashmi, Syed Salman;Arjan, Sunil 2315 Background: Use of smoke-less tobacco (SLT) is very common in South and South-East Asian countries. It is significantly associated with various types of cancers. The objectives of this study were to assess the proportion of hospital staff that use SLT, and to identify the factors associated with its use and their practices. Methods: In a cross-sectional study, 560 staff of two tertiary care hospitals were interviewed in the year 2009. Nurses, ward boys and technicians were counted as paramedic staff while drivers, peons, security guards and housekeeping staff were labeled as non-paramedic staff. SLT use was considered as usage of any of the following: betel quid (paan) with or without tobacco, betel nuts with or without tobacco (gutkha) and snuff (naswar). Results: About half (48.6%) of the hospital staff were using at least one type of SLT. Factors found to be significantly associated with SLT use were being male (OR=2.5; 95% CI=1.8-3.7); having no/fewer years of education (OR=1.7; 95% CI=1.2-2.4) and working as non-paramedic staff (OR=2.6; 95% CI=1.8-3.8). The majority of SLT users were using it on a regular basis, for > 5 years, and keeping the tobacco products in the oral cavity for >30 minutes. About half of the users started due to peer pressure and had tried to quit this habit but failed. Conclusion: In this study, about half of the study participants were using SLT in different forms. We suggest educational and behavioral interventions for control of SLT usage. Expression and Clinical Significance of Hedgehog Signaling Pathway Related Components in Colorectal Cancer Wang, Hong;Li, Yu-Yuan;Wu, Ying-Ying;Nie, Yu-Qiang 2319 Aim: To investigate the expression of three components of the Hedgehog (Hh) signaling pathway (SHH, SMO and GLI1) in human colorectal cancer (CRC) tissues and evaluate their association with clinicopathologic characteristics of the patients. Methods: Fresh tumor tissues and matched tissues adjacent to the tumor were collected from 43 CRC patients undergoing surgery. Normal colorectal tissues from 20 non-CRC cases were also sampled as normal controls. The expression of SHH, SMO, GLI1 mRNAs was assessed by RT-PCR and proteins were detected by immunohistochemical staining. Associations with clinicopathological characteristics of patients were analyzed. Results: SHH mRNA was expressed more frequently in tumor tissues than in normal tissues, but the difference did not reach significance in comparison to that in the adjacent tissues. SMO and GLI1 mRNAs were expressed more frequently in tumor tissues than in both adjacent and normal tissues. The expression intensities of SHH, SMO, GLI1 mRNA in tumor tissues were significantly higher than those in adjacent tissues and normal tissues. Proteins were also detected more frequently in tumors than in other tissues. No significant links were apparent with gender, age, location, degree of infiltration or Dukes stage.
Conclusion: Positive rates and intensities of mRNA and protein expression of the Hh signaling pathway related genes SHH, SMO, GLI1 were found to be significantly increased in CRC tissues. However, over-expression did not appear to be associated with particular clinicopathological characteristics. Effects of the Cyclin D1 Polymorphism on Lung Cancer Risk - a Meta-analysis Li, Yue;Zhang, Shuai;Geng, Jian-Xiong;Yu, Yan 2325 Background: Cyclin D1 (CCND1) is critical in the transition of the cell cycle from G1 to S phases and unbalanced cell cycle regulation is a hallmark of carcinogenesis. A number of studies conducted to assess the association between the CCND1 G870A polymorphism and susceptibility to lung cancer have yielded inconsistent and inconclusive results. In the present study, the possible association above was assessed by a meta-analysis. Methods: Eligible articles were identified for the period up to November 2011. Pooled odds ratios (OR) with 95% confidence intervals (95%CI) were appropriately derived from fixed-effects or random-effects models. Sensitivity analysis excluding studies whose genotype frequencies in controls significantly deviated from the Hardy-Weinberg equilibrium (HWE) was performed. Results: Ten case-control studies with a total of 10,548 subjects were eligible. In the overall analysis the CCND1 870A allele appeared to be associated with elevated lung cancer risk (for allele model, pooled OR = 1.24, 95% CI: 1.08-1.44, P = 0.004; for homozygous model, pooled OR = 1.45, 95% CI: 1.14-1.84, P = 0.003; for recessive model, pooled OR = 1.29, 95% CI: 1.06-1.58, P = 0.013; for dominant model, pooled OR = 1.33, 95% CI: 1.08-1.65, P = 0.009). Subgroup analyses by ethnicity and sensitivity analysis further pointed to associations, particularly in Asians. Conclusion: This meta-analysis suggests that the A allele of the CCND1 G870A polymorphism confers additional lung cancer risk. Prognostic Role of MicroRNA-21 in Non-small Cell Lung Cancer: a Meta-analysis Ma, Xue-Lei;Liu, Lei;Liu, Xiao-Xiao;Li, Yun;Deng, Lei;Xiao, Zhi-Lan;Liu, Yan-Tong;Shi, Hua-Shan;Wei, Yu-Quan 2329 Introduction: Many studies have reported that microRNA-21 (miR-21) might predict the survival outcome in non-small cell lung cancers (NSCLCs) but the opposite opinion has also been expressed. The aim of this study was to summarize the evidence for a prognostic role of miR-21. Materials and Methods: All eligible studies were identified by searching Medline and EMBASE, and patients' clinical characteristics and survival outcomes were extracted. Then a meta-analysis was performed to clarify the prognostic role of miR-21 expression in different subgroups. Results: A total of 8 eligible articles covering survival outcomes or clinical characteristics were identified. The combined hazard ratio (HR) and 95% confidence interval (95% CI) for overall survival (OS) was 2.19 [0.76, 6.30], while the combined HR (95% CI) of the Asian group for OS had a significant result, 5.49 [2.46, 12.27]. The combined HR (95% CI) for recurrence-free survival or disease-free survival (RFS/DFS) was 2.31 [1.52, 3.49]. Odds ratios (ORs) showed that miR-21 expression was associated with lymph node status and histological type. Conclusion: miR-21 expression could predict the prognostic outcome of NSCLC in Asians, despite some deficiencies in the study data.
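The pooled estimates quoted in the two meta-analysis abstracts above are typically obtained by inverse-variance weighting of the log-transformed study-level ratios. The snippet below is a generic fixed-effect illustration of that calculation with made-up study values; it is not a recalculation of any of the published numbers.

import math

# Hypothetical study-level odds ratios with 95% confidence intervals
# (illustrative numbers only, not data from the abstracts above)
studies = [(1.20, 0.95, 1.52), (1.45, 1.05, 2.00), (1.10, 0.80, 1.51)]

weights, log_ors = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log(OR) recovered from the CI
    weights.append(1.0 / se**2)                       # inverse-variance weight
    log_ors.append(math.log(or_))

pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"pooled OR = {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96*pooled_se):.2f}"
      f"-{math.exp(pooled_log + 1.96*pooled_se):.2f})")

The same weighting applies to pooled hazard ratios once study HRs and their confidence intervals are log-transformed; random-effects models add a between-study variance term to each weight.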
Metabolic Changes Enhance the Cardiovascular Risk with Differentiated Thyroid Carcinoma - A Case Control Study from Manipal Teaching Hospital of Nepal Objective: To evaluate several metabolic changes in patients with differentiated thyroid carcinoma (DTC ) which enhance cardiovascular risk in the western region of Nepal. Materials and Methods: This hospital based case control study was carried out using data retrieved from the register maintained in the Department of Biochemistry of the Manipal Teaching Hospital, Pokhara, Nepal between $1^{st}$ January, 2009 and $31^{st}$ December, 2011. The variables collected were age, gender, BMI, glucose, insulin, HbA1C, CRP, fibrinogen, total cholesterol, triglycerides, HDL, LDL, VLDL, f-T3, f-T4, TSH. One way ANOVA was used to examine statistical significance of differences between groups, along with the Post Hoc test LSD for comparison of means. Results: fT3 values were markedly raised in DTC cases ($5.7{\pm}SD1.4$) when compared to controls ($2.2{\pm}SD0.9$). Similarly, fT4 values were also moderately raised in cases of DTC ($4.9{\pm}SD1.3$ and $1.7{\pm}SD0.9$). In contrast, TSH values were lowered in DTC cases ($0.39{\pm}SD0.4$) when compared to controls ($4.2{\pm}SD1.4$). Mean blood glucose levels were decreased while insulin was increased and HDL reduced ($39.5{\pm}SD4.7$ as compared to the control $43.1{\pm}SD2.2$). Conclusion: Cardiovascular risk may be aggravated by insulin resistance, a hypercoagulable state, and an atherogenic lipid profile in patients with differentiated thyroid cancer. Comparative Study on the Value of Anal Preserving Surgery for Aged People with Low Rectal Carcinoma in Jiangsu, China Yu, Dong-Sheng;Huang, Xin-En;Zhou, Jian-Nong 2339 Objective: To compare the efficacy of anal preserving surgery for aged people with low rectal carcinoma. Methods: Clinical data for a consecutive cohort of 98 rectal cancer patients with distal tumors located within 3cm -7cm of the anal verge were collected. Among these, 42 received anal preserving surgery (35 with Dixon, 3 with Parks and 4 with transanal operations). The local recurrence and survival rates in the above operations were compared with those of the Miles operation in another 56 patients with rectal cancer. Results: The local recurrence and 3-, 5-year survival rates of anal preserving surgery were 16.7%, 64.3% and 52.4%, those of Miles operations were 16.1%, 67.9% and 51.8% respectively (P>0.05). Conclusion: Anal preserving surgery for aged people with low rectal cancer is not inferior to conventional operations in China, with satisfactory long term survival and comparable local recurrence rates. Epidemiological Evaluation of Breast Cancer in Ecological areas of Kazakhstan - Association with Pollution Emissions Bilyalova, Zarina;Igissinov, Nurbek;Moore, Malcolm;Igissinov, Saginbek;Sarsenova, Samal;Khassenova, Zauresh 2341 The aim of the research was to evaluate the incidence of breast cancer in the ecological areas of Kazakhstan and assess the potential. A retrospective study of 11 years (1999 to 2009) was conducted using descriptive and analytical methods. The incidence of breast cancer was the lowest in the Aral-Syr Darya area ($18.6{\pm}0.80$/100,000), and highest in the Irtysh area ($48.9{\pm}1.90$/100,000), with an increasing trends over time in almost all areas. A direct strong correlation between the degree of contamination with high pollution emissions in the atmosphere from stationary sources and the incidence of breast cancer ($r=0.77{\pm}0.15$; p=0.026). 
The results indicate an increasing importance of breast cancer in Kazakhstan and an etiological role for environmental pollution. Epidemiological Aspects of Morbidity and Mortality from Cervical Cancer in Kazakhstan Igissinov, Nurbek;Nuralina, Indira;Igissinova, Gulnur;Kim, Sergei;Moore, Malcolm;Igissinov, Saginbek;Khassenova, Zauresh 2345 Epidemiological studies of cancer incidence in Kazakhstan have revealed an uneven distribution for cervical cancer. Incidence and mortality rates were calculated for different regions of the republic, including the two major cities of Almaty and Astana, in 1999-2008. Defined levels for cartograms for incidence were low (up to 12.8/100,000), medium (12.8 to 15.9) and high (above 15.9) and for mortality were up to 7.1, 7.1 to 10.8 and more than 10.8, respectively. Basically high incidence rates were identified in the eastern, central and northern parts of the country and in Almaty. Such differences in cervical cancer data, and also variation in mortality/ incidence ratios, from a low of 0.4 in Almaty to a high of 0.71 in Zhambyl, point to variation in demographic and medical features which impact on risk and prognistic factors for cervical cancer in the country. Further research is necessary to highlight areas for emphasis in cancer control programs for this important cancer. P53 Arg72Pro Polymorphism and Bladder Cancer Risk - Meta-analysis Evidence for a Link in Asians but not Caucasians Xu, Ting;Xu, Zi-Cheng;Zou, Qin;Yu, Bin;Huang, Xin-En 2349 Objective: Individual studies of the associations between P53 codon 72 polymorphism (rs1042522) and bladder cancer susceptibility have shown inconclusive results. To derive a more precise estimation of the relationship, we performed this systemic review and meta-analysis based on 15 publications. Methods: We used odds ratios (ORs) with 95% confidence intervals (CIs) to assess the strength of the association. Results: We found that there was no association between P53 codon 72 polymorphism and bladder cancer risk in the comparisons of Pro/Pro vs Arg/Arg; Pro/Arg vs. Arg/Arg; Pro/Pro plus Pro/Arg vs. Arg/Arg; Arg/Arg vs. Pro/Arg plus Arg/Arg (OR=1.06 95%CI 0.81-1.39; OR=1.06 95%CI 0.83-1.36; OR=0.98 95%CI 0.78-1.23; OR=1.06 95%CI 0.84-1.32). However, a significantly increased risk of bladder cancer was found among Asians in the homozygote comparison (Pro/Pro vs. Arg/Arg, OR=1.36 95%CI 1.05-1.75, P=0.790 for heterogeneity) and the dominant model (Arg/Pro plus Pro/Pro vs. Arg/Arg, OR=1.26 95%CI 1.05-1.52, P=0.564 for heterogeneity). In contrast, no evidence of an association between bladder cancer risk and P53 genotype was observed among Caucasian population in any genetic model. When stratifying for the stage of bladder, no statistical association were found (Pro/Pro vs. Arg/Arg, OR=0.45 95%CI 0.17-1.21; Pro/Arg vs. Arg/Arg, OR=0.60 95%CI 0.28-1.27; Dominant model, OR=0.56 95%CI 0.26-1.20; Recessive model, OR=0.62 95%CI0.35-1.08) between P53 codon 72 polymorphism and bladder cancer in all comparisons. Conclusions: Despite the limitations, the results of the present meta-analysis suggest that, in the P53 codon 72, Pro/Pro type and dominant mode might increase the susceptibility to bladder cancer in Asians; and there are no association between genotype distribution and the stage of bladder cancer. 
Clinical Significance of SH2B1 Adaptor Protein Expression in Non-small Cell Lung Cancer Zhang, Hang;Duan, Chao-Jun;Chen, Wei;Wang, Shao-Qiang;Zhang, Sheng-Kang;Dong, Shuo;Cheng, Yuan-Da;Zhang, Chun-Fang 2355 The SH2B1 adaptor protein is recruited to multiple ligand-activated receptor tyrosine kinases that play an important role in the physiologic and pathologic features of many cancers. The purpose of this study was to assess SH2B1 expression and to explore its contribution to non-small cell lung cancer (NSCLC). Methods: SH2B1 expression in 114 primary NSCLC tissue specimens was analyzed by immunohistochemistry and correlated with clinicopathological parameters and patients' outcome. Additionally, 15 paired NSCLC background tissues, 5 NSCLC cell lines and a normal HBE cell line were evaluated for SH2B1 expression by RT-PCR and immunoblotting, with immunofluorescence being applied for the cell lines. Results: SH2B1 was found to be overexpressed in NSCLC tissues and NSCLC cell lines. More importantly, high SH2B1 expression was significantly associated with tumor grade, tumor size, clinical stage, lymph node metastasis, and recurrence. Survival analysis demonstrated that patients with high SH2B1 expression had poorer disease-free survival and overall survival than other patients. Multivariate Cox regression analysis revealed that SH2B1 overexpression was an independent prognostic factor for patients with NSCLC. Conclusions: Our findings suggest that the SH2B1 protein may contribute to the malignant progression of NSCLC and could offer a novel prognostic indicator for patients with NSCLC. Association Between C1019T Polymorphism in the Connexin 37 Gene and Helicobacter Pylori Infection in Patients with Gastric Cancer Jing, Yuan-Ming;Guo, Su-Xia;Zhang, Xiao-Ping;Sun, Ai-Jing;Tao, Feng;Qian, Hai-Xin 2363 Objective: To investigate the association between the connexin 37 C1019T polymorphism and Helicobacter pylori infection in patients with gastric cancer. Methods: 388 patients with gastric cancer (GC) and 204 with chronic superficial gastritis (CSG) were studied. H. pylori was detected in gastric mucosal biopsies by a dyeing method. Genotypes at site 1019 of the connexin 37 gene were determined by gene sequencing, and genotype and allele frequencies were compared. Results: (1) The connexin37 gene 1019 site distribution frequency (CC type, TC type, TT type) in the CSG group was 18.1%, 45.1% and 36.8%; in the stomach cancer group it was 35.1%, 45.9% and 19.0%, conforming to the Hardy-Weinberg equilibrium. (2) In comparison with the CSG group, the frequency of the connexin37 C allele was higher in the gastric cancer group (58.0% vs 40.7%, OR = 2.01, 95%CI = 1.58-2.57, P < 0.01). The risk of gastric cancer was significantly increased in carriers of the C allele (CC+TC) compared with TT homozygotes (OR = 2.47, 95%CI = 1.68-3.61). (3) In the gastric cancer group, 211 cases had Hp infection: 187 male patients were HP positive and 40 negative, while 24 female patients were HP positive and 137 negative. In the control group, 28 male patients were Hp positive and 95 negative, and 6 female patients were Hp positive and 75 negative. On hierarchical analysis, the male group OR value was 15.9 (95%CI 9.22-27.3), and the female OR was 2.19 (95%CI 0.88-5.59), indicating a greater contribution in males (P < 0.01).
After elimination of gender effects, Hp positivity and gastric cancer were closely related (OR 8.82, 95% CI: 5.45-14.3). (4) The distribution frequency of the C allele in patients with Hp infection was much higher than that in Hp negative cases in the GC group (64.5% vs 47.0%, OR = 2.05, 95%CI = 1.54-2.74, P < 0.01). Compared with TT homozygotes, the (CC+TC) genotypes showed a significantly increased risk of gastric cancer (OR = 2.96, 95%CI = 1.76-2.99). Conclusion: The T allele in the connexin37 gene might not only be associated with gastric cancer but also with H. pylori infection. A Multi-center Survey of HPV Knowledge and Attitudes Toward HPV Vaccination among Women, Government Officials, and Medical Personnel in China Zhao, Fang-Hui;Tiggelaar, Sarah M.;Hu, Shang-Ying;Zhao, Na;Hong, Ying;Niyazi, Mayinuer;Gao, Xiao-Hong;Ju, Li-Rong;Zhang, Li-Qin;Feng, Xiang-Xian;Duan, Xian-Zhi;Song, Xiu-Ling;Wang, Jing;Yang, Yun;Li, Chang-Qin;Liu, Jia-Hua;Liu, Ji-Hong;Lu, Yu-Bo;Li, Li;Zhou, Qi;Liu, Jin-Feng;Xu, Li-Na;Qiao, You-Lin 2369 Objectives: To assess knowledge of HPV and attitudes towards HPV vaccination among the general female population, government officials, and healthcare providers in China to assist the development of an effective national HPV vaccination program. Methods: A cross-sectional epidemiologic survey was conducted across 21 urban and rural sites in China using a short questionnaire. 763 government officials, 760 healthcare providers, and 11,681 women aged 15-59 years were included in the final analysis. Data were analyzed using standard descriptive statistics and logistic regression. Results: Knowledge of HPV among the general female population was low; only 24% had heard of HPV. Less than 20% of healthcare providers recognized sexually naïve women as the most appropriate population for HPV vaccination. There was high acceptance of the HPV vaccine for all categories of respondents. Only 6% of women were willing to pay more than US $300 for the vaccine. Conclusions: Aggressive education is necessary to increase knowledge of HPV and its vaccine. Further proof of vaccine safety and efficacy and government subsidies combined with increased awareness could facilitate development and implementation of HPV vaccination in China. Mechanism of P-glycoprotein Expression in the SGC7901 Human Gastric Adenocarcinoma Cell Line Induced by Cyclooxygenase-2 Gu, Kang-Sheng;Chen, Yu 2379 Objective: To investigate possible signal pathway involvement in multi-drug resistant P-glycoprotein (P-gp) expression induced by cyclooxygenase-2 (COX-2) in a human gastric adenocarcinoma cell line stimulated with paclitaxel (TAX). Methods: The effects of TAX on SGC7901 cell growth at different doses were assessed by MTT assay, along with the effects of the COX-2 selective inhibitor NS-398 and the nuclear factor-κB (NF-κB) pathway inhibitor pyrrolidine dithiocarbamate (PDTC). Influence on COX-2, NF-κB p65 and P-gp expression was determined by Western blotting. Results: TAX, NS-398 and PDTC all reduced SGC7901 growth, with dose dependence. With increasing dose of TAX, the expression of COX-2, p65 and P-gp showed rising trends, this being reversed by NS-398. PDTC also caused a decrease in expression of p65 and P-gp over time. Conclusion: COX-2 may induce the expression of P-gp in the SGC7901 cell line via the NF-κB pathway under paclitaxel stimulation.
ER81-shRNA Inhibits Growth of Triple-negative Human Breast Cancer Cell Line MDA-MB-231 In Vivo and in Vitro Chen, Yue;Zou, Hong;Yang, Li-Ying;Li, Yuan;Wang, Li;Hao, Yan;Yang, Ju-Lun 2385 The lack of effective treatment targets for triple-negative breast cancers make them unfitted for endocrine or HER2 targeted therapy, and their prognosis is poor. Transcription factor ER81, a downstream gene of the HER2, is highly expressed in breast cancer lines, breast atypical hyperplasia and primary breast cancers including triple-negative examples. However, whether and how ER81 affects breast cancer carcinogenesis have remained elusive. We here assessed influence on a triple-negative cell line. ER81-shRNA was employed to silence ER81 expression in the MDA-MB-231 cell line, and MTT, colony-forming assays, and flow cytometry were used to detect cell proliferation, colony-forming capability, cell cycle distribution, and cell apoptosis in vitro. MDA-MB-231 cells stably transfected with ER81-shRNA were inoculated into nude mice, and growth inhibition of the cells was observed in vivo. We found that ER81 mRNA and protein expression in MDA-MB-231 cells was noticeably reduced by ER81-shRNA, and that cell proliferation and clonality were decreased significantly. ER81-shRNA further increased cell apoptosis and the residence time in $G_0/G_1$ phase, while delaying tumor-formation and growth rate in nude mice. It is concluded that ER81 may play an important role in the progression of breast cancer and may be a potentially valuable target for therapy, especially for triple negative breast cancer. Evaluation of the Geometric Accuracy of Anatomic Landmarks as Surrogates for Intrapulmonary Tumors in Image-guided Radiotherapy Li, Hong-Sheng;Kong, Ling-Ling;Zhang, Jian;Li, Bao-Sheng;Chen, Jin-Hu;Zhu, Jian;Liu, Tong-Hai;Yin, Yong 2393 Objectives: The purpose of this study was to evaluate the geometric accuracy of thoracic anatomic landmarks as target surrogates of intrapulmonary tumors for manual rigid registration during image-guided radiotherapy (IGRT). Methods: Kilovolt cone-beam computed tomography (CBCT) images acquired during IGRT for 29 lung cancer patients with 33 tumors, including 16 central and 17 peripheral lesions, were analyzed. We selected the "vertebrae", "carina", and "large bronchi" as the candidate surrogates for central targets, and the "vertebrae", "carina", and "ribs" as the candidate surrogates for peripheral lesions. Three to six pairs of small identifiable markers were noted in the tumors for the planning CT and Day 1 CBCT. The accuracy of the candidate surrogates was evaluated by comparing the distances of the corresponding markers after manual rigid matching based on the "tumor" and a particular surrogate. Differences between the surrogates were assessed using 1-way analysis of variance and post hoc least-significant-difference tests. Results: For central targets, the residual errors increased in the following ascending order: "tumor", "bronchi", "carina", and "vertebrae"; there was a significant difference between "tumor" and "vertebrae" (p = 0.010). For peripheral diseases, the residual errors increased in the following ascending order: "tumor", "rib", "vertebrae", and "carina"; There was a significant difference between "tumor" and "carina" (p = 0.005). Conclusions: The "bronchi" and "carina" are the optimal surrogates for central lung targets, while "rib" and "vertebrae" are the optimal surrogates for peripheral lung targets for manual matching of online and planned tumors. 
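The residual error used in the radiotherapy abstract above boils down to distances between corresponding markers after each rigid match. A minimal sketch of that bookkeeping is shown below, with made-up marker coordinates and without the actual CBCT registration step, purely to illustrate the geometry behind the reported comparisons.

import numpy as np

# Hypothetical marker coordinates (mm) in the planning CT and in the CBCT
# after a rigid match on a given surrogate; real values would come from IGRT software.
planning_ct = np.array([[12.0, 30.5, -4.2], [15.1, 28.9, -1.0], [10.4, 33.2, -6.8]])
cbct_matched = np.array([[12.6, 31.1, -3.8], [15.9, 29.4, -0.4], [11.0, 33.9, -6.1]])

# Residual error per marker pair = Euclidean distance after the rigid match
residuals = np.linalg.norm(planning_ct - cbct_matched, axis=1)
print("per-marker residuals (mm):", np.round(residuals, 2))
print("mean residual (mm):", round(residuals.mean(), 2))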
Comparison of Serum Tumor Associated Material (TAM) with Conventional Biomarkers in Cancer Patients Shu, Jian;Li, Cheng-Guang;Liu, Yang-Chen;Yan, Xiao-Chun;Xu, Xu;Huang, Xin-En;Cao, Jie;Li, Ying;Lu, Yan-Yan;Wu, Xue-Yan;Liu, Jin;Xiang, Jin 2399 Objective: To compare expression level of serum tumor associated materials (TAM) with several conventional serum tumor biomarkers, eg., carcinoembryonic antigen (CEA), carbohydrate antigen19-9 (CA19-9), carbohydrate antigen 15-3 (CA15-3), alpha-fetoprotein(AFP), in selected solid tumors. Methods: Patients diagnosed histologically or cytologically with liver, breast, esophageal, gastric, colorectal or pancreatic cancers were enrolled into this study. After diagnosis, the level of TAM was determined by chemical colorimetry, and levels of conventional tumor markers was measured by chemiluminescence methods. Results: A total of 560 patients were enrolled into this study. No statistically significant difference was detected in TAM and the above mentioned tumor biomarkers in terms of their positivity and negativity ( P>0. 05). Conclusions: Detection of TAM in liver, breast, esophageal, gastric, colorectal, and pancreatic cancer patients demonstrates a good accordance with CEA, CA199, CA153, and AFP, thus suggesting that further study is warranted to verify whether TAM could be a surrogate for these conventional biomarkers. Factors Affecting the Death Anxiety Levels of Relatives of Cancer Patients Undergoing Treatment Beydag, Kerime Derya 2405 This descriptive study was performed to determine levels of the death anxiety levels of relatives of patients who being treated in a public hospital located in the Asian side of Istanbul and influencing factors. The sample was 106 patient relatives of patients from oncology or chemotherapy units of the hospital. Data were collected between May-June 2011 with the 15-item Death Anxiety Scale developed by Templer (1970) and adapted to Turkish by Senol (1989) and evaluated by number-percentage calculations, the Kruskal Wallis, Anova and t tests. Some 36.8% of the included group were aged 45 years and over, 57.5% were female and 65.1% were married. A statistically significant difference was found between the age groups, genders of the patient relatives, the period of cancer treatment regarding the death anxiety levels (p<0.05). The death anxiety levels of the patient relatives who were in the 17-39 age group, female and had a patient who was under treatment for less than 6 months were found to high as compared to others. Inhibition of Tumor Growth in Vitro by a Combination of Extracts from Rosa Roxburghii Tratt and Fagopyrum Cymosum Liu, Wei;Li, Su-Yi;Huang, Xin-En;Cui, Jiu-Jie;Zhao, Ting;Zhang, Hua 2409 Objective: Traditional Chinese herbal medicines have a very long history. Rosa roxburghii Tratt and Fagopyrum cymosum are two examples of plants which are reputed to have benefits in improving immune responses, enhancing digestive ability and demonstrating anti-aging effects. Some evidence indicates that herbal medicine soups containing extracts from the two in combination have efficacy in treating malignant tumors. However, the underlying mechanisms are far from well understood. The present study was therefore undertaken to evaluate anticancer effects and explore molecular mechanisms in vitro. 
Methods: Proliferation and apoptosis were assessed with three carcinoma cell lines (human esophageal squamous carcinoma CaEs-17, human gastric carcinoma SGC-7901 and pulmonary carcinoma A549) by MTT assay and flow cytometry, respectively, after exposure to extract from Rosa roxburghii Tratt (CL) and extract from Fagopyrum cymosum (FR). $IC_{30}$ of CL and FR were obtained by MTT assay. Tumor cells were divided into four groups : control with no exposure to CL or FR; CL with $IC_{30}$ CL; FR with $IC_{30}$ FR; CL+FR group with 1/2 ($IC_{30}$ CL + $IC_{30}$ FR). RT-PCR and Western blot analysis were used to detect the expression of Ki-67, Bax and Bcl-2 at mRNA and protein levels. Results: Compared with the CL or FR groups, the combination of CL+FR showed significant inhibition of cell growth and increase in apoptosis; the mRNA and protein expression levels of Ki-67 and Bcl-2 in CL+FR group were all greatly decreased, while the expression of Bax was markedly increased. Conclusions: These results indicate that the synergistic antitumor effects of combination of CL and FR are related to inhibition of proliferation and induction of apoptosis. Phase II Study on Voriconazole for Treatment of Chinese Patients with Malignant Hematological Disorders and Invasive Aspergillosis Zhang, Xue-Zhong;Huang, Xin-En;Xu, Yan-Li;Zhang, Xiu-Qun;Su, Ai-ling;Shen, Zheng-Shan 2415 Objective: To investigate the efficacy and safety of voriconazole in treating Chinese patients with hematological malignancies and invasive aspergillosis. Methods: From March 2007 to April 2012, patients with diagnoses confirmed by CT, GM test and/or PCR assays, were recruited into this study. Aspergillosis of all patients were treated with voriconazole 6 mg/kg intravenous infusion (iv) every 12 h for 1 day, followed by 4 mg/kg IV every 12 h for 10-15 days; Then, switch to oral administration that was 200mg every 12h for 4-12 weeks. Efficacy and safety were evaluated according to Practice Guideline of Infectious Diseases Society of America. Results: The overall response rate of 38 patients after voriconazole treatment was 81.6%. The median time to pyretolysis was 4.5 days. Treatment related side effects were mild and found in only 15.8% of cases. No treatment related deaths occurred. Conclusions: Voriconazole can considered to be a safe and effective front-line therapy to treat patients with hematological malignancies and invasive aspergillosis. Alternatively it could be used as a remedial treatment when other antifungal therapies are ineffective. A Model for Community Participation in Breast Cancer Prevention in Iran Context: Genuine community participation does not denote taking part in an action planned by health care professionals in a medical or top-down approach. Further, community participation and health education on breast cancer prevention are not similar to other activities incorporated in primary health care services in Iran. Objective: To propose a model that provides a methodological tool to increase women's participation in the decision making process towards breast cancer prevention. To address this, an evaluation framework was developed that includes a typology of community participation approaches (models) in health, as well as five levels of participation in health programs proposed by Rifkin (1985&1991). Method: This model explains the community participation approaches in breast cancer prevention in Iran. In a 'medical approach', participation occurs in the form of women's adherence to mammography recommendations. 
As a 'health services approach', women get the benefits of a health project or participate in the available program activities related to breast cancer prevention. The model provides the five levels of participation in health programs along with the 'health services approach' and explains how to implement those levels for women's participation in available breast cancer prevention programs at the local level. Conclusion: It is hoped that a focus on the 'medical approach' (top-down) and the 'health services approach' (top-down) will bring sustainable changes in breast cancer prevention and will consequently produce the 'community development approach' (bottom-up). This could be achieved using a comprehensive approach to breast cancer prevention by combining individual and community strategies in designing an intervention program for breast cancer prevention. Ornithine Decarboxylase: A Promising and Exploratory Candidate Target for Natural Products in Cancer Chemoprevention Luqman, Suaib 2425 Ornithine decarboxylase (ODC), the first enzyme in polyamine biosynthesis, plays an important role in tumor progression, cell proliferation and differentiation. In recent years, ODC has been the subject of intense study among researchers as a target for anti-cancer therapy, and specific inhibitory agents have the potential to suppress carcinogenesis and find applications in clinical therapy. In particular, it is suggested that ODC is a promising candidate target for natural products in cancer chemoprevention. Future exploration of ornithine decarboxylase inhibitors present in nature may offer great hope for finding new cancer chemopreventive agents.
July 2020, 19(7): 3829-3842. doi: 10.3934/cpaa.2020169
Algebraic structure of the $ L_2 $ analytic Fourier–Feynman transform associated with Gaussian paths on Wiener space
Jae Gil Choi 1 and David Skoug 2
1 School of General Education, Dankook University, Cheonan 31116, Republic of Korea
2 Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588-0130, USA
Received December 2019 Revised January 2020 Published April 2020
In this paper we study algebraic structures of the classes of the $ L_2 $ analytic Fourier–Feynman transforms on Wiener space. To do this we first develop several rotation properties of the generalized Wiener integral associated with Gaussian paths. We then proceed to analyze the $ L_2 $ analytic Fourier–Feynman transforms associated with Gaussian paths. Our results show that these $ L_2 $ analytic Fourier–Feynman transforms are actually linear operator isomorphisms from a Hilbert space into itself. We finally investigate the algebraic structures of these classes of the transforms on Wiener space, and show that they indeed are group isomorphic.
Keywords: Paley-Wiener-Zygmund stochastic integral, Gaussian process, Fourier-Feynman transform associated with Gaussian paths, monoid isomorphism, linear operator isomorphism, free group.
Mathematics Subject Classification: Primary: 28C20, 60G15, 60J65; Secondary: 46B09, 42B10, 46G12.
Citation: Jae Gil Choi, David Skoug. Algebraic structure of the $ L_2 $ analytic Fourier–Feynman transform associated with Gaussian paths on Wiener space. Communications on Pure & Applied Analysis, 2020, 19 (7) : 3829-3842. doi: 10.3934/cpaa.2020169
Date Published: September 7, 2011 Last Modified: September 7, 2011
TeX is a typesetting language for producing documents. It is one of the most popular alternatives to WYSIWYG text editors such as Microsoft Word. The language largely resembles a programming language, and is then compiled to produce professional looking documents. The advantage of TeX typesetting over an editor such as Microsoft Word is the conformity and standardization that comes naturally when writing a document using 'code'. For example, figures are always labelled correctly and in the same manner, page margins are identical, and bibliographic references always match correctly with the cited source. The large disadvantage with TeX typesetting is the lack of instant feedback (although some packages now support live feedback), and the complexity of learning how to write in the TeX language. There is a difference between a TeX distribution and a TeX editor.
TeX Editors
MiKTeX
TeXnicCenter
\dfrac{x}{y} : prints a fraction in display mode (normally larger than \frac{}). Rendered: \(\dfrac{x}{y}\). See also \frac{x}{y}.
\frac{x}{y} : prints a fraction. Rendered: \(\frac{x}{y}\). See also \dfrac{x}{y}.
\text{This is normal text.} : prints normal text (not math-style text); this also means spaces are preserved. Rendered: \(\text{This is normal text.} \\ This is maths text.\)
% Produces a matrix equation
I_{\alpha\beta\gamma} = TI_{abc} = \frac{2}{3} \begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} I_a \\ I_b \\ I_c \end{bmatrix} \text{(unsimplified Clarke transform)}
which renders as:
$$ I_{\alpha\beta\gamma} = TI_{abc} = \frac{2}{3} \begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} I_a \\ I_b \\ I_c \end{bmatrix} \text{(unsimplified Clarke transform)} $$
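To make the snippets above concrete, here is a minimal, self-contained LaTeX document that uses \frac, \dfrac and \text. The file name, document class and packages are one reasonable choice for illustration, not something prescribed by the original page.

% minimal.tex -- compile with: pdflatex minimal.tex
\documentclass{article}
\usepackage{amsmath}   % provides \dfrac and \text inside math mode

\begin{document}

Inline math keeps fractions small: $\frac{x}{y}$,
while \verb|\dfrac| forces display-style size: $\dfrac{x}{y}$.

Display equations are set on their own line:
\begin{equation}
  \frac{a+b}{c} = \dfrac{a}{c} + \dfrac{b}{c}
  \qquad \text{(spaces inside text mode are preserved)}
\end{equation}

\end{document}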
Antonie van Leeuwenhoek, October 2012, Volume 102, Issue 3, pp 409–423
Structured morphological modeling as a framework for rational strain design of Streptomyces species
Katherine Celler, Cristian Picioreanu, Mark C. M. van Loosdrecht, Gilles P. van Wezel
Successful application of a computational model for rational design of industrial Streptomyces exploitation requires a better understanding of the relationship between morphology—dictated by microbial growth, branching, fragmentation and adhesion—and product formation. Here we review the state-of-the-art in modeling of growth and product formation by filamentous microorganisms and expand on existing models by combining a morphological and structural approach to realistically model and visualize a three-dimensional pellet. The objective is to provide a framework to study the effect of morphology and structure on natural product and enzyme formation and yield. Growth and development of the pellet occur via the processes of apical extension, branching and cross-wall formation. Oxygen is taken to be the limiting component, with the oxygen concentration at the tips regulating growth kinetics and the oxygen profile within the pellet affecting the probability of branching. Biological information regarding the processes of differentiation and branching in liquid cultures of the model organism Streptomyces coelicolor has been implemented. The model can be extended based on information gained in fermentation trials for different production strains, with the aim to provide a test drive for the fermentation process and to pre-assess the effect of different variables on productivity. This should aid in improving Streptomyces as a production platform in industrial biotechnology.
Keywords: Morphological modeling, Fermentation, Microscopy, Enzyme, Antibiotic, SsgA
Nomenclature
Abranch : Branch age (h)
Adiff : Differentiation age (h)
Branch formation interval (m−1)
cint : Cross-wall formation interval (m−1)
Oxygen concentration in fermentation broth (kg/m3)
Concentration of biomass (kg/m3)
CX,A : Concentration of biomass, apical (kg m−3)
CX,B : Concentration of biomass, subapical (kg m−3)
CX,H : Concentration of biomass, hyphal (kg m−3)
Hyphal diameter (m)
Time step (h)
DO2,eff : Effective diffusion coefficient for oxygen (m2 h−1)
HGU : Hyphal growth unit
Microbial saturation coefficient for oxygen (kg m−3)
LA,max : Maximum length of apical compartment (m)
Oxygen uptake rate (kg m−3 h−1)
Pbranch : Probability of branching (% h−1)
Pbreak : Probability of breaking (% h−1)
Radius of pellet (m)
rtip : Distance from pellet centre to tip (m)
rthres : Threshold radius (m)
Hyphal segment volume (m3)
YXO : Yield of biomass on oxygen (kg kg−1)
YXO,A : Yield of biomass on oxygen, apical (kg kg−1)
YXO,B : Yield of biomass on oxygen, subapical (kg kg−1)
YXO,H : Yield of biomass on oxygen, hyphal (kg kg−1)
Greek symbols
αi,max : Maximum linear apical extension rate of branch i (m h−1)
αi,max(A) : Maximum linear extension rate of branch i, apical compartment (m h−1)
αi,max(B) : Maximum linear extension rate of branch i, subapical compartment (m h−1)
αi,max(H) : Maximum linear extension rate of branch i, hyphal compartment (m h−1)
λshear : Shear force parameter
Maximum specific hyphal growth rate (h−1)
ρx : Density of hyphae (kg dw m−3)
Polar angle in spherical coordinates
Cone angle in spherical coordinates
State of the art in growth modeling of filamentous microorganisms
Streptomycetes are Gram-positive mycelial bacteria which are commercially used in the production of natural products such as antibiotics, anticancer agents and
immunosuppressants, as well as industrial enzymes (Hopwood 2007). Unlike unicellular bacteria, which grow exponentially by binary fission with a constant generation time (Errington et al. 2003), filamentous organisms grow due to the combination of steady hyphal growth and addition of new hyphal tips via branching of the mycelium. During growth, vegetative hyphae are divided into compartments by cross-walls (Chater and Losick 1997). The reproductive phase is initiated by the erection of sporogenic structures called aerial hyphae, which are nonbranching structures that differentiate following a complex cell division event whereby the multigenomic hyphae are converted into chains of unigenomic spores. Aerial hyphae are formed only on solid-grown cultures, giving the colonies their characteristic white and fluffy appearance. Some Streptomyces species are also able to produce spores in submerged culture (Glazebrook et al. 1990; Kendrick and Ensign 1983). Morphology and structure formation vary from species to species, based on strain-specific genetic make-up that is yet poorly understood (Jakimowicz and van Wezel 2012). Many genes and physiological mechanisms are involved in the development of a particular morphological type (Kossen 2000). A survey of the submerged growth of over 100 reference species identified a continuum of morphological types, ranging from large macroscopic mycelial pellets several millimeters in diameter to small fragmented particles (Tresner et al. 1967). The different types of mycelia have been classified as pellets (compact masses of over around 1 mm in diameter), clumps (less compact masses between 0.6 and 1 mm in diameter), branched hyphae and non-branched hyphae (Pamboukian et al. 2002). Streptomyces species can be further subdivided into those which sporulate in liquid culture (S. albus, S. griseus, S. roseosporus), and those that do not (Glazebrook et al. 1990; Kendrick and Ensign 1983; van Wezel et al. 2009). When grown under different conditions, growth rate and morphology change depending on the composition of the growth medium, pH, temperature, mixing intensity, dissolved oxygen concentration and inoculum (Tough and Prosser 1996; Cui et al. 1998). In a sense, mycelial morphology is the classic example of "nature versus nurture"—an observed morphology emerges from the combination of genetic and environmental factors in a fermentation. The filamentous nature of streptomycetes, resulting in highly viscous broths, unfavorable pellet formation and slow growth, strongly affects the rheology of liquid cultures, which makes fermentation difficult (van Wezel et al. 2009). Large clumps are mainly physiologically active around the edge of the pellet, with oxygen and nutrient depletion in the centre. Increased broth mixing may improve transport, but results in shearing off of pellet tips and lysis. Shear force may also rupture the pellet as a whole, especially if the pellet is already hollow due to oxygen or substrate limitations (Meyerhoff et al. 1995). In addition, downstream processing of fermentation broths is complex and costly (van Wezel et al. 2006). The understanding and control of morphology is therefore key for optimization of industrial fermentations. The relationship between growth and morphology, on the one hand, and biomass accumulation and productivity on the other, is complicated, and optimal morphology varies from product to product. 
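The remark above that mycelia grow by simultaneous linear tip extension and branching can be made quantitative with the classic hyphal growth unit argument. The derivation below is the standard textbook illustration rather than an equation taken from this paper; it uses the HGU symbol from the nomenclature and writes αmax for the tip extension rate.

% Total mycelium length L(t) extends through its n(t) tips, each elongating at rate alpha_max.
% If branching keeps the hyphal growth unit HGU = L/n approximately constant, then
\[
\frac{dL}{dt} = n(t)\,\alpha_{\max} = \frac{\alpha_{\max}}{\mathrm{HGU}}\,L(t)
\quad\Longrightarrow\quad
L(t) = L_0\,e^{\mu t},
\qquad
\mu = \frac{\alpha_{\max}}{\mathrm{HGU}} ,
\]
so linear growth of individual tips still produces exponential growth of total biomass as long as branching supplies new tips in proportion to the existing length.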
Studies on erythromycin production by Saccharopolyspora erythraea showed a strong correlation between mycelium fragment diameter (defined as the minimum diameter of a sphere that can bound a hyphal fragment, or pellet) and productivity, with a critical pellet diameter of 88 μm, below which production was drastically reduced (Wardell et al. 2002). Variants with decreased branching frequency showed increased hyphal strength, larger mycelial fragments, and increased antibiotic production. In Streptomyces, regulation of the secondary metabolism is complex, necessitating directed systems-level engineering approaches for strain improvement (van Wezel and McDowall 2011). One of the most direct ways of tackling the morphological problems was achieved by overexpression of the SsgA protein, which results in fragmentation of mycelial clumps (Kawamoto et al. 1997; van Wezel et al. 2000a). SsgA and its paralogue SsgB are required for the activation of cell division in streptomycetes, with SsgB recruiting the cell division scaffold protein FtsZ (Keijser et al. 2003; Willemse et al. 2011), and the enhanced expression of SsgA improving growth rates in batch fermentations of Streptomyces coelicolor and Streptomyces lividans, and resulting in a two-fold increase in yield of enzyme production with a higher production rate (van Wezel et al. 2006). Secretion capacity is also directly related to the activity of SsgA (Noens et al. 2007). This highlights the potential of genetic engineering approaches based on understanding of the biological processes that govern morphology and production. The effects of enhanced division on antibiotic production are less predictable (van Wezel et al. 2000b), and better insight into this relationship is needed.

Morphological modeling is a valuable tool to suggest potential strain improvements and predict optimal fermentation conditions. Present models largely investigate the influence of environmental factors on morphology, while modeling with a strong focus on genetics may be powerful (Kossen 2000), but has not been attempted. Several models for growth of filamentous organisms (both fungal and actinomycete) have been proposed, including single-pellet models that focus on microscopic morphology and model three-dimensional tip elongation and branching. An initial model was based on diffusion–reaction of a hypothetical intracellular growth-limiting component (Yang et al. 1992a); this model was later extended to include the diffusion of limiting substrates and fragmentation due to shear forces (Meyerhoff et al. 1995). Basing growth on a tip extension rate depending on oxygen concentration, a similar model combined microscopic morphology with analysis of solute profiles along the pellet radius and the fractal dimension (Lejeune and Baron 1997). Macroscopic (fermentation) models focus instead on the effects of mass transport on growth and production in a reactor, providing e.g. an unstructured approach for modeling growth of mycelial pellets in submerged cultures. These models integrate growth kinetics at hyphal scale with the physical mechanisms of mass-transfer processes in pellets and the fermentor (van Suijdam et al. 1982). Modeling of microbial kinetics may also be based on structured models that describe rates by means of selected cell components rather than by the undifferentiated biomass (Nielsen and Villadsen 1992).
Fermentation-scale models are typically combined with a population balance, in which the behavior and effect of pellet populations in cultivations is studied, e.g. predicting changes in the distribution of pellet sizes within a population growing in a fermentor (Tough and Prosser 1996). Fermentation population models have also been coupled with structured models, with a particular focus on morphological forms, with various growth and production rates in different hyphal elements. In an early example of structured modeling, hyphae of Aspergillus awamori were divided into five differentiation states with different growth and metabolite synthesis rates (Megee et al. 1970). This approach was extended to include fragmentation and compared to experimental data for submerged growth of Geotrichum candidum, Streptomyces hygroscopicus, and Penicillium chrysogenum (Nielsen 1993). Population-based structural models were used to study the production of among others penicillin (Birol et al. 2002) or streptomycin (Liu et al. 2005). While these models keep track of the proportions of hyphal elements in a fermentor, they do not incorporate changes to the developing pellet during the fermentation process. Each structural element represents a fraction of the clump, without providing insight into the three-dimensional pellet morphology. Despite the strong increase in computational power and available modeling software, little has been done in recent years to improve and expand these models. In this work growth of a mycelial pellet of Streptomyces is modeled, combining three-dimensional morphological pellet formation with a structured approach. We propose a combined morphological and structured model of a single-pellet, with a three-dimensional computational framework including oxygen (or other solute) diffusion and reaction in pellets, hyphal growth, branching and shearing, cross-wall formation and fragmentation, as well as collision detection during development. Biological information regarding the processes of differentiation and branching in liquid cultures of the model organism S. coelicolor has been implemented. The current modeling platform allows for study of the relationship between enzyme or antibiotic production and morphology and structure. Mathematical model description Fermentations involve many interacting factors of biological, chemical and physical nature. The microorganism itself, and its genetic make-up, are the backbone of the process, but process conditions (nutrients, oxygen, heat, mixing) also play a crucial role in controlling microbial growth, hyphal/pellet morphology and productivity. Mathematical models should incorporate these variables, so as to provide a test drive for the fermentation process and to pre-assess the effect of different variables on productivity. Although all cells in a hyphal element share a common cytoplasm with multiple nuclei, significant cellular and functional differentiation within the mycelial pellet exists (Megee et al. 1970). Earlier studies indicate that secondary metabolite production in filamentous organisms is associated with this morphological differentiation, which favors a structured approach to modeling (Giudici et al. 2004; Manteca et al. 2008). In creating a structured model, hyphae may be divided into three compartments: (i) apical, (ii) subapical, and (iii) hyphal compartment, each type indicating a different stage of cellular differentiation (Fig. 1a). Graphical representations of hyphal growth. 
a Left: Hyphal element with different compartments: apical (A), subapical (B), newly-formed apical as a result of branching (A new) and hyphal (H). Right: light microscopy image of S. coelicolor hyphae for comparison. Scale bar, 3 μm. b Metamorphosis reactions between different hyphal compartments: apical (A), subapical (B), newly-formed apical as a result of branching (A new) and hyphal (H).

At the start of growth, a spore germinates to form a new apical compartment (Anew). Oxygen and substrate are assimilated within the apical compartment for growth and apical extension (Gray et al. 1990). The apical compartment A extends until a maximum apical length is reached. As apical extension continues, some of compartment A is converted into subapical compartment B. Compartment B has an intracellular composition very similar to compartment A. However, oxygen and substrate consumption in compartment B results not in growth, but rather in the formation of new branches (Anew). Within mycelial pellets, where substrate levels are depleted, compartment B transforms into the hyphal compartment H. Based on the observation that pellet and clump formation are important determinants for yield, it is the hyphal compartment that is taken to be responsible for secondary metabolite production (Manteca et al. 2008). The various metamorphosis reactions described are shown in Fig. 1b.

To describe three-dimensional growth of the mycelium, a hyphal tip growth and branching model (Yang et al. 1992a) was adapted, with the assumptions that hyphae are cylindrical, have constant diameter d and density ρx, and grow by apical extension. Three-dimensional collision detection is employed to prevent overlapping hyphae. The orientation of the growing tip is characterized by angles θ and φ in spherical coordinates that change stochastically as a function of time. During growth, an apical compartment extends until the maximum apical length (LA,max) is reached, which may occur before the formation of a cross-wall near the tip. Tip extension is exponential until LA,max is reached, after which apical extension continues, but rather than extending compartment A, a new compartment B is added. In this case, the net result of tip extension is actually the formation of subapical cells from apical cells. Growth rate is based on local oxygen concentration according to Monod kinetics (with maximum specific growth rate μmax and half-saturation coefficient KO). The total growth rate in a shell with thickness dr situated at distance r from the pellet centre is calculated as the sum of extension rates α of all tips in that shell (Eq. (1)). The total growth rate is correlated with the oxygen consumption rate by a yield coefficient YXO. $$ \mu (r)C_{X,A} (r) = \frac{1}{V}\left( {\frac{{\pi d^{4} }}{4}} \right) \cdot \frac{{C_{O} (r)}}{{K_{O} + C_{O} (r)}} \cdot \sum {\alpha_{i,\max (A)} } $$ Although Streptomyces grow by apical extension, the model assumption is that once the apical compartment has reached its maximum length, growth results in formation of subapical compartment (Eq. (2)). $$ \mu (r)C_{X,B} (r) = \frac{1}{V}\left( {\frac{{\pi d^{4} }}{4}} \right) \cdot \frac{{C_{O} (r)}}{{K_{O} + C_{O} (r)}} \cdot \sum {\alpha_{i,\max (B)} } $$ Ageing cells become increasingly vacuolated and have a completely different metabolism than actively growing apical cells (Zangirolami et al. 1996).
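Since tip growth is driven by Monod kinetics on the local oxygen concentration, the extension rate of a single tip is simple to sketch in code; the parameter values below are illustrative assumptions, not values taken from the published model.

```python
# Illustrative values only; not parameters from the published model
ALPHA_MAX = 10e-6   # maximum linear tip extension rate, m/h
K_O = 3.0e-5        # oxygen half-saturation coefficient, kg/m^3

def extension_rate(c_oxygen, alpha_max=ALPHA_MAX, k_o=K_O):
    """Monod-limited linear extension rate of one tip at local oxygen concentration c_oxygen."""
    return alpha_max * c_oxygen / (k_o + c_oxygen)

# Tips in a well-oxygenated outer shell extend much faster than tips near the
# oxygen-depleted core, which is what couples morphology to the oxygen profile.
for label, c in [("outer shell", 2.5e-4), ("inner core", 2.0e-6)]:
    print(f"{label}: {extension_rate(c) * 1e6:.2f} um/h")
```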
Differentiation is the process of conversion of subapical cells B to hyphal cells H once a certain arbitrary differentiation age (A diff ) has been reached. The assumption is made that at this time point, growth results in formation of hyphal compartment Eq. (3), reflecting the natural differentiation observed, but still poorly understood, within Streptomyces sp. Hyphal cells do not grow or branch, but are responsible for secondary metabolite production. The amount of hyphal cells in a pellet can be taken as indicative of the level of secondary metabolite production. $$ \mu (r)C_{X,H} (r) = \frac{1}{V}\left( {\frac{{\pi d^{4} }}{4}} \right) \cdot \frac{{C_{O} (r)}}{{K_{O} + C_{O} (r)}} \cdot \sum {\alpha_{i,\max (H)} } $$ Because of the hyphal differentiation assumed in the model, apical, subapical and hyphal compartments may have different oxygen consumption rates. However, because it is not known whether this is a valid assumption, a single yield coefficient is taken for all compartments. Upon fitting the model to experimental data, different yield coefficients may be obtained. New branches form by the extension of new tips Anew from the subapical compartment B. These tips grow in the plane perpendicular to the parent segments. Initially, a new coordinate system is set up with the parent branches defining the x–y plane, and a z-vector drawn perpendicular to the plane. The direction of the new branch growth in this plane is chosen stochastically from a uniform distribution. Once the branch endpoint has been chosen within the new coordinate system, this point is placed back within the original coordinate system. In Streptomyces, branching depends on the essential protein DivIVA, which localizes at tips and new branch sites, recruiting or activating cell wall synthesis enzymes (Flärdh 2003; Hempel et al. 2008). In the model, branching occurs according to the local oxygen concentration in the pellet: probability of branching tapers off towards the inside of the pellet, where the oxygen concentration is diminished. As branching rates have not been measured within pellets, the correlation of branching and oxygen concentration is a model assumption made to decrease levels in the crowded inner core of a pellet. Distance between branches results from the branching probability and a chosen branching interval (b int ). As in growth, collision detection is employed to prevent overlapping hyphae during branching. Cross-wall formation The current model relies on the assumption that cross-walls form near branches (Reichl et al. 1990). Once cross-wall formation has been initiated, a cross-wall will form directly before or after a branch. Subsequent cross-walls on the branch will form at a multiple (random uniform distribution) of the specified interval (c int ) from the first cross-wall. Cross-wall formation occurs at a chosen time interval. Multiple cross-walls may simultaneously form on a given branch during a cross-wall formation event, as evidenced in live-imaging experiments (Jyothikumar et al. 2008). The number of cross-walls formed on a given hyphae is dependent on the number of branches on the hyphae. However, if the choice of new cross-wall position is at another branch point or location of an existing cross-wall, the cross-wall is not formed. Stirring in the fermentor creates shear forces that enforce fragmentation of the pellets. 
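A compact sketch of the differentiation, branching and cross-wall rules described above follows; all numerical values, the linear oxygen scaling of the branching probability and the regular cross-wall spacing are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only; not parameters from the published model
L_A_MAX = 8e-6    # maximum apical compartment length, m
A_DIFF = 6.0      # differentiation age, h
C_BULK = 2.5e-4   # bulk oxygen concentration, kg/m^3

def compartment(branch_length, branch_age):
    """Metamorphosis rules A -> B -> H as described above."""
    if branch_length < L_A_MAX:
        return "A"                 # apical: still extending the apical compartment
    if branch_age < A_DIFF:
        return "B"                 # subapical: extension now adds subapical cells
    return "H"                     # hyphal: differentiated, producing secondary metabolites

def branch_probability(c_oxygen, p_max=0.3):
    """Per-time-step branching probability of a subapical segment.

    A simple linear scaling with the local-to-bulk oxygen ratio is assumed here,
    so that branching tapers off toward the oxygen-depleted pellet interior."""
    return p_max * min(c_oxygen / C_BULK, 1.0)

def cross_wall_positions(branch_point, hypha_length, c_int=10e-6):
    """First cross-wall next to the branch point, later ones at multiples of c_int.

    The model draws random multiples of c_int; consecutive multiples are used here
    for brevity, and clashes with existing cross-walls are not checked."""
    return [branch_point + k * c_int
            for k in range(1, int((hypha_length - branch_point) / c_int) + 1)]

print(compartment(branch_length=5e-6, branch_age=2.0))     # young tip -> apical
print(compartment(branch_length=2e-5, branch_age=8.0))     # long, old branch -> hyphal
print(rng.random() < branch_probability(2.0e-4),           # branching likely near the surface
      rng.random() < branch_probability(2.0e-6))           # unlikely near the core
print(cross_wall_positions(branch_point=2e-6, hypha_length=45e-6))
```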
Similarly to the model of Meyerhoff, a biomass density parameter (m3 biomass/m3 total volume) was chosen above which hyphae are not affected by shear because they stabilize each other within the pellet. A respective threshold radius (r thres ) is chosen based on this density, and a breaking probability (P break ) is chosen at a certain distance to this radius from the tip (r tip ). The expression for the probability of breaking, assuming shear force parameter λ shear , is given by Eq. (4). $$ P_{break} = 100.0 - 100.0 \cdot exp\left[ { - \lambda_{shear} \cdot \frac{{(r_{tip} - r_{thres} )}}{{r_{thres} }}} \right] $$ Breaking is assumed to occur at cross-wall locations, where the hyphal wall is reported to be weaker (Krabben and Nielsen 1998), and this is supported by the strong effect of the cell-division activator protein SsgA on fragmentation (Traag and van Wezel 2008). Collision detection The computational framework includes collision detection between the hyphal branches, which has not been incorporated in any previous model for micro-scale pellet formation. Collision detection is required to build a realistic model, as it avoids overlap of growing and branching hyphae in a spatially constrained environment. Moreover, the implemented collision detection algorithm ensures that the resulting pellet volume densities do not surpass 100 % filled volume near the pellet centre. Diffusion of oxygen and substrates into the pellet can then be based on real rather than hypothetical pellet cell volume density. Further, data on the location of e.g. the cell division and secretion machineries along the hyphae can be implemented in the model. In the collision detection algorithm, the space domain is partitioned into cubes and all segments of the mycelium are placed in the space cube corresponding to their location. Collision is checked between a new hyphal segment and other segments in the same and neighboring space cubes. A collision is detected if any points on the segments are closer than the distance of two hyphal radii from each other. The algorithm from (Ericson 2005) was used for determining the distance between two segments. Once a near collision has been detected, tip growth or branching does not take place in the given time step. In the following time step, should a growth or branching angle be stochastically chosen which does not result in collision, extension of the mycelium at the given location can take place. This corresponds to observations during live imaging of hyphal growth which revealed that when two hyphal tips are found close to each other, one may exert apical dominance over the other and arrest the latter's growth for a period of time [(Jyothikumar et al. 2008) and our unpublished data]. Oxygen diffusion Oxygen is taken to be the limiting substrate, based on previous studies which have shown that oxygen can become mass transfer limited within mycelial pellets (Michel et al. 1992). The one-solute assumption is made here for the simplicity of the case study and more solutes can easily be taken into account. When a pellet reaches a certain critical size, oxygen limitation within the centre occurs, resulting in lysis. This critical size is a function of the pellet biomass density, the dissolved oxygen concentration in the bulk liquid and the hyphal respiration rate (Cui et al. 1998). Initially, a fully three-dimensional (3D) model for the oxygen diffusion and reaction within the pellet was developed. 
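The fragmentation rule of Eq. (4) and the cube-based neighbourhood search described above translate into a few lines of code. In the sketch below, the cube size, the hyphal radius and the use of segment midpoints instead of the exact segment-to-segment distance of Ericson (2005) are simplifying assumptions.

```python
import math
from collections import defaultdict

def p_break(r_tip, r_thres, lambda_shear):
    """Probability (%) that a tip fragment breaks off, Eq. (4).

    Hyphae inside the stabilised core (r_tip <= r_thres) are assumed unaffected by shear."""
    if r_tip <= r_thres:
        return 0.0
    return 100.0 - 100.0 * math.exp(-lambda_shear * (r_tip - r_thres) / r_thres)

print(f"P_break = {p_break(r_tip=60e-6, r_thres=40e-6, lambda_shear=1.0):.1f} %")

# Grid-based proximity check in the spirit of the collision detection described above:
# segments are binned into cubes and only segments in the same or neighbouring cubes
# are tested (midpoint distance is used here instead of segment-segment distance).
def build_grid(midpoints, cube=2e-6):
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(midpoints):
        grid[(int(x // cube), int(y // cube), int(z // cube))].append(i)
    return grid

def collides(p, midpoints, grid, radius=0.5e-6, cube=2e-6):
    cx, cy, cz = int(p[0] // cube), int(p[1] // cube), int(p[2] // cube)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                    if math.dist(p, midpoints[j]) < 2 * radius:
                        return True
    return False

pts = [(0.0, 0.0, 0.0), (3e-6, 0.0, 0.0)]
g = build_grid(pts)
print(collides((0.4e-6, 0.0, 0.0), pts, g), collides((10e-6, 0.0, 0.0), pts, g))
```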
Oxygen concentrations in concentric shells obtained with the three-dimensional model were subsequently compared with the radial (one-dimensional, 1D) oxygen profile obtained assuming spherical symmetry (Lejeune and Baron 1997). Obviously, the 3D model was computationally much more intensive than the 1D counterpart, but nevertheless resulted in a very similar oxygen concentration profile. Therefore, the simplified 1D radial symmetry was further assumed for each pellet, with oxygen transport occurring by molecular diffusion. Alteration in local oxygen concentration is assumed to be slow compared to growth, and hence a differential equation with stationary coupling of oxygen diffusion and reaction rates can be written in radial coordinates r (Eq. (5)). $$ D_{O2,eff} \frac{1}{{r^{2} }}\frac{d}{dr}\left( {r^{2} \frac{{dC_{O} }}{dr}} \right) = \frac{\mu (r)}{{Y_{XO} }}C_{X} (r) = OUR $$ The expression for oxygen uptake rate (OUR) can be rewritten to take into account the extension of the apical, subapical or hyphal compartment at different points in the branch lifetime (Eqs. (6)–(8)). The apical compartment is extended if the branch length is less than the maximum apical compartment length (Lbranch < LA,max). The subapical compartment is extended thereafter until the branch reaches differentiation age (Adiff), at which point the hyphal compartment is extended. Until data is obtained to prove otherwise, the yield coefficients of biomass on oxygen YXO,A, YXO,B, YXO,H are assumed to be the same in all compartments. $$ OUR = \frac{\mu (r)}{{Y_{XO,A} }}C_{X,A} (r)\quad {\text{if }}L_{branch} < L_{A,\max } $$ $$ OUR = \frac{\mu (r)}{{Y_{XO,B} }}C_{X,B} (r)\quad {\text{if }}A_{branch} < A_{diff} $$ $$ OUR = \frac{\mu (r)}{{Y_{XO,H} }}C_{X,H} (r)\quad {\text{otherwise}} $$ Within the pellet, the effective diffusion coefficient Deff was computed proportional to the pellet porosity (ε). The boundary conditions were (1) constant oxygen concentration in the bulk phase and (2) zero oxygen flux (symmetry condition) at the center of the pellet.

Parameter values and simulation

The computational model was implemented in MATLAB (MATLAB 2008b, Mathworks, Natick, MA, www.mathworks.com) and visualization of pellets was performed using the freeware Persistence of Vision Raytracer software (Pov-Ray, www.povray.org). The 1D diffusion–reaction solution was approximated using the finite differences method. Parameter values for a typical simulation run are given in Table 1. Parameters for variation in tip angle direction are based on measurements performed in growth chambers (Yang et al. 1992b). Branching and cross-wall intervals are parameters that are strain dependent; differences in the intervals result in significant morphological variation. The yield of biomass on oxygen was based on literature, with a value of 1.2 kg kg−1 assumed for all compartments (Meyerhoff et al. 1995). The model may later be fitted to experimental data to determine whether different yield coefficients exist for each compartment type. The shearing probability was adjusted to result in realistic breakage.

Table 1. Values of model parameters used in a typical simulation run for pellet morphology of S. coelicolor. The tabulated entries, drawn from Lejeune and Baron (1997), Yang et al. (1992b) and Meyerhoff et al. (1995), include the hyphal density (kg dw m−3), the maximum extension rate αmax (μm h−1), the tip angle variations dθ and dφ, the branch and cross-wall intervals (μm−1), the diffusion coefficient of oxygen in water (2.25 × 10−9 m2 s−1 at 30 °C), a concentration of 1.0 × 10−4 kg m−3, the yield of biomass on oxygen (kg kg−1) and the bulk oxygen concentration Cb,O2.
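The steady diffusion–reaction balance of Eq. (5) with Monod-type uptake is easy to approximate numerically. The sketch below uses a simple pseudo-time relaxation on a radial grid with a zero-flux condition at the centre and the bulk concentration fixed at the pellet surface; the scheme and every numerical value are illustrative assumptions and are not meant to reproduce the original MATLAB implementation.

```python
import numpy as np

# Illustrative values only (assumptions, not the entries of Table 1)
D_eff = 7.2e-6        # effective oxygen diffusivity in the pellet, m^2/h
C_bulk = 2.5e-4       # bulk oxygen concentration, kg/m^3
K_O = 3.0e-5          # oxygen half-saturation coefficient, kg/m^3
q_max = 0.5           # maximum specific oxygen uptake rate, kg O2 (kg biomass)^-1 h^-1
X = 50.0              # biomass concentration inside the pellet, kg/m^3
R = 200e-6            # pellet radius, m

n = 101
r = np.linspace(0.0, R, n)
dr = r[1] - r[0]
r_half = 0.5 * (r[1:] + r[:-1])           # radii at cell interfaces
C = np.full(n, C_bulk)

# pseudo-time relaxation toward the steady diffusion-reaction balance of Eq. (5)
dt_pseudo = 0.2 * dr**2 / D_eff           # stable explicit step
for _ in range(40000):
    OUR = q_max * X * C / (K_O + C)       # Monod-type oxygen uptake rate, kg m^-3 h^-1
    lap = np.zeros(n)
    lap[1:-1] = (r_half[1:]**2 * (C[2:] - C[1:-1])
                 - r_half[:-1]**2 * (C[1:-1] - C[:-2])) / (r[1:-1]**2 * dr**2)
    lap[0] = 6.0 * (C[1] - C[0]) / dr**2  # zero-flux (symmetry) condition at the centre
    C += dt_pseudo * (D_eff * lap - OUR)
    C[-1] = C_bulk                        # fixed bulk concentration at the pellet surface
    np.clip(C, 0.0, None, out=C)

print(f"oxygen at the pellet centre relative to bulk: {C[0] / C_bulk:.3f}")
```

With parameters of this order, oxygen is essentially depleted in the core of a 200 μm pellet, which is the behaviour that the model uses to suppress growth and branching in the pellet interior.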
Model output

The results of a simulation can be assessed by visualizing the mycelium morphology development (growth, branching and cross-wall formation) or by numerical analysis of several quantitative measures. These measures include: the hyphal growth unit (HGU), fractions of different mycelial types over time or space, number of tips for a given morphology, number of cross-walls formed, biomass density, number of fragmentation events, etc. Here, we will limit the discussion to visualization, the HGU and component fractions.

Visualization of pellet development

Modern software enables first-rate visualization of biological information, such as the localization of cell division proteins within developing hyphae. Morphological differentiation of streptomycetes is closely integrated with fundamental growth and cell-cycle processes (Flärdh and Buttner 2009). Implementation of knowledge on components that control morphogenesis (such as cell division components) or product formation (e.g. biosynthesis and secretion of natural products) is required to allow for building a more realistic model. A 3D rendering of a simulated portion of a growing mycelium is given in Fig. 2. The DivIVA protein (marked as green) drives tip growth and branching and is therefore always present at apical sites (Hempel et al. 2008). Cross-walls, where cell division proteins localize, are given in red. The potential of modern visualization software to enable realistic rendering of hyphal growth and pellet formation is demonstrated. Protein localizations were derived from the in vivo localizations of GFP-tagged proteins. The model can be readily extended with novel biological data and insights, such as the localization of antibiotic and protein secretion machineries depending on growth or the function of novel morphoproteins.

Simulation of early mycelial pellet formation with DivIVA localizing at hyphal tips (green) and cross-walls (red) forming at roughly 10 μm intervals. Scale bar, 5 μm

The ratio between the size of the mycelium and the number of tips is a characteristic morphological variable called the HGU of a simulated pellet (Caldwell and Trinci 1973). Although originally defined as the total mycelium length divided by the number of tips, it may also be based on the total mycelium volume or mass (Nielsen 1993). In this study, HGU is calculated according to the original definition based on the total mycelium length. If the HGU is constant, both tip extension rate and branching frequency are proportional to the specific growth rate of the biomass. The HGU also provides an idea of the mycelium morphology: a large value indicates long hyphal threads with few branch points, whereas a small value indicates a dense hyphal structure with many branch points (Nielsen and Villadsen 1992). The HGU provides a quick assessment of morphological type; a change of parameters which results in an increase in the HGU may provide a wider or more fragmented morphology with enhanced mass transfer capability and better performance in the fermentor.

Ratio of compartment types

Given that secondary metabolite production in filamentous organisms is associated with morphological differentiation in the mycelium, it is interesting to compare the ratio of different compartment types over time for a given morphology.
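Both of these summary measures are straightforward to compute from a simulated mycelium. A minimal sketch, assuming the mycelium is stored as a list of (compartment, segment length, carries-a-tip) records and using made-up numbers:

```python
from collections import Counter

# (compartment, segment length in m, segment carries a growing tip) -- illustrative data only
segments = [("A", 6e-6, True)] * 30 + [("B", 8e-6, False)] * 50 + [("H", 8e-6, False)] * 40

def hyphal_growth_unit(segs):
    """HGU by its original definition: total mycelium length divided by the number of tips."""
    total_length = sum(length for _, length, _ in segs)
    n_tips = sum(1 for _, _, tip in segs if tip)
    return total_length / n_tips

def compartment_fractions(segs):
    """Length-weighted fractions of apical (A), subapical (B) and hyphal (H) compartments."""
    totals = Counter()
    for comp, length, _ in segs:
        totals[comp] += length
    grand = sum(totals.values())
    return {comp: totals[comp] / grand for comp in ("A", "B", "H")}

print(f"HGU = {hyphal_growth_unit(segments) * 1e6:.1f} um per tip")
print(compartment_fractions(segments))
```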
Recent structured models correlate the amount of antibiotic production in a fermentation to the amount of subapical or hyphal compartments, where secondary metabolite formation is expected to take place (Paul and Thomas 1996; Birol et al. 2002; Giudici et al. 2004; Liu et al. 2005). The ratio of component types can be followed over pellet development, or alternatively, represented as a function of pellet radius. Classification of the mycelium into components with different metabolic activity and function may provide more understanding of the relationship between morphology and biomass accumulation and productivity.

A case study was performed to demonstrate the model's ability to accurately show differences in Streptomyces strain morphologies and incorporate molecular information. Depending on model parameters, the model represents the different morphological variants (pellets, mycelial mats or hyphal fragments). Quantitative model output parameters, such as the HGU and fractions of different component types, are discussed. To visualize growth, an example of mycelium development over time is given (Fig. 3 and Supplemental Video). Starting from a single unbranched mycelium, a full quasi-spherical pellet with high inner cellular density and a loose outer layer of outgrowing filaments develops after 1 day. Different modeled morphological types were simulated and compared to real mycelial clumps with the described morphologies (Fig. 4). The biomass density profiles (kg biomass/m3 of pellet volume) for the pellet and mycelial mat morphologies are given in Fig. 5, showing the larger density of the pellet morphology. Pellet growth was simulated using the parameters given in Table 1; mycelial mat formation was simulated by increasing the distance between branches (from one branch every 2 μm to one every 20 μm); fragments were created when the pellet scenario was simulated taking shear into consideration. Formation of fragments (Fig. 3c) is representative of strains such as the S. coelicolor GSA2 strain over-expressing SsgA (van Wezel et al. 2000a), which shows increased sensitivity to shear and a pronounced tendency to fragment (van Wezel et al. 2006). Mycelial mat formation occurs naturally in other strains, such as S. lividans variant MR (GPvW, unpublished). Changing a single parameter value may have a major impact on the predicted morphology. By performing parameter sensitivity analysis studies and investigating the results, optimal production strains can be designed.

Two-dimensional projection of a 3D simulated pellet. Images based on the three-dimensional model presented as Supplemental Video. Parameters as given in Table 1. Growth time indicated. Scale bar, 100 μm

Qualitative comparison of real (left) and simulated (right) mycelial morphologies. Shake flask cultures were grown in TSBS/YEME medium at 30 °C for 20 h. a characteristic pellet produced by S. coelicolor M145 (wild-type strain); b mycelial mat produced by S. lividans variant MR (GPvW, unpublished); c fragmented growth of S. coelicolor GSA2 [overexpressing ssgA; (van Wezel et al. 2000a)]. For simulation parameters see Table 1. In case b branching interval was set to 1 branch/20 μm. In case c broken mycelial fragments are shown from a case where shear is taken into consideration.
All the bars: 50 μm Biomass density profiles (kg biomass/m3 of pellet volume) for the pellet and mycelial mat morphologies showing increased density at the pellet/mat core and the larger density of the pellet morphology The HGU values were determined for the different simulated morphologies over 36 h of growth (Fig. 6), with higher HGU values indicating more open morphological structures. As expected, the dense pellet HGU is much smaller than that of the mycelial mat morphology. Fragmentation results in an increase in the HGU. Interestingly, for the pellet and sheared pellet (fragmented) morphologies, HGU levels off at roughly 12 h, while it continues increasing for the mycelial mat morphology. This indicates that in the mat, the mass of the mycelium increases more by extension of existing branches than by addition of new ones. HGU values for simulated dense pellet, mycelial mat and sheared pellet morphologies over time Component fractions for the three simulations were compared (Fig. 7). In the pellet (Fig. 7a), dense growth of branches results in almost equal distribution of apical and subapical component at 36 h; ageing of cells, however, has begun. The hyphal fraction will extend steadily as oxygen and space limitations within the pellet limit growth and addition of new branches. In the mycelial mat (Fig. 7b), because of the reduced frequency of branching, at 36 h, the mycelium already consists of 40 % hyphal fraction and only 10 % apical component. Conversely, fragmented growth (Fig. 7c) results in a high apical fraction, due to the presence of a large new population of apical pieces. Comparison of component fractions (apical, subapical and hyphal) in simulated mycelial morphologies: a pellet, b mycelial mat, c fragmented pellet, plotted over time In the above examples, the exact fraction values are not of importance. Rather, the plots serve to illustrate the potential of structural modeling, where the mycelium is considered to be a differentiating species, consisting of cells with different metabolic function. Such modeling can have an important role in the strategic genetic and morphological design of Streptomyces for industrial fermentations. As was demonstrated previously (Liu et al. 2005), fermentation trials can be used to modify the described morphogenesis reactions between structural components (Fig. 1b) such that the fractions of components directly correlate to production of metabolites. The distribution of component fractions can also be plotted over hyphal radius at different time points (Fig. 8). The hyphal component is situated within the pellet centre because as the pellet ages, subapical compartment transforms into hyphal compartment. It is evident that oxygen limitation within the pellet results in decreased formation of branches, with an equal distribution of apical and subapical compartments. Where the oxygen level is higher, fraction of subapical component decreases because branching rate is higher and more apical compartments are being formed. The exterior of the pellet consists entirely of new, apical compartments. When coupled with confocal microscopy (Manteca et al. 2008) and microsensor measurements of oxygen concentration (Hille et al. 2005), such modeling can provide insight into processes that occur within pellets during fermentations. Fractions of apical, subapical and hyphal components versus pellet radius in a simulated dense pellet after a 12, b 24, and c 36 h of growth. 
Oxygen concentration along the radius indicates how branching frequency is affected by oxygen level. The hyphal compartment is located within the pellet core Conclusions and perspectives The presented model integrates for the first time three-dimensional morphological visualization with structured modeling. It thereby provides a realistic, single-pellet, three-dimensional framework to study the relationship between enzyme or antibiotic production and morphology and structure. Simulations can be analyzed via visualization of mycelium development (growth, branching and cross-wall formation) or analysis of numerical measures, such as the HGU or fractions of different mycelial types over time or space. The output thus provides both a visual and numerical assessment of morphological type; for example, a change of parameters which results in an increase in the HGU may provide a wider or more fragmented morphology with enhanced mass transfer capability. Classification of the mycelium into compartments with different metabolic activity and function can provide better understanding of the relationship between morphology and biomass accumulation and productivity. The purpose of this type of modeling is to replace the conventional 'black-box' approach to morphological engineering with a directed rational design and evolution approach in order to better understand how growth rate and morphology affect secretion and yield. Literature describes key regulators that govern pivotal processes during growth in submerged culture, such as Crp for germination (Piette et al. 2005), DivIVA for tip growth and branching (Hempel et al. 2008) and SsgA for fragmentation (Kawamoto et al. 1997; van Wezel et al. 2000a), but further insight into these regulatory mechanisms is needed. Optimal morphology should facilitate fermentations from an engineering perspective as well as result in sufficient production of the desired metabolite product. We are grateful to Jozef Anné for discussions. This research is supported by a VICI grant (10379) of the Netherlands Applied Research Council (STW) to GPvW, and by a VIDI Grant (864.06.003) from the Netherlands Organization for Scientific Research (NWO) to CP. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. 10482_2012_9760_MOESM1_ESM.doc (25 kb) Supplementary material 1 (DOC 25 kb) Supplementary material 2 (MP4 5042 kb) Birol G, Ündey C, Parulekar SJ, Cinar A (2002) A morphologically structured model for penicillin production. Biotechnol Bioeng 77(5):538–552CrossRefPubMedGoogle Scholar Caldwell IY, Trinci AP (1973) The growth unit of the mould Geotrichum candidum. Archiv fur Mikrobiologie 88(1):1–10CrossRefPubMedGoogle Scholar Chater KF, Losick R (1997) Mycelial life style of Streptomyces coelicolor A3(2) and its relatives. In: Shapiro JA, Dworkin M (eds) Bacteria as multicellular organisms. Oxford University Press, New York, pp 149–182Google Scholar Cui YQ, Okkerse WJ, van der Lans RG, Luyben KC (1998) Modeling and measurements of fungal growth and morphology in submerged fermentations. Biotechnol Bioeng 60(2):216–229CrossRefPubMedGoogle Scholar Ericson C (2005) Real-time collision detection. Morgan Kaufmann series in interactive 3D technology. Elsevier, AmsterdamGoogle Scholar Errington J, Daniel RA, Scheffers DJ (2003) Cytokinesis in bacteria. 
Microbiol Mol Biol Rev 67(1):52–65CrossRefPubMedGoogle Scholar Flärdh K (2003) Growth polarity and cell division in Streptomyces. Curr Opin Microbiol 6(6):564–571. doi: S1369527403001474 CrossRefPubMedGoogle Scholar Flärdh K, Buttner MJ (2009) Streptomyces morphogenetics: dissecting differentiation in a filamentous bacterium. Nat Rev Microbiol 7(1):36–49CrossRefPubMedGoogle Scholar Giudici R, Pamboukian CR, Facciotti MC (2004) Morphologically structured model for antitumoral retamycin production during batch and fed-batch cultivations of Streptomyces olindensis. Biotechnol Bioeng 86(4):414–424. doi: 10.1002/bit.20055 CrossRefPubMedGoogle Scholar Glazebrook MA, Doull JL, Stuttard C, Vining LC (1990) Sporulation of Streptomyces venezuelae in submerged cultures. J Gen Microbiol 136(Pt 3):581–588PubMedGoogle Scholar Gray DI, Gooday GW, Prosser JI (1990) Apical hyphal extension in Streptomyces coelicolor A3(2). J Gen Microbiol 136(6):1077–1084CrossRefPubMedGoogle Scholar Hempel AM, Wang SB, Letek M, Gil JA, Flardh K (2008) Assemblies of DivIVA mark sites for hyphal branching and can establish new zones of cell wall growth in Streptomyces coelicolor. J Bacteriol 190(22):7579–7583. doi: 10.1128/JB.00839-08 CrossRefPubMedGoogle Scholar Hille A, Neu TR, Hempel DC, Horn H (2005) Oxygen profiles and biomass distribution in biopellets of Aspergillus niger. Biotechnol Bioeng 92(5):614–623. doi: 10.1002/bit.20628 CrossRefPubMedGoogle Scholar Hopwood DA (2007) Streptomyces in nature and medicine: the antibiotic makers. Oxford University Press, New YorkGoogle Scholar Jakimowicz D, van Wezel GP (2012) Cell division and DNA segregation in Streptomyces: how to build a septum in the middle of nowhere? Mol Microbiol. doi: 10.1111/j.1365-2958.2012.08107.x Jyothikumar V, Tilley EJ, Wali R, Herron PR (2008) Time-lapse microscopy of Streptomyces coelicolor growth and sporulation. Appl Environ Microbiol 74(21):6774–6781. doi: 10.1128/AEM.01233-08 CrossRefPubMedGoogle Scholar Kawamoto S, Watanabe H, Hesketh A, Ensign JC, Ochi K (1997) Expression analysis of the ssgA gene product, associated with sporulation and cell division in Streptomyces griseus. Microbiology 143:1077–1086CrossRefPubMedGoogle Scholar Keijser BJ, Noens EE, Kraal B, Koerten HK, van Wezel GP (2003) The Streptomyces coelicolor ssgB gene is required for early stages of sporulation. FEMS Microbiol Lett 225(1):59–67CrossRefPubMedGoogle Scholar Kendrick KE, Ensign JC (1983) Sporulation of Streptomyces griseus in submerged culture. J Bacteriol 155(1):357–366PubMedGoogle Scholar Kossen NW (2000) The morphology of filamentous fungi. Adv Biochem Eng Biotechnol 70:1–33PubMedGoogle Scholar Krabben P, Nielsen J (1998) Modeling the mycelium morphology of Penicillium species in submerged cultures. Adv Biochem Eng Biotechnol 60:125–152Google Scholar Lejeune R, Baron GV (1997) Simulation of growth of a filamentous fungus in 3 dimensions. Biotechnol Bioeng 53(2):139–150CrossRefPubMedGoogle Scholar Liu G, Xing M, Han QG (2005) A population-based morphologically structured model for hyphal growth and product formation in streptomycin fermentation. World J Microbiol Biotechnol 21(8–9):1329–1338. doi: 10.1007/s112740053648z CrossRefGoogle Scholar Manteca A, Alvarez R, Salazar N, Yague P, Sanchez J (2008) Mycelium differentiation and antibiotic production in submerged cultures of Streptomyces coelicolor. 
Appl Environ Microbiol 74(12):3877–3886CrossRefPubMedGoogle Scholar Megee RD 3rd, Kinoshita S, Fredrickson AG, Tsuchiya HM (1970) Differentiation and product formation in molds. Biotechnol Bioeng 12(5):771–801. doi: 10.1002/bit.260120507 CrossRefPubMedGoogle Scholar Meyerhoff J, Tiller V, Bellgardt KH (1995) Two mathematical models for the development of a single microbial pellet. Biotechnol Bioeng 12:305–313Google Scholar Michel FC Jr, Grulke EA, Reddy CA (1992) Determination of the respiration kinetics for mycelial pellets of Phanerochaete chrysosporium. Appl Environ Microbiol 58(5):1740–1745PubMedGoogle Scholar Nielsen J (1993) A simple morphologically structured model describing the growth of filamentous microorganisms. Biotechnol Bioeng 41(7):715–727. doi: 10.1002/bit.260410706 CrossRefPubMedGoogle Scholar Nielsen J, Villadsen J (1992) Modeling of microbial kinetics. Chem Eng Sci 47(17–18):4225–4270Google Scholar Noens EE, Mersinias V, Willemse J, Traag BA, Laing E, Chater KF, Smith CP, Koerten HK, van Wezel GP (2007) Loss of the controlled localization of growth stage-specific cell-wall synthesis pleiotropically affects developmental gene expression in an ssgA mutant of Streptomyces coelicolor. Mol Microbiol 64(5):1244–1259CrossRefPubMedGoogle Scholar Pamboukian CRD, Guimaraes LM, Facciotti MCR (2002) Applications of image analysis in the characterization of Streptomyces olindensis in submerged culture. Braz J Microbiol 33(1):17–21CrossRefGoogle Scholar Paul GC, Thomas CR (1996) A structured model for hyphal differentiation and penicillin production using Penicillium chrysogenum. Biotechnol Bioeng 51(5):558–572CrossRefPubMedGoogle Scholar Piette A, Derouaux A, Gerkens P, Noens EE, Mazzucchelli G, Vion S, Koerten HK, Titgemeyer F, De Pauw E, Leprince P, van Wezel GP, Galleni M, Rigali S (2005) From dormant to germinating spores of Streptomyces coelicolor A3(2): new perspectives from the crp null mutant. J Proteome Res 4(5):1699–1708. doi: 10.1021/pr050155b CrossRefPubMedGoogle Scholar Reichl U, Yang H, Gilles ED, Wolf H (1990) An improved method for measuring the interseptal spacing in hyphae of Streptomyces tendae by fluorescence microscopy coupled with image-processing. FEMS Microbiol Lett 67(1–2):207–209CrossRefGoogle Scholar Tough AJ, Prosser JI (1996) Experimental verification of a mathematical model for pelleted growth of Streptomyces coelicolor A3(2) in submerged batch culture. Microbiol UK 142:639–648CrossRefGoogle Scholar Traag BA, van Wezel GP (2008) The SsgA-like proteins in actinomycetes: small proteins up to a big task. Antonie Van Leeuwenhoek 94(1):85–97CrossRefPubMedGoogle Scholar Tresner HD, Hayes JA, Backus EJ (1967) Morphology of submerged growth of streptomycetes as a taxonomic aid. I. Morphological development of Streptomyces aureofaciens in agitated liquid media. Appl Microbiol 15(5):1185–1191PubMedGoogle Scholar van Suijdam JC, Hols H, Kossen NW (1982) Unstructured model for growth of mycelial pellets in submerged cultures. Biotechnol Bioeng 24(1):177–191. doi: 10.1002/bit.260240115 CrossRefPubMedGoogle Scholar van Wezel GP, McDowall KJ (2011) The regulation of the secondary metabolism of Streptomyces: new links and experimental advances. Nat Prod Rep 28(7):1311–1333CrossRefPubMedGoogle Scholar van Wezel GP, van der Meulen J, Kawamoto S, Luiten RG, Koerten HK, Kraal B (2000a) ssgA is essential for sporulation of Streptomyces coelicolor A3(2) and affects hyphal development by stimulating septum formation. 
J Bacteriol 182(20):5653–5662CrossRefPubMedGoogle Scholar van Wezel GP, van der Meulen J, Taal E, Koerten H, Kraal B (2000b) Effects of increased and deregulated expression of cell division genes on the morphology and on antibiotic production of Streptomycetes. Antonie Van Leeuwenhoek 78(3–4):269–276CrossRefPubMedGoogle Scholar van Wezel GP, Krabben P, Traag BA, Keijser BJ, Kerste R, Vijgenboom E, Heijnen JJ, Kraal B (2006) Unlocking Streptomyces spp. for use as sustainable industrial production platforms by morphological engineering. Appl Environ Microbiol 72(8):5283–5288CrossRefPubMedGoogle Scholar van Wezel GP, McKenzie NL, Nodwell JR (2009) Chapter 5. Applying the genetics of secondary metabolism in model actinomycetes to the discovery of new antibiotics. Methods Enzymol 458:117–141CrossRefPubMedGoogle Scholar Wardell JN, Stocks SM, Thomas CR, Bushell ME (2002) Decreasing the hyphal branching rate of Saccharopolyspora erythraea NRRL 2338 leads to increased resistance to breakage and increased antibiotic production. Biotechnol Bioeng 78(2):141–146CrossRefPubMedGoogle Scholar Willemse J, Borst JW, de Waal E, Bisseling T, van Wezel GP (2011) Positive control of cell division: FtsZ is recruited by SsgB during sporulation of Streptomyces. Genes Dev 25(1):89–99CrossRefPubMedGoogle Scholar Yang H, King R, Reichl U, Gilles ED (1992a) Mathematical-model for apical growth, septation, and branching of mycelial microorganisms. Biotechnol Bioeng 39(1):49–58CrossRefPubMedGoogle Scholar Yang H, Reichl U, King R, Gilles ED (1992b) Measurement and simulation of the morphological development of filamentous microorganisms. Biotechnol Bioeng 39(1):44–48CrossRefPubMedGoogle Scholar Zangirolami TC, Johansen CL, Nielsen J, Jørgensen SB (1996) Simulation of penicillin production in fed-batch cultivations using a morphologically structured model. Biotechnol Bioeng 56(6):593–604CrossRefGoogle Scholar © The Author(s) 2012 Open AccessThis article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 1.Molecular Biotechnology, Institute of BiologyLeiden UniversityLeidenThe Netherlands 2.Department of BiotechnologyDelft University of TechnologyDelftThe Netherlands Celler, K., Picioreanu, C., van Loosdrecht, M.C.M. et al. Antonie van Leeuwenhoek (2012) 102: 409. https://doi.org/10.1007/s10482-012-9760-9 Received 03 April 2012 Accepted 30 May 2012 First Online 21 June 2012 Publisher Name Springer Netherlands
Communications on Pure & Applied Analysis, September 2017, 16(5): 1571-1585. doi: 10.3934/cpaa.2017075

Exponential boundary stabilization for nonlinear wave equations with localized damping and nonlinear boundary condition

Takeshi Taniguchi, Division of Mathematical Sciences, Graduate School of Comparative Culture, Kurume University, Miimachi, Kurume, Fukuoka 839-8502, Japan

Received September 2015; Revised October 2015; Published May 2017

Fund Project: To Shiho and Sarasa from Grandpapa. The author is partially supported by the Grant-in-Aid for Scientific Research (No.24540198) from Japan Society for the Promotion of Science.

Let $D\subset R^{d}$ be a bounded domain in the $d$-dimensional Euclidean space $R^{d}$ with smooth boundary $\Gamma=\partial D$. In this paper we consider exponential boundary stabilization for weak solutions to the wave equation with nonlinear boundary condition:
$$\left\{ \begin{gathered} u_{tt}(t)-\rho(t)\Delta u(t)+b(x)u_{t}(t)=f(u(t)), \\ u(t)=0 \quad \text{on } \Gamma_{0}\times(0,T), \\ \dfrac{\partial u(t)}{\partial\nu}+\gamma(u_{t}(t))=0 \quad \text{on } \Gamma_{1}\times(0,T), \\ u(0)=u_{0},\quad u_{t}(0)=u_{1}, \end{gathered} \right.$$
provided that $\left\| u_{0} \right\| < \lambda_{\beta}$ and $E(0) < d_{\beta}$, where $\lambda_{\beta}$ and $d_{\beta}$ are defined in (21) and (22), and $\Gamma=\Gamma_{0}\cup\Gamma_{1}$ with $\bar{\Gamma}_{0}\cap\bar{\Gamma}_{1}=\emptyset$.

Keywords: Wave equation, exponential behavior of solutions, nonlinear boundary condition.
Mathematics Subject Classification: Primary: 35L05; Secondary: 35L20, 35B40.
Citation: Takeshi Taniguchi. Exponential boundary stabilization for nonlinear wave equations with localized damping and nonlinear boundary condition. Communications on Pure & Applied Analysis, 2017, 16 (5): 1571-1585. doi: 10.3934/cpaa.2017075
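As a rough numerical illustration of the kind of decay stated in the abstract, the sketch below integrates a one-dimensional analogue of the system with $\rho(t)=1$, the source term $f$ dropped, damping $b(x)$ supported on part of the domain, and linear boundary feedback $\gamma(s)=s$; all of these simplifications and the numerical values are assumptions made for illustration, not choices taken from the paper.

```python
import numpy as np

# 1D analogue: u_tt - u_xx + b(x) u_t = 0 on (0, 1), u = 0 at x = 0 (Gamma_0),
# u_x + u_t = 0 at x = 1 (Gamma_1); source term f omitted (assumption).
nx, L = 201, 1.0
dx = L / (nx - 1)
dt = 0.5 * dx                      # CFL-stable step for unit wave speed
x = np.linspace(0.0, L, nx)
b = np.where(x > 0.5, 1.0, 0.0)    # localized damping on half of the domain (assumption)

u_prev = np.sin(np.pi * x / 2)     # initial displacement with u(0) = 0
u = u_prev.copy()                  # zero initial velocity
energies = []
for n in range(4000):
    u_next = np.empty_like(u)
    # explicit leapfrog update of interior points, damping treated with a backward difference
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
                    - b[1:-1] * dt * (u[1:-1] - u_prev[1:-1]))
    u_next[0] = 0.0                                   # Dirichlet condition on Gamma_0
    # discrete Robin condition u_x = -u_t at x = 1
    u_next[-1] = (u_next[-2] + (dx / dt) * u[-1]) / (1.0 + dx / dt)
    u_prev, u = u, u_next
    if n % 200 == 0:
        ut = (u - u_prev) / dt
        ux = np.diff(u) / dx
        energies.append(0.5 * (np.sum(ut ** 2) + np.sum(ux ** 2)) * dx)

print("energy samples (decaying roughly exponentially):")
print(["%.3e" % e for e in energies])
```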
Estimation of diffusion constants from single molecular measurement without explicit tracking

Shunsuke Teraguchi1,2 & Yutaro Kumagai2

Time course measurement of single molecules on a cell surface provides detailed information about the dynamics of the molecules that would otherwise be inaccessible. To extract the quantitative information, single particle tracking (SPT) is typically performed. However, trajectories extracted by SPT inevitably have linking errors when the diffusion speed of single molecules is high compared to the scale of the particle density. To circumvent this problem, we develop an algorithm to estimate diffusion constants without relying on SPT. The proposed algorithm is based on a probabilistic model of the distance to the nearest point in subsequent frames. This probabilistic model generalizes the model of single particle Brownian motion under an isolated environment into the one surrounded by indistinguishable multiple particles, with a mean field approximation. We demonstrate that the proposed algorithm provides reasonable estimation of diffusion constants, even when other methods suffer due to high particle density or inhomogeneous particle distribution. In addition, our algorithm can be used for visualization of time course data from single molecular measurements. The proposed algorithm based on the probabilistic model of indistinguishable Brownian particles provides accurate estimation of diffusion constants even in the regime where the traditional SPT methods underestimate them due to linking errors.

Background

Sensing the extracellular environment is crucial for cells to properly respond and function. The information from the environment is typically encoded in microscopic molecular signals that are recognized by cell surface receptors. The signaling of cell surface receptors involves several physical processes, including ligation to their ligands, oligomerization, and subsequent binding to the downstream signaling components in cytosol. Although many details of these processes have been inferred from biochemical, genetic, and molecular or cell biological studies, their physical and dynamical aspects at the microscopic level are still largely unknown [1]. Recent development of techniques for single molecular measurement such as total internal reflection fluorescence (TIRF) microscopy [2] provides a chance to directly observe the dynamics of these processes from time course images of fluorescently-labeled single molecules on cell surfaces [3, 4].
For example, the diffusion constants of the epidermal growth factor receptor (EGFR), which belongs to a family of receptor tyrosine kinase, have been found to decrease after binding to EGF, and to transduce signals via subsequent binding with its adaptor Grb2 protein [12, 13]. It has also been shown that intracellular signaling proteins functioning on the membrane have multiple states, each of which have different diffusion constants [14, 15]. Although SPT methods are widely used, they encounter difficulties when the density of particles is higher. When the particle density becomes comparable to the scale of diffusion in the time resolution of the measurement, the expected area of diffusion of a particle tends to contain several irrelevant particles purely by chance. Since, in typical experiments, visualized molecules are indistinguishable from fluorescent signals, linking errors of SPT are inevitable. Then, trajectories from such erroneous SPT lead to underestimation of diffusion constants, and incorrect biological interpretations. Note that this problem of linking error may occur even in the regime where the detection error coming from the diffraction limit of a microscope is negligible. In this paper, we address this problem of linking error in diffusion constant estimation. As we have seen, the problem arises from the impossibility of perfect hard linking of identical particles in SPT. Here instead of linking the nearest particles in subsequent frames, we only assign a probability of such possible identification with respect to the particle density around the position, and directly estimate the diffusion constant without specifying concrete trajectories. For this purpose, we derive a probabilistic model of the distance to the nearest neighbor by generalizing the canonical theory of single Brownian motion into multiple indistinguishable particles. The resultant algorithm successfully estimates diffusion constants even under high particle density conditions where SPT based methods underestimate them. The proposed algorithm shows some resemblance to another SPT free diffusion constant estimation method, namely particle image correlation spectroscopy (PICS) [16], which was inspired by image correlation microscopy [17,18,19,20]. The advantages of our algorithm over PICS include lower variances of estimated diffusion constants, lower numbers of hyperparameters to be determined before the analysis, and the applicability to cases with inhomogeneous particle distributions, whereas PICS assumes a homogeneous distribution. In this paper, we first introduce the probabilistic model of the positions of the nearest neighbors of a diffusing particle surrounded by indistinguishable particles and then formulate the inference of diffusion constants in terms of maximum likelihood estimation based on this model. In a simple setting with a homogeneous particle distribution, our algorithm can be considered to be a natural generalization of the canonical diffusion constant estimation from the mean square displacement (MSD) to the case of finite density of surrounding particles. Our algorithm is further generalized to allow multiple states with different diffusion constants with the help of the expectation maximization (EM) algorithm [21]. Comparison of the performance of our proposed method based on simulated artificial diffusion data with other diffusion constant-estimation methods indicates the advantage of the proposed algorithm. 
Finally, we demonstrate that the algorithm can be used to infer the state of each molecule and visualize the single molecular data with such information.
A probabilistic model of a diffusing particle surrounded by indistinguishable particles
To develop the probabilistic model for estimating the 2D lateral diffusion constants under high particle density, we focus on a single Brownian particle in a time frame (Fig. 1). Without loss of generality, we take the position of the particle as the origin of our polar coordinates. As is well known, the probability of finding the same Brownian particle at a position with a radial distance greater than Δr after a time-lag Δt is given by [22]
$$ P_{\mathrm{dif}}\left(r>\Delta r|D\right)=e^{-\frac{\Delta r^2}{4D\Delta t}}, $$
where the parameter D is the diffusion constant of the particle.
Schematic of the probabilistic model. a a typical distribution of particles at t + Δt (thick circles) with an indication of the position of a representative particle at t (dashed circle). b the case where the nearest particle is the original particle. c the case where the nearest particle is a surrounding particle. Gray color indicates the identification of the original particle. The large dotted circles indicate the distance to the nearest particle. The distance to the nearest neighbor of the origin at the subsequent time frame is modeled by the probabilistic model with respect to the diffusion constant of the original particle and the particle density at the origin
In typical time-lapse single molecular imaging of cells, particles are indistinguishable from one another. By assuming the independence of the dynamics of each particle, we can model the distribution of such indistinguishable surrounding particles by a local uniform density, ρ, which is a sort of mean field approximation of the surrounding particles. In this approximation, we can derive the probability of having the nearest surrounding particle at a distance greater than Δr as follows. We begin with a finite case where there are, on average, N surrounding particles in the disk with a radius R around a point. We assume that the surrounding particles are uniformly distributed within the disk. If we consider a smaller disk with a radius Δr inside the disk, the probability of a single surrounding particle being found outside of the smaller disk is 1 − a/A, where a = πΔr² and A = πR² correspond to the areas of the smaller and bigger disks, respectively. Then, the probability that all the N surrounding particles are also found outside of the smaller disk is (1 − a/A)^N. Assuming that a is much smaller than A, this probability can be approximated as
$$ \left(1-\frac{a}{A}\right)^N=\exp \left(N\log \left(1-\frac{a}{A}\right)\right)\cong \exp \left(-\frac{aN}{A}\right)=\exp \left(-\rho \pi \Delta r^2\right), $$
where ρ = N/A is the local particle density. Thus, the probability of having the nearest surrounding particle at a distance greater than Δr is given by
$$ P_{\mathrm{bg}}\left(r>\Delta r|\rho \right)=e^{-\rho \pi \Delta r^2}. $$
By combining the above results, the probability of detecting the nearest particle at a distance greater than Δr would be given by
$$ P_{\mathrm{nn}}\left(r>\Delta r|\rho, D\right)=P_{\mathrm{dif}}\left(r>\Delta r|D\right)P_{\mathrm{bg}}\left(r>\Delta r|\rho \right)=e^{-\rho \pi \Delta r^2-\frac{\Delta r^2}{4D\Delta t}}. $$
This is the fundamental probabilistic model upon which we develop the estimation algorithm of the diffusion constant in this paper (Fig. 1). This probabilistic model generalizes the theory of Brownian motion of a single isolated particle into that of a single particle surrounded by indistinguishable particles. The implication of the model becomes more apparent if we calculate the expected mean square displacement to the nearest particle (MSDN) as
$$ \mathrm{MSDN}=E\left(\Delta r^2\right)\equiv \int_0^{\infty}\Delta r^2\left(-\frac{d}{d\left(\Delta r\right)}P_{\mathrm{nn}}\left(r>\Delta r|\rho, D\right)\right)d\left(\Delta r\right)=\frac{4D\Delta t}{1+4\rho \pi D\Delta t}. $$
This is a natural generalization of the well-known relationship between the MSD of a single diffusing particle and the diffusion constant [22],
$$ \mathrm{MSD}=4D\Delta t. $$
As expected, MSDN goes back to the original MSD in the limit of ρ being zero (i.e., where there are no surrounding particles). Due to the additional term in the denominator, the MSDN is, in general, smaller than MSD. This is because the nearest particle can be the original particle diffused from the origin as in MSD, or even a nearer surrounding particle. This relationship can be easily solved with respect to D, allowing it to be estimated as
$$ D=\frac{\mathrm{MSDN}}{4\Delta t\left(1-\rho \pi\, \mathrm{MSDN}\right)}. $$
Compared to the standard estimation from MSD,
$$ D=\frac{\mathrm{MSD}}{4\Delta t}, $$
the estimated diffusion constant acquires a fold increase of 1/(1 − ρπMSDN), which compensates for the apparent reduction of the displacement compared to MSD. In Fig. 2, we show the MSDN for simulated data. As Δt increases, the points deviate from the line 4DΔt and obey the above theoretical prediction as expected. Note that the time course of MSDN is conceptually different from that of MSD in a trajectory after SPT. In the case of SPT, the identification of the same particle is consecutively performed using all measured time points during Δt. On the other hand, in MSDN, the nearest point after time duration Δt is chosen without referring to the measured time points before Δt.
Mean square displacement to the nearest particle. A comparison of MSDN and MSD. The black straight line corresponds to the expected MSD, while the black curve is the expected MSDN, with D = 1 μm²/s and ρ = 1 particles/μm². The points are the mean MSDN directly calculated from corresponding simulated data. The error bars indicate the standard deviation from one thousand independent simulations. The red line indicates the asymptotic value of the expected MSDN as Δt → ∞
Maximum likelihood estimation of diffusion constants for local particle density
Though the above relationship between the diffusion constant and MSDN allows us to estimate diffusion constants for the case of a uniform particle distribution, it is difficult to generalize it to an inhomogeneous particle distribution, which is a less ideal but much more relevant situation. In such a case, a constant particle density ρ alone cannot capture the underlying particle distribution. Here, we formulate a more general estimation algorithm of diffusion constants using a maximum likelihood estimation based on the above probabilistic model.
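Before doing so, it is worth noting that the MSDN relation above is easy to check numerically. The short Python sketch below is an illustration of ours (the paper's simulations were carried out in R) with arbitrary parameter values: each trial places one Brownian particle at the origin, surrounds it with a uniform background of indistinguishable particles of density ρ, records the squared distance to the nearest point after Δt, and finally inverts the MSDN relation to recover D.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only (not taken from the paper's figures)
D, rho, dt = 1.0, 1.0, 0.02           # um^2/s, particles/um^2, s
n_trials, box_half = 50_000, 3.0      # background simulated in a 6 um x 6 um box around the origin

def nearest_sq_distance():
    # 2D Brownian displacement of the original particle after dt
    own = rng.normal(scale=np.sqrt(2.0 * D * dt), size=2)
    # uniform background of indistinguishable surrounding particles (Poisson-distributed count)
    n_bg = rng.poisson(rho * (2.0 * box_half) ** 2)
    bg = rng.uniform(-box_half, box_half, size=(n_bg, 2))
    pts = np.vstack([own[None, :], bg]) if n_bg else own[None, :]
    return np.min(np.sum(pts ** 2, axis=1))   # squared distance to the nearest point

msdn = np.mean([nearest_sq_distance() for _ in range(n_trials)])
msdn_theory = 4 * D * dt / (1 + 4 * rho * np.pi * D * dt)
D_hat = msdn / (4 * dt * (1 - rho * np.pi * msdn))   # closed-form inversion of the MSDN relation

print(f"MSDN: simulated {msdn:.4f}, theory {msdn_theory:.4f}; recovered D = {D_hat:.3f}")
```

With these values MSDN comes out roughly 20% smaller than the bare MSD of 4DΔt, and the recovered D is close to the true value, in line with Fig. 2.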
The log-likelihood of an observed dataset is given by $$ {\displaystyle \begin{array}{l}l=\log \prod \limits_{i=1}^N{P}_{\mathrm{nn}}\left(r=\Delta {r}_i|{\rho}_i,D\right)\\ {}=\sum \limits_{i=1}^N\left[\log \left(2{\rho}_i\pi +\frac{1}{2D\Delta t}\right)+\log \left(\Delta {r}_i\right)-{\rho}_i\pi \Delta {r}_i^2-\frac{\Delta {r}_i^2}{4D\Delta t}\right].\end{array}} $$ Here, the index i represents each particle in the preceding time frame, Δr i is the distance to the nearest particle in the subsequent time frame, and ρ i is the local particle density around particle i. If we assume a uniform distribution (i.e., that all ρ i are the same), this maximum likelihood estimation of D is analytically tractable and reduces to the same relation between the diffusion constant and MSDN described above. In the case of general ρ i , it is convenient to utilize the EM algorithm [21, 23]. For this purpose, we introduce a latent variable q i ∈ {0, 1}, which takes the value of zero if the nearest point comes from the surrounding particles, but becomes one if it is the original particle diffused from the origin. Then the complete-data log-likelihood with the information of the latent variable is given by $$ {l}^{\prime }=\log \prod \limits_{i=1}^Np\left(\Delta {r}_i,{q}_i|{\rho}_i,D\right). $$ Here, the joint probability distribution is defined as $$ p\left(\Delta {r}_i,{q}_i|{\rho}_i,D\right)=\left\{\begin{array}{cc}2{\rho}_i\pi \Delta {r}_i{e}^{-{\rho}_i\pi \Delta {r}_i^2-\frac{\Delta {r}_i^2}{4D\Delta t}}& \mathrm{for}\kern0.24em {q}_i=0\\ {}\frac{\Delta {r}_i}{2D\Delta t}{e}^{-{\rho}_i\pi \Delta {r}_i^2-\frac{\Delta {r}_i^2}{4D\Delta t}}& \mathrm{for}\kern0.24em {q}_i=1\end{array}\right.. $$ In the EM algorithm, instead of maximizing the log-likelihood directly, a quantity Q(D, Dl) is maximized with respect to D by iteration: $$ Q\left(D,{D}^l\right)=\sum \limits_{i=1}^N\sum \limits_{q\in \left\{0,1\right\}}\log \left(p\left(\Delta {r}_i,q|{\rho}_i,D\right)\right)p\left(q|\Delta {r}_i,{\rho}_i,{D}^l\right). $$ Here, Dl is the estimation of the diffusion constant D at the l-th iteration. The conditional probability based on Dl is calculated from the above joint probability as $$ p\left(q=0|\Delta {r}_i,{\rho}_i,{D}^l\right)=\frac{4{\rho}_i\pi {D}^l\Delta t}{4{\rho}_i\pi {D}^l\Delta t+1}, $$ $$ p\left(q=1|\Delta {r}_i,{\rho}_i,{D}^l\right)=\frac{1}{4{\rho}_i\pi {D}^l\Delta t+1}. $$ Taking the derivative of Q with respect to D and equating it to zero, $$ \frac{dQ}{dD}=\sum \limits_{i=1}^N\left[\frac{\Delta {r}_i^2}{4{D}^2\Delta t}p\left(q=0|\Delta {r}_i,{\rho}_i,{D}^l\right)+\left(\frac{\Delta {r}_i^2}{4{D}^2\Delta t}-\frac{1}{D}\right)p\Big(q=1|\Delta {r}_i,{\rho}_i,{D}^l\Big)\right]=0, $$ we obtain the update rule $$ {D}^{l+1}=\frac{\left\langle \Delta {r}^2\right\rangle }{4\Delta t{\left\langle P\left(q=1\right)\right\rangle}_{D^l}}, $$ where we have defined the expected fraction of data points with q = 1 as $$ {\left\langle P\left(q=1\right)\right\rangle}_{D^l}\equiv \frac{1}{N}\sum \limits_{i=1}^Np\left(q=1|\Delta {r}_i,{\rho}_i,{D}^l\right). $$ Now, the correction from the original MSD relation is neatly summarized by this expected fraction of the data whose nearest points come from the original particle diffused from the origin. Generalization to models with multiple diffusive states In this subsection, we further generalize the maximum likelihood estimation of diffusion constants into the case where particles take multiple states with different diffusion constants. 
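Before turning to that generalization, note that the single-state update rule derived above amounts to only a few lines of code. The following is a minimal Python sketch of ours (the paper's own implementation was in R), assuming the nearest-neighbour distances Δr_i and the local densities ρ_i have already been extracted from the detected positions.

```python
import numpy as np

def pnn_em_single_state(dr, rho, dt, d0=1.0, n_iter=100, tol=1e-10):
    """EM estimate of a single diffusion constant from nearest-neighbour distances
    dr (one value per particle in the preceding frame) and local densities rho."""
    dr2 = np.asarray(dr, float) ** 2
    rho = np.asarray(rho, float)
    d = d0
    for _ in range(n_iter):
        # E-step: probability that the nearest point is the original particle (q = 1)
        p_q1 = 1.0 / (4.0 * rho * np.pi * d * dt + 1.0)
        # M-step: D^{l+1} = <dr^2> / (4 dt <P(q = 1)>)
        d_new = dr2.mean() / (4.0 * dt * p_q1.mean())
        if abs(d_new - d) < tol * d:
            return d_new
        d = d_new
    return d
```

For a uniform density (all ρ_i equal), the fixed point of this iteration coincides with the closed-form MSDN inversion given earlier.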
It has been revealed that some membrane proteins change their physical properties upon binding to other molecules or upon spontaneous change of their conformation, and that these changes can be inferred from the change of the diffusion constant in some cases [14, 15]. Here we consider this type of change of diffusion constants, which we shall refer to as the change of their states. In this paper, we only provide the solution for the relatively simple situation of dynamics with multiple diffusive states where the interconversion between different states can be ignored at the time resolution of the measurement [10]. This simple generalization is practically quite useful, even when there is no biological reason to expect the existence of such multiple states of the target molecule. In a real experiment, many fluorescently-dyed surface molecules disappear for several reasons, such as internalization of the particle, bleaching of the fluorescent dye, and so on. Such disappearance of particles can be modeled in the above framework by adding an additional state whose diffusion constant is infinitely large. In addition, some accidental peaks of fluorescent intensity may be wrongly detected as particles due to the low signal-to-noise ratio of the original images (false detections). Those spurious particles also tend to disappear in the subsequent time frame. Thus, we can reduce the effects of such false detections by introducing such a state in advance. We will address this issue again in the Results section. The derivation of the corresponding EM algorithm is largely parallel to the one in the previous subsection. In addition to the latent variable q_i, which specifies whether or not the nearest particle is the original particle itself, we introduce an additional latent variable specifying the state of particle i, s_i ∈ {1, ⋯, M}, where M is the number of possible states. The joint probability distribution of this model is given by
$$ p\left(\Delta r_i,q_i,s_i|\rho_i,D_{s_i},\alpha_{s_i}\right)=\left\{\begin{array}{ll}2\rho_i \pi \alpha_{s_i}\Delta r_i\, e^{-\rho_i\pi \Delta r_i^2-\frac{\Delta r_i^2}{4D_{s_i}\Delta t}} & \mathrm{for}\ q_i=0\\ \frac{\alpha_{s_i}\Delta r_i}{2D_{s_i}\Delta t}\, e^{-\rho_i\pi \Delta r_i^2-\frac{\Delta r_i^2}{4D_{s_i}\Delta t}} & \mathrm{for}\ q_i=1\end{array}\right., $$
where \( D_{s_i} \) is the diffusion constant of the state s_i, and \( \alpha_{s_i} \) is the probability of being in the state s_i. The quantity Q for deriving the update rule of the EM algorithm is similarly defined by
$$ Q\left(\theta,\theta^l\right)=\sum \limits_{i=1}^N\sum \limits_{s=1}^M\sum \limits_{q\in \left\{0,1\right\}}\log \left(p\left(\Delta r_i,q,s|\rho_i,\theta \right)\right)p\left(q,s|\Delta r_i,\rho_i,\theta^l\right). $$
Here, θ collectively denotes all of the parameters to be estimated, namely, θ = {D_1, ⋯, D_M, α_1, ⋯, α_M}.
The conditional probability is calculated from the joint probability as follows:
$$ p\left(q=0,s|\Delta r_i,\rho_i,\theta^l\right)=\frac{2\rho_i \pi \alpha_s^l\, e^{-\frac{\Delta r_i^2}{4D_s^l\Delta t}}}{\sum \limits_{s^{\prime}=1}^M\left(2\rho_i\pi +\frac{1}{2D_{s^{\prime}}^l\Delta t}\right)\alpha_{s^{\prime}}^l\, e^{-\frac{\Delta r_i^2}{4D_{s^{\prime}}^l\Delta t}}}, $$
$$ p\left(q=1,s|\Delta r_i,\rho_i,\theta^l\right)=\frac{\frac{\alpha_s^l}{2D_s^l\Delta t}\, e^{-\frac{\Delta r_i^2}{4D_s^l\Delta t}}}{\sum \limits_{s^{\prime}=1}^M\left(2\rho_i\pi +\frac{1}{2D_{s^{\prime}}^l\Delta t}\right)\alpha_{s^{\prime}}^l\, e^{-\frac{\Delta r_i^2}{4D_{s^{\prime}}^l\Delta t}}}. $$
Compared to the single-state case, here the conditional probability also depends upon the displacement, Δr_i. By maximizing Q under the restriction of conservation of probability, \( \sum_s \alpha_s=1 \), we obtain
$$ \alpha_s^{l+1}=\frac{1}{N}\sum \limits_{i=1}^N\sum \limits_{q\in \left\{0,1\right\}}p\left(q,s|\Delta r_i,\rho_i,\theta^l\right), $$
$$ D_s^{l+1}=\frac{\sum \limits_{i=1}^N\sum \limits_{q\in \left\{0,1\right\}}\Delta r_i^2\, p\left(q,s|\Delta r_i,\rho_i,\theta^l\right)}{4\Delta t\sum \limits_{i=1}^Np\left(q=1,s|\Delta r_i,\rho_i,\theta^l\right)}. $$
This is our final update rule for maximum likelihood estimation for the multi-state model.
Monte Carlo simulation
To compare the performance of the proposed and existing methods, we generate artificial data of single molecular particle diffusion with Monte Carlo simulation. Depending on the purpose of the simulation, we generate simulated data in two different ways.
Pairwise simulation
In [16], to evaluate the performance of the PICS algorithm, the authors utilized simulated data generated as pairs of time frames, rather than a single time course of diffusing particles. Since it allows precise control of the distribution of the simulated data, it makes subsequent comparison among algorithms and interpretation of the observed performance easier. Thus, we follow the same strategy to simulate diffusion dynamics in some of our simulations in the Results section. First, we draw a fixed number of positions of particles from the corresponding probability distribution of particles for the preceding time frame. In the case of a uniform particle distribution, we sample the particles over a much larger area than the area of interest, in order to keep the same distribution after the diffusion steps. Next, we generate the subsequent frame by adding a displacement drawn from the two-dimensional normal distribution with a variance of 2DΔt to each position. When needed, another fixed number of particles are drawn from the same particle distribution, and added independently to both the preceding and subsequent frames to represent the existence of false detections, which typically occur in detection from low signal-to-noise ratio image data. In the simulation with false detections, we set the fraction of false detections to 20%. Each estimation of diffusion constants is performed against 10 pairs of time frames. The simulation is repeated 100 times for each condition. All simulations are performed using R (http://www.r-project.org/).
Image based time course simulation
In the above simulation method, positions of detected points were directly generated by Monte Carlo simulation. Thus, no particular bias coming from detecting particle positions from image data is taken into account.
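Returning briefly to the estimation itself, the multi-state update rules derived above take only a few more lines. The sketch below is ours and purely illustrative (the paper provides no reference implementation in Python); a state initialised with an infinite diffusion constant plays the role of the disappearing-particle/false-detection state introduced earlier and is left fixed by the update.

```python
import numpy as np

def pnn_em_multi_state(dr, rho, dt, d_init, alpha_init, n_iter=200):
    """EM for M diffusive states. dr, rho: length-N arrays of nearest-neighbour
    distances and local densities; d_init, alpha_init: length-M initial guesses
    (alpha_init sums to one). A state with d = np.inf models disappearing
    particles / false detections."""
    dr2 = np.asarray(dr, float)[:, None] ** 2        # shape (N, 1)
    rho = np.asarray(rho, float)[:, None]            # shape (N, 1)
    d = np.asarray(d_init, float).copy()             # shape (M,)
    alpha = np.asarray(alpha_init, float).copy()

    for _ in range(n_iter):
        inv = 1.0 / (2.0 * d * dt)                   # zero for an infinite-D state
        expo = np.exp(-dr2 * inv / 2.0)              # e^{-dr^2 / (4 D_s dt)}, shape (N, M)
        w_q0 = 2.0 * rho * np.pi * alpha * expo      # weight of (q = 0, s): nearest point is background
        w_q1 = alpha * inv * expo                    # weight of (q = 1, s): nearest point is the original particle
        norm = (w_q0 + w_q1).sum(axis=1, keepdims=True)
        r_q0, r_q1 = w_q0 / norm, w_q1 / norm        # responsibilities p(q, s | dr_i, rho_i, theta^l)

        alpha = (r_q0 + r_q1).mean(axis=0)           # update of alpha_s
        with np.errstate(invalid="ignore", divide="ignore"):
            d_new = (dr2 * (r_q0 + r_q1)).sum(axis=0) / (4.0 * dt * r_q1.sum(axis=0))
        d = np.where(np.isinf(d), d, d_new)          # keep any fixed infinite-D state
    return d, alpha
```

As an example, d_init = [0.2, 2.0, np.inf] with a roughly uniform alpha_init would correspond to two diffusive states plus a false-detection state, in the spirit of the simulation shown later in Fig. 7.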
In order to take such uncontrollable detection effects into account, we further examine diffusion constant estimation algorithms using artificially generated time course image data of single molecular measurements. For this purpose, we utilize the image data generator provided as a plugin "ISBI Challenge Track Generator" [24] of an open platform software "ICY" [25] for bioimage analysis. We set the parameters of the plugin software as follows: SNR = 4, sequence length = 10, particle density = 100, 500 and 1000, sigma = 1, 2, 3, 5, 7 and 10 in the particle motion with creator type "BROWNIAN_UNIFORM". The image size is 512 pixels × 512 pixels. The other parameters (except for seeds) are set to default, which means the extinction rate of each particle is 0.05. The particles in the generated image data are detected by another plugin, "Spot Detector", of the ICY software. The detection of bright spots by the Spot Detector plugin is performed with default parameters. The simulation is repeated 3 times for each condition with different seed values.
Other algorithms to estimate diffusion constants
To evaluate the performance of our proposed method, we compare it with existing algorithms. To make the comparison meaningful, we examine algorithms that are applicable to the same type of data, namely, the time series of the locations of detected points. For example, some of the algorithms utilized to estimate diffusion constants under higher density or higher diffusion speed cannot be compared because they require specially designed data sets [26, 27]. As a result, our comparison is made mainly with the PICS algorithm, which is specifically designed for estimating diffusion constants under high particle density, in addition to SPT-based methods. We implement the PICS algorithm in R to enable automatic parameter estimation from the Monte Carlo simulation data. A minor difference from the original implementation described in [16] is that we fit the whole cumulative correlation function at once to simplify the automation, instead of separately fitting the linear and non-linear parts of the cumulative correlation function to the data. In our experience, this implementation of PICS provides comparable or even better performance compared to the original one (data not shown).
Local SPT
As an example of the most naïve approach, we make trajectories by simply associating each particle to the nearest particle in the subsequent frame without considering global consistency. Unlike the case of global SPT described below, in this approach, a particle in a subsequent time frame might be associated with several particles in the preceding time frame.
Global SPT
As a representative SPT method, we implement the global linking algorithm based on a greedy hill-climbing optimization with topological constraints following the literature [24, 28]. This algorithm was used by one of the best-performing groups in the international competition of particle tracking methods [24]. In this algorithm, there is no conflict between the associations of each particle. We set the maximum distance parameter for limiting the association of subsequent particles to a large enough value to link all particles. For the pairwise simulation, this procedure provided sensible estimation of diffusion constants independent of the exact value of the maximum distance parameter, as long as the particle density is not very high (data not shown).
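For reference, the local SPT baseline just described, followed by the Brownian maximum-likelihood estimate of D from the resulting step sizes (see the next paragraph), can be written as a short Python sketch; this is our own minimal version, not the implementation used for the benchmarks in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_spt_diffusion_constant(frames, dt):
    """Naive local-SPT baseline: link every particle to its nearest neighbour in the
    next frame (no global consistency) and apply the Brownian maximum-likelihood
    estimate D = <dr^2> / (4 dt). `frames` is a list of (n_i, 2) position arrays."""
    sq_steps = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        d, _ = cKDTree(nxt).query(prev, k=1)     # nearest neighbour in the subsequent frame
        sq_steps.append(d ** 2)
    sq_steps = np.concatenate(sq_steps)
    return sq_steps.mean() / (4.0 * dt)
```

Because linking errors always shorten the apparent steps, this estimator is biased downwards at high particle density, which is exactly the behaviour reported for local SPT in the Results.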
After obtaining the distribution of diffusion step sizes with local or global SPT, we estimate the diffusion constant with a maximum likelihood estimation based on the assumption that each single particle exhibits Brownian motion.
Particle density estimation for the simulated data
To apply our algorithm, we have to estimate the (local) particle density. In the case of a uniform distribution, we estimate the density by simply dividing the total particle number in the frame by the area of interest. In an inhomogeneous case, it is difficult to accurately estimate the local particle density based on just a single time frame. Therefore, we estimate the local probability density by a k nearest-neighbor algorithm after merging all subsequent frames in the dataset except for the one in the frame of interest. Then, the particle density at the point is obtained by weighting the probability density with the number of particles in the frame of interest. The value of k for the k nearest-neighbor density estimation in the merged data is chosen to be the number of time frames utilized, which corresponds to the length scale of k = 1 in a single time frame.
Estimation of diffusion constants for the real data
HeLa cells grown on glass coverslips (Matsunami) in a 6-well plate were transfected with Lyn11-Halotag using Lipofectamine 2000 (Invitrogen). After 4 h, the culture medium was replaced with DMEM and the cells were incubated at 37 °C for 24 h. The culture medium was exchanged with Opti-MEM (Gibco), and the cells were incubated at 37 °C. After 2 h, the cells were washed once with Opti-MEM and incubated with 0.03 nM of Halotag TMR ligand (Promega) in Opti-MEM for 30 min in a CO2 incubator. The cells were then washed three times with Opti-MEM and single-molecule imaging was performed using a TIRF microscope. Single particle detection and estimation of diffusion constants were done using ICY and the PNN algorithm, respectively.
Dependence of estimated diffusion constants on particle density
Both PICS and our estimation algorithm, hereafter called the probabilistic nearest neighbor (PNN) estimation, have been designed to accurately estimate diffusion constants under the condition of high particle density. We first compare these methods to SPT-based methods with and without global optimization of linking (referred to as global SPT and local SPT, respectively) with pairwise simulated data (see the Methods section for details). First, we examine the effect of particle density under the ideal condition of a homogeneous distribution (Additional file 1: Figure S1 and Fig. 3). We vary the particle density from 0.1 to 10 particles/μm², fixing the diffusion constant at 1 μm²/s. The time resolution, Δt, of the data acquisition is assumed to be 20 ms [16]. Note that, in this ideal situation of Brownian motion, only the ratio of the scales of the diffusion constant and the particle density is the relevant parameter. Thus, the effects of changing the particle density with a fixed diffusion constant are effectively equivalent to the ones of changing the diffusion constant with a fixed particle density.
Comparison of the performance of different algorithms in a uniform distribution. Box plots summarizing a comparison of the algorithms. The x axis is the particle density and the y axis is the estimated diffusion constant. The red line indicates the true diffusion constant. a local SPT. b global SPT.
c PICS and d PNN As expected, the change in particle density significantly affects the diffusion constants estimated by the simplest method, local SPT (Fig. 3). In this method, each pair of nearest neighbor points in the subsequent time frame is simply identified as the same physical particle without consideration of the behaviors of other particles. With this simple method, even with one-order lower particle density, the estimation accuracy is low due to the bias caused by the linking error (Additional file 1: Figure S1). After global optimization (global SPT) of the linking, the estimation accuracy of SPT method is improved. In particular, under lower particle density conditions, it reproduces the true diffusion constants to great accuracy (Additional file 1: Figure S1). However, in the condition with higher particle density (ρ ≥ 2), this method also underestimates the diffusion constants. This value of the particle density roughly corresponds to that where 4ρπDΔt becomes comparable to 1 in Eq. 1. This result suggests the limitation in SPT methods under high particle density conditions. On the other hand, the two SPT-free methods PICS and PNN, which take the effects of surrounding particles explicitly into account, estimate the diffusion constants quite well over the whole range of particle densities under consideration (Fig. 3 and Additional file 1: Figure S1). Though the standard deviations among independent simulations tend to increase along with the increase of particle density, these could be reduced if more data in the same condition became available [16]. Thus, the estimation of diffusion constants using PICS or PNN leads to similar performance with SPT-based methods under lower particle density and outperforms them under higher particle density. Therefore, we focus on these two methods in the following discussion. Effect of false detections By comparing PNN and PICS from the above results, one might conclude that the accuracy of PNN is slightly better than that of PICS because the standard deviation of the estimated results is smaller in the former than the latter. However, the above comparison was performed based on simulation in a quite ideal condition: particles distributed uniformly without any false detection. On the other hand, real single molecular measurements tend to be performed under less ideal conditions with a lower signal-to-noise ratio. This affects the accuracy of the detection of peak positions from raw images, leading to spurious particles that are wrongly detected in such noisy images. In order to mimic such a situation, we artificially introduce additional particles independently drawn from the same distribution in each time frame. We simply refer to these additional particles as false detections. The existence of false detections significantly degrades the estimation accuracy (Fig. 4, left panels) of both PNN and PICS. The effects of false detections in the diffusion constant estimation are two-fold. One effect is to increase the apparent density of surrounding particles in the subsequent time frames, and the other is the addition of spurious particles in the preceding time frames that immediately disappear from the scope. The former effect is, by design, treated both in PICS and PNN since the particle density is estimated with both physical particles and false detections. On the other hand, the spurious particles coming from false detections in the preceding time frames behave like particles with an infinitely high diffusion constant. 
Therefore, the addition of false detections biases the estimated diffusion constants towards higher values. Note also that similar effects may occur when actual particles disappear by internalization or dissociation of surface protein from the membrane, bleaching of fluorescent dye and so on. Comparison of the performance of PICS and PNN in a uniform distribution with false detections. Box plots summarizing the comparison of PICS (a and b) and PNN (c and d). The top row is for PICS and the bottom row is for PNN. The first column is the result before introducing the state corresponding to the false detections. The second column is the result after introducing the state for false detection compensation. The x axis is the particle density and the y axis is the estimated diffusion constant. The red line indicates the true diffusion constant Fortunately, as commented in Theory section, this effect of false detections can be addressed by generalizing the probabilistic model both in PICS and PNN by introducing an additional state for false detections with an infinitely large diffusion constant. With this generalization, both PICS and PNN improve their prediction accuracy (Fig. 4, right panels) with a cost of larger standard deviation, which originates from the increase of the number of the parameters to be estimated, namely the fraction of false detections. Estimation with an inhomogeneous distribution As mentioned above, another idealization in the above simulation was the assumption of a uniform distribution of the particles. In fact, this is one of the key assumptions in the PICS algorithm. On the other hand, we have designed PNN to be applicable beyond this assumption. Here, we compare the performance of these two methods under three inhomogeneous distributions: Gaussian, circular and Gaussian mixture. Figure 5, Additional file 2: Figure S2 and Additional file 3: Figure S3 show the results of estimation of diffusion constants under three classes of inhomogeneous distributions, a Gaussian distribution, a circular distribution forming an annulus and Gaussian mixture distributions, respectively. Panel B of each figure shows the results of PICS, where the estimated diffusion constants are biased, especially for the higher particle density. This result is more or less expected, since this type of inhomogeneous condition is beyond the original scope of PICS. Comparison of the performance of PICS and PNN in a Gaussian distribution. a a representative snapshot of the particle distribution. b, c, and d box plots summarizing the comparison between PICS and PNN under a Gaussian distribution. b PICS. c PNN, where the known particle density distribution for the simulation is used for the diffusion constant estimation. d PNN where the particle density distribution is estimated from the data using a k nearest neighbor algorithm. The x axis is the mean particle density over the area of interest, and the y axis is the estimated diffusion constant. The red line indicates the true diffusion constant Panel C of each figure is the result of the PNN estimation with the known theoretical distribution utilized to generate simulated data. In this case, the estimated diffusion constants are much closer to their true values. Of course, in a real situation, we cannot access to the true underlying distribution of the particles. Thus, we have to estimate the distribution from the data, and the accuracy of the diffusion constant estimation depends upon the accuracy of the density estimation. 
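For completeness, the k-nearest-neighbour density estimation described in the Methods can be sketched as follows. This is only our reading of that description: we assume k equal to the number of merged frames and that the k-NN probability density estimated on the merged point cloud is rescaled by the particle count of the frame of interest; these normalisation choices, and all names, are ours rather than the authors'.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_local_density(points_frame, merged_points, n_frames_merged):
    """Local particle density at each detected position of the frame of interest.
    `merged_points`: detections pooled from the other frames of the dataset;
    `n_frames_merged`: how many frames were pooled (used as k, i.e. k = 1 per frame)."""
    k = max(1, n_frames_merged)
    r_k, _ = cKDTree(merged_points).query(points_frame, k=k)
    r_k = r_k[:, -1] if k > 1 else r_k               # distance to the k-th nearest neighbour
    p_hat = k / (len(merged_points) * np.pi * r_k ** 2)   # k-NN probability density estimate
    return p_hat * len(points_frame)                 # rescale to particles per unit area in the frame
```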
However, the results here demonstrate that as far as the particle density is estimated accurately enough, PNN should work reasonably well. Panels D of Fig. 5, Additional file 2: Figure S2 and Additional file 3: Figure S3 show the results of PNN with a particle density estimated from the data itself. Here, in order to estimate the particle density, we use k nearest neighbor estimation. In general, there is a tradeoff between spatial resolution and statistical error in density estimation. Since our algorithm of PNN relies on the (first) nearest neighbor, smaller k values with high spatial resolution would be preferable. However, density estimation based on a smaller k tends to have a larger variance. In order to circumvent this problem, we estimate the particle density using all the post frames in the dataset except for the one in the frame of interest while keeping the effective k value equal to one (see Method section for details). The accuracy of the resultant diffusion constant is comparable to the accuracy using theoretical distributions. Our result here demonstrates that, with a suitable choice of density estimation methods, our algorithm can be utilized to estimate the diffusion constant, even under an inhomogeneous particle distribution. Image based simulation To mimic a realistic situation of diffusion constant estimation from typical single molecular measurements, we further examine our algorithm and others using artificial image data generator for an open competition of SPT organized in 2012 [24]. The image data generator is provided as a plugin "ISBI Challenge Track Generator" of an open platform "ICY" for bioimage analysis. We generate image data of diffusion dynamics as triplicates for each condition. We set the parameters of the simulator to be relatively low signal-to-noise ratio, and short sequence length, to increase the difficulty of the estimation in the category of "BROWNIAN_UNIFORM" (see Method section for details). Note also that the particles in this simulation disappear with an extinction rate of 0.05. A representative movie and images of this simulation are in Additional file 4: Movie S1 and in Additional file 5: Figure S4, respectively. The detection of the particles from image data was made by another plugin "Spot Detector" of the ICY software. The results of the estimation of diffusion constants are summarized in Fig. 6. Here, we show the diffusion constants estimated by PNN, PICS and Local SPT. Though we have also applied global SPT to the same data, it showed very strong dependence on the maximum distance parameter and we could not obtain sensible estimation from the analysis (data not shown). Thus, the results of global SPT are omitted. We observe very similar tendency as in the previous simulations. PNN provides the most accurate results over the range of simulated conditions. Local SPT shows very strong bias depending on the particle density (number) and true diffusion constants. PICS does not show particular bias but tends to have higher variances. Visual inspection of the fitted curves of PICS clearly indicated poor fitting due to the effect of diffraction, as discussed in the original paper of PICS [16]. In the paper, they discussed how to mitigate the effect of diffraction in an iterative algorithm. Here, instead of implementing their iterative algorithm, we apply PICS to the corresponding ground truth data provided by the simulator, which are free from all the effects of diffraction (Panel D). 
Though this favorable treatment improves the fitting and the performance of PICS, PNN still seems to outperform the ground-truth-based PICS (Fig. 6). These results indicate the advantage of PNN in applications to real image data from single molecular measurements of living cells.
The performance of different algorithms in image based simulations. Scatter plots summarizing the performance of the algorithms. The x axis is the true diffusion constant used for the simulation and the y axis is the estimated diffusion constant. The red line indicates the diagonal line corresponding to successful estimation. a PNN. b PICS. c local SPT and d PICS applied to the corresponding ground truth data
3D visualization of particle states
The key feature of the proposed algorithm is that it assigns to each detected particle a probability of taking each possible state, without specifying a trajectory. This property of the algorithm can be utilized to visualize the time course data itself. The data shown in the upper panel of Fig. 7 consist of particles taking three different states, namely slower diffusion (0.2 μm²/s), faster diffusion (2 μm²/s), and false detections. The particle density including all of the three states is 1 particle/μm². The lower left panel is the same data in color (red: slower particle, cyan: faster particle) after removing the false detections. We apply the PNN algorithm to the data and infer the state of each particle by choosing the most probable one among the assigned probabilities. As shown in the lower right panel, the resultant figure bears a strong resemblance to the original data, providing further support for the validity of this algorithm. Unlike canonical SPT methods, which attempt to determine a hard-wired trajectory, our algorithm keeps several possibilities open at the same time. This application of PNN for visualization purposes would be useful, particularly when one is interested in identifying rare events like interactions between pairs of particles.
3D visualization of particle positions and states. 3D representation of the time course simulated data of diffusing particles. The z axis corresponds to time while the other two axes correspond to the x- and y-axes of the original data. a the original data. b, the same data depicted in color (red: slower particle (0.2 μm²/s), cyan: faster particle (2 μm²/s)) after removing the false detections. c the same data depicted in colors based on the particle states inferred by PNN
Application to real data
We applied the PNN algorithm to real data. The Lyn11-Halotag construct, which localizes to the cytoplasmic membrane, was expressed in HeLa cells and single-molecule imaging was performed (Additional file 6: Movie S2). Particle detection data were generated using ICY and subjected to estimation of the diffusion constant by PNN. PNN with k nearest-neighbor particle density estimation resulted in a diffusion constant of 2.81 × 10⁻² μm²/s under the assumption that molecules take only one state, and diffusion constants of 5.20 × 10⁻³ μm²/s and 6.15 × 10⁻² μm²/s under the assumption that molecules take two states. We could also calculate the AIC under these assumptions, and the result supported the latter. In that case, the fractions of the false detection, slow and fast states were estimated to be 36%, 24% and 40%, respectively. A previous report [29] suggested that Lyn, the origin of the Lyn11 tag, exhibits two states, namely lateral diffusion and transient confinement in a lipid region through lipid-lipid interaction.
This supports our result where Lyn11-Halotag has two states with slow and fast diffusion. Together, this result suggests that the proposed algorithm works well for real data and helps to understand dynamics of molecules. Movie S2. TIRF microscopic single molecule video image of Lyn11-Halotag in HeLa cells. Membrane-localized single Lyn11-Halotag protein molecules in a HeLa cell were observed by a TIRF microscope as described in Methods. (AVI 2796 kb) In this paper, we proposed a novel diffusion constant estimation algorithm based on a probabilistic model of the nearest point without explicitly performing SPT. Though conventional SPT methods try to link pairs of particles in the subsequent frames in a hard manner, such hard linking inevitably leads to erroneous pairing if no other information to distinguish particles is available. We have derived a probabilistic model by explicitly considering a Brownian particle surrounded by indistinguishable particles in a mean field approximation. Since our probabilistic model allows us to estimate diffusion constants without relying on particular hard-linked trajectories, it performs well even in the cases with higher particle density or higher diffusion speed, where standard SPT methods underestimate the diffusion constant. Since particle density is difficult to control in real experiments, this is advantageous in practical usage. We have also provided a generalization of our algorithm to multiple diffusive states. This generalization was the key to address the case with false detections, since disappearing particles behave like particles with the additional diffusive state whose diffusion constant is infinity. Thus, in practice, one is recommended to examine both models with and without a fraction of disappearing particles, and select a model by comparing a statistical indicator like the Akaike Information Criterion [30]. In addition to high prediction accuracy, one of the advantages of PNN is its applicability beyond a uniform particle distribution. This has been the limitation on PICS, another existing SPT-free algorithm. We have demonstrated that, with or without knowledge of the underlying distribution, our algorithm accurately estimates diffusion constants even for the cases where PICS cannot be properly applied. In general, without prior knowledge of the underlying particle distribution, the actual performance of diffusion constant estimation also depends upon the accuracy of the estimation of the underlying particle distribution from the data, though the investigation of optimal density estimation itself is beyond the scope of this paper. Since PNN considers each particle separately, it allows us to obtain detailed information about each particle. With the help of the EM algorithm, PNN estimates the probability that each particle is in each state. This kind of information, combined with their spatial distribution, can be used for providing further insights into the underlying biology, as briefly demonstrated in Fig. 7. Another advantage of the proposed method, which is not apparent from the above benchmark results, is the small number of hyperparameters to be determined before analyses. For example, SPT based methods typically have a hyperparameter corresponding to the maximum distance parameter, which specifies the possible maximum displacement of diffusing particles to avoid connections of completely irrelevant particles. 
As mentioned in the Results section, the estimated diffusion constants tend to depend strongly on the choice of such a hyperparameter, especially when the particle density or diffusion speed is high. PICS also requires several hyperparameters to perform the fitting to the empirical cumulative correlation functions [16], including the bin size and the range of consideration, whose optimal values may depend on the data. On the other hand, PNN under a homogeneous particle distribution effectively has only a single hyperparameter, the margin defining the range of interest of the preceding time frames compared to the subsequent time frames, which is also needed for PICS in addition to the ones mentioned above. We have confirmed that PNN has a very weak dependence on the margin parameter, as expected from the construction of the algorithm (data not shown). In the case of PNN under an inhomogeneous particle distribution, the number of hyperparameters may vary depending on the chosen method of particle distribution estimation. In fact, in combination with the k nearest neighbor estimation of the particle distribution we utilized in this paper, no hyperparameter, not even the margin parameter, is required. This small number of hyperparameters makes PNN very convenient in practice, since otherwise much trial and error is needed to optimize the hyperparameters. In particular, when the absolute values of the estimated parameters are of concern, it is not a trivial matter to choose such hyperparameters objectively. Finally, we would like to emphasize the complementary roles of diffusion constant estimation methods. First of all, all of the methods we examined in this paper are based on the assumption that identification of single molecules from the image data is more or less possible. If the particle density is too high, the resultant images cannot have the resolution of single molecular imaging. In this extreme case, other methods that do not rely on particle detection at all, such as image correlation microscopy [17,18,19,20], would be preferable. Alternatively, if one can use specially designed experimental equipment, other methods to finely estimate diffusion constants, such as [26, 27], are available. In the case of the typical time lapse images of single molecules [3, 4, 12,13,14,15] which we considered in this paper, we have demonstrated that PNN is preferable to PICS in terms of accuracy, applicability to inhomogeneous distributions and convenience of the analysis due to the small number of hyperparameters. However, PICS also has the advantage that the analysis is more graphical than PNN, and one may be able to assess the validity of the model by visual inspection as long as the underlying spatial distribution of the particles is uniform. In turn, though canonical SPT methods tend to underestimate the diffusion constant and depend strongly on hyperparameters under higher particle density, they allow one to analyze individual trajectories, which may provide otherwise inaccessible information about each trajectory. In this sense, these methods can be utilized in combination. For example, one may first apply PNN to robustly estimate diffusion constants. This information on diffusion constants might in turn be utilized to determine the hyperparameters of SPT methods so as to minimize the linking error of SPT. Then, the resultant trajectories may be utilized, no longer to estimate the diffusion constants, but to extract other biologically interesting parameters which PNN cannot infer.
Thus, having different diffusion estimation algorithms enlarges our freedom to analyze data, and increases the chance of obtaining biologically meaningful information from various single molecular time course datasets. In this regard, our algorithm opens a new window for accessing diffusion constants, in particular, in the regime where the particle density becomes comparable to the effective scale of diffusion. Liu Z, Lavis LD, Betzig E. Imaging live-cell dynamics and structure at the single-molecule level. Mol Cell Elsevier Inc. 2015;58:644–59. Axelrod D. Total internal reflection fluorescence microscopy in cell biology. Traffic. 2001;2:764–74. Sako Y, Minoghchi S, Yanagida T. Single-molecule imaging of EGFR signalling on the surface of living cells. Nat Cell Biol. 2000;2:168–72. Ueda M, Sako Y, Tanaka T, Devreotes P, Yanagida T. Single-molecule analysis of chemotactic signaling in Dictyostelium cells. Science. 2001;294:864–7. Saxton MJ, Jacobson K. Single-particle tracking: applications to membrane dynamics. Annu Rev Biophys Biomol Struct. 1997;26:373–99. Gelles J, Schnapp BJ, Sheetz MP. Tracking kinesin-driven movements with nanometre-scale precision. Nature. 1988;331:450–3. Reid DB. An algorithm for tracking multiple targets. Autom. Control. IEEE. Trans. 1979;24:843–54. Jaqaman K, Loerke D, Mettlen M, Kuwata H, Grinstein S, Schmid SL, et al. Robust single-particle tracking in live-cell time-lapse sequences. Nat Methods. 2008;5:695–702. Selvin P, Ha T. Single-molecule techniques: a laboratory manual. Selvin P. & Ha T, editor. Cold Spring Harbor Laboratory Press; 2008. Sako Y, Ueda M. Cell signaling reactions: single-molecular kinetic analysis. In: Sako Y, Ueda M, editors. Signal Transduct: Springer; 2011. http://www.springer.com/la/book/9789048198634. Vestergaard CL, Blainey PC, Flyvbjerg H. Optimal estimation of diffusion coefficients from single-particle trajectories. Phys Rev E - Stat Nonlinear, Soft Matter Phys. 2014;89:022726. Morimatsu M, Takagi H, Ota KG, Iwamoto R, Yanagida T, Sako Y. Multiple-state reactions between the epidermal growth factor receptor and Grb2 as observed by using single-molecule analysis. Proc Natl Acad Sci U S A. 2007;104:18013–8. Low-Nam ST, Lidke KA, Cutler PJ, Roovers RC, van PMP B e H, Wilson BS, et al. ErbB1 dimerization is promoted by domain co-confinement and stabilized by ligand binding. Nat Struct Mol Biol Nature Publishing Group. 2011;18:1244–9. Hibino K, Watanabe TM, Kozuka J, Iwane AH, Okada T, Kataoka T, et al. Single- and multiple-molecule dynamics of the signaling from H-Ras to cRaf-1 visualized on the plasma membrane of living cells. ChemPhysChem. 2003;4:748–53. Matsuoka S, Shibata T, Ueda M. Asymmetric PTEN distribution regulated by spatial heterogeneity in membrane-binding state transitions. PLoS Comput Biol. 2013;9:e1002862. Semrau S, Schmidt T. Particle image correlation spectroscopy (PICS): retrieving nanometer-scale correlations from high-density single-molecule position data. Biophys. J. Elsevier. 2007;92:613–21. Hebert B, Costantino S, Wiseman PW. Spatiotemporal image correlation spectroscopy (STICS) theory, verification, and application to protein velocity mapping in living CHO cells. Biophys. J. Elsevier. 2005;88:3601–14. Kolin DL, Costantino S, Wiseman PW. Sampling effects, noise, and photobleaching in temporal image correlation spectroscopy. Biophys J Elsevier. 2006;90:628–39. Kolin DL, Wiseman PW. 
Advances in image correlation spectroscopy: measuring number densities, aggregation states, and dynamics of fluorescently labeled macromolecules in cells. Cell Biochem Biophys. 2007;49:141–64. Pandžić E, Rossy J, Gaus K. Tracking molecular dynamics without tracking: image correlation of photo-activation microscopy. Methods Appl Fluoresc. 2015;3:14006. Dempster APA, Laird NMN, DDB R. Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B Methodol. 1977;39:1–38. Van Kampen NG. Stochastic processes in physics and chemistry. Third ed: North-holl. Pers. Libr. Elsevier. 2007. https://www.elsevier.com/books/stochastic-processes-in-physics-and-chemistry/van-kampen/978-0-444-52965-7. Bilmes JA. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. 1997; Chenouard N, Smal I, de Chaumont F, Maška M, Sbalzarini IF, Gong Y, et al. Objective comparison of particle tracking methods. Nat Methods. 2014;11:281–9. de Chaumont F, Dallongeville S, Chenouard N, Hervé N, Pop S, Provoost T, et al. Icy: an open bioimage informatics platform for extended reproducible research. Nat Methods. 2012;9:690–6. Di Rienzo C, Piazza V, Gratton E, Beltram F, Cardarelli F. Probing short-range protein Brownian motion in the cytoplasm of living cells. Nat Commun Nature Publishing Group. 2014;5:5891. Digman MA, Gratton E. Imaging barriers to diffusion by pair correlation functions. Biophys. J. Biophysical. Society. 2009;97:665–73. Sbalzarini I, Koumoutsakos P. Feature point tracking and trajectory analysis for video imaging in cell biology. J Struct Biol. 2005;151:182–95. Suzuki KGN, Fujiwara TK, Sanematsu F, Iino R, Edidin M, Kusumi A. GPI-anchored receptor clusters transiently recruit Lyn and G alpha for temporary cluster immobilization and Lyn activation: single-molecule tracking study 1. J Cell Biol. 2007;177:717–30. Akaike H. Information theory and an extension of themaximum likelihood principle. 2nd Int Symp Inf Theory, Akademinai Kiado. 1973:267–81. The authors thank M. Ueda and J. Kozuka for help in single-molecule imaging and valuable feedback for this manuscript, R. Mishima for help in preparation of plasmids, A. Yoshimura, E. Kurumatani, and Y. Kimura for help in cell culture, M. Ogawa for secretarial assistance. S. T. thanks Y. Miyanaga for letting us know about the work by Semrau and Schmidt. The authors would like to thank Enago (https://www.enago.com/) for the English language review. This work has been partially supported by a grant-in-aid from the SENTAN program of the Japan Agency for Medical Research and Development (AMED), a combined research grant provided by IFReC, a research grant from The Uehara Memorial Foundation, and JSPS Grant-in-Aid for Young Scientists (B) (25870396). Funding for the publication of this article was provided by a research grant from The Uehara Memorial Foundation. All data generated or analyzed during this study, all the codes used, and all the experimental materials used are available upon request. This article has been published as part of BMC Systems Biology Volume 12 Supplement 1, 2018: Selected articles from the 16th Asia Pacific Bioinformatics Conference (APBC 2018): systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-1 . 
Tohoku Medical Megabank Organization, Tohoku University, 2-1 Seiryo-machi, Aoba-ku, Sendai, 980-8573, Japan Shunsuke Teraguchi Quantitative Immunology Research Unit, Immunology Frontier Research Center, Osaka University, 3-1 Yamada-oka, Suita, Osaka, 565-0871, Japan Shunsuke Teraguchi & Yutaro Kumagai Yutaro Kumagai ST and YK conceived and designed the study. ST developed and implemented the algorithm. YK performed the imaging experiment. ST and YK wrote the paper. Both of the authors have read and approve the final manuscript. Correspondence to Shunsuke Teraguchi or Yutaro Kumagai. Figure S1. Comparison of the performance of different algorithms in uniform distributions with lower particle densities. Box plots summarizing the comparison of the algorithms as in Fig. 3. The x axis is the particle density and the y axis is the estimated diffusion constant. The red line indicates the true diffusion constant. A, local SPT. B, global SPT. C, PICS and D, PNN. (PNG 105 kb) Figure S2. Comparison of the performance of PICS and PNN in a circular distribution. A, a representative snapshot of the particle distribution. B, C, and D, box plots summarizing the comparison between PICS and PNN under a circular distribution. B, PICS. C, PNN, where the known particle density distribution for the simulation is used for the diffusion constant estimation. D, PNN where the particle density distribution is estimated from the data using the k nearest neighbor algorithm. The x axis is the mean particle density over the area of interest, and the y axis is the estimated diffusion constant. The red line indicates the true diffusion constant. (PNG 171 kb) Figure S3. Comparison of the performance of PICS and PNN in Gaussian mixture distributions. A, a representative snapshot of the particle distribution. The red crosses represent centers of three Gaussian distributions. B, C, and D, box plots summarizing the comparison between PICS and PNN under a Gaussian mixture distribution. B, PICS. C, PNN, where the known particle density distribution for the simulation is used for the diffusion constant estimation. D, PNN where the particle density distribution is estimated from the data using the k nearest neighbor algorithm. The x axis is the mean particle density over the area of interest, and the y axis is the estimated diffusion constant. The red line indicates the true diffusion constant. (PNG 306 kb) Movie S1. A representative movie of the image based simulation. A representative movie generated by the plugin, ISBI Challenge Track Generator, of an open platform software ICY. Seed = 123,456, SNR = 4, sequence length = 10, particle density = 1000, sigma = 10 in the particle motion with creator type "BROWNIAN_UNIFORM". The other parameters are set to default, which means the extinction rate of each particle is 0.05. (TIFF 2561 kb) Figure S4. Representative images of the image based simulation. Representative images generated by the plugin, ISBI Challenge Track Generator, of an open platform software ICY. Seed = 123,456, SNR = 4, sequence length = 10, sigma = 10 in the particle motion with creator type "BROWNIAN_UNIFORM". Particle densities are 100, 500 and 1000, respectively. The other parameters are set to default. (PNG 517 kb) Teraguchi, S., Kumagai, Y. Estimation of diffusion constants from single molecular measurement without explicit tracking. BMC Syst Biol 12 (Suppl 1), 15 (2018). 
https://doi.org/10.1186/s12918-018-0526-5
Keywords: Diffusion constants; Expectation maximization algorithm; Probabilistic model; Single molecular measurement
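Figures S2 and S3 above mention a variant of PNN in which the particle density distribution is estimated from the observed positions with a k nearest neighbor algorithm. As a purely illustrative sketch (the study's actual implementation and parameters are not shown here), a standard 2-D k-nearest-neighbour density estimate can be written as follows; the point data and the choice of k are placeholders.

```python
# Illustrative k-nearest-neighbour density estimate in 2-D (placeholder data).
import numpy as np
from scipy.spatial import cKDTree

def knn_density_2d(points, k=10):
    """Probability density at each point: k / (N * pi * r_k^2), where r_k is the
    distance to the k-th nearest neighbour. Multiplying by N gives an estimate
    of the local particle number density."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    # query k + 1 neighbours because each point's nearest neighbour is itself
    dist, _ = tree.query(points, k=k + 1)
    r_k = dist[:, -1]
    return k / (len(points) * np.pi * r_k**2)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(1000, 2))   # placeholder particle positions
print(knn_density_2d(pts, k=10)[:5])
```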
Arthroscopic repair of rotator cuff injury with bioabsorbable suture anchor vs. all-suture anchor: a non-inferiority study
Stefano Di Gennaro1, Domenico Lecce1, Alessio Tarantino (ORCID: orcid.org/0000-0001-5111-4815)2, Mauro De Cupis3, Erica Bassetti4, Pierpaolo Scarnera2, Enrico Ciminello5,6 & Vittorio Calvisi2
To compare all-suture anchors with traditional anchors through clinical and radiological evaluation at pre-established end-points.
We performed a two-arm non-inferiority study comparing an all-suture anchor device (2.3 iconix™, Stryker) with a traditional anchor device (5.5 healix Advance™ BR, Depuy/Mitek) in unpaired samples of 30 patients per group, all suffering from supraspinatus tendon rupture. We administered the DASH (Disabilities of the Arm, Shoulder and Hand), constant and SST (Simple Shoulder Test) questionnaires pre-operatively, at 3 ± 1 months post-intervention and at 8 ± 1 months post-intervention. Questionnaire scores were the primary outcome. We also performed MRI at 3 and at 8 months after surgery to assess the presence of oedema or any loosening of the implant.
The all-suture anchor approach proved to have non-inferior performance with respect to the traditional anchor approach, according to questionnaire scores at the 3-month endpoint. We observed 26 patients with oedema by MRI (18 in the control group, 6 in the experimental group). At the 8-month endpoint we found persistent edema in 12 patients (all treated with healix), 2 mobilizations (healix), 10 partial retears (8 healix, 2 iconix) and 1 implant failure (healix).
All-suture devices have clinical and functional results comparable to traditional devices, while they tend to give fewer complications in terms of bone edema, loosening and retear rate. The effectiveness of all-suture devices should be further investigated in arthroscopic revision surgery of rotator cuff sutures, given the advantages they offer.
Rotator cuff injury is a widespread pathology in many different types of patients. Its etiology is multifactorial: it can depend on traumatic events, a degenerative process, or a combination of these two factors [1]. Predisposing factors include age, physically stressful jobs, intense sports activity (especially overhead sports), repeated microtrauma and metabolic diseases [2]. The area most affected by these injuries is the insertional portion of the tendon, due to its mechanical characteristics and the greater mechanical effort to which this area is subjected. Currently, the international literature suggests that the best strategy for repairing rotator cuff lesions is arthroscopic surgery using suture anchors [3]. Recently, a new all-suture anchor has been developed. This device has theoretical advantages over traditional anchors, as shown in biomechanical studies on cadavers, which found reduced pullout effects and less invasiveness on the tissues, with considerably smaller bone lesions and areas of bone defect in case of pullout, a very relevant factor especially when reoperation is needed [4]. Indeed, we believe that the use of such an all-suture anchor, given its less invasive nature, may be preferable in the surgical treatment of young patients or patients at greater risk of reoperation, due to the significantly reduced trauma that these devices apply to the tissues.
Therefore, the aim of this study is to test the non-inferiority of all-suture devices with respect to traditional ones, by investigating clinical and radiological outcomes in arthroscopic rotator cuff repair surgery.
We compared two different types of anchors, a standard bioabsorbable threaded suture anchor (5.5 healix Advance™ BR, Depuy/Mitek) and an all-suture anchor (2.3 iconix™, Stryker), by evaluating clinical and radiological results [5]. The performance of the two types of anchors was evaluated via the administration of three validated questionnaires to the patients, namely: DASH (Disabilities of the Arm, Shoulder and Hand); constant; and SST (Simple Shoulder Test) [6, 7]. The questionnaires were administered by the clinicians to the patients at three different moments: pre-operative (time 0), 3 ± 1 months post-intervention (time 1) and 8 ± 1 months post-intervention (time 2), in order to investigate the possible functional improvement of the patients, highlighted by changes in the resulting scores.
Patient enrollment and statistical analysis
A two-arm non-inferiority study was performed, in which the performance of the 2.3 iconix™, Stryker, used in the experimental treatment group (referred to also as the iconix group in the following), was tested against the performance of the 5.5 healix Advance™ BR, Depuy/Mitek, used in the control group (referred to also as the healix group in the following). The main outcome and measure of comparison between treatments was the difference in the average DASH score between the two groups of patients. The differences in the average constant and SST scores between the two groups of patients were the secondary outcomes. The sample size was set equal to 30 patients in each group, considering a power of 0.8 for the test with 0.95 significance and fixing sd = 26 and \(\partial = 12\) in the following non-inferiority test:
$$\left\{\begin{array}{l} H_{0}: E-C \le -\partial \\ H_{1}: E-C > -\partial, \end{array}\right.$$
where E is the outcome in terms of DASH score of the experimental treatment and C is the outcome of the control group treatment. The considered value of the sd is the first integer that guarantees a 95% confidence interval length equal to 100 on a normal distribution; the fixed value of \(\partial\) is an integer approximation of the quantity identified by van Kampen and co-authors as the Minimal Important Change in shoulder-related PROMs when comparing the performance of devices between groups in terms of DASH score [8].
Between February 2016 and May 2017, 60 patients with total rupture of the supraspinatus tendon were enrolled in the study according to the following inclusion criteria, which are the same for the two groups: more than 40 years of age; no previous surgery on the same shoulder; absence of comorbidities of the long head of the biceps that involve tenotomy/tenodesis; absence of concomitant lesions of other rotator cuff tendons; no neoplastic pathologies.
Continuous variables are presented as average (sd), while categorical variables are presented as absolute frequencies (percentage in the group). The significance of the differences between the groups was tested by t-test for continuous variables and by \({\chi }^{2}\) test for categorical variables, with the significance level for the p-value fixed at 0.05.
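As a concrete illustration of the testing procedure just described, the following Python sketch implements a one-sided Welch t-test of the stated hypotheses by shifting the group difference by the margin. It follows the hypothesis formulation exactly as written above; the scores, group sizes and random seed are illustrative placeholders rather than the study data, and the authors' own analysis was run in R.

```python
# Minimal sketch of the one-sided non-inferiority t-test described above
# (illustrative data only; not the study's patients or scores).
import numpy as np
from scipy import stats

def noninferiority_test(experimental, control, margin=12.0, alpha=0.05):
    """One-sided Welch t-test of H0: E - C <= -margin vs H1: E - C > -margin."""
    e = np.asarray(experimental, dtype=float)
    c = np.asarray(control, dtype=float)
    se = np.sqrt(e.var(ddof=1) / len(e) + c.var(ddof=1) / len(c))
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se**4 / ((e.var(ddof=1) / len(e))**2 / (len(e) - 1)
                  + (c.var(ddof=1) / len(c))**2 / (len(c) - 1))
    t_stat = (e.mean() - c.mean() + margin) / se   # shift the difference by the margin
    p_value = stats.t.sf(t_stat, df)               # upper-tail probability
    return t_stat, p_value, p_value < alpha

rng = np.random.default_rng(0)
iconix_scores = rng.normal(20, 26, 30)   # experimental arm, 30 placeholder scores
healix_scores = rng.normal(25, 26, 30)   # control arm, 30 placeholder scores
print(noninferiority_test(iconix_scores, healix_scores, margin=12))
```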
The resulting scores for the three considered questionnaires (DASH, constant, SST) at the 3-month check-up were compared between the two groups to investigate statistically significant differences, using one-sided t-tests for unpaired groups with a significance level of 0.05. The tests were performed taking into account that, for the DASH scale, a lower score means a better clinical outcome, while for the constant and SST scales a higher score means a better clinical performance. The statistical analysis was performed using the software R version 3.6.3 (2020-02-29) – "Holding the Windsock".
Surgery was performed by a skilled surgeon with more than 20 years of experience, in accordance with the principles established and recognized by the international literature [9, 10]. The torn supraspinatus tendon was repaired arthroscopically using 1 or more anchors and the same knotting technique. Surgical access was performed through standard arthroscopic portals. Patients underwent intraoperative assessment of the lesion (extent, tissue quality, reducibility of the lesion). Repair was performed after debridement of the lesion and the footprint, with reinsertion on the bone. At the end of the procedure all patients underwent minimal acromioplasty, not for biomechanical purposes but with the aim of providing regenerative input. All patients underwent arthroscopic surgery, followed by a 2-day hospital stay, with the instruction to wear a 15° abduction brace for 3 weeks and observe the same standardized rehabilitation protocol. All the patients underwent tenotomy of the long head of the biceps during the arthroscopic procedure due to its irreparable lesion. The quality of the tendon tissue was assessed intraoperatively by testing its consistency, elasticity and mechanical strength.
Radiological evaluation
Patients underwent clinical and radiological (MR) assessments at time 0, time 1 and time 2. The presence of pain was assessed by the VAS scale at times 0, 1 and 2. MR evaluation at time 1 was intended only to verify possible bone edema, while the time 2 check-up aimed to ascertain the presence of bone edema, mobilization of the anchors, and suture and tendon status. The time 1 and time 2 MR exams were performed using sequences suitable for evaluating the region of interest; in particular, a 0.2 T Opera Esaote magnet was used. Images with a 13 cm FOV (Field Of View) were acquired, with a slice thickness of 5 mm. Routine sequences were T1-weighted Spin Echo, T2-weighted Turbo-Spin Echo and STIR sequences in the coronal scan planes, T1-weighted Spin Echo and T2-weighted Turbo-Spin Echo sequences in the axial scan planes, and T1-weighted Spin Echo sequences in the sagittal planes. The diagnostic tests were performed in accordance with the ethical standards of the Evaluation Committee on Human Experimentation and with the Helsinki Declaration of 1975, revised in 2008. Informed consent was obtained from all patients who participated in the study.
The sample consisted of 32 females and 28 males. Comorbidities were observed in 26 patients: 22 patients with high blood pressure (13 healix, 9 iconix) and 4 with diabetes mellitus (1 healix, 3 iconix). The tendon lesion affected the dominant shoulder in 42 patients (14 healix, 18 iconix). The every-day activities of 12 patients (4 healix, 8 iconix) were characterized by a risk of shoulder wear and tear, while 26 (12 healix, 14 iconix) were engaged in sports associated with tendon injury. All patients were affected by lesions greater than one centimeter in length.
Moderate tissue quality was detected in 36 patients (20 healix, 16 iconix), while 24 patients presented tissue of poor quality (13 healix, 11 iconix). Distributions of patients' characteristics across the experimental and control groups were statistically tested and are summarized in Table 1. The statistical tests ensure that the observed differences in pre-operative conditions between the two groups were not statistically significant, that is, the bias in the distributions of patients' features across the two groups is negligible.
Table 1 Patients' features in the two observed groups
Results of the functional scores at the different endpoints are summarized in Table 2. Overall, across all endpoints considered and with all rating scales used, patients treated with iconix showed better results than patients in the healix group. However, these results did not significantly improve quality of life or functional autonomy, in accordance with the design of the study, which aimed to highlight the non-inferiority of iconix compared to healix.
Table 2 Scores at endpoints
The results of the non-inferiority test for all the considered outcomes at Time 1 show non-inferiority of the iconix anchor (Table 3) according to the one-sided t-test, and this is statistically significant for the DASH, constant and SST scores (p < 0.0001), considering \(\partial = 12\) as fixed in the study design. At the 3-month check-up (Time 1) with MRI [Figs. 1 and 2], intraosseous edema was detected in 26 patients (18 healix, 8 iconix). After 8 months (Time 2) it was possible to observe a reduction in edema, with 12 cases in all (12 healix, no edema in iconix). Furthermore, mobilization of the anchors was detected at time 2 only for two patients in the healix group, who presented a partial dislocation, defined as a mobilization of less than 2 mm with no signs of suture failure. We also recorded 10 partial retears (8 healix, 2 iconix) and a single case of suture failure (healix).
Table 3 Result of the non-inferiority test between the two groups with \(\partial = 12\)
Fig. 1 Pre-surgery MRI
Fig. 2 Post-surgery MRI
Pain assessment by the VAS scale showed an average of 2.21 (2.18) in the experimental group (moderate non-disabling pain) and 4.14 (2.17) in the control group (p = 0.0016).
Rotator cuff injury is a disease that has historically been approached with various therapeutic possibilities: open surgery, conservative therapy and, more recently, arthroscopic repair [11]. Technological innovation and the development of arthroscopic techniques have allowed a progressive expansion of the treatable lesions and have given significant input to the development of new devices, especially anchors [12]. In our study in particular we used the iconix anchor, characterized by the "all-suture" system, a device smaller in size compared to the other bioabsorbable anchors (healix) and allowing a less destructive approach to the bone and the tissues. This technique has already been studied in the literature [13,14,15], with biomechanical analyses in studies on cadavers [16] and on guinea pigs [17], without, however, medium- to long-term results regarding functional outcomes. All-suture anchors were also tested, again in cadaveric studies, in tenodesis of the long head of the biceps, with encouraging results [18]. In a single study, the functional outcome was assessed by means of the constant score on a small number of patients, without, however, a radiological evaluation or a comparison with a similar system used by the same surgeon [19].
It is fair to say that the literature does not provide reliable clinical or instrumental parameters for the prognosis of arthroscopic repair, thus highlighting the poor reproducibility of the evaluation indices considered [20, 21]. In our study, we assessed the non-inferiority of the 2.3 iconix™, Stryker "all-suture" system with respect to the 5.5 healix Advance™ BR, Depuy/Mitek bioabsorbable anchors. We performed a two-arm non-inferiority study with unpaired samples, with the DASH, constant and SST scores as measures to compare the performance of the treatments. According to the statistical tests, there was no significant bias in the distributions of patients' features between the two groups. The only observed significant difference is in oedema, which was mostly found in the group treated with the healix anchor; however, to our knowledge, no evidence of a correlation between the presence of oedema and the post-operative outcome has been found in the literature [22].
The results of the study allow us to conclude that the null hypothesis H0 can be rejected and, therefore, the performance of the experimental treatment is at least not lower than the performance of the control one. Furthermore, we observed a reduced incidence of edema in patients treated with all-suture anchors. However, we did not note any relationship between the dislocations (measured by MR) and the functional results obtained or the patients' quality of life, thus confirming the thesis expressed by S. H. Kim et al. [23] regarding the non-correlation between the presence of perianchor edema and the functional result of the repair. We also found no significant differences from the results obtained for patients treated with the healix system. Moreover, pain assessment by the VAS scale showed a statistically significantly better recovery for patients treated with the all-suture anchor device at 8 ± 1 months after the intervention. Considering also what we have seen in the literature, we believe that the better pain result of the all-suture anchors is to be attributed to the lesser bone trauma rather than to the role of bone edema, which the evidence seems to suggest is negligible.
Lastly, the less invasive nature of all-suture systems on the bone must be recognized: this is a factor which has proved extremely important in a disease that often occurs in an active population with a significant incidence of retear [24]. In fact, studies on cadavers have shown that all-suture anchors can be removed in revision surgery and allow the implant site to be used as a new footprint for traditional revision, thus making any reintervention on a cuff rupture technically less difficult [25].
The study has several limitations. The considered sample is small and the follow-up is limited to 2 controls within 1 year from surgery. However, there are not yet any clinical studies that have investigated all-suture devices in rotator cuff arthroscopic repair, and the considered outcomes suggest an optimal or satisfactory recovery that makes a longer follow-up less likely to invalidate the obtained results.
In our experience, an all-suture system offers results comparable to the ones obtained in patients treated with traditional anchors, with the advantage of a smaller number of edema cases and tendon retears, and with mechanical failure less likely to be observed.
Further studies, preferably randomized and multi-center, would provide more case histories, as well as extended endpoints, and would verify any long-term outcomes observed in patients treated with iconix anchors.
Data are available from the corresponding author upon request.
MR: Magnetic resonance
DASH: Disabilities of the Arm, Shoulder and Hand
SST: Simple Shoulder Test
Seitz AL, McClure PW, Finucane S, Boardman ND, Michener LA. Mechanisms of rotator cuff tendinopathy: intrinsic, extrinsic, or both? Clin Biomech. 2011;26(1):1–12. https://doi.org/10.1016/j.clinbiomech.2010.08.001.
Leong HT, Fu SC, He X, Oh JH, Yamamoto N, Yung SHP. Risk factors for rotator cuff tendinopathy: a systematic review and meta-analysis. J Rehabil Med. 2019;51(9):627–37. https://doi.org/10.2340/16501977-2598.
Huegel J, Williams AA, Soslowsky LJ. Rotator cuff biology and biomechanics: a review of normal and pathological conditions. Curr Rheumatol Rep. 2014;17(1):1–9. https://doi.org/10.1007/s11926-014-0476-x.
Ntalos D, Huber G, Sellenschloh K, Saito H, Püschel K, Morlock MM, Frosch KH, Klatte TO. All-suture anchor pullout results in decreased bone damage and depends on cortical thickness. Knee Surg Sports Traumatol Arthrosc. 2021;29(7):2212–9. https://doi.org/10.1007/s00167-020-06004-6.
Papalia R, Franceschi F, Diaz Balzani L, D'Adamio S, Denaro V, Maffulli N. The arthroscopic treatment of shoulder instability: Bioabsorbable and standard metallic anchors produce equivalent clinical results. Arthroscopy. 2014;30(9):1173–83. https://doi.org/10.1016/j.arthro.2014.03.030.
Angst F, Schwyzer HK, Aeschlimann A, Simmen BR, Goldhahn J. Measures of adult shoulder function: Disabilities of the Arm, Shoulder, and Hand Questionnaire (DASH) and Its Short Version (QuickDASH), Shoulder Pain and Disability Index (SPADI), American Shoulder and Elbow Surgeons (ASES) Society Standardized Shoulder Assessment Form, Constant (Murley) Score (CS), Simple Shoulder Test (SST), Oxford Shoulder Score (OSS), Shoulder Disability Questionnaire. Arthritis Care Res. 2011;63(SUPPL. 11):174–88. https://doi.org/10.1002/acr.20630.
Carosi M, Galeoto G, Di Gennaro S, Berardi A, Valente D, Servadio A. Transcultural reliability and validity of an Italian language version of the constant–Murley score. J Orthop Trauma Rehab. 2020;221049172094532. https://doi.org/10.1177/2210491720945327
van Kampen DA, Willems WJ, van Beers LW, Castelein RM, Scholtes VA, Terwee CB. Determination and comparison of the smallest detectable change (SDC) and the minimal important change (MIC) of four-shoulder patient-reported outcome measures (PROMs). J Orthop Surg Res. 2013;8(1):1–9. https://doi.org/10.1186/1749-799X-8-40.
Farmer KW, Wright TW. Shoulder arthroscopy: the basics. J Hand Surg Am. 2015;40(4):817–21. https://doi.org/10.1016/j.jhsa.2015.01.002. Epub 2015 Feb 26 PMID: 25726045.
Paxton ES, Backus J, Keener J, Brophy RH. Shoulder arthroscopy: basic principles of positioning, anesthesia, and portal anatomy. J Am Acad Orthop Surg. 2013;21(6):332–42. https://doi.org/10.5435/JAAOS-21-06-332. PMID: 23728958.
Dang A, Davies M. Rotator Cuff Disease: Treatment Options and Considerations. Sports Med Arthrosc Rev. 2018;26(3):129–33. https://doi.org/10.1097/JSA.0000000000000207.
Visscher LE, Jeffery C, Gilmour T, Anderson L, Couzens G. The history of suture anchors in orthopaedic surgery. Clin Biomech. 2019;61:70–8. https://doi.org/10.1016/j.clinbiomech.2018.11.008.
Byrd JWT, Jones KS, Loring CL, Sparks SL.
Acetabular all-suture anchor for labral repair: incidence of intraoperative failure due to pullout. Arthroscopy. 2018;34(4):1213–6. https://doi.org/10.1016/j.arthro.2017.09.049. Epub 2018 Jan 17 PMID: 29373296. Lacheta L, Dekker TJ, Anderson N, Goldenberg B, Millett PJ. Arthroscopic knotless, tensionable all-suture anchor bankart repair. Arthrosc Tech. 2019;8(6):e647–53. https://doi.org/10.1016/j.eats.2019.02.010. Published Jun 2 2019. Lee JH, Park I, Hyun HS, Kim SW, Shin SJ. Comparison of clinical outcomes and computed tomography analysis for tunnel diameter after arthroscopic Bankart repair with the all-suture anchor and the biodegradable suture anchor. Arthroscopy. 2019;35(5):1351–8. https://doi.org/10.1016/j.arthro.2018.12.011. Epub 2019 Apr 12 PMID: 30987905. Nagra NS, Zargar N, Smith RDJ, Carr AJ. Mechanical properties of all-suture anchors for rotator cuff repair. Bone Joint Res. 2017;6(2):82–9. https://doi.org/10.1302/2046-3758.62.BJR-2016-0225.R1. Barber FA, Herbert MA. Cyclic loading biomechanical analysis of the pullout strengths of rotator cuff and glenoid anchors: 2013 update. Arthroscopy. 2013;29(5):832–44. https://doi.org/10.1016/j.arthro.2013.01.028. Frank RM, Bernardoni ED, Veera SS, Waterman BR, Griffin JW, Shewman EF, Verma NN. Biomechanical analysis of all-suture suture anchor fixation compared with conventional suture anchors and interference screws for biceps tenodesis. Arthroscopy. 2019;35(6):1760–8. https://doi.org/10.1016/j.arthro.2019.01.026. Dhinsa BS, Bhamra JS, Aramberri-Gutierrez M, Kochhar T. Mid-term clinical outcome following rotator cuff repair using all-suture anchors. J Clin Orthop Trauma. 2019;10(2):241–3. https://doi.org/10.1016/j.jcot.2018.02.014. Saccomanno MF, Cazzato G, Fodale M, Sircana G, Milano G. Magnetic resonance imaging criteria for the assessment of the rotator cuff after repair: a systematic review. Knee Surg Sports Traumatol Arthrosc. 2015;23(2):423–42. https://doi.org/10.1007/s00167-014-3486-3. Saccomanno MF, Sircana G, Cazzato G, Donati F, Randelli P, Milano G. Prognostic factors influencing the outcome of rotator cuff repair: a systematic review. Knee Surg Sports Traumatol Arthrosc. 2016;24(12):3809–19. https://doi.org/10.1007/s00167-015-3700-y. Chen S, He Y, Wu D, Hu N, Liang X, Jiang D, Huang W, Chen H. Postoperative bone marrow edema lasts no more than 6 months after uncomplicated arthroscopic double-row rotator cuff repair with PEEK anchors. Knee Surg Sports Traumatol Arthrosc. 2021;29(1):162–9. https://doi.org/10.1007/s00167-020-05897-7. Epub 2020 Feb 14 PMID: 32055881. Kim SH, Yang SH, Rhee SM, Lee KJ, Kim HS, Oh JH. The formation of perianchor fluid associated with various suture anchors used in rotator cuff repair: all-suture, polyetheretherketone, and biocomposite anchors. Bone Joint J. 2019;101-B(12):1506–11. https://doi.org/10.1302/0301-620X.101B12.BJJ-2019-0462.R2. Lee YS, Jeong JY, Park CD, Kang SG, Yoo JC. Evaluation of the risk factors for a rotator cuff retear after repair surgery. Am J Sports Med. 2017;45(8):1755–61. https://doi.org/10.1177/0363546517695234. Ntalos D, Huber G, Sellenschloh K, Briem D, Püschel K, Morlock MM, Klatte TO. Biomechanical analysis of conventional anchor revision after all-suture anchor pullout: a human cadaveric shoulder model. J Shoulder Elbow Surg. 2019;28(12):2433–7. https://doi.org/10.1016/j.jse.2019.04.053. No funds were received for this clinical study. 
Polo Sanitario San Feliciano, Rome, Italy
Stefano Di Gennaro & Domenico Lecce
UNIVAQ MeSVA: Università Degli Studi Dell'Aquila, Dipartimento Di Medicina Clinica Sanita Pubblica Scienze Della Vita E Dell'Ambiente, Via Mattia Battistini, 44, 00167, Rome, RM, Italy
Alessio Tarantino, Pierpaolo Scarnera & Vittorio Calvisi
Department of Orthopaedics and Traumatology, C.T.O. Hospital, Rome, Italy
Mauro De Cupis
Department of Radiological, Oncological and Pathological Sciences, "La Sapienza" University of Rome, Rome, Italy
Erica Bassetti
Italian Implantable Prostheses Registry, Scientific Secretary of the Presidency, Italian National Institute of Health, Rome, Italy
Enrico Ciminello
Department of Statistical Science, "La Sapienza" University of Rome, Rome, Italy
Stefano Di Gennaro
Domenico Lecce
Alessio Tarantino
Pierpaolo Scarnera
Vittorio Calvisi
S. D. G., V. C. and E. B. designed the study. A. T. and D. L. selected the patients and collected the data. M. D. C. scored the collected data. P. S. statistically processed the data. E. C. revised, improved and processed the statistical data. All authors have read and approved the final version of the manuscript (31.08.2022). No funding to declare. None of the Authors have affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Correspondence to Alessio Tarantino.
The study was conducted according to the guidelines of the Declaration of Helsinki (1964). The Ethics and law management committee of "Polo Sanitario San Feliciano" (Rome, Italy) approved the study. All patients signed informed consent for this study. All authors have given consent to publication. The authors declare they have no competing interests. No funds were received for this clinical study.
Di Gennaro, S., Lecce, D., Tarantino, A. et al. Arthroscopic repair of rotator cuff injury with bioabsorbable suture anchor vs. all-suture anchor: a non-inferiority study. BMC Musculoskelet Disord 23, 1098 (2022). https://doi.org/10.1186/s12891-022-06061-7
Keywords: Shoulder arthroscopy; All-suture anchor; Bioabsorbable anchor; Rotator cuff surgery
Existence and nonexistence of entire positive radial solutions for a class of Schrödinger elliptic systems involving a nonlinear operator
Zedong Yang a, Guotao Wang a,c, Ravi P. Agarwal b,c,* and Haiyong Xu d
a School of Mathematics and Computer Science, Shanxi Normal University, Linfen, Shanxi 041004, China
b Department of Mathematics, Texas A & M University, Kingsville, TX 78363-8202, USA
c Nonlinear Analysis and Applied Mathematics (NAAM) Research Group, Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
d College of Science & Technology, Ningbo University, Ningbo, Zhejiang 315211, China
* Corresponding author: Ravi P. Agarwal. All authors contributed equally to this manuscript.
Received April 2020. Revised June 2020. Published November 2020.
Fund Project: This work is supported by NSFC (No. 11501342), NSF of Shanxi, China (No. 201701D221007), Science and Technology Innovation Project of Shanxi Normal University (No. 2019XSY027), the Graduate Innovation Program of Shanxi, China (No. 2020SY337) and STIP (Nos. 201802068 and 201802069).
In this paper, we study the positive solutions of the Schrödinger elliptic system
$$\left\{\begin{array}{ll} \operatorname{div}(\mathcal{G}(|\nabla y|^{p-2})\nabla y) = b_{1}(|x|)\,\psi(y)+h_{1}(|x|)\,\varphi(z), & x \in \mathbb{R}^{n}\ (n \geq 3), \\ \operatorname{div}(\mathcal{G}(|\nabla z|^{p-2})\nabla z) = b_{2}(|x|)\,\psi(z)+h_{2}(|x|)\,\varphi(y), & x \in \mathbb{R}^{n}, \end{array}\right.$$
where $\mathcal{G}$ is a nonlinear operator. By using the monotone iterative technique and the Arzela-Ascoli theorem, we prove that the system has positive entire bounded radial solutions. Then, we establish the results for the existence and nonexistence of positive entire blow-up radial solutions for the nonlinear Schrödinger elliptic system involving a nonlinear operator. Finally, three examples are given to illustrate our results.
Keywords: Schrödinger elliptic system, nonlinear operator, radial solution, blow up, monotone iterative method.
Mathematics Subject Classification: Primary: 35A24, 35B09; Secondary: 35B44.
Citation: Zedong Yang, Guotao Wang, Ravi P. Agarwal, Haiyong Xu. Existence and nonexistence of entire positive radial solutions for a class of Schrödinger elliptic systems involving a nonlinear operator. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020436
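The abstract above relies on the monotone iterative technique for radial solutions. Purely as an illustration (not the authors' construction), the following Python sketch runs the classical successive-approximation scheme for a single radial p-Laplacian equation of the type div(|∇y|^{p-2}∇y) = b(|x|)ψ(y); the scalar reduction, the values of p and n, the coefficient b, the nonlinearity ψ and the crude numerical integration are all assumptions chosen only for demonstration.

```python
# Illustrative successive (monotone) iteration for a radial p-Laplacian equation.
import numpy as np

p, n = 3.0, 3                                  # assumed exponent and dimension
r = np.linspace(0.0, 10.0, 2001)
dr = r[1] - r[0]
b = lambda s: 1.0 / (1.0 + s**2)**2            # assumed radial coefficient
psi = lambda y: np.sqrt(np.abs(y))             # assumed nonlinearity
phi_inv = lambda v: np.sign(v) * np.abs(v)**(1.0 / (p - 1))  # inverse of u -> |u|^{p-2} u

def successive_iteration(y0=1.0, iterations=25):
    """Iterate y_{k+1}(r) = y0 + int_0^r phi^{-1}( t^{1-n} int_0^t s^{n-1} b psi(y_k) ds ) dt."""
    y = np.full_like(r, y0)                    # start from the constant subsolution
    for _ in range(iterations):
        inner = np.cumsum(r**(n - 1) * b(r) * psi(y)) * dr
        with np.errstate(divide="ignore", invalid="ignore"):
            flux = np.where(r > 0.0, inner / r**(n - 1), 0.0)
        y = y0 + np.cumsum(phi_inv(flux)) * dr
    return y

y = successive_iteration()
print(y[0], y[-1])   # nondecreasing, bounded radial profile when the integrals converge
```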
Is bed turnover rate a good metric for hospital scale efficiency? A measure of resource utilization rate for hospitals in Southeast Nigeria
Henry E. Aloh (ORCID: orcid.org/0000-0001-9263-2547)1,2, Obinna E. Onwujekwe2,3, Obianuju G. Aloh4 & Chijioke J. Nweke5
Cost Effectiveness and Resource Allocation volume 18, Article number: 21 (2020)
Nigeria's health sector, like that of other sub-Saharan African countries, increasingly faces critical resource constraints. Thus, there is a need to seek ways of improving the efficient use of scarce health resources. The aim of this study was to determine the resource utilization rate of teaching hospitals in Southeast Nigeria as a means of estimating their efficiency.
The study is a longitudinal cross-sectional study. It applied ratio indicators and the Pabon Lasso model, using data on the number of hospital beds, number of inpatients and total inpatient-days from purposefully selected teaching hospitals in Southeast Nigeria, to measure efficiency over a period of 6 years (2011–2016).
The hospitals' mean bed occupancy rate was as low as 42.14%, far below the standard benchmark of 80–85%. The mean average length of stay was as high as 8.15 days and the observed mean bed turnover was 21.27 patients/bed/year. These findings portray a high level of inefficiency in Nigerian teaching hospitals, which was further illustrated by the Pabon Lasso graph, with only 10–20% of the hospital-years located within or near the efficient zone or quadrant.
The study was able to show that health ratio indicators such as hospital bed turnover rate (BTR) and bed occupancy rate (BOR), as well as patients' average length of stay (ALS), can be used as tools for assessing hospital performance or efficiency in resource utilization. Thus, in low and middle income countries where medical record keeping may be inadequate or poor, ratio indicators used alone or with the Pabon Lasso graph/chart could serve as an alternative metric for hospital efficiency.
Healthcare is one of the most important services provided by the government in every country of the world. It is regarded as a critical resource in the process of economic development. Hence, in both developed and developing nations, a significant proportion of the nation's wealth is devoted to the health sector. The health systems in Sub-Saharan African countries, including Nigeria, increasingly face critical resource constraints, and this is accounted for by a host of factors such as poor macroeconomic performance, cutbacks in public spending, rapid population growth, various disease outbreaks (e.g. HIV/AIDS, Lassa fever and Ebola fever), and an upsurge in diseases such as malaria [1]. Health care system components such as hospitals in developing countries have for a long time remained under resource constraints and probably inefficient [2]. Performance or efficiency evaluation of hospitals may therefore play a strategic role in healthcare organizations and help address the best use of resources and the rationing of demand [3]. Performance evaluation has become central to the concept of quality improvement. It provides a means of defining what hospitals are actually doing and comparing it with expected targets [4]. It enhances greater accountability and stimulates continuous quality improvement. This is why the WHO European Regional Office launched in 2003 a flexible and comprehensive framework called the Performance Assessment Tool for quality improvement in Hospitals (PATH) [5].
Improvement in the efficiency of hospital care is a fundamental aspect of health system strengthening [6]. However, the challenge facing low-income countries is that many keep struggling, without much success, to develop and implement feasible strategies to monitor hospitals nationally [6]. In sub-Saharan Africa, hospitals play a key role in the delivery of healthcare services [7]. They also account for the bulk of the government's health sector expenditure, ranging between 45 and 80% in developing countries [8]. Empirical evidence emerging from studies in South Africa [9], Kenya [10], Ghana [11] and Namibia [12] indicates a wide prevalence of inefficiency in the provision of hospital-based healthcare [13]. In Nigeria, hospitals are perceived to exhibit gross inefficiency [14]. This cuts across all levels of healthcare. The tertiary or teaching hospitals are the highest level of healthcare in Nigeria and they take up about 60% of the country's annual budget on health.
In health systems, the nature of outputs differs from that of other organizations, thus measurements of efficiency are more difficult [15]. In the hospital literature, performance or efficiency is measured using inputs and outputs. Capital input is taken to represent a wide range of manufactured products such as complex medical equipment, buildings, beds and vehicles employed in health care. By nature, capital inputs are durable and provide services over a fairly long period of time. The number of beds is the most commonly used variable in hospital efficiency studies. The use of this variable as a proxy for capital inputs has been accepted by researchers [16]. Inpatient services require that patients utilize a hospital bed for an overnight stay or for extended treatment over a period of one or more days. A systematic literature review carried out by Iranian researchers claimed that, out of about 218 indicators used in hospital performance assessment, the most frequently used are average length of stay (ALS) and bed occupancy rate (BOR) [17]. The present study used these two health ratio indicators (ALS and BOR), bed turnover rate (BTR) and turnover interval (TI), as well as the Pabon Lasso model, to investigate efficiency in resource utilization by Teaching Hospitals in Southeast Nigeria. It is hoped that this will prove to be a simple, good method for measuring the performance of hospitals that treat inpatients. Thus, the study could help develop a tool or metric for comparing hospital performance [18].
The design was a cross-sectional, retrospective study. A purposeful sampling method was used to select 3 teaching hospitals from Southeast Nigeria that had the requisite health records. The region is located between longitudes 6° 25′ E and 8° 30′ E, and between latitudes 5° 10′ N and 6° 45′ N. The population of the region was estimated to be 20,683,115 as of 2015, with a growth rate of 2.47% [19]. The hospitals in this study were all University Medical College health institutions; hence they function as teaching hospitals and referral centers, treating mainly chronic and complicated illnesses.
The study used health ratio indicators, namely bed occupancy rate (BOR), bed turnover rate (BTR), average length of stay (ALS) and turnover interval (TI) [20], to evaluate efficiency in resource utilization of the 3 teaching hospitals that were selected by simple random sampling. Data were collected for a period covering 7 years (2010 to 2016) on the following variables:
Number of active beds: this refers to the number of functional beds for each hospital-year.
Active bed-days: this refers to the number of functional beds in the hospital for a given period, usually 1 year, and it is obtained as the number of active beds multiplied by 365 days.
Number of admissions or discharges in a given year.
Occupied bed-days or total inpatient-days: this refers to the sum of the total number of days all admitted patients spent in the hospital in a given year.
Using the above variables, four ratios or indices were computed as follows [20]:
$$\text{Bed Occupancy Rate (BOR)} = \frac{\text{Occupied bed-days (total inpatient-days)}}{\text{Active bed-days}} \times 100\%$$
$$\text{Average Length of Stay (ALS)} = \frac{\text{Occupied bed-days (total inpatient-days)}}{\text{Number of discharges}}$$
$$\text{Bed Turnover Rate (BTR)} = \frac{\text{Number of discharges (or admissions) in 1 year}}{\text{Active beds}}$$
$$\text{Turnover Interval (TI)} = \frac{365}{\text{BTR}} - \text{ALS}$$
The above formulae were built into a Stata-11 version of a Microsoft Excel spreadsheet for easier computation. The study went on to apply the Pabon Lasso model or graph to further demonstrate the efficiency of the hospital-years, since using any one of the above indicators alone may not sufficiently estimate the performance or efficiency of the hospitals. The Pabon Lasso model was originally developed by Pabon Lasso in 1986, and it is a technique used for interpreting and comparing hospital efficiency using three indices [21]. Mathematically, the correlations of the 3 indicators were shown by plotting BTR on the y-axis and BOR on the x-axis [22], and using the averages of the two indices (BTR and BOR), two perpendicular lines were drawn to divide the graph into four quadrants or zones. The Pabon Lasso diagram obtained was then used as a performance assessment tool [23, 24]. Thus, using the four quadrants/zones of the graph, the efficiency of the various hospital-years and the manner in which they utilized available resources are made clear.
The record of active hospital beds, as well as the number of inpatients and inpatient-days for the respective Teaching Hospitals between 2010 and 2016 (a 7-year period), is shown in Table 1. The average (mean) number of active hospital beds for the hospital-years was 380 beds. The average yearly number of admissions was 8394 inpatients per hospital per year. FETHA had the highest number of admitted patients, 14,037, during the period. While the mean inpatient-days for all the hospitals was 56,912 per year, the highest was UNTH's 85,056 inpatient-days in 2012 (as shown in Table 1 below).
Table 1 Descriptive statistics of hospital resources (beds) and patient admissions (2010–2016)
Table 2 shows that the overall mean bed occupancy rate (BOR) of the hospitals during the period was 42.14%, with values ranging between 22.62% and 62.37% among the hospital-years. The mean bed turnover rate (BTR) was 22 patients per bed per year, and the BTR was as low as about 7 patients per bed per year for ESUTH in 2011. The highest value, of about 34 patients/bed/year for FETHA, was recorded in 2011. The mean average length of stay (ALS) for admitted patients in this study was as high as 8.15 days, and the lowest mean ALS was 3.27 days, for FETHA in 2012. The mean turnover interval (TI) for the hospitals was as high as 10.19 days. The shortest average TI among the hospitals was recorded by FETHA, at 8.04 days. Thus, apart from the long stay of admitted patients (ALS), all 3 hospitals exhibited a protracted turnover interval (ESUTH, 10.72 days and UNTH, 11.83 days).
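To make the two computational steps above concrete, the following Python sketch computes the four ratio indicators for a set of hospital-years and then applies the Pabon Lasso quadrant rule using the mean BOR and mean BTR as cut-off lines. This is not the authors' spreadsheet: the hospital-year records below are made-up placeholders, and only the formulas follow the definitions given in the Methods.

```python
# Illustrative sketch of (1) the four ratio indicators and (2) the Pabon Lasso
# quadrant classification; all input figures are placeholders, not study data.
import numpy as np

def ratio_indicators(active_beds, discharges, inpatient_days, days_in_year=365):
    """Return BOR (%), ALS (days), BTR (patients/bed/year) and TI (days)."""
    bor = inpatient_days / (active_beds * days_in_year) * 100   # bed occupancy rate
    als = inpatient_days / discharges                           # average length of stay
    btr = discharges / active_beds                              # bed turnover rate
    ti = days_in_year / btr - als                               # turnover interval
    return bor, als, btr, ti

# (hospital-year label, active beds, discharges, inpatient-days) -- made-up records
records = [("E-1", 420, 3000, 35000), ("E-2", 400, 9000, 90000),
           ("F-1", 300, 10000, 38000), ("U-1", 430, 8000, 80000)]
indicators = {label: ratio_indicators(beds, dis, days) for label, beds, dis, days in records}

bor_mean = np.mean([v[0] for v in indicators.values()])
btr_mean = np.mean([v[2] for v in indicators.values()])

def pabon_lasso_quadrant(bor, btr):
    """Quadrants I-IV following the convention described in the text."""
    if bor < bor_mean and btr < btr_mean:
        return "I"     # low BOR, low BTR: least efficient
    if bor < bor_mean:
        return "II"    # low BOR, high BTR
    if btr >= btr_mean:
        return "III"   # high BOR, high BTR: efficient zone
    return "IV"        # high BOR, low BTR

for label, (bor, als, btr, ti) in indicators.items():
    print(f"{label}: BOR={bor:.1f}%  ALS={als:.1f} d  BTR={btr:.1f}/bed/yr  "
          f"TI={ti:.1f} d  quadrant {pabon_lasso_quadrant(bor, btr)}")
```

Applied year by year to each hospital's records, the same quadrant rule yields the kind of hospital-year classification summarized in Fig. 1 and discussed below.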
The Pabon Lasso graph in Fig. 1 emanated from the plotting of BTR against BOR.
Table 2 Health ratio indicators expressing the resource utilization rate of the hospitals
Fig. 1 Pabon Lasso graph (BTR: bed turnover rate; BOR: bed occupancy rate; E-i, F-i or U-i represent hospital-years)
The location of each hospital-year in any of the Pabon Lasso quadrants in Fig. 1 describes the level and type of efficiency of that hospital-year. Hospital-years in quadrant-I (low BTR and low BOR) exhibit a relative excess of bed supply, less need for hospitalization and low demand/utilization. Quadrant-II (high BTR and low BOR) refers to excess bed capacity, unnecessary hospitalization, too many patients being admitted for observation, or predominantly normal obstetric deliveries. The efficient quadrant-III (high BTR and high BOR) implies that the hospital-years in this zone had good quantitative performance and a small proportion of unused beds. The fourth quadrant, quadrant-IV (low BTR and high BOR), hosts hospital-years that had low demand for hospital beds, yet had a small proportion of their beds unused. About 20% of the hospital-years were situated in the least efficient quadrant-I, 30% of the hospital-years were located in quadrant-II, while only 10% of the hospital-years were situated in the efficient quadrant-III and 40% of the hospital-years were found in quadrant-IV. It was observed that most of the FETHA hospital-years were located in the second Pabon Lasso quadrant, which is characterized by high BTR and low BOR. Two (2) of the hospital-years of ESUTH were located in the efficient quadrant.
It is widely accepted that improved efficiency is one of the four overarching goals of health systems [25]. The World Health Report 2014 estimated that about 20–40% of all health sector resources are wasted [26]. One of the vital approaches to reducing resource waste is to enhance efficiency in the utilization of available resources [27], and the starting point in doing so is to undertake a performance or efficiency assessment [28]. It is useful in guiding hospital managers at the micro level and health policy makers in government at the macro level [29]. The present study investigated the efficiency of resource utilization among University Teaching Hospitals in Southeast Nigeria; it compared the performance of the hospitals over a period of 7 years (2010–2016). Four ratio indicators, bed occupancy rate (BOR), average length of stay (ALS), bed turnover rate (BTR) and turnover interval (TI), were used to do so. Hospital efficiency was further elucidated using the Pabon Lasso model or graph.
Bed occupancy rate (BOR) is a measure of the utilization of the available bed capacity in the hospital, and it indicates the percentage of beds occupied by patients in a given period of time, usually 1 year. It reflects efficiency in the use of hospital beds, and a hospital can be said to be operating efficiently at a BOR of 80–90% [30]. Among all the hospital-years in the present study, the maximum BOR of 62.37% was observed for ESUTH in 2016. Within the seven-year period the mean BOR for the hospitals was an abysmally low 42.14%. A similar study in a Ugandan hospital over a period of 10 years showed that the average BOR was as much as 78.8% [31]. Younsi (2014), in a comparative assessment, obtained a mean BOR of 58.1% for 40 public hospitals in Tunisia [32]. A more recent evaluation in Uganda, a Sub-Saharan African country, showed an average BOR of 49.35% (with a BTR of 74.0 times per year and an ALS of 3.63 days) [33]. In Iran and other Middle East countries the use of ratio measurement as a means of assessing hospital efficiency is common.
Recent studies in different parts of Iran showed an average BOR of 65.40% [34], 62.63–69.56% [35] and 65.91% [36], as against the Iranian national BOR average of 57.8%. The mean BOR obtained from the present study was still below the BOR of 56–61% among public hospitals in Malaysia between 2006 and 2010 [37]. In recent years the BOR in countries such as Indonesia has ranged between 55 and 60% in both public and private hospitals, compared to the 80% average for South-East Asian region hospitals [38]. The conventionally suggested benchmark for hospital BOR is 85% [31], signifying that the mean BOR of 42.14% in the present study was relatively very low. Thus, the teaching hospitals could be said to have exhibited a high level of inefficiency in the utilization of hospital beds during the period under review.
Average length of stay (ALS) refers to the number of days each admitted patient stayed in the hospital. It is often better to compare homogeneous groups of hospitals that have a similar case-mix. The hospitals studied here were all university teaching hospitals and are known for treating mainly referred and often chronically sick clients. Hospitals or hospital-years with a shorter ALS than their peers could be regarded as performing relatively better than those with a higher ALS. In this study the lowest ALS was observed for FETHA, with a mean ALS of 3.77 days. The explanation for this is either that FETHA was more efficient than the other teaching hospitals in terms of effectiveness in treating its patients and in shortening patients' hospital stay, or that FETHA tended to treat a greater number of acutely ill patients than the other hospitals. The mean ALS for all the hospitals was about 8.15 days. Previous studies on hospitals affiliated to medical schools in Iran exhibited mean ALS values of 4.1 days [35], 3.21 days [39] and 4.08–4.59 days [40]. This again shows that the hospitals in the present study were less efficient.
Bed turnover rate (BTR) measures the productivity of hospital beds, and it represents the number of patients treated per bed in a defined period, usually 1 year. The BTR of chronic care hospitals, such as orthopedic or teaching hospitals, is expected to be lower than that of acute care hospitals. The BTR of the teaching hospitals in this study for the various years was between 6.84 and 34.34 patients per bed per year, with a mean BTR of 21.27 patients per bed per year. This again demonstrates low productivity and a high level of inefficiency. The highest BTR obtained was that of FETHA, with a mean BTR of 30.89 patients/bed/year. The BTRs of the hospital-years were all quite low compared to those of their Iranian counterpart hospitals affiliated to medical schools, where BTRs between 61.10 and 95.54 patients per bed per year have been observed [24, 35, 36, 40, 41].
On the other hand, turnover interval (TI) refers to the average time, in days, that hospital beds are unoccupied between successive inpatients. The higher the TI of a hospital, the less efficient the hospital is. The ideal turnover interval is suggested to be 1–3 days. In the present study, all the TI values obtained were between 5.43 and 41.27 days, with an overall mean TI of 10.19 days. This portrays a high level of inefficiency or poor performance in terms of hospital bed utilization.
Assessment of hospital performance based on any single ratio indicator may sometimes be misleading.
Pabon Lasso (1986) devised a model or graph that makes use of three of these ratio indicators (BOR, ALS and BTR) to assess the relative performance of hospitals [21]. Plotting BTR on the vertical axis and BOR on the horizontal axis, and using the mean values of these two ratios to divide the graph into four quadrants, is what is known as the Pabon Lasso graph [21] (a minimal plotting sketch illustrating this construction is given below). The various hospital-years were located in these four quadrants. About 20% of the hospital-years were located in the most inefficient quadrant, which is characterized by low bed turnover and low bed occupancy rate; the hospital-years in this quadrant had a high proportion of unutilized hospital beds. As much as 30% of the hospital-years, mainly from FETHA, were located in quadrant II. These hospital-years were characterized by a high turnover rate, low bed occupancy rate and relatively short hospital stays; the possible explanation is that there were too many unnecessary admissions, or that many acutely ill patients were treated. Only 10% of the hospital-years were located within the efficient quadrant III, and another 10% were located very close to it. This small fraction of hospital-years exhibited an appropriately efficient level of performance: the hospitals enjoyed high bed occupancy and high bed turnover rates in 2013 and 2016. High BTR and BOR imply efficiency, that is, the ability of the hospital to utilize available resources efficiently [21, 26]. The study further revealed that the largest share (40%) of the hospital-years was located in quadrant IV, characterized by a high occupancy rate, low turnover and long hospital stays. This is not surprising, since the hospitals studied were teaching hospitals that often admit chronically sick patients. Hospital-years in this zone were mainly those of UNTH and ESUTH. The findings indicate clearly that half of the hospital-years exhibited a relatively high bed occupancy rate (as shown by quadrants III and IV). However, in terms of overall efficient performance, the Pabon Lasso graph demonstrated that as few as 10% of the hospital-years were efficient. Thus, the hospitals were unable to utilize their available capital resources efficiently during the period under review (2010–2016). This is in agreement with the result of a recent systematic review of hospital efficiency in the Eastern Mediterranean region, which showed that excess bed supply and inappropriate hospital size are among the major causes of inefficiency [42].
Efficiency is a term widely used in health economics, and it simply refers to the wise utilization of resources in the production of health services. The present study appraised teaching hospital performance in Southeast Nigeria by analyzing resource utilization rates over the period 2010 to 2016; the findings include a low bed occupancy rate and a high average length of stay. This is strong evidence of inefficiency among the teaching hospitals in Southeast Nigeria. These observations were emphatically expressed by the location of a large proportion (80–90%) of the hospital-years outside the efficient quadrant of the Pabon Lasso model.
This study suffers from some limitations upon which future studies should improve. The poor quality of health information and record/data keeping among the hospitals is of great concern. In some instances, to ensure completeness, data were collected piecemeal from many departments instead of from the designated Health Record Department.
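To make the construction of the Pabon Lasso graph described above concrete, the following Python sketch plots hospital-years as points in the BOR-BTR plane and draws the mean-value lines that define the four quadrants. The (BOR, BTR) pairs are illustrative placeholders, not the actual values of the studied hospital-years.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative (BOR %, BTR patients/bed/year) pairs for a set of hospital-years
bor = np.array([30.0, 38.5, 45.2, 55.0, 62.4, 40.1, 35.7, 50.3])
btr = np.array([10.5, 28.0, 34.3, 25.1, 18.9, 30.9, 12.2, 22.4])

fig, ax = plt.subplots()
ax.scatter(bor, btr)

# Mean-value lines divide the plane into the four Pabon Lasso quadrants
ax.axvline(bor.mean(), linestyle='--')
ax.axhline(btr.mean(), linestyle='--')

ax.set_xlabel('Bed occupancy rate, BOR (%)')
ax.set_ylabel('Bed turnover rate, BTR (patients/bed/year)')
ax.set_title('Pabon Lasso graph (illustrative data)')
plt.show()

Points lying to the right of the vertical mean line and above the horizontal mean line fall in the efficient quadrant III.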
The ratios indicators as a method of measuring hospital performance can only be applied on hospitals that provide inpatient services. There is need for various government to make efficiency a policy objective and institutionalize health facility efficiency monitoring and evaluation as a basis for the design and implementation of appropriate policy interventions, and as a means of curbing wastage of health system inputs (WHO, 2014). Health efficiency monitoring should be used as a tool within health management information system (HMIS) of various ministries of health, both at State or Regional level or National level. Availability of data Datasets used and analyzed for the study are available and maybe released by the corresponding author on reasonable request. World Health Organization (2015). Country statistics and global health estimates by WHO and UN partner (Nigeria: WHO statistical profile). http://www.who.int/gho/countries/nga.pdf. Kong X, Yang Y, Gao J, Guan J, Liu Y, Wang R, Xing B, Li Y, Ma W. Overview of the health care system in Hong Kong and its referential significance to mainland China. J Chin Med Assoc. 2015;78:569–73. Matranga D, Bono F, Casuccio A, Firenze A, Maesala L, Giaimo R, Sapienza F, Vitale F. Evaluating the effect of organization and context on technical efficiency; a second-stage DEA analysis of Italian hospitals. Epidemiol Biostat Public Health. 2014;11(1):87851–878511. Shaw C. How can hospital performance be measured and monitored? Copenhagen, WHO Regional Office for Europe (Health Evidence Network report. 2013; http://www.euro.who.int/document/e82975.pdf). Veilland J, Champagne F, Klapinga N, Kazandjian V, Arah AO, Guisset L. A performance assessment framework for hospitals: the WHO regional office for Europe PATH project. Int J Qual Health Care. 2005;17(6):487–96. McNatt Z, Linnander E, Endeshaw A, Tatek D, Conteh D, Bradley BH. A national System for monitoring the performance of Hospitals in Ethiopia. WHO Bull. 2015;93(10):719–26. Akazili J, Adjuik M, Chatio S, Kanyomse E, Hodgson A, Aikins M, Giapong J. What are the technical and allocative efficiencies of Public Health Centres in Ghana? Ghana Med J. 2008;42(4):149–55. Bahadori M, Sadeghifar J, Hamonzadeh M, Nejati M. Combining multiple indicators to assess hospital performance in Iran using the Pabon Lasso Model. Australas Med J. 2011;4(4):175–9. Zere E. Hospital Efficiency in Sub-Saharan Africa: Evidence from South Africa. UNU World Institute for Development Economic Research, Helsinki, Finland: working 2000; Paper.No. 187. Kirigia JM, Emrouznejad A, Sambo LG. Measurement of technical efficiency of public hospitals in Kenya: using data envelopment analysis. J Med Syst. 2002;26(1):39–45. Osei D, George M, Almeida S, Kirigia JM, Mensah AO, Kainyu LH. Technical efficiency of public district hospitals and health centres in Ghana: a pilot study. Cost Eff Resourc Alloc. 2005;3(9):1–13. Zere E, Shangula K, Mandlhate C, Mutirua K, Tjivambi B, Kapenambili W. Technical Efficiency of District Hospitals: Evidence from Nambia Using DEA. Cost Effectiveness and Resource allocation. 2006;4:5. Odhiambosa J, Wambugu A, Kiriti-Nganga T. Effect of health expenditure on child health in Sub-Saharan Africa: government perspective. J Econ Sust Dev. 2015;6(8):43–65. Kpamor Z. Nigeria's health statistics and trends: the Woodrow Wilson Intl Isboa, Lisbon, Portugal. Eur J Public Health. 2012;25(4):52–8. Masoompour MS, Petramfar P, Farhadi P, Mahdaviazad H. 
Five-year trend analysis of capacity utilization measures in a teaching hospital 2008–2014. Shiraz E-Med J. 2015;16(2):ez1176. Kawaguchi H, Tone K, Tsutsui M. Estimation of the efficiency of Japanese hospitals using a dynamic and network data envelopment analysis model. Health Care Manag Sci. 2014;17:101. https://doi.org/10.1007/s10729-013-9248-9. Bahadori M, Izadi AR, Ghardashi F, Ravangard R, Hosseini SM. The evaluation of hospital performance in Iran: a systematic review article. Iran J Public Health. 2016;45(7):855–66. Davis P, Milne B, Parker K, Hider P, Lay-Yee R, Cumming J, Graham P. Efficiency, effectiveness, equity (E3). Evaluating hospital performance in three dimensions. Health Policy. 2013;112:19–27. Nigeria National Population Commission (NNPC) [Nigeria] and ICF Macro. Nigeria Demographic and Health Survey 2008. Abuja: National Populaton Commission and ICF Macro; 2008. p. 2015. Mehrtak M, Yusefzadeh H, Jaafaripooyan E. Pabon Lasso and data envelopment analysis: a complementary approach to hospital performance measurement. Glob J Health Sci. 2014;6(4):107–16. Pabon LH. Evaluating hospital performance through simultaneous application of several indicators. Bull Pan Am Health Org. 1986;20(4):341–57. Adham D, Panah M, Barfar I, Amari H, Sadeghi G, Salarikhah E. Contemporary use of hospital efficiency indicators to evaluate hospital performance using the Pabon Lasso Model. Eur J Bus Soc Sci. 2014;13(2):1–08. Mehrolhasani M, Fayzabad VY, Shahrbabak TB. Assessing performance of Kerman province's hospitals using Pabon Lasso Diagram between 2008 and 2010. J Hosp. 2014;12(4):99–108. Goshtasebi A, Vahdaninia M, Gorgipour R, Samanpour A, Maftoon F, Farzadiand F, Ahmadi F. Assessing, hospital performance by Pabon Lasso model. Iran J Public Health. 2009;38(2):119–24. World Health Organization (WHO). Global health expenditure atlas. Geneva: World Health Organization. http://www.who.int/health-accounts/atlas2014.pdf. Mohebbifar R, Sokhanvar M, Hasanpoor E, Isfahani HM, Ziaiifar H, Kakenam E, Mohseni A. Survey on the performance of hospitals of Qazvin province by the Pabon Lasso model. Int Res J Biol Sci. 2014;3(12):5–9. Mujasi PN, Asbu EZ, Puig-Junoy J. How efficient are referral hospitals in Uganda? A data envelopment analysis and Tobit regression approach. BMC Health Serv Res. 2016;8(16):230. Renner A, Kirigia JM, Zere EA, Barry SP, Kirigia DG, Kamara C, Muthuri LHK. Technical efficiency of peripheral units in Pajehun District of Sierra Leone: a DEA application. BMC Health Serv Res. 2005;5(77):1–12. Kirigia MM, Asbu EZ, Kirigia DG, Onwujekwe OE, Fonta WM, Ichoku HE. Technical efficiency of human resources for health in Africa. Eur J Bus Manag. 2011;3(4):321–45. Barnum H, Kutzin J. Public hospitals in developing countries: Resource use, cost, financing. Baltimore: Johns Hopkins University Press; 1993. Accorsi S, Corrado B, Fabiani M, Iriso R, Nattabi B, Ayella EO. Competing demands and limited resources in the context of war, poverty and disease: the case of Lacor hospital. Health Policy Dev J. 2003;1:29–39. Younsi M, Chakroun M. Measuring health-related quality of life: psychometric evaluation of the Tunisian version of the SF-12 health survey. Qual Life Res. 2014;23(7):93. https://doi.org/10.1007/s11136-014-0641-8. NabuKeera M, Boerhannoeddin A, Raja Noriza RA. An evaluation of health centers and hospital efficiency in Kampala capital city authority Uganda: using Pabon Lasso technique. J Health Transl Med. 2015;18(1):12–7. https://doi.org/10.5430/wjss.v1n2p86. 
Ghobad M, Bakhtiar P, Hossein S, Nader EN, Amjad MB, Arezoo Y. Assessment of the efficiency of hospitals before and after the implementation of Health Sector Evolution Plan in Iran Based on Pabon Lasso Model. Iran J Public Health. 2017;46(3):389–95. Kalhor R, Salehi A, Kechavarz A, Bastani P, Orojloo PA. Ssessing hospital performance in Iran using the Pabon Lasso model. Asia Pac J Health Manag. 2014;9(2):77–82. Lotfi F, Kalhor R, Bastani P, Zadeh NS, Eslamian M, Dehghani MR, Kiaee M. Various indicators for the assessment of hospitals' performance status: differences and similarities. Iran Red Crescent Med J. 2014;16(4):1–7. Nwagbara VC, Rasiah R. Rethinking health care commercialization: evidence from Malaysia. Glob Health. 2015;11(1):44. https://doi.org/10.1186/s12992-015-0131-y. Awofeso N, Rammohan A, Asmaripa A. Exploring Indonesia's "low hospital bed utilization-low bed occupancy-high disease burden" paradox. J Hosp Adm. 2013;2(1):49–58. Goudarzi R, Pourreza A, Shokoohi M, Askari R, Mhdavi M, Moghri J. Technical efficiency of teaching hospitals in Iran: the use of Stochastic Frontier Analysis, 1999–2011. Int J Health Policy Manag. 2014;3(2):91–7. Kalhor R, Ramandi F. D., Rafiel, S., Rafiel, S., et al. (2016). Performance analysis of hospitals affiliated to Mashhad University of medical sciences using Pabon Lasso model: A six year trend study. Biotech Health Sci. 3(4). Gholipour K, Delgoshai B, Masudi-Asl I, Hajinabi K, Iezadi S. Comparing performance of Tabrz obstetrics and gynawecology hospitals manged as autonomous and budgetary units using Pabon Lasso method. Australas Med J. 2013;6(12):701–7. Ravaghi H, Afshari M, Isfahani P, Bélorgeot VD. A systematic review on hospital inefficiency in the Eastern Mediterranean Region: sources and solutions. BMC Health Serv Res. 2019;19:830. https://doi.org/10.1186/s12913-019-4701-1. We are very grateful to the management and medical record departments of all the participating teaching hospitals for their cooperation and assistance. Private funding. Health Economics and Policy Research Unit, Department of Health Services, Alex Ekwueme Federal University Ndufu-Alike Ikwo, Ikwo, Ebonyi, Nigeria Henry E. Aloh Department of Health Administration & Management, Faculty of Health Sciences, College of Medicine, University of Nigeria Enugu Campus, Nsukka, Nigeria Henry E. Aloh & Obinna E. Onwujekwe Health Policy Research Group, Department of Pharmacology and Therapeutics, College of Medicine, University of Nigeria Enugu Campus, Nsukka, Nigeria Obinna E. Onwujekwe Primary Health Development Agency, Ministry of Health, Abakaliki, Ebonyi, Nigeria Obianuju G. Aloh Department of Mathematics/Computer Sciences/Statistics & Informatics, Alex Ekwueme Federal University Ndufu-Alike Ikwo, Ikwo, Nigeria Chijioke J. Nweke HEA conceptualized the study and wrote the first draft of the manuscript. OGA collected the data. OEO contributed to the design of the study and reviewed of the manuscript. CJN analysed the data. All authors read and approved the final manuscript. Correspondence to Henry E. Aloh. Ethical approvals were obtained from respective ethical committee of the participating hospitals. Aloh, H.E., Onwujekwe, O.E., Aloh, O.G. et al. Is bed turnover rate a good metric for hospital scale efficiency? A measure of resource utilization rate for hospitals in Southeast Nigeria. Cost Eff Resour Alloc 18, 21 (2020). https://doi.org/10.1186/s12962-020-00216-w Ratio indicators Resource utilization Pabon Lasso model Southeast Nigeria
The Effect of Hot Treatment on Composition and Microstructure of HVOF Iron Aluminide Coatings in Na2SO4 Molten Salts
C. Senderowski, N. Cinca, S. Dosta, I. G. Cano and J. M. Guilemany
First Online: 08 July 2019
The paper deals with the hot corrosion performance of FeAl-base intermetallic HVOF coatings in molten Na2SO4 at 850 °C in an isothermal process over a span of 45 h under static conditions. The corroded coatings were examined by electron microscopy and compositional analyses in the cross-section area, as well as by x-ray diffraction techniques. All the coatings were characterized by Al-depleted regions, intersplat oxidation and different stoichiometric ratios of iron aluminides. The results are discussed in relation to the formation of oxide scales on the surface after exposure to the corrosive medium, as well as to the heterogeneity and defects of the sprayed coatings. The Fe40Al (at.%) powder showed a quite uniform phase distribution after spraying and preserved its integrity after the corrosion test; the FeCr-25% + FeAl-TiAl-Al2O3 (wt.%) and Fe46Al-6.55Si (at.%) powders exhibited interface oxidation, with localized corrosion attacks proceeding through particle boundaries and microcrack networks but no evidence of Na and S penetration. The FexAly alloys are susceptible to accelerated damage and decohesion of the coating, and the formation of sulfides is observed at certain points.
FeAl intermetallic hot corrosion thermal spray coatings
High-temperature oxidation is believed to be the major reason for the degradation of materials used at elevated temperatures, which in consequence leads to prolonged downtime of elements such as boilers and turbines utilized in power production (Ref 1-4). As a countermeasure to the above-mentioned issues, intermetallics and Ni-based alloys have attracted significant attention as coating materials (Ref 1-3, 5-7). Transition metal aluminides, mainly those based on Ni and Fe, are potentially applicable at high temperatures and provide a viable alternative to superalloys (Ref 4-6, 8-12). The alumina layer formed on the surface of these materials is responsible for their excellent resistance to oxidizing, sulfiding and carburizing atmospheres even at temperatures exceeding 1000 °C (Ref 5, 10, 13). However, although they show good strength and environmental stability, other aspects such as poor ductility and toughness at room temperature, mediocre creep strength and fabrication difficulties have greatly hindered the introduction of intermetallics as industrial structural materials (Ref 14). Therefore, their commercial application in some fields is still a matter of concern (Ref 15). The applications of iron aluminides are, for the most part, based on their excellent corrosion resistance at high temperatures in environments that cause damage to Fe-Cr-Ni steels and other alloys (Ref 4). They show higher resistance to sulfidation and carburizing atmospheres, as well as to molten nitrates and carbonate salts, than many other iron- or nickel-based alloys (Ref 16, 17). FeAl alloys have demonstrated particularly improved resistance to various molten salts that cause hot corrosion in heat-exchange systems, incinerators and burners. This pertains to such chemicals as potassium sulfate (K2SO4), vanadium pentoxide (V2O5), mixtures of sodium sulfate and vanadium pentoxide (Na2SO4-V2O5), chlorates and carbonates, all of which can inflict severe damage in the energy sector (Ref 18-24).
High resistance to hot corrosion is of paramount importance in many branches of industry concerned with the construction of boilers, internal combustion engines, gas turbines, fluidized bed combustion and industrial waste incinerators. The material degradation is determined by the confluence of high-temperature oxidation, hot corrosion and erosion processes (Ref 1-3, 5-7, 11, 12, 25, 26). However, the corrosion resistance of iron aluminides extends to temperatures at which these alloys exhibit limited or poor mechanical strength. Therefore, in many cases, they may be better utilized as clads or coatings for anti-corrosion protection, owing to their limited strength at elevated temperatures (Ref 26-30). Numerous thermal spray techniques, most notably plasma spraying (Ref 28, 31, 32), high-velocity oxy-fuel (HVOF) (Ref 17, 27, 28, 33-47) and D-gun spraying processes (Ref 26, 29, 48-59), are considered for Fe-Al intermetallic coating materials. In comparison with other industrially used coatings such as CVD, PVD and hard chromium plating, a much thicker coating can be obtained by thermal spraying, which is a prerequisite in the energy sector. The high-velocity arc spraying (HVAS) process, a technique used to deposit Fe-Al intermetallic and Fe-Al/WC protective coatings onto evaporator pipes subjected to the corrosive and erosive influence of vapor at 550 °C, serves as an example, particularly of application in the Chinese industry (Ref 60). Among thermal spray techniques, HVOF, a state-of-the-art technology, has attracted much interest from scientists, manufacturers and investors, as it not only yields good results but is also relatively cheap (Ref 3, 17, 26, 28, 33-47). Thermal spray iron aluminide coatings were previously tested in high-temperature gaseous environments (Ref 17, 36, 37, 47, 61), but to the best of the authors' knowledge, very few findings concerning their use under hot corrosion conditions have been reported (Ref 62). These authors report that no degradation (corrosion or wear) was noticed on the surface of the Fe-25%Al-Zr (wt.%) plasma and HVOF coatings sprayed onto low-carbon steel heat exchanger tubes, which were tested in a new industrial plant burning fuel of very poor quality. However, their research was not oriented toward the coating structures and corrosion evolution. On the other hand, Singh Sidhu et al. (Ref 63) studied the corrosion of plasma-sprayed Ni3Al coatings in air and molten salt (Na2SO4-60%V2O5) at 900 °C on low-carbon steel substrates of extended application in boilers. Other thermal spray coatings widely studied in terms of hot corrosion protection are plasma-sprayed MCrAlY coatings in TBC systems for aviation gas turbines, whose longevity is notably reduced under severe conditions involving molten sulfate-vanadate deposits (Ref 64-68). These coatings can alternatively be produced by the HVOF process, which utilizes high-pressure combustion of oxygen and fuel to obtain a relatively low temperature of a supersonic gas jet in comparison with plasma spraying. HVOF allows us to obtain denser and less oxidized coatings, which are more resistant to corrosion (Ref 69).
The growing interest in the promising properties of intermetallic alloys based on the Fe-Al equilibrium phase diagram has contributed to the gradual development of the HVOF spraying technique, which proved useful for the production of such intermetallic coatings in terms of their practical application on various steel elements exposed to corrosive and erosive environments in the energy sector (Ref 28, 33-47, 70-74). The focus in these works was mostly placed on the structural properties of Fe-Al coatings and their wear resistance under dry friction (in accordance with ASTM G99-03), abrasive wear (in accordance with ASTM G65-00) and erosive wear involving Al2O3 particles (Ref 44). Furthermore, the research involved the performance of Fe-Al coatings under high-temperature oxidation conditions at 900, 1000 and 1100 °C for 4, 36 and 72 h, respectively, in atmospheric air (Ref 27). Uusitalo et al. (Ref 75) conducted studies on laser re-melting of HVOF-sprayed Ni-50Cr, Ni-57Cr, Fe3Al and Ni-21Cr-9Mo coatings and reported that the re-melted coatings did not suffer from any corrosive damage, whereas the as-sprayed coatings were penetrated by corrosive species. Other HVOF and novel cold-spray coatings, such as Cr3C2-NiCr and WC-Co, are widely studied regarding their wear resistance behavior (Ref 1-3, 7, 25, 76), while great emphasis is placed upon hot corrosion-related applications. Iron aluminide intermetallics appear to provide interesting properties favorable to hot corrosion protection and also manifest wear resistance at high temperatures, providing competition to the cobalt binder in WC-Co composites and to Ni-based superalloys (Ref 5, 6, 10-12, 60, 77-81). Different alloying elements in iron aluminides and their effect on oxide scale development when exposed to harsh environments have been investigated (Ref 17, 38, 42, 72-74). In this regard, we propose the application of several feedstock iron aluminide powders obtained from different manufacturing routes. Notably, Senderowski (Ref 56) developed a new concept of nanocomposite Fe-Al intermetallic coatings created in situ during gas detonation spraying out of powder with compounds from the Fe-Al phase diagram, manufactured by the self-decomposition method (Ref 57). It was assumed that those powders would exhibit sufficient plastic susceptibility under the spraying test conditions, acceptable mechanical properties of the coatings and good stability of the structure during high-temperature heating. The desired properties of these powders are mostly related to reduced brittleness caused by dynamic oxidation at high temperatures (especially above 500 °C) in an oxygen-containing environment. Particle size control of the self-decomposed powders, especially of the fraction below 80 μm, gives them a more prominent role in the HVOF spraying process. Furthermore, the price of self-decomposed powder is about three times lower than the price of powders of equivalent compositions produced by gas atomization.
Therefore, after considering the potential advantages of implementing the self-decomposed intermetallic Fe-Al-type powders, the aim of the present research was twofold: (1) to develop several iron aluminide HVOF coatings from Fe40Al (at.%) and FeCr-25% + FeAl-TiAl-Al2O3 (wt.%) powders and compare them with coatings from self-decomposed and SHS (self-propagating high-temperature synthesis)-manufactured powders of different compositions, and (2) to evaluate the performance of these coatings in Na2SO4 molten salt at 850 °C, as a potential solution for typical applications in industrial boilers. It is well known that the application area of FeAl coatings depends on their overall properties. On the basis of the results of our previous research (Ref 59, 82), a comprehensive analysis of the impact of the structure, the level of strengthening and the state of residual stress of FeAl coatings on their adhesive strength was carried out. The mechanism of residual stress generation in a FeAl coating under supersonic D-gun spraying conditions was presented, taking into account the multi-phase structure of Fe-Al coatings and the changes in the Young's modulus of the FeAl coating at elevated temperatures up to 900 °C. The mechanism of structure degradation of hybrid coating systems in different load states was analyzed by means of the TAT (tensile adhesion test) and a bending test coupled with acoustic emission recording (Ref 82). The TAT test showed that a FeAl coating sprayed directly onto a steel substrate exhibits significantly lower adhesive strength compared to hybrid coating systems in which NiCr-20 or NiAl-5 is sprayed onto the steel substrate before the FeAl base coating. The average adhesive strengths of the individual coating systems were, respectively: FeAl/steel, 23 MPa; FeAl/NiAl5/steel, 31 MPa; NiAl5/steel and NiCr20/steel, 33 MPa; and FeAl/NiCr20/steel, 37 MPa (Ref 82). Because some aspects of the mechanical performance of the Fe-Al-type coatings have already been considered, there is good reason to focus in this paper on the phase and microstructural changes that constitute the "corrosion performance" of the coatings at high temperature in an aggressive environment. The "corrosion performance" studied here refers to the qualitative phase and microstructural evolution of the coatings, without a quantitative evaluation of weight changes as oxidation kinetics, which is relatively simple for bulk materials. Such an analysis is not as simple in the case of a coating-substrate system, because the strong oxidation of the substrate material at high temperature does not lead to reliable results with regard to the FeAl coating itself. Therefore, in this work, we focused on the analysis of the structural stability of the as-sprayed Fe-Al coatings during high-temperature oxidation at 850 °C in the aggressive Na2SO4 environment, the coatings having been produced under the same HVOF process conditions from various alloy powders of different chemical composition.
Experimental Procedure
The nominal compositions and characteristics of the powders used in the tests are presented in Table 1. The commercial FeAl grade 3 with a near-equiatomic composition, provided by Mecachrome (France), is a pre-alloyed, gas-atomized and subsequently ball-milled powder (powder 1). Both powder 2 and powder 3 were produced in the Department of Materials Science of the Silesian Technical University by the self-decomposition method described in detail in Ref 57.
Table 1 Iron aluminide feedstock powder characteristics (nominal composition; particle size, µm; method of manufacture)
Powder 1 (FeAl grade 3): Fe-40Al-0.05Zr (at.%) + 50 ppm B + 1 wt.% Y2O3; < 50 µm; ball milling
Powder 2: FeCr25 (wt.%) + FeAl-TiAl-Al2O3; self-decomposed
Powder 3: Fe46Al-6.55Si (at.%); self-decomposed
Powder 4: multi-phase FexAly-type powder; −53 +38 µm; SHS
Powder 4 was also produced in the Department of Materials Science of the Silesian Technical University, through the SHS technique, and contained Fe-Al-type phases denoted collectively as FexAly. Their complex phase composition, properties and morphology were considered with a view to possible applications as protective coatings in the power industry sector. The substrate material was a low-alloy carbon steel, G41350 UNS (AISI 4135), of the chemical composition presented in Table 2, in the form of coupons with dimensions of 50 × 20 × 5 mm, which were grit-blasted (Ra = 4 μm) directly before the HVOF spraying to provide mechanical bonding.
Table 2 Chemical composition of the substrate material G41350 UNS (AISI 4135), content in wt.%
The equipment used for the spraying process was a Diamond Jet Hybrid (DJH2700) designed by SULZER METCO. The following spraying parameters were applied: H2 flow rate = 717 l min−1, oxygen flow rate = 147 l min−1, feeding rate = 20 g/min, spraying distance = 250 mm, traverse gun speed = 500 mm/s and number of layers = 9. In addition, the samples were cooled with compressed air during the spraying process. Nitrogen was used as the powder carrier and shielding gas. Hot corrosion studies were conducted in molten salt (Na2SO4) at 850 °C for all specimens (Ref 80, 81) with dimensions of 35 × 20 × 5 mm. The samples were cut using the wire electric discharge machining technique following the HVOF spraying. A Na2SO4 tablet (0.2 g, 5 mm in diameter), pressed under 0.4 MPa, was placed on the surface of each coating. First, the samples were mounted in a furnace preheated to 950 °C and annealed for 10 min to melt the Na2SO4. (The melting point of the salt is close to 890 °C.) Then, the temperature was lowered to 850 °C and the samples were held for 45 h in order to evaluate the behavior of the coatings under hot corrosion conditions in the aggressive environment. The microstructural characteristics of the feedstock powders, as well as of the initial and corroded coatings, were obtained by SEM/EDS using Quanta 3D FEG Dual Beam and JEOL 5310 microscopes operating at 20 kV. The backscattered electron images were obtained with a K.E. Developments detector. Coating porosity was evaluated by means of the ImageJ image analysis software. Qualitative microanalysis was performed by EDS with a RÖNTEC detector. Additionally, the roughness of the coatings was measured by confocal microscopy (Leica DCM3D). XRD was used to characterize the phases and assess the degree of order in the feedstock powders and sprayed coatings. All x-ray measurements were carried out with a Bragg-Brentano θ/2θ Siemens D-500 diffractometer with Cu Kα radiation.
Feedstock Powder
Figure 1 shows the particle size distributions of the powders. It can be observed that the ball-milled powder 1 is characterized by a Gaussian distribution centered at a mean size of 30 µm, while powder 2 shows a non-symmetric distribution with d10 = 3 µm/d90 = 56 µm. The self-decomposed powder 3 contains a large amount of fine particles, with d10 = 3 µm/d90 = 60 µm, while d10 = 7 µm/d90 = 68 µm was recorded for powder 4.
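The d10/d90 values quoted above are simply the 10th and 90th percentiles of the cumulative particle size distribution. A minimal Python sketch of how such values can be extracted from a list of measured particle diameters is given below; the diameters are synthetic placeholders, not the measured distributions of powders 1-4.

import numpy as np

# Synthetic particle diameters in micrometres (placeholder for a measured distribution)
rng = np.random.default_rng(0)
diameters = rng.lognormal(mean=np.log(20), sigma=0.7, size=5000)

# d10, d50 and d90 are the 10th, 50th and 90th percentiles of the size distribution
d10, d50, d90 = np.percentile(diameters, [10, 50, 90])
print(f"d10 = {d10:.1f} um, d50 = {d50:.1f} um, d90 = {d90:.1f} um")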
Fig. 1 Particle size distribution of: (a) powder 1, (b) powder 2, (c) powder 3, (d) powder 4
The SEM-BSE micrographs of the cross sections show that all the powder particles exhibit irregular morphology; they reveal a uniform composition for powder 1, whereas the other powders present varying degrees of grayness (Fig. 2). Their compositions, obtained from EDX point microanalyses, are presented in Table 3. Powder 2 has a varying chemical composition, with diversified contents of Al, Cr and Ti in individual particles, as well as separate regions of Al2O3 (Fig. 2b). Self-decomposed powder 3 shows regions identified as SiO2 and predominant light gray areas with an aluminum content significantly higher than that of iron (Fig. 2c). In powder 4, the distribution of the phases differs considerably from one particle to another (Fig. 2d), with some particles exhibiting a mixed laminar structure of two phases. Thus, it was determined that the SHS intermetallic powder showed a wide range of chemical compositions of the Fe-Al-based phases within single powder particles (52-73 at.%), which suggests that they were secondary solid solutions based on Fe-Al phases with a wide range of Al content and trace amounts of Cr.
Fig. 2 SEM images in the cross sections of: (a) powder 1, (b) powder 2, (c) powder 3, (d) powder 4
Table 3 Semiquantitative EDS analysis (at.%) of the different Fe-Al-type powders used for HVOF spraying; the grain areas (light, dark gray, light gray, medium gray) are designated according to Fig. 2
Figure 3 shows the XRD results of the powders. Powder 1 presents only the typical fundamental lines of the FeAl pattern (h + k + l = even), which indicates a disordered structure; otherwise, the superlattice lines (h + k + l = odd) would also appear. The occurrence of broad peaks is related to the fine grain size and microstrains resulting from the milling.
Fig. 3 XRD diffraction patterns of the feedstock powders in the initial state (from the manufacturer)
Based on Senderowski's results (Ref 57), the low-energy milling of the powder particles causes crystallite fragmentation, resulting in the formation of a nanocrystalline structure in the powder particles. Low-energy milling decreases the degree of ordering of the FeAl secondary solid solution, which in turn limits the strength of the particles. Nevertheless, this is compensated by the strengthening that originates from the crystallite fragmentation. Powder 2 contains Fe-Al and Ti-Al intermetallics, while the XRD of powder 3 confirms the presence of different intermetallic Fe-Al phases, mainly Fe2Al5 and FeAl3, together with trace amounts of SiO2. Silicon embrittles the material. A clear explanation and concise description of the self-decomposition process are presented in Ref 56, 57, where it was reported that many hypotheses can be introduced to explain the self-decomposition of the Pyroferal cast-iron casts. The structure of the Pyroferal casts, which depends on the chemical composition, is made up of the following intermetallic phases: Fe3Al and FeAl, or FeAl and Al4C3 aluminum carbide, with trace amounts of graphite.
The most common hypothesis of the self-decomposition suggests that precipitates of aluminum carbide Al4C3 react with water vapor on the surface of the Fe-Al-C-Me alloys (Me = Ni, Mn, Cr, Mo, V, B, Si) to form aluminum hydroxide and methane (Ref 57):
$${\text{Al}}_{4} {\text{C}}_{3} + 12{\text{H}}_{2} {\text{O}} \to 4{\text{Al}}\left( {\text{OH}} \right)_{3} + 3{\text{CH}}_{4} \uparrow$$
The cracking and fragmenting of the castings occur under the influence of stresses caused by the product Al(OH)3, which has a higher specific volume than the reacting Al4C3 carbide. Powder 4 consists of a strongly oxidized secondary solid solution based on the FeAl intermetallic, with a widely varying aluminum content and thin Al2O3 films covering the particle surfaces, which has a bearing on its growing importance in the production of coatings with a nanocomposite structure. The strong diversification of chemical composition between single particles, as well as within individual particles, shows that the tested powder has the structure of a secondary solid solution based on phases from the Fe-Al equilibrium phase diagram, with a wide span of Al contents and a sparse distribution of Cr and Si. It is to be assumed that the formation of oxide films on the surface of the powder particles is most likely attributable to self-propagating high-temperature synthesis, a strongly exothermic phenomenon. The oxide formation may also be related to the technological process of crushing and high-energy mechanical milling of the sinters into a powder. The XRD analysis of powder 4 accordingly revealed the formation of FeAl, FeAl2, Fe2Al5 and FeAl2O4 phases during the SHS process. The relatively large half-widths of the overlapping reflections of the Fe-Al phases result from the wide span of Al content across individual powder particles (Fig. 3), which leads to lattice deformation within each phase and the generation of residual stress. Moreover, the latter is amplified by crushing and high-energy mechanical milling of the sinters following the SHS process.
As-Sprayed Coating Microstructures
Figure 4 shows the cross sections of the as-sprayed HVOF coatings, with thicknesses of 103 ± 9, 84 ± 10, 76 ± 13 and 93 ± 11 µm, obtained by spraying nine layers of each of the four powders presented in Table 1. The coating obtained with the pre-alloyed powder (powder 1) is quite uniform in thickness, whereas the others are less homogeneous; the roughness values were found to be Ra = 3.6 ± 0.6, 5.1 ± 0.7, 4.3 ± 0.3 and 6.8 ± 0.4 µm, respectively. The highest porosity, 1.45 ± 0.02%, corresponds to coating 4 (from now on, the label coating X stands for the coating as-sprayed from powder X).
Fig. 4 SEM images in cross section of the as-sprayed HVOF coatings obtained with: (a) powder 1 (coating 1), (b) powder 2 (coating 2), (c) powder 3 (coating 3), and (d) powder 4 (coating 4)
The examination of the microstructure indicates that the uniformly distributed oxidation occurs in-flight rather than after splat impact. The powder particles are usually melted or at least pre-melted as a result of HVOF spraying, during which the gas mixture is continuously combusted under high pressure (Ref 28, 33-35, 39-43, 46). As a result of the thermal activation of the gaseous products in the HVOF process, thin and complex oxide films form in situ on the internal splat interfaces.
The oxide films, identified mainly as Al2O3 compounds, become a specific composite reinforcement in the Fe-Al intermetallic coating (Ref 33, 34, 40-43, 45, 46). Oxides are formed during the HVOF process in the stage in which the gaseous products transport the powder particles, along with rapid chemical reactions accompanied by the release of a great amount of thermal energy (Ref 40-43). The presence of a lamellar structure, resulting from partly melted and oxidized particles with inhomogeneous compositions (Table 4), and of intersplat porosity can be observed at higher magnification (Fig. 4). The nature of coating 1 is well documented by partially and fully melted particles exhibiting different degrees of grayness at the intersplat boundaries (Fig. 4a). The light areas in the intersplats correspond to the Al-depleted regions, whereas the darkest ones are attributed to spinel oxides (Ref 56).
Table 4 Semiquantitative EDS analysis (at.%) of the as-HVOF-sprayed Fe-Al coatings obtained from the different types of powders, with analyzed grain areas designated for coatings 1-4
Furthermore, the XRD results confirm these findings (Fig. 5); the additional peaks, also identified as FeAl, correspond to the superlattice lines due to ordering of the intermetallic phase as a result of the thermal history of the particles in the flame. The light regions around the intersplat boundaries of coating 2 in Fig. 4(b) are poorer in Al and Ti and are in fact located next to the dark gray areas identified as oxides (Fig. 5). Coating 2 is reinforced by the incorporation of alumina, visible as intensely dark, roughly circular areas. The SiO2 particles act as a kind of reinforcement in coating 3 (dark regions in Fig. 4c). The light gray regions in coating 3 correspond to iron-rich phases, while the predominant darker contrast reveals a more balanced iron and aluminum content (Fig. 4c). Some porosity is observed; however, the extent of oxidation is significantly lower than in coating 1. SiO2 particles from the feedstock can be found as very dark regions, uniformly distributed within the coating. In coating 4, the lightest regions are poorer in aluminum than the medium gray ones and are identified as the Fe3Al phase, whereas the medium gray contrast is mainly identified as FeAl2 and Fe2Al5 (Fig. 4d).
Fig. 5 XRD diffraction patterns of the as-sprayed HVOF coatings (according to the legend)
The degree of melting or semi-melting of the particles within the HVOF coating can be controlled by the process variables, i.e., fuel and oxygen flow rates, spraying distance and particle size. The process variables determine particle temperature and velocity upon impact and, thus, the typical lamellar structure of thermal-sprayed coatings. Many different iron aluminide compositions have been deposited using these technologies (Ref 17, 27, 28, 33-47, 82). However, different distributions of the intermetallic phases and Fe-rich areas are usually observed in their structural characterization. Moreover, these areas are aluminum-depleted as a result of the thermal history of the particles in the flame. A low oxygen-to-fuel ratio is normally preferred in order to minimize oxidation, whereas a lower carrier gas flow implies slower particle velocities, and a longer in-flight time promotes further oxidation (Ref 43, 70).
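A rough back-of-the-envelope check (our own illustration, not taken from the cited references) shows how fuel-rich the parameter set used here is: with 147 l/min of oxygen and 717 l/min of hydrogen, the volumetric oxygen-to-fuel ratio is
$$\frac{147}{717} \approx 0.21,$$
well below the stoichiometric value of 0.5 for the reaction H2 + 1/2 O2 → H2O, so the flame runs strongly fuel-rich, which is consistent with the strategy of limiting in-flight oxidation of the particles.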
The formation of intersplat oxides, and thus the occurrence of Al-depleted regions, may stimulate corrosion in field performance; at the same time, such oxides may also increase coating hardness and wear resistance. For example, Totemeier et al. (Ref 70) observed a decrease in the oxide content and coating porosity for both the Fe3Al and FeAl powders when the chamber pressure was increased, because it directly affects the particle velocity and thus the degree of melting. However, the particle temperature for FeAl was lower than for the Fe3Al powder, probably because of the lower thermal conductivity of FeAl. Considering those factors, Al2O3 can clearly act as a reinforcement phase in coating 2, aiding Al and Cr oxidation, which leads to the formation of a protective layer. On the whole, it is important to point out that the resulting Al content and its distribution in the as-sprayed coating also determine the corrosion properties. Some microcracks, formed perpendicularly to the layer, were observed particularly in coating 4 and less noticeably in coating 3; such microcracks are attributed to the brittleness of the intermetallic phases, which are unable to withstand the deformation upon impact at high particle velocities. The grain boundaries were not the most common areas favoring the propagation of cracks, and therefore good cohesive strength is assumed. The microcrack network for the as-sprayed SHS powder (coating 4, see arrows in inset Fig. 4d) does not exhibit a specific direction within individual splats, which confirms the correspondence between embrittlement and the occurrence of the Al-rich phases, namely Fe2Al5 and FeAl2. For the as-sprayed self-decomposed powder (coating 3, see arrows in inset Fig. 4c), the microcracks are perpendicular to the coating surface, which suggests that the cracking is also due to the tensile thermal strain sustained during rapid quenching of the splats. The values of the linear thermal expansion coefficient for Fe-Al-type intermetallic phases (ranging from 15 × 10−6 up to 22 × 10−6 K−1) differ significantly from that of the steel substrate (12 × 10−6 K−1) (Ref 58, 59); as a rough estimate, a mismatch of 3-10 × 10−6 K−1 acting over a temperature drop of several hundred kelvin during splat quenching corresponds to a mismatch strain on the order of 0.1-0.5%, which is difficult for a brittle intermetallic splat to accommodate without cracking. Some of the cracks found in the ball-milled Cr- and Ti-alloyed powder (coating 2, see arrows in inset Fig. 4b) may additionally be linked to the impact of hot metallic particles onto cooler Al2O3 regions. Additionally, it was previously observed for Fe40Al-type coatings that small equiaxed grains appeared in the unmelted areas, while columnar grains, typical of rapid solidification processes, were visible in the melted regions. Interestingly, as a result of the thermal history of the milled particles in the flame, the final FeAl phase appears as the ordered B2 lattice in the areas that reached the molten state (Ref 34, 46). Taking into consideration the higher melting point of the stoichiometric FeAl compound relative to Fe2Al5 and FeAl2 (1250, 1171 and 1157 °C, respectively), and the high particle heterogeneity of powders 3 and 4, there is a great likelihood that these phases melt during the formation of amorphous oxide (AO) (Ref 29, 46, 51-56). This results in the multi-phase (composite-like) structure of the Fe-Al coatings (Ref 29, 56).
Corrosion Performance
Degradation and the infiltration of Na and S within the coatings following exposure to Na2SO4 at 850 °C are examined in Fig. 6 to 9. The cross section of coating 1 (Fig. 6a, b) does not show significant damage compared to Fig.
4(a); the coating preserves its original thickness all along the tested sample. No infiltration of the salt can be observed within the splat boundaries (Fig. 6c-g). The light contrast (Fig. 6b) is poorer in aluminum than in the as-sprayed state (spot 1, coating 1, Table 5), while the intersplat dark contrast is richer in oxygen.
Fig. 6 Typical lamellar-like microstructure in the cross sections of the HVOF coating obtained with powder 1, as-sprayed and after molten salt corrosion (a, b), and SEM/EDX results with corresponding EDX maps of the Fe (c), Al (d), O (e), Na (f) and S (g) distributions
Table 5 Semiquantitative EDS analysis (at.%) of the HVOF-sprayed Fe-Al coatings after the molten salt corrosion, with designation of the analyzed grain areas
A similar situation is observed for coatings 2 (Fig. 7) and 3 (Fig. 8), where oxygen diffusion is detected even within the splats. Following the tests, the non-oxidized phase in the as-sprayed coating 2 (spot 3, coating 2, Table 4), which is nearly equal in Fe and Al content, becomes oxidized and enriched in chromium at the expense of titanium, which is depleted (spot 3, coating 2, Table 5). By contrast, the dark gray phase doubles its Al content while its O content is reduced (spot 2, coating 2, Table 4 compared to spot 4, coating 2, Table 5). The silicon in coating 3 appears to diffuse into the core of the splats. Also, some oxide microareas, identified as aluminum oxide, are detected at the coating-substrate interface (Fig. 8b). No significant amounts of sodium or sulfur were identified in the EDS maps (Fig. 7c-h, 8c-h).
Fig. 7 Typical lamellar-like microstructure in the cross sections of the HVOF coating obtained with powder 2, as-sprayed and after molten salt corrosion (a, b), and SEM/EDX results with corresponding EDX maps of the Fe (c), Al (d), Ti (e), Cr (f), O (g), Na (h) and S (i) distributions
Fig. 8 Typical lamellar-like microstructure in the cross sections of the HVOF coating obtained with powder 3, as-sprayed and after molten salt corrosion (a, b), and SEM/EDX results with corresponding EDX maps of the Fe (c), Al (d), O (e), Si (f), Na (g) and S (h) distributions
Coating 4 suffered the greatest damage: the splat shapes are no longer visible, and the deposit consists of a composite containing an Al-rich oxide network with a Fe-rich matrix (Fig. 9a, b). Such a structure progresses uniformly from the air-coating interface (area 1, coating 4, Table 5) and displays a higher oxygen content than the rest of the coating (area 2, coating 4, Table 5). For that coating, regions near the edges of the sample were severely damaged, with considerable degradation observed; in these cases, Na and S concentrations increased in proportion to the visible infiltration. The top-surface oxide morphologies in Fig. 10 differ between the coatings: more granular shapes are found on coatings 1, 3 and 4, whereas the oxide on coating 2 is more needle-shaped. Coating 1 was covered by iron oxide even when exposed to an oxidizing atmosphere (Ref 83). The needles on coating 2 were identified as mixed Fe and Ti oxides, with an oxide layer below that is also rich in Al and Cr (Fig. 7). Coating 3 was mostly covered by an alumina layer, while coating 4 was covered by a mixed Fe and Al oxide; the scale fluxing may involve an interactive reaction between the basic dissolution of Al2O3 and the acidic dissolution of Fe2O3.
Fig. 10 SEM cross-section micrographs of the oxide layer on the coating surfaces obtained from: (a) powder 1, (b) powder 2, (c) powder 3 and (d) powder 4
The XRD of the corroded coatings (Fig.
11a-d) shows that coating 1 is covered with two oxides, namely Fe2O3 and Al2O3. The results obtained from the EDS analysis (Fig. 6) confirm the depletion of Al in the Fe-Al phase. The rapid growth of iron oxide was not observed on the rest of the coatings, yet alumina was identified; the alumina phase identified in the pattern is mainly α-Al2O3 (corundum). It has in fact been reported that the predominant surface product that forms between 600 and 800 °C is α-Al2O3 (rhombohedral), together with γ-Al2O3 (cubic) and θ-Al2O3 (monoclinic) (Ref 84). The latter two phases are fast growing, more voluminous, more porous and less protective than α-Al2O3; the heterogeneous growth of α-Al2O3, together with some traces of the γ and θ phases, could also explain why the other coatings showed significant damage. According to the literature, the sequence is believed to be as follows: γ-Al2O3 → δ-Al2O3 (750 °C); δ-Al2O3 → θ-Al2O3 (900 °C); θ-Al2O3 → α-Al2O3 (1000 °C), and the precise transformation temperature from θ to α is influenced by the presence of reactive elements (Ref 85).
Fig. 11 XRD diffraction patterns of the HVOF coatings after corrosion in molten Na2SO4: (a) coating 1, (b) coating 2, (c) coating 3 and (d) coating 4
The formation of alumina consumes a certain quantity of Al, reducing its activity and the partial pressure of oxygen. This causes a relative increase in the activities of Fe and S and promotes the reaction with the molten mixture to form a compound such as FeS:
$$\begin{aligned} & 2{\text{FeAl}} + {\text{SO}}_{3} \to {\text{Al}}_{2} {\text{O}}_{3} + {\text{S}} + 2{\text{Fe}} \\ & {\text{S}} + {\text{Fe}} \to {\text{FeS}} \\ \end{aligned}$$
Sulfur attack and penetration appear to be more visible at the edges of coating 4 (not presented here). Under molten salt corrosion conditions, the dissolution of the underlying material can be produced by local dissolution or by selective dissolution of different components of the oxide (Ref 86). Selective oxidation and dissolution of iron in coating 4 resulted in a loss of the coating integrity, leading to a high corrosion rate. In this case, sulfur may have moved from the oxide/molten salt interface toward the coating/substrate interface by diffusion or by infiltration of the melt through the structural defects of the oxide scale. It proceeded through particle boundaries as well as microcrack networks until it reached the steel substrate in some parts of the coating. It can be suspected that this local corrosion mechanism triggered the damage, causing metal dissolution at hot spots. The decomposition of Na2SO4 would result in SO3 formation, which might have been the aggressive agent for the rapid preferential attack at coating defects (Ref 87). The presence of sodium within the coating might follow from the basic dissolution reaction at the oxide/molten salt interface: Al2O3 + Na2O → 2NaAlO2 (Ref 19). Corrosion in the rest of the coatings appears to have been produced by uniform oxidation at the coating/molten salt/air interface. The formation of the fast-growing oxides indicates that the coating might be consumed upon longer exposure times, apparently without preference for any of the coating components. At 900 °C, the bulk Fe40Al composition was found to be more resistant than Fe40Al-0.1B-10Al2O3 (at.%) (Ref 88). Apparently, a similar phenomenon applies to coating 2, but its scale is much more complex, especially in contrast to coating 3. Under the oxide scale, Al depletion was observed in the intermetallic phase.
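As a side note summarizing the fluxing chemistry invoked above (standard molten-salt hot corrosion theory rather than a result of the present work), the aggressiveness of the melt is governed by the dissociation equilibrium of the sulfate:
$${\text{Na}}_{2} {\text{SO}}_{4} \rightleftharpoons {\text{Na}}_{2} {\text{O}} + {\text{SO}}_{3}$$
A high local Na2O activity (basic melt) favors dissolution of the protective alumina as NaAlO2, whereas a high SO3 partial pressure (acidic melt) favors dissolution of Fe2O3 and sulfide formation; the local balance between the two at scale defects is consistent with the interactive reaction between the basic dissolution of Al2O3 and the acidic dissolution of Fe2O3 suggested above.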
The less defective structure of these as-sprayed coatings and the favorable presence of other stoichiometric intermetallic phases may be the reason why their corrosion rates were lower than the one observed for the coating from the as-sprayed SHS powder.
The results of the experiments and subsequent analyses allowed an evaluation of the hot corrosion performance of HVOF-sprayed coatings with an Fe-Al intermetallic matrix in molten Na2SO4 at 850 °C in an isothermal process over a span of 45 h under static conditions. It was determined that, under the applied HVOF spraying conditions, the Fe-Al powder particles form a stratified/laminar/pseudo-composite coating structure, whose thickness after nine passes of the HVOF gun varies depending on the Fe-Al powder composition. At the same time, the high plastic deformation of FeAl grains in the volume of the coating, obtained from powder particles of different chemical composition with various alloying elements, proves the plastic deformability of the highly brittle Fe-Al phase upon impact with the substrate material. However, significant changes in the percentage shares of iron and aluminum in the structure of the as-sprayed coatings, involving the oxide phases formed in situ during the HVOF process, indicate melting or pre-melting of the powder particles, coupled with intensive oxidation due to reaction with highly reactive hydroxyl radicals (OH). The rapid plastic transformation of the intermetallic powder particles, combined with their "freezing" in contact with the "cold" substrate, leads to the amorphization of the oxide ceramics. The oxides take the form of flattened, nanometric thin films at the splat boundaries, within a finely dispersed, heterogeneous structure of the Fe-Al coating. The selective depletion of aluminum, which diffuses into the oxide phases, has no influence on the behavior of the FeAl (B2) superstructure obtained from the pre-milled FeCr25 + FeAl-TiAl-Al2O3 powder sprayed under the applied HVOF conditions. Hard oxide phases, in the form of thin films at the grain boundaries and dispersions within the grain volume, strengthen the structure, mainly by limiting dislocation motion and the migration of grain boundaries. Consequently, this reduces the susceptibility of the FeAl grains to plastic deformation and the recrystallization of the intermetallic alloy. The participation of the aluminum-rich phases, namely Fe2Al5 and FeAl2, as well as of the oxide phases, leads to the formation of microcracks. As a result, this is conducive to the diffusion of aggressive species in the Na2SO4 molten salt environment under the conditions of high-temperature oxidation at 850 °C over a span of 45 h. Generally, among the multi-phase corrosion products formed on the surface of the FeAl (HVOF) coatings at 850 °C, the dominant oxide is α-Al2O3, alongside other oxides (e.g., Fe2O3). The aluminum in the Fe-Al coatings is selectively oxidized and forms a stable α-Al2O3 oxide on the surface of the coatings. However, this oxide is then subject to degradation as a result of several structural defects and the different thermal expansion coefficients compared to the Fe-Al-type phases, especially in the case of the FexAly coating.
The research leading to these results has received funding from the People Programme (Accions Marie Curie) of the 7th Framework Programme of the European Union (FP7/2007-2013) under REA Grant Agreement No. 600388 (TECNIO spring programme), and from the Agency for Business Competitiveness of the Government of Catalonia, ACCIÓ.
The authors wish to thank Dr. D. Zasada and M.Sc. Eng. D. Marczak from the Department of Advanced Materials and Technologies, Military University of Technology, for his help in the experimental work as well as Prof. L. Swadźba for enabling the study of hot corrosion. S. Swaminathan, S.-M. Hong, M. Kumar, W.-S. Jung, D.-I. Kim, H. Singh, and I.-S. Choi, Microstructural Evolution and High Temperature Oxidation Characteristics of Cold Sprayed Ni-20Cr Nanostructured Alloy Coating, Surf. Coat. Technol., 2019, 362, p 333-344CrossRefGoogle Scholar H. Singh, M. Kaur, and S. Prakash, High-Temperature Exposure Studies of HVOF-Sprayed Cr3C2-25(NiCr)/(WC-Co) Coating, J. Therm. Spray Technol., 2016, 26(6), p 1192-1207CrossRefGoogle Scholar N. Kaur, M. Kumar, S.K. Sharma, D. Young Kim, S. Kumar, N.M. Chavan, S.V. Joshi, N. Singh, and H. Singh, Study of Mechanical Properties and High Temperature Oxidation Behavior of a Novel Cold-Spray Ni-20Cr Coating on Boiler Steels, Appl. Surf. Sci., 2015, 328, p 13-25CrossRefGoogle Scholar H. Singh, D. Puri, and S. Prakash, An Overview of Na2SO4 and/or V2O5 Induced Hot Corrosion of Fe- and Ni-Based Superalloys, Rev. Adv. Mater. Sci., 2007, 16(1-2), p 27-50Google Scholar P. Audigié, V. Encinas-Sánchez, M. Juez-Lorenzo, S. Rodríguezo, M. Gutiérrez, F.J. Pérez, and A. Agüero, High Temperature Molten Salt Corrosion Behavior of Aluminide and Nickel-Aluminide Coatings for Heat Storage in Concentrated Solar Power Plants, Surf. Coat. Technol., 2018, 349, p 1148-1157CrossRefGoogle Scholar T.L. Talako, M.S. Yakovleva, E.A. Astakhov, and A.I. Letsko, Structure and Properties of Detonation Gun Sprayed Coatings from the Synthesized FeAlSi/Al2O3 Powder, Surf. Coat. Technol., 2018, 353, p 93-104CrossRefGoogle Scholar H.S. Grewal, S. Bhandari, and H. Singh, Parametric Study of Slurry-Erosion of Hydroturbine Steels with and Without Detonation Gun Spray Coatings Using Taguchi Technique, Metall. Mater. Trans. A, 2012, 43A, p 3387-3401CrossRefGoogle Scholar R.L. Fleischer, D.M. Dimiduk, and H.A. Lipsitt, Intermetallic Compounds for Strong High-Temperature Materials: Status and Potential, Annu. Rev. Mater. Sci., 1989, 19, p 231-253CrossRefGoogle Scholar S.C. Deevi, V.K. Sikka, and C.T. Liu, Processing, Properties and Applications of Nickel and Iron Aluminides, Prog. Mater Sci., 1997, 42, p 177-192CrossRefGoogle Scholar Y. Shi and D.B. Lee, Corrosion of Fe3Al-4Cr Alloys at 1000 C in N2-0.1%H2S Gas, Key Eng. Mater., 2018, 765, p 173-177CrossRefGoogle Scholar C. Shen, K.-D. Liss, Z. Pan, Z. Wang, X. Li, and H. Li, Thermal Cycling of Fe3Al Based Iron Aluminide During the Wire-Arc Additive Manufacturing Process: An in Situ Neutron Diffraction Study, Intermetallics, 2018, 92, p 101-107CrossRefGoogle Scholar W. Liu, Y. Wang, H. Ge, L. Li, Y. Ding, L. Meng, and X. Zhang, Microstructure Evolution and Corrosion Behavior of Fe-Al-Based Intermetallic Aluminide Coatings Under Acidic Condition, Trans. Nonferrous Met. Soc. China, 2018, 28, p 2028-2043CrossRefGoogle Scholar S.C. Deevi and V.K. Sikka, Nickel and Iron Aluminides: An Overview on Properties, Processing, and Applications, Intermetallics, 1996, 4, p 357-375CrossRefGoogle Scholar D.G. Morris and M.A. Muñoz-Morris, Intermetallics: Past, Present and Future, Rev. Metal., 2005, 41, p 498-501CrossRefGoogle Scholar A. Lasalmonie, Intermetallics: Why is it So Difficult to Introduce Them in Gas Turbine Engines?, Intermetallics, 2006, 14, p 1123-1129CrossRefGoogle Scholar V.K. 
A typical ten-inch student slide rule (Pickett N902-T simplex trig).
The slide rule, also known colloquially in the United States as a slipstick,[1] is a mechanical analog computer.[2][3][4][5][6] The slide rule is used primarily for multiplication and division, and also for functions such as roots, logarithms and trigonometry, but is not normally used for addition or subtraction. Though similar in name and appearance to a standard ruler, the slide rule is not ordinarily used for measuring length or drawing straight lines. Slide rules come in a diverse range of styles and generally appear in a linear or circular form with a standardized set of markings (scales) essential to performing mathematical computations. Slide rules manufactured for specialized fields such as aviation or finance typically feature additional scales that aid in calculations common to those fields. The Reverend William Oughtred and others developed the slide rule in the 17th century based on the emerging work on logarithms by John Napier. Before the advent of the pocket calculator, it was the most commonly used calculation tool in science and engineering. The use of slide rules continued to grow through the 1950s and 1960s even as digital computing devices were being gradually introduced; but around 1974 the electronic scientific calculator made it largely obsolete[7][8][9][10] and most suppliers left the business.
This slide rule is positioned to yield several values: From C scale to D scale (multiply by 2), from D scale to C scale (divide by 2), A and B scales (multiply and divide by 4), A and D scales (squares and square roots).
Cursor on a slide rule.
In its most basic form, the slide rule uses two logarithmic scales to allow rapid multiplication and division of numbers. These common operations can be time-consuming and error-prone when done on paper. More elaborate slide rules allow other calculations, such as square roots, exponentials, logarithms, and trigonometric functions. Scales may be grouped in decades, which are numbers ranging from 1 to 10 (i.e. 10^n to 10^(n+1)). Thus single-decade scales C and D range from 1 to 10 across the entire width of the slide rule, while double-decade scales A and B range from 1 to 100 over the width of the slide rule. In general, mathematical calculations are performed by aligning a mark on the sliding central strip with a mark on one of the fixed strips, and then observing the relative positions of other marks on the strips. Numbers aligned with the marks give the approximate value of the product, quotient, or other calculated result. The user determines the location of the decimal point in the result, based on mental estimation. Scientific notation is used to track the decimal point in more formal calculations.
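To make the geometry of these logarithmic scales concrete, the short Python sketch below (not part of the original article) computes where a value's mark sits along a single-decade C/D scale and a double-decade A/B scale. The 25 cm scale length is an assumption corresponding to a typical "10-inch" rule, and the helper names are invented for illustration.

import math

SCALE_LENGTH_CM = 25.0  # assumed physical length of the C and D scales

def position_on_C_or_D(x):
    # Distance (cm) of the mark for x on a single-decade scale, 1 <= x <= 10.
    return SCALE_LENGTH_CM * math.log10(x)

def position_on_A_or_B(x):
    # Distance (cm) of the mark for x on a double-decade scale, 1 <= x <= 100.
    return SCALE_LENGTH_CM * math.log10(x) / 2.0

# The mark for 2 sits about 30% of the way along C/D, the mark for 4 about 60%:
print(position_on_C_or_D(2))    # ~7.53 cm
print(position_on_C_or_D(4))    # ~15.05 cm
# Because A compresses two decades into the same length, the mark directly above
# x on D is x squared on A: 16 on A lines up with 4 on D.
print(position_on_A_or_B(16))   # ~15.05 cm, the same position as 4 on D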
Addition and subtraction steps in a calculation are generally done mentally or on paper, not on the slide rule. Most slide rules consist of three linear strips of the same length, aligned in parallel and interlocked so that the central strip can be moved lengthwise relative to the other two. The outer two strips are fixed so that their relative positions do not change. Some slide rules ("duplex" models) have scales on both sides of the rule and slide strip, others on one side of the outer strips and both sides of the slide strip (which can usually be pulled out, flipped over and reinserted for convenience), still others on one side only ("simplex" rules). A sliding cursor with a vertical alignment line is used to find corresponding points on scales that are not adjacent to each other or, in duplex models, are on the other side of the rule. The cursor can also record an intermediate result on any of the scales. A logarithm transforms the operations of multiplication and division to addition and subtraction according to the rules \log(xy) = \log(x) + \log(y) and \log(x/y) = \log(x) - \log(y). Moving the top scale to the right by a distance of \log(x), by matching the beginning of the top scale with the label x on the bottom, aligns each number y, at position \log(y) on the top scale, with the number at position \log(x) + \log(y) on the bottom scale. Because \log(x) + \log(y) = \log(xy), this position on the bottom scale gives xy, the product of x and y. For example, to calculate 3×2, the 1 on the top scale is moved to the 2 on the bottom scale. The answer, 6, is read off the bottom scale where 3 is on the top scale. In general, the 1 on the top is moved to a factor on the bottom, and the answer is read off the bottom where the other factor is on the top. This works because the distances from the "1" are proportional to the logarithms of the marked values: Operations may go "off the scale;" for example, the diagram above shows that the slide rule has not positioned the 7 on the upper scale above any number on the lower scale, so it does not give any answer for 2×7. In such cases, the user may slide the upper scale to the left until its right index aligns with the 2, effectively dividing by 10 (by subtracting the full length of the C-scale) and then multiplying by 7, as in the illustration below: Here the user of the slide rule must remember to adjust the decimal point appropriately to correct the final answer. We wanted to find 2×7, but instead we calculated (2/10)×7=0.2×7=1.4. So the true answer is not 1.4 but 14. Resetting the slide is not the only way to handle multiplications that would result in off-scale results, such as 2×7; some other methods are: Use the double-decade scales A and B. Use the folded scales. In this example, set the left 1 of C opposite the 2 of D. Move the cursor to 7 on CF, and read the result from DF. Use the CI inverted scale. Position the 7 on the CI scale above the 2 on the D scale, and then read the result off of the D scale below the 1 on the CI scale. Since 1 occurs in two places on the CI scale, one of them will always be on-scale. Use both the CI inverted scale and the C scale. Line up the 2 of CI with the 1 of D, and read the result from D, below the 7 on the C scale. Using a circular slide rule. Method 1 is easy to understand, but entails a loss of precision. Method 3 has the advantage that it only involves two scales. The illustration below demonstrates the computation of 5.5/2. The 2 on the top scale is placed over the 5.5 on the bottom scale. 
The 1 on the top scale lies above the quotient, 2.75. There is more than one method for doing division, but the method presented here has the advantage that the final result cannot be off-scale, because one has a choice of using the 1 at either end. In addition to the logarithmic scales, some slide rules have other mathematical functions encoded on other auxiliary scales. The most popular were trigonometric, usually sine and tangent, common logarithm (log10) (for taking the log of a value on a multiplier scale), natural logarithm (ln) and exponential (ex) scales. Some rules include a Pythagorean scale, to figure sides of triangles, and a scale to figure circles. Others feature scales for calculating hyperbolic functions. On linear rules, the scales and their labeling are highly standardized, with variation usually occurring only in terms of which scales are included and in what order: A, B two-decade logarithmic scales, used for finding square roots and squares of numbers C, D single-decade logarithmic scales K three-decade logarithmic scale, used for finding cube roots and cubes of numbers CF, DF "folded" versions of the C and D scales that start from π rather than from unity; these are convenient in two cases. First when the user guesses a product will be close to 10 but is not sure whether it will be slightly less or slightly more than 10, the folded scales avoid the possibility of going off the scale. Second, by making the start π rather than the square root of 10, multiplying or dividing by π (as is common in science and engineering formulas) is simplified. CI, DI, CIF, DIF "inverted" scales, running from right to left, used to simplify 1/x steps S used for finding sines and cosines on the C (or D) scale T, T1, T2 used for finding tangents and cotangents on the C and CI (or D and DI) scales ST, SRT used for sines and tangents of small angles and degree–radian conversion L a linear scale, used along with the C and D scales for finding base-10 logarithms and powers of 10 LLn a set of log-log scales, used for finding logarithms and exponentials of numbers Ln a linear scale, used along with the C and D scales for finding natural (base e) logarithms and e^x The scales on the front and back of a Keuffel and Esser (K&E) 4081-3 slide rule. The Binary Slide Rule manufactured by Gilson in 1931 performed an addition and subtraction function limited to fractions.[11] Roots and powers There are single-decade (C and D), double-decade (A and B), and triple-decade (K) scales. To compute x^2, for example, locate x on the D scale and read its square on the A scale. Inverting this process allows square roots to be found, and similarly for the powers 3, 1/3, 2/3, and 3/2. Care must be taken when the base, x, is found in more than one place on its scale. For instance, there are two nines on the A scale; to find the square root of nine, use the first one; the second one gives the square root of 90. For x^y problems, use the LL scales. When several LL scales are present, use the one with x on it. First, align the leftmost 1 on the C scale with x on the LL scale. Then, find y on the C scale and go down to the LL scale with x on it. That scale will indicate the answer. If y is "off the scale," locate x^{y/2} and square it using the A and B scales as described above. The S, T, and ST scales are used for trig functions and multiples of trig functions, for angles in degrees. 
For angles from around 5.7 up to 90 degrees, sines are found by comparing the S scale with C (or D) scale; though on many closed-body rules the S scale relates to the A scale instead, and what follows must be adjusted appropriately. The S scale has a second set of angles (sometimes in a different color), which run in the opposite direction, and are used for cosines. Tangents are found by comparing the T scale with the C (or D) scale for angles less than 45 degrees. For angles greater than 45 degrees the CI scale is used. Common forms such as k\sin x can be read directly from x on the S scale to the result on the D scale, when the C-scale index is set at k. For angles below 5.7 degrees, sines, tangents, and radians are approximately equal, and are found on the ST or SRT (sines, radians, and tangents) scale, or simply divided by 57.3 degrees/radian. Inverse trigonometric functions are found by reversing the process. Many slide rules have S, T, and ST scales marked with degrees and minutes (e.g. some Keuffel and Esser models, late-model Teledyne-Post Mannheim-type rules). So-called decitrig models use decimal fractions of degrees instead. Logarithms and exponentials Base-10 logarithms and exponentials are found using the L scale, which is linear. Some slide rules have a Ln scale, which is for base e. The Ln scale was invented by an 11th grade student, Stephen B. Cohen, in 1958. The original intent was to allow the user to select an exponent x (in the range 0 to 2.3) on the Ln scale and read ex on the C (or D) scale and e–x on the CI (or DI) scale. Pickett, Inc. was given exclusive rights to the scale. Later, the inventor created a set of "marks" on the Ln scale to extend the range beyond the 2.3 limit, but Pickett never incorporated these marks on any of its slide rules. Slide rules are not typically used for addition and subtraction, but it is nevertheless possible to do so using two different techniques.[12] The first method to perform addition and subtraction on the C and D (or any comparable scales) requires converting the problem into one of division. For addition, the quotient of the two variables plus one times the divisor equals their sum: x + y = \left(\frac{x}{y} + 1\right) y. For subtraction, the quotient of the two variables minus one times the divisor equals their difference: x - y = \left(\frac{x}{y} - 1\right) y. This method is similar to the addition/subtraction technique used for high-speed electronic circuits with the logarithmic number system in specialized computer applications like the Gravity Pipe (GRAPE) supercomputer and hidden Markov models. The second method utilizes a sliding linear L scale available on some models. Addition and subtraction are performed by sliding the cursor left (for subtraction) or right (for addition) then returning the slide to 0 to read the result. A 7-foot (2.1 m) teaching slide rule compared to a normal sized model. Standard linear rules The width of the slide rule is quoted in terms of the nominal width of the scales. Scales on the most common "10-inch" models are actually 25 cm, as they were made to metric standards, though some rules offer slightly extended scales to simplify manipulation when a result overflowed. Pocket rules are typically 5 inches. Models a couple of metres wide were sold to be hung in classrooms for teaching purposes.[13] Typically the divisions mark a scale to a precision of two significant figures, and the user estimates the third figure. 
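The procedures described above — multiplying by adding logarithmic offsets, sliding back a full decade when a product runs off-scale, and recasting addition as a division — can be mimicked numerically. The following Python sketch is purely illustrative: the function names are invented here, and the rounding to three significant figures merely stands in for the reading precision of a 10-inch rule.

import math

def read_off(value, sig_figs=3):
    # Round to roughly the three significant figures a user could read from a 10-inch rule.
    exponent = math.floor(math.log10(abs(value)))
    return round(value, sig_figs - 1 - exponent)

def slide_rule_multiply(x, y):
    # The C and D scales only add the fractional parts (mantissas) of log10.
    mantissa = (math.log10(x) % 1.0) + (math.log10(y) % 1.0)
    if mantissa >= 1.0:
        mantissa -= 1.0  # result ran off the D scale: slide back by one full decade
    digits = 10 ** mantissa  # a value between 1 and 10, as read under the cursor
    # The user restores the order of magnitude by mental estimation:
    order = round(math.log10(x * y) - math.log10(digits))
    return read_off(digits * 10 ** order)

def slide_rule_add(x, y):
    # Addition recast as division and multiplication: x + y = (x/y + 1) * y.
    quotient = slide_rule_multiply(x, 1.0 / y)   # x/y read from the scales (or via the CI reciprocal scale)
    return read_off((quotient + 1.0) * y)        # the "+1" is done mentally, the product on the scales

print(slide_rule_multiply(2, 7))   # 14.0, the off-scale example discussed earlier
print(slide_rule_add(3.0, 2.0))    # 5.0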
Some high-end slide rules have magnifier cursors that make the markings easier to see. Such cursors can effectively double the accuracy of readings, permitting a 10-inch slide rule to serve as well as a 20-inch. Various other conveniences have been developed. Trigonometric scales are sometimes dual-labeled, in black and red, with complementary angles, the so-called "Darmstadt" style. Duplex slide rules often duplicate some of the scales on the back. Scales are often "split" to get higher accuracy. Circular slide rules A simple circular slide rule, made by Concise Co., Ltd., Tokyo, Japan, with only inverse, square, and cubic scales. On the reverse is a handy list of 38 metric/imperial conversion factors. A Russian circular slide rule built like a pocket watch that works as single cursor slide rule since the two needles are ganged together. Pickett circular slide rule with two cursors. (4.25 in/10.9 cm width) Reverse has additional scale and one cursor. Breitling Navitimer wristwatch with circular slide rule. Circular slide rules come in two basic types, one with two cursors (left), and another with a free dish and one cursor (right). The dual cursor versions perform multiplication and division by holding a fast angle between the cursors as they are rotated around the dial. The onefold cursor version operates more like the standard slide rule through the appropriate alignment of the scales. The basic advantage of a circular slide rule is that the widest dimension of the tool was reduced by a factor of about 3 (i.e. by π). For example, a 10 cm circular would have a maximum precision approximately equal to a 31.4 cm ordinary slide rule. Circular slide rules also eliminate "off-scale" calculations, because the scales were designed to "wrap around"; they never have to be reoriented when results are near 1.0—the rule is always on scale. However, for non-cyclical non-spiral scales such as S, T, and LL's, the scale width is narrowed to make room for end margins.[14] Circular slide rules are mechanically more rugged and smoother-moving, but their scale alignment precision is sensitive to the centering of a central pivot; a minute 0.1 mm off-centre of the pivot can result in a 0.2mm worst case alignment error. The pivot, however, does prevent scratching of the face and cursors. The highest accuracy scales are placed on the outer rings. Rather than "split" scales, high-end circular rules use spiral scales for more complex operations like log-of-log scales. One eight-inch premium circular rule had a 50-inch spiral log-log scale. The main disadvantages of circular slide rules are the difficulty in locating figures along a dish, and limited number of scales. Another drawback of circular slide rules is that less-important scales are closer to the center, and have lower precisions. Most students learned slide rule use on the linear slide rules, and did not find reason to switch. One slide rule remaining in daily use around the world is the E6B. This is a circular slide rule first created in the 1930s for aircraft pilots to help with dead reckoning. With the aid of scales printed on the frame it also helps with such miscellaneous tasks as converting time, distance, speed, and temperature values, compass errors, and calculating fuel use. The so-called "prayer wheel" is still available in flight shops, and remains widely used. 
While GPS has reduced the use of dead reckoning for aerial navigation, and handheld calculators have taken over many of its functions, the E6B remains widely used as a primary or backup device and the majority of flight schools demand that their students have some degree of proficiency in its use. Proportion wheels are simple circular slide rules used in graphic design to broaden or slim images and photographs. Lining up the desired values on the emmer and inner wheels (which correspond to the original and desired sizes) will display the proportion as a percentage in a small window. They are not as common since the advent of computerized layout, but are still made and used. In 1952, Swiss watch company Breitling introduced a pilot's wristwatch with an integrated circular slide rule specialized for flight calculations: the Breitling Navitimer. The Navitimer circular rule, referred to by Breitling as a "navigation computer", featured airspeed, rate/time of climb/descent, flight time, distance, and fuel consumption functions, as well as kilometer—nautical mile and gallon—liter fuel amount conversion functions. Cylindrical slide rules Otis King Model K Thacher slide rule, circa 1890 There are two main types of cylindrical slide rules: those with helical scales such as the Fuller, the Otis King and the Bygrave slide rule, and those with bars, such as the Thacher and some Loga models. In either case, the advantage is a much longer scale, and hence potentially higher accuracy, than a straight or circular rule. Traditionally slide rules were made out of hard wood such as mahogany or boxwood with cursors of glass and metal. At least one high precision instrument was made of steel. In 1895, a Japanese firm, Hemmi, started to make slide rules from bamboo, which had the advantages of being dimensionally stable, strong and naturally self-lubricating. These bamboo slide rules were introduced in Sweden in September, 1933,[15] and probably only a little earlier in Germany. Scales were made of celluloid or plastic. Later slide rules were made of plastic, or aluminium painted with plastic. Later cursors were acrylics or polycarbonates sliding on Teflon bearings. All premium slide rules had numbers and scales engraved, and then filled with paint or other resin. Painted or imprinted slide rules were viewed as inferior because the markings could wear off. Nevertheless, Pickett, probably America's most successful slide rule company, made all printed scales. Premium slide rules included clever catches so the rule would not fall apart by accident, and bumpers to protect the scales and cursor from rubbing on tabletops. The recommended cleaning method for engraved markings is to scrub lightly with steel-wool. For painted slide rules, and the faint of heart, use diluted commercial window-cleaning fluid and a soft cloth. William Oughtred (1575–1660), inventor of the circular slide rule. The slide rule was invented around 1620–1630, shortly after John Napier's publication of the concept of the logarithm. Edmund Gunter of Oxford developed a calculating device with a single logarithmic scale, which, with additional measuring tools, could be used to multiply and divide. The first description of this scale was published in Paris in 1624 by Edmund Wingate (c.1593–1656), an English mathematician, in a book entitled L'usage de la reigle de proportion en l'arithmetique & geometrie. The book contains a double scale on one side of which is a logarithmic scale and on the other a tabular scale. 
In 1630, William Oughtred of Cambridge invented a circular slide rule, and in 1632 he combined two Gunter rules, held together with the hands, to make a device that is recognizably the modern slide rule. Like his contemporary at Cambridge, Isaac Newton, Oughtred taught his ideas privately to his students, but delayed in publishing them, and like Newton, he became involved in a vitriolic controversy over priority, with his one-time student Richard Delamain and the prior claims of Wingate. Oughtred's ideas were only made public in publications of his student William Forster in 1632 and 1653. In 1677, Henry Coggeshall created a two-foot folding rule for timber measure, called the Coggeshall slide rule. His design and uses for the tool gave the slide rule purpose outside of mathematical inquiry. In 1722, Warner introduced the two- and three-decade scales, and in 1755 Everard included an inverted scale; a slide rule containing all of these scales is usually known as a "polyphase" rule. In 1815, Peter Mark Roget invented the log log slide rule, which included a scale displaying the logarithm of the logarithm. This allowed the user to directly perform calculations involving roots and exponents. This was especially useful for fractional powers. In 1821, Nathaniel Bowditch, in the American Practical Navigator, described the use of a "sliding rule" that contained scales trigonometric functions on the fixed part and a line of log-sines and log-tans on the slider. This device was used to solve navigation problems. In 1845, Paul Cameron of Glasgow introduced the Nautical Slide-Rule. designed to answer questions of navigation including right ascension and declination of the sun and principal stars.[16] Modern form Engineer using a slide rule. Note mechanical calculator in background. The more modern form was created in 1859 by French artillery lieutenant Amédée Mannheim, "who was fortunate in having his rule made by a firm of national reputation and in having it adopted by the French Artillery." It was around that time, as engineering became a recognized professional activity, that slide rules came into wide use in Europe. They did not become common in the United States until 1881, when Edwin Thacher introduced a cylindrical rule there. The duplex rule was invented by William Cox in 1891, and was produced by Keuffel and Esser Co. of New York.[17][18] Astronomical work also required fine computations, and in 19th-century Germany a steel slide rule about 2 meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places. Throughout the 1950s and 1960s the slide rule was the symbol of the engineer's profession (in the same way that the stethoscope symbolizes the medical profession). German rocket scientist Wernher von Braun brought two 1930s vintage Nestler slide rules with him when he moved to the U.S. after World War 2 to work on the American space program. Throughout his life he never used any other pocket calculating devices; slide rules served him perfectly well for making quick estimates of rocket design parameters and other figures. Aluminium Pickett-brand slide rules were carried on Project Apollo space missions. 
The Pickett model N600-ES that was taken to the moon on Apollo 13 in 1970 is owned by the National Air and Space Museum.[19] The Pickett N600-ES that was owned by Buzz Aldrin and flew with him to the moon on Apollo 11 was sold at auction in 2007.[20] Some engineering students and engineers carried ten-inch slide rules in belt holsters, and even into the mid-1970s this was a common sight on campuses. Students also might keep a ten- or twenty-inch rule for precision work at home or the office[21] while carrying a five-inch pocket slide rule around with them. In 2004, education researchers David B. Sher and Dean C. Nataro conceived a new type of slide rule based on prosthaphaeresis, an algorithm for rapidly computing products that predates logarithms. There has been little practical interest in constructing one beyond the initial prototype, however.[22] Specialized calculators Hurter and Driffield's actinograph Slide rules have often been specialized to varying degrees for their field of use, such as excise, proof calculation, engineering, navigation, etc., but some slide rules are extremely specialized for very narrow applications. For example, the John Rabone & Sons 1892 catalog lists a "Measuring Tape and Cattle Gauge", a device to estimate the weight of a cow from its measurements. John Rabone & Sons 1892 Cattle Gauge There were many specialized slide rules for photographic applications; for example, the actinograph of Hurter and Driffield was a two-slide boxwood, brass, and cardboard device for estimating exposure from time of day, time of year, and latitude. Specialized slide rules were invented for various forms of engineering, business and banking. These often had common calculations directly expressed as special scales, for example loan calculations, optimal purchase quantities, or particular engineering equations. For example, the Fisher Controls company distributed a customized slide rule adapted to solving the equations used for selecting the proper size of industrial flow control valves.[23] Cryptographic slide rule used by the Swiss Army between 1914 and 1940. In World War II, bombardiers and navigators who required quick calculations often used specialized slide rules. One office of the U.S. Navy actually designed a generic slide rule "chassis" with an aluminium body and plastic cursor into which celluloid cards (printed on both sides) could be placed for special calculations. The process was invented to calculate range, fuel use and altitude for aircraft, and then adapted to many other purposes. The TI-30 scientific calculator was introduced for under US$25 in 1976 ($104 adjusted for inflation), signaling the end of the slide rule era. The importance of the slide rule began to diminish as electronic computers, a new but very scarce resource in the 1950s, became more widely available to technical workers during the 1960s. (See History of computing hardware (1960s–present).) Computers also changed the nature of calculation. With slide rules, there was a great emphasis on working the algebra to get expressions into the most computable form. Users of slide rules would simply approximate or drop small terms to simplify the calculation. FORTRAN allowed complicated formulas to be typed in from textbooks without the effort of reformulation. Numerical integration was often easier than trying to find closed-form solutions for difficult problems. 
The young engineer asking for computer time to solve a problem that could have been done by a few swipes on the slide rule became a humorous cliché. The availability of mainframe computing did not however significantly affect the ubiquitous use of the slide rule until cheap hand held electronic calculators for scientific and engineering purposes became available in the mid-1970s at which point they rapidly fell out of use. The first included the Wang Laboratories LOCI-2,[24][25] introduced in 1965, which used logarithms for multiplication and division and the Hewlett-Packard HP-9100, introduced in 1968.[26] The HP-9100 had trigonometric functions (sin, cos, tan) in addition to exponentials and logarithms. It used the CORDIC (coordinate rotation digital computer) algorithm,[27] which allows for calculation of trigonometric functions using only shift and add operations. This method facilitated the development of ever smaller scientific calculators. The era of the slide rule ended with the launch of pocket-sized scientific calculators, of which the 1972 Hewlett-Packard HP-35 was the first. Introduced at US$395, it was too expensive for most students. By 1975 basic four-function electronic calculators could be purchased for less than $50, and by 1976 a scientific calculator, the TI-30, could be purchased for less than $25. Compared to electronic digital calculators Most people find slide rules difficult to learn and use. Even during their heyday, they never caught on with the general public.[28] Addition and subtraction are not well-supported operations on slide rules and doing a calculation on a slide rule tends to be slower than on a calculator.[29] This led engineers to take mathematical shortcuts favoring operations that were easy on a slide rule, creating inaccuracies and mistakes.[30] On the other hand, the spatial, manual operation of slide rules cultivates in the user an intuition for numerical relationships and scale that people who have used only digital calculators often lack.[31] A slide rule will also display all the terms of a calculation along with the result, thus eliminating uncertainty about what calculation was actually performed. A slide rule requires the user to separately compute the order of magnitude of the answer in order to position the decimal point in the results. For example, 1.5 × 30 (which equals 45) will show the same result as 1,500,000 × 0.03 (which equals 45,000). This separate calculation is less likely to lead to extreme calculation errors, but forces the user to keep track of magnitude in short-term memory (which is error-prone), keep notes (which is cumbersome) or reason about it in every step (which distracts from the other calculation requirements). The typical precision of a slide rule is about three significant digits, compared to many digits on digital calculators. As order of magnitude gets the greatest prominence when using a slide rule, users are less likely to make errors of false precision. When performing a sequence of multiplications or divisions by the same number, the answer can often be determined by merely glancing at the slide rule without any manipulation. This can be especially useful when calculating percentages (e.g. for test scores) or when comparing prices (e.g. in dollars per kilogram). Multiple speed-time-distance calculations can be performed hands-free at a glance with a slide rule. Other useful linear conversions such as pounds to kilograms can be easily marked on the rule and used directly in calculations. 
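The order-of-magnitude point can be made explicit: the scales respond only to the mantissa of the logarithm, so 1.5 × 30 and 1,500,000 × 0.03 put the cursor in exactly the same place. A self-contained illustration follows (the function name is invented for this sketch):

import math

def scale_reading(x, y):
    # Mantissa-only position: the digits a slide rule would display for the product x * y.
    mantissa = (math.log10(x) + math.log10(y)) % 1.0
    return round(10 ** mantissa, 3)

# Both calculations read "4.5" on the D scale; only the user's mental bookkeeping
# distinguishes 45 from 45,000.
print(scale_reading(1.5, 30))          # 4.5
print(scale_reading(1500000, 0.03))    # 4.5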
Being entirely mechanical, a slide rule does not depend on electricity or batteries. However, mechanical imprecision in slide rules that were poorly constructed or warped by heat or use will lead to errors. Many sailors keep slide rules as backups for navigation in case of electric failure or battery depletion on long route segments. Slide rules are still commonly used in aviation, particularly for smaller planes. They are only being replaced by integrated, special purpose and expensive flight computers, and not general-purpose calculators. The E6B circular slide rule used by pilots has been in continuous production and remains available in a variety of models. Some wrist watches designed for aviation use still feature slide rule scales to permit quick calculations. The Citizen Skyhawk AT is a notable example.[32] Finding and collecting slide rules Faber-Castell slide rule with pouch There are still people who prefer a slide rule over an electronic calculator as a practical computing device. Many others keep their old slide rules out of a sense of nostalgia, or collect slide rules as a hobby.[33] A popular collectible model is the Keuffel & Esser Deci-Lon, a premium scientific and engineering slide rule available both in a ten-inch "regular" (Deci-Lon 10) and a five-inch "pocket" (Deci-Lon 5) variant. Another prized American model is the eight-inch Scientific Instruments circular rule. Of European rules, Faber-Castell's high-end models are the most popular among collectors. Although there is a large supply of slide rules circulating on the market, specimens in good condition tend to be expensive. Many rules found for sale on online auction sites are damaged or have missing parts, and the seller may not know enough to supply the relevant information. Replacement parts are scarce, and therefore expensive, and are generally only available for separate purchase on individual collectors' web sites. The Keuffel and Esser rules from the period up to about 1950 are particularly problematic, because the end-pieces on the cursors, made of celluloid, tend to break down chemically over time. There are still a handful of sources for brand new slide rules. The Concise Company of Tokyo, which began as a manufacturer of circular slide rules in July 1954,[34] continues to make and sell them today. In September 2009, on-line retailer ThinkGeek introduced its own brand of straight slide rules, which they described as "faithful replica[s]" that are "individually hand tooled" due to a stated lack of any existing manufacturers.[35] These are no longer available in 2012.[36] In addition, Faber-Castell has a number of slide rules still in inventory, available for international purchase through their web store.[37] Proportion wheels are still used in graphic design. Timeline of computing Bygrave slide rule E6B Flight computer Lunometer Nomography Slide chart Vernier scale Volvelle ^ Lester V. Berrey and Melvin van den Bark (1953). American Thesaurus of Slang: A Complete Reference Book of Colloquial Speech. Crowell. ^ Roger R. Flynn (June 2002). Computer sciences 1. Macmillan. p. 175. ^ Eric G. Swedin; David L. Ferro (24 October 2007). Computers: The Life Story of a Technology. JHU Press. p. 26. ^ Peter Grego (2009). Astronomical cybersketching. Springer. p. 12. ^ Ernst Bleuler; Robert Ozias Haxby (21 September 2011). Electronic Methods. Academic Press. p. 638. ^ Harry Henderson (1 January 2009). Encyclopedia of Computer Science and Technology, Revised Edition. Infobase Publishing. p. 13. 
^ Behrens, Lawrence; Rosen, Leonard J. (1982). Writing and reading across the curriculum. ^ Maor, Eli (2009). e: The Story of a Number. Princeton University Press. p. 16. ^ Castleden, Rodney (2007). Inventions that Changed the World. Futura. p. 157. ^ instruction manual pages 7 & 8. Retrieved March 14, 2007. ^ AntiQuark: Slide Rule Tricks. ^ "Slide Rules". Tbullock.com. 2009-12-08. Retrieved 2010-02-20. ^ At least one circular rule, a 1931 Gilson model, sacrificed some of the scales usually found in slide rules in order to obtain additional resolution in multiplication and division. It functioned through the use of a spiral C scale, which was claimed to be 50 feet and readable to five significant figures. See http://www.sphere.bc.ca/test/gilson/gilson-manual2.jpg A photo can be seen at http://www.hpmuseum.org/srcirc.htm An instruction manual for the unit marketed by Dietzgen can be found at http://www.sliderulemuseum.com/SR_Library_General.htm All retrieved March 14, 2007. ^ "336 (Teknisk Tidskrift / 1933. Allmänna avdelningen)". Runeberg.org. Retrieved 2010-02-20. ^ "Cameron's Nautical Slide Rule", The Practical Mechanic and Engineer's Magazine, April 1845, p187 and Plate XX-B ^ Kells, Lyman M.; Kern, Willis F.; Bland, James R. (1943). The Log-Log Duplex Decitrig Slide Rule No. 4081: A Manual. Keuffel & Esser. p. 92. Archived from the original on 14 February 2009. ^ The Polyphase Duplex Slide Rule, A Self-Teaching Manual, Breckenridge, 1922, p. 20. ^ "Slide Rule, 5-inch, Pickett N600-ES, Apollo 13". Smithsonian National Air and Space Museum. Retrieved 3 September 2013. ^ "Lot 25368 Buzz Aldrin's Apollo 11 Slide Rule - Flown to the Moon. ... 2007 September Grand Format Air & Space Auction #669". Heritage Auctions. Retrieved 3 September 2013. ^ Charles Overton Harris, Slide rule simplified, American Technical Society, 1961, p. 5. ^ "Prosthaphaeretic Slide Rule: A Mechanical Multiplication Device Based On Trigonometric Identities, The | Mathematics And Computer Education | Find Articles At Bnet". Findarticles.com. 2009-06-02. Retrieved 2010-02-20. ^ Fisher sizing rules, retrieved 2009 Oct 06. ^ The Wang LOCI-2 ^ Wang Laboratories (December 1966). "Now you can determine Copolymer Composition in a few minutes at your desk". American Chemical Society 38 (13): 62A–63A. ^ The HP 9100 Project. ^ , 101 (2000).25J. E. Volder, "The Birth of CORDIC", J. VLSI Signal Processing ^ Stoll, Cliff. "When Slide Rules Ruled," Scientific American, May 2006, pp. 80–87. "The difficulty of learning to use slide rules discouraged their use among the hoi polloi. Yes, the occasional grocery store manager figured discounts on a slipstick, and this author once caught his high school English teacher calculating stats for trifecta horse-race winners on a slide rule during study hall. But slide rules never made it into daily life because you could not do simple addition and subtraction with them, not to mention the difficulty of keeping track of the decimal point. Slide rules remained tools for techies." ^ Watson, George H. "Problem-based learning and the three C's of technology," The Power of Problem-Based Learning, Barbara Duch, Susan Groh, Deborah Allen, eds., Stylus Publishing, LLC, 2001. "Numerical computations in freshman physics and chemistry were excruciating; however, this did not seem to be the case for those students fortunate enough to already own a calculator. 
I vividly recall that at the end of 1974, the students who were still using slide rules were given an additional 15 minutes on the final examination to compensate for the computational advantage afforded by the calculator, hardly adequate compensation in the opinions of the remaining slide rule practitioners." ^ Stoll, Cliff. "When Slide Rules Ruled," Scientific American, May 2006, pp. 80–87. "With computation moving literally at a hand's pace and the lack of precision a given, mathematicians worked to simplify complex problems. Because linear equations were friendlier to slide rules than more complex functions were, scientists struggled to linearize mathematical relations, often sweeping high-order or less significant terms under the computational carpet. So a car designer might calculate gas consumption by looking mainly at an engine's power, while ignoring how air friction varies with speed. Engineers developed shortcuts and rules of thumb. At their best, these measures led to time savings, insight and understanding. On the downside, these approximations could hide mistakes and lead to gross errors." ^ Stoll, Cliff. "When Slide Rules Ruled", Scientific American, May 2006, pp. 80–87. "One effect was that users felt close to the numbers, aware of rounding-off errors and systematic inaccuracies, unlike users of today's computer-design programs. Chat with an engineer from the 1950s, and you will most likely hear a lament for the days when calculation went hand-in-hand with deeper comprehension. Instead of plugging numbers into a computer program, an engineer would understand the fine points of loads and stresses, voltages and currents, angles and distances. Numeric answers, crafted by hand, meant problem solving through knowledge and analysis rather than sheer number crunching." ^ Citizen Watch Company – Citizen Eco-Drive (US, Canada, UK, Ireland), Citizen Watch ^ "Greg's Slide Rules - Links to Slide Rule Collectors". Sliderule.ozmanor.com. 2004-07-29. Retrieved 2010-02-20. ^ "About CONCISE". Concise.co.jp. Retrieved 2010-02-20. ^ "Slide Rule". ThinkGeek. Retrieved 2010-02-20. ^ "Rechenschieber". Faber-Castell. Retrieved 2012-01-17.
General information, history: International Slide Rule Museum; The history, theory and use of the engineering slide rule — By Dr James B. Calvert, University of Denver; Oughtred Society Slide Rule Home Page — Dedicated to the preservation and history of slide rules; Derek's virtual slide rule gallery — Javascript simulations of historical slide rules; Reglas de Cálculo — A very big Faber Castell collection; Collection of slide rules — French Slide Rules (Graphoplex, Tavernier-Gravet and others); Eric's Slide Rule Site — History and use
CommonCrawl
Journal of Applied Volcanology

Evaluating life-safety risk for fieldwork on active volcanoes: the volcano life risk estimator (VoLREst), a volcano observatory's decision-support tool

Natalia Irma Deligne 1 (ORCID: orcid.org/0000-0001-9221-8581), Gill E. Jolly 1,2, Tony Taig 3 & Terry H. Webb 1,4

Journal of Applied Volcanology volume 7, Article number: 7 (2018)

When is it safe, or at least, not unreasonably risky, to undertake fieldwork on active volcanoes? Volcano observatories must balance the safety of staff against the value of collecting field data and/or manual instrument installation, maintenance, and repair. At times of volcanic unrest this can present a particular dilemma, as both the value of fieldwork (which might help save lives or prevent unnecessary evacuation) and the risk to staff in the field may be high. Despite the increasing coverage and scope of remote monitoring methods, in-person fieldwork is still required for comprehensive volcano monitoring, and can be particularly valuable at times of volcanic unrest. A volcano observatory has a moral and legal duty to minimise occupational risk for its staff, but must balance this against its duty to provide the best possible information in support of difficult decisions on community safety. To assist with consistent and objective decision-making regarding whether to undertake fieldwork on active volcanoes, we present the Volcano Life Risk Estimator (VoLREst). We developed VoLREst to quantitatively evaluate life-safety risk to GNS Science staff undertaking fieldwork on volcanoes in unrest where the primary concerns are volcanic hazards from an eruption with no useful short-term precursory activity that would indicate an imminent eruption. The hazards considered are ballistics, pyroclastic density currents, and near-vent processes. VoLREst quantifies the likelihood of exposure to volcanic hazards at various distances from the vent for small, moderate, or large eruptions. This, combined with the estimate of the chance of a fatality given exposure to a volcanic hazard, provides VoLREst's final output: quantification of the hourly risk of a fatality for an individual at various distances from the volcanic vent. At GNS Science, the calculated levels of life-safety risk trigger different levels of managerial approval required to undertake fieldwork. Although an element of risk will always be present when conducting fieldwork on potentially active volcanoes, this is a first step towards providing objective and reproducible guidance for go/no-go decisions for access to undertake volcano monitoring.

Volcano observatories face a challenge: balancing the need to monitor volcanoes to the best of their ability to provide adequate information and advice to crisis management officials and/or the public with the need to keep observatory staff safe whilst collecting time-critical data. Even in this era of increased remote monitoring capabilities, such as real-time data telemetry of ground instrumentation and satellite imagery, there remains a need for fieldwork near or on active volcanoes (note: in this paper an 'active' volcano is one in a state of detectable unrest or erupting). Volcano observatory staff regularly go to volcanoes to install and maintain instruments, collect samples that couldn't otherwise be collected, conduct field surveys (both longitudinal and ad-hoc), and make observations that haven't yet satisfactorily or economically been outsourced to instruments.
During periods of unrest there is generally a lot of uncertainty as to what is happening at a volcano. In these situations, a monitoring team may require more data collection to interpret what is likely to happen. However, an eruption is considerably more likely at a volcano in a state of unrest than at a volcano with no unrest indicators (e.g. Sparks, 2003), thus a volcano in unrest is arguably more dangerous to visit than a 'quiet' volcano. Unfortunately, even if a volcano is in detectable unrest, eruptions may occur with no useful precursory activity indicating an eruption is imminent. Eruptions produce a suite of hazards that can rapidly kill people, including pyroclastic density currents (PDCs), ballistics, lahars, vent formation, and gases (Baxter, 1990; Auker et al., 2013; Baxter et al., 2017; Brown et al., 2017). Tragically, since 1893 CE at least 39 scientists have been killed by at least 16 different volcanic eruptions around the world (Brown et al., 2017). While not all of these scientists were actively monitoring the volcano on behalf of an observatory, these fatalities reflect the risk undertaken by those who visit volcanoes in unrest or eruption. Over the past several decades, and especially the past several years, there has been increased legal scrutiny around fatalities that may have been deemed preventable. In the infamous L'Aquila earthquake case, officials, including scientists, were indicted and initially found guilty of involuntary manslaughter by misleading the public and providing inadequate and inconsistent advice concerning the risk of a large damaging earthquake (Alexander, 2014; see also Bretton et al., 2015). In New Zealand, the 2010 Pike River Mining disaster, which killed 29 miners, led to an overhaul of health and safety legislation with more liability given to company boards and senior executives (Macfie, 2013; see subsection New Zealand context). This increased legal scrutiny, along with a moral imperative to keep staff safe, is likely to result in conservative decision-making when volcano observatory managers are faced with difficult choices between the need to collect time-sensitive data critical for more accurate and precise interpretation of a volcano's activity and the need to keep staff safe. When is it 'safe', or, at least, not unreasonably risky, to go into the field to collect critical data that will assist decision-makers making decisions that could affect many people (e.g., closing a popular hiking track important for tourism, or large-scale evacuation of a population)? To be able to mitigate the risk to staff, there is a need to understand the levels of risk to which they are exposed. We therefore have developed a decision support tool for deciding when and where fieldwork can be undertaken as the activity of a volcano changes: the Volcano Life Risk Estimator (VoLREst). VoLREst outputs a quantitative estimate of the hourly risk of fatality at different distances from a vent area. VoLREst is available in a spreadsheet format (see Supplementary material) and can be tailored to any volcano. Development was prompted by a near-miss when several GNS Science staff members were at Te Maari vent a few minutes before it erupted with no useful precursory activity in November 2012 (Jolly et al., 2014). A health and safety investigation into the near-miss recommended implementing a rational, defendable, and quantitative life-safety risk assessment framework for staff undertaking fieldwork on active volcanoes, the result of which includes VoLREst. 
Refer to Jolly et al. (2014) for more information on the context in which VoLREst was developed along with its early application. In this paper we provide a brief summary of fatal volcanic hazards, approaches to evaluating volcanic and life-safety risk, and the New Zealand context in which VoLREst was developed. We then go on to describe how VoLREst works and how it can be tailored to any volcano with explanations, tips, and suggested considerations. Finally, we summarise how VoLREst is applied at GNS Science, and provide known limitations. Fatal volcanic hazards Volcanic eruptions have killed at least 278,368 people since 1500 CE (Brown et al., 2017). Volcanoes produce a multitude of hazards that directly kill people (Baxter, 1990), and can lead to indirect consequences such as disease and/or starvation which can kill large numbers of people (Auker et al., 2013). Although there hasn't been a systematic study establishing the likelihood of fatality given exposure to a volcanic hazard, historic examples point to the high fatality rate of PDCs (e.g., Zen and Hadikusumo, 1964; Baxter, 1990; Spence et al., 2007; Jenkins et al., 2013; Swanson et al., 2015), lahars (e.g., Zen and Hadikusumo, 1964; Voight, 1990), and ballistics (e.g., Baxter and Gresham, 1997; Yamaoka et al., 2016; Fitzgerald et al., 2017). For our purposes, we are concerned about hazards without any useful warning that can lead to an immediate fatality. A recent review paper by Brown et al. (2017) considered who has been killed by volcanic eruptions since 1500 CE, what hazard killed them (including non-eruptive volcanic environmental hazards), and how far away they were from the vent. If we consider the 16 eruptions that have killed scientists since 1893 CE, ballistics were responsible in 7 eruptions (15 fatalities), PDCs in 3 eruptions (8 fatalities), lava flows in 1 eruption (1 fatality), and multiple hazards (lahars, PDCs) in 2 eruptions (2 fatalities); 4 eruptions (13 fatalities) had no designated lethal hazard. Thus, PDCs and ballistics combined accounted for 23 out of 26 (just under 90%) of the fatalities to scientists that can be attributed to a specific hazard. Moreover, hazards such as lava flows and lahars are to a substantial degree avoidable by informed staff, whereas ballistics and PDCs are much more difficult to avoid. If we consider the entire fatalities database (irrespective of who was killed), within 5 km of the vent, ballistics and PDCs combined account for over half of the number of fatal incidents and half of all fatalities (Brown et al., 2017). PDCs and ballistics are thus considered the main source of risk to staff, and are the two volcanic hazards we focus on for our life-safety risk evaluation. Evaluating volcanic and life-safety risk Risk is generally considered a probabilistic function of hazard, exposure, and consequence (Fournier d'Albe, 1979). In the case of life-safety risk, this includes the probability of the hazard occurring, and the probability of fatality given exposure to the hazard. Event trees are widely used in evaluating volcanic hazard and risk (e.g., Newhall and Hoblitt 2002; Meloy, 2006; Marzocchi and Woo 2009; Sobradelo and Marti, 2010; Selva et al., 2012; Ogburn et al., 2016; Wright et al., 2013). Event trees provide a linear framework for understanding how a situation may unfold, and are useful for exploring comparative probabilities of different possible outcomes. We refer the reader to Newhall and Hoblitt (2002) for an overview of event trees. 
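As a concrete, deliberately simplified illustration of how an event tree chains conditional probabilities along a single branch, consider the short Python sketch below. The branch structure and every probability in it are hypothetical placeholders; they are not taken from Newhall and Hoblitt (2002) or from any particular volcano.

# Generic event-tree branch: multiply conditional probabilities along the path.
# All probabilities below are illustrative placeholders.
branch = [
    ("P(eruption within window | unrest)", 0.05),
    ("P(exposure at the site | eruption)", 0.30),
    ("P(fatality | exposure)", 0.95),
]

p_outcome = 1.0
for description, p in branch:
    p_outcome *= p

print(f"P(fatality at the site over the window) = {p_outcome:.2e}")

Multiplying the conditional probabilities along a branch gives the probability of that branch's outcome; summing over all branches that end in the same outcome gives the total probability of that outcome.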
We used a modified event tree approach here – modified to preclude double counting fatal injuries, as one can only die once, but an eruption can produce multiple concurrent hazards, all of which may be fatal. There are a variety of metrics available to quantify life-safety risk (e.g., Health & Safety Executive (HSE) 2001), including:

- Annual individual fatality risk: likelihood of death of a particular individual in a year.
- Likelihood of someone being killed: likelihood of death due to an event.
- Risk per experience: likelihood of death due to taking part in an event.
- Societal (multiple-fatality) risk: likelihood of a number of deaths due to an event (e.g., chance of 50 or more deaths if an eruption occurs).

Annual individual fatality risk (or fatal injury risk per full-time equivalent employee per year) is the most widely used metric for employee safety (e.g. HSE, 2001; WorkSafe New Zealand, 2017a). Figure 1 shows a comparison between annual individual fatality risk in different industries in New Zealand. Forestry and mining are at the upper end of the range, with annual risk of order 10^-3 per year (we note that the mining statistics may have been distorted by the Pike River disaster in 2010). Several other industries involving substantial proportions of time working with heavy machinery in outdoor environments (e.g., agriculture, construction, utilities) experience annual individual risk of the order of 10^-4 per year. Industries where the majority of staff are office-based typically experience individual risk levels of the order of 10^-5 per year or well below.

Figure 1 caption: New Zealand workplace fatality rates per employee per year. The values come from all fatalities in the workplace, including non-workers, divided by the number of workers in the sector; as such this overstates the risk to the workers, but is still useful for comparative purposes. Fatality data come from WorkSafe New Zealand (2017b), and the data on the number of workers in each sector are from the New Zealand Ministry of Business, Innovation, and Employment (2017). Volcanologists are grouped under 'Professional, scientific & technical'. After Taig and McSaveney (2014).

There is some guidance and precedent on what is an 'acceptable' annual fatality risk (e.g., HSE, 2001; Massey et al. 2014a, b), with 10^-4 widely adopted as an upper threshold of acceptable risk in work environments. Despite this, there is much debate about whether it is tolerable to accept activities with > 10^-4 annual fatality risk for a short period of time, with views and practices ranging from 'no particular limit on instantaneous/very short term risk' to 'don't accept any risk rate greater than the annual fatality rate divided by the hours worked in a year'. The latter view, though initially sounding plausible, is inherently illogical, as for any job the average risk through the year will involve periods of lower and higher risk. In addition, it takes no account of situations where actions involving particular risk to an employee may provide substantial benefits (such as saving lives) for others. We note that individual risk per trip (experience) is a metric used operationally by the New Zealand Department of Conservation (the government agency charged with conserving New Zealand's natural and historic heritage, which includes managing national parks and public access to these areas).
The Department of Conservation assigns different risk thresholds to different visitor groups (see Jolly et al., 2014): a person taking a short walk on a popular trail to a waterfall is assumed to have a lower level of acceptable risk than a person partaking in a mountaineering expedition in the middle of winter. Risk per experience is useful for considering one-off experiences, but does not account for accumulated exposure to risk. What is acceptable in terms of the risk for a tourist visiting a volcano might be quite different from that for their tour guide who visits the volcano every day for their livelihood. There is widespread recognition that what is an acceptable level of risk is strongly context dependent, ranging from virtually zero (for wanton acts which create risk for others with no benefit for society) through to a high level of acceptable risk with a very high probability of death (e.g., for a terminally ill patient offered a potentially life-saving treatment with a high risk of a fatal failure). Among all the different contexts in which risk acceptability has been discussed, that of the fatality risk to employees in the workplace is particularly well researched and established in public policy making (e.g., HSE, 2001; WorkSafe New Zealand 2017a). The first quantitative life-safety risk calculation in the volcanic context we are aware of is Newhall (1982), undertaken for workers entering the blast zone of the Mount St Helens eruption in the months and years after the 1980 eruption. In this calculation, the area on and around Mount St Helens was divided into a series of zones. Newhall (1982) first considered the probability of a hazard (e.g., PDC) occurring on a given day, and then the probability that this hazard would reach a given zone. A separate life-safety calculation was then done for residents without means of radio communication, and workers who spend a certain number of hours per year in the zone (8 h per day, 220 days per year) and have radio communication with the USGS and a way to evacuate. We note this assumes that mitigative actions reduce life-safety risk, which may not be the case for those working in the immediate vicinity of an active vent. The zone map developed by Newhall (1982) accounted for topographic influences, and changed over time as the activity evolved at the volcano (Newhall, 1984). Forestry workers successfully used the analysis of Newhall (1982) to argue that they should receive double the pay when entering the blast zone, as they were doubling the amount of risk they were exposed to (Newhall, personal communication 2016). We comment that regardless of the level of risk an employee is exposed to, risks to employees must always be within tolerable limits and employers and employees need to work to lower them further. It is not acceptable to pay people to induce them to accept higher risks. New Zealand context New Zealand has over a dozen volcanoes known to have erupted in the Holocene (Global Volcanism Program, 2013). New Zealand volcanoes feature a diversity of volcano typesFootnote 1 (calderas, complex volcanoes, lava domes, pyroclastic cones, shield volcanoes, stratovolcanoes, submarine volcanoes, volcanic fields), eruption sizes (Volcano Explosivity Index (VEI) 0 through 8), eruption styles, and volcanic hazards (Global Volcanism Program, 2013). New Zealand volcanoes are monitored by GNS Science through the GeoNet project (Miller and Jolly, 2014); GNS Science serves as New Zealand's volcano observatory. 
The past two decades (time of writing: July 2018) have been a relatively quiet period for New Zealand volcanoes – apart from one VEI 3 eruption at Whakaari/White Island (Footnote 1), all eruptions have been VEI 2 or smaller. Two volcanoes (Ruapehu and Whakaari) have been in continuous unrest over this period. Although all New Zealand eruptions in the past 20 years (from Raoul, Ruapehu, Tongariro, and Whakaari) were preceded by detectable (albeit often minor) unrest, the majority of these eruptions could be considered "blue sky" or "unheralded" eruptions due to the lack of useful precursory activity for short-term eruption forecasting. Over this time there have also been periods of heightened unrest at these volcanoes with no resulting eruption, and periods of unrest at volcanoes with no eruptions. VoLREst has been developed in this context of unheralded eruptions with many instances of unrest leading to no eruptive activity.

In March 2006, an unheralded eruption at Raoul Island (VEI 0) tragically killed a Department of Conservation staff member (Christenson et al., 2007). In September 2007, an unheralded eruption at Ruapehu (VEI 1) cost a climber his leg from the knee down (Kilgour et al., 2010). In November 2012, an unheralded eruption at the Upper Te Maari vent of the Tongariro volcano (VEI 2) resulted in a near miss for four scientists, including three GNS Science staff members (Jolly et al., 2014). Fortuitous eruption timing (e.g., occurring during the night and/or the middle of winter) is a major reason why no other New Zealand eruption of the past 20 years resulted in injuries or fatalities.

Following the fatality at Raoul Island in 2006, GNS Science undertook a review of risks associated with volcano monitoring. In 2006, an internal qualitative evaluation considered non-eruptive environmental hazards (e.g., gas poisoning, hot unstable ground) and eruptive hazards associated with volcano monitoring for Raoul Island, Ruapehu, Tongariro complex, and Whakaari volcanoes, and provided recommendations for general risk reduction of monitoring operations. In 2007, early efforts towards risk quantification used simple generalisations to evaluate the hourly and corresponding annual risk of common monitoring tasks for Ruapehu volcano – the volcano with the best historical record at the time; these results were shared with the Department of Conservation. At this stage, the practice of evaluating hourly risk to inform decisions was adopted, in part because it is easier to 'size' monitoring tasks (e.g., sample collection, installing and maintaining instruments) in units of hours rather than years. The risk of fatality associated with volcano monitoring at Whakaari was also evaluated, this time with a more formal Bayesian event tree framework, the results of which were presented at an internal staff workshop. In early 2008, a risk evaluation for Raoul Island based on a similar framework was shared with the Department of Conservation. Importantly, these risk characterisation efforts relied solely on the historic record and did not consider the actual likelihood of an eruption at a particular time. Furthermore, prior to the development of VoLREst following the near miss during the November 2012 Te Maari eruption, there was no standard robust quantitative protocol for evaluating the hourly risk of fatality and applying the results to guide fieldwork decisions.
The New Zealand Health and Safety at Work Act (2015) legislates the requirements expected of employers, employees, contractors, and associates to ensure workplace health and safety. An important component of the act requires senior business leaders to understand and manage their company's health and safety risks to be as low as is reasonably practicable. At GNS Science, this includes managing risks associated with undertaking fieldwork for monitoring purposes on volcanoes. This must be balanced against risk to the achievement of the organisation's purpose, which includes a core objective to "Increase New Zealand's resilience to natural hazards and reduce risk from earthquakes, volcanoes, landslides and tsunamis" (GNS Science, 2017). VoLREst is a decision-support tool we have developed to calculate the hourly risk of fatality at a given distance from an erupting vent (see Discussion subsection Application). Broadly speaking, we use a Bayesian Event Tree approach (e.g., Newhall and Hoblitt, 2002), although it is slightly modified to preclude double counting fatalities. See Fig. 2 for an overview of the methodology. Overview of VoLREst methodology VoLREst evaluates the risk of fatality from a small, medium or large eruption (customised for each volcano), with no double-counting for either eruptions or hazard. We first determine the hourly probability of a small, medium, and large eruption assuming a binomial distribution (see Part A). Next, based on the vent location and pre-identified representative sites (see Part B), and the hazards of concern at each site (see Part C), we calculate the chance of surviving all the hazards at a given site, and from there, calculate the hourly risk of fatality at each site (see Part D). From there, we interpolate and extrapolate to determine the risk of fatality at any distance from the vent area (see Part E). At GNS Science, specific risk thresholds (i.e., 10− 3, 10− 4, or 10− 5 hourly risk of fatality) trigger different levels of managerial sign-off required for approval to undertake fieldwork at the volcano (see Discussion subsection Application). Parts A – E below should be read in parallel with Figs. 3, 4, 5, 6 and 7 and Tables 1 and 2. Figures 3, 4, 5 and 7 show different parts of VoLREst, populated with the values we used for the life-safety calculation undertaken for Whakaari in response to the April 2016 eruption. Table 1 defines all the variables used, while Table 2 lists the equations used with an explanation if required. We detail the procedure in a series of steps, which are labelled on relevant parts of Figs. 3, 4, 5 and 7. Cells that are shaded in dark grey need to be tailored for each volcano, and cells shaded in yellow need to be updated for each risk calculation. Discussion subsection Adaptability provides explanations and comments on the methodology along with suggestions on what to consider when implementing VoLREst. VoLREst close-up: inputting time window of interest and eruption likelihood. In yellow are cells the user must update at every use. The values populated in the yellow cells in this figure come from the application of VoLREst for Whakaari in response to the 28 April 2016 eruption. The labelled numbers correspond to the steps detailed in Part A of the text VoLREst close-up: hazard and exposure calculation. In grey are cells the user must tailor for each volcano. 
The values populated in the grey cells in this figure come from the VoLREst tailored for Whakaari from 2014 to the time of publication, while the eruption probabilities come from the application of VoLREst for Whakaari in response to the 28 April 2016 eruption. The labelled numbers correspond to the steps detailed in Parts B - D of the text.

Fig. 5 caption: VoLREst close-up: ballistic exposure calculation. In grey are cells the user must tailor for each volcano. The values populated in the grey cells in this figure come from the VoLREst tailored for Whakaari from 2014 to the time of publication, while the eruption probabilities come from the application of VoLREst for Whakaari in response to the 28 April 2016 eruption. The labelled numbers correspond to the steps detailed in Part D of the text.

Fig. 6 caption: Explanation of ballistic exposure calculation. a. Cartoon and equations for calculating likelihood of an impact from one ballistic falling from directly above (travelling perpendicular to the ground). b. Cartoon and equations for calculating likelihood of an impact from a ballistic coming from a side (travelling parallel to the ground). Refer to Part D and Fig. 5.

Fig. 7 caption: VoLREst close-up: hourly risk vs distance plot. The values come from the application of VoLREst for Whakaari in response to the 28 April 2016 eruption.

Table 1: Definition of variables used in the equations used in VoLREst (see Table 2). Table 2: Equations used in VoLREst.

A clean version of VoLREst is available as a spreadsheet in Additional file 1 (VoLREst risk calculation spreadsheet).

Part A: Eruption likelihood

Refer to Fig. 3 to see how Steps 1–7 are implemented.

1. Decide the time window of interest, using units of days or weeks.
2. Evaluate the probability of at least one eruption of a specified size or greater within the specified time window at the volcano of interest (see Discussion subsection Determining the risk calculation time window and eruption likelihood).
3. Calculate the probability of no eruption of a specified size or greater within the specified time window at the volcano of interest (Eq. 1).
4. Calculate how many hours, h, are in the specified time window.
5. Calculate the hourly probability of no eruption over the course of the specified time window at the volcano of interest (Eq. 2).
6. Calculate the hourly probability of an eruption over the course of the specified time window at the volcano of interest (Eq. 3).
7. Determine the hourly probability of a large, moderate, and small eruption (Eqs. 4–6; see Table 3).

Table 3: Description of eruption sizes in VoLREst. What constitutes a large, moderate or small eruption is different at every volcano.

Part B: Identify areas of interest

8. Identify the vent area.
9. Select at least three sites at which to calculate the likelihood of fatality.

Part C: Identify hazards of concern

10. VoLREst considers ballistics, pyroclastic density currents, and near-vent hazards (e.g., water spouts, landslides, shock/pressure waves, dense slugs). VoLREst could be modified to consider other hazards.

Part D: For each site, calculate the hourly risk of fatality

In Steps 11–24, we display a generic calculation; this calculation must be done for small, moderate, and large eruptions at each site. Refer to Figs. 4 and 5 to see how the calculation is implemented.

Near-vent hazards and PDCs (Fig. 4)

11. Determine the probability of exposure to the specified hazard given an eruption.
12. Determine the probability of a fatality given exposure to the specified hazard.
13. Calculate the probability of a fatality due to the specified hazard given an eruption (Eq. 7).
14. Calculate the hourly probability of a fatality due to the specified hazard (Eq. 8).

Ballistics (Figs. 4, 5 and 6)

15. Select the length of a reference area square. For ease, this is the same for all sites and eruption size combinations.
16. Calculate the area of the reference area.
17. Determine the representative ballistic diameter. This can be different for different size eruptions.
18. Determine the number of ballistics within a representative reference area. This is the same number as will cross the reference length.
19. Determine a representative diameter for a person.
20. Determine whether to select a ballistic direction from above, the side, or a geometric mean:
- Direction 'above': a ballistic only falls from directly above, and is deemed fatal if it touches the person (Eq. 9, Fig. 6a).
- Direction 'side': a ballistic only comes from the side, and is deemed fatal if it crosses the reference line in the same place the person is (Eq. 10, Fig. 6b).
- Geometric mean: the geometric mean of the probabilities of impact from the above and side directions.
21. Calculate the probability of an individual being hit by a single ballistic (Eqs. 9, 10).
22. Calculate the probability of an individual not being hit by the number of ballistics determined in Step 18 (Eq. 11).
23. Calculate the probability of a fatality from ballistics given an eruption (Eq. 12).
24. Calculate the hourly probability of fatality from ballistics (Eq. 8).

Combining risks

- For each eruption size, calculate the hourly probability of surviving all hazards (Eq. 13).
- For each eruption size, calculate the hourly probability of fatality (Eq. 14).
- For each site, calculate the hourly risk of fatality (Eq. 15).

Part E: Evaluate risk at any distance from the vent

Refer to Fig. 7 to see how the calculation is implemented.

- Plot the hourly risk of fatality vs distance for each site on a log-linear plot.
- Calculate the best-fit line.
- Use the equation of the line to determine the distance corresponding to a given hourly risk of fatality level.

(An illustrative numerical sketch of Steps 15–24, the hazard combination of Eqs. 13–15, and the Part E fit is given further below.)

Adaptability

While we have described the life-safety risk calculation approach using New Zealand volcanoes as examples, the method can be applied to any volcano. However, VoLREst must be tailored to the volcano in question; indeed, at GNS Science we have separate VoLREst spreadsheets for each volcano that has been in unrest or had an eruption since 2012. To adapt VoLREst for a particular volcano, the user must follow these steps:

1. Identify the vent area of interest. This could be a point source or a polygon, e.g., the extent of a crater lake.
2. Select at least three sites at different distances from the vent where a chance of fatality given an eruption is possible in a large eruption, although preferably for all three eruption sizes. These must be scaled for the volcano. Populate these distances in the spreadsheet.
3. Populate near-vent hazard and consequence cells: [For each site and eruption size combination] Given an eruption, determine the probability of exposure to near-vent hazards; [For each site and eruption size combination] Given exposure, determine the probability of fatality from near-vent processes.
4. Populate ballistic hazard and exposure cells: [Same for all sites and eruption sizes] Choose the length of a square reference area (e.g., 30 m); [Same for all sites and eruption sizes] Choose the 'diameter' of a person; [For each site and eruption size combination] Determine the ballistic diameter; [For each site and eruption size combination] Determine the number of ballistics in the reference area.
5. Populate PDC hazard and consequence cells.

Comments on each of these steps:

Step 1: The vent area is treated as the eruption source, and can be either a point source or a polygon. At volcanoes with crater lakes we have used the entire crater lake extent as the vent area, despite there only being a few likely eruption sources within this area. The vent designation is one weakness of this approach, as if the next eruption is outside the designated area VoLREst is not particularly helpful.

Step 2: We select at least three sites to avoid interpolating and extrapolating between just two points. The sites selected must be scaled for the volcano. For example, in the GNS Science VoLREst spreadsheets, for Whakaari the distances are 0, 100, 350, and 750 m, while at Ruapehu the distances are 0, 0.5, 1.3, and 2 km. We have found it helpful if these distances are known landmarks near the volcano – so for example at Whakaari, the distances correspond to a known observation point (100 m), a key fumarole (350 m), and the ruins of a factory on the island (750 m). The relatively short distances for Whakaari reflect the fact that the island is small and there isn't much more land beyond "the factory".

Step 3: Near-vent hazards are meant to be localised to the vent area. It is helpful to specify what they are for the given volcano. For example, for New Zealand volcanoes these include water spouts (for volcanoes with crater lakes), landslides, shock/pressure waves, and dense slugs. In VoLREst the closest two sites have near-vent hazards. This may not be appropriate at all volcanoes; if near-vent hazards are not a concern for the second site, set the probability of exposure to 0 for all eruption sizes. In the GNS Science VoLREst spreadsheets, we have one volcano (Whakaari) where we have an 'extra' site at the vent (which is accessible, unlike some of our other volcanoes), and there for all eruption sizes we estimate that the chance of fatality equals the chance of an eruption. At this volcano, we consider near-vent hazards for the second distance.

Step 4: At GNS Science, our standard ballistic reference area is 30 m × 30 m: we find this a large enough area to mostly have whole numbers for the estimated number of ballistics reaching the area, yet small enough to be tangible during discussions. We assume a person has a 'diameter' of 1 m. We have estimated ballistic hazard and exposure values using expert judgement; this does not preclude the use of physical models to populate these values. In calculating the probability of a hit from above (Fig. 6a), VoLREst does not consider the impact crater area – which can be considerably larger than the source ballistic (e.g., Maeno et al., 2013; Breard et al., 2014; Fitzgerald et al., 2014) – or debris or shrapnel resulting from the ballistic impact, which may cause fatal injuries (e.g., Fitzgerald et al., 2014; Williams et al., 2017). Thus, the 'fatal area' may be underestimated by VoLREst. VoLREst also does not consider impact angle – we highlight this as an area that could be improved in the future. Directionality of ballistics (e.g., Breard et al., 2014), where the ballistic hazard may not be radially symmetric around the vent, can be addressed through careful selection of the parameter estimating the number of ballistics in the reference area, depending on whether (for example) an average or a worst-case risk estimate is required.
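To make the per-site arithmetic of Steps 15–24 and the hazard combination (Eqs. 13–15) more concrete, the Python sketch below works through one plausible reading of the calculation, followed by the Part E log-linear fit. The hit-probability geometry, the way hazards and eruption sizes are combined, and every numerical value are assumptions made for illustration only; the authoritative formulations are the equations in the VoLREst spreadsheet (Table 2 and Additional file 1).

# Illustrative sketch only: the exact forms of Eqs. 7-15 live in the VoLREst
# spreadsheet (Table 2) and may differ from the simplified readings used here.
# All numbers below are hypothetical placeholders, not GNS Science estimates.
import numpy as np

def p_hit_single(ref_length, d_ballistic, d_person, mode="geometric"):
    """Chance that one ballistic landing in the reference square hits a person.
    'above' assumes a lethal footprint that is a circle of diameter
    (d_ballistic + d_person) inside the reference square; 'side' assumes the
    ballistic crosses the reference length and hits if it overlaps the person's
    width. Both geometries are assumptions, not Eqs. 9-10 verbatim."""
    above = np.pi * ((d_ballistic + d_person) / 2.0) ** 2 / ref_length ** 2
    side = (d_ballistic + d_person) / ref_length
    return {"above": above, "side": side, "geometric": np.sqrt(above * side)}[mode]

def p_fatal_ballistics(n_ballistics, ref_length, d_ballistic, d_person):
    """P(ballistic fatality | eruption of a given size): at least one hit (cf. Eqs. 11-12)."""
    p1 = p_hit_single(ref_length, d_ballistic, d_person)
    return 1.0 - (1.0 - p1) ** n_ballistics

# Hourly eruption probabilities by size (would come from Part A, Eqs. 4-6).
p_eruption_hr = {"small": 5e-4, "moderate": 1e-4, "large": 2e-5}

# Hypothetical hazard inputs for a site 350 m from the vent:
# (P(exposure), P(death | exposure)) for PDC and near-vent, plus ballistic counts/sizes.
hazards_350m = {
    "small":    dict(pdc=(0.0, 0.95), near_vent=(0.0, 1.0), n_ballistics=0,  d_ballistic=0.2),
    "moderate": dict(pdc=(0.3, 0.95), near_vent=(0.0, 1.0), n_ballistics=5,  d_ballistic=0.3),
    "large":    dict(pdc=(0.9, 1.00), near_vent=(0.1, 1.0), n_ballistics=30, d_ballistic=0.4),
}

def site_hourly_risk(hazards, p_erupt_hr, ref_length=30.0, d_person=1.0):
    """One reading of Eqs. 13-15: within each eruption size combine hazards via
    survival probabilities (a person can only die once), then combine sizes."""
    hourly_survival = 1.0
    for size, h in hazards.items():
        survive = 1.0
        for p_exposure, p_death in (h["pdc"], h["near_vent"]):
            survive *= 1.0 - p_exposure * p_death
        survive *= 1.0 - p_fatal_ballistics(h["n_ballistics"], ref_length,
                                            h["d_ballistic"], d_person)
        p_fatal_given_eruption = 1.0 - survive
        hourly_survival *= 1.0 - p_erupt_hr[size] * p_fatal_given_eruption
    return 1.0 - hourly_survival

risk_350m = site_hourly_risk(hazards_350m, p_eruption_hr)
print(f"hourly risk of fatality at 350 m: {risk_350m:.2e}")

# Part E: fit log10(hourly risk) vs distance, then invert for a risk threshold.
distances = np.array([0.0, 100.0, 350.0, 750.0])        # m (Whakaari-style site spacing)
risks = np.array([6e-4, 2e-4, risk_350m, 5e-6])         # other site risks are placeholders
slope, intercept = np.polyfit(distances, np.log10(risks), 1)
dist_at_1e5 = (np.log10(1e-5) - intercept) / slope
print(f"distance at which hourly risk falls to 1e-5: {dist_at_1e5:.0f} m")

Because the combination works entirely with survival probabilities, an individual exposed to several concurrent hazards is still counted as at most one fatality, consistent with the modified event tree approach described earlier.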
We note that if the user expands the range of ballistics sizes, the user will need to add additional blocks of rows (a single block is shown in Fig. 5) and update the cell-referencing in the VoLREst spreadsheet. Step 5: At GNS Science, we have estimated PDC hazard and consequence values using expert judgement; this does not preclude use of physical models to populate these values. Also, similar to ballistics, PDCs are not necessarily radially symmetric around the vent. For small eruptions, PDCs tend to follow topographic lows, and may have a strong directional component. Directionality can be addressed through careful selection of the parameter P (given eruption, exposure to surge). With regards to parameter P (given exposure, probability of fatality), at sites (particularly more distant ones) where PDC exposure would likely involve a distal portion of the flow, it may be appropriate to account for a slightly lower chance of fatality given exposure (e.g., Baxter et al., 2017). At GNS Science at some distal sites, we adopt a 95% probability of death given exposure to PDC. Finally, VoLREst can also be adapted by adjusting the small/moderate/large eruption frequency. At GNS Science for Ruapehu volcano we have modified VoLREst to describe Scale 3, 4, and 5 eruptions (rather than small/moderate/large eruptions) following the designation developed in Scott (2013). For the Ruapehu adaptation of VoLREst, the distribution of Scale 3, 4, and 5 eruptions in Part A Step 7 is based on the eruptive record of Scott (2013). Determining the risk calculation time window and eruption likelihood Once VoLREst has been tailored for a specific volcano, there are two critical inputs to determine: the time window of the calculation, and the likelihood of an eruption within the time window. We briefly describe how these are determined at GNS Science; these are not meant to be prescriptive but rather illustrative. At GNS Science, the default time window of a risk calculation is tied to the New Zealand Volcano Alert Level (VAL; Potter et al., 2014), although the default time window is often adjusted depending on volcanic activity. In short, no risk calculations are undertaken for volcanoes at VAL 0 (no volcanic unrest), the default time window for VAL 1 (minor volcanic unrest) is 13 weeks (approximately 3 months), the default time window for VAL 2 (moderate to heightened volcanic unrest) is 4 weeks, and the default time window for VAL 3 or greater (volcanic eruption) is 1 week. If there is a change in VAL a new risk calculation is undertaken, and any member of the monitoring team can call for a new risk calculation at any time. As an illustrative example, Table 4 provides the risk calculation schedule for Whakaari for 2016 along with the reason behind each risk calculation. Table 4 Timing and duration of Whakaari VoLREst risk calculations in 2016 It is extremely difficult to accurately determine the likelihood of an eruption within a given time window. There are number of ways this could be evaluated, for example via expert judgement, probabilistic and/or physical models. At GNS Science this value is currently determined via an unweighted expert elicitation process. We describe the procedure for illustrative purposes below, and acknowledge there are many other ways the value could be determined. 
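As a minimal illustration of how these two inputs propagate through Part A (Steps 1–7, Eqs. 1–6), the sketch below converts a probability of at least one eruption within the chosen window into an hourly eruption probability and apportions it between eruption sizes. The window length, window probability, and size split used here are hypothetical; the actual small/moderate/large apportionment is volcano-specific (Table 3) and is not reproduced here.

# Minimal sketch of Part A (Eqs. 1-6). All input values are hypothetical.
weeks = 4                                   # e.g. the default VAL 2 window
hours_in_window = weeks * 7 * 24            # Step 4: h

p_eruption_window = 0.05                    # Step 2: elicited P(>=1 eruption in the window)
p_no_eruption_window = 1.0 - p_eruption_window             # Eq. 1

# Eqs. 2-3: constant hourly probability over the window (binomial assumption)
p_no_eruption_hourly = p_no_eruption_window ** (1.0 / hours_in_window)
p_eruption_hourly = 1.0 - p_no_eruption_hourly

# Eqs. 4-6: apportion the hourly probability between eruption sizes.
# Placeholder split; the real apportionment comes from the volcano's Table 3.
size_split = {"small": 0.80, "moderate": 0.15, "large": 0.05}
p_hourly_by_size = {size: frac * p_eruption_hourly for size, frac in size_split.items()}

print(f"hourly P(any eruption): {p_eruption_hourly:.2e}")
for size, p in p_hourly_by_size.items():
    print(f"hourly P({size} eruption): {p:.2e}")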
At GNS Science, when a new risk calculation is called, members of the volcano monitoring team are asked over email to provide their best guess, minimum, and maximum likelihood estimates for an eruption impacting a specified area over the time window of interest. The reason for the elicitation is stated in this email (e.g., previous one expired, change in VAL, called by a team member), along with the deadline for providing values. Participants are also invited to provide their rationale. As an illustrative example, the wording of the question asked for elicitation associated with the 18 January 2016 Whakaari risk calculation was (no italics in original email; we note that an alternate name for Whakaari is White Island, which was used in this email): We are due for White Island elicitation. Please get me your values by 4:30 pm today, or let me know prior to then if you need longer. What is the probability of an eruption that would impact beyond the rim of the 1976–2000 crater complex within the next THIRTEEN WEEKS (~ 3 months; now - > 18 April 2016)? Please provide your best guess, min, and max. You are encouraged to provide your rationale/thought process/reasoning/data used. The moderator typically reminds participants where to find monitoring data and other relevant information for the given volcano – for example a record of past activity at the volcano, or recent presentations or publications that may enhance the participant's conceptual understanding of the system. At GNS Science, the identity of elicitation participants for a specific elicitation is known only by the risk calculation moderator. A quorum is obtained if there are at least 8 participants and at least one each from the fields of geochemistry, geophysics, and geology. We note this represents over half of the GNS Science volcano monitoring team. The number of participants is based on the observation (not statistically tested) of when a single expert's contribution doesn't greatly change the outcome of the exercise, and the second criterion is to ensure representation from all the disciplines. At present, the eruption likelihood input into the risk calculation is the 84th percentile of the distribution of the min, best guess, and max values, with the best guess counted twice. The 84th percentile is used as it is one standard deviation from the mean, so this makes the risk calculation more conservative. Given that the data do not often follow a Gaussian distribution, and different team members have different interpretations as to what the minimum and maximum value actually means, there are problems with this methodology but at present it is our approach. The best guess is counted twice to increase the contribution of that expert assessment. Translating between hourly and annual risk While VoLREst is set up to evaluate hourly risk, it may be advantageous to evaluate staff safety on another time frame. We thus provide two translation tables, Tables 5 and 6. Table 5 Equivalent risk of fatality given an hourly risk, assuming no other risks Table 6 Given annual risk of fatality, equivalent duration at specified hour risk level Table 5 begins with the hourly risk of fatality, and provides the corresponding risk of fatality over different time frames, assuming there are no other sources of fatal risk. 
We use the binomial distribution, and calculate 1 minus the chance of surviving over the time period of interest given the hourly risk of fatality:

$$ \text{Equivalent risk} = 1 - \left(1 - \text{hourly risk of fatality}\right)^{N} \quad (16) $$

where N is the number of hours in the time period of interest. Table 5 reveals that if an individual is exposed to an hourly risk of 10^-3 for every hour for an entire year, their annual risk of fatality is 99.8% (almost certain death), whereas if the same individual is exposed to an hourly risk of 10^-5 for an entire work year (assuming a 48-week work year at 40 h per week), their risk of fatality is just under 2%.

Table 6 begins with an annual risk of fatality, and reveals how many hours of working at various hourly risk levels this would correspond to. This is likewise calculated using the same framework as in Eq. 16, but involves solving for the exponent:

$$ n = \frac{\ln\left(1 - \text{annual risk of fatality}\right)}{\ln\left(1 - \text{hourly risk of fatality}\right)} \quad (17) $$

where n is the number of hours at the hourly risk level. Table 6 reveals that an annual risk of fatality of 10^-5 is 'achieved' in 36 s if an individual is exposed to an hourly risk of fatality of 10^-3, while an annual risk of fatality of 10^-3 is 'achieved' in 100 h if an individual is exposed to an hourly risk of fatality of 10^-5.

Application

VoLREst is designed as a decision-support tool to facilitate discussions about undertaking fieldwork on active volcanoes. While it is not meant to be prescriptive, at GNS Science it does heavily influence decisions, as a decision to send staff into the field when the calculated risk is high will be hard to defend should an incident occur. Table 7 shows how at GNS Science VoLREst results are used to support go/no-go fieldwork decisions on active volcanoes. Since VoLREst implementation, challenging decisions have been made: certain data have not been collected due to the calculated level of risk, and/or staff have had to limit their time in certain areas, which led to less data being collected. In some instances, some staff would argue this has led to key perishable or time-sensitive data not being collected, limiting staff ability to interpret volcanic activity.

Table 7: Application of VoLREst results at GNS Science.

What is GNS Science's rationale behind Table 7? The driving principles are as follows:

1. International guidance suggests an annual risk of fatality upper threshold around 10^-4, and GNS Science would ideally like to work within that.
2. GNS Science recognises the high public value of monitoring, and in light of this is prepared, in exceptional cases and with the explicit consent of the staff involved, to exceed norms in other industries.
3. GNS Science staff must always be aware of the risks involved in their work and must never be pressured by management, colleagues, stakeholders, the public, or others into situations in which they're uncomfortable with the risk.
Finally, the higher the risk above typical norms for other industries, the higher GNS Science escalates the go/no-go decision. The following rationale went into setting the procedures detailed in Table 7: Over the course of a year staff engaged in intensive field work may spend up to 2 full work months in the field. Only a proportion of this will be spent on higher risk volcanoes, of which only a very small proportion will be spent in very high-risk areas. An estimate of 15 min per day, averaged over 40 working days per year, gives us a reasonable working estimate of 10 h per year spent at the highest levels of risk for active field staff generally. Staff involved with a specific volcano at a time of known high risk might spend two weeks per year on field work at the relevant site. With careful planning staff exposure to the highest levels of risk should in most cases be containable within 10 h per year, but it is possible that a single trip could involve 10 or more hours at high-risk levels. For hourly risk of fatality up to 10− 5 (standard procedures): GNS Science and staff recognise that field volcanology is hazardous and GNS Science is prepared for staff, with their express consent, to expose themselves to risk up to this level subject to internal standard procedures for risk minimisation. Staff are unlikely to accumulate what could be considered as a year's equivalent of acceptable risk (10− 4) on a single trip as a result of exposure at these levels. For hourly risk of fatality between 10− 4 and 10− 5 (Head of Volcanology Department authorization): At these levels staff could be collecting up to or above what would be regarded a year's acceptable risk in other industries on a single trip. There needs to be a significant benefit (beyond "I'm really interested" from the staff member in question) for GNS Science to accept this. There is often considerable discussion prior to authorisation to prioritise what is most critical and develop a detailed strategy to minimise time at this level of risk. GNS Science is not prepared to leave the decision solely to the staff members concerned and consider that the Head of Volcanology is the appropriate person to make a judgment about the value of the information at stake in relation to the risk staff would be accepting. For hourly risk of fatality between 10− 3 and 10− 4 (Head of Volcanology Department and Natural Hazards Divisional Director authorization): These are extremely high levels of exposure, with concerned staff likely to collect several times what would be acceptable per year in other industries on a single trip. Agreement to such work would be given only in exceptional circumstances. Examples might be retrieving data from a damaged station that would provide critical information on an eruption or collecting a single fumarole sample or efficiently collecting a rock/ash sample to inform on the presence of juvenile material. In these circumstances GNS Science escalates the go/no-go decision up beyond the volcanology department (where there is a long history of staff with a strong public service ethic prepared to subject themselves to significant risk to collect valuable information) and consider that the Natural Hazards Divisional Director is well placed to provide a judgment informed not only by the risk assessment and the volcanology department, but also by the balance between benefit and risk which is taken in other natural hazard areas. 
For hourly risk of fatality greater than 10− 3 (no access): While at the time of writing no such risk levels have been estimated using VoLREst, GNS Science considers it appropriate to draw an upper "too risky" line at some point. Management have discussed this with GNS Science's board of directors and collectively concluded that there are no circumstances in which GNS Science would be comfortable for staff (or contractors) to be exposing themselves to this level of risk. GNS Science would be prepared to reconsider this position in the event of a national or global crisis to which our staff could make a unique and vitally important contribution through activities involving exceptionally high personal risk exposure. Figure 8 shows a sample map produced with VoLREst results (using the application illustrated in Figs. 3, 4, 5 and 7), used internally at GNS Science. Results to date are only used internally and at present are not used to support Civil Defence and Emergency Management, Department of Conservation, or concessionaires (e.g., tour guides, ski field operators) evacuation or access decisions. This has led to situations where the public has access to a volcanic area but GNS Science staff are not permitted to go; when this has happened GNS Science publicly stated that staff are not visiting the area (e.g., GeoNet, 2016). However, Department of Conservation staff and university researchers working with GNS Science staff have at times followed guidance provided by the assessments when making decisions about their own staff safety. Representative decision-support tool map produced with VoLREst results. The map is using the results of the VoLREst application Whakaari for 28 April 2016, done immediately after an eruption. Also shown as black dots are the locations of the representative sites for Whakaari, and white dots indicate select monitoring fumarole and lake sampling sites Retrospective application The motivation for VoLREst development was the near-miss during the November 2012 Te Maari eruption (see Introduction subsection New Zealand context and Jolly et al., 2014). Would VoLREst have prompted managerial sign-off for fieldwork near the vent at the time of the eruption? At the time of the November 2012 eruption, ballistics were the only hazard concern for Te Maari (Jolly et al., 2014). Later investigations revealed there were PDCs in both the August and November 2012 eruptions (Lube et al., 2014), but in November 2012, PDCs were an under-appreciated hazard at Te Maari (Jolly et al., 2014). Based on the available historical record at the time (the historic record was later improved by Scott and Potter, 2014), the conceptual worst-case scenario was that the volcano was entering a decade of heightened activity similar to that between 1886 and 1897 CE. The corresponding worse-case eruption rate was estimated to be 0.27 eruption onsets per year (Jolly et al., 2014). The above is based on the historic record; what did the experts think? In October 2012, a group of experts met to evaluate the probability of a similar eruption as in August 2012 and that of a larger eruption for the following 3-month period (comment: the November 2012 eruption was smaller than the August 2012 eruption), along with the size and concentration of ballistics at a specified distance from the vent for both eruption sizes. 
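One plausible way to turn the worst-case historic rate of 0.27 eruption onsets per year into the eruption-probability input that VoLREst expects is sketched below, assuming onsets follow a Poisson process. The conversion actually used for the Table 8 inputs is as described above and in Jolly et al. (2014); this sketch is illustrative only.

# Illustrative conversion of a historic onset rate into VoLREst inputs,
# assuming eruption onsets follow a Poisson process (an assumption of this sketch).
import math

rate_per_year = 0.27                   # worst-case onset rate (Jolly et al., 2014)
window_weeks = 13                      # ~3-month window
window_years = window_weeks / 52.0
hours_in_window = window_weeks * 7 * 24

p_window = 1.0 - math.exp(-rate_per_year * window_years)       # P(>=1 onset in window)
p_hourly = 1.0 - (1.0 - p_window) ** (1.0 / hours_in_window)   # Part A, Eqs. 1-3

print(f"P(>=1 eruption in {window_weeks} weeks): {p_window:.3f}")
print(f"hourly eruption probability: {p_hourly:.1e}")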
Table 8 shows the VoLREst outputs if we use input values based on the known historic record at the time of the November 2012 eruption, with and without consideration of PDCs, using VoLREst hazard probability and characteristic values set in January 2014. We also show outputs using the eruption probability and ballistic characteristics resulting from the October 2012 expert elicitation workshop.

Table 8: Retrospective VoLREst analysis for Te Maari volcano, November 2012.

The 10^-4 threshold is not attained for any of these input combinations, although in all cases at the vent the hourly risk of fatality exceeds 10^-5. If we compare the first two columns (three sites, three eruption sizes in the VoLREst spreadsheet), we can clearly see the effect of considering PDC exposure: the calculated hourly risk of a fatality at the vent is the same for both cases (as it equals the hourly probability of an eruption), but when PDCs are considered the distance to the 10^-5 threshold almost doubles – the area exposed to an hourly risk of fatality of at least 10^-5 is almost a factor of three greater.

The above retrospective analysis suggests that had VoLREst and associated policies been operational the morning of 22 November 2012, the GNS Science Head of Volcanology Department would have had to approve fieldwork plans at the vent considering the evaluated life-safety risk posed to staff.

Apart from obvious limitations – the major ones being the subjectivity of the estimation of eruption likelihood, correctly identifying the vent area in advance of the eruption, the hazard footprint for different eruption sizes, and the chance of fatality given exposure – there are several additional limitations associated with VoLREst. These include:

VoLREst has been designed in a context where the primary concern is an unheralded eruption, i.e., there is little or no precursory activity suggesting an eruption is imminent. VoLREst assumes a constant probability of an eruption over the time period of interest. VoLREst is thus not appropriate when there is a rapidly changing situation where the chance of an eruption is escalating by the hour. If the situation is rapidly escalating, VoLREst is likely to underestimate the likelihood of a fatality given the lag time between when VoLREst is run and when the fieldwork is undertaken.

We only consider the risk of fatality, not of injury. Injuries can have serious consequences for individuals and can take a very long time to recover from, potentially with a lasting reduction in quality of life. VoLREst is not appropriate for estimating casualties. Additionally, VoLREst is not explicitly designed for a situation where someone survives an eruption but requires urgent assistance and evacuation, necessitating others to put themselves at risk. If the primary concern is eruption casualties, consider solely calculating exposure risk, rather than including the step of accounting for likelihood of fatality given exposure (Step 12 in the Methods section). A conservative approach could include disregarding any consideration of directionality and assuming all hazards are radial in extent.

We only consider individual fatality risk. For safety reasons, GNS Science discourages solo fieldwork, yet VoLREst only considers individual exposure. VoLREst is not currently set up to evaluate the risk of multiple fatalities, which may be valuable for organisational risk assessment purposes or for rescue operation planning.
Our recommendation is to consider the number of people potentially exposed and find the balance between specific data importance, fieldwork safety, fieldwork efficiency, minimising the time an individual is potentially exposed, and minimising the number of people potentially exposed. VoLREst does not consider other risks related to fieldwork, which can include transportation via helicopter and/or driving, or working in alpine environments. This means the overall risk of fieldwork is higher than calculated – it is good practice to have protocols in place to minimise these additional risks (e.g., Table 4 ). We consider hourly risk of fatality, irrespective of the past or future exposure of an individual (i.e., 'dosing'). Thus, VoLREst itself does not consider whether a scientist goes to the volcano every single day, or whether this is a one-off short visit. At GNS Science, these considerations form part of the discussion between managers and scientists when developing fieldwork plans. Considering cumulative risk could assist in assessing and managing overall exposure over a period of time, along with ensuring that fieldwork is done as efficiently as practical and that only truly critical data is collected when the risk is high. Different people and cultures have different levels of acceptable risk, which may change given a specific context. There may also be a conflict between scientists and managers, with one or the other advocating that specific data be collected to add to accuracy or precision of overall volcano behaviour interpretation; external pressures can exacerbate this conflict. Clear protocols can assist, along with a procedure for determining what data is critical, and transparency on how risks are assessed. Additionally, it is important to acknowledge that what works in one jurisdiction/context may not in another. There is no certain distance where one is certainly safe or a 'safe' risk threshold level. A holistic approach considering all risks and managing them when possible can minimise, although not eliminate, overall risk. These limitations reinforced the GNS Science view that VoLREst is appropriate as a decision support tool, but not as a prescriptive measure. VoLREst is designed considering the consequences of large, moderate, and small eruptions. An alternative approach could be to consider the consequences of the most likely, possible, and credible next eruption. This alternate approach might be more appropriate in a quickly evolving situation with multiple eruption phases, or at a volcano where it is not appropriate to think about small, moderate, and large eruptions. We explored this alternate approach when a previously quiet volcano (Ngauruhoe) exhibited minor signs of volcanic unrest for a few weeks, but found it challenging to describe these three categories. At present we have not further developed VoLREst in the framework of most likely, possible, and credible next eruption, which can be different from a small, moderate, large eruption likelihood distribution. Finally, VoLREst outputs hourly risk of fatality, whilst many values in the health and safety literature reflect annual risk. We thus caution the user to be careful when using VoLREst to compare the risk volcanologists face to those faced by workers in other industries. At the outset of this paper we posed some key questions. When is it too dangerous to undertake fieldwork on active volcanoes? 
What is the balance between keeping observatory staff safe and the necessity for staff to undertake critical data collection to better understand the state of a volcano? While we do not have simple black and white answers to these questions, our conclusions are that it is possible a) to make sensible, reproducible, quantitative estimates of risk to staff involved in field data collection on volcanoes at times of unrest, and b) to use such risk estimates to enable management to make better-informed 'go/no-go' decisions for fieldwork to proceed. We have presented VoLREst, a decision-support tool developed at GNS Science to quantitatively evaluate life-safety risk to staff undertaking fieldwork on volcanoes in unrest, to assist with go/no-go decisions for fieldwork on active volcanoes. The driving concern is an eruption with no useful precursory activity, and we consider PDCs, ballistics, and near-vent hazards. VoLREst outputs a quantitative estimate of the hourly risk of a fatality as a function of distance from a volcanic vent. At GNS Science, specific life-safety risk thresholds trigger different levels of managerial approval required to undertake work. Many scientists at GNS Science initially struggled with the concept of quantifying risk to staff and considering this explicitly in managerial go/no-go decisions, though with time this has become standard practice. Managers at GNS Science have found it very useful, as decisions are better supported, more transparent, and easier to explain to staff and other stakeholders. We recommend such an approach, thinking quantitatively about what the risks are and what is acceptable in a particular context, to other organisations facing a similar dilemma in balancing the safety of their own staff and contractors against the wider public good of fulfilling their mission, when that mission involves risk to staff. We stress that VoLREst must be tailored for each volcano, and should not be used in a prescriptive manner. Although an element of risk will always be present when conducting fieldwork on potentially active volcanoes, this is a first step towards providing objective guidance for go/no-go decisions for volcano monitoring. We use Global Volcanism Program (2013) terminology to describe volcano types. All VEI eruption sizes also come from the Global Volcanism Program (2013).
CE: Common Era
HSE: Health & Safety Executive
PDC: pyroclastic density current
VAL: New Zealand Volcano Alert Level
VEI: Volcanic Explosivity Index
Alexander DE. Communicating earthquake risk to the public: the trial of the 'L'Aquila Seven'. Nat Hazards. 2014;72(2):1159–73. https://doi.org/10.1007/s11069-014-1062-2. Auker MR, Sparks RSJ, Siebert L, Crosweller HS, Ewert J. A statistical analysis of the global historical volcanic fatalities record. J Appl Volcanol. 2013;2(2):1–24. https://doi.org/10.1186/2191-5040-2-2. Baxter PJ. Medical effects of volcanic eruptions. Bull Volcanol. 1990;52(7):532–44. https://doi.org/10.1007/BF00301534. Baxter PJ, Gresham A. Deaths and injuries in the eruption of Galeras volcano, Colombia, 14 January 1993. J Volcanol Geotherm Res. 1997;77(1–4):325–38. https://doi.org/10.1016/S0377-0273(96)00103-5. Baxter PJ, Jenkins SF, Seswandhana R, Komorowski J-C, Dunn K, Purser D, Voight B, Shelley I. Thermal injuries in pyroclastic surges, their causes, prognosis and emergency management. Burns. 2017;43(5):1051–69. https://doi.org/10.1016/j.burns.2017.01.025. Breard E, Lube G, Cronin S, Fitzgerald R, Kennedy B, Scheu B, Montanaro C, White JDL, Tost M, Procter JN, Moebis A.
Using the spatial distribution and lithology of ballistic blocks to interpret the eruption sequence and dynamics: August 6, 2012 Upper Te Maari eruption, New Zealand. J Volcanol Geotherm Res. 2014;286:373–86. https://doi.org/10.1016/j.jvolgeores.2014.03.006. Bretton RJ, Gottsmann J, Aspinall WP, Christie R. Implications of legal scrutiny processes (including the L'Aquila trial and other recent court cases) for future volcanic risk governance. J Appl Volcanol. 2015;4:18. https://doi.org/10.1186/s13617-015-0034-x. Brown SK, Jenkins SF, Sparks RSJ, Odbert H, Auker MR. Volcanic fatalities database: analysis of volcanic threat with distance and victim classification. J Appl Volcanol. 2017;6:15. https://doi.org/10.1186/s13617-017-0067-4. Christenson BW, Werner CA, Reyes AG, Sherburn S, Scott BJ, Miller C, Rosenberg MJ, Hurst AW, Britten KA. Hazards from hydrothermally sealed volcanic conduits. EOS. 2007;88(50):53–5. Fitzgerald RH, Kennedy BM, Wilson TM, Leonard GS, Tsunematsu K, Keys H. The communication and risk management of volcanic ballistic hazards. In: Fearnley C, Bird D, Jolly G, Haynes H, McGuire B, editors. Observing the volcano world: volcano crisis communication, advances in volcanology. Berlin: Springer International Publishing; 2017. https://doi.org/10.1007/11157_2016_35. Fitzgerald RH, Tsunematsu K, Kennedy BM, Breard ECP, Lube G, Wilson TM, Jolly AD, Pawson J, Rosenberg MD, Cronin SJ. The application of a calibrated 3D ballistic trajectory model to ballistic hazard assessments at Upper Te Maari, Tongariro. J Volcanol Geotherm Res. 2014;286:248–62. https://doi.org/10.1016/j.jvolgeores.2014.04.006. Fournier d'Albe EM. Objectives of volcanic monitoring and prediction. J Geol Soc Lond. 1979;136(3):321–6. https://doi.org/10.1144/gsjgs.136.3.0321. GeoNet (2016) Volcano Alert Bulletin WI 2016/03. Accessed 21 Dec 2016 from http://www.geonet.org.nz/vabs/5FZDmPCNqwqkIi8GEUE2UE. Global Volcanism Program (2013) Volcanoes of the World, v. 4.5.5. Venzke, E (ed). Smithsonian Institution. Accessed 02 May 2017. doi: https://doi.org/10.5479/si.GVP.VOTW4-2013. GNS Science (2017) Statement of Corporate Intent July 2016 – June 2021. GNS Science, Lower Hutt. Accessed 6 Dec from https://www.gns.cri.nz/content/download/12085/64390/file/Statement-of-Corporate-Intent-2016-2017.pdf. Health & Safety Executive (2001) Reducing risks, protecting people: HSE's decision-making process. Her Majesty's Stationery Office, Norwich. http://www.hse.gov.uk/risk/theory/r2p2.pdf. Accessed 30 Nov 2017. Jenkins SF, Komorowski J-C, Baxter PJ, Spence R, Picquout A, Lavigne F, Surono. The Merapi 2010 eruption: an interdisciplinary impact assessment methodology for studying pyroclastic density current dynamics. J Volcanol Geotherm Res. 2013;261:316–29. https://doi.org/10.1016/j.jvolgeores.2013.02.012. Jolly GE, Keys HJR, Procter JN, Deligne NI. Overview of the co-ordinated risk-based approach to science and management response and recovery for the 2012 eruptions of Tongariro volcano, New Zealand. J Volcanol Geotherm Res. 2014;286:184–207. https://doi.org/10.1016/j.jvolgeores.2014.08.028. Kilgour G, Manville V, Della Pasqua F, Graettinger A, Hodgson KA, Jolly GE. The 25 September 2007 eruption of Mount Ruapehu, New Zealand: directed ballistics, surtseyan jets, and ice-slurry lahars. J Volcanol Geotherm Res. 2010;191(1–2):1–14. https://doi.org/10.1016/j.jvolgeores.2009.10.015. Lube G, Breard ECP, Cronin SJ, Procter JN, Brenna M, Moebis A, Pardo N, Stewart RB, Jolly A, Fournier N.
Dynamics of surges generated by hydrothermal blasts during the 6 August 2012 Te Maari eruption, Mt. Tongariro, New Zealand. J Volcanol Geotherm Res. 2014;286:348–66. https://doi.org/10.1016/j.jvolgeores.2014.05.010. Macfie R. Tragedy at Pike River mine: how and why 29 men died. Wellington: Awa Press; 2013. Maeno F, Nakada S, Nagai M, Kozono T. Ballistic ejecta and eruption condition of the vulcanian explosion of Shinmoedake volcano, Kyushu, Japan on 1 February, 2011. Earth Planet Sp. 2013;65(6):609–21. https://doi.org/10.5047/eps.2013.03.004. Marzocchi W, Woo G. Principles of volcanic risk metrics: theory and the case study of Mount Vesuvius and Campi Flegrei, Italy. J Geophys Res. 2009;114:B03213. https://doi.org/10.1029/2008JB005908. Massey CI, Della Pasqua F, Taig T, Lukovic B, Ries W, Heron D, Archibald G (2014b) Canterbury Earthquakes 2010–2011 Port Hills Slope Stability: Risk assessment for Redcliffs. GNS Science Consultancy Report 2014/78, 123 p. Accessed 22 November 2017 from https://www.ccc.govt.nz/assets/Documents/Environment/Land/CR2014-78RedcliffsFINAL.pdf. Massey CI, McSaveney MJ, Taig T, Richards L, Litchfield NJ, Rhoades DA, McVerry GH, Lukovic B, Heron DW, Ries W, Van Dissen RJ. Determining rockfall risk in Christchurch using rockfalls triggered by the 2010–2011 Canterbury earthquake sequence. Earthquake Spectra. 2014a;30(1):155–81. https://doi.org/10.1193/021413EQS026M. Meloy AF. Arenal-type pyroclastic flows: a probabilistic event tree risk analysis. J Volcanol Geotherm Res. 2006;157(1–3):121–34. https://doi.org/10.1016/j.jvolgeores.2006.03.048. Miller CA, Jolly AD. A model for developing best practice volcano monitoring: a combined threat assessment, consultation and network effectiveness approach. Nat Hazards. 2014;71(1):493–522. https://doi.org/10.1007/s11069-013-0928-z. Ministry of Business, Innovation & Employment (2017) Accessed 7 Dec 2017 from http://www.mbie.govt.nz/info-services/employment-skills/labour-market-reports/labour-market-analysis. Newhall CG (1982) A method for estimating intermediate and long-term risks from volcanic activity, with an example from Mount St. Helens, Washington. US Geol Surv Open-file Rep 82–396. Newhall CG (1984) Semi-quantitative assessment of changing volcanic risk at Mount St. Helens, Washington. US Geol Surv Open-file Rep 84–272. Newhall CG, Hoblitt RP. Constructing event trees for volcanic crises. Bull Volcanol. 2002;64(1):3–20. https://doi.org/10.1007/s004450100173. Ogburn S, Harpel C, Pesicek J, Wellik J, Wright H, Pallister J (2016) The use of incomplete global data for probabilistic event trees: challenges and strategies. EGU General Assembly. Potter SH, Jolly GE, Neall VE, Johnston DM, Scott BJ. Communicating the status of volcanic activity: revising New Zealand's volcanic alert level system. J Appl Volcanol. 2014;3(1):1–13. https://doi.org/10.1186/s13617-014-0013-7. Scott BJ (2013) A revised eruption catalogue of Ruapehu volcano eruptive activity: 1830–2012. GNS Science Report 2013/45. 113 p. Scott BJ, Potter SH. Aspects of historical eruptive activity and volcanic unrest at Mt. Tongariro, New Zealand: 1846–2013. J Volcanol Geotherm Res. 2014;286:263–76. https://doi.org/10.1016/j.jvolgeores.2014.04.003. Selva J, Marzocchi W, Papale P, Sandri L. Operational eruption forecasting at high-risk volcanoes: the case of Campi Flegrei, Naples. J Appl Volcanol. 2012;1:5. https://doi.org/10.1186/2191-5040-1-5. Sobradelo R, Martí J.
Bayesian event tree for long-term volcanic hazard assessment: application to Teide-Pico Viejo stratovolcanoes, Tenerife, Canary Islands. J Geophys Res. 2010;115:B05206. https://doi.org/10.1029/2009JB006566. Sparks RSJ. Forecasting volcanic eruptions. Earth Planet Sci Lett. 2003;210(1–2):1–15. https://doi.org/10.1016/S0012-821X(03)00124-9. Spence R, Kelman I, Brown A, Toyos G, Purser D, Baxter P. Residential building and occupant vulnerability to pyroclastic density currents in explosive eruptions. Nat Hazards Earth Syst Sci. 2007;7(2):219–30. https://doi.org/10.5194/nhess-7-219-2007. Swanson DA, Weaver SJ, Houghton BF. Reconstructing the deadly eruptive events of 1790 CE at Kīlauea volcano, Hawai'i. GSA Bull. 2015;127(3–4):503–15. https://doi.org/10.1130/B31116.1. Taig T, McSaveney MJ (2014) Milford Sound risk from landslide-generated tsunami. GNS Science Consultancy Report 2014/224. 57 p. Voight B. The 1985 Nevado del Ruiz volcano catastrophe: anatomy and retrospection. J Volcanol Geotherm Res. 1990;42(1–2):151–88. https://doi.org/10.1016/0377-0273(90)90075-Q. Williams GT, Kennedy BM, Wilson TM, Fitzgerald RH, Tsunematsu K, Teissier A. Buildings vs. ballistics: quantifying the vulnerability of buildings to volcanic ballistic impacts using field studies and pneumatic cannon experiments. J Volcanol Geotherm Res. 2017;343:171–80. https://doi.org/10.1016/j.jvolgeores.2017.06.026. WorkSafe New Zealand (2017a) Annual Report 2016–2017. New Zealand Government, Wellington. Accessed 6 Dec from https://worksafe.govt.nz/dmsdocument/3196-annual-report-2016-2017. WorkSafe New Zealand (2017b) Accessed 7 Dec from https://worksafe.govt.nz/data-and-research/ws-data/fatalities/. Wright HMN, Pallister JS, McCausland WA, Griswold JP, Andreastuti S, Budianto A, Primulyana S, Gunawan H, VDAP team (in press) Construction of probabilistic event trees for eruption forecasting at Sinabung volcano, Indonesia 2013–14. J Volcanol Geotherm Res. https://doi.org/10.1016/j.jvolgeores.2018.02.003. Yamaoka K, Geshi N, Hashimoto T, Ingebritsen SE, Oikawa T. Special issue "The phreatic eruption of Mt. Ontake volcano in 2014". Earth, Planets and Space. 2016;68:175. https://doi.org/10.1186/s40623-016-0548-4. Zen MT, Hadikusumo D. Preliminary report on the 1963 eruption of Mt. Agung in Bali (Indonesia). Bull Volcanol. 1964;27(1):269–99. https://doi.org/10.1007/BF02597526. We thank members and affiliates of the Volcanology Department of GNS Science for their comments, suggestions, and patience during the development and implementation of VoLREst. In particular, we thank Brad Scott, Michael Rosenberg, and Geoff Kilgour who helped set hazard parameters for several New Zealand volcanoes, Tony Hurst and Craig Miller who volunteered feedback on several occasions on the procedure and its implementation, and Karen Britten and Richard Johnson who provided the perspectives of technicians engaged in considerable fieldwork. We also thank Chris Massey and David Rhoades for mathematical discussions, Annemarie Christophersen, Nico Fournier, and Graham Leonard for encouragement, and Stephen McGregor for writing suggestions. Nico Fournier, Chris Van Houtte, and Craig Miller provided internal GNS Science reviews. The manuscript was improved by comments by Associate Editor Laura Sandri, Editor-in-Chief Jan Lindsay, Chris Newhall, and an anonymous reviewer. This project was supported by GNS Science Core Research Programme funding. VoLREst is available as Additional file 1.
There are no other data associated with this paper. GNS Science, PO Box 30368, Lower Hutt, 5040, New Zealand: Natalia Irma Deligne, Gill E. Jolly & Terry H. Webb. GNS Science, Wairakei Research Centre, Private Bag 2000, Taupo, 3352, New Zealand: Gill E. Jolly. TTAC Limited, 10 The Avenue, Marston, Cheshire, CW9 6EU, UK: Tony Taig. 8 Rata St, Eastbourne, Lower Hutt, 5013, New Zealand: Terry H. Webb. NID drafted the manuscript, refined VoLREst and has been the GNS Science volcano life-safety risk calculation moderator since mid-2013. GEJ, TT, and THW conceived VoLREst and set volcano risk thresholds for GNS Science; GEJ and TT undertook preliminary development of VoLREst. GEJ and NID developed guidelines for frequency of the VoLREst calculation at GNS Science. All authors read and approved the final manuscript. Correspondence to Natalia Irma Deligne. At the time of the initial development of VoLREst, GEJ was the Head of the Volcanology Department and THW was the Natural Hazards Division Director at GNS Science. GEJ is the current Natural Hazards Division Director at GNS Science. Additional file 1: VoLREst spreadsheet for calculating life-safety risk as a function of distance from volcanic vent. All DARK GREY cells must be tailored for a specific volcano, and YELLOW cells must be updated for every application of VoLREst. In the RED cell enter the hourly risk of fatality of interest; the corresponding distance will be provided immediately underneath. (XLSX 33 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Deligne, N.I., Jolly, G.E., Taig, T. et al. Evaluating life-safety risk for fieldwork on active volcanoes: the volcano life risk estimator (VoLREst), a volcano observatory's decision-support tool. J Appl Volcanol. 7, 7 (2018). https://doi.org/10.1186/s13617-018-0076-y. Keywords: Event-tree, Decision-support tool, Life-safety risk evaluation, Unheralded eruption, Volcanic risk, Volcano observatory, VoLREst.
scPower accelerates and optimizes the design of multi-sample single cell transcriptomic studies Katharina T. Schmid1,2, Barbara Höllbacher1,2, Cristiana Cruceanu3, Anika Böttcher4,5,6, Heiko Lickert4,5,6, Elisabeth B. Binder3,7, Fabian J. Theis1,8 & Matthias Heinig1,2 Single cell RNA-seq has revolutionized transcriptomics by providing cell type resolution for differential gene expression and expression quantitative trait loci (eQTL) analyses. However, efficient power analysis methods for single cell data and inter-individual comparisons are lacking. Here, we present scPower, a statistical framework for the design and power analysis of multi-sample single cell transcriptomic experiments. We modelled the relationship between sample size, the number of cells per individual, sequencing depth, and the power of detecting differentially expressed genes within cell types. We systematically evaluated these optimal parameter combinations for several single cell profiling platforms, and generated broad recommendations. In general, shallow sequencing of high numbers of cells leads to higher overall power than deep sequencing of fewer cells. The model, including priors, is implemented as an R package and is accessible as a web tool. scPower is a highly customizable tool that experimentalists can use to quickly compare a multitude of experimental designs and optimize for a limited budget. Understanding the molecular basis of phenotypic variation, such as disease susceptibility, is a key goal of contemporary biomedical research. To this end, researchers use transcriptomic profiling to identify changes of gene expression levels (differentially expressed genes; DEGs) between sets of samples, e.g., patients and healthy controls1,2,3,4,5. Combining this with genetic information leads to the analysis of differential expression between genotypes and the identification of expression quantitative trait loci (eQTLs)6,7,8,9, supplying the molecular link between genome and phenotype10. Single cell RNA-sequencing (scRNA-seq)11,12,13,14,15 allows for differential gene expression and eQTL analysis on the level of individual cell types. Typically, single cell differential gene expression analysis seeks to identify genes whose expression levels are markedly different between different cell types16,17,18. In contrast, multi-sample experiments aim at the identification of DEGs between sets of samples within the same cell type. These sets can be defined by different experimental conditions or genotypes and are each measured at the single cell level. Multi-sample experiments have been identified as one of the grand challenges for single cell data analysis19. Power analysis is an important step in the design of statistically powerful experiments given certain assumptions about the expected effect sizes and constraints on the available resources. Researchers need to decide on parameters such as the sample size, the number of cells per sample and the number of reads. The power is tightly linked with the statistical testing procedure. Several methods have been established based on the theory of linear regression models20 and the control of the false discovery rate21,22,23,24 for microarray studies.
For bulk RNA-seq studies, power analysis methods based on the theory of negative binomial count regression25,26, other parametric models27,28,29, or simulations30,31 have been proposed and benchmarked32. In principle, methods for bulk RNA-seq power analysis could also be applied to compute power or minimally required sample sizes for given effect sizes in single cell experiments; however, they fail to take into account specific characteristics of single cell data. In scRNA-seq experiments, individual cells are typically not sequenced to saturation, leading to sparse count matrices, where only highly expressed genes are detected with counts greater than zero. In addition, the overall number of transcripts as well as the number of transcripts of individual genes can be highly cell type specific33. Recently, individual aspects of single-cell specific experimental design were addressed (Supplementary Table S1). First, recommendations of sequencing depth have been obtained by comparing sensitivity and accuracy of different technology platforms34,35,36. Second, it has been established that the minimal number of sequenced cells required to observe a rare cell type with a certain frequency can be modelled with a negative binomial distribution37,38 or multinomial distribution39. While these insights also help with the design of multi-sample experiments, there are additional parameters that need to be taken into account such as the sample size and the effect sizes. For single cell differential expression analyses, several simulation-based methods have been published recently which estimate the power dependent on the effect size between the groups40,41,42,43. However, only one simulation tool also addresses multi-sample comparisons with cells from different individuals in each group and can thereby give recommendations for the sample size43. Two benchmarking studies, one applying the aforementioned simulation tool and one using different example data sets, demonstrated that the "pseudobulk" approach in combination with classical differential gene expression methods such as edgeR44 and limma-voom45 outperforms single cell specific methods and mixed models in multi-sample DE analysis43,46. The pseudobulk approach approximates cell type specific gene expression levels for each individual as the sum of UMI counts over all cells of the cell type and was also successfully applied in different single cell eQTL studies47,48,49. While simulations43 successfully assess the power of the pseudobulk approach, they suffer from a number of shortcomings. A big disadvantage of simulation-based studies is their long runtime, which makes them unsuitable for evaluating the large number of experimental designs needed to optimize parameter combinations. Even power analysis for a single experiment with a large sample size can be very memory- and runtime-intensive. In addition, handling more complex designs is not easily accomplished with simulation-based methods, but could be achieved with analytical power analysis methods. A first analytic exploration of different experimental designs for single cell eQTL studies showed the importance of optimizing parameters for a restricted budget50, as shallow sequencing of more samples can increase the effective sample size. However, the analysis provided no generalizable tool that can be applied to other data sets and lacks an exact power estimation based on effect size priors. Furthermore, it is not applicable for DEG analysis.
Here, we provide a resource that enables choosing the optimal experimental design for interindividual comparisons. It focuses on the power to detect DEGs and eQTLs while also addressing the power to detect rare cell types. Our model was specifically developed for the pseudobulk approach, including a quantification of the probability to detect cell type specific gene expression in scRNA-seq data. We ensure an accurate power estimation with our model by selecting appropriate priors for the cell type specific expression distributions and for the effect size distributions. We derive data driven priors on expression distributions from single cell atlases of three different tissues51,52. We combine these with cell type specific priors for effect sizes based on DEGs and eQTL genes from bulk RNA-seq experiments on cells sorted by fluorescence activated cell sorting (FACS)53,54,55,56,57. Comparing our method against established simulation-based approaches validates our power estimates. In contrast to simulation-based methods, our analytic method can efficiently test a multitude of design options, making it suitable for the optimization of experimental parameters. Our model provides the basis for rationally designing well powered experiments, increasing the number of true biological findings and reducing the number of false negatives. Efficient calculation including a selection of different possible priors is easily accessible for the user, as we provide our model and parameters as an open source R package scPower on github https://github.com/heiniglab/scPower. All code to reproduce the figures of the paper is provided in the package vignette. The repository includes a shiny app with a user-friendly graphical user interface, which is additionally available as a web server at http://scpower.helmholtz-muenchen.de/. Power analysis framework for scRNA-seq experimental design Our power analysis framework targets multi-sample transcriptomic experiments analyzed with the pseudobulk approach. Each analysis starts with a count matrix of genes times cells. These counts can either be counts of unique molecular identifiers (UMI), in the case of droplet-based technologies, or read counts in the case of Smart-Seq. Cells are annotated to an individual and a discrete cell type or state. These can be derived by clustering and analysis of marker genes, potentially considering multiple levels of resolution58 or using the metacell approach59. Individuals are annotated with different experimental covariates, such as disease status. We focused on two group comparisons, but more complex experimental designs, which can be analyzed with generalized linear models, can also be accommodated (see package vignette). To determine cell type specific differential expression between samples, gene expression estimates for each sample and each cell type are approximated as the sum of (UMI) counts over all cells of the cell type47,48,49. This pseudobulk approach has been identified as one of the currently best performing approaches for multi sample DE analysis in recent benchmarking studies43,46. It is important to keep in mind that the pseudobulk approach on single cell data is distinct from traditional bulk RNA-seq. In pseudobulk the ability to detect the expression of a gene depends on the number of cells of the cell type and on the expression level of the specific gene. Therefore, we model the general detection power dependent on the number of cells per sample nc which is related to the number of cells per cell type. 
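To make the pseudobulk aggregation described above concrete, the following minimal R sketch sums the UMI counts over all cells of one cell type for each individual; the input objects (counts, donor, cell_type) and the chosen cell type label are hypothetical placeholders for illustration, not objects or functions provided by scPower.

# Minimal sketch of pseudobulk aggregation (assumed inputs, not the scPower API):
# counts:    genes x cells matrix of UMI counts
# donor:     character vector with the donor ID of each cell (length = ncol(counts))
# cell_type: character vector with the cell type label of each cell
pseudobulk_one_celltype <- function(counts, donor, cell_type, target = "CD4 T cells") {
  keep <- cell_type == target                         # restrict to the cell type of interest
  donors <- unique(donor[keep])
  pb <- sapply(donors, function(d) {
    # sum UMI counts over all cells of this cell type belonging to donor d
    rowSums(counts[, keep & donor == d, drop = FALSE])
  })
  colnames(pb) <- donors
  pb                                                   # genes x individuals pseudobulk matrix
}

The resulting genes-by-individuals matrix can then be analyzed with standard bulk differential expression tools such as edgeR or limma-voom, as discussed below.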
Two additional experimental parameters determine the power in our model and also the cost of a scRNA-seq experiment in general: the number of samples ns and the number of reads sequenced per cell r. In order to compute the power of the experiment, we either need to make explicit assumptions or use prior knowledge about unknown experimental parameters, such as the assumed effect sizes and gene expression levels of eQTLs and DEGs. This prior knowledge is combined with user-defined parameters and cost-determining factors to model the overall detection power (Fig. 1). Fig. 1: Dependence of experimental design parameters. The cost-determining factors (purple: number of samples, number of cells per sample and number of reads per cell) affect the overall detection power through the expression probability and the DE/eQTL power (blue). In addition, the power depends on prior knowledge or assumptions (green) as well as user-defined parameters such as the significance threshold and the expression cutoff (grey). Our model enables fast power calculation, independent of the chosen experimental parameters, and easy adaptation to different use-cases through reference priors. In order to choose the optimal parameter combination of sample size, cells per sample and read depth for an experimental design, there are two types of power to consider: first, the power to detect the cell type of interest, and second, the power to detect DE/eQTL genes within this cell type (i.e., the overall detection power). The power to observe the cell type of interest depends on its frequency, the number of cells sequenced per individual and the total number of individuals. Following Abrams et al.37, we model this problem using the negative binomial distribution (see "Methods"). Using prior knowledge of cell proportions in peripheral blood mononuclear cells (PBMCs) from the literature, we determine the number of cells required for each individual to detect a minimal number of cells of a specific type (Supplementary Fig. S1). The comparison for varying numbers of individuals shows that the number of cells required for each individual is most strongly affected by the frequency of the cell type and only to a smaller degree by the number of individuals. The power to detect DE/eQTL genes within this cell type is called the overall detection power P. Our framework models P of an experiment across all considered DEGs/eQTL genes D conditional on the experimental design parameters and the priors. The overall detection power is defined as the mean gene level detection power Pi conditional on gene specific priors of gene i: $$P=\frac{1}{|D|}\mathop{\sum}\limits_{i\in D}{P}_{i}$$ In order to identify a gene as a DEG/eQTL gene, it must both be expressed and exceed the significance cutoff. Therefore, we further decompose the gene level detection power \({P}_{i}\) into the expression probability \(P(i\in E)\), which quantifies the probability to detect gene \(i\) in the set of expressed genes \(E\), and the DE/eQTL power, which we denote as the probability \(P(i\in S)\) that gene \(i\) is in the set of significant differentially expressed genes \(S\). This quantifies the power (probability to reject \({H}_{0}\) when \({H}_{1}\) is true) of the statistical test for gene \(i\) and depends on the assumed effect sizes \({\varTheta }_{p}\), which can be derived from prior data, and the multiple testing adjusted significance threshold \(\alpha\).
In addition, both the expression probability and the DE/eQTL power depend on the mean \(\mu\) and dispersion \(\phi\) of expression levels of gene \(i\). In our model \(\mu\) and \(\phi\) are determined by the experimental design parameters (\({n}_{c},\,r\)) and the parameters of cell type specific expression distributions \({\varTheta }_{e}\). Conditioning the gene level detection power \({P}_{i}\) on these priors and experimental design parameters allows for decomposing \({P}_{i}\) as the product of the expression probability and the DE/eQTL power: $$\begin{array}{rl}{P}_{i}&=P(i\in E\wedge i\in S\,|\,{n}_{s},{n}_{c},r,{\varTheta }_{e},{\varTheta }_{p},\alpha )\\ &=P(i\in E\,|\,{n}_{s},\mu ({n}_{c},r,{\varTheta }_{e}),\phi ({n}_{c},r,{\varTheta }_{e}))\cdot P(i\in S\,|\,{n}_{s},\mu ({n}_{c},r,{\varTheta }_{e}),\phi ({n}_{c},r,{\varTheta }_{e}),{\varTheta }_{p},\alpha )\end{array}$$ In the following sections the models for the gene level expression probability and the DE/eQTL power are specified. scPower accurately models the number of detectable genes per cell type In scRNA-seq experiments, typically only highly expressed genes are detected with counts greater than zero34,35,36. This sparsity makes it difficult to assess gene expression levels and probabilities of detecting expressed genes in future experiments. We tackled this by modelling the cell type specific expression distribution based on the number of reads sequenced per cell \(r\), the number of cells of the cell type per individual \({n}_{c,s}\) and the number of individuals \({n}_{s}\). Taken together with a user-defined cutoff, this allows us to accurately predict the number of detectable genes per cell type. In the following sections, we explain how we parameterize the model by these three variables. In order to model expression probabilities that are cell type-specific, we need to take into account that the overall RNA abundance and distribution varies between different cell populations33. These cell type specific differences can be captured in priors that describe the general expression distribution in the target cell types. We illustrate our expression probability model and the strength of expression priors on various blood cell types. To this end, we fit the expression priors per cell type using a scRNA-seq data set of PBMCs from 14 healthy individuals measured with 10X Genomics (Supplementary Fig. S2, Table S2), in the following called the training data set, and evaluate it on a second independent PBMC data set47, the validation data set. Of note, the pilot data should in general represent controls without strong DE effects, covering the natural inter-sample variability. For the cell type specific expression prior, we approximate the single cell count distribution in each cell type with a small number of hyperparameters dependent on the read depth (Fig. 2a). We model UMI counts per gene \(i\) in a particular cell type \(c\) as independent and identically distributed according to a negative binomial distribution with a mean \({\mu }_{i,c}\) and dispersion parameter \({\phi }_{i,c}\). The distribution of means \({\mu }_{i,c}\) across all genes is further modeled as a mixture distribution with a zero component and two left-censored gamma distributions to cover highly expressed genes (see "Methods" and Supplementary Fig. S3). Subsampling the read depth of our data shows that the parameters of the mixture distribution are linearly dependent on the average UMI counts (Supplementary Fig. S4).
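To illustrate the shape of the mean-expression prior just described, the sketch below draws per-gene mean UMI counts from a simplified version of the mixture (a point mass at zero plus two gamma components) and then samples single-cell counts from gene-wise negative binomial distributions; all mixture weights, gamma parameters and the dispersion value are made-up illustration values rather than fitted scPower priors, and the left-censoring of the gamma components is omitted for brevity.

set.seed(1)
n_genes   <- 20000
weights   <- c(zero = 0.45, low = 0.45, high = 0.10)    # assumed mixture weights
component <- sample(names(weights), n_genes, replace = TRUE, prob = weights)
gene_means <- numeric(n_genes)                          # zero component stays at 0
gene_means[component == "low"]  <- rgamma(sum(component == "low"),  shape = 0.6, rate = 3)
gene_means[component == "high"] <- rgamma(sum(component == "high"), shape = 2.0, rate = 0.5)
# Single-cell UMI counts per gene follow a negative binomial around these means;
# here a single made-up dispersion is used, whereas scPower models the dispersion
# as a function of the mean (described next).
counts_one_cell <- rnbinom(n_genes, mu = gene_means, size = 1 / 0.2)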
The dispersion parameter \({\phi }_{i,c}\) is modelled dependent on the mean \({\mu }_{i,c}\), using the approach of DEseq60. As the initial experimental parameter for our model is the read depth and not directly the UMI counts, average UMI counts are related to the average number of reads mapped confidently to the transcriptome, which are in turn related to the number of reads sequenced per cell (Supplementary Fig. S5). Fig. 2: Expression probability model parameterized by UMI counts per cell. a The expression probabilities for genes in pseudobulk of a newly planned experiment are estimated based on the expression prior and the planned experimental parameters. For this, the expression prior is derived from the mean and dispersion parameters of gene-wise negative binomial distributions fitted from a matching pilot data set. b Using this approach, the number of expressed genes expected under our model (dashed line) closely matches the observed number of expressed genes (solid line) dependent on the number of cells per cell type (cell type indicated by point symbol) for one batch of the training PBMC data set (Supplementary Table S2). The data is subsampled to different read depths (indicated by colour). The r2 values between estimated and expressed genes were highly significant for both expression thresholds. c The model performed similarly well for the three batches of an independent validation PBMC data set47. Used expression threshold: count > 10 (right panels of b, c) or count > 0 (left panels of b, c) in more than 50% of the individuals. Taken together, we now have a model of per cell read counts across all genes parameterized by the number of reads sequenced, which was trained on cell type specific expression data. The set of parameters describing the gamma mixture distribution dependent on the UMI counts, the mean-dispersion curves and the read depth-UMI curves is called expression prior in the following. It is required for a correct modelling of the count distribution in unseen data and so the expression probabilities. We provide expression priors for 25 different cell types from 3 different tissues in scPower and the user can easily generate their own expression priors for missing cell types with our package. We can now use these expression priors to quantify the expression probability of all genes in a future experiment with different experimental parameters. For this, we quantify the expression distribution of a particular gene in a particular cell type and individual based on its prior expression strength. This prior is represented by the expression rank of the gene compared to all other genes. We determine its mean expression level as the quantile corresponding to this expression rank in the single cell expression prior distribution. This quantity is dependent on the read depth. Next, we derive the pseudobulk count distribution from the single cell expression distributions. This pseudobulk count distribution is again a negative binomial distribution. Its mean and dispersion are scaled by the number of cells per individual and cell type. Whether a gene is expressed or not, can now be estimated based on this gene specific pseudobulk distribution, combined with a user defined threshold. In our default settings, the threshold is composed of a minimum pseudobulk count (sum of UMI counts per gene per cell type per individual) and a certain fraction of individuals. 
Specifically, we compute the probability that the observed counts are greater than the user defined minimal count threshold in at least a given number of individuals. Summing up these gene expression probabilities allows for modelling the expected number of expressed genes (see "Methods" section for detailed formulas). On top of our default threshold criteria, our package offers the user alternative options for expression thresholds, e.g., that a gene is called expressed if it has a count > 0 in a certain percentage of cells. Subsampling of our data shows that the number of expressed genes per cell type depends on the number of cells of the cell type and the read depth (Fig. 2b, c). The observed numbers of expressed genes (solid lines) are closely matched by the expectation under our model (dashed lines), shown here with example cutoffs of counts greater than ten and zero. We show the results for one batch of the PBMC data set (Fig. 2b), while the fits of all batches can be found in Supplementary Figs. S6 and S7. Predicted and observed numbers of expressed genes were highly correlated (all r2 > 0.9, Supplementary Table S3). To validate our model, we applied it on a second PBMC data set47 that was not used during parameter estimation for the expression priors (Fig. 2c). This validation data set was measured at a smaller read depth of 25,000 reads per cell and for a different sample size (batch A and B with 4 individuals and batch C with 8 individuals). The observed numbers are closely matched by the expectation under our model (all r2 > 0.9), which demonstrates that it can generalize well between data sets and different experimental parameters. Taken together, we now have a general model for the expected number of expressed genes, which is parameterized by the number of cells per cell type and the number of reads per cell. Of note, gene expression distributions are cell type specific and the model parameters have to be fitted from suitable (pilot) experiments, such as the human cell atlas project61. scPower models the power to detect differentially expressed genes and expression quantitative trait genes Building on our expression probability model, we can assess the DE/eQTL power of the expressed genes using existing analytical power analysis tools that have been established for bulk sequencing data. They estimate the power to detect an effect of a given effect size depending on the sample size, the gene mean expression level and the chosen significance threshold. Analytic power analysis compares the distributions of the test statistic under the null and the alternative model (e.g., applying a certain effect size). Based on the significance threshold the critical value of the test statistic is determined from the null distribution. Then the power is given by the probability mass of the distribution under the alternative model that exceeds the critical value. An adjustment of the significance threshold is necessary due to the large number of parallel tests performed in a DEG analysis in order to avoid large numbers of false positive results. We provide two methods in our framework for that, either controlling the family-wise error rate (FWER) using the Bonferroni method62 or the false discovery rate (FDR)22. In the following analyses, we used the FDR adjustment for DE power and FWER adjustment for eQTL power, as proposed by the GTEx Consortium63 for a genome-wide cis eQTL analysis. 
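Before turning to the regression-based power calculations, the following minimal R sketch re-implements the gene-level expression probability described above (a negative binomial pseudobulk distribution per individual, combined with a binomial model over individuals) and shows how a multiple-testing adjusted significance threshold is typically obtained; the single-cell mean, dispersion and threshold values are illustrative assumptions, and the code mirrors the described logic rather than calling scPower's own functions.

# Probability that one gene is detected as expressed in a planned experiment
# (illustrative re-implementation of the logic described above, not the scPower API).
expression_probability <- function(mu_cell, phi_cell, n_cells, n_samples,
                                   min_count = 10, frac_samples = 0.5) {
  # Pseudobulk counts are sums over n_cells cells: the mean scales with the number of
  # cells and, for independent negative binomial cells, so does the size (1/dispersion).
  mu_bulk   <- n_cells * mu_cell
  size_bulk <- n_cells / phi_cell
  # Probability that a single individual exceeds the count threshold
  p_individual <- pnbinom(min_count, mu = mu_bulk, size = size_bulk, lower.tail = FALSE)
  # Probability that at least ceil(frac_samples * n_samples) individuals exceed it
  k <- ceiling(frac_samples * n_samples)
  pbinom(k - 1, size = n_samples, prob = p_individual, lower.tail = FALSE)
}
# Example: weakly expressed gene, 2000 cells of the cell type per individual, 20 samples
expression_probability(mu_cell = 0.01, phi_cell = 0.2, n_cells = 2000, n_samples = 20)
# Multiple testing: a Bonferroni (FWER) threshold divides alpha by the number of tests,
# while FDR control is usually applied to the observed p values, e.g. with
# p.adjust(p_values, method = "BH").
alpha_per_test <- 0.05 / 15000   # e.g., 15,000 genes tested (assumed number)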
Specifically, the power analysis methods we apply for DE and eQTL studies are based on negative binomial regressions64 and linear regressions20, respectively. This also leads to different effect size specifications; fold changes in the DE case and R-squared values in the eQTL case. Of note, the R-squared values combine allele frequency and beta value in the linear model. For DE analysis, power calculations are based on negative binomial regression, which is a powerful approach used in tools such as DESeq5,60 or edgeR44 for DEG analysis of both RNA-seq and scRNA-seq18,65,66,67. Benchmarking studies showed that these tools combined with the pseudobulk approach outperform other methods in multi-sample differential expression analysis43,46. We verified that all our training data sets could be modelled by negative binomial distributions after pseudobulk transformation and found no evidence of zero inflation (Supplementary Table S4). In contrast to the other technologies, the Smart-seq2 data showed zero-inflation on the single cell level (see also68,69), but aggregation to pseudobulk removed the excess of zero values. Hence, it is valid to apply analytical methods for the power analysis of negative binomial regression models64. To obtain a range of typical effect sizes and mean expression distributions in specific immune cell types, we analyzed several DEG studies based on FACS sorted bulk RNA-seq53,54 (Supplementary Figs. S8 and S9). Combined with our gene expression model, we can calculate the overall detection power of DE genes averaging over the gene specific expression probability times the power to detect the gene as a DE gene based on fold changes from prior DEG studies. In the following analyses, we assume a balanced number of samples for both groups, but scPower can also evaluate unbalanced comparisons, which lead to a decrease in power. Using fold changes from a study comparing CLL subtypes iCLL vs mCLL53 as effect size priors (sample size of 6, 84 DEGs with median absolute log fold change of 2.8) we find a maximum overall detection power of 74% (Fig. 3a). This power is reached with the experimental parameters of 3000 cells per cell type and individual, a total balanced sample size of 20, i.e., 10 individuals per group, and FDR adjusted p values. For this parameter combination and prior, the DE power would reach even 98% for all DE genes of the study, however, only 74% are likely to be expressed. Overall, the DE power increases with higher number of measured cells and higher sample sizes, while the expression probability is mainly influenced by the number of measured cells. Fig. 3: Expression probability, DE/eQTL power and overall detection power and their validation in simulation studies. Power estimation using data driven priors for DE genes (a) and eQTL genes (b) dependent on the total sample size and the number of measured cells per cell type. The detection power is the product of the expression probability and the power to detect the genes as DE or eQTL genes, respectively. The fold change for DEGs and the R2 for eQTL genes were taken from published studies, together with the expression rank of the genes. For (a), the Blueprint CLL study with comparison iCLL vs mCLL was used, for (b), the Blueprint T cell study. The expression profile and expression probabilities in a single cell experiment with a specific number of samples and measured cells was estimated using our expression prior, setting the definition for expressed to > 10 counts in more than 50% of the individuals. 
Multiple testing correction was performed by using FDR adjusted p values for DE power and FWER adjusted p values for eQTL power. c–e The probabilities calculated in (a) were verified by the simulation-based methods powsimR and muscat, with each point representing one parameter combination. f The eQTL power of (b) could be replicated with a self-implemented simulation. Runtime (g) and memory requirements (h) were drastically higher in the simulations than for our tool scPower during the evaluations of (c–e), showing the strength of our analytic model. The influence of the sample size is not so pronounced in this example due to the small sample size of the reference study. Potential weaker effect sizes that would be identified with larger sample sizes could not be considered in the priors, which leads to a low required sample size for the power estimation. For other reference studies the impact of a higher sample size on the power is more visible (Supplementary Fig. S10). Similar detection ranges are found for the comparison of other CLL subtypes in the same study, while the detection power in a study of systemic sclerosis vs control was much lower, with values up to 30% (Supplementary Fig. S10). Smaller absolute fold changes in this study decrease the DE power and therefore also the overall detection power. The effect of using the FWER adjustment also for the DE power can be seen in Supplementary Fig. S11. For eQTL analysis, power calculations are based on linear models20. Due to the very large number of statistical tests (~millions), simple linear models are usually applied to transformed read count data45,70, as they can be computed very efficiently. For large mean values, the power is estimated analytically; for small mean values, this approximation can be imprecise and instead simulations are used that take the discrete nature of scRNA-seq into account. This introduces a dependency between the eQTL power and the expression mean, and thus eQTL power is considered conditional on the mean. The mean threshold below which simulations are used was defined by comparison of simulated and analytic power (Supplementary Fig. S12). Overall detection power for eQTL genes (Fig. 3b) shows a stronger effect of the sample size, which increases the eQTL power. In the depicted use case, the applied priors originate from an eQTL study of T cells from the Blueprint consortium57, which had a sample size of 192 and identified 5,132 eQTL genes with a median absolute beta value for the strongest associated SNP of 0.89. Increasing the number of cells per individual increases both the expression probability and the eQTL power by shifting the expression mean of the pseudobulk counts to higher values. Notably, increasing the number of measured cells per individual and increasing the sample size both result in higher costs. A maximal detection power of 64% was found for a sample size of 200 individuals and 3,000 measured cells per cell type and individual. The Blueprint eQTL data set also contains eQTLs from monocytes, where we observe the same trend and found a maximal detection power of 65% (Supplementary Fig. S11). scPower estimations are supported by simulations The accuracy of scPower was evaluated by benchmarking against different simulation-based methods (Fig. 3c–f). In general, simulation-based methods generate and analyze example count matrices.
Therefore, they are always approximations and need to be repeated multiple times for accurate results, while our analytic model transformatively enables the design of experiments, requiring orders of magnitude less runtime and memory (Fig. 3g, h). For single cell DE experiments, we compared our model with powsimR40 and muscat43, both of which show power estimations that match our tool scPower well. powsimR is a widely used simulation-based method that is, however, not designed for multi-sample single cell comparison, i.e., it is only possible to make comparisons of groups of single cell measurements within the same sample but not between multiple samples. Adaptations of powsimR were necessary to make it comparable to scPower (see "Methods" for a detailed description of changes). In contrast, muscat is a recent method that already incorporates the pseudobulk approach for multi-sample comparison and can be used directly. Both simulation methods can be combined with different DE analysis methods for the downstream analysis of the simulated counts. We evaluated them in combination with different common DE methods, such as DESeq25, edgeR44 and limma45. The simulation-based power estimates from the adapted version of powsimR as well as from muscat matched the estimates from scPower very well (Fig. 3c–e). We compared the expected number of expressed genes, the DE power of these expressed genes and the overall power for all simulated genes. Running simulations with different DE methods showed that the observed power also depends on analysis choices such as the DE method, with scPower estimates being most accurate when using edgeR (Supplementary Fig. S13). Furthermore, powsimR and muscat differ slightly, owing to different modelling assumptions. The overall trends when comparing different experimental designs are in good agreement between scPower and all analysis methods applied to the simulated reads. This is true for both FWER adjustment and FDR adjustment as multiple testing correction. A comparison over a wide range of experimental design parameters between edgeR applied to simulated data from powsimR and scPower confirms the agreement of power estimates (Fig. 3c–e and Supplementary Fig. S14). Furthermore, we used the simulation-based methods to evaluate how well our power analysis method performs under different real-life conditions, such as batch effects or unbalanced cell type proportions between the groups. Simulating batch effects showed a clear drop in power, especially if the magnitude of the batch effect is larger than the effect size of DEGs (Supplementary Fig. S15). However, under the assumption of an unconfounded experimental design with batches containing both controls and cases, batch effects can be removed by adding a batch covariate to the regression model71. This increases the power compared to non-batch corrected analyses72,73,74. Following this strategy, we could recover the same power as in experiments without batch effects, i.e., our power estimations stay accurate in experiments with batch effects, given that they are adjusted for in the analysis. A second source that can lead to a reduction of power is differing cell type proportions between the two groups (Supplementary Fig. S16). In this case, a conservative power estimation can be achieved by setting the expected cell type frequency to the frequency of the smaller group. This represents a good lower bound estimation, especially in cases with small sample sizes.
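To give a concrete impression of the type of analytic calculation that these simulations validate, the sketch below approximates the power of a two-group pseudobulk comparison with a normal approximation of the Wald test on the log fold change; this is a generic textbook-style approximation with assumed parameter values, not the exact formula implemented in scPower.

# Approximate DE power for a two-group negative binomial (pseudobulk) comparison
# via a normal approximation of the log fold change (generic sketch, assumed values).
de_power_approx <- function(log_fc, mu, phi, n_per_group, alpha) {
  # Var(log of a group mean) is roughly (1/mu + phi) / n for negative binomial counts
  se <- sqrt((1 / mu + phi) / n_per_group +
             (1 / (mu * exp(log_fc)) + phi) / n_per_group)
  z_crit <- qnorm(1 - alpha / 2)
  pnorm(abs(log_fc) / se - z_crit)   # two-sided test, ignoring the negligible other tail
}
# Example: pseudobulk mean of 50, dispersion 0.2, twofold change, 10 samples per group,
# per-gene significance threshold of 0.01 (all values are illustrative assumptions)
de_power_approx(log_fc = log(2), mu = 50, phi = 0.2, n_per_group = 10, alpha = 0.01)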
Contrary to DE analysis, there currently exists no power estimation method for single cell eQTL that explicitly accounts for specific effect size priors. Therefore, we compared the analytical eQTL power with our own simulation method, which is also used for power estimation of genes with small mean values. The simulation method applies the underlying expression probability model of scPower for assigning a mean value to each gene. This part of the model is the same for eQTL and DE power and was already shown to be accurate compared to powsimR and muscat. Therefore, we focus on benchmarking the eQTL power, which showed good agreement between the simulated and analytic values (Fig. 3f). The analytic calculations of scPower are orders of magnitude faster than the simulation-based approaches: calculations for Fig. 3c–e took 8 days for powsimR, 3 days for muscat and less than a minute for scPower (Fig. 3g). The memory requirements are also much lower, as no count matrices are generated. For the simulation-based methods the memory requirements increase with larger sample sizes and numbers of cells; for example, 20 samples with 3000 cells per sample required 48 GB of memory for powsimR and 35 GB for muscat, compared to the parameter-independent requirement of a few MB for scPower (Fig. 3h). In addition, the installation of scPower is easier due to fewer dependencies: 11 for scPower vs. 82 for powsimR and 28 for muscat. These advantages of scPower over simulation-based approaches enable a systematic evaluation of a large number of design options as described in the next section. scPower maximizes detection power for a fixed budget by optimizing experimental parameters With this model for power estimation in DE and eQTL single cell studies in place, we are now able to optimize the experimental design for a fixed budget. The overall cost function for a 10X Genomics experiment is the sum of the library preparation cost and the sequencing cost (see "Methods"). The library preparation cost is defined by the number of measured samples and the number of measured cells per sample, while the sequencing cost is defined by the number of sequenced reads, which also depends on the target read depth per cell. As an example, we optimized the three parameters to maximize detection power given a fixed total budget (Fig. 4). In this scenario, the optimal parameter combinations are identified for a DE study with a budget of 10,000€ (Fig. 4a) and for an eQTL study with a budget of 30,000€ (Fig. 4b). Besides the budget, the user can choose a criterion and threshold to define whether a gene is expressed. We followed the recommendation of edgeR75 that the expression cutoff should correspond to the percentage of samples in the smaller group. For our DE example, this results in a percentage threshold of 50% due to the balanced DE design. For the eQTL example, we consider an eQTL with a minor allele frequency of 0.05, which is a common lower threshold for genetic variants tested for associations. We suggest that the gene should be expressed at least in the heterozygotes and thus pick a percentage threshold of 9.5% (see "Methods"). Fig. 4: Parameter optimization for constant budget. Maximizing detection power by selecting the best combination of cells per individual and read depth for a DE study with a budget of 10,000€ (a) and an eQTL study with a budget of 30,000€ (b).
Sample size is uniquely defined given the other two parameters due to the budget restriction and visualized using the point size. c–f Overall detection power dependent on cost determining factors. Influence of the cells per individual given the optimized read depth (c, e) and of the read depth given the optimized number of cells per individual (d, f). Corresponds to the DE study in (a), visualized in (a) by the red frame around the row with the optimal number of cells (corresponding to (c)) and the red frame around the column with the optimal read depth (corresponding to (d)). Same frames for (e, f) in the eQTL study (b). The optimal sample size values are shown in the upper x axes for (c–f). Vertical line in the subplots marks the optimal parameter combination. Effect sizes were chosen as in Fig. 3. Gene expression is defined as detected in >50% (DE analysis) or >9.5% (eQTL analysis) of individuals. We use our method to calculate the overall detection power for different parameter combinations of cells per individual and read depth, while the sample size is defined uniquely given the other parameters and the fixed experimental budget. For the DE study with this specific prior combination, the optimal parameters are measuring 1200 cells in 4 samples with a read depth of 30,000. Measuring more cells per individual increases the expression probability and so the overall detection power (Fig. 4c), but due to the fixed budget this goes hand in hand with measuring less samples which decreases the DE power. A similar trend exists for the read depth (Fig. 4d). For the eQTL study with this specific prior combination, the optimal parameters are measuring 1500 cells in 242 samples with a read depth of 10,000. Again a balance of the eQTL power, which depends mostly on the sample size, and the expression probability, which depends mostly on the cells per sample and the read depth, is visible (Fig. 4e–f). A user-specific version of this analysis with custom budget and priors can be generated using our webtool http://scpower.helmholtz-muenchen.de. We can expand our analyses with expression priors from our 10X PBMC data set and find the optimal parameter combinations depending on a given experimental budget (Fig. 5). We systematically investigated the evolution of optimal parameters for increasing budgets in four prototypic scenarios for DEG (Fig. 5a) and eQTL analysis (Fig. 5b), four scenarios based on prior DEG (Fig. 5c) and two scenarios on prior eQTL (Fig. 5d) experiments on FACS sorted cells (for the estimated costs see Supplementary Table S5). The prototypic scenarios reflect combinations of effect sizes (high, low) and expression ranks (high, low) of DEGs and eQTL genes. We observed that the number of cells per individual is the major determinant of power, as this is the variable that is either directly set to maximum values or increased first in the optimization (Fig. 5). This effect is least pronounced in the prototypic eQTL scenario (Fig. 5b), where small effect sizes require large sample sizes. For most DEG scenarios, the number of reads per cell is increased before increasing the sample size (Fig. 5a,c), indicating that strong effects can be detected with relatively few samples, while the detection of expression requires deeper sequencing. For eQTL scenarios, increasing the sample size first is more beneficial than increasing the read depth (Fig. 5b,d), which remains relatively low (10,000 reads per cell). Fig. 5: Optimal parameters for varying budgets and 10X Genomics data. 
The maximal reachable detection power (column 1) and the corresponding optimal parameter combinations (columns 2–4) change depending on the given experimental budget (x-axis). The coloured lines indicate different effect sizes and gene expression rank distributions. Different simulated effect sizes and rank distributions for DEG studies (a) and eQTL studies (b) with models fitted on 10X PBMC data. highES = high effect sizes, lowES = low effect sizes, highRank = high expression ranks and unifRank = uniformly distributed expression ranks (always relative to effect sizes observed in published studies). Effect sizes and rank distributions observed in cell type sorted bulk RNA-seq DEG studies (c) and eQTL studies (d) with model fits analogously to (a, b). Expression thresholds were chosen as for Fig. 4. Figure 5 was generated with FDR adjusted p values for DE power and FWER adjusted p values for eQTL power. Using FWER adjustment for DE power changes the observed overall power, but leads to very similar optimal parameter combinations and the same trends overall (Supplementary Fig. S17). In the cost optimization, we also took into account that increasing the number of cells per lane leads to higher numbers of doublets, i.e., droplets with two instead of one cell. Doublet detection methods such as Demuxlet47 and Scrublet76 enable faithful detection of those to exclude the doublets from the downstream analysis. We validated the doublet detection and donor identification of Demuxlet using our PBMC data set by comparing the expression of sex specific genes with the sex of the assigned donor (Supplementary Fig. S2b) and found high concordance after doublet removal, also for run 5, which was overloaded with 25,000 cells. The increase of the doublet rate through overloading was modeled using experimental data77 to accurately estimate the number of usable cells for the eQTL/DEG analysis. However, we observe in our own data set as well as in published studies47,78 slightly higher doublet rates than shown in77. Therefore, we consider the modeled doublet rate as a lower bound estimation. With a high detection rate of doublets, overloading of lanes is highly beneficial, since larger numbers of cells per individual lead to an increase in detection power, while not causing additional library preparation costs. This supports previous evaluations that demonstrated the benefit of overloading50. Even though overloading leads to a decreasing number of usable cells and a decreasing read depth of the singlets, as doublets contain more reads, the overall detection power still rises strongly for both DE and eQTL studies. scPower generalizes across tissues and scRNAseq technologies Our power analysis framework is applicable to data sets for other tissues besides PBMC and to other single cell technologies besides 10X Genomics. We demonstrate this with a lung cell data set measured by Drop-seq52 and a pancreas data set measured by Smart-seq251. Drop-seq is a droplet-based technology similar to 10X Genomics, which is why we only need minor adjustments to our model. We set doublet rates as a constant factor, since Drop-seq does not provide information on the effect of overloading and from there, the DE/eQTL power calculations are the same as for 10X Genomics. Smart-seq2 is a plate-based technology, generating read counts from full-length transcripts. To correct for the resulting gene-length bias, we express the count threshold for an expressed gene relative to one kilobase of the transcript. 
We fitted the expression model with the transcript length included in the size normalization factor of the count model. In addition, as the technology sorts individual cells into wells and does not suffer from variable doublet rates due to overloading, we modelled the doublet rate as a constant factor. With these adaptations, our expression probability model (Supplementary Fig. S18) performs as well for both Drop-seq and Smart-seq2 as for 10X Genomics, with r² = 0.995 and r² = 0.991, respectively (Supplementary Table S3). Furthermore, the power calculations are in good agreement with simulation-based estimates (Supplementary Fig. S19). The adapted expression probability models combined with platform-specific sequencing costs, either default (Supplementary Table S5) or user-defined, serve as input to budget optimization. Analogous to Fig. 5, we evaluated the evolution of parameters for simulated priors and observed priors in Drop-seq and Smart-seq2 (Supplementary Fig. S20). For the Smart-seq2 pancreas study, the overall observed power is lower. In contrast to 10X Genomics and Drop-seq, the optimal number of reads per cell is much higher, and the number of cells per individual and the sample size are only increased at higher budgets for both the prototypic and data-driven priors. In general, we observe that Smart-seq2 experiments are not less powerful per se, but the significantly higher cost in the multi-sample setting leads to less powerful designs when restricting the budget: it allows measuring only far fewer cells, even though a higher number of samples and cells would be beneficial. For the Drop-seq lung data we observe similar trends as for the 10X PBMC data set, with the number of cells per individual being the major determinant of power. We have introduced scPower, a method for experimental design and power analysis for interindividual differential gene expression and eQTL analysis with cell type resolution. Our model generalizes across different tissues and scRNAseq technologies and provides the means to easily design experiments that maximize the number of biological discoveries. Previous experimental design methods for multi-sample scRNA-seq43 are based on simulations. These simulations allow for assessing complex single cell multi-sample data, including scenarios of cell to cell heterogeneity other than differential gene expression. However, analytical models such as our framework are orders of magnitude faster than comparable simulation-based tools. This enables the evaluation of many experimental design options in a short time and thus the identification of optimal experimental parameters for a limited budget. In addition, analytical models require only a small amount of memory independent of the assessed experimental parameters, while the simulation of data sets with larger sample sizes leads to increasing memory usage. A sample size of 20 with 3000 cells per sample already required between 35 GB (muscat) and 48 GB (powsimR) in our evaluation. Therefore, larger data sets with hundreds of samples, as required for eQTL studies, will be very difficult to simulate. A first analytic investigation of power optimization in single cell eQTL studies50 has been performed, but it suffered from several limitations. First, it was based solely on the effective sample size, ignoring actual effect sizes and expression strength of eQTL genes. Second, it provided no generalizable tool. Third, it was limited to eQTL analysis and did not cover DE studies.
In contrast, our approach provides gene level and overall power estimates based on prior data, and we provide a generalizable tool for analytic power analysis of single cell DE and eQTL studies. This enables users to evaluate their target experiment and to identify the use-case-specific optimal parameter combination. The method is implemented in an R package with a user-friendly graphical user interface and is freely available on GitHub. In addition, the graphical interface of our model is available as a web server at http://scpower.helmholtz-muenchen.de/. We identify the optimal experimental parameters based on expression priors from single cell atlases of three different tissues and cell type specific effect size priors from bulk DEGs and eQTLs. We show that the number of cells is not only crucial for the power to detect rare cell types37,38 but also for the power to find DE/eQTL genes, by increasing the sensitivity of gene expression detection. In line with Mandric et al.50, our analyses suggest that aggregating shallowly sequenced transcriptomes of a large number of cells of the same cell type is more cost efficient than increasing the read depth for raising the sensitivity of individual level gene expression analysis. Most likely, multiple independent library preparations in individual cells lead to an improved sampling of the transcriptome as compared to fewer independent libraries sequenced more deeply, an effect that has previously been analyzed in the context of variant detection79. Specifically, we found the optimal read depths to be ~10,000 in most evaluations, which is relatively low compared to previous recommendations34,35,36,80,81. However, a systematic analysis of spike-in expression has shown that the accuracy of the measurements does not depend strongly on the sequencing depth and is consistently high at a read depth of 10,000 reads per cell35. Hence, we expect to accurately quantify gene expression levels with the optimized experimental design. In addition to the DE/eQTL power, the number of cells and the sequencing depth also determine the accuracy of cell type annotation82. Shallow sequencing of more cells has been recommended for extracting the gene expression programs required for annotation, because it achieves the same accuracy as deeper sequencing of fewer cells50,82. These recommendations match the optimal parameters we determined. To ensure sufficient power for cell type annotation, our framework scPower can be combined with specific power analysis tools for cell type annotation38,82. The optimal sample size depends mostly on the effect size, with low effect sizes requiring large sample sizes; consequently, optimal settings with high sample sizes typically lead to a low sequencing depth and a relatively low number of cells. In general, priors affect the optimal design and should therefore be selected carefully. In the optimal case, priors are known from well-matched pilot experiments or from the literature. Of note, our data-driven priors only allow reliable assessment of the overall power for sample sizes that are smaller than or roughly equal to the sample size of the pilot data sets from which the effect sizes were estimated. Consequently, a larger sample size will identify new significant DEGs with lower effect sizes, which were not identified in the smaller pilot study and are thus not included in the computation of the overall detection power.
In the absence of well-matched pilot experiments, it is nevertheless important to make assumptions explicit, either by selecting a prior based on a similar biological phenomenon or by choosing a prototypic case. In our study, we have compared the prototypic cases of strong effect sizes and relatively high expression versus intermediate effect sizes and expression levels across the whole range from highly expressed to lowly expressed genes. Both options, processing priors from a selected reference study and simulating prototypic priors, are possible with scPower and described in the package vignette. The pseudobulk approach presented here leverages well-established power analysis methods based on (generalized) linear models. While the (negative binomial) regression model for pseudobulk is currently the most powerful method for assessing individual level differential expression43, it requires a discrete cell type definition, and our approach is tightly linked to this requirement. Therefore, continuous cell annotations such as pseudotime would need to be discretized before the power analysis. Our model requires the user to choose between our defaults and custom settings for parameters such as the doublet rate and the expression threshold. The default for the doublet rate is based on reference values from 10X Genomics and is a lower bound compared to the doublet rates we estimate for our own data and to rates reported by other studies47,78. Thus, actual experiments might result in higher doublet rates and a lower number of usable cells. The choice of a threshold on the number of reads required for a gene to be called expressed also influences the choice of optimal parameters. In our examples we used thresholds of >10 and >3 reads; however, some eQTL analyses of bulk RNA-seq data advocate using >0 reads70. Following the independent filtering strategy of DESeq25,83, we additionally offer users the option to find the threshold that optimizes the number of discoveries at a given FDR (see package vignette). The identified optimal thresholds are low and increase the number of detectable genes. However, the user needs to be aware that this strategy likely increases the number of false positives18. For this reason, best practice guidelines for differential gene expression with RNA-seq recommend cutoffs that remove between 19 and 33% of lowly expressed genes, depending on the analysis pipeline84. These percentages correspond to 1–10 reads per million sequenced, which translates to 1–5 UMI counts for a median of around 5000 UMI counts per cell in our data set. Our gene expression probability model is cell type specific and has to be fitted on realistic pilot data. We have shown that our model can be applied to data generated with 10X Genomics, Drop-seq and Smart-seq2, and we are confident that it is applicable to similar technology platforms. When using our approach, the user should keep in mind that our experimental design recommendations are optimized for differential expression between individuals. Other applications might result in very different optimal experimental designs. For instance, co-expression analysis requires a high number of quantified genes per cell, especially when one is interested in cell type specific co-expression and in comparing such co-expression relations between individuals. Furthermore, the power to annotate new rare cell types by clustering analysis of scRNA-seq data might have different optimal parameters38.
Lastly, we did not address the power for the detection of variance QTLs (quantitative trait loci associated with gene expression variance across cells) from scRNAseq data48 due to the lack of data driven priors for the effect sizes. The human cell atlas project has made it its goal to build a reference map of healthy human cells by iteratively sampling the cells with increasing resolution61,85. This will create high quality priors that will further broaden the applicability of scPower. We are convinced that scPower will provide the foundation for building rational experimental design of interindividual gene expression comparisons with cell type resolution across a wide range of organ systems. Collection of PBMCs Blood was collected from healthy control individuals according to the clinical trial protocol of the Biological Classification of Mental Disorders study (BeCOME; ClinicalTrials.gov TRN: NCT03984084) at the Max Planck Institute of Psychiatry86. All individuals gave informed consent. Peripheral blood mononuclear cells (PBMCs) were isolated and cryopreserved in RPMI 1640 medium (Sigma-Aldrich) supplemented with 10% Dimethyl Sulfoxide at a concentration of roughly 1 M cells per ml. Ethics approval, consent to participate and consent for publication All investigations have been carried out in accordance with the Declaration of Helsinki, including written informed consent of all participants. Study conduct complies with the recommendations by the ethics committee of the Ludwig-Maximilian University, Munich. Applicable national and EU law, in particular the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) has been followed. Permission for using the data has been obtained from the Biobank of Max Planck Institute of Psychiatry. Consent for secondary use of the existing data has been obtained. In compliance with the consent for secondary use, the data generated in this project will be stored and can be used for future research. All data has been pseudonymized. Written informed consent of all participants allows for publication of data in online repositories. Single cell RNA-sequencing For single-cell experiments, 14 cell vials from different individuals (7 male and 7 female) were snap-thawed in a 37 °C water bath and serially diluted in RPMI 1640 medium (Sigma-Aldrich) supplemented with 10% Foetal Bovine Serum (Sigma-Aldrich) medium. Cells were counted and equal cell numbers per individual were pooled in two pools of 7 individuals each. Cell pools were concentrated and resuspended in PBS supplemented with 0.04 % bovine serum albumin, and loaded separately or as a combined pool with cells of all 14 individuals on the Chromium microfluidic system (10X Genomics) aiming for 8000 or 25,000 cells per run. Single cell libraries were generated using the Chromium Single Cell 3′library and gel bead kit v2 (PN #120237) from 10X Genomics. The cells were sequenced with a targeted depth of ~50,000 reads per cell on the HiSeq4000 (Illumina) with 150 bp paired-end sequencing of read2 (exact numbers for each run in Supplementary Table S2). Preprocessing of the single cell RNA-seq data We mapped the single cell RNA-seq reads to the hg19 reference genome using CellRanger version 2.0.0 and 2.1.187. Demuxlet version 1.0 was used to identify doublets and to assign cells to the correct donors47. In addition, Scrublet version 0.1 was run with a doublet threshold of 0.28 to identify also doublets from cells which originate from the same donor88. 
Afterwards, the derived gene count matrices from CellRanger were loaded into Scanpy version 1.489. Cells identified as doublets or ambivalent by Demuxlet and Scrublet were removed, as well as cells with fewer than 200 or more than 2,500 genes and cells with more than 10% of counts from mitochondrial genes. Verification of Demuxlet assignment using sex-specific errors We validated the donor assignment and doublet detection of Demuxlet by testing whether assigned cells express sex-specific genes correctly. Xist expression was taken as evidence for a female cell, expression of genes on the Y chromosome as evidence for a male cell. The male-specific error shows the fraction of cells assigned to a male donor among all cells expressing Xist (count > 0). The threshold for the female-specific error was set less strictly, as mismapping of a few reads to chromosome Y also occurs in female cells. Instead, the female-specific error indicates which fraction of cells is assigned to a female donor among all cells having more reads mapped to chromosome Y than the \({q}_{f}\) quantile of all cells, with \({q}_{f}\) being the overall fraction of cells assigned to a female donor among all cells. TPM mapped to chromosome Y is calculated as the number of reads mapped to chromosome Y, excluding reads mapped to the pseudoautosomal regions, times \({10}^{6}\), divided by the total number of read counts per cell. Both error rates are calculated twice, once with all cells and once without doublets from Demuxlet and Scrublet. Cell type identification We performed the cell type identification according to the Scanpy PBMC tutorial90. Genes which occurred in fewer than 3 cells were removed. Counts were normalized per cell and logarithmized. Afterwards, the highly variable genes were identified and the effects of total counts and mitochondrial percentage were regressed out. We calculated a nearest neighbour graph between the cells, using the first 40 PCs, and then clustered the cells with Louvain clustering91. Cell types were assigned to the clusters using marker genes (Supplementary Table S6). Frequency of the rarest cell type The probability to detect at least \({n}_{c,s}\) cells of a specific cell type \(c\) in each individual \(s\) depends on the frequency of the cell type \({f}_{c}\), the number of cells per individual \({n}_{c}\) and the number of individuals \({n}_{s}\). For one individual, this probability can be modeled using a cumulative negative binomial distribution37 as \({F}_{NB}({n}_{c}-{n}_{c,s},{n}_{c,s},{f}_{c})\) and for all individuals as \({F}_{NB}{({n}_{c}-{n}_{c,s},{n}_{c,s},{f}_{c})}^{{n}_{s}}\). The cell type frequencies were obtained by literature research; the frequencies in PBMCs are approximately twice as high as in whole blood92. All other parameters can be freely chosen (depending on the expected study design). Influence of read depths We used subsampling to estimate the dependence of gene expression probabilities on read depths. The fastq files of all 6 runs were subsampled using fastq-sample from fastq-tools version 0.893. The number of reads was downsampled to 75%, 50% and 25% of the original number of reads. CellRanger was used to generate count matrices from the subsampled reads. Donor, doublet and cell type annotations were always taken from the full runs with all reads. Expression probability model The gene expression distribution of each cell type was modeled separately because there are deviations in RNA content between different cell types33.
The UMI counts x per gene across the cells of a cell type are modeled by a negative binomial distribution. We used DESeq60 to perform the library size normalization as well as the estimation of the negative binomial parameters. The standard library size normalization of DESeq and the variant "poscounts" of DESeq25 were both used, depending on the quality of the fit for the specific data set. For the PBMC 10X data set (Supplementary Table S2), the standard normalization was taken; for the Drop-seq lung and the Smart-seq2 pancreas data sets, the poscounts normalization, which is more suitable for sparse data, was used. Only cell types with at least 50 cells were analyzed to get a robust estimation of the parameters. Negative binomial distributions were fitted separately for each batch to avoid overdispersion by batch effects, and the fits were combined downstream (see the paragraph about the gamma mixture distribution). The negative binomial distribution is defined by the probability of success \(\,p\) and the number of successes \(\,r\): $${f}_{NB}(x,r,p)=NB(x,r,p)={x+r-1 \choose x}\cdot {(1-p)}^{r}\cdot {p}^{x}$$ DESeq uses a parametrization based on mean \(\mu =\frac{p\,\cdot \,r}{1\,-\,p}\) and dispersion parameter \(\phi =\,\frac{1}{r}\). We formulated the definition of an expressed gene in a flexible way so that users can adapt the thresholds. The definition is based on the pseudobulk approach, where the counts \({x}_{i,j}\) are summed up per gene \(i\) over all cells \(j\) of cell type \(c\) and donor \(s\) to a three dimensional matrix \({y}_{i,c,s}=\mathop{\sum}\limits_{j\in C\wedge j\in S}{x}_{i,j}\), with \(C\) the set of all cells of cell type \(c\) and \(S\) the set of all cells of donor \(s\). In general, a gene \(i\) is called expressed in a cell type \(c\) if the sum of counts \({y}_{i,c,s}\) over all cells of the cell type within an individual \(s\) is greater than \(n\) in more than \(k\) percent of the individuals. We assume a negative binomial distribution (\({f}_{NB}({x}_{i,j},\,{\mu }_{i,c},\,{\phi }_{i,c})\)) for the counts \({x}_{i,j}\) of each gene \(i\) in each cell type \(c\), with mean \({\mu }_{i,c}\) and dispersion \({\phi }_{i,c}\). The sum of gene counts \({y}_{i,c,s}\) follows a negative binomial distribution whose parameters are altered by the number of cells per cell type and donor \({n}_{c,s}=|\{j\,\in \,C\,\wedge \,j\,\in \,S\}|\) to \(\mu ^{\prime}_{i,c,s}=\,{n}_{c,s}\,\cdot \,{\mu }_{i,c}\) and \(\phi ^{\prime}_{i,c,s}\,=\frac{{\phi }_{i,c}}{{n}_{c,s}}\). The probability that the sum of counts \(y\) is greater than \(n\) is $${p}_{i,s}=P({y}_{i,c,s} > n)=1-{F}_{NB}(n,\,\mu ^{\prime}_{i,c,s},\,\phi ^{\prime}_{i,c,s})$$ with \({F}_{NB}\) as the cumulative negative binomial distribution. To define a gene as expressed on the population level, we additionally require that it is detected in this way in more than \(k\) percent of the \({n}_{s}\) individuals.
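As a brief illustration of the per-individual detection probability defined above, the following minimal R sketch (with made-up parameter values) evaluates \(p_{i,s}\) for one gene using the base R cumulative negative binomial pnbinom, where the size argument is the inverse of the dispersion.

```r
# Hypothetical per-cell parameters for one gene in one cell type
mu   <- 0.05   # mean UMI count per cell (assumed value)
phi  <- 2.5    # dispersion per cell (assumed value)
n_cs <- 150    # cells of this cell type per individual
n    <- 10     # count threshold for the pseudobulk sum

# Parameters of the pseudobulk sum over n_cs cells
mu_sum  <- n_cs * mu    # mu'  = n_cs * mu
phi_sum <- phi / n_cs   # phi' = phi / n_cs

# p_is = P(y > n) = 1 - F_NB(n, mu', phi'); in R, size = 1 / dispersion
p_is <- 1 - pnbinom(n, size = 1 / phi_sum, mu = mu_sum)
p_is
```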
The expression probability of a gene \(i\) is obtained from a cumulative binomial distribution \({F}_{Bin}\) as $$P(i\in E)=1-{F}_{Bin}(k\cdot {n}_{s},{n}_{s},{p}_{i,s})$$ So in total, the expected value of the number of expressed genes \((E)\) can be defined as $${\mathbb{E}}(E)=\mathop{\sum} _{gene\,i}P(i\in E)$$ To generalize the expression probability model also to unseen data sets, the distribution of the mean values \({\mu }_{i,c}\) over all genes in a cell type \(c\) is modelled as a mixture distribution with three components, a zero component \(Z(x)\) and two left-censored gamma distributions \(\varGamma (x,\,r,\,s)\): $${f}_{{\mu }_{c}}(x)={p}_{1}Z(x)+{p}_{2}\varGamma (x,{r}_{1},{s}_{1})+{p}_{3}\varGamma (x,{r}_{2},{s}_{2})$$ The model is an adaptation of the distribution used in the single cell simulation tool Splatter94. The largest part of the mean values can be fitted with the first gamma distribution; a small fraction of highly expressed outlier genes is captured by the second gamma distribution. The genes with zero mean values originate from two sources: either the gene is not expressed, or its expression level is too low to be captured in this setting. The lower bound for the expression level at which both gamma distributions are censored depends on the number of cells \(j\) measured for this cell type, \({n}_{c}=|\{j\in C\}|\). The smallest expression level that can be captured is \(\frac{1}{{n}_{c}}\). The density of the gamma distribution is parametrized by rate \(r\) and shape \(s\): $$\varGamma (x,r,s)=\frac{{r}^{s}\,{x}^{s-1}\,{e}^{-rx}}{(s\,-\,1)!}$$ For modeling of the gamma parameters, the parameterization by mean \(\mu =\frac{s}{r}\) and standard deviation \(\sigma =\sqrt{\frac{\,s\,}{\,{r}^{2}\,}}\) is also used. The relationship between the mean UMI counts per cell and the gamma parameters (mean and standard deviation of the two gamma distributions) is linear, and the corresponding regression coefficients are estimated by linear regression, fitted over the gamma distributions of each run and all subsampled runs. The mixture proportion of the zero component \({p}_{1}\) decreases linearly with the mean UMI counts and is also estimated by linear regression. The lower bound of \({p}_{1}\) is set to a small positive number: 0.01. In contrast, the mixture proportion of the second gamma component \({p}_{3}\) is modelled as a constant, independent of the mean UMI counts; we set it to the median value of all fits per cell type. The mixture proportion of the first gamma component is \({p}_{2}=1-{p}_{1}-{p}_{3}\) and increases linearly with increasing mean UMI counts. The mean UMI counts per cell are linearly related to the logarithm of the number of transcriptome-mapped reads, with increasing read depth leading to a saturation of UMIs; 10X Genomics describes this effect with its sequencing saturation metric. The exact logarithmic saturation curve depends on multiple biological and technical factors and therefore needs to be fitted for each experiment individually. However, scPower provides example fits for the different scenarios observed in our analysis. The dispersion parameter is estimated as a function of the mean value using the dispersion function fitted by DESeq. The parameters of the mean-dispersion curve showed no correlation with the mean UMI counts; therefore, the means of the parameters of the dispersion function across all runs and subsampled runs were taken, resulting in one mean-dispersion function per cell type.
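To make the expected number of expressed genes concrete, here is a minimal, self-contained R sketch of the calculation described above; the gamma component used to draw gene means and all numeric values are illustrative placeholders, not fitted scPower parameters, and a single dispersion value is used instead of the fitted mean-dispersion function.

```r
set.seed(1)
n_s  <- 20     # number of individuals
k    <- 0.5    # required fraction of individuals
n_cs <- 150    # cells of the cell type per individual
n    <- 10     # count threshold per individual pseudobulk
phi  <- 2.5    # dispersion (simplified: one value for all genes)

# Illustrative per-cell gene means, standing in for the fitted gamma mixture
mu_genes <- rgamma(21000, shape = 0.3, rate = 3)

# Per-individual detection probability for every gene (size = 1 / dispersion')
p_is <- 1 - pnbinom(n, size = n_cs / phi, mu = n_cs * mu_genes)

# Probability of being expressed in more than k percent of the individuals
p_E <- 1 - pbinom(k * n_s, n_s, p_is)

# Expected number of expressed genes
sum(p_E)
```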
Expression cutoffs and threshold criteria The selection of expression cutoffs both on the individual level and the population level depends on the users and their research question, balancing the increase in power by more lenient cutoffs and the potential higher false positive rates associated with it. We applied different UMI count cutoffs for the individual level to prove the flexibility of our tool. For the population level, we followed in most of our analyses the recommendation of edgeR75 that the expression cutoff should correspond to the percentage of samples in the smaller group. In the DE case, this results in a cutoff of 50% as we focus on studies with balanced design. In the eQTL case, the definition of groups depends on the genotype and is therefore not directly chosen by the user. We decided to select the cutoff based on the minor allele frequency \({f}_{A}\), so that at least in heterozygotes the gene should be expressed. The fraction of heterozygotes \({f}_{AB}\) is thereby calculated dependent on the minor allele frequency as: $${f}_{AB}=2* {f}_{A}* (1-{f}_{A})$$ For example, assuming a minor allele frequency of at least 0.05 would result in a population cutoff of 0.095. Furthermore, our R package provides alternative threshold criteria. On the population level, instead of a percentage threshold for the number of individuals, an absolute threshold can be chosen. On the individual level, instead of an absolute count threshold in the pseudobulk, a gene can be defined as expressed if it is expressed in a certain number of cells with count larger than 0. Both alternative criteria are based on the same model as explained above in the previous section. If the users want to choose a threshold that maximizes the power, our package provides an optimization function for that. Power analysis for differential expression The power to detect differential expression, also denoted as the probability \(P(i\in S)\) that gene \(i\) is in the set of significant differentially expressed genes \(S\), is calculated analytically for the negative binomial model64. An implementation of the method can be found in the R package MKmisc. Parameters are sample size, fold change, significance threshold, the mean of the control group, the dispersion parameter (assuming the same dispersion for both groups) and the sample size ratio between both groups. We focus in our analyses on balanced comparisons with the same number of samples in both groups, represented by a sample size ratio of 1. Zhu et al. implemented three different methods to estimate the dispersion parameter, we chose method 3 for the power calculation, which was shown to be more accurate in simulation studies in the paper. More complex experimental designs can be addressed using the method of95. Power analysis for expression quantitative trait loci Additionally to the DE analyses, the use of scRNA-seq for the detection of expression quantitative trait loci (eQTLs) was evaluated. We distinguish for the eQTL power between genes with high and with low expression levels, where the mean is used to parameterize a simulation. Therefore, the eQTL power is a function of the mean expression level. For genes with high expression level, the power to detect an eQTL is calculated analytically using an F-test and depends on the sample size \({n}_{s}\), the coefficient of determination \({R}^{2}\) of the locus and the chosen significance threshold \(\alpha\). 
\({R}^{2}\) is calculated for the pilot studies from the regression parameter \(\beta\), its standard error \(se(\beta )\) and the sample size \(N\) of the pilot study: $$t=\frac{\beta }{se(\beta )}$$ $${R}^{2}=\frac{{t}^{2}}{N-2+{t}^{2}}$$ The implementation pwr.f2.test of the R package pwr is used for the F-test20. The degrees of freedom are 1 for the numerator and \({n}_{s}-2\) for the denominator; the effect size is \(\frac{{R}^{2}}{1\,-\,{R}^{2}}\). This power calculation assumes that the residuals are i.i.d. normally distributed. For large count values, it has been shown that normalized log transformed counts have a constant variance independent of the mean value and can be analyzed with linear models45. However, for genes with small mean values, i.e., only very few non-zero counts, this normalization might not be effective and the power is overestimated by the analytical power calculation based on the F-test. We performed a simulation study to assess the effect of the mean values on the eQTL power. To account for the discrete nature of the counts, we adopted a simulation scheme similar to a negative binomial regression model and analyzed the log transformed counts using linear models45. As for the analytical power calculation, the effect size is given by the coefficient of determination \({R}^{2}\) of the locus. To determine the simulation-based power for sample size \({n}_{s}\), significance threshold \(\alpha\) and mean count \({\mu }_{c}\) of the allele with lower expression, the following steps are repeated B = 100 times:
1. Simulate genotypes. To also account for the discrete nature of the genotypes, we first draw an allele frequency \({f}_{a}\) from a uniform distribution between 0.1 and 0.9. A random genotype vector \(g\) with \({g}_{i}\in \,\{0,1,2\}\) of length \({n}_{s}\) is generated with the expected number of each genotype \(({{f}_{a}}^{2},\,2{f}_{a}(1-{f}_{a}),{(1-{f}_{a})}^{2})\) according to Hardy-Weinberg equilibrium.
2. Simulate read counts. Using the allele frequency, the beta value \(\beta\) and the standard deviation of the residuals \(\hat{\sigma }\) are calculated: $$\beta =\sqrt{\frac{{R}^{2}}{2* {f}_{a}* (1-{f}_{a})}}$$ $$\hat{\sigma }=\sqrt{1-{R}^{2}}$$ The associated gene expression count vector \(x\) is sampled from a negative binomial distribution parameterized for each genotype \({g}_{i}\) with mean \({\mu }_{i}={e}^{log({\mu }_{c})\,+\,\beta * {g}_{i}}\) and dispersion \({\phi }_{i}\). In the following, we work with log transformed counts (plus one pseudo count). To match the effect size \({R}^{2}\), the dispersion parameter \({\phi }_{i}\) is chosen such that the standard deviation of the log transformed counts is \(\hat{\sigma }\). Since the Taylor approximation of the dispersion parameter45 was not accurate enough, we instead used a numerical optimization. This numerical optimization is precalculated for a range of parameter combinations to speed up the calculation for the user.
3. Test for association. Using the linear regression \(\log ({x}_{i}+1)\sim {g}_{i}\), the p value \({P}_{b}\) for \({H}_{0}:\beta =0\) is determined.
Finally, the simulation-based power is estimated as the fraction of simulation rounds with a significant p value, \(\frac{1}{B}\mathop{\sum }\limits_{b=1}^{B}{\mathbb{1}}({P}_{b} \, < \, \alpha )\). The power of the simulation was compared with the analytic power calculated by scPower to assess at which value of the mean \({\mu }_{c}\) the analytic power starts to overestimate the simulation-based empirical power (see Supplementary Fig. S12) for Bonferroni-adjusted significance thresholds used in eQTL analyses.
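A minimal R sketch of the analytic F-test power described above, using the pwr package named in the text; the pilot-study summary statistics and the planned design values are hypothetical placeholders.

```r
library(pwr)

# Hypothetical pilot-study summary statistics for one eQTL gene
beta    <- 0.6    # regression slope (assumed)
se_beta <- 0.12   # standard error of the slope (assumed)
N_pilot <- 200    # sample size of the pilot study (assumed)

t_stat <- beta / se_beta
R2     <- t_stat^2 / (N_pilot - 2 + t_stat^2)

# Analytic eQTL power for a planned study with n_s samples
n_s   <- 150
alpha <- 0.05 / (21000 * 10)   # illustrative: expressed genes x 10 independent SNPs
pwr.f2.test(u = 1, v = n_s - 2, f2 = R2 / (1 - R2), sig.level = alpha)$power
```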
We choose a cut-off of mean count < 5 and estimate the power for genes with smaller mean values based on simulation instead of the F-test, to increase accuracy for small count values. Overall detection power The overall detection power for DEGs/eQTLs is the product of the expression probability and the power to detect DEGs/eQTLs, as both probabilities are conditionally independent given the expression mean of the gene. Expression probabilities were determined based on the gene expression rank in the observed (pilot) data. The number of considered genes \(G\) was set to 21,000, the number of genes used for fitting of the curves. Ranks \(i\) were transformed to the quantiles \(\frac{i}{G}\) of the gamma mixture model parameterized by the mean UMI counts to obtain the mean \({\mu }_{c}\) of the negative binomial model, which is in turn used to compute the expression probability. To quantify the overall power of an experimental setup, we compute the expected fraction of detected DEG/eQTL genes with prior expression levels and effect sizes derived from the pilot data. We obtain the gene expression ranks of DEGs/eQTLs and their corresponding fold changes to compute the overall detection power for each gene. The overall power of the experimental setup is then the average detection power over all prior DEG/eQTL genes. DE/eQTL power is computed using a significance threshold \(\alpha\) corrected for multiple testing, controlling either the family-wise error rate (FWER) or the false discovery rate (FDR). We used FDR adjustment for the DE power and followed the approach of the GTEx consortium63 based on FWER adjustment for the eQTL power. However, our framework allows for any combination of power analysis and multiple testing method. For all analyses shown, the adjusted \(\alpha\) was set to 0.05. The family-wise error rate is defined as the probability of at least one false positive among all tests. Each expressed gene is tested once in the DE analysis; therefore, the adjustment for the family-wise error rate is done by correcting the threshold to \(\frac{\alpha }{{\mathbb{E}}(E)}\) for \({\mathbb{E}}(E)\) expected expressed genes. For eQTLs we followed the approach of the GTEx consortium63, which assumes that for each gene on average 10 independent (uncorrelated) SNPs are tested in a genome-wide cis eQTL analysis. Thus, the adjusted p value threshold is set at \(\frac{0.05}{{\mathbb{E}}(E)\, * \,10}\). In our tool, the number of independent SNPs can be chosen flexibly; for example, users who want to perform a combined cis and trans eQTL analysis can define a higher number of independent SNPs. Alternatively, for DE analysis the significance threshold can be adjusted for the false discovery rate using the method of Jung22. In contrast to the Bonferroni correction, which depends only on the number of tests, the FDR correction depends on the p value distribution of all genes. As our analytic method outputs the power without computing the p values, we cannot apply the FDR correction directly and therefore use the method of Jung. The goal of the approach is to identify the raw p value \(\alpha ^{\prime}\) corresponding to the chosen FDR-corrected threshold \(\alpha =FDR(\alpha ^{\prime} )\). The FDR is the fraction of false positives among all rejected null hypotheses (predicted positives), which include both false positives and true positives. Based on the probability integral transform, the distribution of p values for the \({m}_{0}\) true null hypotheses is uniform.
Therefore, we expect \({m}_{0}\, * \,\alpha ^{\prime}\) false positives at a raw p value significance threshold of \(\alpha ^{\prime}\). Here \({m}_{0}={\mathbb{E}}(E)-{\mathbb{E}}({E}_{DEG/eQTL})\) is the expected number of expressed genes minus the expected number of expressed DEGs/eQTLs. The expected number of true positives is directly derived from the power we reach for \(\alpha ^{\prime}\): summing up the gene-wise power (at \(\alpha ^{\prime}\)) yields the expected number of significant DEGs/eQTLs \({r}_{1}(\alpha ^{\prime} )\). Using numerical optimization of the complete formula $$FDR(\alpha ^{\prime} )=\frac{{m}_{0} * \alpha ^{\prime} }{{m}_{0} * \alpha ^{\prime} +{r}_{1}(\alpha ^{\prime} )}$$ with respect to the unknown parameter \(\alpha ^{\prime}\), we identify the raw p value threshold \(\alpha ^{\prime}\) corresponding to the FDR threshold \(\alpha\) (a short numerical sketch of this calibration is given below). Pilot data sets Realistic DE and eQTL priors, i.e., effect sizes and expression ranks, were taken from sorted bulk RNA-seq studies of matching tissues (PBMCs, lung and pancreas). For all studies, the significance cut-off of the DE and eQTL genes was set to FDR < 0.05, and the expression levels of the genes were taken from FPKM-normalized values. When published, we took the effect sizes directly; otherwise, we recalculated the DE analysis with DESeq2. Differential gene expression: To get realistic estimates for effect sizes (fold changes), data sets from FACS-sorted bulk RNA-seq studies were taken53,54. The data sets were used to rank the expression level of the DEGs among all other genes using the FPKM values. The cell types used in the studies were matched to our annotated cell types in PBMCs for the expression profiles. The expression profile of CD14+ monocytes was used for the macrophage study, the profile of CD4+ T cells for the CLL study. Lung cell type specific priors were obtained from a DE study of freshly isolated airway epithelial cells of asthma patients and healthy controls55. As no effect sizes were reported, the analysis was redone with the given count matrix from GEO (accession number GSE85567) using DESeq2. A DE study analyzing age-dependent gene regulation in human pancreas56 was used to get pancreas cell type specific priors. We obtained expression ranks and gene lengths, which are needed for proper normalization of Smart-seq2 expression values. eQTLs: We used eQTL effect sizes and sample sizes from the Blueprint study on bulk RNA-seq of FACS-sorted monocytes and T cells57. Neutrophils were excluded as they are not PBMCs. We took the most significant eQTL for each gene, using a significance cutoff of \({10}^{-6}\). We compared the FPKM-normalized expression levels of the eQTL genes among all other genes to get the expression rank for each eQTL gene. Effect sizes were derived from the slope parameter of the linear regression against genotype dosage, its standard error and the sample size of the study. Comparison with simulation-based power analysis tools To validate our model, we compared the DE power estimations of our framework with two simulation-based tools, powsimR and muscat40,43. For both tools, a few changes needed to be implemented to compare the output exactly with our approach: powsimR is not designed for multi-sample comparisons, and for both methods the option to apply a vector of log fold changes with matching expression ranks was not available. A detailed explanation of both methods and the applied changes can be found below.
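Returning to the FDR calibration of the significance threshold described above, the following minimal R sketch finds the raw threshold by numerical root finding with uniroot; the gene-wise power curve used here is a simplified stand-in (a z-test approximation with made-up effect sizes), not the scPower power model, and m0 is an assumed value.

```r
# Simplified sketch of the FDR calibration described above (illustrative values)
m0 <- 15000   # expected expressed genes that are not DEGs (assumed)

gene_power <- function(alpha_raw) {
  # placeholder power curve for 250 prior DEGs at raw threshold alpha_raw
  effects <- seq(2, 6, length.out = 250)   # made-up standardized effects
  pnorm(qnorm(alpha_raw / 2) + effects)    # two-sided z-test approximation
}

fdr <- function(alpha_raw) {
  r1 <- sum(gene_power(alpha_raw))         # expected true positives
  (m0 * alpha_raw) / (m0 * alpha_raw + r1)
}

alpha_fdr <- 0.05
alpha_raw <- uniroot(function(a) fdr(a) - alpha_fdr,
                     interval = c(1e-12, 0.05))$root
alpha_raw
```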
The simulation-based methods perform random sampling of their count matrices and therefore the simulation was repeated 25 times for each parameter combination to generate stable results. Both tools allow the power estimation for different DE methods. We evaluated powsimR in combination with edgeR-LRT, DESeq and limma-voom, together with median-ratio normalization of DESeq ('MR'), and muscat in combination with edgeR, DESeq2, limma-voom and limma-trend. No imputation or filtering was applied for any of the methods. In the comparisons with our model scPower, the expression probability parameters of scPower were set to minCounts >0 in at least one individual to match the detected genes of powsimR and muscat. Exemplarily, the CD4 T cells of our PBMC data set were used for fitting the simulation models of powsimR and muscat. We evaluated all DE methods for 4, 8 and 16 samples in combination with 200, 1000 and 3000 cells per person. Additionally, we performed a comparison for a large range of parameter combinations of powsimR with edgeR-LRT and muscat with edgeR, testing all combinations as evaluated in (Fig. 3a). In the following, it is important to distinguish the training data set, which is used for model fitting of powsimR/muscat and restricts the number of simulated genes, and the simulated data set which is sampled from the trained model. The three main components of our statistical framework were evaluated in the comparison, the expression probability (by comparing the number of expressed genes), the power (here according to the definition of powsimR, i.e., the power of all genes expressed in the simulated data) and the overall detection power. Expressed genes: The expected number of expressed genes for scPower is compared with the number of expressed genes in powsimR and muscat, which are all genes with at least one count in the simulated matrix. An important limitation of the simulation based frameworks is here that the number of expressed genes in the simulation tools can never be larger than the number of expressed genes in the training data set, while scPower can also approximate expression of unseen genes with smaller mean values and so estimate more expressed genes than seen in the pilot data. DE power: The reported power of powsimR includes only genes, which are expressed in the simulated data set (count > 0). The same value can also be calculated for muscat. To make the DE power of our framework comparable, the mean power for all expressed DE genes was calculated. An expressed DE gene for scPower is defined by its expression rank, which needs to be smaller than the expected number of expressed genes. Overall power: powsimR does not return directly an overall power, which we define as the power over all simulated DE genes (including genes simulated with count > 0 and count = 0). However, the overall detection power of powsimR can simply be calculated by dividing the number of true positives of powsimR by the number of all simulated DE genes. The same was done for muscat. powsimR: uses training data to fit the parameters of the expression distributions for each cell type and gene. Using these parameters, it is randomly generating count matrices for two groups of cells introducing differential gene expression between these two groups for a prespecified number of DE genes. These DE genes are randomly selected and the means of their distributions shifted by a given effect size. 
In the next step, the simulated data is analyzed with different methods and the results are compared to the simulated ground truth to determine the power. Adaptations of powsimR are required to simulate a multi-sample setting and thus make it comparable to scPower: We added an additional step that generates a pseudobulk count matrix for multi-sample comparison. For this, we included an additional parameter for the sample size \({n}_{s}\), with samples distributed equally across both groups (\({n}_{s}/2\) samples per group). Thus, individual level effect sizes are identical to the cell level effect sizes, as more complex differential distributions are not implemented in powsimR96. After simulation of the new count matrix \(C\) with dimensions \({n}_{C}\) (number of cells) times \({n}_{G}\) (number of genes) in powsimR, we changed the algorithm to distribute the simulated cells equally between the samples (\({n}_{C}/{n}_{s}\) cells for each sample), while preserving the group structure. Summing up the counts for each sample generates a pseudobulk matrix with dimensions \({n}_{s}\) times \({n}_{G}\), which can be processed in exactly the same way as a single cell matrix in the subsequent powsimR steps. Furthermore, instead of randomly sampling the positions of the DE genes with powsimR, we assigned DE genes based on their expression ranks in the bulk studies, as in scPower. muscat: In contrast to powsimR, muscat was specifically implemented for multi-sample comparisons. It fits one negative binomial distribution separately for each sample and subpopulation in the training data set; the subpopulation definition is equivalent to our cell type definition. We noticed that fitting each sample separately decreases the number of expressed genes quite drastically if not enough cells are available for each sample, because again only as many genes can be sampled as are detected in the training data set. To get a robust fit of the negative binomial distribution with our training data set, we therefore decided to fit the negative binomial distribution for all samples together; for a very large training data set this is probably not necessary. Another difference to powsimR is that muscat provides different scenarios for simulating differential expression besides the shift of the mean expression (called DE in muscat). Additionally, it simulates genes with different proportions of low and high expression states (DP), differential modality (DM) or changes in both proportions and modality (DB). For the comparison with powsimR and scPower, we focus on the DE scenario. As for powsimR, we also incorporated the option to assign a specific log fold change to genes of a specific expression rank, to simulate the same DE genes as in scPower. We applied the simulation-based methods to further validate the scPower estimations with regard to two real-life scenarios that can affect the power: batch effects and differences of cell proportions between the groups. The introduction of batch effects was already implemented in powsimR. Of note, we slightly adapted the code of powsimR again to the multi-sample setting by assigning all cells from one individual to the same batch. We separated the individuals into two different batches with 50% cases and 50% controls in each batch, which represents a non-confounded experimental design. 20% of the genes were randomly sampled to show batch log fold changes with values between 2 and 6.
We ran the downstream powsimR power analysis with edgeR once without accounting for batch effects and once with adjusting for the batch effects using a model that includes a batch covariate. This was repeated for different batch effects and experimental parameter combinations and each time the results were compared with the scPower estimation. The second real life scenario simulating differences of cell proportions between the groups is readily implemented in muscat. The cell proportion parameter represents the fraction of all measured cells that belong to group 1, i.e., a fraction of 0.3 means that 30% of measured cells belong to group 1 and 70% to group 2. We evaluated cell proportions between 0.1 and 0.5 in combination with different experimental parameter combinations and compared the results to scPower. For the scPower estimation, we calculated two versions, once the default approach assuming balanced distribution of cells between the groups and once a conservative approach, also assuming the balanced distribution, but reducing the cell frequency \({f}_{c}\) by the cell proportion parameter \({p}_{c}\) so that the number of cells per cell type entering the model matches the cell frequency in the lower group \(2 * {f}_{c} * {p}_{c}\). As no simulation-based power analysis for eQTLs exists (and also no other method), we benchmarked the eQTL power with our own simulation tool (described in the methods section Power analysis for expression quantitative trait loci). Our simulation method uses our expression probability model to estimate the mean parameter, therefore only the power itself is compared (not the expression probability and overall power). We tested again 25 rounds of simulation for all parameter combinations depicted in Fig. 3b. Cost calculation and parameter optimization for a given budget The overall experimental cost \({C}_{t}\) for a 10X Genomics experiment is the sum of the library preparation cost and the sequencing cost. It can be calculated dependent on the three cost determining parameters sample size \({n}_{s}\), number of cells per sample \({n}_{c}\) and the read depth \(r\). The library preparation cost is determined by the number of 10X kits, depending on how many samples are loaded per lane \({n}_{s,l}\) and the cost of one kit \({C}_{k}\). The cost of a flow cell \({C}_{f}\) and the number of reads per flow cell \({r}_{f}\) determine the sequencing cost. $${C}_{t}=ceiling\left(\frac{{n}_{s}}{6 * {n}_{s,l}}\right) * {C}_{k}+ceiling\left(\frac{{n}_{s} * {n}_{c} * r}{{r}_{f}}\right)\, * {C}_{f}$$ We optimized the three cost parameters for a fixed budget to maximize the detection power. A grid of values for number of cells per individual and for the read depth was tested, while the sample size is uniquely determined given the other two parameters and the fixed total costs. As an approximation of the sample size, the ceiling functions from the cost formula were removed. $${n}_{s}=floor\left({C}_{t}\,\bigg/\left(\frac{{C}_{k}}{6 * {n}_{s,l}}+\frac{{n}_{c} * r * {C}_{f}}{{r}_{f}}\right)\right)$$ The same approach can also be used with a grid of sample size and cells per sample or read depth. In general, two parameters need to be chosen and the third parameter is uniquely determined given the other two and the fixed experimental cost. Given the three cost parameters, the detection power for a specific cell type and a specific DE or eQTL study can be estimated. However, we also have to account for the appearance of doublets during the experiment. 
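As a small numerical illustration of the cost model just defined, the following R sketch computes the total cost and the sample size implied by a fixed budget; all prices and throughput values are hypothetical placeholders, not actual 10X Genomics or sequencing prices.

```r
# Hypothetical cost parameters (placeholders, not real prices)
C_k  <- 8000    # cost of one 10X kit covering 6 lanes
C_f  <- 4000    # cost of one flow cell
r_f  <- 4.1e9   # reads per flow cell
n_sl <- 8       # samples loaded per lane

# Total cost C_t for sample size n_s, cells per sample n_c and read depth r
total_cost <- function(n_s, n_c, r) {
  ceiling(n_s / (6 * n_sl)) * C_k + ceiling(n_s * n_c * r / r_f) * C_f
}

# Sample size implied by a fixed budget C_t (ceiling functions dropped)
samples_for_budget <- function(C_t, n_c, r) {
  floor(C_t / (C_k / (6 * n_sl) + n_c * r * C_f / r_f))
}

total_cost(n_s = 24, n_c = 1500, r = 10000)
samples_for_budget(C_t = 30000, n_c = 1500, r = 10000)
```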
The fraction of doublets depends on the number of cells loaded on the lane. Following the approach of37, we model the doublet rate \(d\) as linearly dependent on the number of recovered cells, using the values from the 10X user guide of V377. A factor of \(7.67 * {10}^{-6}\) was estimated, so that \(d=7.67 * {10}^{-6} * {n}_{c} * {n}_{s,l}\). The number of usable cells per individual used for the calculation of detection power is then \({n}_{u}=(1-d) * {n}_{c}\). We assume that nearly all doublets are detectable using Demuxlet and Scrublet and are discarded during the preprocessing of the data set. The expected number of cells for the target cell type with a frequency of \({f}_{c}\) will be \({f}_{c} * (1-d) * {n}_{c}\). A second effect of doublets is that the read distribution is shifted, as doublets contain more reads than singlets. Again following the approach of37, we assume that doublets contain 80% more reads than singlets. In the following, the ratio of reads in doublets compared to reads in singlets is called the doublet factor \({f}_{d}\); a factor of 1.8 is assumed in the calculations in this manuscript. Therefore, depending on the number of doublets, the read depth of the singlets will be slightly lower than the target read depth. $${r}_{s}=\frac{r * {n}_{c}}{{n}_{u}+{f}_{d} * ({n}_{c}-{n}_{u})}$$ In addition, the mapping efficiency is taken into account: assuming a mapping efficiency of 80%, a mapped read depth of \({r}_{m}=0.8 * {r}_{s}\) remains. In the power calculation, the number of usable cells per cell type is used instead of the number of cells, and the mapped read depth instead of the target read depth (a short numeric sketch of this adjustment is given at the end of this section). Instead of defining the number of samples per lane directly, the number of cells loaded per lane \({n}_{c,l}\) is usually defined, so that the doublet rate per lane can be restricted directly. In our analyses we use \({n}_{c,l}=20,000\), which leads to a doublet rate of at most 15.4%. The number of individuals per lane can be derived directly as \({n}_{s,l}=floor({n}_{c,l}/{n}_{c})\). Simulation of effect sizes and gene rank distributions Model priors, i.e., effect sizes and gene rank distributions, were derived from FACS-sorted bulk RNA-seq data to get realistic assumptions. In addition, we simulated different extreme prior distributions to evaluate their influence on the optimal experimental parameters. The log fold changes for the DE studies were modeled as normally distributed. High effect size distributions were simulated with a mean of 2 and a standard deviation of 1, low effect size distributions with a mean of 0.5 and a standard deviation of 1. Effect sizes (\({R}^{2}\) values) for the eQTL studies were obtained by sampling normally distributed Z scores and applying the inverse Fisher Z transformation. Because very small values are not observed due to the significance threshold, the normal distribution is truncated to retain values above the mean. High effect sizes were simulated with a mean of 0.5 and a standard deviation of 0.2, low effect sizes with a mean of 0.2 and a standard deviation of 0.2. A similar standard deviation was also observed in the pilot data. 250 DEGs and 2000 eQTL genes were simulated. The ranks were uniformly distributed, either over the first 10,000 genes or the first 20,000 genes. This leads to four simulation scenarios, combining high and low effect sizes (ES) with high or uniformly distributed expression ranks, called highES_highRank, lowES_highRank, highES_unifRank and lowES_unifRank in the studies.
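The doublet and read-depth adjustment described above can be written out as a short R sketch; the 7.67e-6 slope, the doublet factor of 1.8 and the 80% mapping efficiency are the values stated in the text, while the cell numbers and read depth are illustrative.

```r
# Doublet and read-depth adjustment (illustrative design values)
n_c  <- 1500                    # target cells per sample
n_sl <- floor(20000 / n_c)      # samples per lane when loading 20,000 cells
d    <- 7.67e-6 * n_c * n_sl    # doublet rate, linear in cells loaded per lane
n_u  <- (1 - d) * n_c           # usable (singlet) cells per sample
f_d  <- 1.8                     # doublets carry 80% more reads than singlets
r    <- 10000                   # target read depth per cell
r_s  <- r * n_c / (n_u + f_d * (n_c - n_u))   # read depth of the singlets
r_m  <- 0.8 * r_s               # mapped read depth at 80% mapping efficiency
c(doublet_rate = d, usable_cells = n_u, mapped_read_depth = r_m)
```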
Evaluation of Drop-seq and Smart-seq2 data We validated our expression probability model for other tissues and single cell RNA-seq technologies. Two data sets of the human cell atlas were used for that, a Drop-seq data set measured in lung tissue52 and a Smart-seq2 data set measured in pancreas tissue51. The Drop-seq technology is also a droplet-based technique, similar to 10X Genomics. The same model can be used, only adapting the doublet and cost parameter. However, as there was no data available to model the linear increase of the doublet rate during overloading correctly, the doublet rate was modeled instead as a constant factor and the library preparation costs were estimated per cell. scPower provides models for both cases and with the necessary prior data, users can also model the overloading for Drop-seq. Smart-seq2 is a plate-based technique, which produces full length transcripts and read counts instead of UMI counts. To compensate the gene length bias in the counts, the definition of an expressed gene was adapted to at least \(n\) counts per kilobase of transcript, resulting in a gene specific threshold of \(\frac{n\, * 1000}{{l}_{i}}\) with \({l}_{i}\) as gene length for gene \(i\). The gamma mixed distribution of the mean gene expression levels is modelled using length normalized counts, but the gene length is required as a prior for the dispersion estimation and the power calculation, as DEseq uses counts, which are not normalized for gene length. These priors can be obtained together with the effect sizes and the expression ranks from the pilot bulk studies. In the simulation of non-DE genes, an average mean length of 5000 bp is assumed. The linear relationship of the parameters of the mixture of gamma distributions is modeled directly based on the mean number of reads per cell. Doublets also appear in Smart-seq2, but as a constant factor, not increasing with a higher number of cells per individual. We observed for the parameter of the DEseq dispersion model a linear relationship with the read depth, which was not visible for Drop-seq and 10X Genomics. So, instead of taking the mean value per cell type, a linear fit is modeled for Smart-seq2. For both data sets, the cell type frequencies varied greatly among individuals, therefore an estimation of expressed genes in a certain fraction of individuals could not be validated, as this requires similar cell type frequencies for each donor. Instead, the expressed genes were estimated to be above a certain count threshold in all cells of a cell type, independent of the individual. Both data sets were subsampled to investigate the effect of the read depth. The Drop-seq reads are subsampled using fastq-tools version 0.893 and the subsampled UMI count matrix was generated following the pipeline previously described in97. The Smart-seq2 read matrix was subsampled directly using the function downsampleMatrix of the package DropletUtils98. We compared the budget restricted power to our PBMC 10X Genomics results, using the same simulated effect sizes and distribution ranks as well as matched observed priors from FACS sorted bulk studies. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. The single cell PBMC data set generated and analysed during the current study is available on Gene Expression Omnibus (GEO) with accession number GSE185714. The other single cell test data sets are available on GEO with accession numbers GSE96583, GSE130148 and GSE81547. 
The effect sizes for the eQTL and DE power were taken from published studies, accessible in the supplements of Chen et al. 2016 (https://doi.org/10.1016/j.cell.2016.10.026), Rendeiro et al. 2016 (https://doi.org/10.1038/ncomms11938), Moreno-Moral et al. 2018 (https://doi.org/10.1136/annrheumdis-2017-212454) and Arda et al. 2016 (https://doi.org/10.1016/j.cmet.2016.04.002). For one data set, we reanalysed the count matrix at GEO with accession number GSE85567 to obtain the effect sizes.

Code availability

All code is available as the open source R package scPower on GitHub (https://github.com/heiniglab/scPower) and on Zenodo (https://doi.org/10.5281/zenodo.5552753) (ref. 99). Code to reproduce the figures of the paper is provided in the package vignette. The repository includes a shiny app with a user-friendly graphical user interface, which is additionally available as a web server at http://scpower.helmholtz-muenchen.de/.

References

Khan, J. et al. Gene expression profiling of alveolar rhabdomyosarcoma with cDNA microarrays. Cancer Res. 58, 5009–5013 (1998). Debouck, C. & Goodfellow, P. N. DNA microarrays in drug discovery and development. Nat. Genet. 21, 48–50 (1999). Claverie, J. M. Computational methods for the identification of differential and coordinated gene expression. Hum. Mol. Genet. 8, 1821–1832 (1999). Ritchie, M. E. et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 43, e47 (2015). Love, M. I., Huber, W. & Anders, S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 15, 550 (2014). Cookson, W., Liang, L., Abecasis, G., Moffatt, M. & Lathrop, M. Mapping complex disease traits with global gene expression. Nat. Rev. Genet. 10, 184–194 (2009). Civelek, M. & Lusis, A. J. Systems genetics approaches to understand complex traits. Nat. Rev. Genet. 15, 34–48 (2014). GTEx Consortium et al. Genetic effects on gene expression across human tissues. Nature 550, 204–213 (2017). Aguet, F. et al. The GTEx Consortium atlas of genetic regulatory effects across human tissues. bioRxiv 787903. https://doi.org/10.1101/787903 (2019). GTEx Consortium. The GTEx Consortium atlas of genetic regulatory effects across human tissues. Science 369, 1318–1330 (2020). Tang, F. et al. mRNA-Seq whole-transcriptome analysis of a single cell. Nat. Methods 6, 377–382 (2009). Stegle, O., Teichmann, S. A. & Marioni, J. C. Computational and analytical challenges in single-cell transcriptomics. Nat. Rev. Genet. 16, 133–145 (2015). Angerer, P. et al. Single cells make big data: New challenges and opportunities in transcriptomics. Curr. Opin. Syst. Biol. 4, 85–91 (2017). Svensson, V., Vento-Tormo, R. & Teichmann, S. A. Exponential scaling of single-cell RNA-seq in the past decade. Nat. Protoc. 13, 599–604 (2018). Stark, R., Grzelak, M. & Hadfield, J. RNA sequencing: the teenage years. Nat. Rev. Genet. 20, 631–656 (2019). Kharchenko, P. V., Silberstein, L. & Scadden, D. T. Bayesian approach to single-cell differential expression analysis. Nat. Methods 11, 740–742 (2014). Finak, G. et al. MAST: a flexible statistical framework for assessing transcriptional changes and characterizing heterogeneity in single-cell RNA sequencing data. Genome Biol. 16, 278 (2015). Soneson, C. & Robinson, M. D. Bias, robustness and scalability in single-cell differential expression analysis. Nat. Methods 15, 255–261 (2018). Lähnemann, D. et al. Eleven grand challenges in single-cell data science. Genome Biol. 21, 31 (2020). Cohen, J.
Statistical power analysis for the behavioral sciences. (Hillsdale, 1989). Yang, Y. H. & Speed, T. P. Design and analysis of comparative microarray experiments. Stat. Anal. gene Expr. microarray data 35, 91 (2003). Jung, S.-H. Sample size for FDR-control in microarray data analysis. Bioinformatics 21, 3097–3104 (2005). Pounds, S. & Cheng, C. Sample size determination for the false discovery rate. Bioinformatics 21, 4263–4271 (2005). Liu, P. & Hwang, J. T. G. Quick calculation for sample size while controlling false discovery rate with application to microarray analysis. Bioinformatics 23, 739–746 (2007). Hart, S. N., Therneau, T. M., Zhang, Y., Poland, G. A. & Kocher, J.-P. Calculating sample size estimates for RNA sequencing data. J. Comput. Biol. 20, 970–978 (2013). MathSciNet CAS PubMed PubMed Central Google Scholar Li, C.-I. & Shyr, Y. Sample size calculation based on generalized linear models for differential expression analysis in RNA-seq data. Stat. Appl. Genet. Mol. Biol. 15, 491–505 (2016). MathSciNet CAS PubMed MATH Google Scholar van Iterson, M., van de Wiel, M. A., Boer, J. M. & de Menezes, R. X. General power and sample size calculations for high-dimensional genomic data. Stat. Appl. Genet. Mol. Biol. 12, 449–467 (2013). MathSciNet PubMed MATH Google Scholar Busby, M. A., Stewart, C., Miller, C. A., Grzeda, K. R. & Marth, G. T. Scotty: a web tool for designing RNA-Seq experiments to measure differential gene expression. Bioinformatics 29, 656–657 (2013). Bi, R. & Liu, P. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments. BMC Bioinforma. 17, 146 (2016). Ching, T., Huang, S. & Garmire, L. X. Power analysis and sample size estimation for RNA-Seq differential expression. RNA 20, 1684–1696 (2014). Wu, H., Wang, C. & Wu, Z. PROPER: comprehensive power evaluation for differential expression using RNA-seq. Bioinformatics 31, 233–241 (2015). Poplawski, A. & Binder, H. Feasibility of sample size calculation for RNA-seq studies. Brief. Bioinform. 19, 713–720 (2018). Monaco, G. et al. RNA-Seq signatures normalized by mRNA abundance allow absolute deconvolution of human immune cell types. Cell Rep. 26, 1627–1640.e7 (2019). Wu, A. R. et al. Quantitative assessment of single-cell RNA-sequencing methods. Nat. Methods 11, 41–46 (2014). Svensson, V. et al. Power analysis of single-cell RNA-sequencing experiments. Nat. Methods 14, 381–387 (2017). Ziegenhain, C. et al. Comparative analysis of single-cell RNA sequencing methods. Mol. Cell 65, 631–643.e4 (2017). Hafemeister, C. How Many Cells. https://satijalab.org/howmanycells (2019). Abrams, D., Kumar, P., Karuturi, R. K. M. & George, J. A computational method to aid the design and analysis of single cell RNA-seq experiments for cell type identification. BMC Bioinforma. 20, 275 (2019). Davis, A., Gao, R. & Navin, N. E. SCOPIT: sample size calculations for single-cell sequencing experiments. BMC Bioinforma. 20, 566 (2019). Vieth, B., Ziegenhain, C., Parekh, S., Enard, W. & Hellmann, I. powsimR: power analysis for bulk and single cell RNA-seq experiments. Bioinformatics 33, 3486–3488 (2017). Li, W. V. & Li, J. J. A statistical simulator scDesign for rational scRNA-seq experimental design. Bioinformatics 35, i41–i50 (2019). Su, K., Wu, Z. & Wu, H. Simulation, power evaluation and sample size recommendation for single-cell RNA-seq. Bioinformatics 36, 4860–4868 (2020). Crowell, H. L. et al. 
muscat detects subpopulation-specific state transitions from multi-sample multi-condition single-cell transcriptomics data. Nat. Commun. 11, 6077 (2020). Robinson, M. D., McCarthy, D. J. & Smyth, G. K. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics 26, 139–140 (2010). Law, C. W., Chen, Y., Shi, W. & Smyth, G. K. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 15, R29 (2014). Squair, J. W. et al. Confronting false discoveries in single-cell differential expression. https://doi.org/10.1101/2021.03.12.435024. Kang, H. M. et al. Multiplexed droplet single-cell RNA-sequencing using natural genetic variation. Nat. Biotechnol. 36, 89–94 (2018). Sarkar, A. K. et al. Discovery and characterization of variance QTLs in human induced pluripotent stem cells. PLoS Genet 15, e1008045 (2019). Cuomo, A. S. E. et al. Publisher correction: single-cell RNA-sequencing of differentiating iPS cells reveals dynamic genetic effects on gene expression. Nat. Commun. 11, 1572 (2020). ADS CAS PubMed PubMed Central Google Scholar Mandric, I. et al. Optimized design of single-cell RNA sequencing experiments for cell-type-specific eQTL analysis. Nat. Commun. 11, 5504 (2020). Enge, M. et al. Single-cell analysis of human pancreas reveals transcriptional signatures of aging and somatic mutation patterns. Cell 171, 321–330.e14 (2017). Vieira Braga, F. A. et al. A cellular census of human lungs identifies novel cell states in health and in asthma. Nat. Med. 25, 1153–1163 (2019). Rendeiro, A. F. et al. Chromatin accessibility maps of chronic lymphocytic leukaemia identify subtype-specific epigenome signatures and transcription regulatory networks. Nat. Commun. 7, 11938 (2016). Moreno-Moral, A. et al. Changes in macrophage transcriptome associate with systemic sclerosis and mediate GSDMA contribution to disease risk. Ann. Rheum. Dis. 77, 596–601 (2018). Nicodemus-Johnson, J. et al. DNA methylation in lung cells is associated with asthma endotypes and genetic risk. JCI Insight 1, e90151 (2016). Arda, H. E. et al. Age-dependent pancreatic gene regulation reveals mechanisms governing human β cell function. Cell Metab. 23, 909–920 (2016). Chen, L. et al. Genetic drivers of epigenetic and transcriptional variation in human immune. Cells Cell 167, 1398–1414.e24 (2016). Wolf, F. A. et al. PAGA: graph abstraction reconciles clustering with trajectory inference through a topology preserving map of single cells. Genome Biol. 20, 59 (2019). Baran, Y. et al. MetaCell: analysis of single-cell RNA-seq data using K-nn graph partitions. Genome Biol. 20, 206 (2019). Anders, S. & Huber, W. Differential expression analysis for sequence count data. Genome Biol. 11, R106 (2010). Regev, A. et al. The human cell atlas. Elife 6, e27041 (2017). Dunn, O. J. Multiple comparisons among means. J. Am. Stat. Assoc. 56, 52–64 (1961). MathSciNet MATH Google Scholar GTEx Consortium. The genotype-tissue expression (GTEx) project. Nat. Genet. 45, 580–585 (2013). Zhu, H. & Lakkis, H. Sample size calculation for comparing two negative binomial rates. Stat. Med. 33, 376–387 (2014). MathSciNet PubMed Google Scholar Jaakkola, M. K., Seyednasrollah, F., Mehmood, A. & Elo, L. L. Comparison of methods to detect differentially expressed genes between single-cell populations. Brief. Bioinform. 18, 735–743 (2017). Wang, T., Li, B., Nelson, C. E. & Nabavi, S. 
Comparative analysis of differential gene expression analysis tools for single-cell RNA sequencing data. BMC Bioinforma. 20, 40 (2019). Luecken, M. D. & Theis, F. J. Current best practices in single-cell RNA-seq analysis: a tutorial. Mol. Syst. Biol. 15, e8746 (2019). Chen, W. et al. UMI-count modeling and differential expression analysis for single-cell RNA sequencing. Genome Biol. 19, 70 (2018). Svensson, V. Droplet scRNA-seq is not zero-inflated. https://doi.org/10.1101/582064. Lappalainen, T. et al. Transcriptome and genome sequencing uncovers functional variation in humans. Nature 501, 506–511 (2013). Chen, W. et al. A comparison of methods accounting for batch effects in differential expression analysis of UMI count based single cell RNA sequencing. Comput. Struct. Biotechnol. J. 18, 861–873 (2020). Hernández, A. V., Steyerberg, E. W. & Habbema, J. D. F. Covariate adjustment in randomized controlled trials with dichotomous outcomes increases statistical power and reduces sample size requirements. J. Clin. Epidemiol. 57, 454–460 (2004). Stegle, O., Parts, L., Piipari, M., Winn, J. & Durbin, R. Using probabilistic estimation of expression residuals (PEER) to obtain increased power and interpretability of gene expression analyses. Nat. Protoc. 7, 500–507 (2012). Kahan, B. C., Jairath, V., Doré, C. J. & Morris, T. P. The risks and rewards of covariate adjustment in randomized trials: an assessment of 12 outcomes from 8 studies. Trials 15, 139 (2014). Chen, Y., Lun, A. T. L. & Smyth, G. K. From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline. F1000Res. 5, 1438 (2016). Wolock, S. L., Lopez, R. & Klein, A. M. Scrublet: Computational Identification of Cell Doublets in Single-Cell Transcriptomic Data. Cell Syst. 8, 281–291.e9 (2019). 10X Genomics. User Guides — 10x Genomics. 10x Genomics https://www.10xgenomics.com/resources/user-guides/ (2019). van der Wijst, M. G. P. et al. Single-cell RNA sequencing identifies celltype-specific cis-eQTLs and co-expression QTLs. Nat. Genet. 50, 493–497 (2018). Heinrich, V. et al. The allele distribution in next-generation sequencing data sets is accurately described as the result of a stochastic branching process. Nucleic Acids Res. 40, 2426–2431 (2012). Lafzi, A., Moutinho, C., Picelli, S. & Heyn, H. Tutorial: guidelines for the experimental design of single-cell RNA sequencing studies. Nat. Protoc. 13, 2742–2757 (2018). 10x Genomics. What is the recommended sequencing depth for Single Cell 3′ and 5' Gene Expression libraries? 10X Genomics https://kb.10xgenomics.com/hc/en-us/articles/115002022743-What-is-the-recommended-sequencing-depth-for-Single-Cell-3-and-5-Gene-Expression-libraries- (2020). Heimberg, G., Bhatnagar, R., El-Samad, H. & Thomson, M. Low dimensionality in gene expression data enables the accurate extraction of transcriptional programs from shallow sequencing. Cell Syst. 2, 239–250 (2016). Bourgon, R., Gentleman, R. & Huber, W. Independent filtering increases detection power for high-throughput experiments. Proc. Natl Acad. Sci. USA 107, 9546–9551 (2010). SEQC/MAQC-III Consortium. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control Consortium. Nat. Biotechnol. 32, 903–914 (2014). Regev, A. et al. The human cell atlas white paper. arXiv [q-bio.TO] (2018). Brückl, T. M. et al. 
The biological classification of mental disorders (BeCOME) study: a protocol for an observational deep-phenotyping study for the identification of biological subtypes. BMC Psychiatry 20, 213 (2020). Zheng, G. X. Y. et al. Massively parallel digital transcriptional profiling of single cells. Nat. Commun. 8, 14049 (2017). Wolock, S. L., Lopez, R. & Klein, A.M. Scrublet: computational identification of cell doublets in single-cell transcriptomic data. bioRxiv 1–18 (2018). Wolf, F. A., Angerer, P. & Theis, F. J. SCANPY: large-scale single-cell gene expression data analysis. Genome Biol. 19, 15 (2018). Preprocessing and clustering 3k PBMCs—Scanpy documentation. https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html. Blondel, V. D., Guillaume, J.-L., Lambiotte, R. & Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech.: Theory Exp. 2008, P10008 (2008). MATH Google Scholar Bio-Rad. Cell frequencies in common samples - Flow Cytometry analysis | Bio-Rad. Bio-Rad https://www.bio-rad-antibodies.com/flow-cytometry-cell-frequency.html. fastq-tools. https://homes.cs.washington.edu/~dcjones/fastq-tools/. Zappia, L., Phipson, B. & Oshlack, A. Splatter: simulation of single-cell RNA sequencing data. Genome Biol. 18, 1–15 (2017). Lyles, R. H., Lin, H.-M. & Williamson, J. M. A practical approach to computing power for generalized linear models with nominal, count, or ordinal responses. Stat. Med. 26, 1632–1648 (2007). Korthauer, K. D. et al. A statistical approach for identifying differential distributions in single-cell RNA-seq experiments. Genome Biol. 17, 222 (2016). Macosko, E. Z. et al. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell 161, 1202–1214 (2015). Lun, A. T. L. et al. EmptyDrops: distinguishing cells from empty droplets in droplet-based single-cell RNA sequencing data. Genome Biol. 20, 63 (2019). Schmid, K.T., et al. scPower accelerates and optimizes the design of multi-sample single cell transcriptomic studies. Zenodo. https://doi.org/10.5281/zenodo.5552753. (2021). We thank Thomas Walzthoeni for bioinformatics support provided at the Bioinformatics Core Facility, Institute of Computational Biology, Helmholtz Zentrum München. We thank Elisabeth Graf and Thomas Schwarzmayr for help in sequencing. We thank the BeCOME study team at the Max Planck Institute for Psychiatry, including the BioPrep core unit for their contribution to control individuals recruitment and characterizations, as well as collection of PBMCs. We thank Maren Büttner for insightful discussion and proofreading of the manuscript. H.L. is grateful for support by "ExNet-0041-Phase2-3 ("SyNergy-HMGU")" through the Initiative and Network Fund of the Helmholtz Association. CC is supported by a Banting Postdoctoral Fellowship. F.J.T. acknowledges support by the BMBF (grant # 01IS18036A and grant # 01IS18053A), by the Helmholtz Association (Incubator grant sparse2big, grant # ZT-I-0007) and by the Chan Zuckerberg Initiative DAF (advised fund of Silicon Valley Community Foundation, 182835). M.H. acknowledges support by the Chan Zuckerberg Foundation (CZF Grant #: CZF2019-002431). B.H. is supported by the Helmholtz Association under the joint research school "Munich School for Data Science—MUDS". Open Access funding enabled and organized by Projekt DEAL. Institute of Computational Biology, Helmholtz Zentrum München – German Research Center for Environmental Health, Neuherberg, Germany Katharina T. Schmid, Barbara Höllbacher, Fabian J. 
Theis & Matthias Heinig Department of Informatics, Technical University Munich, Munich, Germany Katharina T. Schmid, Barbara Höllbacher & Matthias Heinig Department of Translational Research, Max Planck Institute for Psychiatry, Munich, Germany Cristiana Cruceanu & Elisabeth B. Binder Institute of Diabetes and Regeneration Research, Helmholtz Diabetes Center, Helmholtz Zentrum München – German Research Center for Environmental Health, Neuherberg, Germany Anika Böttcher & Heiko Lickert German Center for Diabetes Research (DZD), Neuherberg, Germany School of Medicine, Technical University of Munich, Munich, Germany Department of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Georgia, USA Elisabeth B. Binder Department of Mathematics, Technical University Munich, Munich, Germany Fabian J. Theis Katharina T. Schmid Barbara Höllbacher Cristiana Cruceanu Anika Böttcher Heiko Lickert Matthias Heinig K.T.S., B.H. and M.H. conceived the power analysis framework and analyzed the data. M.H., F.J.T., E.B.B. and H.L. designed the scRNA-seq experiment. E.B.B. planned the BeCOME study and recruited the study participants. C.C. and A.B. generated scRNA-seq data in PBMCs. K.T.S., B.H. and M.H. wrote the manuscript with input from all authors. All authors approved the final manuscript. Correspondence to Matthias Heinig. F.J.T. reports receiving consulting fees from Roche Diagnostics GmbH and Cellarity Inc., and ownership interest in Cellarity, Inc. and Dermagnostix. The other authors declare that they have no competing interests. Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Peer Review File Schmid, K.T., Höllbacher, B., Cruceanu, C. et al. scPower accelerates and optimizes the design of multi-sample single cell transcriptomic studies. Nat Commun 12, 6625 (2021). https://doi.org/10.1038/s41467-021-26779-7
Artificial Intelligence Stack Exchange is a question and answer site for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely digital environment. It only takes a minute to sign up. What is a "surrogate model"? In the following paragraph from the book Automated Machine Learning: Methods, Systems, Challenges (by Frank Hutter et al.) In this section we first give a brief introduction to Bayesian optimization, present alternative surrogate models used in it, describe extensions to conditional and constrained configuration spaces, and then discuss several important applications to hyperparameter optimization. What is an "alternative surrogate model"? What exactly does "alternative" mean? terminology definitions hyperparameter-optimization bayesian-optimization surrogate-model yousef yeganeyousef yegane What is Bayesian optimization? Bayesian optimization (BO) is an optimization technique used to model an unknown (usually continuous) function $f: \mathbb{R}^d \rightarrow Y$, where typically $d \leq 20$, so it can be used to solve regression and classification problems, where you want to find an approximation of $f$. In this sense, BO is similar to the usual approach of training a neural network with gradient descent combined with the back-propagation algorithm, so that to optimize an objective function. However, BO is particularly suited for regression or classification problems where the unknown function $f$ is expensive to evaluate (that is, given the input $\mathbf{x} \in \mathbb{R}^d$, the computation of $f(x) \in Y$ takes a lot of time or, in general, resources). For example, when doing hyper-parameter tuning, we usually need to first train the model with the new hyper-parameters before evaluating the specific configuration of hyper-parameters, but this usually takes a lot of time (hours, days or even months), especially when you are training deep neural networks with big datasets. Moreover, BO does not involve the computation of gradients and it usually assumes that $f$ lacks properties such as concavity or linearity. How does Bayesian optimization work? There are three main concepts in BO the surrogate model, which models an unknown function, a method for statistical inference, which is used to update the surrogate model, and the acquisition function, which is used to guide the statistical inference and thus it is used to update the surrogate model The surrogate model is usually a Gaussian process, which is just a fancy name to denote a collection of random variables such that the joint distribution of those random variables is a multivariate Gaussian probability distribution (hence the name Gaussian process). Therefore, in BO, we often use a Gaussian probability distribution (the surrogate model) to model the possible functions that are consistent with the data. In other words, given that we do not know $f$, rather than finding the usual point estimate (or maximum likelihood estimate), like in the usual case of supervised learning mentioned above, we maintain a Gaussian probability distribution that describes our uncertainty about the unknown $f$. The method of statistical inference is often just an iterative application of the Bayes rule (hence the name Bayesian optimization), where you want to find the posterior, given a prior, a likelihood and the evidence. 
In BO, you usually place a prior on $f$, which is a multivariate Gaussian distribution, then you use the Bayes rule to find the posterior distribution of $f$ given the data. What is the data in this case? In BO, the data are the outputs of $f$ evaluated at certain points of the domain of $f$. The acquisition function is used to choose these points of the domain of $f$, based on the computed posterior distribution. In other words, based on the current uncertainty about $f$ (the posterior), the acquisition function attempts to cleverly choose points of the domain of $f$, $\mathbf{x} \in \mathbb{R}^d$, which will be used to find an updated posterior. Why do we need the acquisition function? Why can't we simply evaluate $f$ at random domain points? Given that $f$ is expensive to evaluate, we need a clever way to choose the points where we want to evaluate $f$. More specifically, we want to evaluate $f$ where we are more uncertain about it. There are several acquisition functions, such as expected improvement, knowledge-gradient, entropy search, and predictive entropy search, so there are different ways of choosing the points of the domain of $f$ where we want to evaluate it to update the posterior, each of which deals with the exploration-exploitation dilemma differently. What can Bayesian optimization be used for? BO can be used for tuning hyper-parameters (also called hyper-parameter optimisation) of machine learning models, such as neural networks, but it has also been used to solve other problems. What is an alternative surrogate model? In the book Automated Machine Learning: Methods, Systems, Challenges (by Frank Hutter et al.) that you are quoting, the authors say that the commonly used surrogate model Gaussian process scales cubically in the number of data points, so sparse Gaussian processes are often used. Moreover, Gaussian processes also scale badly with the number of dimensions. In section 1.3.2.2., the authors describe some alternative surrogate models to the Gaussian processes, for example, alternatives that use neural networks or random forests. nbro♦nbro A surrogate model is a simplified model. It is a mapping $y_S=f_S(x)$ that approximates the original model $y=f(x)$, in a given domain, reasonably well. Source: Engineering Design via Surrogate Modelling: A Practical Guide In the context of Bayesian optimization, one wants to optimize a function $y=f(x)$ which is expensive (very time consuming) to evaluate, therefore one optimizes the surrogate model $y_S=f_S(x)$ which is cheaper (faster) to evaluate. Javier-AcunaJavier-Acuna $\begingroup$ Forgive my ignorance, but why exactly is yS=fS(x) faster to evaluate? $\endgroup$ – Goose $\begingroup$ Imagine that the original model is computed from Finite Element simulations (x would be some geometric parameter or material constant for instance and f(x) some quantity of interest) and f_S is a polynomial approximation like a0 + a1x + a2x^2. f(x) can take some hours to evaluate whereas f_S(x) can be calculated pretty fast $\endgroup$ – Javier-Acuna Recently, I've been thinking this question as well. After reading several papers, finally came up with some thoughts about the surrogate model. In FEM(finite element method), we try to find a weak form to approximate the strong form so that we can solve the weak form analytically. (weak form: approximation equation; strong form: PDE in real world) In my opinion, the surrogate model can be regarded as 'weak form'. There are many methods can form a surrogate model. 
And if we use a NN model as the surrogate model, the training process is equivalent to 'solving analytically'. – T.C. Liu
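To make the surrogate-model idea concrete, here is a minimal Bayesian-optimization loop in Python (our own sketch, not taken from any of the answers above; the toy objective, the scikit-learn Gaussian-process surrogate and the expected-improvement acquisition are illustrative choices):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_f(x):
    # toy stand-in for the expensive black-box function f
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(5, 1))            # initial design points
y = expensive_f(X).ravel()

grid = np.linspace(0, 5, 500).reshape(-1, 1)  # candidate points in the domain of f

for _ in range(10):                           # BO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                              # update the surrogate with all evaluations so far
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()                            # we minimize f in this example
    # expected-improvement acquisition function
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0.0] = 0.0
    x_next = grid[np.argmax(ei)]              # point where the acquisition is largest
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_f(x_next))     # one more expensive evaluation

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```

The cheap surrogate (the GP) is queried hundreds of times per iteration, while the expensive function is evaluated only once per iteration, which is the whole point of surrogate-based optimization.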
Mathematica Stack Exchange is a question and answer site for users of Wolfram Mathematica.

How to generate all the combinations with repetition?

I have $K$ variables. Each variable can take any value from a set with $N$ elements. We have $N^K$ possible solutions (permutations with repetition, since at each slot we can choose among the $N$ elements). However, some of these $N^K$ possible solutions will provide the same offered rate (we do not care about the ordering). So, the possible solutions reduce to: $\frac{(K+N-1)!}{K!(N-1)!}$ How can I generate all these possible combinations when $N=7$, $K=20$?

edited May 3, 2019 at 9:54; asked May 3, 2019 at 9:26 by dipak narayanan

Can you please illustrate the desired combinations with a few small examples? – Kiro May 3, 2019 at 9:30

@Kiro, please see my edit. – dipak narayanan

Related: equivalent-nested-loop-structure (combinations_with_replacement) – expression

GroupTheory`Tools`Multisets[Range[n], k] – matrix42

With[{n = 2, k = 3}, Join @@ Table[IntegerPartitions[s, {k}, Range[n]], {s, k, n k}]]

{{1, 1, 1}, {2, 1, 1}, {2, 2, 1}, {2, 2, 2}}

With[{n = 7, k = 20}, Join @@ Table[IntegerPartitions[s, {k}, Range[n]], {s, k, n k}]]

{{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, {2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, ..., {7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6}, {7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7}} (230230 solutions)

– Roman
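For comparison outside Mathematica, and as a quick sanity check of the count, the same multisets can be enumerated with Python's itertools.combinations_with_replacement (our own addition, not part of the original thread):

```python
from itertools import combinations_with_replacement
from math import comb

N, K = 7, 20
values = range(1, N + 1)

# each size-K multiset drawn from N values, ignoring order
combos = list(combinations_with_replacement(values, K))

print(len(combos))            # 230230
print(comb(K + N - 1, K))     # 230230, the (K+N-1)! / (K! (N-1)!) formula
print(combos[0], combos[-1])  # (1, ..., 1) and (7, ..., 7)
```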
Mathematical Control & Related Fields, September 2012, 2(3): 247-270. doi: 10.3934/mcrf.2012.2.247

Controllability of the cubic Schroedinger equation via a low-dimensional source term

Andrey Sarychev, DiMaD, Università di Firenze, via delle Pandette 9, Firenze, 50127, Italy

Received May 2011; Revised November 2011; Published August 2012

We study controllability of the $d$-dimensional defocusing cubic Schroedinger equation under periodic boundary conditions. The control is applied additively, via a source term which is a linear combination of a few complex exponentials (modes) with time-variant coefficients (controls). We prove that by controlling $2^d$ modes one can achieve controllability of the equation in each finite-dimensional projection of the evolution space $H^{s}(\mathbb{T}^d), \ s>d/2$, as well as approximate controllability in $H^{s}(\mathbb{T}^d)$. We also present a negative result regarding exact controllability of the cubic Schroedinger equation via a finite-dimensional source term.

Keywords: Lie extensions, geometric control theory, cubic Schroedinger equation, approximate controllability.

Mathematics Subject Classification: Primary: 93B05; Secondary: 35Q55, 93B27, 93C2.

Citation: Andrey Sarychev. Controllability of the cubic Schroedinger equation via a low-dimensional source term. Mathematical Control & Related Fields, 2012, 2 (3): 247-270. doi: 10.3934/mcrf.2012.2.247
TFAP2A is a component of the ZEB1/2 network that regulates TGFB1-induced epithelial to mesenchymal transition Yoana Dimitrova1, Andreas J. Gruber1, Nitish Mittal1, Souvik Ghosh1, Beatrice Dimitriades1, Daniel Mathow3, William Aaron Grandy1, Gerhard Christofori2 and Mihaela Zavolan1Email authorView ORCID ID profile Biology Direct201712:8 Received: 17 November 2016 Accepted: 22 March 2017 The transition between epithelial and mesenchymal phenotypes (EMT) occurs in a variety of contexts. It is critical for mammalian development and it is also involved in tumor initiation and progression. Master transcription factor (TF) regulators of this process are conserved between mouse and human. From a computational analysis of a variety of high-throughput sequencing data sets we initially inferred that TFAP2A is connected to the core EMT network in both species. We then analysed publicly available human breast cancer data for TFAP2A expression and also studied the expression (by mRNA sequencing), activity (by monitoring the expression of its predicted targets), and binding (by electrophoretic mobility shift assay and chromatin immunoprecipitation) of this factor in a mouse mammary gland EMT model system (NMuMG) cell line. We found that upon induction of EMT, the activity of TFAP2A, reflected in the expression level of its predicted targets, is up-regulated in a variety of systems, both murine and human, while TFAP2A's expression is increased in more "stem-like" cancers. We provide strong evidence for the direct interaction between the TFAP2A TF and the ZEB2 promoter and we demonstrate that this interaction affects ZEB2 expression. Overexpression of TFAP2A from an exogenous construct perturbs EMT, however, in a manner similar to the downregulation of endogenous TFAP2A that takes place during EMT. Our study reveals that TFAP2A is a conserved component of the core network that regulates EMT, acting as a repressor of many genes, including ZEB2. This article has been reviewed by Dr. Martijn Huynen and Dr. Nicola Aceto. Epithelial-to-mesenchymal transition Transcription regulatory network TFAP2A ZEB2 TGFb1 NMuMG The epithelial to mesenchymal transition (EMT) is defined as the process in which cells that display predominantly epithelial features transition to a state in which they exhibit mesenchymal characteristics. EMT has well-established and important roles in different stages of embryonic development: it is observed during gastrulation, in the generation of the primitive mesoderm, during neural crest (NC) formation, and in the development of many organs such as heart valves, skeletal muscle, and the palate [1]. EMT-like phenomena were also described in adult organisms, as part of normal developmental changes, as well as during pathological processes [2]. For example, during breast development, an EMT-like program referred to as epithelial plasticity is thought to be part of branching morphogenesis, which leads to the formation of the complex ductal tree [3]. Recent findings suggest that an EMT program may increase the "stemness" potential of epithelial cells [4]. The mammary gland epithelium is composed of an internal luminal layer, and an external, basal layer of myoepithelial cells. Recent studies suggest that these different cell types derive from a common stem cell, through a process that involves epithelial plasticity [5, 6]. Whereas this process is very well coordinated in normal development, its dysregulation in cancer leads to outcomes that are difficult to predict [3]. 
While the majority of experimental results indicate that manipulating EMT also affects cancer metastasis, recent reports on cancer cells circulating in the blood stream or resulting from genetic lineage tracing have questioned a critical role of EMT in the formation of metastases, but have demonstrated a role in chemotherapy resistance [7–9]. In breast cancer, it is believed that EMT affects the basal epithelial phenotype and is responsible for an increased metastatic potential [10]. The TFAP2A transcription factor (TF) is expressed early in embryogenesis, where it contributes to cell fate determination in the formation of the neural crest and the epidermis. The knockout of Tfap2a in mouse is lethal due to neural crest formation defects [11]. In humans, mutations in TFAP2A have been linked to the developmental defects in the Branchio-Oculo-Facial Syndrome (BOFS) [12]. TFAP2A is a member of the AP-2 family of TFs, which in humans and mice is composed of five members, TFAP2A, TFAP2B, TFAP2C, TFAP2D and TFAP2E, or AP-2α, AP-2β, AP-2γ, AP-2δ and AP-2ε, respectively. These proteins share important sequence similarities and have a specific structural organization with a proline and glutamine-rich trans-activation domain located at the N-terminus, a central region with positively-charged amino acids, and a highly conserved helix-loop-helix region at the C-terminus. The last two domains are involved in DNA binding and dimerization, the proteins being able to form hetero- or homo-dimers [13]. The TFAP2A gene is composed of seven exons. In mice, four different isoforms have been described [14]. Systemic Evolution of Ligand by EXponential enrichment (SELEX)-based, in vitro assays, have determined that AP-2α binds to the palindromic motif GCCN3GGC and to some close variants, GCCN4GGC, GCCN3/4GGG [15]. More recent ChIP-seq experiments inferred SCCTSRGGS and SCCYSRGGS (S = G or C, R = A or G and Y = C or T) as the consensus sites for human AP-2γ and AP-2α, respectively [16]. In the adult mammary gland, TFAP2A is expressed in virgin and pregnant mice. Its mRNA and protein are detected at the terminal end buds and also in the ductal epithelium, predominantly in the luminal cell population [17]. Targeted overexpression of TFAP2A and TFAP2C in the mouse mammary gland results in lactation deficiency, increased proliferation and apoptosis, reduced alveolar budding and differentiation [17, 18]. Knockout of the TFAP2C paralog of TFAP2A in mouse mammary luminal cells results in an increased number of terminal end buds with reduced distal migration [19]. Aberrant expression of TFAP2A has been observed in various cancers. It is overexpressed in human nasopharyngeal carcinoma and is involved in tumorigenesis by targeting the HIF-1α/VEGF/PEDF pathway [20]. In contrast, reduced AP-2α expression was reported to be associated with poor prognosis in gastric adenocarcinoma [21]. The loss of TFAP2A is connected with the acquisition of the malignant phenotype in melanoma through regulation of cell adhesion molecules (ALCAM) [22]. TFAP2A expression was found to be less organized in breast cancer compared to normal mammary gland and it is associated with HER2/ErbB-2 and ERα expression [23]. To define conserved EMT regulatory networks, we started by analyzing seven mouse and human datasets obtained from EMT systems, altogether containing thirty-six mRNA sequencing samples. 
We found that TFAP2A is one of the factors that contribute most significantly to mRNA-level expression changes that take place during embryonic stem cell (ESC) differentiation to mesoderm or to NC cells, during normal mammary gland development, and most importantly, in breast cancer models. To investigate TFAP2A's involvement in EMT we used mouse mammary gland epithelial cell line NMuMG, a well-known model of EMT [24]. We demonstrate, for the first time, that the expression and activity of Tfap2a are modulated during TGFβ1-induced transdifferentiation of these cells. We further show that TFAP2A directly binds to the Zeb2 promoter, modulating its transcriptional output. TFAP2A overexpression in NMuMG cells results in increased levels of EMT-inducing TFs, and promotes an EMT-like phenotype. Our study sheds a new light on the role of TFAP2A in processes that involve EMT, including breast cancer, and it contributes to a deeper understanding of the molecular and cellular mechanism of cancer development and metastasis. Expression vectors and constructs Mouse TFAP2A cDNA was kindly provided by Prof. Qingjie [25]. The TFAP2A-FLAG fusion was subcloned into pDONR201 plasmid, using a Gateway® BP Clonase® II Enzyme mix (#11789-020, Life Technologies) and it was further subcloned into pCLX vector, using Gateway® LR Clonase® II Enzyme mix (#11791-020, Life Technologies). We used a subclone of NMuMG cells that was generated as previously described (NMuMG/E9) [24]. Cells were cultured in Dulbecco's modified Eagle's medium (DMEM #D5671, Sigma Aldrich) with high glucose and L-glutamine, supplemented with 10% fetal bovine serum (#f-7524, Sigma-Aldrich) and where indicated were treated with 2 ng/ml TGFβ1 (#240-B, R&D Systems). Transient transfection was done using Lipofectamine2000 (#11668-019, Life Technologies) according to the manufacturer's instructions. For time course experiments, cells were grown in six well plates for up to 14 days and treated with 2 ng/mL TGFβ1. In addition, NMuMG pCLX-TFAP2A or NMuMG pCLX-GFP cells induced with 2 μg/mL of doxycycline for 6 days, and further treated or not treated with TGFβ1 for 72 hours were used to study the effect of TFAP2A overexpression. Lentiviral infection Stable populations of NMuMG cells expressing the blasticidine-resistant marker together with TFAP2A-FLAG under a doxycycline-inducible promoter were obtained with the pCLX expression system [26]. Lentiviral particles were produced in HEK293-LV cells using the helper vectors pMDL, pREV and the envelope-encoding vector pVSV. For infection, viral supernatants were added to target cells in the presence of polybrene (#TR-1003-G, Millipore) (1 μg/ml). Cells were further incubated at 37 °C under 5% CO2 in a tissue culture incubator for 72 h, prior to selection with blasticidine at 10 μg/ml (#15205-25 mg, Sigma-Aldrich). Light microscopy and immunofluorescence Cells were treated with doxycycline or TGFβ1 for the indicated times, and were grown on gelatin coated glass coverslips. Cells were fixed with 4% paraformaldehyde in 1x PBS for 15 min (Fig. 2a, b). They were later permeabilized and blocked for 30 min with 0.1% Triton X-100 (#T8787, Sigma-Aldrich), 10% goat serum (#16210072, Gibco®, Life Technologies), and 1% BSA (#A9647, Sigma-Aldrich) in PBS (#20012-019, Gibco®, Life Technologies). 
Afterwards, the coverslips were incubated with the indicated primary antibodies overnight at 4 °C, and then with Alexa Fluor 488,647 conjugated secondary antibodies, (Molecular Probes, Life Technologies), for one hour at room temperature. Where appropriate, Acti-stain™ 555‬ (#PHDH1‬, Cytoskeleton)‬ diluted 1:200 was added together with secondary antibody stain. The coverslips were mounted with VECTASHIELD™ DAPI Mounting Media (Vector Laboratories) on microscope slides and imaged with a confocal microscope (Zeiss LSM 700 Inverted). Quantitative real-time reverse transcription PCR Total RNA was extracted with TRI Reagent® (#T9424, Sigma-Aldrich) and further purified with Direct-zol™ RNA MiniPrep kit (#R2050, Zymo Research). Reverse transcription was performed with SuperScript® III Reverse Transcriptase (#18080-044, Life Technologies) according to the manufacturer's instructions. For qPCR, 8 ng of cDNA was used in a reaction with Power SYBR® Green PCR Master Mix (#4367659, Applied Biosystems). Gene expression changes are normalized to the expression of the house-keeping genes Gapdh and Rplp0. mRNA sequencing For the mRNA-seq library preparation, a well of a 6-well plate of NMuMG cells was used, either treated with growth factor and/or doxycycline, or with control reagents for the indicated times. mRNA-seq libraries were prepared as already described [27]. Chromatin immunoprecipitation (ChIP), sequencing library preparation and data analysis The ChIP protocol was adapted from [28]. Cells were crosslinked in fixing buffer (50 mM HEPES pH 7.5, 1 mM EDTA pH 8.0, 0.5 mM EGTA pH 8.0, 100 mM NaCl, 1% formaldehyde) for 10 min with continuous rocking at room temperature (RT), and then quenched with 125 mM glycine for 5 min. Cells were washed three times with cold PBS and collected by scrapping. Nuclei were isolated, and lysed to obtain crosslinked chromatin. Simultaneously, the antibody was coupled with protein G magnetic beads (#88848, Pierce™) by incubating 100 μl of protein G beads with 10 μg of TFAP2A-specific antibody (Novus) and 10 μg of rabbit IgG (#PP64, Millipore) as a negative control, for minimum 1 h at RT with continuous rotation. A probe sonicator was then used in cold conditions to reduce heating, for six cycles of 30 s pulse-on at amplitude value of 60 and 1 min and 15 s pulse-off to obtain chromatin fragments of 100–500 bp followed by centrifugation at 20,000 g for 10 min at 4 °C to get rid of nuclear debris. Further, 3% chromatin was kept as input control from each sample and an equal amount (around 750–1000 μg) of chromatin was incubated with magnetic beads-coupled antibody at 4 °C overnight with continuous rotation. Immuno-complexes were washed with 1 mL of wash buffers as described in the original protocol. Samples of washed immuno-complexes along with the input were further treated with RNase and then with proteinase K followed by overnight reverse crosslinking at 65 °C with continuous shaking at 1400 rpm in a thermoblock with heating lid. DNA was purified using Agencourt AMPure XP (#A63880, Beckman Coulter) beads as detailed in the reference. The enrichment of specific target genes was quantified by qRT-PCR, comparing the TFAP2A-ChIP with the IgG negative control. Libraries of ChIPed and input DNA were prepared according to the instruction manual of NEBNext® ChIP-Seq Library Prep Reagent Set for Illumina. In brief, end repair of input and ChIPed DNA was done by incubating with T4 DNA Polymerase Klenow fragment and T4 PNK enzyme at 20 °C for 30 min. 
The reaction was purified using Ampure beads according to the instruction manual. An A nucleotide overhang at the 3' end was produced by treating the end repaired DNA with dATP and Klenow Fragment (3´ → 5´ exo−) at 37 °C for 20 min followed by DNA purification. Double stranded DNA adapters were ligated to dA overhang DNA by T4 DNA ligase reaction at 37 °C for 30 min followed by DNA purification and size selection as described in the instruction manual. Size selected DNA was PCR-amplified for 16 cycles using NEBNext® High-Fidelity 2X PCR Master Mix with Illumina universal forward primer and indexed reverse primer, that enabled multiplexing of samples for sequencing. Amplified DNA was finally purified and sequenced on an Illumina Hiseq2500 instrument. The obtained sequencing reads were mapped to the genome and visualized within the clipz genome browser (www.clipz.unibas.ch). Antibodies and reagents We used primary antibodies against the following proteins: TFAP2A (#sc-12726, Santa Cruz Biotechnology) for Western Blot (WB) and TFAP2A (#NBP1-95386, Novus Biologicals, Bio-Techne) for immunofluorescence and immunoprecipitation, actin (#sc-1615, Santa Cruz Biotechnology), E-cadherin (#610181, BD Transduction Laboratories), N-cadherin (#610921, BD Transduction Laboratories), Fibronectin (#F3648, Sigma-Aldrich), GAPDH (#sc-32233, Santa Cruz Biotechnology), vimentin (#v2258, Sigma-Aldrich). Recombinant human TGFβ1 was obtained from R&D Systems. Electrophoretic Mobility Shift Assay (EMSA) TnT® T7 Quick Coupled Transcription/Translation System (#L1171, Promega) was used to express in vitro translated TFAP2A from the pcDNA3-TFAP2A construct. Double-stranded oligonucleotide probes were end-labeled with 32P and purified on autoseq G-50 columns (#27-5340-01, Amersham). Binding reactions containing probe, TFAP2A protein, poly (dI-dC) (#81349, Sigma-Aldrich) non-specific competitor in gel retention buffer (25 mM HEPES pH 7.9, 1 mM EDTA, 5 mM DTT, 150 mM NaCl, 10% Glycerol) and electrophoresis were carried out as described previously [29]. Combined motif activity response analysis The datasets used in the following analysis are listed in Additional file 1: Table S1. We applied the ISMARA tool to each dataset as previously described [30]. Briefly, the Motif Activity Response Analysis (MARA) infers the activity of regulatory motifs from the number of binding sites of each motif m in each promoter p (N m,p ) and the genome-wide expression driven by these promoters p in samples s (E p,s ): $$ {E}_{p, s}={\tilde{c}}_s+{c}_p+{\displaystyle \sum_m}{N}_{m, p}{A}_{m, s} $$ where \( {\overset{\sim }{c}}_s \) represents the mean expression in sample s, \( {c}_p \) is the basal expression of promoter p, and \( {A}_{m, s} \) is the (unknown) activity of motif m in sample s. To identify motifs that consistently change in activity across datasets we used a computational strategy as previously described [31]. In brief, first we obtained the average activities over the replicates of each condition in every dataset. Next, because the range of gene expression levels and consequently the motif activities varied across datasets, we re-centered and then standardized the averaged motif activities \( {\overline{A}}_{m, g}^{*} \) and corresponding errors \( {\overline{\sigma}}_{m, g}^{*} \), belonging to a specific condition g. 
To standardize the activities in a given dataset, with the epithelial-like condition labeled a and the mesenchymal-like condition labeled b, we defined a scaling factor

$$ S = \sqrt{\frac{\left(\overline{A}_{m,g}^{*b}\right)^2 + \left(\overline{A}_{m,g}^{*a}\right)^2}{2}} $$

and then rescaled the activities \( \tilde{A}_{m,g}^{*} = \overline{A}_{m,g}^{*} / S \) and the corresponding errors \( \tilde{\sigma}_{m,g}^{*} = \overline{\sigma}_{m,g}^{*} / S \). Subsequently, we separated the condition-specific, averaged and rescaled activities (\( \tilde{A}_{m,g}^{*} \)) and errors (\( \tilde{\sigma}_{m,g}^{*} \)) obtained from different datasets into two groups, depending on whether they originated from epithelial-like cells (a) or mesenchymal-like cells (b). We averaged activities belonging to the same group as done for sample replicates before (see above and [31]). Finally, to rank motif activity changes during EMT we calculated for every motif m a z-score by dividing the change in averaged activities by the averaged errors:

$$ z = \frac{\overline{A}_{m,g}^{*b} - \overline{A}_{m,g}^{*a}}{\sqrt{\left(\overline{\sigma}_{m,g}^{*b}\right)^2 + \left(\overline{\sigma}_{m,g}^{*a}\right)^2}} $$

Constructing motif-motif interaction networks

ISMARA predicts potential targets for each motif m by calculating a target score R as the logarithm of the ratio of two likelihoods: the likelihood of the data D assuming that a promoter p is a target of the motif, and the likelihood of the data assuming that it is not:

$$ R = \log\left(\frac{P(D \mid \text{target promoter})}{P(D \mid \text{not target promoter})}\right) $$

The posterior probability p that a promoter is a target given the data, assuming a uniform prior of 0.5, is given by \( p = \frac{1}{1 + e^{-R}} \). To construct motif-motif interactions, we focused on those transcription regulators whose regulatory regions were consistently (within all datasets) predicted by ISMARA to be targeted by motifs of other regulators. We obtained a combined probability \( p_{comb} \) that a regulator is a target of a particular motif m across I different datasets by calculating the product of the probabilities obtained from the individual datasets:

$$ p_{comb} = \prod_{i=1}^{I} p_i $$

GOBO analysis

The top 100 target genes of the TFAP2{A,C}.p2 motif as derived by applying ISMARA to the Neve et al. data set [32] were analyzed with the Gene Expression-Based Outcome for Breast Cancer Online (GOBO) tool [33]. For each gene, only the promoter with the highest ISMARA target score was considered for the analysis.

Estimating gene expression log2 fold changes from mRNA sequencing data

For each sample s, the expression values driven by each promoter of a gene g (determined by ISMARA, see above) were summed up to estimate the expression of gene g in sample s. Log2 gene expression fold changes were then calculated for TGFβ1-treated pCLX-GFP (pCLX-GFP + TGF-beta), pCLX-TFAP2A (pCLX-TFAP2A), and TGFβ1-treated pCLX-TFAP2A (pCLX-TFAP2A + TGF-beta) cell lines relative to the pCLX-GFP (pCLX-GFP) control cells.
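The standardization and z-score combination described above reduce to a few array operations. The sketch below is our own illustration (it is not part of the ISMARA pipeline; plain averaging is used where the original analysis averages as for replicates, and the toy numbers are invented):

```python
import numpy as np

def combined_emt_zscore(activities, errors):
    """activities, errors: arrays of shape (n_datasets, 2) holding the
    replicate-averaged activity of one motif (and its error) in the
    epithelial-like (column 0) and mesenchymal-like (column 1) condition
    of each dataset."""
    A = np.asarray(activities, dtype=float)
    E = np.asarray(errors, dtype=float)
    # per-dataset scaling factor S = sqrt((A_b^2 + A_a^2) / 2)
    S = np.sqrt((A[:, 1] ** 2 + A[:, 0] ** 2) / 2)
    A_scaled = A / S[:, None]
    E_scaled = E / S[:, None]
    # pool the rescaled values over datasets, per condition (plain mean here)
    A_bar = A_scaled.mean(axis=0)   # [epithelial, mesenchymal]
    E_bar = E_scaled.mean(axis=0)
    # z-score: change in averaged activities over the combined averaged errors
    return (A_bar[1] - A_bar[0]) / np.sqrt(E_bar[1] ** 2 + E_bar[0] ** 2)

# toy example: one motif measured in three EMT datasets (epithelial, mesenchymal)
acts = [[-0.5, 0.8], [-0.2, 0.4], [-0.9, 1.1]]
errs = [[0.2, 0.2], [0.1, 0.1], [0.3, 0.3]]
print(round(combined_emt_zscore(acts, errs), 2))
```

A large positive z-score for a motif such as TFAP2{A,C} indicates that its predicted targets are consistently up-regulated in the mesenchymal-like conditions across datasets.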
TFAP2A/C motif activity increases upon EMT in both mouse and human systems Aiming to identify major regulators of EMT and to further construct a conserved network of their interactions, we used the Motif Activity Response Analysis (MARA) approach, which combines high-throughput measurements of mRNA expression with computational prediction of regulatory elements [30]. The published ISMARA tool [30] allows not only the automated analysis of individual data sets, but also the inference of motifs that most generally explain gene expression changes across multiple experiments. The results from the combined MARA analysis of different EMT mRNA expression datasets from breast epithelial cell lines of mouse and human, and from the differentiation of human pluripotent stem cells into NC cell and mesoderm (Additional file 1: Table S1) are shown in Fig. 1 [34–40]. How much a given motif contributes to the observed gene expression changes is quantified in terms of a combined z-score, which in our case represents the significance of the motif activity change between the epithelial and mesenchymal cell types (denoted by the intensity of the color in Fig. 1a and b and listed in Additional file 1: Tables S2 and S3). Based on the genome-wide computational prediction of binding sites for transcription regulators we can further infer motif interaction networks. In Fig. 1, an arrow is drawn between two motifs A and B when any of the regulators that recognizes motif B is a predicted target of motif A. The motif interaction networks derived from mouse and human EMT models suggest that only a small fraction of the TFs has a highly conserved and significant role in both species. The core transcriptional network of EMT, containing the TFs Zeb1, Zeb2 and Snai1, is conserved, as expected. The motifs that correspond to these factors have negative activity changes during EMT (represented by the blue color on the scheme) which indicates that the expression of their targets decreases, as expected from their known repressive function during the process [41]. The TFAP2A/C motif is also a conserved component of both mouse and human EMT networks. Its target genes are upregulated during EMT (reflected by the red color in the figure) and thus the motif itself is predicted to have a highly significant positive change in activity. Furthermore, in both human and mouse systems, the TFAP2A/C motif is predicted to target both Zeb1 and Zeb2 TFs (Fig. 1a and b). The transcriptional networks inferred from different EMT systems. Motif–motif interaction networks derived from mouse (a) and human (b) datasets. An arrow was drawn from a motif A to a motif B if motif A was consistently (across datasets from the corresponding species) predicted to regulate a transcriptional regulator b that is known to bind motif B. The probability product that A targets b is reflected by the thickness of the line. For readability, only motifs with an absolute z-score > 2.0 and having at least one interaction with another such motif (with a target probability product > 0.35 for human and > 0.15 for mouse) are depicted. The color intensity of the nodes representing motifs is proportional to the significance of the motif given by its z-score. Red indicates increased and green indicates decreased activity upon EMT TFAP2A expression and activity changes in EMT and breast cancer We made use of the murine mammary gland cell line NMuMG to further investigate the role of the AP-2 family members TFAP2A and TFAP2C in EMT. 
Upon induction with TGFβ1, NMuMG cells undergo EMT, which manifests itself through E-cadherin downregulation, formation of actin stress fibers and an elongated, mesenchymal-like cell shape (Fig. 2a, b and [36]). mRNA-seq revealed that of the five members of the AP-2 family, only Tfap2a is expressed in this system, with reads covering all its exons (Additional file 1: Figure S1). Immunofluorescence staining of endogenous TFAP2A demonstrated that the protein has a predominantly nuclear localization (Fig. 2a, b). At 48 h after the TGFβ1 stimulation we observed that Tfap2a mRNA levels decreased moderately and further declined during the 14-day time course, while common EMT markers such as E-cadherin, Fibronectin and Vimentin followed the expected trend (Fig. 2c). TFAP2A expression and activity profile in the NMuMG EMT model. a-b NMuMG cells were treated with 2 ng/mL of TGFβ1 for 72 h and were stained for TFAP2A and F-Actin (a) and TFAP2A and E-cadherin (b). The merged panels represent colocalization of the imaged markers with the nucleus, which was stained with DAPI, and compared to controls. Scale bar represents 50 μm. c NMuMG cells were treated for 14 days with 2 ng/mL of TGFβ1. Quantitative RT-PCR of Tfap2a during the time course of this treatment indicates that Tfap2a mRNA levels are reduced upon EMT. The EMT markers E-cadherin (Cdh1), Fibronectin (Fn1), Occludin (Ocln), and Vimentin (Vim) follow the expected trend. d Two mRNA-seq samples from independent wells were prepared from a time course of NMuMG cells treated for 14 days with 2 ng/mL of TGFβ1, and the data was subsequently analyzed with ISMARA [30]. The figure depicts the dynamics of TFAP2A/C transcriptional activity during the time course. The sequence logo of the TFAP2A/C binding motif is also indicated. e-f Lysates from NMuMG/E9 cells treated with 2 ng/mL of TGFβ1 for 72 h were probed for TFAP2A, GAPDH and Lamin B expression by WB and their levels compared with the expression levels of Actin and also to the Ponceau-stained membrane (e). The bar plot represents the densitometric quantification of the TFAP2A protein levels upon treatment compared to the control (f). ** indicates a p-value < 0.01 in the paired t-test (P = 0.0014). We next generated mRNA-seq data from a 14-day time course of NMuMG cells stimulated with 2 ng/mL TGFβ1. Applying ISMARA to these data revealed the dynamics of TFAP2A activity during the entire length of the time course (Fig. 2d). As the paralogous TFAP2A and TFAP2C bind similar sequences, we refer to their shared binding motif as TFAP2{A,C}. In contrast to its mRNA expression (Fig. 2c), the TFAP2A transcriptional activity, reflected in the behavior of its targets, increases during EMT (Figs. 1 and 2d). This indicates that TFAP2A probably acts as a repressor in this context. Although Tfap2a transcript levels and the TFAP2{A,C} motif activity exhibit a clear negative correlation, we observed the highest increase in activity in the first 6 h of treatment, while the changes in Tfap2a mRNA were delayed until a later time point. This may indicate that Tfap2a is regulated at the protein level. Considering that a rapid reduction of the active form of a regulator (here within 6 h) can only be achieved by post-translational mechanisms such as phosphorylation and/or targeted protein decay, the delayed response at the mRNA level appears coherent [42, 43]. Consistent with the changes observed at the mRNA level, TFAP2A protein levels tend to decrease in the first 72 h after the TGFβ1 treatment (Fig.
2e and f). To gain further insight into the relationship between TFAP2A expression and activity, we examined the mRNA expression data that was previously generated from human breast cancer cell lines [32]. The Neve et al. data set contained 51 samples that were separated in three categories according to their transcriptomic signature. Using the GOBO online tool we found that TFAP2A expression is reduced in the basal B breast cancer cell lines (Fig. 3a), which have a higher expression of the mesenchymal markers compared to the basal A type cell lines (Additional file 1: Figure S6). This is consistent with our observations in the mouse cell line [33]. We also analyzed the Neve et al. dataset [32] in ISMARA to identify the most significant TFAP2{A,C} targets, based on their ISMARA-provided z-score. Using the top 100 TFAP2{A,C} targets as input for the GOBO tool, we found that their expression is significantly increased in the basal B sub-type (Fig. 3b). Thus, we found a strikingly consistent negative correlation between TFAP2A mRNA and the expression of its transcriptional targets in the Neve et al. dataset, as well as in the data that we obtained in the NMuMG model. Remarkably, in a large panel of breast tumor datasets originating from more than 1500 patients, the expression of TFAP2A mRNA is also downregulated in the basal sub-type cancer category (Fig. 3c) [33]. More generally, using mRNA expression data from The Cancer Genome Atlas, we found that the expression of TFAP2A is positively correlated with that of epithelial markers and negatively correlated with that of mesenchymal markers, in normal breast tissue samples as well as in samples from breast tumors (Additional file 1: Figure S7). TFAP2A expression and activity in breast cancers. Box plots of TFAP2A gene expression (a) and expression levels of the top 100 ISMARA-inferred TFAP2A targets (b) in a panel of breast cancer cell lines grouped in the basal A (red), basal B (grey) and luminal (blue) subgroups based on the annotation from Neve et al. [32]. c Box plot of TFAP2A gene expression for tumor samples stratified according to PAM50 subtypes [57]. All plots were generated with the GOBO online tool [33] TFAP2A binds directly to the Zeb2 promoter region In addition to the significant activity change of the TFAP2{A,C} motif activity in human and mouse EMT systems (Fig. 1a and b), the interaction of the TFAP2{A,C} and ZEB1,2 motifs was also conserved in the EMT networks of both species. Our analysis predicted that TFAP2{A,C} controls the expression of ZEB1 and ZEB2 genes in both systems. The Zeb2 target has a higher score than Zeb1 in NMuMG cells (target scores from the initial ISMARA analysis were 0.7 for ZEB1 and 0.51 for ZEB2 in human, and 0.18 and 0.52, respectively in mouse). To validate the interaction between TFAP2A and the Zeb2 promoter we performed an Electrophoretic Mobility Shift Assay (EMSA). From the SwissRegulon database of transcription factor binding sites that were predicted based on evolutionary conservation (www.swissregulon.ch), we found that the region around the second exon of the Zeb2 gene, in which the ATG start codon resides, contains seven clusters of consensus binding sites for TFAP2{A,C} with a relatively high posterior probability. The corresponding region is represented in Fig. 4a. 
Two transcription start sites (TSS), annotated in the SwissRegulon, based on cap analysis of gene expression (CAGE) data [44], are in close proximity to the TFAP2{A,C} binding sites, in the intronic region between the first and the second exon (Fig. 4a) [44]. To confirm that the TFAP2A TF binds to the predicted sites, we carried out EMSA with radiolabeled oligonucleotides, each spanning one of the predicted binding sites (Fig. 4a and b). In the presence of the broad competitor poly-dI-dC, most of the probes give a shift upon addition of TFAP2A. The addition of an excess of cold probes containing the same binding sites (Wt) results in a reduction of the shifted radiolabelled oligonucleotides, indicating competition for specific binding. This is further demonstrated by the fact that only a few probes, indicated with red arrows, restored their shift in the presence of cold competitors that contained mutated versions of TFAP2A binding sites (M) (Fig. 4b). TFAP2A binds directly to the Zeb2 promoter region. a Sketch of the region around the second exon of mouse Zeb2, showing the two transcription start sites found in SwissRegulon [44]. The blue filled box indicates the non-coding untranslated region (UTR) in exon 2, while the white filled box designates the start of the coding region (CDS). The predicted TFAP2A binding sites from SwissRegulon are marked with red arrows, and the probes that were used in (b) are indicated with green lines below the gene structure. Predicted transcription start sites (TSS) are also indicated. b Radiography of TFAP2A Electrophoretic Mobility Shift Assay (EMSA) with radiolabeled oligonucleotides, each spanning one of the predicted binding sites. The presence or absence of TFAP2A protein in the assay is indicated by a + or – sign, respectively. Cold competitors were used at 200-fold excess over the radiolabelled probes. Wt corresponds to unlabeled probe; M indicates a double-stranded oligonucleotide with a mutated TFAP2A binding site. Red arrows indicate the predicted TFAP2A binding probes that behave as expected from specific binding of TFAP2A. c TFAP2A ChIP was performed in NMuMG cells stably transduced with pCLX-TFAP2A (denoted as TFAP2A-OE (blue)) or with pCLX-GFP (denoted as TFAP2A-GFP (green)) viral vectors and further treated with 2 μg/mL doxycycline. Quantitative PCR data shows the enrichment of the Zeb2 promoter relative to a non-transcribed genomic region in the TFAP2A ChIP normalized to the IgG control (red). Two independent experiments were performed for each condition and shown are means and standard deviations. The one-tailed paired t-test indicates that TFAP2A is significantly enriched at the Zeb2 promoter (** for p < 0.01). d ChIP-seq libraries from TFAP2A ChIP or input chromatin were generated and the coverage of the genomic region spanning the second exon of Zeb2 by reads is shown in a mouse genome browser (www.clipz.unibas.ch and [45]). The results of two independent experiments are presented. In the TFAP2A ChIP-seq, the Zeb2 promoter region previously assessed by qPCR is enriched with respect to the input control sample. Mapping, annotation and visualization of deep-sequencing data was done with the ClipZ server [45]. To validate this regulatory interaction in NMuMG cells we have generated a stable cell line in which the overexpression of TFAP2A can be induced with doxycycline (see Methods; Additional file 1: Figure S2).
As a control we established a similar cell line using an expression construct in which the TFAP2A coding region (CDR) was replaced by green fluorescent protein (GFP) CDR. Using an antibody that recognizes the endogenous TF we further confirmed that TFAP2A binds to the Zeb2 promoter region by TFAP2A-chromatin immunoprecipitation (ChIP) followed by quantitative PCR: the Zeb2 promoter was significantly enriched in the TFAP2A-ChIP from cell lines expressing either exogenously-encoded TFAP2A (p = 0.005). Cells expressing only endogenous TFAP2A also showed an enrichment of the the Zeb2, albeit not to the same level of significance (p = 0.06) (Fig. 4c). Visualization of ChIP-seq data that we also obtained in this system, with the CLIPZ genome browser (www.clipz.unibas.ch) [45], confirms the presence of a peak in the predicted binding region that is only present in the TFAP2A-ChIP sample, but not in the Input controls (Fig. 4d) or the IgG (not shown). Overall, these results confirm that TFAP2A directly interacts with the Zeb2 promoter, both in vitro as well as in the NMuMG cell line. TFAP2A overexpression in NMuMG modulates epithelial plasticity Finally, we used the above-mentioned cell lines to investigate the consequences of perturbed TFAP2A expression. Induced expression of TFAP2A, but not GFP, in untreated NMuMG cells led to morphological changes visible in phase contrast microscopy (Fig. 5a); compared to GFP-expressing cells, TFAP2A-expressing cells lose their epithelial polygonal cell shape and disperse on the plate. Consistently, qRT-PCR showed that adhesion-related genes were specifically deregulated upon TFAP2A induction (Additional file 1: Figure S3a and S3b). As expected, the treatment of GFP-expressing cells with TGFβ1 for 3 days leads to the induction of EMT markers Snai1, Zeb2 and Vim. The expression of endogenous Tfap2a decreases upon the treatment of GFP-expressing NMuMG cells with TGFβ1. However, the induction of TFAP2A expression in the absence of TGFβ1 treatment appears to promote the expression of core EMT TFs such as Snai1, and Zeb2 (Fig. 5b and Additional file 1: Figure S3c), without affecting the expression of E-cadherin at the mRNA level (Additional file 1: Figure S3a). TFAP2A overexpression in NMuMG modulates epithelial plasticity. a Expression of either GFP or TFAP2A was induced by 72 h doxycycline treatment in NMuMG cells stably transduced with either pCLX-GFP or pCLX-TFAP2A. Morphological changes and sparse cell arrangement are visible in phase contrast microscopy upon TFAP2A expression. Scale bar: 50 μm. b Gene expression log2 fold changes of EMT markers (TFs) were calculated from mRNA-seq samples of doxycycline-induced, TGFβ1-treated (72 h, 2 ng/mL) pCLX-GFP (pCLX-GFP + TGF-beta), doxycycline-induced pCLX-TFAP2A (pCLX-TFAP2A), as well as of doxycycline-induced, TGFβ1-treated (72 h, 2 ng/mL) pCLX-TFAP2A (pCLX-TFAP2A + TGF-beta) cell lines relative to doxycycline-induced pCLX-GFP (pCLX-GFP) cell line. Shown are the mean log2 fold changes (+/- 1 standard deviation) from two experiments. TFAP2A overexpression is apparent in both TFAP2A-induced samples (dark green and dark blue) but is not induced in cells treated with TGFβ1 alone (light blue). The EMT-inducing TFs have increased expression upon TFAP2A induction. * indicates a p-value ≤ 0.05 and ** a p-value ≤ 0.01 in a two-tailed t-test. c The transcriptional activities of TFAP2{A,C} and SNAI1..3 motifs in different conditions, as inferred with ISMARA from mRNA-seq data as described in (b). 
The two replicates from each condition are plotted next to each other To better understand the effect of TFAP2A overexpression, we carried out transcriptional profiling of these four cell populations, namely untreated and TGFβ1-treated GFP-expressing cells, and untreated and TGFβ1-treated (for 72 h) TFAP2A overexpressing cells. The Tfap2a expression is increased upon doxycycline induction (Fig. 5b), but it decreases upon TGFβ1 treatment of GFP-expressing control cells (as we have observed before). Notably, the MARA analysis of these data reveals an increased activity of the TFAP2{A, C} motif in TGFβ1-induced, GFP-expressing cells, as we have initially observed in wild-type NMuMG cells, but also in TFAP2A-overexpressing cells treated with the growth factor when compared to GFP-expressing cells (Fig. 5c). The TGFβ1 treatment of TFAP2A-overexpressing cells further increases the TFAP2A activity. Thus, the exogenously introduced TFAP2A has an opposite transcriptional activity relative to the endogenous form. The activity of the SNAI1 motif decreases upon TGFβ1 treatment while its mRNA level increases, as expected from its known repressive activity in mesenchymal cells [41] (compare Fig. 5b and c). However, the >4-fold increase in Snai1 mRNA that occurred upon TFAP2A overexpression was followed only by a small decrease in SNAI1 motif activity. Interestingly, the TGFβ1-induced decrease of SNAI1 activity is less pronounced when the TGFβ1 treatment is carried out in TFAP2A-overexpressing cells (Fig. 5b and c). These results indicate that overexpression of TFAP2A perturbs the course of TGFβ1-induced EMT in NMuMG cells. Metastasis is the leading cause of death among breast cancer patients and a deeper understanding of the process is necessary for the development of treatment strategies [46]. The development of malignancy has been related to epithelial plasticity, and unsurprisingly, regulatory modules and networks that are involved in normal human development are hijacked during tumorigenic processes [41]. Although the regulatory network behind EMT has been intensely studied, by integrating data from multiple systems, recently developed computational methods can continue to provide new insights. In this study we have compared data from both developmental processes and cancer models of epithelial plasticity aiming to identify key regulators that are evolutionarily conserved. We found only a small number of motifs that have a significant activity change upon EMT in both human and mouse systems. Of these, SNAI1..3 and ZEB1..2 correspond to TFs that form the core EMT network [35]. We did not explicitly recover motifs for GSC, TWIST and FOXC2/SLUG. However, only the last factor has a specific motif represented in ISMARA. Motifs for miR-200 and the TGFβ1-related TGFI1 were only identified from the human samples. A novel insight derived from our analysis was that the motif corresponding to the TFAP2A and/or TFAP2C TFs also has a significant contribution to the expression changes that occur upon EMT in both species (Fig. 1a and b). The mechanistic link between TFAP2A/C and EMT was so far unknown, although TFAP2A was previously found important for neural crest formation and implicated in the activation of EMT inducing factors [47]. Furthermore, TFAP2A and TFAP2C have been implicated in mammary gland tumorigenesis and metastasis formation [16, 19]. 
Our data demonstrates that TFAP2A activity dynamically changes in the early time points of the TGFβ1 induced EMT in NMuMG cells, and thus suggests that TFAP2A regulates early steps in this process (Fig. 2c). Although our analysis of the EMT time series indicated that the expression of Tfap2a is negatively correlated with the expression of its targets (reflected in the motif activity, Additional file 1: Figure S4), overexpression of TFAP2A induces changes that are similar to those occurring upon Tfap2a downregulation during EMT. This observation can have multiple causes. One is that TFAP2A activity is regulated post-translationally, similar to the core EMT TFs [41]. For instance, the SNAI1 protein has a rapid turn-over and its stability and activity are regulated by post-translational phosphorylation, lysine oxidation and ubiquitylation [41]. Indeed, it has been demonstrated that the sumoylation and phosphorylation of the TFAP2A protein can affect its transcription activation or DNA binding functions [48, 49]. Therefore, it is possible that during EMT, the activity of TFAP2A on its targets changes from repressive to activating and its mRNA levels may decrease due to a feedback regulatory mechanism. A regulatory step at the protein level is also suggested by the fact that the highest increase in TFAP2A activity is observed in the first 6 h of treatment whereas the changes in the Tfap2a mRNA are delayed to a later time point (Fig. 2c and d). Alternatively, TFAP2A may activate some of its targets and repress others, so that which effect dominates overall will depend on other factors or on TFAP2A expression levels. The dual transcription activity of TFAP2A has also been reported before [16]. Yet another possibility is that depending on its mode of expression and of post-translational modifications, TFAP2A may form distinct complexes with other factors to activate or repress its targets. Additional experiments will be necessary to address these possibilities. Nevertheless, our data provides evidence for a direct regulatory link between TFAP2A/C and the core EMT regulators ZEB1 and ZEB2 in both human and mouse. In mouse, we found that TFAP2A binds to the Zeb2 promoter (Fig. 4), and that Zeb2 levels increase when TFAP2A is overexpressed (Fig. 5b). These results indicate that TFAP2A regulates EMT-inducing factors transcriptionally. Although we have not investigated it in detail here, our TFAP2A-ChIP-seq data suggests that other critical regulators of EMT such as Snai1, Sox4, Ezh2 and Esrp2 may also be targets of TFAP2A (Additional file 1: Figure S5). This further strengthens the hypothesis that TFAP2A is part of a densely-connected network of genes that are essential for EMT [50–52]. Consistent with exogenous TFAP2A-induced activation of EMT markers, the NMuMG cells that overexpressed TFAP2A underwent phenotypical changes that were indicative of the acquisition of a mesenchymal phenotype (Fig. 5a). Furthermore, an EMT signature of positively regulated genes was significantly represented among genes that were up-regulated in TFAP2A-overexpressing NMuMG cells compared to control, GFP-expressing cells (Additional file 1: Table S4) [35]. Genes involved in cellular adhesion and glycosphingolipid metabolism, which has been recently suggested to regulate cellular adhesion via St3gal5 and, more upstream, Zeb1 [53], seems to also be affected by TFAP2A overexpression (Fig. 5b; Additional file 1: Figure S3b and S3c). Cell adhesion is concomitantly affected (Fig. 5a). 
Thus, our results support the link between TFAP2A and ZEB TFs, although overexpression of TFAP2A leads to cellular changes that are observed upon TGFβ1-induced down-regulation of endogenous TFAP2A. One cannot exclude that the observed induction of an EMT response upon TFAP2A overexpression is due to a phenomenon similar to the so-called 'squelching effect' [54]. The activity of TFAP2A does not appear to be sufficient for the induction of a complete EMT phenotype in the absence of TGFβ1 (Fig. 5a, c). Previously, ChIP-chip-based measurements of SMAD2/3 binding in human keratinocytes upon TGFβ stimulation indicated that SMAD2/3 binding sites co-occur with those for TFAP2A/C TFs, leading to the hypothesis that TFAP2A is involved in mediating the TGFβ signaling [55]. However, maintaining a high TFAP2A level in the context of TGFβ signaling may interfere with the activity of EMT TFs (Fig. 5c), consistent with our observation that EMT factors such as SNAI1 have less repressive activity when TFAP2A is overexpressed during TGFβ1-induced EMT. This in turn could be the rationale for the moderate downregulation in Tfap2a levels that we observed in the later phases of the TGFβ1-induced EMT time course (Fig. 2c). Consistent with previous studies that suggested that TFAP2A activation is connected with the luminal breast phenotype, thus promoting the epithelial state [16], here we found that endogenously-encoded TFAP2A is down-regulated upon TGFβ1-induced EMT. Interestingly, PRRX1, another TF that promotes EMT in a developmental context, was found to both induce the transition and reduce the metastatic potential in tumors [56]. This suggests that the two processes are not always coupled and that a tumor suppressor can also activate EMT. This may be the case with TFAP2A as well; while it mediates the initiation of EMT, its sustained expression may interfere with EMT signaling. Our data thus connects TFAP2A to the core regulatory network that orchestrates the epithelium-to-mesenchyme transition in normal development as well as in cancers. Applying recently developed computational methods to a set of epithelial plasticity datasets we have constructed a conserved transcription factor motif interaction network that operates during the epithelium-to-mesenchyme transition. Our analysis recovered the known core EMT TFs and further linked the TFAP2A/C motif to this core network. Employing the NMuMG model cell line we provided further evidence that TFAP2A is involved in EMT, most likely in the early stages. We found that TFAP2A binds to the promoter of the Zeb2 master regulator of EMT and that TFAP2A overexpression in NMuMG cells induces an increase in Zeb2 expression. Finally, overexpression of TFAP2A in NMuMG cells promoted the expression of EMT markers and of cellular features related to the acquisition of a mesenchymal phenotype. Overall, our data links TFAP2A to the core TF network that is regulating EMT in normal development as well as in cancers. Reviewer's report 1: Dr. Martijn Huynen, Nijmegen Centre for Molecular Life Science, The Netherlands The manuscript describes an elegant computational analysis of the regulatory motifs associated with the EMT transition, followed by the experimental validation that a new factor, TFAP2A, plays an important role in this process. In general I do find the first part of the paper very convincing: it computationally identifies the factor, confirms the results in independent data, and confirms binding of the factor to a predicted target.
I do get a bit confused by the results of the overexpression of TFAP2A, and the arguments used to make these results consistent with the first part of the paper. Author's response: We thank the reviewer for the positive assessment of our computational analysis. Although we did find publicly available data that supports our conclusions about the involvement of TFAP2A in EMT, we nevertheless sought to validate its role ourselves. We tried to explain better the rationale and the results in the revision, even though some results remain paradoxical. Does Fig. 1 contain the complete set of motifs that are predicted to be "differentially active" in the transition? If so, is it a coincidence that they are all connected to each other? Author's response: We have described the selection of the motifs that we show in the legend of the Figure. Briefly, we only showed motifs with an absolute z-score > 2 and arrows that represent predictions with probabilities larger than a threshold (0.35 for human and 0.15 for mouse). For the readability of Fig. 1 , only motifs that have at least a predicted interaction with another motif at the mentioned thresholds are considered. However, realizing that motifs with significant activity that are not connected to other motifs may also be of interest, we have now included the full tables of motif activity changes upon EMT as Additional file 1 : Tables S2 and S3. I am surprised by the low level of conservation between the species. Are there some motifs from e.g. human that are just below a threshold? The authors argue "The motif interaction networks derived from mouse and human EMT models suggest that only a small fraction of the TFs has a highly conserved and significant role in both species." How reliable are those species-specific predictions, and how reliable is the absence of a signal in these analyses, with these data. Author's response: Although we selected sequencing data sets obtained from systems where EMT presumably occurs for both species, we unfortunately did not have matching systems available for human and mouse. So indeed, the precise scores of the different motifs depend on the data sets that we used and given sufficient data, other motifs may emerge as having similar behaviour in mouse and human EMT systems. Nevertheless, we found it reassuring that the core EMT factors that were extensively studied so far, such as SNAI and ZEB emerged from our analysis. That the TFAP2A,C motif also has a conserved function was unexpected and prompted us to study it further. If I understand the manuscript correctly, the downregulation of TFAP2A is associated with the epithelial to mesenchymal transition. Why then overexpress TFAP2A? Even is this has to do with technical limitations, I would like to see that mentioned explicitly to better understand the logic of the approach. Author's response: Our initial analysis indicated that the expression of TFAP2A is down-regulated during EMT (Fig. 2 ), while its motif activity increases, suggesting that TFAP2A may function as a repressor. Therefore, we overexpressed TFAP2A, reasoning that this should perturb the process of TGFb1-induced EMT. Indeed, this is also what we observe. However, analysis of the sequencing data obtained after TFAP2A overexpressionoverexpression also revealed some paradoxical results, which we addressed in our discussion. I find the discussion why "overexpression of TFAP2A induces changes that are similar to those occurring upon Tfap2a downregulation during EMT" lengthy and unconvincing. 
The authors first perform a very thorough quantitative analysis of gene expression and motif occurrence data, based on the simplifying but defendable assumptions of their linear model, confirm their findings in independent breast cancer data (Fig. 3). Then they use a large number of ad-hoc arguments to explain the inconsistencies in their results. They may all be true, but they are not convincing. Given the apparent contradictory results of the overexpression, I am surprised by the sentence "Finally, we confirm that overexpression of TFAP2A in NMuMG cells modulates epithelial plasticity and cell adhesion" in the abstract as those results do not confirm a specific hypothesis based on the results of the quantitative analysis. Author's response: We have revised the discussion to hopefully make it more streamlined. We agree with the reviewer that the initial computational analysis suggested a clear picture of TFAP2's involvement in EMT. However, as we tried to go deeper into the mouse model, the results that we obtained were more complex than we anticipated. We felt it was important to show the unexpected overexpression results, but in the revision we have included only the initial characterization of this cell line, without following it into the phenotypic analysis. We hope that our revised description of the results makes it clear what we have learned from the different systems about the behaviour and role of TFAP2A. In Fig. 5c there is a line connecting the various constructs. I take it this is not meant to implicate some sort of continuity? I do fully support publication once these issues have been handled. Author's response: Thank you for pointing this out. We have removed the lines to prevent the illusion of continuity of the data points. editorial: The legend with Fig. 3 could use some work "ABasal" or "Basal A"? Author's response: We thank the reviewer for pointing this out. We have fixed this issue and made the labels easier to read. TFAP2A expression was found to be less organized in breast cancer compared to normal mammary gland. - > glands Author's response: We think that the original formulation is correct. what is "substantially expressed" Author's response: We have explained that only Tfap2a (and not the other family members) has read coverage in all exons. It would be nice to specify which TFs of the core EMT network of ref 33 are retrieved and which are not. Author's response: We have expanded the text accordingly. "transcriptional" can often be replaced by "transcription", e.g. in "transcriptional regulation" page 18, Author's response: We have changed the term in all places where we thought it makes sense. line 20 "the interactions of the TFAP2{A,C}" appears redundant. Author's response: We removed the redundancy. page 22. "in untreated NMuMG cells lead to morphological changes" -- > "led" Author's response: Fixed. "an EMT signature of positively regulated genes were significantly represented" -- > "was" Reviewer's report 1: Dr. Nicola Aceto, Department of Biomedicine, University of Basel, Switzerland Dimitrova et al. present a manuscript in which they highlight the transcription factor TFAP2A as a novel EMT regulator. They suggest that TFAP2A target genes, such as ZEB2, are upregulated during EMT in the NMuMG mouse model. Further, they conclude that the interaction between TFAP2A and ZEB2 promoter affects ZEB2 expression, hence modulating the EMT process itself and providing evidence for a role of TFAP2A in cancer progression. 
Altogether, this is an interesting manuscript yet requiring a few modifications and clarifications to convincingly argue in favor of TFAP2A's role in cancer progression. (1) Introduction: the authors write their introductory paragraph arguing that e.g. "cancer progression, metastasis and chemotherapy resistance have all been linked to EMT". However, the role of EMT for each of these processes is highly debated in the field, and I would suggest the authors to provide a more balanced introduction, where it is clearly stated (and referenced) that the role/requirement of EMT in all these processed has still to be fully understood, especially in clinically-relevant settings. Author's response: We have rephrased and provided additional references to make the introduction more balanced. (2) Fig. 2a: I remain unconvinced about the degree of EMT that is triggered by TGFb in NMuMG cells. For instance, why only a small fraction of control cells express E-cad (roughly 30%)? Looking at the TGFb-treated cells, this ratio appears to remain the same (3/9 cells, i.e. roughly 30%). TFAP2A-positive vs negative cells in control vs TGFb also do not seem to change much, and neither does actin. I would suggest the authors to provide more quantitative data here (% of positive cells for each marker, or signal intensity) that comprise several fields of view. Author's response: To answer the reviewer's questions, we have redone the experiment, and imaged the cells with higher magnification. The results in the revised Fig. 2 clearly show that TFAP2A is abundantly expressed and nuclearly localized in control cells, while this staining pattern is abrogated upon TGFb1 treatment. In almost all control cells, the expression of E-cadherin is clearly visible, as is its localization close to the plasma membrane, features which are also abrogated by the TGFb1 treatment. E-cadherin levels estimated by Western blot (Fig. 2 e) also indicate down-regulation upon TGFb1 treatment. (3) Fig. 2c: how relevant is a Z-value of 3, with an activity range varying from -0.02 to 0.01? Looking at Fig. 1 (Z-values ranging from -19 to +19), can the authors convincingly state that TFAP2 target genes (and TFAP2 activity, respectively) significantly change upon TGFb treatment in NMuMG cells? Author's response: Please note that Fig. 1 was generated based on multiple data sets and that is why the z-scores cover a much larger range. Based on a standard normal distribution of z-scores we consider values larger than 2 (in absolute value) significant. (4) Fig. 2d: somehow related to the previous point. Changes in TFAP2A protein levels are not very impressive. Is the change statistically significant? Control does not seem to have any error bar, was it repeated more than once? Author's response: We have repeated this experiment as well, using three biological replicates, adding an additional control (actin, in addition to lamin and GAPDH) and also Ponceau staining (current Fig. 2 e). Although the overall protein levels are similar between conditions, TFAP2A's expression decreases upon TGFb1 treatment (as apparent also from the immunofluorescence staining, Fig. 2 a). The controls that we initially used, lamin and GAPDH, also decrease to some extent upon TGFb1 treatment, which is probably why the relative change in TFAP2A in our initial figure was not very impressive. However, relative to the total protein level as well as to actin, TFAP2A expression is clearly reduced by the TGFb1 treatment. (5) Fig. 
3: The authors observe a correlation between low TFAP2A expression and basal type of breast cancer. Two questions arise here: (a) is basalB more EMT-like than basal-A? Author's response: In the original publication (Ringner et al. PLoS One, 6:e17911, 2011), the basal B type is considered "more stem like". (b) how are TFAP2A target genes behaving in the larger dataset with 1500 samples? Author's response: Unfortunately we could not carry out this analysis on the GOBO web server. (6) Fig. 5: could the authors elaborate more about their conclusion "TFAP2A perturbs the course of TGFb-induced EMT in NMuMG cells"? It seems here that TFAP2A mRNA expression and activity are somewhat disconnected here, yet in previous experiments they seem to be going along quite well (e.g. see Fig. 2b-c and Fig. 3). Author's response: The reviewer, as reviewer #1 as well, rightly points out that the TFAP2A that is expressed from the exogenous construct seems to behave differently than the endogenously-encoded gene. This is also apparent from the quantification of TFAP2A expression in TGFb1-treated control cells, that only express endogenously encoded TFAP2A (which is down-regulated by the treatment) and in TFAP2A overexpressionoverexpression (where the expression is up-regulated, as expected, Fig. 5 b). We discuss possible causes for this discrepancy in our manuscript ( Discussion section). Although we did not identify the precise cause for it, we felt that it was important to show these results. (7) Fig. 6: In some instances (i.e. in TGFb-treated samples), actin staining seems to extend to regions that do not display any Hoechst staining. For example, in TFAP2A + TGFb sample, actin staining shows cells on the lower right corner of the image, but those cells do not show up in the Hoechst staining. Author's response: We think that this had to do with the intensity of the signal. However, we removed this figure from the revised version of the manuscript. (8) Differences in the aggregation index are not very impressive, and when taken per se would not be a strong argument of the involvement of TFAP2A in EMT. Instead, what would be the effect -in terms of EMT genes expression- of depleting TFAP2A in NMuMG cells treated with TGFb? Author's response: Because endogenous TFAP2A is down-regulated upon TGFb1 treatment, we initially sought to perturb the course of EMT by overexpressing TFAP2A and we carried out most of the experiments with this construct. It turned out that the overexpression of TFAP2A leads to similar molecular signatures as the downregulation of endogenous TFAP2A that takes place upon TGFb1-induced EMT. We agree with the reviewer that presenting the results with this construct as well as with the siRNAs makes the interpretation very difficult. We therefore decided to remove this figure and close the study at the point where the exogenous construct showed paradoxical results. The authors show in Additional file 1: Figure S3 some EMT genes, but it seems that genes such as Vim and Ocln are missing. Author's response: We have regenerated panel b in Fig. 5 based on the mRNA-seq samples that we used to infer the motif activities shown in panel c of the figure and we have included also Ocln, aside from Vim, whose expression we also estimated by qPCR. Both of the markers behave as expected in EMT. The additional qPCR validations are now shown in Additional file 1 : Figure S3c. Also, what is the TFAP2A knockdown level with the siRNAs? 
Author's response: As we explained above, because the results of perturbing TFAP2A expression were difficult to interpret, we decided to not pursue too far the perturbation experiments. Therefore, we removed Fig. 6 and we did not include the siRNA quantifications in the revised manuscript. (9) Generally, it would be great to show some functional assays related to EMT (e.g. Boyden chamber, etc.) to reinforce the involvement of TFAP2A in this process Author's response: We agree with the reviewer that it would be exciting to carry out these studies. However, as the reviewer probably appreciates, this regulatory network is very complex and the perturbation experiments did not turn out as we expected. We therefore decided to follow the suggestion of reviewer #1, concentrating on the comparative analysis of the different systems that yielded consistent results and not trying to resolve the specific mechanism of TFAP2A, which likely depends on the precise form of the protein that is expressed from the endogenous locus. This reviewer provided no additional comments. Dimitrova et al. present a revised version of the manuscript that addressed and discussed some of the initial concerns. While I find the manuscript worthy of publication, a few points are still worth mentioning: (1) In an answer to my previous question #5 (see 1st review) the authors argue that Basal B is considered more stem-like (therefore more mesenchymal) than Basal A. However, EMT and stem-like are two very different features of cancer cells as well as normal tissues, which may or may not overlap depending on a variety of factors. For instance, a number of tumor cell lines that are fully epithelial can display stem-like features (tumor initiation, self-renewal, differentiation). My original question was more whether by looking at gene expression data of Basal B, this tumor type expresses significantly more EMT markers than Basal A. This would reinforce their conclusions. Author's response: To answer the reviewer's question we have used the GOBO tool to compare the expression levels of various epithelial and mesenchymal markers in Basal A and Basal B tumor types. As shown in the new Additional file 1 : Figure S6, epithelial markers have higher expression in Basal A tumors, whereas mesenchymal markers have higher expression in Basal B tumors. This is in line with the concept that Basal B tumors are more mesenchymal. (2) Regarding patient data it would be more convincing to check the expression of TFAP2 (as well as its target genes and EMT markers) in several independent datasets to reinforce the conclusions of the authors. Author's response: To answer the reviewer's second question, we have used yet another data set, namely expression profiles of tumors and normal tissue samples from The Cancer Genome Atlas, to further examine the relationship between the expression of TFAP2A and that of various epithelial and mesenchymal markers. These results, summarized in the new Additional file 1 : Figure S7, show that the TFAP2A expression is positively correlated with that of epithelial markers and negatively correlated with that of mesenchymal markers. This is again consistent with the results we obtained in our experimental system (Fig. 2 ). 
BOFS: Branchio-oculo-facial syndrome
CAGE: Cap analysis of gene expression
CDR: Coding region
ChIP: Chromatin immunoprecipitation
EMT: Epithelial-mesenchymal transition
ESC: Embryonic stem cell
GOBO: Gene expression-based outcome for breast cancer online
MARA: Motif activity response analysis
NC: Neural crest
NCBI: National Center for Biotechnology Information
NMuMG: Mouse mammary gland epithelial cell line
SELEX: Systematic evolution of ligands by exponential enrichment
SRA: Sequence Read Archive
TF: Transcription factor
WB: Western blot
We thank Arnau Vina-Vilaseca for excellent technical assistance. The authors also thank Xiaomo Wu and Georges Martin for advice on setting up the EMSA experiments. This work was supported by the Swiss National Science Foundation grant #31003A_147013 to MZ and by the SystemsX.ch initiative in systems biology, through the RTD project 51RT-0_126031. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The sequencing data can be accessed at the Sequence Read Archive (SRA) of the National Center for Biotechnology Information (NCBI) with the SRA accession ID SRP067296. The datasets that were taken from other studies and analyzed for this study are listed in Additional file 1: Table S1. YD, SG, NM, BD, and DM performed experiments; YD, WAG and MZ designed and GC and MZ supervised the study; AJG, YD and SG performed data analysis; YD, AJG, SG and MZ wrote the manuscript. All authors edited the paper. All authors read and approved the final manuscript.
Additional file 1: Supplementary information.
Biozentrum, University of Basel, Klingelbergstrasse 50-70, CH-4056 Basel, Switzerland
Department of Biomedicine, University of Basel, Mattenstrasse 28, CH-4058 Basel, Switzerland
Department of Cellular and Molecular Pathology, German Cancer Research Center (DKFZ), Heidelberg, Germany
Thiery JP, Acloque H, Huang RY, Nieto MA. Epithelial-mesenchymal transitions in development and disease. Cell. 2009;139(5):871–90.
Hanahan D, Weinberg RA. Hallmarks of cancer: the next generation. Cell. 2011;144(5):646–74.
Micalizzi DS, Farabaugh SM, Ford HL. Epithelial-mesenchymal transition in cancer: parallels between normal development and tumor progression. J Mammary Gland Biol Neoplasia. 2010;15(2):117–34.
Mani SA, Guo W, Liao MJ, Eaton EN, Ayyanan A, Zhou AY, Brooks M, Reinhard F, Zhang CC, Shipitsin M, et al. The epithelial-mesenchymal transition generates cells with properties of stem cells. Cell. 2008;133(4):704–15.
Scheel C, Eaton EN, Li SH, Chaffer CL, Reinhardt F, Kah KJ, Bell G, Guo W, Rubin J, Richardson AL, et al. Paracrine and autocrine signals induce and maintain mesenchymal and stem cell states in the breast. Cell. 2011;145(6):926–40.
Prater MD, Petit V, Alasdair Russell I, Giraddi RR, Shehata M, Menon S, Schulte R, Kalajzic I, Rath N, Olson MF, et al. Mammary stem cells have myoepithelial cell properties. Nat Cell Biol. 2014;16(10):942–50.
Zheng X, Carstens JL, Kim J, Scheible M, Kaye J, Sugimoto H, Wu CC, LeBleu VS, Kalluri R. Epithelial-to-mesenchymal transition is dispensable for metastasis but induces chemoresistance in pancreatic cancer. Nature. 2015;527(7579):525–30.
Yu M, Bardia A, Wittner BS, Stott SL, Smas ME, Ting DT, Isakoff SJ, Ciciliano JC, Wells MN, Shah AM, et al. Circulating breast tumor cells exhibit dynamic changes in epithelial and mesenchymal composition. Science. 2013;339(6119):580–4.
Fischer KR, Durrans A, Lee S, Sheng J, Li F, Wong ST, Choi H, El Rayes T, Ryu S, Troeger J, et al. Epithelial-to-mesenchymal transition is not required for lung metastasis but contributes to chemoresistance. Nature. 2015;527(7579):472–6.
Sarrio D, Rodriguez-Pinilla SM, Hardisson D, Cano A, Moreno-Bueno G, Palacios J. Epithelial-mesenchymal transition in breast cancer relates to the basal-like phenotype. Cancer Res. 2008;68(4):989–97.
Zhang J, Hagopian-Donaldson S, Serbedzija G, Elsemore J, Plehn-Dujowich D, McMahon AP, Flavell RA, Williams T. Neural tube, skeletal and body wall defects in mice lacking transcription factor AP-2. Nature. 1996;381(6579):238–41.
Milunsky JM, Maher TA, Zhao G, Roberts AE, Stalker HJ, Zori RT, Burch MN, Clemens M, Mulliken JB, Smith R, et al. TFAP2A mutations result in branchio-oculo-facial syndrome. Am J Hum Genet. 2008;82(5):1171–7.
Williams T, Tjian R. Characterization of a dimerization motif in AP-2 and its function in heterologous DNA-binding proteins. Science. 1991;251(4997):1067–71.
Meier P, Koedood M, Philipp J, Fontana A, Mitchell PJ. Alternative mRNAs encode multiple isoforms of transcription factor AP-2 during murine embryogenesis. Dev Biol. 1995;169(1):1–14.
Mohibullah N, Donner A, Ippolito JA, Williams T. SELEX and missing phosphate contact analyses reveal flexibility within the AP-2[alpha] protein: DNA binding complex. Nucleic Acids Res. 1999;27(13):2760–9.
Bogachek MV, Chen Y, Kulak MV, Woodfield GW, Cyr AR, Park JM, Spanheimer PM, Li Y, Li T, Weigel RJ. Sumoylation pathway is required to maintain the basal breast cancer subtype. Cancer Cell. 2014;25(6):748–61.
Zhang J, Brewer S, Huang J, Williams T. Overexpression of transcription factor AP-2alpha suppresses mammary gland growth and morphogenesis. Dev Biol. 2003;256(1):127–45.
Jager R, Werling U, Rimpf S, Jacob A, Schorle H. Transcription factor AP-2gamma stimulates proliferation and apoptosis and impairs differentiation in a transgenic model. Mol Cancer Res. 2003;1(12):921–9.
Cyr AR, Kulak MV, Park JM, Bogachek MV, Spanheimer PM, Woodfield GW, White-Baer LS, O'Malley YQ, Sugg SL, Olivier AK, Zhang W, Domann FE, Weigel RJ. TFAP2C governs the luminal epithelial phenotype in mammary development and carcinogenesis. Oncogene. 2014;34(4):436–44.
Shi D, Xie F, Zhang Y, Tian Y, Chen W, Fu L, Wang J, Guo W, Kang T, Huang W, et al. TFAP2A regulates nasopharyngeal carcinoma growth and survival by targeting HIF-1alpha signaling pathway. Cancer Prev Res. 2014;7(2):266–77.
Wang W, Lv L, Pan K, Zhang Y, Zhao JJ, Chen JG, Chen YB, Li YQ, Wang QJ, He J, et al. Reduced expression of transcription factor AP-2alpha is associated with gastric adenocarcinoma prognosis. PLoS One. 2011;6(9):e24897.
Melnikova VO, Bar-Eli M. Transcriptional control of the melanoma malignant phenotype. Cancer Biol Ther. 2008;7(7):997–1003.
Pellikainen J, Naukkarinen A, Ropponen K, Rummukainen J, Kataja V, Kellokoski J, Eskelinen M, Kosma VM. Expression of HER2 and its association with AP-2 in breast cancer. Eur J Cancer. 2004;40(10):1485–95.
Maeda M, Johnson KR, Wheelock MJ. Cadherin switching: essential for behavioral but not morphological changes during an epithelium-to-mesenchyme transition. J Cell Sci. 2005;118(Pt 5):873–87.
Li Q, Luo C, Lohr CV, Dashwood RH. Activator protein-2alpha functions as a master regulator of multiple transcription factors in the mouse liver. Hepatol Res. 2011;41(8):776–83.
Giry-Laterriere M, Cherpin O, Kim YS, Jensen J, Salmon P. Polyswitch lentivectors: "all-in-one" lentiviral vectors for drug-inducible gene expression, live selection, and recombination cloning. Hum Gene Ther. 2011;22(10):1255–67.
Gruber AR, Martin G, Muller P, Schmidt A, Gruber AJ, Gumienny R, Mittal N, Jayachandran R, Pieters J, Keller W, et al. Global 3' UTR shortening has a limited effect on protein abundance in proliferating T cells. Nat Commun. 2014;5:5465.
Blecher-Gonen R, Barnett-Itzhaki Z, Jaitin D, Amann-Zalcenstein D, Lara-Astiaso D, Amit I. High-throughput chromatin immunoprecipitation for genome-wide mapping of in vivo protein-DNA interactions and epigenomic states. Nat Protoc. 2013;8(3):539–54.
Wu X, Gehring W. Cellular uptake of the Antennapedia homeodomain polypeptide by macropinocytosis. Biochem Biophys Res Commun. 2014;443(4):1136–40.
Balwierz PJ, Pachkov M, Arnold P, Gruber AJ, Zavolan M, van Nimwegen E. ISMARA: automated modeling of genomic signals as a democracy of regulatory motifs. Genome Res. 2014;24(5):869–84.
Gruber AJ, Grandy WA, Balwierz PJ, Dimitrova YA, Pachkov M, Ciaudo C, Nimwegen E, Zavolan M. Embryonic stem cell-specific microRNAs contribute to pluripotency by inhibiting regulators of multiple differentiation pathways. Nucleic Acids Res. 2014;42(14):9313–26.
Neve RM, Chin K, Fridlyand J, Yeh J, Baehner FL, Fevr T, Clark L, Bayani N, Coppe JP, Tong F, et al. A collection of breast cancer cell lines for the study of functionally distinct cancer subtypes. Cancer Cell. 2006;10(6):515–27.
Ringner M, Fredlund E, Hakkinen J, Borg A, Staaf J. GOBO: gene expression-based outcome for breast cancer online. PLoS One. 2011;6(3):e17911.
Evseenko D, Zhu Y, Schenke-Layland K, Kuo J, Latour B, Ge S, Scholes J, Dravid G, Li X, MacLellan WR, et al. Mapping the first stages of mesoderm commitment during differentiation of human embryonic stem cells. Proc Natl Acad Sci U S A. 2010;107(31):13742–7.
Taube JH, Herschkowitz JI, Komurov K, Zhou AY, Gupta S, Yang J, Hartwell K, Onder TT, Gupta PB, Evans KW, et al. Core epithelial-to-mesenchymal transition interactome gene-expression signature is associated with claudin-low and metaplastic breast cancer subtypes. Proc Natl Acad Sci U S A. 2010;107(35):15449–54.
Diepenbruck M, Waldmeier L, Ivanek R, Berninger P, Arnold P, van Nimwegen E, Christofori G. Tead2 expression levels control the subcellular distribution of Yap and Taz, zyxin expression and epithelial-mesenchymal transition. J Cell Sci. 2014;127(Pt 7):1523–36.
Brunskill EW, Potter AS, Distasio A, Dexheimer P, Plassard A, Aronow BJ, Potter SS. A gene expression atlas of early craniofacial development. Dev Biol. 2014;391(2):133–46.
Feuerborn A, Srivastava PK, Kuffer S, Grandy WA, Sijmonsma TP, Gretz N, Brors B, Grone HJ. The Forkhead factor FoxQ1 influences epithelial differentiation. J Cell Physiol. 2011;226(3):710–9.
Tiwari N, Meyer-Schaller N, Arnold P, Antoniadis H, Pachkov M, van Nimwegen E, Christofori G. Klf4 is a transcriptional regulator of genes critical for EMT, including Jnk1 (Mapk8). PLoS One. 2013;8(2):e57329.
Kreitzer FR, Salomonis N, Sheehan A, Huang M, Park JS, Spindler MJ, Lizarraga P, Weiss WA, So PL, Conklin BR. A robust method to derive functional neural crest cells from human pluripotent stem cells. American journal of stem cells. 2013;2(2):119–31.
De Craene B, Berx G. Regulatory networks defining EMT during cancer initiation and progression. Nat Rev Cancer. 2013;13(2):97–110.
Nardozzi JD, Lott K, Cingolani G. Phosphorylation meets nuclear import: a review. Cell Commun Signal. 2010;8:32.
Westermarck J. Regulation of transcription factor function by targeted protein degradation: an overview focusing on p53, c-Myc, and c-Jun. Methods Mol Biol. 2010;647:31–6.
Pachkov M, Erb I, Molina N, van Nimwegen E. SwissRegulon: a database of genome-wide annotations of regulatory sites. Nucleic Acids Res. 2007;35(Database issue):D127–31.
Khorshid M, Rodak C, Zavolan M. CLIPZ: a database and analysis environment for experimentally determined binding sites of RNA-binding proteins. Nucleic Acids Res. 2011;39(Database issue):D245–52.
Bill R, Christofori G. The relevance of EMT in breast cancer metastasis: Correlation or causality? FEBS Lett. 2015;589(14):1577–87.
Rada-Iglesias A, Bajpai R, Prescott S, Brugmann SA, Swigut T, Wysocka J. Epigenomic annotation of enhancers predicts transcriptional regulators of human neural crest. Cell Stem Cell. 2012;11(5):633–48.
Berlato C, Chan KV, Price AM, Canosa M, Scibetta AG, Hurst HC. Alternative TFAP2A isoforms have distinct activities in breast cancer. Breast Cancer Res. 2011;13(2):R23.
Garcia MA, Campillos M, Marina A, Valdivieso F, Vazquez J. Transcription factor AP-2 activity is modulated by protein kinase A-mediated phosphorylation. FEBS Lett. 1999;444(1):27–31.
Tiwari N, Tiwari VK, Waldmeier L, Balwierz PJ, Arnold P, Pachkov M, Meyer-Schaller N, Schubeler D, van Nimwegen E, Christofori G. Sox4 is a master regulator of epithelial-mesenchymal transition by controlling Ezh2 expression and epigenetic reprogramming. Cancer Cell. 2013;23(6):768–83.
Cano A, Perez-Moreno MA, Rodrigo I, Locascio A, Blanco MJ, del Barrio MG, Portillo F, Nieto MA. The transcription factor snail controls epithelial-mesenchymal transitions by repressing E-cadherin expression. Nat Cell Biol. 2000;2(2):76–83.
Horiguchi K, Sakamoto K, Koinuma D, Semba K, Inoue A, Inoue S, Fujii H, Yamaguchi A, Miyazawa K, Miyazono K, et al. TGF-beta drives epithelial-mesenchymal transition through deltaEF1-mediated downregulation of ESRP. Oncogene. 2012;31(26):3190–201.
Mathow D, Chessa F, Rabionet M, Kaden S, Jennemann R, Sandhoff R, Grone HJ, Feuerborn A. Zeb1 affects epithelial cell adhesion by diverting glycosphingolipid metabolism. EMBO Rep. 2015;16(3):321–31.
Heslot H, Gaillardin C. Molecular biology and genetic engineering of yeasts. Boca Raton: CRC Press; 1992.
Koinuma D, Tsutsumi S, Kamimura N, Taniguchi H, Miyazawa K, Sunamura M, Imamura T, Miyazono K, Aburatani H. Chromatin immunoprecipitation on microarray analysis of Smad2/3 binding sites reveals roles of ETS1 and TFAP2A in transforming growth factor beta signaling. Mol Cell Biol. 2009;29(1):172–86.
Ocana OH, Corcoles R, Fabra A, Moreno-Bueno G, Acloque H, Vega S, Barrallo-Gimeno A, Cano A, Nieto MA. Metastatic colonization requires the repression of the epithelial-mesenchymal transition inducer Prrx1. Cancer Cell. 2012;22(6):709–24.
Parker JS, Mullins M, Cheang MC, Leung S, Voduc D, Vickery T, Davies S, Fauron C, He X, Hu Z, et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J Clin Oncol. 2009;27(8):1160–7.
Dynamic Modelling and Adaptive Control of Automobile Active Suspension System

Wentang Wang* | Kun Tian | Jianxia Zhang

School of Mechanical Engineering, Henan Institute of Technology, Xinxiang 453003, China
School of Intelligent Engineering, Henan Institute of Technology, Xinxiang 453003, China

Corresponding Author Email: [email protected]
https://doi.org/10.18280/jesa.530218

The active suspension system of automobiles has great advantages in riding comfort and handling stability. However, it is a challenging task to design an active control method for this system, owing to system features like multi-input and multi-output behaviour, time variation, and nonlinearity. To cope with the challenge, this paper mathematically models the active suspension system based on the full-car model, rather than the common quarter-car model, and obtains a nonlinear dynamic model with variables like displacement, roll angle and pitch angle. Subsequently, an incremental proportional–integral–derivative (PID) controller was designed, and a deep reinforcement learning adaptive (DRLA) controller was proposed to realize online adjustment of control parameters. Finally, the active suspension system of the entire vehicle was simulated on MATLAB/Simulink. The simulation results prove that the DRLA controller can effectively reduce the displacement and the amplitude of roll and pitch angle of the car body, and greatly enhance the smoothness of the ride.

Keywords: active suspension system, reinforcement learning (RL), adaptive control, dynamic modelling

1. Introduction

Recent years have witnessed great improvement in the riding, handling and safety of road vehicles, thanks to extensive exploration of the design of automotive suspension systems [1-3]. In the suspension system, multiple springs and dampers connect the vehicle body with the wheels, and thereby control the vertical motions of the vehicle body. The main functions of the system include offsetting the variation in the force and payload of the vehicle body induced by turning, acceleration or braking, and isolating the passenger cabin from the irregularities on the road. Automotive suspension systems can be divided into passive, semi-active, and active suspension systems. The passive suspension system relies solely on its own structure to damp the vibrations resulting from road disturbances, without needing any external control force; its damping coefficient is almost constant. The classic semi-active suspension system has a variable damping coefficient, and realizes electronic modulation by magnetorheological (MR), electrorheological (ER) or electro-hydraulic techniques. The active suspension system applies the control force based on the real-time state feedback of the vehicle, achieving a good vibration damping effect, and assists with the attitude control of the car body. Many scholars have attempted to design an effective automotive suspension system. For example, Spelta et al.
[4] proposed a new comfort-oriented variable damping and stiffness control algorithm, named stroke–speed–threshold–stiffness–control, which overcomes the critical tradeoff between the choice of the stiffness coefficient and end-stop hitting; the variable-damping-and-stiffness suspension, coupled with this algorithm, achieves much better comfort performance than traditional passive suspensions and more classical variable-damping semi-active suspensions. Bei et al. [5] built a full-car model based on multi-body dynamics, including the steering system, front and rear suspensions, tires, driving controller, and road, and verified the model through tests. Based on co-simulation, a controller was created using hybrid sensor network control; it switches among a comfort controller, a stability controller, and a safety controller according to working conditions, and effectively improves ride comfort, handling stability, and driving safety. Recently, there has been substantial growth in the research and development of active suspension systems for car models. To improve ride comfort and road handling, the active suspension controls vehicle attitude and reduces the impact of road roughness by increasing and dissipating system energy through the actuator. The active suspension system is a closed-loop system, in which the required actuator force can be predicted based on the suspension travel [6]. Considering the vibrations acting on the human body in the vertical direction, Rao and Anusha [7] carried out bump analysis on a three degrees-of-freedom (3DOF) quarter-car model, controlled the active suspension of the model with fuzzy logic, and simulated the transient response to road perturbations. Sun et al. [8] constructed a full-car model with high nonlinearity, selected actuator forces as virtual inputs to suppress disturbance, and designed controllers that help real force inputs track virtual ones, based on the adaptive H∞ robust control technique. In addition, Yao et al. [9] presented a method for controlling an automobile to tilt toward the turning direction using active suspension: the desired tilt angle was determined through dynamic analysis and used to establish an active tilt sliding mode controller, which yields zero steady-state tilt angle error; the effectiveness of the controller was confirmed through simulation. Na et al. [10] put forward an active suspension control of full-car systems with unknown nonlinearity; on the upside, this control method can handle the uncertainties and nonlinearities in the systems without using any function approximator or online adaptive function; on the downside, the method poses high requirements on the accuracy of model parameters. Gang [11] developed a full-car model with unknown dynamics and uncertain parameters, and designed a novel non-singular terminal sliding mode controller (NSTSMC), which can stabilize the vertical, pitch, and roll displacements to a desired equilibrium in finite time. However, the parameters of that controller must be re-adjusted if any change takes place in the model. In most of the above methods, the control parameters must be adjusted to suit the parameter changes of the car model, which obviously limits the application scope of the control algorithm. Reinforcement learning (RL) is an emerging method that adaptively adjusts control parameters according to environmental feedback and given rewards.
The RL is an important tool of machine learning (ML), whose early focuses include pattern classification, supervised learning and adaptive control. After assigning the learning agent a goal, the RL proceeds through repeated interactions with the dynamic environment, that is, by mapping situations (states) to actions [12-14]. The mapping from the action taken in a state to the scalar reward constitutes the immediate feedback for that state. The RL is an algorithm that can effectively find the optimal value function. The learning agent learns its environment through exploration, and gains experience in this process [15-19]. Suffice it to say that the RL provides a suitable tool for active control of the automotive suspension system.

Targeting the automobile active suspension system, this paper first models the nonlinear dynamics of the entire vehicle, setting up the control target for the design of the controller. Then, an incremental proportional–integral–derivative (PID) controller was developed to realize the target. Since its control parameters cannot automatically adapt to environmental changes, the authors proposed a deep RL adaptive (DRLA) controller based on the adaptive critic element (ACE) and the associative search element (ASE), and applied the DRLA controller to update the control parameters online in real time. Finally, the DRLA controller was fully verified through simulations on MATLAB/Simulink.

The remainder of this paper is organized as follows: Section 2 puts forward the dynamic model of the whole vehicle; Section 3 designs the DRLA controller based on the RL algorithm; Section 4 compares the effect of the DRLA controller with that of the PID controller; Section 5 gives the conclusions and looks forward to future research.

2. Dynamic Modelling

The 7DOF full-car model, including the car body and four wheels, is illustrated in Figure 1, where m is the mass of the car body; m1, m2, m3, and m4 are the masses of the four wheels, respectively; z1, z2, z3, and z4 are the vertical displacements of the four wheels, respectively; $z_{1}^{\prime}, z_{2}^{\prime}, z_{3}^{\prime}$ and $z_{4}^{\prime}$ are the vertical displacements at the four corners of the car body, respectively; ksi, ci, and Fi (i=1,⋯,4) are the stiffness, damping coefficient, and actuator force (active control force) of the suspension at the four corners of the car body; θ and ϕ are the pitch and roll angles of the car body, respectively; zr1, zr2, zr3, and zr4 are the road displacements at the four wheels, respectively; z is the heave of the car body; a and b are the distances from the centroid to the front and the rear, respectively; tf and tr are the front and rear treads, respectively. The 7DOFs of the full-car model are the four vertical wheel displacements together with the heave z, pitch θ, and roll ϕ of the car body.

Figure 1. Full-car model of the active suspension system

The actuators are arranged vertically between the sprung mass of the car body and the unsprung mass of the wheels, providing the control force to the active suspension system. The full-car model makes it possible to capture the pitching and rolling of the car body, which cannot be represented in the quarter-car model of the vehicle suspension system. Here, a hydraulic actuator is placed on each suspension between the sprung and unsprung masses. The dynamic features of the actuator were neglected in the simulation of the full-car model. Using the schematic diagram of the full-car model, the motion equations of the model [20, 21] were derived from Newton's second law of motion.
The roll, pitch, and heave dynamics of the car body can be respectively described as:

$I_{r} \ddot{\varphi}=-b_{1} t_{f}\left(\dot{z}_{1}-\dot{z}_{u 1}\right)+b_{2} t_{f}\left(\dot{z}_{2}-\dot{z}_{u 2}\right)-b_{3} t_{r}\left(\dot{z}_{3}-\dot{z}_{u 3}\right)+b_{4} t_{r}\left(\dot{z}_{4}-\dot{z}_{u 4}\right)-k_{1} t_{f}\left(z_{1}-z_{u 1}\right)+k_{2} t_{f}\left(z_{2}-z_{u 2}\right)-k_{3} t_{r}\left(z_{3}-z_{u 3}\right)+k_{4} t_{r}\left(z_{4}-z_{u 4}\right)+t_{f} u_{1}-t_{f} u_{2}+t_{r} u_{3}-t_{r} u_{4}$ (1)

$I_{p} \ddot{\theta}=-b_{1} a\left(\dot{z}_{1}-\dot{z}_{u 1}\right)-b_{2} a\left(\dot{z}_{2}-\dot{z}_{u 2}\right)+b_{3} b\left(\dot{z}_{3}-\dot{z}_{u 3}\right)+b_{4} b\left(\dot{z}_{4}-\dot{z}_{u 4}\right)-k_{1} a\left(z_{1}-z_{u 1}\right)-k_{2} a\left(z_{2}-z_{u 2}\right)+k_{3} b\left(z_{3}-z_{u 3}\right)+k_{4} b\left(z_{4}-z_{u 4}\right)+a u_{1}+a u_{2}-b u_{3}-b u_{4}$ (2)

$m_{s} \ddot{z}=-b_{1}\left(\dot{z}_{1}-\dot{z}_{u 1}\right)-b_{2}\left(\dot{z}_{2}-\dot{z}_{u 2}\right)-b_{3}\left(\dot{z}_{3}-\dot{z}_{u 3}\right)-b_{4}\left(\dot{z}_{4}-\dot{z}_{u 4}\right)-k_{1}\left(z_{1}-z_{u 1}\right)-k_{2}\left(z_{2}-z_{u 2}\right)-k_{3}\left(z_{3}-z_{u 3}\right)-k_{4}\left(z_{4}-z_{u 4}\right)+u_{1}+u_{2}+u_{3}+u_{4}$ (3)

Under external disturbances, the vertical motions of the tires at the four corners can be respectively described as:

$m_{u 1} \ddot{z}_{u 1}=b_{1}\left(\dot{z}_{1}-\dot{z}_{u 1}\right)+k_{1}\left(z_{1}-z_{u 1}\right)+k_{t 1}\left(z_{r 1}-z_{u 1}\right)-u_{1}$ (4)

$m_{u 2} \ddot{z}_{u 2}=b_{2}\left(\dot{z}_{2}-\dot{z}_{u 2}\right)+k_{2}\left(z_{2}-z_{u 2}\right)+k_{t 2}\left(z_{r 2}-z_{u 2}\right)-u_{2}$ (5)

$m_{u 3} \ddot{z}_{u 3}=b_{3}\left(\dot{z}_{3}-\dot{z}_{u 3}\right)+k_{3}\left(z_{3}-z_{u 3}\right)+k_{t 3}\left(z_{r 3}-z_{u 3}\right)-u_{3}$ (6)

$m_{u 4} \ddot{z}_{u 4}=b_{4}\left(\dot{z}_{4}-\dot{z}_{u 4}\right)+k_{4}\left(z_{4}-z_{u 4}\right)+k_{t 4}\left(z_{r 4}-z_{u 4}\right)-u_{4}$ (7)

The vertical displacements of corners 1-4 can be respectively expressed in terms of the heave, pitch angle, and roll angle:

$z_{1}=z+t_{f} \phi_{s}+a \theta_{s}, \quad \dot{z}_{1}=\dot{z}+t_{f} \dot{\phi}_{s}+a \dot{\theta}_{s}$ (8)

$z_{2}=z-t_{f} \phi_{s}+a \theta_{s}, \quad \dot{z}_{2}=\dot{z}-t_{f} \dot{\phi}_{s}+a \dot{\theta}_{s}$ (9)

$z_{3}=z+t_{r} \phi_{s}-b \theta_{s}, \quad \dot{z}_{3}=\dot{z}+t_{r} \dot{\phi}_{s}-b \dot{\theta}_{s}$ (10)

$z_{4}=z-t_{r} \phi_{s}-b \theta_{s}, \quad \dot{z}_{4}=\dot{z}-t_{r} \dot{\phi}_{s}-b \dot{\theta}_{s}$ (11)

3. Controller Design

This section designs an incremental PID controller and the DRLA controller for active control of the suspension system. The incremental PID controller was designed as a baseline to demonstrate the superiority of the DRLA controller.

3.1 Incremental PID controller

The traditional PID controller contains a proportional link P(e(t)), an integral link I(e(t)), and a differential link D(e(t)) [20]. Suppose each wheel is completely decoupled and independently controlled by the active control force Fi(t). Then, the control input Fi(t) can be expressed as:

$F_{i}(t)=K_{P} e(t)+K_{I} \int e(t) d t+K_{D} \frac{d e(t)}{d t}$ (12)

where KP, KI, and KD are the proportional, integral, and differential factors, respectively, and e(t) is the control error:

$e(t)=z_{i}(t)-z_{d}(t)$ (13)

where zd(t) is the control target and zi(t) is the actual response. The PID parameters were tuned by the Ziegler–Nichols method.
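Before extending the PID, it may help to see how the plant model of Eqs. (1)-(11) can be exercised outside Simulink. The following minimal Python sketch computes the state derivatives of the 7-DOF full-car model; it is our illustration rather than the authors' implementation, and all parameter values, variable names and the per-corner force input u are placeholders, not the paper's exact configuration.

import numpy as np

# Placeholder parameters (illustrative only, not the paper's exact values)
ms, Ip, Ir = 1500.0, 1859.0, 471.0                     # body mass, pitch inertia, roll inertia
mu = np.array([40.0, 40.0, 40.0, 40.0])                # unsprung (wheel) masses mu1..mu4
k  = np.array([19000.0, 19000.0, 22000.0, 22000.0])    # suspension stiffnesses k1..k4
bd = np.array([800.0, 800.0, 800.0, 800.0])            # suspension dampings b1..b4
kt = np.full(4, 143000.0)                              # tyre stiffnesses kt1..kt4
a, b_r, tf, tr = 1.4, 1.7, 0.75, 0.75                  # CG-to-axle distances and track arms

def corners(z, th, ph):
    # Corner displacements (or velocities) from heave, pitch and roll, Eqs. (8)-(11)
    return np.array([z + tf * ph + a * th,
                     z - tf * ph + a * th,
                     z + tr * ph - b_r * th,
                     z - tr * ph - b_r * th])

def full_car_deriv(x, u, zr):
    # State derivative of the 7-DOF model, Eqs. (1)-(7).
    # x = [z, dz, th, dth, ph, dph, zu1..zu4, dzu1..dzu4]; u = actuator forces; zr = road inputs
    z, dz, th, dth, ph, dph = x[:6]
    zu, dzu = x[6:10], x[10:14]
    zc, dzc = corners(z, th, ph), corners(dz, dth, dph)
    fs = bd * (dzc - dzu) + k * (zc - zu)               # passive suspension forces at each corner
    ddz = (-fs.sum() + u.sum()) / ms                                        # heave, Eq. (3)
    ddth = (a * (-fs[0] - fs[1] + u[0] + u[1])
            - b_r * (-fs[2] - fs[3] + u[2] + u[3])) / Ip                    # pitch, Eq. (2)
    ddph = (tf * (-fs[0] + u[0]) - tf * (-fs[1] + u[1])
            + tr * (-fs[2] + u[2]) - tr * (-fs[3] + u[3])) / Ir             # roll, Eq. (1)
    ddzu = (fs + kt * (zr - zu) - u) / mu                                   # wheels, Eqs. (4)-(7)
    return np.concatenate(([dz, ddz, dth, ddth, dph, ddph], dzu, ddzu))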
Then, the traditional PID was extended into an incremental PID suitable for deep RL and adaptive adjustment:

$F_{i}(t)=F_{i}(t-1)+\Delta F_{i}(t)=F_{i}(t-1)+K_{P} \Delta e(t)+K_{I} e(t)+K_{D} \Delta^{2} e(t)$ (14)

where Δe(t)=e(t)-e(t-1) and Δ2e(t)=e(t)-2e(t-1)+e(t-2).

Figure 2. The structure of the incremental PID controller

The structure of the incremental PID controller is explained in Figure 2. Compared with the traditional PID controller, the incremental PID controller saves storage space for deep RL, remains robust to environmental rewards, and improves the RL rate.

3.2 DRLA controller

The DRLA controller was designed by adaptively updating the parameters of the incremental PID controller online through deep RL. The structure of the DRLA controller is presented in Figure 3. The RL control system has two main functional components, namely the ASE and the ACE. The ASE attempts to find the best action in a given system state through trial and error, or through generate-and-test search, that is, by mapping the state vector into the KP, KI, and KD of the PID controller. The ACE receives reinforcement signals from its environment, and then generates internal RL signals for ASE adjustment.

The Actor network has n paths for non-enhanced signal input, a path for enhanced input, and a path for signal output. Let {xi(t), 1≤i≤n} be the real-valued signal on the i-th path for unenhanced input, and y(t) be the output at time t. Then, the ASE output can be defined as:

$y(t)=f\left(\sum_{i} w_{i}(t) x_{i}(t)+\text { noise }(t)\right)$ (15)

where noise(t) is a real-valued random variable obeying a zero-mean Gaussian distribution with a specified variance; f is an S-type (sigmoid) function or a threshold function; and wi is a weight that is updated based on the internal enhancement r'(t) and the eligibility ei(t) of the i-th path:

$w_{i}(t+1)=w_{i}(t)+\alpha r^{\prime}(t) e_{i}(t)$ (16)

where α is the learning rate and ei(t) is an eligibility trace that decays exponentially with time:

$e_{i}(t+1)=\delta e_{i}(t)+(1-\delta) y(t) x_{i}(t)$ (17)

where δ (0≤δ≤1) is the decay rate. The predicted final enhancement can be described as a linear function of the input vector x in the Critic network:

$p(t)=\sum_{i} v_{i}(t) x_{i}(t)$ (18)

p(t) will converge to an accurate prediction by updating the weight vi as follows:

$v_{i}(t+1)=v_{i}(t)+\beta[r(t)+\gamma p(t)-p(t-1)] \bar{x}_{i}(t)$ (19)

where β is the learning rate; r(t) is the enhanced signal provided by the system environment; $\bar{x}_{i}(t)$ is the trace of the input signal xi(t); and γ (0≤γ≤1) is a positive discount value. Without external enhancement, the prediction of a positive state quantity will weaken. In other words, the heuristic or internal enhancement r' encompasses both the change of the p value and the external reinforcement. The farther a future value of p lies from the current state of the system, the more heavily it is discounted. The trace $\bar{x}$ behaves similarly to the eligibility trace ei defined in formula (17); however, an input path is qualified as long as there is a nonzero signal on it, regardless of the role of the element. $\bar{x}$ can be calculated by the following linear difference equation:

$\bar{x}_{i}(t+1)=\eta \bar{x}_{i}(t)+(1-\eta) x_{i}(t)$ (20)

where η (0≤η≤1) is the trace decay rate. According to formula (19), as long as the sum of the actual reinforcement r(t) and the discounted current prediction γp(t) differs from the previous prediction p(t-1), the weights of the qualified paths will change.
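A minimal Python sketch of how the updates in Eqs. (14)-(20) could be implemented is given below; it is our illustration, not the authors' code. The choice of tanh for the output function f, all learning-rate values, and the class and variable names are assumptions, and the internal reinforcement used in learn() anticipates Eq. (21) defined next. The mapping from the actor output to increments of KP, KI and KD is not fully specified in the paper and is therefore omitted here.

import numpy as np

class IncrementalPID:
    # Incremental PID of Eq. (14): F(t) = F(t-1) + dF(t)
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.f = 0.0          # F(t-1)
        self.e1 = 0.0         # e(t-1)
        self.e2 = 0.0         # e(t-2)

    def step(self, e):
        df = (self.kp * (e - self.e1)                      # K_P * de(t)
              + self.ki * e                                # K_I * e(t)
              + self.kd * (e - 2.0 * self.e1 + self.e2))   # K_D * d2e(t)
        self.e2, self.e1 = self.e1, e
        self.f += df
        return self.f

class ASEACE:
    # ASE (actor) and ACE (critic) updates of Eqs. (15)-(20) for one adapted quantity
    def __init__(self, n, alpha=0.05, beta=0.1, gamma=0.95, delta=0.9, eta=0.8, sigma=0.01):
        self.w = np.zeros(n)       # ASE weights
        self.v = np.zeros(n)       # ACE weights
        self.e = np.zeros(n)       # ASE eligibility trace, Eq. (17)
        self.xbar = np.zeros(n)    # ACE input trace, Eq. (20)
        self.p_prev = 0.0
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.delta, self.eta, self.sigma = delta, eta, sigma

    def act(self, x):
        y = np.tanh(self.w @ x + np.random.normal(0.0, self.sigma))   # Eq. (15), f taken as tanh
        self.e = self.delta * self.e + (1.0 - self.delta) * y * x     # Eq. (17)
        return y

    def learn(self, x, r):
        p = self.v @ x                                     # Eq. (18)
        r_int = r + self.gamma * p - self.p_prev           # internal reinforcement (TD error)
        self.v += self.beta * r_int * self.xbar            # Eq. (19)
        self.w += self.alpha * r_int * self.e              # Eq. (16)
        self.xbar = self.eta * self.xbar + (1.0 - self.eta) * x   # Eq. (20)
        self.p_prev = p
        return r_int

In a full DRLA loop, act() would be called on the measured suspension state to adjust the PID gains, and learn() would then be called with the reward of Eq. (22).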
Then, a learning rule was provided to find the weights such that p(t-1) approximates r(t)+γp(t). The ACE output is an improved, or internally enhanced, signal:

$r^{\prime}(t)=r(t)+\gamma p(t)-p(t-1)$ (21)

This is also known as a temporal difference (TD) error. The parameters should be adjusted to reduce the TD error between successive states. The reward function can be defined as:

$R(t)=\zeta r(t)$ (22)

Figure 3. The structure of the DRLA controller

4. Simulation

Two simulations were carried out on MATLAB/Simulink, which applies to linear and nonlinear systems that contain both continuous and discrete data, as well as systems with multiple sampling frequencies. The model parameters of the automobile active suspension system are configured as shown in Table 1.

Table 1. The model parameters of the automobile active suspension system: m1=m2=m3=m4, ks1=ks4, 22,000 N/m, 19,000 N/m, c1=c2=c3=c4, 800 Ns/m, kt1=kt2=kt3=kt4, 143,000 N/m, 1,859 kg m2, 471 kg m2, 9.81 kg m2

4.1 Simulation 1

Simulation 1 was carried out on a continuously changing road surface. To make the simulation more realistic, the amplitude and frequency of the surface changes were set to 0.02 m and 2.5 Hz, respectively; the road surface was described by zr(t) = 0.02 sin(5πt). To avoid redundancy, only the simulation data on wheel 1 and corner 1 were analyzed (Figures 4-6). Figures 7-9 compare the online adaptive adjustments of the PID parameters by the incremental PID controller and the DRLA controller.

Figure 4. Vertical displacement of wheel 1 (m)
Figure 5. Active control force of wheel 1
Figure 6. Vertical displacement of corner 1
Figure 7. Adaptive change of KP
Figure 8. Adaptive change of KI
Figure 9. Adaptive change of KD

As shown in Figure 4, under the incremental PID controller, the vertical displacement of wheel 1 did not stabilize until 5.8 s, and the vibration amplitude peaked at 0.45 m. Under the DRLA controller, the vertical displacement of wheel 1 stabilized at 1.2 s; the vibration amplitude, which was high at the initial moment, decreased to and stabilized at 0.1 m. Hence, the DRLA controller led to a faster response. As shown in Figure 5, under the incremental PID controller, the control force fluctuated significantly and kept oscillating, whereas under the DRLA controller the control force changed slowly and stabilized at 1.5 s. As shown in Figure 6, under the incremental PID controller, the vertical displacement of corner 1 kept oscillating and exhibited significant changes; under the DRLA controller, it changed less significantly and quickly stabilized at 1.3 s.

The responses of the pitch angle and roll angle were simulated on the road profile shown in Figure 10. The simulation results are displayed in Figures 11-12, and the RL rewards are given in Figure 13.

Figure 10. Time-domain spectrum of the road
Figure 11. Dynamic responses of the pitch angle
Figure 12. Dynamic responses of the roll angle
Figure 13. The curves of RL rewards

As shown in Figure 11, the pitch angle of corner 1 changed more significantly and responded more slowly under the incremental PID controller than under the DRLA controller. As shown in Figure 12, the roll angle changed by 2.8 deg at the maximum under the incremental PID controller, and by 1.8 deg at the maximum under the DRLA controller; the stabilization time of the roll angle under the incremental PID controller was 2 s longer than that under the DRLA controller. The simulation data on the other three wheels are listed in Table 2.
Judging by the root mean square (RMS) values, the DRLA controller outperformed the incremental PID controller.

Table 2. Root mean squared values of the suspension system: RMS value, Incremental PID controller, DRLA controller, Corner 2, Displacement (m), Pitch angle (deg), Roll angle (deg)

5. Conclusions

The suspension based on the 7-DOF full-car model has strong nonlinearity. The actuators of active control add to the complexity of the mathematical model of the active suspension system. In the active suspension system, the model-based controller has poor real-time performance, due to the nonlinear features of its actuators. Based on the deep RL strategy of ACE and ASE, this paper designs the DRLA controller by adaptively adjusting the incremental PID controller. The simulation results show that:

Under the incremental PID controller, the displacement response kept oscillating, and the vertical displacement changed significantly. Under the DRLA controller, the displacement response changed less significantly, and stabilized quickly at 1.3 s.

Under the incremental PID controller, the pitch angle of corner 1 changed significantly, by 3.7 deg, and took 4 s to stabilize. Under the DRLA controller, that pitch angle changed by only 2.8 deg, and took merely 3.2 s to stabilize.

Under the incremental PID controller, the roll angle changed by 2.8 deg at the maximum. Under the DRLA controller, the roll angle changed by 1.8 deg at the maximum. Besides, the roll angle stabilized 2 s faster under the DRLA controller than under the incremental PID controller.

The RMS values of all wheels indicate that the DRLA controller outperformed the incremental PID controller in both vibration amplitude and the time to reach stability. To sum up, the DRLA controller can greatly improve the riding comfort of passengers and the operability of the vehicle. Future research will further verify the proposed DRLA controller through experiments on a full-scale experimental platform.

Acknowledgment

This research was supported by the Key Scientific and Technological Project of Henan Province (Grant No.: 172102210123) and the Key Scientific Research Project Plan of Colleges and Universities in Henan Province (Grant No.: 20B590001).

References

[1] Eski, I., Yıldırım, Ş. (2009). Vibration control of vehicle active suspension system using a new robust neural network control system. Simulation Modelling Practice and Theory, 17(5): 778-793. https://doi.org/10.1016/j.simpat.2009.01.004
[2] Wang, G., Chadli, M., Chen, H., Zhou, Z. (2019). Event-triggered control for active vehicle suspension systems with network-induced delays. Journal of the Franklin Institute, 356(1): 147-172. https://doi.org/10.1016/j.jfranklin.2018.10.012
[3] Youn, I., Khan, M.A., Uddin, N., Youn, E., Tomizuka, M. (2017). Road disturbance estimation for the optimal preview control of an active suspension systems based on tracked vehicle model. International Journal of Automotive Technology, 18(2): 307-316. https://doi.org/10.1007/s12239-017-0031-7
[4] Spelta, C., Previdi, F., Savaresi, S.M., Bolzern, P., Cutini, M., Bisaglia, C., Bertinotti, S.A. (2011). Performance analysis of semi-active suspensions with control of variable damping and stiffness. Vehicle System Dynamics, 49(1-2): 237-256. https://doi.org/10.1080/00423110903410526
[5] Bei, S., Huang, C., Li, B., Zhang, Z. (2020). Hybrid sensor network control of vehicle ride comfort, handling, and safety with semi-active charging suspension. International Journal of Distributed Sensor Networks, 16(2): 1-10.
https://doi.org/10.1177/1550147720904586 [6] Sivakumar, K., Kanagarajan, R., Kuberan, S. (2018). Fuzzy control of active suspension system using full car model. Mechanics, 24(2): 240-247. https://doi.org/10.5755/j01.mech.24.2.17457 [7] Rao, T.R., Anusha, P. (2013). Active suspension system of a 3 DOF quarter car using fuzzy logic control for ride comfort. International Conference on Control and Automation, pp. 1-6. https://doi.org/10.1109/CARE.2013.6733771 [8] Sun, W., Gao, H., Yao, B. (2013). Adaptive robust vibration control of full-car active suspensions with electrohydraulic actuators. IEEE Transactions on Control Systems Technology, 21(6): 2417-2422. https://doi.org/10.1109/TCST.2012.2237174 [9] Yao, J., Li, Z., Wang, M., Yao, F., Tang, Z. (2018). Automobile active tilt control based on active suspension. Advances in Mechanical Engineering, 10(10): 1687814018801456. https://doi.org/10.1177/1687814018801456 [10] Na, J., Huang, Y., Pei, Q., Wu, X., Gao, G., Li, G. (2019). Active suspension control of full-car systems without function approximation. IEEE/ASME Transactions on Mechatronics. https://doi.org/10.1109/TMECH.2019.2962602 [11] Gang, W. (2020). ESO-based terminal sliding mode control for uncertain full-car active suspension systems. International Journal of Automotive Technology, 21(3): 691-702. https://doi.org/10.1007/s12239-020-0067-y [12] Chen, X.S., Yang, Y.M. (2011). Adaptive PID control based on actuator-evaluator learning. Control Theory and Application, 28(8): 1187-1192. https://doi.org/CNKI:SUN:KZLY.0.2011-08-023 [13] Su, L.J., Zhu, H.J., Qi, X.H., Dong, H.R. (2016). Design of four-rotor height controller based on reinforcement learning. Measurement and Control Technology, 35 (10): 51-53. https://doi.org/10.3969/j.issn.1000-8829.2016.10.013 [14] Wang, J., Paschalidis, I.C. (2016). An actor-critic algorithm with second-order actor and critic. IEEE Transactions on Automatic Control, 62(6): 2689-2703. https://doi.org/10.1109/TAC.2016.2616384 [15] Sun, Y., Qiang, H., Mei, X., Teng, Y. (2018). Modified repetitive learning control with unidirectional control input for uncertain nonlinear systems. Neural Computing and Applications, 30(6): 2003-2012. https://doi.org/10.1007/s00521-018-3643-6 [16] Liu, Y. J., Li, S., Tong, S., Chen, C.P. (2018). Adaptive reinforcement learning control based on neural approximation for nonlinear discrete-time systems with unknown nonaffine dead-zone input. IEEE Transactions on Neural Networks and Learning Systems, 30(1): 295-305. [17] Zhang, Z., Lam, K.P. (2018). Practical implementation and evaluation of deep reinforcement learning control for a radiant heating system. In Proceedings of the 5th Conference on Systems for Built Environments, pp. 148-157. https://doi.org/10.1145/3276774.3276775 [18] Xu, X., Huang, Z., Zuo, L., He, H. (2016). Manifold-based reinforcement learning via locally linear reconstruction. IEEE Transactions on Neural Networks and Learning Systems, 28(4): 934-947. https://doi.org/10.1109/TNNLS.2015.2505084 [19] Choi, S., Kim, S., Kim, H.J. (2017). Inverse reinforcement learning control for trajectory tracking of a multirotor UAV. International Journal of Control, Automation and Systems, 15(4): 1826-1834. https://doi.org/10.1007/s12555-015-0483-3 [20] Darus, R., Sam, Y.M. (2009). Modeling and control active suspension system for a full car model. In 2009 5th International Colloquium on Signal Processing & Its Applications, pp. 13-18. https://doi.org/10.1109/CSPA.2009.5069178 [21] Sun, Y.G., Xu, J.Q., Qiang, H.Y., Lin, G.B. 
(2019). Adaptive neural-fuzzy robust position control scheme for maglev train systems with experimental verification. IEEE Transactions on Industrial Electronics, 66(11): 8589-8599. https://doi.org/10.1109/TIE.2019.2891409
You've seen that when you write a fraction as a decimal, sometimes the decimal terminates, like: \[\frac{1}{2} = 0.5 \quad \text{and} \quad \frac{33}{100} = 0.33 \ldotp \nonumber \] However, some fractions have decimal representations that go on forever in a repeating pattern, like: \[\frac{1}{3} = 0.33333 \ldots \quad \text{and} \quad \frac{6}{7} = 0.857142857142857142857142 \ldots \nonumber \] It's not totally obvious, but it is true: Those are the only two things that can happen when you write a fraction as a decimal. Of course, you can imagine (but never write down) a decimal that goes on forever but doesn't repeat itself, for example: \[0.1010010001000010000001 \ldots \quad \text{and} \quad \pi = 3.14159265358979 \ldots \nonumber \] But these numbers can never be written as a nice fraction \(\frac{a}{b}\) where \(a\) and \(b\) are whole numbers. They are called irrational numbers. The reason for this name: Fractions like \(\frac{a}{b}\) are also called ratios. Irrational numbers cannot be expressed as a ratio of two whole numbers.

For now, we'll think about the question: Which fractions have decimal representations that terminate, and which fractions have decimal representations that repeat forever? We'll focus just on unit fractions. A unit fraction is a fraction that has 1 in the numerator. It looks like \(\frac{1}{n}\) for some whole number \(n\).

Which of the following fractions have infinitely long decimal representations and which do not? $$\frac{1}{2} \quad \frac{1}{3} \quad \frac{1}{4} \quad \frac{1}{5} \quad \frac{1}{6} \quad \frac{1}{7} \quad \frac{1}{8} \quad \frac{1}{9} \quad \frac{1}{10} \ldotp$$ Try some more examples on your own. Do you have a conjecture?

A fraction \(\frac{1}{b}\) has an infinitely long decimal expansion if: ________________________________.

Complete the table below which shows the decimal expansion of unit fractions where the denominator is a power of 2. (You may want to use a calculator to compute the decimal representations. The point is to look for and then explain a pattern, rather than to compute by hand.) Try even more examples until you can make a conjecture: What is the decimal representation of the unit fraction \(\frac{1}{2^{n}}\)?

\(\frac{1}{2}\) | \(2\) | \(0.5\)
\(\frac{1}{4}\) | \(2^{2}\) | \(0.25\)
\(\frac{1}{8}\) | \(2^{3}\) | \(0.125\)
\(\frac{1}{16}\) | |
\(\frac{1}{128}\) | |
\(\frac{1}{25}\) | \(5^{2}\) | \(0.04\)
\(\frac{1}{125}\) | \(5^{3}\) |
\(\frac{1}{3125}\) | |
\(\frac{1}{15625}\) | |

Marcus noticed a pattern in the table from Problem 7, but was having trouble explaining exactly what he noticed. Here's what he said to his group:

I remembered that when we wrote fractions as decimals before, we tried to make the denominator into a power of ten.
So we can do this: $$\begin{split} \frac{1}{2} &= \frac{1}{2} \cdot \frac{5}{5} = \frac{5}{10} = 0.5 \ldotp \\ \frac{1}{4} &= \frac{1}{4} \cdot \frac{25}{25} = \frac{25}{100} = 0.25 \ldotp \\ \frac{1}{8} &= \frac{1}{8} \cdot \frac{125}{125} = \frac{125}{1000} = 0.125 \ldotp \end{split}$$When we only have 2's, we can always turn them into 10's by adding enough 5's. Write out several more examples of what Marcus discovered. If Marcus had the unit fraction \(\frac{1}{2^{n}}\), what would be his first step to turn it into a decimal? What would the decimal expansion look like and why? Now think about unit fractions with powers of 5 in the denominator. If Marcus had the unit fraction \(\frac{1}{5^{n}}\), what would be his first step to turn it into a decimal? What would the decimal expansion look like and why? Marcus had a really good insight, but he didn't explain it very well. He doesn't really mean that we "turn 2's into 10's." And he's not doing any addition, so talking about "adding enough 5's" is pretty confusing. Complete the statement below by filling in the numerator of the fraction. The unit fraction \(\frac{1}{2^{n}}\) has a decimal representation that terminates. The representation will have n decimal digits, and will be equivalent to the fraction \(\frac{?}{10^{n}} \ldotp\) Write a better version of Marcus's explanation to justify why this fact is true. Write a statement about the decimal representations of unit fractions \(\frac{?}{5^{n}}\) and justify that your statement is correct. (Use the statement in Problem 9 as a model.) Each of the fractions listed below has a terminating decimal representation. Explain how you could know this for sure, without actually calculating the decimal representation. \[\frac{1}{10} \quad \frac{1}{20} \quad \frac{1}{50} \quad \frac{1}{200} \quad \frac{1}{500} \quad \frac{1}{4000} \ldotp \nonumber \] If the denominator of a fraction can be factored into just 2's and 5's, you can always form an equivalent fraction where the denominator is a power of ten. For example, if we start with the fraction \[\frac{1}{2^{a} 5^{b}}, \nonumber \] we can form an equivalent fraction \[\frac{1}{2^{a} 5^{b}} = \frac{1}{2^{a} 5^{b}} \cdot \frac{2^{b} 5^{a}}{2^{b} 5^{a}} = \frac{2^{b} 5^{a}}{2^{a+b} 5^{a+b}} = \frac{2^{b} 5^{a}}{10^{a+b}} \ldotp \nonumber \] The denominator of this fraction is a power of ten, so the decimal expansion is finite with (at most) \(a+b\) places. What about fractions where the denominator has other prime factors besides 2's and 5's? Certainly we can't turn the denominator into a power of 10, because powers of 10 have just 2's and 5's as their prime factors. So in this case the decimal expansion will go on forever. But why will it have a repeating pattern? And is there anything else interesting we can say in this case? The period of a repeating decimal is the smallest number of digits that repeat. For example, we saw that \[\frac{1}{3} = 0.33333 \cdots = 0. \bar{3} \ldotp \nonumber \] The repeating part is just the single digit 3, so the period of this repeating decimal is one. Similarly, we know that \[\frac{6}{7} = 0.857142857142857142857142 \ldots = 0. \overline{857142} \ldotp \nonumber \] The smallest repeating part is the digits 857142, so the period of this repeating decimal is 6. You can think of it this way: the period is the length of the string of digits under the vinculum (the horizontal bar that indicates the repeating digits). 
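Here is one more example to practice reading off the period: long division gives \[\frac{1}{13} = 0.076923076923076923 \ldots = 0.\overline{076923}, \nonumber \] so the smallest repeating string of digits is 076923 and the period of this repeating decimal is 6, even though (as with every repeating decimal) the expansion itself never ends.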
Complete the table below which shows the decimal expansion of unit fractions where the denominator has prime factors besides 2 and 5. (You may want to use a calculator to compute the decimal representations. The point is to look for and then explain a pattern, rather than to compute by hand.) Try even more examples until you can make a conjecture: What can you say about the period of the fraction \(\frac{1}{n}\) when n has prime factors besides 2 and 5?

\(\frac{1}{3}\) | \(0.\bar{3}\) | \(1\)
\(\frac{1}{6}\) | \(0.1\bar{6}\) | \(1\)
\(\frac{1}{7}\) | \(0.\overline{142857}\) | \(6\)

Imagine you are doing the "Dots & Boxes" division to compute the decimal representation of a unit fraction like \(\frac{1}{6}\). You start with a single dot in the ones box: To find the decimal expansion, you "unexplode" dots, form groups of six, see how many dots are left, and repeat. Draw your own pictures to follow along this explanation:

Picture 1: When you unexplode the first dot, you get 10 dots in the \(\frac{1}{10}\) box, which gives one group of six with a remainder of 4.
Picture 2: When you unexplode those four dots, you get 40 dots in the \(\frac{1}{100}\) box, which gives six groups of six with a remainder of 4.
Picture 3: Unexplode those 4 dots to get 40 in the next box to the right.
Picture 4: Make six groups of 6 dots with remainder 4.

Since the remainder repeated (we got a remainder of 4 again), we can see that the process will now repeat forever: unexplode 4 dots to get 40 in the next box to the right, make six groups of 6 dots with remainder 4, and so on forever…

Work on the following exercises on your own or with a partner.

Use "Dots & Boxes" division to compute the decimal representation of \(\frac{1}{11}\). Explain how you know for sure the process will repeat forever.

What are the possible remainders you can get when you use division to compute the fraction \(\frac{1}{7}\)? How can you be sure the process will eventually repeat?

Suppose that \(n\) is a whole number, and it has some prime factors besides 2's and 5's. Write a convincing argument that:

The decimal representation of \(\frac{1}{n}\) will go on forever (it will not terminate).
The decimal representation of \(\frac{1}{n}\) will be an infinite repeating decimal.
The period of the decimal representation of \(\frac{1}{n}\) will be less than n.

Find the "decimal" expansion for \(\frac{1}{2}\) in the following bases. Be sure to show your work: $$two, \quad three, \quad four, \quad five, \quad six, \quad seven \ldotp$$ Make a conjecture: If I write the decimal expansion of \(\frac{1}{2}\) in base b, when will that expansion be finite and when will it be an infinite repeating decimal expansion? Can you prove your conjecture is true?

This page titled 7.6: Terminating or Repeating? is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Michelle Manes via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Direct proof of empty set being subset of every set

Recently I finished my first pure mathematics course, but with some intrigue about certain proofs of definitions done by contradiction or contrapositive rather than by direct proof (the existence of infinitely many primes, for example). I think most of them are presented that way either because the direct proof lies beyond a first mathematics course or because the proofs by contradiction/contrapositive are more didactic. The one that bothers me most is the demonstration that the empty set is a subset of every set, and that it is unique. I understand the uniqueness and understand the proof by contradiction: "Suppose $\emptyset \nsubseteq A$ where $A$ is a set. Then there exists an element $x \in \emptyset$ such that $x \notin A$, which is absurd because $\emptyset$ does not have any elements by definition." But I would like to know if a direct proof exists, and whether it is indeed beyond a first course. Thanks beforehand.

taue2pi

There is a direct proof, if you know what a vacuous truth is. But the problem is that when one sees this statement $\varnothing\subseteq A$, it's usually before fully understanding vacuous arguments. So it's slightly more instructive to first give a proof by contradiction, and then discuss vacuous arguments. At least from my experience teaching this argument.

The proof is simple. We verify that $\forall x\in\varnothing$ it holds that $x\in A$. However, since $\forall x(x\notin\varnothing)$, the argument holds vacuously. And we are done.

I didn't know about vacuous truths. From what I just read, it's a proposition of the form $P\implies Q$ where one knows $P$ to be false. It's still a little confusing because, in this statement, $x\in A$ might be true; I guess this requires studying other examples of vacuous proofs to understand the concept well. Thank you! – taue2pi Jan 8 '14 at 6:00

There were a couple of threads about vacuous truths on the site before. You might as well read them. – Asaf Karagila♦ Jan 8 '14 at 6:04

I came across this question in the first chapter of Rudin's book on analysis and initially I went to proof by contradiction and did it "correctly" by most standards, but I still feel as though there is something of fundamental nature that I am missing. It is not as if $\varnothing$ contains the integer 0; it's empty, yeah? It contains no element, so how can it be the case that this "no element-ness" is represented in every single set, as $\varnothing\subseteq A$ means? Apologies since this is quite naive; but it's really tripping me up! – ofey73 Apr 19 '19 at 23:07

@Theodore: It's a matter of syntax, or semantics if you will. It's how mathematical conditionals are built. I actually wrote an answer about this recently. The short version is that implications in mathematics are different from implications in natural language, and $\subseteq$ is defined by an implication. – Asaf Karagila♦ Apr 19 '19 at 23:12

For any $B\subseteq A$, we have $A\setminus B\subseteq A$. So $A\setminus A=\varnothing\subseteq A$. (This proof operates at a slightly higher level of abstraction than verifying the definition of $\varnothing\subseteq A$. Since the definition is so easy to verify, you might think that it's silly to take a different strategy. If so, fair enough!)

Chris Culter

This proof is really nice! I already accepted another answer because it introduced me to vacuous proofs, but yours is very neat.
Thank you! – taue2pi Jan 8 '14 at 6:05

Yes, there's a direct proof: The way that we show that a set $A$ is a subset of a set $B$, i.e. $A \subseteq B$, is that we show that all of the elements of $A$ are also in $B$, i.e. $\forall a \in A, a\in B$. So we want to show that $\emptyset \subseteq A$. So consider all the elements of the empty set. There are none. Therefore, the statement that they are in $A$ is vacuously true: $\forall x \in \emptyset, x \in A$. So $\emptyset \subseteq A$.

Newb

Here is another direct proof, more calculational, where we first use the definitions and basic properties of $\;\emptyset,\subseteq\;$ and then simplify using predicate logic: \begin{align} & \emptyset \subseteq A \\ \equiv & \qquad\text{"definition of $\;\subseteq\;$"} \\ & \langle \forall x :: x \in \emptyset \Rightarrow x \in A \rangle \\ \equiv & \qquad\text{"basic property of $\;\emptyset\;$"} \\ & \langle \forall x :: \text{false} \Rightarrow x \in A \rangle \\ (*) \quad \equiv & \qquad\text{"logic: false implies anything"} \\ & \langle \forall x :: \text{true} \rangle \\ \equiv & \qquad\text{"logic: leave out unused quantified variable"} \\ & \text{true} \\ \end{align} Note how the last steps are really just a more detailed proof of the principle of 'vacuous truth', as used by earlier answers. If you want more detail on the third step $(*)$: \begin{align} & \text{false} \Rightarrow P \\ \equiv & \qquad\text{"rewrite"} \\ & \lnot \text{false} \lor P \\ \equiv & \qquad\text{"simplify"} \\ & \text{true} \lor P \\ \equiv & \qquad\text{"simplify"} \\ & \text{true} \\ \end{align}

MarnixKlooster ReinstateMonica

Another way to look at vacuous proofs. Make use of the logical principle that anything follows from a falsehood (the arbitrary consequent rule): $$P\implies[\neg P \implies Q]$$

Your proof:

1. $\forall a: \neg a\in \emptyset$ (by definition; better to use $\neg$ than $\notin$ in this case)
2. Let $S$ be any set.
3. Suppose $x\in \emptyset$
4. $\neg x\in \emptyset$ (from 1)
5. $\neg\neg x\in \emptyset\implies x\in S$ (arbitrary consequent rule applied to 4)
6. $x\in \emptyset\implies x\in S$ (from 5)
7. $x\in S$ (from 3 and 6)
8. $x\in \emptyset \implies x\in S$ (conclusion from 3 and 7)
9. $\forall a:[a\in\emptyset\implies a\in S]$ (generalizing from 8)

Yes, lines 6 and 8 look the same, but they play different roles in the proof. We can't immediately generalize on line 6.

Dan Christensen

For any set $A$, the union of set $A$ and the null set gives set $A$. This proves that the null set is a subset of every set $A$. Using the union operation for the subset definition is the trick.

jnyan

First you need to prove that $A\cup B=A$ if and only if $B\subseteq A$. – Asaf Karagila♦ Aug 30 '16 at 6:24

Here's a definition of "subset" that works: "If set A contains an element that is not also in set B, then A is not a subset of B; otherwise, it is." So, it is not true that the empty set contains an element that is not also in some set or another. Therefore, the empty set is a subset of every set. The problem is that the definition of a "subset" is sometimes (or even usually) stated like this: "If all of the elements of set A are also in B, then A is a subset of B; otherwise, it is not." But according to general English usage, this definition presupposes at least one element in set A and therefore can't be applied to the empty set -- at least, not according to general English usage.
An interesting follow-on question is, I think: "Why do mathematicians even conceive of the empty set, given that it constitutes 'nothingness'?"

Jean Forgeron

"But according to general English usage": mathematics is its own language. What if I teach my students in Hebrew, or in Esperanto or Inuit? What if I invented a new language just to teach my students about the empty set, and there the statement "If all elements of $A$ are elements of $B$" does not presuppose the existence of any elements in $A$? What if this language sounds extremely like English, or Hebrew, or Esperanto, or even Inuit? Is it okay then? – Asaf Karagila♦ May 2 '14 at 5:38

Maybe your students would just figure that your statement meant my first definition. I'm only pointing out the possible motivation of the OP's question by making an association with something outside of the self-contained, consistent system that is mathematics. After all, one can always question how meaningful it is merely to show that an assertion follows from certain mathematical definitions, principles, procedures, etc., which is what the OP seems to be doing. Perhaps the OP is wondering if "the empty set is a subset of every set" is asserted only for the sake of mathematical consistency. – Jean Forgeron May 2 '14 at 7:23

I'm sorry. Your answer makes no sense, and your comment makes no sense either. One can always question everything, that's the point of skepticism; I can even question your understanding of mathematics and mathematical language (and I should). I also don't know of any mathematical statement that is asserted for anything other than mathematical purposes. – Asaf Karagila♦ May 2 '14 at 7:34

Now, now... (Please!) "I also don't know of any mathematical statement that is asserted for anything other than mathematical purposes." Really? A physicist would certainly disagree with you. – Jean Forgeron May 2 '14 at 7:36

Yes, physicists think that mathematics exists to describe the universe; and engineers think that physics exists to guide their constructions; and mindless drones think that engineering exists so they can live their thoughtless lives faster. I don't think that anything physicists say about mathematics has any weight here. It might be true, or it might be false. But that claim you refer to, which is more about philosophy of mathematics, has little to no relation to physicists and their work; and they have absolutely nothing to contribute to this sort of discussion. – Asaf Karagila♦ May 2 '14 at 7:39

A set $A$ is a subset of a set $B$ if $A$ has no elements that are not also in $B$: $\neg\exists x\in A: x\notin B$. Since the empty set has no elements, it is clearly a subset of every set.
Plant Growth-Promoting Rhizobacteria Improve Growth, Morph-Physiological Responses, Water Productivity, and Yield of Rice Plants Under Full and Deficit Drip Irrigation

Taia A. Abd El-Mageed1, Shimaa A. Abd El-Mageed2, Mohamed T. El-Saadony3, Sayed Abdelaziz4 & Nasr M. Abdou1

Abstract

Inoculating rice plants with plant growth promoting rhizobacteria (PGPR) may be used as a practical and eco-friendly approach to sustain the growth and yield of drought-stressed rice plants. The effect of rice inoculation with PGPR was investigated under drip full irrigation (FI; 100% of crop evapotranspiration (ETc)) and deficit irrigation (DI; 80% of ETc) on growth, physiological responses, yields and water productivities in a saline soil (ECe = 6.87 dS m−1) during the 2017 and 2018 seasons. Growth (i.e. shoot length and shoot dry weight), leaf photosynthetic pigments (chlorophyll 'a' and chlorophyll 'b' content), air–canopy temperature difference (Tc–Ta), membrane stability index (MSI%), relative water content (RWC%), chlorophyll fluorescence (Fv/Fm), stomatal conductance (gs), total phenols, peroxidase (PO), polyphenol oxidase (PPO), nitrogen contents and water productivities (grain water productivity, G-WP, and straw water productivity, S-WP) were positively affected and differed significantly (p < 0.05) in the two seasons in response to the applied PGPR treatments. The highest yields (3.35 and 6.7 t ha−1 for grain and straw yields), as averages for both years, were recorded under full irrigation with plants inoculated by PGPR. The results indicated that, under water scarcity, application of the (I80 + PGPR) treatment was favorable for saving 20% of the applied irrigation water while producing approximately the same yields as under I100%.

Introduction

Rice is a very important cereal crop worldwide, supplying more than 50% of the global food demand. Global rice production was more than 700 million tons year−1, produced from 167 million ha (FAOSTAT 2018). More than 75% of rice production is supplied by irrigated lowland rice (Ram et al. 2003; Yuan et al. 2021). Generally, rice has been grown under flooded conditions, with a continuous water depth of 5–10 cm maintained (Bouman et al. 2007). Lowland rice is mainly direct-seeded or transplanted in puddled soils prepared by plowing under saturated water conditions, followed by harrowing and leveling. Under flooded conditions, a large amount of irrigation water is required, which is used not only to meet the water needs for growth and development of the rice plants but also as a management technique during rice cultivation (Brown et al. 1977; McCauley 1990; Sivapalan 2015). The irrigation water demand for rice under the traditional flooded system is more than 20,000 m3 ha−1, which is 3–4 times its biological water requirement (Tuong et al. 2005; Kruzhilin et al. 2015). In a puddled rice field, the consumption of water depends on the rates of evaporation and transpiration and on water losses by percolation, seepage, and surface runoff. Therefore, the low water productivity under irrigated rice conditions is attributable to water losses (Abd El-Mageed et al. 2020; Abdou et al. 2021). Soil salinity is an abiotic stress that limits both the vegetative and reproductive development of grown crops (Abd El-Mageed et al. 2019). Worldwide, more than 800 million hectares of arable land are salt-affected (Wang et al. 2011).
Salinity induces ion toxicity, osmotic stress, ion imbalance, mineral deficiencies, and physiological and biochemical disruption in plants, consequently reducing the quality and total yield of the affected crop (Rady et al. 2016). The availability of irrigation water for agriculture, especially for rice production, is threatened in many regions of the world not only by the global shortage of water resources (Cai et al. 2020) but also by increasing urban and industrial demand (Boretti and Rosa 2019). Worldwide, rice production consumes much more water than that of other crops; it is estimated that irrigated rice consumes about 40% of the global water used for irrigation purposes (Bouman et al. 2007; Hoekstra et al. 2011). In Egypt, rice ranks as the second staple food after wheat and is cultivated in reclaimed saline soils in the North Delta and coastal areas; rice consumes about 10 billion m3 of water, which is about 18% of the Egyptian share of water from the Nile River. Egypt, like many countries of the world, faces several challenges concerning the increasing water demand and increasing water competition among users, and the sustainability of rice production in Egypt is becoming more threatened by the limited water resources (Abd El-Mageed et al. 2020; Abdou et al. 2021). Therefore, the Ministry of Irrigation and Water Resources in Egypt annually reduces the area allotted for rice cultivation, which decreased by 59%, from 745,000 ha to 304,080 ha, during the past 10 years (2008–2018). Water stress negatively affects the growth and productivity of crops (Ahuja et al. 2010; Shekoofa and Sinclair 2018). Physiological functioning and yield attributes of rice plants (Guimarães et al. 2013; Yang et al. 2019; Abdou et al. 2021), viz. root length density, root moisture extraction, the rate of apical development, canopy size, leaf elongation rate, leaf rolling, transpiration rate, RWC, biomass production, spikelet number, spikelet sterility, panicle development, grain size, and grain yield (Palanog et al. 2014; Kruzhilin et al. 2016; Yang et al. 2019), may be drastically reduced by water stress, especially if it occurs during the vegetative or reproductive stages of rice, depending upon the stress severity and cultivar tolerance. In recent years, the trickle (drip) irrigation system has spread more widely, not only for enhancing water productivity but also for increasing crop production (Geerts and Raes 2009). Drip irrigation can achieve application efficiencies as high as 90% if the system is well maintained and combined with soil moisture monitoring or other ways of assessing crop water requirements (Vickers 2002; Jägermeyr et al. 2015). Water use efficiency and crop production can be enhanced by using drip irrigation under limited water resources, by reducing the volume of water that leaches out of the root zone (El-Hendawy et al. 2008). Irrigation techniques that minimize the inputs of irrigation water for rice production, such as deficit irrigation, should therefore be applied. Deficit irrigation (DI) is a method mainly applied to decrease water losses and maximize water productivity (WP), particularly in areas where the water supply is inadequate for irrigation (Agami et al. 2018; Abd El-Mageed et al. 2019; Semida et al. 2020). DI can also have other benefits related to reducing the energy used during irrigation and decreasing nitrate leaching (Falagán et al. 2015), and to reducing production costs and water consumption (Badal et al. 2013; Ballester et al. 2014).
To cope with drought stress, several adaptations and strategies are required. Plant growth-promoting rhizobacteria (PGPR) could play a significant role in alleviating the injurious effects induced by drought stress on plants (Vurukonda et al. 2016). The role of microorganisms in plant growth, nutrient management, and biocontrol activity is very well established. These beneficial microorganisms colonize the rhizosphere/endo-rhizosphere of plants and promote plant growth through various direct and indirect mechanisms (Grover and Ali 2011). Furthermore, the role of microorganisms in the management of biotic and abiotic stresses is gaining importance. Possible mechanisms of plant drought tolerance induced by rhizobacteria include (1) production of phytohormones like abscisic acid (ABA), gibberellic acid, cytokinins, and indole-3-acetic acid (IAA); (2) ACC deaminase activity that reduces the level of ethylene in the roots; (3) induced systemic tolerance by bacterial compounds; and (4) bacterial exopolysaccharides (Timmusk et al. 2014; Carlson et al. 2020; Getahun et al. 2020; Poudel et al. 2021). Hence, the application of PGPR may increase water saving and enhance crop yield productivity under conditions of deficit water supply. However, rice crop responses to PGPR combined with deficit irrigation regimes in salt-affected soils have not yet been investigated. Therefore, the main objective of the current study was to investigate the effect of PGPR application and DI on the growth, plant defense system, physio-biochemical attributes, seed and straw yield, and WP of rice plants cultivated in salt-affected soil.

Experimental Set-Up

Our study was conducted on a private farm south-east of Fayoum (29° 35′ N; 30° 05′ E), Egypt, during two successive years, 2017 and 2018. The climate is arid, characterized by low precipitation, and rainfall occurs mainly during the period from December to April. The region is also characterized by more than 320 sunny days a year. The meteorological parameters (i.e. air temperature (°C), relative humidity (%), wind speed (m s−1) and pan evaporation (mm day−1)) during the rice cultivation period in 2017 and 2018 are presented in Table 1. The soil, 80–100 cm deep, is loamy sand and is classified as Typic Torripsamments, siliceous, hyperthermic (Soil Survey Staff 1999). Physico-chemical characteristics of the soil were: pH 7.85 (1:2.5 soil/water extract), Kjeldahl total N 1.4 g kg−1, Olsen extractable P 3.53 mg kg−1, ammonium acetate extractable K 42.85 mg kg−1, organic C 8.2 g kg−1, total carbonate 43.7 g kg−1, ECe (soil paste extract) 6.4 dS m−1, bulk density 1.53 kg dm−3, and field capacity and wilting point 21.31% and 10.3%, respectively (Tables 2 and 3).

Table 1 The climatic data recorded at the Meteorological observatory of Fayoum governorate during the crop growing seasons of 2017 and 2018
Table 2 Some initial physical properties of the experimental soil
Table 3 Some initial chemical properties of the experimental soil

Experimental Design and Plant Management

Two field experiments were conducted in a randomized complete block (split-plot) design. Two irrigation treatments (100% and 80% of ETc) occupied the main plots, and two PGPR treatments (treated and non-treated) were allocated to the sub-plots. The 4 treatments were replicated three times, making a total of 12 plots.
The area of the experimental plot was 16 m length × 0.8 m row width (12.80 m2); each plot included 4 planting rows placed 20 cm apart, with a distance of 10 cm between plants within rows. Two drip lines were placed 0.40 m apart in each elementary test plot. Healthy seeds of rice (Oryza sativa L.), variety Sakha 107, were sown on 20 May 2017 and 2018. The 4-week-old seedlings were transplanted, and the crop was harvested on 6 October 2017 and 2018. Mineral fertilization, pest and disease management, and cultural practices were performed according to the recommendations for local commercial crop production. Irrigation water applied (IWA) was estimated as a percentage of the crop evapotranspiration (ETc), representing the following two treatments: FI = 100% and DI = 80% of ETc. Daily ETo and ETc were estimated according to the Allen et al. (1998) equations. $${\text{IWA}} = \frac{{{\text{A }} \times {\text{ETc}}}}{{{\text{Ea}} \times 1000{ } \times \left( {1 - {\text{LR}}} \right)}}$$ where IWA is the irrigation water applied (m3), A is the irrigated plot area (m2), and ETc is the water consumptive use (mm day−1), computed as follows: $${\text{ETc}} = {\text{ETo}} \times {\text{Kc}}$$ where ETo is the reference evapotranspiration (mm d−1) and Kc is the crop coefficient. ETo was determined as follows: $${\text{ETo}} = {\text{Epan}} \times {\text{Kp}}$$ where Epan is the evaporation from a Class A pan, Kp is the pan coefficient, Ea is the application efficiency (%), and LR is the leaching requirement.
Growth and Physiological Measurements
At the tillering stage of both seasons/experiments, 5 individual plants were randomly chosen from each experimental plot to evaluate growth characteristics and another group of 5 plants to determine chemical attributes. Shoot and spike lengths were measured using a meter scale. The number of spikes was counted per plant, and leaf area per plant was measured using a digital planimeter (Planix 7). Shoots of plants were weighed to record their fresh weights and then placed in an oven at 70 ± 2 °C until constant weight to determine their dry weights.
Chlorophyll Fluorescence (Fv/Fm) and Performance Index (PI)
The Fv/Fm was measured using a portable fluorometer (Handy PEA, Hansatech Instruments Ltd, Kings Lynn, UK) and calculated according to Maxwell and Johnson (2000). The PI of photosynthesis based on equal absorption (PIABS) was calculated as reported by Clark et al. (2000).
Stomatal Conductance (gs) and Leaf Chlorophyll Concentration (SPAD)
The gs was measured on fully expanded upper canopy leaves between 10:00 and 12:00 h with a portable photosynthetic system (CIRAS-2, PP Systems, Hitchin, UK). The SPAD value was determined at 90 DAS for the three youngest completely expanded leaves per hill with a SPAD meter (SPAD-502, Konica Minolta Inc., Tokyo, Japan).
Rice Water Status (RWC %, MSI %, and Canopy Temperature)
The RWC was determined according to the Hayat et al. (2007) equation as follows: $${\text{RWC}}\,(\% ) = \left[ {\frac{{\left( {{\text{FM}} - {\text{DM}}} \right)}}{{\left( {{\text{TM}} - {\text{DM}}} \right)}}} \right] \times 100$$ where RWC% is the relative water content (%), FM is the fresh mass (g), TM is the turgid mass (g), and DM is the dry mass (g). Likewise, MSI% was determined and calculated using the method of Premachandra et al. (1990) as follows: $${\text{MSI}}\left( {\text{\% }} \right) = \left[ {1 - \frac{C1}{{C2}}} \right] \times 100$$ where MSI % is the membrane stability index, C1 is the EC of the solution at 40 °C, and C2 is the EC of the solution at 100 °C.
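Before turning to the canopy temperature measurement below, the formulas above (IWA, ETc, ETo, RWC and MSI) reduce to simple arithmetic, summarized in this minimal sketch. All numeric inputs (pan evaporation, Kp, Kc, Ea, LR, and the mass and EC readings) are assumed placeholder values, not data from this experiment.

def eto_from_pan(epan_mm, kp):
    # ETo = Epan * Kp (mm day-1)
    return epan_mm * kp

def etc_from_eto(eto_mm, kc):
    # ETc = ETo * Kc (mm day-1)
    return eto_mm * kc

def irrigation_water_applied(area_m2, etc_mm, ea, lr):
    # IWA (m3) = (A * ETc) / (Ea * 1000 * (1 - LR)), with Ea expressed as a fraction
    return (area_m2 * etc_mm) / (ea * 1000.0 * (1.0 - lr))

def relative_water_content(fresh_g, turgid_g, dry_g):
    # RWC (%) = (FM - DM) / (TM - DM) * 100
    return (fresh_g - dry_g) / (turgid_g - dry_g) * 100.0

def membrane_stability_index(ec_40c, ec_100c):
    # MSI (%) = (1 - C1 / C2) * 100, C1 and C2 being the EC at 40 and 100 deg C
    return (1.0 - ec_40c / ec_100c) * 100.0

# Illustrative placeholder inputs only.
eto = eto_from_pan(epan_mm=8.0, kp=0.75)
etc = etc_from_eto(eto, kc=1.1)
iwa_full = irrigation_water_applied(area_m2=12.8, etc_mm=etc, ea=0.90, lr=0.10)
print(f"ETo={eto:.2f} mm, ETc={etc:.2f} mm, IWA(100% ETc)={iwa_full:.3f} m3 per plot")
print(f"IWA(80% ETc)={0.8 * iwa_full:.3f} m3 per plot")
print(f"RWC={relative_water_content(2.1, 2.5, 0.6):.1f} %")
print(f"MSI={membrane_stability_index(0.35, 1.20):.1f} %")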
Canopy temperature (Tc) was measured with a hand-held infrared thermometer (Fluke 574, Everett, WA, USA) at an emissivity of 0.98 and a spectral response range of 8–14 µm.
Total Nitrogen and Antioxidant Defense System
Total nitrogen was determined according to the well-known method described by Donald and Robert (1998). Estimations of total phenols, peroxidase (PO), and polyphenol oxidase (PPO) were carried out by the method described by Ramamoorthy et al. (2002).
Chlorophyll 'a' and Chlorophyll 'b' Content
Chlorophyll 'a' and chlorophyll 'b' contents were extracted and determined (in mg g−1 FW) according to the procedure given by Arnon (1949) using a UV-160A UV–Vis recording spectrometer (Shimadzu, Kyoto, Japan) at 663 and 645 nm.
Rhizobacteria Strains Preparations and Inoculation of Rice Seedlings
The two most effective facultative oligotrophic bacterial strains used in this experiment as PGPR were isolated from the same soils in the Fayoum region, Egypt, and were identified as Bacillus subtilis subsp. spizizenii strain NRRL B-23049T and Bacillus megatherium strain IAM 13418. These strains were selected based on previous knowledge of their ability to produce indole-3-acetic acid (IAA) and salicylic acid, solubilize zinc and phosphate, fix N2, and produce cellulase and chitinase, as well as their oxidase and catalase activities and lactose fermentation (Table 4).
Table 4 Morphological, physiological and biochemical characters
For the preparation of the bacterial inoculants (antagonizers), each strain was grown individually in sterilized nutrient broth medium in 1 L flasks on a rotary shaker for a 72 h incubation period at 30 °C. The grown organisms were concentrated by centrifuging the medium, and the cell sediments were aseptically collected and diluted, with the same medium, to 250 mL only (1/4 L). When a mixture of the two antagonizers was used, equal volumes of the strains were mixed immediately before use. Twenty millilitres of the resultant suspension was poured directly onto the rice seedlings in cones twice, at the seedling stage and at 15 days after transplanting.
Water Productivities
Water productivities, as described by Fernández et al. (2020), were calculated as (1) the ratio between above-ground biomass and crop evapotranspiration, i.e., straw WP (S-WP), and (2) the ratio between grain yield and crop evapotranspiration, i.e., grain WP (G-WP), according to Jensen (1983). Statistical analysis was performed using GenStat (version 11) (VSN International Ltd, Oxford, UK). The least significant difference (LSD) at the 5% probability level (p ≤ 0.05) was used as the mean separation test.
Rice Growth in Response to Plant Growth Promoting Bacteria Under Full and Deficit Irrigation
Data in Table 5 illustrate the effects of irrigation level, plant growth promoting bacteria, and their interaction on rice growth. Plants under deficit irrigation had lower growth traits (i.e., shoot length, tillers number plant−1, panicles number plant−1, and shoot dry weight) than those under full irrigation. On the other hand, plants treated with PGPR had higher growth traits (i.e., shoot length, tillers number plant−1, panicles number plant−1, and shoot dry weight) than untreated plants.
Growth traits decreased significantly with increasing water stress; I80% resulted in decreases in plant height of 8%, tillers number of 11.8%, panicles number of 12.4%, and shoot dry weight of 25% compared with fully irrigated plants. On the other hand, treating rice plants with PGPR significantly increased these parameters, by 9.4%, 15.3%, 18.6%, and 29.6% for plant height, tillers number, panicles number, and shoot dry weight, respectively. The combined application of PGPR and irrigation at 100% of ETc gave the best growth parameters, while the I80 × −PGPR treatment showed the lowest values. Otherwise, no significant differences were found between the I100 × −PGPR and I80 × +PGPR treatments.
Table 5 Effect of integrative deficit drip irrigation and plant growth promoting rhizobacteria on growth characteristics of rice plants grown under saline soil for (SI) 2017 and (SII) 2018 seasons
Rice Water Status
Results of rice water status (RWC, MSI, and canopy-air temperature) in response to irrigation and PGPR treatments and their interaction are presented in Table 6. The water status of rice plants, as evaluated by RWC, MSI, and the canopy-air temperature, was significantly affected by the irrigation treatment. Data in Table 6 show that RWC and MSI of well-irrigated plants were higher (82.3 and 75.3) than those under deficit irrigation (70.8 and 66.5). On the contrary, the canopy-air temperature (Tc–Ta) at 13:00 and 14:00 of plants irrigated at 100% of ETc (1.24 and 1.59) was lower than that of plants irrigated at 80% of ETc (1.97 and 2.08). Values of RWC, MSI, and the canopy-air temperature were also affected, positively or negatively, by PGPR inoculation. The values of RWC and MSI% for plants treated with PGPR (82.1 and 75.6) were higher than those of −PGPR plants (63.5 and 73.8). The interaction between PGPR and irrigation treatment significantly affected plant water status. According to the results in Table 6, no significant effects of season were observed on RWC, MSI, or Tc–Ta.
Table 6 Effect of integrative deficit drip irrigation and plant growth promoting rhizobacteria on plant water status (MSI% and RWC%), canopy-air temperature (Tc–Ta) of rice plants grown saline soil for (SI) 2016/2017 and (SII) 2017/2018 seasons
Stomatal Conductance (gs)
The influences of plant growth promoting bacteria on stomatal conductance (gs) under full and deficit irrigation are presented in Fig. 1. Results showed that gs values were almost stable from 10 to 11 am but thereafter decreased sharply at 12 pm in all treatments. The values of stomatal conductance were higher under FI than under DI. Maximum values of stomatal conductance were found in the FI + PGPR treatment, which were greater than those of the FI, DI, and DI + PGPR treatments at all times (10 am, 11 am, and 12 pm). Overall, inoculated rice plants had higher gs than the uninoculated control plants.
Effect of plant growth promoting bacteria on stomatal conductance of rice plants grown under deficit and non-deficit drip irrigation from 10 am to 12 pm during (SI) 2017 and (SII) 2018 seasons. Error bars indicate standard errors of means (S.E.) (n = 3)
Chlorophyll Fluorescence Efficiency, Relative Chlorophyll Content and Photosynthetic Pigments
Responses of chlorophyll fluorescence (Fv/Fm and PI), relative chlorophyll content (SPAD value), and photosynthetic pigments (chlorophyll a and chlorophyll b) of rice plants to irrigation and plant growth promoting bacteria treatments and their interactions are displayed in Table 7.
Except for PI, no significant differences were observed between seasons. Chlorophyll fluorescence, relative chlorophyll content, and photosynthetic pigments were significantly influenced by irrigation, by PGPR treatments, and by their interaction. Results in Table 7 showed that Fv/Fm, PI, SPAD, chlorophyll "a", and chlorophyll "b" of rice plants under well-watered conditions were higher than those of water-stressed plants (with increases of 7.7 and 14.3% among these parameters). Also, inoculating rice plants with PGPR increased Fv/Fm by 5.1%, PI by 66.7%, SPAD by 13.8%, chlorophyll "a" by 10.5%, and chlorophyll "b" by 14.3% compared with uninoculated plants. Chlorophyll fluorescence, relative chlorophyll content, and photosynthetic pigments were strongly influenced by the interaction between PGPR and irrigation treatments. Maximum values of Fv/Fm, PI, SPAD, chlorophyll a, and chlorophyll b were recorded under the I100 × +PGPR treatment, while the minimum values for these parameters were observed under the I80 × −PGPR treatment.
Table 7 Effect of integrative deficit drip irrigation and plant growth promoting rhizobacteria on plant water status (MSI% and RWC %), chlorophyll fluorescence (Fv/Fm and PI), SPAD value, chlorophyll a and chlorophyll b of rice plants grown saline soil for (SI) 2016/2017 and (SII) 2017/2018 seasons
Antioxidant Defense System and Nitrogen Contents
The effects of irrigation, PGPR treatments, and their interaction on defense components [peroxidase (PO), polyphenol oxidase (PPO), and total phenol], N% (leaves), and N% (grains) of rice plants are presented in Table 8. The concentrations of PO, PPO, and total phenol and the N% contents (leaves and grains) were strongly (p < 0.05) affected by irrigation quantity and plant growth promoting bacteria, and were not significantly affected by season, except for total phenol. Data in Table 8 show that PO, PPO, total phenol, and N content in leaves and grains of rice plants receiving 100% of the irrigation water requirements were higher by 28.1, 17.7, 7.3, 8.3, and 6.4%, respectively, compared with plants receiving 80% of ETc. Additionally, rice plants inoculated with PGPR showed increases in PO of 20.0%, PPO of 58.3%, total phenol of 24.8%, leaf N content of 33.9%, and grain N content of 20.0% compared with uninoculated plants. According to the results displayed in Table 8, PO, PPO, total phenol, and N content (in leaves and in grains) were significantly (p < 0.05) affected by the interaction between PGPR and irrigation treatments. The highest values of PO, PPO, total phenol, and N content in leaves and grains were found when plants were irrigated at 100% of ETc and inoculated with PGPR (I100 × +PGPR), while the lowest values for the aforementioned parameters were recorded when rice plants were exposed to water stress (I80) and not treated with PGPR (I80 × −PGPR).
Table 8 Effect of integrative deficit drip irrigation and plant growth promoting rhizobacteria on chlorophyll content (a and b), PO, PPO, total phenol and N% of rice plants grown saline soil for (SI) 2016/2017 and (SII) 2017/2018 seasons
Yield Components
Responses of rice yield components, such as panicle length (cm), panicle weight (g), number of grains panicle−1, and 1000-grain weight (g), to cropping seasons, irrigation, PGPR, and their interaction are presented in Table 9. Rice yield components were significantly affected by irrigation level, PGPR, and their interaction, and were not affected by the growing season.
Yield components of rice plants exposed to drought stress decreased by 7.5% for panicle length, 23.7% for panicle weight, 10.8% for the number of grains panicle−1, and 17.8% for 1000-grain weight compared with unstressed plants. On the other hand, inoculating rice plants with PGPR increased yield components by 10.6, 28.0, 19.9, and 23.0% for panicle length, panicle weight, number of grains panicle−1, and 1000-grain weight, respectively, compared with untreated plants. Our results showed that rice yield components were strongly influenced by the interaction between PGPR and irrigation treatments. The highest values of panicle length, panicle weight, number of grains panicle−1, and 1000-grain weight (15.8, 2.1, 75.1, and 22.3) were recorded when plants received 100% of ETc and were inoculated with PGPR (I100 × +PGPR), while the lowest values for the aforementioned traits (13.6, 1.2, 56.2, and 14.7) were recorded when rice plants were exposed to water stress (I80) and not treated with PGPR (I80 × −PGPR).
Table 9 Effect of integrative deficit drip irrigation and plant growth promoting rhizobacteria on yield component, grain yield and straw yield of rice plants grown under saline soil for (SI) 2016/2017 and (SII) 2017/2018 seasons
Rice Yields and Water Productivities
Table 10 illustrates the effects of growing season, irrigation level, PGPR, and their interaction on rice yields (grain and straw; t ha−1) and water productivities (G-WP and S-WP; kg m−3). Plants grown under full irrigation had higher yields (i.e., grain and straw yield) than those grown under drought stress. Grain and straw yields decreased significantly with increasing water stress; I80% resulted in decreases in grain yield of 19% and in straw yield of 11.9% relative to fully irrigated plants. On the other hand, values of G-WP and S-WP under the I80% treatment were higher than those of the I100% treatment by 1.3 and 10.4%, respectively (Table 10). Rice plants treated with PGPR showed increases in grain yield, straw yield, G-WP, and S-WP (by 19.0 and 16.8%, among other increases) compared with untreated plants. No significant differences between growing seasons were observed. Our findings showed that grain yield, straw yield, G-WP, and S-WP were significantly affected by the interaction between PGPR and irrigation treatments. Fully irrigated plants inoculated with PGPR gave the highest values of grain yield (5.24 t ha−1), straw yield (8.87 t ha−1), G-WP (kg m−3), and S-WP (kg m−3). Moreover, the lowest values of grain yield (3.65 t ha−1), straw yield (6.58 t ha−1), G-WP (kg m−3), and S-WP (kg m−3) were found when rice plants were irrigated at 80% of the irrigation water requirements (I80) and not treated with PGPR.
Table 10 Effect of integrative deficit drip irrigation and plant growth promoting rhizobacteria on yield component, grain yield, straw yield and water productivities (G-WP and S-WP) of rice plants grown under saline soil for (SI) 2016/2017 and (SII) 2017/2018 seasons
Water scarcity is one of the main constraints to agricultural production worldwide, and it is expected to intensify in the future. In arid soils where irrigation is necessary for crop production, producers are seeking techniques to save water by increasing the efficiency of irrigation water. Plant growth promoting rhizobacteria (PGPR) are considered one such strategy and could play an important role in mitigating the detrimental effects of drought stress on plants. The bacterial strains used in our study [Bacillus subtilis subsp.
and Bacillus megatherium] can produce plant growth promoting substances (PGPs) such as indole-3-acetic acid (IAA) (Loper and Schroth 1986), salicylic acid (Meyer and Abdallah 1978), siderophores (Palli 2005), chitinase (Renwick et al. 1991), and cellulase (Andro et al. 1984), as well as solubilizing phosphate and zinc (Rodriguez and Miller 2000; Saravanan et al. 2004) and fixing N2 (Cattelan et al. 1999). Besides, they have antagonistic activity against pathogenic fungi such as Pythium ultimum, Rhizoctonia solani, and Fusarium sp. (Koch 1997). The strains also have the capability to survive, proliferate, and perform their activities under adverse environmental conditions such as high temperature, increasing pH, and salt stress. Therefore, Bacillus subtilis subsp. and Bacillus megatherium are considered plant growth promoting rhizobacteria (PGPR) that can be used under normal conditions and to overcome the negative effects of environmental stresses on some plants (Abdelaziz et al. 2018). The current study used PGPR as a soil application for deficit irrigation (DI)-stressed rice plants grown under salt stress (ECe = 6.3 dS m−1). Inoculating plants with PGPR produced highly significant positive results for growth performance, water status, stomatal conductance (gs), chlorophyll fluorescence efficiency, relative chlorophyll content and photosynthetic pigments, antioxidant enzymes and nitrogen contents, yield components, yields, and water productivities of rice plants grown under both DI and saline conditions. In our study, the inhibition of rice growth parameters under drought stress may be attributed to the drought-induced reduction of cell division and enlargement, resulting in the reduction of shoot length, tillers number plant−1, the number of panicles plant−1, and shoot dry weight, simultaneously with the reduction of stomatal conductance, water status, chlorophyll fluorescence efficiency, relative chlorophyll content and photosynthetic pigments, as well as antioxidant enzymes and nitrogen contents (Selvakumar and Panneerselvam 2012; Steduto et al. 2012; Abd El-Mageed et al. 2021). On the other hand, inoculating water-stressed rice plants (80% ETc) with PGPR alleviated the deleterious effects of water shortage on rice growth, producing shoot length, tillers number plant−1, number of panicles plant−1, and shoot dry weight similar to those of fully irrigated plants inoculated with PGPR. Also, compared with untreated plants, inoculation with plant growth promoting bacteria improved rice growth. The growth promotion resulting from PGPR addition may be linked to increased micronutrient uptake and effects on phytohormone homeostasis. Inoculation with our bacterial isolates had a remarkable positive effect on plant growth parameters under both stress and non-stress conditions. Various studies have indicated that PGPR-inoculated plants can take up a larger volume of water and nutrients from the rhizosphere soil; these attributes could be useful for plant growth under drought stress (Alami et al. 2000). The enhancement of growth traits in rice treated with PGPR under water stress may be due to phytohormones such as abscisic acid (ABA), indole-3-acetic acid (IAA), salicylic acid, gibberellic acid, and cytokinins, as well as exopolysaccharides, which are produced by PGPR and help plants cope with drought stress. A similar trend was reported by Yang et al. (2009), Kim et al. (2012) and Timmusk et al. (2014).
The study showed that rice plants irrigated at 80% ETc and not treated with PGPR exhibited not only reduced water status (MSI and RWC) but also decreased chlorophyll fluorescence (Fv/Fm and PI), SPAD value, chlorophyll 'a' and chlorophyll 'b', and stomatal conductance, indicating the negative effects of water stress on rice. On the other hand, the canopy-air temperature (Tc–Ta) of rice plants increased by 0.61 °C under water stress (I80%) compared with full irrigation. Our results showed that inoculating rice plants with Bacillus subtilis subsp. and Bacillus megatherium as plant growth promoting rhizobacteria (PGPR) stabilized membrane integrity and maintained cell turgor of rice leaves under drought stress. In this regard, the increases in tissue RWC, MSI, chlorophyll fluorescence (Fv/Fm and PI), SPAD value, and chlorophyll 'a' and 'b', together with the decrease in canopy temperature (Tc–Ta), reflect greater metabolically available water, enabling the plants to maintain tissue health and sustain metabolic processes under drought stress. Our results are in line with those reported by Creus et al. (2004), Arzanesh et al. (2011), Liu et al. (2013) and Armada et al. (2014), who reported that PGPR helped plants by increasing leaf water content, which was ascribed to the production of plant hormones such as IAA by the bacteria; these hormones improved root growth and the formation of lateral roots, thereby increasing water uptake, decreasing leaf transpiration, improving nutrition and physiology, and controlling stomatal closure and metabolic activities. Also, it has been documented that under water stress, chlorophyll content (Chl a and Chl b, or SPAD), stomatal conductance, chlorophyll fluorescence (Fv/Fm and PI), photosynthetic parameters, and water status increase when plants are treated with PGPR compared with untreated plants (Wang et al. 2012; Elekhtyar 2015; Samaniego-Gámez et al. 2016; Zhang et al. 2019). In the present work, the reduction of the antioxidant defense system (e.g., peroxidase (PO), polyphenol oxidase (PPO), and total phenol), N% (leaves), and N% (grains) under drought stress may be due to the influence of drought stress on the availability and transport of nutrients, as soil nutrients are carried to the roots by water. Our results are in line with those of Selvakumar and Panneerselvam (2012), Abd El-Mageed et al. (2017) and Semida et al. (2021a), who reported that water stress reduces nutrient diffusion and the mass flow of water-soluble elements such as nitrate, K, Ca, Mg, and Si. Moreover, drought induces free radicals such as superoxide radicals, hydrogen peroxide, and hydroxyl radicals, which challenge antioxidant defenses. However, our study showed that the negative effects on the antioxidant defense system (i.e., peroxidase (PO), polyphenol oxidase (PPO), and total phenol), N% (leaves), and N% (grains) of water-stressed rice were alleviated by inoculation with PGPR, thereby enhancing antioxidant enzymes and N contents (leaves and grains). In this regard, Yogendra et al. (2015) reported that PGPR mitigate oxidative damage in rice plants grown under drought by increasing plant growth and activating antioxidant defense systems, thereby enhancing the stability of membranes in plant cells. Additionally, PGPR increased the biomass production of rice grown under drought stress. Enhancement of plant dry biomass is a positive criterion for drought tolerance and correlates with an increase in rice yields (Yogendra et al. 2015).
Our strains have the ability to fix N, which led to an increase in N uptake in leaves and grains. These positive results in response to PGPR application may be related to PGPR-regulated redistribution and uptake of N, in addition to the restoration of photosynthetic efficiency (Rodriguez et al. 2004; Anjum et al. 2007) and the provision of more metabolites required for rice growth. Drought stress (I80%) significantly decreased rice yield attributes (e.g., panicle length, panicle weight, grains number panicle−1, and 1000-grain weight) and yields (grain and straw) compared with fully irrigated plants (I100%). The reduction in yield components under water stress may be due to the decreases in growth, stomatal conductance, chlorophyll content, water status, N uptake, and photosynthesis efficiency of the plants (Quampah et al. 2011; Pejic et al. 2011). Consequently, the reductions in panicle length, panicle weight, grains number panicle−1, and 1000-grain weight decreased the grain and straw yields. In this regard, Pantuwan et al. (2002), Wu et al. (2011), Kumar et al. (2014) and Yang et al. (2019) reported that water stress can cause spikelet degeneration and sterility, reduce grain number, increase the number of unfilled grains, and reduce 1000-grain weight and yield. The G-WP values were not significantly affected by the irrigation quantity, whereas the S-WP values were significantly affected, and the highest values of G-WP and S-WP were recorded under the I80% treatment. A similar trend was reported by Semida et al. (2014) and Rady et al. (2021a, b). In general, according to the results of various experiments, lower water application provides higher WP values (Rady et al. 2021a; Semida et al. 2021b). Li et al. (2001) indicated that limited irrigation of wheat during the growing season could significantly increase WP. Abd El-Mageed et al. (2018) and Agami et al. (2018) found that the highest values of WUE for sorghum and wheat were recorded under low moisture conditions (60% of Class A pan evaporation). Results of the current study indicate that inoculating rice plants with PGPR enhanced yield, yield components, G-WP, and S-WP irrespective of irrigation treatment, and the highest values were noted when rice plants were well irrigated and inoculated with the Bacillus subtilis and Bacillus megatherium strains. This could be a result of PGPR inoculation improving morpho-physiological responses, chlorophyll efficiency, and plant water status, and providing greater protection for plant tissues, thus leading to increases in yields and water productivities. This result is in harmony with Hussain et al. (2014) for wheat, Kang et al. (2014) for soybean, Cohen et al. (2009) for maize, and Cassán et al. (2009) and García de Salamone et al. (2012) for rice. They concluded that the application of PGPRs increased yield and alleviated water stress through various mechanisms, such as reduced oxidative damage; increased proline, abscisic acid, auxin, gibberellin, and cytokinin contents; improved vegetative growth, plant water status, photosynthetic capacity, and nutrient status; and enhanced physiological and biochemical attributes.
Exposure of rice plants to drought stress significantly reduced physiological responses, RWC%, MSI%, antioxidant enzymes (i.e., peroxidase (PO), polyphenol oxidase (PPO), and total phenol), N% (in leaves and grains), growth attributes, and grain and straw yields, and increased the canopy temperature of rice plants. However, inoculating rice plants with PGPR could mitigate the deleterious effects of water stress by enhancing leaf photosynthetic pigments, chlorophyll fluorescence, SPAD value, stomatal conductance, plant water status, antioxidant enzymes, plant growth, yields, and WP, and by reducing plant canopy temperature. Based on the obtained results, it can be concluded that the I100 × +PGPR treatment is the most suitable for obtaining the highest grain and straw yields. Under water deficit, the application of the I80 × +PGPR treatment was found to be a favorable strategy to save 20% of the applied irrigation water while providing the same rice yield. Our results suggest that PGPR applications may be valuable for counteracting abiotic stresses and improving rice growth and productivity under drought stress.
Abd El-Mageed TA, Semida WM, Rady MM (2017) Moringa leaf extract as biostimulant improves water use efficiency, physio-biochemical attributes of squash plants under deficit irrigation. Agric Water Manag 193:46–54. https://doi.org/10.1016/j.agwat.2017.08.004 Abd El-Mageed TA, Samnoudi IME, Ibrahim AEM, El Tawwab ARA (2018) Compost and mulching modulates morphological, physiological responses and water use efficiency in sorghum (bicolor L. Moench) under low moisture regime. Agric Water Manag 208:431–439. https://doi.org/10.1016/j.agwat.2018.06.042 Abd El-Mageed TA, El-sherif AMA, Abd El-Mageed SA, Abdou NM (2019) A novel compost alleviate drought stress for sugar beet production grown in Cd-contaminated saline soil. Agric Water Manag 226:105831. https://doi.org/10.1016/j.agwat.2019.105831 Abd El-Mageed TA, Abdurrahman HA, Abd El-Mageed SA (2020) Residual acidified biochar modulates growth, physiological responses, and water relations of maize (Zea mays) under heavy metal-contaminated irrigation water. Environ Sci Pollut Res 27:22956–22966 Abd El-Mageed TA, Shaaban A, Abd El-Mageed SA et al (2021) Silicon defensive role in maize (Zea mays L.) against drought stress and metals-contaminated irrigation water. SILICON 13:2165–2176 Abdelaziz S, Hemeda NF, Belal EE, Elshahawy R (2018) Efficacy of facultative oligotrophic bacterial strains as plant growth-promoting rhizobacteria (PGPR) and their potency against two pathogenic fungi causing damping-off diseases. Appl Microbiol Open Access. https://doi.org/10.4172/2471-9315.1000153 Abdou NM, Abdel-Razek MA, Abd El-Mageed SA, Semida WM, Leilah AAA, Abd El-Mageed TA, Ali EF, Majrashi A, Rady MOA (2021) High nitrogen fertilization modulates morpho-physiological responses, yield, and water productivity of lowland rice under deficit irrigation. Agronomy 11:1291. https://doi.org/10.3390/agronomy11071291 Agami RA, Alamri SAM, Abd El-Mageed TA et al (2018) Role of exogenous nitrogen supply in alleviating the deficit irrigation stress in wheat plants. Agric Water Manag 210:261–270 Ahuja I, de Vos RCH, Bones AM, Hall RD (2010) Plant molecular stress responses face climate change. Trends Plant Sci 15:664–674. https://doi.org/10.1016/j.tplants.2010.08.002 Alami Y, Achouak W, Marol C, Heulin T (2000) Rhizosphere soil aggregation and plant growth promotion of sunflowers by an exopolysaccharide-producing Rhizobium sp. strain isolated from sunflower roots.
Appl Environ Microbiol 66:3393–3398. https://doi.org/10.1128/AEM.66.8.3393-3398.2000 Allen RG, Pereira LS, Raes D, Smith M (1998) Crop evapotranspiration: guidelines for computing crop requirements. Irrigation and drainage paper no. 56. FAO irrigation and drainage paper no. 56, Rome, Italy Andro T, Chambost JP, Kotoujansky A et al (1984) Mutants of Erwinia chrysanthemi defective in secretion of pectinase and cellulase. J Bacteriol 160:1199–1203. https://doi.org/10.1128/jb.160.3.1199-1203.1984 Anjum M, Sajjad M, Akhtar N et al (2007) Response of cotton to plant growth promoting rhizobacteria (PGPR) inoculation under different levels of nitrogen. J Agric Res 45:135–143 Armada E, Roldán A, Azcon R (2014) Differential activity of autochthonous bacteria in controlling drought stress in native lavandula and salvia plants species under drought conditions in natural arid soil. Microb Ecol 67:410–420. https://doi.org/10.1007/s00248-013-0326-9 Arnon DI (1949) Copper enzymes in isolated chloroplasts. Polyphenol-oxidase in Beta vulgaris L. Plant Physiol 24:1–5 Arzanesh MH, Alikhani HA, Khavazi K et al (2011) Wheat (Triticum aestivum L.) growth enhancement by Azospirillum sp. under drought stress. World J Microbiol Biotechnol 27:197–205. https://doi.org/10.1007/s11274-010-0444-1 Badal E, El-Mageed TAA, Buesa I et al (2013) Moderate plant water stress reduces fruit drop of "Rojo Brillante" persimmon (Diospyros kaki) in a Mediterranean climate. Agric Water Manag 119:154–160. https://doi.org/10.1016/j.agwat.2012.12.020 Ballester C, Castel J, Abd El-Mageed TA et al (2014) Long-term response of "Clementina de Nules" citrus trees to summer regulated deficit irrigation. Agric Water Manag. https://doi.org/10.1016/j.agwat.2014.03.003 Boretti A, Rosa L (2019) Reassessing the projections of the World Water Development Report. NPJ Clean Water 2:15. https://doi.org/10.1038/s41545-019-0039-9 Bouman B, Lampayan R, Tuong T (2007) Water management in irrigated rice; coping with water scarcity. International Rice Research Institute Brown KW, Turner FT, Thomas JC et al (1977) Water balance of flooded rice paddies. Agric Water Manag 1:277–291. https://doi.org/10.1016/0378-3774(77)90006-3 Cai J, He Y, Xie R, Liu Y (2020) A footprint-based water security assessment: an analysis of Hunan province in China. J Clean Prod. https://doi.org/10.1016/j.jclepro.2019.118485 Carlson R, Tugizimana F, Steenkamp PA, Dubery IA, Hassen AI, Labuschagne N (2020) Rhizobacteria-induced systemic tolerance against drought stress in Sorghum bicolor (L.) Moench. Microbiol Res 232:126388. https://doi.org/10.1016/j.micres.2019.126388 Cassán F, Maiale S, Masciarelli O et al (2009) Cadaverine production by Azospirillum brasilense and its possible role in plant growth promotion and osmotic stress mitigation. Eur J Soil Biol 45:12–19. https://doi.org/10.1016/j.ejsobi.2008.08.003 Cattelan AJ, Hartel PG, Fuhrmann JJ (1999) Screening for plant growth-promoting rhizobacteria to promote early soybean growth. Soil Sci Soc Am J 63:1670–1680. https://doi.org/10.2136/sssaj1999.6361670x Clark AJ, Landolt W, Bucher JB, Strasser RJ (2000) Beech (Fagus sylvatica) response to ozone exposure assessed with a chlorophyll a fluorescence performance index. Environ Pollut 109:501–507. https://doi.org/10.1016/S0269-7491(00)00053-1 Cohen AC, Travaglia CN, Bottini R, Piccoli PN (2009) Participation of abscisic acid and gibberellins produced by endophytic Azospirillum in the alleviation of drought effects in maize. Botany 87:455–462. 
https://doi.org/10.1139/B09-023 Creus C, Sueldo RJ, Barassi CA (2004) Water relations and yield in Azospirillum-inoculated wheat exposed to drought in the field. Can J Bot 82:273–281 Donald AH, Robert O (1998) Determination of total nitrogen in plant tissue. In: Kalra YP (ed) Handbook and reference methods for plant analysis. CRC Press Elekhtyar N (2015) Efficiency of Pseudomonas fluorescens as plant growth-promoting rhizobacteria (PGPR) for the enhancement of seedling vigor, nitrogen uptake, yield and its attributes of rice (Oryza sativa L.). Int J Sci Res Agric Sci 2:57–67 El-Hendawy SE, El-Lattief EAA, Ahmed MS, Schmidhalter U (2008) Irrigation rate and plant density effects on yield and water use efficiency of drip-irrigated corn. Agric Water Manag 95:836–844. https://doi.org/10.1016/j.agwat.2008.02.008 Falagán N, Artés F, Artés-Hernández F et al (2015) Comparative study on postharvest performance of nectarines grown under regulated deficit irrigation. Postharvest Biol Technol 110:24–32. https://doi.org/10.1016/j.postharvbio.2015.07.011 FAOSTAT (2018) Food and agriculture data. Food and Agriculture Organization Fernández JE, Alcon F, Diaz-espejo A et al (2020) Water use indicators and economic analysis for on-farm irrigation decision: a case study of a super high density olive tree orchard. Agric Water Manag 237:106074. https://doi.org/10.1016/j.agwat.2020.106074 García de Salamone IE, Funes JM, Di Salvo LP et al (2012) Inoculation of paddy rice with Azospirillum brasilense and Pseudomonas fluorescens: Impact of plant genotypes on rhizosphere microbial communities and field crop production. Appl Soil Ecol 61:196–204. https://doi.org/10.1016/j.apsoil.2011.12.012 Geerts S, Raes D (2009) Deficit irrigation as an on-farm strategy to maximize crop water productivity in dry areas. Agric Water Manag 96:1275–1284. https://doi.org/10.1016/j.agwat.2009.04.009 Getahun A, Muleta D, Assefa F, Kiros S (2020) Plant growth-promoting rhizobacteria isolated from degraded habitat enhance drought tolerance of acacia (Acacia abyssinica Hochst. ex Benth.) seedlings. Int J Microbiol 2020:8897998. https://doi.org/10.1155/2020/8897998 Grover M, Ali SZ (2011) Role of microorganisms in adaptation of agriculture crops to abiotic stresses Role of microorganisms in adaptation of agriculture crops to abiotic stresses. World J Microbiol Biotechnol. https://doi.org/10.1007/s11274-010-0572-7 Guimarães CM, Stone LF, Rangel PHN, Silva ACL (2013) Tolerance of upland rice genotypes to water deficit [Tolerância à deficiência hídrica de genótipos de arroz de terras altas]. Rev Bras Eng Agric e Ambient 17:805–810 Hayat S, Ali B, Hasan SA, Ahmad A (2007) Brassinosteroid enhanced the level of antioxidants under cadmium stress in Brassica juncea. Environ Exp Bot 60:33–41 Hoekstra AY, Chapagain AK, Aldaya MM, Mekonnen MM (2011) The water footprint assessment manual: setting the global standard. Routledge. https://doi.org/10.4324/9781849775526 Hussain MB, Zahir ZA, Asghar HN, Asgher M (2014) Can catalase and exopolysaccharides producing rhizobia ameliorate drought stress in wheat? Int J Agric Biol 16:3–13 Jägermeyr J, Gerte D, Heinke J, Schaphoff S, Kummu M, Lucht W (2015) Water savings potentials of irrigation systems:global simulation of processes and linkages. Hydrol Earth Syst Sci 19:3073–3091 Jensen ME (1983) Design and operation of farm irrigation systems. 
American Society of Agricultural Engineers, p 827 Kang SM, Radhakrishnan R, Khan AL et al (2014) Gibberellin secreting rhizobacterium, Pseudomonas putida H-2-3 modulates the hormonal and stress physiology of soybean to improve the plant growth under saline and drought conditions. Plant Physiol Biochem 84:115–124. https://doi.org/10.1016/j.plaphy.2014.09.001 Kim YC, Glick BR, Bashan R, Ryu C (2012) Enhancement of plant drought tolerance by microbes. In: Aroca R (ed) Plant responses to drought stress. Springer Koch E (1997) Screening of rhizobacteria for antagonistic activity against Pythium ultimum on cucumber and kale. J Plant Dis Prot 104:353–361 Kruzhilin IP, Doubenok NN, Ganiev MA, Abdou NM, MeliKhov VV, Bolotin AG, Rodin KA (2015) Water-saving technology of drip irrigated aerobic rice cultivation. J Isvestiya 3:47–56 Kruzhilin IP, Doubenok NN, Ganiev MA et al (2016) Combination of the natural and anthropogenically-controlled conditions for obtaining various rice yield using drip irrigation systems. Russ Agric Sci 42:454–457. https://doi.org/10.3103/s1068367416060173 Kumar A, Dixit S, Ram T et al (2014) Breeding high-yielding drought-tolerant rice: genetic variations and conventional and molecular approaches. J Exp Bot 65:6265–6278. https://doi.org/10.1093/jxb/eru363 Li FM, Song QH, Liu HS et al (2001) Effects of pre-sowing irrigation and phosphorus application on water use and yield of spring wheat under semi-arid conditions. Agric Water Manag 49:173–183. https://doi.org/10.1016/S0378-3774(01)00087-7 Liu F, Xing S, Ma H et al (2013) Cytokinin-producing, plant growth-promoting rhizobacteria that confer resistance to drought stress in Platycladus orientalis container seedlings. Appl Microbiol Biotechnol 97:9155–9164. https://doi.org/10.1007/s00253-013-5193-2 Loper JE, Schroth MN (1986) Influence of bacterial sources of indole-3-acetic acid on root elongation of sugar beet. Phytopathology 76:386–389 Maxwell K, Johnson GN (2000) Chlorophyll fluorescence—a practical guide. J Exp Bot 51:659–668. https://doi.org/10.1093/jxb/51.345.659 McCauley GN (1990) Sprinkler vs. flood irrigation in traditional rice production regions of Southeast Texas. Agron J 82:677–683 Meyer JM, Abdallah MA (1978) The fluorescent pigment of Pseudomonas fluorescens: Biosynthesis, purification and physicochemical properties. J Gen Microbiol 107:319–328. https://doi.org/10.1099/00221287-107-2-319 Palanog AD, Swamy BPM, Shamsudin NAA et al (2014) Grain yield QTLs with consistent-effect under reproductive-stage drought stress in rice. Field Crop Res 161:46–54. https://doi.org/10.1016/j.fcr.2014.01.004 Palli R (2005) Effect of plant growth-promoting rhizobacteria on canola (Brassica napus L.) and lentil (Lens culinaris Medik) plants. Thesis, Master of Science, Department of Applied Microbiology and Food Science, University of Saskatchewan, Saskatoon, Canada Pantuwan G, Fukai S, Cooper M et al (2002) Yield response of rice (Oryza sativa L.) genotypes to drought under rainfed lowlands 2. Selection of drought resistant genotypes. Field Crop Res 73:169–180. https://doi.org/10.1016/S0378-4290(01)00195-2 Pejic B, Cupina B, Dimitrijevic M et al (2011) Response of sugar beet to soil water deficit. Rom Agric Res 28:151–155 Poudel M, Mendes R, Costa L, Bueno CG, Meng Y, Folimonova SY, Garrett KA, Martins SJ (2021) The role of plant-associated bacteria, fungi, and viruses in drought stress mitigation. Front Microbiol 12:743512. 
https://doi.org/10.3389/fmicb.2021.743512 Premachandra GS, Saneoka H, Ogata S (1990) Cell membrane stability, an indicator of drought tolerance, as affected by applied nitrogen in soyabean. J Agric Sci. https://doi.org/10.1017/S0021859600073925 Quampah A, Wang RM, Shams H et al (2011) Improving water productivity by potassium application in various rice genotypes. Int J Agric Biol 13:9–17 Rady MM, AbdEl-Mageed TA, Abdurrahman HA, Mahdi AH (2016) Humic acid application improves field performance of cotton (Gossypium barbadense L.) under saline conditions. J Anim Plant Sci 26:487–493 Rady MM, Boriek SHK, Abd El-Mageed TA et al (2021a) Exogenous gibberellic acid or dilute bee honey boosts drought stress tolerance in vicia faba by rebalancing osmoprotectants, antioxidants, nutrients, and phytohormones. Plants 10:1–23. https://doi.org/10.3390/plants10040748 Rady MOA, Semida WM, Howladar SM, Abd El-Mageed TA (2021b) Raised beds modulate physiological responses, yield and water use efficiency of wheat (Triticum aestivum L.) under deficit irrigation. Agric Water Manag 245:106629 Ram PC, Maclean JL, Dawe DC et al (2003) Rice almanac, 3rd edn. Ann Bot 92(5):739. https://doi.org/10.1093/aob/mcg189 Ramamoorthy V, Raguchander T, Samiyappan R (2002) Induction of defense-related proteins in tomato roots treated with Pseudomonas fluorescens Pf1 and Fusarium oxysporum f. sp. lycopersici. Plant Soil 239:55–68 Renwick A, Campbel L, Coe S (1991) Assessment of in vivo screening systems for potential biocontrol agents of Gaeumannomyces graminis. Plant Pathol 40:524–532 Rodriguez IR, Miller GL (2000) Using a chlorophyll meter to determine the chlorophyll concentration, nitrogen concentration, and visual quality of St. Augustinegrass Hortsci 35:751–754 Rodriguez H, Gonzalez T, Goire I, Bashan Y (2004) Gluconic acid production and phosphate solubilization by the plant growth-promoting bacterium Azospirillum spp. Naturwissenschaften 91:552–555. https://doi.org/10.1007/s00114-004-0566-0 Samaniego-Gámez BY, Garruña R, Tun-Suárez JM et al (2016) Bacillus spp. Inoculation improves photosystem II efficiency and enhances photosynthesis in pepper plants. Chil J Agric Res 76:409–416. https://doi.org/10.4067/S0718-58392016000400003 Saravanan VS, Subramoniam SR, Raj SA (2004) Assessing in vitro solubilization potential of different zinc solubilizing bacterial (ZSB) isolates. Braz J Microbiol 35:121–125. https://doi.org/10.1590/S1517-83822004000100020 Selvakumar G, Panneerselvam PGA (2012) Bacterial mediated alleviation of abiotic stress in crops. In: Maheshwari DK (ed) Bacteria in agrobiology: stress management. Springer, Berlin, pp 205–224 Semida WM, Abd El-Mageed TA, Howladar SM (2014) A novel organo-mineral fertilizer can alleviate negative effects of salinity stress for eggplant production on reclaimed saline calcareous soil. ISHS Acta Hortic 1034:493–499 Semida WM, Abdelkhalik A, Rady MOA et al (2020) Exogenously applied proline enhances growth and productivity of drought stressed onion by improving photosynthetic efficiency, water use efficiency and up-regulating osmoprotectants. Sci Hortic 272:109580. https://doi.org/10.1016/j.scienta.2020.109580 Semida WM, Abd El-Mageed TA, Abdelkhalik A et al (2021a) Selenium modulates antioxidant activity, osmoprotectants, and photosynthetic efficiency of onion under saline soil conditions. Agronomy 11:855. 
https://doi.org/10.3390/agronomy11050855 Semida WM, Abdelkhalik A, Mohamed G et al (2021b) Foliar application of zinc oxide nanoparticles promotes drought stress tolerance in eggplant (Solanum melongena L.). Plants 10:421 Shekoofa A, Sinclair T (2018) Aquaporin activity to improve crop drought tolerance. Cells 7:123. https://doi.org/10.3390/cells7090123 Sivapalan S (2015) Water Balance of Flooded Rice in the Tropics. In: Javaid MS (ed) Irrigation and drainage—sustainable strategies and systems. Intech Open. https://doi.org/10.5772/59043 Soil Survey Staff (1999) Soil taxonomy. A basic system of soil classification for making sand interpreting soil surveys. Agriculture Handbook no. 466, 2nd edn. USDA Steduto P, Hsiao TC, Fereres E, Raes D (2012) Crop yield response to water. FAO Timmusk S, El-daim IAA, Copolovici L et al (2014) Drought-tolerance of wheat improved by rhizosphere bacteria from harsh environments: enhanced biomass production and reduced emissions of stress volatiles. PLoS ONE. https://doi.org/10.1371/journal.pone.0096086 Tuong TP, Bouman BAM, Mortimer M (2005) More rice, less water—integrated approaches for increasing water productivity in irrigated rice-based systems in Asia. Plant Prod Sci 8:231–241. https://doi.org/10.1626/pps.8.231 Vickers ACR (2002) Handbook of water use and conservation. Water Plow Vurukonda SS, Vardharajula S, Shrivastava M, SkZ A (2016) Enhancement of drought stress tolerance in crops by plant growth promoting rhizobacteria. Microbiol Res 184:13–24 Wang F, Kang S, Du T et al (2011) Determination of comprehensive quality index for tomato and its response to different irrigation treatments. Agric Water Manag 98:1228–1238. https://doi.org/10.1016/j.agwat.2011.03.004 Wang CJ, Yang W, Wang C et al (2012) Induction of drought tolerance in cucumber plants by a consortium of three plant growth-promoting rhizobacterium strains. PLoS ONE 7:1–10. https://doi.org/10.1371/journal.pone.0052565 Wu N, Guan Y, Shi Y (2011) Effect of water stress on physiological traits and yield in rice backcross lines after anthesis. Energy Procedia 5:255–260. https://doi.org/10.1016/j.egypro.2011.03.045 Yang J, Kloepper JW, Ryu C (2009) Rhizosphere bacteria help plants tolerate abiotic stress. Trends Plant Sci 14:1–4 Yang X, Wang B, Chen L et al (2019) The different influences of drought stress at the flowering stage on rice physiological traits, grain yield, and quality. Sci Rep 9:1–12. https://doi.org/10.1038/s41598-019-40161-0 Yogendra SG, Singh US, Sharma AK (2015) Bacterial mediated amelioration of drought stress in drought tolerant and susceptible cultivars of rice (Oryza sativa L.). African J Biotechnol 14:764–773. https://doi.org/10.5897/ajb2015.14405 Yuan S, Linquist BA, Wilson LT et al (2021) Sustainable intensification for a larger global rice bowl. Nat Commun 12:7163. https://doi.org/10.1038/s41467-021-27424-z Zhang W, Xie Z, Zhang X et al (2019) Growth-promoting bacteria alleviates drought stress of G. uralensis through improving photosynthesis characteristics and water status. J Plant Interact 14:580–589. https://doi.org/10.1080/17429145.2019.1680752 Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Soil and Water Department, Faculty of Agriculture, Fayoum University, Fayoum, 63514, Egypt Taia A. Abd El-Mageed & Nasr M. Abdou Agronomy Department, Faculty of Agriculture, Fayoum University, Fayoum, 63514, Egypt Shimaa A.
Abd El-Mageed Department of Agricultural Microbiology, Faculty of Agriculture, Zagazig University, Zagazig, 44511, Egypt Mohamed T. El-Saadony Department of Agricultural Microbiology, Faculty of Agriculture, Fayoum University, Fayoum, 63514, Egypt Sayed Abdelaziz Taia A. Abd El-Mageed Nasr M. Abdou TAA, SAA and SA conceived and designed the experiment. TAA, NA and SAA handled the experiment and measured physiological indicators. TAA, and MTE analyzed the data and wrote the paper. All authors read and approved the final manuscript. Correspondence to Taia A. Abd El-Mageed. Abd El-Mageed, T.A., Abd El-Mageed, S.A., El-Saadony, M.T. et al. Plant Growth-Promoting Rhizobacteria Improve Growth, Morph-Physiological Responses, Water Productivity, and Yield of Rice Plants Under Full and Deficit Drip Irrigation. Rice 15, 16 (2022). https://doi.org/10.1186/s12284-022-00564-6 PGPR Chlorophyll fluorescence Air–canopy temperature (Tc–Ta) Water relations Antioxidant system
Volume 19 Supplement 6 Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: medical informatics and decision making
MultiSourcDSim: an integrated approach for exploring disease similarity
Lei Deng1, Danyi Ye1, Junmin Zhao2 & Jingpu Zhang2
BMC Medical Informatics and Decision Making volume 19, Article number: 269 (2019)
A collection of disease-associated data contributes to studying the associations between diseases. Discovering closely related diseases plays a crucial role in revealing their common pathogenic mechanisms. This might further suggest treatments that can be transferred from one disease to another. During the past decades, a number of approaches for calculating disease similarity have been developed. However, most of them are designed to take advantage of single or few data sources, which results in their low accuracy. In this paper, we propose a novel method, called MultiSourcDSim, to calculate disease similarity by integrating multiple data sources, namely, gene-disease associations, GO biological process-disease associations and symptom-disease associations. Firstly, we establish three disease similarity networks according to the three disease-related data sources respectively. Secondly, the representation of each node is obtained by integrating the three small disease similarity networks. In the end, the learned representations are applied to calculate the similarity between diseases. Our approach shows the best performance compared with three other popular methods. Besides, the similarity network built by MultiSourcDSim suggests that our method can also uncover the latent relationships between diseases. MultiSourcDSim is an efficient approach to predicting similarity between diseases.
Quantitative measurement of disease similarity is gaining more and more attention because it helps to reveal common pathophysiology and improve clinical decision-making systems, so as to better understand human disease status and more accurately classify diseases [1]. It also plays a crucial role in identifying novel drug indications [2], since diseases may have the same or similar therapeutic targets, suggesting that they may be treated with the same or similar drugs [3–6]. In the past few decades, our understanding of human diseases has made remarkable progress [7]. For example, network-based approaches [8–11] to calculating the similarity between diseases have been impressive. Constructing a disease similarity network based on biological data to explore the relationships between diseases has become one of the research hotspots of modern biology and medicine, and research on measuring disease similarity is therefore necessary. In previous studies, various properties of human genes (such as predicted function or amino-acid sequence length) and Gene Ontology (GO) [12–14] biological processes have been correlated with the chance of causing a disease [15–17]. The calculation approaches for disease similarity can be roughly divided into function-based methods [18, 19] and semantic-based methods [20]. The function-based approach calculates similarities between diseases by comparing the genes associated with the diseases [18, 19]. For instance, the BOG [18] method, designed by Mathur and Dinakarpandian, calculates the similarity between diseases by comparing the gene overlaps of related diseases. Moreover, BOG [18] also considers the self-information of each disease.
However, its shortcoming is that it does not consider the functional links between disease-related genes. Further, Mathur and Dinakarpandian proposed a method based on process similarity (PSB [19]). The method provides functions to measure similarity, including a similarity function based on GO terms [12], and a similarity function between entities annotated with terms extracted from the ontology, based on both co-occurrence and information content. The semantic-based method is extensively used in biomedicine and bioinformatics. For instance, Resnik's method [21] calculates the similarity between diseases according to the information content of the most informative common ancestor. Lin's method [22] incorporates not only the information content of the most informative common ancestor but also the information content of the two disease terms. Jiang and Conrath [23] represented the similarity between two terms through the semantic distance. In addition, phenotype similarity plays an important part in many biological and biomedical applications, and it is also the most common way of classifying diseases [24]. For example, the Human Phenotype Ontology (HPO) is a controlled and standardized vocabulary that describes the abnormal phenotypes of human disease, and Medical Subject Headings (MeSH) [25] use this approach to classify diseases. Although there are many approaches for measuring similarity between diseases, most of them use a single biological data source, and few methods using multiple biological data sources have been proposed. For example, some of the previous approaches calculate the similarity according to genes related to diseases. Nevertheless, there exist some diseases which are unrelated or only weakly related to genes. Thus, depending solely on individual biological data associated with disease might greatly affect the prediction performance of the methods. In this work, a novel approach named MultiSourcDSim is proposed to compute the similarity between diseases by integrating multiple biological datasets. In MultiSourcDSim, firstly, three disease similarity networks are built using different biological data, namely gene-disease associations, GO biological process-disease associations and symptom-disease associations, respectively. Secondly, the high-dimensional vector of each node is extracted by running random walks with restart [26] on each network, and low-dimensional vectors that can represent the high-dimensional topological patterns in each network are learned. Finally, the similarity between diseases is obtained by calculating the cosine score between two low-dimensional vectors. The experiments demonstrate that the disease similarity predicted by our method is significantly correlated with the disease categories of MeSH, implying that the network constructed by our method is capable of detecting the latent relationships between diseases. Moreover, the results also show that MultiSourcDSim outperforms three other popular methods.
CTD's MEDIC disease vocabulary, downloaded from http://ctdbase.org (March 4, 2018), is chosen as the standard for describing diseases. CTD's MEDIC disease vocabulary is a modified subset of descriptors from the Diseases [C] branch of the U.S. National Library of Medicine's MeSH, combined with genetic disorders from the Online Mendelian Inheritance in Man (OMIM) database, and we use MeSH identifiers to mark disease terms. Each record in CTD's MEDIC disease vocabulary contains 9 fields, 4 of which are retained for calculating disease similarity.
They are respectively DiseaseID, DiseaseName, AltDiseaseIDs (alternative identifiers) and ParentIDs (identifiers of the parent terms). We have collected three data sets associated with disease, namely gene-disease associations, GO biological process-disease associations, and symptom-disease associations. In the three sets, a great deal of biological information bound up with diseases is included. For instance, each record in the gene-disease associations contains 9 fields (GeneSymbol, GeneID, DiseaseName, DiseaseID, DirectEvidence, InferenceChemicalName, InferenceScore, OmimIDs, PubMedIDs). In the three data sets, 3,125,954 gene-disease associations containing 3254 disease terms and 668,760 GO biological process-disease associations containing 5720 disease terms are pooled from http://ctdbase.org(March 4, 2018), and each record in the two data sets is identified by MeSH markers. The gene terms and the gene ontology biological process terms are labeled with the NCBI gene identifiers and GO identifiers, respectively. The 80,638 symptom-disease associations are collected from paper [27], which describes 4040 diseases. However, the diseases in the symptom-disease associations are marked by the MeSH names. To obtain the Mesh identifiers corresponding to the names, we map the disease names in the symptom-disease associations to the IDs in the CTD's MEDIC disease vocabulary. After screening for the co-occurring diseases term in all associations, 8126 diseases are extracted. Overview of MultiSourcDSim In our method, we combine three disease-related data sets to calculate the similarity between diseases more accurately. Specifically, we firstly construct three disease similarity networks through computing the similarity respectively according to the gene-disease associations, GO biological process-disease associations, and symptom-disease associations. Secondly, the compact low-dimensional feature representations of diseases from the three similarity networks are learned by running Diffusion Component Analysis (DCA) [28–30]. Finally, the disease similarity is calculated according to the learned representations. Calculate semantic similarity of diseases MeSH is a vocabulary that gives uniformity and consistency to the indexing and cataloging of biomedical literature. It is organized in a manner of tree structures with 16 main branches. Category C represents diseases. In our approach, the semantic similarity of diseases is measured by using the special structure between MeSH descriptor [25]. We build a directed acyclic graph (DAG) to clarify the associations among various diseases. The nodes in the DAG represent the MeSH descriptor. Child nodes are more specialized (containing more disease information) and parent nodes are more generalized (containing less disease information). In addition to the relationships of the disease itself, we also combine the relationships between disease and other biological entity, namely gene, GO and symptom. The probability of a disease occurs in a disease-related data set is just its frequency in the data set. The frequency of a disease term t is calculated as: $$ f(t)=self(t)+\sum_{tc\in children(t)}f(tc). $$ Here, self(t) represents the number of occurrences of the disease term t in a single data set, and the disease term tc is a direct child of the disease item t, belonging to the children(t) collection. 
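As a brief aside, Eq. (1) above (and the probability of Eq. (2), introduced just below) can be made concrete with a small sketch. The toy MeSH-like DAG and the occurrence counts are invented purely for illustration; in practice the counts come from the gene-, GO biological process- and symptom-disease association files.

from functools import lru_cache

# Toy DAG: parent term -> direct children, plus raw occurrence counts of each
# term in one association dataset (all values are made up).
children = {
    "ROOT": ["C01", "C02"],
    "C01": ["C01.1", "C01.2"],
    "C01.1": [],
    "C01.2": [],
    "C02": [],
}
self_count = {"ROOT": 0, "C01": 2, "C01.1": 3, "C01.2": 1, "C02": 5}

@lru_cache(maxsize=None)
def frequency(term):
    # Eq. (1): f(t) = self(t) + sum of f(tc) over the direct children of t
    return self_count.get(term, 0) + sum(frequency(c) for c in children.get(term, []))

def probability(term, root="ROOT"):
    # Eq. (2): prob(t) = f(t) / N, with N the frequency of the root term
    return frequency(term) / frequency(root)

for t in ["C01", "C01.1", "C02"]:
    print(t, frequency(t), round(probability(t), 3))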
In other words, the frequency of the disease term t in a single disease-related data set is defined as the frequency of its own occurrence plus the frequency of occurrence of all its child nodes. The probability that the disease term t appears in the disease-related data set is as follows: $$ prob(t)=\frac{f(t)}{N}. $$ Here, N indicates the frequency of occurrence of the root node in the corresponding DAG. Then, the similarity scores are computed according to the probabilities of diseases based on the metric proposed by Lin et al. [22]. In Lin's method, the similarity is measured in terms of information theory. It is believed that the similarity between terms is determined by their generality (information content of common ancestor nodes) and particularity (their respective information content). Therefore, the semantic similarity depends on the maximum ratio of the information content of the common ancestor nodes of the two terms to the sum of the information content of the two terms themselves. Generally, the higher the degree of information sharing between two terms, the higher the semantic similarity score, and on the contrary, the lower the similarity score. This definition is as follows: $$ {\begin{aligned} Score(t1,t2)=\max_{t\in\left(LCA(t1,t2)\right)}\left(\frac{2 * \log prob(t)}{\log prob(t1)+ \log prob(t2)}\right). \end{aligned}} $$ Here, LCA(t1,t2) is the set of least common ancestors of term t1 and t2. The similarity scores fall in the range [0, 1]. Integrate multiple networks and learn representations We construct three disease similarity networks according to the similarity scores. To achieve the compact integration of multiple similarity network, we adopt DCA strategy to capture low-dimensional vectors representing topological patterns of networks. In DCA, the random walk with restart (RWR) method [26] is firstly employed to analyze the structure of each network. The RWR from a node i is defined as: $$ s_{i}^{t+1}=(1-a)s_{i}^{t}T+ae_{i}. $$ Here, T denotes the probability transfer matrix. \(s_{i}^{t}\) is specified as an n-dimensional vector, where each entry is the probability of visiting a node at t iterations from the initial node i. ei is the initial probability vector, where ei(i)=1 and ei(j)=0, ∀j≠i. a is the restart probability. After several iterations, a stable distribution is obtained, and si is regard as the 'diffusion state' of the node i. There exists noise in the diffusion states obtained in this manner, and the dimensionality is high. To solve this problem, we utilize fewer dimensions to approximate each diffusion state si through a polynomial logistic model based on the potential vector representation of nodes in a network. Specifically, the probability assigned to node j in the diffusion state of node i is as follows: $$ \hat{s}_{ij}=\frac{exp{\left\{x_{i}^{T}w_{j}\right\}}}{\sum_{j'}exp{\left\{x_{i}^{T}w_{j}'\right\}}}, $$ where ∀i,xi,wj∈Rd for d≪n. xi and wj represent the node feature and context feature of node i respectively. The goal is to find the low-dimensional vector representation of nodes w and x that best approximates a set of observed diffusion states s={s1,…,sn} according to the logistic model. To achieve the goal, KL-divergence is used as the objective function to optimize, which is given by: $$ \mathop {\min }\limits_{w,x} C(s,\hat s) = \frac{1}{n}{\sum\nolimits}_{i = 1}^{n} {{D_{KL}}} \left({s_{i}}||{\hat s_{i}}\right), $$ where n is the number of nodes. 
By writing out the definition of the KL-divergence, the objective can be expanded as: $$ C(s,\hat s) = \frac{1}{n}\sum_{i=1}^{n}\left[-H(s_{i})-\sum_{j=1}^{n}s_{ij}\left(x_{i}^{T}w_{j}-\log\sum_{j'=1}^{n}\exp\left\{x_{i}^{T}w_{j'}\right\}\right)\right], $$ where H(·) denotes the entropy. In order to combine the three disease similarity networks, formula (6) is modified as follows: $$ \mathop{\min}\limits_{w,x} C(s,\hat s) = \frac{1}{n}\sum_{m=1}^{M}\sum_{i=1}^{n}D_{KL}\left(s_{i}^{m}||\hat s_{i}^{m}\right). $$ Here, M represents the number of networks; in this work, M is equal to 3. To minimize the objective function, we compute the gradients with respect to the parameters w and x. The low-dimensional vector representations are obtained by the quasi-Newton L-BFGS method using these gradients. To improve efficiency, we can employ singular value decomposition (SVD) to optimize an alternative objective function [31]. Calculate the similarity between diseases After extracting the low-dimensional representations for all nodes, which best explain the connectivity patterns in the networks, we utilize the learned representations as features for calculating the disease similarity. In this study, the number of nodes in the three networks, namely the total number of diseases, is 8126, and the dimension of these features is set to 600. The similarity between diseases is measured through the cosine score, which is as follows: $$ cosine(d_{x},d_{y})=\frac{\sum_{i}d_{x,i}d_{y,i}}{\sqrt{\sum_{i}d_{x,i}^{2}}\sqrt{\sum_{i}d_{y,i}^{2}}}. $$ Here, dx and dy are the two vectors representing the two diseases. Obviously, the similarity is between 0 and 1. The degree distribution of disease similarity networks We adopt gene-disease associations, GO biological process-disease associations and symptom-disease associations as the sources of the disease similarity networks, and construct the individual similarity networks based on Lin's measure separately. In order to better understand the topology of these networks, we calculate the degree distribution of the nodes in each network. Figures 1, 2 and 3 show the degree distributions of disease nodes in the three individual disease similarity networks. Degree distribution of disease nodes in the similarity network built from the gene-disease association dataset Degree distribution of disease nodes in the similarity network built from the GO biological process-disease association dataset Degree distribution of disease nodes in the similarity network built from the disease-symptom association dataset In the disease similarity network based on the gene-disease association dataset (GDN), there are 3254 diseases and 32733 connections. Marfan Syndrome (MeSH: D008382), which is related to 178 diseases, has the maximum degree. There are 225 diseases with degree 1 (Fig. 1). 5720 diseases and 249490 relationships make up the disease similarity network based on the GO biological process-disease association dataset (BPDN). The disease with the maximum degree is Martin-Probst Deafness-Mental Retardation Syndrome (MeSH: C564495), with degree 1024. As shown in Fig. 2, nearly half of the disease nodes have edges to about 100 other disease nodes.
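As a non-authoritative illustration of two of the computational steps described above — the random walk with restart that produces the diffusion states, and the cosine score between learned feature vectors — the following Python sketch uses a toy four-node network with made-up numbers; the DCA optimization itself (fitting w and x by L-BFGS or the SVD-based variant) is only summarized in the text and is not implemented here.

```python
import numpy as np

def rwr(T, i, a=0.5, tol=1e-10, max_iter=10000):
    """Random walk with restart from node i:
    s^{t+1} = (1 - a) * s^t @ T + a * e_i."""
    n = T.shape[0]
    e = np.zeros(n)
    e[i] = 1.0
    s_new = s = e.copy()
    for _ in range(max_iter):
        s_new = (1.0 - a) * s @ T + a * e
        if np.abs(s_new - s).sum() < tol:
            break
        s = s_new
    return s_new

def cosine(dx, dy):
    """Cosine score between two disease feature vectors."""
    return float(dx @ dy / (np.linalg.norm(dx) * np.linalg.norm(dy)))

# Toy 4-node similarity network, turned into a row-stochastic transition matrix T.
W = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
T = W / W.sum(axis=1, keepdims=True)

# One diffusion state per node (rows); in the paper these high-dimensional
# states are then compressed to d = 600 dimensions by DCA.
S = np.vstack([rwr(T, i) for i in range(T.shape[0])])
print(S.round(3))

# The final similarity step: cosine score between two (here made-up) embeddings.
d_x = np.array([0.2, 0.7, 0.1])
d_y = np.array([0.3, 0.6, 0.2])
print(round(cosine(d_x, d_y), 3))
```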
Similarity values of all disease pairs are also computed in the disease similarity network based on the symptom-disease association dataset (SDN), and the distribution of 48279 similarity values (between 4040 diseases) is obtained. Oculocerebrorenal Syndrome (MeSH: D009800), associated with 256 diseases, has the maximum degree (Fig. 3). From the above statistics we conclude that the density of GDN is the largest compared to BPDN and SDN. After obtaining the integrated disease similarity network (GPSN), the distribution of its similarity scores is also computed. The distribution is shown in Fig. 4; the similarity scores for most disease pairs across the network range from 0 to 0.6. The number of disease pairs in the 0.2-0.3 similarity bin is the highest, followed by the 0.3-0.4 bin. Histogram of similarity scores between 8126 disease nodes. Most disease-disease pairs have a low similarity score The benchmark set adopted in this experiment contains 40 pairs of highly similar diseases. It is derived from the work of Suthram et al. [1] and Pakhomov et al. [32], with cancers removed. The benchmark set consists of pairs of diseases that are confirmed to be interrelated, such as Polycystic Ovary Syndrome (MeSH: D011085) and Obesity (MeSH: D009765), or Chronic Obstructive Airway Disease (MeSH: D029424) and Asthma (MeSH: D001249). It also contains some disease pairs which have no apparent correlation but have been shown to be correlated by various lines of evidence, such as Obesity and Asthma, or Malaria (MeSH: D008288) and Anemia (MeSH: D000740). Moreover, we randomly choose 500 disease pairs from the similarity network as a random set, excluding the disease pairs in the benchmark set. Parameter selection There are two parameters (α and d) to be tuned in MultiSourcDSim. The parameter α is the restart probability. According to previous practical experience [33], it is set to 0.5. The parameter d denotes the feature dimension of each node. We compare the performance for different numbers of dimensions based on the benchmark set, calculating the AUC values as d increases from 500 to 800 with a step size of 100. As shown in Fig. 5, the results show that the performance of MultiSourcDSim is stable over a wide range of values for the number of dimensions, implying that our method is robust to over-fitting. On the whole, the AUC reaches its maximum when d equals 600. Hence, d is set to 600 in this paper. Comparison for different numbers of dimensions Performance assessment To evaluate the disease similarity results calculated by MultiSourcDSim, we compare them against the MeSH disease classification. MeSH is an authoritative medical thesaurus and the basis for biomedical indexing. MeSH divides the disease (C) section into 26 categories according to the tree code (excluding some ambiguous categories). To examine whether GPSN is related to the MeSH disease categories, we compare the similarity scores of disease pairs belonging to the same MeSH category with the similarity scores of disease pairs from different MeSH categories. As demonstrated in Fig. 6, the average similarity scores for disease pairs from the same MeSH category are significantly higher than those from different MeSH categories. In conclusion, the experiment demonstrates that the similarity scores of disease pairs are closely related to the MeSH disease category.
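The AUC used here for parameter selection (and again in the comparisons below) treats benchmark pairs as positives and random pairs as negatives. As a hedged illustration of that computation, the following short sketch uses scikit-learn on made-up similarity scores; the numbers are toy stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-ins for similarity scores of 40 benchmark (positive) pairs
# and 500 random (negative) pairs; in the paper these come from GPSN.
benchmark_scores = rng.beta(5, 2, size=40)   # tend to be high
random_scores = rng.beta(2, 5, size=500)     # tend to be low

labels = np.concatenate([np.ones(40), np.zeros(500)])
scores = np.concatenate([benchmark_scores, random_scores])
print("AUC:", round(roc_auc_score(labels, scores), 3))
```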
Evaluation of MultiSourcDSim against MeSH classification Moreover, in order to verify that the performance of the network integrating the three data sets is better than that of a network formed from a single data set, we compare GDN, BPDN, SDN and GPSN based on the benchmark set and the random set. As shown in Fig. 7, MultiSourcDSim achieves the best AUC of 0.906, while the AUC values of GDN, BPDN and SDN are 0.771, 0.774 and 0.797, respectively. This result indicates that, compared to the individual networks without integration, MultiSourcDSim has a stronger and more stable power for discovering disease-disease associations. The performance improvement is partially attributed to the fact that jointly analyzing the structure of the multiple networks can uncover fine-grained topological patterns. Another important factor is the compactness of the feature representations, which helps capture the relevant topological patterns apart from the noise in the data. Integrating Multiple Networks Outperforms Individual Networks The performance of MultiSourcDSim is further evaluated by comparing it with three other recent approaches: a text-based approach, MimMiner [34], an integrated semantic and functional approach, MedNetSim [35], and a web-based approach, HSDN [27]. To fairly compare the performance of these methods, we select widely used metrics, namely accuracy (ACC), the area under the ROC curve (AUC), F1-score (F1), the Matthew's correlation coefficient (MCC), precision (PRE), sensitivity (SEN/Recall) and specificity (SPE). For each of the four approaches, we compute the similarity scores of the disease pairs in the benchmark set and the random set, and sort them in descending order. Moreover, we regard the disease pairs in the benchmark set and the random set as positive and negative samples, respectively. The disease pairs correctly predicted in the benchmark set are considered true positive samples, and the disease pairs in the random set which are predicted to be highly correlated are regarded as false positive samples. The results of the evaluation are shown in Table 1, where the AUC value of the HSDN method is the lowest, at 0.818. The MimMiner method applies text mining to disease classification and improves performance, resulting in an AUC of 0.836. The MedNetSim method takes the entire set of protein interactions and the biomedical literature corpus into consideration, increasing its AUC to 0.854. Our approach integrates multiple disease-related data sets and further improves the performance with an AUC value of 0.905, the best of the four methods. In addition, our method also achieves the highest values for ACC, F1, MCC, PRE, and SEN, which are 0.815, 0.684, 0.273, 0.601, and 0.750, respectively. Table 1 Prediction performance of MultiSourcDSim in comparison with the other three methods on the benchmark set and random set The results in Table 1 demonstrate that calculating disease similarity by integrating multiple disease-related data sources is an effective approach. In order to test the stability of our method, we randomly select 100 disease pairs and compute their similarity scores. The calculations are repeated 100 times and the average AUCs of the four methods are depicted in Fig. 8. The average values are respectively 0.819 (HSDN), 0.835 (MimMiner), 0.855 (MedNetSim) and 0.906 (MultiSourcDSim), which are consistent with the AUC column in Table 1. We further compare the ranking of disease pairs derived from the benchmark set. As shown in Fig.
9, the number of benchmark disease pairs found by MultiSourcDSim is always the largest among the top 220 ranked disease pairs. Average of AUC for 100 permutations The number of recovered disease pairs as the number of top-ranking disease pairs varies In addition, considering the lowest-ranked benchmark pair among the 540 disease pairs (500 random disease pairs and 40 benchmark pairs), MultiSourcDSim can still find all 40 benchmark pairs, which represents quite good performance. For example, Obesity (MeSH: D009765) and Asthma (MeSH: D001249) form a disease pair belonging to the benchmark set which ranks last in our approach. As shown in Table 2, the average ranking of Obesity and Asthma is very low for all four methods. Nevertheless, compared to the other three methods, our approach improves the ranking of Obesity and Asthma by 9%-14%. Table 2 The average ranking of the disease pair (Obesity and Asthma) among 540 disease pairs Integrated disease similarity network We construct a disease similarity network by using the top-ranking 0.3% of the similarity values among the 8126 diseases. As shown in Fig. 10, there are 2604 diseases in the network and they are connected by 121787 edges. The maximum connected component consists of 283 nodes. Martin-Probst Deafness-Mental Retardation Syndrome (MeSH: C564495), which is connected to 511 diseases, has the maximum degree. In Fig. 10, nodes in the network represent diseases and are coloured according to MeSH category; each colour corresponds to a different MeSH category, such as Virus Diseases (MeSH: C02), Digestive System Diseases (MeSH: C06), Eye Diseases (MeSH: C11), Immune System Diseases (MeSH: C20) and so on. Diseases in the same MeSH category are usually similar to each other, for example diseases in the Musculoskeletal Diseases (MeSH: C05) category or in the Nervous System Diseases (MeSH: C10) category. Figure 11 also shows that diseases within one class are more likely to cluster in the same neighbourhood. For instance, 5 diseases belonging to the Otorhinolaryngologic Diseases classification constitute a small component; as shown in Fig. 11a, all five of these diseases are forms of deafness. Six diseases form another connected component (Fig. 11b), five of which are Otorhinolaryngologic Diseases and one of which is a Stomatognathic Disease. These observations further indicate that the similarity scores computed by MultiSourcDSim for disease pairs belonging to the same category are greater than those for pairs belonging to different categories. An overview of the disease similarity network (GPSN) based on our method's results. Nodes were coloured according to the MeSH category to which they belong Three connected components from the disease similarity network constructed by our method Besides identifying relationships between diseases belonging to the same disease classification, our approach can also find associations between diseases belonging to different classifications. For instance, as shown in Fig. 11c, three Musculoskeletal Diseases are linked to two Immune System Diseases by our method; among the three Musculoskeletal Diseases, it has been reported that people with Lymphopenia might have immune system diseases. Discussion and conclusion Determining the correlation between diseases helps to deepen understanding of the potential mechanisms among diseases.
There are many studies about the association between diseases, such as predicting disease-related genes [36–38] and new drug indications [2]. In addition, a huge challenge for researchers in modern biology [39, 40] is how to get more information about the disease. In the past few decades, many researchers have proposed a number of methods to predict the similarity between diseases (for example, build a network of disease similarity) based on biological data and make a great progress. However, these methods use only a single biological data and do not consider combining multiple biological data as a basis for predicting disease similarity. In this paper, we propose a novel method, MultiSourcDSim, to predict similarity between diseases, which builds a disease similarity network based on multi-faceted biological data related to disease. According to the similarity scores computed by our method, we can conclude that the similarity scores of disease pairs belonging to the same MeSH classification are significantly higher than those of disease pairs belonging to different MeSH classifications. And, comparing the performance of the MultiSourcDSim method with the other three methods (MimMiner [34], MedNetSim [35] and HSDN [27]) under the same benchmark set, we have found that our method is superior. Furthermore, the disease similarity network constructed by our method can also uncover latent relationships between diseases. Although multiple disease-related data sources are integrated to compute similarities between diseases, there may be some bias due to incomplete data. In addition to considering the integration of multiple biological data, we also need to take into account the modular nature of each disease in further study of the similarities between diseases, since the modularity of each disease module can give more information [41–43]. Moreover, disease networks have proven useful for predicting novel therapeutic applications of known compounds [44] and inferring novel disease genes [45]. The datasets used in this study is available at http://ctdbase.org. ACC: AUC: The area under the ROC curve BPDN: GO biological process-disease association network The comparative toxicogenomics database Directed acyclic graph DCA: Diffusion component analysis F1: F1 score GDN: Gene-disease association network GPSN: The integrated disease similarity network HPO: Human phenotype ontology MCC: The Matthew's correlation coefficient Online mendelian inheritance in man RWR: Random walk with restart Symptom-disease association network SPE: Suthram S, Dudley JT, Chiang AP, Rong C, Hastie TJ, Butte AJ. Network-based elucidation of human disease similarities reveals common functional modules enriched for pluripotent drug targets. Plos Comput Biol. 2010; 6(2):1000662. Gottlieb A, Stein GY, Ruppin E, Sharan R. Predict: a method for inferring novel drug indications with application to personalized medicine. Mole Syst Biol. 2011; 7(1):496. Goh KI, Cusick ME, Valle D, Childs B, Vidal M, Barabási AL. The human disease network. Proc Nat Acad Sci USA. 2007; 104(21):8685–90. Hu G, Agarwal P. Human disease-drug network based on genomic expression profiles. Plos One. 2009; 4(8):6536. Zhang X, Zhang R, Jiang Y, Sun P, Tang G, Wang X, Lv H, Li X. The expanded human disease network combining protein-protein interaction information. Eur J Human Genet Ejhg. 2011; 19(7):783–8. Lee DS, Park J, Kay KA, Christakis NA, Oltvai ZN, Barabási AL. The implications of human metabolic network topology for disease comorbidity. 
Proc Natl Acad Sci USA. 2008; 105(29):9880–5. Botstein D, Risch N. Discovering genotypes underlying human phenotypes: past successes for mendelian disease, future approaches for complex disease. Nature Genet. 2003; 33(33 Suppl):228–37. Emmert-Streib F, Dehmer M. Analysis of Microarray Data: A Network-Based Approach: Wiley; 2008. Emmertstreib F, Glazko GV. Network biology: a direct approach to study biological function. Wiley Interdiscipl Rev Syst Biol Med. 2011; 3(4):379–91. Jin L, Min L, Wei L, Wu FX, Yi P, Wang J. Classification of alzheimer's disease using whole brain hierarchical network. IEEE/ACM Trans Comput Biol Bioinforma. 2018; PP(99):624–32. Chen B, Li M, Wang J, Shang X, Wu FX. A fast and high performance multiple data integration algorithm for identifying human disease genes. Bmc Med Genomics. 2015; 8(S3):1–11. Consortium TGO, Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS. Gene ontology: tool for the unification of biology. Nature Genet. 2000; 25(1):25–9. Zeng C, Zhan W, Deng L. SDADB: A functional annotation database of protein structural domains. Database. 2018:1–8. Zhang Z, Zhang J, Fan C, Tang Y, Deng L. Katzlgo: large-scale prediction of lncrna functions by using the katz measure based on multiple networks. IEEE/ACM Trans Comput Biol Bioinforma. 2019; 16(2):407–16. Jimenezsanchez G, Childs B, Valle D. Human disease genes. Nature. 2001; 409(6822):853–5. López-Bigas N, Ouzounis CA. Genome-wide identification of genes likely to be involved in human genetic disease. Nucleic Acids Res. 2004; 32(10):3108. Pereziratxeta C, Bork P, Andrade MA. Association of genes to genetically inherited diseases using data mining. Nature Genet. 2002; 31(3):316–9. Mathur S, Dinakarpandian D. Automated ontological gene annotation for computing disease similarity. Transl. Bioinforma. 2010; 2010:12. Mathur S, Dinakarpandian D. Finding disease similarity based on implicit semantic similarity. J Biomed Informa. 2012; 45(2):363–71. Li J. Dosim: An r package for similarity between diseases based on disease ontology. Bmc Bioinformatics. 2011; 12(1):266. Resnik P. Using information content to evaluate semantic similarity in a taxonomy. 1995; 1995:448Ű453. Lin D. An information-theoretic definition of similarity. In: International Conference on Machine Learning(Citeseer): 1998. p. 296–304. Jiang JJ, Conrath DW. Semantic similarity based on corpus statistics and lexical taxonomy. Proc. Int. Conf. Res. Comput. Linguist. 1997:19–33. Deng Y, Gao L, Wang B, Guo X. Hposim: An r package for phenotypic similarity measure and enrichment analysis based on the human phenotype ontology. Plos One. 2015; 10(2):0115692. Lipscomb CE. Medical subject headings (mesh). Bull Med Libr Assoc. 2000; 88(3):265–6. Tong H, Faloutsos C, Pan JY. Fast random walk with restart and its applications. In: International Conference on Data Mining(IEEE): 2006. p. 613–22. Zhou XZ, Menche J, Barabási A, Sharma A. Human symptoms–disease network. Nature Commun. 2014; 5:4212. Cho H, Berger B, Peng J. Diffusion component analysis: Unraveling functional topology in biological networks. Comput Sci. 2016; 9029(4):62–4. Zhang J, Zhang Z, Wang Z, Liu Y, Deng L. Ontological function annotation of long non-coding rnas through hierarchical multi-label classification. Bioinformatics. 2018; 34(10):1750–7. Deng L, Wu H, Liu C, Zhan W, Zhang J. Probing the functions of long non-coding rnas by exploiting the topology of global association and interaction network. Comput Biol Chem. 2018; 74:360–7. 
Wang S, Cho H, Zhai C, Berger B, Peng J. Exploiting ontology graph for predicting sparsely annotated gene function. Bioinformatics. 2015; 31(12):357–64. Pakhomov S, Mcinnes B, Adam T, Liu Y, Pedersen T, Melton GB. Semantic similarity and relatedness between clinical terms: An experimental study. AMIA... Ann Symp Proc/ AMIA Symp. AMIA Symposium. 2010; 2010:572. Cho H, Berger B, Peng J. Compact integration of multi-network topology for functional analysis of genes. Cell Syst. 2016; 3(6):540. van Driel MA, Bruggeman J, Vriend G, Brunner HG, Leunissen JA. A text-mining analysis of the human phenome. Eur J Human Genet. 2006; 14(5):535–42. Li P, Nie Y, Yu J. Fusing literature and full network data improves disease similarity computation. Bmc Bioinformatics. 2016; 17(1):326. Lan W, Wang J, Li M, Peng W, Wu F. Computational approaches for prioritizing candidate disease genes based on ppi networks. Tsinghua Sci Technol. 2015; 20(5):500–512. Zhang J, Zhang Z, Chen Z, Deng L. Integrating multiple heterogeneous networks for novel lncrna-disease association inference. IEEE/ACM Trans Comput Biol Bioinforma. 2019; 16(2):396–406. Deng L, Zhang W, Shi Y, Tang Y. Fusion of multiple heterogeneous networks for predicting circrna-disease associations. Sci Rep (Nat Publ Group). 2019; 9:1–10. Guo X, Zhang J, Cai Z, Du DZ, Pan Y. Searching genome-wide multi-locus associations for multiple diseases based on bayesian inference. IEEE/ACM Trans Comput Biol Bioinforma. 2017; PP(99):1–1. Teng B, Yang C, Liu J, Cai Z, Wan X. Exploring the genetic patterns of complex diseases via the integrative genome-wide approach. IEEE/ACM Trans Comput Biol Bioinforma. 2016; 13(3):557–64. Zeng X, Zhang X, Zou Q. Integrative approaches for predicting microrna function and prioritizing disease-related microrna using biological interaction networks. Brief Bioinforma. 2016; 17(2):193. Zou Q, Li J, Hong Q, Lin Z, Wu Y, Shi H, Ying J. Prediction of microrna-disease associations based on social network analysis methods. Biomed Res Int. 2015; 2015(10):810514. Yan C, Wang J, Ni P, Lan W, Wu F, Pan Y. Dnrlmf-mda:predicting microrna-disease associations based on similarities of micrornas and diseases. IEEE/ACM Trans Comput Biol Bioinforma. 2017; PP(99):1–1. Liang C, Li J, Peng J, Peng J, Wang Y. Semfunsim: A new method for measuring disease similarity by integrating semantic and gene functional association. Plos One. 2014; 9(6):99415. Ghiassian SD, Menche J, Barabási AL. A disease module detection (diamond) algorithm derived from a systematic analysis of connectivity patterns of disease proteins in the human interactome. Plos Comput Biol. 2015; 11(4):1004120. This work was supported by National Natural Science Foundation of China under grants No. 61972422, No. 61672541 and No. 61672113. About this supplement This article has been published as part of BMC Medical informatics and Decision Making Volume 19 Supplement 6, 2019: Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: medical informatics and decision making. The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-19-supplement-6. Publication costs are funded by National Natural Science Foundation of China under grant No. 61672541. 
School of Computer Science and Engineering, Central South University, Changsha, 410075, China Lei Deng & Danyi Ye School of Computer and Data Science, Henan University of Urban Construction, Pingdingshan, 467000, China Junmin Zhao & Jingpu Zhang Search for Lei Deng in: Search for Danyi Ye in: Search for Junmin Zhao in: Search for Jingpu Zhang in: LD, DY, JZ and JP designed the study and conducted experiments. LD and DY performed statistical analyses. LD, DY and JP drafted the manuscript. DY prepared the experimental materials and benchmarks. All authors have read and approved the final manuscript. Correspondence to Jingpu Zhang. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Deng, L., Ye, D., Zhao, J. et al. MultiSourcDSim: an integrated approach for exploring disease similarity. BMC Med Inform Decis Mak 19, 269 (2019) doi:10.1186/s12911-019-0968-8 Disease similarity network Integrating multiple data sources
What is the relationship between the Lorentz group and the $CL(1,3)$ algebra? In my classes the dirac equation is always presented as the "square root" of the Klein Gordon equation, then from this you can demand certain properties from the Matrices (anticommutation relations, square to one etc) and it turns out the four gamma matrices will satisfy all these relations. However as I've been delving into group theory, specifically representation theory of the Lorentz group, it would seem the gamma matrices have much more physical significance and aren't just purely mathematical requirements, which is discussed here at into level: https://en.wikipedia.org/wiki/Gamma_matrices#Physical_structure Can anyone lend themselves to the task to help myself understand how to think of these matrices physically? special-relativity group-theory dirac-equation dirac-matrices clifford-algebra CraigCraig I'll start in the context of three-dimensional space, and then I'll extend the context to four-dimensional space-time. An observable should be invariant under a $2\pi$ rotation. A model is usually constructed in terms of field operators, and observables are expressed in terms of field operators, but the field operators themselves do not need to be invariant under a $2\pi$ rotation. This is important because of the spin-statistics theorem, which says that in relativistic QFT, a fermion field (whose corresponding particle obeys the Pauli exclusion principle) must change sign under a $2\pi$ rotation. So, if we want to be able to handle the Pauli exclusion principle in a relativistic QFT, we need a way to construct fields that change sign under a $2\pi$ rotation. Representations of the rotation group $O(3)$ don't do this. We need something else. Clifford algebra gives us a nice way to construct that something else. Still working in the context of three-dimensional space, suppose we have three matrices $\gamma_1,\gamma_2,\gamma_3$ that satisfy $$ \gamma_j\gamma_k+\gamma_k\gamma_j=2\delta_{jk}. $$ We can represent an ordinary vector as $\mathbf{v}=\sum_k v^k\gamma_k$. Familiar manipulations of vectors can be expressed using this representation. In the following equations, $\mathbf{v}\mathbf{u}$ means the matrix product of the matrix representations of $\mathbf{v}$ and $\mathbf{u}$. (As an abstract product, apart from any matrix representation, this would be called the Clifford product.) The dot product of two vectors $\mathbf{v}$ and $\mathbf{u}$ is $$ \frac{\mathbf{v}\mathbf{u}+\mathbf{u}\mathbf{v}}{2} =(\mathbf{v}\cdot\mathbf{u})I, $$ where $I$ is the identity matrix, and the more-natural replacement for the "cross product" is $$ \mathbf{v}\wedge\mathbf{u}\equiv \frac{\mathbf{v}\mathbf{u}-\mathbf{u}\mathbf{v}}{2}, $$ which is a linear combination of the basis bivectors $\gamma_j\gamma_k$. (This is called the wedge product, and it produces a bivector — as it should — rather than a vector.) A rotation through angle $\theta$ in the $1$-$2$ plane (for example) is given by $$ \mathbf{v}\mapsto \exp\left(\frac{\theta}{2}\gamma_1\gamma_2\right) \mathbf{v} \exp\left(-\frac{\theta}{2}\gamma_1\gamma_2\right). $$ This is an ordinary rotation of the vector $\mathbf{v}$ through angle $\theta$ (not $\theta/2$) in the $1$-$2$ plane (aka "about the $3$ axis"). A spinor is a single-column matrix $\psi$ that transforms under rotations according to $$ \psi\mapsto \exp\left(\frac{\theta}{2}\gamma_1\gamma_2\right)\psi. 
$$ To motivate this, notice that the product $\mathbf{v}\,\psi$ again transforms like spinor: $$ \mathbf{v}\,\psi\mapsto \exp\left(\frac{\theta}{2}\gamma_1\gamma_2\right) \mathbf{v} \exp\left(-\frac{\theta}{2}\gamma_1\gamma_2\right) \exp\left(\frac{\theta}{2}\gamma_1\gamma_2\right)\psi = \exp\left(\frac{\theta}{2}\gamma_1\gamma_2\right)\mathbf{v}\,\psi. $$ Even more, if we choose the matrix representation of the $\gamma$-matrices so that $\gamma_k^\dagger=\gamma_k$, then the quantity $\psi^\dagger\mathbf{v}\psi$ is invariant under all rotations. And if $\theta=2\pi$, then we can use $(\gamma_1\gamma_2)^2=-1$ to prove that the transformation reduces to $\psi\mapsto-\psi$, which is what we want. This means that $\psi$ by itself can't be an observable, but something involving a product of two $\psi$s can still be an observable because the minus signs cancel. The smallest matrices that satisfy the first equation are $2\times 2$, so we can represent $\psi$ as a column matrix with two (complex) components. These correspond to the "spin up" and "spin down" components of an electron, for example. The preceding equations show how these two components mix with each other under a rotation. In summary, regarding the physical significance of the $\gamma$-matrices in three-dimensional space: we can use them to describe ordinary vectors, including ordinary rotations, and they also provide a nice way to describe things that change sign under $2\pi$ rotations, as fermions should. So we get everything we need, all in one package. Now transition to four-dimensional space-time. We have basically the same story but with the rotation group $O(3)$ replaced by the Lorentz group. For consistency with the spin-statistics connection, we need a way to construct representations that change sign under a $2\pi$ rotation. Representations of the Lorentz group itself don't do this, but again we can use Clifford algebra. By the way, this all generalizes nicely to an arbitrary number of space-time dimensions, but I'll just show the 4-d case here. Suppose we have 4 matrices $\gamma_\mu$ that satisfy $$ \gamma_\mu\gamma_\nu+\gamma_\nu\gamma_\mu=2\eta_{\mu\nu}, $$ where $\eta_{\mu\nu}$ are the components of the Minkowski metric. We can represent an ordinary four-vector as $\mathbf{v}=\sum_\mu v^\mu\gamma_\mu$. The preceding comments about dot products and the wedge-product apply here, too. (The "cross product", which pretends to construct a vector from the two input vectors, does not generalize to four-dimensional space-time, but the wedge product does.) A Lorentz transformation (boost or rotation) in the $\mu$-$\nu$ plane is given by $$ \mathbf{v}\mapsto \exp\left(\frac{\theta}{2}\gamma_\mu\gamma_\nu\right) \mathbf{v} \exp\left(-\frac{\theta}{2}\gamma_\mu\gamma_\nu\right). $$ The effect of the same Lorentz transformation on a Dirac spinor $\psi$ is $$ \psi \mapsto \exp\left(\frac{\theta}{2}\gamma_\mu\gamma_\nu\right)\psi. $$ Again, this changes sign under a $2\pi$ rotation, so we can use this for a fermion field. It can't be an observable by itself, but we can use it to construct observables because any product of an even number of these things is invariant under a $2\pi$ rotation. The smallest matrices that satisfy the defining relationship have size $4\times 4$. (In $2n$-dimensional space-time, they have size $2^n\times 2^n$, and they have this same size in $2n+1$-dimensional space-time.) 
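As a quick numerical sanity check of these statements, here is a small sketch using the standard Dirac representation of the $\gamma$-matrices with metric signature (+,−,−,−) (a convention assumed here for concreteness). It verifies the anticommutation relation and the fact that a $2\pi$ rotation generated by $\gamma_1\gamma_2$ multiplies a Dirac spinor by −1.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation with signature (+,-,-,-):
# gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
gamma = [np.block([[I2, 0 * I2], [0 * I2, -I2]])] + [
    np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)
]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Check {gamma_mu, gamma_nu} = 2 * eta_{mu nu} * Identity
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# A rotation by theta = 2*pi in the 1-2 plane flips the sign of a spinor:
# exp((theta/2) * gamma_1 gamma_2) = -Identity for theta = 2*pi.
R = expm(0.5 * (2 * np.pi) * gamma[1] @ gamma[2])
print(np.allclose(R, -np.eye(4)))  # True
```

The same check works with the opposite signature, since $(\gamma_1\gamma_2)^2=-1$ either way.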
In summary, regarding the physical significance of the $\gamma$-matrices in four-dimensional space-time: we can use them to describe Lorentz boosts of things like ordinary vectors, and they also provide a nice way to describe things that change sign under $2\pi$ rotations, as fermions should. So we get everything we need, all in one package — without ever mentioning anything about square-roots of Klein-Gordon equations. By the way, saying that a fermion field must change sign under a $2\pi$ rotation might seem problematic, because it says that ordinary spin-1/2 particles — like electrons, protons, and neutrons — must also have this property. They do, and that's not a problem. It's not a problem in, say, a single-electron state, because the sign-change in that case is just a change in the overall coefficient of the state-vector, which has no observable consequences. It would cause a problem in a state like $|\text{even}\rangle+|\text{odd}\rangle$ that is a superposition of states with even and odd number of fermions, and such superpositions are not allowed in QFT. States with even and odd numbers of fermions belong to different superselection sectors. What we can do is consider a superposition of two different locations of a single fermion, and then the sign-change under a $2\pi$ rotation does have indirect observable consequences. This has been demonstrated in neutron interference experiments, basically two-slit experiments with a macroscopic distance between the two paths in the interferometer. (Diffraction in a crystal was used as a substitute for "slits".) Magnets were used to cause precession of any neutron that passes through one of the paths, and the effect on the resulting two-slit interference pattern displays the effect of the sign-change under $2\pi$ rotations. This is reviewed in "Theoretical and conceptual analysis of the celebrated $4\pi$-symmetry neutron interferometry experiments", https://arxiv.org/abs/1601.07053. Chiral AnomalyChiral Anomaly $\begingroup$ Thank you this was very helpful, I'm leaving it open for a bit in case anyone else wants to chime in as well. One question, what does $$\mathbf{v}\mathbf{u}$$ mean here? If it's a bivector that makes sense for the wedge product but not so much for the scaler product. Thanks so much $\endgroup$ – Craig Oct 27 '18 at 21:32 $\begingroup$ @Craig Good question. $\mathbf{vu}$ is the matrix product of the two vectors' matrix representations, also called the Clifford product. In the scalar product case, the result is the dot product times the identity matrix. This follows from $\gamma_j\gamma_k+\gamma_k\gamma_j=2\delta_{ij}$. I added these words to the post. Thanks for catching this oversight. I also added an appendix about observables consequences of the sign-change under $2\pi$ rotations. $\endgroup$ – Chiral Anomaly Oct 27 '18 at 22:30 $\begingroup$ can I say that the gamma matrices are the generators of the underlying Lie algebra which specifies how bi-spinors transform? $\endgroup$ – Craig Mar 30 '19 at 4:19 $\begingroup$ @Craig The products $\gamma^a\gamma^b$ with $a\neq b$ can be regarded as those generators, yes. Also, in 4-d spacetime, those products can be block-diagonalized, and the blocks are the generators that specify how Weyl spinors transform. $\endgroup$ – Chiral Anomaly Mar 30 '19 at 5:07 $\begingroup$ @Craig The vector-transformation rule that I wrote is consistent with the one that you wrote, just expressed differently. 
The way you wrote it, the vector $v$ is represented as a column matrix (a matrix with a single column). The way I wrote it, the vector $v$ is represented as a square matrix (a linear combination of the $\gamma$-matrices: $\sum_k v^k \gamma_k$). These are two different ways to represent the same vector, both equally valid. The effect of a Lorentz transform looks different in these two representations because the representations are different, but the effect is the same. $\endgroup$ – Chiral Anomaly Dec 8 '19 at 20:44 The construction can be generalized to an $n$-dimensional $\mathbb{F}$-vector space $V$ with a $\mathbb{F}$-bilinear symmetric non-degenerate form $g: V\times V\to \mathbb{F}$. Let $(e_k)_{k=1, \ldots, n}$ be a basis for $V$ and $g_{jk}:=g(e_j,e_k)$. The (possibly indefinite) orthogonal group $$O(V)~:=~\{M\in GL(V) ~|~\forall v,w\in V:~~ g(M(v), M(w))~=~g(v,w) \}$$ $$ ~\stackrel{\text{polarization}}{=}~\{M\in GL(V) ~|~\forall v\in V:~~ g(M(v), M(v))~=~g(v,v) \} \tag{1}$$ with corresponding Lie algebra $$so(V)~:=~\{m\in {\rm End}(V) ~|~\forall v,w\in V:~~ g(m(v),w)+g(v,m(w))~=~0 \}$$ $$~\stackrel{\text{polarization}}{=}\{m\in {\rm End}(V) ~|~\forall v\in V:~~ g(m(v),v)+g(v,m(v))~=~0 \}~. \tag{2}$$ There is a vector space isomorphism $$\bigwedge{}^2 V~\ni~\omega ~=~\frac{1}{2}\sum_{j,k=1}^n\omega^{jk}e_j\wedge e_k ~~\mapsto~~ -i((\cdot)^{\flat})\omega ~=~\sum_{j,k,\ell=1}^n e_j \omega^{jk}{}g_{k\ell} e^{\ast\ell} \in~so(V), \tag{3}$$ where $i:V^{\ast}\times \bigwedge V \to \bigwedge V$ denotes the interior product and $\flat:V\to V^{\ast}$ is the musical isomorphism $v\mapsto v^{\flat}:=g(v,\cdot)$. The Clifford algebra is defined as $$Cl(V)~:=~T(V)/I(V), \qquad T(V)~:=~\bigoplus_{n=0}^{\infty} T^n(V), $$ $$T^n(V)~:=~ \underbrace{V\otimes \ldots\otimes V}_{n\text{ factors}},\qquad T^0(V)~:=~\mathbb{F}, \tag{4} $$ where $I(V)$ is the 2-sided ideal in $T(V)$ generated by $$\{v\otimes v - g(v,v){\bf 1} \in T(V)~|~ v\in V\}.\tag{5}$$ The linear map $\Phi: V\to {\rm End}(\bigwedge V)$ given by a sum of exterior and interior multiplication $ v\mapsto e(v)+i(v^{\flat})$ can be extended to an algebra homomorphism $$\Phi: T(V)~\to~ {\rm End}(\bigwedge V)\tag{6}$$ so that $$ \Phi(v\otimes v)~=~ \Phi(v)\circ \Phi(v)~=~\ldots~=~g(v,v)\Phi({\bf 1}) \tag{7}$$ with kernel $ {\rm Ker}(\Phi)=I(V)$. In other words, there is an algebra homomorphism $$\widetilde{\Phi}: Cl(V)~\to~ {\rm End}(\bigwedge V)\tag{8}$$ Then $$Cl(V)~\ni~ c~~\mapsto~~ \widetilde{\Phi}(c)(1)~\in~ \bigwedge V\tag{9}$$ is a vector space isomorphism. In particular $$Cl(V)^{\rm even}~\ni~ c~=~\frac{1}{4}\sum_{j,k=1}^n\omega^{jk}(e_j\otimes e_k-e_k\otimes e_j) $$ $$~~\mapsto~~ \widetilde{\Phi}(c)(1) ~=~\frac{1}{2}\sum_{j,k=1}^n\omega^{jk}e_j\wedge e_k ~=~\bigwedge{}^2 V.\tag{10} $$ The maps (3) & (10) can be combined to yield an imbedding $so(V) \hookrightarrow Cl(V)^{\rm even}$. In plain English: The generators of the Lie algebra (2) can be can be identified with anticommutators of gamma matrices (up to normalization). See also Refs. 1 & 2. S. Sternberg, Lie algebras, 2004; Chapter 9. W. Fulton & J. Harris, Representation theory, 1991; Lecture 20. Qmechanic♦Qmechanic Not the answer you're looking for? Browse other questions tagged special-relativity group-theory dirac-equation dirac-matrices clifford-algebra or ask your own question. Spin statistics from the fundamental group of $SO(D)$ Can Maxwells equations be written as one equation? Why do we use infinitesimal forms of operators? 
Traveling wave solutions for time periodic reaction-diffusion systems Zero viscosity-resistivity limit for the 3D incompressible magnetohydrodynamic equations in Gevrey class Topological classification of $Ω$-stable flows on surfaces by means of effectively distinguishable multigraphs Vladislav Kruglov , Dmitry Malyshev and Olga Pochinka , HSE; Bolshaya Pecherskaya 25/12, Nizhniy Novgorod, 603155, Russia * Corresponding author: Olga Pochinka Received May 2017 Revised April 2018 Published June 2018 Fund Project: Authors are grateful to participants of the seminar "Topological Methods in Dynamics" for fruitful discussions. The classification results (Sections 1–6 without Subsections 5.2, 5.3) were obtained with the support of the Russian Science Foundation (project 17-11-01041). The realisation results (Subsection 5.2, Section 7) were obtained as an output of the research project "Topology and Chaos in Dynamics of Systems, Foliations and Deformation of Lie Algebras (2018)" implemented as part of the Basic Research Program at the National Research University Higher School of Economics (HSE). The algorithmic results (Subsection 5.3, Section 8) were obtained with the support of Russian Foundation for Basic Research 16-31-60008-mol-a-dk and with LATNA laboratory, National Research University Higher School of Economics. Figure(10) Structurally stable (rough) flows on surfaces have only finitely many singularities and finitely many closed orbits, all of which are hyperbolic, and they have no trajectories joining saddle points. The violation of the last property leads to $Ω$-stable flows on surfaces, which are not structurally stable. However, in the present paper we prove that a topological classification of such flows is also reduced to a combinatorial problem. Our complete topological invariant is a multigraph, and we present a polynomial-time algorithm for the distinction of such graphs up to an isomorphism. We also present a graph criterion for orientability of the ambient manifold and a graph-associated formula for its Euler characteristic. Additionally, we give polynomial-time algorithms for checking the orientability and calculating the characteristic. Keywords: Ω-stable flow, topological invariant, multigraph, four-colour graph, polynomial-time, algorithms. Mathematics Subject Classification: 37D05. Citation: Vladislav Kruglov, Dmitry Malyshev, Olga Pochinka. Topological classification of $Ω$-stable flows on surfaces by means of effectively distinguishable multigraphs. Discrete & Continuous Dynamical Systems, 2018, 38 (9) : 4305-4327. doi: 10.3934/dcds.2018188 V. E. Alekseev and V. A. Talanov, Graphs and Algorithms. Data structures. Models of Computing (in Russian), Nizhny Novgorod State University Press, Nizhny Novgorod, 2006. Google Scholar A. A. Andronov and L. S. Pontryagin, Rough systems (in Russian), Doklady Akademii nauk SSSR, 14 (1937), 247-250. Google Scholar A. V. Bolsinov, S. V. Matveev and A. T. Fomenko, Topological classification of integrable Hamiltonian systems with two degrees of freedom. The list of systems of small complexity (in Russian), Uspekhi matematicheskikh nauk, 45 (1990), 49-77. doi: 10.1070/RM1990v045n02ABEH002344. Google Scholar Yu. G. Borisovich, N. M. Bliznyakov, Ya. A. Izrailevich and T. N. Fomenko, Introduction to Topology (in Russian), "Vyssh. Shkola", Moscow, 1980. Google Scholar A. Cobham, The intrinsic computational difficulty of functions, Proc. 
1964 International Congress for Logic, Methodology, and Philosophy of Science, North-Holland, Amsterdam, (1964), 24-30. Google Scholar M. Garey and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-completeness, W. H. Freeman, San Francisco, 1979. Google Scholar V. Grines, T. Medvedev and O. Pochinka, Dynamical Systems on 2- and 3-Manifolds, Springer International Publishing Switzerland, 2016. Google Scholar E. Ya. Gurevich and E. D. Kurenkov, Energy function and topological classification of Morse-Smale flows on surfaces (in Russian), Zhurnal SVMO, 17 (2015), 15-26. Google Scholar D. König, Grafok es matrixok, Matematikai es Fizikai Lapok, 38 (1931), 116-119. Google Scholar V. E. Kruglov, D. S. Malyshev and O. V. Pochinka, Multicolour graph as a complete topological invariant for $Ω$-stable flows without periodic trajectories on surfaces (in Russian), Matematicheskiy Sbornik, 209 (2018), 100-126. doi: 10.4213/sm8797. Google Scholar V. E. Kruglov, T. M. Mitryakova and O. V. Pochinka, About types of cells of $Ω$ -stable flows without periodic trajectories on surfaces (in Russian), Dinamicheskie sistemy, 5 (2015), 43-49. Google Scholar E. A. Leontovich and A. G. Mayer, About trajectories determining qualitative structure of sphere partition into trajectories (in Russian), Doklady Akademii Nauk SSSR, 14 (1937), 251-257. Google Scholar E. A. Leontovich and A. G. Mayer, About scheme determining topological structure of partition into trajectories (in Russian), Doklady Akademii Nauk SSSR, 103 (1955), 557-560. Google Scholar A. G. Mayer, Rough transformations of a circle (in Russian), Uchionye zapiski GGU. Gor'kiy, publikatsii. GGU, 12 (1939), 215-229. Google Scholar G. Miller, Isomorphism testing for graphs of bounded genus, Proceedings of the 12th Annual ACM Symposium on Theory of Computing, (1980), 225-235. doi: 10.1145/800141.804670. Google Scholar D. Neumann and T. O'Brien, Global structure of continuous flows on 2-manifolds, J. DifF. Eq., 22 (1976), 89-110. doi: 10.1016/0022-0396(76)90006-1. Google Scholar A. A. Oshemkov and V. V. Sharko, About classification of Morse-Smale flows on 2-manifolds (in Russian), Matematicheskiy sbornik, 189 (1998), 93-140. doi: 10.1070/SM1998v189n08ABEH000341. Google Scholar J. Palis, On the $C^1$ omega-stability conjecture, Publ. Math. Inst. Hautes Études Sci., 66 (1988), 211-215. Google Scholar J. Palis and W. De Melo, Geometric Theory Of Dynamical Systems: An Introduction, Transl. from the Portuguese by A. K. Manning, New York, Heidelberg, Berlin, Springer-Verlag, 1982. Google Scholar M. Peixoto, Structural stability on two-dimensional manifolds, Topology, 1 (1962), 101-120. doi: 10.1016/0040-9383(65)90018-2. Google Scholar M. Peixoto, Structural stability on two-dimensional manifolds (a further remarks), Topology, 2 (1963), 179-180. doi: 10.1016/0040-9383(63)90032-6. Google Scholar M. Peixoto, On the Classification of Flows on Two-Manifolds, Dynamical systems Proc. Symp. held at the Univ. of Bahia, Salvador, Brasil, 1971. Google Scholar C. Pugh and M. Shub, The $Ω$-stability theorem for flows, Inven. Math., 11 (1970), 150-158. doi: 10.1007/BF01404608. Google Scholar C. Robinson, Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, CRC Press, Boca Raton, Ann Arbor, London, Tokyo, 1995. Google Scholar S. Smale, Differentiable dynamical systems, Bull. Amer. Soc., 73 (1967), 747-817. doi: 10.1090/S0002-9904-1967-11798-1. Google Scholar Figure 1. The case when $U_\mathfrak c$ is homeomorphic to a Möbius band Figure 2. 
$\phi^t$ and $\Upsilon_{\phi^t}$ Figure 3. The cases of the consistent (leftward) and the inconsistent (rightward) orientation of boundary's connecting component of some $\mathcal E$-region Figure 4. A polygonal region Figure 5. An example of the flow $f^t$ together with the polygonal regions Figure 6. An example of $f^t$ and its four-colour graph Figure 7. Two flows from $G$ and their equipped graphs Figure 8. Two examples of flows from $G$ differing only by orientation of the limit cycle between $\mathcal M$ and $\mathcal A$ and their equipped graphs Figure 9. Two examples of flow from $G$ without $\mathcal A$- and $\mathcal M$-regions differing only by orientation of the limit cycle and their equipped graphs Figure 10. $f^t$, $\Gamma_{\mathcal M}$ and $\Gamma^*_{{\mathcal M}}$
A new all-in-one nootropic mix/company run by some people active on /r/nootropics; they offered me a month's supply for free to try & review for them. At ~$100 a month (it depends on how many months one buys), it is not cheap (John Backus estimates one could buy the raw ingredients for $25/month) but it provides convenience & is aimed at people uninterested in spending a great deal of time reviewing research papers & anecdotes or capping their own pills (ie. people with lives) and it's unlikely I could spare the money to subscribe if TruBrain worked well for me - but certainly there was no harm in trying it out. I have personally found that with respect to the NOOTROPIC effect(s) of all the RACETAMS, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. COLURACETAM is the only RACETAM that I have taken wherein I noticed an improvement in MEMORY, both with regards to SHORT-TERM and MEDIUM-TERM MEMORY. To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other RACETAMS. The evidence? In small studies, healthy people taking modafinil showed improved planning and working memory, and better reaction time, spatial planning, and visual pattern recognition. A 2015 meta-analysis claimed that "when more complex assessments are used, modafinil appears to consistently engender enhancement of attention, executive functions, and learning" without affecting a user's mood. In a study from earlier this year involving 39 male chess players, subjects taking modafinil were found to perform better in chess games played against a computer. Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent (modafinil) and glutamate activators (ampakine). Originally developed as a treatment for narcolepsy and other sleep disorders, physicians are now prescribing it "off-label" to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S. the larger size of the community enables economies of scale and increases the peak sophistication possible. In a small nootropics community, there is likely to be no one knowledgeable about statistics/experimentation/biochemistry/neuroscience/whatever-you-need-for-a-particular-discussion, and the available funds increase: consider /r/Nootropics's testing program, which is doable only because it's a large lucrative community to sell to so the sellers are willing to donate funds for independent lab tests/Certificates of Analysis (COAs) to be done. If there were 1000 readers rather than 23,295, how could this ever happen short of one of those 1000 readers being very altruistic? In sum, the evidence concerning stimulant effects of working memory is mixed, with some findings of enhancement and some null results, although no findings of overall performance impairment. A few studies showed greater enhancement for less able participants, including two studies reporting overall null results. When significant effects have been found, their sizes vary from small to large, as shown in Table 4. 
Taken together, these results suggest that stimulants probably do enhance working memory, at least for some individuals in some task contexts, although the effects are not so large or reliable as to be observable in all or even most working memory studies. Depending on where you live, some nootropics may not be sold over the counter, but they are usually available online. The law regarding nootropics can vary massively around the world, so be sure to do your homework before you purchase something for the first time. Be particularly cautious when importing smart drugs, because quality control and regulations abroad are not always as stringent as they are in the US. Do not put your health at risk if all you are trying to do is gain an edge in a competitive sport. Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are; nicotine has a few flaws like a short half-life and increased blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try. The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work. It is known that American college students have embraced cognitive enhancement, and some information exists about the demographics of the students most likely to practice cognitive enhancement with prescription stimulants. Outside of this narrow segment of the population, very little is known. What happens when students graduate and enter the world of work? Do they continue using prescription stimulants for cognitive enhancement in their first jobs and beyond? How might the answer to this question depend on occupation? For those who stay on campus to pursue graduate or professional education, what happens to patterns of use? To what extent do college graduates who did not use stimulants as students begin to use them for cognitive enhancement later in their careers? To what extent do workers without college degrees use stimulants to enhance job performance? How do the answers to these questions differ for countries outside of North America, where the studies of Table 1 were carried out? Smart drug, also called nootropic or cognitive enhancer, any of a group of pharmaceutical agents used to improve the intellectual capacity of persons suffering from neurological diseases and psychological disorders. The use of such drugs by healthy individuals in order to improve concentration, to study longer, and to better manage stress is a subject of controversy.
My first dose on 1 March 2017, at the recommended 0.5ml/1.5mg, was miserable, as I felt like I had the flu and had to nap for several hours before I felt well again, requiring 6h to return to normal; after waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well. When comparing supplements, consider products with a score above 90% to get the greatest benefit from smart pills to improve memory. Additionally, we consider the reviews that users send to us when scoring supplements, so you can determine how well products work for others and use this information to make an informed decision. Every month, our editor puts her name on that month's best smart pill, in terms of results and value offered to users. Caffeine keeps you awake, which keeps you coding. It may also be a nootropic, increasing brain-power. Both desirable results. However, it also inhibits vitamin D receptors, and as such decreases the body's uptake of this much-needed vitamin. OK, that's not so bad, you're not getting the maximum dose of vitamin D. So what? Well, by itself caffeine may not cause you any problems, but combined with cutting off a major source of the vitamin - the production via sunlight - you're leaving yourself open to deficiency in double-quick time. Smart drugs, formally known as nootropics, are medications, supplements, and other substances that improve some aspect of mental function. In the broadest sense, smart drugs can include common stimulants such as caffeine, herbal supplements like ginseng, and prescription medications for conditions such as ADHD, Alzheimer's disease, and narcolepsy. These substances can enhance concentration, memory, and learning. Brain-imaging studies are consistent with the existence of small effects that are not reliably captured by the behavioral paradigms of the literature reviewed here. Typically with executive function tasks, reduced activation of task-relevant areas is associated with better performance and is interpreted as an indication of higher neural efficiency (e.g., Haier, Siegel, Tang, Abel, & Buchsbaum, 1992). Several imaging studies showed effects of stimulants on task-related activation while failing to find effects on cognitive performance. Although changes in brain activation do not necessarily imply functional cognitive changes, they are certainly suggestive and may well be more sensitive than behavioral measures. Evidence of this comes from a study of COMT variation and executive function. Egan and colleagues (2001) found a genetic effect on executive function in an fMRI study with sample sizes as small as 11 but did not find behavioral effects in these samples. The genetic effect on behavior was demonstrated in a separate study with over a hundred participants. In sum, d-AMP and MPH measurably affect the activation of task-relevant brain regions when participants' task performance does not differ. This is consistent with the hypothesis (although by no means positive proof) that stimulants exert a true cognitive-enhancing effect that is simply too small to be detected in many studies. Nootropics are a great way to boost your productivity. Nootropics have been around for more than 40 years and today they are entering the mainstream. If you want to become the best you, nootropics are a way to level up your life. Nootropics are always personal and what works for others might not work for you.
But no matter the individual outcomes, nootropics are here to make an impact! This continued up to 1 AM, at which point I decided not to take a second armodafinil (why spend a second pill to gain what would likely be an unproductive set of 8 hours?) and finish up the experiment with some n-backing. My 5 rounds: 60/38/62/44/50. This was surprising. Compare those scores with scores from several previous days: 39/42/44/40/20/28/36. I had estimated before the n-backing that my scores would be in the low-end of my usual performance (20-30%) since I had not slept for the past 41 hours, and instead, the lowest score was 38%. If one did not know the context, one might think I had discovered a good nootropic! Interesting evidence that armodafinil preserves at least one kind of mental performance. Some supplement blends, meanwhile, claim to work by combining ingredients – bacopa, cat's claw, huperzia serrata and oat straw in the case of Alpha Brain, for example – that have some support for boosting cognition and other areas of nervous system health. One 2014 study in Frontiers in Aging Neuroscience suggested that huperzia serrata, which is used in China to fight Alzheimer's disease, may help slow cell death and protect against (or slow the progression of) neurodegenerative diseases. The Alpha Brain product itself has also been studied in a company-funded small randomized controlled trial, which found Alpha Brain significantly improved verbal memory when compared to adults who took a placebo. For proper brain function, our CNS (Central Nervous System) requires several amino acids. These derive from protein-rich foods. Consider amino acids to be protein building blocks. Many of them are dietary precursors to vital neurotransmitters in our brain. Epinephrine (adrenaline), serotonin, dopamine, and norepinephrine assist in enhancing mental performance. A few examples of amino acid nootropics are: Several studies have assessed the effect of MPH and d-AMP on tasks tapping various other aspects of spatial working memory. Three used the spatial working memory task from the CANTAB battery of neuropsychological tests (Sahakian & Owen, 1992). In this task, subjects search for a target at different locations on a screen. Subjects are told that locations containing a target in previous trials will not contain a target in future trials. Efficient performance therefore requires remembering and avoiding these locations in addition to remembering and avoiding locations already searched within a trial. Mehta et al. (2000) found evidence of greater accuracy with MPH, and Elliott et al. (1997) found a trend for the same. In Mehta et al.'s study, this effect depended on subjects' working memory ability: the lower a subject's score on placebo, the greater the improvement on MPH. In Elliott et al.'s study, MPH enhanced performance for the group of subjects who received the placebo first and made little difference for the other group. The reason for this difference is unclear, but as mentioned above, this may reflect ability differences between the groups. More recently, Clatworthy et al. (2009) undertook a positron emission tomography (PET) study of MPH effects on two tasks, one of which was the CANTAB spatial working memory task. They failed to find consistent effects of MPH on working memory performance but did find a systematic relation between the performance effect of the drug in each individual and its effect on individuals' dopamine activity in the ventral striatum. "How to Feed a Brain is an important book.
It's the book I've been looking for since sustaining multiple concussions in the fall of 2013. I've dabbled in and out of gluten, dairy, and (processed) sugar free diets the past few years, but I have never eaten enough nutritious foods. This book has a simple-to-follow guide on daily consumption of produce, meat, and water. Stimulants are the smart drugs most familiar to people, starting with widely-used psychostimulants caffeine and nicotine, and the more ill-reputed subclass of amphetamines. Stimulant drugs generally function as smart drugs in the sense that they promote general wakefulness and put the brain and body "on alert" in a ready-to-go state. Basically, any drug whose effects reduce drowsiness will increase the functional IQ, so long as the user isn't so over-stimulated they're shaking or driven to distraction. Perceptual–motor congruency was the basis of a study by Fitzpatrick et al. (1988) in which subjects had to press buttons to indicate the location of a target stimulus in a display. In the simple condition, the left-to-right positions of the buttons are used to indicate the left-to-right positions of the stimuli, a natural mapping that requires little cognitive control. In the rotation condition, the mapping between buttons and stimulus positions is shifted to the right by one and wrapped around, such that the left-most button is used to indicate the right-most position. Cognitive control is needed to resist responding with the other, more natural mapping. MPH was found to speed responses in this task, and the speeding was disproportionate for the rotation condition, consistent with enhancement of cognitive control. Sounds too good to be true? Welcome to the world of 'Nootropics' popularly known as 'Smart Drugs' that can help boost your brain's power. Do you recall the scene from the movie Limitless, where Bradley Cooper's character uses a smart drug that makes him brilliant? Yes! The effect of Nootropics on your brain is such that the results come as a no-brainer. In addition, large national surveys, including the NSDUH, have generally classified prescription stimulants with other stimulants including street drugs such as methamphetamine. For example, since 1975, the National Institute on Drug Abuse–sponsored Monitoring the Future (MTF) survey has gathered data on drug use by young people in the United States (Johnston, O'Malley, Bachman, & Schulenberg, 2009a, 2009b). Originally, MTF grouped prescription stimulants under a broader class of stimulants so that respondents were asked specifically about MPH only after they had indicated use of some drug in the category of AMPs. As rates of MPH prescriptions increased and anecdotal reports of nonmedical use grew, the 2001 version of the survey was changed to include a separate standalone question about MPH use. This resulted in more than a doubling of estimated annual use among 12th graders, from 2.4% to 5.1%. More recent data from the MTF suggests Ritalin use has declined (3.4% in 2008). However, this may still underestimate use of MPH, as the question refers specifically to Ritalin and does not include other brand names such as Concerta (an extended release formulation of MPH). Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. 
One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything. The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free but we'll call it an hour over the 250 days. Recording mood/productivity is also a free sunk cost as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: $5 + (>5 hours × $7.25/hour) = >$41. All clear? Try one (not dozens) of nootropics for a few weeks and keep track of how you feel, Kerl suggests. It's also important to begin with as low a dose as possible; when Cyr didn't ease into his nootropic regimen, his digestion took the blow, he admits. If you don't notice improvements, consider nixing the product altogether and focusing on what is known to boost cognitive function – eating a healthy diet, getting enough sleep regularly and exercising. "Some of those lifestyle modifications," Kerl says, "may improve memory over a supplement." The main area of the brain affected by smart pills is the prefrontal cortex, where representations of our goals for the future are created. Namely, the prefrontal cortex consists of pyramidal cells that keep each other firing. However, in some instances they can become disconnected due to chemical imbalances, or due to being tired, stressed, and overworked. The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime. Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer as LEDs seem to be employed more these days, due to the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn't seem to have anything to do with circadian rhythms or zeitgebers.
Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It's applied to injured parts; for the brain, it's typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe without any side effects or risk of injury. When Giurgea coined the word nootropic (combining the Greek words for mind and bending) in the 1970s, he was focused on a drug he had synthesized called piracetam. Although it is approved in many countries, it isn't categorized as a prescription drug in the United States. That means it can be purchased online, along with a number of newer formulations in the same drug family (including aniracetam, phenylpiracetam, and oxiracetam). Some studies have shown beneficial effects, including one in the 1990s that indicated possible improvement in the hippocampal membranes in Alzheimer's patients. But long-term studies haven't yet borne out the hype. Jesper Noehr, 30, reels off the ingredients in the chemical cocktail he's been taking every day before work for the past six months. It's a mixture of exotic dietary supplements and research chemicals that he says gives him an edge in his job without ill effects: better memory, more clarity and focus and enhanced problem-solving abilities. "I can keep a lot of things on my mind at once," says Noehr, who is chief technology officer for a San Francisco startup. An unusual intervention is infrared/near-infrared light of particular wavelengths (LLLT), theorized to assist mitochondrial respiration and yielding a variety of therapeutic benefits. Some have suggested it may have cognitive benefits. LLLT sounds strange but it's simple, easy, cheap, and just plausible enough it might work. I tried out LLLT treatment on a sporadic basis 2013-2014, and statistically, usage correlated strongly & statistically-significantly with increases in my daily self-ratings, and not with any sleep disturbances. Excited by that result, I did a randomized self-experiment 2014-2015 with the same procedure, only to find that the causal effect was weak or non-existent. I have stopped using LLLT as likely not worth the inconvenience. Bacopa Monnieri is probably one of the safest and most effective memory and mood enhancing nootropics available today, with the fewest side-effects. In some people, prolonged use of Bacopa Monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement which includes Bacopa Monnieri as one of its main ingredients. He used to get his edge from Adderall, but after moving from New Jersey to San Francisco, he says, he couldn't find a doctor who would write him a prescription. Driven to the Internet, he discovered a world of cognition-enhancing drugs known as nootropics — some prescription, some over-the-counter, others available on a worldwide gray market of private sellers — said to improve memory, attention, creativity and motivation. The difference in standard deviations is not, from a theoretical perspective, all that strange a phenomenon: at the very beginning of this page, I covered some basic principles of nootropics and mentioned how many stimulants or supplements follow an inverted U-curve where too much or too little leads to poorer performance (ironically, one of the examples in Kruschke 2012 was a smart drug which did not affect means but increased standard deviations).
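As a concrete illustration of that last point (this sketch is mine, not part of the original analyses, and the ratings and column layout are hypothetical), one might check a blinded self-experiment log for a difference in spread as well as a difference in means between treatment and placebo days:

# Minimal sketch: compare treatment vs. placebo days on both the mean and
# the spread of daily self-ratings. The data here are made up for illustration.
import numpy as np
from scipy import stats

placebo   = np.array([3, 4, 3, 2, 4, 3, 3, 4, 2, 3])  # hypothetical 1-5 ratings
treatment = np.array([4, 2, 5, 3, 1, 5, 4, 2, 5, 3])

# Difference in means (Welch's t-test, no equal-variance assumption)
t_mean, p_mean = stats.ttest_ind(treatment, placebo, equal_var=False)

# Difference in spread (Levene's test); relevant if a substance leaves the
# mean unchanged but widens the mix of good and bad days
w_var, p_var = stats.levene(treatment, placebo)

print(f"means:     t = {t_mean:.2f}, p = {p_mean:.3f}")
print(f"variances: W = {w_var:.2f}, p = {p_var:.3f}")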
Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops. The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts. Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users? Recent developments include biosensor-equipped smart pills that sense the appropriate environment and location to release pharmacological agents. Medimetrics (Eindhoven, Netherlands) has developed a pill called IntelliCap with drug reservoir, pH and temperature sensors that release drugs to a defined region of the gastrointestinal tract. This device is CE marked and is in early stages of clinical trials for FDA approval. Recently, Google announced its intent to invest and innovate in this space. Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'. Smart Pill is a dietary supplement that blends vitamins, amino acids, and herbal extracts to sustain mental alertness, memory and concentration. One of the ingredients used in this formula is Vitamin B-1, also known as Thiamine, which sustains almost all functions present in the body, but plays a key role in brain health and function. A deficiency of this vitamin can lead to several neurological function problems. The most common use of Thiamine is to improve brain function; it acts as a neurotransmitter helping the brain prevent learning and memory disorders; it also provides help with mood disorders and offers stress relief. Medication can be ineffective if the drug payload is not delivered at its intended place and time. Since an oral medication travels through a broad pH spectrum, the pill encapsulation could dissolve at the wrong time. However, a smart pill with environmental sensors, a feedback algorithm and a drug release mechanism can give rise to smart drug delivery systems. This can ensure optimal drug delivery and prevent accidental overdose. 
How should the mixed results just summarized be interpreted vis-à-vis the cognitive-enhancing potential of prescription stimulants? One possibility is that d-AMP and MPH enhance cognition, including the retention of just-acquired information and some or all forms of executive function, but that the enhancement effect is small. If this were the case, then many of the published studies were underpowered for detecting enhancement, with most sample sizes under 50. It follows that the observed effects would be inconsistent, a mix of positive and null findings. There are hundreds of cognitive enhancing pills (so-called smart pills) on the market that simply do NOT work! With each of them claiming they are the best, how can you find the brain enhancing supplements that are both safe and effective? Our top brain enhancing pills have been picked by sorting and ranking the top brain enhancing products ourselves. Our ratings are based on the following criteria. (If I am not deficient, then supplementation ought to have no effect.) The previous material on modern trends suggests a prior >25%, and higher than that if I were female. However, I was raised on a low-salt diet because my father has high blood pressure, and while I like seafood, I doubt I eat it more often than weekly. I suspect I am somewhat iodine-deficient, although I don't believe as confidently as I did that I had a vitamin D deficiency. Let's call this one 75%. The therapeutic effect of AMP and MPH in ADHD is consistent with the finding of abnormalities in the catecholamine system in individuals with ADHD (e.g., Volkow et al., 2007). Both AMP and MPH exert their effects on cognition primarily by increasing levels of catecholamines in prefrontal cortex and the cortical and subcortical regions projecting to it, and this mechanism is responsible for improving cognition and behavior in ADHD (Pliszka, 2005; Wilens, 2006). So it's no surprise that as soon as medical science develops a treatment for a disease, we often ask if it couldn't perhaps make a healthy person even healthier. Take Viagra, for example: developed to help men who couldn't get erections, it's now used by many who function perfectly well without a pill but who hope it will make them exceptionally virile. The use of cognition-enhancing drugs by healthy individuals in the absence of a medical indication spans numerous controversial issues, including the ethics and fairness of their use, concerns over adverse effects, and the diversion of prescription drugs for nonmedical uses, among others.[1][2] Nonetheless, the international sales of cognition-enhancing supplements exceeded US$1 billion in 2015 when global demand for these compounds grew.[3]
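The point above about underpowered studies with sample sizes under 50 can be made concrete with a quick power calculation. The sketch below uses statsmodels; the per-group n and effect sizes are illustrative assumptions, not taken from any specific study in the review.

# Why small studies struggle to detect small cognitive-enhancement effects.
# Per-group n and effect sizes below are illustrative assumptions only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = 25  # typical of the small studies discussed above

for d in (0.2, 0.5, 0.8):  # small, medium, large standardized effects
    power = analysis.power(effect_size=d, nobs1=n_per_group, alpha=0.05)
    print(f"d = {d:.1f}: power = {power:.2f}")

# Conversely: the smallest effect detectable with 80% power at this n
detectable = analysis.solve_power(nobs1=n_per_group, alpha=0.05, power=0.8)
print(f"minimum detectable effect at n = {n_per_group}/group: d = {detectable:.2f}")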
American Institute of Mathematical Sciences
Discrete & Continuous Dynamical Systems - B

A sufficient optimality condition for delayed state-linear optimal control problems
Ana P. Lemos-Paião, Cristiana J. Silva and Delfim F. M. Torres
2019, 24(5): 2293-2313. doi: 10.3934/dcdsb.2019096
We give an answer to an open question by proving a sufficient optimality condition for state-linear optimal control problems with time delays in state and control variables. In the proof of our main result, we transform a delayed state-linear optimal control problem to an equivalent non-delayed problem. This allows us to use a well-known theorem that ensures a sufficient optimality condition for non-delayed state-linear optimal control problems. An example is given in order to illustrate the obtained result.
Ana P. Lemos-Paião, Cristiana J. Silva, Delfim F. M. Torres. A sufficient optimality condition for delayed state-linear optimal control problems. Discrete & Continuous Dynamical Systems - B, 2019, 24(5): 2293-2313. doi: 10.3934/dcdsb.2019096.

Applications of stochastic semigroups to cell cycle models
Katarzyna Pichór and Ryszard Rudnicki
2019, 24(5): 2365-2381. doi: 10.3934/dcdsb.2019099
We consider a generational and continuous-time two-phase model of the cell cycle. The first model is given by a stochastic operator, and the second by a piecewise deterministic Markov process. In the second case we also introduce a stochastic semigroup which describes the evolution of densities of the process. We study long-time behaviour of these models. In particular we prove theorems on asymptotic stability and sweeping. We also show the relations between both models.
Katarzyna Pichór, Ryszard Rudnicki. Applications of stochastic semigroups to cell cycle models. Discrete & Continuous Dynamical Systems - B, 2019, 24(5): 2365-2381. doi: 10.3934/dcdsb.2019099.

Mathematical analysis of macrophage-bacteria interaction in tuberculosis infection
Danyun He, Qian Wang and Wing-Cheong Lo
2018, 23(8): 3387-3413. doi: 10.3934/dcdsb.2018239
Tuberculosis (TB) is a leading cause of death from infectious disease. TB is caused mainly by a bacterium called Mycobacterium tuberculosis which often initiates in the respiratory tract. The interaction of macrophages and T cells plays an important role in the immune response during TB infection. Recent experimental results support that active TB infection may be induced by the dysfunction of Treg cell regulation that provides a balance between anti-TB T cell responses and pathology. To better understand the dynamics of TB infection and Treg cell regulation, we build a mathematical model using a system of differential equations that qualitatively and quantitatively characterizes the dynamics of macrophages, Th1 and Treg cells during TB infection. For sufficiently analyzing the interaction between immune response and bacterial infection, we separate our model into several simple subsystems for further steady state and stability studies. Using this system, we explore the conditions of parameters for three situations, recovery, latent disease and active disease, during TB infection.
Our numerical simulations support that Th1 cells and Treg cells play critical roles in TB infection: Th1 cells inhibit the number of infected macrophages to reduce the chance of active disease; Treg cell regulation reduces the immune response to stabilize the dynamics of the system.
Danyun He, Qian Wang, Wing-Cheong Lo. Mathematical analysis of macrophage-bacteria interaction in tuberculosis infection. Discrete & Continuous Dynamical Systems - B, 2018, 23(8): 3387-3413. doi: 10.3934/dcdsb.2018239.

Does assortative mating lead to a polymorphic population? A toy model justification
Ryszard Rudnicki and Radoslaw Wieczorek
2018, 23(1): 459-472. doi: 10.3934/dcdsb.2018031
We consider a model of phenotypic evolution in populations with assortative mating of individuals. The model is given by a nonlinear operator acting on the space of probability measures and describes the relation between parental and offspring trait distributions. We study long-time behavior of trait distribution and show that it converges to a combination of Dirac measures. This result means that assortative mating can lead to a polymorphic population and sympatric speciation.
Ryszard Rudnicki, Radoslaw Wieczorek. Does assortative mating lead to a polymorphic population? A toy model justification. Discrete & Continuous Dynamical Systems - B, 2018, 23(1): 459-472. doi: 10.3934/dcdsb.2018031.

Stability of stochastic semigroups and applications to Stein's neuronal model
Katarzyna Pichór and Ryszard Rudnicki
2018, 23(1): 377-385. doi: 10.3934/dcdsb.2018026
A new theorem on asymptotic stability of stochastic semigroups is given. This theorem is applied to a stochastic semigroup corresponding to Stein's neuronal model. Asymptotic properties of models with and without the refractory period are compared.
Katarzyna Pichór, Ryszard Rudnicki. Stability of stochastic semigroups and applications to Stein's neuronal model. Discrete & Continuous Dynamical Systems - B, 2018, 23(1): 377-385. doi: 10.3934/dcdsb.2018026.

Preface
Honglei Xu, Yi Zhang and Ka Fai Cedric Yiu
2017, 22(1): i-ii. doi: 10.3934/dcdsb.201701i
Honglei Xu, Yi Zhang, Ka Fai Cedric Yiu. Preface. Discrete & Continuous Dynamical Systems - B, 2017, 22(1): i-ii. doi: 10.3934/dcdsb.201701i.

Domain control of nonlinear networked systems and applications to complex disease networks
Suoqin Jin, Fang-Xiang Wu and Xiufen Zou
2017, 22(6): 2169-2206. doi: 10.3934/dcdsb.2017091
The control of complex nonlinear dynamical networks is an ongoing challenge in diverse contexts ranging from biology to social sciences. To explore a practical framework for controlling nonlinear dynamical networks based on meaningful physical and experimental considerations, we propose a new concept of the domain control for nonlinear dynamical networks, i.e., the control of a nonlinear network in transition from the domain of attraction of an undesired state (attractor) to the domain of attraction of a desired state. We theoretically prove the existence of a domain control. In particular, we offer an approach for identifying the driver nodes that need to be controlled and design a general form of control functions for realizing domain controllability.
In addition, we demonstrate the effectiveness of our theory and approaches in three realistic disease-related networks: the epithelial-mesenchymal transition (EMT) core network, the T helper (Th) differentiation cellular network and the cancer network. Moreover, we reveal certain genes that are critical to phenotype transitions of these systems. Therefore, the approach described here not only offers a practical control scheme for nonlinear dynamical networks but also helps the development of new strategies for the prevention and treatment of complex diseases.
Suoqin Jin, Fang-Xiang Wu, Xiufen Zou. Domain control of nonlinear networked systems and applications to complex disease networks. Discrete & Continuous Dynamical Systems - B, 2017, 22(6): 2169-2206. doi: 10.3934/dcdsb.2017091.

A continuum model for nematic alignment of self-propelled particles
Pierre Degond, Angelika Manhart and Hui Yu
2017, 22(4): 1295-1327. doi: 10.3934/dcdsb.2017063
A continuum model for a population of self-propelled particles interacting through nematic alignment is derived from an individual-based model. The methodology consists of introducing a hydrodynamic scaling of the corresponding mean field kinetic equation. The resulting perturbation problem is solved thanks to the concept of generalized collision invariants. It yields a hyperbolic but non-conservative system of equations for the nematic mean direction of the flow and the densities of particles flowing parallel or anti-parallel to this mean direction. Diffusive terms are introduced under a weakly non-local interaction assumption and the diffusion coefficient is proven to be positive. An application to the modeling of myxobacteria is outlined.
Pierre Degond, Angelika Manhart, Hui Yu. A continuum model for nematic alignment of self-propelled particles. Discrete & Continuous Dynamical Systems - B, 2017, 22(4): 1295-1327. doi: 10.3934/dcdsb.2017063.

Preface
Chris Cosner, Yuan Lou, Shigui Ruan and Wenxian Shen
2017, 22(3): ⅰ-ⅱ. doi: 10.3934/dcdsb.201703i
Chris Cosner, Yuan Lou, Shigui Ruan, Wenxian Shen. Preface. Discrete & Continuous Dynamical Systems - B, 2017, 22(3): ⅰ-ⅱ. doi: 10.3934/dcdsb.201703i.

Preface
Tomás Caraballo, María J. Garrido-Atienza and Wilfried Grecksch
2016, 21(9): i-ii. doi: 10.3934/dcdsb.201609i
It is a great honor and pleasure to dedicate this special issue of the journal Discrete and Continuous Dynamical Systems, Series B, to our colleague and friend Björn Schmalfuß, on the occasion of his 60th birthday.
Tomás Caraballo, María J. Garrido-Atienza, Wilfried Grecksch. Preface. Discrete & Continuous Dynamical Systems - B, 2016, 21(9): i-ii. doi: 10.3934/dcdsb.201609i.

Preface
Xiaoying Han and Qing Nie
2016, 21(7): i-ii. doi: 10.3934/dcdsb.201607i
Stochasticity, sometimes referred to as noise, is unavoidable in biological systems. Noise, which exists at all biological scales ranging from gene expressions to ecosystems, can be detrimental or sometimes beneficial by performing unexpected tasks to improve biological functions. Often, the complexity of biological systems is a consequence of dealing with uncertainty and noise, and thus, consideration of noise is necessary in mathematical models. Recent advancement of technology allows experimental measurement on stochastic effects, showing multifaceted and perplexed roles of noise.
As interrogating internal or external noise becomes possible experimentally, new models and mathematical theory are needed. Over the past few decades, stochastic analysis and the theory of nonautonomous and random dynamical systems have started to show their strong promise and relevance in studying complex biological systems. This special issue represents a collection of recent advances in this emerging research area.
Xiaoying Han, Qing Nie. Preface. Discrete & Continuous Dynamical Systems - B, 2016, 21(7): i-ii. doi: 10.3934/dcdsb.201607i.

Controlling stochasticity in epithelial-mesenchymal transition through multiple intermediate cellular states
Catherine Ha Ta, Qing Nie and Tian Hong
2016, 21(7): 2275-2291. doi: 10.3934/dcdsb.2016047
Epithelial-mesenchymal transition (EMT) is an instance of cellular plasticity that plays critical roles in development, regeneration and cancer progression. Recent studies indicate that the transition between epithelial and mesenchymal states is a multi-step and reversible process in which several intermediate phenotypes might coexist. These intermediate states correspond to various forms of stem-like cells in the EMT system, but the function of the multi-step transition or the multiple stem cell phenotypes is unclear. Here, we use mathematical models to show that multiple intermediate phenotypes in the EMT system help to attenuate the overall fluctuations of the cell population in terms of phenotypic compositions, thereby stabilizing a heterogeneous cell population in the EMT spectrum. We found that the ability of the system to attenuate noise on the intermediate states depends on the number of intermediate states, indicating the stem-cell population is more stable when it has more sub-states. Our study reveals a novel advantage of multiple intermediate EMT phenotypes in terms of systems design, and it sheds light on the general design principle of heterogeneous stem cell population.
Catherine Ha Ta, Qing Nie, Tian Hong. Controlling stochasticity in epithelial-mesenchymal transition through multiple intermediate cellular states. Discrete & Continuous Dynamical Systems - B, 2016, 21(7): 2275-2291. doi: 10.3934/dcdsb.2016047.

Preface
Jin Liang and Lihe Wang
2016, 21(5): i-ii. doi: 10.3934/dcdsb.201605i
We dedicate this volume of the Journal of Discrete and Continuous Dynamical Systems-B to Professor Lishang Jiang on his 80th birthday. Professor Lishang Jiang was born in Shanghai in 1935. His family had migrated there from Suzhou. He graduated from the Department of Mathematics, Peking University, in 1954. After teaching at Beijing Aviation College, in 1957 he returned to Peking University as a graduate student of partial differential equations under the supervision of Professor Yulin Zhou. Later, as a professor, a researcher and an administrator, he worked at Peking University, Suzhou University and Tongji University at different points of his career. From 1989 to 1996, Professor Jiang was the President of Suzhou University. From 2001 to 2005, he was the Chairman of the Shanghai Mathematical Society.
Jin Liang, Lihe Wang. Preface. Discrete & Continuous Dynamical Systems - B, 2016, 21(5): i-ii. doi: 10.3934/dcdsb.201605i.

Preface
José M. Amigó and Karsten Keller
2015, 20(10): i-iii. doi: 10.3934/dcdsb.2015.20.10i
It is our pleasure to thank Prof. Peter E. Kloeden for having invited us to guest edit a special issue of Discrete and Continuous Dynamical Systems - Series B on Entropy, Entropy-like Quantities, and Applications. From its inception this special issue was meant to be a blend of research papers, showing the diversity of current research on entropy, and a few surveys, giving a more systematic view of lasting developments. Furthermore, a general review should set the framework first.
José M. Amigó, Karsten Keller. Preface. Discrete & Continuous Dynamical Systems - B, 2015, 20(10): i-iii. doi: 10.3934/dcdsb.2015.20.10i.

Classical converse theorems in Lyapunov's second method
Christopher M. Kellett
2015, 20(8): 2333-2360. doi: 10.3934/dcdsb.2015.20.2333
Lyapunov's second or direct method is one of the most widely used techniques for investigating stability properties of dynamical systems. This technique makes use of an auxiliary function, called a Lyapunov function, to ascertain stability properties for a specific system without the need to generate system solutions. An important question is the converse or reversibility of Lyapunov's second method; i.e., given a specific stability property does there exist an appropriate Lyapunov function? We survey some of the available answers to this question.
Christopher M. Kellett. Classical converse theorems in Lyapunov's second method. Discrete & Continuous Dynamical Systems - B, 2015, 20(8): 2333-2360. doi: 10.3934/dcdsb.2015.20.2333.

Preface
Robert Stephen Cantrell, Suzanne Lenhart, Yuan Lou and Shigui Ruan
2015, 20(6): i-iii. doi: 10.3934/dcdsb.2015.20.6i
The movement and dispersal of organisms have long been recognized as key components of ecological interactions and as such, they have figured prominently in mathematical models in ecology. More recently, dispersal has been recognized as an equally important consideration in epidemiology and in environmental science. Recognizing the increasing utility of employing mathematics to understand the role of movement and dispersal in ecology, epidemiology and environmental science, The University of Miami in December 2012 held a workshop entitled "Everything Disperses to Miami: The Role of Movement and Dispersal in Ecology, Epidemiology and Environmental Science" (EDM).
Robert Stephen Cantrell, Suzanne Lenhart, Yuan Lou, Shigui Ruan. Preface. Discrete & Continuous Dynamical Systems - B, 2015, 20(6): i-iii. doi: 10.3934/dcdsb.2015.20.6i.

Optimal control of integrodifference equations in a pest-pathogen system
Marco V. Martinez, Suzanne Lenhart and K. A. Jane White
2015, 20(6): 1759-1783. doi: 10.3934/dcdsb.2015.20.1759
We develop the theory of optimal control for a system of integrodifference equations modelling a pest-pathogen system. Integrodifference equations incorporate continuous space into a system of discrete time equations. We design an objective functional to minimize the damage cost generated by an invasive species and the cost of controlling the population with a pathogen. Existence, characterization, and uniqueness results for the optimal control and corresponding states have been completed. We use a forward-backward sweep numerical method to implement our optimization which produces spatio-temporal control strategies for the gypsy moth case study.
Marco V. Martinez, Suzanne Lenhart, K. A. Jane White. Optimal control of integrodifference equations in a pest-pathogen system. Discrete & Continuous Dynamical Systems - B, 2015, 20(6): 1759-1783. doi: 10.3934/dcdsb.2015.20.1759.

Preface
Alexandre N. Carvalho, José A. Langa and James C. Robinson
2015, 20(3): i-ii. doi: 10.3934/dcdsb.2015.20.3i
We were very pleased to be given the opportunity by Prof. Peter Kloeden to edit this special issue of Discrete and Continuous Dynamical Systems - Series B on the asymptotic dynamics of non-autonomous systems.
Alexandre N. Carvalho, José A. Langa, James C. Robinson. Preface. Discrete & Continuous Dynamical Systems - B, 2015, 20(3): i-ii. doi: 10.3934/dcdsb.2015.20.3i.

Preface
Urszula Ledzewicz, Marek Galewski, Andrzej Nowakowski, Andrzej Swierniak, Agnieszka Kalamajska and Ewa Schmeidel
2014, 19(8): i-ii. doi: 10.3934/dcdsb.2014.19.8i
Most mathematicians who in their professional career deal with differential equations, PDEs, dynamical systems, stochastic equations and a variety of their applications, particularly to biomedicine, have come across the research contributions of Avner Friedman to these fields. However, not many of them know that his family background is actually Polish. His father was born in the small town of Włodawa on the border with Belarus and lived in another Polish town, Łomza, before he emigrated to Israel in the early 1920's (when it was still the British Mandate, Palestine). His mother came from the even smaller Polish town Knyszyn near Białystok and left for Israel a few years earlier. In May 2013, Avner finally had the opportunity to visit his father's hometown for the first time accompanied by two Polish friends, co-editors of this volume. His visit in Poland became an occasion to interact with Polish mathematicians. Poland has a long tradition of research in various fields related to differential equations and more recently there is a growing interest in biomedical applications. Avner visited two research centers, the Schauder Center in Torun and the Department of Mathematics of the Technical University of Lodz where he gave a plenary talk at a one-day conference on Dynamical Systems and Applications which was held on this occasion. In spite of its short length, the conference attracted mathematicians from the most prominent research centers in Poland including the University of Warsaw, the Polish Academy of Sciences and others, and even some mathematicians from other countries in Europe. Avner had a chance to get familiar with the main results in dynamical systems and applications presented by the participants and give his input in the scientific discussions. This volume contains some of the papers related to this meeting and to the overall research interactions it generated. The papers were written by mathematicians, mostly Polish, who wanted to pay tribute to Avner Friedman on the occasion of his visit to Poland.
Urszula Ledzewicz, Marek Galewski, Andrzej Nowakowski, Andrzej Swierniak, Agnieszka Kalamajska, Ewa Schmeidel. Preface. Discrete & Continuous Dynamical Systems - B, 2014, 19(8): i-ii. doi: 10.3934/dcdsb.2014.19.8i.

Two-species particle aggregation and stability of co-dimension one solutions
Alan Mackey, Theodore Kolokolnikov and Andrea L. Bertozzi
2014, 19(5): 1411-1436. doi: 10.3934/dcdsb.2014.19.1411
Systems of pairwise-interacting particles model a cornucopia of physical systems, from insect swarms and bacterial colonies to nanoparticle self-assembly.
We study a continuum model with densities supported on co-dimension one curves for two-species particle interaction in $\mathbb{R}^2$, and apply linear stability analysis of concentric ring steady states to characterize the steady state patterns and instabilities which form. Conditions for linear well-posedness are determined and these results are compared to simulations of the discrete particle dynamics, showing predictive power of the linear theory. Some intriguing steady state patterns are shown through numerical examples.
Alan Mackey, Theodore Kolokolnikov, Andrea L. Bertozzi. Two-species particle aggregation and stability of co-dimension one solutions. Discrete & Continuous Dynamical Systems - B, 2014, 19(5): 1411-1436. doi: 10.3934/dcdsb.2014.19.1411.
High-speed range and velocity measurement using frequency scanning interferometry with adaptive delay lines [Invited]
Christos A. Pallikarakis,* Jonathan M. Huntley, and Pablo D. Ruiz
Loughborough University, Wolfson School of Mechanical, Electrical and Manufacturing Engineering, Loughborough LE11 3TU, UK
*Corresponding author: [email protected]
Christos A. Pallikarakis, Jonathan M. Huntley, and Pablo D. Ruiz, "High-speed range and velocity measurement using frequency scanning interferometry with adaptive delay lines [Invited]," J. Opt. Soc. Am. A 37, 1814-1825 (2020), https://doi.org/10.1364/JOSAA.403858
JOSA A Feature Issue: Advances in Optical Metrology and Instrumentation (2020). Original manuscript received July 28, 2020; revised September 7, 2020; accepted September 8, 2020.
Range (i.e., absolute distance), displacement, and velocity of a moving target have been measured with a frequency scanning interferometer that incorporates a ${100}{,}{000}\;{\rm scan}\;{{\rm s}^{- 1}}$ vertical-cavity surface-emitting laser with 100 nm tuning range. An adaptive delay line in the reference beam, consisting of a chain of switchable exponentially growing optical delays, reduced modulation frequencies to sub-gigahertz levels. Range, displacement, and velocity were determined from the phase of the interference signal; fine alignment and linearization of the scans were achieved from the interferogram of an independent reference interferometer. Sub-nanometer displacement resolution, sub-100-nm range resolution, and velocity resolution of ${12}\;\unicode{x00B5}{\rm m}\;{{\rm s}^{- 1}}$ have been demonstrated over a depth measurement range of 300 mm. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
1. INTRODUCTION
A variety of pointwise optical techniques have been developed over the years for measurement of distance to a remote target. These include time of flight, phase shifting interferometry, and frequency scanning interferometry (FSI), also known as frequency-modulated continuous-wave (FMCW) lidar [1,2]. FSI is widely regarded as one of the most accurate methods when absolute distance, rather than changes in distance over time, is required. At its simplest, a tunable laser illuminates the target while undergoing a frequency scan at a constant rate.
The target range is determined from the modulation frequency of the interference signal, produced when the back-reflected or back-scattered object wave is mixed with a reference wave from the same laser. The drawback of the method in its basic form is that the deduced range is dependent on the tuning rate of the laser, which can drift over time. The use of a separate reference interferometer avoids such errors: target range is determined as a multiple of the reference interferometer length through the ratio of modulation frequencies from the measurement and reference interferometers [3,4]. The inclusion of a gas absorption cell enables the calibration of the reference interferometer length traceable to international standards [5]. As calibration can be performed once per frequency scan, stability requirements of the reference interferometer are relaxed significantly with this approach. Although the accuracy of FSI can be in the range of 1 part in ${{10}^6}$ to 1 in ${{10}^8}$, it has long been recognized that the technique's "Achilles' heel," for highly accurate range measurement in practical industrial environments, is motion of the target, or changes in refractive index of the air, that occur during a scan. The error in the calculated range is equal to the intra-scan target displacement (or the effective displacement in the case of refractive index changes), multiplied by an amplification factor $({{\omega _c}/{\Delta}\omega})$, where ${\omega _c}$ is the angular frequency of the emitted light at the center of the scan, and ${\Delta}\omega$ is the angular frequency tuning range of the laser [3–7]. Several approaches to solving this problem have been proposed, such as the use of two lasers scanning simultaneously in opposite directions or at different rates [5–7], the generation of a frequency scan rate of opposite sign using four-wave mixing [8], the use of a laser with successive up–down frequency ramps [9], and the addition of a separate laser Doppler velocimeter (LDV) that informs the FSI system of the required error correction due to target motion [10]. When phase or frequency data are acquired from systems with frequency ramps of opposite sign, the average of the two values largely cancels the displacement error, while the difference between them provides a measure of the displacement or velocity, respectively, of the target. The ability to measure both range and velocity is attractive, but the cost and complexity of many of the proposed systems is an obstacle to widespread industrial adoption. New tunable laser (TL) sources with dramatic increases in scan repetition rate, ${f_s}$, have been developed in recent years, particularly within the optical coherence tomography (OCT) community. These are an interesting proposition for absolute distance measurement, not only for the increased coordinate acquisition rate, but also because the range error due to target motion scales inversely with ${f_s}$. Sources with large scan ranges ${\Delta}\lambda$ of 100 nm or more about a center wavelength ${\lambda _c}$ in the 1–1.5 µm range are now available, with ${f_s}$ values of upwards of ${{10}^5}\;{{\rm s}^{- 1}}$ for vertical-cavity surface-emitting lasers (VCSEL) sources and ${{10}^6}\;{{\rm s}^{- 1}}$ for Fourier-domain mode-locked lasers [11]. The first applications of such sources to medium scale (${\sim}{0.75}\;{\rm m}$) ranging applications have been described recently [12]. 
High-speed sources have an inherent drawback, however, which is the resulting very high frequencies of the interference signal. In Ref. [12], a high-speed oscilloscope with 16 GHz bandwidth and ${50}\;{\rm GS}\;{{\rm s}^{- 1}}$ sampling rate was used as the data acquisition (DAQ) hardware. The cost of DAQs currently rises very dramatically above bandwidths of about 1 GHz. A solution to this problem is the adaptive delay line (ADL) concept, proposed in [13], which is a chain of $N$ switches and $N$ optical delays that follow an exponential sequence. Through an appropriate selection of switch positions, the modulation frequency can be reduced by a factor of ${{2}^N}$. In the current paper, we describe the first combination of a high-speed VCSEL source running at ${{10}^5}\;{\rm scans}\;{{\rm s}^{- 1}}$, with an ADL, to measure the range, displacement, and velocity of a target undergoing controlled vibration. Section 2 provides the background theory of FSI and ADLs. The experimental setup and numerical analysis of the interference signals are described in Sections 3 and 4, respectively, with results and discussion presented in Sections 5 and 6, before some concluding remarks in Section 7. 2. FREQUENCY SCANNING INTERFEROMETRY WITH ADAPTIVE DELAY LINES A. Frequency Scanning Interferometry An FSI system requires a tunable light source. Within a single scan, the angular frequency of the light emitted by the source, ${\omega _e}$, should ideally vary linearly with time, $t$. In practice, however, there may be non-linear contributions, and, in general, ${\omega _e}$ can be written as a Taylor series expansion as follows: (1)$${\omega _e} = {\omega _0} + {\dot \omega _0}t + \frac{1}{2}{\ddot \omega _0}{t^2} + \ldots ,$$ where ${\omega _0}$, ${\dot \omega _0}$, and ${\ddot \omega _0}$ are the zeroth, first, and second time derivatives of ${\omega _e}$ at the start of the scan ($t = 0$). ${\dot \omega _0}$ is the linear tuning rate, and $\frac{1}{2}{\ddot \omega _0}{t^2}$ represents the first non-linear term. Light from the source is divided into object and reference beams; the object beam is reflected off the target, which may in general be moving, and subsequently interferes with the reference wave. The intensity of the resulting interference signal may be written as (2)$$I(t ) = {I_0} + {I_1}{\cos}(\phi ),$$ where ${I_0}$ and ${I_1}$, the background ("dc") intensity, and modulation envelope, respectively, are slowly varying functions of time, and $\phi$ is the phase offset between the object and reference waves. A rigorous second-order analysis of the phase term, including a relativistic description of the Doppler shift from the moving target, is given by Reichold in [14] and results in the following expression: (3)$$\phi = {\omega _e}\frac{{\Lambda}}{c} + {\left({\frac{{\Lambda}}{c}} \right)^2}\left[{- \frac{{{{\ddot \omega}_0}}}{2}t + \frac{{{{\dot \omega}_0}}}{2} + \frac{{{{\ddot \omega}_0}}}{6}\frac{{\Lambda}}{c}} \right].$$ Here, ${\Lambda}$ represents the optical path difference (OPD) between the object and reference waves, which is time-varying due to the motion of the target mirror (TM), and $c$ is the speed of light. As pointed out by Reichold, the terms in the square brackets can be neglected in many situations, in comparison to the first term on the right-hand side of Eq. (3), and this is the case for the parameter values in the current experimental setup. 
For the remainder of the paper, we therefore make the approximation (4)$$\phi = {\omega _e}\frac{{\Lambda}}{c}.$$ Although ${\omega _e}(t)$ may not be well-characterized, the phase changes ${\Delta}{\phi _M}$ and ${\Delta}{\phi _R}$ that occur over the course of a scan in the measurement and reference interferometers, with OPDs ${{\Lambda}_M}$ and ${{\Lambda}_R}$, respectively, are related through Eq. (4) as follows: (5)$${{\Lambda}_M} = \frac{{{\Delta}{\phi _M}}}{{{\Delta}{\phi _R}}}{{\Lambda}_R}.$$ ${{\Lambda}_R}$ is assumed to be constant through the scan, with its value determined by some independent means such as a gas absorption cell, frequency comb, or a frequency stabilized interferometer, as is done later in this paper. If ${{\Lambda}_M}$ changes by ${{\Delta \Lambda}_M}$ during the course of the scan, then the value calculated by Eq. (5) is in error by ${{\Delta \Lambda}_M}{\omega _c}/{\Delta}\omega$, as noted in the previous section. The case of a traditional interferometer with a monochromatic source of wavelength ${\lambda _0}$ can be derived from Eq. (4) by substituting ${\omega _e}(t) = 2\pi c/{\lambda _0}$, and ${\Lambda} = {{\Lambda}_0} - 2{u_z}$. ${u_z}$ is the displacement component of the mirror along the optical axis, in a direction towards the interferometer, during the course of the "scan," and ${{\Lambda}_0}$ is the initial OPD. This results in the following well-known equation for an out-of-plane interferometer: (6)$${u_z} = - \frac{{{\lambda _0}{\Delta}\phi}}{{4\pi}}.$$ A key parameter in an FSI system is the maximum modulation frequency of the interference signal, $f$, since this determines the bandwidth and sampling rate of the DAQ. $f$ can be calculated from Eqs. (1) and (4) if we neglect non-linear terms for simplicity and take ${\dot \omega _0} = {\Delta}\omega /{\Delta}t$, where ${\Delta}t$ is the scan duration, which gives the result (7)$$f = \frac{1}{{2\pi}}\frac{{{d}\phi}}{{{d}t}} = \frac{1}{{2\pi}}\frac{{{\Delta}\omega}}{{{\Delta}t}}\frac{{\Lambda}}{c} \approx {\Lambda}{f_s}\frac{{{\Delta}\lambda}}{{\lambda _c^2}}.$$ ${f_s} = 1/{\Delta}t$ is the scan repetition rate, ${\lambda _c}$ is the center wavelength, and the approximation in terms of wavelengths, which is valid for short scans, allows for easy evaluation of $f$ from the laser manufacturers' datasheets. Equation (7) is valid for saw-tooth waveforms; a pre-factor of 2 appears on the right-hand side for triangular waveforms, and $\pi$ for sinusoidal waveforms. The dependence of $f$ on ${f_s}$ and on the target range $z = {\Lambda}/2$, predicted by Eq. (7) for the case of ${\lambda _c} = {1300}\;{\rm nm}$ and ${\Delta}\lambda = {100}\;{\rm nm}$, is shown as a contour map in Fig. 1. All of the points on a given contour have identical bandwidth and sampling rate demands on the DAQ. The minimum sampling rate here is taken to be $2f$, as given by the Shannon sampling theorem. The green zone indicates the parameter space accessible by reasonably low-cost DAQs (below $\sim \$ 1000$ per channel). The cost of DAQ hardware increases dramatically with sampling rate, and the red zone indicates the requirement for DAQ hardware that costs upwards of several $\$100{,}000$ per channel. This is the regime that a VCSEL source with ${f_s} = {100}\;{\rm kHz}$ runs into at a range of a few meters (m). In addition to the cost of the DAQs, the computational effort to process the $\gt\! 
{100}\;{\rm GS\;s}^{-1}$ data streams is too high, by at least 1–2 orders of magnitude, for real-time analysis by current reasonably priced graphics processing unit (GPU) or field-programmable gate array (FPGA) hardware. It was for these reasons that ADLs were developed, as summarized in the next sub-section. Fig. 1. Minimum DAQ sampling rates for a target at a range $z$ and a laser with (saw-tooth) scan repetition rate ${f_s}$. B. Adaptive Delay Lines The concept of the ADL was first introduced in [13] and is outlined below. The ADL is a module placed in an interferometer's reference beam, as shown in Fig. 2(a) for the case of a Mach–Zehnder interferometer (MZI). In this example, a circulator is used to direct the object beam onto the target and to collect the back-reflected light. The interference signal between the object and reference beams is then measured by an auto-balanced pair of photodetectors and recorded by a DAQ board. Fig. 2. Adaptive delay line (ADL) concept. (a) FSI system with ADL showing tunable laser (TL), couplers (CPL), circulator (C), auto-balanced photodetectors (ABPD), low-pass filter (LPF), and data acquisition board (DAQ). (b) Example of 3-bit ADL: continuous lines indicate the routing of the reference beam selected by switches ${{\rm S}_0}\ldots{{\rm S}_2}$, for bit configuration 101. The ADL module consists of a chain of $N$ switches, each of which selects one of two possible optical paths to the next switch. The OPD between the two paths from switch $j$ will be denoted as ${{\Lambda}_j}$. The path differences are selected according to the following equation: (8)$${{\Lambda}_j} = {2^j}{d_0},\quad j = 0,1, \ldots ,N - 1,$$ where ${d_0}$ is the minimum OPD. An example is given in Fig. 2(b) for the case $N = {3}$. Curved paths are shown since an ADL could be implemented using optical fibers or waveguides on a photonic integrated circuit (PIC). A recent example of a FMCW lidar, implemented on a silicon platform, is described in [15]. The state of switch $j$ is defined by a binary digit ${b_j}$, where the value of one indicates that the longer path is selected, and a value of zero is the shorter path. The state of the ADL is completely specified by the bit pattern (byte) $B = {b_{N - 1}}{b_{N - 2}} \ldots {b_1}{b_0}$. This controls, in turn, the location of the surface within the measurement volume, where the path difference between object and reference waves equals zero. Cross sections through the "zero-OPD surfaces" for the 3-bit example of Fig. 2(b) are shown as dotted lines in Fig. 2(a). These are spaced with a separation of ${d_0}/2$ in the case of coaxial illumination and observation directions. The benefits of the ADL were discussed in [13]. In general, for a given DAQ bandwidth and sampling rate, each additional switch in the chain doubles either the maximum range or the coordinate acquisition rate. Moreover, the restriction on the source coherence length (${l_c} \ge {2^N}{d_0}$), which ultimately limits measurement range, is relaxed by a factor ${2^N}$ to ${l_c} \ge {d_0}$. 3. EXPERIMENTAL The optical setup used to demonstrate the ADL proof-of-principle [13] was modified slightly to allow combined range and displacement/velocity measurements, as shown in Fig. 3. Its design will be summarized here for completeness. Fig. 3. Experimental optical setup, incorporating 3-bit ADL, to measure range and displacement. 
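As a concrete illustration of Eqs. (7) and (8), the sketch below assumes idealized binary delays (d₀ = 240 mm and exact factors of two, unlike the practical delay values quoted in the next section) and a 1300 nm, 100 nm, 100 kHz source, and shows how selecting the bit pattern whose zero-OPD surface lies closest to the target brings the modulation frequency from the gigahertz regime down to a level a modest DAQ can handle.

```python
import numpy as np

# Assumed, idealized parameters (the experimental delays were not exact powers of two)
d0 = 0.24             # minimum ADL optical path difference, m
N = 3                 # number of switches
lambda_c = 1300e-9    # center wavelength, m
delta_lambda = 100e-9 # scan range, m
f_s = 1e5             # scan repetition rate, 1/s

def adl_delay(bits):
    """Reference-path OPD added by the ADL for switch states bits[j] (Eq. (8))."""
    return sum(b * (2**j) * d0 for j, b in enumerate(bits))

def mod_frequency(opd):
    """Approximate interference modulation frequency for a given OPD, Eq. (7)."""
    return opd * f_s * delta_lambda / lambda_c**2

z_target = 0.9                 # assumed target range, m
opd = 2 * z_target             # round-trip OPD with the ADL set to 000
print(f"bit pattern 000: f = {mod_frequency(opd)/1e9:.2f} GHz")

# Choose the bit pattern whose zero-OPD surface lies closest to the target
best = min(range(2**N),
           key=lambda B: abs(opd - adl_delay([(B >> j) & 1 for j in range(N)])))
bits = [(best >> j) & 1 for j in range(N)]
residual = opd - adl_delay(bits)
print(f"bit pattern {best:03b}: residual OPD = {residual:.2f} m, "
      f"f = {abs(mod_frequency(residual))/1e6:.0f} MHz")
```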
The ADL is a 3-bit device, the switching for which is carried out by manual rotation of achromatic half-wave plates (HWP) in front of polarizing beam splitters (PBS). If horizontally polarized light is incident on a given HWP, the beam passes through the PBS to the next switch. Rotating the HWP by 45° causes vertically polarized light to enter the PBS. The beam is then directed around an optical delay loop consisting of two pairs of gold mirrors (M-M), where translation stages (TS) can be used to make fine adjustments of the delay lengths. The delays took the values ${d_0}\sim 240\;{\rm mm}$, ${d_1}\sim 490\; {\rm mm}$, and ${d_2}\sim 830\; {\rm mm}$. An additional HWP and PBS before the ADL are used as a variable ratio beam splitter (BS). The TL source is a VCSEL (Thorlabs SL131090) that runs at a scan repetition rate of ${f_s} = 100\; {\rm kHz}$, with a center wavelength of ${\lambda _c} = 1300\;{\rm nm}$, and mode-hop-free tuning range of ${\Delta}\lambda = 100\; {\rm nm}$. In its standard form, the laser produces linear frequency up-scans with very short down-scans. The use of up- and down-scanning lasers in the literature to separate velocity and range information has been described in Section 1. The laser was therefore customized by the manufacturer to produce up- and down-scans having a similar maximum tuning rate in both directions, though at the expense of an increase in signal modulation frequency by approximately ${2} \times$ and some degradation of scan linearity. Ultimately only the frequency down-scan data was used in the experiments described in this paper. Light is delivered to the interferometer through a single-mode, polarization-maintaining (SM PM) fiber and, after collimation with a reflective collimator (RCL), is split into the reference and object beams. Two linear polarizers (LNP) at the final ${\rm BS}_{\rm CPL}$ ensure that the polarization states of object and reference waves are matched; the resulting interference signal is transmitted to an auto-balanced photodetector (ABPD) by a pair of SM PM fibers using couplers with achromatic doublets. The target gold mirror (TM) that reflects the object beam was mounted on a piezoelectric (PZT) transducer (Burleigh Instruments PZ-81 linear translator, driven by a Burleigh R6-93 ramp generator). The PZT was in turn mounted on a rail assembly (not shown here), which allowed manual movement of the TM over a range of approximately 1 m and a corresponding round trip range of ${\sim}{2}\;{\rm m}$. The position was monitored independently by means of a frequency stabilized interferometer (Renishaw XL-80). This is unable to measure absolute distance, as it operates at a fixed wavelength, but it can track changes in position (i.e., displacement) to an accuracy of $0.5\; \unicode{x00B5}{\rm m\;m}^{-1}$ and was used for calibration of the system. The laser's 100 kHz repetition rate and 100 nm scan range mean that TM locations more than about 13 mm from a zero-OPD surface result in a modulation frequency, $f$, lying outside the 300 MHz bandwidth of the ABPDs. The experiment involved positioning the TM at a series of six fixed locations close to the zero-OPD surfaces for each of two ADL bit configurations (the 010 and 101). The location index and measured range with the Renishaw interferometer, ${z_{\rm{TM}}}$, for the 12 locations, are shown in Table 1. The maximum modulation frequency was approximately 250 MHz for locations 1, 6, 7, and 12; 150 MHz for locations 2, 5, 8, and 11; and 50 MHz for locations 3, 4, 9, and 10. Table 1. 
Measurement Interferometer Mean Frequency ($\bar A$), Target Range (${z_{\rm{TM}}}$), Standard Deviation in Range (${\sigma _z}$), and Displacement (${\sigma _{{u_z}}}$) for Each of 12 Target Mirror Locations At each location, a triangular waveform with a period of 25 ms was applied to the PZT, which resulted in a highly linear axial motion of the TM with peak-to-peak displacement of approximately 1.0 µm. The PZT motion was characterized by recording the interference signal produced by a fixed-wavelength laser (Santec TSL-510, $\lambda = 1270\;{\rm nm}$) in place of the TL. The PZT drive signal and interferometer signal are shown in Figs. 4(a) and 4(b), respectively, for one complete cycle of the PZT. The PZT motion was then extracted by separately fitting a five-parameter model to the PZT up-scan and down-scan intensity arrays [16]. The model allows for a quadratic (two-parameter) PZT response to the applied voltage; the non-linear contribution to the displacement was however very small for this particular PZT. The other three parameters were the dc intensity, the intensity modulation amplitude, and the initial phase offset between the object and reference arms of the interferometer. The modeled intensity, and corresponding underlying displacement of TM, are shown in Figs. 4(b) and 4(c). Fig. 4. (a) PZT drive voltage over one complete cycle of the triangular wave. (b) Measured intensity signal (continuous line) and modeled intensity (dashed line, with small vertical shift for clarity) from fixed-wavelength laser. (c) Axial displacement of TM corresponding to model intensity in (b). For the experiments with the TL, the PZT drive signal was used to trigger a storage oscilloscope (Tektronix MSO54, 500 MHz, ${6.25}\;{\rm GS}\;{{\rm s}^{- 1}}$, maximum record length: 62.5 Mpts) at a point in the waveform such that the maximum axial displacement occurred in the center of each dataset. The ABPD signal was digitized on one channel of the storage oscilloscope. A second channel recorded the output of a reference interferometer (termed "$k$-clock" in the OCT community) contained within the laser control unit. This is an MZI with an optical delay of nominal value of 44 mm and produces a peak modulation frequency of around 890 MHz on the frequency up-scan and a little lower on the down-scan. A third channel recorded the output signal from a fiber Bragg grating (FBG), which is, again, incorporated within the laser control box. This produces a pulse whenever the laser wavelength passes through the center wavelength, 1310 nm, of the FBG. Finally, the fourth channel recorded the output from the PZT control unit. The captured data sequences from all four channels, which were recorded synchronously, were then transferred to a personal computer (PC) for subsequent processing. The experimental data for all 12 locations are available from Ref. [17]. 4. NUMERICAL ANALYSIS The recorded intensity from the measurement interferometer, ${I_M}(t)$, during a single 10 µs duration up–down frequency scan is shown in Fig. 5(a), together with the FBG timing pulses. Each scan consists of approximately 32,000 sample points per channel, and, in total, 1000 up–down scans were recorded per dataset. The time-varying modulation frequencies for the measurement and reference interferometers are represented by the spectrograms in Figs. 5(b) and 5(c), in which a sliding window length of 1000 sample points with 95% overlap was used. Fig. 5.
(a) Measurement interferometer intensity signal during a single "up–down" frequency scan cycle. Blue line: FBG pulses from frequency up-scan and down-scan. Red line: time-domain window function for data analysis. (b), (c) Spectrograms for measurement and reference interferometer, respectively. Dashed lines represent timing of FBG down-scan pulse (blue); edges of time-domain window (red); edges of frequency-domain window (green). Fig. 6. (a) Intensity signals from reference and measurement interferometers [(a) and (b), respectively] in the neighborhood of the FBG pulse, for scan index 13 out of total 2000 scans. [PZT was ${\sim}{5}\;{\rm ms}$ from peak displacement in Fig. 4(a).] (c), (d) Corresponding signals for scan index 450 (PZT close to top of the up ramp). ${-}{2},- {1},{0},+ {1}$, and ${+}{2}$ labels refer to the fringe order, $q$. Three distinct signal processing steps were developed to calculate range and displacement data from such a scan, as discussed in the following three sub-sections. A. Signal Segmentation and Coarse Alignment In order to measure displacement information from the phase changes of ${I_M}(t)$, it is necessary to segment the entire signal into individual frequency scans, similar to that shown in Fig. 5(a), and then to shift them along the time axis so as to align them with respect to one another. As the sampling rate will not normally be an integer multiple of ${f_s}$, alignment to sub-sample point accuracy is in general required. The approach adopted here was to use the reference interferometer signal, ${I_R}(t)$, which is recorded in tandem with ${I_M}(t)$, as a stable "ruler" against which the shifts in ${I_M}(t)$ due to displacement of the target could be measured. The reference interferometer fringe maxima can be thought of as the ruler markings and are separated by a constant wavenumber difference; however, as they are not labelled, a robust method of fringe-order identification is required. The approach is illustrated in Fig. 6, which shows expanded portions of the measurement and reference interferometer signals near the FBG pulse at two different times during the PZT ramp. The orders (denoted $q$) for the fringes of the reference interferometer signals have been assigned, using the procedures described later in this section, for the plots in Figs. 6(a) and 6(c). The corresponding measurement interferometer signals [Figs. 6(b) and 6(d)] show a different phase offset with respect to the $q = {0}$ fringe that is a result of the target motion between these two frequency scans. Fig. 7. Normalized Fourier transforms of windowed (a) reference and (b) measurement interferometer signals from a single frequency down-scan. Frequency-domain window functions are shown in green. Alignment of the scans was a two-stage process. The first stage, coarse alignment, was achieved using the FBG synchronization signal that was recorded in parallel with ${I_R}(t)$ and ${I_M}(t)$. The FBG pulses show some variability in shape and location from scan to scan, with a root-mean-square (rms) timing jitter of typically 1.5 sample points at the sampling rate used here. To reduce the jitter, a straight-line fit of the center of mass of the down-scan FBG pulse location versus scan index, $s$, was performed, and the gradient of this line defined the increment in index of the one-dimensional (1D) data vectors from one down-scan to the next. In this way, the timing jitter of one scan relative to the others was reduced to a maximum of one sample point. 
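The coarse-alignment step can be sketched as follows. The FBG pulse locations below are synthetic, with an assumed inter-scan increment and 1.5 sample points rms jitter; the fitted gradient plays the role of the per-scan index increment described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic FBG pulse locations (sample-point index) for each down-scan:
# a constant, non-integer increment per scan plus ~1.5 sample points rms jitter
n_scans = 1000
true_increment = 62500.4        # assumed samples between successive down-scans
scan_idx = np.arange(n_scans)
fbg_location = 1200.0 + true_increment*scan_idx + rng.normal(0, 1.5, n_scans)

# Straight-line fit: the gradient defines the index increment used to segment
# and coarsely align the 1-D data vectors from one down-scan to the next
gradient, intercept = np.polyfit(scan_idx, fbg_location, 1)
segment_start = np.rint(intercept + gradient*scan_idx).astype(np.int64)

residual = fbg_location - (intercept + gradient*scan_idx)
print(f"fitted increment      : {gradient:.2f} samples/scan")
print(f"raw pulse jitter (rms): {residual.std():.2f} samples")
```

Because the segmentation uses the fitted line rather than the raw pulse positions, the residual scan-to-scan misalignment is limited to the rounding of the fitted start index, i.e. at most one sample point.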
More precise registration was then achieved from the recovered reference interferometer phase signal, as described in Sections 4.B and 4.C. B. Phase Analysis The Takeda Fourier transform method was used to extract the time-varying phase signals, ${\phi _R}(t)$ and ${\phi _M}(t)$, for the reference and measurement interferometers, from the corresponding intensity signals, ${I_R}(t)$ and ${I_M}(t)$ [18]. This involves application of a time-domain window function, as shown in Fig. 5, to select the portion of the down-scan to be processed. The Fourier transforms of the windowed data vectors, ${\tilde I_R}(f)$ and ${\tilde I_M}(f)$, were then windowed with a top hat function that is asymmetrical with respect to the frequency origin. This second window function acts as a band-pass filter, the location and width of which were chosen to let through the range of positive frequencies present in the original signals, but which block all the negative frequencies. This is illustrated in Fig. 7 for the same measurement interferometer dataset as shown in Fig. 5. Identical time-domain windows were used for all datasets; the frequency-domain windows were also the same for all of the reference interferometer signals, but varied from one measurement interferometer signal to another as the frequency content changed with the distance to the target. The inverse Fourier transform of the windowed frequency-domain signal was then calculated, resulting in a complex time-domain signal ${I_M^\prime} (t)$. The steps above can be summarized by the following equation: (9)$${I_M^\prime} (t ) = {{\cal F}^{- 1}}\{{{W_f}(f ){\cal F}\{{{W_t}(t){I_M}(t )} \}} \},$$ where ${W_f}$ and ${W_t}$ are the frequency and time-domain windows, respectively, and ${\cal F}\{\ldots \}$ and ${{\cal F}^{- 1}}\{\ldots \}$ are the forward and inverse Fourier transform operators. When selecting the window functions, the time-domain window should be as long as possible to maximize the effective frequency tuning range, and hence minimize the range resolution, whereas the frequency-domain window should be narrow to cut out as much intensity noise as possible. In terms of the spectrograms in Fig. 5, bounding boxes that are short and wide are therefore expected to give the best results, which in turn require a laser that has been optimized to provide highly linear frequency scans. Wrapped phase values, i.e., values lying on the range of ${-}\pi$ to ${+}\pi$, were calculated from ${I_M^\prime} (t)$ as (10)$${\hat \phi _M}(t ) = {\rm atan} ({\Im \{{{I_M^\prime} (t )} \}/\Re \{{{I_M^\prime} (t )} \}} ),$$ where $\Re \{\ldots \}$ and $\Im \{\ldots \}$ represent the real and imaginary parts, respectively. The unwrapped phase, ${\phi _M}(t)$, was then obtained from ${\hat \phi _M}(t)$ by adding integral multiples of $2\pi$ to the phase at each sample point so that the phase change between adjacent points always lay in the range of ${-}\pi$ to ${+}\pi$. It is convenient for the next stage in the analysis if the unwrapping starts from the sample point nearest the FBG pulse. Equations equivalent to Eqs. (9) and (10) were used to find the reference interferometer phase, ${\phi _R}(t)$, from ${I_R}(t)$. The forward and inverse transforms, phase evaluation, and unwrapping were implemented in MATLAB using the $\texttt{fft, ifft, atan2}$, and $\texttt{unwrap}$ functions, respectively. C. Scan Linearization and Fine Alignment The fact that the laser does not maintain a constant rate of change of frequency through a given scan, as shown by the spectrograms in Fig. 
5, means that linearization of the signals is required. Without linearization, the Fourier transform (see, e.g., Fig. 7) is not a well-defined narrow peak that can be used to locate the target position to a high accuracy. One approach to linearization would be to resample the measurement interferometer intensity signal before Fourier transformation. However, as intensity is a high-frequency oscillatory signal, interpolation can introduce significant numerical errors. A better way is to resample the phase signals, since over any small time interval these are close to linear functions of time. The time values of the original equally spaced sample points will be denoted as ${t_p}$ ($p = 0,1,2, \ldots ,{N_s} - 1)$. ${N_s}$ is the total number of sample points in a single scan. The fringe maxima for the reference interferometer, which, as stated previously, provide the "ruler markings" to linearize the measurement interferometer phase, occur at the phase values ${\phi _R} = 2q\pi$ ($q = \ldots , - 2, - 1,0,1,2 \ldots)$. $q$ is the fringe order; the case $q = 0$ is taken to be the fringe closest to the FBG pulse within each scan and is termed here the "pivot" point. The times $t_q^\prime$, at which ${\phi _R} = 2q\pi$, were found from the vector ${\phi _R}({{t_p}})$ by linear interpolation using MATLAB function $\texttt{interp1}$. The $t_q^\prime$ were then used to interpolate ${\phi _M}({{t_p}})$, as shown schematically in Fig. 8. The $t_q^\prime$ values are not, in general, equally spaced in time, so that ${\phi _M}({t_q^\prime})$ is not a linear function of time. It is, however, a linear function of variable $q$, provided there is no dispersion mismatch between the measurement and reference interferometers. Dispersion mismatch is clearly present in the optical setup of Fig. 3 and will produce some deviations from linearity, but this was neglected in the current study. Fig. 8. (a) Linearization of the measurement interferometer phase, ${\phi _M}(t)$, using linearly spaced phase values ${\phi _R} = 2q\pi$ ($q = \ldots , - 2, - 1,0,1,2 \ldots)$ from the reference interferometer. (b) The corresponding time values $t_q^\prime$ are used to interpolate ${\phi _M}(t)$, which is sampled on the uniformly spaced time vector ${t_p}$ ($p = 0,1,2, \ldots ,{N_s} - 1),$ represented by the tick marks on both horizontal axes. "Pivot point" phase values are shown as open circles. The procedure described above achieves, in addition to scan linearization, the required fine alignment of the measurement interferometer signal to within a small fraction of a fringe of the reference interferometer signal. There is, however, one final step required, which is to correct for a possible mis-identification of the $q = 0$ fringe. If the FBG pulse is calculated to lie midway between two fringes, for example, the fringe selected as the pivot point may toggle between the two fringes from scan to scan. This problem was overcome by considering the points $q = \pm 1, \pm 2$ as alternative candidate pivot points. For each candidate pivot, the phase ramp for scan $s$, $\phi _M^{(s)}({t_q^\prime})$, is compared with that from the previous scan, $\phi _M^{({s - 1})}({t_q^\prime})$. Unwrapping in the scan direction is achieved by adding an integer multiple of $2\pi$ to $\phi _M^{(s)}({t_q^\prime})$ to minimize $S$, the sum of the squares of the differences between $\phi _M^{(s)}({t_q^\prime})$ and $\phi _M^{({s - 1})}({t_q^\prime})$ (${q = {q_1}, \ldots , - 2, - 1,0,1,2 \ldots ,{q_2}}$). 
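Before turning to the pivot-point selection, the phase-extraction and linearization steps of Sections 4.B and 4.C can be summarized in the following minimal Python sketch (the actual processing used MATLAB's fft, ifft, atan2, unwrap, and interp1). The chirped test signals, sampling rate, and window edges are assumed stand-ins for the recorded ${I_R}(t)$ and ${I_M}(t)$, and the FBG/pivot-point bookkeeping is omitted.

```python
import numpy as np

def takeda_phase(intensity, dt, f_lo, f_hi):
    """Eqs. (9)-(10): time-domain window, one-sided frequency-domain window,
    inverse FFT, then the unwrapped phase of the resulting complex signal."""
    n = intensity.size
    w_t = np.hanning(n)                          # time-domain window W_t
    spectrum = np.fft.fft(intensity * w_t)
    freqs = np.fft.fftfreq(n, dt)
    w_f = (freqs >= f_lo) & (freqs <= f_hi)      # band-pass window W_f
    analytic = np.fft.ifft(spectrum * w_f)       # complex signal I'(t)
    return np.unwrap(np.arctan2(analytic.imag, analytic.real))

# --- synthetic stand-ins for I_R(t) and I_M(t) (assumed parameters) -------
dt = 1.0 / 6.25e9                                # sample interval, s
t = np.arange(8192) * dt
f_ref, f_meas = 400e6, 120e6                     # nominal modulation frequencies
chirp = 5e13                                     # common scan non-linearity, Hz/s
I_R = 1 + np.cos(2*np.pi*(f_ref*t + 0.5*chirp*t**2))
I_M = 1 + np.cos(2*np.pi*(f_meas*t + 0.5*(f_meas/f_ref)*chirp*t**2))

phi_R = takeda_phase(I_R, dt, 100e6, 800e6)
phi_M = takeda_phase(I_M, dt,  20e6, 400e6)

# Keep the central part of the scan, where the time-domain window is non-zero
sl = slice(t.size // 10, -t.size // 10)
t_c, phi_R_c, phi_M_c = t[sl], phi_R[sl], phi_M[sl]

# --- linearization: resample phi_M at the reference fringe maxima ---------
q = np.arange(int(phi_R_c[0] // (2*np.pi)) + 1, int(phi_R_c[-1] // (2*np.pi)))
t_q = np.interp(2*np.pi*q, phi_R_c, t_c)     # times where phi_R = 2*pi*q
phi_M_lin = np.interp(t_q, t_c, phi_M_c)     # phi_M sampled on the fringe "ruler"

# phi_M_lin is now (nearly) linear in q, in spite of the common chirp
slope = np.polyfit(q, phi_M_lin, 1)[0]
print(f"phase change per reference fringe: {slope:.4f} rad "
      f"(expected ~ {2*np.pi*f_meas/f_ref:.4f} rad)")
```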
The candidate pivot point with the smallest $S$ value was then taken to be the true pivot point for scan $s$ and relabelled $q = 0$. In practice, the candidate pivot points with $q = 0$ or ${\pm}1$ were always selected with this procedure. Fig. 9. Displacement of target mirror calculated from "pivot point" phase for target locations (a) 7 and (b) 9. The output from the previous section is a matrix of measurement interferometer phase values, ${\varphi _{{sq}}} = \phi _M^{(s)}({t_q^\prime})$: each row $({s = 0,1,2, \ldots ,{N_s} - 1})$ corresponds to an individual frequency scan, each column $({q = {q_1},{q_1} + 1, \ldots ,{q_2} - 1,{q_2}})$ to an individual laser wavenumber. The phase values in the pivot point column, ${\varphi _{s0}}$, are equivalent to those from a fixed-wavelength laser operating at the FBG wavelength, ${{\lambda}_{\rm{FBG}}} = 1310 \; {\rm nm}$. Changes in phase relative to the starting phase, ${\Delta}{\varphi _{s0}} = {\varphi _{s0}} - {\varphi _{00}}$, can therefore be scaled to the axial displacement component, ${u_z}$, through the equation (11)$${u_z} = \frac{{- {{\lambda}_{\rm{FBG}}}{\Delta}{\varphi _{s0}}}}{{4\pi}}.$$ The minus sign in front of the right-hand side corresponds to that given earlier in Eq. (6), but, as noted later, the required sign depends on which side of the nearest zero-OPD surface the target happened to be located. Figures 9(a) and 9(b) show ${u_z}$ as it tracks the TM motion with the target in locations 7 and 9, respectively (see Table 1). For the horizontal axis, scan index $s$ has been converted to time, $t$, through the known inter-scan time (9.998 µs). The total measured motion of just under 0.3 µm is very close to that measured over the central 10 ms of the PZT-characterization experiment, shown previously in Fig. 4(c). Although both datasets display the expected inverted "V" shape, there is a higher level of noise in Fig. 9(a) than in Fig. 9(b). The noise can be characterized through the standard deviation of the residuals of the data with respect to a best-fit quadratic over the first 5 ms of the PZT up ramp. Values of 1.78 nm and 0.76 nm were obtained for Figs. 9(a) and 9(b), respectively. The performance can clearly be improved upon, however, as in the above method of calculating ${u_z}$, all of the columns but one of the phase matrix ${\varphi _{\textit{sq}}}$ have been discarded. One way to make use of all of the data is to fit a straight line to the phase values from each frequency scan, i.e., to fit (over the $q$ variable) the equation (12)$${\psi _s} = {A_s}q + {B_s},$$ to the ${\varphi _{\textit{sq}}}$ values, where $q$ runs from ${q_1}$ to ${q_2}$ (${-}{1000}$ to ${+}{100}$ in this case) in unit steps. ${B_s}$ then represents an improved estimate of the phase at the pivot point. Errors can be reduced further by extracting the best-fit phase at the center of the $q$ vector, rather than at the pivot point, as the latter is close to one end of the vector. This is easily achieved by performing the fit with respect to a vector $q^\prime$ that runs from ${-}({{q_2} - {q_1}})/2$ to $({{q_2} - {q_1}})/2$ in unit steps. The results for TM in location 9 are shown in Fig. 10(a), where the conversion from phase to displacement has been carried out using the wavelength at $q^\prime = 0$, estimated as 1292.7 nm from the reference interferometer optical delay. The corresponding plot for location 7 is not shown, as, to the eye, it appears identical. 
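The row-wise straight-line fit of Eq. (12), and the conversion of its intercepts and slopes to displacement and range, can be sketched as follows. The phase matrix here is synthetic, generated under the simplifying assumption that the wavenumber increases by $2\pi /{{\Lambda}_R}$ per fringe order, and the parameter values are illustrative rather than the experimental ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed, illustrative parameters (not the experimental values)
lambda_FBG = 1310e-9          # wavelength tagged by the FBG pulse, m
Lambda_R   = 44e-3            # nominal reference interferometer OPD, m
n_scans, q1, q2 = 500, -1000, 100
q = np.arange(q1, q2 + 1)
q_prime = q - (q1 + q2)/2.0                   # centred fringe-order vector

# Wavelength at q' = 0 under the assumed sign convention used for this sketch
lambda_0 = 1.0 / (1.0/lambda_FBG + 0.5*(q1 + q2)/Lambda_R)

# Synthetic phase matrix phi[s, q]: target ~10 mm from the nearest zero-OPD
# surface, moving sinusoidally by +/- 0.5 um, plus phase noise
u_z = 0.5e-6 * np.sin(2*np.pi*np.arange(n_scans)/n_scans)   # displacement, m
Lambda_M = 2*(0.010 - u_z)                                   # measurement OPD, m
phi = (2*np.pi*Lambda_M[:, None]/lambda_FBG
       + (2*np.pi*Lambda_M[:, None]/Lambda_R) * q[None, :]
       + rng.normal(0, 0.05, (n_scans, q.size)))

# Per-scan straight-line fit of Eq. (12) over q'
A_s, B_s = np.polyfit(q_prime, phi.T, 1)      # slopes and intercepts, per scan

# Displacement from the intercepts (cf. Eq. (11), wavelength at q' = 0)
u_z_est = -lambda_0*(B_s - B_s[0])/(4*np.pi)
print(f"rms displacement error: {np.std(u_z_est - (u_z - u_z[0]))*1e9:.2f} nm")

# Range from the slopes, Eq. (13): Lambda_M = A_s*Lambda_R/(2*pi), z = Lambda_M/2
z_est = A_s*Lambda_R/(4*np.pi)
print(f"rms range error       : {np.std(z_est - Lambda_M/2)*1e9:.1f} nm")
```

With the assumed noise level, the slope-based range estimate is roughly two orders of magnitude noisier than the intercept-based displacement estimate, mirroring the behaviour of the experimental data discussed below.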
The standard deviations of the residuals about the quadratic best fit curve, denoted here as ${\sigma _{{u_z}}}$, are reduced to 1.01 and 0.58 nm for locations 7 and 9, respectively. Fig. 10. (a) Displacement of target mirror calculated from ${B_s}$ for target location 9. (b) Enlarged central portion of (a). (c) As for (a), but target in location 10. As a further indication of the data quality, Fig. 10(b) shows an enlarged portion of the FSI-measured displacement near the time of maximum PZT displacement. Each point represents an independent measurement from a single-frequency scan. A small transient is visible after the PZT reverses its direction ($t$ in the range 5.1–5.2 ms), with a frequency of around 10 kHz and initial amplitude of around 2 nm. For datasets acquired on the other side of the zero-OPD surface, the phase change due to the PZT motion appears inverted, as shown in Fig. 10(c) for location 10. This provides a convenient way of distinguishing between positive and negative modulation frequencies in the current case, where the method described in [13] involving the use of neighboring ADL zero-OPD surfaces is not available due to the very high modulation frequencies. A more general way to distinguish between the positive and negative frequencies would be through in-phase quadrature (IQ) detection. The gradient term ${A_s}$, estimated by fitting Eq. (12) to the phase data, is the rate of change of measurement interferometer phase per fringe-order increment of the reference interferometer. It is therefore related, as seen in Eq. (5), to the ratio of the OPDs for the measurement and reference interferometers, ${{\Lambda}_M}$ and ${{\Lambda}_R}$, respectively, as follows: (13)$${A_s} = 2\pi {{\Lambda}_M}/{{\Lambda}_R}.$$ ${A_s}$ can therefore be seen as an alternative means of measuring the target range, assuming ${{\Lambda}_R}$ has been previously measured to sufficient accuracy, which is simpler and less prone to numerical errors than the conventional process of linearizing the intensity signal and then determining the Fourier-domain peak location. The TM range due to the motion of the PZT, measured with TM at locations 7 and 9, is shown in Figs. 11(a) and 11(b), respectively. The ${A_s}$ values have been converted to range through the scaling factor $C = 0.285875\; {\rm rad \; mm}^{-1}$, as determined through the calibration experiment described later in this section. The graphs replicate the displacement of TM but are inverted because a positive axial displacement corresponds to a decrease in target range. The level of noise is significantly higher than for the displacement-time plots: the standard deviation about the best quadratic fit to the PZT up ramp, ${\sigma _z}$, is 98.6 and 18.0 nm for positions 7 and 9, respectively. The values of ${\sigma _z}$ for all 12 locations are given in Table 1. Fig. 11. FSI-measured range from straight-line fits to rows of matrix ${\varphi _{\textit{sq}}}$, with target in location (a) 7 and (b) 9. (c) Velocity of TM calculated from the displacement data of Fig. 10(a). The range estimate plots in Figs. 11(a) and 11(b) contain a contribution arising from the target motion: an upwards shift as the PZT moves towards the interferometer and a downwards shift as it moves away. This can be interpreted as being due to the Doppler shift in frequency on reflection from a moving mirror, or it can be interpreted as the drift error, which is discussed further in the next section. 
However, the symmetrical nature of the plots about the point when the PZT reverses direction indicates that the error is so small as to be invisible, even with the lower noise level present in Fig. 11(b). Attempts to estimate target velocity from the Doppler shift, by subtracting a range value calculated from a laser down-scan from one made with an up-scan, will therefore be unsuccessful. A better way to estimate velocity is to make use of the previously calculated high-quality displacement data. Figure 11(c) shows the result of a simple centered finite difference operator, using the nearest-neighbor displacement values, on the data from Fig. 10(a). The standard deviation of the residuals about a best-fit straight line to the first 5 ms of the velocity data is ${11.7}\;\unicode{x00B5}{\rm m}\;{{\rm s}^{- 1}}$. Similar time histories of ${A_s}$ were also calculated for the other 10 TM locations. The time-averaged value, $\bar A$, for each location and the corresponding TM range, ${z_{\rm{TM}}}$, as measured by the Renishaw stabilized interferometer are listed in Table 1 and are plotted in Fig. 12. A best-fit line of the form (14)$$\begin{split}\bar A &= C{z_{\rm{TM}}} + {D_1}\quad (010\;\text{bit configuration locations}), \\ \bar A &= C{z_{\rm{TM}}} + {D_2}\quad (101\; \text{bit configuration locations})\end{split}$$ is shown as superimposed on the data. The three adjustable parameters in Eq. (14) are $C$ (which equals $4\pi /{{\Lambda}_R}$), ${D_1}$ (the frequency at location 1 when the Renishaw interferometer was zeroed), and ${D_2}$, which differs from ${D_1}$ due to the optical delay between the two bit configurations introduced by the ADL. In general, absolute distance measurement by an $N$-bit ADL requires $N + {1}$ unknown path lengths to be determined by a prior calibration. A process to achieve this with the Renishaw interferometer for the 3-bit ADL was described in [13]: the four unknowns were determined in a least-squares sense from the frequencies measured from all ${{2}^N}$ bit configurations, while the target was kept in a fixed location. In the current case, only a subset of calibration constants is required, since only two bit configurations were activated. The ${D_1}$ and ${D_2}$ values in Eq. (14) characterize the OPDs between object and reference waves for these two bit configurations with TM located at the origin of the Renishaw coordinate system. The values calculated by the least-squares analysis were $C = 0.285875\;{\rm rad}\;{{\rm mm}^{- 1}}$, ${D_1} = - 1.9044\;{\rm rad}$, and ${D_2} = - 86.9483$ rad. The $\bar A$ values in Figs. 12(a) and 12(c) for the 101 bit configuration have been shifted up by ${D_1} - {D_2} = 85.0439\;{\rm rad}$. The value of the reference interferometer OPD, ${{\Lambda}_R}$, calculated as $4\pi /C$, was 43.9576 mm, which may be compared to the nominal value of 44 mm, as stated in the manufacturer's datasheet. Finally, the standard deviation of the 12 residuals about the best-fit line was ${1.76} \times {{10}^{- 4}}\;{\rm rad}$, which equates to a rms range error of 614 nm. For comparison, the Renishaw interferometer has a claimed accuracy of $500\;{\rm nm}\; {\rm m}^{-1}$. Although it might be expected that compensation for dispersion in the air would be required due to the use of a tunable source, the accuracy achieved suggests that dispersion effects can be largely mitigated through calibration with a stabilized reference interferometer operating at a single wavelength. Fig. 12. 
(a) Measurement interferometer mean frequency, $\bar A$, from six locations at each of two ADL bit configurations compared to target mirror range measured by the Renishaw interferometer, ${z_{\rm{TM}}}$. Continuous line: best straight-line fit to data. (b), (c) Expanded portions of (a) for bit configurations 010 and 101, respectively. $\bar A$ values for the 101 bit configuration have been shifted up by 85.0439 rad. The sub-nanometer (nm) displacement resolution and sub-100-nm range resolution reported in the previous section, at 100 kHz sampling rates, are encouraging results. It is therefore helpful to compare the performance of this multi-functional system to existing approaches that have been optimized now over many years, but which measure range or displacement/velocity alone. Laser Doppler vibrometers (see, for example, [19] for a recent review article) measure velocity with a single-wavelength laser and therefore cannot measure target range. Typical LDV displacement resolution, though measured on rough surfaces rather than the mirror used here, is 1–10 nm. LDV resolution can therefore be regarded as being broadly comparable to the values presented above. However, the use of a fixed-wavelength source with LDV does bring the benefit that none of the bandwidth of the photodetectors and DAQ hardware is used up for the range estimation: all is available to the vibration measurement. A carrier is normally introduced into the interference signal by means of an acousto-optic modulator, allowing vibration frequencies up to the carrier frequency [typically a few tens of megahertz (MHz)] to be accommodated. In the FSI system, this limitation is addressed to a large degree by the introduction of an ADL, which brings down the bandwidth required for the range measurement by a factor of ${{2}^N}$. Although the current displacement sampling rate is 100 kHz, i.e., one velocity update per frequency scan, this could be pushed significantly higher by using the other phase values in the ${\varphi _{\textit{sq}}}$ matrix. Successive elements in a given row of ${\varphi _{\textit{sq}}}$ measure target displacement at successive time sample points, at wavenumber increments of $2\pi /{{\Lambda}_R}$ and hence with different (but nevertheless well-defined) ${u_z}/{\varphi _{\textit{sq}}}$ scaling factors. As demonstrated in the previous section, a displacement resolution of under 2 nm is achievable with single-wavelength phase values. The ultimate limit to displacement sampling rate is the inverse of the laser fly-back time (i.e., the duration of the frequency down-scan, when no information can be recorded). For example, a ${100}{,}{000}\;{\rm scan}\;{{\rm s}^{- 1}}$ laser with an up-ramp duration of 9 µs and down-ramp duration of 1 µs, would allow displacement to be sampled at a uniform rate of 1 MHz and at intra-scan sampling rates limited only by the DAQ. Turning to range measurement, nearly all previously published work has been either at much lower tuning rates or else with very narrow tuning ranges, which severely compromise range resolution. The combination of sub-100-nm range resolution over a 300-mm-deep volume at 100,000 range values ${{\rm s}^{- 1}}$ is enabled at sub-gigahertz (GHz) bandwidth requirements through the use of an ADL. Current commercial FSI systems provide scan rates of just a few kilohertz (kHz). An immediate benefit of the 100 kHz scan rate is the reduced susceptibility to range measurement errors from moving targets. 
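The shared-slope least-squares fit of Eq. (14) might be implemented as in the sketch below; the target ranges, noise level, and "true" constants are invented placeholders rather than the Table 1 measurements, and serve only to show the structure of the fit.

```python
import numpy as np

# Illustrative calibration data (NOT the measured values from Table 1):
# six target locations per bit configuration, Renishaw range z_TM in mm,
# mean measurement-interferometer frequency A_bar in rad
z_010 = np.array([  5.0,  60.0, 115.0, 125.0, 180.0, 235.0])   # mm, assumed
z_101 = np.array([300.0, 355.0, 410.0, 420.0, 475.0, 530.0])   # mm, assumed
C_true, D1_true, D2_true = 0.286, -1.9, -86.9                   # rad/mm, rad
rng = np.random.default_rng(3)
A_010 = C_true*z_010 + D1_true + rng.normal(0, 2e-4, z_010.size)
A_101 = C_true*z_101 + D2_true + rng.normal(0, 2e-4, z_101.size)

# Least-squares fit of Eq. (14): one shared slope C, two offsets D1, D2
z_all = np.concatenate([z_010, z_101])
A_all = np.concatenate([A_010, A_101])
is_101 = np.concatenate([np.zeros(z_010.size), np.ones(z_101.size)])
design = np.column_stack([z_all, 1 - is_101, is_101])   # columns: C, D1, D2
(C, D1, D2), *_ = np.linalg.lstsq(design, A_all, rcond=None)

Lambda_R = 4*np.pi/C                                     # reference OPD, mm
print(f"C  = {C:.6f} rad/mm  ->  Lambda_R = {Lambda_R:.4f} mm")
print(f"D1 = {D1:.4f} rad, D2 = {D2:.4f} rad")
residuals = A_all - design @ np.array([C, D1, D2])
print(f"rms residual = {residuals.std()*1e3:.3f} mrad")
```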
Target motion of $\delta z$ during a frequency scan range of ${\Delta}\omega$ about a center frequency ${\omega _c}$ results in a range error of $({{\omega _c}/{\Delta}\omega})\delta z$. Previous solutions to this problem, as outlined in Section 1, have included fairly complex approaches such as the use of an LDV in parallel with the FSI system, synchronized up/down scanning lasers, four-wave mixing, etc. In the current case, the relevant target motion is the ${\sim}{0.1}\;{\rm nm}$ that occurs during the 1.7 µs, for which the time window is non-zero, as shown in Fig. 5. The tuning range of the laser within this window can be calculated from the $q$ values ${q_1} = - 1000$ and ${q_2} = 100$, and the fact that each $q$ increment corresponds to a wavenumber change of $2\pi /{{\Lambda}_R}$, to be 1272 nm to 1314 nm, i.e., 42 nm. The range error amplification factor $({{\omega _c}/{\Delta}\omega})$ is 31. The range error produced by this artefact is therefore ${\sim}{3}\;{\rm nm}$, which is insignificant compared to the random range errors of 20–100 nm. It was for this reason that the data analysis in the previous section needed only unidirectional scan data. However, for target velocities an order of magnitude or more higher, or laser scan rates an order of magnitude lower, motion artefacts would start to become significant. The strong variation of random range errors with distance from the nearest zero-OPD surface, visible in Figs. 11(a) and 11(b) and the ${\sigma _z}$ data from Table 1, deserves comment. The locations with 50, 150, and 250 MHz maximum modulation frequencies had average ${\sigma _z}$ values of 21, 56, and 98 nm, respectively. This variation cannot be explained by the widths of the frequency-domain windows, which were broadly comparable at 78, 137, and 57 MHz, respectively. A more likely explanation is frequency jitter in the scans. If the instantaneous angular frequency differs from the expected value by $\varepsilon (t)$, then a phase error $\varepsilon (t){{\Lambda}_M}/c$, i.e., proportional to distance from the relevant zero-OPD surface, will result. The analysis procedure described in the previous section will remove low-frequency errors, for example scan non-linearities, but not the high-frequency ones. The use of an ADL can therefore be seen to have the additional benefit, beyond that of reducing the maximum modulation frequency, of allowing control of the maximum range error through the measurement volume. Selection of the ADL design parameters follows from the FSI measurement volume, defined by the minimum and maximum ranges, ${z_{{\min}}}$ and ${z_{{\max}}}$, respectively. The sampling rate ${f_D}$ of the chosen DAQ will determine the ${d_0}$ parameter from its associated frequency ${f_0} = \frac{1}{{2\pi}}\frac{{{\Delta}\omega}}{{{\Delta}t}}\frac{{{d_0}}}{c} \approx {d_0}{f_s}\frac{{{\Delta}\lambda}}{{\lambda _c^2}}$. For IQ detection, ${f_D} = {f_0}$, while in the case of a single ABPD ${f_D} = 2{f_0}$. The number of switches required, $N$, can then be determined by Eq. (8), i.e., ${2^N}{d_0} = 2({z_{{\max}}} - {z_{{\min}}}).$ In order to make full use of the DAQ's bandwidth, the OPD of the reference interferometer, ${{\Lambda}_R}$, should ideally match or be comparable to ${d_0}/2$. One final point worth making is that the dynamic range of the $z$ measurements (i.e., maximum range/range resolution), which is often considered to be ${\sim}{{10}^6}$ for FSI, can be extended in both directions by the analysis presented here. 
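The design procedure outlined above can be condensed into a short sketch; the DAQ rate and measurement volume in the example are assumed values.

```python
import numpy as np

def design_adl(z_min, z_max, f_daq, lambda_c, delta_lambda, f_s, iq_detection=False):
    """Sketch of the ADL design procedure described above: pick d0 from the
    DAQ sampling rate, then the number of switches N to cover the volume."""
    # Modulation frequency associated with an OPD of d0 (from Eq. (7))
    f0 = f_daq if iq_detection else f_daq / 2.0
    d0 = f0 * lambda_c**2 / (f_s * delta_lambda)
    # 2**N * d0 must cover twice the depth of the measurement volume
    N = int(np.ceil(np.log2(2*(z_max - z_min) / d0))) if z_max > z_min else 0
    return d0, N

# Example with assumed values: 1 GS/s DAQ, single detector, 100 kHz scans
d0, N = design_adl(z_min=0.5, z_max=3.0, f_daq=1e9,
                   lambda_c=1300e-9, delta_lambda=100e-9, f_s=1e5)
print(f"d0 = {d0*1e3:.0f} mm, N = {N} switches, "
      f"Lambda_R ideally ~ {d0*1e3/2:.0f} mm")
```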
Sub-nm range resolution could potentially be achieved from phase, i.e., either the ${\varphi _{\textit{sq}}}$ values or the ${B_s}$ values derived from them by Eq. (12). These were unwrapped temporally in the previous section to measure displacement, but could be unwrapped instead using the ${A_s}$ values to measure range, provided the phase shift on reflection from the target is known. A similar approach is used routinely in coherence scanning interferometry and has also been recently proposed in FSI [20]. Identification of the correct fringe order requires range noise below $\lambda /4 \approx {330}\;{\rm nm}$, a condition which is satisfied in this case as the range noise standard deviation ${\sigma _z}$ is below 100 nm. At the other end, the upper limit can be increased to arbitrarily large values, as the maximum range is doubled for each additional switch in the ADL. Coiled optical fibers could provide very long ADL delays in a suitably compact form. It has been demonstrated how the combination of ADLs and a fast scanning VCSEL light source allows absolute distance, displacement, and velocity of a target to be measured over a 0.3 m range, at rates of ${100}{,}{000}\;{{\rm s}^{- 1}}$, while maintaining signal modulation frequencies at sub-GHz levels. As a result, the use of low-cost DAQ hardware, and potentially real-time data processing on GPUs or FPGAs, become feasible, thus removing two significant barriers to future high-speed FSI systems. The use of phase analysis algorithms together with a separate reference interferometer have allowed a displacement resolution of under 1 nm and range resolution under 100 nm to be achieved. Engineering and Physical Sciences Research Council (Future Advanced Metrology Hub, EP/P006930/1). The authors wish to thank Sebastian Schaefer and colleagues at Thorlabs for the provision of a customized loan laser. Software developments by Russell Coggrave are also gratefully acknowledged. All three authors are co-authors of a current patent application in this area. 1. M.-C. Amann, T. Bosch, M. Lescure, R. Myllylä, and M. Rioux, "Laser ranging: a critical review of usual techniques for distance measurement," Opt. Eng. 40, 10–19 (2001). [CrossRef] 2. R. Schödel, ed., Modern Interferometry for Length Metrology (IOP Publishing, 2018). 3. J. A. Stone, A. Stejskal, and L. Howard, "Absolute interferometry with a 670-nm external cavity diode laser," Appl. Opt. 38, 5981–5994 (1999). [CrossRef] 4. P. A. Coe, D. F. Howell, and R. B. Nickerson, "Frequency scanning interferometry in ATLAS: remote, multiple, simultaneous and precise distance measurements in a hostile environment," Meas. Sci. Technol. 15, 2175–2187 (2004). [CrossRef] 5. J. Dale, B. Hughes, A. J. Lancaster, A. J. Lewis, A. J. H. Reichold, and M. S. Warden, "Multi-channel absolute distance measurement system with sub PPM-accuracy and 20 m range using frequency scanning interferometry and gas absorption cells," Opt. Express 22, 24869–24893 (2014). [CrossRef] 6. R. Schneider, P. Thürmel, and M. Stockmann, "Distance measurement of moving objects by frequency modulated laser radar," Opt. Eng. 40, 33–37 (2001). [CrossRef] 7. S. Kakuma and Y. Katase, "Frequency scanning interferometry immune to length drift using a pair of vertical-cavity surface-emitting laser diodes," Opt. Rev. 19, 376–380 (2012). [CrossRef] 8. J. J. Martinez, M. A. Campbell, M. S. Warden, E. B. Hughes, N. J. Copner, and A. J. Lewis, "Dual-sweep frequency scanning interferometry using four wave mixing," IEEE Photon. Technol. Lett. 
27, 733–736 (2015). [CrossRef] 9. L. Tao, Z. Liu, W. Zhang, and Y. Zhou, "Frequency-scanning interferometry for dynamic absolute distance measurement using Kalman filter," Opt. Lett. 39, 6997–7000 (2014). [CrossRef] 10. C. Lu, G. Liu, B. Liu, F. Chen, and Y. Gan, "Absolute distance measurement system with micron-grade measurement uncertainty and 24 m range using frequency scanning interferometry with compensation of environmental vibration," Opt. Express 24, 30215–30224 (2016). [CrossRef] 11. T. Klein and R. Huber, "High-speed OCT light sources and systems," Biomed. Opt. Express 8, 828–859 (2017). [CrossRef] 12. Z. Wang, B. Potsaid, L. Chen, C. Doerr, H.-C. Lee, T. Nielson, V. Jayaraman, A. E. Cable, E. Swanson, and J. G. Fujimoto, "Cubic meter volume optical coherence tomography," Optica 3, 1496–1503 (2016). [CrossRef] 13. C. A. Pallikarakis, J. M. Huntley, and P. D. Ruiz, "Adaptive delay lines for absolute distance measurements in high-speed long-range frequency scanning interferometry," OSA Continuum (under review). 14. A. Reichold, "Absolute distance measurement using frequency scanning interferometry," in Modern Interferometry for Length Metrology, R. Schödel, ed. (IOP, 2018), pp. 1–54. 15. A. Martin, D. Dodane, L. Leviandier, D. Dolfi, A. Naughton, P. O'Brien, T. Spuessens, R. Baets, G. Lepage, P. Verheyen, P. De Heyn, P. Absil, P. Feneyrou, and J. Bourderionnet, "Photonic integrated circuit-based FMCW coherent LiDAR," J. Lightwave Technol. 36, 4640–4645 (2018). [CrossRef] 16. N. A. Ochoa and J. M. Huntley, "Convenient method for calibrating non-linear phase modulators for use in phase shifting interferometry," Opt. Eng. 37, 2501–2505 (1998). [CrossRef] 17. C. A. Pallikarakis, J. M. Huntley, and P. D. Ruiz, "Datasets for high-speed range and velocity measurement using frequency scanning interferometry with adaptive delay lines," 2020, https://doi.org/10.17028/rd.lboro.13050800. 18. M. Takeda, H. Ina, and S. Kobayashi, "Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry," J. Opt. Soc. Am. 72, 156–160 (1982). [CrossRef] 19. S. J. Rothberg, M. S. Allen, P. Castellini, D. Di Maio, J. J. J. Dirckx, D. J. Ewins, B. J. Halkon, P. Muyshondt, N. Paone, T. Ryan, H. Steger, E. P. Tomasini, S. Vanlanduit, and J. F. Vignola, "An international review of laser Doppler vibrometry: making light work of vibration measurement," Opt. Laser Eng. 99, 11–22 (2017). [CrossRef] 20. S. Kakuma, "Frequency scanning interferometry with nanometer precision using a vertical-cavity surface-emitting laser diode under scanning speed control," Opt. Rev. 22, 869–874 (2015). [CrossRef]