I will teach a lesson on image file formats in a computer science high school class.
Regarding JPEG, I will say that it achieves lossy compression and that it is well suited for photographs, but that it should not be used for geometric drawings, where it can generate artifacts; I will show some examples.
However, is it possible to explain why JPEG works this way, without venturing into domain change (space to frequency conversion), Discrete Cosine Transformation, and so on?
I would start with the lossless algorithms; run-length encoding is a good place to start. Then you can go on to discuss human perception; I like to use the example of hiding in plain sight.
Ask which is harder to spot someone standing by a wall, or someone standing in the middle of a field. Use this as the basis for discussing lossy compression.
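To make the run-length starting point concrete, a minimal sketch in Python (a toy illustration, not part of any particular lesson plan) shows why flat cartoon-like rows compress well and busy photographic rows do not:

```python
def rle_encode(pixels):
    """Run-length encode a sequence of pixel values as [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

flat = [255] * 20 + [0] * 20    # a white stripe then a black stripe
noisy = [255, 0] * 20           # alternating pixels: no long runs to exploit

print(len(rle_encode(flat)))    # 2 runs  -> compresses very well
print(len(rle_encode(noisy)))   # 40 runs -> the "compressed" form is bigger than the input
```

The flat row collapses to a couple of runs, while the noisy row gains nothing, which motivates moving from lossless tricks to perceptual, lossy ones.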
Explore images that look the same, but are different (when examined). One that I remember is an image with a white background and a series of thick vertical black bars. To the right of each bar is a thin line with a linear gradient from dark to light. All black bars are identical and all thin lines are identical, but the lines get closer to the bars as you move to the right (so there are multiple copies of the bar and line, with each copy to the right having the bar and line closer together). This is perceived as the thin lines getting shorter to the right, an example of the hiding-in-plain-sight illusion discussed above.
This relates to e-safety, digital literacy, ethics, etc.
As well as the ones you said: cartoons (traditional, such as The Simpsons) and line drawings. There are also certain uses of photographs that should not be lossily encoded. These include medical X-rays (especially of fractures) and X-ray images of nuclear power rods or aircraft parts taken to check build quality (looking for fractures). I mention these three because I know lossy encoding has been applied to such images, at great cost (millions of pounds) and safety risk.
Let me point out that JPEG is not a file format but a set of algorithms, some lossy and some lossless. However, just after its release someone made a library that used some of those algorithms (discrete cosine transform, Huffman coding, etc.) to build a lossy compression scheme, and this became a de facto standard.
Let me start with a quick overview of how JPEG (as usually applied) compresses color images.
Colorspace and downscaling. Raw images are usually encoded in the RGB color space. The human eye is more sensitive to the luminosity of images than to their chromatic aspects — as an extreme example, we can understand black and white photos quite well. Therefore JPEG images store more information about luminosity and less about colors. This is implemented in two steps. First, the RGB color space is transformed (in a linear fashion) into a different color space, YCbCr (similar to the TV color space YPbPr), in which Y is luminosity and Cb/Cr encode how blue (Cb) and how red (Cr) the pixels are relative to their luminosity. The Cb and Cr planes are then downscaled or subsampled, that is, their resolution is reduced, usually by a factor of 2 in the horizontal direction (the eye is more sensitive to the vertical direction). From this point on, the three planes are compressed individually.
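As a rough illustration of this step, here is a sketch assuming the common BT.601-style conversion coefficients (the exact numbers a given encoder uses may differ) and 2:1 horizontal chroma subsampling:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an (H, W, 3) RGB image (values 0..255) into Y, Cb, Cr planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b              # luminosity
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b    # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b    # red-difference chroma
    return y, cb, cr

def subsample_horizontally(plane):
    """Halve the horizontal resolution by averaging neighboring pixel pairs."""
    return 0.5 * (plane[:, 0::2] + plane[:, 1::2])

img = np.random.rand(8, 8, 3) * 255                      # stand-in for a real photograph
y, cb, cr = rgb_to_ycbcr(img)
cb_small, cr_small = subsample_horizontally(cb), subsample_horizontally(cr)
print(y.shape, cb_small.shape)                           # (8, 8) (8, 4): full-resolution Y, half-resolution chroma
```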
Division into blocks. Each of the color planes is now divided into 8×8 non-overlapping blocks (in contrast, in MP3 the sections are partially overlapping), which will be compressed more-or-less individually. This division is the reason for the so-called blocking effects, since the superimposed grid cuts through features of the image.
Spectral decomposition and quantization. Each block is then converted, via the discrete cosine transform, into 64 frequency components: one DC component, which is essentially the average brightness of the block, and 63 AC components describing progressively finer detail. These components are quantized, that is, divided by frequency-dependent step sizes and rounded; this is where information is actually lost, and most of what is discarded is high-frequency detail the eye barely notices.
Entropy encoding. It remains to encode the quantized components into bits, which is done using Huffman coding, a standard encoding scheme. Since the DC components of all blocks are a downscaled version of the entire image, we expect them to vary continuously. Therefore instead of encoding their actual values, we encode them differentially, that is, we encode the difference from the preceding DC value. The AC components don't have this behavior, and so are encoded absolutely.
Why does compressing geometric drawings result in artifacts? Geometric drawings are very different from natural images. They don't vary continuously. Spectral compression results in "smearing" effects, since we are only retaining the lower frequencies. The best illustration of this is the Gibbs phenomenon, which is the continuous analog of the same issue. When trying to represent a step function using Fourier series, we need to use all frequencies. If we suppress the high frequencies, we get something really different. Lines in a geometric drawing are the discrete analog of step functions.
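A quick way to show this smearing is to reconstruct a step from only its lowest frequencies; a minimal numpy sketch of the idea (the cut-off of 10 components is arbitrary):

```python
import numpy as np

n = 256
signal = np.zeros(n)
signal[n // 2:] = 1.0                     # an ideal edge: the discrete analogue of a step function

spectrum = np.fft.rfft(signal)
spectrum[10:] = 0.0                       # keep only the lowest few frequencies
smeared = np.fft.irfft(spectrum, n)

print(signal[n // 2 - 3:n // 2 + 3])                     # crisp 0 -> 1 transition
print(np.round(smeared[n // 2 - 3:n // 2 + 3], 2))       # blurred, with overshoot near the edge
```

The overshoot next to the edge in the reconstruction is exactly the ringing students later see around sharp lines in heavily compressed JPEGs.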
Another problem is the blocking effects. Imagine a diagonal line. JPEG partitions the space around these lines into 8×8 blocks, which are compressed individually. No attempt is made for the resulting lines to match exactly, and indeed they probably don't. This is very apparent because lines are very clean and exact, something that cannot be said about real-life images.
Using software like Matlab, you can show the effect of quantization on a real-world image, and then compare it to the effect of quantization on clip art. You can vary the quantization steps to show the effect of downplaying the higher frequencies (and also what happens when trying to suppress the lower frequencies).
You can demonstrate the effect of blocking by varying the angle of a line which is compressed block-by-block, or fixing a line and varying the block size. You can use a low quality setting to emphasize the effect, and include the block boundaries to highlight its origins.
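If Matlab is not to hand, the same block-by-block demonstration can be scripted with a hand-rolled 8×8 DCT; this sketch uses a single uniform quantization step (chosen coarsely here), whereas real encoders use frequency-dependent quantization tables:

```python
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal 8-point DCT-II basis: row u, column x.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def jpeg_like_block(block, step):
    """Forward DCT -> uniform quantization -> inverse DCT for one 8x8 block."""
    coeffs = C @ block @ C.T
    quantized = np.round(coeffs / step) * step
    return C.T @ quantized @ C

block = np.full((N, N), 255.0)           # white background...
for i in range(N):
    block[i, i] = 0.0                    # ...with part of a clean diagonal line through it

print(np.round(jpeg_like_block(block, step=80.0)).astype(int))   # ringing spreads around the line
```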
However, is it possible to explain why JPEG works this way, without venturing into domain change, DCT and so on?
I'd be inclined to answer: no.
Firstly, I don't think it's necessary. I think the intuition behind the concept of transform (at least as in Fourier or DCT) is within the grasp of most high school kids that know some trigonometry: you can probably get away with drawing a simple function, decomposing it into sine waves and writing down their phase and amplitude while waving your hands a lot.
You'd then proceed to reconstruct the original function by means of summation and persuade the students that the representation in terms of 4 or 5 phase/amplitude coefficients is "more compact" (finite, even) than its representation in terms of infinitely many pairs in $\mathbb{R} \times \mathbb{R}$.
Secondly, I'd argue it's also not really possible, because the DCT is very much one of - if not the - central working principles behind JPEG, differentiating it from classical lossy compression techniques.
Operation in the frequency domain is ultimately the reason why *PEG performs subjectively better than, say, resampling and requantizing at 240x180@16 colors, which is essentially a much naiver lossy compression method that operates purely in the space/time domain.
Note, finally, how the excellent Smith handbook - along with probably several other resources - explicitly calls JPEG "transform compression".
Random matrix ensembles are introduced that respect the local tensor structure of Hamiltonians describing a chain of $n$ distinguishable spin-half particles with nearest-neighbour interactions. We prove a central limit theorem for the density of states when $n \rightarrow\infty$, giving explicit bounds on the rate of approach to the limit. Universality within a class of probability measures and the extension to more general interaction geometries are established. The level spacing distributions of the Gaussian Orthogonal, Unitary and Symplectic Ensembles are observed numerically for the energy levels in these ensembles.
The radicand (the expression inside a radical sign) of a square root cannot be negative, as the root of a negative number is imaginary. This means that: (1) The radicand of the first radical, which is $x-2$, must be greater than or equal to 0; thus, $x$ can be any real number greater than or equal to $2$. (2) The radicand of the second radical, which is $x+3$, must be greater than or equal to 0; thus, $x$ can be any real number greater than or equal to $-3$. Based on (1) and (2) above, the restrictions on the value of $x$ are: (1) $x \ge 2$; and (2) $x \ge -3$. Both of the restrictions above must be satisfied. Thus, the value of $x$ must be greater than or equal to $2$. Therefore, the domain of the given function is $[2, +\infty)$.
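A quick numerical check of that conclusion, assuming the function in question is $f(x)=\sqrt{x-2}+\sqrt{x+3}$ (which is what the two radicands suggest):

```python
import numpy as np

x = np.linspace(-5, 5, 11)
defined = (x - 2 >= 0) & (x + 3 >= 0)    # both radicands must be non-negative
print(x[defined])                        # [2. 3. 4. 5.] -- only x >= 2 survives, matching [2, +inf)
```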
Abstract: This paper addresses the problem of quickest detection of a change in the maximal coherence between columns of an $n\times p$ random matrix based on a sequence of matrix observations having a single unknown change point. The random matrix is assumed to have identically distributed rows and the maximal coherence is defined as the largest of the $p \choose 2$ correlation coefficients associated with any row. Likewise, the $k$ nearest neighbor (kNN) coherence is defined as the $k$-th largest of these correlation coefficients. The forms of the pre- and post-change distributions of the observed matrices are assumed to belong to the family of elliptically contoured densities with sparse dispersion matrices but are otherwise unknown. A non-parametric stopping rule is proposed that is based on the maximal k-nearest neighbor sample coherence between columns of each observed random matrix. This is a summary statistic that is related to a test of the existence of a hub vertex in a sample correlation graph having degree at least $k$. Performance bounds on the delay and false alarm performance of the proposed stopping rule are obtained in the purely high dimensional regime where $p\rightarrow \infty$ and $n$ is fixed. When the pre-change dispersion matrix is diagonal it is shown that, among all functions of the proposed summary statistic, the proposed stopping rule is asymptotically optimal under a minimax quickest change detection (QCD) model as the stopping threshold approaches infinity. The theory developed also applies to sequential hypothesis testing and fixed sample size tests.
I'm working on [yet another] Bayesian CFA model. I've grown to really like using Stan for latent variable models like CFA, SEM, mixtures, etc.
The problem is that the docs are rather sparse about how to do this, or even where to begin.
This post won't get you to the point of writing the Stan code, but I can at least give the basic gist while I wait for Stan to finish.
Most CFA packages operate on covariance matrices. The "aim" is to specify a set of linear equations such that the implied covariance matrix adequately matches the observed covariance matrix.
This has a host of "problems" that people have been solving for years.
Assume normality, then tweak every fit statistic to be robust against the inherent non-normality.
Although you can still take a covariance-comparison approach in Bayes, there isn't really a need to if you have access to raw data.
Let's say you have one factor and ten indicators.
For the sake of simplicity, we'll include a normal assumption on the indicators.
Also for the sake of simplicity, we will standardize the data and assume a standardized latent variable (mean of zero, sd of 1).
The model is then
$$y_{ij} \sim \mathrm{Normal}(\lambda_j \theta_i,\ \sigma_j), \qquad \theta_i \sim \mathrm{Normal}(0, 1),$$
where $\theta_i$ is the latent score for person $i$, $y_{ij}$ is the observation of item $j$ for person $i$, $\lambda_j$ is a factor loading, and $\sigma_j$ is the residual standard deviation.
It's really about that simple. Note that this method gives you the posterior distributions (and expected values) for factor scores for free.
That also means that in structural models, or moderated CFAs, you can very easily include interactions, as $\theta_i\times x_i$ or $\theta_i\times \gamma_i$ (where $\gamma_i$ is some other latent variable).
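As a concrete illustration of that generative story (a made-up simulation in numpy, not the Stan program itself), one factor and ten indicators looks like this as data generation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_items = 500, 10

lam = rng.uniform(0.4, 0.9, size=n_items)     # factor loadings, one per item (made-up values)
sigma = np.sqrt(1.0 - lam**2)                 # residual SDs chosen so each item has unit variance
theta = rng.normal(0.0, 1.0, size=n_people)   # standardized latent scores

# y[i, j] ~ Normal(lam[j] * theta[i], sigma[j])
y = lam * theta[:, None] + sigma * rng.normal(size=(n_people, n_items))

print(round(np.corrcoef(theta, y[:, 0])[0, 1], 2))   # roughly lam[0]
```

Fitting the Stan model is just this story run in reverse: given $y$, recover the $\lambda_j$, $\sigma_j$, and the $\theta_i$.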
The nice thing about this is that it's flexible. Want differing assumptions about the item likelihoods? Go for it; don't use a normal likelihood for each item. Use ordered logit, ordered probit, skew-normal, skew-T, T, or whatever the heart desires.
Want to model residual covariance? Go for it. Want to allow everything to possibly residually covary using a soft constraint? Sure! Use a multivariate normal for all manifest variable likelihoods, and put a strong LKJ prior on $\Sigma_r$ to have a soft-constraint.
Do you want non-normal latent score distributions? Sure! Don't assume latent scores are distributed normally; just use a different prior for $\theta_i$ and estimate any other parameters you need to.
This was a really simple model, but it's very easy to code into Stan. Extensions are just about as easy.
If you want multiple factors to covary, use a MVN prior with an unknown correlation matrix.
If you want multiple factors to potentially load onto multiple indicators beyond those hypothesized, just use priors to place soft-constraints. This is equivalent to exploratory CFA/SEM modeling. You can have all factors load onto all items, just with non-hypothesized paths having peaked zero-priors for model identification.
Of course, the remaining problem is — How do you assess model fit?
Well, two ways come to mind.
First, you could actually estimate the observed covariance matrix for all observed variables and the model-implied covariance matrix within the same model, then compute the standard fit indices based on those matrices.
This would provide not only the fit indices of interest, but uncertainty about those fit indices.
This sounds like a terrible pain though.
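Still, for a sense of what that first approach boils down to at a single draw of the parameters, here is a minimal numpy sketch (my own rough illustration with made-up values; the official fit indices have more careful definitions):

```python
import numpy as np

def implied_cov(lam, sigma):
    """Model-implied covariance of a one-factor model: lambda lambda' + diag(sigma^2)."""
    return np.outer(lam, lam) + np.diag(sigma**2)

def srmr(observed, implied):
    """Root mean squared residual over the lower triangle of two correlation-scale matrices."""
    resid = observed - implied
    lower = resid[np.tril_indices_from(resid)]
    return float(np.sqrt(np.mean(lower**2)))

lam = np.full(10, 0.7)                            # one posterior draw of the loadings (made up here)
sigma = np.sqrt(1 - lam**2)
S = implied_cov(lam, sigma) + 0.01 * np.eye(10)   # stand-in for the observed sample covariance
print(srmr(S, implied_cov(lam, sigma)))
```

Repeating this for every posterior draw is what turns a single SRMR number into a whole distribution of them, which is also where the pain comes in.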
Second, you can ignore model fit indices (e.g., RMSEA, SRMR, CFI) and just focus on predictive adequacy. This comes in multiple flavors.
You can obtain the log-likelihood of each observation within each manifest variable, and examine for which people the model is inadequate for which items using point-wise LOOIC.
This is a killer diagnostic tool.
You can also obtain the log-likelihood of each observation across all items jointly, and compare models' person-wise LOOIC values. This would give you a metric akin to the AIC, but in the Bayesian world. It's essentially the approximate joint leave-one-out error of the model. The best model is the one with the lowest LOOIC.
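Mechanically, all of these LOOIC-flavored checks start from the same object: a draws-by-observations matrix of pointwise log-likelihoods. A hedged sketch of assembling it outside of Stan, assuming the posterior draws for the loadings, residual SDs, and latent scores are already available as arrays:

```python
import numpy as np
from scipy.stats import norm

def pointwise_loglik(y, lam_draws, sigma_draws, theta_draws):
    """Build an (n_draws, n_people) matrix of per-person log-likelihoods summed over items."""
    out = np.empty((lam_draws.shape[0], y.shape[0]))
    for d in range(lam_draws.shape[0]):
        mu = theta_draws[d][:, None] * lam_draws[d]              # (n_people, n_items) means
        out[d] = norm.logpdf(y, loc=mu, scale=sigma_draws[d]).sum(axis=1)
    return out

# log_lik = pointwise_loglik(y, lam_draws, sigma_draws, theta_draws)
# This matrix is the input that loo/PSIS-style tools work from.
```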
Another nice thing about the elpd (which is computed in the LOOIC), is that when used in model comparison, it's akin to assessing the posterior predictive performance in a similar manner to an "informed" Bayes factor.
This isn't the time to really get into what this means, but suffice it to say that IF you computed the posteriors of a model, then plugged those in as priors for a second equally-sized sample and computed the marginal likelihood, it would approximately equal what the elpd within the LOOIC attempts to estimate (I will take another post to explain why that is, but you can test that yourself if you want).
Show that, with the array representation for storing an $n$-element heap, the leaves are the nodes indexed by $\lfloor n/2 \rfloor + 1,\lfloor n/2 \rfloor + 2, \ldots, n$.
Let's take the left child of the node indexed by $\lfloor n/2 \rfloor + 1$. Its index is $2(\lfloor n/2 \rfloor + 1) = 2\lfloor n/2 \rfloor + 2 > n$.
Since the index of the left child is larger than the number of elements in the heap, the node doesn't have children and thus is a leaf. The same goes for all nodes with larger indices.
Note that if we take element indexed by $\lfloor n/2 \rfloor$, it will not be a leaf. In case of even number of nodes, it will have a left child with index $n$ and in the case of odd number of nodes, it will have a left child with index $n-1$ and a right child with index $n$.
This makes the number of leaves in a heap of size $n$ equal to $\lceil n/2 \rceil$.
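A quick brute-force check of the claim, using 1-based heap indexing where node $i$ has children $2i$ and $2i+1$:

```python
def leaves(n):
    """Return the 1-based indices of the nodes with no children in an n-element array heap."""
    return [i for i in range(1, n + 1) if 2 * i > n]   # the left child 2i would fall outside the heap

for n in range(1, 12):
    assert leaves(n) == list(range(n // 2 + 1, n + 1))  # exactly floor(n/2)+1, ..., n
    assert len(leaves(n)) == (n + 1) // 2               # i.e. ceil(n/2) leaves
print("claim holds for n = 1..11")
```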
In the world of model-based inference, unobserved parameters give rise to observed data. There are a number of ways of estimating these parameters; among the most widely used is the Expectation Maximization, or EM algorithm.
But what if instead of one distribution, $X$ is drawn from one of two normal distributions, based on some unobserved indicator $Z$. We'll (arbitrarily) let $Z_n=1$ if the $n$-th sample came from one of the distributions, and $Z_n=0$ if it came from the other. Let $\theta_1$, and $\theta_0$ be the parameters of the respective distributions, and let $\pi$ be $p(Z_n=1)$.
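Concretely, that generative story looks like this (a minimal numpy sketch with made-up parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
pi = 0.3                                   # p(Z_n = 1)
mu1, sd1 = 2.0, 0.5                        # parameters theta_1 of distribution 1
mu0, sd0 = -1.0, 1.0                       # parameters theta_0 of distribution 0

Z = rng.random(N) < pi                     # the unobserved labels
X = np.where(Z, rng.normal(mu1, sd1, N), rng.normal(mu0, sd0, N))   # the observed data
```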
If we happened to observe $Z$ along with $X$, then it would be straightforward to write down the likelihood of $X$ and $Z$ given $\theta_0$, $\theta_1$ and $\pi$.
To find the maximum likelihood estimate of $\theta_1$, we take the derivative of the log-likelihood, set it to zero, and solve for $\theta_1$ (technically, to be sure you're at a maximum, you also have to ensure that the second derivative is negative, but we'll ignore that step here).
Unfortunately, there are $2^N$ possible configurations of $Z$ to sum over, which is like, a lot. To make things worse, we can't even consider the log of the marginal likelihood because of the summation over $Z$, so we'd definitely run into underflow issues.
Instead, the E-M algorithm works with the expected values of the $Z_n$ under the current parameters (the responsibilities). These take a value between 0 and 1, reflecting the probability that $x_n$ is from distribution 1.
We can then find the maximum likelihood estimates for $\theta_0$, $\theta_1$, and $\pi$, then recompute the expectation of $Z$, rinse and repeat until convergence. That the E-M algorithm monotonically increases the likelihood with successive iterations is guaranteed by something called Jensen's inequality, which is beyond the scope of this post.
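Putting the whole loop together, a bare-bones sketch of the two-component case (a real implementation would also track the log-likelihood and guard against degenerate variances):

```python
import numpy as np
from scipy.stats import norm

def em_two_gaussians(X, iters=100):
    """Bare-bones EM for a two-component Gaussian mixture."""
    mu1, sd1, mu0, sd0, pi = X.max(), X.std(), X.min(), X.std(), 0.5   # crude starting values
    for _ in range(iters):
        # E-step: responsibilities, i.e. the expected value of Z_n under the current parameters.
        p1 = pi * norm.pdf(X, mu1, sd1)
        p0 = (1 - pi) * norm.pdf(X, mu0, sd0)
        gamma = p1 / (p1 + p0)
        # M-step: weighted maximum-likelihood updates.
        pi = gamma.mean()
        mu1 = np.average(X, weights=gamma)
        sd1 = np.sqrt(np.average((X - mu1) ** 2, weights=gamma))
        mu0 = np.average(X, weights=1 - gamma)
        sd0 = np.sqrt(np.average((X - mu0) ** 2, weights=1 - gamma))
    return mu1, sd1, mu0, sd0, pi
```

Run on data simulated as in the earlier sketch, the recovered means, standard deviations, and mixing weight should land close to the values used to generate it.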
by Gelu M. Nita et al.
The study of time-dependent solar active region morphology and its relation to eruptive events requires analysis of imaging data obtained in multiple wavelength domains with differing spatial and time resolutions, ideally in combination with 3D physical models. To facilitate this goal, we have undertaken a major enhancement of our IDL-based simulation tool, GX Simulator, originally developed for modeling microwave and X-ray emission from flaring loops (Nita et al. 2015), to allow it to simulate quiescent emission from solar active regions.
GX Simulator is publicly available as part of the SolarSoftWare (SSW) IDL repository. The object-based architecture of GX Simulator, which runs on Windows, Mac and Linux platforms, provides an interactive graphical user interface that allows the user to import photospheric magnetic field maps as input to the magnetic field extrapolations within the tool, or alternatively to import 3D numerical magnetic field models. The magnetic skeleton may be populated with thermal plasma by importing 3D density and temperature distribution models or, alternatively, by assigning to each individual volume element numerically defined differential emission measure (DEM) distributions inferred from parametric heating EBTEL models (Klimchuk et al. 2008) that assume either steady-state or impulsive nanoflare plasma heating.
The application integrates shared-object libraries containing fast microwave (gyrosynchrotron and gyroresonance) emission codes developed in FORTRAN and C++ (Fleishman and Kuznetsov 2010), and soft and hard X-ray and EUV codes developed in IDL.
A major functionality added to the current release of the GX Simulator distribution package is an almost fully automated way of downloading needed SDO maps and the ability to produce NLFFF reconstructions using the weighted optimization code described and tested by Fleishman et al. (2017).
We illustrate our upgraded tool by creating synthetic emission maps for the AR 11072 obtained on 23-May-2010 12:00:00 UT by SDO/AIA, Nobeyama Radio Heliograph (NORH), and the Siberian Solar Radio Telescope (SSRT), as shown in Figure 1 in the case of the 94 Angstrom AIA channel, and in Figure 2 for the radio emission.
Figure 1 Example of EUV synthesized images to observed data comparison for AR 11072 on 23-May-2010. Left column: SDO/AIA 94 A image averaged for a six hour interval centered on the time of the NLFFF magnetic field model. Second column: Synthesized EUV images using the EBTEL impulsive heating DEM solution. Third column: Synthesized EUV 94 A image using the EBTEL steady-state heating DEM solution. Both impulsive and steady-state heating models were obtained using the same volumetric heating rate. The images are convolved with a circular Gaussian beam with $\sigma_x=\sigma_y=1.2"$. Fourth column: Impulsive heating model to data relative residuals clipped to the $\pm100\%$ range. Right column: Steady-state heating model to data relative residuals clipped to the $\pm100\%$ range.
Figure 2 AR 11072 23-May-2010 12:00:00 UT: comparison between microwave synthesized images and observations by SSRT at 5.7 GHz (top row) and NORH at 17 GHz (bottom row). a) 10%, 20%, 50%, 70% and 90% contours of the observed SSRT 5.7 GHz brightness temperature (red contours), impulsive heating (green contours) and steady-state heating (blue contours) synthesized brightness temperature, overlaid on top of the SDO/HMI LOS magnetogram. The plot inset displays the peak brightness temperatures corresponding to the three 5.7 GHz brightness temperature maps, $960\times10^3$K, $1313\times10^3$K and $1463\times10^3$K, respectively. b) Impulsive heating synthetic emission to SSRT 5.7 GHz temperature residual map. c) Steady-state heating synthetic emission to SSRT 5.7 GHz temperature residual map. The same scale in the residual maps shown in panels (b) and (c) is adopted for ease of comparison. d) 55%, 70% and 90% contours of the observed NORH 17 GHz brightness temperature (red contours), impulsive heating (green contours) and steady-state heating (blue contours) synthesized brightness temperature, overlaid on top of the SDO/HMI LOS magnetogram. The plot inset displays the peak brightness temperatures corresponding to three 17 GHz brightness temperature maps, $31\times10^3$K, $29\times10^3$K and $26\times10^3$K, respectively. e) Impulsive heating synthetic emission to NORH 17 GHz temperature residual map. f) Steady-state heating synthetic emission to NORH 17 GHz temperature residual map. The same scale in the residual maps shown in panels (e) and (f) is adopted for ease of comparison.
Figure 2 demonstrates that our approach resulted in a model that is quantitatively consistent with the radio imaging data produced by SSRT at 5.7 GHz (top row) and NORH at 17 GHz (bottom row). In the first column of Figure 2 we display the observed microwave brightness temperature maps (red contours) and the synthesized radio contours, for both steady-state and impulsive models, on top of the HMI LOS magnetic field maps. Remarkably, for both observed frequencies, the peaks of the brightness temperature maps are reproduced within a few tens of percent accuracy by both heating models. The location and morphology of the 5.7 GHz SSRT contours are also matched remarkably well in the simulations. However, the NORH 17 GHz contours indicate a more compact source than in the simulations, as well as a ∼ 20′′ spatial displacement of the emission peak.
Based on data-to-model comparison in both the EUV and microwave domains, we conclude that, although still open to refinement, the magneto-thermal structure that we modelled using the recent upgrade of GX Simulator may be considered a reasonably good approximation of reality.
Borisov A. V., Kilin A. A., Pivovarova E. N.
In this paper we consider the control of the motion of a dynamically asymmetric unbalanced ball (Chaplygin top) by means of two perpendicular rotors. We propose a mechanism for control by periodically changing the gyrostatic momentum of the system, which leads to an unbounded speedup. We then formulate a general hypothesis of the mechanism for speeding up spherical bodies on a plane by periodically changing the system parameters.
Kilin A. A., Pivovarova E. N.
This paper presents a qualitative analysis of the dynamics in a fixed reference frame of a wheel with sharp edges that rolls on a horizontal plane without slipping at the point of contact and without spinning relative to the vertical. The wheel is a ball that is symmetrically truncated on both sides and has a displaced center of mass. The dynamics of such a system is described by the model of the ball's motion where the wheel rolls with its spherical part in contact with the supporting plane and the model of the disk's motion where the contact point lies on the sharp edge of the wheel. A classification is given of possible motions of the wheel depending on whether there are transitions from its spherical part to sharp edges. An analysis is made of the behavior of the point of contact of the wheel with the plane for different values of the system parameters, first integrals and initial conditions. Conditions for boundedness and unboundedness of the wheel's motion are obtained. Conditions for the fall of the wheel on the plane of sections are presented.
Kilin A. A., Artemova E. M.
This paper is concerned with the problem of the interaction of vortex lattices, which is equivalent to the problem of the motion of point vortices on a torus. It is shown that the dynamics of a system of two vortices does not depend qualitatively on their strengths. Steady-state configurations are found and their stability is investigated. For two vortex lattices it is also shown that, in absolute space, vortices move along closed trajectories except for the case of a vortex pair. The problems of the motion of three and four vortex lattices with nonzero total strength are considered. For three vortices, a reduction to the level set of first integrals is performed. The nonintegrability of this problem is numerically shown. It is demonstrated that the equations of motion of four vortices on a torus admit an invariant manifold which corresponds to centrally symmetric vortex configurations. Equations of motion of four vortices on this invariant manifold and on a fixed level set of first integrals are obtained and their nonintegrability is numerically proved.
Ivanova T. B., Kilin A. A., Pivovarova E. N.
In this paper, we develop a model of a controlled spherical robot of combined type moving by displacing the center of mass and by changing the internal gyrostatic momentum, with a feedback that stabilizes given partial solutions for a free system at the final stage of motion. According to the proposed approach, feedback depends on phase variables (current position, velocities) and does not depend on the specific type of trajectory. We present integrals of motion and partial solutions, analyze their stability, and give examples of computer simulations of motion with feedback that demonstrate the efficiency of the proposed model.
Borisov A. V., Kilin A. A., Karavaev Y. L., Klekovkin A. V.
The paper is concerned with the problem of stabilizing a spherical robot of combined type during its motion. The focus is on the application of feedback for stabilization of the robot, which is an example of an underactuated system. The robot is set in motion by an internal wheeled platform with a rotor placed inside the sphere. The results of experimental investigations for a prototype of the spherical robot are presented.
In this work we consider the controlled motion of a pendulum spherical robot on an inclined plane. The algorithm for determining the control actions for the motion along an arbitrary trajectory and examples of numerical simulation of the controlled motion are given.
This paper is concerned with a model of the controlled motion of a spherical robot with an axisymmetric pendulum actuator on an inclined plane. First integrals of motion and partial solutions are presented and their stability is analyzed. It is shown that the steady solutions exist only at an inclination angle less than some critical value and only for constant control action.
This paper is concerned with the dynamics of a wheel with sharp edges moving on a horizontal plane without slipping and rotation about the vertical (nonholonomic rubber model). The wheel is a body of revolution and has the form of a ball symmetrically truncated on both sides. This problem is described by a system of differential equations with a discontinuous right-hand side. It is shown that this system is integrable and reduces to quadratures. Partial solutions are found which correspond to fixed points of the reduced system. A bifurcation analysis and a classification of possible types of the wheel's motion depending on the system parameters are presented.
In this paper, equations of motion for the problem of a ball rolling without slipping on a rotating hyperbolic paraboloid are obtained. Integrals of motion and an invariant measure are found. A detailed linear stability analysis of the ball's rotations at the saddle point of the hyperbolic paraboloid is made. A three-dimensional Poincaré map generated by the phase flow of the problem is numerically investigated and the existence of a region of bounded trajectories in a neighborhood of the saddle point of the paraboloid is demonstrated. It is shown that a similar problem of a ball rolling on a rotating paraboloid, considered within the framework of the rubber model, can be reduced to a Hamiltonian system which includes the Brouwer problem as a particular case.
Kilin A. A., Klenov A. I., Tenenev V. A.
This article is devoted to the study of self-propulsion of bodies in a fluid by the action of internal mechanisms, without changing the external shape of the body. The paper presents an overview of theoretical papers that justify the possibility of this displacement in ideal and viscous liquids. A special case of self-propulsion of a rigid body along the surface of a liquid due to the motion of two internal masses along circles is considered. The paper presents a mathematical model of the motion of a solid body with moving internal masses in a three-dimensional formulation. This model takes into account the three-dimensional vibrations of the body during motion, which arise under the action of external forces: gravity, the Archimedes force and the forces acting on the body from the viscous fluid. The body is a homogeneous elliptical cylinder with a keel located along the larger diagonal. Inside the cylinder there are two material point masses moving along circles. The centers of the circles lie on the smallest diagonal of the ellipse at an equal distance from the center of mass. Equations of motion of the system (a body with two material points, placed in a fluid) are represented as Kirchhoff equations with the addition of external forces and moments acting on the body. A phenomenological model of viscous friction, quadratic in velocity, is used to describe the forces of resistance to motion in a fluid. The coefficients of resistance to movement were determined experimentally. The forces acting on the keel were determined by numerical modeling of the keel oscillations in a viscous liquid using the Navier–Stokes equations. In this paper, an experimental verification of the proposed mathematical model was carried out. Several series of experiments on self-propulsion of a body in a liquid by means of rotation of internal masses with different speeds of rotation are presented. The dependence of the average propulsion velocity and the amplitude of the transverse oscillations on the rotational speed of the internal masses is investigated. The obtained experimental data are compared with the results obtained within the framework of the proposed mathematical model.
In this paper, we develop a model of a controlled spherical robot with an axisymmetric pendulum-type actuator with a feedback system suppressing the pendulum's oscillations at the final stage of motion. According to the proposed approach, the feedback depends on phase variables (the current position and velocities) and does not depend on the type of trajectory. We present integrals of motion and partial solutions, analyze their stability, and give examples of computer simulation of motion using feedback to illustrate compensation of the pendulum's oscillations.
In the paper, a study of rolling of a dynamically asymmetrical unbalanced ball (Chaplygin top) on a horizontal plane under the action of a periodic gyrostatic moment is carried out. The problem is considered in the framework of the model of a rubber body, i.e., under the assumption that there is no slipping and no spinning at the point of contact. It is shown that, for certain values of the parameters of the system and certain dependence of the gyrostatic moment on time, an acceleration of the system, i.e., an unbounded growth of the energy of the system, is observed. Investigations of the dependence of the presence of acceleration on the parameters of the system and on the initial conditions are carried out. On the basis of the investigations of the dynamics of the frozen system, a conjecture is put forward concerning the general mechanism of acceleration due to periodic impacts in nonholonomic systems.
In this paper, we show that the trajectories of a dynamical system with nonholonomic constraints can satisfy Hamilton's principle. As the simplest illustration, we consider the problem of a homogeneous ball rolling without slipping on a plane. However, Hamilton's principle is formulated either for a reduced system or for a system defined in an extended phase space. It is shown that the dynamics of a nonholonomic homogeneous ball can be embedded in a higher-dimensional Hamiltonian phase flow. We give two examples of such an embedding: embedding in the phase flow of a free system and embedding in the phase flow of the corresponding vakonomic system.
Borisov A. V., Kilin A. A., Karavaev Y. L.
This paper presents results of theoretical and experimental research explaining the retrograde final-stage rolling of a disk under certain relations between its mass and geometric parameters. Modifying the no-slip model of a rolling disk by including viscous rolling friction provides a qualitative explanation for the disk's retrograde motion. At the same time, the simple experiments described in the paper completely reject the aerodynamical drag torque as a key reason for the retrograde motion of the disk considered, thus disproving some recent hypotheses.
Borisov A. V., Vetchanin E. V., Kilin A. A.
The motion of a body shaped as a triaxial ellipsoid and controlled by the rotation of three internal rotors is studied. It is proved that the motion is controllable with the exception of a few particular cases. Partial solutions whose combinations enable an unbounded motion in any arbitrary direction are constructed.
This paper is concerned with the dynamics of a top in the form of a truncated ball as it moves without slipping and spinning on a horizontal plane about a vertical. Such a system is described by differential equations with a discontinuous right-hand side. Equations describing the system dynamics are obtained and a reduction to quadratures is performed. A bifurcation analysis of the system is made and all possible types of the top's motion depending on the system parameters and initial conditions are defined. The system dynamics in absolute space is examined. It is shown that, except for some special cases, the trajectories of motion are bounded.
Karavaev Y. L., Kilin A. A., Klekovkin A. V.
In this paper the model of rolling of spherical bodies on a plane without slipping is presented taking into account viscous rolling friction. Results of experiments aimed at investigating the influence of friction on the dynamics of rolling motion are presented. The proposed dynamical friction model for spherical bodies is verified and the limits of its applicability are estimated. A method for determining friction coefficients from experimental data is formulated.
Vetchanin E. V., Tenenev V. A., Kilin A. A.
In this paper we consider the controlled motion of a helical body with three blades in an ideal fluid, which is executed by rotating three internal rotors. We set the problem of selecting control actions, which ensure the motion of the body near the predetermined trajectory. To determine controls that guarantee motion near the given curve, we propose methods based on the application of hybrid genetic algorithms (genetic algorithms with real encoding and with additional learning of the leader of the population by a gradient method) and artificial neural networks. The correctness of the operation of the proposed numerical methods is estimated using previously obtained differential equations, which define the law of changing the control actions for the predetermined trajectory.
In the approach based on hybrid genetic algorithms, the initial problem of minimizing the integral functional reduces to minimizing the function of many variables. The given time interval is broken up into small elements, on each of which the control actions are approximated by Lagrangian polynomials of order 2 and 3. When appropriately adjusted, the hybrid genetic algorithms reproduce a solution close to exact. However, the cost of calculation of 1 second of the physical process is about 300 seconds of processor time.
To increase the speed of calculation of control actions, we propose an algorithm based on artificial neural networks. As the input signal the neural network takes the components of the required displacement vector. The node values of the Lagrangian polynomials that approximately describe the control actions are returned as output signals. The neural network is taught by the well-known back-propagation method. The learning sample is generated using the approach based on hybrid genetic algorithms. The calculation of 1 second of the physical process by means of the neural network requires about 0.004 seconds of processor time, that is, about six orders of magnitude faster than the hybrid genetic algorithm. The control calculated by means of the artificial neural network differs from exact control. However, in spite of this difference, it ensures that the predetermined trajectory is followed exactly.
Vetchanin E. V., Kilin A. A.
In this paper we study the controlled motion of an arbitrary two-dimensional body in an ideal fluid with a moving internal mass and an internal rotor in the presence of constant circulation around the body. We show that by changing the position of the internal mass and by rotating the rotor, the body can be made to move to a given point, and discuss the influence of nonzero circulation on the motion control. We have found that in the presence of circulation around the body the system cannot be completely stabilized at an arbitrary point of space, but fairly simple controls can be constructed to ensure that the body moves near the given point.
Kilin A. A., Bozek P., Karavaev Y. L., Klekovkin A. V., Shestakov V. A.
In this article, a dynamical model for controlling an omniwheel mobile robot is presented. The proposed model is used to construct an algorithm for calculating control actions for trajectories characterizing the high maneuverability of the mobile robot. A description is given for a prototype of the highly maneuverable robot with four omniwheels, for which an algorithm for setting the coefficients of the PID controller is considered. Experiments on the motion of the robot were conducted at different angles, and the orientation of the platform was preserved. The experimental results are analyzed and statistically assessed.
In this paper, we study the free and controlled motion of an arbitrary two-dimensional body with a moving internal material point through an ideal fluid in presence of constant circulation around the body. We perform bifurcation analysis of free motion (with fixed internal mass). We show that by changing the position of the internal mass the body can be made to move to a specified point. There are a number of control problems associated with the nonzero drift of the body in the case of fixed internal mass.
We consider the controlled motion in an ideal incompressible fluid of a rigid body with moving internal masses and an internal rotor in the presence of circulation of the fluid velocity around the body. The controllability of motion (according to the Rashevskii–Chow theorem) is proved for various combinations of control elements. In the case of zero circulation, we construct explicit controls (gaits) that ensure rotation and rectilinear (on average) motion. In the case of nonzero circulation, we examine the problem of stabilizing the body (compensating the drift) at the end point of the trajectory. We show that the drift can be compensated for if the body is inside a circular domain whose size is defined by the geometry of the body and the value of circulation.
Karavaev Y. L., Kilin A. A.
We present the results of theoretical and experimental investigations of the motion of a spherical robot on a plane. The motion is actuated by a platform with omniwheels placed inside the robot. The control of the spherical robot is based on a dynamic model in the nonholonomic statement expressed as equations of motion in quasivelocities with indeterminate coefficients. A number of experiments have been carried out that confirm the adequacy of the dynamic model proposed.
Vetchanin E. V., Kilin A. A., Mamaev I. S.
This paper is concerned with the motion of a helical body in an ideal fluid, which is controlled by rotating three internal rotors. It is proved that the motion of the body is always controllable by means of three rotors with noncoplanar axes of rotation. A condition whose satisfaction prevents controllability by means of two rotors is found. Control actions that allow the implementation of unbounded motion in an arbitrary direction are constructed. Conditions under which the motion of the body along an arbitrary smooth curve can be implemented by rotating the rotors are presented. For the optimal control problem, equations of sub-Riemannian geodesics on $SE(3)$ are obtained.
In this paper we describe the results of experimental investigations of the motion of a screwless underwater robot controlled by rotating internal rotors. We present the results of comparison of the trajectories obtained with the results of numerical simulation using the model of an ideal fluid.
Klenov A. I., Kilin A. A.
This paper is devoted to an experimental investigation of the motion of a rigid body set in motion by rotating two unbalanced internal masses. The results of experiments confirming the possibility of motion by this method are presented. The dependence of the parameters of motion on the rotational velocity of internal masses is analyzed. The velocity field of the fluid around the moving body is examined.
This paper is concerned with two systems from sub-Riemannian geometry. One of them is defined by a Carnot group with three generatrices and growth vector $(3, 6, 14)$, the other is defined by two generatrices and growth vector $(2, 3, 5, 8)$. Using a Poincaré map, the nonintegrability of the above systems in the general case is shown. In addition, particular cases are presented in which there exist additional first integrals.
This paper is concerned with the motion of an unbalanced heavy three-axial ellipsoid in an ideal fluid controlled by rotation of three internal rotors. It is proved that the motion of the body considered is controlled with respect to configuration variables except for some special cases. An explicit control that makes it possible to implement unbounded motion in an arbitrary direction has been calculated. Directions for which control actions are bounded functions of time have been determined.
In this paper, we develop the results obtained by J.Hadamard and G.Hamel concerning the possibility of substituting nonholonomic constraints into the Lagrangian of the system without changing the form of the equations of motion. We formulate the conditions for correctness of such a substitution for a particular case of nonholonomic systems in the simplest and universal form. These conditions are presented in terms of both generalized velocities and quasi-velocities. We also discuss the derivation and reduction of the equations of motion of an arbitrary wheeled vehicle. In particular, we prove the equivalence (up to additional quadratures) of problems of an arbitrary wheeled vehicle and an analogous vehicle whose wheels have been replaced with skates. As examples, we consider the problems of a one-wheeled vehicle and a wheeled vehicle with two rotating wheel pairs.
Bolsinov A. V., Kilin A. A., Kazakov A. O.
Topological monodromy as an obstruction to Hamiltonization of nonholonomic systems: Pro or contra?
The phenomenon of a topological monodromy in integrable Hamiltonian and nonholonomic systems is discussed. An efficient method for computing and visualizing the monodromy is developed. The comparative analysis of the topological monodromy is given for the rolling ellipsoid of revolution problem in two cases, namely, on a smooth and on a rough plane. The first of these systems is Hamiltonian, the second is nonholonomic. We show that, from the viewpoint of monodromy, there is no difference between the two systems, and thus disprove the conjecture by Cushman and Duistermaat stating that the topological monodromy gives a topological obstruction for Hamiltonization of the rolling ellipsoid of revolution on a rough plane.
Klenov A. I., Vetchanin E. V., Kilin A. A.
This paper is concerned with the experimental determination of the added masses of bodies completely or partially immersed in a fluid. The paper presents an experimental setup, a technique of the experiment and an underlying mathematical model. The method of determining the added masses is based on the towing of the body with a given propelling force. It is known (from theory) that the concept of an added mass arises under the assumption concerning the potentiality of flow over the body. In this context, the authors have performed PIV visualization of flows generated by the towed body, and defined a part of the trajectory for which the flow can be considered as potential. For verification of the technique, a number of experiments have been performed to determine the added masses of a spheroid. The measurement results are in agreement with the known reference data. The added masses of a screwless freeboard robot have been defined using the developed technique.
Borisov A. V., Mamaev I. S., Kilin A. A., Bizyaev I. A.
This paper is concerned with the problem of the motion of a wheeled vehicle on a plane in the case where one of the wheel pairs is fixed. In addition, the motion of a wheeled vehicle on a plane in the case of two free wheel pairs is considered. A method for obtaining equations of motion for the vehicle with an arbitrary geometry is presented. Possible kinds of motion of the vehicle with a fixed wheel pair are determined.
Kilin A. A., Pivovarova E. N., Ivanova T. B.
This paper is concerned with free and controlled motions of a spherical robot of combined type moving by displacing the center of mass and by changing the internal gyrostatic momentum. Equations of motion for the nonholonomic model are obtained and their first integrals are found. Fixed points of the reduced system are found in the absence of control actions. It is shown that they correspond to the motion of the spherical robot in a straight line and in a circle. A control algorithm for the motion of the spherical robot along an arbitrary trajectory is presented. A set of elementary maneuvers (gaits) is obtained which allow one to transfer the spherical robot from any initial point to any end point.
A nonholonomic model of the dynamics of an omniwheel vehicle on a plane and a sphere is considered. A derivation of equations is presented and the dynamics of a free system are investigated. An explicit motion control algorithm for the omniwheel vehicle moving along an arbitrary trajectory is obtained.
This paper deals with the problem of a spherical robot propelled by an internal omniwheel platform and rolling without slipping on a plane. The problem of control of spherical robot motion along an arbitrary trajectory is solved within the framework of a kinematic model and a dynamic model. A number of particular cases of motion are identified, and their stability is investigated. An algorithm for constructing elementary maneuvers (gaits) providing the transition from one steady-state motion to another is presented for the dynamic model. A number of experiments have been carried out confirming the adequacy of the proposed kinematic model.
This paper presents the results of experimental investigations of the rolling on a horizontal plane of a spherical robot of combined type actuated by an internal wheeled vehicle with a rotor. The control of the spherical robot is based on a nonholonomic dynamical model and is implemented by means of gaits. We consider the motion of the spherical robot in the case of constant control actions, as well as impulsive control. A number of experiments have been carried out confirming the importance of rolling friction.
Kilin A. A., Vetchanin E. V.
In this paper we consider the problem of motion of a rigid body in an ideal fluid with two material points moving along circular trajectories. The controllability of this system on the zero level set of first integrals is shown. Elementary "gaits" are presented which allow the realization of the body's motion from one point to another. The existence of obstacles to a controlled motion of the body along an arbitrary trajectory is pointed out.
The dynamic model for a spherical robot with an internal omniwheel platform is presented. Equations of motion and first integrals according to the non-holonomic model are given. We consider particular solutions and their stability. An algorithm for controlling the motion of the spherical robot along a given trajectory is presented.
Kilin A. A., Karavaev Y. L., Klekovkin A. V.
In this article a kinematic model of a spherical robot set in motion by an internal platform with omni-wheels is considered. A description of the design is given, an algorithm for trajectory planning based on the developed kinematic model is introduced, and experimental research is carried out for typical trajectories: moving along a straight line and moving along a circle.
Borisov A. V., Kilin A. A., Mamaev I. S., Tenenev V. A.
Fluid Dynamics Research, 2014, vol. 46, no. 3, 031415, 16 pp.
We consider the problem of motion of axisymmetric vortex rings in an ideal incompressible and viscous fluid. Using the numerical simulation of the Navier–Stokes equations, we confirm the existence of leapfrogging of three equal vortex rings and suggest the possibility of detecting it experimentally. We also confirm the existence of leapfrogging of two vortex rings with opposite-signed vorticities in a viscous fluid.
Borisov A. V., Kilin A. A., Mamaev I. S., Tenenev V. A., The dynamics of vortex rings: leapfrogging in an ideal and viscous fluid , Fluid Dynamics Research, 2014, vol. 46, no. 3, 031415, 16 pp.
We investigate the motion of the point of contact (absolute dynamics) in the integrable problem of the Chaplygin ball rolling on a plane. Although the velocity of the point of contact is a given vector function of variables of the reduced system, it is impossible to apply standard methods of the theory of integrable Hamiltonian systems due to the absence of an appropriate conformally Hamiltonian representation for an unreduced system. For a complete analysis we apply the standard analytical approach, due to Bohl and Weyl, and develop topological methods of investigation. In this way we obtain conditions for boundedness and unboundedness of the trajectories of the contact point.
In our earlier paper we examined the problem of control of a balanced dynamically nonsymmetric sphere with rotors with no-slip condition at the point of contact. In this paper we investigate the controllability of a ball in the presence of friction. We also study the problem of the existence and stability of singular dissipation-free periodic solutions for a free ball in the presence of friction forces. The issues of constructive realization of the proposed algorithms are discussed.
We consider the problem of motion of axisymmetric vortex rings in an ideal incompressible fluid. Using the topological approach, we present a method for complete qualitative analysis of the dynamics of a system of two vortex rings. In particular, we completely solve the problem of describing the conditions for the onset of leapfrogging motion of vortex rings. In addition, for the system of two vortex rings we find new families of motions where the relative distances remain finite (we call them pseudo-leapfrogging). We also find solutions for the problem of three vortex rings, which describe both the regular and chaotic leapfrogging motion of vortex rings.
We investigate the motion of the point of contact (absolute dynamics) in the integrable problem of the Chaplygin ball rolling on a plane. Although the velocity of the point of contact is a given vector function of variables of a reduced system, it is impossible to apply standard methods of the theory of integrable Hamiltonian systems due to the absence of an appropriate conformally Hamiltonian representation for an unreduced system. For a complete analysis we apply the standard analytical approach, due to Bohl and Weyl, and develop topological methods of investigation. In this way we obtain conditions for boundedness and unboundedness of the trajectories of the contact point.
In the paper we study the control of a balanced dynamically non-symmetric sphere with rotors. The no-slip condition at the point of contact is assumed. The algebraic controllability is shown and the control inputs that steer the ball along a given trajectory on the plane are found. For some simple trajectories explicit tracking algorithms are proposed.
We discuss explicit integration and bifurcation analysis of two non-holonomic problems. One of them is Chaplygin's problem on no-slip rolling of a balanced dynamically non-symmetric ball on a horizontal plane. The other, first posed by Yu. N. Fedorov, deals with the motion of a rigid body in a spherical support. For Chaplygin's problem we consider in detail the transformation that Chaplygin used to integrate the equations when the constant of areas is zero. We revisit Chaplygin's approach to clarify the geometry of this very important transformation, because in the original paper the transformation looks like a cumbersome collection of highly non-transparent analytic manipulations. Understanding its geometry seriously facilitates the extension of the transformation to the case of a rigid body in a spherical support – the problem where almost no progress has been made since Yu. N. Fedorov posed it in 1988. In this paper we show that extending the transformation to the case of a spherical support allows us to integrate the equations of motion explicitly in terms of quadratures, detect the most remarkable critical trajectories and study their stability, and perform an exhaustive qualitative analysis of motion. Some of the results may find their application in various technical devices and robot design. We also show that adding a gyrostat with constant angular momentum to the spherical-support system does not affect its integrability.
In the paper we study control of a balanced dynamically nonsymmetric sphere with rotors. The no-slip condition at the point of contact is assumed. The algebraic controllability is shown and the control inputs providing motion of the ball along a given trajectory on the plane are found. For some simple trajectories explicit tracking algorithms are proposed.
We consider the problem of the motion of axisymmetric vortex rings in an ideal incompressible fluid. Using the topological approach, we present a method for complete qualitative analysis of the dynamics of a system of two vortex rings. In particular, we completely solve the problem of describing the conditions for the onset of leapfrogging motion of vortex rings. In addition, for the system of two vortex rings we find new families of motions in which the mutual distances remain finite (we call them pseudo-leapfrogging). We also find solutions for the problem of three vortex rings, which describe both the regular and chaotic leapfrogging motion of vortex rings.
In this paper we develop a new model of a non-holonomic billiard that accounts for the intrinsic rotation of the billiard ball. This model is a limiting case of the problem of a ball rolling without slipping over a quadric surface. The billiards between two parallel walls and inside a circle are studied in detail. Using the three-dimensional-point-map technique, the non-integrability of the non-holonomic billiard within an ellipse is shown.
We consider a novel mechanical system consisting of two spherical bodies rolling over each other, which is a natural extension of the famous Chaplygin problem of rolling motion of a ball on a plane. In contrast to the previously explored non-holonomic systems, this one has a higher dimension and is considerably more complicated. One remarkable property of our system is the existence of "clandestine" linear in momenta first integrals. For a more trivial integrable system, their counterparts were discovered by Chaplygin. We have also found a few cases of integrability.
The Hamiltonian representation and integrability of the nonholonomic Suslov problem and its generalization suggested by S. A. Chaplygin are considered. This subject is important for understanding the qualitative features of the dynamics of this system, being in particular related to a nontrivial asymptotic behavior (i. e., to a certain scattering problem). A general approach based on studying a hierarchy in the dynamical behavior of nonholonomic systems is developed.
We consider a nonholonomic model of the dynamics of an omni-wheel vehicle on a plane and a sphere. An elementary derivation of equations is presented, the dynamics of a free system is investigated, a relation to control problems is shown.
We consider the problem of explicit integration and bifurcation analysis for two systems of nonholonomic mechanics. The first one is Chaplygin's problem on no-slip rolling of a balanced dynamically non-symmetrical ball on a horizontal plane. The second problem concerns the motion of a rigid body in a spherical support. We explicitly integrate this problem by generalizing the transformation which Chaplygin applied to the integration of the problem of the rolling ball at a non-zero constant of areas. We consider the geometric interpretation of this transformation from the viewpoint of a trajectory isomorphism between two systems at different levels of the energy integral. Generalization of this transformation to the case of dynamics in a spherical support allows us to integrate the equations of motion explicitly in quadratures and, in addition, to indicate periodic solutions and analyze their stability. We also show that adding a gyrostat does not lead to the loss of integrability.
We consider a novel mechanical system consisting of two spherical bodies rolling over each other, which is a natural extension of the famous Chaplygin problem of rolling motion of a ball on a plane. In contrast to the previously explored non-holonomic systems, this one has a higher dimension and is considerably more complicated. One remarkable property of our system is the existence of «clandestine» linear in momenta first integrals. For a more trivial integrable system, their counterparts were discovered by Chaplygin. We have also found a few cases of integrability.
Borisov A. V., Bolotin S. V., Kilin A. A., Mamaev I. S., Treschev D. V.
We consider the problems of Hamiltonian representation and integrability of the nonholonomic Suslov system and its generalization suggested by S. A. Chaplygin. These aspects are very important for understanding the dynamics and qualitative analysis of the system. In particular, they are related to the nontrivial asymptotic behaviour (i. e. to some scattering problem). The paper presents a general approach based on the study of the hierarchy of dynamical behaviour of nonholonomic systems.
We consider the motion of a material point on the surface of a sphere in the field of $2n + 1$ identical Hooke centers (singularities with elastic potential) lying on a great circle. Our main result is that this system is superintegrable. The property of superintegrability for this system has been conjectured by us in , where the structure of a superintegral of arbitrarily high odd degree in momenta was outlined. We also indicate an isomorphism between this system and the one-dimensional $N$-particle system discussed in the recent paper and show that for the latter system an analogous superintegral can be constructed.
Borisov A. V., Mamaev I. S., Kilin A. A.
The dynamics of self-gravitating liquid and gas ellipsoids is considered. A literature survey and the authors' original results obtained using modern techniques of nonlinear dynamics are presented. Strict Lagrangian and Hamiltonian formulations of the equations of motion are given; in particular, a Hamiltonian formalism based on Lie algebras is described. Problems related to nonintegrability and chaos are formulated and analyzed. All the known integrability cases are classified, and the most natural hypotheses on the nonintegrability of the equations of motion in the general case are presented. The results of numerical simulations are described. They, on the one hand, demonstrate a chaotic behavior of the system and, on the other hand, can in many cases serve as a numerical proof of the nonintegrability (the method of transversally intersecting separatrices).
Systems of material points interacting both with one another and with an external field are considered in Euclidean space. For the case of arbitrary binary interaction depending solely on the mutual distance between the bodies, new integrals are found, which form a Galilean momentum vector. A corresponding algebra of integrals constituted by the integrals of momentum, angular momentum, and Galilean momentum is presented. Particle systems with a particle-interaction potential homogeneous of degree $\alpha = –2$ are considered. The most general form of the additional integral of motion, which we term the Jacobi integral, is presented for such systems. A new nonlinear algebra of integrals including the Jacobi integral is found. A systematic description is given to a new reduction procedure and possibilities of applying it to dynamics with the aim of lowering the order of Hamiltonian systems.
Some new integrable and superintegrable systems generalizing the classical ones are also described. Certain generalizations of the Lagrangian identity for systems with a particle interaction potential homogeneous of degree $\alpha = –2$ are presented. In addition, computational experiments are used to prove the nonintegrability of the Jacobi problem on a plane.
3-particle systems with a particle-interaction homogeneous potential of degree $α=-2$ are considered. A constructive procedure of reduction of the system by 2 degrees of freedom is performed. The nonintegrability of the systems is shown using the Poincaré map.
Systems of material points interacting both with one another and with an external field are considered in Euclidean space. For the case of arbitrary binary interaction depending solely on the mutual distance between the bodies, new integrals are found, which form a Galilean momentum vector.
A corresponding algebra of integrals constituted by the integrals of momentum, angular momentum, and Galilean momentum is presented. Particle systems with a particle-interaction potential homogeneous of degree $α=-2$ are considered. The most general form of the additional integral of motion, which we term the Jacobi integral, is presented for such systems. A new nonlinear algebra of integrals including the Jacobi integral is found. A systematic description is given to a new reduction procedure and possibilities of applying it to dynamics with the aim of lowering the order of Hamiltonian systems.
Some new integrable and superintegrable systems generalizing the classical ones are also described. Certain generalizations of the Lagrangian identity for systems with a particle-interaction potential homogeneous of degree $α=-2$ are presented. In addition, computational experiments are used to prove the nonintegrability of the Jacobi problem on a plane.
We have discovered a new first integral in the problem of motion of a dynamically symmetric ball, subject to gravity, on the surface of a paraboloid. Using this integral, we have obtained conditions for stability (in the Lyapunov sense) of steady rotations of the ball at the upmost, downmost and saddle point.
In this paper, we consider the transition to chaos in the phase portrait of a restricted problem of rotation of a rigid body with a fixed point. Two interrelated mechanisms responsible for chaotization are indicated: (1) the growth of the homoclinic structure and (2) the development of cascades of period doubling bifurcations. On the zero level of the area integral, an adiabatic behavior of the system (as the energy tends to zero) is noted. Meander tori induced by the break of the torsion property of the mapping are found.
For the classical problem of motion of a rigid body about a fixed point with zero area integral, we present a family of solutions that are periodic in the absolute space. Such solutions are known as choreographies. The family includes the well-known Delone solutions (for the Kovalevskaya case), some particular solutions for the Goryachev–Chaplygin case, and the Steklov solution. The "genealogy" of solutions of the family naturally appearing from the energy continuation and their connection with the Staude rotations are considered. It is shown that if the integral of areas is zero, the solutions are periodic with respect to a coordinate frame that rotates uniformly about the vertical (relative choreographies).
The paper contains the review and original results on the dynamics of liquid and gas self-gravitating ellipsoids. Equations of motion are given in Lagrangian and Hamiltonian form, in particular, the Hamiltonian formalism on Lie algebras is presented. Problems of nonintegrability and chaotical behavior of the system are formulated and studied. We also classify all known integrable cases and give some hypotheses about nonintegrability in the general case. Results of numerical modelling are presented, which can be considered as a computer proof of nonintegrability.
We discuss systems of material points in Euclidean space interacting both with each other and with an external field. In particular, we consider systems of particles whose interaction is described by a homogeneous potential with degree of homogeneity $\alpha=-2$. Such systems were first considered by Newton and, more systematically, by Jacobi. For such systems there is an extra hidden symmetry, and a corresponding first integral of motion which we call the Jacobi integral. This integral has been given in various papers starting with Jacobi's, but here we present it in general form. Furthermore, we construct a new algebra of integrals including the Jacobi integral. A series of generalizations of Lagrange's identity for systems with a homogeneous potential of degree of homogeneity $\alpha=-2$ is given. New integrals of motion for these generalizations are found.
The dynamics of an antipodal vortex on a sphere (a point vortex plus its antipode with opposite circulation) is considered. It is shown that the system of n antipodal vortices can be reduced by four dimensions (two degrees of freedom). The cases $n = 2, 3$ are explored in greater detail both analytically and numerically. We discuss Thomson, collinear and isosceles configurations of antipodal vortices and study their bifurcations.
The paper considers the dynamics of a rattleback as a model of a heavy balanced ellipsoid of revolution rolling without slippage on a fixed horizontal plane. The central ellipsoid of inertia is an ellipsoid of revolution as well. In the presence of an angular displacement between the two ellipsoids, dynamical effects occur that are somewhat similar to the reverse phenomena in earlier models. However, unlike a customary rattleback model (a truncated biaxial paraboloid), our system allows motions which are a superposition of the reverse motion (reversal of the direction of spinning) and the turn over (change of the axis of rotation). With appropriate values of the energy and mass distribution, this effect (reverse + turn over) can occur more than once. Such motions as repeated reverse or repeated turn over are also possible.
The paper considers the process of transition to chaos in the problem of four point vortices on a plane. A new method for constructive reduction of the order for a system of vortices on a plane is presented. Existence of the cascade of period doubling bifurcations in the given problem is indicated.
Rolling (without slipping) of a homogeneous ball on an oblique cylinder in different potential fields and the integrability of the equations of motion are considered. We also examine whether the equations can be reduced to a Hamiltonian form. We prove a theorem stating that, in the presence of gravity (with the cylinder oblique), the ball on average moves without any vertical shift.
We have discovered a new first integral in the problem of motion of a dynamically symmetric ball, subject to gravity, on the surface of a paraboloid. Using this integral, we have obtained conditions for stability (in the Lyapunov sense) of steady rotations of the ball in the upmost, downmost and saddle point.
In this paper we describe new classes of periodic solutions for point vortices on a plane and a sphere. They correspond to similar solutions (so-called choreographies) in celestial mechanics.
We consider the problem of two interacting particles on a sphere. The potential of the interaction depends on the distance between the particles. The case of Newtonian-type potentials is studied in most detail. We reduce this system to a system with two degrees of freedom and give a number of remarkable periodic orbits. We also discuss integrability and stochastization of the motion.
We obtained new periodic solutions in the problems of three and four point vortices moving on a plane. In the case of three vortices, the system is reduced to a Hamiltonian system with one degree of freedom, and it is integrable. In the case of four vortices, the order is reduced to two degrees of freedom, and the system is not integrable. We present relative and absolute choreographies of three and four vortices of the same intensity which are periodic motions of vortices in some rotating and fixed frame of reference, where all the vortices move along the same closed curve. Similar choreographies have been recently obtained by C. Moore, A. Chenciner, and C. Simo for the $n$-body problem in celestial mechanics [6, 7, 17]. Nevertheless, the choreographies that appear in vortex dynamics have a number of distinct features.
In the paper we present the qualitative analysis of rolling motion without slipping of a homogeneous round disk on a horizontal plane. The problem was studied by S. A. Chaplygin, P. Appell and D. Korteweg, who showed its integrability. The behavior of the point of contact on the plane is investigated and conditions under which its trajectory is finite are obtained. The bifurcation diagrams are constructed.
The problem of rolling motion without slipping of an unbalanced ball on 1) an arbitrary ellipsoid and 2) an ellipsoid of revolution is considered. In his famous treatise E. Routh showed that the problem of rolling motion of a body on a surface of revolution even in the presence of axisymmetrical potential fields is integrable. In case 1, we present a new integral of motion. New solutions expressed in elementary functions are found in case 2.
The paper is concerned with the problem of rolling of a homogeneous ball on an arbitrary surface. New cases when the problem is solved by quadratures are presented. The paper also indicates a special case when an additional integral and an invariant measure exist. Using this case, we obtain a nonholonomic generalization of the Jacobi problem for the inertial motion of a point on an ellipsoid. It is also shown that for a ball rolling on an arbitrary cylinder in a gravity field the motion is bounded and, on average, the ball does not move downwards. All the results of the paper considerably expand the results obtained by E. Routh in the 19th century.
The motion of Chaplygin ball with and without gyroscope in the absolute space is analyzed. In particular, the trajectories of the point of contact are studied in detail. We discuss the motions in the absolute space, that correspond to the different types of motion in the moving frame of reference related to the body. The existence of the bounded trajectories of the ball's motion is shown by means of numerical methods in the case when the problem is reduced to a certain Hamiltonian system.
In the paper "Motion of a circular cylinder and a vortex in an ideal fluid" (Regul. Chaotic Dyn., 2001, vol. 6, no. 1, pp. 33–38), S. M. Ramodanov showed the integrability of the problem of motion of a circular cylinder and a point vortex in an unbounded ideal fluid. In the present paper we find an additional first integral and an invariant measure of the equations of motion.
Borisov A. V., Kilin A. A.
In this work the stability of polygonal configurations on a plane and on a sphere is investigated. The conditions of linear stability are obtained. A nonlinear analysis of the problem is made with the help of Birkhoff normalization. Some problems are also formulated.
We consider the two-body problem and the restricted three-body problem in the spaces $S^2$ and $L^2$. For the two-body problem we have shown the absence of exponential instability of particular solutions corresponding to roundabout motion on the plane. New libration points are found, and the dependence of their positions on the parameters of the system is explored. The regions of existence of libration points in the space of parameters are constructed. Based on an examination of the Hill regions, a qualitative estimation of the stability of the libration points is produced.
In this paper, we investigate the ergodic secrecy capacity of a block-fading wiretap channel with limited channel knowledge at the transmitter. We consider that the legitimate receiver, the eavesdropper and the transmitter are equipped with multiple antennas and that the receiving nodes are aware of their respective channel matrices. The transmitter, on the other hand, is only provided with a $B$-bit feedback of the main channel state information. The feedback bits are sent by the legitimate receiver, at the beginning of each fading block, over an error-free public link with limited capacity. The statistics of the main and the eavesdropper channel state information are known at all nodes. Assuming an average transmit power constraint, we establish upper and lower bounds on the ergodic secrecy capacity. Then, we present a framework to design the optimal codebooks for feedback and transmission. In addition, we show that the proposed lower and upper bounds coincide asymptotically as the capacity of the feedback link becomes large, i.e. $B \rightarrow \infty$; hence, fully characterizing the ergodic secrecy capacity in this case. Besides, we analyze the asymptotic behavior of the presented secrecy rates, at high Signal-to-Noise Ratio (SNR), and evaluate the gap between the bounds.
Hyadi A, Rezki Z, Alouini M-S (2017) Secure Multiple-Antenna Block-Fading Wiretap Channels with Limited CSI Feedback. IEEE Transactions on Wireless Communications: 1–1. Available: http://dx.doi.org/10.1109/TWC.2017.2727043.
The research reported in this publication was supported by CRG 2 grant from the Office of Sponsored Research at King Abdullah University of Science and Technology (KAUST). | CommonCrawl |
Abstract: The first eight orders are calculated in the high-temperature expansion in powers of $\beta=1/kT$ of the function $\varphi(\alpha , \beta)$ ($\alpha$ is the magnetization), which is the Legendre transform of the specific logarithm of the partition function $w$ with respect to the reduced external field $\alpha\equiv\beta h$. This is equivalent to calculating $w$ in an arbitrary external field in temperature-magnetization variables. The transition from the field to the magnetization enables one to use the high-temperature expansion below the Curie point as well, and, in particular, it enables one to calculate the spontaneous magnetization in zero field below the transition point. The calculations are made for two planar (square and triangular) and three three-dimensional (simple cubic, bcc and fcc) lattices, two variants being considered for the three-dimensional lattices: interaction of only nearest neighbors and interaction of first and second neighbors. | CommonCrawl |
There are \(9\) students in Josh's class, and there are \(6\) other classes of the same size in this school (and no other classes). What is the total number of students in this school?
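A one-line worked check (assuming Josh's class is included in the count, giving \(7\) classes in total): \(7 \times 9 = 63\) students.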
Juan has \(\$41\) left after spending \(\$10\) on groceries. How much money (in $) did Juan have before going to the grocery store?
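A one-line worked check: if \(m\) denotes the amount before shopping, then \(m - 10 = 41\), so \(m = 41 + 10 = 51\) dollars.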
In one game, Michael Jordan of the Chicago Bulls scored \(30\) points from \(9\) successful \(2\)-point shots and \(x\) successful \(3\)-point shots. What is the value of \(x\)?
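A short worked solution: the \(9\) two-point shots give \(2 \times 9 = 18\) points, so \(18 + 3x = 30\), hence \(3x = 12\) and \(x = 4\).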
An online music store charges \(\$15\) for each CD. There is also an additional shipping charge per order. If Sue paid a total of \(\$195\) for ordering \(12\) CDs, which of the following is the shipping charge for Sue's order?
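A short worked solution (treating the shipping charge as one flat fee for the whole order, as the wording suggests): the CDs alone cost \(12 \times \$15 = \$180\), so the shipping charge is \(\$195 - \$180 = \$15\).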
Sudoku is a logic-based combinatorial number-placement puzzle. The objective is to fill a \( 9 \times 9 \) grid with digits 1 to 9 such that each column, row and \( 3 \times 3 \) box contains all the digits from 1 to 9.
If a Sudoku puzzle is filled correctly, what is the sum of all of the entries in the \( 9 \times 9 \) grid? | CommonCrawl |
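A short worked computation: each of the \(9\) rows contains each digit \(1\) through \(9\) exactly once, so each row sums to \(1 + 2 + \dots + 9 = 45\), and the whole grid sums to \(9 \times 45 = 405\).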
I do not know if this is correct, so I ask.
Given a number $n\geq 2$, can we find a Galois extension of $\mathbb Q$ whose Galois group has order $n$? Similarly, given $n\in \mathbb N$, can we find a totally imaginary number field that is a Galois extension of $\mathbb Q$ and for which the order of its Galois group is $2n$?
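One standard construction — sketched here only as an illustration of why the answer to the first question is yes, not necessarily the intended solution — uses cyclotomic fields. By Dirichlet's theorem there is a prime $p \equiv 1 \pmod{2n}$, and $\operatorname{Gal}(\mathbb Q(\zeta_p)/\mathbb Q) \cong (\mathbb Z/p\mathbb Z)^{\times}$ is cyclic of order $p-1$, which is divisible by $n$. The unique subgroup $H$ of index $n$ is normal (the group is abelian), so the fixed field $K = \mathbb Q(\zeta_p)^{H}$ is Galois over $\mathbb Q$ with $[K:\mathbb Q] = n$. For the second question, note that since $2n \mid p-1$ the index-$n$ subgroup $H$ has even order, hence contains complex conjugation, so $K$ lies inside the maximal real subfield $\mathbb Q(\zeta_p)^{+}$. Then $K(i)$ is a compositum of Galois extensions, hence Galois over $\mathbb Q$; it is totally imaginary because it contains $i$; and $[K(i):\mathbb Q] = 2n$ because $K$ is real, so $i \notin K$.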
It is often nice to consider BIBDs for which the number of points in the BIBD equals the number of blocks, i.e., $v = b$. Such BIBDs are given a special name which we define below.
Definition: A $(v, k, \lambda)$-BIBD $(X, \mathcal A)$ is said to be Symmetric if $v = b$.
The following theorem tells us that every symmetric BIBD has the additional property that $r = k$.
Theorem: If $(X, \mathcal A)$ is a symmetric $(v, k, \lambda)$-BIBD then $r = k$. | CommonCrawl |
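A one-line derivation, using only the standard double-counting identity for BIBDs: counting the point–block incidences in two ways gives $bk = vr$, and in a symmetric design $b = v$, so $vk = vr$ and hence $r = k$.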
Define $F(n)$ to be the number of integers $x≤n$ that can be written in the form $x=a^2b^3$, where $a$ and $b$ are integers not necessarily different and both greater than 1.
For example, $32=2^2\times 2^3$ and $72=3^2\times 2^3$ are the only two integers less than 100 that can be written in this form. Hence, $F(100)=2$.
Further you are given $F(2\times 10^4)=130$ and $F(3\times 10^6)=2014$. | CommonCrawl |
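A minimal brute-force sketch in Python (my own illustration, not part of the original problem): it enumerates every product $a^2 b^3 \le n$ with $a, b \ge 2$ and collects the results in a set, so that integers with more than one representation are counted only once. It reproduces $F(100)=2$ and, by the same logic, should reproduce the other stated values, although it is far too slow for the limits a full solution to the problem would require.

```python
def F(n):
    """Count distinct integers x <= n that can be written as a^2 * b^3 with a, b >= 2."""
    values = set()
    a = 2
    while a * a * 8 <= n:               # b >= 2, so the smallest cube factor is 2**3 = 8
        b = 2
        while a * a * b ** 3 <= n:
            values.add(a * a * b ** 3)  # the set discards duplicate representations
            b += 1
        a += 1
    return len(values)

print(F(100))        # 2  (only 32 and 72, as stated above)
print(F(2 * 10**4))  # expected to print 130, matching the given value
```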
Given a complete, weighted graph with non-negative edge costs, find a route that visits every node exactly once with minimum cost.
With $w=(1+\epsilon)\times\lvert V\rvert$, the TSP-PTAS could be used to decide the Hamiltonian cycle problem in polynomial time.
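For context, a compressed sketch of the standard reduction behind this claim (my paraphrase, stated under the usual assumptions): given a graph $G=(V,E)$, build a complete graph on $V$ in which edges of $G$ cost $1$ and all other edges cost $w$. If $G$ has a Hamiltonian cycle, the optimal tour costs exactly $\lvert V\rvert$; otherwise every tour uses at least one non-edge and costs at least $\lvert V\rvert - 1 + w$. Taking $w=(1+\epsilon)\times\lvert V\rvert$ makes $\lvert V\rvert - 1 + w > (1+\epsilon)\times\lvert V\rvert$ (for $\lvert V\rvert \ge 2$), so a $(1+\epsilon)$-approximate tour costs at most $(1+\epsilon)\times\lvert V\rvert$ if and only if $G$ is Hamiltonian. A PTAS for general (non-metric) TSP would therefore decide Hamiltonicity in polynomial time, which is impossible unless P = NP.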
I discuss a geometric interpretation of the twisted indexes of 3d (softly broken) $\mathcal{N}=4$ gauge theories on $S^1 \times \Sigma$ where $\Sigma$ is a closed genus $g$ Riemann surface, mainly focussing on quivers with unitary gauge groups. The path integral localises to a moduli space of solutions to generalised vortex equations on $\Sigma$, which can be understood algebraically as quasi-maps to the Higgs branch. I demonstrate that the twisted indexes computed in previous work reproduce the virtual Euler characteristic of the moduli spaces of twisted quasi-maps. I investigate 3d $\mathcal{N}=4$ mirror symmetry in this context, which implies an equality of enumerative invariants associated to mirror pairs of Higgs branches under the exchange of equivariant and degree counting parameters. I will conclude with some remarks about how holomorphic Morse theory can be used to access the spaces of supersymmetric ground states in limits where $\mathcal{N}=4$ supersymmetry is fully restored. These spaces of ground states may be related to the spaces of conformal blocks for the VOAs introduced by Costello and Gaiotto.
Xianghua Zhang, Hweon Park, Sung-Sik Han, Jung Woo Kim and Chang-Young Jang.
ER$\alpha$ regulates chromosome alignment and spindle dynamics during mitosis.. Biochemical and biophysical research communications 456(4):919–25, January 2015.
Abstract Estrogen receptors are activated by the hormone estrogen and they control cell growth by altering gene expression as a transcription factor. So far two estrogen receptors have been found: ER$\alpha$ and ER$\beta$. Estrogen receptors are also implicated in the development and progression of breast cancer. Here, we found that ER$\alpha$ localized on the spindle and spindle poles at the metaphase during mitosis. Depletion of ER$\alpha$ generated unaligned chromosomes in metaphase cells and lagging chromosomes in anaphase cells in a transcription-independent manner. Furthermore, the levels of $\beta$-tubulin and $\gamma$-tubulin were reduced in ER$\alpha$-depleted cells. Consistent with this, polymerization of microtubules in ER$\alpha$-depleted cells and turnover rate of $\alpha$/$\beta$-tubulin were decreased than in control cells. We suggest that ER$\alpha$ regulates chromosome alignment and spindle dynamics by stabilizing microtubules during mitosis.
Geethu Emily Thomas, Jamuna S Sreeja, K K Gireesh, Hindol Gupta and Tapas K Manna.
+TIP EB1 downregulates paclitaxel‑induced proliferation inhibition and apoptosis in breast cancer cells through inhibition of paclitaxel binding on microtubules.. International journal of oncology 46(1):133–46, January 2015.
Abstract Microtubule plus‑end‑binding protein (+TIP) EB1 has been shown to be upregulated in breast cancer cells and promote breast tumor growth in vivo. However, its effect on the cellular actions of microtubule‑targeted drugs in breast cancer cells has remained poorly understood. By using cellular and biochemical assays, we demonstrate that EB1 plays a critical role in regulating the sensitivity of breast cancer cells to anti‑microtubule drug, paclitaxel (PTX). Cell viability assays revealed that EB1 expression in the breast cancer cell lines correlated with the reduction of their sensitivity to PTX. Knockdown of EB1 by enzymatically‑prepared siRNA pools (esiRNAs) increased PTX‑induced cytotoxicity and sensitized cells to PTX‑induced apoptosis in three breast cancer cell lines, MCF‑7, MDA MB‑231 and T47D. Apoptosis was associated with activation of caspase‑9 and an increase in the cleavage of poly(ADP‑ribose) polymerase (PARP). p53 and Bax were upregulated and Bcl2 was downregulated in the EB1‑depleted PTX‑treated MCF‑7 cells, indicating that the apoptosis occurs via a p53‑dependent pathway. Following its upregulation, the nuclear accumulation of p53 and its association with cellular microtubules were increased. EB1 depletion increased PTX‑induced microtubule bundling in the interphase cells and induced formation of multiple spindle foci with abnormally elongated spindles in the mitotic MCF‑7 cells, indicating that loss of EB1 promotes PTX‑induced stabilization of microtubules. EB1 inhibited PTX‑induced microtubule polymerization and diminished PTX binding to microtubules in vitro, suggesting that it modulates the binding sites of PTX at the growing microtubule ends. Results demonstrate that EB1 downregulates inhibition of PTX‑induced proliferation and apoptosis in breast cancer cells through a mechanism in which it impairs PTX‑mediated stabilization of microtubule polymerization and inhibits PTX binding on microtubules.
Agnieszka Marczak and Aneta Rogalska.
TUBB3 role in the response of tumor cells to epothilones and taxanes.. Postȩpy higieny i medycyny doświadczalnej (Online) 69:158–64, January 2015.
Abstract Because of the increased incidence of cancer and the development of resistance after treatment with typical drugs, new insights into the mechanisms of action of individual compounds are extremely valuable. In this article, we focus on taxanes, drugs belonging to the group of microtubule stabilizers, and their new generation - epothilones. Given that the molecular target of these compounds is microtubules, our attention was focused primarily on the role of overexpression of one of the tubulin isotypes in the response of tumor cells, particularly ovarian cancer, to treatment with these compounds. On the basis of the literature data it can be concluded that one reason for the ineffectiveness of taxanes is the resistance arising in the case of overexpression of $\beta$-tubulin class III (TUBB3). Epothilones, however, due to their ability to bind equally to $\beta$-tubulin classes I and III, are effective in these cells, giving them an advantage over taxanes. It is necessary to emphasize the role of microRNA, transcription factors and other proteins associated with the activation of microtubules in the development of resistance to taxanes and in overcoming this resistance by the epothilones. Particularly interesting seems to be the link between expression of TUBB3 and Gli proteins, which are end-effectors of the Hedgehog pathway. Thanks to the confirmation that Gli1 overexpression is associated with decreased response to chemotherapy, it was possible to sensitize cells to epothilones after addition of a suitable inhibitor.
Xiantao Li, Ximu Hu, Xiaoqing Li and Xuran Hao.
Overexpression of Tau Downregulated the mRNA Levels of Kv Channels and Improved Proliferation in N2A Cells.. PloS one 10(1):e0116628, January 2015.
Abstract Microtubule binding protein tau has a crucial function in promoting the assembly and stabilization of microtubules. Besides tuning the action potentials, voltage-gated K+ channels (Kv) are important for cell proliferation and appear to play a role in the development of cancer. However, little is known about the possible interaction of tau with Kv channels in various tissues. In the present study, tau plasmids were transiently transfected into mouse neuroblastoma N2A cells to explore the possible linkages between tau and Kv channels. This treatment led to a downregulation of the mRNA levels of several Kv channels, including Kv2.1, Kv3.1, Kv4.1, Kv9.2, and KCNH4, but no significant alteration was observed for Kv5.1 and KCNQ4. Furthermore, the macroscopic currents through Kv channels were reduced by 36.5% at +60 mV in tau-transfected N2A cells. The proliferation rates of N2A cells were also improved by the induction of tau expression and by incubation with TEA (tetraethylammonium) for 48 h, by 120.9% and 149.3%, respectively. Following cotransfection with tau in HEK293 cells, the mRNA levels and corresponding currents of Kv2.1 declined significantly compared with single Kv2.1 transfection. Our data indicated that overexpression of tau reduced the mRNA levels of Kv channels and the related currents. The effects of tau overexpression on Kv channels provided an alternative explanation for low sensitivity to anti-cancer chemicals in some specific cancer tissues.
Ai-Jun Li, Yue-Hua Zheng, Guo-Dong Liu, Wei-Sheng Liu, Pei-Cheng Cao and Zhen-Fu Bu.
Efficient delivery of docetaxel for the treatment of brain tumors by cyclic RGD-tagged polymeric micelles.. Molecular medicine reports 11(4):3078–86, 2015.
Abstract The treatment of glioblastoma, and other types of brain cancer, is limited due to the poor transport of drugs across the blood brain barrier and poor penetration of the blood‑brain‑tumor barrier. In the present study, cyclic Arginine‑Glycine‑Aspartic acid‑D‑Tyrosine‑Lysine [c(RGDyK)], that has a high binding affinity to integrin $\alpha$v$\beta$3 receptors, that are overexpressed in glioblastoma cancers, was employed as a novel approach to target cancer by delivering therapeutic molecules intracellularly. The c(RGDyK)/docetaxel polylactic acid‑polyethylene glycol (DTX‑PLA‑PEG) micelle was prepared and characterized for various in vitro and in vivo parameters. The specific binding affinity of the Arginine‑Glycine‑Aspartic acid (RGD) micelles, to the integrin receptor, enhanced the intracellular accumulation of DTX, and markedly increased its cytotoxic efficacy. The effect of microtubule stabilization was evident in the inhibition of glioma spheroid volume. Upon intravenous administration, c(RGDyK)/DTX‑PLA‑PEG showed enhanced accumulation in brain tumor tissues through active internalization, whereas non‑targeted micelles showed limited transport ability. Furthermore, RGD‑linked micelles showed marked anti‑glioma activity in U87MG malignant glioma tumor xenografts, and significantly suppressed the growth of tumors without signs of systemic toxicity. In conclusion, the results of the present study suggest that ligand‑mediated drug delivery may improve the efficacy of brain cancer chemotherapy.
Chetna Tyagi, Ankita Gupta, Sukriti Goyal, Jaspreet Dhanjal and Abhinav Grover.
Fragment based group QSAR and molecular dynamics mechanistic studies on arylthioindole derivatives targeting the $\alpha$-$\beta$ interfacial site of human tubulin.. BMC genomics 15 Suppl 9:S3, December 2014.
Abstract BACKGROUND: A number of microtubule disassembly blocking agents and inhibitors of tubulin polymerization have been elements of great interest in anti-cancer therapy, some of them even entering into the clinical trials. One such class of tubulin assembly inhibitors is of arylthioindole derivatives which results in effective microtubule disorganization responsible for cell apoptosis by interacting with the colchicine binding site of the $\beta$-unit of tubulin close to the interface with the $\alpha$ unit. We modelled the human tubulin $\beta$ unit (chain D) protein and performed docking studies to elucidate the detailed binding mode of actions associated with their inhibition. The activity enhancing structural aspects were evaluated using a fragment-based Group QSAR (G-QSAR) model and was validated statistically to determine its robustness. A combinatorial library was generated keeping the arylthioindole moiety as the template and their activities were predicted. RESULTS: The G-QSAR model obtained was statistically significant with r2 value of 0.85, cross validated correlation coefficient q2 value of 0.71 and pred_r2 (r2 value for test set) value of 0.89. A high F test value of 65.76 suggests robustness of the model. Screening of the combinatorial library on the basis of predicted activity values yielded two compounds HPI (predicted pIC50 = 6.042) and MSI (predicted pIC50 = 6.001) whose interactions with the D chain of modelled human tubulin protein were evaluated in detail. A toxicity evaluation resulted in MSI being less toxic in comparison to HPI. CONCLUSIONS: The study provides an insight into the crucial structural requirements and the necessary chemical substitutions required for the arylthioindole moiety to exhibit enhanced inhibitory activity against human tubulin. The two reported compounds HPI and MSI showed promising anti cancer activities and thus can be considered as potent leads against cancer. The toxicity evaluation of these compounds suggests that MSI is a promising therapeutic candidate. This study provided another stepping stone in the direction of evaluating tubulin inhibition and microtubule disassembly degeneration as viable targets for development of novel therapeutics against cancer.
Anne Martinez, Emmanuelle Soleilhac, Caroline Barette, Renaud Prudent, Gustavo Jabor Gozzi, Emilie Vassal-Stermann, Catherine Pillet, Attilio Di Pietro, Marie-Odile Fauvarque and Laurence Lafanechère.
Novel Synthetic Pharmacophores Inducing a Stabilization of Cellular Microtubules.. Current cancer drug targets, December 2014.
Abstract Microtubule drugs have been widely used in cancer chemotherapies. Although microtubules are subject to regulation by signal transduction mechanisms, their pharmacological modulation has so far relied on compounds that bind to the tubulin subunit. Using a cell-based assay designed to probe the microtubule polymerization status, we identified two pharmacophores, CM09 and CM10, as cell-permeable microtubule stabilizing agents. These synthetic compounds do not affect the assembly state of purified microtubules in vitro but they profoundly suppress microtubule dynamics in vivo. Moreover, they exert cytotoxic effects on several cancer cell lines including multidrug resistant cell lines. Therefore, these classes of compounds represent novel attractive leads for cancer chemotherapy.
Dalip Kumar, N Maruthi Kumar, Mukund P Tantak, Maiko Ogura, Eriko Kusaka and Takeo Ito.
Synthesis and identification of $\alpha$-cyano bis(indolyl)chalcones as novel anticancer agents.. Bioorganic & medicinal chemistry letters 24(22):5170–4, November 2014.
Abstract Microwave-assisted synthesis of 23 $\alpha$-cyano bis(indolyl)chalcones (6a-w) and their in vitro anticancer activity against three human cancer cell lines have been discussed. Among the synthesized chalcones, compound 6n was found to be the most potent and selective against A549 lung cancer cell line (IC50 = 0.8 $\mu$M). In a preliminary mechanism of action studies some $\alpha$-cyano bis(indolyl)chalcones were found to enhance tubulin polymerization suggesting these compounds could act as microtubule stabilizing agents.
B Sathish Kumar, Amit Kumar, Jyotsna Singh, Mohammad Hasanain, Arjun Singh, Kaneez Fatima, Dharmendra K Yadav, Vinay Shukla, Suaib Luqman, Feroz Khan, Debabrata Chanda, Jayanta Sarkar, Rituraj Konwar, Anila Dwivedi and Arvind S Negi.
Synthesis of 2-alkoxy and 2-benzyloxy analogues of estradiol as anti-breast cancer agents through microtubule stabilization.. European journal of medicinal chemistry 86:740–51, October 2014.
Abstract 2-Methoxyestradiol (2ME2) is an investigational anticancer drug. In the present study, 2-alkoxyesters/acid and 2-benzyloxy analogues of estradiol have been synthesized as analogues of 2ME2. Three of the derivatives exhibited significant anticancer activity against human breast cancer cell lines. The best analogue of the series i.e. 24 showed stabilization of tubulin polymerisation process. It was substantiated by confocal microscopy and molecular docking studies where 24 occupied 'paclitaxel binding pocket' to stabilize the polymerisation process. Compound 24 significantly inhibited MDA-MB-231 cells (IC50: 7 $\mu$M) and induced arrest of cell cycle and apoptosis in MDA-MB-231 cells. In acute oral toxicity, 24 was found to be non-toxic and well tolerated in Swiss albino mice up to 1000 mg/kg dose.
Veronika Graml, Xenia Studera, Jonathan L D Lawson, Anatole Chessel, Marco Geymonat, Miriam Bortfeld-Miller, Thomas Walter, Laura Wagstaff, Eugenia Piddini and Rafael E Carazo-Salas.
A genomic Multiprocess survey of machineries that control and link cell shape, microtubule organization, and cell-cycle progression.. Developmental cell 31(2):227–39, October 2014.
Abstract Understanding cells as integrated systems requires that we systematically decipher how single genes affect multiple biological processes and how processes are functionally linked. Here, we used multiprocess phenotypic profiling, combining high-resolution 3D confocal microscopy and multiparametric image analysis, to simultaneously survey the fission yeast genome with respect to three key cellular processes: cell shape, microtubule organization, and cell-cycle progression. We identify, validate, and functionally annotate 262 genes controlling specific aspects of those processes. Of these, 62% had not been linked to these processes before and 35% are implicated in multiple processes. Importantly, we identify a conserved role for DNA-damage responses in controlling microtubule stability. In addition, we investigate how the processes are functionally linked. We show unexpectedly that disruption of cell-cycle progression does not necessarily affect cell size control and that distinct aspects of cell shape regulate microtubules and vice versa, identifying important systems-level links across these processes.
Amelia L Parker, Maria Kavallaris and Joshua A McCarroll.
Microtubules and their role in cellular stress in cancer.. Frontiers in oncology 4:153, January 2014.
Abstract Microtubules are highly dynamic structures, which consist of $\alpha$- and $\beta$-tubulin heterodimers, and are involved in cell movement, intracellular trafficking, and mitosis. In the context of cancer, the tubulin family of proteins is recognized as the target of the tubulin-binding chemotherapeutics, which suppress the dynamics of the mitotic spindle to cause mitotic arrest and cell death. Importantly, changes in microtubule stability and the expression of different tubulin isotypes as well as altered post-translational modifications have been reported for a range of cancers. These changes have been correlated with poor prognosis and chemotherapy resistance in solid and hematological cancers. However, the mechanisms underlying these observations have remained poorly understood. Emerging evidence suggests that tubulins and microtubule-associated proteins may play a role in a range of cellular stress responses, thus conferring survival advantage to cancer cells. This review will focus on the importance of the microtubule-protein network in regulating critical cellular processes in response to stress. Understanding the role of microtubules in this context may offer novel therapeutic approaches for the treatment of cancer.
Ning Ding, Lingyan Ping, Yunfei Shi, Lixia Feng, Xiaohui Zheng, Yuqin Song and Jun Zhu.
Thiamet-G-mediated inhibition of O-GlcNAcase sensitizes human leukemia cells to microtubule-stabilizing agent paclitaxel.. Biochemical and biophysical research communications 453(3):392–7, 2014.
Abstract Although the microtubule-stabilizing agent paclitaxel has been widely used for treatment of several cancer types, particularly for the malignancies of epithelia origin, it only shows limited efficacy on hematological malignancies. Emerging roles of O-GlcNAcylation modification of proteins in various cancer types have implicated the key enzymes catalyzing this reversible modification as targets for cancer therapy. Here, we show that the highly selective O-GlcNAcase (OGA) inhibitor thiamet-G significantly sensitized human leukemia cell lines to paclitaxel, with an approximate 10-fold leftward shift of IC50. Knockdown of OGA by siRNAs or inhibition of OGA by thiamet-G did not influence the cell viability. Furthermore, we demonstrated that thiamet-G binds to OGA in competition with 4-methylumbelliferyl N-acetyl-$\beta$-d-glucosaminide dehydrate, an analogue of O-GlcNAc UDP, thereby suppressing the activity of OGA. Importantly, inhibition of OGA by thiamet-G decreased the phosphorylation of microtubule-associated protein Tau and caused alterations of microtubule network in cells. It is noteworthy that paclitaxel combined with thiamet-G resulted in more profound perturbations on microtubule stability than did either one alone, which may implicate the underlying mechanism of thiamet-G-mediated sensitization of leukemia cells to paclitaxel. These findings thus suggest that a regimen of paclitaxel combined with OGA inhibitor might be more effective for the treatment of human leukemia.
Ning Ding, Lingyan Ping, Lixia Feng, Xiaohui Zheng, Yuqin Song and Jun Zhu.
Histone deacetylase 6 activity is critical for the metastasis of Burkitt's lymphoma cells.. Cancer cell international 14(1):139, January 2014.
Abstract BACKGROUND: Burkitt's lymphoma is an aggressive malignancy with high risk of metastasis to extranodal sites, such as bone marrow and central nervous system. The prognosis of metastatic Burkitt's lymphoma is poor. Here we sought to identify a role of histone deacetylase 6 (HDAC6) in the metastasis of Burkitt's lymphoma cells. METHODS: Burkitt's lymphoma cells were pharmacologically treated with niltubacin, tubacin or sodium butyrate (NaB) or transfected with siRNAs to knock down the expression of HDAC6. Cell migration and invasion ability were measured by transwell assay, and cell cycle progression was analyzed by flow cytometry. Cell adhesion and proliferation was determined by CellTiter-Glo luminescent cell viability assay kit. Cell morphological alteration and microtubule stability were analyzed by immunofluorescence staining. Effect of niltubacin, tubacin and NaB on acetylated tubulin and siRNA efficacy were measured by western blotting. RESULTS: Suppression of histone deacetylase 6 activity significantly compromised the migration and invasion of Burkitt's lymphoma cells, without affecting cell proliferation and cell cycle progression. Mechanistic study revealed that HDAC6 modulated chemokine induced cell shape elongation and cell adhesion probably through its action on microtubule dynamics. CONCLUSIONS: We identified a critical role of HDAC6 in the metastasis of Burkitt's lymphoma cells, suggesting that pharmacological inhibition of HDAC6 could be a promising strategy for the management of metastatic Burkitt's lymphoma.
Nadia D'Ambrosi, Simona Rossi, Valeria Gerbino and Mauro Cozzolino.
Rac1 at the crossroad of actin dynamics and neuroinflammation in Amyotrophic Lateral Sclerosis.. Frontiers in cellular neuroscience 8:279, January 2014.
Abstract Rac1 is a major player of the Rho family of small GTPases that controls multiple cell signaling pathways, such as the organization of cytoskeleton (including adhesion and motility), cell proliferation, apoptosis and activation of immune cells. In the nervous system, in particular, Rac1 GTPase plays a key regulatory function of both actin and microtubule cytoskeletal dynamics and thus it is central to axonal growth and stability, as well as dendrite and spine structural plasticity. Rac1 is also a crucial regulator of NADPH-dependent membrane oxidase (NOX), a prominent source of reactive oxygen species (ROS), thus having a central role in the inflammatory response and neurotoxicity mediated by microglia cells in the nervous system. As such, alterations in Rac1 activity might well be involved in the processes that give rise to Amyotrophic Lateral Sclerosis (ALS), a complex syndrome where cytoskeletal disturbances in motor neurons and redox alterations in the inflammatory compartment play pivotal and synergic roles in the final disease outcomes. Here we will discuss the genetic and mechanistic evidence indicating the relevance of Rac1 dysregulation in the pathogenesis of ALS.
Alyssa N Coyne, Bhavani Bagevalu Siddegowda, Patricia S Estes, Jeffrey Johannesmeyer, Tina Kovalik, Scott G Daniel, Antony Pearson, Robert Bowser and Daniela C Zarnescu.
Futsch/MAP1B mRNA is a translational target of TDP-43 and is neuroprotective in a Drosophila model of amyotrophic lateral sclerosis.. The Journal of neuroscience : the official journal of the Society for Neuroscience 34(48):15962–74, 2014.
Abstract TDP-43 is an RNA-binding protein linked to amyotrophic lateral sclerosis (ALS) that is known to regulate the splicing, transport, and storage of specific mRNAs into stress granules. Although TDP-43 has been shown to interact with translation factors, its role in protein synthesis remains unclear, and no in vivo translation targets have been reported to date. Here we provide evidence that TDP-43 associates with futsch mRNA in a complex and regulates its expression at the neuromuscular junction (NMJ) in Drosophila. In the context of TDP-43-induced proteinopathy, there is a significant reduction of futsch mRNA at the NMJ compared with motor neuron cell bodies where we find higher levels of transcript compared with controls. TDP-43 also leads to a significant reduction in Futsch protein expression at the NMJ. Polysome fractionations coupled with quantitative PCR experiments indicate that TDP-43 leads to a futsch mRNA shift from actively translating polysomes to nontranslating ribonuclear protein particles, suggesting that in addition to its effect on localization, TDP-43 also regulates the translation of futsch mRNA. We also show that futsch overexpression is neuroprotective by extending life span, reducing TDP-43 aggregation, and suppressing ALS-like locomotor dysfunction as well as NMJ abnormalities linked to microtubule and synaptic stabilization. Furthermore, the localization of MAP1B, the mammalian homolog of Futsch, is altered in ALS spinal cords in a manner similar to our observations in Drosophila motor neurons. Together, our results suggest a microtubule-dependent mechanism in motor neuron disease caused by TDP-43-dependent alterations in futsch mRNA localization and translation in vivo.
Qiao-Hong Chen and David G I Kingston.
Zampanolide and dactylolide: cytotoxic tubulin-assembly agents and promising anticancer leads.. Natural product reports 31(9):1202–26, 2014.
Abstract Zampanolide is a marine natural macrolide and a recent addition to the family of microtubule-stabilizing cytotoxic agents. Zampanolide exhibits unique effects on tubulin assembly and is more potent than paclitaxel against several multi-drug resistant cancer cell lines. A high-resolution crystal structure of $\alpha$$\beta$-tubulin in complex with zampanolide explains how taxane-site microtubule-stabilizing agents promote microtubule assembly and stability. This review provides an overview of current developments of zampanolide and its related but less potent analogue dactylolide, covering their natural sources and isolation, structure and conformation, cytotoxic potential, structure-activity studies, mechanism of action, and syntheses.
Tim N Beck, Emmanuelle Nicolas, Meghan C Kopp and Erica A Golemis.
Adaptors for disorders of the brain? The cancer signaling proteins NEDD9, CASS4, and PTK2B in Alzheimer's disease.. Oncoscience 1(7):486–503, 2014.
Abstract No treatment strategies effectively limit the progression of Alzheimer's disease (AD), a common and debilitating neurodegenerative disorder. The absence of viable treatment options reflects the fact that the pathophysiology and genotypic causes of the disease are not well understood. The advent of genome-wide association studies (GWAS) has made it possible to broadly investigate genotypic alterations driving phenotypic occurrences. Recent studies have associated single nucleotide polymorphisms (SNPs) in two paralogous scaffolding proteins, NEDD9 and CASS4, and the kinase PTK2B, with susceptibility to late-onset AD (LOAD). Intriguingly, NEDD9, CASS4, and PTK2B have been much studied as interacting partners regulating oncogenesis and metastasis, and all three are known to be active in the brain during development and in cancer. However, to date, the majority of studies of these proteins have emphasized their roles in the directly cancer relevant processes of migration and survival signaling. We here discuss evidence for roles of NEDD9, CASS4 and PTK2B in additional processes, including hypoxia, vascular changes, inflammation, microtubule stabilization and calcium signaling, as potentially relevant to the pathogenesis of LOAD. Reciprocally, these functions can better inform our understanding of the action of NEDD9, CASS4 and PTK2B in cancer.
Yan Jouroukhin, Regina Ostritsky, Yaniv Assaf, Galit Pelled, Eliezer Giladi and Illana Gozes.
NAP (davunetide) modifies disease progression in a mouse model of severe neurodegeneration: protection against impairments in axonal transport.. Neurobiology of disease 56:79–94, 2013.
Abstract NAP (davunetide) is a novel neuroprotective compound with mechanism of action that appears to involve microtubule (MT) stabilization and repair. To evaluate, for the first time, the impact of NAP on axonal transport in vivo and to translate it to neuroprotection in a severe neurodegeneration, the SOD1-G93A mouse model for amyotrophic lateral sclerosis (ALS) was used. Manganese-enhanced magnetic resonance imaging (MRI), estimating axonal transport rates, revealed a significant reduction of the anterograde axonal transport in the ALS mice compared to healthy control mice. Acute NAP treatment normalized axonal transport rates in these ALS mice. Tau hyperphosphorylation, associated with MT dysfunction and defective axonal transport, was discovered in the brains of the ALS mice and was significantly reduced by chronic NAP treatment. Furthermore, in healthy wild type (WT) mice, NAP reversed axonal transport disruption by colchicine, suggesting drug-dependent protection against axonal transport impairment through stabilization of the neuronal MT network. Histochemical analysis showed that chronic NAP treatment significantly protected spinal cord motor neurons against ALS-like pathology. Sequential MRI measurements, correlating brain structure with ALS disease progression, revealed a significant damage to the ventral tegmental area (VTA), indicative of impairments to the dopaminergic pathways relative to healthy controls. Chronic daily NAP treatment of the SOD1-G93A mice, initiated close to disease onset, delayed degeneration of the trigeminal, facial and hypoglossal motor nuclei as was significantly apparent at days 90-100 and further protected the VTA throughout life. Importantly, protection of the VTA was significantly correlated with longevity and overall, NAP treatment significantly prolonged life span in the ALS mice.
Ewa Usarek, Magdalena Kuźma-Kozakiewicz, Birgit Schwalenstöcker, Beata Kaźmierczak, Christoph Münch, Albert C Ludolph and Anna Barańczyk-Kuźma.
Tau isoforms expression in transgenic mouse model of amyotrophic lateral sclerosis. Neurochemical research 31(5):597–602, 2006.
Abstract Tau is a protein involved in regulation of microtubule stability, axonal differentiation and transport. Alteration of retrograde transport may lead to motor neuron degeneration. Thus alternative mRNA splicing and expression of tau isoforms were studied in a transgenic mouse model harboring the human SOD1 G93A mutation. The studies were performed on cortex, hippocampus and spinal cord of 64- and 120-day-old animals (presymptomatic and symptomatic stage) and wild type controls. Exon 10 was found in all studied tissues. The 2N isoform containing exons 2 and 3 (+2+3) and the 1N (+2-3) predominated over the 0N (-2-3) in brain regions of the studied mice. The 2N expression was significantly lower in cortex and hippocampus of symptomatic animals compared to analogue control tissues. The decrease in 2N expression resulted in lower levels of total tau mRNA and tau protein. No changes in tau expression were observed in spinal cord of studied animals.
F Letournel, A Bocquet, F Dubas, A Barthelaix and J Eyer.
Stable tubule only polypeptides (STOP) proteins co-aggregate with spheroid neurofilaments in amyotrophic lateral sclerosis. Journal of neuropathology and experimental neurology 62(12):1211–9, 2003.
Abstract A major cytopathological hallmark of amyotrophic lateral sclerosis (ALS) is the presence of axonal spheroids containing abnormally accumulated neurofilaments. The mechanism of their formation, their contribution to the disease, and the possibility of other co-aggregated components are still enigmatic. Here we analyze the composition of such lesions with special reference to stable tubule only polypeptide (STOP), a protein responsible for microtubule cold stabilization. In normal human brain and spinal cord, the distribution of STOP proteins is uniform between the cytoplasm and neurites of neurons. However, all the neurofilament-rich spheroids present in the tissues of affected patients are intensely labeled with 3 different anti-STOP antibodies. Moreover, when neurofilaments and microtubules are isolated from spinal cord and brain, STOP proteins are systematically co-purified with neurofilaments. By SDS-PAGE analysis, no alteration of the migration profile of STOP proteins is observed in pathological samples. Other microtubular proteins, like tubulin or kinesin, are inconstantly present in spheroids, suggesting that a microtubule destabilizing process may be involved in the pathogenesis of ALS. These results indicate that the selective co-aggregation of neurofilament and STOP proteins represents a new cytopathological marker for spheroids. | CommonCrawl |
in which $\alpha$ indicates the shifted angle.
Using a cosine-dominated signal as a benchmark, the theoretical phase-shifted signal should be a sine signal with the same frequency. However, the phase-shifted signal shows large differences in the edge region compared to the theoretical one, and it matches the signal computed from the numerical Hilbert transformation.
So it is dangerous to shift phases numerically; the result is completely wrong in the edge region, extending over roughly 2–3 times the maximum period. | CommonCrawl |
DML-CZ - Czech Digital Mathematics Library: No hedgehog in the product?
Assuming OCA, we shall prove that for some pairs of Fréchet $\alpha_4$-spaces $X, Y$, the Fréchetness of the product $X\times Y$ implies that $X\times Y$ is $\alpha_4$. Assuming MA, we shall construct a pair of spaces satisfying the assumptions of the theorem.
[Ar] Archangel'skii A.V.: The frequency spectrum of a topological space and the classification of spaces. Soviet. Math. Dokl. 13 (1972), 265-268.
[BL] Brendle J., LaBerge T.: Forcing tightness of products of fans. Fund. Math. 150 3 (1996), 211-226.
[ES] Erdös P., Shelah S.: Separability properties of almost-disjoint families of sets. Israel J. Math. 12 (1972), 207-214.
[LL] LaBerge T., Landver A.: Tightness in products of fans and pseudo-fans. Topology Appl. 65 (1995), 237-255.
[MS] Martin D.A., Solovay R.M.: Internal Cohen extension. Ann. Math. Logic 2 (1970), 143-178.
[Mi] Michael E.: A quintuple quotient quest. Gen. Topology Appl. 2 (1972), 91-138.
[No] Products of $\langle \alpha_i\rangle$-spaces.
[Ny] Nyikos P.J.: Convergence in topology. in: Recent Progress in General Topology, ed. by M. Hušek and J. van Mill, North-Holland, 1992 pp.537-570.
[Si] Simon P.: A hedgehog in the product. Acta. Univ. Carolin. Math. Phys. 39 (1998), 147-153.
[Sw] Siwiec F.: Sequence covering and countably bi-quotient mappings. Gen. Topology Appl. 1 (1971), 143-154.
[To] Todorcevic S.: Partition problems in topology. Contemporary Mathematics, 84, Amer. Math. Soc., Providence, RI, 1989. | CommonCrawl |
This course will describe the classic differential geometry of curves, tubes and ribbons, and associated coordinate systems. We will prove various classic mathematical theorems such as the Weyl-Hotelling formula for tube volumes, and the relation between Link, Twist and Writhe, which couples differential geometry and topological invariance for closed and knotted framed curves. While we will not consider applications explicitly in this course, much of the mathematical material that will be described is central in various problems of mechanics, including nanostructures and topological fluid mechanics.
3) The geometry of coordinates on SO(3) and $2\pi$ vs $4\pi$. Euler angles, Cayley vectors, Euler parameters, and quaternions.
4) Fattened curves, Tubes and Ribbons. Contact framings, global radius of curvature, and ideal shapes.
These notes are meant to supplement your personal notes. They are to be understood as a first draft and as such may contain inaccuracies and mistakes. Be critical and do not hesitate to let us know if you find errors and/or typos. Furthermore, they make no pretence of being exhaustive. The material of the course is by definition what is exposed during lectures and exercise sessions. Finally, as the semester progresses, come back to the website and check frequently: pay attention to the version numbers. The notes will be edited as we progress during semester. The original document is a collection of chapters corresponding to the different lectures given last year. The horizontal red line indicates where the oral lecture is at.
The keen student will also find relevant material on last year's webpage.
There is an older polycopie associated with a DNA modelling masters course with some chapters, specifically chapters 8 and 9 on this page. This material will be incorporated in the new polycopie in due course.
Week 1 (19.2) Overview of course and physical demonstrations of the Calugareanu $Lk = Tw + Wr$ formula. Basic vector and matrix notation. Arc-length, curvature, (geometrical) torsion, and Serret-Frenet equations for a space curve.
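As a pointer for the Week 1 material, the Serret-Frenet equations for an arc-length parametrized space curve with unit tangent $T$, principal normal $N$ and binormal $B$ form the standard system below (sign conventions for the torsion $\tau$ vary between texts):
$$ T' = \kappa N, \qquad N' = -\kappa T + \tau B, \qquad B' = -\tau N, $$
where $\kappa(s)$ is the curvature, $\tau(s)$ the (geometrical) torsion, and primes denote derivatives with respect to arc length $s$.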
Week 2 (26.2) The Lie groups $O(3)$, $SO(3)$, and $SE(3)$. Left and right actions in $SO(3)$ and $SE(3)$, both algebraic and geometric interpretation. Curves in $SO(3)$ and in $SE(3)$. Darboux vector of a curve in $SE(3)$. Darboux vector of the Frenet frame. Framed curves, intrinsic, extrinsic, adapted or not.
Week 3 (5.3) Factorisations of curves in $SO(3)$ and relations between their Darboux vectors. Relations between two adapted framings particularly important. Factored curves in $SE(3)$, and the special case of two offset curves in $\mathbb R^3$.
Week 4 (12.3) Statement of Calugareanu Theorem for a smoothly closed curve $x$ and a smoothly closed offset curve $y$. First definitions of a) Link $Lk$ of two closed non-intersecting curves, b) Total twist $Tw$ of a unit normal field about a curve $x$, and c) Writhe $Wr$ of a non-self-intersecting curve $x$. Start of discussion of the properties of Link.
Week 5 (19.3) Further properties of Link. Computing Link by homotopy to a sum of Hopf links. Rules for computing Link via a count of signed crossings in one particular projection. Start of relating signed crossing formulas for Link to the double integral definition via the signed area formulas on the unit sphere. The Mathematica files and images used in the demonstration of Link as signed area are in this archive.
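For reference, the double integral definition of Link referred to here is the Gauss linking integral; for two disjoint smoothly closed curves $x(s)$ and $y(\sigma)$ it reads, up to the orientation convention adopted in the lectures,
$$ Lk(x,y) = \frac{1}{4\pi} \oint \oint \frac{\bigl(x(s)-y(\sigma)\bigr) \cdot \bigl(x'(s) \times y'(\sigma)\bigr)}{\lvert x(s)-y(\sigma) \rvert^{3}} \, ds \, d\sigma .
$$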
Week 6 (26.3) End of discussion of the boundary of the zodiacus, and singularities of the projection of the surface $y(\sigma) - x(s)$ (compare with last week's exercises). Connection between signed area integral definition of Link and counts of signed crossings in one specific (generic) projection. Curves lying on general surfaces, and the surface normal extrinsic adapted framing. The special case of curves lying on the unit sphere, and the particular case of the tangent indicatrix or tantrix of a curve. Introduction to the next exercise series, tantrices of closed curves.
Week 7 (9.4) Discussion of properties of Writhe. Interpretation as signed area of the zodiacus (now of one curve instead of two as was the case for Link), and singularities/discontinuities in the boundary of the zodiacus coming from the tantrix of the curve x. Interpretation of writhe in terms of global radius of curvature circles, namely as the sign-indefinite weighted sum of radii of circles that pass through two points of the curve and are tangent at one point. Such global radius of curvature circles will re-appear in our discussion of Normal Injectivity Radius in Chapter 4 at the end of the course.
Week 8 (16.4) As JHM had to be away at short notice this week, the material of this week, lecture and exercises, was a more or less stand alone lecture that would usually appear later in the semester. Coordinates on the rotation group. Euler angles, Euler-Rodrigues parameters, and quaternions. Besides the polycopie (part III, chapter 9), see also these notes from Dichmann.
Week 9 (23.4) Proof of the main C-F-W Theorem. Read through Chapter 7 of the polycopie. Note that Chapter 7 was originally written as a standalone document, so much of the first part of the Chapter is a rapid review of material that has already been treated in more detail in the current version of the polycopie.
Week 10 (30.4) Strand passages and curves with an even or odd 'number of turns'. Register angle between two framings. Discussion of particular framings of curves: Frenet-Serret, surface, and parallel transport, framings.
Week 11 (7.5) Continuation of special (throughout always assumed to be adapted) framings of closed curves: completion of natural framings, and introduction of the writhe framing (which is always closed and zero link for any smoothly closed curve). Interpretation of $Tw + Wr$ as the discontinuity angle for closed curves with open framings, and the lemma that framings of closed curves are closed iff $Tw + Wr$ is an integer. Open problem (not examinable): for framed open curves with a spherical closure between the point-tangent data pairs at each end (and with surface normal framing, and either for general curve closure or with biarcs) is there a simple geometrical interpretation of the sum $Tw + Wr$?
Week 12 (14.5) Today's lectures would naturally follow those given in Week 8, which together make up Part III of the course in the polycopie. Discussion of the multiply covered circle in light of Euler parameters and tracking rotations mod $4\pi$ along curves of rotation matrices. The midpoint rule applied to a first order linear matrix ODE to motivate the Cayley transform. In the case of the matrix group $SO(3)$, connections between the Darboux vector and the Cayley vector of the Cayley transform (sometimes also called the Gibbs vector). Then the Euler-Rodrigues formula in terms of the Cayley vector and connections to the Euler-Rodrigues parameters via stereographic projection.
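As a reminder of the standard formulas touched on this week (conventions for signs and for the direction of the Cayley map may differ from those used in the polycopie): for a rotation by angle $\theta$ about a unit axis $u$, with $[v]_\times$ the skew-symmetric matrix satisfying $[v]_\times w = v \times w$, the Euler-Rodrigues formula and the Cayley parametrization with Cayley/Gibbs vector $c = \tan(\theta/2)\, u$ read
$$ R = I + \sin\theta \, [u]_\times + (1-\cos\theta)\, [u]_\times^2 , \qquad R = (I - [c]_\times)^{-1}(I + [c]_\times) .
$$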
Week 13 (21.5) Pentecost Monday, no lecture.
Week 14 (28.5) Volume of a tube. Condition for local self-intersection avoidance. Equilibrium of strings and the case of frictionless contact in particular. Here are notes meant to complement your own.
A summary of the exercises is provided to aid your revision.
There is no text book that we are aware of covering the material of this course. The first part on Frenet frames is however very standard and is discussed in any book on the Elementary Differential Geometry of Curves and Surfaces, of which there are many. One good one is by D. J. Struik, and another (from which some of the series questions were taken) is by M. P. Do Carmo.
The citations below are to research or survey articles concerning the material of the course. The citations have links to PDF versions of the articles but for copyright reasons the links are restricted to students in the class via password protection, login: frames, password as given in class.
Questions de topologie en biologie moléculaire, C. Weber, Gazette des mathématiciens, Vol. 64 (1995), pp. 29--42.
This one is from a particular point of view (the author Pohl was White's PhD supervisor): DNA and differential geometry, W. F. Pohl, The Mathematical Intelligencer, vol. 3 (1980), pp. 20--27.
An article on magnetohydrodynamics which relates helicity of a vector field to the CFW. The introduction contains a historical perspective of the CFW theorem.
Helicity and the Calugareanu invariant, H. K. Moffat and R. Ricca Proc. R. Soc. Lond. A, vol 439 (1992), pp 411--429.
An article on scroll waves, with an extensive discussion of the CFW theorem, including another historical perspective.
The differential geometry of scroll waves, J. J. Tyson and S. H. Strogatz Int. Jour. Bifurcation and Chaos, vol 1 (1991), pp 723--744.
A topology textbook which gives a friendly introduction to the subject. Chapters 10 and 14 are the most pertinent to the course.
A beautiful little book "A singular mathematical promenade" by Étienne Ghys. The most pertinent chapter for this course is the second to last one all about the Gauss linking number.
L'integrale de Gauss et l'analyse des noeuds tridimensionnels, G. Călugăreanu, Rev. Math. pures appl, vol. 4 (1959).
Sur les classes d'isotopie des noeuds tridimensionnels et leurs invariants, G. Călugăreanu, Czechoslov. Math. J., vol. 11 (1961), pp. 588--625.
O Teorema Asupra Inlantuirilor Tridimensionale de Curbe Inchise, G. Călugăreanu, Comunicarile Academiei Republicii Populare Romine, (1961), pp. 829--832.
Formulae for the calculation and estimation of writhe, J. Aldinger, I. Klapper, and M. Tabor, J. Knot Theory and Its Ramifications, vol. 04 (1995), 343.
An article that outlines a geometric point of view of the proof of CFW: the proof comes as a direct application of Stokes theorem on the pull-back of a certain 2-form. Note however that a discussion of the smoothness of the curves concerned is missing as well as a discussion of why it is that the various integrals defined do converge. A similar, and perhaps more accessible, discussion of Stokes Theorem applied to the spanning Seifert surface of a closed curve (including knotted curves) is also presented in the Tyson and Strogatz article cited above.
On White's formula, M.H. Eggar, J. Knot Theory Ramifications, vol. 09 (2000).
The Self-Linking Number of a Closed Space Curve, W. F. Pohl, J. Math. Mech., vol. 17 (1968), pp. 975--985.
Ribbons: Their Geometry and Topology, C. K. Au and T. C. Woo, Computer-Aided Design and Applications, Vol. 1 (2004), pp. 1--6.
An article on molecules with Möbius topology: Möbius molecules with twists and writhes, S. R. Schaller and Rainer Herges, Chem. Commun., vol. 49 (2013), pp. 1254-1260.
Link, Twist, Energy, and the Stability of DNA Minicircles, K. A. Hoffman, R. S. Manning, and J. H. Maddocks, Biopolymers, vol. 70 (2003), pp. 145--157.
Geometry of Călugăreanu theorem, M. R. Dennis and J. H. Hannay, Proc. Roy. Soc. A, vol. 461 (2005), pp. 3245--3254.
The article that introduced global radius of curvature: Global curvature, thickness, and the ideal shapes of knots, O. Gonzalez and J. H. Maddocks, Proc. Natl. Acad. Sci. USA, vol. 96 (1994), pp. 4769-4773.
Best packing in proteins and DNA, A. Stasiak, and J. H. Maddocks, Nature, vol. 406 (2000), pp. 251--253.
Optimal shapes of compact strings, A. Maritan, C. Micheletti, A. Trovato, and J. R. Banavar, Nature, vol. 406 (2000), pp. 287--290.
On the writhe of non-closed curves, E. L. Starostin, arXiv 0212095, (2002).
Computing the Writhing Number of a Polygonal Knot, P. K. Agarwal, H. E. Edelsbrunner, and Y. Wang, Discrete Comput Geom 32:37–53 (2004).
The writhe of open and closed curves, M. A. Berger and C. Prior, J. Phys. A: Math. Gen., vol 39, (2006), pp. 8321–8348.
The extended polar writhe: a tool for open curves mechanics, C. Prior and S. Neukirch (2015) hal-01228386.
Writhing Geometry at Finite Temperature: Random Walks and Geometric phases for Stiff Polymers, A. C. Maggs.
Writhing geometry of open DNA, V. Rossetto and A. C. Maggs.
Computation of Writhe in Modeling of Supercoiled DNA, K. Klenin and J. Langowski, Biopolymers 54.5 (2000): 307--317.
White's original contribution in generalising "CFW" to higher dimensions: Self-linking and the Gauss integral in higher dimensions, J.H. White, American Jour. Math., vol. 91 (1969), pp. 693--728.
A paper on generalising Link, Twist, and Writhe to non-Euclidean three dimensional spaces. Although it assumes prior knowledge of differential geometry on Lie groups, Section 2 is accessible by all and provides a nice historical background. Electrodynamics and the Gauss linking integral on the 3-sphere and in hyperbolic 3- space, D. DeTurck and H. Gluck, Jour. Math. Phys. vol 49 (2008), 023504. | CommonCrawl |
Why are the trig functions versine, haversine, exsecant, etc, seldom utilized in present society?
Why is this proof of a congruence relation valid?
Famous fractions: Can any "special" numbers be approximated by simple ratios like $3.14\ldots$ as $22/7$?
Proving surjectivity of some map from a power set to a subset of integers. | CommonCrawl |
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1412-1420, 2017.
We present the first treatment of the arc length of the GP with more than a single output dimension. GPs are commonly used for tasks such as trajectory modelling, where path length is a crucial quantity of interest. Previously, only paths in one dimension have been considered, with no theoretical consideration of higher dimensional problems. We fill the gap in the existing literature by deriving the moments of the arc length for a stationary GP with multiple output dimensions. A new method is used to derive the mean of a one-dimensional GP over a finite interval, by considering the distribution of the arc length integrand. This technique is used to derive an approximate distribution over the arc length of a vector-valued GP in $\mathbb{R}^n$ by moment matching the distribution. Numerical simulations confirm our theoretical derivations.
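For context, the arc length functional in question is the standard one for a differentiable vector-valued path $f\colon [a,b] \to \mathbb{R}^n$,
$$ s = \int_a^b \lVert f'(t) \rVert \, dt = \int_a^b \sqrt{\sum_{i=1}^{n} \bigl(f_i'(t)\bigr)^2} \, dt ,
$$
so that for a Gaussian process the integrand, and hence $s$ itself, is a random quantity.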
| CommonCrawl |
Large-eddy simulations (LES) are used to study round jets impinging on a rough surface at nozzle-to-plate distance $H/D=1$ ($D$ is the nozzle exit diameter) and Reynolds number $Re=U_oD/\nu= 6.6\times10^4$ ($U_o$ is the mean jet velocity). Our aim is to explore the effect of roughness on the evolution of vortices in impinging jets. Two cases, one with turbulent and the other with laminar inflow, are performed. Roughness is represented by uniformly distributed but randomly oriented ellipsoids of equivalent sand-grain height $k_s/D = 0.02$, modelled by an immersed boundary method. Results are compared to our previous LES simulations of jets impinging on a smooth surface. A wider and weaker wall jet is observed in the rough-surface turbulent jet, compared to the reference turbulent one with a smooth surface. The vortices and the peak of the wall jet velocity shift away from the surface. Secondary vorticity is formed and lifted up, as in the smooth-surface case. The wall shear stress increases significantly; the separated vorticity, however, has the same strength as the one in the smooth case. The roughness causes higher turbulent fluctuations, and leads to the transition to a turbulent wall jet even when the inflow is laminar, changing the vortex dynamics during vortex interaction. | CommonCrawl |
Can I say that $A$ is a pseudo-differential operator?
Gårding's inequality for $\mathbb R^n$ implies that for bounded smooth domains?
Every Pseudo-Differential Operator is Pseudo-Local operator. | CommonCrawl |
The latter seems intuitive but is there any problem in imagining it that way?
By the transfer principle, anything you do with the standard reals looks exactly the same when done internally to the hyperreals.
In particular, there is a hyperreal number line. And (internally) it looks exactly like the standard number line.
Every standard real number has a halo of hyperreal numbers surrounding it, and this halo doesn't contain any standard reals.
There are more hyperreals off to the right and left, larger in magnitude than anything in the halo of a standard real.
If you take the picture of the extended reals (and extended hyperreals) instead — that is, add $\pm \infty$ as the endpoints of the number line, then the hyperreals of the last bullet can be gathered up into the halos of $+\infty$ and $-\infty$.
So, the picture you are trying to imagine looks fairly reasonable. Keisler's book uses something like that a lot, where you look at the standard number line, and then when desired, you use a "telescope" to "zoom in" on some point to see an infinitesimal segment of the hyperreal line.
Do Hyperreal numbers include infinitesimals?
Why do the infinitely many infinitesimal errors from each term of an infinite Riemann sum still add up to only an infinitesimal error?
What is the use of hyperreal numbers?
How do I prove that every hyperreal has a standard part after constructing the reals from the hyperrationals?
Can nonstandard analysis give a uniform probability distribution over the integers?
In non-standard analysis, should we necessarily consider derivative as slope between two infinitesimally apart points? | CommonCrawl |
α-Linolenic acid (ALA) is an n−3 fatty acid. It is one of two essential fatty acids (the other being linoleic acid), so called because they are necessary for health and cannot be produced within the human body. They must be acquired through diet. ALA is an omega-3 fatty acid found in seeds (chia, flaxseed, hemp, see also table below), nuts (notably walnuts), and many common vegetable oils. In terms of its structure, it is named all–cis-9,12,15-octadecatrienoic acid. In physiological literature, it is listed by its lipid number, 18:3, and (n−3); its isomer GLA is 18:3 (n−6).
The word linolenic is an irregular derivation from linoleic, which itself is derived from the Greek word linon (flax). Oleic means "of or relating to oleic acid" because saturating linoleic acid's omega-6 double bond produces oleic acid.
α-Linolenic acid was first isolated by Rollett as cited in J. W. McCutcheon's synthesis in 1942, and referred to in Green and Hilditch's 1930s survey. It was first artificially synthesized in 1995 from C6 homologating agents. A Wittig reaction of the phosphonium salt of [(Z-Z)-nona-3,6-dien-1-yl]triphenylphosphonium bromide with methyl 9-oxononanoate, followed by saponification, completed the synthesis.
α-Linolenic acid can only be obtained by humans through their diets because the absence of the required 12- and 15-desaturase enzymes makes de novo synthesis from stearic acid impossible. Eicosapentaenoic acid (EPA; 20:5, n−3) and docosahexaenoic acid (DHA; 22:6, n−3) are readily available from fish and algae oil and play a vital role in many metabolic processes. These can also be synthesized by humans from dietary α-linolenic acid, but with an efficiency of only a few percent.
Because the efficacy of n−3 long-chain polyunsaturated fatty acid (LC-PUFA) synthesis decreases down the cascade of α-linolenic acid conversion, DHA synthesis from α-linolenic acid is even more restricted than that of EPA. Conversion of ALA to DHA is higher in women than in men.
However, when partially hydrogenated, part of the unsaturated fatty acids become unhealthy trans fats. Consumers are increasingly avoiding products that contain trans fats, and governments have begun to ban trans fats in food products. These regulations and market pressures have spurred the development of low-α-linolenic acid soybeans. These new soybean varieties yield a more stable oil that doesn't require hydrogenation for many applications, thus providing trans fat-free products, such as frying oil.
Several consortia are bringing low-α-linolenic acid soy to market. DuPont's effort involves silencing the FAD2 gene that codes for Δ6-desaturase, giving a soy oil with very low levels of both α-linolenic acid and linoleic acid. Monsanto Company has introduced to the market Vistive, their brand of low α-linolenic acid soybeans, which is less controversial than GMO offerings, as it was created via conventional breeding techniques.
^ Loreau, O; Maret, A; Poullain, D; Chardigny, JM; Sébédio, JL; Beaufrère, B; Noël, JP (2000). "Large-scale preparation of (9Z,12E)-1-(13)C-octadeca-9,12-dienoic acid, (9Z,12Z,15E)-1-(13)C-octadeca-9,12,15-trienoic acid and their 1-(13)C all-cis isomers". Chemistry and Physics of Lipids. 106 (1): 65–78. doi:10.1016/S0009-3084(00)00137-7. PMID 10878236.
^ a b c Beare-Rogers (2001). "IUPAC Lexicon of Lipid Nutrition" (PDF). Archived (PDF) from the original on 12 February 2006. Retrieved 22 February 2006.
^ Rollett, A. (1909). "Zur kenntnis der linolensäure und des leinöls". Z. Physiol. Chem. 62 (5–6): 422–431. doi:10.1515/bchm2.1909.62.5-6.422.
^ Green, TG; Hilditch, TP (1935). "The identification of linoleic and linolenic acids". Biochem. J. 29 (7): 1552–63. PMC 1266662. PMID 16745822.
^ Sandri, J.; Viala, J. (1995). "Direct preparation of (Z,Z)-1,4-dienic units with a new C6 homologating agent: synthesis of alpha-linolenic acid". Synthesis. 3 (3): 271–275. doi:10.1055/s-1995-3906.
^ Chapman, David J.; De-Felice, John; Barber, James (May 1983). "Growth temperature effects on thylakoid membrane lipid and protein content of pea chloroplasts 1". Plant Physiol. 72 (1): 225–228. doi:10.1104/pp.72.1.225. PMC 1066200. PMID 16662966.
^ Manthey, F. A.; Lee, R. E.; Hall Ca, 3rd (2002). "Processing and cooking effects on lipid content and stability of alpha-linolenic acid in spaghetti containing ground flaxseed". J. Agric. Food Chem. 50 (6): 1668–71. doi:10.1021/jf011147s. PMID 11879055.
^ "OXIDATIVE STABILITY OF FLAXSEED LIPIDS DURING BAKING".
^ Li, Thomas S. C. (1999). "Sea buckthorn: New crop opportunity". Perspectives on new crops and new uses. Alexandria, VA: ASHS Press. pp. 335–337. Archived from the original on 22 September 2006. Retrieved 2006-10-28.
^ "Omega-3 fatty acids". University of Maryland Medical Center.
^ Breanne M Anderson; David WL Ma (2009). "Are all n-3 polyunsaturated fatty acids created equal?". Lipids in Health and Disease. 8 (33): 33. doi:10.1186/1476-511X-8-33. PMC 3224740. PMID 19664246.
^ Shiels M. Innis (2007). "Fatty acids and early human development". Early Human Development. 83 (12): 761–766. doi:10.1016/j.earlhumdev.2007.09.004. PMID 17920214.
^ Burdge, GC; Calder, PC (2005). "Conversion of alpha-linolenic acid to longer-chain polyunsaturated fatty acids in human adults" (PDF). Reproduction, Nutrition, Development. 45 (5): 581–97. doi:10.1051/rnd:2005047. PMID 16188209.
^ "Conversion of $\alpha$-linolenic acid to longer-chain polyunsaturated fatty acids in human adults".
^ Ramon, JM; Bou, R; Romea, S; Alkiza, ME; Jacas, M; Ribes, J; Oromi, J (2000). "Dietary fat intake and prostate cancer risk: a case-control study in Spain". Cancer Causes & Control. 11 (8): 679–85. doi:10.1023/A:1008924116552. PMID 11065004.
^ Brouwer, IA; Katan, MB; Zock, PL (2004). "Dietary alpha-linolenic acid is associated with reduced risk of fatal coronary heart disease, but increased prostate cancer risk: a meta-analysis". The Journal of Nutrition. 134 (4): 919–22. doi:10.1093/jn/134.4.919. PMID 15051847.
^ De Stéfani, E; Deneo-Pellegrini, H; Boffetta, P; Ronco, A; Mendilaharsu, M (2000). "Alpha-linolenic acid and risk of prostate cancer: a case-control study in Uruguay". Cancer Epidemiology, Biomarkers & Prevention. 9 (3): 335–8. PMID 10750674.
^ Koralek DO, Peters U, Andriole G, et al. (2006). "A prospective study of dietary α-linolenic acid and the risk of prostate cancer (United States)". Cancer Causes & Control. 17 (6): 783–791. doi:10.1007/s10552-006-0014-x. PMID 16783606.
^ Simon, JA; Chen, YH; Bent, S (May 2009). "The relation of alpha-linolenic acid to the risk of prostate cancer". American Journal of Clinical Nutrition. 89 (5): 1558S–1564S. doi:10.3945/ajcn.2009.26736E. PMID 19321563.
^ Kinney, Tony. "Metabolism in plants to produce healthier food oils (slide #4)" (PDF). Archived from the original (PDF) on 29 September 2006. Retrieved 2007-01-11.
^ Fitzgerald, Anne; Brasher, Philip. "Ban on trans fat could benefit Iowa". Truth About Trade and Technology. Archived from the original on 27 September 2007. Retrieved 2007-01-03.
^ Monsanto. "ADM to process Monsanto's Vistive low linolenic soybeans at Indiana facility". Archived from the original on 11 December 2006. Retrieved 6 January 2007.
^ Kinney, Tony. "Metabolism in plants to produce healthier food oils" (PDF). Archived from the original (PDF) on 29 September 2006. Retrieved 2007-01-11.
^ Pan A, Chen M, Chowdhury R, et al. (December 2012). "α-Linolenic acid and risk of cardiovascular disease: a systematic review and meta-analysis". Am. J. Clin. Nutr. (Systematic review). 96 (6): 1262–73. doi:10.3945/ajcn.112.044040. PMC 3497923. PMID 23076616. | CommonCrawl |
Looking at the road map it seems that most of the things and tasks listed there are either [DONE] or deferred to [LATER] by now. So to me it seems it is time to consider starting the Public Beta of PhysicsOverflow now :-).
- Glossary of PhysicsOverflow (19). I am not quite sure about what kind of things should be listed there?
Prepare and plan the promotion of PhysicsOverflow as discussed for example here.
So, after communicating with Polarkernel, it seems feasible to start the Public Beta of PhysicsOverflow next week at Midnight 0000 hours 4 April 2014 (Fri) [CEST]. Is this okay with everybody?
@physicsnewbie That's a great idea, but it may give the impression to ex-TP users that the site will, like TP.SE, go down if it doesn't attract too much of an audience. This is obviously not true.
Nope, I tried it, but it isn't possible.
We still don't know how fast users are going to gain reputation, so the reputation values aren't confirmed till the end of the public beta.
The reviews section isn't done yet. Speaking of which I have an idea for local contribution hosting, I will post about it later.
Not all questions have been imported yet.
@dimension10 Maybe that's the problem: we're trying to copy the format of the development of TP.SE, without asking if we have to. Physics overflow is now public, and that's all there is to it, isn't it? No need to give it a "public beta" status.
I think that the TP.SE question Public beta: Attracting users? is also relevant to us.
These plug-ins are good to have, and the first one is almost a necessity. I wonder why Q2A does not incorporate it in its core.
The second one gives admins some great tools to see how well the site is faring at a glance. It could even be made public I hope.
Additionally, we also may want to decide on solutions for having a chat room. That somehow reminds me; we may also consider retaining the tpproposal blog as a (hopefully $\infty$) life-long blog for Physics Overflow. The chat room; I'm not sure if we should wait for the public beta to start or not, but I just thought of reminding everyone about it.
We should invite everyone listed in the blog post.
[DONE] I (and anyone else who is interested) need to finish [DONE] 14 and [DONE] 15 of the FAQ.
[DONE] The long users need to be corrected (I will do this).
[DONE; to be expanded] Post a "Help Promote Physics Overflow" post.
[TO BE DONE ASAP] Set your alarm clocks right for 2300 GMT/UTC (Wed), 0000 CEST, 0430 IST, 0700 ECST, 1900 (Wed) EAST, or 1600 (Wed) WAST.
[DONE] Restart the site, accessible only for super-administrators.
[DONE] Open the new site and going online.
[DONE] Open the champagne and propose a toast to this great team!
On the blog, user10001 (dushya) had the idea of promoting the site to various physicists in the world.
Restart the site, accessible only for super-administrators.
Open the new site and going online.
Open the champagne and propose a toast to this great team!
- Comment2Answer is installed and running. You may try, but on my local server, it did not work.
The Webmaster warning message will disappear, because I have switched off php-warnings now for security reasons. Our PHP server does not allow setting time limits.
- Q2A Embed Media is installed. You may configure it to your needs under Admin->Plugins.
- Q2A Featured Questions is installed. The ID of featured questions can be inserted using the options of the plugin at Admin->Plugins.
- Installing the Image Manager makes no sense. It searches for images stored in the database as blobs. As long as we do not yet have an upload facility, we have no blobs in the database.
The main page is set to Q&A. Can be revoked very easy if required.
But we have at least briefly announced it on the theoretical physics forums thread Lumo installed for us.
I have just asked quid again in an MO chat room if he personally thinks we could write a short announcement in their meta. I am working on a text now. | CommonCrawl |
WHAT IS... a Rauzy Fractal?
We define the Rauzy gasket as a subset of the standard two-dimensional simplex associated with letter frequencies of ternary episturmian words. We prove that the Rauzy gasket is homeomorphic to the usual Sierpiński gasket (by a two-dimensional generalization of the Minkowski ? function) and to the Apollonian gasket (by a map which is smooth on the boundary of the simplex). We prove that it is also homothetic to the invariant set of the fully subtractive algorithm, hence of measure 0.
Author(s): Arnoux Pierre, Schmidt Thomas A.
We adjust Arnoux's coding, in terms of regular continued fractions, of the geodesic flow on the modular surface to give a cross section on which the return map is a double cover of the natural extension for the $\alpha$-continued fractions, for each $\alpha \in (0,1]$. The argument is sufficiently robust to apply to the Rosen continued fractions and their recently introduced $\alpha$-variants.
There has been much recent work on the geometric representation of Pisot substitutions and the explicit construction of Markov partitions for Pisot toral automorphisms. We give a construction that extends this to the general hyperbolic case. For the sake of simplicity, we consider a simple example of an automorphism of the free group on 4 generators whose associated matrix has 4 distinct complex eigenvalues, two of them of modulus larger than 1 and the other 2 of modulus smaller than 1 (non-Pisot case). Using this generator, we build substitution polygonal tilings of the contracting plane and the expanding plane of the matrix. We prove that these substitution tilings can be lifted in a unique way to stepped surfaces approximating each of these planes. The vertices of each of these stepped surfaces can be projected to an "atomic surface", a compact set with fractal boundary contained in the other plane. We prove that both tilings can be refined to exact self-similar tilings whose tiles have fractal boundaries and can be obtained by iteration or by a "cut and project" method by using the atomic surface as the window. Using the self-similar tiling, one can build a numeration system associated to a complex $\lambda$-expansion; the natural extension of the $\lambda$-expansion associated with this number system is the linear map obtained by abelianization of the free group automorphism. This gives an explicit Markov partition of this hyperbolic toral automorphism. The fractal domains can be used to define a pseudo-group of translations which gives transversal dynamics in the sense of Vershik (1994) or numeration systems in the sense of Kamae (2005). The construction can be extended to a larger class of free group automorphisms, each of which can be used to build substitution rules and dynamical systems.
We consider a substitution associated with the Arnoux-Yoccoz interval exchange transformation (IET) related to the tribonacci substitution. We construct the so-called stepped lines associated with the fixed points of the substitution in the abelianization (symbolic) space. We analyze various projections of the stepped line, recovering the Rauzy fractal, a Peano curve related to work in [Arnoux 88], another Peano curve related to the work of [McMullen 09] and [Lowenstein et al. 07], and also the interval exchange transformation itself. | CommonCrawl |
A polyhedron $P$ has the Integer Carathéodory Property if the following holds. For any positive integer $k$ and any integer vector $w$ in $kP$, there exist affinely independent integer vectors $x_1,\ldots,x_t$ in $P$ and positive integers $n_1,\ldots,n_t$ such that $n_1+\cdots+n_t=k$ and $w=n_1x_1+\cdots+n_tx_t$. In this paper we prove that if $P$ is a (poly)matroid base polytope or if $P$ is defined by a TU matrix, then $P$ and projections of $P$ satisfy the integer Carathéodory property.
Gijswijt, D., & Regts, G. (2010). Polyhedra with the integer Carathéodory property. arXiv.org e-Print archive. Cornell University Library. | CommonCrawl |
When people buy a home they usually have to borrow an appreciable fraction of its value from a bank or other financial institution.
Htsi si na raitlcf ero aparobal no oht wb oerka oceds.
Problem 1. You have a huge pile of 1¢, 2¢, 5¢, 10¢, 20¢, 50¢ and $\$1$ coins. Some number of coins, $N$ say, total $X$¢. Show that you can make up $\$N$ using $X$ coins.
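One possible line of attack (a sketch only, not necessarily the intended solution): the set of denominations in cents, $\{1,2,5,10,20,50,100\}$, is closed under $d \mapsto 100/d$, so each coin of value $d_i$ cents may be replaced by $d_i$ coins of value $100/d_i$ cents. Then
$$ \text{number of coins} = \sum_{i=1}^{N} d_i = X, \qquad \text{total value} = \sum_{i=1}^{N} d_i \cdot \frac{100}{d_i} = 100N\ \text{cents} = N\ \text{dollars}.
$$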
Q1072. Is it possible to fill the empty circles in the diagram below with the integers $0, 1, \ldots , 9$ so that the sum of the numbers at the vertices of each shaded triangle is the same?
Q1064. The numbers $1, 2, \ldots , 16$ are placed in the cells of a $4 \times 4$ table as shown in the left hand diagram below. | CommonCrawl |
Clavel, Caroline and Canales, Angeles and Gupta, Garima and Canada, Javier F and Penades, Soledad and Surolia, Avadhesha and Jimenez, Jesus B (2007) NMR Investigation of the Bound Conformation of Natural and Synthetic Oligomannosides to Banana Lectin. In: European Journal of Organic Chemistry, 2007 (10). pp. 1577-1585.
The conformational behaviour of three mannose-containing oligosaccharides, namely, the $\alpha1 \rightarrow 3[\alpha1 \rightarrow 6]$ trisaccharide, a heptasaccharide with $\alpha1 \rightarrow 2$, $\alpha1 \rightarrow 3$ and $\alpha1 \rightarrow 6$ linkages and a tetrasaccharide consisting of $\alpha1 \rightarrow 3$ and $\alpha1 \rightarrow 2$ linkages, when bound to banana lectin (BanLec) has been evaluated by trNOE NMR methods and docking calculations. It was found that the molecular recognition event involves a conformational selection process with only one of the conformations present in the free state of the sugar being recognised at the lectin binding site. | CommonCrawl |
After learning a lot about applying OO principles, interfaces, and a bit about argument handling, I post a revised version of my previous code. This is still an exercise for me to improve my general coding for a simple program. As such I would like to ask you to pay special attention to the application of OO programming principles and exception handling, because those are the areas I feel weakest in.
The code is tested and - just like before - works properly.
Step 2 was changed from "If line contains ">" " to "If Line contains ">" as first character", since that reduces the number of comparisons the if-statement has to make - I think.
The code structure changed a lot compared to last time and is now a lot more complex, which is why it has its own section.
Argument parsing is no longer handled by ArgumentHandler, which was split into a class ArgumentCollection - which does the argument parsing now - and a class ArgumentHandler - which triggers the argument parsing and contains the code that slowy previously suggested putting into a createArgumentHandler() method. ArgumentCollection implements an interface ArgumentManager that provides a bunch of methods to manipulate, set and test arguments of the various flags. ArgumentHandler now also implements an interface Configuration that provides the methods getSourceFile() and getSinkFile() that return the files to read from/write to. For the sake of some semblance of brevity, 'ArgumentCollection' is not shown (it has over 250 lines of code).
Besides Configuration, the Client also works with the classes 'FileLineReader' and 'FileLineWriter' (both implementing AutoCloseable), which have the methods readLine()/writeLine().
"Output file name " + filename + " was not accessible or could not be created. Printing to "
* argument parsing, set its argument to its default value.
One major thing this definitely taught me - if you can avoid it, don't code easy tasks in Java. This program totals over 300 lines of code (not counting comments and package/import statements) to do something that, as Vogel612 pointed out, you can do in 1 line with sed in a Linux terminal.
I'll handle the big thing first.
Your current code is quite a bit over-engineered (the rest of the answer elaborates on that). In my opinion, if something is meant to be used for only a very specific purpose, there is no reason to make it more general than it needs to be, adding complexity and reducing maintainability in the process. Take my word for it - I learnt it the hard way.
In your case, BufferedReader is already sufficiently general for your needs - why bother implementing a class which just wraps it within another layer of indirection without providing any different or extra functionality?
BufferedReader and BufferedWriter implement AutoCloseable, so it's noise to abstract that behind classes which do not add significant functionality. (Yeah, I'm mostly an FP programmer now so I don't really care about what OOP purists say about decoupling interface from implementation, etc., etc.). But, you know, those OOP purists are (mostly) right. So what are we missing here?
However, there's an easier way out. You want PrintWriter. It'll completely, utterly replace FileLineWriter (of course it also implements AutoCloseable and is buffered). You can even use outputWriter.println(...)!
You want this constructor: public PrintWriter(File file, String csn). Why the csn bit?
Always specify the output charset!
csn is the output charset name. You want it, if, say, you move this code from macOS to Windows and the default charset changes from say MacRoman to CP-1252 or whatever. Don't make your code essentially single-platform when it's Java. Java will choose the default charset when outputting, so... you'll probably want "UTF-8" for csn.
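A minimal sketch of the suggestion (the file name and output string are placeholders, not taken from the original post):

import java.io.File;
import java.io.PrintWriter;

public class PrintWriterDemo {
    public static void main(String[] args) throws Exception {
        // Try-with-resources flushes and closes the writer automatically;
        // passing "UTF-8" pins the output charset instead of relying on the platform default.
        try (PrintWriter sinkWriter = new PrintWriter(new File("output.txt"), "UTF-8")) {
            sinkWriter.println("some processed line");
        }
    }
}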
getSourceFile and getSinkFile should return java.io.File (as their names indicate). You probably don't want them to return Strings containing the file path (principle of least surprise). If you do, name them appropriately (getSourceFilePath, etc.).
ArgumentHandler should probably be a private static inner class of RemoveSpaces_Client, as it is specific to it (if otherwise, justify in the documentation, and name ArgumentHandler more specifically (IOArgumentsHandler should be enough, a help function can be assumed to be common)).
contains would have iterated the String in order, and since the character in question is constrained by the format to always be at the start, the loop would have run for only 1 iteration before returning if the String did indeed start with the character in question, but it would have run till the end if it didn't.
When you're measuring file sizes in millions of lines, these things add up. I project about an $n\times$ performance improvement on average, if each line had $n$ characters.
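To illustrate the difference (the sample line is made up, not from the original code): startsWith decides after looking at the first character, while contains may have to scan far into the line when the marker is not at the front.

public class MarkerCheck {
    public static void main(String[] args) {
        String line = "no marker at the front, but a > much later in the line";
        // Decided after inspecting only the first character.
        System.out.println(line.startsWith(">")); // false
        // May scan a large part of the line before finding (or failing to find) the marker.
        System.out.println(line.contains(">"));   // true, but only after scanning ahead
    }
}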
Also, if you're going to be using the source/sink terminology, try to be consistent about it - the names should be sourceReader and sinkWriter.
I'd prefer a while loop for this case, as "loop while progression condition is satisfied" is more like a while loop. Of course, there's nothing wrong with using a for loop, it's just a matter of personal preference. I use whichever is convenient under the circumstances. However, as always, better to be consistent.
Where is UncheckedIOException implemented? I suspect it's a subclass of RuntimeException encapsulating the parent IOException in its cause field, and the message associated with the parent in its message field.
Edit: I have since realised thanks to the comments that UncheckedIOException is very much part of the library, and I retract my below statement and appreciate the OP simulating the logging of the occurrence of an exception.
However, I still think that the entire wrap-and-rethrow business seems convoluted in this context, however, it would benefit in the long run by reducing the burden on consumers of the OP's API by not enforcing the throws clause in all their method signatures. This is now an appreciated effort.
Then, if you're going to notify the user that an exception has occurred, print a stack trace, and exit, why bother catching the exception at all and wrapping it? The default JVM uncaught exception handler does exactly what you do manually, minus a vague error message printed to STDOUT.
Nitpick: To keep in line with standard Java camelCase identifiers, RemoveSpaces_Client should be named RemoveSpacesClient, without the underscore.
a priori: OP and I talked about the solution in the initial question, so let me explain why he came up with this solution.
I must strongly disagree with the "over-engineered" statement of Tamoghna Chowdhury. Of course, it depends, but since it is a learning exercise, it's the best opportunity to apply object-oriented principles, right? If they are not applied to an easy problem, I doubt OP will find it easy to apply them to a more complex problem. I agree, make the solution as easy as possible, but not easier. I draw the line between easy and easier earlier, it seems.
And my philosophy - and the philosophy of most of the developers I talk to, who are developing large enterprise applications - is: good code passes its tests. Good code has decent code coverage. That's how I led OP to this solution.
I highly disagree with the statement "reducing maintainability" / "adding complexity"; I state the opposite - of course, always with unit testing in mind: having unit tests not only verifies behaviour, it also - and that's imo more important - enforces good design, for one simple reason: if you can't test it, you have a design problem. Now, if you have code which is not tested and you have to refactor it, you do not know if it still works. Having a set of unit tests, which can be executed after every step of the refactoring, will always help and give you confidence. Unit tests are also very good documentation. My experience has been: code with decent test cases - and therefore more classes, abstraction, etc. - which may seem "a bit over-engineered", is easier to maintain than code without tests.
The single responsibility principle's goal - which we tried to apply here - is to reduce complexity: a class should only have one reason to change. I have caught myself too often facing routines which started out as easy as this one but "historically grew", without test cases, with new requirements added without thinking too much about one's future self. It often ends up in reverse engineering, trying to test it, introducing bugs and also - the worst - staying late on Friday night. In my opinion and my experience, the initial investment will pay off sooner or later, whereas not applying the principles will be expensive.
Since I'm whining about test cases, I'm gonna start with that one. I applied the TDD approach as best as I could. I didn't care too much about naming (otherwise I'd sit here all night) or exception handling; I try to explain my "ranting" from above. I used JUnit 4 and Mockito 2.7.22.
// This test class tests two aspects: the "remove spaces when the line starts with >" aspect, and the "do not touch the line otherwise" aspect.
/* LineReader and Writer are mocked - this can easily be done, since those are interfaces.
// readers and writers get injected, so no need for a configuration here for now.
// dedicate the main method to the creation and wiring of the needed components aka "dependency injection"
The LineFactory is a plain static factory which actually creates FileLineReaders and FileLineWriters, which use LineNumberReaders and FileWriters - pretty straightforward. | CommonCrawl |
Let X be a real orientable compact differentiable manifold. Is the (co)homology of X generated by the fundamental classes of oriented subvarieties? And if not, what is known about the subgroup generated?
Rene Thom answered this in section II of "Quelques propriétés globales des variétés différentiables." Every class $x$ in $H_r(X; \mathbb Z)$ has some integral multiple $nx$ which is the fundamental class of a submanifold, so the homology is at least rationally generated by these fundamental classes.
Section II.11 works out some specific cases: for example, every homology class of a manifold of dimension at most 8 is realizable this way, but this is not true for higher dimensional manifolds and the answer in general has to do with Steenrod operations.
This is a reply to Alon's comment, but it's too long to be a comment and is probably interesting enough to be an answer.
Here's an example Thom gives of a homology class that is not realized by a submanifold: let $X=S^7/\mathbb Z_3$, with $\mathbb Z_3$ acting by rotations, and $Y=X \times X$.
Then $H^1(X;\mathbb Z_3)=H^2(X;\mathbb Z_3)=\mathbb Z_3$ (and they are related by a Bockstein); let $u$ generate $H^1$ and $v=\beta u$ be the corresponding generator of $H^2$. Then it can be shown that the class $u \otimes vu^2 - v \otimes u^3 \in H^7(Y;Z_3)$ is actually integral (i.e., in $H^7(Y;Z)$), and its Poincare dual in $H_7$ cannot be realized by a submanifold (in fact, it can't be realized by any map from a closed manifold to $Y$, which need not be the inclusion of a submanifold). This is a natural example to consider because the first obstruction to classes being realized by submanifolds comes from a mod 3 Steenrod operation, and these are easy to compute on $Y$ because $X$ is the 7-skeleton of a $K(\mathbb Z_3,1)$. Note that the class in question is 3-torsion, so trivially 3 times it is realized by a submanifold.
When is a Homology Class Represented by a Submanifold?
Are the stiefel-Whitney classes of the tangent bundle determined by the mod 2 cohomology?
Poincare dual in equivariant (co)homology?
Which cohomology theories are real- and complex-orientable?
fundamental class is the sum of simplices of triangulation of the manifold?
A simple proof that parallelizable oriented closed manifolds are oriented boundaries? | CommonCrawl |
Suppose I want to form a 12-bead necklace such that each bead is either red or blue. Necklaces which differ by a rotation are considered the same and beads of the same color are indistinguishable. How many distinct necklaces can I make in this way?
My approach to this problem is as follows. We only have to consider the cases where the number of red beads is between $0$ and $6$, as the remaining cases are symmetric. If the number of red beads is $0,1,2$ the number of necklaces is easily found to be $1,1,6$, respectively. For $3$ to $6$ red beads the counting becomes more difficult. So let's suppose we have $k$ red beads (thus $12-k$ blue beads). Starting at some red bead and moving clockwise, let the number of blue beads between consecutive red beads be $x_1,x_2,...,x_k$ (these are non-negative integers). We need $x_1+...+x_k=12-k$ and thus the number of necklaces with $k$ red beads is the number of sets (recall the elements of a set are unordered) of $k$ nonnegative integers with sum $12-k$, which we will call $f(k)$. So the answer to the problem is the sum from $k=0$ to $5$ of $2f(k)$, plus $f(6)$. The problem is finding $f(k)$. Perhaps this is the wrong approach?
You should use Burnside's lemma.
There are $4$ rotations of order $12$. Each of these stabilizes $2$ colorings.
There are $2$ rotations of order $6$. Each of these stabilizes $4$ colorings.
There are $2$ rotations of order $4$. Each of these stabilizes $8$ colorings.
There are $2$ rotations of order $3$. Each of these stabilizes $16$ colorings.
There is $1$ rotation of order $2$. It stabilizes $64$ colorings.
There is $1$ rotation of order $1$ (the identity). It stabilizes all $4096$ colorings.
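Putting these together, Burnside's lemma (the number of distinct necklaces is the average number of colorings fixed by the group elements) gives
$$ \frac{1}{12}\Bigl(4096 + 64 + 2\cdot 16 + 2\cdot 8 + 2\cdot 4 + 4\cdot 2\Bigr) = \frac{4224}{12} = 352 ,
$$
so there are $352$ distinct red/blue necklaces on $12$ beads.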
Polya's theorem is more powerful than Burnside's theorem because you then don't have to consider how the symmetry group acts on colorings; instead you only have to consider how it acts on the structure that is colored, which is much easier.
Each group element contributes a monomial $\prod_k T_k^{n_k}$ when there are $n_k$ orbits of length $k$. So, for the case of the rotation by $2\pi/6$, you have the term $T_6^2$. Adding up all the terms for all the elements of the group and dividing by the number of elements in the group yields the cycle index polynomial of the group. Then we can do a weighted counting of colorings such that the weight is the product of some arbitrary function $w$ of each color of each bead. Polya's theorem says that you have to take the cycle index polynomial and replace $T_i$ by the sum of the $i$th powers of the weights associated with each color.
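Worked out for the cyclic group $C_{12}$ acting on the $12$ bead positions, the cycle index is
$$ Z(C_{12}) = \frac{1}{12}\Bigl(T_1^{12} + T_2^{6} + 2T_3^{4} + 2T_4^{3} + 2T_6^{2} + 4T_{12}\Bigr),
$$
and substituting $T_i = 1 + x^i$ (weight $1$ for a blue bead, weight $x$ for a red bead) gives a polynomial whose coefficient of $x^k$ counts the necklaces with exactly $k$ red beads.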
E.g., the coefficient of $x^6$ is 80, so there are 80 necklaces with exactly 6 red and 6 blue beads.
What are the number of circular arrangements possible?
How many ways are there to arrange $8$ red beads and $32$ blue beads into a necklace such that there are at least $2$ blue beads between any $2$ red beads?
How many necklaces are there so that between any two red beads there are at least two blue ones? | CommonCrawl |
We propose an efficient commutative group action suitable for non-interactive key exchange in a post-quantum setting. Our construction follows the layout of the Couveignes–Rostovtsev–Stolbunov cryptosystem, but we apply it to supersingular elliptic curves defined over a large prime field $\mathbb{F}_p$, rather than to ordinary elliptic curves. The Diffie–Hellman scheme resulting from the group action allows for public-key validation at very little cost, runs reasonably fast in practice, and has public keys of only 64 bytes at a conjectured AES-128 security level, matching NIST's post-quantum security category I. | CommonCrawl |
The ARGO-YBJ collaboration Bartoli, B. ; Bernardini, P. ; Bi, X. J. ; et al.
Phys.Rev. D91 (2015) 112017, 2015.
The ARGO–YBJ experiment is a full-coverage air shower detector located at the Yangbajing Cosmic Ray Observatory (Tibet, People's Republic of China, 4300 m a.s.l.). The high altitude, combined with the full-coverage technique, allows the detection of extensive air showers in a wide energy range and offers the possibility of measuring the cosmic ray proton plus helium spectrum down to the TeV region, where direct balloon/space-borne measurements are available. The detector has been in stable data taking in its full configuration from November 2007 to February 2013. In this paper the measurement of the cosmic ray proton plus helium energy spectrum is presented in the region 3–300 TeV by analyzing the full collected data sample. The resulting spectral index is $\gamma = -2.64 \pm 0.01$, where the error is dominated by systematic uncertainties. The accurate measurement of the spectrum of light elements with a ground-based air shower detector demonstrates the possibility of extending these measurements to larger energies, where galactic cosmic ray sources should run out of power in accelerating light elements.
Proton plus helium flux measured at $5.0 \times 10^4$ GeV.
Light component energy spectrum measured by the ARGO-YBJ experiment by using the full 2008-2012 data sample in each energy bin. | CommonCrawl |
I am a data scientist at Splunk. I got my math PhD at the University of Arizona, specializing in probability theory and differential geometry, and I did a postdoc at the Courant Institute at NYU. Here is a link to my LinkedIn profile, my résumé, and my publications and projects.
30 What is an $(\infty,1)$-topos, and why is this a good setting for doing differential geometry?
21 What is a Gaussian measure?
19 What is quantum Brownian motion?
18 How do we express measurable spaces using type theory? | CommonCrawl |
As a baseline for comparison, we can fit a model to all the clinically-confirmed cases, regardless of lab confirmation status. For this, we will use a simple SIR disease model, which will be fit using MCMC.
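For orientation, the deterministic part of such an SIR model looks roughly like the following sketch (illustrative only; the initial values and the rates beta and gamma are placeholders, not the values estimated in the analysis):

import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N,                # susceptibles becoming infected
            beta * S * I / N - gamma * I,     # new infections minus recoveries
            gamma * I]                        # recoveries

t = np.linspace(0.0, 120.0, 121)              # days
y0 = [99990.0, 10.0, 0.0]                     # placeholder initial S, I, R
trajectory = odeint(sir, y0, t, args=(0.4, 0.2))  # placeholder beta and gamma

In the actual analysis the transmission parameters are not fixed but sampled by MCMC.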
Rather than assume all clinical cases are true cases, we can adjust the model to account for lab confirmation probability. This is done by including a sub-model that estimates age group-specific probabilities of confirmation, and using these probabilities to estimate the number of lab-confirmed cases. These estimates are then plugged into the model in place of the clinically-confirmed cases.
where $a(i)$ denotes the appropriate age group for the individual indexed by i. There were 16 age groups, the first 15 of which were 5-year age intervals $[0,5), [5, 10), \ldots , [70, 75)$, with the 16th interval including all individuals 75 years and older.
Since the age interval choices were arbitrary, and the confirmation probabilities of adjacent groups likely correlated, we modeled the correlation structure directly, using a multivariate logit-normal model. Specifically, we allowed first-order autocorrelation among the age groups, whereby the variance-covariance matrix retained a tridiagonal structure.
From this, the confirmation probabilities were specified as multivariate normal on the inverse-logit scale.
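A minimal sketch of that construction, written directly with numpy/scipy rather than the probabilistic-programming framework actually used for fitting (all numerical values are placeholders):

import numpy as np
from scipy.special import expit               # inverse-logit

n_age = 16
mu = np.zeros(n_age)                          # placeholder means on the logit scale
sigma2, rho = 1.0, 0.5                        # placeholder variance and lag-1 covariance
Sigma = np.diag(np.full(n_age, sigma2))
i = np.arange(n_age - 1)
Sigma[i, i + 1] = Sigma[i + 1, i] = rho       # tridiagonal: first-order autocorrelation only

theta = np.random.multivariate_normal(mu, Sigma)  # latent logits, one per age group
p_confirm = expit(theta)                      # age group-specific confirmation probabilities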
Age classes are defined in 5-year intervals.
Lab-checked observations are extracted for use in estimating lab confirmation probability.
Extract confirmed and clinical subset, with no missing county information.
#Extract cases by age and time.
Run models for June 15 and July 15 observation points, both with and without clinical confirmation.
Proportion of population susceptible, June model.
Epidemic intensity estimates at June and July, per district.
Epidemic intensity in June and July (with lab confirmation).
Epidemic intensity in June for lab-confirmed and clinical-confirmed. | CommonCrawl |
I am trying to find alternatives to the sigmoid function for logistic regression. I am curious whether I can replace the sigmoid function with any cumulative distribution function, and which one would be best.
At a minimum, you are going to need a distribution whose support is $(-\infty, \infty)$ before you could consider using its CDF as a link function.
There are many (I suppose infinite) possible link functions that can be used, though. You don't have to use the logit, and it isn't necessarily the best (although we need to be more precise about what "best" means). You may be interested in reading my answers here: Difference between logit and probit models, or here: Is the logit function always the best for regression modeling of binary data?
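For intuition, here is a small sketch (my own, not from the linked answers) comparing three CDFs with support on $(-\infty, \infty)$ that are commonly used as inverse link functions:

import numpy as np
from scipy.special import expit
from scipy.stats import norm, cauchy

eta = np.linspace(-4, 4, 9)               # values of the linear predictor
print(np.round(expit(eta), 3))            # logistic CDF -> the usual logit link
print(np.round(norm.cdf(eta), 3))         # standard normal CDF -> probit link
print(np.round(cauchy.cdf(eta), 3))       # Cauchy CDF -> "cauchit" link

All three map the real line to (0, 1); they differ mainly in how heavy their tails are, which is one reasonable sense in which one could be "best" for a given dataset.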
Is the logit function always the best for regression modeling of binary data?
What does the name "Logistic Regression" mean?
Logistic regression: why bothering with the sigmoid? | CommonCrawl |
Building upon the one-step replica symmetry breaking formalism, duly understood and ramified, we show that the sequence of ordered extreme values of a general class of Euclidean-space logarithmically correlated random energy models (logREMs) behaves in the thermodynamic limit as a randomly shifted decorated exponential Poisson point process. The distribution of the random shift is determined solely by the large-distance ("infra-red", IR) limit of the model, and is equal to the free energy distribution at the critical temperature up to a translation. The decoration process is determined solely by the small-distance ("ultraviolet", UV) limit, in terms of the biased minimal process. Our approach provides connections of the replica framework to results in the probability literature and sheds further light on the freezing/duality conjecture which was the source of many previous results for logREMs. In this way we derive general and explicit formulae for the joint probability density of depths of the first and second minima (as well as its higher-order generalizations) in terms of model-specific contributions from the UV as well as IR limits. In particular, we show that the statistics of the second minimum are largely independent of details of the UV data, whose influence is seen only through the mean value of the gap. For a given log-correlated field this parameter can be evaluated numerically, and we provide several numerical tests of our theory using the circular model of $1/f$-noise.
The many-body localization (MBL) transition is a quantum phase transition involving highly excited eigenstates of a disordered quantum many-body Hamiltonian, which evolve from "extended/ergodic" (exhibiting extensive entanglement entropies and fluctuations) to "localized" (exhibiting area-law scaling of entanglement and fluctuations). The MBL transition can be driven by the strength of disorder in a given spectral range, or by the energy density at fixed disorder - if the system possesses a many-body mobility edge. Here we propose to explore the latter mechanism by using "quantum-quench spectroscopy", namely via quantum quenches of variable width which prepare the state of the system in a superposition of eigenstates of the Hamiltonian within a controllable spectral region. Studying numerically a chain of interacting spinless fermions in a quasi-periodic potential, we argue that this system has a many-body mobility edge; and we show that its existence translates into a clear dynamical transition in the time evolution immediately following a quench in the strength of the quasi-periodic potential, as well as a transition in the scaling properties of the quasi-stationary state at long times. Our results suggest a practical scheme for the experimental observation of many-body mobility edges using cold-atom setups.
We study four-point functions of critical percolation in two dimensions, and more generally of the Potts model. We propose an exact ansatz for the spectrum: an infinite, discrete and non-diagonal combination of representations of the Virasoro algebra. Based on this ansatz, we compute four-point functions using a numerical conformal bootstrap approach. The results agree with Monte-Carlo computations of connectivities of random clusters.
Pumping a finite energy density into a quantum system typically leads to `melted' states characterized by exponentially-decaying correlations, as is the case for finite-temperature equilibrium situations. An important exception to this rule are states which, while being at high energy, maintain a low entropy. Such states can interestingly still display features of quantum criticality, especially in one dimension. Here, we consider high-energy states in anisotropic Heisenberg quantum spin chains obtained by splitting the ground state's magnon Fermi sea into separate pieces. Using methods based on integrability, we provide a detailed study of static and dynamical spin-spin correlations. These carry distinctive signatures of the Fermi sea splittings, which would be observable in eventual experimental realizations. Going further, we employ a multi-component Tomonaga-Luttinger model in order to predict the asymptotics of static correlations. For this effective field theory, we fix all universal exponents from energetics, and all non-universal correlation prefactors using finite-size scaling of matrix elements. The correlations obtained directly from integrability and those emerging from the Luttinger field theory description are shown to be in extremely good correspondence, as expected, for the large distance asymptotics, but surprisingly also for the short distance behavior. Finally, we discuss the description of dynamical correlations from a mobile impurity model, and clarify the relation of the effective field theory parameters to the Bethe Ansatz solution.
We study the role of fluctuations on the thermodynamic glassy properties of plaquette spin models, more specifically on the transition involving an overlap order parameter in the presence of an attractive coupling between different replicas of the system. We consider both short-range fluctuations associated with the local environment on Bethe lattices and long-range fluctuations that distinguish Euclidean from Bethe lattices with the same local environment. We find that the phase diagram in the temperature-coupling plane is very sensitive to the former but, at least for the $3$-dimensional (square pyramid) model, appears qualitatively or semi-quantitatively unchanged by the latter. This surprising result suggests that the mean-field theory of glasses provides a reasonable account of the glassy thermodynamics of models otherwise described in terms of the kinetically constrained motion of localized defects and taken as a paradigm for the theory of dynamic facilitation. We discuss the possible implications for the dynamical behavior.
We study one dimensional mixtures of two-component Bose-Einstein condensates in the limit where the intra-species and inter-species interaction constants are very close. Near the mixing-demixing transition the polarization and the density dynamics decouple. We study the nonlinear polarization waves, show that they obey a universal (i.e., parameter free) dynamical description, identify a new type of algebraic soliton, explicitly write simple wave solutions, and study the Gurevich-Pitaevskii problem in this context.
The purpose of this article is to demonstrate that non-crystallographic reflection groups can be used to build new solvable quantum particle systems. We explicitly construct a one-parametric family of solvable four-body systems on a line, related to the symmetry of a regular icosahedron: in two distinct limiting cases the system is constrained to a half-line. We repeat the program for a 600-cell, a four-dimensional generalization of the regular three-dimensional icosahedron.
The fields of quantum simulation with cold atoms and quantum optics are currently being merged. In a set of recent pathbreaking experiments with atoms in optical cavities [3,4] lattice quantum many-body systems with both, a short-range interaction and a strong interaction potential of infinite range -mediated by a quantized optical light field- were realized. A theoretical modelling of these systems faces considerable complexity at the interface of: (i) spontaneous symmetry-breaking and emergent phases of interacting many-body systems with a large number of atoms $N\rightarrow\infty$, (ii) quantum optics and the dynamics of fluctuating light fields, and (iii) non-equilibrium physics of driven, open quantum systems. Here we propose what is possibly the simplest, quantum-optical magnet with competing short- and long-range interactions, in which all three elements can be analyzed comprehensively: a Rydberg-dressed spin lattice coherently coupled to a single photon mode. Solving a set of coupled even-odd sublattice Master equations for atomic spin and photon mean-field amplitudes, we find three key results. (R1): Superradiance and a coherent photon field can coexist with spontaneously broken magnetic translation symmetry. The latter is induced by the short-range nearest-neighbor interaction from weakly admixed Rydberg levels. (R2): This broken even-odd sublattice symmetry leaves its imprint in the light via a novel peak in the cavity spectrum beyond the conventional polariton modes. (R3): The combined effect of atomic spontaneous emission, drive, and interactions can lead to phases with anomalous photon number oscillations. Extensions of our work include nano-photonic crystals coupled to interacting atoms and multi-mode photon dynamics in Rydberg systems.
We study the time evolution in the transverse-field Ising chain subject to quantum quenches of finite duration, i.e., a continuous change in the transverse magnetic field over a finite time. Specifically, we consider the dynamics of the total energy, one- and two-point correlation functions and Loschmidt echo during and after the quench as well as their stationary behaviour at late times. We investigate how different quench protocols affect the dynamics and identify universal properties of the relaxation.
We present a frequency-shifted feedback (FSF) laser based on a tapered amplifier. The laser operates as a coherent broadband source with up to 370 GHz spectral width and 2.3 µs coherence time. If the FSF laser is seeded by a continuous-wave laser, a frequency comb spanning the output spectrum appears in addition to the broadband emission. The laser has an output power of 280 mW and a center wavelength of 780 nm. The ease and flexibility of use of tapered amplifiers makes our FSF laser attractive for a wide range of applications, especially in metrology. | CommonCrawl |
Once there was an inventor congress, where inventors from all over the world met in one place. The organizer of the congress reserved exactly one hotel room for each inventor. Each inventor, however, had his own preference regarding which room he would like to stay in. Being a clever inventor himself, the organizer soon found an objective way of doing the room assignments in a fair manner: each inventor wrote two different room numbers on a fair coin, one room number on each side. Then, each inventor threw his coin and was assigned the room number which was shown on the upper side of his coin. If some room had been assigned to more than one inventor, all inventors had to throw their coins again.
As you can imagine, this assignment process could take a long time or even not terminate at all. It has the advantage, however, that among all possible room assignments, one assignment is chosen randomly according to a uniform distribution. In order to apply this method in modern days, you should write a program which helps the organizer.
The organizer himself needs a hotel room too. As the organizer, he wants to have some advantage: he should be able to rate each of the rooms (the higher the rating, the better), and the program should tell him which two room numbers he should write on his coin in order to maximize the expected rating of the room he will be assigned to. The program also has access to the choices of the other inventors before making the proposal. It should never propose two rooms for the organizer such that it is not possible to assign all inventors to the rooms, if a valid assignment is possible at all.
The input starts with a single number $c$ ($1 \le c \le 200$) on one line, the number of test cases. Each test case starts with one line containing a number $n$ ($2 \le n \le 50\, 000$), the number of inventors and rooms. The following $n-1$ lines contain the choices of the $n-1$ guests (excluding the organizer). For each inventor, there is a line containing two numbers $a$ and $b$ ($1 \le a < b \le n$), the two room numbers which are selected by the inventor. The last line of each test case consists of $n$ integers $v_1, \ldots , v_ n$ ($1 \le v_ i \le 1\, 000\, 000$), where $v_ i$ is the organizer's rating for room $i$.
For each test case, print a single line containing the two different room numbers $a$ and $b$ which should be selected by the organizer in order to maximize the expected rating of the room he will be assigned to. If there is more than one optimal selection, break ties by choosing the smallest $a$ and, for equal $a$, the smallest $b$. If there is no way for the organizer to select two rooms such that an assignment of inventors to rooms is possible, print "impossible" instead. | CommonCrawl |
[SOLVED] Is there a complex structure on the 6-sphere?
[SOLVED] Open problems in Euclidean geometry?
[SOLVED] Are nontrivial integer solutions known for $x^3+y^3+z^3=3$?
[SOLVED] Can a discrete set of the plane of uniform density intersect all large triangles?
[SOLVED] Are most cubic plane curves over the rationals elliptic?
[SOLVED] Is the fixed point property for posets preserved by products?
[SOLVED] Is a smooth closed surface in Euclidean 3-space rigid?
[SOLVED] Polynomial bijection from $\mathbb Q\times\mathbb Q$ to $\mathbb Q$?
[SOLVED] What are some open problems in algebraic geometry?
[SOLVED] Open problems/questions in representation theory and around?
[SOLVED] Are there Ricci-flat riemannian manifolds with generic holonomy? | CommonCrawl |
Abstract: The high sensitivity and wide frequency coverage of the Murchison Widefield Array allow for the measurement of the spectral scaling of the pulsar scattering timescale, $\alpha$, from a single observation. Here we present three case studies targeted at bright, strongly scattered pulsars J0534+2200 (the Crab pulsar), J0835-4510 (the Vela pulsar) and J0742-2822. We measure the scattering spectral indices to be $-3.8\pm0.2$, $-4.0\pm1.5$, and $-2.5\pm0.6$ for the Crab, Vela, and J0742-2822, respectively. We find that the scattered profiles of both Vela and J0742-2822 are best described by a thin screen model where the Gum Nebula likely contributes most of the observed scattering delay. For the Crab pulsar we see characteristically different pulse shapes compared to higher frequencies, for which none of the scattering screen models we explore are found to be optimal. The presence of a finite inner scale to the turbulence can possibly explain some of the discrepancies. | CommonCrawl |
Tatiana Toro, University of Washington, "Uniform Rectifiability via Perimeter Minimization"
Abstract: Quantitative geometric measure theory has played a fundamental role in the development of harmonic analysis, potential theory and partial differential equations on non-smooth domains. In general the tools used in this area differ greatly from those used in geometric measure theory as it appears in the context of geometric analysis. In this course we will discuss how ideas arising when studying perimeter minimization questions yield interesting and powerful results concerning uniform rectifiability of sets. The course will be mostly self-contained.
Panagiota Daskalopoulos, Columbia University, "Ancient Solutions to Geometric Flows"
Abstract: Some of the most important problems in geometric evolution partial differential equations are related to the understanding of singularities. This usually happens through a blow-up procedure near the singularity which uses the scaling properties of the equation. In the case of a parabolic equation the blow-up analysis often leads to special solutions which are defined for all time $-\infty < t \leq T$, for some $T \leq + \infty$. We refer to them as ancient solutions. The classification of such solutions often sheds new insight upon the singularity analysis.
In this lecture series we will discuss Uniqueness Theorems for ancient solutions to parabolic partial differential equations, starting from the Heat equation and extending to the Semi-linear heat equation, the Mean curvature flow, the Ricci flow and the Yamabe flow. We will also discuss the construction of new solutions from the gluing of two more solitons.
Multivariable calculus and an undergraduate course in analysis. | CommonCrawl |
12 Mar 2019 - Updated tutorial for BDSKY 1.4.5.
In the Bayesian analysis of sequence data, priors play an important role. When priors are not specified correctly, it may cause runs to take very long to converge, not converge at all or cause a bias in the inferred trees and model parameters. Selection of proper priors and starting values is crucial and can be a difficult exercise in the beginning. It is not always easy to pick a proper model of tree generation (tree prior), substitution model, molecular clock model or the prior distribution for an unknown parameter.
The molecular clock model aims to estimate the substitution rate of the data. It is important to understand under which circumstances to use which model and when molecular calibration works. This will help the investigator determine which estimates of parameters can be trusted and which cannot.
In this tutorial we will explore how priors are selected and how molecular clock calibration works using H3N2 influenza A data from the flu virus spreading in the USA in 2009.
FigTree (http://tree.bio.ed.ac.uk/software/figtree) is a program for viewing trees and producing publication-quality figures. It can interpret the node-annotations created on the summary trees by TreeAnnotator, allowing the user to display node-based statistics (e.g. posterior probabilities). We will be using FigTree v1.4.4.
In this tutorial, we will estimate the rate of evolution from a set of virus sequences that have been isolated either at one point in time (homochronous) or at different points in time (heterochronous or time-stamped data). We use the hemagglutinin (HA) gene of the H3N2 strain spreading across America alongside the pandemic H1N1 virus in 2009 (CDC, 2009/2010).
date of the most recent common ancestor of the sampled virus sequences.
understand why and when the rate of evolution can be estimated from the data.
The full heterochronous dataset contains an alignment of 139 HA sequences 1738 nucleotides long. The samples were obtained from California between April and June 2009 (file named InfluenzaAH3N2_HAgene_2009_California_heterochronous.nexus). The homochronous data is a subset of the heterochronous data, consisting of an alignment of 29 sequences of 1735 nucleotides all sampled on April 28, 2009 (file named InfluenzaAH3N2_HAgene_2009_California_homochronous.nexus).
We will use BEAUti to select the priors and starting values for our analysis and save these settings into a BEAST-readable XML file.
Since we will be using the birth-death skyline model (BDSKY) (Stadler, Kühnert, Bonhoeffer, & Drummond, 2013), we need to make sure it is available in BEAUti. It is not one of the default models but rather an add-on (also called a plug-in or package). You only need to install a BEAST2 package once. Thus, if you close BEAUti, you do not have to load BDSKY the next time you open the program. However, it is worth checking the package manager for updates to plug-ins, particularly if you update your version of BEAST2. For this tutorial we need to ensure that we have at least BDSKY v1.4.5 installed.
Figure 1: Finding the BEAST2 Package Manager.
Figure 2: The BEAST2 Package Manager.
After the installation of an add-on, the program is on your computer, but BEAUti is unable to load the template files for the newly installed model unless it is restarted. So, let's restart BEAUti to make sure we have the BDSKY model at hand.
Close the BEAST2 Package Manager and restart BEAUti to fully load the BDSKY package.
We will first analyse the alignment of sequences sampled through time (heterochronous sequences).
In the Partitions panel, import the nexus file with the alignment by navigating to File > Import Alignment in the menu (Figure 3) and then finding the file called InfluenzaAH3N2_HAgene_2009_California_heterochronous.nexus file on your computer.
Figure 3: Importing alignment into BEAUti.
You can view the alignment by double-clicking on the name of the alignment in BEAUti. Since we only have one partition there is nothing more we can do in the Partitions panel and proceed to specifying the tip dates.
The heterochronous dataset contains information on the dates sequences were sampled. We want to use this information to specify the tip dates in BEAUti.
In the Tip Dates panel, click the Use tip dates option.
The sequence labels (headers in the FASTA file) contain sampling times specified as dates in the format year/month/day. In order for BEAST to use this information we must specify the form of this date string and tell BEAST where to find the data. To do this, first set Dates specified to the "as dates with format" option. Then click the arrows next to the box immediately to the right of this option and select "yyyy/M/dd" (Figure 4). This tells BEAUti that the dates are specified with a full length (4-digit) year, then the month number, then a 2-digit day, all separated by '/' characters.
Set Dates specified to the option "as dates with format", then select "yyyy/M/dd" from the list of possible date formats.
Figure 4: Specifying time units and direction of time flow.
You could specify the tip dates by hand, by clicking for each row (i.e. for each sequence) into the Date (raw value) column and typing the date information in for each sequence in turn. However, this is a laborious and error-prone procedure and can take a long time to finish. Fortunately, we can use BEAUti, to read off the dates from the sequence names for us. Each sequence is named such that the expression after the last underscore character ("_") contains the sampling date information. BEAUti can search for this expression to extract the sequence date.
Select use everything and specify after last _.
Figure 5: Specifying tip dates.
You should now see that the tip ages have been filled in for all of the taxa, and that the Date column shows a number of the form 2009.xyz while the Height column shows a number of the form 0.abc (the height of the tip from the present time, where the present is 0.0).
Now we are done with the data specification and we are about to start specifying models and priors for the model parameters.
Navigate to the Site Model panel, where we can choose the model of nucleotide evolution that we want to assume to underly our dataset.
Our dataset is made of nucleotide sequences. There are four models of nucleotide evolution available in BEAUti2: JC69, HKY, TN93 and GTR. The JC69 model is the simplest evolutionary model. All the substitutions are assumed to happen at the same rate and all the bases are assumed to have identical frequencies, i.e. each base A, C, G and T is assumed to have an equilibrium frequency of 0.25. In the HKY model, the rate of transitions A ↔ G and C ↔ T is allowed to be different from the rate of transversions A ↔ C, G ↔ T. Furthermore, the frequency of each base can be either "Estimated", "Empirical" or "All Equal". When we set the frequencies to "Estimated", the frequency of each base will be co-estimated as a parameter during the BEAST run. If we use "Empirical", base frequencies will be set to the frequencies of each base found in the alignment. Finally, if set to "All Equal", the base frequencies will be set to 0.25. The TN93 model is slightly more complicated than HKY, by allowing for different rates of A ↔ G and C ↔ T transitions. Finally, the GTR model is the most general model and allows for different substitution rates between each pair of nucleotides as well as different base frequencies, resulting in a total of 9 free parameters.
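To make the HKY parameterisation concrete, here is a small Python sketch (not part of the tutorial, and not how BEAST2 implements it internally) of the unnormalised instantaneous rate matrix, with transitions scaled by an illustrative kappa relative to transversions:

import numpy as np

bases = "ACGT"
pi = np.array([0.25, 0.25, 0.25, 0.25])       # equilibrium base frequencies ("All Equal")
kappa = 4.0                                   # illustrative transition/transversion parameter
transitions = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

Q = np.zeros((4, 4))
for i, x in enumerate(bases):
    for j, y in enumerate(bases):
        if i != j:
            Q[i, j] = (kappa if (x, y) in transitions else 1.0) * pi[j]
    Q[i, i] = -Q[i].sum()                     # each row of a rate matrix sums to zero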
Topic for discussion: Which substitution model may be the most appropriate for our dataset and why?
Since we do not have any extra information on how the data evolved, the decision is not clear cut. The best would be to have some independent information on what model fits the influenza data the best. Alternatively, one could perform model comparison, or apply reversible jump MCMC (see for example the bModelTest and substBMA packages) to choose the best model. Let's assume, we have done some independent data analyses and found the HKY model to fit the influenza data the best. In general, this model captures the major biases that can arise in the analysis of the nucleotide data.
Now we have to decide whether we want to assume all of the sites to have been subject to the same substitution rate or if we want to allow for the possibility that some sites are evolving faster than others. For this, we choose the number of gamma rate categories. This model scales the substitution rate by a factor, which is defined by a Gamma distribution. If we choose to split the Gamma distribution into 4 categories, we will have 4 possible scalings that will be applied to the substitution rate. The probability of a substitution at each site will be calculated under each scaled substitution rate (and corresponding transition probability matrix) and averaged over the 4 outcomes.
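As an illustration of what this discretisation does, the sketch below computes one representative rate multiplier per category using the common quantile (median) approach; the exact discretisation used internally may differ, and the shape parameter value is just an example:

from scipy.stats import gamma

alpha = 1.0        # example shape parameter; the mean rate multiplier stays at 1
K = 4              # number of gamma rate categories
rates = [gamma.ppf((2 * k + 1) / (2.0 * K), a=alpha, scale=1.0 / alpha) for k in range(K)]
print(rates)       # four rate multipliers applied to the substitution rate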
Topic for discussion: Do you think a model that assumes one rate for all the sites is preferable over a model which allows different substitution rates across sites (i.e. allows for several gamma rate categories)? Why or why not?
Once again, a proper model comparison, i.e. comparing a model without gamma rate heterogeneity to a model with some number of gamma rate categories, should ideally be done. We do not have any independent information on whether Gamma rate categories are needed or not. Thus, we take our best guess in order not to bias our analyses. Since the data are the sequences of the HA (hemagglutinin) gene of influenza, we may want to allow for variation of the substitution rates between sites. Hemagglutinin is a surface protein on the virus and is under significant evolutionary pressure from the immune system of the host organism. It is not unrealistic to assume that some sites may be under more pressure to escape from the immune system.
Let us therefore choose the HKY model with 4 gamma rate categories for the substitution rate.
Figure 6: Specifying substitution model.
Notice that we estimate the shape parameter of the Gamma distribution as well. This is generally recommended, unless one is sure that the Gamma distribution with the shape parameter equal to 1 captures exactly the rate variation in the given dataset. Notice also, that we leave the substitution rate fixed to 1.0 and do not estimate it. In fact, the overall substitution rate is the product of the clock rate and the substitution rate (one of the two acting as a scalar rather than a quantity measured in number of substitutions per site per time unit), and thus fixing one to 1.0 and estimating the other one allows for estimation of the overall rate of substitution. We will therefore use the clock rate to estimate the number of substitutions per site per year.
Navigate to the Clock Model panel.
Four different clock models are available in BEAST2, allowing us to specify lineage-specific substitution rate variation. The default model in BEAUti is the Strict Clock. The other three models relax the assumption of a constant substitution rate. The Relaxed Clock Log Normal allows for the substitution rates associated with each branch to be independently drawn from a single, discretized log normal distribution (Drummond, Ho, Phillips, & Rambaut, 2006). Under the Relaxed Clock Exponential model, the rates associated with each branch are drawn from an exponential distribution (Drummond, Ho, Phillips, & Rambaut, 2006). Both of these models are uncorrelated relaxed clock models. The log normal distribution has the advantage that one can estimate its variance, which reflects the extent to which the molecular clock needs to be relaxed. In both models, BEAUti sets by default the Number Of Discrete Rates to -1. This means that the number of bins that the distribution is divided into is equal to the number of branches. The last available model is the Random Local Clock which averages over all possible local clock models (Drummond & Suchard, 2010).
Topic for discussion: Which clock model may be the most appropriate for our dataset and why?
Since we are observing the sequence data from a single epidemic of H3N2 virus in humans in a single location (southwest USA), we do not have any reason to assume different substitution rates for different lineages. Thus, the most straightforward option is to choose the default Strict Clock model (Figure 7). Note however, that a rigorous model comparison would be the best way to proceed with the choice of the clock model.
Figure 7: Specifying the clock model.
Navigate to the Priors panel.
Since the dynamics of influenza virus is likely to change due to the depletion of the susceptible population and/or the presence of resistant individuals, we choose the birth-death skyline model of population dynamics with 5 time intervals for $R_e$, to capture this likely change of dynamics over time. $R_0$, the basic reproductive number, is an important variable for the study of infectious diseases, since it defines how many individuals a single infected individual infects on average in a completely susceptible population over the course of her/his infection. $R_e$, the effective reproduction number, is the average number of secondary infections caused by an infected individual at a given time during the epidemic. Thus, $R_e$ is a function of time. In other words, it tells us how quickly the disease is spreading in a population. As long as $R_e$ is above 1 the epidemic is likely to continue spreading; therefore prevention efforts aim to push $R_e$ below 1. Note that as more people become infected and the susceptible population decreases, $R_e$ will naturally decrease over the course of an epidemic; however, treatment, vaccinations, quarantine and changes in behaviour can all contribute to decreasing $R_e$ faster. In a birth-death process, $R_e$ is defined as the ratio of the birth (or speciation) rate and the total death (or extinction) rate. $R_e$ for any infection is rarely above 10, so we set this as the upper value for $R_e$ in our analysis.
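Numerically this ratio is trivial to evaluate; the rates below are made up purely for illustration:

birth_rate = 91.0             # hypothetical transmissions per infected individual per year
become_uninfectious = 52.0    # becoming-uninfectious rate per year (about a one-week infectious period)
R_e = birth_rate / become_uninfectious
print(R_e)                    # 1.75 > 1, so this hypothetical epidemic would still be growing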
For the Tree model, select the option Birth Death Skyline Serial.
Then, click on the arrow to the left of reproductiveNumber to open all the options for the $R_e$ settings (Figure 8). Leave all the settings at their defaults, since this specifies a prior that is not too strong and is centered around 1. This is exactly what we want.
Then, click on the button where it says initial = [2.0] [0.0, Infinity]. A pop-up window will show up (Figure 9).
In the pop-up window change the Upper, the upper limit of the prior distribution, from Infinity to 10 and the Dimension of $R_e$ from 10 to 5, and click OK.
Figure 8: Specifying the tree prior.
Figure 9: Specifying the $R_e$ prior.
Notice that the pop-up window allows one to specify not only the Dimension but also the Minordimension. If the parameter is specified as a vector of $n$ entries, we only use the Dimension with input $n$. If the parameter is specified as an $n \times m$ matrix, we then use the Minordimension to specify the number of columns ($m$) the parameter is split into. In the birth-death skyline model, we use the parameter vector only, and thus the Minordimension always stays specified as 1. (In fact, Minordimension is only used very rarely in any BEAST2 model).
After we have specified the prior for $R_e$, the next prior that needs our attention is the becomeUninfectiousRate. This specifies how quickly a person infected with influenza recovers. From our personal experience, we would say it takes around one week to 10 days from infection to recovery. Since the rate of becoming uninfectious is the reciprocal of the period of infectiousness, this translates to a becoming uninfectious rate of $365/10 = 36.5$ to $365/7 \approx 52.14$ per year (recall that we specified the dates in our tree in years, not days). Let us set the prior for the becomeUninfectiousRate accordingly.
Click on the arrow next to becomeUninfectiousRate and change the value for M (mean) of the default log normal distribution to 52 and tick the box Mean In Real Space to specify the mean of the distribution in real space instead of log space (Figure 10).
Figure 10: Specifying the become uninfectious prior.
Looking at the 2.5% and 97.5% quantiles for the distribution we see that 95% of the weight of our becoming uninfectious rate prior falls between 4.44 and 224, i.e. our prior on the period of infectiousness is between approximately 1.63 and 82.2 days. Thus, our prior is quite diffuse. If we wanted to use a more specific prior we could decrease the standard deviation of the distribution (the S parameter).
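Those quantiles can be checked directly; the quoted values are consistent with the default log standard deviation S being 1.0, which I assume here since only the mean was changed above:

import numpy as np
from scipy.stats import lognorm

S = 1.0                                  # assumed default log standard deviation
mu = np.log(52.0) - S ** 2 / 2.0         # so that the mean in real space equals 52
prior = lognorm(s=S, scale=np.exp(mu))
lo, hi = prior.ppf([0.025, 0.975])
print(lo, hi)                            # roughly 4.44 and 224 per year
print(365.0 / hi, 365.0 / lo)            # an infectious period of roughly 1.6 to 82 days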
Now we have to specify the clock rate prior. This is the prior for the substitution rate.
Topic for discussion: What substitution rate is appropriate for viruses? More specifically, what substitution rate is expected for influenza virus, in your opinion?
By default, the clock rate in BEAST2 has a uniform prior between 0 and infinity. This is not only extremely unspecific, but also an improper prior (it does not integrate to 1). In general, a log-normal distribution works well for rates, since it does not allow negative values. Furthermore, it places most weight close to 0, while also allowing for larger values, making it an appropriate prior for the clock rate, which we expect to be quite low in general, but may be higher in exceptional cases. You could set your best guess as a prior by, for example, choosing a log-normal distribution centered around your best guess for the substitution rate.
Now consider the following information: Influenza virus is an RNA virus (Kawaoka, 2006), and RNA viruses in general have a mutation rate of approximately $10^{-3}$ substitutions per site per year (Jenkins, Rambaut, Pybus, & Holmes, 2002).
Topic for discussion: Did you change your best guess, for the substitution rate appropriate for RNA viruses? What would it be? How would you specify the prior?
Our best guess would be to set the prior distribution peaked around $10^{-3}$ substitutions per site per year.
Change the prior for the clock rate from a Uniform to Log Normal distribution. Click on the arrow next to the clockRate and change the value for M (mean) of the default log normal distribution to 0.001 and tick the box Mean In Real Space (Figure 11).
Figure 11: Specifying the clock rate prior.
We also need to estimate the Gamma shape parameter, which governs the shape of the Gamma distribution of the rates across different sites. The default setting of the Gamma shape parameter of $\alpha = \beta = 1.0$ reflects our belief that on average the rate scaler is equal to 1, i.e. on average all the sites mutate with the same substitution rate. The distribution on the gamma shape parameter allows us to deviate from this assumption. The default exponential distribution with M (mean) of 1.0 and 95% HPD of [0.0253, 3.69] covers a wide range of possible shape parameters. This looks fine for our analysis, and thus we leave the Gamma shape settings at their defaults (Figure 12).
Figure 12: Specifying the gamma shape prior.
We do not have any prior information on transition-transversion ratio besides the fact that it is a value usually larger than 1 (transitions are more frequent than transversions). We therefore set a weakly informative prior for this parameter. The default log normal prior perfectly fits to these requirements and usually does not need to be changed (Figure 13).
Figure 13: Specifying the kappa (transition/transversion ratio) prior.
For the next parameter, the origin of the epidemic, we ask ourselves whether there is any reasonable expectation we might have in terms of when the infection in California started, i.e. what is the date when the ancestor of all of the sequences first appeared.
Topic for discussion: Do you have any feeling for what the origin should/could be set to?
The data span a period of 3 months and come from a limited area; thus, it would be unreasonable to assume that a single season flu epidemic would last longer than a few months. The best guess for the origin parameter prior we could make is therefore on the order of at least 3-4, but probably no more than 6 months. We set the prior according to this expectation.
Click on the arrow next to the origin and change the prior distribution from Uniform to Gamma with Alpha parameter set to 0.5 and Beta parameter set to 2.0 (Figure 14).
Figure 14: Specifying the origin prior.
Lastly, for the sampling proportion, we know that we certainly did not sample every single infected individual. Therefore, setting a prior close to 1 would not be reasonable. In fact, it is usually reasonable to expect that less than 0.1 of all flu cases are sampled. Here, we specify something on the order of $10^{-3}$. The default prior for the sampling proportion is a Beta distribution, which is only defined between 0 and 1, making it a natural choice for proportions. However, this is not the only prior that can be used, and here we specify a log-normal distribution, while ensuring that an appropriate upper limit is set, to prevent a sampling proportion higher than 1, which is not defined.
Click on the arrow next to the samplingProportion and change the distribution from Beta to Log Normal.
Next, change the value for the M (mean) to 0.001 and tick the box Mean In Real Space (Figure 15).
Also, make sure that the Lower is set to 0.0 and the Upper is set to 1.0.
Figure 15: Specifying the sampling proportion prior.
Navigate to the MCMC panel.
We want to shorten the chain length, in order for it to run in a reasonable time and we want to decrease the tree sampling frequency.
Change the Chain Length from 10'000'000 to 5'000'000.
Click on the arrow next to the treelog and set the Log Every to 100'000 (Figure 16).
Figure 16: Specifying the MCMC properties.
Now, all the specifications are done. We want to save and run the XML.
Save the XML file as Heterochronous.xml.
Within BEAST, specify the file Heterochronous.xml.
If you have BEAGLE installed tick the box to Use BEAGLE library if available, which will make the run faster.
Hit Run to start the analysis.
The run should take about 15-20 minutes. While waiting for your results, you can start preparing the XML file for the homochronous data.
Load the file into Tracer to check mixing and the parameter estimates.
Figure 17: Loading the log file into Tracer.
The first thing you may notice is that most of the parameters have low ESS (effective sample size below 200), marked in red (Figure 17). This is because our chain did not run long enough. However, the estimates we obtained with a chain of length 5'000'000 are very similar to those obtained with a longer chain.
Click on clockRate and then click on Trace to examine the trace of the parameter (Figure 18).
Figure 18: The trace of the clock rate parameter.
Note that even though the parameter has a low ESS, the chain appears to have passed the burn-in phase and seems to be sampling from across the posterior without getting stuck in any local optima. This is not a proof that the run is mixing well, however it gives us a good intuition that the parameter will have a good ESS value if we run the chain for longer. You should always examine the parameter traces to check convergence; a high ESS value is not proof that a run has converged to the true posterior.
If you like, you can compare your results with the example results we obtained with identical settings and a chain of 30,000,000.
Do the parameter traces look better?
Examine the posterior estimates for the becomeUninfectiousRate, samplingProportion and clockRate in Tracer. Do the estimates look realistic? Are they different from the priors we set and if so, how?
The estimated posterior distribution for the becomeUninfectiousRate has a median of 58.376 and a 95% HPD between 43.2389 and 77.6039 (Figure 19). This corresponds to an infectious period of between approximately 4.7 and 8.44 days, thus roughly one week. This is a lot more specific than the prior we set, which allowed for a much longer infectious period. The estimates also agree with what we know about Influenza A. In this case there was enough information in the sequencing data to estimate a more specific becoming uninfectious rate. If we had relied more on our prior knowledge we could have set a tighter prior on the becomeUninfectiousRate parameter, which may have helped the run to converge faster, by preventing it from sampling unrealistic parameter values. However, if you are unsure about a parameter it is always better to set more diffuse priors.
Figure 19: Estimated posterior distribution for the becoming uninfectious rate.
We see that the sampling proportion (Figure 20) is estimated to be below $5 \times 10^{-5}$. This is a lot lower than the mean we set for the prior on the sampling proportion (0.001). Therefore our prior estimate of the sampling proportion was much too high. Consequently, we see that the number of cases is also much higher than we initially thought. We assumed that there are around 1,000 cases when we set the prior, however our posterior indicates that the epidemic has on the order of tens of thousands of cases.
Figure 20: Estimated posterior distribution for the sampling proportion.
Looking at the clock rate estimates (Figure 21) we see that they are about 2 to 3 times faster than the usual substitution rate for human influenza A (Jenkins, Rambaut, Pybus, & Holmes, 2002). This is not a cause for concern and is actually a well-documented phenomenon. When viral samples are collected over a short time period the clock rate is often overestimated. The exact cause of the bias is not known, but it is suspected that incomplete purifying selection plays a role. What is important to keep in mind is that this does not mean that the virus is mutating or evolving faster than usual. When samples are collected over a longer time period the estimated clock rate slows down and eventually reaches the long-term substitution rate.
Figure 21: Estimated posterior distribution for the clock rate.
We could also use the homochronous data to investigate the dynamics of the H3N2 spread in California in 2009. We use the 29 sequences from April 28, 2009 to investigate whether this is possible.
Follow the same procedure as for the heterochronous sampling. Now, however, use the alignment file called InfluenzaAH3N2_HAgene_2009_California_homochronous.nexus and use the Birth Death Skyline Contemporary model.
Note that for the Birth Death Skyline Contemporary model the sampling proportion is called rho, and refers only to the proportion of infected individuals sampled at the present time. This is to distinguish it from the sampling proportion in the Birth Death Skyline Serial model, which refers to the proportion of individuals sampled through time.
Figure 22: Specifying the sampling proportion prior for homochronous data.
Save the file as Homochronous.xml and run it in BEAST2.
After the run is finished, load the log file into Tracer and examine the traces of the parameters.
Topic for discussion: Do you think running the analysis for longer will lead to the run mixing well?
Most of the parameters again have ESS values below 200, however in this case the ESS values are lower than for the heterochronous data and it is not clear that running the analysis for longer will lead to mixing. Indeed, while running the analysis for longer increases the ESS values for some parameters, they remain low for others, in particular the origin, TreeHeight (tMRCA) and clockRate. Low ESS values for these parameters in turn translate into low ESS values for the tree prior (BirthDeathSkyContemporary), prior and posterior.
Figure 23: The trace of the clock rate parameter.
Now, check the clock rate and the tree height parameters.
Topic for discussion: Do you think that homochronous samples allow for good substitution rate estimation?
If yes, how would you know?
If not, how can you see that and where do you think might the problem be? Can we address this problem in our analysis?
Notice the values of the substitution rate estimates. From the literature, one can read that influenza's HA gene has a substitution rate of about $10^{-3}$ substitutions per site per year (Jenkins, Rambaut, Pybus, & Holmes, 2002). Our estimate of the clock rate is of the same order as this value, but has a very large confidence interval. Notice also that the confidence interval of the tree height is very large, [0.1305, 2.7393].
Another way to see that the homochronous sampling does not allow for the estimation of the clock rate is to observe a very strong negative correlation of the clock rate with the tree height.
In Tracer click on the Joint Marginal panel, select the TreeHeight and the clockRate simultaneously, and uncheck the Sample only box below the graphics (Figure 23).
Figure 23: Clock rate and tree height correlation in homochronous data.
The correlation between the tree height and the clock rate is obvious: the taller the tree, the slower the clock. One way to solve this problem is to break this correlation by setting a strong prior on one of the two parameters. We describe how to set a prior on the tree height in the section below.
We will use the results from the heterochronous data, to find out what a good estimate for the tree height of these homochronous samples is. For this aim, we first create an MCC (maximum clade credibility) tree in the TreeAnnotator and then check with FigTree what the estimate of the tMRCA (time to the most recent common ancestor) of the samples from April 28, 2009 is.
Note, however, that we do this for illustration purposes only. In good practice, one should avoid re-using the data or using the results of an analyses to inform any further analyses containing the same data. Let's pretend therefore that the heterochronous dataset is an independent dataset from the homochronous one.
Open the TreeAnnotator and set Burnin percentage to 10, Posterior probability limit to 0.5. Leave the other options unchanged.
Figure 24: Creating the MCC tree.
How can we find out what the tMRCA of our homochronous data may be? The best may be to have a look at the estimates of the heterochronous data in the FigTree.
Now open FigTree and load InfluenzaAH3N2_HAgene_2009_California_heterochronous.tree.
Figure 25: Displaying median estimates of the node height in the MCC tree.
Tick the Node Labels in the left menu, and click the arrow next to it to open the full options. Change the Display from age to height_median (Figure 25) and then to height_95%_HPD (Figure 26).
Figure 26: Displaying 95% HPD estimates of the node height in the MCC tree.
Notice that, since we are using only a subset of all the heterochronous sequences, we are interested in the tMRCA of the samples from April 28, 2009, which may not coincide with the tree height of all the heterochronous data. These samples are spread across all the clades in the tree, and the most recent common ancestor of all of them turns out to be the root of the MCC tree of the heterochronous samples. We therefore want to set the tMRCA prior of the tree formed by the homochronous sequences to be peaked around the median value of the MCC tree height, which is 0.5488, and we want 95% of the density of the prior distribution to be between 0.5343 and 0.5603.
Open BEAUti, load the homochronous data and use the same settings as for the Homochronous.xml file.
Create a new taxon set for root node by clicking the + Add Prior button at the bottom of the parameter list in the Priors window. Select MRCA prior in the dropdown menu (if one appears) and press OK. This will reveal the Taxon set editor.
Change the Taxon set label to allseq.
Figure 27: Specifying the root height prior.
The prior that we are specifying is the date (not the height) of the tMRCA of all the samples in our dataset. Thus, we need to recalculate the date from the tMRCA height estimates that we obtained above. All the tips are sampled at the date of approximately 2009.3233. The median date of the MRCA should therefore be calculated as 2009.3233 - 0.5488 = 2008.7745, and the 95% HPD should be [2009.3233 - 0.5603, 2009.3233 - 0.5343] = [2008.763, 2008.789].
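The same arithmetic in a couple of lines, just as a check:

tip_date = 2009.3233
median_height, hpd_lower, hpd_upper = 0.5488, 0.5343, 0.5603
print(tip_date - median_height)                    # 2008.7745
print(tip_date - hpd_upper, tip_date - hpd_lower)  # 2008.763  2008.789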
Back in the Priors window, check the box labeled monophyletic for the allseq.prior.
Click on the arrow next to the allseq.prior. Change the prior distribution on the time of the MRCA of selected sequences from [none] to Laplace Distribution and set the Mu to 2008.7745 and the Scale to 0.01 (Figure 28).
You can check that these settings correspond to the height of tMRCA from the MCC tree by setting Mu to 0.5488 and observing the distribution to the right. When you are done, do not forget to set Mu back to 2008.7745.
Figure 28: Specifying the root height prior.
We also need to change the names of the output files so that we do not overwrite the results of the previous analyses. Here we will do this by setting the trace log file name to $(filebase).log and the tree log file name to $(filebase).$(tree).log. This is often a sensible thing to do as $(filebase) is replaced with the name of the XML file, so will always produce different log file names as long as the XML is different.
In the MCMC window, click on the arrow next to the tracelog and change the File Name to $(filebase).log.
Then, click on the arrow next to the treelog and change the File Name to $(filebase).$(tree).log.
Save the XML file as Homochronous_tMRCA.xml and run the analysis and compare to the original analysis of the homochronous data.
Topic for discussion: Are the substitution rate estimates more precise now? What about the correlation between the tMRCA and the clock rate?
Load the log files for all three analyses into Tracer.
Select clockRate and then press shift to select all three trace files.
Click on Marginal Prob Distribution, select Top-Right for the legend and colour by Trace File.
How do the estimates for the three analyses compare to each other?
Now repeat for the TreeHeight.
Figure 29: Comparing the marginal posteriors of the clock rate.
Figure 30: Comparing the marginal posteriors of the tMRCA.
We see that the heterochronous analysis has the tightest posterior estimates for the clock rate. Hence, it is clear that this dataset contains the most information about the clock rate. This is obvious, since this dataset not only contains sequences sampled across time, but it also contains many more sequences than the homochronous dataset. The marginal posterior for the clock rate estimated from homochronous data with a prior on the tMRCA approaches this distribution, however it is still more diffuse. On the other hand, the clock rate estimates made on the homochronous data without a tMRCA prior are very diffuse. It is important to note that these estimates are not wrong, but simply indicates that there is a lot of uncertainty in the data. Importantly, the true clock rate still falls within the 95% HPD of the estimated clock rate from homochronous data. If this were not the case then the estimates would be wrong. Thus, when there is not a lot of information in our data, it is always better to have an uncertain estimate that contains the truth than to have a very specific, but wrong estimate.
On the TreeHeight we see that the marginal posterior estimated from homochronous data with a tMRCA prior is almost identical to the marginal posterior estimated on heterochronous data. However, estimates on homochronous data without a tMRCA prior are very diffuse, because there is not enough information in the data to accurately date the tMRCA.
Note that while we can compare parameter estimates between heterochronous and homochronous data easily enough, you should never compare the likelihoods or posteriors between analyses that were run on different datasets!
Bouckaert, R., Heled, J., Kühnert, D., Vaughan, T., Wu, C.-H., Xie, D., … Drummond, A. J. (2014). BEAST 2: A Software Platform for Bayesian Evolutionary Analysis. PLoS Computational Biology, 10(4), e1003537.
Stadler, T., Kühnert, D., Bonhoeffer, S., & Drummond, A. J. (2013). Birth–death skyline plot reveals temporal changes of epidemic spread in HIV and hepatitis C virus (HCV). Proceedings of the National Academy of Sciences, 110(1), 228–233.
Drummond, A. J., Ho, S. Y., Phillips, M. J., & Rambaut, A. (2006). Relaxed phylogenetics and dating with confidence. PLoS Biology, 4, e88.
Drummond, A. J., & Suchard, M. A. (2010). Bayesian random local clocks, or one rate to rule them all. BMC Biology, 8, 114.
Kawaoka, Y. (2006). Influenza virology: current topics. Horizon Scientific Press.
Jenkins, G. M., Rambaut, A., Pybus, O. G., & Holmes, E. C. (2002). Rates of molecular evolution in RNA viruses: a quantitative phylogenetic analysis. Journal of Molecular Evolution, 54(2), 156–165. | CommonCrawl |
Search for Dirac structures or Courant algebroids in MathSciNet: These are common generalizations of symplectic and Poisson structures and use the symmetric bilinear form on $TM\times_M T^*M$ on a manifold: Namely, the graph of a symplectic structure as well as the graph of a Poisson structure are maximal isotropic subbundles, with further properties.
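For reference, the symmetric pairing meant here is (up to an optional factor of 1/2) the standard one,
$$\big\langle (X,\xi),\,(Y,\eta)\big\rangle \;=\; \xi(Y)+\eta(X), \qquad (X,\xi),\,(Y,\eta)\in TM\times_M T^*M,$$
so that the graph $\{(X,\iota_X\omega)\}$ of a symplectic form $\omega$ and the graph $\{(\pi(\xi,\cdot),\xi)\}$ of a Poisson bivector $\pi$ are both maximal isotropic with respect to it.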
There is a lot of literature on them now. | CommonCrawl |
I've found the trivial solution $(1,1,1)$ but I don't know how to start looking for more... Does this system have an infinite amount of solutions?
Since all the summands are products of even powers, they cannot be negative, so they all are $0$. That means that each unknown is either $0$ or $1$. But only if they all are $1$ do the original equations hold.
With $a$ even and $n$ odd.
So same as before, each $x_k$ is either $1$ or $0$, but only the $n$-tuple $(1,1,\cdots,1)$ satisfies the equation.
If all the RHS are $m < n$ with $m$ a positive integer, then the solutions are the $n$-tuples with $m$ ones and $n-m$ zeros permuted.
with equality iff $|x|=|y|=|z|=1$. Now using the second equation, the unique answer is obvious.
P.S. This obviously generalises to several variables - all you need are three equations, two of which have even exponents. | CommonCrawl |
The K-means problem is a classic NP-hard problem in machine learning and computational geometry, and seeding algorithms based on Lloyd's algorithm are actively studied. In order to cluster high-dimensional textual data in modern data analysis, spherical K-means clustering has been introduced. It aims to partition the given points of unit length into K sets so as to minimize the within-cluster sum of cosine dissimilarities. In this talk, we mainly study the seeding algorithm for spherical K-means clustering, its special case (with separable sets), and a generalized problem ($\alpha$-spherical K-means clustering). For spherical K-means clustering with separable sets, an approximation algorithm with a constant factor is presented. Moreover, it can be generalized to $\alpha$-spherical separable K-means clustering. By carefully constructing a useful function, we also show that famous seeding algorithms such as K-means++ and K-means$||$ for the K-means problem can be applied directly to solve $\alpha$-spherical K-means clustering. | CommonCrawl |
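To make the seeding step described in the abstract above concrete, here is a minimal Python sketch of a K-means++-style seeding rule adapted to unit vectors and cosine dissimilarity. The sampling weight (1 minus the cosine similarity to the nearest chosen center) is an illustrative choice, not necessarily the exact construction analyzed in the talk:

```python
import numpy as np

def spherical_kmeans_pp_seeding(X, K, rng=None):
    """X: (n, d) array of unit-length rows. Returns K seed centers.
    Later seeds are drawn with probability proportional to the cosine
    dissimilarity to the closest center chosen so far."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                      # first seed: uniform
    for _ in range(1, K):
        sims = np.max(X @ np.array(centers).T, axis=1)  # best cosine similarity
        diss = np.clip(1.0 - sims, 0.0, None)           # cosine dissimilarity
        probs = diss / diss.sum() if diss.sum() > 0 else np.full(n, 1.0 / n)
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)
```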
How to Factorize the Composite Numbers?
Now (2 × 3)^n is divisible by 2 and 3 for sure, but not divisible by 5. So it cannot end with 0.
Explain why $7 \times 11 \times 13 + 13$ and $7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1 + 5$ are composite numbers.
It is an exercise for LCM. They will meet again at the starting point after the LCM of both times.
LCM = 2 × 2 × 3 × 3 = 36. Therefore, they will meet together at the starting point after 36 minutes.
We know that for a rational number of the form p/q to have a terminating decimal expansion, q must be of the form 2^m × 5^n.
As the denominator is of the form 5^m, the expansion is terminating.
As the denominator is of the form 2^m, the expansion is terminating.
There are also factors of 7 and 13 in the denominator, so the denominator is not of the form 2^m × 5^n, and the expansion is not terminating.
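The rule applied above — a reduced fraction p/q terminates exactly when q has no prime factors other than 2 and 5 — is easy to check mechanically; a small sketch (the sample fractions are only illustrative):

```python
from math import gcd

def has_terminating_expansion(p, q):
    """True if the fraction p/q has a terminating decimal expansion."""
    q //= gcd(p, q)          # reduce to lowest terms
    for prime in (2, 5):     # strip every factor of 2 and 5
        while q % prime == 0:
            q //= prime
    return q == 1            # any leftover factor (7, 13, ...) means non-terminating

print(has_terminating_expansion(13, 3125))   # 3125 = 5^5 -> True
print(has_terminating_expansion(29, 343))    # 343  = 7^3 -> False
```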
The following real numbers have decimal expansions as given below. In each case, decide whether they are rational or not. If they are rational, and of the form p/q, what can you say about the prime factors of q? | CommonCrawl |
In the 1960's, Feferman and Schutte did groundbreaking proof-theoretic work to find out the strength of predicative systems of second-order arithmetic. They used the ramified theory of types, a method of disallowing a formula $\phi$ from being substituted into the comprehension schema if it has quantification over all sets, including the set being defined by $\phi$ itself. This is done by dividing the comprehension schema into levels, as follows. The comprehension schema for level $0$ sets does not allow any formulas with second-order quantifiers. The comprehension schema for level $1$ sets allows formulas with quantification over level $0$ sets. For any natural number $n$, the schema for level $n+1$ sets allows quantification over sets of level $n$ and below.
And there's no particular reason to stop at finite levels. The schema for level omega sets, for instance, allows quantification over sets of any finite level. And so on, for higher and higher transfinite ordinals. There's a question of which ordinals to use, and Feferman and Schutte answered it as follows: we only allow an ordinal $\alpha$ if it is predicatively acceptable, i.e. we can prove its existence using the comprehension schemata for lower levels. Proceeding in this way, they argued that if you started from $ACA_0$, which is second-order artihmetic with only comprehension for level 0 sets, you would get all the levels up to $\Gamma_0$, the Feferman-Schutte ordinal.
There's one part of the Feferman-Schutte analysis, however, that I don't understand the point of: the use of the omega rule in the systems of ramified second-order logic. I haven't studied Feferman's 1964 paper "Systems of Predicative Analysis" in detail, so some of this may be wrong, but here's what I've gleaned: he presents two systems of ramified second-order arithmetic. One system is an infinitary system, in which the ordinals that index the levels of the ramified hierarchy are defined set-theoretically, and there is an infinitary omega rule: from $\phi(0)$, $\phi(1)$, $\phi(2)$, ..., conclude $\forall x \phi(x)$. The other system is a finitary system, where we use Kleene's $O$ to encode the ordinals using natural numbers. This time, there is a "formalized omega rule", which is defined as follows: let $\sharp \phi$ denote the Gödel number of $\phi$, and let $PROV(x)$ denote the provability predicate that encodes the proposition that the statement with Gödel number $x$ is provable (in the system we're considering). Then the formalized omega rule states that $\forall x PROV(\sharp\phi(x))$ implies $\forall x \phi(x)$. In other words, if $\phi(n)$ is provable for all $n$, then conclude $\forall x \phi(x)$.
My question is, why is either version of the omega rule needed for systems of ramified analysis? What would happen if you proceeded without it? Would we not be able to prove any new truths of arithmetic? And what is the philosophical justification for using it? Is it because in the context of the Feferman-Schutte analysis, we're talking about "predicativity given the natural numbers", so we're willing to accept natural numbers on a Platonic basis, rendering the omega rule being acceptable somehow?
If that's the explanation, what would we do if we were doing ramified second-order arithmetic in other contexts, for instance starting with a weaker base theory as I discuss in this question? In that context, we're talking about "predicativity", full stop, not "predicativity given the natural numbers", so we're even questioning the validity of induction, let alone the omega rule. Would we not be able to extend the ramified hierarchy to transfinite hierarchy in that case, or is the omega rule inessential?
EDIT: I emailed Albert Visser, and he referred me to his paper "The Predicative Frege Hierarchy", where he apparently shows that if you try to naively extend the ramified hierarchy to the transfinite using ordinal notations, and you don't add the formalized omega rule or any other principle, then the resulting system can't prove any more statements than a system with finite levels, because any given proof only involves finitely many ordinal levels, and thus we can just interpret them as finite levels.
So it looks like the formalized omega rule, or something to take its place, is essential to building the ramified hierarchy to transfinite levels in a non-trivial way. So what is the predicative justification of the formalized omega rule in the Feferman-Schutte context? Does that justification depend on a Platonic view of the set of natural numbers, and if so, can we replace the formalized omega rule with something else that IS predicatively justifiable if you don't accept the natural numbers as a completed totality? Perhaps an iteration of consistency statements or other reflection principles? | CommonCrawl |
So, the scientist would find C14-to-C12 ratios ranging from: .34 \times 10^$ - to - [insert 000$ year calculation here]. The method of carbon dating makes use of the fact that all living organisms contain two isotopes of carbon, carbon-12, denoted 12C (a stable isotope), and carbon-14, denoted 14C (a radioactive isotope). The question is: a paleontologist discovers remains of animals that appear to have died at the onset of the Holocene ice age, between 1000 years ago. What range of C^14 to C^12 ratio would the scientist expect to find in the animal remains? There are several ways to figure out relative ages, that is, if one thing is older than another.
If possible, the ink should be tested, since a recent forgery would use recently-made ink.
I'm not really sure how to go about solving this problem; any help would be appreciated.
The exponential decay formula is given by: $$m(t) = m_0 e^{-rt}$$ where $\displaystyle r = \frac{\ln 2}{h}$, $h$ = half-life of Carbon-14 = 5730 years, and $m_0$ is the initial mass of the radioactive substance.
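A quick numerical check of this decay law, assuming the conventional carbon-14 half-life of 5730 years (the ages below are illustrative, not the exact bounds asked for in the problem):

```python
import math

HALF_LIFE = 5730.0                    # years (conventional value for C-14)
r = math.log(2) / HALF_LIFE           # decay constant

def remaining_fraction(age_years):
    """Fraction of the original C-14 (hence of the living C14/C12 ratio) left."""
    return math.exp(-r * age_years)

for age in (5730, 10000, 12000):      # illustrative ages only
    print(f"{age:6d} years: {remaining_fraction(age):.3f} of the original ratio")
```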
How am I supposed to figure out what the decay constant is?
I can do this by working from the definition of "half-life": in the given amount of time (in this case, hours.
Above is a graph that illustrates the relationship between how much Carbon 14 is left in a sample and how old it is. | CommonCrawl |
(See the bottom of the post for Tikz code). This is a Serre fibration, since the image of any map $[0,1]^n\to E$ actually has to land in a connected component of $E$ (and this is the reason for the diagonal tail).
However, the composed map $F\to B$ is just the constant map $1$. The homotopy of this map with the constant map $0$ cannot factor through $E$. Indeed, if there were such a factorisation, i.e. a homotopy $H\colon F\times I\to E$, then the composed map $F\to F\times I\to E$, where $F\to F\times I$ is given by $x\mapsto (x,1)$, would map $F$, a compact set, to a noncompact set (shown projected into the vertical line). | CommonCrawl |
Abstract: The first mixed problem is investigated for a certain class of parabolic equations with double non-power-law nonlinearities in a cylindrical domain of the form $D=(t>0)\times\Omega$. The domain $\Omega\subset \mathbb R^n$ can be unbounded. The existence of strong solutions in a Sobolev-Orlicz space is proved by the method of Galerkin approximations. A maximum principle is established, and upper and lower bounds characterizing the power-law decay of solution as $t\to \infty$ are proved. The uniqueness of the solution is proved under certain additional assumptions.
Keywords: parabolic equation with double nonlinearity, $N$-functions, existence of a solution, estimate for the decay rate of a solution.
This work was supported by the Russian Foundation for Basic Research (grant no. 15-01-07920-a). | CommonCrawl |
On a calculator, make $15$ by using only the $2$ key and any of the operations keys ($+$, $-$, $\times$, $\div$).
How many ways can you find to do it?
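One systematic way to explore the question is a small brute-force search. The sketch below assumes a basic calculator that applies each operation as it is keyed in (strict left-to-right evaluation) and allows only the numbers 2, 22 and 222:

```python
from itertools import product

numbers = [2, 22, 222]                      # what you can type with only the 2 key
ops = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       'x': lambda a, b: a * b,
       '/': lambda a, b: a / b}

solutions = set()
for n_terms in range(1, 5):                 # up to four numbers per expression
    for nums in product(numbers, repeat=n_terms):
        for symbols in product(ops, repeat=n_terms - 1):
            value = nums[0]
            expr = str(nums[0])
            for op, num in zip(symbols, nums[1:]):
                value = ops[op](value, num)  # left-to-right, like a simple calculator
                expr += f" {op} {num}"
            if abs(value - 15) < 1e-9:
                solutions.add(expr)

for s in sorted(solutions):
    print(s, "= 15")
```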
Calculators. Trial and improvement. Place value. Multiplication & division. Addition & subtraction. Generalising. Factors and multiples. Investigations. Working systematically. Combinations. | CommonCrawl |
Received June 27, 2016. First published January 12, 2018.
Abstract: Let $\mu$ be a nonnegative Borel measure on $\mathbb R^d$ satisfying $\mu(Q)\le l(Q)^n$ for every cube $Q\subset\mathbb R^d$, where $l(Q)$ is the side length of the cube $Q$ and $0<n\leq d$. We study the class of pairs of weights related to the boundedness of radial maximal operators of fractional type associated to a Young function $B$ in the context of non-homogeneous spaces related to the measure $\mu$. Our results include two-weighted norm and weak type inequalities and pointwise estimates. Particularly, we give an improvement of a two-weighted result for a certain fractional maximal operator proved in W. Wang, C. Tan, Z. Lou (2012). | CommonCrawl |
On the support of Pollicott-Ruelle resonant states for Anosov flows (Nov 26 2015, last revised Sep 15 2016). We show that all generalized Pollicott-Ruelle resonant states of a topologically transitive $C^\infty$-Anosov flow with an arbitrary $C^\infty$ potential have full support.
Antisymmetry of the stochastic order on all ordered metric spaces (Oct 16 2018). In this short note, we prove that the stochastic order of Radon probability measures on any metric space is antisymmetric.
A Conjectural Algorithm for Simple Characters of Algebraic Groups (Sep 08 2017). We describe an algorithm, which - given the characters of tilting modules and assuming that Donkin's tilting conjecture is true - computes the characters of simple modules for an algebraic group in any characteristic.
News on Penguins (Nov 09 2011). We summarize recent theoretical developments in the field of radiative and semileptonic penguin decays.
On the ring of invariants of ordinary quartic curves in characteristic 2 (May 04 2004). In this article a complete set of invariants for ordinary quartic curves in characteristic 2 is computed. | CommonCrawl |
sliding, then what is the angle $\alpha$ made by the plane of the hemisphere with the inclined plane?
There is no acceleration. The hemispherical shell is *kept static*.
Take moments about the point of contact P. Friction $F$ and normal reaction $N$ act through P, so the weight $W$ must also act through P, i.e. the COM must lie vertically above P.
The centroid C is at the midpoint of the axis OA of the hemisphere, i.e. at distance OC = R/2 from the centre O. In the equilibrium position, C must lie vertically above the point of contact P.
Is torque zero because all force vectors pass through P? | CommonCrawl |
S Yamashita, E Kimura, N Tawara, H Sakaguchi, T Nakama, Y Maeda, T Hirano, M Uchino and Y Ando.
Optineurin is potentially associated with TDP-43 and involved in the pathogenesis of inclusion body myositis.. Neuropathology and applied neurobiology 39(4):406–16, June 2013.
Abstract AIMS: Increasing evidences suggest a similarity in the pathophysiological mechanisms of neuronal cell death in amyotrophic lateral sclerosis (ALS) and myofibre degeneration in sporadic inclusion body myositis (sIBM). The aim of this study is to elucidate the involvement of ALS-causing proteins in the pathophysiological mechanisms in sIBM. METHODS: Skeletal muscle biopsy specimens of five patients with sIBM, two with oculopharyngeal muscular dystrophy (OPMD), three with polymyositis (PM), three with dermatomyositis (DM), three with neurogenic muscular atrophy, and three healthy control subjects were examined. We analysed the expression and localization of familial ALS-causing proteins, including transactive response DNA binding protein-43 (TDP-43), fused in sarcoma/translocated in liposarcoma (FUS/TLS), Cu/Zn superoxide dismutase (SOD1) and optineurin (OPTN) by immunohistochemistry. RESULTS: TDP-43, OPTN and, to a lesser extent, FUS/TLS were more frequently accumulated in the cytoplasm in patients with sIBM and OPMD than in patients with PM, DM, neurogenic muscular atrophy, or healthy control subjects. SOD1 was accumulated in a small percentage of myofibres in patients with sIBM and OPMD, and to a very small extent in patients with PM and DM. Confocal microscopy imaging showed that TDP-43 proteins more often colocalized with OPTN than with FUS/TLS, p62 and phosphorylated Tau. CONCLUSIONS: These findings suggest that OPTN in cooperation with TDP-43 might be involved in the pathophysiological mechanisms of skeletal muscular degeneration in myopathy with rimmed vacuoles. Further investigation into these mechanisms is therefore warranted.
Hirofumi Maruyama and Hideshi Kawakami.
Optineurin and amyotrophic lateral sclerosis.. Geriatrics & gerontology international 13(3):528–32, 2013.
Abstract Amyotrophic lateral sclerosis is a devastating disease, and thus it is important to identify the causative gene and resolve the mechanism of the disease. We identified optineurin as a causative gene for amyotrophic lateral sclerosis. We found three types of mutations: a homozygous deletion of exon 5, a homozygous Q398X nonsense mutation and a heterozygous E478G missense mutation within its ubiquitin-binding domain. Optineurin negatively regulates the tumor necrosis factor-$\alpha$-induced activation of nuclear factor kappa B. Nonsense and missense mutations abolished this function. Mutations related to amyotrophic lateral sclerosis also negated the inhibition of interferon regulatory factor-3. The missense mutation showed a cyotoplasmic distribution different from that of the wild type. There are no specific clinical symptoms related to optineurin. However, severe brain atrophy was detected in patients with homozygous deletion. Neuropathologically, an E478G patient showed transactive response DNA-binding protein of 43 kDa-positive neuronal intracytoplasmic inclusions in the spinal and medullary motor neurons. Furthermore, Golgi fragmentation was identified in 73% of this patient's anterior horn cells. In addition, optineurin is colocalized with fused in sarcoma in the basophilic inclusions of amyotrophic lateral sclerosis with fused in sarcoma mutations, and in basophilic inclusion body disease. These findings strongly suggest that optineurin is involved in the pathogenesis of amyotrophic lateral sclerosis.
Jelena Korac, Veronique Schaeffer, Igor Kovacevic, Albrecht M Clement, Benno Jungblut, Christian Behl, Janos Terzic and Ivan Dikic.
Ubiquitin-independent function of optineurin in autophagic clearance of protein aggregates.. Journal of cell science 126(Pt 2):580–92, 2013.
Abstract Aggregation of misfolded proteins and the associated loss of neurons are considered a hallmark of numerous neurodegenerative diseases. Optineurin is present in protein inclusions observed in various neurodegenerative diseases including amyotrophic lateral sclerosis (ALS), Huntington's disease, Alzheimer's disease, Parkinson's disease, Creutzfeld-Jacob disease and Pick's disease. Optineurin deletion mutations have also been described in ALS patients. However, the role of optineurin in mechanisms of protein aggregation remains unclear. In this report, we demonstrate that optineurin recognizes various protein aggregates via its C-terminal coiled-coil domain in a ubiquitin-independent manner. We also show that optineurin depletion significantly increases protein aggregation in HeLa cells and that morpholino-silencing of the optineurin ortholog in zebrafish causes the motor axonopathy phenotype similar to a zebrafish model of ALS. A more severe phenotype is observed when optineurin is depleted in zebrafish carrying ALS mutations. Furthermore, TANK1 binding kinase 1 (TBK1) is colocalized with optineurin on protein aggregates and is important in clearance of protein aggregates through the autophagy-lysosome pathway. TBK1 phosphorylates optineurin at serine 177 and regulates its ability to interact with autophagy modifiers. This study provides evidence for a ubiquitin-independent function of optineurin in autophagic clearance of protein aggregates as well as additional relevance for TBK1 as an upstream regulator of the autophagic pathway.
Anna M Blokhuis, Ewout J N Groen, Max Koppers, Leonard H Berg and Jeroen R Pasterkamp.
Protein aggregation in amyotrophic lateral sclerosis.. Acta neuropathologica 125(6):777–94, 2013.
Abstract Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease characterized by the aggregation of ubiquitinated proteins in affected motor neurons. Recent studies have identified several new molecular constituents of ALS-linked cellular aggregates, including FUS, TDP-43, OPTN, UBQLN2 and the translational product of intronic repeats in the gene C9ORF72. Mutations in the genes encoding these proteins are found in a subgroup of ALS patients and segregate with disease in familial cases, indicating a causal relationship with disease pathogenesis. Furthermore, these proteins are often detected in aggregates of non-mutation carriers and those observed in other neurodegenerative disorders, supporting a widespread role in neuronal degeneration. The molecular characteristics and distribution of different types of protein aggregates in ALS can be linked to specific genetic alterations and shows a remarkable overlap hinting at a convergence of underlying cellular processes and pathological effects. Thus far, self-aggregating properties of prion-like domains, altered RNA granule formation and dysfunction of the protein quality control system have been suggested to contribute to protein aggregation in ALS. The precise pathological effects of protein aggregation remain largely unknown, but experimental evidence hints at both gain- and loss-of-function mechanisms. Here, we discuss recent advances in our understanding of the molecular make-up, formation, and mechanism-of-action of protein aggregates in ALS. Further insight into protein aggregation will not only deepen our understanding of ALS pathogenesis but also may provide novel avenues for therapeutic intervention.
David Kachaner, Pierre Génin, Emmanuel Laplantine and Robert Weil.
Toward an integrative view of Optineurin functions.. Cell cycle (Georgetown, Tex.) 11(15):2808–18, August 2012.
Abstract This review highlights recent advances in our understanding of the mechanisms of Optineurin (Optn) action and its implication in diseases. Optn has emerged as a key player regulating various physiological processes, including membrane trafficking, protein secretion, cell division and host defense against pathogens. Furthermore, there is growing evidence for an association of Optn mutations with human diseases such as primary open-angle glaucoma, amyotrophic lateral sclerosis and Paget's disease of bone. Optn functions depend on its precise subcellular localization and its interaction with other proteins. Here, we review the mechanisms that allow Optn to ensure a timely and spatially coordinated integration of different physiological processes and discuss how their deregulation may lead to different pathologies.
Hongyu Ying and Beatrice Y J T Yue.
Cellular and molecular biology of optineurin.. International review of cell and molecular biology 294:223–58, January 2012.
Abstract Optineurin is a gene linked to glaucoma, amyotrophic lateral sclerosis, other neurodegenerative diseases, and Paget's disease of bone. This review describes the characteristics of optineurin and summarizes the cellular and molecular biology investigations conducted so far on optineurin. Data from a number of laboratories indicate that optineurin is a cytosolic protein containing 577 amino acid residues. Interacting with proteins such as myosin VI, Rab8, huntingtin, transferrin receptor, and TANK-binding kinase 1, optineurin is involved in basic cellular functions including protein trafficking, maintenance of the Golgi apparatus, as well as NF-$\kappa$B pathway, antiviral, and antibacteria signaling. Mutation or alteration of homeostasis of optineurin (such as overexpression or knockdown) results in adverse consequences in the cells, leading to the development of neurodegenerative diseases including glaucoma.
Claudia Schwab, Sheng Yu, Edith G McGeer and Patrick L McGeer.
Optineurin in Huntington's disease intranuclear inclusions.. Neuroscience letters 506(1):149–54, January 2012.
Abstract Optineurin mutations cause adult-onset primary open-angle glaucoma and have been associated with some familial forms of amyotrophic lateral sclerosis (ALS). Optineurin is involved in many cellular processes and interacts with a variety of proteins, among them huntingtin (htt). Here we report that in Huntington's disease (HD) cortex, optineurin frequently occurs in neuronal intranuclear inclusions, and to a lesser extent, in inclusions in the neuropil and in perikarya. Most intranuclear optineurin-positive inclusions were co-labeled for ubiquitin, but they were only occasionally and more weakly co-labeled for htt. Optineurin-labeled neuropil and perikaryal inclusions were commonly co-labeled for ubiquitin and htt. Although these inclusions were common in cortex, they were rare in striatum. Our results show that in HD optineurin is present in intranuclear, neuropil and perikaryal inclusions. It is not clear whether this indicates a primary involvement in the disease process. In HD, the known interaction of htt and optineurin may suggest that a different process takes place as compared to other neurodegenerative disorders.
Hidefumi Ito, Kengo Fujita, Masataka Nakamura, Reika Wate, Satoshi Kaneko, Shoichi Sasaki, Kiyomi Yamane, Naoki Suzuki, Masashi Aoki, Noriyuki Shibata, Shinji Togashi, Akihiro Kawata, Yoko Mochizuki, Toshio Mizutani, Hirofumi Maruyama, Asao Hirano, Ryosuke Takahashi, Hideshi Kawakami and Hirofumi Kusaka.
Optineurin is co-localized with FUS in basophilic inclusions of ALS with FUS mutation and in basophilic inclusion body disease.. Acta neuropathologica 121(4):555–7, April 2011.
Takemasa Sakaguchi, Takashi Irie, Ryoko Kawabata, Asuka Yoshida, Hirofumi Maruyama and Hideshi Kawakami.
Optineurin with amyotrophic lateral sclerosis-related mutations abrogates inhibition of interferon regulatory factor-3 activation.. Neuroscience letters 505(3):279–81, 2011.
Abstract Optineurin has been shown to be involved in primary open-angle glaucoma. We recently found that optineurin is involved in familial amyotrophic lateral sclerosis (ALS). On the other hand, optineurin has been shown to inhibit transcription factors related to innate immunity such as NF-$\kappa$B and interferon regulatory factor-3 (IRF3). In the present study, the effect of ALS-associated optineurin mutations on IRF3 activation was investigated. Optineurin inhibited IRF3 activation induced by melanoma differentiation-associated gene 5 or Toll-IL-1 receptor domain-containing adaptor-inducing interferon-$\beta$. The inhibition was abrogated by mutations related to ALS but not by a mutation related to glaucoma. Reporter assay indicated that the JAK-STAT signaling pathway was not affected by optineurin. These results show that ALS-related optineurin is involved in the IRF3 activation pathway. Pathogenesis of ALS may be associated with some kind of innate immunity, especially that against virus infection, through IRF3 activation.
Tenshi Osawa, Yuji Mizuno, Yukio Fujita, Masamitsu Takatama, Yoichi Nakazato and Koichi Okamoto.
Optineurin in neurodegenerative diseases.. Neuropathology : official journal of the Japanese Society of Neuropathology 31(6):569–74, 2011.
Abstract Optineurin is a gene associated with normal tension glaucoma and primary open-angle glaucoma, one of the major causes of irreversible bilateral blindness. Recently, mutations in the gene encoding optineurin were found in patients with amyotrophic lateral sclerosis (ALS). Immunohistochemical analysis showed aggregation of optineurin in skein-like inclusions and round hyaline inclusions in the spinal cord, suggesting that optineurin appears to be a more general marker for ALS. However, our detailed examinations demonstrated that optineurin was found not only in ALS-associated pathological structures, but also in ubiquitin-positive intraneuronal inclusions in ALS with dementia, basophilic inclusions in the basophilic type of ALS, neurofibrillary tangles and dystrophic neurites in Alzheimer's disease, Lewy bodies and Lewy neurites in Parkinson's disease, ballooned neurons in Creutzfeldt-Jakob disease, glial cytoplasmic inclusions in multiple system atrophy, and Pick bodies in Pick disease. With respect to optineurin-positive basophilic inclusions, these structures showed variable immunoreactivities for ubiquitin; some structures were obviously ubiquitin-positive, while others were negative for the protein, suggesting that optineurin expression was not always associated with the expression of ubiquitin. This study indicates that optineurin is widely distributed in neurodegenerative conditions; however, its significance is obscure.
Tibor Hortobágyi, Claire Troakes, Agnes L Nishimura, Caroline Vance, John C Swieten, Harro Seelaar, Andrew King, Safa Al-Sarraj, Boris Rogelj and Christopher E Shaw.
Optineurin inclusions occur in a minority of TDP-43 positive ALS and FTLD-TDP cases and are rarely observed in other neurodegenerative disorders.. Acta neuropathologica 121(4):519–27, 2011.
Abstract Optineurin (OPTN) is a multifunctional protein involved in vesicular trafficking, signal transduction and gene expression. OPTN mutations were described in eight Japanese patients with familial and sporadic amyotrophic lateral sclerosis (FALS, SALS). OPTN-positive inclusions co-localising with TDP-43 were described in SALS and in FALS with SOD-1 mutations, potentially linking two pathologically distinct pathways of motor neuron degeneration. We have explored the abundance of OPTN inclusions using a range of antibodies in postmortem tissues from 138 cases and controls including sporadic and familial ALS, frontotemporal lobar degeneration (FTLD) and a wide range of neurodegenerative proteinopathies. OPTN-positive inclusions were uncommon and detected in only 11/32 (34%) of TDP-43-positive SALS spinal cord and 5/15 (33%) of FTLD-TDP. Western blot of lysates from FTLD-TDP frontal cortex and TDP-43-positive SALS spinal cord revealed decreased levels of OPTN protein compared to controls (p < 0.05), however, this correlated with decreased neuronal numbers in the brain. Large OPTN inclusions were not detected in FALS with SOD-1 and FUS mutation, respectively, or in FTLD-FUS cases. OPTN-positive inclusions were identified in a few Alzheimer's disease (AD) cases but did not co-localise with tau and TDP-43. Occasional striatal neurons contained granular cytoplasmic OPTN immunopositivity in Huntington's disease (HD) but were absent in spinocerebellar ataxia type 3. No OPTN inclusions were detected in FTLD-tau and $\alpha$-synucleinopathy. We conclude that OPTN inclusions are relatively rare and largely restricted to a minority of TDP-43 positive ALS and FTLD-TDP cases. Our results do not support the proposition that OPTN inclusions play a central role in the pathogenesis of ALS, FTLD or any other neurodegenerative disorder.
Han-Xiang Deng, Eileen H Bigio, Hong Zhai, Faisal Fecto, Kaouther Ajroud, Yong Shi, Jianhua Yan, Manjari Mishra, Senda Ajroud-Driss, Scott Heller, Robert Sufit, Nailah Siddique, Enrico Mugnaini and Teepu Siddique.
Differential involvement of optineurin in amyotrophic lateral sclerosis with or without SOD1 mutations.. Archives of neurology 68(8):1057–61, 2011.
Abstract BACKGROUND: Mutations in optineurin have recently been linked to amyotrophic lateral sclerosis (ALS). OBJECTIVE: To determine whether optineurin-positive skeinlike inclusions are a common pathologic feature in ALS, including SOD1 -linked ALS. DESIGN: Clinical case series. SETTING: Academic referral center. SUBJECTS: We analyzed spinal cord sections from 46 clinically and pathologically diagnosed ALS cases and ALS transgenic mouse models overexpressing ALS-linked SOD1 mutations G93A or L126Z. RESULTS: We observed optineurin-immunoreactive skeinlike inclusions in all the sporadic ALS and familial ALS cases without SOD1 mutation, but not in cases with SOD1 mutations or in transgenic mice overexpressing the ALS-linked SOD1 mutations G93A or L126Z. CONCLUSION: The data from this study provide evidence that optineurin is involved in the pathogenesis of sporadic ALS and non- SOD1 familial ALS, thus supporting the hypothesis that these forms of ALS share a pathway that is distinct from that of SOD1-linked ALS.
Ghanshyam Swarup and Ananthamurthy Nagabhushana.
Optineurin, a multifunctional protein involved in glaucoma, amyotrophic lateral sclerosis and antiviral signalling.. Journal of biosciences 35(4):501–5, December 2010.
Wataru Sako, Hidefumi Ito, Mari Yoshida, Hidetaka Koizumi, Masaki Kamada, Koji Fujita, Yoshio Hashizume, Yuishin Izumi and Ryuji Kaji.
Nuclear factor $\kappa$ B expression in patients with sporadic amyotrophic lateral sclerosis and hereditary amyotrophic lateral sclerosis with optineurin mutations.. Clinical neuropathology 31(6):418–23.
Abstract Nuclear factor $\kappa$ B (NF-$\kappa$B) is involved in the pathogenesis of a number of neurodegenerative disorders with neuroinflammation. In order to clarify the role of NF-$\kappa$B in ALS, immunohistochemical studies with an antibody that recognizes the p65 subunit of NF-$\kappa$B were performed on the spinal anterior horn of 4 patients with sporadic ALS (sALS), 1 patient with optineurin-mutated ALS (OPTN-ALS), and 3 normal controls (NC). In patients with sALS or OPTN-ALS, the expression pattern of NF-$\kappa$B was altered when compared to that of NC; NF-$\kappa$B immunoreactivity tended to be absent from neuronal nucleus and was increased in microglia. The down-regulation of NF-$\kappa$B in neuronal nucleus might contribute to a loss of neuroprotection, or neurons with nuclear NF-$\kappa$B might be lost immediately after its activation. The microglial induction of NF-$\kappa$B might contribute to neuroinflammation. In conclusion, NF-$\kappa$B signaling pathway could have a key role in the pathomechanism of ALS. | CommonCrawl |
We prove a weighted fractional inequality involving the solution $u$ of a nonlocal semilinear problem in $\mathbb R^n$. Such inequality bounds a weighted $L^2$-norm of a compactly supported function $\phi$ by a weighted $H^s$-norm of $\phi$. In this inequality a geometric quantity related to the level sets of $u$ will appear. As a consequence we derive some relations between the stability of $u$ and the validity of fractional Hardy inequalities. | CommonCrawl |
Abstract: Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
- Observed that __the deeper networks become, the more chaotic neural loss landscapes become__; this causes a dramatic drop in generalization error, and ultimately a lack of trainability.
- Observed that __skip connections promote flat minimizers and prevent the transition to chaotic behavior__; helps explain why skip connections are necessary for training extremely deep networks.
- Studies the visualization of SGD optimization trajectories.
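A minimal sketch of the "filter normalization" idea mentioned in the abstract above, for a PyTorch model: each randomly drawn direction is rescaled filter-by-filter to match the norms of the trained weights. This is one reading of the method, not the authors' reference implementation:

```python
import torch

@torch.no_grad()
def filter_normalized_direction(model):
    """Random direction in weight space, rescaled so each filter has the same
    norm as the corresponding filter of the trained model."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:                       # conv / linear weight tensors
            for d_filt, p_filt in zip(d, p):  # loop over output filters
                d_filt.mul_(p_filt.norm() / (d_filt.norm() + 1e-10))
        else:                                 # biases, batch-norm vectors
            d.mul_(p.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction
```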
- Network architecture improvement decreases parameters 51X (240MB to 4.8MB).
- By using Deep Compression, the parameters shrink by a further ~10X (4.8MB to 0.47MB).
Accuracy even improves by about 2% when using Simple Bypass (shortcut connections).
3. Downsample late to have larger activation maps, which leads to higher accuracy.
- Squeeze Ratio to find good balance between weight size and accuracy.
- 3x3 filter percentage to find a sufficient number of them (a Fire-module sketch follows below).
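A hypothetical Fire module along the lines discussed above; the channel counts are illustrative, and the squeeze ratio corresponds to `squeeze_ch` relative to the expand channels:

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Squeeze (1x1) then expand (a mix of 1x1 and 3x3 filters)."""
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)
```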
Abstract: Keyword spotting (KWS) is a critical component for enabling speech based user interactions on smart devices. It requires real-time response and high accuracy for good user experience. Recently, neural networks have become an attractive choice for KWS architecture because of their superior accuracy compared to traditional speech processing algorithms. Due to its always-on nature, KWS application has highly constrained power budget and typically runs on tiny microcontrollers with limited memory and compute capability. The design of neural network architecture for KWS must consider these constraints. In this work, we perform neural network architecture evaluation and exploration for running KWS on resource-constrained microcontrollers. We train various neural network architectures for keyword spotting published in literature to compare their accuracy and memory/compute requirements. We show that it is possible to optimize these neural network architectures to fit within the memory and compute constraints of microcontrollers without sacrificing accuracy. We further explore the depthwise separable convolutional neural network (DS-CNN) and compare it against other neural network architectures. DS-CNN achieves an accuracy of 95.4%, which is ~10% higher than the DNN model with similar number of parameters.
- Result of thorough research which not only covers major research, but also compares it under the same criteria/dataset; this is also a great survey.
- Train on 32-bit FP model, run 8-bit model. No retraining required to convert to 8-bit w/o loss in accuracy.
- Provides comparisons of computing resources; useful when designing for typical (ARM) microcontroller systems.
- MobileNet inspired DS-CNN runs small and accurate, achieves the best accuracies of 94.4% ~ 95.4%. Maybe SOTA.
- Apache-licensed code/pretrained models are available at https://github.com/ARM-software/ML-KWS-for-MCU (a depthwise-separable block sketch follows below).
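A generic depthwise-separable convolution block of the kind used in DS-CNN-style models (a sketch, not the exact architecture from the paper):

```python
import torch.nn as nn

def ds_conv_block(in_ch, out_ch, stride=1):
    """Depthwise 3x3 followed by pointwise 1x1, each with BN + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),          # depthwise
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),      # pointwise
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```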
Very efficient data augmentation method. Linearly interpolate training inputs x and labels y at random every epoch (a code sketch follows the list below).
- ERM (Empirical Risk Minimization) is $\alpha = 0$ version of mixup, i.e. not using mixup.
- Reduces the memorization of corrupt labels.
- Increases robustness to adversarial examples.
- Stabilizes the training of GAN.
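A minimal mixup sketch (it assumes one-hot labels; `alpha` is the Beta-distribution parameter mentioned above):

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """x: batch of inputs, y: one-hot labels. alpha = 0 falls back to plain ERM."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[index]
    mixed_y = lam * y + (1 - lam) * y[index]
    return mixed_x, mixed_y
```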
Abstract: With the popularity of deep learning (DL), artificial intelligence (AI) has been applied in many areas of human life. Neural network or artificial neural network (NN), the main technique behind DL, has been extensively studied to facilitate computer vision and natural language recognition. However, the more we rely on information technology, the more vulnerable we are. That is, malicious NNs could bring huge threat in the so-called coming AI era. In this paper, for the first time in the literature, we propose a novel approach to design and insert powerful neural-level trojans or PoTrojan in pre-trained NN models. Most of the time, PoTrojans remain inactive, not affecting the normal functions of their host NN models. PoTrojans could only be triggered in very rare conditions. Once activated, however, the PoTrojans could cause the host NN models to malfunction, either falsely predicting or classifying, which is a significant threat to human society of the AI era. We would explain the principles of PoTrojans and the easiness of designing and inserting them in pre-trained deep learning models. PoTrojans doesn't modify the existing architecture or parameters of the pre-trained models, without re-training. Hence, the proposed method is very efficient.
To keep it simple, this figure shows the basic idea.
Abstract: We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes; it predicts detections for more than 9000 different object categories. And it still runs in real-time.
- "With batch nor-malization we can remove dropout from the model without overfitting"
- gets 78.6 mAP at 40 FPS.
- detects more than 9000 different object classes in real-time. | CommonCrawl |
I shall report on calculations of isovector matrix elements of the nucleon, such as $g_A, g_s$, and $\langle x \rangle$ on the $48^3 \times 96$ lattice with pion mass at 139 MeV and lattice size of 5.5 fm. We employ overlap valence fermion on the 2+1 flavor DWF configurations for the calculation. Also reported will be the strange quark momentum fraction and its magnetic moment from this lattice. A comparison of the cost of such calculations with those of the twisted mass fermion, clover fermion, and domain wall fermion on similar lattices and quark masses will be made for the calculation of the nucleon mass and the three-point functions of both the connected and disconnected insertions. | CommonCrawl |
Compare the exact theory with the linear theory by plotting curves of $P_2/P_1$ versus $M_1$ for two values of $\delta$, namely $2^\circ$ and $10^\circ$. Use a range of $M_1$ from 1 to 3.
A thin flat plate airfoil is immersed in a supersonic stream of $M_\infty=2.2$ at an angle of attack of $10^\circ$. Determine the lift and drag coefficients per unit length of the flat plate airfoil using the linearized theory and compare with the estimates from exact methods. The width of the airfoil (i.e., distance along the chord) is $l$.
The airfoil is located in an airstream with a free-stream Mach number $M_\infty=3$ and a free stream static pressure $P=1.0133\times 10^5$ $\rm N/m^2$. Calculate the lift and drag per unit width of the wedge for an angle of attack $\alpha$ of $-15^\circ$. Take $\gamma=1.4$.
(a) Find the expressions for the lift and drag coefficients in terms of $M_\infty$ , $h/t$, and $m$.
(b) Find an expression for the lift over drag ratio in terms of $h/t$ and $m$. Plot the lift over drag ratio versus $m$ for $t/h=5$ and for $t/h=10$. Consider the range $1 \le m \le 4$.
As shown below, a cambered supersonic aerofoil is simulated by an articulated flat plate where the articulated deflections are $2^\circ$ at each step.
(b) using first order linearized theory.
The pressure of the atmosphere into which the jet discharges is 1 bar.
(a) Calculate the pressures in regions "b" and "c".
(b) Make a sketch to scale showing stream lines and shock lines.
(c) Assuming the pressure at the nozzle entrance to be maintained constant, what is the maximum atmospheric pressure for which this general type of flow configuration is possible? Describe the nature of the flow pattern when the exhaust-region pressure is raised above the limiting value.
(d) Compare the results of part (a) with the results of calculations based on linear theory.
Knowing that the surrounding pressure is of 1 atm, that the pressure in the reservoir driving the nozzle is of 15 atm, and that the nozzle exit area is of $\rm 0.2~m^2$, determine the minimum and maximum nozzle throat area that would yield the wave pattern observed in the Schlieren photo.
2. 0.362, 0.064, 0.351, 0.0619.
3. -225 kN/m, 74.6 kN/m, -279 kN/m, 106.5 kN/m.
6. 8462 Pa/m, 809 Pa/m, 8422 Pa/m, 806 Pa/m.
7. 1 bar, 1.5 bar, 1.3 bar, 1 bar, 1.328 bar.
8. $0.026$–$0.082$ m$^2$.
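As a quick cross-check of the linear-theory numbers for the flat-plate problem, the standard Ackeret relations $c_l = 4\alpha/\sqrt{M_\infty^2-1}$ and $c_d = 4\alpha^2/\sqrt{M_\infty^2-1}$ can be evaluated directly; this is only the linearized estimate, while the exact values require the oblique-shock and expansion-wave relations:

```python
import math

M_inf = 2.2
alpha = math.radians(10.0)            # angle of attack in radians
beta = math.sqrt(M_inf**2 - 1.0)

cl_lin = 4.0 * alpha / beta           # linearized (Ackeret) lift coefficient
cd_lin = 4.0 * alpha**2 / beta        # linearized wave-drag coefficient

print(f"cl (linear theory) = {cl_lin:.3f}")   # ~0.356
print(f"cd (linear theory) = {cd_lin:.3f}")   # ~0.062
```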
Due on Tuesday December 11th at 16:30. Do Questions #5, #6, and #7 only. | CommonCrawl |
This study investigated electron transfer potential of the carbonyl-metallate anions cyclopentadienylirondicarbonyl and methylcyclopentadienyliron-dicarbonyl anions to the electron acceptors 2,2-dinitropropane, 1,1-dinitrocyclohexane, $\alpha$,p-dinitrocumene, methyl cyclopentadienyl-mercury(II) chloride and iodide.
The findings from this study indicated that: (a) the expected products could not be obtained from reactions of the anions with the dinitro-substrates, and (b) the mercurials reacted with the anions, but the expected products (ferrocene and methylferrocene) were only minor products, and an unexpected (methyl)cyclopentadienylirondicarbonylmercury(II) chloride or iodide was the major product.
Agyin, Joseph Kofi, "Investigation of Electron Transfer from Carbonylmetallate Anions to Electron Acceptors" (1993). Master's Theses. 804. | CommonCrawl |
Parallelism in ABINIT, generalities and environments.
What parts of ABINIT are parallel?
There are many situations where a sequential code is not enough, often because it would take too much time to get a result. There are also cases where you just want things to go as fast as your computational resources allow it. By using more than one processor, you might also have access to more memory than with only one processor. To this end, it is possible to use ABINIT in parallel, with dozens, hundreds or even thousands processors.
This tutorial offers you a little reconnaissance tour inside the complex world that emerges as soon as you want to use more than one processor. From now on, we will suppose that you are already familiar with ABINIT and that you have gone through all four basic tutorials. If this is not the case, we strongly advise you to do so, in order to truly benefit from this tutorial.
We strongly recommend you to acquaint yourself with some basic concepts of parallel computing too. In particular Amdahl's law, which rationalizes the fact that, beyond some number of processors, the inherently sequential parts will dominate the parallel parts and limit the maximal speed-up that can be achieved.
Such tightly integrated multi-core processors (or so-called SMP machines, meaning Symmetric Multi-Processing) can be interlinked within networks, based on Ethernet or other types of connections (Quadrics, Myrinet, etc …). The number of cores in such composite machines can easily exceed one hundred, and go up to a fraction of a million these days. Most ABINIT capabilities can use efficiently several hundred computing cores. In some cases, even more than ten thousand computing cores can be used efficiently.
Before actually starting this tutorial and the associated ones, we strongly advise you to get familiar with your own parallel environment. It might be relatively simple for a SMP machine, but more difficult for very powerful machines. You will need at least to have MPI (see next section) installed on your machine. Take some time to determine how you can launch a job in parallel with MPI (typically the qsub or sbatch command and an associated shell script), what are the resources available and the limitations as well, and do not hesitate to discuss with your system administrator if you feel that something is not clear to you.
We will suppose in the following that you know how to run a parallel program and that you are familiar with the peculiarities of your system. Please remember that, as there is no standard way of setting up a parallel environment, we are not able to provide you with support beyond ABINIT itself.
Different software solutions can be used to benefit from parallelism. Most of ABINIT parallelism is based on MPI, but significant additional speed-up (or a better distribution of data, allowing to run bigger calculations) is based on OpenMP and multi-threaded libraries. As of writing, efforts also focus on Graphical Processing Units (GPUs), with CUDA and MAGMA. The latter will not be described in the present tutorial.
MPI stands for Message Passing Interface. The goal of MPI, simply stated, is to develop a widely used standard for writing message-passing programs. As such the interface attempts to establish a practical, portable, efficient, and flexible standard for message passing.
The main advantages of establishing a message-passing standard are portability and ease-of-use. In a distributed memory communication environment in which the higher level routines and/or abstractions are built upon lower-level message-passing routines, the benefits of standardization are particularly obvious. Furthermore, the definition of a message-passing standard provides vendors with a clearly defined base set of routines that they can implement efficiently, or in some cases provide hardware support for, thereby enhancing scalability (see http://mpi-forum.org).
The OpenMP Application Program Interface (API) supports multi-platform shared-memory parallel programming in C/C++ and Fortran on all architectures, including Unix platforms and Windows NT platforms. Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer (http://www.openmp.org).
Scalapack is the parallel version of the popular LAPACK library (for linear algebra). It can play some role in the parallelism of several parts of ABINIT, especially the LOBPCG algorithm in ground state calculations, and the parallelism for the Bethe-Salpeter equation. ScaLAPACK being itself based on MPI, we will not discuss its use in ABINIT in this tutorial.
Scalapack is not thread-safe in many versions. Combining OpenMP and Scalapack can result in unpredictable behaviours.
Characterizing the data-transfer efficiency between two computing cores (or the whole set of cores) is a complex task. At a quite basic level, one has to recognize that not only the quantity of data that can be transferred per unit of time is important, but also the time that is needed to initialize such a transfer (so called latency).
Broadly speaking, one can categorize computers following the speed of communications. In the fast communication machines, the latency is very low and the transfer time, once initialized, is very low too. For the parallelised part of ABINIT, SMP machines and machines with fast interconnect (Quadrics, Myrinet …) will usually not be limited by their network characteristics, but by the existence of residual sequential parts. The tutorials that have been developed for ABINIT have been based on fast communication machines.
If the set of computing cores that you plan to use is not entirely linked using a fast network, but includes some connections based e.g. on Ethernet, then, you might not be able to benefit from the speed-up announced in the tutorials. You have to perform some tests on your actual machine to gain knowledge of it, and perhaps consider using multithreading.
Note that the tutorial on ground state with plane waves presents a complete overview of this parallelism, including up to four levels of parallelisation and, as such, is rather complex. Of course, it is also quite powerful, and allows to use several hundreds of processors.
are, on the contrary, quite easy to use. An example of such parallelism will be given in the next section.
Before starting, you might consider working in a different subdirectory as for the other tutorials. Why not Work_paral?
Copy the files file and the input file from the $ABI_TUTORIAL directory to your work directory. They are named tbasepar_1.files and tbasepar_1.in.
cp ../tbasepar_1.files . # You will need to edit this file.
to have a reference CPU time. On a 2.8GHz PC, it runs in about one minute.
Note that determining ahead of time the precise resources you will need for your run will save you a lot of time if you are using a batch queue system. Also, for parallel runs, note that the log files will not be written except for the main log file.
On the contrary, you can create a _NOLOG file if you want to avoid all log files.
The most favorable case for a parallel run is to treat the k-points concurrently, since the calculations can be done independently for each one of them.
Actually, tbasepar_1.in corresponds to the investigation of a fcc crystal of lead, which requires a large number of k-points if one wants to get an accurate description of the ground state. Examine this file. Note that the cut-off is realistic, as well as the grid of k-points (giving 60 k points in the irreducible Brillouin zone). However, the number of SCF steps, nstep, has been set to 3 only. This is to keep the CPU time reasonable for this tutorial, without affecting the way parallelism on the k points will be able to increase the speed. Once done, your output files have likely been produced. Examine the timing in the output file (the last line gives the CPU overall time and Wall time), and keep note of it.
Depending on your particular machine, mpirun might have to be replaced by mpiexec, and -n by some other option.
Then, you have to issue the run command for your MPI implementation, and mention the number of processors you want to use, as well as the abinit command and the file containing the CPU addresses.
Now, examine the corresponding output file. If you have kept the output from the sequential job, you can make a diff between the two files. You will notice that the numerical results are quite identical. You will also see that 60 k-points have been kept in the memory in the sequential case, while 30 k-points have been kept in the memory (per processor !) in the parallel case.
Delivered 1 WARNINGs and 1 COMMENTs to log file.
This corresponds effectively to a speed-up of the job by a factor of two. Let's examine it. The line beginning with Proc. 0 corresponds to the CPU and Wall clock timing seen by the processor number 0 (processor indexing always starts at 0: here the other is number 1): 28.3 sec of CPU time, and the same amount of Wall clock time. The line that starts with +Overall time corresponds to the sum of CPU times and Wall clock timing for all processors. The summation is quite meaningful for the CPU time, but not so for the wall clock time: the job was finished after 28.3 sec, and not 56.6 sec.
The red curve shows the speed-up achieved, while the green one is the y = x line. The shape of the red curve will vary depending on your hardware configuration. The speed-up is defined as the time taken by the sequential calculation divided by the time taken by your parallel calculation (hopefully > 1).
One last remark: the number of k-points need not be a multiple of the number of processors. As an example, you might try to run the above case with 16 processors: every processor treats at least $\lfloor 60/16 \rfloor = 3$ k points, but since $60 - 16 \times 3 = 12$, twelve of the processors have to treat one more k point, so that $12 \times 4 + 4 \times 3 = 60$. The maximal speed-up is then only 15 (=60/4), instead of 16.
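In general, ignoring communication overhead, if $N_k$ k-points are distributed over $N_{proc}$ processors, the parallel time is set by the most loaded processor, so
$$t_{par} \propto \left\lceil \frac{N_k}{N_{proc}} \right\rceil \qquad \Rightarrow \qquad \text{maximal speed-up} = \frac{N_k}{\lceil N_k / N_{proc} \rceil},$$
which for $N_k = 60$ and $N_{proc} = 16$ gives $60/4 = 15$, as stated above. This is only the idealized picture.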
Try to avoid leaving an empty processor, as this can make abinit fail with certain compilers. An empty processor occurs, for example, if you use more processors than the number of k points. The extra processors do no useful work, but have to run anyway, just to confirm to abinit once in a while that all processors are alive.
The parallelization over the spins (up, down) is done along with the one over the k-points, so it works exactly the same way. The files tbasepar_2.in and tbasepar_2.files in $ABI_TUTORIAL treat a spin-polarized system (distorted FCC Iron) with only one k-point in the Irreducible Brillouin Zone. This is quite unphysical, and has the sole purpose of showing the spin parallelism with as few as two processors: the k-point parallelism has precedence over the spin parallelism, so that with 2 processors, one needs only one k-point to see the spin parallelism. If needed, modify the files file, to provide a local temporary disk space. Run this test case, in sequential, then in parallel.
In the second case (parallelism), node 0 is taking care of the up state for k-point 1, while node 1 is taking care of the down state for k-point 1. The timing analysis is very similar to the k-point parallelism case.
If you have more than 2 processors at hand, you might increase the value of ngkpt, so that more than one k-point is available, and see that the k-point and spin parallelism indeed work concurrently.
Balancing the load on the processors efficiently is not always straightforward. When using k-point- and spin-parallelism, the ideal numbers of processors to use are those that divide the product nsppol * nkpt (e.g. if nsppol * nkpt = 12, it is quite efficient to use 2, 3, 4, 6 or 12 processors). ABINIT will nevertheless handle other numbers of processors correctly, albeit slightly less efficiently, as the final time is determined by the processor that has the biggest share of the work to do.
Beyond a certain number of processors, the efficiency of parallelism saturates, and may even decrease. This is due to the inevitable overhead resulting from the increasing amount of communication between the processors. The loss of efficiency is highly dependent on the implementation, and is also linked to the decreasing load on each processor.
A general line element and a general metric tensor are defined as functions of two parameters $\alpha$ and $\alpha'$. The related Einstein field equations of a gravitational potential field in a vacuum, including the parameter $\Lambda$, have been derived. The parameters $\alpha$ and $\alpha'$ are identified in a gravitational field by the solution of the Einstein field equations. In parallel with this, it has been found that the so-called cosmological constant $\Lambda$ is not really constant, but a function of the gravitational radius, $\Lambda = f(r)$. This finding is very important, among other things, for cosmology. One of the consequences is a new form of the acceleration equation for the motion of the universe, which can be attractive (negative) or repulsive (positive). According to the observations, the repulsive acceleration gives rise to the accelerating expansion of the universe at the present time. The obtained solution of the diagonal line element can be applied in a very strong gravitational field. Besides, this solution gives a Ricci scalar equal to zero, $R=0$. This is in agreement with the current observation that our universe is flat.
:: ) special thanks to : DirtyOperatingSystemTips.COM team and members.
Everything I need to be able to do good old dos. First being released as a function library for mass deployment. Learned vbs and converted the entire library to vbsKit. In the end vbsKit is just the next step for me but at times I still escape to dos.
Started various fun projects, optimizing with new tricks I found useful. You can find almost any trick that the community found useful used in some way in the code, or otherwise documented. Doskit includes testing examples, explanations and debugging features. My latest addition is a compiler that converts doskit language to cmd batch.
This document will introduce the latest version as it is being written, older versions may not fully comply with this documentation. The compiler is not available for versions below 1.0.
Save below and drag'ndrop the file onto compile.CMD to start the conversion.
External binaries are linked as optional classes. Optional classes are included using the include_ macro. Classes are included recursively so you can include entire class hierarchies in a single command. $madplay is an object. All variables and objects use $ as prefix. The § prefix is used for constants, globals and true functions.
The next code creates the object $. The first True argument will make oFile_ throw an exception if the path argument is malformed or undefined. The second True argument will enable methods for the object.
The True argument tells the move method to throw an exception if any problem occurs, for example if the file does not exist. Most macros set an exitcode so you can implement conditions without checking errorlevels.
Setting it to False prevents the program from halting on any non-critical errors. I use the False explicitly here, but I could just have left it out because it is the default setting for this function.
Of all delimiters, only starDelim cannot crash on its input; it is also the only delimiter that can handle any input.
bruteParam_ is associated with starDelim_ because it can handle any character. param_ cannot handle '*' and is associated with the other delimiters.
To pass arguments by value enclose them in double quotes. If you want to pass a double quote you need to double it. All carets and exclamation marks need to be escaped. The parser can be crashed by passing an argument by value with unescaped double quotes.
To pass arguments by reference leave out any surrounding double quotes. If a function enforces byValue and you provide a reference it will be interpreted byValue. oFile_ enforces %a byValue.
For obvious reasons references cannot consist of default delimiters like tab, space, comma or quotes. Do not pass single-character variable names except for char $; most others are cleared. Variable names starting with $ are set and unset faster than variables starting with any other symbol outside local scopes.
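As a plain-cmd illustration (not doskit-specific syntax) of why carets and exclamation marks matter: with delayed expansion enabled, a literal "!" has to be escaped when the line is parsed, and an unescaped "!" inside a %-expanded value is typically stripped. The exact output can vary between cmd versions, so treat this as a sketch to try on your own machine.
:: sketch.CMD - behaviour of "!" under enableDelayedExpansion
@echo off
setlocal enableDelayedExpansion
set "msg=Hello World^!"
:: the next line should print: Hello World!
echo(!msg!
:: here the unescaped "!" is usually lost, leaving: Hello World
echo(%msg%
endlocal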
%a is the name of the variable that will be assigned.
%b is the user's argument.
%c is the default value.
If no default value is set, %b cannot be empty.
If the default value is substituted by an empty string "", %b is optional.
doskit.NFO is created when doskit runs. In it you find the class hierarchy. Only required functions are part of class kernel.
$vr is interpreted byValue because %~a is enclosed in double quotes; it is also required, because no default value is defined. For the 2nd argument $file, the %b parameter is not quoted, which means it depends on you whether it is interpreted by value or by reference. The other 2 are optional and default to False.
Methods are currently not documented; to figure out how to use an object's methods, a simple rule is applied: replacing [object].[method] with [class][method]_ gives the function associated with the method, in this case fileMove_.
When working with objects you never provide the first argument %a as it is provided by the object.
Here, the 1st rule tells the parser to accept %a byRef and byVal and allow it to be undefined. Similar for the 2nd argument.
assertFail_ demands we provide it with at least one parameter, or it will throw an exception and halt, even though its parsing allows %a to be empty initially. Tests can be based upon the exitcode of the previous command. If pushd_ is successful, then 'cd' must contain "C:\This work^^ s^!", otherwise assertFail_ fails. But if pushd_ fails, assertFail_ fails by definition.
Sometimes we wonder why a variable is not returned, or errors pop up telling us we've reached the maximum number of setlocals. These are typical symptoms of wrong scoping: somewhere we are using too many or too few endlocals. scopeFail_ can tell whether your code is off, accurately within a range of +/-9 endlocals.
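A plain-cmd sketch of the symptom (again, not doskit syntax): the subroutine below opens two setlocal scopes but closes only one, so the value passed over the single endlocal is discarded again when the routine returns, and the caller should see an empty %result%.
@echo off
call :broken
echo(result is now '%result%'
exit /B

:broken
setlocal
setlocal
set "result=42"
endlocal & set "result=%result%"
exit /B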
This will replace each single double quote in $ with the third argument and print the following lines each time it finds one.
-offDelayed means only works in disDelayed.
-onDelayed means only works in enaDelayed.
-naDelayed means works regardless delayedState.
Objects can only be declared in disableDelayed if wrapped in the new_ macro.
No macro can return complex data to disableDelayed without using CALL which negatively impacts performance.
In the code are some definitions with the characters LH added at the end. LH stands for Local and Hidden. These functions do not set errors and are undocumented, because they are smaller, faster, easier-to-embed duplicates. One of these is reDelayLH_, a minimized version of reDelay_.
On occasion, changes are made to very primitive kernel functions. At this point I increase the alphaVersionNumber and disable any dependencies. The next time the program runs these functions are then forced into unitTests which need to be manually re-enabled. It is possible that a release contains such disabled classes. You may attempt to use it anyways but do prepare to potentially die.
- choice: available because many people use it but doskit does not need it as it uses the more powerful getKey_ macro in pure dos.
- setACL: edit system registry AccessControlLists.
- regfind: system registry search and replace (BULK).
If you download the optional external binaries listed below, note that some programs like cmdow may give you a false-positive virus alert (because it can hide windows). All binaries are 32bit. Make sure that the ext folder is a sub folder of the folder where doskit.CMD is, or it won't find the binaries.
I try to write code that is international; still, there are functions that are language dependent.
There is a minor difference with the 'set' command. In Windows 7, 'set/P=' without double quotes will eat spaces at the beginning of the line; this is a break from previous Windows versions. Since 'set' is the most important command in dos, I predict this minor difference has big consequences that may cause all kinds of visual and file formatting issues.
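A quick way to check this behaviour on your own Windows version (the claim above is that on 7 the unquoted form loses the leading spaces while the quoted form keeps them; treat the exact output as machine-dependent):
<nul set /p =   three leading spaces, unquoted
echo(
<nul set /p "=   three leading spaces, quoted"
echo(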
Please do report bugs and post logs as attachment, so it can be fixed in a next version.
doskit output for the "Hello World !" program.
<!-- : As far as WSF is concerned, this is all just comment.
:: special thanks to : Peter, jeb, dBenham, aGerman from www.DOSTips.COM, the DOSTips team and members.
:: - DOES NOT SUPPORT variableNames with delimiters that cannot be processed by 'if defined %variableName%' !
:: - Functions are onDelayed by parser.
< :§return.notDelayed %( * )% <nul rem^ &if /I "%~1" NEQ "§return.notDelayed" goto :§return.notDelayed "()"
if "%b%" NEQ "" if not defined %a% call :§endSimple "§return.notDelayed: data overFlow: '%a%'"
) || call :§endSimple "§parser_: 'call :§init_ ( * )' failed."
< :§callSetVarSafe %( * )% <nul rem^ &set "ó=%pathext%" &set "ô=%path%" &set "pathext=;" &set "path="
set "pathext=%ó%" &set "path=%ô%" &set "ó=" &set "ô="
:: call function, warn: call failures cannot be detected if £e is set to none 0 by caller.
call :§endSimple "call failed with exitCode zero." ^"§parser_: ^( :!$trace! !*! ^)^"
:: 8k returnables maximum is 9.
if "%b1%" NEQ "" if not defined %a1% call :§endSimple "§parser_: data overFlow: '%a1%'"
if "%b2%" NEQ "" if not defined %a2% call :§endSimple "§parser_: data overFlow: '%a2%'"
if "%b3%" NEQ "" if not defined %a3% call :§endSimple "§parser_: data overFlow: '%a3%'"
if "%b4%" NEQ "" if not defined %a4% call :§endSimple "§parser_: data overFlow: '%a4%'"
if "%b5%" NEQ "" if not defined %a5% call :§endSimple "§parser_: data overFlow: '%a5%'"
if "%b6%" NEQ "" if not defined %a6% call :§endSimple "§parser_: data overFlow: '%a6%'"
if "%b7%" NEQ "" if not defined %a7% call :§endSimple "§parser_: data overFlow: '%a7%'"
if "%b8%" NEQ "" if not defined %a8% call :§endSimple "§parser_: data overFlow: '%a8%'"
if "%b9%" NEQ "" if not defined %a9% call :§endSimple "§parser_: data overFlow: '%a9%'"
>nul copy /Z /A nul + nul "%temp%\$sub.TMP"
>nul del /F /Q "%temp%\$sub.TMP"
) do set "£error[%%~?]=error unidentified"
set "£error=invalid argument V variable undefined"
set "£error=array at end, must undeclare manually"
set "£error=pattern not in file"
set "£error=only one instance can be active simultaneously"
set "£error=violation of rule, restart without $jumpLabel"
set "£error=a required resource is missing"
:: (n2echo_:disDelayed) '26' bytes on file, '23' bytes in memory.
%= =%for %%# in ("") do set ^"n2echo_=echo(^&echo(^&^<nul set/P=" !
:: (necho_:disDelayed) '19' bytes on file, '17' bytes in memory.
%= =%for %%# in ("") do set ^"necho_=echo(^&^<nul set/P=" !
:: (endlocalRF_:disDelayed) '92' bytes on file, '77' bytes in memory.
%= =%for %%# in ("") do set ^"endlocalRF_=for /F ^"useback tokens=1-3^" %%1 in ('%%,^^^",^!$cr^!,^^^"') do for %%4 in (^"^!$lf^!^") do" !
:: (callSetVarSafe_:disDelayed) '332' bytes on file, '262' bytes in memory.
%= =%set pathext=%%s^&set path=%%t))else setlocal disableDelayedExpansion^&set _=" !
:: (setLocal_:disDelayed) '255' bytes on file, '180' bytes in memory.
%= =%set $r=£e))else set ~=" !
:: (advancedRF:disDelayed) '1084' bytes on file, '738' bytes in memory.
:: (forI_:disDelayed) '13' bytes on file, '12' bytes in memory.
%= =%for %%# in ("") do set ^"forI_=for /L %%i in" !
:: (deq_:disDelayed) '7' bytes on file, '7' bytes in memory.
%= =%for %%# in ("") do set ^"deq_=if 1==0" !
:: (forQ_:disDelayed) '10' bytes on file, '9' bytes in memory.
%= =%for %%# in ("") do set ^"forQ_=for %%? in" !
:: (endlocal_:disDelayed) '1825' bytes on file, '1240' bytes in memory.
%= =%set pathext=%%r^&set path=%%s))else setlocal enableDelayedExpansion^&set º=" !
:: (forP_:disDelayed) '2816' bytes on file, '1879' bytes in memory.
%= =%set pathext=%%r^&set path=%%s))else set ~=^^^^^^^!" !
:: (endoftest:disDelayed) '52' bytes on file, '47' bytes in memory.
%= =%for %%# in ("") do set ^"endoftest=(echo(^&^<nul set/P= endoftest ^&pause^&exit 0)^>con" !
:: (exception.nPrint:disDelayed) '271' bytes on file, '201' bytes in memory.
:: filegetmacro in the creation of objects is better replaced by callmacro.
:: this is possible through dependencies.
:: and allows bytecode to get a reference point.
:: if forP_ uses onDelim_ instead of _xsDelim_ then char '?' may be supported by functions.
:: eliminating the need for filelookup_ to pass those regular expressions as reference instead of value.
:: not in default $percent variable.
:: forT_ is nasty and not very handy in macro's anyways.
:: all macros using forT_ will simply have to use another token.
:: I could design a macro _ultraDelim_ that replaces any * or ? wildcards by enumerating each char.
:: Then all macros using _ultraDelim_ can support regular expressions.
:: If this is fast enough then I may reconsider every advEnc /KCF situation.
:: bytecode should also support §EndOldOS.
:: kernel\IO\reg\key\regKeyRead_ should query the default value with /ve if key missing.
:: not just enumerate them all.
set "$r=!$r!" &set "$rA=" &( %exception.nPrint% ) &exit /B !$err!
shift &if "%~1" NEQ "" goto :§endSimpleLoop "()"
Conversion took "00:33:38,230" in a VMWare6 1x3Ghz 512MB.
DEMO version 20181203 in attachment below.
-In §parser, now supports bytecode initialization.
-In §parser, fixed broken overflow detection and shorter algorithm.
-Excluded several tests which 'the world' would never pass, to allow the suite to be run anywhere.
-Fixed: 'kernel\system\include_' would not link if directory did not exist.
-Loosened input format restrictions on several macros, "%%a" is now "%%~a" by default.
-In §parser, fixed broken call failure detection crash if input contained '<'.
-Doskit no longer runs in low priority so it suffers less when caching is happening in the background.
-In §toByteCode_ added optional arguments: /supportParser /supportOldOS /supportDebug /clearCache.
-In §byteAddPre_ fixed broken $lf, $n1c definition.
-In §byteAddPre_ added $c1 definition.
-In §byteAddPre_ added constants: ( False, True ).
-Deprecated functions: byteAddMain_, byteAddPost; use toByteCode_.
DEBUG version 20181212 in attachment below.
-In §byteAddPre_ fixed function pointers were not updated correctly to work with byteCode.
-In dateTime\timerStart_, dateTime\timerStop_ now use a default return $timer, so it is no longer necessary to pass it as argument if only using a single timer.
-In kernel\math\percent_ now returns percentage in variable given, not in default $percent variable.
-In kernel\IO\reg\key\regKeyRead_ now queries the default value with /ve if key missing, instead of enumerating all key values.
-In §filePutMacro fixed a bug causing a wrong logFile to be used.
-Update: now supports all special characters except NUL byValue: All §functions and macros if they implement starDelim &bruteParam_.
-New kernel\function\encode_, decode_. More powerful but slower brute force encoding of '*', '=', '$lf', '$cr', '"', '^', '!', '?'.
-Deprecated kernel\function\advEnco_, advDeco_, enco_, deco_: use kernel\function\encode_, decode_.
-Dropped: kernel\function\unDelay_ due to lack of use.
-In §filePutMacro.session fixed crash on verify if the written macro is syntactically invalid.
-Update: IO\drive\driveGetMounted_, faster detection and set to cache using filewrite_.
-Update: background caching is now controlled through filePutMacroBulk_, this speeds up kernel pre-caching.
-Update: filePutINI_ now supports delayed characters '^' and '!' in $key and $value of '*.INI' files.
-Update: include_ enforced includeOnce_ and no longer overwrites existing macros.
-Update: in kernel\math\percent_ no longer locks files before writing, all file- write/read errors are now silent, this is faster.
-Update: in process\processGetCPULoad_ fixed a bug where returned value would always be 0 using a smaller and faster algorithm.
-Update: All definitions are loaded alphabetically and doskit reserves 1MB of memory for user environment for performance reasons described here.
-Update: endlocal_ now supports character '#' as part of variable name.
-Fixed a bug in window\winWaitOpen_ that would cause redirection failures causing more serious scope failure.
-Fixed a bug in kernel\function\endlocal_ that would cause the scope to remain open if no variables are provided to be returned.
-Update: With bruteParam_ in place there is no more need to encode any arguments, coParam_,coParamL_,deEndLocal_ and deEndLocalL_ have been deleted.
-Fixed a bug in kernel\string\stringLen_ where it would return a wrong length if return variable matches '$vr' used internally.
-Fixed a bug in kernel\function\bruteParam_ that would fail to properly support '$cr' AND '$lf'.
-update: object constructors no longer create objects from cache but directly from memory to make them compatible with byteCode_.
-Fixed a problem in kernel\debug\debug_ causing malformed output to be returned.
-Update: various kernel classes have been moved deeper in the class hierarchy to keep amount of members per class reasonable.
-Update: In kernel\unitTest\assertFail_ now can compare any two values when deciding whether a test is successful.
-Fixed: a bug in class encryption\mask_ that would cause it to not encrypt at all.
-Update: kernel\string\trim_,trimLeft_,trimRight_,eatLeft_,eatRight_ now support any character except NUL as byVal.
-Update: kernel\delim\coDelim_ has been deleted, there is no more need for encoding tokens by coDelim_, use starDelim_/bruteParam_ or forP_.
-Update: changed behavior of kernel\math\eval_ no longer erases the value if expression is empty but returns error 'variable undefined'.
-Fixed: a bug in kernel\scope\endlocalBig_ where it would clear the variable if it matched an internally used name.
-Fixed: excluded some more tests 'The World' cannot pass, that would prevent caching from completing.
v0.1 (20190129) in attachment below.
- Update: applying standardized versioning from now on: just v<alphaVersionNumber>.<betaVersionNumber> (date).
- Update: kernel\unitTest\unitTest_ no longer can be controlled with arguments and was renamed to unitTest.
- Update: In window\winGetInfo_ rewritten without loss of functionality because it was too large to be cached.
- Update: in window\winGetInfo_ now supports english language OS.
- Fixed a performance bug in reDelayLH_ that would add useless returns.
- New: scopeFail_ can tell whether a function's scoping is off, and can correct the problem within a range of +9 to -9 endlocals.
- Update: added call-/file- macro support to dirDelete_.
- Update: Applied more precise rules to 'The World' cannot pass tests to reduce chance of false positives.
- Fixed a bug in workFileClose_ that would accept $vr byRef and then fail because it can't determine the mutex.
- Fixed in IO\file\fileDeleteAtboot_ added missing class dependency IO\file\autoRun_.
- Fixed in IO\file\fileReplaceString_ would return an unresolvable errorCode.
- Performance update: §cacls_ .
- Dropped: kernel\loop\forT_, this is a difficult token to work with, is not as efficient as the other tokens, and is not required.
- Update: kernel\unitTest\unitTest more informative errorMessage.
- Fixed: some tests were not cleaning up fully, leaving garbage files behind.
- Fixed: bruteParam_ was using inverted dequoting rules.
- Update: In kernel\IO\file\fileCopy_ now supports regular expressions as part of argument.
- Update: In :IO\keyboard\$cmdGetKey, slightly improved algorithm.
- Performance update: getDosKeyMacro_, setDosKeyMacro_, filePutMacro_ from 4 to 3 variables for encoding(worstCase 300% covered 300%).
- Performance update: getDosKeyMacro_ rewritten as macro.
- Dropped: §getDosKeyMacro_, no longer required.
- Dropped: echon2_, nechon2_, due to lack of use.
- Fixed: debugger was not initialized if loaded from cache.
- Fixed: external programs failed when running from a cmd /K initiated environment.
- Update: IO\reg\path\regPathTouch_ now supports any char but NUL to appear in its arguments.
- New: IO\net\shareRemove_ remove mountpoint.
- New: method shareRemove of type Share.
- Fixed: In process\processGetCPULoad_ now enums 'Idle' performance counters instead of 'Total' because it turns out to be broken on XP.
- Fixed: in kernel\IO\reg\path\getRegPath_ now removes ending backslash that causes reg.EXE to fail.
- Fixed: in kernel\IO\reg\key\regKeyWrite_ now invokes getRegPath_ to make sure regPath is properly formed.
doskitXPserver2003 20181212 DEBUG kernel only.
From what I've read there is a minor difference with the 'set' command. In 7 'set/P=' without double quotes will eat spaces at the beginning of the line, this is a break from previous windows versions. Since 'set' is the most important command in dos, I predict this minor difference has big consequences that may cause all kinds of visual and file formatting issues.
Also, doskit is completely unaware of 64-bit capable OSes and its binaries are all 32-bit.
Abstract: Three models of growing random networks with fitness dependent growth rates are analysed using the rate equations for the distribution of their connectivities. In the first model (A), a network is built by connecting incoming nodes to nodes of connectivity $k$ and random additive fitness $\eta$, with rate $(k-1)+ \eta$. For $\eta >0$ we find the connectivity distribution is power law with exponent $\gamma=\langle\eta\rangle+2$. In the second model (B), the network is built by connecting nodes to nodes of connectivity $k$, random additive fitness $\eta$ and random multiplicative fitness $\zeta$ with rate $\zeta(k-1)+\eta$. This model also has a power law connectivity distribution, but with an exponent which depends on the multiplicative fitness at each node. In the third model (C), a directed graph is considered and is built by the addition of nodes and the creation of links. A node with fitness $(\alpha, \beta)$, $i$ incoming links and $j$ outgoing links gains a new incoming link with rate $\alpha(i+1)$, and a new outgoing link with rate $\beta(j+1)$. The distributions of the number of incoming and outgoing links both scale as power laws, with inverse logarithmic corrections.
The comment on this question made me think: suppose all of the hold-down studs mounted on the SRBs fail to release and have to be broken, how much impact would that really have on the trajectory, considering the same fuel burn time of the SRBs?
Consider an acceleration of $10~m/s^2$ at launch. To calculate the $\delta V$ lost at the launch pad:
$$\delta V = 10 \times \delta t$$ where $\delta t$ is the time taken by the SRBs to break the studs.
Is this a severe trajectory impact? What kind of margins can STS tolerate on trajectory or underperformance and still operate?
Let's look at some examples of evaluating triple integrals over boxes.
Evaluate the triple integral $\iiint_B 2x + 3y + 4z \: dV$ where $B = [0, 1] \times [0, 2] \times [0, 3]$.
Evaluate the triple integral $\iiint_B xye^x \: dV$ where $B = [0, 1] \times [0, 2] \times [0, 1]$.
We immediately set up our triple integral as iterated integrals and evaluate.
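For instance, the iterated evaluations might proceed as follows (the first integral is computed one variable at a time; the second factors because both the integrand and the box are products):
$$\iiint_B (2x + 3y + 4z) \: dV = \int_0^1 \int_0^2 \int_0^3 (2x + 3y + 4z) \: dz \: dy \: dx = \int_0^1 \int_0^2 (6x + 9y + 18) \: dy \: dx = \int_0^1 (12x + 54) \: dx = 60$$
$$\iiint_B xye^x \: dV = \left( \int_0^1 xe^x \: dx \right)\left( \int_0^2 y \: dy \right)\left( \int_0^1 dz \right) = \big[(x-1)e^x\big]_0^1 \cdot 2 \cdot 1 = 2$$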
This article is about alpha-linolenic acid. For other uses, see Linolenic acid.
The word linolenic is an irregular derivation from linoleic, which itself is derived from the Greek word linon (flax). Oleic means "of or relating to oleic acid" because saturating linoleic acid's omega-6 double bond produces oleic acid.
α-Linolenic acid was first isolated by Rollett as cited in J. W. McCutcheon's synthesis in 1942, and referred to in Green and Hilditch's 1930s survey. It was first artificially synthesized in 1995 from C6 homologating agents. A Wittig reaction of the phosphonium salt of [(Z-Z)-nona-3,6-dien-1-yl]triphenylphosphonium bromide with methyl 9-oxononanoate, followed by saponification, completed the synthesis.
Seed oils are the richest sources of α-linolenic acid, notably those of hempseed, chia, perilla, flaxseed (linseed oil), rapeseed (canola), and soybeans. α-Linolenic acid is also obtained from the thylakoid membranes in the leaves of Pisum sativum (pea leaves). Plant chloroplasts, consisting of more than 95 percent photosynthetic thylakoid membranes, are highly fluid due to the large abundance of linolenic acid, which invariably shows up as sharp resonances in high-resolution carbon-13 NMR spectra. Some studies state that ALA remains stable during processing and cooking. However, other studies state that ALA might not be suitable for baking, as it will polymerize with itself, a feature exploited in paint with transition metal catalysts. Some ALA may also oxidize at baking temperatures. ALA percentages in the table below refer to the oils extracted from each item.
Flax is a rich source of α-linolenic acid.
Although the best source of ALA is seeds, most seeds and seed oils are much richer in an n−6 fatty acid, linoleic acid. Exceptions include flaxseed (must be ground for proper nutrient absorption) and chia seeds. Linoleic acid is the other essential fatty acid, but it, and the other n−6 fatty acids, compete with n−3s for positions in cell membranes and have very different effects on human health. There is a complex set of essential fatty acid interactions.
α-Linolenic acid can only be obtained by humans through their diets because the absence of the required 12- and 15-desaturase enzymes makes de novo synthesis from stearic acid impossible. Eicosapentaenoic acid (EPA; 20:5, n−3) and docosahexaenoic acid (DHA; 22:6, n−3) are readily available from fish and algae oil and play a vital role in many metabolic processes. These can also be synthesized by humans from dietary α-linolenic acid, but with an efficiency of only a few percent. Because the efficacy of n−3 long-chain polyunsaturated fatty acid (LC-PUFA) synthesis decreases down the cascade of α-linolenic acid conversion, DHA synthesis from α-linolenic acid is even more restricted than that of EPA. Conversion of ALA to DHA is higher in women than in men.
Multiple studies have shown a relationship between α-linolenic acid and an increased risk of prostate cancer. This risk was found to be irrespective of source of origin (e.g., meat, vegetable oil). However, a large 2006 study found no association between total α-linolenic acid intake and overall risk of prostate cancer; and a 2009 meta-analysis found evidence of publication bias in earlier studies, and concluded that if ALA contributes to increased prostate cancer risk, the increase in risk is quite small.
α-Linolenic acid is relatively more susceptible to oxidation and will become rancid more quickly than many other oils. Oxidative instability of α-linolenic acid is one reason why producers choose to partially hydrogenate oils containing α-linolenic acid, such as soybean oil. Soybeans are the largest source of edible oils in the U.S., and, as of a 2007 study, 40% of soy oil production was partially hydrogenated.
However, when partially hydrogenated, part of the unsaturated fatty acids become unhealthy trans fats. Consumers are increasingly avoiding products that contain trans fats, and governments have begun to ban trans fats in food products. These regulations and market pressures have spurred the development of low-α-linolenic acid soybeans. These new soybean varieties yield a more stable oil that doesn't require hydrogenation for many applications, thus providing trans fat-free products, such as frying oil.
Several consortia are bringing low-α-linolenic acid soy to market. DuPont's effort involves silencing the FAD2 gene that codes for Δ6-desaturase, giving a soy oil with very low levels of both α-linolenic acid and linoleic acid. Monsanto Company has introduced to the market Vistive, their brand of low α-linolenic acid soybeans, which is less controversial than GMO offerings, as it was created via conventional breeding techniques.
^ Loreau, O; Maret, A; Poullain, D; Chardigny, JM; Sébédio, JL; Beaufrère, B; Noël, JP (2000). "Large-scale preparation of (9Z,12E)-1-(13)C-octadeca-9,12-dienoic acid, (9Z,12Z,15E)-1-(13)C-octadeca-9,12,15-trienoic acid and their 1-(13)C all-cis isomers". Chemistry and Physics of Lipids. 106 (1): 65–78. doi:10.1016/S0009-3084(00)00137-7. PMID 10878236.
^ a b c Beare-Rogers (2001). "IUPAC Lexicon of Lipid Nutrition" (PDF). Archived (PDF) from the original on 12 February 2006. Retrieved 22 February 2006.
^ Rollett, A. (1909). "Zur kenntnis der linolensäure und des leinöls". Z. Physiol. Chem. 62 (5–6): 422–431. doi:10.1515/bchm2.1909.62.5-6.422.
^ Green, TG; Hilditch, TP (1935). "The identification of linoleic and linolenic acids". Biochem. J. 29 (7): 1552–63. PMC 1266662. PMID 16745822.
^ Sandri, J.; Viala, J. (1995). "Direct preparation of (Z,Z)-1,4-dienic units with a new C6 homologating agent: synthesis of alpha-linolenic acid". Synthesis. 3 (3): 271–275. doi:10.1055/s-1995-3906.
^ Chapman, David J.; De-Felice, John; Barber, James (May 1983). "Growth temperature effects on thylakoid membrane lipid and protein content of pea chloroplasts 1". Plant Physiol. 72 (1): 225–228. doi:10.1104/pp.72.1.225. PMC 1066200. PMID 16662966.
^ Manthey, F. A.; Lee, R. E.; Hall Ca, 3rd (2002). "Processing and cooking effects on lipid content and stability of alpha-linolenic acid in spaghetti containing ground flaxseed". J. Agric. Food Chem. 50 (6): 1668–71. doi:10.1021/jf011147s. PMID 11879055.
^ "OXIDATIVE STABILITY OF FLAXSEED LIPIDS DURING BAKING".
^ Li, Thomas S. C. (1999). "Sea buckthorn: New crop opportunity". Perspectives on new crops and new uses. Alexandria, VA: ASHS Press. pp. 335–337. Archived from the original on 22 September 2006. Retrieved 28 October 2006.
^ "Omega-3 fatty acids". University of Maryland Medical Center.
^ Breanne M Anderson; David WL Ma (2009). "Are all n-3 polyunsaturated fatty acids created equal?". Lipids in Health and Disease. 8 (33): 33. doi:10.1186/1476-511X-8-33. PMC 3224740. PMID 19664246.
^ Shiels M. Innis (2007). "Fatty acids and early human development". Early Human Development. 83 (12): 761–766. doi:10.1016/j.earlhumdev.2007.09.004. PMID 17920214.
^ Burdge, GC; Calder, PC (2005). "Conversion of alpha-linolenic acid to longer-chain polyunsaturated fatty acids in human adults" (PDF). Reproduction, Nutrition, Development. 45 (5): 581–97. doi:10.1051/rnd:2005047. PMID 16188209.
^ "Conversion of $\alpha$-linolenic acid to longer-chain polyunsaturated fatty acids in human adults".
^ Ramon, JM; Bou, R; Romea, S; Alkiza, ME; Jacas, M; Ribes, J; Oromi, J (2000). "Dietary fat intake and prostate cancer risk: a case-control study in Spain". Cancer Causes & Control. 11 (8): 679–85. doi:10.1023/A:1008924116552. PMID 11065004.
^ Brouwer, IA; Katan, MB; Zock, PL (2004). "Dietary alpha-linolenic acid is associated with reduced risk of fatal coronary heart disease, but increased prostate cancer risk: a meta-analysis". The Journal of Nutrition. 134 (4): 919–22. doi:10.1093/jn/134.4.919. PMID 15051847.
^ De Stéfani, E; Deneo-Pellegrini, H; Boffetta, P; Ronco, A; Mendilaharsu, M (2000). "Alpha-linolenic acid and risk of prostate cancer: a case-control study in Uruguay". Cancer Epidemiology, Biomarkers & Prevention. 9 (3): 335–8. PMID 10750674.
^ Koralek DO, Peters U, Andriole G, et al. (2006). "A prospective study of dietary α-linolenic acid and the risk of prostate cancer (United States)". Cancer Causes & Control. 17 (6): 783–791. doi:10.1007/s10552-006-0014-x. PMID 16783606.
^ Simon, JA; Chen, YH; Bent, S (May 2009). "The relation of alpha-linolenic acid to the risk of prostate cancer". American Journal of Clinical Nutrition. 89 (5): 1558S–1564S. doi:10.3945/ajcn.2009.26736E. PMID 19321563.
^ Kinney, Tony. "Metabolism in plants to produce healthier food oils (slide #4)" (PDF). Archived from the original (PDF) on 29 September 2006. Retrieved 11 January 2007.
^ Fitzgerald, Anne; Brasher, Philip. "Ban on trans fat could benefit Iowa". Truth About Trade and Technology. Archived from the original on 27 September 2007. Retrieved 3 January 2007.
^ Monsanto. "ADM to process Monsanto's Vistive low linolenic soybeans at Indiana facility". Archived from the original on 11 December 2006. Retrieved 6 January 2007.
^ Kinney, Tony. "Metabolism in plants to produce healthier food oils" (PDF). Archived from the original (PDF) on 29 September 2006. Retrieved 11 January 2007.
^ Pan A, Chen M, Chowdhury R, et al. (December 2012). "α-Linolenic acid and risk of cardiovascular disease: a systematic review and meta-analysis". Am. J. Clin. Nutr. (Systematic review). 96 (6): 1262–73. doi:10.3945/ajcn.112.044040. PMC 3497923. PMID 23076616.
Machine Learning has developed several efficient techniques for a wide range of problems. A single technique or model like GoogLeNet is used across various problems such as image classification, object detection and others. Some models like MobileNet are designed to work efficiently with limited computational resources, such as mobile devices. It is a challenge to figure out which technique or model will be the best fit for your problem. To address this, we will show how to evaluate models to determine whether their performance meets your expectations/requirements.
When you reach step 3, a major issue arises: Which algorithm is best suited to solve our problem?
There are a large number of machine learning algorithms out there but not all of them apply to a given problem. We need to choose among those algorithms the one that best suits our problem and gives us the desired results. This is where the role of Model Evaluation comes in. It defines metrics to evaluate our models and then based on that evaluation, we choose one or more than one model to use. Let's see how we do it.
The test harness is the data on which you will train and test your model against a performance measure. It is crucial to define which part of the data will be used for training the model and which part for testing it. This may be as simple as selecting a random split of data (66% for training, 34% for testing) or may involve more complicated sampling methods.
While training the model on the training dataset, it is not exposed to the test dataset. Its predictions on the test dataset are indicative of the performance of the model in general.
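A minimal sketch of such a test harness in Python (scikit-learn is assumed, and make_classification merely stands in for your own dataset; the 66/34 split mirrors the example above):
# Train on 66% of the data, evaluate on the held-out 34%.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)   # toy stand-in for your data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.34, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # never sees the test split
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))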
Classification Accuracy: It is the ratio of the number of correct predictions to the total number of input samples.
This metric is used only in those cases where we have an equal or nearly equal number of data points belonging to all the classes.
For example, if we have a dataset where 98 percent of the data points belong to class A and 2 percent to class B, then a model which simply predicts every given data point to belong to class A will have 98 percent accuracy. But in reality, such a model is performing very poorly.
The log loss function takes values in [0, $ \infty $). A log loss nearer to zero indicates higher accuracy, while a log loss far from zero indicates lower accuracy. It is mostly used in multiclass classification and gives fairly good results.
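The formula itself is not shown above; the usual multiclass definition is
$$\text{Log Loss} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij} \log(p_{ij})$$
where $y_{ij}$ is 1 if sample $i$ belongs to class $j$ (and 0 otherwise) and $p_{ij}$ is the predicted probability that sample $i$ belongs to class $j$.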
Confusion Matrix: Let's assume we have a binary classification problem. We have some samples belonging to two classes: YES or NO. Also, we have our own classifier which predicts a class for a given input sample. On testing our model on 160 samples, we get the following result.
True Positives: The cases in which we predicted YES and the actual output was also YES.
True Negatives: The cases in which we predicted NO and the actual output was NO.
False Positives: The cases in which we predicted YES and the actual output was NO.
False Negatives: The cases in which we predicted NO and the actual output was YES.
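A tiny sketch of how the four cells can be obtained in practice (scikit-learn assumed; 1 stands for YES and 0 for NO, and the two lists are made-up labels and predictions):
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()   # rows: actual, columns: predicted
print("TP:", tp, "TN:", tn, "FP:", fp, "FN:", fn)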
Mean Squared Error: Mean Squared Error (MSE) is quite similar to Mean Absolute Error; the only difference is that MSE takes the average of the square of the difference between the original values and the predicted values. The advantage of MSE is that it is easier to compute the gradient, whereas Mean Absolute Error requires complicated linear programming tools to compute the gradient. As we take the square of the error, the effect of larger errors becomes more pronounced than that of smaller errors, hence the model can now focus more on the larger errors.
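For reference, the two definitions being compared are
$$\text{MAE} = \frac{1}{N}\sum_{i=1}^{N} \lvert y_i - \hat{y}_i \rvert, \qquad \text{MSE} = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$
where $y_i$ are the original values and $\hat{y}_i$ the predicted ones.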
With cross-validation, the entire dataset ends up being used both to train and to test the model. Here's how.
Step 1: We divide our dataset into equally sized groups of data points called folds.
Step 2: Then we train our data on all the folds except 1.
Step 3: Next we test our data on that fold that was left out.
Step 4: Repeat step 2 and 3 such that all the folds get to be the test data one by one.
Step 5: The performance measures are averaged across all folds to estimate the capability of the algorithm on the problem.
For example, if we have 3 folds: first train on folds 2 and 3 and test on fold 1, then train on folds 1 and 3 and test on fold 2, and finally train on folds 1 and 2 and test on fold 3 (see the sketch below).
Step 4: Average the results of all the above steps to get the final result.
Usually, the number of folds used is 3, 5, 7 or 10.
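The sketch referred to above, again assuming scikit-learn and toy data in place of your dataset:
# 3-fold cross-validation: each fold is the test set exactly once.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=3)
print("per-fold accuracy:", scores, "mean:", scores.mean())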
Now, it's time to test a variety of machine learning algorithms.
Select 5 to 10 standard algorithms that are appropriate for your problem and run them through your test harness. By standard algorithms, one means popular methods with no special configuration. Appropriate for your problem means that the algorithms can handle regression if you have a regression problem.
If you have quite a few algorithms to test, then you may want to reduce the size of the dataset.
Following the above steps, you will be able to determine the best algorithm for your system configuration and problem.
In a previous post, I presented a method of deriving the explicit formula for the terms of the Fibonacci Sequence. In this post, I will present a more concrete method that can be applied to an entire class of recursively-defined sequences.
Let us start by returning to the problem of the Fibonacci Sequence: If we move all of the terms in our recursive definition to one side, we get $$F_n - F_{n-1} - F_{n-2} = 0$$ Now suppose that the initial terms of the sequence were not given to us, and we were simply asked to find sequences satisfying this relationship, as if it were a functional equation. In functional equations, observing the properties of a function and making an assumption about its nature can often lead to helpful results. Thus, in this functional equation, I will consider functions of the form $$f(n) = \rho^n$$ where $\rho$ is some real number. If I assume that $F_n$ takes this form, or assume that $F_n=f(n)$, then I obtain $$\rho^n - \rho^{n-1} - \rho^{n-2} = 0$$ and, assuming that $\rho$ is nonzero, $$\rho^2 - \rho - 1 = 0$$ Given this, we may solve for $\rho$, using the quadratic formula: $$\rho = \frac{1 \pm \sqrt{5}}{2}$$ and so we now have two functions satisfying the recursive formula: $$f_1(n) = \left(\frac{1+\sqrt{5}}{2}\right)^n$$ and $$f_2(n) = \left(\frac{1-\sqrt{5}}{2}\right)^n$$ ...however, neither of these satisfies the initial conditions of the Fibonacci Sequence, so we must keep looking for a solution.
Now I will take a pause in our pursuit of an explicit formula to unveil the "trick" which is central to the solution of this problem, and many others like it. Suppose a sequence is defined by the recursive formula $$s_n = k_1 s_{n-1} + k_2 s_{n-2} + \cdots + k_m s_{n-m}$$ where each $k_i$ is some real constant, and the first $m$ terms are given. Then suppose that some sequence $S_1(n)$ satisfies the same recursive formula. Then $$S_1(n) = k_1 S_1(n-1) + k_2 S_1(n-2) + \cdots + k_m S_1(n-m)$$ and, if $\alpha$ is some real number, $$\alpha S_1(n) = k_1 \alpha S_1(n-1) + k_2 \alpha S_1(n-2) + \cdots + k_m \alpha S_1(n-m)$$ and so $\alpha S_1(n)$ also satisfies the same recursive formula. Similarly, suppose some two sequences $S_1(n)$ and $S_2(n)$ both satisfy the recursive formula. Then, both $$S_1(n) = k_1 S_1(n-1) + \cdots + k_m S_1(n-m)$$ and $$S_2(n) = k_1 S_2(n-1) + \cdots + k_m S_2(n-m)$$ are true. Then, by adding the two equations, $$(S_1+S_2)(n) = k_1 (S_1+S_2)(n-1) + \cdots + k_m (S_1+S_2)(n-m)$$ and so the sequence $(S_1+S_2)(n)$ also satisfies the recursive formula.
We may now apply this method to the four problems that I posed at the beginning of the post. First, we have the sequence with initial conditions $a_0=2, a_1=1$ and recursive definition $$a_n = a_{n-1} + a_{n-2}$$ Because this recursive rule is the same as that in the previous problem, we may use the same class of functions: $$a_n = \alpha\left(\frac{1+\sqrt{5}}{2}\right)^n + \beta\left(\frac{1-\sqrt{5}}{2}\right)^n$$ But this time, $\alpha,\beta$ will have different values, since the initial conditions are different this time. To solve, we can set up the system $$\alpha + \beta = 2, \qquad \alpha\left(\frac{1+\sqrt{5}}{2}\right) + \beta\left(\frac{1-\sqrt{5}}{2}\right) = 1$$ By solving this, we obtain $\alpha = \beta = 1$, and so $$a_n = \left(\frac{1+\sqrt{5}}{2}\right)^n + \left(\frac{1-\sqrt{5}}{2}\right)^n$$ This sequence, a variation of the Fibonacci sequence, is known as the Lucas sequence.
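A quick numerical sanity check of this closed form (plain Python, purely illustrative; phi and psi are the two roots found earlier):
phi = (1 + 5 ** 0.5) / 2
psi = (1 - 5 ** 0.5) / 2

def lucas_closed(n):
    return round(phi ** n + psi ** n)

lucas = [2, 1]
for n in range(2, 10):
    lucas.append(lucas[-1] + lucas[-2])        # the recursive definition
print(lucas)                                   # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76]
print([lucas_closed(n) for n in range(10)])    # matches the list above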
Before beginning the next problem, I will take another pause to derive another result that is, in part, a generalization of the trick that we used earlier. Suppose again that some sequence satisfies $$s_n = k_1 s_{n-1} + k_2 s_{n-2} + \cdots + k_m s_{n-m}$$ Now notice that if some number $\rho$ has the property $$\rho^m = k_1 \rho^{m-1} + k_2 \rho^{m-2} + \cdots + k_m$$ then the sequence $$s_n = \rho^n$$ also satisfies the recurrence relation. As a consequence, if $\rho_1, \rho_2,...,\rho_m$ are the roots of the polynomial $$x^m - k_1 x^{m-1} - k_2 x^{m-2} - \cdots - k_m$$ then any sequence in the form $$s_n = \alpha_1 \rho_1^n + \alpha_2 \rho_2^n + \cdots + \alpha_m \rho_m^n$$ also satisfies the recurrence relation, where each $\alpha_i$ is a real number. This result is very helpful, because it allows us to instantly generate a family of solutions for any of our recursively defined sequences. Because of this, given a recursively defined sequence, the polynomial $$x^m - k_1 x^{m-1} - k_2 x^{m-2} - \cdots - k_m$$ is often referred to as its characteristic polynomial.
One last problem. The sequence $c_n$ has initial conditions $c_0=c_1=1$ and recursive definition Uh oh, there's a problem - this is similar to the Fibonacci Sequence, but there's a constant term in our recursive definition, and we don't know how to deal with that. If you want to try to figure out what to do on your own, you should stop reading here and try it.
The Poisson regression model is often used as a first model for count data with covariates. Since this model is a GLM with canonical link, regression parameters can be easily fitted using standard software. However, the model requires equidispersion, which might not be valid for the data set under consideration. Many models have been proposed in the literature to allow for overdispersion; one such model is the negative binomial regression model. In addition, score tests have been commonly used to detect overdispersion in the data. However, these tests do not allow one to quantify the effects of overdispersion. In this paper we propose easily interpretable discrepancy measures which allow one to quantify the overdispersion effects when comparing a negative binomial regression to a Poisson regression. We propose asymptotic $\alpha$-level tests for testing the size of the overdispersion effects in terms of the developed discrepancy measures. A graphical display of p-value curves can then be used to allow for an exact quantification of the overdispersion effects. This can lead to a validation of the Poisson regression, or to a discrimination of the Poisson regression with respect to the negative binomial regression. The proposed asymptotic tests are investigated in small samples using simulation and applied to two examples.
This basis has the same angle $\alpha$ between the two axes as given in the figure, namely $\tan \alpha = \beta$. However, this transformation would make the $S'$ basis have axes that are at an obtuse angle relative to the $S$ basis, whereas most Minkowski diagrams I've seen have $S'$ axes within the $S$ axes at an acute angle. Am I doing something wrong here? Thanks.
which rotates both axes anti-clockwise by $\theta$.
which rotates the $x$-axis anti-clockwise by $\alpha$ whereas the $y$-axis clockwise by $\alpha$.
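The matrices being contrasted are not reproduced above; presumably they are (up to normalisation) of the forms
$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad S(\alpha) = \begin{pmatrix} 1 & \tan\alpha \\ \tan\alpha & 1 \end{pmatrix},$$
where $R(\theta)$ is an ordinary Euclidean rotation of both axes, while $S(\alpha)$ sends $(1,0) \mapsto (1,\tan\alpha)$ and $(0,1) \mapsto (\tan\alpha,1)$, i.e. it tilts the $x$-axis anti-clockwise by $\alpha$ and the other axis clockwise by $\alpha$, which is exactly the "scissoring" of the axes seen in Minkowski diagrams.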
Comment on "Weak value amplification is suboptimal for estimation and detection"Feb 02 2014The assumptions of Ferrie and Combes are irrelevant for realistic experimental situations. This undermines their conclusions.
Proper holomorphic mapppings between Reinhardt domains in $\mathbb C^2$May 22 2008We describe all possibilities of existence of non-elementary proper holomorphic maps between non-hyperbolic Reinhardt domains in $\mathbb C^2$ and the corresponding pairs of domains.
OCD factorization for the pion diffractive dissociation into two jetsSep 10 2001We present main ideas and obtained results of our recent calculations of Coulomb and QCD contribution to the pion diffractive dissociation into two jets. | CommonCrawl |
How do I get a list of control qubits from Q# operations when tracing the simulation in C#?
I'm having trouble writing the Controls function, which extracts a list of qubits being used as controls. When the operation is uncontrolled, or controlled by 0 qubits, the returned array should be of length 0.
The issue I'm running into is that the type and layout of arg.Value varies from operation to operation, even after conditioning on op.Variant being OperationFunctor.ControlledAdjoint or OperationFunctor.Controlled. I can handle individual cases by inspecting the types, but I keep running into new unhandled cases. This indicates there's probably a "correct" way to do this that I'm missing.
By "controls" I always mean the cs in Controlled Op(cs, ...). The same operation may have different controls when expressed in different ways. For example, the controls list of Controlled Toffoli(a, (b, c, d)) is the list [a] whereas the controls list of Controlled X([a, b, c], d) is the list [a, b, c]. A further example: the controls list of Toffoli(b, c, d) is , even though normally one might think of the first two arguments as the controls. It is of course expected that within Toffoli(b, c, d) there may be a sub-operation Controlled X((b, c), d) where the controls list is [b, c]; I'm not thinking of controls as some kind of absolute concept that is invariant as you go down through layers of abstraction.
arg.Value contains the actual tuple that the controlled operation receives at runtime. It's a two item tuple in which the first item is the control qubits, and the second another tuple with the arguments the operation normally expects, so in your case you are only interested in the first item of this tuple.
// Uncontrolled operations have no control qubits.
// Get the first item of the (controls, args) tuple.
Notice the array of Qubits is encapsulated in something called a QArray<Qubit>; QArray is the data structure we use in simulation for all Q# arrays.
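Putting the above together, a minimal sketch of the Controls helper might look like the following. This is only an illustration: it assumes the Microsoft.Quantum.Simulation.Core types mentioned in the thread (ICallable with a Variant property, IApplyData with a Value property, QArray<Qubit>), and it reads Item1 via reflection because the concrete tuple type of arg.Value differs from operation to operation.
using System.Collections.Generic;
using System.Linq;
using Microsoft.Quantum.Simulation.Core;

public static class ControlHelpers
{
    // Returns the control qubits of a Controlled/ControlledAdjoint call, or an empty sequence.
    public static IEnumerable<Qubit> Controls(ICallable op, IApplyData arg)
    {
        // Uncontrolled operations have no control qubits.
        if (op.Variant != OperationFunctor.Controlled &&
            op.Variant != OperationFunctor.ControlledAdjoint)
        {
            return Enumerable.Empty<Qubit>();
        }

        // arg.Value holds the (controls, args) tuple; its first item is the controls.
        var value = arg.Value;
        var item1 = value?.GetType().GetField("Item1")?.GetValue(value);

        // The controls arrive as a QArray<Qubit> (which is enumerable).
        return item1 as QArray<Qubit> ?? Enumerable.Empty<Qubit>();
    }
}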
Is the linear span of special orthogonal matrices equal to the whole space of $N\times N$ matrices?
When is a blow-up non-singular?
Why are they called Spherical Varieties?
Algebraic Topology Beyond the Basics:Any Texts Bridging The Gap?
How to find Erdős' treasure trove?
Is there non-simple-connected projective variety(over C) with trivial etale fundamental group?
A second Ph.D. in mathematics?
How much of the ATLAS of finite groups is independently checked and/or computer verified?
Set theories without "junk" theorems?
Why do we need random variables?
What is a cup-product in group cohomology, and how does it relate to other branches of mathematics?
Do bubbles between plates approximate Voronoi diagrams?
Would you resubmit a research paper after it has been superseded by another as yet unpublished paper?
Does $\pi_1$ have a right adjoint?
An easy proof of the uncountability of bijections on natural numbers?
Which way for reading the proofs?
Can a subset of the plane have nontrivial $H_2$ or $\pi_2$?
How to think about model categories?
Why Weil group and not Absolute Galois group?
Do we have an $L^1$ function whose Fourier series converges almost everywhere but not to itself?
The formulation seems specific enough to me. If the partial sums of the Fourier series converge to a function $g$ a.e., then so do the Cesàro means. But the Cesàro means converge in the $L^1$-sense to the original function, so a subsequence of them converges a.e. to it, and hence $g$ coincides with the original function a.e. So the answer to your question is no.
Let $G_1, G_2$ be two lie groups, $V$ be a finite dimensional (continuous) irreducible complex representation of $G_1 \times G_2$, must $V \cong V_1 \otimes V_2$ for some irreducible representation $V_i$ of $G_i$?
If $G_i$ are compact, this is true by Peter-Weyl theorem.
If the field is $\mathbb C$, there are many ways of seeing this. For $i=1,2$ we may replace $G_i$ by its Zariski closure in $GL(V_i)$ without changing the hypotheses or the conclusion. But if an algebraic subgroup $G\subset GL(V)$ acts irreducibly on $V$, then it is reductive (the fixed space of the unipotent radical is nonzero and $G$-invariant, hence all of $V$ by irreducibility, so the unipotent radical acts trivially).
We now use the fact that the reductive group $G_i$ has a Zariski-dense compact subgroup $K_i$; we may thus replace $G_i$ by the compact $K_i$, for which you have accepted the result.
Add ord_pairs to retrieve $N_c$, $N_d$, ties and total number of pairs for contingency tables. Internals for this function are straight up copied from this gist.
Add etasq in case you want to show students what $\eta^2$ is without having to explain ANOVA.
Eliminate vcd dependency, also in favor of DescTools.
Eliminate lazyeval dependency in tadaa_int by being better at ggplot2.
Eliminate dplyr dependency by being better at R.
Eliminate sjmisc dependency because why did we depend on that again?
Eliminate sjlabelled dependency (only used for re-exports).
Eliminate lsr dependency in favor of, you guessed it, DescTools for eta in tadaa_aov.
Remove recoded leist var from ngo, as it should be computed from leistung by students.
Move cowplot from Imports to Suggests because whe only need it in one function, sometimes.
More compact table output in tadaa_nom and tadaa_ord.
[tadaa_]likertize is removed. Use sjmisc::split_var.
labels_to_factor is removed because various as_factors exist.
Add tadaa_chisq for a $\chi^2$-Test with OR and effect size.
Also re-export magrittr::%$% because it's really handy sometimes.
Add tadaa_pairwise_t as an extension of stats::pairwise.t.test that works with two grouping factors and thereby can test interactions.
Also knows the Sidak method for p-adjustment, both regular and step-down procedures.
Add tadaa_pairwise_gh for the Games Howell post-hoc procedure as an alternative to TukeyHSD.
Add tadaa_pairwise_tukey while we're at it. Just a thin wrapper for stats::TukeyHSD but with tidied output and usage consistent with the previous tadaa_pairwise_* functions.
Add tadaa_plot_tukey to plot Tukey HSD results as error bars because boy do I like error bars.
Add tadaa_balance as a replacement for tadaa_heatmap to check equality of group sizes.
%>% from magrittr as all the cool kids do these days.
%<>% from magrittr because I happen to really like it.
[sg]et_label[s]and word_wrap from sjmisc, as they're handy.
as_factor from haven as a replacement for the deprecated labels_to_factor.
Is now an alias for theme_readthedown, will probably become the new canonical version.
Now finally adds vertical space to the x axis title via proper margining.
Default type is now 3, for generally safer results and consistency with SPSS.
Now auto-factorizes independent variables by default, fixes #24.
Now imports methods, which should fix an issue during knitr or rmarkdown processing where the function couldn't be found. If not, manually library(methods) as a workaround.
Fix wrong sprinkle labelling causing eta.sq to be formatted like a p-value.
Added show_power argument to calculate power via pwr::pwr.f2.test.
Requires more testing against software like G*power to ensure accuracy.
Internal Levene test now uses center = "median" for more robust results, as it should.
Now also uses $\alpha = 0.05$ instead of $\alpha = 0.1$.
Use new argument var.equal to override internal Levene test.
Power should now be properly reported for alternative = "less" or greater.
Now doesn't return the absolute effect size by default.
Added paired argument so effects for paired tests are now a thing.
Also fix direction argument not being honored.
Gains print (logical) argument to suppress printing if so desired. The output will still be returned invisibly.
tadaa_one_sample: Should make sense now.
Remove na.rm argument from tadaa_t.test and tadaa_wilcoxon because it's problematic, and in case of paired = TRUE it would have produced flat out wrong results.
Improved print = markdown output of tadaa_aov, tadaa_t.test, tadaa_wilcoxon, tadaa_one_sample, tadaa_kruskal. Unfortunately print = "console" now has headers with unparsed $\LaTeX$-expressions, but who uses that anyway.
labels_to_factor: Was a wrapper around haven::as_factor and is obsolete by now, as as_factor can do the same thing this function was built for.
tadaa_likertize is renamed to likertize, deprecated since sjmisc::split_var is probably better anyway.
tadaa_aov now knows about types, uses type 1 by default and can do types 2 and 3.
Method for effect size calculation now uses lsr::etaSquared, which also takes a type argument.
Add tadaa_mean_ci: Plots means with 95% confidence intervals as errorbars (thanks Christoph for the suggestion).
Add tadaa_one_sample: For one-sample t-tests and finally an easy z-test.
Add tadaa_wilcoxon: For when tadaa_t.test isn't non-parametric enough. Same usage.
Additionally displays medians of each group.
Add tadaa_kruskal: For when tadaa_aov isn't non-parametric enough, too.
Move tadaa_sem ➡ mean_ci_sem because it's more confint than tadaa.
Add show_n option to tadaa_int: Optionally display N in subtitle.
Turns out pval_string(0.05) returned < 0.05. Well. That was embarrassing.
Minor tweaks to theme_readthedown regarding text placement.
Remove superfluous variables from ngo: index, zeng, zdeutsch, zmathe.
New function: tadaa_normtest lets you do tests for normality (4 methods) over multiple variables.
New function: tadaa_heatmap generates a heatmap. Mhhh, heatmaps.
New function: pval_string as a modification of pixiedust::pvalString that includes p < .05.
Added a ggplot2 theme for the rmdformats::readthedown Rmd template.
Set grid = TRUE for the two interaction plots to be printed in a grid via cowplot::plot_grid.
Choose the plot labels via the labels argument.
tadaa_int plot output now also is a little tidier and optimized for smaller widths.
Add option reduce to modus, so multiple results will be concatenated to a character by default.
Add additional option as_character to modus because guessing about return value classes is no joke.
Fix issues with generate_recodes and interval_labels (#1).
Add tadaa_ord as ordinal equivalent of tadaa_nom.
Dependencies declared in DESCRIPTION are still experimental because of uncertainty regarding failing travis builds. I don't know what's going on there.
Fix typo in DESCRIPTION, misspelling pixiedust. Sorry! | CommonCrawl |
PbSnF$_4$ is the highest-performance fluoride ion conductor known to date. We have studied the phase transitions it undergoes under three different conditions: (i) versus the amount of hydrofluoric acid used for the preparation by the aqueous route, (ii) upon application of mechanical energy, and (iii) versus temperature. The addition of an aqueous solution of lead(II) nitrate to a fresh aqueous solution of SnF$_2$ results in the precipitation of $\alpha$-PbSnF$_4$(aq$_1$), which is very highly strained in the ($\vec a,\vec b$) plane of the tetragonal unit cell. If a very minor amount of hydrofluoric acid is used, the strain increases and o-PbSnF$_4$ is obtained. The $\alpha$-PbSnF$_4 \to$ o-PbSnF$_4$ transition is mostly a bidimensional phase transition, which includes a highly disordered intermediate "transitional phase". Ball milling results in a phase transition, giving microcrystalline disordered $\mu\gamma$-PbSnF$_4$, a cubic $\beta$-PbF$_2$-like phase. Tests were performed to make sure the $\mu\gamma$-PbSnF$_4$ samples are indeed microcrystalline cubic PbSnF$_4$ and not a mixture of microcrystalline $\beta$-PbF$_2$ and amorphous SnF$_2$. Milling $\alpha$- and $\beta$-PbF$_2$ for comparison and testing showed that both $\beta\to\alpha$ and $\alpha\to\beta$ transitions take place upon milling. Stirring $\alpha$-PbF$_2$ in an aqueous solution of SnF$_2$ results in the formation of $\alpha$-PbSnF$_4$(aq$_2$) or Pb$_2$SnF$_6$ depending on the conditions, whereas no reaction occurs with $\beta$-PbF$_2$. When all the phases of PbSnF$_4$ are heated, phase changes take place versus temperature and time. Our findings provide key knowledge that will have to be taken into account in the fabrication of practical devices using PbSnF$_4$, for guaranteed reproducible results, long-term stability and stable output of the device.
xix, 195 leaves : ill. ; 29 cm. | CommonCrawl |
A set with an operation defined on a family of its subsets is studied. The operation is used to generalize the notion of a topological space itself. The operation defines the operation-open subsets of the set. Relations are studied between two types of interiors and closures of subsets. Some properties of maximal operation-open sets are obtained. Semi-open sets and pre-open sets are defined in sets with operations, and some relations among them are proved.
| CommonCrawl |
We investigate totally geodesic foliations (M, F) of arbitrary codimension q on n-dimensional pseudo-Riemannian manifolds for which the induced metrics on the leaves do not degenerate. We assume that the q-dimensional orthogonal distribution D to (M, F) is an Ehresmann connection for this foliation. Since the usual graph G(F) is in general not a Hausdorff manifold, we investigate the graph G(F, D) of the foliation with an Ehresmann connection D, introduced earlier by the author. This graph is always a Hausdorff manifold. We prove that a pseudo-Riemannian metric is defined on the graph G(F, D), with respect to which the induced foliation and the simple foliations formed by the fibers of the canonical projections are totally geodesic. It is proved that the leaves of the induced foliation on the graph are nondegenerately reducible pseudo-Riemannian manifolds, and their structure is described. The application to parallel foliations on nondegenerately reducible pseudo-Riemannian manifolds is considered. It is shown that every foliation defined by the suspension of a homomorphism of the fundamental group of a pseudo-Riemannian manifold belongs to the investigated class of foliations.
N. I. Zhukova. Proceedings of the Moscow Institute of Physics and Technology (Trudy MFTI), 2017, Vol. 9, No. 4, pp. 132-141.
Complete transversely affine foliations are studied. The strong transversal equivalence of complete affine foliations is investigated, which is a more refined notion than the transverse equivalence of foliations in the sense of Molino. A global holonomy group of a complete affine foliation is determined, and it is proved that this group is the complete invariant of the foliation relative to strong transversal equivalence. A representative of an arbitrary equivalence class is constructed from its complete invariant. This representative is a two-dimensional complete transversely affine foliation (𝑀, 𝐹), where 𝑀 is an Eilenberg-MacLane space of type 𝐾(𝜋, 1).
N. I. Zhukova. Zhurnal Srednevolzhskogo Matematicheskogo Obshchestva (Middle Volga Mathematical Society Journal), 2017, Vol. 19, No. 4, pp. 33-44.
For any smooth orbifold $\mathcal N$ a foliated model is constructed, which is a foliation with an Ehresmann connection whose leaf space is the same as $\mathcal N$. We investigate the relationship between some properties of an orbifold and those of its foliated model. The article discusses the application to Cartan orbifolds, that is, orbifolds endowed with a Cartan geometry. | CommonCrawl |
Is there a formal definition of an energy cascade in terms of the energy transfer kernel?
diverges as the high pass filter velocity $\Omega \rightarrow \infty$.
First of all, why is $F$ called "flatness" — what is it supposed to measure the flatness of? What does it mean for $F$ to diverge in the high frequency (UV) limit? Looking at the expression for $F$, I got the impression that it could mean that higher order correlations in time ("n-point functions") start to dominate in the case of intermittency, and the more "local" 2-point interactions, which are responsible for maintaining a scale-invariant inertial subrange (?), become negligible, such that the turbulent system starts to deviate from a Kolmogorov inertial subrange, for example.
In addition, other measures of intermittency can be defined involving higher order correlations in length $l$, such as the so-called hyper-flatness defined as $F_6(l) = S_6(l)/S_2(l)^3$ etc... Does this mean that one could more generally say that, for a turbulent system that shows intermittency, Wick's theorem cannot be applied to calculate higher order n-point functions from 2-point functions?
Is your flatness criterion related to what people call kurtosis in stats? The formula looks a bit similar.
High kurtosis means a strong peak plus long tails. The long tails mean that the extreme values can't be neglected (like they could for, say, a Gaussian distribution). So in the fluid case, where they're looking at velocity gradients, this would mean that the velocity gradient was mostly low (the high peak), but there are a significant number of isolated scattered pockets where there is a very high velocity gradient (giving the long tails).
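For reference, the kurtosis referred to here is the normalized fourth moment, and the flatness of velocity increments is built the same way (this is the standard convention; whether it matches the exact, partly elided expression in the original question is an assumption):
$$\mathrm{Kurt}[X] \;=\; \frac{\langle (X-\mu)^4\rangle}{\langle (X-\mu)^2\rangle^{2}}, \qquad F(l) \;=\; \frac{S_4(l)}{S_2(l)^{2}} \;=\; \frac{\langle (\delta_l u)^4\rangle}{\langle (\delta_l u)^2\rangle^{2}},$$
both equal to 3 for a Gaussian signal; values far above 3 indicate heavy tails, i.e. intermittent bursts.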
@twistor59 that quite fits the phenomenology of intermittency as far as I understand it. | CommonCrawl |
Questions On My Exam, did I answer correctly?
where C is a constant. What is the value of C?
Which of the following functions could qualify as the density function of a random variable?
Last edited by Ku5htr1m; August 29th, 2014 at 12:25 PM.
We require the integral to be equal to 1, so $C = \tfrac32$.
2) There are no intervals with most of them! But your answer is correct. b blows up as $x \to \infty$ while the other two blow up at $x = 0$.
3) Incorrect. The variable $t$ is a dummy variable for the integral. The solution will still have the variable $x$, not the variable $t$. So c) is the correct answer.
Last edited by v8archie; August 29th, 2014 at 12:37 PM.
Ok thank you for the answer and the explanation !
Let me continue with a couple of exam questions I was not sure about.
You are joining a game that involves flipping a fair coin three times. If the coin comes up heads two or three times, you win €500; otherwise, you lose €400. What are your expected earnings in the game?
In a certain binomial tree model for a stock price, the stock price today is €100. Each day from now on, independently of each other day, the price goes up or down €1, up with probability 0.7 and down with probability 0.3. What is the expected stock price three days from today?
Could you give some explanation of how you arrived at the right answers?
Last edited by Ku5htr1m; August 29th, 2014 at 01:49 PM.
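For the two questions above, here is a quick check (a sketch in Python, assuming the problems are exactly as stated: a fair coin, and independent ±€1 daily moves):

```python
from itertools import product
from fractions import Fraction

# Q1: flip a fair coin three times; win 500 on 2 or 3 heads, otherwise lose 400.
outcomes = list(product("HT", repeat=3))            # 8 equally likely outcomes
p_win = Fraction(sum(o.count("H") >= 2 for o in outcomes), len(outcomes))
expected_earnings = p_win * 500 + (1 - p_win) * (-400)
print(p_win, expected_earnings)                     # 1/2 and 50 (euros)

# Q2: price 100 today; each day +1 with prob. 0.7, -1 with prob. 0.3, independently.
# By linearity of expectation, each day adds 0.7*(+1) + 0.3*(-1) = 0.4 on average.
expected_price = 100 + 3 * (0.7 * 1 + 0.3 * (-1))
print(expected_price)                               # 101.2
```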
Finding the local maximum & minimum?
Is that a local maximum or a local minimum?
Are the points $x_1 = 1$ and $x_2 = -1$ local maxima or local minima? | CommonCrawl |
1 . What will come in place of question mark (?) in the given questions?
12 7.5 9 15 31.5 ?
2 . What will come in place of question mark (?) in the given questions?
3 . What will come in place of question mark (?) in the given questions?
2 3 10 39 172 ?
A pipe can fill a tank in 8 hrs, but due to a leakage it took $29\tfrac{1}{3}$ hours to fill the tank. If the tank is full, in how much time will the tank become empty due to the leakage?
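One way to work the pipe question (a sketch, using the standard rate interpretation of such problems):
$$\frac{1}{8} - \ell \;=\; \frac{1}{29\tfrac13} \;=\; \frac{3}{88} \quad\Longrightarrow\quad \ell \;=\; \frac{11}{88} - \frac{3}{88} \;=\; \frac{1}{11}\ \text{tank/hour},$$
so the leak alone would empty a full tank in 11 hours.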
Pawan invested Rs. 16,400 in each of the two schemes A and B. A offers compound interest (compounded annually) and scheme B offers simple interest. In both the schemes he invested for two years and the rates of interest of both the schemes are equal. If the interest earned by him from scheme A is Rs.236.16 more than the interest earned by him from scheme B, what is the rate of interest (pcpa) of both the schemes?
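For the two-scheme question, the standard identity that the two-year difference between compound and simple interest on principal $P$ at rate $r$ is $P r^2$ gives (assuming that identity is the intended route):
$$P r^{2} = 236.16 \;\Longrightarrow\; r^{2} = \frac{236.16}{16400} = 0.0144 \;\Longrightarrow\; r = 0.12 = 12\%\ \text{per annum}.$$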
The sum of 5 times the first number and 4 times the second number is 73. If the sum of 4 times the first number and 5 times the second number is 71, then what is the bigger number?
7 . What will come in place of question mark in the given questions?
8 . What will come in place of question mark in the given questions?
9 . What will come in place of question mark in the given questions?
869.4 + 604.8 = [489.5 - 398.5] $\times$ ?
10 . What will come in place of question mark in the given questions? | CommonCrawl |
Title: The spin of prime ideals and applications.
where $\alpha$ is a totally positive generator of $a$ and $(*/*)_K$ is the quadratic residue symbol in $K$. Friedlander, Iwaniec, Mazur and Rubin prove equidistribution of $spin(\sigma, p)$ as $p$ varies over the odd principal prime ideals of $K$. In this talk I will show how to extend their work to more general fields and give various arithmetic applications. | CommonCrawl |
Abstract: Modern microprocessors are equipped with Single Instruction Multiple Data (SIMD) or vector instructions which expose data level parallelism at a fine granularity. Programmers exploit this parallelism by using low-level vector intrinsics in their code. However, once programs are written using vector intrinsics of a specific instruction set, the code becomes non-portable. Modern compilers are unable to analyze and retarget the code to newer vector instruction sets. Hence, programmers have to manually rewrite the same code using vector intrinsics of a newer generation to exploit higher data widths and capabilities of new instruction sets. This process is tedious, error-prone and requires maintaining multiple code bases. We propose Revec, a compiler optimization pass which revectorizes already vectorized code, by retargeting it to use vector instructions of newer generations. The transformation is transparent, happening at the compiler intermediate representation level, and enables performance portability of hand-vectorized code.
Revec can achieve performance improvements in real-world performance critical kernels. In particular, Revec achieves geometric mean speedups of 1.160$\times$ and 1.430$\times$ on fast integer unpacking kernels, and speedups of 1.145$\times$ and 1.195$\times$ on hand-vectorized x265 media codec kernels when retargeting their SSE-series implementations to use AVX2 and AVX-512 vector instructions respectively. We also extensively test Revec's impact on 216 intrinsic-rich implementations of image processing and stencil kernels relative to hand-retargeting. | CommonCrawl |
Abstract: The distribution of the $K^\pm\to\pi^\pm\pi^+\pi^-$ decays in the Dalitz plot has been measured by the NA48/2 experiment at the CERN SPS with a sample of $4.71\times 10^8$ fully reconstructed events. With the standard Particle Data Group parameterization the following values of the slope parameters were obtained: $g=(-21.134\pm0.017)\%$, $h=(1.848\pm0.040)\%$, $k=(-0.463\pm0.014)\%$. The quality and statistical accuracy of the data have allowed an improvement in precision by more than an order of magnitude, and are such as to warrant a more elaborate theoretical treatment, including pion-pion rescattering, which is in preparation. | CommonCrawl |
Why do most space probes survive for far longer than they were designed for?
Looking back to Opportunity (Rest In Peace, little friend), it was apparently designed to operate for 90 days but it ended up going for 16 years which is approximately 64 times longer than the engineers hoped for. This blows my mind. The technology that we buy and use here on Earth seems so fragile and badly engineered compared to Opportunity.
Besides, Opportunity is not the only one. The second Mars rover, Spirit, was also meant to last for far shorter than it actually did (even though it wasn't nearly as tough as Opportunity). And if I remember correctly, both Voyagers were also estimated to lose connection with Earth far sooner.
Now of course I admire and appreciate the mental and physical work the engineers had to go through to design a rover that lasts for almost two decades on another planet, but I still don't understand how they did that.
How come so many space probes are able to survive for such long periods of time and why is the difference between the expected duration of service and the actual duration of service so dramatically large? Is it really that hard to predict how long a device will last?
Very good question! The answer boils down to statistics of failure. Some aspects involve the statistics of "random" failures—for some reason some critical component just bites the dust—and some involve event-driven failures, such as failures induced by landing shocks, long engine burns, atmospheric entry stresses, etc.
When someone (a government, usually) spends hundreds of millions to billions of dollars/euros (or the equivalent in yen, or rupees, or rubles, or whatever) for a scientific mission, they want the probability of failure to be "acceptably low", which usually means very low. The more is spent, usually the smaller the accepted probability of failure. Typical numbers I have seen working with NASA and JPL are 95% probability of success for a relatively inexpensive mission, and 99% or even higher for flagship-class missions (Probability of success = 1 - Probability of failure). Pushing to those high success probabilities gets really expensive.
Probabilities of random failures are not exactly normally distributed, but let's treat them as such. To get the expected probability of failure over the mission's intended lifetime to very low values, you have to make time to the 50% probability of failure a lot longer than that, sometimes many times that. You're way out on the wings of a normal distribution. At 95% probability of success, you're 2$\sigma$ (two "standard deviations") from the mean, that 50% probability of failure. If you're wanting a mission duration of, say, 5 years, with a 95% probability of success (5% probability of failure) and the standard deviation of failure is 4 years, then you have to design for a mean time to failure of $5 + (2 \times 4)$ years, or 13 years. So half the time, you expect this spacecraft designed for a 5-year mission to last 13 years.
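A sketch of that arithmetic in code (assuming the same rough normal model of failure times the answer itself uses; note that the 2$\sigma$ figure is a round-number rule of thumb, while the exact one-sided 95% quantile is about 1.64$\sigma$):

```python
from scipy.stats import norm

mission = 5.0   # required mission duration, years
sigma = 4.0     # assumed standard deviation of time-to-failure, years

# Rule of thumb from the text: put the required lifetime ~2 sigma below the MTBF.
mtbf_rule_of_thumb = mission + 2 * sigma                       # 13 years
p_success = 1 - norm.cdf(mission, loc=mtbf_rule_of_thumb, scale=sigma)
print(mtbf_rule_of_thumb, round(p_success, 3))                 # 13.0, ~0.977

# Exact MTBF needed for a 95% chance of surviving the 5-year mission:
mtbf_exact = mission - norm.ppf(0.05) * sigma                  # ~11.6 years
print(round(mtbf_exact, 1))
```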
Event-driven statistics can modify that further. Components for a lander or rover must be designed to survive the atmospheric entry (for a destination with an atmosphere) and landing. There is a statistical probability that those components will fail, but they have to be designed with the robustness to make that probability very low. But designing for survival during landing often means that, once they've successfully landed, the expected lifetime goes up a lot.
That is true of spacecraft other than landers, too. Spacecraft that are quiescent, i.e. not doing propulsive maneuvers, not doing a lot of radical attitude variations, not running scan platforms rapidly all over the sky, tend to last a long time. This is the case with the two Voyager spacecraft: since Saturn for Voyager 1, and Neptune for Voyager 2, they've largely been in "quiet cruise". Also, a small but dedicated operations team has come up with creative ways of conserving electric power. They figure out such tactics as turning off instruments that are no longer useful, turning off heaters in components that aren't needed anymore, etc. When I was working the Neptune encounter I remember the project saying that they expected to have enough power to operate until about 2015. We've gone well beyond that, mostly due to those power conservation strategies. Suzie Dodd, the Project Manager, says now they're thinking maybe 2025.
There are a lot of generic answers here about spacecraft. I will try to answer the question specifically for Spirit and Opportunity.
90 sols was deemed sufficient to conduct the primary mission of the rovers, so the systems were designed and tested to assure full capability through the entire 90 sols.
The first thing expected to take a rover below full capability was dust on the solar panels. The dust deposition rate and impact on solar panels was well known from Mars Pathfinder, a 0.3% multiplicative power loss per sol, and was considered to be a global constant in normal weather conditions. It turns out it is global. So the solar panels were sized to support all of the driving, instrument and arm operations, communication, and thermal control required for full capability given about 3/4 power from the solar panels. I.e. they were oversized by a third, compared to the power they could deliver with no dust.
We can see that even with the expected dust deposition, the rover would not just up and die at 90 sols. Its capability would only start to be reduced below "full". You could keep going for a long time, continuing to reduce the operations until the solar panels got so covered in dust that the rover could no longer communicate or maintain thermal control. Furthermore the power required for full capability was conservatively estimated during the design process, and the rover actually required less than those estimates for "full". As the rover was operated, we got smarter about how to conserve power, and could make each watt-hour drive that much further or send back that much more data.
Still, that 0.3% per sol is relentless. You can't go forever. Before we launched, I predicted that the rovers would each last for at least nine months before succumbing. They would be down to 44% power on the panels, and even more loss due to the seasonal movement of Sun north and so less light on level panels. Other folk on the project thought I was nuts. They were thinking six months, tops. In any case, there was no way they could go indefinitely, even if they were parked on the sides of hills to try to point the panels more at the Sun.
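The dust arithmetic above is easy to reproduce (a back-of-the-envelope sketch; the 0.3% per-sol figure comes from the text, the ~30 sols per month is my rounding):

```python
daily_loss = 0.003        # 0.3% multiplicative power loss per sol
sols_per_month = 30       # rough conversion for a back-of-envelope check

for months in (3, 6, 9):
    sols = months * sols_per_month
    remaining = (1 - daily_loss) ** sols
    print(f"~{months} months ({sols} sols): {remaining:.0%} of clean-panel power")
# ~3 months: 76%, ~6 months: 58%, ~9 months: 44% -- matching the 44% quoted above
```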
So what happened? How did they keep going after nine months? For years?!
Each time one of those cleaning events happened (dust unexpectedly getting blown off the panels), the rovers would get a new lease on life. We got cleaning events reliably every Martian year, until one year we got none for Spirit. Spirit died shortly thereafter.
The other expected life-limiters on the rovers were the brushed DC electric motors, and the lithium-ion battery. In fact, one of the wheel drive motors went out on Spirit, about two Earth years into the mission. Spirit continued to limp along for four more years, dragging that wheel through the dirt. (Once this even resulted in a scientific discovery: the stuck wheel dug a trench that exposed white silica in the soil.) Due to the failed wheel, Spirit became stuck and could not free itself. Its inability to position on the side of a hill when the Sun moved North again contributed to its loss when the cleaning events didn't return.
Opportunity also lost a motor, but it was a steering motor, and so had less impact on mobility. And Opportunity continued to see cleaning events each Martian year, up until it got hit by the giant global dust storm.
The motors were only tested to three times the 90 sols, simulating the operation and environmental temperature swings. And there were failures in some of the those tests, which resulted in some changes. So it is quite amazing that those motors lasted as long as they did, even with the two failures.
Though we were worried about the lifetime of the batteries in the rovers, they were remarkably reliable through their many years of operations, and lost very little of their capacity.
In general, electronics is not expected to degrade over time, so long as thermal control is maintained. You are only subject to random failures, which can occur. There were some failures in the flash memory on Opportunity, as it got older. Flash memory has a wear out mechanism, though we tend to not notice it since we don't use the same flash memory for a decade. Eventually the operations team gave up on the flash.
Bottom line, Mars cleaned off the solar panels most of the time, but in the end both rovers died because of dust. There were in fact two motor failures, but the rovers were able to keep going. The batteries held up way better than we expected. The electronics I would expect to keep working, though the flash memory failed on one of the rovers.
That's how the rovers lasted so long. Every spacecraft's story is different.
Picture a normal (bell-curve) chart of failure times: the horizontal axis measures time in standard deviations, and the MTBF sits at the zero position.
The trick is to make the part rugged enough and/or have enough backups (NASA generally goes with the "and" here) that the expected life falls in the −3σ range.
That makes it extremely unlikely for the part to fail before its expected life runs out, and much more likely to last until a bit after the MTBF.
Every component and group of components is designed this way. That means that anything they build will, in most cases, last much longer than its MTBF.
The fruits of labor from 58 years of space-traveling excellence are up for purchase, but they are not available at Radio Shack.
Is it really that hard to predict how long a device will last?
Yes. Quality control tells you how many cycles something should be able to go through until it is unreliable. The parts you choose to use should be based on their reliability. Your Earth-based purchases (aside from planned obsolescence*, economics, and availability) have no such requirements.
I still don't understand how they did that: design a rover that lasted for almost two decades on another planet, and the now 30+ year old extra-solar probes?
What you're not understanding is that humans know how to build things*, to do whatever you can pay for, for as long as is required. And if you're going to spend a billion dollars to launch a million dollar probe, you buy the expensive gears and put in expensive circuits. - I have to buy a new 10 dollar coffee maker about once every two years. Pay me $10k and I'll build you one that you can pass on for the next several generations. How many zeros in your check book?
People might have been promoted for having been able to say, "Yeah, I worked on Opportunity" - but not if it had gone dark 12 hours in. At NASA, everyone does everything exactly right and above par, or they might as well have all stayed in bed.
In the early days, most probes didn't survive launch or for very long - but that's why we now know how to make a one year mission probably last ten. Mission critical was the first 90 days. It had to work that long to fulfill its own mission statement. Any longer is gravy, but if it died 89 days in, that's mission failure.
Guaranteeing something to work for a time period is easy: just run the numbers and apply a good safety factor. But knowing when something will fail can only be gauged with sacrificial test data, which we now have for Mars rovers; and the Pioneer probes had told us that the Voyagers would be able to handle passing through, e.g., Jupiter's radiation (and now even past the Sun's bow shock).
(*) humans know how to build things (to make them break) that shouldn't ever break, to get you to buy them repeatedly, or things that don't physically exist, like The White Album.
If there's a single word for the reverse of planned obsolescence, that would be the word for the LOVE that got put into all these probes.
How can you love something that much? Pay for it, and pay the right people to do it.
They Write the Right Stuff, "The Onboard Shuttle Group": 260 women and men based in an anonymous office building across the street from the Johnson Space Center in Clear Lake, Texas, southeast of Houston.
This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program, each 420,000 lines long, had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
Space may be really, really big. But there's absolutely no room to eff around.
Most space probes don't survive far longer than they were designed for.
Example: Mars missions. 30 failures, 18 successful missions and 8 missions in progress. I count 4 missions (Viking 1 and 2 landers, Spirit and Opportunity) that lasted far longer than their primary mission. So 4/56 is 7% of Mars missions.
There's a bathtub curve at work: if a mission survives launch and orbit insertion/landing, there's a good chance it will fulfill its primary mission (due to the reasons given in other answers). At some point the hardware starts reaching the end of its life, and you get component failures (some of which have redundancy or can be compensated for, like Voyager's camera platform seizing or MER wheel failure, computer failure etc.).
stationary landers have a limited amount of science they can do. At some point, science return diminishes and the mission is ended.
flyby missions run out of targets (but the few deep-space missions we have give interesting information on deep space, so we keep them powered up as long as we can).
There are some very good answers here on failure modes and statistics, but looking at the specific cases of Spirit and Opportunity there is a bit more to it. NASA had just experienced 2 consecutive failures with the Mars Climate Orbiter and Mars Polar Lander, both of which were caused by errors in development, seen as the result of NASA's "Better, Faster, Cheaper" approach (for more details on that see my answer here).
NASA needed a win to show the US taxpayer, congress and the world they were still in business, so they built 2 probes instead of 1 to double their chances of success and then they worked hard to get the designs right, putting in the best parts and materials they could get. NASA always tries to under-promise and over-deliver, but in this case they were even more conservative than usual when it came to mission duration, 90 days was a target they were reasonably certain to meet if the probes got there.
Might be better to see this as: "Why are some estimates of longevity so conservative?".
I think this is clear. They deal with a lot of unknowns. The mars rovers are a great example of this. "How long before dust accumulates?" is a difficult question to answer without having been there. This isn't always the case BTW. Sometimes the limiting factor is something that's very well understood, and I am always surprised by how accurate some of the predictions of failure are.
As a tech guy at work, I'm sometimes faced with the question "Why didn't it work?". It's sometimes difficult to get people to grasp that things like the Internet require lots and lots of things to work and, if any of them fail, the whole thing falls apart. I call this a Logistics Chain.
What made Spirit and Opportunity different was that they had to operate under some bad assumptions about Mars (because we knew little about the long-term effects of the planet's surface conditions on hardware). The most notable bad assumption was that Martian dust would permanently coat the solar panels, rendering them useless after 90 days. We learned that assumption was wrong, and it's believed Martian wind knocks the dust off. With the solar panels being cleaned, the rovers could continue to function longer. Spirit died after its wheels stopped working and it was unable to orient itself for Winter. Opportunity (which fared better with its wheels) died when a dust storm likely cut off power for too long.
| CommonCrawl |
We show that every $n\times n$ matrix can be decomposed into (i) a product of $n/2 $ Toeplitz matrices, (ii) a product of $n/2$ Hankel matrices, (iii) a product of $4n$ Vandermonde matrices, (iv) a product of $16n$ bidiagonal matrices, or (v) a product of $n^2$ companion matrices. We will see that such decompositions do not in general hold with other types of structured matrix factors (e.g. circulant, symmetric Toeplitz, persymmetric Hankel, etc).
Joint work with Ke Ye (University of Chicago, USA). | CommonCrawl |
When is every orbit closure uniquely ergodic?
Can the full shift be embedded in a flow?
Is there a minimal, topologically mixing but not positively expansive dynamical system?
Is there a topologically mixing and minimal homeomorphism on the circle (or on $\mathbb S^2$)?
Hausdorff dimension = entropy/Lyapunov exponent for the baker's map?
Approximation of topological dynamical systems?
Is there a universal $\omega$-limit set? | CommonCrawl |
A Radiative Hydrodynamic model applied to solar flares.
In order to study the properties of faint and moderate flares, we simulate the conditions of the solar atmosphere using a radiative hydrodynamic model (Allred et al., 2005). In the simulation, a constant beam of non-thermal electrons is injected at the apex of a 1D coronal loop and heating from thermal soft X-ray emission is included.
We study the contribution of different processes to the total intensity of some lines at different atmospheric layers and how it evolves in time. We obtain the total integrated intensity of the lines and compare them with some observational values obtained from Ly-$\alpha$ and H-$\alpha$ measurements.
We also modify the electron beam heating rate proposed by Allred et al. (2005) to study how it affects the simulation results. | CommonCrawl |
Magnitude of a Vector. Calculating the magnitude of any vector.
Dot Product. Calculating the dot product of any two vectors.
Cross Product. Calculating the Cross product of any two vectors.
Multivariable Functions. Multivariable functions in Sage.
Converting a multivariable function to a parametric function. Using Sage to convert a multivariable function to a parametric function.
A simple plot in polar coordinates. Plotting $r = \cos \theta$.
Partial Derivatives. Calculating partial derivatives.
Gradients. Calculating the gradient of a multivariate function.
Double Integrals. Calculating Double Integrals.
Plotting Two-Dimensional Vector Fields. Plotting a vector field for $\mathbf F(x,y) = (\cos x, \sin y)$. | CommonCrawl |
If X and Y are independent random variables that are normally distributed (and therefore also jointly so), then their sum is also normally distributed.
That is $X \sim N(\mu_1,\sigma^2_1)$, $Y \sim N(\mu_2,\sigma^2_2)$ and $Z = X + Y$.
Does the same result apply? Thanks in advance.
As the integral is taken over the whole of $\mathbb R$, you can freely shift the argument of the first exponential and you get a Gaussian $(0,\sqrt2)$.
The translation doesn't work when the integration range is limited to the positive reals and the antiderivative will involve the error function.
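Written out for the standard-normal case being described (a sketch; the general $N(\mu_1,\sigma_1^2)+N(\mu_2,\sigma_2^2)$ case goes the same way after completing the square, and gives $N(\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2)$):
$$f_Z(z) = \int_{-\infty}^{\infty} \frac{1}{2\pi}\, e^{-x^2/2}\, e^{-(z-x)^2/2}\, dx = \frac{1}{2\pi}\, e^{-z^2/4} \int_{-\infty}^{\infty} e^{-(x-z/2)^2}\, dx = \frac{1}{2\sqrt{\pi}}\, e^{-z^2/4},$$
which is exactly the $N(0,2)$ density, i.e. the Gaussian $(0,\sqrt2)$ mentioned above.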
| CommonCrawl |
1) We found that membrane detection is the key to neuron segmentation in our studies. To solve the problem of neuron membrane detection, we proposed to use deep convolutional neural networks (DCNN). We first constructed a pixel classifier based on a DCNN, taking the image block centred around each pixel as its input. With this classifier, we can obtain the membrane detection probability map (MDPM). In further studies, based on the fully convolutional network (FCN), we proposed a network named SPPUNet that fuses multi-scale spatial convolutional features and deep context features. SPPUNet significantly improved the speed of generating the MDPM and also improved performance to some degree.
2) After obtaining the MDPM, to solve the problem of neuron segmentation and reconstruction, we proposed a pipeline that first preprocesses the MDPM with a multi-scale median filter to remove blur noise, and then applies a double-scale marker-controlled watershed to the preprocessed MDPM. Based on this pipeline, we obtained a full segmentation of 1693 sections of $14k\times 14k$ SEM images of fly mushroom body neurons. To reconstruct the neurons from the segmentation maps, we proposed a heuristic tracking and linking method, with which we densely reconstructed neurons within a $10\mu m\times 10\mu m\times 10\mu m$ volume.
with KCF tracking for tracing single neurons. Finally, we proposed a framework that segments and traces neurons simultaneously.
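A minimal sketch of the marker-controlled watershed step described in point 2) above, using SciPy/scikit-image (an illustration of the general technique only, not the authors' exact double-scale procedure; the filter sizes and threshold are placeholder assumptions):

```python
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_from_mdpm(mdpm, marker_thresh=0.2, sizes=(3, 7)):
    """mdpm: 2D float array of membrane probabilities in [0, 1] (higher = membrane)."""
    # 1. Median-filter the probability map at two scales to suppress blurry noise.
    smoothed = mdpm
    for s in sizes:
        smoothed = ndi.median_filter(smoothed, size=s)
    # 2. Markers: connected components that are confidently non-membrane (cell interiors).
    markers, _ = ndi.label(smoothed < marker_thresh)
    # 3. Marker-controlled watershed, treating membrane probability as elevation.
    return watershed(smoothed, markers)
```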
GB/T 7714 Rao Qiang. Research on Deep-Learning-Based Algorithms for Reconstructing the Microstructure of Neural Tissue [D]. Beijing: Graduate University of the Chinese Academy of Sciences, 2017. | CommonCrawl |
Historically, this object arose as an axiomatization of "vertex operators" in "conformal field theory" from physics; I don't know what these phrases mean.
To date, I haven't been able to gather together any kind of intuition for a vertex algebra, or even a precise justification as to why anyone should care about them a priori (i.e. not "they come from physics" nor "you can prove moonshine with them").
What is the basic physical phenomenon/problem/question that vertex operators model?
What is the subsequent story about vertex operators and conformal field theory, and how can we see that this leads naturally to the axioms of a vertex algebra?
Are there accessible physical examples ("consider two particles colliding in an infinite vacuum...", etc.) that illustrate the key ideas?
Also, are there alternative, purely mathematical interpretations of vertex algebras which make them easier to think about intuitively?
Perhaps people who played a role in their discovery could say a bit about the thinking process that led them to define these objects?
Vertex algebras precisely model the structure of "holomorphic one-dimensional algebra" -- in other words, the algebraic structure that you get if you try to formalize the idea of operators (elements of your algebra) living at points of a Riemann surface, and get multiplied when you collide.
Our geometric understanding of how to formalize this idea has I think improved dramatically over the years with crucial steps being given by the point of view of "factorization algebras" by Beilinson and Drinfeld, which is explained (among other places :-) ) in the last chapter of my book with Edward Frenkel, "Vertex algebras and algebraic curves" (second edition only). This formalism gives a great way to understand the algebraic structure of local operators in general quantum field theories -- as is seen in the recent work of Kevin Costello -- or in topological field theory, where it appears eg in the work of Jacob Lurie (in particular the notion of "topological chiral homology").
In fact I now think the best way to understand a vertex algebra is to first really understand its topological analog, the structure of local operators in 2d topological field theory. If you check out any article about topological field theory it will explain that in a 2d TFT, we assign a vector space to the circle, it obtains a multiplication given by the pair of pants, and this multiplication is commutative and associative (and in fact a commutative Frobenius algebra, but I'll ignore that aspect here). It's very helpful to picture the pair of pants not traditionally but as a big disc with two small discs cut out -- that way you can see the commutativity easily, and also that if you think of those discs as small (after all everything is topologically invariant) you realize you're really describing operators labeled by points (local operators in physics, which we insert around a point) and the multiplication is given by their collision (ie zoom out the picture, the two small discs blend and look like one disc, so you've started with two operators and gotten a third).
Now you say, come on, commutative algebras are SO much simpler than vertex algebras, how is this a useful toy model? well think about where else you've seen the same picture -- more precisely let's change the discs to squares. Then you realize this is precisely the picture given in any topology book as to why $\pi_2$ of a space is commutative (move the squares around each other). So you get a great model for a 2d TFT by thinking about some pointed topological space X.. to every disc I'll assign maps from that disc to X which send the boundary to the basepoint (ie the double based loops of X), and multiplication is composition of loops -- i.e. $\Omega^2 X$ has a multiplication which is homotopy commutative (on homotopy groups you get the abelian group structure of $\pi_2$). In homotopy theory this algebraic structure on two-fold loops is called an $E_2$ structure.
My claim is thinking about $E_2$ algebras is a wonderful toy model for vertex algebras that captures all the key structures. If we think of just the mildest generalization of our TFT story, and assign a GRADED vector space to the circle, and keep track of homotopies (ie think of passing from $\Omega^2 X$ to its chains) we find not just a commutative multiplication of degree zero, but a Lie bracket of degree one, coming from $H^1$ of the space of pairs of discs inside a bigger disc (ie from taking a "residue" as one operator circles another). This is in fact what's called a Gerstenhaber algebra (aka $E_2$ graded vector space). Now all of a sudden you see why people say you can think of vertex algebras as analogs of either commutative or Lie algebra (they have a "Jacobi identity") -- -the same structure is there already in topological field theory, where we require everything in sight to depend only on the topology of our surfaces, not the more subtle conformal geometry.
Anyway this is getting long - to summarize, a vertex algebra is the holomorphic refinement of an $E_2$ algebra, aka a "vector space with the algebraic structure inherent in a double loop space", where we allow holomorphic (rather than locally constant or up-to-homotopy) dependence on coordinates.
AND we get perhaps the most important example of a vertex algebra--- take $X$ in the above story to be $BG$, the classifying space of a group $G$. Then $\Omega^2 X=\Omega G$ is the "affine Grassmannian" for $G$, which we now realize "is" a vertex algebra.. by linearizing this space (taking delta functions supported at the identity) we recover the Kac-Moody vertex algebra (as is explained again in my book with Frenkel).
The original motivation for vertex algebras is explained briefly in the original paper http://www.jstor.org/stable/27441 as follows. For any even lattice one can construct a space $V$ acted on by vertex operators corresponding to lattice vectors. More generally one can write down a vertex operator for every element of $V$. These vertex operators satisfy some complicated relations, which are then used as the definition of a vertex algebra. In other words, the original example of a vertex algebra was the vertex algebra of an even lattice, and the definition of a vertex algebra was an axiomatization of this example.
This was motivated by my attempt to understand Igor Frenkel's work on the Lie algebra with Dynkin diagram the Leech lattice. Frenkel had constructed a representation of this Lie algebra using vertex operators acting on the space $V$, and I was trying to use his work to understand its root multiplicities. I did not use any insights from conformal/quantum/topological field theory or operator product expansions when defining vertex algebras (as implied by some of the other answers), for the simple reason that I had barely heard of these concepts and had almost no idea what they were.
This is not all that helpful for understanding what a vertex algebra really is. A better view is to regard them as something like a commutative ring with an action of the (formal) 1-dimensional additive group. In particular any such ring is canonically a vertex algebra. The difference between rings with group actions and vertex algebras is that the "multiplication" of a vertex algebra from $V\times V$ to $V$ is not defined everywhere: it can be thought of as a "rational map" rather than a "regular map" in some sense. More precisely if we write $u^g$ for the action of a group element $g$ on $u$, then the product $u^gv^h$ is not defined for all group elements $g$ and $h$, but behaves formally like a meromorphic function of $g$ and $h$, which in particular may have poles when $g$ and $h$ are trivial. Making sense of this informal idea produces the definition of a vertex algebra. (For more details see the unpublished paper http://math.berkeley.edu/~reb/papers/bonn/bonn.pdf.) This means that vertex algebras behave like commutative rings: for example, one should define modules over them, tensor products of modules, and so on.
The answers here have focused on the mathematical aspects of VOAs and the motivation coming from QFT, the specialization to Conformal Field Theory, and then the further specialization to two-dimensional holomorphic CFT. Two-dimensional CFT's did arise from string theory where the fields on the 2d world-sheet of the string define a CFT. However there is another important part of the story which has not been mentioned and is more directly tied to physical phenomenon and that is the theory of critical phenomenon. Many systems, such as water-ice, magnetic systems and so on undergo phase transitions as a thermodynamic parameter such as temperature is varied. Typically these are first order transitions, meaning that there is a latent heat associated to the transition. Sometimes one can vary an additional parameter and find a line of first order transitions which terminates at a second order transition. The second order transition is characterized by fluctuations on all scales: the theory becomes scale and conformally invariant at that point. It also turns out that the behavior of thermodynamic quantities as one approaches the critical point are characterized by numbers called critical exponents which are universal for systems with the same symmetry structure. These exponents are related to what are called the conformal dimensions of operators in CFT and they are directly measurable in the lab for a variety of systems. One important tool which was used in the study of critical phenomenon is the operator product expansion or operator algebra of K. Wilson and L. Kadanoff. There is a huge literature on this. Here is a reference to an early paper on the operator algebra for the Ising model: http://prb.aps.org/abstract/PRB/v3/i11/p3918_1 . VOA's are a rigorous mathematical formalization of this kind of algebraic structure. For someone who wants to learn about CFT starting from a particular physical system (or at least a mathematical idealization of a physical system) the Ising Model is a good place to start.
"What is the basic physical phenomenon/problem/question that vertex operators model?"
There is also a lot of material about QFT there.
In the Wightman framework we can think of quantum fields as operator-valued distributions, which map test functions living on a given spacetime to (maybe unbounded) operators. Essentially selfadjoint operators represent observables that can be measured, in principle, by some experiment or device. Physicists are mostly interested in systems that have interactions. We can think of elementary particles as localized excitations of quantum fields that are solutions to specified wave equations (in the distributional sense). In order to have any interactions, these wave equations should have non-linear terms (actually you can define the notion of "free field" (a field with no interactions whatsoever) this way: a free field in the sense of physicists is a quantum field that is a solution of a linear equation).
Non-linear terms entail products of quantum fields, which are undefined, in general, because products of distributions are undefined, in general. The history of QFT is about the struggle of physicists to dodge this problem in one way or another. One way to dodge this problem is to introduce, as an axiom, so called "(associative) operator product expansions". This is a way to formulate, as an axiom, (handwaving:) that the "severity of the singularity of products of distributions" is under control.
In a handwaving way, this axiom says that we assume we know something about the kind of singularity of the product of two fields, as their support comes closer and closer, and it is not so bad.
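Schematically, the relation in question has the standard OPE form (my notation; the "given relation" that the next paragraph refers to is not shown here, so this generic form is a stand-in assumption):
$$a(z)\, b(w) \;\sim\; \sum_{n=1}^{N} \frac{c_n(w)}{(z-w)^{n}} \;+\; \text{regular terms} \qquad (z \to w),$$
understood as holding inside correlation functions or matrix elements rather than as an identity of operators.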
A rigorous interpretation of an OPE would interpret the given relation as a relation of e.g. matrix elements or vacuum expectation values.
An OPE is called associative if the expansion of a product of more than two fields does not depend on the order of the expansion of the products of two factors. Warning: Since the OPE has no interpretation as defining products of operators, or more generally the product in a ring, the notion of associativity does not refer to the associativity of a product in a ring, as the term may suggest.
The axioms of vertex operator algebras are an axiomatization of OPEs. In this sense there is reason to expect that vertex operator algebras will play a key role, on one way or another, in a rigorous construction of interacting QFTs in four dimensions.
Name-dropping here, but I can claim that Richard Borcherds discussed vertex algebras with me, before anyone knew what they were (not that I was able to be of any help). At that stage they were purely algebraic objects, with some kind of "Jacobi identity". At some point he said he had a definition as Lie algebra in an internal sense in a category - this idea was binned as not in fact useful. Much later he said something about a relationship with the Wightman axioms, but I gather that isn't really watertight when it comes down to it. So I doubt whether the emergence of an axiomatic theory can be "rationally reconstructed" without doing some damage to the history.
Speaking as a string theorist, I would say Victor Kac's book "Vertex Algebras for Beginners" followed fairly closely what physicists originally thought. So you might want to have a look at it.
I'll take a more physical POV. Although we don't have any mathematically concrete definition of QFT in general, we can define 2d Conformal Field Theory (this is attributed to infinite symmetries and exact solvability); other than CFTs we can also define Topological QFTs (Atiyah). Graeme Segal proposed a geometric definition of CFT. In conformal field theory we deal with vertex operators, analogous to operators in QFT; we can write a Taylor-like expansion of a product of two vertex operators, a.k.a. operator product expansions (OPEs), which gives the QFT analog of two fields interacting. All these notions are captured by the axiomatization based on vertex algebras. To see a better picture, look at the classic paper of Belavin, Polyakov, and Zamolodchikov, where an algebraic approach to CFT was proposed.
Recently Kapustin and Orlov proposed a more general definition of Vertex algebras and they showed the relation between their algebraic definition and Segal's geometric one.
This may hold you off until a REAL answer comes along. I found something resembling intuition on this subject when I first studied Quantum Field Theory. I'd recommend looking there before trying to tackle the many-headed beast that is conformal field theory. Many (but not all) of the VOA axioms can be seen in the properties of a QFT. As far as a "mathematician-friendly" place to learn about QFT goes I remember liking the book by Ryder.
| CommonCrawl |
The International Commission for Perfect Cars (ICPC) has constructed a city scale test course for advanced driver assistance systems. Your company, namely the Automotive Control Machines (ACM), is appointed by ICPC to make test runs on the course.
The test course consists of streets, each running straight either east-west or north-south. No streets of the test course have dead ends, that is, at each end of a street, it meets another one. There are no grade separated streets either, and so if a pair of orthogonal streets run through the same geographical location, they always meet at a crossing or a junction, where a car can turn from one to the other. No U-turns are allowed on the test course and a car never moves outside of the streets.
Oops! You have just received an error report telling that the GPS (Global Positioning System) unit of a car running on the test course was broken and the driver got lost. Fortunately, however, the odometer and the electronic compass of the car are still alive.
You are requested to write a program to estimate the current location of the car from available information. You have the car's location just before its GPS unit was broken. Also, you can remotely measure the running distance and the direction of the car once every time unit. The measured direction of the car is one of north, east, south, and west. If you measure the direction of the car while it is making a turn, the measurement result can be the direction either before or after the turn. You can assume that the width of each street is zero.
The car's direction when the GPS unit was broken is not known. You should consider every possible direction consistent with the street on which the car was running at that time.
The input consists of a single test case. The first line contains four integers $n$, $x_0$, $y_0$, $t$, which are the number of streets ($4 \leq n \leq 50$), $x$- and $y$-coordinates of the car at time zero, when the GPS unit was broken, and the current time ($1 \leq t \leq 100$), respectively. $(x_0, y_0)$ is of course on some street. This is followed by $n$ lines, each containing four integers $x_s$, $y_s$, $x_e$, $y_e$ describing a street from $(x_s, y_s)$ to $(x_e, y_e)$ where $(x_s, y_s) \ne (x_e, y_e)$. Since each street runs either east-west or north-south, $x_s = x_e$ or $y_s = y_e$ is also satisfied. You can assume that no two parallel streets overlap or meet. In this coordinate system, the $x$- and $y$-axes point east and north, respectively. Each input coordinate is non-negative and at most 50. Each of the remaining $t$ lines contains an integer $d_i$ ($1 \leq d_i \leq 10$), specifying the measured running distance from time $i − 1$ to $i$, and a letter $c_i$, denoting the measured direction of the car at time $i$ and being either N for north, E for east, W for west, or S for south.
Each output line should consist of two integers separated by a space. | CommonCrawl |