Of course, the curves may neither converge nor diverge. An extreme example is that they may evolve more or less parallel to each other.
My problem is to recognize the convergence and divergence patterns. This may be trivial when done visually by a human, but I want to automate the task using an algorithm, likely an ML algorithm. 'Pattern recognition' comes to mind as a (broad) area relating to this kind of problem.
I have a very high-level idea about a possible approach: first manually label some existing graphs as 'convergence', 'divergence', or 'other', and then use this set of categorized graphs/images to 'train' the ML algorithm, in the hope that the algorithm can recognize the patterns on its own after proper training.
What are the recommended ML algorithm(s) for this problem? And any pointer to the theory and implementation of such algorithm(s) is appreciated.
I may be terribly ignorant, but: does such a ML algorithm operate on the graph/image directly or does it operate on the underlying data points that make up the graph? In other words, what would be the input to the algorithm? A series of images or a bunch of numbers?
I suspect machine learning is the wrong approach. Instead, I suspect you will do better to define a metric and measure the metric, or define a hypothesis and use hypothesis testing. You are not trying to predict the future evolution of these values; that's something that ML might be suitable for, but that's not what you're trying to do, so ML doesn't seem like the right tool for the job.
Let me suggest an approach. If $f(t),g(t)$ are the values of the measurements at time $t$, define $D(t) = |f(t)-g(t)|$ (the absolute value of the difference). $D(t)$ measures how close they are. Then, you want to test whether $D(t)$ is increasing or decreasing. If $D(t)$ is decreasing, then you have a situation of "convergence"; if $D(t)$ is increasing, then you have a situation of "divergence".
How can you test whether $D(t)$ is increasing vs decreasing, over some time period? Here's one simple approach. You could separate your time window into two halves, the first half and the second half. Calculate the average value of $D(t)$ over the first half, say $\mu_0$, and the average value of $D(t)$ over the second half, say $\mu_1$. Now you can compare $\mu_0$ to $\mu_1$, to determine whether it is increasing or decreasing.
Alternatively, another approach would be to use simple linear regression on $D(t)$ to fit a linear model $D(t) \sim \alpha t + \beta$, and then test whether the slope $\alpha$ is greater than zero or smaller than zero.
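A minimal sketch of both checks in Python/NumPy, assuming the two series are sampled at the same times (the array names are mine, not from the original):

```python
import numpy as np

def trend_checks(f, g):
    """f, g: equal-length arrays of the two measurements over time."""
    D = np.abs(f - g)                            # D(t): how close the curves are
    half = len(D) // 2
    mu0, mu1 = D[:half].mean(), D[half:].mean()  # first-half vs second-half mean
    t = np.arange(len(D))
    alpha, beta = np.polyfit(t, D, 1)            # least-squares fit D ~ alpha*t + beta
    return mu0, mu1, alpha

# mu1 < mu0 (or alpha < 0) suggests convergence; the reverse suggests divergence.
```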
However, one problem with this is that your conclusion might be confounded by noise: it might be that $D(t)$ is neither increasing nor decreasing, and all you're seeing is statistical noise. So can you separate a genuine increase or decrease in $D(t)$ from mere noise?
The answer is yes: you can test for statistical significance using hypothesis testing. If you observe $\mu_0 > \mu_1$, you can use a hypothesis test to check whether this difference is statistically significant (e.g., using a permutation test). Or, if you fit a linear model and find $\alpha > 0$, you can test whether the difference from zero is statistically significant (this is covered in standard textbooks on linear regression).
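A rough sketch of the permutation test, under the simplifying assumption that the observations of $D(t)$ are exchangeable under the null hypothesis:

```python
import numpy as np

def permutation_test(D, n_perm=10_000, seed=0):
    """p-value for the observed difference in half-means of D
    under random reshuffling of the time order."""
    rng = np.random.default_rng(seed)
    half = len(D) // 2
    observed = abs(D[half:].mean() - D[:half].mean())
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(D)
        if abs(perm[half:].mean() - perm[:half].mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # small p-value: trend is unlikely to be noise
```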
There's lots more one can say about hypothesis testing, but this should give you the general approach. The short version is: don't use machine learning; use an appropriately chosen statistical measure or statistical hypothesis test.
I think the key is to abstract your problem by finding appropriate features. For example a feature could be the minimal distance between the two curves. Or how many times do they cross over. Or how many steps do they stay less than K units away from each other. Think about features that will capture convergence and divergence. Once you have the right features you can throw them into your favourite machine learning algorithm like a Support Vector Machine, Neural Network or Decision Tree. I would give the learning algorithm a set of numbers, because it will make the learning much easier than giving it the whole image.
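If you do go this route, a sketch of what the feature extraction might look like (the feature choices and names are mine, for illustration only):

```python
import numpy as np
from sklearn.svm import SVC

def curve_features(f, g, K=1.0):
    """Hand-crafted features for one pair of curves."""
    d = f - g
    crossings = int(np.sum(np.sign(d[:-1]) != np.sign(d[1:])))  # sign changes
    return [np.abs(d).min(),                # minimal distance between the curves
            crossings,                      # how many times they cross over
            float(np.mean(np.abs(d) < K))]  # fraction of steps within K units

# Hypothetical training data: labelled_pairs is a list of (f, g) arrays,
# labels are 'convergence' / 'divergence' / 'other'.
# X = [curve_features(f, g) for f, g in labelled_pairs]
# clf = SVC().fit(X, labels)
```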
how to identify patterns in program execution flows?
What's an analogy learning algorithm? How does it differ from an induction learning algorithm?
Description: Single tungsten oxide aerogels (WO3), binary oxide aerogels (WO3-Al2O3) and ternary oxide aerogels (WO3-SiO2-Al2O3) were prepared using a standard sol-gel route. Tungsten oxide tetraethoxide (WO(OCH2CH3)4) was used as the sol-gel precursor. The excellent properties of the gels obtained by the sol-gel synthesis were preserved upon supercritical drying with CO2. After supercritical drying at 40 °C and 100 bar, all aerogels were calcined to 800 °C. The influence of the synthesis parameters on the catalytic activity of WO3 as supported on silica and/or alumina aerogels was investigated through the transformation of N-(phosphonomethyl)iminodiacetic acid to N-(phosphonomethyl)glycine. Despite including WO3 into single and mixed silica and alumina aerogels, high specific surface areas (284-653 m2 g-1) were preserved. Higher conversion was obtained for catalysts with higher ratios of WO3 in the mixed silica-alumina aerogels that were calcined at 800 °C.
Description: Reaction-formed $MnZn$ ferrite was prepared and the decrease in shrinkage after sintering due to the volume expansion accompanying iron oxidation was studied. Green compacts consisting of the milled raw oxides $Fe_2O_3$, $Mn_3O_4$, $ZnO$ and metallic iron powder were sintered at 1350 °C in air. During the first hold at 800 °C, $Fe$ was oxidized to $\alpha-Fe_2O_3$ and $Zn$ ferrite was formed. Above 1300 °C the reaction bonding was completed and $MnZn$ ferrite, exhibiting a relatively low shrinkage, was formed. The chemical reactions involved during reaction bonding were associated with a volume expansion and porosity formation, compensating for the shrinkage on sintering. Intensive milling decreases the porosity after sintering but induces the oxidation of iron, and partially removes the shrinkage compensation caused by the presence of metallic iron.
Description: Our work deals with the problem of producing a complex metal-ceramic composite using the processes of internal oxidation (IO) and severe plastic deformation. For this purpose, a Cu-Al alloy with 0.4 wt.% of Al was used. IO of the sample serves, in the first step of the processing, as a means of attaining a fine dispersion of nanosized oxide particles in the metal matrix. The production technology continues with repeated application of severe plastic deformation (SPD) of the resulting metal matrix composite to produce the bulk nanoscaled structural material. SPD was carried out with equal channel angular pressing (ECAP), which allowed the material to be subjected to an intense plastic strain through simple shear. Microstructural characteristics of the one-phase and multiphase material were studied on an internally oxidized Cu sample with 0.4 wt.% of Al, composed of a one-phase copper-aluminum solid solution in the core and finely dispersed oxide particles in the same matrix in the mantle region. For this purpose, AFM, X-ray diffraction and Raman spectroscopy were used. Local structures in plastically deformed samples reflect the presence of $Cu$, $CuO$, $Cu_2O$, $Cu_4O_3$ or $Al_2O_3$ structural characteristics, depending on the type of sample.
$s^n_i = (s^n_i, s^n_i)$.
In other words, $V \times_U W$ is the fibre product of the presheaves $V$ and $W$ over the presheaf $U$ on $\Delta$.
A 0-definable set $ D $ in a structure $ M $ is said to be stably embedded if every $ M $-definable subset of $ D^n $, for any $ n $, is $ D $-definable.
In a stable theory, every definable set is stably embedded. This fact is sometimes called the Parameter Separation Theorem, e.g., by Bruno Poizat. More generally, in an arbitrary theory, the set of realizations of any stable type is stably embedded (right?). For example, any strongly minimal set, or set of Morley rank less than $ \infty $, is stably embedded.
An analogous result exists for o-minimal sets. Theorem 2 in Hasson and Onshuus's paper Embedded O-minimal Structures implies that if $ S $ is a 0-definable ordered set which, in the monster model, has the property that every definable subset of $ S^1 $ is a finite union of points and intervals, then $ S $ is stably embedded.
In ACVF, the value group and the residue field are both stably embedded.
In a monster model $ M $, every elementary map from $ D(M) $ onto $ D(M) $ can be extended to an automorphism of $ M $.
Every complete type over $ D $ is definable over a small subset of $ D $.
Every complete type over $ D $ is implied by a partial type over a small subset of $ D $.
The assumption that enough sets are stably embedded plays a key role in the general means of getting binding groups from internality, as explained in Hrushovski's paper Groupoids, Imaginaries, and Internal Covers.
15:00 to 16:00 Elvira Zappale Lower semicontinuity and relaxation of nonlocal $L^\infty$ functionals.
16:30 to 17:15 Kirill Cherednichenko Effective behaviour of critical-contrast PDEs: micro-resonances, frequency conversion, and time dispersive properties.
13:30 to 14:10 Julia Yeomans Bacteria: self-motile liquid crystals?
Here we are finding the number of states with energy less than $E$ for a particle trapped in a 3D harmonic potential. For $E$ large compared to $\hbar\omega$, we treat the energy levels as continuous, and we introduce a coordinate system in which a surface of constant energy is the plane $E = E_1 + E_2 + E_3$. The number of states with energy less than $E$ under these conditions is given in the image below. I do not understand how the terms in the denominator arise, or why we chose the first octant of the plane.
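For reference, a standard counting argument (my reconstruction of what the missing image presumably shows): the levels form a cubic lattice of spacing $\hbar\omega$ along each energy axis, each $E_i$ is non-negative (hence the restriction to the first octant), and the states below $E$ fill the simplex $E_1 + E_2 + E_3 \le E$, whose volume is $E^3/6$. Dividing by the volume $(\hbar\omega)^3$ occupied by one state gives

$$N(E) \approx \frac{1}{(\hbar\omega)^3}\cdot\frac{E^3}{6} = \frac{E^3}{6(\hbar\omega)^3},$$

which is where both factors in the denominator come from.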
Volume of Brillouin zone is the same as Fourier primitive cell?
Can we relate volume to amount of matter?
Reason for Debye cutoff frequency?
Can volume be interpreted as density $\times$ volume?
What conditions must eigenvalues satisfy for degenerate states?
We propose a new provable method for robust PCA, where the task is to recover a low-rank matrix which is corrupted with sparse perturbations. Our method consists of simple alternating projections onto the sets of low-rank and sparse matrices, with intermediate de-noising steps. We prove correct recovery of the low-rank and sparse components under tight recovery conditions, which match those for the state-of-the-art convex relaxation techniques. Our method is extremely simple to implement and has low computational complexity. For an $m \times n$ input matrix (say $m \geq n$), our method has $O(r^2 mn\log(1/\epsilon))$ running time, where $r$ is the rank of the low-rank component and $\epsilon$ is the accuracy. In contrast, the convex relaxation methods have a running time of $O(mn^2/\epsilon)$, which is not scalable to large problem instances. Our running time nearly matches that of the usual (non-robust) PCA, which is $O(rmn\log(1/\epsilon))$. Thus, we achieve the best of both worlds: low computational complexity and provable recovery for robust PCA. Our analysis represents one of the few instances of global convergence guarantees for non-convex methods.
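A toy sketch of the alternating-projection idea described in the abstract (a fixed threshold is assumed here for simplicity; the actual method tunes its thresholds per iteration):

```python
import numpy as np

def alt_proj_rpca(M, r, thresh, iters=50):
    """Alternate a rank-r SVD truncation (projection onto low-rank
    matrices) with elementwise hard thresholding (projection onto
    sparse matrices)."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]                   # low-rank step
        S = np.where(np.abs(M - L) > thresh, M - L, 0.0)  # sparse step
    return L, S
```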
The question is "Let $X$ be a metric space, find a closed subset $A$ such that $A'\neq \emptyset$ and $(A')' = \emptyset $"
Well, in $\mathbb R$ with the standard topology, $\mathbb Q$ is closed and open simultaneously (hope I'm not wrong with that), therefore it answers the demands. Yet I can't find an example in which $A$ is closed and not clopen. Are there any?
The Cantor Space and open, but not closed sets.
Are $\emptyset$ and $X$ closed, open or clopen?
discrete metric, both open and closed.
Question on the Baire lemma: why does a non-empty complete metric space have a non-empty interior?
How does a closed metric space induce a topology?
We study the semiclassical propagation of a class of wavepackets for large times on manifolds of negative curvature. The time evolution is generated by the Laplace-Beltrami operator and the wavepackets considered are Lagrangian states. The principal result is that these wavepackets become weakly equidistributed in the joint limit $\hbar\to 0$ and $t\to\infty$ with $t \ll |\ln \hbar|$. The main ingredient in the proof is hyperbolicity and mixing of the geodesic flow.
Bagchi, Biman and Chandra, Amalendu and Rice, Stuart A (1990) An interpretation of the bifurcation of orientational relaxation processes in a supercooled liquid. In: Journal of Chemical Physics, 93 (12). pp. 8991-9001.
The orientational relaxation of molecules in a supercooled liquid is known to exhibit interesting dynamical behavior. As the temperature of the liquid is lowered towards its glass transition temperature, there is a bifurcation of the relaxation dynamics into primary ($\alpha$) and secondary ($\beta$) processes; the former is associated with the collective motion responsible for the glass transition, while the latter is associated with single particle motion. In this paper we present a theory of orientational relaxation in a supercooled liquid. This theory provides both qualitative and quantitative descriptions of the ($\alpha\beta$) bifurcation phenomenon at a molecular level. The theory exploits the properties of a time dependent free energy functional which explicitly includes the effects of the collective motions in the liquid on the orientational motion of a solute (or a tagged) molecule. In the overdamped limit, this analysis leads to two coupled Smoluchowski equations for the orientation distribution function. These equations, when solved, reveal the essential features of the ($\alpha\beta$) bifurcation phenomenon. Explicit calculations are presented for orientational relaxation in a liquid of dipolar hard spheres, a liquid of nonpolar ellipsoids, and an orientationally disordered solid. Our calculations demonstrate the ubiquity of the ($\alpha\beta$) bifurcation phenomenon and they reveal many of its aspects. The relevance of the present work to current theories of glass transition is discussed briefly.
Rodrigo Platte's primary research interests are in computational mathematics, numerical analysis, and approximation theory. A recurring theme in his research is sampling. Today's world is increasingly rich in data, but working with large data sets can be computationally expensive and, in some cases, even detrimental to the accuracy of the information being extracted. In other cases, data acquisition can be limited due to hardware restrictions or other physical constraints. Among his main contributions are the analysis and development of high-order and spectral methods, and the design of algorithms for image reconstruction from Fourier samples. His work is motivated by problems arising in medical and radar imaging, atmospheric research, terrain modeling, and fluid mechanics.
Rodrigo B. Platte, Lloyd N. Trefethen, Arno B.J. Kuijlaars. Impossibility of fast stable approximation of analytic functions from equispaced samples. SIAM Review (2011).
Rodrigo B. Platte. How fast do radial basis function interpolants of analytic functions converge?. IMA Journal of Numerical Analysis (2011).
Justin Holmer, Rodrigo B. Platte, Svetlana Roudenko. Blow-up criteria for the 3D cubic nonlinear Schrödinger equation. Nonlinearity (2010).
Ricardo Pachón, Rodrigo B. Platte, Lloyd N. Trefethen. Piecewise-smooth chebfuns. IMA Journal of Numerical Analysis (2010).
Rodrigo B. Platte, Lloyd N. Trefethen. Chebfun: a new kind of numerical computing. Progress in Industrial Mathematics at ECMI 2008. Springer (2010).
Platte,Rodrigo B*. Windowed Fourier Methods for Overlapping Domain Approximations. NSF-MPS-DMS(9/15/2015 - 8/31/2018).
Gelb,Anne*, Boggess,Albert, Cochran,Douglas, Kao,Ming-Hung, Platte,Rodrigo B, Stufken,John. RTG: DataOriented Mathematical and Statistical Sciences. NSF-MPS-DMS(8/1/2015 - 7/31/2019).
Gelb,Anne*, Platte,Rodrigo B. Developing Fast Accurate and Robust Numerical Algorithms. DOD-AFOSR(6/1/2015 - 5/31/2018).
Gelb,Anne*, Platte,Rodrigo B, Renaut,Rosemary Anne. International Conference on High Order and Spectral Methods Participant costs:2014. DOD-NAVY-ONR(6/15/2014 - 9/1/2015).
Renaut,Rosemary Anne*, Gelb,Anne, Platte,Rodrigo B. Participant expenses: International Conference on Spectral and High Order Methods 2014. DOD-AFOSR(6/1/2014 - 5/31/2015).
Kostelich,Eric John*, Armbruster,Hans Dieter, Czygrinow,Andrzej Michal, Fishel,Susanna, Gelb,Anne, Jacobs,Mark, Kawski,Matthias, Kuang,Yang, Mahalov,Alex, Moustaoui,Mohamed, Platte,Rodrigo B, Tang,Wenbo. MCTP: Mathematics Mentoring Partnership Between Arizona State University and the Maricopa County Community College District. NSF-MPS-DMS(7/15/2012 - 6/30/2017).
Gelb,Anne*, Platte,Rodrigo B, Renaut,Rosemary Anne. Development and Analysis of Non-Classical Numerical Approximation Methods. DOD-AFOSR(7/15/2012 - 7/14/2015).
Rodrigo B. Platte. Fourier reconstruction of univariate piecewise-smooth functions from non-uniform spectral data with exponential convergence rates. Research Cluster: Computational Challenges in Sparse and Redundant Representations (Nov 2014).
Rodrigo B. Platte. Impact of Waves Along Coastlines. IMA Hot Topics Workshop (Oct 2014).
Rodrigo B. Platte. Scattered data approximation and solution of PDEs on complex domains. Colloquium, Department of Mathematics, University of Arizona (Oct 2014).
Rodrigo B. Platte. Algorithms for recovering smooth functions from uniformly sampled data. International Conference on Spectral and High Order Methods (ICOSAHOM), Salt Lake City (Jun 2014).
Rodrigo B. Platte. Mini-symposium: Advances in Radial Basis Function and Other Meshfree Methods. International Conference on Spectral and High Order Methods (ICOSAHOM), Salt Lake City (Jun 2014).
Rodrigo B. Platte, Anne Gelb, Ben Adcock, Jan Hesthaven, Edward Walsh. Understanding the mathematical underpinnings in medical imaging. AIM SQuaREs, American Institute of Mathematics, Palo Alto, CA (Oct 2013).
Rodrigo B. Platte. Stability of radial basis function methods for convection problems on the circle and sphere. SIAM Annual Meeting (Jun 2013).
Rodrigo B. Platte. Computing on surfaces with Chebfun2. SIAM Annual Meeting (Jun 2013).
Rodrigo B. Platte. Mapped polynomial methods for approximation on equispaced points. Colloquium, University of Fribourg, Switzerland (May 2013).
Rodrigo B. Platte. Algorithms for approximation on equispaced nodes and the generalized Hermite error formula. Numerical Analysis Seminar, Mathematical Institute, University of Oxford (May 2013).
Rodrigo B. Platte. Algorithms for recovering smooth functions from equispaced data and the impossibility theorem. Colloquium, Boise State University (May 2013).
Rodrigo B. Platte. Algorithms for recovering smooth functions from equispaced data. Seminar, Purdue University (Apr 2013).
Rodrigo B. Platte. Algorithms for recovering smooth functions from equispaced data: from rational interpolation to compactly supported radial basis function methods. Computational and Applied Mathematics Seminar, Arizona State University (Oct 2012).
Rodrigo B. Platte. Accurate representation of Fourier transforms of piecewise smooth functions. Chebfun and Beyond Workshop, Oxford, United Kingdom (Sep 2012).
Rodrigo B. Platte. Convergence and stability properties of a family of $C^\infty$ compactly supported kernels. 3rd Dolomites Workshop on Constructive Approximation and Applications, Italy (Sep 2012).
Rodrigo B. Platte. Algorithms for recovering smooth functions from equispaced data. 3rd Dolomites Workshop on Constructive Approximation and Applications, Italy (Sep 2012).
Rodrigo B. Platte. A hybrid Fourier-polynomial method for partial differential equations. Banff International Research Station (BIRS) -- Workshop organized by Oscar Bruno (Caltech) (Jun 2012).
Rodrigo B. Platte. Algorithms for recovering smooth functions from equally spaced data. University of Delaware Colloquium (Feb 2012).
Rodrigo B. Platte. $C^\infty$ Radial Basis Function Methods for PDEs. 7th International Congress on Industrial and Applied Mathematics - ICIAM 2011 (Jul 2011).
Rodrigo B. Platte. Exploring logistic maps with Chebfun. 7th International Congress on Industrial and Applied Mathematics - ICIAM 2011 (Jul 2011).
Rodrigo B. Platte. Convergence properties of analytic and $C^\infty$ compactly-supported RBF interpolation. NSF-CBS conference on Radial Basis Functions Mathematical Developments and Applications (Jun 2011).
Rodrigo B. Platte. $C^\infty$ compactly supported radial basis function approximations. International Symposium in Approximation Theory (May 2011).
Rodrigo B. Platte. Exploring Logistic Maps with Chebfun. Computational and Applied Mathematics Seminar, Arizona State University (Nov 2010).
Rodrigo B. Platte. $C^\infty$ Compactly Supported Radial Basis Function Methods for PDEs. SIAM Annual Meeting, Pittsburgh, PA (Jul 2010).
Rodrigo B. Platte. How fast do radial basis function interpolants of analytic functions converge?. 13th International Conference on Approximation Theory, San Antonio, TX (Mar 2010).
Rodrigo B. Platte. Impossibility of approximating analytic functions from equispaced samples at geometric convergence rate. 13th International Conference on Approximation Theory, San Antonio, TX (Mar 2010).
Rodrigo B. Platte. Signal reconstruction with radial basis functions. Southwest Conference on Integrated Mathematical Methods in Medical Imaging (Feb 2010).
Rodrigo B. Platte. Impossibility of approximating analytic functions from equispaced samples at geometric convergence rates. Computational and Applied Mathematics Seminar, Arizona State University (Feb 2010).
Schubert calculus is the study of flag varieties, which are quotients of algebraic groups (usually complex semisimple, but sometimes over the real numbers or even finite fields) by parabolic subgroups.
Reference for Grassmann and Schubert varieties for beginners.
I need some references to understand Grassmann and Schubert varieties as a beginner. I am looking for self-contained notes on these. Thanks.
Every Schubert cycle a Chern class?
How to compute the Schubert class product $\sigma_{2,1}^2$ in the Grassmannian $G(3,6)$? I remember the result is $\sigma_{3,3} + 2\sigma_{3,2,1} + \sigma_{2,2,2}$.
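One way to check this (a sketch in Sage; Schur-function multiplication implements the Littlewood-Richardson rule, and the truncation to the $3\times 3$ box, which indexes classes on $G(3,6)$, is the step I added):

```python
# Expand sigma_{2,1} * sigma_{2,1} as Schur functions, then keep only
# the partitions fitting in a 3x3 box.
s = SymmetricFunctions(QQ).schur()
prod = s[2,1] * s[2,1]
for lam, c in prod.monomial_coefficients().items():
    if len(lam) <= 3 and (len(lam) == 0 or lam[0] <= 3):
        print(c, list(lam))   # expect 1 [3,3], 2 [3,2,1], 1 [2,2,2]
```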
How to prove Wielandt minimax formula?
How can I get the matrix form for a schubert cell?
Is the geometrical meaning of cup product still valid for subvarieties?
Why is the Complete Flag Variety an algebraic variety?
What is a general linear subspace?
Can anybody please suggest nice reference which will have lots of examples and counter examples for studying Grassmann varieties in particular Schubert variety?
Let $X(w)$ be a Schubert variety in $G/B$, where $G$ is a semisimple algebraic group and $B$ is a Borel subgroup of $G$. Then for $v \in W$, what is $v X(w)$? Is it the same as $X(vw)$?
The associated Schubert variety of a flag of subspaces of a vector space.
I am looking for a self-contained basic theory of Schubert-cells through finding the decomposition of the full flag $Fl_3(\mathbb C^3)$.
Two questions about Schubert calculus and Schur functions.
Schubert calculus and number of lines satisfying some properties.
Representation-theoretical reasons for positivity of product of two Schubert polynomials?
In which books can I find something about the grassmannian and the plucker coordinates ?
Schur functors as spaces of "flag tensors"?
Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling?
What must someone know in statistics and machine learning?
What is the probability that $X<Y$ given $\min(X,Y)$?
Why is computing ridge regression with a Cholesky decomposition much quicker than using SVD?
Can confidence interval of positive values be negative?
The expression with $n$ terms is $(1/2)(2/3)(3/4)\cdots((n-1)/n) = 1/n$. The limit is obviously 0.
$\dfrac{(n-1)(n-2)(n-3)!}{n(n-1)(n-2)(n-3)!} = \dfrac{(n-1)!}{n!} = \dfrac{1}{n}$. Now I understand the result.
Is the way I solved it okay? I wanted to know where the result $1/n$ came from.
The result comes intuitively from pattern recognition and formally from mathematical induction.
which tends to 0 as $n \to \infty$.
You did not understand what I wrote. $(n-1)/n$ is the last term of the product. When everything is multiplied out, the product is $(n-1)!/n! = 1/n$.
You are given two integers $N$ and $K$. Find all ways to represent $N$ as the sum of exactly $K$ distinct positive integers $x_1, x_2, \ldots, x_K$; in other words, all solutions of $x_1 + x_2 + \cdots + x_K = N$ with $0 < x_1 < x_2 < \cdots < x_K$.
How can I write code to generate these combinations, in any language?
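A minimal recursive sketch in Python (not from the thread; it enumerates the parts in increasing order, so distinctness is automatic):

```python
def distinct_sums(n, k, smallest=1):
    """Yield every way to write n as a sum of exactly k distinct
    positive integers, listed in increasing order."""
    if k == 1:
        if n >= smallest:
            yield (n,)
        return
    x = smallest
    # smallest possible total with first part x: x + (x+1) + ... + (x+k-1)
    while k * x + k * (k - 1) // 2 <= n:
        for rest in distinct_sums(n - x, k - 1, x + 1):
            yield (x,) + rest
        x += 1

print(list(distinct_sums(10, 3)))
# [(1, 2, 7), (1, 3, 6), (1, 4, 5), (2, 3, 5)]
```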
Are you currently participating in SnackDown Round 1B? Then you should not ask this question before the competition ends; you are violating the CodeChef code of conduct for the competition.
Find number of solutions to the equation?
We study charmonium spectral functions at finite temperature by using stochastic reconstruction methods. Our quenched lattice QCD simulations are performed with the standard plaquette gauge and the $O(a)$-improved Wilson fermion actions on $192^3 \times N_\tau$ lattices with $N_\tau$ = 96--32, which corresponds to temperatures from $0.73\,T_c$ to $2.2\,T_c$. To reconstruct the charmonium spectral functions from the Euclidean time correlators we apply two different stochastic methods, called Stochastic Analytical Inference (SAI) and Stochastic Optimization Method (SOM); the former is based on Bayes' theorem, similar to the commonly used Maximum Entropy Method (MEM), while the latter does not rely on any prior information. We carefully estimate systematic uncertainties by comparing results among SAI, SOM and also MEM. With the given spectral functions we discuss melting temperatures of charmonia as well as the heavy quark diffusion coefficient.
3. It is smaller than $7\times 4$.
The first edition of Bourbaki's General Topology (chapter I, §9, p. 56) contains the following theorem.
Proposition 3. Let $E$, $F$ be two topological spaces, $R$ an equivalence relation on $E$, and $S$ an equivalence relation on $F$. The canonical map from the product space $(E/R) \times (F/S)$ onto the quotient space $(E\times F)/(R\times S)$ is a homeomorphism.
It is followed by a very convincing proof. However, the theorem is wrong. The subsequent editions give an example where the spaces are not homeomorphic, even when one of the equivalence relation is equality.
There are cases where one would like this theorem to hold, for example when one discusses topologies on the fundamental group. Indeed, the fundamental group of a pointed space $(X,x)$ is a quotient of the space of loops based at $x$ on $X$ by the pointed-homotopy relation, hence can be endowed with the quotient of the topology of compact convergence (roughly, uniform convergence on compact sets). Multiplication of loops is continuous. However, the resulting group law on $\pi_1(X,x)$ need not be.
The mistake appears in the recent literature, see for example this paper, or that one (which was even featured as «best AMM paper of the year» in 2000...). MathSciNet is not aware of the flaws in those papers... Fortunately, MathOverflow is!
Dear Emmanuel. Thank you for pointing that the second link was incorrect. I did not find any information about this procedure, so I sent an email to the support-service of Math Reviews.
Fabel's paper http://arxiv.org/abs/0909.3086 provides a very clear counterexample.
You're perfectly right! Thank you! I knew this paper of Fabel, so I should have told the rest of the story in my post.
I'll complete it soon. Let's just say that the counterexample is—of course—given by the hawaiian earring.
This difficulty was one of the reasons for my introducing the category of Hausdorff k-spaces in my 1961 thesis, and discussing the idea of a "category adequate and convenient for all purposes of topology" in my first paper "Ten topologies for X x Y". See the discussion of convenient categories on the ncatlab. Also full details are in my book "Topology and Groupoids" (2006) available from amazon, see my web pages, and the example on quotient spaces was in the 1968 edition of the book.
M-theory and string theory predict the existence of many six-dimensional SCFTs. In particular, type IIA brane constructions involving NS5-, D6- and D8-branes conjecturally give rise to a very large class of N=(1,0) CFTs in six dimensions. We point out that these theories sit at the end of RG flows which start from six-dimensional theories which admit an M-theory construction as an M5 stack transverse to $R^4/Z_k \times R$. The flows are triggered by Higgs branch expectation values and correspond to D6's opening up into transverse D8-branes via a Nahm pole. We find a precise correspondence between these CFT's and the AdS$_7$ vacua found in a recent classification in type II theories. Such vacua involve massive IIA regions, and the internal manifold is topologically $S^3$. They are characterized by fluxes for the NS three-form and RR two-form, which can be thought of as the near-horizon version of the NS5's and D6's in the brane picture; the D8's, on the other hand, are still present in the AdS$_7$ solution, in the form of an arbitrary number of concentric shells wrapping round $S^2$'s.
You may have noticed that I am back to publishing regular blog posts! My goal for now is a blog post every second Wednesday. I am now also trying to answer forum questions promptly. I want to thank the readers who took up the slack for the last year and a half in answering questions in the forums. In particular, I'd like to call out abieniek, Alexei Kassymov, and Lane Walker, whose answers were always spot on.
Now to the topic of this post. There has been a lot of talk since the standards came out about what they say about multiple methods for arithmetic operations, and I'd like to clear up a couple of points.
First, the standards do encourage that students have access to multiple methods as they learn to add, subtract, multiply and divide. But this does not mean that you have to solve every problem in multiple ways. Having different methods available is like having different means of transportation available to get to work; flexibility is good, but it doesn't mean you have to go to school by car, then by bus, then walk, then bike—every single day! The point of having multiple methods available is to encourage students to think strategically about what might be the best method for a given problem, not force them to solve every problem four times.
Second, the different methods are not unrelated; they form a progression, with the ultimate goal being the standard algorithm. For example, when students are first learning to multiply two digit numbers, they might use a rectangle to represent a product such as $42 \times 71$.
This shows the fundamental role of the distributive property in multiplying multi-digit numbers. You have to multiply each base ten component into each other one. Indeed, the same rectangle representation provides a visual proof of the distributive property itself.
At some later point students might just start writing down all the partial products, without using the rectangle to derive them.
Note the correspondence between the rectangle method and the partial product method, indicated by the colors. The first row of the rectangle shows all the products by the 2 in 42 (in red); the second row shows all the products by the 40 (in blue). The products in the partial product method are grouped in the same way. There are many ways you can order the partial products, but if you group them as I have here, going from right to left in each two-digit number, as in the standard algorithm, you make an amazing discovery: you can add up all the partial products in each group (blue group or red group) in your head as you go along. That's because, in each case, adding the 2 to the 140 or the 40 to the 2800, there are enough zeroes in the second addend to accommodate the first, so it is easy to write down the sum right away, without writing the addends separately.
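Concretely (my arithmetic, reconstructing what the missing figures show for $42 \times 71$):

$$42 \times 71 = \underbrace{(2 \times 1) + (2 \times 70)}_{2 + 140 = 142} + \underbrace{(40 \times 1) + (40 \times 70)}_{40 + 2800 = 2840} = 2982.$$

Within each group the sum can be written down immediately, which is exactly the shortcut described above.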
OK, so it's not always quite this easy, because every now and then you will have to keep in mind a bundled unit from the previous step (aka carrying), but you will never have to remember that for more than one step at a time, because each bundled unit gets used up at the next step. So if you invent a notation for remembering the bundled unit (what we used to call "little 1 in the corner" when I was growing up) then you can still avoid writing down all the partial products, and just compute the sum within each group as you go along. You have just created the standard algorithm.
The different methods are not isolated different ways of doing the same thing; they are steps towards fluency with the standard algorithm, fluency that is not fragile because it is supported by understanding.
Bill McCallum, founder of Illustrative Mathematics, is a University Distinguished Professor of Mathematics at the University of Arizona. He has worked in both mathematics research, in the area of number theory and arithmetical algebraic geometry, and mathematics education, writing textbooks and advising researchers and policy makers. He is a founding member of the Harvard Calculus Consortium and lead author of its college algebra and multivariable calculus texts. In 2009–2010 he was one of the lead writers for the Common Core State Standards in Mathematics. He holds a Ph. D. in Mathematics from Harvard University and a B.Sc. from the University of New South Wales.
As you probably know, a Sudoku puzzle is a $9\times 9$ grid divided into nine $3\times 3$ subgrids. Some of the cells in the grid contain a symbol: usually the symbols are the numbers $1,2,\ldots, 9$.
Here I revisit a topic I have written on several times before, and which furthermore continues the theme of my last column.
In the latter part of year 2009, I attended a scientific talk at Sydney University about the path of a body exiting a cliff. The position at which the body lands from the base of the cliff depends on exit velocity.
Q1351 A city consists of a rectangular grid of roads, with $m$ roads running east--west and $n$ running north--south. Every east--west road intersects every north--south road.
by Lorenzo Di Paola. Published on 28 January 2016.
the risk of a financial breakdown.
Making the best decisions means not only using the available data and the right model, but also understanding its implications and limitations. Then, companies can take the risks that could lead them to the best possible outcomes. Analytic tools help companies when they are dealing with delicate decisions, for example, when signing a contract with another company, or when they need to decide where to open a new store.
One market that carries risk is the health system: talking about health also means talking about risks and money. For an insurance company, a customer might represent a profitable business when the customer makes no claims or just a small number of claims, but if the customer requires special treatment it could result in a huge cost for the company. Although it may be unpopular to say, insurance companies are always placing a bet in favour of their customers' health.
How does a health insurance company work?
Health insurance companies receive money from their members, who in exchange can be treated for their conditions in any of the hospitals that the company recognises. When a patient is admitted into a hospital and receives treatment, the insurance company pays the hospital for every procedure carried out, according to a set of pre-contracted prices. It is therefore vital for the insurance company to have good price agreements with the hospitals in order not to incur excessive costs.
When an agreement with a hospital comes to a contractual end (it expires), the hospital and the insurance company enter into a negotiation and work together to find a new agreement for the coming years. Sometimes hospitals can leverage their quality or strategic geographical position to get very favourable deals, giving insurance companies a difficult decision: do they accept the hospital's new conditions (and close a possibly unfruitful deal) or do they derecognise the hospital and stop working with them? In the past, this decision was often made based only on the perception or the feelings of the managers, whereas now a new analytical methodology has been introduced that helps insurance managers have a clearer picture of the consequences of their decision, and hence make the best choice.
What is the impact on members?
What is the impact on other hospitals?
The first question asks how important a particular hospital is in its geographical area. What would be the impact of derecognising that hospital in terms of their users? In order to learn this, the company measures what percentage of medical procedures would still be available for its members if a hospital were derecognised. If the hospitals in the surrounding area (usually considered within a 30-minute drive) can perform the same medical procedures as the hospital under consideration, then it is deemed not vital for their insurance system.
If a hospital stops being available for members, the insurance company provides a list of nearby alternative hospitals. Management then need to forecast how exactly patients will redistribute themselves.
Once the forecast has been produced, the management will check whether the alternative sites have enough capacity to deal with the upcoming volume of visits; if this check has a positive outcome then the hospital can be derecognised.
In order to forecast which hospital the members will likely end up visiting, a simple methodology has been developed. It assumes that the members are more likely to visit bigger hospitals than smaller ones, where the size of a hospital is measured by the number of members that visit a particular hospital, that is, the market share. The probability of a member visiting hospital A is estimated simply as the market share of that hospital in the area that lies within 30 minutes of the derecognised hospital. These probabilities can then be used to predict how many members each alternative hospital will end up receiving.
For example, let's suppose that the management is considering to derecognise a hospital with 100 members. In the 30-minute area around that hospital, the members have to choose between three alternative hospitals: A, B and C, where A holds 40% of the market share, hospital B holds 10% of the market share and hospital C holds 50% of the market share. According to the model, we forecast that the number of members going to hospital A is $100 \times 0.4 = 40$, and similarly we forecast 10 members going to hospital B and 50 to hospital C. If hospital A is able to handle those 40 new members, hospital B is able to handle 10 new members and hospital C is able to handle 50 new members, the insurance managers can safely assume that the surrounding hospitals will be able to handle the redirected volumes.
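The forecast in the example is simple enough to script; a sketch (the function and variable names are mine):

```python
def forecast_redistribution(members, market_share):
    """members: count at the hospital being derecognised;
    market_share: share of each alternative hospital within the
    30-minute area around it."""
    return {h: members * share for h, share in market_share.items()}

print(forecast_redistribution(100, {"A": 0.4, "B": 0.1, "C": 0.5}))
# {'A': 40.0, 'B': 10.0, 'C': 50.0}
```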
Thanks to this simple methodology, the management has a quantitative tool that helps the company make decisions based on actual measurements. If they decide to derecognise a particular hospital, they might save money by not going through with a bad deal, plus they might improve their relationship with the alternative hospitals, as these are receiving more patients and hence more money. Alternatively, if they decide to accept the negotiation with that hospital, then they know they are making the right decision for the coming years.
Lorenzo Di Paola is a Statistician from University College London.
A $3 \times 3 \times 3$ cube may be reduced to unit cubes ($1 \times1 \times1$ cubes) in six saw cuts if you go straight at it.
What about a cube of any size (an $n \times n \times n$ cube)?
Description: Górecki and Łuczak describe an approach for using a weighted combination of raw series and first-order differences for NN classification with either the Euclidean distance or full-window DTW. They find the DTW distance between two series and the two differenced series. These two distances are then combined using a weighting parameter $\alpha$ (See Algorithm 4). Parameter $\alpha$ is found during training through a leave-one-out cross-validation on the training data. This search is relatively efficient as different parameter values can be assessed using pre-computed distances.
An optimisation to reduce the search space of possible parameter values is proposed. However, we could not recreate their results using this optimisation. We found that if we searched through all values of $\alpha$ over the range $[0,1]$ in increments of 0.01, we were able to recreate the results exactly. Testing is then performed with a 1-NN classifier using the combined distance function given in Algorithm 4.
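A sketch of that $\alpha$ search, assuming pairwise DTW distance matrices on the raw and differenced training series have been precomputed; the linear weighting below is my reading of the combination, and the paper's exact form may differ:

```python
import numpy as np

def loocv_best_alpha(D_raw, D_diff, y, alphas=np.arange(0, 1.001, 0.01)):
    """Pick alpha by leave-one-out 1-NN accuracy on the training set.
    D_raw, D_diff: (n, n) DTW distance matrices; y: label array."""
    best_alpha, best_acc = None, -1.0
    for a in alphas:
        D = (1 - a) * D_raw + a * D_diff     # combined distance
        np.fill_diagonal(D, np.inf)          # leave-one-out: exclude self-match
        acc = np.mean(y[np.argmin(D, axis=1)] == y)
        if acc > best_acc:
            best_alpha, best_acc = a, acc
    return best_alpha
```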
What is a good introduction to integrable models in physics?
What is known about first return times to Markov partitions for Anosov diffeomorphisms?
the ability to give explicit solutions.
These guidelines should be interpreted in a very broad sense."
Very good answers! I'd love to see more angles to this important issue, which is why a little bounty is offered.
Excellent question, I think. But I'm stuck before we get to the "integrable" part. What is a "system"? I'd be glad if someone addressed this in their answer.
I believe that 'system' is in the same sense as 'dynamical system', which probably comes from 'system of differential equations'.
The book by Hitchin, Segal, Ward and Woodhouse begins with this nice quote: "Integrable systems, what are they? It's not easy to answer precisely. The question can occupy a whole book (Zakharov 1991), or be dismissed as Louis Armstrong is reputed to have done once when asked what jazz was---'If you gotta ask, you'll never know!'"
Could anyone with enough rep please add the "integrable-systems" tag to this question?
I'll take off from the questioner's suggesting that maybe it's better to say what is a NON-integrable system is.
The Newtonian planar three body problem, for most masses, has been proven to be non-integrable.
Before Poincare, there seemed to be a kind of general hope in the air that every autonomous Hamiltonian system was integrable. One of Poincare's big claims to fame, proved within his Les Methodes Nouvelles de Mecanique Celeste, was that the planar three-body problem is not completely integrable. It is the dynamical systems equivalent to Galois' work on quintics. Specifically, Poincare proved that besides the energy, angular momentum and linear momentum there are no other ANALYTIC functions on phase space which Poisson commute with the energy. (To be more careful: any 'other' such function is a function of energy, angular momentum, and linear momentum. And his proof, or its extensions, only holds in the parameter region where one of the mass dominates the other two. It is still possible that for very special masses and angular momenta/ energies the system is integrable. No one believes this.) As best I can tell, existence of additional smooth integrals (with fractal-like level sets) is still open, at least in most cases.
Poincare's impossibility proof is based on his discovery of what is nowadays called a "homoclinic tangle" embedded within the restricted three body problem, viewed in a rotating frame. In this tangle, the unstable and stable manifolds of some point (an orbit in the non-rotating inertial frame) cross each other infinitely often, and the point itself lies in the closure of these crossing points.
Roughly speaking, an additional integral would have to be constant along this complicated set. Now use the fact that if the zeros of an analytic function have an accumulation point then that function is zero to conclude that the function is zero.
Before Poincare (and I suppose since) mathematicians and in particular astronomers spent much energy searching for sequences of changes of variables which made the system "more and more integrable". Poincare realized the series defining their transformations were divergent -- hence his interest in divergent series. This divergence problem is the "small denominators problem" and getting around it by putting number theoretic conditions on frequencies appearing is at the heart of the KAM theorem.
This is, of course, a very good question. I should preface with the disclaimer that despite having worked on some aspects of integrability, I do not consider myself an expert. However I have thought about this question on and (mostly) off.
I will restrict myself to integrability in classical (i.e., hamiltonian) mechanics, since quantum integrability has to my mind a very different flavour.
The standard definition, which you can find in the wikipedia article you linked to, is that of Liouville. Given a Poisson manifold $P$ parametrising the states of a mechanical system, a hamiltonian function $H \in C^\infty(P)$ defines a vector field $\lbrace H,-\rbrace$, whose flows are the classical trajectories of the system. A function $f \in C^\infty(P)$ which Poisson-commutes with $H$ is constant along the classical trajectories and hence is called a conserved quantity. The Jacobi identity for the Poisson bracket says that if $f,g \in C^\infty(P)$ are conserved quantities so is their Poisson bracket $\lbrace f,g\rbrace$. Two conserved quantities are said to be in involution if they Poisson-commute. The system is said to be classically integrable if it admits "as many as possible" independent conserved quantities $f_1,f_2,\dots$ in involution. Independence means that the set of points of $P$ where their derivatives $df_1,df_2,\dots$ are linearly independent is dense.
I'm being purposefully vague above. If $P$ is a finite-dimensional and symplectic, hence of even dimension $2n$, then "as many as possible" means $n$. (One can include $H$ among the conserved quantities.) However there are interesting infinite-dimensional examples (e.g., KdV hierarchy and its cousins) where $P$ is only Poisson and "as many as possible" means in practice an infinite number of conserved quantities. Also it is not strictly necessary for the conserved quantities to be in involution, but one can allow the Lie subalgebra of $C^\infty(P)$ they span to be solvable but nonabelian.
Now the reason that integrability seems to be such a slippery notion is that one can argue that "locally" any reasonable hamiltonian system is integrable in this sense. The hallmark of integrability, according to the practitioners anyway, seems to be coordinate-dependent. I mean this in the sense that $P$ is not usually given abstractly as a manifold, but comes with a given coordinate chart. Integrability then requires the conserved quantities to be written as local expressions (e.g., differential polynomials,...) of the given coordinates.
The simple answer is that a $2n$-dimensional Hamiltonian system of ODE is integrable if it has $n$ (functionally) independent constants of the motion that are "in involution". (Functionally independent means none of them can be written as a function of the others. And "in involution" means that their Poisson Brackets all vanish -- a somewhat technical condition I won't define carefully (* but see below), but instead refer you to: http://en.wikipedia.org/wiki/Poisson_bracket). The simplest and the motivating example is the $n$-dimensional Harmonic Oscillator. What makes integrable systems remarkable and interesting is that one can find so-called "action angle variables" for them, in terms of which the time-evolution of any orbit becomes transparent.
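For the record, in that motivating example the involution is easy to verify: with $H = \sum_{i=1}^n \frac12\left(p_i^2 + \omega_i^2 q_i^2\right)$, the $n$ partial energies

$$f_i = \tfrac12\left(p_i^2 + \omega_i^2 q_i^2\right), \qquad \{f_i, f_j\} = 0, \qquad H = f_1 + \cdots + f_n,$$

are independent conserved quantities in involution, since each $f_i$ depends only on its own conjugate pair $(q_i, p_i)$.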
It is primarily about the infinite dimensional theory of integrable systems, like SGE (the Sine-Gordon Equation), KdV (Korteweg deVries) , and NLS (non-linear Schrodinger equation), but it starts out with an exposition of the classic finite dimensional theory.
Here is a little bit about what the Poisson bracket of two functions is that explains its meaning and why two functions with vanishing Poisson bracket are said to "Poisson commute". Recall that in Hamiltonian mechanics there is a natural non degenerate two-form $\omega = \sum_i dp_i \wedge dq_i $. This defines (by contraction with $\omega$) a bijective correspondence between vector fields and differential 1-forms. OK then -- given two functions $f$ and $g$, let $F$ and $G$ be the vector fields corresponding to the 1-forms $df$ and $dg$. Then the Poisson bracket of $f$ and $g$ is the function $h$ such that $dh$ corresponds to the vector field $[F,G]$, the usual commutator bracket of the vector fields $F$ and $G$. Thus two functions Poisson commute iff the vector fields corresponding to their differentials commute, i.e., iff the flows defined by these vector fields commute. So if a Hamiltonian vector field (on a compact $2n$-dimensional symplectic manifold $M$) is integrable, then it belongs to an $n$-dimensional family of commuting vector fields that generate a torus action on $M$. And this is where the action-angle variables come from: the level surfaces of the action variables are the torus orbits and the angle variables are the angles coordinates for the $n$ circles whose product gives a torus orbit.
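In canonical coordinates this unwinds to the familiar formula

$$\{f,g\} = \sum_{i=1}^n \left( \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i} \right),$$

which is often taken as the definition in the first place.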
I don't think that one could say that there is a dichotomy between integrable and chaotic systems. There is certainly a huge chunk in the middle. By a chaotic system we often mean a system where trajectories of points deviate exponentially with time; a canonical example is the Arnold (or Anosov) cat map. In this case a generic trajectory is of course everywhere dense in the phase space. This is related to ergodicity (in the case when there is a measure preserved by the system). But of course not every ergodic system is chaotic. There are different degrees of chaos: mixing, strong mixing, etc.
On the contrary, for an integrable system the motion of every trajectory is quasi-periodic; it stays forever on a half-dimensional torus. Such systems are rare: a small perturbation of such a system is not integrable anymore. KAM theory describes the residue of integrability of the perturbation, while Arnol'd diffusion is about trajectories that don't move quasiperiodically anymore.
The above answers deal mostly with finite-dimensional systems. As for the (systems of) PDEs, you typically need the Lax pair or a zero curvature representation (see e.g. the Takhtajan--Faddeev book mentioned in the wikipedia entry you linked to for the definition of the latter) or something else like that. To the best of my knowledge, the complete understanding of what is an integrable system for the case of three (3D) or more independent variables is still missing. In particular, for the case of three independent variables (a.k.a. 3D or (2+1)D) the overwhelming majority of examples are generalizations of the systems with two independent variables. These generalizations are constructed using the so-called central extension procedure (e.g. the KP equation is related to KdV in this way). Many integrable partial differential systems in three independent variables and apparently the overwhelming majority thereof in four or more independent variables are dispersionless, i.e., can be written as first-order homogeneous quasilinear systems, see e.g. this article and references therein for details.
As for the reading suggestions, in addition to the Takhtajan--Faddeev book cited above, you can look e.g. into a fairly recent book Introduction to classical integrable systems by Babelon, Bernard and Talon, and into the book Multi-Hamiltonian theory of dynamical systems by Maciej Blaszak which covers the central extension stuff in a pretty straightforward fashion. Both books have extensive bibliographies with further references to look into.
Now, as for classification and identification of (new) integrable systems of PDEs, at least in two independent variables, it turns out that the (infinitesimal higher) symmetries play an important role here. A recent collective monograph Integrability, edited by A.V. Mikhailov and published by Springer in 2009, could be a good starting point in this direction. See also another recent book Algebraic theory of differential equations edited by MacCallum and Mikhailov and published by Cambridge University Press. For a general introduction to the subject of symmetries of (systems of) PDEs, I can recommend the book Applications of Lie groups to differential equations by Peter Olver.
Your "3D" is better known as 1+2 (one time variable, two space variables). This is an important distinction both in the Lax pair formalism (time variable is preferred) and in zero curvature representation approach (applies primarily to 1+1).
Regarding extension to PDEs, note Dedecker's paper "Intégrales complètes de l'équation aux dérivées partielles de Hamilton-Jacobi d'une intégrale multiple", C.R. Acad. Sc. Paris, 285 (1977) pp. 123-6. Together with two preceding notes in the same journal, this generalises the concept of "complete integrability" of a mechanical system.
This is soft -- but I think of an integrable system as one whose dynamics are dominated by algebra. For finite dimensional integrable systems, the symmetries (related to conserved quantities by Noether's theorem) force the trajectories to live on half-dimensional tori. For infinite dimensional integrable systems, where the flow on the scattering data is isospectral the symmetries force solutions to be n-soliton solutions plus dispersive modes.
There is a blog post of Terry Tao's (apologies for not having the link) which talks about how algebra is the right tool to understand structure while analysis is the right tool to understand randomness. The claim is that one mark of an good problem is the presence of an interesting relationship between structure and randomness and hence the requirement that both algebra and analysis be used -- to some degree -- in order to get a good answer to the problem. The soliton resolution conjecture is by this standard a good problem because the asymptotic n-soliton solutions are fundamentally algebraic while the dispersive modes are fundamentally analytic objects.
I agree with Dmitri that there isn't a dichotomy. The symmetries can have a large or small role in the dynamics as can the ergodicity.
So apparently some integrable systems can have chaotic scattering maps.
"A mechanical system is called integrable if we can reduce its solution to a sequence of quadratures."
So, literally, an integrable system (in this view) is one that can be solved by a sequence of integrals (which may not be explicitly solvable in elementary functions, of course). To connect to other answers, this should only work out when there are enough symmetries for us to write down and integrate.
Since it hasn't been mentioned yet, a short addition to José Figueroa-O'Farrill's answer. I will only talk about the finite-dimensional case. So let's assume that $\dim(P) = 2n$. Then the Hamiltonian flow is integrable if there exist $n$ functions $f_1, \dots, f_n$ which are in involution with respect to the Poisson structure.
Now, the cool thing is that there exist action-angle coordinates. This means we can conjugate our possibly complicated dynamics to the simple dynamics $$ \partial_t I_j = 0,\quad \partial_t \theta_j = \omega_j(I),\quad j=1,\dots,n, $$ where $\omega_j(I) = \partial H/\partial I_j$; this is something we can all solve, since it is just linear. Note: we will have $I_j = f_j(\text{orbit})$, which is time independent.
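Written out, with initial data $I_j(0)$ and $\theta_j(0)$, the flow is just linear drift on the torus: $$ I_j(t) = I_j(0), \qquad \theta_j(t) = \theta_j(0) + \omega_j\big(I(0)\big)\,t, \qquad j = 1,\dots,n. $$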
As a possible application, KAM theory is usually formulated for perturbations of systems in action-angle coordinates. This in turn implies that non-degenerate integrable systems are stable under small perturbations (in a subtle measure-theoretic sense). But I think this is what is meant by "integrable $\neq$ chaos": we have a great form of perturbation theory for integrable systems.
"...working on completely integrable systems is based on a contemplation of some very exceptional equations which hide a Platonic structure: although these equations do not look trivial a priori, we shall discover that they are elementary, once we understand how they are encoded in the language of symplectic geometry, Lie groups and algebraic geometry. It will turn out that this contemplation is fruitful and lead to many results" | CommonCrawl |
This question was previously titled "Finding residues of a huge multivariable rational function."
First I want to solve for $u_1$ in the denominator and find those roots (poles) that contain $x_1$, as below.
Then I will loop through the roots and compute the residue of $f$ w.r.t. $u_1$ at those poles.
Then I replace $f$ with $ans1$ and continue the same process w.r.t. $u_2$ and poles containing $x_2$, and finally w.r.t. $u_3$ and poles containing $x_3$. However, this consumes about 800 GB of memory on an HPC when I feed it a larger rational function.
residue of a rational function avoiding symbolic variables?
Neither .roots() nor .residue() is defined for rational functions that are not built from symbolic variables.
Instead of working with symbolic variables you can work in (fraction fields of) polynomial rings, as follows.
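A minimal Sage sketch of this approach (the variable names u1, u2, u3, x1, x2, x3 and the base field QQ are my assumptions, not taken from the original question):

    R.<u1,u2,u3,x1,x2,x3> = PolynomialRing(QQ)
    f = (x1 + u1*x2) / (u1^2 - x1)   # dividing two ring elements lands in the fraction field
    num = f.numerator()
    den = f.denominator()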
Division of polynomials automatically yields elements of the fraction field.
This should be much more efficient than using symbolic variables. Let me know if this works for you.
As for residues, indeed they are not implemented for rational functions. But you can easily implement the formulas yourself. For example, the residue of $p(z)/q(z)$ at a simple pole $z=r$ is $p(r)/q'(r)$.
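For a simple pole this is a two-liner (a sketch only; simple_residue is a hypothetical helper name, and r is assumed to be a simple root of q with respect to var):

    def simple_residue(p, q, var, r):
        # residue of p/q at the simple root r of q: p(r)/q'(r)
        return p.subs({var: r}) / q.derivative(var).subs({var: r})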
Edit: In the second step, the roots of the denominator with respect to $u_2$ may live in a larger field (when there are irreducible factors of degree $\gt 1$ w.r.t $u_2$ in the denominator), so you have to add these to your field when searching for roots. Basically you want the splitting field of the denominator. Since the variables are still undetermined it seems you would have to create a function field (extension).
Thank you @rburing. Yes, the roots are instantaneous. I am working on the residues now. I will let you know if I have further questions.
But the limiting part takes a huge amount of memory for a moderately big rational function in several variables. I am looking for other methods now. | CommonCrawl
Let $T$ be an operation on languages over $\Sigma$ such that $T(L)$ is regular for every regular language $L \subseteq \Sigma^*$.
Is it possible to prove that $T^\infty(L)$ is regular?
Colleague Apass Jack has already warned about the dangers of infinity, and he also gave a very simple example showing that a very simple iteration leads to a non-regular language. Case closed, but I would like to add an observation: iterating simple local substitutions leads to Turing power, not just non-regularity.
The single steps of a Turing machine can be encoded and performed with a regular operation. The instruction "on reading $a$ in state $q$, write $b$, move left and change to state $p$" is coded as the rule $aq \mapsto pb$ and is extended to longer strings containing a single state symbol as $\alpha aq \beta \mapsto \alpha pb \beta$. This operation can be extended to sets of instructions, and will map a regular language into a regular language. However, iterating it will actually generate Turing machine computations!
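As a toy illustration in plain Python (a single hypothetical instruction; a real simulation would carry the whole instruction table and a state alphabet disjoint from the tape alphabet):

    def apply_rule(word, lhs="aq", rhs="pb"):
        # one rewriting step: alpha aq beta -> alpha pb beta;
        # the tape encoding contains the state symbol once, so at most one match
        return word.replace(lhs, rhs, 1)

    print(apply_rule("xxaqyy"))   # -> xxpbyy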
| CommonCrawl
It has recently been shown that the spin Hall effect (SHE) in $\beta$-Ta generates a transverse spin current that is sufficient for efficiently reversing the moment of adjacent thin-film nanomagnets through the spin torque (ST) mechanism. Here we report the existence of an even larger SHE in $\beta$-W thin films. Using spin-torque-induced ferromagnetic resonance (ST-FMR) with a $\beta$-W/CoFeB bilayer microstrip, we have determined the magnitude of the spin Hall angle $\theta$ to be $0.30 \pm 0.02$, which is twice as large as the previously reported value for $\beta$-Ta ($\sim 0.15$). From switching data obtained with 3-terminal devices consisting of a $\beta$-W channel and an adjacent CoFeB/MgO/CoFeB magnetic tunnel junction, we have independently determined $|\theta| = 0.33 \pm 0.06$. We will also report on the variation of the spin Hall switching efficiency with W layers of different resistivities and hence of variable ($\alpha$ and $\beta$) phase composition. Finally, we have studied the SHE exhibited by several other 4d and 5d transition metals using the techniques mentioned above and we will report on those results. | CommonCrawl
Purpose: In this tutorial we will learn how to perform a time-dependent density-functional theory (TDDFT) calculation using different xc kernels. Two examples will be given: one for the BSE-derived xc kernel and one for the LRC kernel. As a test case, the optical spectrum of LiF will be studied.
Before starting, be sure that relevant environment variables are already defined as specified in How to set environment variables for tutorials scripts.
Important note: All input parameters that will appear will be given in atomic units!
As a preliminary step for this excited-state calculation, a ground-state calculation will be performed. In this tutorial we consider as an example LiF. Create a directory named LiF_TDDFT-BSE-kernel and move into it.
Inside the GS sub-directory we create the input file for LiF. In the structure element we include the lattice parameter and basis vectors of LiF, which has a rock-salt cubic lattice, as well as the positions of the Li and F atoms. In the groundstate element, we include a 10$\times$10$\times$10 k-point mesh (ngridk) and a value of 14.0 for gmaxvr. This value, which is larger than the default, is needed in view of the excited-state calculation (for details on this we refer to Excited States from BSE).
N. B.: Do not forget to replace the string "$EXCITINGROOT" in input.xml with the actual value of the environment variable $EXCITINGROOT.
Start now the ground-state SCF calculation and check if it finishes gracefully.
In case of a successful run the files STATE.OUT and EFERMI.OUT should be present in the directory. These two files are needed as a starting point for the excited-state calculation.
The workflow of the algorithm is a combination of the TDDFT linear-response calculation (see Excited states from TDDFT) and the calculation of the direct term of the BSE Hamiltonian, which is then used to set up an MBPT-derived kernel in first order. This kernel then enters the Dyson equation for the response function in the last stage of the TDDFT formalism.
There is a large literature dealing with the inclusion of many-body effects into TDDFT kernels, in order to correctly reproduce excitonic features (for further details we refer to the seminal review ORR-2002). In the following we will present two examples related to two different approaches. In the first one we will present a TDDFT calculation of the optical spectrum of LiF performed with an xc kernel derived from the solution of the Bethe-Salpeter equation (BSE). In the second part of the tutorial we will deal with the so-called long-range correction (LRC) kernel.
In the example treating the BSE-derived xc kernel, the scheme proposed in MDR-2003 was adopted. In this approach, a nonlocal exchange-correlation functional is derived by requiring TDDFT to reproduce the many-body diagrammatic expansion of the Bethe-Salpeter polarization function. In this way, it is shown that the TDDFT kernel is able to capture the excitonic features in solids, otherwise missing in simpler approximations for the kernel. For further details about the implementation in the code, see SAD-2009.
The values of $\alpha$ and $\beta$ are also material dependent and have to be tuned in order to correctly reproduce the experimental data for the excitons. For further details about the model we refer to the original paper BOT-2005.
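For reference, the long-range kernels referred to in this tutorial have the form $$ f_{\rm xc}^{\rm LRC}(\mathbf q) = -\frac{\alpha}{q^2}, \qquad f_{\rm xc}^{\rm LRC}(\mathbf q,\omega) = -\frac{\alpha + \beta\,\omega^2}{q^2}, $$ where the static kernel corresponds to REI-2002 and the dynamical one to BOT-2005; sign and normalization conventions should be checked against those papers (and against the alphalrc, alphalrcdyn and betalrcdyn attributes mentioned later in this tutorial).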
This block is very similar to the one presented in Excited States from BSE, to which we refer for an exhaustive description of the input attributes. In the following, we discuss only the relevant parameters for the TDDFT calculation with a BSE-derived kernel.
The bse element must be specified to generate the kernel.
Once the run is completed (it should take a few minutes), we can analyze the results. As in any TDDFT calculation, a number of output files are created (see also Excited States from TDDFT for additional details). Here we are interested in the files named EPSILON_NAR_FXCMB1_OCYY_QMT001.OUT (YY = 11, 22, and 33) and specifically in the imaginary part of the macroscopic dielectric function, corresponding to the optical absorption spectrum.
The main features of the optical spectrum are clearly visible in the graph. The intense excitonic peak at about 13 eV dominates the low energy part of the spectrum and another strong peak is found above 20 eV. This result is in agreement with the spectrum obtained by solving the BSE equation (see Excited States from BSE). For further comparison with the literature, we refer to MDR-2003.
In the plot above, the strong excitonic peak at about 12.5 eV characterizing the spectrum of LiF is correctly reproduced by the TDDFT calculation with the static LRC kernel. Compared to the result obtained with the BSE-derived kernel the main differences appear in the higher energy region of the spectrum, above 20 eV. However, the purpose to correctly reproduce the intense bound exciton of LiF is fulfilled.
The first intense excitonic peak is again well reproduced by the LRC kernel.
If you have already done the tutorial Excited states from TDDFT, calculate the optical absorption spectrum for LiF using RPA and ALDA kernels. What happens to the excitonic peak?
Decrease the parameter alphalrc in the calculation with the static LRC kernel and check what happens to the excitonic peak. Compare your results with the onset of the spectrum obtained from the RPA calculation.
Tune the parameters alphalrcdyn and betalrcdyn, following the rule of thumb suggested above. What happens to the spectrum?
ORR-2002: G. Onida, L. Reining, and A. Rubio, Rev. Mod. Phys. 74, 601 (2002).
MDR-2003: A. Marini, R. Del Sole, and A. Rubio, Phys. Rev. Lett. 91, 256402 (2003).
SAD-2009: S. Sagmeister and C. Ambrosch-Draxl, Phys. Chem. Chem. Phys. 11, 4451 (2009).
REI-2002: L. Reining, V. Olevano, A. Rubio, and G. Onida, Phys. Rev. Lett. 88, 066404 (2002).
BOT-2005: S. Botti, A. Fourreau, F. Nguyen, Y.-O. Renault, F. Sottile, and L. Reining, Phys. Rev. B 72, 125203 (2005). | CommonCrawl |
Abstract: We analyze eight epochs of Hubble Space Telescope H$\alpha$+[N II] imaging of Eta Carinae's outer ejecta. Proper motions of nearly 800 knots reveal that the detected ejecta are divided into three apparent age groups, dating to around 1250 A.D., to around 1550 A.D., and to during or shortly before the Great Eruption of the 1840s. Ejecta from these groups reside in different locations and provide a firm constraint that Eta Car experienced multiple major eruptions prior to the 19th century. The 1250 and 1550 events did not share the same axisymmetry as the Homunculus; the 1250 event was particularly asymmetric, even one-sided. In addition, the ejecta in the S ridge, which have been associated with the Great Eruption, appear to predate the ejection of the Homunculus by several decades. We detect essentially ballistic expansion across multiple epochs. We find no evidence for large-scale deceleration of the observed knots that could power the soft X-ray shell by plowing into surrounding material, suggesting that the observed X-rays arise instead from fast, rarefied ejecta from the 1840s overtaking the older dense knots. Early deceleration and subsequent coasting cannot explain the origin of the older outer ejecta---significant episodic mass loss prior to the 19th century is required. The timescale and geometry of the past eruptions provide important constraints for any theoretical physical mechanisms driving Eta Car's behavior. Non-repeating mechanisms such as the merger of a close binary in a triple system would require additional complexities to explain the observations. | CommonCrawl |
of two constants ($\alpha_2$ and $\alpha_1$) of the model.
in radicals. The explicit solution for $m = l$ is presented in the Appendix. Imposing certain restrictions on $x$, we prove the stability of the solutions in a class of cosmological solutions with diagonal metrics.
We also consider a subclass of solutions with small enough variation of the effective gravitational constant $G$ and show the stability of all solutions from this subclass. | CommonCrawl |
Thank you and well done to everyone who submitted solutions to this problem. There were lots and lots of correct solutions, so we couldn't mention you all. In fact, choosing between them was difficult!
$10A+B=(A+B)\times4$, because $A$ is the tens digit, so it contributes ten times its value to the 2-digit number.
$10A+B=4A+4B$, because you multiply out of the brackets and are left with this.
Rearranging gives $6A=3B$, so $B=2A$. In other words, the units digit has to be double the tens digit, so the numbers are $12$, $24$, $36$ and $48$.
A has to be below $3$, because $110 \times 3$ gives $330$, which is greater than $300$.
However, it can't be $1$, because that would leave $11B + 2C$ equalling $190$, and with the maximum value of 9, this clearly cannot be true: $(11 \times 9) + (2 \times 9) = 117$.
The maximum value of $2C$ is $2 \times 9 = 18$, so the minimum value of $11B$ is $80 - 18 = 62$.
Of course, $11B$ must be a multiple of $11$, and it also has to be even, because $2C$ and $80$ are both even. The only even multiple of $11$ between $62$ and $80$ is $66$, which must equal 11B, so B = 6.
To check this, $267 + 26 + 7$ does equal $300$.
Choose a two-digit number with two different digits ($AB$) and form its reversal (i.e. $BA$).
Subtracting $(A+B)$ from each gives $9A$ and $9B$ respectively.
Adding these gives $9A+9B=9(A+B)$, a multiple of 9.
Across all six two-digit numbers formed from the three digits, each digit appears twice in the tens place and twice in the units place, so the total is $22A+22B+22C =22(A+B+C)$. Dividing by $A+B+C$ gives $22$.
These are $9a-9b$, $9b-9c$, and $9a-9c$.
When you add these results you get $18a-18c=18(a-c)$ which is clearly divisible by $18$.
Since we can only use the digits $1$ to $9$, the only possible solution for $A$ is $A=1$, because any larger $A$ gives a $3$ digit number.
This leaves $C=8$ and $B=9$.
Firstly I transformed it into an equation: $10a+b+10c+d=10d+c+10b+a$.
This simplifies to $9a-9b+9c-9d=0$, and dividing by $9$ gives $a+c=b+d$.
This will always work because the first equation is the same as the last one, just in a simplified form, and you could undo all of the steps.
It is like the quadratic equation: $ax^2+bx+c=0$ is the same as the quadratic formula, just rearranged to make $x$ the subject. Well, here it's the same, but putting each letter only once. | CommonCrawl
The goal of the project is to better understand and well formalize the effects of complex environments on the dynamics of the interconnections, as well as to develop methods and tools for the analysis and control of such systems.
The applications deal with medicine (chronic and acute myeloid leukaemias), microbial ecology (anaerobic digesters) and nuclear energy (cryogenic installations, teleoperation schemes).
The environment is seen as a dynamical object in order to model phenomena such as a temporary loss of connection, a nonhomogeneous environment or the presence of the human factor in the control loop but also the problems involved with technological constraints.
Questions of stability characterization or robust stabilization of possibly nonlinear infinite-dimensional systems are considered by various methods: $H_\infty$-control, nonlinear control via Lyapunov-Krasovskii techniques, observers, adaptive control, predictive control, set invariance.
Our main question is the determination of finite-dimensional controllers of low order for infinite-dimensional systems.
The development of algorithms and numerical methods for the implementation of our results (writing of Scilab/Matlab toolboxes) supplements the mathematical analysis of the problems raised in each of these three lines of research.
Leeds University, Max Planck Institute, CSCD Newcastle, KU Leuven, NTNU Trondheim, University of Twente, University of Craiova, Bilkent University, University of California, Southern Illinois University, Northeastern University, University of Maryland, Louisiana State University, Illinois Institute of Technology, Korea University. | CommonCrawl
Maximum protection of your sensitive data thanks to the security algorithm Rijndael 256-Bit!
Instead of passwords like "toothbrush" or "Rover", which can both be cracked in a few minutes, you now use passwords like "g\/:1bmV5″£$p'}=8>,,/2¬%`CN?\A:y:Cwe-k)mUpHiJu:[email protected]<i" (with a 1-GHz-Pentium-PC, it takes approx. 307 years to guess this password!).
Password lists on the internet: Place your encrypted password lists on the Internet and enjoy access to all of them, no matter where you are!
Protection from keylogging (intercepting of keystrokes) – All password fields are internally protected from keylogging.
I've got issues with all five points above.
That's a pretty bold statement, to say that your passwords are uncrackable... I suspect they really mean that they haven't been able to crack them, or somebody hasn't been able to crack them YET.
Another word for Rijndael… Yep, AES. Really nothing that sophisticated. Under closer inspection they're really no better than the free alternatives.
While "g\/:1bmV5T$x_sb}[email protected]?\A:y:Cwe-k)mUpHiJu:[email protected]<i" may be long, secure, mixed cases, characters, alphanumeric, and symbols, it's certainly not memorable. So what happens if you generate this password for XYZ internet banking service, and then you go on holiday and forget to pay a bill, or need to move some money about.. You don't have your password safe with you. Bugger.
Does anyone else think this is potentially asking for trouble? Assuming XYZ company is hosting them, "securely", how can you prove they don't have a backdoor to decrypt the files? Do you trust them? Considering you've paid €30 for this package, it's not really as binding as a really expensive legal SLA.
The other thing that's at the front of my mind now is what password you use to lock the password safe itself. Do you use a long, complex, difficult-to-break one, which you'll probably never remember and will need to write down (therefore making it totally pointless anyway), or a simple short password like your first pet's name with some thoughtful numbers after it?
Sidenote to point 3: 307 years on a 1 GHz Pentium... What about a dual quad-core Pentium Xeon? Or a distributed attempt across 256 nodes of dual quad-core Xeons? Still, it's reaching a bit far, but it doesn't mean that this password is unbreakable. Not by a long way.
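A back-of-the-envelope check in Python (the 95-character printable-ASCII alphabet and the guess rates are illustrative assumptions, not measurements):

    SECONDS_PER_YEAR = 3600 * 24 * 365

    def years_to_search(length, charset=95, guesses_per_second=1e9):
        # exhaustive search of the full keyspace at a fixed guess rate
        return charset ** length / guesses_per_second / SECONDS_PER_YEAR

    print(years_to_search(52))                                    # one ~1 GHz-class guesser
    print(years_to_search(52, guesses_per_second=256 * 8 * 1e9))  # 256 nodes x 8 cores

Even the distributed figure stays astronomically large: throwing hardware at it only shaves a constant factor off an exponential, so the realistic attacks are against the safe's master password and the vendor's storage, not against the generated string itself.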
I really don't like the sound of this software; actually, I'm not keen on this "credentials management" type of thing at all. There are too many unanswered questions. And that's before we get onto the rather open question of the use of biometrics for passwords. There seems to be a growing trend at the moment where biometric data (fingerprints, webcam images, iris scans) provides the password data, as opposed to the identity data that is then confirmed with a password.
Private keys and passwords are easy to change when compromised, but how do you change your fingerprint, facial shape, or iris detail when your credentials are compromised? | CommonCrawl |
Inspired by @DEEM's Grandpa Mystery puzzles (e.g. here and here). Be sure to check them out!
Now we all know that Grandpa is a genius... but sometimes he says the most absurd things!
Did you know that $6$ is prime?
$6$ is not a prime number, as it is equal to $3\times 2$. Oh wait, are you referring to the gesture your friend showed you nearly last week?
No. I never said number.
What is Grandpa trying to say?
I walked up to Grandpa. Why was it taking so long for him to finish his brownie? I soon found out he fell asleep at the table.
Grandpa, wake up! I think I might know how $6$ is prime. If I am right, then you are very clever!
What? You might think it's a letter? No no no no no.
And he went back to snoring. Poor Grandpa. I think it is best I leave him alone, now. Hint 2.
I had no idea what he meant by that, but when I looked at the brownies, they looked identical... like twins! So I just left them alone... Hint 3. Last hint before I declare a 50-rep bounty.
Grandpa, what do you mean by prime?
Well, how long did I rest for exactly?
Just under a full hour.
Well, your watch could be wrong. I slept for just over 5 minutes. That's what I mean by prime.
But I wasn't wearing it when Grandpa first asked me this ridiculous question... Super last hint.
Well, I know the secret now! I figured it out!
...but he did say, "No."
Told ya my Grandpa was a genius, and now the question is: What am I trying to say?
6 doesn't represent the number 6. That's why Grandpa says "No. I never said number (6)".
We know it's not a letter and not a number, so we are left with symbols.
Unicode U+212X is the third row of the Letterlike Symbols block; if X is 6, we get Omega (more precisely the ohm sign, U+2126, which renders as Ω). In maths, Omega ($\Omega(n)$) denotes the number of prime factors a number has, counted with multiplicity. So Grandpa is not saying 6, but Unicode 6 is prime.
Six has 3 letters. 3 is prime.
6 looks like the letter G; G is the 7th letter of the alphabet, and 7 is prime.
TV Primetime. But it is between 8-11 PM.
6 is between the twin primes ($5$ and $7$).
Per Wikipedia, "Prime, or the First Hour, is a fixed time of prayer of the traditional Divine Office (Canonical Hours), said at the first hour of daylight (approximately 6:00 a.m.)". If Grandpa is Christian, he could mean that 6 am is time to say prime, as opposed to lauds or terce.
I first began thinking about a number, and trying to find an interesting, whacky property about that number. Fortunately enough, it was the very first number I thought of: the number $6$. Now the accepted answer has been accepted, but that was not actually the intended answer. I accepted it because I really liked the answer, and it could have actually been the answer, too. I will not reveal the answer, here, but I hope the steps of the creation provide a big hint.
Because this was going to be the lateral-thinking type, I decided to try and find something useful in SIX and not $6$. And that's when I came up with the answer and made a Grandpa puzzle out of it. I then just started playing with words, realising that words like "No" can stand for "number" without you actually saying "number".
I worded it this way to make people think that $6$ is prime with definitions excluding prime number... but not necessarily. It's only a number if you refer to $6$ as a number (as opposed to a word). Even though the actual prime that $6$ equals is in fact a number, it is reached from knowing that $6$ is not referred to as a number, because numbers behave quite differently from words (even if they might mean the same thing; i.e., Eight $= 8$ but Eight $\neq 2^3=8$).
After that, the following hints were just a bit of wordplay. Grandpa says "no" a lot in the Hint 1 from what I just mentioned. (And the last sentences in Hint 1 also have a cool property to reveal the actual prime number itself.) Time is also used for correspondence, and "twins" is also a clue. (Note that $20$ in "$20$ minutes later" sounds like "twin".) And then it gets a bit weird how the super last hint mainly refers to the actual puzzle and not the previous hints.
Grandpa is in his sixties, so he is saying that $6$ is prime [the state or time of greatest vigour or success in a person's life].
He asked "Do you know?". So the simple answer is "Yes" or "No". As 6 is not prime, "no"
| CommonCrawl
In Button Up, the focus was on working in a systematic way. This follow-up problem will allow children to consolidate their understanding of working systematically, but the main objective is to encourage them to identify and explain patterns, which will lead to generalisations.
Here you can see a response from a school in Canada together with some useful notes that might be helpful to read before watching the video.
It would be a good idea to introduce the problem in a similar way to that suggested in the teachers' notes of Button Up.
The focus for some pairs as they work on this activity will be on developing a system so that they know they have found all the possible ways. However, you might expect that learners working at a higher level won't need to write out all $120$ ways for five buttons (and perhaps not all $24$ ways for four buttons). Encourage these children to explain how they know that the total is right, even though they haven't listed all the possibilities.
For example, for four buttons, they might find that starting with the top button, there are six different ways, so this means there will be six different ways starting with the second button, six ways starting with the third and six with the fourth, making $24$ ways in total. Alternatively, they could argue that if the first way for three buttons is ABC, you could add in a fourth button in four different ways i.e. DABC, ADBC, ABDC and ABCD. You can add a fourth button into all six ways, giving $6 \times 4=24$ ways for four buttons.
You can then encourage children to be able to predict the number of ways of buttoning up a jacket with any number of buttons. Can they convince themselves why their method works? Can they convince another pair why it works?
This activity might make a good 'simmering' task so that it is worked on over a period of a few days or weeks before you bring all the children's ideas together.
How do you know you have found all the ways?
How could you use the number of ways to button up three buttons to help you work out the number of ways for four buttons?
How will you record what you're doing?
Rather than being interested in the order of buttoning, invite the children to investigate the total distance that their hands have to travel to do up all the buttons. For example, for three buttons, what is the greatest distance that their hands can travel? What is the least distance? How about for four buttons? Five buttons?
Can learners make any generalisations about the distances travelled? For example, how would they achieve the shortest distance for any number of buttons? How would they achieve the longest distance for any number?
Some learners might benefit from writing each order on a separate strip of paper. The strips can then be ordered so that any missing possibilities might be identified more easily.
| CommonCrawl
Figure 1: Shape collections typically come with inconsistent orientations (a). PCA-based alignment (b), or aligning to an arbitrarily chosen base model (c), is prone to error. The problem with pairwise alignments is attributed to several minima in the alignment distances ($E_{\mathrm{pair}}$), arising due to near-symmetries of shapes. We introduce an autocorrelation-guided algorithm to efficiently sample the minima (red boxes) and jointly co-align the input models (d).
Co-aligning a collection of shapes to a consistent pose is a common problem in shape analysis with applications in shape matching, retrieval, and visualization. We observe that resolving among some orientations is easier than others, for example, a common mistake for bicycles is to align front-to-back, while even the simplest algorithm would not erroneously pick orthogonal alignment. The key idea of our work is to analyze rotational autocorrelations of shapes to facilitate shape co-alignment. In particular, we use such an autocorrelation measure of individual shapes to decide which shape pairs might have well-matching orientations; and, if so, which configurations are likely to produce better alignments. This significantly prunes the number of alignments to be examined, and leads to an efficient, scalable algorithm that performs comparably to state-of-the-art techniques on benchmark datasets, but requires significantly fewer computations, resulting in 2-16$\times$ speed improvement in our tests.
Figure 5: Starting from a set of shapes, we normalize and compute their autocorrelation descriptors to cluster the shapes. We then align the shapes first within and then across the clusters using a graph-based discrete formulation wherein we intelligently sample candidate alignments for each shape guided by their autocorrelation descriptors.
Figure 12: Randomly selected shapes from all our datasets, indicating their pose before (odd rows - in gray) and after (even rows - in green) alignment.
We thank the reviewers for their comments and suggestions for improving the paper. This work was supported in part by ERC Starting Grant SmartGeometry (StG-2013-335373) and gifts from Adobe. Melinos Averkiou is grateful to the Rabin Ezra Scholarship Trust for the award of a bursary. | CommonCrawl |
My question has survived; therefore I try another one. Consider some elementary operations on a closed compact 3-manifold $M \subset R^4$. These elementary operations are e.g. $0$-surgery or $1$-surgery such that the corresponding disk of dimension $1$ or $2$ is nicely embedded in $R^4$ (not intersecting the manifold except at the boundary). This way, for a given manifold $M$ already embedded in $R^4$, we can construct a new one $M'$ which will also be embedded in $R^4$ by its construction. Let's assume that we are working in the smooth category.
I am still digesting the comment of Ryan Budney on my other question. This comment was: "Compute the torsion subgroup of $H_1$, and check that for each prime power $p^k$ the subgroup $\mathbb{Z}_{p^k}$ occurs an even number of times in the prime-power direct-sum factorization of the torsion subgroup."
I would appreciate it if someone could comment on it in terms of the fundamental group. What can we say about finite-order elements in the fundamental group of a 3-manifold embedded in $R^4$? I have understood that, among spherical manifolds, only a few embed in $R^4$.
Going back to my original question: can such a "step-by-step" method produce all closed 3-submanifolds of $R^4$? If the "surgery" step is not enough, what other step could be considered in order to achieve this result?
The same question can be asked for compact manifolds with boundary. In that case we should also admit non-orientable ones.
My motivation for this question is to catalog manifolds by their topological complexity, called in popular language the "number of holes". The "holes" can be one- or two-dimensional. A related problem is how to imagine a 3-manifold. Embedding in $R^4$ seems more accessible than in $R^5$. Defining a manifold by surgery on a certain knot or link is good on the one hand, because it is fairly easy to draw a knot. On the other hand, it is not easy (for me) to see what impact the surgery has on the fundamental group of the manifold.
The other aspect is analogy. We already have surfaces embedded in $R^3$ (orientable) and in $R^4$ (non-orientable). It should somehow be possible to find an analogy between (well-understood) surfaces and (less known) 3-manifolds.
| CommonCrawl
A positive integer is called a "prime-factor prime" when the number of its prime factors is prime. For example, $12$ is a prime-factor prime because the number of prime factors of $12 = 2 \times 2 \times 3$ is $3$, which is prime. On the other hand, $210$ is not a prime-factor prime because the number of prime factors of $210 = 2 \times 3 \times 5 \times 7$ is $4$, which is a composite number.
In this problem, you are given an integer interval $[l, r]$. Your task is to write a program which counts the number of prime-factor prime numbers in the interval, i.e. the number of prime-factor prime numbers between $l$ and $r$, inclusive.
A line contains two integers $l$ and $r$ ($1 \leq l \leq r \leq 10^9$), which represents an integer interval $[l, r]$. You can assume that $0 \leq r-l < 1,000,000$.
Print the number of prime-factor prime numbers in $[l,r]$.
In the first example, there are 4 prime-factor primes in $[l,r]$: $4, 6, 8,$ and $9$.
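A sketch of one standard approach in Python: sieve the primes up to $\sqrt{r}$, then factor the whole window by trial division with those primes (this fits the stated bounds; the exact sample input was not preserved, so the test call assumes the window $[1, 9]$, whose prime-factor primes are exactly $4, 6, 8, 9$):

    import math

    def count_prime_factor_primes(l, r):
        limit = math.isqrt(r)
        is_p = [True] * (limit + 1)
        primes = []
        for p in range(2, limit + 1):
            if is_p[p]:
                primes.append(p)
                for q in range(p * p, limit + 1, p):
                    is_p[q] = False
        rem = list(range(l, r + 1))   # unfactored part of each number in [l, r]
        cnt = [0] * (r - l + 1)       # prime factors counted with multiplicity
        for p in primes:
            for m in range(((l + p - 1) // p) * p, r + 1, p):
                i = m - l
                while rem[i] % p == 0:
                    rem[i] //= p
                    cnt[i] += 1
        prime_counts = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}   # Omega(n) <= 29 for n <= 10^9
        return sum(1 for i in range(r - l + 1) if cnt[i] + (rem[i] > 1) in prime_counts)

    print(count_prime_factor_primes(1, 9))   # 4

Any factor left over after the sieve primes are divided out must itself be a single prime larger than $\sqrt{r}$, which is why it contributes exactly one to the count. | CommonCrawl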
Why does the result of cutting a Möbius strip down the middle lengthwise have two full twists in it? I can account for one full twist--the identification of the top left corner with the bottom right is a half twist; similarly, the top right corner and bottom left identification contributes another half twist. But where does the second full twist come from?
Explanations with examples or analogies drawn from real life much appreciated.
edit: I'm pasting J.M.'s Mathematica code here (see his answer), modified for version 5.2.
One twist comes from the two half-twists of the Möbius strip. Another comes from the fact that just after you've made the cut, the resulting half-width strip goes two times around the cut, so it will turn an extra time when you unfold it to a large circle.
Try making an ordinary strip that goes two times around a cylinder and then meets itself, without a Möbius twist. If you remove the cylinder and try to unfold your strip to a circle, it will have one full twist. This twist arises from the fact that the strip's centerline must wind around itself when it goes around the cylinder twice. (In the cut-Möbius case, the direction of this winding depends on the direction the original Möbius strip was twisted, which means that the single twist from the unfolding adds to the two half-twists rather than cancel them out).
Another everyday effect that shows this (in reverse) is to try to wrap a rubber band (an ordinary cylindrical-section rubber band with a flat cross section) twice round a package. It will need to twist in order to do this, even if it can lie flat wrapped once around the package.
I seem to be terribly late (I'm quite slow as a typist), but hopefully this answer is complementary rather than redundant.
That the Möbius strip has a single "twist" is apparent in the color clash that appears when one attempts to color one "face" red and the opposite face blue. This color clash does not occur in the cut strip, as one requires two twists to finish a circuit through the surface.
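For concreteness, here is a standard parametrization of this kind (an assumption on my part; the answer's original Mathematica formula was not preserved in this copy): $$\mathbf x(u,v)=\left(\Big(1+\tfrac v2\cos\tfrac u2\Big)\cos u,\ \Big(1+\tfrac v2\cos\tfrac u2\Big)\sin u,\ \tfrac v2\sin\tfrac u2\right).$$ The half-angle $u/2$ is what produces the half-twist: the cross-section rotates by $\pi$ as $u$ runs from $0$ to $2\pi$, which is exactly why the normal returns to itself only after $u$ advances by $4\pi$.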
One can verify that the normal vector expressions at $u=0$ and $u=2\pi$ are not equal, but the normal vector expressions at $u=0$ and $u=4\pi$ do agree. This means that if you trace a pencil through the surface of the usual Möbius strip (that is, fix $v$ and vary $u$), after exactly one turn through the surface, your pencil should end up at the spot exactly under your original starting point. For the "cut" Möbius strip (same parametric equation, but $0 \leq v \leq \frac23$ and $0 \leq u \leq 4\pi$), two turns ($2\times2\pi$) will return your pencil to its starting point.
Observe that the boundary of a Möbius strip is a circle. When you cut, you create more boundary; this is in fact a second circle.
During this process, the Möbius strip loses its non-orientability. Make two Möbius strips with paper and some tape. Cut one and leave the other uncut. Now take each and draw a line down the middle. The line will come back and meet itself on the Möbius strip; on the cut Möbius strip, it won't.
| CommonCrawl
Schur complement based preconditioners are well established and studied for classical saddle point problems. The method is applied to two sets of problems. Both methods are robust with respect to the mesh size, and the latter is also robust with respect to the spline degree. By combining these smoothers, a hybrid smoother is created; this hybrid smoother is numerically superior to the two other smoothers. | CommonCrawl
All stores are offering amazing deals for Black Friday. Dan wants to buy a huge TV, a Home Theater system, and a sofa at a store. The regular price for the TV is $\$1100$; the regular price for the Home Theater system is $\$1600$; and the regular price for the sofa is $\$450$. This store is offering a deal where they will charge no tax and will also give back a gift card with $25\%$ of the value of the purchase that can be used to buy more items. Dan will buy each item separately, so that after buying each item he will get some money in the gift card that he can use for following items to avoid paying full price. Depending on the order in which he buys each item he may spend a different amount of cash, so he needs to decide what is the best way to purchase the items to spend the least amount of money. How many dollars would he pay in cash to the store using the best possible method?
27% of the students who tried the problem got it right on their first attempt.
This problem is describing a situation to which you or your parents can relate. I'm sure every year you hear about the amazing deals all stores offer for Black Friday, which are so good that some people even camp outside stores to make sure they are the first to come in and get the best deals! Have you ever seen a deal like this? One should pay attention to the deals to get the best price!
In deals like the one described in the problem, it is always better to pay for the most expensive item first, then continue with the second most expensive, and so on. This makes the gift-card balance as large as possible at each step, so the least out-of-pocket money is spent, as the sketch below confirms.
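One can check this greedy intuition by brute force; a Python sketch (assuming, as the problem states, that the card back is $25\%$ of each item's full price and that the balance carries over to later purchases):

    from itertools import permutations

    prices = {"home theater": 1600, "tv": 1100, "sofa": 450}

    def cash_paid(order):
        gift = cash = 0
        for item in order:
            p = prices[item]
            used = min(gift, p)              # spend the gift-card balance first
            cash += p - used
            gift = gift - used + 0.25 * p    # card back on the item's full value
        return cash

    best = min(permutations(prices), key=cash_paid)
    print(best, cash_paid(best))

Running all six orders shows that the expensive-first order is the cheapest, exactly as argued above.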
A popular answer was $\$2750$. This is the amount you would pay if you paid first for the most expensive item (the Home Theater system) and then used the $1600\times 0.25 = 400$ dollar gift card to pay for the other two items together. However, we can still buy the other two items separately and pay even less money!
Another popular answer was $\$2362.50$. This corresponds to the price one would pay if all three articles had a $25\%$ discount; however, this is not the case. Remember to always read the whole problem thoroughly to make sure you understand its context and what it is asking.
Make sure to share your thoughts and questions below! | CommonCrawl |
44 Why is Lebesgue integration taught using positive and negative parts of functions?
13 Formerly good at math, but after 12 years I've lost most of my skills. Now I need them once again. Any advice to grow them back?
11 Why do we care about $L^p$ spaces besides $p = 1$, $p = 2$, and $p = \infty$?
9 Why should one still teach Riemann integration?
8 How do I help my student understand concepts such as "$x$ divided by $x$"? | CommonCrawl |
The RosetteSep™ Human Monocyte Enrichment Cocktail is designed to isolate monocytes from whole blood by negative selection. Unwanted cells are targeted for removal with Tetrameric Antibody Complexes (TAC) recognizing non-monocyte cells and red blood cells (RBCs). When centrifuged over a buoyant density medium such as Lymphoprep™ (Catalog #07801), the unwanted cells pellet along with the RBCs. The purified monocytes are present as a highly enriched population at the interface between the plasma and the buoyant density medium.
Starting with fresh peripheral blood, the CD14+ cell content of the enriched fraction is typically 72% - 85%. *Note: Red blood cells were removed by lysis prior to flow cytometry.
IL-27 amplifies cytokine responses to Gram-negative bacterial products and Salmonella typhimurium infection.
Cytokine responses from monocytes and macrophages exposed to bacteria are of particular importance in innate immunity. Focusing on the impact of the immunoregulatory cytokine interleukin (IL)-27 on control of innate immune system responses, we examined human immune responses to bacterial products and bacterial infection by E. coli and S. typhimurium. Since the effect of IL-27 treatment in human myeloid cells infected with bacteria is understudied, we treated human monocytes and macrophages with IL-27 and either LPS, flagellin, or bacteria, to investigate the effect on inflammatory signaling and cytokine responses. We determined that simultaneous stimulation with IL-27 and LPS derived from E. coli or S. typhimurium resulted in enhanced IL-12p40, TNF-$\alpha$, and IL-6 expression compared to that by LPS alone. To elucidate if IL-27 manipulated the cellular response to infection with bacteria, we infected IL-27 treated human macrophages with S. typhimurium. While IL-27 did not affect susceptibility to S. typhimurium infection or S. typhimurium-induced cell death, IL-27 significantly enhanced proinflammatory cytokine production in infected cells. Taken together, we highlight a role for IL-27 in modulating innate immune responses to bacterial infection.
Recognition of nucleic acids by endosomal Toll-like receptors (TLR) is essential to combat pathogens, but requires strict control to limit inflammatory responses. The mechanisms governing this tight regulation are unclear. We found that single-stranded oligonucleotides (ssON) inhibit endocytic pathways used by cargo destined for TLR3/4/7 signaling endosomes. Both ssDNA and ssRNA conferred the endocytic inhibition, it was concentration dependent, and required a certain ssON length. The ssON-mediated inhibition modulated signaling downstream of TLRs that localized within the affected endosomal pathway. We further show that injection of ssON dampens dsRNA-mediated inflammatory responses in the skin of non-human primates. These studies reveal a regulatory role for extracellular ssON in the endocytic uptake of TLR ligands and provide a mechanistic explanation of their immunomodulation. The identified ssON-mediated interference of endocytosis (SOMIE) is a regulatory process that temporarily dampens TLR3/4/7 signaling, thereby averting excessive immune responses.
MiR-181b modulates EGFR-dependent VCAM-1 expression and monocyte adhesion in glioblastoma.
Tumor-associated macrophages (TAMs) originate as circulating monocytes, and are recruited to gliomas, where they facilitate tumor growth and migration. Understanding the interaction between TAM and cancer cells may identify therapeutic targets for glioblastoma multiforme (GBM). Vascular cell adhesion molecule-1 (VCAM-1) is a cytokine-induced adhesion molecule expressed on the surface of cancer cells, which is involved in interactions with immune cells. Analysis of the glioma patient database and tissue immunohistochemistry showed that VCAM-1 expression correlated with the clinico-pathological grade of gliomas. Here, we found that VCAM-1 expression correlated positively with monocyte adhesion to GBM, and knockdown of VCAM-1 abolished the enhancement of monocyte adhesion. Importantly, upregulation of VCAM-1 is dependent on epidermal-growth-factor-receptor (EGFR) expression, and inhibition of EGFR effectively reduced VCAM-1 expression and monocyte adhesion activity. Moreover, GBM possessing higher EGFR levels (U251 cells) had higher VCAM-1 levels compared to GBMs with lower levels of EGFR (GL261 cells). Using two- and three-dimensional cultures, we found that monocyte adhesion to GBM occurs via integrin α4β1, which promotes tumor growth and invasion activity. Increased proliferation and tumor necrosis factor-α and IFN-γ levels were also observed in the adherent monocytes. Using a genetic modification approach, we demonstrated that VCAM-1 expression and monocyte adhesion were regulated by the miR-181 family, and lower levels of miR-181b correlated with high-grade glioma patients. Our results also demonstrated that miR-181b/protein phosphatase 2A-modulated SP-1 de-phosphorylation, which mediated the EGFR-dependent VCAM-1 expression and monocyte adhesion to GBM. We also found that the EGFR-dependent VCAM-1 expression is mediated by the p38/STAT3 signaling pathway. Our study suggested that VCAM-1 is a critical modulator of EGFR-dependent interaction of monocytes with GBM, which raises the possibility of developing effective and improved therapies for GBM.Oncogene advance online publication, 1 May 2017; doi:10.1038/onc.2017.129.
Response to Treatment with TNFα Inhibitors in Rheumatoid Arthritis Is Associated with High Levels of GM-CSF and GM-CSF(+) T Lymphocytes.
Biologic TNFα inhibitors are a mainstay treatment option for patients with rheumatoid arthritis (RA) refractory to other treatment options. However, many patients either do not respond or relapse after initially responding to these agents. This study was carried out to identify biomarkers that can distinguish responder from non-responder patients before the initiation of treatment. The level of cytokines in plasma and those produced by ex vivo T cells, B cells and monocytes in 97 RA patients treated with biologic TNFα inhibitors was measured before treatment and after 1 and 3 months of treatment by multiplex analyses. The frequency of T cell subsets and intracellular cytokines were determined by flow cytometry. The results reveal that pre-treatment, T cells from patients who went on to respond to treatment with biologic anti-TNFα agents produced significantly more GM-CSF than non-responder patients. Furthermore, immune cells from responder patients produced higher levels of IL-1β, TNFα and IL-6. Cytokine profiling in the blood of patients confirmed the association between high levels of GM-CSF and responsiveness to biologic anti-TNFα agents. Thus, high blood levels of GM-CSF pre-treatment had a positive predictive value of 87.5% (61.6 to 98.5% at 95% CI) in treated RA patients. The study also shows that cells from most anti-TNFα responder patients in the current cohort produced higher levels of GM-CSF and TNFα pre-treatment than non-responder patients. Findings from the current study and our previous observations that non-responsiveness to anti-TNFα is associated with high IL-17 levels suggest that the disease in responder and non-responder RA patients is likely to be driven/sustained by different inflammatory pathways. The use of biomarker signatures of distinct pro-inflammatory pathways could lead to evidence-based prescription of the most appropriate biological therapies for different RA patients.
Expression of specific inflammasome gene modules stratifies older individuals into two extreme clinical and immunological states.
Low-grade, chronic inflammation has been associated with many diseases of aging, but the mechanisms responsible for producing this inflammation remain unclear. Inflammasomes can drive chronic inflammation in the context of an infectious disease or cellular stress, and they trigger the maturation of interleukin-1β (IL-1β). Here we find that the expression of specific inflammasome gene modules stratifies older individuals into two extremes: those with constitutive expression of IL-1β, nucleotide metabolism dysfunction, elevated oxidative stress, high rates of hypertension and arterial stiffness; and those without constitutive expression of IL-1β, who lack these characteristics. Adenine and N(4)-acetylcytidine, nucleotide-derived metabolites that are detectable in the blood of the former group, prime and activate the NLRC4 inflammasome, induce the production of IL-1β, activate platelets and neutrophils and elevate blood pressure in mice. In individuals over 85 years of age, the elevated expression of inflammasome gene modules was associated with all-cause mortality. Thus, targeting inflammasome components may ameliorate chronic inflammation and various other age-associated conditions. | CommonCrawl |
Vol. 22, no. 2, pp. 357-387, 2018. Regular paper.
Abstract We define the visual complexity of a plane graph drawing to be the number of basic geometric objects needed to represent all its edges. In particular, one object may represent multiple edges (e.g., one needs only one line segment to draw a path with an arbitrary number of edges). Let $n$ denote the number of vertices of a graph. We show that trees can be drawn with $3n/4$ straight-line segments on a polynomial grid, and with $n/2$ straight-line segments on a quasi-polynomial grid. Further, we present an algorithm for drawing planar 3-trees with $(8n-17)/3$ segments on an $O(n)\times O(n^2)$ grid. This algorithm can also be used with a small modification to draw maximal outerplanar graphs with $3n/2$ edges on an $O(n)\times O(n^2)$ grid. We also study the problem of drawing maximal planar graphs with circular arcs and provide an algorithm to draw such graphs using only $(5n - 11)/3$ arcs. This is significantly smaller than the lower bound of $2n$ for line segments for a nontrivial graph class. | CommonCrawl |
Abstract: We consider a scale invariant extension of the standard model (SM) with a combined breaking of conformal and electroweak symmetry in a strongly interacting hidden $SU(n_c)$ gauge sector with $n_f$ vector-like hidden fermions. The (pseudo) Nambu-Goldstone bosons that arise due to dynamical chiral symmetry breaking are dark matter (DM) candidates. We focus on $n_f=n_c=3$, where $SU(3)$ is the largest symmetry group of hidden flavor which can be explicitly broken into either $U(1) \times U(1)$ or $SU(2)\times U(1)$. We study DM properties and discuss consistent parameter space for each case. Because of different mechanisms of DM annihilation the consistent parameter space in the case of $SU(2)\times U(1)$ is significantly different from that of $SU(3)$ if the hidden fermions have a SM $U(1)_Y$ charge of $O(1)$. | CommonCrawl |
Abstract: An upper estimate is obtained for the growth exponent of the set of all uncancellable words equal to $1$ in a group given by a system of defining relations with the Dehn condition. By a theorem of Grigorchuk, this yields a sufficient test for the transience of a random walk on a group given by a system of defining relations with the Dehn condition, and for the nonamenability of such a group. It is proved that the free periodic groups $\mathbf B(m,n)$ with $m\geqslant2$ and odd $n\geqslant665$ satisfy this test. A question asked by Kesten in 1959 is thereby answered in the negative, and a conjecture put forth earlier by the author is confirmed. | CommonCrawl
This paper is concerned with entropy methods for linear drift-diffusion equations with explicitly time-dependent or degenerate coefficients. Our goal is to establish a list of various qualitative properties of the solutions. The motivation for this study comes from a model for molecular motors, the so-called Brownian ratchet, and from a nonlinear equation arising in traffic flow models, for which complex long time dynamics occurs. General results are out of the scope of this paper, but we deal with several examples corresponding to most of the expected behaviors of the solutions. We first prove a contraction property for general entropies which is a useful tool for uniqueness and for the convergence to some eventually time-dependent large time asymptotic solutions. Then we focus on power law and logarithmic relative entropies. When the diffusion term is of the type $\nabla(|x|^\alpha\,\nabla\cdot)$, we prove that the inequality relating the entropy with the entropy production term is a Hardy-Poincaré type inequality, that we establish. Here we assume that $\alpha\in (0,2]$ and the limit case $\alpha=2$ appears as a threshold for the method. As a consequence, we obtain an exponential decay of the relative entropies. In the case of time-periodic coefficients, we prove the existence of a unique time-periodic solution which attracts all other solutions. The case of a degenerate diffusion coefficient taking the form $|x|^\alpha$ with $\alpha>2$ is also studied. The Gibbs state exhibits a non integrable singularity. In this case concentration phenomena may occur, but we conjecture that an additional time-dependence restores the smoothness of the asymptotic solution. | CommonCrawl |
The shape of the cow figurine is described by an $N \times N$ grid of characters like the one below ($3 \leq N \leq 8$), where '#' characters are part of the figurine and '.' characters are not.
Unfortunately, right before FJ can make his purchase, a bull runs through the shop and breaks not only FJ's figurine, but many of the other glass objects on the shelves as well! FJ's figurine breaks into 2 pieces, which quickly become lost among $K$ total pieces lying on the ground ($3 \leq K \leq 10$). Each of the $K$ pieces is described by an $N \times N$ grid of characters, just like the original figurine.
Please help FJ determine which of the $K$ pieces are the two that he needs to glue back together to mend his broken figurine. Fortunately, when the two pieces of his figurine fell to the ground they were not rotated or flipped, so to reassemble them, FJ only needs to possibly shift the pieces horizontally and/or vertically and then super-impose them. If he has the correct two pieces, he should be able to do this in a way that exactly reconstructs the original figurine, with each '#' in the original figurine represented in exactly one of the two pieces (that is, the two pieces, when shifted and superimposed, should not share any '#' characters in common, and together they should form the original shape exactly).
FJ can shift a piece both vertically and/or horizontally by any number of characters, but it cannot be shifted so far that any of its '#' characters fall outside the original $N \times N$ grid. The shape of each piece does not necessarily consist of a single "connected" region of '#' characters; nonetheless, if a piece consists of multiple disjoint clumps of '#' characters, they must all be shifted the same amount if the entire piece is to be shifted.
The first line of input contains $N$ followed by $K$. The next $N$ lines provide the grid of characters describing FJ's original figurine. The next $KN$ lines give the $K$ grids of characters specifying the $K$ pieces FJ finds on the ground.
Please print out one line containing two space-separated integers, each in the range $1 \ldots K$, specifying the indices of the two pieces of FJ's figurine. A solution will always exist, and it will be unique. The two numbers you print must be in sorted order.
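Given the tiny bounds ($N \leq 8$, $K \leq 10$), a direct search over piece pairs and shifts suffices; a Python sketch (the function and variable names are mine, and each piece is assumed to contain at least one '#'):

    from itertools import combinations, product

    def cells(grid):
        # set of (row, col) positions of '#' characters
        return {(r, c) for r, row in enumerate(grid)
                       for c, ch in enumerate(row) if ch == '#'}

    def placements(piece, n):
        # every vertical/horizontal translate keeping the piece inside the n x n grid
        rs = [r for r, _ in piece]
        cs = [c for _, c in piece]
        return [{(r + dr, c + dc) for r, c in piece}
                for dr in range(-min(rs), n - max(rs))
                for dc in range(-min(cs), n - max(cs))]

    def find_pair(n, figurine, pieces):
        goal = cells(figurine)
        for i, j in combinations(range(len(pieces)), 2):
            for a, b in product(placements(cells(pieces[i]), n),
                                placements(cells(pieces[j]), n)):
                if not (a & b) and (a | b) == goal:
                    return i + 1, j + 1   # 1-based indices, already in sorted order

In the worst case this is roughly $45 \times 64 \times 64$ set comparisons, well within limits. | CommonCrawl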
Z. Kiguradze, T. Jangveladze, M. Aptsiauri. On the Stabilization of Solution as $t\to\infty$ and Convergence of the Corresponding Finite Difference Scheme for One Nonlinear Integro-Differential Equation. Rep. Enl. Sess. Sem. of I. Vekua Inst. Appl. Math., 2008, V.22, p.15-19.
Z. Kiguradze. On the Stationary Solution for One Diffusion Model. Rep. Enl. Sess. Sem. of I. Vekua Inst. Appl. Math., 2001, V.16, N1, p.17-20.
T. Buadze. On the statistical estimation of the probability distribution density: Part V. The integral mean-square error of nuclear estimates of the probability distribution density. Georgian Engineering News, No 1 (vol. 61), 2012, 9-15 p. ISSN 1512-0287.
T. Buadze. On the statistical estimation of the probability distribution density: Part IV. On the asymptotic representation of the mean-square integral difference in the nuclear estimates of the probability distribution density. Georgian Engineering News, No 4 (vol. 60), 2011, 15-23 p. ISSN 1512-0287.
T. Buadze. On the statistical estimation of the probability distribution density: Part III. Local validity of nuclear estimates of the probability distribution density. Georgian Engineering News, No 4 (vol. 60), 2011, 11-15 p. ISSN 1512-0287.
T. Buadze. On the statistical estimation of the probability distribution density: Part II. The class of nuclear estimates. Georgian Engineering News, No 4 (vol. 56), 2010, 15-21 p. ISSN 1512-0287.
T. Buadze. On the statistical estimation of the probability distribution density: Part I. Georgian Engineering News, No 4 (vol. 56), 2010, 10-15 p.
T. Buadze. On the statistical estimation of the probability distribution density: Part VIII. On the asymptotic normality of the mean-square integral error of the projection estimation of the probability distribution density. Georgian Engineering News, No 2, 2013, 32-36 p. ISSN 1512-0287.
T. Buadze. On the statistical estimation of the probability distribution density: Part VII. On the statistical problem in the Hilbert space. Georgian Engineering News, No 1 (vol. 65), 2013, 60-65 p. ISSN 1512-0287.
T. Todua, D. Kapanadze. On the Strategies of the Test Generation. Computing and Computational Intelligence, Proceedings of the European Computing Conference (ECC'09) & 3rd International Conference on Computational Intelligence(CI'09) Tbilisi. 2009წ. .
T. Todua. On the Strategies of the Test Generation. Computing and Computational Intelligence, Proceedings of the European Computing Conference (ECC'09) & 3rd International Conference on Computational Intelligence(CI'09) Tbilisi. 2009წ. .
მ. შარიქაძე, Kakabadze M., Kakabadze I.. On the stratigraphical unconformity between the Lower and middle Aptian sequens in the Caucasus. Al. Janelidze Ins-te of Geol. of I.Javakhishvili Tbilisi State Univ. Abstracts. Tbilisi. 2010წ. pp.54-56.
. On the Strict Transitivity Property for Infinite-Dimensional Topological Vector Spaces. Bull. Georgian Acad.Sci. 2002წ. 166,no 1 , 32-35.
N. Macharashvili. On the Summability of Fourier Series in Generalized Spherical Functions. Georgian International Journal of Science and Technology. 2013წ. Volume 6, Number 5, 1-6.
V. Khocholava, Macharashvili N. On the summability of Haar series . Transactions of GTU, 2(504). 2017წ. 149-157.
N. Macharashvili, Khocholava V. On the Summability of Haar Series. Transactions of the Georgian Technical University. 2017წ. 2(504), 153-163.
N. Macharashvili. On the Summability of Haar Series. Georgian International Journal of Science and Technology. 2013წ. Volume 5, Number 1, 1-12.
Ts. Tsanava. On the Summability of Trigonometric Fourier Series in Weighted Lebesgue Spaces. Proceedings A. Razmadze Mathematical Institute. 2005წ. 138, 119-121.
H. Salehi. On the tail estimation of the norm. Georgian Math. J. 2001წ. 8, 2, 237-244.
ს. ჩობანიანი, H. Salehi. On the tail estimation of the norm of Rademacher sums. Georgian Mathematical Journal. 2001წ. 8, 2, 237-244. | CommonCrawl
An internal kink instability has been observed to grow and saturate in the Rotating Wall Machine Experiment. Detailed measurements show that an ideal, line-tied kink mode begins growing when the safety factor drops sufficiently below 1 inside the plasma; the saturated state corresponds to a rotating helical equilibrium. In addition to the ideal mode, reconnection events have been observed to periodically flatten the current profile and change the magnetic topology. The reconnection events strongly resemble the reconnection phenomena described in numerical simulations of a nearly identical geometry. Recently, the 2D equilibrium current profile has been measured using an axially and radially scanning magnetic probe so that better comparisons between experiment and theory can be carried out. The measurements show the current channel diffuses radially, inconsistent with Spitzer resistivity. To determine the effect of neutrals on conductivity, neutral fraction is being independently quantified via H$\alpha $ emission. Future work will involve the construction and installation of a 2D coil array to measure fluctuations in the current at the axial midpoint of the experiment in an effort to characterize the reconnection rate in this inherently 3D geometry. | CommonCrawl |
The $k$-Even Set problem is a parameterized variant of the Minimum Distance Problem of linear codes over $\mathbb F_2$, which can be stated as follows: given a generator matrix $\mathbf A$ and an integer $k$, determine whether the code generated by $\mathbf A$ has distance at most $k$. Here, $k$ is the parameter of the problem. The question of whether $k$-Even Set is fixed parameter tractable (FPT) has been repeatedly raised in literature and has earned its place in Downey and Fellows' book (2013) as one of the "most infamous" open problems in the field of Parameterized Complexity.
In this work, we show that $k$-Even Set does not admit FPT algorithms under the (randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Dinur'16, Manurangsi-Raghavendra'16]. In fact, our result rules out not only exact FPT algorithms, but also any constant factor FPT approximation algorithms for the problem. Furthermore, our result holds even under the following weaker assumption, which is also known as the Parameterized Inapproximability Hypothesis (PIH) [Lokshtanov et al.'17]: no (randomized) FPT algorithm can distinguish a satisfiable 2CSP instance from one which is only $0.99$-satisfiable (where the parameter is the number of variables).
We also consider the parameterized $k$-Shortest Vector Problem (SVP), in which we are given a lattice whose basis vectors are integral and an integer $k$, and the goal is to determine whether the norm of the shortest vector (in the $\ell_p$ norm for some fixed $p$) is at most $k$. Similar to $k$-Even Set, this problem is also a long-standing open problem in the field of Parameterized Complexity. We show that, for any $p > 1$, $k$-SVP is hard to approximate (in FPT time) to some constant factor, assuming PIH. Furthermore, for the case of $p = 2$, the inapproximability factor can be amplified to any constant. | CommonCrawl |
Diffie-Hellman key exchange works by agreeing on two publicly shared values: a large prime number $q$ and a primitive root $g$. Alice and Bob each generate a secret key—a large random number—$x_a$ and $x_b$ respectively, and each raise the primitive root to the power of the secret key, modulo the large prime number.
The results are sent to each other and the shared key is computed by raising the received value to the power of one's own secret key, modulo the large prime number.
Given a prime number $q$, a primitive root $g$ is a number such that every number from 1 up to $q - 1$ can be computed by raising the primitive root to some power $k$ modulo $q$.
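A quick illustrative sketch in Python (toy numbers; a real exchange uses a prime of at least 2048 bits):

import secrets

q = 23   # small public prime (toy value)
g = 5    # a primitive root modulo 23

# each party picks a secret exponent
x_a = secrets.randbelow(q - 2) + 1   # Alice's secret key
x_b = secrets.randbelow(q - 2) + 1   # Bob's secret key

# each raises the primitive root to their secret key, modulo q, and sends it
A = pow(g, x_a, q)   # Alice -> Bob
B = pow(g, x_b, q)   # Bob -> Alice

# shared key: raise the received value to your own secret key, modulo q
k_a = pow(B, x_a, q)
k_b = pow(A, x_b, q)
assert k_a == k_b    # both sides derive g^(x_a * x_b) mod q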
A common analogy is that of mixing paint. | CommonCrawl |
In addition to numpy, we need a module to handle sparse matrices. scipy.sparse has everything we need.
How much memory does a dense matrix of double precision numbers of size $10^6 \times 10^6$ occupy?
How much memory does a sparse identity matrix of the same size occupy? | CommonCrawl |
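A back-of-the-envelope check of both questions (a sketch assuming CSR storage, 8-byte floats and the default index types):

import scipy.sparse as sp

n = 10**6

# Dense: n*n float64 entries at 8 bytes each
dense_bytes = n * n * 8
print(dense_bytes / 1e12, "TB")   # 8.0 TB: far too large to allocate

# Sparse identity: only the n nonzero diagonal entries are stored
I = sp.identity(n, format="csr")
sparse_bytes = I.data.nbytes + I.indices.nbytes + I.indptr.nbytes
print(sparse_bytes / 1e6, "MB")   # on the order of 10-20 MB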
Abstract: The idea of topological quantum computation (TQC) is to store and manipulate quantum information in an intrinsically fault-tolerant manner by utilizing the physics of topologically ordered phases of matter. Currently, one of the most promising platforms for a topological qubit is in terms of Majorana fermion zero modes (MZMs) in spin-orbit coupled superconducting nanowires. However, the topologically robust operations that are possible with MZMs can be efficiently simulated on a classical computer and are therefore not sufficient for realizing a universal gate set for TQC. Here, we show that an array of coupled semiconductor-superconductor nanowires with MZM edge states can be used to realize a more sophisticated type of non-Abelian defect: a genon in an Ising $\times$ Ising topological state. This leads to a possible implementation of the missing topologically protected $\pi/8$ phase gate and thus universal TQC based on semiconductor-superconductor nanowire technology. We provide detailed numerical estimates of the relevant energy scales, which we show to lie within accessible ranges. | CommonCrawl |
Is it possible to integrate a matrix?
Can this be done or do we think I went wrong somewhere?
Matrices form a vector space. Therefore, you can simply integrate them componentwise.
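For instance, with sympy one can integrate entry by entry (a minimal sketch; the matrix entries are illustrative):

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[sp.cos(t), t],
               [1,         sp.exp(t)]])

# componentwise integration: integrate each entry separately
F = A.applyfunc(lambda entry: sp.integrate(entry, t))
print(F)   # Matrix([[sin(t), t**2/2], [t, exp(t)]])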
There is a more sophisticated operation, in case the matrix in question belongs to a Lie algebra: ordered exponentiation. It is to integration as exponentiation is to multiplication, and permits one to go from a Lie algebra element (intuitively, a differential transformation) to a group element (a whole transformation). In this case, you need an $n\times n$ matrix-valued function.
It is explained quite well here: http://en.wikipedia.org/wiki/Ordered_exponential.
| CommonCrawl
Theorem 4.1: $ (L^p)^*\cong L^q $ when $ \frac1p+\frac1q=1,1\le p<\infty $.
We have a natural injection $ L^q\hookrightarrow (L^p)^*, g\mapsto \left(\ell:f\mapsto \int_X fg\,d\mu\right) $ by Hölder's inequality. We want $ ||\ell||=||g||_q $, which is equivalent to showing that equality can be attained (or become arbitrarily close in ratio to being attained) (Lemma 4.2(i)). Just use the equality case of Hölder.
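Explicitly (assuming real-valued $ g $ and $ 1<p<\infty $; the case $ p=1,q=\infty $ needs a separate argument): test against $ f=|g|^{q-1}\operatorname{sgn}(g) $. Then $ \ell(f)=\int_X |g|^q\,d\mu=\|g\|_q^q $ and, since $ (q-1)p=q $, $ \|f\|_p=\left(\int_X |g|^{(q-1)p}\,d\mu\right)^{1/p}=\|g\|_q^{q/p} $, so $ |\ell(f)|/\|f\|_p=\|g\|_q^{q-q/p}=\|g\|_q $.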
We want to go the other way: given $ \ell $, find $ g $; we'd like a linear functional to come from integrating against a function.
The key idea is to use the Radon–Nikodym Theorem: given $ \sigma $-finite measures $ \nu\ll \mu $ (that is, $ \mu(A)=0\implies \nu(A)=0 $), there's $ g $ so that $ \int f\,d\nu=\int fg\,d\mu $.
How to define $ \nu $? Let $ \nu(E)=\ell(\chi_E) $. But this only works for $ E $ with finite measure, so assume first the space has finite measure. Show $ \nu $ is countably additive and $ \nu\ll \mu $.
How to deal with infinite measures? Take a limit of $ g_n $ taken from nested $ E_n $ whose union is all of $ X $. | CommonCrawl
The structure of the atom was first described by the plum pudding model of J.J. Thomson, before the experiment of Ernest Rutherford. The plum pudding model explained an atom as a positively charged body containing small negatively charged particles called electrons. He also described that the negative charge in the atom is balanced by an equal amount of positive charge, maintaining the neutrality of the atom. But there were some faults in this model of Thomson. He did not give the complete structure of the atom, which was later provided by Rutherford through his gold foil experiment, whose results were published in 1911.
He discovered the concept of the nucleus in the atom. His research was based on experiments with alpha particles, which are helium nuclei. He bombarded a thin gold foil, approximately $8.6 \times 10^{-6}$ centimetres thick, with positive alpha particles and took observations on a zinc sulphide screen placed behind the gold foil. He observed the deflection of these bombarded alpha particles on the photographic film. Let's have a look at the gold foil experiment of Rutherford.
In 1910, Ernest Rutherford, a physicist from New Zealand, performed an experiment known as Rutherford's gold foil experiment. This experiment was designed to find out the structure of an atom. By this time it had been discovered by J.J. Thomson that electrons are present in an atom and that they are negatively charged. So it was assumed that, since an atom is neutral and the electrons present are negatively charged, there should be some positive charge inside it that makes it neutral. Rutherford thus worked under the discoveries and assumptions of J.J. Thomson, accepting his model of the atom, the plum pudding model.
The apparatus had a radioactive source, rich in positively charged heavy alpha particles, inside a thick cube-shaped lead box with a narrow opening.
The alpha particles were confined to a narrow beam by passing them through a slit in a lead sheet. An extremely thin gold foil was bombarded with this narrow beam of fast-moving alpha particles. On bombardment, the alpha particles were scattered in different directions at different angles and were detected by a rotatable fluorescent detector, which has a microscope and a screen coated with zinc sulphide. The whole experimental setup was placed in an evacuated chamber to prevent scattering by air molecules. These particles, after striking the screen, caused scintillations. Before performing this experiment, Rutherford assumed that most of the alpha particles would pass through the gold foil with little deflection. He assumed this on the basis of the theory proposed by J.J. Thomson: the alpha particles are heavy, and the positive charge in the "plum pudding model" is spread out thinly.
Some alpha particles were deflected at various angles, as observed on the screen of the detector.
Very few of the alpha particles (one or two) even bounced backwards after hitting the gold foil.
Since most of the alpha particles passed straight through the gold foil without any deflection, most of the space within the atoms is empty.
Since some of the alpha particles (which are big in size) were deflected by large angles or bounced backwards, they must have approached some positively charged region responsible for the deflection. This positively charged region is now called the nucleus.
As very few alpha particles underwent deflection, it was concluded that the volume occupied by the central region (nucleus) is very small.
Since the alpha particles, which are relatively dense, were deflected by the central volume of charge, almost the complete mass of the atom must be within this central volume.
This central volume also contained most of the atomic mass of the atom. This region was named the "nucleus" of the atom in later years.
Rutherford's model did not make any new headway in explaining the electron structure of the atom. Rutherford's paper merely mentioned the earlier 1904 "Saturnian" atomic model of Hantaro Nagaoka in this regard, in which a number of small electrons circled the nucleus like the particles then speculated to make up the rings of Saturn. Rutherford's concentration of most of the atom's mass into a very small core suggested some type of planetary model, as such a core would contain most of the atom's mass, similar to the Sun containing most of the solar system's mass. Rutherford's model was later improved and quantified by one of his students, Niels Bohr, with the now well-known Bohr model of the atom.
Rutherford was able to calculate the radius of the gold atom's central charge from purely energetic considerations of how far particles of known speed would be able to penetrate toward a central charge of $100e$. He found that it would need to be less than $3.4 \times 10^{-14}$ m (how much less could not be told). This was in a gold atom known to be $10^{-10}$ metres or so in radius, a very surprising finding, since it implied a strong central charge less than $(1/3000)$th of the diameter of the atom.
The Rutherford model served to concentrate a great deal of the atom's charge and mass into a very small core, but it could not account for any structure of the remaining electrons and the remaining atomic mass. It mentioned the atomic model of Hantaro Nagaoka, which proposed that the electrons are arranged in one or more rings, with the specific metaphorical structure of the stable rings of Saturn.
Rutherford's discovery has contributed a lot to the field of modern science. After Rutherford's theory, scientists started to consider that the atom is not ultimately a single particle, but is made up of much smaller subatomic particles. Subsequent research was done to figure out the exact atomic structure, building on Rutherford's gold foil experiment. It was eventually discovered that atoms have a positively charged nucleus (with an exact atomic number of charges) at the center, whose radius is about $1.2 \times 10^{-15} \times [\text{Atomic Mass Number}]^{1/3}$ metres. Since electrons were found to be even smaller, it was concluded that the atom consists mostly of empty space.
Afterwards, by using X-rays, scientists found the expected number of electrons (equal to the atomic number) in an atom. When an X-ray passes through an atom, some of the radiation is scattered and the rest passes through the atom. As the X-ray loses its intensity mainly due to scattering at electrons, the number of electrons contained in an atom can be accurately estimated by noting the rate of decrease in X-ray intensity.
Rutherford conducted the famous gold foil experiment, and on the basis of it he proposed a new atomic theory after J.J. Thomson's. His theory is considered very important in the history of atomic theories, as he was the first to discover and prove the existence of a central positive charge, i.e. the nucleus, inside the atom. Later, on the basis of his theory, further improvements to the structure of the atom were made by Niels Bohr and other scientists.
Most of an atom is empty space.
Almost all the mass of the atom is concentrated at the center of the atom, which is now called the nucleus.
The positively charged particles are present in the central region of the atom.
The charge on the nucleus of an atom is positive and is equal to $Z \cdot e$, where $Z$ is the charge number and $e$ is the charge of a proton.
The negatively charged particles, i.e. electrons, revolve around the central positive portion in different circular orbits.
The central region (nucleus) is very small in size compared to the size of the atom.
According to classical electromagnetic theory, a charged particle must emit energy when it is accelerated. The motion of an electron around the nucleus is accelerated motion, hence it must radiate energy. But this does not happen in actual practice. If it did, then due to the continuous loss of energy the orbit of the electron would shrink continuously, and as a result the electron would eventually fall into the nucleus. This is against what is actually observed, and hence shows that Rutherford's model implies an unstable atom.
If the electrons emitted energy continuously, a continuous spectrum should be formed. But in practice a line spectrum is observed.
| CommonCrawl
In this note, we establish a general result on the existence of global attractors for semigroups $S(t)$ of operators acting on a Banach space $\mathcal X$, where the strong continuity $S(t)\in C(\mathcal X,\mathcal X)$ is replaced by the much weaker requirement that $S(t)$ be a closed map.
Keywords: global attractors, semigroups of operators, connected attractors, closed operators, abstract Cauchy problems.
Mathematics Subject Classification: Primary: 34D45; Secondary: 47H20, 47J3. | CommonCrawl |
Abstract: We consider a continuum model of active viscoelastic matter, whereby an active nematic liquid-crystal is coupled to a minimal model of polymer dynamics with a viscoelastic relaxation time $\tau_C$. To explore the resulting interplay between active and polymeric dynamics, we first generalise a linear stability analysis (from earlier studies without polymer) to derive criteria for the onset of spontaneous heterogeneous flows (strain rate) and/or deformations (strain). We find two modes of instability. The first is a viscous mode, associated with strain rate perturbations. It dominates for relatively small values of $\tau_C$ and is a simple generalisation of the instability known previously without polymer. The second is an elastomeric mode, associated with strain perturbations, which dominates at large $\tau_C$ and persists even as $\tau_C\to\infty$. We explore the novel dynamical states to which these instabilities lead by means of direct numerical simulations. These reveal oscillatory shear-banded states in 1D, and activity-driven turbulence in 2D even in the elastomeric limit $\tau_C\to\infty$. Adding polymer can also have calming effects, increasing the net throughput of spontaneous flow along a channel in a new type of "drag-reduction". Finally the effect of including strong, antagonistic coupling between nematic and polymer is examined numerically, revealing a rich array of spontaneously flowing states. | CommonCrawl |
Abstract: This is an edited version of an unpublished 1979 EFI (U. Chicago) preprint: "The U(N) lattice gauge theory in 2-dimensions can be considered as the statistical mechanics of a Coulomb gas on a circle in a constant electric field. The large N limit of this system is discussed and compared with exact answers for finite N. Near the fixed points of the renormalization group and especially in the critical region where one can define a continuum theory, computations in the thermodynamic limit $(N \rightarrow \infty)$ are in remarkable agreement with those for finite and small N. However, in the intermediate coupling region the thermodynamic computation, unlike the one for finite N, shows a continuous phase transition. This transition seems to be a pathology of the infinite N limit and in this simple model has no bearing on the physical continuum limit." | CommonCrawl |
This lecture will present a short overview on kinetic MHD. The advantages and drawbacks of kinetic versus fluid modelling will be summarized. Various techniques to implement kinetic effects in the fluid description will be introduced with increasing complexity: bi-fluid effects, gyroaverage fields, Landau closures. Hybrid formulations, which combine fluid and kinetic approaches will be presented. It will be shown that these formulations raise several difficulties, including inconsistent ordering and choice of representation. The non linear dynamics of an internal kink mode in a tokamak will be used as a test bed for the various formulations. It will be shown that bi-fluid effects can explain to some extent fast plasma relaxations (reconnection), but cannot address kinetic instabilities due to energetic particles. Some results of hybrid codes will be shown. Recent developments and perspectives will be given in conclusion.
The momentum transport in a fusion device such as a tokamak has been in the scope of interest during the last decade. Indeed, it is tightly related to plasma rotation and therefore its stabilization, which in turn is essential for confinement improvement. The intrinsic rotation, i.e. the part of the rotation occurring without any external torque, is one of the possible sources of plasma stabilization.
The modern gyrokinetic theory is a ubiquitous theoretical framework for low-frequency fusion plasma description. In this work we are using the field theory formulation of modern gyrokinetics. The main attention is focused on the derivation of the momentum conservation law via the Noether method, which allows one to connect symmetries of the system with conserved quantities by means of infinitesimal space-time translations and rotations.
Such an approach allows one to consistently take the gyrokinetic dynamical reduction effects into account and therefore leads towards a complete momentum transport equation.
Elucidating the role of the gyrokinetic polarization is one of the main results of this work. We show that the terms resulting from each step of the dynamical reduction (guiding-center and gyrocenter) should be consistently taken into account in order to establish the physical meaning of the transported quantity. The present work generalizes a previous result by taking into account purely geometrical contributions to the radial polarization.
A simple, robust and accurate HLLC-type Riemann solver for two-phase 7-equation type models is built. It involves 4 waves per phase, i.e. the three conventional right- and left-facing and contact waves, augmented by an extra "interfacial" wave. Inspired by the Discrete Equations Method (Abgrall and Saurel, 2003), this wave speed $u_I$ is assumed function only of the piecewise constant initial data. Therefore it is computed easily from these initial data. The same is done for the interfacial pressure $P_I$. Interfacial variables $u_I$ and $P_I$ are thus local constants in the Riemann problem. Thanks to this property there is no difficulty to express the non-conservative system of partial differential equations in local conservative form. With the conventional HLLC wave speed estimates and the extra interfacial speed $u_I$, the four-waves Riemann problem for each phase is solved following the same strategy as in Toro et al. (1994) for the Euler equations. As $u_I$ and $P_I$ are functions only of the Riemann problem initial data, the two-phase Riemann problem consists in two independent Riemann problems with 4 waves only. Moreover, it is shown that these solvers are entropy producing. The method is easy to code and very robust. Its accuracy is validated against exact solutions as well as experimental data.
Reduced MHD models in Tokamak geometry are convenient simplifications of full MHD and are fundamental for the numerical simulation of MHD stability in Tokamaks. This presentation will address the mathematical well-posedness and the justification of such models.
The first result is a systematic design of hierarchies of well-posed reduced MHD models. Here well-posed means that the system is endowed with a physically sound energy identity and that existence of a weak solution can be proved. Some of these models will be detailed.
The second result is perhaps more important for applications. It provides understanding of the fact that the growth rate of linear instabilities of the initial (non-reduced) model is bounded below by the growth rate of linear instabilities of the reduced model.
This work has been done with Rémy Sart.
Many physical phenomena deal with a fluid interacting with a moving rigid or deformable structure. These kinds of problems have a lot of important applications, for instance, in aeroelasticity, biomechanics, hydroelasticity, sedimentation, etc. From the analytical point of view as well as from the numerical point of view they have been studied extensively over the past years. We will mainly focus on viscous fluid interacting with an elastic structure. The purpose of the present lecture is to present an overview of some of the mathematical and numerical difficulties that may be encountered when dealing with fluid-structure interaction problems such as the geometrical nonlinearities or the added mass effect and how one can deal with these difficulties.
We describe here formal analogies between the Darcy equations, which describe the flow of a viscous fluid in a porous medium, and some problems arising from the handling of congestion in crowd motion models.
At the microscopic level, individuals are identified with rigid discs, and the dual handling of the non-overlapping constraint leads to discrete Darcy-like equations with a unilateral constraint that involves the velocities and interaction pressures, and that is set on the contact network. At the macroscopic level, a similar problem is obtained, which is set on the congested zone.
We emphasize the differences between the two settings: at the macroscopic level, a straight use of the maximum principle shows that congestion actually favors evacuation, which is in contradiction with experimental evidence. On the contrary, in the microscopic setting, the very particular structure of the discrete differential operators makes it possible to reproduce observed "Stop and Go waves", and the so called "Faster is Slower" effect.
At the end of the 70s, Littlejohn [1, 2, 3] shed new light on what is called the Gyro-Kinetic Approximation. His approach incorporated high-level mathematical concepts from Hamiltonian Mechanics, Differential Geometry and Symplectic Geometry into a physically affordable theory in order to clarify what had been done for years in the domain. This theory has been widely used to derive the numerical methods for Tokamak and Stellarator simulation. Yet, it was formal from the mathematical point of view and not directly accessible to mathematicians.
This talk will present a mathematically rigorous version of the theory. The way to set out this Gyro-Kinetic Approximation consists in building a change of coordinates that decouples the Hamiltonian dynamical system satisfied by the characteristics of charged particles subjected to a strong magnetic field into one part that concerns the fast oscillation induced by the magnetic field and another part that describes a slower dynamics.
This construction is made of two steps. The goal of the first one, the so-called "Darboux Algorithm", is to give the Poisson Matrix (associated with the Hamiltonian system) a form that would achieve the decoupling if the Hamiltonian function did not depend on one given variable. The second change of variables (which is in fact a succession of several ones), the so-called "Lie Algorithm", then removes the given variable from the Hamiltonian function without changing the form of the Poisson Matrix.
CIRM: the summer showcase of CEMRACS for 20 years!
Recently, an important research activity on mean field games (MFGs for short) has been initiated by the pioneering works of Lasry and Lions: it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $n$ of agents tends to infinity. The field is now rapidly growing in several directions, including stochastic optimal control, analysis of PDEs, calculus of variations, numerical analysis and computing, and the potential applications to economics and social sciences are numerous.
In the limit when $n \to +\infty$, a given agent feels the presence of the others through the statistical distribution of the states. Assuming that the perturbations of a single agent's strategy do not influence the statistical states distribution, the latter acts as a parameter in the control problem to be solved by each agent. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short), a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation.
The latter system of PDEs has closed form solutions in very few cases only. Therefore, numerical simulations are crucial in order to address applications. The present mini-course will be devoted to numerical methods that can be used to approximate the systems of PDEs.
The numerical schemes that will be presented rely basically on monotone approximations of the Hamiltonian and on a suitable weak formulation of the Fokker-Planck equation.
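As a flavour of the first ingredient, here is a minimal sketch (an illustration, not material from the course) of an Engquist–Osher-type monotone approximation of the model Hamiltonian $H(p) = p^2/2$ on a periodic 1D grid:

import numpy as np

def monotone_H(u, dx):
    # Engquist-Osher-type monotone approximation of H(u_x) = u_x^2 / 2
    d_minus = (u - np.roll(u, 1)) / dx    # backward difference
    d_plus = (np.roll(u, -1) - u) / dx    # forward difference
    # upwinding: positive part of the backward slope,
    # negative part of the forward slope
    return 0.5 * (np.maximum(d_minus, 0.0)**2 + np.minimum(d_plus, 0.0)**2)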
1. Introduction to the system of PDEs and its interpretation. Uniqueness of classical solutions.
The question of using the available measurements to retrieve the characteristics of mathematical models (parameters, boundary conditions, initial conditions) is a key aspect of the modeling objective in biology or medicine. In a stochastic/statistical framework this question is seen as an estimation problem. From a deterministic point of view, we classically talk about inverse problems, as we recover classical model inputs from outputs. When considering evolution problems, this question falls in the realm of data assimilation, which can be seen from a deterministic or statistical point of view. Our objective in this course is to introduce the mathematical principles and numerical aspects behind data assimilation strategies, with an emphasis on the deterministic formalism, allowing one to understand why data assimilation is a specific inverse problem. Our presentation will include considerations on finite-dimensional problems but also on infinite-dimensional problems such as the ones arising from PDE models. We will illustrate the course with numerous examples coming from cardiovascular applications and biology.
This minicourse aims at providing tentative explanations of some specific phenomena observed in the motion of crowds, or more generally collections of living entities. The first lecture shall focus on the so-called Stop and Go Waves, which sometimes spontaneously emerge and persist in crowds in motion. We shall present a general class of dynamical systems which are likely to exhibit this type of instabilities, and emphasize the critical role of two basic ingredients: the asymmetry of interactions, and any sort of delay in the transmission of information through the network of entities. The second lecture will address the Capacity Drop Phenomenon (decrease of the flux through a bottleneck when the upstream density becomes too high), and the more paradoxical Faster is Slower Effect (in some regimes, attempts to go quicker may slow down the overall process). We shall in particular detail how an accurate description of the relative position of entities (at the microscopic level) is crucial to recover and understand those effects.
Irreversible electroporation (IRE) is the sole physical ablative technology inducing tumorous cell death by a process unrelated to thermal effect. This characteristic makes the technique suitable for the treatment of subtypes of liver tumors, especially hepatocellular carcinoma (HCC) located next to critical structures, which leads to contraindications to thermal ablation like radiofrequency, microwave or cryotherapy. However, while IRE appears safe in such cases, assumed challenging for thermal techniques, several issues remain to be addressed to make its use easier and more effective in clinical practice. First of all, tissue changes induced by IRE must be assessed keeping in mind that, conversely to thermal techniques, its efficacy is not limited to the observable coagulative necrotic component of the treatment zone. In addition, IRE, which is a multibipolar ablative technology, requires meticulous, demanding electrode positioning to ensure the proper magnitude of electric fields between each dipole. Finally, numerical simulations of IRE are mandatory to ease the setting of electrical pulse parameters and to improve the predictability of treatment in each individual case. In this setting of continued efforts to improve the practicability of IRE, the technique has been routinely used in our institution for several years for the treatment of patients bearing early and locally advanced HCC not amenable to resection or thermal ablation. Throughout our experience with IRE, imaging appeared as a key point for addressing the specific issues listed above. For the first 58 patients, 92% complete ablation was achieved, while the one-year local tumor progression-free survival was 70% (95% CI: 56%, 81%). Indeed, despite the need for improvements, IRE appears right now as a unique opportunity to achieve complete sustained local tumor control for patients bearing early or locally advanced HCC not amenable to other curative treatments.
I will introduce the topic of computational cardiac electrophysiology and electrocardiograms simulation. Then I will address some questions of general interest, like the modeling of variability and the extraction of features from biomedical signals, relevant for identification and classification. I will illustrate this research with an example of application to the pharmaceutical industry.
Cell-extracellular matrix interaction and the mechanical properties of the cell nucleus have been demonstrated to play a fundamental role in cell movement across fibre networks and micro-channels, and hence in the spread of cancer metastases. The lectures will be aimed at presenting several mathematical models dealing with such problems, starting from the modelling of cell adhesion mechanics to the inclusion of the influence of nucleus stiffness in the motion of cells, through continuum mechanics, kinetic models and individual cell-based models. | CommonCrawl
The basic example works fairly well because the two systems can be decomposed into two fairly distinct rays of the Hilbert space. But in the case of a quantum field theory, how does one define an observer? Any "realistic" object (especially for interacting QFTs) will likely be a sum of every state of the Fock space of the theory, hence I do not think it is trivial to separate the system and the observer into a product of two wavefunctionals.
Is there a simple way of defining observers in QFT? Perhaps by only considering wavefunctionals on compact regions of space? I can't really think of anything that really delves into the matter so I don't have a clue.
I like to think of "observer/system" separation in the context of boundary formalism, where quantum fields live on the compact bulk region of spacetime bounded by a 3-surface where boundary states live. These states describe the interaction with the outside "observer", though in this picture the term "observer" completely loses its original meaning.
As long as only scattering experiments are involved, the observer prepares the in-state at time $-\infty$ and takes a measurement on the out-state at time $+\infty$. In this setting, the observer is completely outside the QFT formalism.
A correct account of an observer in relativistic QFT would have to model it as a very massive (many-particle) part of the quantum field localized in some region, in the spirit of the nonrelativistic treatment in the work by Allahverdyan et al. reviewed by me here. I haven't seen anything like this for the relativistic case.
On the other hand, papers by Peres and Terno (e.g., https://arxiv.org/abs/quant-ph/0212023) discuss relativistic quantum mechanics (not QFT) for multiple observers in different Lorentz frames.
I generally agree with Arnold's opinion and comments, and I would just like to underline that "the observer" makes sure that the incident particles have the necessary energy/momentum and other properties, that the target also has the necessary properties, that the collisions happen in a deep vacuum (no other obstacles), that the measuring devices detect the scattering products correctly, etc., etc. The observer does a huge amount of preparatory work, accompanying work, and result-processing work. In other words, he performs permanent work keeping the necessary and sufficient conditions for a given experiment. Only with all that may we be sure of the initial and final states, within the experimental uncertainties established by the observer. As you can see, "observer" includes experimentalists, theorists, staff and stuff, all working to make sure that a (thus simplified) QFT is applicable to the studied processes.
its implication for a good theory of observers and measurement in QFT, which he attributes to Bohr-Rosenfeld 1933.
There is no doubt about the relevance of the Peierls bracket: This is the covariant form of the Poisson bracket (explained in detail in "Mathematical QFT - 8. Phase space"); and the positive frequency part of its integral kernel is nothing but the vacuum 2-point function (explained in "Mathematical QFT - 9. Propagators").
Chapters 7 and 8 of DeWitt's book (volume 1) mean to lay out a theory of measurement and observers in QFT based on this. I don't feel quite qualified to review this here, but if you are interested, I would suggest to take a look.
Does the Peierls bracket exist for interacting systems too? In particular, does the statement about 2-point functions still apply? It sounds very interesting.
The positive frequency part of the Peierls bracket is the vacuum 2-point function of a free (linear) quantum field theory corresponding to the associated linear classical theory. Did you mean this? I don't know of any nonlinear version of this statement, and your PhysicsForums article is too big to see quickly the precise statement you intend here.
The Peierls bracket exist generally, also for interacting field theories. I recommend Khavkine 14.
Answering whether its relation to the vacuum 2-point function goes beyond perturbation theory would seem to require a non-perturbative theory, which does not exist at this point. | CommonCrawl
How does one compute all the solutions to this system?
I have the following method in place for computing solutions given the initial condition that $$x_1^2 + x_2^2 = y^2$$ for some integer $y$.
One can make the standard Pythagorean triple reduction given this condition and then repeat the reduction again thus generating a general solution for all integers.
Given a number and told that it is part of a Pythagorean triple (let's say it's the hypotenuse) how do you find the other squares that sum to it?
You can parametrize the sphere by Stereographic projection. Have you considered that?
All the primitive Pythagorean Quadruples are known. This is Theorem 3 on page 176 and Theorem 4 on page 177 of Jones_Pall_1939.pdf, available at TERNARY as a pdf. The same information is on the first two pages of Pall_Automorphs_1940.pdf at the same site.
July 2015: for another project, I decided to try to generate all the quadruples by the sort of three-parameter formulas one gets by stereographic projection to $\mathbb S^2.$ the results are really disappointing. I think I will stick with the four parameter thing. I found it in Jones and Pall, but it goes back at least to V. A. Lebesgue (and likely known to Euler), https://en.wikipedia.org/wiki/Pythagorean_quadruple#Parametrization_of_primitive_quadruples and is a simple calculation using quaternions with ordinary integer coefficients. Alright, the (first correct) proof that the four parameter recipe gives all primitive solutions is from 1920, by Dickson. In 1941, Skolem gave a proof that can be adjusted to give a reasonably direct algorithm for taking a quadruple and reconstructing the four parameters. In 1956, one F. Steiger gave inequalities that make the mapping one-to-one. This is all reported in a 1962 article by Robert Spira, in the maa Monthly, May 1962, volume 69, number 5, pages 360-365, title The Diophantine Equation $x^2 + y^2 + z^2 = m^2.$ One thing I did not initially notice, Spira just discards the Pythagorean triples; in the quadruple setting, if one of the numbers is $0,$ we have a triple and can easily recover parameters.
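For reference, the four-parameter recipe is straightforward to check numerically. A minimal Python sketch (the parameter box is illustrative, and primitivity/ordering constraints such as Steiger's are ignored here):

from itertools import product

def quadruple(m, n, p, q):
    # V. A. Lebesgue's parametrization of Pythagorean quadruples,
    # giving x^2 + y^2 + z^2 = w^2
    x = m*m + n*n - p*p - q*q
    y = 2*(m*q + n*p)
    z = 2*(n*q - m*p)
    w = m*m + n*n + p*p + q*q
    return x, y, z, w

# sanity check over a small parameter box
for m, n, p, q in product(range(4), repeat=4):
    x, y, z, w = quadruple(m, n, p, q)
    assert x*x + y*y + z*z == w*w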
| CommonCrawl
Bessie likes sightseeing, and today she is looking for scenic valleys.
Of interest is an $N \times N$ grid of cells, where each cell has a height. Every cell outside this square grid can be considered to have infinite height.
A valley is a region of this grid which is contiguous, has no holes, and is such that every cell immediately surrounding it is higher than all cells in the region.
A set of cells is called "edgewise-contiguous" if one can reach any cell of the set from any other by a sequence of moves up, down, left, or right.
A set of cells is called "pointwise-contiguous" if one can reach any cell of the set from any other by a sequence of moves up, down, left, right, or diagonally.
A "region" is a non-empty edgewise-contiguous set of cells.
A region is called "holey" if the complement of the region (which includes the infinite cells outside the $N \times N$ grid) is not pointwise-contiguous.
The "border" of a region is the set of cells orthogonally adjacent (up, down, left, or right) to some cell in the region, but which is not in the region itself.
A "valley" is any non-holey region such that every cell in the region has height lower than every cell on the region's border.
Bessie's goal is to determine the sum of the sizes of all valleys.
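To make these definitions concrete, here is a Python sketch of a checker for one candidate region (illustrative only; at $N \le 750$ the full problem needs a far more efficient approach than testing regions one by one):

from collections import deque

def is_valley(region, grid, n):
    # region: non-empty set of (r, c) cells; checks the three conditions
    # 1) edgewise-contiguous: orthogonal BFS inside the region covers it
    start = next(iter(region))
    seen, todo = {start}, deque([start])
    while todo:
        r, c = todo.popleft()
        for nxt in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if nxt in region and nxt not in seen:
                seen.add(nxt); todo.append(nxt)
    if seen != region:
        return False
    # 2) not holey: the complement, padded with one ring of "outside"
    #    cells, must be pointwise-contiguous (8-directional BFS)
    comp = {(r, c) for r in range(-1, n + 1) for c in range(-1, n + 1)
            if (r, c) not in region}
    start = (-1, -1)
    seen, todo = {start}, deque([start])
    while todo:
        r, c = todo.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nxt = (r + dr, c + dc)
                if nxt in comp and nxt not in seen:
                    seen.add(nxt); todo.append(nxt)
    if seen != comp:
        return False
    # 3) every border cell is higher than every cell of the region
    top = max(grid[r][c] for r, c in region)
    for r, c in region:
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if (nr, nc) not in region:
                # cells outside the grid count as infinitely high
                if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] < top:
                    return False
    return True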
First line contains integer $N$, where $1 \le N \le 750$.
Next $N$ lines each contain $N$ integers, the heights of the cells of the grid. Each height $h$ will satisfy $1 \le h \le 10^6$. Every height will be a distinct integer.
In at least 19% of the test cases, it is further guaranteed that $N \leq 100$.
Output a single integer, the sum of the sizes of all valleys.
Thus, the answer is 1 + 1 + 1 + 2 + 3 + 6 + 7 + 9 = 30. | CommonCrawl |
$A,B,C,D$ and $E$ are five persons who are to be seated around a circular table such that $A$ and $B$ must sit together and $C$ and $D$ must never sit together. In how many ways can they be seated?
First we make $(AB)$ and $E$ sit which can be done in $2$ ways since $A$ and $B$ can arrange themselves in $2!=2$ ways.
One of $C$ and $D$ can be put into the gap between $E$ and $A$ and the other into the gap between $B$ and $E$, for which there are obviously $2$ ways.
To obtain the total number of ways it appears that we should multiply the number of ways in Step 1 ($=2$) by the number of ways in Step 2 ($=2$), i.e. total number of ways $=2\times 2=4$.
But it is easy to notice that in these $4$ ways two of the arrangements are rotations of the other two.
So should the answer be $4$, or should it be $2$?
In general what should be the approach?
Say we have $10$ persons sitting around a circular table, with $3$ of them wanting to sit together, whereas $4$ other persons do not want to sit next to each other.
Should the rotation of a particular arrangement be construed as same or different?
$A$, $B$, $C$, $D$ and $E$ are five persons who are to be seated around a circular table such that $A$ and $B$ must sit together and $C$ and $D$ must never sit together. In how many ways can they be seated?
There are four possible seating arrangements.
Seat E. Since A and B sit together and C and D are separated, C and D must both be adjacent to E. Therefore, choosing whether C or D sits to E's immediate left also determines who sits to E's immediate right and choosing whether A or B sits two seats to E's left also determines who sits two seats to E's right. Hence, there are $2 \cdot 2 = 4$ permissible seating arrangements, as shown below.
Notice that none of these seating arrangements can be obtained from another by rotation.
Should the rotation of a particular arrangement be construed as the same or different?
By convention, a rotation of a particular arrangement is considered to be the same unless the seats are labeled or we are given a particular reference point (such as a special chair or the north end of the table).
Notice that we have already accounted for rotational invariance by measuring our seating arrangements relative to the position of E.
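These counts are small enough to confirm by brute force; a quick Python sketch (fixing $E$ at seat 0 to account for rotational invariance):

from itertools import permutations

def adjacent(i, j, n=5):
    # seats i and j are next to each other around a circle of n seats
    return (i - j) % n in (1, n - 1)

count = 0
for perm in permutations("ABCD"):          # E fixed at seat 0
    seats = ("E",) + perm                  # seats 0..4 around the table
    pos = {p: s for s, p in enumerate(seats)}
    ab_together = adjacent(pos["A"], pos["B"])
    cd_apart = not adjacent(pos["C"], pos["D"])
    if ab_together and cd_apart:
        count += 1
print(count)   # 4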
Say we have $10$ persons sitting around a circular table, with $3$ of them wanting to sit together and $4$ other persons who wish to be separated. In how many ways can they be seated?
We use the block of three people who wish to sit together as our reference point. Say the people are $A$, $B$, and $C$. In how many ways can they be arranged within the block?
Suppose the four people who wish to be separated are $D$, $E$, $F$, and $G$. Since there are only seven seats left at the table, they must be seated in the seats that are $1$, $3$, $5$, and $7$ positions to the left of the block. In how many ways can they be seated?
Let's call the remaining three people $H$, $I$, and $J$. In how many ways can they be seated in the remaining three chairs?
| CommonCrawl
Files are identified by filenames, which are represented in GAP as strings. Filenames can be created directly by the user or a program, but of course this is operating system dependent.
Filenames for some files can be constructed in a system independent way using the following functions. This is done by first getting a directory object for the directory the file shall reside in, and then constructing the filename. However, it is sometimes necessary to construct filenames of files in subdirectories relative to a given directory object. In this case the directory separator is always / even under DOS or MacOS.
Section 9.3 describes how to construct directory objects for the common GAP and system directories. Using the command Filename (9.4-1) it is possible to construct a filename pointing to a file in these directories. There are also functions to test for accessibility of files, see 9.6.
For portability filenames and directory names should be restricted to at most 8 alphanumerical characters optionally followed by a dot . and between 1 and 3 alphanumerical characters. Upper case letters should be avoided because some operating systems do not make any distinction between case, so that NaMe, Name and name all refer to the same file whereas some operating systems are case sensitive. To avoid problems only lower case characters should be used.
Another function which is system-dependent is LastSystemError (9.1-1).
LastSystemError returns a record describing the last system error that has occurred. This record contains at least the component message which is a string. This message is, however, highly operating system dependent and should only be used as an informational message for the user.
When GAP is started it determines a list of directories which we call the GAP root directories. In a running GAP session this list can be found in GAPInfo.RootPaths.
The core part of GAP knows which files to read relative to its root directories. For example when GAP wants to read its library file lib/group.gd, it appends this path to each path in GAPInfo.RootPaths until it finds the path of an existing file. The first file found this way is read.
Furthermore, GAP looks for available packages by examining the subdirectories pkg/ in each of the directories in GAPInfo.RootPaths.
The root directories are specified via one or several of the -l paths command line options, see 3.1. Furthermore, by default GAP automatically prepends a user specific GAP root directory to the list; this can be avoided by calling GAP with the -r option. The name of this user specific directory depends on your operating system, it can be found in GAPInfo.UserGapRoot. This directory can be used to tell GAP about personal preferences, to always load some additional code, to install additional packages, or to overwrite some GAP files. See 3.2 for more information how to do this.
IsDirectory is a category of directories.
returns a directory object for the string string. Directory understands "." for "current directory", that is, the directory in which GAP was started. It also understands absolute paths.
If the variable GAPInfo.UserHome is defined (this may depend on the operating system) then Directory understands a string with a leading ~ (tilde) character for a path relative to the user's home directory (but a string beginning with "~other_user" is not interpreted as a path relative to other_user's home directory, as in a UNIX shell).
Paths are otherwise taken relative to the current directory.
returns a directory object in the category IsDirectory (9.3-1) for a new temporary directory. This is guaranteed to be newly created and empty immediately after the call to DirectoryTemporary. GAP will make a reasonable effort to remove this directory upon termination of the GAP job that created the directory.
If DirectoryTemporary is unable to create a new directory, fail is returned. In this case LastSystemError (9.1-1) can be used to get information about the error.
A warning message is given if more than 1000 temporary directories are created in any GAP session.
returns the directory object for the current directory.
DirectoriesLibrary returns the directory objects for the GAP library name as a list. name must be one of "lib" (the default), "doc", "tst", and so on.
The string "" is also legal and with this argument DirectoriesLibrary returns the list of GAP root directories. The return value of this call differs from GAPInfo.RootPaths in that the former is a list of directory objects and the latter a list of strings.
The directory name must exist in at least one of the root directories, otherwise fail is returned.
As the files in the GAP root directories (see 9.2) can be distributed into different directories in the filespace a list of directories is returned. In order to find an existing file in a GAP root directory you should pass that list to Filename (9.4-1) as the first argument. In order to create a filename for a new file inside a GAP root directory you should pass the first entry of that list. However, creating files inside the GAP root directory is not recommended, you should use DirectoryTemporary (9.3-3) instead.
DirectoriesSystemPrograms returns the directory objects for the list of directories where the system programs reside, as a list. Under UNIX this would usually represent $PATH.
This function returns a list of filenames/directory names that reside in the directory dir. The argument dir can either be given as a string indicating the name of the directory or as a directory object (see IsDirectory (9.3-1)). It is an error, if such a directory does not exist.
The ordering of the list entries can depend on the operating system.
An interactive way to show the contents of a directory is provided by the function BrowseDirectory (Browse: BrowseDirectory) from the GAP package Browse.
returns a directory object for the user's desktop directory as defined on many modern operating systems. The function is intended to provide a cross-platform interface to a directory that is easily accessible by the user. Under Unix systems (including Mac OS X) this will be the Desktop directory in the user's home directory if it exists, and the user's home directory otherwise. Under Windows it will be the user's Desktop folder (or the appropriate name under different languages).
returns a directory object for the user's home directory, defined as a directory in which the user will typically have full read and write access. The function is intended to provide a cross-platform interface to a directory that is easily accessible by the user. Under Unix systems (including Mac OS X) this will be the usual user home directory. Under Windows it will be the user's My Documents folder (or the appropriate name under different languages).
If the first argument is a directory object dir, Filename returns the (system dependent) filename as a string for the file with name name in the directory dir. Filename returns the filename regardless of whether the directory contains a file with name name or not.
If the first argument is a list list-of-dirs (possibly of length 1) of directory objects, then Filename searches the directories in order, and returns the filename for the file name in the first directory which contains a file name or fail if no directory contains a file name.
For example, in order to locate the system program date use DirectoriesSystemPrograms (9.3-6) together with the second form of Filename.
In order to locate the library file files.gd use DirectoriesLibrary (9.3-5) together with the second form of Filename.
In order to construct filenames for new files in a temporary directory use DirectoryTemporary (9.3-3) together with the first form of Filename.
The special filename "*stdin*" denotes the standard input, i.e., the stream through which the user enters commands to GAP. The exact behaviour of reading from "*stdin*" is operating system dependent, but usually the following happens. If GAP was started with no input redirection, statements are read from the terminal stream until the user enters the end of file character, which is usually Ctrl-D. Note that terminal streams are special, in that they may yield ordinary input after an end of file. Thus when control returns to the main read-eval-print loop the user can continue with GAP. If GAP was started with an input redirection, statements are read from the current position in the input file up to the end of the file. When control returns to the main read eval view loop the input stream will still return end of file, and GAP will terminate.
The special filename "*errin*" denotes the stream connected to the UNIX stderr output. This stream is usually connected to the terminal, even if the standard input was redirected, unless the standard error stream was also redirected, in which case opening of "*errin*" fails.
The special filename "*stdout*" can be used to print to the standard output.
The special filename "*errout*" can be used to print to the standard error output file, which is usually connected to the terminal, even if the standard output was redirected.
When the following functions return false one can use LastSystemError (9.1-1) to find out the reason (as provided by the operating system), see the examples.
IsExistingFile returns true if a file with the filename filename exists and can be seen by the GAP process. Otherwise false is returned.
IsReadableFile returns true if a file with the filename filename exists and the GAP process has read permissions for the file, or false if this is not the case.
IsWritableFile returns true if a file with the filename filename exists and the GAP process has write permissions for the file, or false if this is not the case.
IsExecutableFile returns true if a file with the filename filename exists and the GAP process has execute permissions for the file, or false if this is not the case. Note that execute permissions do not imply that it is possible to execute the file, e.g., it may only be executable on a different machine.
IsDirectoryPath returns true if the file with the filename filename exists and is a directory, and false otherwise. Note that this function does not check if the GAP process actually has write or execute permissions for the directory. You can use IsWritableFile (9.6-3), resp. IsExecutableFile (9.6-4) to check such permissions.
reads the input from the file with the filename filename, which must be given as a string.
Read first opens the file filename. If the file does not exist, or if GAP cannot open it, e.g., because of access restrictions, an error is signalled.
Then the contents of the file are read and evaluated, but the results are not printed. The reading and evaluation happen exactly as described for the main loop (see 6.1).
If a statement in the file causes an error a break loop is entered (see 6.4). The input for this break loop is not taken from the file, but from the input connected to the stderr output of GAP. If stderr is not connected to a terminal, no break loop is entered. If this break loop is left with quit (or Ctrl-D), GAP exits from the Read command, and from all enclosing Read commands, so that control is normally returned to an interactive prompt. The QUIT statement (see 6.7) can also be used in the break loop to exit GAP immediately.
Note that a statement must not begin in one file and end in another. I.e., eof (end-of-file) is not treated as whitespace, but as a special symbol that must not appear inside any statement.
Note that one file may very well contain a Read statement causing another file to be read, before input is again taken from the first file. There is an upper limit of 15 on the number of files that may be open simultaneously.
reads the file with filename filename as a function and returns this function.
Reading the file as a function will not affect a global variable a.
PrintTo works like Print (6.3-4), except that the arguments obj1, \(\ldots\) (if present) are printed to the file with the name filename instead of the standard output. This file must of course be writable by GAP. Otherwise an error is signalled. Note that PrintTo will overwrite the previous contents of this file if it already existed; in particular, PrintTo with just the filename argument empties that file.
AppendTo works like PrintTo, except that the output does not overwrite the previous contents of the file, but is appended to the file.
There is an upper limit of 15 on the number of output files that may be open simultaneously.
Note that one should be careful not to write to a logfile (see LogTo (9.7-4)) with PrintTo or AppendTo.
Calling LogTo with a string filename causes the subsequent interaction to be logged to the file with the name filename, i.e., everything you see on your terminal will also appear in this file. (LogTo (10.4-5) may also be used to log to a stream.) This file must of course be writable by GAP, otherwise an error is signalled. Note that LogTo will overwrite the previous contents of this file if it already existed.
Called without arguments, LogTo stops logging to a file or stream.
Calling InputLogTo with a string filename causes the subsequent input to be logged to the file with the name filename, i.e., everything you type on your terminal will also appear in this file. Note that InputLogTo and LogTo (9.7-4) cannot be used at the same time while InputLogTo and OutputLogTo (9.7-6) can. Note that InputLogTo will overwrite the previous contents of this file if it already existed.
Called without arguments, InputLogTo stops logging to a file or stream.
Calling OutputLogTo with a string filename causes the subsequent output to be logged to the file with the name filename, i.e., everything GAP prints on your terminal will also appear in this file. Note that OutputLogTo and LogTo (9.7-4) cannot be used at the same time while InputLogTo (9.7-5) and OutputLogTo can. Note that OutputLogTo will overwrite the previous contents of this file if it already existed.
Called without arguments, OutputLogTo stops logging to a file or stream.
CRC (cyclic redundancy check) numbers provide a certain method of doing checksums. They are used by GAP to check whether files have changed.
CrcFile computes a checksum value for the file with filename filename and returns this value as an integer. The function returns fail if a system error occurred, say, for example, if filename does not exist. In this case the function LastSystemError (9.1-1) can be used to get information about the error.
removes the file with filename filename and returns true in case of success. The function returns fail if a system error occurred, for example, if your permissions do not allow the removal of filename. In this case the function LastSystemError (9.1-1) can be used to get information about the error.
If the string str starts with a '~' character this function returns a new string with the leading '~' substituted by the user's home directory as stored in GAPInfo.UserHome. Otherwise str is returned unchanged.
In general, it is not possible to read the same GAP library file twice, or to read a compiled version after reading a GAP version, because crucial global variables are made read-only (see 4.9) and filters and methods are added to global tables.
A partial solution to this problem is provided by the function Reread (and related functions RereadLib etc.). Reread( filename ) sets the global variable REREADING to true, reads the file named by filename and then resets REREADING. Various system functions behave differently when REREADING is set to true. In particular, assignment to read-only global variables is permitted, calls to NewRepresentation (79.2-1) and NewInfoClass (7.4-1) with parameters identical to those of an existing representation or info class will return the existing object, and methods installed with InstallMethod (78.2-1) may sometimes displace existing methods.
This function may not entirely produce the intended results, especially if what has changed is the super-representation of a representation or the requirements of a method. In these cases, it is necessary to restart GAP to read the modified file.
An additional use of Reread is to load the compiled version of a file for which the GAP language version had previously been read (or perhaps was included in a saved workspace). See 76.3-11 and 3.3 for more information.
It is not advisable to use Reread programmatically. For example, if a file that contains calls to Reread is read with Reread then REREADING may be reset too early.
Find some upper and lower bounds for the integral given that $f:\mathbb R\to\mathbb R$ is continuous, increasing, and that $f(f(x))=e^x$.
This is an interesting problem that I came up with while investigating half-exponential functions. I last investigated this type of function in this post about fractional iterates of functions, and I posed this puzzle to the users of Math Stack Exchange. In this short post, I will present my original solution to the problem.
As I showed in the previous post, the function $f(x)$ must be bounded above by $e^x$ and below by $x$ for all $x\in\mathbb R$. This implies that $0\lt f(0)\lt 1$. For convenience, I will let $f_0=f(0)$. To solve the problem, I will first split up the integral in question into Now, by making the substitution $x\to f(x)$ and using integration by parts in the second integral, we see that and so our original integral is equal to Now consider the following integral: It is always positive, because of the previously mentioned bounds for $f$. Furthermore, since $f$ is increasing, for all $x\in [0,f_0]$, we have that $f(x)\in [f_0,1]$. Thus, we have that ...or, after simplifying the integrals in the upper and lower bounds, By combining this inequality with equation (1), we have that It can be shown using basic calculus that for all $x\in [0,1]$, the quantity $(1-x)e^x+x^2$ is always less than or equal to $\ln^2(2)-2\ln(2)+2\approx 1.0942$. Thus, since $f_0\in [0,1]$, we have that Not only are these bounds correct, but they are also the best possible bounds, in that there exist functions $f$ making the integral arbitrarily close to $1$ or $\ln^2(2)-2\ln(2)+2$. In order to make the integral arbitrarily close to $1$, one need only choose a function $f(x)$ such that $f(0)$ is arbitrarily close to $1$, and such that $f(x)$ is arbitrarily close to $1$ for $x\in [0,f(0)]$, so that the graph of $f$ for $0\le x\le 1$ looks very much like a horizontal line segment. Similarly, to make $I$ arbitrarily close to $\ln^2(2)-2\ln(2)+2$, one should choose $f$ such that $f(0)=\ln(2)$ and such that $f(x)$ is very close to $\ln(2)$ for $x\in [0,\ln(2)]$.
Prove that given that $g:\mathbb R\to\mathbb R$ is continuous, increasing, and that $g(g(x))=x^2+1$.
A conference was held recently at the University of New South Wales to celebrate the ninetieth birthday of well-known local mathematician George Szekeres.
The 'Happy End' problem is a geometrical problem that has permeated George Szekeres' life.
On 26 October 2001 a large block of ice was unveiled inside an ice fridge at Darling Harbour in Sydney to launch the start of the "World's Coolest Tim-Tam" promotion.
Thiz iz a nartical on a voy ding miss takes.
In May of 2000, the Clay Mathematical Institute (an organisation at MIT) organised a meeting in the Collège de France in Paris, where, among other things, two prominent mathematicians gave a list of 7 Problems for the Third Millennium.
We have four red cards, four blue, four yellow and four green. Is it possible to arrange 15 of these in a $3 \times 5$ rectangle in such a way that no two adjacent cards (horizontally, vertically or diagonally) have the same colour?
Q1111 A grandfather clock takes 30 seconds to strike 6 o'clock. How long does it take to strike 12 o'clock?
Q1105. A hollow square is an arrangement of dots in a square with a central square left blank. For example here are thirty-two dots arranged in a hollow square.
We extend the Adaptive Aggregation Based Domain Decomposition Multigrid (DD-$\alpha$AMG) algorithm to $N_f=2$ twisted mass fermions. We show numerical results for an $N_f=2$ ensemble of twisted fermions with a clover term simulated at the physical value of the pion mass. We fine-tuned the parameters to achieve a speedup comparable to the one obtained for clover fermions. We also present a complete analysis of the aggregation parameters that provides a novel insight on the multigrid methods for lattice QCD independently of the fermion discretization.
We consider the geodesic flow defined by periodic Eaton lens patterns in the plane and discover ergodic ones among those. The ergodicity result on Eaton lenses is derived from a result for quadratic differentials on the plane that are pull-backs of quadratic differentials on tori. Ergodicity itself is concluded for $\mathbb{Z}^d$-covers of quadratic differentials on compact surfaces with vanishing Lyapunov exponents.
by Steve Chien, Prahladh Harsha, Alistair Sinclair and Srikanth Srinivasan.
The two conflicting results (i.e., hard vs easy) as summarized in the first part of the 2nd paragraph, speak for themselves. The third result (i.e., dichotomy) identifies a parameter of algebras that governs the complexity of computing the determinant of matrices over these algebras.
The determinant and the permanent of a matrix, though deceivingly similar in their definitions, behave very differently with respect to how efficiently one can compute these quantities. The determinant of a matrix over a field can be easily computed via Gaussian elimination, while computing the permanent, as shown by Valiant, is at least as hard as counting the number of satisfying assignments to a Boolean formula. Given this, it is natural to ask ``over which algebras is the determinant easier to compute than the permanent?" Furthermore, since all algorithms for the determinant crucially use commutativity of the underlying algebra, we could ask ``is commutativity essential for efficient determinant computation?"
Extending the recent result of Arvind and Srinivasan [STOC 2010], we show that computing the determinant of an $n\times n$ matrix whose entries are themselves $2\times 2$ matrices over a field is as hard as computing the permanent over the field. On the other hand, surprisingly if one restricts the elements to be $d\times d$ upper triangular matrices, then determinant can be computed in $poly(n^d)$ time. Combining this with the decomposition theorem of finite algebras, we get the following dichotomy result: if $A$ is a constant dimensional algebra over a finite field of odd characteristic, then the commutativity of the quotient algebra $A/R(A)$ determines efficient determinant computation (where $R(A)$ is the radical of $A$).
Abstract: We study representation stability in the sense of Church and Farb of sequences of cohomology groups of complements of arrangements of linear subspaces in real and complex space as $S_n$-modules. We consider arrangements of linear subspaces defined by sets of diagonal equalities $x_i = x_j$ and invariant under the action of $S_n$ permuting the coordinates. We provide bounds on the point when stabilization occurs and an alternative proof for the fact that stabilization happens. The latter is a special case of a very general stabilization result of Gadish and for the pure braid space the result is part of the work of Church and Farb. For this space better stabilization bounds were obtained by Hersh and Reiner.
Highly irregular : fractal objects tend to be highly irregular and fill the space in which they are embedded.
Self-similarity : an object that displays the same basic pattern at all scales. The simplest fractals are deterministic, and are generated using recursive or iterative procedures.
Fractal Dimension : the characteristics are captured by a dimension that is a measure of the complexity of the object.
Fractal behavior can be observed looking at different scales. In the next figure (modified from Solé & Bascompte 2006) a beetle species walks on the surface of a trunk with lichens, carrying lichens on its back.
The Sierpinski gasket : Starting with an equilateral triangle, the procedure consists of removing from the central portion an upside-down equilateral triangle with half the side length of the starting triangle.
The Koch curve: A segment of length 1 is divided into thirds. The center one is replaced by the other two sides of an equilateral triangle of length 1/3.
The curve occupies a definite space, but its length $L$ goes to infinity.
At an arbitrary step $L_n=(4/3)^n$ that goes to infinity as $n$ grows.
The state of each unit changes according to its own state and the state of some neighborhood.
The simplest case is that we have only one species: the possible states are 0 and 1.
All the previous fractal constructions have random analogues. In the Von Koch curve, where we replace the middle third by the other two sides of an equilateral triangle, we might toss a coin to determine the position of the new part above or below the removed segment.
The pattern of random fractals is self-similar in the statistical sense.
A given property $L(r)$, which can be length, mass, population abundance or number of species, is measured at some scale of resolution $r$.
Then we look at a different scale $r'=\alpha r$. If $\alpha < 1$ this is a finer resolution; otherwise it is a coarser one.
This definition implies that the statistical features of a fractal set are the same when measured at different scales.
Scaling in the cumulative biomass distribution of all organisms in lake Konstanz (from Gaedke 1992).
To show that power laws are scale invariant we can see the effect of a scale transformation.
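Writing the property as a power law $L(r) = k\,r^{\beta}$ (with a prefactor $k$ and exponent $\beta$), the rescaling $r \to \alpha r$ gives
$$L(\alpha r) = k\,(\alpha r)^{\beta} = \alpha^{\beta}\,k\,r^{\beta} = \alpha^{\beta} L(r),$$
so changing the scale only multiplies the property by the constant factor $\alpha^{\beta}$, independent of $r$: the form of the law is the same at every scale.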
We want to cover these with a set of identical non-overlapping segments/squares/cubes of side $\epsilon L$ with $\epsilon < 1$.
Why do we need the limits?
This is a non-integer value between the dimension of a line (dim = 1) and of a surface (dim = 2). In general fractal objects have a dimension below the dimension of the space that contains them.
How to compute fractal dimensions for natural objects that display statistical self-similarity?
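The usual estimator is box counting: cover the object with grids of decreasing box side $\epsilon$, count the occupied boxes $N(\epsilon)$, and take the slope of $\log N(\epsilon)$ against $\log(1/\epsilon)$. A minimal sketch in Python (the function and variable names here are mine, not from any particular package):

import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2-D point cloud.

    points : (n, 2) array of coordinates (e.g. a digitized movement path)
    sizes  : iterable of box side lengths epsilon, from coarse to fine
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for eps in sizes:
        # Assign every point to a grid cell of side eps and count distinct cells.
        cells = np.floor(points / eps)
        counts.append(len({tuple(c) for c in cells}))
    # The slope of log N(eps) versus log(1/eps) estimates the dimension D.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: points sampled on a straight segment should give D close to 1.
t = np.linspace(0.0, 1.0, 10_000)
segment = np.column_stack([t, t])
print(box_counting_dimension(segment, sizes=[0.1, 0.05, 0.02, 0.01]))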
The fine scale movement patterns of the ocean sunfish Mola mola (From Seuront 2009). The inset is the detail of the diurnal and nocturnal (shaded) movements.
The fractal dimension was calculated for diurnal and nocturnal movement paths and they were different.
A lower D during daylight suggests individuals move in a more directed manner.
A higher D at night indicates more complex movements, suggesting individuals interact with environmental heterogeneity on a finer scale.
An increase in the complexity of spatial movements should indicate an increase in foraging or searching effort.
Mandelbrot originally defined fractals as sets whose fractal dimension is strictly greater than their topological dimension.
There is no hard and fast definition but a list of properties.
F has a fine structure: i.e. detail on small scales.
Sugihara G, May RM (1990) Applications of fractals in ecology. Trends in Ecology & Evolution 5: 79–86.
Gaedke U (1992) The size distribution of plankton biomass in a large lake and its seasonal variability. Limnology and Oceanography 37: 1202–1220.
Seuront L (2009) Fractals and Multifractals in Ecology and Aquatic Sciences. Taylor & Francis.
Afshari, H., Sajjadmanesh, M. (2015). Fixed point theorems for $\alpha$-contractive mappings. Sahand Communications in Mathematical Analysis, 02(2), 65-72.
Hojjat Afshari; Mojtaba Sajjadmanesh. "Fixed point theorems for $\alpha$-contractive mappings". Sahand Communications in Mathematical Analysis, 02, 2, 2015, 65-72.
Afshari, H., Sajjadmanesh, M. (2015). 'Fixed point theorems for $\alpha$-contractive mappings', Sahand Communications in Mathematical Analysis, 02(2), pp. 65-72.
Afshari, H., Sajjadmanesh, M. Fixed point theorems for $\alpha$-contractive mappings. Sahand Communications in Mathematical Analysis, 2015; 02(2): 65-72.
Faculty of Basic Science, University of Bonab, P.O.Box 5551761167, Bonab, Iran.
In this paper we prove the existence of common fixed points under different conditions for $\alpha$-$\psi$-contractive mappings, and generalize the weakly Zamfirescu map to a modified weakly Zamfirescu map.
D. Ariza-Ruiz, A. Jimenez-Melado, A continuation method for weakly Kannan maps, Fixed Point Theory and Applications, (2010), Art. ID 321594, 12 pp.
D. Ariza-Ruiz, A. Jimenez-Melado, Genaro Lopez-Acedo, A fixed point theorem for weakly Zamfirescu mappings, Nonlinear Analysis (2010).
S. Banach, Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales, Fund. Math., 3 (1922) 133-181.
S. K. Chatterjea, Fixed-point theorems, C. R. Acad. Bulgare Sci., 25 (1972) 727-730.
J. Dugundji, A. Granas, Weakly contractive maps and elementary domain invariance theorem, Bull. Soc. Math. Greece (N. S) 19, No.1 (1978) 141-151.
R. Kannan, Some results on fixed points, Bull Calcutta Math. Soc., 60 (1968), 71-76.
B. Samet, C. Vetro, P. Vetro, Fixed-point theorems for $\alpha$-$\psi$-contractive type mappings, Nonlinear Analysis (2011).
T. Zamfirescu, Fixed-point theorems in metric spaces, Arch. Math., 23 (1972), 292-298.
DrawVib is a powerful graphical interface to visualize and analyze the vibrational spectra together with the atomic displacements of the vibrational normal modes calculated by a quantum chemistry program.
Analyze one particular QM result calculated by Gaussian or GAMESS US.
Show both the structure of the molecule (as well as the atomic displacements of the different vibrational normal modes) and the vibrational spectra.
Mix the different vibrational spectra on the same graph.
Show the different mode contributions to a peak and click on one of them to visualize the corresponding normal mode.
ACPs, GCMs, perform and analyze the localized modes.
Represent the IR vector and the AAT vector.
Represent the normal-mode derivatives of $\alpha$, G' and A via an ellipsoid representation.
You are an elephant working for peanuts. The job that you do is to walk around the packing plant and make sure that peanuts are being packed correctly. Peanuts at your plant are packed by size — small, medium, and large. You walk around and note every peanut that is packed in the wrong type of box, or that is not packed in any box, so that your boss (the principal peanut packing pachyderm) can know of any problems.
Input contains a sequence of up to $100$ test cases. Each case starts with an integer $1 \le n \le 30$, followed by $n$ box descriptions, one per line. A box is described by the real-valued coordinates for two of its corners $x_1~ y_1~ x_2~ y_2$, where $x_1 < x_2$ and $y_1 < y_2$, and the type of peanuts it should hold (small, medium, or large). There are no two boxes that overlap or touch. After the list of box descriptions is an integer $0 \le m \le 100$, followed by $m$ peanut descriptions, one per line. Each peanut is described by a pair of real-valued coordinates $x~ y$ indicating its location, and the type of peanut that it is (small, medium, or large). All real-valued inputs are in the range $[0, 300]$ and have at most $8$ digits past the decimal point. Input ends when $n = 0$.
For each test case, print the size and status of each of the peanuts inspected, in the order they are given. If a peanut is in a box of the correct size, its status is 'correct'. If it is on the floor, its status is 'floor'. Otherwise, its status is the size of the box that it is in. A peanut just on the edge of a box is considered to be inside that box. Print a blank line between each pair of test cases.
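Since the boxes do not overlap and the limits are small ($n \le 30$, $m \le 100$), a direct scan over the boxes per peanut suffices. A sketch of a solution in Python (the exact output line format, size followed by status, is my reading of the statement):

import sys

def classify(boxes, x, y, size):
    """Return the status of one peanut given (x1, y1, x2, y2, type) boxes."""
    for x1, y1, x2, y2, box_type in boxes:
        if x1 <= x <= x2 and y1 <= y <= y2:   # edges count as inside
            return "correct" if box_type == size else box_type
    return "floor"

def main():
    data = sys.stdin.read().split()
    pos, out, first = 0, [], True
    while True:
        n = int(data[pos]); pos += 1
        if n == 0:
            break
        if not first:
            out.append("")                     # blank line between test cases
        first = False
        boxes = []
        for _ in range(n):
            x1, y1, x2, y2 = map(float, data[pos:pos + 4])
            boxes.append((x1, y1, x2, y2, data[pos + 4])); pos += 5
        m = int(data[pos]); pos += 1
        for _ in range(m):
            x, y = float(data[pos]), float(data[pos + 1])
            size = data[pos + 2]; pos += 3
            out.append(size + " " + classify(boxes, x, y, size))
    print("\n".join(out))

main()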
Given an $N \times N$ grid of integers, we can choose a combination of $N$ numbers such that no two are in the same row and no two are in the same column. We aim to find such a combination whose minimum is maximized.
The first line of input consists of an integer $N$, where $1 \leq N \leq 30$. The next $N$ lines each contains $N$ integers, representing the grid. Each integer is in the range $[1..10^6]$.
Output a single integer, representing the maximum possible value of the minimum.
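A standard way to attack this (one possible approach, not necessarily the intended one): binary search on the answer $v$, and test feasibility with bipartite matching between rows and columns using only cells with value at least $v$. A Python sketch:

import sys

def has_perfect_matching(grid, n, threshold):
    """Kuhn's augmenting-path matching, restricted to cells >= threshold."""
    match_col = [-1] * n  # match_col[c] = row currently matched to column c

    def try_row(r, seen):
        for c in range(n):
            if grid[r][c] >= threshold and not seen[c]:
                seen[c] = True
                if match_col[c] == -1 or try_row(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    return all(try_row(r, [False] * n) for r in range(n))

def solve(grid):
    n = len(grid)
    values = sorted({v for row in grid for v in row})
    lo, hi = 0, len(values) - 1        # binary search over distinct cell values
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if has_perfect_matching(grid, n, values[mid]):
            lo = mid
        else:
            hi = mid - 1
    return values[lo]

data = sys.stdin.read().split()
n = int(data[0])
grid = [[int(x) for x in data[1 + i * n:1 + (i + 1) * n]] for i in range(n)]
print(solve(grid))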
Also, the set of vectors is independent because splitting $\bf x$ will always produce non-zero vectors, hence $a_1 = a_2 = \cdots = 0$. I suggest you start here if you are interested in how this is done in a computationally efficient and of course provably correct way.
We know: 1) The null space of $A$ consists of all vectors of the form $\bf x$ above. 2) If you split up the general solution to $A\mathbf x=\mathbf 0$ as done above, then these vectors will be independent (and span, of course, since you'll have $r$ of them). 3) We need three independent vectors for our basis for the null space.
So what we can do is take $\bf x$ and split it up as $\mathbf x = a\,\mathbf v_1 + b\,\mathbf v_2 + c\,\mathbf v_3$, collecting the coefficients of each free variable into its own vector. Each of these column vectors is in the null space of $A$.
That the null space has dimension 3 (and thus the solution set to $A\mathbf x=\mathbf 0$ has three free variables) could have also been obtained from the rank-nullity theorem, knowing that the dimension of the column space is 2. We may assign any value to each free variable.
So, we set $x_2=a$, $x_4=b$, and $x_5=c$, where $a$, $b$, and $c$ are arbitrary.
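If you want to double-check such a hand computation, a computer algebra system produces the same kind of basis directly. A sketch with SymPy on a made-up rank-2 matrix with five columns (this matrix is hypothetical, not the one from the question):

from sympy import Matrix

# A hypothetical 2 x 5 matrix of rank 2, so its null space has dimension 5 - 2 = 3.
A = Matrix([[1, 2, 0, 3, 1],
            [0, 0, 1, 4, 2]])

# Each basis vector corresponds to setting one free variable to 1 and the rest to 0.
for v in A.nullspace():
    print(v.T)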
Abstract: Neural time-series data contain a wide variety of prototypical signal waveforms (atoms) that are of significant importance in clinical and cognitive research. One of the goals for analyzing such data is hence to extract such 'shift-invariant' atoms. Even though some success has been reported with existing algorithms, they are limited in applicability due to their heuristic nature. Moreover, they are often vulnerable to artifacts and impulsive noise, which are typically present in raw neural recordings. In this study, we address these issues and propose a novel probabilistic convolutional sparse coding (CSC) model for learning shift-invariant atoms from raw neural signals containing potentially severe artifacts. In the core of our model, which we call $\alpha$CSC, lies a family of heavy-tailed distributions called $\alpha$-stable distributions. We develop a novel, computationally efficient Monte Carlo expectation-maximization algorithm for inference. The maximization step boils down to a weighted CSC problem, for which we develop a computationally efficient optimization algorithm. Our results show that the proposed algorithm achieves state-of-the-art convergence speeds. Besides, $\alpha$CSC is significantly more robust to artifacts when compared to three competing algorithms: it can extract spike bursts, oscillations, and even reveal more subtle phenomena such as cross-frequency coupling when applied to noisy neural time series.
Fifteen $2 \times 1$ dominoes can be used to tile a $6 \times 5$ rectangle. In tiling the rectangle we might generate what are known as fault-lines. A fault-line is any horizontal or vertical line that divides a tiling, without cutting through any of the domino pieces, so as to form a tiling consisting of two sub-rectangular tilings. A fault-free tiling is thus one that has no such fault-lines.
One example of a fault-free $2 \times 1$ domino tiling on a $6 \times 5$ rectangle is shown below.
How many different fault-free $2 \times 1$ domino tilings of a $6 \times 5$ rectangle are possible? Here rotations and reflections are not considered to be different.
So far I have managed to find 2 (Edit: not 3 as I initially thought) different fault-free tilings using trial-and-error but honestly have no idea how one ought to proceed in general.
I enumerated all $1183$ tilings (not considering symmetry, and not necessarily fault-free). Of those only $6$ were fault-free, and these are in two symmetry classes.
For each of the four horizontal grid lines passing through the interior, there certainly needs to be a domino crossing that line. In fact, there must be at least two such dominoes, because having exactly one domino is impossible for parity reasons (why?).
Similarly, of the five vertical interior grid lines, the leftmost, middle and rightmost need one domino across them, and the other two need two dominoes crossing them. Since there are only $15$ dominoes to go around, and the number of required dominoes on each of these grid lines adds to $4\cdot 2+3\cdot 1+2\cdot 2=15$, the requirements are met exactly.
This gives you a starting point for how to place the dominoes, as each line has one or two distinguished dominoes which must cross it. Start with the unique dominoes crossing the outer vertical lines; these can a priori go in one of five places each, but a little thought shows that they only have three possibilities each, and in fact you can rule out $2$ of the $3\times 3$ possibilities. You may find that many dominoes afterwards are forced... or maybe more insight is required to make this countable by a human.
Edit: I also made a program which loops through all domino tilings of the $5\times 6$ board (using Knuth's Algorithm X) and checks to see if they are fault free. Only $6$ are fault free, and they fall in two symmetry classes.
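For anyone who wants to reproduce those counts without Algorithm X, plain backtracking is fast enough at this size. A Python sketch (it should report 1183 tilings, 6 of them fault-free; the fault test uses the fact that two cells share a domino id exactly when one domino covers both):

ROWS, COLS = 5, 6

def count_tilings():
    total = fault_free = 0
    grid = [[0] * COLS for _ in range(ROWS)]   # 0 = empty, else domino id

    def has_fault_line(g):
        # A vertical line x=c is a fault unless some horizontal domino crosses it.
        for c in range(1, COLS):
            if all(g[r][c - 1] != g[r][c] for r in range(ROWS)):
                return True
        for r in range(1, ROWS):
            if all(g[r - 1][c] != g[r][c] for c in range(COLS)):
                return True
        return False

    def place(next_id):
        nonlocal total, fault_free
        # Find the first empty cell in row-major order.
        cell = next(((r, c) for r in range(ROWS) for c in range(COLS) if grid[r][c] == 0), None)
        if cell is None:
            total += 1
            fault_free += not has_fault_line(grid)
            return
        r, c = cell
        if c + 1 < COLS and grid[r][c + 1] == 0:       # horizontal domino
            grid[r][c] = grid[r][c + 1] = next_id
            place(next_id + 1)
            grid[r][c] = grid[r][c + 1] = 0
        if r + 1 < ROWS and grid[r + 1][c] == 0:       # vertical domino
            grid[r][c] = grid[r + 1][c] = next_id
            place(next_id + 1)
            grid[r][c] = grid[r + 1][c] = 0

    place(1)
    return total, fault_free

print(count_tilings())   # expected: (1183, 6)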
Abstract: In the abelian sandpile model on a graph $G = (V,E)$ with sink $s$, there are a non-negative number of chips $\sigma(v)$ at each non-sink vertex $v$. If $\sigma(v) \geq \deg(v)$, the vertex can 'topple', passing one chip to each neighbor. Any chips which fall on the sink are lost from the model. A configuration $\sigma$ is called 'stable' if no vertex can topple. In driven dynamics in the model, at each step a chip is added to the model at a uniform random vertex, and all legal topplings are performed until a stable configuration is reached. Together with Dan Jerison and Lionel Levine, I determined the asymptotic mixing time to stationarity and proved a cut-off phenomenon for dynamics on a square $N\times N$ grid with periodic boundary conditions and a single sink in the limit $N \to \infty$. Recently, with Hyojeong Son, I have extended this result to prove a cut-off phenomenon for sandpile dynamics on a growing piece of an arbitrary plane or space tiling, with open or periodic boundary condition, and proved that the asymptotic mixing time is equal in two dimensions subject to a reflection condition. A different boundary behavior exists for the D4 lattice in dimension 4, in which the open boundary can change the mixing time. I will discuss the spectral methods behind these results.
Abstract: We study the problem of decomposing a volume bounded by a smooth surface into a collection of Voronoi cells. Unlike the dual problem of conforming Delaunay meshing, a principled solution to this problem for generic smooth surfaces remained elusive. VoroCrust leverages ideas from $\alpha$-shapes and the power crust algorithm to produce unweighted Voronoi cells conforming to the surface, yielding the first provably-correct algorithm for this problem. Given an $\epsilon$-sample on the bounding surface, with a weak $\sigma$-sparsity condition, we work with the balls of radius $\delta$ times the local feature size centered at each sample. The corners of this union of balls are the Voronoi sites, on both sides of the surface. The facets common to cells on opposite sides reconstruct the surface. For appropriate values of $\epsilon$, $\sigma$ and $\delta$, we prove that the surface reconstruction is isotopic to the bounding surface. With the surface protected, the enclosed volume can be further decomposed into an isotopic volume mesh of fat Voronoi cells by generating a bounded number of sites in its interior. Compared to state-of-the-art methods based on clipping, VoroCrust cells are full Voronoi cells, with convexity and fatness guarantees. Compared to the power crust algorithm, VoroCrust cells are not filtered, are unweighted, and offer greater flexibility in meshing the enclosed volume by either structured grids or random samples.
Why is there a deep mysterious relation between string theory and number theory (Langlands program), elliptic curves, modular functions, the exceptional group $E_8$, and the Monster group as in Monstrous Moonshine?
Surely it's not just a coincidence in the Platonic world of mathematics.
Granted this may not be fully answerable given the current state of knowledge, but are there any hints/plausibility arguments that might illuminate the connections?
I actually voted this question thumbs-up. It's a good question and I would like to know the most accurate answer, too. Clearly, the rough sketch of the answer is that string theory just knows about all important and exceptional structures in mathematics. But why does it know them? What is the logic that dictates that "other solutions" of a theory whose main physical goal is "only" to unify the interactions including gravity with quantum mechanics produces all other maths, including maths we used to think was totally abstract?
I remember the days when the eightfold way en.wikipedia.org/wiki/Eightfold_Way_%28physics%29 was mysterious.
To start with, the relation of string theory to complex elliptic curves is clear: these are just pointed, genus one closed Riemann surfaces, and hence are certain string worldsheets. The fact that in constructions such as the "refined Witten genus" it is actually arithmetic elliptic curves (not over the complex numbers but, ultimately, over the ring of integers and hence over the rationals and the p-adics, see at fracture square) that play a role is some deep fact that is vaguely reminiscent of p-adic string theory, only that what presently goes under this headline does not fully live up to what is at issue here. (There is a PO question on this point here).
The true answer to this arithmetic-geometry incarnation of stringy physics must rest in the function field analogy, which roughly says something like: if you do algebraic number theory in a single variable -- if you study arithmetic curves -- then this is analogous to studying complex curves, hence string worldsheets.
To put this in perspective: there is an old motivation from the first pages of the string theory textbooks, which says that where point particle mechanics is about the real line (the worldline) so string theory is about the complex plane (the worldsheet) and hence that the passage from point particles to strings is like the step from real analysis to complex analysis.
Somehow the function field analogy says that this seemingly simple-minded statement is indeed true and much deeper than it might maybe seem. In some way stringy phenomena are visible at the very root of mathematics (number theory) because if you "work with a single algebraic variable", then that is already analogous to "working with a single complex variable", hence is analogous to studying complex curves, hence string worldsheets.
Even that statement may still seem far-out at this level. But digging deeper it turns out to work out more and more. For instance 90 per cent of number theory is about picking some such arithmetic curve and then "attaching" to it a zeta-function or theta-function or eta-function or L-function. The deep conjectures of number theory all revolve around this (notably the Langlands correspondence). But looking at this from the point of view of the function field analogy, one finds that all this is analogous to the 3dCS/2dWZW correspondence. I have tried to summarize this a bit in this table here: zeta-functions and eta-functions and theta-functions and L-functions -- table.
There'd be more to say, but I am running out of battery. I gave a talk related to this four weeks back at CUNY, here.
That's a lot of embeddings, but notice: the first group here is the Standard Model subgroup; the second, third, fourth, and fifth are GUT subgroups. And $E(8)$ happens to be the "largest" and "most complicated" of the exceptional Lie groups. So a TOE had better deal with $E(8)$, somewhere!
I don't know about the relation between monstrous moonshine and string theory, but you can refer to Wikipedia.
There is definitely a connection with number theory. And even more: .
Not joking! EM is the curvature of the $U(1)$ bundle. Weak is the curvature of the $SU(2)$ bundle. Strong is the curvature of the $SU(3)$ bundle. Gravity is the curvature of spacetime. I.e. 1D manifold, 2D, 3D, 4D $\implies$ 10D.
SO(10) is not a subgroup of U(5). Why would a TOE need E(8) just because it is the largest exceptional group? The 1,2,3,4 numerology is rather weak since you are just looking at groups with these numbers in them that appear in very different ways.
@PhilipGibbs: Fixed the SO(10) U(5) problem. The $E(8)$ logic was supposed to be intuitive. The 1,2,3,4 thing isn't numerology; it isn't so different, by the way.
@PhilipGibbs: In fact, why do you think Kaluza-Klein theory is 5-dimensional?
There is another point, that E(8) is E6xSU(3), and on a Calabi Yau, the SU(3) is the holonomy, so you can easily and naturally break the E8 to E6. This idea appears in Candelas Horowitz Strominger Witten in 1985, right after Heterotic strings and it is still the easiest way to get the MSSM. The biggest obstacle is to get rid of the MS part--- you need a SUSY breaking at high energy that won't wreck the CC or produce a runaway Higgs mass, since it seems right now there is no low-energy SUSY.
@RonMaimon: Thanks, I added that in too.
@DImension10AbhimanyuPS: ok, but you shouldn't write what I said, which is technically wrong--- E8 is not E6xSU(3), it's a simple group, but it has an embedded E6xSU(3) and fills in the off-diagonal parts with extra crud that's broken when you have SU(3) gauge fluxes which follow the holonomy of the manifold. The precise decomposition is described in detail in Green Schwarz and Witten, which has a nice description of E8.
@RonMaimon: I know, but I think that is clear (that $E(8)$ is not $E(6)\times SU(3)$). The precise decomposition is described in detail in Green Schwarz and Witten, which has a nice description of E8.
How should I approach using two 8s and two 3s to make the number 24?
Use two $8$s, two $3$s, and basic arithmetic operators ($ +, -, \times , \div$, parentheses) to make the number $24$.
I don't know how to start besides just trying to find the correct answer. Is there a way to reach the answer through small steps, or should I just brute-force it?
1) Is this a trick question?
It appears not - everything seems to be at face value, and there is a mathematics tag not a lateral thinking tag or similar.
2) What do we need to do?
What is the structure of the answer that you need to find? Well, it looks something like $8 + 8 - (3 + 3) = 10$. Except of course, this example equals 10, we need 24. But at least that's what we are going for. Another example is $8 + 8 - (3 \times 3) = 7$, but that doesn't work either. Not to worry just yet, we are just getting a feel of things.
3) Can we simplify the problem down at all?
Well, in this case, we can see that we can generate more potential solutions by changing the operators that we use. In fact, that's what we did above - we changed the $+$ in the brackets to $\times$, which changed the $6$ in the brackets to a $9$, which subtracted an extra $3$ from the result. The $8 + 8 = 16$ didn't change at all. Hmmm... there's something in that which we can use.
4) What components get us closer to the solution?
So the $16$ we had in both the proposals above is like its own starting point - that is, we can swap the two 8s from the original question for a 16, and make the question "Given a 16 and two 3s, make 24". That's not to say that we are going to find a solution to this, but it's one possible statement that will solve the original question. And it comes from us thinking about the number $16$. What other numbers can we make by consuming two of the numbers?
5) Work from the other end - what do the components of the solution look like?
I'm not completely happy with this though - it seems to me that using the square root is a bit of trickery. How else can we make 24 using one of our numbers?
We now have a list of numbers that can be made with two of our numbers, and a list of numbers that we want to be made with 3 of our numbers. It might take a bit of inspiration, but is there any link we can make between any of them?
That's the way I think of these things. Hopefully you will get to a point where most of this occurs in your head pretty fast, and not necessarily in that order.
Here is a solution that uses only "elementary" operations (addition, subtraction, multiplication, and division): $$\frac{8}{3-\frac{8}{3}} = \frac{8}{\frac{1}{3}} = 24.$$
If we allow square roots, a simpler solution is possible: $(3+3)\times\sqrt{8+8} = 6 \times 4 = 24$.
In fact, there are many more solutions if you allow more operations.
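If you would rather automate steps 4-6, a brute-force search over orderings, operators and parenthesizations is tiny at this size. A Python sketch (exact arithmetic via fractions avoids floating-point issues with division):

from itertools import permutations
from fractions import Fraction

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a / b if b else None}

def expressions(nums):
    """Yield (value, text) for every full parenthesization of nums."""
    if len(nums) == 1:
        yield nums[0]
        return
    for i in range(1, len(nums)):
        for lv, lt in expressions(nums[:i]):
            for rv, rt in expressions(nums[i:]):
                for sym, fn in OPS.items():
                    v = fn(lv, rv)
                    if v is not None:          # skip division by zero
                        yield v, f"({lt} {sym} {rt})"

def solve(numbers, target=24):
    seen = set()
    for order in set(permutations(numbers)):
        for value, text in expressions([(Fraction(n), str(n)) for n in order]):
            if value == target and text not in seen:
                seen.add(text)
                yield text

for expr in solve([8, 8, 3, 3]):
    print(expr, "= 24")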
A common practice in Reinforcement Learning is to go from a continuous space to a discrete space. What does this mean? Take for example the range of numbers $]-3, 3[$: if we want to represent every number that fits in this range as a state, then we would have infinitely many numbers being represented, i.e. $\infty$ states.
We thus need to find a way to classify our numbers such that we are able to control the number of "states"; this approach is called bucketization.
Let's work this out through a practical example. Take the following number set [-100, -2.635, -1.052, -1.051, 0, 0.54, 2.2, 3.698, 100] for the range $]-3, 3[$ and let's create 4 equal buckets.
How do we go from one bucket to the other?
How do we classify number to their respective bucket?
Our other problem, classifying a number into its bucket, is a bit harder: we want to solve it with good enough performance and without too much effort.
A simple (maybe naive) solution would just be to go over our ranges one by one, starting from the lowerBound. So let's just implement this one.
We know that we will need a loop that ends when we reach the last bucket (so that we put values that are greater than our range in the last bucket automatically) through (idx < bucketCount - 1).
What we also know is that we will keep looping while our value is larger than the end bound of the current range, which we calculate as rangeLow + stepSize * (idx + 1).
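Putting the two pieces together, here is a minimal sketch of the whole classifier in Python (lowerBound, bucketCount and stepSize are carried over from the fragments above; the snake_case names are mine):

def bucket_index(value, lower_bound=-3.0, upper_bound=3.0, bucket_count=4):
    """Map a value in ]lower_bound, upper_bound[ to a bucket index in [0, bucket_count - 1].

    Values below the range fall in bucket 0; values above it fall in the last bucket.
    """
    step_size = (upper_bound - lower_bound) / bucket_count
    idx = 0
    # Keep walking right while the value exceeds the end bound of the current range.
    while idx < bucket_count - 1 and value >= lower_bound + step_size * (idx + 1):
        idx += 1
    return idx

for v in [-100, -2.635, -1.052, -1.051, 0, 0.54, 2.2, 3.698, 100]:
    print(v, "->", bucket_index(v))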
Department of Math & Stat, KFUPM, POBox 119, Dhahran 31261, KSA.
B.Sc. Mathematics (1986) King Saud University, Riyadh, Saudi Arabia.
Calculus , Linear Algebra, Introduction to Differential Equations and Linear Algebra, Engineering Mathematics, Introduction to Numerical Computing, Numerical Analysis, Linear and Nonlinear Programming, Reading and Research "Optimization".
O Chadli, QH Ansari, Al-Homidan, Existence of Solutions and Algorithms for Bilevel Vector Equilibrium Problems: An Auxiliary Principle Technique. Journal of Optimization Theory and Applications Vol. 172(3) (2017) pp. 726-758.
Al-Homidan, Ansari and Burachik, Weak sharp solutions for generalized variational inequalities, Positivity, online.
O Chadli, QH Ansari, Al-Homidan, Existence of Solutions for Nonlinear Implicit Differential Equations: An Equilibrium Problem Approach. Numerical Functional Analysis and Optimization Vol. 37(11) (2016) pp. 1385-1419.
AA Awotunde, R Ghanam, S Al-Homidan and N Tatar, Numerical Schemes for Anomalous Diffusion of Single-Phase Fluids In Porous Media. Communications in Nonlinear Science and Numerical Simulation 39, 381-395.
I Ahmad, D Singh, BA Dar, S Al-Homidan, On interval valued functions and Mangasarian type duality involving Hukuhara derivative. Journal of Computational Analysis & Applications 21 (5), 881-896.
Alshahrani, Al-Homidan, and Ansari, Minimum and maximum principle sufficiency properties for Nonsmooth variational inequalities. Optimization Letters, 10 (4), 805-819.
S Al-Homidan, M Alshahrani, QH Ansari, System of nonsmooth variational inequalities with applications Optimization,64(5) pp. 1211–1218, Oct. 2015.
Alshahrani, Abbas, Ansari and Al-Homidan, Iterative schemes for generalized nonlinear complementarity problems on isotone projection cones. Journal of Nonlinear and Convex Analysis 16 (8), 1681-1697.
S Al-Homidan, M Alshahrani, QH Ansari,Weak Sharp Solutions for Equilibrium Problems in Metric Spaces. Journal of Nonlinear and Convex Analysis.16 (7), 1185-1193.
Suliman Al-Homidan, Structure method for solving the nearest Euclidean distance matrix problem, Journal of Inequalities and Applications2014 (1), 491, 2014.
Lu-Chuan Ceng and Suliman Al-Homidan, Algorithms of common solutions for generalized mixed equilibria, variational inclusion and constrained convex minimization, Abstract and Applied Analysis 2014, 10, 2014.
Nadeem A. Malik, R. A. Ghanam and S. Suliman Al-Homidan. Sensitivity of the pressure distribution to the fractional order $\alpha$ in the fractional diffusion equation. Canadian Journal of Physics. 93 (1), 18-36 1,2014.
S Al-Homidan, RA Ghanam, N Tatar, On a generalized diffusion equation arising in petroleum engineering, Advances in Difference Equations 2013 (1), 349.
LC Ceng, S Al-Homidan, QH Ansari Iterative algorithms with regularization for hierarchical variational inequality problems and convex minimization problems Fixed Point Theory and Applications 2013 (1), 284 .
M Alshahrani, QH Ansari, S Al-Homidan Existence Results for Nonsmooth Vector Quasi-Variational-Like Inequalities, Abstract and Applied Analysis pp. 1–7, 2013.
Suliman Al-Homidan. Hankel matrix transforms and operators. Journal of Inequalities and Applications 2012, 2012:92.
S Al-Homidan, Qamrul Ansari, BS Mordukhovich, Positivity in Variational Analysis and Optumization, Guest editor, Positivity 2011-2013.
Al-Homidan S. and Algarni M., Structured Methods for Solving Correlation Problem. Positivity Vol 16 pp 497-508, 2012.
Al-Homidan S. and Ansari Q Relations Between Generalized Vector Variational-Like Inequalities And Vector Optimization Problems, Taiwanese Journal of Mathematics Vol. 16, No. 3, pp. 987-998, June 2012.
Agarwal R., Ahmad I. and Al-Homidan S., Optimality and duality for nondifferentiable multiobjective programming problems involving generalized d-ρ-(η,θ)-d-I-univex functions. Journal of Nonlinear and Convex Analysis, vol. 13, no. 4, pp. 733-744, 2012.
S Al-Homidan, Qamrul Ansari, Jen-Chih Yao, Nonsmooth invexities, invariant monotonicities and nonsmooth vector variational-like inequalities with applications to vector optimization, Recent Developments in Vector Optimization, pp. 221-274, edited by Qamrul Ansari and Jen-Chih Yao.
Al-Homidan S., Ansari Q. and Yao J., Collectively Fixed Point and Maximal Element Theorems in Topological Semilattice Spaces. Applicable Analysis: An International Journal. Vol. 90, No. 6, 2011, 865–888.
Al-Homidan S. and Ansari Q Fixed Point Theorems on Product Topological Semilattice Space, Generalized bstract Econmies and System of Generalized Vector-Equlilbrium Problems, Taiwanese Journal of Mathematics Vol. 15, No. 1, pp. 307-330, February 2011.
Ahmad I., Al-Homidan S. and S Sharma S., Duality for nondifferentiable multiobjective variational problems with generalized Type I functions. Dynamics of Continuous, Discrete and Impulsive Systems Series A: Mathematical Analysis. Vol 18, 443-455 2011.
Lu-Chuan Zeng, Q.H. Ansari and Al-Homidan Hybrid Proximal-Type Algorithms for Generalized Equilibrium Problems, Maximal Monotone Operators and Relatively Nonexpansive Mappings. Fixed Point Theory and Applications. vol. 2011, Article ID 973028, 23 pages, 2010.
Al-Homidan S., Alshahrani M., Petra C. and Potra F. Minimal Condition Number for Positive Definite Hankel Matrices using Semidefinite Programming. Linear Algebra and its Applications, Vol. 433 (2010) pp. 1101-1109.
Al-Homidan S. and Ansari Q., Generalized Minty Vector Variational-like Inequalities and Vector Optimization Problems, Journal of Optimization Theory and Applications Vol. 144(3) (2010) pp. 1-11.
Al-Homidan S. and Ansari Q., Quasi-Equilibrium Problems with Lower and Upper Bounds in Ordered Topological Spaces. Journal of Nonlinear and Convex Analysis V. 11, Number 2, 2010, pp345-355.
Lin L., Chuang C. and S Al-Homidan, Ekeland type variational principle with applications to quasi-variational inclusions. Nonlinear Analysis. Volume 72, Issue 2, 15 January 2010, Pages 651-661.
Ceng L., Al-Homidan S., Ansari Q. and Yao J., An Iterative Scheme For Equilibrium Problems And Fixed Point Problems Of Strict Pseudo-Contraction Mappings, Journal of Computational and Applied Mathematics, Vol. 223 (2009) pp. 967-974.
Al-Homidan S. and. Alshahrani M., Positive Definite Hankel Matrices using Cholesky Factorization. Computational methods in applied mathematics. Vol. 9 (2009). No. 3 pp.221-225.
Al-Homidan S., Ansari Q. and Yao J., Some Generalizations Of Ekeland-Type Variational Principle With Applications To Equilibrium Problems And Fixed Point Theory, Nonlinear Analysis, Vol. 69(1) (2008) pp. 126-139.
Al-Homidan S., Semidefinite Programming for the Educational Testing Problem, Central European Journal for Operations Research, Vol. 16 (2008) pp. 239-249.
Al-Homidan S., "Solving Hankel Matrix Approximation Problem using Semidefinite Programming, Journal of Computational and Applied Mathematics, Vol. 202(2) (2007) pp. 304-314.
Al-Homidan S. and Ansari Q., Systems of Equilibrium Problems with Lower and Upper Bounds, Applied Mathematics Letters, Vol. 20(3) (2007) pp. 323-328.
Al-Homidan S., Approximate Toeplitz Problem Using Semidefinite Programming, Journal of Optimization Theory and Applications, Vol. 135(3) (2007) pp. 583-598.
Al-Homidan S., Ansari Q. and Schaible S., Existence of Solutions of Systems of Generalized Implicit Vector Variational Inequalities, Journal of Optimization Theory and Applications, Vol. 134(3) (2007) pp. 515-531.
Alshahrani M. and S. Al-Homidan, Mixed Semidefinite And Second-Order Cone Optimization Approach For The Hankel Matrix Approximation Problem, Nonlinear Dynamics and Systems Theory. Vol. 6(3) (2006) pp. 211-224.
Al-Homidan S. and Fletcher R., Rationalizing Foot and Ankle Measurements to Conform to a Rigid Body Model, Computer Methods in Biomechanics and Biomedical, Vol. 9 (2) (2006) pp. 103-111.
Al-Homidan S., Semidefinite and Second Order Cone Optimization Approach for the Toeplitz Matrix Approximation Problem, Journal of Numerical Mathematics, Vol.14(1) (2006) pp.1-15.
Al-Homidan S. and Wolkowicz H., Approximate and Exact Completion Problems for Euclidean Distance Matrices using Semidefinite Programming, Linear Algebra and its Applications, Vol. 406 (2005) pp. 109-141.
Al-Homidan S., Structured Methods for Solving Hankel Matrix Approximation Problems, Pacific journal of Optimization, Vol. 1 (2005) pp. 599-609.
Are there any theories being developed which study structures with many operations and many distributive laws?
"Algebraic structure" will mean a set with some n-ary operations defined on it. This does not include vector spaces for example.
During my study of algebra I have encountered mostly algebraic structures with one or two binary operations. I have not encountered any theory of structures with three or more. However, I do know examples of more operations being used in practice -- they just aren't included in the axioms, but defined in terms of the "main" operations.
I know axiomatized structures with two (or four, if we want to count left and right distributive properties separately) distributive laws between distinct operations, that is, distributive lattices. This is the most a structure with two operations can have. However, I have noticed that there are interesting examples of structures with more (however "implicit") distributive laws.
Is there any, even obscure, theory being developed which studies algebraic structures with more than two binary operations interconnected by more than two distributive laws (counting the distribution of $\circ$ over $\star$ only once, even if both left and right distributive laws hold)?
I am asking this question because I find it curious that even though such structures may appear in nature, I have not encountered any study of them. I will give two examples that have occurred to me.
Together with $\cdot\longrightarrow +,$ this gives four binary operations and seven distributive laws.
This gives us four binary operations on $\mathscr B$ interconnected by four distributive laws.
Perhaps algebras over operads are what you want: http://en.wikipedia.org/wiki/Operad_theory.
We study the limiting distribution of the height in a generalized trie in which external nodes are capable to store up to $b$ items (the so called $b$-tries). We assume that such a tree is built from $n$ random strings (items) generated by an unbiased memoryless source. In this paper, we discuss the case when $b$ and $n$ are both large. We shall identify five regions of the height distribution that should be compared to three regions obtained for fixed $b$. We prove that for most $n$, the limiting distribution is concentrated at the single point $k_1=\lfloor \log_2 (n/b)\rfloor +1$ as $n,b\to \infty$. We observe that this is quite different than the height distribution for fixed $b$, in which case the limiting distribution is of an extreme value type concentrated around $(1+1/b)\log_2 n$. We derive our results by analytic methods, namely generating functions and the saddle point method. We also present some numerical verification of our results.
How can we be sure that for every $A$, $A^\dagger A$ has a positive square root?
Mostly I'm confused over whether the common convention is to use $+i$ or $-i$ along the anti-diagonal of the middle $2\times 2$ block.
What are theta, phi and lambda in cu1(theta, ctl, tgt) and cu3(theta, phi, lam, ctl, tgt)? What are the rotation matrices being used?
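For reference, in the OpenQASM convention that Qiskit followed at the time, the underlying single-qubit gates are (worth double-checking against your Qiskit version, since the gate definitions have been renamed over releases): $$u1(\lambda) = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\lambda} \end{pmatrix}, \qquad u3(\theta,\phi,\lambda) = \begin{pmatrix} \cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\ e^{i\phi}\sin(\theta/2) & e^{i(\phi+\lambda)}\cos(\theta/2) \end{pmatrix},$$ and cu1/cu3 apply these to the target qubit when the control qubit is $|1\rangle$.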
Can we process infinite matrices with a quantum computer?
Can we process infinite matrices with a quantum computer? If so, how can we do that?
What is the difference between 3 qubits, 2 qutrits and a 6-level qudit? Are they equivalent? Why / why not? Can 6 classical bits be super-densely coded into each?
This node exports several files from a mask. The images are exported as gray levels, each pixel having a value between 0 and 1. The generated images have the resolution of the mask as their size, and the gray levels (0 is the min value and 1 is the max value) determine the value of each point.
The advantage of exporting multiple files is to optimize the rendering times for very large masks in an external engine where it is more practical to manage parts of the mask in different files.
To add a node, right click in the Graph Editor and select Create Node > Export > Multi file export mask.
File pattern: This is the formula used to name the files to export. The naming convention is important because the node aligns the mask on a grid where the first number is the X axis and the second one is the Y axis. Depending on the XY coordinates of the part of the mask and the number of files to export, each exported file will be named according to its XY coordinates; for example, the top left part of a 2x2 mask will have the name Mask_0_0.png, the top right part Mask_0_1.png, etc.
See Explanation about the formula for a detailed description about the formula.
Browse to the folder where want to save your files and copy the path.
In File pattern in the parameters dialog, paste the path and add a file name, for example here we add "Mask" and then _$x_$y, where $x represents the position of the part of the mask on the X axis and $y its position on the Y axis.
The following pattern works for exporting to UE4: "filename_X$x_Y$y.png" (UE4 requires an "X" and an "Y" before the coordinates of the tile).
File format: Choose the file format from the options available.
Width and height: Set the width and height in number of vertices.
Overlapping: Instant Terra displays the number of files to create based on this parameter; a small sketch after the examples below shows the arithmetic.
If overlapping is set to 0 the file is cut into parts, and the parts have nothing in common.
If overlapping is set to 1, two contiguous parts will have a column or a line in common.
If overlapping is set to 2, two contiguous parts will have 2 columns or 2 lines in common.
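The arithmetic behind these cases is simple: each part advances by the tile width minus the overlapping. A small Python illustration (not Instant Terra's actual code; it assumes the sizes line up exactly):

def tile_starts(total_vertices, tile_vertices, overlapping):
    """Start index of each exported part along one axis.

    Assumes (total_vertices - tile_vertices) is a multiple of the stride.
    """
    stride = tile_vertices - overlapping   # overlapping columns/lines are shared
    return list(range(0, total_vertices - tile_vertices + 1, stride))

# A mask 9 vertices wide cut into parts of width 5 with overlapping = 1:
# parts start at columns 0 and 4, sharing column 4 (one column in common).
print(tile_starts(9, 5, 1))   # [0, 4]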
Number of files: The number of files to export depends on the graph and the value of the overlapping parameter. This parameter is non-editable.
Click Export now to export the files.
A pop-up dialog displays the progress of the export.
Another popup-up dialog confirms the export.
The files are exported to the directory entered in the File pattern box.
Replaces $y by 0, and finds the largest value of $x that exists for the files, i.e. input_$x_0.bmp. This determines one dimension of the grid of files, called X.
Replaces $x by 0, and finds the largest value of $y that exists for the files, i.e. input_0_$y.bmp. This determines the other dimension of the grid of files, called Y.
The node iterates on $y and $x and calculates the image size, i.e. width and height.
Each file is loaded, and forms the corresponding part of the final mask.
The same applies to $y: $0y, $00y, $000y, $0000y, and $00000y. These variables can be used in the name of the file and in the directory name, in one or more copies, and mixed (for both $x and $y). | CommonCrawl
Does a diode block current but not voltage?
Does it mean a diode blocks current but not voltage?
Figure 3 from the 1N4148 datasheet.
Diodes have very small leakage current. At 3 V this will be between 3.5 and 10 nA.
The 2N7000 has a gate−body leakage current (forward) of −10 nA max.
The diode also has about 4 pF capacitance. When the supply voltage jumps from zero to +3 V on power-up the diode capacitance will cause the gate of M4 to jump up too.
The effect of voltage that you care about in most electronics is current. When a device blocks current, it makes voltage inconsequential, save electrostatic effects.
The problem in the circuit shown is that it essentially leaves an unshielded MOSFET gate floating (no current path to or from it). A floating MOSFET gate is almost always a design error, leading to undefined behaviour.
The idea of blocking voltage is nonsense. You need to think about the problem in terms of "resistance" and a voltage divider (I'm putting quotes around "resistance" because it's not reactive but it's heavily non-linear). Since the transistor is a MOSFET, its gate has a much higher resistance than the resistance offered by the reversed diode (because of the leakage current), so the divider made from the diode D3 and M4's gate transmits nearly all the voltage to the gate.
So the solution is simple: just put a high-value resistor between M4's gate and the chassis ground.
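To put rough numbers on that divider (a back-of-the-envelope sketch; the 10 nA figure is the leakage quoted above, and the 0.1 V target is an arbitrary "safely off" gate voltage, not from the original answer):

```python
i_leak = 10e-9   # worst-case reverse leakage through D3, in amperes
v_max = 0.1      # keep the gate well below the MOSFET threshold, in volts

r_max = v_max / i_leak
print(f"pull-down must be below {r_max / 1e6:.0f} Mohm")  # 10 Mohm

# A common 1 Mohm pull-down would leave only ~10 mV on the gate:
r = 1e6
print(f"gate voltage with 1 Mohm: {i_leak * r * 1e3:.0f} mV")  # 10 mV
```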
The presence of a voltage between two points of a circuit means we measure a difference in electric potential with the value $U$. So, if there were a conducting path between these points with $R < \infty$, a current $I = U/R$ would flow. The presence of a potential difference ($U$) doesn't mean any electrons could get from A to B, but that one spot would accept the "surplus" from the other.
A diode is, in theory, $R = \infty$ in one direction and $R = 0$ in the other. However, these approximations aren't true in reality. This can lead to a small current flowing "backwards" through a diode.
What sets the source voltage in this simple CMOS circuit if the current source is 0A?
Should I use superposition if there is a sine wave current source and a DC voltage source in a circuit?
Why does not current flow in open circuit wire connected to a closed circuit?
How do I solve the current of this circuit with a diode? | CommonCrawl |
Formulas are derived which relate strength and asymmetry between magnet top and bottom poles of ferrite and compensator to the strength and temperature compensation of the magnetic field and the skew quadrupole moment and its temperature dependence in dipole or gradient dipole magnets. Applying these formulas will allow one to judge to what extent the symmetry must be maintained separately for ferrite and compensator and the interaction between compensation and asymmetry. We find for Recycler Ring materials, if $\alpha \approx 1/40$ is the ratio of the skew quad to the top-bottom asymmetry in magnetic potential, to keep $|a_1| < 1$ unit at operating temperature, we need the asymmetry of the ferrite, $\delta_F < 36$ units. Since the compensator contributes much less field change, $\delta_C < 394$ units is sufficient. This symmetry will result in $da_1/dT$ less than 0.02 units/$^\circ$C, which is sufficient for Recycler Ring requirements. | CommonCrawl
You are currently browsing the archive for the Deep Belief Networks category.
This is a great intro and I highly recommend it.
If you want more information, check out Ng's lecture notes, Honglak Lee's 2010 NIPS slides, and Hinton's videos.
Nuit Blanche's article "The Summer of the Deeper Kernels" references the two page paper "Deep Support Vector Machines for Regression Problems" by Schutten, Meijster, and Schomaker (2013).
An SVM finds a function $f(x) = \sum_i \alpha_i K(x_i, x)$ that is positive for one class of training points $x_i$ and negative for the other class (sometimes allowing exceptions). ($K(x,y)$ is called the kernel function, which in the simplest case is just the dot product of $x$ and $y$.) SVMs are great because they are fast and the solution is sparse (i.e., most of the $\alpha_i$ are zero).
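For concreteness, a minimal kernel-SVM example (scikit-learn with an RBF kernel; the synthetic dataset and parameters are illustrative assumptions, not taken from the post):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # not linearly separable

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
# Sparsity in action: only the support vectors carry nonzero alpha_i.
print(len(clf.support_), "support vectors out of", len(X), "points")
```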
Schutten, Meijster, and Schomaker apply the ideas of deep neural nets to SVMs.
In the deep version, the kernel operates on feature vectors $f(x) = (f_1(x), f_2(x), \ldots, f_d(x))$ produced by a first layer of SVMs. They use a simple gradient descent algorithm to optimize the alphas and obtain numerical results on ten different data sets, comparing the mean squared error against a standard SVM.
"Deep Belief Networks for Speech".
Check out minutes 47 to 50 where he says that the deep belief network approach created a 30% improvement over the state of the art speech recognition systems.
In "Temporal Autoencoding Restricted Boltzmann Machine", Hausler and Susemihl explain how to train a deep belief RBM to learn to recognize patterns in sequences of inputs (mostly video). The resulting networks could recognize the patterns in human motion capture or the non-linear dynamics of a bouncing ball. | CommonCrawl |
Generalized entropic functionals are in an active area of research. Hence lower and upper bounds on these functionals are of interest. Lower bounds for estimating Rényi conditional $\alpha$-entropy and two kinds of non-extensive conditional $\alpha$-entropy are obtained. These bounds are expressed in terms of error probability of the standard decision and extend the inequalities known for the regular conditional entropy. The presented inequalities are mainly based on the convexity of some functions. In a certain sense, they are complementary to generalized inequalities of Fano type.
A. E. Rastegin: Fano type quantum inequalities in terms of $q$-entropies. Quantum Information Processing (2011), doi 10.1007/s11128-011-0347-6. | CommonCrawl |
"Kernel density estimation" is a convolution of what?
I am trying to get a better understanding of kernel density estimation.
Let's take $K()$ to be a rectangular function which gives $1$ if $x$ is between $-0.5$ and $0.5$ and $0$ otherwise, and $h$ (window size) to be 1.
I understand that the density is a convolution of two functions, but I am not sure how to define these two functions. One of them should (probably) be a function of the data which, for every point in $\mathbb{R}$, tells us how many data points we have at that location (mostly $0$). The other function should probably be some modification of the kernel function, combined with the window size. But I am not sure how to define it.
Below is example R code which (I suspect) replicates the settings I defined above (with a mixture of two Gaussians and $n=100$), on which I hope to see a "proof" that the functions to be convolved are as we suspect.
Corresponding to any batch of data $X = (x_1, x_2, \ldots, x_n)$ is its "empirical density function"
$$f_X(x) = \frac{1}{n}\sum_{i=1}^{n} \delta(x - x_i),$$
where $\delta$ is the Dirac delta.
Letting $k(x) = K_h(-x)$ (which is the same as $K_h(x)$ for symmetric kernels--and most kernels are symmetric) we obtain the claimed result: the Wikipedia formula is a convolution.
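A small numerical check of this claim (a sketch, assuming the mixture-of-two-Gaussians setup from the question; the grid spacing introduces a small discretization error):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
data = np.concatenate([rng.normal(-2, 1, n // 2), rng.normal(2, 1, n // 2)])

h = 1.0
grid = np.linspace(-6.0, 6.0, 1201)
dx = grid[1] - grid[0]

# "Empirical density": mass 1/n placed in the grid cell nearest each datum.
emp = np.zeros_like(grid)
idx = np.clip(np.searchsorted(grid, data), 0, len(grid) - 1)
np.add.at(emp, idx, 1.0 / (n * dx))

# Rectangular kernel K_h: height 1/h on [-h/2, h/2], sampled on the grid.
half = int(round(h / (2 * dx)))
kernel = np.full(2 * half + 1, 1.0 / h)

# KDE as a discrete convolution (times dx to approximate the integral).
kde_conv = np.convolve(emp, kernel, mode="same") * dx

# Direct KDE evaluation for comparison.
kde_direct = np.array([np.mean(np.abs(x - data) <= h / 2) / h for x in grid])

print(np.max(np.abs(kde_conv - kde_direct)))  # small, up to grid error
```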
Can MCMC iterations after burn in be used for density estimation?
Which kernel function for Watson Nadaraya classifier?
Can you explain Parzen window (kernel) density estimation in layman's terms?
Are vanishing bias and variance enough for pointwise consistency for KDE-based estimation? | CommonCrawl |
Corollaries to Lebesgue's Criterion for the Riemann Integrability of Bounded Functions
Recall from the Lebesgue's Criterion Part 1 - Riemann Integrability of a Bounded Function and Lebesgue's Criterion Part 2 - Riemann Integrability of a Bounded Function pages that if $f$ is a bounded function on $[a, b]$ and if $D$ is the set of discontinuities of $f$ on $[a, b]$, then $f$ is Riemann integrable on $[a, b]$ if and only if the set of discontinuities has measure $0$, i.e., $m(D) = 0$.
We will now state some very nice corollaries that come as a consequence of Lebesgue's criterion.
Corollary 1: Let $f$ be a function of bounded variation on $[a, b]$. Then $f$ is Riemann integrable on $[a, b]$.
Proof: If $f$ is a function of bounded variation on $[a, b]$ then we have already proven that $f$ is bounded on $[a, b]$ on the Functions of Bounded Variation page.
Furthermore, from the Decomposition of Functions of Bounded Variation as the Difference of Two Increasing Functions page we have that $f$ can be written as the difference of two increasing functions, say $f = \alpha_1 - \alpha_2$ where $\alpha_1, \alpha_2$ are both increasing on $[a, b]$.
From the Countable Discontinuities of Monotonic Functions page we know that $\alpha_1$ and $\alpha_2$ both have countably many discontinuities, and so the set $D$ of all discontinuities of $f$ is countable. Since every countable set has measure $0$, we have $m(D) = 0$, and so by Lebesgue's criterion $f$ is Riemann integrable on $[a, b]$. $\blacksquare$
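For a quick illustration of Corollary 1 (a standard example, with the details filled in here), consider the increasing step function
$$f(x) = \lfloor x \rfloor, \quad x \in [0, 3].$$
It is increasing, hence of bounded variation, and its only discontinuities are at $x = 1, 2, 3$. This finite set has measure $0$, so Corollary 1 (or Lebesgue's criterion directly) gives that $f$ is Riemann integrable on $[0, 3]$.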
Corollary 2: Let $f$ and $g$ be bounded functions on $[a, b]$. Let $D_f$ and $D_g$ denote the set of discontinuities of $f$ and $g$ on $[a, b]$ respectively. Suppose that $D_f = D_g$. Then $f$ is Riemann integrable on $[a, b]$ if and only if $g$ is Riemann integrable on $[a, b]$.
Proof: $\Rightarrow$ Suppose that $f$ is Riemann integrable on $[a, b]$. Then $m(D_f) = 0$. But since $D_f = D_g$ we have that $m(D_g) = 0$. So $g$ is Riemann integrable on $[a, b]$. $\Leftarrow$ The converse direction is identical with the roles of $f$ and $g$ swapped. $\blacksquare$ | CommonCrawl
A model is presented which simulates the behavior of superthermal ions previously reported in the dayside ionosphere of Venus. The model considers effects of $E \times B$ and gradient drifts, charge exchange and collisions with the ambient neutral atmosphere, and the possible effects of a wave-particle (anomalous) scattering process. Results indicate that scattering processes are required if superthermal ions are the explanation for the observed "missing pressure" component in the dayside Venus ionosphere. The scattering scale length required to match the "missing pressure" distribution is similar to the scale length previously predicted for growth of a lower hybrid beam instability.
Kramer, Leonard. "Model of superthermal ions in the dayside Venus ionosphere." (1993) Diss., Rice University. https://hdl.handle.net/1911/16636. | CommonCrawl |
Abstract: The advance in RF energy transfer and harvesting techniques over the past decade has enabled wireless energy replenishment for electronic devices, which is deemed a promising alternative to address the energy bottleneck of conventional battery-powered devices. In this paper, by using a stochastic geometry approach, we aim to analyze the performance of an RF-powered wireless sensor in a downlink simultaneous wireless information and power transfer (SWIPT) system with ambient RF transmitters. Specifically, we consider the point-to-point downlink SWIPT transmission from an access point to a wireless sensor in a network, where ambient RF transmitters are distributed as a Ginibre $\alpha$-determinantal point process (DPP), which becomes the Poisson point process when $\alpha$ approaches zero. In the considered network, we focus on analyzing the performance of a sensor equipped with the power-splitting architecture. Under this architecture, we characterize the expected RF energy harvesting rate of the sensor. Moreover, we derive upper bounds on both the power and transmission outage probabilities. Numerical results show that our upper bounds are accurate for different values of $\alpha$. | CommonCrawl